Anonymous Anonymity - Request For Comments
On The Internet - July 4, 2005
A Declaration of Anonymity
Revised August 29, 2005
"I think paranoia can be instructive in the right doses. Paranoia is a
skill." - John Shirley
This document is available / updated at the following:
https://gandalfddi.z19.web.core.windows.net/anonymous_anonymity.htm
Richard W. Hamming has said 'Indeed, one of my major complaints about the
computer field is that whereas Newton could say, "If I have seen a little
farther than others, it is because I have stood on the shoulders of
giants," I am forced to say, "Today we stand on each other's
feet."'.
While this may be true of many aspects of Computer Science, anonymity is one
area where it is absolutely not. I found many helpful resources
and people while working on this idea. Everybody was helpful and pointed
me towards other areas to search. There are volumes of data going back to
1981, when D. Chaum first discussed this subject (xi). It remains to me a
fascinating area of study.
I would like to ask the community to read this and comment on the
"Issues" section. I am struggling with how to fix the
issues presented. If you can conceive of a better way to fix the first issue
I would appreciate that input. If there is a solution that is already
well known, please tell me. Thanks.
Table Of Contents:
1) Abstract
2) High Level Description
3) Detailed Description
4) Issues
Abstract:
The current state of anonymous proxies does not provide adequate protection for
the entity wishing to preserve their anonymity. Anonymous remailers and
their ISPs have had their logs subpoenaed by court order (i). There is also
an implicit "trust" that the anonymous proxy is truly anonymous.
Given that Country "C" restricts access to certain sites on "The
Internet" located in country "A". Also given that country
"C" wishes to gain knowledge of which of its citizens are trying to
access restricted sites, country "C" could set up anonymous proxies
in country "N" to monitor its own citizens. In addition if
country "C" wished to monitor already popular anonymous sites for
traffic, they could install a employee in the offices of the ISP that serves
the popular anonymous site and have that employee surreptitiously monitor the
traffic going to / leaving that site.
Proposed is a truly anonymous system wherein no one entity has a complete picture
of the transaction. This system can be installed on a corporate LAN
(Local Area Network) to allow anonymous access to "sensitive" data
(for example, anonymous employee suggestions, or Human Resources "sensitive"
procedures / documentation such as medical forms and complaint procedures),
or it can be installed on "The Internet".
I have seen the statement "Information Wants to Be Free". I
would revise that statement to "Information Will Be Free". The
information does not care one way or the other. But humans, simply by
their curiosity and need to explore ideas, will make the information free.
High Level Description:
The software will facilitate the transfer of files (HTTP, FTP, etc.) between
two computers using anonymous proxies. Every machine will have "the
least" amount of knowledge to make the transfer possible. One
computer (the end point) will have access to the data and will know the
intermediary proxy but will not know what computer the file is ultimately
destined for. Another computer (the intermediary server or the
intermediary proxy) will know what two computers the file is being transferred
between but will not know the contents of the file. The last computer
(The destination / anonymous machine) will know what the file is and who the
proxy is, but not where the file is coming from.
When the software is launched, it decides how much bandwidth is available for
the connection. If bandwidth is low, the machine will perform
the services of an Intermediary Proxy or End Point. If bandwidth is high,
the machine can perform as an Intermediary Server and / or as an
Intermediary Proxy. This information is only known by the machine that
runs the software; it is not told to any other computer. This way nobody
knows if a computer is a server or just a transfer agent.
Connections are made to other computers, and requests are sent out for additional
connections until "enough" (depending on bandwidth) connections are
made. Scalability is not an issue; as connections / servers become
overloaded, traffic will simply be dropped or passed on to other servers that
are less loaded.
Searches are passed to all connected machines. If the operator makes a
selection then that data is transferred to the machine. Searches are
performed via a full URI Scheme (ii) request, by words or phrases contained in
the file, or by filename (or parts thereof). Files retrieved (either from
"The Internet" or from another machine) are saved in a cache on each
machine. When the file cache is full, the files that haven't been accessed
for the longest time are deleted. This allows for a "shadow"
Internet: sites that are censored or deleted are still available via the
Anonymous Anonymity network.
In addition it would also circumvent the blocking of searches for specific
information / items by people living in certain countries.
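As a rough illustration of the cache behavior just described, here is a
minimal Python sketch of a least-recently-accessed eviction policy. The class
name, cache size, and data layout are illustrative assumptions, not part of
the proposal:

  import collections

  class FileCache:
      """Least-recently-accessed file cache (illustrative sketch only)."""
      def __init__(self, max_bytes=512 * 1024 * 1024):   # assumed cache size
          self.max_bytes = max_bytes
          self.used = 0
          self.entries = collections.OrderedDict()       # uri_hash -> (data, size)

      def get(self, uri_hash):
          if uri_hash not in self.entries:
              return None
          self.entries.move_to_end(uri_hash)             # mark as recently accessed
          return self.entries[uri_hash][0]

      def put(self, uri_hash, data):
          size = len(data)
          # Evict the files that have not been accessed for the longest time.
          while self.used + size > self.max_bytes and self.entries:
              _, (_, old_size) = self.entries.popitem(last=False)
              self.used -= old_size
          self.entries[uri_hash] = (data, size)
          self.used += size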
I have looked at The Freenet Project (iii), and it deserves the credit in this
project for the idea of a "shadow" Internet, but the Anonymous
Anonymity Network is fundamentally different. On The Freenet Project, web pages
are published only within The Freenet Project (and searching is not supported);
the Anonymous Anonymity Network allows not only searching of files on the
Anonymous Anonymity Network but also the anonymous transfer of files into the
Anonymous Anonymity Network from "The Internet", thus connecting
"The Internet" with the Anonymous Anonymity Network. Also, files
do not have to be passed from node to node to get to the final destination (as
in The Freenet Project); they are fetched and sent (via one hop) to the final
requestor.
I have also looked at and run The Onion Router (Tor) (xii). While this
system is very well conceived, I have a few issues with it:
1) Anybody from country "C" that communicates with any of the well known Tor
directories immediately exposes themselves as wanting
anonymity. Duck tells me that hostnames, ports and fingerprints of the
trusted directory servers are hardcoded in the source. If I were administering a
network and told to look for a specific activity, I would allow and log that
activity to see who was going to that site.
2) Many of the Tor Routers seem to be running on ephemeral ports, usually in
the TCP port 9000 to TCP port 10000 range. This is a huge flag to the
network operators that someone is doing something that isn't quite right;
specifically (if the network operators know about Tor), that someone is
accessing an anonymous network. Tor asks that routers run on ports 80 and
443, and this could be mitigated by an outgoing firewall on the client.
3) If country "C" can control all exit points then it can set up
its own fake directory server(s), assuming that it has enough CPU power to
break the known directory servers' fingerprint key(s).
Detailed Description:
There are up to five devices involved in each transaction.
1) Destination Machine - The machine that wishes to remain anonymous
2) Intermediary Server.
3) Intermediary Proxy.
4) End point - HTTP anonymous Proxy or file server
5) The (HTTP, FTP, NNTP, etc) server that the Anonymous Machine wishes to
reach.
With this anonymous network, as with the original design of "The
Internet", there is no central server. The software is initiated on
the user's machine. The bandwidth is detected (a role-selection sketch
follows the list):
1) "Low Bandwidth" - Less than 512 kilobits / second; the machine
establishes itself mainly as an Intermediary Proxy / End Point.
2) "High Bandwidth" - Greater than 512 kilobits / second and an inbound TCP
port is available; the machine establishes itself mainly as an
Intermediary Server.
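A minimal sketch of this role-selection step. The 512 kilobit / second
threshold comes from the list above; the function name and the inbound-port
test are assumptions of the sketch:

  LOW_BANDWIDTH_BPS = 512 * 1000   # 512 kilobits / second, per the list above

  def choose_role(measured_bps, inbound_tcp_port_available):
      """Decide locally which role(s) this node plays; never shared with peers."""
      if measured_bps < LOW_BANDWIDTH_BPS:
          return ["intermediary_proxy", "end_point"]
      if inbound_tcp_port_available:
          return ["intermediary_server", "intermediary_proxy"]
      # High bandwidth but no inbound port: act as proxy / end point only.
      return ["intermediary_proxy", "end_point"]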
All connections / communications will use the specific protocol encoding (see
the HTTPEncode example below). Since some providers block port 80, the
following standard protocols will be tested to see if they are available for
traffic flow, in the same fashion that AOL Instant Messenger searches for any
open port (a port-probing sketch follows the list):
1) Port 20 - FTP Data
2) Port 21 - FTP Control
3) Port 80 - HTTP (Also port 81, 8080)
4) Port 443 - SSL
5) Instant Messenger
6) IRC
7) NNTP - The originator would specify the URI they wish encoded, the newsgroup
and message title they wish to use, and their public key. The data would be
encoded via HTTPEncode from a random Google page fetch, without the HTML, per
the HTTPEncode description below.
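A sketch of the port probe referenced above, assuming a simple outbound TCP
connect test; the Instant Messenger / IRC entries are omitted from this sketch
because they use whatever server and port the user already has configured:

  import socket

  # FTP data / control, HTTP (and 81, 8080), SSL, NNTP
  CANDIDATE_PORTS = [20, 21, 80, 81, 8080, 443, 119]

  def probe_ports(host, ports=CANDIDATE_PORTS, timeout=5.0):
      """Return the subset of ports on which an outbound TCP connection succeeds."""
      usable = []
      for port in ports:
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  usable.append(port)
          except OSError:
              pass   # blocked or filtered; try the next candidate
      return usable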
An example of protocol encoding is HTTPEncode encoding. See https://gandalfddi.z19.web.core.windows.net/anonymous_httpencode.htm
for a complete description.
When using FTPEncode for port 20, the data port, UUEncoding will suffice.
When using port 21, the data will again need to be massaged to look like FTP
commands. Traffic on port 443 is already encrypted, so the data conversion
should be minimal.
When the software is installed, the user is asked if they have any filtering
software that blocks or monitors what sites they are able to go to. If
they do, then their machine is not allowed to be an end point that fetches
"fresh" web pages. Any firewall devices will have to be set up to allow
inbound port 80 (or port "X", user defined, since some ISPs block port 80)
connections. If this cannot be done then this machine is primarily an
outbound / Intermediary Proxy connect machine. If a machine can accept
inbound connections it will receive preferential service from other
machines. Note: If an unexpected HTTP connection occurs, the server will
display a previous page that it has displayed per the <word>.html request
from the Destination machine (see the HTTPEncode page above). This will
prevent an attacker from figuring out that the server is not a "real" HTTP
server; the same applies to FTP or SSL connections. Of course, if the
attacker suspects that this system is running the Anonymous Anonymity
network, they could just use a normal connect request, and if the machine
connects then they are right.
The software then attempts a connection to an Intermediary Server. The IP
address of an initial intermediary server can be entered manually (as obtained
through a social network, sneakernet, etc.), downloaded from a web site, or
retrieved from a periodic post to UseNet news groups or banner ads (suggested
by Sherwood B.). Note: If the initial IP address is downloaded, this
can indicate to anybody watching that the Anonymous Anonymity software is being
used; the user should be aware that they may be exposing themselves. The
IP addresses are checked against the internal Autonomous System database to see
if they are geographically diverse, per Feamster and Dingledine (v), and to
mitigate an organization setting up multiple servers that are (in essence) the
same machine (viii). If possible, make connections to countries that are not
particularly friendly to your country. IPv6 will make this process easier
because it addresses according to the location of the machine. Servers
that are "farthest away" will be chosen over servers that are
"close". If the machine is set at the highest security setting,
then the software should not connect to known cable home user networks (Cox,
Shaw, etc.), as these networks do not generally host web servers and connecting
to them might raise suspicion. Intermediary servers should have port 80
open as an inbound connection (for example; any "standard" protocol port
could be open: FTP, FTP Data, NNTP, etc.) so that they appear to be just
another web server. If the machine has determined that it has the
capabilities to be an Intermediary Server then it should allow connections to
itself as an Intermediary Server. The machine should also search for other
Intermediary Servers so that requests are distributed between many
servers. Note: If inbound port 80 cannot be established then that
machine can still act as a server by making the port 80 outbound connection
when asked to by another machine (see the handoff description below).
Obviously, once the outbound port 80 connection is made, two-way
communications can then ensue.
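The Autonomous System diversity check might look something like the following
sketch. The asn_lookup callable stands in for the node's internal AS database,
and the thresholds and function names are assumptions:

  def diverse_enough(candidates, asn_lookup, min_distinct_asns=3):
      """Check that candidate peers span several Autonomous Systems (v, viii)."""
      asns = {asn_lookup(ip) for ip in candidates}
      return len(asns) >= min_distinct_asns

  def pick_peers(candidates, asn_lookup, wanted=8):
      """Greedily pick peers so that no two share an AS, up to 'wanted' peers."""
      chosen, seen_asns = [], set()
      for ip in candidates:
          asn = asn_lookup(ip)
          if asn not in seen_asns:
              chosen.append(ip)
              seen_asns.add(asn)
          if len(chosen) == wanted:
              break
      return chosen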
If a server has too many nodes, it should pass any new connections off to
another server and notify the machine that is trying to connect of this handoff
so that it can establish a direct connection to the other server. If an
Intermediary Proxy is using more than 50% of its bandwidth proxying
connections, then additional connection requests should be denied.
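A sketch of the handoff / denial logic above; the server object and its helper
methods are assumptions of the sketch, not a defined interface:

  MAX_PROXY_BANDWIDTH_FRACTION = 0.5   # from the text: deny new work above 50%

  def handle_connection_request(server, new_peer):
      """Accept, hand off, or deny a new connection (illustrative logic only)."""
      if server.proxy_bandwidth_in_use() > MAX_PROXY_BANDWIDTH_FRACTION:
          return ("deny", None)
      if len(server.connected_nodes) >= server.max_nodes:
          alternate = server.pick_less_loaded_server()
          return ("handoff", alternate)   # peer reconnects directly to 'alternate'
      server.connected_nodes.append(new_peer)
      return ("accept", None)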
All connections / communications should be encrypted. Elliptic curve
cryptography (ECC) is the standard here, since it is less CPU intensive and
uses smaller keys than other public-key algorithms (xiii, xiv). Each
connection creates a unique public/private key pair for use in communication
(this is so that the user cannot be identified by using the same public key
over and over again). For destination machines, "Get" and "Post" requests
should not be made frequently, or with large amounts of data, to minimize
raising suspicion about that node.
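A sketch of per-connection key generation using the Python "cryptography"
package; the specific curve (SECP256R1) and the HKDF parameters are
assumptions of the sketch, since the proposal only specifies ECC:

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import ec
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF

  def new_connection_keys():
      """Generate a fresh ECC key pair for one connection (never reused)."""
      private_key = ec.generate_private_key(ec.SECP256R1())
      return private_key, private_key.public_key()

  def derive_session_key(own_private_key, peer_public_key):
      """Derive a shared symmetric key for the connection via ECDH + HKDF."""
      shared_secret = own_private_key.exchange(ec.ECDH(), peer_public_key)
      return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                  info=b"anonymous-anonymity-session").derive(shared_secret)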
Routing - Since data is routed node to node, the routing will not allow least
cost (efficient) routing. Only individual direct connections will be in
the routing table (IP Address / search request). Data would be
"routed" by each node keeping a table of incoming IP Address / search
request hashes paired with outgoing IP Address / search request hashes.
The route back is (of course) the path of pairs of IP Addresses and search
request hashes that are related. This gives each node the "least
knowledge" of the source and destination. An Intermediary Server should
not know whether a connected node is another Intermediary Server, an
Anonymous Machine, or an End Point.
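A sketch of the pairwise routing table described above; the data layout is an
assumption:

  class PairRoutingTable:
      """Route replies back by pairing (incoming IP, search hash) with
      (outgoing IP, search hash); a node only ever sees its two neighbors."""
      def __init__(self):
          self.pairs = {}   # (out_ip, out_hash) -> (in_ip, in_hash)

      def record(self, in_ip, in_hash, out_ip, out_hash):
          self.pairs[(out_ip, out_hash)] = (in_ip, in_hash)

      def route_back(self, from_ip, reply_hash):
          # A reply arriving from 'from_ip' for 'reply_hash' goes back toward
          # whichever neighbor originally sent us the matching request.
          return self.pairs.get((from_ip, reply_hash))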
Searches can be of the form:
1) URI Scheme request (http, https, ftp, gopher, file, etc)
2) File Name (or parts of file name)
3) Data in file (words, phrases, ANDed words or ORed words)
The search request and the public key are concatenated and then hashed (ix).
This is referred to as the search hash. For each search
request, each node will concatenate their available search terms with the
public key and compare; if there is a favorable match, then the search term
will be encrypted with the public key and returned. The search hash with a
unique public key is passed from the Anonymous Machine to all Intermediary
Servers. When a search request is seen, a lookup of the search hash is
made on the server in the "already known searches" search table, and
if the hash of the search matches an already received search, the search is
dropped (this search has already been through this machine). If the
search is not dropped, the search hash is stored in a lookup table with the IP
Address that the search was received from. The Intermediary Server passes
the entire search to all Intermediary Servers, Anonymous Machines, and End
Point machines it knows except for the machine the search request came from
(the server doesn't know what "kind" of machine it is connected to).
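A sketch of the search hash and the "already known searches" table. SHA-256
and the exact table layout are assumptions; the proposal only requires a hash
and a roughly two-minute age-out:

  import hashlib
  import time

  def search_hash(search_request, public_key_bytes):
      """Hash of the search request concatenated with the requester's public key (ix)."""
      return hashlib.sha256(search_request.encode() + public_key_bytes).hexdigest()

  class SeenSearches:
      """Drop searches already seen; remember which neighbor each one came from."""
      def __init__(self, ttl_seconds=120):            # ~2 minutes, per the text
          self.ttl = ttl_seconds
          self.table = {}                             # hash -> (source_ip, timestamp)

      def should_forward(self, h, source_ip):
          now = time.time()
          # Age out old entries so tracing is harder and memory stays bounded.
          self.table = {k: v for k, v in self.table.items() if now - v[1] < self.ttl}
          if h in self.table:
              return False                            # already passed through this node
          self.table[h] = (source_ip, now)
          return True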
If an end point machine can satisfy the request / has matches for the request
then that data relating to the request is encrypted using the public key and
passed back to the Intermediary Server with the search response hash
number. The response data from the end point machine is the URI, the URI
hash, the size of the file, and the date of the file (when the file was
created). When positive responses are received then those responses are
returned via the routing (above) to the IP Address that initiated the
request. The search request table will age out after a "short"
amount of time (2 minutes or so) so that tracing the connection will be more
difficult and memory utilization in each machine will be minimized.
To better optimize bandwidth and to protect against replay attacks, attempts
to find the end point (x), and spam attacks, replies that hash to the same
value are treated as duplicates and will be dropped.
Stanislav S. mentioned that the system needs to prevent an attacker from
posting a page that is "interesting" in hopes of luring unsuspecting
anonymous users into viewing a page that sends information about the
anonymous user (IP Address, etc.) back to the attacker. The end point
machine should act like an HTTP proxy. When fetching a URI from "The
Internet", all JavaScript / ActiveX controls should be sanitized or converted
to animated GIFs. All GIFs / JPEGs should (obviously) be imported and
transferred as data. All drop down menus should be sanitized.
The Anonymous Machine then (by operator choice or at random) chooses one of
the hashes to act on the request. If no node, or only one or two nodes, has
the URI available in cache, then a node that can connect to the URI is
chosen. If only one or two nodes return a positive search result, it is
possible that these results come from an attacker (x); to protect against
false information and to better populate the network, the Anonymous Machine
should request a new version of the URI, and if that does not work then
retrieve one of the stored versions. If any node returns a hash indicating
that they have the file, then a second search request is sent out via another
connection (i.e. do not send the hash request out via the server that the
original search response came in from) using the hash as the search request.
Nodes that have that hash return the hash and hash dictionary (see below).
Again, any duplicate replies that hash the same are dropped. If the file is
large, this search hash / hash dictionary will allow the Anonymous Machine to
transfer parts of the file from many sources. The Anonymous Machine will
also be able to offer portions of the file out as they are received if
other machines are looking for that same file. Note: To further obfuscate
the "real" requests, Anonymous Machines should take random incoming
requests / pick random words and send them out as fake requests to Intermediary
Servers. Results from these fake requests are, of course, ignored.
When the search table fills, requests are dropped in a FIFO manner for a
specific IP Address. If someone tries to flood the network with requests
to empty the tables, only the IP Address they are connected to will suffer, not
other IP Addresses.
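A sketch of the per-IP FIFO drop described above; the per-IP limit is an
arbitrary illustrative number:

  import collections

  class PerSourceSearchTable:
      """Bound the search table per connected IP so a flooder only evicts
      its own entries, not those of other peers."""
      def __init__(self, max_per_ip=1000):            # illustrative limit
          self.max_per_ip = max_per_ip
          self.by_ip = collections.defaultdict(collections.deque)

      def add(self, source_ip, search_hash):
          queue = self.by_ip[source_ip]
          if len(queue) >= self.max_per_ip:
              queue.popleft()                         # FIFO drop, oldest first
          queue.append(search_hash)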
Note: The positive responses to the search may be of the form "I can
act as your proxy for that URL, but I don't have the URL" or "I have
the entire URL, and this is the last date that I accessed that page, plus here
is the hash of the data on that page". The operator can choose
whether they want a copy (possibly stale) or whether they want to choose a
proxy that can get the current page. All links on that page are different
files that are searched / requested for. Additionally, (in this manner) the
Anonymous Anonymity network could host its own WWW network where those pages
were only accessible to someone connected to the Anonymous Anonymity network,
or via a machine proxying for the Anonymous Anonymity network.
When the Anonymous Requester receives a response that is acceptable, a
connection request is sent along the path that is in the response data using
the IP Address / search hash connection pair generated in the previous
paragraph. This connection request has a new public key associated with the
request. The Intermediary Server sends out a request on all
connections for a proxy, randomly chooses one of those responses, and
requests that Intermediary Proxy's IP address. The IP address of this
Intermediary Proxy is sent along the path to both the End Point machine and
the Anonymous Requester. Note: To minimize tracking of a specific file and to
minimize "fingerprinting" of web site interaction ("Y"
bytes of a web page on a monitored circuit = "Y" bytes out on a
suspected circuit, with a specific pattern), chunks of the file should be
requested from as many sources as possible (x).
The users of both the Endpoint machine and the Anonymous Requester configure a
value for the number of servers they wish to include in the chain, called the
chain link value. The Endpoint
machine and the Anonymous Requester each choose a random intermediary and send
the chain link value to their first server.
That server subtracts one from the chain link value, chooses a
geographically diverse server as the next link in the chain, and sends the
chain link value to that server. When
the chain link value is 0, the server sends a public key to the Endpoint
machine or the Anonymous Requester (whichever machine initiated the chain).
When the Endpoint machine / the Anonymous Requester receives that public
key, they encode the IP address of the Intermediary Proxy and send that address
to the last "Link 0" server.
The End Point "Link 0" server and the Anonymous Requester "Link
0" server set up connections with the Intermediary Proxy on TCP Port
80. Again, data is encrypted and then HTTPEncoded. The Intermediary
Proxy knows about two servers in the chain, but not what data is being
exchanged or who the source and destination are. When the data exchange
is complete the connection is terminated.
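A sketch of the chain-link handshake described above; the message format and
the node helper methods are assumptions of the sketch:

  import random

  def handle_chain_request(node, chain_link_value, originator):
      """Each server decrements the counter and forwards to a geographically
      diverse peer; the final link returns its public key to the originator."""
      if chain_link_value <= 0:
          # This node is "Link 0": hand the originator a public key so it can
          # encrypt the Intermediary Proxy address for this node only.
          node.send(originator, {"type": "link0_key",
                                 "key": node.public_key_bytes()})
          return
      next_hop = random.choice(node.geographically_diverse_peers())
      node.send(next_hop, {"type": "chain",
                           "value": chain_link_value - 1,
                           "originator": originator})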
The whole idea behind this network is for each node to know the minimum
information for the system to work. The less a node knows the less
information that can be pieced together to get the whole picture. In
training for Security Clearances the quote goes something like
"Unclassified information can easily be combined to reveal classified
information."
File name:
The file name is returned with a SHA-2 hash and a SHA-2 hash dictionary.
The SHA-2 hash is just a SHA-2 hash of the file. The SHA-2 hash
dictionary is a SHA-2 hash of every "X" bytes of the file (where
"X" is the size of the file / 1023 and where "X" is greater than 16
KBytes). The Anonymous Machine would request chunk "y" of the
file from the End Point. These requests would continue until the
Anonymous Machine has all the chunks it needs or until the connection is
broken. In this manner the Anonymous Machine could be requesting parts of
a particular file while also sending out parts of a particular file to other
users. If the file is less than 32 MBytes then the hash table would be 32
KByte chunks of the file, with the number of hashes indicated in the hash
table. This hash allows requests to be made for parts of a large file (in
the case, for example, of large FTP URI Scheme requests). The file hash and
the hash segment of the file would be requested, therefore several machines
could be sending parts of the file to the anonymous requester at the same
time.
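A sketch of building the hash and hash dictionary. SHA-256 stands in here for
"SHA-2", and the 32 KByte chunk size is taken from the small-file case above;
the size-of-file / 1023 rule for larger files is not modeled:

  import hashlib

  CHUNK_BYTES = 32 * 1024          # 32 KByte chunks, per the small-file case above

  def hash_dictionary(path, chunk_bytes=CHUNK_BYTES):
      """Return (whole-file SHA-256, list of per-chunk SHA-256 digests) so that
      chunks can be fetched from, and verified against, many sources at once."""
      whole = hashlib.sha256()
      chunks = []
      with open(path, "rb") as f:
          while True:
              block = f.read(chunk_bytes)
              if not block:
                  break
              whole.update(block)
              chunks.append(hashlib.sha256(block).hexdigest())
      return whole.hexdigest(), chunks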
Communications via Anonymous Chat:
Another phase of this project would be to use a version of the Mimicry (vi,
vii) software for Internet Relay Chat (IRC) or Instant Messenger (IM)
communication. The first portion of the conversation would be an exchange
of public keys from a generated set of public / private keys. From there on the
conversation would be encrypted before it is sent out to the IRC / IM channel.
The software could also be used to download HTTP / files from the Anonymous
Anonymity network.
Communications via UseNet:
Using the above HTTPEncode algorithm, with the "fetch a page via Google,
encrypt it and post it" approach, two parties can communicate or post requested
URIs via UseNet. Because this is a broadcast medium, if someone read all
the messages in a particular newsgroup then nobody could tell who the message
was bound for. Each message would (obviously) have to use some kind of
anonymizing NNTP server to post the message. The software would support an
"output message to a text file" option.
Software Distribution:
Thomas J. Boschloo asked "The problem remains, how to download this
software without drawing attention onto oneself!".
Stanislav S. answered " Put a modified Knoppix (that does what you want)
on CDs. Go to country C with 1000 of these disks. Meet
dissidents. Give them the disks and leaflets with SHA256 hash of the disk
content. Provide an easy way for disk replication (on machines with CD-Rs)
from inside the software. If it doesn't spread, it's not needed or
doesn't solve the problem. Find dissidents from country C in your
country. (My town has weekly meetings, if I am not mistaken.) Talk
to them." And he also added "If you do write code, and decide to
release it, I urge you to at least make sure it's clearly marked as unsuitable
for use in life-and-death situations such as those that arise when viewing
dissenting material in oppressive countries." He also mentioned
that Operating System companies / antivirus vendors could embed a detection
for the software and report the presence of that software back to a central
authority. I mentioned that polymorphic code of some kind might be useful in
this case. In addition, Network Interface Cards (NICs) that allow MAC
address modification should insert a random number in the last 3 bytes and
test to see if the machine still has network connectivity; if so, then that
MAC address should be used for communication. This will help to obfuscate the
computer used if layer 2 detection is used.
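A sketch of the MAC randomization idea. The use of the Linux "ip link" command
is an assumption of the sketch, and the caller is expected to re-test network
connectivity afterwards as described above:

  import random
  import subprocess

  def randomized_mac(current_mac):
      """Keep the vendor prefix (first 3 bytes) and randomize the last 3 bytes."""
      prefix = current_mac.split(":")[:3]
      suffix = ["%02x" % random.randint(0, 255) for _ in range(3)]
      return ":".join(prefix + suffix)

  def try_new_mac(interface, current_mac):
      """Apply a random MAC on Linux via 'ip link' (an assumption of this sketch)."""
      new_mac = randomized_mac(current_mac)
      subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)
      subprocess.run(["ip", "link", "set", "dev", interface, "address", new_mac],
                     check=True)
      subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)
      return new_mac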
Sherwood B. adds that the data should be encrypted and the key stored in RAM:
"On installation the kernel generates a key to be used for the data
partition. This is stored at known locations in the kernel itself.
(Allocate several hundred char sized variables short int space, and use the
upper byte to store keys.)"
Alternately, the keys could be stored as yet another file on the Anonymous
Anonymity network, where only the owner knows what the file is and what it is
good for.
Issues:
1) The issue of one party owning the server and the anonymous proxier and / or
the intermediary machine. This is essentially the Man In The Middle
attack. The attacker "owns" the server in the middle, which
directs the anonymous machine to proxies and end point devices that it also
controls; therefore the server knows the anonymous machine and what it is
requesting. The same applies if the attacker wants to find out what files are
on the end point machine: they act like the anonymous requester and the
intermediary servers / proxies and make requests. While this issue is not
completely solved by the above scheme, it is mitigated by the Anonymous Machine
searching on the hash after the initial responses are received. Even if
the server is acting as a man in the middle, the server would need to maintain
a table of URIs / hashes returned. As the network grows this table would
become huge.
2) HTTPS connections. An HTTPS transfer would require several data
requests, forcing the end point to serve up multiple pages to the
anonymous requester. The Man In The Middle attack would be mitigated by
the fact that the anonymous requester would be able to verify the SSL
certificate of the site that they are visiting.
3) Abuse of the anonymous system by someone who is stalking, etc. The IP
address of the proxier is the address that shows up in the logs, and stalking /
spamming / etc. would be blamed on whoever owns the proxier's IP address.
4) Not being able to make HTTP requests that divulge the end station's IP
address. (Example: http://www.whatismyip.com/ )
5) Spammers - Assuming that this system is released as open source, you
will (at some time) have some smart spammer figure out a way to redirect HTTP
requests to themselves and serve out their own spamvertized pages.
The same goes for data files: nodes could put out data files that have nothing
to do with the request made. A local file should be kept where the user can
ignore all responses from a specific connection or ignore a specific
hash. The file would be only locally significant, because if it became
global then nefarious people could "poison" sites that are serving
out good information and say that these are "bad" sites.
6) There has always been an issue of false data flooded into the network.
This will still be an issue, and the user will have to make several requests to
get the correct data (this is in conjunction with item 5 above).
7) The most important feature of this network is education, making sure that
users don't do things to hurt themselves. Because we are security
experts, the users of the software need to be made completely aware of the
dangers and risks they are taking by using the software. It is OUR
responsibility to educate them enough that they know the risks that they are
taking, what is the smallest risk and what risks are greater. If the Internet
connection of the person that is using this system is monitored, long enough
analysis of the connections and data will inevitably lead to the conclusion
that they are using some kind of software to evade detection. The user should
be warned (over and over again) against using this kind of software on a
machine that will / could be monitored for any length of time. If they can
jump from machine to machine they should. Part of the responsibility of
programming the system is also educating the user as to all the risks that
they may be taking by using the software (see the Software Distribution
section). We need to warn the users of such things as Stylometry
(http://en.wikipedia.org/wiki/Stylometry): if they are posting messages they
need someone to change their writing style.
8) A government entity can (essentially) "NAT" all the known server
or end point connections, try to look like those servers or end points, and
learn who is going where by directing all connections to itself.
9) Stanislav S. mentions that the software should not have a
"pattern" of communicating that can be profiled; i.e. when people web
surf they take some time between clicks, and the software should emulate this
rather than immediately fetching the next page.
i) Newman, Ron and Copeland, Frank "The Church of Scientology vs. Grady
Ward" (Specifically "Scientology targets ISPs and anonymous
remailers"). URL: http://www.xs4all.nl/~kspaink/cos/rnewman/grady/home.html
Wednesday, July 24, 1996 (Accessed July 4, 2005)
ii) IANA Registry of URI Schemes "Uniform Resource Identifier (URI)
SCHEMES". URL: http://www.iana.org/assignments/uri-schemes
03 June 2005 (Accessed July 4, 2005)
iii) The Freenet Project "The Freenet Project - index - beginner".
URL: http://freenetproject.org, 04
July 2005
iv) Simova, Martina; Pollett, Chris; and Stamp, Mark "STEALTHY
CIPHERTEXT". URL: http://www.cs.sjsu.edu/faculty/stamp/papers/stealthy.pdf,
March 2005 (Accessed July 23, 2005)
v) Nick Feamster, Roger Dingledine "Location Diversity in Anonymity
Networks". URL: http://www.freehaven.net/doc/routing-zones/routing-zones.ps
, 2004 (Accessed July 23, 2005)
vi) Mystic "Mimicry". URLs: http://www.defcon.org/html/defcon-11/defcon-11-speakers.html#Mystic
http://www.inventati.info/pub/defcon11/Mimic-Mimicry/Mimicry.ppt
(Accessed July 23, 2005).
vii) Mystic "Mimicry" software. URL: http://www.inventati.info/pub/defcon11/Mimic-Mimicry/
(Accessed July 23, 2005)
viii) John R. Douceur "The Sybil Attack". URL: http://www.cs.rice.edu/Conferences/IPTPS02/101.pdf
(Accessed July 30, 2005)
ix) Jianning Yang "APTPFS: Anonymous Peer-to-Peer File
Sharing". URL: http://www.cs.sjsu.edu/faculty/stamp/students/Tom_CS298_Report.doc
April, 2005
x) Nick Mathewson, 5th hope conference 2004 "How To Break Anonymity
Networks". URL: http://www.the-fifth-hope.org/hoop/5hope_speakers.khtml#panel029
July 10, 2004 (Accessed July 31, 2005)
xi) D. Chaum. Untraceable electronic mail, return addresses, and digital
pseudonyms. Communications of the ACM, 24(2), February 1981.
xii) Tor: An anonymous Internet communication system. URL: http://tor.eff.org/ (Accessed August 1, 2005)
xiii) Certicom.Com "ECC Cryptography Tutorial - 1.0 Introduction".
URL: http://www.certicom.com/index.php?action=ecc,ecc_tut_1_0
2005 (Accessed August 20, 2005)
xiv) Levent Ertaul, Weimin Lu "ECC Based Threshold Cryptography for Secure
Data Forwarding and Secure Key
Exchange in MANET (I)". URL: http://www.mcs.csuhayward.edu/~lertaul/34620102.pdf
2005 (Accessed August 20, 2005)
I would appreciate any and all comments on the above Anonymous Anonymity
network, specifically any solutions to the presented problems. If
someone has already covered this ground, I would appreciate pointers to their
work.
Thank you for your comments.
Ken Hollis
---------------------------------------------------------------
Do not meddle in the affairs of wizards for they are subtle and
quick to anger.
Ken Hollis - Gandalf The White - O- TINLC
WWW Page - https://gandalfddi.z19.web.core.windows.net/