This study examines the feasibility of caching queries and responses on hosts in the Gnutella network.
We first recorded sample messages off the network. The recorder filtered out pings, pongs, and pushes, so the sample consisted only of Queries and Responses. The messages were later played back to a host that caches queries, and a hit count was calculated for various cache sizes and replacement policies.
The caching algorithm distinguishes between broad queries and specific queries. Broad queries are defined as queries that contain a wildcard character or begin with a “.” (a better implementation would also filter queries like “mp3” and other commonly seen extensions). The cache disregards broad queries, because if a broad query arrives, almost all entries in the cache would be appropriate responses.
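The broad-query filter described above can be sketched as follows. This is a minimal illustration, not the original implementation; the function names (`is_broad`, `maybe_cache`) and the exact wildcard rule are assumptions based on the definition in the text.

```python
# Hypothetical sketch of the broad-query filter: a query is treated as
# "broad" if it contains a wildcard character or begins with ".", and
# broad queries are never cached.

def is_broad(query: str) -> bool:
    """Return True for queries too generic to cache usefully."""
    return "*" in query or query.startswith(".")

def maybe_cache(cache: dict, query: str, responses: list) -> None:
    """Cache responses only for specific (non-broad) queries."""
    if not is_broad(query):
        cache[query.lower()] = responses
```

A query like “.mp3” would match nearly every entry already in the cache, so storing its responses adds little value while consuming cache space.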
The results indicate that the best performance is achieved with the LRU replacement policy. Intuitively, though, random replacement might be preferable, since it prevents adjacent hosts from caching the same results. Perhaps a hybrid of replacement algorithms (for example, random replacement tending toward FIFO) would work well.
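For concreteness, an LRU query cache of the kind tested above might look like this sketch. The class name and fixed capacity are illustrative assumptions, not the experiment's actual code.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU query cache: when capacity is exceeded, the least
    recently used entry is evicted. Keys are query strings; values are
    the cached response lists."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, query):
        """Return cached responses, marking the entry as recently used."""
        if query not in self.entries:
            return None
        self.entries.move_to_end(query)  # mark as most recently used
        return self.entries[query]

    def put(self, query, responses):
        """Insert or refresh an entry, evicting the LRU entry if full."""
        self.entries[query] = responses
        self.entries.move_to_end(query)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

A random-replacement variant would simply evict a randomly chosen key instead of the oldest one, which is what could decorrelate the caches of adjacent hosts.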
Cache size is an important consideration. Even small caches can drain the memory resources of clients on the network. Another important consideration is that even if each host has a small cache, the overall cache size of the network is the combined size of all the caches in the network.
Caching can also produce undesirable behavior. Consider a new host that joins the network and wants to share files that other people may want to access. If another host in the middle caches results to which the new host would normally have responded, the new host's files are effectively hidden. In this situation, caching the results has reduced the effectiveness of the network.
Given these problems and the generally transient nature of traffic on the Gnutella network, it is perhaps a good idea to invalidate cache entries that are more than 15 seconds old.
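The 15-second invalidation rule can be sketched as a simple time-to-live (TTL) check on each lookup. This is an illustrative assumption about how such invalidation could work; the `now` parameter exists only to make the expiry behavior easy to demonstrate.

```python
import time

class TTLCache:
    """Query cache whose entries expire after ttl seconds
    (15 seconds per the suggestion in the text)."""

    def __init__(self, ttl: float = 15.0):
        self.ttl = ttl
        self.entries = {}  # query -> (timestamp, responses)

    def put(self, query, responses, now=None):
        """Store responses along with the current timestamp."""
        now = time.time() if now is None else now
        self.entries[query] = (now, responses)

    def get(self, query, now=None):
        """Return responses, or None if missing or older than ttl."""
        now = time.time() if now is None else now
        item = self.entries.get(query)
        if item is None:
            return None
        stamp, responses = item
        if now - stamp > self.ttl:
            del self.entries[query]  # stale entry: invalidate it
            return None
        return responses
```

A short TTL limits how long a newly joined host's files stay hidden behind a stale cached result, at the cost of a lower hit rate.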
Comments and questions are welcome at email@example.com