LRU Cache

The more I work on storyteller accelerator, the more interesting caching techniques I come across. Last night, while browsing the source of Google's vitess project, I came across an LRU cache implementation in Go built on a doubly linked list. The implementation was small and well tested, so I decided to extract it and wrap it up as a library of its own. The source is available on GitHub if you wish to jump right into code.

What is an LRU Cache?

A cache is a container for things that are frequently accessed. Caches are useful when the object you need is expensive to access: it might be computationally expensive to produce, or it might reside on disk or across a network connection. In an ideal world everything would fit in memory and you wouldn't need a cache, but most of the time that is not the case. A cache works well until it becomes full; that is where the LRU part comes in.

LRU stands for Least Recently Used; it is an eviction policy that dictates what to do when the container is full and you try to add another item. The idea is pretty simple: every time something in the cache is accessed, record when it was used, and when the cache is full, delete the thing that was used longest ago. This library goes a step further by moving accessed items to the front of a list. That way the oldest item is always at the end, and removing it is trivial.
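To make the move-to-front idea concrete, here is a minimal sketch using container/list, the doubly linked list in Go's standard library. This is my own illustration, not the library's code; the lru type and its methods are hypothetical.

	// Hypothetical move-to-front LRU sketch; not the library's code.
	package main

	import (
		"container/list"
		"fmt"
	)

	type entry struct {
		key, value string
	}

	type lru struct {
		capacity int
		order    *list.List               // front = most recently used
		items    map[string]*list.Element // key -> element in order
	}

	func newLRU(capacity int) *lru {
		return &lru{capacity: capacity, order: list.New(), items: map[string]*list.Element{}}
	}

	func (c *lru) get(key string) (string, bool) {
		el, ok := c.items[key]
		if !ok {
			return "", false
		}
		c.order.MoveToFront(el) // accessed: move to the front
		return el.Value.(*entry).value, true
	}

	func (c *lru) set(key, value string) {
		if el, ok := c.items[key]; ok {
			el.Value.(*entry).value = value
			c.order.MoveToFront(el)
			return
		}
		c.items[key] = c.order.PushFront(&entry{key, value})
		if c.order.Len() > c.capacity {
			oldest := c.order.Back() // least recently used is at the back
			c.order.Remove(oldest)
			delete(c.items, oldest.Value.(*entry).key)
		}
	}

	func main() {
		c := newLRU(2)
		c.set("a", "1")
		c.set("b", "2")
		c.get("a")      // touch "a" so "b" becomes the oldest
		c.set("c", "3") // evicts "b"
		_, ok := c.get("b")
		fmt.Println(ok) // false
	}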

An Example

Using the library is simple. First, import it:

	import "go-lrucache/cache"

Or if you don’t freeze your dependencies locally:

	import "github.com/danieltoshea/go-lrucache/cache"

Creating a cache is very easy: call cache.NewLRUCache and pass it the capacity for the cache.

	c := cache.NewLRUCache(100)

The following example (trimmed down for brevity) shows how to look something up in the cache and, if it is not found, add it:
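A sketch of that pattern, assuming the vitess-style API where cache.Get returns a (cache.Value, bool) pair and stored values satisfy the Value interface's Size() int method; the result type and expensiveLookup helper are hypothetical:

	// Sketch only: assumes a vitess-style cache API.
	type result struct {
		data []byte
	}

	// Size satisfies the cache.Value interface (assumed, as in
	// vitess, to require a Size() int method).
	func (r *result) Size() int {
		return len(r.data)
	}

	func fetch(c *cache.LRUCache, key string) *result {
		if v, ok := c.Get(key); ok {
			return v.(*result) // cache hit: assert back to the concrete type
		}
		r := expensiveLookup(key) // hypothetical slow path (disk, network, ...)
		c.Set(key, r)
		return r
	}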

The type returned by cache.Get, cache.Value, is probably not what you want, so remember to do a type assertion to restore the original type, as the v.(*result) assertion above does.

Photo credit: NASA GRIN, Image # C1966-3680
