LRU cache implementation and demonstration using webpages

vigneshb27/lru_cache

ALGORITHM

The Least Recently Used (LRU) cache algorithm keeps recently used items near the front of the cache. Whenever an item is accessed, the LRU cache moves it to the head of the cache. When the cache reaches its capacity, the items that have been accessed least recently are removed, starting from the end of the cache.

HOW IT WORKS

Generally, the LRU algorithm is implemented with a HashMap and a doubly linked list that together form a queue.

newNode() : Creates a new node containing the page and pointers to the next and the previous nodes.

createQueue() : Creates the queue with the provided size and initializes the front and rear pointers to NULL.

createHash() : Creates an array of the provided size and initializes all the values to NULL.

areAllFramesFull() : Returns a Boolean value indicating whether the queue is full.

isQueueEmpty() : Returns a Boolean value indicating whether the queue is empty.

dequeue() : Removes the rear node, following the queue's First In First Out property.

enqueue() : Creates a new node at the front of the queue and stores it in the hash under its page value.

referencePage() : Checks whether the node is already present in the queue; if so, it moves the node to the front of the queue, otherwise it inserts the new node at the front. Each time the function is called, all the values currently in the cache are written to a file.

getThePage() : Searches for the required page in the queue; if found, it updates the cache and returns the value, otherwise it returns -1.

printAll() : Prints all the values currently in the cache to the console.
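Under the assumption that the cache is the classic hash-array-plus-doubly-linked-queue design described above, the constructor helpers might be sketched in C as follows (struct layouts and field names here are illustrative, not the project's exact code):

```c
#include <stdlib.h>

/* One cache frame: holds a page number plus links to its neighbours. */
typedef struct QNode {
    struct QNode *prev, *next;
    unsigned page;
} QNode;

/* The queue of frames: front = most recently used, rear = least. */
typedef struct {
    unsigned count;      /* frames currently occupied */
    unsigned capacity;   /* total frames available    */
    QNode *front, *rear;
} Queue;

/* Direct-addressed hash: index = page number, value = its node. */
typedef struct {
    unsigned capacity;
    QNode **array;
} Hash;

/* newNode(): a fresh node for one page, unlinked on both sides. */
QNode *newNode(unsigned page) {
    QNode *n = malloc(sizeof(QNode));
    n->page = page;
    n->prev = n->next = NULL;
    return n;
}

/* createQueue(): empty queue; front and rear start out as NULL. */
Queue *createQueue(unsigned capacity) {
    Queue *q = malloc(sizeof(Queue));
    q->count = 0;
    q->capacity = capacity;
    q->front = q->rear = NULL;
    return q;
}

/* createHash(): array of `capacity` slots, all initialised to NULL. */
Hash *createHash(unsigned capacity) {
    Hash *h = malloc(sizeof(Hash));
    h->capacity = capacity;
    h->array = malloc(capacity * sizeof(QNode *));
    for (unsigned i = 0; i < capacity; i++)
        h->array[i] = NULL;
    return h;
}

int areAllFramesFull(Queue *q) { return q->count == q->capacity; }
int isQueueEmpty(Queue *q)     { return q->rear == NULL; }
```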

OPERATIONS ON LRU :

ADD :

    • Create a new node for the given value and insert it at the head of the queue.

    • Put the new node into the HashMap with the given value as the key.

    • The size is increased by one.

WHEN CACHE IS FULL :

    • Remove the last element (the one tail.prev points to) from the list.

    • Create a new node for the given value and insert it at the head of the linked list.

    • Put the new node into the HashMap with the given value as the key.

    • The size remains unchanged.

GET :

    • Find the given value in the HashMap.

    • If the corresponding node is not at the head of the linked list, move it to the head.

    • Update the tail pointers accordingly.

    • Return the value.
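The ADD, eviction and GET steps above fit together as in the following self-contained C sketch. The page-number-indexed hash array and the exact signatures are assumptions matching the function list earlier, and the file writing done by referencePage() in the project is omitted here:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct QNode { struct QNode *prev, *next; unsigned page; } QNode;
typedef struct { unsigned count, capacity; QNode *front, *rear; } Queue;
typedef struct { unsigned capacity; QNode **array; } Hash;

static QNode *newNode(unsigned page) {
    QNode *n = malloc(sizeof(QNode));
    n->page = page; n->prev = n->next = NULL;
    return n;
}
static Queue *createQueue(unsigned capacity) {
    Queue *q = malloc(sizeof(Queue));
    q->count = 0; q->capacity = capacity; q->front = q->rear = NULL;
    return q;
}
static Hash *createHash(unsigned capacity) {
    Hash *h = malloc(sizeof(Hash));
    h->capacity = capacity;
    h->array = calloc(capacity, sizeof(QNode *));   /* all slots NULL */
    return h;
}

/* Evict the least recently used frame from the rear of the queue. */
static void dequeue(Queue *q, Hash *h) {
    if (q->rear == NULL) return;
    QNode *old = q->rear;
    h->array[old->page] = NULL;        /* forget the page in the hash */
    q->rear = old->prev;
    if (q->rear) q->rear->next = NULL;
    else q->front = NULL;              /* queue became empty          */
    free(old);
    q->count--;
}

/* ADD: insert a new page at the front, evicting first if full. */
static void enqueue(Queue *q, Hash *h, unsigned page) {
    if (q->count == q->capacity) dequeue(q, h);
    QNode *n = newNode(page);
    n->next = q->front;
    if (q->front) q->front->prev = n;
    q->front = n;
    if (q->rear == NULL) q->rear = n;
    h->array[page] = n;
    q->count++;
}

/* Touch a page: move it to the front if present, else insert it. */
static void referencePage(Queue *q, Hash *h, unsigned page) {
    QNode *n = h->array[page];
    if (n == NULL) { enqueue(q, h, page); return; }
    if (n == q->front) return;         /* already most recent      */
    n->prev->next = n->next;           /* unlink n from the middle */
    if (n->next) n->next->prev = n->prev;
    else q->rear = n->prev;            /* n was the rear           */
    n->prev = NULL;
    n->next = q->front;
    q->front->prev = n;
    q->front = n;
}

/* GET: return the page if cached (moving it to the front), else -1. */
static int getThePage(Queue *q, Hash *h, unsigned page) {
    if (page >= h->capacity || h->array[page] == NULL) return -1;
    referencePage(q, h, page);
    return (int)page;
}
```

Note that enqueue() reuses dequeue() for eviction, so the count decrement and increment cancel out and the size stays unchanged exactly as the WHEN CACHE IS FULL steps describe.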

DATA STRUCTURES :

The data structures used in the project are:

    i) Linked List

    ii) Queue

    iii) HashMap

Linked List : A doubly linked list is used so that each node keeps track of both its next and its previous node at the same time, and this linked list serves as the underlying implementation of the queue.

Queue : The queue is used mainly to maintain the eviction order, and the size of the cache is regulated by the size of the queue. Whenever a new node is added, the queue's front pointer points to it, and eviction is done at the rear pointer. The First In First Out (FIFO) principle of the queue plays a major role in the implementation of the LRU cache, so the most recently accessed elements are tracked through the queue.

HashMap : The HashMap reduces the complexity of searching for a node to O(1). When looking up a node, if it is present in the hashmap it is moved to the front of the queue; if it is not present, a new node is inserted at the front of the queue.

REAL-LIFE APPLICATION :

A simple website called ‘Sports’ has been created using Node JS to illustrate how the LRU Cache works.

The home page of the website is at the URL ‘localhost:3000’, and the other pages are at ‘localhost:3000/r’, where r is the page number.

The url ‘localhost:3000/1’ corresponds to ‘Cricket’ page

The url ‘localhost:3000/2’ corresponds to ‘Football’ page

The url ‘localhost:3000/3’ corresponds to ‘Tennis’ page

The url ‘localhost:3000/4’ corresponds to ‘BasketBall’ page

The url ‘localhost:3000/5’ corresponds to ‘BaseBall’ page

Home page :

Now the C program for the LRU cache is run.

The cache is created with a size of three, i.e. we can store only three webpages in the cache. If a URL is present in the cache, we can access the page when retrieving it; otherwise we end up with a 404 (not found) error.

The values 3, 2 and 1 have been added to the cache. We can visualize this by printing the elements in the cache from the front of the queue.

Now we can view the webpages with page number 1, 2 or 3, but if we try to access any other webpage we will end up with the 404 error.

Since the webpage ‘/4’ is not found in the cache, we get the not found error. Now a few modifications are made to the cache storage.

Now page 4 is added to the cache, so the least recently used page, 3, is evicted from the cache and 4 is added to the front of the queue.

So now page 4 can be accessed, but trying to access page 3 shows the not found error.

Since 2 is already present in the cache, it is simply moved to the front of the queue, i.e. the queue 4 1 2 becomes 2 4 1. Then page 5 is added, so the cache becomes 5 2 4 and page 1 is evicted from the cache.

Likewise, trying to get any other page that is not in the cache ends in the 404 error.
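The whole browsing session above can be replayed with a compact C sketch of the same cache. Here serve() stands in for the web server's cache lookup (returning 200 or 404), and the capacity of three and the hash size of 10 are assumptions chosen to mirror the demonstration, not the project's exact code:

```c
#include <stdlib.h>

typedef struct Node { struct Node *prev, *next; int page; } Node;

typedef struct {
    int count, capacity;
    Node *front, *rear;   /* front = most recent, rear = next to evict */
    Node *slot[10];       /* page number -> node, NULL when absent     */
} Cache;

Cache *cacheCreate(int capacity) {
    Cache *c = calloc(1, sizeof(Cache));
    c->capacity = capacity;
    return c;
}

/* Visit a page: move it to the front, evicting the rear if needed. */
void visit(Cache *c, int page) {
    Node *n = c->slot[page];
    if (n) {                               /* cache hit: unlink n  */
        if (n == c->front) return;
        n->prev->next = n->next;
        if (n->next) n->next->prev = n->prev;
        else c->rear = n->prev;
    } else {                               /* cache miss: new node */
        if (c->count == c->capacity) {     /* evict least recent   */
            Node *old = c->rear;
            c->slot[old->page] = NULL;
            c->rear = old->prev;
            if (c->rear) c->rear->next = NULL;
            else c->front = NULL;
            free(old);
            c->count--;
        }
        n = calloc(1, sizeof(Node));
        n->page = page;
        c->slot[page] = n;
        c->count++;
    }
    n->prev = NULL;                        /* splice n in at front */
    n->next = c->front;
    if (c->front) c->front->prev = n;
    c->front = n;
    if (c->rear == NULL) c->rear = n;
}

/* 200 if the page is cached (and refreshed), 404 otherwise. */
int serve(Cache *c, int page) {
    if (page < 0 || page >= 10 || c->slot[page] == NULL) return 404;
    visit(c, page);
    return 200;
}

/* Copy the cached pages, front to rear, into out; returns the count. */
int snapshot(Cache *c, int *out) {
    int k = 0;
    for (Node *n = c->front; n; n = n->next) out[k++] = n->page;
    return k;
}
```

Replaying the session — visit 3, 2, 1, a 404 on page 4, adding 4 (evicting 3), serving 2 (queue 4 1 2 becomes 2 4 1), then adding 5 (cache 5 2 4, evicting 1) — reproduces exactly the states shown in the screenshots.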

This project thus illustrates the implementation of the LRU cache.
