We are given the total number of pages that can be referenced, along with the cache (memory) size, i.e. the number of page frames the cache can hold at a time. When the cache is full and a page is referenced that is not present in it, the LRU caching scheme evicts the least recently used frame.
An LRU cache is implemented using two data structures:
A queue: implemented as a doubly linked list, with a maximum size equal to the total number of frames available (the cache size).
The most recently used pages sit near the front of the queue, and the least recently used pages sit near the back.
A hash: with the page number as the key and the address of the corresponding queue node as the value. When a page is referenced, it may already be in memory: if its node is found via the hash, we detach that node from the list and move it to the front of the queue.
If the page is not already in memory, we bring it in: we add a new node at the front of the queue and record its address in the hash. If the queue is full, i.e. all frames are occupied, we first remove the node at the back of the queue, then add the new node at the front.
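The scheme above can be sketched in Python. This is a minimal illustration, not a definitive implementation: the class and method names (`LRUCache`, `refer`, `pages`) are my own, and sentinel head/tail nodes are used to simplify the detach/attach logic.

```python
class Node:
    """Doubly linked list node holding a page number."""
    def __init__(self, page):
        self.page = page
        self.prev = None
        self.next = None


class LRUCache:
    """Hash map (page number -> node) plus a doubly linked list
    ordered from most recently used (front) to least recently
    used (back)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}           # page number -> Node
        # Sentinel nodes: head.next is the front, tail.prev the back.
        self.head = Node(None)
        self.tail = Node(None)
        self.head.next = self.tail
        self.tail.prev = self.head

    def _detach(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _attach_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def refer(self, page):
        """Reference a page: a hit moves its node to the front;
        a miss inserts a new node, evicting the back node first
        if all frames are full."""
        if page in self.map:                 # hit: move to front
            node = self.map[page]
            self._detach(node)
            self._attach_front(node)
            return
        if len(self.map) == self.capacity:   # full: evict LRU page
            lru = self.tail.prev
            self._detach(lru)
            del self.map[lru.page]
        node = Node(page)                    # bring the page in
        self._attach_front(node)
        self.map[page] = node

    def pages(self):
        """Pages from most to least recently used (for inspection)."""
        out, cur = [], self.head.next
        while cur is not self.tail:
            out.append(cur.page)
            cur = cur.next
        return out
```

For example, with 3 frames and the reference string 1, 2, 3, 1, 4: referencing 1 again moves it to the front, and referencing 4 evicts page 2, the least recently used, leaving the queue as 4, 1, 3 from front to back.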