When we’re working with programs, system performance is understandably one of the biggest things we’re trying to optimize. While there are many ways to modify our code to improve performance, we can also lean on techniques provided by the operating system. One such technique is paging, a form of memory management. We’re going to explain what paging in programming is and how it works, with examples.
What is Paging in Programming?
Managing memory usage and access is crucial in helping our programs run as efficiently as possible. Therefore, we often use paging, which permits the operating system to load processes from secondary memory in the form of pages. In programming, a page is a contiguous block of virtual memory with a fixed size. By doing this, we can allocate memory dynamically and load only the pages we actually require.
The Working Behind Paging
For paging, unsurprisingly, we need to divide each process into pages. Paging also divides the main memory into frames: blocks of physical memory that are the same size as a page. We store one page in one frame of the main memory, using contiguous frames when they happen to be free, although any free frame will do. Secondary storage keeps pages while they are not in use. Because of this, we avoid the need to allocate contiguous physical memory: in essence, each process sees a contiguous logical address space that can be larger than the memory we physically have available.
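To make this concrete, here’s a minimal Python sketch of the idea. The page size and function names are our own illustration rather than any particular operating system’s API; the point is simply that a process is chopped into fixed-size pages and that a logical address splits into a page number plus an offset.

```python
# Minimal sketch: fixed-size pages and page-number/offset addressing (illustrative only).
PAGE_SIZE = 1024  # 1 KB pages, matching the example below

def split_into_pages(data: bytes) -> list[bytes]:
    """Chop a process's data into PAGE_SIZE chunks; the last page may be only partly used."""
    return [data[i:i + PAGE_SIZE] for i in range(0, len(data), PAGE_SIZE)]

def logical_to_page(logical_address: int) -> tuple[int, int]:
    """A logical address is just a page number plus an offset within that page."""
    return logical_address // PAGE_SIZE, logical_address % PAGE_SIZE

process = bytes(2500)                     # a 2.5 KB "process"
pages = split_into_pages(process)         # -> 3 pages (the third only partly used)
print(len(pages), logical_to_page(2100))  # 3 (2, 52): byte 2100 lives at offset 52 of page 2
```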
For example, we can consider a main memory of size 9 KB. This can be split into nine 1 KB frames. We also have 3 processes, P1 to P3, which can each be split into three 1 KB pages. If all the frames are empty, we can map the pages contiguously, for example:
| Main Memory |
| --- |
| P1 |
| P1 |
| P1 |
| P2 |
| P2 |
| P2 |
| P3 |
| P3 |
| P3 |
However, we may not have all frames free, or we may want to access a different process first. With paging, we can store a process’s pages non-contiguously in the main memory, and the page table still lets us access them as one process later on. For example, suppose P1 is already stored, but we then want to access P4 before P1. P4 has 6 pages, which can be stored non-contiguously yet still accessed together, as shown below (a short simulation of this layout follows the table).
| Main Memory |
| --- |
| P4 |
| P4 |
| P4 |
| P1 |
| P1 |
| P1 |
| P4 |
| P4 |
| P4 |
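We can mimic this layout with a short Python simulation. It’s purely illustrative: the frame count and process sizes mirror the example above, and the allocator is deliberately simplified. The key point is that each process’s page table records which frame holds which page, so pages scattered across memory still read back as one process.

```python
frames = [None] * 9                   # physical memory: nine 1 KB frames
page_tables = {}                      # process name -> list of frame numbers, indexed by page

# P1's three pages already sit in the middle of memory (frames 3-5), as in the table above.
for page, frame in enumerate([3, 4, 5]):
    frames[frame] = ("P1", page)
page_tables["P1"] = [3, 4, 5]

def load(process: str, num_pages: int) -> None:
    """Place each page in the first free frame found -- frames need not be contiguous."""
    table = []
    for page in range(num_pages):
        frame = frames.index(None)    # first free frame; raises ValueError if memory is full
        frames[frame] = (process, page)
        table.append(frame)
    page_tables[process] = table

load("P4", 6)                         # P4's six pages land in frames 0-2 and 6-8
print(page_tables["P4"])              # [0, 1, 2, 6, 7, 8]
```

Reading P4’s pages back in order is then just a matter of following its page table entry by entry.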
Step-By-Step Process
We’ve covered the main principle behind paging. Now, we’re going to describe the steps involved.
Firstly, the system creates a page table. This keeps track of how the pages in logical memory map to the frames in physical memory. Because no pages have been loaded yet, we initialize the table with default values that mark every entry as invalid.
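As a rough idea of what such a table might look like (the structure below is our own simplified illustration, not a real kernel data structure), every entry starts out empty and invalid:

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    frame: int | None = None   # which physical frame holds the page, if any
    valid: bool = False        # False means the page is not currently in physical memory

NUM_PAGES = 8                  # size of this process's logical address space, in pages
page_table = [PageTableEntry() for _ in range(NUM_PAGES)]   # all entries start out invalid
```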
Secondly, when the CPU accesses memory, it generates a logical address for the page in question. The CPU then checks the Translation Lookaside Buffer (TLB), a small cache that stores recently used page table entries, hoping to find the entry that corresponds to the requested page. If the entry isn’t in the TLB, the system falls back to the page table itself; and if the page turns out not to be in physical memory at all, the operating system handles the resulting “page fault” by identifying the missing page and loading it in from secondary memory.
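Sticking with the sketches above (and reusing their PAGE_SIZE and page_table), the lookup order might look roughly like this. free_frames and handle_page_fault are hypothetical stand-ins for what the operating system actually does:

```python
tlb = {}                # page number -> frame number: a tiny cache of recent translations
free_frames = [7, 8]    # frames we pretend are currently free in physical memory

def handle_page_fault(page: int) -> int:
    """Stand-in for the OS page-fault handler: grab a free frame and read the page into it
    from secondary storage (or evict a page first, as described in the next step)."""
    return free_frames.pop()

def translate(page: int, offset: int) -> int:
    """Turn a logical (page, offset) address into a physical address."""
    if page in tlb:                              # TLB hit: fast path
        return tlb[page] * PAGE_SIZE + offset
    entry = page_table[page]                     # TLB miss: consult the page table
    if not entry.valid:                          # page fault: page not in physical memory
        entry.frame = handle_page_fault(page)
        entry.valid = True
    tlb[page] = entry.frame                      # remember the translation for next time
    return entry.frame * PAGE_SIZE + offset

print(translate(2, 100))   # first access faults, loads page 2 into frame 8 -> 8292
print(translate(2, 100))   # second access hits the TLB and skips the page table entirely
```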
The operating system can also replace pages, which generally happens if the physical memory is full. We can use algorithms to determine which page to remove to make space. These usually operate on either the First-In-First-Out (FIFO) or Least Recently Used (LRU) principles.
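A toy Least Recently Used policy is easy to sketch with Python’s OrderedDict. This is an illustration of the principle rather than how a real kernel implements replacement; dropping the move_to_end refresh (and still evicting from the front) would turn it into FIFO instead.

```python
from collections import OrderedDict

class LRUMemory:
    """Toy page-replacement sketch: physical memory holds at most `capacity` pages,
    and when it is full the least recently used page is evicted to make room."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.resident = OrderedDict()          # page number -> contents, most recently used last

    def access(self, page: int) -> None:
        if page in self.resident:
            self.resident.move_to_end(page)    # a hit just refreshes the page's recency
            return
        if len(self.resident) >= self.capacity:
            evicted, _ = self.resident.popitem(last=False)   # evict the least recently used page
            print(f"evicting page {evicted}")
        self.resident[page] = f"data for page {page}"        # "load" the faulting page

mem = LRUMemory(capacity=3)
for page in [1, 2, 3, 1, 4]:   # page 2 is the least recently used by the time 4 arrives
    mem.access(page)           # -> prints "evicting page 2"
```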
Lastly, we update the TLB to reflect these recent changes, and we have working access to the pages we want.
Pros and Cons of Paging
Paging can be extremely useful, but no process is without its potential faults. Therefore, we’re examining the advantages and drawbacks of paging next, summarized in this handy table.
| Pros | Cons |
| --- | --- |
| Makes memory use more efficient: memory is allocated in fixed-size blocks, so wasted space (internal fragmentation) is limited to part of a process’s last page. | Memory access takes longer, as paging introduces a level of indirection. |
| Avoids external fragmentation, as pages don’t need to be stored contiguously. | Can run into page faults, where a requested page isn’t in physical memory and must be fetched, which can impair performance. |
| Memory is more flexible, since unused pages can be temporarily moved to secondary storage if the physical memory becomes full. | Page table management can make the system significantly more complex. |
| Uses virtual memory to support large logical address spaces, so programs that require a lot of memory aren’t limited by the physical size of the memory. | The page tables themselves consume additional memory. |
| By assigning each process its own page table, memory is better protected, since processes can’t access memory outside of their allocated frames. | Excessive paging can result in thrashing, where system performance collapses because pages are fetched too frequently as memory demand exceeds physical capacity. |
| Access to a page is fast once it has been moved into physical memory, speeding up memory usage. | TLB misses also hinder performance, as the page table must then be consulted in memory. |
Wrapping Up
Overall, paging in programming is a useful technique for optimizing performance when managed correctly. What’s more, by separating logical memory into fixed-size blocks, we can dynamically allocate and deallocate memory at will, depending on the processes we want to execute. Be sure to configure your page size and replacement algorithms properly, and monitor your system consistently so you can catch thrashing as and when it arises. As long as you handle errors and faults promptly and logically, paging remains an effective way to manage your system’s memory.