Understanding Deadlock in OS, With Examples



Key Points

  • Deadlocks occur when processes can’t proceed due to a lack of access to necessary resources, impacting system performance and stability.
  • Four conditions, known as Coffman conditions, must be present for a deadlock to occur.
  • Deadlocks can be managed through prevention, detection, and recovery strategies.
  • Prevention strategies include using Banker’s algorithm to check resource availability and eliminating conditions necessary for deadlocks.

When you work in any field involving computers, the last thing you want is for things to come to a complete standstill. Resource allocation and management are crucial to system performance, and something we must always pay attention to. But when systems struggle to share resources effectively, we can run into a problem known as a deadlock. In this article, we're going to explain what deadlocks are, how they occur, and what you can do to prevent and resolve them.

What is a Deadlock in OS?

Deadlocks happen when processes can't proceed because they don't have access to the resources they need to operate. You can think of a deadlock as two buses meeting head-on on a single-lane road. Both buses are trying to complete the same process, i.e., traveling down the road, and both require the same resource, i.e., the stretch of lane ahead. Neither can move forward, because the lane each one needs is occupied by the other, and neither can back up. Of course, when the system can't execute vital processes, this severely impacts performance and any task we're trying to carry out. Even worse, if critical processes can't happen, the system's stability and security are threatened. Therefore, it's imperative that we understand how deadlocks occur and what we can do to fix them.

You can visualize the deadlock situation by considering the following graphic. We have two processes: Process 1 and Process 2. Process 1 is holding Resource 1 but needs access to Resource 2 to complete. Meanwhile, Process 2 is holding Resource 2 but requires access to Resource 1. Each process depends on the resource the other holds to finish executing, and neither can release its own resource in the meantime.

[Graphic: two processes, each holding one resource while requesting the resource held by the other.]
When processes are holding and competing for resources, they become deadlocked and can't execute.


How do Deadlocks Happen?

Four conditions must be present for a deadlock to occur. These are known as Coffman conditions because Edward Coffman and his colleagues first described them in 1971. These conditions are:

  • Mutual exclusion – at least one required resource must be non-shareable. Otherwise, both processes could simply access it at the same time and there would be no problem.
  • Resource holding – one process is holding at least one resource while requesting another, which is held by a different process.
  • Absence of preemption – the required resource can only be released voluntarily by the process holding it (preemption is when the system can interrupt a task and forcibly reclaim its resources).
  • Circular wait – it's not enough for a resource to be non-shareable and for a process to need another process' resource. Each process in the chain must be waiting on a resource held by the next, with the last waiting on the first, forming a circular dependency.

How Can We Deal With Deadlocks?

Generally, we can manage deadlocks through a combination of prevention, detection, and recovery. If we can’t preemptively stop the deadlock from occurring, then we can implement strategies to try and resolve it. We’ll explore each of these methods in turn.


Deadlock prevention

We can think of prevention as a predictive strategy, as we try to make sure we know exactly what resources a process requires before initiating it. Methods such as Banker's algorithm (which was actually developed by Dijkstra) check resource availability under a few assumptions: the system has a finite pool of resources, multiple processes compete for them, and each process declares its maximum resource needs in advance. We can then simulate the allocation and determine whether granting a request would leave the system in a safe state or risk a deadlock.
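As a rough illustration rather than a full implementation, the safety check at the heart of Banker's algorithm can be sketched in Python like this; the function name and data layout are our own choices:

```python
def is_safe(available, max_need, allocated):
    """Banker's safety check: True if some ordering lets every process
    finish given the resources currently available."""
    n = len(max_need)  # number of processes
    # Each process may still request up to (max claim - already allocated).
    need = [[m - a for m, a in zip(max_need[i], allocated[i])] for i in range(n)]
    work = list(available)        # resources free right now
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Process i can run to completion, then releases its holdings.
                for j, a in enumerate(allocated[i]):
                    work[j] += a
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)  # safe only if every process could finish

# One resource type with 3 units free; three processes with maximum claims
# of 7, 5, and 9 units, currently holding 0, 2, and 5 units respectively.
print(is_safe([3], [[7], [5], [9]], [[0], [2], [5]]))  # → True
```

Here the state is safe because the second process can finish with the 3 free units, releasing enough for the third and then the first; with only 1 free unit, the same check would report an unsafe state.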

Eliminating conditions

Another way to help prevent deadlocks is by mitigating the conditions necessary for their occurrence. Strategies to deal with each condition are listed below.

  • Mutual exclusion – allow processes to share resources where possible, e.g., by using buffering or spooling to minimize competition, or by using read and write locks.
  • Resource holding – require processes to declare all of their resource requirements up front (as Banker's algorithm does), and stop processes from executing until they have acquired all required resources (two-phase locking).
  • Absence of preemption – assign priorities to processes so resources can be preempted, and employ rollback, i.e., allow the system to revert to a safe state and reallocate resources.
  • Circular wait – impose a fixed order of resource allocation, so that processes can only request resources in that order.
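The circular-wait strategy in particular lends itself to a short sketch. In this hypothetical Python example (all names are our own), a helper acquires locks in a fixed global rank order, so even threads that request the locks in opposite orders can never form a cycle:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
# Assign each lock a fixed global rank; all threads must acquire in rank order.
lock_order = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    """Acquire the given locks sorted by global rank, breaking circular wait."""
    ordered = sorted(locks, key=lambda l: lock_order[id(l)])
    for l in ordered:
        l.acquire()
    return ordered

def release_all(locks):
    for l in reversed(locks):
        l.release()

result = []
def task(name, *locks):
    held = acquire_in_order(*locks)
    result.append(name)
    release_all(held)

# The two threads ask for the locks in opposite textual order, but the
# helper normalizes the order, so no cycle can form and both complete.
t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("t2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(result))  # → ['t1', 't2']
```

Without the ordering helper, this acquisition pattern is exactly the one that produced the deadlock earlier in the article.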


Ignoring deadlocks

This method may sound like we're avoiding the entire problem altogether, and that is partly true. Some operating systems, such as Linux and Windows, use this principle: since deadlocks occur very rarely in practice, these systems essentially ignore the possibility and only deal with a deadlock when it happens, usually by rebooting the system. We can refer to this as the "Ostrich algorithm". It isn't actually a specific algorithm as such, but is so named for ostriches' supposed tendency to bury their heads in the sand, i.e., ignore problems. This approach to deadlock resolution isn't ideal, as ignoring the problem can have severe consequences when a deadlock does occur.


Deadlock recovery

If we haven't been able to prevent a deadlock, then we must attempt to return our system to a safe state. We can do this by terminating all deadlocked processes at once, or by terminating them one at a time until the deadlock resolves. Alternatively, we can preempt resources at this stage, assigning them to urgent processes until we resolve the deadlock.
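As a toy illustration of sequential termination, the sketch below (all names are our own) models waiting processes as a wait-for graph, where each process points at the single process it is waiting on, and terminates one victim at a time until no cycle remains:

```python
def find_cycle(wait_for):
    """Return the set of processes on a cycle in the wait-for graph, or None.
    The graph maps each waiting process to the one process it waits on."""
    for start in wait_for:
        seen, node = [], start
        while node in wait_for and node not in seen:
            seen.append(node)
            node = wait_for[node]
        if node in seen:                       # we walked back into our path
            return set(seen[seen.index(node):])
    return None

def recover(wait_for):
    """Terminate one deadlocked process at a time until no cycle remains;
    returns the list of terminated victims."""
    victims = []
    while (cycle := find_cycle(wait_for)) is not None:
        victim = sorted(cycle)[0]              # pick a victim, e.g. lowest id
        victims.append(victim)
        # Terminating the victim removes its wait edge and unblocks any
        # process that was waiting on the resources it held.
        wait_for = {p: q for p, q in wait_for.items()
                    if p != victim and q != victim}
    return victims

# P1 and P2 wait on each other (a deadlock); P3 waits on P1.
print(recover({"P1": "P2", "P2": "P1", "P3": "P1"}))  # → ['P1']
```

Terminating a single process on the cycle is enough here: once P1 is gone, P2 and P3 are no longer blocked and the graph is cycle-free.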

Understanding Deadlock in OS: Wrapping Up

To summarize, deadlocks in OS can occur when processes are holding one resource while competing for access to another. Specifically, four conditions must be present for deadlocks to happen: a process holding one resource while waiting for another, a circular dependency between processes, a required resource being non-shareable, and an inability to preempt resources from the involved processes. Being able to effectively detect, prevent, and resolve deadlocks is crucial to maintaining system performance and stability. Techniques include using Banker's algorithm, read and write locks, two-phase locks, ordered resource allocation, and process prioritization.

As technology continues to progress and become more efficient, it’s likely we’ll see more advanced and improved strategies for dealing with deadlocks. These may involve technologies such as blockchain technology, quantum computing, and machine learning.

Frequently Asked Questions

What is a deadlock in OS?

A deadlock in the operating system means that several processes are stalled because they require access to a resource being held by another process. This leads to a stalemate, where no processes can execute.

What conditions must be present for deadlocks to occur?

For deadlocks to occur, four conditions must be present, known as Coffman conditions. These are a lack of preemption (processes can’t be paused), resource holding (processes are holding resources that another needs), mutual exclusion (i.e. non-shareable resources), and circular wait (a circular dependency of each process on another).

What problems can deadlocks cause?

Deadlocks can lead to vastly reduced system performance, as well as cause major issues with stability and security. They can even result in data corruption or loss.

How can we detect deadlocks?

Deadlock detection typically relies on resource allocation (or wait-for) graphs: a cycle in the graph signals a deadlock. Banker's algorithm, by contrast, is used for avoidance rather than detection; it simulates allocations in advance to keep the system in a safe state.

How can deadlocks be prevented?

Aside from simply ignoring the possibility of a deadlock, i.e., the “Ostrich algorithm”, we can prevent deadlocks by using several techniques. These include process prioritization, using two-phase or read/write locks, and ordering resource allocation.

What strategies can be used to resolve deadlocks?

To resolve a deadlock that’s already occurred, we can reboot the system, which usually solves the issue. Alternatively, we can implement preemption, where we take resources from one process so another can complete. We can also terminate all processes or terminate them sequentially until the deadlock resolves. Another way is to roll back the system to a safe state and then allocate resources in a different manner.

What's the difference between a deadlock and starvation?

Deadlocks and starvation are related but distinct concepts. All deadlocks can be considered starvation, but not all cases of starvation are deadlocks. Starvation is a circumstance where lower-priority processes are prevented from executing because higher-priority processes keep using the resources. Starved processes don't stall forever like deadlocked ones; they finish late, and the resources in question are still being used rather than being completely blocked.
