Key Points
- Deadlocks occur when processes can’t proceed due to a lack of access to necessary resources, impacting system performance and stability.
- Four conditions, known as Coffman conditions, must be present for a deadlock to occur.
- Deadlocks can be managed through prevention, detection, and recovery strategies.
- Prevention strategies include using Banker’s algorithm to check resource availability and eliminating conditions necessary for deadlocks.
When you work in any field involving computers, the last thing you want is for things to come to a complete standstill. Resource allocation and management are crucial to system performance, and something we must always pay attention to. But when systems struggle to share resources effectively, we can run into a problem known as a deadlock. In this article, we’re going to explain what deadlocks are, how they occur, and what you can do to prevent and resolve them.
What is a Deadlock in OS?
Deadlocks happen when processes can’t proceed because they don’t have access to the resources they need to operate. You can think of a deadlock as two buses meeting head-on in a single bus lane. Both buses are trying to complete the same process, i.e., traveling down the road. Both also require the same resource, i.e., the bus lane. But neither can move past the other without a collision, and neither will give way, so both sit there indefinitely. Of course, when the system can’t execute vital processes, this severely impacts performance and any task we’re trying to carry out. Even worse, if critical processes can’t happen, the system’s stability and security are threatened. Therefore, it’s imperative that we understand how deadlocks occur and what we can do to fix them.
You can visualize a deadlock by considering the following graphic. We have two processes: Process 1 and Process 2. Process 1 is holding Resource 1, but it needs access to Resource 2 to complete. Process 2, meanwhile, is holding Resource 2 but requires access to Resource 1. Each process depends on the resource the other holds to finish executing, and neither will release its own resource in the meantime.

©History-Computer.com
How Do Deadlocks Happen?
Four conditions must be present for a deadlock to occur. These are known as the Coffman conditions because Edward Coffman and his co-authors first described them in 1971. These conditions are:
- Mutual exclusion – this means that at least one required resource must be non-shareable. Otherwise, we would have no problem with both processes accessing it.
- Resource holding – this refers to when one process is holding at least one resource and needs another, which is held by another process.
- Absence of preemption – this condition arises when a resource can only be released voluntarily by the process holding it; the system cannot forcibly take it back (preemption is where a task or resource can be interrupted or reclaimed when needed).
- Circular wait – it’s not enough for a resource to be non-shareable and for one process to need another process’ resource. The waiting processes must form a closed chain in which each one waits for a resource held by the next, indicating a circular dependency (the short sketch after this list shows all four conditions at work).
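To make these conditions concrete, here is a minimal Python sketch of a deadlock between two threads. The thread and lock names are our own, purely illustrative choices, and whether the program actually hangs on a given run depends on timing.

```python
import threading

# Two locks stand in for two non-shareable resources (mutual exclusion).
lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:       # holds resource A...
        with lock_b:   # ...while waiting for resource B (resource holding)
            print("worker_1 finished")

def worker_2():
    with lock_b:       # holds resource B...
        with lock_a:   # ...while waiting for resource A (circular wait)
            print("worker_2 finished")

t1 = threading.Thread(target=worker_1)
t2 = threading.Thread(target=worker_2)
t1.start()
t2.start()
# If each thread grabs its first lock before either grabs its second, neither
# can proceed: locks can't be shared, neither thread lets go of what it holds,
# Python won't forcibly take a lock away, and each waits on the other forever.
```

Acquiring the locks in the same order in both workers would remove the circular wait and, with it, the deadlock.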
How Can We Deal With Deadlocks?
Generally, we can manage deadlocks through a combination of prevention, detection, and recovery. If we can’t preemptively stop the deadlock from occurring, then we can implement strategies to try and resolve it. We’ll explore each of these methods in turn.
Prevention
We can think of prevention as a kind of predictive strategy: we try to make sure we know exactly what resources each process will require before initiating it. We use methods such as Banker’s algorithm (which was actually developed by Dijkstra, and is strictly speaking a deadlock-avoidance technique) to check resource availability before granting requests. The algorithm works on the basis that we have a finite pool of resources required by multiple processes, and that we know in advance the maximum number of resources each process may need. We can then simulate granting the requests and check whether a given allocation of resources could lead to a deadlock or not.
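As a rough illustration, here is a simplified sketch of the safety check at the heart of Banker’s algorithm, assuming a single resource type (the real algorithm tracks a vector per resource type, and the function name and data layout here are our own):

```python
def is_safe(available, max_need, allocated):
    """Return True if every process can run to completion in some order."""
    need = [m - a for m, a in zip(max_need, allocated)]  # what each process still wants
    finished = [False] * len(max_need)
    free = available

    made_progress = True
    while made_progress:
        made_progress = False
        for i, done in enumerate(finished):
            # A process can finish if its remaining need fits in what's free;
            # once it finishes, it hands all of its resources back.
            if not done and need[i] <= free:
                free += allocated[i]
                finished[i] = True
                made_progress = True
    return all(finished)

# Example: 3 units free; three processes may need up to 7, 4, and 9 units
# and currently hold 3, 2, and 2 units respectively.
print(is_safe(available=3, max_need=[7, 4, 9], allocated=[3, 2, 2]))  # True: a safe order exists
```

If the check reports an unsafe state, the system would simply delay the request that caused it rather than grant it.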
Eliminating Conditions
Another way to help prevent deadlocks is by mitigating the conditions necessary for their occurrence. Strategies to deal with each condition are listed in the table.
| Condition | Strategy |
|---|---|
| Mutual exclusion | Allow processes to share resources where possible, e.g., by using buffering or spooling to minimize competition, or by using read and write locks. |
| Resource holding | Require processes to declare all of their resource requirements up front, and check requests with Banker’s algorithm. Alternatively, stop processes from executing until they have acquired every resource they need (two-phase locking). |
| Absence of preemption | Assign priorities to processes, so that resources can be preempted from lower-priority ones. Employ rollback, i.e., allow the system to revert to a safe state and reallocate resources. |
| Circular wait | Impose a fixed order on resources, so that processes can only request them in that order (see the sketch after this table). |
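As one way to picture the last row, here is a small Python sketch of resource ordering. The ranking table and the helper acquire_in_order are our own, hypothetical names; the point is simply that every process takes locks in the same global order, so a cycle of waits can never form.

```python
import threading

# Each resource (lock) gets a fixed, globally agreed rank.
lock_a = threading.Lock()
lock_b = threading.Lock()
RANK = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    """Acquire locks in ascending rank order, which rules out circular wait."""
    ordered = sorted(locks, key=lambda lock: RANK[id(lock)])
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in reversed(locks):
        lock.release()

def worker(name):
    # Even if a worker asks for the locks "backwards", they are still taken
    # in rank order, so no two workers can each hold the other's next lock.
    held = acquire_in_order(lock_b, lock_a)
    try:
        print(f"{name} is using both resources")
    finally:
        release_all(held)

threads = [threading.Thread(target=worker, args=(f"worker-{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```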
Ignorance
This method may sound like we’re just avoiding the entire problem altogether. And that is partly true. Some operating systems, such as Linux and Windows, use this principle. Since deadlocks generally occur very rarely, these systems essentially ignore the possibility and only deal with a deadlock when it happens, usually by rebooting the system. We can refer to this as the “Ostrich algorithm”. This isn’t actually a specific algorithm as such, but is so named after ostriches’ reputed tendency to bury their heads in the sand, i.e., ignore problems. This approach to deadlock resolution isn’t ideal, as we can face severe consequences when no mitigation strategies are in place.
Recovery
If we haven’t been able to prevent a deadlock, then we must attempt to return the system to a safe state. We can do this by terminating all deadlocked processes at once, or by terminating them one at a time until the deadlock resolves. Alternatively, we can prioritize resources at this stage, preempting them and assigning them to the most urgent processes until the deadlock is resolved.
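Recovery presupposes that we can spot the deadlocked processes in the first place. A common way to do that is to look for a cycle in a wait-for graph, where an edge P → Q means process P is waiting for a resource held by process Q. Here is a minimal sketch, assuming for simplicity that each process waits on at most one other process (the representation and names are our own, not any particular OS’s API):

```python
def find_deadlock_cycle(wait_for):
    """Return the list of processes forming a cycle in the wait-for graph, or None.

    wait_for maps each waiting process to the process it is waiting on.
    """
    for start in wait_for:
        seen = []
        current = start
        # Follow the chain of waits until it ends or revisits a process.
        while current in wait_for and current not in seen:
            seen.append(current)
            current = wait_for[current]
        if current in seen:
            return seen[seen.index(current):]  # the deadlocked cycle
    return None

# Example: P1 waits on P2 and P2 waits on P1 (a deadlock); P3 waits on P1 but
# is not part of the cycle itself.
graph = {"P1": "P2", "P2": "P1", "P3": "P1"}
print(find_deadlock_cycle(graph))  # ['P1', 'P2']
```

Once the cycle is known, the system can terminate or roll back its members one at a time, re-running the check after each victim, until no cycle remains.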
Understanding Deadlock in OS: Wrapping Up
To summarize, deadlocks in OS can occur when processes are holding a resource and competing for access to another resource. Specifically, certain conditions must be present for deadlocks to happen: a non-shareable resource being required, a process holding one resource while waiting on another, an inability to forcibly take resources away from the involved processes, and a circular dependency on resources. Being able to effectively prevent, detect, and resolve deadlocks is crucial to maintaining system performance and stability. Techniques include using Banker’s algorithm, using read and write locks, using two-phase locking, ordering resource allocation, and prioritizing processes.
As technology continues to progress and become more efficient, it’s likely we’ll see more advanced and improved strategies for dealing with deadlocks. These may involve technologies such as blockchain, quantum computing, and machine learning.
The image featured at the top of this post is ©iStock.com/Guillaume.