Failure of Ostrich Algorithms
#Ostrichalgorithms #Deadlock #os #operatingsystems #circularfashion #blockchain #iot #machinelearning
What is the Ostrich Algorithm?
The "Ostrich Algorithm" takes its name from the widespread (and false) belief that an ostrich hides from danger by burying its head in the sand. In operating systems, this approach is frequently used to deal with deadlock: the system simply ignores the problem. Deadlocks can arise when processes are given exclusive access to resources such as devices and files. We will cover deadlock in depth first, then the Ostrich Algorithm.
What is a Deadlock?
A set of processes is said to be deadlocked when each process in the set is waiting for an event that only another process in the set can cause.
In an operating system, a deadlock occurs when a process enters a waiting state because a resource it has requested (e.g., a printer, a table in a database, a tape drive) is held by another waiting process. Deadlock is a common problem when many processes share mutually exclusive resources.
The event each process is waiting for is usually the release of a resource currently held by another member of the set. In other words, every process in the set of stalled processes is waiting for a resource that another stalled process owns.
The following four conditions must all hold simultaneously for a deadlock to arise; if any one of them fails, a resource deadlock cannot occur:
1. Mutual exclusion: each resource is either assigned to exactly one process or is available.
2. Hold and wait: a process holding resources may request additional ones.
3. No preemption: resources cannot be forcibly taken away from the process holding them.
4. Circular wait: there is a circular chain of processes, each waiting for a resource held by the next.
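The circular-wait condition can be made concrete by modelling which process waits on which as a wait-for graph: a cycle in that graph means deadlock. The sketch below is illustrative only; the process names and the graphs are invented for the example.

```python
# Hypothetical sketch: circular wait as a wait-for graph, detected
# with a depth-first search for a back edge (a cycle).

def has_cycle(wait_for):
    """Return True if the wait-for graph contains a cycle (deadlock)."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:
                return True          # back edge: circular wait
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

# P1 waits on P2, P2 waits on P3, P3 waits on P1 -> circular wait.
deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
# P1 waits on P2, P2 waits on P3, P3 waits on nothing -> no cycle.
safe = {"P1": ["P2"], "P2": ["P3"]}

print(has_cycle(deadlocked))  # True
print(has_cycle(safe))        # False
```

Real deadlock detectors in operating systems and databases use essentially this check, run over the kernel's or lock manager's actual wait-for information.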
How can the Ostrich Algorithm be used?
This algorithm is a popular technique for ignoring issues: take no action and hope the problem occurs rarely. It takes its name from the ostrich, which is popularly said to stick its head in the sand and ignore everything going on around it.
Different people react to this tactic very differently. If deadlocks occur on average once every five years, while system crashes caused by hardware failures, compiler errors, and operating-system bugs occur once a week, most engineers would not be willing to pay a significant penalty in performance or convenience to prevent them.
Most operating systems potentially suffer from deadlocks that are not even detected, let alone automatically broken. The total number of processes in a system is determined by the number of entries in the process table; the slots in this table, which the operating system maintains for scheduling and context switching, are a finite resource. If a fork (the call that creates a new process from an existing one) fails because the table is full, a reasonable course of action for the forking program is to wait a random amount of time and try again.
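The "wait a random time and retry" strategy just described can be sketched as follows. Here `fork_once` is a stand-in for the real fork call, assumed to return `None` while the process table is full; the names and retry limits are illustrative, not part of any real API.

```python
# Sketch of randomized retry after a failed fork (assumed interface).
import random
import time

def fork_with_retry(fork_once, max_attempts=5, max_wait=0.01):
    """Retry a failing fork with a random back-off between attempts."""
    for _ in range(max_attempts):
        pid = fork_once()
        if pid is not None:                      # fork succeeded
            return pid
        time.sleep(random.uniform(0, max_wait))  # table full: back off
    raise RuntimeError("process table still full after retries")

# Simulate a table that frees a slot after two failed attempts.
attempts = {"n": 0}
def fake_fork():
    attempts["n"] += 1
    return 1234 if attempts["n"] > 2 else None

print(fork_with_retry(fake_fork))  # 1234
```

The random delay matters: if every blocked process retried on a fixed schedule, they could all collide again at the same instant, which is exactly the failure mode in the UNIX example below.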
Suppose a UNIX system has 100 process-table slots. Ten programs are running, and each needs to create 12 (sub)processes. After each process has created 9 children, the 10 original processes and the 90 new ones exhaust the table. The 10 original processes are now stuck in an endless loop of forking and failing, which is a deadlock.
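The arithmetic of this example is easy to check with a toy simulation. All the numbers (100 slots, 10 originals, 12 wanted children) come from the text; the round-robin forking order is an assumption made for the sketch.

```python
# Toy re-creation of the process-table exhaustion example.
SLOTS = 100            # total process-table slots
ORIGINALS = 10         # processes running at the start
WANTED_CHILDREN = 12   # children each original wants to fork

used = ORIGINALS                 # the originals already occupy 10 slots
children = [0] * ORIGINALS       # children created so far, per process

# Round-robin forking: each process creates one child per round
# until the table fills up or everyone is done.
progress = True
while progress:
    progress = False
    for i in range(ORIGINALS):
        if children[i] < WANTED_CHILDREN and used < SLOTS:
            children[i] += 1
            used += 1
            progress = True

print(used)       # 100: the table is full
print(children)   # every process stuck at 9 children, short of 12
```

Every process ends up with exactly 9 of its 12 desired children and the table is full, so all 10 originals fork, fail, and retry forever.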
Even though the chance of this happening is very slim, it is possible. So, to fix the problem, should we stop using processes and fork calls? The number of open files is similarly limited by the size of the i-node table, so a comparable issue arises when it fills up. Swap space on the disk is another finite resource. In fact, every operating-system table represents a finite resource. Should we abolish all of these because a group of n processes might each claim 1/n of the total and then try to claim one more?
Most operating systems, including UNIX and Windows, simply ignore the problem, on the assumption that most users would prefer an occasional deadlock to a rule restricting every user to one process, one open file, and one of everything. If deadlocks could be eliminated for free, there would be little debate. The problem is that the price is high, mostly in the form of inconvenient restrictions on processes. We are thus faced with an unpleasant trade-off between convenience and correctness, and it is difficult to find general solutions here.
The Ostrich Algorithm can be used when:
1. Deadlocks occur very rarely (for example, once every few years).
2. The cost of prevention, avoidance, or detection is high in terms of performance or convenience.
3. Users would rather tolerate an occasional deadlock than accept permanent restrictions on resource use.