Monday, 8 October 2012

OS Interview Questions-8


1. What is starvation and aging?
Ans :
Starvation: Starvation is a resource management problem where a process does not get the resources it needs for a long time because the resources are being allocated to other processes.
Aging: Aging is a technique to avoid starvation in a scheduling system. It works by adding an aging factor to the priority of each request. The aging factor must increase the request's priority as time passes and must ensure that a request will eventually become the highest-priority request (after it has waited long enough).
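For illustration, here is a minimal sketch of aging, assuming a priority scheduler in which a smaller number means higher priority; the process record, field names, and the AGING_INTERVAL constant are hypothetical choices, not part of any particular OS:

```c
#include <stdio.h>

#define NPROC 3
#define AGING_INTERVAL 10   /* every 10 ticks of waiting, boost priority one step */

/* Hypothetical process record: priority plus time spent waiting.
   Assumes a smaller number means higher priority. */
struct proc {
    int priority;
    int wait_ticks;
};

/* Aging pass: a long-waiting process slowly climbs in priority,
   so it eventually becomes the highest-priority request. */
void age_waiting_processes(struct proc p[], int n)
{
    for (int i = 0; i < n; i++) {
        p[i].wait_ticks++;
        if (p[i].wait_ticks % AGING_INTERVAL == 0 && p[i].priority > 0)
            p[i].priority--;                 /* raise priority (smaller = higher) */
    }
}

int main(void)
{
    struct proc p[NPROC] = { {5, 0}, {20, 0}, {1, 0} };
    for (int tick = 0; tick < 100; tick++)
        age_waiting_processes(p, NPROC);
    for (int i = 0; i < NPROC; i++)
        printf("proc %d: priority %d after waiting\n", i, p[i].priority);
    return 0;
}
```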

2. Different types of Real-Time Scheduling?
Ans :Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
Soft real-time computing – requires that critical processes receive priority over less fortunate ones.

3. What are the Methods for Handling Deadlocks?
Ans :
->Ensure that the system will never enter a deadlock state.
->Allow the system to enter a deadlock state and then recover.
->Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX.

4. What is a Safe State and its use in deadlock avoidance?
Ans :When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.
->System is in safe state if there exists a safe sequence of all processes.
->A sequence <P1, P2, ..., Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i.
If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate.
When Pi terminates, Pi+1 can obtain its needed resources, and so on.
->Deadlock avoidance: ensure that a system will never enter an unsafe state.
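The safe-state test is the safety algorithm used by the banker's algorithm. Below is a sketch of that check; the Available, Allocation, and Need matrices are example numbers for illustration only:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define P 5   /* number of processes (example values only) */
#define R 3   /* number of resource types */

/* Safety algorithm: the state is safe if some ordering exists in which
   every process can obtain its remaining need from Work (the Available
   vector plus whatever processes that finish earlier release). */
bool is_safe(const int avail[R], int need[P][R], int alloc[P][R])
{
    int  work[R];
    bool finish[P] = { false };
    memcpy(work, avail, sizeof(work));

    int done = 0;
    while (done < P) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_finish = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];  /* Pi finishes, returns its resources */
                finish[i] = true;
                done++;
                progressed = true;
            }
        }
        if (!progressed)
            return false;                    /* no process can finish: unsafe state */
    }
    return true;
}

int main(void)
{
    int avail[R]    = { 3, 3, 2 };
    int alloc[P][R] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
    int need[P][R]  = { {7,4,3}, {1,2,2}, {6,0,0}, {0,1,1}, {4,3,1} };

    printf("state is %s\n", is_safe(avail, need, alloc) ? "safe" : "unsafe");
    return 0;
}
```

With these example numbers the sequence <P1, P3, P4, P0, P2> satisfies the safety criteria, so the state is reported as safe.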

5. Recovery from Deadlock?
Ans :Process Termination:
->Abort all deadlocked processes.
->Abort one process at a time until the deadlock cycle is eliminated.
->In which order should we choose to abort?
Priority of the process.
How long process has computed, and how much longer to completion.
Resources the process has used.
Resources process needs to complete.
How many processes will need to be terminated?
Is process interactive or batch?
Resource Preemption:
->Selecting a victim – minimize cost.
->Rollback – return to some safe state, restart process for that state.
->Starvation – the same process may always be picked as the victim; include the number of rollbacks in the cost factor.
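As a rough sketch of victim selection, the cost function below is hypothetical: it combines work lost, resources held, and how often a process has already been rolled back, so that repeated victims become progressively more expensive to pick:

```c
#include <stdio.h>

#define NPROC 4

/* Hypothetical per-process bookkeeping used to pick a preemption victim. */
struct victim_info {
    int cpu_time_used;     /* work lost if we roll this process back   */
    int resources_held;    /* resources that must be reclaimed         */
    int rollbacks;         /* times already chosen as victim           */
};

/* Illustrative cost: penalize previous rollbacks so one process
   is not starved by being chosen every time. */
int cost(const struct victim_info *v)
{
    return v->cpu_time_used + v->resources_held + 10 * v->rollbacks;
}

int pick_victim(struct victim_info procs[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (cost(&procs[i]) < cost(&procs[best]))
            best = i;
    return best;    /* cheapest process to preempt and roll back */
}

int main(void)
{
    struct victim_info procs[NPROC] = {
        { 50, 3, 0 }, { 10, 1, 2 }, { 5, 2, 0 }, { 40, 4, 1 },
    };
    printf("preempt process %d\n", pick_victim(procs, NPROC));
    return 0;
}
```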

6.Difference between Logical and Physical Address Space?
Ans :
->The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
Logical address – generated by the CPU; also referred to as virtual address.
Physical address – address seen by the memory unit.
->Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in execution-time address-binding scheme

7. Binding of Instructions and Data to Memory?
Ans :Address binding of instructions and data to memory addresses can happen at three different stages
Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location changes.
Load time: Must generate relocatable code if memory location is not known at compile time.
Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Need hardware support for address maps (e.g., base and limit registers).

8. What is Memory-Management Unit (MMU)?
Ans :A hardware device that maps virtual addresses to physical addresses.
In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
->The user program deals with logical addresses; it never sees the real physical addresses.
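A minimal software sketch of this translation, assuming a single relocation (base) register plus a limit register for the bounds check (the limit register is an assumption of this sketch, and the numbers are illustrative):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy MMU: every logical address from the user process is checked
   against the limit and then offset by the relocation value before
   it reaches physical memory. */
struct mmu {
    uint32_t relocation;   /* start of the process's partition in physical memory */
    uint32_t limit;        /* size of the partition */
};

uint32_t translate(const struct mmu *m, uint32_t logical)
{
    if (logical >= m->limit) {
        fprintf(stderr, "trap: addressing error (logical %u)\n", logical);
        exit(EXIT_FAILURE);
    }
    return m->relocation + logical;
}

int main(void)
{
    struct mmu m = { .relocation = 14000, .limit = 3000 };
    printf("logical 346 -> physical %u\n", translate(&m, 346));   /* prints 14346 */
    return 0;
}
```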

9. What are Dynamic Loading, Dynamic Linking and Overlays?
Ans :
Dynamic Loading:
->Routine is not loaded until it is called
->Better memory-space utilization; unused routine is never loaded.
->Useful when large amounts of code are needed to handle infrequently occurring cases.
->No special support from the operating system is required; dynamic loading is implemented through program design.
Dynamic Linking:
->Linking postponed until execution time.
->Small piece of code, stub, used to locate the appropriate memory-resident library routine.
->Stub replaces itself with the address of the routine, and executes the routine.
->Operating-system support is needed to check whether the routine is in the process's memory address space.
->Dynamic linking is particularly useful for libraries.
Overlays:
->Keep in memory only those instructions and data that are needed at any given time.
->Needed when process is larger than amount of memory allocated to it.
->Implemented by user, no special support needed from operating system, programming design of overlay structure is complex.
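As a concrete illustration of loading and binding a routine only when it is needed, here is a sketch using the POSIX dlopen/dlsym interface. It assumes a Linux system where the shared math library is named libm.so.6 (on older glibc, link with -ldl):

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* The math library is not linked at build time; it is opened and
       the symbol resolved only when this code actually runs. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the address of cos() in the memory-resident library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
```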

10. What is fragmentation? Different types of fragmentation?
Ans : Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.
External Fragmentation: External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.
Internal Fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
External fragmentation can be reduced by compaction:
->Shuffle memory contents to place all free memory together in one large block.
->Compaction is possible only if relocation is dynamic, and is done at execution time.
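A small sketch of internal fragmentation, assuming a hypothetical allocator that can only hand out whole 4 KB blocks; the wasted space is the unused tail of the last block:

```c
#include <stdio.h>

#define BLOCK_SIZE 4096   /* hypothetical fixed allocation unit (e.g. one frame) */

int main(void)
{
    unsigned requests[] = { 100, 4097, 9000 };
    for (int i = 0; i < 3; i++) {
        unsigned req       = requests[i];
        unsigned blocks    = (req + BLOCK_SIZE - 1) / BLOCK_SIZE;  /* round up */
        unsigned allocated = blocks * BLOCK_SIZE;
        printf("request %5u -> allocated %5u, internal fragmentation %4u bytes\n",
               req, allocated, allocated - req);
    }
    return 0;
}
```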

11. Define Demand Paging, Page Fault Interrupt, and Thrashing?
Ans :
Demand Paging: Demand paging is the paging policy under which a page is not read into memory until it is requested, that is, until there is a page fault on the page.
Page fault interrupt: A page fault interrupt occurs when a memory reference is made to a page that is not in memory. The present bit in the page table entry will be found to be off by the virtual memory hardware, and it will signal an interrupt.
Thrashing: Thrashing is the problem of many page faults occurring in a very short time, so that the system spends most of its time servicing page faults rather than doing useful work.
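To make the present-bit check concrete, here is a toy software simulation; the page size, table layout, and frame assignment are hypothetical, and in a real system the check is done by the paging hardware, not in code:

```c
#include <stdbool.h>
#include <stdio.h>

#define NPAGES    8
#define PAGE_SIZE 4096

/* Toy page-table entry: present bit plus a frame number. */
struct pte {
    bool     present;
    unsigned frame;
};

unsigned faults = 0;

unsigned access(struct pte table[], unsigned vaddr)
{
    unsigned page   = vaddr / PAGE_SIZE;
    unsigned offset = vaddr % PAGE_SIZE;

    if (!table[page].present) {
        /* Page fault: the OS would find a free frame, schedule the disk
           read, and restart the instruction. Here we just "load" it. */
        faults++;
        table[page].present = true;
        table[page].frame   = page;      /* placeholder frame assignment */
    }
    return table[page].frame * PAGE_SIZE + offset;
}

int main(void)
{
    struct pte table[NPAGES] = { { false, 0 } };
    unsigned refs[] = { 100, 5000, 120, 9000, 5100 };
    for (int i = 0; i < 5; i++)
        access(table, refs[i]);
    printf("page faults: %u\n", faults);  /* only the first touch of each page faults */
    return 0;
}
```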

12. Explain Segmentation with paging?
Ans : Segments can be of different lengths, so it is harder to find a place for a segment in memory than for a page. With segmented virtual memory, we get the benefits of virtual memory, but we still have to do dynamic storage allocation of physical memory. To avoid this, it is possible to combine segmentation and paging into a two-level virtual memory system. Each segment descriptor points to a page table for that segment. This gives some of the advantages of paging (easy placement) with some of the advantages of segments (logical division of the program).
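A sketch of the two-level translation described above; the segment table layout, sizes, and frame numbers are all hypothetical and deliberately tiny to keep the example readable:

```c
#include <stdio.h>

#define PAGE_SIZE     256
#define PAGES_PER_SEG   4
#define NSEGS           2

/* Each segment descriptor holds the segment length and points to that
   segment's page table; a logical address is (segment, offset), and the
   offset is split further into a page number and a page offset. */
struct segment {
    unsigned length;                        /* segment length in bytes */
    unsigned page_table[PAGES_PER_SEG];     /* page number -> frame number */
};

unsigned translate(struct segment segs[], unsigned seg, unsigned offset)
{
    if (seg >= NSEGS || offset >= segs[seg].length) {
        printf("trap: segment violation\n");
        return 0;
    }
    unsigned page    = offset / PAGE_SIZE;
    unsigned poffset = offset % PAGE_SIZE;
    unsigned frame   = segs[seg].page_table[page];
    return frame * PAGE_SIZE + poffset;
}

int main(void)
{
    struct segment segs[NSEGS] = {
        { .length = 1000, .page_table = { 5, 9, 2, 7 } },   /* e.g. code segment */
        { .length =  600, .page_table = { 3, 8, 0, 0 } },   /* e.g. data segment */
    };
    /* offset 300 falls in page 1 of segment 0, which maps to frame 9 */
    printf("(seg 0, offset 300) -> physical %u\n", translate(segs, 0, 300));
    return 0;
}
```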

13. Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs?
Ans : A page fault occurs when an access is made to a page that has not been brought into main memory. The operating system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed page into the free frame. Upon completion of the I/O, the process table and page table are updated and the instruction is restarted.

14. What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?
Ans :
Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to page fault continuously. The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.

