
Operating System IMP Que. (BCA sem-ii)

Bachelor Computer Application (BCA)

Operating System (OS)

Most important question banks


UNIT - 1



Q.1) What is Operating System? Explain different functions of OS.

=> OS :

➢ An operating system is a collection of programs that manages all the resources of the computer system.

➢ It acts as an intermediary between the user and the hardware.

➢ A computer system can be divided into three components:

1. The hardware

2. Systems programs

3. Application programs

❖ Function of OS :-

➢ Memory management :

The task of allocating and de-allocating memory space to programs that need this resource.

➢ Processor management :

The OS assigns the processor to the different tasks that must be performed by the computer system.

➢ Device management :

OS performs the task of allocation and de-allocation of the devices.

➢ File management :

The OS manages all file-related activities such as organization, storage, retrieval, naming, sharing, and protection of files.

➢ Security management :

The OS secures and protects the computer system internally as well as externally.

➢ Job scheduling :

It is the process of allocating system resources to many different tasks by the OS.

➢ Time sharing :

It co-ordinates and assigns compilers, assemblers, utility programs, and other software packages to the various users working on the computer system.


Q.2) Explain block diagram of computer.

=> Block diagram of a computer system:

👉 Mainly, a computer system consists of three parts: the central processing unit (CPU), input devices, and output devices. The CPU is again divided into two parts: the arithmetic logic unit (ALU) and the control unit (CU). The input given to the computer is a set of instructions and raw data.


👉 A large amount of data is stored in the computer memory with the help of primary and secondary storage devices. The CPU is like the heart/brain of the computer: without the processing done by the CPU, the user cannot get the desired output. The central processing unit (CPU) is responsible for processing all the instructions which are given by the user to the computer system.



👉 The data is entered through input devices such as the keyboard, mouse, etc. This set of instructions is processed by the CPU after getting the input from the user, and then the computer system produces the output. The computer shows the output to the user with the help of output devices, such as the monitor, printer, etc.


CPU (Central Processing Unit):-

The computer system is nothing without the central processing unit, so it is also known as the brain or heart of the computer. The CPU is an electronic hardware device which can perform different types of operations, such as arithmetic and logical operations.


The CPU contains two parts: the arithmetic logic unit and the control unit. These are briefly discussed below:


Control Unit:-

The control unit (CU) controls all the activities or operations which are performed inside the computer system. It receives instructions or information directly from the main memory of the computer.


When the control unit receives an instruction set or information, it converts the instruction set into control signals; these signals are then sent to the central processor for further processing. The control unit determines which operation to execute, accurately, and in which order.


Arithmetic and Logical Unit:-

The arithmetic and logical unit is the combinational digital electronic circuit that can perform arithmetic operations on integer binary numbers. It performs the arithmetic and logical operations. The outputs of the ALU change asynchronously in response to its inputs. The basic arithmetic and bitwise logic functions are supported by the ALU.


Storage Unit:-

The instructions and data are stored in the storage unit of the computer system. The storage unit provides the space to store the data, the instructions, and the processed data, which are held in computer memory or on a storage device. Data storage is a core and fundamental function of the computer.


Q.3)Explain the architecture of OS.

=> The core software components of an operating system are collectively known as the kernel. The kernel has unrestricted access to all of the resources on the system. In early monolithic systems, each component of the operating system was contained within the kernel, could communicate directly with any other component, and had unrestricted system access. While this made the operating system very efficient, it also meant that errors were harder to isolate, and a fault in one component could bring down the entire system.



● As operating systems became larger and more complex, this approach was largely abandoned in favour of a modular approach which grouped components with similar functionality into layers to help operating system designers to manage the complexity of the system. In this kind of architecture, each layer communicates only with the layers immediately above and below it, and lower level layers provide services to higher-level ones using an interface that hides their implementation.


● The modularity of layered operating systems allows the implementation of each layer to be modified without requiring any modification to adjacent layers. Although this modular approach imposes structure and consistency on the operating system, simplifying debugging and modification, a service request from a user process may pass through many layers of system software before it is serviced and performance compares unfavourably to that of a monolithic kernel. Also, because all layers still have unrestricted access to the system, the kernel is still susceptible to errant or malicious code. Many of today’s operating systems, including Microsoft Windows and Linux, implement some level of layering.



● A microkernel architecture includes only a very small number of services within the kernel in an attempt to keep it small and scalable. The services typically include low-level memory management, inter- process communication and basic process synchronisation to enable processes to cooperate. In microkernel designs, most operating system components, such as process management and device management, execute outside the kernel with a lower level of system access.


● Microkernels are highly modular, making them extensible, portable and scalable. Operating system components outside the kernel can fail without causing the operating system to fall over. Once again, the downside is an increased level of inter- module communication which can degrade system performance.


Q.4)What is the goal of OS ?

=> There are two types of goals of an Operating System i.e. Primary Goals and Secondary Goal.

Primary Goal: The primary goal of an Operating System is to provide a user-friendly and convenient environment. We know that it is not compulsory to use the Operating System, but things become harder when the user has to perform all the process scheduling and converting the user code into machine code is also very difficult. So, we make the use of an Operating System to act as an intermediate between us and the hardware. All you need to do is give commands to the Operating System and the Operating System will do the rest for you. So, the Operating System should be convenient to use.


Secondary Goal: The secondary goal of an Operating System is efficiency. The Operating System should perform all the management of resources in such a way that the resources are fully utilised and no resource should be held idle if some request to that resource is there at that instant of time.


Q.5)Explain different type of OS.


[A].Batch OS:-

● Batch processing requires grouping of similar jobs, which consist of programs and data. It is suitable for programs with large computation time and no need for user interaction. Initially, serial systems were used, where programs executed one after another, but this was very slow. When the computer operator gives a command to start processing a batch, the kernel sets up the processing of the first job: a job is selected from the job queue and loaded into main memory; when the job completes execution, its memory is released and the output for the job is copied. Control then returns immediately to read in the next job. The following figure shows the concept of a batch system.


Advantages:

• Processors of the batch systems are aware of the time duration of the job even when it is present in the queue.

• Batch systems can be shared by multiple users.

• There is very little idle time in a batch system.

• It enables us to manage a large load of work efficiently.


Disadvantages:

• It is very difficult to debug batch systems.

• They sometimes prove to be costly.

• If any job fails, it is difficult to predict the waiting time.


[B].Multiprogramming OS:-

⨁ In a batch system the CPU often remains idle: at any time, either the CPU or an I/O device was idle. To keep the CPU busy, more than one job must be loaded for execution; this increases CPU utilization. The OS executes multiple programs: one program gives control to another when it waits for some I/O or when it completes its execution. Here CPU scheduling is required, and memory management is also required.


Advantages :-

• No CPU idle time.

• Tasks run concurrently.

• Shorter response time.

• Maximizes total job throughput of a computer.

• Increases resource utilization.


Disadvantages:-

• Sometimes long jobs have to wait for a long time.

• Tracking all processes is sometimes difficult.

• Requires efficient memory management.

• No user interaction with any program during execution.


[C].Multitasking OS:-

Multitasking means that the computer can deal with more than one program at a time. Here CPU scheduling is required. Every process has its own PCB (Process Control Block), which stores its registers, memory information, etc. When one program gives control of the CPU to another program before completing its execution, it is called context switching.


Advantages :-

 It provides logical parallelism.

 It provides a shorter response time.

 It increases CPU utilization.


Disadvantages :

 It couldn't be executed on a slow-speed processor.

 It needs a large amount of storage memory to do the work.


[D].Time Sharing OS:-

In case of multiprogramming, only one program runs at a time, and that program gives control to another program only when it completes its task or waits. This means that if we have many processes, the last process may have to wait a very long time. To avoid this, a time sharing OS is used, where a certain time slot is allocated to every program: a program gets the CPU only for its given time slot, after which the CPU is handed over to another program. Here, most of the time, round robin scheduling is used. A time sharing OS is more complex than a multiprogramming OS.
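The round robin policy mentioned above can be sketched as a short simulation (the process names, burst times, and time quantum below are illustrative, not from the text):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round robin scheduling; returns the completion time of each process.

    bursts: dict mapping process name -> CPU burst time (all arrive at t = 0).
    """
    queue = deque(bursts)          # ready queue, FIFO order
    remaining = dict(bursts)       # CPU time still needed per process
    completion, t = {}, 0
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])   # run for at most one time slot
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = t              # process finished
        else:
            queue.append(p)                # back of the ready queue
    return completion

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# → {'P3': 5, 'P2': 8, 'P1': 9}
```

Each process gets the CPU for one quantum in turn, so no process waits for all longer jobs to finish first.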


Advantages:

• Duplication of software is less probable

• Each task is given equal importance

• The CPU idle time can be decreased


Disadvantages:-

 Problem of reliability

 Security and integrity of user data and programs must be taken care of

 There is a problem in data communication


[E].Distributed OS:-

 A distributed operating system is one that looks to its users like an ordinary centralized operating system but runs on multiple independent CPUs. The user sees the environment as a single virtual machine. Here a client-server mechanism is used: users request an OS service from the server, and the server gives a response to that request. A distributed OS allows programs to run on several processors at the same time without users being aware of this distribution. Distributed systems are more reliable than uniprocessor-based systems, and they have a performance advantage over traditional centralized systems. However, they are not simple to implement, which is a disadvantage of distributed systems.


Advantages:

• The data exchange speed is increased by using electronic mails

• All systems are entirely independent of each other.

• Failure of one system is not going to affect the other

• The resources are shared and hence the computation is very fast and speedy

• There is a reduction in load on their host computers

• Delay in processing reduces


Disadvantages:

• If the main network fails, this will stop the complete communication.

• The languages used to establish such systems are still not clearly and well defined.

• They are very expensive.

• The underlying software is highly complex.


[F].Parallel Computing:-

 Parallel computing is the use of multiple processors to solve a problem with greater speed than using a single computer.

 Parallel computing divides up tasks over multiple processors which are independent of each other. Instructions from each part execute simultaneously on different CPUs.


Advantages :-

• It saves time and money as many resources working together will reduce the time and cut potential costs.

• It can be impractical to solve larger problems on Serial Computing.

• It can take advantage of non-local resources when the local resources are finite.

• Serial Computing ‘wastes’ the potential computing power, thus Parallel Computing makes better work of the hardware.


Disadvantages:

• Communication and synchronization between multiple sub-tasks and processes is difficult to achieve.

 The algorithms must be managed in such a way that they can be handled in a parallel mechanism.

 The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.

 Only technically skilled and expert programmers can code a parallelism-based program well.


[G].Real Time OS:-

⨀ Time constraint is the key parameter in a real time operating system. It is used in air traffic control, satellites, etc. Real time systems are of two types: hard real time and soft real time. In a hard real time system, a critical task must be completed within its time limit. A soft real time system cannot guarantee that it will be able to meet deadlines under all conditions.


Advantages:

• Maximum use of devices and the system, thus giving more output from all the resources

• Time taken for shifting between tasks is very short

• It focuses on running applications and gives less importance to queued applications

• Size of programs is small

• Fewer errors

• Memory allocation is well managed


Disadvantages:-

• Only a few tasks can run at the same time

• Sometimes the system resources are not good enough, and they are costly as well

• The algorithms used are complex and difficult to write

• It requires specific device drivers

• They rarely switch between tasks


Q.6) Differentiate user level threads from kernel level threads.

=>There are two types of threads.

1. User Level Thread

2. Kernel Level Thread


User level thread

User-level threads are implemented by a thread library at the user level, and the kernel is not aware of their existence. They are fast to create and manage, and switching between them does not require kernel mode privileges. However, if one user-level thread performs a blocking system call, the entire process gets blocked.


Kernel level thread

Kernel-level threads are recognized by the operating system. There is a thread control block and a process control block in the system for each thread and process. Kernel-level threads are implemented by the operating system: the kernel knows about all the threads and manages them. The kernel offers system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads, and context switch time is longer. However, if one kernel thread performs a blocking operation, another thread of the same process can continue execution. Examples: Windows, Solaris.


Advantages of Kernel-level threads:-

• The kernel is fully aware of all threads.

• The scheduler may decide to give more CPU time to a process having a large number of threads.

• Kernel-level threads are good for applications that frequently block.


Disadvantages of Kernel-level threads:-

1. The kernel thread manages and schedules all threads.

2. The implementation of kernel threads is more difficult than that of user threads.

3. The kernel-level thread is slower than user-level threads.


Unit- 2


1. What is PCB? Explain all attributes of PCB.

 Every process is represented in the operating system by a process control block, which is also called a task control block.

Process state:

A process can be new, ready, running, waiting, etc.

Program counter:

The program counter lets you know the address of the next instruction, which should be executed for that process.

CPU registers:

This component includes accumulators, index and general-purpose registers, and information of condition code.

CPU scheduling information:

This component includes a process priority, pointers for scheduling queues, and various other scheduling parameters.

Accounting information:

It includes the amount of CPU time used, time limits, job or process numbers, etc.

Memory-management information:

This information includes the value of the base and limit registers, the page, or segment tables. This depends on the memory system, which is used by the operating system.

I/O status information:

This block includes a list of open files, the list of I/O devices that are allocated to the process, etc.
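Taken together, the attributes above can be pictured as a plain record. The sketch below is illustrative: real PCB layouts are kernel-specific, and the field names here are assumptions, not an actual kernel structure.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified process control block (illustrative field names)."""
    pid: int
    state: str = "new"            # process state: new, ready, running, waiting, terminated
    program_counter: int = 0      # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0             # CPU scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"               # the dispatcher updates state on transitions
print(pcb.pid, pcb.state)
```

On a context switch, the OS saves the running process's registers and program counter into its PCB and restores them from the next process's PCB.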


2.What is process? Explain process life cycle.

 A process is the execution of a program that performs the actions specified in that program. It can be defined as an execution unit where a program runs. The OS helps you to create, schedule, and terminate the processes used by the CPU. A process created by another process is called a child process.

 Process life cycle: during its lifetime, a process passes through the following states.

1. New: the process is being created.

2. Ready: the process is waiting to be assigned to the CPU.

3. Running: the instructions of the process are being executed.

4. Waiting: the process is waiting for some event (such as I/O completion) to occur.

5. Terminated: the process has finished execution.
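The life cycle can be sketched as a table of allowed state transitions (a simplified five-state model; real kernels add more states such as suspended):

```python
# Allowed process state transitions in the five-state model (simplified).
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},  # preempted, blocked, or exits
    "waiting": {"ready"},                           # the awaited event (e.g. I/O) completes
    "terminated": set(),
}

def can_move(src, dst):
    """Return True if a process may move directly from state src to dst."""
    return dst in TRANSITIONS.get(src, set())

print(can_move("running", "waiting"))  # a running process may block on I/O
print(can_move("waiting", "running"))  # a waiting process must become ready first
```

Note that a waiting process never goes straight back to running: it first re-enters the ready queue and competes for the CPU again.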




3. What is process scheduling? Explain scheduling algorithms with suitable example.

 A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. There are six popular process scheduling algorithms which we are going to discuss in this chapter.

 These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas the preemptive scheduling is based on priority where a scheduler may preempt a low priority running process anytime when a high priority process enters into a ready state.

Priority Based Scheduling:-

 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.

 Each process is assigned a priority. The process with the highest priority is executed first, and so on.

 Priority can be decided based on memory requirements, time requirements or any other resource requirement.




4. Difference between preemptive and non-preemptive scheduling.

=> In preemptive scheduling, the CPU can be taken away from a running process, for example when a higher priority process enters the ready state or when the running process's time slice expires. In non-preemptive scheduling, once a process enters the running state, it keeps the CPU until it terminates or switches to the waiting state. Preemptive scheduling gives shorter response times but incurs context-switching overhead; non-preemptive scheduling is simpler, but a long job can make all the other jobs wait.



5.Explain FCFS scheduling algorithm with its advantages and disadvantages(prepare all scheduling algorithm like this).

 These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas the preemptive scheduling is based on priority where a scheduler may preempt a low priority running process anytime when a high priority process enters into a ready state.

 First Come First Serve (FCFS) :-

• Jobs are executed on first come, first serve basis.

• It is a non-preemptive scheduling algorithm.

• Easy to understand and implement.

• Its implementation is based on FIFO queue.

• Poor in performance as average wait time is high.
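The FIFO behaviour above can be sketched in a few lines (the process names, arrival times, and burst times are illustrative):

```python
def fcfs(processes):
    """First Come First Serve scheduling.

    processes: list of (name, arrival_time, burst_time).
    Returns {name: waiting_time}.
    """
    waiting, t = {}, 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        t = max(t, arrival)            # CPU may sit idle until the job arrives
        waiting[name] = t - arrival    # time spent in the ready queue
        t += burst                     # run to completion (non-preemptive)
    return waiting

w = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)])
print(w, "average:", sum(w.values()) / len(w))
# → {'P1': 0, 'P2': 4, 'P3': 6} average: 3.33...
```

The example shows why FCFS performs poorly: a long first job (here P1, then P3's position behind it) inflates the waiting time of everything queued behind it (the "convoy effect").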


6. Explain the following terms: Turnaround Time, Execution Time, Arrival Time, Waiting Time, Thread.


1. Turn around time:

-> Turnaround time (TAT) is the total time taken to complete a process, measured from its submission (arrival) to its completion: turnaround time = completion time − arrival time.

2. Execution time:

->The execution time or CPU time of a given task is defined as the time spent by the system executing that task, including the time spent executing run- time or system services on its behalf. The mechanism used to measure execution time is implementation defined.

3. Arrival time:

-> Arrival time is the time at which a process enters the ready queue, i.e., the time at which it is submitted to the system.

4. Waiting time:

-> Waiting time is the total time spent by the process in the ready state waiting for the CPU. For example, if the arrival times of three processes are 0 ms, 0 ms, and 2 ms and we use the First Come First Serve scheduling algorithm, then the waiting time of the first process P1 is 0 ms.

5 .Thread:

-> A thread is a single sequential flow of execution within a process, also known as a thread of execution or thread of control. There can be more than one thread inside a process.
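The timing terms above are linked by two simple relations, checked numerically below (the figures are illustrative):

```python
# For any process:
#   turnaround time = completion time - arrival time
#   waiting time    = turnaround time - execution (burst) time
arrival, burst, completion = 2, 8, 22   # example figures

turnaround = completion - arrival       # total time in the system: 20
waiting = turnaround - burst            # time spent ready but not running: 12

print(turnaround, waiting)  # → 20 12
```

Waiting time never counts the time the process actually spends executing, only the time it sits in the ready queue.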


7. Find the average waiting time using the priority based scheduling algorithm.

Process | Arrival Time | Execution Time | Priority | Service Time
P0      | 0            | 5              | 1        | 0
P1      | 1            | 3              | 2        | 11
P2      | 2            | 8              | 1        | 14
P3      | 3            | 6              | 3        | 5

=> P0 is the only process present at time 0, so it runs first (service time 0) and finishes at time 5. By then the other processes have arrived, so the scheduler repeatedly picks the job with the highest priority number: P3 (priority 3) runs from 5 to 11, P1 (priority 2) from 11 to 14, and P2 (priority 1) from 14 to 22, which matches the service times in the table.

Waiting time = service time − arrival time:
P0 = 0 − 0 = 0, P1 = 11 − 1 = 10, P2 = 14 − 2 = 12, P3 = 5 − 3 = 2.

Average waiting time = (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6 ms.
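The computation can be reproduced with a short non-preemptive priority scheduler. This is a sketch; as the service times in the table imply, it assumes a larger priority number means higher priority:

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling (higher number = higher priority).

    processes: list of (name, arrival_time, burst_time, priority).
    Returns {name: waiting_time}.
    """
    pending = list(processes)
    waiting, t = {}, 0
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                      # CPU idle until the next arrival
            t = min(p[1] for p in pending)
            continue
        # Pick the highest priority; ties broken by earlier arrival.
        job = max(ready, key=lambda p: (p[3], -p[1]))
        name, arrival, burst, prio = job
        waiting[name] = t - arrival        # service time minus arrival time
        t += burst                         # run the job to completion
        pending.remove(job)
    return waiting

w = priority_schedule([("P0", 0, 5, 1), ("P1", 1, 3, 2),
                       ("P2", 2, 8, 1), ("P3", 3, 6, 3)])
print(w, "average:", sum(w.values()) / len(w))  # average: 6.0
```

Running it reproduces the waiting times 0, 10, 12, and 2 and an average of 6 ms.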





UNIT – 3


1. What is Race Condition ?

❖ RACE CONDITION:

• A race condition occurs when two threads access a shared variable at the same time: for example, both threads read the same value from the variable, each computes a new value, and whichever thread writes back last silently overwrites the other's update.

• A race condition is a situation that may occur inside a critical section. This happens when the result of multiple threads executing in the critical section differs according to the order in which the threads execute. Example: a dual switch.

• Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Also, proper thread synchronization using locks or atomic variables can prevent race conditions.
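The lost update behind a race condition can be shown deterministically by hand-interleaving the read and write steps of two threads (a plain sequential simulation, not real threads):

```python
# Each thread intends to perform: read counter, add 1, write counter back.
counter = 0

a_local = counter        # thread A reads 0
b_local = counter        # thread B reads the SAME value 0 (the race)
counter = a_local + 1    # thread A writes 1
counter = b_local + 1    # thread B writes 1, overwriting A's update

print(counter)  # → 1, not 2: one increment is lost
```

If the read-add-write sequence were executed atomically (inside a critical section protected by a lock), the result would always be 2.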


2. What is Critical section ? Explain three conditions of Critical Sections.

❖ CRITICAL SECTION:

• The critical section is a code segment where the shared variables can be accessed. Atomic action is required in a critical section, i.e. only one process can execute in its critical section at a time. All the other processes have to wait to execute in their critical sections.

• A critical section may be used to ensure that a shared resource, for example a printer, can only be accessed by one process at a time.

• The entry section handles the entry into the critical section: it acquires the resources needed for execution by the process. The exit section handles the exit from the critical section: it releases the resources and also informs the other processes that the critical section is free.

• The critical section problem needs a solution to synchronize the different processes. The solution to the critical section problem must satisfy the following conditions −

• Mutual Exclusion: Only one process can be inside the critical section at any time. If any other processes require the critical section, they must wait until it is free.

• Progress: If a process is not using the critical section, it should not stop any other process from accessing it. In other words, any process can enter a critical section if it is free.

• Bounded Waiting: Each process must have a limited waiting time; it should not wait endlessly to access the critical section.


3. Explain Peterson’s Algorithm.

❖ PETERSON’S SOLUTION:-

• Peterson's Solution preserves all three conditions : Mutual Exclusion is assured as only one process can access the critical section at any time. Progress is also assured, as a process outside the critical section does not block other processes from entering the critical section.

• This is a software mechanism implemented in user mode. It is a busy-waiting solution that can be implemented for only two processes. It uses two variables: a turn variable and an interested variable.

• The Peterson solution provides all the necessary requirements: mutual exclusion, progress, bounded waiting, and portability.

• Peterson’s Algorithm is used to synchronize two processes. It uses two variables, a bool array flag of size 2 and an int variable turn to accomplish it.

In the solution, i represents the consumer and j represents the producer. Initially the flags are false. When a process wants to execute its critical section, it sets its flag to true and sets turn to the index of the other process.

• This means that the process wants to execute, but it will allow the other process to run first. The process busy-waits until the other process has finished its own critical section. After this, the current process enters its critical section and adds or removes a random number from the shared buffer. After completing the critical section, it sets its own flag to false, indicating that it does not wish to execute any more.

• The entry and exit sections of Peterson's solution can be written as follows:


#define N 2
#define TRUE 1
#define FALSE 0

int interested[N] = {FALSE, FALSE};
int turn;

void entry_section(int process)
{
    int other = 1 - process;          /* index of the other process */
    interested[process] = TRUE;       /* declare interest in entering */
    turn = process;                   /* give the other process priority */
    while (interested[other] == TRUE && turn == process)
        ;                             /* busy wait */
}

void exit_section(int process)
{
    interested[process] = FALSE;      /* leave the critical section */
}


4. Explain Following Term: Race Condition, Critical Section, Hardware Solution, Mutex Lock, Semaphore.


❖ RACE CONDITION :-

• A race condition occurs when two threads access a shared variable at the same time: for example, both threads read the same value from the variable, each computes a new value, and whichever thread writes back last silently overwrites the other's update.

• A race condition is a situation that may occur inside a critical section. This happens when the result of multiple threads executing in the critical section differs according to the order in which the threads execute. Example: a dual switch.

• Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Also, proper thread synchronization using locks or atomic variables can prevent race conditions.


❖ CRITICAL SECTION :-

• The critical section is a code segment where the shared variables can be accessed. Atomic action is required in a critical section, i.e. only one process can execute in its critical section at a time. All the other processes have to wait to execute in their critical sections.

• A critical section may be used to ensure that a shared resource, for example a printer, can only be accessed by one process at a time.

• The entry section handles the entry into the critical section: it acquires the resources needed for execution by the process. The exit section handles the exit from the critical section: it releases the resources and also informs the other processes that the critical section is free.

• The critical section problem needs a solution to synchronize the different processes. The solution to the critical section problem must satisfy the following conditions −

• Mutual Exclusion: Only one process can be inside the critical section at any time. If any other processes require the critical section, they must wait until it is free.

• Progress: If a process is not using the critical section, it should not stop any other process from accessing it. In other words, any process can enter a critical section if it is free.

• Bounded Waiting: Each process must have a limited waiting time; it should not wait endlessly to access the critical section.


❖ HARDWARE SOLUTION :-

• The hardware-based solution to the critical section problem is based on a simple tool, i.e. a lock. The solution implies that before entering its critical section a process must acquire a lock, and it must release the lock when it exits its critical section. Using a lock also prevents race conditions.

❖ MUTEX LOCK :-

• A mutual exclusion (mutex) is a program object that prevents simultaneous access to a shared resource. This concept is used in concurrent programming with a critical section, a piece of code in which processes or threads access a shared resource.

❖ SEMAPHORE :-

• A semaphore is simply an integer variable that is shared between threads. This variable is used to solve the critical section problem and to achieve process synchronization in a multiprocessing environment. A binary semaphore, which can take only the two values 0 and 1, is also known as a mutex lock.

• A mutex object allows multiple process threads to access a single shared resource, but only one at a time.

• A semaphore allows multiple process threads to access a finite number of instances of a resource, as long as instances are available. In a mutex, the lock must be acquired and released by the same process.
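The semaphore operations (wait/P and signal/V) can be modelled in a few lines. This is an illustrative model, not an OS implementation: a real semaphore would block the calling process instead of raising an error.

```python
class Semaphore:
    """Minimal counting semaphore model (illustrative, non-blocking)."""

    def __init__(self, count):
        self.count = count                 # number of free resource instances

    def wait(self):                        # P / acquire
        if self.count == 0:
            raise RuntimeError("caller would block here")
        self.count -= 1

    def signal(self):                      # V / release
        self.count += 1

s = Semaphore(2)       # a resource with two instances
s.wait(); s.wait()     # two processes acquire it
print(s.count)         # → 0: a third wait() would block
s.signal()             # one process releases it
print(s.count)         # → 1

mutex = Semaphore(1)   # a binary semaphore (mutex) is just the count = 1 case
```

With count fixed at 1 the same object behaves as a mutex: at most one process holds it at a time.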


Unit-4


1. What is Deadlock? Explain the conditions that lead to deadlock.

 Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process. Consider an example where two trains are coming toward each other on the same track: since there is only one track, neither train can move once they are in front of each other.



 A process in operating system uses resources in the following way.

1) Requests a resource

2) Use the resource

3) Releases the resource


 A similar situation occurs in operating systems when there are two or more processes that hold some resources and wait for resources held by other(s). For example, in the below diagram, Process 1 is holding Resource 1 and waiting for resource 2 which is acquired by process 2, and process 2 is waiting for resource 1.



 Deadlock can arise if the following four conditions hold simultaneously

(Necessary Conditions)

Mutual Exclusion: Two or more resources are non-shareable (Only one process can use at a time)

Hold and Wait: A process is holding at least one resource and waiting for additional resources held by other processes.

No preemption:-A resource cannot be taken from a process unless the process releases the resource.

Circular Wait: A set of processes are waiting for each other in circular form.


2.In which different conditions required to arise deadlock?

 Conditions for Deadlock in Operating System :

Deadlock is a situation which involves the interaction of more than one resource and process with each other. We can visualise the occurrence of deadlock as a situation where there are two people on a staircase: one is ascending the staircase while the other is descending, and the staircase is so narrow that it can only fit one person at a time. As a result, one has to retreat while the other moves on and uses the staircase; once that person has finished, the other one can use it. But here, neither person is willing to retreat, and each waits for the other to retreat, so neither of them is able to use the staircase. The people here are the processes, and the staircase is the resource. When a process requests a resource that is being held by another process, which in turn needs a resource held by the first process in order to continue, it is called a deadlock.



 There are four conditions necessary for the occurrence of a deadlock. They can be understood with the help of the above illustrated example of staircase:


1. Mutual Exclusion: When the two people meet on the landing, they cannot just walk past each other, because there is space for only one person. This condition, which allows only one person (or process) to use the step between them (or the resource) at a time, is the first condition necessary for the occurrence of deadlock.

2. Hold and Wait: When the two people refuse to retreat and hold their ground, it is called holding. This is the next necessary condition for deadlock.

3. No Preemption: To resolve the deadlock, one could simply cancel one of the processes so that the other may continue. But the operating system does not do so: it allocates a resource to a process for as much time as is needed, until the task is completed. Hence, there is no temporary reallocation of resources. This is the third condition for deadlock.

4. Circular Wait: When the two people refuse to retreat and each waits for the other to retreat so that they can complete their task, it is called circular wait. It is the last condition required for deadlock to occur.

Note: All four conditions are necessary for deadlock to occur. If any one of them is prevented or resolved, the deadlock is resolved.
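The note above can be sketched in code. In this hedged example (the lock names and timeout value are illustrative, not from the text), two threads acquire two locks in opposite order, which satisfies all four conditions; using a timeout on the second acquire breaks hold-and-wait, so both threads finish instead of deadlocking:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
done = []

def worker(first, second, name):
    while True:
        with first:
            # Try the second lock; give up and retry instead of waiting
            # forever. Releasing `first` on failure breaks hold-and-wait,
            # which is enough to prevent the circular wait.
            if second.acquire(timeout=0.05):
                try:
                    done.append(name)
                finally:
                    second.release()
                return

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "T1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "T2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # both threads completed without deadlock
```

Without the timeout (a plain blocking acquire of the second lock), the same two threads could block forever, since all four conditions would then hold.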


3. Explain deadlock detection.

 Deadlock Detection:

1. If resources have a single instance - In this case, for deadlock detection we can run an algorithm to check for a cycle in the Resource Allocation Graph. The presence of a cycle in the graph is a sufficient condition for deadlock.



2. In the above diagram, resource R1 and resource R2 have single instances, and there is a cycle R1 → P1 → R2 → P2 → R1. So a deadlock is confirmed.


3. If there are multiple instances of resources -

Detection of a cycle is a necessary but not a sufficient condition for deadlock in this case; the system may or may not be in deadlock, depending on the situation.
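The single-instance case above reduces to cycle detection in a directed graph. A minimal sketch (the graph representation as a dict of adjacency lists is an assumption, not from the text) using depth-first search:

```python
def has_cycle(graph):
    """Return True if the directed graph (dict: node -> list of
    successors) contains a cycle, found via DFS node colouring."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}

    def visit(node):
        colour[node] = GREY
        for nxt in graph.get(node, []):
            if colour.get(nxt, WHITE) == GREY:   # back edge -> cycle
                return True
            if colour.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in graph)

# The cycle from the example: R1 -> P1 -> R2 -> P2 -> R1
rag = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(rag))  # single-instance resources, so deadlock
```

With single-instance resources a cycle means deadlock; with multiple instances the same check only flags a *possible* deadlock, as noted above.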


4. Explain Banker’s algorithm.

The Banker's algorithm is used to avoid deadlock and allocate resources safely to each process in the computer system. Its safe-state ('S-state') check examines all possible allocation sequences before deciding whether an allocation should be granted to a process. It also helps the operating system share resources successfully among all processes. The algorithm is named the Banker's algorithm because it resembles the way a bank checks whether a loan can safely be sanctioned: the bank simulates the allocation of its cash before committing it.

In this section, we will learn the Banker's Algorithm in detail and solve problems based on it. To understand the Banker's Algorithm, we will first look at a real-world example.


Suppose the number of account holders in a particular bank is 'n', and the total money in the bank is 'T'. If an account holder applies for a loan, the bank first subtracts the loan amount from its total cash and then checks that the remaining cash is still enough to satisfy the possible demands of the other account holders before approving the loan. These steps are taken so that if another person applies for a loan or withdraws some amount, the bank can still manage and operate everything without any disruption to the functioning of the banking system.


Similarly, it works in an operating system. When a new process is created, it must declare to the operating system the maximum resources it may request. Based on this information, the operating system decides which process sequence can be executed safely and which processes must wait so that no deadlock occurs in the system. Therefore, it is also known as a deadlock avoidance algorithm.
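The safety check described above can be sketched as follows. This is a minimal illustration; the matrices `alloc` and `maxm` and the vector `avail` are textbook-style example values, not taken from the text:

```python
def is_safe(available, allocation, maximum):
    """Return (safe?, safe_sequence) for the given system state."""
    n, m = len(allocation), len(available)
    # Need = Max - Allocation, per process and resource type.
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and releases
                # everything it currently holds.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:        # no process can finish -> unsafe state
            return False, []
    return True, sequence

alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maxm  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
avail = [3, 3, 2]
safe, seq = is_safe(avail, alloc, maxm)
print(safe, seq)  # True [1, 3, 4, 0, 2]
```

If the check returns an unsafe state, the operating system makes the requesting process wait instead of granting the allocation, which is exactly the avoidance behaviour described above.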


UNIT – 5


1. What is Memory Management ?

• Memory partitioning

Memory partitioning means dividing the main memory into chunks of the same or different sizes so that they can be assigned to processes. There are two memory management techniques: contiguous and non-contiguous. In the contiguous technique, the executing process must be loaded entirely into main memory. The contiguous technique can be divided into:

1. Fixed (or static) partitioning

2. Variable (or dynamic) partitioning

▪ Fixed Partitioning: This is the oldest and simplest technique used to put more than one process in the main memory. In this partitioning, the number of partitions (non-overlapping) in RAM is fixed but the size of each partition may or may not be the same.



In the above figure, the first process consumes only 1MB of the 4MB block in main memory.

Hence, Internal Fragmentation in first block is (4-1) = 3MB.


Sum of Internal Fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14)= 3+1+1+2 = 7MB.


Suppose a process P5 of size 7MB arrives. This process cannot be accommodated, despite 7MB of free space being available, because the allocation must be contiguous.
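The arithmetic above can be checked with a short sketch (the block and process sizes are those of the figure; the list representation is an assumption):

```python
partitions = [4, 8, 8, 16]   # fixed block sizes in MB, from the figure
processes  = [1, 7, 7, 14]   # process sizes loaded into each block

# Internal fragmentation: unused space left inside each allocated block.
free_holes = [block - proc for block, proc in zip(partitions, processes)]
internal = sum(free_holes)
print(internal)              # total MB wasted inside allocated blocks

# A new 7MB process cannot fit: no single hole is that large, even
# though the holes sum to 7MB, because allocation must be contiguous.
print(max(free_holes) >= 7)
```

Running this shows 7MB of internal fragmentation in total, with the largest single hole being only 3MB, so process P5 is rejected.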


Advantages of Fixed Partitioning –

1. Easy to implement

2. Little OS overhead

• Disadvantages of Fixed Partitioning –

1. Internal Fragmentation

2. External Fragmentation

3. Limit process size

4. Limitation on Degree of Multiprogramming


• Variable Partitioning –

It is a part of the contiguous allocation technique, used to alleviate the problems faced by fixed partitioning. In contrast with fixed partitioning, partitions are not created before execution or during system configuration; they are created dynamically, sized to each process as it is loaded.


• Advantages of Variable Partitioning –

1. No Internal Fragmentation

2. No restriction on Degree of Multiprogramming

3. No Limitation on the size of the process

• Disadvantages of Variable Partitioning –

1. Difficult Implementation

2. External Fragmentation


2. What is static and Dynamic linking ?

• Static Linking:

When we click the .exe (executable) file of a program and it starts running, all the necessary contents of the binary file are loaded into the process's virtual address space. However, most programs also need to run functions from the system libraries, and these library functions also need to be loaded.

In the simplest case, the necessary library functions are embedded directly in the program’s executable binary file. Such a program is statically linked to its libraries, and statically linked executable codes can commence running as soon as they are loaded.

• Disadvantage:

Every program generated must contain its own copies of exactly the same common system library functions. In terms of both physical memory and disk-space usage, it is much more efficient to load the system libraries into memory only once; dynamic linking allows this single loading to happen.


• Dynamic Linking:

Every dynamically linked program contains a small, statically linked function that is called when the program starts. This static function maps the dynamic linker into memory and runs it. The linker determines which dynamic libraries the program requires, along with the names of the variables and functions it needs from those libraries, by reading the information contained in sections of the libraries.

It then maps the libraries into virtual memory and resolves the references to the symbols contained in them. We do not know in advance where in memory these shared libraries will be mapped: they are compiled into position-independent code (PIC), which can run at any address.

• Advantage:

The memory requirements of the program are reduced: a DLL is loaded into memory only once, while more than one application may use that single DLL at the same time, saving memory space. Application support and maintenance costs are also lowered.
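As a small, hedged illustration of dynamic linking in action (assuming a Unix-like system where the C library can be located), Python's `ctypes` can map a shared library into the process at run time and resolve a symbol from it, much as the dynamic loader does for dynamically linked executables:

```python
import ctypes
import ctypes.util

# Look up the platform-specific name of the C library,
# e.g. "libc.so.6" on Linux (an assumption about the host system).
libc_name = ctypes.util.find_library("c")

# CDLL maps the shared library into this process's address space.
libc = ctypes.CDLL(libc_name)

# Resolve the `abs` symbol from the mapped library and call it.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]
print(libc.abs(-42))  # 42
```

The library is loaded once by the system; every process that uses it shares the same read-only code pages, which is the memory saving described above.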


3. How the swapping process run in the OS ?

• Swapping :

Swapping means removing all the pages of a process from memory, or marking them so that they can be removed by the page replacement process, and writing them out to secondary storage.



If a process is suspended, it cannot run, but we can swap it out for some time. Later, the process can be swapped back in by the system from secondary memory to primary memory.


4. What is Fragmentation?

▪ Fragmentation:

Fragmentation occurs when processes are loaded into and removed from memory after execution, leaving behind small free holes. These holes cannot be assigned to new processes because they are not combined and individually do not fulfil a process's memory requirement. There are two types of fragmentation in an operating system:


• Internal fragmentation: Occurs when the memory block allocated to a process is larger than the process's requested size. The unused space left over inside the block creates the internal fragmentation problem.


• External fragmentation: Free memory blocks exist, but they cannot be assigned to a process because the blocks are not contiguous.


5. Explain Internal & External Fragmentation?

• Fragmentation:

Fragmentation occurs when processes are loaded into and removed from memory after execution, leaving behind small free holes. These holes cannot be assigned to new processes because they are not combined and individually do not fulfil a process's memory requirement. There are two types of fragmentation in an operating system:


▪ Internal fragmentation: Occurs when the memory block allocated to a process is larger than the process's requested size. The unused space left over inside the block creates the internal fragmentation problem.

Example: Suppose fixed partitioning is used for memory allocation, and blocks of sizes 3MB, 6MB, and 7MB are available in memory. Now a new process P4 of size 2MB arrives and demands a block of memory. It gets a memory block of 3MB, but 1MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.


▪ External fragmentation: Free memory blocks exist, but they cannot be assigned to a process because the blocks are not contiguous.

Example: Continuing the above example, suppose three processes P1, P2, and P3 arrive with sizes 2MB, 4MB, and 7MB respectively, and are allocated memory blocks of 3MB, 6MB, and 7MB respectively. After allocation, P1 and P2 leave 1MB and 2MB unused inside their blocks. Suppose a new process P4 arrives and demands a 3MB block of memory. That much free space exists in total, but we cannot assign it because it is not contiguous. This is called external fragmentation.
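The external-fragmentation arithmetic above can be checked with a short sketch (the sizes are those of the example; the list representation is an assumption):

```python
blocks    = [3, 6, 7]   # allocated block sizes in MB
processes = [2, 4, 7]   # sizes of P1, P2, P3 placed in those blocks

# Leftover hole inside each block after allocation.
holes = [b - p for b, p in zip(blocks, processes)]
total_free = sum(holes)          # 1 + 2 + 0 = 3 MB free in total
print(total_free)

# P4 needs 3MB contiguously; the largest single hole is only 2MB.
fits = any(h >= 3 for h in holes)
print(fits)
```

Running this shows 3MB of free memory in total, yet `fits` is False: no single hole can hold P4, which is exactly the external fragmentation described above.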


😊 Good Luck On Your Exams 😊






