Summary of Operating System Interview Questions: Consolidate Your Computer Science Foundation

I never studied an operating systems course systematically, but while preparing for interviews I found that the big tech companies like to ask questions in this area. So, based on the questions some friends were asked, I looked up some good articles and summarized them a bit. For example:

The difference between processes and threads, synchronous vs. asynchronous, blocking vs. non-blocking, and deadlock. Understanding these concepts is, I think, also very helpful for deepening front-end knowledge. For students aiming at big tech companies, topics such as disk seeking and communication between processes are also frequently examined.

1. What is the difference between a process and a thread?

There are many answers to this question on the Internet, and there are also many vivid examples, but here is an answer that I agree with, for reference only.


Essentially, processes and threads are descriptions of CPU working time periods, that is, descriptions of program instructions in execution.

Background knowledge:


In order to explain the above sentence, we first need to understand some basic knowledge about computers.

  1. The CPU executes tasks very quickly. Even when there are multiple tasks to run, the CPU executes them one at a time, in turn; it switches so fast that we feel it is processing multiple tasks simultaneously.

  2. Before the CPU executes a program or task, all the relevant resources must be prepared. Ready tasks wait in the ready queue until the operating system's scheduling algorithm picks one for the CPU to run; the program counter (PC) then points at the start of the code, and the CPU fetches and executes instructions one by one. While a task runs, its resources are loaded in RAM.

  3. The CPU's execution of each task can be understood as a process, and the resources that must be prepared can be understood as the context required while that process executes. When a process finishes, or the CPU time slice assigned to it is used up, the current process's context must first be saved, and the process then waits for its next turn on the CPU.


So how are multiple processes executed in an orderly manner? Load one process's context, run it, save its context when its turn ends; then load the next process's context, run it, save it; and so on.

Having read this far, I think everyone should have their own feel for the nature of a process. To describe the time period during which the CPU executes a program within a given context, we have the two terms process and thread.

A slightly off-topic aside: nouns name objective things, while adjectives describe them.


With that, threads are easy to understand: a thread is a smaller CPU execution time period than a process. Suppose the CPU executes a program A, that is, runs one process, and this program is composed of parts a, b, and c. The following example may help.

Imagine that opening the Chrome browser opens a process. To display a page, Chrome needs a thread to parse the JS code, a thread to send request data, a thread to generate the bitmap data, and so on; they work in turn in a certain order and together produce the page we want to display.


Execution proceeds by first loading A's context, then executing part a, then b, then c, and finally saving A's context. The time consumed switching a thread's context is much less than that needed to switch a process's context, so CPU utilization is greatly improved.

We can use global variables almost anywhere in a program, which reflects the fact that all threads share the current process's context, that is, the current process's address space.

Note that the process and thread concepts described here differ from the process and thread constructs in actual code. A programming language's definitions are only that language's implementation, a concrete materialization of the process and thread concepts.


What are single-threaded and multi-threaded?


The CPU's execution time can be divided, from the macro level down to the micro level, into processes and threads, and threads can further be divided into single-threaded and multi-threaded execution.

  • Single-threaded: easy to understand. Our JavaScript is single-threaded, meaning that when the browser runs JS code, only one thread is working. If it hits an error somewhere, execution stops and nothing after it can run; this inability to proceed after an error is also called blocking.
  • Multi-threaded: multiple threads execute in turn in a certain order, as in the Chrome example above. Note that a multi-threaded program looks as if multiple threads run at the same time, but on the CPU they simply take turns executing in a certain order.


Simply put, within a process, being able to do only one thing at a time is single-threaded, and being able to do multiple things is multi-threaded.

What are concurrency and parallelism?

  • Concurrency: in an operating system, several programs are all somewhere between having started and having run to completion within one period of time, and they all run on the same processor, but at any single point in time only one program is actually running on that processor.

Imagine multiple users accessing a server. If one user's request had to finish before the next user's request could even begin, a lot of time would be wasted. We appear to be able to visit a website at the same time, but as far as the server is concerned it still processes different clients' requests one by one; it just handles them so quickly that we mistakenly believe multiple users are being served simultaneously.


  • Parallelism: at the same moment, multiple instructions run on different processors. For example, when we write code while listening to a song, the music player process and the VSCode process run in parallel.


What is the difference between process and thread?


With the above said, let's answer the question. I still hope everyone remembers these points from a place of understanding, so you can give the interviewer a satisfying answer.

  1. First of all, processes and threads are essentially descriptions of the time periods during which the CPU works. This sentence alone can earn you 50 points.
  2. A process is the basic unit of resource allocation; a thread is the basic unit of program execution.
  3. Each process has an independent address space, so in a multi-process program one process crashing does not affect the others. The threads of a multi-threaded program share the current process's address space, so one thread crashing may bring down the whole process.
  4. Communication between threads is very easy: they share the same memory, so a pointer to it is enough. Communication between processes is more troublesome. The interviewer may follow up by asking how processes communicate; that is answered later in this article.


In fact, from a macro point of view a multi-threaded program lets multiple functions appear to execute at the same time while the program runs, but in essence the CPU is still switching thread contexts (thread context switches are just cheaper) and executing the different threads in a certain order.
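
To make points 3 and 4 concrete, here is a minimal C sketch (assuming a POSIX system; the global `counter` is made up for illustration). Two threads modify the same global variable, while a forked child process only modifies its own copy. Compile with `gcc demo.c -pthread`.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int counter = 0; /* one global variable in the process's address space */

void *thread_body(void *arg) {
    counter++; /* threads share the address space, so this is visible */
    return NULL;
}

int main(void) {
    /* Threads: both increments hit the SAME counter. */
    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after 2 threads: counter = %d\n", counter); /* prints 2 */

    /* Processes: fork() gives the child its own copy of the address space. */
    if (fork() == 0) {
        counter += 100;  /* modifies only the child's copy */
        exit(0);
    }
    wait(NULL);
    printf("after child:     counter = %d\n", counter); /* still 2 */
    return 0;
}
```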

2. What are synchronous and asynchronous?


Synchronous and asynchronous are essentially about the message communication mechanism (synchronous communication vs. asynchronous communication).

Synchronous: when you initiate a call, the call does not return until the result is obtained; once there is a result, the call returns it. That means the caller must actively wait for the result of the call.

Asynchronous: the opposite. When we initiate a call, the call returns immediately, so no result comes back with it. That is, we do not get the result right away; instead, the callee later notifies the caller through status or notification, or handles the result through a callback function (very much like a Promise).

For example: you call the bookstore owner to ask whether the book "Distributed Systems" is in stock. With a synchronous communication mechanism, the owner says "let me check", goes off to check, and you hold the line (maybe 5 seconds, maybe a day) until he tells you the result (the call returns the result).
With an asynchronous communication mechanism, the owner tells you directly "I'll check and call you back", then hangs up immediately (no result is returned). After checking, he calls you back on his own initiative; here the owner returns the result by "calling back".
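
The same idea as a hedged C sketch (all names here, such as `query_sync` and `on_result`, are invented for illustration): the synchronous version returns the result itself, while the asynchronous version returns immediately and delivers the result later through a callback running on another thread.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Synchronous: the caller waits; the result IS the return value. */
int query_sync(void) {
    sleep(1);                 /* pretend the lookup takes a while */
    return 1;                 /* 1 = the book is in stock */
}

void on_result(int result) {  /* the "call back" from the owner */
    printf("callback: book in stock? %d\n", result);
}

static void *worker(void *arg) {
    sleep(1);                 /* the "owner" checks in the background */
    on_result(1);             /* notify the caller with the result */
    return NULL;
}

/* Asynchronous: return immediately; the result arrives later. */
void query_async(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_detach(t);        /* the caller does not wait for it */
}

int main(void) {
    printf("sync result: %d\n", query_sync()); /* blocks ~1 second */
    query_async();                             /* returns at once */
    printf("query_async returned immediately\n");
    sleep(2);                 /* keep the process alive for the callback */
    return 0;
}
```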


3. What are blocking and non-blocking?


Blocking and non-blocking focus on the state of the program while it waits for the call result (the returned message).

A blocking call means the current thread is suspended until the call returns; no other work can be done until the result comes back.

A non-blocking call means the thread is not blocked and can handle other work before the result is returned.

For example: you call the bookstore owner to ask whether "Distributed Systems" is in stock. With a blocking call, you keep yourself "suspended" until you get the answer about the book. With a non-blocking call, regardless of whether the owner has an answer yet, you go off and do your own thing, though of course you may check back every few minutes to see whether he has returned a result.

Note that blocking and non-blocking have nothing to do with synchronous and asynchronous; they are unrelated to how the owner delivers the answer to you.

To sum up, synchronous and asynchronous mainly concern the communication mechanism of messages, while blocking and non-blocking mainly concern the program's state while it waits for the call result.

Synchronous and asynchronous are mainly about the way the message is delivered. If the call itself returns the value, it is synchronous; if the call returns first and, once there is a value, we are informed of it in some other way, it is asynchronous.

Blocking and non-blocking are mainly about the state while waiting for the notification. If you cannot do other things while waiting, it is blocking; if waiting does not stop you from doing other things, it is non-blocking.
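
A minimal POSIX sketch of the difference: a plain read() on an empty pipe suspends the thread, while the same read() with O_NONBLOCK returns immediately with EAGAIN, so the caller can go do something else and poll again later.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    pipe(fds);                            /* fds[0] is an empty read end */

    /* Switch the read end into non-blocking mode. */
    int flags = fcntl(fds[0], F_GETFL, 0);
    fcntl(fds[0], F_SETFL, flags | O_NONBLOCK);

    char buf[16];
    ssize_t n = read(fds[0], buf, sizeof buf);   /* would hang if blocking */
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* No data yet, but we were not suspended: free to do other work. */
        printf("no data yet, doing something else...\n");
    }
    return 0;
}
```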

4. What are the communication methods between processes, and what are their advantages and disadvantages?


As mentioned above, each process has its own user address space, and one process cannot directly read another process's global variables or data. So for processes to communicate, a "third party" must be involved: the kernel. The kernel opens up a memory buffer; process A copies its data into the kernel buffer, and if process B wants to use it, it reads it out of the kernel buffer. This is how processes communicate with each other, and this mechanism provided by the kernel is called inter-process communication (IPC, InterProcess Communication).



Pipes:


There are two kinds of pipes: anonymous pipes and named pipes. In Linux, "|" denotes a pipe on the command line; interested readers can look up its usage.

A pipe is essentially a kernel buffer that processes access in first-in, first-out order: the process at one end writes data into the buffer sequentially, and the process at the other end reads it out sequentially. Think of it as a queue.


Anonymous pipes:

  • It is half-duplex: data flows in only one direction, so if both sides need to communicate, two pipes must be established.
  • It can only be used between related processes, such as parent-child or sibling processes.
  • Reading and writing: what one process writes into the pipe is read by the process at the other end. Each write is appended to the tail of the pipe buffer, and each read takes data from the head of the buffer.
  • Note that the data being communicated lives only in memory.

Named pipes:

  • They solve the limitation that anonymous pipes can only be used between related processes.
  • The pipe's name is stored on the system disk, while its content stays in the kernel.


Advantages and disadvantages of pipes:

The advantages are clear: pipes are very simple to use on Linux, and the blocking mechanism ensures the data actually gets taken away by the reader.

The disadvantage is that the kernel buffer is limited in size, and precisely because of the blocking mechanism, when process A passes data to process B, A can only proceed after B has read the data. So pipes are not suitable for frequent communication.
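
A minimal anonymous-pipe sketch in C (assuming POSIX; error handling omitted): the parent writes into the pipe, and the child blocks on read() until the data arrives.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    pipe(fds);                        /* fds[0]: read end, fds[1]: write end */

    if (fork() == 0) {                /* child: reads from the pipe */
        close(fds[1]);                /* close the unused write end */
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf - 1); /* blocks until data */
        buf[n] = '\0';
        printf("child got: %s\n", buf);
        return 0;
    }

    close(fds[0]);                    /* parent: writes to the pipe */
    const char *msg = "hello from parent";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}
```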

Signals:

A signal is a software-level simulation of the interrupt mechanism and an asynchronous communication method. Signals allow direct interaction between user-space processes and the kernel: the kernel can use a signal to notify a user-space process that some system event has occurred.

  • Signals are a commonly used inter-process communication method on Linux systems; a signal can be sent to a process at any time without knowing the process's state.
  • If the target process is not currently executing, the signal is kept in the kernel and delivered when the process next runs.
  • If the process has set the signal to be blocked, delivery is delayed until the blocking is cancelled.

That is to say, signals are mainly passed directly between a process and the kernel, and can be used, for example, to make a process terminate and exit. Signal events come from two main sources.

  • Hardware sources: the user pressing keys such as Ctrl+C to interrupt, or hardware exceptions such as invalid memory access.
  • Software sources: another process calling the kill function to terminate this one, or software raising a signal on an abnormal condition.


The common signals in Linux are:

(1) SIGHUP: sent to all processes started from a terminal when the user logs out; the default handling is to terminate the process.
(2) SIGINT: program interrupt signal, generated by pressing Ctrl+C while the program is running.
(3) SIGQUIT: program quit signal, generated by pressing Ctrl+\ while the program is running.
(4) SIGBUS and SIGSEGV: the process accessed an illegal address.
(5) SIGFPE: a fatal arithmetic error occurred, such as division by zero or data overflow.
(6) SIGKILL: forcibly terminates a process; sent by running kill -9 in the shell.
(7) SIGTERM: asks a process to end; sent by running kill <pid> in the shell.
(8) SIGALRM: timer signal.
(9) SIGCLD (SIGCHLD): a child process has exited. If the parent process neither ignores nor handles this signal, the exited child becomes a zombie process.


Message queues:


We know that pipes have a blocking mechanism: when one end wants to transmit data, the sending process cannot proceed until the other end has received it. Could a process instead put the data into some piece of memory and return immediately?

The answer is the message queue. Process A puts the data into the message queue and then returns; if process B needs it later, it fetches it from the message queue. This is a bit like a cache.

This method solves both the one-direction limitation of pipes and their blocking mechanism. However, if the communicated data is large, repeated communication still consumes the system heavily, because each message is copied between user space and the kernel.
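
A minimal System V message queue sketch (assuming Linux; the key 0x1234 is arbitrary and error handling is omitted). msgsnd returns as soon as the message is queued in the kernel; the receiver fetches it later.

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf_demo {
    long mtype;        /* message type, must be > 0 */
    char mtext[64];    /* payload */
};

int main(void) {
    /* Create (or open) a queue identified by a fixed key. */
    int qid = msgget(0x1234, IPC_CREAT | 0666);

    /* Sender side: enqueue and return immediately. */
    struct msgbuf_demo out = { .mtype = 1 };
    strcpy(out.mtext, "hello via message queue");
    msgsnd(qid, &out, sizeof out.mtext, 0);

    /* Receiver side (could be another process): dequeue. */
    struct msgbuf_demo in;
    msgrcv(qid, &in, sizeof in.mtext, 1, 0);
    printf("received: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);    /* remove the queue */
    return 0;
}
```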

Note: anonymous pipes exist in memory, named pipes have an entry on the system disk, and message queues live in the kernel.

Shared memory:


Since message queues still suffer from inefficient copying of large amounts of data, wouldn't the problem be solved if two processes could share a piece of memory?

In fact, what the system gives a process is not real physical memory but a virtual address space. By letting multiple processes map the same region, we get the fastest form of inter-process communication; each process still has its own independent address space, and only part of it is shared.
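
A minimal System V shared memory sketch (assuming Linux; here wait() stands in for real synchronization, which in practice would be a semaphore as described next):

```c
#include <stdio.h>
#include <string.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Create a 4 KiB shared segment and map it into our address space. */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0666);
    char *shared = shmat(shmid, NULL, 0);

    if (fork() == 0) {               /* child writes into the region */
        strcpy(shared, "written by child");
        return 0;
    }
    wait(NULL);                      /* crude synchronization for the demo */
    printf("parent reads: %s\n", shared);

    shmdt(shared);                   /* unmap and destroy the segment */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}
```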

Semaphores:


We know that JS was deliberately designed to be single-threaded, essentially to avoid this question: if multiple threads modified the DOM at the same time, what state would the page's DOM end up in?

The purpose of the semaphore is essentially to ensure that when multiple processes or threads share memory, while one of them is using the memory the others do not use it, thereby preventing simultaneous-modification problems.

A semaphore is essentially a counter. The principle: suppose the initial value is 1. When process A accesses the shared memory, the semaphore is set to 0; when another process wants to access the memory, it sees the semaphore is 0 and does not access it.
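
A minimal POSIX semaphore sketch (assuming Linux; the name "/demo_sem" is made up). With an initial count of 1, the semaphore lets only one process into the critical section at a time.

```c
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Named semaphore, initial value 1: at most one holder at a time. */
    sem_t *sem = sem_open("/demo_sem", O_CREAT, 0666, 1);

    if (fork() == 0) {               /* child */
        sem_wait(sem);               /* counter 1 -> 0, or block if 0 */
        printf("child in critical section\n");
        sleep(1);                    /* pretend to use the shared memory */
        sem_post(sem);               /* counter back to 1, wake waiters */
        return 0;
    }

    sem_wait(sem);                   /* parent blocks while child holds it */
    printf("parent in critical section\n");
    sem_post(sem);
    wait(NULL);
    sem_unlink("/demo_sem");         /* clean up the name */
    return 0;
}
```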

Sockets:


The IPC methods above all communicate between processes on the same computer; sockets provide a way for processes on different computers to communicate over a network.

In the network protocol stack, the socket interface sits between the application layer and the transport layer, serving as the bridge between the two.
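
A minimal TCP client sketch in C (assuming POSIX; the address 127.0.0.1:8080 is a hypothetical server). Unlike the other IPC methods, the peer here could just as well be on another machine.

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* Create a TCP socket: the endpoint two processes talk through. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                 /* hypothetical port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* connect() can reach a process on this machine or any other. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
        const char *msg = "hello over a socket";
        write(fd, msg, strlen(msg));
        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf);   /* wait for a reply */
        if (n > 0) printf("got %zd bytes back\n", n);
    }
    close(fd);
    return 0;
}
```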


That's about it. Although only a few methods are listed, I checked many articles, hoping to summarize them simply and understandably. I don't want to memorize overly complicated material; after all, this part is rarely used day to day.

5. Please tell me about disk scheduling (seek) algorithms

Let's take a look at the disk structure first.

A common mechanical disk contains one or more circular platters, each with its own read/write head. Each platter surface is divided into multiple tracks, and each track is divided into multiple sectors; each sector holds 512 bytes. The set of tracks with the same number across all platters forms a cylinder, called the cylinder of the disk.

The purpose of a disk scheduling algorithm is very simple: improve disk access performance, generally by optimizing the order of disk access requests.

The seek time is the most time-consuming part of disk access. If the request sequence is optimized properly, it will inevitably save some unnecessary seek time, thereby improving disk access performance.

Next, we will mainly introduce several seek algorithms:

  • First come, first served algorithm (FCFS)
  • Shortest seek time first algorithm (SSTF)
  • Scan algorithm (SCAN)
  • Circular scan algorithm (C-SCAN)
  • LOOK and C-LOOK algorithms


First come, first served algorithm (FCFS):

Requests are scheduled in the order in which processes issue them.

Advantages: simple and fair; every process's requests get handled in turn, and no process's request goes unserved for a long time.

Disadvantages: since the algorithm does not optimize seeking at all, the average seek time is long.

Shortest seek time first algorithm (SSTF):


Serve first the pending request whose track is closest to the current head position.

Advantages: better performance than first come, first served; each individual seek is as short as possible.

Disadvantages: with requests arriving dynamically, starvation can occur: the head keeps moving within a small range, and some processes' requests may never be served.

Scan algorithm (elevator algorithm) SCAN:


Because shortest seek time first can starve some tracks, with the head always moving within a small range, SCAN instead moves the head in one direction until it reaches the last track in that direction, and only then reverses. Because this seeking pattern closely resembles an elevator, it is also called the elevator algorithm.

Advantages: good seek performance, and no starvation problem.

Disadvantages: the middle tracks are favored, responding more frequently than the tracks at the edges; that is, the response frequency is not uniform across tracks.

Circular scan algorithm C-SCAN:

Because the scan algorithm responds to tracks unevenly, the circular scan algorithm keeps the head moving in one direction, serving requests until it reaches the last track in that direction; then it moves the head quickly back to the opposite edge, serving no requests during the move, and sweeps in the same direction again. In other words, it only serves requests while moving in one direction.

The circular scan algorithm solves the scan algorithm's uneven per-track response frequency and is a relatively complete algorithm.
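
To make the trade-offs concrete, here is a small C sketch (the request queue and starting track are made-up numbers) that computes the total head movement of FCFS, SSTF, and strict SCAN on the same request list:

```c
#include <stdio.h>
#include <stdlib.h>

#define N 6
#define MAX_TRACK 199            /* assume tracks numbered 0..199 */

/* FCFS: serve requests in arrival order. */
static int fcfs(const int *req, int head) {
    int dist = 0;
    for (int i = 0; i < N; i++) {
        dist += abs(req[i] - head);
        head = req[i];
    }
    return dist;
}

/* SSTF: always serve the closest unserved request next. */
static int sstf(const int *req, int head) {
    int done[N] = {0}, dist = 0;
    for (int k = 0; k < N; k++) {
        int best = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && (best < 0 ||
                abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        dist += abs(req[best] - head);
        head = req[best];
        done[best] = 1;
    }
    return dist;
}

/* Strict SCAN: sweep up to the last track, then reverse down to the
 * lowest requested track (if any request lies below the start). */
static int scan(const int *req, int head) {
    int lowest = head;
    for (int i = 0; i < N; i++)
        if (req[i] < lowest) lowest = req[i];
    return (MAX_TRACK - head) + (lowest < head ? MAX_TRACK - lowest : 0);
}

int main(void) {
    int req[N] = {98, 183, 37, 122, 14, 124};  /* made-up request queue */
    int head = 53;                             /* made-up starting track */
    printf("FCFS total movement: %d\n", fcfs(req, head));
    printf("SSTF total movement: %d\n", sstf(req, head));
    printf("SCAN total movement: %d\n", scan(req, head));
    return 0;
}
```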


LOOK and C-LOOK:

LOOK and C-LOOK optimize SCAN and C-SCAN, which both require the head to travel to the last track in a direction before turning around. With LOOK and C-LOOK, the head reverses (or jumps back) as soon as the last pending request in the current direction has been served, without continuing on to the disk's final track.



6. Talk about your understanding of deadlock

Definition of deadlock:


If every process in a set is waiting for an event (a resource) that only another process in the same set can trigger, then this set of processes is deadlocked.

Put simply: process A needs resource a while running, but resource a is held by process B, so A cannot continue executing, and the resource b that A has already acquired cannot be released; A stays stuck there. After a while, process B also needs resource b; since A is stuck holding it, neither can continue to run, and a deadlock occurs.

Therefore, the essence of deadlock is processes being unable to continue running because of competition for resources (which can also be understood as acquiring resources in an improper order).

Note: The resources mentioned above can be many things such as locks, network connections, notification events, disks, etc.

Necessary conditions for deadlock to occur:

  • Mutual exclusion: at any one time, a resource can be held by only one process; while it is occupied, other processes cannot use it.
  • Hold and wait: when a running process requests a resource that another process occupies, it stays in the requesting state while refusing to release the resources it currently holds.
  • No preemption: while a process is using a resource, the resource cannot be taken by other processes until the holder releases it.
  • Circular wait: in the deadlocked group of processes, each one holds a resource needed by another, forming a closed waiting loop.

How to prevent and avoid deadlock:


1. Reasonable planning of the order of resource use

Since most deadlocks are caused by an unreasonable order of resource acquisition, we can formulate a more sensible order of resource use to avoid deadlock as much as possible.

For example, suppose process A acquires resource a first and then resource b, while process B acquires resource b first and then resource a; a deadlock is then very likely. If we instead make both processes acquire the resources in the same order, say a before b, the deadlock is avoided as much as possible.

The above is just one example; the point is that deadlock can largely be avoided through reasonable resource allocation, as the sketch below shows.
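
A minimal pthread sketch of the fix (lock and function names are made up): the deadlock-prone version has two threads acquiring two locks in opposite orders; the fix is to give the locks one global order that every thread follows.

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* DEADLOCK-PRONE version: thread 1 takes a then b, thread 2 takes
 * b then a. If each grabs its first lock, both wait forever on the
 * second one: a circular wait. */

/* FIX: every thread acquires the locks in the same global order. */
void *worker(void *arg) {
    pthread_mutex_lock(&lock_a);      /* always lock a first... */
    pthread_mutex_lock(&lock_b);      /* ...then lock b */
    printf("thread %ld holds both locks\n", (long)arg);
    pthread_mutex_unlock(&lock_b);    /* release in reverse order */
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```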


2. Set the maximum resource occupancy time.

When a process requests a resource that other processes occupy, it stays in a waiting state while keeping its own already-acquired resources occupied, which may prevent other processes from using those resources and result in deadlock.

Since a deadlock can be caused by a resource being held by some process for a long time, we can set a maximum holding time for the resource and release it as soon as that time is exceeded.


3. Pre-allocate the required resources.

If a process acquires resources only as it runs, and a resource it needs is occupied by another process, a deadlock situation becomes likely. So we can instead allocate, before the process runs, all the resources it will need while running, avoiding deadlock as much as possible.

But this lowers resource utilization, since some resources are not needed for the whole time the process is running.



4. Banker's Algorithm

Bankers are good at calculating, and the banker's algorithm works by calculating the resources currently used by all processes, the resources they will still need, and the remaining available resources, then analyzing whether the current state is safe, that is, whether every process can still be given enough resources to finish. Resources are only granted when the state stays safe, so no deadlock will occur.
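
A small sketch of the banker's safety check in C (the matrices are made-up example data): the state is safe if some order exists in which every process's remaining need can be met from the available pool, letting it finish and release what it holds.

```c
#include <stdio.h>
#include <string.h>

#define P 3  /* number of processes */
#define R 2  /* number of resource types */

/* Return 1 if the state is safe: some order exists in which every
 * process can obtain its remaining need, finish, and release. */
int is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R], done[P] = {0};
    memcpy(work, avail, sizeof work);
    for (int round = 0; round < P; round++) {
        int progressed = 0;
        for (int p = 0; p < P; p++) {
            if (done[p]) continue;
            int ok = 1;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { ok = 0; break; }
            if (ok) {                       /* p can finish: it releases */
                for (int r = 0; r < R; r++) /* its allocation to the pool */
                    work[r] += alloc[p][r];
                done[p] = 1;
                progressed = 1;
            }
        }
        if (!progressed) break;             /* nobody can finish: unsafe */
    }
    for (int p = 0; p < P; p++)
        if (!done[p]) return 0;
    return 1;
}

int main(void) {
    int avail[R]    = {1, 1};               /* made-up example numbers */
    int alloc[P][R] = {{1, 0}, {0, 1}, {1, 1}};
    int need[P][R]  = {{1, 1}, {1, 0}, {0, 0}};
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}
```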

Detecting deadlock:


We can use algorithms to periodically detect whether a deadlock exists in the system, for example by examining the current usage of resources.

Removing the deadlock:


When a deadlock is detected, we need to resolve it. The most direct way is to restart the computer, but as programmers we should be more professional.

  • Terminate the related processes: kill or suspend the deadlocked processes, forcing them to release the resources they hold.
  • Preempt resources: forcibly take the resources occupied by some processes to break the deadlock.
  • Roll back processes: roll the deadlocked processes back to a point before the problem occurred; this is harder to implement.



7. References:


Kobayashi coding: "The 'scheduling algorithms' that big-company interviews love to ask", explained in one go with 20 diagrams.