Computer Basics

Five-layer Internet Protocol Stack:

  1. Application layer (DNS, HTTP): DNS resolves the domain name to an IP address and the HTTP request is sent
  2. Transport layer (TCP, UDP): establishes a TCP connection (three-way handshake)
  3. Network layer (IP, ARP): IP addressing
  4. Data link layer (PPP): encapsulation into frames
  5. Physical layer: transmits the bit stream over physical media (twisted pair, electromagnetic waves, etc.)

OSI seven-layer framework:

  1. Physical layer: bit stream transmission
  2. Data link layer: provides media access, link management, etc.
  3. Network layer: addressing and routing
  4. Transport layer: establishes an end-to-end connection between hosts
  5. Session layer: establishes, maintains, and manages sessions
  6. Presentation layer: handles data formats, data encryption, etc.
  7. Application layer: provides communication between applications

The TCP/IP reference model has four layers. From top to bottom, the TCP/IP protocol suite that supports network transmission is: application layer (HTTP and other protocols) -> transport layer (TCP and UDP) -> network layer (IP) -> link layer

Browser process and rendering mechanism

Browser process

In Chrome, there are 4 main processes:

  • Browser Process: responsible for the browser's tabs (forward/back), the address bar, the bookmark bar, and invisible low-level work such as network requests and file access
  • Renderer Process: responsible for everything displayed inside a tab, also known as the rendering engine
  • Plugin Process: responsible for controlling the plugins used by the web page
  • GPU Process: responsible for the GPU tasks of the whole application

When a URL is entered in the address bar, the browser process sends a request to that URL. After it obtains the HTML, it hands the content to the renderer process to parse. When parsing encounters resources that must be fetched over the network, the renderer asks the browser process to load them; the browser process is also notified when the plugin process must load plugin resources and execute plugin code. After parsing is complete, the renderer process computes the image frames and passes them to the GPU process, which turns them into the image shown on screen

Rendering mechanism

  1. Understanding the rendering mechanism is mainly useful for performance optimization
  2. Understanding how the browser loads resources tells you where to place external style sheets and JS files so they are loaded at the best time
  3. Understanding how the browser parses helps you choose the best way to write markup, build the DOM structure, and organize CSS selectors to improve the browser's parsing speed
  4. Understanding how the browser renders helps you reduce the cost of repaint and reflow

Rendering process

  1. Parse the HTML and generate the DOM tree: the browser cannot understand and use HTML directly, so HTML must be converted into a structure the browser can understand, the DOM tree
  2. Parse the CSS and generate the CSSOM tree
  3. Combine the DOM and CSSOM into the Render Tree
  4. Calculate the layout of the render tree: Layout --> box model
  5. Paint the layout to the screen: Paint --> target surface

From entering a URL to displaying the page

DNS query, TCP connection, HTTP request and response, server response, client rendering

  1. Parse the URL, escaping any illegal characters

Parse the URL into: protocol, host domain name or IP address, port number, path, query parameters, hash value

  2. Start a network thread to issue a complete HTTP request (DNS resolution, TCP/IP connection, the five-layer Internet protocol stack)
  • The input is a domain name, which must be resolved to an IP address by DNS. The lookup first checks caches, roughly in this order: browser cache -> local (OS) cache -> hosts file -> DNS query
  • TCP/IP connection: three-way handshake to open, four-way wave to close
  • The five-layer Internet protocol stack
  3. From the server receiving the request to the backend handling it (load balancing, security interception, and internal backend processing)

Load balancing: all user requests go to a scheduling server (a reverse proxy, for example nginx configured for load balancing). The scheduler then assigns each request, according to its scheduling algorithm, to a server in the corresponding cluster, waits for that server's HTTP response, and forwards it back to the user

  4. HTTP interaction between the backend and the frontend (HTTP message structure, cookies, gzip compression, long and short connections)
  5. The browser receives the HTML, then parses and renders it

Reflow

Reflow is more expensive than repaint: a reflow always triggers a repaint, but a repaint does not necessarily trigger a reflow

When the size, structure, or certain attributes of some or all elements in the Render Tree change, the browser re-renders part or all of the document; this process is called reflow. The first render of the page is itself a reflow

The process is: modify the CSSOM tree --> update the render tree --> re-layout --> repaint

Triggers: the browser window is resized; an element's size or position changes; an element's content changes; an element's font size changes; visible DOM elements are added or removed; CSS pseudo-classes (e.g. :hover) are activated; certain properties are read or certain methods are called (offsetTop, clientTop, etc.)

Repaint

When a style change does not affect an element's position in the document flow (for example color, background-color, visibility), the browser does not need to recalculate the element's geometry and simply repaints it with the new style. The process is: modify the CSSOM tree --> update the render tree --> repaint

How to avoid reflow and redraw

  • css:
    • Avoid using table layout
    • Avoid setting multiple inline styles
    • Apply animation effects to position: absolute/fixed
    • Avoid using CSS expressions (calc())
  • js:
    • Avoid frequently manipulating styles; it is best to change the style attribute (or a class) once rather than many times
    • Avoid frequent DOM manipulation
    • Avoid frequently reading properties that trigger reflow/repaint; if a value is needed several times, cache it in a variable (see the sketch after this list)
    • Use absolute positioning for elements with complex animations to take them out of the document flow; otherwise they cause frequent reflow of their parent and subsequent elements
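
A minimal sketch of avoiding layout thrashing by separating reads from writes; the element ids and class names here are made up for illustration:

```js
const box = document.getElementById('box');

// Bad: reading offsetTop after every write forces a reflow on each iteration.
// items[i].style.top = box.offsetTop + i * 10 + 'px';

// Better: read once, then write in one pass.
const baseTop = box.offsetTop;                     // single read that triggers reflow
const items = document.querySelectorAll('.item');
items.forEach((el, i) => {
  el.style.top = `${baseTop + i * 10}px`;          // writes only, batched by the browser
});

// Alternatively, apply all style changes at once by toggling a class:
box.classList.add('highlighted');                  // one style change instead of many
```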

http

Hypertext Transfer Protocol: an application-layer protocol that transmits data on top of the TCP/IP communication protocol

Overview

HTTP is a convention and specification for transmitting hypertext data such as text, pictures, audio, and video between two points in the computer world

  1. HTTP usually runs on top of the TCP/IP protocol stack, relying on IP for addressing and routing, TCP for reliable data transmission, DNS for domain name lookup, and SSL/TLS for secure communication
  2. Where HTTP sits: in the top layer, the application layer, of the TCP/IP network layering model

Other protocols at the application layer include:

  • FTP: File transfer protocol, used to transfer files between client and FTP server
  • DNS domain name system: provides resolution services between domain names and IP addresses
  • SMTP: mail sending protocol, users send mail through SMTP server
  • DHCP: Dynamic Host Configuration Protocol, DHCP server dynamically assigns IP addresses to clients
  • POP3: Mail receiving protocol, used to receive mail from POP3 server

Features and disadvantages

  1. The HTTP protocol supports the client/server model, which is also a request/response model.
  2. Flexible and extensible: on the one hand the semantics are free, only the basic format is specified and the rest is not strictly restricted; on the other hand any type of data object can be transmitted, such as text, pictures, or audio, with the type marked by Content-Type
  3. Reliable transmission: HTTP is built on TCP/IP, so it inherits this property
  4. Stateless: HTTP does not save previously sent requests or responses; each request is independent and unrelated to the others

HTTP is a protocol that does not save state: it does not persist anything about previous requests or responses between exchanges. Every request and response is brand new and independent (not having to record state reduces the server's CPU and memory consumption)

  5. Persistent connections
  • Concept: one established TCP connection can carry multiple request/response exchanges
  • Reason: early versions of HTTP closed the TCP connection after every exchange and reconnected for the next one. Requested resources keep getting larger, and unnecessarily connecting and disconnecting TCP for every request is a big overhead
  • Feature: as long as neither side explicitly asks to disconnect, the TCP connection stays open
  • Advantages: less overhead from repeatedly opening and closing TCP connections, less load on the server, and faster page loads
  • Note: in HTTP/1.1 all connections are persistent by default (the header field Connection: keep-alive; set it to close to turn this off), while HTTP/1.0 did not standardize this

Disadvantages:

  1. Plaintext transmission (not encrypted), so content may be eavesdropped: protocol messages are sent as text rather than binary data
  2. Message integrity cannot be verified, so content may be tampered with

Integrity here means the accuracy of the information: neither the receiver nor the sender can confirm whether the data was modified in transit

  3. The identity of the communicating party is not verified, so it can be impersonated

HTTP does not authenticate the communicating parties. Anyone can send a request, and the server returns a response to any request it receives (provided the sender's IP address or port is not restricted by the web server's settings)

  4. Statelessness is both a disadvantage and an advantage, depending on the scenario
  • For long-connection scenarios, context must be saved to avoid re-transmitting the same data
  • For applications that only fetch data and need no context, statelessness reduces network overhead
  5. Head-of-line blocking
  • The root cause is that HTTP is built on a request-response model: on the same long-lived TCP connection, if an earlier request has not been answered, later requests are blocked
  • Concurrent connections and domain sharding are used to work around this, but they do not solve it at the HTTP level; they just add TCP connections to spread the risk
  • Multiplexing in HTTP/2 solves the problem at the HTTP level
  • Difference from TCP head-of-line blocking: TCP transmits in units of segments, and its head-of-line blocking means later segments are not passed up to HTTP until an earlier missing segment arrives. HTTP head-of-line blocking is at the request-response level: until an earlier request has been handled, later requests are blocked

Request method

  • GET: Get resources, idempotent operation: there should be no side effects
  • HEAD: Get the header of the message, which is similar to GET but does not return the message body, idempotent operation
  • POST: Create or update resources, non-idempotent operation

The HTTP response to a POST should contain the creation status and the URI of the created resource. Two identical POST requests create two resources on the server with different URIs, which is why POST is "non-idempotent"

  • PUT: Create or update the resource itself, idempotent operation

After the first PUT is executed, subsequent PUTs to the same URI leave the resource on the server in the same state, so the side effect of multiple PUTs is the same as that of one PUT; PUT is therefore "idempotent". Both POST and PUT can create and update resources, but the essential difference is idempotence: the URI of a POST identifies the "recipient" of the resource rather than the resource itself

  • PATCH: partial update of resources, non-idempotent operation
  • DELETE: delete a resource (the opposite of PUT), idempotent operation

Deleting a resource has side effects (it modifies content on the server), but it is "idempotent": the effect on the system is the same however many times it runs, so the caller can retry or refresh the page without worrying about causing errors

  • OPTIONS: Query the types of HTTP methods supported by the server (idempotent operation)

OPTIONS is used to find out which methods the server supports; in practice it is mostly seen when a proxy is used or when a preflight request is made before a cross-origin request. It is idempotent

Before the actual cross-origin request, the browser sends an OPTIONS preflight request to check whether the request is safe. The preflight carries the request URL, the intended request method, and the intended headers; the server checks whether the origin is on its allow list and whether the method and headers are supported. Only if the preflight succeeds is the "complex" request actually sent. A request counts as complex if it:

  • Uses one of PUT/DELETE/PATCH/POST
  • Sends JSON (Content-Type: application/json)
  • Carries custom headers

Why is a preflight needed? Complex requests may have side effects on the server, such as modifying data, so the browser first checks whether the request's origin is on the permission list (see the sketch below)
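
A minimal sketch, assuming a hypothetical endpoint at https://api.example.com, of a request that triggers a preflight; the headers in the comments show the browser's and server's sides of the OPTIONS exchange, simplified:

```js
// "Complex" request: custom header + JSON body, so the browser sends an
// OPTIONS preflight on its own before the real POST.
fetch('https://api.example.com/posts', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json', // triggers a preflight
    'X-Token': 'abc123',                // custom header also triggers a preflight
  },
  body: JSON.stringify({ title: 'hello' }),
});

// Preflight sent automatically by the browser:
//   OPTIONS /posts
//   Origin: https://www.example.com
//   Access-Control-Request-Method: POST
//   Access-Control-Request-Headers: content-type, x-token
//
// Server response that allows the real request:
//   Access-Control-Allow-Origin: https://www.example.com
//   Access-Control-Allow-Methods: POST
//   Access-Control-Allow-Headers: content-type, x-token
```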

  • CONNECT: Establish a connection tunnel for proxy server, idempotent operation
  • TRACE: traces the request to see how it is processed or tampered with along the way, idempotent operation. It can easily be abused for XST (cross-site tracing) attacks

Whether a method is idempotent is judged by whether executing it multiple times has the same effect: if it is idempotent, the result of a successful request does not depend on how many times it is executed. Of the methods above, only POST and PATCH are non-idempotent; the rest are idempotent operations

The difference between GET and POST

  1. Cache: GET requests are actively cached by the browser and leave a history record; POST responses are not cached, and refreshing or going back may resubmit the form
  2. Data type: GET parameters can only be URL-encoded and only accept ASCII characters, while POST has no such restriction
  3. Security: GET usually passes parameters in the URL, where they are easy to find in the history and cache; POST puts them in the request body, which is better suited to sensitive information
  4. Idempotence: GET is idempotent, POST is not
  5. Connection: both are ultimately TCP connections and there is no fundamental difference; the differences come from HTTP conventions and browser/server limitations
  6. TCP: a GET request typically generates one TCP packet, while a POST may generate two
  • Browser behaviour: for a GET request, the browser sends headers and data together and the server responds with 200 (returning the data); for a POST request, the browser may first send the headers, the server responds with 100 Continue, the browser then sends the body, and the server responds with 200 (returning the data)

Although POST may be sent in two packets, the time difference between sending one packet and two is basically negligible when the network is good. When the network is poor, the two-packet approach has some advantage in verifying the integrity of the data
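
A minimal fetch sketch of the two methods; the URL and payload are made up:

```js
// GET: parameters travel in the URL (query string), no request body.
fetch('https://api.example.com/users?id=42')
  .then((res) => res.json())
  .then((user) => console.log(user));

// POST: parameters travel in the request body, here as JSON.
fetch('https://api.example.com/users', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'alice' }),
}).then((res) => res.json());
```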

status code

  • 1xx informative

"The request has been received and needs further processing to complete, but HTTP/1.0 is not supported" 101 Switching Protocols: When HTTP is upgraded to WebSocket, if the server agrees to the change, it will return 101

  • 2xx success status: successfully processed the request
  • 200 OK means that the request sent from the client is correctly processed on the server side, and the response body is usually included in the returned data
  • 201 Created request has been implemented, and new resources have been created according to the needs of the request
  • 202 Accepted The request has been accepted, but has not been executed yet, and there is no guarantee that the request will be completed
  • 204 No Content: Same as 200, but the response message does not contain the body of the entity
  • 206 Partial Content: The client has made a range request and the server has processed it normally. The header of the response message should also include the Content-Range field to specify the range of the entity. The usage scenario is HTTP block download and resumable transmission
  • 3xx Redirection: Redirection status, resource location has changed, need to re-request

Temporary redirects: 302, 303, 307; permanent redirect: 301

  • 301: permanent. The latest URI is in the Location field of the response headers. Scenario: the website has moved and the old address is no longer used; if a user still comes in through the old address, the server returns 301 with the new URI in Location. Browsers cache this by default to reduce server load, and on later visits go straight to the redirected address
  • 302 Found: temporary. Unlike 301, it means the requested resource has temporarily moved to another URI; because it is temporary, it is not cached
  • 303: temporary. The requested resource has temporarily moved to another URI, but it explicitly tells the client to fetch it with the GET method
  • 304 Not Modified: returned when a conditional request is allowed to use the cached resource. Although grouped under 3xx, it has nothing to do with redirection. Scenario: when cache negotiation succeeds, the server returns 304 Not Modified, meaning the requested resource has not changed on the server and the cached copy can be used
  • 307: temporary, stricter than 302: the redirected request must not change its method or body. Scenario: the HSTS policy forces the client to use HTTPS; when a site upgraded from HTTP to HTTPS is still accessed over HTTP, 307 is returned
  • 4xx client error
  • 400: the request message has a syntax error, without specifying where
  • 401: HTTP authentication is required, or the user's authentication has failed
  • 403: access to the requested resource is refused (for example legal prohibition or sensitive content)
  • 404: the requested resource was not found on the server
  • 408: the client request timed out; the server waited too long
  • 409: The requested resource may cause a conflict
  • 413: Request body data is too large
  • 429: Too many requests sent by the client
  • 5xx server error
  • 500: Internal error of the server, but did not specify where it is, a bit like 400
  • 501: Indicates that the function requested by the client is not yet supported
  • 502: The server itself is normal, but the proxy server cannot get a legal response
  • 503: The server is overloaded or shut down for maintenance

What header fields are there

Further reading: juejin.cn/post/684490...

Request header: the client sends the requested message to the server

  • Accept the media type that the client or proxy can handle
  • Accept-Encoding Prioritize the encoding format that can be processed
  • Accept-Language gives priority to natural language that can be processed
  • Accept-Charset gives priority to the character set that can be processed
  • If-Match compare the entity tag (ETag)
  • If-None-Match compare the entity tag (ETag); the opposite of If-Match
  • If-Modified-Since compare the resource update time (Last-Modified)
  • If-Unmodified-Since compare the resource update time (Last-Modified); the opposite of If-Modified-Since
  • If-Range send the entity's byte range request only if the resource has not been updated
  • Range entity's byte range request
  • Authorization web authentication information
  • Proxy-Authorization proxy server requires web authentication information
  • Host The server where the resource is requested
  • From user's email address
  • User-Agent client program information
  • Max-Forwards maximum number of hops
  • TE priority of transfer encodings
  • Referer URL of the page the request originated from
  • Expect expects a specific behavior of the server

Response header: when responding from the server to the client

  • Accept-Ranges acceptable byte range
  • Age estimates the elapsed time of resource creation
  • Location The URI to redirect the client
  • Vary controls caching by proxy servers
  • ETag can represent a string of unique resources
  • WWW-Authenticate server requires client authentication information
  • Proxy-Authenticate proxy server requires client authentication information
  • Server server information
  • Retry-After used together with status code 503; indicates when the client should send the next request to the server

Common header fields: used by both request and response messages

  • Cache-Control control cache
  • Connection connection management; a hop-by-hop header
  • Upgrade switch to another protocol
  • Via information about proxy servers along the path
  • Warning error and warning notifications
  • Transfer-Encoding transfer encoding of the message body
  • Trailer list of header fields that appear at the end of the message
  • Pragma message directive (kept for backward compatibility)
  • Date Date the message was created

Entity header fields: headers used for the entity part of request and response messages

  • Allow HTTP request methods supported by the resource
  • Content-Language entity resource language
  • Content-Encoding entity encoding format
  • Content-Length entity size (bytes)
  • Content-Type entity media type
  • Content-MD5 entity message summary
  • Content-Location an alternative URI for the resource
  • Content-Range the position of the returned entity body within the full resource
  • Last-Modified the time the resource was last modified
  • Expires the time after which the entity body expires

The role of keep-alive

In early HTTP/1.0, a connection had to be created for every HTTP request and the TCP connection was closed after each exchange. Closing means the next request has to go through the three-way handshake again, which costs resources and time. To reduce this overhead and shorten response times, persistent connections were added later via the HTTP request header

Connection: keep-alive
This tells the other side not to disconnect after the request completes, but to keep the TCP connection open so that one handshake can serve multiple exchanges; the next request continues on the same connection (a small client sketch follows the list below).

  • Less CPU and memory usage (due to fewer simultaneous open connections)
  • Allow HTTP pipelining of requests and responses
  • Reduce congestion control (reduced TCP connections)
  • Reduce the delay of subsequent requests (no need for handshake)
  • No need to close the TCP connection to report errors
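
A minimal Node.js sketch (built-in http module; the host name is a placeholder) of a client reusing one TCP connection for several requests through a keep-alive agent:

```js
const http = require('http');

// keepAlive: true keeps sockets open after a response, so later requests
// skip the TCP three-way handshake.
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

for (let i = 0; i < 3; i++) {
  http.get({ host: 'example.com', path: `/page${i}`, agent }, (res) => {
    res.resume(); // drain the body
    console.log(i, res.statusCode, res.headers.connection); // often "keep-alive"
  });
}
```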

So why do chat applications use sockets instead of HTTP?

Because once a long-lived connection is established, a socket allows two-way communication: the client can push to the server and the server can push to the client, while HTTP is one-way (request, then response), so a socket is needed

Socket: HTTP is an application-layer protocol and TCP a transport-layer protocol. A normal HTTP request operates at the HTTP level and lets HTTP drive TCP rather than calling TCP directly; a socket can be seen as an encapsulation of TCP that is convenient to call directly

Browser caching mechanism

DNS cache

DNS is short for Domain Name System

As a distributed database that maps domain names to IP addresses on the Internet, it lets users access sites conveniently without having to remember machine-readable IP addresses. The DNS protocol runs on top of UDP, using port 53

DNS resolution: the process of obtaining the IP address that corresponds to a domain name is called domain name resolution (or host name resolution): www.dnscache.com (domain name) --DNS resolution--> 11.222.33.44 (IP address)

Where there is dns, there is cache. Browser, operating system, Local DNS, root domain name server, they will all cache DNS results to a certain extent

The dns caching process is as follows:

  1. First check the browser's own DNS cache; if the entry exists, resolution ends here.
  2. If it is not in the browser's cache, read the operating system's hosts file to see whether there is a matching entry; if so, resolution ends here.
  3. If the hosts file has no mapping either, query the local DNS server (the ISP's server, or a manually configured DNS server); if it has the entry, resolution ends here.
  4. If the local DNS server does not have it, it queries the root name servers and resolves the name down the hierarchy (a small lookup sketch follows this list)
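
A minimal Node.js sketch (built-in dns module; the host name is a placeholder) contrasting a lookup that goes through the OS resolver and its caches with a query sent straight to a DNS server:

```js
const dns = require('dns').promises;

async function main() {
  // dns.lookup() uses the OS resolver: hosts file + system-level caching.
  const viaOs = await dns.lookup('www.example.com');    // { address, family }
  console.log('OS resolver:', viaOs.address);

  // dns.resolve4() always queries a DNS server for A records directly.
  const viaDns = await dns.resolve4('www.example.com'); // array of IPv4 addresses
  console.log('DNS query  :', viaDns);
}

main().catch(console.error);
```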

CDN cache

The full name is Content Delivery Network, that is, content delivery network

CDN is a caching server, which can get resources in the shortest request time at the nearest CDN node, which plays a role of shunting and reduces server load pressure

Regarding the CDN cache: after the browser's local cache becomes invalid, the browser sends a request to the CDN edge node. Similar to the browser cache, CDN edge nodes also have a caching mechanism. Caching strategies differ between service providers, but they generally follow the HTTP standard and set the edge node's cache lifetime through the Cache-Control: max-age field in the HTTP response header

When the browser requests data from the CDN node, the CDN node will determine whether the cached data has expired. If the cached data has not expired, it will directly return the cached data to the client; otherwise, the CDN node will send a back-to-origin request to the server. The server pulls the latest data, updates the local cache, and returns the latest data to the client. CDN service providers will generally specify the CDN cache time based on file suffixes and directories, and provide users with more refined cache management.

CDN advantage

  1. The CDN node solves the problem of cross-operator and cross-regional access, and the access delay is greatly reduced.
  2. Most of the requests are completed at the edge nodes of the CDN, and the CDN plays a role in offloading, reducing the load on the origin server

Browser cache

Browser cache is actually the browser saves all resources obtained through HTTP, which is a behavior of the browser to store network resources locally

Where are the cached resources stored

  • memory cache: the resource is cached in memory; on the next access it is taken directly from memory instead of being downloaded again
  • disk cache: the resource is cached on disk; on the next access it is read directly from disk instead of being downloaded again (its direct operation object is CurlCacheManager)

A CSS file is used once the page is rendered and is not read frequently afterwards, so it is not worth keeping in memory; scripts such as JS, however, may be executed at any time. If a script sat only on disk, executing it would require fetching it from disk into memory first, and that IO overhead could make the browser feel unresponsive

Priority
  1. Search in the memory first, if there is, load it directly.
  2. If it does not exist in the memory, it will be searched in the hard disk, and if there is, it will be loaded directly.
  3. If there is none in the hard disk, then a network request is made.
  4. The requested resource is cached to the hard disk and memory

The two kinds of cache can exist at the same time, and the strong cache has higher priority than the negotiated cache: when the strong cache hits, the cached data is used directly and cache negotiation is not performed.

Advantages

  1. Reduced redundant data transmission
  2. Reduce the burden on the server, greatly improve the performance of the website
  3. Speed up the loading of web pages on the client side
Classification

When the browser requests resources from the server, it first judges whether it hits the strong cache, and then judges whether it hits the negotiation cache!

Strong cache

When the browser loads a resource, it first decides, based on the Expires and Cache-Control information in the headers of the locally cached copy, whether the strong cache is hit. If it is, the cached resource is used directly and no request is sent to the server.

Expires : The data expiration time returned by the server: the value is an absolute time GMT format time string, such as Expires:Mon,18 Oct 2066 23:59:59 GMT. Represents the expiration time of this resource. Before this time, the cached data is used directly

Disadvantage: because the expiration time is an absolute time, a large clock difference between server and client causes cache confusion, so Cache-Control is generally used instead

Cache-Control: mainly judged by the max-age value, a relative time; for example Cache-Control: max-age=3600 means the resource is valid for 3600 seconds. Commonly used values (a server-side sketch follows this list):

  • no-cache: the response may be cached, but the cache must be revalidated with the server before every use (i.e. the negotiation cache is used)
  • no-store: The cache is forbidden, and the data must be re-requested every time
  • public: Both the client and proxy server (end user and CDN) can be cached
  • private: It can only be cached by the browser of the end user, and it is not allowed to be cached by relay cache servers such as CDN.
  • Cache-Control and Expires can be enabled at the same time in the server configuration, and Cache-Control has a higher priority when enabled at the same time
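
A minimal sketch, assuming a plain Node.js http server and made-up URLs, of how responses opt into the strong cache with Cache-Control:

```js
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/logo.png') {
    // Strong cache: the browser reuses this resource for an hour
    // without contacting the server again.
    res.setHeader('Cache-Control', 'public, max-age=3600');
    res.end('...image bytes...');
  } else {
    // The HTML itself: always revalidate with the server.
    res.setHeader('Cache-Control', 'no-cache');
    res.end('<html>...</html>');
  }
}).listen(3000);
```
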
Negotiation cache

When the strong cache is not hit, the browser sends a request to the server, and the server decides whether the negotiated cache is hit according to Last-Modified/If-Modified-Since and ETag/If-None-Match in the headers. If it hits, the server returns 304 to tell the browser that the resource has not been updated and the local cache can be used.

Last-Modified/If-Modified-Since

Last-Modified: when the server responds to a request, it returns the time the resource was last modified. Because a resource may be modified several times within one second, or because the server and client clocks differ, the cache can be judged incorrectly, which is why ETag was introduced later

If-Modified-Since: when the browser requests the resource again, the request header carries this field with the last modification time taken from the cache. When the server receives the request and finds If-Modified-Since, it compares it with the resource's last modification time to decide whether the cache hits. If they match, it returns a 304 response and the browser simply reads the data from its cache; otherwise it returns the resource together with an updated Last-Modified

  • If the resource has been modified, the server transmits the whole resource and returns 200 OK
  • If it has not been modified, the server only transmits the response headers and returns 304 Not Modified

Disadvantages:

Disadvantages: if a resource changes again within a very short time, Last-Modified (with its one-second granularity) may not change; and if a resource changes periodically but is modified back to its original content within a cycle, the cache could actually still be used, yet Last-Modified says otherwise. Hence ETag

ETag/If-None-Match

Etag : When the server responds to the request, this field tells the browser the unique identifier of the current resource generated on the server (the generation rule is determined by the server). ETag can ensure that each resource is unique, and resource changes will cause ETag changes. The server judges whether the cache is hit according to the If-None-Match value sent by the browser

If-Match : conditional request, carrying the Etag of the resource in the last request, the server judges whether the file has new modification according to this field

If-None-Match: when requesting the server again, the browser's request header contains this field, whose value is the identifier saved in the cache. When the server receives it, it compares If-None-Match with the current unique identifier of the requested resource:

  • If they differ, the resource has been modified: the server responds with the whole resource content and status code 200
  • If they are the same, the resource has not been modified: the server responds with headers only and status code 304, and the browser takes the data from its cache

ETag is more precise than Last-Modified: it is a strong validator that requires the resource to match at the byte level, and it has higher priority. The server checks the ETag first; if it matches, it goes on to compare Last-Modified and finally decides whether to return 304
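
A minimal sketch, assuming a plain Node.js http server, of ETag-based negotiation caching; hashing the body with MD5 here just stands in for whatever fingerprint the server actually uses:

```js
const http = require('http');
const crypto = require('crypto');

const body = '<html>hello</html>';
const etag = crypto.createHash('md5').update(body).digest('hex');

http.createServer((req, res) => {
  if (req.headers['if-none-match'] === etag) {
    res.statusCode = 304;                        // not modified: headers only, no body
    res.end();
    return;
  }
  res.setHeader('ETag', etag);                   // browser stores this and echoes it back
  res.setHeader('Cache-Control', 'no-cache');    // always revalidate before use
  res.end(body);                                 // 200 with the full body
}).listen(3000);
```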

Summary: When the browser visits a resource that has already been visited again, it will do this:

  • See if it hits the strong cache. If it hits, the cache is used directly.
  • If the strong cache is not hit, a request is sent to the server to check whether the negotiation cache is hit.
  • If it hits the negotiation cache, the server will return a 304 to tell the browser to use the local cache.
  • Otherwise, return the latest resource

Cache scene

For most scenarios, strong caching and negotiation caching can be used to solve, but in some special cases, you may need to choose a special caching strategy

  • For resources that do not need to be cached, use Cache-Control: no-store to indicate that the resource must not be cached
  • For frequently changing resources, use Cache-Control: no-cache together with ETag: the resource is cached, but every use sends a request to ask whether it has been updated
  • For code files (JS/CSS), use Cache-Control: max-age=31536000 together with the strong cache, and fingerprint the file names; once a file's content changes its name changes, and the new file is downloaded immediately (see the sketch after this list)
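
A minimal sketch, assuming a plain Node.js handler and webpack-style hashed file names such as app.3f2a1c.js (all made up), of the usual "fingerprinted file + long max-age" strategy:

```js
const http = require('http');

http.createServer((req, res) => {
  if (/\.[0-9a-f]{6,}\.(js|css)$/.test(req.url)) {
    // Fingerprinted assets never change under the same name: cache for a year.
    res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
    res.end('/* bundled code */');
  } else {
    // index.html references the hashed names, so it must always be revalidated.
    res.setHeader('Cache-Control', 'no-cache');
    res.end('<script src="/app.3f2a1c.js"></script>');
  }
}).listen(3000);
```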

HTTP2

Binary framing

A binary framing layer is added between the application layer and the transport layer (binary protocol parsing is more efficient) to improve transmission performance and achieve low latency and high throughput

Server push

The server can actively push other resources when sending the page HTML, instead of waiting for the browser to parse to the corresponding location, initiate a request and then respond. For example, the server can actively push the JS and CSS files to the client, instead of sending these requests when the client parses the HTML.

The server can actively push, and the client has the right to choose whether to receive it. If the resource pushed by the server has been cached by the browser, the browser can reject it by sending an RST_STREAM frame. Active push also complies with the same-origin policy, and the server will not push third-party resources to the client casually
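
A minimal Node.js sketch (built-in http2 module; the TLS key and certificate paths are placeholders) of server push: when the HTML is requested, a stylesheet is pushed before the browser asks for it. The client may refuse the push, which the error callback accounts for:

```js
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),   // placeholder paths
  cert: fs.readFileSync('server.crt'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push /style.css alongside the HTML response.
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return;                   // client refused the push (e.g. RST_STREAM)
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end('body { color: teal; }');
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<link rel="stylesheet" href="/style.css">');
  }
});

server.listen(8443);
```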

Header compression

http header compression, reducing volume

HTTP/1.x will repeatedly carry infrequently changed and lengthy header data in the request and response, which will bring additional burden to the network

HTTP/2 uses "header tables" on the client and server to track and store previously sent key-value pairs. For the same data, it is no longer sent through each request and response.

The header table always exists during the duration of the HTTP/2 connection, and is gradually updated by the client and server together;

tips:
It can be understood that only the difference data is sent instead of all, thereby reducing the amount of information in the header

Multiplexing

A single TCP connection can carry multiple requests and responses at the same time: HTTP/2 interleaves them as independent streams over one connection.

This goes further than HTTP/1.1's Connection: keep-alive header, which only tells the server to keep the connection open for later requests (avoiding another TCP three-way handshake) but still handles them one after another.

TCP's own keepAlive is different again: it keeps the connection between client and server alive by sending heartbeat packets from time to time to check whether the connection has dropped. Without such a mechanism, one side could disconnect without the other knowing, which ties up server resources.

Request priority

If a stream is given a priority, it is processed according to that priority, and the server decides how many resources to devote to handling the request

Review questions: can the same domain name resolve to different IPs, and what role does the IP play? Do you know the lookup order (browser DNS cache, local hosts file, router, DNS servers from the root down through the top-level domains) and the concept of nearby access?

tcp/udp

tcp

TCP is a connection-oriented, reliable, byte stream-based transport layer protocol, the core:

  1. Connection-oriented. Refers to the connection between the client and the server. Before the two parties communicate with each other, TCP needs a three-way handshake to establish a connection, while UDP does not have a corresponding process of establishing a connection.
  2. Reliability. TCP goes to great lengths to guarantee the reliability of the connection. Where does this reliability show? In two aspects: it is stateful, and it is controllable

TCP will accurately record which data has been sent, which data has been received by the other party, and which have not been received. It also ensures that the data packets arrive in order, and no errors are allowed. This is stateful. When aware of packet loss or poor network environment, TCP will adjust its behavior according to the specific situation, control its own transmission speed or retransmit. This is controllable

Correspondingly, UDP is stateless and uncontrollable

  3. Byte-stream oriented. UDP transmits datagrams because it simply inherits the characteristics of the IP layer; TCP, in order to maintain state, turns the IP packets into a continuous byte stream.

Three-way handshake

The three-way handshake confirms two capabilities of each party: the ability to send and the ability to receive. Both sides start in the CLOSED state. The server then starts listening on a port and enters the LISTEN state.

  1. The client actively initiates a connection, sends SYN, and becomes a SYN-SENT state.
  2. The server receives it, replies with SYN and ACK (acknowledging the client's SYN), and enters the SYN-RCVD state.
  3. The client then sends an ACK to the server, and it becomes the ESTABLISHED state; after the server receives the ACK, it also becomes the ESTABLISHED state

SYN consumes a sequence number, so the next sequence number must be incremented by 1.

Anything that requires acknowledgement from the peer consumes a sequence number of the TCP segment. SYN needs to be acknowledged by the peer while ACK does not, so SYN consumes a sequence number and ACK does not.

Why not two handshakes? Root cause: two handshakes cannot confirm the client's receiving capability. Suppose the client sends a SYN that gets stuck somewhere in the network and does not arrive. TCP assumes the packet is lost and retransmits; the connection is established with the retransmitted SYN and later closed. If the stuck SYN then finally reaches the server, with only two handshakes the server would consider a connection established as soon as it receives it and sends its reply, but the client has long since disconnected, so the server wastes connection resources

Why not four? The purpose of the three-way handshake is to confirm both parties' ability to send and to receive. Would four handshakes work? Of course, and so would a hundred, but three are enough to solve the problem; more adds nothing

Can data be carried during the three-way handshake? The third handshake can be carried. The first two handshake cannot carry data

If data could be carried in the first two handshakes, then an attacker could put a large amount of data in the first SYN, and the server would inevitably spend more time and memory processing it, increasing the risk of attack.

In the third handshake, the client is already in the ESTABLISHED state and has been able to confirm that the server's receiving and sending capabilities are normal. This time it is relatively safe and can carry data

Wave four times

At the beginning, both parties were in ESTABLISHED state

  1. The client decides to disconnect and sends a FIN to the server. After sending it, the client enters the FIN-WAIT-1 state and is half-closed: it can no longer send data to the server, only receive.
  2. The server acknowledges the FIN after receiving it and enters the CLOSE-WAIT state.
  3. The client receives the server's acknowledgement and enters the FIN-WAIT-2 state. Later, the server sends its own FIN to the client and enters the LAST-ACK state.
  4. When the client receives the server's FIN, it enters the TIME-WAIT state and sends an ACK to the server. The client then waits long enough, specifically 2 MSL (Maximum Segment Lifetime). If it receives no retransmission from the server during this time, the ACK arrived and the wave ends; otherwise the client resends the ACK.

Why wait 2 MSL? If the client did not wait and simply left, data packets that the server had already sent and that are still in flight could arrive after the client's port has been taken over by a new application, which would then receive useless stale packets and get confused. The safest thing is to wait until all packets sent by the server have died out before launching a new application on that port.

  • 1 MSL ensures that the active closer's final ACK of the four-way wave can reach the other end
  • 1 more MSL ensures that, if the other end did not receive the ACK, its retransmitted FIN can still arrive

This is the meaning of waiting for 2MSL

Why wave four times instead of three times?

Because the server does not return its FIN immediately after receiving the client's FIN: it must wait until all of its remaining data has been sent before sending the FIN. So it first sends an ACK to confirm it received the client's FIN, and sends its own FIN after a delay. This is what makes it four waves

What's the problem if it is three waves?

It would mean the server combines its ACK and FIN into a single wave. The delay before that combined message could make the client think its FIN never reached the server, so the client would keep retransmitting the FIN.

tcp header format

  • Sequence number
  • Acknowledgement number
  • Data offset
  • ACK (acknowledge) flag
  • SYN (synchronize) flag
  • FIN (finish) flag
  • Window

tcp state machine

  • Three-way handshake for connection establishment
  • Four-way wave for disconnection

Congestion handling

  • Slow start algorithm
  • Congestion Avoidance Algorithm
  • Fast retransmission
  • Fast recovery

tcp and udp www.cnblogs.com/fundebug/p/

zhuanlan.zhihu.com/p/24860273

Why does TCP need the handshake (client -> server -> client)? Only after the connection is successfully established does the actual data transfer begin

**TCP concurrency limit:** browsers limit the number of concurrent TCP connections under the same domain name (ranging from 2 to 10); optimizing around this limit relates to the GET and POST differences discussed earlier

The handshake also prevents a stale, invalid connection-request segment that suddenly arrives at the server from creating a connection by mistake:

  1. The client sends a SYN (synchronization sequence number) request, enters the SYN_SENT state, and waits for confirmation
  2. The server receives the SYN packet, acknowledges it, sends a SYN + ACK packet, and enters the SYN_RECV state
  3. The client receives the SYN + ACK packet and sends an ACK packet; both sides enter the ESTABLISHED state

Wave four times

  1. The client sends a FIN to the server to close the client-to-server data transfer; the client enters the FIN_WAIT state
  2. After receiving the FIN, the server sends an ACK packet to the client; the server enters the CLOSE_WAIT state
  3. The server sends a FIN to the client to close the server-to-client data transfer; the server enters the LAST_ACK state
  4. After the client receives the FIN, it enters the TIME_WAIT state and sends an ACK to the server; the server then enters the CLOSED state

Why three handshakes to establish but four waves to close? When establishing the connection, the server can send SYN and ACK in one segment. When closing, the server's receipt of the client's FIN only means the client will send no more data, but the client can still receive, and the server may still have data to send: it can either close immediately or send the remaining data first and then its FIN. Therefore the ACK and FIN are generally sent separately

udp

UDP is a connectionless transport layer protocol

  • Unreliable
  • Message-oriented
  • Efficient
  • Flexible transfer modes (supports one-to-one, one-to-many, many-to-many, and many-to-one transmission)

cookie, localStorage, sessionStorage

Same: store data locally (browser side)

different:

  • localStorage: as long as the protocol, host name, and port are the same, any page can read and modify the same localStorage data; it is persistent storage and remains until it is deleted manually
  • sessionStorage: in addition to requiring the same protocol, host name, and port, it is also scoped to a single window (browser tab); it is destroyed when the session ends, i.e. when that tab is closed

Cookie data is sent to the server with every HTTP request, while localStorage and sessionStorage are not

cookie

HTTP is a stateless protocol: each request is completely independent, and the server cannot identify the current visitor or tell whether the sender of the previous request and of this one are the same person. To track the session between server and browser, a state must be maintained that tells the server whether two requests come from the same browser; this state is implemented with cookies or sessions

  • Cookies are stored on the client: A cookie is a small piece of data sent by the server to the user's browser and saved locally. It will be carried and sent to the server when the browser initiates a request to the same server next time.
  • Cookies are not cross-domain: each cookie is bound to a single domain name and cannot be used under other domain names. Shared use between the first-level domain name and the second-level domain name is allowed (by domain)

The interaction process between the client and the server about Cookie

  1. Client requests server
  2. The server generates cookie information and uses Set-Cookie to add it to the response message header
  3. The client stores cookies on the browser after getting it
  4. In the next request, pass the information to the server by writing the information into the Cookie field in the header of the request message
  • Set-Cookie: Set the cookie information to be delivered to the client in the header of the response message: Set-Cookie: name=xxx; HttpOnly
  • Cookie: Cookie information passed by the client to the server: Cookie: name=xxx

Life cycle

  • Session cookies: if a cookie contains no expiration date, it is a session cookie, kept in memory and never written to disk. When the browser is closed, the cookie is lost, because no Expires or Max-Age directive was specified. However, browsers may use session restoring, which can make session cookies effectively permanent, as if the browser had never been closed
  • Persistent Cookies: Will not expire when the client is closed, but expire on a specific date (Expires) or a specific length of time (Max-Age)
Set-Cookie: id=a3fWa; Expires=Wed, 21 Oct 2015 07:28:00 GMT

Important attributes

  • name=value:
    The key-value pair: the cookie's name and its value, both strings. A Unicode value must be character-encoded, and binary data must be BASE64-encoded
  • domain:
    Specifies the host to which the cookie may be sent. If not specified, it defaults to the current domain name (excluding subdomains). If Domain is specified, subdomains are generally included: with Set-Cookie: Domain=mozilla.org the cookie is also sent to subdomains such as developer.mozilla.org
  • path:
    Specifies the path under which the cookie takes effect. The default is '/'. If set to /abc, only routes under /abc (such as /abc/read) can access the cookie

The Domain and Path identifiers define the scope of the cookie: which URL the cookie should be sent to

  • maxAge: when the cookie expires, in seconds
    • 0: delete the cookie immediately
    • Positive number: the browser persists the cookie; even if the user closes the browser or the computer, the cookie stays valid for max-age seconds
    • Negative number: a session cookie; it is not persisted or written to a cookie file, but kept in the browser's memory, so it disappears when the browser is closed. The default max-age value is -1

The unit is seconds. A positive value means the cookie expires after maxAge seconds; a negative value means a temporary cookie that becomes invalid when the browser is closed and is never saved; 0 means delete the cookie. The default is -1. It is preferred over expires.

  • expires:
    Expiration time: the cookie becomes invalid after the set point in time. By default a cookie lasts only for the session: when the browser is closed and the session ends, the cookie is deleted
  • secure:
    Indicates that the cookie is only transmitted over HTTPS; the default is false. When secure is true, the cookie is not sent over HTTP and is only valid over HTTPS

  • httpOnly:
    A cookie with HttpOnly can only be transmitted via the HTTP protocol and cannot be accessed from JS scripts (document.cookie): Set-Cookie: name=xxx; HttpOnly. Because an HttpOnly cookie can only be used during HTTP requests and cannot be read by JS, and many XSS attacks work by stealing cookies, setting the HttpOnly attribute protects cookies and is an important defence against XSS attacks.

Questions to consider when using cookies

  • Because it is stored on the client, it is easy to be tampered with by the client, and the legality needs to be verified before use
  • Do not store sensitive data, such as user passwords, account balances
  • Use httpOnly to improve security to a certain extent
  • Minimize the size of the cookie, and the amount of data that can be stored cannot exceed 4kb
  • Set the correct domain and path to reduce data transmission
  • cookie cannot cross domain
  • A browser can store up to 20 cookies for a website, and browsers generally only allow 300 cookies to be stored
  • The mobile terminal does not support cookies very well, and the session needs to be implemented based on cookies, so tokens are commonly used on mobile terminals

set up:

Client settings:

document.cookie = 'Name=Value'; for example document.cookie = 'username=cfangxu; domain=baike.baidu.com' also sets the effective domain. Note: the client can set the following cookie options: expires, domain, path, secure (with the condition that a secure cookie can only be set successfully from a page served over HTTPS), but it cannot set the HttpOnly option

Server-side settings

No matter if you request a resource file (such as html/js/css/picture) or send an ajax request, the server will return a response. And there is an item in the response header called set-cookie, which is specifically used by the server to set cookies

The Set-Cookie header is a string in the following format (the parts in square brackets are optional): Set-Cookie: value[; expires=date][; domain=domain][; path=path][; secure]. Note: one Set-Cookie header sets only one cookie; to set several cookies, send the same number of Set-Cookie headers. The server can set all cookie options: expires, domain, path, secure, HttpOnly. These options specified by Set-Cookie are only used on the browser side and are not sent back to the server
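
A minimal Node.js sketch (built-in http module; cookie names and values are made up) of the server setting several cookies, one Set-Cookie header per array element:

```js
const http = require('http');

http.createServer((req, res) => {
  // Each element of the array becomes one Set-Cookie response header.
  res.setHeader('Set-Cookie', [
    'sessionId=abc123; Path=/; HttpOnly',       // not readable from JS
    'theme=dark; Max-Age=31536000; Secure',     // persists for a year, HTTPS only
  ]);
  res.end('cookies set');
}).listen(3000);
```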

Read

Reading document.cookie for the current site returns a string containing all of the site's cookies (to mitigate cross-site scripting (XSS) attacks, this only returns non-HttpOnly cookies), joined by a semicolon and a space, for example username=chenfangxu; job=coding

  • Modify a cookie: re-assign document.cookie; the old value is overwritten by the new one. When setting the new cookie, the path/domain options must match the old cookie, otherwise the old value is not modified and a new cookie is added instead.
  • Delete a cookie: set its expiration time to a time in the past; the path/domain options must also match the old cookie

localStorage

  • Life cycle: persistent local storage; unless the data is actively deleted, it never expires.
  • The stored data is shared across pages of the same origin.
  • When a page writes to localStorage (add, modify, delete), the storage event is not fired on that page, but it is fired on other pages of the same origin (see the sketch after this list).
  • Size: about 5 MB (depends on the browser vendor)
  • In non-IE browsers it also works for pages opened from the local file system; in IE the page must be served from a server.
  • localStorage is restricted by the same-origin policy
  1. Set: localStorage.setItem('username', 'cfangxu');
  2. Get: localStorage.getItem('username') or localStorage.key(0) // get the first key name
  3. Delete: localStorage.removeItem('username'), or clear everything at once with localStorage.clear()
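
A minimal sketch of the storage event mentioned above: a write made in one tab fires the event in other tabs of the same origin, not in the tab that wrote:

```js
// In tab A: listen for changes made by other same-origin tabs.
window.addEventListener('storage', (e) => {
  console.log(e.key, e.oldValue, '->', e.newValue); // e.g. "username", "cfangxu" -> "newName"
});

// In tab B (same origin): this write triggers the event in tab A.
localStorage.setItem('username', 'newName');
```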

sessionStorage

sessionStorage stores data for one session locally: the data can only be accessed by pages in the same session (the same tab) and is destroyed when the session ends. As long as that browser window is not closed, the data survives page refreshes and navigation to other pages of the same site within the tab. Once the window is closed, the sessionStorage is destroyed; a similar page opened in a new window does not see it either

Other storage

session

HTTP is a stateless protocol: every request the server receives from a client is treated as brand new, and the server keeps no record of the client's previous requests. The main purpose of Session and Cookie is to make up for this stateless nature of HTTP.

A mechanism for recording the session state between the server and the client

Sessions are implemented on top of cookies: the session data is stored on the server, while the sessionId is stored in a cookie on the client. The session authentication flow:

  • When the user requests the server for the first time, the server creates the corresponding Session according to the relevant information submitted by the user
  • When the request is returned, the unique identification information SessionID of this Session is returned to the browser
  • After the browser receives the SessionID information returned by the server, it will store this information in the Cookie, and the Cookie will record which domain name this SessionID belongs to
  • When the user visits the server again, the browser automatically checks whether there is cookie information for this domain; if so, it sends the cookie with the request. The server extracts the SessionID from the cookie and looks up the corresponding Session. If nothing is found, the user is not logged in or the login has expired; if a Session is found, the user is logged in and the subsequent operations can proceed (a minimal sketch of this flow follows)
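A minimal sketch of this flow using Node's built-in http module and an in-memory Map as the session store (a real application would use a persistent or shared store):

```js
const http = require('http');
const crypto = require('crypto');

const sessions = new Map(); // sessionId -> session data, held on the server

http.createServer((req, res) => {
  // try to read the sessionId from the Cookie header sent by the browser
  const match = (req.headers.cookie || '').match(/sessionId=([^;]+)/);
  let sid = match && match[1];

  if (!sid || !sessions.has(sid)) {
    // first visit (or expired session): create a session and return its id via Set-Cookie
    sid = crypto.randomBytes(16).toString('hex');
    sessions.set(sid, { visits: 0 });
    res.setHeader('Set-Cookie', `sessionId=${sid}; HttpOnly; Path=/`);
  }

  const session = sessions.get(sid);
  session.visits += 1;
  res.end(`visits in this session: ${session.visits}`);
}).listen(3000);
```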

Difference from cookie:

  • Security: Session is safer than Cookie, Session is stored on the server side, Cookie is stored on the client side
  • Different data types: a cookie only stores string data, so other types must be converted to strings first; a session can store data of any type.
  • Different validity periods: a cookie can be kept for a long time (e.g. for a "remember me" login), while a session usually expires quickly, either when the client is closed (by default) or when the session times out.
  • Different storage sizes: a single cookie cannot hold more than 4 KB of data, while a session can store far more than a cookie; however, when there are many visitors, sessions consume a lot of server resources.

Disadvantages of Session

The Session mechanism has a shortcoming: suppose server A stores the Session. Under load balancing, if A's traffic surges for a while, subsequent requests may be forwarded to server B, but server B does not have A's Session, so the session effectively becomes invalid.

Questions to consider when using session

  • Sessions are stored on the server; when many users are online at the same time, these sessions take up a lot of memory, and expired sessions need to be cleaned up regularly on the server side.
  • When the website is deployed as a cluster, there is the problem of sharing sessions among multiple web servers: a session is created by one server, but the server handling a later request is not necessarily the one that created it, so that server cannot see the login credentials previously put into the session.
  • When multiple applications want to share one session, besides the problem above they also run into cross-domain issues, because different applications may be deployed on different hosts and each application has to handle cross-domain cookies.
  • The sessionId is stored in a cookie, so what if the browser disables or does not support cookies? A common workaround is URL rewriting, appending the sessionId as a URL parameter, so a session does not strictly have to be implemented with cookies.
  • Mobile clients do not support cookies well, and sessions depend on cookies, so tokens are commonly used on mobile

token

A credential required to access resource interfaces (APIs)

Generally, after the user logs in successfully with a username and password, the server digitally signs the login credential, and the resulting encrypted string is used as the token

Composition: uid (the user's unique identity), time (timestamp of the current moment), sign (signature: the first part of the token compressed by a hash algorithm into a hexadecimal string of fixed length)
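A minimal sketch of that composition using an HMAC as the signature (the secret name and token format here are assumptions for illustration, not a specific library's scheme):

```js
const crypto = require('crypto');

const SECRET = 'server-side-secret'; // hypothetical key known only to the server

function createToken(uid) {
  const time = Date.now();                                 // timestamp
  const payload = `${uid}.${time}`;
  const sign = crypto                                      // fixed-length hex signature
    .createHmac('sha256', SECRET)
    .update(payload)
    .digest('hex');
  return `${payload}.${sign}`;
}

function verifyToken(token) {
  const [uid, time, sign] = token.split('.');
  const expected = crypto.createHmac('sha256', SECRET).update(`${uid}.${time}`).digest('hex');
  return sign === expected; // stateless: nothing needs to be stored on the server
}
```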

Features: stateless server, good scalability, support for mobile devices, security, support for cross-program calls

Issues to consider when using tokens

  • If querying tokens from a database would be too slow, you can store them in memory instead; for example, Redis is well suited to token lookups.
  • The token is completely managed by the application, so it side-steps the same-origin policy (it is not attached automatically by the browser the way a cookie is)
  • Tokens can help prevent CSRF attacks (because they do not rely on cookies)
  • Mobile clients do not support cookies well, and sessions depend on cookies, so tokens are commonly used on mobile

**Where is the token generally stored?** (an informal answer):

After the user logs in successfully, the token is returned to the client. The client mainly has the following storage options:

  1. Store it in localStorage and pass it to the backend as a field every time an interface is called
  2. Store it in a cookie so that it is sent automatically; the drawback is that this cannot cross domains
  3. Store it in localStorage and put it in the Authorization field of the HTTP request header every time an interface is called

The authentication process of token:

  1. The client uses the user name and password to request login
  2. The server receives the request to verify the user name and password
  3. After the verification is successful, the server will issue a token and send the token to the client
  4. After the client receives the token, it will be stored, such as in a cookie or localStorage
  5. The client needs to bring the token issued by the server every time it requests resources from the server
  6. The server receives the request, and then verifies the token contained in the client request. If the verification is successful, it returns the requested data to the client
    • Every request must carry the token, and the token should be placed in the HTTP header (see the sketch after this list)
    • Token-based user authentication is stateless on the server side: the server does not need to store token data, trading the computation needed to parse and verify the token for session storage space, which reduces server pressure and frequent database queries
    • The token is completely managed by the application, so it side-steps the same-origin policy
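A minimal client-side sketch of carrying the token in the Authorization header (the endpoint is hypothetical):

```js
// token was saved to localStorage after a successful login
const token = localStorage.getItem('token');

fetch('/api/user/profile', {
  headers: { Authorization: `Bearer ${token}` } // sent with every request
})
  .then((res) => res.json())
  .then((data) => console.log(data));
```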

summary:

Cookie: originally meant for communication between the browser and the server, but 'borrowed' as a form of local storage; it can be modified with document.cookie = '...'

Disadvantages: the maximum size is 4 KB, it is sent to the server with every http request (increasing the request payload), and it can only be modified through document.cookie = '...'

h5 storage:

localStorage and sessionStorage: maximum storage around 5 MB; the API is simple and easy to use (setItem, getItem); the data is not sent with http requests

localStorage data is stored permanently unless it is deleted by code or manually

sessionStorage data only exists for the current session: it vanishes as soon as the browser window is closed. In practice localStorage is used more often

The difference between the above three: storage size, ease of use of API, whether to send out with http request

https

concept

The plain HTTP communication interface is replaced by the SSL/TLS protocol (an intermediate layer inserted between HTTP and TCP). In other words, HTTPS is HTTP wrapped in a shell of SSL/TLS: HTTPS = HTTP + SSL/TLS

Difference from HTTP

  • HTTPS standard port 443, HTTP is 80
  • HTTPS inserts an SSL/TLS layer between the application layer (HTTP) and the transport layer (TCP), whereas HTTP runs directly on top of TCP
  • Data privacy, the content is symmetrically encrypted;
  • Data integrity, the content has been checked for integrity;
  • Identity authentication, the third party cannot disguise the identity of the client/server

Solving the shortcomings of HTTP

Solving content eavesdropping (encryption and decryption)

Symmetric key encryption (shared key encryption): the same key is used for encryption and decryption

  • Process: 1. the sender encrypts the content with the key and sends the ciphertext to the recipient together with the key; 2. after receiving the ciphertext, the recipient uses the key to decrypt it and obtain the content.
  • Advantages: fast encryption and decryption efficiency
  • Disadvantages: insecure, anyone can decrypt as long as they get the key

Asymmetric key encryption (public key encryption): there will be two keys

  • Concept: a pair of asymmetric keys is used, a private key (held only by its owner) and a public key (which can be given to anyone); data encrypted with the private key can only be decrypted with the public key, and data encrypted with the public key can only be decrypted with the private key
  • Process: the party sending the ciphertext encrypts the information with the other party's public key, and the other party decrypts the received ciphertext with its own private key
  • Advantages: the transmitted content cannot be cracked; even if a third party intercepts data encrypted with the public key, it cannot decrypt it without the corresponding private key
  • Disadvantages:
    • The public key is public, anyone can get it
    • The public key does not contain the server's information, and the use of an asymmetric encryption algorithm cannot ensure the legitimacy of the server's identity. There may be a man-in-the-middle attack, that is, the public key sent by the server to the client may be intercepted and tampered with on the way.
    • It takes a certain amount of time to encrypt and decrypt data
    • Reduce the efficiency of data transmission

Hybrid encryption mechanism (the approach adopted by HTTPS)

concept:
Combining the advantages of the two encryption methods, the asymmetric encryption method is used in the key exchange link, and the symmetric encryption method is used in the subsequent communication and exchange message stage.

  • Process:
    The party sending the ciphertext uses the other party's public key to encrypt a symmetric key; after receiving it, the other party decrypts it with its own private key to obtain the symmetric key. This guarantees that the symmetric key is exchanged securely, and the two parties then communicate using symmetric encryption.
  • Advantages:
    The symmetric key can be transmitted safely between the two parties, and the subsequent communication uses symmetric encryption, which is much faster than using asymmetric encryption for everything. This solves the problem that HTTP content may be eavesdropped.
  • Disadvantages:
    Hybrid encryption only addresses eavesdropping. It cannot guarantee data integrity: the data may still be tampered with by a third party in transit (for example, completely replaced), and such tampering cannot be detected. For that you need a digital signature (a minimal sketch of the hybrid scheme follows).
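A minimal Node.js sketch of the hybrid idea (RSA to wrap a symmetric key, then AES for the actual messages); a real TLS handshake is considerably more involved:

```js
const crypto = require('crypto');

// this key pair stands in for the server certificate's key pair
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

// 1. the client generates a symmetric key and encrypts it with the server's public key
const symmetricKey = crypto.randomBytes(32);
const wrappedKey = crypto.publicEncrypt(publicKey, symmetricKey);

// 2. the server unwraps the symmetric key with its private key
const unwrapped = crypto.privateDecrypt(privateKey, wrappedKey);

// 3. both sides now use fast symmetric encryption (AES-256-GCM here) for the messages
const iv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv('aes-256-gcm', unwrapped, iv);
const ciphertext = Buffer.concat([cipher.update('hello over https', 'utf8'), cipher.final()]);
console.log(ciphertext.toString('hex'));
```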

https workflow

HTTPS transmission mainly uses a hybrid encryption mechanism combining symmetric and asymmetric encryption: the party sending the ciphertext uses the other party's public key to encrypt the symmetric key, and after receiving it the other party decrypts it with its own private key to obtain the symmetric key. This ensures that the two parties exchange the symmetric key securely and then communicate with it. The process is as follows:

  1. The client first sends an HTTPS request to the server
  2. The server will return the pre-configured public key certificate along with other information to the client
  3. The client verifies after receiving the certificate sent by the server. The verification process refers to the verification of the digital certificate, and it will get the information of the server and its public key.
  4. After the verification is successful, a parameter called client_params is generated and sent to the server; at the same time, it generates a secret with a pseudo-random function, and this secret is the symmetric key for their subsequent communication.
  5. After the server receives the client_params just now, it will also generate a secret according to the pseudo-random function. At this time, both parties have the same symmetric key.
  6. Subsequent transmissions will use this secret for symmetric key encryption and decryption transmission

Solving content tampering (digital signatures)

  • Motivation:
    • To be able to verify data integrity: although the hybrid encryption mechanism keeps the content from being eavesdropped, the transmitted data may still be tampered with (for example, completely replaced), and encryption alone cannot verify data integrity
  • Digital signature process:
    • Use a hash function to generate a message digest from the original text, then encrypt the digest with the sender's private key. The result is called a digital signature, and it is usually sent to the recipient together with the original text.

Hash function: a function that compresses messages of any length to a fixed-length message digest

  • Verification process
  1. The sender sends the original text together with the digital signature (the encrypted digest) to the receiver
  2. The receiver receives both items: the original text and the digital signature
  3. The receiver runs the original text through the same hash function and obtains a message digest
  4. The receiver also decrypts the digital signature with the sender's public key, which yields another message digest
  5. If the two digests are equal, the data has not been tampered with; otherwise it has (see the sketch below)
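A minimal Node.js sketch of the sign/verify steps described above (createSign/createVerify hash and sign the digest in one call):

```js
const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });
const original = 'the plaintext being sent';

// sender: hash the plaintext and encrypt the digest with the private key -> digital signature
const signature = crypto.createSign('SHA256').update(original).sign(privateKey);

// receiver: recompute the digest and check it against the signature using the sender's public key
const untampered = crypto.createVerify('SHA256').update(original).verify(publicKey, signature);
console.log(untampered); // true unless the plaintext or the signature was altered in transit
```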

To ensure that the public key passed by the sender is trustworthy, a digital certificate must be used:

Solving identity impersonation of the communicating party (digital certificates)

To solve the problem of a communicating party being impersonated, the identity of the communicating party must be verified

In HTTPS, although the hybrid encryption mechanism keeps the data from being eavesdropped and the digital signature allows data integrity to be verified, signature verification only works if the receiver has obtained the sender's public key and can be sure that public key is trustworthy. That is why a digital certificate is needed.

A digital certificate is a file issued to the server by an authoritative digital certification authority (abbreviated CA), a third-party organization trusted by both the client and the server

  • Digital certificate issuance process:
    • The operator of the server will submit his public key, organization information, personal information, etc. to the certification body and apply for certification
    • After obtaining this information, the certification body will verify the authenticity of the information submitted by the applicant through various online and offline channels.
    • After confirming its authenticity, the certification authority takes the information (the applicant's public key, organization information, personal information, the certification authority's own information, etc.), referred to below as the plaintext information, and produces a digital signature over it. The process is the same as the digital signature steps described earlier:
      • 1. Process the plaintext information through the Hash function to generate an information summary;
      • 2. Use the certification body's own private key to encrypt the message digest.
      • The file generated through these two steps is called a digital signature.
    • After that, a certificate composed of plain text information and digital signature will be issued to the applicant (server)

The composition of the certificate

  • Plaintext information
    • Applicant's public key
    • Applicant's organization information and personal information
    • Information of the issuing agency CA
    • Other plaintext information such as the validity period and the certificate serial number
  • Signature
    • Its generation process is actually the generation of the digital signature described above
    • Generation process: CA first processes the public plaintext information through the Hash function to generate an information digest, and then uses its own private key to encrypt the information digest to generate a signature

The combination of this plaintext information and the signature is called a certificate, and the certification authority issues the certificate to the applicant (the server)

SSL/TLS

  • SSL: Secure Sockets Layer
  • TLS: Transport Layer Security (the successor to SSL)

Why not all websites use HTTPS

Deploying HTTPS has a cost threshold: selecting, purchasing and deploying certificates takes time and effort in the traditional model, and certificates cost money. HTTPS is also slower than HTTP's plaintext transmission because of the encrypted communication, and it consumes more CPU and memory resources (although this can be mitigated through performance optimization, e.g. terminating certificates at an SLB (load balancer) or CDN).

eventLoop

Browser environment

zhuanlan.zhihu.com/p/72507900 segmentfault.com/a/119000001...

  • Function call stack: When the engine encounters JS code for the first time, it will generate a global execution context and push it onto the call stack. Every time a function call is encountered later, a new function context is pushed onto the stack. The JS engine will execute the function at the top of the stack, and after the execution is complete, the corresponding context will pop up
  • JS is single-threaded + asynchronous: some asynchronous tasks do not need to be executed immediately, so when they are dispatched they are not qualified to enter the call stack right away; instead, these pending tasks queue up according to certain rules, waiting for their turn to be pushed onto the call stack. This queue is called the "task queue", and it is further divided into macro tasks and micro tasks
    • macro: setTimeout, setInterval, setImmediate, script (overall code), I/O operations, etc.
    • micro: process.nextTick, Promise, MutationObserver, async/await, etc.

promise,setTimeout(),promise.then(),async,await: www.sohu.com/a/285466361...

  • The code of the function body in the Promise constructor is executed immediately
  • Under the hood, async/await is converted into promises and then-callbacks. At each await, the interpreter creates a promise object and puts the remaining operations of the async function into a then callback. async is shorthand for "asynchronous", and await can be read as "wait for the asynchronous operation to complete". A simplified way to understand it:

```js
async function f() { await p; console.log('ok') }
// can be understood roughly as:
function f() { return Promise.resolve(p).then(() => console.log('ok')) }
```
  • await generates a micro task (Promise.then is a micro task), but note the timing: when an await is reached, the async function is suspended and control jumps out of it so that the other code can run (this is essentially a coroutine hand-off: A suspends and gives control to B); the code after the await is registered in the micro task queue and runs after the current synchronous code has finished
  • A function marked with the async keyword always returns a promise: if the function returns something that is not a promise, the return value is automatically wrapped with Promise.resolve()

eventloop process:

  1. The entire script is executed as the first macro task
  2. During execution, synchronous code runs directly, macro tasks are pushed into the macro task queue and micro tasks into the micro task queue; when the current macro task finishes and leaves the queue, the call stack is emptied
  3. Check the micro task queue: if it is not empty, take the task at the head of the queue and put it on the call stack, repeating until the micro task queue is empty and the call stack is empty as well

If a micro task generates another micro task while running, the new one is appended to the end of the queue and is also executed in this round

  4. Then check the macro task queue, take the task at the head of the queue and put it on the call stack; when it finishes, the call stack is empty again

Tips: only one task is taken from the macro task queue per round, and after it runs, the tasks in the micro task queue are executed; micro tasks are drained as a whole batch

  5. Perform the rendering of the browser UI thread
  6. Check whether there is a Web Worker task and execute it if so
  7. Execute the next macro task at the head of the queue, go back to step 2, and loop until both the macro task and micro task queues are empty

Summary: the browser can be understood as having one macro task queue and one micro task queue. The global script code executes first; after the synchronous code finishes and the call stack is cleared, all tasks are taken from the micro task queue in turn and executed. Once the micro task queue is empty, only the task at the head of the macro task queue is taken and executed (only one), then all micro tasks are drained again, then the next macro task is taken, and so on, forming the event loop (see the sketch below)
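A classic ordering sketch that matches the loop described above:

```js
console.log('script start');                 // synchronous, part of the first macro task

setTimeout(() => console.log('timeout'), 0); // macro task

Promise.resolve()
  .then(() => console.log('promise 1'))      // micro task
  .then(() => console.log('promise 2'));     // queued by the first .then

console.log('script end');                   // synchronous

// Output: script start, script end, promise 1, promise 2, timeout
```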

Why introduce microtasks?

The original intention of introducing microtasks is to solve the problem of asynchronous callbacks. There are two ways to handle asynchronous callbacks.

  1. Enqueue the asynchronous callback in the macro task queue
  2. Put the asynchronous callback at the end of the current macro task

With the first approach, the callback only runs after all the macro tasks already ahead of it have completed; if the task queue is long, the callback runs very late and the application feels unresponsive. To avoid this, V8 introduced the second approach, micro tasks: each macro task has its own micro task queue. When the macro task finishes, the micro task queue is checked; if it is empty, the next macro task runs directly, and if not, the micro tasks are executed in order before moving on to the next macro task

Note: return Promise.resolve(4); is equivalent to: return new Promise((resolve) => { resolve(4) })

Promise: characteristics and principles. Points to cover when answering: it is a proxy object for a future value, its three states and the one-way state transition mechanism, Promise.all(), and how to make three promises where one throws an error while the other two continue to execute (see the sketch below)
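A minimal sketch of the last point: wrap each promise so that one rejection does not short-circuit Promise.all (Promise.allSettled gives the same behaviour in modern environments):

```js
const p1 = Promise.resolve(1);
const p2 = Promise.reject(new Error('boom'));
const p3 = Promise.resolve(3);

// catch each promise individually so Promise.all always fulfils
const settle = (p) =>
  p.then(
    (value) => ({ status: 'fulfilled', value }),
    (reason) => ({ status: 'rejected', reason })
  );

Promise.all([p1, p2, p3].map(settle)).then((results) => console.log(results));
```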

Promise solves the problem of nested callbacks

Asynchronous and single threaded:

Because JS is single-threaded: synchronous code blocks further execution, while asynchronous code does not.

Typical asynchronous scenarios on the front end: ajax requests, image loading, timers

node environment

juejin.cn/post/684490...

Front-end security issues

juejin.cn/post/684490...

How does the packet capture tool capture packets?

First, understand the "proxy": a proxy is an application with forwarding capability that acts as a middleman between the server and the client. It receives requests sent by the client and forwards them to the server, and it also receives the server's responses and forwards them back to the client.

The first configuration step before capturing packets is to enable a proxy on the phone, with the proxy address set to the computer's IP, so that the packet capture tool can act as the proxy.

http mode

HTTP traffic is transmitted without encryption, so when the packet capture tool acts as the proxy, all request and response data pass through it and can be read directly

https mode

You have to download the tool's certificate and trust it on the phone. To see the original data, the tool must obtain the symmetric encryption key, because the actual data transfer uses symmetric encryption; but that key is exchanged encrypted with the server's public key, and the packet capture tool does not have the server's private key, so intercepting the exchange is not enough to decrypt it. So what does the tool do?

When the server sends the client its "digital signature + public key", the packet capture tool intercepts the message and hands the client its own public key and its own digital certificate instead.

The client encrypts the symmetric key with the public key it received (the client believes it is the server's public key, but it has been swapped for the packet capture tool's). On the way to the server this message is intercepted again by the tool; since it was encrypted with the tool's own public key and the tool holds the matching private key, the tool can decrypt it and obtain the symmetric key.

The tool cannot forward this encrypted key to the server as-is, because the server could not decrypt it (it was not encrypted with the server's public key) and would terminate the request. So the tool re-encrypts the symmetric key it just recovered with the real server public key it intercepted earlier and forwards that to the server. The server decrypts it with its own private key, obtains the symmetric key, and data transmission begins, with the tool able to read everything.

Why should JS be single-threaded

If JS were multi-threaded: two threads could add and delete the same DOM node at the same time, and the browser would have to decide which thread's result takes effect. To avoid the much greater complexity that introducing locks would bring, JS was designed to execute single-threaded from the start.

Why JS blocks page loading

Since JS can manipulate the DOM, if element properties were modified while the interface is being rendered (i.e. the JS thread and the UI thread running at the same time), the element data seen by the rendering thread before and after could be inconsistent. To prevent unpredictable rendering results, the browser makes the GUI rendering thread and the JS engine mutually exclusive.

Therefore, if JS takes too long to execute, page rendering is held up, which feels like JS is blocking the page's rendering and loading

Will css loading cause blocking

  • CSS loading will not block the parsing of the DOM tree
  • CSS loading will block the rendering of the DOM tree
  • CSS loading will block the execution of subsequent js statements

Although the DOM and the CSSOM are built in parallel, the Render Tree depends on both the DOM Tree and the CSSOM Tree, so rendering must wait until the CSSOM Tree is built, i.e. until the CSS resources have finished loading (or failed). Therefore CSS loading blocks DOM rendering. And because JS can manipulate both the DOM and CSS, modifying element properties while the interface is being rendered could leave the rendering thread with inconsistent data before and after; therefore the style sheet is loaded and parsed before any subsequent JS executes, so CSS loading also blocks the execution of subsequent JS

If the CSS file download is blocked, will it block the construction of the DOM tree? Will it block the display of the page? (From the points above: it does not block DOM parsing, but it does block rendering, and therefore the display of the page.)

Change blocking mode: defer and async

```html
<!-- Output 1 2 3 in order from top to bottom -->
<script async> console.log("1"); </script>
<script defer> console.log("2"); </script>
<script> console.log("3"); </script>
```

defer

defer means the imported JS is executed with a delay: HTML parsing does not stop while the JS is being downloaded, and the two proceed in parallel. After the entire document has been parsed and the defer script has finished downloading, all JS code loaded with defer is executed, and then the DOMContentLoaded event fires

defer does not change the relative execution order of the scripts themselves. Compared with an ordinary script, there are two differences:

  • Do not block HTML parsing when loading JS files
  • The execution phase is placed after the HTML tag parsing is completed

async

async means the imported JS is executed asynchronously. The difference from defer is that an async script starts executing as soon as it has finished loading, whether that happens during HTML parsing or after DOMContentLoaded has fired.

tips: JS loaded this way still blocks the load event; that is, an async script may execute before or after DOMContentLoaded fires, but it will definitely execute before load fires

The order of execution of multiple async-scripts is uncertain

Learn from reading

juejin.im/post/5e2fb3...

juejin.im/post/5e51fe...

juejin.cn/post/684490...