2021 interviews: a compilation of common Java interview questions with answers, from beginner to intermediate to advanced level

This is only part of the interview questions. For more, see the WeChat applet "Java Selected Interview Questions", with 3000+ questions. The content is continuously updated and covers basics, collections, concurrency, the JVM, Spring, Spring MVC, Spring Boot, Spring Cloud, Dubbo, MySQL, Redis, MyBatis, Zookeeper, Linux, data structures and algorithms, project management tools, message queues, design patterns, Nginx, common bugs, network programming, and more.

What are the characteristics of object-oriented programming?

1. abstraction and encapsulation

Classes and objects embody abstraction and encapsulation

Abstraction describes the relationship between a class and an object: a class is an abstraction of its objects, and an object is an instance of its class, that is, a concrete manifestation of the class.

Encapsulation has two meanings. First, related data and the code that operates on it are packaged together inside an object to form a basic unit, and each object is relatively independent and does not interfere with the others. Second, certain attributes and operations of the object are made private, hiding data and implementation details; this protects the data and prevents unrelated code from modifying it. Shielding some or all attributes and functions from the outside world, so that they cannot be seen or known outside the class, is the meaning of encapsulation.

2. inheritance

Object-oriented inheritance exists for software reuse; simply put, it is a means of reusing and streamlining code. When one class already has the required attributes and operations and another class would otherwise have to repeat that code, inheritance is used: the former class becomes the parent class, the latter becomes the subclass, and the subclass inherits from the parent. A single keyword, extends, completes the code reuse.

3. polymorphism

Without inheritance there is no polymorphism; inheritance is the prerequisite for polymorphism. Different classes derived from the same parent class respond differently to the same message: although they inherit the same operation from the same parent, the corresponding behavior differs. That is polymorphism.
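A minimal Java sketch of the three characteristics (the Animal, Dog, and Cat names are illustrative, not from any particular codebase):

```java
abstract class Animal {                 // abstraction: the class abstracts common behavior
    private String name;                // encapsulation: state is hidden behind the class boundary

    Animal(String name) { this.name = name; }

    String getName() { return name; }   // controlled access instead of exposing the field

    abstract String speak();            // every concrete animal must provide its own sound
}

class Dog extends Animal {              // inheritance: Dog reuses Animal's code via extends
    Dog(String name) { super(name); }

    @Override
    String speak() { return "woof"; }   // polymorphism: same message, Dog-specific response
}

class Cat extends Animal {
    Cat(String name) { super(name); }

    @Override
    String speak() { return "meow"; }
}

public class OopDemo {
    public static void main(String[] args) {
        Animal[] animals = { new Dog("Rex"), new Cat("Tom") };
        for (Animal a : animals) {
            // the same call behaves differently depending on the runtime type
            System.out.println(a.getName() + " says " + a.speak());
        }
    }
}
```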

What is the relationship between JDK, JRE, and JVM?

1. JDK

JDK (Java Development Kit) is the core of the entire Java platform. It includes the Java Runtime Environment (JRE), a set of Java tools (javac, java, jdb, etc.), and the Java base class libraries (the Java API, including rt.jar).

Java API is the interface of Java applications. There are many written Java classes, including some important structural languages and basic graphics, network and file I/O, and so on.

2. JRE

JRE (Java Runtime Environment) is the Java runtime environment. On the Java platform, all Java programs run on top of the JRE. The JVM alone cannot execute classes, because while interpreting a class it needs to call the class libraries (lib). The JRE contains two folders, bin and lib: you can think of bin as the JVM and lib as the class libraries the JVM needs; together they make up the JRE.

JRE includes JVM and JAVA core class libraries and supporting files. Unlike JDK, it does not contain development tools-compilers, debuggers, and other tools.

3. JVM

JVM (Java Virtual Machine) is part of the JRE. It is a virtual computer, realized by simulating various computer functions on a real machine. The JVM has its own complete hardware architecture, including a processor, stack, and registers, as well as a corresponding instruction set.

The JVM is the core of Java's cross-platform implementation. All Java programs are first compiled into class files. The JVM's main job is to interpret its own instruction set (the bytecode) and map it onto the local CPU's instruction set or OS system calls. Java ships different virtual machines for different operating systems, which is how "compile once, run anywhere" is achieved. The JVM does not care about the Java source files above it; it only cares about the class files generated from those sources.

How do you compile and run a Java file from the command line?

To compile and run Java files, you need to understand two commands:

1) javac command: compiles a Java source file. Usage: javac Hello.java. If there are no errors, a Hello.class file is generated in the same directory as Hello.java. This class file contains bytecode that the JVM can load and run.

2) java command: runs the compiled .class file. Usage: java Hello. If there are no errors, the Hello.class file is executed. Note: no extension is added after Hello here.

Create a new file and write the code as follows:

public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello world, welcome to follow the WeChat public account \"Java Featured\"!");
    }
}

The file is named Hello.java, and the suffix is "java".

Open cmd, switch to the directory containing the file, and execute javac Hello.java; a Hello.class file is generated in that folder.

Enter the java Hello command, and the cmd console prints the program's output: Hello world, welcome to follow the WeChat public account "Java Featured"!

What are the commonly used collections?

Map interface and Collection interface are the parent interfaces of all collection frameworks

The sub-interfaces of the Collection interface include: Set interface and List interface.

The set cannot contain duplicate elements. List is an ordered collection that can contain repeated elements and provides a way to access by index.

The main implementation classes of the Map interface include HashMap, Hashtable, ConcurrentHashMap, TreeMap, etc. A Map cannot contain duplicate keys, but different keys may map to the same value. Values are retrieved by key; to traverse a Map, first obtain the Set of keys, then iterate over that Set to get the corresponding values.

The implementation classes of the Set interface mainly include: HashSet, TreeSet, LinkedHashSet, etc.

The implementation classes of the List interface mainly include: ArrayList, LinkedList, Stack, Vector, etc.

Iterator: all collection classes implement the Iterable interface, which supplies an Iterator for traversing the elements of the collection. Iterator mainly contains the following three methods:

hasNext(): whether there is a next element

next(): returns the next element

remove(): deletes the current element
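A small sketch of traversing and safely removing elements with an Iterator (the collection contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IteratorDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("Spring");
        names.add("Redis");
        names.add("MySQL");

        Iterator<String> it = names.iterator();   // obtained from the Iterable collection
        while (it.hasNext()) {                     // hasNext(): is there a next element?
            String name = it.next();               // next(): return the next element
            if ("Redis".equals(name)) {
                it.remove();                       // remove(): delete the current element safely
            }
        }
        System.out.println(names);                 // [Spring, MySQL]
    }
}
```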

What is the difference between a process and a thread?

A process is a program running in the system; once a program runs, it is a process.

The process can be seen as an instance of program execution. A process is an independent entity that allocates system resources, and each process has an independent address space. One process cannot access the variables and data structures of another process. If you want one process to access the resources of another process, you need to use inter-process communication, such as pipes, files, sockets, etc.

A process can have multiple threads, and each thread runs within the address space of the process it belongs to. One of the main differences between threads and processes is that multiple threads within the same process share part of their state: they can read and write the same block of memory, whereas one process cannot directly access the memory of another process. At the same time, each thread has its own registers and its own stack, although these still live in the process's shared address space.

A thread is an entity of a process and an execution path of the process.

A thread is a specific execution path within a process. When a thread modifies a shared resource of the process, its sibling threads can see the change immediately.
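A small sketch of the point above: the two threads below belong to one process (the JVM), share a field on the heap, and each keeps its own local variable on its own stack. The class and variable names are illustrative:

```java
public class SharedHeapDemo {
    // lives on the heap, visible to every thread of this process
    private static int sharedCounter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            int localCopy = 0;               // local variable: lives on each thread's own stack
            for (int i = 0; i < 1000; i++) {
                localCopy++;
                synchronized (SharedHeapDemo.class) {
                    sharedCounter++;         // shared memory must be synchronized
                }
            }
            System.out.println(Thread.currentThread().getName() + " localCopy=" + localCopy);
        };

        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("sharedCounter=" + sharedCounter);   // 2000
    }
}
```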

What is JVM?

The cross-platform nature of Java means that bytecode files can run on any computer or electronic device that has a Java virtual machine. The Java interpreter inside the JVM is responsible for translating the bytecode into concrete machine code for execution.

Therefore, at runtime, the Java source program needs to be compiled into a .class file by a compiler.

As is well known, java.exe is the launcher for Java class files, but in fact java.exe is just an execution shell: it loads jvm.dll (on Windows; Linux and Solaris are similar, e.g. libjvm.so), and this dynamic library is what actually does the work of the Java virtual machine.

JVM is part of JRE. It is a fictitious computer, realized by simulating various computer functions on an actual computer.

JVM has its own complete hardware architecture, such as processors, stacks, registers, etc., as well as corresponding instruction systems. The most important feature of the Java language is cross-platform operation.

The purpose of the JVM is to support cross-platform execution independent of the operating system. The JVM therefore belongs to the JRE, and nowadays installing the JDK also installs the JRE (of course, the JRE can also be installed separately).

What is a transaction?

A transaction is the smallest unit of work for database operations: a series of operations performed as a single logical unit of work. These operations are submitted to the system as a whole and are either all executed or none executed; a transaction is an indivisible set of operations (a logical unit of work).

In layman's terms, a transaction treats an ordered set of database operations as one unit. If every operation in the set succeeds, the transaction succeeds; if even one operation fails, the transaction fails. If all operations complete, the transaction is committed and its modifications become visible to other database sessions. If any operation fails, the transaction is rolled back and the effects of all of its operations are cancelled.
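A minimal JDBC sketch of commit and rollback, assuming a MySQL driver on the classpath; the URL, credentials, and account table are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransactionDemo {
    public static void main(String[] args) {
        String url = "jdbc:mysql://localhost:3306/test";   // placeholder database
        try (Connection conn = DriverManager.getConnection(url, "root", "password")) {
            conn.setAutoCommit(false);   // treat the following statements as one unit of work
            try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE account SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                         "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                debit.setBigDecimal(1, new java.math.BigDecimal("100"));
                debit.setLong(2, 1L);
                debit.executeUpdate();

                credit.setBigDecimal(1, new java.math.BigDecimal("100"));
                credit.setLong(2, 2L);
                credit.executeUpdate();

                conn.commit();           // both updates succeed -> make them permanent
            } catch (SQLException e) {
                conn.rollback();         // any failure -> undo everything done in the unit
                throw e;
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
```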

What are the characteristics of MySQL transactions?

The four characteristics of a transaction (ACID):

1. Atomicity

The transaction is the logical work unit of the database, and all operations contained in the transaction are either done or not done

2. Consistency

The result of executing a transaction must take the database from one consistent state to another consistent state. The database is in a consistent state when it contains only the results of successfully committed transactions. If the database system fails and some transactions are interrupted before completing, and part of the modifications made by these unfinished transactions have already been written to the physical database, the database is then in an incorrect, or inconsistent, state.

3. Isolation

The execution of a transaction cannot interfere with other transactions. That is, the internal operations and data used by a transaction are isolated from other concurrent transactions, and each transaction executed concurrently cannot interfere with each other.

4. Durability

Also called permanence, it means that once a transaction is committed, its changes to the data in the database are permanent: subsequent operations or failures should not have any effect on the result of its execution.

What framework is MyBatis?

The MyBatis framework is an excellent data persistence layer framework, which establishes a mapping relationship between entity classes and SQL statements, and is a semi-automatic ORM implementation. Its encapsulation is lower than Hibernate, its performance is excellent, and it is compact.

ORM is object/relational data mapping, and can also be understood as a data persistence technology.

The basic elements of MyBatis include core objects, core configuration files, and SQL mapping files.

Data persistence is the general term for converting an in-memory data model into a storage model and converting a storage model back into an in-memory data model.
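A minimal sketch of the entity-to-SQL mapping idea using MyBatis annotations (XML mapper files are the other common option); the User entity, table, and columns are assumptions for illustration:

```java
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

// Entity class mapped to an assumed "user" table
class User {
    private Long id;
    private String name;
    private String email;
    // getters and setters omitted for brevity
}

// Mapper interface: MyBatis generates the implementation and binds the SQL to the method,
// mapping the result columns onto the User entity's properties
public interface UserMapper {
    @Select("SELECT id, name, email FROM user WHERE id = #{id}")
    User findById(@Param("id") long id);
}
```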

What is Redis?

Redis is a high-performance key-value database. It is completely open source and free. Redis is a NoSQL (non-relational) database, a solution for problems such as high concurrency, high scalability, and big-data storage. It cannot replace a relational database, however, and serves only as an extension in particular scenarios.

Redis is a data-structure server that stores key-value pairs. The data structure types it supports include strings, lists, hashes, sets, and sorted sets (zset). To guarantee read efficiency, Redis keeps all data objects in memory, and it can periodically write updated data to disk files. It also provides intersection and union operations as well as several different kinds of sorting.

What is the Spring framework?

Spring translates to spring in Chinese, and is called the spring of J2EE. It is an open source and lightweight Java development framework with two cores: Inversion of Control (IoC) and Aspect-Oriented (AOP). The Java Spring framework flexibly manages transactions through a declarative way to improve development efficiency and quality.

The Spring framework is not limited to server-side development. From the perspective of simplicity, testability, and loose coupling, any Java application can benefit from Spring. The Spring framework is also a super glue platform. In addition to providing its own functions, it also provides the ability to glue other technologies and frameworks.

1) IOC control inversion

IoC inverts the responsibility for creating objects. In Spring, BeanFactory is the core interface of the IoC container; it is responsible for instantiating, locating, and configuring the objects in the application and establishing the dependencies between them. XmlBeanFactory implements the BeanFactory interface and assembles application objects and their dependencies from XML configuration data.

There are three injection methods in Spring: setter injection, interface injection, and constructor injection.
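A minimal sketch of IoC with constructor injection using Java configuration (instead of the XML configuration described above), assuming spring-context is on the classpath; the OrderService and OrderRepository classes are illustrative:

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

class OrderRepository {
    String save(String order) { return "saved " + order; }
}

class OrderService {
    private final OrderRepository repository;

    // constructor injection: the container supplies the dependency
    OrderService(OrderRepository repository) { this.repository = repository; }

    String place(String order) { return repository.save(order); }
}

@Configuration
class AppConfig {
    @Bean
    OrderRepository orderRepository() { return new OrderRepository(); }

    @Bean
    OrderService orderService(OrderRepository orderRepository) {
        // the container, not the service, decides which dependency is wired in
        return new OrderService(orderRepository);
    }
}

public class IocDemo {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx =
                     new AnnotationConfigApplicationContext(AppConfig.class)) {
            OrderService service = ctx.getBean(OrderService.class);
            System.out.println(service.place("book-123"));
        }
    }
}
```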

2) AOP aspect-oriented programming

AOP addresses cross-cutting concerns. For example, if business 1 and business 2 both need a common operation, then instead of adding the same code to each business, the code is written once and both businesses use it together.

Spring implements aspect-oriented programming in two ways: JDK dynamic proxies and CGLIB. Dynamic proxies require that the target implement an interface, while CGLIB works by generating a subclass through inheritance.
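A minimal AOP sketch, assuming spring-context, spring-aop, and aspectjweaver are on the classpath; the PaymentService and pointcut are illustrative, and proxyTargetClass = true forces the CGLIB (subclassing) approach since PaymentService has no interface:

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

class PaymentService {
    public void pay(String orderId) {
        System.out.println("paying order " + orderId);
    }
}

@Aspect
class LoggingAspect {
    // the common operation is written once and applied to every matching business method
    @Before("execution(* PaymentService.*(..))")
    public void logCall(JoinPoint jp) {
        System.out.println("before " + jp.getSignature().getName());
    }
}

@Configuration
@EnableAspectJAutoProxy(proxyTargetClass = true)   // CGLIB proxy, no interface required
class AopConfig {
    @Bean PaymentService paymentService() { return new PaymentService(); }
    @Bean LoggingAspect loggingAspect() { return new LoggingAspect(); }
}

public class AopDemo {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx =
                     new AnnotationConfigApplicationContext(AopConfig.class)) {
            ctx.getBean(PaymentService.class).pay("A001");
        }
    }
}
```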

What is the Spring MVC framework?

Spring MVC is a follow-up product of the Spring Framework and has been integrated into Spring Web Flow.

The Spring framework provides a full-featured MVC module for building Web applications.

Spring's MVC architecture is pluggable, so when using Spring for web development you can choose Spring's own Spring MVC framework or integrate another MVC framework, such as Struts1 (essentially obsolete) or Struts2 (still found in old projects or ones being rebuilt).

Through the strategy interface, the Spring framework is highly configurable and contains a variety of view technologies, such as JavaServer Pages (JSP) technology, Velocity, Tiles, iText, and POI.

The Spring MVC framework does not know or restrict which view to use, so it will not force developers to use only JSP technology.

Spring MVC separates the roles of controller, model object, filter, and handler object, and this separation makes them easier to customize.
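A minimal handler sketch, assuming it runs inside a Spring MVC (or Spring Boot) web application; the URL and response are illustrative:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    // the DispatcherServlet routes GET /hello/{name} to this handler method
    @GetMapping("/hello/{name}")
    public String hello(@PathVariable String name) {
        return "Hello, " + name;
    }
}
```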

What is the Spring Boot framework?

Spring Boot is a brand-new framework from the Pivotal team, designed to simplify the initial setup and development of new Spring applications.

The Spring Boot framework uses a specific way to configure, so that developers no longer need to define boilerplate configurations. In this way, Spring Boot is committed to becoming a leader in the booming field of rapid application development.

The first version of the new open source Spring Boot lightweight framework was released in April 2014. It is designed based on Spring 4.0, which not only inherits the original excellent features of the Spring framework, but also further simplifies the entire construction and development process of Spring applications by simplifying the configuration.

In addition, Spring Boot integrates a large number of frameworks, so that version conflicts between dependencies and instability of references are well resolved.
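A minimal sketch of a Spring Boot entry point, assuming the Spring Boot starter dependencies are on the classpath:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication   // combines @Configuration, @EnableAutoConfiguration and @ComponentScan
public class DemoApplication {
    public static void main(String[] args) {
        // starts the application context (and, with the web starter, an embedded server)
        // without any XML or boilerplate configuration
        SpringApplication.run(DemoApplication.class, args);
    }
}
```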

What is the Spring Cloud framework?

Spring Cloud is an ordered collection of frameworks. It uses the development convenience of Spring Boot to cleverly simplify the development of distributed system infrastructure, such as service discovery and registration, configuration center, message bus, load balancing, circuit breakers, and data monitoring, all of which can be started and deployed in Spring Boot's development style.

Spring Cloud does not reinvent the wheel. It takes relatively mature, practical service frameworks developed by various companies and repackages them in the Spring Boot style, hiding the complex configuration and implementation principles and finally leaving developers with a set of distributed system development kits that are easy to understand, deploy, and maintain.

The sub-projects of Spring Cloud can be roughly divided into two categories:

The first category is the "Spring Boot style" encapsulation and abstraction of existing mature frameworks; these are also the most numerous projects;

The second category is implementations of parts of the distributed system infrastructure developed from scratch, such as Spring Cloud Stream, which plays a role similar to Kafka or ActiveMQ.

For developers who quickly practice microservices, the first category of sub-projects is basically sufficient, such as:

1) Spring Cloud Netflix is an encapsulation of a distributed service framework developed by Netflix, including service discovery and registration, load balancing, circuit breakers, REST clients, request routing, etc.;

2) Spring Cloud Config saves the configuration information centrally, and configures Spring Cloud Bus to dynamically modify the configuration file;

3) Spring Cloud Bus distributed message queue is an encapsulation of Kafka and MQ;

4) Spring Cloud Security encapsulates Spring Security and can be used with Netflix;

5) Spring Cloud Zookeeper encapsulates Zookeeper so that it can be used by other Spring Cloud sub-projects;

6) Spring Cloud Eureka is part of the Spring Cloud Netflix microservice suite. It is a secondary packaging of Netflix Eureka and is mainly responsible for service governance in a microservice architecture. Note that starting from 2.x the official project is no longer developed as open source, so using 2.x carries some risk. In practice the problem is small: Eureka's service registration/discovery features are already very stable and sufficient even without upgrades. Consul is a good alternative, and other replacement components exist as well.

What are the advantages and disadvantages of the Spring Cloud framework?

Advantages of Spring Cloud:

1) The granularity of service split is finer, which is conducive to resource reuse and improves development efficiency. Each module can be independently developed and deployed, and the code coupling is low;

2) It is possible to formulate optimized service plans more accurately and improve the maintainability of the system. Each service can be deployed separately. When upgrading a module, only the corresponding module service needs to be deployed separately, which is more efficient;

3) The microservice architecture adopts decentralized thinking. Services communicate through lightweight mechanisms such as REST, which is lighter than an ESB, and module cohesion improves: each module only needs to care about its own functions rather than the business of other modules, which makes functional modules easier to develop and extend.

4) The technical selection is no longer single. Since each module is developed and deployed separately, each module can have more technical selection schemes. For example, it is possible to select mysql for the database of module 1 and oracle for module 2;

5) It suits the Internet era, where product iteration cycles are shorter. System stability and performance improve: because a microservice system is composed of several services, the failure of one module's service does not bring down the entire system as it would in a traditional monolith, and the built-in fault tolerance and service degradation mechanisms further improve stability. In terms of performance, since each service is deployed separately, each module can have its own runtime environment, and when one service performs poorly it can be scaled or upgraded on its own to improve performance.

Disadvantages of Spring Cloud:

1) There are many microservices and governance costs are high, which does not help with maintaining the system. The cost of calls between services increases: in a monolith a method or interface can be invoked directly as a local call, but with microservices it can no longer be debugged that way. The mainstream techniques are HTTP API calls, RPC, WebService, and similar, and the calling cost is higher than in a single project;

2) The high cost of distributed system development (fault tolerance, distributed transactions, etc.) poses a big challenge to the team;

3) Independent databases per microservice create transaction consistency problems. Since modules may use different technologies and every service is called under high concurrency, distributed transaction consistency becomes an issue;

4) Distributed deployment increases operations cost. With a single application, operators only need to deploy and load-balance one project, but every microservice module needs the same treatment, which increases operations time and cost;

5) Since the entire system is composed of many modules, when one service changes, regression testing has to cover all the functions involved, not just the changed module, which increases the difficulty and cost of testing;

Generally speaking, the advantages outweigh the disadvantages. At present Spring Cloud looks like a very complete microservice framework, many companies have begun to use microservices, and the advantages of Spring Cloud are obvious.

What is a message queue?

The full name of MQ is Message Queue. Message Queue (MQ) is an application-to-application communication method.

Message queuing middleware is an important component in a distributed system. It mainly solves problems such as application coupling, asynchronous messaging, and traffic cutting. Achieve high performance, high availability, scalability and eventually consistent architecture. It is an indispensable middleware for large-scale distributed systems.

MQ is a typical representative of the consumer producer model. One end continuously writes messages to the message queue, while the other end can read the messages in the queue.

The message producer only needs to publish the message to MQ regardless of who gets it, and the message consumer only needs to get the message from MQ regardless of who published the message, so that both the producer and the consumer do not need to know the existence of each other. 
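A small in-process analogy of the producer-consumer model using a JDK BlockingQueue (not a replacement for a real message queue, just an illustration of the idea):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    public static void main(String[] args) {
        // the bounded queue decouples the producer from the consumer
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put("message-" + i);          // blocks if the queue is full
                    System.out.println("produced message-" + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    String msg = queue.take();          // blocks until a message is available
                    System.out.println("consumed " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```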

Currently, the message queues most commonly used in production include ActiveMQ, RabbitMQ, ZeroMQ, Kafka, MetaMQ, RocketMQ, etc.

What are the application scenarios of message queues?

Five typical message queue scenarios: asynchronous processing, application decoupling, traffic cutting, log processing, and message communication.

1. Asynchronous processing scenario

After registering, users need to send a registration email and registration SMS. There are two traditional methods

1) Serial mode: After the registration information is successfully written into the database, the registration email is sent, and then the registration SMS is sent. After the above three tasks are all completed, return to the client.

2) Parallel mode: after the registration information is successfully written to the database, the registration email and the registration SMS are sent at the same time. Once all three tasks are complete, a response is returned to the client. The difference from the serial mode is that the parallel mode reduces the processing time.

With a message queue, the user's response time is essentially just the time needed to write the registration information to the database, say 50 milliseconds. Sending the registration email and SMS is handed to the message queue and the request returns immediately; writing to the message queue is fast enough to be ignored. The user's response time is therefore about 50 milliseconds, and the throughput of the system rises to about 20 QPS, roughly 3 times the serial mode and twice the parallel mode.

2. Application decoupling scenarios

After the user places an order for an item, the order system needs to notify the inventory system. The traditional approach is that the order system calls the interface of the inventory system. Disadvantages of this model:

1) If the inventory system cannot be reached, the inventory reduction fails, which causes the order to fail;

2) The order system is coupled with the inventory system;

How to solve the above problems?

Order system: After the user places an order, the order system completes the persistence processing, writes the message to the message queue, and returns the user's order to be successfully placed.

Inventory system: subscribe to the order information, use pull/push method to obtain order information, and the inventory system performs inventory operations based on the order information.

Even if the inventory system is unavailable when the order is placed, normal order placement is not affected, because once the order is placed the order system writes to the message queue and no longer cares about subsequent operations; this decouples the order system from the inventory system.

3. Traffic cutting scene

Traffic cutting is also a common scenario in message queues, and is generally widely used in spike or group grab activities.

The spike activity usually causes a surge in traffic and application downtime due to excessive concurrency of traffic. In order to solve this problem, a message queue is generally added to the front end of the application.

This controls the number of participants and smooths the burst of high traffic over a short period. When the server receives a user request, it first writes it to the message queue; if the queue length exceeds the maximum allowed, the request is discarded or the user is redirected to an error page, and the flash-sale service performs subsequent processing based on the requests in the message queue.

4. Log processing scenarios

Log processing refers to using a message queue, such as Kafka, in the log pipeline to solve the problem of transferring large volumes of logs. The log collection client gathers log data and writes it to the Kafka queue in batches; the Kafka message queue receives, stores, and forwards the log data; and the log processing application subscribes to and consumes the log data in the queue.

5. Message communication

Message communication means that message queues usually have efficient communication mechanisms built in, so they can also be used for pure messaging, for example to implement point-to-point message queues or chat rooms.

What is the Linux operating system?

The full name of Linux is GNU/Linux. It is a free-to-use and freely disseminated Unix-like operating system. It is a POSIX-based multi-user, multi-tasking, multi-threaded and multi-CPU operating system.

With the development of the Internet, Linux has gained support from software enthusiasts, organizations, and companies all over the world. In addition to maintaining a strong momentum of development in servers, it has made considerable progress in personal computers and embedded systems. Users can not only intuitively obtain the implementation mechanism of the operating system, but also modify and perfect Linux according to their own needs to maximize the adaptation to the needs of users. 

Linux is not only stable in system performance, but also open source software. Its core firewall components are highly efficient and simple to configure, ensuring the security of the system.

In many corporate networks, in pursuit of speed and security, Linux is not only used as a server by network operation and maintenance personnel, but even as a network firewall. This is a highlight of Linux.

Linux has the characteristics of open source code, no licensing fees, and a large community of technical users. Open source allows users to tailor it freely, so it is highly flexible, powerful, and low-cost. In particular, the embedded network protocol stack in the kernel can, with proper configuration, provide router functionality. These characteristics make Linux an ideal development platform for routing and switching equipment.

What is a data structure?

Data structure is the way a computer stores and organizes data. Data structure refers to a collection of data elements that have one or more specific relationships with each other. Under normal circumstances, a carefully selected data structure can bring higher operating or storage efficiency. Data structure is often related to efficient retrieval algorithms and indexing techniques.

Data structure (data structure) is a collection of data elements with structural characteristics. It studies the logical structure of data and the physical structure of data and the relationship between them, and defines suitable operations and designs for this structure. Formulate the corresponding algorithm, and ensure that the new structure obtained after these operations still maintains the original structure type.

In short, a data structure is a collection of data elements that have one or more specific relationships with each other, that is, a collection of data elements with a "structure". "Structure" refers to the relationship between data elements, which is divided into logical structure and storage structure.

The logical structure and physical structure of data are two closely related aspects of the data structure, and the same logical structure can correspond to different storage structures. The design of the algorithm depends on the logical structure of the data, and the realization of the algorithm depends on the specified storage structure.

The research content of data structure is the basis for constructing complex software systems, and its core technology is decomposition and abstraction.

The data can be divided into three levels through decomposition; then through abstraction, the specific content of the data elements is discarded, and the logical structure is obtained. Similarly, by dividing the processing requirements into various functions, and then by abstracting away the implementation details, the definition of the operation is obtained.

The combination of these two aspects can transform the problem into a data structure. This is a process from concrete (that is, concrete problems) to abstract (that is, data structures).

Then, by adding consideration of the implementation details, the storage structure and the implementing algorithms are obtained, completing the design task. This is a process from the abstract (the data structure) to the concrete (the concrete realization).

What is a design pattern?

A design pattern is a solution to certain recurring problems in software development; it can also be understood as a set of ideas for solving such problems. Design patterns help us improve the reusability, extensibility, maintainability, and flexibility of code. The ultimate goal of using design patterns is to achieve high cohesion and low coupling.

High cohesion and low coupling are concepts in software engineering and are the criteria for judging the quality of software design. It is mainly used for the object-oriented design of programs. It mainly depends on whether the cohesion of the class is high and the degree of coupling is low.

The purpose is to greatly enhance the reusability and portability of program modules.

Generally, the higher the degree of cohesion of the modules in the program structure, the lower the degree of coupling between the modules.

Cohesion measures the connections inside a module from a functional point of view; a module with good cohesion does exactly one thing, and cohesion describes the functional relatedness within the module. Coupling is a measure of the interconnection between modules in a software structure; its strength depends on the complexity of the interfaces between modules, the entry points or accesses into a module, and the data passed across the interfaces.

What is Zookeeper?

ZooKeeper was developed at Yahoo Research. It is a distributed, open-source coordination service for distributed applications, an open-source implementation of Google's Chubby that was later hosted at Apache, and an important component of Hadoop and HBase.

ZooKeeper is a classic distributed data consistency solution, dedicated to providing a distributed coordination service with high performance, high availability, and strict sequential access control capabilities for distributed applications.

The goal of ZooKeeper is to encapsulate key services that are complex and error-prone, and provide users with simple and easy-to-use interfaces and systems with efficient and stable functions.

ZooKeeper contains a simple set of primitives that provide interfaces between Java and C.

The ZooKeeper code version provides interfaces for distributed exclusive locks, elections, and queues. The code is in $zookeeper_home\src\recipes. Among them, the distributed lock and queue have two versions, Java and C, and the election only has the Java version.

It officially became the top project of Apache in November 2010. It is a software that provides consistent services for distributed applications. The functions provided include: configuration maintenance, domain name services, distributed synchronization, group services, etc.

Distributed applications can build data publish/subscribe, load balancing, naming services, distributed coordination and notification, cluster management, leader election, distributed locks, distributed queues, and other functions on top of ZooKeeper.

How do you solve the problem of application service port 8080 being accidentally occupied?

1) Press the WIN+R key on the keyboard, enter the "CMD" command in the run box after opening, and click OK.

2) In the CMD window, enter the "netstat -ano" command and press Enter to view all port occupancy.

3) Find the information similar to "0.0.0.0:8080" in the local address list, and view the program PID corresponding to port 8080 through this column.

4) Open Task Manager, find the application with that PID in the details view (if the PID column is not shown, it can be enabled in the settings), and right-click to end the task.

What is the Dubbo framework?

Dubbo (pronounced |ˈdʌbəʊ|) is a high-performance, excellent service framework open-sourced by Alibaba. It enables applications to expose and consume services through high-performance RPC and can be seamlessly integrated with the Spring framework.

Dubbo provides six core capabilities: high-performance RPC calls for interface agents, intelligent fault tolerance and load balancing, automatic service registration and discovery, high scalability, runtime traffic scheduling, and visual service management and operation and maintenance.

Dubbo is a high-performance and lightweight open source Java RPC framework, which provides three core capabilities: interface-oriented remote method invocation, intelligent fault tolerance and load balancing, and automatic service registration and discovery.

Core components

Remoting: Network communication framework, which implements sync-over-async and request-response message mechanisms;

RPC: An abstraction of remote procedure calls, supporting load balancing, disaster recovery and clustering functions;

Registry: the service directory framework, used for service registration and for publishing and subscribing to service events.

What is Maven?

Maven is a project management tool based on the Project Object Model (POM). It can manage a project's build, reporting, and documentation from a short piece of descriptive information.

In addition to its ability to build programs, Maven also provides advanced project management tools.

Because Maven's default build rules are highly reusable, simple projects can often be built with two or three lines of Maven build scripts.

Because of Maven's project-oriented approach, many Apache Jakarta projects use Maven for their releases, and the proportion of company projects adopting Maven continues to grow. A comparison with Gradle will be given in later pages.

The word Maven comes from Yiddish and means "accumulator of knowledge". It was originally used in the Jakarta Turbine project to simplify the build process.

At that time there were several projects, each with its own Ant build file differing only slightly from the others, and the JAR files were all maintained in CVS. A standardized way to build projects was wanted: a clear definition of what a project consists of, an easy way to publish project information, and a simple way to share JARs across multiple projects.

What are the common protocols in the application layer?

The application layer protocol defines how application processes running on different end systems transfer messages to each other.

Application layer protocol

1) DNS: An Internet service used to convert domain names into IP addresses. The Domain Name System DNS is a naming system used by the Internet to convert machine names that are easy for people to use into IP addresses.

TLDs are now divided into three categories: national top-level domains nTLDs; generic top-level domains gTLDs; infrastructure domains.

The domain name server is divided into four types: root domain name server; top domain name server; local domain name server; authority domain name server.

2) FTP: File Transfer Protocol FTP is the most widely used file transfer protocol on the Internet. FTP provides interactive access, allows customers to specify file types and formats, and allows files to have access rights.

The FTP protocol is based on the client-server model and includes two components: the FTP server and the FTP client. It provides interactive access and uses connection-oriented, reliable TCP/IP transport. Its main purpose is to reduce or eliminate the incompatibility of files handled under different operating systems.

3) Telnet remote terminal protocol: telnet is a simple remote terminal protocol, it is also the official standard of the Internet. Also known as terminal emulation protocol.

4) HTTP: Hypertext Transfer Protocol is a transaction-oriented application layer protocol. It is an important foundation for reliable file exchange on the World Wide Web. HTTP uses connection-oriented TCP as the transport layer protocol to ensure reliable data transmission.

5) E-mail protocol SMTP: the simple mail transfer protocol. SMTP specifies how information should be exchanged between two SMTP processes that communicate with each other. The three stages of SMTP communication: connection establishment, mail transmission, and connection release.

6) POP3: Mail reading protocol, POP3 (Post Office Protocol 3) protocol is usually used to receive e-mails.

7) Telnet protocol (Telnet): used to realize the remote login function.

8) SNMP: Simple Network Management Protocol. It consists of three parts: SNMP itself, the Structure of Management Information (SMI), and the Management Information Base (MIB). SNMP defines the packet format exchanged between the management station and the agent; SMI defines the general rules for naming object types and for encoding objects and object values; the MIB creates named objects in managed entities and specifies their types.

What are the keywords in Java?

1) 48 keywords: abstract, assert, boolean, break, byte, case, catch, char, class, continue, default, do, double, else, enum, extends, final, finally, float, for, if, implements, import, int, interface, instanceof, long, native, new, package, private, protected, public, return, short, static, strictfp, super, switch, synchronized, this, throw, throws, transient, try, void, volatile, while.

2) 2 reserved words (not used at present, may be used as keywords in the future): goto, const.

3) 3 special literals (a literal is a value written directly in the source code): true, false, and null.

What are the basic types in Java?

There are two types of Java, one is the basic type, and the other is the reference type. There are eight basic types of Java.

The basic types can be divided into three categories: character type char, Boolean type boolean, and numeric types byte, short, int, long, float, and double.

Numerical types can be divided into integer types byte, short, int, long and floating-point number types float and double.

There are no unsigned numeric types in Java, and the value ranges are fixed; they do not change with the hardware or the operating system. In fact, the author of Thinking in Java mentions that there is another basic type, void, with a corresponding wrapper class java.lang.Void; since Void cannot be instantiated with new, that is, it cannot allocate heap space to store a value, it is reasonable to regard void as a basic type as well.

The range of 8 basic types is as follows:

byte: 8 bits, can represent 256 different values; the data range is -128 to 127.

short: 16 bits, can represent 65536 different values; the data range is -32768 to 32767.

int: 32 bits, can represent 2^32 different values; the data range is -2^31 to 2^31 - 1.

long: 64 bits, can represent 2^64 different values; the data range is -2^63 to 2^63 - 1.

float: 32 bits, the data range is approximately 1.4e-45 to 3.4e38; an f or F suffix must be added when assigning a literal directly.

double: 64 bits, the data range is approximately 4.9e-324 to 1.8e308; a d or D suffix is optional when assigning.

boolean: There are only two values, true and false.

char: 16 bits, stores a Unicode code unit, assigned with single quotation marks.
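A small sketch that prints the actual ranges using the JDK wrapper class constants, an easy way to double-check the numbers above:

```java
public class PrimitiveRanges {
    public static void main(String[] args) {
        System.out.println("byte:  " + Byte.MIN_VALUE + " ~ " + Byte.MAX_VALUE);
        System.out.println("short: " + Short.MIN_VALUE + " ~ " + Short.MAX_VALUE);
        System.out.println("int:   " + Integer.MIN_VALUE + " ~ " + Integer.MAX_VALUE);
        System.out.println("long:  " + Long.MIN_VALUE + " ~ " + Long.MAX_VALUE);
        System.out.println("float:  " + Float.MIN_VALUE + " ~ " + Float.MAX_VALUE);
        System.out.println("double: " + Double.MIN_VALUE + " ~ " + Double.MAX_VALUE);

        float f = 3.14f;     // the f suffix is required for a float literal
        double d = 3.14;     // double needs no suffix
        char c = 'A';        // char stores a 16-bit Unicode code unit, assigned with single quotes
        boolean b = true;
        System.out.println(f + " " + d + " " + c + " " + b);
    }
}
```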

Why doesn't the Map interface inherit the Collection interface?

1) Map provides key-value mappings (each key maps to a value), whereas Collection provides a group of individual elements, not key-value mappings.

2) If Map inherited the Collection interface, would a class implementing Map store key-value mappings or a group of individual elements? Common implementations such as HashMap, Hashtable, and TreeMap are all key-value based, so inheriting Collection would be meaningless, and it would also violate the interface segregation principle of object-oriented design.

The principle of interface separation: The client should not rely on interfaces that it does not need.

Another definition is that the dependencies between classes should be established on the smallest interface.

The interface isolation principle splits very large and bloated interfaces into smaller and more specific interfaces so that customers will only need to know the methods they are interested in.

The purpose of the interface isolation principle is to uncouple the system, so that it is easy to refactor, change, and redeploy, so that the client's reliance on the interface is as small as possible.

3) Map differs from List and Set: Map stores key-value pairs, while List and Set store individual objects. Because the data structures differ, the operations differ, so the interfaces are kept separate; this again reflects the interface segregation principle.

What is the difference between Collection and Collections?

java.util.Collection is a collection interface. It provides general interface methods for basic operations on collection objects. The Collection interface has many concrete implementations in the Java class library.

The meaning of the Collection interface is to provide a maximum unified operation mode for various specific collections. Its direct inheritance interfaces are List and Set.

Collection

List: LinkedList, ArrayList, Vector, Stack

Set

java.util.Collections is a utility (wrapper) class. It contains various static methods for operating on collections. This class cannot be instantiated; it serves the Java collections framework as a tool class.
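A small sketch of the distinction: Collection/List is the container, Collections is the utility class of static methods (the numbers are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class CollectionsDemo {
    public static void main(String[] args) {
        // a Collection (List) instance holds the data
        List<Integer> numbers = new ArrayList<>(Arrays.asList(3, 1, 2));

        // Collections provides static helper methods that operate on such containers
        Collections.sort(numbers);                       // [1, 2, 3]
        Collections.reverse(numbers);                    // [3, 2, 1]
        System.out.println(Collections.max(numbers));    // 3

        Collection<Integer> readOnly = Collections.unmodifiableCollection(numbers);
        System.out.println(readOnly);
    }
}
```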

What are the concepts of heap and stack, and what are the differences and connections between them?

Before talking about the heap and the stack, let's look at how JVM (virtual machine) memory is divided:

A Java program must have memory space to run in; any software needs memory when it runs, and so does the Java virtual machine. When the JVM starts, it requests a region of memory and then subdivides that region further, because each area of the virtual machine's memory is handled differently and must be managed separately.

There are five divisions of JVM memory:

1) Register;

2) Local method area;

3) Method area;

4) Stack memory;

5) Heap memory.

Focus on the heap and stack:

Stack memory: the stack is a memory area that stores local variables. Everything defined inside a method is a local variable (variables defined outside methods are member variables), and variables defined inside a for loop are also local variables. The method is pushed onto the stack first, and only then are its local variables defined. Each variable has its own scope, and once it leaves that scope the variable is released. Stack memory is updated very quickly because local variables have very short lifetimes.

Heap memory: stores arrays and objects (an array is in fact an object). Everything created with new lives on the heap. The heap holds entities (objects), which encapsulate data, and an entity can encapsulate multiple values. If one piece of data disappears, the entity does not disappear and can still be used, so the heap is not freed at arbitrary times; the stack is different, since it stores single variables, and once a variable is released it is gone. Entities in the heap that are no longer used are not released immediately but treated as garbage, and Java's garbage collection mechanism collects them from time to time.

For example, how the statement int [] arr=new int [3]; in the main function is defined in memory:

The main function is pushed onto the stack and the variable arr is defined on the stack; arr is then assigned, but the right-hand side is not a concrete value, it is an entity. The entity is created in the heap: the new keyword first opens up a space in the heap, and since data stored in memory is located by address (a continuous binary value), the entity is given a memory address. Arrays have indexes, and after the array entity is created in heap memory, every slot is initialized to a default value (this is a characteristic of heap memory: uninitialized data cannot be used, but heap data can be used because it has been initialized, unlike data on the stack). Different types have different default values. So the variable is created on the stack and the entity in the heap:

So how are the heap and stack related?

As just described, the heap entity is given an address, that address is assigned to arr, and arr points to the array through the address. So when arr manipulates the array it does so via the address rather than holding the whole entity directly. We no longer call arr a primitive type but a reference type: arr references an entity in heap memory, which can be understood as something like a pointer in C or C++ (Java evolved from C++ and is very similar to it in this respect).

If we then write int[] arr = null;

arr no longer points to anything; assigning null removes the reference's pointing relationship.

When no reference points to an entity, the entity is not released from heap memory immediately; it is treated as garbage and collected automatically at some indeterminate time, because Java has an automatic garbage collection mechanism (C++ does not; programmers must free memory manually, and if they forget, memory keeps accumulating until it overflows, so Java manages memory better than C++ in this respect). The garbage collector automatically monitors the heap for garbage and collects it when it finds some, but exactly when it collects is not deterministic.

So the difference between heap and stack is obvious:

1) The stack memory stores local variables and the heap memory stores entities;

2) The update speed of stack memory is faster than heap memory, because the life cycle of local variables is very short;

3) Variables stored in stack memory are released as soon as their lifetime ends, while entities stored in heap memory are reclaimed from time to time by the garbage collection mechanism.

What is the difference between Class.forName and ClassLoader?

In Java a class can be loaded with either Class.forName() or a ClassLoader. A ClassLoader follows the parent delegation model, ultimately delegating up to the bootstrap class loader. What it achieves is "obtaining the binary byte stream that describes a class from the class's fully qualified name"; after obtaining the binary stream, it is placed into the JVM.

The Class.forName() method is actually implemented by the called ClassLoader.

Analysis of the source code shows that the overload ultimately called takes an initialize parameter that defaults to true; this parameter indicates whether the loaded class should be initialized. When it is true, the class is initialized, which means its static code blocks are executed and its static variables are assigned.

You can also call the Class.forName(String name, boolean initialize, ClassLoader loader) method by yourself to manually select whether to initialize the class when loading the class.

The JDK source describes the initialize parameter as: "if {@code true} the class will be initialized", which roughly means that when the value is true, the loaded class will be initialized.
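A small sketch of the difference: loadClass() does not trigger the static initializer, while Class.forName() (with its default initialize = true) does. The nested Config class is illustrative:

```java
public class LoadingDemo {

    static class Config {
        static {
            // runs only when the class is initialized, not when it is merely loaded
            System.out.println("Config static initializer executed");
        }
    }

    public static void main(String[] args) throws Exception {
        String name = "LoadingDemo$Config";

        // loadClass(): loads the class but does NOT initialize it -> the static block stays silent
        ClassLoader loader = LoadingDemo.class.getClassLoader();
        Class<?> loaded = loader.loadClass(name);
        System.out.println("loaded via ClassLoader: " + loaded.getName());

        // Class.forName(name): initialize defaults to true -> the static block runs here
        Class<?> initialized = Class.forName(name);
        System.out.println("loaded via Class.forName: " + initialized.getName());
    }
}
```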

Why use design patterns?

1) Design patterns are summed up by predecessors from experience; using them is like standing on the shoulders of those who came before.

2) The design pattern makes the program easy to read. Those who are familiar with design patterns should be able to easily read and understand programs written using design patterns.

3) Design patterns make programs more extensible and help satisfy the open-closed principle of system design. For example, the strategy pattern encapsulates different algorithms in subclasses: when a new algorithm is needed, you only add a new subclass implementing the given interface, and new system behavior is added without changing existing source code (see the sketch after this list).

4) Design patterns can reduce the degree of coupling between classes in the system. For example, the factory pattern enables the dependent class to only know the interface implemented by the dependent class or the inherited abstract class, which reduces the coupling between the dependent class and the dependent class.

5) Design patterns can improve code reuse. For example, the adapter mode can make the existing function codes in the system that meet the new requirements compatible with the interfaces proposed by the new requirements.

6) Design patterns can provide ready-made solutions for some common problems.

7) Design patterns add ways to reuse code. For example, the decorator pattern reuses existing code in the system without relying on inheritance.
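A minimal sketch of the strategy pattern mentioned in point 3 above; the discount strategies are illustrative:

```java
// Each algorithm lives in its own class behind a common interface, so adding a new algorithm
// means adding a class rather than editing existing code (open-closed principle).
interface DiscountStrategy {
    double apply(double price);
}

class NoDiscount implements DiscountStrategy {
    public double apply(double price) { return price; }
}

class VipDiscount implements DiscountStrategy {
    public double apply(double price) { return price * 0.8; }
}

class Checkout {
    private final DiscountStrategy strategy;

    Checkout(DiscountStrategy strategy) { this.strategy = strategy; }   // inject the algorithm

    double total(double price) { return strategy.apply(price); }
}

public class StrategyDemo {
    public static void main(String[] args) {
        System.out.println(new Checkout(new NoDiscount()).total(100));   // 100.0
        System.out.println(new Checkout(new VipDiscount()).total(100));  // 80.0
    }
}
```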

Why is the String type modified by final?

1. To make the string pool possible

The effect of the final modifier: final can modify classes, methods, and variables. A final class cannot be inherited, that is, it cannot have subclasses; a final method cannot be overridden; and a final variable, whether a class attribute, an instance attribute, a formal parameter, or a local variable, must be initialized and cannot be reassigned.

Why String is modified by final is mainly for "safety" and "efficiency" reasons.

The final on the String class means that String cannot be inherited, and the final char[] inside it holds the stored character data. Note that final only makes the reference immutable, not the array contents themselves; the contents stay unchanged because the class never modifies the array.

Why is String immutable?

Because strings are immutable, the string pool is possible. The string pool saves a lot of heap space at runtime, since different string variables can all point to the same string in the pool. If strings were mutable, string interning could not work: changing the value through one variable would also change the value seen by every other variable pointing at it.

If strings were mutable, it would also cause serious security problems. For example, database user names and passwords are passed as strings to obtain connections, and in socket programming host names and ports are passed as strings. Because a string is immutable, its value cannot be changed; otherwise changing the value of the object a string points to would create security holes.
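A small sketch of the string pool behavior described above:

```java
public class StringPoolDemo {
    public static void main(String[] args) {
        String a = "java";            // literal goes into the string pool
        String b = "java";            // reuses the pooled instance
        System.out.println(a == b);   // true: both variables point to the same pooled object

        String c = new String("java");       // new always creates a fresh object on the heap
        System.out.println(a == c);          // false: different objects
        System.out.println(a.equals(c));     // true: same character content
        System.out.println(a == c.intern()); // true: intern() returns the pooled instance
    }
}
```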

2. For thread safety

Because strings are immutable, they are multi-thread safe, and the same string instance can be shared by multiple threads. This eliminates the need to use synchronization because of thread safety issues. The string itself is thread-safe.

3. So that the hashCode can be cached

Because a string is immutable, its hashCode is cached when it is first computed and does not need to be recalculated. This makes strings very suitable as keys in a Map, and they can be processed faster than many other key objects. This is why the keys in a HashMap are so often strings.
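A small sketch of the string pool and hashCode caching described above:

```java
public class StringPoolDemo {
    public static void main(String[] args) {
        // Both literals point to the same instance in the string pool.
        String a = "java";
        String b = "java";
        System.out.println(a == b);            // true

        // new String(...) creates a separate object on the heap...
        String c = new String("java");
        System.out.println(a == c);            // false
        // ...but intern() returns the pooled instance.
        System.out.println(a == c.intern());   // true

        // The hash code is computed once and cached inside the String object,
        // which makes String cheap to use as a HashMap key.
        System.out.println(a.hashCode() == b.hashCode()); // true
    }
}
```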

The basic usage of the final keyword?

The final keyword in Java can be used to modify classes, methods, and variables (including member variables and local variables). Let's take a look at the basic usage of the final keyword from these three aspects.

1. Modified classes

When a class is modified with final, it indicates that the class cannot be inherited. In other words, if you never want a class to be subclassed, you can modify it with final. The member variables of a final class can be set to final as needed, and note that all member methods of a final class are implicitly treated as final methods.

2. Modified methods

A method modified with final cannot be overridden by subclasses. The main reason to declare a method final is to lock it down so that no subclass can change its meaning; an older, secondary reason was efficiency, since early JVMs could inline final methods, but with modern JVMs this efficiency argument no longer matters. Note that all private methods of a class are implicitly treated as final methods.

3. Modified variables

Modifying variables is the most common use of final. A final variable must be initialized and can be assigned only once:

1) For a final variable of a basic type, its value cannot be changed once it has been initialized.

2) For a final variable of a reference type, the reference cannot be pointed at another object after initialization, although the contents of the referenced object itself can still change.

3) A final member variable must be initialized at its declaration, in an initialization block, or in the constructor; a final local variable must be assigned exactly once before it is used.

Trying to reassign a final local variable produces a compile error such as: "The final local variable i cannot be assigned. It must be blank and not using a compound assignment".
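A short sketch of the three uses of final (the class and variable names are made up for illustration):

```java
final class Constants {                  // final class: cannot be subclassed
    static final int MAX = 10;           // final field: must be initialized, cannot be reassigned
}

class Base {
    final void greet() {                 // final method: cannot be overridden in subclasses
        System.out.println("hello");
    }
}

public class FinalDemo {
    public static void main(String[] args) {
        new Base().greet();

        final int i = 1;
        // i = 2;                        // compile error: the final local variable i cannot be assigned

        final int[] numbers = {1, 2, 3};
        numbers[0] = 99;                 // allowed: only the reference is final, not the array contents
        // numbers = new int[3];         // compile error: cannot reassign a final reference
        System.out.println(numbers[0] + ", max=" + Constants.MAX);
    }
}
```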

What is the difference between ArrayList and LinkedList in Java?

1) ArrayList is implemented on top of a dynamic array, while LinkedList is implemented as a doubly linked list.

2) For random access to elements (get and set), ArrayList is faster than LinkedList, because LinkedList has to traverse its nodes from one end of the list to reach the given index.

3) When adding and deleting data (add and remove operations), LinkedList is more efficient than ArrayList, because ArrayList is backed by an array: an insert or delete affects the indexes of all elements after the operation point, so those elements have to be moved.

4) From the perspective of ease of use, ArrayList offers less freedom because it works on top of a fixed-capacity array that must be resized when it fills up, but it is more convenient to use: just create it, add data, and access elements by index. LinkedList offers more freedom because it grows node by node with the amount of data, but it is slightly less convenient to use.

5) The main space overhead of ArrayList lies in the spare capacity reserved at the end of the list; the main space overhead of LinkedList lies in each node having to store its value together with node pointer information.
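A minimal sketch contrasting the two list implementations:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListDemo {
    public static void main(String[] args) {
        List<String> arrayList = new ArrayList<>();       // backed by a resizable array
        LinkedList<String> linkedList = new LinkedList<>(); // backed by a doubly linked list

        arrayList.add("a");
        arrayList.add("b");
        // get(i) is O(1) on ArrayList, O(n) on LinkedList
        System.out.println(arrayList.get(1));

        linkedList.add("a");
        // inserting at the head is O(1) on LinkedList, O(n) on ArrayList (elements must shift)
        linkedList.addFirst("first");
        System.out.println(linkedList);
    }
}
```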

What is the difference between HashMap and HashTable?

Hashtable is thread safe, while HashMap is not thread safe.

All of Hashtable's methods add the synchronized keyword to ensure thread synchronization, so the performance of HashMap is relatively higher. If there is no special requirement, it is recommended to use HashMap; if you need to use a HashMap in a multithreaded environment, use the Collections.synchronizedMap() method to obtain a thread-safe collection.
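For example, wrapping a HashMap with Collections.synchronizedMap() for use from multiple threads might look like this:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SyncMapDemo {
    public static void main(String[] args) {
        // A plain HashMap is not thread safe; wrap it when sharing it across threads.
        Map<String, Integer> map = Collections.synchronizedMap(new HashMap<>());
        map.put("count", 1);

        // Iteration over the synchronized wrapper still needs manual synchronization.
        synchronized (map) {
            for (Map.Entry<String, Integer> e : map.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}
```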

HashMap allows the use of null as the key, but it is recommended to avoid using null as the key as much as possible. When HashMap uses null as the key, it is always stored on the first node of the table array. Hashtable does not allow null as a key.

HashMap inherits AbstractMap, HashTable inherits the Dictionary abstract class, and both implement the Map interface.

The initial capacity of HashMap is 16, and the initial capacity of Hashtable is 11. The fill factor of both is 0.75 by default.

When HashMap is expanded, the capacity is doubled: capacity*2; when Hashtable is expanded, the capacity is doubled plus one: capacity*2+1.

The underlying implementations of HashMap and Hashtable are both array + linked list structures.

What are the stages of the life cycle of a thread?

When a thread is created and started, it neither enters the execution state as soon as it is started, nor is it always in the execution state. In the life cycle of a thread, it goes through five states: New, Runnable, Running, Blocked, and Dead. When a thread is started, it cannot always occupy the CPU to run alone, so the CPU needs to switch between multiple threads, so the thread state will switch between running and blocking multiple times.

The life cycle of a thread consists of 5 phases, including: new, ready, running, blocking, and death.

New (new Thread)

When an instance (object) of the Thread class is created, the thread enters a newly created state (not started).

Ready (runnable)

The thread has been started and is waiting to be allocated to the CPU time slice, which means that the thread is waiting in the ready queue to get CPU resources

Running

The thread obtains CPU resources and is performing tasks (run() method). At this time, unless the thread automatically gives up CPU resources or a higher priority thread enters, the thread will run until the end.

Blocked

For some reason, the running thread gives up the CPU and suspends its execution, that is, it enters a blocked state.

Sleeping: Use the sleep(long t) method to make the thread enter sleep mode. A sleeping thread can enter the ready state after a specified time has passed.

Waiting: call the wait() method. (Call the notify() method to return to the ready state)

Blocked by another thread: call the suspend() method. (Call the resume() method to resume)

Death (dead) When a thread finishes execution or is killed by another thread, the thread enters the dead state. At this time, the thread cannot enter the ready state to wait for execution.

Natural termination: terminate after running the run() method normally

Abnormal termination: calling the stop() method to make the thread terminate.

What is the difference between the start() and run() methods in the Thread class?

In the Thread class, a thread is started by the start() method. At this time, the thread is in a ready state and can be scheduled and executed by the JVM. During the scheduling process, the JVM completes the actual business logic by calling the run() method of the Thread class. After the run() method ends, the thread will be terminated, so the multi-threaded goal can be achieved through the start() method.

If you call the run() method of the thread object directly, it is treated as an ordinary method call and there is still only the main thread in the program. In other words, start() causes run() to be executed asynchronously on a new thread, while calling run() directly executes it synchronously, so it cannot achieve the purpose of multithreading.
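A small example that makes the difference visible:

```java
public class StartVsRunDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() ->
                System.out.println("executed on: " + Thread.currentThread().getName()));

        t.run();    // plain method call: prints "executed on: main", no new thread is created
        t.start();  // schedules the thread: run() is then invoked by the JVM on "Thread-0"
    }
}
```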

What is the difference between notify and notifyAll?

Java provides two methods, notify() and notifyAll(), to wake up threads waiting on certain conditions.

When the notify() method is called, only one waiting thread will be awakened, and there is no guarantee which thread it will be; that depends on the thread scheduler.

When the notifyAll() method is called, all threads waiting for the lock will be awakened, but before executing the remaining code, all awakened threads will compete for the lock.

If the thread calls the wait() method of the object, then the thread will be in the waiting pool of the object, and the threads in the waiting pool will not compete for the object's lock.

When a thread calls the object's notifyAll() method (waking up all waiting threads) or notify() method (waking up only one randomly chosen waiting thread), the awakened threads enter the object's lock pool, where they compete for the object's lock.

That is to say, after notify() is called, at most one thread moves from the waiting pool to the lock pool, while notifyAll() moves all the threads in the object's waiting pool to the lock pool. Threads with a higher priority have a greater probability of winning the competition for the object lock; a thread that fails to obtain the lock stays in the lock pool, and it only returns to the waiting pool if it calls wait() again.

The threads competing for the object lock will continue to execute until the synchronized code block is executed and the object lock is released. At this time, the threads in the lock pool will continue to compete for the object lock.

Therefore, the key difference between notify() and notifyAll() is that notify() wakes up only one thread, while notifyAll() wakes up all waiting threads, as the sketch below illustrates.
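A minimal wait/notifyAll sketch (the lock object and the ready flag are made up for this example):

```java
public class NotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Runnable waiter = () -> {
            synchronized (lock) {
                while (!ready) {                    // always wait inside a loop
                    try {
                        lock.wait();                // releases the lock and enters the waiting pool
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println(Thread.currentThread().getName() + " woke up");
            }
        };
        new Thread(waiter).start();
        new Thread(waiter).start();

        Thread.sleep(100);                          // give the waiters time to start waiting
        synchronized (lock) {
            ready = true;
            lock.notifyAll();                       // wakes both waiters; notify() would wake only one
        }
    }
}
```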

What is an optimistic lock and what is a pessimistic lock?

Optimistic lock

Optimistic locking reflects an optimistic assumption: reads are frequent and writes are rare, so the chance of a write conflict is low. Every time data is read, it is assumed that no one else will modify it, so no lock is taken; the check happens at update time instead. When writing, the current version number is read first and the update is performed only if the version has not changed (compare with the previous version number and update if it is the same); if the check fails, the read-compare-write sequence is repeated.

Optimistic locks in Java are basically implemented through CAS operations. CAS is an atomic update operation that compares the current value with the expected value passed in: if they are the same, it updates; otherwise it fails.
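For example, an optimistic CAS-style update with AtomicInteger from java.util.concurrent:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(1);

        // Optimistic update: succeeds only if the current value is still the expected one.
        boolean updated = value.compareAndSet(1, 2);
        System.out.println(updated + ", value=" + value.get());   // true, value=2

        // A second attempt with a stale expected value fails instead of blocking.
        System.out.println(value.compareAndSet(1, 3));            // false
    }
}
```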

Pessimistic lock

Pessimistic locking reflects a pessimistic assumption: writes are frequent, so the chance of a write conflict is high. Every time data is accessed, it is assumed that someone else will modify it, so a lock is taken on every read and write, and other readers and writers block until the lock is obtained.

The typical pessimistic lock in Java is synchronized. Locks built on the AQS framework first try to obtain the lock with an optimistic CAS; if that fails, they fall back to pessimistic blocking, as ReentrantLock does.

What is the function of the volatile keyword in Java?

The Java language provides a weak synchronization mechanism, namely volatile variables, to ensure that the update of the variable notifies other threads.

Volatile variables have two characteristics: variable visibility and reordering prohibited.

Volatile variables are not cached in registers or in other places that are invisible to other processors, so reading a volatile variable always returns the most recently written value.

Two characteristics of volatile variables:

Variable visibility

Ensure that the variable is visible to all threads. The visibility here means that when a thread modifies the value of the variable, the new value is immediately available to other threads.

No reordering

Volatile prohibits instruction reordering. It is also a more lightweight synchronization mechanism than synchronized: accessing a volatile variable performs no locking, so it does not block the executing thread.

Volatile is suitable for scenarios: a variable is shared by multiple threads, and the thread directly assigns a value to this variable.

When reading and writing non-volatile variables, each thread first copies the variable from main memory to the CPU cache. If the computer has multiple CPUs, each thread may run on a different CPU, which means each thread may copy the variable into a different CPU cache. When a variable is declared volatile, the JVM guarantees that every read of the variable goes to main memory, skipping the CPU cache step.

Applicable scene

It is worth noting that volatile guarantees the atomicity of a single read or write of the variable (which matters for long and double variables), but it cannot make a compound operation such as "i++" atomic, because i++ is essentially a read followed by a write. Volatile can replace synchronized in some scenarios, but it cannot replace it completely; it is applicable only in certain special cases.

Generally speaking, the following two conditions must both be met for volatile to guarantee thread safety in a concurrent environment:

1) The write operation to the variable does not depend on the current value (such as "i++"), or simply variable assignment (boolean flag = true).

2) The variable is not included in an invariant together with other variables, that is, different volatile variables cannot depend on each other. Volatile can only be used when the state is truly independent of the rest of the program, as in the sketch below.
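A minimal sketch of such a volatile flag (the running field and the worker thread are made up for illustration):

```java
public class VolatileFlagDemo {
    // The write does not depend on the current value, and the flag is independent
    // of any other state, so volatile is sufficient here.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // do some work
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;   // the new value is immediately visible to the worker thread
        worker.join();
    }
}
```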

1) Declare the annotation of the bean

@Component component, no clear role

@Service is used in the business logic layer (service layer)

@Repository is used in the data access layer (dao layer)

@Controller is used in the presentation layer, the declaration of the controller (used on C*)

2) Annotation of injected bean

@Autowired: Provided by Spring

@Inject: Provided by JSR-330

@Resource: Provided by JSR-250

All three can be placed on setter methods or on fields; annotating fields is recommended (clear at a glance, less code). A combined sketch using these annotations appears at the end of this section.

3) Annotations related to Java configuration classes

@Configuration declares that the current class is a configuration class, which is equivalent to Spring configuration in xml form (used on the class)

The @Bean annotation on the method declares that the return value of the current method is a bean, instead of the way in the xml (used on the method)

@Configuration declares that the current class is a configuration class, in which @Component annotations are combined internally, indicating that this class is a bean (used on the class)

@ComponentScan enables component scanning, equivalent to the component scanning configured in xml (used on the class)

@WishlyConfiguration is a combination of @Configuration and @ComponentScan annotations, which can replace these two annotations

4) Annotations related to aspects (AOP)

Spring supports the annotated aspect programming of AspectJ.

@Aspect declares an aspect (used on the class)

Use @After, @Before, and @Around to define advice; interception rules (pointcuts) can be passed directly as parameters.

@After is executed after the method is executed (used on the method)

@Before is executed before the method is executed (used on the method)

@Around is executed before and after the method is executed (used on the method)

@PointCut declares a pointcut

Use the @EnableAspectJAutoProxy annotation in the java configuration class to enable Spring's support for AspectJ proxy (used on the class)

5) @Bean attribute support

@Scope sets how to create a new Bean instance in the Spring container (used on methods, @Bean is required)

The setting types include:

Singleton (singleton, there is only one bean instance in a Spring container, the default mode),

Prototype (create a new bean each time it is called),

Request (in the web project, create a bean for each http request),

Session (in the web project, create a bean for each http session),

GlobalSession (create a Bean instance for each global http session)

@StepScope is also involved in Spring Batch

@PostConstruct is provided by JSR-250 and is executed after the constructor is executed, which is equivalent to the bean initMethod in the xml configuration file

@PreDestroy is provided by JSR-250 and is executed before the bean is destroyed, which is equivalent to the destroyMethod of the bean in the xml configuration file

6) @Value annotation

@Value injects a value for the attribute (used on the attribute)

7) Environment switching

@Profile Set the configuration environment that the current context needs to use by setting the ActiveProfiles of the Environment. (Used on class or method)

@Conditional Spring4 can use this annotation to define conditional beans. By implementing the Condition interface and rewriting the matches method, it is determined whether the bean is instantiated. (Used in method)

8) Asynchronous related

@EnableAsync is used on a configuration class to enable support for asynchronous tasks (used on the class; the AsyncConfigurer interface can be implemented to customize the async executor)

@Async is used on a bean method (or class) to declare it an asynchronous task; when used on a class, all of its methods become asynchronous (@EnableAsync is required to enable asynchronous tasks)

9) Timing tasks related

@EnableScheduling is used on the configuration class to enable support for scheduled tasks (used on the class)

@Scheduled declares that a method is a scheduled task, supporting cron, fixedDelay, fixedRate and other attributes (used on the method; support for scheduled tasks must be enabled first)

10) @Enable* annotation description

The @Enable* annotations are mainly used to enable support for a particular feature.

@EnableAspectJAutoProxy enables support for AspectJ automatic proxy

@EnableAsync enables support for asynchronous methods

@EnableScheduling enables support for scheduled tasks

@EnableWebMvc Enable Web MVC configuration support

@EnableConfigurationProperties enables support for @ConfigurationProperties annotated configuration beans

@EnableJpaRepositories enable support for SpringData JPA Repository

@EnableTransactionManagement enables support for annotated transactions

@EnableCaching enables annotation-style caching support

11) Annotations related to testing

@RunWith runner, usually used in Spring to support JUnit

@ContextConfiguration is used to load the ApplicationContext configuration, where the classes attribute is used to load configuration classes.
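A minimal sketch combining several of the annotations from groups 1) to 3) above; it assumes spring-context is on the classpath, and the package name and class names (UserDao, UserService, AppConfig) are made up for this example:

```java
package demo; // hypothetical package name, referenced by @ComponentScan below

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

@Repository                      // declares a bean in the data access layer
class UserDao {
    String findName() { return "alice"; }
}

@Service                         // declares a bean in the service layer
class UserService {
    @Autowired                   // injected by type
    private UserDao userDao;

    String greet() { return "hello " + userDao.findName(); }
}

@Configuration                   // Java configuration class, replaces XML configuration
@ComponentScan("demo")           // scans the demo package for stereotype-annotated beans
class AppConfig {
    @Bean                        // the return value of the method is registered as a bean
    String appName() { return "annotation-demo"; }
}

public class AnnotationDemo {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(AppConfig.class);
        System.out.println(ctx.getBean("appName"));                   // "annotation-demo"
        System.out.println(ctx.getBean(UserService.class).greet());   // "hello alice"
        ctx.close();
    }
}
```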

What are the commonly used annotations in Spring MVC?

@EnableWebMvc enables Web MVC configuration support in a configuration class, such as ViewResolvers or MessageConverters; it is needed when overriding the WebMvcConfigurerAdapter methods to configure Spring MVC.

@Controller declares that this class is a Controller in SpringMVC

@RequestMapping is used to map web requests, including access paths and parameters (on classes or methods)

@ResponseBody puts the return value into the response body instead of rendering a page; it is usually used to return JSON data (placed next to the return value or on the method)

@RequestBody allows request parameters to be read from the request body instead of being appended directly to the address. (Placed before the parameter)

@PathVariable is used to receive path parameters, such as in a path declared by @RequestMapping("/hello/{name}"); place the annotation before the method parameter to obtain the value. It is commonly used in Restful-style interface implementations.

@RestController This annotation is a combination annotation, which is equivalent to the combination of @Controller and @ResponseBody. The annotation is on the class, which means that all methods of the Controller are added with @ResponseBody by default.

@ControllerAdvice allows global controller configuration to be placed in one place. Methods inside a class annotated with @ControllerAdvice can be annotated with @ExceptionHandler, @InitBinder, and @ModelAttribute.

This is valid for all methods in the controller annotated with @RequestMapping.

@ExceptionHandler is used to handle exceptions in the controller globally

@InitBinder is used to set up WebDataBinder, and WebDataBinder is used to automatically bind foreground request parameters to the Model.

The original function of @ModelAttribute is to bind key-value pairs to the Model; inside a @ControllerAdvice, it lets every global @RequestMapping method obtain the key-value pairs set here. Putting several of these annotations together, a minimal controller might look like the sketch below.
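A minimal controller sketch using several of the annotations above; the paths and the User payload class are made up, and a running Spring MVC setup is assumed:

```java
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController                                   // = @Controller + @ResponseBody
@RequestMapping("/users")                         // maps web requests under /users
public class UserController {

    static class User {                           // simple payload type for the example
        public String name;
    }

    // GET /users/{name}; @PathVariable binds the path segment to the parameter.
    @RequestMapping(value = "/{name}", method = RequestMethod.GET)
    public String hello(@PathVariable("name") String name) {
        return "hello " + name;                   // written to the response body
    }

    // POST /users; @RequestBody binds the JSON request body to the User object.
    @RequestMapping(method = RequestMethod.POST)
    public User create(@RequestBody User user) {
        return user;                              // echoed back as JSON
    }
}
```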

Why is MyBatis a semi-automatic ORM mapping?

ORM is the mapping between Object and Relation, covering both Object->Relation and Relation->Object. Hibernate is a complete ORM framework, while MyBatis only completes Relation->Object, which is why it calls itself a Data Mapper Framework.

JPA is the standard for ORM mapping, and the mainstream ORM frameworks implement this standard. MyBatis does not implement JPA, and its design ideas are not exactly those of an ORM framework: MyBatis embraces SQL, while ORM is closer to object orientation and discourages hand-written SQL, providing its own SQL-like query language for the cases where it is really necessary. MyBatis is SQL mapping rather than ORM mapping. Of course, both ORM frameworks and MyBatis are persistence layer frameworks.

The most typical ORM mapping is Hibernate, which is a fully automatic ORM mapping, and MyBatis is a semi-automatic ORM mapping. Hibernate can completely realize the operation of the database through the object-relational model, and has a complete mapping structure between JavaBean objects and the database to automatically generate SQL. MyBatis only has basic field mapping, and the object data and the actual relationship of the objects still need to be implemented and managed by handwritten SQL.

The database portability of Hibernate is much better than that of MyBatis. Hibernate greatly reduces the coupling between objects and databases (Oracle, MySQL, etc.) through its powerful mapping structure and HQL language. Since MyBatis requires hand-written SQL, its coupling to the database depends directly on how the programmer writes the SQL: if the SQL is not universal and uses many statements specific to a particular database, portability drops considerably and the cost of switching databases is high.

What is the meaning of the args parameter in the main method?

In Java, args is short for arguments. It is simply the name of the String array parameter of main; being a name, it can be customized, and args is just the conventional default.

String[] args is the formal parameter of the main function, which can be used to obtain the parameters entered by the command line user.

1) The variable name (args) is a reference variable name; as a name it can be customized.

2) It can be understood as an array used to store strings; without the String[] declaration, the type of the variable args would be unknown.

3) public static void main is the entry point of the program: this method is executed when the program starts;

4) String[] args is the formal parameter of the main function, which can be used to obtain the parameters entered by the command line user.

5) Java does not recognize a main method without String[] args as the entry point; if String args[] is removed, an error occurs when the program is run, as the small example below shows.
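A tiny example of reading command-line arguments through args:

```java
public class ArgsDemo {
    public static void main(String[] args) {
        // Run with: java ArgsDemo hello world
        System.out.println("number of arguments: " + args.length);
        for (String arg : args) {
            System.out.println(arg);   // prints "hello" then "world"
        }
    }
}
```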

What is high cohesion and low coupling?

Cohesion focuses on the degree of integration of elements within a module, and coupling focuses on the degree of dependence between modules.

1) Cohesion

Also called intra-module connection. It is a measure of the functional strength of a module, that is, of how closely the elements within the module are related to each other. The more closely the elements within a module (statements, program segments) are connected, the higher its cohesion.

The so-called high cohesion means that a software module is composed of highly correlated codes and is only responsible for one task, which is often referred to as the single responsibility principle.

2) Coupling

Also called inter-block connection. Refers to a measure of the degree of closeness between the various modules in the software system structure. The closer the connection between the modules, the stronger the coupling, and the worse the independence of the modules. The level of coupling between modules depends on the complexity of the interface between modules, the calling method and the information transmitted.

For low coupling, a superficial understanding is: in a complete system, modules should be as independent of each other as possible. In other words, let each module complete a specific sub-function as independently as possible, and keep the interfaces between modules as few and as simple as possible. If the relationship between two modules is complicated, it is usually best to consider dividing them further. This is conducive to modification and composition.

The advantages and disadvantages of the Spring Boot framework?

Spring Boot advantages

1) Create a standalone Spring application

Spring Boot runs independently in the form of a jar package. Use the java -jar xx.jar command to run the project or run the main method in the main program of the project.

2) Spring Boot embeds Tomcat, Jetty or Undertow, so there is no need to deploy WAR files

When deploying a traditional Spring project, you need to install Tomcat on the server, package the project as a WAR and place it in Tomcat's webapps directory.

A Spring Boot project does not need a separately downloaded server such as Tomcat: the container is embedded, so running the main method of the project's main class is enough to start it. This also lowers the requirements on the runtime environment; a JDK in the environment is sufficient.

3) Spring Boot allows the starter to be obtained as needed through the maven tool

Spring Boot provides a series of starter poms to simplify our Maven dependencies. Through these starter projects, we can run Spring Boot projects in the form of Java Application without other server configuration.

starter pom:

docs.spring.io/spring-boot...

4) Spring Boot automatically configures the Spring framework as much as possible

Spring Boot automatically configures the Spring framework as far as possible; its large amount of auto-configuration means developers have to write far less Spring configuration themselves.

Spring Boot uses Java Config to configure Spring.

5) Provide production-ready functions, such as indicators, health checks and external configuration

Spring Boot provides monitoring of running projects based on http, ssh, and telnet. By introducing the spring-boot-starter-actuator dependency, the runtime performance parameters of the process can be obtained directly via REST, which makes monitoring more convenient.

However, Spring Boot is only a micro-framework and does not itself provide service discovery and registration, monitoring integration, or security management solutions; in a microservice architecture it therefore needs to work together with Spring Cloud.

6) Absolutely no code generation and no requirement for XML configuration

Spring Boot disadvantages

1) There are many dependencies: a Spring Boot project imports a large number of jar packages through Maven

2) Lack of solutions such as service registration and discovery

3) Lack of monitoring integration and security management solutions

What are the core annotations of Spring Boot?

1) @SpringBootApplication

This is the most core annotation, used on the Spring Boot main class. It identifies the project as a Spring Boot application and enables Spring Boot's capabilities, including automatic configuration.

It is equivalent to the combination of the three annotations @SpringBootConfiguration, @EnableAutoConfiguration, and @ComponentScan. (A minimal sketch of a main class using it appears at the end of this list.)

2) @EnableAutoConfiguration

Enables Spring Boot's auto-configuration. With this annotation, Spring Boot can configure Spring Beans according to the packages and classes on the current classpath.

For example, if the MyBatis jar is on the current classpath, MybatisAutoConfiguration can configure each of MyBatis's Spring Beans according to the relevant parameters.

3) @Configuration

An annotation added in Spring 3.0, used to replace the applicationContext.xml configuration file: everything that could be done in that configuration file can be registered through the class carrying this annotation.

4) @SpringBootConfiguration

A variant of the @Configuration annotation, used to mark Spring Boot's configuration classes.

5) @ComponentScan

An annotation added by Spring 3.1 is used to replace the component-scan configuration in the configuration file, turn on component scanning, and automatically scan the @Component annotation under the package path to register the bean instance and place it in the context (container).

6) @Conditional

An annotation added by Spring 4.0, used to identify a Spring Bean or Configuration configuration file, when the specified conditions are met, the configuration is started

7) @ConditionalOnBean

Combine the @Conditional annotation, and start the configuration when there is a specified Bean in the container.

8) @ConditionalOnMissingBean

Combined with the @Conditional annotation, the configuration is enabled only when the specified Bean does not exist in the container.

9) @ConditionalOnClass

Combined with the @Conditional annotation, the configuration is enabled only when the specified Class is present on the classpath.

10) @ConditionalOnMissingClass

Combined with the @Conditional annotation, the configuration is enabled only when the specified Class is not present on the classpath.

11) @ConditionalOnWebApplication

Combine the @Conditional annotation, and the configuration can be opened only when the current project type is a WEB project.

There are three types of projects:

ANY: any web project

SERVLET: Servlet Web project

REACTIVE: reactive web project

12) @ConditionalOnNotWebApplication

Combine the @Conditional annotation, and the configuration can be opened only if the current project type is not a WEB project.

13) @ConditionalOnProperty

Combine the @Conditional annotation, and the configuration can be opened only when the specified attribute has the specified value.

14) @ConditionalOnExpression

Combine the @Conditional annotation, and the configuration can be enabled only when the SpEl expression is true.

15) @ConditionalOnJava

Combining the @Conditional annotation, the configuration is only enabled when the running Java JVM is in the specified version range.

16) @ConditionalOnResource

Combine the @Conditional annotation, and start the configuration only when there are specified resources in the classpath.

17) @ConditionalOnJndi

Combine the @Conditional annotation, and only start the configuration when the specified JNDI exists.

18) @ConditionalOnCloudPlatform

Combine the @Conditional annotation, and the configuration can be opened only when the specified cloud platform is activated.

19) @ConditionalOnSingleCandidate

Combining @Conditional annotations, when the specified Class has only one Bean in the container, or there are multiple at the same time but it is the first choice, the configuration is turned on.

20) @ConfigurationProperties

Used to load additional configurations (such as .properties files), which can be used on @Configuration annotated classes or @Bean annotated methods. Take a look at the several ways Spring Boot reads the configuration file.

21) @EnableConfigurationProperties

Generally, it should be used in conjunction with the @ConfigurationProperties annotation to enable the support of @ConfigurationProperties annotation configuration Bean.

22) @AutoConfigureAfter

Used on the auto-configuration class, that is, the auto-configuration class needs to be configured after the other specified auto-configuration class is configured. For example, the automatic configuration class of Mybatis needs to be after the automatic configuration class of the data source.

23) @AutoConfigureBefore

Used on the automatic configuration class, that is, the automatic configuration class needs to be configured before the other specified automatic configuration class is configured.

24) @Import

An annotation added in Spring 3.0, used to import one or more configuration classes decorated with @Configuration.

25) @ImportResource

An annotation added in Spring 3.0, used to import one or more Spring XML configuration files. This is very useful for keeping Spring Boot compatible with old projects, since some configuration can only be expressed in XML and not in the form of Java Config.
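A minimal sketch of a Spring Boot main class plus a small conditional configuration, assuming spring-boot-starter is on the classpath; the class names, the package and the bean name are made up for this example:

```java
package demo; // hypothetical package name for this sketch

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// @SpringBootApplication combines @SpringBootConfiguration,
// @EnableAutoConfiguration and @ComponentScan.
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

// A tiny auto-configuration-style class: the bean is created only
// if the application has not already defined a bean named "greeting".
@Configuration
class GreetingConfiguration {
    @Bean
    @ConditionalOnMissingBean(name = "greeting")
    public String greeting() {
        return "hello from conditional configuration";
    }
}
```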

What does the directory structure of a Spring Boot project look like?

1. The structure of the code layer

Root directory: com.springboot

1) The project startup class (ApplicationServer.java) is placed under the com.springboot.build package

2) The entity class (domain) is placed in com.springboot.domain

3) Data access layer (Dao) is placed in com.springboot.repository

4) The data service layer (Service) is placed in com.springboot.service, and the implementation of the service interface (ServiceImpl) is placed in com.springboot.service.impl

5) The front controller (Controller) is placed in com.springboot.controller

6) Place the tools (utils) in com.springboot.utils

7) Constants (constant) are placed in com.springboot.constant

8) Configuration classes (config) are placed in com.springboot.config

9) View objects (vo) are placed in com.springboot.vo