Let's talk about the C++11 future feature and the open source project ananas (what are the differences between the futures of folly, std C++11, and ananas?)

I. Introduction

1. I first learned of the open source project ananas (a C++11/golang protobuf RPC framework) from an article on a WeChat public account. ananas implements a high-performance Linux network library and RPC functionality; at its core is a rewrite of the C++11 future. The link to the WeChat article is:

mp.weixin.qq.com/s/hurLTscQv...

The author of ananas is Bert Young, and his github address is github.com/loveyacper

QQ discussion group for ananas-rpc and promise/future technology: 784231426

2. Promise/future related source code

github.com/loveyacper/...  - ananas project source code; after downloading and unpacking, rename the folder ananas-master to ananas before compiling, otherwise the paths will not be found

github.com/loveyacper/...  - the core of the ananas source code: future

github.com/facebook/fo  - folly future, Folly: Facebook Open-source Library

Tencent's tars also has an implementation of promise/future:

github.com/TarsCloud/T...

Boost also implements futures; the source code is located at:

sourceforge.net/projects/bo  - src

www.boost.org/doc/libs/1_  - doc

/boost_1_68_0/boost/thread/futures/*.*

/boost_1_68_0/boost/thread/future.hpp

github.com/chenshuo/mu...  - Muduo project source code; the ananas network library is similar to it (one loop per thread + thread pool)

github.com/netty/netty  - netty project source code; ananas's EventLoopGroup draws on the implementation in Java netty

github.com/netty/netty...

Netty is an open source Java network programming framework from JBoss that mainly re-wraps the Java NIO package, providing more powerful, more stable functionality and an easier-to-use API than raw Java NIO. Netty's author is Trustin Lee, a Korean developer who also created another well-known network programming framework, MINA. The two are similar in many respects and their threading models are basically the same, but the netty community is much more active than mina's.

Netty 3.x is currently the most stable version for enterprise use; Dubbo, for example, uses 3.x.

Netty 4.x introduced major features such as a memory pool, which effectively reduces GC load; RocketMQ uses 4.x.

Netty 5.x has been abandoned; see github.com/netty/netty...

In concurrent programming we usually use a set of non-blocking models: Promise, Future, and Callback. A Future represents the result of an asynchronous task that may not have actually completed yet; callbacks can be attached to it so that the appropriate action runs after the task succeeds or fails. The Promise is handed to the task executor, which uses it to mark the task as completed or failed. This set of models is arguably the foundation of many asynchronous non-blocking architectures, and Netty 4 provides exactly this Future/Promise asynchronous model. The Netty documentation states that Netty's network operations are asynchronous, and the Future/Promise model is used extensively in its source code, where both are defined:

The Future interface defines isSuccess(), isCancellable(), cause(), and other methods for querying the status of the asynchronous execution (read-only).
The Promise interface extends Future and adds the setSuccess() and setFailure() methods (writable).
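To make this read-only/writable split concrete, here is a minimal C++ sketch of the idea (hypothetical class names SimpleFuture/SimplePromise; this is neither Netty's nor ananas's actual code):

#include <memory>
#include <mutex>
#include <string>

// Shared state written by the Promise and read by the Future.
struct SharedState {
    std::mutex mtx;
    bool done = false;     // has the producer finished?
    bool success = false;  // did it succeed?
    std::string cause;     // error description on failure
};

// Read-only view: consumers can only query the status.
class SimpleFuture {
public:
    explicit SimpleFuture(std::shared_ptr<SharedState> s) : state_(std::move(s)) {}
    bool isDone() const { std::lock_guard<std::mutex> g(state_->mtx); return state_->done; }
    bool isSuccess() const { std::lock_guard<std::mutex> g(state_->mtx); return state_->done && state_->success; }
    std::string cause() const { std::lock_guard<std::mutex> g(state_->mtx); return state_->cause; }
private:
    std::shared_ptr<SharedState> state_;
};

// Writable side: only the task executor holds the Promise.
class SimplePromise {
public:
    SimplePromise() : state_(std::make_shared<SharedState>()) {}
    SimpleFuture getFuture() { return SimpleFuture(state_); }
    void setSuccess() { std::lock_guard<std::mutex> g(state_->mtx); state_->done = true; state_->success = true; }
    void setFailure(const std::string& why) { std::lock_guard<std::mutex> g(state_->mtx); state_->done = true; state_->cause = why; }
private:
    std::shared_ptr<SharedState> state_;
};

The consumer can only observe the shared state through the Future; only the executor holding the Promise can write to it.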

Promise/future is a very important asynchronous programming model that lets us escape the traditional callback trap and write asynchronous code in a more elegant and clear way. Standard C++11 already supports std::future/std::promise, so why does Facebook's folly still provide its own implementation? The reason is that the future provided by the C++ standard is too simplistic. The biggest improvement in folly's implementation is that you can attach callback functions to a future (such as Then), making chained calls convenient and the code more elegant and concise; and the improvements do not stop there.
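For a taste of what that chaining looks like, here is a minimal folly-style sketch (assuming a recent folly release; the continuation method has been renamed over the years — older folly used then(), newer versions use thenValue()/thenTry()):

#include <folly/futures/Future.h>
#include <iostream>

int main() {
    // Each thenValue() registers a callback and returns a new future,
    // so the steps compose into a chain instead of nested callbacks.
    folly::Future<int> f =
        folly::makeFuture(42)
            .thenValue([](int x) { return x + 1; })   // runs when 42 is ready
            .thenValue([](int x) { return x * 2; });  // consumes the previous result

    std::cout << std::move(f).get() << std::endl;     // prints 86
}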

A Future means that you need something "in the future" (usually the result of a network request), but the request must be initiated now and will execute asynchronously. Put another way: you need to perform a request asynchronously in the background.

The Future/Promise pattern has implementations in many languages. Most typically, the C++11 standard library provides future/promise; in addition, ES2015 has Promise and async-await, and Scala has Future built in.

Future and Promise are actually two completely different things:

Future: represents an object whose result is not yet available; the operation that produces that result is asynchronous;

Promise: a Future object can be created from a Promise object (via getFuture), which associates the two with a shared state. Once created, the value stored through the Promise can be read through the Future; Promise can be seen as the means by which a Future's result is delivered;

In short: they provide a non-blocking way to handle parallel operations, although you can of course also block while waiting for a Future's result.
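This relationship is easy to demonstrate with plain standard C++11: the promise is the producer side, the future obtained from it is the consumer side, and both share one state:

#include <future>
#include <iostream>
#include <thread>

int main() {
    std::promise<int> prom;                    // producer side (writable)
    std::future<int> fut = prom.get_future();  // consumer side (read-only), shares state with prom

    std::thread producer([&prom]() {
        prom.set_value(42);                    // fill the shared state
    });

    std::cout << fut.get() << std::endl;       // blocks until the value arrives; prints 42
    producer.join();
}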

3. Related knowledge links

Installing CMake 2.8.12.2 on CentOS 7; see also the CMake Practice tutorial

My protobuf-3.5.2 practice: installation and testing  - ananas depends on protobuf, so install it first

"Linux multi-threaded server programming: using muduo C++ network library" study notes, recommended by firecat

Ten C++11 new features you must master (std)

Deep understanding of C++11 (std) 

C++11 concurrent programming (std) 

Introduction to C++11 multi-threaded future/promise (std, boost)

C++ asynchronous call tool future/promise realization principle (std, boost)


Folly tutorial series: future/promise (facebook)

Facebook brings a robust and powerful Folly Futures library to C++11 (facebook)

Facebook's C++ 11 component library Folly Futures (facebook)

Tars framework Future/Promise use

Future/Promise

C++ future proposal by Herb Sutter

 

II. Future is the core of ananas. For a more detailed introduction, please see:

github.com/loveyacper/...

Redis client based on future and coroutine

C++11 ananas library series (1) Use of Future

 

III. The reprinted WeChat article is as follows:

ananas is a base library written in C++11 that covers functionality commonly needed in backend development: a network library wrapping udp/tcp and epoll/kqueue, python-style coroutines, an easy-to-use timer, a multi-threaded logger, a thread pool, TLS, unittest, google-protobuf RPC, and the powerful future-promise.

1. ananas origin

I have been working with C++11 for two or three years. Two months ago I decided to organize the code I commonly use in backend development and started writing ananas. Quite coincidentally, about ten days later, in mid-December 2016, a few colleagues and I developed a simple MOBA game using frame synchronization: the server only needed to maintain simple room logic and connection management, and deliver the frame messages on a timer. Since this was a quick demo and the client did not plan to integrate the company's components, the server did not use tsf4g. It took only half an afternoon to get ananas+protobuf communicating with the client, and we successfully demoed the game to the leadership before the New Year. That also made me decide to keep developing and maintaining ananas. This article first introduces the use of ananas future.

2. Introduction to Future

After using C++11 for a while, you will find that the standard library already implements promise/future. However, once you look a little closer, you will find that this code seems to have been added just to meet a KPI; it smells no better than std::auto_ptr did in its day. Yes, you can only poll or block waiting on the future, which is unusable in performance-sensitive code. Therefore, Herb Sutter and others put forward a new future proposal (see the C++ future proposal linked above). ananas future implements everything in the proposal, and more (when-N, and the very important timeout support). The underlying infrastructure mainly borrows from folly Future, which helped me solve all sorts of obscure C++ template syntax problems; I will explain this in detail in the source-code analysis to follow. For an introduction to folly future, see: Introduction to the facebook folly future library
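To make the complaint about std::future concrete: you can only block with get()/wait() or poll with wait_for(); there is no way to attach a continuation. A sketch of the polling pattern:

#include <chrono>
#include <future>
#include <iostream>

int main() {
    std::future<int> fut = std::async(std::launch::async, [] { return 42; });

    // No .Then() in standard C++11: either block on get()/wait(), or poll like this.
    while (fut.wait_for(std::chrono::milliseconds(10)) != std::future_status::ready) {
        // burn time doing something else, then check again
    }
    std::cout << fut.get() << std::endl;  // finally consume the value
}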

Here are a few scenarios to show the solution using ananas future.

3. Usage scenarios

3.1 Initiate requests to multiple servers in sequence: chained calls

The server needs to pull the player's basic information from redis1; after obtaining it, it then requests the detailed information from redis2 based on that content. In old C code we generally had to stash the context by hand for the callback; with C++11 we can use shared_ptr and lambdas to simulate closures that capture the context:

//1. Obtain the basic information asynchronously
redis_conn1->Get<BasicProfile>("basic_profile_key")
    .Then([redis_conn2](const BasicProfile& data) {
        //2. Process the returned basic information, then obtain the detailed information asynchronously
        return redis_conn2->Get<DetailProfile>("detail_profile_key"); // it returns another future
    })
    .Then([client_conn](const DetailProfile& data) {
        //3. SUCC: process the returned detailed information and send it to the client
        client_conn->SendPacket(data);
    })
    .OnTimeout(std::chrono::seconds(3), [client_conn]() {
        std::cout << "The request timed out\n";
        //3. FAIL: notify the client
        client_conn->SendPacket("server timeout error");
    }, &this_event_loop);

The first Get initiates a request and returns immediately; Then registers a callback to process the result. After the first request returns, the second Get request is initiated, and when it returns, the result is sent to the client. OnTimeout handles the timeout case: if either redis fails to respond within 3s, the timeout callback runs on this_event_loop and notifies the client.

3.2 Initiate requests to multiple servers at the same time, and start processing when all requests are returned

Still using the example above, now suppose the basic information and detailed information are unrelated, so they can be requested at the same time and both sent to the client:

//1. Obtain the basic and detailed information asynchronously
auto fut1 = redis_conn1->Get<BasicProfile>("basic_profile_key");
auto fut2 = redis_conn2->Get<DetailProfile>("detail_profile_key");

ananas::WhenAll(fut1, fut2)
    .Then([client_conn](std::tuple<BasicProfile, DetailProfile>& results) {
        //2. SUCC: send both results to the client
        client_conn->SendPacket(std::get<0>(results));
        client_conn->SendPacket(std::get<1>(results));
    })
    .OnTimeout(std::chrono::seconds(3), [client_conn]() {
        std::cout << "The request timed out\n";
        //3. FAIL: notify the client
        client_conn->SendPacket("server timeout error");
    }, &this_event_loop);

WhenAll collects the results of all the futures; only after every result has been collected does the callback execute.
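The contract can be illustrated with a deliberately naive standard-library version (this is not ananas's implementation, which is callback-based rather than blocking): the combined future becomes ready only after every input future has produced its value.

#include <future>
#include <memory>
#include <vector>

// Naive when-all over std::future: a helper thread blocks on each input in
// turn, so the returned future is ready only when all inputs are ready.
template <typename T>
std::future<std::vector<T>> WhenAllNaive(std::vector<std::future<T>> futs) {
    auto shared = std::make_shared<std::vector<std::future<T>>>(std::move(futs));
    return std::async(std::launch::async, [shared]() -> std::vector<T> {
        std::vector<T> results;
        for (auto& f : *shared)
            results.push_back(f.get());   // collect in input order
        return results;
    });
}

A production combinator such as ananas's WhenAll avoids the helper thread, typically by registering a callback on each input future and fulfilling the combined promise once a counter of pending results drops to zero.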

3.3 Initiate requests to multiple servers at the same time, when a certain request returns, start processing

Suppose there are 3 identical servers S1, S2, S3, and we want to issue 100 test requests to see which server responds fastest. This is a scenario for WhenAny:

struct Statics {
    std::atomic<int> completes{0};
    std::vector<int> firsts;
    explicit Statics(int n) : firsts(n) {}
};

// count how many times each server comes first (fastest response)
auto stat = std::make_shared<Statics>(3);

const int kTests = 100;
for (int i = 0; i < kTests; ++i) {
    std::vector<Future<std::string>> futures;
    for (int j = 0; j < 3; ++j) {
        auto fut = conn[j]->Get<std::string>("ping");
        futures.emplace_back(std::move(fut));
    }

    auto anyFut = ananas::WhenAny(std::begin(futures), std::end(futures));
    anyFut.Then([stat](std::pair<size_t /* fut index */, std::string>& result) {
        size_t index = result.first; // this round, server `index` responded fastest
        stat->firsts[index]++;

        if (stat->completes.fetch_add(1) == kTests - 1) {
            // all 100 tests completed
            int quickest = 0;
            for (int i = 1; i < 3; ++i) {
                if (stat->firsts[i] > stat->firsts[quickest])
                    quickest = i;
            }
            printf("The fastest server index is %d\n", quickest);
        }
    });
}

When any of the three requests returns (that is, when the fastest server responds), the callback executes and the count is updated.

In the end, the server with the highest count is essentially the one that responds fastest.
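The first-wins semantics behind WhenAny can likewise be sketched with the standard library (illustrative only, not ananas's code): each input is watched from its own thread, and an atomic flag ensures that only the first completion fulfils the promise:

#include <atomic>
#include <future>
#include <memory>
#include <thread>
#include <utility>
#include <vector>

// Naive when-any over std::future: the returned future carries the index of
// the first input to complete together with its value.
template <typename T>
std::future<std::pair<size_t, T>> WhenAnyNaive(std::vector<std::future<T>> futs) {
    auto shared = std::make_shared<std::vector<std::future<T>>>(std::move(futs));
    auto prom = std::make_shared<std::promise<std::pair<size_t, T>>>();
    auto won = std::make_shared<std::atomic<bool>>(false);

    for (size_t i = 0; i < shared->size(); ++i) {
        std::thread([shared, prom, won, i]() {
            T value = (*shared)[i].get();    // block until this input is ready
            if (!won->exchange(true))        // only the first completion wins
                prom->set_value(std::make_pair(i, std::move(value)));
        }).detach();
    }
    return prom->get_future();
}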

3.4 Initiate requests to multiple servers at the same time; when more than half of the requests return, start processing

The typical scenario is Paxos. In the first phase, the proposer initiates a prepare request; only after receiving promise replies from a majority of the acceptors can it start the second phase and propose a value to the acceptors:

// paxos phase 1: the Proposer sends prepare to the Acceptors
const paxos::Prepare prepare;

std::vector<Future<paxos::Promise>> futures;
for (const auto& acceptor : acceptors_) {
    auto fut = acceptor.SendPrepare(prepare);
    futures.emplace_back(std::move(fut));
}

const int kMajority = static_cast<int>(futures.size() / 2) + 1;

// use an anonymous future here
WhenN(kMajority, std::begin(futures), std::end(futures))
    .Then([](std::vector<paxos::Promise>& results) {
        printf("The proposal succeeded: promises received from a majority of acceptors. Now start phase two and propose a value!\n");
        // paxos phase 2: select a value
        const auto value = SelectValue(hint_value);
        // initiate a proposal to the acceptors:
        // foreach (a in acceptors)
        //     a->SendAccept(ctx_id, value); // use ctx_id to ensure the same proposal id is used in both phases
    })
    .OnTimeout(std::chrono::seconds(3), []() {
        printf("prepare timed out, it probably failed; increase the proposal number and try again!\n");
        // increase prepareId and continue sending prepare
    }, &this_eventloop);

3.5 Specify that the Then callback is executed in a specific thread

Herb Sutter's proposal mentions the ability to assign the Then callback to execute on a specific thread. I made up an example for this:

Suppose the server needs to read a large file; there is no non-blocking file read (setting io_submit aside for now), and the read may take hundreds of milliseconds. Reading synchronously would inevitably stall the server. We would like to spawn a separate IO thread for the read and be notified when it finishes. With future, the code looks like this:

// In the this_loop thread:
// read very_big_file in another thread
Future<Buffer> ft(ReadFileInSeparateThread(very_big_file));
ft.Then([conn](const Buffer& file_contents) {
      // SUCCESS: process the file contents
      conn->SendPacket(file_contents);
  })
  .OnTimeout(std::chrono::seconds(3), [very_big_file]() {
      // FAILED OR TIMEOUT:
      printf("Read file %s failed\n", very_big_file);
  }, &this_loop);

Is there a problem with this code? Note that for a TCP connection, send generally must not be called from multiple threads. This statement in the callback

conn->SendPacket(file_contents);

executes in the file-reading thread, so there is a danger of send being called from multiple threads.

So we need to specify that the callback is executed in the original thread. It's very simple, just change one line and call another overload of Then:

ft.Then(&this_loop, [conn](const Buffer& file_contents) { ...

Note the first parameter, this_loop: this way SendPacket runs on this thread and there is no concurrency error.
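How can a Then overload marshal a callback onto a particular thread? The underlying idea is a mutex-protected task queue owned by the loop thread; posting from another thread simply enqueues a closure that the loop thread later runs. A generic sketch (illustrative, not ananas's EventLoop code):

#include <functional>
#include <mutex>
#include <queue>

// Minimal single-threaded task queue: anything Execute()d runs on the
// thread that drives RunPendingTasks(), no matter which thread posted it.
class MiniLoop {
public:
    void Execute(std::function<void()> task) {
        std::lock_guard<std::mutex> guard(mutex_);
        tasks_.push(std::move(task));
    }

    // Called repeatedly by the loop thread (e.g., once per epoll iteration).
    void RunPendingTasks() {
        std::queue<std::function<void()>> ready;
        {
            std::lock_guard<std::mutex> guard(mutex_);
            ready.swap(tasks_);   // grab the batch, release the lock quickly
        }
        while (!ready.empty()) {
            ready.front()();      // runs on the loop thread
            ready.pop();
        }
    }

private:
    std::mutex mutex_;
    std::queue<std::function<void()>> tasks_;
};

A Then(&loop, cb) overload can then wrap cb so that, when the promise is fulfilled on some other thread, the wrapper calls loop.Execute(cb) instead of invoking cb directly.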

4. Example: future-based redis client

I have briefly introduced various usage scenarios for future; now I will end this article with a complete example: a redis client. I chose to implement a redis client first because redis is widely used and familiar to everyone, and second because the redis protocol is simple and guarantees ordered responses, so it is not hard to implement and will not be a distraction.

4.1 Packaging the protocol

For packaging the protocol I chose the inline protocol. With C++11 variadic templates this is very easy to do:

// Build a redis request from multiple strings, using the inline protocol
template <typename... Args>
std::string BuildRedisRequest(Args&& ...);

template <typename STR>
std::string BuildRedisRequest(STR&& s) {
    return std::string(std::forward<STR>(s)) + "\r\n";
}

template <typename HEAD, typename... TAIL>
std::string BuildRedisRequest(HEAD&& head, TAIL&&... tails) {
    std::string h(std::forward<HEAD>(head));
    return h + " " + BuildRedisRequest(std::forward<TAIL>(tails)...); // arguments joined with a space
}

4.2 Protocol sending and context maintenance

Redis supports pipelined requests: you do not have to wait for each reply before sending the next request. So we need to save a context for every request sent. Because requests and responses correspond strictly in order, the implementation is somewhat simplified. When a request is issued, a Promise is constructed for it. A brief word about Promise: promise and future correspond one to one; think of the producer as operating on the promise and filling in the value, while the consumer operates on the future and registers callback functions on it, which execute once the value arrives. This way the API can return the corresponding future, and the caller enjoys the fluent future interface:

// set name first, then get name.
ctx->Set("name", "bertyoung").Then(
    [ctx](const ResponseInfo& rsp) {
        RedisContext::PrintResponse(rsp);
        return ctx->Get("name"); // get name, returning another future
    }).Then(
        RedisContext::PrintResponse
    );

Now define the pending request context:

enum ResponseType {
    None,
    Fine,   // redis returned OK
    Error,  // redis returned an error
    String, // redis returned a string
};

using ResponseInfo = std::pair<ResponseType, std::string>;

struct Request {
    std::vector<std::string> request;
    ananas::Promise<ResponseInfo> promise;
};

std::queue<Request> pending_;

For each request, a Request object is created and pushed onto the pending_ queue. The first-in-first-out nature of the queue matches the strictly ordered redis protocol perfectly:

ananas::Future<ResponseInfo> RedisContext::Get(const std::string& key) {
    // Redis inline protocol request
    std::string req_buf = BuildRedisRequest("get", key);
    hostConn_->SendPacket(req_buf.data(), req_buf.size());

    RedisContext::Request req;
    req.request.push_back("get");
    req.request.push_back(key);

    auto fut = req.promise.GetFuture();
    pending_.push(std::move(req));

    return fut;
}

4.3 Processing the response

When a complete redis server response packet has been parsed, take the promise at the head of the pending queue and set its value:

auto& req = pending_.front();

// set the promise's value
req.promise.SetValue(ResponseInfo(type_, content_));

// pop the request whose response has now been received
pending_.pop();

4.4 Call example

Initiate two requests; when both have returned, print the results:

void WaitMultiRequests(const std::shared_ptr<RedisContext>& ctx) {
    // issue 2 requests; when both have returned, run the callback
    auto fut1 = ctx->Set("city", "shenzhen");
    auto fut2 = ctx->Set("company", "tencent");

    ananas::WhenAll(fut1, fut2).Then(
        [](std::tuple<ananas::Try<ResponseInfo>, ananas::Try<ResponseInfo>>& results) {
            std::cout << "All requests returned:\n";
            RedisContext::PrintResponse(std::get<0>(results));
            RedisContext::PrintResponse(std::get<1>(results));
        });
}

5. Conclusion

This concludes the article on using ananas future. A source-code analysis of future, and the usage and implementation of the other modules, will follow later.