Learning further

In the next lesson we will review a bigger server example: a TCP chat server. Before jumping into it, we should learn a few new things first.

First, a socket can report the endpoints on both sides of the connection:

boost::asio::ip::tcp::endpoint endpoint;
endpoint = socket.local_endpoint(); // IP:Port of local side of the connection
endpoint = socket.remote_endpoint(); // IP:Port of remote side of the connection

Note that these functions may throw an exception. If you don't want to deal with exceptions, you can use the overloads that report failure through an error_code instead:

boost::system::error_code error;
auto endpoint = socket.remote_endpoint(error);

tcp::endpoint supports stream output, so you can print it directly:

boost::system::error_code error;
auto endpoint = socket.remote_endpoint(error);
std::cout << "Remote endpoint: " << endpoint << "\n";

Output:

Remote endpoint: 127.0.0.1:38529

Sometimes you need to cancel an asynchronous operation that was scheduled earlier. The only reliable and portable way to do that is to close the associated socket:

boost::asio::async_read(socket, buffer, completion_handler);
// ...
socket.close();

Note that socket::close may throw an exception. As always, there is an error_code overload:

boost::system::error_code error;
socket.close(error);

There is also a socket::cancel member function that cancels outstanding asynchronous operations without closing the socket. However, its behavior is platform-specific: it may work as you expect, or it may simply be ignored by the operating system. Besides, needing it is almost always a sign of a design problem. Try to avoid this operation.

When you send data, you always know how many bytes should be transferred. When you receive data, you may also expect a fixed number of bytes. In some cases, however, you need to read until some condition is met, for example, until a particular byte sequence (such as the "\n" character) arrives. In that case boost::asio::streambuf may be more convenient than a fixed-size buffer. When using this technique, limit the upper bound of the streambuf size by passing the maximum allowed size to its constructor; otherwise it can grow until you run out of memory:

boost::asio::streambuf streambuf(65536);

After you have processed the portion of data received by the last operation, remove it from the streambuf so the buffer doesn't grow indefinitely. That is done with the streambuf::consume function:

boost::asio::async_read_until(socket, streambuf, "\n", [&] (boost::system::error_code error, std::size_t bytes_transferred)
{
    // Process the data received
    // ...
    streambuf.consume(bytes_transferred);
});

Asynchronous operations aren't queued by the library out of the box: you have to wait until the current operation completes before scheduling the next one, which means maintaining the queue yourself. This, of course, only concerns operations of the same type, such as several reads or several writes. One async_read and one async_write can be in flight at the same time without any issues.

A session object's lifetime can be controlled in different ways, depending on the server's logic. Sometimes it's enough to capture a shared pointer to the session in the completion handler, which keeps the session alive at least until the current asynchronous operation completes. Sometimes, however, the server needs to know about all of the active clients, for example, to be able to iterate over them. This can be achieved by holding their pointers in a dedicated container. Also, a session object sometimes has to stay alive even when no asynchronous operations are scheduled and, therefore, no completion handlers hold its shared pointer. This can be achieved by keeping the session's shared pointer somewhere else (e.g. the server's container I've just mentioned) or by dealing with raw pointers. Dealing with raw pointers may sound strange, we're talking about C++ after all, but in some special cases such a technique works fine for asynchronous control flow. We will discuss the raw-pointer technique later.

Usually, a server “knows” about the session class it operates on. However, a session may also need to “say” something to the server, which creates a cyclic dependency. It can be worked around by forward-declaring the server class and passing a server reference to the session, but that's not a very good design. A better way is to use dedicated event-handler function objects:

using message_handler = std::function<void (std::string)>;

// Server-side
void server::create_session()
{
    auto client = std::make_shared<session>([] (std::string const& message)
    {
        std::cout << "We got a message: " << message;
    });
}

// Session-side
session::session(message_handler&& handler)
: on_message(std::move(handler))
{
}

void session::async_receive()
{
    socket.async_receive(some_buffer, [...] (...)
    {
        on_message(some_buffer);
    });
}

Better doesn't mean best, though. There are several ways to handle messaging between a server and its sessions, and which one is best depends on the details of your system design.

OK, we've now learned everything we need to review the next example: a simple TCP chat server.
