TCP echo server, part 2


In the previous lesson we took a first, straightforward approach to the TCP echo server implementation and discussed its disadvantages. In this lesson we will get rid of them.

Go in parallel

The second approach is to make the reading and writing operations able to run in parallel, so the session won't get stuck if the client performs I/O operations serially.

We're going to modify the session class. The rest of the code stays the same, so we don't need to review it again.

Since things should go in parallel, we need to replace the single streambuf that was used for both reading and writing:

io::streambuf streambuf;

with two streambufs, one for reading and one for writing operations:

io::streambuf incoming;
io::streambuf outgoing;

Since the session now continuously reads data into the incoming streambuf, we should limit its capacity. So, let's do it right away:

session(tcp::socket&& socket)
: socket  (std::move(socket))
, incoming(65536) // Maximum capacity of the incoming streambuf
{
}

We will need to know whether an async_write operation is currently in progress. Remember, we have to maintain the delivery queue ourselves, and we also can't operate on buffers that are currently occupied by an asynchronous task. So we're going to put a boolean flag into the session; its complete data section will look like this:

tcp::socket socket;
bool writing = false;
io::streambuf incoming;
io::streambuf outgoing;

We will also need the ability to drop the session, so let's add a member function:

void close()
{
    error_code error;
    socket.close(error);
}

We don't really care whether socket::close failed with an error, so we'll just ignore it.

And now the main part. We need to modify three functions to achieve the desired behavior: on_read, write and on_write. The rest of the code remains the same.

We schedule async_read as before, but upon its completion we are going to do the following:

void on_read(error_code error, std::size_t bytes_transferred)
{
    // Check if an error has occurred or incoming streambuf is full
    if(!error && bytes_transferred)
    {
        // Check if the session isn't currently writing data
        if(!writing) write();

        // Read the next portion of the data
        read();
    }
    else close();
}
We drop the session if an error has occurred, as we did before. But now we also drop it if bytes_transferred is 0, which happens when the incoming streambuf is full.

Note that the read operation is now scheduled inside on_read, not inside on_write. So the session is always ready to read data.

Before reading the next portion of data, we check whether asynchronous writing is currently in progress, and if it's not, we schedule it. Since we're inside the on_read handler, there is some data in the incoming streambuf, and we can write it back to the client. However, we can do that only if we're not still sending the previous portion of the data.

Now let's see how the write function is implemented:

void write()
{
    writing = true;

    // Copy received portion of the data into outgoing streambuf
    auto buffer = outgoing.prepare(incoming.size());
    auto copied = io::buffer_copy(buffer, incoming.data());
    outgoing.commit(copied);

    // Throw copied data away from the incoming streambuf
    incoming.consume(copied);

    // Schedule asynchronous sending of the data
    io::async_write(socket, outgoing, std::bind(&session::on_write, shared_from_this(), _1, _2));
}

Working in parallel, we can't read and write data using the same streambuf, so we have to copy data from the incoming streambuf into the outgoing one. We prepare additional space inside the outgoing streambuf, copy the received data into that space, and then throw the copied data away from the incoming streambuf, since we don't need it anymore.

The on_write function now looks trivial:

void on_write(error_code error, std::size_t bytes_transferred)
{
    writing = false;

    // Close the socket if an error has occurred
    if(error) close();
}

We don't need to schedule anything from it; everything is scheduled inside on_read now. We also don't need to consume the data that was successfully sent to the client, because async_write does it for us.

The result

So, how does it work now?

  • The session is always ready to read data;
  • Whenever a read operation completes, the next one is scheduled right away;
  • A write operation is scheduled when a read operation completes, but only if the previous write isn't still in progress;
  • Different buffers are used for reading and writing, and received data is moved from the incoming buffer into the outgoing one;
  • If the incoming streambuf is full, the session is dropped intentionally. It's not an error in terms of TCP, but we consider it a violation of application-level restrictions.

Note that we don't need to limit the outgoing streambuf's capacity with a constructor parameter. In this implementation it is limited naturally by the session's delivery algorithm and equals the incoming streambuf's capacity.

Are we good now?

Once again, remember the issue from the previous lesson: if the client transmits data in a way the server doesn't expect, both of them get stuck, and we've already figured out why. In the current implementation the server is always ready to read the next piece of data, so we no longer depend on the client's behavior. Are we good now? No! There is another issue now. Argh! Consider the following client algorithm:

  • Schedule async_write and async_read of 10KB of data at the same time;
  • Wait until both are completed;
  • Repeat.

Now consider the following server-side events:

  • Schedule async_read;
  • Get into session::on_read when 1KB of data has been received;
  • Inside it, call session::write to send the data back to the client and session::read to read the next portion of data;
  • Get into session::on_read when the remaining 9KB of data has been received. Assume the previous async_write is still in progress. In that case we won't call session::write, but we will call session::read, since we always call it;
  • Get into session::on_write when the first 1KB of data has finally been delivered back to the client;
  • Done. Both server and client are stuck!

Why? Because we didn't schedule async_write inside on_write, and the next async_read is already scheduled. The client waits until reading the response completes before proceeding to the next pair of async_read and async_write. However, the read won't complete, because the server won't send the rest of the data: it would do that in the on_read handler, which will never fire. Once again, server and client are waiting for each other.

Next time we will solve this issue as well.

Full source code for this lesson: tcp-echo-server-2.cpp

