As you may have noticed, most of the examples we've looked at so far have been about TCP. That's because most services, applications and application-level protocols are built on top of TCP. It's kind of the default choice: if you don't know which protocol you need, choose TCP.
UDP is a trickier beast, and the areas where UDP is the better choice are quite narrow. We will take a closer look at UDP's features and at working with UDP in Boost.Asio somewhat later, after you've advanced your networking skills a bit more. Meanwhile, we'll continue to learn Boost.Asio by dealing with TCP.
Oh, and there is also ICMP! It is used even more rarely than UDP. Someday we will meet it as well, but for now, let's get back to TCP.
So, in this lesson we will briefly recap everything we've learned so far.
Everything starts with, and revolves around, a boost::asio::io_context class instance. All I/O operations are handled by some io_context. All classes providing I/O functionality are bound to some io_context on construction and cannot be rebound to another io_context during their lifetime. First you create at least one io_context class instance; then you build everything else on top of it.
To transmit data over the network you need a socket. A socket is something like a file handle, but operations on a socket are more restricted, and those restrictions depend on the network protocol you're dealing with: TCP, UDP or ICMP.
A TCP connection is something like a bidirectional sequential data stream. It can be open and ready to operate, or it can be closed. You can write data into it and read data from it (at the same time, if you want to). All data is guaranteed to be delivered, and delivered in the same order it was sent, unless an error occurs. Unlike with a file, you can't position a read or write pointer inside a TCP stream.
There are no such things as a UDP connection or an ICMP connection. With UDP or ICMP you send and receive separate pieces of data ("datagrams"). Delivery of a datagram is not guaranteed, so you have to ensure data delivery yourself. It is also not guaranteed that datagrams will arrive at the destination endpoint in the order they were sent, so you have to maintain delivery order yourself as well. Datagrams cannot be split into pieces during delivery: you can't receive part of a datagram, you receive either all of it or nothing. Therefore, your reading buffer should be large enough to hold a whole datagram in one piece.
All I/O operations come in two variants: synchronous and asynchronous. The names of asynchronous functions start with the async_ prefix. In real-life products you should almost always use the asynchronous approach.
Asynchronous I/O operations in Boost.Asio are not queued out of the box. If you need to queue such operations, you have to implement and maintain such a queue yourself.
A client application is an application that initiates a network connection on its own. To initiate a connection, use the socket::async_connect member function (which works with a single endpoint) or the boost::asio::async_connect free function (which works with a range of endpoints).
To resolve a hostname or domain name into an endpoint that you can connect a socket to, use a resolver class instance.
If you need to combine Boost.Asio event polling with some other polling (the OS, another API, etc.) within a single thread, use the io_context::poll member function instead of io_context::run: poll executes only the handlers that are ready to run and returns immediately instead of blocking.
A server application is an application that waits for incoming connections instead of initiating them on its own. Use an acceptor class instance to accept incoming connections. Specify the network interface and listening port in its constructor, and use the acceptor::async_accept member function to start accepting incoming connections.
A socket object is usually wrapped into some sort of "session", a higher-level abstraction which holds the socket itself as well as other data associated with the connection, such as incoming and outgoing data buffers. There are different ways of maintaining a session's lifetime, and the most suitable one depends on your server's design.
Go multithreaded to utilize all of your CPU cores. To do so, just run the io_context::run member function several times from different threads for the same io_context class instance.
Use io_context::strand for synchronization. Completion handlers bound to the same strand are invoked serially. Organize your code in such a way that strands can satisfy all of your synchronization needs.
All synchronous functions have two overloads in terms of error handling: in case of error, the first overload throws an exception, while the second returns an error code by reference. Asynchronous functions always pass an error code into the completion handler.