DRBD’s protocol A is asynchronous, but the writing application will block as soon as the socket output buffer is full (see the sndbuf-size option in drbd.conf(5)). When that happens, the writing application has to wait until some of the written data drains through the possibly low-bandwidth network link.
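As a sketch, the socket output buffer can be tuned in the resource’s net section; the value below is a hypothetical example, not a recommendation:

```
resource r0 {
  net {
    # Larger send buffer absorbs bigger write bursts before
    # the application blocks; 512k is an illustrative value.
    sndbuf-size 512k;
  }
  # ... remaining resource configuration ...
}
```

See drbd.conf(5) for the permitted values and the default behavior of sndbuf-size.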
The average write bandwidth is limited by the available bandwidth of the network link. Write bursts can only be handled gracefully if they fit into the limited socket output buffer.
You can mitigate this with DRBD Proxy’s buffering mechanism. DRBD Proxy places changed data from the DRBD device on the primary node into its buffers. DRBD Proxy’s buffer size is freely configurable, limited only by the address space size and the available physical RAM.
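A minimal sketch of such a buffer configuration, assuming a proxy options section with a memlimit setting (the resource name and value are hypothetical):

```
resource r0 {
  proxy {
    # Upper bound on the proxy's buffer memory; sized here
    # to absorb write bursts larger than the socket buffer.
    memlimit 512M;
  }
  # ... remaining resource configuration ...
}
```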
Optionally DRBD Proxy can be configured to compress and decompress the data it forwards. Compression and decompression of DRBD’s data packets might slightly increase latency. However, when the bandwidth of the network link is the limiting factor, the gain in shortening transmit time outweighs the compression and decompression overhead.
Compression and decompression were implemented with multi-core SMP systems in mind, and can utilize multiple CPU cores.
Because most block I/O data compresses very well, the effective bandwidth increases, which justifies the use of DRBD Proxy even with DRBD protocols B and C.
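Compression is enabled through a plugin in the proxy section; the sketch below assumes a zlib plugin with a configurable compression level, and both the plugin name and level are illustrative:

```
resource r0 {
  proxy {
    plugin {
      # Compress forwarded packets; higher levels trade
      # CPU time for better compression ratio.
      zlib level 9;
    }
  }
  # ... remaining resource configuration ...
}
```

The CPU cost of a higher level is usually worthwhile only when the network link, not the CPU, is the limiting factor.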
See Chapter 6, Using DRBD Proxy for information on configuring DRBD Proxy.