2.6. Multiple replication transports

DRBD supports multiple network transports. Two transport implementations are currently available: TCP and RDMA. Each comes as its own kernel module.

2.6.1. TCP Transport

The drbd_transport_tcp.ko transport implementation is included with the DRBD distribution files. As the name implies, it uses the TCP/IP protocol to move data between machines.

The socket layer of DRBD’s replication and synchronization framework supports multiple low-level transports:

TCP over IPv4. This is the canonical implementation, and DRBD’s default. It may be used on any system that has IPv4 enabled.

TCP over IPv6. When configured to use standard TCP sockets for replication and synchronization, DRBD can also use IPv6 as its network protocol. This is equivalent to IPv4 in semantics and performance, albeit using a different addressing scheme.

SDP. SDP is an implementation of BSD-style sockets for RDMA capable transports such as InfiniBand. SDP was available as part of the OFED stack of most distributions but is now considered deprecated. SDP uses an IPv4-style addressing scheme. Employed over an InfiniBand interconnect, SDP provides a high-throughput, low-latency replication network to DRBD.

SuperSockets. SuperSockets replace the TCP/IP portions of the stack with a single, monolithic, highly efficient and RDMA capable socket implementation. DRBD can use this socket type for very low latency replication. SuperSockets must run on specific hardware which is currently available from a single vendor, Dolphin Interconnect Solutions.
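For illustration, a minimal resource replicating over the default TCP transport might be configured as follows. The resource name, host names, device paths, and addresses below are placeholders, not values from this guide:

```
resource r0 {
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;

  on alice {
    address 10.1.1.31:7789;   # TCP over IPv4, DRBD's default transport
  }
  on bob {
    address 10.1.1.32:7789;
  }
}
```

To replicate over TCP/IPv6 instead, the address lines would use the ipv6 keyword with a bracketed address, for example `address ipv6 [fd00::31]:7789;`.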

2.6.2. RDMA Transport

Alternatively, the drbd_transport_rdma.ko kernel module is available from LINBIT. This transport uses the verbs/RDMA API to move data over InfiniBand HCAs, iWARP-capable NICs, or RoCE-capable NICs. In contrast to the BSD sockets API (used by TCP/IP), the verbs/RDMA API allows data movement with very little CPU involvement.
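As a sketch of what selecting this transport looks like (assuming the drbd_transport_rdma.ko module is installed and both peers have RDMA-capable hardware; the resource name is a placeholder):

```
resource r0 {
  net {
    transport "rdma";   # use drbd_transport_rdma.ko instead of the default "tcp"
  }
  # ... on-host sections with device, disk, and address as usual ...
}
```

The per-connection configuration details are covered in the configuration chapter referenced at the end of this section.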

2.6.3. Conclusion

At high transfer rates, the CPU load and memory bandwidth overhead of the tcp transport may become the limiting factor. With appropriate hardware, you can likely achieve higher transfer rates using the rdma transport.

A transport implementation can be configured for each connection of a resource. See Section 5.1.5, “Configuring transport implementations” for more details.