2.21. DRBD client

With the multiple-peer feature of DRBD, a number of interesting use cases become possible, for example the DRBD client.

The basic idea is that the DRBD backend can consist of 3, 4, or more nodes (depending on the required redundancy policy); but, as DRBD 9 can connect more nodes than that, one bitmap slot[3] gets reserved for a diskless Primary, i.e. a DRBD client.

So, with a policy of 3-way data redundancy, and one bitmap slot reserved for later re-balancing, you might define your DRBD resource like this:

resource kvm-mail {
  device      /dev/drbd6;
  disk        /dev/vg/kvm-mail;
  meta-disk   internal;

  on store1 {
    node-id   0;
  }
  on store2 {
    node-id   1;
  }
  on store3 {
    node-id   2;
  }

  on for-later-rebalancing {
    node-id   3;
  }

  # DRBD "client"
  floating {
    disk      none;
    node-id   4;
  }

  # rest omitted for brevity
}

Of course, you can have more than one DRBD client defined, too; you just need to remember to allocate a unique node-id for each one.
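For instance, a second client could be added as another section in the same resource; the host name compute1 below is purely illustrative:

```
  # second DRBD "client"
  on compute1 {
    disk      none;
    node-id   5;
  }
```

An on section with disk none; behaves like the floating variant above, except that it is tied to a fixed host name rather than identified by its network address.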

NOTE. DRBD Manage (which is the recommended configuration method; see its documentation) allows configuring hosts as clients, too.

In addition to the horizontal scaling gained by adding storage nodes, this kind of setup allows a cluster of VM frontend servers, managed e.g. via Pacemaker, to access the data without needing local storage.
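As a rough sketch of how such a cluster might manage the DRBD resource, a Pacemaker configuration (crm shell syntax) could look like the following; all IDs are illustrative, and the intervals and counts would need to be adapted to your cluster:

```
primitive p_drbd_kvm-mail ocf:linbit:drbd \
        params drbd_resource=kvm-mail \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
ms ms_drbd_kvm-mail p_drbd_kvm-mail \
        meta master-max=1 clone-max=5 notify=true
```

Here clone-max matches the five node-ids defined in the resource above; the diskless front-end nodes simply run the resource without local backing storage.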


This DRBD client is an easy way to get data over the wire, but it doesn’t have any of the advanced iSCSI features such as Persistent Reservations. If your setup has only basic I/O needs, like read, write, trim/discard, and perhaps resize (e.g. for a virtual machine), you should be fine.

Furthermore, this kind of setup is not yet fully optimized in the DRBD 9.0.0 release. Planned enhancements include:

  • not needing a bitmap slot for client nodes
  • more granular specification of read-balancing

[3] This requirement might be optimized away at some point, perhaps even in DRBD 9.0.0.