Part II. Building, installing and configuring DRBD

Chapter 3. Installing pre-built DRBD binary packages

Packages supplied by LINBIT

LINBIT, the DRBD project's sponsor company, provides DRBD binary packages to its commercial support customers. These packages are considered the "official" DRBD builds.

These builds are available for the following distributions:

  • Red Hat Enterprise Linux (RHEL), versions 4 and 5

  • SUSE Linux Enterprise Server (SLES), versions 9, 10, and 11

  • Debian GNU/Linux, versions 4.0 (etch) and 5.0 (lenny)

  • Ubuntu Server Edition LTS, versions 6.06 (Dapper Drake) and 8.04 (Hardy Heron).

LINBIT releases binary builds in parallel with any new DRBD source release.

Package installation on RPM-based systems (SLES, RHEL) is done by simply invoking rpm -i (for new installations) or rpm -U (for upgrades), along with the corresponding drbd and drbd-km package names.

On Debian-based systems (Debian GNU/Linux, Ubuntu), the drbd8-utils and drbd8-module packages are installed with dpkg -i, or with gdebi if available.
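As a sketch of the RPM case, the following chooses between rpm -i and rpm -U depending on whether a drbd package is already installed. The package file name is purely hypothetical; substitute the packages you actually obtained:

```shell
# Hedged sketch: select the rpm operation for a DRBD package.
# The package file name below is hypothetical; substitute your download.
pkg="drbd-8.3.4-1.x86_64.rpm"
if command -v rpm >/dev/null 2>&1 && rpm -q drbd >/dev/null 2>&1; then
  op="-U"   # a drbd package is already installed: upgrade
else
  op="-i"   # fresh installation
fi
echo "would run: rpm $op $pkg"
```

The sketch only prints the command it would run, so it is safe to try on any host.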

Packages supplied by distribution vendors

A number of distributions ship DRBD, including pre-built binary packages. Support for these builds, if any, is provided by the respective distribution vendor, and their release cycle may lag behind DRBD source releases.

  • SUSE Linux Enterprise Server (SLES), includes DRBD 0.7 in versions 9 and 10. DRBD 8.2 is included in SLES 11 High Availability Extension (HAE).

    On SLES, DRBD is normally installed via the software installation component of YaST2. It comes bundled with the High Availability package selection.

    Users who prefer a command line install may simply issue:

    yast -i drbd

    or, alternatively:

    rug install drbd
  • Debian GNU/Linux includes DRBD 8 from the 5.0 release (lenny) onwards, and has included DRBD 0.7 since Debian 3.1 (sarge).

    On lenny (which now includes pre-compiled DRBD kernel modules and no longer requires the use of module-assistant), you install DRBD by issuing:

    apt-get install drbd8-utils drbd8-module

    On Debian 3.1 and 4.0, you must issue the following commands:

    apt-get install drbd0.7-utils drbd0.7-module-source \
      build-essential module-assistant
    module-assistant auto-install drbd0.7

    See the section called “Building a DRBD Debian package” for details on the installation process involving module-assistant.

  • CentOS has had DRBD 8 since release 5; DRBD 0.7 was included in release 4.

    DRBD can be installed using yum (note that you will need the extras repository enabled for this to work):

    yum install drbd kmod-drbd
  • Ubuntu has included DRBD 8 since release 7.10 (Gutsy Gibbon), and DRBD 0.7 since release 6.06 (Dapper Drake). To get DRBD, you need to enable the universe component for your preferred Ubuntu mirror in /etc/apt/sources.list, and then issue these commands:

    apt-get update
    apt-get install drbd8-utils drbd8-module-source \
      build-essential module-assistant
    module-assistant auto-install drbd8

    Ubuntu 6.10 (Edgy Eft) and 7.04 (Feisty Fawn) both contained pre-release versions of DRBD 8 that were never intended to be used on a production system. The DRBD 0.7 version also included in these Ubuntu releases, however, is fit for production use (albeit now outdated).

Chapter 4. Building and installing DRBD from source

Downloading the DRBD sources

The source tarballs for both current and historic DRBD releases are available for download from LINBIT's download site. Source tarballs, by convention, are named drbd-x.y.z.tar.gz, where x, y and z refer to the major, minor and bugfix release numbers.

DRBD's compressed source archive is less than half a megabyte in size. After downloading it, uncompress it into your current working directory by issuing:

tar -xzf drbd-8.3.4.tar.gz

You may use wget or any other downloader you prefer to fetch the tarball.

It is recommended to uncompress DRBD into a directory normally used for keeping source code, such as /usr/src or /usr/local/src. The examples in this guide assume /usr/src.
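The unpack step can be sketched end-to-end as follows; a scratch directory stands in for /usr/src, and a locally generated dummy tarball stands in for the real download:

```shell
# Sketch of the unpack step, run against a scratch directory.
set -e
workdir=$(mktemp -d)
cd "$workdir"
# Stand-in for the downloaded tarball:
mkdir drbd-8.3.4
echo "placeholder" > drbd-8.3.4/README
tar -czf drbd-8.3.4.tar.gz drbd-8.3.4
rm -r drbd-8.3.4
# The actual unpack step, exactly as in the text:
tar -xzf drbd-8.3.4.tar.gz
ls drbd-8.3.4
```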

Checking out sources from the public DRBD source repository

DRBD's source code is kept in a public Git repository, which may also be browsed on-line. To check out a specific DRBD release from the repository, you must first clone your preferred DRBD branch. In this example, you would clone from the DRBD 8.3 branch:

git clone git://

If your firewall does not permit TCP connections to port 9418, you may also check out via HTTP (please note that using Git via HTTP is much slower than its native protocol, so native Git is usually preferred whenever possible):

git clone

Either command will create a Git checkout subdirectory, named drbd-8.3. To now move to a source code state equivalent to a specific DRBD release, issue the following commands:

cd drbd-8.3
git checkout drbd-8.3.x

... where x refers to the DRBD point release you wish to build.
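The tag-checkout step can be exercised against a throwaway local repository standing in for the real clone; the tag name mirrors the convention above:

```shell
# Sketch: check out a release tag, demonstrated on a local scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "stand-in for DRBD history"
git tag drbd-8.3.4
# The step from the text: move the working tree to the release state.
git checkout -q drbd-8.3.4
git describe --tags
```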

The checkout directory will now contain the equivalent of an unpacked DRBD source tarball of that specific version, enabling you to build DRBD from source.


There are actually two minor differences between an unpacked source tarball and a Git checkout of the same release:

  • The Git checkout contains a debian/ subdirectory, while the source tarball does not. This is due to a request from Debian maintainers, who prefer to add their own Debian build configuration to a pristine upstream tarball.

  • The source tarball contains preprocessed man pages; the Git checkout does not. Thus, building DRBD from a Git checkout requires a complete DocBook toolchain for building the man pages, while this is not a requirement when building from a source tarball.

Building DRBD from source

Checking build prerequisites

Before being able to build DRBD from source, your build host must fulfill the following prerequisites:

  • make, gcc, the glibc development libraries, and the flex scanner generator must be installed.


    You should make sure that the gcc you use to compile the module is the same which was used to build the kernel you are running. If you have multiple gcc versions available on your system, DRBD's build system includes a facility to select a specific gcc version.

  • For building directly from a git checkout, GNU Autoconf is also required. This requirement does not apply when building from a tarball.

  • If you are running a stock kernel supplied by your distribution, you should install a matching precompiled kernel headers package. These are typically named kernel-dev, kernel-headers, linux-headers or similar. In this case, you can skip the section called “Preparing the kernel source tree” and continue with the section called “Preparing the DRBD build tree”.

  • If you are not running a distribution stock kernel (i.e. your system runs on a kernel built from source with a custom configuration), your kernel source files must be installed. Your distribution may provide for this via its package installation mechanism; distribution packages for kernel sources are typically named kernel-source or similar.


    On RPM-based systems, these packages will be named similar to kernel-source-version.rpm, which is easily confused with kernel-version.src.rpm. The former is the correct package to install for building DRBD.

    "Vanilla" kernel tarballs from the kernel.org archives are simply named linux-version.tar.bz2 and should be unpacked in /usr/src/linux-version, with the symlink /usr/src/linux pointing to that directory.

    When building DRBD against kernel sources (as opposed to headers), you must continue with the section called “Preparing the kernel source tree”.
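The /usr/src layout described above can be sketched as follows, with a scratch directory standing in for /usr/src and an empty directory standing in for the unpacked kernel tree:

```shell
# Sketch: point the /usr/src/linux symlink at an unpacked kernel tree.
set -e
src=$(mktemp -d)          # stands in for /usr/src
mkdir "$src/linux-2.6.31" # stands in for the unpacked linux-version tree
ln -sfn "$src/linux-2.6.31" "$src/linux"
readlink "$src/linux"
```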

Preparing the kernel source tree

To prepare your source tree for building DRBD, you must first enter the directory where your unpacked kernel sources are located. Typically this is /usr/src/linux-version, or simply a symbolic link named /usr/src/linux:

cd /usr/src/linux

The next step is recommended, though not strictly necessary. Be sure to copy your existing .config file to a safe location before performing it. This step essentially reverts your kernel source tree to its original state, removing any leftovers from an earlier build or configure run:

make mrproper

Now it is time to clone your currently running kernel configuration into the kernel source tree. There are a few possible options for doing this:

  • Many reasonably recent kernel builds export the currently-running configuration, in compressed form, via the /proc filesystem, enabling you to copy from there:

    zcat /proc/config.gz > .config
  • SUSE kernel Makefiles include a cloneconfig target, so on those systems, you can issue:

    make cloneconfig
  • Some installs put a copy of the kernel config into /boot, which allows you to do this:

    cp /boot/config-`uname -r` .config
  • Finally, you may simply use a backup copy of a .config file which you know to have been used for building the currently-running kernel.
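The options above can be combined into a small fallback chain. This is a sketch only, run here against a scratch directory rather than a real kernel source tree:

```shell
# Sketch: clone the running kernel's configuration, trying the common
# sources in order of preference. A scratch directory stands in for the
# kernel source tree.
cd "$(mktemp -d)"
if [ -r /proc/config.gz ]; then
  zcat /proc/config.gz > .config && src_used=/proc/config.gz
elif [ -r "/boot/config-$(uname -r)" ]; then
  cp "/boot/config-$(uname -r)" .config && src_used="/boot/config-$(uname -r)"
else
  src_used=none   # fall back to a manually restored backup .config
fi
echo "config source: $src_used"
```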

Preparing the DRBD build tree

Any DRBD compilation requires that you first configure your DRBD source tree with the included configure script.


The information in this section applies to DRBD 8.3.6 and above. Up until release 8.3.5, DRBD had no configure script.

When building from a git checkout, the configure script does not yet exist. You must generate it by invoking autoconf at the top level of the checkout.

Invoking the configure script with the --help option returns a full list of supported options. The table below summarizes the most important ones:

Table 4.1. Options supported by DRBD's configure script

  • --prefix: Installation directory prefix. Default: /usr/local. This is the default to maintain Filesystem Hierarchy Standard compatibility for locally installed, unpackaged software. In packaging, this is typically overridden with /usr.

  • --localstatedir: Local state directory. Default: /usr/local/var. Even with a default prefix, most users will want to override this with /var.

  • --sysconfdir: System configuration directory. Default: /usr/local/etc. Even with a default prefix, most users will want to override this with /etc.

  • --with-km: Build the DRBD kernel module. Default: no. Enable this option when you are building a DRBD kernel module.

  • --with-utils: Build the DRBD userland utilities. Default: yes. Disable this option when you are building a DRBD kernel module against a new kernel version, and not upgrading DRBD at the same time.

  • --with-heartbeat: Build DRBD Heartbeat integration. Default: yes. You may disable this option unless you are planning to use DRBD's Heartbeat v1 resource agent or dopd.

  • --with-pacemaker: Build DRBD Pacemaker integration. Default: yes. You may disable this option if you are not planning to use the Pacemaker cluster resource manager.

  • --with-rgmanager: Build DRBD Red Hat Cluster Suite integration. Default: no. You should enable this option if you are planning to use DRBD with rgmanager, the Red Hat Cluster Suite cluster resource manager.

  • --with-xen: Build DRBD Xen integration. Default: yes (on x86 architectures). You may disable this option if you are not planning to use the block-drbd helper script for Xen integration.

  • --with-bashcompletion: Build programmable bash completion for drbdadm. Default: yes. You may disable this option if you are using a shell other than bash, or if you do not want to utilize programmable completion for the drbdadm command.

  • --enable-spec: Create a distribution-specific RPM spec file. Default: no. For package builders only: you may use this option if you want to create an RPM spec file adapted to your distribution. See also the section called “Building a DRBD RPM package”.

The configure script will adapt your DRBD build to distribution specific needs. It does so by auto-detecting which distribution it is being invoked on, and setting defaults accordingly. When overriding defaults, do so with caution.
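As an illustration, a configure invocation for a distribution-style layout might combine the path overrides from the table. The option string is assembled in a variable purely so this sketch can print the command rather than run the (absent) script:

```shell
# Sketch: assemble a typical packaged-layout configure invocation.
opts="--prefix=/usr --localstatedir=/var --sysconfdir=/etc"
opts="$opts --with-km"   # also build the kernel module
echo "./configure $opts"
```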

The configure script creates a log file, config.log, in the directory where it was invoked. When reporting build issues on the mailing list, it is usually wise to either attach a copy of that file to your email, or point others to a location from where it may be viewed or downloaded.

Building DRBD userspace utilities


Building the userspace utilities requires that you have configured DRBD with the --with-utils option, which is enabled by default.

To build DRBD's userspace utilities, invoke the following commands from the top of your DRBD checkout or expanded tarball:

	$ make
	$ sudo make install

This will build the management utilities (drbdadm, drbdsetup, and drbdmeta), and install them in the appropriate locations. Based on the other --with options selected during the configure stage, it will also install scripts to integrate DRBD with other applications.

Compiling DRBD as a kernel module


Building the DRBD kernel module requires that you have configured DRBD with the --with-km option, which is disabled by default.

Building DRBD for the currently-running kernel

After changing into your unpacked DRBD sources directory, you should now change into the kernel module subdirectory, simply named drbd, and build the module there:

cd drbd
make clean all

This will build the DRBD kernel module to match your currently-running kernel, whose kernel source is expected to be accessible via the /lib/modules/`uname -r`/build symlink.

Building against precompiled kernel headers

If the /lib/modules/`uname -r`/build symlink does not exist, and you are building against a running stock kernel (one that was shipped pre-compiled with your distribution), you may also set the KDIR variable to point to the matching kernel headers (as opposed to kernel sources) directory. Note that besides the actual kernel headers — commonly found in /usr/src/linux-version/include — the DRBD build process also looks for the kernel Makefile and configuration file (.config), which pre-built kernel headers packages commonly include. To build against precompiled kernel headers, issue, for example:

$ cd drbd
$ make clean
$ make KDIR=/lib/modules/2.6.31/build

Building against a kernel source tree

If you are building DRBD against a kernel other than your currently running one, and you do not have precompiled kernel headers for your target kernel available, you need to build DRBD against a complete target kernel source tree. To do so, set the KDIR variable to point to the kernel sources directory:

$ cd drbd
$ make clean
$ make KDIR=/path/to/kernel/source

Using a non-default C compiler

You also have the option of setting the compiler explicitly via the CC variable. This is known to be necessary on some Fedora versions, for example:

cd drbd
make clean
make CC=gcc32

Checking for successful build completion

If the module build completes successfully, you should see a kernel module file named drbd.ko in the drbd directory. You may interrogate the newly-built module with /sbin/modinfo drbd.ko if you are so inclined.

Building a DRBD RPM package


The information in this section applies to DRBD 8.3.6 and above. Up until release 8.3.5, DRBD used a different RPM build approach.

The DRBD build system contains a facility to build RPM packages directly out of the DRBD source tree. For building RPMs, the section called “Checking build prerequisites” applies essentially in the same way as for building and installing with make, except that you also need the RPM build tools, of course.

Also, see the section called “Preparing the kernel source tree” if you are not building against a running kernel with precompiled headers available.

The build system offers two approaches for building RPMs. The simpler approach is to invoke the rpm and km-rpm targets in the top-level Makefile:

$ ./configure
$ make rpm
$ make km-rpm

This approach will auto-generate spec files from pre-defined templates, and then use those spec files to build binary RPM packages.

The make rpm approach generates a number of RPM packages:

Table 4.2. DRBD userland RPM packages

  • drbd: DRBD meta-package. Depends on all other drbd-* packages. Top-level virtual package; when installed, it pulls in all other userland packages as dependencies.

  • drbd-utils: Binary administration utilities. Required for any DRBD-enabled host.

  • drbd-udev: udev integration facility. Depends on drbd-utils and udev. Enables udev to manage user-friendly symlinks to DRBD devices.

  • drbd-xen: Xen DRBD helper scripts. Depends on drbd-utils and xen. Enables xend to auto-manage DRBD resources.

  • drbd-heartbeat: DRBD Heartbeat integration scripts. Depends on drbd-utils and heartbeat. Enables DRBD management by legacy v1-style Heartbeat clusters.

  • drbd-pacemaker: DRBD Pacemaker integration scripts. Depends on drbd-utils and pacemaker. Enables DRBD management by Pacemaker clusters.

  • drbd-rgmanager: DRBD Red Hat Cluster Suite integration scripts. Depends on drbd-utils and rgmanager. Enables DRBD management by rgmanager, the Red Hat Cluster Suite resource manager.

  • drbd-bashcompletion: Programmable bash completion. Depends on drbd-utils and bash-completion. Enables programmable bash completion for the drbdadm utility.

The other, more flexible approach is to have configure generate the spec file, make any changes you deem necessary, and then use the rpmbuild command:

$ ./configure --enable-spec
$ make tgz
$ cp drbd*.tar.gz `rpm -E %_sourcedir`
$ rpmbuild -bb drbd.spec

If you are about to build RPMs for both the DRBD userspace utilities and the kernel module, use:

$ ./configure --enable-spec --with-km
$ make tgz
$ cp drbd*.tar.gz `rpm -E %_sourcedir`
$ rpmbuild -bb drbd.spec
$ rpmbuild -bb drbd-km.spec

The RPMs will be created wherever your system RPM configuration (or your personal ~/.rpmmacros configuration) dictates.

After you have created these packages, you can install, upgrade, and uninstall them as you would any other RPM package in your system.

Note that any kernel upgrade will require you to generate a new drbd-km package to match the new kernel.

The DRBD userland packages, in contrast, need only be recreated when upgrading to a new DRBD version. If at any time you upgrade to a new kernel and new DRBD version, you will need to upgrade both packages.

Building a DRBD Debian package

The DRBD build system contains a facility to build Debian packages directly out of the DRBD source tree. For building Debian packages, the section called “Checking build prerequisites” applies essentially in the same way as for building and installing with make, except that you of course also need the dpkg-dev package containing the Debian packaging tools, and fakeroot if you want to build DRBD as a non-root user (highly recommended).

Also, see the section called “Preparing the kernel source tree” if you are not building against a running kernel with precompiled headers available.

The DRBD source tree includes a debian subdirectory containing the required files for Debian packaging. That subdirectory, however, is not included in the DRBD source tarballs — instead, you will need to create a Git checkout of a tag associated with a specific DRBD release.

Once you have created your checkout in this fashion, you can issue the following commands to build DRBD Debian packages:

dpkg-buildpackage -rfakeroot -b -uc

This example dpkg-buildpackage invocation enables a binary-only build (-b) by a non-root user (-rfakeroot), and disables cryptographic signing of the changes file (-uc). Of course, you may prefer other build options; see the dpkg-buildpackage man page for details.

This build process will create two Debian packages:

  1. A package containing the DRBD userspace tools, named drbd8-utils_x.y.z-BUILD_ARCH.deb;

  2. A module source package suitable for module-assistant named drbd8-module-source_x.y.z-BUILD_all.deb.

After you have created these packages, you can install, upgrade, and uninstall them as you would any other Debian package in your system.

Building and installing the actual kernel module from the installed module source package is easily accomplished via Debian's module-assistant facility:

module-assistant auto-install drbd8

You may also use the shorthand form of the above command:

m-a a-i drbd8

Note that any kernel upgrade will require you to rebuild the kernel module (with module-assistant, as just described) to match the new kernel. The drbd8-utils and drbd8-module-source packages, in contrast, only need to be recreated when upgrading to a new DRBD version. If at any time you upgrade to a new kernel and new DRBD version, you will need to upgrade both packages.

Chapter 5. Configuring DRBD

Preparing your lower-level storage

After you have installed DRBD, you must set aside a roughly identically sized storage area on both cluster nodes. This will become the lower-level device for your DRBD resource. You may use any type of block device found on your system for this purpose. Typical examples include:

  • A hard drive partition (or a full physical hard drive),

  • a software RAID device,

  • an LVM Logical Volume or any other block device configured by the Linux device-mapper infrastructure,

  • an EVMS volume,

  • any other block device type found on your system. In DRBD version 8.3 and above, you may also use resource stacking, meaning you can use one DRBD device as a lower-level device for another. Some specific considerations apply to stacked resources; their configuration is covered in detail in the section called “Creating a three-node setup”.


While it is possible to use loop devices as lower-level devices for DRBD, doing so is not recommended due to deadlock issues.

It is not necessary for this storage area to be empty before you create a DRBD resource from it. In fact it is a common use case to create a two-node cluster from a previously non-redundant single-server system using DRBD (some caveats apply – please refer to the section called “DRBD meta data” if you are planning to do this).

For the purposes of this guide, we assume a very simple setup:

  • Both hosts have a free (currently unused) partition named /dev/sda7.

  • We are using internal meta data.

Preparing your network configuration

It is recommended, though not strictly required, that you run your DRBD replication over a dedicated connection. At the time of this writing, the most reasonable choice for this is a direct, back-to-back, Gigabit Ethernet connection. If and when you run DRBD over switches, use of redundant components and the Linux bonding driver (in active-backup mode) is recommended.

It is generally not recommended to run DRBD replication via routers, for reasons of fairly obvious performance drawbacks (adversely affecting both throughput and latency).

In terms of local firewall considerations, it is important to understand that DRBD (by convention) uses TCP ports from 7788 upwards, with every resource listening on a separate, configurable, but unchanging TCP port. DRBD uses two separate TCP connections (one in either direction) for every resource configured. For proper DRBD functionality, it is required that these connections are allowed by your firewall configuration.

Security considerations other than firewalling may also apply if a Mandatory Access Control (MAC) scheme such as SELinux or AppArmor is enabled. You may have to adjust your local security policy so it does not keep DRBD from functioning properly.

You must, of course, also ensure that the TCP ports you will be using for DRBD are not already being used by another application.


It is not possible to configure a DRBD resource to support more than one TCP connection. If you want to provide for DRBD connection load-balancing or redundancy, you can easily do so at the Ethernet level (again, using the bonding driver).

For the purposes of this guide, we assume a very simple setup:

  • Our two DRBD hosts each have a currently unused network interface, eth1, with a dedicated IP address assigned to it.

  • No other services are using TCP ports 7788 through 7799 on either host.

  • The local firewall configuration allows both inbound and outbound TCP connections between the hosts over these ports.
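The port assumption can be verified with a quick local check. This sketch uses ss (assumed to be available on the host) and simply reports any listener in the conventional DRBD port range:

```shell
# Sketch: report local TCP listeners in the conventional DRBD port range.
for port in $(seq 7788 7799); do
  if ss -lnt 2>/dev/null | grep -q ":$port "; then
    echo "port $port is already in use"
  fi
done
status="port check complete"
echo "$status"
```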

Configuring your resource

All aspects of DRBD are controlled in its configuration file, /etc/drbd.conf. Normally, this configuration file is just a skeleton with the following contents:

include "/etc/drbd.d/global_common.conf";
include "/etc/drbd.d/*.res";

By convention, /etc/drbd.d/global_common.conf contains the global and common sections of the DRBD configuration, whereas the .res files contain one resource section each.

It is also possible to use drbd.conf as a flat configuration file without any include statements at all. Such a configuration, however, quickly becomes cluttered and hard to manage, which is why the multiple-file approach is the preferred one.

Regardless of which approach you employ, you should always make sure that drbd.conf, and any other files it includes, are exactly identical on all participating cluster nodes.
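One way to verify that the configurations match is a byte-wise comparison. In practice the second copy would come from the peer (for example via ssh bob cat /etc/drbd.conf); this sketch compares two local stand-in files instead:

```shell
# Sketch: verify that two copies of drbd.conf are byte-for-byte identical.
set -e
a=$(mktemp); b=$(mktemp)
printf 'include "/etc/drbd.d/global_common.conf";\ninclude "/etc/drbd.d/*.res";\n' > "$a"
cp "$a" "$b"   # in practice: fetched from the peer node
if cmp -s "$a" "$b"; then
  result="configs identical"
else
  result="configs differ"
fi
echo "$result"
```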

The DRBD source tarball contains an example configuration file in the scripts subdirectory. Binary installation packages will either install this example configuration directly in /etc, or in a package-specific documentation directory such as /usr/share/doc/packages/drbd.


This section describes only those few aspects of the configuration file which are absolutely necessary to understand in order to get DRBD up and running. The configuration file's syntax and contents are documented in great detail in drbd.conf(5).

Example configuration

For the purposes of this guide, we assume a minimal setup in line with the examples given in the previous sections:

global {
  usage-count yes;
}
common {
  protocol C;
}
resource r0 {
  on alice {
    device    /dev/drbd1;
    disk      /dev/sda7;
    meta-disk internal;
  }
  on bob {
    device    /dev/drbd1;
    disk      /dev/sda7;
    meta-disk internal;
  }
}

This example configures DRBD in the following fashion:

  • You "opt in" to be included in DRBD's usage statistics (see below).

  • Resources are configured to use fully synchronous replication (Protocol C) unless explicitly specified otherwise.

  • Our cluster consists of two nodes, alice and bob.

  • We have a resource arbitrarily named r0 which uses /dev/sda7 as the lower-level device, and is configured with internal meta data.

  • The resource uses TCP port 7789 for its network connections, binding to each host's dedicated IP address.

The global section

This section is allowed only once in the configuration. It is normally in the /etc/drbd.d/global_common.conf file. In a single-file configuration, it should go to the very top of the configuration file. Of the few options available in this section, only one is of relevance to most users:

usage-count: The DRBD project keeps statistics about the usage of various DRBD versions. This is done by contacting an HTTP server every time a new DRBD version is installed on a system. This can be disabled by setting usage-count no;. The default is usage-count ask;, which will prompt you every time you upgrade DRBD.
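For example, to opt out of the usage statistics permanently, the global section would read:

```
global {
  usage-count no;
}
```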


DRBD's usage statistics are, of course, publicly available for inspection.

The common section

This section provides a shorthand method to define configuration settings inherited by every resource. It is normally found in /etc/drbd.d/global_common.conf. You may define any option you can also define on a per-resource basis.

Including a common section is not strictly required, but strongly recommended if you are using more than one resource. Otherwise, the configuration quickly becomes convoluted by repeatedly-used options.

In the example above, we included protocol C; in the common section, so every resource configured (including r0) inherits this option unless it has another protocol option configured explicitly. For other synchronization protocols available, see the section called “Replication modes”.
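To illustrate inheritance, a hypothetical second resource could override the common protocol while all other resources keep the inherited setting (resource body abbreviated):

```
common {
  protocol C;
}
resource r1 {
  protocol A;   # this resource alone replicates asynchronously
  # ... on sections as usual ...
}
```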

The resource sections

A per-resource configuration file is usually named /etc/drbd.d/resource.res. Any DRBD resource you define must be named by specifying resource name in the configuration. You may use any arbitrary identifier, however the name must not contain characters other than those found in the US-ASCII character set, and must also not include whitespace.

Every resource configuration must also have two on host sub-sections (one for every cluster node).

All other configuration settings are either inherited from the common section (if it exists), or derived from DRBD's default settings.

In fact, you can use a shorthand notation for the on host sub-sections, too: every option whose values are equal on both hosts may be specified directly in the resource section. Thus, we can further condense this section, in comparison with the example cited above:

resource r0 {
  device    /dev/drbd1;
  disk      /dev/sda7;
  meta-disk internal;
  on alice {
  }
  on bob {
  }
}

This notation is available in DRBD versions 8.2.1 and above.

Enabling your resource for the first time

After you have completed initial resource configuration as outlined in the previous sections, you can bring up your resource.


Each of the following steps must be completed on both nodes.

  1. Create device metadata. This step must be completed only on initial device creation. It initializes DRBD's metadata:

    drbdadm create-md resource
    v08 Magic number not found
    Writing meta data...
    initialising activity log
    NOT initialized bitmap
    New drbd meta data block successfully created.

  2. Attach to backing device. This step associates the DRBD resource with its backing device:

    drbdadm attach resource

  3. Set synchronization parameters. This step sets synchronization parameters for the DRBD resource:

    drbdadm syncer resource

  4. Connect to peer. This step connects the DRBD resource with its counterpart on the peer node:

    drbdadm connect resource


    You may collapse the steps drbdadm attach, drbdadm syncer, and drbdadm connect into one, by using the shorthand command drbdadm up.

  5. Observe /proc/drbd. DRBD's virtual status file in the /proc filesystem, /proc/drbd, should now contain information similar to the following:

    cat /proc/drbd
    version: 8.3.0 (api:88/proto:86-89)
    GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by buildsystem@linbit, 2008-12-18 16:02:26
     1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r---
        ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:200768

    The Inconsistent/Inconsistent disk state is expected at this point.

By now, DRBD has successfully allocated both disk and network resources and is ready for operation. What it does not know yet is which of your nodes should be used as the source of the initial device synchronization.

The initial device synchronization

There are two more steps required for DRBD to become fully operational:

  1. Select an initial sync source. If you are dealing with a newly-initialized, empty disk, this choice is entirely arbitrary. If one of your nodes already has valuable data that you need to preserve, however, it is of crucial importance that you select that node as your synchronization source. If you do initial device synchronization in the wrong direction, you will lose that data. Exercise caution.

  2. Start the initial full synchronization. This step must be performed on only one node, only on initial resource configuration, and only on the node you selected as the synchronization source. To perform this step, issue this command:

    drbdadm -- --overwrite-data-of-peer primary resource

    After issuing this command, the initial full synchronization will commence. You will be able to monitor its progress via /proc/drbd. It may take some time depending on the size of the device.

By now, your DRBD device is fully operational, even before the initial synchronization has completed (albeit with slightly reduced performance). You may now create a filesystem on the device, use it as a raw block device, mount it, and perform any other operation you would with an accessible block device.

You will now probably want to continue with Chapter 6, Common administrative tasks, which describes common administrative tasks to perform on your resource.

Using truck based replication

In order to preseed a remote node with data which is then to be kept synchronized, and to skip the initial device synchronization, follow these steps.


This assumes that your local node has a configured, but disconnected DRBD resource in the Primary role.

That is to say, device configuration is completed, identical drbd.conf copies exist on both nodes, and you have issued the commands for initial resource promotion on your local node — but the remote node is not connected yet.

  1. On the local node, issue the following command:

    drbdadm -- --clear-bitmap new-current-uuid resource
  2. Create a consistent, verbatim copy of the resource's data and its metadata. You may do so, for example, by removing a hot-swappable drive from a RAID-1 mirror. You would, of course, replace it with a fresh drive, and rebuild the RAID set, to ensure continued redundancy. But the removed drive is a verbatim copy that can now be shipped off site.

    If your local block device supports snapshot copies (such as when using DRBD on top of LVM), you may also create a bitwise copy of that snapshot using dd.

  3. On the local node, issue:

    drbdadm new-current-uuid resource

    Note the absence of the --clear-bitmap option in this second invocation.

  4. Physically transport the copies to the remote peer location.

  5. Add the copies to the remote node. This may again be a matter of plugging a physical disk, or grafting a bitwise copy of your shipped data onto existing storage on the remote node.

    Be sure to restore or copy not only your replicated data, but also the associated DRBD metadata. If you fail to do so, the disk shipping process is moot.

  6. Bring up the resource on the remote node:

    drbdadm up resource

After the two peers connect, they will not initiate a full device synchronization. Instead, the automatic synchronization that now commences only covers those blocks that changed since the invocation of drbdadm -- --clear-bitmap new-current-uuid.

Even if there were no changes whatsoever since then, there may still be a brief synchronization period due to areas covered by the Activity Log being rolled back on the new Secondary. This may be mitigated by the use of checksum-based synchronization.


You may use this same procedure regardless of whether the resource is a regular DRBD resource, or a stacked resource. For stacked resources, simply add the -S or --stacked option to drbdadm.