
#DRBD Growing a replicated resource online in a complex enterprise environment


At some point an increase in the size of the replicated resource is needed. This can be caused by an increase in data retention times or an increase in the number of transactions affecting that resource.
If the backing block devices can be grown while in operation (online), it is also possible to increase the size of a DRBD device based on these devices during operation. This is stated in the DRBD online documentation here.

This is not a trivial task when the resource we need to grow is a DRBD replicated resource in an enterprise environment, due to the multiple layers of our storage stack. All the layers must be prepared and configured to make the additional storage space available.

The current setup on which the change is going to be done has the following storage stack (listed from top to bottom).

ext4 filesystem over /dev/drbd0, mounted under the /data directory
Replicated DRBD resource over lv_data (/dev/drbd0)
LVM layer: Logical volume over the vg_data volume group (lv_data)
LVM layer: Volume group over multipath device primary partitions (vg_data)
Multipath devices mapping iSCSI targets (/dev/mpath/...)
Exported iSCSI targets
Enterprise storage device

Note that this stack exists on both sites, the Primary site (PR) and the Secondary site (DR), in our setup. Note that on both sites the Primary and Secondary DRBD nodes are hosted on a failover Linux cluster, so some operations must be done on both nodes of the cluster. See the following post for a detailed description: #DRBD based disk replication of a production cluster to a remote site cluster on RHEL 6
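
For reference, a minimal sketch of what the repdata resource definition used later in this post (/etc/drbd.d/repdata.res) could look like for this stack is shown below. The hostnames, IP addresses and port are placeholders, not the actual values of our setup:

resource repdata {
  device    /dev/drbd0;
  disk      /dev/vg_data/lv_data;
  meta-disk internal;
  on node-pr {
    address 192.168.1.10:7788;
  }
  on node-dr {
    address 192.168.1.20:7788;
  }
}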

Luckily for us, the size increase can be done with all the systems online.

The following steps must be performed to add additional replicated storage.

STEP 1: Allocate additional storage on the enterprise storage device on the PR site and export it as an iSCSI resource.

STEP 2: Allocate the same additional storage on the enterprise storage device on the DR site and export it as an iSCSI resource.

STEP 3: Discover the new iSCSI resources on PR

The multipath service must be installed and configured so that the new devices are visible to both cluster nodes.
Execute the following command on both nodes of the cluster to see all the configured multipath devices:

# multipath -l
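
If the new targets do not show up automatically, they can usually be discovered and logged in with the open-iscsi initiator and the multipath maps reloaded. A minimal sketch, assuming the storage device exposes its iSCSI portal at the placeholder address 192.168.10.100:

# iscsiadm -m discovery -t sendtargets -p 192.168.10.100
# iscsiadm -m node --login
# multipath -r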

STEP 4: Discover the new iSCSI resources on DR

The multipath service must be installed and configured so that the new devices are visible to both cluster nodes.
Execute the following command on both nodes of the cluster to see all the configured multipath devices:

# multipath -l

STEP 5: Partition the new multipath device on PR

At this point a new multipath device is available on the PR site:

/dev/mapper/mpathf

On the new multipath device create a primary partition of Linux type. This operation must be done only on one of the cluster nodes.
In a command line console on the server, execute the following commands as root:

# fdisk /dev/mapper/mpathf
At the fdisk prompt press 'n' to create a new partition
• select primary as the partition type
• select 1 for the partition number
• accept the defaults so that the partition spans the entire device
At the fdisk prompt press 't' to set the partition type
• set the newly created partition to type 83 (Linux)
At the fdisk prompt press 'w' to write the changes

As a result, the following device corresponding to the newly created partition will be available:
/dev/mapper/mpathfp1
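
Since only one cluster node ran fdisk, the partition mapping may not show up automatically on the other node. A quick way to refresh and verify it, assuming the kpartx tool shipped with device-mapper-multipath is available:

# partprobe /dev/mapper/mpathf
# kpartx -a /dev/mapper/mpathf
# ls -l /dev/mapper/mpathfp1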

STEP 6: Partition the new multipath device on DR

At this point a new multipath device is available on the DR site:

/dev/mapper/mpathf

On the new multipath device create a primary partition of Linux type. This operation must be done only on one of the cluster nodes.
In a command line console on the server, execute the following commands as root:

# fdisk /dev/mapper/mpathf
At the fdisk prompt press 'n' to create a new partition
• select primary as the partition type
• select 1 for the partition number
• accept the defaults so that the partition spans the entire device
At the fdisk prompt press 't' to set the partition type
• set the newly created partition to type 83 (Linux)
At the fdisk prompt press 'w' to write the changes

As a result, the following device corresponding to the newly created partition will be available:
/dev/mapper/mpathfp1

STEP 7: Add the new partition to the existing volume vg_data on PR

Initialize the new partition for use by LVM:

# pvcreate /dev/mapper/mpathfp1

Extend the existing vg_data volume to use the new partition:

# vgextend vg_data /dev/mapper/mpathfp1

Check that the volume was extended:

# vgdisplay

This operation must be done only on one of the cluster nodes. Make sure that the other node sees the new partition using partprobe. If partprobe fails, we may need to reboot the node. See the post #DRBD investigate and solve a sudden Diskless issue, where I described a situation caused by exactly this issue of a node not being able to see a cluster shared partition created by the other cluster node.
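
A quick sanity check on both cluster nodes, for example with the standard LVM reporting commands, should now show the new physical volume inside vg_data and the increased free space:

# pvs | grep mpathfp1
# vgs vg_data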

STEP 8: Add the new partition to the existing volume vg_data on DR

Initialize the new partition for use by LVM:

# pvcreate /dev/mapper/mpathfp1

Extend the existing vg_data volume to use the new partition:

# vgextend vg_data /dev/mapper/mpathfp1

Check that the volume was extended:

# vgdisplay

This operation must be done only on one of the cluster nodes. Make sure that the other node sees the new partition using partprobe. If partprobe fails, we may need to reboot the node.

STEP 9: Resize the existing lv_data logical volume on PR to allocate an additional 66% of the free space in the volume group (the newly added space).

Resize the logical volume:

# lvresize -l +66%FREE /dev/vg_data/lv_data

Check that the logical volume was resized:

# lvdisplay

This operation must be done only on one of the cluster nodes.
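
Note that +66%FREE is interpreted against the free extents currently available in the volume group, not against the current size of the logical volume. To see how much free space remains in the volume group and the new size of the logical volume, for example:

# vgs -o vg_name,vg_size,vg_free vg_data
# lvs -o lv_name,lv_size vg_data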

STEP 10: Resize the existing lv_data logical volume on DR to allocate an additional 66% of the free space in the volume group (the newly added space).

Resize the logical volume:

# lvresize -l +66%FREE /dev/vg_data/lv_data

Check that the logical volume was resized:

# lvdisplay

This operation must be done only on one of the cluster nodes.

STEP 11: Resize the DRBD replicated resource.

The DRBD resource resize is done only on the PR site, only if the PR site is the primary DRBD node and the sites are in sync.
To resize the DRBD resource, execute the following on the active node on the PR site:

# drbdadm resize repdata

Where repdata is the resource name as declared in /etc/drbd.d/repdata.res.

This triggers a synchronization of the new section. The synchronization is done from the primary node to the secondary node. Note that the whole newly added space will be replicated.
If the space you are adding is clean, you can skip syncing the additional space by using the --assume-clean option:

# drbdadm -- --assume-clean resize repdata

This may take a while depending on how much space you added and which of the above commands you executed. Wait until changes are replicated from Primary to Secondary and the status of the drbd nodes is UpToDate/UpToDate.
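
The synchronization progress can be followed from either node. On a DRBD 8.x setup such as the RHEL 6 one described here, a simple way is to watch /proc/drbd until both disk states report UpToDate/UpToDate, for example:

# watch -n 10 cat /proc/drbd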

STEP 12: Resize the filesystem that resides on the DRBD replicated resource

After the resize operation is done, check that the sites are in sync and then resize the filesystem. This operation is done only on the PR site, only if the PR site is the primary DRBD node and the sites are in sync.

# resize2fs /dev/drbd0
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/drbd0 is mounted on /data; on-line resizing required
old desc_blocks = 7, new_desc_blocks = 28
Performing an on-line resize of /dev/drbd0 to 114457169 (4k) blocks.
The filesystem on /dev/drbd0 is now 114457169 blocks long.

The above command will resize the ext4 filesystem on the replicated device to occupy all the available space. Note that we run this command only on the Primary site.

Now check the new size of the available space:

# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/vg_prodb-lv_root     50G   39G  8.1G  83% /
tmpfs                            18G  9.2G  8.5G  53% /dev/shm
/dev/sda1                       485M  120M  341M  26% /boot
/dev/mapper/vg_prodb-lv_home    208G   79G  120G  40% /home
/dev/mapper/vg_log-lv_log        59G   39G   17G  70% /log
/dev/drbd0                      430G   79G  330G  20% /data

Note that all the 12 steps are done with all the systems online and clusters started on both sites.
