On-Demand RAID for Laptop with SSD and USB Disk

Summary: DRBD provides a nifty way of allowing a “RAID when available” setup, which offers more flexibility than MDADM. When in “RAID” mode, DRBD performs about 10% slower than MDADM, but provides near-SSD performance when DRBD is not mirroring. Importantly, you don’t need to have the USB drive connected all the time, so it’s great if you want to grab your laptop and have it mirror to your USB disk when you get back to base.

The Project

I have an HP Folio laptop with a 128 GB Samsung MZPA128 SSD built in, and a 2.5″ spinning disk connected to the laptop over USB 3 when I am at home – running Ubuntu. My research on SSD drives leads me to believe that they are about as unreliable as hard drives, but with the added disadvantage that when they fail, they normally fail catastrophically, unlike hard drives where you can often recover most data. While RAID is not a substitute for backups, I want the “best of both worlds” in terms of performance and resiliency.

Tests and Benchmarks

The baseline tests for the SSD and HDD were performed on raw LVM logical volumes formatted with ext4 (default options).
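For reference, each baseline run looked roughly like the following sketch (the LV name and size here are illustrative, not the ones I actually used):

# Create a scratch LV on the disk under test, format it with ext4
# defaults, mount it, and run bonnie++ as an unprivileged user.
lvcreate -L 20G -n benchtest intssd
mkfs.ext4 /dev/intssd/benchtest
mkdir -p /mnt/bench
mount /dev/intssd/benchtest /mnt/bench
chown 1000:1000 /mnt/bench
cd /mnt/bench && bonnie++ -s 8000 -u 1000:1000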

Performance results for the SSD: bonnie++ -s 8000 -u 1000:1000 (in these listings the columns are machine and file size, then K/sec and %CPU pairs for sequential output per-character, block and rewrite, sequential input per-character and block, and finally random seeks per second).

folio         8000M   798  99 190836  20 95514  10  4230  99 256948  14  6882 170
folio         8000M   812  99 189668  18 95897  10  4213  99 257911  13 13106 262
folio         8000M   789  99 177182  18 95188  10  4149  99 253408  15  6537 163

Performance results for HDD: bonnie++ -s 8000 -u 1000:1000

folio         8000M   751  97 65353   9 25092   5  3675  97 78205   7 148.6   5
folio         8000M   771  96 66600   9 25452   5  4131  96 75136   7 149.3   5

MDADM

One configuration stuff-up and complete reinstall later, I have a RAID array for testing, built with the following command: mdadm --create /dev/md0 --level=mirror --write-behind=1024 --raid-devices=2 /dev/sda6 -W /dev/sdc1 -b /ssdgen/mdadm.bitmap (-W marks the USB disk as write-mostly, and -b puts a write-intent bitmap in an external file).

folio         8000M   805  98 33984   5 33185   5  3967  97 220468  12  2724  60
folio         8000M   783  99 34145   4 33485   5  4009  97 235770  12  2539  60
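As a sanity check (my addition, not part of the benchmark run), the array state and the write-mostly flag on the USB member can be confirmed with:

# /proc/mdstat shows the mirror and the bitmap; --detail lists each
# member, with the USB disk flagged as write-mostly.
cat /proc/mdstat
mdadm --detail /dev/md0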

DRBD Protocol A

folio         8000M   761  98 31097   5 30644   6  3799  97 190543  14  2118  75
folio         8000M   798  99 31910   5 31631   6  4078  99 238925  12  2037  73

DRBD Protocol A, Unplugged

folio         8000M   811  99 110606  14 81152   8  4206 100 258376  13  2791 109

Setup to Support the DRBD Configuration

I configured DRBD on top of LVM to allow the greatest flexibility with resizing file systems. The configuration details I used were as follows (a rough sketch of the commands that produce this layout appears after the listing):

root@folio:/home/davidgo# pvs
  PV         VG     Fmt  Attr PSize   PFree  
  /dev/sda5  intssd lvm2 a-   119.00g      0 
  /dev/sdc1  usbhdd lvm2 a-   298.09g 193.98g
root@folio:/home/davidgo# vgs
  VG     #PV #LV #SN Attr   VSize   VFree  
  intssd   1   3   0 wz--n- 119.00g      0 
  usbhdd   1   1   0 wz--n- 298.09g 193.98g
root@folio:/home/davidgo# lvs
  LV       VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  root     intssd -wi-ao  13.04g                                      
  ssd_drbd intssd -wi-ao 104.11g                                      
  swap     intssd -wi-ao   1.86g                                      
  hdd_drbd usbhdd -wi-ao 104.11g
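
Roughly, the commands that produce the two DRBD backing volumes look like this (root and swap on the SSD were created by the installer; take the exact sizes from the lvs output above):

# Put the USB partition under LVM control and carve out equal-sized
# backing volumes for DRBD on both disks.
pvcreate /dev/sdc1
vgcreate usbhdd /dev/sdc1
lvcreate -L 104.11g -n hdd_drbd usbhdd
lvcreate -L 104.11g -n ssd_drbd intssd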

The DRBD configuration is:

global
{
        usage-count no;
}

common 
{
        syncer
        {
                # Resync rate, megabytes per second
                rate 15M;
        }

        net
        {
                max-buffers 250;
        }
}

resource drbd0
{
        # Protocol A = asynchronous: a write is complete once it is on the
        # local disk and in the TCP send buffer
        protocol A;
        startup
        {
            become-primary-on folio;    # promote this node automatically at startup
        }

        on  folio
        {
            device /dev/drbd0;
            disk /dev/mapper/intssd-ssd_drbd;
            address 127.0.0.1:7790;
            meta-disk "internal";
        }

        on drbd2
        {
            device /dev/drbd1;
            disk /dev/mapper/usbhdd-hdd_drbd;
            address 127.0.0.1:7791;
            meta-disk "internal";
        }
}

resource drbd1 
{
        protocol A;    # asynchronous, as above

        on  folio
        {
                device /dev/drbd1;
                disk /dev/mapper/usbhdd-hdd_drbd;
                address 127.0.0.1:7791;
                meta-disk "internal";
        }

        on drbd2 
        {
                device /dev/drbd0;
                disk /dev/mapper/intssd-ssd_drbd;
                address 127.0.0.1:7790;
                meta-disk "internal";
        }
}
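
The initial bring-up isn’t shown above; with DRBD 8.3-era tools it goes roughly like this sketch (treat it as a guide rather than a transcript):

# Write DRBD metadata to both backing LVs, start the resources, force
# the SSD side (drbd0) to become the initial UpToDate primary, and put
# a filesystem on it.
drbdadm create-md drbd0
drbdadm create-md drbd1
drbdadm up drbd0
drbdadm up drbd1
drbdadm -- --overwrite-data-of-peer primary drbd0
mkfs.ext4 /dev/drbd0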

In order to get “automatic RAID when available” working, I used a custom script and a udev rule. The udev rule will need to be modified to match your device, but mine looks as follows:

/etc/udev/rules.d/95-drbdusb.rules:

# Generated by inspecting output of
# udevadm info -a -p $(udevadm info -q path -n /dev/sdd1)
# We can use "parents" info as well, not only the sdd block

KERNEL=="sd?1", ATTRS{product}=="SK7301", ATTRS{serial}=="000020110813", RUN+="/usr/local/bin/reconnectdrbd"
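
Before relying on hot-plug, the rule can be dry-run against the device to confirm it matches (replace /dev/sdd1 with whatever the USB partition appears as):

# Reload the rules and simulate a udev event for the partition; the
# output should list reconnectdrbd as a queued RUN program.
udevadm control --reload-rules
udevadm test $(udevadm info -q path -n /dev/sdd1) 2>&1 | grep reconnectdrbd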

/usr/local/bin/reconnectdrbd:

#!/bin/bash
# Re-activate the USB-backed LVM volume and re-attach the DRBD backing
# disk once udev tells us the drive has (re)appeared.
/sbin/lvchange -a n /dev/usbhdd/hdd_drbd   # clear any stale activation of the LV
/sbin/vgexport -a                          # export all inactive volume groups
sleep 2
/sbin/vgimport -a                          # re-import them, picking up the USB PV again
/sbin/lvchange -a y /dev/usbhdd/hdd_drbd   # activate the HDD backing volume
/sbin/drbdadm attach drbd1                 # re-attach the backing disk to resource drbd1

Remember to make the script executable: “chmod 755 /usr/local/bin/reconnectdrbd”.
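
Once the script has run, the HDD side should re-attach and start catching up; a quick check (assuming DRBD 8.x) is:

# drbd1 should show up as SyncTarget/Inconsistent while it catches up,
# then settle at Connected with both disks UpToDate.
cat /proc/drbd
drbdadm dstate drbd1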

If you have problems getting the automatic recovery working (as I did), try running the script manually after connecting the drive. If that works, check that the udev rule is firing and the script is actually being called (I found that adding a line “/bin/date >> /tmp/debugme.log” to the script and inspecting that file helped prove the udev rule was not matching); a sketch of that debug version is below.
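
For example, the top of the script can temporarily be turned into a logger (the $DEVNAME variable is set by udev for RUN programs) to prove whether the rule is firing at all:

#!/bin/bash
# Temporary debugging: record every invocation so it is obvious whether
# udev matched the rule and ran the script.
/bin/date >> /tmp/debugme.log
echo "invoked for ${DEVNAME}" >> /tmp/debugme.log
# ... rest of reconnectdrbd as above ...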

This script almost certainly contains bits which are unnecessary, but it’s “good enough” for what I want to do. (The same can probably be said of the DRBD configuration!)