Create Software RAID1 with mdadm (Part 1)
Many different situations and scenarios call for a redundant configuration. Deciding what needs to be redundant is something you should do while performing your risk assessment, and creating redundant devices helps mitigate the risk of losing a particular asset.
Why not hardware RAID? Isn't a software RAID risky?
I am not sure if you've been in this position, but you probably have:
"We can't afford that; it's not in the budget." - (Almost) Every CEO ever.
To be honest, a hardware RAID card would be the best scenario, but RAID cards also run their own software that *could* fail. Not to mention, the RAID card itself could fail, which is why you would want to (or need to) buy a few identical spares. At that point, you're back at square one.
Software RAID has its own risks, just like everything else in this world.
Creating a RAID1 using mdadm
Have your two disks ready:
[root@localhost ~]# fdisk -l /dev/sdb /dev/sdc

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
All RAID setups strongly prefer that every member disk is the same make, model, and size. You want this match to be as exact as possible, because a RAID1 can only be as large as its smallest member.
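Since the mirror gets sized to the smaller member, it can be worth comparing sizes before building the array. A minimal sketch: the same_size helper is hypothetical, and the commented blockdev line assumes /dev/sdb and /dev/sdc exist on a live system.

```shell
# Hypothetical helper: report whether two byte counts match.
same_size() {
    if [ "$1" -eq "$2" ]; then
        echo "match"
    else
        echo "mismatch"
    fi
}

# On a live system, feed it real device sizes (blockdev prints bytes):
#   same_size "$(blockdev --getsize64 /dev/sdb)" "$(blockdev --getsize64 /dev/sdc)"
# Demo using the 8 GB figures from the fdisk output above:
same_size 8589934592 8589934592
```

If the sizes differ, mdadm will still build the array, but the extra space on the larger disk is wasted.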
Create the RAID1 device with mdadm
Side note: You *can* partition these disks before you start, but it is not necessary if you are going to use the entire disk.
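If you do decide to partition first (for example, to leave a little slack so a slightly smaller replacement disk still fits), one approach with a modern sfdisk is an input script that creates a single partition of type fd (Linux RAID autodetect). This is a sketch, not the article's method; the /tmp path is just for illustration.

```shell
# Write an sfdisk layout script: one partition spanning the whole disk,
# type fd = Linux RAID autodetect.
cat <<'EOF' > /tmp/raid-layout.sfdisk
label: dos
type=fd
EOF

# On a real system you would apply it to each member disk:
#   sfdisk /dev/sdb < /tmp/raid-layout.sfdisk
#   sfdisk /dev/sdc < /tmp/raid-layout.sfdisk
# and then build the array from /dev/sdb1 and /dev/sdc1 instead.
cat /tmp/raid-layout.sfdisk
```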
[root@localhost ~]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
This creates the RAID1 device /dev/md1 using /dev/sdb and /dev/sdc. You can see it in /proc/mdstat, which references the '[raid1]' "personality". You can also see that both devices are up from the "[2/2]" and "[UU]" tags:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc[1] sdb[0]
      8380416 blocks super 1.2 [2/2] [UU]

unused devices: <none>
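The "[UU]" tag is what monitoring scripts typically key on: a degraded mirror shows an underscore in that field ("[U_]" or "[_U]") instead. A minimal sketch of such a check, run here against sample text rather than a live /proc/mdstat; the check_mirror helper is hypothetical.

```shell
# A healthy RAID1 shows "[UU]"; a degraded one shows "[U_]" or "[_U]".
check_mirror() {
    case "$1" in
        *"[UU]"*) echo "healthy" ;;
        *)        echo "degraded" ;;
    esac
}

# On a live system you might run it against the md1 status line:
#   check_mirror "$(grep -A1 '^md1' /proc/mdstat)"
# (mdadm --detail /dev/md1 gives a fuller report.)
check_mirror "8380416 blocks super 1.2 [2/2] [UU]"
check_mirror "8380416 blocks super 1.2 [1/2] [U_]"
```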
Create file system on RAID1
Easiest part ever. Let's just create one in ext4:
[root@localhost ~]# mkfs.ext4 /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
524288 inodes, 2095104 blocks
104755 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2147483648
64 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Mount the RAID1 device
You should already know how to do this!
[root@localhost ~]# mkdir /mnt/raid1
[root@localhost ~]# mount /dev/md1 /mnt/raid1/
[root@localhost ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  6.5G  748M  5.5G  12% /
tmpfs                         246M     0  246M   0% /dev/shm
/dev/sda1                     477M   30M  422M   7% /boot
/dev/md1                      7.8G   18M  7.4G   1% /mnt/raid1
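One last thing worth doing before calling it done: persist the array and the mount across reboots. The commented commands are the usual approach on a system like this; the runnable demo below writes to a scratch file under /tmp so nothing on a live system is touched, and the fstab options shown are just sensible defaults, not the article's own.

```shell
# On a real system:
#   mdadm --detail --scan >> /etc/mdadm.conf   # record the array definition
#   echo '/dev/md1  /mnt/raid1  ext4  defaults  0 0' >> /etc/fstab
# Demo against a scratch file instead of the real /etc/fstab:
printf '/dev/md1  /mnt/raid1  ext4  defaults  0 0\n' > /tmp/fstab.demo
grep '/dev/md1' /tmp/fstab.demo
```

Without the mdadm.conf entry, the kernel may still assemble the array at boot, but possibly under a different name (such as /dev/md127), which would break the fstab line.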