RAID mdadm Setup
http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID
The default mdadm file is: /etc/mdadm/mdadm.conf
-----
Each IDE drive in a RAID should be on its own IDE channel. With IDE, a dead drive on a channel can bring down the whole channel. IDE also does not provide overlapping seek, so accesses to another drive on the same channel, even with all drives functioning properly, block until the first drive is done transferring, making the array SLOW. In a RAID setup this means that if one drive dies, both drives on that channel go down and your machine crashes.

When you partition your disks, make sure your partitions use fd (Linux raid autodetect) as the partition type instead of the default 83 (Linux native) or 82 (Linux swap).

/boot is best made a RAID1. Recall that in RAID1 data is mirrored on multiple disks, so if there is a problem with your RAID, GRUB/LILO can point to any of the copies of the kernel on any of the partitions in the /boot RAID1 and a normal boot will occur.

Load the appropriate raid module:
modprobe raid1   (for RAID 1)
modprobe raid0   (for RAID 0)
modprobe raid5   (for RAID 5)

This might be a good time to play with the hdparm tool. It lets you change hard drive access parameters, which may speed up disk access. It is also useful if you are using a whole disk as a hot spare: you can increase its spin-down time so that it spends most of its time in standby, extending its life.

You can also set up the first disk's partitions and then copy the entire partition table to the second disk with:
sfdisk -d /dev/sda | sfdisk /dev/sdb

Before creating the RAID arrays, create the metadevice nodes:
mknod /dev/md0 b 9 0
mknod /dev/md1 b 9 1
mknod /dev/md2 b 9 2
mknod /dev/md3 b 9 3
chmod 660 /dev/mdX

After partitioning, create the /etc/mdadm.conf file using mdadm, an advanced tool for RAID management.
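Tying the partition-type and sfdisk notes together: the plain clone copies partition types verbatim, so the new disk's partitions stay 83/82 rather than fd. A small hedged helper can convert them during the clone. `raidtype` is a made-up name, and the sed expressions assume the classic "Id=" dump format of older sfdisk (newer releases print "type=" instead), so inspect a dump before trusting it:

```shell
# raidtype: rewrite an sfdisk dump, turning Linux (Id=83) and swap
# (Id=82) partitions into fd (Linux raid autodetect). Assumes the old
# "Id=" dump format; newer sfdisk prints "type=" instead, so check
# the output of "sfdisk -d" on your system first.
raidtype() { sed -e 's/Id=83/Id=fd/' -e 's/Id=82/Id=fd/'; }

# Clone sda's table to sdb, converting the types on the way:
#   sfdisk -d /dev/sda | raidtype | sfdisk /dev/sdb
```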
For instance, to have your boot, swap and root partitions mirrored (RAID-1) across /dev/sda and /dev/sdb, you can use:
mdadm --create -v /dev/md1 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create -v /dev/md2 --level=raid1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create -v /dev/md3 --level=raid1 --raid-devices=2 /dev/sda3 /dev/sdb3

A three-way mirror with a hot spare needs four member devices, e.g.:
mdadm --create -v /dev/md0 --level=raid1 --raid-devices=3 --spare-devices=1 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Or, if you are lazy:
for i in 1 2 3; do
    mknod /dev/md$i b 9 $i
    mdadm --create /dev/md$i --level=1 --raid-devices=2 /dev/sda$i /dev/sdb$i
done

Save your mdadm.conf file:
mdadm --detail --scan >> /etc/mdadm.conf

Before you create any filesystems, check /proc/mdstat to be sure the RAID devices are done syncing.

Create the filesystems:
mke2fs -j /dev/md1
mke2fs -j /dev/md3

Create the swap partition:
mkswap /dev/md2
Turn the swap on:
swapon /dev/md2

Mount the / and /boot RAIDs:
mount /dev/md3 /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/md1 /mnt/gentoo/boot
If you created a separate array for /home (e.g. /dev/md4), mount it too:
mount /dev/md4 /mnt/gentoo/home

From now on, use /dev/md1 for the boot partition and /dev/md3 for the root partition (and /dev/md4 for home, if present). Right before chrooting, don't forget to copy /etc/mdadm.conf to /mnt/gentoo/etc. When configuring your kernel, make sure the appropriate RAID support is built in, not a module.
-----
When configuring your bootloader, make sure it gets installed in the MBR of both disks if you use mirroring. Since the /boot partition is a RAID, grub cannot read it through the RAID layer to get the bootloader; it can only access the physical drives. Thus you still use (hd0,0) in this step. If you are using a RAID-1 mirrored disk system, install grub on all the disks in the system, so that when one disk fails you are still able to boot. The find command below will list the disks, e.g.:
grub> find /boot/grub/stage1
 (hd0,0)
 (hd1,0)
grub>

Now, if your disks are /dev/sda and /dev/sdb, do the following:
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
This installs grub into the /dev/sda MBR, and:
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
installs grub into the /dev/sdb MBR. The device command tells grub to assume the drive is (hd0), i.e. the first disk in the system, even when that is not the case. If your first disk fails, however, your second disk becomes the first disk in the system, and its MBR will then be correct.

grub.conf does change from the normal install: the specified root device is now a RAID device, no longer a physical drive. Mine looks like this:

default 0
timeout 30
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
title=My example Gentoo Linux
root (hd0,0)
kernel /boot/bzImage root=/dev/md3
-----
Migration: if migrating to RAID, change the devices in /etc/fstab.

To use Knoppix to manually mount an array:
modprobe raid1   # or "raid5", depending on the modules needed
mdadm --assemble /dev/md0 /dev/hda1 /dev/hda2
or
mdadm --assemble --scan /dev/md0   # mdadm.conf file needed

Dynamically generating mdadm.conf from a running system:
echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
mdadm --detail --scan >> mdadm.conf
or from the superblocks on disk:
echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
mdadm --examine --scan --config=mdadm.conf >> mdadm.conf
Edit and delete the excess fluff from mdadm.conf.

If migrating a RAID image to a single disk, create a degraded array: use the word "missing" in place of the device name. If size discrepancies are found, the array won't be run automatically; use the --run flag to override the caution.

To manually force a good drive to rebuild:
mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1
To stop:
mdadm --stop /dev/md0
To start:
mdadm --run /dev/md0

E-mail monitoring (this never exits once it finds md devices):
mdadm --monitor --scan --mail=user@mail.com
-----
E-mail: make sure the next line is in /etc/mdadm.conf with the correct To address:
MAILADDR root@example.com
To verify that e-mail notification works, use this test command:
mdadm -Fs1t
-----
Rebuilding: to remove the failed partition and add the new partition:
mdadm /dev/mdX -r /dev/sdYZ -a /dev/sdYZ
Note: do this for each of your partitions, i.e. md0, md1, md2, etc.
Watch the automatic reconstruction run with:
watch -n1 cat /proc/mdstat
-----
Show RAID info:
mdadm --detail /dev/mdX
-----
Data Scrubbing

When you have multiple copies of data, you can use data scrubbing to actively scan for corrupt data and clean it up by replacing the corrupt data with a correct surviving copy. Normally, raid detects bad blocks passively: when you attempt to read a block and a read error occurs, the data is reconstructed from the rest of the array and the bad block is rewritten. If the block cannot be rewritten, the defective disk is kicked out of the active array. During raid reconstruction, if you run across a previously undetected bad block, you may not be able to reconstruct your array without data corruption. The larger the disk, the higher the odds that passive bad block detection will be inadequate. Therefore, with today's large disks it is important to actively perform data scrubbing on your array.

With a modern kernel (>=2.6.16), this command initiates a full scrub: all blocks are read, read errors are repaired by rewriting from redundancy, and inconsistencies between the copies are counted (writing "repair" instead of "check" also rewrites inconsistent blocks):
echo check > /sys/block/mdX/md/sync_action
You can monitor the progress of the check with:
watch -n .1 cat /proc/mdstat
You should have your array checked daily or weekly by adding the appropriate command to /etc/crontab.
-----
Migrating from no RAID to RAID-1

Boot a live Linux CD.
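Before loading modules, it's worth confirming the live environment actually ships the tools the following steps rely on. A small hedged helper (`has_tool` is a made-up name, and the tool list is just the commands used below):

```shell
# has_tool: report whether a needed utility is on the PATH.
has_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: MISSING"
    fi
}

# On the live CD, check what the migration steps depend on:
#   has_tool mdadm; has_tool fdisk; has_tool mke2fs; has_tool rsync
```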
modprobe raid1
modprobe raid5

Assuming your old drive is sda, your new blank drive is sdb, and that you adapt these instructions for multiple partitions:

* Copy the partition map and make the partitions type "fd":
# fdisk -l /dev/sda
# fdisk /dev/sdb
Change all partitions to type "fd". Make sure swap is set and /boot has the boot flag.
Command (m for help): w

* Set up the RAID devices, if needed:
# cd /dev
# MAKEDEV md

* Create your RAID for partitions /dev/sda1 and /dev/sdb1 (/dev/sda1 is intentionally MISSING):
# mdadm -C -v -l 1 -n 2 /dev/md0 missing /dev/sdb1
# cat /proc/mdstat

* Format:
# mke2fs -j -v -c -L /myLabel /dev/md0

* Copy the data over:
# mkdir /mnt/source /mnt/destination
# mount /dev/sda1 /mnt/source
# mount /dev/md0 /mnt/destination
# cd /mnt/source
# cp -a . /mnt/destination
or
# rsync -avH --progress /mnt/source/ /mnt/destination/

* Set up /etc/fstab for the /dev/mdX devices.
* Set up grub to boot.
* Reboot into the degraded array.
* Verify raid status: cat /proc/mdstat

Next we destroy the data on the old drive and add it to the array:

* Copy the partition map from the new drive to the old drive:
# fdisk -l /dev/sdb
# fdisk /dev/sda
Command (m for help): w

* Add the partitions to the live RAID:
# mdadm /dev/md0 -a /dev/sda1
Forced rebuild:
# mdadm /dev/md0 -f /dev/sda1 -r /dev/sda1 -a /dev/sda1

* Once all partitions are added, rebuild /etc/mdadm/mdadm.conf. Delete the existing entries, then execute:
# mdadm --detail --scan >> mdadm.conf
Skipping this step will cause the raid to come up in degraded mode on reboot. Also add "MAILADDR user@domain.com" for e-mail alerts.

You can watch the RAID rebuild with:
# watch -n1 cat /proc/mdstat
Do NOT reboot or power off your computer UNTIL THE REBUILD HAS COMPLETED.
-----
Migrating from no RAID to RAID-5

This is as above, but alter the create command to use RAID-5 and to have additional partitions.
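One thing to keep in mind when sizing the new array: usable RAID-5 capacity is (members - 1) × the smallest member, since one member's worth of space goes to parity. A trivial sketch (the figures are made-up examples, not values from this setup):

```shell
# Usable RAID-5 capacity = (number of members - 1) * smallest member.
# Example figures only; substitute your real partition sizes.
members=3
smallest_gb=250
echo "usable: $(( (members - 1) * smallest_gb )) GB"
# prints: usable: 500 GB
```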
* Create your RAID for partitions /dev/sda1, /dev/sdb1, /dev/sdc1:
# mdadm -C -l 5 -n 3 -v /dev/md0 missing /dev/sdb1 /dev/sdc1
-----
Adding a drive to a mirrored RAID

Expanding your boot partition to more drives: your boot partition must remain RAID-1, as GRUB does not support RAID-5. However, RAID-1 can span 2 or more disks. Assuming all drives are partitioned identically, sda1 and sdb1 form /dev/md0, and you are adding sdc1:

* Copy the partition map from sda or sdb to sdc as described above.
* Add the partition as a spare:
# mdadm /dev/md0 -a /dev/sdc1
* Grow the RAID-1 from 2 elements to 3:
# mdadm -G /dev/md0 -n 3
* Observe the rebuild:
# watch -n1 cat /proc/mdstat
-----
/etc/mdadm/mdadm.conf:

DEVICE /dev/hd*[0-9] /dev/sd*[0-9]
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=...:...:...:...
    devices=/dev/.static/dev/hdc1,/dev/.static/dev/hdd1
MAILADDR alert@domain.net

command: mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=...:...:...:...

Do NOT reorder the entries coming out of the scan command.
-----
Dynamically building a raid (off a Knoppix CD):

modprobe raid1
modprobe raid5
mkdir /mnt/raid
# find the partitions in the array
mdadm --query /dev/sda1
mdadm --query /dev/sdb1
mdadm --examine /dev/sda1   # for more details
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
mount /dev/md0 /mnt/raid
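After assembling and mounting, it's worth checking that both mirror halves really joined the array rather than it coming up degraded. A hedged sketch (`degraded_check` is a made-up name; it relies on the "_" that /proc/mdstat shows inside the [UU]-style status marks when a member is missing):

```shell
# degraded_check: scan an mdstat listing for arrays with a missing
# member, shown as "_" inside the [UU]-style status marks. The file
# argument defaults to /proc/mdstat; a saved copy can be checked too.
degraded_check() {
    if grep -qE '\[[U_]*_[U_]*\]' "${1:-/proc/mdstat}"; then
        echo "WARNING: degraded array present"
    else
        echo "all arrays healthy"
    fi
}
```

Run `degraded_check` right after the mount; a degraded result means one member needs to be re-added with mdadm -a before you rely on the data.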