To make it slightly more interesting I'm going to create it initially using only 2 disks and then add the third disk afterwards. Why? Because I can.
I'm doing this on my MicroServer. The RAID array will hold the virtual guests and the ISO storage pool.
First partition the disks and make sure the partitions are aligned:
# parted /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel msdos
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) mkpart primary ext4 0% 100%
(parted) set 1 raid on
(parted) align-check optimal 1
1 aligned
(parted) p
Model: ATA WDC WD20EZRX-00D (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 2000GB 2000GB primary raid
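The same partitioning needs doing on the second disk. parted's scripted mode makes it a one-liner (the device name /dev/sdc is assumed here - adjust to suit, and re-run the align-check as above if you want to be sure):
# parted -s /dev/sdc -- mklabel msdos mkpart primary ext4 0% 100% set 1 raid on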
Tip - here's how to check the disks are ready to go:
# mdadm -E /dev/sd[bc]
/dev/sdb:
MBR Magic : aa55
Partition[0] : 3907026944 sectors at 2048 (type fd)
/dev/sdc:
MBR Magic : aa55
Partition[0] : 3907026944 sectors at 2048 (type fd)
Tip - if these aren't brand new disks, check they haven't already got any md superblocks present. If they have, zero the superblock (see later in this post):
# mdadm -E /dev/sd[bc]1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.
Here's the magic: how to create the array. Note the use of the 'missing' parameter in place of the 3rd disk.
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Tip - how to check the array status. The [3/2] [UU_] below shows the array running degraded with the third device missing.
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[1] sdb1[0]
3906764800 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
bitmap: 15/15 pages [60KB], 65536KB chunk
unused devices: <none>
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon May 25 19:54:54 2015
Raid Level : raid5
Array Size : 3906764800 (3725.78 GiB 4000.53 GB)
Used Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon May 25 19:54:54 2015
State : active, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : einstein.at.home:0 (local to host einstein.at.home)
UUID : cc117a2a:439506c1:429d86cf:35514c71
Events : 0
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
4 0 0 4 removed
Finally, save the configuration:
mdadm --detail --scan --verbose >> /etc/mdadm.conf
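One distro caveat - on Debian/Ubuntu the file is /etc/mdadm/mdadm.conf rather than /etc/mdadm.conf, and it's worth regenerating the initramfs so the array assembles at boot. Something like:
mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
update-initramfs -u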
So now let's add the missing disk.
First, clone the partition table from one of the existing disks:
sfdisk -d /dev/sdb | sfdisk /dev/sdd --force
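Optionally, sanity check that the partition table made it across before going any further:
sfdisk -l /dev/sdd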
(For completeness, zero the superblock on the new partition first, although it's not technically necessary on a brand new disk of course.)
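For example, assuming the cloned partition comes up as /dev/sdd1:
mdadm --zero-superblock /dev/sdd1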
Now add the disk to the array:
mdadm --add /dev/md0 /dev/sdd1
The array will now resilver. This will take hours. You can check /proc/mdstat for progress.
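A convenient way to keep an eye on it (the 10-second interval is just an example):
watch -n 10 cat /proc/mdstat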
Don't forget to update mdadm.conf as shown above.
Tip - You can speed up resilvering by increasing these kernel parameters
echo 50000 > /proc/sys/dev/raid/speed_limit_min
echo 16384 > /sys/block/md0/md/stripe_cache_size
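There's also a matching ceiling, speed_limit_max, which can be raised if the rebuild still isn't keeping up - the value here is only an example:
echo 500000 > /proc/sys/dev/raid/speed_limit_max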
Finally, if you want to destroy an array:
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
# mdadm --zero-superblock /dev/sdb1
# mdadm --zero-superblock /dev/sdc1
# mdadm --zero-superblock /dev/sdd1