For this test, we will create a software RAID1 while installing Ubuntu Server 14.04 LTS.
Note that mdadm is used automatically when the RAID is created.
I will connect two identical disks to the server (you can also try creating the RAID on a virtual machine, for example one created in VirtualBox).
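After the installation finishes, you can verify that the array was assembled; the array name /dev/md0 below is an assumption, yours may differ:
cat /proc/mdstat
mdadm --detail /dev/md0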
Smart Array P420i Firmware Update
For the test, I will update the Smart Array P420i firmware on the HPE DL380p G8 server.
Configuring Software RAID1 on a Running Ubuntu System
Here is an example of migrating a running Ubuntu system to a software RAID1.
In the process, you will need to perform two reboots.
The first step is to switch to the root user if you have not already:
sudo -i
Let’s see a list of disks and partitions:
fdisk -l
fdisk -l | grep '/dev/sd'
lsblk -o NAME,UUID
Suppose the system uses one disk, for example /dev/sda, with one main partition, /dev/sda1.
For this test, I installed a clean Ubuntu Server 18.04; the disk was partitioned by default and swap was a file on the same partition.
To create the RAID, we connect another disk of the same size; it will be named /dev/sdb.
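As a rough sketch of the idea (not the full procedure from the article, and assuming the layout described above), the partition table is copied to the new disk and a degraded RAID1 is created with the second member missing:
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
# create the mirror on the new disk only; /dev/sda1 is added later
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
The system is then copied onto /dev/md0, GRUB and /etc/fstab are updated, and after a reboot /dev/sda1 is added to the array.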
The solution to the warning “mismatch_cnt is not 0 on /dev/md*”
I once replaced a failed drive in a software RAID1, added it to the array, it synchronized successfully, and I installed GRUB on it.
After a while I received an email message:
Subject: Cron <root@server> /usr/sbin/raid-check
WARNING: mismatch_cnt is not 0 on /dev/md2
In my case, raid-check found that the mismatch_cnt counter is not 0 for /dev/md2, which means there may be bad sectors on the disk, or it simply needs to be resynchronized. Since I installed GRUB after adding the disk to the array, this is probably the cause.
Example of viewing the counters of all arrays:
cat /sys/block/md*/md/mismatch_cnt
Or each in turn:
cat /sys/block/md0/md/mismatch_cnt
cat /sys/block/md1/md/mismatch_cnt
cat /sys/block/md2/md/mismatch_cnt
View the status of raids:
cat /sys/block/md*/md/sync_action
If mismatch_cnt is not 0 for any array, then you can try to resynchronize it:
echo 'repair' >/sys/block/md2/md/sync_action
And check:
echo 'check' >/sys/block/md2/md/sync_action
If you want to cancel the action:
echo 'idle' >/sys/block/md2/md/sync_action
Let’s see the synchronization status and other data of the array:
cat /proc/mdstat
If errors appear because of a bad disk, I recommend looking at the SMART data and checking the disk, as I wrote in these articles (a short example follows the list):
Diagnostics HDD using smartmontools
Linux disk test for errors and broken sectors
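For example, a quick look at the SMART data and a short self-test (the device name here is just an example):
smartctl -a /dev/sda
smartctl -t short /dev/sda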
See also:
How to fix the problem with mdadm disks
Description of RAID types
RAID arrays improve the reliability of data storage and increase the speed of working with disks by combining several disks into one large one. RAID can be implemented in hardware, in firmware, or in software.
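As a quick illustration of software RAID with mdadm (the device and array names here are only examples, not from the article):
# RAID0 stripes data across disks for speed, with no redundancy
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# RAID1 mirrors data across disks for reliability
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1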
mdadm – utility for managing software RAID arrays
I recommend reading my article Description of RAID types.
You can install mdadm in Ubuntu using the command:
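apt-get install mdadm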
How to fix the problem with mdadm disks
I received three email messages from one of the servers at Hetzner with information about the arrays md0, md1, and md2:
DegradedArray event on /dev/md/0:example.com
This is an automatically generated mail message from mdadm
running on example.com
A DegradedArray event had been detected on md device /dev/md/0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4] [raid1]
md2 : active raid6 sdb3[1] sdd3[3]
208218112 blocks super 1.0 level 6, 512k chunk, algorithm 2 [4/2] [_U_U]
md1 : active raid1 sdb2[1] sdd2[3]
524224 blocks super 1.0 [4/2] [_U_U]
md0 : active raid1 sdb1[1] sdd1[3]
12582784 blocks super 1.0 [4/2] [_U_U]
unused devices: <none>
I looked at the information about RAID and disks:
cat /proc/mdstat
cat /proc/partitions
mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2
fdisk -l | grep '/dev/sd'
fdisk -l | less
I was going to send a ticket to technical support and planned to replace the failed SSD disks.
I saved the SMART information about the failed disks to files, which also contain their serial numbers:
smartctl -x /dev/sda > sda.log
smartctl -x /dev/sdc > sdc.log
Remove the disk partitions from the arrays if possible:
mdadm /dev/md0 -r /dev/sda1
mdadm /dev/md1 -r /dev/sda2
mdadm /dev/md2 -r /dev/sda3
mdadm /dev/md0 -r /dev/sdc1
mdadm /dev/md1 -r /dev/sdc2
mdadm /dev/md2 -r /dev/sdc3
If any partition of the disk is still shown as working but the disk needs to be removed, first mark that partition as failed and then remove it. For example, if /dev/sda1 and /dev/sda2 have dropped out of the arrays but /dev/sda3 is still working:
mdadm /dev/md2 -f /dev/sda3
mdadm /dev/md2 -r /dev/sda3
In my case, after looking at the information about the dropped disks, I found that they were intact and working, in even better condition than the active ones.
I looked at the disk partitions:
fdisk /dev/sda
p
q
fdisk /dev/sdc
p
q
They were partitioned in the same way as before:
Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00015e3f
Device Boot Start End Blocks Id System
/dev/sda1 1 1567 12582912+ fd Linux raid autodetect
/dev/sda2 1567 1633 524288+ fd Linux raid autodetect
/dev/sda3 1633 14594 104109528+ fd Linux raid autodetect
Therefore, I returned these disks to the arrays, waiting for each one to finish synchronizing:
mdadm /dev/md0 -a /dev/sda1
mdadm /dev/md1 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda3
mdadm /dev/md0 -a /dev/sdc1
mdadm /dev/md1 -a /dev/sdc2
mdadm /dev/md2 -a /dev/sdc3
In the end, cat /proc/mdstat showed [UUUU] for all arrays.
If the disks are replaced with new ones, they need to be partitioned in the same way as the installed ones.
An example of partitioning the disk /dev/sdb the same way as /dev/sda with MBR:
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
An example of partitioning /dev/sdb the same way as /dev/sda with GPT and assigning the disk random GUIDs:
sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb
You also need to install the bootloader on the newly installed disk:
grub-install --version
grub-install /dev/sdb
update-grub
Or via the grub console (hd0 is /dev/sda, hd0,1 is /dev/sda2):
cat /boot/grub/device.map
grub
device (hd0) /dev/sda
root (hd0,1)
setup (hd0)
quit
If GRUB is being installed from a rescue disk, you need to look at the list of partitions and mount the root partition. For example, if RAID is not used:
ls /dev/[hsv]d[a-z]*[0-9]*
mount /dev/sda3 /mnt
If you are using software RAID:
ls /dev/md*
mount /dev/md2 /mnt
Or LVM:
ls /dev/mapper/*
mount /dev/mapper/vg0-root /mnt
And execute chroot:
chroot-prepare /mnt
chroot /mnt
After mounting, you can restore GRUB as I wrote above.
See also my other articles:
How did I make a request to Hetzner to replace the disk in the raid
The solution to the error “md: kicking non-fresh sda1 from array”
The solution to the warning “mismatch_cnt is not 0 on /dev/md*”
mdadm – utility for managing software RAID arrays
Description of RAID types
Diagnostics HDD using smartmontools
Recovering GRUB Linux