I'm using LSI MegaRAID Storage Manager on an IBM x3650 server. The data (storage) array, until last week, had 4 disks total (146GB SAS, total storage of 438GB). There were two additional slots in the backplane, so I added two identical 146GB disks (1 as a dedicated hot spare, 1 to expand the storage capacity).

Yesterday I upgraded my CentOS 8 server from a […]. However, after the final stage of the upgrade, and some time after checking multiple times for progress on the RAID 5 setup (which was around 4/5%), […]

When the power came back on, CentOS 8 wouldn't start anymore, except, after a long time, into emergency mode. In this emergency mode I cannot mdadm assemble the RAID, as it errors with a wrong RAID level issue:

    md: personality for level 5 is not loaded

Trying to exit dracut emergency mode I see: […]

After a lot of Googling around, I finally created an Ubuntu 21.04 live bootable USB. Turns out, I finally could mdadm assemble the 4 drives again in the Ubuntu environment, after which the RAID system itself started to recover and continue the RAID 5 process. Finally, after several hours this was done, and after mounting the various volumes on the RAID system and seeing that everything was fine and finished, I rebooted.

Trying the same thing with a Fedora Workstation live boot USB, I could again simply mount the RAID 5 disks. Then I went into CentOS Rescue mode, which I selected in GRUB, and there the RAID 5 with its 4 disks comes online just fine! Very strange. I only see one error, which is that /boot won't mount (I can't tell whether this is normal because of Rescue mode? EDIT: this is not normal!).

Error I get when trying to manually mount /boot:

    mount /dev/sda1 /boot
    mount: /boot: unknown filesystem type 'ext4'.

And lsmod does indeed show no ext4 loaded, but I can see the volumes as normal and cat /proc/mdstat outputs normal active information.

I don't know how to resolve this, as Googling around shows me I might have to do some repair to the boot img, but I don't know how to mount /boot in Repair mode, and Emergency mode doesn't seem to have anything I can work with. So basically I have an OS that won't start in normal mode, while the RAID disks seem to work just fine.

cat /proc/mdstat

    Personalities :
    […]

This is from the root volume of the RAID mount:

cat /etc/mdadm.conf

    # mdadm.conf written out by anaconda
    […]

I commented out the ARRAY line hoping the conf would pick up on the level and num-devices changes, but nope. I also tried manually editing the /etc/mdadm.conf file, but that didn't work either on reboot; I found out online it's just the assemble point.

fdisk output (excerpts):

    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Device     Boot   Start     End  Sectors  Size  Id  Type
    I/O size (minimum/optimal): 65536 bytes / 196608 bytes

LVM info (without /dev/sde, which is the flash drive):

    PV Size               <930.39 GiB / not usable 3.00 MiB
    PV UUID               RpepyF-BxMl-9BZT-Sebw-otBV-HLCq-7XSGWX
    VG UUID               NrW8lm-Xi70-hP73-Auis-wLVy-beHq-85TWZM
    LV Creation host, time nas.localdomain, 10:45:39 +0100
    LV Creation host, time nas.localdomain, 10:45:46 +0100
    LV UUID               V5WLBp-lvkd-aGT4-isoE-U8UH-ehqG-i37BmE
    LV Creation host, time nas.localdomain, 10:45:47 +0100

I've got an array that I'm having trouble with here. Relevant mdadm output:

    Used Dev Size : 1951160320 (930.39 GiB 998.99 GB)
    Internal Bitmap : 8 sectors from superblock
    Bad Block Log : 512 entries available at offset 16 sectors
    Array State : AAAA ('A' = active, '.' = missing, 'R' = replacing)
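For the assembly part of this, a minimal sketch of reassembling the array from a live environment and then recording the result in the config, assuming the four member disks are /dev/sda through /dev/sdd and the array assembles as /dev/md0 — only /dev/sde (the flash drive) is actually named above, so those device names and the array name are assumptions:

    # Try to auto-detect and assemble any arrays described by the on-disk superblocks
    mdadm --assemble --scan --verbose

    # Or assemble explicitly from the presumed member disks (device names are assumptions)
    mdadm --assemble /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # Watch the resync/recovery progress
    cat /proc/mdstat

    # Once the array is up, append its current definition rather than hand-editing the ARRAY line
    mdadm --detail --scan >> /etc/mdadm.conf

mdadm.conf only tells mdadm how to find and assemble the array at boot; it does not change the array itself, which matches what was found above about it being "just the assemble point".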
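On the "repair to the boot img" angle: the two quoted errors (md: personality for level 5 is not loaded, and mount not knowing ext4) are both consistent with kernel modules missing from the running environment, so here is a hedged sketch of checking and regenerating the initramfs from the installed system. It assumes the missing pieces really are the raid456 and ext4 modules and that the kernel to fix is the one currently booted; neither is confirmed above:

    # Load the md RAID4/5/6 personality and the ext4 driver for the current session
    modprobe raid456
    modprobe ext4

    # /boot should now be mountable (the mount command above already used /dev/sda1)
    mount /dev/sda1 /boot

    # Check whether those modules are inside the initramfs the system boots from
    lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'raid456|ext4'

    # If they are missing, rebuild the initramfs for the running kernel
    dracut -f /boot/initramfs-$(uname -r).img $(uname -r)

If modprobe itself cannot find the modules, that would point at a kernel/modules mismatch left over from the interrupted upgrade rather than just a thin initramfs.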