
RAID 1 array, strange changes caused by other disk removal

by andrkac, from LinuxQuestions.org (#6GBC5)
I have a problem with my disk array.
It was my first time creating one, so maybe I did something wrong.

Hardware:
A small enclosure for five HDDs with a USB 3.0 connection.
Looking from the top:
A) one 8 TB disk (single)
B) two 4 TB disks set up as software RAID 1 (via the mdadm command)
C) two other single disks.
The whole setup is driven by an Ubuntu server running on a Dell Wyse 5060 thin client (4 cores @ 2.4 GHz, 8 GB RAM, one SATA SSD drive).

Following this page: https://www.linuxbabe.com/linux-serv...e-raid-1-setup
In my case I used /dev/sdb1 and /dev/sde1, the only partitions on the two 4 TB disks.
I made /dev/md0, mounted it, and copied the data I want to store there. It seemed to be working.
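For reference, this is roughly what I ran, following that guide (the exact filesystem options may have differed):
Code:mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/md0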

But I still had one disk to insert into the case and some data to move to the array. So I turned the array off and replaced one of the (C) disks with the additional one. I didn't move either of the (A) or (B) disks.

And now, after a restart, I can see that my 4 TB disks are:
Code:sdb 8:16 0 3.6T 0 disk
sdf 8:80 0 3.6T 0 disk
(both have no partitions defined)

One device name has changed and /dev/md0 has disappeared completely!
But even after such a change, I thought md0 would keep working with one disk active and one failed? Why did it disappear completely?
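I have not tried it yet, but if only one member went missing, I thought I could at least force-start the array degraded with something like:
Code:mdadm --run /dev/md0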

In /etc/fstab, md0 is mounted by UUID (mountpoint: /mnt/md0); now mount -a shows:
Code:root@server:/mnt# mount -a
mount: /mnt/md0: can't find UUID="a403601e-e256-417b-a86f-7941f85c1936".
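To cross-check which filesystem UUIDs the system currently sees, I suppose I can run something like:
Code:blkid
lsblk -f
And this is the current status of the array: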
Code:root@sahara:/mnt# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Nov 7 15:30:12 2023
Raid Level : raid1
Array Size : 3906886464 (3.64 TiB 4.00 TB)
Used Dev Size : 3906886464 (3.64 TiB 4.00 TB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Mon Nov 13 17:57:18 2023
State : clean, FAILED
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : bitmap

Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 64 1 active sync missing
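Before recreating anything from scratch, I plan to look at the member superblocks directly with something like:
Code:cat /proc/mdstat
mdadm --examine /dev/sdb /dev/sdf
As far as I understand, --examine reads the md superblock from the device itself, so it should show whether the RAID metadata survived the disk swap.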
So, questions:
1. The /dev/md0 RAID 1 array is made from /dev/sde1 and /dev/sdb1. Why did it disappear when I removed another, unrelated disk?
2. One disk is still in the same place - why does /dev/md0 not work?
3. Why does lsblk not show any partitions on either disk?
4. Can I define md0 from disks identified by UUID instead of /dev/sd* names? That should protect me from such issues. (I sketch below what I have in mind.)
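For the last question: what I have in mind is pinning the array in /etc/mdadm/mdadm.conf by its array UUID rather than by device names, something like this (the UUID below is a made-up placeholder; mdadm --detail --scan prints the real line):
Code:mdadm --detail --scan
# example output, with a placeholder UUID:
# ARRAY /dev/md0 metadata=1.2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u    # so the array is also assembled at boot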

Now I need to wait for some other disk operations to finish, and then I will try to reattach the same disk configuration to the array and run it once again.
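If the superblocks turn out to be intact, I hope reassembling will be enough instead of a full recreate, something like this (assuming the member partitions show up again as sdb1 and sdf1):
Code:mdadm --stop /dev/md0
mdadm --assemble --scan
# or, naming the members explicitly:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdf1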
Fortunately I still have a backup of all the data stored on /dev/md0, so no data is lost and I can do some experiments; however, I don't like the idea of spending hours recreating md0 :(