Linux boots mounting random partitions
by plisken from LinuxQuestions.org on (#5MNV4)
Firstly, apologies if this is the wrong forum.
The problem: When rebooting, the system randomly allocates partitions from different drives.
The System: This is an existing system that has been running for almost 10 years: an HP server with a P420 RAID controller, 2 RAID arrays, 4 physical drives and 2 logical drives (Image: https://imgur.com/oDtHjyR )
From the image, we can see that there are indeed two logical drives, each of which consists of a mirrored set of two disks, resulting in the OS seeing sda and sdb.
Now, from within the OS (Debian 7) I can see the two disks, and they look pretty similar in size, but as far as I can tell, Array A is not mirrored to Array B.
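For reference, HP's hpacucli utility (if it happens to be installed on the box, which I haven't confirmed) can dump the controller's own view of the arrays with something like the below.
Code:# Ask the Smart Array controller for its array/logical drive layout
# (assumes HP's hpacucli package is installed)
hpacucli ctrl all show config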
When I do df I see a mixture of mount points across /dev/sda and /dev/sdb, and this changes randomly between boots.
As a test, I created a small text file on each mount point and then, following a reboot, checked for the existence of said text file. Indeed, on some mount points it didn't exist, and I witnessed the switch between sda and sdb.
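The marker test was roughly the below (a sketch; the mount-point list matches my fstab):
Code:# Drop a marker file on each mounted filesystem before the reboot
for mp in /dl /home /pcli /usr /var; do
    touch "$mp/reboot-marker.txt"
done
# After the reboot, report any mount point where the marker is gone
for mp in /dl /home /pcli /usr /var; do
    [ -f "$mp/reboot-marker.txt" ] || echo "$mp: marker missing"
done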
For example, currently, we have the below:
Code:root@dlm02u:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 48060808 3620772 41998644 8% /
udev 10240 0 10240 0% /dev
tmpfs 2457524 424 2457100 1% /run
/dev/disk/by-uuid/a68992e0-70a8-4f94-9923-ed5292dfda30 48060808 3620772 41998644 8% /
tmpfs 5120 0 5120 0% /run/lock
tmpfs 10807800 0 10807800 0% /run/shm
/dev/sda9 547905904 12971472 507102420 3% /dl
/dev/sda5 48060808 10709604 34909812 24% /home
/dev/sdb6 96121868 1973072 89265996 3% /pcli
/dev/sdb7 96121868 1735088 89503980 2% /usr
/dev/sda8 96121868 2727000 88512068 3% /var
23.128.30.24:/volume1/BACKUP 26225153280 25394777728 830256768 97% /nas
After a reboot, we may see /home switch to /dev/sdb5, or /dl switch to /dev/sdb9, etc., and there seems to be no pattern to this.
The fstab looks like the below:
Code:root@dlm02u:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdb1 during installation
UUID=a68992e0-70a8-4f94-9923-ed5292dfda30 / ext4 errors=remount-ro 0 1
# /dl was on /dev/sdb9 during installation
UUID=fdfb9f30-8fcb-434f-9c47-7ce76954eebc /dl ext4 defaults 0 2
# /home was on /dev/sdb5 during installation
UUID=299206d7-94b0-4d62-9495-3524a076f58f /home ext4 defaults 0 2
# /pcli was on /dev/sdb6 during installation
UUID=4443705c-4a14-4475-b202-cab3132977e5 /pcli ext4 defaults 0 2
# /usr was on /dev/sdb7 during installation
UUID=14d03c88-7c07-4b95-b11f-7b971535aa78 /usr ext4 defaults 0 2
# /var was on /dev/sdb8 during installation
UUID=2d131822-6f1e-46f5-a854-1b57fab4a51a /var ext4 defaults 0 2
# swap was on /dev/sdb3 during installation
UUID=8228e684-6516-4795-b3b3-6b9db6679e55 none swap sw 0 0
/dev/sda1 /media/usb0 auto rw,user,noauto 0 0
/dev/sdb1 /disk1 ext3 defaults 0 2
23.128.30.24:/volume1/BACKUP /nas nfs4 defaults 0 0
Another thing I found quite interesting was that when I ran blkid, the UUIDs of the matching sda and sdb partitions were identical; possibly one of the reasons for this issue?
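To illustrate, something like the below (a sketch, globbing over the partitions on both disks) prints any filesystem UUID that appears more than once:
Code:# List the filesystem UUID of every partition on sda and sdb,
# then show only the values that repeat
blkid -s UUID -o value /dev/sda? /dev/sdb? | sort | uniq -d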
So I'm left with a dilemma: why would such a situation exist, and why would there be two logical drives configured in such a way? Thinking outside my own box here, I wonder if the system was originally installed on, say, sda, and one of the mirrored physical drives was then removed and placed in Array B. That could explain why partitions on both arrays have the same UUID, and further explain why I'm seeing this random partition mounting at boot time.
A more immediate problem for me is that when the system has been running for some time, generally several months, small changes to source files, binaries and database entries are lost (well, kind of lost) when the system is rebooted. This is something I need to prevent, and I think changing the fstab to mount by device rather than by UUID would solve it. I'm also thinking about removing the second logical drive pair.
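For the fstab change, I have in mind entries along these lines (a sketch only, based on the current device layout shown by df above):
Code:# Pin mounts to device paths instead of UUIDs (example entries only)
/dev/sda5  /home  ext4  defaults  0  2
/dev/sda9  /dl    ext4  defaults  0  2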
Anyway, if anyone has actually read this far, then I thank you, and of course I would appreciate any helpful comments or suggestions as to why this situation may exist, and on my proposed solution.
Thanks in advance...