LVM - can I leave it in place after changing hard drive configuration?
by Sum1 from LinuxQuestions.org on (#590VF)
CentOS 7 server configured as a Samba Active Directory Domain Controller.
I have 2 x 2TB SSD drives configured in RAID-1 as one physical volume, with one volume group on the physical volume. Config -
Code:
[root@a ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md125
  VG Name               stuff
  PV Size               <1.82 TiB / not usable 3.06 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476899
  Free PE               0
  Allocated PE          476899
  PV UUID               dsvFH3-9eL0-cOqt-xxxx-xxxx-xxxx-xxxxx

Code:
[root@a ~]# vgdisplay
  --- Volume group ---
  VG Name               stuff
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476899
  Alloc PE / Size       476899 / <1.82 TiB
  Free PE / Size        0 / 0
  VG UUID               1lI07H-Wgbr-2NvR-xxxx-xxxx-xxxx-xxxxx

Code:
[root@a ~]# cat /etc/fstab
/dev/mapper/stuff-data  /mnt/data  xfs  defaults  0 0

The VG is full, and I want to simply pull the two drives and replace them with a 1 x 4TB drive containing an rsync'ed copy of the same data. I was going to comment out the VG entry in /etc/fstab and replace it with -

Code:
/dev/sdc1  /mnt/data  xfs  defaults  0 0

This is the new 1 x 4TB drive.
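To keep the switch easy to reverse, I was planning to leave the old line in place but commented out, so /etc/fstab would end up looking roughly like this (/dev/sdc1 is simply where the new drive shows up on this box):

Code:
# old LVM mount on the RAID-1 pair, kept commented out as a fallback
#/dev/mapper/stuff-data  /mnt/data  xfs  defaults  0 0
# new 1 x 4TB drive with the rsync'ed copy of the data
/dev/sdc1                /mnt/data  xfs  defaults  0 0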
But in case something doesn't work, I want the option to put the original RAID-1 drives back into production.
Can I leave the configured PV and VG as is, and use the /dev/mapper line in /etc/fstab in the event something is not right with the 1 x 4TB storage?
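If I do have to fall back, I'm assuming the recovery would be roughly this once the two SSDs are physically reinstalled (just a sketch of what I have in mind, not something I've tested):

Code:
# confirm the md RAID-1 array reassembled after reinstalling the SSDs
cat /proc/mdstat
# rescan for LVM metadata and reactivate the volume group
vgscan
vgchange -ay stuff
# uncomment the /dev/mapper/stuff-data line in /etc/fstab again, then
mount /mnt/data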
Thanks for reading.

