LVM raid0 on top of physical RAID is unexpectedly slow
by wangzw from LinuxQuestions.org (#5PHB1)
I have a server with 24 disks and 3 RAID cards.
I built 4 RAID sets as follows:
1. RAID 0+1 with 2 disks on RAID card 1, exposed as device sda and mounted at /
2. RAID 5 with 6 disks on RAID card 1, exposed as device sdb
3. RAID 5 with 8 disks on RAID card 2, exposed as device sdc
4. RAID 5 with 8 disks on RAID card 3, exposed as device sdd
Then I created an LVM volume group from sdb, sdc, and sdd.
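For reference, a minimal sketch of that volume group setup; the post gives no names, so vg_data is a made-up placeholder:

  # initialize the three hardware-RAID devices as LVM physical volumes
  pvcreate /dev/sdb /dev/sdc /dev/sdd
  # group them into one volume group (vg_data is a hypothetical name)
  vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd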
Case 1: Create a linear logical volume and run an IO test (sketched below).
Write throughput is 1520 MB/s with %util at 100; only sdd is writing data while sdb and sdc are idle.
This is the expected result, since a linear LV concatenates its PVs and writes land on one device at a time.
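A sketch of Case 1, reusing the hypothetical vg_data name above; the post does not say which IO tool was used, so the fio invocation and its parameters are assumptions:

  # linear LV: PVs are concatenated, so writes hit one PV at a time
  lvcreate --name lv_linear -l 100%FREE vg_data
  mkfs.xfs /dev/vg_data/lv_linear
  mkdir -p /mnt/test
  mount /dev/vg_data/lv_linear /mnt/test
  # sequential direct-IO write test; sizes and depths are illustrative only
  fio --name=seqwrite --filename=/mnt/test/testfile --rw=write --bs=1M \
      --size=16G --direct=1 --ioengine=libaio --iodepth=32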
Case 2: Create a striped logical volume with --type raid0 --stripes (sketched below).
Write throughput is only 547 MB/s and %util is around 85; sdb, sdc and sdd are all writing data.
A kernel thread named kworker/u128:2 reaches 100% CPU usage.
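A sketch of Case 2, again with the hypothetical vg_data name. The post truncates the --stripes value, so the 3 here (one stripe per PV) and the 256k stripe size are assumptions:

  # assumes the Case 1 LV was unmounted and removed first
  # raid0 LV via the dm-raid (MD) target; stripe count and size are assumed
  lvcreate --type raid0 --stripes 3 --stripesize 256k \
      --name lv_stripe -l 100%FREE vg_data
  mkfs.xfs /dev/vg_data/lv_stripe
  mount /dev/vg_data/lv_stripe /mnt/test

As a design note, --type raid0 routes IO through the kernel MD raid0 code wrapped by device-mapper, while LVM's older --type striped uses the dm-stripe target; whether that difference explains the single busy kworker observed here is not something the post establishes.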
The OS is RHEL 7.6 with an XFS filesystem.
Has anyone hit the same issue, and how can it be solved?
Thanks in advance.