Huge inxi/pinxi upgrade: new features, logical volumes, RAID rewrite, beta testers?
by h2-1 from LinuxQuestions.org on (#5BNDH)
I've largely completed the big refactor of large chunks of pinxi/inxi. This cleaned up a LOT of bad Perl stuff I was doing, optimized and sped up the code, and fixed some long-standing issues that were not fixable before this refactor. Unless bugs are found, this will fairly soon become inxi 3.2.00, the long-awaited logical volume handling release, plus too many small and large new features, bug fixes, and improvements to list (the changelog currently runs about 375 lines).
RAM stick reports were greatly improved thanks to a timely issue an inxi user posted: they now pick the actual speed, not the specified speed, and also look for logical absurdities in the speeds and note them.
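For anyone curious where those two speeds come from: the DMI tables expose both a rated speed and a running speed per stick. A quick way to eyeball the values yourself (this is just an illustration, not necessarily how pinxi reads them):
Code:
sudo dmidecode -t memory | grep 'Speed'
On most systems this prints both the "Speed:" (rated) and "Configured Memory Speed:" (actual) lines per memory device, which is exactly the pair the new logic compares.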
As always, if you already have pinxi (the development version of inxi) installed, update it with:
pinxi -U
and to install it:
cd /usr/local/bin && sudo wget -O pinxi smxi.org/pinxi && sudo chmod +x pinxi
The original causes for this rewrite, besides covid boredom, were two long-standing issues: one, disk used totals being wrong if RAID is present, and two, logical volume display, which is now the -L/--logical option.
Most of the block device logic had to be either refactored or rewritten to make these updates possible, but I also took the opportunity to update a lot of the codebase; last I checked, pinxi differs from stable inxi by about 4500 lines, ouch!! RAID required a full refactor to get this new logic working as well, since it was stuck on a messy hack that only supported mdraid/zfs, and supported them badly, I think.
So the potential for bugs is real. Bugs can manifest as, for example, N/A showing instead of 0 for the storage or used sizes of various features.
For systems with LVM (root required, sadly), ZFS, or mdraid, pinxi/inxi now creates two totals, raw: and usable:. The usable: total is the actual storage space you can use on the system, meaning each RAID's own size is counted in place of the sizes of its component devices. The disk used: value now always refers to the percentage of the usable total. For LVM systems, pinxi will also show lvm-free: for cases where the volume groups have free space.
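To make the raw: versus usable: arithmetic concrete, here's a toy example, with made-up numbers and not inxi's actual code: a plain 100 GiB disk plus a RAID1 mirror of two 8 GiB members, where the array itself offers 8 GiB:
Code:
plain=100 array=8                  # GiB, hypothetical numbers
raw=$(( plain + 8 + 8 ))           # every block device counts: 116 GiB
usable=$(( raw - 8 - 8 + array ))  # member sizes swapped for array size: 108 GiB
echo "raw: ${raw} GiB usable: ${usable} GiB"
A hypothetical used: of 54 GiB would then report as 50%, that is, of the 108 GiB usable, not of the 116 GiB raw.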
The examples below were made to be deliberately silly, mainly to test various scenarios to catch bugs and make sure all the features work. I realize (and hope) most people don't actually do something like this, lol, but if you do, seeing the output of pinxi -L, -Lxxy, and -Lay would be great, because I do not have enough test systems not generated by me to be sure I haven't missed anything, obvious or otherwise.
Thanks for checking it out.
For regular pinxi features, the most useful options to run are pinxi alone, pinxi -b, pinxi -F, and pinxi -v8, with -z added to filter the output if you are going to post it.
Here's a sample of the short and long forms of --logical from a test system I set up to create complicated LUKS/LVM/bcache scenarios and to verify the logic works as intended (a rough sketch of how a similar stack might be assembled follows the output below).
Code:
# pinxi -Lxxy
Logical:
Device-1: VG: vg1 type: LVM2 size: 15.62 GiB free: 0 KiB
LV-1: split1 type: linear dm: dm-0 size: 7.81 GiB
Components: p-1: sdb1
LV-2: split2 type: linear dm: dm-1 size: 7.81 GiB
Components: p-1: sdb2
Device-2: VG: vg2 type: LVM2 size: 7.79 GiB free: 0 KiB
LV-1: top1 type: linear dm: dm-3 size: 3.89 GiB
Components: c-1: dm-2 cc-1: bcache0 ccc-1: md0 cccc-1: dm-0 ppppp-1: sdb1
cccc-2: dm-1 ppppp-1: sdb2 ppp-1: sdb3
LV-2: top2 type: linear dm: dm-4 size: 3.89 GiB
Components: c-1: dm-2 cc-1: bcache0 ccc-1: md0 cccc-1: dm-0 ppppp-1: sdb1
cccc-2: dm-1 ppppp-1: sdb2 ppp-1: sdb3
Device-3: extreme type: LUKS dm: dm-2 size: 7.79 GiB
Components: c-1: bcache0 cc-1: md0 ccc-1: dm-0 pppp-1: sdb1 ccc-2: dm-1
pppp-1: sdb2 pp-1: sdb3
Device-4: bcache0 type: bcache size: 7.8 GiB
Components: c-1: md0 cc-1: dm-0 ppp-1: sdb1 cc-2: dm-1 ppp-1: sdb2 p-1: sdb3
# pinxi -Ly
Logical:
Device-1: VG: vg1 type: LVM2 size: 15.62 GiB free: 0 KiB
LV-1: split1 type: linear size: 7.81 GiB Components: p-1: sdb1
LV-2: split2 type: linear size: 7.81 GiB Components: p-1: sdb2
Device-2: VG: vg2 type: LVM2 size: 7.79 GiB free: 0 KiB
LV-1: top1 type: linear size: 3.89 GiB Components: c-1: dm-2
LV-2: top2 type: linear size: 3.89 GiB Components: c-1: dm-2
Device-3: extreme type: LUKS size: 7.79 GiB Components: c-1: bcache0
Device-4: bcache0 type: bcache size: 7.8 GiB Components: c-1: md0 p-1: sdb3

For more verbose output, the -y1 option is sometimes a good choice, for key: value paired lines:
Code:
pinxi -Lay1
Logical:
Device-1:
VG: vg1
type: LVM2
size: 15.62 GiB
free: 0 KiB
LV-1: split1
maj-min: 253:0
type: linear
dm: dm-0
size: 7.81 GiB
Components:
p-1: sdb1
maj-min: 8:17
size: 7.81 GiB
LV-2: split2
maj-min: 253:1
type: linear
dm: dm-1
size: 7.81 GiB
Components:
p-1: sdb2
maj-min: 8:18
size: 7.81 GiB
Device-2:
VG: vg2
type: LVM2
size: 7.79 GiB
free: 0 KiB
LV-1: top1
maj-min: 253:3
type: linear
dm: dm-3
size: 3.89 GiB
Components:
c-1: dm-2
maj-min: 253:2
mapped: extreme
size: 7.79 GiB
cc-1: bcache0
maj-min: 252:0
size: 7.8 GiB
ccc-1: md0
maj-min: 9:0
size: 7.8 GiB
cccc-1: dm-0
maj-min: 253:0
mapped: vg1-split1
size: 7.81 GiB
ppppp-1: sdb1
maj-min: 8:17
size: 7.81 GiB
cccc-2: dm-1
maj-min: 253:1
mapped: vg1-split2
size: 7.81 GiB
ppppp-1: sdb2
maj-min: 8:18
size: 7.81 GiB
ppp-1: sdb3
maj-min: 8:19
size: 4.39 GiB
LV-2: top2
maj-min: 253:4
type: linear
dm: dm-4
size: 3.89 GiB
Components:
c-1: dm-2
maj-min: 253:2
mapped: extreme
size: 7.79 GiB
cc-1: bcache0
maj-min: 252:0
size: 7.8 GiB
ccc-1: md0
maj-min: 9:0
size: 7.8 GiB
cccc-1: dm-0
maj-min: 253:0
mapped: vg1-split1
size: 7.81 GiB
ppppp-1: sdb1
maj-min: 8:17
size: 7.81 GiB
cccc-2: dm-1
maj-min: 253:1
mapped: vg1-split2
size: 7.81 GiB
ppppp-1: sdb2
maj-min: 8:18
size: 7.81 GiB
ppp-1: sdb3
maj-min: 8:19
size: 4.39 GiB
Device-3: extreme
maj-min: 253:2
type: LUKS
dm: dm-2
size: 7.79 GiB
Components:
c-1: bcache0
maj-min: 252:0
size: 7.8 GiB
cc-1: md0
maj-min: 9:0
size: 7.8 GiB
ccc-1: dm-0
maj-min: 253:0
mapped: vg1-split1
size: 7.81 GiB
pppp-1: sdb1
maj-min: 8:17
size: 7.81 GiB
ccc-2: dm-1
maj-min: 253:1
mapped: vg1-split2
size: 7.81 GiB
pppp-1: sdb2
maj-min: 8:18
size: 7.81 GiB
pp-1: sdb3
maj-min: 8:19
size: 4.39 GiB
Device-4: bcache0
maj-min: 252:0
type: bcache
size: 7.8 GiB
Components:
c-1: md0
maj-min: 9:0
size: 7.8 GiB
cc-1: dm-0
maj-min: 253:0
mapped: vg1-split1
size: 7.81 GiB
ppp-1: sdb1
maj-min: 8:17
size: 7.81 GiB
cc-2: dm-1
maj-min: 253:1
mapped: vg1-split2
size: 7.81 GiB
ppp-1: sdb2
maj-min: 8:18
size: 7.81 GiB
p-1: sdb3
maj-min: 8:19
size: 4.39 GiB
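The stack in that output is intentionally layered: LVM under mdraid under bcache under LUKS under LVM again. For anyone who wants to reproduce a broadly similar setup to test against, here is a rough, untested sketch of how such a stack might be assembled; all device names and sizes are hypothetical, and this is not the script used to build the system above:
Code:
# partitions -> LVM -> mdraid -> bcache -> LUKS -> LVM
pvcreate /dev/sdb1 /dev/sdb2
vgcreate vg1 /dev/sdb1 /dev/sdb2
lvcreate -L 7.8G -n split1 vg1
lvcreate -L 7.8G -n split2 vg1
# mirror the two LVs (the system above also feeds sdb3 into md0)
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
  /dev/vg1/split1 /dev/vg1/split2
make-bcache -B /dev/md0               # registers as bcache0
cryptsetup luksFormat /dev/bcache0
cryptsetup open /dev/bcache0 extreme  # mapped name 'extreme'
pvcreate /dev/mapper/extreme
vgcreate vg2 /dev/mapper/extreme
lvcreate -L 3.9G -n top1 vg2
lvcreate -L 3.9G -n top2 vg2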

