BTRFS raid1 corrupt leaf error?
by tjallen from LinuxQuestions.org on (#5H183)
All,
I have a RockPro64 NAS with two identical 8 TB drives in a BTRFS raid1 configuration on top of LUKS. I'm running Open Media Vault, aarch64. Every so often, generally once after each rsync backup over the network, I get kernel messages about a corrupt leaf in /dev/mapper/sda-crypt like this:
Code:
[Sat Apr 24 17:20:35 2021] BTRFS critical (device dm-1): corrupt leaf, non-root leaf's nritems is 0: block=3001724190720, root=1752, slot=0
[Sat Apr 24 17:20:35 2021] BTRFS info (device dm-1): leaf 3001724190720 total ptrs 0 free space 16283
though each time at a higher block number and with a different root.
The filesystem always mounts without errors, so it seems as though any corruption has been fixed. A scrub never finds any errors, but I understand this kind of error would be invisible to a scrub anyway. I've tried different SATA cables and a different SATA adapter, but the errors keep appearing. Both drives report as healthy under smartctl -a, and they have very few power-on hours.
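For reference, these are roughly the checks I've been running; the mount point /srv/nas below is just a stand-in for mine, and the device nodes may differ on other systems:
Code:
# start a scrub on the mounted filesystem and check its progress/result
btrfs scrub start /srv/nas
btrfs scrub status /srv/nas

# SMART health summary for each underlying drive
smartctl -a /dev/sda
smartctl -a /dev/sdb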
I also get messages such as
Code:
[Sat Apr 24 13:46:48 2021] BTRFS info (device dm-1): bdev /dev/mapper/sda-crypt errs: wr 274, rd 0, flush 2, corrupt 0, gen 29
after mounting the filesystem, but only for /dev/mapper/sda-crypt, never for /dev/mapper/sdb-crypt.
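In case it's useful, those per-device counters can also be read directly with btrfs device stats, and zeroed so it's obvious whether the write/flush errors are still accumulating (again, /srv/nas is just a placeholder for my mount point):
Code:
# show per-device error counters for the filesystem
btrfs device stats /srv/nas

# zero the counters (as root) so any new errors stand out after the next rsync run
btrfs device stats -z /srv/nas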
I'm going to try another drive power supply next, but only because that's a quick check. I also have another identical new drive and could swap it in and rebuild the RAID, but the problem may not be the drive.
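If I do end up swapping the drive, my rough plan would be something like the following with btrfs replace; the device names and mount point are placeholders, and the new drive would get its own LUKS container first:
Code:
# after setting up LUKS on the new drive (opened here as /dev/mapper/sdc-crypt),
# replace the suspect device while the filesystem stays mounted
btrfs replace start /dev/mapper/sda-crypt /dev/mapper/sdc-crypt /srv/nas
btrfs replace status /srv/nas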
Has anyone seen an error like this before? Any advice?

