Files and Directories on an ext4 File System Have Totally Disappeared!
by firask317 from LinuxQuestions.org on (#52KE1)
Hello,
I have a Dell PowerEdge R730xd server connected via SAS to 2 x Dell PowerVault MD1200 HDD disk enclosures. The server uses a PERC H330 Mini RAID card configured with 3 logical volumes: one RAID 50 and two RAID 5.
The server OS is Ubuntu 16; Ubuntu sees the 3 logical volumes as /dev/sda (50 TB), /dev/sdh (20 TB), and /dev/sdi (20 TB). The drives are not partitioned; they are formatted directly with the ext4 filesystem and are used to store media files (mostly 200 - 500 MB video files).
Everything was working perfectly until a system administrator connected another RAID card and another external storage unit. He claimed that two days later he noticed the 3 aforementioned filesystems starting to lose files. One day after that, there was nothing at all on these 3 filesystems, and df -h showed them as empty! I am totally puzzled as to what could cause such strange data loss. Could anyone please let me know how this could happen? And why did it happen to all 3 filesystems at the same time, even though at least one of them is on different disks in a different disk enclosure?
I am sure the system administrator did not format the drives, because the superblocks show that the filesystems are old.
Please find the superblock of /dev/sda below:
Code:# dumpe2fs -h /dev/sda
dumpe2fs 1.42.13 (17-May-2015)
Filesystem volume name: <none>
Last mounted on: /var/www/cinema/f1a21737-0a5c-4dec-8bca-7bd4b431cb26
Filesystem UUID: f1a21737-0a5c-4dec-8bca-7bd4b431cb26
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 915619840
Block count: 14649917440
Reserved block count: 732495872
Free blocks: 14591575580
Free inodes: 915619829
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 2048
Inode blocks per group: 128
Flex block group size: 16
Filesystem created: Tue Dec 17 21:21:27 2019
Last mount time: Mon Apr 20 17:06:52 2020
Last write time: Mon Apr 20 23:34:34 2020
Mount count: 0
Maximum mount count: -1
Last checked: Mon Apr 20 21:27:00 2020
Check interval: 0 (<none>)
Lifetime writes: 27 TB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: d6fe4e5c-0cc3-480f-a7da-792ec3b582f0
Journal backup: inode blocks
Journal features: journal_incompat_revoke journal_64bit
Journal size: 128M
Journal length: 32768
Journal sequence: 0x00097f73
Journal start: 0
Running ls on the mount point of /dev/sda lists nothing but the "lost+found" directory. However, running debugfs -R "ls -l <2>" does show the original directories the filesystem had, but strangely all of them point to inode 0:
Code:# debugfs -R "ls -l <2>" /dev/sda | head -30
debugfs 1.42.13 (17-May-2015)
2 40777 (2) 0 0 172032 20-Apr-2020 16:50 .
2 40777 (2) 0 0 172032 20-Apr-2020 16:50 ..
11 40700 (2) 0 0 4096 20-Apr-2020 21:27 lost+found
0 0 (2) 0 0 0 ku0xmxlKqBRsWsg
0 0 (2) 0 0 0 tJ6GF0svYxcFCjl
0 0 (2) 0 0 0 iANgilZHGH2YDCW
0 0 (2) 0 0 0 J299WDX00H0A3kz
0 0 (2) 0 0 0 BSW35o0j0TtY393
0 0 (2) 0 0 0 nQGxOEpdeods8u3
0 0 (2) 0 0 0 nZ8nQKyv3YgxL6a
0 0 (2) 0 0 0 CYmulWH3h0wqpqk
0 0 (2) 0 0 0 qUEswopz36cXFgB
0 0 (2) 0 0 0 ajVXEftJB8XanMl
0 0 (2) 0 0 0 DFrEwIHn3UD87bU
0 0 (2) 0 0 0 45OMzmxW2aJQFGn
0 0 (2) 0 0 0 lsGRa4Il8YUS3Mh
0 0 (2) 0 0 0 ZQSZjkh0c2dcU4U
0 0 (2) 0 0 0 S2rIBrwSlvKMcLa
0 0 (2) 0 0 0 9nIyCfGrpA8UFAd
0 0 (2) 0 0 0 VoA3bjeI7UEdht1
0 0 (2) 0 0 0 VKSRRoSxJXInigd
0 0 (2) 0 0 0 zdCcJJ4c6Zljpyp
0 0 (2) 0 0 0 4X9DAH13ks5AbbY
0 0 (2) 0 0 0 1PnZjr0hPC1jo0S
0 0 (2) 0 0 0 yPmTwaaGiptRq4Y
0 0 (2) 0 0 0 z4iVO10xWHb9A14
0 0 (2) 0 0 0 LQ2gKqMxoOcpvAt
0 0 (2) 0 0 0 m1lW2aQIzYSNA2k
0 0 (2) 0 0 0 Hga3UMcDsXCx4Bw
0 0 (2) 0 0 0 8ynSbw9RovijVpB
Can anyone guess what could have happened? Do you think there is any way to recover the lost data? We are only interested in recovering the files if we can recover them with their original names and paths, so scanning the filesystem data blocks one by one is not an option.
I am pinning my hopes on the journal, but I do not know how to parse it to see whether I can use it to restore the files. Could you please suggest tools for that?
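For what it is worth, these are the e2fsprogs commands I have been practicing with on a small scratch image before touching the real array (the image path and size here are just placeholders; on the real system the device would be /dev/sda):

```shell
# Build a tiny ext4 image with a journal (no root needed) to practice on.
PATH="$PATH:/sbin:/usr/sbin"
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=16 status=none
mke2fs -q -t ext4 -O has_journal /tmp/scratch.img

# Root-directory listing, the same request as in the output above,
# run against the image instead of the live device.
listing=$(debugfs -R "ls -l <2>" /tmp/scratch.img 2>/dev/null)
echo "$listing"

# Dump the journal contents; against the real filesystem this would be
#   debugfs -R "logdump -a" /dev/sda > journal.txt
debugfs -R "logdump" /tmp/scratch.img 2>/dev/null | head -n 5
```

I have also seen ext4magic mentioned as a tool that tries to restore files with their names and paths from the ext4 journal, but I have not verified whether it copes with filesystems of this size.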
Many thanks in advance!
Firas

