
File system optimized for very large file parallel HDD read

by
reb0rn
from LinuxQuestions.org on (#6GVTT)
I have 7 folders, each containing 4 GB files, 3.5 TB in total. They need to be read as fast as possible by 7 nodes at the same time; it's a Spacemesh project. (The HDD is used only for this and nothing else. The files were copied onto it, so there is no fragmentation, and they sit at the start of the disk, where it is fastest.)
My idea is to set up the file system or mount options to force the kernel to read chunks as large as possible before seeking the HDD to the next position.
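One knob that matches this idea (not mentioned in the original post, so this is a sketch) is the block-device read-ahead setting, which controls how much the kernel fetches sequentially before servicing the next stream. The device name `/dev/sdX` and the sizes below are assumptions:

```shell
# Read-ahead is set per block device, in 512-byte sectors.
# 262144 sectors = 128 MiB of read-ahead (value and device are placeholders).
blockdev --setra 262144 /dev/sdX

# The equivalent sysfs knob, in KiB:
echo 131072 > /sys/block/sdX/queue/read_ahead_kb

# Verify the current setting:
blockdev --getra /dev/sdX
```

With 7 concurrent readers, a large read-ahead means each seek is amortized over a much bigger contiguous transfer, which is exactly the "read big chunks before seeking" behavior described above.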
So far I have tested:
ext4 with a cluster size of 64 MB and 128 MB; I got similar results, so I am not sure it helped.
The total read time was about 9 h 30 min.
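For reference, the ext4 "cluster size" tested above is the bigalloc feature, which would have been enabled roughly like this (the device name is a placeholder):

```shell
# bigalloc groups blocks into large allocation clusters; -C sets the
# cluster size. This is a sketch; /dev/sdX1 is an assumed device.
mkfs.ext4 -O bigalloc -C 64M /dev/sdX1
```

bigalloc changes allocation granularity, not how much the kernel reads per request, which may explain why 64 MB and 128 MB clusters gave similar read times on files that were already contiguous.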

That means the HDD still seeks a lot, even though the data is at the start of the disk. If the folders were read one at a time, it would finish in about 3 h.
I am not sure how to tweak XFS and try it; any advice is welcome. Anything a bit faster would be fine, since the full read needs to complete in under 12 h.
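An XFS attempt could look like the following (a sketch only; the device, mount point, and sizes are assumptions): fewer allocation groups keep each file's extents in large contiguous regions, and an extent-size hint on the data directory makes allocations large for files copied into it afterwards.

```shell
# Fewer allocation groups -> larger contiguous regions per file.
# /dev/sdX1 is a placeholder device.
mkfs.xfs -d agcount=4 /dev/sdX1

# Mount, then set a 1 GiB extent size hint on the directory; new files
# created under it inherit the hint, so they are allocated in big extents.
mount /dev/sdX1 /mnt/plots
xfs_io -c "extsize 1g" /mnt/plots
```

The extent-size hint only affects files written after it is set, so it would need to be applied before copying the 3.5 TB of data onto the disk.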
External Content
Source RSS or Atom Feed
Feed Location https://feeds.feedburner.com/linuxquestions/latest
Feed Title LinuxQuestions.org
Feed Link https://www.linuxquestions.org/questions/