I am planning to use btrfs on a 50 TB RAID6 array and I want to enable lzo compression.
This is for a bioinformatics setup where lots of seeking within large (1 TB -- 20 TB) files occurs. (The software reads only small chunks of data scattered across the file.)
What worries me is that I don't understand how seeking is performed on compressed filesystems like btrfs. Does the file need to be decompressed from the beginning up to the sought-after position first? That would have a huge negative impact on my setup.
Or, a more general question: does seek time scale with file size the same way as on a non-compressed filesystem, or does it get worse, e.g. O(file_length)?