On 01/13/2015 02:13 PM, Carsten Aulbert wrote:
> Hi Stan
>
> On 01/13/2015 09:06 PM, Stan Hoeppner wrote:
>> This workload seems more suited to a database than a filesystem.
>> Though surely you've already considered such, and chose not to go
>> that route.
>
> Yep, but as we do not fully control the server software and need to
> do further work on the binary blobs as they arrive, a database is not
> that well suited for it either. But yes, we looked into it (and ran
> MySQL, MariaDB, Cassandra, MongoDB, PostgreSQL, ...).
>
>> With high fragmentation you get lots of seeking. What model disks
>> are these? What is your RAID10 geometry? Are your partitions
>> properly aligned to that geometry, and to the drives (512n/512e)?
>
> Disks are 2TB Hitachi SATA drives (Ultrastar, HUA722020ALA330). As
> these are some years old, they are native 512-byte ones. They are
> connected via an Areca 1261ML controller with a Supermicro backplane.
>
> RAID striping is not ideal (128 KByte per member disk) and thus our
> xfs layout is not ideal either. Things we plan to change with the
> next attempt ;)

With your file sizes that seems a recipe for hotspots. What do the
controller metrics tell you about IOs per drive and bytes per drive?
Are they balanced?

> After the arrival of "advanced format" HDDs and SSDs we usually try
> to align everything to a full 1 MByte or larger, just to be sure any
> combination of 512B, 4KB, ... will eventually align :)

It's not that simple with striping. Partitions need to start and end
on stripe boundaries, not simply at multiples of 4KB or 1MB as with
single disks. If you use a non-power-of-2 drive count in a stripe,
aligning at multiples of 1MB will screw ya, e.g. 12 drives * 64KB is a
768KB stripe (see the quick check in the P.S. below).

Stan
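
P.S. For anyone who wants to sanity check their own geometry, here is
a minimal Python sketch. The 64KB chunk and 12-drive count are just
the hypothetical numbers from the example above, not anyone's actual
layout:

    # Check whether a byte offset lands on a full-stripe boundary.
    # Assumes you know the per-disk chunk size (stripe unit) and the
    # number of data-bearing drives in the stripe.
    def stripe_aligned(offset_bytes, chunk_kib, data_drives):
        stripe_bytes = chunk_kib * 1024 * data_drives
        return offset_bytes % stripe_bytes == 0

    # 12 drives * 64 KiB chunk = 768 KiB stripe.
    # A partition starting at 1 MiB is NOT stripe aligned:
    print(stripe_aligned(1 * 1024**2, 64, 12))   # False
    # ...but one starting at 3 MiB (the least common multiple of
    # 768 KiB and 1 MiB) is:
    print(stripe_aligned(3 * 1024**2, 64, 12))   # True

The same two numbers should feed mkfs.xfs as well: su= takes the
per-disk chunk size and sw= the number of data-bearing stripe members,
so both need revisiting whenever the RAID geometry changes.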