Re: [XFS SUMMIT] SSD optimised allocation policy

On Thu, May 14, 2020 at 08:34:54PM +1000, Dave Chinner wrote:
> 
> Topic:	SSD Optimised allocation policies
> 
> Scope:
> 	Performance
> 	Storage efficiency
> 
> Proposal:
> 
> Non-rotational storage is typically very fast. Our allocation
> policies are all, fundamentally, based on very slow storage which
> has extremely high latency between IO to different LBA regions. We
> burn CPU to optimise for minimal seeks to minimise the expensive
> physical movement of disk heads and platter rotation.
> 
> We know when the underlying storage is solid state - there's a
> "non-rotational" field in the block device config that tells us the
> storage doesn't need physical seek optimisation. We should make use
> of that.
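
(Aside, for the archives: that flag is just the queue/rotational attribute
in sysfs, so a userspace tool like mkfs can check it with something as dumb
as the sketch below. Device name is made up and error handling is trimmed;
this is only to show where the information comes from.)

#include <stdio.h>

int main(void)
{
        const char *dev = "nvme0n1";    /* example device name */
        char path[64], buf[4] = "";
        FILE *fp;

        /* "0" here means the kernel flagged the device non-rotational */
        snprintf(path, sizeof(path), "/sys/block/%s/queue/rotational", dev);
        fp = fopen(path, "r");
        if (!fp || !fgets(buf, sizeof(buf), fp))
                return 1;
        printf("%s is %s\n", dev,
               buf[0] == '0' ? "non-rotational" : "rotational");
        return 0;
}
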
> 
> My proposal is that we look towards arranging the filesystem
> allocation policies into CPU-optimised silos. We start by making
> filesystems on SSDs with AG counts that are multiples of the CPU
> count in the system (e.g. 4x the number of CPUs) to drive

I guess you and I have been doing this for years with seemingly few ill
effects. ;)

That said, I did encounter a wackass system with 104 CPUs, a 1.4T RAID
array of spinning disks, 229 AGs sized ~6.5GB each, and a 50M log.  The
~900 IO writers were sinking the system, so clearly some people are still
getting it wrong even with traditional storage. :(

> parallelism at the allocation level, and then associate allocation
> groups with specific CPUs in the system. Hence each CPU has a set of
> allocation groups it selects between for the operations that are run
> on it. Hence allocation is typically local to a specific CPU.
> Optimisation proceeds from the basis of CPU locality optimisation,
> not storage locality optimisation.

I wonder how hard it would be to compile a locality map for storage and
CPUs from whatever numa and bus topology information the kernel already
knows about?
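
As a strawman for the CPU -> AG association itself, I'm picturing something
like the striped mapping below. This is a pure userspace toy; the names and
the policy are invented here just to show the shape of it, and the real
thing would need fallbacks for full AGs, CPU hotplug, and so on:

#include <stdio.h>
#include <unistd.h>

/* Toy model: agcount is a multiple of the online CPU count, and each
 * CPU "owns" a stride of AGs that it prefers for allocation. */
int main(void)
{
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        int ags_per_cpu = 4;                    /* e.g. 4x CPUs */
        long agcount = ncpus * ags_per_cpu;

        printf("%ld CPUs, %ld AGs\n", ncpus, agcount);
        for (long cpu = 0; cpu < ncpus; cpu++) {
                printf("cpu %2ld -> AGs", cpu);
                /* Each CPU selects only from its own AG set, so
                 * concurrent allocations on different CPUs never
                 * contend for the same AGs. */
                for (int i = 0; i < ags_per_cpu; i++)
                        printf(" %ld", cpu + i * ncpus);
                printf("\n");
        }
        return 0;
}
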

> What this allows is for processes on different CPUs to never contend for
> allocation resources. Locality of objects just doesn't matter for
> solid state storage, so we gain nothing by trying to group inodes,
> directories, their metadata and data physically close together. We
> want writes that happen at the same time to be physically close
> together so we aggregate them into larger IOs, but we really
> don't care about optimising write locality for best read performance
> (i.e. must be contiguous for sequential access) for this storage.
> 
> Further, we can look at faster allocation strategies - we don't need
> to find the "nearest" if we don't have a contiguous free extent to
> allocate into; we just want the one that costs the least CPU to
> find. This is because solid state storage is so fast that filesystem
> performance is CPU limited, not storage limited. Hence we need to
> think about allocation policies differently and start optimising
> them for minimum CPU expenditure rather than best layout.
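
To put a rough shape on "costs the least CPU to find": compare a
locality-style nearest-fit scan with a dumb first-acceptable scan over a
toy free extent list. Again, this is just an illustration I made up, not
the real by-size/by-bno btree code:

#include <stdio.h>
#include <stdlib.h>

/* Toy free space list: (start, length) extents, unsorted. */
struct ext { long start, len; };

static struct ext freesp[] = {
        { 100, 8 }, { 900, 32 }, { 250, 16 }, { 600, 64 },
};
#define NFREE   (sizeof(freesp) / sizeof(freesp[0]))

/* Locality-optimised: scan everything, pick the big-enough extent
 * nearest to 'target'. */
static int nearest_fit(long target, long len)
{
        int best = -1;
        long bestdist = 0;

        for (int i = 0; i < (int)NFREE; i++) {
                long dist;

                if (freesp[i].len < len)
                        continue;
                dist = labs(freesp[i].start - target);
                if (best < 0 || dist < bestdist) {
                        best = i;
                        bestdist = dist;
                }
        }
        return best;
}

/* CPU-optimised: take the first extent that is big enough, no
 * distance calculation, early exit. */
static int first_fit(long len)
{
        for (int i = 0; i < (int)NFREE; i++)
                if (freesp[i].len >= len)
                        return i;
        return -1;
}

int main(void)
{
        printf("nearest to 620: extent %d\n", nearest_fit(620, 16));
        printf("first fit:      extent %d\n", first_fit(16));
        return 0;
}
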
> 
> Other things to discuss include:
> 	- how do we convert metadata structures to write-once style
> 	  behaviour rather than overwrite in place?

(Hm?)

> 	- extremely large block sizes for metadata (e.g. 4MB) to
> 	  align better with SSD erase block sizes

If we had metadata blocks that size, I'd advocate for studying how we
could restructure the btree to log updates in the slack space and only
checkpoint lower in the tree when necessary.

> 	- what parts of the allocation algorithms don't we need

Brian reworked part of the allocator a couple of cycles ago to reduce
the long tail latency of chasing through one free space btree when the
other one would have given it a quick answer; how beneficial has that
been?  Could it be more aggressive?

(Will have to ponder allocation issues in more depth when I'm more
awake..)

> 	- are we better off with huge numbers of small AGs rather
> 	  than fewer large AGs?

There's probably some point of diminishing returns, but this seems
likely.  Has anyone studied this recently?

--D

> 
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx


