Re: [PATCH 19/17] mkfs: increase default log size for new (aka bigtime) filesystems

On Fri, Feb 25, 2022 at 06:54:50PM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> Recently, the upstream kernel maintainer has been taking a lot of heat on
> account of writer threads encountering high latency when asking for log
> grant space when the log is small.  The reported use case is a heavily
> threaded indexing product logging trace information to a filesystem
> ranging in size between 20 and 250GB.  The meetings that result from the
> complaints about latency and stall warnings in dmesg both from this use
> case and also a large well known cloud product are now consuming 25% of
> the maintainer's weekly time and have been for months.

Is the transaction reservation space exhaustion caused by, as I
pointed out in another thread yesterday, the unbounded concurrency in
IO completion? i.e. we have hundreds of active concurrent
transactions that then block on common objects between them (e.g.
inode locks) and serialise? Hence only a handful of completions can
actually run concurrently, despite every completion holding a full
reservation of log space to allow them to run concurrently?
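To put rough numbers on that picture (entirely illustrative - the
per-transaction reservation size below is a made-up placeholder, not
a real tr_* value), a back-of-the-envelope sketch shows how quickly a
pile of in-flight completions can pin more grant space than a small
log even has:

#include <stdio.h>

int main(void)
{
	/*
	 * Illustration only: ~200 writer threads as in the report,
	 * each completion holding a hypothetical 64KiB reservation,
	 * against the 10MB log from the smallest test case quoted
	 * below.
	 */
	unsigned long long log_bytes = 10ULL << 20;
	unsigned long long tres_bytes = 64ULL << 10;
	unsigned int completions = 200;

	printf("reserved: %lluMB vs log: %lluMB\n",
	       (completions * tres_bytes) >> 20, log_bytes >> 20);
	return 0;
}

200 completions at 64KiB each is already ~12MB of grant space wanted
against a 10MB log, so everything behind them stalls waiting for
grant space to free up.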

> For small filesystems, the log is small by default because we have
> defaulted to a ratio of 1:2048 (or even less).  For grown filesystems,
> this is even worse, because big filesystems generate big metadata.
> However, the log size is still insufficient even if it is formatted at
> the larger size.
> 
> Therefore, if we're writing a new filesystem format (aka bigtime), bump
> the ratio unconditionally from 1:2048 to 1:256.  On a 220GB filesystem,
> the 99.95% latencies observed with a 200-writer file synchronous append
> workload running on a 44-AG filesystem (with 44 CPUs) spread across 4
> hard disks showed:
> 
> Log Size (MB)	Latency (ms)	Throughput (MB/s)
> 10		520		243
> 20		220		308
> 40		140		360
> 80		92		363
> 160		86		364
> 
> For 4 NVMe drives, the results were:
> 
> Log Size (MB)	Latency (ms)	Throughput (MB/s)
> 10		201		409
> 20		177		488
> 40		122		550
> 80		120		549
> 160		121		545
> 
> Hence we increase the ratio by 16x because there doesn't seem to be much
> improvement beyond that, and we don't want the log to grow /too/ large.

1:2048 -> 1:256 is an 8x bump, yes?  Which means we'll get a 2GB log
on a 512GB filesystem, and the 220GB filesystem you tested is getting
a ~1GB log?
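(Throwaway arithmetic check, nothing more; binary GB->MB conversion,
which is close enough for this purpose:)

#include <stdio.h>

/* Default log size in MB for a given fs size and data:log ratio. */
static unsigned long long log_mb(unsigned long long fs_gb,
				 unsigned int ratio)
{
	return (fs_gb << 10) / ratio;
}

int main(void)
{
	printf("220GB: 1:2048 -> %lluMB, 1:256 -> %lluMB\n",
	       log_mb(220, 2048), log_mb(220, 256));
	printf("512GB: 1:2048 -> %lluMB, 1:256 -> %lluMB\n",
	       log_mb(512, 2048), log_mb(512, 256));
	return 0;
}

That gives 110MB -> 880MB for the 220GB case and 256MB -> 2048MB for
512GB, which is where the ~1GB and 2GB figures above come from.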

I also wonder if the right thing to do here is just set a minimum
log size of 32MB? The worst of the long-tail latencies is mitigated
by that point, and so even small filesystems grown out to 200GB will
have a log size that results in decent performance for this sort of
workload. See the sketch below.
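As a sketch of what I mean (illustrative only, not the actual
mkfs.xfs log sizing code), something along these lines:

#include <stdio.h>

#define MIN_LOG_MB	32ULL		/* proposed floor */
#define DATA_LOG_RATIO	2048ULL		/* existing 1:2048 default */

static unsigned long long default_log_mb(unsigned long long fs_gb)
{
	unsigned long long mb = (fs_gb << 10) / DATA_LOG_RATIO;

	return mb < MIN_LOG_MB ? MIN_LOG_MB : mb;
}

int main(void)
{
	unsigned long long sizes_gb[] = { 20, 100, 220, 512 };

	for (unsigned int i = 0; i < 4; i++)
		printf("%lluGB fs -> %lluMB log\n",
		       sizes_gb[i], default_log_mb(sizes_gb[i]));
	return 0;
}

Small filesystems get the 32MB floor, while the existing ratio still
takes over once the filesystem is big enough (a bit over 64GB) to
want more than that anyway.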

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


