Re: How to debug intermittent increasing md/inflight but no disk activity?

On Fri, Jul 12, 08:26, Dave Chinner wrote:
> On Thu, Jul 11, 2024 at 01:23:12PM +0200, Andre Noll wrote:
> > On Thu, Jul 11, 09:12, Dave Chinner wrote:
> > 
> > > > Of course it's not reproducible, but any insight into how to debug this
> > > > next time is most welcome.
> > > 
> > > Probably not a lot you can do short of reconfiguring your RAID6
> > > storage devices to handle small IOs better. However, in general,
> > > RAID6 /always sucks/ for small IOs, and the only way to fix this
> > > problem is to use high performance SSDs to give you a massive excess
> > > of write bandwidth to burn on write amplification....
> > 
> > FWIW, our approach to mitigate the write amplification suckage of large
> > HDD-backed raid6 arrays for small I/Os is to set up a bcache device
> > by combining such arrays with two small SSDs (configured as raid1).
> 
> Which is effectively the same sort of setup as having an NVRAM cache
> in front of the RAID6 volume (i.e. hardware RAID controller).

Yes, bcache is CacheVault on the cheap, with the additional benefit
that bcache tries to detect sequential I/O and bypass the cache for it.
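
For anyone curious, the setup is roughly the following. This is only a
sketch: the device names are examples, and the writeback mode and
sequential cutoff are the knobs we care about here:

    # mirror the two SSDs, then stack bcache on top of the RAID6 array
    # (assuming /dev/sda and /dev/sdb are the SSDs, /dev/md0 the array)
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    make-bcache -C /dev/md1 -B /dev/md0

    # absorb small writes on the SSDs, flush to the array later
    echo writeback > /sys/block/bcache0/bcache/cache_mode

    # I/O detected as sequential beyond this size bypasses the cache
    echo 4M > /sys/block/bcache0/bcache/sequential_cutoff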

> That can work if the cache is large enough to soak up bursts of
> small writes followed by enough idle time for the back-end RAID6
> device to do all its RMW cycles to clean the cache.
> 
> However, if the cache fills up with small writes, then slowdowns and
> IO latencies get even worse than if you are just using a plain RAID6
> device. Think about a cache with several million cached random 4kB
> writes, and how long that will take to flush to the RAID6 volume
> that might only be able to do 100 IOPS.

Indeed, we also see these stalls occasionally, especially under
mixed workloads where large file copies happen in parallel with heavy
metadata I/O such as a recursive chmod/chown. However, the stalls we
see are usually short: a couple of minutes at most, not hours.
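
When a stall does hit, it helps to confirm that it really is the cache
draining. Dave's numbers make the worst case concrete: one million 4kB
writes at 100 IOPS take roughly three hours to flush. Assuming the
cached device is bcache0 on top of md0, something like:

    # dirty data waiting to be written back, and the cache state
    cat /sys/block/bcache0/bcache/dirty_data
    cat /sys/block/bcache0/bcache/state

    # how fast the writeback thread is currently allowed to flush
    cat /sys/block/bcache0/bcache/writeback_rate

    # in-flight read/write requests on the backing array
    cat /sys/block/md0/inflight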

> Hence deploying a fast cache in front of a very slow drive is not
> exactly straightforward. Making it work reliably requires
> awareness of workload IO patterns. Special attention needs to be
> paid to the amount of idle time.

The problem is that knowing the I/O patterns might be too much to ask
for. In our case, many scientists use the servers at the same time,
and in very different ways. Some are experimenting with closed-source,
special-purpose software with unknown I/O characteristics, so the
workload and the I/O patterns are largely unpredictable and vary a lot.
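
That said, sampling the block layer at least gives a rough picture of
the current mix. For example (rareq-sz/wareq-sz are the average read
and write request sizes, aqu-sz the average queue depth):

    iostat -x md0 bcache0 1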

If people complain about slowness or high latencies, I usually
recommend writing to SSD-only scratch space first, then copying the
results over to the large HDD-backed arrays. Sometimes it's the
unsophisticated solutions that work best :)
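
In practice that amounts to something like the following (the paths
and the job itself are made up):

    # run the job against the fast scratch space first ...
    ./compute --output /scratch/$USER/run42

    # ... then move the results to the big array once the job is done
    rsync -a --remove-source-files /scratch/$USER/run42/ /data/run42/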

Thanks
Andre
-- 
Max Planck Institute for Biology
Tel: (+49) 7071 601 829
Max-Planck-Ring 5, 72076 Tübingen, Germany
http://people.tuebingen.mpg.de/maan/



