Re: Several bugs/flaws in the current(?) bcache implementation

Hello,

On Tue, 12 Nov 2019 08:54:21 +0000 Tim Small wrote:

> On 12/11/2019 06:39, Christian Balzer wrote:
> >> From internal
> >> customers and external users, the feedback on the maximum writeback rate
> >> has been quite positive. This is the first time I have realized that not
> >> everyone wants it.
> >>  
> > The full-speed (1TB/s) rate results in initially high throughput (up to
> > 280MB/s) in most tests, but degrades (and causes load spikes -> alarms)
> > later on, often ending up taking LONGER than if it had stuck with the
> > configured 4MB/s minimum rate.
> > So yes, in my case something like a 32MB/s maximum rate would probably
> > be perfect.
> 
> 
> I have some backup/archival-targeted "drive-managed" SMR drives which
> include a non-SMR magnetic storage cache area, which can cause this sort
> of behaviour.
> 
SMR! (makes signs to ward off evil! :)

> Sustained random writes make the drives fill their cache, and then
> performance falls off a cliff, since the drive must start making many
> read-modify-write passes in the SMR area.
> 
But yes, it's a decent enough analogy to a RAID controller with a HW cache
backed by RAID6 on HDDs.
And every I/O system with caches experiences that cliff (all is great
until it totally goes to hell in a handbasket), thus my hope to avoid
hitting it needlessly.
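For reference, the rate floor I mention is tunable per backing device via
sysfs; a minimal sketch, assuming the device is registered as bcache0 (the
paths are the standard bcache sysfs knobs, but the values here are just my
illustration, not a recommendation):

```shell
# Paths assume the backing device shows up as /dev/bcache0.

# Inspect the current writeback controller state (rate, dirty data, etc.):
cat /sys/block/bcache0/bcache/writeback_rate_debug

# Raise the minimum background writeback rate
# (units: 512-byte sectors per second; 8192 ~= 4MB/s):
echo 8192 > /sys/block/bcache0/bcache/writeback_rate_minimum

# Dirty-data percentage the PD controller aims for (default 10):
echo 10 > /sys/block/bcache0/bcache/writeback_percent
```

What the thread is missing is the symmetric knob, i.e. a maximum rate to
clamp the controller from above.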

> e.g. this latency result:
> 
> https://www.storagereview.com/images/seagate_archive_8tb_sata_main_4kwrite_avglatency.png
> 
> (taken from https://www.storagereview.com/node/4665) - which illustrates
> performance after the drive's non-SMR internal write cache area is full.
> 
> There is somewhat similar behaviour from some SSDs (plus the additional
> potential problem of thermal throttling from sustained writes, and other
> internal house-keeping operations):
> 
> https://www.tweaktown.com/image.php?image=images.tweaktown.com/content/8/8/8875_005_samsung-970-evo-plus-ssd-review-96-layer-refresh_full.png
> 
> 
> Perhaps bcache could monitor backing-store write latency and back off to
> avoid this condition?
> 
DRBD does a decent job in that area, and while this sounds good, I'm always
worried about needless complexity in things that should be very simple (and
thus less error-prone) and fast.
And since bcache is supposed to speed things UP, a complex code path may
also prove counterproductive, as can be seen in projects like Ceph.
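That said, the control loop itself needn't be large. A hypothetical sketch
of the latency-feedback idea (none of these names come from bcache, and the
thresholds are illustrative): back the rate off multiplicatively when a
backing-store write latency sample spikes, recover additively otherwise,
and clamp between the floor and ceiling discussed above:

```python
# Hypothetical AIMD-style writeback backoff -- not bcache code, just an
# illustration of the proposed mechanism. Thresholds are made up.

def adjust_rate(rate_kbps, latency_ms,
                latency_target_ms=50.0,
                min_rate_kbps=4096,      # the 4MB/s floor from the thread
                max_rate_kbps=32768,     # the suggested 32MB/s ceiling
                step_kbps=1024):
    """Return the next writeback rate given one latency sample."""
    if latency_ms > latency_target_ms:
        # Backing store is struggling (e.g. SMR cache full): halve the rate.
        rate_kbps //= 2
    else:
        # Latency is fine: creep back up additively.
        rate_kbps += step_kbps
    return max(min_rate_kbps, min(rate_kbps, max_rate_kbps))

# A latency spike cuts the rate quickly...
assert adjust_rate(32768, latency_ms=200.0) == 16384
# ...but never below the configured minimum,
assert adjust_rate(4096, latency_ms=500.0) == 4096
# and recovery is gradual, capped at the maximum.
assert adjust_rate(32768, latency_ms=10.0) == 32768
```

The multiplicative-decrease/additive-increase shape is what keeps the loop
both simple and stable, which is exactly the trade-off I'd want here.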

Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Rakuten Mobile Inc.


