Re: when bringing dm-cache online, consumes all memory and reboots

On 2020-03-24 10:43, Zdenek Kabelac wrote:
By default we require the migration threshold to be at least 8 chunks big.
So with big chunks of 2 MiB in size, that gives you a required I/O threshold of 16 MiB.

So if you e.g. read 4 KiB from disk, it may cause an I/O load of a 2 MiB
chunk block promotion into cache - so you can see the math here...
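To spell out the arithmetic in the quote above (a sketch; the 2 MiB chunk size and the 8-chunk minimum are taken directly from Zdenek's example):

```shell
# 8-chunk minimum threshold with 2 MiB chunks, as in the quote above.
chunk_bytes=$((2 * 1024 * 1024))     # 2 MiB cache chunk
min_threshold=$((8 * chunk_bytes))   # lvm2's stated minimum: 8 chunks
echo "$((min_threshold / 1024 / 1024)) MiB"    # prints: 16 MiB

# A single 4 KiB read may promote its whole 2 MiB chunk into the cache:
echo "$((chunk_bytes / 4096))x amplification"  # prints: 512x amplification
```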

Hi Zdenek, I am not sure I follow your description of migration_threshold. From the dm-cache kernel doc:

"Migrating data between the origin and cache device uses bandwidth.
The user can set a throttle to prevent more than a certain amount of
migration occurring at any one time.  Currently we're not taking any
account of normal io traffic going to the devices.  More work needs
doing here to avoid migrating during those peak io moments.
For the time being, a message "migration_threshold <#sectors>"
can be used to set the maximum number of sectors being migrated,
the default being 2048 sectors (1MB)."

Can you explain in more detail what migration_threshold really accomplishes? Is it a "max bandwidth cap" setting, or something more?
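For reference, the threshold can be inspected and changed from userspace; a minimal sketch, assuming a cached LV named vg/cached_lv (the names are hypothetical). The dmsetup message form is the one the kernel doc quoted above describes; lvchange --cachesettings is the lvm2-level equivalent:

```shell
# Show the cache target status line, which reports the current
# migration_threshold among the policy settings:
dmsetup status vg-cached_lv

# Raise the throttle to 8192 sectors (4 MiB) directly, per the kernel doc:
dmsetup message vg-cached_lv 0 "migration_threshold 8192"

# Or do the same persistently through lvm2:
lvchange --cachesettings 'migration_threshold=8192' vg/cached_lv
```

(These commands require root and an actual cached LV, so treat them as a sketch rather than a paste-ready recipe.)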

If the main workload is to read the whole device over & over again, likely
no caching will enhance your experience and you may simply need fast whole
storage.

From what I understand, the OP wants to cache filesystem metadata to speed up rsync directory traversal. So a cache device should definitely be useful; although dm-cache is "blind" with regard to data vs metadata, the latter should be a good candidate for hotspot promotion.

For reference, I have a ZFS system used for exactly such a workload (backup with rsnapshot, which uses rsync and hardlinks to create deduplicated backups), and setting cache=metadata (rather than "all", i.e. data and metadata) gives a very noticeable boost to rsync traversal.
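A minimal sketch of the ZFS side; the dataset name tank/backups is hypothetical, and the actual property name is primarycache:

```shell
# Keep only filesystem metadata in the ARC for this dataset,
# so rsync's directory traversal stays cached while file data does not
# evict it:
zfs set primarycache=metadata tank/backups

# Verify the setting:
zfs get primarycache tank/backups
```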

Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it [1]
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8


_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



