dm-userspace memory consumption in remap cache

Hi Dan,

I've been playing with a program which uses the libdmu/libdevmapper interface to map a block device through dm-userspace. (I haven't been using cowd; I'm looking to integrate dmu support into an existing program.)

I noticed that after I wrote 1 GB of data to a dmu device with a 4 KB blocksize, the dm-userspace-remaps slab cache consumed about 39 MB of memory. Looking at alloc_remap_atomic(), dmu makes no attempt to reuse dmu_maps until a memory allocation fails, so dmu could potentially force a large amount of data out of the page cache to make room for its maps.
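
For what it's worth, a rough back-of-envelope breakdown of those numbers (assuming every 4 KB chunk I wrote ended up with its own remap entry):

    1 GB / 4 KB per chunk    = ~262,144 remap entries
    39 MB / 262,144 entries  = ~156 bytes per entry (including slab overhead)

so the remap cache grows at roughly 4% of the data written, with no real upper bound short of allocation failure.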


I've considered some workarounds from the userspace side, but they all seem fairly suboptimal:

1. Periodically invalidate the entire table. When cowd does this right now (on SIGHUP), it invalidates each page individually, which is not very pleasant. I suppose it could instead be done by loading a new dm table (see the rough libdevmapper sketch after this list).

2. Periodically trigger block invalidations from userspace, driven either by the completion notification mechanism or by a timer. Userspace couldn't do this in an LRU fashion, since it doesn't see remap cache hits.

(As an aside, I haven't been able to figure out the semantics of the completion notification mechanism. Could you provide an example of how you expect it to be used from the userspace side?)

3. Map in dm-linear when there are large consecutive ranges, to try to keep the table size down. Some of the early dm-cow design notes mentioned this approach*, but I notice that the current cowd doesn't use it. Is this still a recommended procedure?
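
To make option 1 (and the dm-linear idea in option 3) concrete, here's roughly what I had in mind via libdevmapper. The function name, device names, sector offsets and especially the dm-userspace target name and parameter string below are placeholders; I haven't verified the exact table syntax, so treat this as a sketch rather than working code:

#include <stdint.h>
#include <libdevmapper.h>

/* Push a replacement table for an existing dmu device: large fully-remapped
 * ranges go to dm-linear, everything else stays with dm-userspace.  A
 * DM_DEVICE_RESUME task is still needed afterwards to activate the
 * inactive table. */
static int reload_table(const char *dm_name, uint64_t dev_sectors)
{
        struct dm_task *dmt;
        int r = 0;

        if (!(dmt = dm_task_create(DM_DEVICE_RELOAD)))
                return 0;
        if (!dm_task_set_name(dmt, dm_name))
                goto out;

        /* first 1 GB is already remapped 1:1 -> hand it to dm-linear
         * (2097152 sectors of 512 bytes) */
        if (!dm_task_add_target(dmt, 0, 2097152, "linear",
                                "/dev/mapper/backing 0"))
                goto out;

        /* remainder stays with dm-userspace; params string is made up */
        if (!dm_task_add_target(dmt, 2097152, dev_sectors - 2097152,
                                "userspace", "mykey /dev/mapper/backing"))
                goto out;

        r = dm_task_run(dmt);
out:
        dm_task_destroy(dmt);
        return r;
}

The dm-linear segments are the part I'm least sure about, since keeping that segment list in sync with what userspace has remapped adds bookkeeping of its own.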


From the kernel side: if the kernel's remap cache is expected to be a subset of the mapping information maintained by userspace, it should be possible to reuse the least-recently-used dmu_maps more aggressively. That would cost extra map requests to userspace, but I wonder how that penalty balances against having a larger page cache.
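
To illustrate the shape I'm imagining (this is not the existing dm-userspace code; all of the names and fields here are invented): cap the number of cached entries and recycle the oldest one once the cap is hit, rather than waiting for the slab allocator to fail.

#include <linux/slab.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct dmu_map_entry {
        struct list_head lru_node;      /* newest at the head of the list */
        /* sector, remap target, flags, etc. elided */
};

struct remap_cache {
        struct kmem_cache *slab;
        struct list_head lru;
        spinlock_t lock;
        unsigned int nr;                /* current number of cached entries */
        unsigned int max;               /* cap, e.g. a module parameter */
};

/* Return an entry for a new remap, recycling the least-recently-used one
 * once the cache has hit its cap instead of growing without bound. */
static struct dmu_map_entry *get_map_entry(struct remap_cache *rc)
{
        struct dmu_map_entry *e = NULL;

        spin_lock(&rc->lock);
        if (rc->nr >= rc->max && !list_empty(&rc->lru)) {
                e = list_entry(rc->lru.prev, struct dmu_map_entry, lru_node);
                list_del(&e->lru_node);
                /* would also need to unhash the old remap here */
        }
        spin_unlock(&rc->lock);

        if (!e) {
                e = kmem_cache_alloc(rc->slab, GFP_NOIO);
                if (e) {
                        spin_lock(&rc->lock);
                        rc->nr++;
                        spin_unlock(&rc->lock);
                }
        }
        return e;
}

The next I/O to a recycled chunk would then have to go back to userspace for its mapping, which is exactly the trade-off I mentioned above.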


Thoughts?

Thanks
--Benjamin Gilbert

* http://www.redhat.com/archives/dm-devel/2006-March/msg00013.html
