The patch titled
     Subject: cleancache-remove-limit-on-the-number-of-cleancache-enabled-filesystems-fix
has been added to the -mm tree.  Its filename is
     cleancache-remove-limit-on-the-number-of-cleancache-enabled-filesystems-fix.patch

This patch should soon appear at
     http://ozlabs.org/~akpm/mmots/broken-out/cleancache-remove-limit-on-the-number-of-cleancache-enabled-filesystems-fix.patch
and later at
     http://ozlabs.org/~akpm/mmotm/broken-out/cleancache-remove-limit-on-the-number-of-cleancache-enabled-filesystems-fix.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
Subject: cleancache-remove-limit-on-the-number-of-cleancache-enabled-filesystems-fix

I admit the synchronization between cleancache_register_ops and
cleancache_init_fs is far from obvious.  I should have updated the comment
instead of merely dropping it, sorry.  What about the following patch, which
proves the correctness of the register_ops-vs-init_fs synchronization?  It is
meant to be applied incrementally on top of patch #4.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Cc: David Vrabel <david.vrabel@xxxxxxxxxx>
Cc: Mark Fasheh <mfasheh@xxxxxxxx>
Cc: Joel Becker <jlbec@xxxxxxxxxxxx>
Cc: Stefan Hengelein <ilendir@xxxxxxxxxxxxxx>
Cc: Florian Schmaus <fschmaus@xxxxxxxxx>
Cc: Andor Daam <andor.daam@xxxxxxxxxxxxxx>
Cc: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Cc: Bob Liu <lliubbo@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/cleancache.c |   51 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff -puN mm/cleancache.c~cleancache-remove-limit-on-the-number-of-cleancache-enabled-filesystems-fix mm/cleancache.c
--- a/mm/cleancache.c~cleancache-remove-limit-on-the-number-of-cleancache-enabled-filesystems-fix
+++ a/mm/cleancache.c
@@ -54,6 +54,57 @@ int cleancache_register_ops(struct clean
 	if (cmpxchg(&cleancache_ops, NULL, ops))
 		return -EBUSY;
 
+	/*
+	 * A cleancache backend can be built as a module and hence loaded after
+	 * a cleancache-enabled filesystem has called cleancache_init_fs. To
+	 * handle such a scenario, here we call ->init_fs or ->init_shared_fs
+	 * for each active super block. To differentiate between local and
+	 * shared filesystems, we temporarily initialize sb->cleancache_poolid
+	 * to CLEANCACHE_NO_BACKEND or CLEANCACHE_NO_BACKEND_SHARED
+	 * respectively in case there is no backend registered at the time
+	 * cleancache_init_fs or cleancache_init_shared_fs is called.
+	 *
+	 * Since filesystems can be mounted concurrently with cleancache
+	 * backend registration, we have to be careful to guarantee that all
+	 * cleancache-enabled filesystems that have been mounted by the time
+	 * cleancache_register_ops is called have got, and all those mounted
+	 * later will get, a cleancache_poolid. This is ensured by the
+	 * following statements tied together:
+	 *
+	 * a) iterate_supers skips only those super blocks that have started
+	 *    ->kill_sb
+	 *
+	 * b) if iterate_supers encounters a super block that has not finished
+	 *    ->mount yet, it waits until it is finished
+	 *
+	 * c) cleancache_init_fs is called from ->mount and
+	 *    cleancache_invalidate_fs is called from ->kill_sb
+	 *
+	 * d) we call iterate_supers after cleancache_ops has been set
+	 *
+	 * From a) it follows that if iterate_supers skips a super block, then
+	 * either the super block is already dead, in which case we do not need
+	 * to bother initializing cleancache for it, or it was mounted after we
+	 * initiated iterate_supers. In the latter case, it must have seen
+	 * cleancache_ops set according to d) and initialized cleancache from
+	 * ->mount by itself according to c). This proves that we call
+	 * ->init_fs at least once for each active super block.
+	 *
+	 * From b) and c) it follows that if iterate_supers encounters a super
+	 * block that has already started ->init_fs, it will wait until ->mount,
+	 * and hence ->init_fs, has finished, then check cleancache_poolid, see
+	 * that it has already been set and therefore do nothing. This proves
+	 * that we call ->init_fs no more than once for each super block.
+	 *
+	 * Taken together, the last two paragraphs prove the correctness of
+	 * this function.
+	 *
+	 * Note that various cleancache callbacks may proceed before this
+	 * function is called or even concurrently with it, but since
+	 * CLEANCACHE_NO_BACKEND is negative, they will all result in a no-op
+	 * until the corresponding ->init_fs has actually been called and
+	 * cleancache_ops has been set.
+	 */
 	iterate_supers(cleancache_register_ops_sb, NULL);
 	return 0;
 }
_

Patches currently in -mm which might be from vdavydov@xxxxxxxxxxxxx are

mm-memcontrol-use-max-instead-of-infinity-in-control-knobs.patch
mm-hotplug-fix-concurrent-memory-hot-add-deadlock.patch
ocfs2-copy-fs-uuid-to-superblock.patch
cleancache-zap-uuid-arg-of-cleancache_init_shared_fs.patch
cleancache-forbid-overriding-cleancache_ops.patch
cleancache-remove-limit-on-the-number-of-cleancache-enabled-filesystems.patch
cleancache-remove-limit-on-the-number-of-cleancache-enabled-filesystems-fix.patch
mm-vmscan-fix-the-page-state-calculation-in-too_many_isolated.patch
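
For illustration only, the handshake the comment above argues about can be
modelled in plain userspace C.  The sketch below is not kernel code and is
not part of the patch: fake_sb, fake_ops, demo_init_fs and demo_register_ops
are made-up names standing in for super_block, cleancache_ops,
cleancache_init_fs and cleancache_register_ops, and the single-threaded
check in the sweep loop stands in for the exclusion that iterate_supers
provides by holding sb->s_umount around its callback.

/*
 * Userspace model of "publish the ops pointer, then sweep the super blocks
 * that were mounted before any backend existed".  Illustrative names only.
 */
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

#define NO_BACKEND	(-1)	/* sentinel: mounted with no backend loaded */

struct fake_ops {
	int (*init_fs)(size_t pagesize);
};

struct fake_sb {
	_Atomic int poolid;
};

static _Atomic(struct fake_ops *) registered_ops;

/* Stands in for cleancache_init_fs, called from a filesystem's "->mount". */
static void demo_init_fs(struct fake_sb *sb)
{
	struct fake_ops *ops = atomic_load(&registered_ops);

	/* No backend yet: leave the sentinel for demo_register_ops to find. */
	atomic_store(&sb->poolid, ops ? ops->init_fs(4096) : NO_BACKEND);
}

/* Stands in for cleancache_register_ops, called when a backend loads. */
static int demo_register_ops(struct fake_ops *ops, struct fake_sb *sbs,
			     size_t nr)
{
	struct fake_ops *expected = NULL;

	/* Only one backend may ever register (the kernel uses cmpxchg). */
	if (!atomic_compare_exchange_strong(&registered_ops, &expected, ops))
		return -1;

	/*
	 * Sweep the already-mounted "super blocks".  Anything mounted after
	 * the store above saw registered_ops set and initialized itself, so
	 * only sentinel holders need ->init_fs here.  In the kernel this
	 * check-then-store is serialized against ->mount by iterate_supers
	 * holding sb->s_umount; the model is single-threaded, so plain
	 * loads and stores are enough to show the idea.
	 */
	for (size_t i = 0; i < nr; i++)
		if (atomic_load(&sbs[i].poolid) == NO_BACKEND)
			atomic_store(&sbs[i].poolid, ops->init_fs(4096));
	return 0;
}

static int demo_backend_init_fs(size_t pagesize)
{
	static int next_pool;

	(void)pagesize;
	return next_pool++;		/* hand out pool ids 0, 1, 2, ... */
}

int main(void)
{
	struct fake_ops ops = { .init_fs = demo_backend_init_fs };
	struct fake_sb sbs[2];

	demo_init_fs(&sbs[0]);		 /* mounted before the backend loads */
	demo_register_ops(&ops, sbs, 1); /* backend loads, sweeps sbs[0]     */
	demo_init_fs(&sbs[1]);		 /* mounted afterwards, inits itself */

	printf("poolid[0]=%d poolid[1]=%d\n",
	       atomic_load(&sbs[0].poolid), atomic_load(&sbs[1].poolid));
	return 0;
}

Running it prints poolid[0]=0 poolid[1]=1: the filesystem mounted before the
backend gets its pool id from the sweep, the one mounted afterwards gets it
directly from its own mount path, which is the at-least-once/at-most-once
property the comment proves.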