Re: [PATCH 2/2] AFS: Implement shared-writable mmap

Hugh Dickins wrote:
On Wed, 16 May 2007, Nick Piggin wrote:

Andrew Morton wrote:

I need to work out what to do with

mm-fix-fault-vs-invalidate-race-for-linear-mappings.patch
mm-merge-populate-and-nopage-into-fault-fixes-nonlinear.patch
mm-merge-populate-and-nopage-into-fault-fixes-nonlinear-doc-fix.patch
mm-merge-populate-and-nopage-into-fault-fixes-nonlinear-fix.patch
mm-merge-nopfn-into-fault.patch
convert-hugetlbfs-to-use-vm_ops-fault.patch
mm-remove-legacy-cruft.patch
mm-debug-check-for-the-fault-vs-invalidate-race.patch
mm-fix-clear_page_dirty_for_io-vs-fault-race.patch

Probably merge them, I guess.  Hugh had concerns, I think over small
additional overhead from the lock_page()?


That's right, the overhead of the lock_page()/unlock_page() in the common
path of faulting, and of the extra call to unmap_mapping_range() when
truncating (because page lock doesn't satisfactorily replace the old
sequence count when COWing).
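
Roughly, the two pieces in question look like the sketch below
(simplified, with illustrative names; not the exact code in -mm):

	/*
	 * Fault side: look up and lock the page, and only install the pte
	 * if the page is still attached to the mapping, i.e. it has not
	 * been truncated or invalidated in the meantime.
	 */
	page = find_lock_page(mapping, offset);	/* the lock_page() cost */
	if (page && page->mapping != mapping) {
		/* lost the race against truncate: drop it and retry */
		unlock_page(page);
		page_cache_release(page);
		page = NULL;
	}
	/* ... install the pte, then unlock_page(page) ... */

	/*
	 * Truncate side: zap the ptes before removing the pages, and once
	 * more afterwards. The second unmap_mapping_range() is the extra
	 * call, because the page lock does not cover COWed private copies
	 * the way the old truncate_count sequence did.
	 */
	unmap_mapping_range(mapping, holebegin, 0, 1);
	truncate_inode_pages(mapping, new_size);
	unmap_mapping_range(mapping, holebegin, 0, 1);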


Yes he did. It seems to only be noticeable in microbenchmarks.


So far, yes.  I expect it'll surface in some real-life workload
sometime, but let's not get too depressed about that.  I guess
your blithe "Scalability is not an issue" comment still irks me.


I say I believe scalability will not be a huge issue, because
concurrent faulters on the same page already have cacheline
contention beginning before we lock the page (tree_lock), and
ending after we unlock it (page->_count), and a few others in the
middle for good measure. I sure don't think it is going to help,
but I don't think it would have a great impact on an already sucky
workload.
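
Spelled out, the shared state two threads faulting the same page
already touch looks roughly like this (an approximation of the
2.6.21-era find_get_page() path, purely illustrative):

	read_lock_irq(&mapping->tree_lock);	/* contended: tree_lock */
	page = radix_tree_lookup(&mapping->page_tree, offset);
	if (page)
		page_cache_get(page);		/* contended: page->_count */
	read_unlock_irq(&mapping->tree_lock);

	lock_page(page);			/* the new contention point */
	/* ... fault processing ... */
	unlock_page(page);
	page_cache_release(page);		/* page->_count again */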

We would have contention against other sources of lock_page, but
OTOH, we want to fix up the clear_page_dirty_for_io race window
by doing a wait_on_page_locked here anyway.
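
The interaction I mean, roughly (a sketch only, not the actual
patch): clear_page_dirty_for_io() is called with the page locked
while writeback write-protects the ptes and clears the dirty state,
so having the fault path synchronise on the page lock is the natural
way to keep the two from racing.

	/* writeback side, caller already holds the page lock: */
	if (clear_page_dirty_for_io(page))
		ret = mapping->a_ops->writepage(page, wbc);

	/* fault side, before making the pte writable: */
	lock_page(page);	/* or, at minimum, wait_on_page_locked(page) */
	/* ... make the pte writable, set_page_dirty(), unlock_page() ... */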

We could introduce some contention for some other lockers... but
again I think those are often situations where the page fields are
going to be modified anyway, or where the fault code would be
contending on something anyway (e.g. vs !uptodate read, truncate).
The lock hold time in the fault should be shorter than most.

I'm not saying it would never be contended, but it is such a
granular thing and it is so often surrounded by file level locking.


Still, I have some lock_page speedup work that eliminates that regression
anyway.


Again, rather too blithely said.  You have a deep well of ingenuity,
but I doubt it can actually wash away all of the small overhead added.

OK, I haven't proven that the overhead can be eliminated completely,
but we can see that options exist... And we should at least be able
to help the more heavily impacted powerpc architecture and bring the
hit down to a more reasonable level there.


However, Hugh hasn't exactly said yes or no yet...


Getting a "yes" or "no" out of me is very hard work indeed.
But I didn't realize that was gating this work: if the world
had to wait on me, we'd be in trouble.

I think there are quite a few people eager to see the subsequent
->fault changes go in.  And I think I'd just like to minimize the
difference between -mm and mainline, and so maximize comprehensibility,
by getting this all in.  I've not heard of any correctness issues
with it, and we all agree that the page lock there is attractively
simple.

Well I value your opinion of course, and I don't want to change the
VM in a way that people are unhappy with :)


If I ever find a better way of handling it,
I'm free to send patches in future, after all.

That's definitely what I had in mind -- the page lock is a simple
starting point (that is hopefully correct) that we would always be
able to build on with something more fancy in future.


Did that sound something like a "yes"?

A little bit :)


--
SUSE Labs, Novell Inc.
