Re: [RFC] Thing 1: Shardmap for Ext4

On Dec 4, 2019, at 2:44 PM, Daniel Phillips <daniel@xxxxxxxxx> wrote:
> 
> On 2019-12-04 10:31 a.m., Andreas Dilger wrote:
>> One important use case that we have for Lustre that is not yet in the
>> upstream ext4[*] is the ability to do parallel directory operations.
>> This means we can create, lookup, and/or unlink entries in the same
>> directory concurrently, to increase parallelism for large directories.
> 
> This is a requirement for an upcoming transactional version of user space
> Shardmap. In the database world they call it "row locking". I am working
> on a hash based scheme with single record granularity that maps onto the
> existing shard buckets, which should be nice and efficient, maybe a bit
> tricky with respect to rehash but looks not too bad.
> 
> Per-shard rw locks are a simpler alternative, but might get a bit fiddly
> if you need to lock multiple entries in the same directory at the same
> time, which is required for mv is it not?

We currently have a "big filesystem lock" (BFL) for rename(), since
rename is not an operation whose performance many people care about.
We've discussed a number of times optimizing this for the common cases
of renaming a regular file within a single directory and renaming a
regular file between directories, but there are no plans at all to
optimize renaming directories between parents.

>> This is implemented by progressively locking the htree root and index
>> blocks (typically read-only), then leaf blocks (read-only for lookup,
>> read-write for insert/delete).  This provides improved parallelism
>> as the directory grows in size.
> 
> This will be much easier and more efficient with Shardmap because there
> are only three levels: top level shard array; shard hash bucket; record
> block. Locking applies only to cache, so no need to worry about possible
> upper tier during incremental "reshard".
> 
> I think Shardmap will also split more cleanly across metadata nodes than
> HTree.

We don't really split "htree" across metadata nodes; that is handled by
Lustre at a higher level than the underlying filesystem.  The filename
is hashed with the per-directory hash type, taken modulo the number of
directory shards to find the shard index within that directory, and
that index is then mapped to a directory shard on a particular server.
The backing filesystem directories are normal from the POV of the local
filesystem.
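
In rough pseudo-C the mapping looks something like the sketch below
(the struct, names, and stand-in hash are made up for illustration,
this is not the actual Lustre code):

#include <stdint.h>
#include <stddef.h>

struct striped_dir {
	unsigned int hash_type;            /* per-directory hash function id */
	unsigned int stripe_count;         /* number of directory shards */
	unsigned int *stripe_to_server;    /* shard index -> server index */
};

/* stand-in for the per-directory hash dispatch (FNV-1a used here) */
static uint64_t dir_hash(unsigned int hash_type, const char *name, size_t len)
{
	uint64_t hash = 14695981039346656037ULL;

	(void)hash_type;                   /* real code selects a hash here */
	for (size_t i = 0; i < len; i++)
		hash = (hash ^ (unsigned char)name[i]) * 1099511628211ULL;
	return hash;
}

static unsigned int name_to_server(const struct striped_dir *dir,
				   const char *name, size_t len)
{
	unsigned int shard = dir_hash(dir->hash_type, name, len) %
			     dir->stripe_count;

	return dir->stripe_to_server[shard];
}

Each shard is then just an ordinary directory on whichever server it
maps to.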

>> Will there be some similar ability in Shardmap to have parallel ops?
> 
> This work is already in progress for user space Shardmap. If there is
> also a kernel use case then we can just go forward assuming that this
> work or some variation of it applies to both.
> 
> We need VFS changes to exploit parallel dirops in general, I think,
> confirmed by your comment below. Seems like a good bit of work for
> somebody. I bet the benchmarks will show well, suitable grist for a
> master's thesis I would think.
> 
> Fine-grained directory locking may have a small enough footprint in
> the Shardmap kernel port that there is no strong argument for getting
> rid of it, just because VFS doesn't support it yet. Really, this has
> the smell of a VFS flaw (interested in Al's comments...)

I think that the VFS could get 95% of the benefit for 10% of the effort
by allowing only renames of regular files within a single directory,
protected by just a per-directory mutex.  The only workload I know of
that does a lot of renames is rsync, or parallel versions of it, which
create temporary files during the data transfer and then rename each
file over its target atomically after the data is sync'd to disk.
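
That pattern is just the usual write-temporary, fsync, rename-over-
target sequence, roughly (error handling trimmed, helper name
invented):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* write data to tmp, sync it, then atomically rename it over target;
 * both names live in the same parent directory, so a per-directory
 * lock would cover the rename */
static int replace_file(const char *target, const char *tmp,
			const void *data, size_t len)
{
	int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return -1;
	if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
		close(fd);
		unlink(tmp);
		return -1;
	}
	close(fd);
	if (rename(tmp, target) != 0) {		/* atomic replacement */
		unlink(tmp);
		return -1;
	}
	return 0;
}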

>> Also, does Shardmap have the ability to shrink as entries are removed?
> 
> No shrink so far. What would you suggest? Keeping in mind that POSIX+NFS
> semantics mean that we cannot in general defrag on the fly. I planned to
> just hole_punch blocks that happen to become completely empty.
> 
> This aspect has so far not gotten attention because, historically, we
> just never shrink a directory except via fsck/tools. What would you
> like to see here? Maybe an ioctl to invoke directory defrag? A mode
> bit to indicate we don't care about persistent telldir cookies?

There are a few patches floating around to shrink ext4 directories,
which I'd like to see landed at some point.  The current code is
sub-optimal, in that it only tries to shrink "easy" blocks from the end
of the directory, but hopefully there can be more aggressive shrinking
in later patches.
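
For the user-space Shardmap file, the "hole_punch blocks that happen to
become completely empty" part at least looks straightforward;
presumably just a plain fallocate() on the backing file, something like
the following (the block geometry and fd handling are my assumptions):

#define _GNU_SOURCE
#include <fcntl.h>

/* deallocate one fully-empty block of the backing file without
 * changing the file size */
static int punch_empty_block(int fd, off_t block_start, off_t block_size)
{
	return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			 block_start, block_size);
}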

> How about automatic defrag that only runs when directory open count is
> zero, plus a flag to disable?

As long as the shrinking doesn't break POSIX readdir ordering semantics.
I'm obviously not savvy on the Shardmap details, but I'd think that the
shards need to be garbage collected/packed periodically since they are
log structured (write at end, tombstones for unlinks), so that would be
an opportunity to shrink the shards?
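I.e. a compaction pass that walks the shard log, drops the tombstones,
and rewrites the survivors, in spirit something like the following (the
record layout is invented for illustration, not the real Shardmap
format):

#include <stdbool.h>
#include <stddef.h>

struct shard_rec {
	bool tombstone;        /* set when the entry was unlinked */
	/* ... hash, name, block number, etc. ... */
};

/* copy the live records from src[0..nr) into dst[], dropping
 * tombstones; returns the number of surviving records */
static size_t compact_shard(const struct shard_rec *src, size_t nr,
			    struct shard_rec *dst)
{
	size_t out = 0;

	for (size_t i = 0; i < nr; i++)
		if (!src[i].tombstone)
			dst[out++] = src[i];

	return out;
}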

>> [*] we've tried to submit the pdirops patch a couple of times, but the
>> main blocker is that the VFS has a single directory mutex and couldn't
>> use the added functionality without significant VFS changes.
> 
> How significant would it be, really nasty or just somewhat nasty? I bet
> the resulting efficiencies would show up in some general use cases.

As stated above, I think the common case (rename within a single
directory) could be implemented relatively easily, then maybe rename of
regular files across directories, and maybe never rename of
subdirectories across parents.
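
The locking discipline for those cases would be the standard one,
sketched below (not actual VFS code): a same-directory rename takes
just that directory's lock, and a cross-directory rename of a regular
file takes both parents' locks in a fixed order to avoid deadlock.

#include <pthread.h>
#include <stdint.h>

struct dir {
	pthread_mutex_t lock;
	/* ... */
};

static void lock_rename_parents(struct dir *a, struct dir *b)
{
	if (a == b) {
		pthread_mutex_lock(&a->lock);   /* common same-directory case */
		return;
	}
	if ((uintptr_t)a > (uintptr_t)b) {      /* fixed order avoids ABBA */
		struct dir *tmp = a;
		a = b;
		b = tmp;
	}
	pthread_mutex_lock(&a->lock);
	pthread_mutex_lock(&b->lock);
}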

>> Patch at https://git.whamcloud.com/?p=fs/lustre-release.git;f=ldiskfs/kernel_patches/patches/rhel8/ext4-pdirop.patch;hb=HEAD
> 
> This URL gives me git://git.whamcloud.com/fs/lustre-release.git/summary,
> am I missing something?

Just walk down the tree for the "f=ldiskfs/..." pathname...


Cheers, Andreas




