Re: mds: first stab at lookup-by-ino problem/soln description

On Wed, Jan 16, 2013 at 5:17 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> On Wed, 16 Jan 2013, Gregory Farnum wrote:
>> I'm not familiar with the interfaces at work there. Do we have a free
>> 32 bits we can steal in order to do that stuffing? (I *think* it would
>> go in the NFS filehandle structure rather than the ino, right?)
>
> Right, there are at least 8 more bytes in a standard fh (16 bytes iirc) to
> stuff whatever we want into.
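(For concreteness: a rough sketch of what that might look like, with the
ino up front and the spare bytes after it. This is purely illustrative;
the struct and field names are made up, not the real fh encoding.)

    #include <stdint.h>

    /* Illustrative only: a 16-byte opaque NFS filehandle with the
     * inode number in the first 8 bytes leaves 8 spare bytes to
     * stuff a locator hint into. */
    struct ceph_fh_sketch {
        uint64_t ino;      /* CephFS inode number */
        uint8_t  spare[8]; /* free space for extra lookup state */
    };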
>
>> We would need to also store that information in order to eventually
>> replace the anchor table, but of course that's much easier to deal
>> with. If we can just do it this way, that still leaves handling files
>> which don't have any data written yet -- under our current system,
>> users can apply a data layout to any inode which has not had data
>> written to it yet. Unfortunately that gets hard to deal with if a user
>> touches a bunch of files and then comes back to place them the next
>> day. :/ I suppose untouched files could have the special property
>> that their lookup data is stored in the metadata pool and it gets
>> moved as soon as they have data -- in the typical case files are
>> written right away and so this wouldn't be any more writes, just a bit
>> more logic.
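(To make that fallback concrete, a minimal sketch -- the helper, the
pool-id constant, and the struct are all invented for illustration:)

    #include <stdint.h>

    struct inode_info;  /* opaque lookup result */

    /* Invented stub: read the per-inode lookup record stored in
     * 'pool'; returns 0 on success, nonzero if absent. */
    int read_lookup_record(int64_t pool, uint64_t ino,
                           struct inode_info *out);

    #define METADATA_POOL_ID 1  /* placeholder pool id */

    int lookup_by_ino(uint64_t ino, int64_t data_pool,
                      struct inode_info *out)
    {
        /* Common case: the file was written right away, so its
         * lookup record lives with its objects in the data pool. */
        if (read_lookup_record(data_pool, ino, out) == 0)
            return 0;

        /* Untouched file: no data objects yet, so the record is
         * still in the metadata pool where it was placed at
         * create time. */
        return read_lookup_record(METADATA_POOL_ID, ino, out);
    }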
>
> We can also change the semantics here.  It could be that you have to
> specify the file's layout on create, and can't change it after creation.
> Otherwise you get the directory/subtree's layout.  We could store the pool
> with the remote dentry link, for instance, and we could stick it in the
> fh.  So the <ino, pool> is really the "locator" that you would need.
>
> That could work...
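(Sketching that: the <ino, pool> locator exactly fills the 16-byte
handle from above. Again illustrative only -- the names are invented,
and a real encoding would need versioning and endianness care.)

    #include <stdint.h>

    /* Hypothetical fh payload carrying the <ino, pool> locator. */
    struct ceph_locator_fh {
        uint64_t ino;   /* inode number */
        int64_t  pool;  /* RADOS pool id recorded at create time */
    };

With that in hand, an NFS lookup could in principle go straight to the
right pool for the file's metadata instead of consulting the anchor
table.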

I'm less keen on forcing users to specify file layouts on create,
since there aren't any standard interfaces that would let them do
that; a lot of use cases would be restricted to directory-level
layout changes. Granted, that covers the big ones, but we do have a
non-zero number of users who have learned our previous semantics,
right?

