Re: ceph.v17 multi-mds ephemeral directory pinning: cannot set or retrieve extended attribute

On Sun, Apr 9, 2023 at 11:21 PM Ulrich Pralle
<Ulrich.Pralle@xxxxxxxxxxxx> wrote:
>
> Hi,
>
> we are using ceph version 17.2.5 on Ubuntu 22.04.1 LTS.
>
> We deployed multi-mds (max_mds=4, plus standby-replay mds).
> Currently we have statically pinned our user home directories (~50k).
> The cephfs' root directory is pinned to '-1', ./homes is pinned to "0".
> All user home directories below ./homes/ are pinned to -1, 1, 2, or 3
> depending on a simple hash algorithm.
> Cephfs is provided to our users as samba/cifs (clustered samba,ctdb).
>
> We want to try ephemeral directory pinning.
>
> We can successfully set the extended attribute
> "ceph.dir.pin.distributed" with setfattr(1), but cannot retrieve its
> setting afterwards:
>
> # setfattr -n ceph.dir.pin.distributed -v 1 ./units
> # getfattr -n ceph.dir.pin.distributed ./units
> ./units: ceph.dir.pin.distributed: No such attribute
>
> strace setfattr reports success on setxattr
>
> setxattr("./units", "ceph.dir.pin.distributed", "1", 1, 0) = 0
>
> strace getfattr reports
>
> lstat("./units", {st_mode=S_IFDIR|0751, st_size=1, ...}) = 0
> getxattr("./units", "ceph.dir.pin.distributed", NULL, 0) = -1 ENODATA
> (No data available)
>
> The file system is mounted
> rw,noatime,name=<omitted>,mds_namespace=<omitted>.acl,recover_session=clean.
> The cephfs mds caps are "allow rwps".
> "./units" has a ceph.dir.layout="stripe_unit=4194304 stripe_count=1
> object_size=4194304 pool=fs_data_units"
> Ubuntu's setfattr is version 2.4.48.
>
> Defining other cephfs extended attributes (like ceph.dir.pin,
> ceph.quota.max_bytes, etc.) works as expected.
>
> What are we missing?

Your kernel doesn't appear to know how to read that virtual extended
attribute yet. Support for it should be in kernel 5.18.
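
As a workaround, you can read the vxattr back from a client that does
support it; a minimal sketch, assuming a >= 5.18 kernel client or a
ceph-fuse mount of the same path (the exact output format is
approximate):

getfattr -n ceph.dir.pin.distributed ./units
# file: units
ceph.dir.pin.distributed="1"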

> Should we clear all static directory pins in advance?

Start by removing the pin on /home. Then remove a group of pins on
some user directories. Confirm that /home looks something like this:

ceph tell mds.<fsname>:0 dump tree /home 0 | jq '.[0].dirfrags[] | .dir_auth'
"0"
"0"
"1"
"1"
"1"
"1"
"0"
"0"

That tells you the dirfrags for /home are distributed across the
ranks (in this case, 0 and 1).
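
If /home has a lot of dirfrags, piping the same dump through sort and
uniq gives a per-rank count that is easier to eyeball (-r just makes
jq print the values unquoted):

ceph tell mds.<fsname>:0 dump tree /home 0 | jq -r '.[0].dirfrags[] | .dir_auth' | sort | uniq -c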

At that point, it should be fine to remove the rest of the manual pins.
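
For reference, clearing a static pin is just setting ceph.dir.pin back
to -1; a rough sketch for the remaining user directories (paths are
illustrative, adjust to your actual mount point):

for d in /home/*/; do
    setfattr -n ceph.dir.pin -v -1 "$d"
done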

> Is there any experience with ephemeral directory pinning?
> Or should one refrain from multi-mds at all?

It should work fine. Please give it a try and report back!

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



