Re: subdirectory pinning and reducing ranks / max_mds

In my experience it just falls back to behaving as if it were un-pinned.

For my use case I do the following:

/ pinned to rank 0
/env1 to rank 1
/env2 to rank 2
/env3 to rank 3
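
(For reference, pins like these are set with the ceph.dir.pin extended
attribute; the mount point and paths below are just examples from my layout,
adjust for your own:)

    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/       # example mount point
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/env1
    setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/env2
    setfattr -n ceph.dir.pin -v 3 /mnt/cephfs/env3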

If I do an upgrade it collapses to a single rank, and all access/IO continues
after what would be a normal failover-type interval, i.e. IO may stop on
clients for 10-60 seconds or so, as if a normal MDS rank failover had
occurred.

But from what I've seen it does not remain in a locked state for the entire
upgrade.
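
For context, the "collapse" here is just the usual max_mds reduction step of
an upgrade; roughly the following, with "cephfs" as a placeholder filesystem
name:

    ceph fs set cephfs max_mds 1
    ceph fs status cephfs    # watch ranks 1-3 stop and load converge on rank 0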

YMMV, but as long as the reduction in ranks actually works (we’ve had MDS
daemons crash while trying to shut down), you should be in good shape.

If you do hit issues with ranks crashing, be ready to pause the upgrade and
set your max_mds back to 3 or 4 to stop the immediate bleeding, then continue
your troubleshooting without impact to the clients.
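
Roughly, that recovery step looks like the following ("cephfs" again just a
placeholder name; the orch command only applies if you are upgrading via
cephadm):

    ceph orch upgrade pause
    ceph fs set cephfs max_mds 3
    ceph fs status cephfs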

On Fri, Oct 21, 2022 at 12:29 PM Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
wrote:

> In a situation where you have say 3 active MDS (and 3 standbys).
> You have 3 ranks, 0,1,2
> In your filesystem you have three directories at the root level [/a, /b,
> /c]
>
> you pin:
> /a to rank 0
> /b to rank 1
> /c to rank 2
>
> and you need to upgrade your Ceph version. When it becomes time to reduce
> max_mds to 1, and thereby reduce the number of ranks to just rank 0, what
> happens to directories /b and /c? Do they become unavailable between the
> time when max_mds is reduced to 1 and when, after the upgrade, max_mds is
> restored to 3? Alternatively, if a rank disappears, does the CephFS client
> understand this and begin to ignore the pinned rank and make use of the
> remaining ranks? Thanks.
>
> Respectfully,
>
> *Wes Dillingham*
> wes@xxxxxxxxxxxxxxxxx
> LinkedIn <http://www.linkedin.com/in/wesleydillingham>