Re: MDS and stretched clusters

Just noticed this thread. A couple of questions. Is the goal to have MDS
daemons in, say, zone A and zone B, with the ones in zone A prioritized to
be active and the ones in zone B remaining standby unless absolutely
necessary (i.e. all the daemons in zone A are down)? Or is it to have MDS
daemons on some subset of a pool of hosts spanning zones A and B? If it's
the former, cephadm doesn't do it. The follow-up question in that case
would be whether there is some way to tell the MDS daemons to prioritize
certain ones to be active over others. If there is, I'm not aware of it,
but I assume we'd need that functionality to make that case work.
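For what it's worth, the external-watchdog approach Frédéric describes below
(watch MDS state and fail any active MDS running outside the preferred DC, so
a standby in the right DC gets promoted) could be sketched roughly like this.
Treat it as a sketch only: the JSON field names follow the general shape of
`ceph fs status <fs> -f json` output but should be verified on your release,
and the `PREFERRED_HOSTS` set is a made-up example:

```python
import json
import subprocess

# Hosts in the DC that should hold the active ranks
# (assumption: replace with your own hostnames).
PREFERRED_HOSTS = {"dc1-host1", "dc1-host2"}

def misplaced_actives(status, preferred_hosts):
    """Return names of active MDS daemons running outside preferred_hosts.

    `status` is the parsed JSON from `ceph fs status <fs> -f json`; the
    keys used here ("mdsmap", "state", "name", "host") follow its general
    shape but should be checked against your Ceph version's output.
    """
    return [
        d["name"]
        for d in status.get("mdsmap", [])
        if d.get("state") == "active" and d.get("host") not in preferred_hosts
    ]

def fail_misplaced(fs_name="cephfs"):
    # Fetch the current MDS map from the cluster ...
    out = subprocess.run(
        ["ceph", "fs", "status", fs_name, "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    # ... and fail any active daemon in the wrong DC so that a
    # standby(-replay) daemon in the preferred DC takes over its rank.
    for name in misplaced_actives(json.loads(out), PREFERRED_HOSTS):
        subprocess.run(["ceph", "mds", "fail", name], check=True)
```

You'd run something like `fail_misplaced()` from a cron job or systemd timer;
note that failing an MDS does cause a brief client interruption while the
standby takes over, so you'd only want it to trigger after a real failover.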

On Tue, Oct 29, 2024 at 5:34 PM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:

> No, unfortunately this needs to be done at a higher level and is not
> included in Ceph right now. Rook may be able to do this, but I don't think
> cephadm does.
> Adam, is there some way to finagle this with pod placement rules (i.e.,
> tagging nodes as mds and mds-standby, and then assigning special mds config
> info to corresponding pods)?
> -Greg
>
> On Tue, Oct 29, 2024 at 12:46 PM Sake Ceph <ceph@xxxxxxxxxxx> wrote:
>
>> I hope someone from the development team can shed some light on this. I'll
>> search the tracker to see if someone else has made a request about this.
>>
>> > On 29-10-2024 16:02 CET, Frédéric Nass <
>> frederic.nass@xxxxxxxxxxxxxxxx> wrote:
>> >
>> >
>> > Hi,
>> >
>> > I'm not aware of any service settings that would allow that.
>> >
>> > You'll have to monitor each MDS state and restart any non-local active
>> MDSs to reverse roles.
>> >
>> > Regards,
>> > Frédéric.
>> >
>> > ----- On 29 Oct 24, at 14:06, Sake Ceph ceph@xxxxxxxxxxx wrote:
>> >
>> > > Hi all
>> > > We successfully deployed a stretched cluster and everything is working
>> > > fine. But is it possible to assign the active MDS daemons to one DC and
>> > > the standby-replay ones to the other?
>> > >
>> > > We're running 18.2.4, deployed via cephadm, using 4 MDS servers with 2
>> > > active MDS on pinned ranks and 2 in standby-replay mode.
>> > > _______________________________________________
>> > > ceph-users mailing list -- ceph-users@xxxxxxx
>> > > To unsubscribe send an email to ceph-users-leave@xxxxxxx