Re: rbd mirroring - journal growing and snapshot high io load

Hi,

Are you guys affected by
https://tracker.ceph.com/issues/57396 ?
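
Separate from that tracker issue: since the rest of the thread turns on freezing the filesystem before taking a mirror snapshot, here is a rough sketch of that approach. The pool, image, and mountpoint names are made-up placeholders (not from this thread), and DRY_RUN=1 only prints the commands instead of running them:

```shell
#!/bin/sh
# Hypothetical sketch: take a consistent rbd mirror snapshot by freezing
# the filesystem first. Names below are assumptions for illustration only.
DRY_RUN="${DRY_RUN:-1}"

POOL="rbd"            # assumed pool name
IMAGE="vm-disk-1"     # assumed image name
MOUNTPOINT="/mnt/vm"  # assumed filesystem backed by the image

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Freeze the filesystem so in-flight writes are flushed to the image.
#    (For a KVM guest, 'virsh domfsfreeze <domain>' via qemu-guest-agent
#    plays the same role from the hypervisor side.)
run fsfreeze --freeze "$MOUNTPOINT"

# 2. Create the mirror snapshot while the filesystem is quiescent.
run rbd mirror image snapshot "$POOL/$IMAGE"

# 3. Thaw as quickly as possible to limit the stall seen by the guest.
run fsfreeze --unfreeze "$MOUNTPOINT"
```

With DRY_RUN=0 the same script would execute the commands for real; the window between freeze and thaw should be kept as short as possible.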

On Fri, 16 Sep 2022 at 09:40, ronny.lippold <ceph@xxxxxxxxx> wrote:

> hi and thanks a lot.
> good to know i'm not alone and that i understood some of it right :)
>
> i will also tell, if there is something new.
>
>
> so from my point of view, the only consistent way is to freeze the fs or
> shut down the vm and only then start the mirroring. so i think only
> journal-based mirroring can work.
>
> you helped me a lot, because i had a major understanding problem.
>
> maybe i will start a new thread on the mailing list and see.
>
> have a great weekend and hopefully a smooth job switching ... i know,
> what you mean :)
>
>
> ronny
>
>
> Am 2022-09-15 15:33, schrieb Arthur Outhenin-Chalandre:
> > Hi Ronny,
> >
> >> On 15/09/2022 14:32 ronny.lippold <ceph@xxxxxxxxx> wrote:
> >> hi arthur, some time has passed ...
> >>
> >> i would like to know if there is any news about your setup.
> >> do you have replication actively running?
> >
> > No, there was no change at CERN. I am actually switching jobs as well,
> > so I won't have much news for you on CERN infra in the future. I know
> > other people from the Ceph team at CERN watch this ml, so you might
> > hear from them as well.
> >
> >> we are actually using snapshot-based mirroring, and recently both
> >> clusters were moved.
> >> after that, we had some damaged filesystems in the kvm vms.
> >> did you ever see such problems in your tests?
> >>
> >> i think there are not many people who are using ceph replication.
> >> for me it's hard to find the right way.
> >> can a snapshot-based ceph replication be crash consistent? i think not.
> >
> > I never noticed it myself, but yes, it's actually mentioned in the docs
> > https://docs.ceph.com/en/quincy/rbd/rbd-snapshot/ (though the
> > mirroring docs don't explain this). I never tested that
> > super carefully, though, and thought this was more a rare occurrence
> > than anything else.
> >
> > I heard a while back (maybe a year or so ago) that there was a long-term
> > plan to automatically trigger an fsfreeze for librbd/qemu on a
> > snapshot, which would probably solve your issue (and also allow
> > application-level consistency via fsfreeze custom hooks). But this was
> > apparently a tricky feature to add. I cc'ed Ilya; maybe he knows more
> > about that, or whether something else could have caused your issue.
> >
> > Cheers,
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


