Re: erasure-code-lrc Questions regarding repair

Hi,

I somehow missed your message. Thanks for your effort to raise this issue.

Ansgar

On Tue, 16 Jan 2024 at 10:05, Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> I don't really have an answer, I just wanted to mention that I created
> a tracker issue [1] because I believe there's a bug in the LRC plugin.
> But there hasn't been any response yet.
>
> [1] https://tracker.ceph.com/issues/61861
>
> Quoting Ansgar Jazdzewski <a.jazdzewski@xxxxxxxxxxxxxx>:
>
> > hi folks,
> >
> > I am currently testing erasure-code-lrc (1) in a multi-room,
> > multi-rack setup. The idea is to be able to repair disk failures
> > within the rack itself, to lower bandwidth usage.
> >
> > ```bash
> > ceph osd erasure-code-profile set lrc_hdd \
> > plugin=lrc \
> > crush-root=default \
> > crush-locality=rack \
> > crush-failure-domain=host \
> > crush-device-class=hdd \
> > mapping=__DDDDD__DDDDD__DDDDD__DDDDD \
> > layers='
> > [
> > [ "_cDDDDD_cDDDDD_cDDDDD_cDDDDD", "" ],
> > [ "cDDDDDD_____________________", "" ],
> > [ "_______cDDDDDD______________", "" ],
> > [ "______________cDDDDDD_______", "" ],
> > [ "_____________________cDDDDDD", "" ]
> > ]' \
> > crush-steps='[
> > [ "choose", "room", 4 ],
> > [ "choose", "rack", 1 ],
> > [ "chooseleaf", "host", 7 ]
> > ]'
> > ```
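
As a quick sanity check of the mapping and layer strings in the profile above, they can be decoded mechanically. A minimal Python sketch (my own illustration, not Ceph code; the profile strings are copied verbatim from the command above):

```python
# Decode the LRC profile strings from the profile above.
# '_' = chunk unused by this layer, 'D' = data chunk, 'c' = coding chunk.
mapping = "__DDDDD__DDDDD__DDDDD__DDDDD"
layers = [
    "_cDDDDD_cDDDDD_cDDDDD_cDDDDD",  # global layer: 4 coding chunks
    "cDDDDDD_____________________",  # local layer, rack 1
    "_______cDDDDDD______________",  # local layer, rack 2
    "______________cDDDDDD_______",  # local layer, rack 3
    "_____________________cDDDDDD",  # local layer, rack 4
]

def describe(layer):
    """Return (data positions, coding positions) for one layer string."""
    data = [i for i, ch in enumerate(layer) if ch == "D"]
    coding = [i for i, ch in enumerate(layer) if ch == "c"]
    return data, coding

print(f"{len(mapping)} chunks total, {mapping.count('D')} data chunks")
for n, layer in enumerate(layers, 1):
    data, coding = describe(layer)
    print(f"layer {n}: {len(data)} data + {len(coding)} coding chunks")
```

Each local layer spans 7 chunk positions (6 data/global chunks plus 1 local parity), so a single lost shard should be repairable from the 6 surviving shards in the same rack, which is the bandwidth saving this profile aims for.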
> >
> > The rule picks 4 out of 5 rooms and keeps the PG in one rack, as expected!
> >
> > However, it looks like the PG will not move to another room if the
> > PG is undersized or an entire room or rack is down!
> >
> > Questions:
> > * Am I missing something to allow LRC PGs to move across racks/rooms
> > for repair?
> > * Is it even possible to build such a multi-stage crushmap?
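
One way to investigate the question above offline is to simulate placements with crushtool. A rough sketch (untested; it assumes the rule id created for lrc_hdd is 1 — check with `ceph osd crush rule dump` — and uses OSD 12 as an arbitrary example device):

```shell
# Export the cluster's compiled CRUSH map.
ceph osd getcrushmap -o /tmp/cm

# Simulate the 28-shard placement for the LRC rule.
crushtool -i /tmp/cm --test --rule 1 --num-rep 28 --show-mappings

# Re-run with one OSD's weight forced to 0 to emulate a failure and see
# whether CRUSH would remap its shard or leave the PG undersized.
crushtool -i /tmp/cm --test --rule 1 --num-rep 28 --weight 12 0 --show-mappings
```

Because the `choose room` step is fixed at 4 of 5 rooms per PG, comparing the two mapping outputs should show whether a whole-room outage can be compensated at all.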
> >
> > Thanks for your help,
> > Ansgar
> >
> > 1) https://docs.ceph.com/en/quincy/rados/operations/erasure-code-jerasure/
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx


