Re: same OSD in multiple CRUSH hierarchies

Hi,

Actually, I've learned that a rule does not need to start with a root bucket, so I can have rules that only consider a subtree of my total resources and achieve what I was trying to do with the separate, disjoint hierarchies.

BTW: it is possible to have different trees with different roots, with some OSDs being part of multiple such trees, and to create different rules that start with one root or the other. But I was told that this could throw off the calculations of the PG autoscaler and other housekeeping functions. So it seems a better option to keep each OSD in a single tree and use rules that only consider subtrees ...
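
For illustration, such a subtree rule can be created straight from the CLI. This is only a sketch; the bucket, rule and pool names (rack1, rack1_rule, app_pool, some_pool) are made up for the example:

    # "rack1" can be any existing bucket below the default root; the rule
    # will only place data on OSDs underneath it, with "host" as the
    # failure domain
    ceph osd crush rule create-replicated rack1_rule rack1 host
    # either create a new pool with this rule ...
    ceph osd pool create app_pool 64 64 replicated rack1_rule
    # ... or switch an existing pool over to it
    ceph osd pool set some_pool crush_rule rack1_rule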

Regards,
Laszlo

Date: Mon, 19 Jun 2023 07:41:35 +0000
From: Eugen Block <eblock@xxxxxx>
Subject: Re: same OSD in multiple CRUSH hierarchies
To: ceph-users@xxxxxxx
Message-ID:
	<20230619074135.Horde.gS8nAKQgZhlbV0HpymJ-lqf@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=utf-8; format=flowed; DelSp=Yes

Hi,
I don't think this is going to work. Each OSD belongs to a specific
host and you can't have multiple buckets (e.g. bucket type "host")
with the same name in the crush tree. But if I understand your
requirement correctly, there should be no need to do it this way. If
you structure your crush tree according to your separation
requirements and the critical pools use designated rules, you can
still have a rule that doesn't care about the data separation but
distributes the replicas across all available hosts (assuming your
failure domain is "host"), which is what the default replicated_rule
already does. Did I misunderstand something?
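
As a rough sketch of what I mean (all names below are just examples),
you could group the hosts for the critical pools under their own
buckets below the one default root, pin rules to those buckets, and
let the generic pools keep the default replicated_rule:

    # group the dedicated hosts under their own bucket (type "rack" here)
    ceph osd crush add-bucket group_a rack
    ceph osd crush move group_a root=default
    ceph osd crush move host1 root=default rack=group_a
    # rule that only places data inside group_a, failure domain "host"
    ceph osd crush rule create-replicated group_a_rule group_a host
    # the critical pool uses the designated rule ...
    ceph osd pool create critical_a 64 64 replicated group_a_rule
    # ... while generic pools simply keep the default replicated_rule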

Regards,
Eugen


Quoting Budai Laszlo <laszlo.budai@xxxxxxxxx>:

Hi there,

I'm curious if there is anything against configuring an OSD to be
part of multiple CRUSH hierarchies. I'm thinking of the following
scenario:

I want to create pools that use distinct sets of OSDs. I want to
make sure that a piece of data that is replicated at application
level will not end up on the same OSD. So I would create multiple
CRUSH hierarchies (root - host - osd) using different OSDs in each,
and different rules that use those hierarchies. Then I would create
pools with those different rules, and use those pools for storing
the data of the different application instances. But I would also
like to use the OSDs in the "default hierarchy" set up by Ceph,
where all the hosts are in the same root bucket, and keep the
default replicated rule, so my generic data volumes could spread
across all the available OSDs.
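
To make the intent concrete, the per-application part of that setup
would look roughly like this (all names invented for the example);
the open question is the last part, i.e. whether those hosts/OSDs
could additionally stay under the default root:

    # separate root with its own hosts, plus a rule and a pool bound to it
    ceph osd crush add-bucket app_a_root root
    # note: "move" relocates the host, it does not copy it
    ceph osd crush move host1 root=app_a_root
    ceph osd crush rule create-replicated app_a_rule app_a_root host
    ceph osd pool create app_a_pool 64 64 replicated app_a_rule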

Is there something against this setup?

Thank you for any advice!
Laszlo
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


