Re: Need urgent help for ceph health error issue

Hi,

Yes, min_size=1 and size=2 for the SSD pool.

For the HDD pool it is min_size=1 and size=3.
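
For reference, those values can be double-checked per pool with commands like
the following (the pool name below is just a placeholder):

  ceph osd pool ls detail
  ceph osd pool get <pool-name> size
  ceph osd pool get <pool-name> min_size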

Could you please advise on using HDD and SSD in the same Ceph cluster? Is
that okay for a production-grade OpenStack deployment?
We have created a new replicated rule for SSD, a separate pool for SSD, and
marked the new disks with the ssd device class.
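
Roughly, the steps we followed look like the sketch below (the rule and pool
names are placeholders, and size/min_size reflect our current settings, not a
recommendation):

  # mark the new OSDs with the ssd device class (if not auto-detected)
  ceph osd crush set-device-class ssd osd.40 osd.41

  # replicated rule limited to the ssd class: root "default", failure domain "host"
  ceph osd crush rule create-replicated replicated_ssd default host ssd

  # point the SSD pool at that rule and set replication
  ceph osd pool set <ssd-pool> crush_rule replicated_ssd
  ceph osd pool set <ssd-pool> size 2
  ceph osd pool set <ssd-pool> min_size 1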

We have no experience with the Ceph balancer yet.
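
From a quick look at the docs, enabling the built-in balancer seems to go
roughly like this (untested on our side; upmap mode and the compat setting
are assumptions):

  # check whether the balancer module is on and which mode it uses
  ceph balancer status

  # upmap mode requires clients that are at least Luminous
  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on

  # score of the current PG distribution (lower is better)
  ceph balancer eval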

Regards,
Munna

On Thu, Dec 9, 2021 at 1:54 PM Stefan Kooman <stefan@xxxxxx> wrote:

> Hi,
>
> On 12/9/21 03:11, Md. Hejbul Tawhid MUNNA wrote:
> > Hi,
> >
> > Yes, we have added new osd. Previously we had only one type disk, hdd.
> > now we have added ssd disk separate them with replicated_rule and device
> > class
> >
> > ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
> ...
> ...
> > 40   ssd 1.81898  1.00000 1.8 TiB 141 GiB 1.7 TiB  7.55 0.25 1035
> > 41   ssd 1.81898  1.00000 1.8 TiB  94 GiB 1.7 TiB  5.07 0.17 1043
> >                      TOTAL 226 TiB  69 TiB 158 TiB 30.40
>
> Is that a replicated pool with min_size=1 and size=2? Or are you planning
> on adding a third SSD osd?
>
> > MIN/MAX VAR: 0.17/1.30  STDDEV: 6.38
>
> It looks like the data placement could be optimized by the Ceph
> balancer. And this would provide you with (a lot more) space. Do you use
> Ceph balancer?
>
> These tools might be helpful as well:
>
> https://github.com/TheJJ/ceph-balancer
> https://github.com/digitalocean/pgremapper
>
> Gr. Stefan
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


