Re: How to check available storage with EC and different sized OSD's ?

With an m value of 1, if you lose a single OSD/failure domain you end up with a read-only PG or cluster. You usually need at least k+1 shards available to survive a failure-domain failure, depending on your min_size setting. The other thing you need to take into consideration is that the m value has to cover both failure-domain *and* OSD failures in an unlucky scenario (e.g. a PG that happened to be on a downed host plus a failed OSD elsewhere in the cluster). For a 3-OSD configuration the minimum fault-tolerant setup would be k=1, m=2, and at that point you are effectively doing replica 3 anyway. At least this is my understanding of it. Hope that helps.
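
If you want to double-check what the pool actually ended up with, something along these lines should show its size/min_size and the EC profile behind it (the pool name is taken from your ceph df output below; the profile name is whatever the third command reports, shown here only as a placeholder):

ceph osd pool get ceph3_ec_low_k2_m1-data size
ceph osd pool get ceph3_ec_low_k2_m1-data min_size
ceph osd pool get ceph3_ec_low_k2_m1-data erasure_code_profile
ceph osd erasure-code-profile get <profile-reported-above>

With k=2, m=1 I'd expect size=3 and, on recent releases, min_size=3 (k+1), which is exactly why a single OSD failure leaves the PGs unable to serve I/O until the OSD recovers or you lower min_size.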
________________________________
From: Paweł Kowalski <pk@xxxxxxxxxxxx>
Sent: 08 November 2022 14:25
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject:  How to check available storage with EC and different sized OSD's ?

Hi,


I've set up a minimal EC setup - 3 OSDs, k=2, m=1:


root@skarb:~# ceph osd df
ID  CLASS    WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
[...]
 9  low_hdd  2.72849   1.00000  2.7 TiB  632 GiB  631 GiB  121 KiB  1.6 GiB  2.1 TiB  22.62  0.67   32      up
10  low_hdd  1.81879   1.00000  1.8 TiB  632 GiB  631 GiB  121 KiB  1.6 GiB  1.2 TiB  33.94  1.01   32      up
11  low_hdd  1.81879   1.00000  1.8 TiB  632 GiB  631 GiB  121 KiB  1.6 GiB  1.2 TiB  33.94  1.01   32      up
[...]


root@skarb:~# ceph df
--- RAW STORAGE ---
CLASS       SIZE    AVAIL     USED  RAW USED  %RAW USED
[...]
low_hdd  6.4 TiB  4.5 TiB  1.8 TiB   1.8 TiB      29.04
[...]

--- POOLS ---
POOL                         ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
[...]

ceph3_ec_low_k2_m1-data      20   32  1.2 TiB  325.96k  1.8 TiB  32.16    2.6 TiB
ceph3_ec_low_k2_m1-metadata  21   32  319 KiB        5  970 KiB      0    5.5 TiB

[...]


As you can see, the first OSD (2.7 TiB) is larger than the 2nd and 3rd ones.

The question is: is it possible to check (not calculate) the safe available
storage space on this setup? ceph df shows 4.5 TiB available, but
obviously the pool isn't ready for the first OSD's failure.

And if I do manage to calculate a safe size, how do I make the setup survive
the first OSD's failure? I guess it's not as simple as "just don't use more
than xxx space"...


Regards,

Paweł




Danny Webb
Principal OpenStack Engineer
The Hut Group<http://www.thehutgroup.com/>

Tel:
Email: Danny.Webb@xxxxxxxxxxxxxxx<mailto:Danny.Webb@xxxxxxxxxxxxxxx>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



