Re: What is the best way to use disks with different sizes

You were very clear.

Create one pool containing all drives.

You can deploy more than one OSD on an NVMe drive, each using a fraction of the drive's capacity.  Not all drives have to have the same number of OSDs.

If you deploy 2x OSDs on the 7.6TB drive and 1x OSD on each 3.8TB drive, you will have 15 OSDs total, each 3.8TB in size.

If you have enough RAM, you could deploy 4x OSDs on the 7.6TB and 2x OSDs on each 3.8TB, for 30 OSDs total (10 per node), each roughly 1.9TB in size.

This strategy makes all your OSDs roughly the same size.  NVMe devices can service a lot of IOPS in parallel, so deploying more than one OSD on each, RAM permitting, increases overall throughput.
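
For reference, here is a minimal sketch of how that split could be carved out with ceph-volume.  The /dev/nvmeXn1 paths are placeholders for your actual devices, and Proxmox's pveceph tooling may not expose this option directly, so treat it as an illustration rather than a recipe:

    # Dry run first: --report only prints what would be created.
    ceph-volume lvm batch --report --osds-per-device 2 /dev/nvme3n1

    # Two OSDs on the 7.6TB device:
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme3n1

    # One OSD on each 3.8TB device (the default, shown for completeness):
    ceph-volume lvm batch /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

On cephadm-managed clusters the same intent can be expressed declaratively with an OSD service spec (osds_per_device plus a size filter on data_devices), but ceph-volume is the lowest common denominator.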

> On Jul 4, 2023, at 8:35 PM, wodel youchi <wodel.youchi@xxxxxxxxx> wrote:
> 
> Hi and thanks,
> 
> Maybe I was not able to express myself correctly.
> 
> I have 3 nodes, and I will be using 3 replicas for the data, which will be VM disks.
> 
> Each node has 4 disks:
> - 3 NVMe disks of 3.8 TB
> - 1 NVMe disk of 7.6 TB
> 
> All three nodes are equivalent.
> 
> As mentioned above, one pool will suffice for my VMs. My question is:
> - Should I create two pools, the first over the 3.8TB disks (9 disks with replica 3) and the second over the 7.6TB disks (3 disks with replica 3)?
> - Or should I create one big pool using all 12 disks, mixing them despite the difference in size?
> 
> 
> Regards.
> 
> On Tue, Jul 4, 2023 at 15:32, Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
>> There aren’t enough drives to split into multiple pools.
>> 
>> Deploy 1 OSD on each of the 3.8T devices and 2 OSDs on each of the 7.6s.
>> 
>> Or, alternately, 2 and 4.
>> 
>> 
>> > On Jul 4, 2023, at 3:44 AM, Eneko Lacunza <elacunza@xxxxxxxxx> wrote:
>> > 
>> > Hi,
>> > 
>> > On 3/7/23 at 17:27, wodel youchi wrote:
>> >> I will be deploying a Proxmox HCI cluster with 3 nodes. Each node has 3
>> >> NVMe disks of 3.8TB each and a 4th NVMe disk of 7.6TB. Technically I need
>> >> one pool.
>> >> 
>> >> Is it good practice to use all disks to create the one pool I need, or is
>> >> it better to create two pools, one on each group of disks?
>> >> 
>> >> If the former is good (use all disks and create one pool), should I take
>> >> into account the difference in disk size?
>> >> 
>> > 
>> > What space utilization do you expect? If you mix all disks in the same pool and a 7.6TB disk fails, that node's remaining disks will fill up once utilization is near 60%, halting writes.
>> > 
>> > With 2 pools, that limit would be near 66% for the 3.8TB pool and there would be no limit for the 7.6TB pool (but in that case a disk failure leaves you with only 2 replicas).
>> > 
>> > Another option would be 4 pools; in that case, if a disk in any pool fails, the VMs on that pool will keep working with only 2 replicas.
>> > 
>> > For the "near" calculation, you must factor in the nearfull and full ratios for OSDs, and also that data may be unevenly distributed among OSDs...
>> > 
>> > The choice will also affect how well the aggregate IOPS are spread between VMs and disks.
>> > 
>> > Cheers
>> > 
>> > Eneko Lacunza
>> > Zuzendari teknikoa | Director técnico
>> > Binovo IT Human Project
>> > 
>> > Tel. +34 943 569 206 | https://www.binovo.es
>> > Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
>> > 
>> > https://www.youtube.com/user/CANALBINOVO
>> > https://www.linkedin.com/company/37269706/
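
To put concrete commands behind the nearfull/full ratio point Eneko makes above, here is a quick sketch of how to check and, if needed, tune those ratios.  The numbers in the comments are the stock defaults, not a recommendation:

    # Show the current ratios (defaults: nearfull 0.85, backfillfull 0.90, full 0.95):
    ceph osd dump | grep ratio

    # Per-OSD utilization; with mixed sizes the larger OSDs carry proportionally more data:
    ceph osd df tree

    # Lower the thresholds if you want earlier warning / more headroom to survive
    # losing a large OSD, e.g.:
    ceph osd set-nearfull-ratio 0.80
    ceph osd set-backfillfull-ratio 0.85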

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



