Re: What is the best way to use disks with different sizes

There aren’t enough drives to split into multiple pools.

Deploy 1 OSD on each of the 3.8 TB devices and 2 OSDs on each of the 7.6 TB devices.

Or, alternatively, 2 and 4.

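If it helps to see what that split gives you per node, here is a minimal Python sketch (the device names are hypothetical examples, not taken from your cluster):

# Minimal sketch of the suggested per-node layout: 1 OSD per 3.8 TB NVMe,
# 2 OSDs per 7.6 TB NVMe. Device names are hypothetical examples.
DEVICES_TB = {"nvme0n1": 3.8, "nvme1n1": 3.8, "nvme2n1": 3.8, "nvme3n1": 7.6}

def osd_layout(devices_tb, split_above_tb=4.0):
    """Return (device, osd_size_tb) pairs, putting 2 OSDs on devices above the threshold."""
    osds = []
    for dev, size in devices_tb.items():
        n = 2 if size > split_above_tb else 1
        osds.extend((dev, round(size / n, 2)) for _ in range(n))
    return osds

print(osd_layout(DEVICES_TB))
# [('nvme0n1', 3.8), ('nvme1n1', 3.8), ('nvme2n1', 3.8), ('nvme3n1', 3.8), ('nvme3n1', 3.8)]

All five OSDs per node end up at roughly the same 3.8 TB size, so CRUSH weights stay uniform and the larger device simply carries two OSDs' worth of data and IO.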

> On Jul 4, 2023, at 3:44 AM, Eneko Lacunza <elacunza@xxxxxxxxx> wrote:
> 
> Hi,
> 
> On 3/7/23 at 17:27, wodel youchi wrote:
>> I will be deploying a Proxmox HCI cluster with 3 nodes. Each node has 3
>> NVMe disks of 3.8 TB each and a 4th NVMe disk of 7.6 TB. Technically I need
>> one pool.
>> 
>> Is it good practice to use all disks to create the one pool I need, or is
>> it better to create two pools, one on each group of disks?
>> 
>> If the former is good (use all disks and create one pool), should I take
>> into account the difference in disk size?
>> 
> 
> What space use % do you expect? If you mix all disks in the same pool and a 7.6 TB disk fails, that node's remaining disks will fill up if use is near 60%, halting writes.
> 
> With 2 pools, that would be "near 66%" for the 3.8 TB pool and no such limit for the 7.6 TB pool (but in that case you'd be left with only 2 replicas after a disk failure).
> 
> Another option would be 4 pools; in that case, if a disk in any pool fails, the VMs on that pool will keep working with only 2 replicas.
> 
> For the "near" calculation, you must also factor in the nearfull and full ratios for OSDs, and that data may be unevenly distributed among OSDs...
> 
> The choice will also affect how well the aggregate IOPS are spread between VMs and disks.
> 
> Cheers
> 
> Eneko Lacunza
> Technical Director
> Binovo IT Human Project
> 
> Tel. +34 943 569 206 | https://www.binovo.es
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
> 
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/
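
For reference, a minimal Python sketch of the quoted full-ratio reasoning. It assumes size=3 replication with one replica per node (3 nodes), so when a disk fails its data must be re-created on that node's surviving disks; nearfull/full margins and uneven PG distribution are deliberately ignored, as noted above.

def max_safe_use(node_disks_tb, failed_tb):
    # Fraction of the node's capacity in use at which recovery after losing
    # 'failed_tb' would fill the node's remaining disks.
    total = sum(node_disks_tb)
    return (total - failed_tb) / total

# One mixed pool per node (3 x 3.8 TB + 1 x 7.6 TB), losing the 7.6 TB disk:
print(max_safe_use([3.8, 3.8, 3.8, 7.6], 7.6))  # ~0.6   -> "near 60%"

# A pool on the 3.8 TB disks only, losing one of them:
print(max_safe_use([3.8, 3.8, 3.8], 3.8))       # ~0.667 -> "near 66%"
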
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx