Re: Hardware for new OSD nodes.

Hi Anthony,

On 22/10/20 at 18:34, Anthony D'Atri wrote:

>> Yeah, didn't think about a RAID10 really, although there wouldn't be
>> enough space for 8x300GB = 2400GB of WAL/DBs.
> 300 is overkill for many applications anyway.

Yes, but he is seeing spillover with 1600GB shared by 12 WAL/DBs, which is only about 133GB each. It seems he could make use of those 300GB (a quick way to confirm the spillover is sketched below).
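
For reference (osd.0 is just a placeholder id here): spillover shows up as a BLUEFS_SPILLOVER health warning, and the bluefs perf counters show how much of the DB has overflowed onto the slow device:

    # cluster-wide: lists any OSD whose DB has spilled over to the slow device
    ceph health detail | grep -i spillover

    # per OSD, run on the host where it lives: db_used_bytes vs slow_used_bytes
    ceph daemon osd.0 perf dump bluefs | egrep 'db_used_bytes|slow_used_bytes'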

>> Also, using a RAID10 for WAL/DBs will:
>>      - make OSDs less movable between hosts (they'd have to be moved
>>        all together - with 2 OSDs per NVMe you can move them around in
>>        pairs)
> Why would you want to move them between hosts?

I think the usual case is a server failure, so that won't be a problem. With small clusters (like ours) you may want to redistribute OSDs to a new server (say, move one OSD from each existing server to the new one), as sketched below. But this is an uncommon corner case, I agree :)
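
In case it's useful, a minimal sketch of such a move, assuming osd.7 as an example and that the OSD's data disk and its DB/WAL device travel together:

    # on the old host: stop the OSD before pulling the drives
    systemctl stop ceph-osd@7

    # physically move the HDD and its NVMe DB/WAL device to the new host, then:
    ceph-volume lvm activate --all   # scans LVM metadata and starts any OSDs found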

Cheers

--
Eneko Lacunza                | +34 943 569 206
                             | elacunza@xxxxxxxxx
Zuzendari teknikoa           | https://www.binovo.es
Director técnico             | Astigarragako Bidea, 2 - 2º izda.
BINOVO IT HUMAN PROJECT S.L  | oficina 10-11, 20180 Oiartzun
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



