Hi Brian,
On 22/10/20 at 17:50, Brian Topping wrote:
On Oct 22, 2020, at 9:14 AM, Eneko Lacunza <elacunza@xxxxxxxxx> wrote:
Don't stripe them, if one NVMe fails you'll lose all OSDs. Just use 1
NVMe drive for 2 SAS drives and provision 300GB for WAL/DB for each
OSD (see related threads on this mailing list about why that exact size).
This way if an NVMe fails, you'll only lose 2 OSDs.
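For reference, a rough sketch of how that layout could be created with
LVM + ceph-volume (device and LV names are just examples, adjust to
your hardware and Ceph release):

  # carve two 300GB DB LVs out of one NVMe
  vgcreate db-nvme0 /dev/nvme0n1
  lvcreate -L 300G -n db-osd0 db-nvme0
  lvcreate -L 300G -n db-osd1 db-nvme0
  # one OSD per SAS drive, each with its DB (and WAL) on the shared NVMe
  ceph-volume lvm create --bluestore --data /dev/sda --block.db db-nvme0/db-osd0
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db db-nvme0/db-osd1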
Also, what size of WAL/DB partitions do you have now, and how much is
spilling over?
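You can see current spillover in the health warnings, and if I remember
right the bluefs counters give the exact numbers (osd.0 is just an
example id, run the daemon command on the OSD's host):

  ceph health detail | grep -i spillover
  ceph daemon osd.0 perf dump bluefs | grep -E 'db_(total|used)_bytes|slow_used_bytes'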
Generally agreed against making a single giant striped bucket.
Note this may be a good use for RAID10 on WAL/DB if you are committed
to multiple disks.
I generally put WAL/DB on RAID10 boot disks. It’s important to have
reliable WAL/DB, but also important that the machine actually boots in
the first place. With enough RAM and non-interactive use, most of the
boot bits will be cached so there is no contention for the channel.
Happy for any critique on this as well!
Yeah, I hadn't really thought about a RAID10, although there wouldn't be
enough space for 8x300GB = 2400GB of WAL/DBs.
I usually also use the boot disk for WAL/DBs; it just happens that our
clusters are small and the nodes not very dense.
Also, using a RAID10 for WAL/DBs will:
- make OSDs less movable between hosts (they'd have to be moved all
together - with 2 OSDs per NVMe you can move them around in pairs,
although there would be data movement for sure; rough sketch below)
- Provide half the IOPS/bandwidth for WAL/DB (though I think that would
be plenty for SAS magnetic drives)
+ WAL/DBs will be safer (one disk failure won't lose any OSD)
- You must really be sure your RAID card is dependable (sorry, but I
have seen so many management problems with top-tier RAID cards that I
avoid them like the plague).
But it is an interesting idea nonetheless.
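In case it is useful, the pair-at-a-time move I mentioned above would
look roughly like this (osd.4/osd.5 are hypothetical ids for the two
OSDs sharing one NVMe):

  ceph osd set noout
  # on the old host, stop the two OSDs that share the NVMe
  systemctl stop ceph-osd@4 ceph-osd@5
  # physically move the NVMe plus its 2 SAS drives, then on the new host:
  ceph-volume lvm activate --all
  ceph osd unset noout

CRUSH will then place the OSDs under the new host, so expect the data
movement I mentioned.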
Cheers
--
Eneko Lacunza | +34 943 569 206
| elacunza@xxxxxxxxx
Zuzendari teknikoa | https://www.binovo.es
Director técnico | Astigarragako Bidea, 2 - 2º izda.
BINOVO IT HUMAN PROJECT S.L | oficina 10-11, 20180 Oiartzun
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx