Re: SSD considerations for block.db and WAL

Hi Christian,
On 27/02/2020 at 20:08, Christian Wahl wrote:
Hi everyone,

we currently have 6 OSDs with 8TB HDDs split across 3 hosts.
The main usage is KVM images.

To improve speed we planned on putting the block.db and WAL onto NVMe-SSDs.
The plan was to put 2x1TB in each host.

One option I thought of was to RAID 1 them for better redundancy; I don't know how high the risk of corrupting the block.db is if one SSD fails.
It's hard to tell without more information, but what does "improve speed" mean for you? Is that latency or throughput?

I don't think those NVMe disks used as block.db and WAL will help much in terms of throughput. They may help a bit with latency, though.
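If you do end up putting block.db on the NVMes anyway, provisioning one OSD that way looks roughly like this (a sketch; the device paths are placeholders, and the WAL lands on the db device automatically when you only pass --block.db):

    # one HDD OSD with its block.db (and implicitly its WAL)
    # on an NVMe partition
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/nvme0n1p1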

For more throughput you need more disks. Those 8TB disks are too big and too few.

Wouldn't it be better to use all the NVMe disks as OSDs and create two tiers? A pool based on NVMe would be fast, and the pool based on HDDs would be like now...
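A sketch of that two-tier idea using CRUSH device classes (the rule and pool names here are made up, and you should check which class your NVMe OSDs actually get with "ceph osd tree"):

    # rule that places data only on nvme-class OSDs
    ceph osd crush rule create-replicated fast-rule default host nvme
    # new pool pinned to that rule
    ceph osd pool create fast-pool 64 64 replicated fast-rule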

Or should I just use one for WAL+block.db and use the other one as fast storage?

Depends on your use case, but I don't think it makes sense to create a RAID1 with NVMe disks.

Have you checked current DB and WAL use on those OSDs?
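For example, on one of the OSD hosts (a sketch; osd.0 is a placeholder, and the exact counter names can vary between releases):

    # BlueFS usage, including db_used_bytes / db_total_bytes
    # and wal_used_bytes / wal_total_bytes
    ceph daemon osd.0 perf dump bluefs

On recent releases "ceph osd df" also shows per-OSD META/OMAP usage.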

Cheers
Eneko


--
Technical Director
Binovo IT Human Project, S.L.
Tel. 943569206
Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



