Re: optane + 4x SSDs for VM disk images?


 




On 11/08/2019 19:46, Victor Hooi wrote:
Hi,

I am building a 3-node Ceph cluster to store VM disk images.

We are running Ceph Nautilus with KVM.

Each node has:

Xeon 4116
512 GB RAM
980 GB Optane 905P NVMe disk

Previously, I was creating four OSDs per Optane disk, and using only Optane disks for all storage.

However, if I added, say, 4 x 980 GB SSDs to each node, would that improve anything?

Is there a good way of using the Optane disks as a cache? (WAL?) Or what would be a good way of making use of this hardware for VM disk images?

Could performance of Optane + 4x SSDs per node ever exceed that of pure Optane disks?

Thanks,
Victor




Generally I would go with adding the SSDs: it gives you good capacity and overall performance per dollar, and it's a common deployment pattern in Ceph. Ceph does have overhead, so trying to push extreme performance can be costly. To answer your questions:

Latency and single-stream IOPS will always be better with pure Optane. So if you only have a few client streams / low queue depth, adding SSDs will make things slower.
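
If you want to quantify that before committing to new hardware, a low queue depth rados bench run on a throwaway pool gives a rough idea. This is just a sketch; the pool name "bench" and the PG count are examples, not anything from your cluster:

  # create a disposable benchmark pool (name and PG count are examples)
  ceph osd pool create bench 64 64

  # 4 KiB writes, a single in-flight op (-t 1): the low queue depth case
  # where pure Optane keeps its latency advantage
  rados bench -p bench 60 write -b 4096 -t 1 --no-cleanup

  # same test with many concurrent ops to see high queue depth behaviour
  rados bench -p bench 60 write -b 4096 -t 64 --no-cleanup

  # remove the benchmark objects when done
  rados -p bench cleanup

Compare the average latency of the -t 1 runs and the total IOPS of the -t 64 runs between your all-Optane setup and any hybrid setup you try.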

If you have a lot of client streams, you can get higher total IOPS by adding SSDs and using the Optane as WAL/DB. 4 SSDs per Optane could be in the right ballpark, but you should stress test your cluster and measure the %busy of all your disks (Optane + SSDs) to make sure they are equally busy at that ratio; if the Optane is less busy, you can add more SSDs and increase overall IOPS.
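
As a minimal sketch of how that hybrid layout could be deployed with ceph-volume (the device paths are assumptions, adjust to your hardware):

  # 4 SSD data devices, with block.db (WAL lives with the DB by default)
  # carved out of the Optane -- device paths are hypothetical
  ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc /dev/sdd \
      --db-devices /dev/nvme0n1

  # while running a load test, watch %util to check whether the Optane
  # and the SSDs are roughly equally busy
  iostat -x 1

If %util on the Optane stays well below that of the SSDs under load, it still has headroom for more SSDs behind it.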

So it depends on what you need: if you want the highest total IOPS and can sacrifice some latency, a hybrid solution is better; if you need the absolute lowest latency, stay with all Optane. As stated, Ceph has its own overhead, so the relative latency gain comes at a high cost.

For caching: I would not recommend bcache/dm-cache except in front of HDDs. dm-writecache can possibly show slight write-latency improvements and may be a middle ground if you really want to squeeze latency.
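
If you do want to experiment with dm-writecache, the easiest route is usually through LVM (needs a reasonably recent lvm2, roughly 2.03 or newer). Everything below is only a sketch with made-up VG/LV names and sizes:

  # carve a cache LV out of the Optane (VG "vg0", size/name are examples)
  lvcreate -n wcache -L 50G vg0 /dev/nvme0n1

  # attach it as a writecache in front of an existing LV holding OSD data
  lvconvert --type writecache --cachevol wcache vg0/osd0

  # detach again later (flushes dirty blocks back to the SSD first)
  lvconvert --splitcache vg0/osd0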

/Maged






 




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
