How does IOPS/latency scale for additional OSDs? (Intel S3610 SATA SSD, for block storage use case)

Hi,

I'm running a 3-node Ceph cluster for VM block storage (Proxmox/KVM).

Replication is set to 3.
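For reference, the replication settings are applied at the pool level, roughly like this ("vm-pool" is a placeholder name, and the min_size shown is just our assumption of a sensible companion setting):

    # 3 replicas; keep serving I/O as long as 2 copies are available
    ceph osd pool set vm-pool size 3
    ceph osd pool set vm-pool min_size 2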

Previously, we were running 1 x Intel Optane 905P 960 GB disk per node, with 4 x OSDs per drive, for total usable storage of 960 GB (2880 GB raw / 3 replicas).
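In case it matters, the four OSDs per Optane drive were provisioned with ceph-volume's batch mode, something like the following (device path is illustrative):

    # carve one NVMe device into 4 OSDs to get more parallelism out of it
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1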

Performance was good even without significant tuning, which I assume is largely thanks to the Optane disks.

However, we need more storage space.

We have some old 800 GB SSDs we could potentially use (Intel S3610).

I know it's possible to put the WAL/RocksDB on an Optane disk and use normal SSDs for the OSD data. I assume we'd go down to a single OSD per disk if running normal SATA SSDs. However, other people are saying the performance gain from this isn't that great (e.g. https://yourcmc.ru/wiki/Ceph_performance).
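If we did go that route, I assume each OSD would be created with something like this (device paths are illustrative; the Optane would hold one DB/WAL slice per OSD, and with only --block.db given, the WAL lives alongside the DB):

    # data on the SATA SSD, RocksDB+WAL on a partition of the Optane
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1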

Each of our 3 nodes has 8 drive bays, so we could populate them with 24 x 800 GB SSDs in total (roughly 6.4 TB usable at 3x replication). My questions are:
  1. For the Intel S3610 - should we still run with 1 OSD per disk?
  2. How does performance (IOPS and latency) scale as the number of disks increases? (This is for VM block storage; see the fio sketch below for how we plan to measure it.)
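For question 2, we plan to measure with fio's rbd engine, at queue depth 1 to expose per-op latency and at higher depths for aggregate IOPS; the pool, image, and client names below are placeholders:

    # 4k random writes against a test RBD image, QD1 for latency
    fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio-test \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based --name=rbd-qd1-randwrite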
Thanks,
Victor
