Re: How does IOPS/latency scale for additional OSDs? (Intel S3610 SATA SSD, for block storage use case)

Hi!

Latency doesn't scale with the number of OSDs at all; IOPS scale almost linearly, but are ultimately bounded by CPU usage. Also, a single RBD client usually doesn't deliver more than 20-30k read IOPS and 10-15k write IOPS.

You can run more than 1 OSD per drive if you think you have enough CPU for that. The S3610 spec is up to 84k read IOPS / 28k write IOPS, which is probably enough for 2 OSDs per drive. Micron also observed reduced tail latency with 2x OSDs per drive. However, I'd start with 1 OSD per drive so as not to overload the CPUs. At least if they're not 2x Xeon Platinum :)
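
As a rough sanity check of how those numbers combine, here is a minimal back-of-the-envelope sketch in Python. The per-drive figures are the S3610 spec values quoted above and replication 3 comes from your setup; the ~2x BlueStore WAL/metadata write overhead and the per-client caps are my assumptions (the caps are just the rough numbers from my first paragraph), and CPU limits are not modeled at all, so treat the output as an order-of-magnitude estimate, not a benchmark.

# Back-of-the-envelope IOPS estimate for the proposed 3-node cluster.
# Assumptions (not measured): replication 3, ~2x extra write cost for
# BlueStore WAL/metadata, per-drive figures from the S3610 datasheet,
# and the single-RBD-client ceilings quoted above. CPU is ignored here,
# although in practice it is usually the real bound.

DRIVES           = 24        # 3 nodes x 8 bays
DRIVE_READ_IOPS  = 84_000    # S3610 800 GB, 4k random read (spec)
DRIVE_WRITE_IOPS = 28_000    # S3610 800 GB, 4k random write (spec)
REPLICATION      = 3         # pool size from the original post
WAL_OVERHEAD     = 2.0       # assumed BlueStore write amplification
CLIENT_READ_CAP  = 30_000    # assumed per-RBD-client read ceiling
CLIENT_WRITE_CAP = 15_000    # assumed per-RBD-client write ceiling

# Aggregate raw capability of the drives.
raw_read  = DRIVES * DRIVE_READ_IOPS
raw_write = DRIVES * DRIVE_WRITE_IOPS

# Reads are served once; each client write is replicated and amplified.
cluster_read  = raw_read
cluster_write = raw_write / (REPLICATION * WAL_OVERHEAD)

print(f"theoretical cluster read  IOPS: {cluster_read:,.0f}")
print(f"theoretical cluster write IOPS: {cluster_write:,.0f}")

# A single RBD client won't see anywhere near that; it is capped by its
# own queue depth and per-op latency long before the drives are.
print(f"single client read  IOPS: ~{min(cluster_read,  CLIENT_READ_CAP):,}")
print(f"single client write IOPS: ~{min(cluster_write, CLIENT_WRITE_CAP):,.0f}")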

Hi,

I'm running a 3-node Ceph cluster for VM block storage (Proxmox/KVM).

Replication is set to 3.

Previously, we were running 1 x Intel Optane 905P 960 GB [1] disk per
node, with 4 x OSDs per drive, for total usable storage of 960 GB.

Performance was good even without significant tuning, which I assume
is largely because of the Optane disks.

However, we need more storage space.

We have some old 800 GB SSDs we could potentially use (Intel S3610
[2]).

I know it's possible to put the WAL/RocksDB on an Optane disk and have
normal SSDs for the OSD data. I assume we'd go down to a single OSD
per disk if running normal SATA SSDs. However, other people are saying
the performance gain from this isn't that great (e.g.
https://yourcmc.ru/wiki/Ceph_performance).
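
If we did reuse the existing Optane per node for block.db, the sizing seems like simple arithmetic. A minimal sketch, assuming one 960 GB Optane and 8 x 800 GB S3610 per node and an even split of the Optane across the node's OSDs (the even split is my assumption, not a recommendation from anywhere):

# Hypothetical block.db sizing on the existing Optane, assuming one
# 960 GB Optane 905P and 8 x 800 GB S3610 data drives per node.

OPTANE_GB   = 960
DATA_SSDS   = 8
DATA_SSD_GB = 800

db_per_osd_gb = OPTANE_GB / DATA_SSDS        # even split of the Optane
db_fraction   = db_per_osd_gb / DATA_SSD_GB  # DB size relative to data device

print(f"block.db per OSD : {db_per_osd_gb:.0f} GB")   # 120 GB
print(f"ratio to data dev: {db_fraction:.1%}")        # 15.0%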

Each of our 3 nodes has 8 drive bays, so we could populate them with
24 x 800 GB SSDs in total. My questions are:

	* For the Intel S3610 - should we still run with 1 OSD per disk?
	* How does performance (IOPS and latency) scale as the number of
disks increases? (This is for VM block storage.)

Thanks,
Victor

Links:
------
[1] https://ark.intel.com/content/www/us/en/ark/products/129834/intel-optane-ssd-905p-series-960gb-1-2-height-pcie-x4-20nm-3d-xpoint.html
[2] https://ark.intel.com/content/www/us/en/ark/products/82936/intel-ssd-dc-s3610-series-800gb-2-5in-sata-6gb-s-20nm-mlc.html
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


