Sure, it is running on SanDisk InfiniFlash hardware. While the test was running, each drive sustained ~400 MB/s and ~20K IOPS at 100% 4K random write. For the 16-OSD test, the setup was a 16-drive InfiniFlash JBOF with two hosts attached; each host was hardware-zoned to 8 drives. The box's aggregate bandwidth is capped at ~12 GB/s read/write when fully populated with 64 drives, so with 16 drives it delivers 16 x 400 MB/s = ~6.4 GB/s. For this 16-drive box, 100% read IOPS is ~800K and 100% write IOPS is ~300K. Each drive had separate partitions for BlueStore data/WAL/DB.

Thanks & Regards
Somnath

-----Original Message-----
From: Igor Fedotov [mailto:ifedotov@xxxxxxxxxxxx]
Sent: Wednesday, November 09, 2016 3:26 PM
To: Somnath Roy; ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: Bluestore with rocksdb vs ZS

Hi Somnath,

could you please describe the storage hardware used in your benchmarking: what drives, how they are organized, etc.? What are the performance characteristics of the storage subsystem without Ceph?

Thanks in advance,
Igor

On 11/10/2016 1:57 AM, Somnath Roy wrote:
> Hi,
> Here are the slides we presented in today's performance meeting.
>
> https://drive.google.com/file/d/0B7W-S0z_ymMJZXI3bkZLX3Z2U0E/view?usp=sharing
>
> Feel free to come back if anybody has any questions.
>
> Thanks & Regards
> Somnath
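
For reference, a minimal Python sketch of the arithmetic behind the aggregate figure quoted at the top of the mail. It assumes (not stated in the original) that per-drive throughput is flat at ~400 MB/s and that the chassis cap can be treated as a single fixed 12 GB/s limit:

# Rough sanity check of the aggregate throughput figure quoted above.
# Assumptions (not from the original mail): per-drive throughput is a
# constant ~400 MB/s and the chassis cap is a single 12 GB/s limit.

PER_DRIVE_BW_MBPS = 400        # ~400 MB/s per drive, 100% 4K random write
BOX_BW_LIMIT_MBPS = 12_000     # ~12 GB/s chassis limit (fully populated, 64 drives)

def expected_aggregate_bw(drive_count: int) -> float:
    """Aggregate bandwidth in MB/s, capped by the chassis limit."""
    return min(drive_count * PER_DRIVE_BW_MBPS, BOX_BW_LIMIT_MBPS)

if __name__ == "__main__":
    drives = 16
    print(f"{drives} drives -> ~{expected_aggregate_bw(drives) / 1000:.1f} GB/s")
    # 16 drives -> ~6.4 GB/s, matching the figure quoted in the mail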