[Single OSD performance on SSD] Can't go over 3.2K IOPS

These results are with HT enabled; we have yet to measure performance with HT disabled.
No, I didn't measure I/O time/utilization.

-----Original Message-----
From: Andrey Korolyov [mailto:andrey@xxxxxxx]
Sent: Friday, August 29, 2014 1:03 AM
To: Somnath Roy
Cc: Haomai Wang; ceph-users at lists.ceph.com
Subject: Re: [Single OSD performance on SSD] Can't go over 3.2K IOPS

On Fri, Aug 29, 2014 at 10:37 AM, Somnath Roy <Somnath.Roy at sandisk.com> wrote:
> Thanks Haomai !
>
> Here is some of the data from my setup.
>
>
>
> ----------------------------------------------------------------------
>
> Set up:
>
> --------
>
>
>
> 32-core CPU with HT enabled, 128 GB RAM, one SSD (both journal and
> data) -> one OSD. 5 client machines, each with a 12-core CPU and each
> running two instances of ceph_smalliobench (10 clients total). Network is 10GbE.
>
>
>
> Workload:
>
> -------------
>
>
>
> Small workload: 20K objects of 4K size, and io_size is also 4K random
> read (RR). The intent is to serve the IOs from memory so that it
> uncovers the performance problems within a single OSD.
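>
> For reference, the python-rados loop below approximates that workload;
> it is only a sketch, and the pool name ('rbd') and object naming are
> illustrative rather than what ceph_smalliobench actually does.
>
> import random
> import rados
>
> NUM_OBJECTS = 20000      # 20K objects of 4K each -> ~80 MB, fits in memory
> IO_SIZE = 4096
> NUM_READS = 100000
>
> cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
> cluster.connect()
> ioctx = cluster.open_ioctx('rbd')               # pool name is illustrative
>
> payload = b'\0' * IO_SIZE
> for i in range(NUM_OBJECTS):                    # populate the small dataset
>     ioctx.write_full('smallio_%d' % i, payload)
>
> for _ in range(NUM_READS):                      # 4K random reads, hot in cache
>     ioctx.read('smallio_%d' % random.randrange(NUM_OBJECTS), IO_SIZE, 0)
>
> ioctx.close()
> cluster.shutdown()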
>
>
>
> Results from Firefly:
>
> --------------------------
>
>
>
> Single client throughput is ~14K IOPS, but as the number of clients
> increases, the aggregated throughput does not increase: 10 clients give
> ~15K IOPS. ~9-10 CPU cores are used.
>
>
>
> Result with latest master:
>
> ------------------------------
>
>
>
> Single client is ~14K IOPS, but it scales as the number of clients
> increases: 10 clients give ~107K IOPS. ~25 CPU cores are used.
>
>
>
> ----------------------------------------------------------------------
>
>
>
>
>
> More realistic workload:
>
> -----------------------------
>
> Let's see how it performs when > 90% of the IOs are served from
> disk.
>
> Setup:
>
> -------
>
> 40-CPU-core server as a cluster node (single-node cluster) with 64 GB
> RAM. 8 SSDs -> 8 OSDs. One similar node for monitor and rgw. Another
> node for the client running fio/vdbench. 4 RBDs are configured with the
> 'noshare' option. 40 GbE network.
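>
> (The load itself is generated with fio/vdbench; the python-rbd sketch
> below only illustrates the intent of 'noshare': each image gets its own
> RADOS client instance instead of sharing one. Pool and image names are
> illustrative.)
>
> import random
> import threading
> import rados
> import rbd
>
> IO_SIZE = 4096
> READS_PER_CLIENT = 100000
>
> def stress(image_name):
>     # One RADOS client (and hence one client session) per image,
>     # which is what mapping with 'noshare' is meant to achieve.
>     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>     cluster.connect()
>     ioctx = cluster.open_ioctx('rbd')
>     img = rbd.Image(ioctx, image_name)
>     blocks = img.size() // IO_SIZE
>     for _ in range(READS_PER_CLIENT):
>         img.read(random.randrange(blocks) * IO_SIZE, IO_SIZE)   # 4K random read
>     img.close()
>     ioctx.close()
>     cluster.shutdown()
>
> clients = [threading.Thread(target=stress, args=('rbd%d' % i,)) for i in range(4)]
> for t in clients:
>     t.start()
> for t in clients:
>     t.join()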
>
>
>
> Workload:
>
> ------------
>
>
>
> 8 SSDs are populated, so 8 * 800 GB = ~6.4 TB of data. io_size = 4K RR.
>
>
>
> Results from Firefly:
>
> ------------------------
>
>
>
> Aggregated throughput with 4 RBD clients stressing the cluster in
> parallel is ~20-25K IOPS; ~8-10 CPU cores are used (maybe less, I can't
> remember precisely).
>
>
>
> Results from latest master:
>
> --------------------------------
>
>
>
> Aggregated throughput with 4 RBD clients stressing the cluster in
> parallel is ~120K IOPS; CPU is 7% idle, i.e. ~37-38 of the 40 CPU cores
> are used.
>
>
>
> Hope this helps.
>
>
>

Thanks Roy, the results are very promising!

Just two questions: do the numbers above count HT (logical) cores, or did you normalize to physical cores? And what was the I/O time/utilization percentage in this test (if you measured it)?
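
By utilization I mean device busy time as a fraction of wall-clock time, i.e. what iostat reports as %util. A rough way to sample it from /proc/diskstats (the device name here is just a placeholder):

import time

def busy_ms(dev='sdb'):
    # Field 13 of /proc/diskstats (index 12 after split) is the number of
    # milliseconds the device spent doing I/O.
    with open('/proc/diskstats') as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[12])
    raise ValueError('device not found: ' + dev)

t0, b0 = time.time(), busy_ms()
time.sleep(10)
t1, b1 = time.time(), busy_ms()
print('%%util over %.1fs: %.1f%%' % (t1 - t0, (b1 - b0) / ((t1 - t0) * 1000.0) * 100))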



