Re: radosgw performance with small files

My hardware setup

One OSD host:
  - EL6
  - 10 spinning disks, each with the following configuration:
      - sda (hpsa0): 450GB (0%) RAID-0 == 1 x 450GB 15K SAS/6
  - 31GB memory
  - 1 Gb/s ethernet link

Monitor and gateway hosts have the same configuration, with just one disk.

I am benchmarking newstore performance with small files using radosgw. I am hitting a bottleneck when writing data through radosgw, even though I get very good write throughput when writing the same small files (60K) directly with librados.
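For anyone who wants to reproduce something similar, a librados-level 60K write test can be driven with rados bench; the pool name and concurrency below are placeholders, not my exact invocation:

   # write 60 KB (61440-byte) objects for 60 seconds with 16 concurrent writers
   sudo rados bench -p <test-pool> 60 write -b 61440 -t 16 --no-cleanup
   # remove the benchmark objects afterwards
   sudo rados -p <test-pool> cleanup

Pushing the same 60K payloads through the S3 API on the single radosgw host is where the write speed drops.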

Regards
Srikanth


On Wed, May 20, 2015 at 8:03 AM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
On 05/19/2015 11:31 AM, Srikanth Madugundi wrote:
Hi,

I am seeing a write performance hit with small files (60K) using radosgw.
The radosgw is configured to run with 600 threads. Here is the write
speed I get with a file size of 60K:


# sudo ceph -s
     cluster e445e46e-4d84-4606-9923-16fff64446dc
      health HEALTH_OK
      monmap e1: 1 mons at {osd187=13.24.0.7:6789/0}, election epoch 1, quorum 0 osd187
      osdmap e205: 28 osds: 22 up, 22 in
       pgmap v17007: 1078 pgs, 9 pools, 154 GB data, 653 kobjects
             292 GB used, 8709 GB / 9002 GB avail
                 1078 active+clean
   client io 1117 kB/s rd, *2878 kB/s wr*, 2513 op/s

It appears that you have 22 OSDs, and between reads and writes that works out to ~114 ops/s per OSD (2513 op/s across 22 OSDs).  How many ops/s per disk are you trying to achieve?

#
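For reference, the 600-thread setting mentioned above is normally applied in ceph.conf under the gateway's client section.  A minimal sketch, with the section name only as an example:

   [client.radosgw.gateway]
       rgw thread pool size = 600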


If I run the same script with larger file sizes (1MB-3MB), I get a better
write speed.

Generally larger files will do better for a variety of reasons, but the primary one is that the data will be laid out more sequentially on disk.  Assuming your OSDs are on spinning disks, this is a big advantage.



# sudo ceph -s
     cluster e445e46e-4d84-4606-9923-16fff64446dc
      health HEALTH_OK
      monmap e1: 1 mons at {osd187=13.24.0.79:6789/0}, election epoch 1, quorum 0 osd187
      osdmap e205: 28 osds: 22 up, 22 in
       pgmap v16883: 1078 pgs, 9 pools, 125 GB data, 140 kobjects
             192 GB used, 8809 GB / 9002 GB avail
                 1078 active+clean
   client io *105 MB/s wr*, 1839 op/s
#

My cluster has 2 OSD hosts running a total of 20 OSD daemons, 1 mon host and 1
radosgw host. Is the bottleneck coming from the single radosgw process?
If so, is it possible to run radosgw in multi-process mode?

I think before anyone can answer your question, it might help to detail what your hardware setup is, how you are running the tests, and what kind of performance you'd like to achieve.
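That said, to the multi-process question: radosgw is a single (multi-threaded) process, and the usual way to scale the gateway is to run several radosgw instances, either on one host with different ports or on separate hosts, behind a load balancer.  A minimal ceph.conf sketch, where the instance names, host and ports are assumptions rather than a tested config:

   [client.radosgw.gw1]
       host = gateway1
       rgw frontends = "civetweb port=7480"
       rgw thread pool size = 600

   [client.radosgw.gw2]
       host = gateway1
       rgw frontends = "civetweb port=7481"
       rgw thread pool size = 600

Each instance needs its own cephx user and keyring, and something like HAProxy or round-robin DNS in front spreads the S3 clients across them.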


Regards
Srikanth



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

