Re: ceph rgw why are reads faster for larger than 64kb object size


 



Environment: Ceph Nautilus 14.2.8 Object Storage
Data nodes: 12 HDD OSDs (12TB drives each) + 2 SSD OSDs for the rgw bucket index and rgw meta pools.

Custom configs (since we are dealing with mostly small objects):
bluestore_min_alloc_size_ssd    4096
bluestore_min_alloc_size_hdd    4096
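
In case it matters, a minimal sketch of how such an override would typically be set, assuming it goes into ceph.conf on the OSD hosts (note that bluestore_min_alloc_size_* is only read when an OSD is first created, so existing OSDs keep their original allocation size unless they are redeployed):

    [osd]
    # read at OSD mkfs time only; has no effect on already-deployed OSDs
    bluestore_min_alloc_size_ssd = 4096
    bluestore_min_alloc_size_hdd = 4096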

Stage                Avg-ResTime
s7-read1KB 48W       43.11
s13-read2KB 48W      42.9
s19-read4KB 48W      42.88
s25-read8KB 48W      43.15
s31-read16KB 48W     43.46
s37-read32KB 48W     43.7
s43-read64KB 48W     44.78
s49-read128KB 48W    8.67
s55-read256KB 48W    13.28

The average read latency for 128KB objects is about 5x lower than for 64KB objects. Why?
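
A rough back-of-the-envelope check on those numbers, using the op-counts from the full table quoted below and assuming "48W" in the stage names means 48 concurrent workers and that cosbench reports Avg-ResTime in milliseconds (neither assumption is stated explicitly above):

    # implied throughput and stage duration from the cosbench table
    # assumptions: 48 concurrent workers, Avg-ResTime in milliseconds
    stages = {
        "read64KB":  (1_929_183, 44.78),   # op-count, avg response time (ms)
        "read128KB": (9_965_032, 8.67),
    }

    for name, (op_count, avg_restime_ms) in stages.items():
        ops_per_sec = 48 * (1000.0 / avg_restime_ms)   # implied aggregate throughput
        runtime_s = op_count / ops_per_sec             # implied stage duration
        print(f"{name}: ~{ops_per_sec:.0f} ops/s, ~{runtime_s:.0f} s")

Both stages come out to roughly the same ~1800 s of wall-clock time, so the 128KB stage really did about 5x as many reads in the same window; the lower latency is not an artifact of the differing op counts.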

From: Ronnie Puthukkeril <rputhukkeril@xxxxxxxxxx>
Date: Monday, April 12, 2021 at 5:34 PM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: ceph rgw why are reads faster for larger than 64kb object size
Sorry about the formatting in the earlier email. Hope this one works.

Below are the read response times from cosbench

Stage                Op-Name   Op-Type   Op-Count   Byte-Count       Avg-ResTime
s7-read1KB 48W       read      read      2004202    2004202000       43.11
s13-read2KB 48W      read      read      2013906    4027812000       42.9
s19-read4KB 48W      read      read      2014701    8058804000       42.88
s25-read8KB 48W      read      read      2002337    16018696000      43.15
s31-read16KB 48W     read      read      1987785    31804560000      43.46
s37-read32KB 48W     read      read      1976190    63238080000      43.7
s43-read64KB 48W     read      read      1929183    123467712000     44.78
s49-read128KB 48W    read      read      9965032    1275524096000    8.67
s55-read256KB 48W    read      read      6505554    1665421824000    13.28

Thanks,
Ronnie

From: Ronnie Puthukkeril <rputhukkeril@xxxxxxxxxx>
Date: Monday, April 12, 2021 at 5:24 PM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: ceph rgw why are reads faster for larger than 64kb object size
Environment: Ceph Nautilus 14.2.8 Object Storage
Data nodes: 12 HDD OSDs (12TB drives each) + 2 SSD OSDs for the rgw bucket index and rgw meta pools.

Custom configs (since we are dealing with mostly small objects):

bluestore_min_alloc_size_ssd    4096
bluestore_min_alloc_size_hdd    4096

Observations from cosbench performance tests
Stage                Op-Type   Op-Count   Byte-Count       Avg-ResTime
s7-read1KB 48W       read      2004202    2004202000       43.11
s13-read2KB 48W      read      2013906    4027812000       42.9
s19-read4KB 48W      read      2014701    8058804000       42.88
s25-read8KB 48W      read      2002337    16018696000      43.15
s31-read16KB 48W     read      1987785    31804560000      43.46
s37-read32KB 48W     read      1976190    63238080000      43.7
s43-read64KB 48W     read      1929183    123467712000     44.78
s49-read128KB 48W    read      9965032    1275524096000    8.67
s55-read256KB 48W    read      6505554    1665421824000    13.28

The response time improves drastically when the object size is greater than 64KB. What could be the reason?

Thanks,
Ronnie






_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


