Hi again!

We have run some tests probing the limits of storing very large numbers of buckets through Rados Gateway into Ceph. Our test used a single user for which we removed the default max-buckets limit. It then continuously created containers, both empty ones and ones containing 10 objects of roughly 100 KB of random data each. With 3 parallel processes we saw fairly consistent times of about 500-700 ms per container. This held steady until we reached approximately 3 million containers, after which the time per insert rose sharply to around 1600 ms and is still rising. Because of some hiccups with network equipment the test was aborted a few times, but it was resumed without deleting the containers created in previous runs, so the actual number might be 2.8 or 3.2 million, but it is in that ballpark. We aborted the test at this point.

Based on the advice given earlier (see quoted mail below) that we might be hitting a limit in some per-user data structure, we created another user account, removed its max-buckets limit as well, and restarted the benchmark with that one, _expecting_ the times to drop back to the original range of 500-700 ms. However, what we are seeing is that the times stay at the 1600 ms and higher levels even for that fresh account.

Here is the output of `rados df`, reformatted to fit the email. The clones, degraded, and unfound columns were 0 in all cases and have been left out for clarity:

.rgw
=========================
KB:           1,966,932
objects:      9,094,552
rd:         195,747,645
rd KB:      153,585,472
wr:          30,191,844
wr KB:       10,751,065

.rgw.buckets
=========================
KB:       2,038,313,855
objects:     22,088,103
rd:           5,455,123
rd KB:      408,416,317
wr:         149,377,728
wr KB:    1,882,517,472

.rgw.buckets.index
=========================
KB:                   0
objects:      5,374,376
rd:         267,996,778
rd KB:      262,626,106
wr:         107,142,891
wr KB:                0

.rgw.control
=========================
KB:                   0
objects:              8
rd:                   0
rd KB:                0
wr:                   0
wr KB:                0

.rgw.gc
=========================
KB:                   0
objects:             32
rd:           5,554,407
rd KB:        5,713,942
wr:           8,355,934
wr KB:                0

.rgw.root
=========================
KB:                   1
objects:              3
rd:                 524
rd KB:              346
wr:                   3
wr KB:                3

We would very much like to understand what is going on here in order to decide whether Rados Gateway is a viable basis for our production system (where we expect counts similar to those in the benchmark), or whether we need to investigate using librados directly, which we would like to avoid if possible. Any advice on which configuration parameters to check, or on what additional information we could provide for analysis, would be very welcome.

Cheers,
Daniel

--
Daniel Schneller
Mobile Development Lead

CenterDevice GmbH
Merscheider Straße 1
42699 Solingen
Deutschland

tel: +49 1754155711
daniel.schneller@xxxxxxxxxxxxxxxx
www.centerdevice.com
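P.S.: In case it helps with reproducing this, here is roughly what each of the three parallel benchmark workers does. This is a stripped-down sketch using boto; the endpoint, credentials, and bucket naming are placeholders, and the split between empty and 10-object containers is simplified compared to the real script.

#!/usr/bin/env python
# Sketch of one benchmark worker (three of these run in parallel).
# Endpoint, credentials, and naming below are placeholders, not our
# real configuration.
import os
import time
import uuid

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',        # placeholder
    aws_secret_access_key='SECRET_KEY',    # placeholder
    host='rgw.example.com',                # placeholder RGW endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

count = 0
while True:
    name = 'bench-%s' % uuid.uuid4()
    start = time.time()
    bucket = conn.create_bucket(name)
    if count % 2 == 0:
        # every other container gets 10 objects of ~100 KB random data
        for n in range(10):
            key = bucket.new_key('obj-%d' % n)
            key.set_contents_from_string(os.urandom(100 * 1024))
    elapsed_ms = (time.time() - start) * 1000
    print('%s: %.0f ms' % (name, elapsed_ms))
    count += 1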