Re: [ceph-users] Dramatic performance drop at certain number of objects in pool

Hi Christian,

Ah OK, I didn't see the object size mentioned earlier. I guess storing
small objects directly via RADOS is a fairly rare use case, which
explains the very high object counts.
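
For reference, something along these lines would reproduce that sort of
workload. This is just a rough sketch on my side, not Wade's actual
script; the pool name and conf path are assumptions:

  import os
  import rados

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  ioctx = cluster.open_ioctx('testpool')   # assumed pool name
  try:
      payload = os.urandom(4096)           # 4 KiB per object
      for i in range(1000000):             # push the object count up
          ioctx.write_full('obj-%08d' % i, payload)
  finally:
      ioctx.close()
      cluster.shutdown()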

I'm interested in finding the right balance for RBD, given that object
size is another variable that can be tweaked there. I recall the
UnitedStack folks using 32MB.
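
For anyone wanting to experiment with that, the object size can be set
at image creation time, e.g. via the python-rbd bindings. Again just a
rough sketch; the pool and image names are made up for illustration:

  import rados
  import rbd

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  ioctx = cluster.open_ioctx('rbd')        # assumed pool
  try:
      # order=25 -> 2^25 = 32 MiB objects; the default order of 22 gives 4 MiB
      rbd.RBD().create(ioctx, 'testimage', 100 * 1024**3, order=25)
  finally:
      ioctx.close()
      cluster.shutdown()

The rbd CLI's --order option should do the same thing at the command
line, if I'm not mistaken.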

Cheers,

On 23 June 2016 at 12:28, Christian Balzer <chibi@xxxxxxx> wrote:
> On Thu, 23 Jun 2016 12:01:38 +1000 Blair Bethwaite wrote:
>
>> On 23 June 2016 at 11:41, Wade Holler <wade.holler@xxxxxxxxx> wrote:
>> > Workload is native librados with python.  ALL 4k objects.
>>
>> Was that meant to be 4MB?
>>
> Nope, he means 4K, he's putting lots of small objects via a python script
> into the cluster to test for exactly this problem.
>
> See his original post.
>
>
> Christian
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/



-- 
Cheers,
~Blairo