Re: 40Mil objects in S3 rados pool / how calculate PGs

> On 14 June 2016 at 10:10, Ansgar Jazdzewski <a.jazdzewski@xxxxxxxxxxxxxx> wrote:
> 
> 
> Hi,
> 
> we are using Ceph and radosGW to store images (~300 kB each) in S3;
> when it comes to deep-scrubbing we are facing task timeouts (> 30s ...)
> 
> my question is:
> 
> with that many objects/files, is it better to calculate the
> PGs based on object count instead of volume size? And how should
> that be done?
> 

Do you have bucket sharding enabled?

And how many objects do you have in a single bucket?

If sharding is not enabled for the bucket index, you might have very large RADOS objects holding the bucket indexes, and those are hard to scrub.
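For completeness: per-bucket object counts can be checked with `radosgw-admin bucket stats --bucket=<name>`, and for newly created buckets the index shard count can be set in ceph.conf. The section name and shard count below are illustrative, not a recommendation:

```ini
; ceph.conf on the radosgw hosts -- illustrative values only
[client.rgw.gateway1]          ; replace with your actual RGW client section
rgw override bucket index max shards = 16
```

Note that this setting only affects buckets created after the change; existing bucket indexes keep their current layout.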

Wido
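On the original PG question: PG counts are normally sized from the OSD count and replica size, not from the object count; a large object count mainly argues for sharding the bucket index as above. A minimal sketch of the usual rule of thumb (the OSD count, replica size, and per-OSD target below are made-up example numbers):

```shell
# Rule-of-thumb PG sizing: roughly (OSDs * 100) / replica count,
# rounded up to the next power of two. All numbers are hypothetical.
osds=48      # example: number of OSDs in the cluster
size=3       # example: pool replica count
target=100   # example: target PGs per OSD
raw=$(( osds * target / size ))

# Round up to the next power of two.
pg=1
while [ "$pg" -lt "$raw" ]; do
  pg=$(( pg * 2 ))
done
echo "pg_num = $pg"
```

With these example numbers this prints `pg_num = 2048`; plug in your own OSD count and replica size.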

> thanks
> Ansgar
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com