Re: [ceph-large] radosgw index pool too big

On 1/9/19 9:58 AM, Thomas Bennett wrote:
> Hey,
> 
> I have been having some problems with my NVMe OSDs on a pre-production
> system. We run 1715 OSDs (of which 35 are NVMes).
> 
> We run the buckets.index pool on these NVMes. However, I've started
> seeing slow requests and nearfull warnings on the NVMe OSDs.
> 
> Anyone have any suggestions on how to find the root cause?
> 

Are you storing a lot of (small) objects in the buckets?
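
Something like this should show where the index load is coming from
(the pool name below is the default, adjust to your zone):

  # per-bucket object counts, shard counts and fill status
  radosgw-admin bucket limit check

  # count the omap keys on a single index shard object
  # (index objects are named .dir.<bucket marker>.<shard number>)
  rados -p default.rgw.buckets.index listomapkeys .dir.<marker>.0 | wc -l

Buckets with millions of objects and too few index shards would give you
exactly these huge omaps and slow requests on the index OSDs.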

> Check out this sample of my ceph osd df:
> 
> ID   CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE   VAR  PGS
> 1470  nvme 0.46599  1.00000   476G   341G   134G  71.70  9.87  51
> 
> It seems like way too much metadata. I have been messing around with
> increasing PGs and modifying CRUSH maps to go from host-based to
> rack-based fault tolerance, but I think these bloated omaps were there
> before I started messing around.

How much real data is there in the buckets data pool?
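
For a quick comparison of data vs. index usage (pool names depend on your
zone setup, so treat this as a sketch):

  ceph df detail   # per-pool USED and OBJECTS
  rados df         # object and byte counts per pool

If the data pool holds a very large number of small objects, the index
omaps will grow accordingly even if the raw data size looks modest.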

With only 51 PGs on that NVMe OSD you are on the low side; you will want
this hovering around 150 or even 200 PGs per NVMe OSD to get the best
performance.
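
As a rough sketch (assuming the index pool is replicated with size 3 and
uses the default name default.rgw.buckets.index, adjust for your setup):

  # 35 NVMe OSDs * 150 PGs / 3 replicas ~= 1750, round up to a power of two
  ceph osd pool set default.rgw.buckets.index pg_num 2048
  ceph osd pool set default.rgw.buckets.index pgp_num 2048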

Wido

> 
> Cheers,
> Tom
> 
_______________________________________________
Ceph-large mailing list
Ceph-large@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-large-ceph.com



