Re: Radosgw huge traffic to index bucket compared to incoming requests

On 2020-06-18 at 22:18:08,
Simon Leinen <simon.leinen@xxxxxxxxx> wrote:

> Mariusz Gronczewski writes:
> > listing itself is bugged in the version
> > I'm running: https://tracker.ceph.com/issues/45955
> 
> Ouch! Are your OSDs all running the same version as your RadosGW? The
> message looks a bit as if your RadosGW might be a newer version than
> the OSDs, and the OSDs are missing the new extensions to the
> client<->OSD protocol that the optimized bucket-list operation needs.
> 

Versions are the same on every node. Is there a command required to
enable the new extensions?

It was a clean install of Octopus from the upstream Ceph repos (not an
upgrade), but the feature level appears to be stuck at luminous?

ceph features
...
    "client": [
        {
            "features": "0x3f01cfb8ffadffff",
            "release": "luminous",
            "num": 16
        }
    ],
...

Not sure if that's expected or a problem.
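
For what it's worth, the only sanity checks I can think of (just a
sketch, assuming a stock Octopus deployment) are comparing the daemon
versions and the cluster-wide OSD release gate:

    # every daemon should report 15.2.x (octopus)
    ceph versions

    # the OSD feature gate; should print "require_osd_release octopus"
    ceph osd dump | grep require_osd_release

If require_osd_release is already at octopus then, as far as I can
tell, there is nothing left to enable by hand.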

> > But yes, our structure is generally /bucket/prefix/prefix/file, so
> > there aren't many big directories (we're migrating from GFS, where
> > that was a problem)
> 
> >> Paul Emmerich has written about performance issues with large
> >> buckets on this list, see
> >> https://lists.ceph.io/hyperkitty/list/dev@xxxxxxx/thread/36P62BOOCJBVVJCVUX5F5J7KYCGAAICV/
> >> 
> >> Let's say that there are opportunities for further improvements.
> >> 
> >> You could look for the specific queries that cause the high read
> >> load in your system.  Maybe there's something that can be done on
> >> the client side.  This could also provide input for Ceph
> >> development as to what kinds of index operations are used by
> >> applications "in the wild".  Those might be worth optimizing first
> >> :-)  
> 
> > Is there a way to debug which query exactly is causing that?
> 
> What I usually do is grep through the HTTP request logs of the
> front-end proxy/load balancer (Nginx in our case), and look for GET
> requests on a bucket that have a long duration.  It's a bit crude, I
> know.  (If someone knows better techniques for this, I'd also be
> interested! Maybe something based on something like
> Jaeger/OpenTracing, or clever log correlation?)
> 

I did some digging and it appears most of our index traffic was
developers doing some cleanup and monitoring the progress by basically
ls-ing over and over again, so at least that part of the problem should
subside once the migration is over.

AFAIK there isn't really any way to "mark a request for tracing" in
Ceph, so sadly any tracing pretty much ends at radosgw.
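
For the archives, the kind of grep Simon describes boils down to
something like this (a sketch only; it assumes a custom Nginx
log_format with $request_time appended as the last field, which the
default "combined" format doesn't have):

    # bucket GETs slower than one second: timestamp, URI, duration
    awk '$6 ~ /GET/ && $NF+0 > 1.0 {print $4, $7, $NF}' /var/log/nginx/access.log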

> > Currently there is a lot of incoming traffic (mostly from aws cli
> > sync) as we're migrating data over but that's at most hundreds of
> > requests per sec.  
> 
> >>   
> >> > running 15.2.3, nothing special in terms of tuning aside from
> >> > disabling some logging so as not to overflow the logs.
> >>   
> >> > We've had a similar test cluster on 12.x (and way slower hardware)
> >> > getting similar traffic and haven't observed that magnitude of
> >> > difference.    
> >> 
> >> Was your bucket index sharded in 12.x?  
> 
> > we didn't touch the default settings, so I assume not? "radosgw-admin
> > metadata get" and "radosgw-admin bucket stats" don't say anything
> > about shards on the old cluster, while on the new cluster there are
> > anywhere from 11 to a few hundred on the biggest buckets.
> 
> Yes, I think it's the sharding that causes the read amplification.
> 
> >> Hm, I don't understand enough about the operations that this
> >> represents, but maybe one of the RadosGW developers can explain
> >> why a single OSD would perform so many similar requests in such a
> >> short timeframe.  
> 
> > I'm getting similar logs on any OSD/PG that takes part in the
> > .index pool
> 
> Right, that's what I thought.  Again, I can't tell whether these log
> messages are to be expected... the repetitions look a bit odd.
> 
> Best regards,

Well, it's the same requests over and over again, so with no cache (or
one that's too small) that's to be expected. I guess I'll just wait for
the next release.
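
For completeness, on the sharding point above: a rough way to see how
many index shards each bucket ended up with (a sketch; it assumes jq is
installed, and the exact field name may differ between releases):

    # print the index shard count for every bucket
    for b in $(radosgw-admin bucket list | jq -r '.[]'); do
        printf '%s: ' "$b"
        radosgw-admin bucket stats --bucket="$b" | grep -m1 num_shards
    done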

Cheers


-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T:   [+48] 22 380 13 13
NOC: [+48] 22 380 10 20
E: admin@xxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



