Re: [ceph-users] slow request problem


 



On Sun, 14 Jul 2013, Stefan Priebe wrote:
> > On 14.07.2013 18:19, Stefan Priebe wrote:
> > On Sun, 14 Jul 2013, Stefan Priebe - Profihost AG wrote:
> > > Hi sage,
> > > 
> > > On 14.07.2013 at 17:01, Sage Weil <sage@xxxxxxxxxxx> wrote:
> > > 
> > > > On Sun, 14 Jul 2013, Stefan Priebe wrote:
> > > > > Hello list,
> > > > > 
> > > > > might this be a problem due to having too many PGs? I have 370 per
> > > > > OSD instead of 33 per OSD (OSDs*100/3).
> > > > 
> > > > That might exacerbate it.
> > > > 
> > > > Can you try setting
> > > > 
> > > > osd min pg log entries = 50
> > > > osd max pg log entries = 100
> > > 
> > > What exactly does that do? And why is a restart of all OSDs needed?
> > > Thanks!
> > 
> > This limits the size of the pg log.
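(For reference, a sketch of how these options would typically be set cluster-wide in ceph.conf; the [osd] section and option names follow standard Ceph configuration conventions, and each OSD must be restarted for the settings to take effect:)

```ini
[osd]
# Keep the per-PG log short; the defaults at the time were 3000 / 10000.
osd min pg log entries = 50
osd max pg log entries = 100
```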
> > 
> > > 
> > > > across your cluster, restarting your osds, and see if that makes a
> > > > difference?  I'm wondering if this is a problem with pg log rewrites
> > > > after
> > > > peering.  Note that adding that option and restarting isn't enough to
> > > > trigger the trim; you have to hit the cluster with some IO too, and (if
> > > > this is the source of your problem) the trim itself might be expensive.
> > > > So add it, restart, do a bunch of io (to all pools/pgs if you can), and
> > > > then see if the problem is still present?
> > > 
> > > Will try, but I can't produce a write to every PG; it's a production
> > > cluster with KVM RBD.  It does see 800-1200 IOPS, though.
> > 
> > Hmm, if this is a production cluster, I would be careful, then!  Setting
> > the pg logs too short can lead to backfill, which is very expensive (as
> > you know).
> > 
> > The defaults are 3000 / 10000, so maybe try something less aggressive like
> > changing min to 500?
> 
> I've lowered the values to 500 / 1500; it seems to reduce the impact but
> does not solve the problem.

This suggests that the problem is the pg log rewrites that are an inherent 
part of cuttlefish.  They are replaced with improved rewrite logic in 0.66 
or so, so dumpling will be better.  I suspect that having a large number 
of pgs is exacerbating the issue for you.

We think there is still a different peering performance problem that Sam 
and paravoid have been trying to track down, but I believe in that case 
reducing the pg log sizes didn't have much effect.  (Maybe one of them can 
chime in here.)

This was unfortunately something we failed to catch before cuttlefish was 
released.  One of the main focuses right now is creating large clusters 
and observing peering and recovery to make sure we don't repeat the same 
sort of mistake for dumpling!

sage



> 
> Stefan
> 
> > Also, I think
> > 
> >   ceph osd tell \* injectargs '--osd-min-pg-log-entries 500'
> > 
> > should work as well.  But again, be aware that lowering the value will
> > incur a trim that may in itself be a bit expensive (if this is the source
> > of the problem).
> > 
> > It is probably worth running ceph pg dump | grep $some_random_pg and
> > watching the 'v' column over time (say, a minute or two) to see how
> > quickly pg events are being generated on your cluster. This will give you
> > a sense of how much time 500 (or however many) pg log entries covers!
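(To make that last point concrete, here is a rough back-of-the-envelope sketch; the cluster numbers are illustrative assumptions, not figures from this thread, and it assumes writes are spread evenly across PGs:)

```python
# Back-of-the-envelope: how much wall-clock history does a pg log of a
# given length retain?  Assumes writes are spread evenly across PGs.
# cluster_iops and total_pgs are illustrative assumptions, not figures
# taken from this thread.

def pg_log_coverage_seconds(cluster_iops, total_pgs, log_entries):
    """Seconds of write history covered by `log_entries` entries per PG."""
    writes_per_pg_per_sec = cluster_iops / total_pgs
    return log_entries / writes_per_pg_per_sec

# e.g. ~1000 IOPS spread over a hypothetical 1024 PGs:
print(pg_log_coverage_seconds(1000, 1024, 500))   # 512.0 s, ~8.5 min
print(pg_log_coverage_seconds(1000, 1024, 3000))  # 3072.0 s, ~51 min
```

So with those assumed numbers, a min of 500 still covers several minutes of downtime before a restarted OSD needs a full backfill.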
> > 
> > sage
> > 
> > 
> > > 
> > > > 
> > > > Also note that a lower osd min pg log entries means that the OSD
> > > > cannot be down as long without requiring a backfill (50 IOs per PG).
> > > > These probably aren't the values we want, but I'd like to find out
> > > > whether the pg log rewrites after peering in cuttlefish are the
> > > > culprit here.
> > > 
> > > 
> > > > 
> > > > Thanks!
> > > > 
> > > > > Is there any plan for PG merging?
> > > > 
> > > > Not right now.  :(  I'll talk to Sam, though, to see how difficult it
> > > > would be given the split approach we settled on.
> > > > 
> > > > Thanks!
> > > > sage
> > > > 
> > > > 
> > > > > 
> > > > > Stefan
> > > > > > Hello list,
> > > > > > 
> > > > > > anyone else here who always has problems bringing back an offline
> > > > > > OSD?
> > > > > > Since cuttlefish I'm seeing slow requests for the first 2-5 minutes
> > > > > > after bringing an OSD online again, and that's long enough that the
> > > > > > VMs crash because they think their disk is offline...
> > > > > > 
> > > > > > Under bobtail i never had any problems with that.
> > > > > > 
> > > > > > Please HELP!
> > > > > > 
> > > > > > Greets,
> > > > > > Stefan
> > > > > _______________________________________________
> > > > > ceph-users mailing list
> > > > > ceph-users@xxxxxxxxxxxxxx
> > > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > > > 
> > > > > 
> > > 
> > > 
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> > the body of a message to majordomo@xxxxxxxxxxxxxxx
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > 
> 
> 



