Re: [ceph-users] slow request problem

On 14.07.2013 21:05, Sage Weil wrote:
On Sun, 14 Jul 2013, Stefan Priebe wrote:
On 14.07.2013 18:19, Sage Weil wrote:
On Sun, 14 Jul 2013, Stefan Priebe - Profihost AG wrote:
Hi sage,

On 14.07.2013 at 17:01, Sage Weil <sage@xxxxxxxxxxx> wrote:

On Sun, 14 Jul 2013, Stefan Priebe wrote:
Hello list,

might this be a problem caused by having too many PGs? I have 370 per OSD
instead of 33 per OSD (OSDs * 100 / 3).

That might exacerbate it.
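The per-OSD figure quoted above follows the common PG-sizing rule of thumb (total PGs for a pool roughly equal to OSDs * 100 / replica count, rounded up to a power of two). As a minimal sketch of that arithmetic, with function names and the power-of-two rounding being illustrative rather than any official Ceph API:

```python
# Sketch of the common Ceph PG-sizing rule of thumb. The helper names
# and the rounding choice are illustrative assumptions, not Ceph code.

def recommended_pg_total(num_osds: int, replicas: int,
                         target_per_osd: int = 100) -> int:
    """Total pg_num for a pool so each OSD holds roughly target_per_osd PGs."""
    raw = num_osds * target_per_osd // replicas
    # pg_num values are conventionally powers of two.
    power = 1
    while power < raw:
        power *= 2
    return power

def pgs_per_osd(pg_num: int, replicas: int, num_osds: int) -> float:
    """Average number of PG replicas that land on each OSD."""
    return pg_num * replicas / num_osds
```

For example, a 12-OSD cluster with 3x replication would get `recommended_pg_total(12, 3)` = 512, and that pool would put `pgs_per_osd(512, 3, 12)` = 128 PG replicas on each OSD; a cluster sitting at 370 PGs per OSD is well above that target.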

Can you try setting

osd min pg log entries = 50
osd max pg log entries = 100

What exactly does that do? And why is a restart of all OSDs needed?
Thanks!

This limits the size of the pg log.
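As a sketch, these options would go in the [osd] section of ceph.conf on each OSD host (the values here are the aggressive ones Sage suggested above; a production cluster would likely want larger ones):

```ini
[osd]
    osd min pg log entries = 50
    osd max pg log entries = 100
```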


across your cluster, restarting your osds, and see if that makes a
difference?  I'm wondering if this is a problem with pg log rewrites
after
peering.  Note that adding that option and restarting isn't enough to
trigger the trim; you have to hit the cluster with some IO too, and (if
this is the source of your problem) the trim itself might be expensive.
So add it, restart, do a bunch of io (to all pools/pgs if you can), and
then see if the problem is still present?

Will try, but I can't produce a write to every PG; it's a production
cluster with KVM RBD. It does, however, see 800-1200 IOPS.

Hmm, if this is a production cluster, I would be careful, then!  Setting
the pg logs too short can lead to backfill, which is very expensive (as
you know).

The defaults are 3000 / 10000, so maybe try something less aggressive like
changing min to 500?

I've lowered the values to 500 / 1500; that seems to reduce the impact but
does not solve the problem.

This suggests that the problem is the pg log rewrites that are an inherent
part of cuttlefish.  This is replaced with improved rewrite logic in 0.66
or so, so dumpling will be better.  I suspect that having a large number
of pgs is exacerbating the issue for you.

We think there is still a different peering performance problem that Sam
and paravoid have been trying to track down, but I believe in that case
reducing the pg log sizes didn't have much effect.  (Maybe one of them can
chime in here.)

This was unfortunately something we failed to catch before cuttlefish was
released.  One of the main focuses right now is in creating large clusters
and observing peering and recovery to make sure we don't repeat the same
sort of mistake for dumpling!

Thanks, Sage, for this information. Some OSD restarts went better with the new settings, but others didn't. It's hard to measure, though, and to compare restarting OSD.X against OSD.Y.

Do you have any recommendations for me? Wait for dumpling and hope that nothing fails until then? Upgrade to 0.66? Or try to move all data to a new pool with fewer PGs?

Thanks!

Greets,
Stefan
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
