Re: Performance issues related to scrubbing

Hello,

On Wed, 3 Feb 2016 17:48:02 -0800 Cullen King wrote:

> Hello,
> 
> I've been trying to nail down a nasty performance issue related to
> scrubbing. I am mostly using radosgw with a handful of buckets containing
> millions of various sized objects. When ceph scrubs, both regular and
> deep, radosgw blocks on external requests, and my cluster has a bunch of
> requests that have blocked for > 32 seconds. Frequently OSDs are marked
> down.
>   
From my own (painful) experiences, let me state this:

1. When your cluster runs out of steam during deep scrubs, drop what
you're doing and order more HW (OSDs), because a cluster that struggles
with deep scrubs will also be in trouble when doing recoveries.

2. If your cluster is inconvenienced by even mere (non-deep) scrubs,
you're really in trouble.
Threaten the penny pincher with bodily violence and have that new HW
phased in yesterday.

> According to atop, the OSDs being deep scrubbed are reading at only 5mb/s
> to 8mb/s, and a scrub of a 6.4gb placement group takes 10-20 minutes.
> 
> Here's a screenshot of atop from a node:
> https://s3.amazonaws.com/rwgps/screenshots/DgSSRyeF.png
>   
This looks familiar.
At this point the competing read requests for all the objects being
scrubbed clash with regular write requests and completely saturate your
HDDs (about 120 IOPS and 85% busy according to your atop screenshot).
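The arithmetic behind that ceiling can be sketched; note the ~64 KiB
average read size below is my assumption for a bucket full of small
radosgw objects, not something measured on your cluster:

```python
# Back-of-the-envelope: a 7200rpm SATA disk tops out around 120 IOPS.
# With millions of smallish objects, scrub reads are close to random,
# so throughput is roughly IOPS * average read size per object.
iops = 120            # typical for a 7200rpm drive
avg_read = 64 * 1024  # assumed ~64 KiB per object read (hypothetical)

throughput_mb = iops * avg_read / 1e6
print(round(throughput_mb, 1))  # ~7.9 MB/s, in line with the observed 5-8 MB/s
```

Which is why the observed 5-8 MB/s scrub rate is about what a single
saturated spinner can deliver for this access pattern.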

There are Ceph configuration options that can mitigate this to some
extent and which I don't see in your config, like
"osd_scrub_load_threshold" and "osd_scrub_sleep", along with the various
IO priority settings.
However, the points above still stand.
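As a rough illustration, such settings might look like this in ceph.conf
(the values here are examples, not recommendations; check the defaults
and semantics for your release before copying anything):

```ini
[osd]
# Don't start new scrubs while the load average is above this
osd_scrub_load_threshold = 0.5
# Sleep this many seconds between scrub chunks, leaving IO room for clients
osd_scrub_sleep = 0.1
# Lower the IO priority of the disk thread used for scrubbing
# (only effective with the CFQ scheduler)
osd_disk_thread_ioprio_class = idle
osd_disk_thread_ioprio_priority = 7
```

Settings like these trade scrub duration for client latency; your scrubs
will take longer, but radosgw should stop blocking.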

XFS defragmentation might help, significantly so if your FS is badly
fragmented. But again, this is only a temporary band-aid.
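A sketch of how one might check and then fix fragmentation; /dev/sdb1 and
the mount point below are placeholders for one of your OSD data
partitions, and xfs_fsr should be run off-peak since it generates
considerable IO of its own:

```shell
# Report the file fragmentation factor (read-only, safe to run anytime)
xfs_db -r -c frag /dev/sdb1

# Defragment the mounted filesystem, limiting the run to 2 hours
# (-t takes seconds)
xfs_fsr -t 7200 /var/lib/ceph/osd/ceph-0
```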

> First question: is this a reasonable speed for scrubbing, given a very
> lightly used cluster? Here's some cluster details:
> 
> deploy@drexler:~$ ceph --version
> ceph version 0.94.1-5-g85a68f9 (85a68f9a8237f7e74f44a1d1fbbd6cb4ac50f8e8)
> 
> 
> 2x Xeon E5-2630 per node, 64gb of ram per node.
>  
More memory can help by keeping hot objects in the page cache (so the
actual disks need not serve those reads and can devote their full IOPS
capacity to writes).
A lot of memory (and the correct sysctl settings) will also allow for a
large slab cache, keeping all those directory entries and inodes in
memory without having to go to disk to get them.
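For example, a sysctl fragment along those lines (illustrative values,
tune against your own workload):

```ini
# /etc/sysctl.d/90-ceph-osd.conf
# Prefer keeping dentry/inode slab objects cached over reclaiming them
vm.vfs_cache_pressure = 10
# Let more dirty data accumulate before forcing synchronous writeback
vm.dirty_ratio = 20
vm.dirty_background_ratio = 5
```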

You seem to be just fine CPU wise. 

> 
> deploy@drexler:~$ ceph status
>     cluster 234c6825-0e2b-4256-a710-71d29f4f023e
>      health HEALTH_WARN
>             118 requests are blocked > 32 sec
>      monmap e1: 3 mons at {drexler=
> 10.0.0.36:6789/0,lucy=10.0.0.38:6789/0,paley=10.0.0.34:6789/0}
>             election epoch 296, quorum 0,1,2 paley,drexler,lucy
>      mdsmap e19989: 1/1/1 up {0=lucy=up:active}, 1 up:standby
>      osdmap e1115: 12 osds: 12 up, 12 in
>       pgmap v21748062: 1424 pgs, 17 pools, 3185 GB data, 20493 kobjects
>             10060 GB used, 34629 GB / 44690 GB avail
>                 1422 active+clean
>                    1 active+clean+scrubbing+deep
>                    1 active+clean+scrubbing
>   client io 721 kB/s rd, 33398 B/s wr, 53 op/s
>   
You want to avoid having scrubs go on willy-nilly in parallel and at
peak times, even IF your cluster is capable of handling them.

Depending on how busy your cluster is and its usage pattern, you may do
what I did: kick off a deep scrub of all OSDs with "ceph osd deep-scrub \*"
at around 01:00 on a Saturday morning.
If your cluster is fast enough, it will finish before 07:00 (without
killing your client performance) and all regular scrubs will from then on
happen in that time frame as well (given the default scrub intervals).
If your cluster isn't fast enough, see my initial 2 points. ^o^
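Sketched as a root crontab entry (hypothetical; adjust the time to your
own quiet window and make sure the ceph binary is on cron's PATH):

```shell
# m h dom mon dow  command
# Kick off a deep scrub of every OSD at 01:00 on Saturdays;
# the backslash keeps cron's shell from glob-expanding the *
0 1 * * 6  /usr/bin/ceph osd deep-scrub \*
```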

> deploy@drexler:~$ ceph osd tree
> ID WEIGHT   TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 43.67999 root default
> -2 14.56000     host paley
>  0  3.64000         osd.0         up  1.00000          1.00000
>  3  3.64000         osd.3         up  1.00000          1.00000
>  6  3.64000         osd.6         up  1.00000          1.00000
>  9  3.64000         osd.9         up  1.00000          1.00000
> -3 14.56000     host lucy
>  1  3.64000         osd.1         up  1.00000          1.00000
>  4  3.64000         osd.4         up  1.00000          1.00000
>  7  3.64000         osd.7         up  1.00000          1.00000
> 11  3.64000         osd.11        up  1.00000          1.00000
> -4 14.56000     host drexler
>  2  3.64000         osd.2         up  1.00000          1.00000
>  5  3.64000         osd.5         up  1.00000          1.00000
>  8  3.64000         osd.8         up  1.00000          1.00000
> 10  3.64000         osd.10        up  1.00000          1.00000
> 
> 
> My OSDs are 4tb 7200rpm Hitachi DeskStars, using XFS, with Samsung 850
> Pro journals (very slow, ordered s3700 replacements, but shouldn't pose
> problems for reading as far as I understand things).   

Just to make sure, these are genuine DeskStars?
I'm asking because AFAIK they are out of production, and their direct
successors, the Toshiba DT drives, (can) have a nasty firmware bug that
totally ruins their performance (for anywhere from ~8 hours per week to
permanently, until power-cycled).

Regards,

Christian

> MONs are co-located
> with OSD nodes, but the nodes are fairly beefy and has very low load.
> Drives are on a expanding backplane, with an LSI SAS3008 controller.
> 
> I have a fairly standard config as well:
> 
> https://gist.github.com/kingcu/aae7373eb62ceb7579da
> 
> I know that I don't have a ton of OSDs, but I'd expect a little better
> performance than this. Checkout munin of my three nodes:
> 
> http://munin.ridewithgps.com/ridewithgps.com/drexler.ridewithgps.com/index.html#disk
> http://munin.ridewithgps.com/ridewithgps.com/paley.ridewithgps.com/index.html#disk
> http://munin.ridewithgps.com/ridewithgps.com/lucy.ridewithgps.com/index.html#disk
> 
> 
> Any input would be appreciated, before I start trying to micro-optimize
> config params, as well as upgrading to Infernalis.
> 
> 
> Cheers,
> 
> Cullen  


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


