Re: Ceph osd too full

Hello,

What Kostis said, in particular with regard to changing crush weights (NOT
re-weight).
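
For clarity, the two knobs look like this (osd.16 and the values below are
only placeholder examples):

  # temporary override (the REWEIGHT column, 0.0 - 1.0); this is what
  # reweight-by-utilization touches, and it can get reset to 1 when an
  # OSD is marked out and back in:
  ceph osd reweight 16 0.95

  # persistent CRUSH weight (the WEIGHT column), which is what you want
  # to adjust here:
  ceph osd crush reweight osd.16 4.8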

Also, please include the output of "ceph -s"; insufficient PGs can make OSD
imbalances worse.
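
A quick way to sanity-check that (the pool name is just a placeholder):

  ceph -s
  ceph osd dump | grep ^pool            # pg_num per pool
  ceph osd pool get <poolname> pg_num

As a rough rule of thumb you want on the order of 100 PG replicas per OSD;
with much less than that the pseudo-random placement gets lumpy.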

Look at your output of "ceph df detail" and "ceph osd tree".
Find the worst outliers and carefully (a few % at most) adjust their crush
weights, down for the fullest and up for the emptiest.
Keep an eye on your host weights (in the tree output); ultimately you want
your hosts to stay at the same weight.
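
A minimal sketch of one such nudge, with made-up OSD numbers: lower the
fullest and raise the emptiest OSD on the same host by the same amount so
the host weight stays put:

  ceph osd df                            # spot outliers via %USE / VAR
  ceph osd crush reweight osd.16 4.9     # fullest:  5.0 -> 4.9
  ceph osd crush reweight osd.12 5.1     # emptiest: 5.0 -> 5.1
  ceph osd tree                          # the host weight should be unchanged

Let the resulting backfill finish and re-check before nudging further.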

This is the output for one of my storage nodes after all the juggling;
now all OSDs are within 100GB or 2% of each other, but as you can see
some OSDs needed a LOT of nudging (the default weight was 5):

ID WEIGHT  REWEIGHT SIZE  USE    AVAIL  %USE  VAR  
11 5.09999  1.00000 5411G   759G  4652G 14.04 0.80 
12 5.00000  1.00000 5411G   725G  4686G 13.40 0.77 
13 4.70000  1.00000 5411G   797G  4614G 14.74 0.84 
14 4.79999  1.00000 5411G   786G  4625G 14.53 0.83 
15 5.59999  1.00000 5411G   752G  4658G 13.91 0.80 
16 4.79999  1.00000 5411G   801G  4610G 14.81 0.85 
17 5.29999  1.00000 5411G   734G  4677G 13.57 0.78 

Christian

On Thu, 16 Jun 2016 00:57:00 +0300 Kostis Fardelas wrote:

> Hi Hauke,
> you could increase the mon/osd full/nearfull ratios, but at this level
> of disk space scarcity things may need your constant attention,
> especially in case of a failure, given the risk of the cluster blocking
> IO. Modifying crush weights may be of use too.
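
For reference, raising those ratios on a pre-Luminous (Jewel) cluster would
look roughly like this; the values are only examples and should still leave
the cluster headroom to recover:

  ceph pg set_nearfull_ratio 0.88
  ceph pg set_full_ratio 0.97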
> 
> Regards,
> Kostis
> 
> On 15 June 2016 at 21:13, Hauke Homburg <hhomburg@xxxxxxxxxxxxxx> wrote:
> >
> > Hello,
> >
> > I have a Ceph Jewel cluster with 5 servers and 40 OSDs.
> > The cluster is very full, but at the moment I cannot use 10 percent of
> > the volume because "ceph health" says some hard disks are too full.
> > They are between 75 and 95 percent full.
> >
> > ceph osd reweight-by-utilization doesn't help. How can I fill the OSDs
> > evenly to use the maximum space?
> >
> > Regards
> >
> > Hauke
> >
> > --
> > www.w3-creative.de
> >
> > www.westchat.de
> >
> >
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


