Re: Balancer: uneven OSDs

Hi Oliver,

Thank you for the response. I did verify that the min-compat-client level is indeed luminous (see below), I have no kernel-mapped RBD clients, and ceph versions reports Mimic. The output of ceph balancer status is also below. One thing to note: I enabled the balancer only after I had already filled the cluster, not from the onset. I had hoped that wouldn't matter, but your comment "if the compat-level is too old for upmap, you'll only find a small warning about that in the logfiles" leads me to believe that it will *not* work this way. Please confirm, and let me know what message to look for in /var/log/ceph (a rough sketch of the checks I have in mind follows the output below).

Thank you!

root@hostadmin:~# ceph balancer status
{
    "active": true,
    "plans": [],
    "mode": "upmap"
}



root@hostadmin:~# ceph features
{
    "mon": [
        {
            "features": "0x3ffddff8ffacfffb",
            "release": "luminous",
            "num": 3
        }
    ],
    "osd": [
        {
            "features": "0x3ffddff8ffacfffb",
            "release": "luminous",
            "num": 7
        }
    ],
    "client": [
        {
            "features": "0x3ffddff8ffacfffb",
            "release": "luminous",
            "num": 1
        }
    ],
    "mgr": [
        {
            "features": "0x3ffddff8ffacfffb",
            "release": "luminous",
            "num": 3
        }
    ]
}
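As mentioned above, these are the additional checks I have in mind; I'm assuming the standard mon/mgr commands here, so treat this as a sketch and correct me if any of it is off:

root@hostadmin:~# ceph osd get-require-min-compat-client
root@hostadmin:~# ceph osd dump | grep require_min_compat_client
root@hostadmin:~# ceph balancer eval

The first two should both report luminous if upmap is allowed to place PG mappings, and ceph balancer eval should print the current distribution score (lower is better).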





From: Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Date: 05/29/2019 11:13 AM
Subject: [EXTERNAL] Re: Balancer: uneven OSDs
Sent by: "ceph-users" <ceph-users-bounces@xxxxxxxxxxxxxx>





Hi Tarek,

what's the output of "ceph balancer status"?
In case you are using "upmap" mode, you must make sure the cluster's min-compat-client (require_min_compat_client) is set to at least luminous:
http://docs.ceph.com/docs/mimic/rados/operations/upmap/
Of course, please be aware that your clients must be recent enough (especially for kernel clients).
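If it still needs to be raised (all daemons and clients must already be at least luminous), the command from the linked documentation is:

    ceph osd set-require-min-compat-client luminous

The mons will refuse this if any pre-luminous client is still connected; per the same doc page it can be forced with --yes-i-really-mean-it, but that risks cutting off those old clients.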

Sadly, if the compat-level is too old for upmap, you'll only find a small warning about that in the logfiles,
but no error on the terminal when activating the balancer, nor any other kind of error or health warning.
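The quickest way to spot that warning is probably to grep the active mgr's log; the exact message text differs a bit between releases, so a broad search along these lines should do (default log location assumed):

    grep -i upmap /var/log/ceph/ceph-mgr.*.log
    grep -i compat /var/log/ceph/ceph-mgr.*.log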

Cheers,
Oliver

On 29.05.19 at 17:52, Tarek Zegar wrote:
> Can anyone help with this? Why can't I optimize this cluster? The PG counts and data distribution are way off.
> __________________
>
> I enabled the balancer plugin and even tried to manually invoke it, but it won't allow any changes. Looking at ceph osd df, the distribution isn't even at all. Thoughts?
>
> root@hostadmin:~# ceph osd df
> ID CLASS WEIGHT  REWEIGHT SIZE   USE     AVAIL    %USE  VAR  PGS
>  1 hdd   0.00980        0    0 B     0 B      0 B      0    0    0
>  3 hdd   0.00980  1.00000 10 GiB 8.3 GiB  1.7 GiB  82.83 1.14  156
>  6 hdd   0.00980  1.00000 10 GiB 8.4 GiB  1.6 GiB  83.77 1.15  144
>  0 hdd   0.00980        0    0 B     0 B      0 B      0    0    0
>  5 hdd   0.00980  1.00000 10 GiB 9.0 GiB 1021 MiB  90.03 1.23  159
>  7 hdd   0.00980  1.00000 10 GiB 7.7 GiB  2.3 GiB  76.57 1.05  141
>  2 hdd   0.00980  1.00000 10 GiB 5.5 GiB  4.5 GiB  55.42 0.76   90
>  4 hdd   0.00980  1.00000 10 GiB 5.9 GiB  4.1 GiB  58.78 0.81   99
>  8 hdd   0.00980  1.00000 10 GiB 6.3 GiB  3.7 GiB  63.12 0.87  111
>                     TOTAL 90 GiB  53 GiB   37 GiB  72.93
> MIN/MAX VAR: 0.76/1.23  STDDEV: 12.67
>
>
> root@hostadmin:~# osdmaptool om --upmap out.txt --upmap-pool rbd
> osdmaptool: osdmap file 'om'
> writing upmap command output to: out.txt
> checking for upmap cleanups
> upmap, max-count 100, max deviation 0.01 <--- really? It's not even close to 1% across the drives
> limiting to pools rbd (1)
> no upmaps proposed
>
>
> ceph balancer optimize myplan
> Error EALREADY: Unable to find further optimization,or distribution is already perfect
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
>
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



