Re: problems after upgrade to 14.2.1

After installing the package on each mgr server and restarting the service, I disabled the module, then enabled it with the --force option (it seems I cut that step out of the output I pasted). It was essentially trial and error. After doing this, check that the module shows as enabled (ceph mgr services); you should see something in the output at that point. A minimal sketch of that sequence follows.
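
A minimal sketch of the sequence, assuming RPM-based mgr hosts and the ceph-mgr-dashboard package from the same 14.2.1 repo (use apt on Debian/Ubuntu instead of yum):

# yum install ceph-mgr-dashboard           // on every mgr node
# systemctl restart ceph-mgr.target        // on every mgr node
# ceph mgr module disable dashboard        // then from any admin node
# ceph mgr module enable dashboard --force
# ceph mgr services                        // should now list a "dashboard" URL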

 

I also had to fiddle with the SSL bits; a rough sketch of that part is below.
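
Distilled from my command history further down, and offered as what appeared to work rather than a canonical procedure:

# ceph config set mgr mgr/dashboard/ssl false    // simplest: run the dashboard without SSL; or
# ceph dashboard create-self-signed-cert         // generate a self-signed certificate
# ceph config set mgr mgr/dashboard/ssl true
# ceph mgr module disable dashboard              // bounce the module so the change takes effect
# ceph mgr module enable dashboard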

 

-Brent

 

From: ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx>
Sent: Friday, June 21, 2019 12:08 AM
To: Brent Kennedy <bkennedy@xxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: RE: problems after upgrade to 14.2.1

 

Thanks.  I also didn’t encounter the spillover issue on another cluster upgraded from 13.2.6 -> 14.2.1.  On that cluster the dashboard also didn’t work, but reconfiguring it the way you did worked.  Yes, nice new look. :)

 

I ran commands like yours, but it keeps prompting “all mgr daemons do not support module 'dashboard', pass --force to force enablement”.  Restarting the mgr service didn’t help.
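
For reference, the usual cause of that message is a mgr daemon still registered as a pre-Nautilus version; a quick way to check, using standard commands (not output from this cluster):

# ceph versions                            // per-daemon-type version summary
# ceph mgr metadata | grep ceph_version    // version reported by each mgr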

 

/st wong

 

From: Brent Kennedy <bkennedy@xxxxxxxxxx>
Sent: Friday, June 21, 2019 11:57 AM
To: ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: RE: problems after upgrade to 14.2.1

 

Not sure about the spillover stuff; it didn’t happen to me when I upgraded from Luminous to 14.2.1.  The dashboard thing did happen to me.  It seems you have to disable the dashboard and then re-enable it after installing the separate dashboard rpm.  Also, make sure to restart the mgr services on each node before trying that and after the dashboard package install.  I didn’t end up using the SSL certificate bits.  Also, there is a code issue in 14.2.1 where you cannot log in (the login page just refreshes); the bug report says it’s fixed in 14.2.2.

 

Login page bug report: https://tracker.ceph.com/issues/40051 (manual fix: https://github.com/ceph/ceph/pull/27942/files).  Make sure to change the dashboard password after applying the fix; a sketch of that is below.
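
A minimal example of the password change on Nautilus, assuming the default admin account (substitute your own user and password):

# ceph dashboard ac-user-set-password admin <new-password>

The older set-login-credentials form in my history below should also still work on 14.2.x, but it is being superseded by the ac-user-* commands.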

 

Here is the literal command history from before I had it working again.  Love the new look, though!

2046  ceph mgr module enable dashboard
2047  ceph mgr module disable dashboard
2048  ceph config set mgr mgr/dashboard/ssl false
2049  ceph mgr module disable dashboard
2050  ceph mgr module enable dashboard
2051  ceph dashboard create-self-signed-cert
2052  ceph config set mgr mgr/dashboard/ssl true
2053  ceph mgr module disable dashboard
2054  ceph mgr module enable dashboard
2056  systemctl restart ceph-mgr.target
2057  ceph mgr module disable dashboard
2058  ceph mgr module enable dashboard
2059  ceph dashboard set-login-credentials
2060  systemctl restart ceph-mgr.target
2063  ceph mgr module disable dashboard
2064  ceph mgr module enable dashboard
2065  ceph dashboard ac-user-set-password

 

-Brent

 

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of ST Wong (ITSC)
Sent: Thursday, June 20, 2019 10:24 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: problems after upgrade to 14.2.1

 

Hi all,

 

We recently upgraded a testing cluster from 13.2.4 to 14.2.1 and encountered two problems:

 

1.       We got a BlueFS spillover warning, even though usage is low; it’s a testing cluster without much activity or data:

 

# ceph -s
  cluster:
    id:     cc795498-5d16-4b84-9584-1788d0458be9
    health: HEALTH_WARN
            BlueFS spillover detected on 8 OSD(s)
[snipped]

# ceph health detail
HEALTH_WARN BlueFS spillover detected on 8 OSD(s)
BLUEFS_SPILLOVER BlueFS spillover detected on 8 OSD(s)
     osd.0 spilled over 48 MiB metadata from 'db' device (17 MiB used of 500 MiB) to slow device
     osd.1 spilled over 41 MiB metadata from 'db' device (6.0 MiB used of 500 MiB) to slow device
     osd.2 spilled over 47 MiB metadata from 'db' device (17 MiB used of 500 MiB) to slow device
     osd.3 spilled over 48 MiB metadata from 'db' device (6.0 MiB used of 500 MiB) to slow device
     osd.4 spilled over 44 MiB metadata from 'db' device (19 MiB used of 500 MiB) to slow device
     osd.5 spilled over 45 MiB metadata from 'db' device (6.0 MiB used of 500 MiB) to slow device
     osd.6 spilled over 46 MiB metadata from 'db' device (14 MiB used of 500 MiB) to slow device
     osd.7 spilled over 43 MiB metadata from 'db' device (6.0 MiB used of 500 MiB) to slow device

 

Is this a bug in 14, like http://tracker.ceph.com/issues/38745 ?
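
For anyone comparing numbers, the raw BlueFS counters behind the warning can be read from each OSD’s admin socket, and the warning has an off switch; standard Nautilus commands, offered as a diagnostic sketch rather than a fix:

# ceph daemon osd.0 perf dump bluefs       // on the OSD's host: db_total_bytes, db_used_bytes, slow_used_bytes
# ceph config set osd bluestore_warn_on_bluefs_spillover false   // mute the warning if it is judged cosmetic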

 

 

 

2.       The dashboard configuration was lost and we’re unable to reconfigure it.

 

The ceph-mgr-dashboard rpm is installed, but we can’t configure the dashboard again:

 

--------------- cut here ------------------
# ceph mgr module enable dashboard
Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement

# ceph mgr module enable dashboard --force
# ceph mgr module ls
{
    "enabled_modules": [
        "dashboard"
    ],
[snipped]

# ceph mgr services
{}

# ceph dashboard create-self-signed-cert
Error EINVAL: No handler found for 'dashboard create-self-signed-cert'

// repeating the command gives different results

# ceph dashboard create-self-signed-cert
Error EINVAL: Warning: due to ceph-mgr restart, some PG states may not be up to date
No handler found for 'dashboard create-self-signed-cert'

# ceph dashboard create-self-signed-cert
no valid command found; 10 closest matches:
osd down <ids> [<ids>...]
osd require-osd-release luminous|mimic|nautilus {--yes-i-really-mean-it}
osd unset full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim
osd set full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|pglog_hardlimit {--yes-i-really-mean-it}
osd erasure-code-profile ls
osd erasure-code-profile rm <name>
osd erasure-code-profile get <name>
osd erasure-code-profile set <name> {<profile> [<profile>...]} {--force}
osd unpause
osd pause
Error EINVAL: invalid command
--------------- cut here ------------------

 

Did we miss anything?

 

Thanks a lot.

Regards

/st wong

