Re: Fwd: Ceph Upgrade Issue - Luminous to Nautilus (14.2.11 ) using ceph-ansible


 



>
>
> partitions after checking disk partitions  and whoami information.  After
> manually mounting the osd.108, now it's throwing permission error which I'm
> still reviewing (bdev(0xd1be000 /var/lib/ceph/osd/ceph-108/block) open open
> got: (13) Permission denied).  Enclosed the log of the OSD for full review
> - https://pastebin.com/7k0xBfDV.
>

As for this small part of your issues: do check that the ceph user and ceph
group have permissions on /var/lib/ceph/osd/ and everything below it before
the mount attempt, and that the ceph user can read the devices used for
data and wal/db.
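A quick way to check and fix that (a sketch only; the osd.108 paths follow your log, and the device behind the block symlink will differ on your host):

```shell
# Inspect ownership of the OSD directory and the data device it points at.
ls -ln /var/lib/ceph/osd/ceph-108/            # files should be owned by the ceph uid/gid
readlink -f /var/lib/ceph/osd/ceph-108/block  # resolve the block symlink to the real device/LV
ls -l "$(readlink -f /var/lib/ceph/osd/ceph-108/block)"  # device node must be readable by ceph

# If ownership is wrong (common after a manual mount as root), fix it:
chown -R ceph:ceph /var/lib/ceph/osd/ceph-108
chown ceph:ceph "$(readlink -f /var/lib/ceph/osd/ceph-108/block)"
```

That EPERM on bdev open is usually exactly this: the directory or the device node ended up owned by root.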

Also, the crash seems to happen in the OSD's internal on-disk upgrade
conversion, so you might consider not upgrading the existing OSDs at all,
but instead recreating them one at a time so each one is created as
Nautilus directly and skips the breaking upgrade step. Not nice, not
pretty, but it could work.
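Roughly, that per-OSD cycle would look like this (a sketch; osd.108 and /dev/sdX are placeholders, and you should wait for recovery between steps so you never lose redundancy):

```shell
# Drain the OSD and wait until data has migrated off it (watch `ceph -s`).
ceph osd out 108

# Remove it from the cluster entirely (available since Luminous).
systemctl stop ceph-osd@108
ceph osd purge 108 --yes-i-really-mean-it

# Wipe the old device and create a fresh OSD with the Nautilus binaries.
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm create --data /dev/sdX
```

Then let backfill finish before moving on to the next OSD.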

Another small thought is this note from the Mimic/Nautilus upgrade guide:
"If your cluster was originally installed with a version prior to Luminous,
ensure that it has completed at least one full scrub of all PGs while
running Luminous. "

Just double-checking that you didn't have noscrub/nodeep-scrub set on one
or more pools, which could mean that full scrub never (fully) happened on
this cluster either?
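Easy to verify (a sketch; the grep patterns are just to narrow the output):

```shell
# Cluster-wide flags -- noscrub/nodeep-scrub would show up in the flags line.
ceph osd dump | grep flags

# Per-pool flags, if any pool had scrubbing disabled individually.
ceph osd pool ls detail | grep -i scrub
```

If either shows up, clear the flags and make sure every PG completes a scrub under Luminous before attempting the upgrade again.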

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




