On Fri, 25 Jan 2019, Liu, Changcheng wrote:
> This is not a normal Ceph setup. The user is root.
> I'm upgrading the storage from Ceph v10.2.6 to Ceph v13.2.2 on one
> platform and hit some problems.

Note that upgrades must first pass through luminous v12.2.z before
upgrading further. An upgrade directly from 10.2.z to 13.2.z is not
possible.

sage

> @Burkhard
> Oops, I'll check further.
>
> B.R.
> Changcheng
>
> On 09:22 Fri 25 Jan, Willem Jan Withagen wrote:
> > On 25-1-2019 07:15, Liu, Changcheng wrote:
> > > Hi all,
> > > I always hit the error below:
> > > storage-0:/var/run/ceph# /usr/bin/ceph-osd -i 0 --pid-file /var/run/ceph/osd.0.pid -c /etc/ceph/ceph.conf --cluster ceph -f
> > > 2019-01-25 14:05:20.148 7f2bd8f87180 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
> > > Could anyone give some suggestions for debugging this problem?
> > >
> > > I've checked the following:
> > > 0)storage-0:/var/lib/ceph/osd/ceph-0# cat /etc/os-release
> > > NAME="CentOS Linux"
> > > VERSION="7 (Core)"
> > > ID="centos"
> > > ID_LIKE="rhel fedora"
> > > VERSION_ID="7"
> > > PRETTY_NAME="CentOS Linux 7 (Core)"
> > > ANSI_COLOR="0;31"
> > > CPE_NAME="cpe:/o:centos:centos:7"
> > > HOME_URL="https://www.centos.org/"
> > > BUG_REPORT_URL="https://bugs.centos.org/"
> > > CENTOS_MANTISBT_PROJECT="CentOS-7"
> > > CENTOS_MANTISBT_PROJECT_VERSION="7"
> > > REDHAT_SUPPORT_PRODUCT="centos"
> > > REDHAT_SUPPORT_PRODUCT_VERSION="7"
> > >
> > > 1)storage-0:/var/lib/ceph/osd/ceph-0# ceph --version
> > > ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
> > >
> > > 2)storage-0:/var/lib/ceph/osd/ceph-0# ls
> > > block  block_uuid  ceph_fsid  fsid  journal  journal_uuid  magic  type
> > >
> > > 3)storage-0:/var/lib/ceph/osd/ceph-0# ls -l `readlink -f block`
> > > brw-rw---- 1 root disk 8, 18 Jan 25 14:07 /dev/sdb2
> > >
> > > 4)storage-0:/var/lib/ceph/osd/ceph-0# ls -l `readlink -f journal`
> > > brw-rw---- 1 root disk 8, 18 Jan 25 14:07 /dev/sdb2
> >
> > Changcheng,
> >
> > Just a wild guess.
> >
> > If this is a normal Ceph setup, the user running the OSD is ceph:ceph,
> > and that user will not have access to your devices unless you have
> > added it to the disk group...
> >
> > --WjW
> >
> > >
> > > 5)storage-0:/var/lib/ceph/osd/ceph-0# mount | grep sdb1
> > > /dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,relatime,attr2,inode64,noquota)
> > >
> > > 6)storage-0:/var/lib/ceph/osd/ceph-0# ceph-disk list
> > > /dev/dm-0 other, ext4, mounted on /scratch
> > > /dev/dm-1 other, ext4, mounted on /var/log
> > > /dev/dm-2 other, ext4, mounted on /var/lib/ceph/mon
> > > /dev/sda :
> > >  /dev/sda1 other, 21686148-6449-6e6f-744e-656564454649
> > >  /dev/sda2 other, ext4, mounted on /boot
> > >  /dev/sda3 other, ext4, mounted on /
> > >  /dev/sda4 other, LVM2_member
> > > /dev/sdb :
> > >  /dev/sdb1 ceph data, active, cluster ceph, osd uuid 486b4414-8cb9-46f5-bfcd-6d0b24a8ce5c, block /dev/sdb2, journal /dev/sdb2
> > >  /dev/sdb2 ceph block, for /dev/sdb1
> > > storage-0:/var/lib/ceph/osd/ceph-0#
> > >
> > > 7)storage-0:/var/lib/ceph/osd/ceph-0# ceph auth list
> > > installed auth entries:
> > > mgr.controller-0
> > >         key: AQBp1Elc3wI5HBAAuWXr3/GNkiR/eElrmDOI0A==
> > >         caps: [mon] allow *
> > >         caps: [osd] allow *
> > > mgr.controller-1
> > >         key: AQD7zElcW9FSIhAAr57vVkmX2eqRXWYeBPWo+Q==
> > >         caps: [mon] allow *
> > >         caps: [osd] allow *
> > > storage-0:/var/lib/ceph/osd/ceph-0#
> > >
> > > B.R.
> > > Changcheng
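
For anyone following the same path, a staged upgrade along the lines Sage
describes might look roughly like the sketch below. This is a minimal
sketch assuming a CentOS 7 node with the CentOS Storage SIG release
packages; the repo/package names and the deployment tooling on your
platform may differ, so verify every step against the official luminous
and mimic upgrade notes before running anything.

    # Sketch of the jewel -> luminous -> mimic path (assumptions:
    # CentOS 7, Storage SIG repos; adapt to your own tooling).
    ceph osd set noout                       # avoid rebalancing while daemons restart

    # Step 1: jewel (10.2.z) -> luminous (12.2.z); mons first, then OSDs.
    yum install -y centos-release-ceph-luminous   # SIG repo package (verify the name)
    yum update -y ceph
    systemctl restart ceph-mon.target
    systemctl restart ceph-osd.target
    ceph osd require-osd-release luminous    # only after ALL daemons run luminous

    # Step 2: luminous (12.2.z) -> mimic (13.2.z); same order.
    yum install -y centos-release-ceph-mimic      # SIG repo package (verify the name)
    yum update -y ceph
    systemctl restart ceph-mon.target
    systemctl restart ceph-osd.target
    ceph osd require-osd-release mimic

    ceph osd unset noout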
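
To test Willem's permission theory before anything else, a couple of
read-only checks (plus one illustrative fix) would look like this:

    # Compare the uid/gid the OSD runs as with the device ownership.
    ls -ln /var/lib/ceph/osd/ceph-0          # numeric owner of the data dir
    ls -l /dev/sdb1 /dev/sdb2                # currently root:disk per the output above

    # If the daemon were running as ceph:ceph, one temporary fix would be:
    chown ceph:ceph /dev/sdb2                # does not survive reboot; udev rules
                                             # are the persistent fix

Since the OSD here is started as root, the root:disk ownership shown in
steps 3) and 4) should not by itself block access, which points back at
the upgrade-path issue.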
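
Since ceph-disk reports /dev/sdb2 as a BlueStore block partition, another
low-risk check is reading its BlueStore label with ceph-bluestore-tool,
which ships with mimic. If the label reads cleanly, the partition itself
is intact and the failure is in the activation/mount stage rather than in
the data:

    # Read-only: dump the BlueStore label from the block partition.
    ceph-bluestore-tool show-label --dev /dev/sdb2
    # Expect JSON including osd_uuid and size; the osd_uuid should match
    # the fsid file in /var/lib/ceph/osd/ceph-0.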