Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

Still no luck.

But the working OSDs have no partitions:
OSD.1 => /dev/sdj
OSD.5 => /dev/sdb
OSD.6 => /dev/sdc
OSD.10 => /dev/sdl


Whereas the rest have partitions:

root@ceph03:~# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 Mar 25 16:23 /dev/sda
brw-rw---- 1 root disk 8,   1 Mar 25 16:23 /dev/sda1
brw-rw---- 1 root disk 8,   2 Mar 25 16:23 /dev/sda2
brw-rw---- 1 root disk 8,   3 Mar 25 16:23 /dev/sda3
brw-rw---- 1 root disk 8,  16 Mar 25 16:23 /dev/sdb
brw-rw---- 1 root disk 8,  32 Mar 25 16:23 /dev/sdc
brw-rw---- 1 root disk 8,  48 Mar 25 16:23 /dev/sdd
brw-rw---- 1 root disk 8,  49 Mar 25 16:23 /dev/sdd1
brw-rw---- 1 ceph ceph 8,  50 Mar 25 16:23 /dev/sdd2
brw-rw---- 1 root disk 8,  64 Mar 25 16:23 /dev/sde
brw-rw---- 1 root disk 8,  65 Mar 25 16:23 /dev/sde1
brw-rw---- 1 ceph ceph 8,  66 Mar 25 16:23 /dev/sde2
brw-rw---- 1 root disk 8,  80 Mar 25 16:23 /dev/sdf
brw-rw---- 1 root disk 8,  81 Mar 25 16:23 /dev/sdf1
brw-rw---- 1 ceph ceph 8,  82 Mar 25 16:23 /dev/sdf2
brw-rw---- 1 root disk 8,  96 Mar 25 16:23 /dev/sdg
brw-rw---- 1 root disk 8,  97 Mar 25 16:23 /dev/sdg1
brw-rw---- 1 ceph ceph 8,  98 Mar 25 16:23 /dev/sdg2
brw-rw---- 1 root disk 8, 112 Mar 25 16:23 /dev/sdh
brw-rw---- 1 root disk 8, 113 Mar 25 16:23 /dev/sdh1
brw-rw---- 1 ceph ceph 8, 114 Mar 25 16:23 /dev/sdh2
brw-rw---- 1 root disk 8, 128 Mar 25 16:23 /dev/sdi
brw-rw---- 1 root disk 8, 129 Mar 25 16:23 /dev/sdi1
brw-rw---- 1 ceph ceph 8, 130 Mar 25 16:23 /dev/sdi2
brw-rw---- 1 root disk 8, 144 Mar 25 16:23 /dev/sdj
brw-rw---- 1 root disk 8, 160 Mar 25 16:23 /dev/sdk
brw-rw---- 1 root disk 8, 161 Mar 25 16:23 /dev/sdk1
brw-rw---- 1 ceph ceph 8, 162 Mar 25 16:23 /dev/sdk2
brw-rw---- 1 root disk 8, 176 Mar 25 16:23 /dev/sdl
brw-rw---- 1 root disk 8, 192 Mar 25 16:23 /dev/sdm
brw-rw---- 1 root disk 8, 193 Mar 25 16:23 /dev/sdm1
brw-rw---- 1 ceph ceph 8, 194 Mar 25 16:23 /dev/sdm2

Did I miss converting them to BlueStore or something?
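
Or do the partitioned ones just need the ceph-volume "simple" takeover, since
ceph-disk is gone in Nautilus? Assuming they really are ceph-disk OSDs, I guess
it would be roughly this (the partition below is only an example taken from the
ls output above, repeated for each OSD data partition):

ceph-volume simple scan /dev/sdd1
ceph-volume simple activate --all

As far as I understand, simple scan writes a JSON descriptor under
/etc/ceph/osd/, and simple activate then mounts the data partition and enables
the matching systemd units so the OSDs come back after a reboot.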


On Wed, Mar 25, 2020 at 4:23 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
>  Try this
>
> chown ceph.ceph /dev/sdc2
> chown ceph.ceph /dev/sdd2
> chown ceph.ceph /dev/sde2
> chown ceph.ceph /dev/sdf2
> chown ceph.ceph /dev/sdg2
> chown ceph.ceph /dev/sdh2
>
>
>
> -----Original Message-----
> From: Ml Ml [mailto:mliebherr99@xxxxxxxxxxxxxx]
> Sent: 25 March 2020 16:22
> To: Marc Roos
> Subject: Re:  OSDs wont mount on Debian 10 (Buster) with
> Nautilus
>
> They were indeed disabled. I enabled them with:
>
> systemctl enable ceph-osd@4
> systemctl enable ceph-osd@15
> systemctl enable ceph-osd@17
> systemctl enable ceph-osd@20
> systemctl enable ceph-osd@21
> systemctl enable ceph-osd@27
> systemctl enable ceph-osd@32
>
> But they still won't start.
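>
> To see why, something like this should show the errors:
>
> systemctl status ceph-osd@4
> journalctl -xeu ceph-osd@4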
>
> On Wed, Mar 25, 2020 at 4:09 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>
> wrote:
> >
> >
> > I had something similar. My OSDs were disabled; maybe this installer of
> > Nautilus does that. Check:
> >
> > systemctl is-enabled ceph-osd@0
> >
> > https://tracker.ceph.com/issues/44102
> >
> >
> >
> >
> > -----Original Message-----
> > From: Ml Ml [mailto:mliebherr99@xxxxxxxxxxxxxx]
> > Sent: 25 March 2020 16:05
> > To: ceph-users
> > Subject:  OSDs wont mount on Debian 10 (Buster) with
> > Nautilus
> >
> > Hello list,
> >
> > I upgraded to Debian 10, and after that I upgraded from Luminous to
> > Nautilus. I restarted the mons, then the OSDs.
> >
> > Everything was up and healthy.
> > After rebooting a node, only 3 of 10 OSDs start up:
> >
> > -4       20.07686     host ceph03
> >  4   hdd  2.67020         osd.4     down  1.00000 1.00000
> >  5   hdd  1.71660         osd.5       up  1.00000 1.00000
> >  6   hdd  1.71660         osd.6       up  1.00000 1.00000
> > 10   hdd  2.67029         osd.10      up  1.00000 1.00000
> > 15   hdd  2.00000         osd.15    down  1.00000 1.00000
> > 17   hdd  1.20000         osd.17    down  1.00000 1.00000
> > 20   hdd  1.71649         osd.20    down  1.00000 1.00000
> > 21   hdd  2.00000         osd.21    down  1.00000 1.00000
> > 27   hdd  1.71649         osd.27    down  1.00000 1.00000
> > 32   hdd  2.67020         osd.32    down  1.00000 1.00000
> >
> > root@ceph03:~# /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser ceph --setgroup ceph
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x56531c50a140) no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
> > 2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x7ffd04120468) no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
> > failed to fetch mon config (--no-mon-config to skip)
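> >
> > (That keyring normally lives in the tmpfs that ceph-volume mounts at
> > /var/lib/ceph/osd/ceph-32 when the OSD is activated, so the empty directory
> > probably just means the OSD never got activated after the reboot. For the
> > LVM-based OSDs,
> >
> > ceph-volume lvm activate --all
> >
> > repopulates those directories; the partition-based ones presumably need
> > their own activation step.)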
> >
> > root@ceph03:~# df
> > Filesystem     1K-blocks    Used Available Use% Mounted on
> > udev            24624580       0  24624580   0% /dev
> > tmpfs            4928216    9544   4918672   1% /run
> > /dev/sda3       47930248 5209760  40262684  12% /
> > tmpfs           24641068       0  24641068   0% /dev/shm
> > tmpfs               5120       0      5120   0% /run/lock
> > tmpfs           24641068       0  24641068   0% /sys/fs/cgroup
> > /dev/sda1         944120  144752    734192  17% /boot
> > tmpfs           24641068      24  24641044   1% /var/lib/ceph/osd/ceph-1
> > tmpfs           24641068      24  24641044   1% /var/lib/ceph/osd/ceph-6
> > tmpfs           24641068      24  24641044   1% /var/lib/ceph/osd/ceph-5
> > tmpfs           24641068      24  24641044   1% /var/lib/ceph/osd/ceph-10
> > tmpfs            4928212       0   4928212   0% /run/user/0
> >
> > root@ceph03:~# ceph-volume lvm list
> >
> >
> > ====== osd.1 =======
> >
> >   [block]       /dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5-47bd-8ddc-102b98444067
> >
> >       block device              /dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5-47bd-8ddc-102b98444067
> >       block uuid                HSK6Da-elP2-CFYz-s0RH-UNiw-bey0-dVcml1
> >       cephx lockbox secret
> >       cluster fsid              5436dd5d-83d4-4dc8-a93b-60ab5db145df
> >       cluster name              ceph
> >       crush device class        None
> >       encrypted                 0
> >       osd fsid                  b4987093-4fa5-47bd-8ddc-102b98444067
> >       osd id                    1
> >       type                      block
> >       vdo                       0
> >       devices                   /dev/sdj
> >
> > ====== osd.10 ======
> >
> >   [block]       /dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758-4b85-9799-988eee3b2b3f
> >
> >       block device              /dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758-4b85-9799-988eee3b2b3f
> >       block uuid                440fNG-guO2-l1WJ-m5cR-GUkz-ZTUd-Fcz5Ml
> >       cephx lockbox secret
> >       cluster fsid              5436dd5d-83d4-4dc8-a93b-60ab5db145df
> >       cluster name              ceph
> >       crush device class        None
> >       encrypted                 0
> >       osd fsid                  fa241441-1758-4b85-9799-988eee3b2b3f
> >       osd id                    10
> >       type                      block
> >       vdo                       0
> >       devices                   /dev/sdl
> >
> > ====== osd.5 =======
> >
> >   [block]       /dev/ceph-793608ca-9dd1-4a4f-a776-c1e292127899/osd-block-112e0c75-f61b-4e50-9bb5-775bacd854af
> >
> >       block device              /dev/ceph-793608ca-9dd1-4a4f-a776-c1e292127899/osd-block-112e0c75-f61b-4e50-9bb5-775bacd854af
> >       block uuid                Z6VeNx-S9sg-ZOsh-HTw9-ykTc-YBrh-qFwz5i
> >       cephx lockbox secret
> >       cluster fsid              5436dd5d-83d4-4dc8-a93b-60ab5db145df
> >       cluster name              ceph
> >       crush device class        None
> >       encrypted                 0
> >       osd fsid                  112e0c75-f61b-4e50-9bb5-775bacd854af
> >       osd id                    5
> >       type                      block
> >       vdo                       0
> >       devices                   /dev/sdb
> >
> > ====== osd.6 =======
> >
> >   [block]       /dev/ceph-4b0cee89-03f4-4853-bc1d-09e0eb772799/osd-block-35288829-c1f6-42ab-aeb0-f2915a389e48
> >
> >       block device              /dev/ceph-4b0cee89-03f4-4853-bc1d-09e0eb772799/osd-block-35288829-c1f6-42ab-aeb0-f2915a389e48
> >       block uuid                G9qHxC-dN0b-XBes-QVss-Bzwa-7Xtw-ikksgM
> >       cephx lockbox secret
> >       cluster fsid              5436dd5d-83d4-4dc8-a93b-60ab5db145df
> >       cluster name              ceph
> >       crush device class        None
> >       encrypted                 0
> >       osd fsid                  35288829-c1f6-42ab-aeb0-f2915a389e48
> >       osd id                    6
> >       type                      block
> >       vdo                       0
> >       devices                   /dev/sdc
> >
> > I would mount it and run the OSD daemon manually, but ceph-disk list
> > seems to be gone in Nautilus, so I don't know where to mount what.
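> >
> > (ceph-volume is supposed to take over that job; as far as I can tell the
> > closest replacements for ceph-disk list are:
> >
> > ceph-volume lvm list
> > ceph-volume inventory
> >
> > where inventory reports on all block devices on the node, not only the
> > LVM-based ones.)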
> >
> > Any ideas on that?
> >
> >
> > Cheers,
> > Michael
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


