Re: OSDs won't mount on Debian 10 (Buster) with Nautilus

Try this:

chown ceph:ceph /dev/sdc2
chown ceph:ceph /dev/sdd2
chown ceph:ceph /dev/sde2
chown ceph:ceph /dev/sdf2
chown ceph:ceph /dev/sdg2
chown ceph:ceph /dev/sdh2
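
Note that chown on the raw partitions will not survive a reboot. A minimal
sketch of a persistent fix, assuming the journal partitions really are
/dev/sdc2 through /dev/sdh2 (the rules file name below is only an example):

# /etc/udev/rules.d/99-ceph-journal.rules (example name)
# give the ceph user ownership of the journal partitions at boot
KERNEL=="sd[c-h]2", OWNER="ceph", GROUP="ceph", MODE="0660"

# reload and re-apply udev rules without a reboot
udevadm control --reload-rules
udevadm trigger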



-----Original Message-----
From: Ml Ml [mailto:mliebherr99@xxxxxxxxxxxxxx] 
Sent: 25 March 2020 16:22
To: Marc Roos
Subject: Re: OSDs won't mount on Debian 10 (Buster) with Nautilus

They were indeed disabled. I enabled them with:

systemctl enable ceph-osd@4
systemctl enable ceph-osd@15
systemctl enable ceph-osd@17
systemctl enable ceph-osd@20
systemctl enable ceph-osd@21
systemctl enable ceph-osd@27
systemctl enable ceph-osd@32

But they still won't start.
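
A sketch of the next diagnostic steps, assuming the OSDs were deployed with
ceph-volume (the lvm list output further down suggests they were):

# see why the unit fails
systemctl status ceph-osd@4
journalctl -u ceph-osd@4 --no-pager -n 50

# recreate the tmpfs mounts under /var/lib/ceph/osd/ and start the OSDs
ceph-volume lvm activate --all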

On Wed, Mar 25, 2020 at 4:09 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> 
wrote:
>
>
> I had something similar. My OSDs were disabled; maybe this installer of
> Nautilus does that. Check with:
>
> systemctl is-enabled ceph-osd@0
>
> https://tracker.ceph.com/issues/44102
>
>
>
>
> -----Original Message-----
> From: Ml Ml [mailto:mliebherr99@xxxxxxxxxxxxxx]
> Sent: 25 March 2020 16:05
> To: ceph-users
> Subject: OSDs won't mount on Debian 10 (Buster) with Nautilus
>
> Hello list,
>
> I upgraded to Debian 10, and after that I upgraded from Luminous to
> Nautilus.
> I restarted the mons, then the OSDs.
>
> Everything was up and healthy.
> After rebooting a node, only 3 of the 10 OSDs start up:
>
> -4       20.07686     host ceph03
>  4   hdd  2.67020         osd.4     down  1.00000 1.00000
>  5   hdd  1.71660         osd.5       up  1.00000 1.00000
>  6   hdd  1.71660         osd.6       up  1.00000 1.00000
> 10   hdd  2.67029         osd.10      up  1.00000 1.00000
> 15   hdd  2.00000         osd.15    down  1.00000 1.00000
> 17   hdd  1.20000         osd.17    down  1.00000 1.00000
> 20   hdd  1.71649         osd.20    down  1.00000 1.00000
> 21   hdd  2.00000         osd.21    down  1.00000 1.00000
> 27   hdd  1.71649         osd.27    down  1.00000 1.00000
> 32   hdd  2.67020         osd.32    down  1.00000 1.00000
>
> root@ceph03:~# /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser 
> ceph --setgroup ceph
> 2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
> 2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x56531c50a140) no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
> 2020-03-25 15:46:36.330 7efddde5ec80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-32/keyring: (2) No such file or directory
> 2020-03-25 15:46:36.330 7efddde5ec80 -1 AuthRegistry(0x7ffd04120468) no keyring found at /var/lib/ceph/osd/ceph-32/keyring, disabling cephx
> failed to fetch mon config (--no-mon-config to skip)
>
> root@ceph03:~# df
> Filesystem     1K-blocks    Used Available Use% Mounted on
> udev            24624580       0  24624580   0% /dev
> tmpfs            4928216    9544   4918672   1% /run
> /dev/sda3       47930248 5209760  40262684  12% /
> tmpfs           24641068       0  24641068   0% /dev/shm
> tmpfs               5120       0      5120   0% /run/lock
> tmpfs           24641068       0  24641068   0% /sys/fs/cgroup
> /dev/sda1         944120  144752    734192  17% /boot
> tmpfs           24641068      24  24641044   1% /var/lib/ceph/osd/ceph-1
> tmpfs           24641068      24  24641044   1% /var/lib/ceph/osd/ceph-6
> tmpfs           24641068      24  24641044   1% /var/lib/ceph/osd/ceph-5
> tmpfs           24641068      24  24641044   1% /var/lib/ceph/osd/ceph-10
> tmpfs            4928212       0   4928212   0% /run/user/0
>
> root@ceph03:~# ceph-volume lvm list
>
>
> ====== osd.1 =======
>
>   [block]       /dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5-47bd-8ddc-102b98444067
>
>       block device              /dev/ceph-9af8fc69-cab8-4c12-b51e-5746a0f0fc51/osd-block-b4987093-4fa5-47bd-8ddc-102b98444067
>       block uuid                HSK6Da-elP2-CFYz-s0RH-UNiw-bey0-dVcml1
>       cephx lockbox secret
>       cluster fsid              5436dd5d-83d4-4dc8-a93b-60ab5db145df
>       cluster name              ceph
>       crush device class        None
>       encrypted                 0
>       osd fsid                  b4987093-4fa5-47bd-8ddc-102b98444067
>       osd id                    1
>       type                      block
>       vdo                       0
>       devices                   /dev/sdj
>
> ====== osd.10 ======
>
>   [block]       /dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758-4b85-9799-988eee3b2b3f
>
>       block device              /dev/ceph-78f2730d-7277-4d1f-8909-449b45339f80/osd-block-fa241441-1758-4b85-9799-988eee3b2b3f
>       block uuid                440fNG-guO2-l1WJ-m5cR-GUkz-ZTUd-Fcz5Ml
>       cephx lockbox secret
>       cluster fsid              5436dd5d-83d4-4dc8-a93b-60ab5db145df
>       cluster name              ceph
>       crush device class        None
>       encrypted                 0
>       osd fsid                  fa241441-1758-4b85-9799-988eee3b2b3f
>       osd id                    10
>       type                      block
>       vdo                       0
>       devices                   /dev/sdl
>
> ====== osd.5 =======
>
>   [block]       /dev/ceph-793608ca-9dd1-4a4f-a776-c1e292127899/osd-block-112e0c75-f61b-4e50-9bb5-775bacd854af
>
>       block device              /dev/ceph-793608ca-9dd1-4a4f-a776-c1e292127899/osd-block-112e0c75-f61b-4e50-9bb5-775bacd854af
>       block uuid                Z6VeNx-S9sg-ZOsh-HTw9-ykTc-YBrh-qFwz5i
>       cephx lockbox secret
>       cluster fsid              5436dd5d-83d4-4dc8-a93b-60ab5db145df
>       cluster name              ceph
>       crush device class        None
>       encrypted                 0
>       osd fsid                  112e0c75-f61b-4e50-9bb5-775bacd854af
>       osd id                    5
>       type                      block
>       vdo                       0
>       devices                   /dev/sdb
>
> ====== osd.6 =======
>
>   [block]       /dev/ceph-4b0cee89-03f4-4853-bc1d-09e0eb772799/osd-block-35288829-c1f6-42ab-aeb0-f2915a389e48
>
>       block device              /dev/ceph-4b0cee89-03f4-4853-bc1d-09e0eb772799/osd-block-35288829-c1f6-42ab-aeb0-f2915a389e48
>       block uuid                G9qHxC-dN0b-XBes-QVss-Bzwa-7Xtw-ikksgM
>       cephx lockbox secret
>       cluster fsid              5436dd5d-83d4-4dc8-a93b-60ab5db145df
>       cluster name              ceph
>       crush device class        None
>       encrypted                 0
>       osd fsid                  35288829-c1f6-42ab-aeb0-f2915a389e48
>       osd id                    6
>       type                      block
>       vdo                       0
>       devices                   /dev/sdc
>
> I would mount it and run the OSD daemon manually, but ceph-disk list
> seems to be gone in Nautilus. Therefore I don't know where to mount
> what.
>
> Any ideas on that?
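
(In Nautilus the mount step that ceph-disk handled is taken over by
ceph-volume: it reads the metadata shown above and mounts a tmpfs under
/var/lib/ceph/osd/ itself. A sketch for a single OSD, using the osd id and
osd fsid from the listing for osd.1:)

ceph-volume lvm activate 1 b4987093-4fa5-47bd-8ddc-102b98444067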
>
>
> Cheers,
> Michael
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


