Re: OSDs (v17.2.3) won't start after Rocky Upgrade to Kernel 4.18.0-372.26.1.el8_6.x86_64

Janne,

LVM looks fine so far. Please see below...

BUT: it seems that during yesterday's upgrade from Octopus to Quincy the
standalone package "ceph-volume.noarch" was not updated/installed. After
re-installing ceph-volume and activating the OSDs, I got all the tmpfs
mounts under /var/lib/ceph back and working OSDs... This is/was quite
strange!
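For reference, the recovery steps described above amount to roughly the
following (a sketch assuming dnf on Rocky 8 with the Ceph Quincy repos
enabled; exact package and repo names may differ on your setup):

```shell
# Reinstall the ceph-volume package that the Octopus -> Quincy upgrade skipped
dnf install -y ceph-volume

# Re-activate all LVM-backed OSDs on this host; this recreates the tmpfs
# mounts under /var/lib/ceph/osd/ceph-* and starts the OSD daemons
ceph-volume lvm activate --all

# Verify the tmpfs mounts are back
mount | grep /var/lib/ceph/osd
```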

Thanks so much for the hint.

Christoph

[root@ceph1n012 system]# lvs
  LV                                             VG                                        Attr       LSize
  osd-block-8a13ef40-a843-4733-84e8-ec3a912bde53 ceph-03113595-4505-4f68-b181-c50875d1f04b -wi-a----- <1.82t
  osd-block-400688c2-883a-4351-b5c2-4221edca4ffd ceph-1342f6f1-67fc-4e90-b318-c23a878c321e -wi-a----- <1.82t
  osd-block-c054584b-10d5-4ffc-9056-ad02ef0fc713 ceph-23ef5f9f-f232-4ddb-888d-7f1e22baee87 -wi-a----- 10.91t
  osd-block-84a5bf96-33f2-47b3-b35d-dbeff632b754 ceph-b4a80ffd-6705-4144-8c6e-6d96a2ba6f42 -wi-a----- <1.82t
  osd-block-8bf67ce6-b62d-4dc7-b224-ea87a1f08c4b ceph-bdca15d4-c2ee-4dea-8ad1-ef4363be33cc -wi-a----- <3.64t
  osd-block-97fba9ab-96e6-4704-8111-71e5eb5583cc ceph-c3145324-77e5-4d93-a8b9-a55f6ada6189 -wi-a----- 10.91t
  osd-block-6a88ced0-384f-457a-8a79-1de5d1dbe8b4 ceph-d4c2631b-8eed-4956-981c-7c33bd54c205 -wi-a----- <1.82t
  block.db1                                      ceph_cache1                               -wi-a----- 60.00g
  block.db2                                      ceph_cache1                               -wi-a----- 60.00g
  block.db3                                      ceph_cache1                               -wi-a----- 60.00g
  block.db4                                      ceph_cache1                               -wi-a----- 60.00g
  block.db5                                      ceph_cache1                               -wi-a----- 60.00g
  block.db6                                      ceph_cache1                               -wi-a----- 60.00g
  block.db7                                      ceph_cache1                               -wi-a----- 60.00g
  block.wal1                                     ceph_cache1                               -wi-a-----  3.00g
  block.wal2                                     ceph_cache1                               -wi-a-----  3.00g
  block.wal3                                     ceph_cache1                               -wi-a-----  3.00g
  block.wal4                                     ceph_cache1                               -wi-a-----  3.00g
  block.wal5                                     ceph_cache1                               -wi-a-----  3.00g
  block.wal6                                     ceph_cache1                               -wi-a-----  3.00g
  block.wal7                                     ceph_cache1                               -wi-a-----  3.00g

Am Do., 29. Sept. 2022 um 16:29 Uhr schrieb Janne Johansson <
icepic.dz@xxxxxxxxx>:

> > Many thanks for any hint helping to get missing 7 OSDs up ASAP.
>
> Not sure if it "helps", but I would try "ceph-volume lvm activate
> --all" if those were on lvm; I guess ceph-volume simple and raw might
> have similar commands to search for and start everything that looks
> like a ceph OSD.
>
> Perhaps the kernel upgrade shuffled device names or something, or
> lvm was prevented from finding the ceph volumes (which in turn makes it
> impossible to mount the tmpfs part from said lvm volume).
>
> --
> May the most significant bit of your life be positive.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


