Re: ceph-ansible install failure

Hi Zhongzhou,

I think in most cases this means a device was not wiped correctly.
Can you check that?
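
If you want to double-check before re-running the playbook, something like the
following should show and clear any leftover LVM volumes or BlueStore
signatures. This is only a sketch; I'm assuming the OSD data device is
/dev/sdb, so adjust the path for your setup:

```
# Assuming the OSD data device is /dev/sdb -- adjust for your setup.
# Check for leftover LVM volumes and filesystem/BlueStore signatures:
lsblk /dev/sdb
wipefs /dev/sdb

# If anything is still there, zap the device before re-deploying:
ceph-volume lvm zap --destroy /dev/sdb

# Or, without ceph-volume:
wipefs --all /dev/sdb
sgdisk --zap-all /dev/sdb
```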

Thanks!

On Sat, 22 Oct 2022 at 01:01, Zhongzhou Cai <zhongzhoucai@xxxxxxxxxx> wrote:

> Hi folks,
>
> I'm trying to install Ceph on GCE VMs (Debian/Ubuntu) with PD-SSDs using the
> ceph-ansible image. A clean installation works fine, but when I purged the
> ceph cluster and tried to re-install, I saw this error:
>
> ```
>
> Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
> bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap
> --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid
> 08d766d5-e843-4c65-9f4f-db7f0129b4e9 --setuser ceph --setgroup ceph
>
> stderr: 2022-10-21T22:07:58.447+0000 7f71afead080 -1
> bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
>
> stderr: 2022-10-21T22:07:58.455+0000 7f71afead080 -1 bluefs _replay 0x0:
> stop: uuid 8b1ce55d-10c1-a33d-1817-8a8427657694 != super.uuid
> 3d8aa673-00bd-473c-a725-06ac31c6b945, block dump:
>
> stderr: 00000000  6a bc c7 44 83 87 8b 1c  e5 5d 10 c1 a3 3d 18 17
> |j..D.....]...=..|
>
> stderr: 00000010  8a 84 27 65 76 94 bd 12  3c 11 4a c4 32 6c eb a4
> |..'ev...<.J.2l..|
>
>
>
> stderr: 00000ff0  2b 57 4e a4 ad da be cb  bf df 61 fc f7 ce 4a 14
> |+WN.......a...J.|
>
> stderr: 00001000
>
> stderr: 2022-10-21T22:07:58.987+0000 7f71afead080 -1 rocksdb:
> verify_sharding unable to list column families: NotFound:
>
> stderr: 2022-10-21T22:07:58.987+0000 7f71afead080 -1
> bluestore(/var/lib/ceph/osd/ceph-1/) _open_db erroring opening db:
>
> stderr: 2022-10-21T22:07:59.515+0000 7f71afead080 -1 OSD::mkfs:
> ObjectStore::mkfs failed with error (5) Input/output error
>
> stderr: 2022-10-21T22:07:59.515+0000 7f71afead080 -1 ** ERROR: error
> creating empty object store in /var/lib/ceph/osd/ceph-1/: (5) Input/output
> error
>
> --> Was unable to complete a new OSD, will rollback changes
> ```
>
> Can someone explain what uuid != super.uuid means? The issue does not seem to
> happen when installing on a clean disk. Could it be related to the purge
> process not cleaning up the disks properly? FWIW, I'm using
>
> https://github.com/ceph/ceph-ansible/blob/main/infrastructure-playbooks/purge-cluster.yml
> to purge the cluster.
>
> Thanks,
> Zhongzhou Cai
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


-- 

*Guillaume Abrioux*
Senior Software Engineer
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



