All,
I've just spent a significant amount of time unsuccessfully chasing the
_read_fsid unparsable uuid error on Debian 10 / Nautilus 14.2.6. Since
this is a brand-new cluster, last night I gave up and moved back to
Debian 9 / Luminous 12.2.11. In both cases I'm using the packages from
Debian Backports, with ceph-ansible as my deployment tool.
Note that above I said 'the _read_fsid unparsable uuid error': I've
searched around a bit and found some previously reported occurrences,
but no conclusive resolutions.
I would like to get to Nautilus as quickly as possible, so I'd gladly
provide additional information to help track down the cause of this
symptom. I can confirm that, looking at the ceph-volume.log on the OSD
host, I see no difference between the ceph-volume lvm batch commands
generated by the ceph-ansible versions associated with these two Ceph
releases:
ceph-volume --cluster ceph lvm batch --bluestore --yes
--block-db-size 133358734540 /dev/sdc /dev/sdd /dev/sde /dev/sdf
/dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/nvme0n1
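For what it's worth, this is roughly how I compared the generated
commands, by pulling them out of the log on each build (assuming the
default ceph-volume log location on the OSD host):
# Show the lvm batch invocation(s) recorded by ceph-volume.
grep 'lvm batch' /var/log/ceph/ceph-volume.log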
Note that I'm using --block-db-size to divide my NVMe into 12 segments
(8 OSDs now plus 4 more later), since I have 4 empty drive bays on my
OSD servers that I may eventually be able to fill.
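Roughly, that --block-db-size value is the raw NVMe capacity split 12
ways, shaved down a bit to leave room for LVM overhead:
# Straight 12-way split of the 1.6 TB NVMe device:
echo $(( 1600321314816 / 12 ))   # prints 133360109568
# The value I passed, 133358734540, is slightly smaller so the 12 DB
# LVs still fit once LVM metadata is accounted for.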
My OSD hardware is:
Disk /dev/nvme0n1: 1.5 TiB, 1600321314816 bytes, 3125627568 sectors
Disk /dev/sdc: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdd: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sde: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdf: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdg: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdh: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdi: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdj: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
I'd send the output of ceph-volume inventory on Luminous, but that
command is failing with: KeyError: 'human_readable_size'.
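In the meantime, if a rough equivalent of the inventory output would
help, I can pull the same basic device information with plain lsblk
(sketch below; column names assume a reasonably current util-linux):
# Per-device size, rotational flag, and model for the OSD data disks
# and the NVMe used for block.db:
lsblk -d -o NAME,SIZE,ROTA,MODEL /dev/sd[c-j] /dev/nvme0n1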
Please let me know if I can provide any further information.
Thanks.
-Dave
--
Dave Hall
Binghamton University