Jan,
Unfortunately, I'm under immense pressure right now to get some form of
Ceph into production, so it's going to be Luminous for now, or maybe a
live upgrade to Nautilus without recreating the OSDs (if that's possible;
a rough sketch of what I have in mind is below).
The good news is that in the next couple months I expect to add more
hardware that should be nearly identical. I will gladly give it a go at
that time and see if I can recreate. (Or, if I manage to thoroughly
crash my current fledgling cluster, I'll give it another go on one node
while I'm up all night recovering.)
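On the live-upgrade option, this is only a sketch based on the standard
rolling-upgrade procedure, and I'd verify every step against the official
Nautilus upgrade notes before touching production:
ceph osd set noout
# upgrade packages on each host, then restart daemons in order:
# mons first, then mgrs, then OSDs, one host at a time
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target
ceph mon enable-msgr2                     # once all mons are on Nautilus
ceph versions                             # confirm every daemon reports 14.2.x
ceph osd require-osd-release nautilus     # only once all OSDs run Nautilus
ceph osd unset noout
The existing BlueStore OSDs stay as they are; nothing gets re-created.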
If you could tell me where to look, I'd gladly read some code and see if
I can find anything that way. Or if there's any sort of design document
describing the deep internals, I'd be glad to scan it to see if I've hit
a corner case of some sort. Actually, I'd be interested in reading
those documents anyway if I could.
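If nothing better comes to mind, my starting point would simply be to
search the source tree for the message text; I'm assuming it originates
somewhere under the BlueStore code, but the grep will tell me either way:
git clone --branch v14.2.6 --depth 1 https://github.com/ceph/ceph.git
grep -rn "unparsable uuid" ceph/src/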
Thanks.
-Dave
Dave Hall
On 1/28/2020 3:05 AM, Jan Fajerski wrote:
On Mon, Jan 27, 2020 at 03:23:55PM -0500, Dave Hall wrote:
All,
I've just spent a significant amount of time unsuccessfully chasing
the _read_fsid unparsable uuid error on Debian 10 / Nautilus 14.2.6.
Since this is a brand new cluster, last night I gave up and moved back
to Debian 9 / Luminous 12.2.11. In both cases I'm using the packages
from Debian Backports with ceph-ansible as my deployment tool.
Note that above I said 'the _read_fsid unparsable uuid' error. I've
searched around a bit and found some previously reported issues, but I
did not see any conclusive resolutions.
I would like to get to Nautilus as quickly as possible, so I'd gladly
provide additional information to help track down the cause of this
symptom. I can confirm that, looking at the ceph-volume.log on the
OSD host, I see no difference between the ceph-volume lvm batch commands
generated by the ceph-ansible versions associated with these two Ceph
releases:
ceph-volume --cluster ceph lvm batch --bluestore --yes
--block-db-size 133358734540 /dev/sdc /dev/sdd /dev/sde /dev/sdf
/dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/nvme0n1
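(As an aside, the same invocation can be dry-run with --report to print
the proposed layout without creating anything; I'm assuming --report is
available and behaves the same on both releases:)
ceph-volume --cluster ceph lvm batch --report --bluestore \
    --block-db-size 133358734540 /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/nvme0n1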
Note that I'm using --block-db-size to divide my NVMe into 12 segments,
as I have 4 empty drive bays on my OSD servers that I may eventually
be able to fill. (The arithmetic is spelled out below the drive list.)
My OSD hardware is:
Disk /dev/nvme0n1: 1.5 TiB, 1600321314816 bytes, 3125627568 sectors
Disk /dev/sdc: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdd: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sde: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdf: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdg: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdh: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdi: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdj: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
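For reference, the arithmetic behind the --block-db-size figure above (a
quick bash check; reading the leftover space as headroom for LVM metadata
is my own assumption):
echo $(( 1600321314816 / 12 ))    # 133360109568 -- an even 12-way split of the NVMe
echo $(( 133358734540 * 12 ))     # 1600304814480 -- leaves ~16 MB of the 1600321314816-byte device unused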
I'd send the output of ceph-volume inventory on Luminous, but I'm
getting -->: KeyError: 'human_readable_size'.
Please let me know if I can provide any further information.
Mind re-running your ceph-volume command with debug output
enabled:
CEPH_VOLUME_DEBUG=true ceph-volume --cluster ceph lvm batch --bluestore ...
Ideally you could also open a bug report here:
https://tracker.ceph.com/projects/ceph-volume/issues/new
Thanks!
Thanks.
-Dave
--
Dave Hall
Binghamton University
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx