Re: Nautilus 14.2.6 ceph-volume bluestore _read_fsid unparsable uuid

On Mon, Jan 27, 2020 at 03:23:55PM -0500, Dave Hall wrote:
>All,
>
>I've just spent a significant amount of time unsuccessfully chasing
>the _read_fsid unparsable uuid error on Debian 10 / Nautilus 14.2.6.
>Since this is a brand new cluster, last night I gave up and moved back
>to Debian 9 / Luminous 12.2.11.  In both cases I'm using the packages
>from Debian Backports, with ceph-ansible as my deployment tool.
>
>Note that above I said 'the' _read_fsid unparsable uuid error: I've
>searched around a bit and found some previously reported issues, but I
>did not see any conclusive resolutions.
>
>I would like to get to Nautilus as quickly as possible, so I'd gladly
>provide additional information to help track down the cause of this
>symptom.  Looking at ceph-volume.log on the OSD host, I can confirm
>that there is no difference between the ceph-volume lvm batch command
>generated by the ceph-ansible versions associated with these two Ceph
>releases:
>
>   ceph-volume --cluster ceph lvm batch --bluestore --yes
>   --block-db-size 133358734540 /dev/sdc /dev/sdd /dev/sde /dev/sdf
>   /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/nvme0n1
>
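>For reference, the generated command is recorded in the ceph-volume
>log on each OSD host; something like the following pulls it out for
>comparison on each release (assuming the stock log location):
>
>   grep 'lvm batch' /var/log/ceph/ceph-volume.log
>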
>Note that I'm using --block-db-size to divide my NVMe into 12 segments 
>as I have 4 empty drive bays on my OSD servers that I may eventually 
>be able to fill.
>
>My OSD hardware is:
>
>   Disk /dev/nvme0n1: 1.5 TiB, 1600321314816 bytes, 3125627568 sectors
>   Disk /dev/sdc: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
>   Disk /dev/sdd: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
>   Disk /dev/sde: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
>   Disk /dev/sdf: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
>   Disk /dev/sdg: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
>   Disk /dev/sdh: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
>   Disk /dev/sdi: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
>   Disk /dev/sdj: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
>
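>As a sanity check on the --block-db-size figure above (assuming the
>intent is simply the NVMe capacity divided across 12 slots):
>
>   $ echo $((1600321314816 / 12))
>   133360109568
>
>so 133358734540 comes in just under an even twelfth, presumably
>leaving a little headroom for LVM metadata.
>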
>I'd send the output of ceph-volume inventory on Luminous, but I'm
>getting "-->: KeyError: 'human_readable_size'".
>
>Please let me know if I can provide any further information.
Mind re-running your ceph-volume command with debug output enabled:

   CEPH_VOLUME_DEBUG=true ceph-volume --cluster ceph lvm batch --bluestore ...
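
That is, the full invocation from your mail with the debug variable
prepended:

   CEPH_VOLUME_DEBUG=true ceph-volume --cluster ceph lvm batch --bluestore --yes \
       --block-db-size 133358734540 /dev/sdc /dev/sdd /dev/sde /dev/sdf \
       /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/nvme0n1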

Ideally you could also open a bug report here:
https://tracker.ceph.com/projects/ceph-volume/issues/new

Thanks!
>
>Thanks.
>
>-Dave
>
>-- 
>Dave Hall
>Binghamton University
>

-- 
Jan Fajerski
Senior Software Engineer Enterprise Storage
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



