>>>It looks like at some point the filesystem is not passed to the options. Would
>>>you mind running the `ceph-disk-prepare` command again but with
>>>the --verbose flag?
>>>I think that from the output above (correct it if I am mistaken) that would be
>>>something like:

Hi.

If I run `ceph-deploy disk zap ceph001:sdaa ceph001:sda1` followed by
`ceph-disk -v prepare /dev/sdaa /dev/sda1`, I get the same errors:

======================================================
root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1             isize=2048   agcount=32, agsize=22892700 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=732566385, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=357698, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.EGTIq2 with options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
       use -t <type> to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.
mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2']' returned non-zero exit status 32
======================================================

If I execute this command separately for each disk, it looks OK.

For sdaa:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa
INFO:ceph-disk:Will colocate journal with data on /dev/sdaa
DEBUG:ceph-disk:Creating journal partition num 2 size 1024 on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 2097153 to 2099200 in order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1             isize=2048   agcount=32, agsize=22884508 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=732304241, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=357570, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.K3q9v5 with options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.K3q9v5
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.K3q9v5/journal -> /dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.K3q9v5
The operation has completed successfully.
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdaa

For sda1:

root@ceph001:~# ceph-disk -v prepare /dev/sda1
DEBUG:ceph-disk:OSD data device /dev/sda1 is a partition
DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
meta-data=/dev/sda1              isize=2048   agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.G30zPD with options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda1
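
The mount error in the combined run points at leftover filesystem signatures on the newly created /dev/sdaa1, and the error text itself suggests wipefs(8). A minimal sketch of how one might check for and clear stale signatures before retrying the combined prepare; this assumes the extra signature really is on /dev/sdaa1 and is not a confirmed fix, just a way to rule out stale metadata:

# list any filesystem/RAID signatures still present on the partition (read-only)
wipefs /dev/sdaa1

# if more than one signature shows up, erase them all (destroys anything on sdaa1)
wipefs -a /dev/sdaa1

# then retry the verbose prepare with the external journal
ceph-disk -v prepare /dev/sdaa /dev/sda1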