XFS Metadata corruption while activating OSD

Hello ceph-users,

This is a really tough problem for our team. We have investigated it for a long time and tried many things, but we cannot solve it; even the root cause is still unclear to us. Any solution, suggestion, or opinion would be highly appreciated!

Problem summary:
Whenever we activate an OSD, the XFS metadata on the disk being activated gets corrupted. This happens with 100% probability!
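
For quick reference, this is the exact sequence we run from the admin node (full logs are pasted below); the XFS errors show up in dmesg on the OSD node around the activate step:

ceph-deploy disk zap arms001-01:sda
ceph-deploy osd prepare --filestore arms001-01:sda
ceph-deploy osd activate arms001-01:sda1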

Admin node & MON node:
Platform: X86
OS: Ubuntu 16.04
Kernel: 4.12.0
Ceph: Luminous 12.2.2

OSD nodes:
Platform: armv7
OS:       Ubuntu 14.04
Kernel:   4.4.39
Ceph:     Luminous 12.2.2
Disk: 10T+10T
Memory: 2GB
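
(For reference, this is roughly how we collect the figures above on an OSD node; standard Ubuntu command-line tools, nothing Ceph-specific assumed:

uname -m                  # CPU architecture (armv7)
uname -r                  # running kernel (4.4.39)
ceph --version            # installed Ceph release
lsblk -d -o NAME,SIZE     # data disks (2 x 10T)
free -h                   # total memory (2GB)
)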

Deploy log:
root@mnc000:/home/mnvadmin/ceph# ceph-deploy disk zap arms001-01:sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy disk zap arms001-01:sda
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa04680f1b8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x7fa04708e500>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('arms001-01', '/dev/sda', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/sda on arms001-01
[arms001-01][DEBUG ] connection detected need for sudo
[arms001-01][DEBUG ] connected to host: arms001-01
[arms001-01][DEBUG ] detect platform information from remote host
[arms001-01][DEBUG ] detect machine type
[arms001-01][DEBUG ] find the location of an executable
[arms001-01][INFO ] Running command: sudo /sbin/initctl version
[arms001-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[arms001-01][DEBUG ] zeroing last few blocks of device
[arms001-01][DEBUG ] find the location of an executable
[arms001-01][INFO ] Running command: sudo /usr/local/bin/ceph-disk zap /dev/sda
[arms001-01][WARNIN] /usr/local/lib/python2.7/dist-packages/ceph_disk-1.0.0-py2.7.egg/ceph_disk/main.py:5653: UserWarning:
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN] This tool is now deprecated in favor of ceph-volume.
[arms001-01][WARNIN] It is recommended to use ceph-volume for OSD deployments. For details see:
[arms001-01][WARNIN]
[arms001-01][WARNIN] http://docs.ceph.com/docs/master/ceph-volume/#migrating
[arms001-01][WARNIN]
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN]
[arms001-01][DEBUG ] 4 bytes were erased at offset 0x0 (xfs)
[arms001-01][DEBUG ] they were: 58 46 53 42
[arms001-01][WARNIN] 10+0 records in
[arms001-01][WARNIN] 10+0 records out
[arms001-01][WARNIN] 10485760 bytes (10 MB) copied, 0.0610462 s, 172 MB/s
[arms001-01][WARNIN] 10+0 records in
[arms001-01][WARNIN] 10+0 records out
[arms001-01][WARNIN] 10485760 bytes (10 MB) copied, 0.129642 s, 80.9 MB/s
[arms001-01][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[arms001-01][WARNIN] backup header from main header.
[arms001-01][WARNIN]
[arms001-01][WARNIN] Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
[arms001-01][WARNIN] on the recovery & transformation menu to examine the two tables.
[arms001-01][WARNIN]
[arms001-01][WARNIN] Warning! One or more CRCs don't match. You should repair the disk!
[arms001-01][WARNIN]
[arms001-01][DEBUG ] ****************************************************************************
[arms001-01][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[arms001-01][DEBUG ] verification and recovery are STRONGLY recommended.
[arms001-01][DEBUG ] ****************************************************************************
[arms001-01][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[arms001-01][DEBUG ] other utilities.
[arms001-01][DEBUG ] Creating new GPT entries.
[arms001-01][DEBUG ] The operation has completed successfully.
[arms001-01][WARNIN] /usr/local/lib/python2.7/dist-packages/ceph_disk-1.0.0-py2.7.egg/ceph_disk/main.py:5685: UserWarning:
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN] This tool is now deprecated in favor of ceph-volume.
[arms001-01][WARNIN] It is recommended to use ceph-volume for OSD deployments. For details see:
[arms001-01][WARNIN]
[arms001-01][WARNIN] http://docs.ceph.com/docs/master/ceph-volume/#migrating
[arms001-01][WARNIN]
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN]


root@mnc000:/home/mnvadmin/ceph# ceph-deploy osd prepare --filestore arms001-01:sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy osd prepare --filestore arms001-01:sda
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] disk : [('arms001-01', '/dev/sda', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe8e2ce85f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] filestore : True
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fe8e35e1488>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks arms001-01:/dev/sda:
[arms001-01][DEBUG ] connection detected need for sudo
[arms001-01][DEBUG ] connected to host: arms001-01
[arms001-01][DEBUG ] detect platform information from remote host
[arms001-01][DEBUG ] detect machine type
[arms001-01][DEBUG ] find the location of an executable
[arms001-01][INFO ] Running command: sudo /sbin/initctl version
[arms001-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to arms001-01
[arms001-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host arms001-01 disk /dev/sda journal None activate False
[arms001-01][DEBUG ] find the location of an executable
[arms001-01][INFO ] Running command: sudo /usr/local/bin/ceph-disk -v prepare --filestore --cluster ceph --fs-type xfs -- /dev/sda
[arms001-01][WARNIN] /usr/local/lib/python2.7/dist-packages/ceph_disk-1.0.0-py2.7.egg/ceph_disk/main.py:5653: UserWarning:
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN] This tool is now deprecated in favor of ceph-volume.
[arms001-01][WARNIN] It is recommended to use ceph-volume for OSD deployments. For details see:
[arms001-01][WARNIN]
[arms001-01][WARNIN] http://docs.ceph.com/docs/master/ceph-volume/#migrating
[arms001-01][WARNIN]
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN]
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] set_type: Will colocate journal with data on /dev/sda
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] ptype_tobe_for_name: name = journal
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:a7eb3f90-bf83-4ab8-b913-bc461a2044c3 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sda
[arms001-01][DEBUG ] The operation has completed successfully.
[arms001-01][WARNIN] update_partition: Calling partprobe on created device /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms001-01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sda /sbin/partprobe /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda2 uuid path is /sys/dev/block/8:2/dm/uuid
[arms001-01][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/a7eb3f90-bf83-4ab8-b913-bc461a2044c3
[arms001-01][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sda
[arms001-01][DEBUG ] The operation has completed successfully.
[arms001-01][WARNIN] update_partition: Calling partprobe on prepared device /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms001-01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sda /sbin/partprobe /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms001-01][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/a7eb3f90-bf83-4ab8-b913-bc461a2044c3
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] set_data_partition: Creating osd partition on /dev/sda
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] ptype_tobe_for_name: name = data
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:f93553ca-b1d8-4b39-a0c5-5a1cea66c1d4 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sda
[arms001-01][DEBUG ] The operation has completed successfully.
[arms001-01][WARNIN] update_partition: Calling partprobe on created device /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms001-01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sda /sbin/partprobe /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda1 uuid path is /sys/dev/block/8:1/dm/uuid
[arms001-01][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sda1
[arms001-01][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sda1
[arms001-01][DEBUG ] meta-data=/dev/sda1              isize=2048   agcount=32, agsize=76259320 blks
[arms001-01][DEBUG ]          =                       sectsz=4096  attr=2, projid32bit=0
[arms001-01][DEBUG ] data     =                       bsize=4096   blocks=2440298235, imaxpct=5
[arms001-01][DEBUG ]          =                       sunit=0      swidth=0 blks
[arms001-01][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[arms001-01][DEBUG ] log      =internal log           bsize=4096   blocks=521728, version=2
[arms001-01][DEBUG ]          =                       sectsz=4096  sunit=1 blks, lazy-count=1
[arms001-01][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[arms001-01][WARNIN] mount: Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.62fw0Q with options noatime,inode64
[arms001-01][WARNIN] command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/tmp/mnt.62fw0Q
[arms001-01][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.62fw0Q
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.62fw0Q/ceph_fsid.16389.tmp
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.62fw0Q/fsid.16389.tmp
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.62fw0Q/magic.16389.tmp
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.62fw0Q/journal_uuid.16389.tmp
[arms001-01][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.62fw0Q/journal -> /dev/disk/by-partuuid/a7eb3f90-bf83-4ab8-b913-bc461a2044c3
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.62fw0Q/type.16389.tmp
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.62fw0Q
[arms001-01][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.62fw0Q
[arms001-01][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.62fw0Q
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms001-01][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sda
[arms001-01][DEBUG ] The operation has completed successfully.
[arms001-01][WARNIN] update_partition: Calling partprobe on prepared device /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms001-01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sda /sbin/partprobe /dev/sda
[arms001-01][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms001-01][WARNIN] command_check_call: Running command: /sbin/udevadm trigger --action=add --sysname-match sda1
[arms001-01][WARNIN] /usr/local/lib/python2.7/dist-packages/ceph_disk-1.0.0-py2.7.egg/ceph_disk/main.py:5685: UserWarning:
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN] This tool is now deprecated in favor of ceph-volume.
[arms001-01][WARNIN] It is recommended to use ceph-volume for OSD deployments. For details see:
[arms001-01][WARNIN]
[arms001-01][WARNIN] http://docs.ceph.com/docs/master/ceph-volume/#migrating
[arms001-01][WARNIN]
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN]
[arms001-01][INFO ] checking OSD status...
[arms001-01][DEBUG ] find the location of an executable
[arms001-01][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host arms001-01 is now ready for osd use.


root@mnc000:/home/mnvadmin/ceph# ceph-deploy osd activate arms001-01:sda1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy osd activate arms001-01:sda1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3c377fd5f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f3c380f6488>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('arms001-01', '/dev/sda1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks arms001-01:/dev/sda1:
[arms001-01][DEBUG ] connection detected need for sudo
[arms001-01][DEBUG ] connected to host: arms001-01
[arms001-01][DEBUG ] detect platform information from remote host
[arms001-01][DEBUG ] detect machine type
[arms001-01][DEBUG ] find the location of an executable
[arms001-01][INFO ] Running command: sudo /sbin/initctl version
[arms001-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host arms001-01 disk /dev/sda1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[arms001-01][DEBUG ] find the location of an executable
[arms001-01][INFO ] Running command: sudo /usr/local/bin/ceph-disk -v activate --mark-init upstart --mount /dev/sda1
[arms001-01][WARNIN] /usr/local/lib/python2.7/dist-packages/ceph_disk-1.0.0-py2.7.egg/ceph_disk/main.py:5653: UserWarning:
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN] This tool is now deprecated in favor of ceph-volume.
[arms001-01][WARNIN] It is recommended to use ceph-volume for OSD deployments. For details see:
[arms001-01][WARNIN]
[arms001-01][WARNIN] http://docs.ceph.com/docs/master/ceph-volume/#migrating
[arms001-01][WARNIN]
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN]
[arms001-01][WARNIN] main_activate: path = /dev/sda1
[arms001-01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda1 uuid path is /sys/dev/block/8:1/dm/uuid
[arms001-01][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sda1
[arms001-01][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sda1
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[arms001-01][WARNIN] mount: Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.mV02fa with options noatime,inode64
[arms001-01][WARNIN] command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/tmp/mnt.mV02fa
[arms001-01][WARNIN] activate: Cluster uuid is 1832ac04-c1cc-412a-bd73-1df94a2523ca
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[arms001-01][WARNIN] activate: Cluster name is ceph
[arms001-01][WARNIN] activate: OSD uuid is f93553ca-b1d8-4b39-a0c5-5a1cea66c1d4
[arms001-01][WARNIN] allocate_osd_id: Allocating OSD id...
[arms001-01][WARNIN] command: Running command: /usr/bin/ceph-authtool --gen-print-key
[arms001-01][WARNIN] __init__: stderr
[arms001-01][WARNIN] command_with_stdin: Running command with stdin: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f93553ca-b1d8-4b39-a0c5-5a1cea66c1d4
[arms001-01][WARNIN] command_with_stdin: 0
[arms001-01][WARNIN]
[arms001-01][WARNIN] command_check_call: Running command: /usr/bin/ceph-authtool /var/lib/ceph/tmp/mnt.mV02fa/keyring --create-keyring --name osd.0 --add-key AQCYZpdaRyVcMxAA4XskoJ6AJHy7jl9hN9lOaQ==
[arms001-01][DEBUG ] creating /var/lib/ceph/tmp/mnt.mV02fa/keyring
[arms001-01][DEBUG ] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQCYZpdaRyVcMxAA4XskoJ6AJHy7jl9hN9lOaQ== with 0 caps)
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.mV02fa/keyring
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.mV02fa/whoami.16997.tmp
[arms001-01][WARNIN] activate: OSD id is 0
[arms001-01][WARNIN] activate: Initializing OSD...
[arms001-01][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.mV02fa/activate.monmap
[arms001-01][WARNIN] got monmap epoch 1
[arms001-01][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs -i 0 --monmap /var/lib/ceph/tmp/mnt.mV02fa/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.mV02fa --osd-journal /var/lib/ceph/tmp/mnt.mV02fa/journal --osd-uuid f93553ca-b1d8-4b39-a0c5-5a1cea66c1d4 --setuser ceph --setgroup ceph
[arms001-01][WARNIN] 2018-03-01 10:34:03.307762 b671e000 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected f93553ca-b1d8-4b39-a0c5-5a1cea66c1d4, invalid (someone else's?) journal
[arms001-01][WARNIN] 2018-03-01 10:34:03.526145 b671e000 -1 journal do_read_entry(4096): bad header magic
[arms001-01][WARNIN] 2018-03-01 10:34:03.526175 b671e000 -1 journal do_read_entry(4096): bad header magic
[arms001-01][WARNIN] 2018-03-01 10:34:03.527606 b671e000 -1 read_settings error reading settings: (2) No such file or directory
[arms001-01][WARNIN] 2018-03-01 10:34:03.602071 b671e000 -1 key
[arms001-01][WARNIN] 2018-03-01 10:34:03.761415 b671e000 -1 created object store /var/lib/ceph/tmp/mnt.mV02fa for osd.0 fsid 1832ac04-c1cc-412a-bd73-1df94a2523ca
[arms001-01][WARNIN] activate: Marking with init system upstart
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.mV02fa/upstart
[arms001-01][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.mV02fa/active.16997.tmp
[arms001-01][WARNIN] activate: ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.mV02fa
[arms001-01][WARNIN] move_mount: Moving mount to final location...
[arms001-01][WARNIN] command_check_call: Running command: /bin/mount -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/osd/ceph-0
[arms001-01][WARNIN] command_check_call: Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.mV02fa
[arms001-01][WARNIN] start_daemon: Starting ceph osd.0...
[arms001-01][WARNIN] command_check_call: Running command: /sbin/initctl emit --no-wait -- ceph-osd cluster=ceph id=0
[arms001-01][WARNIN] /usr/local/lib/python2.7/dist-packages/ceph_disk-1.0.0-py2.7.egg/ceph_disk/main.py:5685: UserWarning:
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN] This tool is now deprecated in favor of ceph-volume.
[arms001-01][WARNIN] It is recommended to use ceph-volume for OSD deployments. For details see:
[arms001-01][WARNIN]
[arms001-01][WARNIN] http://docs.ceph.com/docs/master/ceph-volume/#migrating
[arms001-01][WARNIN]
[arms001-01][WARNIN] *******************************************************************************
[arms001-01][WARNIN]
[arms001-01][INFO ] checking OSD status...
[arms001-01][DEBUG ] find the location of an executable
[arms001-01][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json


dmesg log: (Sorry, the dmesg log from arms001-01 has been lost, but the metadata corruption messages on arms003-10 below are identical to those seen on arms001-01.)
Mar  5 11:08:49 arms003-10 kernel: [  252.534232] XFS (sda1): Unmount and run xfs_repair
Mar  5 11:08:49 arms003-10 kernel: [  252.539100] XFS (sda1): First 64 bytes of corrupted metadata buffer:
Mar  5 11:08:49 arms003-10 kernel: [  252.545504] eb82f000: 58 46 53 42 00 00 10 00 00 00 00 00 91 73 fe fb  XFSB.........s..
Mar  5 11:08:49 arms003-10 kernel: [  252.553569] eb82f010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar  5 11:08:49 arms003-10 kernel: [  252.561624] eb82f020: fc 4e e3 89 50 8f 42 aa be bc 07 0c 6e fa 83 2f  .N..P.B.....n../
Mar  5 11:08:49 arms003-10 kernel: [  252.569706] eb82f030: 00 00 00 00 80 00 00 07 ff ff ff ff ff ff ff ff  ................
Mar  5 11:08:49 arms003-10 kernel: [  252.577778] XFS (sda1): metadata I/O error: block 0x48b9ff80 ("xfs_trans_read_buf_map") error 117 numblks 8
Mar  5 11:08:49 arms003-10 kernel: [  252.602944] XFS (sda1): Metadata corruption detected at xfs_dir3_data_read_verify+0x58/0xd0, xfs_dir3_data block 0x48b9ff80
Mar  5 11:08:49 arms003-10 kernel: [  252.614170] XFS (sda1): Unmount and run xfs_repair
Mar  5 11:08:49 arms003-10 kernel: [  252.619030] XFS (sda1): First 64 bytes of corrupted metadata buffer:
Mar  5 11:08:49 arms003-10 kernel: [  252.625403] eb901000: 58 46 53 42 00 00 10 00 00 00 00 00 91 73 fe fb  XFSB.........s..
Mar  5 11:08:49 arms003-10 kernel: [  252.633441] eb901010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar  5 11:08:49 arms003-10 kernel: [  252.641474] eb901020: fc 4e e3 89 50 8f 42 aa be bc 07 0c 6e fa 83 2f  .N..P.B.....n../
Mar  5 11:08:49 arms003-10 kernel: [  252.649519] eb901030: 00 00 00 00 80 00 00 07 ff ff ff ff ff ff ff ff  ................
Mar  5 11:08:49 arms003-10 kernel: [  252.657554] XFS (sda1): metadata I/O error: block 0x48b9ff80 ("xfs_trans_read_buf_map") error 117 numblks 8
Mar  5 11:08:49 arms003-10 kernel: [  252.675056] XFS (sda1): Metadata corruption detected at xfs_dir3_data_read_verify+0x58/0xd0, xfs_dir3_data block 0x48b9ff80
Mar  5 11:08:49 arms003-10 kernel: [  252.686228] XFS (sda1): Unmount and run xfs_repair
Mar  5 11:08:49 arms003-10 kernel: [  252.691054] XFS (sda1): First 64 bytes of corrupted metadata buffer:
Mar  5 11:08:49 arms003-10 kernel: [  252.697425] eb901000: 58 46 53 42 00 00 10 00 00 00 00 00 91 73 fe fb  XFSB.........s..
Mar  5 11:08:49 arms003-10 kernel: [  252.705459] eb901010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar  5 11:08:49 arms003-10 kernel: [  252.713489] eb901020: fc 4e e3 89 50 8f 42 aa be bc 07 0c 6e fa 83 2f  .N..P.B.....n../
Mar  5 11:08:49 arms003-10 kernel: [  252.721520] eb901030: 00 00 00 00 80 00 00 07 ff ff ff ff ff ff ff ff  ................
Mar  5 11:08:49 arms003-10 kernel: [  252.729558] XFS (sda1): metadata I/O error: block 0x48b9ff80 ("xfs_trans_read_buf_map") error 117 numblks 8
Mar  5 11:08:49 arms003-10 kernel: [  252.741953] XFS (sda1): Metadata corruption detected at xfs_dir3_data_read_verify+0x58/0xd0, xfs_dir3_data block 0x48b9ff80
Mar  5 11:08:49 arms003-10 kernel: [  252.753139] XFS (sda1): Unmount and run xfs_repair
Mar  5 11:08:49 arms003-10 kernel: [  252.757955] XFS (sda1): First 64 bytes of corrupted metadata buffer:
Mar  5 11:08:49 arms003-10 kernel: [  252.764336] eb901000: 58 46 53 42 00 00 10 00 00 00 00 00 91 73 fe fb  XFSB.........s..
Mar  5 11:08:49 arms003-10 kernel: [  252.772365] eb901010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar  5 11:08:49 arms003-10 kernel: [  252.780395] eb901020: fc 4e e3 89 50 8f 42 aa be bc 07 0c 6e fa 83 2f  .N..P.B.....n../
Mar  5 11:08:49 arms003-10 kernel: [  252.788417] eb901030: 00 00 00 00 80 00 00 07 ff ff ff ff ff ff ff ff  ................
Mar  5 11:08:49 arms003-10 kernel: [  252.796514] XFS (sda1): metadata I/O error: block 0x48b9ff80 ("xfs_trans_read_buf_map") error 117 numblks 8
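
Following the kernel's advice ("Unmount and run xfs_repair"), this is a minimal sketch of how the damaged filesystem can be inspected. It assumes the osd.0 mount point and data partition from the activate log above (adjust the id and mount point for the actual OSD); -n keeps xfs_repair in read-only, no-modify mode:

# stop the OSD instance so the filesystem can be unmounted (upstart on Ubuntu 14.04)
sudo stop ceph-osd cluster=ceph id=0
sudo umount /var/lib/ceph/osd/ceph-0
# report problems only; drop -n to attempt an actual repair
sudo xfs_repair -n /dev/sda1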

What we have tried so far:
1. Deployed the OSD manually; confirmed we still get the same error.
2. Browsed the kernel changelogs, but found no related bug fix since kernel 4.4.39.
3. Upgraded xfsprogs from 3.1.9 to 4.15.0; the error number changed (from 117 to 74), but the disk is still corrupted while activating the OSD:

[2912641.987937] XFS (sda1): Metadata CRC error detected at xfs_dir3_data_read_verify+0x58/0xd0, xfs_dir3_data block 0xfffffff0
[2912641.999203] XFS (sda1): Unmount and run xfs_repair
[2912642.004202] XFS (sda1): First 64 bytes of corrupted metadata buffer:
[2912642.010759] e689a000: 58 46 53 42 00 00 10 00 00 00 00 00 91 73 fe fb  XFSB.........s..
[2912642.018958] e689a010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[2912642.027177] e689a020: 61 7b 64 0d fa fe 41 14 bf ea 90 32 6c 73 e5 ad  a{d...A....2ls..
[2912642.035388] e689a030: 00 00 00 00 50 00 00 08 ff ff ff ff ff ff ff ff  ....P...........
[2912642.043630] XFS (sda1): metadata I/O error: block 0xfffffff0 ("xfs_trans_read_buf_map") error 74 numblks 8
[2912642.060390] XFS (sda1): Metadata CRC error detected at xfs_dir3_data_read_verify+0x58/0xd0, xfs_dir3_data block 0xfffffff0
[2912642.071673] XFS (sda1): Unmount and run xfs_repair
   
4. Confirmed that using the same disk as an OSD on an x86 node does not trigger the problem.
5. Confirmed that partitioning the disk with sgdisk, formatting it with mkfs.xfs, mounting it, running some read/write dd tests, and unmounting does not trigger the problem (a sketch of this test follows this list).
6. Confirmed that changing the Ceph version from 12.2.2 to 10.2.10 does not help; the problem still exists.
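
Regarding item 5, a rough sketch of the manual test that does not trigger the problem (the mkfs and mount options mirror what ceph-disk uses in the logs above; the dd size is only an example):

# wipe the partition table and create one large data partition (destroys all data on /dev/sda)
sudo sgdisk --zap-all /dev/sda
sudo sgdisk --largest-new=1 /dev/sda
# same mkfs/mount options as ceph-disk
sudo mkfs.xfs -f -i size=2048 /dev/sda1
sudo mount -t xfs -o noatime,inode64 /dev/sda1 /mnt
# simple write/read exercise
sudo dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 conv=fsync
sudo dd if=/mnt/testfile of=/dev/null bs=1M
sudo umount /mnt
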
   10.2.10 deploy log:
  
prepare
root@mncx86004:/home/mnvadmin/ceph# ceph-deploy disk zap arms003-10:sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.33): /usr/bin/ceph-deploy disk zap arms003-10:sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f29eab24cf8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f29eab06578>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('arms003-10', '/dev/sda', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/sda on arms003-10
[arms003-10][DEBUG ] connection detected need for sudo
[arms003-10][DEBUG ] connected to host: arms003-10
[arms003-10][DEBUG ] detect platform information from remote host
[arms003-10][DEBUG ] detect machine type
[arms003-10][DEBUG ] find the location of an executable
[arms003-10][INFO  ] Running command: sudo /sbin/initctl version
[arms003-10][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[arms003-10][DEBUG ] zeroing last few blocks of device
[arms003-10][DEBUG ] find the location of an executable
[arms003-10][INFO  ] Running command: sudo /usr/local/bin/ceph-disk zap /dev/sda
[arms003-10][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[arms003-10][WARNIN] backup header from main header.
[arms003-10][WARNIN]
[arms003-10][WARNIN] Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
[arms003-10][WARNIN] on the recovery & transformation menu to examine the two tables.
[arms003-10][WARNIN]
[arms003-10][WARNIN] Warning! One or more CRCs don't match. You should repair the disk!
[arms003-10][WARNIN]
[arms003-10][DEBUG ] ****************************************************************************
[arms003-10][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[arms003-10][DEBUG ] verification and recovery are STRONGLY recommended.
[arms003-10][DEBUG ] ****************************************************************************
[arms003-10][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[arms003-10][DEBUG ] other utilities.
[arms003-10][DEBUG ] Creating new GPT entries.
[arms003-10][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Calling partprobe on zapped device /dev/sda
[arms003-10][DEBUG ] find the location of an executable
[arms003-10][INFO  ] Running command: sudo /sbin/partprobe /dev/sda
root@mncx86004:/home/mnvadmin/ceph# ceph-deploy osd prepare arms003-10:sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.33): /usr/bin/ceph-deploy osd prepare arms003-10:sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('arms003-10', '/dev/sda', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f18af259d40>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f18af4c0500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks arms003-10:/dev/sda:
[arms003-10][DEBUG ] connection detected need for sudo
[arms003-10][DEBUG ] connected to host: arms003-10
[arms003-10][DEBUG ] detect platform information from remote host
[arms003-10][DEBUG ] detect machine type
[arms003-10][DEBUG ] find the location of an executable
[arms003-10][INFO  ] Running command: sudo /sbin/initctl version
[arms003-10][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to arms003-10
[arms003-10][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[arms003-10][WARNIN] osd keyring does not exist yet, creating one
[arms003-10][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host arms003-10 disk /dev/sda journal None activate False
[arms003-10][DEBUG ] find the location of an executable
[arms003-10][INFO  ] Running command: sudo /usr/local/bin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sda
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] set_type: Will colocate journal with data on /dev/sda
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] ptype_tobe_for_name: name = journal
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sda
[arms003-10][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:14fd0ee1-e33e-4133-a9c2-c9687f44e5aa --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sda
[arms003-10][DEBUG ] The operation has completed successfully.
[arms003-10][WARNIN] update_partition: Calling partprobe on created device /dev/sda
[arms003-10][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms003-10][WARNIN] command: Running command: /usr/bin/flock -s /dev/sda /sbin/partprobe /dev/sda
[arms003-10][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda2 uuid path is /sys/dev/block/8:2/dm/uuid
[arms003-10][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/14fd0ee1-e33e-4133-a9c2-c9687f44e5aa
[arms003-10][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/14fd0ee1-e33e-4133-a9c2-c9687f44e5aa
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] set_data_partition: Creating osd partition on /dev/sda
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] ptype_tobe_for_name: name = data
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sda
[arms003-10][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:5a385ba3-022b-4c1c-865c-4b985410af77 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sda
[arms003-10][DEBUG ] The operation has completed successfully.
[arms003-10][WARNIN] update_partition: Calling partprobe on created device /dev/sda
[arms003-10][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms003-10][WARNIN] command: Running command: /usr/bin/flock -s /dev/sda /sbin/partprobe /dev/sda
[arms003-10][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda1 uuid path is /sys/dev/block/8:1/dm/uuid
[arms003-10][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sda1
[arms003-10][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sda1
[arms003-10][DEBUG ] meta-data=/dev/sda1              isize=2048   agcount=32, agsize=76259320 blks
[arms003-10][DEBUG ]          =                       sectsz=4096  attr=2, projid32bit=0
[arms003-10][DEBUG ] data     =                       bsize=4096   blocks=2440298235, imaxpct=5
[arms003-10][DEBUG ]          =                       sunit=0      swidth=0 blks
[arms003-10][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[arms003-10][DEBUG ] log      =internal log           bsize=4096   blocks=521728, version=2
[arms003-10][DEBUG ]          =                       sectsz=4096  sunit=1 blks, lazy-count=1
[arms003-10][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[arms003-10][WARNIN] mount: Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.vhTS8S with options noatime,inode64
[arms003-10][WARNIN] command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/tmp/mnt.vhTS8S
[arms003-10][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.vhTS8S
[arms003-10][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.vhTS8S/ceph_fsid.2977.tmp
[arms003-10][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.vhTS8S/fsid.2977.tmp
[arms003-10][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.vhTS8S/magic.2977.tmp
[arms003-10][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.vhTS8S/journal_uuid.2977.tmp
[arms003-10][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.vhTS8S/journal -> /dev/disk/by-partuuid/14fd0ee1-e33e-4133-a9c2-c9687f44e5aa
[arms003-10][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.vhTS8S
[arms003-10][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.vhTS8S
[arms003-10][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.vhTS8S
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
[arms003-10][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sda
[arms003-10][DEBUG ] The operation has completed successfully.
[arms003-10][WARNIN] update_partition: Calling partprobe on prepared device /dev/sda
[arms003-10][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms003-10][WARNIN] command: Running command: /usr/bin/flock -s /dev/sda /sbin/partprobe /dev/sda
[arms003-10][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[arms003-10][WARNIN] command_check_call: Running command: /sbin/udevadm trigger --action=add --sysname-match sda1
[arms003-10][INFO  ] checking OSD status...
[arms003-10][DEBUG ] find the location of an executable
[arms003-10][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host arms003-10 is now ready for osd use.



activate
root@mncx86004:/home/mnvadmin/ceph# ceph-deploy osd activate arms003-10:sda1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.33): /usr/bin/ceph-deploy osd activate arms003-10:sda1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f19953d7d40>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f199563e500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('arms003-10', '/dev/sda1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks arms003-10:/dev/sda1:
[arms003-10][DEBUG ] connection detected need for sudo
[arms003-10][DEBUG ] connected to host: arms003-10
[arms003-10][DEBUG ] detect platform information from remote host
[arms003-10][DEBUG ] detect machine type
[arms003-10][DEBUG ] find the location of an executable
[arms003-10][INFO  ] Running command: sudo /sbin/initctl version
[arms003-10][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host arms003-10 disk /dev/sda1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[arms003-10][DEBUG ] find the location of an executable
[arms003-10][INFO  ] Running command: sudo /usr/local/bin/ceph-disk -v activate --mark-init upstart --mount /dev/sda1
[arms003-10][WARNIN] main_activate: path = /dev/sda1
[arms003-10][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda1 uuid path is /sys/dev/block/8:1/dm/uuid
[arms003-10][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sda1
[arms003-10][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sda1
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[arms003-10][WARNIN] mount: Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.zk8ykg with options noatime,inode64
[arms003-10][WARNIN] command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/tmp/mnt.zk8ykg
[arms003-10][WARNIN] activate: Cluster uuid is e4182afd-ab1b-42c1-9fb1-e4ec211235e8
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[arms003-10][WARNIN] activate: Cluster name is ceph
[arms003-10][WARNIN] activate: OSD uuid is 5a385ba3-022b-4c1c-865c-4b985410af77
[arms003-10][WARNIN] allocate_osd_id: Allocating OSD id...
[arms003-10][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 5a385ba3-022b-4c1c-865c-4b985410af77
[arms003-10][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zk8ykg/whoami.3222.tmp
[arms003-10][WARNIN] activate: OSD id is 66
[arms003-10][WARNIN] activate: Initializing OSD...
[arms003-10][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.zk8ykg/activate.monmap
[arms003-10][WARNIN] got monmap epoch 1
[arms003-10][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 66 --monmap /var/lib/ceph/tmp/mnt.zk8ykg/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.zk8ykg --osd-journal /var/lib/ceph/tmp/mnt.zk8ykg/journal --osd-uuid 5a385ba3-022b-4c1c-865c-4b985410af77 --keyring /var/lib/ceph/tmp/mnt.zk8ykg/keyring --setuser ceph --setgroup ceph
[arms003-10][WARNIN] activate: Marking with init system upstart
[arms003-10][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zk8ykg/upstart
[arms003-10][WARNIN] activate: Authorizing OSD key...
[arms003-10][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.66 -i /var/lib/ceph/tmp/mnt.zk8ykg/keyring osd allow * mon allow profile osd
[arms003-10][WARNIN] added key for osd.66
[arms003-10][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zk8ykg/active.3222.tmp
[arms003-10][WARNIN] activate: ceph osd.66 data dir is ready at /var/lib/ceph/tmp/mnt.zk8ykg
[arms003-10][WARNIN] move_mount: Moving mount to final location...
[arms003-10][WARNIN] command_check_call: Running command: /bin/mount -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/osd/ceph-66
[arms003-10][WARNIN] command_check_call: Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.zk8ykg
[arms003-10][WARNIN] start_daemon: Starting ceph osd.66...
[arms003-10][WARNIN] command_check_call: Running command: /sbin/initctl emit --no-wait -- ceph-osd cluster=ceph id=66
[arms003-10][INFO  ] checking OSD status...
[arms003-10][DEBUG ] find the location of an executable
[arms003-10][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json

   The corruption error is the same as with 12.2.2.
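
If a dump of the corrupted metadata would help anyone analyse this further, a minimal sketch of how we could capture one (xfs_metadump is part of xfsprogs; the filesystem must be unmounted; the dump contains metadata only, no file data):

sudo umount /var/lib/ceph/osd/ceph-66
sudo xfs_metadump -g /dev/sda1 /tmp/sda1.metadump   # -g prints progress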

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
