Re: How to deploy ceph with spdk step by step?


 



Hi Nathan Cutler, Orlando Moreno, Loic Dachary and Sage Weil,

 

I am trying to enable SPDK in Ceph, but I have failed so far. My steps are listed below. Could you check whether they are correct and help me get SPDK working with Ceph? I know this is a lot to ask, but I need your help. The Ceph version is 13.0.2. Thank you very much.

 

First step: I ran src/spdk/scripts/setup.sh as shown below (a quick verification check follows the output):

 

[root@ceph-rep-05 ceph-ansible]# ../ceph/src/spdk/scripts/setup.sh

0005:01:00.0 (1179 010e): nvme -> vfio-pci
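
To double-check the device binding and the hugepage reservation after setup.sh, I believe these two commands are enough (a minimal sketch; "status" is part of the same SPDK scripts directory used above):

../ceph/src/spdk/scripts/setup.sh status
grep Huge /proc/meminfo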

 

Second step: the OSD-related part of ceph.conf is as follows (a fuller stanza sketch follows the snippet):

[osd]

bluestore = true

[osd.0] 

host = ceph-rep-05

osd data = ""> 

bluestore_block_path = spdk:55cd2e404c7e1063
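
For reference, here is the fuller stanza I am considering, loosely based on the BlueStore SPDK notes (a sketch only: the osd data path is taken from the mkdir in the third step, I am guessing that "bluestore = true" should really be "osd objectstore = bluestore", and I am not certain the empty DB/WAL overrides are needed on 13.0.2):

[osd]
osd objectstore = bluestore

[osd.0]
host = ceph-rep-05
osd data = /var/lib/ceph/osd/ceph-0
bluestore_block_path = spdk:55cd2e404c7e1063
bluestore_block_db_path = ""
bluestore_block_db_size = 0
bluestore_block_wal_path = ""
bluestore_block_wal_size = 0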

 

 

Third step: create and start the OSD (a fuller sequence sketch follows these commands):

ceph osd create

mkdir /var/lib/ceph/osd/ceph-0/ 

chown ceph:ceph /var/lib/ceph/osd/ceph-0/

ceph-osd -i 0 --mkfs --mkkey --osd-data /var/lib/ceph/osd/ceph-0/ -c /etc/ceph/ceph.conf --debug_osd 20

ceph-osd -i 0
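
For comparison, the fuller manual sequence I pieced together from the deployment docs looks roughly like this (a sketch only; the osd-uuid and the auth/keyring step are my assumptions about what may be missing, not something I have verified on 13.0.2):

UUID=$(uuidgen)
OSD_ID=$(ceph osd new ${UUID})
mkdir -p /var/lib/ceph/osd/ceph-${OSD_ID}
chown ceph:ceph /var/lib/ceph/osd/ceph-${OSD_ID}
ceph-osd -i ${OSD_ID} --mkfs --mkkey --osd-uuid ${UUID} -c /etc/ceph/ceph.conf
ceph auth add osd.${OSD_ID} osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-${OSD_ID}/keyring
ceph-osd -i ${OSD_ID}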

 

 

[root@ceph-rep-05 ceph-ansible-0417]# ceph-osd -i 0 --mkfs --osd-data /var/lib/ceph/osd/ceph-0/ -c /etc/ceph/ceph.conf --debug_osd 20

2018-04-27 17:14:24.674 ffff9b5a0000 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

2018-04-27 17:14:24.804 ffff9b5a0000 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

2018-04-27 17:14:24.804 ffff9b5a0000 -1 journal do_read_entry(4096): bad header magic

2018-04-27 17:14:24.804 ffff9b5a0000 -1 journal do_read_entry(4096): bad header magic

[root@ceph-rep-05 ceph-ansible-0417]# ceph-osd -i 0

starting osd.0 at - osd_data /var/lib/ceph/osd/ceph-0/ /var/lib/ceph/osd/ceph-0/journal

2018-04-27 17:14:44.852 ffff83b20000 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

2018-04-27 17:14:44.852 ffff83b20000 -1 journal do_read_entry(8192): bad header magic

2018-04-27 17:14:44.852 ffff83b20000 -1 journal do_read_entry(8192): bad header magic

2018-04-27 17:14:44.872 ffff83b20000 -1 osd.0 0 log_to_monitors {default=true}
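
Once osd.0 is up, I think the objectstore and backing device it actually picked can be confirmed with the command below (a sketch; I have not checked the exact metadata field names on 13.0.2):

ceph osd metadata 0 | grep -E 'osd_objectstore|bluestore_bdev'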

 

Last step:

[root@ceph-rep-05 ceph-ansible-0417]# ceph -s

  cluster:

    id:     e05d6376-6965-4c48-9b36-b8f5c518e3b9

    health: HEALTH_WARN

            Reduced data availability: 256 pgs inactive

            too many PGs per OSD (256 > max 200)

 

  services:

    mon: 1 daemons, quorum ceph-rep-05

    mgr: ceph-rep-05(active)

    osd: 1 osds: 1 up, 1 in

 

  data:

    pools:   3 pools, 256 pgs

    objects: 0 objects, 0

    usage:   0 used, 0 / 0 avail

    pgs:     100.000% pgs unknown

             256 unknown
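
About the "too many PGs per OSD (256 > max 200)" warning: my understanding is that the limit comes from mon_max_pg_per_osd, so for a single-OSD test it can probably be raised temporarily with the command below (a sketch, assuming the centralized config store is available on this build; it does not address the inactive/unknown PGs themselves):

ceph config set global mon_max_pg_per_osd 300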

 

