Re: librbd::image::CreateRequest: 0x55e4fc8bf620 handle_create_id_object: error creating RBD id object

Thanks!

This problem was fixed by following your advice:

1. Added 3 OSD services.

2. Symlinked libcls_rbd.so to libcls_rbd.so.1.0.0, because I built Ceph from source, as Mykola advised (see the sketch below).
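Roughly, the symlink step looked like this (the rados-classes directory is an assumption based on a default install prefix; check the osd_class_dir option for your own build):

# Assumed object-class directory -- adjust to wherever your source build installs it
ln -s /usr/lib/rados-classes/libcls_rbd.so.1.0.0 /usr/lib/rados-classes/libcls_rbd.so
# Restart the OSD so it can load the rbd class
systemctl restart ceph-osd@0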

On 2018/11/6 4:33 PM, Ashley Merrick wrote:
Is that correct or have you added more than 1 OSD?

Ceph is never going to work or be able to bring up a pool with only one OSD. If you really do have more than one OSD and have added them correctly, then there really is something wrong with your Ceph setup/config, and it may be worth starting from scratch.
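A pool created with default settings expects 3 replicas, so its placement groups can never go active with a single OSD. As a quick check (libvirt-pool is just the pool name used elsewhere in this thread), the replication settings can be inspected with:

ceph osd pool get libvirt-pool size
ceph osd pool get libvirt-pool min_size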

On Tue, Nov 6, 2018 at 4:31 PM Dengke Du <dengke.du@xxxxxxxxxxxxx> wrote:


On 2018/11/6 4:29 PM, Ashley Merrick wrote:
What does

"ceph osd tree" show ?
root@node1:~# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-2             0 host 0                                
-1       1.00000 root default                          
-3       1.00000     host node1                        
 0   hdd 1.00000         osd.0    down        0 1.00000

On Tue, Nov 6, 2018 at 4:27 PM Dengke Du <dengke.du@xxxxxxxxxxxxx> wrote:


On 2018/11/6 4:24 PM, Ashley Merrick wrote:
If I am reading your ceph -s output correctly, you only have 1 OSD and 0 pools created.

So you will be unable to create an RBD until you at least have a pool set up and configured to create the RBD in.
root@node1:~# ceph osd lspools
1 libvirt-pool
2 test-pool


I created the pools using:

ceph osd pool create libvirt-pool 128 128

following the instructions at:

http://docs.ceph.com/docs/master/rbd/libvirt/
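(Note that on recent releases a newly created pool usually also has to be initialized for RBD before images can be created in it, e.g. with something like:

rbd pool init libvirt-pool

or, equivalently, "ceph osd pool application enable libvirt-pool rbd".)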


On Tue, Nov 6, 2018 at 4:21 PM Dengke Du <dengke.du@xxxxxxxxxxxxx> wrote:

On 2018/11/6 4:16 PM, Mykola Golub wrote:
> On Tue, Nov 06, 2018 at 09:45:01AM +0800, Dengke Du wrote:
>
>> I reconfigure the osd service from start, the journal was:
> I am not quite sure I understand what you mean here.
>
>> ------------------------------------------------------------------------------------------------------------------------------------------
>>
>> -- Unit ceph-osd@0.service has finished starting up.
>> --
>> -- The start-up result is RESULT.
>> Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05 18:02:36.915 7f6a27204e80
>> -1 Public network was set, but cluster network was not set
>> Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05 18:02:36.915 7f6a27204e80
>> -1     Using public network also for cluster network
>> Nov 05 18:02:36 node1 ceph-osd[4487]: starting osd.0 at - osd_data
>> /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
>> Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05 18:02:37.365 7f6a27204e80
>> -1 journal FileJournal::_open: disabling aio for non-block journal.  Use
>> journal_force_aio to force use of a>
>> Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05 18:02:37.414 7f6a27204e80
>> -1 journal do_read_entry(6930432): bad header magic
>> Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05 18:02:37.729 7f6a27204e80
>> -1 osd.0 21 log_to_monitors {default=true}
>> Nov 05 18:02:47 node1 nagios[3584]: Warning: Return code of 13 for check of
>> host 'localhost' was out of bounds.
>>
>> ------------------------------------------------------------------------------------------------------------------------------------------
> Could you please post the full ceph-osd log somewhere? /var/log/ceph/ceph-osd.0.log

I don't have the file /var/log/ceph/ceph-osd.0.log

root@node1:~# systemctl status ceph-osd@0
ceph-osd@0.service - Ceph object storage daemon osd.0
    Loaded: loaded (/lib/systemd/system/ceph-osd@.service; disabled;
vendor preset: enabled)
    Active: active (running) since Mon 2018-11-05 18:02:36 UTC; 6h ago
  Main PID: 4487 (ceph-osd)
     Tasks: 64
    Memory: 27.0M
    CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
            └─4487 /usr/bin/ceph-osd -f --cluster ceph --id 0

Nov 05 18:02:36 node1 systemd[1]: Starting Ceph object storage daemon
osd.0...
Nov 05 18:02:36 node1 systemd[1]: Started Ceph object storage daemon osd.0.
Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05 18:02:36.915
7f6a27204e80 -1 Public network was set, but cluster network was not set
Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05 18:02:36.915
7f6a27204e80 -1     Using public network also for cluster network
Nov 05 18:02:36 node1 ceph-osd[4487]: starting osd.0 at - osd_data
/var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05 18:02:37.365
7f6a27204e80 -1 journal FileJournal::_open: disabling aio for non-block
journal.  Use journal_force_aio to force use of a>
Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05 18:02:37.414
7f6a27204e80 -1 journal do_read_entry(6930432): bad header magic
Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05 18:02:37.729
7f6a27204e80 -1 osd.0 21 log_to_monitors {default=true}
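(Since the daemon is running under systemd, its output can usually still be captured from the journal even when no log file was written, e.g.:

journalctl -u ceph-osd@0 --no-pager > ceph-osd.0.journal.log

and that capture could then be posted in place of /var/log/ceph/ceph-osd.0.log.)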

>
>> but it hangs at the command: "rbd create libvirt-pool/dimage --size 10240"
> So it hangs forever now instead of returning the error?
It does not return any error, it just hangs.
> What is `ceph -s` output?
root@node1:~# ceph -s
   cluster:
     id:     9c1a42e1-afc2-4170-8172-96f4ebdaac68
     health: HEALTH_WARN
             no active mgr

   services:
     mon: 1 daemons, quorum 0
     mgr: no daemons active
     osd: 1 osds: 0 up, 0 in

   data:
     pools:   0 pools, 0 pgs
     objects: 0  objects, 0 B
     usage:   0 B used, 0 B / 0 B avail
     pgs:

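One possible way to bring up a mgr and mark the OSD back in, assuming a systemd deployment (the mgr instance name below is only a guess; use whatever mgr id is configured on this host):

systemctl start ceph-mgr@node1   # hypothetical instance name
ceph osd in 0                    # mark osd.0 back in; it also needs to be up for PGs to peer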

>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
