RE: Debian install


 



Thanks for your reply, Anastasios.

I was waiting for an answer.

 

My /etc/apt/sources.list.d/ceph.list content is:

deb https://download.ceph.com/debian-nautilus/ buster main

 

Even if I do “apt-get update”, the packages stay the same.
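Since apt keeps picking 12.2.11, one common cause is that the download.ceph.com entry is never actually read (for example, a missing release key or missing HTTPS transport), so apt silently falls back to Debian's own Luminous packages. Below is a minimal sketch of how one might check which version apt would install; `candidate_of` is a hypothetical helper, not a standard tool, and the key URL should be verified against the current Ceph install docs.

```shell
# candidate_of: hypothetical helper that pulls the "Candidate:" version
# out of `apt-cache policy` output.
candidate_of() {
    awk '/Candidate:/ {print $2; exit}'
}

# On the ceph-deploy node, something like the following would confirm
# whether the nautilus repo is actually usable (shown as comments
# because these commands modify the system):
#   sudo apt-get install -y apt-transport-https ca-certificates
#   wget -qO- https://download.ceph.com/keys/release.asc | sudo apt-key add -
#   sudo apt-get update
#   apt-cache policy ceph-common | candidate_of   # expect 14.2.x, not 12.2.11
```

If the Candidate still shows 12.2.11 after a clean `apt-get update`, the nautilus repo line is being ignored and the update output should say why.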

 

The Ceph client (CephFS mount) is working well, but I can't deploy new OSDs.

 

The error that I posted occurs when I run: “ceph-deploy osd create --data /dev/sdb node1”.

 

 

I appreciate any help.

 

 

Rafael.  

 

From: Anastasios Dados <tdados@xxxxxxxxxxx> 
Sent: Monday, June 29, 2020 20:01
To: Rafael Quaglio <quaglio@xxxxxxxxxx>; ceph-users@xxxxxxx
Subject: Re: Debian install

 

Hello Rafael,

Can you check the apt sources list on your ceph-deploy node? Maybe you have configured the Luminous Debian packages there?

 

Regards,

Anastasios

 

 

On Mon, 2020-06-29 at 06:59 -0300, Rafael Quaglio wrote:

Hi,

 

We have already installed a new Debian (10.4) server and I need to put it in a Ceph cluster.

 

When I execute the command to install ceph on this node:

 

 

 

ceph-deploy install --release nautilus node1

 

 

 

It starts to install version 12.x on my node...

 

 

 

(...)

 

[serifos][DEBUG ] After this operation, 183 MB of additional disk space will be used.
[serifos][DEBUG ] Selecting previously unselected package python-cephfs.
(Reading database ... 30440 files and directories currently installed.)
[serifos][DEBUG ] Preparing to unpack .../python-cephfs_12.2.11+dfsg1-2.1+b1_amd64.deb ...
[serifos][DEBUG ] Unpacking python-cephfs (12.2.11+dfsg1-2.1+b1) ...
[serifos][DEBUG ] Selecting previously unselected package ceph-common.
[serifos][DEBUG ] Preparing to unpack .../ceph-common_12.2.11+dfsg1-2.1+b1_amd64.deb ...
[serifos][DEBUG ] Unpacking ceph-common (12.2.11+dfsg1-2.1+b1) ...

 

(...)

 

 

 

How do I upgrade these packages?

 

Even with packages of this version installed, the installation completes without errors.

 

 

 

The question is due to an error message that I'm receiving when deploying a new OSD.

 

 

 

ceph-deploy osd create --data /dev/sdb node1

 

 

 

 

 

                At this point:

 

 

 

[ceph_deploy.osd][INFO  ] Distro info: debian 10.4 buster
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[node1][WARNIN] -->  RuntimeError: Unable to create a new OSD id
[node1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[node1][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 76da6c51-8385-4ffc-9a8e-0dfc11e31feb
[node1][DEBUG ]  stderr: /build/ceph-qtARip/ceph-12.2.11+dfsg1/src/mon/MonMap.cc: In function 'void MonMap::sanitize_mons(std::map<std::__cxx11::basic_string<char>, entity_addr_t>&)' thread 7f2bc7fff700 time 2020-06-29 06:56:17.331350
[node1][DEBUG ]  stderr: /build/ceph-qtARip/ceph-12.2.11+dfsg1/src/mon/MonMap.cc: 77: FAILED assert(mon_info[p.first].public_addr == p.second)
[node1][DEBUG ]  stderr: ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[node1][DEBUG ]  stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0xf5) [0x7f2bdaff5f75]
[node1][DEBUG ]  stderr: 2: (MonMap::sanitize_mons(std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, entity_addr_t, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, entity_addr_t> > &)+0x568) [0x7f2bdb050038]
[node1][DEBUG ]  stderr: 3: (MonMap::decode(ceph::buffer::list::iterator&)+0x4da) [0x7f2bdb05500a]
[node1][DEBUG ]  stderr: 4: (MonClient::handle_monmap(MMonMap*)+0x216) [0x7f2bdb042a06]
[node1][DEBUG ]  stderr: 5: (MonClient::ms_dispatch(Message*)+0x4ab) [0x7f2bdb04729b]
[node1][DEBUG ]  stderr: 6: (DispatchQueue::entry()+0xeba) [0x7f2bdb06bf5a]
[node1][DEBUG ]  stderr: 7: (DispatchQueue::DispatchThread::entry()+0xd) [0x7f2bdb1576fd]
[node1][DEBUG ]  stderr: 8: (()+0x7fa3) [0x7f2be499dfa3]
[node1][DEBUG ]  stderr: 9: (clone()+0x3f) [0x7f2be45234cf]
[node1][DEBUG ]  stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

 

 

 

 

 

I think this error occurs because of the wrong package that was installed.
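That reading matches the log: the backtrace shows a 12.2.11 (Luminous) client failing an assert while decoding the cluster's monmap. A small pre-flight sketch one could run on the node before retrying ceph-deploy is shown below; `is_nautilus` is an illustrative name, not an existing tool.

```shell
# is_nautilus: hypothetical check that the installed ceph client is a
# nautilus build, given the output of `ceph --version` as its argument.
is_nautilus() {
    case "$1" in
        *nautilus*) return 0 ;;
        *)          return 1 ;;
    esac
}

# On the node, one might gate the deploy on it:
#   is_nautilus "$(ceph --version)" || echo "still running an old release"
```

Until `ceph --version` on node1 reports a nautilus build, `ceph-deploy osd create` will keep running the old ceph-volume.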

 

 

 

 

 

Thanks,

 

Rafael

 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



