Re: RuntimeError on activate lvm


 



Additionally, I tried to add the volume automatically (I zapped the LVs and removed the OSD entries with 'ceph osd rm', then recreated the LVs). Now I get this...
Command: 'ceph orch daemon add osd hvs001:/dev/hvs001_sda2/lvol0'

Errors:
RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/mon.hvs001/config
Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:3cf8e17ae80444cda3aa8872a36938b3e2b62fa564f29794773762406f9420d7 -e NODE_NAME=hvs001 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585:/var/run/ceph:z -v /var/log/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585:/var/log/ceph:z -v /var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpcv2el0nk:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpmc7njw96:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:3cf8e17ae80444cda3aa8872a36938b3e2b62fa564f29794773762406f9420d7 lvm batch --no-auto /dev/hvs001_sda2/lvol0 --yes --no-systemd
/usr/bin/docker: stderr --> passed data devices: 0 physical, 1 LVM
/usr/bin/docker: stderr --> relative data size: 1.0
/usr/bin/docker: stderr Running command: /usr/bin/ceph-authtool --gen-print-key
/usr/bin/docker: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a6a26aa1-894c-467b-bcae-1445213d6f91
/usr/bin/docker: stderr  stderr: Error EEXIST: entity osd.0 exists but key does not match
...
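
The "Error EEXIST: entity osd.0 exists but key does not match" makes me think the old cephx key for osd.0 survived the 'ceph osd rm'. A possible cleanup before re-adding (only a sketch, assuming osd.0 really is a stale entry that can be removed completely):

  ceph osd purge 0 --yes-i-really-mean-it   # removes osd.0 from the OSD map, CRUSH map and auth database
  # or, if only the old key is in the way:
  ceph auth del osd.0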



> -----Original Message-----
> From: Dominique Ramaekers
> Sent: Wednesday, 6 April 2022 10:13
> To: 'Eugen Block' <eblock@xxxxxx>; ceph-users@xxxxxxx
> Subject: RE: Re: RuntimeError on activate lvm
> 
> Hi Eugen,
> 
> Thanks for the quick response! I'm probably doing things the difficult
> (wrong) way 😉
> 
> This is my first installation of a Ceph cluster. I'm setting up three servers for
> non-critical data and a low I/O load.
> I don't want to lose storage capacity by leaving the entire disk on
> which the OS is installed unused for Ceph. The OS disk is about 900 GB and I've
> partitioned 50 GB for the OS; I want to use the remaining 850 GB as an OSD.
> 
> First I created a new partition of 850 GB and changed the partition type to 95
> (Ceph OSD). Then I tried to add it to the cluster with 'ceph orch daemon add osd
> hvs002:/dev/sda3', but I got an error.
> 
> That's why I tried the manual LVM way.
> 
> I know using a partition next to the OS isn't best practice, but pointers to a
> 'better practice' than what I describe above would be greatly appreciated.
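> 
> For example, wrapping the partition in LVM and handing the LV to the orchestrator
> (only a sketch; the VG/LV names are illustrative, following the host/disk naming
> used elsewhere in this thread):
> 
>   pvcreate /dev/sda3
>   vgcreate hvs002_sda3 /dev/sda3
>   lvcreate -l 100%FREE -n lvol0 hvs002_sda3
>   ceph orch daemon add osd hvs002:/dev/hvs002_sda3/lvol0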
> 
> Greetings,
> 
> Dominique.
> 
> > -----Original Message-----
> > From: Eugen Block <eblock@xxxxxx>
> > Sent: Wednesday, 6 April 2022 9:53
> > To: ceph-users@xxxxxxx
> > Subject: Re: RuntimeError on activate lvm
> >
> > Hi,
> >
> > is there any specific reason why you do it manually instead of letting
> > cephadm handle it? I might misremember but I believe for the manual
> > lvm activation to work you need to pass the '--no-systemd' flag.
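> >
> > For example, something along these lines (a sketch only, reusing the OSD id and
> > fsid from the output quoted below):
> >
> >   ceph-volume lvm activate 0 25bfe96a-4f7a-47e1-8644-b74a4d104dbc --no-systemd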
> >
> > Regards,
> > Eugen
> >
> > Quoting Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>:
> >
> > > Hi,
> > >
> > >
> > > I've setup a ceph cluster using cephadmin on three ubuntu servers.
> > > Everything went great until I tried to activate a osd prepared on a
> > > lvm.
> > >
> > >
> > > I have prepared 4 volumes with the command:
> > >
> > > ceph-volume lvm prepare --data vg/lv
> > >
> > >
> > > Now I try to activate one of them with the command (followed by the output):
> > >
> > > root@hvs001:/# ceph-volume lvm activate 0 25bfe96a-4f7a-47e1-8644-b74a4d104dbc
> > > Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> > > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
> > > Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/hvs001_sda2/lvol0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
> > > Running command: /usr/bin/ln -snf /dev/hvs001_sda2/lvol0 /var/lib/ceph/osd/ceph-0/block
> > > Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
> > > Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> > > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
> > > Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-25bfe96a-4f7a-47e1-8644-b74a4d104dbc
> > >  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-25bfe96a-4f7a-47e1-8644-b74a4d104dbc.service -> /usr/lib/systemd/system/ceph-volume@.service.
> > > Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
> > >  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service -> /usr/lib/systemd/system/ceph-osd@.service.
> > > Running command: /usr/bin/systemctl start ceph-osd@0
> > >  stderr: Failed to connect to bus: No such file or directory
> > > -->  RuntimeError: command returned non-zero exit status: 1
> > >
> > >
> > > Seems systemd isn't playing along?
> > >
> > >
> > > Please advise.
> > >
> > >
> > > Some additional background info:
> > >
> > > root@hvs001:/# ceph status
> > >   cluster:
> > >     id:     dd4b0610-b4d2-11ec-bb58-d1b32ae31585
> > >     health: HEALTH_OK
> > >
> > >   services:
> > >     mon: 3 daemons, quorum hvs001,hvs002,hvs003 (age 23m)
> > >     mgr: hvs001.baejuo(active, since 23m), standbys: hvs002.etijdk
> > >     osd: 4 osds: 0 up, 2 in (since 36m)
> > >
> > >   data:
> > >     pools:   0 pools, 0 pgs
> > >     objects: 0 objects, 0 B
> > >     usage:   0 B used, 0 B / 0 B avail
> > >     pgs:
> > >
> > >
> > > root@hvs001:/# ceph-volume lvm list
> > >
> > >
> > > ====== osd.0 =======
> > >
> > >   [block]       /dev/hvs001_sda2/lvol0
> > >
> > >       block device              /dev/hvs001_sda2/lvol0
> > >       block uuid                6cEw8v-5xIA-K76l-7zIN-V2BK-RNWD-yGwfqp
> > >       cephx lockbox secret
> > >       cluster fsid              dd4b0610-b4d2-11ec-bb58-d1b32ae31585
> > >       cluster name              ceph
> > >       crush device class
> > >       encrypted                 0
> > >       osd fsid                  25bfe96a-4f7a-47e1-8644-b74a4d104dbc
> > >       osd id                    0
> > >       osdspec affinity
> > >       type                      block
> > >       vdo                       0
> > >       devices                   /dev/sda2
> > >
> > > ====== osd.1 =======
> > >
> > >   [block]       /dev/hvs001_sdb3/lvol1
> > >
> > > ....
> > >
> > >
> > > Greetings,
> > >
> > >
> > > Dominique.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



