Re: RuntimeError on activate lvm

Hi Eugen,

Thanks for the quick response! I'm probably doing things the more difficult (wrong) way 😉

This is my first installation of a Ceph cluster. I'm setting up three servers for non-critical data and a low I/O load.
I don't want to lose the storage capacity of the entire disk just because the OS is installed on it. The OS disk is about 900 GB, of which I've partitioned 50 GB for the OS; I want to use the remaining 850 GB as an OSD.

First I created a new 850 GB partition and changed its type to 95 (Ceph OSD). Then I tried to add it to the cluster with 'ceph orch daemon add osd hvs002:/dev/sda3', but I got an error.
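
From what I've read, cephadm may accept an LVM logical volume where it refuses a raw partition, so maybe something along these lines would have worked (the VG/LV names are just placeholders following my naming on hvs001; I haven't verified this on my version):

# create a PV/VG/LV on the spare partition (placeholder names)
pvcreate /dev/sda3
vgcreate hvs002_sda3 /dev/sda3
lvcreate -l 100%FREE -n lvol0 hvs002_sda3
# then hand the logical volume to cephadm as vg/lv
ceph orch daemon add osd hvs002:hvs002_sda3/lvol0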

That's why I tried the manual LVM way.
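
If I understand your suggestion correctly, I should redo the activation with the '--no-systemd' flag, i.e. something like this (same OSD id and fsid as in my output below):

ceph-volume lvm activate --no-systemd 0 25bfe96a-4f7a-47e1-8644-b74a4d104dbc

Is that right?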

I know using a partition next to the OS isn't best practice, but pointers to a better approach than the one I describe above would be greatly appreciated.
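
For reference, is the intended "cephadm way" to describe the OSDs in a service spec and apply it, roughly like this (untested, the service_id and paths are guesses for my setup)?

# osd_spec.yaml
service_type: osd
service_id: osd_on_os_disk
placement:
  hosts:
    - hvs002
spec:
  data_devices:
    paths:
      - /dev/sda3

# apply the spec so cephadm creates the OSDs
ceph orch apply -i osd_spec.yaml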

Greetings,

Dominique.

> -----Original message-----
> From: Eugen Block <eblock@xxxxxx>
> Sent: Wednesday, April 6, 2022 9:53
> To: ceph-users@xxxxxxx
> Subject: Re: RuntimeError on activate lvm
> 
> Hi,
> 
> is there any specific reason why you do it manually instead of letting
> cephadm handle it? I might misremember but I believe for the manual lvm
> activation to work you need to pass the '--no-systemd' flag.
> 
> Regards,
> Eugen
> 
> Quoting Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>:
> 
> > Hi,
> >
> >
> > I've set up a Ceph cluster using cephadm on three Ubuntu servers.
> > Everything went great until I tried to activate an OSD prepared on an
> > LVM volume.
> >
> >
> > I have prepared 4 volumes with the command:
> >
> > ceph-volume lvm prepare --data vg/lv
> >
> >
> > Now I try to activate one of them with the command (followed by the output):
> >
> > root@hvs001:/# ceph-volume lvm activate 0 25bfe96a-4f7a-47e1-8644-b74a4d104dbc
> > Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
> > Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/hvs001_sda2/lvol0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
> > Running command: /usr/bin/ln -snf /dev/hvs001_sda2/lvol0 /var/lib/ceph/osd/ceph-0/block
> > Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
> > Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
> > Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-25bfe96a-4f7a-47e1-8644-b74a4d104dbc
> >  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-25bfe96a-4f7a-47e1-8644-b74a4d104dbc.service -> /usr/lib/systemd/system/ceph-volume@.service.
> > Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
> >  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service -> /usr/lib/systemd/system/ceph-osd@.service.
> > Running command: /usr/bin/systemctl start ceph-osd@0
> >  stderr: Failed to connect to bus: No such file or directory
> > -->  RuntimeError: command returned non-zero exit status: 1
> >
> >
> > Seems systemd isn't playing along?
> >
> >
> > Please advise.
> >
> >
> > Some additional background info:
> >
> > root@hvs001:/# ceph status
> >   cluster:
> >     id:     dd4b0610-b4d2-11ec-bb58-d1b32ae31585
> >     health: HEALTH_OK
> >
> >   services:
> >     mon: 3 daemons, quorum hvs001,hvs002,hvs003 (age 23m)
> >     mgr: hvs001.baejuo(active, since 23m), standbys: hvs002.etijdk
> >     osd: 4 osds: 0 up, 2 in (since 36m)
> >
> >   data:
> >     pools:   0 pools, 0 pgs
> >     objects: 0 objects, 0 B
> >     usage:   0 B used, 0 B / 0 B avail
> >     pgs:
> >
> >
> > root@hvs001:/# ceph-volume lvm list
> >
> >
> > ====== osd.0 =======
> >
> >   [block]       /dev/hvs001_sda2/lvol0
> >
> >       block device              /dev/hvs001_sda2/lvol0
> >       block uuid                6cEw8v-5xIA-K76l-7zIN-V2BK-RNWD-yGwfqp
> >       cephx lockbox secret
> >       cluster fsid              dd4b0610-b4d2-11ec-bb58-d1b32ae31585
> >       cluster name              ceph
> >       crush device class
> >       encrypted                 0
> >       osd fsid                  25bfe96a-4f7a-47e1-8644-b74a4d104dbc
> >       osd id                    0
> >       osdspec affinity
> >       type                      block
> >       vdo                       0
> >       devices                   /dev/sda2
> >
> > ====== osd.1 =======
> >
> >   [block]       /dev/hvs001_sdb3/lvol1
> >
> > ....
> >
> >
> > Greetings,
> >
> >
> > Dominique.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx