Re: trouble deploying custom config OSDs

Hi Seccentral,

How did you run that `ceph-volume raw prepare` command exactly? If you ran
it manually from within a separate container, the keyring issue you faced
is expected.
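
For reference, rather than editing /etc/ceph/ceph.keyring, ceph-volume
expects the bootstrap-osd keyring at /var/lib/ceph/bootstrap-osd/ceph.keyring
(inside whatever container it runs from). A minimal sketch of that approach,
reusing the devices from your mail and assuming the default cluster name and
keyring path (and keeping in mind that the raw + --block.db combination
itself still needs the change tracked in [1] below):

  # export the bootstrap-osd key to the location ceph-volume looks for,
  # instead of patching /etc/ceph/ceph.keyring
  ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring

  # then, from the same environment:
  ceph-volume raw prepare --bluestore \
      --data /dev/sdd \
      --block.db /dev/mapper/dev0--db--0--db--0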

In any case, what you are trying to achieve is not supported by ceph-volume
at the moment, but from what I've seen it doesn't require much effort to
support: I could get it working with a very small change in ceph-volume.
I created this tracker [1] and started a patch (not pushed yet; I'll update
this thread accordingly).
It looks like you have tried multiple things in this environment; diagnosing
the trouble you are facing with `--all-available-devices` would require a bit
more detail (mgr logs, for instance).
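
A few commands usually give a good starting point for that; something along
these lines (a sketch, assuming a cephadm/orchestrator deployment):

  # what the orchestrator intends to deploy, and what it sees on each host
  ceph orch ls osd --export
  ceph orch device ls

  # recent cephadm activity recorded by the mgr, and a live view while you
  # re-apply the service spec
  ceph log last cephadm
  ceph -W cephadm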

I'm personally available on IRC (OFTC, nick: guits) and Slack
(ceph-storage.slack.com).

Thanks,

[1] https://tracker.ceph.com/issues/58515

On Thu, 19 Jan 2023 at 18:28, seccentral <seccentral@xxxxxxxxxxxxxx> wrote:

> Hi.
> I'm new to Ceph and have been toying around in a virtual environment (for
> now), trying to understand how to manage it. I made 3 VMs in Proxmox and
> provisioned a bunch of virtual drives to each, then bootstrapped following
> the official Quincy documentation.
> These are the drives:
>
> > /dev/sdb 128.00 GB sdb True False QEMU HARDDISK (HDD)
> > /dev/sdc 128.00 GB sdc True False QEMU HARDDISK (HDD)
> > /dev/sdd 32.00 GB sdd False False QEMU HARDDISK (SSD)
>
> This is the lvdisplay output for /dev/sdd after creating two LVs:
>
> > db-0 dev0-db-0 -wi-a----- 16.00g
> >
> > db-1 dev0-db-0 -wi-a----- <16.00g
>
> Out of curiosity, I wanted OSDs with data=raw + block.db=LV, created like
> this:
>
> > ceph-volume raw prepare --bluestore --data /dev/sdd --block.db
> /dev/mapper/dev0--db--0--db--0
>
> This required tinkering with permissions and temporarily modifying
> /etc/ceph/ceph.keyring, because by default access wasn't allowed: RADOS
> complained about an unauthorized client.bootstrap-osd or something, but I
> got it to work eventually.
> (By the way, in a real environment, would raw be of any benefit vs LVM
> everywhere?)
> So now I have created 2 OSDs, each with the journal on the SSD and the
> data on the HDD.
> I repeated the steps on my other two boxes (by the way, can't this be done
> from the local box via the ceph CLI?).
> Now I am trying (and failing) to start OSD daemons on this host. I tried
> `apply osd --all-available-devices`; it tells me "Scheduled
> osd.all-available-devices update..." but nothing happens.
> I'm also not sure how to apply OSDs from a YAML file, since that would
> provision them and... they're already provisioned using the ceph-volume
> command above, right?
>
> I'm having trouble getting a lot of things to work; this is just one of
> them, and even if I feel nostalgic using mailing lists, it's inefficient.
> Is there any interactive community where I can find people who are usually
> online and talk to them in real time, like Discord/Slack etc.? I tried IRC
> but most people are AFK.
>
> Thanks
>
> Sent with [Proton Mail](https://proton.me/) secure email.
>


-- 

*Guillaume Abrioux*
Senior Software Engineer
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



