trouble deploying custom config OSDs

Hi.
I'm new to Ceph and have been toying around in a virtual environment (for now) to understand how to manage it. I made three VMs in Proxmox and provisioned a bunch of virtual drives to each, then bootstrapped the cluster following the official Quincy documentation.
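For context, the bootstrap was essentially the standard cephadm procedure, more or less like this (the IP and hostname below are just placeholders from my lab):

> # on the first VM, as root, after installing cephadm
> cephadm bootstrap --mon-ip 192.168.1.10
> # then, for each additional VM, copy the cluster SSH key and add the host
> ssh-copy-id -f -i /etc/ceph/ceph.pub root@dev1
> ceph orch host add dev1
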
These are the drives:

> /dev/sdb 128.00 GB sdb True False QEMU HARDDISK (HDD)
> /dev/sdc 128.00 GB sdc True False QEMU HARDDISK (HDD)
> /dev/sdd 32.00 GB sdd False False QEMU HARDDISK (SSD)

This is the lvs output on /dev/sdd after creating the two LVs:

> db-0 dev0-db-0 -wi-a----- 16.00g
>
> db-1 dev0-db-0 -wi-a----- <16.00g
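
For completeness, the VG and LVs on the SSD were created roughly like this (from memory, so take the exact names and sizes with a grain of salt):

> # turn the SSD into a PV, create a VG on it, and carve out two DB LVs
> pvcreate /dev/sdd
> vgcreate dev0-db-0 /dev/sdd
> lvcreate -n db-0 -L 16G dev0-db-0
> lvcreate -n db-1 -l 100%FREE dev0-db-0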

I was curious to create OSDs with a raw data device plus an LVM block.db, like this:

> ceph-volume raw prepare --bluestore --data /dev/sdd --block.db /dev/mapper/dev0--db--0--db--0
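
If it helps, this is how I checked afterwards what raw prepare had created; as far as I understand, raw mode keeps no separate metadata store, so list just scans devices for BlueStore labels:

> # show the raw-mode OSDs that ceph-volume can detect on this host
> ceph-volume raw list
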

This required tinkering with permissions and temporarily modifying /etc/ceph/ceph.keyring, because by default access was denied; RADOS complained about an unauthorized client.bootstrap-osd or something, but I got it to work eventually.
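I suspect the cleaner way would have been to export the bootstrap-osd key to the path ceph-volume expects instead of touching /etc/ceph/ceph.keyring, something like the following, though I haven't verified it:

> # export the bootstrap-osd key to the default location ceph-volume looks for
> ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
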
(By the way, in a real environment, would raw OSDs have any benefit over LVM everywhere?)
So now I have created two OSDs, each with its block.db on the SSD and its data on an HDD.
I repeated the steps on my other two boxes (by the way, can't this be done from a single box via the ceph CLI, instead of logging in to each host?).
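What I was hoping for is something like the following, which as far as I can tell should create an OSD on a remote managed host from the admin node; the hostname is a placeholder and I haven't tried the db_devices form myself:

> # create an OSD on a specific device of a managed host, from the admin node
> ceph orch daemon add osd dev1:/dev/sdb
> # newer syntax supposedly also allows specifying a separate db device
> ceph orch daemon add osd dev1:data_devices=/dev/sdb,db_devices=/dev/sdd
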
Now I am trying (and failing) to start the OSD daemons on this host. I tried "ceph orch apply osd --all-available-devices"; it tells me "Scheduled osd.all-available-devices update..." but nothing happens.
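These are the commands I've been using to poke at it, in case the output of any of them is useful; my understanding is that the orchestrator only acts on devices it reports as available:

> # what cephadm thinks about devices, the osd service, and running daemons
> ceph orch device ls --refresh
> ceph orch ls osd
> ceph orch ps
> # recent cephadm/orchestrator log messages
> ceph log last cephadm
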
I'm also not sure how to apply OSDs from a YAML file, since that would provision them and... they're already provisioned by the ceph-volume command above, right?
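For reference, this is the kind of spec I had in mind; if I read the docs correctly, applying it with --dry-run should at least show what the orchestrator intends to do. The service_id and the rotational filters are just my guesses for this lab:

> # write an OSD service spec and preview what the orchestrator would do
> cat > osd-spec.yaml <<'EOF'
> service_type: osd
> service_id: hdd_data_ssd_db
> placement:
>   host_pattern: '*'
> spec:
>   data_devices:
>     rotational: 1
>   db_devices:
>     rotational: 0
> EOF
> ceph orch apply -i osd-spec.yaml --dry-run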

I'm having trouble getting a lot of things to work; this is just one of them. Even if I feel nostalgic using mailing lists, it's inefficient. Is there any interactive community, like Discord or Slack, where people are usually online and I can talk to them in real time? I tried IRC but most people are AFK.

Thanks

Sent with [Proton Mail](https://proton.me/) secure email.



