Re: block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

Hi Jan,

I think you're running into an issue that has been reported a couple of times.
When using LVM, you have to specify the name of the Volume Group and of the respective Logical Volume instead of the device path, e.g.

ceph-volume lvm prepare --bluestore --block.db ssd_vg/ssd00 --data /dev/sda
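
With the vg/lv notation ceph-volume should resolve the Logical Volume itself
instead of trying (and failing) to find a PARTUUID for a /dev/... path.
You can double-check the names to pass with something like this (the VG name
here just follows your example):

lvs -o vg_name,lv_name,lv_size ssd_vg

and then repeat the prepare command for each HDD/LV pair.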

Regards,
Eugen


Quoting Jan Kasprzak <kas@xxxxxxxxxx>:

Hello, Ceph users,

replying to my own post from several weeks ago:

Jan Kasprzak wrote:
: [...] I plan to add new OSD hosts,
: and I am looking for setup recommendations.
:
: Intended usage:
:
: - small-ish pool (tens of TB) for RBD volumes used by QEMU
: - large pool for object-based cold (or not-so-hot :-) data,
: 	write-once read-many access pattern, average object size
: 	10s or 100s of MBs, probably custom programmed on top of
: 	libradosstriper.
:
: Hardware:
:
: The new OSD hosts have ~30 HDDs 12 TB each, and two 960 GB SSDs.
: There is a small RAID-1 root and RAID-1 swap volume spanning both SSDs,
: leaving about 900 GB free on each SSD.
: The OSD hosts have two CPU sockets (32 cores including SMT), 128 GB RAM.
:
: My questions:
[...]
: - block.db on SSDs? The docs recommend about 4 % of the data size
: 	for block.db, but my SSDs are only 0.6 % of total storage size.
:
: - or would it be better to leave SSD caching to the OS and use LVMcache
: 	or something?
:
: - LVM or simple volumes?

I have a problem setting this up with ceph-volume: I want to have an OSD
on each HDD, with block.db on the SSD. In order to set this up,
I have created a VG on the two SSDs, created 30 LVs on top of it for block.db,
and wanted to create an OSD using the following:

# ceph-volume lvm prepare --bluestore \
	--block.db /dev/ssd_vg/ssd00 \
	--data /dev/sda
[...]
--> blkid could not detect a PARTUUID for device: /dev/cbia_ssd_vg/ssd00
--> Was unable to complete a new OSD, will rollback changes
[...]

Then the rollback failed as well, because deploying the volume used the
client.bootstrap-osd user, but rolling the changes back required the
client.admin user, which does not have a keyring on the OSD host. Never mind.
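
(For reference, the VG and the 30 LVs mentioned above were created roughly
along these lines; the SSD partition names and the LV size are illustrative
only, not the real ones:

# vgcreate ssd_vg /dev/sdX4 /dev/sdY4
# for i in $(seq -w 0 29); do lvcreate -L 55G -n ssd$i ssd_vg; done

i.e. one ~55 GB LV per HDD, named ssd00 .. ssd29.)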

The problem is with determining the PARTUUID of the SSD LV for block.db.
How can I deploy an OSD which sits on top of a bare HDD, but which also
has its block.db on an existing LV?

Thanks,

-Yenya

--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
 This is the world we live in: the way to deal with computers is to google
 the symptoms, and hope that you don't have to watch a video. --P. Zaitcev



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


