OSDs for 2 different pools on a single host

Hello List,

I hope I'm not asking a question that has already been answered, but if
so, a pointer to the solution would be appreciated.
Although the subject is probably a bit misleading, it is essentially my
problem (as far as I understand it).

Is it possible to assign OSDs directly to a CRUSH root, to prevent a
pool from using all OSDs on a host? Or can I somewhere "fix" the
hostname with which the OSDs are started (separately for each OSD)?
This should be a persistent configuration that survives restarts.
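
To make it concrete: what I am hoping for is something like a per-OSD
entry in ceph.conf that pins the CRUSH location and keeps the startup
scripts from moving the OSD back, along these lines (just a sketch; I am
not sure these options exist or that I have their names right):

    [osd]
    ; keep OSDs from re-registering under their physical host on startup
    osd crush update on start = false

    [osd.4]
    ; pin the SSD OSD of node 1 under the separate SSD root/host
    osd crush location = root=ssd host=proxmox1-ssd

Is something like this possible, or is there a better way?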


The scenario:

I have built a cluster of 3 Proxmox virtualisation hosts (so far) and
use Ceph to provide the distributed storage; there is good documentation
on building a Ceph cluster on Proxmox hosts. Each system has 4
conventional HDDs and 2 SSDs (one for the OS, the other originally
planned for the journal). Then I came across cache pools and wanted to
use the second SSD as an OSD for a cache pool instead. So I set up 5
OSDs (4x HDD and 1x SSD) on each host. A default CRUSH map was generated
with every OSD placed under the host it runs on.
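
For illustration, the relevant part of the generated CRUSH map looks
roughly like this for one node (host names, IDs and weights here are
only examples, not copied from my cluster):

    host proxmox1 {
            id -2
            alg straw
            hash 0
            item osd.0 weight 1.000   # HDD
            item osd.1 weight 1.000   # HDD
            item osd.2 weight 1.000   # HDD
            item osd.3 weight 1.000   # HDD
            item osd.4 weight 0.220   # SSD
    }
    root default {
            id -1
            alg straw
            hash 0
            item proxmox1 weight 4.220
            item proxmox2 weight 4.220
            item proxmox3 weight 4.220
    }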

However, in order to create a second pool, I had to separate the SSDs
from the other drives. I edited the CRUSH map so that the SSD OSDs
appear under separate (non-existent) hosts, and set up a second root and
a ruleset for what was to become the cache pool. As far as I can tell,
this worked well. Creating the second pool, adding it as a tier to the
original storage pool and setting it to cache mode worked too. With this
setup as the backend storage for the VMs, disk IO in the VMs seems
really fast, so I assume everything did what it should. Roughly, the
steps were as sketched below.
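
This is approximately what I did (pool names, OSD numbers and rule names
are only examples, not the exact ones from my cluster):

    # export, decompile, edit, recompile and inject the CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # ... edit crushmap.txt: move osd.4/osd.9/osd.14 under fake hosts
    #     proxmox1-ssd etc., add a "root ssd" containing those hosts,
    #     and add a rule that takes root ssd ...
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

    # create the cache pool on the SSD rule and attach it as a tier
    ceph osd pool create cache 128 128
    ceph osd pool set cache crush_ruleset <id of the ssd rule>
    ceph osd tier add vmstorage cache
    ceph osd tier cache-mode cache writeback
    ceph osd tier set-overlay vmstorage cache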

The problem is that after a reboot (or any other event that starts the
OSD daemons automatically), the SSD OSDs are registered under their
original host again (which makes sense), and so they end up back in the
normal storage root rather than in the cache root, leaving the cache
pool in a degraded state. Other than manually fixing the CRUSH map after
every SSD failure, reboot or other event that restarts the OSD daemons,
I see no way to deal with this.
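
At the moment the manual fix after each restart is something like this
for every SSD OSD (IDs, weights and names again just examples):

    # put the SSD OSD of node 1 back under the SSD root/fake host
    ceph osd crush set osd.4 0.22 root=ssd host=proxmox1-ssd

which is obviously not a sustainable solution.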

Thanks in advance for your answers, Christian Doering

-- 
Christian Döring
christian.doering at comfortticket.de

comfortticket Karten- und Vertriebsservices GmbH
Deichstr. 21
20459 Hamburg
Tel. +49-40-696505-55
Fax. +49-40-696505-90
Web. http://www.comfortticket.de

Managing Director: Björn Schlesselmann
Court of registration: Amtsgericht Hamburg HRB89224


