OSDs for 2 different pools on a single host

Add this parameter to the OSD's config file:
"osd crush update on start = false"

I'd recommend creating a section for just your SSD OSDs which sets this, so
that any of your other disks that move will continue to be updated. :)
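A minimal sketch of what that could look like in ceph.conf, using per-OSD
sections (osd.4 and osd.9 are placeholder IDs; substitute your own SSD OSDs):

    [osd.4]
        osd crush update on start = false

    [osd.9]
        osd crush update on start = false

With that set, those daemons will stop re-registering their CRUSH location on
startup, so the placement you edited into the map stays put.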
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Thu, Jul 31, 2014 at 8:12 AM, Christian Doering <cd at comfortticket.de>
wrote:

>  Hello List,
>
> I hope I'm not asking a question that has already been answered, but if so,
> a pointer to the solution would be appreciated.
> Although the subject is probably a bit misleading, that is essentially my
> problem (as far as I understand it).
>
> Is it possible to attach OSDs directly to a root to prevent a pool from
> using all OSDs on a host, or can I somewhere "fix" the hostname with which
> the OSDs are started (for each OSD separately)?
> This should be a non-volatile configuration.
>
>
> The scenario:
>
> I have built a cluster of 3 Proxmox virtualisation hosts (at the moment).
> I use Ceph to provide the distributed storage. There is good documentation
> for building the Ceph cluster on Proxmox hosts. Each system has 4 conventional
> HDDs and 2 SSDs (one for the OS, the other was planned for the journal). I
> then came across cache pools and wanted to use the second SSD as an OSD for
> a cache pool. I set up 5 OSDs (4x HDD and 1x SSD) on each host. A default
> CRUSH map was generated with every OSD set to be part of the node it
> was on.
>
> However, in order to create a second pool, I had to separate the SSDs from
> the other drives. I edited the CRUSH map so that they would appear to be on
> different (non-existent) hosts, and set up a second root and ruleset for
> what was to become the cache pool. As far as I could see, this worked well.
> Setting up the second pool, adding it as a tier to the original storage pool
> and setting it to cache mode worked too. Using this setup as the backend
> storage for the VMs, the VMs seem really fast on disk IO, so I guess it all
> did what it should.
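>
> For reference, the steps were roughly along these lines (the pool names and
> the ruleset number are just examples from my setup):
>
>   # decompile, edit and recompile the CRUSH map
>   ceph osd getcrushmap -o crushmap.bin
>   crushtool -d crushmap.bin -o crushmap.txt
>   # edit crushmap.txt: add the fake SSD hosts, a second root and a ruleset
>   crushtool -c crushmap.txt -o crushmap.new
>   ceph osd setcrushmap -i crushmap.new
>
>   # create the cache pool on the SSD ruleset and attach it as a tier
>   ceph osd pool create ssd-cache 128 128
>   ceph osd pool set ssd-cache crush_ruleset 4
>   ceph osd tier add rbd ssd-cache
>   ceph osd tier cache-mode ssd-cache writeback
>   ceph osd tier set-overlay rbd ssd-cache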
>
> The problem is that after a reboot (or any event that starts the OSD
> daemons automatically), the SSD OSDs are started within the original host
> (which makes sense) and so they appear in the normal storage-pool tier
> rather than the cache pool, which is then left in a degraded state. Other
> than manually resetting the CRUSH map after any failed SSD, reboot or other
> event that causes the OSD daemons to restart, I see no way to fix this.
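>
> At the moment my only workaround is to move each SSD OSD back by hand after
> every restart, roughly like this (the OSD ID, weight and bucket names are
> just examples from my setup):
>
>   ceph osd crush create-or-move osd.4 1.0 root=ssd-cache host=node1-ssd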
>
> Thanks for your answers in advance, Christian Doering
>
> --
> Christian Döring
> christian.doering at comfortticket.de
>
> comfortticket Karten- und Vertriebsservices GmbH
> Deichstr. 21
> 20459 Hamburg
> Tel. +49-40-696505-55
> Fax. +49-40-696505-90
> Web. http://www.comfortticket.de
>
> Geschäftsführer: Björn Schlesselmann
> Registergericht: Amtsgericht Hamburg HRB89224
>

