cache pools on hypervisor servers

Thanks a lot for your input. I will proceed with putting the cache pool on the storage layer instead.

Andrei

----- Original Message -----
> From: "Sage Weil" <sweil at redhat.com>
> To: "Andrei Mikhailovsky" <andrei at arhont.com>
> Cc: "Robert van Leeuwen" <Robert.vanLeeuwen at spilgames.com>, ceph-users at lists.ceph.com
> Sent: Thursday, 14 August, 2014 6:33:25 PM
> Subject: Re: [ceph-users] cache pools on hypervisor servers
> 
> On Thu, 14 Aug 2014, Andrei Mikhailovsky wrote:
> > Hi guys,
> > 
> > Could someone from the ceph team please comment on running an osd cache
> > pool on the hypervisors? Is this a good idea, or will it create a lot of
> > performance issues?
> 
> It doesn't sound like an especially good idea.  In general you want the
> cache pool to be significantly faster than the base pool (think
> PCI-attached flash).  And there won't be any particular affinity to the
> host where the VM consuming the storage happens to be, so I don't think there
> is a reason to put the flash in the hypervisor nodes unless there simply
> isn't anywhere else to put them.
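
For reference, a cache tier is attached in front of a base pool with roughly
the following commands; "rbd" and "ssd-cache" here are placeholder pool
names, not pools from this thread:

    ceph osd tier add rbd ssd-cache                  # attach ssd-cache as a tier of the rbd pool
    ceph osd tier cache-mode ssd-cache writeback     # cache absorbs writes, flushes to the base pool
    ceph osd tier set-overlay rbd ssd-cache          # route client I/O for "rbd" through the cache tier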
> 
> Probably what you're after is a client-side write-thru cache?  There is
> some ongoing work to build this into qemu and possibly librbd, but nothing
> is ready yet that I know of.
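
For comparison, librbd's existing per-client cache (held in RAM, a different
thing from the persistent flash-backed client cache discussed above) can be
made to behave write-through; a minimal ceph.conf sketch, values illustrative
only:

    [client]
        rbd cache = true
        rbd cache max dirty = 0                      # 0 dirty bytes allowed => write-through behaviour
        rbd cache writethrough until flush = true    # stay write-through until the guest issues a flush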
> 
> sage
> 
> 
> > 
> > Anyone in the ceph community that has done this? Any results to share?
> > 
> > Many thanks
> > 
> > Andrei
> > 
> > ____________________________________________________________________________
> >       From: "Robert van Leeuwen" <Robert.vanLeeuwen at spilgames.com>
> >       To: "Andrei Mikhailovsky" <andrei at arhont.com>
> >       Cc: ceph-users at lists.ceph.com
> >       Sent: Thursday, 14 August, 2014 9:31:24 AM
> >       Subject: RE: cache pools on hypervisor servers
> > 
> > > Personally I am not worried too much about the hypervisor -
> > > hypervisor traffic as I am using a dedicated infiniband network for
> > > storage.
> > > It is not used for the guest to guest or the internet traffic or
> > > anything else. I would like to decrease or at least smooth out the
> > > traffic peaks between the hypervisors and the SAS/SATA osd storage
> > > servers.
> > > I guess the ssd cache pool would enable me to do that as the
> > > eviction rate should be more structured compared to the random io
> > > writes that guest vms generate. Sounds reasonable
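
How smooth that flush/eviction traffic turns out to be depends largely on
the cache tier thresholds; a sketch of the relevant knobs, with purely
illustrative values and "ssd-cache" again a placeholder pool name:

    ceph osd pool set ssd-cache target_max_bytes 500000000000    # size the agent works against (~500 GB here)
    ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4     # start flushing dirty objects at 40% of that
    ceph osd pool set ssd-cache cache_target_full_ratio 0.8      # start evicting clean objects at 80% of that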
> > 
> > >> I'm very interested in the effect of caching pools in combination
> > >> with running VMs on them so I'd be happy to hear what you find ;)
> > > I will give it a try and share back the results when we get the ssd
> > > kit.
> > Excellent, looking forward to it.
> > 
> > 
> > >> As a side note: Running OSDs on hypervisors would not be my
> > >> preferred choice since hypervisor load might impact Ceph performance.
> > > Do you think it is not a good idea even if you have a lot of cores
> > > on the hypervisors?
> > > Like 24 or 32 per host server?
> > > According to my monitoring, our osd servers are not that stressed
> > > and generally have over 50% of free cpu power.
> > 
> > The number of cores does not really matter if they are all busy ;)
> > I honestly do not know how Ceph behaves when it is CPU-starved, but I
> > guess it might not be pretty.
> > Since your whole environment will come crumbling down if your storage
> > becomes unavailable, it is not a risk I would take lightly.
> > 
> > Cheers,
> > Robert van Leeuwen
> > 
> 

