cache pools on hypervisor servers

Robert, thanks for your reply. Please see my comments inline.

----- Original Message -----

> From: "Robert van Leeuwen" <Robert.vanLeeuwen at spilgames.com>
> To: "Andrei Mikhailovsky" <andrei at arhont.com>, ceph-users at lists.ceph.com
> Sent: Wednesday, 13 August, 2014 6:57:57 AM
> Subject: RE: cache pools on hypervisor servers

> > I was hoping to get some answers on how Ceph would behave when I install
> > SSDs at the hypervisor level and use them as a cache pool.
> > Let's say I've got 10 KVM hypervisors and I install one 512GB SSD on each
> > server. I then create a cache pool for my storage cluster using these SSDs.
> > My questions are:
> >
> > 1. How would the network IO flow when I am performing reads and writes on the
> > virtual machines? Would writes get stored on the hypervisor's SSD disk
> > right away, or would the writes be directed to the OSD servers first and
> > then redirected back to the cache pool on the hypervisor's SSD? Similarly,
> > would reads go to the OSD servers and then be redirected to the cache pool on
> > the hypervisors?

> You would need to make an OSD of each of your hypervisors.
> Data would be "striped" across all hypervisors in the cache pool.
> So you would shift traffic from:
> hypervisors -> dedicated Ceph OSD pool
> to
> hypervisors -> hypervisors running an OSD with an SSD
> Note that the local OSD also has to handle OSD replication traffic, so you are
> increasing the network load on the hypervisors by quite a bit.

Personally I am not too worried about the hypervisor-to-hypervisor traffic, as I am using a dedicated InfiniBand network for storage; it carries no guest-to-guest or internet traffic. What I would like to do is decrease, or at least smooth out, the traffic peaks between the hypervisors and the SAS/SATA OSD storage servers. I expect the SSD cache pool would let me do that, since cache eviction should produce steadier, more predictable traffic than the random IO writes the guest VMs generate.
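
For reference, this is roughly how I was planning to wire it up once the SSDs arrive. The pool names, the "ssd" CRUSH root and the numbers are just placeholders for my setup, none of it tested yet:

# separate CRUSH root + rule so the cache pool only lands on the hypervisor SSD OSDs
ceph osd crush add-bucket ssd root
ceph osd crush rule create-simple ssd-rule ssd host
# (each hypervisor's SSD OSD then needs to be moved under this root)

# cache pool on the SSDs; the backing rbd pool stays on the SAS/SATA OSDs
ceph osd pool create ssd-cache 512 512
ceph osd pool set ssd-cache crush_ruleset <id of ssd-rule>

# attach it as a writeback tier in front of the backing pool
ceph osd tier add rbd ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay rbd ssd-cache

# hit sets are needed for the tiering agent
ceph osd pool set ssd-cache hit_set_type bloom
ceph osd pool set ssd-cache hit_set_count 1
ceph osd pool set ssd-cache hit_set_period 3600

# flush/evict thresholds - this is what should smooth out the write peaks
ceph osd pool set ssd-cache target_max_bytes 2000000000000   # ~2TB, adjust for replica count
ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
ceph osd pool set ssd-cache cache_target_full_ratio 0.8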

> > Would the majority of network traffic then shift to the cache pool level and stay at
> > the hypervisor level rather than the hypervisor / OSD server level?

> I guess it depends on your access patterns and how much data needs to be
> migrated back and forth between the cache and the regular storage.
> I'm very interested in the effect of caching pools in combination with
> running VMs on them so I'd be happy to hear what you find ;)

I will give it a try and share the results when we get the SSD kit.
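
For what it is worth, these are the sort of numbers I intend to collect before and after (assuming the cache pool is called ssd-cache, as in the sketch above):

# per-pool usage and object counts - shows how full the cache tier is getting
ceph df detail

# per-pool IO rates - the flush/evict traffic towards the backing pool shows up here
ceph osd pool stats ssd-cache
rados df

# overall cluster state while the tier is warming up
ceph -s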

> As a side note: Running OSDs on hypervisors would not be my preferred choice
> since hypervisor load might impact Ceph performance.

Do you think it is still a bad idea even with a lot of cores on the hypervisors, say 24 or 32 per host server? According to my monitoring, our OSD servers are not particularly stressed and generally have over 50% of their CPU free. Having said that, SSD OSDs will generate more IO and throughput than the SAS/SATA OSDs, so the CPU load might well be higher. Not really sure here.
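
To stop the OSD from fighting the guests for CPU, I was also thinking of throttling recovery and backfill on the hypervisor OSDs, something along these lines in ceph.conf (values are only a starting point, not tested):

[osd]
    # keep backfill/recovery from hogging the hypervisor CPUs
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1

On top of that, pinning the ceph-osd process to a couple of dedicated cores with taskset or cgroups should stop a busy guest from starving it.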

> I guess you can end up with pretty weird/unwanted results when your
> hypervisors get above a certain load threshold.
> I would certainly test a lot with high loads before putting it in
> production...

Definitely! 
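
The plan is to hammer the cache tier with something like the below while the guests are busy, before letting any production VMs near it (pool name as in the earlier sketch):

# raw write and sequential read throughput against the cache pool
rados bench -p ssd-cache 300 write --no-cleanup
rados bench -p ssd-cache 300 seq

# random 4k IO from inside a test guest
fio --name=randwrite --rw=randwrite --bs=4k --size=10G \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=300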

> Cheers,
> Robert van Leeuwen