Re: Storing VM Images on CEPH with RBD-QEMU driver

Hello Dan,

On Fri, 20 Dec 2013 14:01:04 +0100 Dan van der Ster wrote:

> On Fri, Dec 20, 2013 at 9:44 AM, Christian Balzer <chibi@xxxxxxx> wrote:
> >
> > Hello,
> >
> > On Fri, 20 Dec 2013 09:20:48 +0100 Dan van der Ster wrote:
> >
> >> Hi,
> >> Our fio tests against qemu-kvm on RBD look quite promising, details
> >> here:
> >>
> >> https://docs.google.com/spreadsheet/ccc?key=0AoB4ekP8AM3RdGlDaHhoSV81MDhUS25EUVZxdmN6WHc&usp=drive_web#gid=0
> >>
> > That data is very interesting and welcome, however it would be a lot
> > more relevant if it included information about your setup (though it is
> > relatively easy to create a Ceph cluster that can saturate GbE ^.^) and
> > your configuration.
> >
> > For example I assume you're using the native QEMU RBD interface.
> > How did you configure caching, just turned it on and left it at the
> > default values?
> >
> 
> It's all RedHat 6.5, qemu-kvm-rhev-0.12.1.2-2.415.el6_5.3 on the HVs,
> ceph 0.67.4 on the servers. Caching is enabled with the usual
>   rbd cache = true
>   rbd cache writethrough until flush = true
> (otherwise defaults)
That's a good data point; I'll probably play with those defaults
eventually. One would think that a default cache about the size of a
consumer HDD's can be improved upon, given memory prices and all. ^o^
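
For the archive, the knobs I'd probably start with are along these lines
in ceph.conf on the hypervisors. The sizes below are made-up examples and
the defaults noted in the comments are from memory, so double-check them
against your release:

  [client]
      rbd cache = true
      rbd cache writethrough until flush = true
      # Defaults are (IIRC) 32 MB cache, 24 MB max dirty and 16 MB target
      # dirty; the values below are in bytes.
      # 128 MB cache:
      rbd cache size = 134217728
      # 96 MB max dirty, keep it below the cache size:
      rbd cache max dirty = 100663296
      # 64 MB target dirty:
      rbd cache target dirty = 67108864

Depending on the QEMU version the drive's cache= setting can override
this, so cache=writeback on the drive definition is worth checking as
well.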

> The hardware is 47 OSD servers with 24 OSDs each, single 10GbE NIC per
> server, no SSDs, write journal as a file on the OSD partition (which
> is a baaad idea for small write latency, so we are slowly reinstalling
> everything to put the journal on a separate partition)
> 
Ah yes, there's the impressive bit: 47 servers times 24 OSDs is well over
a thousand spindles, which should easily give you that kind of IOPS even
with the journals not yet optimized.
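
For anyone else contemplating the same reshuffle: as far as I know the
journal location is just the "osd journal" setting, so once a dedicated
partition per OSD exists, something along these lines should do (the
partition naming is purely made up for illustration):

  [osd]
      # Default is a file inside the OSD data directory:
      #   osd journal = /var/lib/ceph/osd/$cluster-$id/journal
      # Pointing it at a dedicated partition per OSD instead:
      osd journal = /dev/disk/by-partlabel/journal-$id
      # Journal size in MB, mostly relevant when the journal is a file:
      osd journal size = 10240

And flush the old journal first (ceph-osd -i N --flush-journal, if I
remember the flag correctly) before switching over.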

Regards,

Christian

> Cheers, Dan
> 
> >> tl;dr: rbd with caching enabled is (1) at least 2x faster than the
> >> local instance storage, and (2) reaches the hypervisor's GbE network
> >> limit in ~all cases except very small random writes.
> >>
> >> BTW, currently we have ~10 VMs running those fio tests in a loop, and
> >> we're seeing ~25,000 op/s sustained in the ceph logs. Not bad IMHO.
> > Given the feedback I got from my "Sanity Check" mail, I'm even more
> > interested in the actual setup you're using now.
> > Given your workplace, I expect to be impressed. ^o^
> >
> >> Cheers, Dan
> >> CERN IT/DSS
> >>
> > [snip]
> >
> > Regards,
> >
> > Christian
> > --
> > Christian Balzer        Network/Systems Engineer
> > chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
> > http://www.gol.com/
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



