Re: Storing VM Images on CEPH with RBD-QEMU driver

On Fri, Dec 20, 2013 at 6:19 PM, James Pearce <james@xxxxxxxxxxxx> wrote:
>
> "fio --size=100m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=10
> --rw=read --name=fiojob --blocksize_range=4K-512k --iodepth=16"
>
> Since size=100m, reads would be entirely cached

--invalidate=1 drops the cache, no? Our results for that particular fio
test are consistently just under 1Gb/s, across a variety of VMs running
on a variety of hypervisors.
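
(If there's any doubt about fio's own invalidation, one can also drop
the guest page cache by hand before the run; a minimal sketch, assuming
root on a Linux guest:

# sync; echo 3 > /proc/sys/vm/drop_caches
# fio --size=100m --ioengine=libaio --direct=1 --numjobs=10 \
      --rw=read --name=fiojob --blocksize_range=4K-512k --iodepth=16

With --direct=1 the reads should bypass the guest page cache in any
case.)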

BTW, look what happens when you don't drop the cache:

# fio --size=100m --ioengine=libaio --invalidate=0 --direct=0 \
      --numjobs=10 --rw=read --name=fiojob --blocksize_range=4K-512k \
      | grep READ
   READ: io=1000.0MB, aggrb=4065.5MB/s, minb=416260KB/s,
         maxb=572067KB/s, mint=179msec, maxt=246msec
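
Reading 1000MB in about a quarter of a second (~4GB/s aggregate) is the
guest page cache talking, not the cluster. A quick way to see it (a
rough sketch, assuming a Linux guest) is to watch the 'cached' figure
in free around the run:

# free -m      <-- note the 'cached' column
# fio --size=100m --ioengine=libaio --invalidate=0 --direct=0 \
      --numjobs=10 --rw=read --name=fiojob --blocksize_range=4K-512k
# free -m      <-- 'cached' has grown by roughly the working set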

> and, if the hypervisor is write-back, potentially many writes would never
> make it to the cluster as well?

Maybe you're right, but only if fio in randwrite mode overwrites the
same addresses many times (does it??), and the rbd cache discards the
overwritten writes (does it??). By observation, I can say for certain
that when we have those 10 VMs running these benchmarks in a "while 1"
loop, our cluster becomes quite busy.
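
For context, the client-side caching knobs in play would look roughly
like this; an illustrative sketch with stock values, not our production
config:

[client]
    rbd cache = true                  # enable the librbd write-back cache
    rbd cache size = 33554432         # 32MB per-image cache (the default)
    rbd cache max dirty = 25165824    # dirty bytes allowed before writeback (the default)

and on the QEMU side the drive needs cache=writeback for librbd to
actually cache writes back, e.g.:

    -drive file=rbd:<pool>/<image>,format=raw,cache=writeback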

Cheers,
Dan


>
> Sorry if I've misunderstood :)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



