Re: Xen blktap driver for Ceph RBD : Anybody wants to test ? :p


 




On 23 Apr 2013, at 17:06, Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx> wrote:

> Hi,
> 
>> - client dom0: a simple quick debootstrap, and a low amount of memory to bypass buffers
> 
> I assume you meant domU ?
> You ran those tests in a VM right ?

Right! DomU indeed, not dom0. The benchmarks are done on /dev/xvdb, which is the rbd device. I was also able to block-attach/detach them on the dom0, but that is trivial once it works.
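For reference, this is roughly what the disk line in my domU config looks like for the blktap RBD driver (the exact `tap2:tapdisk:rbd:` spec syntax and the pool/image names here are assumptions on my side, adjust to your setup):

```
# domU config fragment: attach an RBD image via the blktap rbd driver
# (spec syntax and pool/image names are assumed, not verified)
disk = [ 'tap2:tapdisk:rbd:rbd/testimage,xvdb,w' ]
```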

> 
>> Thx for the work you've put in this! Seems to work nicely at first glance, but I'll do some more tests later on (with a more recent ceph cluster)...
> 
> Thanks for testing. When you do more tests, it'd be interesting to
> compare the kernel driver with the blktap driver.

I'll first be upgrading my test cluster to the latest stable Ceph release, and then run some endurance tests against the setup. But that won't happen before the middle of next week.

> 
> I've actually identified a bottleneck caused by the Xen IO ring splitting
> large requests into small 44k chunks, which tends to lower RBD
> performance a lot ... I'm still investigating possible solutions to
> that.

Feel free to bug me once it is fixed, if you want some testing.
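If I understand that limit correctly, the 44k comes from the segment count of a classic blkif ring request — a quick sanity check (taking BLKIF_MAX_SEGMENTS_PER_REQUEST = 11 from the blkif headers is my assumption here):

```python
# Back-of-the-envelope for the 44k chunk size mentioned above.
# Assumption: a classic blkif request carries at most 11 segments
# (BLKIF_MAX_SEGMENTS_PER_REQUEST), each segment one 4 KiB page.
SEGMENTS_PER_REQUEST = 11
PAGE_SIZE = 4096  # bytes per page / segment

max_request_bytes = SEGMENTS_PER_REQUEST * PAGE_SIZE
print(max_request_bytes // 1024, "KiB per ring request")  # 44 KiB
```

So anything larger than 44 KiB would get split before it ever reaches RBD, which would explain the hit on large sequential IO.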

Rgds,
Bernard
Openminds BVBA

> 
> Cheers,
> 
>    Sylvain

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



