Re: Ceph Blog Articles

Hi Nick,

m( of course, you're right. Yes, we have rbd_cache enabled for KVM/QEMU.
That probably also explains the large difference between avg and stdev.
Thanks for the pointer.
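
(As far as I know, librbd picks up the cache mode from the qemu drive
options, so in our case that is something along the lines of

  -drive format=raw,file=rbd:rbd/vm-disk:id=admin,cache=writeback,if=virtio

with pool/image and client id being placeholders here, of course.)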

Unfortunately I have not yet gotten fio to work with the rbd engine.
It always fails with

> rbd engine: RBD version: 0.1.9
> rbd_open failed.
> fio_rbd_connect failed.

This happens regardless of whether I set the clustername (to either ceph
or the fsid) or leave it unset, and regardless of whether I specify the
clientname as ceph.client.admin, client.admin or admin. Any pointer to
what I might be missing here?
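
For completeness, the job file I am testing with boils down to more or
less the following (pool and image name are just placeholders here):

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=randwrite
bs=4k

[rbd_iodepth32]
iodepth=32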

Greetings
-Sascha-

Am 06.12.2016 um 15:49 schrieb Nick Fisk:
> Hi Sascha,
> 
> Have you got any write-back caching enabled? That time looks very fast,
> almost too fast to me. It looks like some of the writes completed in
> around 70us, which is almost the same as a single hop of 10G networking,
> where you would have at least two hops (Client->OSD1->OSD2).
> 
> What are your write cache settings for qemu?

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


