Hi Andrey,
> As I understood right, you have an md device holding both the journal
> and the filestore? What type of RAID do you have here?
Yes, the same md device holds both the journal and the filestore. It is a RAID5.
> Of course you'll need a
> separate device (for experimental purposes, a fast disk may be enough)
> for the journal
Is there a way to tell if the journal is the bottleneck without actually
adding such an extra device?
> filestore partition, you may also change it to simple RAID0, or even
> separate disks, and create one OSD over every disk (you should see to
I have only 3 OSDs with 4 disks each. I was afraid that it would be too
brittle as a RAID0, and if I created separate OSDs for each disk, it
would stall the file system due to recovery if a server crashed.
> What size of cache_size/max_dirty do you have inside ceph.conf
I haven't set them explicitly, so I imagine the cache_size is 32 MB and
the max_dirty is 24 MB.
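If I were to raise them, I believe it would look something like the fragment below in ceph.conf. The option names are the rbd cache settings from the Ceph documentation; the values are illustrative examples only, not tested recommendations for this workload:

```ini
[client]
    rbd cache = true
    ; defaults are 32 MB cache and 24 MB max dirty;
    ; the values below simply double them as an example
    rbd cache size = 67108864
    rbd cache max dirty = 50331648
```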
> and which
> qemu version do you use?
I'm using the default 0.15 version in Fedora 16.
> tasks, increasing the cache may help the OS align writes more smoothly. Also,
> you don't need to set rbd_cache explicitly in the disk config using
> qemu 1.2 and later releases; for older ones
> http://lists.gnu.org/archive/html/qemu-devel/2012-05/msg02500.html
> should be applied.
I read somewhere that I needed to enable it specifically for older
qemu-kvm versions, which I did like this:
format=rbd,file=rbd:data/image1:rbd_cache=1,if=virtio
However now I read in the docs for qemu-rbd that it needs to be set like
this:
format=raw,file=rbd:data/squeeze:rbd_cache=true,cache=writeback
I'm not sure whether 1 and true are interpreted the same way?
I'll try using "true" and see if I get any noticeable changes in behaviour.
The link you sent me seems to indicate that I need to compile my own
version of qemu-kvm to be able to test this?
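For completeness, this is roughly what I understand the full invocation from the docs to look like with the writeback setting. The pool/image name and memory size are placeholders from my setup, and I have not verified this exact command line against qemu 0.15:

```
qemu-kvm -m 1024 \
    -drive format=raw,file=rbd:data/image1:rbd_cache=true,cache=writeback,if=virtio
```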
--
Jens Kristian Søgaard, Mermaid Consulting ApS,
jens@xxxxxxxxxxxxxxxxxxxx,
http://www.mermaidconsulting.com/