On 30/08/2012 08:21, Jonathan Tripathy wrote:
On 30/08/2012 08:15, Jonathan Tripathy wrote:
Hi There,
On my Windows DomU (Xen VM), which runs on an LV backed by bcache (two SSDs
in MD RAID1 caching an MD RAID10 spindle array), I ran an IOMeter test for
about 2 hours (30 workers, I/O depth of 256). This was a very heavy workload
(it averaged about 6.5k IOPS). After stopping the test, I went back to fio
on my Linux Xen host (Dom0). Random write performance is no longer as good
as it was before the IOMeter run: it used to be about 25k IOPS and now shows
about 7k. I assumed this was because bcache was still writing out dirty data
to the spindles, keeping the SSDs busy.
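For reference, the fio random-write test I'm comparing against looks roughly
like this (the device path and exact parameters here are illustrative, not my
exact job file):

    fio --name=randwrite --filename=/dev/mapper/vg0-testlv --direct=1 \
        --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 \
        --numjobs=1 --runtime=60 --time_based --group_reporting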
However, this morning, after the spindles had calmed down, fio performance is
still not great (still about 7k IOPS).
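I'm assuming writeback has finished because the spindles are quiet, but I can
check the remaining dirty data and current writeback rate via sysfs if that
helps (bcache0 below is a stand-in for my actual backing device):

    cat /sys/block/bcache0/bcache/state
    cat /sys/block/bcache0/bcache/dirty_data
    cat /sys/block/bcache0/bcache/writeback_rate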
Is there something wrong here? What is the expected behavior?
Thanks
BTW, I can confirm that this isn't an SSD issue: I have a partition on the
SSD that I kept separate from bcache, and I'm still getting excellent IOPS
there (about 28k).
It's as if, after the heavy IOMeter workload, bcache has somehow throttled
the writeback cache?
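If bcache does throttle itself under load, I'd guess it would show up in the
cache set's congestion thresholds or in the per-device cache mode; something
like the following could be checked (the UUID path is just a placeholder for
whatever my cache set registered as):

    cat /sys/fs/bcache/<cache-set-uuid>/congested_read_threshold_us
    cat /sys/fs/bcache/<cache-set-uuid>/congested_write_threshold_us
    cat /sys/block/bcache0/bcache/cache_mode
    cat /sys/block/bcache0/bcache/writeback_percent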
Any help is appreciated.
Also, I'm not sure if this is related, but is there a memory leak somewhere
in the bcache code? I haven't used this machine for anything other than
running the above tests, and here is my RAM usage:
free -m
                 total       used       free     shared    buffers     cached
    Mem:          1155       1021        133          0          0          8
    -/+ buffers/cache:       1013        142
    Swap:          952         53        899
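If it would help to see where that memory has gone, I can also dump slab
usage; I'm only guessing that bcache's btree node cache would show up there:

    slabtop -o -s c | head -20
    grep -i bcache /proc/slabinfo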
Any ideas? Please let me know if you need me to run any other commands.
Thanks