On 03.09.2012 01:37, Kent Overstreet wrote:
On Sat, Sep 01, 2012 at 01:47:52PM +0100, Jonathan Tripathy wrote:
On 31/08/2012 13:41, Jonathan Tripathy wrote:
>
>
>On 31.08.2012 13:36, Jonathan Tripathy wrote:
>>On 31.08.2012 04:47, James Harper wrote:
>>>>Hi Kent,
>>>>
>>>>I'm going to try and reproduce it myself as well. I just
>>>>used IOMeter in a
>>>>Windows DomU with 30 workers, each having an io depth of 256. A *very*
>>>>heavy workload indeed, but my point was to see if I could
>>>>break something.
>>>>Unless the issue is specific to windows causing problems
>>>>(NTFS or whatever),
>>>>I'm guessing running fio with 30 jobs and an iodepth of 256
>>>>would probably
>>>>produce a similar load.
>>>>
>>>>BTW, do you have access to a Xen node for testing?
>>>>
>>>
>>>Does the problem resolve itself after you shut down the Windows DomU?
>>>Or only when you reboot the whole Dom0?
>>>
>>
>>Hi There,
>>
>>I managed to reproduce this again. I have to reboot the entire Dom0
>>(the physical server) for it to work properly again.
>>
>>James, are you able to reproduce this? Kent, are there any other
>>tests/debug output you need from me?
>>
>
>BTW, I was using IOMeter's 'default' Access Specification with the
>following modifications: 100% random, 66% read, 33% write, and a
>2kB request size. My bcache is formatted with a 512-byte block size.
>--
>
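(For reference, a roughly equivalent fio run for the IOMeter workload described
above might look like the sketch below; /dev/bcache0 and the libaio engine are
assumptions about the setup, so adjust them to match the actual device:)
  # Approximation of the IOMeter run: 30 workers, queue depth 256,
  # 100% random, 66% read / 33% write, 2kB transfers.
  # NB: this writes directly to the block device, so only run it on a
  # scratch setup.
  fio --name=iometer-repro \
      --filename=/dev/bcache0 \
      --ioengine=libaio --direct=1 \
      --rw=randrw --rwmixread=66 --bs=2k \
      --numjobs=30 --iodepth=256 \
      --time_based --runtime=600 \
      --group_reporting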
Kent, is there any debug output of some sort I could switch on and
help you figure out what's going on? If needs be, I can give you
access to my setup here where you can run these tests yourself, if
you're not keen on installing Xen on your end :)
Shell access would probably be fastest, I suppose...
One thing that comes to mind is perhaps the load from background
writeback is slowing things down. Two things you can do:
set writeback_percent to 10, that'll enable a pd controller so it's not
going full blast
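(For reference, writeback_percent is exposed through sysfs on the backing
device; a minimal sketch, assuming the cached device shows up as bcache0:)
  # Keep roughly 10% of the cache dirty; bcache then uses its PD
  # controller to pace background writeback instead of going full blast.
  echo 10 > /sys/block/bcache0/bcache/writeback_percent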
Hi Kent,
I will try the above configuration change and repeat the test. However,
it's worth noting that I left an overnight gap between when my
IOMeter run finished and when I started the fio test. This was to
ensure that all data had been written out to backing storage. While I
didn't check if the cache was clean or dirty in the morning, I can
confirm that there was no disk activity according to the HDD lights on
the server case.
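For what it's worth, next time I can confirm that from sysfs rather than the
HDD lights; assuming the backing device is bcache0, something like:
  # 'clean' vs 'dirty', plus how much dirty data is still in the cache
  cat /sys/block/bcache0/bcache/state
  cat /sys/block/bcache0/bcache/dirty_data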
Cheers