Weird. That wouldn't be block size - a tiny bucket size could cause
performance issues, but that's not consistent with what you describe. Might
be some sort of interaction with xfs; I'll have to see if I can reproduce it.

On Thu, Dec 8, 2011 at 6:32 PM, Marcus Sorensen <shadowsor@xxxxxxxxx> wrote:
> Got to try this out quickly this afternoon. Used a 200GB hardware RAID1
> as cache for an 8-disk, 8TB RAID 10. Enabled writeback and put xfs on
> bcache0. mkfs.xfs took a while, which was unusual. I mounted the
> filesystem and created an 8GB file, which was fast. Then I ran some 512b
> random reads against it (16 threads), almost SSD speed. I switched the
> same test to random writes, and it was as slow as spindle. Some of the
> threads even threw "blocked for 120 seconds" traces. I wonder if my block
> size is set wrong on the cache; it's sort of hard to find the appropriate
> numbers.
>
> On Dec 6, 2011 10:02 AM, "Marcus Sorensen" <shadowsor@xxxxxxxxx> wrote:
>>
>> I'm also curious as to how it decides what to keep in cache and what to
>> toss out, what to write direct to platter and what to buffer. I've been
>> testing LSI's CacheCade 2.0 Pro, and my intent is to post some
>> benchmarks between the two. From what I've seen you get at most 1/2 the
>> performance of your SSD if everything could fit into cache; I'm not
>> sure if that's due to their algorithm and how they decide what's
>> SSD-worthy and what's not.
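
For reference, a minimal sketch of how the cache could be reformatted with
explicit bucket and block sizes using bcache-tools, since those are the
knobs in question. The device names (/dev/sdb for the 200GB RAID1 cache,
/dev/md0 for the RAID10 backing device) and the size values are assumptions
for illustration, not what was actually used in the test above:

    # Format both devices in one go; --bucket should ideally match the
    # SSD's erase block size and --block its physical sector size.
    # 512k/4k here are placeholder guesses, not recommended values.
    make-bcache --bucket 512k --block 4k -C /dev/sdb -B /dev/md0

    # Once /dev/bcache0 shows up, switch it from the default
    # write-through mode to writeback.
    echo writeback > /sys/block/bcache0/bcache/cache_mode

Note that changing the bucket or block size means reformatting the cache
device, so any dirty writeback data would need to be flushed first.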
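
And a sketch of an fio invocation that would approximate the workload
described (512-byte random I/O, 16 threads, against the 8GB file), in case
anyone wants to reproduce it; the mount point and file name are assumed:

    # 512b random reads, 16 concurrent jobs - the fast case.
    fio --name=randread --filename=/mnt/bcache/testfile --size=8g \
        --rw=randread --bs=512 --numjobs=16 --ioengine=libaio \
        --iodepth=1 --direct=1 --group_reporting

    # Same parameters with random writes - the case that fell back to
    # spindle speed and triggered the 120-second hung-task traces.
    fio --name=randwrite --filename=/mnt/bcache/testfile --size=8g \
        --rw=randwrite --bs=512 --numjobs=16 --ioengine=libaio \
        --iodepth=1 --direct=1 --group_reporting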