On 04/13/2012 02:48 PM, Ted Ts'o wrote:
> On Fri, Apr 13, 2012 at 08:41:58PM +0200, Andi Kleen wrote:
>>> The reason why I ask this is we're not seeing anything like this with Eric
>>> Whitney's 48 CPU scalability testing; we're not CPU bottlenecked, and
>>> I don't even see evidence of a larger than usual CPU utilization
>>> compared to other file systems.
>> I bet Eric didn't test with this statistic counter.
> Huh? You can't turn it off, and he's been doing regular scalability
> tests at least once per kernel release.
Yes, as recently as 3.4-rc1. I saw Andi's patch, and tested it this
week against that baseline with the ffsb profiles we've been using for
ext4 (and other filesystem) scalability measurements.
I didn't see a noticeable delta in throughput or reported CPU
utilization on my 48-core, eight-node NUMA test setup. That said, I plan
to look at this more closely to verify that my workloads should have
seen a delta in the first place. Ted knows them well, though. It's
worth noting that I've got plenty of free CPU capacity while running the
workload, which differs from Andi's/Tim's description.
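
As an aside, here's a minimal userspace sketch of the general problem as I
understand it (my own illustration, not ext4 code and not Andi's/Tim's
patch): an always-on statistics counter that every CPU updates on a hot
path bounces its cache line across the machine, while a per-CPU-style
sharded counter only touches a local line and pays the cross-CPU cost when
the value is actually read.

/*
 * Userspace sketch only -- illustrates a shared vs. a per-thread
 * ("per-CPU") statistics counter; it is not ext4 code.
 *
 * Build: gcc -O2 -pthread counter_sketch.c -o counter_sketch
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 48              /* mimic the 48-core test box */
#define ITERS    (1 << 20)

/* Contended variant: every increment hits the same cache line. */
static atomic_long shared_counter;

/* Sharded variant: one padded slot per thread to avoid false sharing. */
struct counter_slot {
	atomic_long count;
	char pad[64 - sizeof(atomic_long)];
};
static struct counter_slot local_counters[NTHREADS];

static void *bump_shared(void *arg)
{
	(void)arg;
	for (long i = 0; i < ITERS; i++)
		atomic_fetch_add_explicit(&shared_counter, 1,
					  memory_order_relaxed);
	return NULL;
}

static void *bump_local(void *arg)
{
	struct counter_slot *slot = arg;
	for (long i = 0; i < ITERS; i++)
		atomic_fetch_add_explicit(&slot->count, 1,
					  memory_order_relaxed);
	return NULL;
}

/* The "slow path": fold all per-thread slots into one value on read. */
static long sum_local(void)
{
	long sum = 0;
	for (int i = 0; i < NTHREADS; i++)
		sum += atomic_load(&local_counters[i].count);
	return sum;
}

int main(void)
{
	pthread_t tids[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, bump_shared, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	printf("shared counter:  %ld\n", atomic_load(&shared_counter));

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, bump_local, &local_counters[i]);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	printf("sharded counter: %ld\n", sum_local());
	return 0;
}

If the shared variant's cost shows up mostly as extra CPU time, and my
workloads leave plenty of idle CPU, that could explain why I'm not seeing
the delta that a CPU-bound setup would.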
> Can you say a bit more about exactly how you are doing this test and
> what are the "other issues" where this is becoming a bottleneck? If
> possible I'd like to ask Eric if he can add it to his regular
> scalability tests.
Yes, I'm certainly willing to do that if practical, and I'm curious to
know more about what the workload looks like.
Eric
> Thanks,
> - Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html