Re: Slow ceph fs performance

On Mon, Oct 1, 2012 at 9:47 AM, Tommi Virtanen <tv@xxxxxxxxxxx> wrote:
> On Thu, Sep 27, 2012 at 11:04 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>> However, my suspicion is that you're limited by metadata throughput
>> here. How large are your files? There might be some MDS or client
>> tunables we can adjust, but rsync's workload is a known weak spot for
>> CephFS.
>
> I feel like people are missing this part of Greg's message. Everyone
> is so busy benchmarking RADOS small I/O, but what if it's currently
> bottlenecked by all the file-level access operations that interact
> with the MDS? Rsync causes a ton of those.

Yes. Bryan, you mentioned that you didn't see a lot of resource usage
— was it perhaps flatlined at (100 * 1 / num_cpus)? The MDS is
multi-threaded in theory, but in practice it has the equivalent of a
Big Kernel Lock, so it's not going to get much past one CPU core's
worth of time...
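To make that flatline figure concrete, here is a small sketch (a hypothetical helper, not part of Ceph) of the ceiling you'd expect to see for an effectively single-threaded daemon in tools that normalize %CPU so that 100% means the whole machine:

```python
import os

def single_core_ceiling(num_cpus=None):
    """%CPU at which a process limited to one core flatlines,
    when 100% means all cores (100 * 1 / num_cpus)."""
    n = num_cpus if num_cpus is not None else os.cpu_count()
    return 100.0 * 1 / n

# e.g. on an 8-core box a big-lock-limited MDS tops out around:
print(f"{single_core_ceiling(8):.1f}%")  # 12.5%
```

If the MDS is pegged near that number while the rest of the machine looks idle, metadata throughput is the likely bottleneck.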
The rados bench results do indicate some pretty bad small-file write
performance as well though, so I guess it's possible your testing is
running long enough that the page cache isn't absorbing that hit. Did
performance start out higher or has it been flat?

> If you want to benchmark just the small IO, you can't compare rsync to rsync.
>
> If you want to benchmark just the metadata part, rsync with 0-size
> files might actually be an interesting workload.
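A stand-in for that zero-size-file workload that skips rsync entirely (a hypothetical harness, not an existing Ceph tool) is to create a tree of empty files on the mounted filesystem and time it — every operation is then a pure metadata op (mkdir/create) and no data ever reaches the OSDs:

```python
import os
import tempfile
import time

def create_empty_tree(root, dirs=10, files_per_dir=100):
    """Create dirs * files_per_dir zero-byte files under root.
    On a CephFS mount the elapsed time is dominated by
    client<->MDS round trips, since no file data is written."""
    start = time.monotonic()
    for d in range(dirs):
        path = os.path.join(root, f"dir{d:03d}")
        os.makedirs(path, exist_ok=True)
        for f in range(files_per_dir):
            open(os.path.join(path, f"f{f:04d}"), "w").close()
    elapsed = time.monotonic() - start
    return dirs * files_per_dir, elapsed

if __name__ == "__main__":
    # Point root at a CephFS mount to measure metadata throughput;
    # a tmpdir here just demonstrates the harness itself.
    with tempfile.TemporaryDirectory() as root:
        n, secs = create_empty_tree(root)
        print(f"{n} creates in {secs:.2f}s ({n / secs:.0f} creates/s)")
```

Comparing creates/s on the CephFS mount against a local filesystem isolates the metadata path from RADOS small-I/O performance.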

