Re: Why Ceph's aggregate write throughput does not scale with the number of osd nodes

Hi Greg,

I installed Ceph on 10 EC2 instances: one for the mon, one for the mds,
and the other eight as OSDs. I used IOZONE's distributed measurement
mode (multiple clients on different nodes generating the same type of
workload in parallel) to test the scalability of Ceph. The problem is
that as the number of writing clients increased, the aggregate write
throughput didn't scale up. For example, when I had only one client
writing data to Ceph, the throughput was around 60 MB/s; when I had 2
clients writing two different files to Ceph, the throughput was still
around 60 MB/s. Same with 4 clients and 8 clients, and the clients were
all on different nodes. But the aggregate read throughput did scale up,
which tells us the data was distributed across different OSDs;
otherwise it wouldn't scale with the number of reading clients.
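For reference, here is a minimal sketch of the kind of iozone
invocation I mean (the hostnames, paths, and sizes are placeholder
assumptions, not my exact setup): -+m takes a client list file and -t
sets the number of parallel clients in throughput mode.

    # client.list: one line per client: <hostname> <workdir> <iozone path>
    #   client1 /mnt/ceph /usr/bin/iozone
    #   client2 /mnt/ceph /usr/bin/iozone
    export RSH=ssh                  # iozone defaults to rsh; use ssh instead
    iozone -+m client.list -t 8 -s 1g -r 128k -i 0        # write/rewrite test
    iozone -+m client.list -t 8 -s 1g -r 128k -i 0 -i 1   # plus read/reread

The aggregate numbers I quoted are the "Children see throughput" lines
that iozone prints for each test.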

So I don't know why it couldn't scale up for writing. I saw the old
bugs mentioned below, so I tried newer versions, but it still didn't work.

BTW, my English name is actually Gregory too :) That's a good name.

Best,
Xiaofei

On Wed, Nov 23, 2011 at 10:53 AM, Gregory Farnum
<gregory.farnum@xxxxxxxxxxxxx> wrote:
>
> On Wed, Nov 23, 2011 at 10:38 AM, Xiaofei Du <xiaofei.du008@xxxxxxxxx> wrote:
> > I searched online and saw that Bug #538 and Task #584 may be related
> > to my problem. What's the status of this problem? I first used version
> > 0.34, then I tried a newer version, 0.37, but the problem still
> > exists. Does anyone know about this problem and how to solve it?
> > Thanks a lot, and Happy Thanksgiving!
> Can you give us more details about the problem you're seeing? We've
> got a pile of new hardware being installed now that we will soon be
> doing large-scale testing on, so we'll run into any such problems
> ourselves. But if you're seeing scaling issues, it'd be good to know
> something more than that they're related to two old bugs...
> :)
> -Greg



--
Xiaofei (Gregory) Du
Department of Computer Science
University of California, Santa Barbara

