Re: Why Ceph's aggregate write throughput does not scale with the number of osd nodes

Hi Kelly,

I used m1.large instances and tried both EBS and local (ephemeral)
storage. I ran raw IO tests on both the EBS volumes and the local
devices, and the numbers were reasonable. I also tested HDFS on the
same instances, and it showed much better aggregate write throughput.
So I don't think disk IO is the problem. EC2 disk IO does fluctuate,
but the throughput itself is not that bad.
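
For reference, this is roughly the kind of raw sequential-write check
I ran on each device (just a sketch; the mount points and sizes below
are placeholders, not my exact test):

    import os
    import time

    def write_throughput(path, block_mb=4, total_mb=1024):
        # Sequentially write total_mb of data in block_mb chunks and return MB/s.
        block = os.urandom(block_mb * 1024 * 1024)
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(total_mb // block_mb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # make sure the data actually reaches the device
        elapsed = time.time() - start
        os.remove(path)
        return total_mb / elapsed

    # One file on the EBS mount, one on the ephemeral disk (paths are placeholders).
    for path in ("/mnt/ebs/testfile", "/mnt/ephemeral/testfile"):
        print("%s: %.1f MB/s" % (path, write_throughput(path)))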

Best,
Xiaofei

On Wed, Nov 23, 2011 at 2:33 PM, Kelly Kane <kelly@xxxxxxxxxxxxxxxx> wrote:
> On Wed, Nov 23, 2011 at 12:07, Xiaofei Du <xiaofei.du008@xxxxxxxxx> wrote:
>>
>> I installed Ceph on 10 ec2 instances.
>
> Can you go into more detail about your EC2 instances and where you are
> storing your data? If you are storing it on standard EBS then you
> are competing for non-guaranteed bandwidth. The information contained
> on the AWS product description page is basically a lie (
> http://aws.amazon.com/ebs/ ) unless things have changed substantially
> since I last used them. If you are storing data on the ephemeral
> disks, then unless you are on a "whole machine" instance
> (m1.xlarge/c1.xlarge/m2.4xlarge) you are competing for SATA resources
> on the ephemeral disks. If your machine neighbors are doing some heavy
> disk workload, you may simply be starved for resources.
>
> Kelly
>



-- 
Xiaofei (Gregory) Du
Department of Computer Science
University of California, Santa Barbara

