Re: Why Ceph's aggregate write throughput does not scale with the number of osd nodes

On Wed, Nov 23, 2011 at 12:07, Xiaofei Du <xiaofei.du008@xxxxxxxxx> wrote:
>
> I installed Ceph on 10 ec2 instances.

Can you go into more detail about your EC2 instances and where you are
storing your data? If you are storing it on standard EBS then you are
competing for non-guaranteed bandwidth. The information on the AWS
product description page is basically a lie (
http://aws.amazon.com/ebs/ ), unless things have changed substantially
since I last used it. If you are storing data on the ephemeral disks,
then unless you are on a "whole machine" instance
(m1.xlarge/c1.xlarge/m2.4xlarge) you are sharing those disks' SATA
bandwidth with other tenants. If your machine's neighbors are doing a
heavy disk workload you may simply be starved for resources.

Kelly
