Re: High-availability testing of ceph

On Tue, Jul 31, 2012 at 12:31 AM,  <Eric_YH_Chen@xxxxxxxxxx> wrote:
> If the performance of an rbd device is n MB/s with replica=2,
> then the total I/O throughput on the hard disks should be over 3 * n MB/s,
> because I originally thought the total number of copies was 3.
>
> So that no longer seems correct; the total number of copies is only 2,
> and the total I/O throughput on disk should be 2 * n MB/s. Right?

Yes, each replica needs to independently write the data to disk. On
top of that, there are journal writes, and filesystems have overhead
too. If you create a 1 GB object in a pool replicated 3 times, you
should expect about 3*1 GB writes in total to your osd data disks, and
at least 3*1 GB writes in total to your osd journal disks.
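As a rough sketch of the write amplification described above (illustrative only, not an official Ceph formula; the function name and numbers are made up for the example): each replica writes the object once to its data disk and at least once to its journal.

```python
def total_disk_writes(object_gb: float, replicas: int) -> dict:
    """Estimate the minimum bytes written across all OSDs for one object.

    Assumes one full data-disk write and one journal write per replica;
    filesystem overhead would add more on top of this.
    """
    return {
        "data_gb": replicas * object_gb,     # one full copy per replica
        "journal_gb": replicas * object_gb,  # journaled before the data write
    }

# A 1 GB object in a pool replicated 3 times:
writes = total_disk_writes(object_gb=1.0, replicas=3)
print(writes)  # {'data_gb': 3.0, 'journal_gb': 3.0}
```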

In normal use, you have many servers, and use CRUSH rules to ensure
the different replicas are not stored on the same server.
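A toy illustration of that placement constraint (this is not CRUSH itself, and the host/OSD layout is invented for the example): choose one OSD per replica such that no two replicas land on the same host.

```python
# Hypothetical cluster layout: three hosts, two OSDs each.
osds_by_host = {
    "host-a": ["osd.0", "osd.1"],
    "host-b": ["osd.2", "osd.3"],
    "host-c": ["osd.4", "osd.5"],
}

def place_replicas(num_replicas: int) -> list:
    """Pick one OSD from each of num_replicas distinct hosts."""
    hosts = list(osds_by_host)[:num_replicas]
    if len(hosts) < num_replicas:
        raise ValueError("not enough hosts for the requested replica count")
    return [osds_by_host[host][0] for host in hosts]

placement = place_replicas(3)
print(placement)  # ['osd.0', 'osd.2', 'osd.4']
```

The real CRUSH algorithm does this pseudo-randomly over a weighted hierarchy, but the invariant shown here, one failure domain per replica, is the same.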
--

