Re: min_size & hybrid OSD latency

Christian is correct that min_size does not affect how many OSDs need to ACK the write; it controls how many copies must be available for the PG to remain accessible.  This is where SSD journals for FileStore and SSD DB/WAL partitions for BlueStore come into play: the write is considered ACKed as soon as the journal (or WAL) has received it.
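
If it helps to see the two knobs side by side, here is a minimal python-rados sketch (the pool name 'mypool' and the default /etc/ceph/ceph.conf path are assumptions, adjust for your cluster) that just reads back a pool's size and min_size:

import json
import rados

# Connect with the default config/keyring (assumed paths).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Ask the monitors for the pool's size and min_size ('mypool' is a placeholder).
for var in ('size', 'min_size'):
    cmd = json.dumps({'prefix': 'osd pool get', 'pool': 'mypool',
                      'var': var, 'format': 'json'})
    ret, out, errs = cluster.mon_command(cmd, b'')
    print(var, '=', json.loads(out)[var] if ret == 0 else errs)

cluster.shutdown()

size is how many copies get written (and all of them ACK); min_size is only the floor below which the PG stops serving I/O.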

Additionally, please keep in mind that a write to an SSD across a network is not going to be as fast as the SSD's own specifications suggest; you are adding network latency on top of the device latency.
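
To get a rough feel for that, the sketch below (same assumptions as above: python-rados, a placeholder pool 'mypool', a throwaway object name) times a single 4 KiB write from the client's point of view; the number it prints includes the network round trips and the slowest replica in the acting set, not just the SSD on its own:

import time
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('mypool')   # placeholder pool name
data = b'\0' * 4096

# write_full() returns only once the primary OSD has heard back from
# all replicas, so this measures the full client-visible write latency.
start = time.monotonic()
ioctx.write_full('latency-test-obj', data)
print('write latency: %.2f ms' % ((time.monotonic() - start) * 1000))

ioctx.remove_object('latency-test-obj')
ioctx.close()
cluster.shutdown()

On a hybrid pool you should see that number track the HDD OSDs (or their journal/WAL devices), not the SSDs.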

On Tue, Oct 10, 2017 at 7:51 PM Christian Balzer <chibi@xxxxxxx> wrote:

Hello,

On Wed, 11 Oct 2017 00:05:26 +0200 Jack wrote:

> Hi,
>
> I would like some information about the following
>
> Let say I have a running cluster, with 4 OSDs: 2 SSDs, and 2 HDDs
> My single pool has size=3, min_size=2
>
> For a write-only pattern, I thought I would get SSD-level performance,
> because the write would be ACKed as soon as min_size OSDs had ACKed it
>
> But am I right?
>
You're the second person in recent times to reach that wrong
conclusion about min_size.

All writes have to be ACKed by every replica; the only place where hybrid
setups help is in accelerating reads.
That is something people like me have very little interest in, as it's the
writes that need to be fast.

Christian

> (the same setup could involve some high-latency OSDs, in the case of a
> country-level cluster)


--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Rakuten Communications
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
