Re: Pacific: parallel PG reads?

Hi,

This is a good suggestion. Unfortunately, I've already tried striping the
RBD images, and it had little effect: an image striped with a stripe count
of 2 and a stripe size of 2 MB performed almost exactly the same as a
non-striped image.
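For context, striping lays data out RAID0-style: stripe units are assigned
round-robin across an "object set" of stripe_count objects, so a sequential
read spans several objects (and potentially several primary OSDs). Below is a
minimal sketch of that mapping, following the layout described in the RBD
striping docs; the function and parameter names are illustrative, not the
librbd API:

```python
# Sketch of RAID0-style RBD striping: map a logical byte offset to
# (object number, offset within that object). Illustrative only.

def stripe_map(offset, stripe_unit, stripe_count, object_size):
    units_per_object = object_size // stripe_unit
    units_per_set = units_per_object * stripe_count  # one "object set"

    unit = offset // stripe_unit            # which stripe unit overall
    object_set = unit // units_per_set      # which group of stripe_count objects
    within = unit % units_per_set
    obj_in_set = within % stripe_count      # round-robin across the set
    stripe_in_obj = within // stripe_count  # which stripe inside the object

    object_no = object_set * stripe_count + obj_in_set
    obj_offset = stripe_in_obj * stripe_unit + offset % stripe_unit
    return object_no, obj_offset

# With stripe_count=2, 2 MiB units and 4 MiB objects, consecutive 2 MiB
# reads alternate between two objects, so two OSDs can serve them:
MiB = 1 << 20
for off in range(0, 8 * MiB, 2 * MiB):
    print(off // MiB, stripe_map(off, 2 * MiB, 2, 4 * MiB))
```

Note that this only helps if the client actually issues reads large enough
(or deep enough) to cross stripe-unit boundaries; a QD=1 stream of small
reads still hits one object at a time.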

Z

On Thu, Nov 11, 2021 at 8:24 PM 胡 玮文 <huww98@xxxxxxxxxxx> wrote:

> Hi Zakhar,
>
>
>
> If you are using RBD, you may be interested in the striping feature. It
> works like RAID0 and can read from multiple objects at once for sequential
> read requests.
>
>
>
> https://docs.ceph.com/en/latest/man/8/rbd/#striping
>
>
>
> Weiwen Hu
>
>
>
> Sent from Mail for Windows <https://go.microsoft.com/fwlink/?LinkId=550986>
>
>
>
> *From: *Zakhar Kirpichenko <zakhar@xxxxxxxxx>
> *Sent: *November 11, 2021 20:54
> *To: *ceph-users <ceph-users@xxxxxxx>
> *Subject: *Pacific: parallel PG reads?
>
>
>
> Hi,
>
> I'm still trying to combat really bad read performance from HDD-backed
> replicated pools, which stays under 100 MB/s most of the time with a
> single thread at QD=1. I don't quite understand why the reads are that
> slow, i.e. much slower than a single HDD, but I do understand that Ceph
> clients read a PG from its primary OSD only.
>
> Since reads are immutable, is it possible to make Ceph clients read a PG
> in a RAID1-like fashion? I.e., if a PG has a primary OSD and two replicas,
> is it possible to read all 3 OSDs in parallel for a 3x performance gain?
>
> I would appreciate any advice.
>
> Best regards,
> Zakhar
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
>