Re: Ceph iSCSI GW is too slow when compared with Raw RBD performance

Awesome, thanks for the info!

By any chance, do you happen to know what configurations you needed to
adjust to make Veeam perform a bit better?

On Fri, Jun 23, 2023 at 10:42 AM Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:

> Yes, with someone I did some consulting for.  Veeam seems to be one of the
> prevalent uses for ceph-iscsi, though I'd try to use the native RBD client
> instead if possible.
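For reference, the native path on a Linux client is just a kernel RBD map plus a normal mount; a minimal sketch, with placeholder pool, image, device, and mount point names (the mkfs step assumes a fresh, empty image):

    rbd map rbd_pool/test_image    # attach the image with the kernel RBD client
    mkfs.xfs /dev/rbd0             # placeholder device; wipes the image, fresh images only
    mount /dev/rbd0 /mnt/test

Newer Ceph releases also ship a native Windows RBD driver (rbd-wnbd), which may be an option for Windows clients instead of the iSCSI gateway.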
>
> Veeam appears by default to store really tiny blocks, so there's a lot of
> protocol overhead.  I understand that Veeam can be configured to use "large
> blocks" that can make a distinct difference.
>
>
>
> On Jun 23, 2023, at 09:33, Work Ceph <work.ceph.user.mailing@xxxxxxxxx>
> wrote:
>
> Great question!
>
> Yes, one of the cases where we noticed the slowness was a Veeam setup. Have
> you experienced that before?
>
> On Fri, Jun 23, 2023 at 10:32 AM Anthony D'Atri <aad@xxxxxxxxxxxxxx>
> wrote:
>
>> Are you using Veeam by chance?
>>
>> > On Jun 22, 2023, at 21:18, Work Ceph <work.ceph.user.mailing@xxxxxxxxx>
>> wrote:
>> >
>> > Hello guys,
>> >
>> > We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
>> > for some workloads, RadosGW (via S3) for others, and iSCSI for some
>> > Windows clients.
>> >
>> > We started noticing some unexpected performance issues with iSCSI. I
>> > mean, an SSD pool reaches about 100 MB/s of write throughput for an
>> > image via the iSCSI gateway, when it can reach 600 MB/s or more for the
>> > same image when it is mapped and consumed directly via RBD.
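One way to quantify the gap is to run the same fio workload against the iSCSI-attached disk on the initiator and against the image directly via librbd; a rough sketch, with placeholder device, pool, and image names (a write test destroys whatever is on the target):

    # on the iSCSI initiator, against the attached LUN (overwrites data!)
    fio --name=iscsi-write --filename=/dev/sdX --rw=write --bs=4M --direct=1 \
        --ioengine=libaio --iodepth=32 --runtime=60 --time_based

    # the same workload straight through librbd, bypassing the gateway
    fio --name=rbd-write --ioengine=rbd --clientname=admin --pool=rbd_pool \
        --rbdname=test_image --rw=write --bs=4M --direct=1 --iodepth=32 \
        --runtime=60 --time_based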
>> >
>> > Is that performance degradation expected? We would expect some
>> > degradation, but not as much as this.
>> >
>> > Also, we have a question regarding the use of Intel Turbo Boost. Should
>> > we disable it? Is it possible that the root cause of the slowness in the
>> > iSCSI GW is the Intel Turbo Boost feature, which reduces the clock of
>> > some cores?
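As a starting point on the CPU question, it may help to watch what the gateway cores actually run at under load and pin the frequency governor before blaming Turbo Boost; a sketch, assuming the intel_pstate driver is in use:

    # 0 = turbo allowed, 1 = turbo disabled
    cat /sys/devices/system/cpu/intel_pstate/no_turbo

    # observe the actual core clocks while iSCSI traffic is flowing
    grep MHz /proc/cpuinfo

    # keep cores at full clock instead of letting them scale down
    cpupower frequency-set -g performance

    # to rule turbo out entirely, disable it temporarily
    echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo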
>> >
>> > Any feedback is much appreciated.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



