Re: RGW performance as a Veeam capacity tier

Just an update for anyone who finds this thread: it looks like Veeam doesn't index its content very well, so when it offloads data to the capacity tier it reads the local repository as random IO. That means IOPS and throughput are poor, and you really need to overbuild your volumes (RAID) on the Veeam server to get any kind of performance out of the offload. On a 4-disk RAID10 we get about 30 MB/s when offloading.
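
If you want to confirm this on your own setup, here is a minimal sketch with fio (fio is assumed to be installed; the mount point, file size, and job counts below are illustrative, not tuned values). Comparing a random-write run against a sequential-write run on the repository volume shows whether the local disks are the bottleneck:

  # Random 4k writes, roughly the pattern the offload generates
  fio --name=offload-sim --filename=/mnt/veeam-repo/fio.test \
      --rw=randwrite --bs=4k --size=4G --direct=1 \
      --ioengine=libaio --iodepth=16 --numjobs=4 --group_reporting

  # Sequential 1M writes on the same volume, as a baseline
  fio --name=seq-baseline --filename=/mnt/veeam-repo/fio.test \
      --rw=write --bs=1M --size=4G --direct=1 \
      --ioengine=libaio --iodepth=16 --group_reporting

If the random run lands near the ~30 MB/s above while the sequential run is an order of magnitude faster, the repository volume (not RGW) is the limit. The cluster side can be sanity-checked separately with rados bench (the pool name here is the stock RGW data pool; substitute your own):

  rados bench -p default.rgw.buckets.data 60 write -t 16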



-----Original Message-----
From: Konstantin Shalygin <k0ste@xxxxxxxx> 
Sent: Saturday, July 10, 2021 10:28 AM
To: Nathan Fish <lordcirth@xxxxxxxxx>
Cc: Drew Weaver <drew.weaver@xxxxxxxxxx>; ceph-users@xxxxxxx
Subject: Re: Re: RGW performance as a Veeam capacity tier

Veeam normally produced 2-4 Gbit/s to S3 in our case.


k

Sent from my iPhone

> On 10 Jul 2021, at 08:36, Nathan Fish <lordcirth@xxxxxxxxx> wrote:
> 
> No, that's pretty slow; you should get at least 10x that for 
> sequential writes. Sounds like Veeam is doing a lot of sync random 
> writes. If you are able to add a bit of SSD (preferably NVMe) for 
> journaling, that can help random IO a lot. Alternatively, look into IO 
> settings for Veeam.
> 
> For reference, we have ~100 drives with size=3, and get ~3 GiB/s 
> sequential with the right benchmark tuning.
> 
>> On Fri, Jul 9, 2021 at 1:59 PM Drew Weaver <drew.weaver@xxxxxxxxxx> wrote:
>> 
>> Greetings.
>> 
>> I've begun testing using Ceph 14.2.9 as a capacity tier for a scale out backup repository in Veeam 11.
>> 
>> The backup host and the RGW server are connected directly at 10Gbps.
>> 
>> It would appear that the maximum throughput that Veeam is able to achieve while archiving data to this cluster is about 24 MB/s.
>> 
>> client:   156 KiB/s rd, 24 MiB/s wr, 156 op/s rd, 385 op/s wr
>> 
>> The cluster has 6 OSD hosts with a total of 48 4TB SATA drives.
>> 
>> Does that performance sound about right for 48 4TB SATA drives w/ 10G networking?
>> 
>> Thanks,
>> -Drew
>> 
>> 
>> 
>> 
>> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



