Re: [rgw multisite] issue of sync large file

Hi,

What exactly has you worried about large transfers for sync?

Breaking them up and sending the parts in parallel could certainly help reduce the latency of data sync between zones, as long as there's bandwidth available. But without some careful throttling and I/O scheduling (which we don't currently do at all for data sync), I'd worry about saturating the network with sync I/O and starving clients. Our civetweb frontend also imposes a limit on the number of threads for incoming connections, so I'd be wary of tying up more of those for sync. An I/O scheduling strategy and a replacement for civetweb are both on our long-term roadmap - we'd love your input on both.
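To make the throttling concern concrete, here is a minimal token-bucket sketch: sync transfers draw bytes from a shared budget and block when the budget is exhausted, leaving headroom for client traffic. This is purely illustrative - TokenBucket and its parameters are hypothetical, not RGW code.

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: sync transfers consume budget at the
    byte rate they send, so client I/O keeps headroom (illustrative)."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def throttle(self, nbytes):
        """Block until nbytes of budget is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at the burst size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Sleep just long enough for the deficit to refill.
            time.sleep((nbytes - self.tokens) / self.rate)
```

Each sync worker would call throttle(len(chunk)) before sending a chunk; tuning the rate against available bandwidth is the hard part this sketch leaves open.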

Thanks,
Casey


On 10/10/2016 06:12 AM, Zhangzengran wrote:
Hi Casey:
   We noticed an issue recently while testing rgw multisite: when we put a large file (TB scale)
into the master zone using multipart upload, the slave zone syncs it with the atomic processor. This means
it fetches the whole file in a single request, so we are worried about the sync process in this situation.

Maybe we could write a new function for RGWAsyncFetchRemoteObj that downloads the
original object with byte-range requests and restores it as a multipart upload?
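The byte-range idea above could look roughly like the following sketch: fetch the object in fixed-size ranges in parallel, then reassemble the parts in order. Everything here is hypothetical - fetch_range stands in for a ranged GET against the master zone (a real implementation would send a "Range: bytes=start-end" header), and a byte string stands in for the remote object.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # hypothetical 4 MiB part size


def fetch_range(source, start, end):
    """Stand-in for a ranged GET against the master zone; a real
    implementation would issue 'Range: bytes=start-end'."""
    return source[start:end]


def sync_object(source, part_size=CHUNK, workers=4):
    """Fetch an object in byte-range parts in parallel and reassemble
    the parts in order, as a multipart-style sync would."""
    size = len(source)
    ranges = [(off, min(off + part_size, size))
              for off in range(0, size, part_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda r: fetch_range(source, *r), ranges)
    return b"".join(parts)
```

On the RGW side the parts would be written through the multipart machinery rather than joined in memory; the sketch only shows the range-splitting and ordered reassembly.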

Regards, thank you!
-------------------------------------------------------------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from H3C, which is
intended only for the person or entity whose address is listed above. Any use of the
information contained herein in any way (including, but not limited to, total or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender
by phone or email immediately and delete it!



