Workload Separation in Ceph RGW Cluster - Recommended or Not?

Hi

I would like to seek your insights and recommendations on the practice of
workload separation in a Ceph RGW (RADOS Gateway) cluster. I have been
facing challenges with large request queues in my deployment and would
appreciate your expertise in determining whether workload separation is a
recommended approach.

In my current Ceph cluster, I have 20 RGW instances. Client requests are
directed to RGW1-16, while RGW17-20 are dedicated to administrative tasks
and backend usage. However, I have been encountering errors and congestion
issues due to the accumulation of large queues within the RGW instances.
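(For reference, the queue buildup I am describing is what I see in the
per-daemon perf counters, if I am reading them correctly, e.g.:

    # queue depth / in-flight requests on a given RGW daemon (run on its host)
    ceph daemon client.rgw.<instance> perf dump | grep -E '"qlen"|"qactive"'

where <instance> is a placeholder for the daemon name.)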

Considering the above scenario, I would like to ask for your opinions on
workload separation as a potential solution, and specifically whether it is
a recommended practice in a Ceph RGW cluster.

To address the queue congestion and improve performance, my proposed
solution includes separating the RGW instances based on their specific
purposes. This entails allocating dedicated instances for client requests,
backend usage, administrative tasks, metadata synchronization with other
zone groups, garbage collection (GC), and lifecycle (LC) operations.
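
Roughly, the split I have in mind looks like the following ceph.conf sketch
(instance names are placeholders, and I am assuming that
rgw_enable_gc_threads, rgw_enable_lc_threads and rgw_run_sync_thread are the
right knobs for this, so please correct me if not):

    # Client-facing instances (behind the load balancer): serve S3/Swift
    # requests only, with background work disabled.
    [client.rgw.rgw1]
    rgw_enable_gc_threads = false
    rgw_enable_lc_threads = false
    rgw_run_sync_thread = false

    # Dedicated background instances (kept out of the load balancer): run
    # GC, lifecycle and multisite sync threads.
    [client.rgw.rgw17]
    rgw_enable_gc_threads = true
    rgw_enable_lc_threads = true
    rgw_run_sync_thread = true

The idea is that GC, LC and sync work would never compete with client
requests on the same daemon, and the dedicated instances would also be the
ones used for administrative tasks.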

I kindly request your feedback and insights on the following points:

1. Is workload separation considered a recommended practice in Ceph RGW
deployments?
2. What are the potential benefits and drawbacks of workload separation in
terms of performance, resource utilization, and manageability?
3. Are there any specific considerations or best practices to keep in mind
while implementing workload separation in a Ceph RGW cluster?
4. Can you share your experiences or any references/documentation that
highlight successful implementations of workload separation in Ceph RGW
deployments?

I truly value your expertise and appreciate your time and effort in
providing guidance on this matter. Your insights will contribute
significantly to optimizing the performance and stability of my Ceph RGW
cluster.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


