Re: Notes on "brick multiplexing" in 4.0


On Wednesday 17 June 2015 02:21 AM, Kaushal M wrote:
One more question. I keep hearing about QoS for volumes as a feature.
How will we guarantee service quality for all the bricks from a single
server? Even if we weren't doing QoS, we should make sure that
operations on one brick don't DoS the others. We already keep hearing
from users about self-heal causing problems for clients. Self-heal and
rebalance running simultaneously on multiple volumes in a
multiplexed-brick environment would most likely be disastrous.



Applying per-tenant rules might be easier with a multiplexed brick than with non-multiplexed ones. Each tenant would need some slice of the overall resources, and a single instance of the QoS translator loaded in the multiplexed brick can address this requirement. The same goes for management/internal operations like self-heal, rebalance etc. We would need knobs/policies to ensure that management operations do not steal the thunder from user-driven I/O.
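To make the idea concrete: a single QoS translator inside the multiplexed brick process could enforce per-tenant slices with something like a token bucket per tenant. This is only a minimal sketch of the technique, not Gluster's actual translator API; the tenant names, rates, and the `admit()` helper are all invented for illustration:

```python
import time

class TokenBucket:
    """Per-tenant rate limiter: refills `rate` tokens/sec, bursts up to `burst`."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        # Refill based on elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per tenant, all living in the single multiplexed brick
# process -- which is what makes a global policy possible at all.
# Internal operations (self-heal, rebalance) get a deliberately
# smaller slice than client I/O.
buckets = {
    "tenant-a": TokenBucket(rate=1000, burst=100),
    "tenant-b": TokenBucket(rate=1000, burst=100),
    "self-heal": TokenBucket(rate=100, burst=10),
}

def admit(tenant):
    """Gate an incoming op; the caller queues or rejects on False."""
    return buckets[tenant].allow()
```

The point of the sketch is the shape, not the numbers: because every tenant's bucket lives in one process, the knobs mentioned above (throttling self-heal and rebalance below client I/O) become a local scheduling decision rather than cross-process coordination.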

On the other hand, if we have one process per volume/tenant, preventing problems like noisy neighbors in a multi-tenant setup can be harder to address, as each process is unaware of global resource usage.

-Vijay


On Tue, Jun 16, 2015 at 11:01 PM, Jeff Darcy <jdarcy@xxxxxxxxxx> wrote:
Reading through that, it sounds like a well thought out approach.

Thanks!

Did you consider a super-lightweight version first, which only has
a process listening on one port for multiplexing traffic, and then
passes the traffic to individual processes running on the server?

   e.g. similar to what common IPv4 NAT does, but for gluster traffic

Yes, I thought about it.  Depending on how it's done, it could
alleviate the too-many-ports problem, but it doesn't really address
the uncontrolled contention for CPU, memory, and so on.  In a way
it would make that worse, as it's one more process to keep
switching in and out among the others.  Sure would have been nice,
though.
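For illustration, the super-lightweight dispatcher discussed above would amount to a single listener that routes each connection to the right per-volume process. A rough sketch, assuming an invented line-based handshake that names the volume (not Gluster's actual wire protocol; the volume names and ports are made up):

```python
import socket
import threading

# Hypothetical mapping: volume name announced by the client at connect
# time -> local port of that volume's own brick process.
BACKENDS = {"vol0": 49152, "vol1": 49153}

def route(handshake: bytes):
    """Pick a backend port from the first handshake line; None if unknown."""
    vol = handshake.split(b"\n", 1)[0].decode(errors="replace").strip()
    return BACKENDS.get(vol)

def pipe(src, dst):
    """Copy bytes one way until EOF. Every byte crosses the dispatcher,
    which is the extra context switching pointed out above."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def proxy(client):
    """Handle one accepted connection: route it, then splice both ways."""
    port = route(client.recv(4096))
    if port is None:
        client.close()
        return
    backend = socket.create_connection(("127.0.0.1", port))
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    pipe(backend, client)
```

This does collapse N listening ports into one, but as noted it adds a hop: the dispatcher is one more process competing for CPU and getting scheduled in and out between every client and every brick.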
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel





