Re: Limit bandwidth per-user (uid/gid)

Linux Advanced Routing and Traffic Control

On Mon, Dec 23, 2013 at 8:12 AM, Paride Legovini <pl@xxxxxxxxxxxxxx> wrote:
> Dear Carl-Daniel,
>
> thanks for your reply, it contains a lot of wise suggestions.
>
> On Mon, Dec 23, 2013 at 04:22:08AM +0100, Carl-Daniel Hailfinger wrote:
>> Dear Paride,
>>
>> that's a really exciting environment.
>>
>> On 22.12.2013 18:10, Paride Legovini wrote:
>> > I'm working in an Antarctic research station where our connection to the
>> > Internet is a 512kbps satellite link.
>>
>> Assuming that the satellite link is expensive and has long roundtrip
>> times, it is probably a wise idea not to throw away any data which has
>> already been sent over the satellite link.
>
> You're perfectly right here: the link is expensive and the round trip is
> 700ms at best, but easily reaches 3000ms or even 4000ms when the network
> is congested.
>
>> > I want to set up a server where each research project has an account
>> > where they send data via sftp or rsync; this data is then transferred
>> > overnight to a server in Europe. My idea is to use a separate cronjob
>> > or daemon for each user that runs with the user's privileges.
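>> >
>> > (For instance, each user crontab could carry a single entry along
>> > these lines; the time, paths and relay host are hypothetical:)
>> >
>> >   0 2 * * * rsync -az /home/project1/outbox/ relay.example.org:/incoming/project1/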
>> >
>> > What I want to do is:
>> >
>> > 1. Limit the total bandwidth that a group (GID) can generate. There
>> >    should be separate limits for inbound and outbound traffic.
>>
>> Shaping outbound traffic is mostly a standard procedure with lots of
>> documentation available. Nowadays you have more options than a few
>> years ago, and quite a few of them are easy to set up (a minimal
>> example follows). Beyond that, I'll conveniently ignore outbound
>> shaping in my response and talk about the hard stuff instead.
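>>
>> For illustration, a minimal outbound shaper might look like this
>> (an untested sketch; eth0 stands in for the satellite-facing
>> interface, with the 512kbit link rate as the ceiling):
>>
>>   # root HTB qdisc; unclassified traffic falls into class 1:10
>>   tc qdisc add dev eth0 root handle 1: htb default 10
>>   # parent class capped at the link rate
>>   tc class add dev eth0 parent 1: classid 1:1 htb rate 512kbit
>>   # default class: 384kbit guaranteed, may borrow up to the ceiling
>>   tc class add dev eth0 parent 1:1 classid 1:10 htb rate 384kbit ceil 512kbit
>>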
>> You can't limit inbound traffic to Antarctica unless you do it on the
>> other end of the satellite link (Europe). Well, you can limit the
>> inbound bandwidth by throwing away packets locally, but that's stupid
>> for packets which have already come over an expensive satellite link.
>> Only the remote side decides how much bandwidth to send over the
>> satellite link. What has been sent by the remote side will go over the
>> satellite link and you can't undo this on the local side.
>> What you can do in Antarctica is:
>> 1. Perform the transfer with bandwidth-aware tools (rsync has the
>> --bwlimit option, but that is too crude to be useful; see the example
>> after this list).
>> 2. Rely on crude implicit+indirect throttling of inbound traffic by
>> shaping the bandwidth passed to local applications, which will cause a
>> sort of backpressure (changed Ack behaviour) once the kernel buffers for
>> your connection are full. Some people also recommend throttling or
>> policing outbound ACKs, but with current TCP congestion control
>> mechanisms this can have unintended side effects.
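>>
>> As a concrete example of 1., a transfer capped at roughly 320kbit/s
>> (--bwlimit takes kilobytes per second; host and paths are
>> hypothetical):
>>
>>   rsync -az --bwlimit=40 /data/project1/ user@relay.example.org:/incoming/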
>>
>>
>> > 2. Limit the bandwidth per-user (UID), so if the GID is allowed to
>> >    generate 384kbps of traffic and 3 users are using the network,
>> >    each user can use at most 128kbps. If there is only one user, he
>> >    gets the full 384kbps.
>> >    Again, there should be different limits for inbound and outbound
>> >    traffic.
>> >    This should work regardless of the number of connections the
>> >    user makes.
>> >
>> > I played a bit with iptables and tc, but the only way I found to do
>> > something like this is to manually set a different mark for each
>> > user and then use tc (roughly as in the sketch below). I'd prefer a
>> > solution where there is no need to set up any rule manually when a
>> > user is added or removed. Also, the --uid-owner option works only
>> > for outbound traffic.
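>> >
>> > For illustration, the manual marking goes roughly like this
>> > (hypothetical uid 1001, assuming an HTB class 1:10 already exists
>> > on eth0):
>> >
>> >   # mark outbound packets owned by uid 1001
>> >   iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 1001
>> >   # steer fw-marked packets into the matching tc class
>> >   tc filter add dev eth0 parent 1: protocol ip prio 1 handle 1001 fw flowid 1:10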
>>
>> AFAICS even if you use net_cls as suggested by Joseph Santaniello,
>> you're limited to managing outbound traffic (net_cls may indeed be a
>> cleaner approach than uid-owner, I haven't tried it). With CONNMARK,
>> you should be able to tie the pieces together. I did such a setup
>> (CONNMARK, not net_cls) for inbound+outbound throttling quite a few
>> years ago, but sadly the code has been lost somewhere in the depths of
>> decommissioned harddisks.
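>>
>> Such a CONNMARK setup would look roughly like this (a sketch;
>> "projects" is a hypothetical group, and the restored mark makes
>> inbound packets of the same connection matchable by tc fw filters):
>>
>>   # mark outbound packets by group and save the mark to conntrack
>>   iptables -t mangle -A OUTPUT -m owner --gid-owner projects -j MARK --set-mark 10
>>   iptables -t mangle -A OUTPUT -j CONNMARK --save-mark
>>   # copy the connection mark back onto inbound packets
>>   iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark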
>
> If I understand correctly, this corresponds to your second suggestion
> regarding the inbound traffic throttling. The difficult thing here is
> that I not only have to mark a connection, but I have to mark all the
> connections started by a certain user (cgroup) in the same (known) way,
> so I can use tc on them.
>
> I see that Intermediate Functional Block (ifb) devices can also be
> used for inbound traffic shaping (the usual redirection is sketched
> below), but I don't think they will help me do per-user (cgroup)
> throttling or shaping of the incoming traffic. This seems a
> difficult task.
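>
> A sketch (eth0 stands in for the satellite-facing interface; traffic
> redirected to ifb0 can then be shaped there with a normal qdisc):
>
>   modprobe ifb numifbs=1
>   ip link set ifb0 up
>   # redirect all ingress traffic arriving on eth0 to ifb0
>   tc qdisc add dev eth0 handle ffff: ingress
>   tc filter add dev eth0 parent ffff: protocol ip u32 \
>       match u32 0 0 action mirred egress redirect dev ifb0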
>
> I'll think about it, but for now I'll concentrate on the outgoing
> traffic.
>
> Sunny greetings,
>
> Paride
>


The 700-4000 ms latency might very well improve with throttled outbound traffic.

I admit to very cursory knowledge of this, but as I understand it, the
way tc works with outbound queues is that the ACK rate is adjusted so
the other side essentially sees a slower interface, and no packets are
intentionally dropped unless the buffers get full.

Going the cgroups way, you could either start the cronjobs in some
wrapper that puts the tasks into the cgroup you want (see the sketch
below), or use something like cgred to move tasks into cgroups
automatically by uid/gid/process as they are started.
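
A rough sketch of the wrapper approach (names and paths are
illustrative; net_cls.classid 0x00010010 corresponds to tc class 1:10
on an assumed egress qdisc on eth0):

  # create a net_cls cgroup for one project and tag its traffic
  mkdir /sys/fs/cgroup/net_cls/project1
  echo 0x00010010 > /sys/fs/cgroup/net_cls/project1/net_cls.classid
  # run the nightly transfer inside that cgroup (cgexec is from libcgroup)
  cgexec -g net_cls:project1 rsync -az /data/project1/ user@relay.example.org:/incoming/
  # classify the tagged packets into class 1:10
  tc filter add dev eth0 parent 1: protocol ip prio 10 handle 1: cgroup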

Or maybe tunnel IP to a machine back in "The World" that acts as a
gateway and does the throttling for you? Something like the sketch
below.
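
A sketch with documentation addresses (192.0.2.1 for the Antarctic
side, 198.51.100.1 for the gateway in Europe, which would shape what
it forwards):

  ip tunnel add gre1 mode gre local 192.0.2.1 remote 198.51.100.1 ttl 64
  ip link set gre1 up
  ip addr add 10.0.0.2/30 dev gre1
  # route the bulk transfer traffic through the shaped gateway
  # (a full default route would first need a host route to
  # 198.51.100.1 via the satellite link, to avoid a loop)
  ip route add 203.0.113.0/24 dev gre1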

Joseph



