How to do network emulation on incoming traffic?

Linux Advanced Routing and Traffic Control


 



I'm trying to simulate a satellite link to a Linux server to test
application performance.  I haven't used any of the tc stuff before,
but I blandly assured people it would be "easy" to set up a simulated
long thin pipe on a spare network interface.

However, now that I'm exploring, it's proving quite difficult.

Let me start with the general question first.  My setup is:

+--------+           +---------+
| Linux  |-----------| Windows |
| Server |    LAN    | Client  |
+--------+           +---------+

And I want the LAN to look like a satellite link, with delay, jitter,
packet loss, and (asymmetric) rate limiting in both directions.

(If you care, I'm trying to emulate a DirecWay satellite link for
a feasibility test.  The parameters are ~350+/-35 ms delay each way,
75 kbit/s uplink, 550 kbit/s downlink.  The latter takes from a
multi-megabyte "fair usage" bucket that refills at 50 kbit/s.
I don't have good packet loss numbers, so I'm going to start with
1% and see how sensitive performance is.)
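For reference, here is roughly how those numbers might translate into tc commands on the egress side of each direction. This is only a sketch: the interface name `spare` matches the transcript below, but the tbf burst and latency values are guesses, and the "fair usage" bucket behaviour is not modelled at all.

```shell
# Downlink direction (server -> client), shaped on the server's spare
# interface: ~350 +/- 35 ms one-way delay, 1% loss, 550 kbit/s cap.
tc qdisc add dev spare root handle 1: netem delay 350ms 35ms loss 1%
tc qdisc add dev spare parent 1:1 handle 10: tbf rate 550kbit \
    burst 10kb latency 500ms

# The 75 kbit/s uplink direction would need the same recipe applied on
# the other end of the link (or on an intermediate Linux box), with
# "rate 75kbit" -- which is exactly the ingress problem described below.
```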

Can anyone tell me how to do that?


My problem is that trying to set up netem incoming is proving
to be a pain:

# tc qdisc add dev spare handle ffff: ingress
# tc qdisc add dev spare parent ffff: handle 10: netem delay 300ms 50ms 25% loss 1%
RTNETLINK answers: Unknown error 4294967295

I'm not at all certain why this doesn't work.  (Error 4294967295 is
just -1 printed as an unsigned 32-bit number, so the kernel isn't even
telling me which error it is.)  I'm told that the ingress queue is a
bit of a kludge; is there an explanation of how it is implemented
somewhere, that would help me understand its limitations?

The whole tc system is causing me some confusion.


First of all, am I right that there's considerable overlap in
functionality with netfilter?  Both have packet selection (filtering)
mechanisms, and both can throw away packets, but they differ in what
other actions they can do:

Netfilter can redirect, reply to, and modify packets, but it cannot
delay or reorder them.  Its throttling features (the limit and
hashlimit match modules) are fairly simplistic.  It does, however,
have sophisticated stateful packet classification features.

Netfilter also lets you mess with packets in multiple different
places in the routing path.  There's PREROUTING and POSTROUTING,
and every packet also passes through one of INPUT, OUTPUT, and FORWARD.

tc is all about throttling and reordering packets.  It cannot
redirect, reply to, or modify packets, and its classification is
stateless and fairly simplistic.

You can use netfilter to perform filtering (classification) for tc,
but not vice-versa.

I *think* netfilter's flexibility comes at a bit of a speed penalty,
and doing pure-tc classification will be faster than the equivalent
logic using netfilter.  (But for a typical broadband connection up to
10 Mbit/sec, this is not a big issue.)
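To make the "netfilter classifies for tc" direction concrete, the usual recipe is netfilter's MARK target feeding tc's fw classifier. A sketch, assuming a classful qdisc rooted at 1: with a class 1:10 already configured (both are made up here):

```shell
# Mark outbound SSH packets in netfilter's mangle table...
iptables -t mangle -A POSTROUTING -p tcp --dport 22 -j MARK --set-mark 6

# ...and have tc steer anything carrying mark 6 into class 1:10.
tc filter add dev spare parent 1: protocol ip handle 6 fw flowid 1:10
```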


One tc question: If most queueing is done outgoing, is there some sort of
"local delivery" outgoing queue that I can use to throttle traffic to
local services?



Now, I think that I understand netfilter.  Each packet passes through
a succession of rules, each of which has some match conditions and an
action.  This continues until a final disposition action is performed.

tc is a little more confusing.  With classless qdiscs, it seems that there
is a chain of queues, and packets pass through them in sequence.

QUESTION: It seems that these queues are "active" at both ends.
A source pushes packets into them, and a device pulls them out at
its transmission rate.  When a device polls for packets from a
priority queue, the queue will give the "best" packet available
at the time.

It's not clear how this works when two queues are connected together.
If a rate-limited FIFO is receiving packets from a priority queue, does
it "pull" until it's full, even though waiting might result in better
packet ordering?

I need to use netem plus a rate-control queue like tbf.
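Stacking the two would look something like the following sketch, which is the combination usually shown in the netem examples: netem at the root adds the delay/jitter/loss, and a tbf child rate-limits whatever netem releases (the burst and latency values are assumptions):

```shell
# netem as the root qdisc: satellite delay, jitter, and loss.
tc qdisc add dev spare root handle 1: netem delay 350ms 35ms loss 1%

# tbf attached under netem (netem exposes a single class 1:1)
# enforces the 75 kbit/s uplink rate.
tc qdisc add dev spare parent 1:1 handle 10: tbf rate 75kbit \
    burst 5kb latency 1000ms
```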


QUESTION: The whole major:minor number thing is a bit confusing.
I know that minor number 0 is reserved for qdiscs, but is the convention
that class x:y is associated with qdisc x:0 something that is enforced
somewhere, or are they just random 32-bit numbers, 65536 of which are
reserved for qdiscs?



But when you have classful qdiscs, things start getting confusing.

It appears that you need three things:
- A "tc qdisc add" statement to create the "major" qdisc in the chain.
- Some "tc class add" statements to create queue classes
- Some "tc filter add" statements to assign packets to the
  various classes.
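As a concrete sketch of those three steps, here is what they look like with htb (the rates and the choice of DNS as the matched traffic are made up for illustration):

```shell
# 1. Create the root classful qdisc; unmatched traffic goes to 1:20.
tc qdisc add dev spare root handle 1: htb default 20

# 2. Create the classes hanging off it.
tc class add dev spare parent 1: classid 1:10 htb rate 400kbit ceil 550kbit
tc class add dev spare parent 1: classid 1:20 htb rate 150kbit ceil 550kbit

# 3. Attach a filter assigning packets to a class (here, DNS -> 1:10).
tc filter add dev spare parent 1: protocol ip u32 \
    match ip dport 53 0xffff flowid 1:10
```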

The picture at http://pupa.da.ru/tc/ seems to help, but it doesn't
explain the multiple-major case at all.

(But that web page *does* tell me about the IMQ device, which may be
the solution to my problems...  I'll go away and play with that now.)
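[For the record, on kernels that ship the ifb device and the mirred action, the same trick works without the out-of-tree IMQ patch: use the ingress qdisc only to redirect traffic to ifb0, then shape it there as ordinary egress. A sketch, assuming eth0 is the LAN-facing interface:]

```shell
# Load the intermediate functional block device and bring it up.
modprobe ifb numifbs=1
ip link set dev ifb0 up

# The ingress qdisc on the real interface only classifies and redirects...
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# ...so netem can run as a normal egress qdisc on ifb0.
tc qdisc add dev ifb0 root netem delay 350ms 35ms loss 1%
```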


Anyway, thanks for any guidance on the subject.  I think there's some
big conceptual issue I'm just not getting, leading to a disconnect.
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
