userspace. To achieve this I create a standard Gold/Silver/Bronze configuration as follows:
tc qdisc add dev eth0 root handle 1: htb default 12
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 1kbit ceil 100mbit
Then, according to the documentation, setting the SO_PRIORITY socket option appropriately
should route traffic to the chosen class; for class 1:10 the priority should be 65546,
i.e. the MAJOR number in the high 16 bits and the MINOR number in the low 16 bits. I have
the capability required to set a value outside the 0-7 range, and setsockopt returns no error.
The problem I have is that the skb->priority field is not set by the time I reach the
htb_classify function via the htb_enqueue function: both skb->sk and skb->priority are zero.
As a result, control falls through until the default class is selected. My small test is an
application that simply sets all of its outgoing data to a single priority.
And it is not set to the default :-)
I have tracked it down as far as the ip_queue_xmit function in net/ipv4/ip_output.c.
There the skb->priority and skb->sk fields are correct; however, we then go through the
NF_HOOK, which is where I got lost.
There is no reason I can think of that a socket should lose any of its options between these two
points in the code. Could someone who understands the intermediate code let me know whether this
is a problem with the code or with the way I am setting the option on the socket?
Thanks for any help you can give.
Declan Conlon
_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc
HOWTO: http://lartc.org/