Re: [RFC PATCH 00/17] xdp: Add packet queueing and scheduling capabilities

On Sun, Jul 17, 2022 at 08:41:10PM +0200, Kumar Kartikeya Dwivedi wrote:
> On Sun, 17 Jul 2022 at 20:17, Cong Wang <xiyou.wangcong@xxxxxxxxx> wrote:
> >
> > On Wed, Jul 13, 2022 at 11:52:07PM +0200, Toke Høiland-Jørgensen wrote:
> > > Stanislav Fomichev <sdf@xxxxxxxxxx> writes:
> > >
> > > > On Wed, Jul 13, 2022 at 4:14 AM Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
> > > >>
> > > >> Packet forwarding is an important use case for XDP, which offers
> > > >> significant performance improvements compared to forwarding using the
> > > >> regular networking stack. However, XDP currently offers no mechanism to
> > > >> delay, queue or schedule packets, which limits the practical uses for
> > > >> XDP-based forwarding to those where the capacity of input and output links
> > > >> always match each other (i.e., no rate transitions or many-to-one
> > > >> forwarding). It also prevents an XDP-based router from doing any kind of
> > > >> traffic shaping or reordering to enforce policy.
> > > >>
> > > >> This series represents a first RFC of our attempt to remedy this lack. The
> > > >> code in these patches is functional, but needs additional testing and
> > > >> polishing before being considered for merging. I'm posting it here as an
> > > >> RFC to get some early feedback on the API and overall design of the
> > > >> feature.
> > > >>
> > > >> DESIGN
> > > >>
> > > >> The design consists of three components: A new map type for storing XDP
> > > >> frames, a new 'dequeue' program type that will run in the TX softirq to
> > > >> provide the stack with packets to transmit, and a set of helpers to dequeue
> > > >> packets from the map, optionally drop them, and to schedule an interface
> > > >> for transmission.
> > > >>
> > > >> The new map type is modelled on the PIFO data structure proposed in the
> > > >> literature[0][1]. It represents a priority queue where packets can be
> > > >> enqueued at any priority, but are always dequeued from the head. From the
> > > >> XDP side, the map is simply used as a target for the bpf_redirect_map()
> > > >> helper, where the target index is the desired priority.
> > > >
> > > > I have the same question I asked on the series from Cong:
> > > > Any considerations for existing carousel/edt-like models?
> > >
> > > Well, the reason for the addition in patch 5 (continuously increasing
> > > priorities) is exactly to be able to implement EDT-like behaviour, where
> > > the priority is used as time units to clock out packets.
> >
> > Are you sure? I seriously doubt your patch can do this at all...
> >
> > Since your patch relies on bpf_map_push_elem(), which has no room for a
> > 'key', you reuse 'flags' for it; but you also reserve 4 bits there... How
> > could a tstamp be packed into a field with 4 reserved bits??
> >
> > To answer Stanislav's question, this is how my code could handle EDT:
> >
> > // BPF_CALL_3(bpf_skb_map_push, struct bpf_map *, map, struct sk_buff *, skb, u64, key)
> > skb->tstamp = XXX;
> > bpf_skb_map_push(map, skb, skb->tstamp);
> 
> It is also possible here; if we could not push into the map with a
> certain key, it wouldn't be a PIFO.
> Please look at patch 16/17 for an example (test_xdp_pifo.c); it's just
> that the interface is different (bpf_redirect_map),


Sorry, but I don't care about the XDP case at all. Please let me know
how this works for the eBPF Qdisc case. This is what I found in 16/17:

+ ret = bpf_map_push_elem(&pifo_map, &val, flags);


> the key has been expanded to 64 bits to accommodate such use cases. It
> is also possible in a future version of the patch to amortize the cost
> of taking the lock for each enqueue by doing batching, similar to what
> cpumap/devmap implementations do.

What about the 4 reserved bits?

 ret = bpf_map_push_elem(&pifo_map, &val, flags);

which leads to:

+#define BPF_PIFO_PRIO_MASK	(~0ULL >> 4)
...
+static int pifo_map_push_elem(struct bpf_map *map, void *value, u64 flags)
+{
+	struct bpf_pifo_map *pifo = container_of(map, struct bpf_pifo_map, map);
+	struct bpf_pifo_element *dst;
+	unsigned long irq_flags;
+	u64 prio;
+	int ret;
+
+	/* Check if any of the actual flag bits are set */
+	if (flags & ~BPF_PIFO_PRIO_MASK)
+		return -EINVAL;
+
+	prio = flags & BPF_PIFO_PRIO_MASK;


Please explain how you arrive at a 64-bit key when I count only 60
bits (for the skb case, obviously)?

Wait a second: since BPF_EXIST is already a flag bit, I think you
actually have only 59 bits here...

Thanks!


