Re: Docker containers, vxcan and cangw

Hi Chris,

On 08/15/2018 04:02 AM, Christian Gagneraud wrote:
On 15 August 2018 at 08:00, Oliver Hartkopp <socketcan@xxxxxxxxxxxx> wrote:

I created some slides for AGL this April:
https://wiki.automotivelinux.org/agl-distro/apr2018-f2f
https://wiki.automotivelinux.org/_media/agl-distro/agl2018-socketcan.pdf

Glad to talk to the author of these slides - agl2018-socketcan.pdf is
what triggered my attempt at writing a Docker plugin! ;)
It's basically the only useful information you'll find on the internet
if you look for vxcan.

:-)

In fact, I was also urged to learn about net namespaces when Mario posted this patch: https://marc.info/?l=linux-can&m=148767639224547&w=2

And as it did not work out to add veth-like namespace support inside vcan, vxcan became necessary ...
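
For illustration, a vxcan tunnel pair can be created much like a veth pair (interface names here are just examples):

# ip link add vxcan0 type vxcan peer name vxcan1

One end stays in the current namespace, the peer end can then be moved into another namespace, and CAN frames sent on one end show up on the other.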

IMO the setup depends on the use case: you are also able to move a 'real'
CAN interface into the Docker container, which removes it from the root
namespace - an interesting way to encapsulate the CAN access inside the
Linux host. Additionally, vcan's can be created inside the Docker
containers.

Yes, vcan can be used inside a container, but you need to use
--network=host, which is a no-go for me.

Really? I think using vcan's from the root/init/global/default/host namespace is not a sensible option, right?

AFAICS you never need --network=host for using CAN inside Docker.

I guess to be able to create
vcan interfaces from within a container you need to run in privileged
mode, which I'm not that keen on.

You can either create vcan's inside that namespace, or you can create a vcan in the init namespace and move it to the (docker) namespace by assigning that CAN interface to the namespace. The same as I suggested on slide 20 with real CAN interfaces - see the sketch below.
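
As a rough sketch - assuming a running container and using its PID to reference the namespace (container and interface names are just examples):

# DOCKERPID=$(docker inspect -f '{{.State.Pid}}' mycontainer)
# ip link add vcan0 type vcan
# ip link set vcan0 netns $DOCKERPID
# nsenter -t $DOCKERPID -n ip link set vcan0 up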

https://marc.info/?l=linux-can&m=149046502301622&w=2

---8<--- snip!

and moved my already existing vcan0 virtual CAN interface to the
namespace 'blue':

# ip netns add blue
# ip link set dev vcan0 netns blue

From now on, vcan0 is not visible in 'ip link show' anymore.

But it is visible in the namespace 'blue':

# ip netns exec blue ip link list
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
6: vcan0: <NOARP> mtu 72 qdisc noop state DOWN mode DEFAULT group default qlen 1000
     link/can

---8<--- snip!

The same can be done with a real 'can0' CAN interface by
# ip link set dev can0 netns blue
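
Of course a real CAN interface still needs to be configured and brought up inside the namespace, for example (the bitrate is just an example):

# ip netns exec blue ip link set can0 type can bitrate 500000
# ip netns exec blue ip link set can0 up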

What kind of use-case do you have in mind that you need to link different
namespaces/containers with vxcan?

Yep, sorry, I completely forgot to give some background, so here we go:

I'm working for Navico [0], and we manufacture various products for the
marine industry. We use CAN in various ways, either in NMEA2000, J1939
or SmartCraft "mode".
Most of our products run embedded Linux, and we have recently
introduced automated testing by running this software on x86 Linux,
and we do that in Docker containers: one "app", one container.
Sometimes we want to control what is seen by an application under
test (input test vectors), but I would like to be able to run
simulations and/or tests using "real world" setups too, setups where
there is more than one device on the bus. And in our 'simulated'
case, that means several Docker containers (ideally, because that's how they
are 'released').

Nice setup :-)

As stated above, I would prefer not to run in network=host mode,
because our devices have Ethernet connectivity too, and I want to
fully control the Ethernet network connectivity (including how many
interfaces, addressing, ...).

As stated above, network=host should not be necessary.

I then found your slides 'agl2018-socketcan.pdf', and was so excited.
The only problem is that there were no technical details (in terms of
Linux command lines), so at first I was just contemplating, wondering how
the F* I could make it work with Docker. It was only after reading a
blog [1] about veth bridges and namespaces (basically explaining the
technical details of Docker networking) that I made the link (pun
intended). Basically, I've just discovered how to use Linux namespaces
at a low level! :)

I know what you're talking about :)

As I was familiar enough with the Linux veth bridge and the "virtual
patch cables" stuff, I decided to give it a go with Docker and CAN.
I first got a shell-based PoC working (sketched below), and then implemented a Docker
plugin based on information from pyvolume [2] (I wanted to implement
it in Python rather than Go).
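
Roughly, the shell-based PoC boils down to something like this per container - the names are placeholders and the exact cangw options depend on the setup:

# DOCKERPID=$(docker inspect -f '{{.State.Pid}}' app1)
# ip link add vxcan_h1 type vxcan peer name vxcan_c1
# ip link set vxcan_c1 netns $DOCKERPID
# ip link set vxcan_h1 up
# nsenter -t $DOCKERPID -n ip link set vxcan_c1 up
# cangw -A -s vxcan_h1 -d vcan0 -e
# cangw -A -s vcan0 -d vxcan_h1 -e

with vcan0 (or a real can0) on the host acting as the shared "bus" that all the host-side vxcan ends are gatewayed to.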

To my surprise, it just worked! The biggest surprise was that Docker
didn't complain about the non-IP nature of the vxcan network interfaces.

Good to know. I've done nothing with Docker so far - but I know some colleagues who are thinking about a similar test setup too.

Best regards,
Oliver

[0] https://navico.com/
[1] http://www.opencloudblog.com/?p=66
[2] https://github.com/ronin13/pyvolume



