Re: Docker containers, vxcan and cangw

On 15 August 2018 at 08:00, Oliver Hartkopp <socketcan@xxxxxxxxxxxx> wrote:
> Hi Christian,
>
>
> On 08/14/2018 04:34 AM, Christian Gagneraud wrote:
>
>> I'm working on a Docker plugin that allows connecting Docker containers
>> via a virtual CAN bus, see [1] (this is very experimental so far).
>> Because Linux doesn't offer the equivalent of a bridge for CAN, I'm
>> using cangw to achieve the same result.
>>
>>
>>        +-------+
>>        |       |                      +-------+
>>        |       |>vxcan0.1----vxcan0.0<| cont1 |
>> vcan0<| CANGW |                      +-------+
>>        |       |                      +-------+
>>        |       |>vxcan1.1----vxcan1.0<| cont2 |
>>        +-------+                      +-------+
>>
>>
>> $ docker network create --driver vxcan canbus0
>> $ docker run -d -it --name ecu0 ubuntu-canutils cat
>> $ docker run -d -it --name ecu1 ubuntu-canutils cat
>> $ docker network connect canbus0 ecu0
>> $ docker network connect canbus0 ecu1
>> $ docker exec -it ecu0 candump vcanXXX
>> $ docker exec -it ecu1 cangen vcanYYY
>>
>> It just works.
>>
>> My only concern is about cangw: I'm not using any filtering, I'm just
>> interconnecting all peers together.
>> Here is how I'm doing it:
>> -----------------------------------------------------
>> def attach_endpoint(self, endpoint_id, namespace_id):
>>      endpoint = self.endpoints[endpoint_id]
>>      endpoint.attach(namespace_id)
>>      for other_id, other in self.endpoints.items():
>>          if other_id != endpoint_id:
>>              self.gateway.add_rule(other.if_name, endpoint.if_name)
>>              self.gateway.add_rule(endpoint.if_name, other.if_name)
>> -----------------------------------------------------
>>
>> As you can see, this creates N*(N-1) rules, which doesn't scale well,
>> but that is not a requirement for now.
>>
>> I'm just wondering if this is the right approach, or if there is a
>> simpler and/or more elegant way to achieve the same result.
>
>
> Nice work!

Thanks, it's very much in a quick and dirty state for now...
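
For what it's worth, here is a small self-contained sketch of the full-mesh rule generation quoted above. It only builds the cangw command strings instead of talking to netlink, and the interface names are illustrative:

```python
# Sketch of the N*(N-1) full-mesh cangw wiring (illustrative only:
# it generates the command lines rather than invoking cangw).
from itertools import permutations

def mesh_rules(ifaces):
    """Return one (src, dst) rule per ordered pair of interfaces."""
    return list(permutations(ifaces, 2))

def cangw_commands(ifaces):
    # -A adds a rule; -e echoes routed frames (recommended on virtual ifaces)
    return ["cangw -A -s %s -d %s -e" % (src, dst)
            for src, dst in mesh_rules(ifaces)]

if __name__ == "__main__":
    for cmd in cangw_commands(["vxcan0.1", "vxcan1.1", "vxcan2.1"]):
        print(cmd)
```

With 3 interfaces this prints 6 rules, i.e. N*(N-1) as discussed.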

> I created some slides for AGL this April:
> https://wiki.automotivelinux.org/agl-distro/apr2018-f2f
> https://wiki.automotivelinux.org/_media/agl-distro/agl2018-socketcan.pdf

Glad to talk to the author of these slides, agl2018-socketcan.pdf is
what triggered my attempt at writing a docker plugin! ;)
It's basically the only useful information you'll find on the internet
if you look for vxcan.

> ... which also uses can-gw to fit the various use-cases.

Yes, I realised that your use-case is different: first, you use
namespaces without a heavy framework such as Docker, and then you use
cangw as a security component.

> To be similar to veth, vxcan provides just an interconnection between
> namespaces, without the 'CAN frame loopback' we know from vcan's.
>
> IMO the setup depends on the use-case, in the sense that you are also able
> to move a 'real' CAN interface into the docker container, which removes it
> from the root namespace - an interesting move to encapsulate the CAN access
> inside the Linux host. Additionally, vcan's can be created inside the
> docker containers.

Yes, a vcan can be used inside a container, but you need to use
--network=host, which is a no-go for me. I guess that to be able to
create vcan interfaces from within a container you need to run in
privileged mode, which I'm not that keen on.
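
For what it's worth, I believe a vcan can be created from inside a container without full --privileged, as long as the vcan module is loaded on the host and the container gets CAP_NET_ADMIN (a sketch; the image name is illustrative):

```shell
# On the host: make the vcan driver available to the kernel
sudo modprobe vcan

# Run the container with CAP_NET_ADMIN instead of full --privileged
docker run --rm -it --cap-add=NET_ADMIN ubuntu-canutils bash

# Inside the container: create and bring up a vcan interface
ip link add dev vcan0 type vcan
ip link set vcan0 up
```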

> What kind of use-case do you have in mind that you need to link different
> namespaces/containers with vxcan?

Yep, sorry, I completely forgot to give some background, so here we go:

I'm working for Navico [0], and we manufacture various products for the
marine industry. We use CAN in various ways, either in NMEA2000, J1939
or SmartCraft "mode".
Most of our products run embedded Linux, and we have recently
introduced automated testing by running this software on x86 Linux,
and we do that in Docker containers: one "app", one container.
Sometimes we want to control what is seen by an application under
test (input test vectors), but I would like to be able to run
simulations and/or tests using "real world" setups too, setups where
there is more than one device on the bus. In our 'simulated' case,
that means several Docker containers (ideally, because that's how
they are 'released').
As stated above, I would prefer not to run in network=host mode,
because our devices have Ethernet connectivity too, and I want to
fully control the Ethernet network connectivity (including the number
of interfaces, addressing, ...).

I then found your slides 'agl2018-socketcan.pdf', and was so excited.
The only problem is that there were no technical details (in terms of
Linux command lines), so at first I was left contemplating, wondering
how the F* I could make it work with Docker. It was only after reading
a blog post [1] about veth bridges and namespaces (basically explaining
the technical details of Docker networking) that I made the link (pun
intended). Basically, I had just discovered how to use Linux namespaces
at a low level! :)
As I was familiar enough with the Linux veth bridge and the "virtual
patch cables" stuff, I decided to give it a go with Docker and CAN.
I first got a shell-based PoC working, and then implemented a Docker
plugin based on information from pyvolume [2] (I wanted to implement
it in Python rather than Go).
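
For the record, the shell-based PoC boiled down to something like the following (a sketch from memory; container and interface names are illustrative, everything needs root, and the interface names match the diagram above):

```shell
# Make the running container's network namespace visible to `ip netns`
pid=$(docker inspect -f '{{.State.Pid}}' ecu0)
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$pid/ns/net /var/run/netns/ecu0

# Create a vxcan pair and move one end into the container
sudo ip link add vxcan0.1 type vxcan peer name vxcan0.0
sudo ip link set vxcan0.0 netns ecu0
sudo ip netns exec ecu0 ip link set vxcan0.0 up
sudo ip link set vxcan0.1 up

# Route frames both ways between the host-side vcan0 and the container
sudo cangw -A -s vcan0 -d vxcan0.1 -e
sudo cangw -A -s vxcan0.1 -d vcan0 -e
```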

To my surprise, it just worked! The biggest surprise was that Docker
didn't complain about the non-IP nature of the vxcan network interfaces.

Basically, none of the above would have been possible without your
agl2018-socketcan.pdf slides!
Thank you so much.

Chris

[0] https://navico.com/
[1] http://www.opencloudblog.com/?p=66
[2] https://github.com/ronin13/pyvolume


