On 18 August 2018 at 05:24, Oliver Hartkopp <socketcan@xxxxxxxxxxxx> wrote:

> Maybe this is something that can be added to docker, that you can create
> vcan's and can-gw rules at startup without urging the docker instance to
> have root capabilities.

Yes and no; my goal is a bit different. I want something generic:
- create virtual CAN busses, as many as you want, the same way you create
  docker networks
- create docker containers, as many as you want
- connect them together with whatever topology you fancy

See below for more details. I think I've achieved my goals (so far so
good). My main "issue" was the use of can-gw as a replacement for the
missing "CAN Linux bridge". According to your explanation, that is the
only way to go, for now.

> The problem is:
>
> can's and vcan's are available in ONE namespace only
> vxcan's endpoints are visible in exactly TWO namespaces
>
> Currently having multiple can-gw rules (in the root namespace)
> interconnecting vxcan instances pointing into different namespaces seems to
> be the only solution.
>
> If you only have ONE CAN application in your namespace you might also let
> your application use the vxcan directly (without a can-gw crossrouting the
> traffic to another 'namespace local' vcan).
>
> The vxcan has no local echo. But if you only have one application nobody
> cares about the missing echo, right? ;-)
>
> Does that fit your use-case?

It fits the simple use cases, but I want to offer a completely generic
solution. The way we do our automated tests is with BDD (e.g.
https://behave.readthedocs.io/en/latest/). It is up to the test writer to
"instantiate" as many virtual devices and virtual CAN busses as needed,
and then interconnect them the way they need. We have 'gateway' devices
that sit between J1939 (boat engines) and NMEA2000, and 'console' devices
that are connected to N2K only.
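For reference, the vxcan + can-gw wiring described above can be sketched
roughly as follows. This is only an illustration of the idea, not the
plugin's actual code; the namespace and interface names (ns0, vxcan0,
vxcan0p, vcan0) are made up, and the commands need root plus a kernel
with the vxcan and can-gw modules available:

```shell
# Shared virtual bus in the root namespace
ip link add vcan0 type vcan
ip link set vcan0 up

# One vxcan pair per container/namespace; move the peer end into it
ip netns add ns0
ip link add vxcan0 type vxcan peer name vxcan0p
ip link set vxcan0p netns ns0
ip link set vxcan0 up
ip netns exec ns0 ip link set vxcan0p up

# can-gw rules in the root namespace cross-route traffic both ways
# between the local vxcan endpoint and the shared bus (-e echoes sent
# frames, recommended on vcan devices)
cangw -A -s vxcan0 -d vcan0 -e
cangw -A -s vcan0 -d vxcan0 -e
```

Repeating the pair/rule setup per container gives the "as many busses and
containers as you want" topology.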
The goal is that every single physical device is represented, during
automated tests, by a self-contained docker container. I would like to
containerise our simulators too (devices that generate CAN messages,
e.g. engine, autopilot, gps, ...).

>> My approach so far was to manually deploy and start the plugin
>> locally, but apparently, "native" plugins are just a sort of
>> degenerated docker image.
>
> Ooookaay - did not understand the details o_O
>
> But looking forward to the next steps anyway ;-)

Basically, I've just implemented a "managed" docker network plugin,
instead of the old "hackish" legacy plugin. It greatly simplifies the
user's life. See https://docs.docker.com/engine/extend/ for details.

Feel free to try it:

  sudo modprobe can-gw  # won't auto-load
  docker plugin install chgans/can4docker
  docker plugin ls
  docker plugin enable chgans/can4docker
  docker network create --driver chgans/can4docker canbus0
  docker run -d -it --rm --name ecu0 ubuntu cat
  docker run -d -it --rm --name ecu1 ubuntu cat
  docker network connect canbus0 ecu0
  docker network connect canbus0 ecu1

Console 1:

  docker exec -it ecu0 sh -c 'apt update --yes && apt install --yes can-utils iproute2'
  docker exec -it ecu0 ip link
  docker exec -it ecu0 cangen vcan123456

Console 2:

  docker exec -it ecu1 sh -c 'apt update --yes && apt install --yes can-utils iproute2'
  docker exec -it ecu1 ip link
  docker exec -it ecu1 candump vcan789abc

The key here is that, as a user, you do not need to do any manual
installation: "docker plugin install chgans/can4docker" gives you instant
CAN connectivity, as long as you're running a recent enough kernel and
have loaded can-gw by hand. I had to upgrade from KUbuntu 16.04 to 18.04
for this to work (kernel version).

I still need to clean up my code, write docs, tests, ...

The above will eventually land on
https://can4docker.readthedocs.io/en/latest/usage.html

Chris