Hi,

a few comments

On Wed, 14 Aug 2013, Suman Anna wrote:

> The remoteproc infrastructure is currently tied closely with the
> virtio/rpmsg framework, and the boot requires that there are virtio
> devices present in the resource table from the firmware image.

Using static channels is something that can be added to the existing
code, if you don't want to use dynamic channels.

> The rpmsg shared memory buffers are currently kinda fixed (512 buffers
> of 512 bytes each) and requires 3 pages just for the vring structures
> for this many buffers. So, if there are restrictions on DDR access, then
> this pretty much rules out remoteproc/rpmsg infrastructure.

It should be possible to patch that code to vary the size and count of
the memory buffers, based on the remote processor. So no direct DDR
access should be required - for that reason, anyway.

> If the DDR access is ok, then there are other challenges that need to
> be met. The current firmware definitely requires the addition of the
> resource table and the lower-level code for handling the virtio_ring
> transport for receiving messages. It would also need its own remoteproc
> driver for handling the firmware binary format

Hmm, could you explain this further? Are you just referring to the
process of parsing out the dynamic channel data during initialization?

> and the signalling required to trigger the rpmsg buffer processing. The
> firmware binary format needs to be adapted to something that this driver
> would understand. It definitely doesn't look like ELF currently, so
> something on the lines of ste_modem_rproc needs to be done.

Or just use ELF or static channels.

> Also, the remoteproc/rpmsg infrastructure can support multiple vring
> transport channels between the processors, and depending on how many are
> supported, we either need to exchange the vq_id (like OMAP remoteproc),
> or process the known virtqueues always (like DA8xx remoteproc).
> The former requires that a message payload is used, and mandates the
> usage of the IPC data registers in the control module given that WkupM3
> on AM335 cannot access any mailbox registers. Any usage of IPC data
> registers depends on where we do it. If all the accesses were to be
> done within mach-omap2/control.c, then there is no easy way for using
> this API from drivers/remoteproc, until we have the control module
> driver.

Yep, real SCM drivers have been needed for some time now, for pretty
much all of the OMAP SoCs. It should be pretty easy to prototype for
your purposes, though.

> The current communication uses the IPC data registers, and sometimes
> uses them as plain status registers. There are certain registers used
> for sharing status, version etc. which are shared by both the
> processors. Using rpmsg would require communicating every single
> message, and if there were to be some shared variables to be used
> simultaneously, then this has to be exchanged through a new remoteproc
> resource type.

I don't quite understand this last part - "shared variables to be used
simultaneously". How does the existing code synchronize them?

> One additional aspect is that the current remoteproc core does not have
> the necessary runtime PM support, but in general the approach would be
> to treat the remoteprocs as true slave devices. I would imagine the
> driver core to put the remoteprocs into reset state, after asking them
> to save their context during suspend.

Why is runtime PM support needed in the remoteproc core? Wouldn't that
only be needed in the remote processor's device driver?

- Paul
--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html