On 10/11/21 5:35 AM, Tom Rix wrote:
>
> On 10/10/21 6:41 PM, Xu Yilun wrote:
>> On Sat, Oct 09, 2021 at 05:11:20AM -0700, Tom Rix wrote:
>>> On 10/9/21 1:08 AM, Xu Yilun wrote:
>>>> On Wed, Sep 29, 2021 at 04:00:20PM -0700, Russ Weight wrote:
>>>>> The FPGA Image Load framework provides an API to upload image
>>>>> files to an FPGA device. Image files are self-describing. They could
>>>>> contain FPGA images, BMC images, Root Entry Hashes, or other device
>>>>> specific files. It is up to the lower-level device driver and the
>>>>> target device to authenticate and disposition the file data.
>>>> I've reconsidered the FPGA persistent image update again, and I think we
>>>> may include it in the FPGA manager framework.
>>>>
>>>> Sorry to raise this topic again when it is already at patch v17, but now
>>>> I need to consider it more seriously than before.
>>>>
>>>> We have consensus that the FPGA persistent image update is just like a
>>>> normal firmware update which finally writes nvmem such as flash or
>>>> eeprom, while the current FPGA manager deals with the active FPGA region
>>>> update and re-activation. Could we just expand the FPGA manager and let
>>>> it handle the nvmem update as well? Many FPGA cards have nvmem, and
>>>> their downloaders support updating both the FPGA region and nvmem.

The fpga-image-load driver is actually just a data transfer. The class
driver has no knowledge about what the data is or where/if the data will
be stored. This functionality could, of course, be merged into the
fpga-mgr. I did a proof of concept of this a while back and we discussed
the pros and cons. See this email for a recap:

https://marc.info/?l=linux-fpga&m=161998085507374&w=2

Things have changed somewhat with the evolution of the driver. The IOCTL
approach probably fits better than the sysfs implementation. At the time
it seemed that a merge would add unnecessary complexity without adding
value.

>>>>
>>>> According to the patchset, the basic workflows of the 2 update types
>>>> are quite similar: get the data, prepare the HW, write, and complete.
>>>> They are already implemented in the FPGA manager. We've discussed some
>>>> differences like threading or canceling the update, which are not
>>>> provided by the FPGA manager but may also be nice to have for FPGA
>>>> region updates. An FPGA region update may also last for a long time.
>>>> So I think having 2 sets of similar frameworks in FPGA is unnecessary.
>>>>
>>>> My quick thought is that we add some flags in struct fpga_mgr & struct
>>>> fpga_image_info to indicate the HW capability (support FPGA region
>>>> update or nvmem update or both) of the download engine and the provided
>>>> image type. Then the low-level driver knows how to download if it
>>>> supports both image types. A char device could be added for each fpga
>>>> manager dev, providing the user APIs for nvmem update. We may not use
>>>> the char dev for FPGA region update because it changes the system HW
>>>> devices and needs device hotplug in the FPGA region. We'd better leave
>>>> it to the FPGA region class; that is another topic.

I'll give this some more thought and see if I can come up with some RFC
patches.

- Russ

>>>>
>>>> More discussion is appreciated.
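For illustration, a minimal sketch of what the flag idea above might look
like at the fpga-mgr level. FPGA_MGR_NVMEM_UPDATE and the example_* helpers
are placeholder names invented for this sketch, not existing definitions in
include/linux/fpga/fpga-mgr.h:

#include <linux/bits.h>
#include <linux/fpga/fpga-mgr.h>

/*
 * Hypothetical image flag, set when the image targets persistent nvmem
 * (flash/eeprom) rather than the live FPGA region.  The bit value would
 * need to be the next free bit above the existing FPGA_MGR_* image flags.
 */
#define FPGA_MGR_NVMEM_UPDATE	BIT(5)

/* Hypothetical driver-specific helpers, named here only for the sketch. */
static int example_prepare_nvmem_write(struct fpga_manager *mgr);
static int example_prepare_region_reconfig(struct fpga_manager *mgr);

static int example_write_init(struct fpga_manager *mgr,
			      struct fpga_image_info *info,
			      const char *buf, size_t count)
{
	/*
	 * A low-level driver that supports both image types can branch on
	 * the flag carried in fpga_image_info and reuse the rest of the
	 * existing write_init/write/write_complete flow.
	 */
	if (info->flags & FPGA_MGR_NVMEM_UPDATE)
		return example_prepare_nvmem_write(mgr);

	return example_prepare_region_reconfig(mgr);
}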
>>> I also think fpga_mgr could be extended.
>>>
>>> In this patchset,
>>>
>>> https://lore.kernel.org/linux-fpga/20210625195849.837976-1-trix@xxxxxxxxxx/
>>>
>>> A second, similar set of write ops was added to fpga_manager_ops,
>>>
>>> and a new bit/flag was added to fpga_image_info.
>>>
>>> The intent was for dfl to add their specific ops to cover what is done
>>> in this patchset.
>> I think we don't have to add 2 ops for reconfig & reimage in the
>> framework; the 2 processes are almost the same.
>>
>> Just add the _REIMAGE (or something else, NVMEM?) flag for
>> fpga_image_info, and low-level drivers handle it as they do for other
>> flags.
>>
>> What do you think?
>
> A single set is fine.
>
> A difficult part is the length of time needed to do the write. The
> existing write should be improved to use a worker thread.
>
> Tom
>
>>
>> Thanks,
>> Yilun
>>
>>> Any other driver would do something similar.
>>>
>>> Is this close to what you are thinking?
>>>
>>> Tom
>>>
>>>> Thanks,
>>>> Yilun
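As a footnote on the worker-thread suggestion above, a minimal sketch of
deferring the long-running write with the standard workqueue API; struct
example_update_ctx and the example_* functions are hypothetical and not
taken from any existing driver:

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/fpga/fpga-mgr.h>

/* Hypothetical per-update context; field names are made up for the sketch. */
struct example_update_ctx {
	struct work_struct work;
	struct fpga_manager *mgr;
	const char *buf;
	size_t count;
	bool cancel_requested;	/* polled by the worker so an update can be canceled */
};

static void example_update_worker(struct work_struct *work)
{
	struct example_update_ctx *ctx =
		container_of(work, struct example_update_ctx, work);

	dev_info(&ctx->mgr->dev, "starting nvmem update of %zu bytes\n",
		 ctx->count);

	/*
	 * The slow nvmem/flash write would run here instead of blocking the
	 * caller, checking ctx->cancel_requested periodically and reporting
	 * progress/completion back through whatever status mechanism the
	 * class driver exposes.
	 */
}

static void example_start_update(struct example_update_ctx *ctx)
{
	INIT_WORK(&ctx->work, example_update_worker);
	/* Returns immediately; the update continues asynchronously. */
	schedule_work(&ctx->work);
}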