On 02/28/2017 07:14 PM, James Smart wrote:
>
> On 2/28/2017 8:34 AM, Hannes Reinecke wrote:
>> Can you clarify these?
>> Are these 'just' resource allocation problems or something else, too?
>
> Most are resource allocation - buffer pools, dma pools, pages for
> resources, and hw resource allocation splits. However, async receive
> RQ policies are a case where initiator and target have to share the
> policy and perhaps a buffer pool, so they have to be careful. Another
> area is ABTS handling. Initiators typically leave things up to the
> hardware and do little if any ABTS handling. Most targets, though,
> have to handle ABTS's themselves, as a CMD IU may be received without
> an assigned exchange context and be buffered until the target is
> ready to do something with it. Some of the target features, when
> enabled, dictate host ownership of ABTS policy. So, if running I+T,
> it gets rather tricky.
>
Ah, okay.

However, I still would favour having both integrated into the same
driver; otherwise we run into the tricky issue of having to manually
unbind drivers from a given PCI device (or declare some PCI devices
as target-capable in a rather arbitrary manner).

Can't it be modelled like the NVMe-FC target, with the same driver
supporting all modes, but the admin having to decide which PCI
function is doing what? (See the sketches appended below the
signature.)

(NB: What about SR-IOV? Are the resources shared across functions, or
does each function have its own set? If so, then I'd be perfectly
happy if we could set each _function_ to a given role, much like
VirtualConnect does nowadays.)

Cheers,

Hannes
--
Dr. Hannes Reinecke                   zSeries & Storage
hare@xxxxxxx                          +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)
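
A minimal sketch of the buffer-pool sharing James describes: a single
dma_pool backing the async receive (RQ) buffers for both the initiator
and target paths, so that one role's consumption is visible to the
other. All names here (it_adapter, it_create_shared_pools,
IT_RQ_BUF_SZ) are hypothetical, not taken from any actual driver.

	#include <linux/dmapool.h>
	#include <linux/pci.h>

	#define IT_RQ_BUF_SZ	2048	/* assumed async receive buffer size */

	struct it_adapter {
		struct pci_dev *pdev;
		struct dma_pool *rq_pool;  /* shared by initiator and target */
		bool target_enabled;       /* target owns ABTS policy if set */
	};

	static int it_create_shared_pools(struct it_adapter *adap)
	{
		/*
		 * One pool for async receive queue buffers; both roles
		 * allocate from it, so the RQ policy and buffer accounting
		 * stay consistent when running I+T on the same function.
		 */
		adap->rq_pool = dma_pool_create("it_rq_pool", &adap->pdev->dev,
						IT_RQ_BUF_SZ, IT_RQ_BUF_SZ, 0);
		if (!adap->rq_pool)
			return -ENOMEM;
		return 0;
	}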
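
And a hypothetical sketch of the per-function role selection suggested
above: the same driver binds every PCI function, and the admin assigns
each function a role via a module parameter (a per-device sysfs
attribute would do just as well). The parameter name and the enum are
invented for illustration.

	#include <linux/module.h>
	#include <linux/moduleparam.h>
	#include <linux/string.h>

	enum it_role { IT_ROLE_INITIATOR, IT_ROLE_TARGET, IT_ROLE_BOTH };

	/* e.g. role=initiator,target,both - one entry per function, in probe order */
	static char *role[8];
	static int role_cnt;
	module_param_array(role, charp, &role_cnt, 0444);
	MODULE_PARM_DESC(role, "Per-function role: initiator, target, or both");

	static enum it_role it_parse_role(int fn_idx)
	{
		/* functions with no entry default to the classic initiator role */
		if (fn_idx >= role_cnt || !role[fn_idx])
			return IT_ROLE_INITIATOR;
		if (!strcmp(role[fn_idx], "target"))
			return IT_ROLE_TARGET;
		if (!strcmp(role[fn_idx], "both"))
			return IT_ROLE_BOTH;
		return IT_ROLE_INITIATOR;
	}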