On Sat, 2008-12-13 at 13:50 +0100, Bart Van Assche wrote:
> On Sat, Dec 13, 2008 at 1:33 PM, Nicholas A. Bellinger
> <nab@xxxxxxxxxxxxxxx> wrote:
> > The point is that neither you nor Vlad would acknowledge any of the
> > issues on that thread.
>
> What that thread started with is whether or not higher-order
> allocations would help the performance of a storage target. I replied
> that the final argument in any discussion about performance is
> performance measurements. You failed to publish any performance
> numbers in that thread, which is why I stopped replying.
>

This was just one of the items I mentioned that are implemented in
Target_Core_Mod/ConfigFS v3.0 and lacking in SCST Core. The list (which
has not changed) is the following:

<SNIP>

The fundamental limitation I ran into wrt SCST core has to do with
memory allocation (or rather, the lack thereof). The problem is that
for the upstream generic kernel target, the requirements are the
following:

A single codepath memory allocating *AND* mapping for:

I) Every type of device_type

II) Every combination of max_sectors and sector_size with per-PAGE_SIZE
segments and multiple contiguous PAGE_SIZE memory segments

III) Every combination of I and II while receiving a CDB with
sector_count > $STORAGE_OBJECT->max_sectors

IV) Allocating multiple contiguous struct page segments from the memory
allocator for I, II, and III.

So, if we are talking about the first two, both target_core_mod and
SCST core have them (at least I think SCST has #2 with PAGE_SIZE memory
segments). I know that target_core_mod has #3 (because I explicitly
designed it this way), and you have some patch for SCST to do this,
great! However, your algorithms currently assume PAGE_SIZE by default,
which is a problem not just for II, III, and IV above. :-(

V) Accept pre-registered memory segments being passed into
target_core_mod that are then mapped (NOT MEMCPY!!) to struct
scatterlist->page_link, handling all cases (I, II, III, and IV) for
zero-copy DMA with multiple contiguous PAGE_SIZE memory segments (very
fast!!) using a *SINGLE* codepath down to every Linux Storage Subsystem
past, present and future. This is the target_core_mod design that has
existed since 2006.

</SNIP>

So, you are talking about IV) above, which is just one of the items. As
mentioned, the big item for me is V), which means you are going to have
to make some fundamental changes to SCST core to make this work (a
rough sketch of what V) means in practice follows below). As previously
mentioned, these five design requirements have been part of LIO-Target
v2.x and Target_Core_Mod/ConfigFS v3.0 from the start.
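To make V) a bit more concrete, here is a minimal, hypothetical sketch
(not code from target_core_mod or SCST; struct prereg_seg and
map_prereg_to_sgl() are made-up names) of the zero-copy idea using the
standard kernel scatterlist API: the scatterlist entries are pointed
directly at pages the fabric (e.g. an RDMA HCA) has already
pinned/registered, instead of allocating a second buffer and copying
into it.

#include <linux/scatterlist.h>
#include <linux/slab.h>

struct prereg_seg {
	struct page	*page;		/* page registered by the fabric */
	unsigned int	offset;		/* offset of valid data in the page */
	unsigned int	length;		/* length of valid data, <= PAGE_SIZE */
};

static struct scatterlist *map_prereg_to_sgl(struct prereg_seg *segs,
					     unsigned int nr_segs)
{
	struct scatterlist *sgl;
	unsigned int i;

	sgl = kcalloc(nr_segs, sizeof(*sgl), GFP_KERNEL);
	if (!sgl)
		return NULL;

	sg_init_table(sgl, nr_segs);

	/*
	 * No data is copied here: each scatterlist entry simply
	 * references the fabric's page via sg_set_page(), so the
	 * storage backend DMAs straight to/from the memory that was
	 * registered on the fabric side.
	 */
	for (i = 0; i < nr_segs; i++)
		sg_set_page(&sgl[i], segs[i].page, segs[i].length,
			    segs[i].offset);

	return sgl;
}

On top of something like this, handling III) would mean splitting one
CDB's segment list into multiple such scatterlists whenever
sector_count exceeds the backend's max_sectors, still without ever
falling back to a copy.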
> > Let's not even get into how you claimed RDMA meant only userspace
> > ops on virtual memory addresses using a vendor specific API, or
> > that RDMA using virtual addresses would be communicating with
> > drivers/scsi or block/ (which obviously use struct page).
>
> I never claimed that RDMA is only possible from user space -- that
> was a misinterpretation on your side.
>
> I never referred to any vendor specific RDMA API.
>
> But I agree that the following paragraph I cited from Intel's VIA
> architecture document may be misleading:
>
> The VI provider is directly responsible for a number of functions
> normally supplied by the operating system. The VI provider manages
> the protected sharing of the network controller, virtual to physical
> translation of buffer addresses, and the synchronization of completed
> work via interrupts. The VI provider also provides a reliable
> transport service, with the level of reliability depending upon the
> capabilities of the underlying network.
>
> I guess the above paragraph means that RDMA hardware must have
> scatter/gather support.

I have no idea why you keep mentioning Intel's VIA in the context of
RDMA and generic target mode. This API has *NOTHING* to do with a
target mode engine using generic algorithms for zero-copy struct page
mapping from RDMA capable hardware into the Linux/SCSI, Linux/BLOCK or
Linux/VFS subsystems.

--nab

>
> Bart.

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html