Andy,

>> I was not able to find a ready-to-use in-kernel arena allocator (one that
>> can allocate memory within a specified contiguous memory chunk), so
>> I ended up with a free-list implementation. I thought that a
>> full-featured bitmap or slab allocator solely for TCMU is overkill.
>
> A full-featured one I agree is too much. But what about a simple one? We
> have a greater tolerance for false negatives -- failing to allocate space
> even if it's available -- than other allocators, because this just means the
> cmd has to wait until previous commands complete, instead of failure.
>
> If we had a small, fixed-size array in tcmu_cmd to keep track of its data
> area allocation ranges, wouldn't that let us track the ranges it was using
> so we could free them in the bitmap on completion[1]? If the allocation
> required more ranges than the fixed size, we'd just give up and sleep. Worst
> case, once all submitted commands completed we'd know we could satisfy the
> next pending one.
>
> Thoughts on this approach?
>
> [1] basically a copy of the iovec[] userspace sees, but safe from userspace
> clobbering it.

The problem is that the iovec[] size depends on the se_cmd scatterlist length
(basically se_cmd->t_bidi_data_nents + se_cmd->t_data_nents). How big can
these get?

That's why I tried to reuse the scatterlists to keep the data ring allocation
information: I was not quite sure how to keep the information about a
variable-sized scatterlist in a fixed-size tcmu_cmd.

Do you have any ideas?

Max
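
For reference, a minimal sketch of the fixed-size range array idea under
discussion. The names below (TCMU_CMD_MAX_RANGES, struct tcmu_data_range,
udev->data_bitmap, tcmu_cmd_free_data) are assumptions made up for this
illustration, not taken from the current driver:

	/*
	 * Sketch only: a small per-command array of (start, length) block
	 * ranges in the data area, recorded at submission time so that
	 * completion can clear them in a bitmap without trusting the
	 * iovec[] that userspace can clobber. All names are hypothetical.
	 */
	#define TCMU_CMD_MAX_RANGES	4

	struct tcmu_data_range {
		u32 start;	/* first data-area block index */
		u32 nblocks;	/* number of contiguous blocks */
	};

	struct tcmu_cmd {
		/* ... existing fields ... */
		struct tcmu_data_range ranges[TCMU_CMD_MAX_RANGES];
		u32 nr_ranges;
	};

	/* On completion, release each recorded range back to the bitmap. */
	static void tcmu_cmd_free_data(struct tcmu_dev *udev,
				       struct tcmu_cmd *cmd)
	{
		u32 i;

		for (i = 0; i < cmd->nr_ranges; i++)
			bitmap_clear(udev->data_bitmap,
				     cmd->ranges[i].start,
				     cmd->ranges[i].nblocks);
		cmd->nr_ranges = 0;
	}

If an allocation would need more than TCMU_CMD_MAX_RANGES ranges, the
command would simply wait, as suggested above; the open question is whether
a small fixed bound works when the range count is driven by
se_cmd->t_data_nents + se_cmd->t_bidi_data_nents.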