Re: [LSF/MM TOPIC/ATTEND] RDMA passive target

Hey Boaz,

RDMA passive target
~~~~~~~~~~~~~~~~~~~

The idea is to have a storage brick that exports a very
low-level, pure RDMA API to access its memory-based storage.
The brick might be battery-backed volatile memory, or
pmem based. In either case the brick might offer a much higher
capacity than memory by "tiering" to slower media, which
is enabled by the API.

The API is simple (a rough C sketch follows the list):

1. Alloc_2M_block_at_virtual_address (ADDR_64_BIT)
    ADDR_64_BIT is any virtual address and defines the logical ID of the block.
    If the ID is already allocated an error is returned.
    If storage is exhausted return => ENOSPC
2. Free_2M_block_at_virtual_address (ADDR_64_BIT)
    Space for logical ID is returned to free store and the ID becomes free for
    a new allocation.
3. map_virtual_address(ADDR_64_BIT, flags) => RDMA handle
    The previously allocated virtual address is locked in memory and an RDMA
    handle is returned.
    Flags: read-only, read-write, shared and so on...
4. unmap_virtual_address(ADDR_64_BIT)
    At this point the brick can write data to slower storage if memory space
    is needed. The RDMA handle from [3] is revoked.
5. List_mapped_IDs
    An extent based list of all allocated ranges. (This is usually used on
    mount or after a crash)
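
For concreteness, a rough C binding for the above could look like the
following. All names, types and the handle representation are made up
here; only the semantics come from the list above.

/* Hypothetical C binding for the five-call brick API sketched above.
 * All identifiers are illustrative, not a proposed ABI.
 */
#include <stdint.h>

#define BRICK_BLOCK_SIZE (2ULL * 1024 * 1024)  /* fixed 2M granularity */

typedef uint64_t brick_addr_t;   /* 64-bit virtual address == logical block ID */
typedef uint64_t brick_handle_t; /* opaque RDMA handle (e.g. rkey + offset) */

enum brick_map_flags {
	BRICK_MAP_RO     = 1 << 0,   /* read-only mapping */
	BRICK_MAP_RW     = 1 << 1,   /* read-write mapping */
	BRICK_MAP_SHARED = 1 << 2,   /* shared between clients */
};

/* 1. Allocate the 2M block whose logical ID is 'addr'.
 *    -EEXIST if the ID is already allocated, -ENOSPC if storage is
 *    exhausted.
 */
int brick_alloc_2m(brick_addr_t addr);

/* 2. Return the block to the free store; the ID may be reallocated. */
int brick_free_2m(brick_addr_t addr);

/* 3. Lock the block in memory and return an RDMA handle for it. */
int brick_map(brick_addr_t addr, unsigned int flags, brick_handle_t *out);

/* 4. Revoke the handle; the brick may now tier the block to slower media. */
int brick_unmap(brick_addr_t addr);

/* 5. Enumerate allocated ranges as extents (used on mount / after crash). */
struct brick_extent {
	brick_addr_t start;    /* first logical ID in the extent */
	uint64_t     nblocks;  /* number of consecutive 2M blocks */
};
int brick_list_mapped(struct brick_extent *ext, unsigned int max,
		      unsigned int *count);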

My understanding is that you're describing a wire protocol, correct?

The dumb brick is not the network allocator / storage manager at all, and it
is not a smart target / server like an iSER target or a pNFS DS. A SW-defined
application can do that, on top of the dumb brick. The motivation is a
low-level, very low latency API+library, which can be built upon for higher
protocols or used directly for a very low latency cluster.
It does however manage a virtual allocation map of logical-to-physical
mappings of the 2M blocks (a sketch follows).
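
(A sketch of what that map could look like on the brick side; every
name below is invented, and a real brick would also persist it:)

/* Brick-side logical-to-physical map for the 2M blocks: an in-memory
 * chained hash keyed by the 64-bit logical ID. Illustrative only.
 */
#include <stdint.h>
#include <stddef.h>

struct brick_map_entry {
	uint64_t logical_id;           /* caller-chosen virtual address / ID */
	uint64_t phys_block;           /* index of the backing physical 2M block */
	uint8_t  pinned;               /* nonzero while an RDMA handle exists */
	struct brick_map_entry *next;  /* hash-chain link */
};

#define MAP_BUCKETS 4096

struct brick_map {
	struct brick_map_entry *buckets[MAP_BUCKETS];
};

static inline unsigned int map_hash(uint64_t id)
{
	return (unsigned int)((id >> 21) % MAP_BUCKETS); /* IDs are 2M-granular */
}

static struct brick_map_entry *map_lookup(struct brick_map *m, uint64_t id)
{
	struct brick_map_entry *e = m->buckets[map_hash(id)];

	while (e && e->logical_id != id)
		e = e->next;
	return e;
}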

The challenge in my mind would be to have persistence semantics in
place.
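
Roughly speaking, an RDMA WRITE completion at the initiator only says
the data reached the target side, not that it landed on durable media.
A target-side flush handler could look something like the sketch
below; pmem_persist() is the real libpmem call, but the message
format is made up:

/* Target-side flush sketch: after the initiator RDMA-writes into a
 * mapped 2M block it sends a small FLUSH message; the target makes
 * the range durable before acking.
 */
#include <libpmem.h>
#include <stdint.h>

struct flush_req {
	uint64_t logical_id;  /* which 2M block */
	uint64_t offset;      /* byte offset inside the block */
	uint64_t len;         /* bytes to persist */
};

static int handle_flush(void *block_base, const struct flush_req *req)
{
	if (req->offset + req->len > (2ULL << 20))
		return -1;    /* range exceeds the 2M block */

	/* CLWB/CLFLUSHOPT + fence over the written range, so the data
	 * is durable before we ack the initiator.
	 */
	pmem_persist((char *)block_base + req->offset, req->len);
	return 0;             /* ack -> initiator may trust persistence */
}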


Currently both drivers, initiator and target, are in the kernel, but with the
latest advancements by Dan Williams this could be implemented in user mode as
well. Almost.

The "almost" is because:
1. If the target is over a /dev/pmemX then all is fine; we have 2M contiguous
    memory blocks.
2. If the target is over an FS, we have a proposal pending for a falloc_2M flag
    to ask the FS for contiguous 2M allocations only. If any of the 2M
    allocations fails, falloc returns ENOSPC. This way we guarantee that each
    2M block can be mapped by a single RDMA handle (see the sketch after this
    list).
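
For what it's worth, that path might look like this from user space;
FALLOC_FL_2M_CONTIG below stands in for the pending proposal and does
not exist in any kernel, while fallocate(2) itself is standard:

/* Preallocate one 2M block of the brick's backing file. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <errno.h>

#define FALLOC_FL_2M_CONTIG 0x100  /* hypothetical flag from the proposal:
				      fail unless the extent is physically
				      contiguous */

static int alloc_block_on_fs(int fd, off_t block_off)
{
	/* Ask the FS for one physically contiguous 2M extent, so the
	 * block stays mappable by a single RDMA handle. Per the
	 * proposal, ENOSPC means contiguity could not be satisfied.
	 */
	if (fallocate(fd, FALLOC_FL_2M_CONTIG, block_off, 2UL << 20) < 0)
		return -errno;
	return 0;
}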

Umm, you don't need the 2M to be physically contiguous in order to represent
them as a single RDMA handle. If that were true, iSER would never have worked.
Or I misunderstood what you meant...
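
For reference, registering plain (physically scattered) memory as a
single handle is the normal verbs flow; everything below is standard
libibverbs, only the buffer is made up:

/* One ibv_reg_mr() call covers the whole 2M range even though the
 * pages behind it may be scattered; the NIC walks its own
 * translation table.
 */
#include <infiniband/verbs.h>
#include <stdlib.h>

static struct ibv_mr *register_2m(struct ibv_pd *pd)
{
	size_t len = 2UL << 20;
	void *buf = aligned_alloc(4096, len);  /* plain heap memory */
	struct ibv_mr *mr;

	if (!buf)
		return NULL;

	mr = ibv_reg_mr(pd, buf, len,
			IBV_ACCESS_LOCAL_WRITE |
			IBV_ACCESS_REMOTE_READ |
			IBV_ACCESS_REMOTE_WRITE);
	if (!mr)
		free(buf);
	return mr;
}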


