Ceph RDMA Update

Hi Haomai,
   I read your presentation below:
   Topic: CEPH RDMA UPDATE
   Link: https://www.openfabrics.org/images/eventpresos/2017presentations/103_Ceph_HWang.pdf

   I'd like to discuss the items on page 17:
   I. Work in Progress:
      1. RDMA-CM for control path
         [Changcheng]:
            Do you also prefer that we use RDMA-CM for connection management? (See the RDMA-CM sketch at the end of section I.)
         1) Support multiple devices
         [Changcheng]:
            Do you mean separating the public & cluster networks and running RDMA on both of them?
            Currently, Ceph can work over RDMA with either of the solutions below:
              a. Make no distinction between the public & cluster networks: both use the same RDMA device port for the RDMA messenger.
              OR
              b. The public network runs on the TCP/POSIX messenger while the cluster network runs on RDMA.
         2) Enable unified ceph.conf for all ceph nodes
         [Changcheng]:
            Do you mean that on some nodes Ceph needs to be configured with a different RDMA device/port? (See the ceph.conf sketch at the end of section I.)
      2. Ceph replication Zero-copy
         1) Reduce number of memcpy by half by re-using data buffers on primary OSD
         [Changcheng]:
            What does this mean? Is there any technical write-up about this item?
      3. Tx zero-copy
         Avoid copy-out by using registered memory
         [Changcheng]:
            I've read the code: the function tx_copy_chunk copies data into segmented chunks before sending. How do you plan to achieve zero-copy here? (See the zero-copy send sketch at the end of section I.)
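   On I.1 (RDMA-CM for the control path): to make sure we are talking about the same thing, below is a minimal client-side sketch of connection management through librdmacm as I understand it. Event-type checks and error handling are trimmed, and the helper name rdma_cm_client_connect is only illustrative, not existing Ceph code.

      /* Minimal librdmacm client sketch: resolve address and route, create an
       * RC QP, and connect.  Event-type checks and error handling are trimmed. */
      #include <rdma/rdma_cma.h>
      #include <netdb.h>
      #include <string.h>

      struct rdma_cm_id *rdma_cm_client_connect(const char *host, const char *port)
      {
          struct rdma_event_channel *ec = rdma_create_event_channel();
          struct rdma_cm_id *id = NULL;
          struct rdma_cm_event *ev = NULL;
          struct addrinfo *ai = NULL;
          struct ibv_qp_init_attr qp_attr;
          struct rdma_conn_param conn;

          rdma_create_id(ec, &id, NULL, RDMA_PS_TCP);

          getaddrinfo(host, port, NULL, &ai);
          rdma_resolve_addr(id, NULL, ai->ai_addr, 2000);   /* -> RDMA_CM_EVENT_ADDR_RESOLVED */
          rdma_get_cm_event(ec, &ev); rdma_ack_cm_event(ev);

          rdma_resolve_route(id, 2000);                     /* -> RDMA_CM_EVENT_ROUTE_RESOLVED */
          rdma_get_cm_event(ec, &ev); rdma_ack_cm_event(ev);

          memset(&qp_attr, 0, sizeof(qp_attr));
          qp_attr.qp_type = IBV_QPT_RC;
          qp_attr.cap.max_send_wr  = 64;
          qp_attr.cap.max_recv_wr  = 64;
          qp_attr.cap.max_send_sge = 1;
          qp_attr.cap.max_recv_sge = 1;
          rdma_create_qp(id, NULL, &qp_attr);  /* NULL PD: let librdmacm pick a default */

          memset(&conn, 0, sizeof(conn));
          rdma_connect(id, &conn);                          /* -> RDMA_CM_EVENT_ESTABLISHED */
          rdma_get_cm_event(ec, &ev); rdma_ack_cm_event(ev);

          freeaddrinfo(ai);
          return id;
      }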
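   On I.1 1) and 2): to clarify my question about a unified ceph.conf, here is a rough sketch of what I imagine such a configuration could look like. The option names (ms_type, ms_cluster_type, ms_async_rdma_device_name, ms_async_rdma_port_num) are the async messenger options as I understand them, and the device name and subnets are placeholders; please correct me if the intent on the slide is different.

      [global]
      public_network  = 192.168.1.0/24
      cluster_network = 192.168.2.0/24

      # a. RDMA on both networks, one shared device/port for the messenger
      ms_type = async+rdma
      ms_async_rdma_device_name = mlx5_0
      ms_async_rdma_port_num    = 1

      # b. Alternatively: TCP/POSIX on the public network, RDMA only on the cluster network
      # ms_type         = async+posix
      # ms_cluster_type = async+rdma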
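   On I.3 (Tx zero-copy): my reading of the current code is that tx_copy_chunk copies the payload into pre-registered bounce chunks. The sketch below shows the alternative I assume the slide is hinting at: register (or look up a cached registration for) the payload buffer itself and post it directly through an SGE. This is purely illustrative libibverbs C, not existing Ceph code.

      /* Sketch: post a send directly from a registered user buffer instead of
       * copying it into a pre-registered bounce chunk (illustrative only). */
      #include <infiniband/verbs.h>
      #include <stdint.h>
      #include <string.h>

      int post_zero_copy_send(struct ibv_qp *qp, struct ibv_pd *pd,
                              void *data, uint32_t len)
      {
          /* In a real messenger this registration would be cached per buffer
           * pool, so the expensive ibv_reg_mr() happens only once. */
          struct ibv_mr *mr = ibv_reg_mr(pd, data, len, IBV_ACCESS_LOCAL_WRITE);
          if (!mr)
              return -1;

          struct ibv_sge sge = {
              .addr   = (uintptr_t)data,
              .length = len,
              .lkey   = mr->lkey,
          };
          struct ibv_send_wr wr, *bad = NULL;
          memset(&wr, 0, sizeof(wr));
          wr.opcode     = IBV_WR_SEND;
          wr.sg_list    = &sge;
          wr.num_sge    = 1;
          wr.send_flags = IBV_SEND_SIGNALED;  /* the completion tells us when 'data' can be reused */

          return ibv_post_send(qp, &wr, &bad);
      }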

   II. ToDo:
      1. Use RDMA Read/Write for better memory utilization
      [Changcheng]:
         Is there any plan to implement RDMA Read/Write? How would you solve the compatibility problem, given that the previous implementation is based on RC Send/Recv? (See the RDMA Write sketch at the end of section II.)
      2. ODP - On demand paging
      [Changcheng]:
         Do you mean that "the registered Memory Region is pinned to physical page and can't be swapped out" problem?
      3. Erasure-coding using HW offload.
      [Changcheng]:
         Is this related to the RDMA NIC?
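   On II.1 (RDMA Read/Write): for reference, an illustrative one-sided write looks roughly like the sketch below. The remote address and rkey still have to be exchanged over the existing Send/Recv control path, which is exactly where I expect the compatibility question with the current RC Send/Recv implementation to show up. Again, this is a generic libibverbs sketch, not a proposal for the Ceph code.

      /* Sketch: one-sided RDMA WRITE into a peer buffer whose address and rkey
       * were exchanged beforehand over the existing Send/Recv path.  The remote
       * MR must have been registered with IBV_ACCESS_REMOTE_WRITE. */
      #include <infiniband/verbs.h>
      #include <stdint.h>
      #include <string.h>

      int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *local_mr,
                          void *local_buf, uint32_t len,
                          uint64_t remote_addr, uint32_t rkey)
      {
          struct ibv_sge sge = {
              .addr   = (uintptr_t)local_buf,
              .length = len,
              .lkey   = local_mr->lkey,
          };
          struct ibv_send_wr wr, *bad = NULL;
          memset(&wr, 0, sizeof(wr));
          wr.opcode              = IBV_WR_RDMA_WRITE;
          wr.sg_list             = &sge;
          wr.num_sge             = 1;
          wr.send_flags          = IBV_SEND_SIGNALED;
          wr.wr.rdma.remote_addr = remote_addr;  /* advertised by the peer */
          wr.wr.rdma.rkey        = rkey;

          return ibv_post_send(qp, &wr, &bad);
      }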
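   On II.2 (ODP): assuming the answer to my question above is yes, my understanding is that the change amounts to checking the device's ODP capabilities and registering memory with IBV_ACCESS_ON_DEMAND, so pages are faulted in by the HCA instead of being pinned up front, roughly as sketched below (illustrative libibverbs code, not Ceph code).

      /* Sketch: check ODP support, then register an MR without pinning its pages.
       * Implicit ODP (registering the whole address space) also exists on newer
       * devices, but is not shown here. */
      #include <infiniband/verbs.h>
      #include <string.h>

      struct ibv_mr *reg_odp_mr(struct ibv_context *ctx, struct ibv_pd *pd,
                                void *buf, size_t len)
      {
          struct ibv_device_attr_ex attr;
          memset(&attr, 0, sizeof(attr));
          if (ibv_query_device_ex(ctx, NULL, &attr))
              return NULL;
          if (!(attr.odp_caps.general_caps & IBV_ODP_SUPPORT))
              return NULL;   /* fall back to a normal pinned registration */

          /* Pages are not pinned up front; the HCA page-faults them in on access. */
          return ibv_reg_mr(pd, buf, len,
                            IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_ON_DEMAND);
      }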

B.R.
Changcheng