Re: RDMA/RoCE enablement failed with (113) No route to host

Last I heard (read), the RDMA implementation is somewhat experimental. Search this mailing list for "troubleshooting ceph rdma performance" for more info.

(Adding Roman in CC who has been working on this recently.)

Mohamad

On 12/18/18 11:42 AM, Michael Green wrote:
I don't know. 
The Ceph documentation for Mimic doesn't appear to go into much detail on RDMA in general, but it's still mentioned in the docs here and there. Some examples:

I want to believe that the official docs wouldn't mention something that's completely broken?

There are multiple posts in this very mailing list from people trying to make it work. 
--
Michael Green
Customer Support & Integration
Tel. +1 (518) 9862385
green@xxxxxxxxxxxxx


On Dec 18, 2018, at 6:55 AM, Vitaliy Filippov <vitalif@xxxxxxxxxx> wrote:

Is RDMA officially supported? I'm asking because I recently tried to use DPDK and it seems to be broken... i.e., the code is there, but it doesn't compile until I fix the cmake scripts, and after fixing the build, the OSDs just segfault and die after processing something like 40-50 incoming packets.

Maybe RDMA is in the same state?

On December 13, 2018, 2:42:23 GMT+03:00, Michael Green <green@xxxxxxxxxxxxx> wrote:
Sorry for bumping the thread. I refuse to believe there are no people on this list who have successfully enabled and run RDMA with Mimic. :)

Mike

Hello collective wisdom,

ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable) here.

I have a working cluster here consisting of 3 monitor hosts, 64 OSD processes across 4 OSD hosts, plus 2 MDSes and 2 MGRs. All of that is consumed by 10 client nodes.

Every host in the cluster, including the clients, is:
RHEL 7.5
Mellanox OFED 4.4-2.0.7.0
RoCE NICs are either MCX416A-CCAT or MCX414A-CCAT @ 50Gbit/sec
The NICs are all mlx5_0 port 1

rping and ib_send_bw work fine in both directions between any two nodes in the cluster.
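(By "work fine" I mean the usual perftest / librdmacm sanity checks pass in both directions; roughly the following, though the exact flags may differ between perftest versions:

  ib_send_bw -d mlx5_0 -i 1                   # server side, e.g. on rio
  ib_send_bw -d mlx5_0 -i 1 192.168.1.58      # client side, from another node
  rping -s -a 192.168.1.58 -v                 # rdma-cm level check, server
  rping -c -a 192.168.1.58 -v                 # rdma-cm level check, client
)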

The full configuration of the cluster is pasted below, but the RDMA-related parameters are configured as follows:


ms_public_type = async+rdma
ms_cluster = async+rdma
# Exclude clients for now 
ms_type = async+posix

ms_async_rdma_device_name = mlx5_0
ms_async_rdma_polling_us = 0
ms_async_rdma_port_num=1
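
One caveat with the snippet above: the documented spelling of the cluster-network override is ms_cluster_type (mirroring ms_public_type), not ms_cluster, so unless Mimic accepts both forms, the cluster side may silently fall back to ms_type. What I believe was intended:

ms_type = async+posix              # keep clients on plain TCP for now
ms_public_type = async+rdma
ms_cluster_type = async+rdma

ms_async_rdma_device_name = mlx5_0
ms_async_rdma_polling_us = 0
ms_async_rdma_port_num = 1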

When I try to start the MON, it immediately fails as shown below. Has anybody seen this, or can anyone give me pointers on what to check next?


------ceph-mon.rio.log--begin------
2018-12-12 22:35:30.011 7f515dc39140  0 set uid:gid to 167:167 (ceph:ceph)
2018-12-12 22:35:30.011 7f515dc39140  0 ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable), process ceph-mon, pid 2129843
2018-12-12 22:35:30.011 7f515dc39140  0 pidfile_write: ignore empty --pid-file
2018-12-12 22:35:30.036 7f515dc39140  0 load: jerasure load: lrc load: isa
2018-12-12 22:35:30.036 7f515dc39140  0  set rocksdb option compression = kNoCompression
2018-12-12 22:35:30.036 7f515dc39140  0  set rocksdb option level_compaction_dynamic_level_bytes = true
2018-12-12 22:35:30.036 7f515dc39140  0  set rocksdb option write_buffer_size = 33554432
2018-12-12 22:35:30.036 7f515dc39140  0  set rocksdb option compression = kNoCompression
2018-12-12 22:35:30.036 7f515dc39140  0  set rocksdb option level_compaction_dynamic_level_bytes = true
2018-12-12 22:35:30.036 7f515dc39140  0  set rocksdb option write_buffer_size = 33554432
2018-12-12 22:35:30.147 7f51442ed700  2 Event(0x55d927e95700 nevent=5000 time_id=1).set_owner idx=1 owner=139987012998912
2018-12-12 22:35:30.147 7f51442ed700 10 stack operator() starting
2018-12-12 22:35:30.147 7f5143aec700  2 Event(0x55d927e95200 nevent=5000 time_id=1).set_owner idx=0 owner=139987004606208
2018-12-12 22:35:30.147 7f5144aee700  2 Event(0x55d927e95c00 nevent=5000 time_id=1).set_owner idx=2 owner=139987021391616
2018-12-12 22:35:30.147 7f5143aec700 10 stack operator() starting
2018-12-12 22:35:30.147 7f5144aee700 10 stack operator() starting
2018-12-12 22:35:30.147 7f515dc39140  0 starting mon.rio rank 0 at public addr 192.168.1.58:6789/0 at bind addr 192.168.1.58:6789/0 mon_data /var/lib/ceph/mon/ceph-rio fsid 376540c8-a362-41cc-9a58-9c8ceca0e4ee
2018-12-12 22:35:30.147 7f515dc39140 10 -- - bind bind 192.168.1.58:6789/0
2018-12-12 22:35:30.147 7f515dc39140 10 -- - bind Network Stack is not ready for bind yet - postponed
2018-12-12 22:35:30.147 7f515dc39140  0 starting mon.rio rank 0 at 192.168.1.58:6789/0 mon_data /var/lib/ceph/mon/ceph-rio fsid 376540c8-a362-41cc-9a58-9c8ceca0e4ee
2018-12-12 22:35:30.148 7f515dc39140  0 mon.rio@-1(probing).mds e84 new map
2018-12-12 22:35:30.148 7f515dc39140  0 mon.rio@-1(probing).mds e84 print_map
e84
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Standby daemons:

5906437:        192.168.1.152:6800/1077205146 'prince' mds.-1.0 up:standby seq 2
6284118:        192.168.1.59:6800/1266235911 'salvador' mds.-1.0 up:standby seq 2

2018-12-12 22:35:30.148 7f515dc39140  0 mon.rio@-1(probing).osd e25894 crush map has features 288514051259236352, adjusting msgr requires
2018-12-12 22:35:30.148 7f515dc39140  0 mon.rio@-1(probing).osd e25894 crush map has features 288514051259236352, adjusting msgr requires
2018-12-12 22:35:30.148 7f515dc39140  0 mon.rio@-1(probing).osd e25894 crush map has features 1009089991638532096, adjusting msgr requires
2018-12-12 22:35:30.148 7f515dc39140  0 mon.rio@-1(probing).osd e25894 crush map has features 288514051259236352, adjusting msgr requires
2018-12-12 22:35:30.149 7f515dc39140 10 -- - create_connect 192.168.1.88:6800/1638, creating connection and registering
2018-12-12 22:35:30.149 7f515dc39140 10 -- - >> 192.168.1.88:6800/1638 conn(0x55d9281fbe00 :-1 s=STATE_NONE pgs=0 cs=0 l=0)._connect csq=0
2018-12-12 22:35:30.149 7f515dc39140 10 -- - get_connection mgr.5894115 192.168.1.88:6800/1638 new 0x55d9281fbe00
2018-12-12 22:35:30.150 7f515dc39140  1 -- - --> 192.168.1.88:6800/1638 -- mgropen(unknown.rio) v3 -- 0x55d92844e000 con 0
2018-12-12 22:35:30.151 7f515dc39140  1 -- - start start
2018-12-12 22:35:30.151 7f515dc39140  1 -- - start start
2018-12-12 22:35:30.151 7f515dc39140 10 -- - ready -
2018-12-12 22:35:30.151 7f515dc39140 10 -- - bind bind 192.168.1.58:6789/0
2018-12-12 22:35:30.151 7f515dc39140 10  Processor -- bind
2018-12-12 22:35:30.154 7f5144aee700  1 Infiniband binding_port found active port 1
2018-12-12 22:35:30.154 7f5144aee700  1 Infiniband init receive queue length is 4096 receive buffers
2018-12-12 22:35:30.154 7f5144aee700  1 Infiniband init assigning: 1024 send buffers
2018-12-12 22:35:30.154 7f5144aee700  1 Infiniband init device allow 4194303 completion entries
2018-12-12 22:35:30.509 7f515dc39140 10  Processor -- bind bound to 192.168.1.58:6789/0
2018-12-12 22:35:30.509 7f515dc39140  1 -- 192.168.1.58:6789/0 learned_addr learned my addr 192.168.1.58:6789/0
2018-12-12 22:35:30.509 7f515dc39140  1 -- 192.168.1.58:6789/0 _finish_bind bind my_inst.addr is 192.168.1.58:6789/0
2018-12-12 22:35:30.510 7f515dc39140 10 -- - ready -
2018-12-12 22:35:30.510 7f515dc39140  1  Processor -- start
2018-12-12 22:35:30.510 7f515dc39140  0 mon.rio@-1(probing) e5  my rank is now 0 (was -1)
2018-12-12 22:35:30.510 7f515dc39140  1 -- 192.168.1.58:6789/0 shutdown_connections
2018-12-12 22:35:30.510 7f515dc39140  1 -- 192.168.1.58:6789/0 _send_message--> mon.1 192.168.1.59:6789/0 -- mon_probe(probe 376540c8-a362-41cc-9a58-9c8ceca0e4ee name rio) v6 -- ?+0 0x55d928525680
2018-12-12 22:35:30.510 7f515dc39140 10 -- 192.168.1.58:6789/0 create_connect 192.168.1.59:6789/0, creating connection and registering
2018-12-12 22:35:30.510 7f515dc39140 10 -- 192.168.1.58:6789/0 >> 192.168.1.59:6789/0 conn(0x55d9281fc400 :-1 s=STATE_NONE pgs=0 cs=0 l=0)._connect csq=0
2018-12-12 22:35:30.510 7f515dc39140  1 -- 192.168.1.58:6789/0 --> 192.168.1.59:6789/0 -- mon_probe(probe 376540c8-a362-41cc-9a58-9c8ceca0e4ee name rio) v6 -- 0x55d928525680 con 0
2018-12-12 22:35:30.510 7f515dc39140  1 -- 192.168.1.58:6789/0 _send_message--> mon.2 192.168.1.65:6789/0 -- mon_probe(probe 376540c8-a362-41cc-9a58-9c8ceca0e4ee name rio) v6 -- ?+0 0x55d928525900
2018-12-12 22:35:30.510 7f515dc39140 10 -- 192.168.1.58:6789/0 create_connect 192.168.1.65:6789/0, creating connection and registering
2018-12-12 22:35:30.510 7f515dc39140 10 -- 192.168.1.58:6789/0 >> 192.168.1.65:6789/0 conn(0x55d9281fca00 :-1 s=STATE_NONE pgs=0 cs=0 l=0)._connect csq=0
2018-12-12 22:35:30.510 7f515dc39140  1 -- 192.168.1.58:6789/0 --> 192.168.1.65:6789/0 -- mon_probe(probe 376540c8-a362-41cc-9a58-9c8ceca0e4ee name rio) v6 -- 0x55d928525900 con 0
2018-12-12 22:35:30.513 7f5143aec700 10 NetHandler generic_connect connect: (111) Connection refused
2018-12-12 22:35:30.513 7f51442ed700 10 NetHandler generic_connect connect: (111) Connection refused
2018-12-12 22:35:30.513 7f5144aee700 10 Infiniband send_msg sending: 0, 12894, 0, 0, fe800000000000007efe90fffe1e2524
2018-12-12 22:35:30.513 7f5143aec700  1 RDMAStack connect try connecting failed.
2018-12-12 22:35:30.513 7f51442ed700  1 RDMAStack connect try connecting failed.
2018-12-12 22:35:30.513 7f5144aee700 10 -- - >> 192.168.1.88:6800/1638 conn(0x55d9281fbe00 :-1 s=STATE_CONNECTING_RE pgs=0 cs=0 l=0)._process_connection nonblock connect inprogress
2018-12-12 22:35:30.513 7f5143aec700 10 -- 192.168.1.58:6789/0 >> 192.168.1.59:6789/0 conn(0x55d9281fc400 :-1 s=STATE_CONNECTING pgs=0 cs=0 l=0).fault waiting 0.200000
2018-12-12 22:35:30.513 7f51442ed700 10 -- 192.168.1.58:6789/0 >> 192.168.1.65:6789/0 conn(0x55d9281fca00 :-1 s=STATE_CONNECTING pgs=0 cs=0 l=0).fault waiting 0.200000
2018-12-12 22:35:30.513 7f5144aee700 10 -- - >> 192.168.1.88:6800/1638 conn(0x55d9281fbe00 :-1 s=STATE_CONNECTING_RE pgs=0 cs=0 l=0).handle_write
2018-12-12 22:35:30.513 7f5143aec700 10 -- 192.168.1.58:6789/0 >> 192.168.1.59:6789/0 conn(0x55d9281fc400 :-1 s=STATE_CONNECTING pgs=0 cs=0 l=0).handle_write
2018-12-12 22:35:30.513 7f51442ed700 10 -- 192.168.1.58:6789/0 >> 192.168.1.65:6789/0 conn(0x55d9281fca00 :-1 s=STATE_CONNECTING pgs=0 cs=0 l=0).handle_write
2018-12-12 22:35:30.513 7f5144aee700  5 Infiniband recv_msg recevd: 105, 0, 262144, 0,
2018-12-12 22:35:30.513 7f5144aee700 -1  RDMAConnectedSocketImpl activate failed to transition to RTR state: (113) No route to host
2018-12-12 22:35:30.515 7f5144aee700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.2/rpm/el7/BUILD/ceph-13.2.2/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: In function 'void RDMAConnectedSocketImpl::handle_connection()' thread 7f5144aee700 time 2018-12-12 22:35:30.514762
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.2/rpm/el7/BUILD/ceph-13.2.2/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: 224: FAILED assert(!r)

 ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0xff) [0x7f515506e6bf]
 2: (()+0x285887) [0x7f515506e887]
 3: (RDMAConnectedSocketImpl::handle_connection()+0x6f5) [0x7f51551b9655]
 4: (EventCenter::process_events(unsigned int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0x695) [0x7f51551a76f5]
 5: (()+0x3c15cc) [0x7f51551aa5cc]
 6: (()+0x6afaef) [0x7f5155498aef]
 7: (()+0x7e25) [0x7f5154396e25]
 8: (clone()+0x6d) [0x7f5150cb7bad]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
------ceph-mon.rio.log--end------
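
A note on the failing line: the RTR transition happens inside ibv_modify_qp(), and on RoCE that is where the remote GID gets resolved to a MAC address, so (113) EHOSTUNREACH at that point usually means the GID pair the two sides exchanged isn't routable, rather than anything TCP-level. The handshake line above ("Infiniband send_msg sending: ... fe80...") shows a link-local GID being offered, i.e. the GID-index-0 / RoCE v1 entry, while addresses on 192.168.1.0/24 would correspond to IPv4-mapped RoCE v2 GIDs. One way to inspect what GIDs the port actually carries (show_gids ships with Mellanox OFED; the sysfs entries are equivalent):

  show_gids mlx5_0
  # or, without the helper script, the first few GID table entries:
  cat /sys/class/infiniband/mlx5_0/ports/1/gids/{0..3}
  cat /sys/class/infiniband/mlx5_0/ports/1/gid_attrs/types/{0..3}

The commented-out ms_async_rdma_local_gid lines in the conf below look like an attempt to pin exactly that per daemon.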

-----ceph.conf---begin-----
[client]
rbd_cache = False
rbd_cache_writethrough_until_flush = False
#admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
#log file = /var/log/ceph/

[global]
ms_type = async+posix
# set RDMA messaging just for the public or cluster network
ms_public_type = async+rdma
ms_cluster = async+rdma
#
# set a device name according to IB or ROCE device used, e.g.
ms_async_rdma_device_name = mlx5_0
#
# for better performance if using LUMINOUS 12.2.x release
ms_async_rdma_polling_us = 0
ms_async_rdma_port_num=1

rgw_override_bucket_index_max_shards=1
auth client required = none
auth cluster required = none
auth service required = none
auth supported = none
cephx require signatures = False
cephx sign messages = False
cluster network = 192.168.1.0/24
debug asok = 0/0
debug auth = 0/0
debug bluefs = 0/0
debug bluestore = 0/0
debug buffer = 0/0
debug client = 0/0
debug context = 0/0
debug crush = 0/0
debug filer = 0/0
debug filestore = 0/0
debug finisher = 0/0
#debug hadoop = 0/0 -- doesn't work in E8 setup
debug heartbeatmap = 0/0
debug journal = 0/0
debug journaler = 0/0
debug lockdep = 0/0
debug log = 0
debug mds = 0/0
debug mds_balancer = 0/0
debug mds_locker = 0/0
debug mds_log = 0/0
debug mds_log_expire = 0/0
debug mds_migrator = 0/0
debug mon = 0/0
debug monc = 0/0
debug ms = 10/10
#debug ms = 0/0
debug objclass = 0/0
debug objectcacher = 0/0
debug objecter = 0/0
debug optracker = 0/0
debug osd = 0/0
debug paxos = 0/0
debug perfcounter = 0/0
debug rados = 0/0
debug rbd = 0/0
debug rgw = 0/0
debug rocksdb = 0/0
debug throttle = 0/0
debug timer = 0/0
debug tp = 0/0
#debug zs = 0/0 -- doesn't work in E8 setup
fsid = 376540c8-a362-41cc-9a58-9c8ceca0e4ee
mon_host = 192.168.1.58,192.168.1.65,192.168.1.59
mon pg warn max per osd = 800
mon_allow_pool_delete = True
mon_max_pg_per_osd = 800
ms type = async
ms_crc_data = False
#ms_crc_header = False -- broken!
osd objectstore = bluestore
osd_pool_default_size = 2
perf = True
public network = 192.168.1.0/24
rocksdb_perf = True
# Parameter not present in Micron's config, but introduced by ceph-deploy
mon_initial_members = rio
# The following param claims to reduce CPU usage; found it at
ms nocrc = true
#rbd_op_threads=4
[mon]
mon_max_pool_pg_num = 166496
mon_osd_max_split_count = 10000
[osd]
osd_min_pg_log_entries = 10
osd_max_pg_log_entries = 10
osd_pg_log_dups_tracked = 10
osd_pg_log_trim_min = 10
bluestore_cache_kv_max = 96G
bluestore_cache_kv_ratio = 0.2
bluestore_cache_meta_ratio = 0.8
bluestore_cache_size_ssd = 7G
bluestore_csum_type = none
bluestore_extent_map_shard_max_size = 200
bluestore_extent_map_shard_min_size = 50
bluestore_extent_map_shard_target_size = 100

bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=64,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,write_buffer_size=4MB,target_file_size_base=4MB,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB

[client.rgw.sm26]
rgw_frontends = "civetweb port=7480"

#[osd.0]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.1]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.2]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.3]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.4]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.5]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.6]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.7]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.8]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.9]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.10]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.11]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.12]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.13]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.14]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
#[osd.15]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01ae
#
## BONJOVI1
#
#[osd.16]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.17]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.18]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.19]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.20]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.21]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.22]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.23]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.24]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.25]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.26]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.27]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.28]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.29]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.30]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
#[osd.31]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:01af
#
## PRINCE
#
#[osd.32]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.33]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#[osd.34]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.35]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.36]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.37]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.38]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.39]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.40]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.41]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.42]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.43]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.44]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.45]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.46]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#[osd.47]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0198
#
#
## RINGO
#
#[osd.48]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.49]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.50]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.51]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.52]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.53]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.54]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.55]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.56]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.57]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.58]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.59]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.60]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.61]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.62]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[osd.63]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:018e
#
#[mon.rio]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:013a
#
#[mon.salvador]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:013b
#
#[mon.medellin]
#ms_async_rdma_local_gid=0000:0000:0000:0000:0000:ffff:c0a8:0141

-----ceph.conf---end-----
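
For what it's worth, the commented-out GIDs above are just the IPv4-mapped IPv6 form of each daemon's address, so they can be derived rather than copied around. For mon.rio at 192.168.1.58: 192=0xc0, 168=0xa8, 1=0x01, 58=0x3a, giving 0000:0000:0000:0000:0000:ffff:c0a8:013a, which is exactly the [mon.rio] value. A quick way to generate them, assuming a python with the ipaddress module is available:

  python -c "import ipaddress; print(ipaddress.ip_address(u'::ffff:192.168.1.58').exploded)"
  # -> 0000:0000:0000:0000:0000:ffff:c0a8:013a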


--
Michael Green

--
With best regards,
Vitaliy Filippov


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

