Re: Ceph Bluestore performance question

Hi Stijn, 

> the IPoIB network is not 56gb, it's probably a lot less (20gb or so).
> the ib_write_bw test is verbs/rdma based. do you have iperf tests
> between hosts, and if so, can you share those results?

Wow - indeed, yes, I was completely mistaken about ib_write_bw. 
Good that I asked! 

You are completely right; checking with iperf3, I get:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  18.4 GBytes  15.8 Gbits/sec  14242             sender
[  4]   0.00-10.00  sec  18.4 GBytes  15.8 Gbits/sec                  receiver

Taking into account that the OSDs also talk to each other over the very same network,
the observed client throughput now makes perfect sense to me. 
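
For reference (and for anyone finding this in the archive later): the iperf3 numbers above
were obtained roughly like this, while the ib_write_bw test from my last mail measures raw
verbs/RDMA bandwidth, which explains the large difference:

  # raw verbs/RDMA bandwidth (perftest package)
  server$ ib_write_bw
  client$ ib_write_bw <server>

  # TCP over IPoIB, i.e. the path the Ceph traffic actually takes here
  server$ iperf3 -s
  client$ iperf3 -c <server>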

This leaves me with two questions:
- Is it already safe to use RDMA with 12.2.2? Reading through this mail archive, 
  I gathered that it may lead to memory exhaustion and in any case requires some hacks 
  to the systemd service files (my current understanding is sketched below). 
- Is it already clear whether RDMA support will be part of 12.2.3? 
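
For reference, this is roughly what I understood would be needed to try RDMA - please
correct me if I got this wrong; the device name below is just an example and would have
to match the local HCA:

  # ceph.conf - switch the async messenger to RDMA
  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx4_0

  # systemd drop-in, e.g. /etc/systemd/system/ceph-osd@.service.d/rdma.conf
  # (RDMA needs to pin memory and to access /dev/infiniband)
  [Service]
  LimitMEMLOCK=infinity
  PrivateDevices=no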

Also, of course the final question from the last mail:
"Why is data moved in a k=4 m=2 EC-pool with 6 hosts and failure domain "host" after failure of one host?"
is still open. 
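
For context, that pool was created roughly along these lines (profile and pool names here
are just placeholders, not the actual ones):

  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 1024 1024 erasure ec-4-2

With k+m=6 and exactly 6 hosts as failure domains, I would have expected that after one
host goes down there simply is no other host left to re-place the missing shards on, so
no data should be moved at all - yet data movement did start.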

Many thanks already - this helped a lot in understanding things better!

Cheers,
Oliver
