Re: ceph all-nvme mysql performance tuning

On 2017-11-27 14:02, German Anders wrote:
> 4x 2U servers:
>   1x 82599ES 10-Gigabit SFI/SFP+ Network Connection
>   1x Mellanox ConnectX-3 InfiniBand FDR 56Gb/s Adapter (dual port)
So I assume you are using IPoIB as the cluster network for replication...

> 1x OneConnect 10Gb NIC (quad-port) - in a bond configuration
> (active/active) with 3 vlans
... and the 10GbE bond as the front-end (public) network?
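
If so, in ceph.conf terms the split would look something like this (a minimal sketch; the subnets are placeholders, assuming the IPoIB interface carries the replication traffic):

    [global]
        # client / front-end traffic over the 10GbE bond (placeholder subnet)
        public network = 10.0.0.0/24
        # OSD replication and recovery traffic over IPoIB (placeholder subnet)
        cluster network = 192.168.0.0/24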

At 4k writes, network latency will be very high (see the flame graphs in the Intel NVMe presentation from the Boston OpenStack Summit - not sure if there is a newer deck that somebody could link ;)), and most of the time will be spent in the kernel network stack. You could give the RDMA messenger a try, but it's not stable in the current LTS release.
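
If you want to experiment with it anyway, it's a messenger-type switch in ceph.conf - a minimal sketch, assuming your RDMA device shows up as mlx4_0 (check with ibv_devices; availability and stability depend on your release):

    [global]
        # switch the async messenger to the RDMA transport (experimental)
        ms_type = async+rdma
        # RDMA device to use, as listed by ibv_devices (mlx4_0 assumed here)
        ms_async_rdma_device_name = mlx4_0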

If I were you I'd be looking at 100GbE - we've recently pulled in a bunch of 100GbE links and it's been wonderful to see 100+ GB/s in aggregate going over the network for storage alone.

Some people suggested mapping multiple RBD volumes - but unless I'm mistaken, and unless you're using a very recent qemu/libvirt combination with the proper libvirt disk settings (dedicated iothreads per disk), all IO will still be single-threaded towards librbd, so you won't see any speedup.
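
The settings I have in mind look roughly like this - a sketch, not a tested config, with the pool/image ("rbd/mysql-data") and monitor host ("mon1") as placeholders, and assuming a qemu/libvirt new enough to support per-disk iothreads:

    <domain type='kvm'>
      ...
      <!-- allocate a dedicated pool of IO threads for the guest -->
      <iothreads>2</iothreads>
      <devices>
        <disk type='network' device='disk'>
          <!-- pin this disk to iothread 1 so it gets its own event loop -->
          <driver name='qemu' type='raw' cache='none' iothread='1'/>
          <source protocol='rbd' name='rbd/mysql-data'>
            <host name='mon1' port='6789'/>
          </source>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>

Even then it's one iothread per disk, which is exactly why multiple volumes only help when each one is attached this way.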

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


