Performance degradation on a big cluster

Hi all,
   We hit a performance degradation in one of our clusters: randwrite
latency degraded from 1 ms to 5 ms (fio -ioengine=rbd,
iodepth=1).
   The cluster has about 200 OSDs running on Intel 3500 SSDs, and we run
both qemu and ceph-osd on the same hosts. The network for Ceph is 10GbE.
   When the cluster was smaller, with fewer qemu processes, the I/O
latency was about 1 ms; now the latency is 5 ms.
   I used strace to measure syscall times: every syscall (writev,
io_submit, recvfrom, sendmsg, lseek, fgetxattr, etc.) takes 300 us to
600 us. On a small, idle cluster the syscall time is close to 0 us.
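As a rough cross-check without strace, per-syscall latency can also be sampled from userspace. Below is a minimal sketch (the helper name `syscall_latency_us` is mine, not from the original report) that times repeated lseek() calls, one of the syscalls listed above:

```python
import os
import tempfile
import time

def syscall_latency_us(n=10000):
    # Mean wall-clock time of one lseek() syscall, in microseconds.
    # lseek on a regular file is cheap, so this mostly measures
    # kernel entry/exit plus scheduling interference on the host.
    fd, path = tempfile.mkstemp()
    try:
        t0 = time.perf_counter()
        for _ in range(n):
            os.lseek(fd, 0, os.SEEK_SET)
        t1 = time.perf_counter()
    finally:
        os.close(fd)
        os.unlink(path)
    return (t1 - t0) / n * 1e6

print(f"mean lseek latency: {syscall_latency_us():.2f} us")
```

Comparing this number on a loaded host against an idle one should show the same trend as the strace figures.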
   After checking several clusters, I came to this conclusion:
   num_of_osds   num_of_threads_on_host   time_of_syscall(us)
   200           10000                    300-600
   100           5000                     200-500
   70            2500                     100-300
   9             750                      20-60

  The threads on one host of the 200-OSD cluster look like this:
  name        num_of_processes   num_of_threads   num_of_threads_per_process
  qemu-kvm    49                 9748             198
  ceph-osd    6                  5707             951
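Per-process thread counts like those in the table above can be gathered from /proc on a Linux host. This is a sketch only, and the helper name `thread_counts` is mine:

```python
import os

def thread_counts(name_prefix):
    # Count processes (and their threads) whose comm name starts with
    # name_prefix, by reading /proc/<pid>/status (Linux-specific).
    procs, threads = 0, 0
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue
        try:
            with open(f'/proc/{pid}/status') as f:
                fields = dict(line.split(':\t', 1) for line in f if ':\t' in line)
        except OSError:
            continue  # process exited while we were scanning
        if fields.get('Name', '').strip().startswith(name_prefix):
            procs += 1
            threads += int(fields.get('Threads', 0))
    return procs, threads

for name in ('qemu-kvm', 'ceph-osd'):
    p, t = thread_counts(name)
    print(name, p, t)
```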

I think too many threads on the host lead to high scheduling latency in the ceph-osd processes, which in turn causes high I/O latency on the client side.

  Any help is welcome.

Thanks!
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


