Re: migrating cephfs metadata pool from spinning disk to SSD.

Bob,

Those numbers would seem to indicate some other problem. One of the biggest culprits of that kind of poor performance is the network. In the last few months, several reported performance issues have turned out to be network-related. Not all, but most. Your best bet is to check each host's interface statistics for errors. Make sure the MTU sizes match (jumbo frame settings on the hosts and on your switches). Check your switches for network errors. Try extended-size ping checks between nodes: set the packet size close to your max MTU and check that you're getting good performance from *all nodes* to every other node. Last, run a network performance test against each of the OSD nodes and see if one of them is acting up.
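Something along these lines on each node will usually surface an MTU mismatch or a flaky link (the interface name, hostname, and 9000-byte MTU below are just examples, adjust for your setup):

# check the configured MTU and the error/drop counters on the cluster interface
ip -s link show eth0
ethtool -S eth0 | grep -iE 'err|drop'

# ping every other node with a near-MTU payload and don't-fragment set
# (8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers)
ping -M do -s 8972 -c 10 cephosd02

# raw throughput test between nodes (needs iperf3, or plain iperf, on both ends)
iperf3 -s                    # on the remote node
iperf3 -c cephosd02 -t 30    # from the local node

If the large-payload ping fails while a normal ping works, the jumbo-frame settings don't match end to end.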

If you are backing your journals on SSD - you DEFINITELY should be getting vastly better performance than that.  I have a cluster with 6 OSD nodes, each with 10x 4TB OSDs and 2x 7200 rpm disks as the journals (12 disks per node).  NO SSDs in that configuration.  I can push that cluster to about 650 MByte/sec via network RBD 'dd' tests, and get about 2500 IOPS.  NOTE - this is an all-spinning-HDD cluster w/ 7200 rpm disks!
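If you want a comparable number from your own cluster, the test is roughly along these lines (the pool and image names are just placeholders):

# create and map a throwaway image, then stream direct writes through the kernel RBD client
rbd create rbdbench --size 102400 --pool rbd
rbd map rbdbench --pool rbd
dd if=/dev/zero of=/dev/rbd/rbd/rbdbench bs=4M count=2048 oflag=direct

# or benchmark the cluster directly, bypassing the client block layer
rados bench -p rbd 30 write --no-cleanup
rados bench -p rbd 30 rand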

~~shane

On 8/4/15, 2:36 PM, "ceph-users on behalf of Bob Ababurko" <ceph-users-bounces@xxxxxxxxxxxxxx on behalf of bob@xxxxxxxxxxxx> wrote:

I have my first ceph cluster up and running and am currently testing cephfs for file access.  It turns out I am not getting good write performance on my cluster via cephfs (kernel client), and would like to explore moving my cephfs_metadata pool to SSD.

To quickly describe the cluster:

All nodes run CentOS 7.1 with ceph-0.94.1 (Hammer):
[bababurko@cephosd01 ~]$ uname -r
3.10.0-229.el7.x86_64
[bababurko@cephosd01 ~]$ cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)

6 OSD nodes, each with 5x 1TB SATA (7200 rpm, don't have the model handy) and 1x 1TB SSD (850 Pro) that holds a 5GB journal for each of the 5 OSDs, so there is plenty of space left on each SSD to create a partition for an SSD pool...at least 900GB per SSD.  Also noteworthy: these disks sit behind a RAID controller (LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2), with each disk configured as RAID 0.
3 MON nodes
1 MDS node

My writes are not performing as I would expect with respect to IOPS (50-1000 IOPS) or write throughput (~25 MB/s max).  I'm interested in understanding what it takes to create an SSD pool that I can then migrate the current cephfs_metadata pool to.  I suspect the spinning-disk metadata pool is a bottleneck, and I want to get the maximum performance out of this cluster to prove that we should build out a larger version.  One caveat: I have already copied about 4TB of data to the cluster via cephfs and don't want to lose it, so I obviously need to keep the metadata intact.
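For reference, the usual shape of this on Hammer is a dedicated CRUSH root and rule for the SSD OSDs, then repointing the metadata pool at that rule; Ceph rebalances the existing objects itself, so the metadata is preserved. A rough sketch only (bucket and rule names are placeholders, and it assumes the SSD OSDs have already been placed under their own host buckets beneath that root):

# dedicated root for the SSD OSDs, plus a replication rule that selects from it
ceph osd crush add-bucket ssd-root root
ceph osd crush rule create-simple ssd-rule ssd-root host

# find the new rule's id, then point the existing metadata pool at it (Hammer syntax)
ceph osd crush rule dump ssd-rule
ceph osd pool set cephfs_metadata crush_ruleset <rule id>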

If anyone has done this OR understands how this can be done, I would appreciate the advice.

thanks in advance,
Bob


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
