Effect of tunables on client system load


 



Hi All,
   First, some background: 
       I have been running a small (4 compute nodes) xen server cluster backed by both a small ceph (4 other nodes with a total of 18x 1-spindle osd's) and small gluster cluster (2 nodes each with a 14 spindle RAID array). I started with gluster 3-4 years ago, at first using NFS to access gluster, then upgraded to gluster FUSE. However, I had been facinated with ceph since I first read about it, and probably added ceph as soon as XCP released a kernel with RBD support, possibly approaching 2 years ago.
       With Ceph, since I started out with the kernel RBD client, I believe that locked me to the bobtail tunables. I connected to XCP via a project that tricks XCP into running LVM on the RBDs, managing all of this through the iSCSI management infrastructure somehow... Only recently I've switched to a newer project that uses rbd-nbd mapping instead. AFAIK this should let me use whatever tunables my client software supports. I have not yet changed my tunables, as the data reorganization will probably take a day or two (only 1Gb networking...).
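For anyone following along, the rbd-nbd path maps an image through the kernel's generic NBD driver, so CRUSH placement is computed in userspace librbd rather than in the kernel RBD client. A minimal sketch of the mapping workflow (the pool and image names here are placeholders, not from my actual setup):

```shell
# Map an RBD image via the NBD driver; CRUSH is handled by userspace
# librbd, so newer tunable profiles work even on an older kernel.
# "rbd" and "vm-disk-1" are placeholder pool/image names.
sudo rbd-nbd map rbd/vm-disk-1    # prints the attached device, e.g. /dev/nbd0

# Show current rbd-nbd mappings
rbd-nbd list-mapped

# Detach when finished
sudo rbd-nbd unmap /dev/nbd0
```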

   Over this time period, I've observed that my Gluster-backed guests tend not to consume as much of domain-0's (the Xen VM management host's) resources as my Ceph-backed guests do. To me this is somewhat intuitive, as the Ceph client has to do more "thinking" than the Gluster client. However, it seems to me that the gap in VM guest IO performance is well beyond what the difference in spindle count would suggest. I am open to the notion that there are probably quite a few sub-optimal design choices/constraints within the environment. However, I don't have the resources to conduct all that many experiments and benchmarks... So, over time I've ended up treating Ceph as my resilient storage (3x vs 2x replication) and Gluster as my more performant storage, since, as mentioned above, my Gluster guests have had quicker guest IO and lower dom-0 load.

    So, on to my questions:

   Would setting my tunables to jewel (my present release), or anything newer than bobtail (which is what I think I am set to, if I read the ceph status warning correctly), reduce my dom-0 load and/or improve any aspect of client IO performance?
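In case it helps anyone answer: this is how I understand the inspection and change would go. A sketch of the relevant commands (run against a live cluster, so no output shown; the profile choice is the assumption under discussion, not a recommendation):

```shell
# Show the CRUSH tunables the cluster is currently using;
# the "profile" line indicates which era (e.g. bobtail) is in effect.
ceph osd crush show-tunables

# Switch to the jewel profile -- note this triggers data movement,
# which could take a long time over a 1Gb network.
ceph osd crush tunables jewel

# Watch cluster status while the resulting rebalance runs
ceph -w
```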

   Will adding nodes to the Ceph cluster reduce load on dom-0 and/or improve client IO performance (I doubt the former and would expect the latter...)?

   So, why did I bring up Gluster at all? In an ideal world, I would like to have just one storage environment that satisfies all of my organization's needs. If forced to choose with the knowledge I have today, I would have to select Gluster. I am hoping to come up with some actionable data points that might help me discover some of my mistakes, which might explain my experience to date and maybe even help remedy said mistakes. As I mentioned earlier, I like Ceph, more so than Gluster, and would like to employ it more within my environment. But, given budgetary constraints, I need to do what's best for my organization.

   Thanks in advance,
   Nate
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

