Performance Testing

Hi,

 

I found the following page on testing performance - http://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance - and have a few questions:

 

- When testing the block device, do the performance tests measure the overall cluster performance (how long it takes the data to replicate to the other nodes based on the number of copies, etc.), or just the local portion, ignoring the backend/external Ceph processes? We're using Ceph block devices as Proxmox storage for KVMs/containers.
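For context, the commands we've been running from that page look roughly like the following; the pool and image names are just placeholders, and the rbd bench-write option names may differ between releases:

    # 60-second object write test against the pool (keep the objects for the read test)
    rados bench -p testbench 60 write --no-cleanup
    # sequential read-back of those objects
    rados bench -p testbench 60 seq
    # 4k writes against an RBD image in the same pool
    rbd bench-write image01 --pool=testbench --io-size 4096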

 

- If the above measures the cluster as a whole, is there a way to test the "local" storage independently of the cluster/pool? Basically, I'm testing a few different journal drive options (Intel S3700, Samsung SM863) and controllers (ICH, LSI, Adaptec) and would prefer to change hardware in one node (which also limits purchasing requirements for testing) rather than having to replicate it in all nodes. Getting numbers close enough to a fully deployed setup is good enough. We currently have three nodes, two pools, and 6 OSDs per node, and we're trying to find an appropriate drive before we scale the system and start putting workloads on it.
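The closest thing we've found to a per-node/per-OSD number so far is the built-in OSD bench, e.g. the following; osd.0 is just an example, and the two optional arguments are total bytes and write size, which I believe default to 1 GB and 4 MB:

    # ask one OSD to benchmark its own backend write path
    ceph tell osd.0 bench
    # same, explicitly 1 GB of data in 4 MB writes
    ceph tell osd.0 bench 1073741824 4194304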

 

- Write cache - In most benchmarking scenarios it's recommended to disable write caching on the drive. However, the same page (http://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance) seems to indicate that "Newer kernels should work fine" - does this mean that on a "modern" kernel disabling the cache is unnecessary because it's accounted for when the journal is used, or that disabling it should work fine? We've seen vast differences using Sebastien Han's guide (http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/), but that runs fio directly against the device, which will clear out the partitions on a "live" journal (yes, it was a test system, so nothing major, just the unexpected issue of the OSDs not coming up after reboot). We've been disabling the write cache, but we want to check whether this is an unnecessary step or a "best practice" step that should be done regardless.
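For reference, the run we've been doing follows that post more or less verbatim - /dev/sdX is a placeholder, and as mentioned fio writes straight to the raw device, so it will destroy anything on it:

    # disable the drive's volatile write cache (may need sdparm instead, depending on the controller)
    hdparm -W 0 /dev/sdX
    # single job of 4k direct + sync writes, roughly the pattern the journal produces
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test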

 

Thanks in advance….

 

Carlos M. Perez

CMP Consulting Services

305-669-1515

 

