Re: Performance Testing

On 17 Jun 2016 3:33 p.m., "Carlos M. Perez" <cperez@xxxxxxxxx> wrote:

>
> Hi,
>
>  
>
> I found the following on testing performance - http://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance - and have a few questions:
>
>  
>
> - When testing the block device, do the performance tests reflect overall cluster performance (how long it takes the data to replicate to the other nodes based on the number of copies, etc.), or just a local portion, ignoring the backend/external Ceph processes? We're using Ceph as block devices for Proxmox storage for KVMs/containers.
>

I'm not sure what you mean by "local portion"; are you doing the benchmarking directly on an OSD node? When writing with rbd bench or fio, the writes will be distributed across the cluster according to your cluster configuration, so the performance will reflect the various attributes of your cluster (replication count, journal speed, network latency, etc.).
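For concreteness, this is roughly how such whole-cluster benchmarks are usually run (a sketch only: "testpool" and "testimage" are placeholder names, and flags can vary between Ceph and fio versions):

    # RADOS-level write benchmark against a pool (runs for 60 seconds)
    rados bench -p testpool 60 write --no-cleanup

    # RBD-level write benchmark against an existing image
    rbd bench-write testimage --pool=testpool --io-size 4096 --io-pattern rand

    # fio via the rbd ioengine (needs fio built with rbd support)
    fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin \
        --pool=testpool --rbdname=testimage --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --runtime=60 --time_based --group_reporting

In all three cases the writes go through the normal client path, so replication, journaling and the network are all included in the numbers.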

>  
>
> - If the above measures the cluster as a whole, is there a way to test the "local" storage independently of the cluster/pool? Basically, I'm testing a few different journal drive options (Intel S3700, Samsung SM863) and controllers (ICH, LSI, Adaptec) and would prefer to change hardware in one node (which also limits purchasing requirements for testing) rather than having to replicate it in all nodes. Getting numbers close enough to a fully deployed setup is good enough for us. We currently have three nodes, two pools, and 6 OSDs per node, and we're trying to find an appropriate drive before we scale the system and start putting workloads on it.
>

If I understand correctly, you're doing your rbd testing on an OSD node and you want to test just the performance of the OSDs in that node. Localising things this way isn't really a common use case for Ceph. You could potentially create a new pool containing just the OSDs in that node, but you would need to play around with your CRUSH map to get that working, e.g. changing 'osd crush chooseleaf type'.
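If you do want to try it, one possible sketch (assuming a host bucket named "node1" in your CRUSH map; the names and exact syntax here are illustrative and may differ between releases) is to root a CRUSH rule at that host and point a throwaway pool at it:

    # Rule rooted at the host bucket "node1", with the OSD as the
    # failure domain so all replicas land on that one node
    ceph osd crush rule create-simple node1-only node1 osd

    # Small test pool that uses the new rule
    ceph osd pool create localtest 128 128 replicated node1-only

    # Confirm the PG mappings, then benchmark against this pool only
    ceph pg dump pgs_brief | head
    rados bench -p localtest 60 write --no-cleanup

Remember to delete the pool and rule afterwards, and bear in mind that single-node results will still differ from the full cluster because there is no inter-node replication traffic.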

>  
>
> - Write cache: in most benchmarking scenarios, the advice is to disable write caching on the drive. However, this page (http://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance) seems to indicate that "newer kernels should work fine". Does that mean that on a "modern" kernel this setting is unnecessary because it's accounted for during use of the journal, or that disabling it should work fine? We've seen vast differences using Sebastien Han's guide (http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/), but that runs fio directly against the device (which will clear out the partitions on a "live" journal... yes, it was a test system, so nothing major, just an unexpected issue of the OSDs not coming up after reboot). We've been disabling it, but just want to check whether this is an unnecessary step or a "best practice" step that should be done regardless.
>

I think you meant the other link there (Sebastien Han's guide). It is saying that on kernels newer than 2.6.33 there is no need to disable the write cache on a raw disk being used as a journal, because the data is properly flushed to the disk before the write is ACKed.
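If you still want to compare with the cache toggled, the usual way (illustrative only; /dev/sdX is a placeholder, and the fio command is destructive to whatever is on that device) is something like:

    # Check whether the drive's volatile write cache is currently enabled
    hdparm -W /dev/sdX

    # Disable (-W0) or re-enable (-W1) it for comparison runs
    hdparm -W0 /dev/sdX

    # Synchronous 4k write test in the style of Sebastien Han's guide.
    # WARNING: this writes to the raw device and will wipe partitions,
    # so only run it against a scratch disk, never a live journal.
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test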


>  
>
> Thanks in advance….
>
>  
>
> Carlos M. Perez
>
> CMP Consulting Services
>
> 305-669-1515
>
>  
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
