RE: Configuration of your benchmark for the performance report with functrace

Hi Stephen,
  We set up two methods in-house for performance breakdown against the Ceph repo (wip-blkin branch):
  1. Performance benchmarking and latency breakdown with BLKIN (zipkin + lttng). As far as we know, it takes a lot of effort to get this supported on a new Ceph release like Infernalis.
  2. Performance benchmarking and latency breakdown with lttng (functrace).
  We may port our work to Infernalis soon, once we find the bandwidth. However, I did not quite understand your question: "We are currently not seeing it capture any traces, but a build based on Hammer works...". It should be easy to port functrace to support Infernalis, right?

  Regards,
  James

-----Original Message-----
From: Blinick, Stephen L [mailto:stephen.l.blinick@xxxxxxxxx] 
Sent: Wednesday, December 16, 2015 4:01 PM
To: James (Fei) Liu-SSI
Subject: RE: Configuration of your benchmark for the performance report with functrace

Hi, first of all I'm sorry I didn't respond to your other queries a few weeks back.  I have been buried with end of the year stuff, and the hardware is in use on Infernalis work.

Next -- the full ceph.conf and the CBT yaml file we used for our all-flash work are in the backup slides of this deck: http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds

But let me know if you want any more info, like the partition sizing. I think our journal partitions are 20GB each. Lttng with all the function tracing I did had an impact of about 15% on latency and performance. Make sure to increase the lttng buffer sizes by a reasonable amount. I did something like this:
       lttng create
       lttng enable-channel -u cephtrc --subbuf-size 2097152
       lttng enable-event -u -a -c cephtrc
       lttng add-context -c cephtrc -u -t pthread_id

Then lttng start/stop/view/destroy...
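
For completeness, a full session would look roughly like the sketch below; the session name (cephtrc-session) and output file (trace.txt) are just placeholders, and the trace output path is whatever lttng create reports:

       lttng create cephtrc-session
       lttng enable-channel -u cephtrc --subbuf-size 2097152
       lttng enable-event -u -a -c cephtrc
       lttng add-context -c cephtrc -u -t pthread_id
       lttng start
       # run the benchmark workload here, then...
       lttng stop
       lttng view > trace.txt
       lttng destroy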

Lastly -- did you get lttng to work on infernalis?  We are currently not seeing it capture any traces, but a build based on Hammer works...
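
One sanity check we still need to do on our side (just a guess at what's going wrong) is to confirm the Ceph daemons actually register their userspace tracepoints on Infernalis. With an OSD running, something like this should list the ceph tracepoint providers if the build has tracing compiled in:

       lttng list -u

If nothing ceph-related shows up there, the problem is more likely the build or configuration than the lttng session setup.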

Thanks,

Stephen


-----Original Message-----
From: James (Fei) Liu-SSI [mailto:james.liu@xxxxxxxxxxxxxxx]
Sent: Wednesday, December 16, 2015 4:57 PM
To: Blinick, Stephen L
Subject: Configuration of your benchmark for the performance report with functrace

Hi Stephen,
  Thanks for your great help. We have successfully duplicated your effort over here, running functrace and breaking down the latency. However, we are seeing very high latency in our benchmark. I was wondering whether you could share your hardware configuration and ceph.conf.

We appreciate your great help, as always.

Regards,
James

-----Original Message-----
From: Cbt [mailto:cbt-bounces@xxxxxxxxxxxxxx] On Behalf Of Blinick, Stephen L
Sent: Monday, December 07, 2015 1:49 PM
To: Blyth, Logan; cbt@xxxxxxxxxxxxxx
Subject: Re: [Cbt] Is this list still active?

I still monitor this list (as do some others, I believe), and we still use CBT for our Ceph performance work.

Thanks,

Stephen


-----Original Message-----
From: Cbt [mailto:cbt-bounces@xxxxxxxxxxxxxx] On Behalf Of Blyth, Logan
Sent: Monday, December 7, 2015 2:43 PM
To: cbt@xxxxxxxxxxxxxx
Subject: [Cbt] Is this list still active?

Or does the discussion take place elsewhere? Perhaps ceph-users?

Logan
_______________________________________________
Cbt mailing list
Cbt@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/cbt-ceph.com
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


