IO-based timing analysis on v0.93

I'm sharing below the data from the timing analysis I mentioned in the performance meeting this morning. Attachments are problematic, so I'm trying Google Docs.

The tests are single-node rados bench runs: 4k reads & writes, single OSD, memstore backend. Hopefully the format is self-explanatory, but essentially key modules were modified with function entrance/exit traces and timing, as well as traces marking the beginning and end of each worker thread loop. 20-50K IOs are run, and then a script is used to 'detangle' the sequence of traces and timings for each individual op, keyed by OID (tid is not available everywhere). Once the ops are detangled (each op's timeline now also includes the time the op spent not being processed, i.e. sitting on a queue between threads), any ops with identical call paths are averaged together to get average times. I then arbitrarily colored the groups of traces based on thread/major work area.
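For reference, the detangling/averaging step amounts to something like the sketch below. This is a minimal illustration rather than the actual script: it assumes a flat trace log with one whitespace-separated record per line (timestamp in microseconds, OID, event name), and the log format and field names are placeholders.

#!/usr/bin/env python3
"""Sketch of the 'detangle' step: group trace records by OID, then average the
per-stage times of ops that share an identical call path. The input format
(timestamp_us oid event) is assumed, not Ceph's actual trace output."""
import sys
from collections import defaultdict

def detangle(lines):
    """Group trace records by OID, preserving per-op event order."""
    ops = defaultdict(list)          # oid -> [(timestamp_us, event), ...]
    for line in lines:
        if not line.strip():
            continue
        ts, oid, event = line.split(None, 2)
        ops[oid].append((float(ts), event.strip()))
    return ops

def average_by_callpath(ops):
    """Average per-stage deltas across all ops with an identical call path.
    The delta between two consecutive events includes any time the op spent
    queued between threads, so queue wait shows up alongside work time."""
    groups = defaultdict(list)       # call path (tuple of events) -> [deltas]
    for events in ops.values():
        path = tuple(e for _, e in events)
        deltas = [b[0] - a[0] for a, b in zip(events, events[1:])]
        groups[path].append(deltas)
    return {path: [sum(col) / len(all_deltas) for col in zip(*all_deltas)]
            for path, all_deltas in groups.items()}

if __name__ == "__main__":
    ops = detangle(sys.stdin)
    for path, avgs in average_by_callpath(ops).items():
        print(f"{len(path)} events: {[round(d, 1) for d in avgs]} us")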

For each run, the QD was increased past the point where throughput stops improving and only latency increases; the tabs of the spreadsheet can then be compared to see which component of the IO time grows. The upper-right contains the IO time as estimated by the script vs. what rados bench records (they are quite similar, but rados bench measures a bit more time beyond the last librados traces).
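Finding that saturation point can be automated with a sweep like the one below. This is a sketch, not how the runs above were necessarily driven: the pool name, run length, and stopping threshold are arbitrary, and the parsing of the "Bandwidth (MB/sec)" summary line is a guess at rados bench's output rather than a stable interface.

#!/usr/bin/env python3
"""Sweep rados bench queue depth (-t) for 4k writes and stop once more
concurrency no longer buys throughput (i.e. only latency is increasing)."""
import re
import subprocess

POOL = "bench"      # assumed pool name
SECONDS = 60        # assumed run length per queue depth
BLOCK = 4096        # 4k writes, matching the tests described above

def run_bench(qd):
    """One 4k write run at queue depth `qd`; returns bandwidth in MB/s."""
    out = subprocess.run(
        ["rados", "bench", "-p", POOL, str(SECONDS), "write",
         "-t", str(qd), "-b", str(BLOCK)],
        capture_output=True, text=True, check=True).stdout
    m = re.search(r"Bandwidth \(MB/sec\):\s+([\d.]+)", out)
    return float(m.group(1)) if m else 0.0

if __name__ == "__main__":
    prev = 0.0
    for qd in (1, 2, 4, 8, 16, 32, 64, 128):
        bw = run_bench(qd)
        print(f"QD={qd:<4} {bw:.2f} MB/s")
        # Stop once added queue depth buys < ~2% throughput.
        if prev and bw < prev * 1.02:
            break
        prev = bw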

Summary Sheet (with links to the others)
https://docs.google.com/spreadsheets/d/14LplRzVKf_aM65ENJjghuY8DFGmVOIqn2n_ASAILsxo/edit?usp=sharing

Contains:
1) 100% Reads
2) 100% Writes
3) 100% Reads w/ auth=none
4) 100% writes w /auth=none

In #1 and #2, the bottleneck is in the simple messenger. With auth turned off, for #4 the bottleneck appears to move to the time between the Op Worker finishing with an op and the finisher beginning to operate on it.

I should be able to run with FileStore fairly easily, as long as I tag what each thread is doing. I'll also run with multiple clients. Let me know if you have any comments or suggestions.

Thanks,

Stephen


