[Linux-cluster] oprofile for tar/rm tests

Hi all,

I ran the oprofile utility while doing the same sort of tar/rm tests
that Daniel McNeil and others have been running (although I didn't do
the sync).  oprofile periodically samples the CPU instruction pointer
to figure out where the CPU is spending its time.

This was on a single node: FC JBOD, a single physical disk, no volume
manager, the nolock lock module, on a 1 GHz dual-Xeon box with 1 GByte
of RAM.

On average, the tar takes about 46 seconds of real time and the rm -rf
about 26 seconds, when repeatedly cycling between the two.
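In case anyone wants to reproduce the cycling, here is a minimal
sketch of how the repeated tar/rm cycle could be timed; the function
name, arguments, and cycle count are my own illustration, not what I
actually ran (I just watched the wall clock):

```shell
#!/bin/sh
# Hypothetical helper (names are assumptions): repeatedly untar a
# tarball, remove the extracted tree, and print elapsed wall-clock
# seconds for each phase.
# Arguments: tarball path, extracted top-level dir, number of cycles.
time_cycles() {
    tarball=$1 tree=$2 cycles=$3
    i=1
    while [ "$i" -le "$cycles" ]; do
        start=$(date +%s)
        tar -xzf "$tarball"
        echo "tar cycle $i: $(( $(date +%s) - start ))s"

        start=$(date +%s)
        rm -rf "$tree"
        echo "rm cycle $i: $(( $(date +%s) - start ))s"

        i=$((i + 1))
    done
}
```

Something like "time_cycles linux-2.6.7.tar.gz linux-2.6.7 3", run
from the gfs mount point, would print per-phase times for three
cycles.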

Attached is the result file for the tar, grepped to show only the gfs
calls.  I'll send the one for rm in a separate mail, to try to stay
under the list's mail size filter limit.

Hot spots for tar are:

gfs_dpin
gfs_glock_dq
glock_wait_internal
gfs_holder_init
gfs_glock_nq

Hot spots for rm are:

gfs_dpin
gfs_ail_empty
gfs_unlinked_get
do_strip
gfs_glock_dq

If you use the oprofile tool, don't make the mistake I did of mounting
gfs on the "/gfs" mountpoint.  opreport looked there first when
searching for the "gfs" kernel module's symbols (oops, bad format)!

Sequence I followed:

cd /gfsmount
opcontrol --start
cp /path/to/linux-2.6.7.tar.gz .
tar -xvzf linux-2.6.7.tar.gz
opcontrol --shutdown
opreport -lp /lib/modules/2.6.8.1/kernel > report

The sequence for the rm -rf test is similar.

Between oprofile runs, erase the old results with:

opcontrol --reset

-- Ben --

Opinions are mine, not Intel's


Attachment: report.tar.gfs
Description: report.tar.gfs

