more dput lock contentions in 2.6.38-rc?

Hi,
we are running the dbench benchmark and see a big performance drop in
2.6.38-rc compared to 2.6.37 on several machines with 2 or 4 sockets. We
have 12 disks mounted at /mnt/stp/dbenchdata/sd*/, and dbench runs
against data on those disks. According to perf, there is much more lock
contention:
In 2.6.37:    13.00%  dbench  [kernel.kallsyms]  [k] _raw_spin_lock
In 2.6.38-rc: 69.45%  dbench  [kernel.kallsyms]  [k] _raw_spin_lock
-     69.45%        dbench  [kernel.kallsyms]   [k] _raw_spin_lock
   - _raw_spin_lock
      - 48.41% dput
         - 61.17% path_put
            - 60.47% do_path_lookup
               + 53.18% user_path_at
               + 42.13% do_filp_open
               + 4.69% user_path_parent
            - 35.56% d_path
                 seq_path
                 show_vfsmnt
                 seq_read
                 vfs_read
                 sys_read
                 system_call_fastpath
                 __GI___libc_read
            + 2.17% do_filp_open
            + 1.72% mounts_release
         + 38.69% link_path_walk
      + 30.21% path_get
      + 19.08% nameidata_drop_rcu
      + 0.83% __d_lookup
It appears there is heavy lock contention when dput() releases the
dentries for '/', 'mnt', 'stp', 'dbenchdata' and 'proc' while dbench is
running.
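
For what it's worth, here is a minimal user-space sketch that recreates
the same access pattern. It is my own illustration, not the dbench code;
the paths and thread count are assumptions. Every stat() resolves a path
sharing the '/mnt/stp/dbenchdata' ancestors and ends in path_put()/dput()
on those dentries, and the extra thread re-reading /proc/self/mounts
exercises the seq_read -> show_vfsmnt -> d_path branch in the trace above:

/* Build with: gcc -O2 -pthread repro.c -o repro. Run until interrupted. */
#include <pthread.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define NTHREADS 48	/* assumed thread count, pick per machine */

static void *stat_loop(void *arg)
{
	long id = (long)arg;
	char path[128];
	struct stat st;

	/* Each thread hammers a path under the shared prefix, so every
	 * lookup takes and drops references on the common ancestor
	 * dentries (user_path_at -> path_put -> dput). */
	snprintf(path, sizeof(path),
		 "/mnt/stp/dbenchdata/sd%ld/file", id % 12);
	for (;;)
		stat(path, &st);
	return NULL;
}

static void *mounts_loop(void *arg)
{
	char buf[4096];

	/* Re-reading /proc/self/mounts drives the d_path/show_vfsmnt
	 * branch seen in the profile. */
	for (;;) {
		FILE *f = fopen("/proc/self/mounts", "r");
		if (!f)
			continue;
		while (fread(buf, 1, sizeof(buf), f) > 0)
			;
		fclose(f);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	long i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&t, NULL, stat_loop, (void *)i);
	pthread_create(&t, NULL, mounts_loop, NULL);
	pause();
	return 0;
}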

Thanks,
Shaohua
