On Mon, Jul 08, 2013 at 09:15:33PM -0400, Chris Mason wrote:
> Quoting Dave Chinner (2013-07-08 08:44:53)
> > [cc fsdevel because after all the XFS stuff I did some testing on
> > mmotm w.r.t. per-node LRU lock contention avoidance, and also some
> > scalability tests against ext4 and btrfs for comparison on some new
> > hardware. That bit ain't pretty. ]
> >
> > And, well, the less said about btrfs unlinks the better:
> >
> > +  37.14%  [kernel]  [k] _raw_spin_unlock_irqrestore
> > +  33.18%  [kernel]  [k] __write_lock_failed
> > +  17.96%  [kernel]  [k] __read_lock_failed
> > +   1.35%  [kernel]  [k] _raw_spin_unlock_irq
> > +   0.82%  [kernel]  [k] __do_softirq
> > +   0.53%  [kernel]  [k] btrfs_tree_lock
> > +   0.41%  [kernel]  [k] btrfs_tree_read_lock
> > +   0.41%  [kernel]  [k] do_raw_read_lock
> > +   0.39%  [kernel]  [k] do_raw_write_lock
> > +   0.38%  [kernel]  [k] btrfs_clear_lock_blocking_rw
> > +   0.37%  [kernel]  [k] free_extent_buffer
> > +   0.36%  [kernel]  [k] btrfs_tree_read_unlock
> > +   0.32%  [kernel]  [k] do_raw_write_unlock
>
> Hi Dave,
>
> Thanks for doing these runs. At least on Btrfs the best way to resolve
> the tree locking today is to break things up into more subvolumes.

Sure, but you can't do that for most workloads. Only on specialised
workloads (e.g. hashed directory tree based object stores) is this
really a viable option....

> I've got another run at the root lock contention in the queue after I
> get the skiplists in place in a few other parts of the Btrfs code.

It will be interesting to see how these new structures play out ;)

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx