On 09/04/2013 05:34 PM, Linus Torvalds wrote:
> On Wed, Sep 4, 2013 at 12:25 PM, Waiman Long <waiman.long@xxxxxx> wrote:
>> Yes, the perf profile was taken from an 80-core machine. There isn't any
>> scalability issue hiding for the short workload on an 80-core machine.
>> However, I am certain that more may pop up when running on an even larger
>> machine like the prototype 240-core machine that our team has been testing
>> on.
>
> Sure. Please let us know, I think it's going to be interesting to see
> what that shows.
>
> SGI certainly did much larger machines, but their primary target
> tended to be all user space, so they had things like "tons of
> concurrent page faults in the same process" rather than filename
> lookup or the tty layer.
>
>             Linus
I think SGI is more focused on compute-intensive workloads, whereas HP is
more focused on high-end commercial workloads like SAP HANA. Below is a
sample perf profile of the high-systime workload on a 240-core prototype
machine (HT off), running the 3.10-rc1 kernel with my lockref and seqlock
patches:
  9.61%  3382925  swapper  [kernel.kallsyms]  [k] _raw_spin_lock
          |--59.90%-- rcu_process_callbacks
          |--19.41%-- load_balance
          |--9.58%-- rcu_accelerate_cbs
          |--6.70%-- tick_do_update_jiffies64
          |--1.46%-- scheduler_tick
          |--1.17%-- sched_rt_period_timer
          |--0.56%-- perf_adjust_freq_unthr_context
           --1.21%-- [...]

  6.34%       99  reaim    [kernel.kallsyms]  [k] _raw_spin_lock
          |--73.96%-- load_balance
          |--11.98%-- rcu_process_callbacks
          |--2.21%-- __mutex_lock_slowpath
          |--2.02%-- rcu_accelerate_cbs
          |--1.95%-- wake_up_new_task
          |--1.70%-- scheduler_tick
          |--1.67%-- xfs_alloc_log_agf
          |--1.24%-- task_rq_lock
          |--1.15%-- try_to_wake_up
           --2.12%-- [...]

  5.39%        2  reaim    [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
          |--95.08%-- rwsem_wake
          |--1.80%-- rcu_process_callbacks
          |--1.03%-- prepare_to_wait
          |--0.59%-- __wake_up
           --1.50%-- [...]

  2.28%        1  reaim    [kernel.kallsyms]  [k] _raw_spin_lock_irq
          |--90.56%-- rwsem_down_write_failed
          |--9.25%-- __schedule
           --0.19%-- [...]
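
For anyone not following the lockref discussion, the core idea behind the
lockref patches mentioned above is roughly this: pack the spinlock and the
reference count into a single word, so the common get/put can be done with
one cmpxchg while the lock is free instead of bouncing the lock cache line
around. The code below is only a simplified userspace sketch of that concept
(the struct and function names are made up for illustration), not the actual
kernel implementation:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for a lock + reference count packed into one word. */
struct lockref_sketch {
        _Atomic uint64_t lock_count;    /* low 32 bits: lock word, high 32 bits: count */
};

/* Try to take a reference without acquiring the lock at all. */
static bool lockref_get_if_unlocked(struct lockref_sketch *lr)
{
        uint64_t old = atomic_load_explicit(&lr->lock_count, memory_order_relaxed);

        while ((uint32_t)old == 0) {                            /* lock word clear: not held */
                uint64_t new = old + ((uint64_t)1 << 32);       /* bump the count */

                if (atomic_compare_exchange_weak(&lr->lock_count, &old, new))
                        return true;    /* got a reference without touching the lock */
                /* cmpxchg failure reloads 'old'; re-check and retry */
        }
        return false;   /* lock is held: caller falls back to the locked slow path */
}

When the fast path fails (lock held or cmpxchg contention limit reached), the
caller simply falls back to taking the spinlock and updating the count as
before, so the slow-path behavior is unchanged.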
Longman