Re: Bad performance with XFS + 2.6.38 / 2.6.39

>>> The system seems to spend more time in I/O wait. Older
>>> kernels (2.6.32.xx and 2.6.26.xx) give better results.
> [ ... ]
>> When I go back to older kernels, the load goes down. With newer
>> kernels, everything still works correctly, but the load (as
>> reported by uptime) is higher.
> [ ... ]
>>> birnie:~/TRACE# uptime
>>> 11:48:34 up 17:18,  3 users,  load average: 0.04, 0.18, 0.23
>>> penderyn:~/TRACE# uptime
>>> 11:48:30 up 23 min,  3 users,  load average: 4.03, 3.82, 3.21
> [ ... ]

> But 'uptime' reports the load average, which is (roughly) the
> number of processes actually running on the CPU. If the load
> average is higher, that usually means that the file system is
> running better, not worse. It looks as if you are not clear
> whether you have a regression or an improvement.

Perhaps it would be useful to see the output of something like
'iostat -d -x 10' and 'vmstat 10' to see whether the load average
is higher because processes are waiting less and running more, or
because processes are stuck in 'iowait'. It can also help to run
'htop' with an '.htoprc' like this:

--------------------------------------------------------------
fields=0 48 2 17 38 39 13 14 46 62 63 1 
sort_key=63
sort_direction=1
hide_threads=0
hide_kernel_threads=0
hide_userland_threads=0
shadow_other_users=1
highlight_base_name=1
highlight_megabytes=1
highlight_threads=0
tree_view=0
header_margin=0
detailed_cpu_time=1
color_scheme=2
delay=15
left_meters=Memory AllCPUs 
left_meter_modes=1 1 
right_meters=Swap AllCPUs 
right_meter_modes=1 2 
--------------------------------------------------------------
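To put a number on the iowait question directly, here is a minimal
sketch (assuming a Linux kernel, reading the same counters that vmstat
does): it samples the cumulative iowait tick counter from /proc/stat
twice and prints the delta. A large delta alongside a high load average
would point at processes blocked on I/O rather than doing useful work.

```shell
#!/bin/sh
# Sketch, assuming the Linux /proc/stat layout: on the aggregate "cpu"
# line, the 5th value after the label (awk field 6) is cumulative time
# spent in iowait, in clock ticks.
t1=$(awk '/^cpu /{print $6}' /proc/stat)
sleep 2
t2=$(awk '/^cpu /{print $6}' /proc/stat)
echo "iowait ticks over 2s: $((t2 - t1))"
```

The counter only ever increases, so the delta directly measures how
much of the sampling interval the CPUs spent idle while I/O was
outstanding.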

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs