Re: dozens of xfsaild threads

Hi Ben, 

Thanks again for your help. This should be all the relevant information that you requested:

The workload is classic ETL (extract, transform, load); the problem shows up when we refresh the data warehouse (DWH).
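
In case it helps, the per-thread CPU usage of the xfsaild threads can be watched
while the refresh runs with something like the following (just a sketch; the exact
options may vary with the procps version):

top -b -H -n 1 | grep xfsaild                     # one batch-mode sample of all threads, filtered to xfsaild
ps -eLo pid,tid,pcpu,stat,comm | grep xfsaild     # one-shot per-thread listing with %CPU and state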

uname -a
Linux  3.0.42-0.7-default #1 SMP Tue Oct 9 11:58:45 UTC 2012 (a8dc443) x86_64 x86_64 x86_64 GNU/Linux

xfs_repair -V
xfs_repair version 3.1.8

Number of CPU: 8

processor       : 7
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Intel(R) Xeon(R) CPU           E5450  @ 3.00GHz
stepping        : 6
cpu MHz         : 2999.800
cache size      : 6144 KB
physical id     : 1
siblings        : 4
core id         : 3
cpu cores       : 4
apicid          : 7
initial apicid  : 7
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm dtherm tpr_shadow vnmi flexpriority
bogomips        : 6003.48
clflush size    : 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:

cat /proc/meminfo
MemTotal:       24738004 kB
MemFree:          903492 kB
Buffers:             364 kB
Cached:         22131464 kB
SwapCached:        28044 kB
Active:         15011808 kB
Inactive:        7694912 kB
Active(anon):   12375396 kB
Inactive(anon):  3024204 kB
Active(file):    2636412 kB
Inactive(file):  4670708 kB
Unevictable:        5160 kB
Mlocked:            5160 kB
SwapTotal:      18874364 kB
SwapFree:       13429788 kB
Dirty:               916 kB
Writeback:             0 kB
AnonPages:        559160 kB
Mapped:         13143856 kB
Shmem:          14821620 kB
Slab:             618944 kB
SReclaimable:     499808 kB
SUnreclaim:       119136 kB
KernelStack:        5512 kB
PageTables:        64124 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    31243364 kB
Committed_AS:   31705548 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      336820 kB
VmallocChunk:   34359375216 kB
HardwareCorrupted:     0 kB
AnonHugePages:    258048 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      129392 kB
DirectMap2M:    25034752 kB

cat /proc/mounts
rootfs / rootfs rw 0 0
udev /dev tmpfs rw,relatime,nr_inodes=0,mode=755 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/mapper/system-root / xfs rw,relatime,delaylog,noquota 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
/dev/cciss/c0d0p1 /boot ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=1,data="" 0 0
/dev/mapper/system-export /export xfs rw,relatime,delaylog,noquota 0 0
/dev/mapper/system-opt /opt xfs rw,relatime,delaylog,noquota 0 0
/dev/mapper/system-var /var xfs rw,relatime,delaylog,noquota 0 0
/dev/mapper/system-var /tmp xfs rw,relatime,delaylog,noquota 0 0
... etc

cat /proc/partitions

major minor  #blocks  name

 104        0  143338560 cciss/c0d0
 104        1     104391 cciss/c0d0p1
 104        2  143227507 cciss/c0d0p2
 253        0   10485760 dm-0
 253        1   10485760 dm-1
 253        2    8388608 dm-2
 253        3   18874368 dm-3
 253        4    5242880 dm-4
   8       48  536870912 sdd
   8        0  536870912 sda
   8       64  536870912 sde
   8       32  536870912 sdc
   8       80  536870912 sdf
   8       96  536870912 sdg
   8       16  536870912 sdb
   8      112  536870912 sdh
   8      128  536870912 sdi
   8      160  536870912 sdk
   8      144  536870912 sdj
   8      176  536870912 sdl
   8      192  536870912 sdm
   8      224  536870912 sdo
   8      240  536870912 sdp
   8      208  536870912 sdn
  65        0  536870912 sdq
  65       16  536870912 sdr
  65       32  536870912 sds
  65       48  536870912 sdt
  65       64  536870912 sdu
  65       80  536870912 sdv
  65       96  536870912 sdw
  65      112  536870912 sdx
  65      128  536870912 sdy
  65      144  536870912 sdz
  65      160  536870912 sdaa
  65      176  536870912 sdab
  65      192  536870912 sdac
  65      208  536870912 sdad
  65      224  536870912 sdae
  65      240  536870912 sdaf
 253        5  536870912 dm-5
 253        6  536870912 dm-6
 etc....
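
For what it's worth, each xfsaild is named after the device backing its mount
(e.g. xfsaild/dm-5), so the busy threads can be tied back to the volumes listed
above roughly like this (a sketch; dmsetup needs root):

ps -eo comm | grep '^xfsaild' | sort    # one xfsaild per mounted XFS filesystem, named after its device
dmsetup ls                              # device-mapper names with their (major:minor), i.e. which dm-N each one is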


On Fri, Feb 22, 2013 at 9:25 PM, Ben Myers <bpm@xxxxxxx> wrote:
Hi Erik,

On Fri, Feb 22, 2013 at 02:04:17PM +0100, Erik Knight wrote:
> We've recently noticed that our system is experiencing extreme performance
> problems when running large workloads. The problem seems to come from
> excessive System CPU time. Specifically dozens of xfsaild threads. We used
> to have SSD drives but recently switched to HDD, so some of us are thinking
> that there may be a configuration issue within XFS that is optimized for
> SSD but performs terribly slow on HDD.
>
> Can anyone explain what these threads do, what would cause so many of them
> to be running simultaneously or consume so much CPU?

AIL stands for Active Item List.  These threads write back metadata that has
already been logged, pushing it out to its final location on disk.  You'll have
one daemon per filesystem.

If you have a very metadata-intensive workload, they could get a workout.  It
would help to know a bit more about your workload and configuration.  Can you
provide the relevant information listed here?

http://www.xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

Regards,
        Ben

