Performance problems with kupdated and kswapd

I'm having performance problems. I just downloaded and compiled
kernel 2.4.18 in the hope that it would run better than the Red Hat
2.4.9-13 kernel, which caused data corruption on my RAID system.

However, with nothing on the system beyond the normal startup
processes, NFS serving the array, and a tar recovering data into
the array from a remote system, most of the load seems to go to
non-RAID activity.

Here is what is taking most of the load:

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
    6 root      20   0     0    0     0 SW   42.1  0.0   1:56 kupdated
    4 root      10   0     0    0     0 SW   13.5  0.0   1:58 kswapd

The lights on the drives hardly ever blink, data is moving
way too slowly, and I'm just getting frustrated: at this rate
I don't see any way in hell the 100GB will ever be recovered.
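
In case it's relevant: on 2.4 kernels, kupdated's write-back
behaviour is driven by the bdflush tunables, which can at least be
inspected. This is just a sketch assuming a stock 2.4 /proc layout
(the nine fields are described in Documentation/sysctl/vm.txt); I
haven't tried tuning them yet:

  # Show the bdflush parameters that control kupdated/bdflush (2.4.x)
  cat /proc/sys/vm/bdflush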

I know I must have forgotten something during the kernel
configuration or the array creation. Hopefully someone on
the list will know what I'm doing wrong.

I used mdadm to create the array (four 80GB Maxtor drives):

[root@fs01 etc]# mdadm --create /dev/md0 --level=0 --raid-disks=2
/dev/hdf1 /dev/hdh1
mdadm: array /dev/md0 started.
[root@fs01 etc]# mdadm --create /dev/md1 --level=0 --raid-disks=2
/dev/hdj1 /dev/hdl1
mdadm: array /dev/md1 started.
[root@fs01 etc]# mdadm --create /dev/md2 --level=1 --raid-disks=2
/dev/md0 /dev/md1
mdadm: array /dev/md2 started.
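
For completeness, I was also going to record the layout so the
arrays assemble the same way after a reboot. A sketch, assuming my
mdadm supports --detail --scan and that /etc/mdadm.conf is where my
setup reads it from:

  # Append the detected array layout to mdadm's config file
  mdadm --detail --scan >> /etc/mdadm.conf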

Here is the current output from mdadm --detail:

[root@fs01 etc]# mdadm -Q --detail /dev/md2
/dev/md2:
        Version : 00.90.00
  Creation Time : Fri Apr  5 22:32:43 2002
     Raid Level : raid1
     Array Size : 160071360 (152.65 GiB 163.91 GB)
    Device Size : 160071360 (152.65 GiB 163.91 GB)
     Raid Disks : 2
    Total Disks : 2
Preferred Minor : 2
    Persistance : Superblock is persistant

    Update Time : Fri Apr  5 22:32:43 2002
          State : dirty, no-errors
  Active Drives : 2
 Working Drives : 2
  Failed Drives : 0
   Spare Drives : 0


    Number   Major   Minor   RaidDisk   State
       0       9        0        0      active sync   /dev/md0
       1       9        1        1      active sync   /dev/md1
           UUID : 324ba547:7a880d5a:4034dc23:b95173a1

And here is the /proc/mdstat:

[root@fs01 etc]# cat /proc/mdstat
Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md2 : active raid1 md1[1] md0[0]
      160071360 blocks [2/2] [UU]
      [>....................]  resync =  0.2% (465000/160071360) finish=19600.7min speed=133K/sec
md1 : active raid0 hdl1[1] hdj1[0]
      160071424 blocks 64k chunks

md0 : active raid0 hdh1[1] hdf1[0]
      160071424 blocks 64k chunks

unused devices: <none>
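
At 133K/sec that resync alone would take almost two weeks. From what
I've read, the md resync rate is throttled through /proc; I haven't
tried changing it yet, so this is only a sketch assuming a stock 2.4
/proc layout:

  # Current resync throttle values, in KB/sec per device
  cat /proc/sys/dev/raid/speed_limit_min
  cat /proc/sys/dev/raid/speed_limit_max

  # Example: raise the minimum so the resync isn't starved by other I/O
  echo 10000 > /proc/sys/dev/raid/speed_limit_min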


Another question I had: how could I have created md2 without
triggering a resync? I think the resync is a waste of time, since
my next command is a mkreiserfs, but I didn't see an option for
skipping it in the man page (hopefully I read it right).
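
What I was hoping for is a create-time flag that skips the initial
resync. I didn't spot one in my mdadm's man page, but newer versions
appear to have --assume-clean; a sketch, assuming that option exists
in your mdadm and that skipping the resync really is safe when the
next step is a mkreiserfs:

  # Hypothetical for my setup: create the mirror without an initial resync
  mdadm --create /dev/md2 --level=1 --raid-disks=2 --assume-clean \
        /dev/md0 /dev/md1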

Thanks in advance,

Alberto


