find errors in a directory of files

Hello list,

I'm trying to write a script that will search through a directory of trace
logs for an Oracle database. From what I understand, new files are always
being created in the directory, and it's not possible to know the exact
names of the files before they are created. The purpose of this is to
create service checks in Nagios. Because you don't know the names of the
files ahead of time, traditional plugins like check_logs or
check_logfiles.pl won't work.

Here's what I was able to come up with:

#!/bin/bash

log1='/u01/app/oracle/admin/ecom/udump/*'

# Unquoted $log1 lets the shell expand the glob into the current list
# of trace files; each grep's output is word-split into an array.
crit1=($(grep 'ORA-00600' $log1))
crit2=($(grep 'ORA-04031' $log1))
crit3=($(grep 'ORA-07445' $log1))

status=0

# [ $crit1 ] expands to the first array element, so the test passes
# when at least one match was found.
if [ $crit1 ]; then
    echo "$crit1 on ecom1"
    status=2
elif [ $crit2 ]; then
    echo "$crit2 on ecom1"
    status=2
elif [ $crit3 ]; then
    echo "$crit3 on ecom1"
    status=2
fi

echo $status
exit $status


This is a very early version of the script, so as you can see I'm echoing
the exit status at the end as a test message (status 2 is what Nagios
treats as CRITICAL).

The problem with this script is that it can only detect one error in the
logs. If you echo more than one test phrase into a log file, or into
multiple log files, it still only picks up one error message.
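
What I think I need instead is something that counts every match across
all of the files rather than stopping at the first one. Here's a rough,
untested sketch of the idea (the logdir/count/err names are just
placeholders; grep -c counts matching lines):

#!/bin/bash

# Sketch: count every occurrence of each critical ORA error across all
# trace files, and go critical (status 2) if any were found.
logdir='/u01/app/oracle/admin/ecom/udump'
status=0

for err in ORA-00600 ORA-04031 ORA-07445; do
    # Concatenate all trace files and count matching lines;
    # 2>/dev/null hides the error when the directory is empty.
    count=$(cat "$logdir"/* 2>/dev/null | grep -c "$err")
    if [ "$count" -gt 0 ]; then
        echo "$count occurrence(s) of $err on ecom1"
        status=2
    fi
done

exit $status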

I was just wondering if anyone on the list might have a suggestion on how
best to accomplish this task?

Thanks
Tim

-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B

