Re: OOM killer on bapp02 again

On Thu, Mar 12, 2015 at 09:17:31AM -0600, Stephen John Smoogen wrote:
> > I know that MirrorManager2 will be coming, but I just saw that multiple
> > crawler processes were killed on bapp02.
> >
> > I also see in (at least) one of the crawler logs
> > (1995-stderr.log-20150312.gz):
> >
> >   File "/usr/lib/python2.6/site-packages/sqlobject/postgres/pgconnection.py", line 115, in makeConnection
> >     raise self.module.OperationalError("%s; used connection string %r" % (e, self.dsn))
> > psycopg2.OperationalError: FATAL:  the database system is shutting down
> > ; used connection string 'dbname=mirrormanager user=mirroradmin password=password host=db-mirrormanager port=5432'
> >
> > It also seems that the DB password is in the log file in cleartext.
> >
> > Currently there is also a crawler which takes 1.2 GB instead of the normal
> > 200 MB. This might also be a reason for the OOMs:
> >
> > 441      20633  5.2  8.0 1296468 1285924 ?     S    13:00   3:37 /usr/bin/python /usr/share/mirrormanager/server/crawler_perhost -c /etc/mirrormanager/prod.cfg --hostid 1508 --logfile /var/log/mirrormanager/crawler/1508.log
> >
> Thanks for the note. Someone must have kicked it because the process was
> gone.
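
A quick aside on the cleartext password: until MM2 lands, the DSN
could be scrubbed before it gets interpolated into the exception
message. A minimal sketch, assuming a libpq-style key=value
connection string; redact_dsn is a hypothetical helper, not existing
MirrorManager or SQLObject code:

    import re

    def redact_dsn(dsn):
        # Hypothetical helper: blank out the password value in a
        # libpq-style "key=value" DSN before it reaches a log file.
        return re.sub(r"password=\S+", "password=********", dsn)

    # e.g. in pgconnection.py's makeConnection, log the redacted string:
    # raise self.module.OperationalError("%s; used connection string %r"
    #                                    % (e, redact_dsn(self.dsn)))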

Another one, this time 1.4 GB:

441      29631 26.9  9.4 1522900 1512888 ?     R    15:00  20:27 /usr/bin/python /usr/share/mirrormanager/server/crawler_perhost -c /etc/mirrormanager/prod.cfg --hostid 218 --logfile /var/log/mirrormanager/crawler/218.log

and two more with 700 MB each.

The one with 1.4 GB is actually my mirror, and I think I understand
what is happening. Previously the crawler crawled over ftp or http,
and each directory (ftp) or file (http) was fetched separately and
added to the database. With the new rsync crawler, the information
about the whole mirror is held in memory for as long as it takes to
update the database. So rsync is faster during the crawl but requires
huge amounts of memory. As I am mirroring almost everything, the
crawler process needs a lot of memory for my mirror. With 30 parallel
crawlers and 16 GB of memory, this can lead to a situation where more
memory is required than is available.
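
Roughly, the shape of the problem looks like this. A sketch, not the
actual crawler_perhost code, and the listing parsing is simplified:

    import subprocess

    def rsync_listing(url):
        # Run rsync in listing mode (recursive, no destination) and
        # parse the complete file list into a single dict.
        p = subprocess.Popen(["rsync", "--no-motd", "-r", url],
                             stdout=subprocess.PIPE)
        out, _ = p.communicate()
        listing = {}
        for line in out.splitlines():
            parts = line.split(None, 4)   # mode, size, date, time, path
            if len(parts) == 5:
                listing[parts[4]] = (parts[0], parts[1])
        return listing

    # Unlike the old per-directory ftp / per-file http crawl, where each
    # entry could be processed and forgotten, this whole dict stays in
    # memory until the last row has been written to the database.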

It probably does not make much sense to change the current setup,
as MM2 will be available soon.

		Adrian

