Re: Performance

On Fri, 04 Feb 2000, Kelly Lynn Martin <kelly@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> On Fri, 4 Feb 2000 09:52:30 +0100 (MET), quinet@xxxxxxxxxx (Raphael Quinet) said:
> >I disagree.  This would only encourage some users to re-compile their
> >own version of the Gimp in a private directory in order to get around
> >the hardcoded limits.
> 
> Frankly, I disagree.  Systems where admins are likely to impose such
> restrictions are going to be ones where users don't have enough space
> to compile private copies of Gimp.

I wouldn't be too sure about that.  On a system that I used to
administer (the students' network at the university), I saw some
users who stored their applications in /var/tmp or /tmp while they
were logged in and deleted the stuff afterwards.  The quota was
something like 5 MB on the home directory and much larger in the
temporary directories (for good reasons), so they took advantage of
that.  Some of them re-compiled every time; others had stored the
compiled binaries on some external ftp servers and downloaded them
into /tmp every time they needed them.  This had some obvious impact
on security...  Other users were hiding their applications in system
directories that had to be world-writable, such as
/usr/local/lib/emacs/lock or /var/spool/mail...

Anyway, I would not be surprised if any limit hardcoded in the Gimp
were circumvented by some frustrated users who would re-compile their
own version of the main executable and put it somewhere when they
need it.  And as Mark said in another message, it is not our job to
enforce local policies (although we should not make them
un-enforceable either), so if the admin wants to restrict disk or
memory usage, they should use means other than the Gimp: ulimit and
quota are two examples.
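
Just for reference, the shell's ulimit builtin boils down to
setrlimit(2), so an admin-imposed limit amounts to something like the
rough sketch below (untested, the function name is mine, and the
numbers are arbitrary, only meant as an illustration):

  #include <sys/resource.h>

  static int
  limit_resources (void)
  {
    struct rlimit rl;

    /* Cap the size of any file the process may create (e.g. 32 MB);
       exceeding it delivers SIGXFSZ. */
    rl.rlim_cur = 32 * 1024 * 1024;
    rl.rlim_max = 32 * 1024 * 1024;
    if (setrlimit (RLIMIT_FSIZE, &rl) != 0)
      return -1;

    /* Cap CPU time (in seconds); exceeding the soft limit delivers
       SIGXCPU, and exceeding the hard limit gets you SIGKILL. */
    rl.rlim_cur = 3600;
    rl.rlim_max = 3660;
    return setrlimit (RLIMIT_CPU, &rl);
  }

The SIGXCPU part is exactly what the discussion below is about.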

> >Being a system administrator myself, I believe that an admin should
> >always suggest some limits (and maybe use some social engineering to
> >encourage users to respect these limits) but should avoid hard
> >limits.   
> 
> It depends on the kind of users you have and the hardware you're
> running.  Imposing hard limits is sometimes the only way to deal with
> certain types of users.

Yes, it is sometimes very hard to convince some users.  But here is an
example: on one system with limited disk space (old DEC 3100 Ultrix
workstations), we had set up some quotas and the disks were constantly
full.  All users were using the maximum space available under their
quota, and they only started cleaning up when they had exceeded their
quota.  Then we tried an experiment: instead of decreasing the quotas,
we decided to increase them significantly for everybody, but every
week a "high score" list of disk usage was printed at the entrance of
the terminal room, with the names of the top 50 users.  This was not a
perfect solution, but there was enough social pressure to make sure
that nobody stayed at the top of the list for a long time.  This
solved several problems: most users started to clean up their home
directory before entering the top 20, and those who had a valid reason
to consume more disk space could easily explain it to the others.
Those who could not explain why they consumed so much disk space had
to make some room so that others could continue working.  Well, that's
only an example and it cannot be applied in all cases (e.g. the users
have to know and trust each other to some extent, otherwise such a
system will just generate suspicion or hatred between them).  Ah well,
it looks like I got carried away and this is off-topic for this list.
Sorry...

> >On the other hand, if ulimits are used to limit the maximum file size
> >or CPU usage, there is not much that we could do about it.  Same if
> >disk quotas are activated.  The Gimp can have some control over its
> >memory usage, but many parts of the code assume that the disk space
> >is unlimited (or is not the main constraint).
> 
> Yup.  It might be nice to catch SIGXCPU and try to do an orderly
> shutdown before the SIGKILL does ya' in, though. :)

As long as this is not in glib or libgimp; otherwise I know that some
members of this list would complain about plug-ins and signal handlers
:-)

On second thought...  The default action for SIGXCPU and SIGXFSZ is
to generate a core dump.  Maybe it would be better to get the core
dump and recover whatever is left inside it, instead of desperately
trying to save the file and getting a SIGKILL while the save is in
progress?  On third thought...  If your disk quota is exceeded, you
will not even get the core dump.  On fourth thought... :-)  Who in
their right mind would use the Gimp on a system that has such strict
constraints?
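
For what it's worth, if somebody did want to catch SIGXCPU in the
main executable (and not in glib or libgimp), a minimal sketch could
look like the one below.  This is untested and the names are made up
for the example; the handler only sets a flag, because doing any real
work (like saving an image) from inside a signal handler is not safe:

  #include <signal.h>

  static volatile sig_atomic_t cpu_limit_hit = 0;

  static void
  xcpu_handler (int signum)
  {
    /* Don't try to save the image from here; just set a flag and let
       the main loop notice it and start an emergency save. */
    cpu_limit_hit = 1;
  }

  static int
  install_xcpu_handler (void)
  {
    struct sigaction sa;

    sa.sa_handler = xcpu_handler;
    sigemptyset (&sa.sa_mask);
    sa.sa_flags = 0;

    return sigaction (SIGXCPU, &sa, NULL);
  }

The main loop would then have to check the flag and try to save
before the hard limit is reached and the kernel sends SIGKILL.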

-Raphael


