close a file descriptor in another process

Hi All,

I have a process that has about ten files open, but
only one of them is currently in use. Each of these
ten files is quite large (about 100 MB each), and the
partition is getting full (80% used). I therefore used
gzip to compress the other nine files to save some
space. However, the disk usage did not decrease; it
actually increased to 85% after gzip!

When I use lsof to check the files opened by that
process, it shows some of them marked as deleted,
like this:

tecs 17138 tecs 9w REG 104,3 113490637 98458 /logs/output.log.20050308 (deleted)
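For context, the "(deleted)" state above is easy to reproduce and inspect; a minimal sketch (the temp file path and fd number 9 are arbitrary, chosen to mirror the lsof output):

```shell
# Demo of a deleted-but-still-open file on Linux.
tmp=$(mktemp)
exec 9>"$tmp"           # keep fd 9 open on the file
rm "$tmp"               # unlink it; the blocks are NOT freed yet
readlink /proc/$$/fd/9  # target path ends in " (deleted)"
exec 9>&-               # closing the last fd finally frees the space
```

Running `lsof +L1 -p <pid>` is another way to list a process's open files whose link count has dropped to zero.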

The kernel is still holding those file descriptors
open, so the space has not been released. What should
I do to free the space? I cannot stop the process
because it is in production.

If I use cp /proc/17138/fd/9 /logs/output.log.20050308,
it recreates the original file, but lsof still shows
the old one as deleted. So running
cat /dev/null > /logs/output.log.20050308 only
truncates the new copy, not the file the process is
actually writing to. Is there any way to get at the
original file so that I can truncate it and reclaim
the disk space?
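One way to do this on Linux, without touching the process at all, is to truncate the deleted file through its /proc fd entry. A sketch, taking PID 17138 and fd 9 from the lsof output above (run as root):

```shell
# Truncate the deleted file in place through the process's fd entry.
# Opening /proc/<pid>/fd/<n> for writing with O_TRUNC truncates the
# real underlying file, so its blocks are freed immediately while the
# process keeps its descriptor.
: > /proc/17138/fd/9
# equivalently:
# truncate -s 0 /proc/17138/fd/9
```

One caveat: if the process did not open the log with O_APPEND, its next write lands at the old file offset, recreating a sparse file with the old apparent size (though it consumes little real disk space).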

Or, how do I close a file descriptor in another
process without stopping that process (assuming I
have root privileges)?

Thanks in advance.

Anson


-- 
redhat-list mailing list
unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
https://www.redhat.com/mailman/listinfo/redhat-list
