Re: A hard link problem

Following up my own post: my kludge was
ls -Rli | sort -u -k1,1n | awk '{ total += $6 } END { print "total: " total }'
(unique-sorting on the inode field, so two links to the same inode count once even when the names differ; the size is field 6 of ls -li output)

My result was pretty much what df gave, so we do have a problem. Now I'm looking into alternatives, like incremental tar backups....
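
If we go the GNU tar route, it would look roughly like this (a sketch; the snapshot-file and archive paths here are placeholders):

   # Level 0: full dump; tar records per-file state in the snapshot file
   tar --create --listed-incremental=/var/backups/data.snar \
       --file=/backup/data-full.tar /data

   # Level 1: run against a *copy* of the level-0 snapshot, so the
   # original snapshot stays usable as the base for later runs
   cp /var/backups/data.snar /var/backups/data-1.snar
   tar --create --listed-incremental=/var/backups/data-1.snar \
       --file=/backup/data-incr1.tar /data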

   mark

---- Original message ----
>Date: Tue, 14 Apr 2009 10:34:35 -0400 (EDT)
>From: <m.roth2006@xxxxxxx>  
>Subject: A hard link problem  
>To: General Red Hat Linux discussion list <redhat-list@xxxxxxxxxx>
>
>We're backing up using rsync and hard links. The problem is that the fs is filling up *fast*.
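>
>(The setup is, roughly, the usual --link-dest rotation -- something like this, with made-up paths:)
>
>   # Each new snapshot hard-links unchanged files to the previous one,
>   # so an unchanged file costs a directory entry, not new data blocks.
>   rsync -a --delete --link-dest=/backup/daily.1 /data/ /backup/daily.0/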
>
>According to df -k:
>  1K-blocks       Used  Available  Use%
>  154814444  108694756   38255576   74%
>According to du -s -k, I've got 123176708 KB in use, which is actually larger than df's used figure (unless it's too early in the morning for me to read that right).
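>
>Converting both to GiB (just KB / 2^20) to eyeball the gap:
>
>   awk 'BEGIN { printf "df used: %.1f GiB  du: %.1f GiB\n", 108694756/2^20, 123176708/2^20 }'
>
>which comes out around 103.7 vs 117.5 GiB.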
>
>Now, ls -Ri | wc -l on one directory shows 10765, while ls -Ri | sort -u | wc -l on the same directory shows 3274, so yeah, there are a lot of hard links. What I need to figure out, so that we don't blow out the filesystem, is how much space is *really* in use. I'd like something a bit faster and more elegant than, say,
>   ls -Ri | awk '{print $1;}' > filelist
>then a shell script looping over each inode in the list with
>   find /backup -inum $inum -ls | awk '{print $7;}' >> total
>and finally
>   awk '{total += $1} END {print total}' total
>
>That would be a mess....
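>
>A single GNU find pass might be closer to what I want (a sketch, assuming GNU find's -printf; untested):
>
>   find /backup -type f -printf '%i %s\n' | sort -u -k1,1n \
>       | awk '{ total += $2 } END { print "total bytes: " total }'
>
>-printf '%i %s\n' emits inode and size, and the unique sort on the inode field counts each file's data once no matter how many links it has.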
>
>Suggestions?
>
>      mark

-- 
redhat-list mailing list
unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
https://www.redhat.com/mailman/listinfo/redhat-list
