find . -name *.log | xargs rm
was the way I was going to go, but then it said
-bash: /usr/bin/find: Argument list too long
rm: too few arguments
I also tried
mv *.log /dev/null
which generated a similar error.
Silly me, I thought it was rm that was complaining when it was in fact
bash!!
Obviously the argument list that bash generates by expanding the wildcard
'*.log' is too long for it to pass to any command.
Just like Juri said, it is a shell limitation. I have looked, but haven't
seen any options or shell variables that would let me raise this limit.
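For what it's worth, the limit itself can at least be inspected, even if it
cannot be raised from the shell:

getconf ARG_MAX    # maximum combined size, in bytes, of the arguments plus environment passed to a new program

On a stock 2.4 kernel this is typically 131072 (128 KB).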
So I quoted the wildcard, so that bash would pass it through unexpanded, and
let 'find' handle it:
find . -name '*.log' | xargs rm
which did it.
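For the record, these variants should work just as well; the first also copes
with filenames containing spaces or newlines (assuming GNU find and xargs):

find . -name '*.log' -print0 | xargs -0 rm
find . -name '*.log' -exec rm {} \;

The -exec form runs rm once per file, so it is slower, but it avoids xargs
entirely.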
Thanks, Stephen, for the find and xargs insight. Thanks to all for replying,
although this is clearly not a filesystem issue (which wasn't clear to
me when I presented it initially).
-------- Original Message --------
Subject: -bash: /bin/rm: Argument list too long
Date: Mon, 24 Feb 2003 11:29:57 -0900
From: N Nair <nandagopalnair@netscape.net>
To: ext3-users@redhat.com
References: <20030221170912.E1E343F3C7@listman.redhat.com>
Folks:
Is there a limit to the number of arguments that can be passed to
fileutils programs such as mv or rm? If yes, is it filesystem
dependent, kernel config dependent, or fileutils version dependent? Can
this maximum limit be tuned/controlled? I googled it a bit, but couldn't
find anything more relevant than a message in the OS X forum:
http://www.omnigroup.com/mailman/archive/macosx-admin/2002-December/027907.html
Someone asked me this question in a Linux-related forum, and I
decided to experiment on my box (Red Hat 8.0 running vanilla kernel
2.4.20; fileutils version 4.1.9) to find out what the current limit
was. I found that 12806 filenames can be passed to rm at a time without
generating the error shown in the subject line. (It took me about a
dozen tries to reach this figure.)
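One way to reproduce this kind of test (a rough sketch, not the exact commands
I used; the count at which rm starts failing depends on how long the
filenames are):

mkdir /tmp/argtest && cd /tmp/argtest
for i in `seq 1 13000`; do touch "testfile_$i.log"; done    # create enough files to overflow the limit
rm testfile_*.log    # the expanded argument list is too large, so the shell never even starts rm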
Your comments would be extremely valuable.
Nandu.