360GB when I started. df shows the disk only partially full when it stops.
The disk vendor has just confirmed firmware issues, so it's in their court
now. Thanks all for your input.

Tony

on 27/3/02 11:26 PM, Bryan Kadzban at bwkadzba@mtu.edu wrote:

> Tony Clark wrote:
>> I have a little script that makes a bunch of large files to give the
>> filesystem a beating. It goes like this:
>>
>> #!/bin/bash
>> for ((i = 0; i != 301; i++)); do
>>     time -p dd if=/dev/zero of=./bigfile.$i bs=1024k count=1024
>>     echo $i
>> done
>>
>> About 4 files in, it dies with 'dd: writing `./bigfile.4': Read-only
>> file system'
>>
>
> This may be a dumb question, but you do have more than 4GB of space
> available on the RAID array when you start, correct? From this script,
> you'll need just over 300GB in order to not run out: with a block size
> of 1024k (1MB) and a count of 1024, each dd writes 1024MB, or 1GB, and
> the loop writes 301 of those 1GB files. I could see why the filesystem
> might run out of room.
>
> Although the "journal block not found" message *does* point to general
> data corruption, you never know, this might have something to do with
> it too.

--
Tony Clark              tony@rsp.com.au
Rising Sun Pictures     Tel: +61 8 8364 6074
Adelaide, Australia     Fax: +61 8 8364 6075
http://www.rsp.com.au/
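
For anyone wanting to reproduce the test, here is a sketch of the same
stress script with Bryan's free-space check done up front, and with the
df output captured at the moment dd first fails (which is the symptom
above: the disk goes read-only while df says it is only partially full).
The mount point /mnt/raid is an assumption; substitute your array's
mount point.

#!/bin/bash
# Sketch of the stress test quoted above, plus a free-space check and
# failure capture. /mnt/raid is an assumption; use your own mount point.
MNT=/mnt/raid
NEED=$((301 * 1024 * 1024))   # 301 files x 1GB each, in 1K blocks

# df -Pk prints available space in 1K blocks in column 4 (POSIX format,
# so the data stays on one line regardless of device-name length).
avail=$(df -Pk "$MNT" | awk 'NR == 2 { print $4 }')
if [ "$avail" -lt "$NEED" ]; then
    echo "only ${avail}K free on $MNT, need ${NEED}K" >&2
    exit 1
fi

cd "$MNT" || exit 1
for ((i = 0; i != 301; i++)); do
    time -p dd if=/dev/zero of=./bigfile.$i bs=1024k count=1024
    # The time keyword passes through dd's exit status.
    if [ $? -ne 0 ]; then
        # Record the filesystem state at the moment of failure.
        echo "dd failed on file $i; df says:" >&2
        df -Pk "$MNT" >&2
        exit 1
    fi
    echo $i
done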