Linux 2.4.18 on RH 7.2 - odd failures

Tony Clark wrote:
> I have a little script that makes a bunch of large files to give the 
> filesystem a beating.  It goes like this:
> 
> #!/bin/bash
> for ((i=0; i != 301; i++)); do
>   time -p dd if=/dev/zero of=./bigfile.$i bs=1024k count=1024
>   echo $i
> done
> 
> About 4 files in, it dies with 'dd: writing `./bigfile.4': Read-only file system'
> 

This may be a dumb question, but you do have more than 4GB of space available on 
the RAID array when you start, correct?  From this script you'll need about 301GB 
to finish: with a dd block size of 1024k (1 megabyte) and a count of 1024, each 
file comes to 1024 megabytes, or 1GB, and the loop writes 301 of those (i runs 
from 0 to 300).  Since you're writing out that many 1GB files, I could see why 
the filesystem might run out of room.
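
If it is just a space problem, a quick guard at the top of the script would 
catch it before dd starts failing.  Something like this (an untested sketch; 
the target directory and file count are assumptions, adjust them to match 
your setup):

  #!/bin/bash
  # Abort early if the target filesystem can't hold the whole run.
  TARGET=.                        # directory the bigfiles land in (a guess)
  FILES=301                       # the loop runs i=0..300
  PER_FILE_KB=$((1024 * 1024))    # bs=1024k * count=1024 = 1GB per file
  NEEDED_KB=$((FILES * PER_FILE_KB))
  AVAIL_KB=$(df -P -k "$TARGET" | awk 'NR==2 {print $4}')
  if [ "$AVAIL_KB" -lt "$NEEDED_KB" ]; then
      echo "only ${AVAIL_KB}KB free, need ${NEEDED_KB}KB" >&2
      exit 1
  fi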

Although the "journal block not found" message *does* point to general data 
corruption, you never know; running out of space might have something to do 
with it too.
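
If it does turn out to be corruption, the first thing I'd try is unmounting 
the filesystem and forcing a full check.  The mount point and device below 
are only placeholders; substitute whatever your array actually is:

  umount /mnt/raid
  e2fsck -f -v /dev/md0

e2fsck with -f forces the check even if the filesystem looks clean.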




