Re: mdadm raid5 array - 0 space available but usage is less than capacity

Well, it's an ext3 file system. Here's the output of df -Ti:

Filesystem    Type    Inodes   IUsed   IFree IUse% Mounted on
/dev/md1      ext3   1466368  215121 1251247   15% /
varrun       tmpfs    257853      85  257768    1% /var/run
varlock      tmpfs    257853       2  257851    1% /var/lock
udev         tmpfs    257853    3193  254660    2% /dev
devshm       tmpfs    257853       1  257852    1% /dev/shm
/dev/md0      ext3     48192      38   48154    1% /boot
/dev/md2      ext3   242147328  151281 241996047    1% /share
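
Inodes look fine on /dev/md2 (1% used), so my guess, assuming the
default ext3 settings, is the reserved blocks: mke2fs reserves 5% of
the blocks for root by default, which on this 3.6T filesystem is
roughly 192G, more than the ~138G that df says is actually free
(3843709832 - 3705379188 = 138330644 1K-blocks). Non-root writes would
then see 0 available, which would also explain why touch works (an
empty file needs no new data block) but mkdir fails (a new directory
does). Something along these lines should confirm it and, if so,
shrink the reserve:

rob@cholera ~ $ sudo tune2fs -l /dev/md2 | grep -i 'reserved block'
rob@cholera ~ $ sudo tune2fs -m 1 /dev/md2    # drop the root reserve from 5% to 1%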

Cheers
Rob


On 23 September 2010 20:53, Kaizaad Bilimorya <kaizaad@xxxxxxxxxxx> wrote:
>
>
> On Thu, 23 Sep 2010, Robin Doherty wrote:
>
>> I have a RAID5 array of five 1TB disks that has worked fine for two
>> years but now says it has 0 space available (even though it does have
>> space available). It will allow me to read from it but not write. I
>> can delete things, and the usage goes down, but the available space
>> stays at 0.
>>
>> I can touch but not mkdir:
>>
>> rob@cholera ~ $ mkdir /share/test
>> mkdir: cannot create directory `/share/test': No space left on device
>> rob@cholera ~ $ touch /share/test
>> rob@cholera ~ $ rm /share/test
>> rob@cholera ~ $
>>
>> Output from df -h (/dev/md2 is the problem array):
>>
>> Filesystem            Size  Used Avail Use% Mounted on
>> /dev/md1               23G   15G  6.1G  72% /
>> varrun               1008M  328K 1007M   1% /var/run
>> varlock              1008M     0 1008M   0% /var/lock
>> udev                 1008M  140K 1008M   1% /dev
>> devshm               1008M     0 1008M   0% /dev/shm
>> /dev/md0              183M   43M  131M  25% /boot
>> /dev/md2              3.6T  3.5T     0 100% /share
>>
>> and without the -h:
>>
>> Filesystem           1K-blocks      Used Available Use% Mounted on
>> /dev/md1              23261796  15696564   6392900  72% /
>> varrun                 1031412       328   1031084   1% /var/run
>> varlock                1031412         0   1031412   0% /var/lock
>> udev                   1031412       140   1031272   1% /dev
>> devshm                 1031412         0   1031412   0% /dev/shm
>> /dev/md0                186555     43532    133391  25% /boot
>> /dev/md2             3843709832 3705379188         0 100% /share
>
>
> Just a shot in the dark, but I have seen this with Lustre systems. What
> does "df -i" show?
>
> thanks
> -k
>

