No space left on device

Hi,

Yes, we already checked that:
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda1            3055616   51143 3004473    2% /
none                 2057204     720 2056484    1% /dev
none                 2058311       1 2058310    1% /dev/shm
none                 2058311      22 2058289    1% /var/run
none                 2058311       2 2058309    1% /var/lock
none                 2058311       3 2058308    1% /lib/init/rw
/dev/sdb             4818066560 4277250 4813789310    1% /storage/6
/dev/sda6            1220608    2232 1218376    1% /var
/dev/sda7            2445984      12 2445972    1% /tmp

We managed to free some more space (now ~1 TB free), and creating files
seems to work again. However, during the migration of a user, some files
don't show up when we list his directory, although we can still access
them directly via ls <filename>.
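
If it helps with the diagnosis, this is roughly what one could run directly
on the file servers (on the brick paths, not on the glusterfs mount) to see
on which brick the missing files actually ended up; the getfattr call is
only a sketch for dumping the distribute layout xattrs and assumes the attr
tools are installed on the servers:

# directly on each file server, on the brick, not on the mount:
ls -la /storage/6/cluster/<user>/
# dump the directory's extended attributes (e.g. trusted.glusterfs.dht):
getfattr -d -m . -e hex /storage/6/cluster/<user>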

We decided to reverse the migration and try again, but then I was not 
able to delete his directory:

rm: FATAL: directory `delete_me/rootfiles/dplusdminus' changed dev/ino

Also, when we tried chown -R <user>:<group> user/, the following error 
popped up:

chown: fts_read failed: No such file or directory

Additionally, some directories show up in ls -l like this:

?---------  ? ?    ?        ?            ? b_dd
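
As far as we understand, the question marks mean that ls could read the
directory entry, but the subsequent stat() on it failed. A rough way to
confirm that (b_dd is the entry shown above; the brick path is just an
example from our setup):

# on the glusterfs mount (this will likely fail, matching the listing above):
stat b_dd
# on each file server, check what the brick actually holds for this entry:
ls -la /storage/6/cluster/<user>/b_dd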

All of this is really confusing to us. Can someone shed some light on this 
mess?

Regards,
Daniel


On 01/19/2011 10:53 AM, Andrew Séguin wrote:
> Hi,
>
> Maybe a silly thought / maybe you have already thought of it, but is
> there still some free inodes on the underlying filesystem (df -i)?
>
> Regards,
> Andrew
>
> On Wed, Jan 19, 2011 at 10:44 AM, Daniel Zander
> <zander at ekp.uni-karlsruhe.de>  wrote:
>> Hi,
>>
>> we just made some space available on the affected brick (386 GB free), but
>> the same problem remains. I don't think this is really a GlusterFS problem,
>> though, since even root cannot create any directories directly on the file
>> server anymore.
>>
>> Thanks so far,
>> Daniel
>>
>>
>>
>> On 01/19/2011 10:12 AM, Mark "Naoki" Rogers wrote:
>>>
>>> I think you might want to look into re-balance:
>>>
>>> http://europe.gluster.org/community/documentation/index.php?title=Gluster_3.1:_Rebalancing_Volumes&redirect=no
>>>
>>> It's generally for adding/removing bricks but might re-distribute data
>>> in a way that solves your disk space issue.
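>>>
>>> Roughly, assuming the GlusterFS 3.1 CLI syntax (replace <VOLNAME> with the
>>> name of your volume, e.g. lemmy):
>>>
>>> gluster volume rebalance <VOLNAME> start    # kick off the rebalance
>>> gluster volume rebalance <VOLNAME> status   # check its progress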
>>>
>>>
>>> On 01/19/2011 04:43 PM, Daniel Zander wrote:
>>>>
>>>> Hi!
>>>>
>>>>> Assuming you are doing a straight distribute there(?), if the user in
>>>>
>>>> Yes, it's a distributed volume.
>>>>
>>>>> question is hashed onto the brick that is 100% full you'll get a space
>>>>
>>>> Is there a way around this other than moving files away from this one
>>>> brick by hand?
>>>>
>>>>> error. Not sure I followed your migration details though, when you say
>>>>> "user directories were moved into one of the above folders" do you mean
>>>>> copied directly onto the individual storage bricks?
>>>>
>>>> Yes, e.g., user_a had the following directories:
>>>> server5:/storage/5/user_a
>>>> server6:/storage/6/user_a
>>>>
>>>> Then we performed a move:
>>>> ssh server5 "mv /storage/5/user_a/ /storage/5/cluster/user_a"
>>>> ssh server6 "mv /storage/6/user_a/ /storage/6/cluster/user_a"
>>>>
>>>> This was done because it would not cause any network traffic. Then the
>>>> volume was created with the following bricks (a sketch of the create
>>>> command is shown after the list):
>>>> Brick1: 192.168.101.249:/storage/4/cluster
>>>> Brick2: 192.168.101.248:/storage/5/cluster
>>>> Brick3: 192.168.101.250:/storage/6/cluster
>>>> Brick4: 192.168.101.247:/storage/7/cluster
>>>> Brick5: 192.168.101.246:/storage/8/cluster
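>>>>
>>>> (For reference, a plain distribute volume with these bricks would be
>>>> created with something along these lines; this is only an illustration,
>>>> not necessarily the exact command that was used.)
>>>>
>>>> gluster volume create lemmy transport tcp \
>>>>     192.168.101.249:/storage/4/cluster 192.168.101.248:/storage/5/cluster \
>>>>     192.168.101.250:/storage/6/cluster 192.168.101.247:/storage/7/cluster \
>>>>     192.168.101.246:/storage/8/cluster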
>>>>
>>>> Regards,
>>>> Daniel
>>>>
>>>>> On 01/19/2011 05:01 AM, zander at ekp.uni-karlsruhe.de wrote:
>>>>>>
>>>>>> mv: cannot create regular file `/storage/cluster/<etc...>': No space
>>>>>> left on device
>>>>>>
>>>>>> Doing df -h tells me, however:
>>>>>>
>>>>>> glusterfs#192.168.101.247:/lemmy
>>>>>> 104T 69T 36T 66% /storage/cluster
>>>>>>
>>>>>> It may be of importance that one brick in the cluster is actually 100%
>>>>>> full, while the others are almost completely empty. I am using GlusterFS
>>>>>> 3.1.1; the file servers are running Debian Lenny or Ubuntu Server 10.04,
>>>>>> and the clients are SLC4, SLC5, CentOS and Ubuntu Server 10.04.
>>>>>>

