No space left on device

Hi,

Maybe a silly thought, and maybe you have already thought of it, but are
there still free inodes left on the underlying filesystem (df -i)?
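
For example (paths assumed from the bricks listed below), on each file
server:

df -i /storage/4 /storage/5 /storage/6 /storage/7 /storage/8

If IUse% is at 100% on any of them, new files and directories cannot be
created on that brick even though df -h still shows free space.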

Regards,
Andrew

On Wed, Jan 19, 2011 at 10:44 AM, Daniel Zander
<zander at ekp.uni-karlsruhe.de> wrote:
> Hi,
>
> we just made some space available on the affected brick (386 GB free), but
> the problem remains. I don't think this is really a GlusterFS problem,
> though, since even root can no longer create directories directly on the
> file server.
>
> Thanks so far,
> Daniel
>
>
>
> On 01/19/2011 10:12 AM, Mark "Naoki" Rogers wrote:
>>
>> I think you might want to look into re-balance:
>>
>> http://europe.gluster.org/community/documentation/index.php?title=Gluster_3.1:_Rebalancing_Volumes&redirect=no
>>
>> It's generally for adding/removing bricks but might re-distribute data
>> in a way that solves your disk space issue.
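>>
>> In 3.1 that should be something along the lines of (volume name assumed
>> to be "lemmy", as in your df output; please check the syntax for your
>> exact version):
>>
>> gluster volume rebalance lemmy start
>> gluster volume rebalance lemmy status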
>>
>>
>> On 01/19/2011 04:43 PM, Daniel Zander wrote:
>>>
>>> Hi!
>>>
>>>> Assuming you are doing a straight distribute there(?), if the user in
>>>
>>> Yes, it's a distributed volume.
>>>
>>>> question is hashed onto the brick that is 100% full you'll get a space
>>>
>>> Is there a way around this other than moving files away from this one
>>> brick by hand?
>>>
>>>> error. Not sure I followed your migration details though, when you say
>>>> "user directories were moved into one of the above folders" do you mean
>>>> copied directly onto the individual storage bricks?
>>>
>>> Yes, e.g. user_a had the following directories:
>>> server5:/storage/5/user_a
>>> server6:/storage/6/user_a
>>>
>>> Then we performed a move:
>>> ssh server5 "mv /storage/5/user_a/ /storage/5/cluster/user_a"
>>> ssh server6 "mv /storage/6/user_a/ /storage/6/cluster/user_a"
>>>
>>> This was done because it would not cause any network traffic. Then the
>>> volume was created from these bricks (create command sketched below):
>>> Brick1: 192.168.101.249:/storage/4/cluster
>>> Brick2: 192.168.101.248:/storage/5/cluster
>>> Brick3: 192.168.101.250:/storage/6/cluster
>>> Brick4: 192.168.101.247:/storage/7/cluster
>>> Brick5: 192.168.101.246:/storage/8/cluster
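>>>
>>> For reference, the create command was roughly the following (transport
>>> assumed to be tcp):
>>>
>>> gluster volume create lemmy transport tcp \
>>>   192.168.101.249:/storage/4/cluster 192.168.101.248:/storage/5/cluster \
>>>   192.168.101.250:/storage/6/cluster 192.168.101.247:/storage/7/cluster \
>>>   192.168.101.246:/storage/8/cluster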
>>>
>>> Regards,
>>> Daniel
>>>
>>>> On 01/19/2011 05:01 AM, zander at ekp.uni-karlsruhe.de wrote:
>>>>>
>>>>> mv: cannot create regular file `/storage/cluster/<etc...>': No space
>>>>> left on device
>>>>>
>>>>> Doing df -h tells me, however:
>>>>>
>>>>> glusterfs#192.168.101.247:/lemmy
>>>>> 104T 69T 36T 66% /storage/cluster
>>>>>
>>>>> It may be of importance that one brick in the cluster is actually 100%
>>>>> full, while others are almost completely empty. I am using GlusterFS
>>>>> 3.1.1, the file servers are running Debian Lenny or Ubuntu Server 10.04,
>>>>> and the clients are SLC4, SLC5, CentOS and Ubuntu Server 10.04.
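>>>>>
>>>>> Per-brick usage can be seen by checking each brick's filesystem directly
>>>>> on the file servers, e.g.:
>>>>>
>>>>> ssh server5 "df -h /storage/5"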
>>>>>

