Re: [Gluster-users] Fwd: dht_is_subvol_filled messages on client

Hi, can anyone suggest something for this issue? df and du show no
issues for the bricks, yet one subvolume is not being used by Gluster.
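
Since the warning is about inode usage rather than block usage, a
direct inode check on each brick filesystem would be something like
this (the brick path is taken from the df output quoted below):

  df -i /bricks/20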

On Wed, May 4, 2016 at 4:40 PM, Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:
> Hi,
>
> I changed cluster.min-free-inodes to "0" and remounted the volume on
> the clients. The inode-full messages no longer appear in syslog, but
> the disperse-56 subvolume is still not being used.
> Is there anything I can do to resolve this? I could destroy and
> recreate the volume, but I am not sure that would fix the issue...
> Maybe the disperse size of 16+4 is too big; should I change it to 8+2?
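>
> For reference, the option can be set with something like the following
> (the volume name v0 is inferred from the "0-v0-dht" log prefix and may
> differ):
>
>   gluster volume set v0 cluster.min-free-inodes 0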
>
> On Tue, May 3, 2016 at 2:36 PM, Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:
>> I also checked the df output; all 20 bricks look the same, like below:
>> /dev/sdu1 7.3T 34M 7.3T 1% /bricks/20
>>
>> On Tue, May 3, 2016 at 1:40 PM, Raghavendra G <raghavendra@xxxxxxxxxxx> wrote:
>>>
>>>
>>> On Mon, May 2, 2016 at 11:41 AM, Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:
>>>>
>>>> >1. What is the output of du -hs <back-end-export>? Please get this
>>>> > information for each of the bricks that are part of the disperse set.
>>>
>>>
>>> Sorry, I needed the df output of the filesystem containing each brick,
>>> not du. Sorry about that.
>>>
>>>>
>>>> There are 20 bricks in disperse-56, and the du -hs output is:
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 1.8M /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>> 80K /bricks/20
>>>>
>>>> I see that gluster is not writing to this disperse set. All the other
>>>> disperse sets are filled with 13GB each, but this one is empty. I see
>>>> the directory structure created, but no files in the directories.
>>>> How can I fix this? I will try a rebalance, but I don't think it will
>>>> write to this disperse set...
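>>>>
>>>> To check whether dht has assigned this subvolume an empty layout
>>>> range, the layout xattr on the brick directories could be inspected
>>>> with something like this (the directory name is only an illustration):
>>>>
>>>>   getfattr -n trusted.glusterfs.dht -e hex /bricks/20/somedir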
>>>>
>>>>
>>>>
>>>> On Sat, Apr 30, 2016 at 9:22 AM, Raghavendra G <raghavendra@xxxxxxxxxxx>
>>>> wrote:
>>>> >
>>>> >
>>>> > On Fri, Apr 29, 2016 at 12:32 AM, Serkan Çoban <cobanserkan@xxxxxxxxx>
>>>> > wrote:
>>>> >>
>>>> >> Hi, I cannot get an answer on the users list, so I am asking the
>>>> >> devel list.
>>>> >>
>>>> >> I am getting [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht:
>>>> >> inodes on subvolume 'v0-disperse-56' are at (100.00 %), consider
>>>> >> adding more bricks.
>>>> >>
>>>> >> message in the client logs. My cluster is empty; there are only a
>>>> >> couple of GB of files for testing. Why does this message appear in
>>>> >> syslog?
>>>> >
>>>> >
>>>> > dht uses disk-usage information from the backend export.
>>>> >
>>>> > 1. What is the output of du -hs <back-end-export>? Please get this
>>>> > information for each of the bricks that are part of the disperse set.
>>>> > 2. Once you have the du information from each brick, the value seen by
>>>> > dht will be based on how cluster/disperse aggregates the du info
>>>> > (basically the statfs fop).
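>>>> >
>>>> > To see the raw statfs numbers a brick filesystem reports, something
>>>> > like the following could be run on each brick node (among other
>>>> > fields, it prints the total and free inode counts):
>>>> >
>>>> >   stat -f /bricks/20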
>>>> >
>>>> > The reason for the 100% disk usage may be:
>>>> > In case 1, the backend fs might be shared by data other than the brick.
>>>> > In case 2, some issue with the aggregation.
>>>> >
>>>> >> Is it safe to ignore it?
>>>> >
>>>> >
>>>> > dht will try not to place data files on the subvol in question
>>>> > (v0-disperse-56). Hence the lookup cost will be two hops for files
>>>> > hashing to disperse-56 (note that other fops like read/write/open
>>>> > still have the cost of a single hop and don't suffer from this
>>>> > penalty). Other than that, there is no significant harm unless
>>>> > disperse-56 is really running out of space.
>>>> >
>>>> > regards,
>>>> > Raghavendra
>>>> >
>>>> > --
>>>> > Raghavendra G
>>>
>>> --
>>> Raghavendra G
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



