Re: Gluster-users Digest, Vol 74, Issue 3, Message 7:


 



Andy, does the disconnect appear mostly after a CLI change?  I'm trying to investigate the same issue with "disconnects".  Several of my clients report rpc_ping_timer_check not responding in 10 secs, which is the limit I've set, but I don't fully understand what trips that limit.

Prasanth, is the gluster update going to be made available for gluster 3.4.4?


Khoi Mai






From:        gluster-users-request@xxxxxxxxxxx
To:        gluster-users@xxxxxxxxxxx
Date:        06/03/2014 06:58 AM
Subject:        Gluster-users Digest, Vol 74, Issue 3
Sent by:        gluster-users-bounces@xxxxxxxxxxx




Send Gluster-users mailing list submissions to
                gluster-users@xxxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
                http://supercolony.gluster.org/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
                gluster-users-request@xxxxxxxxxxx

You can reach the person managing the list at
                gluster-users-owner@xxxxxxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-users digest..."


Today's Topics:

  1. Distributed  volumes (yalla.gnan.kumar@xxxxxxxxxxxxx)
  2. Re: Distributed volumes (Michael DePaulo)
  3. Re: Distributed  volumes (Franco Broi)
  4. Re: recommended upgrade procedure from gluster-3.2.7 to
     gluster-3.5.0 (Todd Pfaff)
  5. Re: Unavailability during self-heal for large volumes
     (Laurent Chouinard)
  6. Re: [Gluster-devel] autodelete in snapshots (M S Vishwanath Bhat)
  7. Brick on just one host constantly going offline (Andrew Lau)
  8. Re: Unavailability during self-heal for large volumes
     (Pranith Kumar Karampuri)
  9. Re: Brick on just one host constantly going offline
     (Pranith Kumar Karampuri)
 10. Re: Brick on just one host constantly going offline (Andrew Lau)
 11. Re: Brick on just one host constantly going offline
     (Pranith Kumar Karampuri)
 12. Re: Distributed  volumes (yalla.gnan.kumar@xxxxxxxxxxxxx)
 13. Re: Distributed  volumes (Franco Broi)
 14. Re: Distributed  volumes (yalla.gnan.kumar@xxxxxxxxxxxxx)
 15. Re: [Gluster-devel] autodelete in snapshots (Kaushal M)
 16. Re: Distributed  volumes (Franco Broi)
 17. Re: Distributed  volumes (yalla.gnan.kumar@xxxxxxxxxxxxx)
 18. Re: Distributed volumes (Kaushal M)
 19. Re: Distributed volumes (yalla.gnan.kumar@xxxxxxxxxxxxx)
 20. Re: Distributed volumes (Franco Broi)
 21. Re: Distributed volumes (Vijay Bellur)
 22. NFS ACL Support in Gluster 3.4 (Indivar Nair)
 23. Re: NFS ACL Support in Gluster 3.4 (Santosh Pradhan)


----------------------------------------------------------------------

Message: 1
Date: Mon, 2 Jun 2014 12:26:09 +0000
From: <yalla.gnan.kumar@xxxxxxxxxxxxx>
To: <gluster-users@xxxxxxxxxxx>
Subject: Distributed  volumes
Message-ID:
                <67765C71374B974FBFD2AD05AF438EFF0BD564B6@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
               
Content-Type: text/plain; charset="us-ascii"

Hi All,

I have created a distributed volume of 1 GB, using two bricks from two different servers.
I have written 7 files whose sizes total 1 GB.
How can I check that the files are distributed across both bricks?


Thanks
Kumar

________________________________

This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy.
______________________________________________________________________________________

www.accenture.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20140602/fef11c67/attachment-0001.html>

------------------------------

Message: 2
Date: Mon, 2 Jun 2014 09:04:44 -0400
From: Michael DePaulo <mikedep333@xxxxxxxxx>
To: yalla.gnan.kumar@xxxxxxxxxxxxx
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: [Gluster-users] Distributed volumes
Message-ID:
                <CAMKht8gtijHHZhza3T5zv5hRij9PzwrVvkxtSHDXYRkmGDYpEA@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8

On Mon, Jun 2, 2014 at 8:26 AM,  <yalla.gnan.kumar@xxxxxxxxxxxxx> wrote:
> Hi All,
>
>
>
> I have created a distributed volume of 1 GB ,  using two bricks from two
> different servers.
>
> I have written 7 files whose sizes are a total of  1 GB.
>
> How can I check that files are distributed on both the bricks ?
>
>
>
>
>
> Thanks
>
> Kumar

Hi Kumar,

You can use standard file browsing commands like "cd" and "ls" on both
of the bricks. The volume's files will show up as regular files on the
underlying filesystem. You can manually verify that files that exist
on brick 1 do not exist on brick 2, and vice versa.

For example, here's me running file browsing commands on my replicated
volume's brick:

mike@nostromo:/data1/brick1/gv1 :( [7] $ ls -latr
total 24
drwxr-xr-x.   3 root root 4096 Dec 19 22:21 homes
drwxr-xr-x.   3 root root 4096 May  3 17:55 ..
drw-------. 261 root root 4096 May  3 18:38 .glusterfs
drwxr-xr-x.   4 root root 4096 May  3 21:02 .
mike@nostromo:/data1/brick1/gv1 :) [8] $ sudo du -s -h homes/ .glusterfs/
[sudo] password for mike:
34G     homes/
252M    .glusterfs/
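
Another way to check placement, without logging in to the brick servers, is
to read the pathinfo xattr from a client that has the volume FUSE-mounted.
This is only a sketch; /mnt/gv1 is a placeholder mount point:

  # trusted.glusterfs.pathinfo reports which brick path(s) back each file
  for f in /mnt/gv1/*; do
      getfattr -n trusted.glusterfs.pathinfo -e text "$f"
  done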

-Mike


------------------------------

Message: 3
Date: Mon, 02 Jun 2014 21:05:07 +0800
From: Franco Broi <franco.broi@xxxxxxxxxx>
To: yalla.gnan.kumar@xxxxxxxxxxxxx
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Distributed  volumes
Message-ID: <1401714307.17051.23.camel@tc1>
Content-Type: text/plain; charset="UTF-8"

Just do an ls on the bricks, the paths are the same as the mounted
filesystem.

On Mon, 2014-06-02 at 12:26 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
wrote:
> Hi All,
>
>  
>
> I have created a distributed volume of 1 GB ,  using two bricks from
> two different servers.
>
> I have written 7 files whose sizes are a total of  1 GB.
>
> How can I check that files are distributed on both the bricks ?
>
>  
>
>  
>
> Thanks
>
> Kumar
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://supercolony.gluster.org/mailman/listinfo/gluster-users




------------------------------

Message: 4
Date: Mon, 2 Jun 2014 11:56:11 -0400 (EDT)
From: Todd Pfaff <pfaff@xxxxxxxxxxxxxxxxx>
To: Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>
Cc: Susant Palai <spalai@xxxxxxxxxx>, Venkatesh Somyajulu
                <vsomyaju@xxxxxxxxxx>,                 gluster-users@xxxxxxxxxxx
Subject: Re: [Gluster-users] recommended upgrade procedure from
                gluster-3.2.7 to gluster-3.5.0
Message-ID:
                <alpine.LMD.2.02.1406021149500.1729@xxxxxxxxxxxxxxxxxxxxxxxxxx>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

On Sun, 1 Jun 2014, Pranith Kumar Karampuri wrote:

>
>
> ----- Original Message -----
>> From: "Todd Pfaff" <pfaff@xxxxxxxxxxxxxxxxx>
>> To: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
>> Cc: gluster-users@xxxxxxxxxxx
>> Sent: Saturday, May 31, 2014 7:18:23 PM
>> Subject: Re: [Gluster-users] recommended upgrade procedure from gluster-3.2.7 to gluster-3.5.0
>>
>> thanks, pranith, that was very helpful!
>>
>> i followed your advice, it ran and completed, and now i'm left with these
>> results on the removed brick (before commit):
>>
>>    find /2/scratch/ | wc -l
>>    83083
>>
>>    find /2/scratch/ -type f | wc -l
>>    16
>>
>>    find /2/scratch/ -type d | wc -l
>>    70243
>>
>>    find /2/scratch/ -type l | wc -l
>>    12824
>>
>>    find /2/scratch/ ! -type d -a ! -type f | wc -l
>>    12824
>>
>>    find /2/scratch/.glusterfs -type l | wc -l
>>    12824
>>
>>    find /2/scratch/* | wc -l
>>    12873
>>
>>    find /2/scratch/* -type d | wc -l
>>    12857
>>
>> so it looks like i have 16 files and 12857 directories left in /2/scratch,
>> and 12824 links under /2/scratch/.glusterfs/.
>>
>> my first instinct is to ignore (and remove) the many remaining directories
>> that are empty and only look closer at those that contain the 16 remaining
>> files.
>>
>> can i ignore the links under /2/scratch/.glusterfs?
>>
>> as for the 16 files that remain, i can migrate them manually if necessary
>> but i'll first look at all the brick filesystems to see if they already
>> exist elsewhere in some form.
>>
>> do you recommend i do anything else?
>
> Your solutions are good :-). Could you please send us the
> configuration, logs of the setup so that we can debug why those files
> didn't move? It would be good if we can find the reason for it and fix
> it in the next releases so that this issue is prevented.


sure, i'd be happy to help.  what exactly should i send you in terms of
configuration?  just my /etc/glusterfs/glusterd.vol?  output of some
gluster commands?  other?

in terms of logs, what do you want to see?  do you want this file in its
entirety?

  -rw------- 1 root root 145978018 May 31 08:10
  /var/log/glusterfs/scratch-rebalance.log

anything else?


>
> CC developers who work on this feature to look into the issue.
>
> Just curious, did the remove-brick status output say if any failures
> happened?


i don't recall seeing anything in the remove-brick status command output
that indicated any failures.

tp


>
> Pranith
>
>>
>> thanks,
>> tp
>>
>>
>> On Fri, 30 May 2014, Pranith Kumar Karampuri wrote:
>>
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Todd Pfaff" <pfaff@xxxxxxxxxxxxxxxxx>
>>>> To: gluster-users@xxxxxxxxxxx
>>>> Sent: Saturday, May 31, 2014 1:58:33 AM
>>>> Subject: Re: recommended upgrade procedure from
>>>> gluster-3.2.7 to gluster-3.5.0
>>>>
>>>> On Sat, 24 May 2014, Todd Pfaff wrote:
>>>>
>>>>> I have a gluster distributed volume that has been running nicely with
>>>>> gluster-3.2.7 for the past two years and I now want to upgrade this to
>>>>> gluster-3.5.0.
>>>>>
>>>>> What is the recommended procedure for such an upgrade?  Is it necessary
>>>>> to
>>>>> upgrade from 3.2.7 to 3.3 to 3.4 to 3.5, or can I safely transition from
>>>>> 3.2.7 directly to 3.5.0?
>>>>
>>>> nobody responded so i decided to wing it and hope for the best.
>>>>
>>>> i also decided to go directly from 3.2.7 to 3.4.3 and not bother with
>>>> 3.5 yet.
>>>>
>>>> the volume is distributed across 13 bricks.  formerly these were in 13
>>>> nodes, 1 brick per node, but i recently lost one of these nodes.
>>>> i've moved the brick from the dead node to be a second brick in one of
>>>> the remaining 12 nodes.  i currently have this state:
>>>>
>>>>    gluster volume status
>>>>    Status of volume: scratch
>>>>    Gluster process                                 Port    Online  Pid
>>>>    ------------------------------------------------------------------------------
>>>>    Brick 172.16.1.1:/1/scratch                     49152   Y       6452
>>>>    Brick 172.16.1.2:/1/scratch                     49152   Y       10783
>>>>    Brick 172.16.1.3:/1/scratch                     49152   Y       10164
>>>>    Brick 172.16.1.4:/1/scratch                     49152   Y       10465
>>>>    Brick 172.16.1.5:/1/scratch                     49152   Y       10186
>>>>    Brick 172.16.1.6:/1/scratch                     49152   Y       10388
>>>>    Brick 172.16.1.7:/1/scratch                     49152   Y       10386
>>>>    Brick 172.16.1.8:/1/scratch                     49152   Y       10215
>>>>    Brick 172.16.1.9:/1/scratch                     49152   Y       11059
>>>>    Brick 172.16.1.10:/1/scratch                    49152   Y       9238
>>>>    Brick 172.16.1.11:/1/scratch                    49152   Y       9466
>>>>    Brick 172.16.1.12:/1/scratch                    49152   Y       10777
>>>>    Brick 172.16.1.1:/2/scratch                     49153   Y       6461
>>>>
>>>>
>>>> what i want to do next is remove Brick 172.16.1.1:/2/scratch and have
>>>> all files it contains redistributed across the other 12 bricks.
>>>>
>>>> what's the correct procedure for this?  is it as simple as:
>>>>
>>>>    gluster volume remove-brick scratch 172.16.1.1:/2/scratch start
>>>>
>>>> and then wait for all files to be moved off that brick?  or do i also
>>>> have to do:
>>>>
>>>>    gluster volume remove-brick scratch 172.16.1.1:/2/scratch commit
>>>>
>>>> and then wait for all files to be moved off that brick?  or do i also
>>>> have to do something else, such as a rebalance, to cause the files to
>>>> be moved?
>>>
>>> 'gluster volume remove-brick scratch  172.16.1.1:/2/scratch start' does
>>> start the process of migrating all the files to the other bricks. You need
>>> to observe the progress of the process using 'gluster volume remove-brick
>>> scratch  172.16.1.1:/2/scratch status' Once this command says 'completed'
>>> You should execute 'gluster volume remove-brick scratch
>>> 172.16.1.1:/2/scratch commit' to completely remove this brick from the
>>> volume. I am a bit paranoid so I would check that no files are left behind
>>> by doing a find on the brick 172.16.1.1:/2/scratch just before issuing the
>>> 'commit' :-).
>>>
>>> Pranith.
>>>
>>>>
>>>> how do i know when everything has been moved safely to other bricks and
>>>> the then-empty brick is no longer involved in the cluster?
>>>>
>>>> thanks,
>>>> tp
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users@xxxxxxxxxxx
>>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
>>
>
>
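
For reference, the remove-brick sequence discussed in the quoted thread above
boils down to something like this (a sketch only; the volume and brick names
are the ones from this example, and the find pass is just the sanity check
Pranith suggests before committing):

  # start migrating data off the brick
  gluster volume remove-brick scratch 172.16.1.1:/2/scratch start

  # re-run until the status reports 'completed'
  gluster volume remove-brick scratch 172.16.1.1:/2/scratch status

  # sanity check: look for regular files left behind on the brick,
  # ignoring the internal .glusterfs directory
  find /2/scratch -path /2/scratch/.glusterfs -prune -o -type f -print

  # only then drop the brick from the volume for good
  gluster volume remove-brick scratch 172.16.1.1:/2/scratch commit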


------------------------------

Message: 5
Date: Mon, 2 Jun 2014 19:26:40 +0000
From: Laurent Chouinard <laurent.chouinard@xxxxxxxxxxx>
To: Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>
Cc: "gluster-users@xxxxxxxxxxx" <gluster-users@xxxxxxxxxxx>
Subject: Re: Unavailability during self-heal for large
                volumes
Message-ID:
                <95ea1865fac2484980d020c6a3b7f0cd@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"

> Laurent,
>    This has been improved significantly in afr-v2 (enhanced version of replication
> translator in gluster) which will be released with 3.6 I believe. The issue happens
> because of the directory self-heal in the older versions. In the new version per file
> healing in a directory is performed instead of Full directory heal at-once which was
> creating a lot of traffic. Unfortunately This is too big a change to backport to older
> releases :-(.
>
> Pranith


Hi Pranith,

Thank you for this information.

Do you think there is a way to limit/throttle the current directory self-heal then? I don't mind if it takes a long time.

Alternatively, is there a way to completely disable the complete healing system? I would consider running a manual healing operation by STAT'ing every file, which would allow me to throttle the speed to a more manageable level.

Thanks,

Laurent Chouinard

------------------------------

Message: 6
Date: Tue, 3 Jun 2014 01:23:48 +0530
From: M S Vishwanath Bhat <msvbhat@xxxxxxxxx>
To: Vijay Bellur <vbellur@xxxxxxxxxx>
Cc: Seema Naik <senaik@xxxxxxxxxx>, Gluster Devel
                <gluster-devel@xxxxxxxxxxx>,                 gluster-users@xxxxxxxxxxx
Subject: Re: [Gluster-users] [Gluster-devel] autodelete in snapshots
Message-ID:
                <CA+H6b3MyWZruhrrykB6Vmv6B1++cgPwb=f+iSqdHnECJQvvmEQ@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"

On 3 June 2014 01:02, M S Vishwanath Bhat <msvbhat@xxxxxxxxx> wrote:

>
>
>
> On 2 June 2014 20:22, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
>
>> On 04/23/2014 05:50 AM, Vijay Bellur wrote:
>>
>>> On 04/20/2014 11:42 PM, Lalatendu Mohanty wrote:
>>>
>>>> On 04/16/2014 11:39 AM, Avra Sengupta wrote:
>>>>
>>>>> The whole purpose of introducing the soft-limit is, that at any point
>>>>> of time the number of
>>>>> snaps should not exceed the hard limit. If we trigger auto-delete on
>>>>> hitting hard-limit, then
>>>>> the purpose itself is lost, because at that point we would be taking a
>>>>> snap, making the limit
>>>>> hard-limit + 1, and then triggering auto-delete, which violates the
>>>>> sanctity of the hard-limit.
>>>>> Also what happens when we are at hard-limit + 1, and another snap is
>>>>> issued, while auto-delete
>>>>> is yet to process the first delete. At that point we end up at
>>>>> hard-limit + 1. Also what happens
>>>>> if for a particular snap the auto-delete fails.
>>>>>
>>>>> We should see the hard-limit, as something set by the admin keeping in
>>>>> mind the resource consumption
>>>>> and at no-point should we cross this limit, come what may. If we hit
>>>>> this limit, the create command
>>>>> should fail asking the user to delete snaps using the "snapshot
>>>>> delete" command.
>>>>>
>>>>> The two options Raghavendra mentioned are applicable for the
>>>>> soft-limit only, in which cases on
>>>>> hitting the soft-limit
>>>>>
>>>>> 1. Trigger auto-delete
>>>>>
>>>>> or
>>>>>
>>>>> 2. Log a warning-message, for the user saying the number of snaps is
>>>>> exceeding the snap-limit and
>>>>> display the number of available snaps
>>>>>
>>>>> Now which of these should happen also depends on the user, because the
>>>>> auto-delete option
>>>>> is configurable.
>>>>>
>>>>> So if the auto-delete option is set as true, auto-delete should be
>>>>> triggered and the above message
>>>>> should also be logged.
>>>>>
>>>>> But if the option is set as false, only the message should be logged.
>>>>>
>>>>> This is the behaviour as designed. Adding Rahul, and Seema in the
>>>>> mail, to reflect upon the
>>>>> behaviour as well.
>>>>>
>>>>> Regards,
>>>>> Avra
>>>>>
>>>>
>>>> This sounds correct. However we need to make sure that the usage or
>>>> documentation around this should be good enough , so that users
>>>> understand the each of the limits correctly.
>>>>
>>>>
>>> It might be better to avoid the usage of the term "soft-limit".
>>> soft-limit as used in quota and other places generally has an alerting
>>> connotation. Something like "auto-deletion-limit" might be better.
>>>
>>>
>> I still see references to "soft-limit" and auto deletion seems to get
>> triggered upon reaching soft-limit.
>>
>> Why is the ability to auto delete not configurable? It does seem pretty
>> nasty to go about deleting snapshots without obtaining explicit consent
>> from the user.
>>
>
> I agree with Vijay here. It's not good to delete a snap (even though it is
> oldest) without the explicit consent from user.
>
> FYI It took me more than 2 weeks to figure out that my snaps were getting
> autodeleted after reaching "soft-limit". For all I know I had not done
> anything and my snap restore were failing.
>
> I propose to remove the terms "soft" and "hard" limit. I believe there
> should be a limit (just "limit") after which all snapshot creates should
> fail with proper error messages. And there can be a water-mark after which
> user should get warning messages. So below is my proposal.
>
> *auto-delete + snap-limit:  *If the snap-limit is set to *n*, next snap
> create (n+1th) will succeed *only if* *if auto-delete is set to on/true/1*
> and oldest snap will get deleted automatically. If autodelete is set to
> off/false/0 , (n+1)th snap create will fail with proper error message from
> gluster CLI command.  But again by default autodelete should be off.
>
> *snap-water-mark*: This should come in picture only if autodelete is
> turned off. It should not have any meaning if auto-delete is turned ON.
> Basically it's usage is to give the user warning that limit almost being
> reached and it is time for admin to decide which snaps should be deleted
> (or which should be kept)
>
> *my two cents*
>
Adding gluster-users as well.

-MS

>
> -MS
>
>
>>
>> Cheers,
>>
>> Vijay
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel@xxxxxxxxxxx
>> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20140603/58732995/attachment-0001.html>

------------------------------

Message: 7
Date: Tue, 3 Jun 2014 08:40:25 +1000
From: Andrew Lau <andrew@xxxxxxxxxxxxxx>
To: "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
Subject: [Gluster-users] Brick on just one host constantly going
                offline
Message-ID:
                <CAD7dF9dCb-f_pNkiu51P7BsPqbeonE+OOuXH84ni4fD_poR0kA@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8

Hi,

Just a short post as I've since nuked the test environment.

I've had this case where in a 2 node gluster replica, the brick of the
first host is constantly going offline.

gluster volume status

would report host 1's brick is offline. The quorum would kick in,
putting the whole cluster into a read only state. This has only
recently been happening w/ gluster 3.5 and it normally happens after
about 3-4 days of 500GB or so data transfer.

Has anyone noticed this before? The only way to bring it back was to:

killall glusterfsd ; killall -9 glusterfsd ; killall glusterd ; glusterd


Thanks,
Andrew


------------------------------

Message: 8
Date: Mon, 2 Jun 2014 20:42:36 -0400 (EDT)
From: Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>
To: Laurent Chouinard <laurent.chouinard@xxxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: [Gluster-users] Unavailability during self-heal for large
                volumes
Message-ID:
                <1921256894.15836933.1401756156535.JavaMail.zimbra@xxxxxxxxxx>
Content-Type: text/plain; charset=utf-8



----- Original Message -----
> From: "Laurent Chouinard" <laurent.chouinard@xxxxxxxxxxx>
> To: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
> Cc: gluster-users@xxxxxxxxxxx
> Sent: Tuesday, June 3, 2014 12:56:40 AM
> Subject: RE: Unavailability during self-heal for large volumes
>
> > Laurent,
> >    This has been improved significantly in afr-v2 (enhanced version of
> >    replication
> > translator in gluster) which will be released with 3.6 I believe. The issue
> > happens
> > because of the directory self-heal in the older versions. In the new
> > version per file
> > healing in a directory is performed instead of Full directory heal at-once
> > which was
> > creating a lot of traffic. Unfortunately This is too big a change to
> > backport to older
> > releases :-(.
> >
> > Pranith
>
>
> Hi Pranith,
>
> Thank you for this information.
>
> Do you think there is a way to limit/throttle the current directory self-heal
> then? I don't mind if it takes a long time.
>
> Alternatively, is there a way to completely disable the complete healing
> system? I would consider running a manual healing operation by STAT'ing
> every file, which would allow me to throttle the speed to a more manageable
> level.

gluster volume set <volume-name> cluster.self-heal-daemon off would stop glustershd from performing automatic healing.

Pranith
>
> Thanks,
>
> Laurent Chouinard
>
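
For the manual, throttled heal Laurent describes (stat'ing every file at a
controlled pace so that per-file lookups trigger healing), a minimal sketch
could look like the following, assuming the volume is FUSE-mounted at
/mnt/volume (a placeholder) and that a short sleep is enough throttling:

  # walk the client mount and stat each file slowly;
  # the lookup is what prompts self-heal of that particular file
  find /mnt/volume -type f | while read -r f; do
      stat "$f" > /dev/null
      sleep 0.1   # crude throttle, tune as needed
  done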


------------------------------

Message: 9
Date: Mon, 2 Jun 2014 20:56:14 -0400 (EDT)
From: Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>
To: Andrew Lau <andrew@xxxxxxxxxxxxxx>
Cc: "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
Subject: Re: [Gluster-users] Brick on just one host constantly going
                offline
Message-ID:
                <1543691713.15838872.1401756974277.JavaMail.zimbra@xxxxxxxxxx>
Content-Type: text/plain; charset=utf-8



----- Original Message -----
> From: "Andrew Lau" <andrew@xxxxxxxxxxxxxx>
> To: "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
> Sent: Tuesday, June 3, 2014 4:10:25 AM
> Subject: Brick on just one host constantly going offline
>
> Hi,
>
> Just a short post as I've since nuked the test environment.
>
> I've had this case where in a 2 node gluster replica, the brick of the
> first host is constantly going offline.
>
> gluster volume status
>
> would report host 1's brick is offline. The quorum would kick in,
> putting the whole cluster into a read only state. This has only
> recently been happening w/ gluster 3.5 and it normally happens after
> about 3-4 days of 500GB or so data transfer.

Could you check mount logs to see if there are ping timer expiry messages for disconnects?
If you see them, then it is very likely that you are hitting the throttling problem fixed by http://review.gluster.org/7531
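
A quick way to look for them on the client (a sketch; the log file name
depends on the mount point and the exact wording of the message varies
between releases):

  # fuse mount logs live under /var/log/glusterfs/
  grep -iE "ping timer|not responded" /var/log/glusterfs/*.log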

Pranith

>
> Has anyone noticed this before? The only way to bring it back was to:
>
> killall glusterfsd ; killall -9 glusterfsd ; killall glusterd ; glusterd
>
>
> Thanks,
> Andrew
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>


------------------------------

Message: 10
Date: Tue, 3 Jun 2014 11:12:44 +1000
From: Andrew Lau <andrew@xxxxxxxxxxxxxx>
To: Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>
Cc: "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
Subject: Re: [Gluster-users] Brick on just one host constantly going
                offline
Message-ID:
                <CAD7dF9c0005xCo_RKZW9L-cqP_TqGPpNd1B7-2FWRzEXZ_Rvvw@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8

Hi Pranith,

On Tue, Jun 3, 2014 at 10:56 AM, Pranith Kumar Karampuri
<pkarampu@xxxxxxxxxx> wrote:
>
>
> ----- Original Message -----
>> From: "Andrew Lau" <andrew@xxxxxxxxxxxxxx>
>> To: "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
>> Sent: Tuesday, June 3, 2014 4:10:25 AM
>> Subject: Brick on just one host constantly going offline
>>
>> Hi,
>>
>> Just a short post as I've since nuked the test environment.
>>
>> I've had this case where in a 2 node gluster replica, the brick of the
>> first host is constantly going offline.
>>
>> gluster volume status
>>
>> would report host 1's brick is offline. The quorum would kick in,
>> putting the whole cluster into a read only state. This has only
>> recently been happening w/ gluster 3.5 and it normally happens after
>> about 3-4 days of 500GB or so data transfer.
>
> Could you check mount logs to see if there are ping timer expiry messages for disconnects?
> If you see them, then it is very likely that you are hitting throttling problem fixed by http://review.gluster.org/7531
>

Ah, that makes sense as it was the only volume which had that ping
timeout setting. I also did see the timeout messages in the logs when
I was checking. So is this merged in 3.5.1 ?

> Pranith
>
>>
>> Has anyone noticed this before? The only way to bring it back was to:
>>
>> killall glusterfsd ; killall -9 glusterfsd ; killall glusterd ; glusterd
>>
>>
>> Thanks,
>> Andrew
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@xxxxxxxxxxx
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>


------------------------------

Message: 11
Date: Mon, 2 Jun 2014 22:14:53 -0400 (EDT)
From: Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>
To: Andrew Lau <andrew@xxxxxxxxxxxxxx>
Cc: "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
Subject: Re: Brick on just one host constantly going
                offline
Message-ID:
                <1518143519.15897096.1401761693764.JavaMail.zimbra@xxxxxxxxxx>
Content-Type: text/plain; charset=utf-8



----- Original Message -----
> From: "Andrew Lau" <andrew@xxxxxxxxxxxxxx>
> To: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
> Cc: "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
> Sent: Tuesday, June 3, 2014 6:42:44 AM
> Subject: Re: Brick on just one host constantly going offline
>
> Hi Pranith,
>
> On Tue, Jun 3, 2014 at 10:56 AM, Pranith Kumar Karampuri
> <pkarampu@xxxxxxxxxx> wrote:
> >
> >
> > ----- Original Message -----
> >> From: "Andrew Lau" <andrew@xxxxxxxxxxxxxx>
> >> To: "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
> >> Sent: Tuesday, June 3, 2014 4:10:25 AM
> >> Subject: Brick on just one host constantly going offline
> >>
> >> Hi,
> >>
> >> Just a short post as I've since nuked the test environment.
> >>
> >> I've had this case where in a 2 node gluster replica, the brick of the
> >> first host is constantly going offline.
> >>
> >> gluster volume status
> >>
> >> would report host 1's brick is offline. The quorum would kick in,
> >> putting the whole cluster into a read only state. This has only
> >> recently been happening w/ gluster 3.5 and it normally happens after
> >> about 3-4 days of 500GB or so data transfer.
> >
> > Could you check mount logs to see if there are ping timer expiry messages
> > for disconnects?
> > If you see them, then it is very likely that you are hitting throttling
> > problem fixed by http://review.gluster.org/7531
> >
>
> Ah, that makes sense as it was the only volume which had that ping
> timeout setting. I also did see the timeout messages in the logs when
> I was checking. So is this merged in 3.5.1 ?

Yes!
http://review.gluster.org/7570

Pranith
>
> > Pranith
> >
> >>
> >> Has anyone noticed this before? The only way to bring it back was to:
> >>
> >> killall glusterfsd ; killall -9 glusterfsd ; killall glusterd ; glusterd
> >>
> >>
> >> Thanks,
> >> Andrew
> >> _______________________________________________
> >> Gluster-users mailing list
> >> Gluster-users@xxxxxxxxxxx
> >> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >>
>


------------------------------

Message: 12
Date: Tue, 3 Jun 2014 07:21:21 +0000
From: <yalla.gnan.kumar@xxxxxxxxxxxxx>
To: <franco.broi@xxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: [Gluster-users] Distributed  volumes
Message-ID:
                <67765C71374B974FBFD2AD05AF438EFF0BD586A4@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
               
Content-Type: text/plain; charset="utf-8"

Hi,

I have created a distributed volume on my gluster nodes and attached it to a VM on OpenStack. The size is 1 GB, and I have written files totalling close to 1 GB onto the
volume. But when I do an ls inside the brick directories, the volume's data is present on only one gluster server's brick; the other server's brick is empty. Files are meant to be
spread across both bricks according to the definition of a distributed volume.

On the VM:
--------------

# ls -al
total 1013417
drwxr-xr-x    3 root     root          4096 Jun  1 22:03 .
drwxrwxr-x    3 root     root          1024 Jun  1 21:24 ..
-rw-------    1 root     root     31478251520 Jun  1 21:52 file
-rw-------    1 root     root     157391257600 Jun  1 21:54 file1
-rw-------    1 root     root     629565030400 Jun  1 21:55 file2
-rw-------    1 root     root     708260659200 Jun  1 21:59 file3
-rw-------    1 root     root     6295650304 Jun  1 22:01 file4
-rw-------    1 root     root     39333801984 Jun  1 22:01 file5
-rw-------    1 root     root     78643200000 Jun  1 22:04 file6
drwx------    2 root     root         16384 Jun  1 21:24 lost+found
----------
# du -sch *
20.0M   file
100.0M  file1
400.0M  file2
454.0M  file3
4.0M    file4
11.6M   file5
0       file6
16.0K   lost+found
989.7M  total
------------------------


On the gluster server nodes:
-----------------------
root@primary:/export/sdd1/brick# ll
total 12
drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./
drwxr-xr-x 4 root root 4096 May 27 08:42 ../
root@primary:/export/sdd1/brick#
--------------------------

root@secondary:/export/sdd1/brick# ll
total 1046536
drwxr-xr-x 2 root root       4096 Jun  2 08:51 ./
drwxr-xr-x 4 root root       4096 May 27 08:43 ../
-rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35 volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
root@secondary:/export/sdd1/brick#
---------------------------------


Thanks
Kumar








-----Original Message-----
From: Franco Broi [mailto:franco.broi@xxxxxxxxxx]
Sent: Monday, June 02, 2014 6:35 PM
To: Gnan Kumar, Yalla
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: [Gluster-users] Distributed volumes

Just do an ls on the bricks, the paths are the same as the mounted filesystem.

On Mon, 2014-06-02 at 12:26 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
wrote:
> Hi All,
>
>
>
> I have created a distributed volume of 1 GB ,  using two bricks from
> two different servers.
>
> I have written 7 files whose sizes are a total of  1 GB.
>
> How can I check that files are distributed on both the bricks ?
>
>
>
>
>
> Thanks
>
> Kumar
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://supercolony.gluster.org/mailman/listinfo/gluster-users





------------------------------

Message: 13
Date: Tue, 03 Jun 2014 15:25:46 +0800
From: Franco Broi <franco.broi@xxxxxxxxxx>
To: yalla.gnan.kumar@xxxxxxxxxxxxx
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Distributed  volumes
Message-ID: <1401780346.2236.299.camel@tc1>
Content-Type: text/plain; charset="UTF-8"


What do gluster vol info and gluster vol status give you?

On Tue, 2014-06-03 at 07:21 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
wrote:
> Hi,
>
> I have created a distributed volume on my   gluster node. I have attached this volume to a VM on openstack. The size is 1 GB. I have written files close to 1 GB onto the
> Volume.   But when I do a ls inside the brick directory , the volume is present only on one gluster server brick. But it is empty on another server brick.  Files are meant to be
> spread across both the bricks according to distributed volume definition.
>
> On the VM:
> --------------
>
> # ls -al
> total 1013417
> drwxr-xr-x    3 root     root          4096 Jun  1 22:03 .
> drwxrwxr-x    3 root     root          1024 Jun  1 21:24 ..
> -rw-------    1 root     root     31478251520 Jun  1 21:52 file
> -rw-------    1 root     root     157391257600 Jun  1 21:54 file1
> -rw-------    1 root     root     629565030400 Jun  1 21:55 file2
> -rw-------    1 root     root     708260659200 Jun  1 21:59 file3
> -rw-------    1 root     root     6295650304 Jun  1 22:01 file4
> -rw-------    1 root     root     39333801984 Jun  1 22:01 file5
> -rw-------    1 root     root     78643200000 Jun  1 22:04 file6
> drwx------    2 root     root         16384 Jun  1 21:24 lost+found
> ----------
> # du -sch *
> 20.0M   file
> 100.0M  file1
> 400.0M  file2
> 454.0M  file3
> 4.0M    file4
> 11.6M   file5
> 0       file6
> 16.0K   lost+found
> 989.7M  total
> ------------------------
>
>
> On the gluster server nodes:
> -----------------------
> root@primary:/export/sdd1/brick# ll
> total 12
> drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./
> drwxr-xr-x 4 root root 4096 May 27 08:42 ../
> root@primary:/export/sdd1/brick#
> --------------------------
>
> root@secondary:/export/sdd1/brick# ll
> total 1046536
> drwxr-xr-x 2 root root       4096 Jun  2 08:51 ./
> drwxr-xr-x 4 root root       4096 May 27 08:43 ../
> -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35 volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> root@secondary:/export/sdd1/brick#
> ---------------------------------
>
>
> Thanks
> Kumar
>
>
>
>
>
>
>
>
> -----Original Message-----
> From: Franco Broi [mailto:franco.broi@xxxxxxxxxx]
> Sent: Monday, June 02, 2014 6:35 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@xxxxxxxxxxx
> Subject: Re: Distributed volumes
>
> Just do an ls on the bricks, the paths are the same as the mounted filesystem.
>
> On Mon, 2014-06-02 at 12:26 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
> wrote:
> > Hi All,
> >
> >
> >
> > I have created a distributed volume of 1 GB ,  using two bricks from
> > two different servers.
> >
> > I have written 7 files whose sizes are a total of  1 GB.
> >
> > How can I check that files are distributed on both the bricks ?
> >
> >
> >
> >
> >
> > Thanks
> >
> > Kumar
> >
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users@xxxxxxxxxxx
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
>




------------------------------

Message: 14
Date: Tue, 3 Jun 2014 07:29:24 +0000
From: <yalla.gnan.kumar@xxxxxxxxxxxxx>
To: <franco.broi@xxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Distributed  volumes
Message-ID:
                <67765C71374B974FBFD2AD05AF438EFF0BD586B9@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
               
Content-Type: text/plain; charset="utf-8"

root@secondary:/export/sdd1/brick# gluster volume  info

Volume Name: dst
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: primary:/export/sdd1/brick
Brick2: secondary:/export/sdd1/brick



-----Original Message-----
From: Franco Broi [mailto:franco.broi@xxxxxxxxxx]
Sent: Tuesday, June 03, 2014 12:56 PM
To: Gnan Kumar, Yalla
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Distributed volumes


What do gluster vol info and gluster vol status give you?

On Tue, 2014-06-03 at 07:21 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
wrote:
> Hi,
>
> I have created a distributed volume on my   gluster node. I have attached this volume to a VM on openstack. The size is 1 GB. I have written files close to 1 GB onto the
> Volume.   But when I do a ls inside the brick directory , the volume is present only on one gluster server brick. But it is empty on another server brick.  Files are meant to be
> spread across both the bricks according to distributed volume definition.
>
> On the VM:
> --------------
>
> # ls -al
> total 1013417
> drwxr-xr-x    3 root     root          4096 Jun  1 22:03 .
> drwxrwxr-x    3 root     root          1024 Jun  1 21:24 ..
> -rw-------    1 root     root     31478251520 Jun  1 21:52 file
> -rw-------    1 root     root     157391257600 Jun  1 21:54 file1
> -rw-------    1 root     root     629565030400 Jun  1 21:55 file2
> -rw-------    1 root     root     708260659200 Jun  1 21:59 file3
> -rw-------    1 root     root     6295650304 Jun  1 22:01 file4
> -rw-------    1 root     root     39333801984 Jun  1 22:01 file5
> -rw-------    1 root     root     78643200000 Jun  1 22:04 file6
> drwx------    2 root     root         16384 Jun  1 21:24 lost+found
> ----------
> # du -sch *
> 20.0M   file
> 100.0M  file1
> 400.0M  file2
> 454.0M  file3
> 4.0M    file4
> 11.6M   file5
> 0       file6
> 16.0K   lost+found
> 989.7M  total
> ------------------------
>
>
> On the gluster server nodes:
> -----------------------
> root@primary:/export/sdd1/brick# ll
> total 12
> drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./ drwxr-xr-x 4 root root
> 4096 May 27 08:42 ../ root@primary:/export/sdd1/brick#
> --------------------------
>
> root@secondary:/export/sdd1/brick# ll
> total 1046536
> drwxr-xr-x 2 root root       4096 Jun  2 08:51 ./
> drwxr-xr-x 4 root root       4096 May 27 08:43 ../
> -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35
> volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> root@secondary:/export/sdd1/brick#
> ---------------------------------
>
>
> Thanks
> Kumar
>
>
>
>
>
>
>
>
> -----Original Message-----
> From: Franco Broi [mailto:franco.broi@xxxxxxxxxx]
> Sent: Monday, June 02, 2014 6:35 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@xxxxxxxxxxx
> Subject: Re: [Gluster-users] Distributed volumes
>
> Just do an ls on the bricks, the paths are the same as the mounted filesystem.
>
> On Mon, 2014-06-02 at 12:26 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
> wrote:
> > Hi All,
> >
> >
> >
> > I have created a distributed volume of 1 GB ,  using two bricks from
> > two different servers.
> >
> > I have written 7 files whose sizes are a total of  1 GB.
> >
> > How can I check that files are distributed on both the bricks ?
> >
> >
> >
> >
> >
> > Thanks
> >
> > Kumar
> >
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users@xxxxxxxxxxx
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
>




------------------------------

Message: 15
Date: Tue, 3 Jun 2014 13:02:54 +0530
From: Kaushal M <kshlmster@xxxxxxxxx>
To: Gluster Devel <gluster-devel@xxxxxxxxxxx>
Cc: "gluster-users@xxxxxxxxxxx" <gluster-users@xxxxxxxxxxx>
Subject: Re: [Gluster-devel] autodelete in snapshots
Message-ID:
                <CAOujamXNzzsNZW0jX9gNPzP7JzUnM1gGDCnNiu7R=ygWTc4oFQ@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8

I agree as well. We shouldn't be deleting any data without the
explicit consent of the user.

The approach proposed by MS is better than the earlier approach.

~kaushal
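
For anyone following the thread, the knobs being debated are surfaced through
the snapshot config interface. A sketch of how they might be inspected and set
(treat the exact option names as tentative until the feature lands in a
release; auto-delete here is the system-wide setting under discussion):

  # show the current snapshot limits and auto-delete behaviour
  gluster snapshot config

  # example: cap a volume at 50 snapshots
  gluster snapshot config <volname> snap-max-hard-limit 50

  # keep auto-delete off so nothing is removed without explicit consent
  gluster snapshot config auto-delete disable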

On Tue, Jun 3, 2014 at 1:02 AM, M S Vishwanath Bhat <msvbhat@xxxxxxxxx> wrote:
>
>
>
> On 2 June 2014 20:22, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
>>
>> On 04/23/2014 05:50 AM, Vijay Bellur wrote:
>>>
>>> On 04/20/2014 11:42 PM, Lalatendu Mohanty wrote:
>>>>
>>>> On 04/16/2014 11:39 AM, Avra Sengupta wrote:
>>>>>
>>>>> The whole purpose of introducing the soft-limit is, that at any point
>>>>> of time the number of
>>>>> snaps should not exceed the hard limit. If we trigger auto-delete on
>>>>> hitting hard-limit, then
>>>>> the purpose itself is lost, because at that point we would be taking a
>>>>> snap, making the limit
>>>>> hard-limit + 1, and then triggering auto-delete, which violates the
>>>>> sanctity of the hard-limit.
>>>>> Also what happens when we are at hard-limit + 1, and another snap is
>>>>> issued, while auto-delete
>>>>> is yet to process the first delete. At that point we end up at
>>>>> hard-limit + 1. Also what happens
>>>>> if for a particular snap the auto-delete fails.
>>>>>
>>>>> We should see the hard-limit, as something set by the admin keeping in
>>>>> mind the resource consumption
>>>>> and at no-point should we cross this limit, come what may. If we hit
>>>>> this limit, the create command
>>>>> should fail asking the user to delete snaps using the "snapshot
>>>>> delete" command.
>>>>>
>>>>> The two options Raghavendra mentioned are applicable for the
>>>>> soft-limit only, in which cases on
>>>>> hitting the soft-limit
>>>>>
>>>>> 1. Trigger auto-delete
>>>>>
>>>>> or
>>>>>
>>>>> 2. Log a warning-message, for the user saying the number of snaps is
>>>>> exceeding the snap-limit and
>>>>> display the number of available snaps
>>>>>
>>>>> Now which of these should happen also depends on the user, because the
>>>>> auto-delete option
>>>>> is configurable.
>>>>>
>>>>> So if the auto-delete option is set as true, auto-delete should be
>>>>> triggered and the above message
>>>>> should also be logged.
>>>>>
>>>>> But if the option is set as false, only the message should be logged.
>>>>>
>>>>> This is the behaviour as designed. Adding Rahul, and Seema in the
>>>>> mail, to reflect upon the
>>>>> behaviour as well.
>>>>>
>>>>> Regards,
>>>>> Avra
>>>>
>>>>
>>>> This sounds correct. However we need to make sure that the usage or
>>>> documentation around this should be good enough , so that users
>>>> understand the each of the limits correctly.
>>>>
>>>
>>> It might be better to avoid the usage of the term "soft-limit".
>>> soft-limit as used in quota and other places generally has an alerting
>>> connotation. Something like "auto-deletion-limit" might be better.
>>>
>>
>> I still see references to "soft-limit" and auto deletion seems to get
>> triggered upon reaching soft-limit.
>>
>> Why is the ability to auto delete not configurable? It does seem pretty
>> nasty to go about deleting snapshots without obtaining explicit consent from
>> the user.
>
>
> I agree with Vijay here. It's not good to delete a snap (even though it is
> oldest) without the explicit consent from user.
>
> FYI It took me more than 2 weeks to figure out that my snaps were getting
> autodeleted after reaching "soft-limit". For all I know I had not done
> anything and my snap restore were failing.
>
> I propose to remove the terms "soft" and "hard" limit. I believe there
> should be a limit (just "limit") after which all snapshot creates should
> fail with proper error messages. And there can be a water-mark after which
> user should get warning messages. So below is my proposal.
>
> auto-delete + snap-limit:  If the snap-limit is set to n, next snap create
> (n+1th) will succeed only if if auto-delete is set to on/true/1 and oldest
> snap will get deleted automatically. If autodelete is set to off/false/0 ,
> (n+1)th snap create will fail with proper error message from gluster CLI
> command.  But again by default autodelete should be off.
>
> snap-water-mark: This should come in picture only if autodelete is turned
> off. It should not have any meaning if auto-delete is turned ON. Basically
> it's usage is to give the user warning that limit almost being reached and
> it is time for admin to decide which snaps should be deleted (or which
> should be kept)
>
> *my two cents*
>
> -MS
>
>>
>>
>> Cheers,
>>
>> Vijay
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel@xxxxxxxxxxx
>> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxxx
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>


------------------------------

Message: 16
Date: Tue, 03 Jun 2014 15:34:01 +0800
From: Franco Broi <franco.broi@xxxxxxxxxx>
To: yalla.gnan.kumar@xxxxxxxxxxxxx
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Distributed  volumes
Message-ID: <1401780841.2236.304.camel@tc1>
Content-Type: text/plain; charset="UTF-8"


Ok, what you have is a single large file (it must be a filesystem image?).
Gluster will not stripe files; it writes different whole files to
different bricks.
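
One way to actually see the distribution is to write several files directly
onto the gluster mount (rather than inside the VM's image file) and then list
both bricks. A sketch, assuming the dst volume is FUSE-mounted at /mnt/dst on
a client:

  # create a handful of small files on the gluster mount
  for i in $(seq 1 10); do
      dd if=/dev/zero of=/mnt/dst/testfile$i bs=1M count=10
  done

  # then list the brick directory on each server; every file should
  # appear whole on exactly one of the two bricks
  ls -l /export/sdd1/brick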

On Tue, 2014-06-03 at 07:29 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
wrote:
> root@secondary:/export/sdd1/brick# gluster volume  info
>
> Volume Name: dst
> Type: Distribute
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: primary:/export/sdd1/brick
> Brick2: secondary:/export/sdd1/brick
>
>
>
> -----Original Message-----
> From: Franco Broi [mailto:franco.broi@xxxxxxxxxx]
> Sent: Tuesday, June 03, 2014 12:56 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@xxxxxxxxxxx
> Subject: Re: [Gluster-users] Distributed volumes
>
>
> What do gluster vol info and gluster vol status give you?
>
> On Tue, 2014-06-03 at 07:21 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
> wrote:
> > Hi,
> >
> > I have created a distributed volume on my   gluster node. I have attached this volume to a VM on openstack. The size is 1 GB. I have written files close to 1 GB onto the
> > Volume.   But when I do a ls inside the brick directory , the volume is present only on one gluster server brick. But it is empty on another server brick.  Files are meant to be
> > spread across both the bricks according to distributed volume definition.
> >
> > On the VM:
> > --------------
> >
> > # ls -al
> > total 1013417
> > drwxr-xr-x    3 root     root          4096 Jun  1 22:03 .
> > drwxrwxr-x    3 root     root          1024 Jun  1 21:24 ..
> > -rw-------    1 root     root     31478251520 Jun  1 21:52 file
> > -rw-------    1 root     root     157391257600 Jun  1 21:54 file1
> > -rw-------    1 root     root     629565030400 Jun  1 21:55 file2
> > -rw-------    1 root     root     708260659200 Jun  1 21:59 file3
> > -rw-------    1 root     root     6295650304 Jun  1 22:01 file4
> > -rw-------    1 root     root     39333801984 Jun  1 22:01 file5
> > -rw-------    1 root     root     78643200000 Jun  1 22:04 file6
> > drwx------    2 root     root         16384 Jun  1 21:24 lost+found
> > ----------
> > # du -sch *
> > 20.0M   file
> > 100.0M  file1
> > 400.0M  file2
> > 454.0M  file3
> > 4.0M    file4
> > 11.6M   file5
> > 0       file6
> > 16.0K   lost+found
> > 989.7M  total
> > ------------------------
> >
> >
> > On the gluster server nodes:
> > -----------------------
> > root@primary:/export/sdd1/brick# ll
> > total 12
> > drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./ drwxr-xr-x 4 root root
> > 4096 May 27 08:42 ../ root@primary:/export/sdd1/brick#
> > --------------------------
> >
> > root@secondary:/export/sdd1/brick# ll
> > total 1046536
> > drwxr-xr-x 2 root root       4096 Jun  2 08:51 ./
> > drwxr-xr-x 4 root root       4096 May 27 08:43 ../
> > -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35
> > volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> > root@secondary:/export/sdd1/brick#
> > ---------------------------------
> >
> >
> > Thanks
> > Kumar
> >
> >
> >
> >
> >
> >
> >
> >
> > -----Original Message-----
> > From: Franco Broi [mailto:franco.broi@xxxxxxxxxx]
> > Sent: Monday, June 02, 2014 6:35 PM
> > To: Gnan Kumar, Yalla
> > Cc: gluster-users@xxxxxxxxxxx
> > Subject: Re: Distributed volumes
> >
> > Just do an ls on the bricks, the paths are the same as the mounted filesystem.
> >
> > On Mon, 2014-06-02 at 12:26 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
> > wrote:
> > > Hi All,
> > >
> > >
> > >
> > > I have created a distributed volume of 1 GB ,  using two bricks from
> > > two different servers.
> > >
> > > I have written 7 files whose sizes are a total of  1 GB.
> > >
> > > How can I check that files are distributed on both the bricks ?
> > >
> > >
> > >
> > >
> > >
> > > Thanks
> > >
> > > Kumar
> > >
> > >
> > >
> > >
> > > _______________________________________________
> > > Gluster-users mailing list
> > > Gluster-users@xxxxxxxxxxx
> > > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >
> >
> >
> >
>
>
>




------------------------------

Message: 17
Date: Tue, 3 Jun 2014 07:39:58 +0000
From: <yalla.gnan.kumar@xxxxxxxxxxxxx>
To: <franco.broi@xxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: [Gluster-users] Distributed  volumes
Message-ID:
                <67765C71374B974FBFD2AD05AF438EFF0BD586DF@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
               
Content-Type: text/plain; charset="utf-8"

I have created a distributed volume, created a 1 GB volume on it, attached it to the VM, and created a filesystem on it. How do I verify that the files in the VM are
distributed across both bricks on the two servers?



-----Original Message-----
From: Franco Broi [mailto:franco.broi@xxxxxxxxxx]
Sent: Tuesday, June 03, 2014 1:04 PM
To: Gnan Kumar, Yalla
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Distributed volumes


Ok, what you have is a single large file (must be filesystem image??).
Gluster will not stripe files, it writes different whole files to different bricks.

On Tue, 2014-06-03 at 07:29 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
wrote:
> root@secondary:/export/sdd1/brick# gluster volume  info
>
> Volume Name: dst
> Type: Distribute
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: primary:/export/sdd1/brick
> Brick2: secondary:/export/sdd1/brick
>
>
>
> -----Original Message-----
> From: Franco Broi [mailto:franco.broi@xxxxxxxxxx]
> Sent: Tuesday, June 03, 2014 12:56 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@xxxxxxxxxxx
> Subject: Re: Distributed volumes
>
>
> What do gluster vol info and gluster vol status give you?
>
> On Tue, 2014-06-03 at 07:21 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
> wrote:
> > Hi,
> >
> > I have created a distributed volume on my gluster node. I have attached this volume to a VM on OpenStack. The size is 1 GB. I have written files close to 1 GB onto the
> > volume.  But when I do an ls inside the brick directory, the volume file is present only on one gluster server's brick, while the brick on the other server is empty.  Files are meant to be
> > spread across both bricks according to the distributed volume definition.
> >
> > On the VM:
> > --------------
> >
> > # ls -al
> > total 1013417
> > drwxr-xr-x    3 root     root          4096 Jun  1 22:03 .
> > drwxrwxr-x    3 root     root          1024 Jun  1 21:24 ..
> > -rw-------    1 root     root     31478251520 Jun  1 21:52 file
> > -rw-------    1 root     root     157391257600 Jun  1 21:54 file1
> > -rw-------    1 root     root     629565030400 Jun  1 21:55 file2
> > -rw-------    1 root     root     708260659200 Jun  1 21:59 file3
> > -rw-------    1 root     root     6295650304 Jun  1 22:01 file4
> > -rw-------    1 root     root     39333801984 Jun  1 22:01 file5
> > -rw-------    1 root     root     78643200000 Jun  1 22:04 file6
> > drwx------    2 root     root         16384 Jun  1 21:24 lost+found
> > ----------
> > # du -sch *
> > 20.0M   file
> > 100.0M  file1
> > 400.0M  file2
> > 454.0M  file3
> > 4.0M    file4
> > 11.6M   file5
> > 0       file6
> > 16.0K   lost+found
> > 989.7M  total
> > ------------------------
> >
> >
> > On the gluster server nodes:
> > -----------------------
> > root@primary:/export/sdd1/brick# ll
> > total 12
> > drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./
> > drwxr-xr-x 4 root root 4096 May 27 08:42 ../
> > root@primary:/export/sdd1/brick#
> > --------------------------
> >
> > root@secondary:/export/sdd1/brick# ll
> > total 1046536
> > drwxr-xr-x 2 root root       4096 Jun  2 08:51 ./
> > drwxr-xr-x 4 root root       4096 May 27 08:43 ../
> > -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35
> > volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> > root@secondary:/export/sdd1/brick#
> > ---------------------------------
> >
> >
> > Thanks
> > Kumar
> >
> >
> >
> >
> >
> >
> >
> >
> > -----Original Message-----
> > From: Franco Broi [mailto:franco.broi@xxxxxxxxxx]
> > Sent: Monday, June 02, 2014 6:35 PM
> > To: Gnan Kumar, Yalla
> > Cc: gluster-users@xxxxxxxxxxx
> > Subject: Re: Distributed volumes
> >
> > Just do an ls on the bricks, the paths are the same as the mounted filesystem.
> >
> > On Mon, 2014-06-02 at 12:26 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
> > wrote:
> > > Hi All,
> > >
> > >
> > >
> > > I have created a distributed volume of 1 GB ,  using two bricks from
> > > two different servers.
> > >
> > > I have written 7 files whose sizes are a total of  1 GB.
> > >
> > > How can I check that files are distributed on both the bricks ?
> > >
> > >
> > >
> > >
> > >
> > > Thanks
> > >
> > > Kumar
> > >
> > >
> > >
> > >
>
>
>




------------------------------

Message: 18
Date: Tue, 3 Jun 2014 13:19:08 +0530
From: Kaushal M <kshlmster@xxxxxxxxx>
To: yalla.gnan.kumar@xxxxxxxxxxxxx
Cc: "gluster-users@xxxxxxxxxxx" <gluster-users@xxxxxxxxxxx>
Subject: Re: Distributed volumes
Message-ID:
                <CAOujamXxBoF3QAbYgyAOvGY7fimNgkGBv8Z-gotpchh0xgEePA@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8

You have only one file on the gluster volume: the 1 GB disk image/volume
that you created. That disk image is attached to the VM as a disk, not
the gluster volume itself. So whatever you do in the VM's file system
affects just that one disk image. The files, directories, etc. that you
created are inside the disk image. So you still have just one file on
the gluster volume, not many as you are assuming.
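
As a rough illustration of the distribution (a sketch only; it assumes the dst volume from this thread is fuse-mounted at /mnt/dst on a client, and the mount point and file names are placeholders), writing several files directly to the gluster mount, instead of inside the VM's disk image, shows each file landing whole on one brick or the other:

# on a client with the glusterfs fuse client installed
mount -t glusterfs primary:/dst /mnt/dst
for i in 1 2 3 4 5 6 7; do dd if=/dev/zero of=/mnt/dst/testfile$i bs=1M count=10; done

# on each server, list the brick directory; every file appears in full on exactly one brick
ls -l /export/sdd1/brick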



On Tue, Jun 3, 2014 at 1:09 PM,  <yalla.gnan.kumar@xxxxxxxxxxxxx> wrote:
> I have created distributed volume,  created a 1 GB volume on it, and attached it to the VM and created a filesystem on it.  How to verify that the files in the vm are
> distributed across both the bricks on two servers ?


------------------------------

Message: 19
Date: Tue, 3 Jun 2014 08:20:46 +0000
From: <yalla.gnan.kumar@xxxxxxxxxxxxx>
To: <kshlmster@xxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Distributed volumes
Message-ID:
                <67765C71374B974FBFD2AD05AF438EFF0BD58729@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
               
Content-Type: text/plain; charset="utf-8"

Hi,

So, in which scenario do distributed volumes have files on both bricks?


-----Original Message-----
From: Kaushal M [mailto:kshlmster@xxxxxxxxx]
Sent: Tuesday, June 03, 2014 1:19 PM
To: Gnan Kumar, Yalla
Cc: Franco Broi; gluster-users@xxxxxxxxxxx
Subject: Re: Distributed volumes

You have only 1 file on the gluster volume, the 1GB disk image/volume that you created. This disk image is attached to the VM as a file system, not the gluster volume. So whatever you do in the VM's file system, affects just the 1 disk image. The files, directories etc. you created, are inside the disk image. So you still have just one file on the gluster volume, not many as you are assuming.





------------------------------

Message: 20
Date: Tue, 03 Jun 2014 16:22:16 +0800
From: Franco Broi <franco.broi@xxxxxxxxxx>
To: yalla.gnan.kumar@xxxxxxxxxxxxx
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Distributed volumes
Message-ID: <1401783736.2236.333.camel@tc1>
Content-Type: text/plain; charset="UTF-8"

On Tue, 2014-06-03 at 08:20 +0000, yalla.gnan.kumar@xxxxxxxxxxxxx
wrote:
> Hi,
>
> So, in which scenario do distributed volumes have files on both bricks?

If you make more than 1 file.
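
To see which brick a particular file actually landed on, the pathinfo virtual extended attribute can be queried on the fuse mount (a sketch; /mnt/dst and the file name are placeholders, and it assumes a native glusterfs mount rather than NFS):

getfattr -n trusted.glusterfs.pathinfo /mnt/dst/testfile1
# the returned value names the server and brick directory that hold the file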





------------------------------

Message: 21
Date: Tue, 03 Jun 2014 14:21:39 +0530
From: Vijay Bellur <vbellur@xxxxxxxxxx>
To: yalla.gnan.kumar@xxxxxxxxxxxxx, kshlmster@xxxxxxxxx
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Distributed volumes
Message-ID: <538D8C9B.3010709@xxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 06/03/2014 01:50 PM, yalla.gnan.kumar@xxxxxxxxxxxxx wrote:
> Hi,
>
> So, in which scenario do distributed volumes have files on both bricks?
>
>

Reading the documentation for various volume types [1] can be useful to
obtain answers for questions of this nature.

-Vijay

[1]
https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
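
For quick reference, the behaviour discussed in this thread follows from how the volume was created; a minimal sketch, reusing the primary/secondary hostnames and brick paths from earlier in the thread:

# plain distribute volume: each file is hashed, whole, onto one of the bricks
gluster volume create dst primary:/export/sdd1/brick secondary:/export/sdd1/brick
gluster volume start dst

# with a replica count, every file would instead be mirrored on both bricks:
# gluster volume create dst-rep replica 2 primary:/export/sdd1/brick secondary:/export/sdd1/brick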



------------------------------

Message: 22
Date: Tue, 3 Jun 2014 17:10:57 +0530
From: Indivar Nair <indivar.nair@xxxxxxxxxxxx>
To: Gluster Users <gluster-users@xxxxxxxxxxx>
Subject: NFS ACL Support in Gluster 3.4
Message-ID:
                <CALuPYL0CykF9Q41SKsWtOSRXnAqor02mfR6W9FAth18XwK=cXQ@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"

Hi All,

I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
It was a straightforward upgrade using Yum.
The OS is CentOS 6.3.

The main purpose of the upgrade was to get ACL Support on NFS exports.
But it doesn't seem to be working.

I mounted the gluster volume using the following options -

mount -t nfs -o vers=3,mountproto=tcp,acl <gluster_server>:/volume /mnt

The getfacl and setfacl commands do not work on any directories or files on this
mount.
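
For anyone reproducing this, the failing check looks roughly like the following (a sketch; the test file and user name are placeholders, and the mount options are the ones shown above):

mount -t nfs -o vers=3,mountproto=tcp,acl <gluster_server>:/volume /mnt
touch /mnt/acltest
setfacl -m u:nobody:rw /mnt/acltest    # errors out if the server lacks NFSv3 ACL support
getfacl /mnt/acltest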

The plan is to re-export the NFS mounts using Samba+CTDB.
NFS mounts seem to give better performance than Gluster mounts.

Am I missing something?

Regards,


Indivar Nair
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20140603/e543c5e8/attachment-0001.html>

------------------------------

Message: 23
Date: Tue, 03 Jun 2014 17:26:56 +0530
From: Santosh Pradhan <spradhan@xxxxxxxxxx>
To: Indivar Nair <indivar.nair@xxxxxxxxxxxx>,                 Gluster Users
                <gluster-users@xxxxxxxxxxx>
Subject: Re: NFS ACL Support in Gluster 3.4
Message-ID: <538DB808.5050802@xxxxxxxxxx>
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"

I guess Gluster 3.5 has fixed the NFS-ACL issues, and getfacl/setfacl
work there.

Regards,
Santosh

On 06/03/2014 05:10 PM, Indivar Nair wrote:
> Hi All,
>
> I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
> It was a straightforward upgrade using Yum.
> The OS is CentOS 6.3.
>
> The main purpose of the upgrade was to get ACL Support on NFS exports.
> But it doesn't seem to be working.
>
> I mounted the gluster volume using the following options -
>
> mount -t nfs -o vers=3,mountproto=tcp,acl <gluster_server>:/volume /mnt
>
> The getfacl and setfacl commands do not work on any directories or files on this
> mount.
>
> The plan is to re-export the NFS Mounts using Samba+CTDB.
> NFS mounts seem to give better performance than Gluster Mounts.
>
> Am I missing something?
>
> Regards,
>
>
> Indivar Nair
>
>
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20140603/04fb291d/attachment-0001.html>

------------------------------

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users

End of Gluster-users Digest, Vol 74, Issue 3
********************************************


