Pasting Testing Logs
======================
3.6
[root@dhcp-0-112 rpms]# /sbin/gluster v create v1 $tm1:/export/sdb/br1
volume create: v1: success: please start the volume to access data
[root@dhcp-0-112 rpms]# gluster v start v1
[root@dhcp-0-112 rpms]#
[root@dhcp-0-112 rpms]# mount -t glusterfs $tm1:v1 /gluster_vols/vol
[root@dhcp-0-112 rpms]# gluster v quota v1 enable
volume quota : success
[root@dhcp-0-112 rpms]#
[root@dhcp-0-112 rpms]# mkdir -p /gluster_vols/vol/dir1; gluster v quota v1 limit-usage /dir1 5MB 10
volume quota : success
[root@dhcp-0-112 rpms]# mkdir -p /gluster_vols/vol/dir2; gluster v quota v1 limit-usage /dir2 16MB 10
volume quota : success
[root@dhcp-0-112 rpms]# gluster v quota v1 list
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/dir1 5.0MB 10% 0Bytes 5.0MB No No
/dir2 16.0MB 10% 0Bytes 16.0MB No No
[root@dhcp-0-112 rpms]#
[root@dhcp-0-112 rpms]#
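>> (Not part of the original run: a quick way to sanity-check enforcement at this point
>> would be to write past the 5MB hard limit on /dir1 and expect "Disk quota exceeded".
>> Note that enforcement is lazy -- governed by the quota hard/soft timeouts -- so a
>> single large write can overshoot before the limit kicks in.)
    dd if=/dev/zero of=/gluster_vols/vol/dir1/fill bs=1M count=10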
[root@dhcp-0-112 rpms]# rpm -qa | grep glusterfs-ser
glusterfs-server-3.6.9-0.1.gitcaccd6c.fc24.x86_64
[root@dhcp-0-112 rpms]# umount /gluster_vols/vol
[root@dhcp-0-112 rpms]#
[root@dhcp-0-112 rpms]# cat /var/lib/glusterd/vols/v1/quota.conf
[root@dhcp-0-112 rpms]# hexdump /var/lib/glusterd/vols/v1/quota.conf
[root@dhcp-0-112 rpms]# hexdump -c /var/lib/glusterd/vols/v1/quota.conf
0000000 G l u s t e r F S Q u o t a
0000010 c o n f | v e r s i o n :
0000020 v 1 . 1 \n U \t 213 I 252 251 C 337 262 x \b
0000030 i y r 5 021 312 335 w 366 X 5 B H 210 260 227
0000040 ^ 251 X 237 G
0000045
[root@dhcp-0-112 rpms]#
[root@dhcp-0-112 rpms]# getfattr -d -m. -e hex /export/sdb/br1/dir1/ | grep gfid
getfattr: Removing leading '/' from absolute path names
trusted.gfid=0x55098b49aaa943dfb278086979723511
[root@dhcp-0-112 rpms]# getfattr -d -m. -e hex /export/sdb/br1/dir2/ | grep gfid
getfattr: Removing leading '/' from absolute path names
trusted.gfid=0xcadd77f65835424888b0975ea9589f47
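>> For reference, the quota.conf body after the header line is just the raw 16-byte GFIDs
>> of the directories that have limits set. A rough sketch to dump them (header length is
>> taken from the first line of the file, 37 bytes in the hexdump above):
    conf=/var/lib/glusterd/vols/v1/quota.conf
    hdr=$(head -n1 "$conf" | wc -c)      # header line incl. its newline
    tail -c +$((hdr + 1)) "$conf" | od -An -tx1 -v | tr -d ' \n' | fold -w 32
>> This prints one 32-hex-digit GFID per line, matching the trusted.gfid values above.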
[root@dhcp-0-112 rpms]# gluster v stop v1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: v1: success
[root@dhcp-0-112 rpms]# pkill glusterd
+++++++++++++++++++ Replace with 3.9 build without patch ++++++++++
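>> (The package swap itself is not captured in the log; with glusterd stopped it was
>> roughly the following -- RPM file names are illustrative:)
    dnf install ./glusterfs*-3.9.0rc2*.rpm      # or: rpm -Uvh glusterfs*-3.9.0rc2*.rpm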
[root@dhcp-0-112 3.9]# systemctl start glusterd
[root@dhcp-0-112 3.9]#
[root@dhcp-0-112 3.9]# rpm -qa | grep glusterfs-ser
glusterfs-server-3.9.0rc2-0.13.gita3bade0.fc24.x86_64
[root@dhcp-0-112 3.9]# gluster v set all cluster.op-version 30700
volume set: success
[root@dhcp-0-112 3.9]# gluster v start v1
volume start: v1: success
[root@dhcp-0-112 3.9]# mount -t glusterfs $tm1:v1 /gluster_vols/vol
>> Not sure why we see this; the second attempt succeeds
[root@dhcp-0-112 3.9]# gluster v quota v1 limit-usage /dir1 12MB 10
quota command failed : Failed to start aux mount
[root@dhcp-0-112 3.9]#
[root@dhcp-0-112 3.9]# gluster v quota v1 limit-usage /dir2 12MB 10
volume quota : success
[root@dhcp-0-112 3.9]# hexdump -c /var/lib/glusterd/vols/v1/quota.conf
0000000 G l u s t e r F S Q u o t a
0000010 c o n f | v e r s i o n :
0000020 v 1 . 2 \n U \t 213 I 252 251 C 337 262 x \b
0000030 i y r 5 021 001 312 335 w 366 X 5 B H 210 260
0000040 227 ^ 251 X 237 G 001
0000047
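>> Note the version string is now v1.2 and each record is 17 bytes instead of 16: the
>> trailing 001 after each GFID appears to be a type byte (1 = usage limit), which is why
>> the file grew from 69 to 71 bytes. The same sketch as above, adjusted for the wider
>> records:
    conf=/var/lib/glusterd/vols/v1/quota.conf
    hdr=$(head -n1 "$conf" | wc -c)
    tail -c +$((hdr + 1)) "$conf" | od -An -tx1 -v | tr -d ' \n' | fold -w 34   # GFID + type byte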
[root@dhcp-0-112 3.9]# gluster v quota v1 list
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/dir1 5.0MB 10%(512.0KB) 0Bytes 5.0MB No No
/dir2 12.0MB 10%(1.2MB) 0Bytes 12.0MB No No
[root@dhcp-0-112 3.9]#
[root@dhcp-0-112 3.9]# gluster v quota v1 limit-usage /dir1 12MB 10
[root@dhcp-0-112 3.9]# cksum /var/lib/glusterd/vols/v1/quota.conf
496616948 71 /var/lib/glusterd/vols/v1/quota.conf
[root@dhcp-0-112 3.9]#
>> Now we disable quota, re-enable it, and set the same limits to check whether we end up with the same quota.conf contents
[root@dhcp-0-112 3.9]# /sbin/gluster v quota v1 disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
quota command failed : Volume quota failed. The cluster is operating at version 30700. Quota command disable is unavailable in this version.
[root@dhcp-0-112 3.9]#
>> We need to bump the cluster op-version to 30712 (3.7.12) before disable is allowed
[root@dhcp-0-112 3.9]# gluster v set all cluster.op-version 30712
volume set: success
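>> (Sanity check, not in the original log: the operating version glusterd is running at
>> can be read back from its info file.)
    grep operating-version /var/lib/glusterd/glusterd.info   # expect operating-version=30712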
[root@dhcp-0-112 3.9]# /sbin/gluster v quota v1 disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@dhcp-0-112 3.9]# /sbin/gluster v quota v1 enable
volume quota : success
>> Again we hit the aux mount failure; the retry below succeeds
[root@dhcp-0-112 3.9]# gluster v quota v1 limit-usage /dir1 12MB 10
quota command failed : Failed to start aux mount
[root@dhcp-0-112 3.9]#
[root@dhcp-0-112 3.9]# gluster v quota v1 limit-usage /dir1 12MB 10
volume quota : success
[root@dhcp-0-112 3.9]# gluster v quota v1 limit-usage /dir2 12MB 10
volume quota : success
>> We get the same quota.conf contents (identical cksum), confirming that limit-usage rewrites quota.conf correctly
[root@dhcp-0-112 3.9]# cksum /var/lib/glusterd/vols/v1/quota.conf
496616948 71 /var/lib/glusterd/vols/v1/quota.conf
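>> The before/after comparison above was eyeballed via cksum; a scripted version of the
>> same check (hypothetical temp path):
    cp /var/lib/glusterd/vols/v1/quota.conf /tmp/quota.conf.before
    # ... disable quota, re-enable, re-apply the same limits ...
    cmp -s /tmp/quota.conf.before /var/lib/glusterd/vols/v1/quota.conf \
        && echo "quota.conf unchanged" || echo "quota.conf differs"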
I notice that while the limit-usage command does rewrite quota.conf correctly, it does not
always succeed on the first attempt ("Failed to start aux mount"), so the user may have to retry it.
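>> Until the aux-mount failure is root-caused, a small retry loop is enough to work around
>> it from scripts (a sketch, not a fix):
    set_limit() {    # usage: set_limit <volume> <path> <hard-limit> <soft-limit-%>
        local i
        for i in 1 2 3; do
            gluster volume quota "$1" limit-usage "$2" "$3" "$4" && return 0
            sleep 1
        done
        return 1
    }
    set_limit v1 /dir1 12MB 10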
On Fri, Nov 11, 2016 at 2:42 AM, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
On Thu, Nov 10, 2016 at 11:56 AM, Niels de Vos <ndevos@xxxxxxxxxx> wrote:
> On Thu, Nov 10, 2016 at 11:44:21AM -0500, Vijay Bellur wrote:
>> On Thu, Nov 10, 2016 at 11:14 AM, Shyam <srangana@xxxxxxxxxx> wrote:
>> > On 11/10/2016 11:01 AM, Vijay Bellur wrote:
>> >>
>> >> On Thu, Nov 10, 2016 at 10:49 AM, Shyam <srangana@xxxxxxxxxx> wrote:
>> >>>
>> >>>
>> >>>
>> >>> On 11/10/2016 10:21 AM, Vijay Bellur wrote:
>> >>>>
>> >>>>
>> >>>> On Thu, Nov 10, 2016 at 10:16 AM, Manikandan Selvaganesh
>> >>>> <manikandancs333@xxxxxxxxx> wrote:
>> >>>> Given that we are done with the last release in 3.6.x, I think there
>> >>>> would be users looking to upgrade. My vote is to include the
>> >>>> necessary patches in 3.9 and not let users go through unnatural
>> >>>> workflows to get quota working again in 3.9.0.
>> >>>
>> >>>
>> >>>
>> >>> <Comment is without knowing if the necessary patches are good to go>
>> >>>
>> >>> Consider this a curiosity question ATM,
>> >>>
>> >>> 3.9 is an LTM, right? So we are not stating workflows here are set in
>> >>> stone?
>> >>> Can this not be a projected workflow?
>> >>>
>> >>
>> >>
>> >> 3.9 is an STM release as per [1].
>> >
>> >
>> > Sorry, I meant STM.
>> >
>> >>
>> >> Irrespective of a release being LTM or not, being able to upgrade to a
>> >> release without operational disruptions is a requirement.
>> >
>> >
>> > I would say upgrade to an STM *may be* painful, as it is an STM and hence may
>> > contain changes that are yet to be announced stable or changed workflows
>> > that are not easy to upgrade to. We do need to document them though, even
>> > for the STM.
>> >
>> > Along these lines, the next LTM should be as stated, i.e "without
>> > operational disruptions". The STM is for adventurous folks, no?
>> >
>>
>> In my view STM releases are for getting new features out early. This would
>> enable early adopters to try and provide feedback about new features.
>> Existing features and upgrades should work smoothly. IOW, we do not
>> want to have known regressions for existing features in STM releases.
>> New features might have rough edges and this should be amply
>> advertised.
>
> I do not think users on 3.6 are the right consumers for an STM release.
> These users are conservative and did not upgrade earlier. I doubt they
> are interested in new features *now*. Users that did not upgrade before,
> are unlikely the users that will upgrade in three months when 3.9 is
> EOL.
>
>> In this specific case, quota has not undergone any significant changes
>> in 3.9 and letting such a relatively unchanged feature affect users
>> upgrading from 3.6 does not seem right to me. Also note that since
>> LATEST in d.g.o would point to 3.9.0 after the release, users
>> performing package upgrades on their systems could end up with 3.9.0
>> inadvertently.
>
> The packages from the CentOS Storage SIG will by default provide the
> latest LTM release. The STM release is provided in addition, and needs
> an extra step to enable.
>
> I am not sure how we can handle this in other distributions (or also
> with the packages on d.g.o.).
Maybe we should not flip the LATEST for non-RPM distributions in d.g.o?
Or should we introduce LTM/LATEST and encourage users to change
their repository files to point to this?
Packaging in distributions would be handled by package maintainers and
I presume they can decide the appropriateness of a release for
packaging?
Thanks,
Vijay
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel