Fw: [Gluster-users] Distributed-Disperse Shard Behavior


 



It seems quite odd.
I'm adding the devel list, as it looks like a bug - but it could be a feature ;)

Best Regards,
Strahil Nikolov


----- Forwarded message -----
From: Fox <foxxz.net@xxxxxxxxx>
To: Gluster Users <gluster-users@xxxxxxxxxxx>
Sent: Saturday, February 5, 2022, 05:39:36 GMT+2
Subject: Re: [Gluster-users] Distributed-Disperse Shard Behavior

I tried setting the shard size to 512MB. It slightly improved the space utilization during creation; usage no longer quite doubled. And I didn't run out of space creating a file that occupied 6 GB of the 8 GB volume (I even tried 7168 MB just fine). See the attached command line log.
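In case it helps with debugging, the shard pieces should be visible directly on the bricks; as far as I understand, the shard translator keeps the first shard-block-size worth of data in the base file and the rest as GFID-named pieces under a hidden .shard directory at the brick root. Something like this (brick paths from my test cluster):

ls -lh /data/brick1/gv30/.shard/
getfattr -d -m . -e hex /data/brick1/gv30/file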

On Fri, Feb 4, 2022 at 6:59 PM Strahil Nikolov <hunter86_bg@xxxxxxxxx> wrote:
It sounds like a bug to me.
In virtualization, sharding is quite common (albeit on replica volumes) and I have never observed such behavior.
Can you increase the shard size to 512M and check whether the situation improves?
Also, share the volume info.
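For example (substitute your own volume name):

gluster volume set <volname> features.shard-block-size 512MB
gluster volume info <volname>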

Best Regards,
Strahil Nikolov

On Fri, Feb 4, 2022 at 22:32, Fox wrote:
I am using Gluster v10.1 and creating a Distributed-Disperse volume with sharding enabled.
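Roughly, the volume setup is (the full commands are in the attached command line log):

gluster volume create gv30 disperse 5 tg{1,2,3,4,5}:/data/brick1/gv30 tg{1,2,3,4,5}:/data/brick2/gv30
gluster volume set gv30 features.shard on
gluster volume start gv30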

I create a 2 GB file on the volume using the 'dd' tool. The file size shows as 2 GB with 'ls'. However, 'df' shows 4 GB of space used on the volume. After several minutes the volume utilization drops back to the 2 GB I would expect.
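The test sequence is essentially this (2 GB case shown; /mnt is where the volume is mounted):

dd if=/dev/zero of=file bs=1M count=2048
ls -lh file
df -h /mnt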

This is repeatable for different large file sizes and different disperse/redundancy brick configurations.

I've also encountered a situation, with the configuration above, where I use close to the full disk capacity and am momentarily unable to delete the file.

I have attached a command line log of an example of the above, using a set of test VMs set up as a GlusterFS cluster.

Is this initial 2x space utilization anticipated behavior for sharding?

It would mean that I can never create a file bigger than half my volume size, as I get an I/O error (no space left on device).
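As rough arithmetic, assuming the write really does consume twice the file size until the shards settle:

largest writable file ≈ volume capacity / 2
e.g. 8 GB volume / 2 ≈ 4 GB before dd fails with "No space left on device"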
________
root@tg1:~# gluster volume create gv30 disperse 5 tg{1,2,3,4,5}:/data/brick1/gv30 tg{1,2,3,4,5}:/data/brick2/gv30
volume create: gv30: success: please start the volume to access data

root@tg1:~# gluster volume set gv30 features.shard on
volume set: success

root@tg1:~# gluster volume set gv30 features.shard-block-size 512MB
volume set: success

root@tg1:~# gluster volume start gv30
volume start: gv30: success

root@tg1:~# gluster volume info

Volume Name: gv30
Type: Distributed-Disperse
Volume ID: e14cf92b-6f2d-420d-97ac-f725959d0398
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 1) = 10
Transport-type: tcp
Bricks:
Brick1: tg1:/data/brick1/gv30
Brick2: tg2:/data/brick1/gv30
Brick3: tg3:/data/brick1/gv30
Brick4: tg4:/data/brick1/gv30
Brick5: tg5:/data/brick1/gv30
Brick6: tg1:/data/brick2/gv30
Brick7: tg2:/data/brick2/gv30
Brick8: tg3:/data/brick2/gv30
Brick9: tg4:/data/brick2/gv30
Brick10: tg5:/data/brick2/gv30
Options Reconfigured:
features.shard-block-size: 512MB
features.shard: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

root@tg1:~# mount -t glusterfs tg1:/gv30 /mnt

root@tg1:~# cd /mnt

root@tg1:/mnt# df -h
Filesystem      Size  Used Avail Use% Mounted on
tg1:/gv30       8.0G  399M  7.6G   5% /mnt

root@tg1:/mnt# dd if=/dev/zero of=file bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 36.3422 s, 59.1 MB/s

root@tg1:/mnt# df -h
Filesystem      Size  Used Avail Use% Mounted on
tg1:/gv30       8.0G  3.9G  4.1G  49% /mnt

(about 5 minutes later)

root@tg1:/mnt# df -h
Filesystem      Size  Used Avail Use% Mounted on
tg1:/gv30       8.0G  2.4G  5.6G  31% /mnt

root@tg1:/mnt# rm file 

root@tg1:/mnt# df -h
Filesystem      Size  Used Avail Use% Mounted on
tg1:/gv30       8.0G  399M  7.6G   5% /mnt

root@tg1:/mnt# dd if=/dev/zero of=file bs=1M count=6144
6144+0 records in
6144+0 records out
6442450944 bytes (6.4 GB, 6.0 GiB) copied, 96.3252 s, 66.9 MB/s

root@tg1:/mnt# df -h
Filesystem      Size  Used Avail Use% Mounted on
tg1:/gv30       8.0G  7.0G  1.1G  88% /mnt

(about 5 minutes later)

root@tg1:/mnt# df -h
Filesystem      Size  Used Avail Use% Mounted on
tg1:/gv30       8.0G  6.7G  1.3G  85% /mnt


