Trouble with data size in a Distributed-Replicated 2 x 3 = 6 volume


 



Hi,

In my Distributed-Replicated volume [2 x 3 = 6], used as a data domain for an oVirt hypervisor, I have run into trouble with the used space:

It was previously a Replicated 1 x 3 = 3 volume, extended by adding one brick (one disk) on each server to reach a 2 x 3 = 6 volume.

df -h on the mount point gives a 98% used volume (see below):

Filesystem              Size  Used Avail Use% Mounted on

ovirt1.pmmg.priv:/data  3.4T  3.3T   82G  98% /rhev/data-center/mnt/glusterSD/ovirt1.pmmg.priv:_data


But du -hs on this folder gives a really different amount of data:


du -hs /rhev/data-center/mnt/glusterSD/ovirt1.pmmg.priv\:_data

1.4T /rhev/data-center/mnt/glusterSD/ovirt1.pmmg.priv:_data


And of course there is just one small hidden folder (.trashcan, nearly empty) in this mount point:


du -hs /rhev/data-center/mnt/glusterSD/ovirt1.pmmg.priv\:_data/.trashcan

1.2M /rhev/data-center/mnt/glusterSD/ovirt1.pmmg.priv:_data/.trashcan


I don't know how to free space in this volume, nor why there is such a difference between du and df. Are my data replicated across the 6 bricks?
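For what it's worth, I am aware that df and du can diverge on any local filesystem when files are deleted while a process still holds them open (the blocks stay allocated until the last descriptor is closed). A minimal local demonstration of that effect, in a hypothetical temp directory rather than the actual Gluster mount, looks like this:

```shell
# A file unlinked while still held open keeps occupying blocks:
# df sees them, du does not. (Illustration only, not the Gluster mount.)
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/big" bs=1M count=10 status=none
exec 3<"$tmpdir/big"               # hold the file open on fd 3
rm "$tmpdir/big"                   # unlink it: invisible to du from now on
du_kb=$(du -sk "$tmpdir" | awk '{print $1}')
echo "du now reports ${du_kb} KB"  # near zero, yet df still counts 10 MB
exec 3<&-                          # closing the fd finally frees the blocks
rmdir "$tmpdir"
```

But I cannot tell whether something similar, or leftover data on the bricks themselves, explains a 1.9 TB gap on this volume.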


Any clues will be appreciated.


Thanks

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
