Hi Hari,
I hope that the crawl will run for at most a couple of days. Do you know if there is a way to solve the issue permanently?
The GlusterFS version is 3.12.14. You can find some additional info below.
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 12 x (4 + 2) = 72
Transport-type: tcp
Many thanks, Mauro
Hi,

Yes, the above mentioned steps are right. The way to find out whether the crawl is still happening is to grep for quota_crawl among the running processes:

# ps aux | grep quota_crawl

As long as this process is alive, the crawl is happening.

Note: the crawl does take a lot of time as well. And it happens twice.

On Fri, Jul 19, 2019 at 5:42 PM Mauro Tridici <mauro.tridici@xxxxxxx> wrote:

Hi Hari,
thank you very much for the fast answer. I think we will try to solve the issue by disabling and re-enabling quota. So, if I understand correctly, I have to do the following:
- save the current quota limits in my notes;
- disable quota using the "gluster volume quota tier2 disable" command;
- wait for the crawl to finish (question: how can I tell that the crawl has terminated? how long should I wait?);
- enable quota using "gluster volume quota tier2 enable";
- set the previous quota limits again.
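The steps above can be sketched as two small shell helpers: one that turns the output of "gluster volume quota tier2 list" into a replay script (for saving and restoring the limits), and one that polls for the quota_crawl process as Hari suggests. This is only a sketch under assumptions: the "list" column layout (path in column 1, hard limit in column 2) and the 60-second polling interval are guesses that should be verified locally before relying on them.

```shell
#!/bin/sh
# Sketch for the quota disable/re-enable procedure. The gluster
# invocations themselves are shown as comments at the bottom, since
# they need a live cluster; the helpers only parse text and poll
# processes.

# Step 1: turn "gluster volume quota tier2 list" output (on stdin)
# into "limit-usage" commands that restore the limits later.
# Assumption: two header lines, then rows of "path hard-limit ...".
save_limits() {
    awk 'NR > 2 && $1 ~ /^\// {
        printf "gluster volume quota tier2 limit-usage %s %s\n", $1, $2
    }'
}

# Step 3: wait until no quota_crawl process is left (Hari's check,
# in a loop). pgrep without -f matches process names only, so this
# script does not match its own command line.
wait_for_crawl() {
    interval="${1:-60}"
    while pgrep quota_crawl >/dev/null 2>&1; do
        echo "quota_crawl still running; retrying in ${interval}s..."
        sleep "$interval"
    done
    echo "no quota_crawl process found; crawl appears to be finished"
}

# Usage on a live cluster (not run here):
#   gluster volume quota tier2 list | save_limits > restore_limits.sh
#   gluster volume quota tier2 disable
#   wait_for_crawl 300
#   gluster volume quota tier2 enable
#   sh restore_limits.sh
```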
Is this correct?
Many thanks for your support, Mauro
On 19 Jul 2019, at 12:48, Hari Gowtham <hgowtham@xxxxxxxxxx> wrote:
Hi Mauro,
The fsck script is the fastest way to resolve the issue. The other way would be to disable quota and, once the crawl for the disable is done, enable it and set the limits again. In this way, the crawl happens twice, and hence it's slow.
On Fri, Jul 19, 2019 at 3:27 PM Mauro Tridici <mauro.tridici@xxxxxxx> wrote:
Dear All,
I’m experiencing a problem with the gluster file system quota again. The output of "df -hT /tier2/CSP/sp1" is different from that of "du -ms" executed against the same folder.
[root@s01 manual]# df -hT /tier2/CSP/sp1
Filesystem     Type            Size  Used Avail Use% Mounted on
s01-stg:tier2  fuse.glusterfs   25T   22T  3.5T  87% /tier2
[root@s01 sp1]# du -ms /tier2/CSP/sp1
14TB /tier2/CSP/sp1
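A quick way to put the two numbers side by side in the same unit is the small helper below. It is only a sketch using GNU coreutils options: on an ordinary filesystem df reports usage for the whole filesystem, while on a gluster mount with quota enabled it reflects the quota accounting for that directory, which is exactly the figure that can drift from du.

```shell
# Sketch: print quota-accounted usage (df) next to actual usage (du)
# for one directory, both in 1K blocks (GNU df/du options assumed).
check_mismatch() {
    dir="$1"
    df_used=$(df --output=used "$dir" | tail -n 1 | tr -d ' ')
    du_used=$(du -s "$dir" | awk '{print $1}')
    echo "df reports ${df_used} KB used, du reports ${du_used} KB for $dir"
}

# Usage:
#   check_mismatch /tier2/CSP/sp1
```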
In the past, I successfully used the quota_fsck_new-6.py script to detect the SIZE_MISMATCH occurrences and fix them. Unfortunately, the number of sub-directories and files saved in /tier2/CSP/sp1 has grown so much that the list of SIZE_MISMATCH entries is now very long.
Is there a faster way to correct the mismatching outputs? Could you please help me solve this issue, if possible?
Thank you in advance, Mauro
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Regards,
Hari Gowtham.