Re: really large number of skipped files after a scrub


Great to hear that.
You can also set up some logic to track the scrub status (for example, an ELK stack to ingest the logs).
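To sketch that idea: a minimal shell helper (the function name is my own invention; the field labels are taken from the scrub status output quoted below in this thread, so adjust them if your gluster version words them differently) that flattens `scrub status` output into one CSV line per node, which a log shipper can then feed into ELK:

```shell
# Sketch: turn "gluster volume bitrot <vol> scrub status" output into
# CSV lines of the form node,scrubbed,skipped for ingestion into ELK.
gluster_status_to_csv() {
  awk -F': *' '
    /^Node:/                     { node = $2 }      # remember current node
    /^Number of Scrubbed files:/ { scrubbed = $2 }  # remember scrubbed count
    /^Number of Skipped files:/  { print node "," scrubbed "," $2 }
  '
}

# Usage: gluster volume bitrot gv0 scrub status | gluster_status_to_csv
```

Running this on a schedule gives per-node scrubbed/skipped counters that are easy to graph and alert on.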

Best Regards,
Strahil Nikolov

On Thursday, 19 January 2023 at 15:19:27 GMT+2, cYuSeDfZfb cYuSeDfZfb <cyusedfzfb@xxxxxxxxx> wrote:


Hi,

Just to follow up on my first observation from this December email: automatic scheduled scrubs were not happening. We have now upgraded glusterfs from 7.4 to 10.1, and the automated scrubs ARE running now. Not sure why they didn't in 7.4, but issue solved. :-)

MJ

On Mon, 12 Dec 2022 at 13:38, cYuSeDfZfb cYuSeDfZfb <cyusedfzfb@xxxxxxxxx> wrote:
Hi,

I am running a PoC with a Gluster cluster, and, as one does, I am trying to break and heal it.

One of the things I am testing is scrubbing / healing.

My cluster is created on ubuntu 20.04 with stock glusterfs 7.2, and my test volume info:

Volume Name: gv0
Type: Replicate
Volume ID: 7c09100b-8095-4062-971f-2cea9fa8c2bc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/brick1/gv0
Brick2: gluster2:/data/brick1/gv0
Brick3: gluster3:/data/brick1/gv0
Options Reconfigured:
features.scrub-freq: daily
auth.allow: x.y.z.q
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
features.bitrot: on
features.scrub: Active
features.scrub-throttle: aggressive
storage.build-pgfid: on

I have two issues:

1) scrubs are configured to run daily (see above) but they don't automatically happen. Do I need to configure something to actually get daily automatic scrubs?
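(As a stopgap while scheduled scrubs are not firing, a scrub can be started by hand with the standard `scrub ondemand` bitrot sub-command; a hypothetical cron entry to force one daily, where the schedule, file name, and binary path are assumptions:)

```shell
# Hypothetical /etc/cron.d/gluster-scrub entry (schedule, user, and the
# /usr/sbin path are assumptions; "scrub ondemand" itself is a standard
# GlusterFS bitrot sub-command). Forces a scrub of gv0 every day at 03:00:
#
#   0 3 * * * root /usr/sbin/gluster volume bitrot gv0 scrub ondemand
```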

2) A "scrub status" reports *many* skipped files, and only very few files that have actually been scrubbed. Why are so many files skipped?

See:

gluster volume bitrot gv0 scrub status

Volume name : gv0
State of scrub: Active (Idle)
Scrub impact: aggressive
Scrub frequency: daily
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log

=========================================================
Node: localhost
Number of Scrubbed files: 8112
Number of Skipped files: 51209
Last completed scrub time: 2022-12-10 04:36:55
Duration of last scrub (D:M:H:M:S): 0:16:58:53
Error count: 0

=========================================================
Node: gluster3
Number of Scrubbed files: 42
Number of Skipped files: 59282
Last completed scrub time: 2022-12-10 02:24:42
Duration of last scrub (D:M:H:M:S): 0:16:58:15
Error count: 0

=========================================================
Node: gluster2
Number of Scrubbed files: 42
Number of Skipped files: 59282
Last completed scrub time: 2022-12-10 02:24:29
Duration of last scrub (D:M:H:M:S): 0:16:58:2
Error count: 0

=========================================================

Thanks!
MJ
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
