Re: Fwd: really large number of skipped files after a scrub

By the way, what is the output of 'ps aux | grep bitd'?

Best Regards,
Strahil Nikolov 

On Tue, Dec 13, 2022 at 15:45, Strahil Nikolov
<hunter86_bg@xxxxxxxxx> wrote:
Based on https://bugzilla.redhat.com/show_bug.cgi?id=1299737#c12 , the previous name of this counter was 'number of unsigned files'.

Signing seems to be a very complex process (see http://goo.gl/Mjy4mD ), and as far as I understand, those 'skipped' files were simply too new to have been signed yet.
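
You can verify this on one of the brick nodes (a rough check; the brick path is taken from your volume info below, and the file name is just a placeholder):

    # dump all extended attributes of a file directly on the brick (run as root)
    getfattr -m . -d -e hex /data/brick1/gv0/some-file

A file the bitrot daemon has already signed carries a trusted.bit-rot.signature xattr; files without it have not been signed yet, and the scrubber skips them.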

If you do have RAID5/6, I think that bitrot detection is unnecessary.
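
Disabling it is a one-liner (a sketch, using the volume name from your output; note this also stops the signer and the scrubber):

    gluster volume bitrot gv0 disable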

Best Regards,
Strahil Nikolov 

On Tue, Dec 13, 2022 at 12:33, cYuSeDfZfb cYuSeDfZfb
<cyusedfzfb@xxxxxxxxx> wrote:
Hi,

I am running a PoC with a GlusterFS cluster, and, as one does, I am trying to break and heal it.

One of the things I am testing is scrubbing / healing.

My cluster is created on ubuntu 20.04 with stock glusterfs 7.2, and my test volume info:

Volume Name: gv0
Type: Replicate
Volume ID: 7c09100b-8095-4062-971f-2cea9fa8c2bc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/brick1/gv0
Brick2: gluster2:/data/brick1/gv0
Brick3: gluster3:/data/brick1/gv0
Options Reconfigured:
features.scrub-freq: daily
auth.allow: x.y.z.q
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
features.bitrot: on
features.scrub: Active
features.scrub-throttle: aggressive
storage.build-pgfid: on

I have two issues:

1) Scrubs are configured to run daily (see above), but they don't automatically happen. Do I need to configure something to actually get daily automatic scrubs? (As a workaround I can trigger runs by hand; see the sketch after the status output below.)

2) A "scrub status" reports *many* skipped files, and only very few files that have actually been scrubbed. Why are so many files skipped?

See:

gluster volume bitrot gv0 scrub status

Volume name : gv0
State of scrub: Active (Idle)
Scrub impact: aggressive
Scrub frequency: daily
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log

=========================================================
Node: localhost
Number of Scrubbed files: 8112
Number of Skipped files: 51209
Last completed scrub time: 2022-12-10 04:36:55
Duration of last scrub (D:M:H:M:S): 0:16:58:53
Error count: 0

=========================================================
Node: gluster3
Number of Scrubbed files: 42
Number of Skipped files: 59282
Last completed scrub time: 2022-12-10 02:24:42
Duration of last scrub (D:M:H:M:S): 0:16:58:15
Error count: 0

=========================================================
Node: gluster2
Number of Scrubbed files: 42
Number of Skipped files: 59282
Last completed scrub time: 2022-12-10 02:24:29
Duration of last scrub (D:M:H:M:S): 0:16:58:2
Error count: 0

=========================================================
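
Regarding 1), as a workaround I can apparently kick off a run manually (a sketch; assuming the ondemand subcommand is available in glusterfs 7.2):

    # start a scrub run immediately instead of waiting for the schedule
    gluster volume bitrot gv0 scrub ondemand

but I would prefer the daily schedule to work on its own.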

Thanks!
MJ
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
