On 4/11/18 11:11 pm, Berend De Schouwer wrote:
> On Sun, 2018-11-04 at 18:57 +1100, Eyal Lebedinsky wrote:
>> On 4/11/18 6:39 pm, Berend De Schouwer wrote:
>>> On Sun, 2018-11-04 at 17:32 +1100, Eyal Lebedinsky wrote:
>>>> On 4/11/18 4:44 pm, Samuel Sieb wrote:
>>>>> On 11/3/18 10:19 PM, Eyal Lebedinsky wrote:
>>>>>> What is this io, and can it be stopped? I want to allow the
>>>>>> disks to enter low power mode (not spin down) when idle.
>>>>> I'm assuming since it's a new RAID that you haven't created files
>>>>> on it yet, or at least not many. Try running "lsof +D /mnt/point"
>>>>> to see if there is any process looking at it. Then try running
>>>>> "inotifywait -rm /mnt/point" and let it run for a little while to
>>>>> see if you can catch some process accessing the fs. You will need
>>>>> to install "inotify-tools" to get that program.
>>>> Sure, should have said a bit more: the array was resync'ed, then
>>>> (much) data was copied in. This was a few days ago.
>>> Did the sync finish? What is the kernel status of the array?
>>> cat /proc/mdstat
>> Yes, as I mentioned, the sync finished (28/Oct), then the copy
>> finished (30/Oct).
>> I acquired a new case and installed the array there (using an old
>> mobo) to test how the case ventilation performs. Waiting for a new
>> mobo/CPU/mem.
>> It is during this quiet period that I noticed the io issue which got
>> me wondering.
>> Since this happens only when the array is mounted, and I do not see
>> any files being touched, I wondered if this is some ext4 internal
>> housekeeping. Can this be related to the size of the fs?
> Maybe. Maybe the journal. Maybe a runaway sync().
The machine was rebooted numerous times; same thing.
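
A generic check for a runaway writer (just a sketch, nothing
array-specific; 'Dirty' should settle near zero on an idle fs):

  watch -n 5 'grep -E "^(Dirty|Writeback):" /proc/meminfo'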
> You can play with mount options like 'noatime.' Note that some mount
> options might cause data corruption. Look in /proc/mounts for the
> currently used options. See if there's something different to /.
Same options:
/dev/sda2 on / type ext4 (rw,relatime)
/dev/md127 on /new-raid type ext4 (rw,relatime,stripe=640)
'noatime' shows similar activity to 'relatime'.
A read-only mount stops this activity.
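
For reference, those tests were plain remounts, something like

  mount -o remount,noatime /new-raid
  mount -o remount,ro /new-raid

with 'remount,rw' to undo the second one.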
> Your original mail showed more activity on /dev/sdb .. sdh than on
> /dev/md127, so it might be raid housekeeping, or an ext4/raid barrier.
This is OK. Looking at one entry in 'iostat 100':
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.01    0.00    0.12    0.64    0.00   99.24

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0.01         0.08         0.00          8          0
sdc               5.14         5.12        16.23        512       1623
sdg               6.50        26.36        37.47       2636       3747
sdh               5.86         7.80        18.91        780       1891
sdf               7.64        35.76        46.87       3576       4687
sde               6.64        19.76        30.87       1976       3087
sdd               5.14         5.12        16.23        512       1623
sdb               5.28         7.36        18.47        736       1847
md127             1.59         0.00        35.76          0       3576
For RAID6 'write' generates more activity than for a plain device.
Even a small 'write' leads to a whole-stripe read/modify/write.
Note that there are no 'read' operations on the array.
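If the stripe=640 mount option reflects the raid geometry (an
assumption on my part), a full stripe is 640 x 4KB blocks = 2560KB
spread over the 5 data disks of the 7-disk RAID6, i.e. a 512KB chunk
per disk, so even a small write can turn into a read/modify/write of
a couple of MB across the members.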
What I see is a periodic 'write' of about 20KB to md127, probably to
the fs. This rate is very constant.
If I have to guess I would say this is some ext4 internal activity to a
control area (not to a file in the fs).
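
One experiment that could confirm the journal theory (a guess on my
part, untested): stretch the commit interval from the default 5s and
see if the write period stretches with it:

  mount -o remount,commit=60 /new-raid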
> /dev/md127 showed only write access. Is that typical too?
> The shortest way to know if it's ext4 is to re-format as xfs or btrfs.
> I don't suggest you do that lightly.
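
For the record, that test would be something like

  umount /new-raid
  mkfs.xfs /dev/md127

which destroys the current fs, so not before a full backup.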
--
Eyal at Home (fedora@xxxxxxxxxxxxxx)