On Monday, June 29, 2020 1:09:16 AM MST Markus Larsson wrote:
> On 29 June 2020 08:26:21 CEST, "John M. Harris Jr" <johnmh@xxxxxxxxxxxxx> wrote:
> > On Sunday, June 28, 2020 5:37:08 PM MST Chris Adams wrote:
> > > Once upon a time, John M. Harris Jr <johnmh@xxxxxxxxxxxxx> said:
> > > > XFS proved to be troublesome, and still is up to the latest RHEL 7.
> > > > It's not uncommon to have to run xfs_repair on smaller XFS
> > > > partitions, especially /boot. I'm not sure if btrfs has the same
> > > > issue there?
> > >
> > > [citation needed]
> > >
> > > I haven't run xfs_repair in probably 15 years (and so never on Fedora
> > > or RHEL/CentOS).
> >
> > I haven't had time to figure out why the RHEL systems I have that are
> > using XFS (mistakenly, I assume, though they were created before I was
> > hired) run into that issue: after about a month, they report 100% disk
> > space utilization on /boot, and I've got to run xfs_repair to fix it.
> > In the unlikely event that I find the time to figure out why before I
> > just re-install them (which is already planned), I'd be happy to follow
> > up with a citation. :)
>
> That is very odd. I haven't seen it once in over a decade in an
> environment with thousands of machines. Very interesting though, I think
> I will have to try to replicate this. Is there anything special about
> them, like an odd partition layout etc.?

I can't confirm at the moment, but I'm pretty sure /boot is a 1GiB (maybe
2GiB) XFS partition. I've marked your message as "TODO" in my client, so I
can get you more info tomorrow if you're interested.

--
John M. Harris, Jr.
Splentity
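
P.S. In case it helps with reproducing: the sequence on the affected hosts
looks roughly like the below. /dev/sda1 is just a placeholder; the actual
device backing /boot varies per host, so substitute whatever lsblk shows.

    # Layout info (the "anything special" question):
    lsblk                    # overall partition layout
    df -h /boot              # shows the 100% utilization
    xfs_info /boot           # XFS geometry of the mounted /boot

    # The workaround; xfs_repair requires the filesystem to be unmounted:
    umount /boot
    xfs_repair /dev/sda1     # placeholder; use the real /boot device
    mount /boot
    df -h /boot              # re-check utilization after the repair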