Re: consider dropping defrag of journals on btrfs

On Fri, 05.02.21 20:43, Dave Howorth (systemd@xxxxxxxxxxxxxx) wrote:

> 128 MB files, and I might allocate an extra MB or two for overhead, I
> don't know. So when it first starts there'll be 128 MB allocated and
> 384 MB free. In stable state there'll be 512 MB allocated and nothing
> free. One 128 MB file allocated and slowly being used. 384 MB full of
> archive files. You always have between 384 MB and 512 MB of logs
> stored. I don't understand where you're getting your numbers from.

As mentioned elsewhere: we typically have to remove two "almost 128M"
files to get space for "exactly 128M" of guaranteed space.
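
To make that concrete, here's a tiny self-contained C simulation of
the arithmetic (not journald's actual vacuuming code; all sizes are
made up):

    /* Archives rotate a bit *before* the 128M limit, so each one
     * frees "almost 128M" -- guaranteeing a full 128M for a fresh
     * file therefore usually takes two deletions. */
    #include <stdio.h>
    #include <stdint.h>

    #define MiB (1024ull * 1024ull)
    #define FILE_MAX (128 * MiB)

    int main(void) {
            /* Simulated sizes of archived files, each short of 128M. */
            uint64_t archives[] = { 121 * MiB, 118 * MiB, 125 * MiB };
            uint64_t free_space = 2 * MiB;
            size_t deleted = 0;

            for (size_t i = 0; i < 3 && free_space < FILE_MAX; i++, deleted++)
                    free_space += archives[i];

            /* Prints: deleted 2 archive(s), 241 MiB now free */
            printf("deleted %zu archive(s), %llu MiB now free\n",
                   deleted, (unsigned long long)(free_space / MiB));
            return 0;
    }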

And you know, each user gets their own journal. Hence, once a single
user logs a single line another 128M is gone, and if another user then
does it, bam, another 128M is gone.

We can't eat space away like that.
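
As a back-of-the-envelope sketch (illustrative numbers, not a real
configuration): every active journal file must be allowed to grow to
the full per-file limit, so the reservation scales with the number of
users.

    #include <stdio.h>

    int main(void) {
            const unsigned file_max_mib = 128; /* per-file size limit */
            const unsigned journals = 3;       /* system + two users  */

            /* Worst case, each file may grow to the limit. */
            printf("worst-case reservation: %u MiB\n",
                   journals * file_max_mib);
            return 0;
    }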

> If you can't figure out which parts of an archived file are useful and
> which aren't then why are you keeping them? Why not just delete them?
> And if you can figure it out then why not do so and compact the useful
> information into the minimum storage?

We archive for multiple reasons: because the file was dirty when we
started up (in which case there apparently was an abnormal shutdown of
the system or journald), or because we rotate and start a new file (or
time change or whatnot). In the first ("dirty") case we don't touch
the file at all, because it's likely corrupt and we don't want to
corrupt it further. We just rename it so that it gets "~" at the
end. When we archive the "clean" way we mark the file internally as
archived, but before that we sync everything to disk, so that we know
for sure it's all in a good state, and then we don't touch it anymore.
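
In rough pseudo-C, the two paths look like this. It's a sketch only:
the one-byte header flag and the file names are simplified stand-ins,
not journald's real on-disk format or function names.

    #include <stdio.h>
    #include <unistd.h>

    /* Simplified stand-in for the journal header's state field. */
    struct header { char state; };  /* 'O' = online, 'A' = archived */

    /* Dirty case: don't touch the contents at all, just rename the
     * file so it carries a "~" suffix and is never written again. */
    static void archive_dirty(const char *path) {
            char new_path[256];
            snprintf(new_path, sizeof(new_path), "%s~", path);
            rename(path, new_path);
    }

    /* Clean case: mark the header as archived, then sync everything
     * to disk so the file is in a known-good state before we stop
     * touching it. */
    static void archive_clean(FILE *f, struct header *h) {
            h->state = 'A';
            fseek(f, 0, SEEK_SET);
            fwrite(h, sizeof(*h), 1, f);
            fflush(f);         /* flush stdio buffers...    */
            fsync(fileno(f));  /* ...and force data to disk */
    }

    int main(void) {
            struct header h = { 'O' };
            FILE *f = fopen("system.journal", "w+b");
            if (!f)
                    return 1;
            fwrite(&h, sizeof(h), 1, f);
            archive_clean(f, &h);            /* normal rotation path */
            fclose(f);

            /* For demonstration, treat the same file as a leftover
             * from a crash and take the rename-only path. */
            archive_dirty("system.journal"); /* -> system.journal~ */
            return 0;
    }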

"journalctl" will process all these files, regardless if "dirty"
archived or "clean" archived. It tries hard to make the best of these
files, and varirous codepaths to make sure we don't get confused by
half-written files, and can use as much as possible of the parts that
were written correctly.

Hence, that's why we don't delete corrupted files: because we use as
much of them as we can. Why? Because usually the logs shortly before
your system died abnormally are the most interesting.
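
A toy illustration of that reading strategy (the record format here is
invented; real journal files are far more involved, with object hashes
and offset tables): walk the file record by record and stop at the
first one that doesn't validate, keeping everything before it.

    #include <stdio.h>
    #include <stdint.h>

    #define RECORD_MAGIC 0x4c4f4721u  /* arbitrary validity marker */

    struct record {
            uint32_t magic;           /* must match RECORD_MAGIC */
            uint32_t len;             /* length of text payload  */
            char     text[56];
    };

    /* Print every record up to the first one that fails validation;
     * a half-written tail loses only itself, not the whole file. */
    static void dump_readable_records(FILE *f) {
            struct record r;

            while (fread(&r, sizeof(r), 1, f) == 1) {
                    if (r.magic != RECORD_MAGIC || r.len >= sizeof(r.text))
                            break;
                    r.text[r.len] = '\0';
                    printf("%s\n", r.text);
            }
    }

    int main(void) {
            FILE *f = fopen("demo.journal", "w+b");
            if (!f)
                    return 1;

            struct record ok  = { RECORD_MAGIC, 5, "hello" };
            struct record bad = { 0xdeadbeefu, 5, "junk " };
            fwrite(&ok,  sizeof(ok),  1, f);
            fwrite(&bad, sizeof(bad), 1, f);  /* simulated corruption */

            rewind(f);
            dump_readable_records(f);  /* prints "hello", then stops */
            fclose(f);
            return 0;
    }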

> > Because fs metadata, and because we don't always write files in
> > full. I mean, we often do not, because we start a new file *before*
> > the file would grow beyond the threshold. This typically means that
> > it's not enough to delete a single file to get the space we need
> > for a full new one; we usually need to delete two.
>
> Why would you start a new file before the old one is full?

Various reasons: the user asked for rotation or vacuuming, because of
an abnormal shutdown, because of a time change (we want individual
files to be monotonically ordered), …
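
Sketched as a decision function (made-up names and simplified checks;
not journald's actual logic):

    #include <stdbool.h>
    #include <stdint.h>

    enum rotate_reason {
            ROTATE_NONE,
            ROTATE_UNCLEAN,       /* file was dirty at startup      */
            ROTATE_USER_REQUEST,  /* explicit rotation or vacuuming */
            ROTATE_TIME_JUMP,     /* clock went backwards; keep the
                                   * files monotonically ordered    */
            ROTATE_SIZE,          /* file would pass the size limit */
    };

    static enum rotate_reason should_rotate(bool dirty, bool requested,
                                            uint64_t last_ts,
                                            uint64_t now_ts,
                                            uint64_t size, uint64_t max) {
            if (dirty)
                    return ROTATE_UNCLEAN;
            if (requested)
                    return ROTATE_USER_REQUEST;
            if (now_ts < last_ts)
                    return ROTATE_TIME_JUMP;
            if (size >= max)
                    return ROTATE_SIZE;
            return ROTATE_NONE;
    }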

Lennart

--
Lennart Poettering, Berlin