Re: Toy/demo: using ChatGPT to summarize lengthy LKML threads (b4 integration)


On Wed, Feb 28, 2024 at 06:00:07AM +0100, Willy Tarreau wrote:
> On Tue, Feb 27, 2024 at 05:32:34PM -0500, Konstantin Ryabitsev wrote:

> > So, the question is -- is this useful at all? Am I wasting time poking in this
> > direction, or is this something that would be of benefit to any of you? If the
> > latter, I will document how to set this up and commit the thread minimization
> > code I hacked together to make it cheaper.

> I identified a number of shortcomings with this: I suspect that those
> most interested in such output either are, a bit like me, not very
> active in kernel development, or focus on a specific area and mostly
> want to stay aware of ongoing changes in other areas they're really
> not familiar with.

I can imagine using this sort of thing for the case where I get to my
inbox in the morning and find that some enormous thread has appeared
overnight with people arguing, and I'm trying to get a handle on what
the various subthreads are all about.  The demo didn't cover exactly
that case, but it looked like it might be able to give some sort of
useful steer.

> And because of this I couldn't decide on what boundaries to cut the
> analysis. If it's "since last time I read my email", it can only be
> done locally and will be per-user. If it's a summary of a finished
> thread, it's not super interesting, and it's better explained (IMHO)
> on LWN, where the hot topics are summarized and developed. If it's
> the list of threads of the day, I suspect there are so many that it's
> unlikely I'd read all of them every evening or every morning. I've
> been wondering if an interesting approach would be to only summarize
> long threads, since most short ones are a patch, a review and an ACK
> and do not need to be summarized; but I think that most of us, seeing
> a subject repeat over many e-mails, will just look at a few exchanges
> there to get an idea of what's going on.

For the above case it'd be an on-demand thing which I'd reach for
occasionally.
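The long-thread heuristic quoted above could be sketched roughly like
this (an illustration only: normalize_subject and long_thread_subjects
are made-up names, and grouping by Subject is a crude stand-in for
proper References/In-Reply-To threading):

```python
from collections import Counter


def normalize_subject(subj: str) -> str:
    """Strip any number of leading 'Re:' prefixes so replies group
    with the message that started the thread."""
    s = subj.strip()
    while s.lower().startswith("re:"):
        s = s[3:].strip()
    return s


def long_thread_subjects(subjects, min_msgs=5):
    """Return the set of thread subjects seen at least min_msgs times;
    short patch/review/ACK exchanges fall below the threshold and are
    skipped rather than summarized."""
    counts = Counter(normalize_subject(s) for s in subjects)
    return {s for s, n in counts.items() if n >= min_msgs}
```

A short patch-plus-ACK thread (two messages) would fall below the
default threshold, while a six-message argument would be flagged for
summarization.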

> Also regarding processing costs, I've had extremely good results
> using the Mixtral-8x7B LLM in instruct mode running locally. It has a
> 32k context like GPT-4. And if that's not enough, given that most of
> a long thread's contents is in fact quoted text, it could be
> sufficient to drop multiply-indented quotes, preserving a response
> and its immediate context while dropping most of the repetition (this
> cuts your example thread roughly in half). But it still takes quite a
> bit of processing time: the 14 mails from the thread above took 13
> minutes on an 80-core Ampere Altra system (no GPU involved here).
> That's roughly 1 minute per e-mail, which is a lot per day, not
> counting the time needed to tune the prompt to get the best results!

That actually sounds potentially viable for my case, at least while I'm
at home.
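The minimization Willy describes (dropping multiply-indented quoted
text while keeping each reply and the text it directly responds to)
could be sketched like this. This is a rough illustration, not the
actual b4 code; quote_depth and strip_deep_quotes are hypothetical
names:

```python
def quote_depth(line: str) -> int:
    """Count leading '>' quote markers, tolerating an optional space
    after each one, so '>> foo' and '> > foo' both count as depth 2."""
    depth, i = 0, 0
    while i < len(line) and line[i] == ">":
        depth += 1
        i += 1
        if i < len(line) and line[i] == " ":
            i += 1
    return depth


def strip_deep_quotes(body: str, max_depth: int = 1) -> str:
    """Drop lines quoted more deeply than max_depth, keeping a reply
    plus the single level of context it responds to.  On a long thread
    this discards most of the repeated quoted material."""
    return "\n".join(
        line for line in body.splitlines() if quote_depth(line) <= max_depth
    )
```

With the default depth of 1, a body like ">> older context\n> direct
reply\nnew text" keeps only the direct reply and the new text, which
is what shrinks the token count fed to the model.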


