Re: Toy/demo: using ChatGPT to summarize lengthy LKML threads (b4 integration)

On Wed, Feb 28, 2024 at 04:29:53PM +0100, Willy Tarreau wrote:
> > Another use for this that I can think of is a way to summarize digests.
> > Currently, if you choose a digest subscription, you will receive a single
> > email with message subjects and all the new messages as individual
> > attachments. It would be interesting to see if we can send out a "here's
> > what's new" summary with links to threads instead.
> 
> Indeed!
> 
> > The challenge would be to do it in a way that doesn't bankrupt LFIT in the
> > process. :)
> 
> That's exactly why it would make sense to invest in one large machine
> and let it operate locally while "only" paying the power bill.

I'm not sure how realistic this is, if it takes 10 minutes to process a single
4000-word thread. :) With ChatGPT it would probably cost thousands of dollars
daily if we did this for large lists (and it doesn't really make sense to do
this on small lists anyway, as the whole purpose behind the idea is to
summarize lists with lots of traffic).
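To put a rough number on that, here is a back-of-envelope sketch. The tokens-per-word ratio and the per-token price are assumptions for illustration only (real pricing varies by model and changes over time); only the 4000-word thread size comes from the discussion above.

```python
# Back-of-envelope cost estimate for summarizing mailing list threads
# with a hosted LLM. All rates below are assumed, not real pricing.

WORDS_PER_THREAD = 4000        # thread size from the example above
TOKENS_PER_WORD = 1.3          # common rough heuristic for English text
PRICE_PER_1K_TOKENS = 0.03     # hypothetical USD rate; check current pricing

def daily_cost(threads_per_day: int) -> float:
    """Estimate the input-token cost of summarizing N threads per day."""
    tokens = threads_per_day * WORDS_PER_THREAD * TOKENS_PER_WORD
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# e.g. a busy list seeing 500 threads a day:
print(f"${daily_cost(500):.2f}/day")
```

Multiply that across all the high-traffic lists (and add output tokens, which are usually priced higher), and it is easy to see how the bill adds up.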

For the moment, I will document how I got this working and maybe look into
further shrinking the amount of data that needs to be sent to the LLM. I
will definitely need to make it easy to use a local model, since relying on
a proprietary service (of questionable repute in the eyes of many) would not
be in the true spirit of what we are all trying to do here. As I said, I was
mostly toying around with $25 worth of credits that I had with OpenAI.
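For what it's worth, swapping in a local model can be fairly painless if the local server speaks the OpenAI-style chat completions protocol, as llama.cpp's server and Ollama do. A minimal sketch, with the endpoint URL and model name being made-up placeholders:

```python
# Sketch: build an OpenAI-compatible chat request aimed at a local server.
# The URL and model name are placeholders, not a real deployment.

import json
import urllib.request

LOCAL_URL = "http://localhost:8080/v1/chat/completions"  # assumed local server

def build_summary_request(thread_text: str, model: str = "local-model"):
    """Build a chat-completions request asking for a thread summary."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize this mailing list thread in a few sentences."},
            {"role": "user", "content": thread_text},
        ],
    }
    return urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_summary_request("...thread text here...")
# urllib.request.urlopen(req) would send it to the local server
```

The point being that the same client code could target either a hosted service or a box in a rack, with only the base URL changing.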

-K



