Re: Toy/demo: using ChatGPT to summarize lengthy LKML threads (b4 integration)

On Wed, Feb 28, 2024 at 12:52:43PM -0500, Konstantin Ryabitsev wrote:
> On Wed, Feb 28, 2024 at 04:29:53PM +0100, Willy Tarreau wrote:
> > > Another use for this that I could think of is a way to summarize digests.
> > > Currently, if you choose a digest subscription, you will receive a single
> > > email with message subjects and all the new messages as individual
> > > attachments. It would be interesting to see if we can send out a "here's
> > > what's new" summary with links to threads instead.
> > 
> > Indeed!
> > 
> > > The challenge would be to do it in a way that doesn't bankrupt LFIT in the
> > > process. :)
> > 
> > That's exactly why it would make sense to invest in one large machine
> > and let it operate locally while "only" paying the power bill.
> 
> I'm not sure how realistic this is, if it takes 10 minutes to process a single
> 4000-word thread. :)

I know. People are getting much better performance with GPUs, and on Macs
in particular. I have not investigated such options at all; I'm only
relying on commodity hardware. I shared the commands so that those
interested, and with the right hardware, can try it as well (a rough
Python sketch of the idea follows below). I don't know how far we can
shrink that time.
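
For the archives, a local run looks roughly like the following in Python
with llama-cpp-python. This is only a sketch: the model file, context
size and prompt are placeholders, not the exact commands I shared earlier
in the thread:

from llama_cpp import Llama

# Placeholder model: any instruction-tuned GGUF file works the same way.
llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=8192)

# The thread to summarize, e.g. exported with "b4 mbox <msgid>".
thread_text = open("thread.mbox").read()

out = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Summarize this mailing list thread for a digest."},
        {"role": "user", "content": thread_text},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])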

> With ChatGPT it would probably cost thousands of dollars
> daily if we did this for large lists (and it doesn't really make sense to do
> this on small lists anyway, as the whole purpose behind the idea is to
> summarize lists with lots of traffic).

Sure.
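
Just to put a rough (and assumed) number on it: at gpt-4's list price
of about $0.03 per 1K input tokens, a list receiving 1,000 messages a
day at ~1,000 tokens each is already ~$30/day for a single pass, and
since each new reply means re-sending the accumulated thread context,
the real bill grows much faster than that.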

> For the moment, I will document how I got this working and maybe look into
> further shrinking the amount of data that would need to be sent to the
> LLM. I will definitely need to make it easy to use a local model, since
> relying on a proprietary service (of questionable repute in the eyes of many)
> would not be in the true spirit of what we are all trying to do here.

I tend to think that these solutions, both hosted and local, will evolve
very quickly, and it's prudent not to stick to a single approach anyway.
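
One nice property is that both paths can share the same client code:
llama.cpp's server (and similar tools such as Ollama) exposes an
OpenAI-compatible endpoint, so switching between hosted and local is
mostly a base-URL change. A minimal sketch, where the URL, port and
model name are assumptions for illustration:

from openai import OpenAI

# Point the client at a local OpenAI-compatible server instead of
# api.openai.com; llama.cpp's llama-server listens on :8080 by default.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

thread_text = open("thread.mbox").read()

resp = client.chat.completions.create(
    model="local",  # llama.cpp serves whatever model it was started with
    messages=[
        {"role": "system",
         "content": "Summarize this mailing list thread for a digest."},
        {"role": "user", "content": thread_text},
    ],
)
print(resp.choices[0].message.content)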

> As I
> said, I was mostly toying around with $25 worth of credits that I had with
> OpenAI.

And it was a great experience that showed really interesting results!

Cheers,
Willy



