On Wed, 26 Nov 2008, Charles Seeger wrote:

> +------ Patrick Vervoorn wrote (Wed, 26-Nov-2008, 10:44 +0100):
> | I've noticed what you mentioned before, but didn't find it a big nuisance.
> | A simple press of Ctrl-r when you're at the end puts you back at the
> | beginning.
>
> I'll give that a try, but I'm wanting to go back to the middle, where
> I had hit tab to read a selected thread, rather than to the beginning.
> Matt Ackeret suggested this might be a feature rather than a bug,
> in which case trn may still have a valid index to the previous page in
> the thread selector. If so, there may be a command to make use of that.
> Until now I've been remembering the percentage displayed at the bottom
> of the thread selector and using "<" to move back to that vicinity.
> All things considered, this certainly is a minor nuisance compared to
> memory leaks, crashes and non-ascii character support.

Interesting to see in what ways different people read the newsgroups.

In my case, I first do a scan of the overview pages, selecting what I
(think I) want to read. At the end, I press 'X' to junk all the other
stuff, and start reading: I scroll with 'space' while things are
interesting, press ',' when a sub-thread gets boring, and 'k' when the
whole thread has progressed beyond recovery. Trn does the rest.

It's rather rare that I select a single thread, read it, and then go
back to the article overview (usually a marked thread to which I
contributed), but I can understand why the behaviour you're describing
is a nuisance in that case.

> | As for my memory-related problems, I do not encounter these when I read
> | 'regular' text groups, even when they're pretty big and/or have a rather
> | big retention. I do encounter this when skimming through binary groups,
> | with up to several hundred k's of article overviews being pulled in.
>
> Since I don't read any binary groups, that probably is the difference.
> But, some text groups that I do read have 500-1000 articles per day, and
> I sometimes go several days to a week before accessing their overviews.
> But I don't think I have ever reached 50k of actual new articles (as
> opposed to article numbers). I don't run the server, so I'm not up on
> the retention periods, but 100k spooled articles may be an upper bound
> for the text groups that I'm reading/skimming.

Compared to the flow in the binary groups, the text groups are nothing
more than a drop in the bucket. I know of Usenet servers that do not
actually expire their text groups at all, since the storage they need
is trivial compared to a day's worth of binaries. Be aware that
something like alt.binaries.x or alt.binaries.boneless has several
million articles flowing through it per day.

> | So it's possible the usenet-flow wasn't available in April 2001 when
> | test76 was released, so it's never been tested with these amounts of
> | articles...?
>
> I haven't looked at the relevant statistics, but I suspect that Usenet
> traffic hasn't increased all that much since the turn of the millennium,
> certainly not compared to other Internet traffic. OTOH, the binary
> groups may differ significantly. Still, increased traffic (or greater
> retention--bigger disks!) might be enough to hit one threshold or another.

Binary newsgroups are still growing; while many people are opposed to
using Usenet for binaries, it is at the very least a pretty useful (if
perhaps not efficient) way of 'broadcasting' binaries.
According to the Wiki page, total Usenet traffic was 3.8 TB/day in
April 2008; I suppose it is well over 4 TB/day by now.

Over in news.software.nntp, a discussion raged about a commercial
provider (IIRC it was Giganews) going to 64-bit article numbers. The
reason: the huge number of articles flowing into the aforementioned
'dump groups' (alt.binaries.x/boneless/etc.) is causing them to run out
of the 32-bit space. This would probably break trn in a big way (and a
lot of other clients too). All of this is of course due to the
binaries; the text-news flow is nowhere near the volume that would
require updates/fixes like that.

> Best,
> Chuck

Best regards,
Patrick.
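
P.S. To put some rough numbers on the 32-bit issue above, here is a
minimal back-of-the-envelope sketch in C. This is not trn code; the
5 million articles/day posting rate is only an assumed, illustrative
figure, and the 2^31-1 ceiling is the traditional article-number limit.

/* Hypothetical sketch: how quickly a high-volume binary group could
 * exhaust the traditional 31-bit article-number space.  The posting
 * rate below is an assumption for illustration, not a measurement. */
#include <stdio.h>

int main(void)
{
    long max_artnum = 2147483647L;  /* traditional ceiling, 2^31 - 1 */
    long per_day    = 5000000L;     /* assumed articles/day in one group */

    printf("31-bit article-number space lasts roughly %ld days "
           "at %ld articles/day\n", max_artnum / per_day, per_day);

    /* A client that keeps article numbers in a 32-bit integer has no
     * room left once the server starts handing out numbers past that
     * bound, which is why 64-bit article numbers would break such
     * clients unless they are patched. */
    return 0;
}

At that assumed rate the space is gone in a bit over a year for a
single group, which is consistent with a provider wanting to move to
64-bit numbers.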