Re: Building a home computer for best Poker Tracker performance

OK, did I just read Poker Tracker and SSDs in the same email --- my head
is going to explode!

Greg, tell me you didn't get involved with Postgres because of Poker
Tracker.  :-O  :-)

---------------------------------------------------------------------------

Greg Smith wrote:
> On 07/17/2011 09:37 PM, mdxxd wrote:
> > My DB is big, around 5M hands (big is relative, of course), I use a complex
> > HUD (if you know what it is), run complex reports, and play 12+ tables.
> > Complex is relative to other PT users; I don't know how it
> > compares to other tasks.
> >    
> 
> That is pretty crazy.  Back when I was using Poker Tracker, I was hard 
> pressed to play 3 tables at once, and after a year of regular play only 
> had 25K hands of history.
> 
> To provide some more context for what this looks like to the rest of the 
> list here, updates to the hand history come in at a rate that's measured 
> in hands per hour, with 50 to 100 being a pretty fast table; let's call 
> it 2 hands per minute.  That may not seem like too much, but there are 
> actually a lot of records being written each time--the data behind a 
> single hand of play probably touches 50-ish rows.
> 
> And if you update player statistics after each hand, there can easily be 
> a half dozen queries you have to run all over again to update the 
> heads-up display.  And those fire for every player sitting at the table, 
> potentially as many as 10.  So with 12 tables going at once, 2 hands per 
> minute at each, 10 players at each table, and 6 HUD queries per player 
> per hand, that works out to 24 queries per second.  Do not scoff at this 
> workload assuming it will only take a core or two to keep up with; if the 
> HUD data really is complicated, this can add up fast.
> 
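> Just to make that arithmetic concrete, as a back-of-the-envelope check 
> using the numbers above (the per-hand and per-player figures are rough 
> estimates, not measurements):
> 
>     -- 12 tables * 2 hands/minute * 10 players * 6 HUD queries, per second
>     SELECT (12 * 2 * 10 * 6) / 60.0 AS hud_queries_per_second;  -- 24.0
> 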
> And they look like OLAP queries; the only thing that keeps the whole 
> thing manageable is that only a single player is involved in each of 
> them, so you're usually hitting an index.  And the data about the 
> players currently active tends to get pulled into cache and stay there, 
> so the working data set at any time isn't too large.
> 
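> To make that access pattern concrete, here is a simplified, hypothetical 
> sketch (Poker Tracker's real schema is different); the point is just that 
> each HUD statistic filters on a single player, so a plain index on the 
> player column keeps these aggregates cheap:
> 
>     -- Hypothetical, simplified table; the real PT3 schema differs.
>     CREATE TABLE hand_actions (
>         hand_id    bigint,
>         player_id  integer,
>         street     smallint,   -- 0 = preflop, 1 = flop, ...
>         raised     boolean
>     );
>     CREATE INDEX hand_actions_player_idx ON hand_actions (player_id);
> 
>     -- One HUD-style statistic: how often a given player raises preflop.
>     SELECT sum(CASE WHEN raised THEN 1 ELSE 0 END)::numeric
>              / NULLIF(count(*), 0) AS preflop_raise_frequency
>     FROM hand_actions
>     WHERE player_id = 42
>       AND street = 0;
> 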
> > *Will PostgreSQL benefit from using hyperthreading in an i7 2600K CPU? Or will
> > there be no noticeable performance change if I use an i5 2500K CPU (i.e. no HT,
> > just 4 cores)?
> > *Will there be noticeable performance improvements if I OC my CPU from
> > 3.3GHz/3.4GHz (with turbo boost to 3.7GHz/3.8GHz) to 4.5GHz?
> >    
> 
> Even with 12 tables going, I don't think that a large number of cores is 
> really necessary here.  You want individual cores to be as fast as 
> possible for this type of application, but it's not too likely you'll be 
> able to use more than 4 at once usefully.  I would guess that using a 
> smaller number of cores and aiming at a really high clock speed on them 
> is the better strategy for you.
> 
> > RAM:
> > *Will there be noticeable performance improvements if I use 16GB of RAM
> > over 8GB of RAM? I know big servers use at least 16GB (often 32GB).
> > *Will there be a noticeable change between CAS 9 and CAS 8/7? 1333MHz/1600MHz
> > DDR3 RAM?
> >    
> 
> You want to use the fastest RAM possible here; that really matters for 
> what you're doing.  I doubt the amount of data being processed at any 
> time will exceed 8GB; the "hot" set here is only the hand histories for 
> players who are currently sitting at tables with you.  Also, I've found 
> the 4GB modules normally used to reach 16GB total on a desktop system tend 
> not to be quite as fast as the similar 2GB ones.
> 
> You should aim at DDR3/1600 and the lowest CAS you can find.  Your 
> complex HUD updates are for the most part going to be limited by how 
> fast your CPU can chew through information that's in memory, so this may 
> end up being the most critical factor to your system performance.
> 
> 
> > SSD:
> > Different SSDs excel in different areas. I know that for general PC usage, 4K
> > Q1 random read/write is the most important.
> > What is the most important for PT3 (and PostgreSQL) usage? Random? Sequential?
> > 4K/8K/16K/32K+? 4K-64Thrd? 4K QD4/QD16/QD32? etc.
> >    
> 
> Databases have a mix of sequential and random access for what you're 
> doing, so you're going to want a balanced drive; no single one of these 
> factors is the obviously important one--they all are.
> 
> Note that most inexpensive SSD units can result in database corruption 
> if your system crashes.  See 
> http://wiki.postgresql.org/wiki/Reliable_Writes for more details.  This 
> is much more important to get right than to worry about the drive 
> benchmarks.  The only inexpensive consumer SSD drives I'm aware of that 
> work well for PostgreSQL are Intel's recent 320 series.  See 
> http://blog.2ndquadrant.com/en/2011/04/intel-ssd-now-off-the-sherr-sh.html 
> for details.  Putting your main hand histories on one of those would 
> make it very unlikely that drive I/O speed was ever a limiting factor 
> for you.  It's much more likely that you'll have trouble with memory maxing out.
> 
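> If you want to double-check the durability-related settings once the 
> server is up, a quick look at pg_settings shows them; fsync and 
> full_page_writes should stay on no matter which drive you pick (the 
> synchronous_commit trade-off is covered below):
> 
>     SELECT name, setting
>     FROM pg_settings
>     WHERE name IN ('fsync', 'full_page_writes', 'synchronous_commit');
> 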
> To summarize how I would spec out this sort of system:
> 
> -Pick a reasonably priced SSD that's big enough to hold your data.  
> Consider whether you can justify buying two and using RAID-1 to survive a 
> drive failure.
> -Price out the fastest DDR you can find, with 8GB probably being plenty.
> -Use your remaining budget to get the fastest individual cores you can 
> justify, shooting for 4 to 8 of them probably.
> 
> Two final notes:
> 
> -Make sure to follow the basic database tuning guidelines at 
> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server to get 
> your configuration in the right area.  I think your workload might 
> really benefit from turning off synchronous_commit in particular, so try 
> both settings there (a quick way to test it is sketched after these notes).  
> It won't kill you to lose a hand or two of history if you disable that, 
> and the speed improvement could be large.
> 
> -If you want to start up a discussion about optimizing your server, that 
> would be better done on the pgsql-performance list than this one.
> 
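> Regarding the synchronous_commit suggestion above, a quick way to try it 
> is per session first, before making it permanent in postgresql.conf:
> 
>     -- Trades a tiny window of recent commits on a crash for faster writes
>     SET synchronous_commit TO off;
> 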
> -- 
> Greg Smith   2ndQuadrant US    greg@xxxxxxxxxxxxxxx   Baltimore, MD

-- 
  Bruce Momjian  <bruce@xxxxxxxxxx>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

