On 19 Sep 2008, at 22:33, Philip Thompson wrote:
On Sep 19, 2008, at 4:01 PM, Stut wrote:
On 19 Sep 2008, at 21:44, Philip Thompson wrote:
On Sep 19, 2008, at 1:12 PM, Stut wrote:
On 19 Sep 2008, at 18:47, Philip Thompson wrote:
6. Begin transaction
7. Lock user session row
8. Update user session
9. Commit transaction
If all you're doing is issuing an update command, there is no need
to wrap it in a transaction and definitely no need to lock the row.
An update is atomic.
Maybe what you actually mean to do here is lock the row before you
get the session data, make changes to it, and then unlock it once
you're done. Doing that would likely keep the row locked for the
entire duration of a request, which can start causing problems as
traffic increases.
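For the simple case, touching the session is a single statement with
no BEGIN and no lock. A minimal sketch, assuming PDO, a made-up
user_sessions table and a $userId from your auth layer:

<?php
// A lone UPDATE is atomic by itself - no transaction or explicit
// row lock is needed around it. The connection details and the
// user_sessions schema are invented for this example.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$stmt = $pdo->prepare(
    'UPDATE user_sessions SET last_seen = NOW() WHERE user_id = ?'
);
$stmt->execute(array($userId)); // $userId from your auth layer
?>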
I'm starting the transaction because MySQL's "SELECT ... FOR UPDATE"
requires a transaction to lock the row. But now that I think about
it... the reason we use the lock is so that we don't have data
collisions - specifically, here, on the user session. However,
the user session row is only accessed by a single user (his/her
own). And since a user can only be logged in at one location,
there's virtually no way for a collision. Right? I can remove
queries 6, 7, and 9, right?
Yes, you only need the update statement.
Ok, here, only the update is needed. But for other locations where
multiple users may be accessing the same record, I should lock it...
Yes and no. If all you're going to do while it's locked is issue the
update statement, then it's pointless. However, if you need to prevent
anyone from updating the row from when you read it to when you write
it back, then you need to lock it for the duration.
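Roughly, that read-modify-write case looks like this. It's only a
sketch - the user_sessions schema is invented, $pdo and $userId are
assumed from the earlier example, and FOR UPDATE only holds the lock
on transactional (InnoDB) tables:

<?php
// Hold the row lock from the read to the write so nobody can
// change the row underneath us.
$pdo->beginTransaction();

// FOR UPDATE keeps the row locked until COMMIT or ROLLBACK.
$stmt = $pdo->prepare(
    'SELECT data FROM user_sessions WHERE user_id = ? FOR UPDATE'
);
$stmt->execute(array($userId));
$session = unserialize($stmt->fetchColumn());

$session['last_page'] = 'schedule.php'; // modify while locked

$stmt = $pdo->prepare(
    'UPDATE user_sessions SET data = ? WHERE user_id = ?'
);
$stmt->execute(array(serialize($session), $userId));

$pdo->commit(); // releases the lock
?>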
Note that these are the 10 queries that happen after the initial
SESSION load. I suppose I could reduce this by 1 or 2 queries -
I could store the page id/information in the session (see the
sketch below). Now with that said, the queries are negligible
(in elapsed time) and required.
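Purely hypothetical keys and query here, with $pdo assumed to be an
open connection:

<?php
// Look the page info up once and cache it in the session so that
// later requests skip the query. The pages table, its columns and
// $slug are invented for this example.
session_start();
if (!isset($_SESSION['page_info'])) {
    $stmt = $pdo->prepare('SELECT id, title FROM pages WHERE slug = ?');
    $stmt->execute(array($slug));
    $_SESSION['page_info'] = $stmt->fetch(PDO::FETCH_ASSOC);
}
$page = $_SESSION['page_info'];
?>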
However, I'm always open to suggestions/improvements... =D
You may think they're required, but I'm betting they're not if
you really think about it. However, if your DB can handle it then
why fix something that ain't broken?
It can handle it now. But I'm not worried about now. We have fewer
than 10 clients/offices using the app. This may grow to 100
within the next year. That's when there's gonna be lots and lots
of data and we may start to see a slowdown.
That's not even close to a large number of users, but it depends a
lot on what else the servers you're hosting it on are being used for.
A client may have 1 user or 50 users. It's not the user count I'm
concerned about. This software is for doctors' offices. So, last
week when we had our first import from another practice management
system (aptly acronym'd, PMS), our patient records jumped from about
1,000 to 65,000. That's just 1 client! Now, I still know that's not
a whole lot, but multiply that by 100 clients in the next year:
65,000 * 100 = 6.5 million patient records. That's more of a
significant number.
Not particularly, and to be honest the traffic to the site will be
your problem, not the number of users or records stored on it. Queries
can always be optimised but the architecture of the site is harder and
more expensive to change.
We're using a dedicated server that hosts the website and the
database. I *know* we're going to need to expand... but that's
beyond my control as a mere pawn. As of today, it's okay.
Sounds like you've got an easy sharding option so you should be ok.
Once you outgrow that single server it should be pretty simple to put
a redirector on to a main server which will redirect after login to
another server (shard) which contains all the data for that client.
This is commonly the easiest sharding scenario to implement but it
only works so long as a single client doesn't outgrow a single server.
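As a very rough sketch (the client-to-shard map and host names below
are invented, not anything your app actually has):

<?php
// After a successful login on the main server, look up which
// shard holds this client's data and send the browser there.
// In practice the map would live in a small shared lookup table.
$shards = array(
    'client_a' => 'https://shard1.example.com',
    'client_b' => 'https://shard2.example.com',
);

// $user is assumed to come from your login check.
if (!isset($shards[$user['client_id']])) {
    die('Unknown client');
}

header('Location: ' . $shards[$user['client_id']] . '/index.php');
exit;
?>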
-Stut
--
http://stut.net/