Re: Server to server communication

D Steward wrote:
How often does the data change, and how critical is it to have real-time results? Web sites often have thousands of people getting copies of the same thing, or at least results computed from the same values, even if those values stay the same only for a short period of time.
The servers will exchange sensitive data, hopefully with a latency of < 50ms.
Ping time between them is 20ms.

That's not the relevant question. How often does the data change relative to the number of times you re-use it?

One approach is to put memcached between your web application and the database for extremely fast repeated access to the same data. It is just a cache layer, though; you still need a persistent database underneath. http://www.danga.com/memcached/
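
A minimal cache-aside sketch of that arrangement, assuming the python-memcached and MySQLdb modules (the key scheme, credentials, and users table are made up for illustration):

import MySQLdb
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def get_user(user_id):
    # Check the cache first; every hit avoids a database round trip.
    key = 'user:%d' % user_id
    row = mc.get(key)
    if row is None:
        # Miss: read from MySQL, then prime the cache.
        db = MySQLdb.connect(host='localhost', user='app',
                             passwd='secret', db='appdb')
        cur = db.cursor()
        cur.execute("SELECT name, email FROM users WHERE id = %s",
                    (user_id,))
        row = cur.fetchone()
        db.close()
        # Expire after 60 seconds so stale data has a bounded lifetime.
        mc.set(key, row, time=60)
    return row

The expiry time is the knob that trades freshness against database load: the longer the TTL, the more reads the cache absorbs.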

Ahh, thanks. I forgot about memcached. I am presently using some
in-memory MySQL tables, but I'll have to benchmark this against
memcached.
But the 2nd server was procured to relieve the CPU load on the main one.
Even with a 16-way Opteron, we would have had to face this situation
eventually.

The big advantage of memcached is that you can distribute it over as many servers as you need to keep everything in RAM - and have it shared by any number of clients.
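
For instance (addresses hypothetical), every web node points the same client at both boxes; the client hashes each key to exactly one server, so their combined RAM behaves like a single shared cache:

import memcache

# Each key maps to one of the listed servers, so adding a box grows
# the cache; the memcached servers never talk to each other.
mc = memcache.Client(['10.0.0.1:11211', '10.0.0.2:11211'])
mc.set('session:abc123', 'some shared state')
value = mc.get('session:abc123')  # any client with the same list sees it

The distribution is entirely client-side, which is why every client needs to be configured with the same server list.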

--
  Les Mikesell
   lesmikesell@xxxxxxxxx


_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
