> On Dec 10, 2016, at 10:01 AM, Tom DalPozzo <t.dalpozzo@xxxxxxxxx> wrote:
>
> 2016-12-10 16:36 GMT+01:00 Rob Sargent <robjsargent@xxxxxxxxx>:
>
> > On Dec 10, 2016, at 7:27 AM, Tom DalPozzo <t.dalpozzo@xxxxxxxxx> wrote:
> >
> > Hi,
> > I'd like to do that! But my DB must be crash proof! Very high reliability is a must.
> > I also use sync replication.
> > Regards
> > Pupillo
> >
> > Is each of the updates visible to a user or read/analyzed by another activity? If not, you can do most of the updates in memory and flush a snapshot periodically to the database.
> >
> >
>
> This list discourages top posting. You’re asked to place your reply at the bottom.
>
> You haven’t laid out your application architecture (how many clients, who is reading, who is writing, etc.). Caching doesn’t mean your database is any less crash proof. At that rate of activity, depending on architecture, you could lose updates in all sorts of crash scenarios.
>
> As for crash proof, I meant that once my client app is told that its update request was committed, it mustn't get lost (barring HDD failure, of course). And I can't wait to flush the cache before telling the app "committed".
> I can also replicate the cache on the standby PC, of course.
> Regards
> Pupillo
>
OK, clientA sends an update; you commit and tell clientA it is committed. clientB then updates the same record; do you tell clientA of clientB’s update?
Are the two updates cumulative or destructive?
Can you report all updates done by clientA?
I have only one direct DB client (let's name it MIDAPP). This client of the DB is a server for up to 10000 final clients.
Any time MIDAPP is going to reply to a final client, it must first save a "status record with some data" related to that client, and only after that answer/commit to the final client.
The next time the same final client asks for something, the same status record is updated again (with different content).
Each client can send up to 10000 requests per day, at up to 1 per second.
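Worst case, with all 10000 final clients active at once, that is on the order of 10000 status commits per second. The per-request path I have in mind is roughly the sketch below (Python/psycopg2; the client_status table and its columns are only invented for the example):

import psycopg2

# Hypothetical table, just for illustration:
#   CREATE TABLE client_status (
#       client_id  bigint PRIMARY KEY,
#       status     bytea,
#       updated_at timestamptz NOT NULL DEFAULT now());

conn = psycopg2.connect("dbname=midapp")

def handle_request(client_id, new_status):
    # Upsert the per-client status record...
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO client_status (client_id, status, updated_at) "
            "VALUES (%s, %s, now()) "
            "ON CONFLICT (client_id) DO UPDATE "
            "  SET status = EXCLUDED.status, updated_at = now()",
            (client_id, new_status))
    # ...and only answer the final client once the commit (and the
    # synchronous replica) have confirmed it.
    conn.commit()
    return "committed"

So every single request pays for one synchronous commit before MIDAPP may answer.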
So, if I lose the cache, I'm done. I don't want to send the status back to the final clients (in order to recover it from them in case of a crash).
I could consider a periodic snapshot plus a sequential log for tracking the newer updates, but that is still to be evaluated.
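Read inside Postgres itself, that snapshot + log idea could look roughly like the sketch below (again only a sketch: status_log is an invented append-only table with a bigserial id, and client_status is the same illustrative table as above):

import psycopg2

# Hypothetical append-only log, just for illustration:
#   CREATE TABLE status_log (
#       id        bigserial PRIMARY KEY,
#       client_id bigint NOT NULL,
#       status    bytea);

conn = psycopg2.connect("dbname=midapp")

def log_update(client_id, new_status):
    # Cheap sequential append instead of rewriting the same row each time;
    # still committed (and sync-replicated) before answering the client.
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO status_log (client_id, status) VALUES (%s, %s)",
            (client_id, new_status))
    conn.commit()

def compact():
    # Periodically fold the log into client_status and drop the folded
    # rows, in a single statement so no concurrently committed row can
    # slip between the two steps.
    with conn.cursor() as cur:
        cur.execute(
            "WITH moved AS ("
            "    DELETE FROM status_log RETURNING id, client_id, status) "
            "INSERT INTO client_status (client_id, status) "
            "SELECT DISTINCT ON (client_id) client_id, status "
            "FROM moved ORDER BY client_id, id DESC "
            "ON CONFLICT (client_id) DO UPDATE SET status = EXCLUDED.status")
    conn.commit()

The hoped-for gain would be that the hot path does only sequential inserts, while the random-access updates to client_status happen in periodic batches.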
Regards
Pupillo