I have an application that runs a lot of queries for every single page, which makes it slow.

I have tried using a caching system (Redis), but the implementation is a little complicated and will take time, since every query depends on a lot of other data.
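To show what I mean, here is a minimal sketch of the cache-aside pattern I tried, with a plain dict standing in for Redis and a hypothetical `load_user_from_db` in place of the real query (all names here are made up for illustration):

```python
# Minimal cache-aside sketch. A plain dict stands in for Redis;
# load_user_from_db is a hypothetical placeholder for the real SQL query.

cache = {}

def load_user_from_db(user_id):
    # Placeholder for the expensive database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                  # cache hit: skip the database
        return cache[key]
    row = load_user_from_db(user_id)  # cache miss: query, then store
    cache[key] = row
    return row

def update_user(user_id, fields):
    # Write path: update the database (elided here), then drop the
    # now-stale cache entry so the next read reloads fresh data.
    cache.pop(f"user:{user_id}", None)
```

The hard part in my app is exactly this invalidation step: one write can make many different cached queries stale, not just one key.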

I'm thinking about keeping another clone of the database in memory, read-only, where every update or insert to the database also updates the in-memory rows. Do you think this would be effective and practical, and if so, is there an existing way to achieve it? I'm using Postgres.

My second idea was to use something like the Sphinx search engine for all queries; in that case I would need to use real-time indexes.

Would that work, or is there a better solution?

Note: the database is about 50 GB, and I can get enough memory to hold all of it.


Generally it is not a smart idea. See, databases are written by smart people and already use as much memory for caching as possible. If you hit the disk too often, then you have a memory problem - and keeping ANOTHER copy of the data in the same limited memory will not make things better.

What you should do is a baseline analysis. What is the bottleneck? Fix it. More memory and SSDs are the typical steps here. What are the most expensive queries? Experience tells me you likely have a few hammers in there that do a lot of damage, and the problem is not "too many queries" but "what the heck is an index".
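As a sketch of what that analysis looks like in Postgres (table and column names here are hypothetical):

```sql
-- See what an expensive query actually does: a sequential scan
-- over a large table usually means a missing index.
EXPLAIN ANALYZE
SELECT * FROM questions WHERE userid = 42;

-- If so, add the index and re-run EXPLAIN ANALYZE to confirm.
CREATE INDEX idx_questions_userid ON questions (userid);
```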

But an in-memory database normally should not make a difference, because any database worth anything already uses as much memory as it can as a cache. That is - naturally - unless some admin puts in a ridiculously low limit to "save the memory" and then has performance problems.
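You can check whether that is the case here; in Postgres the relevant setting is shared_buffers (the value below is only an example - tune it to your hardware):

```sql
-- Show the current cache size limit:
SHOW shared_buffers;

-- Raise it (a common starting point is ~25% of RAM on a dedicated
-- database server); this change requires a server restart.
ALTER SYSTEM SET shared_buffers = '16GB';
```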

And then you really should start with caching.

    • Mostly sensible, but you should be looking at your query plans before you rush out and buy SSDs. Making sure shared_buffers is set to use most of your memory is a good start - with enough memory, Postgres will effectively run with the entire data set in memory, while still persisting writes to disk.
    • Also, focus your caching efforts on data that's expensive to generate or queried a lot and can afford to be a bit out of date. There's no point caching stuff that's a trivial index-only scan to look up and has a short validity lifetime relative to how often it's accessed.
    • Well, personally I think SSDs are the way to go for everything outside bulk storage. I removed every non-SSD drive from every server - again, outside our 30 TB backup volumes and our file servers. The rest runs on SSD anyway. They are not THAT expensive, and they make everything faster.
