The open source PostgreSQL database is one of the most popular relational databases. More and more web developers and DBAs are choosing PostgreSQL for its stability and reliability. The out-of-the-box install of PostgreSQL is good, but a few basic configuration changes can make it much faster while using fewer system resources. Tuning a database can be overwhelming for web developers, so we've kept these recommendations simple and conservative. To squeeze the most performance out of PostgreSQL, we recommend hiring a company that specializes in it.
The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL for caching data. Tuning this parameter can have the greatest impact on query times, mostly because the default setting is far too low given the amount of memory available on modern servers. The general advice is to set this value to approximately 25% of the physical RAM on the server, up to 8 GB.
For example, if your server has 12 GB of RAM, you would set this line in your postgresql.conf file:
shared_buffers = 3GB
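The 25%-with-a-cap rule above can be expressed as a one-line calculation. Here is a minimal sketch in Python; the function name is our own, not part of any PostgreSQL tooling:

```python
# Hypothetical helper: compute a recommended shared_buffers value
# following the rule above (25% of physical RAM, capped at 8 GB).

def recommended_shared_buffers(ram_gb: float) -> float:
    """Return a shared_buffers size in GB: 25% of RAM, capped at 8 GB."""
    return min(ram_gb * 0.25, 8.0)

print(recommended_shared_buffers(12))  # 12 GB server -> 3.0, matching the 3GB example
print(recommended_shared_buffers(64))  # large server -> 8.0, the cap applies
```

For the 12 GB example server this yields 3 GB; on a 64 GB server the 8 GB cap kicks in.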
The effective_cache_size configuration parameter serves as a guide, informing PostgreSQL how much of the database is potentially in memory. Setting this configuration parameter to half of total memory is a widely accepted conservative setting.
Following the example of a server with 12 GB of RAM, your postgresql.conf file would look like this:
effective_cache_size = 6GB
The checkpoint_segments configuration parameter determines how often PostgreSQL performs a checkpoint on its transaction log files. Each file is 16MB, and the default setting of 3 means PostgreSQL checkpoints after as little as 48MB of writes, which can easily become resource intensive. To avoid checkpoints becoming a bottleneck, set this value to at least 10. We recommend setting it to 16.
checkpoint_segments = 16
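The arithmetic behind that recommendation is simple: segments times 16MB gives the amount of transaction log written between checkpoints. A quick illustration (the function name is ours, for demonstration only):

```python
# Illustration of the checkpoint math above: each transaction log (WAL)
# segment is 16 MB, so checkpoint_segments controls roughly how much
# data is written between checkpoints.
SEGMENT_MB = 16

def mb_between_checkpoints(checkpoint_segments: int) -> int:
    """Approximate MB of transaction log written between checkpoints."""
    return checkpoint_segments * SEGMENT_MB

print(mb_between_checkpoints(3))   # default setting: 48 MB, as noted above
print(mb_between_checkpoints(16))  # recommended setting: 256 MB
```

Raising the value from 3 to 16 spaces checkpoints out from every 48MB to every 256MB of writes, cutting checkpoint frequency on a busy server.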
The work_mem configuration parameter determines how much memory PostgreSQL can use for internal sort operations and hash tables before writing to temporary disk files. It is preferable to do these operations in memory because it's faster than I/O operations on disk. There is no exact formula for figuring this one out, since it depends on the number of sort operations and the maximum potential connections to the database. However, doubling this from 1MB to 2MB should be safe and will improve queries whose smaller sorts currently spill to disk.
work_mem = 2MB
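Why be cautious here? work_mem is allocated per sort or hash operation, not per server, so the worst case scales with connections and with the number of such operations per query. A rough back-of-the-envelope sketch, using illustrative figures of our own choosing (100 connections, 2 sorts per query):

```python
# Rough worst-case estimate of sort memory: each connection can use up to
# work_mem per sort/hash operation, and a complex query may run several
# at once. The connection and sort counts below are illustrative
# assumptions, not PostgreSQL defaults.

def worst_case_sort_memory_mb(work_mem_mb: int, max_connections: int,
                              sorts_per_query: int) -> int:
    """Upper-bound MB of memory consumed by sorts across all connections."""
    return work_mem_mb * max_connections * sorts_per_query

print(worst_case_sort_memory_mb(2, 100, 2))  # 400 MB with work_mem = 2MB
```

With work_mem at 2MB the worst case in this scenario is a modest 400MB, which is why the small doubling is a safe first step; setting work_mem to hundreds of megabytes would multiply the same way.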
The maintenance_work_mem configuration parameter specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM and CREATE INDEX. These operations don't typically run often or concurrently, so it's safe to increase this value. Increasing it will help speed up the automatic vacuuming process, a daemon that runs in the background and cleans up database tables while collecting statistics on new data. Doubling the default from 16MB to 32MB is a good start.
maintenance_work_mem = 32MB