PostgreSQL performance tuning for faster query execution

Objective

Our objective is to make a dummy query run faster on a PostgreSQL database using only the built-in tools available
in the database.

Operating System and Software Versions

  • Operating System: Red Hat Enterprise Linux 7.5
  • Software: PostgreSQL server 9.2

Requirements

A PostgreSQL server base install up and running, access to the command-line tool psql, and ownership of the example database.

Conventions

  • # – requires given Linux commands to be executed with root privileges, either directly as the root user or by use of the sudo command
  • $ – given Linux commands to be executed as a regular non-privileged user

Introduction

PostgreSQL is a reliable open source database available in many modern distributions’ repositories. The ease of use, the ability to use extensions and the stability it provides all add to its popularity.
While providing base functionality, like answering SQL queries, storing inserted data consistently and handling transactions, most mature database solutions also provide tools and know-how on how to
tune the database, identify possible bottlenecks, and solve the performance problems bound to happen as the system powered by the given solution grows.

PostgreSQL is no exception, and in this
guide we’ll use the built-in tool explain to make a slow-running query complete faster. The example worked here is far from a real-world database, but one can take the hint on the usage of the built-in tools. We’ll use PostgreSQL server version 9.2 on Red Hat Enterprise Linux 7.5, but the tools shown in this guide are present in much older database and operating system versions as well.
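
A quick note on the tool itself: plain explain only shows the planner’s estimated execution plan without running the query, while explain analyze actually executes the query and also reports the real run times (output omitted here):

foobardb=# explain select 1;
foobardb=# explain analyze select 1;

Throughout this guide we’ll use explain analyze, because we want to measure actual execution times, not just estimates.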



The problem to be solved

Consider this simple table (the column names are self-explanatory):

foobardb=# \d+ employees
                                                 Table "public.employees"
      Column      |  Type   |                      Modifiers                      | Storage  | Stats target | Description 
------------------+---------+-----------------------------------------------------+----------+--------------+-------------
 emp_id           | numeric | not null default nextval('employees_seq'::regclass) | main     |              | 
 first_name       | text    | not null                                            | extended |              | 
 last_name        | text    | not null                                            | extended |              | 
 birth_year       | numeric | not null                                            | main     |              | 
 birth_month      | numeric | not null                                            | main     |              | 
 birth_dayofmonth | numeric | not null                                            | main     |              | 
Indexes:
    "employees_pkey" PRIMARY KEY, btree (emp_id)
Has OIDs: no
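
For reference, a table like this can be created and seeded with something along these lines (a sketch; the sequence name employees_seq is taken from the column default above):

create sequence employees_seq;

create table employees (
    -- emp_id is assigned from the sequence and serves as the primary key
    emp_id           numeric not null default nextval('employees_seq'::regclass) primary key,
    first_name       text    not null,
    last_name        text    not null,
    birth_year       numeric not null,
    birth_month      numeric not null,
    birth_dayofmonth numeric not null
);

-- the two example employees shown below
insert into employees (first_name, last_name, birth_year, birth_month, birth_dayofmonth)
values ('Emily', 'James', 1983, 3, 20),
       ('John', 'Smith', 1990, 8, 12);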

With records like:

foobardb=# select * from employees limit 2;
 emp_id | first_name | last_name | birth_year | birth_month | birth_dayofmonth 
--------+------------+-----------+------------+-------------+------------------
      1 | Emily      | James     |       1983 |           3 |               20
      2 | John       | Smith     |       1990 |           8 |               12

In this example we are the Nice Company, which has deployed an application called HBapp that sends a “Happy Birthday” email to each employee on his/her birthday. The application queries the database every morning to find the recipients for the day (before work hours – we don’t want to kill our HR database out of kindness).
The application runs the following query to find the recipients:

foobardb=# select emp_id, first_name, last_name from employees where birth_month = 3 and birth_dayofmonth = 20;

 emp_id | first_name | last_name 
--------+------------+-----------
      1 | Emily      | James


All works fine and the users get their mail. Many other applications use the database, and the employees table within it, such as accounting and BI. The Nice Company grows, and so grows the employees table. In time the application runs too long, and its execution overlaps with the start of the work hours, resulting in slow database response times in mission-critical applications. We have to do something to make this query run faster, or the application will be undeployed, and with it there will be less niceness in Nice Company.

For this example we won’t be using any advanced tools to solve the problem, only the one provided by the base install. Let’s see how the database planner executes the query with explain.

We are not testing in production; we create a database for testing, create the table, and insert the two employees mentioned above into it. We use the same values for the query throughout this tutorial,
so at any run, only one record will match the query: Emily James. Then we run the query prefixed with explain analyze to see how it is executed with minimal data in the table:

foobardb=# explain analyze select emp_id, first_name, last_name from employees where birth_month = 3 and birth_dayofmonth = 20;
                                             QUERY PLAN                                              
-----------------------------------------------------------------------------------------------------
 Seq Scan on employees  (cost=0.00..15.40 rows=1 width=96) (actual time=0.023..0.025 rows=1 loops=1)
   Filter: ((birth_month = 3::numeric) AND (birth_dayofmonth = 20::numeric))
   Rows Removed by Filter: 1
 Total runtime: 0.076 ms
(4 rows)

That’s really fast. Possibly as fast as it was when the company first deployed the HBapp. Let’s mimic the state of the current production foobardb by loading as many (fake) employees into the test database as we have in production (note: we’ll need the same storage size under the test database as in production).

We’ll simply use bash to populate the test database (assuming we have 500,000 employees in production):

$ for j in {1..500000} ; do echo "insert into employees (first_name, last_name, birth_year, birth_month, birth_dayofmonth) values ('user$j','Test',1900,01,01);"; done | psql -d foobardb
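
As an aside, the same data can be generated entirely server-side with PostgreSQL’s generate_series function, which avoids piping half a million insert statements through psql; a sketch:

foobardb=# insert into employees (first_name, last_name, birth_year, birth_month, birth_dayofmonth)
             select 'user' || j, 'Test', 1900, 1, 1 from generate_series(1, 500000) as j;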

Now we have 500002 employees:

foobardb=# select count(*) from employees;
 count  
--------
 500002
(1 row)
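
Since we noted that the test database should take about as much storage as production, this is a good point to check the table’s size from within the database (a sketch, output omitted):

foobardb=# select pg_size_pretty(pg_total_relation_size('employees'));

pg_total_relation_size reports the on-disk size of the table together with its indexes and TOAST data.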

Let’s run the explain query again:

foobardb=# explain analyze select emp_id, first_name, last_name from employees where birth_month = 3 and birth_dayofmonth = 20;
                                                QUERY PLAN                                                
----------------------------------------------------------------------------------------------------------
 Seq Scan on employees  (cost=0.00..11667.63 rows=1 width=22) (actual time=0.012..150.998 rows=1 loops=1)
   Filter: ((birth_month = 3::numeric) AND (birth_dayofmonth = 20::numeric))
   Rows Removed by Filter: 500001
 Total runtime: 151.059 ms


We still have only one match, but the query is significantly slower. We should notice the planner’s first node: Seq Scan, which stands for sequential scan – the database reads the whole
table, while we need only one record, much like grep would in bash. In fact, it can actually be slower than grep. To see this, let’s export the table to a csv file called /tmp/exp500k.csv:

foobardb=# copy employees to '/tmp/exp500k.csv' delimiter ',' CSV HEADER;
COPY 500002

And grep for the information we need (we search for the 20th day of the 3rd month; those are the last two values in every line of the
csv file):

$ time grep ",3,20" /tmp/exp500k.csv 
1,Emily,James,1983,3,20

real    0m0.067s
user    0m0.018s
sys     0m0.010s

And this, caching aside, is bound to get slower and slower as the table grows.

The solution is of course indexing. No employee can have more than one birth date, which consists of exactly one birth_year, birth_month and birth_dayofmonth – so these three fields provide a unique value for that particular user. And a user is identified by his/her emp_id (there can be more than one employee in the company with the same name). If we declare a constraint on these four fields, an implicit index will be created as well:

foobardb=# alter table employees add constraint birth_uniq unique (emp_id, birth_year, birth_month, birth_dayofmonth);
NOTICE:  ALTER TABLE / ADD UNIQUE will create implicit index "birth_uniq" for table "employees"

So we have an index on the four fields; let’s see how our query runs:

foobardb=# explain analyze select emp_id, first_name, last_name from employees where birth_month = 3 and birth_dayofmonth = 20;
                                                 QUERY PLAN                                                 
------------------------------------------------------------------------------------------------------------
 Seq Scan on employees  (cost=0.00..11667.19 rows=1 width=22) (actual time=103.131..151.084 rows=1 loops=1)
   Filter: ((birth_month = 3::numeric) AND (birth_dayofmonth = 20::numeric))
   Rows Removed by Filter: 500001
 Total runtime: 151.103 ms
(4 rows)


That’s practically identical to the last run, and we can see the plan is the same: the index isn’t used. Since emp_id and birth_year lead the new index and our query filters on neither of them, the database would have to scan the whole index anyway, and the planner evidently does not consider that cheaper than a sequential scan of the table. Let’s create another index by a unique constraint on emp_id, birth_month and birth_dayofmonth only (after all, we don’t query for birth_year in HBapp):

foobardb=# alter table employees add constraint birth_uniq_m_dom unique (emp_id, birth_month, birth_dayofmonth);
NOTICE:  ALTER TABLE / ADD UNIQUE will create implicit index "birth_uniq_m_dom" for table "employees"

Let’s see the result of our tuning:

foobardb=# explain analyze select emp_id, first_name, last_name from employees where birth_month = 3 and birth_dayofmonth = 20;
                                                QUERY PLAN                                                 
-----------------------------------------------------------------------------------------------------------
 Seq Scan on employees  (cost=0.00..11667.19 rows=1 width=22) (actual time=97.187..139.858 rows=1 loops=1)
   Filter: ((birth_month = 3::numeric) AND (birth_dayofmonth = 20::numeric))
   Rows Removed by Filter: 500001
 Total runtime: 139.879 ms
(4 rows)

Nothing. The difference above comes from the use of caches, but the plan is the same: this index is still led by emp_id, which we don’t filter on. Let’s go further. Next we’ll create another index, on emp_id and birth_month only:

foobardb=# alter table employees add constraint birth_uniq_m unique (emp_id, birth_month);
NOTICE:  ALTER TABLE / ADD UNIQUE will create implicit index "birth_uniq_m" for table "employees"

And run the query again:

foobardb=# explain analyze select emp_id, first_name, last_name from employees where birth_month = 3 and birth_dayofmonth = 20;
                                                          QUERY PLAN                                                          
------------------------------------------------------------------------------------------------------------------------------
 Index Scan using birth_uniq_m on employees  (cost=0.00..11464.19 rows=1 width=22) (actual time=0.089..95.605 rows=1 loops=1)
   Index Cond: (birth_month = 3::numeric)
   Filter: (birth_dayofmonth = 20::numeric)
 Total runtime: 95.630 ms
(4 rows)

Success! The query is about 40% faster, and we can see that the plan changed: the database does not scan the whole table anymore, but uses the birth_uniq_m index on emp_id and birth_month – although emp_id still leads it, scanning this much narrower index is evidently cheaper for the planner than scanning the whole table. We have now created almost all combinations of the four fields; only one remains, and it is worth trying:



foobardb=# alter table employees add constraint birth_uniq_dom unique (emp_id, birth_dayofmonth);
NOTICE:  ALTER TABLE / ADD UNIQUE will create implicit index "birth_uniq_dom" for table "employees"

The last index is created on the fields emp_id and birth_dayofmonth. And the result is:

foobardb=# explain analyze select emp_id, first_name, last_name from employees where birth_month = 3 and birth_dayofmonth = 20;
                                                           QUERY PLAN                                                         
--------------------------------------------------------------------------------------------------------------------------------
 Index Scan using birth_uniq_dom on employees  (cost=0.00..11464.19 rows=1 width=22) (actual time=0.025..72.394 rows=1 loops=1)
   Index Cond: (birth_dayofmonth = 20::numeric)
   Filter: (birth_month = 3::numeric)
 Total runtime: 72.421 ms
(4 rows)

Now our query is about 49% faster, using the last (and only the last) index created. Our table and related indexes look as follows:

foobardb=# \d+ employees
                                                 Table "public.employees"
      Column      |  Type   |                      Modifiers                      | Storage  | Stats target | Description 
------------------+---------+-----------------------------------------------------+----------+--------------+-------------
 emp_id           | numeric | not null default nextval('employees_seq'::regclass) | main     |              | 
 first_name       | text    | not null                                            | extended |              | 
 last_name        | text    | not null                                            | extended |              | 
 birth_year       | numeric | not null                                            | main     |              | 
 birth_month      | numeric | not null                                            | main     |              | 
 birth_dayofmonth | numeric | not null                                            | main     |              | 
Indexes:
    "employees_pkey" PRIMARY KEY, btree (emp_id)
    "birth_uniq" UNIQUE CONSTRAINT, btree (emp_id, birth_year, birth_month, birth_dayofmonth)
    "birth_uniq_dom" UNIQUE CONSTRAINT, btree (emp_id, birth_dayofmonth)
    "birth_uniq_m" UNIQUE CONSTRAINT, btree (emp_id, birth_month)
    "birth_uniq_m_dom" UNIQUE CONSTRAINT, btree (emp_id, birth_month, birth_dayofmonth)
Has OIDs: no
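
One hint as to why the planner favours the narrowest indexes is their relative on-disk size; we can compare them with a query along these lines (a sketch, output omitted):

foobardb=# select relname, pg_size_pretty(pg_relation_size(oid)) from pg_class where relname like 'birth_uniq%';

The fewer columns an index covers, the smaller it is, and the less data a full scan of it has to read.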

We don’t need the intermediate indexes we created; the plan clearly shows it will not use them, so we drop them (dropping a constraint also drops the implicit index that backs it):

foobardb=# alter table employees drop constraint birth_uniq;
ALTER TABLE
foobardb=# alter table employees drop constraint birth_uniq_m;
ALTER TABLE
foobardb=# alter table employees drop constraint birth_uniq_m_dom;
ALTER TABLE

In the end, our table gains only one additional index, which is a low cost for nearly doubling the speed of HBapp:



foobardb=# \d+ employees
                                                 Table "public.employees"
      Column      |  Type   |                      Modifiers                      | Storage  | Stats target | Description 
------------------+---------+-----------------------------------------------------+----------+--------------+-------------
 emp_id           | numeric | not null default nextval('employees_seq'::regclass) | main     |              | 
 first_name       | text    | not null                                            | extended |              | 
 last_name        | text    | not null                                            | extended |              | 
 birth_year       | numeric | not null                                            | main     |              | 
 birth_month      | numeric | not null                                            | main     |              | 
 birth_dayofmonth | numeric | not null                                            | main     |              | 
Indexes:
    "employees_pkey" PRIMARY KEY, btree (emp_id)
    "birth_uniq_dom" UNIQUE CONSTRAINT, btree (emp_id, birth_dayofmonth)
Has OIDs: no

And we can introduce our tuning to production by adding the index we have seen to be most useful:

alter table employees add constraint birth_uniq_dom unique (emp_id, birth_dayofmonth);
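
Note that the unique constraint is only one way to obtain an index here; since emp_id is unique on its own, the constraint adds no real business rule. If we are after the performance benefit only, a plain index would arguably serve the query just as well – a sketch, with a hypothetical index name, putting the filtered columns first this time:

create index employees_birth_md_idx on employees (birth_month, birth_dayofmonth);

With birth_month and birth_dayofmonth at the front of the index, the planner can descend straight to the matching entries instead of scanning the index end to end.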

Conclusion

Needless to say, this is only a dummy example. It is unlikely that you would store your employees’ birth dates in three separate fields when you could use a date type field, which enables date-related operations in a much easier way than comparing month and day values as integers. Also note that the few explain queries above do not qualify as extensive testing. In a real-world scenario you need to test the new database object’s impact on every other application that uses the database, as well as on the components of your system that interact with HBapp.
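
To illustrate the point about the date type: with a hypothetical birth_date column of type date, the HBapp query could be written along these lines (a sketch):

select emp_id, first_name, last_name
from employees
where extract(month from birth_date) = 3
  and extract(day from birth_date) = 20;

Such a query could then be backed by an expression index on the two extract() values instead of the plain column indexes used above.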

For example, in this case, if we can process the table for recipients in 50% of the original response time, we can virtually produce 200% of the emails on the other end of the application (say, the HBapp runs in sequence for all 500 subsidiary companies of Nice Company), which may result in peak load somewhere else – maybe the mail servers will receive a lot of “Happy Birthday” emails to relay just before they should send out the daily reports to management, resulting in delays of delivery. It is also a bit far from reality that someone tuning a database would create indexes by blind trial and error – or at least, let’s hope this is so in a company employing that many people.

Note, however, that we gained a roughly 50% performance boost on the query using only the built-in PostgreSQL explain feature to identify a single index that could be useful in the given situation. We have also shown that any relational database is no better than a plain-text search if we don’t use it as it is meant to be used.


