Preventing Human Errors in Postgres

  • Denish Patel
  • 24th February 2014
  • postgresql

Whenever a discussion on disaster recovery planning for databases comes up, the topic of “preventing and recovering from human error” arises, because humans tend to make mistakes and engineers are humans 🙂  Even with the best DBAs around and with restrictions and best practices/policies in place, you still can’t afford to ignore the fact that an engineer could make a mistake. Disaster recovery planning should take this factor into consideration.

Fixing or recovering from human errors, especially inside databases, is costly; that much is an accepted fact 🙁  Better access restrictions and good practices in place can help avoid some of the silly mistakes, but you never know!! What if the superuser (a DBA or anyone with super privileges) makes the mistake?

While dealing with databases, there are countless unknown and unimaginable ways an engineer could make mistakes, but I’m going to talk about a couple of known, specific mistakes and how Postgres features can be used to prevent them:

Preventing DROP DATABASE

Last week, Payal, one of my colleagues, asked me: is there any way to prevent a superuser from dropping the production database? My immediate response was: a superuser (DBA) can’t afford to make that mistake!!

She asked me another question: what if I convert the production db to a template database, so it can _NOT_ be dropped even by a superuser? As you can see below, a template database cannot be dropped:

testdb=# drop database template1;
ERROR: cannot drop a template database

My immediate response was: hrmm.. there could be a security issue. The topic looked interesting, so I started digging further into making the production db a template database. As the documentation suggests, datistemplate can be set to indicate that a database is intended as a template for CREATE DATABASE. If this flag is set, the database can be cloned by any user with the CREATEDB privilege; if it is not set, only superusers and the owner of the database can clone it.
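
A quick sanity check: the datistemplate flag lives in pg_database, so you can see which databases currently carry it. On a typical installation only template0 and template1 do; your output will vary:

testdb=# select datname, datistemplate from pg_database;
  datname  | datistemplate
-----------+---------------
 template0 | t
 template1 | t
 postgres  | f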

As you might have realized by now, there is a little risk, because a user with the CREATEDB privilege can clone the production database. Fortunately, CREATEDB is not a default permission; you have to grant it explicitly. The gain from preventing DROP DATABASE is much higher than the underlying security risk. This is still an experimental thought!! If you see any problems with setting the production db to a template database, please shout out! The example below converts “proddb” to a template database by setting the datistemplate flag to true, then tries to drop the database as a superuser:

testdb=# update pg_database set datistemplate=true where datname='proddb';
UPDATE 1
testdb=# drop database proddb;
ERROR: cannot drop a template database
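
One caveat: with the flag set, an intentional drop becomes a two-step operation. A sketch of the reverse procedure, assuming no other sessions are connected to proddb:

testdb=# update pg_database set datistemplate=false where datname='proddb';
UPDATE 1
testdb=# drop database proddb;
DROP DATABASE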

Preventing DROP TABLE

After coming up with a possible fix for DROP DATABASE, we decided to investigate similar ways of preventing a table from being dropped accidentally!!

Actually, Postgres 9.3 introduced a new feature, event triggers (system-level triggers), which can be used to prevent a table drop in production. The example below shows how to prevent a specific table from being dropped from the database:

Function to prevent a single table drop

CREATE OR REPLACE FUNCTION confirm_drop()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
  obj record;
BEGIN
  -- Inspect every object the DROP command is about to remove
  FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
  LOOP
    IF obj.object_name = 'users' AND obj.schema_name = 'testing' THEN
      RAISE EXCEPTION 'command % is disabled for this table', tg_tag;
    END IF;
  END LOOP;
END;
$$;

Place the event trigger

CREATE EVENT TRIGGER confirm_drop ON sql_drop WHEN TAG IN ('DROP TABLE') EXECUTE PROCEDURE confirm_drop();

Let’s try to drop the USERS table…

postgres=# drop table testing.users;
ERROR: command DROP TABLE is disabled for this table

The output above shows that even a superuser cannot drop the table. You could use a similar approach to prevent DROP TABLE for an entire schema, or for all tables in the production database, as sketched below.
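
For instance, a minimal sketch of a schema-wide variant, following the same pattern as above (the confirm_drop_schema name is my own), only needs to match on schema_name:

CREATE OR REPLACE FUNCTION confirm_drop_schema()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
  obj record;
BEGIN
  -- Block dropping any table that lives in the testing schema
  FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
  LOOP
    IF obj.schema_name = 'testing' THEN
      RAISE EXCEPTION 'command % is disabled for schema testing', tg_tag;
    END IF;
  END LOOP;
END;
$$;

CREATE EVENT TRIGGER confirm_drop_schema ON sql_drop WHEN TAG IN ('DROP TABLE') EXECUTE PROCEDURE confirm_drop_schema();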

However, I can think of two possible downsides to this approach: (i) event triggers are not available before Postgres 9.3, and (ii) you have to take the event trigger into account whenever you actually intend to drop an individual table, or any table in the schema or database.
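
The second downside is manageable, though: for a planned drop, a superuser can disable the event trigger for the maintenance window and re-enable it afterwards:

postgres=# ALTER EVENT TRIGGER confirm_drop DISABLE;
ALTER EVENT TRIGGER
postgres=# drop table testing.users;
DROP TABLE
postgres=# ALTER EVENT TRIGGER confirm_drop ENABLE;
ALTER EVENT TRIGGER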

I’d still rely on best practices to make sure human errors are avoided: putting the required restrictions in place and following a sound process for schema changes. That said, PostgreSQL gives us the technical capability to prevent these mistakes with very low-overhead solutions. Is it worth taking advantage of? If you are planning on attending PGConf NYC or PGCon in Ottawa this year, let’s discuss in person!

Thoughts/Comments/Concerns are welcome.

8 replies on “Preventing Human Errors in Postgres”

  1. Setting the template flag on a database which is not a template is wrong from a semantic point of view.

    Even if today’s codebase does not implement undesired side effects based on the semantic tag ‘this is a template’ (which still has to be validated), this might change at any time. Some behaviour which is clearly correct for a template DB might be just plain wrong for a production DB, and such behaviour could be introduced in any future release.

    So a much cleaner approach would be a new feature, e.g. a postgresql.conf option, which forbids database drops once set. To drop a db one would have to change the server config and perform the drop.

    1. @Kai You are bang on about the “semantic” meaning of a template database! That’s the other reason I’m not totally sold on the idea. Making the prod database a template is definitely not the correct solution, but it’s a band-aid until there is a patch/fix.

      Actually, I like your idea of a config option that forbids database drops through the SQL command, making it dependent on a postgresql.conf parameter as well.

  2. The issue here is that our DDL permissions are not fine-grained enough. We have an excellent example to work from, though: DML, where INSERT, UPDATE and DELETE permissions are in principle separate. We don’t have a similar division among CREATE, ALTER, and DROP, which leads to byzantine workarounds such as those you’ve demonstrated above.
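
    To illustrate with a hypothetical app_writer role: DML privileges can already be granted piecemeal, while DDL offers nothing comparable:

    -- DML: INSERT and UPDATE granted, DELETE deliberately withheld
    GRANT INSERT, UPDATE ON testing.users TO app_writer;
    -- DDL has no equivalent: there is no GRANT for DROP; ownership
    -- (or superuser) implies CREATE, ALTER and DROP all at once.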

    Let’s fix this problem with a principled approach, not a hack.

    1. David,

      I kind of agree that fine-grained access for DDL statements is the right step, but it will not fix the “human error” issue. If you have superuser permission, DROP DATABASE is all too easy, and at the same time far more destructive; making such a mistake on prod could be very costly for the person and for the db owner.
      Having a second layer of complexity/confirmation (as discussed by Kai or Payal) for dropping a database will help keep the environment a little safer from this specific accidental human error.

      1. With utmost respect, if you think some mechanical procedure will prevent human error, you are mistaken. That the measures to soften that hard reality are abstruse and non-standard is a problem we can actually solve.

  3. I would just use the PostgreSQL version of sudo.

    postgres=# CREATE DATABASE testdb;
    CREATE DATABASE

    postgres=# CREATE ROLE supergroup NOLOGIN SUPERUSER;
    CREATE ROLE

    postgres=# CREATE ROLE yourname LOGIN IN ROLE supergroup NOINHERIT PASSWORD 'secret';
    CREATE ROLE

    postgres=# \c - yourname
    Password for user yourname:
    You are now connected to database "postgres" as user "yourname"…

    postgres=> DROP DATABASE testdb;
    ERROR: must be owner of database testdb

    postgres=> SET ROLE supergroup;
    SET

    postgres=# DROP DATABASE testdb;
    DROP DATABASE

    postgres=# RESET ROLE;
    RESET

    postgres=>

  4. I find the datistemplate hack unnecessary. Production environments almost always have active connections on the database, which is sufficient to defend against DROP DATABASE:

    ERROR: database "proddb" is being accessed by other users
    DETAIL: There is 1 other session using the database.

Comments are closed.