ClickMasters

Database & Data Management

Database Design Services FAQs

What is database normalisation and why does it matter?

Database normalisation is the process of structuring a relational database schema to reduce data redundancy and improve data integrity. A normalised schema stores each fact exactly once in the most appropriate table rather than repeating it across multiple tables. The normal forms define progressively stricter rules: First Normal Form (1NF) requires atomic column values (no arrays or lists stored in a single column), Second Normal Form (2NF) requires that non-key attributes depend on the entire primary key (not a subset), Third Normal Form (3NF) requires that non-key attributes depend only on the primary key (not on other non-key attributes), and Boyce-Codd Normal Form (BCNF) requires that every determinant is a candidate key. Poor normalisation has three practical consequences: update anomalies (changing a customer's email means updating it in five tables; miss one and the data is inconsistent), delete anomalies (deleting a customer's last order also deletes the customer's contact information), and insert anomalies (you cannot add a product category until a product exists in that category).
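
As a minimal sketch (the table and column names are illustrative, not from a real client schema), here is a denormalised orders table refactored into 3NF:

```sql
-- Denormalised: customer details are repeated on every order row.
-- Changing a customer's email means updating many rows (update anomaly),
-- and deleting their last order loses their contact details (delete anomaly).
CREATE TABLE orders_denormalised (
    order_id       bigint PRIMARY KEY,
    customer_name  text NOT NULL,
    customer_email text NOT NULL,
    order_total    numeric(10,2) NOT NULL
);

-- Normalised (3NF): each customer fact is stored exactly once.
CREATE TABLE customers (
    customer_id bigint PRIMARY KEY,
    name        text NOT NULL,
    email       text NOT NULL
);

CREATE TABLE orders (
    order_id    bigint PRIMARY KEY,
    customer_id bigint NOT NULL REFERENCES customers (customer_id),
    order_total numeric(10,2) NOT NULL
);
```

After the split, updating an email touches exactly one row in customers, and a customer record survives the deletion of their last order.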

Should I use UUID or integer primary keys?

Both are appropriate depending on the use case. Integer primary keys (SERIAL or BIGSERIAL auto-incrementing integers) are smaller (4 or 8 bytes vs 16 bytes for a UUID), generate sequential values (good index locality: new rows are inserted at the end of the B-tree index, with no random fragmentation), and are human-readable (easier to debug and reference in support tickets). UUID primary keys (gen_random_uuid(), version 4) are globally unique across distributed systems (safe to generate in the application without database coordination, which matters for distributed inserts), non-enumerable (preventing URL guessing: /api/orders/1234 leaks that there are at least 1234 orders; /api/orders/a3b7c8... does not), and migration-safe (no integer range exhaustion, no sequence conflicts when merging databases). ClickMasters defaults to UUIDs for new B2B SaaS products (non-enumerable resource IDs are a security benefit in customer-facing APIs) and BIGSERIAL for internal/analytics tables where human readability and sequential access patterns matter.
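
A minimal sketch of both styles (table names are illustrative); gen_random_uuid() is built into PostgreSQL 13 and later:

```sql
-- Customer-facing resource: UUID keys are non-enumerable and safe to
-- generate in the application without coordinating with the database.
CREATE TABLE api_orders (
    id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Internal/analytics table: BIGSERIAL keys are less than half the size
-- of a UUID and insert sequentially at the end of the primary-key index.
CREATE TABLE analytics_events (
    id          bigserial PRIMARY KEY,
    occurred_at timestamptz NOT NULL DEFAULT now()
);
```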

What is row-level security (RLS) and how does it enforce multi-tenancy?

Row-Level Security (RLS) is a PostgreSQL feature that enforces access control at the row level, restricting which rows each database session can read or modify based on a policy. For multi-tenant SaaS applications, every tenant-scoped table has an organisation_id column, and an RLS policy is applied to the table: `CREATE POLICY tenant_isolation ON orders USING (organisation_id = current_setting('app.current_org_id')::uuid)`. When the application authenticates a user, it sets the session variable: `SET app.current_org_id = 'user-org-uuid'`. Every subsequent query on that connection is automatically filtered by the policy, so `SELECT * FROM orders` returns only the authenticated organisation's orders. The database enforces this regardless of what SQL the application writes: a bug that accidentally omits a WHERE clause cannot return another tenant's data. This is safer than application-level filtering, where a single coding mistake can cause a data breach.
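
A minimal sketch of the full setup (the orders table follows the example above; the UUID literal is a placeholder). Note that a policy has no effect until row-level security is enabled on the table:

```sql
-- Tenant-scoped table: every row carries the owning organisation's id.
CREATE TABLE orders (
    id              uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    organisation_id uuid NOT NULL,
    total           numeric(10,2) NOT NULL
);

-- The policy does nothing until RLS is switched on for the table;
-- FORCE applies it to the table owner as well.
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
ALTER TABLE orders FORCE ROW LEVEL SECURITY;

-- Filter every read and write by the session's organisation id.
CREATE POLICY tenant_isolation ON orders
    USING (organisation_id = current_setting('app.current_org_id')::uuid);

-- Per connection, immediately after authenticating the user:
SET app.current_org_id = '00000000-0000-0000-0000-000000000000';

-- Implicitly filtered to the current organisation's rows only.
SELECT * FROM orders;
```

In practice the SET statement is issued by the application's connection middleware right after authentication, before any tenant-scoped query runs on the connection.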

How do you add indexes to an existing production database without downtime?

PostgreSQL supports concurrent index creation: `CREATE INDEX CONCURRENTLY index_name ON table (column)`. Concurrent creation does not take a lock that blocks reads or writes, so the table remains fully available during the index build. The trade-offs: concurrent creation takes longer than a normal build (2-3x), it cannot be run inside a transaction, and if it fails partway through, the partially built invalid index must be dropped and recreated (`DROP INDEX CONCURRENTLY invalid_index`). For adding constraints (NOT NULL, UNIQUE, FOREIGN KEY) to tables with existing data, each constraint type has a specific zero-downtime, multi-step approach that avoids the `ACCESS EXCLUSIVE` lock a plain `ALTER TABLE` would otherwise take. ClickMasters documents the specific approach for each migration type in the migration plan and validates it in a staging environment with production-scale data before executing on production.
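
As one illustration of these multi-step patterns (table and column names are hypothetical), here is a concurrent index build followed by a zero-downtime NOT NULL constraint; the NOT VALID / VALIDATE split keeps each step to a brief lock:

```sql
-- Step 1: build the index without blocking reads or writes.
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);

-- Zero-downtime NOT NULL on an existing column:
-- Step 2a: add the check as NOT VALID. This takes only a brief lock
-- because existing rows are not scanned.
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_id_not_null
    CHECK (customer_id IS NOT NULL) NOT VALID;

-- Step 2b: validate existing rows. This takes a SHARE UPDATE EXCLUSIVE
-- lock, so reads and writes continue during the scan.
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_id_not_null;

-- Step 2c (PostgreSQL 12+): SET NOT NULL reuses the validated check
-- constraint and skips the full-table scan, so the lock is brief.
ALTER TABLE orders ALTER COLUMN customer_id SET NOT NULL;
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_not_null;
```

UNIQUE and FOREIGN KEY constraints follow analogous patterns (for example, `CREATE UNIQUE INDEX CONCURRENTLY` followed by `ADD CONSTRAINT ... UNIQUE USING INDEX`), each documented per migration type in the migration plan.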