Production Engineering · School Management Platform · 100+ Modules
50+ endpoints covered. 15+ SQL operators. 12 lines per module now.
01 — The Problem: 50 Modules, 50 Identical Functions
Every backend developer has written the same findAll function more times than they can count.
Pagination, sorting, filtering by name, filtering by status — copy, paste, adjust.
By the tenth module, you're not building features anymore. You're maintaining fifty
slightly different versions of the same function.
The system I was building covered students, teachers, employees, billing, grading, attendance, extracurriculars, and more — over 100 modules, all needing list endpoints, all needing filtering, sorting, and pagination, all serving multiple frontend portals with different data needs.
The naive approach would have buried us. The real problem appears when the frontend asks: "Can we filter teachers by subject AND campus at the same time, with OR logic across multiple campuses?" You'd have to touch every single service file, change the DTO, add the condition, test, deploy — and repeat next sprint when they ask for something else.
Files Touched Per New Filter Requirement: one file per module (10 modules, 10 files; 50 modules, 50 files).

Before engine: 40+ lines per findAll, repeated per module.
After engine: 12 lines per findAll, identical across all modules.
02 — The Insight: Filters Are Trees, Not If-Statements
The breakthrough came from no longer thinking of filters as individual if statements
and instead recognizing them as a tree structure. Any SQL WHERE clause is fundamentally a tree:
operator nodes (AND / OR) with children, and leaf nodes with a field, an operation, and a value.
Once you see that shape, the solution becomes obvious: design a JSON grammar that represents this tree, pass it as a query parameter, and write one recursive function that walks the tree and builds the SQL.
{
"AND": [
{ "full_name": { "contains": "Ahmad" } },
{ "status": { "is": "active" } },
{
"OR": [
{ "campus_id": { "is": "1" } },
{ "campus_id": { "is": "3" } }
]
}
]
}

That JSON compiles to exactly:
WHERE full_name ILIKE '%Ahmad%'
AND status = 'active'
AND (campus_id = 1 OR campus_id = 3)

No backend code change required — ever — for new filter combinations. The frontend constructs the tree it needs. The engine executes it.
The critical detail is SQL parenthesization. Without wrapping each OR group in its own
brackets, the database will misread operator precedence and return wrong results — silently.
The engine uses TypeORM's Brackets primitive to ensure the SQL parentheses always mirror
the depth of the JSON tree, exactly.
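The recursion can be sketched in plain TypeScript. The real engine emits TypeORM Brackets so the query builder handles the parenthesization; here raw strings and a params map stand in, only the two operations from the example are implemented, and all names are illustrative:

```typescript
// Minimal sketch of the recursive tree walk. Operator nodes (AND/OR) recurse
// into their children and wrap the result in parentheses; leaf nodes dispatch
// to a SQL template with a unique named parameter.
type FilterNode = Record<string, unknown>;

let counter = 0; // incrementing counter guarantees unique named parameters

function compile(node: FilterNode, params: Record<string, string>): string {
  const [key, value] = Object.entries(node)[0] as [string, unknown];

  if (key === "AND" || key === "OR") {
    // Operator node: recurse, then parenthesize so SQL precedence
    // always mirrors the depth of the JSON tree.
    const parts = (value as FilterNode[]).map((child) => compile(child, params));
    return "(" + parts.join(` ${key} `) + ")";
  }

  // Leaf node: { field: { operation: value } }
  const [op, raw] = Object.entries(value as Record<string, string>)[0];
  const p = `p${counter++}`;
  switch (op) {
    case "contains":
      params[p] = `%${raw}%`;
      return `${key} ILIKE :${p}`;
    case "is":
      params[p] = raw;
      return `${key} = :${p}`;
    default:
      throw new Error(`Unknown operation: ${op}`);
  }
}
```

Compiling the JSON tree from the example yields a fully parenthesized WHERE string plus a parameter map, with the nested OR group wrapped in its own brackets.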
03 — The Operator Layer: Every SQL Operation, Declared Once
Once the tree traversal identifies a leaf node, it dispatches to a switch that maps each
operation name to a SQL template. This is where domain knowledge lives: not just = and
LIKE, but the nuanced operations that real systems actually need.
contains uses ILIKE instead of LIKE — case-insensitive by default, because nobody
should have to remember whether they typed "Ahmad" or "ahmad."
is_date translates to a BETWEEN spanning the full calendar day. Filtering by date
almost never means an exact timestamp match — it means everything that happened on that day.
array_contains converts a PostgreSQL text[] column to a comma-separated string first,
then does ILIKE — because SQL can't run pattern matching directly on array types.
Each operation uses a unique named parameter with an incrementing counter, ensuring no parameter collision when the same field appears multiple times in a single query.
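A hypothetical sketch of that dispatch for a single leaf is below. The operation names follow the article; the exact SQL templates, the params shape, and the function name are assumptions, not the engine's real API:

```typescript
// Maps one leaf operation to its SQL template, registering named parameters
// under a unique base name so repeated fields never collide.
function leafSql(
  field: string,
  op: string,
  value: string,
  p: string, // unique named-parameter base, e.g. "p7", from the counter
  params: Record<string, unknown>,
): string {
  switch (op) {
    case "contains": // case-insensitive substring match
      params[p] = `%${value}%`;
      return `${field} ILIKE :${p}`;
    case "is": // exact match
      params[p] = value;
      return `${field} = :${p}`;
    case "is_date":
      // "filter by date" means the whole calendar day, not one timestamp
      params[`${p}_from`] = `${value} 00:00:00`;
      params[`${p}_to`] = `${value} 23:59:59`;
      return `${field} BETWEEN :${p}_from AND :${p}_to`;
    case "array_contains":
      // text[] can't be pattern-matched directly; flatten it to text first
      params[p] = `%${value}%`;
      return `array_to_string(${field}, ',') ILIKE :${p}`;
    case "in":
      params[p] = value.split(",");
      return `${field} IN (:...${p})`; // TypeORM's spread syntax for lists
    default:
      throw new Error(`Unknown operation: ${op}`);
  }
}
```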
Operator Reference — 15+ SQL Mappings
04 — Type Intelligence: Auto-Filtering from Entity Metadata
The tree grammar handles frontend-constructed queries. But there's a second layer: automatic query generation from the entity itself.
When a plain query param arrives — ?full_name=Ahmad — the engine reads TypeORM's
runtime entity metadata to determine the column type. A varchar becomes a contains
filter. A timestamp becomes an is_date filter. An int becomes an is (exact match)
filter. An enum[] becomes an array_contains filter.
| Column type | Query param | Auto-generates |
|---|---|---|
| varchar | ?full_name=Ahmad | contains |
| timestamp | ?created_at=2024-03-01 | is_date |
| int | ?campus_id=3 | is |
| enum | ?status=active | enum_contains |
| int[] | ?campus_id=1,3 | in |
The result: add a column to an entity and it's automatically filterable via query params with the correct SQL operation — no service code change required. The engine upgrades itself for free every time the schema changes.
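The mapping in the table can be sketched as a pure function. In the real engine the column type and array flag come from TypeORM's runtime EntityMetadata; here they are plain arguments, and the function name is illustrative:

```typescript
// Chooses the default SQL operation for a plain query param based on the
// column's declared type, mirroring the table above.
function defaultOperation(columnType: string, isArray: boolean): string {
  if (isArray) {
    // enum[] columns get the flatten-and-ILIKE treatment; other arrays use IN
    return columnType === "enum" ? "array_contains" : "in";
  }
  switch (columnType) {
    case "varchar":
      return "contains"; // case-insensitive substring search
    case "timestamp":
      return "is_date"; // whole-day BETWEEN
    case "int":
      return "is"; // exact match
    case "enum":
      return "enum_contains";
    default:
      return "is"; // conservative fallback for unmapped types
  }
}
```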
05 — Cross-Relation Filtering: No Joins in the Controller
The most powerful capability: filtering on columns that don't live on the entity being
queried. A frontend might want to filter students by campus name — but campus_name is
on the related Campus entity, not on Student.
The engine detects any query param containing underscores and treats it as a relation chain. It splits the param name, walks the entity's relation graph recursively, and resolves the final segment as a column on the target entity.
The alias — campus — is automatically inferred from the path walked. The column type is
read from the resolved entity's metadata. The correct SQL operation is chosen.
No controller code. No join declaration.
Depth is theoretically unlimited. A three-level chain like
subjectYear_subject_name walks three entities and produces the correct alias —
constrained only by the actual relation graph, not by any hardcoded depth limit.
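The walk can be sketched with a toy metadata shape standing in for TypeORM's relation graph. One subtlety the sketch makes explicit: ordinary column names like full_name also contain underscores, so only segments that actually name relations are consumed, and the remainder is treated as the column. All names here are illustrative:

```typescript
// Toy stand-in for TypeORM entity metadata: column names plus relations.
interface EntityMeta {
  columns: string[];
  relations: Record<string, EntityMeta>;
}

// Splits a param like "subjectYear_subject_name", greedily follows segments
// that name relations, and resolves the rest as a column on the final entity.
// The alias is the last relation walked, matching the article's examples.
function resolveChain(
  root: EntityMeta,
  param: string,
): { alias: string; column: string } {
  const segments = param.split("_");
  let entity = root;
  let alias = "";
  let i = 0;
  while (i < segments.length && segments[i] in entity.relations) {
    alias = segments[i];
    entity = entity.relations[segments[i]];
    i++;
  }
  const column = segments.slice(i).join("_");
  if (!entity.columns.includes(column)) {
    throw new Error(`No column ${column} on ${alias || "root entity"}`);
  }
  return { alias, column };
}
```

Depth falls out of the loop for free: a three-level param walks three entities because the relation graph allows it, not because any depth was hardcoded.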
Cross-Relation Traversal — campus_name → SQL alias

Query param: ?campus_name=South
Relation graph walk: campus + name
Resulting SQL: campus.name ILIKE '%South%'

Query param: ?subjectYear_subject_name=Math
Resulting SQL: subject.name ILIKE '%Math%'

06 — Server-Side Query Injection: Not Validation
Multi-tenant systems have a hard requirement: a user at Campus A must never see Campus B's data. The naive approach is to validate that the frontend includes the correct campus filter. The problem: validation is trust. If the frontend sends the wrong campus ID — accidentally or maliciously — the wrong data is returned.
The engine takes a different approach: inject required conditions server-side and merge them with whatever the frontend requested. The server builds its own mandatory filter — campus isolation, tenant scope, any constraint that must always apply — and combines it with the frontend's query tree using a deep merge.
The frontend's AND array and the server's AND array get concatenated. The frontend
cannot bypass server-side data isolation — it can only append its own conditions on top
of what the server requires. No validation needed. The constraint is structural.
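The merge itself is small enough to sketch directly. The function name and the Query shape are assumptions; the point is the ordering: server conditions are prepended and can never be removed or overridden by the client:

```typescript
// One condition leaf from the tree grammar, e.g. { campus_id: { is: "1" } }.
type Condition = Record<string, unknown>;
type Query = { AND?: Condition[] };

// Concatenates the server's mandatory AND array with the frontend's.
// Because both sides are joined under a single AND, the frontend can only
// narrow the result set further, never widen it past the server's scope.
function mergeQueries(serverRequired: Query, frontend: Query): { AND: Condition[] } {
  return {
    AND: [...(serverRequired.AND ?? []), ...(frontend.AND ?? [])],
  };
}
```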
Query Merge — Frontend + Server = Final Query
Frontend query
AND: [
  { full_name: { contains: "Ahmad" } },
  { status: { is: "active" } }
]

Server-enforced
AND: [
  { campus_id: { is: "1" } }
]

Final query (cannot be bypassed)
AND: [
  { campus_id: { is: "1" } },  ← always present
  { full_name: { contains: "Ahmad" } },
  { status: { is: "active" } }
]