We currently live in a world that believes in scaling things out VERY wide with very inefficient code.
Indeed. I've seen too many companies building systems on Docker and other highly inefficient platforms that, while resilient, are hundreds to thousands of times slower than they would be if you used a decent programming language on one or two dedicated machines in a traditional failover cluster.
The problem is that even most computer science graduates aren't taught much actual computer science (ask my partner: she's a computer science grad from a decent school and doesn't feel she got what she should have out of it), so for most problems, what you gain in parallelism you lose in raw per-node speed.
The speed of light is finite. Electrical signals in a wire are slower still. Coordination costs are high, and they climb faster than linearly as you add nodes, roughly with the square of the node count when every node has to talk to every other (this isn't quite true for many classes of problems, but writing books is not what I do here).
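To put a rough shape on that, here's a minimal sketch using Gunther's Universal Scalability Law, which models exactly this trade-off: contention (alpha) and coordination crosstalk (beta) eat into the speedup you'd expect from adding nodes. The coefficients below are made-up illustrative values, not measurements from any real system.

```python
# Universal Scalability Law: speedup(N) = N / (1 + alpha*(N-1) + beta*N*(N-1))
#   alpha ~ contention (serialized work), beta ~ coordination/crosstalk cost.
# alpha and beta here are invented for illustration only.

def usl_speedup(n, alpha=0.05, beta=0.0002):
    """Predicted speedup on n nodes under the Universal Scalability Law."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

if __name__ == "__main__":
    for n in (1, 8, 64, 256, 1024):
        print(f"{n:5d} nodes -> speedup ~ {usl_speedup(n):6.1f}")
```

With any nonzero beta the curve peaks and then falls: past a certain node count, adding machines makes the whole thing slower, which is the "scaling actually hurts you" case.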
Scaling is great. But scaling actually hurts you in many cases. For many problems I can build a single machine that's faster than a thousand-node Docker deployment, provided I use the right hardware, the right programming language, and a properly-optimized database.
But like most human endeavors, IT moves on trends and what's cool rather than what makes sense.
Can I assume that whatever you mean by a properly-optimized database starts with a relational database?
I'm no database expert (not really my area, despite all the SQL troubleshooting I've been forced into over the years!), but relational DBs are what I'm most familiar with, so yeah, that's the type I meant.
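For what it's worth, a minimal sketch of the kind of thing "properly-optimized" usually means in the relational world: indexing the column you actually filter on, so a lookup stops being a full-table scan. The table, column names, and data volumes below are made up purely for illustration (using Python's built-in sqlite3 so it runs anywhere).

```python
import sqlite3
import time

# Hypothetical table of events, queried by user_id. All names/sizes are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, payload) VALUES (?, ?)",
    ((i % 10_000, "x" * 50) for i in range(1_000_000)),
)
conn.commit()

def timed_lookup(label):
    start = time.perf_counter()
    count = conn.execute(
        "SELECT COUNT(*) FROM events WHERE user_id = ?", (1234,)
    ).fetchone()[0]
    print(f"{label}: {count} rows in {time.perf_counter() - start:.4f}s")

timed_lookup("full scan")      # no index: every row gets examined
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
timed_lookup("index lookup")   # index: jumps straight to the matching rows
```

Same data, same query; the only difference is whether the planner can use an index instead of scanning the whole table.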