We currently live in a world that believes in scaling out very wide with very inefficient code.
Indeed. I’ve seen too many companies build systems on Docker and similarly inefficient platforms that, while resilient, are hundreds to thousands of times slower than a decent programming language running on one or two dedicated machines in a traditional failover cluster.
The problem is that even most computer science graduates aren’t taught much actual computer science (ask my partner: she’s a computer science grad from a decent school and doesn’t feel she got what she should have out of it), so for most problems, what you gain in parallelism you lose in raw speed.
The speed of light is finite. The speed of electrons is finite, and slower still. Coordination costs are high, and they rise superlinearly with the number of nodes (quadratically in the worst case of all-to-all communication; the exact growth depends on the class of problem, but writing books is not what I do here).
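The coordination-cost point can be sketched with Gunther’s Universal Scalability Law, which models speedup as limited by contention and crosstalk between nodes. The coefficient values below are illustrative assumptions, not measurements of any real system:

```python
# Universal Scalability Law (Gunther): speedup on n nodes, limited by
# contention (sigma, serialization) and coordination crosstalk (kappa).
# sigma and kappa here are made-up illustrative values.

def usl_speedup(n: int, sigma: float = 0.05, kappa: float = 0.001) -> float:
    """Speedup relative to a single node under contention and coordination costs."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

if __name__ == "__main__":
    for n in (1, 8, 32, 128, 1000):
        print(f"{n:5d} nodes -> speedup {usl_speedup(n):6.2f}")
```

With these coefficients, throughput peaks at a few dozen nodes and then declines; at 1,000 nodes the modeled speedup actually drops below 1, i.e. slower than a single machine, which is exactly the failure mode described above.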
Scaling is great. But in many cases it actually hurts you. For many problems, I can build a single machine that beats a thousand-node Docker deployment, given the right hardware, the right programming language, and a properly optimized database.
But like most human endeavors, IT moves on trends and what’s cool rather than what makes sense.