Extravagancy in Tech

February 2021
I’ve lately started to ponder the repercussions of a trend of extravagant architectural choices in the tech industry. These kinds of choices seem especially prevalent in the current era of cloud computing; at least, I stumble upon them regularly when working with a wide variety of distributed systems as a DevOps/SRE consultant. Typical examples of this trend are Kubernetes setups in projects that could easily progress without it, or data infrastructure solutions that feel like a sledgehammer for hitting a small nail.
I’m not bashing these technologies; I enjoy working with them, and I do so daily. They have their purpose, but quite often that purpose was designed with a much larger picture in mind. To stay with the example of Kubernetes: sure, it can bring a lot of benefits, like easier deployments, reduced complexity in large projects, and quite often lower costs. But no one can deny that it is overkill for many projects. Where it isn’t needed, it mainly brings unnecessary complexity and reduces productivity, so it can be a double-edged sword. Still, I don’t want to focus on these singular technologies here, since they feel minor on the grand scale.
Implications on our evolution
As we move more and more towards this science-fiction picture of the future, we need to start thinking about topics such as transhumanism and how we are going to live with machines that will outsmart us. Understandably, the topics associated with transhumanism, like the singularity, AI, nanotechnology, cybernetics, and much more, are challenging to discuss first of all on a technological level, but also on a moral and ethical level. It is also hard to say whether we will ever see the rise of these kinds of technologies. It could be that our civilization can see that these inventions are possible but cannot actually implement them. It could also be that technological evolution has become so rapid that we will see a big turn of events in these areas in the near future. Overall, technological evolution grows exponentially, so the time between significant inventions gets shorter and shorter. For now, we can only speculate on how things might turn out.
Whatever the outcome may be, I believe some degree of optimism is in order. I think the singularity is inevitable, yet most of the industry’s actions suggest that the path we are on is not a good one. Those actions are the main reason why these over-the-top architectural choices might be hinting at something inevitably bad.
When I talk about projects using these “sledgehammer” solutions where they aren’t necessary, I am, on its own, talking about a small, pesky thing. What worries me is the pattern: if we reach for hyped-up, flavor-of-the-month tools in every project, what could that mean, for example, for the development of AI or other future technologies? Could the fact that we seem to have endless resources cause something that cannot be reverted? Bill Joy wrote a great essay about the future not needing us, and it is scary to think that we run these extravagant systems mainly because we can. A similar thing applies to data collection and many other privacy issues. Most of the big platforms that use tracking tend to collect a lot of data, which quite often isn’t used thoroughly; only a minimal amount of it is needed to build information about the user, and the rest is possibly saved for later.
Smart usage of limited resources
Back in the olden days, before I was even born, computers were understandably very limited in terms of resources. Computing has evolved tremendously since, allowing us to use these larger-than-life solutions in environments where they wouldn’t necessarily be needed. Now, has the quality of systems or programs evolved in direct proportion to the increase in computing power? Definitely not. The fact that this kind of power is available to us everywhere has possibly increased the number of innovations, since more people can start thinking of possible uses for the machines that are all around us and that they are in contact with regularly. You could also assume that more people being in contact with these machines daily would translate into more general interest in programming and the like, but this doesn’t seem to be the case.
Where I’m going with this is that quality tends to decline as we move towards the future; how could this be tackled? Clearly, this kind of wild-west design in crucial systems can’t continue.
A strategic approach to development
When we talk about this extravagancy phenomenon in tech projects, it tends to affect the program/system developers the most. Often, they are not the ones making these decisions; most of the time, it is someone in the ivory tower who plans them. Thankfully, these people relatively frequently have at least some background in these systems, though not always. So should developers’ opinions matter more when weighing the various options for a project? I believe Sun Microsystems had a great idea when they started to market Java to programmers. Sun was a hardware company that figured out that to sell more hardware, they had to please programmers first, which resulted in Java becoming one of the most widely used languages today. Now, did Java actually please programmers? Maybe back then, when people hated C++, but opinions seem to have shifted in recent years, although both languages still enjoy immense support.
Overall, I think these large systems have their place in many domains, but the domains where their power can be used efficiently are very rare. This ends up in a situation where we either have a lot of unnecessary computing power just lying idle or use it for something unnecessary. Systems then carry unnecessary complexity that mainly hinders the workflow of the people developing them.
I also think that doing something because “this might be needed in the future” is a bad practice, since it tends to end up in an infinite loop of unnecessary work. More straightforward solutions are quite often good enough for most projects, with a much better developer experience and much better efficiency. These kinds of solutions also quite often allow an effortless migration to a bigger and better solution if it’s ever needed. Don’t optimize if it’s not necessary.