Why are modern programming languages like this?

20th of April, 2022

For some weird reason, I've always enjoyed the topics of performance and optimization tremendously, especially trying to understand why a compiler performs its various instruction-level optimizations for the hardware. It's quite a trip to see the years of expertise in hardware design that are reflected in modern computing. But recently that got me wondering: is there really a point to all of it?

Now don't get me wrong: fewer instructions usually mean slightly faster computation, and that's a good thing, right? But considering modern hardware, is it necessary? Should we be concerned that our compilers work so hard to minimize the number of instructions in their output? That effort would make obvious sense if we still lived in a world where computation was slow (I don't know, the 50s? The 70s?).

Such aggressive instruction minimization can easily lead to funky situations where familiar operations like '+' or '<' start behaving unintuitively. In C and C++, for instance, signed integer overflow is undefined behavior, so a compiler may assume that 'x + 1 > x' always holds and optimize the comparison away. And when the program then misbehaves, it's usually considered the programmer's fault.

On modern hardware, computation is more or less free, and we almost flirt with the idea of a concrete Turing machine with an infinite amount of memory. Shouldn't the fact that we mostly run on this kind of hardware be reflected in our programming languages too? Especially since a single cache miss can easily cost more run time than hundreds of add instructions. If extra instructions don't increase the size of the data or of the program itself, what's wrong with them? We could add quite a bit of run-time computation to a program without affecting its total running time much.

So instead of focusing on minimizing the instructions a language emits, we could focus on improving its semantics and pretty much eliminate these common, hard-to-find errors from our software. The problem is especially visible in languages that offer multiple features doing more or less the same thing, differing only slightly in performance.

When several features work pretty much the same way as each other, a language easily accumulates an excess of them. Using a large number of those features in one code base quickly leads to complex, hard-to-understand programs. This, in turn, often means the features used in a code base get restricted, so that programmers on the project stick to a common subset of the language.

A great example of this in "modern" programming languages is C++'s regular vs. virtual functions. Features like these lead programmers to waste their precious time on micro-optimizations that, in the grand scheme of things, usually aren't worthwhile. Worse, when we focus on such optimizations, we easily lose sight of the thing that really matters: the large-scale behavior of the program.

Can we fix this? Probably not, since we are already so invested in these kinds of languages. We can point fingers in various directions and hand out blame for how we ended up here. A new programming language doesn't really solve the issue, since we simply can't rewrite everything in it, and the migration would be a really slow process. Can we fix the existing languages? Probably not, which is why we rely on external tools to analyze and check our programs, and on conventions to follow, so that we can write the best code possible in these languages.

So modern computing is very exciting, but it can also be a mess…

Tags: computers, programming

Now playing: Gillian Welch - The Way It Will Be


Showing Now Playing with Hugo

6th of April, 2022

I wanted to add a "Now playing" footer to my posts so I can easily share the music I'm currently listening to, and maybe someone will discover something new and interesting through it. So I implemented a very quick, poor man's version! Basically, I just play with YouTube's search_query URL parameter and pass the song into it from the Hugo post's front matter.

I pass the current song as a slice in the post's front matter, something like this:

---
nowPlaying: ["DAF", "Liebe auf den Erste Blick"]
---

Then I just parse that in the template:

{{ if .Params.nowPlaying }}
{{ $artist := index .Params.nowPlaying 0 }}
{{ $song := index .Params.nowPlaying 1 }}
{{ $query := querify "search_query" ( printf "%s %s" $artist $song ) "search_type" "videos" }}
<div class="now-playing">
  <p class="no-margin">Now playing:
    <a href="https://www.youtube.com/results?{{ $query | safeURL }}" target="_blank">
      {{ $artist }} - {{ $song }}
    </a>
  </p>
</div>
{{ end }}
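
With the front matter above, the rendered link should point to something like this (querify runs the pairs through Go's query encoding, so spaces become plus signs and the keys come out alphabetically sorted):

https://www.youtube.com/results?search_query=DAF+Liebe+auf+den+Erste+Blick&search_type=videos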

Tags: computers, music

Now playing: DAF - Liebe auf den Erste Blick


Why not Kubernetes?

6th of February, 2022

First, I would like to say that I think Kubernetes is an excellent platform for its intended purposes. It provides excellent fault-tolerance across the cluster, a fast and easy way to roll out updates to your deployments, and great tools for managing services, volumes, metrics, and more, each with its own lifecycle to manage. Also, implementing your own tooling by extending the Kubernetes API is a trivial task, so you can easily leverage the existing machinery to build whatever you might need.

Today it's also effortless to spin up a Kubernetes cluster with the various installers and managed options available. While very complex, it's still a step closer to the ideal of "just run my code and make it work". And with containers in the picture, we are pretty close to the magical situation where we really can run the same application the same way on a laptop and in the cloud.

For me, the issues start to arise when we use Kubernetes for something other than its intended purposes. While I don't have any statistics on this, I have a pretty strong gut feeling that most people running Kubernetes use it as a glorified scheduler for placing containers on nodes as fast as possible. It is an excellent and overall pretty easy tool for orchestrating containers, but its fundamental purpose is to orchestrate everything crucial to your infrastructure: network, storage, and other dependencies.

Kubernetes gives you complete freedom to run your infrastructure as you see fit. Despite sounding like a cliché, that kind of freedom carries huge responsibility. I would dare to say that most developers and system administrators don't want to make these decisions. What if, at some point in development, you wanted to change your networking interface or your dynamic storage provider? Could you even do such a thing at that stage if the decision was made before you had anything running in Kubernetes?

Kelsey Hightower put it nicely a while back when he described Kubernetes not as a developer platform but as a framework for creating platforms. It certainly can work as a developer platform, and it's pretty easy to get started: kubectl run and kubectl expose, and you're good to go (see the commands below). That said, the API designs in Kubernetes are all built around clusters and how to manage them, so while containers are part of the picture, there is much more to take care of. Should application developers, startups, or small businesses use something like this? Probably not, unless they are developing a platform product.
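
For illustration, getting a throwaway container running and reachable inside a cluster really is just a couple of commands (the name and image here are arbitrary):

kubectl run hello --image=nginx
kubectl expose pod hello --port=80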

When we get into cluster management, we need to start thinking about managing the lifecycle of everything running inside the cluster. Unfortunately, this is also where things start to get hard. What do I do if something inside the cluster dies? What if I need to provision something dynamically? Kubernetes is pretty good at simplifying many of these concerns, but given everything happening behind the scenes, the complexity cannot be entirely simplified away.

Kubernetes has a high barrier to entry and is a very complex project, yet way too often I see it marketed as a simple solution to many problems. You can use Kubernetes in a very simple manner and get lots of things done, but eventually you will hit a wall. Deploying fault-tolerant, scalable, distributed applications across a pool of machines, with dependencies on networking, storage, and more: that is a hard problem.

Kubernetes is built for production workloads and for running infrastructure beyond your demo application. For that reason, its complexity is justified, and it should be approached with that mindset.

Tags: computers


Google Analytics considered... illegal?

19th of January, 2022

Some time ago I wrote a short post about my feelings towards web analytics, sparked by a spike in visitors to my site (mainly coming from Hacker News). After that surge, I decided to part ways completely with any sort of tracking, since for me it was mainly an unnecessary dopamine fix rather than anything useful.

Today I stumbled upon big news regarding the legality of web analytics from a privacy point of view. It turns out, as most suspected, that things are not so good, at least according to Austria's data protection authority.

Basically, this case dates back to the invalidation of the Privacy Shield data-sharing system between the EU and the US because of overreaching US surveillance. It turns out that many US companies have largely ignored this invalidation, which happened in 2020, and have continued to transfer data from the EU to the US regardless. The Austrian DPA held that an Austrian website provider's use of Google Analytics led to transfers of personal data to Google LLC in the US in violation of Chapter V of the GDPR.

The future of Google Analytics in the EU

In the long run, there are two options: either the US changes its surveillance laws to strengthen its tech businesses, or US providers will have to host European users' data in Europe. This kind of transcontinental transfer is currently (at the time of writing) illegal only in Austria, but the Dutch DPA (data protection authority) has stated that Google Analytics "may soon no longer be allowed".

In any case, this is a great thing for privacy in the EU, and hopefully many more countries will join Austria in this effort. You can follow which countries have done so at Is Google Analytics ILLEGAL in your country?

Tags: analytics, computers, gdpr, privacy


Adventures in linear types

9th of January, 2022

Lately, I have dedicated a large part of my free time to audio software. I have done this mainly out of interest in the subject due to my history in music. But at the same time, I also thought writing audio software could be a fun passion project or even a small business that I could work on alongside my day job. I don't see myself replacing my current job with this, but maybe I could dedicate 20% of my work time to it.

The world of audio software is a pretty exciting place. It involves a lot of low-level systems stuff like signals and real-time operations, complex math at times, and something that you can feel or at least hear. And what's great, I don't have any background in this stuff!

Now, I have programmed for most of my life and played around with RTOSes, but writing algorithms for manipulating digital signals is new territory for me. However, I do have experience with the topic from the user's point of view, since I have been making music for almost as long as I have programmed: playing instruments, hearing how effects shape the sound, learning how mixing and mastering work, and so on. But what do linear types have to do with any of this?

Signals in the wild

Like I said earlier, signal processing (not necessarily just audio) is very low-level stuff. So when it comes to working with signals in software, you often need to work in C or C++. This is mainly because handling and manipulating signals optimally and efficiently demands the performance and close-to-hardware nature of these languages.

Digital signal processing is also full of algorithms. The standard workflow in this industry seems to be to prototype applications in a high-level language before productionizing them, often in heavily math-oriented languages and tools like MATLAB, Octave, or Mathematica. Julia also appears to be growing in popularity in this world. These high-level languages are used mainly for their speed of development.

It is also not uncommon to see FPGAs used in these applications, for good reason: they are reconfigurable hardware, so you can tailor and deploy computation units and data buses designed specifically for your particular needs. If you're working with digital hardware, you can't go wrong with FPGAs, and in that world VHDL or Verilog comes in handy.

As you can see, these applications tend to involve a lot of low-level concepts, but at the same time high-level topics on the prototyping side. But, as the post's title might hint, I'm not interested in the prototyping aspects of signal processing; I think those are all well and good. Instead, I want to run a small thought experiment on whether the low-level side could somehow be improved.

I would consider myself a functional programmer first and foremost, even though I mainly write imperative and/or object-oriented code, at least professionally. In my free time, and in non-trivial side projects that aren't signal-processing related, I like to work with weird languages like Haskell or Common Lisp. Unfortunately, as mentioned above, almost all the work in the signal processing world is done in C or C++, with an emphasis on the latter. I completely understand why these languages are used: we are talking about real-time programming, so latency needs to be minimized.

"Real-time" can be understood that the program has to produce the correct result but also on a certain amount of time (which varies between systems).

If we use audio processing as an example, you would typically have some sort of processing function in your code that does its work in the audio callback:

process :: BufferRef -> IO ()

This function is called back by either the sound card or some input device, e.g. a microphone. When the callback fires, this block of code (whatever happens to be inside it) writes the corresponding audio data into the given buffer, which is then played through the speakers, or vice versa when recording.

This procedure is basically what has to happen, in real time, over and over again while we do audio processing. Audio software is often set up to fire these callbacks from a high-priority "real-time" thread with a very short interval between them, ~1-10 ms (varies between systems).
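
To make this concrete, here is a hedged sketch of what could live inside such a process function. The mutable vector of Floats stands in for whatever BufferRef actually is, and the 440 Hz sine at a 44.1 kHz sample rate is just example data:

import qualified Data.Vector.Storable.Mutable as MV

-- Fill the buffer the host handed us with one block of samples.
process :: MV.IOVector Float -> IO ()
process buf = mapM_ writeSample [0 .. MV.length buf - 1]
  where
    writeSample i = MV.write buf i (sine i)
    sine i = sin (2 * pi * 440 * fromIntegral i / 44100)

-- Usage: the host would allocate the buffer once and call us on
-- every callback, e.g.  buf <- MV.new 512; process buf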

To achieve this minimal latency between callbacks, you often can't rely on things like garbage collection, since you can't be sure when it will run. I dare say that most software benefits significantly from GC, but in audio, getting GC right is very hard. If GC kicks in at the wrong time, or the latency between callbacks grows too large, garbage data leaks into the buffer and causes unwanted sounds.

Most other software would only notice a slight latency in its computations when profiling, which might not be the end of the world, depending on the context. But in audio you cannot let that happen, since you can literally hear the glitch, and that is unforgivable.

When it comes to C and C++, I think everyone knows their memory-management footguns. Thankfully, modern C++ is not that bad (as long as you follow the Core Guidelines), but there is still a lot of unnecessary baggage on the road to safe code in these languages.

Could there be a way to use a garbage-collected language while doing "real-time" operations, and how could that be achieved?

Linear types

GHC 9.0 introduced support for Linear Haskell, which can be enabled with -XLinearTypes. One of the significant use cases for linear types is implementing latency-sensitive real-time services and analytics jobs. As I mentioned earlier, a major issue in this use case is GC pauses, which can happen at arbitrary points for reasonably long periods. The problem is exacerbated as the size of the working set increases. The goal here is to partially or entirely eliminate garbage collection by controlling aliasing and making memory management explicit but safe.

So what, then, are linear types? Henry Baker described linear types and their benefits in his paper Lively Linear Lisp — 'Look Ma, No Garbage!' and also in "Use-once" variables and linear objects: storage management, reflection and multi-threading. As you can see, this is not a new topic. Basically, we are talking about types whose instances must be held in a linear variable. A variable is linear if it's accessed exactly once in its scope; likewise, a linear object always has a reference count of 1. With this guarantee at the type level, we can avoid synchronization and GC, and we can even update linear objects in place, since doing so is referentially transparent.
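
To make "accessed exactly once" concrete, here is a tiny, self-contained sketch of what the type checker enforces; the Token type is made up for illustration:

{-# LANGUAGE LinearTypes #-}

data Token = Token

-- Consumes its argument exactly once, by pattern matching on it.
consume :: Token %1-> ()
consume Token = ()

-- Fine: 't' is used exactly once.
ok :: Token %1-> ()
ok t = consume t

-- Both of the following are rejected by GHC:
-- twice t = case consume t of () -> consume t   -- 't' used twice
-- never _ = ()                                  -- 't' never consumed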

Avoiding garbage collection

So why can we avoid synchronization and GC with linear types? Consider the following function as an example:

linearFunc :: a %1-> b

On their own, linear types only give a type to functions that consume their argument exactly once when their result is consumed exactly once. So alone, they don't make your programs any faster or any safer with resources. Still, they allow you to write many optimizations and resource-safe abstractions that weren't possible before; one such abstraction is sketched below.
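
As a small taste, linear-base (the standard library companion to -XLinearTypes) ships mutable arrays whose operations thread the array linearly, which makes destructive in-place updates safe. The sketch below follows the pattern from the library's README; exact module names and signatures may vary between versions:

{-# LANGUAGE LinearTypes #-}

import qualified Data.Array.Mutable.Linear as Array
import Prelude.Linear (Ur (..), lseq, unur, (&))

-- Allocate an array, mutate it in place, and read a slot back. The
-- array is threaded linearly through every call, so no alias can
-- observe the mutation and 'set' is free to update destructively.
answer :: Int
answer = unur (Array.alloc 8 0 go)
  where
    go :: Array.Array Int %1-> Ur Int
    go arr =
      Array.set 0 42 arr      -- in-place write
        & Array.get 0         -- read slot 0 back
        & \(Ur x, arr') ->
            arr' `lseq` Ur x  -- consume the array, keep the value

main :: IO ()
main = print answer  -- prints 42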

First, since linear values can only be used once, they cannot be shared. This means that, in principle, they shouldn't be subject to GC. But this depends heavily on the consumer of the values, since it may very well do some sort of deallocation on the spot. One way to mitigate this could be to store these values in a heap outside of the GC's control.

While moving these values to such a heap would diminish the GC's role, it would introduce some overhead, which could increase the total running time of your application. But if we stick with real-time systems as our example, that isn't necessarily a bad thing.

In real-time systems, optimizations often target only the worst-case scenarios. You don't really care about your latencies as long as they stay within a particular window, but you do care that they never exceed your maximum limit, and that is precisely where optimizations built on linear types could come in handy.

Practical linear types

Linear types are a blessing in GC'd languages if you intend to do anything safely in the low-level world. I would like to continue with some practical examples of how Haskell utilizes these types and how they can make low-level optimizations and resource handling safer in your Haskell code, but that deserves a post of its own.

Tags: computers, dsp, haskell, lisp, programming

