Posted on 23rd of June 2021 | 878 words

Code reading has always been an activity I’ve just done without giving much thought to it. Yet when I look back at this habit now, I see it as immensely beneficial. It first caught my attention when I was reading Peter Seibel’s book Coders at Work, which has a section where Seibel asks his interviewees about code reading. They tended to be unanimous that code reading is very beneficial. Still, the interviews left me with the impression that the practice itself was lacking even among those heavyweight programmers, the exceptions being Brad Fitzpatrick and, obviously, Donald Knuth. If these programmers speak for the practice but don’t do it in the wild, then who does? Overall, it seems pretty odd to me. Seibel made a great comparison on this point when he likened programmers to novelists: a novelist who hadn’t read anyone else’s publications would be unheard of.
I’ve always enjoyed reading others’ source code, mainly, let’s face it, to steal some ideas. But by doing this, I’ve picked up a long list of lessons, ideas, and patterns, which I’ve utilized frequently in most of my work since.
Pattern Matching
One of the most significant benefits I’ve gained from reading code is that, after a while, you start to recognize various patterns. Sure, every project might seem cluttered and hard to understand at first, but once you get the gist of it, you start to realize why things have been done the way they are. Furthermore, once you’ve understood some of these patterns, it becomes much easier to notice them in other similar or not-so-similar projects. Fundamentally, this means your WTFs-per-second rate keeps dropping.
I have also noticed that pattern matching helps in understanding the project under study as a whole. Rather than trying to comprehend a large open-source project all at once, it is best to take it in small pieces. Then, when one of these pieces is understood, it can help tremendously in understanding the others.
Benefits of Reinventing
It can often be pretty hard to understand the functionality of some parts of an extensive program just by looking at the code. So quite often, a good way to get a better grasp of foreign code is to reimplement it the way you would write it. This way, you can abstract the bread and butter out of the program and utilize it however you want.
This kind of reimplementing can be quite hard in bigger projects. There, the best way to reinvent something is to change something small and see the change reflected in a new build. For example, try changing some text in a menu or in the output. This way, you can quickly test how well you understand the foreign code.
Code as a Literature Medium
Many say that code is not literature because you read it differently from prose. In my opinion, this doesn’t necessarily need to be the case. Overall, code is written for humans first and machines second. An excellent example is Robert C. Martin’s ravings, in which he often repeats that “code should read like prose to be clean”, and I tend to agree. Another good one is Donald Knuth’s approach of literate programming, although that is more about embedding pieces of code amidst what one could call prose. Nonetheless, such a system makes the code much more readable, since writing is such a big part of it.
One thing that I believe makes people think code is not literature is syntax highlighting. I don’t use it. For some reason, I never grew used to coloured text. Of course, I might be a bit biased, but when I turn on syntax highlighting, I tend to focus on the wrong things in the code, and then it doesn’t read like prose anymore. Removing syntax highlighting has allowed me to grasp the whole structure better. Is this universally true, and does it work for everyone? I don’t think so, but that’s how it feels to me.
Code Reading Club
Based on these thoughts and Seibel’s ideas, I decided to try a code-reading club at my workplace. Initially, what I had in mind for this kind of club was choosing one library/program per week or month, then dissecting the main logic behind it and discussing it. However, I quickly realized that this would most likely not work, since people have different interests in programming. For example, I am not interested in various GUI applications or other frontend technologies, even though they might have some good ideas behind them.
So a much better approach would most likely be for each person to choose one library/program, dissect it, and share the findings with the rest of the group. A dissection done by someone other than yourself could easily inspire you and others to dive more deeply into the code itself, even if it is a little outside your interests. That being said, exploring the world beyond your own circles can be mind-opening, since you can easily find new approaches to the same problems you face in your work.
I want to give this kind of approach a good try, and I might then write some “deep thoughts” about it in the form of a review.
Posted on 8th of May 2021 | 1053 words

I’ve started to ponder the repercussions of the trend of extravagant architectural choices in the tech industry. Unfortunately, these choices seem prevalent in the current era of cloud computing. At least, I seem to stumble upon them regularly when working with a wide variety of distributed systems. Great examples of this trend are the various Kubernetes setups in projects that could easily progress without it, or some data infrastructure solution that feels like a sledgehammer for hitting a small nail.
I’m not bashing these technologies, since I enjoy working with them and do so daily. They have their purpose, but that purpose usually assumes a larger picture. If we focus on the example of Kubernetes: sure, it can bring many benefits, like easier deployments, reduced complexity in large projects, and often reduced costs. But no one can deny that it is overkill in many projects. Where it’s not needed, it mainly brings unnecessary complexity and reduces productivity. So it can be a double-edged sword. But I don’t want to focus on these singular technologies here, since they feel minor on the grand scale.
Implications on Our Evolution
As we move closer to this science-fiction picture of the future, we need to start thinking more about topics such as transhumanism and how we will live with machines that outsmart us. Understandably, the issues associated with transhumanism, like the singularity, AI, nanotechnologies, cybernetics, and much more, are challenging to discuss, both on a technological level and on a moral and ethical one. On the other hand, it is hard to say whether we will ever see the rise of these kinds of technologies. It could be that our civilization can see that these inventions are possible but cannot implement them. Then again, it could also be that technological evolution has started to get so rapid that we will see a significant turn of events in these topics in the near future.
Overall, technological evolution grows exponentially, so the time between significant inventions gets shorter and shorter (see the law of accelerating returns: https://www.kurzweilai.net/the-law-of-accelerating-returns). So, we can only speculate on how things might turn out.
Whatever the outcome may be, I believe some degree of optimism is in order. Still, I think the singularity is inevitable, and much of the industry’s behaviour indicates that the path we’re on is not a good one. That behaviour is the main reason these over-the-top architectural choices might hint at something inevitably bad.
When I talk about projects using these “sledgehammer” solutions where they aren’t necessary, I’m talking about a small, pesky thing overall. What worries me is that we use these hyped-up tools, whichever happens to be the flavour of the month, in every project; what could that mean, for example, for the development of AI or other future technologies? Could our seemingly endless resources cause something that cannot be reverted? Bill Joy wrote a great essay about the future not needing us, which makes it scary to think that we run these extravagant systems mainly just because we can. A similar thing applies to data collection and many other privacy issues. Most big platforms that utilize tracking tend to collect a lot of data, which often isn’t used thoroughly, so the data is collected to build minimal information about the user, and possibly the rest is saved for later.
Clever Usage of Limited Resources
Back in the olden days, before I was even born, computers tended to be, understandably, very limited in terms of resources. Computing has evolved tremendously since then, allowing us to use these larger-than-life solutions in environments where they wouldn’t necessarily be needed. Has the quality of systems or programs evolved in direct proportion to the increase in computing power? Definitely not. The fact that this kind of power is available everywhere has possibly increased the number of innovations, since more people are in contact with these machines regularly and can start thinking of possible uses for them. You might think that more people being in contact with these machines daily would translate into more interest in programming and the like, but this doesn’t seem to be the case.
What I’m getting at is that quality tends to decline as we head into the future; how could this be tackled? Clearly, this kind of wild-west design in crucial systems can’t continue.
Strategic Approach in the Development
When we talk about this extravagance phenomenon in tech projects, it tends to affect the developers of the program/system the most. Often, they are not the ones making these decisions; it tends to be someone from the ivory tower who plans them. Thankfully, these people relatively often have at least some background in these systems, but not always. So should developers’ opinions matter more when considering the various options for a project? Sun Microsystems had a great idea when they marketed Java. Sun was a hardware company that figured out that it had to please programmers first to sell more hardware, which resulted in Java becoming one of the most widely used languages today. Now, did Java please programmers? Maybe back when people hated C++, but opinions seem to have shifted recently, although both languages still enjoy immense support.
Overall, I think these large systems have their place in many domains, but the domains where their power can actually be used efficiently are very rare. This ends up in a situation where we either have a lot of unnecessary computing power just lying there or have it used for something unnecessary. Meanwhile, systems carry unnecessary complexity that mainly hinders the workflow of the people developing them.
I also think that doing something because “this might be needed in the future” is a bad practice, since it tends to end in an infinite loop of unnecessary work. More straightforward solutions are quite often good enough for most projects, with a much better developer experience and much better efficiency. They also often allow an effortless migration to a bigger and better solution if needed. So don’t optimize if it’s not necessary.
Posted on 28th of March 2021 | 1125 words

I started to rekindle my, unfortunately, lost writing habit a couple of weeks ago. I set up Google Analytics for this page, mainly because it makes it easy to see simple analytics. I was only interested in the visitor count and possibly where my readers were coming from. Google Analytics is a massive tool with massive amounts of data going into it. I tried to restrict this collection as much as possible, which suits the needs of my personal blog.
Then my page rose to the front page of Hacker News and started to get a lot of traction. Suddenly, thousands of readers came every day to my pesky little page with just a few posts, as I followed the visitor counts rising in my Google Analytics view. That got me thinking about the ethics of this kind of tracking, which ended with me deleting my account and my data from it.
Discomfort With Tracking
Before I deleted my data and account from Google Analytics, I looked for alternatives. I stumbled upon many privacy-oriented and GDPR-compliant analytics platforms, which at first seemed promising. Having good alternatives to the ever-prevalent Google Analytics is a great thing. But despite these features, they don’t remove the uneasiness that mining your users’ data causes. Of course, we are talking about spying here. Thankfully, there are now some restrictions regarding personally identifiable information (PII), at least in the GDPR, limiting the shadiness quite a lot. But that brings new issues in handling this kind of information, since you need to be sure that your software doesn’t leak it. Thankfully, opting out entirely from collecting PII in your software is an option.
I understand why people might want to add at least simplistic tracking to their sites: it can provide helpful information about your content, companies can see how users use their site, and the list goes on. Especially when you combine Google Analytics, or a similar analytics tool, with ads, companies can reap significant benefits from this kind of tracking. But nine out of ten sites shouldn’t need it. You could argue that most administrators use this tracking only for dopamine fixes and don’t utilize the tracked data. And even if they do use it somehow, how do they inform the user? I dare say that information about data usage is almost always written in some shallow boilerplate text, or not communicated at all.
GDPR highlights mainly four things about data usage:

- It gives EU citizens the final say on how their data is used.
- If your company handles PII, there are tighter restrictions on handling it.
- Companies can store/use data only if the person consents to it.
- Users have rights to their data.
Consent is the crucial part here, since many sites fall short on this front. There has been a lot of discussion about what should be considered consent. GDPR Art. 6.1(f) says that “processing is necessary for the legitimate interests pursued by the controller or by a third party”. Now, “legitimate interest” is a relatively shallow notion, and quite a few authorities, in Germany for example, consider that third-party analytics do not fall under “legitimate interest”.
You can utilize consent management platforms to ensure the user’s consent before dropping the tracking code on your page. But this again raises the question of what can be considered consent.
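As a sketch of the consent-gating idea (the script URL and consent states here are hypothetical placeholders, not any real consent platform’s API), the point is that the tracking script gets injected only after an explicit opt-in:

```typescript
// Hypothetical sketch: gate analytics behind explicit opt-in.
// "Consent" is whatever state your consent banner recorded.
type Consent = "granted" | "denied" | "unset";

function shouldLoadAnalytics(consent: Consent): boolean {
  // Only an explicit, informed "yes" counts; "unset" must not mean consent.
  return consent === "granted";
}

function maybeLoadAnalytics(consent: Consent): void {
  if (!shouldLoadAnalytics(consent)) return;
  // Inject the tracking script only now, after consent was given.
  const script = document.createElement("script");
  script.src = "https://analytics.example.com/script.js"; // placeholder URL
  document.head.appendChild(script);
}
```

The design choice is simply defaulting to “no tracking”: nothing loads until the user actively agrees, instead of tracking first and asking later.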
Drew DeVault wrote a great post about web analytics and informed consent. Informed consent is a principle from healthcare, but it can still offer significant elements to be utilized, especially in technology and privacy. Drew split the essential elements of informed consent in tracking into these three points:
- Disclosure of the nature and purpose of the research and its implications (risks and benefits) for the participant, and the confidentiality of the collected information.
- An adequate understanding of these facts on the part of the participant, requiring an accessible explanation in lay terms and an assessment of understanding.
- The participant must exercise voluntary agreement, without coercion or fear of repercussions (e.g. not being allowed to use your website).
Considering these essential elements of informed consent, I think we can agree that most tracking sites don’t follow these guidelines.
Thankfully, trivial tracker blocking is already supported in many browsers, which makes this issue slightly more bearable, and you can also download external tools for it. But still, this kind of approach is pretty upside down: the burden shouldn’t be on the user.
All Kinds of Cookies
Unfortunately, ad-tech companies have tried to make blocking these harder and harder by constantly evolving cookies into evercookies, supercookies, and so on. The way these work is that trackers store harder-to-detect-and-delete cookies in various obscure places in the browser, like Flash storage or HSTS flags. Evercookies were a big thing in the early 2010s, when many sites were using Flash and Silverlight, and those were very exploitable. Today those technologies aren’t used anymore, but that doesn’t mean the evolution of cookies has stopped. Supercookies, on the other hand, work at the network level, at your service provider.
Thankfully, Firefox, for example, has lately been able to start tackling these. In their post on the subject, the Firefox team discloses what they had to do to take action against this, and it is wild. They had to re-architect the browser’s whole connection handling, originally designed to improve user experience by reducing overhead, in order to eliminate these pesky cache-based cookies.
Still, browser fingerprinting could be considered the most evil “cookie” of them all. Browser fingerprinting identifies everything it can about your system. Like some cookies, it has real use cases, e.g., preventing fraud in financial institutions. Still, principally it is just another intrusive way to track people. Thankfully, some modern browsers offer at least partial ways to avoid it, but there is no full-fledged solution (other than disposable systems).
Future of Cookies
Lately, there has been some news about privacy-friendly substitutes for cookies from the tech giants. Cookies have been a relatively significant privacy issue for decades, and since the ad industry is so large, finding a replacement has been hard. So only time will tell. We cannot get rid of cookies entirely in the near future; they might just change into something else, maybe some kind of API utilizing machine learning to analyze user data. I don’t know whether that is better or worse. So I cannot wait!
tin-foil hat tightens
Conclusion
So what is the conclusion here? Probably nothing. A recently restarted small-time blogger got scared of the big numbers coming into his site and of collecting all kinds of data, and ended up stopping that kind of activity, at least on his own site. For most users and sites, this kind of tracking is just a silly monkey-gets-banana dopamine fix. Don’t track unless you need to; and if you do, inform your users thoroughly.
Posted on 3rd of March 2021 | 644 words

When talking about the tools of the trade, almost regardless of the industry, email seems to be a vital tool. The same applies to me. Obviously, in the tech industry, everything goes by email. But the same is true in music: if I happen to write, record, mix, or master something, I always share it via email.
Since email is such a crucial part of my workflow, I care about my productivity while using it. So recently, I started to look for alternatives for my two GSuite accounts. One was used for my personal domain, and the other for my music publishing company. A big reason behind the migration was that I found GSuite too much for my needs. I don’t necessarily have anything against Google’s product, although I agree they have a bit too big a footprint on the internet, so I at least try to limit my contributions to it.
Requirements for Provider
I only have two requirements for my provider: IMAP/SMTP support and the ability to use my own domain(s). Given these, there are probably hundreds of providers that would fit. But after skimming through different providers for a while, I ended up with FastMail and ProtonMail.
FastMail
FastMail seemed like a good fit when I first looked into it: easily manageable domains and reasonable pricing. I quickly tested it with their trial account and was pretty pleased with the product. However, concerns arose when I learned that the company is based in Australia. Not that I hate Australia by any means, but its hostile and subversive laws regarding encryption are pretty sketchy. The Assistance and Access Act allows, under Australia’s legislation, police to force companies to create a technical function that would give them access to encrypted messages without the user’s knowledge, which made FastMail pretty much a no-go for me.
ProtonMail
After learning about Australia’s laws against encryption, ProtonMail seemed like a natural choice. I had already heard of them and their stance on security before. Unfortunately, ProtonMail doesn’t support IMAP/SMTP access, at least in the standard way, mainly because of its encryption, which is why I didn’t want to go that route when I first heard of them. However, they offer a somewhat unorthodox solution via their ProtonMail Bridge. To my understanding, it handles the authentication to your mail and provides localhost-only IMAP4/SMTP endpoints, which you can then configure in your mail client of choice. It’s an attractive solution, and at least for me, it seems to work and doesn’t hinder my workflow much. Albeit this conveniently enables vendor lock-in, which is not very good in my books. Still, I’m pretty happy with their product and decided to migrate my emails there.
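To give an idea of the setup (the values below are illustrative; the actual host, ports, and password come from your own Bridge window, and defaults may differ between versions), a mail client configuration might look roughly like this:

```
# Hypothetical mail client settings for ProtonMail Bridge.
# Bridge runs locally, so both endpoints point at 127.0.0.1.
IMAP server:  127.0.0.1
IMAP port:    1143        # a common Bridge default; verify in your Bridge window
SMTP server:  127.0.0.1
SMTP port:    1025        # likewise a common default
Username:     you@yourdomain.example
Password:     <Bridge-generated password, not your ProtonMail password>
```

The client speaks plain IMAP/SMTP to the Bridge, and the Bridge handles the encryption against ProtonMail’s servers.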
Honorable Mention: Migadu
Migadu is on the smaller end of the spectrum of email providers, but overall they seem to have great values. I didn’t go that route (yet?) because I read that they have had some outages in their services in the past. This doesn’t mean your email gets lost, since the global mail system is pretty tolerant of that, but not being able to log into your mail can be pretty annoying. Also, their bandwidth-based pricing and daily mail limits made them unsuitable for me. I work a lot with email, sending and receiving plenty of it, and the tier that matched my needs was a little too expensive at that point.
Dishonorable Mention: Self-hosting
No.
Conclusion
FastMail at first seemed like a good fit, but due to Australia’s legislation, it just doesn’t work for me. ProtonMail overall seems like a pretty exciting provider, at least on paper. The vendor lock-in aspect of their Bridge is rather odd, although I understand why they have done it. Still, this seemed minor to me, so I’ll continue to use their service, at least for a while.
Posted on 14th of February 2021 | 939 words

A few years ago, I had a habit of semi-regularly writing about various topics that interested me. Unfortunately, time passed, I began to write less and less, and recently I’ve gotten out of the habit altogether. This is a shame in many ways, since I’ve always found writing immensely therapeutic.
At the time of writing, the world is also in a very odd place. Most countries are quarantined due to COVID-19, and people are staying in their homes. Yours truly included! So, to pass the time, I’m trying to reawaken this habit.
Habitual writing has been on my mind for a long time, especially since it used to be so present in my life. I’ve also somehow lost a few other healthy habits lately, which has made me think about how to bring them back into my daily life. The lost healthy practices that come to mind are definitely working out and meditation. Although you could argue that the lost workout habit is mainly related to the current difficult times, I’m not too worried about it: I believe that when the world calms down in terms of this pandemic, I can relearn that habit quite quickly. But losing the regular meditation practice is really a shame, in my opinion. Like working out, meditation has played a big part in my life for years.
Even though my meditation practice has been irregular lately, the earlier “hard work” has helped me in my everyday life. Recently, I’ve started thinking about how I could relearn this habit. I’ve learned that, at least in my case, the best way to build a habit is to do something often but not in excessive amounts. In meditation, this was easy: start with 5 or 10 minutes (which is nothing; everyone can find time for it) and just do it. The current times support relearning this, since people are primarily working remotely, so it is easy to start your day with the practice. With these simple steps, I feel I’ve been able to reawaken a practice that was once very present in my life.
This got me thinking about utilizing a similar approach for the other habits I’ve forgotten. The ones that came to mind were music and writing, although some could argue that these are more or less the same thing. For some reason, I’ve struggled to pick up my instruments and write new music during the pandemic, and many others report the same feeling in their own areas of interest. I don’t know the cause; maybe constantly staring at the same four walls for over a year is the culprit. Who knows? A similar thing has happened with my writing.
What really got me wanting to reawaken these habits was stumbling upon Richard P. Gabriel’s poetry. Gabriel is a legendary Lisp programmer, and as a Lisp programmer myself, I’m always interested in what other like-minded people are up to. Gabriel started a project of writing one poem a day on March 18, 2000, to end a lengthy poetry-writing slump. Gabriel admits that he is not necessarily a great poet, even though many could argue otherwise, but I think that is non-essential. While forming this habit, you don’t need to be the next Robert Frost. Since writing poetry (or anything else) is a technical skill, constant practice is bound to help you on your journey. I stumbled upon a similar approach while reading Pat Pattison’s Writing Better Lyrics, where he talks about “daily object writing” as a way of getting better at writing. Pattison also notes that forming the habit is the big thing here, and the improved writing will eventually follow.
This approach is more or less how I learned the healthy habit of regular meditation. So how could I apply something similar to my composing and writing? Knowing myself, I cannot do this kind of creative work sporadically (or wait for the creative slump to end), or I’ll never do it. At the same time, writing one piece of music and one post every day would be slightly excessive (mainly time-wise). So I need to find a healthy balance in the practice and not become over-encumbered.
In my case, I believe some time-boxed, very focused practice works best. So what I intend to do is focus for a period (half an hour, an hour or so) on a given task, whether it is composing, writing, or programming (another healthy habit I practice, which thankfully hasn’t been lost, though I always feel I could do more of it). I’ll set a healthy goal for this time box, so I don’t expect to write a groundbreaking sonata, an earth-shattering blog post, or the next big open-source project. Instead, I want to do something in these fields regularly to hone my skills in the given area. Since I’m trying to work on multiple habits, I also understand that I might not always have time for everything. That’s okay. I can most likely squeeze in a smaller session to get at least some practice. And if I simply cannot do anything, that’s fine too. I just don’t want to see myself doing something excessively one day and then slacking off the next because “I did so much yesterday” (a lesson learned from Pattison).
Productivity has always been close to my heart, even though I occasionally fall significantly short in that area. But maybe with small steps, everyone can benefit from a slight boost in their productivity. Or just procrastinate… as long as you’re happy.