But that was not its first strike. A few years earlier, after a long process of enshittification, Google announced it would be closing Notebook — its all-purpose note-taking, stuff-saving, research-helper application. This was the final straw for me, and I switched completely to Evernote (of which I had been a member since 2009).
Evernote was great. A little buggy at first, it had its problems, but for the most part it was a feature-rich application. I used it mostly for:
I was very satisfied with it, and it was one of the earliest subscriptions I proudly paid for, for many years. Until things turned for the worse in recent years. I canceled my subscription last year, because I found myself using Evernote less and less. It was still a great value, but none of the premium features were useful to me. I still used the clipper, shared some notebooks with my wife, and had tons of recipes and other notes I would browse on a weekly basis. But none of the premium features were appealing to me any more.
I also stopped using Skitch, since it did not run natively on Apple Silicon Macs. I switched to CleanShot X with a subscription from Setapp. It’s actually much more feature-rich, which is rather expected — Skitch aimed to be simple.
So I decided to cut the cost. I had to reduce the number of devices I used the app on to two: the web client and my iPhone. That was an inconvenience (for example, I often used it on my iPad), but nothing I couldn’t handle. Things were good for a while, until last month, when Evernote put the last nail in its coffin: the free plan was reduced to one notebook and 50 notes.
I have over 2500 notes across dozens of notebooks.
And I don’t even have the Mac version to export the notes. How do you even?!
After 15 years it’s time to move on entirely. Notion seems like the default choice: it’s where the cool kids hang out these days. Interestingly, I’ve known about Notion almost since its inception, but never got around to using it, and I am not really that drawn to it. Joplin seems like an interesting alternative — it does not lock you in, and you are the owner of your own data at all times. There is an option to use Dropbox as storage, which is a tempting solution, as I am a premium member already.
Dropbox itself has a competing product called Paper. I used it a few years ago for a project. It was nice, but nowhere near as powerful as Evernote. My default app for quick notes, Bear, also has a clipper, but I just don’t see how it could act as a knowledge repository. The interface is just too focused on simple notes. Or should I just make up with Google and switch to Keep?
So to celebrate its 10th year, I took a stroll down memory lane to talk about some of the most prominent attributes of this site.
While this is my longest-running software project, my blogging history itself is much longer, as I started in the early 2000s. And while that was some time ago (almost a quarter of a century, actually), times were surprisingly similar. Blogs, or rather personal websites, as the term wasn’t really that popular yet, were either self-hosted or published using one of the existing blogging platforms. At the time, the leading software provider was Movable Type (which, TIL, is still alive today), and it was yet a few years before WordPress was created.
And as the blogosphere flourished, so did my writing. Over the years I created, hosted, and wrote literally dozens of blogs, mixing both personal and professional topics. But I always had this site, my personal site. It evolved multiple times, as I was learning the craft of web development:

* Starting out under a free domain, hosted on my personal computer at home, its contents consisted mostly of movie reviews and some content about me
* Then renamed to lebkowski.info, with 3 or 4 major redesigns along the way, it was mostly a blog about me, my travels & adventures, web development, and interacting with the blogosphere in general
* And some time between late 2013 and early 2014 it was replaced by a static one-pager, lebkowski.name, which was a precursor to this site
I can’t remember the reasoning behind that decision, but I decided to scrap all existing content on my site and replace it with a simple static page. There was no blog or articles. As such, it was fitting to publish it as a static HTML site. I remember being inspired by another site when creating the layout, and using some kind of fancy editor that could automatically compile assets, synchronize multiple devices with hot reloading (it used an early version of Browsersync or a similar solution), and automatically deploy the built website using FTP and the like.
At the time, we were using Less as a CSS preprocessor at Docplanner, so that was my technology of choice for personal projects as well.
When it comes to deployment, I can’t remember how this static HTML was delivered to production (it might have been a manual process), but 2014 was the early days of Docker. I was fascinated by this new way of thinking, and the possibilities it opened for PaaS solutions. Soon I adopted Dokku to build and host all my projects, thrilled by the simplicity of the build process it introduced (similar to Heroku).
Not long after, I decided to bring back articles, but I wanted the site to remain static, so I moved the engine to Sculpin (a PHP static site generator) — I wrote content in Markdown, Dokku built and released, DigitalOcean hosted. This was in mid-2014, and this same skeleton, in the same git repository, lives on and powers the site to this day. But there sure have been changes since then!
RSS, contrary to popular belief, is not dead. It was the founding block of every blog I built, and this site was no exception. Moreover, cool URIs don’t change. This is why, if someone subscribed to my site’s feed around 2005, it would have worked continuously to this day, nearly 20 years later.
Do I miss out on analyzing visitor traffic by letting my content be consumed on different platforms this way? Technically, yes. But also: I removed visitor tracking altogether when Google tried to force a migration to Analytics v4, and I’ve been living happily without knowing the numbers ever since.
At some point I hit an obstacle: how to embed rich content like YouTube videos in my content, which is written in Markdown. While Markdown technically allows mixing in HTML, I did not want that and opted for a simpler option: just link to the content.
You know, back in 2008, on the wave of Web 2.0 hype, a person named Leah Culver proposed a standard protocol for sharing and embedding rich web content: oEmbed. This allowed me to just write a paragraph with a URL, and with some Embedly magic it was automatically turned into a rich embed. Based on open standards, and supporting any data provider (and with Embedly’s help, even some that do not support it natively).
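The mechanics are simple enough to sketch. Here is a minimal, hypothetical oEmbed consumer in TypeScript — YouTube’s endpoint is real, the `embed` helper and its error handling are mine:

```ts
// Ask the provider to describe a URL; get back ready-to-use markup.
async function embed(url: string): Promise<string> {
  const endpoint = new URL('https://www.youtube.com/oembed');
  endpoint.searchParams.set('url', url);
  endpoint.searchParams.set('format', 'json');

  const response = await fetch(endpoint);
  if (!response.ok) throw new Error(`No oEmbed data for ${url}`);

  // The JSON response also carries the title, author, dimensions, etc.
  const { html } = (await response.json()) as { html: string };
  return html;
}
```

Embedly’s part was essentially offering one such endpoint for hundreds of providers at once.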
At one point I integrated with Algolia to provide a search feature. I was using it heavily for commercial purposes, and it seemed fitting for this site as well. I pushed the index during build time, and used the JS SDK to provide the UI on the site. Unfortunately, there was little adoption from the users, so ultimately I dropped it — and haven’t thought of it since.
I mentioned opting for Less in the beginning. Unfortunately, that decision did not age well, as it was ultimately Sass which won the preprocessor wars. I was late to the party and only got myself to switch in 2020. Along with some redesign, I introduced two major improvements:
So the frontend stack was modernized about 4 years ago and it holds up remarkably well to this day (I even have a component library built, in case I want to go through a redesign once more). Part of the reason is that there is almost no JavaScript used, and the little there is was written in VanillaJS, so no webpack/babel is necessary.
Speaking of JavaScript: as a heavy reddit user at that time (hello RES), I relied a lot on keyboard navigation. And I thought it would be an obscure but otherwise useful feature for my site as well: did you know that you can jump between content sections by pressing either the `K` or `J` keys (not mobile friendly, I’m afraid)? You can try it now.
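The gist of it fits in a handful of VanillaJS-flavored lines. This is a sketch, not the site’s actual code — the selector and scroll behavior are assumptions:

```ts
// Jump between content sections with J (next) and K (previous).
const sections = Array.from(document.querySelectorAll<HTMLElement>('article section'));
let current = -1;

document.addEventListener('keydown', (event) => {
  if (event.key !== 'j' && event.key !== 'k') return;
  const step = event.key === 'j' ? 1 : -1;
  current = Math.min(Math.max(current + step, 0), sections.length - 1);
  sections[current]?.scrollIntoView({ behavior: 'smooth' });
});
```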
Quite soon after its initial release, I jumped on the AMP bandwagon. I thought it was an interesting standard. Fortunately, it didn’t take me long to see it for what it really was — an attack on the open web — and I removed it a few months later. It took Google about 5 years before they ultimately backed down too, and stopped pushing this agenda.
I never needed AMP. It wasn’t magic. It just cut the fat from multi-megabyte websites. Mine’s lean and fast without any help.
I don’t use an infrastructure-as-code approach here, so I can’t track exactly when it happened, but at some point I decided to switch fully to HTTPS. I think I must’ve had some paid certificates earlier, but by early 2015 I had certainly switched to Let’s Encrypt, and automated the whole ordeal. This was before Caddy or Traefik automated the whole thing, so I remember scripting it all together to work with my Dokku’s nginx.
At that time I already used SSL for local development, so switching production was a no-brainer. In time I was able to upgrade my ancient version of Dokku, so I could use the letsencrypt plugin that works out of the box. I was also able to switch from the HTTP to the DNS challenge, which has proven much more reliable in my case.
Some of the most recent additions are the indie web improvements. I think I always followed the spirit of that movement, although not necessarily in any formal way.
For example, in the mid-2000s, a protocol named OpenID was introduced and widely adopted. This allowed me to turn my site into my identity provider. Before „login with facebook” or „login with google” links, I could „sign in with your URL” — and I took advantage of this. Unfortunately, the adoption withered and died, so I no longer use it. But I have it in the back of my head, and whenever a similar solution surfaces, I will be ready to switch.
Other examples of indie web elements are the use of the Semantic Web in the form of JSON-LD (a successor to the once-popular RDF), microformats, and even such unnoticeable details as using the `<time>` element to mark up dates. This makes the site’s content richer for any kind of automated tools, and allows seamless integration in other places. I originally did this so that links shared on Slack or social media had a more pleasant form.
Like webmentions. I’m on the fence with the whole liking / commenting / pinging thing. I don’t engage with the community as much these days, so the features are mostly dormant, but they are there.
And finally, after upgrading Dokku last year, and replacing my legacy DigitalOcean droplet with a brand new $5 one, I decided to switch the build process completely. The buildpack approach was interesting, but caused a lot of maintenance headaches — the buildpacks became outdated or went missing, and it felt like I didn’t have the process under control. I didn’t have the confidence that I would be able to recreate it easily using a more modern and open toolset.
So the first step was to switch to Dockerfile builds. They still relied on Dokku, but used Dockerfiles — a standard I knew and could trust, and one not proprietary to the Dokku ecosystem. And from there it was just one step to extract the build process out of Dokku entirely.
Over the weekend I moved it to GitHub Actions. It still uses the same Dockerfiles, but now it just pushes an image to the container registry and triggers Dokku to rebuild. As a side effect, I can now automatically deploy any branch to a staging environment, which is automatically provisioned (with SSL from LE) and decommissioned after I delete the branch.
Most elements of this process are replaceable:
I feel that while the stack is understandably more complex than a couple of years ago, it is also more robust and resilient. Let’s hope for another ten years together.
That final push had a strong reason behind it. I wanted to return to more short-form blogging. Currently, I write most of my content in a dedicated Markdown editor, and then commit it to the site’s git repository (and push to release). This requires me to be on a laptop.
I wanted to be able to write more freely. Use my note-taking app or Prose on any device I choose. But since my site is still static and has no content management system, I needed a way of publishing notes from those places. I opted to save them to Dropbox, which in turn uses a webhook to trigger the GitHub Actions build workflow — and there, a simple automation fetches notes from storage before Sculpin builds the site.
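The fetch step is just a small script in the workflow. A sketch of the idea — the Dropbox endpoints are real, but the folder, output paths, and env var name are illustrative:

```ts
import { writeFile } from 'node:fs/promises';

const TOKEN = process.env.DROPBOX_TOKEN; // assumed secret, provided by the workflow

// List the published notes in the Dropbox folder the note-taking apps save to.
async function listNotes(): Promise<Array<{ name: string; path_lower: string }>> {
  const response = await fetch('https://api.dropboxapi.com/2/files/list_folder', {
    method: 'POST',
    headers: { Authorization: `Bearer ${TOKEN}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ path: '/notes' }),
  });
  const { entries } = await response.json();
  return entries;
}

// Download each note into the content directory Sculpin reads from.
async function fetchNotes(): Promise<void> {
  for (const entry of await listNotes()) {
    const file = await fetch('https://content.dropboxapi.com/2/files/download', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${TOKEN}`,
        'Dropbox-API-Arg': JSON.stringify({ path: entry.path_lower }),
      },
    });
    await writeFile(`source/notes/${entry.name}`, Buffer.from(await file.arrayBuffer()));
  }
}

fetchNotes();
```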
And this separation allows me to do just that — it is now working and live. What remains is the hope that I find the motivation to write more often 🤞
Anyway, I stumbled upon Advent of Code: what would soon become a recurring event of 25 coding challenges in December. I did a bunch of those and either lost interest, or needed to get back to my actual work. But now, a couple of years later, again not having anything to code on a daily basis, I went back and continued from where I left off.
I remember doing the first couple of challenges in Python as a learning experience. I think that is the most Python I’ve done in my life. Now, since I’m mostly transitioning to TypeScript (the last thing I wrote in PHP was 6 months ago), I thought that would be a good choice for me.
Unfortunately, it didn’t pan out. I didn’t feel comfortable with wiring all the things around the challenges — mostly building the module loader — and all the existing boilerplates / launchers / template generators didn’t actually fit my needs. It just wasn’t fun coding with them, and the constraints felt unnecessary. So after a couple of tries, I switched back to PHP.
The chair was good, thank you. Fortunately, I am not concerned about recruiting for this project, and PHP is a lot of fun for me, so here we go. After stitching together a basic loader, I went on to implement the first challenges.
And immediately I miss TypeScript. It’s mostly the short lambdas (aka fat arrows) that make the difference. Since I was using a lot of collection transformations, they pop up a lot, and typing PHP’s `static fn (…) => …` really makes a difference in comparison to pure `(…) => …`. Prettier is a close second. It just works, and it allows me to write code in an absolutely sloppy manner and it will fix everything, while PHP CS Fixer, on the other hand, won’t even bother to insert a semicolon for me.
Other than that, I’m moving forward at a fast pace. Whenever I experience any inconveniences, I improve the loader/boilerplate/launcher parts, and that in turn improves the DX. I extract common parts to a shared lib to reuse between solutions. And I am exploring a lot.
The loader uses the simplest form of a DI container. It scans the source directory for implementations of certain interfaces, and exposes those to certain factories. This way, all I have to do is drop an entrypoint-like class anywhere, and use a marker interface on it to indicate that it should be used as a challenge solution. Similarly with input parsers — since inputs are always provided in text form, the very first step of every solve is to parse it into a nice DTO.
A lot of the solutions rely on finding the answer by brute force. This means thousands or millions of operations. And just so I know what is going on, I created the `Progress` indicator class, which iteratively displays partial results in the console while the script is running. It also allows me to estimate the time/iterations required to find the solution, so I get a nice progress bar.
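The real thing is a PHP class, but the idea translates to a few lines of TypeScript — a sketch, with the reporting interval and output format chosen arbitrarily:

```ts
// Console progress indicator for long brute-force loops:
// prints the iteration count, an ETA, and the best partial result so far.
class Progress {
  private done = 0;
  private readonly startedAt = Date.now();

  constructor(private readonly total: number, private readonly every = 100_000) {}

  tick(partialResult?: unknown): void {
    if (++this.done % this.every !== 0) return;
    const elapsed = (Date.now() - this.startedAt) / 1000;
    const eta = ((this.total - this.done) * elapsed) / this.done;
    process.stdout.write(
      `\r${this.done}/${this.total} ~${eta.toFixed(0)}s left ${partialResult ?? ''}`,
    );
  }
}

// const progress = new Progress(10_000_000);
// for (const candidate of candidates) { progress.tick(best); /* … */ }
```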
There is a lot of combinatorics in the challenges, for example:
I learned about all of this in high school, but that was decades ago and I can’t say I remember a lot, so I’m having quite a hard time naming the concepts well enough to build a dedicated library for them.
I also use a lot of high-level concepts — OOP, a collection library, value objects, etc. — which makes the code readable on the one hand, but painfully slow at times. Replacing a `filter()` or a `map()` method with a `foreach()` makes a difference here.
While most answers can be brute-forced, and the hard part is to optimize the algorithm or find shortcuts, there are challenges which can be solved methodically.
One such example was the molecule folding challenge, where a solution could be brute-forced very easily. Proving that it was the best one took me hundreds of millions of iterations, and the process did not finish even then.
Switching to a smarter approach was a lot of fun. And while I either googled or discovered a bunch of breakthroughs, they did not yield an elegant solution for me. I remember spending literally days on that one, and I learned a bunch about chemistry, parsers, and other stuff.
This one simulates RPG-style combat. I recall a great article about representing this kind of rules in the type system (and failing), so I immediately recognized that this time, the rules on who can use what weapon, and how the combat proceeds in different scenarios, are business rules, and as such need to be represented as first-class citizens in the code.
It is quite a different approach from what I was used to, where entities holding state are also responsible for the validity of this state and its mutations. Here, those responsibilities are separated, and there is a separate layer on top that ensures the business rules are followed. In my example you can see how enforcing the inventory rules of a warrior was moved from the player class’s factory method to a separate builder.
But for the second part, where another class of characters is implemented, I also wanted to try a different approach: to use an evolutionary algorithm to solve the challenge. After implementing the magical combat rules, and making sure they work, instead of brute-forcing the solution or trying to be smart about it… I created a legion of random wizards, let them fight a clone of the final boss, and mutated the ones that did best in each iteration. And I repeated the process until my processor got hot.
I thought it would be much easier, and much more spectacular. I was aiming to visualize the process in which different species take over the population because they get better results. I ended up just showing the best 5-10 of each iteration. And the difficult part was: how best to classify who got the better result.
My first thought was that whoever won the combat was better than any loser, then whoever dealt the most damage, and then whoever used the least resources. This yielded results quickly, but interestingly, not the best results. The algorithm quickly arrived at local maxima and had a hard time mutating out of them. Apparently, some strategies that are good for early-stage combat aren’t as efficient in the later stages, and once my highly trained wizards had evolved for a couple of generations, it was basically impossible for them to backtrack and change to strategies that would yield the best result in the long run.
Which brings me to my second point: I struggled a lot with the mutation strategies. Even slightly changing the ways species mutated resulted in very different outcomes. In the end, a couple of small tweaks were responsible for big improvements:
This somehow allowed me to arrive at my final solution. Curiously enough, the most efficient species didn’t survive over generations, contrary to what I expected. Instead, I had to keep track of the best solution in each generation, instead of relying on the most recent one.
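Stripped of the combat rules, the core loop looked roughly like this — a TypeScript sketch of the idea (the original is PHP, and the selection/mutation ratios here are made up), including the lesson about tracking the best individual across all generations rather than trusting the last one:

```ts
// Generic evolutionary loop: seed a population, score it, and keep breeding
// mutated copies of the top performers. The global best is remembered
// separately, because the strongest species sometimes dies out along the way.
function evolve<T>(
  seed: () => T,
  fitness: (individual: T) => number,
  mutate: (individual: T) => T,
  generations = 1000,
  populationSize = 200,
): T {
  let population = Array.from({ length: populationSize }, seed);
  let best = { individual: population[0], score: -Infinity };

  for (let gen = 0; gen < generations; gen++) {
    const scored = population
      .map((individual) => ({ individual, score: fitness(individual) }))
      .sort((a, b) => b.score - a.score);

    if (scored[0].score > best.score) best = scored[0];

    // Next generation: mutated copies of the top 10%.
    const elite = scored.slice(0, populationSize / 10);
    population = Array.from({ length: populationSize }, (_, i) =>
      mutate(elite[i % elite.length].individual),
    );
  }

  return best.individual;
}
```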
I use a lot of OOP here. While inefficient at times, it makes the code so much more readable. I see people implementing their solutions in a procedural style, in one file, top to bottom. I, on the other hand, separate responsibilities, test individual components automatically, and build architectures that allow me to expand the code easily.
For example, in the Warrior/Wizard simulator: the second challenge relied on the previous one, and it was quite easy for me to reuse the code for both solutions. And there are two parts to each challenge — usually a tweak to the code to account for new requirements is easy and elegant.
In addition, I riddle my code with assertions. This allows me to make sure that not only are the types correct, but also that the values make sense in a semantic way. E.g. if I have a method `Character::gainHealth(value)`, I make sure `value` is a positive integer. No sense in damaging a player by healing them with negative health. Or using magical, healing fireballs with negative damage, for that matter.
Another thing I used is combining assertions with exceptions. I wouldn’t use that on a larger codebase this way, but there is a certain elegance to just enumerating border conditions in code, without any control statements. Building custom assertion classes would probably achieve similar results in a more mature codebase.
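A sketch of the combination (in TypeScript here, names invented for illustration): the assertion both documents the border condition and throws the domain exception, keeping `if`/`else` branching out of the business method itself.

```ts
// A bare-bones assertion that doubles as the domain exception.
function assertPositiveInt(value: number, what: string): void {
  if (!Number.isInteger(value) || value <= 0) {
    throw new RangeError(`${what} must be a positive integer, got ${value}`);
  }
}

class Character {
  private health = 100;

  gainHealth(value: number): void {
    // Healing with zero or negative health is a bug, not a valid scenario.
    assertPositiveInt(value, 'health gain');
    this.health += value;
  }
}
```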
Named parameters and factory methods also improve code readability a lot. Use them whenever you can.
In the end, Advent of Code, despite being largely about algorithms and structures, is a lot of fun (and warning: it consumes a lot of time). Would recommend!
Many software engineers, especially the experienced ones, will tell you that there is no such thing as perfect code. They have given up hope and accepted that they will never find the holy grail. I shifted perspective and turned an infinite game into a finite one. This allowed me to stop focusing on code, and move on to the more important things in software engineering.
Before I share my recipe, I’d like to clarify what perfect means in this context. You might take the philosophical definition by Antoine de Saint-Exupéry:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away
But there is also a more practical approach: perfect code is code which you cannot modify to make any better. In other words, by modifying perfect code, you can only make it worse.
That being said, there are only two requirements your code needs to satisfy to become perfect:
Those are the only two goals. It’s not about test coverage, architecture, readability, static analysis metrics, or the framework used. I mean, your team will probably take those into account during review.
By adding anything more than is required by your peers to accept it, you are possibly overengineering the solution. If you’ve implemented more business scenarios or edge cases, then the „you ain’t going to need it” principle applies. And you are delaying the release to production and the delivery of value to your users or customers, hence your solution becomes less than perfect.
This explicitly does not mean that the code can remain perfect over time. When revisiting it at a later date, a lot of things will have changed: your understanding of the domain, technical methods and techniques; your team might grow and have different requirements, or maybe simply new business cases arise. You will need continuous refactoring to keep it perfect.
But at the time of merging to `main`, this is exactly what is needed, no more, no less. A perfect piece of code.
I have a dream… I have a dream of a product-led company. Where there are small, interdisciplinary teams of around a dozen people. Everyone has a different role, but we all have the same goal (whatever is a priority for the product at any given time).
We are not tied down by corporate politics. There are a lot of various stakeholders, but in the end, we do what is best for the product. Nobody can pull out a joker card to impose their priorities, just because they have a job title, seniority or they played poker with the CEO last Friday. Upper management stays out. They share the vision and their aspirations, we do the rest. This allows us to do meaningful stuff. Features we care for. We have a sense of purpose.
We are not enslaved by any tool or process. We use them as means to get the outputs we desire, and the moment they get in the way, we change, adapt, flex.
What is value? How can we know that what we are doing brings value to anyone? Because we are close to our users, and preferably their users too. We listen to them, we look at how they use our software. And we do more: we do research, so we can know better than what they tell us. We can discover problems they aren’t even aware they have.
We work in flex time every day. We are remote and asynchronous, because we don’t need to be pinned down to one timezone, and in one office to be effective. But we do value face time, so we all meet every day. Not because it’s the best way to get on the same page, but because it’s the best way to build strong bonds. And our relationships are the base of everything: we care for each other, we want to collaborate together, and we understand each other better than only by the words we speak: we can read between the lines.
Each day starts with a symbolic daily. Scratch that. With breakfast. We bring a coffee, a bagel, or an omelette du fromage. We talk about our previous days, our aspirations, and how our skiing trip went. There is no agenda. Nobody needs to answer three questions. They can, if they feel the team would benefit. Otherwise, anyone brings up any work-related topics if they need to. And then we either discuss or agree to form a working group to address the issue. We say „have a nice day” and go our separate ways.
We have a „place” we hang out. It’s either a gather.town kind of thing, a discord server, or we just run a google meet in the background. We share our problems with whoever is available. If we have a blocker, everyone is notified, because we leave a Jira comment, or we announce it on slack, or any other tool. We don’t need to wait for the daily. Maybe it’ll be resolved before tomorrow.
We spontaneously share our work by means of screen sharing. People can come over to watch, or join in a pair programming session. We review other people’s work at various stages, and this also leads to collaboration: we not only point out problems or mistakes, we also propose solutions, or brainstorm them together. We look at other people’s calendars and join their activities too: design sessions, user interviews. We are interested in other areas, to understand them better.
We don’t fight over refactoring versus pushing new MVP features. We all share the same context, and all understand what the priority is at the moment, and what risks we can take. There will be times we rush to deliver something, and times we can lean back and improve the architecture.
During a two-week period we have the time to meet for a pizza after hours. Remotely, everyone brings their own. We also reflect on what’s going well, and what’s not. And Fridays are special days. We set aside whatever we’re doing and do some housekeeping. Groom the backlog a bit. Experiment with that library we always wanted to try. Refactor parts of legacy code. Improve test coverage. Again, no agenda. Just ideas. And no supervision. Here is where serendipity happens.
We choose the best tool for the job. Enterprise, MVP, low-code, no-code, whatever fits. If it’s outside our competences, we buy it on the open market. Everything outside of our team is the open market, even within the company. If we need the help of a platform team, for example: they are a separate entity, with a separate workflow, relationships, priorities. They aren’t aligned with us and we can’t afford to expect it. This is why we need to „pay” (the currency is most probably time) for their services. Quid pro quo, Clarice.
We regularly catch up to share progress, showcase anything we have done, update on the priorities, and assign ownership. We volunteer for projects not because the Scrum Gods told us so, but because we want to do it collaboratively, together with people around us, and we have an obligation to them, not to the process.
After we reach any milestone, or release of an important feature, we celebrate together, and acknowledge each person’s contributions. We successfully increased the value, maybe it’s time to call it a day a little earlier today?
Remote work is about how you do the work. You cannot get the same effects if you’re missing some of the crucial tools in your process. We work great together as teams because we collaborate, we brainstorm, we spontaneously discuss ideas. And a lot of us are used to doing those things face to face. Taking away those opportunities by allowing people to work from different locations will hinder your progress.
This is why it is much more important to adapt to new tools, methods, and work styles in general when deciding to go remote. You still need to brainstorm, spontaneously discuss ideas, build relations and work together, but you need to realize that you don’t have a shared office to facilitate those things for you.
Long story short: allowing people to work from different places without realizing that you need to account for the lost opportunities is a dead end. Remote is not about the location, it’s about your work culture.
I love greenfield projects, but I hate the bootstrapping phase. Despite working almost exclusively on new projects since 2015, I rarely actually need to start from scratch. Up to this point, it usually meant copying bits and pieces from previous projects.
This was not an option this time, since, erm, the stack changed. So I started reading and talking to the team a lot to taxi my way up to some basic proficiency in the NestJS framework.
In the first days, each step forward raised dozens of questions, obstacles and unknowns. Imagine you’re hungry and need some eggs. But instead of just grabbing a wallet and going to the corner store, you realize that you have no legs, but that’s not really a problem, because money wasn’t invented yet, so you need to grow some chickens instead. And then it turns out they don’t have type definitions, so you can’t use them.
Let’s start with some fundamentals: what I am trying to achieve here. The main goal is to have a semi-decent NestJS application. Both I and others on the team practice domain-driven design, so I’ll use those principles in this codebase too. Testing is an important factor as well: it helps me develop with higher confidence, and actually improves my developer experience, since the app is headless — it has no real interface I can poke around in.
There are also other concepts that seemed important to me from the get-go, and those include:
So let’s dive right in to see the steps I took along the way to complete the first phase of building my application: the passing of a simple end-to-end scenario.
I think this is one of the core parts of any framework. In fact, I’ve seen micro frameworks which were nothing but a DI container. In Nest, the DI is ingrained deep in the system, so that is not unexpected. But it is also tied closely to the module system, which raises an eyebrow for me. I’ll touch on that later.
Due to the way JavaScript and/or TypeScript work, there are a lot of shortcomings all JS DI containers share: you can’t use an interface as an identifier, and thus you need to explicitly tie class dependencies to DI tokens. In fact, autowiring mostly doesn’t work, and you have to resort to juggling manually registered classes and their dependencies, and slapping `@Injectable()` and `@Inject()` all over the place. A skill I’m yet to master.
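For illustration, the dance looks something like this — a sketch with made-up names, not code from my app:

```ts
import { Inject, Injectable, Module } from '@nestjs/common';

// The interface is erased at compile time, so a token must stand in for it.
export const CLOCK = Symbol('Clock');

export interface Clock {
  now(): Date;
}

@Injectable()
export class SystemClock implements Clock {
  now(): Date {
    return new Date();
  }
}

@Injectable()
export class OfferService {
  // Typing the parameter as Clock is not enough — the token is repeated here.
  constructor(@Inject(CLOCK) private readonly clock: Clock) {}
}

// ...and the module ties the token to an implementation manually.
@Module({
  providers: [OfferService, { provide: CLOCK, useClass: SystemClock }],
})
export class OfferModule {}
```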
The autowiring part was actually a hard blow for me. After being skeptical about it at first, when I was still developing with Symfony, I reached a point where I mostly only used DIC configuration to wire value objects containing configuration — the rest was either autowired, or used dedicated factories.
Unfortunately, until a popular TypeScript library starts to pre-compile the container configuration at build time (if that is even technically possible), there is no getting around it.
I like to treat the app like a black box during end-to-end testing. To achieve that, I need to recognize all inputs and outputs. Among them are for example HTTP adapters, or any adapter that reaches the outside world for that matter. It’s the benefit of the hexagonal architecture that I know exactly where to look for them. Entity persistence adapters in particular do not match that definition, since the database is part of the app during tests.
So I know there is a group of services that I want to replace with test doubles during e2e, because I want to put my black box in a controlled environment. I want to switch them in the same place they are defined in the main container, so the definitions are close together (it’s a design choice). „The Nest way” is rather to instantiate individual modules, and mock certain services in each test case explicitly, or use some other form of jiggery-pokery. I haven’t decided if I want to cut my box into pieces (by pulling out individual modules), so for the time being I’ll stick with what I know.
To achieve my goal, I created a function that will register either the regular adapter or the test double. The way it works is that for each InjectionToken it registers both versions of the service on the side, and then uses a factory method to return the correct one depending on the runtime config.
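In spirit, it’s something like this hypothetical helper — `provideSwitchable`, the environment check, and the mailer names are all mine:

```ts
import { Provider, Type } from '@nestjs/common';

// Register the real adapter and its test double side by side,
// and let a factory pick one based on the runtime environment.
export function provideSwitchable(
  token: string | symbol,
  real: Type<unknown>,
  double: Type<unknown>,
): Provider[] {
  return [
    real,
    double,
    {
      provide: token,
      useFactory: (realImpl: unknown, testDouble: unknown) =>
        process.env.NODE_ENV === 'test' ? testDouble : realImpl,
      inject: [real, double],
    },
  ];
}

// In the main module, next to the regular definitions:
// providers: [...provideSwitchable('MAILER', SmtpMailer, FakeMailer)]
```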
Let’s get back to the modules. It’s the framework’s opinionated method of splitting the app into smaller parts and managing dependencies between them. They are strongly encouraged. And while I believe it’s convenient to have your dependency container configuration assembled from pieces, I think the job of separating concerns can be done in a better way.
This is why I said fuck it and opted for dependency cruiser instead. Shout out to Lech, who reminded me about deptrac (a PHP alternative) and triggered me to start using it back in the day. You see, having a bunch of hierarchical parts of the DIC that are private by default (I’m talking about Nest modules) does not actually prevent you from doing anything, it just makes doing it inconvenient. There is nothing stopping you from importing any other module all over the place, exporting everything, and making a lot of mess in the process.
Dep cruiser on the other hand sets out strict rules about what can depend on what. And it’s not on the DIC level, but on the file level, so it applies to any kind of imports. You can set your layered architecture, you can raise module boundaries, and take them down by exposing internal APIs.
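A taste of what such a rule set looks like — the layer paths are mine, the config format is dependency-cruiser’s:

```js
// .dependency-cruiser.js — a minimal sketch
module.exports = {
  forbidden: [
    {
      name: 'domain-stays-pure',
      comment: 'the domain layer may not import adapters or framework code',
      severity: 'error',
      from: { path: '^src/domain' },
      to: { path: '^src/(infrastructure|adapters)' },
    },
    {
      name: 'no-test-doubles-in-production',
      severity: 'error',
      from: { pathNot: '__tests__' },
      to: { path: '__tests__' },
    },
  ],
};
```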
I’m not actively going against the framework yet, but I’m kinda skimming along the surface.
I think I can have high unit coverage because of the way I write code that is easy to test, and how fluent I am with unit tests. They just come naturally to me. So the first thing I did was to move the testing sources to `__tests__` subdirectories, instead of just suffixing the names with `.(test|spec).ts`. That is because I create a lot of various test doubles, and they wouldn’t have a place to live otherwise. On top of that, dep cruiser forbids any application code from depending on anything isolated to a `__tests__` directory, so there is the added benefit of not using any of them in production.
I have nothing against `jest` mocking capabilities, but having explicit test doubles improves readability and increases reuse, not to mention that they are first-class citizens. You can inject them into the container (as mentioned earlier), they are affected by automatic refactorings done by the IDE, etc. I’ll surely also resort to `jest` mocks to cut some corners.
Another thing that I immediately started using are Mothers. Quick googling hasn’t yielded any interesting results to dive deeper into the topic, so I will leave that as an exercise for the reader. I’ll just quickly summarize: a Mother is a static factory containing convenience methods for creating entities. I used to write code with dozens of places where `new Entity` was used, mostly in test sources, and any change to the entity’s constructor was a pain. With the help of Mothers, you move those calls to one place. It also abstracts away the creation process (it’s a factory, after all), making the test sources more readable and more descriptive (e.g. `OfferMother::deactivatedLastMonth()`).
I think it was Patryk who introduced me to this pattern, and I must admit, I wasn’t a fan at first, but the concept grew on me as I was writing more tests.
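A quick sketch of the pattern — the `Offer` entity and its fields are invented for illustration:

```ts
enum OfferStatus {
  Active = 'active',
  Deactivated = 'deactivated',
}

class Offer {
  constructor(
    readonly id: string,
    readonly title: string,
    readonly status: OfferStatus,
    readonly statusChangedAt: Date,
  ) {}
}

// The Mother is the single place in test sources that calls the constructor,
// so a constructor change touches one file instead of dozens.
class OfferMother {
  static default(): Offer {
    return new Offer('offer-1', 'Sample offer', OfferStatus.Active, new Date());
  }

  // Named variants double as documentation of the scenario under test.
  static deactivatedLastMonth(): Offer {
    const lastMonth = new Date();
    lastMonth.setMonth(lastMonth.getMonth() - 1);
    return new Offer('offer-2', 'Old offer', OfferStatus.Deactivated, lastMonth);
  }
}
```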
I wanted to adopt the Gherkin syntax for jest unit tests, because it’s so descriptive and powerful. It has done wonders for us at Phone. I even started installing the jest-cucumber plugin, but it felt quite poor. What else should I expect from an npm package? And then I realized that I should just use cucumber-js directly.
The setup was straightforward: my IDE supports the feature files natively, offers completion, suggests implementing missing step definitions, and allows me to run individual scenarios. It also enables me to debug them, although I needed to work around the step timeouts, which were eager to end test cases prematurely.
One thing I miss is that the step definitions are not a part of the Nest application, so I can’t use the DIC to provide their dependencies. Instead, the steps have the service locator injected and fetch whatever they need explicitly. I can live with that.
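Roughly like so — a sketch where `AppModule`, `OfferRepository`, `OfferMother`, and the import paths are placeholders for the real application pieces:

```ts
import { Before, Given, setDefaultTimeout } from '@cucumber/cucumber';
import { INestApplication } from '@nestjs/common';
import { Test } from '@nestjs/testing';
import { AppModule } from '../src/app.module'; // hypothetical path
import { OfferRepository } from '../src/offers/offer.repository'; // hypothetical
import { OfferMother } from '../src/offers/__tests__/OfferMother'; // hypothetical

let app: INestApplication;

// The default step timeout was ending paused (debugged) runs prematurely.
setDefaultTimeout(60_000);

Before(async () => {
  const moduleRef = await Test.createTestingModule({ imports: [AppModule] }).compile();
  app = await moduleRef.createNestApplication().init();
});

Given('an offer deactivated last month', async () => {
  // Steps live outside the Nest application, so instead of constructor
  // injection they fetch dependencies through the app (a service locator).
  const offers = app.get(OfferRepository);
  await offers.save(OfferMother.deactivatedLastMonth());
});
```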
The first thing I was told: Prisma is shit. Don’t use Prisma. Run away. Thanks for the tip! I’ll use TypeORM instead. That’s a name I’ve heard before, and it’s officially supported by the framework. Nothing can go wrong.
Oh, but you can’t use your domain entities, you have to map to persistence DTOs — was the second thing I was told. Damn, no, please no. I might as well use ActiveRecord instead. That was a real bummer.
Fortunately, there is a somewhat hidden, poorly documented option called `entitySkipConstructor` that basically allows me to skip the mapping step. Maybe some DDD evangelists will fume about it, but that’s the boilerplate I would very much like to avoid. And it is something I am used to (PHP’s Doctrine was doing just fine in a similar role), and some familiarity at this point brings me much comfort.
After finding another poorly documented feature I learned that I can decouple my entity classes from their mappings. In other words, instead of using decorators, I can define the same metadata in a separate place. Great, that keeps my domain a little cleaner. I didn’t read the fine print which stated that I am restricted in the way I can name my schemas, but that wasn’t anything a couple of hours of debugging wasn’t able to fix.
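In practice the combination looks roughly like this — entity and column names are illustrative (and the schema `name` is where the naming restriction can bite):

```ts
import { DataSource, EntitySchema } from 'typeorm';

// The domain class stays decorator-free...
export class Offer {
  constructor(
    public readonly id: string,
    public title: string,
  ) {}
}

// ...and the persistence mapping lives in a separate schema definition.
export const OfferSchema = new EntitySchema<Offer>({
  name: 'Offer',
  target: Offer,
  columns: {
    id: { type: 'uuid', primary: true },
    title: { type: 'varchar' },
  },
});

export const dataSource = new DataSource({
  type: 'postgres',
  url: process.env.DATABASE_URL,
  entities: [OfferSchema],
  // Hydrate entities without calling the constructor — no mapping layer needed.
  entitySkipConstructor: true,
});
```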
The framework is kind enough to provide me with decorators to reduce the boilerplate of injecting repositories, but at the same time forces me to add boilerplate to configure which schemas are allowed. The module thingy gets in the way again. Is there a way to export everything by default? I’ll set `global` to `true`, just in case.
On the upside, the framework can automatically synchronize the database schema in a test environment. That’s something that had caused a lot of problems for me in the past, so I’m glad it’s available out of the box here.
Don’t get me started. I don’t need to know whether something is an interface or an implementation when I depend on it. Either one is a contract, and it can change its nature freely without affecting the consumers. This is why I don’t prefix with `I` and I don’t suffix with `Interface`.
The `Service` suffix feels even worse. It reeks of the times when logic was contained in controllers, and sometimes extracted to those special things called „services” (if you had a DIC) or „helpers” (if you didn’t, or used functions). A class does not concern itself with whether it is or isn’t a service, and it should be left out of its name.
In addition, Nest’s conventions add a lot of `.service`, `.controller`, `.port`, `.adapter` and other weird stuff to otherwise fine filenames. I have no idea why I would want to do that. I have always followed a simple rule: the file is named identically to the thing that lives inside it (which also implies one declaration per file) — a habit actually forced by PHP’s autoloading standards. And my IDE understands that when I’m renaming stuff. I like that rule.
I think I’m openly going against the framework conventions here, but it’s a hill I’m willing to die on, especially since I was such a vocal proponent of suffixing in the past. I’m reformed. I only keep the suffixes for unit tests, since they don’t contain any single named thing inside, so I use the `SUT.spec.ts` format for `jest` to have an easier time finding them.
I’ve heard this name thrown around a lot on reddit, but I never had the chance to use it. I wasn’t expecting much. In fact, I was rather looking for an assertion library, for two reasons, one more important than the other:

* `jest` was only a dev dependency
* I had used the `expect` function in cucumber step definitions, thinking it was in `jest` context and that it didn’t need to be required

But I stumbled on Zod instead, and OMG, it is so game-changing. Is there a thing it cannot do?
And all the time it infers the output type, so TypeScript is aware of what comes out of my `JSON.parse(): any` mess after I pass it through `schema.parse()`. The need to have any kind of input DTOs, their decorators for validation, transformers to meticulously fill out each field, the validators themselves, and mappers to match the input format to something more familiar — they are all gone.
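A small sketch of what that buys — the schema itself is a made-up example:

```ts
import { z } from 'zod';

// One schema replaces the input DTO, its validation decorators and the mapper.
const OfferInput = z.object({
  id: z.string().uuid(),
  title: z.string().min(1),
  price: z.coerce.number().positive(),
  tags: z.array(z.string()).default([]),
});

// The output type is inferred — no hand-written interface to keep in sync.
type OfferInput = z.infer<typeof OfferInput>;

const raw: unknown = JSON.parse(
  '{"id":"123e4567-e89b-12d3-a456-426614174000","title":"Hi","price":"9.99"}',
);
const offer: OfferInput = OfferInput.parse(raw); // throws on invalid input
```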
Would recommend, 10/10, even without rice.
It was a tiresome journey for me. One where I didn’t know where it would lead me, or how long it would last. Finally, I think I have quite a good grasp of it all, and I will be feeling more comfortable going forward.
Imagine my joy seeing the test scenario turn green!
After more than 20 years of being primarily a PHP developer, I am finally changing the tech stack. I guess I won’t have to eat a chair after all. Actually, I am switching more than just languages: a new company, a new team, a new framework, new people around, a new business area, a new role… But you can read all about that in the previous chapters. Today, let’s focus on my first steps in adapting to — spoiler alert — the NestJS ecosystem.
We’re trying to quickly bootstrap a relatively small/easy application to aid with our business goals. It’s familiar territory: we need to move fast, validate the idea, and iterate in response to feedback. So we can skip a lengthy planning phase and get right to the job.
My first thought was to build the MVP using something I’m familiar with, so PHP. The obvious upside was bootstrapping quickly, having a lot of experience with the tool and the ecosystem around it. That was under the assumption that it was temporary, and would be rewritten after a couple of weeks. We all know how long temporary solutions last, so that was crossed out fairly quickly, especially since the language was not a part of the company’s stack.
Then the idea of Python was brought up, but the justification was shortsighted. The app was in the general data area, so Python seemed like a fit. But on a deeper dive, it turned out that the problem is not complex enough to warrant a specialized tool. That, and the team in general being unfavorable to introducing that language to the tech stack, resulted in abandoning the idea.
The last two candidates were on the table, both already used by the team: .NET or TypeScript. We had a discussion before about the state of existing applications, language capabilities, maturity, and enterprise features. This is why .NET was so appealing. A lot of plumbing would just work without me having to deal with it for too long (not unlike in Symfony): the dependency injection container, command buses, async messaging, advanced security and access control topics, reduced boilerplate, and much more. I knew I could expect most of the common problems to be already solved, so I could focus on what is important: the business logic.
Previously, at Docplanner, it was also used by other teams, but I never considered it, turning my attention to more flashy alternatives like Kotlin.
You might have the impression that the same is true for JavaScript, but it’s not quite the same. It’s easy to find an npm package for anything — the community is quite potent in that regard. Unfortunately, you soon realize that it’s only surface level.
There is a lot of diversity in the ecosystem, but I don’t necessarily mean it in a good way. You need to rely on tons of plugins or packages, but they are not always homogeneous, and often have problems working together. Some are feature rich, others solve simple use cases. There are those that are opinionated and force their way of thinking on you, and others are more flexible and forgiving.
While this is less often seen nowadays, there are packages written in JavaScript without type definitions. That is mostly a deal breaker for a TypeScript codebase.
And in the end, since there are so many to choose from, when you finally find a lib, you need to assess the risks of using it. What is the code quality? Will I be able to extend or otherwise modify it? Is there enough flexibility, or are we forcing ourselves into a corner? Is it still actively maintained?
Side story: a few years ago I wanted an OAuth2 interceptor for axios. I had a similar middleware for PHP’s Guzzle on the backend. I was expecting that, given some basic configuration of secrets, endpoints and scopes, it would manage the token lifecycle, its persistence, and attach the credentials to requests automatically. The most popular lib I found wasn’t doing any of that. As far as I can recall, it only injected a bearer token into headers, and even that was buggy. The logic that probably every other OAuth client uses, we had to implement ourselves. Long story short, it was not a pleasant experience.
Forget the whole previous section. In the end, I decided to take the path of least resistance: NestJS, since I already knew TypeScript, and the framework was mature enough and had a bunch of enterprise features already.
In the next chapter I describe my first steps with the framework, some decisions I made along the way, great tools that really made a difference, as well as some shortcomings that I’m yet to overcome.
I told the story of my decision to leave the Docplanner Phone team in the previous chapter. In this piece, I aim to describe my experiences with the job hunt. Strap in.
Excluding some shorter gigs, I have always either used my network and recommendations, or some of the popular job boards, to find a job. Curiously, I never had any luck with recruiters, neither those reaching out from in-house teams nor those from agencies. I have met some great headhunters along the way, we threw a few punches, but it never yielded any results for me (although we came close a couple of times).
The first time I got a job — I wasn’t even looking. An acquaintance knew someone who was hiring, so I went to meet them out of curiosity. I stayed for 5 years. I found the next one via one of the popular job boards back in the day: Goldenline. And the position was at Goldenline itself. Very early in the process, they decided that I would be a better fit for something new they were slowly brewing: ZnanyLekarz (aka Docplanner). And this is how I joined the team that brought this doctors’ marketplace to its commercial success.
My next jump was in 2015, and it seems bizarre from a time perspective: I applied through a job offer on Pracuj.pl. Goldenline was already starting its decline back then, and the new wave of IT job boards like No Fluff Jobs or JustJoin weren’t born yet. I remember Dominika summing up the „HR” part of our interview:
Ok, that went well, let’s hope you don’t tank the technical part
I supposedly nailed it (warning: cheesy) and joined the team to improve how we approached technology (in Polish), among other things, and stayed until things turned for the worse over 4 years later (article also in Polish).
The next move was when I boomerang’d back to Docplanner by means of a recommendation, rendering all my other ongoing processes moot. This was also the time I realized that job hunting can be — paradoxically — challenging for people with experience. And this was confirmed by a number of friends in a similar spot. Despite having performed various roles in the past, with a lot of success to show for it, a lot of potential employers still fail to read between the lines, and expect you to jump through hoops to prove your worth, sometimes evaluating skills that won’t even be used on the job.
I think I wrote about that topic on another occasion. Either way, this unpleasant experience was fresh in my memory when I reached out to find the successor of Docplanner Phone for me.
From a bird’s eye view, the market for IT job boards looks like this (keep in mind that this is just my impression, not necessarily an accurate representation):
So I turn to the domestic market:
And that leaves me with the other board: JustJoin.it. Striking a balance between a long-form description and characterizing the role with keywords and numbers, it always had a certain appeal to me, both as an employee and as a hiring manager.
So I went there, browsed a little, got fed up with the state of the tech market, just to finally apply for a single role on a Sunday afternoon. And then I waited. At that point, I didn’t yet know when I would leave my current company. I expected a long process of looking for a new place. I’m old and fussy.
I actually got a bit worried, because the response hadn’t come in for a couple of days. You have to understand — I’m not usually getting ghosted, it’s rather the opposite: I once received an offer without actually applying, we just had a coffee together. So what happened this time? Maybe it’s true that people don’t read, and my CV was 3 pages long…
As it turns out, I didn’t actually fit the profile they were looking for, but they did read between the lines and realized that I could be of use in different areas. Just a lucky coincidence on my side, and an admirable approach on theirs — they could’ve easily rejected me with an automated email. We were off to a good start, as they totally subverted my expectations: instead of me jumping through hoops, I had shown myself in an unfavorable light, and yet they wanted to figure something out regardless.
So, long story short, I’m on the phone with the recruiter, talking about my past experiences and plans for the future, so that they can get to know me better and we can figure out if I’m a fit for another role they were planning to open. It turns out there is a match, and the position is closer to product than to engineering. At the same time, I am starting to understand why so many engineers are turning their careers toward product roles. Unexpectedly, I was going to make the same move myself.
Things picked up pace from there. As far as I recall, I had a three-step process: one interview with HR, one with my technical hiring manager, and one to cap it off with the CEO (yes, it’s a small company).
I trusted my gut. The agility they showed, the problems they were mentioning, the questions they asked: they all fed my curiosity and desire to join the team. The technical discussion was actually cut short to almost half the time. We quickly realized that we operate on similar registers, and there was no real need to dive deeper.
I think it wasn’t even two weeks between the first phone call and receiving the offer I later accepted.
This was actually the first real onboarding I had in my life. Back in 2015, focusing on this process wasn’t yet a hot topic. Someone showed me around the office, pointed me to my desk, and I mostly had to figure everything out by myself, with some help from people sitting around me.
At Docplanner, on the other hand, I voluntarily opted out, since I wasn’t that interested in the rest of the company. I was set to create something new that wouldn’t intertwine with existing products. I felt the time pressure, so I skipped 15 of the 18 onboarding meetings that were planned for me.
The first thing that caught my attention was the fact that the process was to take place at the office, despite the offer being fully remote. And at the company’s headquarters in another city too, except most of the engineering team, including my manager, weren’t even there.
It turns out it’s not really deliberate. The HQ has always been the hub for a lot of these kinds of events, and this was no different. Probably in time, a more sensible plan will be arranged based on each individual role. I used the time I had to meet as many people from different departments as I could, have some face time, and drink some coffee. But I cut my stay short to get back home, and to continue the onboarding process remotely.
As it turns out, that on-site part was more valuable than I previously thought. Since most of the team works remotely and rarely even visits the office near my home (near is an understatement, btw, it’s still almost an hour drive), this was one of the few real opportunities to actually meet someone and to start building some deeper relations. I truly enjoyed those three days I spent there, and it even left a little stain on my admiration for remote-first teams.
After the first day, organized by the HR dept to a tee, I was thrown into the deep water. I can’t say I didn’t like or expect that — the whole process up to this point was chaotic (and I mean it in a good way). At the same time, I hear that other people that joined along with me had their first weeks more well-organized.
But I was already off to the races: armed with a few tools and names, I set out to learn about the business domain of my next endeavor. What followed next was me switching the tech stack, and you can read all about it in chapter 3.
I am in the middle of that most uncomfortable period of change. They say that changing jobs is right there at the top of the most stressful events in life. Well, at least for some people. And as it turns out, I’m among them. But don’t let me get ahead of myself.
I have a long career behind me already, and there are a few constants that can describe it up to this point:
It all actually comes down to the fact that I usually leave without a backup plan. I quit, set my status to open, and in the meantime usually dock somewhere for a couple of weeks until I find the next place for myself. I expected it to play out similarly this time, but that was not the case.
I started Docplanner Phone shortly before the pandemic hit, which changed the job market a lot. And while many lucrative opportunities arose, especially from global, remote-first companies, the conditions at Docplanner simultaneously kept improving.
The compensation rose steadily, with a noticeable bump along the way, when by the company’s decision we started to aim to be competitive on the broader European market — and what followed was a salary adjustment for the whole team.
The work-life balance was fabulous. We had a lot of autonomy in that regard and we used it a lot. We relied on trust, rather than putting the number of hours in. And the team reacted with engagement and responsibility.
The tech had its problems, some of which were being slowly mitigated (with great success I should add). And the funny thing was: it was the first time in my life that I worked on a team that had a larger product debt than a technical one. The quality was really top shelf. We did large refactors, consisting of thousands of lines of code, resulting in actually reducing bugs, rather than introducing new ones. And it was a pleasure to do them.
Maybe that was part of the problem? As a startup, we’d put too many resources into technology, and too little into marketing and sales? 🤔 Dunno.
We’ve managed to build a competent, open-minded, and diverse team. It was a pleasure to work with everyone, and the relationships we cherished along the way were priceless. That was actually the primary motivator for me personally, as well as the thing I will miss the most. Products rise and fall, friendships are here to last.
There were two primary things that were a pain in my back.
It was actually a great deal for the product. We got the infrastructure, funding and support of a successful, established company, but at the same time we were able to move fast, take risks, break things, and most importantly: leave dozens of years of baggage behind. That worked fine with a bunch of bumps along the road, but the net outcome was certainly positive for us.
There were some altercations with other departments, famously: legal, human resources and the site reliability team.
There were a bunch of reasons for this:
Disclaimer: I see that mostly as a systemic failure or a conflict of interests. There were tons of brilliant people with good intentions involved, and I can’t blame any of them for those outcomes… Well, barring some exceptions, I did have some beefs along the way.
So I reflected: what could I have achieved without all those obstacles, and instead with a company vision that is aligned with mine? One without so much politics, or scale crippling all of its efforts to introduce meaningful change. 🤔
For the majority of the time, we operated with a looming shadow over our heads. We had no certainty about what the next quarter would bring (read: we were a startup). One thing was certain: while we achieved some success, and reached a lot of our goals, it was never a spectacular victory. And that means we couldn’t grow as fast as some of us wanted.
One aspect of that was that the size of the team stagnated, and in the last period it shrank. The prospect of me getting back into roles I really enjoyed, like managing leaders, or recruitment, moved further away. A smaller team also meant fewer capabilities: it was unreasonable for us to undertake complex projects (both from a business perspective and from a technical one), because they would hog too many resources, hindering our day-to-day operations. There was just no room to do a whole category of fun things.
The market situation didn’t give much hope of reversing this trend. What was expected was more cost cutting, fewer experiments, a hiring freeze. We were heading for at least a couple of months in idle gear, and it would take us another few to get back up to speed.
More than 4 years prior to this, I had abandoned writing code as my primary role. Joining Phone was meant to be just a temporary step back, a necessary investment to reach new heights. That goal failed miserably for me, and staying on the team any longer felt like a total surrender on that front.
At the same time, the product outlived some of the other experiments the company launched over the years. The number of customers rose steadily, there were no huge gaps in functionality, and most of the new features were rather long shots — more effort, less expected benefit. The product matured, and my skillset was no longer paramount to its further development.
Sometime in early November, I made the decision. The product entering its next growth stage, the slow loss of autonomy, and the growing excitement to try out something new — those were the deciding factors for me. Oh, and the fucking JumpCloud requirement. I don’t think I’ll ever accept working on a machine with a backdoor installed.