Anyway, I stumbled upon Advent of Code: what would soon become a recurring event of 25 coding challenges each December. I did a bunch of those and either lost interest or needed to get back to my actual work. But now, a couple of years later, again not having anything to code on a daily basis, I went back and continued from where I left off.
I remember doing the first couple of challenges in Python as a learning experience. I think that is the most Python I’ve ever written. Now, since I’m mostly transitioning to TypeScript (the last thing I wrote in PHP was 6 months ago), I thought that would be a good choice for me.
Unfortunately, it didn’t pan out. I didn’t feel comfortable wiring up everything around the challenges (mostly building the module loader), and all the existing boilerplates / launchers / template generators didn’t actually fit my needs. It just wasn’t fun coding with them, and the constraints felt unnecessary. So after a couple of tries, I switched back to PHP.
The chair was good, thank you. Fortunately, I am not concerned about recruiting for this project, and PHP is a lot of fun for me, so here we go. After stitching together a basic loader, I went on to implement the first challenges.
And immediately I miss TypeScript. It’s mostly the short lambdas (aka fat arrows) that make the difference. Since I was using a lot of collection transformations, they pop up a lot, and typing PHP’s `static fn (…) => …` really makes a difference compared to the pure `(…) => …`. Prettier is a close second. It just works, and it allows me to write code in an absolutely sloppy manner and it will fix everything, while PHP CS Fixer, on the other hand, won’t even bother to insert a semicolon for me.
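To illustrate the difference, this is a toy collection transformation (not from the actual solutions) where the short lambdas pay off; the PHP equivalent needs a `static fn (…) => …` at every step:

```typescript
// Toy example: sum the sizes of groups larger than one element.
const groups: number[][] = [[1, 2], [3, 4, 5], [6]];

// Short lambdas keep transformation chains readable:
const total = groups
  .map((g) => g.length)        // [2, 3, 1]
  .filter((n) => n > 1)        // [2, 3]
  .reduce((a, b) => a + b, 0); // 5

console.log(total); // 5
```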
Other than that, I’m moving forward at a fast pace. Whenever I experience any inconvenience, I improve the loader/boilerplate/launcher parts, and that in turn improves the DX. I extract common parts into a shared lib to reuse between solutions. And I am exploring a lot.
The loader uses the simplest form of a DI container. It scans the source directory for implementations of certain interfaces and exposes those to certain factories. This way, all I have to do is drop an entrypoint-like class anywhere and put a marker interface on it to indicate that it should be used as a challenge solution. The same goes for input parsers: since the puzzle inputs always come in text form, the very first step of every solve is to parse them into a nice DTO.
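PHP can reflect over declared classes to find such marker interfaces at runtime; a rough TypeScript sketch of the same idea (names hypothetical, not the actual loader) replaces the directory scan with explicit registration:

```typescript
// Hypothetical sketch of the loader idea: solutions register themselves,
// and a factory looks them up by day. In the PHP version, this registry is
// filled by scanning the source directory for a marker interface instead.

interface Solution {
  day: number;
  solve(input: string): string;
}

const registry: Solution[] = [];

// Stand-in for "implements the marker interface": explicit registration.
function registerSolution(s: Solution): void {
  registry.push(s);
}

function solutionForDay(day: number): Solution {
  const found = registry.find((s) => s.day === day);
  if (!found) throw new Error(`No solution registered for day ${day}`);
  return found;
}

registerSolution({ day: 1, solve: (input) => String(input.length) });

console.log(solutionForDay(1).solve("abc")); // "3"
```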
A lot of the solutions rely on finding the answer by brute force. This means thousands or millions of operations. And just so I know what is going on, I created the `Progress` indicator class, which iteratively displays partial results in the console while the script is running. It also allows me to estimate the time/iterations required to find the solution, so I get a nice progress bar.
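A minimal sketch of such an indicator (names and output format are made up, not the actual class) only needs the iteration count and the elapsed time to derive a percentage and an ETA:

```typescript
// Minimal sketch of a Progress indicator: reports a percentage and an ETA
// every `step` iterations; returns null otherwise so the hot loop stays cheap.
class Progress {
  private readonly startedAt = Date.now();

  constructor(private readonly total: number, private readonly step = 1000) {}

  tick(current: number): string | null {
    if (current === 0 || current % this.step !== 0) return null;
    const elapsed = (Date.now() - this.startedAt) / 1000;
    const rate = current / Math.max(elapsed, 1e-9); // iterations per second
    const etaSeconds = (this.total - current) / rate;
    const percent = ((current / this.total) * 100).toFixed(1);
    return `${percent}% (${current}/${this.total}), ETA ${etaSeconds.toFixed(0)}s`;
  }
}

const progress = new Progress(1_000_000);
const line = progress.tick(250_000);
console.log(line); // e.g. "25.0% (250000/1000000), ETA 12s"
```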
There is a lot of combinatorics in the challenges. I learned about all of this in high school, but that was decades ago and I can’t say I remember much, so I’m having quite a hard time naming the concepts precisely enough to build a dedicated library for them.
I also use a lot of high-level concepts (OOP, a collection library, value objects, etc.), which makes the code readable on the one hand, but painfully slow at times. Replacing a `filter()` or a `map()` method with a plain `foreach` loop makes a difference here.
While most answers can be brute-forced, and the hard part is to optimize the algo or find shortcuts, there are challenges which can be solved methodically.
One such example was the molecule folding challenge, where a solution could be brute-forced very easily. Proving that it was the best one took me hundreds of millions of iterations, and the process did not finish even then.
Switching to a smarter approach was a lot of fun. And while I googled or discovered a bunch of breakthroughs along the way, none of them yielded an elegant solution for me. I remember spending literally days on that one, and I learned a bunch about chemistry, parsers, and other stuff.
This one simulates RPG-style combat. I recall a great article about representing this kind of rules in the type system (and failing), so I immediately recognized that this time, the rules about who can use which weapon, and how the combat proceeds in different scenarios, are business rules, and as such deserve to be represented as first-class concepts in the code.
It is quite a different approach from what I was used to, where entities holding state are also responsible for the validity of that state and its mutations. Here, those responsibilities are separated, and there is a separate layer on top that ensures the business rules are followed. In my example, you can see how enforcing the inventory rules of a warrior was moved from that player class’s factory method to a separate builder.
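A sketch of what that separation can look like (class names, items, and the exact rules are illustrative, not the actual solution code):

```typescript
// Hypothetical sketch: the entity no longer validates its own inventory;
// a builder layer on top enforces the business rules instead.
interface Item { name: string; cost: number }

class Warrior {
  // The entity just holds state…
  constructor(public readonly items: readonly Item[]) {}
}

class WarriorBuilder {
  private weapons: Item[] = [];
  private rings: Item[] = [];

  withWeapon(w: Item): this { this.weapons.push(w); return this; }
  withRing(r: Item): this { this.rings.push(r); return this; }

  // …and the builder enforces the inventory rules at construction time.
  build(): Warrior {
    if (this.weapons.length !== 1) throw new Error("exactly one weapon required");
    if (this.rings.length > 2) throw new Error("at most two rings allowed");
    return new Warrior([...this.weapons, ...this.rings]);
  }
}

const warrior = new WarriorBuilder()
  .withWeapon({ name: "Dagger", cost: 8 })
  .withRing({ name: "Damage +1", cost: 25 })
  .build();

console.log(warrior.items.length); // 2
```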
But for the second part, where another class of characters is implemented, I also wanted to try a different approach: to use an evolutionary algorithm to solve the challenge. After implementing the magical combat rules, and making sure they work, instead of brute-forcing the solution or trying to be smart about it… I created a legion of random wizards, let them fight a clone of the final boss, and mutated the ones that did best in each iteration. And I repeated the process until my processor got hot.
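The loop itself can be sketched in a few lines (shown here with a toy fitness function; the real one scored simulated combat against the boss, and the genome encoded a wizard’s loadout):

```typescript
// Minimal evolutionary loop: keep the fittest, refill with their mutants.
type Genome = number[];

function mutate(g: Genome): Genome {
  const copy = [...g];
  const i = Math.floor(Math.random() * copy.length);
  copy[i] += Math.random() < 0.5 ? -1 : 1; // small random tweak to one gene
  return copy;
}

// Toy fitness: the closer to the target genome, the better (max is 0).
const target: Genome = [5, 5, 5];
const fitness = (g: Genome): number =>
  -g.reduce((sum, v, i) => sum + Math.abs(v - target[i]), 0);

let population: Genome[] = Array.from({ length: 20 }, () => [0, 0, 0]);
let best: Genome = population[0];

for (let generation = 0; generation < 200; generation++) {
  population.sort((a, b) => fitness(b) - fitness(a)); // fittest first
  if (fitness(population[0]) > fitness(best)) best = population[0];
  // Keep the fittest quarter, refill the rest with their mutants.
  const survivors = population.slice(0, 5);
  population = [
    ...survivors,
    ...Array.from({ length: 15 }, (_, i) => mutate(survivors[i % 5])),
  ];
}

console.log(fitness(best)); // approaches 0 as best approaches [5, 5, 5]
```

Note the separate `best` tracking: as described below, the fittest genome of one generation is not guaranteed to survive into the next.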
I thought it would be much easier, and much more spectacular. I was aiming to visualize the process in which different species take over the population because they get better results. I ended up just showing the best 5–10 of each iteration. And the difficult part was: how best to rank who got the better result.
My first thought was that whoever won the combat was better than any loser, then whoever dealt the most damage, and then whoever used the least resources. This yielded results quickly but, interestingly, not the best results. The algorithm quickly arrived at local maxima and had a hard time mutating out of them. Apparently, some strategies that are good for early-stage combat aren’t as efficient in the later stages, and once my highly trained wizards had evolved for a couple of generations, it was basically impossible for them to backtrack and switch to strategies that would yield the best result in the long run.
Which brings me to my second point: I struggled a lot with the mutation strategies. Even slightly changing the way species mutated resulted in very different outcomes. In the end, a couple of small tweaks were responsible for big improvements.
This somehow allowed me to arrive at my final solution. Curiously enough, the most efficient species didn’t survive across generations, contrary to what I expected. Instead, I had to keep track of the best solution in each generation, instead of relying on the most recent one.
I use a lot of OOP here. While inefficient at times, it makes the code so much more readable. I see people implementing their solutions in a procedural style, in one file, top to bottom. I, on the other hand, separate responsibilities, test individual components automatically, and build architectures that allow me to expand the code easily.
For example, in the Warrior/Wizard simulator: the second challenge relied on the previous one, and it was quite easy for me to reuse the code for both solutions. And there are two parts to each challenge — usually a tweak to the code to account for new requirements is easy and elegant.
In addition, I riddle my code with assertions. This allows me to make sure that not only the types are correct, but also that the values make sense semantically. E.g. if I have a method `Character::gainHealth(value)`, I make sure `value` is a positive integer. There’s no sense in damaging a player by healing them with negative health. Or using magical, healing fireballs with negative damage, for that matter.
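A sketch of the idea (the `Character` class here is hypothetical):

```typescript
// Semantic assertion: the type system says "number", the assertion says
// "a number that makes sense for this operation".
function assertPositiveInt(value: number, what: string): void {
  if (!Number.isInteger(value) || value <= 0) {
    throw new Error(`${what} must be a positive integer, got ${value}`);
  }
}

class Character {
  constructor(private health: number) {}

  gainHealth(value: number): void {
    // No sense in "healing" a character by a zero or negative amount.
    assertPositiveInt(value, "health gained");
    this.health += value;
  }

  get currentHealth(): number { return this.health; }
}

const hero = new Character(10);
hero.gainHealth(5);
console.log(hero.currentHealth); // 15
```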
Another thing I used is combining assertions with exceptions. I wouldn’t use that on a larger codebase this way, but there is a certain elegance to just enumerating border conditions in code, without any control statements. Building custom assertion classes would probably achieve similar results in a more mature codebase.
Named parameters and factory methods also improve code readability a lot. Use them whenever you can.
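TypeScript has no named arguments like PHP 8, but an options object plus a static factory gets close (a hypothetical example):

```typescript
// An options object emulates named parameters; the factory name documents
// intent better than a bare constructor call.
interface FireballOptions { damage: number; manaCost: number }

class Fireball {
  private constructor(readonly damage: number, readonly manaCost: number) {}

  static create({ damage, manaCost }: FireballOptions): Fireball {
    return new Fireball(damage, manaCost);
  }
}

// The call site reads almost like named arguments:
const spell = Fireball.create({ damage: 4, manaCost: 53 });
console.log(spell.damage); // 4
```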
In the end, Advent of Code, despite being largely about algorithms and structures, is a lot of fun (and warning: it consumes a lot of time). Would recommend!
Many software engineers, especially the experienced ones, will tell you that there is no such thing as perfect code. They have given up hope and accepted that they will never find the holy grail. I shifted perspective and turned an infinite game into a finite one. This allowed me to stop focusing on code and move on to more important aspects of software engineering.
Before I share my recipe, I’d like to clarify what perfect means in this context. You might take the philosophical definition by Antoine de Saint-Exupéry:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away
But there is also a more practical approach: perfect code is code which you cannot modify to make any better. In other words, by modifying perfect code, you can only make it worse.
That being said, there are only two requirements your code needs to satisfy to become perfect:
Those are the only two goals. It’s not about test coverage, architecture, readability, static analysis metrics, or the framework used. I mean, your team will probably take those into account during review.
By adding anything more than is required by your peers to accept it, you are possibly overengineering the solution. If you’ve implemented more business scenarios or edge cases than needed, the YAGNI (you aren’t gonna need it) principle applies. And you are delaying the release to production and the delivery of value to your users or customers, hence your solution becomes less than perfect.
This explicitly does not mean that the code can remain perfect over time. When revisiting it at a later date, a lot of things will have changed: your understanding of the domain, technical methods and techniques; your team might grow and have different requirements, or maybe simply new business cases arise. You will need continuous refactoring to keep it perfect.
But at the time of merging to `main`, this is exactly what is needed: no more, no less. A perfect piece of code.
I have a dream… I have a dream of a product-led company. Where there are small, interdisciplinary teams of around a dozen people. Everyone has a different role, but we all have the same goal (whatever is a priority for the product at any given time).
We are not tied down by corporate politics. There are a lot of various stakeholders, but in the end, we do what is best for the product. Nobody can pull out a joker card to impose their priorities, just because they have a job title, seniority or they played poker with the CEO last Friday. Upper management stays out. They share the vision and their aspirations, we do the rest. This allows us to do meaningful stuff. Features we care for. We have a sense of purpose.
We are not enslaved by any tool or process. We use them as means to get the outputs we desire, and the moment they get in the way, we change, adapt, flex.
What is value? How can we know that what we are doing brings value to anyone? Because we are close with our users, and preferably with their users too. We listen to them, we look at how they use our software. And we do more: we do research, so we can know better than what they tell us. We can discover problems they aren’t even aware they have.
We work in flex time every day. We are remote and asynchronous, because we don’t need to be pinned down to one timezone, and in one office to be effective. But we do value face time, so we all meet every day. Not because it’s the best way to get on the same page, but because it’s the best way to build strong bonds. And our relationships are the base of everything: we care for each other, we want to collaborate together, and we understand each other better than only by the words we speak: we can read between the lines.
Each day starts with a symbolic daily. Scratch that. With breakfast. We bring a coffee, a bagel, or an omelette du fromage. We talk about our previous days, our aspirations, and how our skiing trip went. There is no agenda. Nobody needs to answer three questions. They can if they feel that the team would benefit. Otherwise, anyone brings up any work-related topics if they need to. And then we either discuss or agree to form a working group to address the issue. We say „have a nice day” and go our separate ways.
We have a „place” we hang out. It’s either a gather.town kind of thing, a discord server, or we just run a google meet in the background. We share our problems with whoever is available. If we have a blocker, everyone is notified, because we leave a Jira comment, or we announce it on slack, or any other tool. We don’t need to wait for the daily. Maybe it’ll be resolved before tomorrow.
We spontaneously share our work via screensharing. People can come over to watch, or join in a pair programming session. We review other people’s work at various stages, and this also leads to collaboration: we not only point out problems or mistakes, we also propose solutions, or brainstorm them together. We look at other people’s calendars and join their activities too: design sessions, user interviews. We are interested in other areas to understand them better.
We don’t fight over refactoring versus pushing for new mvp features. We all share the same context, and we all understand what the priority is at the moment and what risks we can take. There will be times we rush to deliver something, and times we can sit back and improve the architecture.
During a two-week period we have the time to meet for a pizza after hours. Remotely, everyone brings their own. We also reflect on what’s going well, and what’s not. And Fridays are special days. We set aside whatever we’re doing and do some housekeeping. Groom the backlog a bit. Experiment with that library we always wanted to try. Refactor parts of legacy code. Improve test coverage. Again, no agenda. Just ideas. And no supervision. Here is where serendipity happens.
We choose the best tool for the job. Enterprise, MVP, low-code, no-code, whatever fits. If it’s outside our competences, we buy it on the open market. Everything outside of our team is the open market, even within the company. If we need the help of a platform team, for example: they are a separate entity, with separate workflows, relationships, priorities. They aren’t aligned with us, and we can’t afford to expect them to be. This is why we need to „pay” (the currency is most probably time) for their services. Quid pro quo, Clarice.
We regularly catch up to share progress, showcase anything we have done, update on the priorities, and assign ownership. We volunteer for projects not because the Scrum Gods told us so, but because we want to do it collaboratively, together with people around us, and we have an obligation to them, not to the process.
After we reach any milestone, or release of an important feature, we celebrate together, and acknowledge each person’s contributions. We successfully increased the value, maybe it’s time to call it a day a little earlier today?
Remote work is about how you do the work. You cannot get the same effects if you’re missing some of the crucial tools in your process. We work great together as teams because we collaborate, we brainstorm, we spontaneously discuss ideas. And a lot of us are used to doing those things face to face. Taking away those opportunities by allowing people to work from different locations will hinder your progress.
This is why it is much more important to adapt to new tools, methods, and work styles in general when deciding to go remote. You still need to brainstorm, spontaneously discuss ideas, build relationships and work together, but you need to realize that you don’t have a shared office to facilitate those things for you.
Long story short: allowing people to work from different places without realizing that you need to account for the lost opportunities is a dead end. Remote is not about the location, it’s about your work culture.
I love greenfield projects, but I hate the bootstrapping phase. Despite working almost exclusively on new projects since 2015, I rarely actually need to start from scratch. Up to this point, it usually meant copying bits and pieces from previous projects.
This was not an option this time, since, erm, the stack changed. So I started reading and talking to the team a lot to taxi my way up to some basic proficiency in the NestJS framework.
In the first days, each step forward raised dozens of questions, obstacles and unknowns. Imagine you’re hungry and need some eggs. But instead of just grabbing a wallet and going to the corner store, you realize that you have no legs, but that’s not really a problem, because money wasn’t invented yet, so you need to grow some chickens instead. And then it turns out they don’t have type definitions, so you can’t use them.
Let’s start with some fundamentals: what I am trying to achieve here. The main goal is to have a semi-decent NestJS application. Both I and others on the team practice domain-driven design, so I’ll use those principles in this codebase too. Testing is an important factor as well: it helps me develop with higher confidence, and it actually improves my developer experience, since the app is headless and has no real interface I can poke around in.
There are also other concepts that seemed important to me from the get-go.
So let’s dive right in to see the steps I took along the way to complete the first phase of building my application: the passing of a simple end-to-end scenario.
I think this is one of the core parts of any framework. In fact, I’ve seen micro frameworks which were nothing but a DI container. In Nest, the DI is engraved deep into the system, so that is not unexpected. But it is also tied closely to the module system, which raises an eyebrow for me. I’ll touch on that later.
Due to the way JavaScript and/or TypeScript work, there are shortcomings all JS DI containers share: you can’t use an interface as an identifier, and thus you need to explicitly tie class dependencies to DI tokens. In fact, autowiring mostly doesn’t work, and you have to resort to juggling manual registration of classes and their dependencies and slapping `@Injectable()` and `@Inject()` all over the place. A skill I’m yet to master.
The autowiring part was actually a hard blow for me. After being skeptical about it at first, when I was still developing with Symfony, I reached a point where I mostly only used DIC configuration to wire value objects containing configuration — the rest was either autowired, or used dedicated factories.
Unfortunately, until a popular TypeScript library starts to pre-compile the container configuration in build-time (if that is even technically possible) there is no getting around it.
I like to treat the app like a black box during end-to-end testing. To achieve that, I need to recognize all inputs and outputs. Among them are for example HTTP adapters, or any adapter that reaches the outside world for that matter. It’s the benefit of the hexagonal architecture that I know exactly where to look for them. Entity persistence adapters in particular do not match that definition, since the database is part of the app during tests.
So I know there is a group of services that I want to replace with test doubles during e2e, because I want to put my black box in a controlled environment. I want to switch them in the same place they are defined in the main container, so the definitions stay close together (it’s a design choice). „The Nest way” is rather to instantiate individual modules and mock certain services in each test case explicitly, or use some other form of jiggery-pokery. I haven’t decided if I want to cut my box into pieces (by pulling out individual modules), so for the time being I’ll stick with what I know.
To achieve my goal, I created a function that registers either the regular adapter or the test double. For each `InjectionToken`, it registers both versions of the service on the side, and then uses a factory method to return the correct one depending on the runtime config.
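A minimal sketch of that idea in plain TypeScript (the `MailerPort` adapter and all names are hypothetical; in Nest this choice would live in a provider’s `useFactory`):

```typescript
// Hypothetical port with a real adapter and a test double, registered
// side by side; a factory picks one based on runtime config.
interface MailerPort { send(to: string): string }

const realMailer: MailerPort = { send: (to) => `SMTP mail to ${to}` };
const fakeMailer: MailerPort = { send: (to) => `recorded mail to ${to}` };

function provideMailer(env: { isE2E: boolean }): MailerPort {
  return env.isE2E ? fakeMailer : realMailer;
}

const mailer = provideMailer({ isE2E: true });
console.log(mailer.send("user@example.com")); // "recorded mail to user@example.com"
```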
Let’s get back to the modules. It’s the framework’s opinionated method of splitting the app into smaller parts and managing dependencies between them. They are strongly encouraged. And while I believe it’s convenient to have your dependency container configuration assembled from pieces, I think the job of separating concerns can be done in a better way.
This is why I said fuck it and opted for dependency cruiser instead. Shout out to Lech, who reminded me about deptrac (a PHP alternative) and triggered me to start using it back in the day. You see, having a bunch of hierarchical parts of the DIC that are private by default (I’m talking about Nest modules) does not actually prevent you from doing anything; it just makes doing it inconvenient. There is nothing stopping you from importing any other module all over the place, exporting everything, and making a lot of mess in the process.
Dep cruiser on the other hand sets out strict rules about what can depend on what. And it’s not on the DIC level, but on the file level, so it applies to any kind of imports. You can set your layered architecture, you can raise module boundaries, and take them down by exposing internal APIs.
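Such rules look roughly like this in a `.dependency-cruiser.js` file (rule names and paths are illustrative; the real config supports many more options):

```javascript
// .dependency-cruiser.js — illustrative rules, not a complete config.
module.exports = {
  forbidden: [
    {
      // Application code must not import anything from __tests__ directories.
      name: 'not-to-tests',
      severity: 'error',
      from: { pathNot: '__tests__' },
      to: { path: '__tests__' },
    },
    {
      // The domain layer stays free of infrastructure concerns.
      name: 'domain-independent',
      severity: 'error',
      from: { path: '^src/domain' },
      to: { path: '^src/(infrastructure|adapters)' },
    },
  ],
};
```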
I’m not actively going against the framework yet, but I’m kind of skimming along the surface.
I think I can maintain high unit coverage because of the way I write code that is easy to test, and how fluent I am with unit tests. They just come naturally to me. So the first thing I did was to move the testing sources into `__tests__` subdirectories, instead of just suffixing the names with `.(test|spec).ts`. That is because I create a lot of various test doubles, and they wouldn’t have a place to live otherwise. On top of that, dep cruiser forbids any application code from depending on anything isolated in the `__tests__` directories, so there is the added benefit of none of them being used in production.
I have nothing against `jest` mocking capabilities, but having explicit test doubles improves readability and increases reuse, not to mention that they are first-class citizens. You can inject them into the container (as mentioned earlier), they are affected by automatic refactorings done by the IDE, etc. I’ll surely still resort to `jest` mocks to cut some corners.
Another thing that I immediately started using is Mothers. Quick googling hasn’t yielded any interesting results to dive deeper into the topic, so I will leave this as an exercise for the reader. I’ll just quickly summarize:
A Mother is a static factory containing convenience methods for creating entities. I used to write code with dozens of places where `new Entity()` was called, mostly in test sources, and any change to the entity’s constructor was a pain. With the help of Mothers, you move those calls to one place. It also abstracts away the creation process (it’s a factory, after all), making the test sources more readable and more descriptive (e.g. `OfferMother::deactivatedLastMonth()`).
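A minimal sketch of the pattern (the `Offer` entity and its fields are hypothetical):

```typescript
// Hypothetical entity with a constructor that tests shouldn't call directly.
class Offer {
  constructor(
    readonly title: string,
    readonly active: boolean,
    readonly deactivatedAt: Date | null,
  ) {}
}

// The Mother: constructor calls live in one place, and the method names
// describe the fixture, so tests read like prose.
class OfferMother {
  static active(title = "Some offer"): Offer {
    return new Offer(title, true, null);
  }

  static deactivatedLastMonth(): Offer {
    const lastMonth = new Date();
    lastMonth.setMonth(lastMonth.getMonth() - 1);
    return new Offer("Old offer", false, lastMonth);
  }
}

const offer = OfferMother.deactivatedLastMonth();
console.log(offer.active); // false
```

If the `Offer` constructor gains a parameter later, only the Mother changes, not every test.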
I think it was Patryk who introduced me to this pattern, and I must admit, I wasn’t a fan at first, but the concept grew on me as I was writing more tests.
I wanted to adopt the gherkin syntax for jest unit tests because it’s so descriptive and powerful. It has done wonders for us at Phone. I even started installing the jest-cucumber plugin but it felt quite poor. What else should I expect from an npm package? And then I realized that I should just use cucumber-js directly.
The setup was straightforward: my IDE supports the feature files natively, offers completion, suggests implementing missing step definitions, and allows me to run individual scenarios. It also enables me to debug them, although I needed to work around the step timeouts, which were eager to end test cases prematurely.
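For readers who haven’t seen gherkin: a feature file reads like plain language (an illustrative scenario, not from the actual project):

```gherkin
# Illustrative feature file; the domain details are made up.
Feature: Offer deactivation
  Scenario: A deactivated offer is hidden from search
    Given an offer that was deactivated last month
    When a customer searches for offers
    Then the deactivated offer is not listed
```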
One thing I miss is that the step definitions are not a part of the Nest application, so I can’t use the DIC to provide their dependencies. Instead, the steps have the service locator injected and fetch whatever they need explicitly. I can live with that.
The first thing I was told: Prisma is shit. Don’t use Prisma. Run away. Thanks for the tip! I’ll use TypeORM instead. That’s a name I’ve heard before, and it’s officially supported by the framework. Nothing can go wrong.
Oh, but you can’t use your domain entities, you have to map to persistence DTOs, was the second thing I was told. Damn, no, please no. I might as well use ActiveRecord instead. That was a real bummer.
Fortunately, there is a somewhat hidden, poorly documented option called `entitySkipConstructor` that basically allows me to skip the mapping step. Maybe some DDD evangelists will fume about it, but that’s the boilerplate I would very much like to avoid. And it is something I am used to (PHP’s Doctrine was doing just fine in a similar role), and some familiarity at this point brings me much comfort.
After finding another poorly documented feature I learned that I can decouple my entity classes from their mappings. In other words, instead of using decorators, I can define the same metadata in a separate place. Great, that keeps my domain a little cleaner. I didn’t read the fine print which stated that I am restricted in the way I can name my schemas, but that wasn’t anything a couple of hours of debugging wasn’t able to fix.
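The decoupled-mapping idea looks roughly like this with TypeORM’s `EntitySchema` (a sketch; the `Offer` entity and its columns are made up):

```typescript
import { EntitySchema } from 'typeorm';

// The domain class stays free of persistence decorators…
class Offer {
  id!: number;
  title!: string;
}

// …and the mapping metadata lives in a separate schema definition.
export const OfferSchema = new EntitySchema<Offer>({
  name: 'Offer',
  target: Offer,
  columns: {
    id: { type: Number, primary: true, generated: true },
    title: { type: String },
  },
});
```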
The framework is kind enough to provide me with decorators to reduce the boilerplate of injecting repositories, but at the same time forces me to add boilerplate configuring which schemas are allowed. The module thingy gets in the way again. Is there a way to export everything by default? I’ll set `global` to `true`, just in case.
On the upside, the framework can automatically synchronize the database schema in a test environment. That’s something that had caused a lot of problems for me in the past, so I’m glad it’s available out of the box here.
Don’t get me started. I don’t need to know whether something is an interface or an implementation when I depend on it. Either one is a contract, and it can change its nature freely without affecting the consumers. This is why I don’t prefix with `I` and I don’t suffix with `Interface`.
The `Service` suffix feels even worse. It reeks of the times when logic was contained in controllers and sometimes extracted into those special things called „services” (if you had a DIC) or „helpers” (if you didn’t, or used functions). A class does not concern itself with whether it is or isn’t a service, and that should be left out of its name.
In addition, Nest’s conventions add a lot of `.service`, `.controller`, `.port`, `.adapter` and other weird stuff to otherwise fine filenames. I have no idea why I would want that. I’ve always followed a simple rule: the file is named identically to the thing that lives inside it (which also implies one declaration per file), a rule actually enforced by PHP’s autoloading standards. And my IDE understands it when I’m renaming stuff. I like that rule.
I think I’m openly going against the framework conventions here, but it’s a hill I’m willing to die on, especially since I was such a vocal proponent of suffixing in the past. I’m reformed. I only keep the suffixes for unit tests, since they don’t contain any single named thing inside, so I use the `SUT.spec.ts` format for `jest` to have an easier time finding them.
I’ve heard this name thrown around a lot on reddit, but I never had the chance to use it. I wasn’t expecting much. In fact, I was rather looking for an assertion library, for two reasons, one more important than the other: `jest` was only a dev dependency, and I kept using the `expect` function in cucumber step definitions, thinking it was in a `jest` context where importing it is not required. But I stumbled on Zod instead, and OMG, it is so game-changing. Is there a thing it cannot do?
And all the while it infers the output type, so TypeScript is aware of what comes out of my `JSON.parse(): any` mess after I pass it through `schema.parse()`. The need for any kind of input DTOs, their decorators for validation, transformers to meticulously fill out each field, the validators themselves, and mappers to match the input format to something more familiar: they are all gone.
Would recommend, 10/10, even without rice.
It was a tiresome journey for me. One where I didn’t know where it would lead or how long it would last. Finally, I think I have quite a good grasp on it, and I will feel more comfortable going forward.
Imagine my joy seeing the test scenario turn green!
After more than 20 years of being primarily a PHP developer, I am finally changing the tech stack. I guess I won’t have to eat a chair after all. Actually, I am switching more than just languages: a new company, a new team, a new framework, new people around, a new business area, a new role… But you can read all about that in the previous chapters. Today, let’s focus on my first steps in adapting to the (spoiler alert) NestJS ecosystem.
We’re trying to quickly bootstrap a relatively small/easy application to aid with our business goals. It’s familiar territory: we need to move fast, validate the idea, and iterate in response to feedback. So we can skip a lengthy planning phase and get right to the job.
My first thought was to build the MVP using something I’m familiar with, so PHP. The obvious upside was to bootstrap quickly, having a lot of experience with the tool and ecosystem around it. That was under the assumption that it is temporary, and would be rewritten after a couple of weeks. We all know how long temporary solutions last, so that was crossed out fairly quickly, especially since this language was not a part of the company’s stack.
Then the idea of python was brought up, but the justification was shortsighted. The app was in the general data area, so python seemed like a fit. But on a deeper dive, it turned out that the problem was not complex enough to warrant a specialized tool. That, and the team being generally unfavorable to introducing that language into the tech stack, resulted in abandoning the idea.
Two last candidates were on the table, both already used by the team: .NET or TypeScript. We had a discussion before about the state of existing applications, language capabilities, maturity, and enterprise features. This is why .NET was so appealing. A lot of plumbing would just work without me having to deal with it for too long (not unlike in Symfony): the dependency injection container, command buses, async messaging, advanced security and access control topics, reduced boilerplate, and much more. I know that I can have expectations that most of the common problems are already solved, so I could focus on what is important: the business logic.
Previously, at Docplanner, it was also used by other teams, but I never considered it, turning my attention to more flashy alternatives like Kotlin.
You might have an impression that the same is true for JavaScript, but it’s not quite the same. It’s easy to find an npm package for anything, the community is quite potent in that regard. Unfortunately you soon realize that it’s only surface level.
There is a lot of diversity in the ecosystem, but I don’t necessarily mean it in a good way. You need to rely on tons of plugins or packages, but they are not always homogeneous, and often have problems working together. Some are feature rich, others solve simple use cases. There are those that are opinionated and force their way of thinking on you, and others are more flexible and forgiving.
While this is less often seen nowadays, there are packages written in JavaScript without type definitions. That is mostly a deal breaker to use in a TypeScript codebase.
And in the end, since there are so many to choose from, when you finally find a lib you need to assess the risks of using it. What is the code quality? Will I be able to extend or otherwise modify it? Is there enough flexibility, or are we forcing ourselves into a corner? Is it still actively maintained?
Sidestory: A few years ago I wanted an OAuth2 interceptor for axios. I had a similar middleware for PHP’s Guzzle on the backend. I was expecting that given some basic configuration of secrets, endpoints and scopes, it would manage the token lifecycle, its persistence and attach the credentials to requests automatically. The most popular lib I found wasn’t doing any of that. As far as I can recall, it only injected a bearer token into headers, and even that was buggy. The logic that probably every other OAuth client uses, we had to implement ourselves. Long story short, that was not a pleasant experience.
Forget the whole previous section. In the end, I decided to take the path of least resistance: NestJS, since I already knew TypeScript, and the framework was mature enough and had a bunch of enterprise features already.
In the next chapter I describe my first steps with the framework, some decisions I made along the way, great tools that really made a difference, as well as some shortcomings that I’m yet to overcome.
I told my story about the decision to leave the Docplanner Phone team in the previous chapter. In this piece, I aim to describe my experiences with the job hunt. Strap in.
Excluding some shorter gigs, I always either used my network and recommendations, or some of the popular job boards to find a job. Curiously, I never had any luck with recruiters, neither reaching out to me from in-house teams nor from agencies. I have met some great headhunters along the way, we threw a few punches, but that never yielded any results for me (although we came close a couple of times).
The first time I got a job — I wasn’t even looking. An acquaintance knew someone who was hiring, so I went to meet them out of curiosity. I stayed for 5 years. I found the next one via one of the popular job boards back in the day: Goldenline. And the position was at Goldenline itself. Very early in the process, they decided that I would be a better fit for something new they were slowly brewing: ZnanyLekarz (aka Docplanner). And this is how I joined the team that brought this doctors’ marketplace to its commercial success.
My next jump was in 2015, and it seems bizarre from a time perspective: I applied through a job offer on Pracuj.pl. Goldenline was already starting its decline back then, and the new wave of IT job boards like No Fluff Jobs or JustJoin weren’t born yet. I remember Dominika summing up the „HR” part of our interview:
Ok, that went well, let’s hope you don’t tank the technical part
I supposedly nailed it (warning: cheesy) and joined the team to improve how we approached technology (in Polish) among other things, and stayed until things turned for the worse over 4 years later (article also in Polish).
The next place was when I boomerang’d back to Docplanner by means of a recommendation, rendering all my other ongoing processes moot. This was also the time I realized that job hunting can be — paradoxically — challenging for people with experience. And this was confirmed by a number of friends in a similar spot. Despite performing various roles in the past, with a lot of success to show for it, a lot of potential employers still fail to read between the lines, and expect you to jump through hoops to prove your worth, sometimes evaluating skills that won’t even be used on the job.
I think I wrote about that topic on another occasion. Either way, this unpleasant experience was fresh in my memory when I reached out to find the successor of Docplanner Phone for me.
From a bird’s eye view, the market for IT job boards looks like this (keep in mind that this is just my impression, not necessarily an honest representation):
So I turn to the domestic market:
And that leaves me with the other board: JustJoin.it. Striking a balance between a long-form description and characterizing the role with keywords and numbers, it always had a certain appeal to me, both as an employee and as a hiring manager.
So I went there, browsed a little, got fed up with the state of the tech market, just to finally apply for a single role on a Sunday afternoon. And then I waited. At that point, I didn’t yet know when I would leave my current company. I expected a long process of looking for a new place. I’m old and fussy.
I actually got a bit worried, because the response hadn’t come in for a couple of days. You have to understand — I don’t usually get ghosted, it’s rather the opposite: I once received an offer without actually applying, we just had a coffee together. So what happened this time? Maybe it’s true that people don’t read, and my CV was 3 pages long…
As it turns out, I didn’t actually fit the profile they were looking for, but they did read between the lines and realized that I could be of use in different areas. Just a lucky coincidence on my side, and an admirable approach on theirs — they could’ve easily rejected me with an automated email. We’re off to a good start, as they totally subverted my expectations: instead of me going through hoops, I have shown myself in an unfavorable light, and yet they want to figure something out regardless.
So long story short, I’m on the phone with the recruiter, talking about my past experiences and plans for the future, so that they can get to know me better and we can figure out if I’m a fit for another role they were planning to open. It turns out there is a match, and the position is closer to the product than to engineering. At the same time I am starting to understand why so many engineers are turning their careers into product roles. Unexpectedly, I was going to make the same move myself.
Things picked up pace from then on. As far as I recall, I had a three-step process: one with HR, one with my technical hiring manager, and just one to cap it off with the CEO (yes, it’s a small company).
I trusted my gut. The agility they showed, the problems they were mentioning, the questions they asked: they all fed my curiosity and desire to join the team. The technical discussion was actually cut short to almost half the time. We quickly realized that we operate on similar registers, and there was no real need to dive deeper.
I think it wasn’t even two weeks between the first phone call and receiving the offer I later accepted.
This was actually the first real onboarding I had in my life. Back in 2015 it wasn’t yet a hot topic to focus on this process. Someone showed me around the office, pointed me to my desk, and I mostly had to figure everything out by myself, with some help from people sitting around me.
In Docplanner, on the other hand, I voluntarily opted out, since I wasn’t that much interested in the rest of the company. I was set to create something new that wouldn’t intertwine with existing products. I felt the time pressure, so I forwent 15 of the 18 onboarding meetings that were planned for me.
The first thing that caught my attention was the fact that the process was to take place at the office, despite the offer being fully remote. And at the company’s headquarters in another city too, except most of the engineering team, including my manager, weren’t even there.
It turns out it’s not really deliberate. The HQ has always been the hub for a lot of these kinds of events, and this was no different. Probably in time, a more sensible plan will be arranged based on each individual role. I used the time I had to meet as many people from different departments as I could, have some face time, and drink some coffee. But I cut my stay short to get back home, and to continue the onboarding process remotely.
As it turns out, that on-site part was more valuable than I previously thought. Since most of the team works remotely and rarely even visits the office near my home (near is an understatement, btw, it’s still almost an hour’s drive), this was one of the few real opportunities to actually meet someone and start building some deeper relations. I truly enjoyed those three days I spent there, and it even put a little dent in my admiration for remote-first teams.
After the first day, organized by the HR dept to a tee, I was thrown into the deep water. I can’t say I didn’t like or expect that — the whole process up to this point was chaotic (and I mean it in a good way). At the same time, I hear that other people that joined along with me had their first weeks more well-organized.
But I was already off to the races: armed with a few tools and names, I set out to learn about the business domain of my next endeavor. What followed next was me switching the tech stack, and you can read all about it in chapter 3.
I am in the middle of this most uncomfortable period of change. They say that changing jobs is right there at the top of the most stressful events in life. Well, at least for some people. And as it turns out, I’m among them. But don’t let me get ahead of myself.
I have a long career behind me already, and there are a few constants that can describe it up to this point:
It all actually comes down to the fact that I usually leave without a backup plan. I quit, set my status to open, and in the meantime usually dock somewhere for a couple of weeks until I find the next place for myself. I expected it to play out similarly this time, but that was not the case.
I started Docplanner Phone narrowly before the pandemic hit, which changed the job market a lot. And while there arose many lucrative opportunities, especially from global, remote-first companies, simultaneously the conditions at Docplanner kept improving.
The compensation rose steadily, with a noticeable bump along the way, when by the company’s decision we started to aim to be competitive on the broader European market — and what followed was a salary adjustment for the whole team.
The work-life balance was fabulous. We had a lot of autonomy in that regard and we used it a lot. We relied on trust, rather than putting the number of hours in. And the team reacted with engagement and responsibility.
The tech had its problems, some of which were being slowly mitigated (with great success I should add). And the funny thing was: it was the first time in my life that I worked on a team that had a larger product debt than a technical one. The quality was really top shelf. We did large refactors, consisting of thousands of lines of code, resulting in actually reducing bugs, rather than introducing new ones. And it was a pleasure to do them.
Maybe that was part of the problem? As a startup, we’d put too many resources into technology, and too little into marketing and sales? 🤔 Dunno.
We’ve managed to build a competent, open-minded, and diverse team. It was a pleasure to work with everyone, and the relationships we cherished along the way were priceless. That was actually the primary motivator for me personally, as well as the thing I will miss the most. Products rise and fall, friendships are here to last.
There were two primary things that caused pain in my back.
It was actually a great deal for the product. We got the infrastructure, funding and support of a successful, established company, but at the same time we were able to move fast, take risks, break things, and most importantly: leave dozens of years of baggage behind. That worked fine with a bunch of bumps along the road, but the net outcome was certainly positive for us.
There were some altercations with other departments, famously: legal, human resources and the site reliability team.
There were a bunch of reasons for this:
Disclaimer: I see that mostly as a systemic failure or a conflict of interests. There were tons of brilliant people with good intentions involved, and I can’t blame any of them for those outcomes… Well, barring some exceptions, I did have some beefs along the way.
So I reflected: what could I have achieved without all those obstacles, but instead with a company vision that is aligned with mine? One without so much politics, or scale crippling all of its efforts to introduce meaningful change. 🤔
For the majority of the time, we operated with a looming shadow over our heads. We had no certainty about what the next quarter would bring (read: we were a startup). One thing was certain: while we achieved some success, and reached a lot of our goals, it was never a spectacular victory. And that means we couldn’t grow as fast as some of us wanted.
One aspect of that was that the size of the team stagnated, and in the last period it shrank. The prospect of me getting back into roles I really enjoyed, like managing leaders, or recruitment, moved further away. A smaller team also meant fewer capabilities: it was unreasonable for us to undertake complex projects (both from a business perspective and a technical one) because they would hog too many resources, hindering our day-to-day operations. There was just no room to do a whole category of fun things.
The market situation didn’t give much hope of reversing this trend. What we expected was more cost cutting, fewer experiments, and a hiring freeze. We were heading for at least a couple of months in idle gear, and it would take us another few to get back up to speed.
More than 4 years prior to this, I had abandoned writing code as my primary role. Joining Phone was meant to be just a temporary step back, a necessary investment to reach new heights. That goal failed miserably for me, and staying on the team any longer felt like a total surrender on that front.
At the same time, the product outlived some of the other experiments the company launched over the years. The number of customers rose steadily, there were no huge gaps in functionality, and most of the new features were rather long shots, with more effort and less expected benefit. The product matured, and my skillset was no longer paramount to its further development.
Sometime in early November I made the decision. Product entering its next growth stage, slow loss of autonomy, and the growing excitement to try out something new — those were the deciding factors for me. Oh, and the fucking JumpCloud requirement. I don’t think I’ll ever accept working on a machine with a backdoor installed.
Teams and organizational hierarchy are the main things that dictate how we work, how we communicate, and how our goals are determined. There is also the second face of that coin: it is where we draw lines and build barriers.
Professionals are naturally drawn to two things: people doing a similar job (same discipline), and those who want to achieve the same goals. Why would we reduce those two motivations to one?
Here’s how it works now, in my experience, most of the time: teams silo themselves by discipline: engineering, product, marketing, HR, what have you. And then they try to overcome the barriers to working together towards the same goal. You hear it left and right: how do we make the product people understand the need to reduce technical debt? What do we do to work efficiently with HR? How can we make engineers more aware of the business context?
Put them all in one room (or zoom call, if you fancy). No more HR, Product and Engineering teams. You now have Product Alpha and Product Beta teams, all interdisciplinary, each under a single leader, working towards a common goal. There will be no artificial barriers to overcome!
They will still improve their skills, share knowledge, and improve processes across multiple teams, with people doing the same craft because that is what people are naturally drawn to (building guilds, if you will). But their primary allegiance is to the business team. What you currently understand as department goals are not what brings revenue for your company, this is not the end goal, just a means to an end. I dare to say, it’s pathological in some cases.
The way ahead of us is long and winding, but we already made the first step: we tore down the silos and now our product team consists of product owners, software engineers, a designer, a training expert, a customer success specialist, a marketing person, an hr partner, a telecommunication specialist, and more — every discipline we need to build a successful business. The next step is to break free from the grip of the organizational hierarchy and unite under a single leader.
To use the MVP cake analogy: cut across the layers to have some delicious experience from every slice!
We started building our app around 3 years ago. After the first few months of rapid development, the business idea was validated, and we slowly transitioned into a new phase. That meant growing the team, as well as starting to make more mature features. To give you a rough idea of our size, the app consists of over 600 Vue components, with twice as many TS files on top of that, written by half a dozen developers.
In my experience, apps usually get more complex with scale. A lot of people with different backgrounds and knowledge levels contribute, features need to interact with more parts of the application, and the code gets bigger and harder to maintain. We want to minimize the impact of all that on the business, while increasing the quality of the software by creating more numerous, more robust test cases.
There were three main milestones that allowed us to work on our final solution.
Introducing strong typing to the codebase was an obvious choice. In retrospect it resulted in fewer type-related errors, and a faster development pace, due to developers being more aware of the data types they were handling. We introduced TypeScript gradually, so it was not a blocker for us to roll out, and to this day, after almost 18 months, we still have some leftovers.
Our API client was responsible for too many things, so we split it out into repositories, responsible for mediating between the application logic and the API — constructing HTTP requests, and mapping responses to domain objects, among others.
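A minimal sketch of what such a repository might look like (all names here, like PatientDto and HttpPatientRepository, are illustrative, not our actual types):

```typescript
// Hypothetical DTO shape, as the API would return it
interface PatientDto {
  id: string;
  first_name: string;
  last_name: string;
}

// Domain object used by the application logic
class Patient {
  constructor(readonly id: string, readonly displayName: string) {}
}

// Minimal HTTP abstraction the repository depends on
interface HttpClient {
  get(url: string): Promise<unknown>;
}

class HttpPatientRepository {
  constructor(private readonly http: HttpClient) {}

  async find(id: string): Promise<Patient> {
    // Construct the HTTP request and map the response DTO to a domain object
    const dto = (await this.http.get(`/patients/${id}`)) as PatientDto;
    return new Patient(dto.id, `${dto.first_name} ${dto.last_name}`);
  }
}
```

The application logic only ever sees Patient; the wire format stays behind the repository boundary.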
Leveraging the type system in the components themselves allowed us to benefit from TS to the fullest in Vue components as well. The previous method used some magic (mapGetters, mapActions) that confused our tools and allowed for easily-preventable bugs (e.g. mapping a non-existent getter didn’t even cause a warning in our setup). Class components didn’t map getters but instead used the strongly-typed store directly.
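The difference boils down to when a mistake surfaces. A simplified sketch (not the actual Vuex API, just the shape of the problem):

```typescript
// String-based mapping, roughly what mapGetters does: a typo in the
// getter name silently maps to undefined and only blows up at runtime.
const mapGettersLike = (names: string[], getters: Record<string, unknown>) =>
  Object.fromEntries(names.map((n): [string, unknown] => [n, getters[n]]));

// Typed access: the compiler knows exactly which getters exist.
interface TypedGetters {
  currentUser: string;
  unreadCount: number;
}

function currentUserName(getters: TypedGetters): string {
  return getters.currentUser; // getters.curentUser would not even compile
}
```

With the typed variant, the non-existent-getter bug from our setup becomes a compile error instead of an undefined sneaking into the template.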
We extracted utility functions and generic helpers from Vue component files from the start, but not so much for business logic. The next step was to include that as well: simply moving methods / pure functions out so they can be tested in isolation. That was a cosmetic rather than an architectural change, but it allowed us to reduce the testing surface for those units. Being able to provide simple inputs and check outputs, instead of mocking the store and inspecting the resulting virtual DOM, lowered the entry threshold for writing tests, and we saw more of them written on a daily basis.
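A sketch of what that extraction looks like in practice (a hypothetical example, the names are made up for illustration):

```typescript
interface Slot {
  booked: boolean;
}

// Before, this logic lived in a component method that read from the store,
// so testing it meant mocking the store and rendering the component.
// After extraction it is a pure function: explicit inputs, one output.
export function shouldLoadMore(slots: Slot[], pageSize: number): boolean {
  const free = slots.filter((s) => !s.booked).length;
  // Keep fetching until a full extra page of free slots is buffered
  return free < pageSize;
}
```

The test is now just data in, boolean out, with no framework in sight.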
Large testing surface
Small testing surface
In order to avoid a huge, monolithic and complex application, we split it into modules. This way we can have a lot of isolated pieces, each with a limited scope — at the cost of some boilerplate when it comes to communication between modules. The approach is to build modules from the bottom up, starting with the domain. This way we model only what is necessary for the module, and map everything else in from the global scope using adapters.
Take a task, for example, one of the main concepts in our app: there is no single representation of a task entity in the app. Each module holds its own subset of its behaviours. In a monolithic application, those responsibilities would probably be grouped together, resulting in a God Object antipattern.
At the top level, not every element of a module is public and accessible from outside of it, so at that scale our application is effectively an order of magnitude smaller, as we can disregard all of the private elements.
To avoid depending on the global state (including functions / logic), we used the dependency inversion principle to inject data and logic into our classes. Instead of importing modules directly and having those dependencies hardcoded, we use both constructor and argument injection and expect them to be provided from the outside. The dependency injection container is responsible for stitching it together. In tests, we skip the DIC and provide the test doubles manually. We selected tsyringe as our DIC library.
We decided to use OOP to achieve:
All of those can be achieved using both the functional and the object-oriented paradigms, but OOP was more familiar to us. For the curious, here’s how we’d do it using a functional approach:
// 1. Start out with a class having a global dependency
class GlobalDependencyApiClient {
items(page: number) {
return (new HttpClient).get(`/items?page=${page}`);
}
}
// 2. Expect the dependency from the outside
class DependencyInversionApiClient {
constructor(private readonly http: HttpClient) {}
items(page: number) {
return this.http.get(`/items?page=${page}`);
}
}
// 3. Extract the logic to a function
function ItemsFetcher(http: HttpClient, page: number) {
return http.get(`/items?page=${page}`);
}
class ApiClient {
constructor(private readonly http: HttpClient) {}
items(page: number) {
return ItemsFetcher(this.http, page);
}
}
// 4. Inject as an argument and use partial application to provide the dependency
const FunctionalApiClient = ItemsFetcher.bind(null, new HttpClient());
// ...or inject using the constructor
const ObjectOrientedApiClient = new ApiClient(new HttpClient());
Each module is split into layers with various responsibilities:
We have set up dependency-cruiser to enforce the dependency rules described above.
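For illustration, a rule set of this shape can express such constraints (the rule names and paths below are made up for this sketch, not our exact configuration):

```javascript
// .dependency-cruiser.js — illustrative rules, not our real config
module.exports = {
  forbidden: [
    {
      name: 'domain-must-stay-pure',
      comment: 'the domain layer may not depend on the store or the view layer',
      severity: 'error',
      from: { path: '^src/modules/[^/]+/domain' },
      to: { path: '^src/(store|ui)' },
    },
    {
      name: 'no-reaching-into-module-internals',
      comment: 'other code talks to a module only through its public index',
      severity: 'error',
      from: { pathNot: '^src/modules' },
      to: { path: '^src/modules/[^/]+/(?!index)' },
    },
  ],
  options: {
    doNotFollow: { path: 'node_modules' },
  },
};
```

Running the cruiser in CI then fails the build whenever someone wires a dependency across a boundary.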
A class using dependency injection
export default class ConnectionIdentifierLabelFactory {
constructor(
private readonly workstations: WorkstationRepository,
private readonly tasks: TaskRepository,
private readonly patients: PatientRepository
) {}
get(to: string): string {
const patient = this.patients.get(to);
if (patient) {
return patient.displayName;
}
const workstation = this.workstations.find(to);
if (workstation) {
return workstation.displayName;
}
const task = this.tasks.find(to);
if (task) {
return task.displayName(this.patients);
}
return phoneNumberFormatter(to);
}
}
Adapters that provide data from outside of the domain
@injectable()
export default class StorePatientAdapter implements PatientRepository {
constructor(
@inject(RootStoreToken)
private readonly store: Store,
private readonly patientFactory: PatientFactory
) {}
async searchQuery(phrase?: string): Promise<Patient[]> {
const { items } = await this.store.dispatch(
FETCH_PATIENTS_BY_QUERY_ACTION,
this.getQueryParams(phrase)
);
// map DTO to entities:
return items.map(this.patientFactory.make);
}
}
Providers configure the dependency injection container
export default class SsoProvider implements DpProvider {
register(container: DependencyContainer): void {
container.register<SsoFlowRepository>(SsoFlowRepositoryToken, SessionStorageSsoFlowRepository);
container.register<SsoFlowInitializer>(SsoFlowInitializer, {
useFactory: (c: DependencyContainer) =>
new SsoFlowInitializer(c.resolve(SsoFlowRepositoryToken), window),
});
}
}
Public API facades resolved from the container
export const authorization = <Authorization>resolve(Authorization);
export type { Authorization };
export { default as BookVisit } from './ui/BookVisitProvider.vue';
Vue components resolve dependencies using the container
export default class TaskTileDisplayReminderBadge extends Vue {
@Prop({ type: Object, required: true })
readonly task: Task;
private readonly reminderFactory: ReminderFactory = resolve(ReminderFactory);
private readonly dateConverter: DateConverter = resolve(DateConverterToken);
get date(): DateInterface {
return {
formatted: this.reminder.getFormattedDate(this.dateConverter),
};
}
private get reminder(): Reminder {
return this.reminderFactory.make(this.task.id, this.task.reminderAt);
}
}
The Mother pattern
export default class ServiceMother {
static getSome(): Service {
return new Service(numberId(), 'consultation online', numberId());
}
static getWithAddressId(addressId: string): Service {
return new Service(numberId(), 'consultation online', addressId);
}
}
Test doubles
export default class StaticSsoFlowRepository implements SsoFlowRepository {
readonly storage = new Map();
readonly tokens: string[] = [];
get(): never {
throw new Error('Method not implemented.');
}
save(token: string, value: SsoFlowState): void {
this.tokens.push(token);
this.storage.set(token, value);
}
}
Yes, this is a false dichotomy. Simple functions convert to simple classes without dependencies. If those functions do have dependencies, however, they probably depend on the global scope, and you either don’t test them in isolation, or you need to find your implicit dependencies and mock them using more complex methods (like jest mocking module imports), either way.
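The contrast, in a minimal sketch (hypothetical names; the point is where the collaborator comes from, not what it does):

```typescript
// Implicit dependency: the collaborator is baked in at module scope.
// Testing this in isolation means intercepting the import itself,
// e.g. with jest.mock() — heavier machinery for a trivial function.
const moduleLevelClock = { now: () => Date.now() };

export function implicitGreeting(): string {
  return `It is ${moduleLevelClock.now()}`;
}

// Explicit dependency: the collaborator is a parameter, so a test
// just passes a hand-written stub — no module mocking needed.
interface Clock {
  now(): number;
}

export function explicitGreeting(clock: Clock): string {
  return `It is ${clock.now()}`;
}
```

The explicit version is why our test doubles stay as plain classes and object literals.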
Examples of simple assertion and expectation tests:
describe('LoadMoreEvaluator', () => {
test.each`
numberOfBookedSlots | numberOfFreeSlots | expected
${80} | ${80} | ${true}
${300} | ${80} | ${true}
${30} | ${101} | ${false}
${120} | ${120} | ${false}
`(
'should return $expected if there is $numberOfBookedSlots booked slots and $numberOfFreeSlots free slots',
({ numberOfBookedSlots, numberOfFreeSlots, expected }) => {
const bookingSlots = BookingSlotsMother.createWith(numberOfFreeSlots, numberOfBookedSlots);
const sut = new LoadMoreEvaluator();
expect(sut.shouldLoadMore(bookingSlots)).toBe(expected);
}
);
});
const fileRepository = {
uploadPublicFile: jest.fn(),
};
describe('MediaFileUploader', () => {
beforeEach(() => {
fileRepository.uploadPublicFile.mockClear();
});
test('should not upload empty file', async () => {
const sut = new MediaFileUploader(fileRepository);
await sut.tryUpload(null);
expect(fileRepository.uploadPublicFile).not.toHaveBeenCalled();
});
});
We do not optimize for less disk space. More lines of simple code trump fewer lines of more complex ones. Our OOP approach produces a lot of simple code that is easier to reason about. Being able to split our functions / classes into multiple smaller ones showed us that there were in fact a lot of responsibilities encapsulated in them, so now we conform to SRP better.
Having a lot of simple test doubles increases the number of lines of code, but not the complexity.
Tests for simple units are as simple as they were before. Tests for more complex units are still complex, but the context setup is easier and more explicit. We also improved the architecture of test cases themselves, so we hide away creating the test context using numerous patterns:
jest.mock()
What follows is an example of a test suite using a lot of dependencies replaced by test doubles, written using a gherkin-like syntax:
describe('ConnectionIdentifierLabelFactory', () => {
// test cases declarations using gherkin syntax:
test('call pstn number', () => {
when_i_call('+48500100100');
then_i_expect_label_to_be('+48 500 100 100');
});
test('call task by id with prefix', () => {
const id = uuid();
given_there_is_a_task(new Task(id, '+48500100100'));
when_i_call(`task:${id}`);
then_i_expect_label_to_be('+48 500 100 100');
});
// scenario context preparation with defaults
// dependencies:
let workstationRepository: WorkstationRepository;
let taskRepository: TaskRepository;
let patientRepository: PatientRepository;
// result:
let label: string;
// reset to some defaults before each scenario:
beforeEach(() => {
label = '';
workstationRepository = new WorkstationRepositoryStub();
taskRepository = new TaskRepositoryStub();
patientRepository = new PatientRepositoryStub();
});
// region steps definitions: givens, whens and thens
function given_there_is_a_task(...tasks: Task[]): void {
taskRepository = new TaskRepositoryStub(tasks);
}
// create system under test with all the prepared dependencies
// and calculate the result for given input
function when_i_call(to: string) {
const sut = new ConnectionIdentifierLabelFactory(
workstationRepository,
taskRepository,
patientRepository
);
label = sut.get(to);
}
// assert that the results match expectations
function then_i_expect_label_to_be(expected: string) {
expect(label).toBe(expected);
}
});
“Vue 3 does not use class components anymore, so we can’t upgrade.” This is true, but at the same time we stray away from Vue-specific features, and our view layer is actually thin. The business logic is framework-agnostic. Abandoning class components wouldn’t be a huge issue for us, since the core OOP classes live in the domain layer.
Moving to Vue 3 would probably be as easy for us as moving to React. If we ever make that decision, we can still make use of the composition API for local, view-related stuff, and replace the integration layer to communicate with our domain.
“At first glance, straying away from the standard way of doing things (just the framework way) increases the barrier of entry for new team members.” Yes, it does. But our app would never be just a simple Vue application. It would always be a large and complex one. It’s best for us to spread this complexity over a dozen modules rather than to pretend the onboarding process would be simple because it’s just a standard Vue application with nothing tricky about it.
Our goals were to increase maintainability and quality. Did we hit those?
The work is not done yet, every changeset raises new questions and uncovers topics we need to address. But so far, we are very satisfied with our results.