When I tell people I studied biotechnology before becoming a software engineer, I usually get one of two reactions: polite confusion or genuine curiosity about how those dots connect. The truth is, the path from laboratory benches to code editors was never planned. But looking back, it was never random either.

I started my academic journey at BHT Berlin studying biotechnology. The curriculum was dense with molecular biology, bioprocess engineering, and analytical chemistry. I spent long hours learning to design experiments, interpret noisy data, and troubleshoot protocols that refused to behave the way the textbook said they should. At the time, I had no idea how deeply these skills would shape my approach to software.

The laboratory as a debugging environment

Biological systems are among the most complex things you can study. They are non-deterministic, context-dependent, and full of edge cases. When an experiment fails in a biology lab, you cannot simply read the stack trace. You have to reason backwards through dozens of variables, any one of which might have shifted. You develop a kind of disciplined patience, a willingness to hold multiple hypotheses in tension while methodically ruling them out.

This is, I later discovered, exactly how good debugging works in software.

At Solaga UG, I worked on algae-based research, exploring how microorganisms could be harnessed for practical applications. The work sat at the intersection of biology, data, and engineering. I was collecting experimental data, looking for patterns, and iterating on processes that had to be both scientifically sound and practically viable. Without knowing it, I was training the same muscles that would later help me build data pipelines and reason about machine learning models.

The pivot

The decision to attend 42 Berlin was not a rejection of my scientific background. It was an extension of it. 42's peer-learning model appealed to me precisely because it mirrored the collaborative, self-directed nature of good research. There were no lectures, no hand-holding. You had a problem, you had peers, and you had to figure it out. For someone trained in experimental science, this felt natural.

What surprised me most was how transferable the core skills were. The systematic thinking I had developed designing experiments translated directly into architecting software systems. The rigorous analysis required to validate scientific results mapped onto writing reliable tests and reviewing code critically. Even the comfort with ambiguity, so essential in research, proved invaluable when navigating the inevitable uncertainties of building products.

Where it all converges

At VAARHAFT, where I now work on AI-driven fraud detection, these threads come together in ways I could not have anticipated. Understanding biological data, with all its noise, variability, and hidden structure, prepared me for working with ML pipelines in a way that a purely computer science background might not have. I am used to asking whether a signal is real or an artifact. I am comfortable with probabilistic reasoning. I know what it means to validate a model against messy, real-world data rather than clean benchmarks.
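To make "is this signal real or an artifact" concrete, here is a minimal sketch of the kind of check I mean: a simple permutation test on two sets of model scores. The numbers and the function are purely illustrative assumptions for this post, not anything from our production code.

```python
# Illustrative sketch: a permutation test asking whether an observed
# difference between two noisy groups of scores is a real signal or
# plausibly just an artifact of random variation.
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=42):
    """Return the fraction of random relabelings whose mean difference is
    at least as large as the observed one (an approximate p-value)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return count / n_permutations

# Hypothetical example: does model variant B really beat variant A,
# or is the gap just noise across evaluation runs?
scores_a = [0.81, 0.79, 0.83, 0.80, 0.78]
scores_b = [0.84, 0.86, 0.82, 0.85, 0.83]
print(permutation_test(scores_a, scores_b))  # small value -> the gap is probably real
```

It is the same instinct as repeating an experiment at the bench before trusting a result: before acting on a difference, check how easily chance alone could have produced it.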

Scientific methodology, the cycle of hypothesis, experiment, analysis, and iteration, is not just applicable to engineering. It is arguably the most underrated skill set in the industry. The best engineers I have worked with think like scientists: they question assumptions, design controlled tests, and resist the urge to generalize from insufficient evidence.

The case for non-linear paths

The tech industry has a well-documented bias toward linear credentials. Computer science degree, internship, junior role, promotion. There is nothing wrong with that path, but the industry loses something when it treats it as the only legitimate one. People who arrive at software through science, the arts, medicine, or trades bring cognitive diversity that homogeneous teams simply cannot replicate.

If you are reading this and considering an unconventional move into tech, I want to offer some honest encouragement. The transition is not effortless. There are gaps to fill, vocabulary to learn, and moments where you feel behind. But the skills you already have, the ability to learn complex systems, to reason under uncertainty, to persist through ambiguity, these are not consolation prizes. They are genuine advantages.

The most interesting problems in technology today sit at the intersection of disciplines. Climate modeling, drug discovery, biosecurity, computational biology. These fields do not need people who only know how to code. They need people who can code and think like scientists.

From working student to full-time

In March 2026, I officially joined VAARHAFT full-time as AI & Data Engineer. A year earlier, I had walked into a pitch event at 42 Berlin and met our CEO Linus Kameni. That five-minute conversation turned into a working student position, which turned into building core parts of the product, which turned into this.

At VAARHAFT, I acquire and manage the data our models need, build the automations that keep our pipelines running, and train our deepfake and image fraud detection models using our own end-to-end frameworks. Every week, the technology gets sharper. In a world where anyone can generate a convincing fake image in seconds, that work matters.

What I value most about VAARHAFT is that the problem is real and urgent. AI-generated media is eroding public trust in what we see. We are building the systems that restore it. The scientific rigor I learned in biotechnology, the engineering discipline I built at 42, the comfort with ambiguity that comes from a non-linear path — all of it converges here, every single day.

My path from biotechnology through algae research to a coding school to AI-powered fraud detection is not a story of reinvention. It is a story of accumulation. Every stage added a layer. Nothing was wasted. And if your own path looks similarly winding, take that as a sign that you might be building something richer than you realize.