I'm joining Fairtiq as Tribe Lead Engineer in April 2026, and honestly, I couldn't be happier.
Fairtiq builds GPS-based fare calculation for public transport across Europe. You get on a train, the app knows where you are, you get off, it charges you the correct fare. No ticket machines, no zone confusion, no fumbling with apps at the station. Just get on and go. It's the kind of product that makes you wonder why it hasn't always worked this way.
I'm genuinely thrilled about this. When they made the offer, my response was immediate: "I don't really need to think through this. You have my yes already." Of all the interview processes I ran over the past five weeks, Fairtiq was always my top choice. The people I met, the mission, the way they think about engineering. Every round made me more certain this was the right place - and the best part: it took them only two weeks from start to offer.
That's not really what this blog post is about, though. What's interesting is how I found them. "Tribe Lead Engineer" wasn't a job title I was searching for. I was looking for "Engineering Manager" and "Head of Engineering", maybe "Teamleitung". If you've searched for software roles lately, you know the problem: the same title means completely different things at different companies, and different titles often mean the same thing. "Software Engineer" at one company means building distributed systems. At another, it's building SAP stuff. "Engineering Manager" can mean building teams or managing a building's HVAC. "Full Stack Developer" ranges from React forms to architecting entire platforms. Search by title and you either miss roles that fit or drown in irrelevant noise.
Job searching is also notoriously difficult today. There's a lot of noise out there, and it's hard to find the right fit. I've seen LinkedIn job posts with hundreds of applicants within days, with descriptions that aren't particularly helpful. And don't get me started on sites like Indeed, where searching for "Engineering Manager" gives you anything but Engineering Manager jobs.
So I had something different in mind: why not tailor the job search to me, instead of trying to guess the name of a role? That's what I did: I built a job search system with five AI agents that search by role substance, not job title. Here's how it worked.
The Problem With Modern Job Search
If you look around the internet, the advice you get for searching for a job is volume. Apply to a hundred positions, hope for ten responses, interview at three, accept whatever offer comes first. LinkedIn's Easy Apply button exists to make this easier, and recruiters blast the same message to hundreds of candidates because the math seems to favor quantity. And it has never been easier to tailor your CV and cover letter to the role you're applying to.
But this approach has a cost that compounds quickly. If you're applying to a hundred companies, you can't research any of them properly. You scan the job description for keyword matches, maybe glance at the company website, and fire off an application. You don't know if the role matches your actual skills or just your resume keywords. You don't know if the engineering culture is healthy or if the team is drowning. You don't know if you'd even want to work there. You're hoping the interview process will answer those questions, which means you spend hours preparing for and attending interviews at companies you might reject anyway, or worse, companies that were never going to hire you because the fit was wrong from the start.
The alternative is to research deeply before applying. Understand the company, the team, the actual role, the people you'd work with. But that kind of research takes hours per company. If you're doing a serious job search while employed, you might have ten hours a week to dedicate to it. Deep research on five companies fills that budget completely, and you've barely scratched the surface of what's available.
So you're stuck choosing between two bad options. Research superficially and apply broadly, which produces low-quality applications and wastes time on bad-fit interviews. Or research deeply and apply narrowly, which means missing opportunities you'd be perfect for because you never found them. The job that actually fits your skills and preferences might be out there, but it's buried under a job title you weren't searching for, at a company you've never heard of, in a posting that went up last Tuesday and will close before you get to page three of your search results.
Five Agents, One Search
I built a system on OpenCode with five specialized AI agents, each handling a different part of the job search workflow. The core idea was simple: let agents handle the parts that scale linearly with effort, while I focus on the parts that require human judgment.
The Profiler builds and maintains my candidate profile by reading my CV, asking clarifying questions about what I actually want, and creating a structured document that other agents reference throughout the process. This isn't just a reformatted resume. It captures the environments I thrive in, the kinds of problems I want to solve, my real constraints around location and compensation, and the signals that distinguish a good fit from a superficial keyword match. Every other agent uses this profile as their source of truth.
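To make this concrete, here's a minimal sketch of what such a structured profile could hold. The field names and values are my illustration of the idea, not jobparty's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical candidate profile structure. The real profiler agent
# builds something like this through conversation; the exact fields
# in jobparty may differ.
@dataclass
class CandidateProfile:
    headline: str                         # one-line positioning
    target_responsibilities: list[str]    # what the role should involve
    preferred_environments: list[str]     # company stage, team size, culture
    constraints: dict[str, str]           # location, compensation, notice period
    exclusions: list[str] = field(default_factory=list)  # industries to skip

profile = CandidateProfile(
    headline="Engineering leader focused on building healthy teams",
    target_responsibilities=["people leadership", "hiring", "technical strategy"],
    preferred_environments=["product company", "30-300 engineers"],
    constraints={"location": "remote or commutable", "notice": "3 months"},
    exclusions=["FinTech"],
)
```

The point of a typed structure over a free-text resume is that every downstream agent can reference the same fields instead of re-interpreting prose.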
The Discoverer searches for opportunities across job boards, company career pages, and LinkedIn, but it doesn't search by job title. It searches by role substance, looking for positions where the actual responsibilities and requirements match my profile regardless of what the company decided to call the role. This is how "Tribe Lead Engineer" surfaced as a top match even though I was searching for "Engineering Manager." The title didn't match my search terms, but the role description matched my profile almost exactly.
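A naive stand-in for substance-based matching: score a posting by how much of the profile's language appears in the role description, ignoring the title entirely. The actual Discoverer is an LLM agent and far less literal than this, but the sketch shows why a mismatched title doesn't sink a well-matched role:

```python
def substance_score(posting_text: str, profile_terms: list[str]) -> float:
    """Fraction of profile terms found in the posting body.
    Illustrative only; the real agent reasons over the text
    rather than substring-matching."""
    text = posting_text.lower()
    hits = sum(1 for term in profile_terms if term.lower() in text)
    return hits / len(profile_terms)

posting = """Tribe Lead Engineer: you will lead a cross-functional team,
own hiring, coach engineers, and shape technical strategy."""
terms = ["lead a", "hiring", "coach", "technical strategy", "kubernetes"]
substance_score(posting, terms)  # 4 of 5 terms match -> 0.8
```

The title "Tribe Lead Engineer" contributes nothing to the score; the responsibilities do all the work.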
The Screener does quick fit assessment on everything the Discoverer finds. For each opportunity, it checks whether the role actually matches once you read past the job title, whether the company size and stage make sense, whether there are obvious red flags in the posting or the company's recent history. Most discovered opportunities don't pass screening, and that's the point. The screener prevents me from drowning in noise by filtering before I invest attention.
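The screening step boils down to a cheap pass/fail gate before expensive analysis. A sketch of that shape, with made-up field names and thresholds rather than jobparty's actual rules:

```python
def screen(opp: dict) -> tuple[bool, str]:
    """Quick fit check: reject early, explain why.
    Thresholds are illustrative assumptions."""
    if opp["substance_score"] < 0.6:
        return False, "role substance below threshold"
    if not (20 <= opp["company_size"] <= 2000):
        return False, "company size out of range"
    if opp["red_flags"]:
        return False, "red flags: " + ", ".join(opp["red_flags"])
    return True, "worth deeper analysis"

screen({"substance_score": 0.8, "company_size": 150, "red_flags": []})
# -> (True, "worth deeper analysis")
```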
The Analyzer goes deep on the opportunities that pass screening. For each promising match, it researches the company properly: what their engineering blog reveals about their technical culture, what employee reviews suggest about the working environment, what their funding situation implies about stability, what I'd actually be doing day-to-day based on the role and the team structure. This is the research I would have done manually for companies I was serious about, except now it happens for every opportunity that passes the screening threshold.
The Strategist ranks the analyzed opportunities and helps plan my approach. Which companies should I prioritize? Which represent backup options if my top choices don't work out? This was less about optimization and more about peace of mind. Knowing I had a prioritized list with clear reasoning behind the rankings meant I could focus on execution instead of constantly second-guessing which application to send next.
The system also served as a tracker and journal throughout the process. Every company researched, every application sent, every interview completed, every piece of feedback received got logged in a structured format. When I needed to remember what I'd learned about a company before a follow-up interview, the context was there. When I wanted to understand why certain applications weren't converting, I could look for patterns in the data. Job searching usually feels like chaos because the information lives in your head, your email, a dozen browser tabs, and scattered notes. Having it centralized changed the experience completely.
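The tracking part needs nothing fancy: an append-only journal you can filter by company covers most of it. A sketch of what that could look like (the JSONL format and field names are my guess, not jobparty's actual storage):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_event(journal: Path, company: str, stage: str, notes: str) -> None:
    """Append one structured entry per event: application sent,
    interview done, feedback received."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "company": company,
        "stage": stage,
        "notes": notes,
    }
    with journal.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def history(journal: Path, company: str) -> list[dict]:
    """Everything logged about one company, in chronological order,
    e.g. to re-read before a follow-up interview."""
    with journal.open(encoding="utf-8") as f:
        return [e for e in map(json.loads, f) if e["company"] == company]
```

Append-only means nothing gets lost or overwritten, and pattern-hunting later ("why aren't these converting?") is a simple scan over the file.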
What The Numbers Looked Like
The entire process took five weeks from first search to signed offer.
The agents researched over 70 companies across seven geographic regions, starting with local options and expanding outward as I understood the market better. From those 70+ companies, 27 made it through screening as genuine matches worth pursuing. That filtering ratio matters: I wasn't sending 70 applications and hoping for the best. I was sending 27 applications to companies I'd already vetted as plausible fits.
Of those 27 applications, 7 progressed to interviews, roughly a 26% conversion rate. That's high for senior roles in today's market, and I attribute it largely to application quality. By the time I applied, I understood each company well enough to write a specific application and explain why I wanted to work there. That shows, both in how you write and in how you interview.
From those 7 interview processes, one produced an offer, and that was Fairtiq. Two others were still in progress when the offer came, which gave me real options rather than the pressure of accepting whatever arrived first. The rest ended in rejection for various reasons: domain mismatch in industries where I lacked experience, location requirements that couldn't flex to remote, and in one case organizational chaos where the hiring manager left mid-process.
Twenty-four rejections total across the search. That number can feel discouraging if you think of each rejection as a personal failure, but the framing matters. Most of those rejections came early, often from a quick screening call or even just an automated response, before I'd invested significant preparation time. The system helped me fail fast on bad matches instead of discovering the mismatch three interviews deep when I'd already spent hours preparing.
I estimate the agents saved me at least 20 hours of work that would have gone to manual research, job board scrolling, company tracking, and application logistics. That's 20 hours I spent on interview preparation and actual conversations instead. The agents handled breadth; I focused on depth where it mattered.
Luck played a role too, and it would be dishonest to pretend otherwise. Fairtiq happened to be hiring at exactly the right time for my search. My background happened to align with what they needed for this specific role. Their interview process happened to move quickly while other companies were still scheduling first rounds. A system can surface opportunities and help you prepare, but timing and fit still depend on factors you can't control. The system improved my odds, but it didn't guarantee outcomes.
What The Agents Can't Do
This approach has real limitations, and I ran into several of them during the search.
The agents didn't filter out companies in industries where I had no relevant domain experience. I applied to four FinTech companies that were never going to hire someone without financial services background, and I could have predicted that if the screener had been calibrated to catch it. That's a configuration fix for next time: explicit industry exclusions in the profile that the screener enforces before opportunities reach me.
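The fix is mechanically trivial, which is what makes the miss annoying. Something as small as this, enforced by the screener before anything reaches the human, would have caught all four (the set contents and function are hypothetical, not code from jobparty):

```python
# Hypothetical profile-level exclusion list the screener could
# enforce before an opportunity ever costs human attention.
EXCLUDED_INDUSTRIES = {"fintech", "banking"}

def industry_ok(company_industry: str) -> bool:
    """Hard gate: excluded industries never reach screening output."""
    return company_industry.strip().lower() not in EXCLUDED_INDUSTRIES

industry_ok("FinTech")   # -> False, filtered out early
industry_ok("Mobility")  # -> True, proceeds to screening
```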
The agents also can't assess culture fit from the outside. They can read engineering blogs and synthesize employee reviews, but they can't tell you whether you'll actually enjoy working with the people, whether the team dynamics are healthy, or whether the company's stated values match how they actually operate. That only becomes clear in interviews, and sometimes not even then. The agents get you to more interviews at companies worth interviewing at, but the human judgment of "do I want to work here" remains entirely yours.
And of course, the agents don't make you a better candidate. They don't write your applications or prepare you for interviews or help you communicate clearly under pressure. They handle research and logistics, which frees up time and mental energy for the parts that actually determine whether you get hired. But if you're not qualified for the roles you're applying to, or if you interview poorly, no amount of agent-powered research will fix that.
The human-in-the-loop design is intentional throughout. Every transition between agents requires my input. The Discoverer surfaces opportunities, but I decide which ones merit screening. The Screener recommends, but I decide what goes to deep analysis. The Strategist ranks, but I choose where to apply and in what order. The agents handle scale. I handle judgment. That separation felt right, and I wouldn't automate the decision points even if I could.
Try It Yourself
The system is open source at github.com/MarcoMuellner/jobparty.
You'll need OpenCode installed and a Google Custom Search API key for the discovery agent to search the web. Add your CV as a text file, run the profiler to build your candidate profile through a conversation about what you're actually looking for, and start the discovery process for your target regions. The README walks through setup, and the agent prompts are readable if you want to understand or modify how each one works.
It's not polished software. I built it in a few days to solve my own problem, and I'm sharing it because the approach worked well enough that others might find it useful. Fork it, adapt it to your situation, improve the parts that frustrated you. The core idea is what matters: job searching is fundamentally a research problem, and AI agents can handle research at scale while you focus on the decisions and conversations that actually determine outcomes.
I start at Fairtiq in April. The job search is over, and I'm happy with how it ended.