AI/ML Platform Engineer
About Lekta AI
Founded in 2016 in Kraków, Poland, Lekta AI is a technology company working with major European banking, insurance, and telco enterprises. The company was established with an ambitious goal: to provide the best customer service in the world. Lekta stands out from the crowd of conversational AI companies because we combine technological expertise with domain-specific knowledge from contact center industry veterans. Through our proprietary conversational neurosymbolic engine and backend integrations, our systems are intent- and context-aware, handling over 1.5 million conversations every month. Today, with offices in Poland and Spain, Lekta is rapidly expanding its business division to scale operations internationally.
About the role
Lekta is hiring an AI/ML Platform Engineer to own the full AI/ML stack behind our conversational agents — language models, ASR and TTS, retrieval and grounding, and the eval discipline that ties them together. The work is shaped by real constraints: ultra-low latency on voice channels, where every 100 ms is felt at telco and mobile scale; the architectural calls our neurosymbolic design demands, where you decide what belongs in deterministic logic and what belongs to the language model; and the bar that regulated domains — banking, insurance, telco — set for auditability and predictable behaviour. You will also be the team's eye on the frontier — model releases, new techniques, and research worth integrating.
We are building the next generation of conversational AI — a platform where the creator's experience is itself AI-native. Think of what Lovable did for app-building, applied to conversational agents: a visual IDE, a copilot that helps the user shape, refine and evolve the agent, and a runtime engineered to deploy what gets built into enterprise production. What makes the engineering genuinely hard is the combination our agents have to deliver simultaneously: fluent (natural language, real conversation), fast (sub-second on live voice channels), and reliable (predictable behaviour under real customer pressure, every time). Most stacks give you two of those at the cost of the third. We don't, and that constraint shapes almost every engineering problem we work on.
AI is a power tool, and we treat it like one. Lekta engineers are encouraged — and equipped — to use AI deeply across their work: Claude Code, model APIs, copilot tooling. We invest in the accounts, tooling, and time it takes to get fluent with them. We don't measure people on lines committed; we measure on what gets delivered and how well it holds up in production. We are not an agentic-workflow company. AI generates; the engineer directs and reviews. Every change going out under your name has been read, understood, and signed off by you.
Responsibilities
Own the language model layer — provider integration, model selection, prompting, evaluation, and cost-quality trade-offs at scale
Own the speech stack — ASR and TTS pipelines, latency budgets, multilingual quality
Own the techniques that take voice to ultra-low latency — incremental dialogue management, end-of-turn detection, and the engineering that closes the gap to natural conversation
Lead retrieval and grounding — embeddings, RAG, recall-precision trade-offs at production scale
Make the architectural calls on what belongs in deterministic logic, what belongs to language model invocation, and how the two compose
Track the frontier — model releases, new techniques, decisions on what we adopt and what we let the field figure out first
Qualities
Significant production experience with frontier language models — prompting, provider integration, cost-quality trade-offs at scale
Hands-on experience with speech models (ASR and/or TTS), including latency, multilingual quality, and provider trade-offs
Strong eval discipline — you've built or extended evaluation frameworks, and you know the difference between a benchmark that catches bugs and one that catches drift
Comfort with retrieval and grounding (embeddings, RAG, hybrid retrieval) and architectural judgment on what belongs in deterministic logic vs language model invocation
Strong software engineering background — you ship production code, not just notebooks; TypeScript or Python at depth
Track record of shipping production work with AI in your daily loop, and reviewing what it produces
Experience pushing voice latency down (incremental dialogue management, end-of-turn detection), fine-tuning, dialogue systems, or regulated-domain AI is strongly preferred
Proficient in English (minimum B2) and Polish (minimum C1); other languages are a plus
Why join us
Visible impact — your engineering decisions land in real customer experience, not in roadmap slides
Real ownership — few layers between you and production, and close work with engineering leadership and founders
AI-first workflow with company-paid tooling — Claude Code, model access, IDEs of your choice
Flexible contract types (B2B / Employment Contract), work arrangements, and working hours
Remote, hybrid or in-office — from a location of your choice
Direct, low-bureaucracy, results-oriented culture; no micromanagement
Competitive salary, with an honest conversation about trajectory as Lekta scales