Junior - AI Verification Engineer
Location: Cadiz, Spain
Type: Full-time / Part-time
Department: Engineering
Overview
Predictable Machines is building the next generation of verifiable AI systems—combining cutting-edge language models with formal verification, functional programming, and mathematical rigor. We're seeking a Verification-Focused AI Engineer who thrives at the intersection of AI capabilities and mathematical precision.
We're looking for someone who:
Understands both AI potential and limitations—excited about LLMs but equally passionate about making them reliable, traceable, and mathematically sound.
Embraces functional programming paradigms—comfortable with Kotlin, TypeScript, and compositional system design for building deterministic, verifiable AI workflows.
Has curiosity about formal methods—interested in SMT solvers, logical reasoning, mathematical validation, or formal verification techniques (experience preferred but not required).
Thinks in systems and workflows—drawn to event-driven architectures, streaming systems, and building complex verification pipelines rather than just prompt engineering.
Values transparency and explainability—motivated by building AI systems where every decision can be traced, verified, and explained with mathematical rigor.
Ideal backgrounds include:
Computer Science with formal methods exposure
Mathematics/Logic with programming experience
Software Engineering with AI/verification interest
Research experience in AI safety, verification, or explainable AI
This role involves building verification systems that make AI trustworthy, not just impressive. If you're excited about combining the power of large language models with the rigor of formal verification, we want to meet you.
Key responsibilities
Build verification-first AI systems alongside senior engineers, focusing on Server-Sent Events architectures, streaming workflows, and real-time verification pipelines using Kotlin and TypeScript.
Develop and integrate formal verification tools—work with SMT solvers, logical reasoning systems, and mathematical validation tools to ensure AI outputs are provably correct and traceable.
Design streaming verification workflows that combine factual verification (web search), logical validation (formal methods), and mathematical checking (computational tools) into coherent, auditable pipelines.
Implement TypeScript client libraries and UI components for real-time research steppers, verification progress visualization, and interactive audit trail interfaces with full type safety.
Contribute to Docker-based tool ecosystem—help maintain and extend the 17+ containerized verification tools, MCP server implementations, and automated deployment systems.
Participate in verification methodology research—explore new approaches to AI fact-checking, logical consistency testing, and mathematical validation while maintaining functional programming principles.
Support enterprise integration patterns—help build authentication systems, multi-tenancy features, and API integrations that allow verification capabilities to be embedded in customer applications.
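To give a flavor of the compositional, verification-first style described above, here is a minimal TypeScript sketch. All types and checks are illustrative stubs invented for this posting, not the actual Predictable Machines codebase: each check is a pure function, and composing them yields a verdict that retains a full, auditable trace of every result.

```typescript
// Hypothetical sketch only: names and types are illustrative, not a real API.
type CheckResult = { check: string; passed: boolean; evidence: string };
type Verdict = { claim: string; results: CheckResult[]; verified: boolean };
type Check = (claim: string) => CheckResult;

// Stub checks; in a real pipeline these would call web search or an SMT solver.
const factualCheck: Check = (claim) => ({
  check: "factual",
  passed: claim.length > 0,
  evidence: "stub: would consult web search",
});

const logicalCheck: Check = (claim) => ({
  check: "logical",
  passed: !claim.includes("contradiction"),
  evidence: "stub: would call an SMT solver",
});

// Compose checks into one pipeline. Every intermediate result is kept,
// so the verdict explains itself rather than emitting a bare boolean.
const verify =
  (checks: Check[]) =>
  (claim: string): Verdict => {
    const results = checks.map((c) => c(claim));
    return { claim, results, verified: results.every((r) => r.passed) };
  };

const pipeline = verify([factualCheck, logicalCheck]);
console.log(pipeline("water boils at 100 °C at sea level").verified); // true
```

The design choice to thread immutable `CheckResult` values through the pipeline, rather than mutating shared state, is what makes the final verdict traceable: the audit trail falls out of the data flow for free.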
Qualifications
Required:
Strong foundation in Computer Science, Mathematics, or Engineering: a degree is preferred, but exceptional self-taught candidates with demonstrated systems-building experience are welcome.
Proficiency in functional programming languages—experience with Kotlin, TypeScript, or Scala preferred; comfort with immutable data structures, composable functions, and type-safe architectures.
Interest in mathematical reasoning and formal methods: curiosity about logic, proof systems, SMT solvers, or mathematical validation; coursework or personal projects can demonstrate this.
Systems thinking mindset—experience with event-driven architectures, streaming systems, API design, or containerized applications; understanding that AI is part of larger, reliable systems.
Collaborative engineering skills—comfort with Git workflows, code reviews, and building production-quality software rather than just research prototypes.
Bonus Points:
Formal methods exposure—coursework or projects involving theorem provers, model checking, constraint solving, or mathematical verification tools.
LLM integration experience focused on reliability, evaluation, and systematic testing rather than just prompt engineering.
Functional programming enthusiasm—personal projects or contributions to FP ecosystems; understanding of monads, type systems, or category theory.
Enterprise software experience—authentication systems, multi-tenancy, observability, or building APIs that other developers actually use.
Interest in AI safety/explainability—genuine curiosity about making AI systems transparent, auditable, and mathematically sound.
What we offer
Hands-on mentorship from experienced AI engineers and researchers.
Opportunity to work on real projects with impact in the AI reliability space.
Flexible, remote-first working environment.
A chance to grow your skills and transition into a full-time role in a fast-growing company.
Access to state-of-the-art AI tools and learning resources.