Applied mathematician with extensive postdoctoral experience, transitioning from foundational mathematical research to machine learning and AI systems. PhD in Applied Mathematics from the University of Bath, UK, with expertise in statistical mechanics, large deviation theory, stochastic control, POMDPs, risk measures, and optimal transport theory - areas that now form important mathematical foundations for modern deep learning. Within optimal transport, I have specialized in Wasserstein and Hellinger-Kantorovich spaces, independently discovering and studying the spherical Hellinger-Kantorovich space and contributing novel theoretical insights to optimal transport on curved manifolds. My career has spanned four countries (Greece, UK, USA, Germany), providing diverse research perspectives and collaborative experience across international academic environments.
Mathematical Foundation to ML Innovation: My research journey reflects a deliberate evolution from abstract mathematical theory to practical AI applications. Beginning with pure mathematics (measure theory, topology, geometry, complex analysis), I specialized in statistical mechanics, large deviation theory, stochastic control, POMDPs, risk measures, and optimal transport theory - mathematical frameworks that are increasingly important to generative AI, optimal matching, and neural network training. In particular, my extensive work on gradient flows through the De Giorgi framework and minimizing movement schemes provides a deep theoretical foundation for understanding modern diffusion models: the structures I studied for Wasserstein gradient flows and evolutionary variational inequalities now underpin score-based diffusion models and denoising diffusion probabilistic models (DDPMs). This expertise directly enabled my supervision of research published at ICML 2025 on Universal Neural Optimal Transport (UNOT), which shows how Fourier Neural Operators can capture the geometry of Wasserstein space for machine learning applications.
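To make the gradient-flow connection concrete, the display below states the minimizing movement (JKO) step and its continuum limit. This is the standard formulation from the gradient-flow literature rather than anything specific to my own papers, with F a generic energy functional on probability measures.

```latex
\begin{align}
  \rho_{k+1} &\in \operatorname*{arg\,min}_{\rho \,\in\, \mathcal{P}_2(\mathbb{R}^d)}
    \Big\{ \tfrac{1}{2\tau}\, W_2^2(\rho,\rho_k) + \mathcal{F}(\rho) \Big\}
    && \text{(one minimizing movement / JKO step, time step } \tau>0\text{)} \\
  \partial_t \rho_t &= \nabla\cdot\Big( \rho_t\, \nabla \tfrac{\delta\mathcal{F}}{\delta\rho}(\rho_t) \Big)
    && \text{(limit } \tau\to 0\text{: Wasserstein gradient flow of } \mathcal{F}\text{)}
\end{align}
% For \mathcal{F}(\rho) = \int \rho\log\rho\,dx + \int V\,d\rho the limit equation is the
% Fokker--Planck equation, i.e. the forward (noising) dynamics behind score-based
% diffusion models.
```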
Research Leadership & Mentorship: During my postdocs at TU-Berlin (Computer Science department) and WIAS Berlin, I've supervised over 20 master's theses spanning reinforcement learning for game environments, meta-learning, and neural optimization. My students have explored both gradient-based and evolutionary approaches, reflecting my commitment to diverse methodological perspectives in machine learning research. My most ambitious project has been developing ad hoc adaptation algorithms for the cooperative card game Hanabi - a challenging multi-agent coordination problem that has become something of a white whale for me, combining partial observability, communication constraints, and team adaptation challenges. Beyond student supervision, my research has benefited from collaborations across institutions: Brown University for probability theory and stochastic analysis, various European institutions for PDEs and gradient flows, WIAS Berlin for optimal transport geometry, and TU Berlin for machine learning applications. This diverse network has enabled the interdisciplinary approach that characterizes my work.
Current ML Research & Development: To gain practical experience and facilitate my transition to industry, I'm actively developing hands-on LLM capabilities through direct implementation. I've trained small LLMs with novel architectures, fine-tuned models up to 32B parameters, and applied reinforcement learning techniques (DPO, GRPO) to enhance mathematical and programming reasoning, including participation in the Kaggle AIME 25 competition. My work on the ARC-2 challenge involves developing synthetic data generation methods and exploring transformer architectures that separate rule generation from execution - addressing fundamental questions about systematic reasoning in AI systems.
Technical Implementation & Deployment: Beyond research, I've built practical ML applications including an agentic system for automatically fetching and processing arXiv papers with OCR and LLM analysis, a foreign language tutoring system with voice capabilities (developed before proprietary LLMs offered voice options), and a podcast generation tool similar to NotebookLM but with a novel feedback mechanism that uses TextGrad to automatically improve prompts based on user responses. These projects are distributed between my website (Metaskepsis.com) and GitHub repositories. I also participate in AI hackathons for agent development, demonstrating the combination of theoretical depth and practical implementation skills essential for both research and application development.
Near-Future Learning Projects: I'm developing agentic systems that enable LLMs to master unseen games - a research direction that serves as my laboratory for understanding the true capabilities and boundaries of modern AI. This work explores how large language models perform on "out-of-distribution" tasks, though I find this terminology somewhat paradoxical: the very notion of defining distributional boundaries presupposes an answer to what constitutes intelligence itself. At the same time, I'm exploring whether the meta-reflection capabilities I've acquired through years of mathematical research - my ability to understand and scaffold learning processes - can help these systems reason systematically about novel domains, revealing both the remarkable adaptability and the fundamental limitations of current architectures.
Professional Fit & Aspirations: I am well-suited for research-focused roles where mathematical rigor meets practical innovation. I can also contribute effectively to mid-senior level LLM development positions, with the understanding that my non-traditional path means some skills may need polishing - however, my mathematical foundation and meta-learning abilities allow me to cover new ground rapidly. My research philosophy involves combining rigorous mathematical foundations with computational innovation - I believe the best AI systems emerge when we deeply understand their mathematical underpinnings. With a spherical profile score of 54/60 (breadth, depth, connectivity, balance, innovation, and impact), I bring both specialized expertise and the ability to connect disparate fields. Most importantly, I need to be part of a team that is genuinely passionate about their work and mission. I have difficulty treating work as merely work; I require purpose and meaning in what I'm building. The intersection of mathematical beauty and practical impact is where I thrive, whether that's advancing AI capabilities or solving real-world problems through intelligent systems.
Journal of Functional Analysis; Electronic Journal of Probability; Mathematics of Control, Signals, and Systems; Journal of Mathematical Analysis and Applications; ESAIM: Control, Optimisation and Calculus of Variations; Journal of Mathematical Physics
Extensive teaching experience across multiple institutions, including tutorials in Calculus and Linear Algebra as a PhD student. Served as coordinator of the Probability seminar at Brown University for two consecutive years. At TU Berlin, conducted several seminars on advanced topics in Reinforcement Learning and co-organized a course in which students completed projects in Machine Learning or Computational Neuroscience.
Supervised 20+ Master's theses (2019-2024) on diverse topics, including reinforcement learning for game environments, meta-learning, and neural optimization.
Note: Quantification is mostly for fun, though there's some reasoning behind the numbers.
Climbing, Yoga, Travelling, Cooking, Trying random sports, Making decisions that take me far away from my comfort zone, and then complaining about those decisions.