Vaios Laschos
Mathematician & Machine Learning Researcher
🆔 ORCID: 0000-0001-8721-5335
Professional Summary

Applied mathematician with extensive postdoctoral experience, transitioning from foundational mathematical research to machine learning and AI systems. PhD in Applied Mathematics from the University of Bath, UK, with expertise in statistical mechanics, large deviation theory, stochastic control, POMDPs, risk measures, and optimal transport theory: areas that now form important mathematical foundations for modern deep learning. Within optimal transport, I have specialized in Wasserstein and Hellinger-Kantorovich spaces, independently discovering and studying the spherical Hellinger-Kantorovich space and contributing novel theoretical insights on optimal transport over curved geometries. My career has spanned four countries (Greece, the UK, the USA, and Germany), giving me diverse research perspectives and collaborative experience across international academic environments.

Mathematical Foundation to ML Innovation: My research journey reflects a deliberate evolution from abstract mathematical theory to practical AI applications. Starting from pure mathematics (measure theory, topology, geometry, complex analysis), I specialized in statistical mechanics, large deviation theory, stochastic control, POMDPs, risk measures, and optimal transport, frameworks that are increasingly important to generative AI, optimal matching, and neural network training. In particular, my extensive work on gradient flows via the De Giorgi framework and minimizing-movement schemes provides a deep theoretical foundation for understanding modern diffusion models: the structures I studied for Wasserstein gradient flows and evolutionary variational inequalities now underpin score-based generative models and denoising diffusion probabilistic models (DDPMs). This expertise directly enabled my supervision of research published at ICML 2025 on Universal Neural Optimal Transport (UNOT), which shows how Fourier Neural Operators can capture the geometry of Wasserstein space for machine learning applications.
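As a concrete illustration of that link (a standard textbook statement, not a result specific to my own papers): the minimizing-movement (JKO) scheme discretizes the Wasserstein gradient flow of a free energy, and for the entropy-plus-potential functional it recovers the Fokker-Planck equation that governs the forward process of diffusion models.

```latex
\rho^{\tau}_{k+1} \in \operatorname*{arg\,min}_{\rho}\;
  \frac{1}{2\tau}\, W_2^2\!\left(\rho, \rho^{\tau}_k\right) + \mathcal{F}(\rho),
\qquad
\mathcal{F}(\rho) = \int \rho \log \rho \,\mathrm{d}x + \int V \rho \,\mathrm{d}x .
```

As \(\tau \to 0\) the scheme converges to \(\partial_t \rho = \Delta\rho + \nabla \cdot (\rho \nabla V)\); score-based diffusion models learn \(\nabla \log \rho_t\) precisely in order to reverse this kind of dynamics.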

Research Leadership & Mentorship: During my postdocs in the Computer Science department of TU Berlin and at WIAS Berlin, I have supervised more than 20 master's theses spanning reinforcement learning for game environments, meta-learning, and neural optimization. My students have explored both gradient-based and evolutionary approaches, reflecting my commitment to methodological diversity in machine learning research. My most ambitious project has been developing ad hoc adaptation algorithms for the cooperative card game Hanabi, a multi-agent coordination problem that has become something of a white whale for me: it combines partial observability, communication constraints, and team adaptation challenges. Beyond student supervision, my research has benefited from collaborations across institutions: Brown University for probability theory and stochastic analysis, various European institutions for PDEs and gradient flows, WIAS Berlin for optimal transport geometry, and TU Berlin for machine learning applications. This network has enabled the interdisciplinary approach that characterizes my work.

Current ML Research & Development: To gain practical experience and ease my transition to industry, I am building hands-on LLM skills through direct implementation. I have trained small LLMs with novel architectures, fine-tuned models of up to 32B parameters, and applied preference-optimization and reinforcement-learning techniques (DPO, GRPO) to improve mathematical and programming reasoning, including participation in the Kaggle AIME 25 competition. My work on the ARC-2 challenge involves developing synthetic data generation methods and exploring transformer architectures that separate rule generation from rule execution, addressing fundamental questions about systematic reasoning in AI systems.
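For readers less familiar with DPO, a minimal sketch of its per-pair objective (textbook formulation in plain Python; the function name and inputs are illustrative, not code from my projects):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are sequence-level log-probabilities of the chosen and
    rejected responses under the policy being trained and under a
    frozen reference model; beta scales the implicit KL regularization.
    """
    # Implicit reward margin: how much the policy moved toward the
    # chosen response relative to the reference model, minus the same
    # quantity for the rejected response.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid of the scaled margin.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With zero margin (policy identical to the reference) the loss is
# exactly log(2); any positive margin pushes it below log(2).
```

Minimizing this over preference pairs nudges the policy toward chosen responses without an explicit reward model, which is what makes DPO attractive for reasoning-focused fine-tuning.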

Technical Implementation & Deployment: Beyond research, I've built practical ML applications including an agentic system for automatically fetching and processing arXiv papers with OCR and LLM analysis, a foreign language tutoring system with voice capabilities (developed before proprietary LLMs offered voice options), and a podcast generation tool similar to NotebookLM but with a novel feedback mechanism that uses TextGrad to automatically improve prompts based on user responses. These projects are distributed between my website (Metaskepsis.com) and GitHub repositories. I also participate in AI hackathons for agent development, demonstrating the combination of theoretical depth and practical implementation skills essential for both research and application development.

Near-Future Learning Projects: I'm developing agentic systems that enable LLMs to master unseen games - a research direction that serves as my laboratory for understanding the true capabilities and boundaries of modern AI. This work explores how large language models perform on "out-of-distribution" tasks, though I find this terminology somewhat paradoxical: the very notion of defining distributional boundaries presupposes an answer to what constitutes intelligence itself. At the same time, I'm exploring whether the meta-reflection capabilities I've acquired through years of mathematical research - my ability to understand and scaffold learning processes - can help these systems reason systematically about novel domains, revealing both the remarkable adaptability and the fundamental limitations of current architectures.

Professional Fit & Aspirations: I am well suited to research-focused roles where mathematical rigor meets practical innovation. I can also contribute effectively in mid- to senior-level LLM development positions, with the understanding that my non-traditional path means some skills may still need polishing; my mathematical foundation and meta-learning abilities, however, let me cover new ground rapidly. My research philosophy combines rigorous mathematical foundations with computational innovation: I believe the best AI systems emerge when we deeply understand their mathematical underpinnings. With a spherical profile score of 54/60 (breadth, depth, connectivity, balance, innovation, and impact), I bring both specialized expertise and the ability to connect disparate fields. Most importantly, I need to be part of a team that is genuinely passionate about its work and mission. I have difficulty treating work as merely work; I need purpose and meaning in what I am building. The intersection of mathematical beauty and practical impact is where I thrive, whether that means advancing AI capabilities or solving real-world problems through intelligent systems.

Academic Appointments
WIAS
Postdoctoral Researcher
2021 - Present
Research focus on Bayesian methods in optimal transport, optimal transport and rough paths, algorithms for discrete optimal transport, and Evolutionary Variational Inequalities (EVIs) on spaces of measures.
Technical University of Berlin
Postdoctoral Researcher
2018 - 2020
Conducted research on risk-sensitive decision making, POMDPs and risk under partial observability, optimal transport for machine learning, and reinforcement learning and meta-learning techniques.
WIAS
Postdoctoral Researcher
2015 - 2017
Investigated metrics on spaces of probability measures.
Brown University
Postdoctoral Researcher
2013 - 2015
Researched large deviations for Gibbs configurations and multi-agent risk-sensitive stochastic control.
MPI Leipzig
Guest Postdoctoral Researcher
2013
Studied solutions of the Euler equation as quasigeodesics on the Wasserstein space.
Education
University of Bath
PhD in Applied Mathematics
2009 - 2013
Thesis: Wasserstein gradient flows via large deviations from thermodynamic limits of independent particle models.
Aristotle University of Thessaloniki
Master's in Pure Mathematics
2005 - 2009
Dissertation: Potential theory and Brownian motion
Aristotle University of Thessaloniki
Major in Pure Mathematics
2000 - 2005
Focus on Real Analysis
Research Expertise

Mathematical Foundations

  • Optimal Transport Theory (Wasserstein, Hellinger-Kantorovich, and spherical HK distances)
  • Gradient Flows & Evolutionary Variational Inequalities
  • Large Deviation Principles
  • McKean-Vlasov Equations
  • Metric Geometry on Non-smooth Spaces
  • PDEs & Variational Methods

Machine Learning & AI

  • Large Language Models (LLMs)
  • Diffusion Models & Score-Based Methods
  • Neural Optimal Transport
  • GANs with Optimal Transport Costs
  • Reinforcement Learning & Preference Optimization (DPO, GRPO)
  • Meta-learning & Few-Shot Learning
  • LLM Training for Reasoning
  • Synthetic Data Generation

Optimization & Control Theory

  • Risk-Sensitive Decision Making
  • Stochastic Control Theory
  • Multi-agent Systems
  • Optimal Transport Applications
Publications & Research Impact
  • 12 Journal Publications
  • 2 Conference Papers
  • 2 Submitted Papers

Key Research Contributions

  • Introduced new Hellinger-Kantorovich metrics with scaling properties
  • Established gradient flow structures that connect to modern diffusion models
  • Built bridges between large deviations and Wasserstein gradient flows
  • Advanced risk-sensitive control for cooperative multi-agent systems
  • Reformulated POMDPs as utility optimization problems
  • Supervised Neural OT research published at ICML 2025

Notable Publication Venues

Journal of Functional Analysis; Electronic Journal of Probability; Mathematics of Control, Signals, and Systems; Journal of Mathematical Analysis and Applications; ESAIM: Control, Optimisation and Calculus of Variations; Journal of Mathematical Physics

Teaching & Mentorship

Extensive teaching experience across multiple institutions, including tutorials in Calculus and Linear Algebra as a PhD student. Served as coordinator of the Probability seminar at Brown University for two consecutive years. At TU-Berlin, conducted several seminars on advanced topics in Reinforcement Learning and co-organized a course where students completed projects related to Machine Learning or Computational Neuroscience.

Student Supervision

Supervised 20+ Master's theses (2019-2024) on diverse topics including:

  • Machine Learning & AI: LLM performance optimization, prompt engineering with RL, prompt optimization techniques, representation learning, few-shot classification
  • Reinforcement Learning: Generative algorithms, deep neuroevolution, ad-hoc cooperation, risk-sensitive RL
  • Optimal Transport: Neural OT solvers, GANs with transport costs, entropic optimal transport, Sinkhorn algorithm initializations
  • Diffusion & Generative Models: Score-based methods, gradient flow perspectives on generative modeling
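Several of the optimal-transport theses above center on entropic OT and the Sinkhorn algorithm. For context, a minimal dependency-free sketch of the standard Sinkhorn iteration (the textbook algorithm, not any student's code):

```python
import math

def sinkhorn(cost, a, b, eps=0.1, iters=200):
    """Entropic optimal transport via Sinkhorn iterations.

    cost: n x m cost matrix (list of lists); a, b: source/target
    marginals (lists summing to 1); eps: entropic regularization.
    Returns the transport plan as an n x m list of lists.
    """
    n, m = len(a), len(b)
    # Gibbs kernel K = exp(-cost / eps)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        # Alternately rescale rows and columns to match the marginals.
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Transport plan: diag(u) K diag(v)
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

The "initializations" thesis topic concerns the warm-starting of u and v (set to all-ones here), which largely determines how many iterations the loop above needs in practice.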
Skills

Note: Quantification is mostly for fun, though there's some reasoning behind the numbers.

Technical & AI/ML Skills

Python 4/6
PyTorch 4.5/6
Train NN of Various Architectures 5/6
Fine-Tune LLMs 4/6
Prompt Optimization 5/6
LaTeX 5.5/6
Git 4.5/6
Haskell 3.5/6

Languages & Soft Skills

Greek (Native) 5.5/6
English (Fluent) 5/6
German (Intermediate) 3/6
Spanish (Intermediate) 3/6
Listening 6/6
Problem Solving 5.5/6
Teamwork 5.5/6
Flexibility 5/6
Personal Interests

Climbing, Yoga, Travelling, Cooking, Trying random sports, Making decisions that take me far away from my comfort zone, and then complaining about those decisions.