R&D Experience
After my postdoc in 2021, I moved into industry for a few years, driven by caring responsibilities that required more stability at the time.
AI Innovation Senior Engineer — Verkor, Grenoble (2023–2024)
I worked on predictive ML systems for battery cell performance at one of Europe’s first large-scale gigafactories, during a critical scale-up phase.
The main challenge was analysing data from multiple physical machines across a pilot production line while keeping cost and complexity low enough to scale to a full gigafactory, all in a fast-changing environment where processes were continuously evolving and large volumes of data were accumulating in parallel.
I built monitoring and drift detection frameworks to track model performance over time in live production environments. The parallels with clinical research are real. In both settings, the data is noisy, the ground truth is hard to obtain, and the people who need to use the outputs are not data scientists. The stack included Python, Spark and Databricks, Streamlit, Docker, and Git/GitHub.
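To give a flavour of the drift-detection side, here is a minimal sketch of one classic building block, a Population Stability Index check on a model's prediction distribution. The synthetic data, thresholds, and function are illustrative only, not the production framework:

```python
import numpy as np

def psi(reference, live, bins=10):
    """Population Stability Index between a reference sample and live data.
    Common rule of thumb: < 0.1 stable, > 0.25 significant drift."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    ref = np.histogram(reference, edges)[0] / len(reference)
    liv = np.histogram(live, edges)[0] / len(live)
    ref = np.clip(ref, 1e-6, None)  # avoid log(0) on empty bins
    liv = np.clip(liv, 1e-6, None)
    return float(np.sum((liv - ref) * np.log(liv / ref)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # predictions at deployment time
shifted = rng.normal(0.8, 1.3, 5000)    # later predictions, process drifted
stable_score = psi(baseline, rng.normal(0.0, 1.0, 5000))
drift_score = psi(baseline, shifted)
```

Tracking a scalar like this per model, per day, is cheap enough to run everywhere, which mattered in a context where cost and complexity had to stay low.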
Application Engineer – Data Scientist — Araymond Networks, Grenoble (2021–2023)
I worked on AI-based systems for manufacturing quality control, embedded within an internal startup incubator dedicated to translating scientific AI projects into field-deployable products.
The problem was one of machine health engineering: detecting and characterising anomalous behaviour in production lines from multi-source sensor data (audio and inertial sensors), using a staged pipeline combining anomaly detection, unsupervised clustering, and supervised classification. I also worked on optimising Design of Experiments (DoE) via Reinforcement Learning to reduce the number of physical tests needed during product qualification.
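To illustrate the staged shape of that pipeline, here is a toy scikit-learn sketch on synthetic stand-ins for per-window sensor features; the actual features, models, and labels are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Toy stand-in for per-window sensor features (e.g. RMS, spectral bands)
normal = rng.normal(0, 1, (500, 4))
fault_a = rng.normal(4, 1, (30, 4))    # one anomalous regime
fault_b = rng.normal(-4, 1, (30, 4))   # another anomalous regime
X = np.vstack([normal, fault_a, fault_b])

# Stage 1: unsupervised anomaly detection flags suspicious windows
detector = IsolationForest(contamination=0.1, random_state=0).fit(X)
anomalous = X[detector.predict(X) == -1]

# Stage 2: clustering groups the anomalies into candidate failure modes
modes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(anomalous)

# Stage 3: once clusters are reviewed and labelled, a supervised model
# can recognise each failure mode directly on new windows
clf = RandomForestClassifier(random_state=0).fit(anomalous, modes)
```

The staging matters: labels for failure modes do not exist upfront on a production line, so unsupervised steps have to earn them before any supervised model is possible.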
What I genuinely enjoyed was working directly on the embedded sensors installed on production line equipment. A key outcome was moving from blind data acquisition to a qualified, controlled acquisition strategy — improving both detection model performance and DoE cost-efficiency. It required working across data acquisition, modelling, and stakeholder communication, with, it turns out, a fair amount of French-style diplomatic persistence. The stack included Python, MongoDB, Docker, and Git/GitHub.
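The RL-for-DoE idea can be caricatured as a bandit problem: choose which physical test condition to run next based on results so far, rather than sweeping a fixed grid. A toy epsilon-greedy sketch with made-up pass rates, not the method actually deployed:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical pass probabilities for 5 candidate test conditions
true_pass = np.array([0.55, 0.60, 0.90, 0.50, 0.65])
counts = np.zeros(5)
values = np.zeros(5)  # running estimate of pass rate per condition

for t in range(2000):
    # epsilon-greedy: explore 10% of the time, else run the best-looking test
    arm = rng.integers(5) if rng.random() < 0.1 else int(np.argmax(values))
    reward = float(rng.random() < true_pass[arm])  # simulated test outcome
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = int(np.argmax(values))
```

Each physical test is expensive, so concentrating trials on promising conditions instead of a full factorial sweep is where the cost saving comes from.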
What I took away from both: getting a model to work in a real environment is a different problem from getting it to work on a dataset. The gap is about engineering rigour, honest validation, and understanding your users. I also learned the value of ambitious but well-structured roadmaps — clear milestones, first-principles thinking, fast iteration. That has stayed with me.