About
London, 6th September 2025.
Hi, I’m Adam. I’m an AI scientist in DeepMind’s Science unit, working on one of the most fascinating challenges I can imagine: understanding the language of our DNA. At the intersection of biology and artificial intelligence, I study how different cells use the genome and how mutations can affect their function.
In my most recent project, I helped deliver AlphaGenome, a deep learning model that predicts gene expression and a range of other regulatory properties directly from DNA sequence. My hope is that models like this will help us diagnose rare genetic diseases, understand cancer, and aid longevity research.
My shift to biology is relatively recent; I spent most of my research career learning grounded representations and building generative models of images, videos, and 3D scenes. This work, which included a collaboration with Geoff Hinton on capsule networks, centered on baking structure directly into the model. While I’ve moved away from that approach, I’m still convinced that the underlying question – how can we build models that learn efficiently and continually, without massive human-curated datasets? – is fundamental to the pursuit of truly general AI. And I think we can do better than next-token prediction with transformers; humans are the best proof of that.
When not at a keyboard, you’ll find me with a book, or trying to move as much as I can: running, swimming, lifting, dancing, or climbing. More recently, I’ve been exploring my voice and learning the handpan. I also make a point of writing every day and of being in the mountains whenever possible. You can find some of my non-technical writing on my Substack.
If you have ideas related to any of the above, feel free to reach out. I answer every email.
Publications
Y. Shi, N. Siddharth, P. H. S. Torr, A. R. Kosiorek “Adversarial Masking for Self-Supervised Learning”, ICML, 2022. code
M. Engelcke, A. R. Kosiorek, O. Parker Jones, I. Posner “GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations”, ICLR, 2020. code
A. R. Kosiorek, S. Sabour, Y. W. Teh, G. E. Hinton “Stacked Capsule Autoencoders”, NeurIPS, 2019. code
J. Lee, Y. Lee, J. Kim, A. R. Kosiorek, S. Choi, Y. W. Teh “Set Transformer”, ICML, 2019. code
F. B. Fuchs, O. Groth, A. R. Kosiorek, A. Bewley, M. Wulfmeier, A. Vedaldi, I. Posner “Learning Physics with Neural Stethoscopes”, NeurIPS workshop on Modeling the Physical World: Learning, Perception, and Control, 2018.
A. R. Kosiorek, H. Kim, I. Posner, Y. W. Teh “Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects”, NeurIPS, 2018. code, NeurIPS spotlight talk, video, poster
A. R. Kosiorek, A. Bewley, I. Posner “Hierarchical Attentive Recurrent Tracking”, NeurIPS, 2017. code
Projects
A. R. Kosiorek “Learning Object-Centric Representations”, University of Oxford, 2020. source
Forge - a lightweight, framework-agnostic tool for managing ML experiments.
An implementation of “Attend, Infer, Repeat” by Ali Eslami et al. code