I am a PhD student in the Autonomous Intelligent Machines and Systems CDT at the University of Oxford, supervised by Philip Torr and Adel Bibi. Currently, I am a research intern at Adobe working on the Content Authenticity Initiative. My research interests lie in the safety and reliability of machine learning systems, such as autonomous systems and language models, with a focus on methods to evaluate and verify their performance. I am also interested in the ethical implications of deploying AI-based systems, as well as their regulation and governance.
Prior to coming to Oxford, I completed my MSc at ETH Zürich, focusing on robotics, machine learning, statistics, and applied category theory. My thesis was on Compositional Computational Systems. At ETH, I worked closely with Prof. Emilio Frazzoli's group, and my studies were generously funded by the Excellence Scholarship & Opportunity Programme (ESOP). I was also a research intern at Motional.
A full list of publications is available on Google Scholar.
Universal In-Context Approximation By Prompting Fully Recurrent Models
Aleksandar Petrov, Tom A. Lamb, Alasdair Paren, Philip H.S. Torr, Adel Bibi
Risks and Opportunities of Open-Source Generative AI
Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schroeder, Fabio Pizzati, Katherine Elkins, Supratik Mukhopadhyay, Adel Bibi, Aaron Purewal, Csaba Botos, Fabro Steibel, Fazel Keshtkar, Fazl Barez, Genevieve Smith, Gianluca Guadagni, Jon Chun, Jordi Cabot, Joseph Imperial, Juan Arturo Nolazco, Lori Landay, Matthew Jackson, Philip H.S. Torr, Trevor Darrell, Yong Lee, Jakob Foerster
International Conference on Machine Learning (ICML) 2024
Prompting a Pretrained Transformer Can Be a Universal Approximator
Aleksandar Petrov, Philip H.S. Torr, Adel Bibi
International Conference on Machine Learning (ICML) 2024
When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations
Aleksandar Petrov, Philip H.S. Torr, Adel Bibi
International Conference on Learning Representations (ICLR) 2024
Language Models as a Service: Overview of a New Paradigm and its Challenges
Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, Christoph Weinhuber, Ryan Burnell, Anthony G. Cohn, Nigel Shadbolt, Michael Wooldridge
Preprint
Language Model Tokenizers Introduce Unfairness Between Languages
Aleksandar Petrov, Emanuele La Malfa, Philip H.S. Torr, Adel Bibi
Conference on Neural Information Processing Systems (NeurIPS) 2023
Certifying Ensembles: A General Certification Theory with S-Lipschitzness
Aleksandar Petrov*, Francisco Eiras, Amartya Sanyal, Philip H.S. Torr, Adel Bibi*
International Conference on Machine Learning (ICML) 2023
Robustness of Unsupervised Representation Learning without Labels
Aleksandar Petrov, Marta Kwiatkowska
Preprint
HiddenGems: Efficient Safety Boundary Detection with Active Learning
Aleksandar Petrov, Carter Fang, Khang Minh Pham, You Hong Eng, James Guo Ming Fu, Scott Drew Pendleton
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022
Compositional Computational Systems
Aleksandar Petrov, supervised by Gioele Zardini, Andrea Censi, Emilio Frazzoli
Master's thesis
Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents
Jacopo Tani, Andrea F. Daniele, Gianmarco Bernasconi, Amaury Camus, Aleksandar Petrov, Anthony Courchesne, Bhairav Mehta, Rohit Suri, Tomasz Zaluska, Matthew R. Walter, Emilio Frazzoli, Liam Paull, Andrea Censi
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020
Learning Camera Miscalibration Detection
Andrei Cramariuc*, Aleksandar Petrov*, Rohit Suri, Mayank Mittal, Roland Siegwart, Cesar Cadena
IEEE International Conference on Robotics and Automation (ICRA) 2020
Optimizing Multi-Rendezvous Spacecraft Trajectories: ΔV Matrices and Sequence Selection
Aleksandar Petrov, Ron Noomen