Adrian Sauter


Hey, I’m Adrian, welcome to my website. :)

I’m a third-year MSc student in Artificial Intelligence at the University of Amsterdam. I am broadly interested in the interpretability, safety, and alignment of AI and believe that these topics are crucial for developing trustworthy and beneficial AI systems.

Most recently, I worked on a project exploring how visual grounding affects both speech- and text-based language models (submitted to ICASSP 2026, preprint on arXiv). I also completed an internship at KPN, where I applied concept bottleneck models with LLMs to customer call transcripts, building an explainable-by-design framework for identifying key drivers of customer churn. I am now excited to be pursuing my thesis on “Contextual Sensitivity in Moral Judgements of Large Language Models”.

Before my MSc, I completed a BSc in Cognitive Science at the University of Tübingen, where I explored the intersection of human cognition and intelligent systems and wrote my thesis on “Video Background Extraction with a Masked-Autoencoder-Based Neural Network”, which contributed to a publication at ICANN 2024.

Feel free to reach out if you’d like to discuss AI, cognitive science, or any exciting interdisciplinary projects. I’m always open to new opportunities and collaborations! You can reach me at adrian dot a dot sauter at student dot uva dot nl or adriansauter07 dot as at gmail dot com.

news

Sep 01, 2025 Today, I started the third year of my AI Master’s at the University of Amsterdam. Over the next six months, I’ll be working on my thesis, “Contextual Sensitivity in Moral Judgements of Large Language Models”, while continuing my role as a teaching assistant for first-year master’s students.
Feb 03, 2025 I recently started a role as an XAI intern at KPN in Amsterdam. Over the next six months, I’ll be researching explainable customer call classification, applying concept bottleneck models to text data using LLMs.
Oct 10, 2024 I’m excited to announce that my co-authors and I will be presenting our recent paper, ““Studying How to Efficiently and Effectively Guide Models with Explanations” - A Reproducibility Study”, at NeurIPS in Vancouver this December.
Sep 13, 2024 Our paper ““Studying How to Efficiently and Effectively Guide Models with Explanations” - A Reproducibility Study” has been accepted at the ML Reproducibility Challenge 2023. You can find it on OpenReview.
Sep 02, 2024 I began my second year in the MSc Artificial Intelligence program at the University of Amsterdam and took up a position as a Teaching Assistant for the courses Computer Vision 1 and Natural Language Processing 1, taught to first-year MSc AI students.

selected publications

  1. Loci-Segmented: Improving Scene Segmentation Learning
    Manuel Traub, Frederic Becker, Adrian Sauter, and 2 more authors
    arXiv preprint arXiv:2310.10410, 2023
  2. “Studying How to Efficiently and Effectively Guide Models with Explanations” - A Reproducibility Study
    Adrian Sauter, Milan Miletić, Ryan Ott, and 1 more author
    Transactions on Machine Learning Research, 2024