SURF Research Day 2026

Can You Trust AI? A Researcher’s Guide to Truth‑Finding
2026-05-19, Cineac

Do you use language models in your research or support work, but sometimes doubt whether an answer is correct? In this interactive session, you will discover how to quickly and practically check AI results using the SURF AI Hub.

You will start in a secure, sovereign AI environment that runs in the SURF data center and is built on public values such as autonomy and fairness. In a short live demo, you will see how convincing AI can sound when it is wrong: hallucinations, fabrications, and false certainties. You will then practice with simple checks that you can immediately apply in your own work.

You will also discover how small choices in your prompt, such as framing or bias, can have a major impact on what the AI presents as “true.”


What is the nature of your session?: Technical, Technology impact, Policy, Community

With whom do you want to connect?:

Researchers, lecturers, research support staff, data stewards, librarians, and IT professionals who use or support AI and want practical ways to verify AI outputs and use language models responsibly in research workflows.

What is the key take away of your session?:

Discover why AI models aren’t truth sources, how to verify outputs via the SURF AI-Hub (https://hr-ai-hub.github.io/), and how to apply FAIR-based guidelines for responsible AI use in research.

Rob van der Willigen holds a PhD in biophysics and serves as Tech Lead of the HR DataLabs (Healthcare, EAS, and AI SusTech), specializing in robust data infrastructures (Data Fabric) for explainable AI and neural networks. In 2019, he founded the Prometheus Data Science Lab as part of the SURF Digital Competence Center for Practice-Oriented Research, underscoring his deep commitment to the ethical and responsible application of artificial intelligence. He currently drives the technical development of the HR AI-Hub (https://hr-ai-hub.github.io/), providing researchers with sovereign, secure, on-premises generative AI workflows. His postdoctoral research at the Donders Center for Neuroscience on natural language comprehension forms the scientific foundation for his ongoing focus on machine learning, truth-finding protocols, transparent AI systems, and the biological origins of language understanding.