<?xml version='1.0' encoding='utf-8' ?>
<!-- Made with love by pretalx v2025.2.2. -->
<schedule>
    <generator name="pretalx" version="2025.2.2" />
    <version>0.11</version>
    <conference>
        <title>Advanced Computing User Day</title>
        <acronym>acud-2025</acronym>
        <start>2025-12-04</start>
        <end>2025-12-04</end>
        <days>1</days>
        <timeslot_duration>00:05</timeslot_duration>
        <base_url>https://pretalx.surf.nl</base_url>
        <logo>https://pretalx.surf.nl/media/acud-2025/img/ACUD_logo_TDnAbQP.svg</logo>
        <time_zone_name>Europe/Amsterdam</time_zone_name>
        
        
        <track name="Plenary" slug="146-plenary"  color="#6852b3" />
        
        <track name="Generative AI and Machine Learning" slug="147-generative-ai-and-machine-learning"  color="#b3842d" />
        
        <track name="Innovative Technologies &amp; Services" slug="144-innovative-technologies-services"  color="#f47b34" />
        
        <track name="Data Processing &amp; Cloud Solutions" slug="143-data-processing-cloud-solutions"  color="#7e9db4" />
        
        <track name="HPC for Societal and Industrial Impact" slug="145-hpc-for-societal-and-industrial-impact"  color="#4f99cc" />
        
        <track name="High Performance Computing" slug="142-high-performance-computing"  color="#ff4eff" />
        
    </conference>
    <day index='1' date='2025-12-04' start='2025-12-04T04:00:00+01:00' end='2025-12-05T03:59:00+01:00'>
        <room name='Progress' guid='cb99caa7-30df-54ef-bae3-4eae559dae24'>
            <event guid='59433f19-970d-515b-a242-ee14fb714251' id='4442'>
                <room>Progress</room>
                <title>Opening Experience</title>
                <subtitle></subtitle>
                <type>Opening Experience</type>
                <date>2025-12-04T09:30:00+01:00</date>
                <start>09:30</start>
                <duration>00:15</duration>
                <abstract>**A Spark to Begin**
Before we dive into the day&#8217;s discoveries, we invite you to experience a moment that will awaken your senses and set the tone for what&#8217;s to come. Something unexpected will unfold: a spark of creativity and wonder to open the stage and the mind.

After this opening act, Valeriu, our day chair and a leading figure in the advanced computing community, will take the stage to officially open the Advanced Computing User Day.</abstract>
                <slug>acud-2025-4442-opening-experience</slug>
                <track>Plenary</track>
                
                <persons>
                    <person id='1466'>Rob en Emiel</person><person id='606'>Valeriu Codreanu</person>
                </persons>
                <language>en</language>
                <description>Rob and Emiel will be present at our event as energisers. They will open the day, amaze us with their award-winning World Cup act, and have promised to inspire us in the area of mindset change. We hope you are ready for some unforgettable and magical moments.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/AGUFK8/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/AGUFK8/feedback/</feedback_url>
            </event>
            <event guid='1927ded5-0117-5476-9d16-23b3d914a216' id='4441'>
                <room>Progress</room>
                <title>Keynote Maria Girone</title>
                <subtitle></subtitle>
                <type>Keynote</type>
                <date>2025-12-04T09:45:00+01:00</date>
                <start>09:45</start>
                <duration>00:50</duration>
                <abstract>Keynote by Maria Girone, Head of CERN openlab</abstract>
                <slug>acud-2025-4441-keynote-maria-girone</slug>
                <track>Plenary</track>
                
                <persons>
                    <person id='1426'>Maria Girone</person>
                </persons>
                <language>en</language>
                <description>Maria has spent her career driving innovation at the intersection of science and computing. At CERN, she has led transformative projects that bring together HPC, AI, and cloud technologies to handle the immense data challenges of the Large Hadron Collider. A leader, collaborator, and advocate for diversity in STEM, Maria continues to inspire how we think about computing for discovery.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/D8NASW/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/D8NASW/feedback/</feedback_url>
            </event>
            <event guid='70b6de1a-75eb-57e4-b98e-02d977e521ba' id='4408'>
                <room>Progress</room>
                <title>Introducing WeatherGenerator</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T11:00:00+01:00</date>
                <start>11:00</start>
                <duration>00:25</duration>
                <abstract>Artificial intelligence has been transformative for earth and environmental sciences: nowadays it is a standard instrument in the scientist&#8217;s toolbox. In the domain of meteorology, machine learning often displays superior accuracy compared to traditional computational methods. Even in weather prediction, where complex numerical PDE-solving codes have seen decades of development, graph neural networks and transformer architectures have proven to produce more skillful forecasts at a fraction of the computational cost. Inspired by the recent developments in generative modeling of textual data through large language models, several research groups have made efforts to design a foundation model for weather and climate, one that allows fine-tuning for specific objectives and benefits from a pre-trained rich latent space. The EU project WeatherGenerator aims to develop the leading European AI foundation model of the atmosphere. This model will be pre-trained with petabytes of multi-modal data (reanalyses, station observations, satellite products,&#8230;) on Europe&#8217;s first exascale-class supercomputers, ultimately keeping Europe&#8217;s global forecast capabilities at the forefront as we enter an era of democratized data-driven weather prediction.</abstract>
                <slug>acud-2025-4408-introducing-weathergenerator</slug>
                <track>Generative AI and Machine Learning</track>
                
                <persons>
                    <person id='797'>Gijs van den Oord</person>
                </persons>
                <language>en</language>
                <description>In recent years, artificial intelligence has grown to be a ubiquitous tool in earth and environmental sciences. In meteorology and climate sciences, neural networks have been shown to be the superior strategy for a multitude of data-driven tasks such as bias correction, down-scaling and even now-casting. Lately, weather prediction and data assimilation - traditionally the domain of state-of-the-art large numerical HPC codes - have also shown substantial improvements from the use of graph neural networks or transformer architectures. As a result, the current best weather forecasts are obtained with models such as Google&#8217;s GraphCast and ECMWF&#8217;s AIFS, both trained on the global reanalysis dataset ERA5. As a bonus, the inference rollout requires just a fraction of the computational cost of a traditional forecast.

Although machine learning outperforms traditional methods in these specific tasks, the question remains whether a unified core model, equipped with a rich latent space, opens the pathway towards improved predictive skill and increased flexibility. Several initiatives to build such a foundation model of the atmosphere have emerged and shown promising results. Within the EU project WeatherGenerator, we aim to construct a large, high-resolution foundation model for weather prediction and atmospheric climate modeling. We aim to combine a very large volume of reanalysis products, observational data and climate model output into a multi-channel transformer architecture that can easily be fine-tuned to execute common weather modeling and prediction tasks. The pre-training will be a technical feat that has to be executed on Europe&#8217;s exascale compute infrastructure. To substantiate the claim of being a foundation model, the project hosts many stakeholders that will re-implement existing ML applications with the WeatherGenerator model.

In this talk I will motivate this ambitious endeavour and outline the innovative ideas and techniques behind WeatherGenerator. I will briefly discuss some of the future applications and explain how the Netherlands eScience Center plans to bring this technology to potential stakeholders such as the European research community, public institutions and industry.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/UCP9QG/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/UCP9QG/feedback/</feedback_url>
            </event>
            <event guid='122472ba-cc45-5385-82a8-ce4a5c5a7770' id='4480'>
                <room>Progress</room>
                <title>Fundamental bottlenecks for AI and HPC</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T11:30:00+01:00</date>
                <start>11:30</start>
                <duration>00:25</duration>
                <abstract>Snellius and other HPC systems are not magic, even if it sometimes may feel so.
Efficient usage of the available hardware is the difference between a model that is just &apos;OK&apos; and a model that is state-of-the-art (and I have examples in my pocket to prove it!).
Trusting that whatever you throw at the system will be efficient &apos;automagically&apos; is the quickest way to burn GPU hours without getting what you really want: breakthrough science!</abstract>
                <slug>acud-2025-4480-fundamental-bottlenecks-for-ai-and-hpc</slug>
                <track>Generative AI and Machine Learning</track>
                
                <persons>
                    <person id='1478'>Robert-Jan Schlimbach</person>
                </persons>
                <language>en</language>
                <description>Is your dataloader asleep at the wheel? Is over-eager logging killing your performance because it&apos;s forcing CPU&lt;-&gt;GPU syncs? Does 100% GPU utilization actually mean that your GPU is being used effectively? (Hint: it doesn&apos;t!)  
In this talk we&apos;ll go over the fundamental bottlenecks of compute: those things in any HPC system that will cause your workflow to be slower than it needs to be, and what you can do to transform your workflow from &apos;it eventually works&apos; to &apos;it works remarkably well&apos;.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/A8QYRM/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/A8QYRM/feedback/</feedback_url>
            </event>
            <event guid='8237d6e0-32ed-55fb-920a-555890d1326a' id='4406'>
                <room>Progress</room>
                <title>No GPU required: Training and using scalable LLMs on CPUs</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T12:00:00+01:00</date>
                <start>12:00</start>
                <duration>00:25</duration>
                <abstract>Transformer-based LLMs, at scale, are prohibitively expensive to train, requiring massive GPU capacity. Alternative technologies do exist, producing functionally equivalent LLMs at a fraction of the training costs, using high-memory CPU nodes. I will illustrate this with a memory-based LLM trained on Snellius&apos; hi-mem nodes.</abstract>
                <slug>acud-2025-4406-no-gpu-required-training-and-using-scalable-llms-on-cpus</slug>
                <track>Generative AI and Machine Learning</track>
                
                <persons>
                    <person id='1342'>Antal van den Bosch</person>
                </persons>
                <language>en</language>
                <description>Memory-based language modeling, proposed by Van den Bosch (2005), is a machine learning approach to next-token prediction based on the k-nearest neighbor (k-NN) classifier (Aha, Kibler, and Albert, 1991; Daelemans and Van den Bosch, 2006a). This non-neural machine learning approach relies on storing all training data in memory, and generalizes from this training data when classifying unseen new data using similarity-based inference. Memory-based language modeling is functionally roughly equivalent to decoder Transformers (Vaswani et al., 2017), in the sense that both can run in autoregressive text generation mode and predict next tokens based on a certain amount of prior context.

While training a memory-based language model is generally low-cost, as it involves a one-pass reading of training data and does not involve any convergence-based iterative training, a naive implementation would render it useless for inference. The upper-bound complexity of k-nearest neighbor classification is notoriously unfavorable, i.e. O(nd), where n is the number of examples in memory, and d is the number of features or dimensions (e.g. context size). However, improvements and fast approximations are available. Daelemans et al. (2010) provide a range of approximations offering fast classification and data compression using prefix tries. Another notable aspect of memory-based language modeling, as observed earlier by Van den Bosch (2006b), is that its next-word prediction performance increases log-linearly: with every 10-fold increase in the amount of training data, next-word prediction accuracy increases by a more or less constant amount (although there may eventually be a plateau, which we never reached because of memory limitations).

The relatively low costs of both learning and inference make memory-based language modeling a potentially eco-friendly alternative to the generally costly training of Transformer-based language models (Strubell, 2019). All experiments carried out so far with memory-based language models have been based on publicly available software, with TiMBL as the basic classification engine (https://github.com/LanguageMachines/timbl). All required scripts for training and inference are available on GitHub (https://github.com/antalvdb/memlm).
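The k-NN mechanism described above can be sketched as follows. This is an illustrative toy, not the TiMBL implementation (which uses optimized prefix tries rather than a linear scan); the function names `train`, `overlap`, and `predict` are hypothetical:

```python
from collections import Counter

def train(tokens, context=3):
    """One-pass 'training': store every (context window, next token) pair in memory."""
    memory = []
    for i in range(context, len(tokens)):
        memory.append((tuple(tokens[i - context:i]), tokens[i]))
    return memory

def overlap(a, b):
    """Similarity between two context windows: number of matching positions."""
    return sum(1 for x, y in zip(a, b) if x == y)

def predict(memory, ctx, k=3):
    """k-NN inference: majority vote over next tokens of the k most similar
    stored contexts. Naive O(n*d) scan, which prefix tries avoid in practice."""
    neighbours = sorted(memory, key=lambda item: -overlap(item[0], ctx))[:k]
    votes = Counter(nxt for _, nxt in neighbours)
    return votes.most_common(1)[0][0]

corpus = "the cat sat on the mat the cat sat on the rug".split()
mem = train(corpus)
print(predict(mem, ("cat", "sat", "on")))  # prints 'the' (majority next token)
```

Autoregressive generation then simply appends the predicted token to the context and repeats, mirroring how decoder Transformers are run at inference time.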

References

D. W. Aha, D. Kibler, and M. Albert. 1991. Instance-based learning algorithms. Machine Learning, 6:37&#8211;66.

W. Daelemans and A. Van den Bosch. 2005. Memory-based language processing. Cambridge University Press, Cambridge, UK.

W. Daelemans, J. Zavrel, K. Van der Sloot, and A. Van den Bosch. 2010. TiMBL: Tilburg memory based learner, version 6.3, reference guide. Technical Report ILK 10-01, ILK Research Group, Tilburg University.

A. Van den Bosch. 2006a. Scalable classification-based word prediction and confusible correction. Traitement Automatique des Langues, 46(2):39&#8211;63.

A. Van den Bosch. 2006b. All-word prediction as the ultimate confusible disambiguation. In Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing, pages 25&#8211;32, New York City, New York. Association for Computational Linguistics.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, &#321;. Kaiser, and I. Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS&#8217;17, pages 6000&#8211;6010, Red Hook, NY, USA. Curran Associates Inc.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/7YLTTZ/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/7YLTTZ/feedback/</feedback_url>
            </event>
            <event guid='268a17c5-fdfd-5574-9419-d4056301f50c' id='4443'>
                <room>Progress</room>
                <title>Energy Boost: Mind in Motion</title>
                <subtitle></subtitle>
                <type>Energizer</type>
                <date>2025-12-04T13:25:00+01:00</date>
                <start>13:25</start>
                <duration>00:10</duration>
                <abstract>Right after lunch, Rob and Emiel invite you to engage your mind in an unexpected way.</abstract>
                <slug>acud-2025-4443-energy-boost-mind-in-motion</slug>
                <track>Plenary</track>
                
                <persons>
                    <person id='1466'>Rob en Emiel</person>
                </persons>
                <language>en</language>
                <description>Through a short, interactive moment, they&#8217;ll challenge how we perceive focus, logic, and awareness, demonstrating that our brains might just be capable of more than we think.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/WWKFUY/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/WWKFUY/feedback/</feedback_url>
            </event>
            <event guid='1ec1bf66-8d5b-5ba4-80a1-9e3a33b59d0e' id='4444'>
                <room>Progress</room>
                <title>Next-Generation Applications for Advancing Scientific Discovery</title>
                <subtitle></subtitle>
                <type>In Conversation</type>
                <date>2025-12-04T13:35:00+01:00</date>
                <start>13:35</start>
                <duration>00:30</duration>
                <abstract>We are increasingly engaged in transdisciplinary research to address the complex challenges our world faces today, such as transitioning to renewable energy systems, advancing personalised medicine, leveraging digital twins, and accurately predicting climate change. Advanced research e-infrastructure has become essential for tackling these questions in an integrated manner. To achieve these ambitions, we are converging on using multiple technologies, methodologies, computing and data infrastructures, and software stacks to create sustainable, long-term value. In this respect, applications and workflows are crucial for addressing scientific challenges, achieving outcomes, and advancing the boundaries of research.</abstract>
                <slug>acud-2025-4444-next-generation-applications-for-advancing-scientific-discovery</slug>
                <track>Plenary</track>
                
                <persons>
                    <person id='1461'>Sander Houweling</person><person id='794'>Prof. Zeila Zanolli</person><person id='186'>Sagar Dolas</person>
                </persons>
                <language>en</language>
                <description>Engaging in discussions and debates about the future of workflows and applications is just as crucial for advancing research as conversations about infrastructure. It is essential to ensure this topic receives equal attention and support, especially when strategising on infrastructure planning and development, to provide a well-rounded approach that fosters innovation and collaboration.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/MFYS89/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/MFYS89/feedback/</feedback_url>
            </event>
            <event guid='aa54c0c7-dfbc-51c8-ab94-bddcb0eb8b68' id='4420'>
                <room>Progress</room>
                <title>TULIP: A Prototype for Open, Locally Hosted LLM Infrastructure</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T14:10:00+01:00</date>
                <start>14:10</start>
                <duration>00:25</duration>
                <abstract>Large Language Models are becoming core research tools, yet dependence on commercial APIs raises issues of privacy, compliance, and long-term cost. At TU Delft, REIT and ICT are prototyping TULIP, a Kubernetes-based platform for locally hosted open LLMs. We&#8217;ll share design choices that prioritize responsible innovation: containerized serving with an OpenAI-compatible API, cluster-native scaling, and transparent monitoring. 

While national initiatives like SURF&#8217;s WiLLMa focus on shared capacity, TULIP explores the campus-level space: providing researchers with reproducible endpoints, model governance, and early feasibility metrics for institutional hosting. We will share early lessons, governance implications, and practical guidance for universities and labs aiming to offer sustainable, open alternatives to proprietary AI services.</abstract>
                <slug>acud-2025-4420-tulip-a-prototype-for-open-locally-hosted-llm-infrastructure</slug>
                <track>Generative AI and Machine Learning</track>
                
                <persons>
                    <person id='1376'>Azza Ahmed</person>
                </persons>
                <language>en</language>
                <description>TULIP is TU Delft&#8217;s prototype for open, locally hosted LLM infrastructure. This session will highlight:

- Why a local pilot like TULIP matters for researcher engagement
- How it complements WiLLMa, SURF&#8217;s AI hub initiative
- Early lessons on balancing technical feasibility with governance and sustainability
- Open discussion on how institutional platforms can provide sustainable, open alternatives to proprietary AI services

The session is aimed at researchers, research engineers, and infrastructure managers curious about first steps in hosting open LLMs on institutional hardware.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/MHUJHK/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/MHUJHK/feedback/</feedback_url>
            </event>
            <event guid='cd406a83-bed0-5c1c-b8ab-4831ab6ba44b' id='4428'>
                <room>Progress</room>
                <title>Developing Robust Search with Open-Source LLMs</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T15:25:00+01:00</date>
                <start>15:25</start>
                <duration>00:25</duration>
                <abstract>While open-source search models have greatly improved with transformer-based architectures, they face challenges outside their training domain, such as when applied to multi-modal or non-English text data. In this talk, we will describe some of our ongoing work developing new open-source models to address these challenges:

- **Multilingual retrieval.** We train an effective multilingual sparse retrieval model achieving state-of-the-art performance on standard multilingual benchmarks while continuing to perform well in English. 
- **Multimodal retrieval.** We improve multimodal retrieval for the visual document retrieval task with an approach leveraging existing vision-language models. 
- **Complex retrieval.** We develop query expansion for complex information needs that cannot be handled well with standard methods. 
- **Synthetic data generation.** We explore synthetic data generation for enabling training and evaluation on broader scenarios like retrieval-augmented generation (RAG). 
- **Efficient retrieval models.** Given the increased computational costs of using LLMs for retrieval, we explore several strategies for improving their efficiency, including an effective pruning approach that yields smaller models with comparable performance, alongside supporting engineering work.</abstract>
                <slug>acud-2025-4428-developing-robust-search-with-open-source-llms</slug>
                <track>Generative AI and Machine Learning</track>
                
                <persons>
                    <person id='1386'>Dylan Ju</person><person id='1387'>Yibin Lei</person><person id='1388'>Thong Nguyen</person>
                </persons>
                <language>en</language>
                <description>Our talk will describe our research on robust search with open source LLMs and briefly describe our engineering work developing a Triton kernel to speed up training and inference with learned sparse retrieval models, with both efforts leveraging the computational power of the LUMI supercomputer.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/9LR7FS/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/9LR7FS/feedback/</feedback_url>
            </event>
            <event guid='de96facc-a1ac-56d7-b680-e0956f6f5e8c' id='4445'>
                <room>Progress</room>
                <title>Technology &amp; Service Updates</title>
                <subtitle></subtitle>
                <type>Technology &amp; Service Updates</type>
                <date>2025-12-04T15:55:00+01:00</date>
                <start>15:55</start>
                <duration>00:20</duration>
                <abstract>Get up to speed with the latest developments in advanced computing, services, and technology within SURF. This session offers a concise overview of what&#8217;s new, what&#8217;s changing, and how these innovations will support the community in the year ahead.</abstract>
                <slug>acud-2025-4445-technology-service-updates</slug>
                <track>Plenary</track>
                
                <persons>
                    <person id='604'>Walter Lioen</person>
                </persons>
                <language>en</language>
                <description>This session offers a concise overview of what&#8217;s new, what&#8217;s changing, and how these innovations will support the community in the year ahead.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/VLWDSR/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/VLWDSR/feedback/</feedback_url>
            </event>
            <event guid='635dc303-b44f-5845-961b-729b128fae09' id='4446'>
                <room>Progress</room>
                <title>Closing Experience</title>
                <subtitle></subtitle>
                <type>Closing Experience</type>
                <date>2025-12-04T16:15:00+01:00</date>
                <start>16:15</start>
                <duration>00:20</duration>
                <abstract>This final moment isn&#8217;t just a closing. It&#8217;s a celebration, a lasting spark to carry with you beyond today. And when their act finishes, Valeriu will step back into the spotlight to guide us through the day&#8217;s final reflections and officially bring the Advanced Computing User Day to a close.</abstract>
                <slug>acud-2025-4446-closing-experience</slug>
                <track>Plenary</track>
                
                <persons>
                    <person id='1466'>Rob en Emiel</person><person id='606'>Valeriu Codreanu</person>
                </persons>
                <language>en</language>
                <description>This final moment isn&#8217;t just a closing. It&#8217;s a celebration, a lasting spark to carry with you beyond today.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/QN8AS7/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/QN8AS7/feedback/</feedback_url>
            </event>
            
        </room>
        <room name='Quest' guid='625ee01d-544d-5656-b15c-7ac6c725f8a8'>
            <event guid='f0b08d31-04df-5112-901a-63e2aeda1573' id='4461'>
                <room>Quest</room>
                <title>Accelerating CRISPR gRNA Efficiency Prediction on the Snellius HPC system</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T11:00:00+01:00</date>
                <start>11:00</start>
                <duration>00:25</duration>
                <abstract>CRISPR gene editing is transforming how we approach challenges in health, food, and sustainability, but one question still slows everyone down: which guide RNA will actually work?</abstract>
                <slug>acud-2025-4461-accelerating-crispr-grna-efficiency-prediction-on-the-snellius-hpc-system</slug>
                <track>HPC for Societal and Industrial Impact</track>
                
                <persons>
                    <person id='1460'>Sjoerd Kelder</person>
                </persons>
                <language>en</language>
                <description>A guide RNA is the molecule that tells the CRISPR system where to cut or modify DNA. Predicting how well it performs is essential for everything from developing new therapies to improving crops or designing cleaner bioprocesses.

For my MSc project, I used the Snellius supercomputer at SURF, supported through the EuroCC Netherlands infrastructure, to test whether adding RNA structure information could make prediction models smarter. Scaling this workflow on HPC let me process tens of thousands of sequences, train deep-learning models efficiently, and keep every step reproducible.

The work shows how advanced computing can bridge scientific insight and industrial impact, illustrating how reproducible, large-scale AI workflows can drive innovation across sectors that depend on complex biological or experimental data.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/NRV7JC/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/NRV7JC/feedback/</feedback_url>
            </event>
            <event guid='bc0a44ba-08b8-52d3-8eab-646b5c79a1a2' id='4450'>
                <room>Quest</room>
                <title>Benchmarking Delft3D FM on HPC systems for real-life problems in surface water</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T12:00:00+01:00</date>
                <start>12:00</start>
                <duration>00:25</duration>
                <abstract>**Importance of Simulation of Surface Water Systems**
Forecasting of flooding, morphology and water quality in coastal and estuarine areas, rivers, and lakes is of great importance for society. To tackle this, the Delft3D Flexible Mesh Suite (Delft3D FM) has been developed by Deltares. Delft3D FM is used worldwide and consists of modules for modelling hydrodynamics, waves, morphology, water quality, and ecology.</abstract>
                <slug>acud-2025-4450-benchmarking-delft3d-fm-on-hpc-systems-for-real-life-problems-in-surface-water</slug>
                <track>HPC for Societal and Industrial Impact</track>
                
                <persons>
                    <person id='1450'>Menno Genseberger</person>
                </persons>
                <language>en</language>
                <description>**Need for HPC Optimization in Real-Life Applications**
There is an urgent need to make Delft3D FM more efficient and scalable on high-performance computing systems for large-scale models of real-life applications. The range of these applications is quite broad: from forecasting of flooding near dikes to ecological impact assessments of wind parks and/or floating solar panels, and from the design of harbours to large-scale land reclamation projects. For that purpose, a small project focused on new benchmarks to establish the current status of the parallel performance. These benchmarks of Delft3D FM were performed, among others, on Snellius from SURF for several typical real-life applications.

**Use of Sixth-Generation Models and Snellius Benchmarks**
Several selected cases are from the sixth-generation models for Rijkswaterstaat. These models are developed and maintained for a broad application range in the main Dutch waterbodies, and are also used by other parties for their own applications (requests via iplo.nl). On Snellius, the Apptainer version of Delft3D FM was used for the benchmarks. Deltares offers maintenance and support for this Apptainer version, also in combination with the sixth-generation models. The Apptainer version of Delft3D FM is also available to other users of Snellius.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/GF37TS/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/GF37TS/feedback/</feedback_url>
            </event>
            <event guid='4e8b5226-9579-54f7-bb6d-9d93dbf1fb3c' id='4447'>
                <room>Quest</room>
                <title>ROMEO HPC center: missions and projects</title>
                <subtitle></subtitle>
                <type>Interactive Presentation with Conversation &amp; Input</type>
                <date>2025-12-04T14:10:00+01:00</date>
                <start>14:10</start>
                <duration>00:50</duration>
                <abstract>The ROMEO HPC Center of the University of Reims, under the lead of Teratec, is a partner of the French National Competence Center.</abstract>
                <slug>acud-2025-4447-romeo-hpc-center-missions-and-projects</slug>
                <track>HPC for Societal and Industrial Impact</track>
                
                <persons>
                    <person id='1428'>Florence Draux</person><person id='1429'>Fr&#233;d&#233;ric Maugui&#232;re</person>
                </persons>
                <language>en</language>
                <description>The ROMEO HPC Center hosts one of the most powerful academic supercomputers in France. We will present the ecosystem of the HPC center, the supercomputer itself, and the other infrastructures, as well as our participation in national and European projects.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/RJ8H93/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/RJ8H93/feedback/</feedback_url>
            </event>
            
        </room>
        <room name='Expedition' guid='acd37fdb-6d54-5f09-be9b-492a92a8478c'>
            <event guid='04edd1d2-90cf-5b29-b8b1-40d4cd240cd4' id='4432'>
                <room>Expedition</room>
                <title>Unveiling the Radio Sky: High-Resolution LOFAR Imaging with Advanced Computing</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T11:00:00+01:00</date>
                <start>11:00</start>
                <duration>00:25</duration>
                <abstract>LOFAR, Europe&#8217;s powerful low-frequency radio telescope, produces vast amounts of data, making high-resolution imaging a major challenge. Thanks to new algorithms, SURF&#8217;s Spider platform, and AI expertise, researchers now achieve unprecedented detail, delivering the sharpest LOFAR images of the Universe so far.</abstract>
                <slug>acud-2025-4432-unveiling-the-radio-sky-high-resolution-lofar-imaging-with-advanced-computing</slug>
                <track>Data Processing &amp; Cloud Solutions</track>
                
                <persons>
                    <person id='1414'>Reinout van Weeren</person>
                </persons>
                <language>en</language>
                <description>LOFAR is a low-frequency radio telescope composed of thousands of simple antennas distributed across Europe, with most located in the north of the Netherlands. By combining signals from these stations, LOFAR can in principle deliver extremely high-resolution images over large regions of the sky. Achieving this, however, is highly challenging: the telescope generates massive data volumes that require complex calibration and imaging algorithms. Without careful calibration, the resulting images remain severely blurred.

Another hurdle is the computational expense, as single observations can produce images exceeding 10 gigapixels. These challenges long prevented imaging at LOFAR&#8217;s full theoretical resolution. Recently, by developing new algorithms and exploiting SURF&#8217;s high-throughput processing platform Spider, operated by the Distributed Data Processing team, together with SURF&#8217;s AI expertise from the High-Performance Machine Learning team, we have overcome these barriers&#8212;producing the deepest, highest-resolution LOFAR images of the Universe to date.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/7FHCNZ/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/7FHCNZ/feedback/</feedback_url>
            </event>
            <event guid='034cd7f6-26d2-534b-912e-2619de6442f6' id='4430'>
                <room>Expedition</room>
                <title>Spaceborne air-sea heat flux enabled with Spider</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T11:30:00+01:00</date>
                <start>11:30</start>
                <duration>00:25</duration>
                <abstract>We live in a golden age of satellite remote sensing. The European Space Agency&apos;s Sentinel program in particular, which focuses on spaceborne Earth observation (EO), has set the standard for continuous, free and readily available satellite observations and products. The archive of EO data is vast, and continues to grow by many petabytes a year. From a scientific perspective, these vast quantities enable ever more research to be conducted, especially in the age of data-hungry artificial intelligence. Yet, without special infrastructure, the sheer magnitude of satellite data makes it unwieldy. Cloud-hosted services tailored to satellite data exist (notably Google Earth Engine), but these tend to have their own shortcomings, such as limited availability of low-level (raw) satellite observations.

In our research group we study the ocean surface with spaceborne radars. Radars are uniquely suited to ocean monitoring: when mounted on a satellite they provide large coverage while being mostly unaffected by atmospheric interference (e.g. they can look through clouds), and the signals reflecting from the ocean surface provide all sorts of geophysical insights that are used in meteorology, storm tracking, swell predictions, and much more. But when looking at high-resolution radar imagery of the ocean, one can identify a wealth of atmospheric information that is commonly ignored. Thus, we set about developing a methodology for extracting this information from Sentinel-1&apos;s entire 10+ year data catalogue, focusing on the heat-flux exchange between the ocean and atmosphere, a critical climate variable for which no satellite products are available.

In this presentation we will outline the development of our methodology, which we call FluxSAR, and share preliminary scientific results. The presentation focuses on the challenges involved in working with the petabytes of high-resolution radar data, and how SURF&apos;s Spider HPC has enabled us to tackle these challenges head on.</abstract>
                <slug>acud-2025-4430-spaceborne-air-sea-heat-flux-enabled-with-spider</slug>
                <track>Data Processing &amp; Cloud Solutions</track>
                
                <persons>
                    <person id='1403'>Owen O&apos;Driscoll</person>
                </persons>
                <language>en</language>
                <description>1. Need for air-sea flux information
2. Utilizing existing remote sensing data
3. Too much data, HPC needed
4. Spider to the rescue</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/XTPDP7/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/XTPDP7/feedback/</feedback_url>
            </event>
            <event guid='223b0b1b-e0b7-5353-9f4e-c30b8a79557a' id='4481'>
                <room>Expedition</room>
                <title>Modern Data Lakehouse for Research</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T14:10:00+01:00</date>
                <start>14:10</start>
                <duration>00:25</duration>
                <abstract>Next year, SURF will start investigating a Data Lakehouse, among other things to explore its application in scientific workflows.</abstract>
                <slug>acud-2025-4481-modern-data-lakehouse-for-research</slug>
                <track>Data Processing &amp; Cloud Solutions</track>
                
                <persons>
                    <person id='1479'>Robert Griffioen</person>
                </persons>
                <language>en</language>
                <description>I will briefly discuss the concept of a Data Lakehouse, its architecture, and its components. One of their characteristics is that they offer Data Warehouse-like functionality, such as consistency, while also being able to process unstructured and semi-structured data. We have already carried out investigations in a number of projects encompassing scientific fields such as earth observation, sentiment analysis, and bio-imaging. I will share some preliminary insights into the kinds of scientific workflows it can be applied to, and show how the various Data Lakehouse components can be used in those workflows. The talk will also touch upon commercial solutions such as Databricks, which offer a full stack including an ML-ops component, as well as open-source solutions.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/XBGRDU/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/XBGRDU/feedback/</feedback_url>
            </event>
            <event guid='dd92f4c3-a283-5c69-87dd-e9e3504824e1' id='4427'>
                <room>Expedition</room>
                <title>SPECTRUM Technical Blueprint and Strategic Agenda: Delivering Europe&apos;s Roadmap for Exabyte-Scale Scientific Infrastructure</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T15:25:00+01:00</date>
                <start>15:25</start>
                <duration>00:25</duration>
                <abstract>This presentation introduces the public draft of SPECTRUM&apos;s Technical Blueprint and Strategic Research, Innovation and Deployment Agenda (SRIDA) for European data-intensive science infrastructure. The Technical Blueprint addresses the technical aspects whilst the SRIDA defines the strategic and policy dimensions of a unified European compute and data continuum. Both are based on comprehensive requirements analysis from High-Energy Physics and Radio Astronomy communities, including HL-LHC&apos;s exabyte-scale data processing, SKA&apos;s unprecedented computational demands, and LOFAR&apos;s distributed data processing challenges. We invite the advanced computing community to provide feedback during the open consultation phase to ensure the final documents address the research infrastructure needs.</abstract>
                <slug>acud-2025-4427-spectrum-technical-blueprint-and-strategic-agenda-delivering-europe-s-roadmap-for-exabyte-scale-scientific-infrastructure</slug>
                <track>Data Processing &amp; Cloud Solutions</track>
                
                <persons>
                    <person id='782'>Sergio Andreozzi</person>
                </persons>
                <language>en</language>
                <description>The SPECTRUM project has developed a comprehensive framework for Europe&apos;s next-generation research infrastructure to support exabyte-scale scientific computing. 

The Technical Blueprint addresses the fragmentation of current European computing resources by proposing an integrated technical architecture spanning the European compute and data continuum. The SRIDA provides actionable priorities, implementation roadmaps, and policy recommendations for policymakers, infrastructure providers, and research communities. Together, they define both the technical capabilities and strategic governance needed for seamless workload migration across heterogeneous infrastructure whilst maintaining sovereignty and reducing environmental impact.

This presentation will outline the key architectural components, strategic priorities, and implementation pathways emerging from our community-driven analysis. We are currently in the open consultation phase and actively solicit feedback from the advanced computing community to ensure the final blueprint and agenda address the research community needs, helping shape Europe&apos;s strategic approach to exascale scientific infrastructure.

For more information: www.spectrumproject.eu</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/QKLGP7/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/QKLGP7/feedback/</feedback_url>
            </event>
            
        </room>
        <room name='Mission 1' guid='8d937413-b6f4-5967-8917-69165be5f68b'>
            <event guid='e6f9fbc2-e702-5722-bb6a-3341e4da341a' id='4431'>
                <room>Mission 1</room>
                <title>interTwin: Advancing Scientific Digital Twins through AI, Federated Computing and Data</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T11:00:00+01:00</date>
                <start>11:00</start>
                <duration>00:25</duration>
                <abstract>Digital Twins (DT), highly accurate virtual representations of physical entities, have revolutionized industries by integrating numerical simulations and observational data to create realistic, dynamic models. Initially developed for industrial applications, digital twins have expanded into diverse domains. They enable accurate predictions by simulating real-world performance, identifying potential issues, and iterating feedback loops for optimized decision-making. This paradigm leverages advanced computational techniques to enhance our understanding and management of complex systems. The Horizon Europe interTwin project has developed a highly generic Digital Twin Engine (DTE) to support interdisciplinary Digital Twins (DT). The project brought together infrastructure providers, technology providers and DT use cases from different domains. This group of experts enables the co-design of both the DTE Blueprint Architecture and the prototype platform, benefiting not only end users like scientists and policymakers but also DT developers. The main contributions of the DTE are: (1) a federated architecture that allows seamless integration of distributed computing and storage resources across various institutions, (2) standardized interfaces and protocols that support interoperability among different scientific fields, (3) a co-design approach that includes requirements from high-energy physics, radio astronomy, gravitational-wave astrophysics, climate research, and environmental monitoring, and (4) strong methods for assessing AI model quality, provenance, and uncertainty measurement in federated settings.
The talk will focus on the DTE software components developed and integrated, detailing some of the pilot use cases that have successfully driven the DTE implementation.</abstract>
                <slug>acud-2025-4431-intertwin-advancing-scientific-digital-twins-through-ai-federated-computing-and-data</slug>
                <track>Innovative Technologies &amp; Services</track>
                
                <persons>
                    <person id='1406'>Andrea Manzi</person>
                </persons>
                <language>en</language>
                <description>Digital Twins (DT), highly accurate virtual representations of physical entities, have revolutionized industries by integrating numerical simulations and observational data to create realistic, dynamic models. Initially developed for industrial applications, digital twins have expanded into diverse domains. They enable accurate predictions by simulating real-world performance, identifying potential issues, and iterating feedback loops for optimized decision-making. This paradigm leverages advanced computational techniques to enhance our understanding and management of complex systems. The Horizon Europe interTwin project has developed a highly generic Digital Twin Engine (DTE) to support interdisciplinary Digital Twins (DT). The project brought together infrastructure providers, technology providers and DT use cases from different domains. This group of experts enables the co-design of both the DTE Blueprint Architecture and the prototype platform, benefiting not only end users like scientists and policymakers but also DT developers. The main contributions of the DTE are: (1) a federated architecture that allows seamless integration of distributed computing and storage resources across various institutions, (2) standardized interfaces and protocols that support interoperability among different scientific fields, (3) a co-design approach that includes requirements from high-energy physics, radio astronomy, gravitational-wave astrophysics, climate research, and environmental monitoring, and (4) strong methods for assessing AI model quality, provenance, and uncertainty measurement in federated settings.
The talk will focus on the DTE software components developed and integrated, detailing some of the pilot use cases that have successfully driven the DTE implementation.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/3JX3NH/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/3JX3NH/feedback/</feedback_url>
            </event>
            <event guid='ed7e40bb-2ec9-54c2-ace6-d901f3a450df' id='4425'>
                <room>Mission 1</room>
                <title>EAR (Energy Aware Runtime) dashboard workshop</title>
                <subtitle></subtitle>
                <type>Workshop</type>
                <date>2025-12-04T11:30:00+01:00</date>
                <start>11:30</start>
                <duration>00:50</duration>
                <abstract>Have you ever wondered how much energy your research on Snellius uses? Or how performant or efficient your application is on Snellius? We at SURF did as well. This is why we have developed an end-user-friendly, interactive dashboard that allows you to display the energy usage of your jobs on Snellius and gives you insight into how well your application is using Snellius. This dashboard gives researchers the tools and visualizations for energy-aware computing.</abstract>
                <slug>acud-2025-4425-ear-energy-aware-runtime-dashboard-workshop</slug>
                <track>Innovative Technologies &amp; Services</track>
                
                <persons>
                    <person id='222'>Casper van Leeuwen</person>
                </persons>
                <language>en</language>
                <description>We, the HPCV team at SURF, have developed an end-user (researcher) focused energy dashboard that displays an overview of the energy statistics and efficiency metrics of jobs submitted to the supercomputer. This dashboard is built to display metrics collected from the EAR (Energy Aware Runtime) software, which provides energy management, accounting, and optimization for supercomputers. With the interactive figures that are displayed, end users should be able to gain insight into the energy footprint of their research, and what they can do to reduce or optimize their energy usage. In this workshop we want to give an introduction to the usage of EAR and the EAR dashboard. To get the full potential out of this workshop, we recommend that attendees run their own jobs and analyze their energy usage. This workshop is geared toward every expertise level.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/Y8REXW/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/Y8REXW/feedback/</feedback_url>
            </event>
            <event guid='3a1e4d0a-38e1-531e-a2c5-f1e20ad69de4' id='4411'>
                <room>Mission 1</room>
                <title>How can a community-driven approach improve competences in energy-efficient scientific computing in the Netherlands?</title>
                <subtitle></subtitle>
                <type>Interactive Presentation with Conversation &amp; Input</type>
                <date>2025-12-04T14:10:00+01:00</date>
                <start>14:10</start>
                <duration>00:50</duration>
                <abstract>This session aims to explore the feasibility of a community-driven approach to foster energy-efficient scientific computing in the Netherlands. By engaging researchers, support staff, and infrastructure providers, such an initiative can establish a self-sustaining Community of Practice, create an open knowledge base with practical training on energy monitoring and reduction in scientific computing, and organize nationwide training sessions to build foundational expertise. Together, these actions can complement infrastructure-level efficiency improvements with user-level practices, advancing sustainable and environmentally responsible research. A soon-to-be-launched initiative supported by TDCC-NES seeks to do this in the Natural and Engineering Sciences (NES) domain. During the session, participants will have the opportunity to learn more about the initiative, share feedback, and discuss ways to expand its impact to other scientific domains in the Netherlands.</abstract>
                <slug>acud-2025-4411-how-can-a-community-driven-approach-improve-competences-in-energy-efficient-scientific-computing-in-the-netherlands</slug>
                <track>Innovative Technologies &amp; Services</track>
                
                <persons>
                    <person id='33'>Dr. Serkan Girgin</person><person id='595'>Bhawiyuga, Adhitya (UT-ITC)</person>
                </persons>
                <language>en</language>
                <description>The growing energy demands of scientific computing present significant challenges for environmental sustainability, particularly in disciplines that rely on large-scale, compute-intensive methods. Although advances in energy-efficient hardware and infrastructure have been made, researchers often remain unaware of the energy consumption and environmental impact of their computational tasks. This lack of awareness is largely due to the absence of systematic energy reporting from infrastructure providers and the limited availability of monitoring tools that allow task-level assessment. Additionally, current scientific computing frameworks typically do not offer pre-execution energy estimates, limiting researchers&#8217; ability to make informed trade-offs between performance and energy efficiency. As a result, without appropriate tools and expertise to measure and interpret energy use, researchers cannot fully evaluate the environmental footprint of their work or implement practices that support sustainable computing.

In this session, we aim to explore the potential of a community-driven approach to raise awareness and encourage the adoption of energy-efficient practices in the Netherlands. By engaging research organizations, support institutions, and infrastructure providers, such a collaborative effort can pursue three interconnected objectives to foster a culture of energy-conscious computing: (1) establish a self-sustaining Community of Practice, where researchers, support staff, and infrastructure providers collaboratively share and advance best practices for energy-efficient computing; (2) develop an open knowledge base offering training on practical tools and methods for monitoring and reducing energy consumption; and (3) organize a nationwide series of training sessions to build foundational expertise in energy-efficient computing among researchers and support staff. Together, these initiatives can complement ongoing infrastructure-level efficiency improvements with user-level practices, advancing the broader goal of sustainable and environmentally responsible research. During the session, we will present a soon-to-be-launched initiative supported by the TDCC-NES, designed to improve competences in energy-efficient scientific computing within the Natural and Engineering Sciences (NES) domain, and gather ideas and feedback from participants for expanding its impact to other domains across the Netherlands.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/Q8H8RN/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/Q8H8RN/feedback/</feedback_url>
            </event>
            <event guid='d6f606e9-b430-5d29-98e5-9ed0f8a7e434' id='4423'>
                <room>Mission 1</room>
                <title>Visualization support from SURF on Snellius (and beyond)</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T15:25:00+01:00</date>
                <start>15:25</start>
                <duration>00:25</duration>
                <abstract>An overview of the data visualization options that are available from SURF, including usage of Snellius and other infrastructure, plus available support from SURF for visualization projects.</abstract>
                <slug>acud-2025-4423-visualization-support-from-surf-on-snellius-and-beyond</slug>
                <track>Innovative Technologies &amp; Services</track>
                
                <persons>
                    <person id='19'>Paul Melis</person>
                </persons>
                <language>en</language>
                <description>This presentation will touch upon a few technical topics such as remote visualization, OpenOnDemand and GPU usage. We will also talk about a few non-technical things such as high-level workflows, support and courses. Finally, we show some visualization examples.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/D7WK9G/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/D7WK9G/feedback/</feedback_url>
            </event>
            
        </room>
        <room name='Mission 2' guid='7b8786d5-83c5-5062-a247-38f6a53d1cb4'>
            <event guid='3513dc8e-5f18-5e77-82e6-c564cdee2d95' id='4416'>
                <room>Mission 2</room>
                <title>Accelerating MPI-AMRVAC on Snellius and LUMI</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T11:00:00+01:00</date>
                <start>11:00</start>
                <duration>00:25</duration>
                <abstract>[MPI-AMRVAC](https://amrvac.org) is a parallel adaptive mesh refinement framework aimed at solving partial differential equations using a number of different numerical schemes. It is written in Fortran 90 and uses MPI for parallelisation across many CPUs.

In modern HPC infrastructure, however, most compute power is not in the CPUs but in accelerators such as GPUs. To make use of this compute power, we have enabled MPI-AMRVAC to run on GPUs using OpenACC, enabling larger-scale simulations than ever before.

I will discuss the advantages and challenges of OpenACC in our experience, and highlight the achieved performance improvements in MPI-AMRVAC on both Snellius and LUMI.</abstract>
                <slug>acud-2025-4416-accelerating-mpi-amrvac-on-snellius-and-lumi</slug>
                <track>High Performance Computing</track>
                
                <persons>
                    <person id='1374'>Leon Oostrum</person>
                </persons>
                <language>en</language>
                <description>[MPI-AMRVAC](https://amrvac.org) is a parallel adaptive mesh refinement framework aimed at solving partial differential equations with a number of different numerical schemes. It is written in Fortran 90 and uses MPI for parallelisation across many CPUs.

In modern HPC infrastructure, however, most compute power lies not in the CPUs but in accelerators such as GPUs. To make use of this compute power, we have enabled MPI-AMRVAC to run on GPUs using OpenACC, enabling larger-scale simulations than ever before.

I will discuss the advantages and challenges of OpenACC in our experience, and highlight the achieved performance improvement in MPI-AMRVAC on both Snellius and LUMI.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/Z7DXLP/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/Z7DXLP/feedback/</feedback_url>
            </event>
            <event guid='daf09537-3760-578d-b513-8fb0dce7d8d3' id='4412'>
                <room>Mission 2</room>
                <title>PartitionedArrays: an alternative programming model for distributed-memory parallel systems</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T11:30:00+01:00</date>
                <start>11:30</start>
                <duration>00:25</duration>
                <abstract>In this presentation, we discuss the PartitionedArrays programming model as an alternative to the message passing interface (MPI). We present the key features of this model and illustrate how it can help users of Snellius and other supercomputers to reduce the burden of implementing complex distributed-memory parallel applications. We illustrate the capabilities of this model with the implementation of key kernels in scientific computing such as the distributed sparse matrix-vector product (SpMV), the distributed sparse matrix-matrix product (SpMM), as well as the high-performance conjugate gradient (HPCG) benchmark used in the TOP500 supercomputer list. We also compare the performance of the resulting codes against state-of-the-art implementations, showing that the proposed model improves user experience without compromising performance, or even improving it.</abstract>
                <slug>acud-2025-4412-partitionedarrays-an-alternative-programming-model-for-distributed-memory-parallel-systems</slug>
                <track>High Performance Computing</track>
                
                <persons>
                    <person id='1362'>Francesc Verdugo</person>
                </persons>
                <language>en</language>
                <description>MPI is the gold standard for programming distributed-memory parallel computers, but it comes with well-known challenges. The programmer explicitly controls data distribution and communication, making the logic of MPI-enabled algorithms significantly more complex than their sequential versions. Debugging this additional logic at large scales is cumbersome or even impractical. Execution order might affect results, and inspecting local variables can be very tedious and time-consuming, even for a moderate number of processes. Partitioned Global Address Space (PGAS) systems and other alternatives to MPI have been introduced to address these challenges. They often aim to free users from communication-related details, but they offer less control over performance and face a strong adoption barrier, as the programming model of MPI is deeply rooted in the high-performance computing (HPC) community. The PartitionedArrays programming model solves the challenges of MPI without the limitations of PGAS. It provides an effective way of expressing and debugging the logic of distributed applications instead of trying to hide these details from the user. To this end, PartitionedArrays decouples the number of parts used for data partitioning from the number of processes that run the code. Hence, the logic of data distribution and communication can be debugged on a single process using conventional tools. Moreover, computation and communication are written as a sequence of logically collective phases, which (unlike many MPI directives) have deterministic semantics independently of process execution order. This allows one to implement safety checks and rule out the possibility of deadlocks. These additional benefits come with virtually no penalty in performance, since MPI can still be used to run algorithms implemented with PartitionedArrays by setting the number of parts equal to the number of processes.
In addition, the logic of many MPI codes can be expressed in PartitionedArrays, allowing applications developed with MPI in mind to be readily ported and minimizing the adoption barrier in the HPC community.

PartitionedArrays is FAIR software available at https://github.com/PartitionedArrays/PartitionedArrays.jl</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/YBPFRY/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/YBPFRY/feedback/</feedback_url>
            </event>
            <event guid='696029eb-c4d0-514b-8727-79dc97773b95' id='4415'>
                <room>Mission 2</room>
                <title>Real-time Quantitative MRI Reconstruction</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T14:10:00+01:00</date>
                <start>14:10</start>
                <duration>00:25</duration>
                <abstract>In this talk, we address the challenges of quantitative MRI (qMRI) reconstruction and introduce COMPAS, a flexible and GPU-accelerated toolkit designed for use in qMRI research. Our evaluation shows that COMPAS significantly reduces reconstruction times, from hours to minutes, using the GPU infrastructure provided by SURF, including the Snellius and LUMI supercomputers.</abstract>
                <slug>acud-2025-4415-real-time-quantitative-mri-reconstruction</slug>
                <track>High Performance Computing</track>
                
                <persons>
                    <person id='1336'>Alessio Sclocco</person><person id='1351'>Stijn Heldens</person><person id='1378'>Oscar van der Heide</person>
                </persons>
                <language>en</language>
                <description>Quantitative MRI (qMRI) has great potential to transform clinical radiology by offering higher-quality medical images while reducing acquisition times. This enables faster diagnoses by radiologists and shorter scanning times for patients. However, the computational demands of qMRI algorithms are significant, often causing image reconstruction to take hours and thus hindering clinical adoption.

We present COMPAS, a composable toolkit of high-performance qMRI primitives for developing state-of-the-art qMRI methods. COMPAS hides the technical complexity required to achieve near-real-time performance while providing an easy-to-use interface for both C++ and Julia.

COMPAS integrates several cutting-edge technologies, including work developed at the Netherlands eScience Center. We use Kernel Tuner to auto-tune the performance of individual GPU kernels. We develop KMM, a parallel dataflow and memory-manager layer for multi-GPU systems that minimizes data transfers, reuses GPU allocations, and overlaps computation with communication. We also perform selected operations in low precision to increase performance at the cost of a minimal loss in numerical accuracy. Finally, by targeting both CUDA and HIP, we support AMD and NVIDIA GPUs with a single codebase.

We present results using Snellius (NVIDIA H100) and LUMI (AMD MI250X) supercomputers, reducing reconstruction times from hours to nearly one minute, making qMRI ready for potential use in clinical trials.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/3FS7CU/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/3FS7CU/feedback/</feedback_url>
            </event>
            <event guid='fca13c66-8e5d-51cb-b19b-f5f26c8c9236' id='4407'>
                <room>Mission 2</room>
                <title>Preparing for Einstein Telescope: GPU-native scientific computing without compromises</title>
                <subtitle></subtitle>
                <type>Short presentation with Q&amp;A</type>
                <date>2025-12-04T15:25:00+01:00</date>
                <start>15:25</start>
                <duration>00:25</duration>
                <abstract>Gravitational wave astronomy has advanced from theoretical prediction to observational reality, with over 200 black hole and neutron star mergers detected in the past decade. The Einstein Telescope, a proposed next-generation detector with a candidate site at the border of the Netherlands, Germany, and Belgium, is expected to detect hundreds of thousands of events per year. However, analyzing this unprecedented volume of observations poses a fundamental challenge, as existing software cannot scale to extract science from such a rich dataset. We present ongoing development of a GPU-native framework written in JAX that accelerates the analysis of gravitational wave data from hours to mere minutes. Crucially, our approach avoids using machine learning surrogates, preserving high fidelity in the results while achieving this speedup. By developing and testing our framework on the Snellius GPU cluster, we underscore the Netherlands&apos; active role in both the instrumentation and the data analysis for the Einstein Telescope.</abstract>
                <slug>acud-2025-4407-preparing-for-einstein-telescope-gpu-native-scientific-computing-without-compromises</slug>
                <track>High Performance Computing</track>
                
                <persons>
                    <person id='1347'>Thibeau Wouters</person>
                </persons>
                <language>en</language>
                <description>See abstract</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://pretalx.surf.nl/acud-2025/talk/LLQ3MN/</url>
                <feedback_url>https://pretalx.surf.nl/acud-2025/talk/LLQ3MN/feedback/</feedback_url>
            </event>
            
        </room>
        
    </day>
    
</schedule>
