<?xml version='1.0' encoding='utf-8' ?>
<iCalendar xmlns:pentabarf='http://pentabarf.org' xmlns:xCal='urn:ietf:params:xml:ns:xcal'>
    <vcalendar>
        <version>2.0</version>
        <prodid>-//Pentabarf//Schedule//EN</prodid>
        <x-wr-caldesc></x-wr-caldesc>
        <x-wr-calname></x-wr-calname>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>AGUFK8@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-AGUFK8</pentabarf:event-slug>
            <pentabarf:title>Opening Experience</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T093000</dtstart>
            <dtend>20251204T094500</dtend>
            <duration>0.01500</duration>
            <summary>Opening Experience</summary>
            <description>Rob and Emiel will be present at our event as energisers. They will open the day, amaze us with their award-winning World Cup act, and have promised to inspire us in the area of mindset change. We hope you are ready for some unforgettable and magical moments.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Opening Experience</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/AGUFK8/</url>
            <location>Progress</location>
            
            <attendee>Rob en Emiel</attendee>
            
            <attendee>Valeriu Codreanu</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>D8NASW@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-D8NASW</pentabarf:event-slug>
            <pentabarf:title>Keynote Maria Girone</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T094500</dtstart>
            <dtend>20251204T103500</dtend>
            <duration>0.05000</duration>
            <summary>Keynote Maria Girone</summary>
            <description>Maria has spent her career driving innovation at the intersection of science and computing. At CERN, she has led transformative projects that bring together HPC, AI, and cloud technologies to handle the immense data challenges of the Large Hadron Collider. A leader, collaborator, and advocate for diversity in STEM, Maria continues to inspire how we think about computing for discovery.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Keynote</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/D8NASW/</url>
            <location>Progress</location>
            
            <attendee>Maria Girone</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>UCP9QG@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-UCP9QG</pentabarf:event-slug>
            <pentabarf:title>Introducing WeatherGenerator</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T110000</dtstart>
            <dtend>20251204T112500</dtend>
            <duration>0.02500</duration>
            <summary>Introducing WeatherGenerator</summary>
            <description>In recent years, artificial intelligence has grown to be a ubiquitous tool in earth and environmental sciences. In meteorology and climate sciences, neural networks have proven to be the superior strategy for a multitude of data-driven tasks such as bias correction, downscaling and even nowcasting. Lately, weather prediction and data assimilation - traditionally the domain of state-of-the-art large numerical HPC codes - have also shown substantial improvements by using graph neural networks or transformer architectures. As a result, the current best weather forecasts are obtained with models such as Google’s GraphCast and ECMWF’s AIFS, both trained on the global reanalysis dataset ERA5. As a bonus, the inference rollout requires just a fraction of the computational cost of a traditional forecast.

Although machine learning outperforms traditional methods in these specific tasks, the question remains whether a unified core model, equipped with a rich latent space, opens the pathway towards improved predictive skill and increased flexibility. Several initiatives to build such a foundation model of the atmosphere have emerged and shown promising results. Within the EU project WeatherGenerator, we aim to construct a large, high-resolution foundation model for weather prediction and atmospheric climate modeling. We aim to combine a very large volume of reanalysis products, observational data and climate model output into a multi-channel transformer architecture that can easily be fine-tuned to execute common weather modeling and prediction tasks. The pre-training will be a technical feat that has to be executed on Europe’s exascale compute infrastructure. To substantiate the claim of being a foundation model, the project hosts many stakeholders that will re-implement existing ML applications with the WeatherGenerator model.

In this talk I will motivate this ambitious endeavour and outline the innovative ideas and techniques behind WeatherGenerator. I will briefly discuss some of the future applications and explain how the Netherlands eScience Center plans to bring this technology to potential stakeholders such as the European research community, public institutions and industry.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/UCP9QG/</url>
            <location>Progress</location>
            
            <attendee>Gijs van den Oord</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>A8QYRM@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-A8QYRM</pentabarf:event-slug>
            <pentabarf:title>Fundamental bottlenecks for AI and HPC</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T113000</dtstart>
            <dtend>20251204T115500</dtend>
            <duration>0.02500</duration>
            <summary>Fundamental bottlenecks for AI and HPC</summary>
            <description>Is your dataloader asleep at the wheel? Is over-eager logging killing your performance because it&#x27;s forcing CPU&lt;-&gt;GPU syncs? Does 100% GPU utilization actually mean that your GPU is being used effectively? (Hint: it doesn&#x27;t!)
In this talk we&#x27;ll go over the fundamental bottlenecks of compute: those things in any HPC system that will cause your workflow to be slower than it needs to be, and what you can do to transform your workflow from &#x27;it eventually works&#x27; to &#x27;it works remarkably well&#x27;.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/A8QYRM/</url>
            <location>Progress</location>
            
            <attendee>Robert-Jan Schlimbach</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>7YLTTZ@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-7YLTTZ</pentabarf:event-slug>
            <pentabarf:title>No GPU required: Training and using scalable LLMs on CPUs</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T120000</dtstart>
            <dtend>20251204T122500</dtend>
            <duration>0.02500</duration>
            <summary>No GPU required: Training and using scalable LLMs on CPUs</summary>
            <description>Memory-based language modeling, proposed by Van den Bosch (2005), is a machine learning approach to next-token prediction based on the k-nearest neighbor (k-NN) classifier (Aha, Kibler, and Albert, 1991; Daelemans and Van den Bosch, 2006a). This non-neural machine learning approach relies on storing all training data in memory, and generalizes from this training data when classifying unseen new data using similarity-based inference. Memory-based language modeling is functionally roughly equivalent to decoder Transformers (Vaswani et al., 2017), in the sense that both can run in autoregressive text generation mode and predict next tokens based on a certain amount of prior context.

While training a memory-based language model is generally low-cost, as it involves a one-pass reading of training data and does not involve any convergence-based iterative training, a naive implementation would render it useless for inference. The upper-bound complexity of k-nearest neighbor classification is notoriously unfavorable, i.e. O(nd), where n is the number of examples in memory, and d is the number of features or dimensions (e.g. context size). However, improvements and fast approximations are available. Daelemans et al. (2010) provide a range of approximations that offer fast classification and data compression using prefix tries. Another notable aspect of memory-based language modeling, as observed earlier by Van den Bosch (2006b), is that its next-word prediction performance increases log-linearly: with every 10-fold increase in the amount of training data, next-word prediction accuracy increases by a more or less constant amount (although performance may eventually plateau, a point we never reached because of memory limitations).

The relatively low costs of both training and inference make memory-based language modeling a potentially eco-friendly alternative to the generally costly training of Transformer-based language models (Strubell, 2019). All experiments carried out so far with memory-based language models have been based on publicly available software, with TiMBL as the basic classification engine (https://github.com/LanguageMachines/timbl). All required scripts for training and inference are available on GitHub (https://github.com/antalvdb/memlm).

References

D. W. Aha, D. Kibler, and M. Albert. 1991. Instance-based learning algorithms. Machine Learning, 6:37–66.

W. Daelemans and A. Van den Bosch. 2005. Memory-based language processing. Cambridge University Press, Cambridge, UK.

W. Daelemans, J. Zavrel, K. Van der Sloot, and A. Van den Bosch. 2010. TiMBL: Tilburg memory based learner, version 6.3, reference guide. Technical Report ILK 10-01, ILK Research Group, Tilburg University.

A. Van den Bosch. 2006a. Scalable classification-based word prediction and confusible correction. Traitement Automatique des Langues, 46(2):39–63.

Antal van den Bosch. 2006b. All-word prediction as the ultimate confusible disambiguation. In Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing, pages 25–32, New York City, New York. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, pages 6000–6010, Red Hook, NY, USA. Curran Associates Inc.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/7YLTTZ/</url>
            <location>Progress</location>
            
            <attendee>Antal van den Bosch</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>WWKFUY@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-WWKFUY</pentabarf:event-slug>
            <pentabarf:title>Energy Boost: Mind in Motion</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T132500</dtstart>
            <dtend>20251204T133500</dtend>
            <duration>0.01000</duration>
            <summary>Energy Boost: Mind in Motion</summary>
            <description>Through a short, interactive moment, they’ll challenge how we perceive focus, logic, and awareness, demonstrating that our brains might just be capable of more than we think.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Energizer</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/WWKFUY/</url>
            <location>Progress</location>
            
            <attendee>Rob en Emiel</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>MFYS89@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-MFYS89</pentabarf:event-slug>
            <pentabarf:title>Next-Generation Applications for Advancing Scientific Discovery</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T133500</dtstart>
            <dtend>20251204T140500</dtend>
            <duration>0.03000</duration>
            <summary>Next-Generation Applications for Advancing Scientific Discovery</summary>
            <description>Engaging in discussions and debates about the future of workflows and applications is just as crucial for advancing research as conversations about infrastructure. It is essential to ensure this topic receives equal attention and support, especially when strategising on infrastructure planning and development, to provide a well-rounded approach that fosters innovation and collaboration.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>In Conversation</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/MFYS89/</url>
            <location>Progress</location>
            
            <attendee>Sander Houweling</attendee>
            
            <attendee>Prof. Zeila Zanolli</attendee>
            
            <attendee>Sagar Dolas</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>MHUJHK@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-MHUJHK</pentabarf:event-slug>
            <pentabarf:title>TULIP: A Prototype for Open, Locally Hosted LLM Infrastructure</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T141000</dtstart>
            <dtend>20251204T143500</dtend>
            <duration>0.02500</duration>
            <summary>TULIP: A Prototype for Open, Locally Hosted LLM Infrastructure</summary>
            <description>TULIP is TU Delft’s prototype for open, locally hosted LLM infrastructure. This session will highlight:

- Why a local pilot like TULIP matters for researcher engagement
- How it complements WiLLMa, SURF’s AI hub initiative
- Early lessons on balancing technical feasibility with governance and sustainability
- Open discussion on how institutional platforms can provide sustainable, open alternatives to proprietary AI services

The session is aimed at researchers, research engineers, and infrastructure managers curious about first steps in hosting open LLMs on institutional hardware.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/MHUJHK/</url>
            <location>Progress</location>
            
            <attendee>Azza Ahmed</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>9LR7FS@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-9LR7FS</pentabarf:event-slug>
            <pentabarf:title>Developing Robust Search with Open-Source LLMs</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T152500</dtstart>
            <dtend>20251204T155000</dtend>
            <duration>0.02500</duration>
            <summary>Developing Robust Search with Open-Source LLMs</summary>
            <description>Our talk will present our research on robust search with open-source LLMs and briefly describe our engineering work on a Triton kernel that speeds up training and inference with learned sparse retrieval models; both efforts leverage the computational power of the LUMI supercomputer.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/9LR7FS/</url>
            <location>Progress</location>
            
            <attendee>Dylan Ju</attendee>
            
            <attendee>Yibin Lei</attendee>
            
            <attendee>Thong Nguyen</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>VLWDSR@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-VLWDSR</pentabarf:event-slug>
            <pentabarf:title>Technology &amp; Service Updates</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T155500</dtstart>
            <dtend>20251204T161500</dtend>
            <duration>0.02000</duration>
            <summary>Technology &amp; Service Updates</summary>
            <description>This session offers a concise overview of what’s new, what’s changing, and how these innovations will support the community in the year ahead.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Technology &amp; Service Updates</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/VLWDSR/</url>
            <location>Progress</location>
            
            <attendee>Walter Lioen</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>QN8AS7@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-QN8AS7</pentabarf:event-slug>
            <pentabarf:title>Closing Experience</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T161500</dtstart>
            <dtend>20251204T163500</dtend>
            <duration>0.02000</duration>
            <summary>Closing Experience</summary>
            <description>This final moment isn’t just a closing. It’s a celebration, a lasting spark to carry with you beyond today.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Closing Experience</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/QN8AS7/</url>
            <location>Progress</location>
            
            <attendee>Rob en Emiel</attendee>
            
            <attendee>Valeriu Codreanu</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>NRV7JC@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-NRV7JC</pentabarf:event-slug>
            <pentabarf:title>Accelerating CRISPR gRNA Efficiency Prediction on the Snellius HPC system</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T110000</dtstart>
            <dtend>20251204T112500</dtend>
            <duration>0.02500</duration>
            <summary>Accelerating CRISPR gRNA Efficiency Prediction on the Snellius HPC system</summary>
            <description>A guide RNA is the molecule that tells the CRISPR system where to cut or modify DNA. Predicting how well it performs is essential for everything from developing new therapies to improving crops or designing cleaner bioprocesses.

For my MSc project, I used the Snellius supercomputer at SURF, supported through the EuroCC Netherlands infrastructure, to test whether adding RNA structure information could make prediction models smarter. Scaling this workflow on HPC let me process tens of thousands of sequences, train deep-learning models efficiently, and keep every step reproducible.

The work shows how advanced computing can bridge scientific insight and industrial impact, illustrating how reproducible, large-scale AI workflows can drive innovation across sectors that depend on complex biological or experimental data.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/NRV7JC/</url>
            <location>Quest</location>
            
            <attendee>Sjoerd Kelder</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>GF37TS@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-GF37TS</pentabarf:event-slug>
            <pentabarf:title>Benchmarking Delft3D FM on HPC systems for real-life problems in surface water</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T120000</dtstart>
            <dtend>20251204T122500</dtend>
            <duration>0.02500</duration>
            <summary>Benchmarking Delft3D FM on HPC systems for real-life problems in surface water</summary>
            <description>**Need for HPC Optimization in Real-Life Applications**
There is urgency to make Delft3D FM more efficient and scalable for high-performance computing with large-scale models of real-life applications. The range of these applications is quite broad: from forecasting of flooding near the dikes to ecological impact assessments of wind parks and/or floating solar panels, and from the design of harbours to large-scale land reclamation projects. For that purpose, a small project focussed on new benchmarks to assess the current status of the parallel performance. These benchmarks of Delft3D FM were performed, among others, on Snellius from SURF for several typical real-life applications.

**Use of Sixth-Generation Models and Snellius Benchmarks**
Several selected cases are from the sixth-generation models for Rijkswaterstaat. These models are developed and maintained for a broad application range in the main Dutch waterbodies, and are also used by other parties for their applications (requests via iplo.nl). On Snellius, the Apptainer version of Delft3D FM was used for the benchmarks. Deltares offers maintenance and support for this Apptainer version, also in combination with the sixth-generation models. The Apptainer version of Delft3D FM is also available to other users of Snellius.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/GF37TS/</url>
            <location>Quest</location>
            
            <attendee>Menno Genseberger</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>RJ8H93@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-RJ8H93</pentabarf:event-slug>
            <pentabarf:title>ROMEO HPC center: missions and projects</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T141000</dtstart>
            <dtend>20251204T150000</dtend>
            <duration>0.05000</duration>
            <summary>ROMEO HPC center: missions and projects</summary>
            <description>ROMEO HPC Center hosts one of the most powerful academic supercomputers in France. We will present the ecosystem of the HPC center, the supercomputer itself and the other infrastructures, as well as our participation in national and European projects.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Interactive Presentation with Conversation &amp; Input</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/RJ8H93/</url>
            <location>Quest</location>
            
            <attendee>Florence Draux</attendee>
            
            <attendee>Frédéric Mauguière</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>7FHCNZ@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-7FHCNZ</pentabarf:event-slug>
            <pentabarf:title>Unveiling the Radio Sky: High-Resolution LOFAR Imaging with Advanced Computing</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T110000</dtstart>
            <dtend>20251204T112500</dtend>
            <duration>0.02500</duration>
            <summary>Unveiling the Radio Sky: High-Resolution LOFAR Imaging with Advanced Computing</summary>
            <description>LOFAR is a low-frequency radio telescope composed of thousands of simple antennas distributed across Europe, with most located in the north of the Netherlands. By combining signals from these stations, LOFAR can in principle deliver extremely high-resolution images over large regions of the sky. Achieving this, however, is highly challenging: the telescope generates massive data volumes that require complex calibration and imaging algorithms. Without careful calibration, the resulting images remain severely blurred.

Another hurdle is the computational expense, as single observations can produce images exceeding 10 gigapixels. These challenges long prevented imaging at LOFAR’s full theoretical resolution. Recently, by developing new algorithms and exploiting SURF’s high-throughput processing platform Spider, operated by the Distributed Data Processing team, together with SURF’s AI expertise from the High-Performance Machine Learning team, we have overcome these barriers—producing the deepest, highest-resolution LOFAR images of the Universe to date.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/7FHCNZ/</url>
            <location>Expedition</location>
            
            <attendee>Reinout van Weeren</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>XTPDP7@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-XTPDP7</pentabarf:event-slug>
            <pentabarf:title>Spaceborne air-sea heat flux enabled with Spider</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T113000</dtstart>
            <dtend>20251204T115500</dtend>
            <duration>0.02500</duration>
            <summary>Spaceborne air-sea heat flux enabled with Spider</summary>
            <description>1. Need for air-sea flux information
2. Utilizing existing remote sensing data
3. Too much data, HPC needed
4. Spider to the rescue</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/XTPDP7/</url>
            <location>Expedition</location>
            
            <attendee>Owen O&#x27;Driscoll</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>XBGRDU@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-XBGRDU</pentabarf:event-slug>
            <pentabarf:title>Modern Data Lakehouse for Research</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T141000</dtstart>
            <dtend>20251204T143500</dtend>
            <duration>0.02500</duration>
            <summary>Modern Data Lakehouse for Research</summary>
            <description>I will briefly discuss the concept of the Data Lakehouse, its architecture and components. One of its characteristics is that it offers some functionality, such as consistency, similar to a data warehouse, while also being able to process unstructured and semi-structured data. We have already carried out investigations in a number of projects encompassing scientific fields such as earth observation, sentiment analysis, and bio-imaging. I will share some preliminary insights into the kinds of scientific workflows it can be applied to, and show how the various Data Lakehouse components can be used in such workflows. The talk will also touch upon commercial solutions like Databricks, which offer a full stack including an ML-ops component, as well as open-source solutions.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/XBGRDU/</url>
            <location>Expedition</location>
            
            <attendee>Robert Griffioen</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>QKLGP7@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-QKLGP7</pentabarf:event-slug>
            <pentabarf:title>SPECTRUM Technical Blueprint and Strategic Agenda: Delivering Europe&#x27;s Roadmap for Exabyte-Scale Scientific Infrastructure</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T152500</dtstart>
            <dtend>20251204T155000</dtend>
            <duration>0.02500</duration>
            <summary>SPECTRUM Technical Blueprint and Strategic Agenda: Delivering Europe&#x27;s Roadmap for Exabyte-Scale Scientific Infrastructure</summary>
            <description>The SPECTRUM project has developed a comprehensive framework for Europe&#x27;s next-generation research infrastructure to support exabyte-scale scientific computing. 

The Technical Blueprint addresses the fragmentation of current European computing resources by proposing an integrated technical architecture. The SRIDA provides actionable priorities, implementation roadmaps, and policy recommendations for policymakers, infrastructure providers, and research communities. Together, they define both the technical capabilities and strategic governance needed for seamless workload migration across heterogeneous infrastructure whilst maintaining sovereignty and reducing environmental impact.

This presentation will outline the key architectural components, strategic priorities, and implementation pathways emerging from our community-driven analysis. We are currently in the open consultation phase and actively solicit feedback from the advanced computing community to ensure that the final blueprint and agenda address the research community&#x27;s needs, helping shape Europe&#x27;s strategic approach to exabyte-scale scientific infrastructure.

For more information: www.spectrumproject.eu</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/QKLGP7/</url>
            <location>Expedition</location>
            
            <attendee>Sergio Andreozzi</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>3JX3NH@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-3JX3NH</pentabarf:event-slug>
            <pentabarf:title>interTwin: Advancing Scientific Digital Twins through AI, Federated Computing and Data</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T110000</dtstart>
            <dtend>20251204T112500</dtend>
            <duration>0.02500</duration>
            <summary>interTwin: Advancing Scientific Digital Twins through AI, Federated Computing and Data</summary>
            <description>Digital Twins (DT), highly accurate virtual representations of physical entities, have revolutionized industries by integrating numerical simulations and observational data to create realistic, dynamic models. Initially developed for industrial applications, digital twins have expanded into diverse domains. They enable accurate predictions by simulating real-world performance, identifying potential issues, and iterating feedback loops for optimized decision-making. This paradigm leverages advanced computational techniques to enhance our understanding and management of complex systems. The Horizon Europe interTwin project has developed a highly generic Digital Twin Engine (DTE) to support interdisciplinary DTs. The project brought together infrastructure providers, technology providers, and DT use cases from different domains. This group of experts enables the co-design of both the DTE Blueprint Architecture and the prototype platform, benefiting not only end users such as scientists and policymakers but also DT developers. The main contributions of the DTE are: (1) a federated architecture that allows seamless integration of distributed computing and storage resources across various institutions, (2) standardized interfaces and protocols that support interoperability among different scientific fields, (3) a co-design approach that includes requirements from high-energy physics, radio astronomy, gravitational-wave astrophysics, climate research, and environmental monitoring, and (4) strong methods for assessing AI model quality, provenance, and uncertainty measurement in federated settings.
The talk will focus on the DTE software components developed and integrated, detailing some of the pilot use cases that have successfully driven the DTE implementation.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/3JX3NH/</url>
            <location>Mission 1</location>
            
            <attendee>Andrea Manzi</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>Y8REXW@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-Y8REXW</pentabarf:event-slug>
            <pentabarf:title>EAR (Energy Aware Runtime) dashboard workshop</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T113000</dtstart>
            <dtend>20251204T122000</dtend>
            <duration>0.05000</duration>
            <summary>EAR (Energy Aware Runtime) dashboard workshop</summary>
            <description>We at the HPCV team at SURF have developed an end-user (researcher) focused energy dashboard that displays an overview of the energy statistics and efficiency metrics of jobs submitted to the supercomputer. The dashboard displays metrics collected by the EAR (Energy Aware Runtime) software, which provides energy management, accounting, and optimization for supercomputers. With the interactive figures on display, end users should be able to gain insight into the energy footprint of their research and what they can do to reduce or optimize their energy usage. In this workshop we will give an introduction to the usage of EAR and the EAR dashboard. To get the full potential out of this workshop, we recommend that attendees run their own jobs and analyze their energy usage. This workshop is suitable for every expertise level.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Workshop</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/Y8REXW/</url>
            <location>Mission 1</location>
            
            <attendee>Casper van Leeuwen</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>Q8H8RN@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-Q8H8RN</pentabarf:event-slug>
            <pentabarf:title>How can a community-driven approach improve competences in energy-efficient scientific computing in the Netherlands?</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T141000</dtstart>
            <dtend>20251204T150000</dtend>
            <duration>0.05000</duration>
            <summary>How can a community-driven approach improve competences in energy-efficient scientific computing in the Netherlands?</summary>
            <description>The growing energy demands of scientific computing present significant challenges for environmental sustainability, particularly in disciplines that rely on large-scale, compute-intensive methods. Although advances in energy-efficient hardware and infrastructure have been made, researchers often remain unaware of the energy consumption and environmental impact of their computational tasks. This lack of awareness is largely due to the absence of systematic energy reporting from infrastructure providers and the limited availability of monitoring tools that allow task-level assessment. Additionally, current scientific computing frameworks typically do not offer pre-execution energy estimates, limiting researchers’ ability to make informed trade-offs between performance and energy efficiency. As a result, without appropriate tools and expertise to measure and interpret energy use, researchers cannot fully evaluate the environmental footprint of their work or implement practices that support sustainable computing.

In this session, we aim to explore the potential of a community-driven approach to raise awareness and encourage the adoption of energy-efficient practices in the Netherlands. By engaging research organizations, support institutions, and infrastructure providers, such a collaborative effort can pursue three interconnected objectives to foster a culture of energy-conscious computing: (1) establish a self-sustaining Community of Practice, where researchers, support staff, and infrastructure providers collaboratively share and advance best practices for energy-efficient computing; (2) develop an open knowledge base offering training on practical tools and methods for monitoring and reducing energy consumption; and (3) organize a nationwide series of training sessions to build foundational expertise in energy-efficient computing among researchers and support staff. Together, these initiatives can complement ongoing infrastructure-level efficiency improvements with user-level practices, advancing the broader goal of sustainable and environmentally responsible research. During the session, we will present a soon-to-be-launched initiative supported by the TDCC-NES, designed to improve competences in energy-efficient scientific computing within the Natural and Engineering Sciences (NES) domain, and gather ideas and feedback from participants for expanding its impact to other domains across the Netherlands.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Interactive Presentation with Conversation &amp; Input</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/Q8H8RN/</url>
            <location>Mission 1</location>
            
            <attendee>Dr. Serkan Girgin</attendee>
            
            <attendee>Bhawiyuga, Adhitya (UT-ITC)</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>D7WK9G@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-D7WK9G</pentabarf:event-slug>
            <pentabarf:title>Visualization support from SURF on Snellius (and beyond)</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T152500</dtstart>
            <dtend>20251204T155000</dtend>
            <duration>0.02500</duration>
            <summary>Visualization support from SURF on Snellius (and beyond)</summary>
            <description>This presentation will touch upon a few technical topics such as remote visualization, Open OnDemand, and GPU usage. We will also talk about a few non-technical things such as high-level workflows, support, and courses. Finally, we will show some visualization examples.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/D7WK9G/</url>
            <location>Mission 1</location>
            
            <attendee>Paul Melis</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>Z7DXLP@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-Z7DXLP</pentabarf:event-slug>
            <pentabarf:title>Accelerating MPI-AMRVAC on Snellius and LUMI</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T110000</dtstart>
            <dtend>20251204T112500</dtend>
            <duration>0.02500</duration>
            <summary>Accelerating MPI-AMRVAC on Snellius and LUMI</summary>
            <description>[MPI-AMRVAC](https://amrvac.org) is a parallel adaptive mesh refinement framework aimed at solving partial differential equations by a number of different numerical schemes. It is written in Fortran 90 and uses MPI for parallelisation across many CPUs.

In modern HPC infrastructure, most compute power is however not in the CPUs but in accelerators such as GPUs. In order to make use of this compute power, we have enabled MPI-AMRVAC to run on GPUs using OpenACC, enabling larger-scale simulations than ever before.

I will discuss the advantages and challenges of OpenACC in our experience, and highlight the achieved performance improvement in MPI-AMRVAC on both Snellius and LUMI.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/Z7DXLP/</url>
            <location>Mission 2</location>
            
            <attendee>Leon Oostrum</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>YBPFRY@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-YBPFRY</pentabarf:event-slug>
            <pentabarf:title>PartitionedArrays: an alternative programming model for distributed-memory parallel systems</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T113000</dtstart>
            <dtend>20251204T115500</dtend>
            <duration>0.02500</duration>
            <summary>PartitionedArrays: an alternative programming model for distributed-memory parallel systems</summary>
            <description>MPI is the gold standard for programming distributed-memory parallel computers, but it comes with well-known challenges. The programmer explicitly controls data distribution and communication, making the logic of MPI-enabled algorithms significantly more complex than their sequential versions. Debugging this additional logic at large scales is cumbersome or even impractical: execution order might affect results, and inspecting local variables can be very tedious and time-consuming, even for a moderate number of processes. Partitioned Global Address Space (PGAS) systems and other alternatives to MPI have been introduced to address these challenges. They often aim to free users from communication-related details, but they offer less control over performance and face a strong adoption barrier, as the programming model of MPI is deeply rooted in the high-performance computing (HPC) community.

The PartitionedArrays programming model solves the challenges of MPI without the limitations of PGAS. It provides an effective way of expressing and debugging the logic of distributed applications instead of trying to hide these details from the user. To this end, PartitionedArrays decouples the number of parts used for data partitioning from the number of processes that run the code. Hence, the logic of data distribution and communication can be debugged on a single process using conventional tools. Moreover, computation and communication are written as a sequence of logically collective phases, which (unlike many MPI directives) have deterministic semantics independent of process execution order. This makes it possible to implement safety checks and rule out the possibility of deadlocks. These additional benefits come with virtually no penalty in performance, since MPI can still be used to run algorithms implemented with PartitionedArrays by setting the number of parts equal to the number of processes. In addition, the logic of many MPI codes can be expressed in PartitionedArrays, allowing applications developed with MPI in mind to be readily ported and minimizing the adoption barrier in the HPC community.

PartitionedArrays is FAIR software available at https://github.com/PartitionedArrays/PartitionedArrays.jl</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/YBPFRY/</url>
            <location>Mission 2</location>
            
            <attendee>Francesc Verdugo</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>3FS7CU@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-3FS7CU</pentabarf:event-slug>
            <pentabarf:title>Real-time Quantitative MRI Reconstruction</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T141000</dtstart>
            <dtend>20251204T143500</dtend>
            <duration>0.02500</duration>
            <summary>Real-time Quantitative MRI Reconstruction</summary>
            <description>Quantitative MRI (qMRI) has great potential to transform clinical radiology by offering higher-quality medical images while reducing acquisition times. This enables faster diagnoses by radiologists and shorter scanning times for patients. However, the computational demands of qMRI algorithms are significant, often causing image reconstruction to take hours and thus hindering clinical adoption.

We present COMPAS, a composable toolkit of high-performance qMRI primitives for developing state-of-the-art qMRI methods. COMPAS hides the technical complexity required to achieve near-real-time performance while providing an easy-to-use interface for both C++ and Julia.

COMPAS integrates several cutting-edge technologies, including work developed at the Netherlands eScience Center. We use Kernel Tuner to auto-tune the performance of individual GPU kernels. We develop KMM, a parallel dataflow and memory-manager layer for multi-GPU systems that minimizes data transfers, reuses GPU allocations, and overlaps computation with communication. We also perform selected operations in low precision to increase performance at the cost of a minimal loss in numerical accuracy. Finally, by targeting both CUDA and HIP, we support AMD and NVIDIA GPUs with a single codebase.

We present results using Snellius (NVIDIA H100) and LUMI (AMD MI250X) supercomputers, reducing reconstruction times from hours to nearly one minute, making qMRI ready for potential use in clinical trials.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/3FS7CU/</url>
            <location>Mission 2</location>
            
            <attendee>Alessio Sclocco</attendee>
            
            <attendee>Stijn Heldens</attendee>
            
            <attendee>Oscar van der Heide</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>LLQ3MN@@pretalx.surf.nl</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-LLQ3MN</pentabarf:event-slug>
            <pentabarf:title>Preparing for Einstein Telescope: GPU-native scientific computing without compromises</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20251204T152500</dtstart>
            <dtend>20251204T155000</dtend>
            <duration>0.02500</duration>
            <summary>Preparing for Einstein Telescope: GPU-native scientific computing without compromises</summary>
            <description>See abstract</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Short presentation with Q&amp;A</category>
            <url>https://pretalx.surf.nl/acud-2025/talk/LLQ3MN/</url>
            <location>Mission 2</location>
            
            <attendee>Thibeau Wouters</attendee>
            
        </vevent>
        
    </vcalendar>
</iCalendar>
