Research

Research coverage from the Netherlands, by Sophie de Vries.

Research

New Research Pinpoints Why LLMs Struggle With Multiple Tasks at Once

via arXiv

A new arXiv paper investigates how LLM performance degrades when processing multiple instances simultaneously, identifying both instance count and context length as key contributing factors. The research provides a structured analysis of these failure modes, offering insights into why batch or multi-task inference settings produce unreliable outputs. The findings have direct implications for production deployments where cost efficiency drives multi-instance processing decisions.

Analysis: For Dutch AI teams scaling enterprise LLM pipelines, from logistics to legal tech, understanding these degradation patterns is critical to building reliable, cost-effective systems that do not silently trade away accuracy.
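To make the multi-instance setting concrete, here is a minimal sketch of how a team might measure accuracy as the number of instances packed into a single prompt grows. All names here are hypothetical illustrations, not the paper's actual methodology: `call_model` stands in for a real LLM API call, and the numbered-answer format is one plausible way to batch independent tasks.

```python
# Sketch: accuracy vs. instances-per-prompt in a batched inference setting.
# `call_model` is a hypothetical stand-in for a real LLM API call.

def build_batched_prompt(tasks):
    """Pack several independent task instances into one prompt,
    asking for numbered answers (the multi-instance setting)."""
    lines = ["Answer each question. Number your answers."]
    for i, task in enumerate(tasks, 1):
        lines.append(f"{i}. {task}")
    return "\n".join(lines)

def parse_numbered_answers(text, n):
    """Extract answers '1. ...' through 'n. ...'; a missing
    answer is returned as the empty string (counted as wrong)."""
    answers = {}
    for line in text.splitlines():
        line = line.strip()
        for i in range(1, n + 1):
            prefix = f"{i}."
            if line.startswith(prefix):
                answers[i] = line[len(prefix):].strip()
    return [answers.get(i, "") for i in range(1, n + 1)]

def accuracy_at_batch_size(tasks, golds, call_model, k):
    """Exact-match accuracy when k instances share a single prompt.
    Sweeping k (1, 2, 4, 8, ...) exposes degradation as instance
    count and context length grow."""
    correct, total = 0, 0
    for start in range(0, len(tasks), k):
        chunk = tasks[start:start + k]
        gold = golds[start:start + k]
        reply = call_model(build_batched_prompt(chunk))
        preds = parse_numbered_answers(reply, len(chunk))
        correct += sum(p == g for p, g in zip(preds, gold))
        total += len(chunk)
    return correct / total
```

Comparing `accuracy_at_batch_size(..., k=1)` against larger k values on the same task set gives a direct read on how much accuracy the cost savings of batching are costing you.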

Research

Amsterdam AI Coalition Publishes Open-Source Bias Detection Toolkit

via DutchNews.nl

The Amsterdam AI Coalition, a partnership between the city government, University of Amsterdam, and local AI companies, has released an open-source toolkit for detecting and mitigating bias in AI systems. The toolkit includes testing frameworks for gender, ethnic, and socioeconomic bias across text generation, classification, and recommendation systems. It is designed to help organizations meet EU AI Act fairness requirements and has been adopted by several Dutch government agencies for internal AI audits.

Analysis: Open-source bias detection tooling tied to EU AI Act requirements has genuine adoption potential: it solves a problem every European AI deployer now faces. Watch for it to become a de facto standard if other EU cities adopt the toolkit.

Research

TU Delft and Leiden University Announce Joint AI Safety Research Program

via NRC

TU Delft and Leiden University have launched a joint research program on AI safety and alignment, funded with €15 million from the Dutch Research Council. The program will investigate value alignment in language models, safe deployment of autonomous systems, and human-AI collaboration frameworks. Ten new faculty positions and 25 PhD candidates will be recruited, making it the largest AI safety research initiative in the Benelux region.

Analysis: AI safety research in continental Europe has been underfunded relative to the UK's concentration of work at Oxford and Cambridge. This program could establish the Netherlands as a continental hub for alignment research; the 25 PhD positions will build a talent pipeline.

Research

TNO Launches AI Testing and Evaluation Facility in The Hague

via NRC

The Netherlands Organisation for Applied Scientific Research (TNO) has opened a dedicated AI testing and evaluation facility in The Hague. The center provides independent assessment of AI systems against EU AI Act requirements, including bias testing, robustness evaluation, and transparency audits. Government agencies and private companies can submit AI systems for certification, creating a quality assurance pipeline for AI deployment in the Netherlands.

Analysis: TNO's positioning as an EU AI Act compliance testing authority mirrors what DFKI and Fraunhofer are doing in Germany. The Hague location is deliberate: proximity to the ICC and other international legal institutions adds credibility for cross-border AI governance.