New Global Platform Aims to Standardize and Improve Single-Cell Analysis

Researchers from more than 50 international institutions have launched Open Problems (https://openproblems.bio), a collaborative open-source platform to benchmark, improve, and run competitions for computational methods in single-cell genomics. Co-led by Helmholtz Munich and Yale University, the initiative aims to standardize evaluations, foster reproducibility, and accelerate progress towards open challenges in this fast-moving field.

A Common Language for a Complex Field

Single-cell genomics allows scientists to analyze individual cells at unprecedented resolution, revealing how they function, interact, and contribute to health and disease. But as the field has grown, so has the number of computational tools – now numbering in the thousands – designed to process and interpret this complex data. This rapid growth presents a major challenge: how can researchers identify the most suitable tool, or determine the best combination of processing steps, for a specific analytical goal? Many tools are specialized, and evaluating their performance is difficult because few datasets come with known, accurate outcomes (so-called ground truth). As a result, researchers often turn to large-scale benchmarking studies. However, these studies can be inconsistent, quickly become outdated, and often present comparisons that are hard to interpret – leaving researchers without a clear answer as to which method is best for a given task.

"We need a common language to measure what works – and what doesn't – that can stand the test of time. With Open Problems, we're introducing a reproducible, living, and transparent framework to guide tool development and evaluation – one that the community can actively shape and use."

Prof. Fabian Theis, Director of the Computational Health Center at Helmholtz Munich and Professor at the Technical University of Munich

Transparent, Reproducible, and Community-Driven

Open Problems currently includes 81 public datasets and benchmarks 171 methods across 12 core tasks in single-cell analysis. Each method is evaluated with task-specific metrics – quantitative measures of how well it performs on a given task, such as accuracy, scalability, and robustness. In total, 37 different metrics are used across the platform, with each task drawing on those most relevant to its goals.
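The benchmarking pattern described above – scoring every method on a task's metrics and aggregating the results into a ranking – can be sketched in a few lines. This is a minimal illustration only: the method names, metric names, and scores below are invented for the example and are not the platform's actual API or results.

```python
# Toy sketch of metric-based benchmarking: each task defines metrics,
# every method receives a score per metric (higher is better), and the
# per-metric scores are averaged into a per-method ranking.
from statistics import mean

# Hypothetical raw scores: metric -> {method -> score}.
raw_scores = {
    "accuracy":    {"method_a": 0.82, "method_b": 0.75, "baseline": 0.70},
    "scalability": {"method_a": 0.40, "method_b": 0.90, "baseline": 0.95},
    "robustness":  {"method_a": 0.65, "method_b": 0.60, "baseline": 0.55},
}

def rank_methods(scores):
    """Average each method's per-metric scores and sort best-first."""
    methods = next(iter(scores.values())).keys()
    summary = {m: mean(scores[metric][m] for metric in scores)
               for m in methods}
    return sorted(summary.items(), key=lambda kv: kv[1], reverse=True)

for method, score in rank_methods(raw_scores):
    print(f"{method}: {score:.3f}")
```

Note how even this toy ranking can surprise: the simple "baseline" outscores the more elaborate "method_a" once scalability is taken into account, mirroring the kind of result the platform is designed to surface.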

All evaluations run automatically in the cloud and follow standardized procedures to ensure the results are fully reproducible. Researchers can see how each method performs, explore the underlying code, and suggest improvements. To remain relevant and impactful over the long term, the platform is designed to be open to contributions: scientists can propose new tasks, add their own methods, join regular community calls, and take part in collaborative hackathons to help shape the future of the project.

Real-World Benefits

By comparing tools side by side, Open Problems helps researchers identify the most effective methods for their specific scientific questions and often challenges established assumptions in the process. As Dr. Smita Krishnaswamy, Associate Professor of Genetics and of Computer Science at the Yale School of Medicine, explains: "We found that looking at overall patterns of gene activity gives more accurate results than focusing on individual genes when studying how cells communicate. And for some tasks, like identifying cell types across different datasets, a simple statistical model can actually outperform complex AI methods, making the analysis both faster and more efficient for many researchers."

The platform also powers major machine learning competitions, including the NeurIPS multimodal integration challenges. These global contests bring together experts in biology and artificial intelligence to solve real-world problems using common datasets and evaluation standards.

"Open Problems lowers the barrier for AI researchers outside biology to contribute to genomics," says Dr. Malte Lücken, who co-led the project. "It's a blueprint for interdisciplinary innovation."

All code and results are openly available under a CC-BY licence at github.com/openproblems-bio/openproblems.

Journal reference:

Luecken, M. D., et al. (2025). Defining and benchmarking open problems in single-cell analysis. Nature Biotechnology. https://doi.org/10.1038/s41587-025-02694-w

