The goal of a collaborative CE project is to create a benchmark dataset for a research field. This idea is inspired by the critical role such datasets have played in the AI revolution. Benchmarks help shift research from a competition of storytelling to the more objective challenge of beating the best score on a shared task. If this approach can be applied to psychology, it could create a level playing field and perhaps spark a similar revolution.
A collaborative CE project shares some similarities with traditional adversarial collaboration but differs in key aspects:
Similarity: It brings together researchers with differing perspectives to agree on a methodological approach.
Difference (Pre-Data Collection): Unlike adversarial collaboration, we will not define specific hypotheses or predictions beforehand. Instead, the focus will be on identifying the most informative experimental manipulations and measures. This reduces the burden of upfront theoretical specification.
Difference (Post-Data Collection): Unlike adversarial collaboration, we will not seek a mutually agreed-upon interpretation, model, or theory for the benchmark dataset. Instead, each contributor may independently develop and publish their own interpretations. This removes the need for premature consensus, a requirement that often discourages collaborators from tackling the most challenging questions. Moreover, after a protection period, the dataset will be made openly accessible, so researchers—including but not limited to the original contributors—can propose and test competing models or theories. Over time, the most robust solutions will emerge through fair, community-driven scrutiny—a strategy that has proven highly effective in AI and other data-intensive disciplines.
We are just getting started, and our first collaborative CE project is underway: