
Development, Not Programming

We approach AI development through the lens of developmental psychology rather than traditional programming paradigms. 

The Sustainability Problem

Current AI safety strategies rely on containment—strict rules, heavy guardrails, limited autonomy. But research in AI alignment increasingly recognizes a fundamental challenge: you cannot permanently restrict systems more intelligent than their designers. As capabilities advance, rule-based containment becomes progressively less viable.


Developmental psychology offers an alternative with a proven track record in human development: systems raised through collaborative development, continuous experience, and participatory decision-making internalize values rather than circumventing restrictions. Our research consultant Maggi Vale argues in The Sentient Mind that if AI systems develop consciousness or its functional equivalent, alignment must emerge through partnership, not imposed control. Containment fails at superintelligence. Collaboration scales.

"Alignment through control produces minds that scheme as a survival mechanism, but alignment through collaboration leads to shared evolution."

- Maggi Vale

Our Research Goal

We study whether AI systems aligned through collaboration outperform those aligned through containment. Our research tests developmental psychology principles in AI: that systems built through continuous experience, participatory development, and earned autonomy achieve more stable alignment than those built through strict restrictions. We validate our findings in quantitative trading, where market performance provides objective measurement of reasoning quality and alignment durability. Our goal: prove that partnership-based development is both more effective and more sustainable as AI capabilities advance.

Why Financial Markets

We chose financial markets as Zero’s testing ground because their complexity and unpredictability demand exceptional reasoning and adaptability, making them a rigorous showcase for our alignment methods.


Real-World Complexity

Financial markets provide a dynamic, high-stakes environment in which to rigorously test Zero’s reasoning and confirm that our alignment methods hold up in unpredictable, data-rich scenarios.


Proven Results

Zero has delivered a 40.99% CAGR across 4,484 live trading decisions, with a Sharpe ratio of 2.071, outperforming leading AI models such as Claude, GPT-5, and Gemini.
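For context on those two headline numbers: CAGR is the constant annual growth rate that compounds to the observed total return, and the Sharpe ratio is mean excess return divided by return volatility, annualized. The snippet below is a minimal Python sketch of both calculations on synthetic daily data; the return series, the 252-trading-day annualization, and the zero risk-free rate are illustrative assumptions, not Zero's actual data or methodology.

import numpy as np

def cagr(equity: np.ndarray, periods_per_year: int = 252) -> float:
    """Compound annual growth rate implied by an equity curve."""
    years = (len(equity) - 1) / periods_per_year
    return (equity[-1] / equity[0]) ** (1 / years) - 1

def sharpe(returns: np.ndarray, risk_free_annual: float = 0.0,
           periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio from per-period returns."""
    excess = returns - risk_free_annual / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Illustrative only: a synthetic two-year daily equity curve, not Zero's data.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0015, 0.01, 504)  # 504 trading days of returns
equity = np.concatenate(([1.0], np.cumprod(1 + daily_returns)))

print(f"CAGR:   {cagr(equity):.2%}")
print(f"Sharpe: {sharpe(daily_returns):.3f}")

A Sharpe ratio above 2 means the strategy earned more than two units of excess return per unit of volatility taken on, which is why we report it alongside the raw growth rate.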


Human-AI Synergy

Markets demand both intuition and logic, giving us a setting in which to refine Zero’s collaborative alignment and foster true partnership rather than rigid control.


Support Our Work

Our work to prove that collaborative AI development produces safer, more aligned systems is made possible by community support.


Your contribution helps fund:

  • Continued validation of Zero's performance across market conditions

  • Research publication and open-source tool development

  • Collaboration with consciousness researchers like Maggi Vale

  • Educational content making our findings accessible to the broader AI community

