
Toward a model of data science & AI for common good

Our Social Code

A model bringing together preeminent schools of thought into a single holistic approach for AI, data, and technology for the common good.

The framework integrates Maslow’s hierarchy of needs, Ostrom’s commons governance, James C. Scott’s critique of centralized power, social innovation, and planetary limits.

The framework aims to ensure that AI and data systems:

  • Serve human needs equitably (Maslow)
  • Are governed as commons (Ostrom)
  • Resist technocratic overreach and surveillance (Scott)
  • Foster grassroots, bottom-up innovation (social innovation)
  • Respect planetary limits and ecological sustainability (Earth system boundaries)

Foundational principles

Maslow’s Hierarchy of Needs

Physiological Needs: Ensure that AI and data technologies support basic human needs such as food, water, and shelter. For example, AI can optimize agricultural practices and water management systems.

Safety Needs: Develop AI systems that enhance security and safety, such as disaster response systems and healthcare technologies.

Love and Belonging: Use AI to foster social connections and community building, such as through social media platforms and community engagement tools.

Esteem: Create AI tools that support personal growth and achievement, such as educational technologies and career development platforms.

Self-Actualization: Encourage AI innovations that allow individuals to reach their full potential, such as through creative tools and personalized learning experiences.

Ostrom’s Commons Governance

Clear Boundaries: Define clear boundaries for data usage and AI deployment to ensure transparency and accountability.

Congruence: Align AI and data governance with local needs and contexts to ensure relevance and effectiveness.

Collective-Choice Arrangements: Involve stakeholders in decision-making processes related to AI and data governance.

Monitoring: Implement robust monitoring systems to track the impact of AI and data technologies.

Graduated Sanctions: Establish graduated sanctions for misuse of AI and data to ensure compliance.

Conflict Resolution: Develop mechanisms for resolving conflicts related to AI and data governance.

Minimal Recognition of Rights to Organize: Ensure that communities’ right to devise their own rules for AI and data is recognized by external authorities, and that the rights of all stakeholders are protected.

Nested Enterprises: Encourage collaboration and coordination among different levels of governance to manage AI and data effectively.
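To make these principles more concrete, here is a minimal sketch in Python of how clear boundaries, monitoring, and graduated sanctions might be encoded for a data commons. All names (DataCommonsPolicy, record_violation, the sanction ladder) are illustrative assumptions rather than an existing standard or library.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: encoding Ostrom-style rules for a data commons.
# Names and rules are illustrative assumptions, not an existing framework.

# Graduated sanctions: escalating responses to repeated misuse.
SANCTIONS = ["warning", "temporary suspension", "revoked access"]

@dataclass
class Member:
    name: str
    violations: int = 0  # updated by the monitoring process

@dataclass
class DataCommonsPolicy:
    # Clear boundaries: who may use the data, and for which purposes.
    allowed_purposes: set[str]
    members: dict[str, Member] = field(default_factory=dict)

    def register(self, name: str) -> None:
        # Minimal recognition of rights: every registered member is a stakeholder.
        self.members[name] = Member(name)

    def request_access(self, name: str, purpose: str) -> bool:
        member = self.members.get(name)
        if member is None or purpose not in self.allowed_purposes:
            return False  # outside the commons' boundaries
        return member.violations < len(SANCTIONS)  # revoked after repeated misuse

    def record_violation(self, name: str) -> str:
        # Monitoring feeds graduated sanctions rather than immediate exclusion.
        member = self.members[name]
        sanction = SANCTIONS[min(member.violations, len(SANCTIONS) - 1)]
        member.violations += 1
        return sanction

# Example usage
policy = DataCommonsPolicy(allowed_purposes={"public health research"})
policy.register("community_lab")
print(policy.request_access("community_lab", "public health research"))  # True
print(policy.record_violation("community_lab"))                          # warning
```

The point of the sketch is that boundaries, monitoring, and sanctions can be expressed as explicit, auditable rules rather than informal practice.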

James C. Scott’s Critique of Centralized Power

Decentralized Governance: Promote decentralized AI and data governance structures that empower local communities.

Local Knowledge: Incorporate local knowledge and expertise into AI and data technologies to ensure they are contextually appropriate.

Community Engagement: Engage communities in the development and deployment of AI and data technologies to ensure they meet local needs.

Transparency: Ensure transparency in AI and data governance to build trust and accountability.

Scott’s ideas—especially from Seeing Like a State (1998) and The Art of Not Being Governed (2009)—are essential for understanding:

  1. Power dynamics in AI and data governance—Who controls AI and data? Who gets excluded?
  2. The risks of “high modernism” in AI—When centralized tech solutions ignore local knowledge and create oppressive systems.
  3. Resistance and alternative governance models—How decentralized, grassroots, and community-led AI and data innovations can thrive.

By integrating Scott’s critique of centralized state control and Ostrom’s decentralized governance principles, we can design AI and data systems that avoid technocratic overreach, respect local knowledge, and empower communities.

Social innovation

Innovative solutions: Develop AI and data technologies that address pressing social challenges, such as poverty, inequality, and climate change.

Collaboration: Foster collaboration among different sectors, including government, private sector, and civil society, to drive social innovation.

Impact measurement: Implement impact measurement frameworks to assess the social impact of AI and data technologies.

Scalability: Ensure that innovative AI and data solutions are scalable and can be replicated in different contexts.
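As one way to picture what an impact measurement framework might record, the toy sketch below (Python) compares baseline and follow-up values for a few indicators and reports the percentage change. The indicator names and numbers are fabricated for illustration; real frameworks, such as theory-of-change or SROI approaches, are considerably richer.

```python
# Illustrative only: a toy impact-measurement record comparing baseline and
# follow-up values for hypothetical indicators of a data/AI project.

baseline = {"households_with_internet": 1200, "avg_response_time_hours": 48.0}
follow_up = {"households_with_internet": 1850, "avg_response_time_hours": 12.5}

def impact_report(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Percentage change per indicator; positive means the value increased."""
    return {
        key: round(100.0 * (after[key] - before[key]) / before[key], 1)
        for key in before
    }

print(impact_report(baseline, follow_up))
# {'households_with_internet': 54.2, 'avg_response_time_hours': -74.0}
```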

Planetary limits

Sustainable technologies: Develop AI and data technologies that are environmentally sustainable and do not push Earth systems beyond planetary limits.

Resource efficiency: Use AI to optimize resource use and reduce waste, such as through smart grid technologies and waste management systems.

Climate action: Leverage AI and data to address climate change, such as through renewable energy technologies and carbon capture systems.

Biodiversity conservation: Use AI to monitor and protect biodiversity, such as through wildlife tracking and habitat conservation technologies.
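To make planetary limits tangible for AI itself, here is a back-of-envelope estimate (Python) of a training run’s footprint, using the common approximation energy = GPU count × hours × per-GPU power × data-centre PUE, and emissions = energy × grid carbon intensity. Every input value below is a placeholder assumption, not a measurement.

```python
# Back-of-envelope estimate of a training run's energy and carbon footprint.
# All input values are illustrative assumptions; substitute measured figures.

num_gpus = 64              # accelerators used in parallel
hours = 240.0              # wall-clock training time
gpu_power_kw = 0.4         # average draw per accelerator, in kW
pue = 1.3                  # data-centre Power Usage Effectiveness (overhead factor)
grid_kg_co2_per_kwh = 0.3  # carbon intensity of the local electricity grid

energy_kwh = num_gpus * hours * gpu_power_kw * pue
emissions_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"Estimated energy: {energy_kwh:,.0f} kWh")        # ~7,987 kWh
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")  # ~2,396 kg CO2e
```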

A holistic framework for AI, data, and technology as commons

For each level of need, the framework considers five dimensions: the common good perspective for tech, AI & data; commons governance strategies (Ostrom applied to AI & data); the role of social innovation; the Scottian perspective on resisting high-tech authoritarianism; and the planetary limits perspective.

1. Physiological (Basic Digital Access & Infrastructure)
  • Common good perspective: Ensure equitable access to internet, data, computing, and digital tools as fundamental needs. Address digital divides.
  • Commons governance (Ostrom): Define public access to data & AI. Prevent monopolization of compute power by a few corporations.
  • Social innovation: Community-driven digital infrastructure (e.g., mesh networks, open-source AI, low-energy computing).
  • Scottian perspective: Avoid “high-modernist” tech-driven development that imposes top-down infrastructure ignoring local needs. Support bottom-up digital networks.
  • Planetary limits: Digital resource extraction has material limits: AI models consume energy, require rare minerals, and impact climate. Prioritize low-energy AI and sustainable data storage.

2. Safety (Data Privacy, Security & Ethical AI Use)
  • Common good perspective: Protect users from AI-driven harm (bias, surveillance, misinformation, cyber risks). Ensure algorithmic transparency.
  • Commons governance (Ostrom): Implement algorithmic audits, AI ethics oversight, and decentralized monitoring bodies. Establish strong data protection laws.
  • Social innovation: Trust-enhancing digital tools, such as decentralized identity systems, AI auditing frameworks, and privacy-preserving AI.
  • Scottian perspective: Beware of state-led digital surveillance and corporate AI black-boxing. Encourage privacy-preserving encryption and community-led governance.
  • Planetary limits: AI models have a carbon footprint, and data centers require massive water and energy consumption. Enforce eco-friendly AI and data policies.

3. Love/Belonging (Inclusive AI Development & Digital Trust)
  • Common good perspective: Build participatory AI and data systems where diverse communities co-design tech solutions. Foster trust in AI. Strengthen social bonds.
  • Commons governance (Ostrom): Decentralized governance models for AI, such as federated learning, public AI research, and cooperative AI ownership (see the sketch after this overview).
  • Social innovation: Use AI and digital platforms for civic engagement, crowdsourced problem-solving, and community-driven AI projects.
  • Scottian perspective: Challenge “legibility” in AI—corporate and state AI systems reduce human complexity into oversimplified models. Promote vernacular AI (local knowledge-driven AI).
  • Planetary limits: AI and tech should support ecological resilience, not just human societies. Build AI tools for biodiversity tracking, climate adaptation, and sustainable urban planning.

4. Esteem (Data Ownership, AI for Social Justice, Alternative Economic Models)
  • Common good perspective: Ensure fair compensation and community control over digital resources. Prevent extractive AI labor (e.g., ghost work, clickworkers).
  • Commons governance (Ostrom): Establish data commons and decentralized AI ownership models where users have agency over their own data.
  • Social innovation: Drive socially responsible tech startups, digital cooperatives, and ethical AI innovations. Incentivize data-sharing models based on reciprocity.
  • Scottian perspective: Resist corporate “data colonialism”—where AI firms extract global data without fair redistribution. Support non-monetary, cooperative economies of data sharing.
  • Planetary limits: Prevent digital overconsumption: AI-driven capitalism encourages overuse of resources. Enforce degrowth-oriented tech policies that prioritize sufficiency over efficiency.

5. Self-Actualization (Transformative AI & Tech for Global Good)
  • Common good perspective: Use AI to solve humanity’s greatest challenges (climate, healthcare, education). Support open, adaptive innovation for collective intelligence.
  • Commons governance (Ostrom): Ensure AI remains an adaptable, self-governing ecosystem through polycentric governance. Support grassroots, bottom-up AI innovation.
  • Social innovation: Develop disruptive social innovations that reimagine how AI serves human flourishing and planetary well-being.
  • Scottian perspective: Decenter Western-centric AI models. Support polycentric innovation (e.g., AI based on indigenous ecological knowledge for sustainability).
  • Planetary limits: AI should enhance planetary stewardship, not drive resource depletion. Prioritize AI for carbon sequestration, circular economy modeling, and regenerative agriculture.
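The overview above names federated learning as one decentralized governance mechanism: models are trained locally and only parameter averages are shared, so raw data never leaves the community that holds it. The sketch below (Python/NumPy, with fabricated data and hyperparameters) shows the core federated-averaging step; production systems add secure aggregation, differential privacy, and much more.

```python
import numpy as np

# Toy federated averaging (FedAvg) for a linear least-squares model.
# Each "community" trains on its own data locally; only weights are shared.
# Data, model, and hyperparameters are fabricated for illustration.

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=50):
    """Run a few gradient-descent steps on one community's local data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three communities holding private datasets from the same underlying relation.
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=40)
    datasets.append((X, y))

global_w = np.zeros(2)
for _ in range(5):  # federation rounds
    # Each community computes an update locally; raw data never leaves it.
    local_ws = [local_update(global_w, X, y) for X, y in datasets]
    # The coordinator only sees and averages the shared weight vectors.
    global_w = np.mean(local_ws, axis=0)

print("Learned weights:", np.round(global_w, 2))  # close to [2, -1]
```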

How this model helps avoid the tragedy of the AI and data commons

  1. Resisting AI & Data Extraction as a New Form of Colonialism
    • Problem: Large AI firms extract global data without fair redistribution or compensation.
    • Solution: Community-owned data trusts, cooperative AI models, and decentralized governance structures.
  2. Preventing AI & Tech as Tools of Ecological Collapse
    • Problem: AI-driven economies push excessive consumption, automation-driven extractivism, and unsustainable energy use.
    • Solution: AI & data models must be planet-aware, regenerative, and integrated with ecological knowledge.
  3. Shifting from AI for Surveillance Capitalism to AI for Commons-Based Innovation
    • Problem: AI is currently optimized for profit-maximization and behavioral prediction, leading to mass surveillance.
    • Solution: Invest in privacy-preserving AI, data commons, and alternative AI ownership models.
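One concrete building block for the privacy-preserving AI named in the last solution is differential privacy: an aggregate statistic is published with calibrated noise so that no individual record can be singled out. The sketch below (Python, with a fabricated dataset and epsilon value) applies the standard Laplace mechanism, where the noise scale is sensitivity divided by epsilon.

```python
import numpy as np

# Illustrative Laplace mechanism for a differentially private count.
# The dataset and epsilon value are assumptions chosen for demonstration.

rng = np.random.default_rng(42)

def dp_count(values, epsilon):
    """Release a count with Laplace noise; counting queries have sensitivity 1."""
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return sum(values) + noise

# A community dataset: does each household have broadband access?
has_broadband = [True, False, True, True, False, True, True, False, True, True]

print("True count:", sum(has_broadband))                           # 7
print("DP count:", round(dp_count(has_broadband, epsilon=1.0), 2))  # 7 plus noise
```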


Actionable strategies for AI, data, and tech governance

Alignment with EU policy

1. Human-Centric and Trustworthy AI

The EU emphasizes the development of AI that is human-centric and trustworthy. The European AI Strategy aims to make the EU a world-class hub for AI while ensuring that AI technologies are developed and used in ways that respect fundamental rights, including privacy and data protection, and uphold ethical principles.

digital-strategy.ec.europa.eu

2. Data Governance and the Commons

The EU’s Data Governance Act seeks to increase trust in data sharing, strengthen mechanisms to increase data availability, and overcome technical obstacles to the reuse of data. This aligns with Ostrom’s principles of governing the commons by promoting shared access to data while ensuring proper governance frameworks are in place.

digital-strategy.ec.europa.eu

3. Addressing Planetary Boundaries

The EU is actively working to align its policies with the concept of planetary boundaries. Discussions and proposals have been made to integrate the planetary boundaries framework into EU policy instruments, aiming to ensure that economic activities remain within ecological limits.

sei.org

4. Sustainable Resource Management

There is a call within the EU for legislation on sustainable resource management to transform Europe into a fair, autonomous, resilient, and sustainable economy. This includes setting legal objectives for the Union to reach sustainable levels of resource consumption in relation to its biocapacity.

horizoneuropencpportal.eu

5. Ethical Data Practices

The EU AI Act emphasizes robust data governance for high-risk AI systems, requiring high-quality datasets, bias mitigation, and strict privacy measures. This reflects a commitment to ethical data practices and aligns with the framework’s focus on responsible AI development.

artificialintelligenceact.eu
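The Act’s bias-mitigation requirement can be operationalized with simple fairness diagnostics. Below is a minimal sketch (Python, with fabricated example data) computing the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. It illustrates one common metric, not the Act’s prescribed methodology.

```python
# Minimal fairness diagnostic: demographic parity difference.
# Predictions and group labels below are fabricated for illustration only.

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]   # model outputs (1 = positive decision)
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(preds, grps, group):
    """Share of positive decisions among members of one group."""
    selected = [p for p, g in zip(preds, grps) if g == group]
    return sum(selected) / len(selected)

rate_a = positive_rate(predictions, groups, "a")
rate_b = positive_rate(predictions, groups, "b")
print(f"Positive rate A: {rate_a:.2f}, B: {rate_b:.2f}")       # 0.60 vs 0.40
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")  # 0.20
```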

6. Social Innovation and Inclusivity

While not explicitly framed as “social innovation,” EU policies encourage inclusive and participatory approaches to technology development. The emphasis on human-centric AI and stakeholder engagement reflects a commitment to involving diverse communities in the technological advancement process.

Conclusion

By integrating these frameworks…

  • Maslow’s hierarchy (human needs in AI governance),
  • Ostrom’s commons principles (preventing digital resource exploitation),
  • James C. Scott’s critique of centralized control (resisting top-down technocracy),
  • Social innovation (new participatory governance models),
  • Planetary limits (ensuring AI does not exceed Earth’s ecological boundaries)

…we can build an AI, data, and tech ecosystem that is decentralized, equitable, resistant to monopolization, and ecologically sustainable.

This model fundamentally shifts AI, data, and tech from an extractive, capitalist-driven tool to a commons-based, ecological, and socially just system that is closely aligned with EU policy.
