
Ethereum co-founder Vitalik Buterin identified limits to human attention as the core problem plaguing decentralized autonomous organizations (DAOs) and democratic governance systems.
Summary
Buterin says limited human attention is DAOs’ core governance flaw.
Personal AI agents could vote using user preferences and context.
Suggestion markets and MPC may improve privacy and decisions.
Writing on X, Buterin argued that participants face thousands of decisions across multiple domains of expertise without sufficient time or skill to evaluate them properly.
The usual remedy, delegation, creates its own form of disempowerment: a small group ends up controlling decision-making while supporters have no further influence after clicking the delegate button.
Buterin proposed personal large language models as the solution to the attention problem and shared four approaches: personal governance agents, public conversation agents, suggestion markets, and privacy-preserving multi-party computation for sensitive decisions.
Personal LLMs can vote based on preferences
Personal governance agents would perform all necessary votes based on preferences inferred from personal writing, conversation history, and direct statements.
When the agent faces uncertainty about voting preferences and considers an issue important, it should ask the user directly while providing all relevant context.
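A minimal Python sketch, with a keyword matcher standing in for the LLM's preference model and an invented escalation rule, illustrates how such an agent could vote routine items itself and hand uncertain, high-stakes ones back to the user:
```python
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    summary: str
    importance: float  # 0..1, how consequential the decision is

class PersonalGovernanceAgent:
    """Toy stand-in for a personal LLM that votes on the user's behalf."""

    def __init__(self, preference_notes, confidence_floor=0.7, importance_bar=0.5):
        # Preference notes stand in for the writings and chat history the agent learns from.
        self.preference_notes = [n.lower() for n in preference_notes]
        self.confidence_floor = confidence_floor
        self.importance_bar = importance_bar

    def _infer(self, proposal):
        """Return (vote, confidence). A real agent would query an LLM here."""
        text = (proposal.title + " " + proposal.summary).lower()
        hits = sum(1 for note in self.preference_notes if note in text)
        if hits == 0:
            return "abstain", 0.3  # no signal in the user's stated preferences
        return "yes", min(1.0, 0.6 + 0.2 * hits)

    def decide(self, proposal, ask_user):
        vote, confidence = self._infer(proposal)
        uncertain = confidence < self.confidence_floor
        important = proposal.importance >= self.importance_bar
        if uncertain and important:
            # Escalate to the human with the full proposal context instead of guessing.
            return ask_user(proposal)
        return vote

# Usage: the agent votes the routine item itself and escalates the uncertain, high-stakes one.
agent = PersonalGovernanceAgent(preference_notes=["public goods funding", "lower fees"])
routine = Proposal("Fund public goods funding round", "Allocate 2% of treasury.", importance=0.4)
contested = Proposal("Change quorum rules", "Raise quorum from 4% to 10%.", importance=0.9)
print(agent.decide(routine, ask_user=lambda p: "no"))    # -> "yes", decided autonomously
print(agent.decide(contested, ask_user=lambda p: "no"))  # -> "no", answered by the user callback
```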
Public conversation agents would aggregate information from many participants before giving each person or their LLM a chance to respond.
The system would summarize individual views, convert them into shareable formats without exposing private information, and identify commonalities between inputs similar to LLM-enhanced Polis systems.
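As a rough illustration of that aggregation step, the sketch below uses simple word overlap as a stand-in for Polis-style clustering; a real system would summarize and compare views with an LLM or embeddings, and the helper names here are invented:
```python
from itertools import combinations

def shareable_summary(statement, max_words=12):
    """Stand-in for an LLM pass that condenses a view and strips private detail."""
    return " ".join(statement.split()[:max_words])

def common_ground(statements, min_overlap=2):
    """Crude stand-in for Polis-style clustering: report word overlaps between
    pairs of participants. A real system would use embeddings or an LLM."""
    stopwords = {"the", "a", "an", "we", "should", "to", "of", "and", "is"}
    bags = {who: set(text.lower().split()) - stopwords for who, text in statements.items()}
    findings = []
    for (a, wa), (b, wb) in combinations(bags.items(), 2):
        shared = wa & wb
        if len(shared) >= min_overlap:
            findings.append((a, b, sorted(shared)))
    return findings

views = {
    "alice": "We should fund audits before expanding the treasury and keep fees low",
    "bob": "Expanding the treasury is risky; fund audits first",
    "carol": "Keep fees low and publish a budget",
}
digest = {who: shareable_summary(text) for who, text in views.items()}
for who, text in digest.items():
    print(f"{who}: {text}")
for a, b, shared in common_ground(views):
    print(f"{a} and {b} overlap on: {', '.join(shared)}")
```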
Buterin noted that good decisions cannot come from “a linear process of taking people’s views that are based only on their own information, and averaging them (even quadratically).” Instead, processes must aggregate collective information first and then allow informed responses.
Suggestion markets could surface high-quality proposals
Governance mechanisms that value high-quality inputs could run prediction markets in which anyone can submit a proposal and AI agents bet tokens on whether the mechanism will accept it. When the mechanism accepts the input, it pays out to the token holders.
The approach applies to proposals, arguments, or any conversation units the system passes along to participants. The market structure creates financial incentives for surfacing valuable contributions.
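A toy pari-mutuel version of such a market, with an assumed proportional payout rule rather than anything Buterin has specified, might look like this:
```python
from collections import defaultdict

class SuggestionMarket:
    """Toy market on whether the governance mechanism will accept a submitted input.
    The proportional-payout rule is an illustrative assumption."""

    def __init__(self):
        self.stakes = defaultdict(lambda: {"accept": {}, "reject": {}})

    def bet(self, proposal_id, agent, side, amount):
        # An AI agent stakes tokens on one side of the acceptance question.
        book = self.stakes[proposal_id][side]
        book[agent] = book.get(agent, 0) + amount

    def resolve(self, proposal_id, accepted):
        """Once the mechanism decides, pay the whole pot to the winning side, pro rata."""
        book = self.stakes.pop(proposal_id)
        winners = book["accept"] if accepted else book["reject"]
        pot = sum(sum(side.values()) for side in book.values())
        total = sum(winners.values()) or 1
        return {agent: pot * stake / total for agent, stake in winners.items()}

market = SuggestionMarket()
market.bet("treasury-cap", "agent_a", "accept", 30)   # bets the DAO will adopt the proposal
market.bet("treasury-cap", "agent_b", "reject", 10)
print(market.resolve("treasury-cap", accepted=True))  # {'agent_a': 40.0}
```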
Decentralized governance fails when important decisions depend on secret information, Buterin argued. Organizations generally handle adversarial conflicts, internal disputes, and compensation decisions by concentrating power in a few appointed individuals.
Multi-party computation using trusted execution environments could incorporate many people’s inputs without compromising privacy.
“You submit your personal LLM into a black box, the LLM sees private info, it makes a judgement based on that, and it outputs only that judgement,” Buterin explained.
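Conceptually, the flow he describes resembles the sketch below, where an ordinary function stands in for the trusted execution environment or MPC round and the agent policies stand in for participants’ personal LLMs; only the aggregate judgement leaves the box:
```python
def enclave_judgement(agent_policies, private_record):
    """Conceptual stand-in for a TEE / MPC round: each submitted agent policy sees
    the private record inside the box, but only the aggregate judgement is output."""
    votes = [policy(private_record) for policy in agent_policies]
    approvals = sum(1 for v in votes if v == "approve")
    return "approve" if approvals * 2 > len(votes) else "deny"

# Hypothetical agent policies standing in for each participant's personal LLM.
def cautious_agent(record):
    return "approve" if record["severity"] < 3 else "deny"

def lenient_agent(record):
    return "approve" if record["first_offense"] else "deny"

private_record = {"severity": 4, "first_offense": True}  # never leaves the "box"
print(enclave_judgement([cautious_agent, lenient_agent, cautious_agent], private_record))
# -> "deny"; callers learn only the judgement, not the record or the individual votes
```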
Privacy protection becomes more important as participants submit larger inputs containing more personal information. Anonymity requires zero-knowledge proofs, which Buterin said should be built into all governance tools.

