Evolution
In our pursuit to design a new type of organization, Optimism's public decision making process has undergone significant evolution since its inception, reflecting our commitment to iterative improvement and experimentation. Below is a summary of some of the key things we learned along the way.
Key Stakeholders
- We’ve run multiple experiments to understand who our most engaged stakeholders are and how they participate in our public decision making process.
- Tokenholders: Anyone who holds OP can vote
- We’ve also run multiple delegation experiments aimed at getting specific types of tokenholders (protocols, chains, and individual community members) more involved. Our main learning has been that tokenholders need strong incentive alignment to invest time in decision making processes, and they want to be involved in low-effort, high-impact ways.
- While our system allows for delegation - whereby tokenholders can assign their votes to someone else to cast on their behalf - over time we’ve come to believe that delegation disrupts the incentives of token-weighted voting and that voting directly should be heavily encouraged.
- Tokenholders are asked to make decisions that would benefit from investor protections
- Users, Apps, and Chains: You must qualify to be a Citizen
- Citizenship started with a small initial group and expanded via a Web of Trust model. This model suffered from the in-group dynamics such models are prone to (which were replicated here), resulting in many Citizens who were not directly impacted by the decisions being made.
- We later ran targeted experiments to evaluate how community members, chains, and past grant recipients voted, ultimately resulting in our key stakeholder model. Our stakeholder model ensures chains, apps, and end-users are able to influence the decisions that impact them.
- Citizens are asked to make decisions that would benefit from consumer protections
- We’ve realized that input from different stakeholders is needed depending on the type of decision being made:
- Preferences: There is no absolute “right” answer and all stakeholders should have a say
- Prediction: There is a correct answer, which is only revealed in the future. Experts are best suited to make these decisions.
- When “experts” are needed, these decisions are made by Councils and Boards - such as the Developer Advisory Board and Security Council. These Councils and Boards are still ultimately accountable to key stakeholders.
- Measurement: This is best done objectively - by a computer when possible, or otherwise by experts.
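To make this taxonomy concrete, the sketch below (in Python) shows one way the three decision types might be routed to different decision-makers. The enum and routing function are illustrative assumptions for this write-up only, not part of any actual Optimism tooling.

```python
from enum import Enum, auto

class DecisionType(Enum):
    PREFERENCE = auto()   # no absolute "right" answer; all stakeholders should have a say
    PREDICTION = auto()   # a correct answer exists but is only revealed in the future
    MEASUREMENT = auto()  # objectively verifiable, ideally by a computer

def decision_makers(kind: DecisionType) -> list[str]:
    """Illustrative (hypothetical) routing of decision types to decision-makers."""
    if kind is DecisionType.PREFERENCE:
        # Preferences: every key stakeholder group weighs in.
        return ["Tokenholders", "Citizens"]
    if kind is DecisionType.PREDICTION:
        # Predictions: expert bodies decide, remaining accountable to stakeholders.
        return ["Developer Advisory Board", "Security Council"]
    # Measurements: computed objectively where possible, otherwise expert review.
    return ["Automated metrics", "Expert review"]

print(decision_makers(DecisionType.PREDICTION))
# ['Developer Advisory Board', 'Security Council']
```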
High Impact Inputs
- Different decisions impact each stakeholder group in unique ways. Our approach has evolved from “everyone decides everything” to one that only asks stakeholders to make decisions that directly impact them.
- In many cases, a stakeholder doesn’t need to make a decision directly, but should still have the ability to veto - or reject - a decision that disadvantages their stakeholder group (see the sketch after this list).
- Stakeholders will also be able to express preferences and influence strategy via non-voting processes.
- We’ve outlined the different decisions here: Figma
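The veto idea referenced above can be sketched as a simple check: a proposal passes only if the deciding group approves it and no veto-holding stakeholder group rejects it above a threshold. The thresholds, group names, and data structures below are hypothetical illustrations, not Optimism's actual governance parameters.

```python
from dataclasses import dataclass, field

# Illustrative thresholds only; real parameters vary by process and are not specified here.
APPROVAL_THRESHOLD = 0.51
VETO_THRESHOLD = 0.30

@dataclass
class Proposal:
    approval: float  # approval share from the deciding group (0.0 to 1.0)
    veto_votes: dict[str, float] = field(default_factory=dict)  # rejection share per veto-holding group

def passes(p: Proposal) -> bool:
    """A proposal passes only if approved and not vetoed by any stakeholder group."""
    if p.approval < APPROVAL_THRESHOLD:
        return False
    return all(share < VETO_THRESHOLD for share in p.veto_votes.values())

# Approved by the deciding group, but vetoed by the hypothetical "Chains" group.
print(passes(Proposal(approval=0.62, veto_votes={"Citizens": 0.12, "Chains": 0.41})))  # False
```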
The Core Set of Decisions
- Governance minimization is a foundational principle of Optimism’s collective decision making process. Our evolution has been one of continuously simplifying process, reducing structure, and further refining scope.
- We’ve learned over time that several decisions that used to be made publicly actually benefit from more centralized decision making (CoCC, CFC, BB).
- We believe the set of decisions that should be made collectively are those that:
- Reduce platform risk for customers and users of the protocol
- Prevent short-term profit seeking at the long-term expense of the platform
- Optimism has always been committed to supporting public goods, but the way we support public goods has evolved greatly over time. We started with a fully public grant-making process, which has gradually evolved into a more metrics-driven, programmatic approach requiring less human input (see the sketch after this list). We expect this to be a continued area of evolution and innovation.
- Our Decentralization Milestones outlines the remaining steps we hope to take to refine and further decentralize our public decision making process.
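As a rough illustration of what "metrics-driven and programmatic" can mean in practice, the sketch below splits a fixed budget across projects in proportion to an impact metric. The budget, metric, and project names are made-up placeholders and do not describe any actual Retro Funding round.

```python
def allocate(budget: float, impact: dict[str, float]) -> dict[str, float]:
    """Split a fixed budget across projects in proportion to an impact metric."""
    total = sum(impact.values())
    if total == 0:
        return {name: 0.0 for name in impact}
    return {name: budget * score / total for name, score in impact.items()}

# Hypothetical budget and impact scores; not real Retro Funding data.
print(allocate(1_000_000, {"app_a": 40.0, "app_b": 35.0, "chain_c": 25.0}))
# {'app_a': 400000.0, 'app_b': 350000.0, 'chain_c': 250000.0}
```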
Experimentation
Underpinning the learnings outlined above is a culture of experimentation. In our early days, our iteration sometimes relied on a less scientific, trial-and-error approach. Over time, we’ve realized that a more rigorous, data-driven approach - leveraging controlled trials where possible - allows us to truly understand what works and what doesn’t. We often collaborate with academics, industry experts, and independent researchers. A sample of our Research & Experiments findings is summarized in the table below.
| Topic | Research question | Methods | Key Takeaways | Write-up |
|---|---|---|---|---|
| Airdrops | Do airdrops drive prosocial behaviors like delegation? Do they increase retention among new users? | Regression discontinuity design (RDD; see the sketch after this table) | Increased delegation, especially among small wallets; Baseline reward increases retention but high-activity bonuses decrease retention | - Did OP Airdrop 2 Increase Governance Engagement? - Did OP Airdrop 5 Increase User Retention Rates? A Regression Discontinuity Analysis |
| Citizenship | How do we identify key stakeholders (e.g., end users, app devs, or partner chains) and give them decision-making rights? | Voting data analysis, surveys, qualitative interviews | Experts are no “better” at values questions but are better at assessing impact; Guest voters don’t vote differently from the existing set; 3 clear personas | - Citizenship Learnings 2024 |
| Deliberation | How does participating in a deliberative process with direct policy implications change individual attitudes and behaviors? | Randomized experiment, instrumental variable regression | Deliberation increases knowledge and trust; No reduction in polarization when outcome is binding | - When Is Deliberation Useful for Optimism Governance? |
| Futarchy | Do projects selected via Futarchy see a greater increase in TVL than projects selected by the existing Grants Council? | Time-series analysis, RDD, analysis of Telegram, survey, and trading data | Futarchy grants produced more Superchain TVL after 3 months than Grants Council picks; Predictions notably overpriced; 400+ forecasters participated | - Futarchy v1 Preliminary Findings |
| Public Goods Funding | What voting designs lead to impactful grant allocation decisions? Does algorithmic/metrics-based voting improve outcomes? | Voting data analysis, synthetic control method, surveys, qualitative feedback | Humans are bad at quantification and biased toward even distributions rather than distributions that reflect value; Experts with context make better decisions for OSS; Individual bias about impact vs. need is inevitable | - Retro Funding 4: Learnings and Reflections - Season 7 Retro Funding - Early Evidence on Developer Tooling Impact |
| Voter mobilization | Do appeals to civic duty, economic self-interest, collective security, or decision authority increase tokenholder turnout? | Randomized multi-wave experiment | Economic and security appeals (tangible stakes) were most effective in driving turnout; Repeated reminders are necessary to sustain increased participation; Catchy visuals and follow-ups are important | - “What Drives Turnout in Digital Governance? Evidence from a Multi-stage Voter Mobilization Experiment among 34,328 Tokenholders” (Draft available upon request: [email protected]) |
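For readers unfamiliar with the regression discontinuity design (RDD) referenced in the airdrop row above, the sketch below illustrates the basic idea on simulated data: wallets just above an eligibility cutoff are compared with wallets just below it, and the jump in the outcome at the cutoff estimates the treatment effect. All variables, thresholds, and data here are synthetic assumptions; the actual studies used on-chain data.

```python
# Minimal sharp-RDD sketch on simulated data; everything below is synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
activity = rng.uniform(0, 100, n)             # running variable, e.g. a pre-airdrop activity score
cutoff = 50.0                                 # hypothetical eligibility threshold
treated = (activity >= cutoff).astype(float)  # 1 if the wallet received the airdrop
# Simulated retention with a 0.05 jump at the cutoff (the true "treatment effect").
retention = 0.2 + 0.002 * activity + 0.05 * treated + rng.normal(0, 0.1, n)

# Local linear regression within a bandwidth around the cutoff.
bandwidth = 10.0
mask = np.abs(activity - cutoff) <= bandwidth
x = activity[mask] - cutoff                   # center the running variable at the cutoff
X = sm.add_constant(np.column_stack([treated[mask], x, treated[mask] * x]))
fit = sm.OLS(retention[mask], X).fit()
print("Estimated jump at the cutoff:", fit.params[1])  # coefficient on the treatment indicator
```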