Natalie Gent
Case Study 02   ·   2025

Octant

End-to-end UX research for an Ethereum public goods funding platform, shaping the V2 product roadmap.

Client
Octant / Golem Foundation
Sector
Web3 / fintech
Year
2025
Method
Usability + interviews
Sample
10 participants
Role
Sole researcher

§ 01 Introduction

Octant is a community-driven platform for funding public goods on Ethereum. Users lock Golem tokens (GLM), earn staking rewards, and can redirect a portion of those rewards to projects that strengthen the Ethereum ecosystem.

Octant had already run internal feedback sessions and tracked behaviour in Matomo; users were curious but quickly became confused or disengaged. The Golem Foundation's Head of Product and Head of Design recognised they needed deeper, structured external research to uncover where and why users were getting lost, and brought in Mooch, a UX content agency.

Mooch's content designer and project lead, Tahi, led the initial content and UX audit of the platform. Findings from the audit directly informed the hypotheses and probe points I built into the research.

I was responsible for planning, conducting, and delivering the research from end to end. The challenge was unusual since the domain is niche even within web3. Users had high crypto literacy but no familiarity with public goods funding, and the core flows depended on real token mechanics that couldn't be fully simulated in a test environment.

§ 02 User needs

As a GLM token holder interested in public goods
I want to understand how Octant works and where my rewards go
so that I can confidently participate and feel good about my contribution

§ 03 Research approach

Planning and method selection

I chose moderated usability testing with semi-structured interviews after ruling out three alternatives.

Moderated sessions let me observe, probe, and adapt in real time. That mattered because each participant interpreted Octant's language and flows differently.

Hypotheses

I had three working hypotheses drawn from the audit and early stakeholder conversations:

  1. Terminology was the primary barrier. Terms like "Effective GLM", "Match Funding", and "Epoch" would confuse even crypto-native users because they carried assumptions from DeFi that didn't apply here.
  2. Users would misunderstand Octant's purpose. "Public goods" sounds like charity. We expected participants to think global causes, not Ethereum infrastructure projects.
  3. The donation flow would feel disconnected. The 90-day epoch cycle would create a gap between staking and donating that blunted emotional engagement.

Recruitment

Recruitment was the hardest part of this project. I needed participants who were active in web3 (held tokens, used dapps) to navigate wallet connections, but who'd never heard of Octant so their reactions would be genuine.

Candidates were sourced through Tahi's web3 network. Once interest was confirmed, details were passed to me and I took over the full recruitment pipeline: screening for eligibility, scheduling sessions via Calendly, and managing the technical onboarding (importing a private key into MetaMask, adding the Sepolia test network on Ethereum). Without that preparation, sessions would have opened with wallet troubleshooting instead of research.

I recruited ten participants based in Portugal, France, and Germany. Participants were offered 200 GLM tokens (approximately $50 USD) as an incentive.

The majority self-identified as "degens" (financially motivated crypto users) rather than "regens" (community-oriented). Octant's assumed user base skewed regen, but the people we could actually recruit skewed degen. That gap between assumed and actual audience informed several of my recommendations.

Geographic scope and limitations

All participants were based in Europe, which made scheduling manageable but limited global representativeness. Octant is a borderless product with users worldwide, so findings should be read with that constraint in mind. It also gave us a clear rationale for follow-up global research.

We deliberately included non-native English speakers (French, German, and Portuguese) to test comprehension across language backgrounds. Several participants found the platform's copy overly formal or unclear, which may have been compounded by English being their second language.

Preparing the test environment

Before sessions began, Tahi's audit had identified specific content issues: vague homepage messaging, missing tooltips, and jargon that made sense internally but not to users. Selected fixes were applied to a staging environment by Octant's tech lead so I could test how small copy changes affected comprehension during sessions.

I also walked through every flow a participant would encounter on the staging site. That caught edge cases and sharpened my discussion guide.

The staging environment didn't simulate a full 90-day epoch cycle, so I couldn't observe real allocation behaviour or see how the epoch timeline affected decision-making. I scoped the study to first-time comprehension and navigation, and flagged the allocation flow for follow-up.

Discussion guide

The discussion guide moved from open exploration to targeted probing:

Phase 1 · Context interview. I opened with questions about web3 experience, familiarity with staking platforms, donation habits, and primary language. I also asked participants whether they'd categorise themselves as a "regen" or "degen" to establish each participant's mental model before they encountered Octant. This let me track where the product's framing matched or missed their expectations.

Phase 2 · Task-based walkthrough. Participants explored the platform using think-aloud protocol, working through connecting a wallet, locking GLM tokens, browsing projects, and reviewing the metrics page. I built in conditional branching: for example, if a participant didn't instinctively connect their wallet first, I let them explore freely before redirecting, which revealed how the interface guided (or failed to guide) new users. I probed at specific friction points flagged by the audit: the onboarding carousel, the "Effective GLM" label after locking, the Match Funding multiplier, and the Patron Mode toggle in settings.

Phase 3 · Debrief. Participants reflected on whether the platform's purpose was clear, what worked well, what needed improvement, and whether they'd use Octant themselves. This phase consistently produced the most candid feedback.

Several of the strongest findings came from moments where a participant's confusion led me to probe an area I hadn't anticipated.

Facilitation

Each session ran 30 to 45 minutes over Google Meet. I facilitated all ten sessions and recorded and transcribed each one with Otter.

Every session opened with a verbal informed consent process covering the purpose of the research, confidentiality, consent to record, the right to withdraw at any time, and my independence as an external researcher (not an Octant employee). Without that framing, participants might have held back criticism.

Analysis

After completing all ten sessions, I reviewed each recording and my session notes systematically, identifying patterns that recurred across multiple participants. At n=10, formal coding frameworks would have added process without proportional insight. Instead, I tracked each instance of confusion, hesitation, or misinterpretation, then grouped these into themes based on where they occurred in the interface and what they revealed about user mental models.
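The lightweight grouping described above can be sketched in a few lines. The observation log and labels here are hypothetical, invented for illustration; the point is that a theme needs recurrence across participants, not volume from one person:

```python
from collections import Counter, defaultdict

# Hypothetical observation log: (participant, interface area, note).
# The real notes were free-form session annotations, not structured data.
observations = [
    ("P1", "locking", "alarmed by 'Effective GLM' label"),
    ("P2", "locking", "unsure whether tokens can be unlocked"),
    ("P3", "metrics", "thought community stats were personal"),
    ("P1", "match_funding", "assumed multiplier boosts own rewards"),
]

# Group confusion instances by where they occurred in the interface,
# then count how many distinct participants hit each area.
by_area = defaultdict(set)
for participant, area, _note in observations:
    by_area[area].add(participant)

recurrence = Counter({area: len(people) for area, people in by_area.items()})
# "locking" recurs across two participants; the others are single mentions.
```

At n=10 this kind of tally does the work of a coding framework without the overhead.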

Nine themes emerged consistently. I cross-referenced these against the audit findings and my hypotheses to separate confirmed assumptions from new discoveries.

§ 04 Findings

It's cool — once you get it.

Users admired the product's design and polish, but almost everyone was initially confused. The interface was described as "clean", "elegant", and "better than most dapps", yet comprehension problems got in the way of confident use.

  1. Octant's purpose was unclear. If users can't grasp the core proposition within seconds, they won't engage with any of the flows that follow.
  2. Onboarding failed at first contact. The carousel was broken and the walkthrough confused people. First impressions of a product shouldn't signal unreliability.
  3. GLM locking confused nearly everyone. Locking is the gateway to everything else in Octant. Uncertainty at this stage meant users were unlikely to proceed to project selection or donation.
  4. Match Funding was universally misunderstood. Quadratic Match Funding is Octant's most distinctive feature. Every participant thought it boosted their personal rewards. It doesn't. It boosts the project's donation.
  5. Terminology slowed comprehension across the board. When every other label requires interpretation, users spend cognitive effort on language instead of decision-making. The cumulative effect is exhaustion and disengagement.
  6. The metrics page was attractive but confusing. Users liked the layout but couldn't tell whether they were looking at personal stats or community data. The financial terminology created wrong associations.
  7. Patron Mode was mistaken for a premium feature. The label "Patron Mode" suggested an exclusive tier or paid upgrade, not what it actually does: automatically allocate rewards to all projects. Participants either ignored it or assumed it wasn't relevant to them.
  8. The Uniqueness Score was opaque. Most participants guessed it related to Sybil resistance but couldn't explain how to improve their score or what the "15+" threshold meant. Without context, it felt like an arbitrary gatekeeping mechanism.
  9. The emotional payoff was missing. Donating produced no confirmation, no feedback, nothing. The 90-day gap between locking and allocating made it worse. Giving felt transactional rather than meaningful.
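The Match Funding misunderstanding (finding 4) is understandable: in most DeFi contexts a multiplier boosts your own yield. A minimal sketch of the standard quadratic funding (CLR) formula shows why the match instead rewards a project's breadth of support; Octant's actual implementation may differ, and the numbers are purely illustrative:

```python
import math

def quadratic_match_score(contributions: list[float]) -> float:
    """Raw quadratic funding score for one project: the square of the
    sum of square roots of individual contributions. Matching pools
    normally scale these scores across all projects; that step is
    omitted here. Illustrative only -- not Octant's actual code."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# 100 donors giving 1 GLM each out-score a single 100 GLM donor:
# the match amplifies the project's breadth of support, and nothing
# flows back to any donor's personal rewards.
broad_support = quadratic_match_score([1.0] * 100)  # (100 * 1)^2 = 10000
single_whale = quadratic_match_score([100.0])       # (sqrt(100))^2 = 100
```

The asymmetry is the whole point of the mechanism, which is exactly why participants anchored on the wrong mental model.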

§ 05 Recommendations

I delivered recommendations in three tiers based on effort and impact.

Tier 1 · High impact, low effort

Clarify language and fix misleading labels.

I recommended clarifying Match Funding site-wide with consistent language and an impact visualiser, replacing "Save" with "Allocate" or "Donate" (every participant misread it as a bookmark), and simplifying homepage messaging so the core proposition lands within seconds.

Inline tooltips around Effective GLM (e.g. "You can unlock anytime") would prevent the alarm response nearly every participant experienced when locking. And renaming Patron Mode to something like "Auto-allocate to all projects" would stop users dismissing it as a premium tier.

Tier 2 · Medium effort, high value

Improve onboarding and separate personal and community data.

I recommended replacing the broken carousel with a checklist-style flow that guides users through setup, locking, and allocating step by step. Users explicitly asked for this. The Uniqueness Score also needed reframing: renaming it (e.g. "Verification Score") and adding contextual tooltips would make it actionable rather than opaque.

The metrics page was the single most common source of confusion. I recommended adding a "My Impact" toggle so users could switch between personal stats and community-wide data without second-guessing what they were looking at.

Tier 3 · Strategic enhancement

Redesign the allocation flow and sustain emotional engagement.

I recommended introducing a clear progress tracker (Lock → Earn → Allocate) with a confirmation step after allocation. The current flow had no checkout moment and no feedback. A visible 90-day epoch countdown would also help users understand where they are in the cycle and when they can act.
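The countdown piece of this recommendation is simple date arithmetic. A sketch, assuming a hypothetical epoch start timestamp and function name (not Octant's actual API):

```python
from datetime import datetime, timedelta, timezone

EPOCH_LENGTH = timedelta(days=90)  # Octant's 90-day epoch cycle

def days_until_allocation(epoch_start: datetime, now: datetime) -> int:
    """Days remaining in the current epoch before the allocation
    window opens. `epoch_start` is a hypothetical timestamp for when
    the current epoch began; the real platform exposes this state
    differently."""
    remaining = (epoch_start + EPOCH_LENGTH) - now
    return max(0, remaining.days)

start = datetime(2025, 1, 1, tzinfo=timezone.utc)
days_left = days_until_allocation(start, datetime(2025, 2, 1, tzinfo=timezone.utc))
```

Surfacing a number like this prominently would tell users where they are in the cycle and when they can next act.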

Post-allocation messages, impact visualisations, and ambient donation options (e.g. "Auto-allocate 10% every epoch") would keep users connected across the long epoch cycle. Without these, giving would continue to feel transactional rather than meaningful.

§ 06 Deliverables

§ 07 Impact

Research outcomes

Broader outcomes

Following the research, Octant implemented several of the recommended changes. In subsequent epochs, the proportion of users who donated their rewards rose from approximately 2% to 8%, a fourfold increase. While multiple factors likely contributed, the timing aligned with the clarity improvements the research recommended. We didn't design the research to measure conversion, so this is correlation, not causation.

§ 08 Reflections

What worked well. The moderated format was the right call. The best findings came from probing hesitation in the moment. An unmoderated study would have captured task completion rates but missed the why behind the confusion. Zero no-shows across ten sessions.

What I'd do differently. I'd push harder for a test environment that simulated a full epoch cycle. That was the study's biggest gap. I'd flag it earlier in scoping next time. I'd also run a short post-session survey for quantitative satisfaction data alongside the qualitative.

What I learned. The regen/degen split in our sample was an accidental but valuable finding. It challenged the team's assumptions about who their users are and reshaped how I framed the messaging recommendations. Sometimes the most useful insight comes from the recruitment process, not the sessions themselves.


View the live site →
