
Strava for AI Developers: Why Visibility Drives Better Coding

Making AI coding activity visible — not surveilled — is the key to sustained adoption. The fitness tracker analogy explains why.

Pierre Sauvignon · February 4, 2026 · 11 min read

Strava changed running. Not by making people faster. Not by inventing a better shoe. Strava changed running by making it visible.

Before Strava, your run existed only in your own memory. You ran, you felt tired, you maybe remembered your time. After Strava, your run existed in a social graph. Your friends could see it. You could see theirs. The act of running became observable, comparable, and — critically — shareable.

That visibility changed behavior at scale. People who logged their runs on Strava ran more often. They ran slightly farther. They stuck with the habit longer. Research published in the British Journal of Sports Medicine confirms that social features in activity trackers significantly increase exercise adherence. Not because anyone forced them. Not because a manager was watching. Because visibility itself is a motivator.

The same dynamic is now playing out in software development. AI coding tools are reshaping how developers write code. But most teams have zero visibility into how those tools are actually being used. The developer who spent three hours crafting precise prompts and shipping clean code looks identical to the developer who spent three hours in a doom loop. From the outside, both sat at a desk and typed.

This is the Strava problem for AI-assisted development. And solving it matters more than most engineering leaders realize.

The Feedback Loop Problem

Developers who track their AI usage improve faster. This is not a motivational slogan. It is a consequence of how skill acquisition works.

Learning any new tool requires feedback. You try something, observe the result, and adjust. The tighter the feedback loop, the faster you learn. A developer learning a new programming language gets feedback from the compiler instantly. A developer learning to write effective prompts gets almost no structured feedback at all.

Think about what happens today. A developer writes a prompt. The AI generates code. The developer accepts it, modifies it, or rejects it. Then they move on to the next thing. There is no record of what worked. No data on how many iterations it took. No comparison to yesterday, last week, or last month.

Without feedback, improvement is random. You might stumble into a better prompting pattern, or you might repeat the same inefficient habits for six months. You cannot know because you cannot see.

Now consider what happens when that activity becomes visible. You can see that your acceptance rate this week was 62%, up from 48% last month. You can see that your sessions average 14 iterations, while the team median is 8. You can see that you spend 70% more tokens per feature than your peer who ships at the same velocity.

That is feedback. And feedback drives improvement.
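The week-over-week comparison described above takes only a few lines once session activity is logged. This is a minimal sketch, not any real tool's API: the session records and field names (`suggestions`, `accepted`, `iterations`) are hypothetical, chosen only to illustrate the kind of feedback loop such tracking enables.

```python
from statistics import median

# Hypothetical session records: one dict per AI coding session.
# Field names are illustrative, not from any real tool's API.
sessions_last_month = [
    {"suggestions": 40, "accepted": 19, "iterations": 15},
    {"suggestions": 55, "accepted": 27, "iterations": 13},
]
sessions_this_week = [
    {"suggestions": 30, "accepted": 19, "iterations": 9},
    {"suggestions": 50, "accepted": 31, "iterations": 7},
]

def acceptance_rate(sessions):
    """Share of AI suggestions that made it into accepted code."""
    accepted = sum(s["accepted"] for s in sessions)
    offered = sum(s["suggestions"] for s in sessions)
    return accepted / offered if offered else 0.0

before = acceptance_rate(sessions_last_month)
after = acceptance_rate(sessions_this_week)
print(f"Acceptance rate: {before:.0%} -> {after:.0%}")  # 48% -> 62%

my_iterations = median(s["iterations"] for s in sessions_this_week)
print(f"Median iterations per session: {my_iterations}")
```

The point is not the arithmetic. It is that without the log, neither number exists, and the developer is back to guessing.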

The runner who can see their pace per kilometer improves their pace. The developer who can see their token efficiency improves their prompting. The mechanism is identical. Visibility creates a feedback loop where none existed before.

Surveillance vs. Visibility: The Critical Distinction

Here is where most engineering leaders get it wrong. They hear “track AI usage” and think “monitoring.” They imagine dashboards in a manager’s office. Compliance reports. Performance reviews based on tokens consumed. Developers forced to justify every prompt.

That is surveillance. And surveillance kills adoption.

Surveillance is top-down. It monitors compliance. It asks: “Are you using the tool enough?” It creates anxiety. It incentivizes gaming — developers inflating token counts or avoiding the AI tool for tasks where they might look slow. Surveillance treats developers as subjects to be watched.

Visibility is fundamentally different. Visibility is personal first. It starts with the individual developer seeing their own data. It asks: “How am I doing? Am I getting better?” It creates curiosity. It incentivizes experimentation — trying new approaches because you can see whether they work. Visibility treats developers as professionals who want to improve.

The distinction is not semantic. It has real consequences for adoption.

Teams that implement surveillance-style tracking see initial spikes in tool usage followed by resentment and gaming. The numbers go up. The value does not. Developers learn to generate code they do not need in order to hit usage targets. This is the corporate equivalent of running on a treadmill to inflate your Strava numbers — technically moving, producing nothing.

Teams that implement visibility-first tracking see slower initial adoption but sustained improvement. Developers choose to engage with their data because it is useful to them. They share insights with peers because they are genuine. The numbers reflect reality because nobody is incentivized to fake them.

The difference comes down to one question: who is the data for? If it is for managers, it is surveillance. If it is for developers, it is visibility. The best implementations serve both — but they start with the developer.

What the Activity Feed Looks Like

Strava works because it made the right things visible. Not your heart rate variability or your lactate threshold. Your route, your pace, your distance, your frequency. Simple, understandable metrics that any runner can act on.

The equivalent for AI-assisted development is a focused set of metrics that tell developers something useful about their own practice:

Tokens consumed per session. This is your cost-per-attempt metric. High token consumption on simple tasks signals prompt inefficiency. Low token consumption on complex tasks signals mastery. The trend matters more than any single number.

Sessions per day and per week. Frequency of use is a basic adoption indicator, but it also reveals patterns. Developers who use AI tools in short, focused sessions tend to produce better outcomes than those who run marathon sessions. The data shows you which camp you are in.

Acceptance rate. What percentage of AI-generated code makes it into your commits? A low acceptance rate means you are spending time generating code you do not use. A very high acceptance rate might mean you are not reviewing critically enough. Both are worth knowing.

Streaks and consistency. Like Strava’s streak features, tracking consecutive days of AI tool usage reveals commitment patterns. Developers who use AI tools consistently improve faster than those who use them sporadically. Streaks and gamification tap into the same psychology that makes fitness apps sticky.

Session duration and iteration count. How long does it take you to get to code you accept? If your sessions are getting shorter over time, you are getting better at prompting. If they are getting longer, something has changed — new problem domain, new tool version, or degraded habits.

None of these metrics require exposing prompt content. None require tracking what code was written. They are activity metrics, not content metrics. They tell you how you work without revealing what you work on. That distinction is essential for trust.
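To make the activity-not-content distinction concrete, here is a sketch of what such a log might look like and how two of the metrics above could be derived from it. The schema is hypothetical — the point is that nothing in the data is a prompt or a line of code, only timestamps and counts:

```python
from datetime import date, timedelta

# Hypothetical activity log: one entry per session, activity metrics only.
# No prompts, no generated code -- just when, how long, and how much.
log = [
    {"day": date(2026, 2, 1), "tokens": 12_000, "iterations": 11, "minutes": 42},
    {"day": date(2026, 2, 2), "tokens": 9_500, "iterations": 8, "minutes": 31},
    {"day": date(2026, 2, 3), "tokens": 7_800, "iterations": 6, "minutes": 24},
]

def current_streak(entries, today):
    """Consecutive days ending today with at least one session."""
    active_days = {e["day"] for e in entries}
    streak = 0
    day = today
    while day in active_days:
        streak += 1
        day -= timedelta(days=1)
    return streak

tokens_per_session = sum(e["tokens"] for e in log) / len(log)
print(f"Avg tokens per session: {tokens_per_session:,.0f}")
print(f"Streak: {current_streak(log, date(2026, 2, 3))} days")
```

A schema like this can live entirely on the developer's side; nothing in it would let a manager reconstruct what was worked on, only that work happened.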

The Social Layer

Strava’s genius was not personal tracking. Plenty of apps did that before Strava. The genius was the social layer. Seeing that your friend ran 30 kilometers last week makes you think about your own 15 kilometers differently. Not because you feel bad. Because activity is contagious.

The same principle applies to AI coding adoption. When developers can see — anonymously or by choice — that their peers are actively using AI tools, it normalizes the behavior. The developer who felt awkward asking an AI to help write a function sees that senior engineers on the team are doing it daily. The stigma evaporates.

This is not about competition. Or rather, it is not only about competition. Leaderboards can drive healthy competition when designed well. But the more powerful effect is normalization. Social visibility turns AI tool usage from something unusual into something everyone does.

Consider the adoption curve of any new tool in an engineering organization — what Everett Rogers described in Diffusion of Innovations. There are early adopters who try it immediately. There is a skeptical middle who waits. There are holdouts who resist. The bottleneck is always the skeptical middle. They will not adopt because a manager tells them to. They will not adopt because a vendor shows them a demo. They adopt because they see people they respect — peers at their level, on their team — using the tool and getting results.

Visibility accelerates this process. Without it, the skeptical middle has no signal. They cannot see who is using the tool, how often, or with what results. With visibility, the signal is everywhere. The team activity feed shows daily engagement. The weekly summary shows who shipped what. The adoption curve compresses because the social proof is built into the workflow, not delivered in a quarterly all-hands presentation.


Why This Works Better Than Mandates

Engineering leaders who want their teams to adopt AI coding tools face a choice. They can mandate usage: “Everyone must use AI tools for at least 30% of their coding tasks.” Or they can make usage visible and let social dynamics do the work.

Mandates fail for three reasons.

First, they are unenforceable without surveillance. How do you know if a developer used AI tools for 30% of their coding tasks? You would need to monitor their entire workflow. That monitoring destroys trust, which undermines the adoption you are trying to drive. It is a self-defeating loop.

Second, mandates create compliance without competence. A developer who is forced to use an AI tool will use it. Badly. They will generate code they do not need. They will accept suggestions they should reject. They will hit whatever metric satisfies the mandate and then go back to their preferred way of working. You get the appearance of adoption without the reality.

Third, mandates ignore individual variation. Some developers will take to AI tools immediately. Others need more time, different use cases, or different prompting styles. A flat mandate treats everyone the same, which is efficient for managers and useless for developers.

Visibility avoids all three problems. It does not require surveillance because developers track their own activity voluntarily. It creates competence because the feedback loop drives genuine improvement. And it accommodates individual variation because each developer engages with their own data at their own pace.

The evidence from fitness tracking is unambiguous on this point, as documented in a meta-analysis published in JAMA Internal Medicine covering wearable activity trackers and health outcomes. Strava did not mandate that people run three times a week. It made running visible and social. The mandate came from within — from seeing your own activity, comparing it to your potential, and wanting to do more. Developers are no different. Give them visibility, and most will do the right thing. Force them, and they will comply on paper while resenting it in practice.

From Invisible to Observable

The fundamental problem with AI coding adoption today is that it is invisible. Developers use AI tools alone, in their own editors, with no shared record of the activity. Managers cannot see whether the tools are helping. Peers cannot see what good usage looks like. The developers themselves cannot see whether they are improving.

That invisibility is the enemy of adoption, improvement, and ROI. You cannot optimize what you cannot observe. You cannot learn from peers whose behavior you cannot see. You cannot make a business case for tools whose impact you cannot measure.

The Strava analogy is not perfect. Software development is not a solo sport with clear metrics. Code quality is harder to measure than running pace. The social dynamics of an engineering team are more complex than a running club. There is real risk in getting this wrong — in building systems that feel like surveillance rather than support.

But the core insight transfers cleanly. Visibility changes behavior. Making an activity observable, measurable, and social drives sustained engagement in ways that mandates, training sessions, and executive memos never can.

The teams that figure this out first will have a compounding advantage. Their developers will improve faster because they have feedback loops. Their adoption will spread faster because they have social dynamics working in their favor. Their ROI will be clearer because they have data instead of anecdotes.

The question is not whether to make AI coding visible. The question is whether you do it in a way that empowers developers or alienates them. Get the answer right, and you get the engineering culture equivalent of a running club where everyone is getting faster together. Get it wrong, and you get the equivalent of a corporate wellness program that nobody believes in.

The teams getting this right share three traits. They start with individual visibility before team dashboards. They measure activity, not content. And they let social dynamics drive adoption instead of top-down mandates. Those three decisions are the difference between a tool rollout that sticks and one that fades into the graveyard of unused enterprise software.

Start by making the activity visible. The behavior change follows.

Pierre Sauvignon

Founder

Founder of LobsterOne. Building tools that make AI-assisted development visible, measurable, and fun.
