Distribution Strategy: From Published to Discovered
James pulled up ClawHub in his browser. He could see TutorClaw listed under his publisher account. Published. Available. Anyone could install it with a single command. But when he searched for "tutor" in the ClawHub directory, his product appeared somewhere in the middle of the results, below products with more installs and higher ratings.
"It is published," he said. "But nobody is finding it."
Emma sat down next to him. "Publishing is a technical step. You completed that in Chapter 58. Distribution is a business problem. And it is a different kind of thinking entirely."
James turned to her. "At the warehouse, we had this exact problem. We manufactured a great product. Reliable, well-priced, exactly what the market needed. We put it in the catalog. And nothing happened. It was on page forty-seven of a three-hundred-page catalog. Nobody found it. The product did not change. What changed was where we placed it, which distributors carried it, and how we got it onto the shelves people actually looked at."
Emma raised an eyebrow. "You are about to tell me that shelf placement matters more than product quality."
"No. Product quality is the baseline. Without it, nothing else matters. But a great product in a bad location sells less than a good product in a great location. Distribution is the location problem."
You are doing exactly what James is doing. You published your product to ClawHub and completed the technical work. Now you face the question that separates a published product from a discovered one: how do people find it?
Publishing Is Not Distribution
In Chapter 58 Lesson 16, you completed the publishing workflow: package manifest, verification, and the clawhub publish command. Your product is on ClawHub. That is necessary but not sufficient.
Publishing makes your product available. Distribution makes it discoverable. The difference is the difference between stocking a product in a warehouse and placing it where customers actually shop.
| Concern | Publishing (Ch58 L16) | Distribution (This Lesson) |
|---|---|---|
| Question | "How do I get this onto ClawHub?" | "How do people find and install it?" |
| Effort | Technical: manifest, verify, publish | Strategic: discovery, placement, community |
| Frequency | Once (plus version updates from L8) | Ongoing: every interaction shapes future discovery |
| Success metric | "It is listed on ClawHub" | "People find it, install it, and recommend it" |
This lesson covers the strategy side. No commands to run. No manifests to write. The work here is analytical: understanding how marketplaces work and how to position your product within one.
How Discovery Works on ClawHub
ClawHub is a marketplace, not just a package registry. A package registry stores and serves packages. A marketplace adds discovery: search, categories, ratings, featured listings, and recommendations. The difference matters because a registry serves users who already know what they want, while a marketplace helps users find things they did not know existed.
Discovery on ClawHub follows a hierarchy:
Search. A user types a query ("tutor," "code review," "finance") and sees results ranked by relevance and quality signals. Search is the primary discovery path for users who have a specific need.
Categories. ClawHub organizes products into categories (education, productivity, development, finance). Category browsing is the primary discovery path for users who are exploring, not searching.
Ratings and reviews. Each product accumulates ratings from users who have installed it. Higher-rated products appear higher in search results and category listings. A rating is a quality signal that other users trust more than the product's own description.
Featured listings. ClawHub highlights products that meet certain criteria (high ratings, high install velocity, editor recommendations). Featured products receive disproportionate visibility. This is the equivalent of shelf placement at eye level.
Notice what is not in this hierarchy: marketing spend, advertising, paid placement. ClawHub's current discovery model is merit-based. The quality signals (ratings, reviews, install counts) are generated by real users. You earn visibility by building something that users rate highly, not by purchasing placement.
This means the product itself is the primary distribution mechanism. Every good experience a learner has with TutorClaw is a potential rating. Every rating improves ranking. Every ranking improvement increases discovery. The product's quality is the engine of its own distribution.
The Three Install Paths
Not every user discovers and installs products the same way. At scale, three install paths serve three distinct user segments:
| Path | How It Works | Primary Segment | Friction Level |
|---|---|---|---|
| CLI | clawhub install panaversity/tutorclaw | Power users | Low (for them) |
| Launch GUI | One-click install from ClawHub web page | Mainstream users | Very low |
| Manual config | Editing .mcp.json to add the MCP server configuration | Enterprise users | Higher |
CLI users are comfortable with terminal commands. They read documentation, follow quickstart guides, and install by pasting a command. This is the developer audience. For them, CLI is the fastest path.
Launch GUI users interact through a web interface. They browse ClawHub's website, find a product, and click an install button that configures their OpenClaw automatically. This is the mainstream audience. The one-click path removes the barrier of knowing command-line syntax.
Manual configuration users need control over exactly what gets installed and how. Enterprise environments may have policies about which MCP servers are permitted, which network configurations are allowed, and which approval processes must be followed. These users edit configuration files directly because their workflow requires explicit, auditable steps.
The three paths exist because distribution is not one-size-fits-all. A product that only supports CLI installation loses the mainstream audience. A product that only supports GUI installation frustrates power users who prefer commands. A product without manual configuration options is invisible to enterprise evaluators.
Your TutorClaw already supports all three. The clawhub install command works. The ClawHub listing includes a launch button. The shim skill's MCP server configuration can be manually added to .mcp.json. These three paths were established when you published in Chapter 58. The distribution question is which paths your users actually use and how to optimize each one.
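To make the manual path concrete, here is a minimal sketch of what such a hand-edited entry might look like. The server name, command, and arguments are hypothetical placeholders, and the exact .mcp.json schema depends on the OpenClaw version in use; treat this as the shape of the configuration, not its definitive contents:

```json
{
  "mcpServers": {
    "tutorclaw": {
      "command": "uvx",
      "args": ["tutorclaw-mcp"],
      "env": {}
    }
  }
}
```

The point of this path is auditability: an enterprise evaluator can read exactly which server runs, with which command and environment, before anything is installed.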
The Network Effect Flywheel
Each install potentially generates a rating. Each rating affects the product's ranking. Higher ranking means more discovery. More discovery means more installs. This is a feedback loop, and it is the most powerful force in marketplace distribution.
┌─────────────┐
│  Installs   │
└──────┬──────┘
       │ users try the product
       ▼
┌─────────────┐
│   Ratings   │
└──────┬──────┘
       │ quality signals accumulate
       ▼
┌─────────────┐
│   Ranking   │
└──────┬──────┘
       │ higher position in search and categories
       ▼
┌─────────────┐
│  Discovery  │
└──────┬──────┘
       │ more users find the product
       ▼
(back to Installs)
This loop has a critical property: it compounds. The first ten installs might generate three ratings. Those three ratings might move the product up one position in search results. That position might generate five more installs. Those five installs generate two more ratings. The cycle accelerates.
But the loop works in both directions. Poor ratings push the product down. Lower ranking means less discovery. Less discovery means fewer installs. Fewer installs mean the product stagnates. The flywheel is not guaranteed to spin forward. It spins in the direction the ratings push it.
This is why product quality is the foundation of distribution strategy, not a separate concern. In a merit-based marketplace, the product's quality determines whether the flywheel spins forward (good ratings, more discovery) or backward (poor ratings, less discovery). Marketing cannot overcome a product that users rate poorly.
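The compounding behavior, in both directions, can be sketched with a toy model. The conversion rates below are illustrative assumptions, not ClawHub's actual ranking algorithm; the sketch only shows how the same loop produces growth or stagnation depending on the quality signals feeding it:

```python
# Toy model of the installs -> ratings -> ranking -> discovery loop.
# rating_rate: assumed fraction of installers who leave a rating.
# discovery_per_rating: assumed new installs each rating attracts
# via improved ranking. Both numbers are illustrative, not real.

def simulate_flywheel(initial_installs: int,
                      rating_rate: float,
                      discovery_per_rating: float,
                      periods: int) -> list[int]:
    """Return installs per period under a simple compounding model."""
    installs_per_period = []
    installs = initial_installs
    for _ in range(periods):
        installs_per_period.append(installs)
        ratings = installs * rating_rate               # some users rate
        discovered = ratings * discovery_per_rating    # ratings lift ranking
        installs = max(1, round(discovered))           # discovery -> installs
    return installs_per_period

# Forward spin: each rating attracts more users than it took to earn it.
growth = simulate_flywheel(10, rating_rate=0.3,
                           discovery_per_rating=5.0, periods=5)

# Backward spin: weak ratings mean each cohort is smaller than the last.
decay = simulate_flywheel(100, rating_rate=0.1,
                          discovery_per_rating=2.0, periods=5)
```

With the first set of assumptions, installs grow every period; with the second, they collapse toward nothing within a few periods. The product did not change between the two runs. Only the quality signals did.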
Community as Distribution
Beyond the marketplace mechanics, community engagement creates a distribution channel that the flywheel does not capture. Community operates on a different mechanism: trust and reputation built through direct interaction.
Documentation quality. Clear, comprehensive documentation reduces the friction of the first experience. A user who installs TutorClaw and immediately understands how to use it is more likely to rate it positively than a user who installs it and gets confused. Documentation is not a post-launch afterthought; it is a distribution asset.
Support responsiveness. When users report issues or ask questions, the speed and quality of the response shape their perception. A user whose issue is acknowledged and resolved becomes an advocate. A user whose issue is ignored becomes a detractor. Each interaction is a micro-distribution event.
Issue transparency. Publishing known issues, workarounds, and planned improvements builds trust. Users who see that the creator is actively maintaining the product are more likely to recommend it to others. Transparency is a signal that the product has a future, not just a present.
Community engagement does not scale the way the marketplace flywheel does. Responding to individual users takes time. Writing documentation takes effort. But community creates something the flywheel cannot: trust that survives a bad release, a temporary bug, or a period of slow improvement. Community is the resilience layer of distribution.
Try With AI
Exercise 1: Map Your Marketplace Dynamics
Think of a product you use (an app, a tool, a service) that you discovered through a marketplace or directory. Use this prompt to analyze how discovery worked:
I discovered [product name] through [marketplace/directory name].
Here is how my discovery journey worked:
1. What was I searching for or browsing when I found it?
2. What quality signals influenced my decision to try it?
(ratings, reviews, install count, featured listing, recommendation)
3. How did I install it? (CLI, GUI, manual, other)
4. Did I rate or review it afterward? Why or why not?
Analyze this journey through the network effect lens:
- Was I part of the flywheel? (Did my discovery lead to a signal
that would help others discover the product?)
- What could the product creator have done to make my discovery
faster or my decision easier?
- What was the biggest friction point in my journey?
What you are learning: Distribution strategy becomes concrete when you trace your own behavior as a user. Every product you have discovered through a marketplace involved the same dynamics: search or browse, evaluate quality signals, choose an install path, and potentially generate a rating. By analyzing your own journey, you see the flywheel from the inside. The creator's job is to make each step in that journey as smooth as possible.
Exercise 2: Design a Distribution Strategy
You are launching a new MCP application on ClawHub. Use this prompt to design a distribution strategy that addresses all four parts of the flywheel:
I am launching an MCP application on ClawHub. The application
is: [describe what it does and who it serves].
Design a distribution strategy that addresses each part of
the network effect flywheel:
1. INSTALLS: How will I get the first wave of installs?
(The flywheel has not started yet. What bootstraps it?)
2. RATINGS: How will I encourage users to rate the product?
(Most users do not rate unless prompted. What triggers a rating?)
3. RANKING: What quality signals will improve my ranking?
(Ratings, install velocity, documentation completeness)
4. DISCOVERY: Beyond search ranking, how will potential users
find the product? (Categories, community, external channels)
For each part, give specific actions I can take in the first
week, first month, and first quarter after launch.
What you are learning: The hardest part of the flywheel is the beginning. Before any ratings exist, the product's ranking is determined by its metadata (title, description, category) and the publisher's reputation. The first installs come from direct outreach, community presence, or complementary channels. Once the flywheel starts, it generates its own momentum. Designing the bootstrap strategy (how to get from zero to the first meaningful rating threshold) is the most important distribution decision you make.
Exercise 3: Evaluate Network Effects in Your Domain
Network effects appear in many contexts beyond software marketplaces. Use this prompt to find them in your own professional domain:
Think about a system in your professional domain (a marketplace,
a platform, a community, an ecosystem) that exhibits network
effects: where more participants make the system more valuable
for each participant.
Describe the system and answer:
1. What is the feedback loop? (More X leads to more Y leads
to more X)
2. Is the network effect direct (more users = more value for
each user) or indirect (more users = more content/products
= more value)?
3. What bootstrapped the network effect initially? How did
it get past the cold-start problem?
4. What could cause the network effect to reverse?
5. How does this compare to ClawHub's install-rating-ranking-
discovery loop?
What you are learning: Network effects are not unique to software marketplaces. They appear in physical retail (more foot traffic in a shopping district attracts more stores, which attracts more foot traffic), in professional communities (more members create more discussions, which attract more members), and in education (more students create more peer learning, which attracts more students). Recognizing network effects in your own domain helps you design distribution strategies that harness them, whether you are distributing software, physical products, or services.
James was quiet for a while. He was thinking about the warehouse.
"We had three distribution channels," he said. "Direct sales for big accounts. Distributors for regional coverage. And a catalog for individual orders. The same product. Three channels. Each one reached customers the others could not. If we had only used one channel, we would have reached a third of the market."
He looked at the three install paths on his notes. CLI, Launch GUI, manual configuration. "It is the same principle. One product. Three paths. Each path serves people the others miss."
Emma started to respond, then stopped. She had been about to say something about multi-channel distribution from a technical perspective: how API surfaces, web interfaces, and configuration files map to different integration patterns. But James's framing was more direct. Distribution channels, not API surfaces. Customer segments, not user personas. Market reach, not platform coverage.
"Your framing is better than mine," she said.
James looked surprised. "What do you mean?"
"I was about to explain install paths in terms of API surfaces and integration patterns. Technical framing. Your version is simpler and more accurate. You are not thinking about how the software works. You are thinking about how the customer buys. That is the distribution question."
She paused. "Engineers tend to think about distribution as a technical problem: how do I deliver the bytes? Business people think about distribution as a reach problem: how do I get this in front of the people who need it? The technical problem was solved when you published. The reach problem is the one that determines whether anyone actually uses it."
James nodded slowly. He looked at his notes from the entire chapter. Six pivots had shaped the architecture. Invariant layers had survived every change. Eight meta-lessons had distilled the principles. An ADR had documented the reasoning. A versioning strategy would keep existing users current. And now a distribution strategy would help new users find the product.
"We have built it, analyzed it, understood it, documented it, versioned it, and figured out how to distribute it," he said. "That feels like the end of something."
Emma stood up. "It is. You have built, analyzed, understood, documented, versioned, and distributed. Time to look back at the entire journey."