Building a Knowledge Site in 1 Hour with 6 AIs in Parallel
You have probably seen plenty of "I built X with AI" stories. Most either skip the hard parts or just show the polished result. This post is different — I am going to walk through the entire process of building the Claw101 knowledge site, minute by minute.
One hour. Six AI agents running in parallel. One person orchestrating.
Why Build a Knowledge Site
The OpenClaw community had accumulated a wealth of tutorials, case studies, and how-to content, but it was scattered across docs, group chats, and GitHub repos. We needed a proper home for it all, and we wanted to test a hypothesis: would people pay for high-quality AI tooling tutorials?
The traditional approach would be hiring a frontend dev, finding a designer, and iterating slowly. Instead, we decided to use OpenClaw itself — running parallel AI agents to handle every part of the build.
If our own tool cannot build our own site, why would anyone else trust it?
The Timeline
0:00 — 0:05 Goal Setting and Task Decomposition
Five minutes to get crystal clear on what we were building:
- Goal: Ship an accessible, bilingual knowledge site with a payment-ready structure
- Stack: Next.js 15 + Tailwind CSS v4 + next-intl + MDX
- Core pages: Homepage, tutorial list, tutorial detail, blog, join page, FAQ
- Requirements: Chinese/English bilingual, SEO-friendly, mobile-responsive
We split the work into six parallel tracks:
- Homepage & Navigation: Hero section, feature showcase, CTAs, navbar, footer
- Pricing & Membership: Pricing page, payment flow skeleton, access control
- Content Production: Tutorial MDX files, blog MDX files, translations
- SEO & Metadata: Sitemap, JSON-LD, Open Graph, canonical tags
- Deployment & Infra: Vercel config, domain, HTTPS, CI pipeline
- Testing & QA: Cross-device testing, link validation, performance checks
0:05 — 0:15 Launching Six AI Agents
This was the critical moment. I opened six agent sessions in OpenClaw simultaneously, assigning one task track to each.
Agent 1 (Homepage) received:
"Based on the existing Next.js project structure, build the homepage components. Include a Hero section, feature cards, community stats, and CTA buttons. Use dark theme with Tailwind v4 CSS variables."
Agent 2 (Pricing) received:
"Design the pricing page data structure and UI. Three tiers: Free, Basic, Pro. No real payment integration yet — just get the page and data flow working."
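A pricing config along the lines Agent 2 was asked to produce might look like this. The tier names come from the prompt; the prices, features, and field names below are placeholders, not the real pricing.ts:

```typescript
// Hypothetical pricing.ts sketch. Tier names are from the brief;
// prices and feature lists are illustrative placeholders.
export type Tier = {
  id: "free" | "basic" | "pro";
  name: string;
  priceMonthly: number; // 0 means free
  features: string[];
  highlighted?: boolean;
};

export const tiers: Tier[] = [
  { id: "free", name: "Free", priceMonthly: 0, features: ["Free tutorials", "Blog access"] },
  { id: "basic", name: "Basic", priceMonthly: 9, features: ["All tutorials", "Email support"] },
  { id: "pro", name: "Pro", priceMonthly: 29, features: ["Everything in Basic", "Priority support"], highlighted: true },
];
```

Keeping tiers in a typed config file like this is what lets the UI render three columns from data rather than hard-coding each card.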
Agent 3 (Content) received:
"Generate MDX files for all 13 tutorial chapters listed in chapters.ts. One Chinese and one English version for each. Keep it technically accurate."
Agent 4 (SEO) received:
"Configure complete SEO metadata for every page: title, description, Open Graph, Twitter Card, JSON-LD, and sitemap.xml."
Agent 5 (Deployment) received:
"Set up Vercel deployment. Make sure next.config.ts is correct, environment variables are in place, and the build passes."
Agent 6 (QA) received:
"Once other tracks produce output, verify everything: page rendering, link validity, responsive layout, TypeScript type checks."
All six agents started working almost simultaneously. Six streams of output scrolling in the terminal — like watching an async dispatch center come alive.
0:15 — 0:30 Parallel Progress
This was the densest production window:
- Homepage track: Hero component done. Feature cards using grid layout with gradient borders. Navbar with language switcher. Mobile hamburger menu completed.
- Pricing track: Three-column pricing cards finished. Data pulled from a pricing.ts config file. Join page CTA logic working.
- Content track: All 13 Chinese tutorial chapters generated. English versions progressing — 8 of 13 done. Blog MDX templates drafted.
- SEO track: sitemap.ts written, covering all pages and tutorial routes. JSON-LD structured data injected into tutorial detail pages.
- Deployment track: Vercel config done. next.config.ts tuned for MDX-related settings.
- QA track: First npm run build attempt completed. Found 3 TypeScript type errors.
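The sitemap track's output can be sketched as plain TypeScript. In the real project this logic would live in app/sitemap.ts and return Next's MetadataRoute.Sitemap type; the chapter slugs, locales, and domain below are illustrative:

```typescript
// Sketch of sitemap generation. Slugs mirror a hypothetical chapters.ts;
// the domain is a placeholder, not the real site.
const chapters = ["getting-started", "agents-in-parallel", "mdx-content"];
const locales = ["zh", "en"] as const;
const BASE = "https://example.com";

// One entry per locale per route, in the shape Next.js expects
// from an app/sitemap.ts default export.
function buildSitemap() {
  const staticPaths = ["", "/tutorials", "/blog", "/join", "/faq"];
  const tutorialPaths = chapters.map((slug) => `/tutorials/${slug}`);
  return locales.flatMap((locale) =>
    [...staticPaths, ...tutorialPaths].map((path) => ({
      url: `${BASE}/${locale}${path}`,
      changeFrequency: "weekly" as const,
      priority: path === "" ? 1.0 : 0.7,
    }))
  );
}
```

Generating entries from the same chapter list the tutorial pages use is what keeps the sitemap and the routes from drifting apart.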
0:30 — 0:45 Integration and Debugging
Debug phase. This is where coordination matters most in a parallel execution model.
Issues encountered:
- Next.js 15 params are Promises — Two pages forgot to await the params, causing build failures. Agent 6 applied a unified fix.
- Translation key mismatch — en.json was missing 3 keys that existed in zh.json. Agent 3 added them.
- Tutorial slug inconsistency — One chapter had a slightly different slug than what chapters.ts expected, resulting in a 404. Agent 3 corrected it.
- Mobile navbar overlapping content — A z-index issue. Agent 1 adjusted.
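The first fix is worth showing, because Promise-wrapped params is a real Next.js 15 breaking change. A minimal sketch of the pattern, with the component body simplified to a plain string instead of JSX:

```typescript
// Next.js 15: route `params` arrives as a Promise and must be awaited.
type PageProps = { params: Promise<{ slug: string }> };

// Before the fix, the page typed params as `{ slug: string }` and read
// `params.slug` directly, which failed the build under Next.js 15.
async function TutorialPage({ params }: PageProps) {
  const { slug } = await params; // the one-line fix: await before use
  return `rendering tutorial: ${slug}`;
}
```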
Each fix took seconds — tell the relevant agent what is wrong, it fixes it immediately. No back-and-forth meetings, no sprint planning. This is the core advantage of AI parallel execution: fix velocity matches discovery velocity.
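The translation-key mismatch is also easy to guard against mechanically. A minimal sketch that compares flat message keys (the nested-key flattening that next-intl message files may need is omitted, and the sample keys are made up):

```typescript
// Sketch: find keys present in a base locale file but missing from a
// target locale file. Sample keys below are illustrative, not the real zh.json.
type Messages = Record<string, unknown>;

function missingKeys(base: Messages, target: Messages): string[] {
  return Object.keys(base).filter((key) => !(key in target));
}

// zh.json has a key that en.json lacks:
const zh = { "nav.home": "首页", "nav.blog": "博客", "cta.join": "加入" };
const en = { "nav.home": "Home", "nav.blog": "Blog" };
```

Running a check like this in CI turns a runtime gap into a build-time failure.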
0:45 — 1:00 Launch and Verification
Final 15 minutes:
- npm run build passed with 0 errors and 0 warnings
- npx tsc --noEmit type checks all green
- Vercel deployment successful, HTTPS working
- Chinese homepage, English homepage, tutorial list, tutorial detail, blog list — all accessible
- Mobile testing on Chrome and Safari passed
- sitemap.xml reachable by search engine crawlers
We were live.
Key Decision Points
Looking back, a few decisions had the biggest impact on overall efficiency:
Technology Choices
Next.js 15 App Router plus MDX was the right call. Static generation combined with server components made deployment trivial. MDX let us mix content and code naturally. WordPress or Notion exports would not have been faster, and they would have been far less flexible.
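The MDX wiring in next.config.ts follows the documented @next/mdx pattern. Roughly this shape, though the plugin options in the real config are not shown here:

```typescript
// next.config.ts — enable .mdx files alongside .tsx (per the @next/mdx docs)
import createMDX from "@next/mdx";
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Let the router treat MDX files as page/content modules.
  pageExtensions: ["ts", "tsx", "md", "mdx"],
};

const withMDX = createMDX({
  // remark/rehype plugins would be configured here.
});

export default withMDX(nextConfig);
```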
Content Strategy
The tutorial content was not written from scratch. It was distilled and restructured from existing community docs and discussions. AI is not best at creating from nothing — it excels at reorganizing scattered information into a consumable format.
Pricing
We shipped free tutorials first, kept the blog fully public, and planned to layer paid content on top later. No time wasted agonizing over the business model in the first hour. Get the content live, watch the data, then decide.
Core Takeaways from Parallel Execution
This exercise confirmed several things:
- Task decomposition quality determines parallel efficiency. If the six tracks had heavy dependencies on each other, parallel would have collapsed into serial. Our tracks were nearly independent, only requiring coordination during the final debug phase.
- The best use of AI agents is well-bounded subtasks. Do not tell an agent "build me a website." Tell it "create this component in this directory, using this layout, with these color variables." Specificity is everything.
- The human role is architect and dispatcher, not executor. I did not write a single line of code during the entire hour, but I determined the direction of every line that was written.
- One hour is not the ceiling — it is the normal pace for this workflow. If the task were three times more complex — say, adding real payments, user accounts, and an admin panel — it would probably take three hours. Linear scaling, not exponential.
Advice for Anyone Who Wants to Replicate This
First, invest time in task decomposition. Do not rush into execution. Five minutes of thoughtful decomposition saves 30 minutes of rework. Define the inputs, outputs, and boundaries of each task track clearly.
Second, pick the right tool. OpenClaw's advantage is the ability to manage multiple agent sessions simultaneously, each with its own context and toolchain. If your tooling does not support parallelism, you are stuck running things sequentially.
Third, embrace "ship imperfect, iterate later." The first version does not need to be perfect. Pricing can be adjusted, design can be refined, content can be expanded. The key is getting the skeleton live so users can reach it.
Fourth, do not panic during the debug phase. Six parallel tracks will inevitably produce conflicts and gaps. That is normal, not a failure. Having a dedicated QA agent track significantly reduces your cognitive load.
Fifth, document the process. The blog post you are reading right now was assembled by AI from execution logs. Process documentation is not just useful for retrospectives — it becomes a content asset in its own right.
That is the full story of Claw101 going from zero to live. No magic, no exaggeration — just what happens when you use AI tooling to its full potential.
If you want to experience this workflow yourself, start with the free tutorial.