Hi, I'm Ben Schippers

I build AI platforms, products, and teams that get better in production.

Not "move fast and break things." More like: ship intentionally, measure honestly, and harden what matters—so the system improves over time instead of collapsing under scale.

What I Do

I specialize in the gap between "it works in demo" and "it works at scale."


Find the Blockers

Most product problems aren't bugs—they're adoption blockers hiding in support tickets. I built systems that found 700+ of them and converted hundreds into shipped improvements.

Build the Infrastructure

Not features—the internal systems that make teams faster. Routing logic. Prioritization frameworks. Launch playbooks. The boring stuff that enables velocity.


Make It Stick

18 months post-launch isn't "maintenance"—it's where products either scale or collapse. I've kept $212M ARR from churning by treating post-launch like product.


Communicate Under Fire

Crisis response for pharma, biotech, manufacturing, finance—when the VP's phone is ringing and 433,000 users are impacted, you need someone who can write the exec update AND fix the queue.

Why I Build This Way

I don't believe in scaling headcount.

When one product grew 173% in users, we didn't hire 173% more people. We built systems—diagnostics, routing, self-service—that made the team 80% more efficient. More users, same-ish team, better outcomes.

Support data is product signal.

Most companies treat support as a cost center. I see it as a massive, mostly ignored source of product signal. Every escalation is a failed user journey. Every blocker is a feature gap. You just need systems to capture it.
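To make "systems to capture it" concrete, here is a minimal sketch of turning raw tickets into categorized signal. Everything in it is hypothetical and illustrative: the category names, the keyword rules, and the sample tickets are invented, and a real pipeline would use a proper taxonomy (or a classifier) rather than substring matching.

```python
from collections import Counter

# Hypothetical keyword rules mapping ticket text to blocker categories.
# A production system would use a curated taxonomy or a trained classifier.
RULES = {
    "blocked_auth": ["sso", "login", "token"],
    "feature_gap": ["can't", "missing", "unsupported"],
    "reliability": ["timeout", "crash", "error"],
}

def classify(ticket: str) -> str:
    """Return the first category whose keywords appear in the ticket text."""
    text = ticket.lower()
    for category, keywords in RULES.items():
        if any(k in text for k in keywords):
            return category
    return "other"

def blocker_signal(tickets: list[str]) -> Counter:
    """Aggregate raw tickets into per-category counts: support data as product signal."""
    return Counter(classify(t) for t in tickets)

tickets = [
    "SSO login loops after tenant migration",
    "Copilot unsupported in this region",
    "API timeout on large batch",
    "SSO token expired mid-session",
]
print(blocker_signal(tickets))  # per-category counts across the sample tickets
```

The point isn't the classifier; it's that once tickets become counts by category, a spike in one category reads as a feature gap rather than a support queue.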

I write playbooks people actually use.

The framework I rebuilt didn't get adopted across 8 product lines because leadership mandated it. It got adopted because it worked—teams saw results and pulled it into their workflows.

By the Numbers

From 3 people to $355M in value—here's the progression:

  • 3 → 150 agents: co-founded the enterprise support program and scaled it 50x
  • 700+ blockers: systematic identification of what was actually stopping adoption
  • 95+ features: 64% engineering acceptance rate; not suggestions, shipped code
  • 220K users unblocked: real humans who couldn't use the product until we fixed it
  • 94K seats added: post-support conversions; support as a growth engine
  • $355M+ value protected: ARR retained + costs avoided + growth enabled

Background

Microsoft

2020 – 2025

Senior Program Manager, AI Platforms & Enterprise Operations

Copilot · Graph · Windows 365 · Teams Devices

Owned a portfolio of internal platforms across 8 product lines—signal systems, routing intelligence, self-service tools, and quality measurement. The infrastructure that turned customer friction into engineering action.

  • Rebuilt the signal-to-engineering pipeline from scratch. Defined the taxonomy, intake criteria, and routing logic. Trained teams to surface insights engineering could act on. 95+ features shipped through this system; adopted org-wide.
  • Built routing intelligence that classifies incoming work by complexity and matches it to the right skill level. Owned the cost-vs-quality tradeoff while maintaining top-tier satisfaction.
  • Scaled self-service from pilot to ~50% adoption on flagship products. Tens of thousands of tickets per year that never get created.
  • Created early risk detection—a system that identifies at-risk customers before they escalate. 700+ situations identified and resolved.
  • Built and shipped a recommender reaching 14K enterprise customers with 14% conversion.
  • Led executive escalation response across pharma, biotech, manufacturing, and financial services.
  • Crisis response for 433K-user transition—zero churn.
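The routing-intelligence bullet above can be sketched as a tiny tiering function. The tiers, thresholds, and scoring heuristics here are all hypothetical; the real tradeoff was owned through data, but the shape of the idea is: score complexity, then send each case to the cheapest tier that can handle it.

```python
# Hypothetical tiers, cheapest first, each with a minimum complexity threshold.
TIERS = [
    ("self_service", 0),  # docs / diagnostics, near-zero marginal cost
    ("frontline", 3),     # generalist agents
    ("specialist", 7),    # senior engineers
]

def complexity(case: dict) -> int:
    """Invented scoring heuristics: multi-tenant scope, escalation history, breadth."""
    score = 0
    if case.get("multi_tenant"):
        score += 4
    if case.get("prior_escalations", 0) > 0:
        score += 3
    score += min(case.get("products_involved", 1) - 1, 3)
    return score

def route(case: dict) -> str:
    """Pick the highest tier whose threshold the case's complexity meets."""
    c = complexity(case)
    chosen = TIERS[0][0]
    for tier, threshold in TIERS:
        if c >= threshold:
            chosen = tier
    return chosen
```

The cost-vs-quality lever lives in the thresholds: raise them and more work lands in cheap tiers; lower them and quality-sensitive work gets senior eyes sooner.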

Experis / ManpowerGroup

2016 – 2020

Program Manager → Operations Manager

Microsoft Premier Support

Built Premier Engineering from a 3-person pilot to 150 agents handling 60K incidents/year. Full lifecycle ownership—built it, scaled it, responsibly wound it down when business conditions changed.

  • Designed complexity-based routing and escalation frameworks—90+ SOPs still in use today
  • Top 2.5% customer satisfaction globally (195/200)
  • Created knowledge base with 90+ articles adopted across multiple departments

Mural Consulting

2014 – 2016

Engagement Manager

1,000+ Office 365 migrations across 12 global partners. Built partner enablement programs for enterprise cloud adoption.

Skills

AI & Platforms

Recommender systems, LLM integration, A/B testing, telemetry-driven roadmap

Product

Developer ecosystems, diagnostics/reliability, 0-to-1 frameworks, enterprise deployment

Technical

SQL/Kusto, Python, Power BI, Azure DevOps, TypeScript, Supabase

Leadership

Cross-org alignment, executive communication, crisis execution

Education

B.S. Interdisciplinary Science & Technology — University of Arizona

Former dendrochronologist. Yes, tree rings. It's where the domain name comes from.

Case Study: Copilot Extensibility

From Silos to Signal

The Situation

November 2024. Microsoft's Copilot rollout is accelerating across enterprise. Three clouds—Dynamics, Azure, M365—operating in silos with no shared visibility. Enterprise customers hitting adoption walls that nobody can see from inside any single cloud. Support cases piling up with patterns that span organizational boundaries. No systematic way to capture what's actually breaking or why.

Leadership needed someone who could bridge the gap. I got the call.

What I Did

I was asked to lead the insight-collection team for the Copilot for All moment, not because I had formal authority over these clouds, but because I'd built the cross-functional trust to make it work.

Building the Bridge:

  • Assembled a 10-15 person vTeam spanning Dynamics, Azure, and M365 support
  • Deployed real-time case analytics for insights on Copilot adoption patterns
  • Created the first cross-cloud visibility into adoption blockers
  • Established direct lines to VP-level platform leadership and product teams
  • Hosted share-back sessions that turned support signal into engineering priorities

The real challenge wasn't technical—it was organizational. Each cloud had its own priorities, its own metrics, its own definition of success. I had to build something valuable enough that teams would voluntarily participate.

The Framework

I took the new-product support methodology I'd refined across other launches and adapted it for the cross-cloud Copilot challenge:

  1. Signal capture — Real-time case monitoring across all three clouds
  2. Pattern recognition — Identifying blockers that only became visible when you connected the dots across silos
  3. Prioritization — Severity scoring tied to business impact that engineering would actually use
  4. Feedback loop — Monthly reviews, direct stakeholder escalation, feature request tracking
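The prioritization step above can be sketched as a severity score. The weights, fields, and sample numbers here are hypothetical, but the principle is the one engineering would actually use: rank blockers by business impact, discounted when a workaround exists.

```python
from dataclasses import dataclass

@dataclass
class Blocker:
    users_affected: int
    seats_at_risk: int    # rough ARR proxy
    has_workaround: bool

def severity(b: Blocker) -> float:
    """Hypothetical severity score: impact-weighted, discounted by workarounds."""
    score = b.users_affected * 1.0 + b.seats_at_risk * 0.5
    if b.has_workaround:
        score *= 0.4  # a workaround buys time, so it drops in the queue
    return score

# Invented backlog: a broad outage outranks a large but workaround-able gap.
backlog = [
    Blocker(users_affected=7000, seats_at_risk=2000, has_workaround=False),
    Blocker(users_affected=300, seats_at_risk=9000, has_workaround=True),
]
ranked = sorted(backlog, key=severity, reverse=True)
```

What mattered in practice was less the exact formula than that the score was explicit: engineering will act on a ranked list whose ordering they can interrogate.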

The Results

  • Cross-cloud visibility: none → real-time
  • Blockers identified: 76
  • Users unblocked: 7,380
  • Feature requests submitted: 23
  • Seats added post-support: 94,000
  • Self-help success rate: 47%

The framework didn't just fix immediate problems—it created a sustainable system for surfacing and resolving adoption blockers across the Copilot ecosystem.

What I Learned

Enterprise AI adoption fails in the gaps between teams. The model works. The demo is impressive. But when real users hit real edge cases, they fall into organizational seams where nobody has visibility.

My job was to build the connective tissue—the systems that capture signal across silos and route it to people who can act. That's not support. That's product intelligence infrastructure.

The 94,000 seats added after support engagements weren't won because we answered tickets faster. They were won because we identified what was actually blocking adoption and got it fixed.

Selected Builds

Production applications I've shipped end-to-end

In Dev

Workstation Zero

Distraction-resistant desktop environment. Full-screen CRT terminal aesthetic with Pomodoro integration, focus-gated media, and embedded productivity tools. 7,700+ lines of GDScript.

Godot GDScript Deep Work

Writing

Behind the Screens — thoughts on AI, product, and building in public

If You Can Read a Recipe, You Can Now Be a Developer

The $1K Experiment Part 2: What happens when the framework compounds. 5.5 hours to working MVP. 2,031 lines became 106,000. Shipping is addictive—here's the warning label.

Read on LinkedIn →

The $1K Claude Code Credit: What Happens When a PM Learns to Ship

Could a senior PM with product clarity but no coding background actually build and ship real software? 31 days, 215 commits, 38K lines of TypeScript. The 64/33/3 collaboration model that made it work.

Read on LinkedIn →

The 90-Day Death Spiral: Why 95% of AI Projects Fail

MIT found only 5% of AI pilots deliver impact. The early warning system hiding in your support tickets—and the metrics that predict failure before day 90.

Read on LinkedIn →

Labs

Experimental projects and works in progress

In Dev

∞-ball

Interactive exploration of mindfulness and mechanics. Blending hard science with contemplative practice.

TypeScript Experimental
Coming Soon

C-Monkies

Multi-platform simulation game. More details coming soon.

Unreleased
In Dev

Dead Radius

Location-based ASCII survival simulation. Uses real geography for procedural world generation.

C# Geo-based

Let's Talk

If you're building enterprise AI infrastructure that has to work at scale—especially when production gets weird and customers get loud—I've been there. Reach out.

Atlanta, GA · Open to relocation


Career Cross-Section

Read from the center out. Thicker rings = growth years.

  • Bark: 2026 (now)
  • Microsoft: 2020–2025
  • Experis: 2016–2020
  • LTRR: 2011–2014
  • Pith: (start)

Ring shading marks good growth years vs. research years.

Former dendrochronologist. The domain name had to mean something.
