The 500 Hours I Was Losing Every Year
-
I was spending 8 hours every week answering the same type of question.
"What does this API response mean?"
Engineers would get valid data back from our Vehicle Data Platform. A feature code. A status flag. Technically correct.
But they couldn't interpret it for their use case.
500 hours per year. Not spent on roadmap work. Spent translating responses.
-
An engineer on my team had AI experience. Wanted to explore it.
I ran the numbers: 6 weeks of engineering time to recover 350 hours of my strategic capacity annually.
That's 10x leverage.
But more importantly, if we got it right, we'd create Mercedes' template for practical AI integration. Not theater. Workflow automation.
I set clear success criteria: engineers query it twice per week minimum, 70% of queries need no follow-up.
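Both criteria are measurable from a query log. A minimal sketch of how they could be computed (the field names and log shape are hypothetical, not the real schema):

```python
# Hypothetical sketch: compute the two success criteria from a query log.
# Field names ("engineer", "week", "needed_follow_up") are illustrative.
from collections import defaultdict

def adoption_metrics(query_log: list[dict]) -> tuple[float, float]:
    """Return (avg queries per engineer per week, fraction needing no follow-up)."""
    per_engineer_week = defaultdict(int)
    no_follow_up = 0
    for q in query_log:
        per_engineer_week[(q["engineer"], q["week"])] += 1
        if not q["needed_follow_up"]:
            no_follow_up += 1
    avg_weekly = sum(per_engineer_week.values()) / len(per_engineer_week)
    return avg_weekly, no_follow_up / len(query_log)

log = [
    {"engineer": "a", "week": 1, "needed_follow_up": False},
    {"engineer": "a", "week": 1, "needed_follow_up": False},
    {"engineer": "b", "week": 1, "needed_follow_up": True},
]
```

The targets then read directly off the output: average weekly queries ≥ 2, no-follow-up rate ≥ 0.7.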
Six weeks to prove AI could replicate human judgment in our domain.
-
During our weekly tech session, our staff engineer surfaced a problem.
Querying our full Cosmos DB vehicle knowledge base: 1.2-second latency.
Not desirable. Engineers wouldn't adopt something that felt slow, no matter how comprehensive.
Speed beats completeness when adoption is the goal.
I scoped us down to the top 4 vehicle categories. Sub-500ms latency. Only 70% coverage.
But if it felt fast, engineers would use it.
-
We launched in 6 weeks.
Within days, engineers reported that answers were factually accurate ("this car has a sunroof") but missing the trim-level context they needed for their work.
We pushed an update adding contextual follow-ups.
We weren't building features we thought were clever. We were responding to real workflow friction.
Query volume: 2.3 per week to 4.4 per week in the first month. 91% increase.
Engineers weren't just using it. They were asking more questions because the friction was gone.
-
Three other platform teams reached out asking how to replicate this.
Leadership started pointing to this as the model: "AI-first, not AI-theater."
In a company trying to figure out AI strategy, this became the reference example.
Find a real problem. Constrain the experiment. Prove value fast. Scale.
The hardest question with emerging technology isn't "can we build it?"
It's "should we, and what's the smallest version that proves it works?"