The research pipeline I built when the feedback loop was too slow

6 MINUTE READ

At Doctor Anywhere, we weren't short on qualitative signal. Sales calls happened every day. Customer conversations were logged. Support escalations were tracked. The feedback existed. The problem was that by the time it reached the people making product and marketing decisions, it had already missed the meeting.

That's not a research problem. It's an infrastructure problem.

The gap between insight and decision is where research value gets destroyed. Not in the research itself — the interviews, the calls, the synthesis. In the distance between the moment a signal appears and the moment it influences a decision. Most organisations let that gap widen until qualitative research becomes a retrospective exercise: confirming what was already shipped, explaining why something didn't work, arriving too late to redirect anything.

I wanted to close that gap. So I built a pipeline.

What I built

The system connects Apollo (where sales call data lives) to a structured analysis layer built in Cursor. When a sales call is logged, the pipeline extracts the signals — objections, feature requests, onboarding friction points, competitive mentions — and routes them to a shared view that product, marketing, and design can actually read without archaeology.

It's not sophisticated by engineering standards. There's no ML model, no custom embeddings, no dashboard with seventeen filters no one uses. It's closer to a very well-structured pipe: signal goes in, categorised insight comes out, latency drops from weeks to hours.
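The shape of that pipe can be sketched in a few lines. This is a hypothetical illustration, not the actual implementation: the category names, keywords, and function names are all assumptions, and the real system reads from Apollo rather than a string.

```python
# Illustrative sketch only: a keyword-based first pass that turns raw call
# notes into categorised signals. Categories and keywords are invented for
# the example, not taken from the real pipeline.
from dataclasses import dataclass

# Hypothetical category map; a real version would be tuned against actual calls.
CATEGORIES = {
    "objection": ["too expensive", "not a priority", "pushback"],
    "feature_request": ["need a way to", "wish we could", "asked for"],
    "onboarding_friction": ["confusing", "couldn't find", "stuck during setup"],
    "competitive_mention": ["competitor", "switching from"],
}

@dataclass
class Signal:
    category: str
    excerpt: str  # the customer's own words, preserved for downstream readers

def extract_signals(call_notes: str) -> list[Signal]:
    """Tag each sentence of the notes with any category whose keywords match."""
    signals = []
    for sentence in call_notes.split("."):
        text = sentence.strip()
        if not text:
            continue
        for category, keywords in CATEGORIES.items():
            if any(k in text.lower() for k in keywords):
                signals.append(Signal(category, text))
    return signals
```

The point of keeping the excerpt alongside the category is the directness argument later in this piece: the shared view carries the customer's actual language, not a summary of it.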

The tools matter less than the design principle. I chose Cursor and Apollo because they were already in the workflow — adoption cost was near zero. The architecture was deliberately minimal. A pipeline that requires daily maintenance is a pipeline that gets abandoned.

Why it's a distribution problem, not a synthesis problem

Most research operations optimise for synthesis. Better frameworks. More rigorous analysis. Cleaner artefacts. All of that is valuable, but it solves the wrong bottleneck.

The synthesis was fine. The problem was that insight was being produced in one place and consumed — if at all — in another, with no reliable path between them. The sales team had a clear view of what SME customers were confused about during onboarding. The product team was making onboarding decisions three weeks later with no access to that view.

The pipeline doesn't improve synthesis. It improves distribution. Those are different problems, and conflating them is why a lot of research investment produces little operational impact.

What the build taught me

The biggest shift wasn't speed. It was directness.

Before, we relied on other teams to pass the message — a sales rep would flag that a client needed a feature, that flag would travel through a few people, and by the time it reached product or design it had already been interpreted, compressed, and shaped by whoever carried it. That's not a process failure. It's human nature. People translate what they hear into what they already understand.

The problem is that most customer insight lives in the root cause, not the surface request. A sales rep shares that Client A needs Feature X. But if you go deeper — into the actual call, the actual language the customer used — you often find that Feature X is a proxy for a different problem entirely, and the solution that would actually address it is something no one in the chain thought to surface.

Research has always known this. The challenge is that interpretation is hard, and it happens at every hand-off. Each person who touches the signal adds their own layer of meaning. By the time it reaches a decision, it's carrying the fingerprints of five different people's understanding.

Direct access to the source doesn't eliminate interpretation — it just means the person doing the interpreting is the one closest to the decision. That alone changes the quality of what gets built.

What it doesn't solve

It doesn't replace judgment. Automated signal is still raw material. If the questions being asked in sales calls are shallow, the pipeline surfaces shallow insight faster. Garbage in, garbage out — just at speed.

It doesn't fix a research culture problem. If the organisation doesn't have habits around acting on qualitative signal, the pipeline makes the signal more visible, not more actionable. You can lead a decision-maker to an insight; you can't make them change course.

And it drifts. Every pipeline degrades. Categories that made sense in month one stop making sense in month six. Build in a review cadence, or accept that you'll be cleaning up a year from now.

The pipeline didn't give us more research. It gave us faster access to research that was already happening. The signal was always there. We just stopped letting it arrive too late to matter.

That's the only thing it needed to do.