Empowering property agents: designing for informed marketing decisions
Company
Property Guru
Time
Q1 '21
Role
Product Design
Keywords
#decision-support #data-design #retention
Designing a decision-making system that helped agents invest in listings with confidence
One tension stood out clearly in our retention work.
We expected property agents to operate like small business owners, constantly making financial decisions on how much to invest in their listings.
But in reality, many of them were making those decisions almost blindly.
They could spend more on Boost, Spotlight, or keyword changes, yet had very little confidence in whether the investment was working, whether the listing was healthy, or whether the problem was something deeper like pricing or market demand.
This uncertainty wasn't just frustrating for agents. It was making it harder for them to justify continued spend on the platform and weakening our broader promise of helping them become trusted advisors.
That tension made the opportunity much bigger than simply improving analytics.
This was a decision-confidence problem sitting at the intersection of user trust, retention, and monetization.
That became the core problem I wanted to solve.
How might we help agents make informed financial decisions about their listings?


Reframing the problem from analytics to decisions
The obvious direction would have been to give agents more data. But early research and prototype testing quickly showed that raw numbers alone wouldn't solve the problem.
Agents already had access to basic listing activity metrics.
What they lacked was:
context
market benchmarks
trust in how the data was derived
guidance on what to do next
The real challenge was helping agents translate performance signals into confident action.
This reframing was critical because it shifted the solution away from "more reporting" and toward a decision-support product.
The goal was not simply to show listing performance. The goal was to help agents answer the questions that actually mattered:
Is this listing underperforming?
Should I continue investing in it?
Is the problem visibility, pricing, or demand?
What action should I take next?
Designing a performance system agents could act on
The solution evolved into a new performance layer that helped agents understand how their listings were performing over time, against benchmarks, and in relation to money spent.
Instead of fragmented metrics, the experience brought multiple decision signals into one coherent flow.
Agents could now understand:
how many people viewed their listing over time
how many users meaningfully engaged
which channels were generating leads
how promotional tools impacted reach and clicks
how the listing compared against similar properties
what ROI they were generating for each credit spent
how similar property types historically performed in the same location
The most important part of the experience was the performance overview. At a glance, agents could understand whether a listing was healthy, plateauing, or underperforming relative to the market.
This reduced the ambiguity around whether further spend was justified. The product was no longer just surfacing metrics. It was helping agents make better financial decisions with more confidence.
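The synthesis described above can be illustrated with a minimal sketch. Everything here is hypothetical: the `classify_listing` and `roi_per_credit` helpers, the field names, and the thresholds are illustrative choices showing how view trends, a market benchmark, and credits spent might combine into a single health status, not the actual product logic.

```python
# Hypothetical sketch of a listing-health overview.
# All names and thresholds are illustrative, not the real product logic.

def roi_per_credit(leads: int, credits_spent: int) -> float:
    """Leads generated per promotional credit spent (0 if nothing was spent)."""
    return leads / credits_spent if credits_spent else 0.0

def classify_listing(weekly_views: list[int], benchmark_views: float) -> str:
    """Return 'healthy', 'plateauing', or 'underperforming' for a listing."""
    recent = sum(weekly_views[-2:]) / 2                       # last two weeks
    earlier = sum(weekly_views[:-2]) / max(len(weekly_views) - 2, 1)
    vs_market = recent / benchmark_views                      # reach vs. similar listings
    trend = recent / earlier if earlier else 1.0              # recent momentum

    if vs_market < 0.6:        # well below the market benchmark
        return "underperforming"
    if trend < 0.9:            # reach is flattening or declining
        return "plateauing"
    return "healthy"

# Example: a listing whose reach is stalling despite spend
views = [120, 140, 135, 100, 95]
status = classify_listing(views, benchmark_views=110)   # "plateauing"
roi = roi_per_credit(leads=6, credits_spent=40)         # 0.15 leads per credit
```

The point of a sketch like this is not the specific cutoffs but the shape of the decision: agents see one status word backed by the underlying signals, rather than having to derive it from raw metrics themselves.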
Designing trust into the interpretation layer
One of the strongest insights from testing was that some agents didn't fully trust the platform's interpretation of the data.
A recurring question was: "How did you calculate this?"
This became especially important because the product was influencing financial behavior. If agents didn't trust the benchmarks or ROI logic, the entire experience would lose credibility.
To address this, I introduced transparent data explanations for key metrics and benchmarks. We made it easier for users to understand:
how scores were derived
what benchmark comparisons were based on
how ROI signals connected to actual listing behavior
This was a relatively small product layer, but it had a significant impact on trust.
The challenge wasn't simply visualizing data. It was ensuring users trusted the product enough to act on what it was telling them.
Supporting less experienced agents through explanatory guidance
Another pattern from research was that less experienced agents often knew something had changed, but struggled to understand what it meant.
For example, they could see declining reach, but were less confident in whether the right next step was:
adjusting price
improving the listing content
using a promo tool
waiting for market demand to change
To support this, I introduced contextual tips and explanatory guidance directly into the experience.
Instead of expecting users to interpret every signal themselves, the product started helping them understand:
why a metric may be changing
what likely factors were influencing it
when promo tools were most relevant
what action to consider next
This moved the product closer to coaching through design, rather than passive reporting.
That shift was particularly valuable for newer and mid-tier agents, helping them build confidence in both their platform spend and the recommendations they gave to clients.
Iterating toward clarity
One important usability issue emerged during prototype testing.
When promotional tools were active, the chart became harder to read because overlapping visual areas created too much noise. The data was technically correct, but the experience required too much cognitive effort.
I simplified the visualization by moving promotional activity indicators into solid bars beneath the main trend line. This kept the performance trend visually clean while still preserving the context agents needed.
A small visual change, but one that meaningfully improved readability and confidence.
For insight-heavy products, clarity is part of the value proposition.
Impact
The most important outcome was that agents could now make more informed and confident investment decisions about their listings.
Instead of relying on intuition, they had much stronger visibility into:
listing health
ROI from credits spent
lead generation performance
competitive benchmarks
historical trends
market positioning
This created value on both sides. For agents, it reduced uncertainty and strengthened their ability to act as trusted advisors to clients. For the business, it supported retention by making premium promotional tools easier to justify through visible outcomes.
The product shifted from selling promotional features to helping users make better business decisions. That shift is where the strongest retention value came from.
Reflection
What I value most about this project is how clearly it reinforced that data alone rarely changes behavior.
The original temptation could easily have been to focus on exposing more metrics or building a more sophisticated reporting dashboard. But the real challenge was much more human.
Agents were not struggling because they lacked numbers. They were struggling because they lacked confidence in what the numbers meant and what action to take next.
The most important shift in this work came from reframing the problem from showing performance to reducing uncertainty at the exact moment users decide whether to invest more.
That reframing changed the role of the product significantly. Instead of becoming a place where agents passively checked listing activity, it became a tool that helped them make better business decisions, justify spend, and strengthen the advice they give to their own clients.
For me, this project was a strong reminder that some of the most valuable product opportunities sit inside moments of hesitation. When users are about to spend money, make a recommendation, or change strategy, clarity becomes part of the product value itself.
That lens - designing for confidence, not just visibility - is something I continue to bring into how I approach decision-support products today.