This uncertainty was the gap I set out to address in Q3 2021, leading the design of a feature that gave users clearer guidance and made platform data easier to interpret. The work began with a piece of user feedback:
NPS was a key driver in Demandbase's strategy as an account-based marketing company.
Tying my work to OKRs early on proved helpful, as it made the work easier to advocate for when consulting with leadership.
To investigate the problem, we turned to our personas to understand the user more deeply. As you can see, she often struggles to make sense of the information presented to her and needs a new way to consume it. To solve her problem, we first needed an effective problem statement.
Digital Marketers who launch Ad Campaigns need a clear and concise way to consume the information that is displayed to them within the Advertising Cloud Product in Demandbase One.
How might we make the data presented to the user more contextual and easier to understand?
First, I empathized with the user by mapping what they do, where gaps exist in their workflow, and what those gaps might really mean.
From this, I identified one of the core issues: the user cannot understand what the metrics mean because of how abstractly the numbers are presented.
The user sees a figure like the one shown above under impressions, but they don’t really know what it means.
How can we fix this?
My hypothesis: comparisons.
Humans are constantly comparing new sensory information to a point of reference we already understand. We can adopt this mental model in the products we create and take advantage of it.
Next, I had to figure out what to compare, with the core advertising metrics listed below:
From these, we can narrow the possible comparisons down to one core type: Targeting. The main campaigns summary card delivers cumulative information for bulk-selected campaigns, and we could use these metrics to compare one campaign against another using a benchmark.
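As a rough illustration of the arithmetic involved, the sketch below (TypeScript, with hypothetical field names such as impressions, clicks, and spend) shows one way a benchmark could be derived from the bulk-selected campaigns and used to express how far a single campaign sits above or below it. This is an assumption about the data shape for illustration only, not the actual Demandbase implementation.

```typescript
// Hypothetical shape of the metrics shown on the campaigns summary card.
interface CampaignMetrics {
  name: string;
  impressions: number;
  clicks: number;
  spend: number;
}

// Benchmark = per-campaign average across the bulk-selected campaigns.
function buildBenchmark(selected: CampaignMetrics[]): CampaignMetrics {
  const count = selected.length || 1;
  const totals = selected.reduce(
    (acc, c) => ({
      impressions: acc.impressions + c.impressions,
      clicks: acc.clicks + c.clicks,
      spend: acc.spend + c.spend,
    }),
    { impressions: 0, clicks: 0, spend: 0 }
  );
  return {
    name: "Benchmark",
    impressions: totals.impressions / count,
    clicks: totals.clicks / count,
    spend: totals.spend / count,
  };
}

// Relative difference the comparison would surface, e.g. +0.25 = 25% above benchmark.
function relativeToBenchmark(value: number, benchmark: number): number {
  return benchmark === 0 ? 0 : (value - benchmark) / benchmark;
}
```

The idea is simply that a raw number like "1.2M impressions" becomes meaningful once it is framed as a percentage above or below a benchmark the user already trusts.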
To determine a direction for visualizing this comparison, I needed to scan the advertising product to see which patterns were in use.
These high-level needs helped scaffold my thinking when ideating on a potential design.
Once I had figured out where I could place the comparison, I needed a unit of analysis and a point of entry for visualizing it. For this, I chose the single-campaign screen, found by drilling down one level from the main advertising page.
We can document how users might flow into this comparison using a critical user journey.
Documenting the flow itself, we can start with the following pattern.
To validate the concept and the basic patterns being used, I constructed a barebones prototype and started looking for participants.
I tested this pattern with a group of internal users of varying expertise with the product (N=7). This was done with a UX researcher through moderated sessions on Zoom and recorded in Dovetail. Some of the key insights from testing:
The one-bar approach might save visual space, but it isn't effective at conveying the comparison.
This led me back to the drawing board, where I took in the feedback and synthesized it with previous research. One option I had explored earlier but hadn’t pursued fully was a two-bar approach.
This had implications for the flow, which changed accordingly to accommodate the two-bar approach. The change was mostly visual, but I added new tooltips to clarify which object was the point of interest.
The new design used a dropdown and a date picker to add granularity to the comparison process.
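To make that concrete, here is a minimal sketch (TypeScript, with hypothetical shapes and function names) of how the dropdown's chosen comparison campaign and the date picker's range might constrain the data before the two bars are plotted. It is an assumption for illustration, not the shipped implementation.

```typescript
// Hypothetical per-day metric row for one campaign.
interface DailyMetric {
  date: string; // ISO date, e.g. "2021-09-01"
  impressions: number;
}

// Keep only the rows that fall within the picked date range.
function filterByRange(rows: DailyMetric[], start: string, end: string): DailyMetric[] {
  return rows.filter((r) => r.date >= start && r.date <= end);
}

// Sum impressions over the filtered range for one campaign.
function totalImpressions(rows: DailyMetric[]): number {
  return rows.reduce((sum, r) => sum + r.impressions, 0);
}

// The two values the two-bar comparison would show side by side.
function compareCampaigns(
  selected: DailyMetric[],
  comparison: DailyMetric[],
  start: string,
  end: string
): { selected: number; comparison: number } {
  return {
    selected: totalImpressions(filterByRange(selected, start, end)),
    comparison: totalImpressions(filterByRange(comparison, start, end)),
  };
}
```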
Generally, sentiment was positive!
This project was extremely fun. I was able to lead a new feature release from the initial secondary research all the way through to development, and it was gratifying every step of the way. Hearing both negative and positive sentiment during user testing sessions helped me get better at rapidly acting on feedback and addressing user needs.
I gained experience working on a new feature from scratch, learning project sizing to avoid scope creep, and working closely with user researchers to improve designs through feedback and iteration. A lot of the theory packed into the classroom became tangible through this project, and I was able to apply many of the skills I had learned.
If I had more time, I would test with more users who aren’t as familiar with the product. Due to time constraints, we were only able to test with a small number of external users and mostly with internal power users. This can skew expectations for actual adoption of the feature, as these users tend to behave a little differently and have ready access to more internal support.
A follow-up v2 improvement would be to show campaign metrics in a small expandable table below the comparison. This was frequently requested during our user testing sessions for ease of access. The information is already baked into the chart’s tooltips, but it would be easier to parse as a table.