Our team got together to scope an initial version of an audience comparison feature. We knew we had existing data we could leverage for such a comparison, but we still had open questions: which data points to compare, how to identify "peers" (campaigns with similar characteristics), which characteristics of a business we had available to match on, and, lastly, where in the product we would surface the feature.
Conceptualizing our approach
I designed a quick low-to-mid fidelity concept of the feature in isolation that showcased all of the data points we planned to use together in one interface. We interviewed users and gathered early feedback on the concept artifact. The feedback was validating enough to move forward with higher-fidelity designs and prototyping, incorporating minor customer suggestions into the follow-up approaches.
A concept test mockup that we used to build confidence in a direction for building an email campaign benchmarking MVP.
Combining new and existing work
Our team determined that this feature would make the most sense on the email campaign report page of the web app. While there were other potential surface areas to explore, we knew that on desktop web, over 30% of customer engagement happened on the Campaign Report page. This surface area had another advantage: we were already displaying open rates, click rates, and unsubscribe rates for the campaign in this report.
We knew over 30% of Mailchimp's web app engagement occurred on the email campaign report page, so we chose it as the home for our benchmarking MVP.
Using predicted audience demographics
Another key data point we were able to leverage came from a feature our team had built earlier in the year: predicted audience demographics. We used this data to identify other email campaigns with similar audiences, matching specifically on audience size and audience gender.
Earlier, we added predicted audience demographics to the user's Audience Dashboard. We used the same data points from this feature in our Campaign Benchmarking feature.
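To make the matching idea concrete, here's a minimal sketch of how peers could be selected on those two attributes. The data shape, names, and tolerance thresholds are illustrative assumptions, not the production logic.

```python
from dataclasses import dataclass

@dataclass
class CampaignAudience:
    campaign_id: str
    audience_size: int
    pct_female: float  # predicted share of female contacts, 0.0-1.0

def find_peer_campaigns(target, candidates, size_tol=0.25, gender_tol=0.10):
    """Return candidate campaigns whose predicted audience size and
    gender split fall within a tolerance band of the target campaign's
    audience. Tolerances here are hypothetical placeholders."""
    peers = []
    for c in candidates:
        if c.campaign_id == target.campaign_id:
            continue  # don't compare a campaign against itself
        size_diff = abs(c.audience_size - target.audience_size) / target.audience_size
        gender_diff = abs(c.pct_female - target.pct_female)
        if size_diff <= size_tol and gender_diff <= gender_tol:
            peers.append(c)
    return peers
```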
Using the customer's business vertical
When a new customer went through onboarding, they were given the option to self-categorize the business vertical that best matched their organization. We could use this data to identify similar businesses within the same vertical when making our comparison. For customers who hadn't selected a vertical during onboarding, we collaborated with our Data Science team to train an ML model on thousands of email campaigns to predict it. We also provided a way for customers to update their business vertical if the prediction was incorrect.
The customer's business vertical was a key ingredient in preparing our benchmarking comparison. In the case where a user had not self-selected their business vertical during onboarding, we used machine learning to predict the business vertical. A user could then update their business vertical in their Settings page for more accurate results.
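The fallback behavior can be summarized in a short sketch. The `Account` shape, field names, and `vertical_model` interface below are hypothetical stand-ins for illustration; the source tag reflects how the UI could flag predicted verticals as editable in Settings.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Account:
    self_selected_vertical: Optional[str]  # chosen during onboarding, may be None
    recent_campaign_text: str              # content the model classifies

def resolve_vertical(account: Account, vertical_model) -> Tuple[str, str]:
    """Prefer the vertical the customer self-selected during onboarding;
    fall back to the ML model's prediction when none was chosen."""
    if account.self_selected_vertical:
        return account.self_selected_vertical, "self-selected"
    return vertical_model.predict(account.recent_campaign_text), "predicted"
```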
Data visualization for comparison metrics
While open, click, and unsubscribe rates already existed on the individual campaign report, we didn't have a visualization that compared those rates against peer data. I tapped into our design system's chart patterns and iterated through a few different ways to visualize the email campaign's performance against the average performance of its peers. After several team and internal stakeholder reviews, including reviews with our data scientists, I landed on a simple bar chart pattern to visualize the comparison data.
I explored some no-chart options, but after several internal reviews with our design team and our immediate team, we felt a chart approach would communicate the data more clearly.
Other chart explorations, ranging from more detailed to oversimplified, all utilizing existing design system chart colors. In the end we opted for a balance between the approaches, with some light liberty taken with the colors. We also ran into technical complexity with adjusting the business vertical on this surface area, so we dropped that interaction from our MVP.
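Conceptually, the chart was fed by a simple per-metric comparison: the campaign's value, the peer average, and the difference between them. A rough sketch, with hypothetical data shapes:

```python
from statistics import mean

METRICS = ("open_rate", "click_rate", "unsubscribe_rate")

def benchmark_against_peers(campaign, peers):
    """For each core metric, pair the campaign's value with its peers'
    average and the delta between them. A positive delta means the
    campaign is above the peer average (for unsubscribe rate, lower is
    better, so a chart would treat that delta inversely)."""
    comparison = {}
    for metric in METRICS:
        peer_avg = mean(p[metric] for p in peers)
        comparison[metric] = {
            "campaign": campaign[metric],
            "peer_average": peer_avg,
            "delta": campaign[metric] - peer_avg,
        }
    return comparison
```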
Leveraging an early recommendation system
Another feature our team had built earlier in the year, and decided to leverage here, was an early recommendation system that surfaced suggestions to the user at key campaign performance moments; for example, a high number of unopened emails would trigger a recommendation to re-send the campaign to a segment of subscribers who hadn't opened it. We used this system to provide an action based on the campaign's benchmarking result: if the campaign performed above the comparison average, we displayed a congratulatory message; if it performed below, we displayed a recommendation with a suggested action and a link to a related support article explaining the reasoning behind it.
An example of a "Smart" recommendation placement on the Audience Dashboard. We paired this feature with Campaign Benchmarking to provide a recommended action with supporting data from our help article.
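The selection logic was essentially a threshold check against the peer average. A simplified sketch, with hypothetical message copy and response structure:

```python
def pick_benchmark_message(campaign_rate, peer_average, support_article_url):
    """Choose the message shown beneath the benchmarking chart:
    congratulate above-average campaigns, otherwise recommend an action
    alongside a support article that explains the reasoning."""
    if campaign_rate >= peer_average:
        return {
            "type": "congratulations",
            "message": "Nice work! This campaign performed above similar businesses.",
        }
    return {
        "type": "recommendation",
        "message": "Try resending this campaign to subscribers who didn't open it.",
        "support_article_url": support_article_url,  # link explaining the reasoning
    }
```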
Outlining the user journey
Our team had previously built the foundations of this recommendation system, first experimenting with it on the customer's Audience Dashboard before extending it to other surface areas of the app. With those pieces in place, I outlined the user journey for pairing benchmarking results with a recommended action.
A high level user flow documenting the "happy path".
Testing in the wild
With the concept validated, we moved into higher-fidelity designs and wanted additional user feedback on this more polished approach. To get it, our team took advantage of an opportunity to visit small business owners who were Mailchimp customers at their places of business in Portland, OR. We spent time with five Mailchimp customers who were each primarily responsible for their business's email marketing. We came away from the trip with a few observations:
Wishlist feature: Self-comparison
While participants saw value in comparing their data against their peers', almost every participant mentioned a desire to compare campaign performance against their own prior marketing campaigns.
Trusting our recommendations
While the comparison data was valuable to participants, some expressed a desire to see proof that the recommendations provided would actually have the desired impact. In other words, they wanted to see a recommended action work for someone else before trying it themselves. This ended up being a major insight for us regarding recommendations in general.
Members of the team waiting to meet a Mailchimp customer outside of their place of business in Portland, OR. Unfortunately, I don't have photos of us actually talking to customers!