Campaign Benchmarking for Mailchimp

Helping Mailchimp customers better understand their email campaigns' performance

Summary

Company
Mailchimp
Role
Product Designer
Responsibilities
UI/Visual Design
Interaction Design
Prototyping
Platform
Web App
Year
2019
Team Structure
Project Manager
Product Manager
2 Frontend Engineers
2 Backend Engineers
1 Content Strategist

The Campaign Benchmarking project was a collaboration between a newly formed Data Science team and an existing Audience Management team. The goal of the feature was to help our customers understand how their email campaign performance compared to that of their peers, based on the size of their audience, the demographic makeup of their audience, and their business vertical. Depending on whether performance was above or below the peer average, we would surface either a congratulatory message or a recommended action to improve future campaign performance.

The Problem

Through customer interviews, we learned that our customers did not feel they had a clear understanding of how their marketing campaign performance compared to that of their peers. Additionally, quarterly survey data indicated that a way to compare audience performance data was the second most-requested feature. Based on this data, our team, the Audience Management team, was asked to prioritize the request.

Retro moment

While this work was grounded in customer needs, it was essentially a top-down request. In retrospect, I would have done more stakeholder interviews to better understand where the request was coming from and why it was being prioritized over other initiatives.

The Approach

Our team got together to determine how to build an initial version of an audience comparison feature. We knew we had existing data we could leverage to make such a comparison. We needed to determine which metrics to compare, how to identify “peers” (campaigns with similar characteristics), which characteristics of a business were available to use in the comparison, and, lastly, where to surface the feature.

Conceptualizing our approach

I designed a quick low-to-mid fidelity concept of the feature in isolation that showcased all of the data points we planned to use together in one interface. We interviewed users and gathered early feedback on the concept artifact. We received enough validating feedback to move forward with higher-fidelity designs and prototyping, incorporating minor customer feedback into the follow-up iterations.

Combining new and existing work

Our team determined that this feature would make the most sense on the email campaign report page of the web app. While there were other potential surface areas to explore, we knew that on desktop web, 30% or more of customer engagement happened on the Campaign Report page. Choosing this surface also gave us another advantage: we were already displaying open rates, click rates, and unsubscribe rates for the campaign in that report.

Using predicted audience demographics

Another key data point we were able to leverage was a feature our team had worked on earlier in the year: predicted audience demographics. We used this data to compare against other email campaigns with similar audiences, looking specifically at audience size and audience gender makeup.
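To make the idea of a “peer” concrete, here is a rough sketch of the kind of similarity filter described above. The field names and thresholds are hypothetical; the real matching logic lived with our Data Science team.

```python
from dataclasses import dataclass

@dataclass
class AudienceProfile:
    size: int           # number of contacts in the audience
    pct_female: float   # predicted share of female contacts, 0.0-1.0

def is_peer(candidate: AudienceProfile, target: AudienceProfile) -> bool:
    """Treat a campaign as a peer when its audience size falls in the
    same rough bucket and its predicted gender makeup is close.
    Thresholds here are illustrative assumptions."""
    similar_size = 0.5 * target.size <= candidate.size <= 2.0 * target.size
    similar_gender = abs(candidate.pct_female - target.pct_female) <= 0.15
    return similar_size and similar_gender

# Example: a 10k-contact audience vs. a 14k-contact candidate.
target = AudienceProfile(size=10_000, pct_female=0.62)
candidate = AudienceProfile(size=14_000, pct_female=0.55)
print(is_peer(candidate, target))  # True
```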

Using the customer's business vertical

When a new customer went through onboarding, they were given the option to self-categorize the business vertical that best matched their organization. We could use this data to paint a picture of similar businesses within the same vertical when making our comparison. In collaboration with our Data Science team, we trained an ML model on thousands of email campaigns to predict a business vertical, and used it to fill in the vertical for customers who hadn’t selected one during onboarding. We also provided a way for customers to update their business vertical if the prediction was incorrect.
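As a loose illustration of that kind of classifier, a simple text-classification pipeline might look like the sketch below. The training data, labels, and model choice are assumptions for illustration, not the actual model the Data Science team shipped.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: campaign body text paired with the
# vertical the sender self-selected during onboarding.
campaign_texts = [
    "Spring sale! 20% off all shoes this weekend only.",
    "Join us Sunday for our community food drive.",
    "New yoga class schedule for members starts Monday.",
]
verticals = ["Retail", "Nonprofit", "Health & Fitness"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(campaign_texts, verticals)

# Predict a vertical for a customer who skipped the onboarding question.
print(model.predict(["Flash sale: buy one get one free on all jackets"]))
```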

Data visualization for comparison metrics

While open, click, and unsubscribe rates existed on the individual campaign report, we didn’t have a visualization that compared those metrics against peer data. I tapped into our design system’s chart patterns and iterated through a few different ways to visualize the email campaign’s performance against the average performance of its peers. After several team and internal stakeholder reviews, including reviews with our data scientists, I landed on a simple bar chart pattern to visualize the comparison data.
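As a rough sketch of the chart’s shape, here is a grouped bar chart with made-up metric values, using matplotlib as a stand-in; the production chart was built from Mailchimp’s design system patterns.

```python
import matplotlib.pyplot as plt

metrics = ["Open rate", "Click rate", "Unsubscribe rate"]
this_campaign = [24.3, 3.1, 0.4]  # hypothetical values, in percent
peer_average = [21.0, 2.7, 0.6]   # hypothetical peer averages

x = range(len(metrics))
width = 0.35

fig, ax = plt.subplots()
ax.bar([i - width / 2 for i in x], this_campaign, width, label="This campaign")
ax.bar([i + width / 2 for i in x], peer_average, width, label="Peer average")
ax.set_ylabel("Rate (%)")
ax.set_xticks(list(x))
ax.set_xticklabels(metrics)
ax.legend()
plt.show()
```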

Leveraging an early recommendation system

Another feature our team had built earlier in the year, and decided to leverage here, was an early recommendation system that surfaced suggestions to the user when key campaign performance moments occurred; for example, a high number of unopened emails would trigger a recommendation to re-send the campaign to a segment of recipients who hadn’t opened it. We used this system to provide an action based on the campaign’s benchmarking result. If campaign performance was higher than the comparison average, we displayed a congratulatory message; if it was below the comparison average, we displayed a recommendation with a suggested action and a link to a related support article explaining the reasoning behind the recommendation.
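The branching itself was straightforward. Here is a minimal sketch of the above/below-average logic, with hypothetical messages and a placeholder support link:

```python
def benchmark_message(campaign_rate: float, peer_average: float) -> dict:
    """Return either a congratulatory message or a recommendation,
    mirroring the above/below-average branching described above.
    Messages and the support URL are illustrative placeholders."""
    if campaign_rate >= peer_average:
        return {
            "type": "celebration",
            "message": "Nice work! Your open rate beat your peer average.",
        }
    return {
        "type": "recommendation",
        "message": "Your open rate was below your peer average.",
        "action": "Try A/B testing your subject lines.",
        "support_url": "https://mailchimp.com/help/...",  # placeholder
    }

# Example: a campaign that outperformed its peers.
print(benchmark_message(campaign_rate=0.243, peer_average=0.21)["type"])
```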

Outlining the user journey

With the essential components and elements selected, we were ready to put concept to code. I outlined the user journey for the experience using the following user flow. This illustrated the "happy path" the user would take as well as capturing edge cases along the way, for example, when to show a celebration moment for above-average performance and when to show a banner suited to underperforming campaigns.

Testing in the wild

Now that we were moving forward with our concept, we shifted into higher-fidelity designs and wanted additional user feedback on that approach. To do this, our team took advantage of an opportunity to visit small business owners who were Mailchimp customers, onsite at their places of business in Portland, OR. We spent time with five Mailchimp customers who were primarily responsible for handling their business's email marketing. We came away from the trip with a few observations:

Wishlist feature: Self-comparison

While participants saw value in comparing performance data against their peers, almost every participant mentioned a desire to compare campaign performance data against their own prior marketing campaigns.

Trusting our recommendations

While the comparison data was valuable for participants, some expressed a desire to see some sort of proof that the recommendations provided would actually have the desired impact. In other words, they wanted to see that recommended action work for someone else before trying it themselves. This ended up being a major insight for us regarding recommendations in general.

Outcomes

We launched campaign benchmarking behind a feature flag in Optimizely and primarily measured engagement with the comparison chart metrics, along with responses to an in-app Usabilla survey that collected feedback from people using campaign benchmarking.
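As a sketch of how that gating and measurement fit together, here is what the flag check and event tracking might look like with Optimizely's full-stack Python SDK; the datafile, flag key, and event key are assumptions for illustration.

```python
from optimizely import optimizely

def render_benchmarking_module() -> None:
    # Placeholder for the web app's actual rendering path.
    print("showing campaign benchmarking")

# Load the Optimizely datafile (hypothetical filename).
with open("optimizely_datafile.json") as f:
    client = optimizely.Optimizely(datafile=f.read())

user_id = "user-123"

# Only show benchmarking to users bucketed into the enabled variation,
# and record engagement when they interact with the comparison chart.
if client.is_feature_enabled("campaign_benchmarking", user_id):
    render_benchmarking_module()
    client.track("benchmark_chart_engaged", user_id)
```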

10-15% engagement at a statistically significant level

We saw a high level of engagement with the feature among customers who saw the email campaign report under our feature flag experiment.

Recommendations adopted into the core experience

Campaign Benchmarking is now available to all users, starting with the free plan. Additionally, the recommendation engine our team built and used in email campaign benchmarking is now used in other parts of Mailchimp’s platform, such as the campaign builder experience, and has since been updated and refined.

Retro moment

If I could go back, I would have advocated for taking the time to establish and align on a clearly defined hypothesis, along with a set of KPIs our team could track to determine success beyond engagement. I believe we could have better understood the impact campaign benchmarking was having on our customers as well as the business.
