SYNCHRONY CONNECT
Making an Impact with Direct Marketing: Test-and-Learn Strategies that Maximize ROI
SYNCHRONY CONNECT GROW

Word on the street is that direct marketing is all but dead. Digital and social media marketing have taken over, and many say direct marketing will soon fade into the sunset. On the contrary, the data shows that the demise of direct marketing has been greatly exaggerated. According to the Data & Marketing Association, marketers still spent nearly $50 billion on direct mail in 2016. This spend is not coming from just a few organizations: nearly 60% of companies said they used direct mail for promotional marketing activities, and 87% of organizations leveraged email for the same purpose.¹ As a result, companies conducting direct marketing campaigns need to carefully consider the impact of their investments while identifying best practices to measure and quantify each campaign's return on investment (ROI). The ability to measure ROI was cited as one of the most significant barriers to the adoption of data-driven marketing in a survey conducted by Ascend2 and Research Partners.¹

Let's say your new marketing analyst has a great idea for a marketing campaign: instead of offering a 10% off coupon on a $50 spend, offer a $10 off coupon on a spend of $50 or more. This will save the company money, because most customers spend more than $50, and it will drive traffic to the store. And let's say it drives $1 million in sales. Great, we have a winner, right? But hold on. How much would customers have spent with the old campaign? How much would the same customers have spent if they didn't get an offer at all? Setting up a proper test-and-learn campaign structure helps answer these key questions and ensures a company gets the most out of its marketing dollars. To effectively measure the impact of direct marketing initiatives, each campaign should be designed so its impact can be isolated and analyzed separately from the impacts of other factors.
A properly designed campaign will generate critical insights about:

1. Customer: which customers respond to the campaign.
2. Channel: which channels drive response.
3. Creative: which creatives and messages prompt customers to action.
4. Offer: which offers drive the biggest returns.

Given ever-changing budgets and priorities, an organization needs to understand each of these elements to identify which ones drive the greatest impact on results. Let's look at two structures used to test the effectiveness of a marketing campaign:

Section I: Test vs. Control. This section outlines the design of a direct marketing campaign to assess its direct impact when control groups are available and outside influences are eliminated.

Section II: No Control or Biased Control Group. This section focuses on situations where it is not possible to hold out a control group, or where the selected control group is biased.

Quick definition: a control group comprises potential targets who are not given the marketing offer, so that the impact of the campaign can be measured against the non-marketed population.
Section I: Test Design

1. Set aside a random control group
Measuring and quantifying results is critical to any marketing campaign. Accurate measurement allows the marketer to understand which customers are responding to which offers over which channels, so future campaigns can be refined. To get an accurate read of a campaign, it's critical to separate a randomly chosen control group, which will not receive the offer, from the test population, which will. Comparing the two groups allows the marketing team to isolate the impact of the direct marketing campaign.

2. Select the right sample size
To enable a statistically significant read on the campaign, selecting the right sample size is a critical step. For example, if 50% of those receiving the direct mail offer shopped and 45% of those not receiving the offer also shopped, an undersized campaign may make it impossible to determine whether that difference is statistically significant. Broadly speaking, two types of sample size calculators can be used to help size a campaign: one based on the estimated response to the campaign, the other based on estimated sales. Either or both methods can be used while designing a marketing campaign.

3. Ensure true randomization
After the required sample size has been determined and customers for the test and control groups chosen, checks need to be performed to ensure the two groups are similar. Test and control groups should be compared on key metrics such as sales and transactions in the months leading up to the campaign, and sales during the same period in prior years (to account for seasonality). The two groups may also need to be compared on other factors, such as tenure and risk profile, to ensure they are similar.
If you're working in an industry where the use of demographic attributes is permissible, the analyst should also ensure that the test and control groups have similar demographic profiles.

4. Set up a universal control group
Universal control groups are typically selected at the beginning of the year, with customers in the group removed from all marketing campaigns for that year. By comparing test customers against the universal control group, analysts can calculate the cumulative impact of all marketing activities. Universal control groups also assist in lift attribution and in calculating the impact of each incremental touch point. While selecting the group, analysts should ensure that its size is appropriate and representative of the organization's customer base.

5. Multiple factors being tested at once? Consider Design of Experiments
The marketing team may want to test the impact of multiple factors at several points in a campaign. For example, they may be interested in testing the impact of offering customers $10 off as compared to 10% off, as well as testing different creatives or different channels. The number of test and control groups required multiplies rapidly in such cases, so the team may run into an inadequate sample size in each test cell. Design of Experiments (DOE) is a technique that can be leveraged in such cases to limit the number of groups needed to effectively measure the impact of multiple factors.
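As a sketch of the response-based sample size calculator mentioned in step 2, the standard two-proportion formula gives the number of accounts needed per group. The function name and the 50%-vs.-45% figures follow the example above; the 5% significance level and 80% power are common defaults, not Synchrony-specific thresholds.

```python
import math
from statistics import NormalDist

def accounts_per_group(p_test, p_control, alpha=0.05, power=0.80):
    """Accounts needed in each of the test and control groups to detect
    the difference between two response rates with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p_test + p_control) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p_test * (1 - p_test)
                                       + p_control * (1 - p_control))) ** 2
    return math.ceil(numerator / (p_test - p_control) ** 2)

# The 50%-vs.-45% example above:
print(accounts_per_group(0.50, 0.45))   # → 1565 accounts per group
```

Note how quickly the requirement grows as the expected difference shrinks; this is why sizing must happen before, not after, the mail file is pulled.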
Section II: Test Measurement Without a Control Group

While it's preferable to have a randomly chosen control group for all campaigns, it may not always be possible to hold one out. Examples of these scenarios include:

- Analyzing the benefits of a multi-tender loyalty program, where an accurate measure of the program's impact is needed.
- Quantifying the benefit of an e-commerce platform launch.

There may also be cases where a control group was selected but was not large enough, or was corrupted. Under these scenarios, analysts typically perform a pre- versus post-period or prior-year versus current-year analysis. These techniques may not paint an accurate picture of the campaign because of the following gaps:

- They do not factor in the seasonality associated with retail spend.
- Natural attrition is not factored in (e.g., spend among existing customers tends to decrease year over year as some customers migrate away from the brand, while those who still shop spend less than they did a year ago).
- They do not adequately address causality versus correlation (e.g., how can the analyst be confident that an observed increase or decrease is due to the marketing initiative and not to other factors, such as changes in merchandise or pricing?).

To enable an accurate read in these scenarios, the following techniques can be used, in order of increasing complexity.

1. KPI Decile Approach
A simple but effective technique for measuring lift when the control group is biased is to bucket all customers, both those who received the offer (test group) and those who didn't (control group), by spend or another key performance indicator (KPI) in the pre-period, prior to the campaign. You can then compare test customers and control customers within each of these pre-period spend buckets. This allows for an apples-to-apples comparison of the test and control groups and a better read of the marketing effort's impact, as illustrated in the chart below.
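A minimal sketch of this decile comparison, with hypothetical field names (pre_spend, post_spend, is_test are illustrative, not from the paper):

```python
from statistics import mean

def decile_lift(customers):
    """Bucket all accounts into deciles of pre-period spend, compare the
    post-period spend of test vs. control accounts inside each decile,
    and return the per-account lift, weighted by test counts."""
    ranked = sorted(customers, key=lambda c: c["pre_spend"])
    n = len(ranked)
    lifts, weights = [], []
    for d in range(10):
        decile = ranked[d * n // 10:(d + 1) * n // 10]
        test = [c["post_spend"] for c in decile if c["is_test"]]
        ctrl = [c["post_spend"] for c in decile if not c["is_test"]]
        if test and ctrl:                  # skip deciles missing either side
            lifts.append(mean(test) - mean(ctrl))
            weights.append(len(test))
    return sum(l * w for l, w in zip(lifts, weights)) / sum(weights)
```

Because each decile compares test accounts only against control accounts with similar pre-period spend, a control group skewed toward high (or low) spenders no longer inflates or deflates the measured lift.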
[Chart: Average Sales per Account (illustrative). Test: $55.7. Biased control: $52.6, implying a +5% lift. Normalized control: $50.4, implying a +10% lift.]

Measured against the normalized control, the campaign actually drove a 10% lift, double the 5% suggested by the biased control.

2. Greedy match algorithm
The KPI Decile Approach, while simple and effective, matches the test and control groups on only one metric (e.g., sales). The problem might warrant matching test and control groups on more than one metric (e.g., sales, tenure and transaction frequency). The greedy match algorithm is a technique that can be used to match test and control accounts on more than one attribute. Its goal is to produce matched samples with similar characteristics across the test and control groups.

[Chart: Average In-Store Spend per Account (illustrative). Synthetic control vs. test, tracked monthly from January through the following January, spanning the pre-period, the marketing initiative and the post-period.]
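One way to implement this, sketched here with illustrative attribute names rather than any Synchrony-specific logic, is greedy nearest-neighbor matching over standardized pre-period attributes:

```python
from statistics import mean, pstdev

def greedy_match(test, control, attrs):
    """For each test account, in order, greedily claim the closest
    still-unmatched control account, using Euclidean distance over
    standardized attributes so no single metric (e.g., dollar sales)
    dominates the match."""
    everyone = test + control
    # Per-attribute mean and standard deviation for z-scoring.
    stats = {a: (mean(r[a] for r in everyone),
                 pstdev(r[a] for r in everyone) or 1.0) for a in attrs}

    def z(rec):
        return [(rec[a] - stats[a][0]) / stats[a][1] for a in attrs]

    unmatched = list(range(len(control)))
    pairs = []
    for t in test:
        tz = z(t)
        best = min(unmatched, key=lambda i: sum(
            (u - v) ** 2 for u, v in zip(tz, z(control[i]))))
        unmatched.remove(best)
        pairs.append((t, control[best]))
    return pairs

# Illustrative accounts: match on pre-period sales and tenure.
test = [{"sales": 100, "tenure": 2}, {"sales": 400, "tenure": 8}]
ctrl = [{"sales": 390, "tenure": 8}, {"sales": 105, "tenure": 2},
        {"sales": 900, "tenure": 1}]
matched = greedy_match(test, ctrl, ["sales", "tenure"])
```

Greedy matching is order-dependent and not globally optimal; caliper limits on the allowed distance, or optimal assignment methods, are common refinements.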
3. Propensity Scoring
You can also leverage modeling techniques to form a synthetic, or artificial, control group when a control group is either unavailable or biased. Using classification algorithms such as logistic regression, the propensity scoring technique scores and matches test and control accounts on pre-period attributes. A propensity scoring model is an alternative to specifying multiple attributes on which to match members of the test group with members of the control group.

ROI by propensity tier (illustrative):

  Tier               Test   Control   Lift
  High                -        -      $30
  Medium              -        -      $15
  Low                 -        -      $6
  Weighted average lift               $17

4. Expected Sales
An alternate approach involves developing a model to predict expected sales (or another KPI) in the post-campaign period. Trained on customers who are not part of the test group, the model's intent is to predict what sales (or other KPIs) would have been without the offer or treatment. The model can either be linear, with the post-period sales of the control group as the dependent variable, or formulated so that control customers with high sales are classified as responders. The independent attributes are the customer behaviors in the pre-campaign period. The resulting model is then scored against the test population to develop baseline sales for each customer in the test group. Each customer's actual sales are then compared against that baseline to calculate the lift due to the campaign.

Conclusion
Many direct marketing professionals spend a great deal of time and effort designing the marketing campaign that will be most attractive to their customers. But a proper test and control structure is what determines which campaign has the most impact on overall results, whether a campaign is truly impactful, and whether it delivers the best ROI to the business.
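As a rough, self-contained sketch of the propensity scoring technique (a real implementation would use a statistics library such as statsmodels or scikit-learn, and the field names here are illustrative), a tiny logistic regression can score each account's propensity to belong to the test group from pre-period attributes; test accounts are then compared against control accounts with similar scores:

```python
import math

def fit_propensity(accounts, attrs, lr=0.1, epochs=500):
    """Fit logistic regression P(account is in test group | pre-period
    attributes) by stochastic gradient descent; returns a scoring function."""
    w = [0.0] * (len(attrs) + 1)          # bias + one weight per attribute

    def score(rec, weights):
        z = weights[0] + sum(wi * rec[a] for wi, a in zip(weights[1:], attrs))
        z = max(-30.0, min(30.0, z))      # clamp to avoid math.exp overflow
        return 1.0 / (1.0 + math.exp(-z))

    for _ in range(epochs):
        for rec in accounts:
            y = 1.0 if rec["is_test"] else 0.0
            grad = y - score(rec, w)      # gradient of the log-likelihood
            w[0] += lr * grad
            for i, a in enumerate(attrs, start=1):
                w[i] += lr * grad * rec[a]
    return lambda rec: score(rec, w)

# Illustrative data: test accounts skew toward higher pre-period spend,
# i.e., the control group is biased low.
accounts = (
    [{"pre_spend": s, "is_test": True} for s in (7.0, 8.0, 9.0, 10.0)] +
    [{"pre_spend": s, "is_test": False} for s in (1.0, 2.0, 3.0, 4.0)]
)
propensity = fit_propensity(accounts, ["pre_spend"])
# High-spend accounts score near 1, low-spend near 0; each test account is
# then paired with the control account closest in propensity score.
```

The same scores support the tiered view in the table above: bucket accounts into high, medium and low propensity tiers, compute lift within each tier, and weight the tiers to get an overall read.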
To do this effectively, identifying a representative control group with an adequate sample size is critical to measuring the true lift of the campaign. Universal control groups help quantify the cumulative impact of all marketing initiatives, while DOE techniques allow marketing teams to quantify the impact of various offers, channels and creatives. In some cases it is not possible to hold out a control group at all; in such situations, or when a control group is biased, the analytical and modeling techniques presented in this paper can be leveraged to help measure campaign results.

¹ Data & Marketing Association, Statistical Factbook 2017.

For more information or to connect with an expert, contact us at synchronyconnect@synchronyfinancial.com. Synchrony Connect is a value-added program that lets partners tap into our expertise in areas beyond credit. It offers knowledge and tools that can help you grow, lead and operate your business. synchronyfinancial.com

This content is subject to change without notice and is offered for informational use only. You are urged to consult with your individual business, financial, legal, tax and/or other advisors with respect to any information presented. Synchrony Financial and any of its affiliates (collectively, "Synchrony") make no representations or warranties regarding this content and accept no liability for any loss or harm arising from the use of the information provided. Your receipt of this material constitutes your acceptance of these terms and conditions. 0717-138