Use experiments in SAS Marketing Automation to gain insights into your campaigns. Jorge González Flores, Data Science & Analytics 15.6.

1 Use experiments in SAS Marketing Automation to gain insights into your campaigns Jorge González Flores, Data Science & Analytics

2 Introduction

3 A little bit about me. From Madrid to Copenhagen. MSc in Computer Engineering + MSc in Mathematics. Background and experience with SAS: BI Internship ( ), Business Analytics Consultant (2011), Marketing Analyst ( ) with SAS MA and SAS MOM, Customer Insight Analyst in Data Science & Analytics (2015-) with SAS MA + RTDM.

4 Nordea's Customer vision: "Easy to deal with, relevant and competent, anywhere and anytime, where the personal and digital relationship makes Nordea the safe and trusted partner." Supported by 1:1 Marketing using propensity models, experiments in SAS MA, and an Omni-Channel setup with SAS RTDM.

5 Experiments as part of campaign design

6 Types of experiments
- Target vs. Control: We measure the communication effect by comparing the response rate between the Target Group (customers who will receive the communication) and the Control Group (customers who won't receive the communication).
- A/B Test: We identify the best communication by comparing the response rate between Target Group A (customers who will receive communication A) and Target Group B (customers who will receive communication B).
- Model Performance: We measure the modelling effect by comparing the response rate between a Random Group (acting as a control group for the modelling effect, based on a simple random selection of customers) and a High Probability Group (customers with a high probability according to a propensity model, selected after the random group has been chosen). A simplified selection sketch is shown below.
In all campaigns we aim to have control groups. First, we will test all communications against each other (all challengers) until we identify the best one (champion). Later, we will test our champion against new ones (champion vs. challengers).
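
A minimal sketch of how the Model Performance split could be produced in SAS, assuming a hypothetical scored_customers dataset with customer_id and propensity_score columns and illustrative group sizes (these names and numbers are not from the presentation): the random group is drawn first, and the high-probability group is then taken from the top scores of the remaining customers.

   /* Draw the random group first (dataset, variable names and sizes are assumptions). */
   proc surveyselect data=scored_customers out=random_grp
                     method=srs sampsize=5000 seed=12345;
   run;

   /* High Probability group: highest propensity scores among the remaining customers. */
   proc sql;
      create table ranked as
      select *
      from scored_customers
      where customer_id not in (select customer_id from random_grp)
      order by propensity_score desc;
   quit;

   data high_prob_grp;
      set ranked(obs=20000);   /* keep only the top-scored customers */
   run;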

7 Campaign design without propensity model. [Flow diagram: the campaign Universe is split into Target and Control cells (Target vs. Control) to measure the communication effect; within the Target group, Champion and Challenger(s) communications are compared in an A/B Test to find the best communication.]

8 Campaign design with propensity model. [Flow diagram: the Universe is split into a Random Selection and a High Probability Selection; each branch has its own Target vs. Control split to measure the communication effect and its own A/B Test (Champion vs. Challenger(s)) to find the best communication. Comparing the Target cells of the two branches measures the modelling effect with campaign, and comparing their Control cells measures the modelling effect without campaign (Model Performance).]

9 Response Rate: result of running experiments. The total value creation that we can provide to the organization is the difference between not doing any targeted marketing campaign and running all three types of experiments. [Chart: response rate per group (Control, Target A, Target B, Model), where Target vs. Control gives the communication effect, the A/B test identifies the best communication, Model Performance gives the modelling effect, and together they add up to the total value creation.] All the value creation has to be presented in campaign reporting to be able to track how we are performing.

10 How can we trust the results? We are essentially running experiments where we test whether we have been able to change the behaviour (e.g. cross-sell a product, get the customer to download our mobile app, etc.) of a given group of customers with certain conditions (the communication). In statistical terms, we are testing the hypothesis of whether or not a statement is true: Null Hypothesis: sending this communication doesn't change the response rate. Alternative Hypothesis: sending the communication does change the response rate. Based on the results, the null hypothesis is either retained (nothing has changed) or rejected in favour of the alternative hypothesis (something has changed) if the result is statistically significant. Only with statistical significance will we be able to make decisions based on the results.
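
A minimal sketch of such a test in SAS, assuming a hypothetical campaign_results dataset with a group variable (TARGET/CONTROL) and a binary responded flag (names are assumptions, not from the presentation); the Pearson chi-square p-value indicates whether the difference in response rates between the two groups is statistically significant.

   /* Hypothetical input: one row per customer with group (TARGET/CONTROL) */
   /* and responded (1/0).                                                 */
   proc freq data=campaign_results;
      tables group*responded / chisq nocol nopercent;
      title 'Target vs. Control: chi-square test of response rates';
   run;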

11 Out-of-the-box options in SAS Marketing Automation

12 Control groups in Outbound campaigns. For reporting purposes, we mark a cell connected to a communication node as a control group via a checkbox in its properties. Customers in this cell are not sent out to the channel. The names of the cells and communication nodes are also used in the reporting.

13 Reasons for a new custom node (I). The existing MA nodes didn't cover the desired functionality. Split node: the cells it creates can be set as control groups, but the size of the control group needs to be calculated outside MA and set manually. The split is totally random and hence different after every execution. For all possible split methods, the user has to specify the size either as a count or as a percentage.

14 Reasons for a new custom node (II). The existing MA nodes didn't cover the desired functionality. A/B Test node: this node has an embedded calculator to set the optimal size for the control group with a statistical test, but the cells it generates can't be set as control groups, because in A/B testing all customers are supposed to be sent to the channel. The split is totally random and hence different after every execution.

15 How SAS supports running experiments: The custom A/B Test and Control node

16 Features of the A/B Test and Control node. We have coded the desired functionality in SAS and packaged it in a single stored process. The new custom node has the following features:
- Automatic calculation of a statistically significant size for the control group and the challengers, with the fewest possible parameters to adjust.
- Uses the existing control-group functionality in MA.
- Possibility to choose between different combinations of tests: Target vs. Control, A/B Test, or A/B Test vs. Control.
- Supports the same types of A/B test as the existing node, Champion/Challenger and All Challengers, for any number of challengers.
- Uses a seed for the random split: repeated executions of the same campaign result in almost the same split. The campaign is also part of the seed, so the split differs between campaigns (a simplified sketch follows below).
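
A simplified sketch of a campaign-dependent, reproducible split, assuming a hypothetical customers input dataset, a campaign_cd macro variable and an illustrative 10% control share (all assumptions, not the node's actual code); the real node derives its seed inside the stored process, but the idea is the same: the seed is a function of the campaign plus an optional extra value, so re-running the same campaign reproduces the split while different campaigns get different ones.

   %let campaign_cd = CAMP_001;   /* illustrative campaign code */
   %let extra_seed  = 0;

   data split;
      set customers;
      length cell $8;
      if _n_ = 1 then do;
         /* Derive a numeric seed from the campaign code and the extra seed value. */
         seed = &extra_seed;
         do i = 1 to length("&campaign_cd");
            seed + rank(char("&campaign_cd", i)) * i;
         end;
         call streaminit(seed);
      end;
      u = rand('uniform');                 /* reproducible draw per row */
      if u < 0.10 then cell = 'CONTROL';   /* illustrative 10% control share */
      else             cell = 'TARGET';
      drop i seed u;
   run;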

17 Ensuring enough statistical significance (I). Inside the node, we run a Pearson chi-square test for two proportions using SAS PROC POWER to calculate the minimum size of each group. It is the same calculation used by the A/B Test node that comes with the tool. Four of the parameters are used here:

   proc power;
      twosamplefreq test = pchi
         refproportion  = &refproportion
         proportiondiff = &proportiondiff
         npergroup      = .
         sides          = 2
         power          = &power
         alpha          = &alpha;
   run;

PROC POWER returns npergroup, the minimum number of customers that should be in each group for the specified parameter values.

18 Ensuring enough statistical significance (II). This is the meaning of the four parameters required to run the statistical test (an illustrative sizing run is shown below):
- Reference response rate: the response rate we usually get for the campaign. If there is no reference (e.g. when running a pilot), the size of the control group is set to a default value that is sufficient for tests where the response rate or the increase is very small, as is usually the case with marketing campaigns.
- Expected increase: the expected increase in the response rate for this communication. If there is no expected increase, the size of the control group will be large enough to properly compare the results between the test and the control group, assuming a small effect to measure.
- Power of test: probability of a true-positive detection (values: 80% and 90%).
- Alpha (significance level): probability of a false-positive detection (values: 1%, 5% and 10%).
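
As an illustration (the values below are assumptions, not the node's actual defaults), a campaign with a 5% reference response rate, an expected increase of one percentage point, 80% power and a 5% significance level would be sized like this:

   proc power;
      twosamplefreq test = pchi
         refproportion  = 0.05    /* reference response rate      */
         proportiondiff = 0.01    /* expected increase (5% -> 6%) */
         npergroup      = .       /* solve for the group size     */
         sides          = 2
         power          = 0.80
         alpha          = 0.05;
   run;

For small response rates and small expected increases like these, the required size per group quickly runs into the thousands of customers, which is presumably why the node also caps the control group at a maximum relative size (see the next slide).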

19 Other parameters. The other four parameters help control the output (a simplified allocation sketch follows below):
- Include controls?: if set to yes, customers will be sent to the corresponding control cell; if set to no, it will be left empty (when we plan to run just an A/B test).
- A/B Test Type, with three possible values: None (no customers will be sent to the corresponding cells; used when doing just Test vs. Control), Champion / Challengers (only enough customers will be sent to the challengers, and the rest will be sent to the champion), and All Challengers (customers will be split equally between the output cells).
- Max. relative size of the control group: the maximum percentage of the selection of customers that we allow to be selected as the control group.
- Extra seed value: to be added in case of a change in the experiment.
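
A simplified sketch of how these parameters could drive the allocation, with hypothetical macro parameter names (n_total is the size of the input selection and n_min the npergroup value returned by PROC POWER); this is only an illustration of the logic under those assumptions, not the node's actual code.

   %macro allocate(n_total=, n_min=, include_controls=Y, ab_type=CHAMPION,
                   n_challengers=1, max_ctrl_pct=0.10);
      data _null_;
         /* Control group: statistically required size, capped at the */
         /* maximum relative size allowed for this selection.         */
         if upcase("&include_controls") = 'Y' then
            n_control = min(&n_min, floor(&n_total * &max_ctrl_pct));
         else n_control = 0;
         n_rest = &n_total - n_control;

         select (upcase("&ab_type"));
            when ('NONE') do;          /* Target vs. Control only     */
               n_per_challenger = 0;
               n_champion = n_rest;
            end;
            when ('CHAMPION') do;      /* Champion / Challengers      */
               n_per_challenger = min(&n_min, floor(n_rest / (&n_challengers + 1)));
               n_champion = n_rest - n_per_challenger * &n_challengers;
            end;
            when ('ALL') do;           /* All Challengers: equal split */
               n_per_challenger = floor(n_rest / &n_challengers);
               n_champion = 0;
            end;
            otherwise;
         end;
         put n_control= n_champion= n_per_challenger=;
      run;
   %mend allocate;

   %allocate(n_total=100000, n_min=8000, ab_type=CHAMPION, n_challengers=2);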

20 Example 1: Test/Control only. [Campaign flow screenshot with the callout "We ignore this cell".]

21 Example 2a: A/B Test only (Champion/Challenger). [Campaign flow screenshot with the callout "We ignore this cell".]

22 Example 2b: A/B Test only (All Challengers)

23 Example 3: A/B Test (All Challengers) and Control Group

24 Summary

25 Summary. In order to identify messages that are relevant for our customers, at Nordea we run three types of experiments in our campaigns: Target vs. Control, A/B Test and Model Performance. To ensure we can make the right decisions based on the results of the campaigns, we need statistically significant groups in our experiments. The standard functionality in SAS Marketing Automation doesn't cover all our needs for running experiments, but MA supports developing new nodes using SAS code inside a stored process, and this has allowed us to extend the functionality of the tool to cover our requirements. With the new automated A/B Test and Control node, we have simplified our campaigns and reduced our time to market, while ensuring that campaign reporting can automatically present each group and the results.

26 Q&A

27 Thank you!