Why Does Size Matter So Much For Bidder Announcement Returns?


November 16, 2015

Abstract

Bidder and target size are key drivers of bidder announcement returns in takeovers. But why do they matter so much? We document strong reversal patterns in the data: for public target deals, low bidder returns are associated with small bidders and large targets, while for non-public target deals it is the opposite. These reversals constitute a sharp test which rejects the leading recent explanations for size effects. A simple scaling framework in which size magnifies a given per-dollar value gain can parsimoniously explain the data. Results from quantile regressions, which document within-subsample reversals, provide additional support for the scaling model. Our results suggest a major shift in interpreting the role size plays for bidder returns: size scales, it does not proxy.

JEL Classification: G34, G14
Keywords: Mergers and Acquisitions, Size Effects, Scaling.

1. Introduction

Bidder and target size are key variables for explaining bidder announcement returns in corporate takeovers. In a recent survey of the large M&A literature, Betton, Eckbo, and Thorburn (2008) write that bidder size is one of the two key drivers of bidder announcement returns. The large impact of bidder size was first highlighted in an influential study by Moeller, Schlingemann, and Stulz (2004), who find that the difference in three-day bidder announcement returns between large and small bidders is 2.2%. Target size is similarly important: Alexandridis, Fuller, Terhaar, and Travlos (2013) find that the difference in bidder returns between large and small target deals is 2.4%. While there is broad agreement on the fact that size matters for bidder returns, there is much less consensus on why it matters so much. The existing literature has offered a limited number of explanations, but there are only a few attempts to distinguish between those explanations, and almost no attempt to provide a unifying framework for understanding how bidder and target size affect bidder returns. Given the importance of takeovers for our economy, and given the importance of size for the value created in these takeovers, we know remarkably little about the economic forces behind size effects. Our paper provides new evidence on why size matters so much for bidder returns. There are three main contributions. First, we devise a simple but powerful empirical test to evaluate candidate explanations for size effects. Second, we show the test rejects all leading explanations in the recent literature we consider. Third, we propose, test, and find evidence for a novel unifying framework for understanding bidder and target size effects. Our paper proposes a fundamentally different view on the role size plays for bidder returns. In the recent literature, size is usually seen as a proxy variable for some underlying value driver.
For example, bidder size is suggested to proxy for managerial overconfidence and target size for deal complexity, both of which affect bidder returns negatively. By contrast, we propose that, as a first-order approximation to the data, size should not be thought of as a proxy. Rather, size is size: it scales a given per-dollar gain or loss. This distinction is crucial: in our framework, size determines how much value is created or destroyed, but not whether value is created in the first place. Size is therefore a centrally important variable, but for entirely different

reasons than previously thought. We present empirical and theoretical arguments to support this new perspective. Our paper has three parts. In the first part of the paper, we present the facts in the data, based on a comprehensive set of about 28,000 acquisitions by US bidders from 1981 to 2014 from SDC. A key feature of our empirical design is that we split the data by public and non-public targets. Splitting the sample in that way is motivated by empirical relevance: the non-public target subsample represents more than 85% of all observations, while the public target subsample (the most widely studied sample in the M&A literature) represents about 64% of total dollars spent on acquisitions. It is also motivated by relevance for the literature: because many studies restrict themselves to public targets, our results can be directly related to those previous findings. Our central empirical result is that there are strong reversal patterns in the data across the subsamples: (i) for non-public target deals, holding constant the size of the target, larger bidders are associated with lower bidder returns, and, holding constant the size of the bidder, larger targets are associated with higher bidder returns; and (ii) for public target deals, all signs reverse. These patterns are very robust. They show up in sorts and regressions, and they are not driven by particular industries or time periods. Because each subsample is economically so important, we argue that a good theory of the impact of size on bidder returns needs to explain these pervasive facts in the data. A main conceptual point of our paper is to propose that we can use these reversals as a simple but sharp test of existing explanations for size effects in the recent M&A literature. In the second part of the paper, we show that all leading explanations for size effects in the recent M&A literature we consider fail the reversal test, because they do not predict signs to flip across the subsamples.
For example, prior work suggests the bidder size effect may be driven by managerial overconfidence, under the assumption that managers in larger firms are more overconfident, and that more overconfident managers are more likely to make bad deals (e.g., Moeller, Schlingemann, and Stulz (2004)). But under the overconfidence hypothesis, smaller firms should also make better acquisitions when they acquire a public target. Since we find precisely the opposite, the reversal test rejects the hypothesis that size matters because it proxies for overconfidence.¹

¹ To avoid misunderstandings: we are not arguing overconfidence is not a relevant driver of bidder returns, nor do we argue that overconfidence is irrelevant for understanding takeovers in general. What we argue is that overconfidence does not seem to be what drives the size-bidder return patterns in the data.

The results from the reversal test therefore deepen the puzzle: if existing explanations are not driving

size effects, then why is it that size matters so much for bidder returns? In the third part of the paper, we propose, test, and find evidence for a simple scaling framework of announcement returns to explain the otherwise puzzling reversal patterns. The framework posits that bidder and target size magnify a given per-dollar gain or loss in a given takeover, and rests on a straightforward but powerful economic mechanism. Assuming a bidder pays a price of P for the target firm's assets, and that each dollar invested by the bidder yields a net present value of R, the bidder announcement return is R × (P/A), where A is the market value of bidder assets. If R > 0, bidder announcement returns are positive, increase in target size, and decrease in bidder size. For value-destroying deals with R < 0, those patterns reverse. Scaling can explain both (i) the opposing signs on bidder and target size within the public and non-public deal subsamples, respectively, and (ii) the sign-flip of target and bidder size coefficients across the public and non-public deal subsamples. The first part follows from the fact that announcement returns depend on P/A, which means bidder and target size have opposing signs, by construction. The second part follows from the fact that public deals have on average negative announcement returns (−1.4% in our sample), and that non-public deals have on average positive announcement returns (+1.4% in our sample). Hence, since R has a different sign for public and non-public deals, bidder and target size have an impact of opposite sign across those deal types. Scaling is a good model in a Friedman (1953) sense: it is maximally parsimonious and still explains the first-order effects in the data that are at odds with explanations in the existing literature.
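The scaling mechanism can be illustrated in a few lines of code (a hypothetical numerical sketch of the R × (P/A) formula above, not the paper's own code):

```python
def bidder_car(r, price, bidder_assets):
    """Bidder announcement return under the scaling framework: R * (P / A),
    where R is the per-dollar NPV, P the price paid, A bidder asset value."""
    return r * (price / bidder_assets)

# Value-creating deal (R > 0): returns increase in target size
# and decrease in bidder size.
assert bidder_car(0.10, 200, 1000) > bidder_car(0.10, 100, 1000)  # larger target
assert bidder_car(0.10, 100, 2000) < bidder_car(0.10, 100, 1000)  # larger bidder

# Value-destroying deal (R < 0): both comparisons flip sign.
assert bidder_car(-0.10, 200, 1000) < bidder_car(-0.10, 100, 1000)
assert bidder_car(-0.10, 100, 2000) > bidder_car(-0.10, 100, 1000)
```

With R held fixed, size only magnifies the gain or loss; it never changes its sign.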
Since the model does not take a stand on the origins of the sign of R, the scaling effects we uncover are likely important for a broad range of merger theories, including neoclassical, agency-based, or behavioral ones, which is an advantage of the approach. As a notable special case, scaling is consistent with the neoclassical benchmark case in which buyers pay on average the fair price for the assets they acquire, and in which deviations from R = 0 occur by pure chance. Irrespective of what drives the sign of R, the size variables can substantially magnify per-dollar value gains or losses for all deals in which R ≠ 0. The scaling framework makes a sharp additional prediction: the sign reversals should also occur within public and non-public deals, and not just across these categories. We test this prediction using quantile regressions and find results strongly consistent with the scaling model. For both deal types, the signs of the bidder and target size coefficients show the same pattern: for low bidder

returns, the target size coefficient is negative and the bidder size coefficient is positive, while we find the opposite pattern for high bidder returns. The quantile regressions therefore support the view that a general economic mechanism (scaling) governs how size relates to returns for any type of deal. Hence, the differences between the non-public and public deal subsamples in the OLS regressions are due to a different percentage of value-generating deals in those subsamples, but not due to a different relation between size and bidder returns per se. The quantile regressions show that the signs on bidder and target size flip at the same point of the bidder return distribution. This simultaneous sign-flip is a rather bold prediction of the scaling model, which sets it apart from other explanations in the literature. Seeing the prediction confirmed in the data is thus a particularly strong piece of evidence supporting the scaling view. Overall, the within-subsample reversals make it unlikely that our results obtain because of omitted variables, or because non-public and public target deals are selected subsamples. And the results raise the bar for alternative interpretations of our findings considerably, because there is yet another sign-flip to be explained. There is an important corollary to our findings. A prominent stylized fact in the literature, which continues to influence both empirical and theoretical studies on takeovers, is that smaller bidders make better acquisitions (e.g., Moeller, Schlingemann, and Stulz (2004), Betton, Eckbo, and Thorburn (2008), Gorton, Kahl, and Rosen (2009)). However, Alexandridis, Fuller, Terhaar, and Travlos (2013) show that, in their sample, larger bidders make better takeovers. These findings appear to be squarely at odds with each other.
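The quantile-regression test of the scaling model described above can be sketched as follows (an illustrative simulation using statsmodels' quantile regression; all variable names and parameter values are hypothetical and the data are simulated under the scaling assumption, not taken from the paper):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
deals = pd.DataFrame({
    "log_bidder_size": rng.normal(6.0, 1.5, n),
    "log_target_size": rng.normal(4.0, 1.5, n),
})
# Per-dollar NPV shock R, positive for some deals and negative for others.
r = rng.normal(0.0, 0.05, n)
rel_size = np.exp(deals["log_target_size"] - deals["log_bidder_size"]).clip(upper=1.0)
deals["acar"] = r * rel_size  # scaling: CAR = R * (P / A)

# Size coefficients at a low and a high quantile of the ACAR distribution.
lo = smf.quantreg("acar ~ log_bidder_size + log_target_size", deals).fit(q=0.1)
hi = smf.quantreg("acar ~ log_bidder_size + log_target_size", deals).fit(q=0.9)

# Scaling predicts a simultaneous sign flip across quantiles.
assert lo.params["log_target_size"] < 0 < hi.params["log_target_size"]
assert hi.params["log_bidder_size"] < 0 < lo.params["log_bidder_size"]
```

Because size only stretches the return distribution, the same size variables widen losses at low quantiles and widen gains at high quantiles, producing the sign flip within a single sample.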
Adopting a scaling interpretation of size effects, we can reconcile the two sets of findings: the stylized fact is derived in samples dominated by non-public targets (e.g., Moeller, Schlingemann, and Stulz (2004)), while Alexandridis, Fuller, Terhaar, and Travlos (2013) work exclusively on a subsample of public targets. Scaling predicts a negative sign on bidder size for non-public target deals and a positive sign for public target deals. Hence, while puzzling for the size-as-proxy view, the two sets of results are perfectly compatible under the scaling perspective. In sum, the results of our paper have several important implications for the M&A literature. First, to understand why size matters so much for bidder returns, bidder and target size should not be thought of as proxies for some underlying value drivers. They are best interpreted as scaling variables. Second, bidder and target size are not distinct variables that need separate

theories linking them to bidder returns. Scaling provides a unifying framework. Third, there is no stable correlation between size and bidder announcement returns in the data: the reversals show correlations flip signs across and within different subsets of the data. Hence, there are no simple stylized facts researchers should appeal to in all merger contexts. To the best of our knowledge, ours is the first paper to put the scaling role of size front-and-center, and ours is the first paper to uncover the reversals and link them to scaling. But our paper is by no means the first one to ever suggest size may play a scaling role. In particular, our paper is related to an early study on bidder gains by Asquith, Bruner, and Mullins (1983), who analyze 156 merger bids from 1963 onwards. Close in spirit to our approach, they suggest that relative size may matter for bidder returns. In contrast to our paper, they do not look at bidder and target size separately and can therefore not document the reversals we show exist in the data. Our paper extends Asquith, Bruner, and Mullins (1983) along several dimensions. First, we analyze a much larger sample of about 28,000 deals over the period from 1981 to 2014. Second, by analyzing the impact of bidder and target size separately, we have an increased ability to say something about the inner workings of the scaling mechanism. Third, we generate new predictions about how relative size impacts bidder returns with different signs in different subsamples, and we document the associated reversal patterns in the data using quantile regressions. Fourth, because we show size effects are offsetting for high and low bidder returns, the true importance of size and relative size is understated when running simple OLS regressions. Hence, our new results imply that the relative size effect in Asquith, Bruner, and Mullins (1983) may be even more important than their OLS regressions suggested.
Our paper is also related to prior work by Alexandridis, Fuller, Terhaar, and Travlos (2013). Like us, they find that larger bidders and smaller targets are associated with higher bidder returns for public target deals. However, the scope of their study and the implications they draw are very different from ours. First, they exclusively analyze public targets. They therefore do not document the reversals in signs between public and non-public deals which are at the heart of our paper, and which motivate our scaling approach. Second, they propose a theory for how target size is related to bidder returns based on the idea that size may proxy for deal complexity. We find the complexity explanation lines up with the data for the subsample of public target deals they study. But we also find the patterns for the non-public target subsample are opposite from what the complexity

theory would predict. Our new reversal evidence therefore presents a challenge for that theory. We discuss additional work on the relation between size and bidder returns below.

2. Data

The data we use are standard in the literature on takeovers.

Merger Sample. Our initial sample consists of all takeover bids by public US bidders for public and non-public US targets listed in the Thomson Reuters SDC database from January 1, 1981 to December 31, 2014. We require that the bidder owns less than 15% of the target before the announcement and more than 80% after the transaction is completed. We exclude deals with missing deal value, penny stocks, repurchases, recapitalizations, rumored deals, and target-solicited deals. Following Moeller, Schlingemann, and Stulz (2004), we eliminate deals with deal value below $1 million and deals with relative size (deal value over bidder market capitalization) smaller than 1%. We obtain stock price data from CRSP and balance sheet data from Compustat.

Variable Definitions. We calculate bidder and target cumulative abnormal returns over a three-day window around the announcement. Abnormal returns are determined relative to a market model estimated over days −280 to −31. Bidder size is measured by market capitalization, defined as price (CRSP: PRC) times shares outstanding (CRSP: SHROUT), at the last fiscal year end before the announcement. Target size is measured for public and non-public targets by deal value (as reported in SDC), but we have verified that our main results also obtain when we replace deal value by market capitalization before the announcement (which is available only for public targets). All variables denominated in US$ are adjusted for inflation and expressed in 2014 constant US$. We control for a set of standard variables in our regressions (e.g., Baker, Pan, and Wurgler (2012), Moeller, Schlingemann, and Stulz (2004)).
We control for the return on assets, defined as EBITDA (Compustat: EBITDA) over total assets (Compustat: AT), and the book-to-market ratio, defined as book equity divided by market capitalization, where book equity is total shareholders' equity (Compustat: SEQ) plus deferred taxes and investment tax credit (Compustat: TXDITC) minus the redemption value of preferred stock (Compustat: PSTKRV). All these variables are

based on the bidder's last fiscal year end before the announcement. We control for a set of deal characteristics obtained from SDC, including dummy variables indicating payment through stock only or cash only, tender offers, hostile takeovers, conglomerate mergers (mergers in which the bidder is in a different 2-digit SIC code industry than the target), and competed deals (with more than one bidder). We also include a dummy variable indicating new economy firms (classified by SIC codes 3570 to 3579, 3661, 3674, 5045, 5961, or 7370 to 7379), and the number of transactions in the same 2-digit SIC code industry and year, to control for periods of heightened M&A activity, in all our regressions. We include additional fixed effects for industry, year, and industry year in our regressions where appropriate.

Summary Statistics. Tables 1 and 2 present summary statistics. Table 1 shows, for each year in our sample, the total number of deals, the number of deals involving non-public targets (both private firms and subsidiaries), the number of deals involving public targets, as well as the associated average bidder ACARs. Table 2 shows descriptive statistics for the main variables used in our analysis. One notable fact, which will turn out to be relevant later in our paper, is that, for non-public deals, the average ACAR is a positive 1.4%, while for public deals, ACARs are on average a negative 1.4%.

3. The Relation Between Bidder Returns and Firm Size

In this section we present the relation between bidder size, target size, and bidder announcement returns in the data. A key feature of our empirical design is that we analyze the subsamples of public targets and non-public targets separately. We document striking differences in the size-bidder return patterns across the subsamples, which are informative for understanding which explanations capture the role size plays for bidder returns.
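The three-day ACAR construction from the Data section can be sketched as follows (a minimal illustration on simulated returns; the event window of days −1 to +1 is our assumption for the "three-day window", and all inputs are made up):

```python
import numpy as np

def three_day_car(stock, market, t0):
    """Three-day CAR around announcement day t0 (event window [-1, +1]),
    with market-model alpha and beta estimated over days t0-280 to t0-31."""
    est = slice(t0 - 280, t0 - 30)            # days -280 .. -31 relative to t0
    beta, alpha = np.polyfit(market[est], stock[est], 1)
    ev = slice(t0 - 1, t0 + 2)                # days -1, 0, +1
    return np.sum(stock[ev] - (alpha + beta * market[ev]))

# Simulated example: a beta = 1.2 stock with a +2% abnormal move at t0.
rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, 400)
stock = 1.2 * market + rng.normal(0.0, 0.003, 400)
t0 = 350
stock[t0] += 0.02

car = three_day_car(stock, market, t0)
assert 0.005 < car < 0.035  # roughly recovers the injected abnormal return
```

The sketch only shows the mechanics: the estimation window stops a month before the event so the announcement itself does not contaminate the market-model fit.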
3.1 The Impact of Bidder and Target Size for Non-Public Targets

We start by describing the data for non-public targets. Panel A of Table 3 sorts bidder cumulative announcement returns (ACARs) into five bidder size groups. Bidder returns decline monotonically when going from the smallest to the largest bidders. The difference between the smallest and largest

bidder quintiles is 2.5 percentage points (t-stat = 12.8) and therefore economically substantial. Since non-public targets are so plentiful (86% of all observations in the full data set are from non-public targets), the same bidder size effect also obtains for the full sample of public plus non-public targets (unreported). Panel B sorts ACARs by target size. There is a weaker, but still significant, tendency of ACARs to increase in target size in this simple sort. The difference between the smallest and largest target size quintiles is 0.8 percentage points (t-stat = 4.3). A potential issue with univariate sorting is the strong correlation between bidder and target size in the data, which is also apparent from the size numbers across quintiles shown in Panels A and B. Hence, sorting by one size variable means we are also, implicitly, sorting on the other one. To get a clearer picture of the incremental impact of target and bidder size, Panels C and D use double-sorts. In Panel C, we first sort the sample into target size quintiles, and then, within each quintile, we sort on bidder size. The bidder size effect, i.e., smaller bidders make better deals, gets even stronger in this case. Panel D repeats the exercise reversing the order of sorting. A clear pattern emerges: bidder returns increase with the size of the target. At 2.7 percentage points (t-stat = 12.9), this difference is significant both statistically and economically. Hence, the correlation between bidder and target size in the univariate sorts masked the substantial impact target size has on bidder returns. The size averages across groups in Panels C and D show that the double-sort does not perfectly remove the correlation between target and bidder size. We therefore, in a next test, regress ACAR on a full set of bidder size quintile dummies (without a constant) and demeaned target size.
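The double-sort can be sketched with pandas (simulated data and hypothetical column names; only the mechanics of the sort are shown, not the paper's actual code or results):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 2000
deals = pd.DataFrame({
    "bidder_size": np.exp(rng.normal(6.0, 1.5, n)),
    "target_size": np.exp(rng.normal(4.0, 1.5, n)),
    "acar": rng.normal(0.01, 0.08, n),
})

# First sort into target size quintiles, then into bidder size quintiles
# within each target quintile (the Panel C ordering).
deals["tgt_q"] = pd.qcut(deals["target_size"], 5, labels=False)
deals["bid_q"] = (deals.groupby("tgt_q")["bidder_size"]
                  .transform(lambda s: pd.qcut(s, 5, labels=False)))

# Average ACAR per bidder size quintile, holding target size roughly fixed.
table = deals.groupby(["tgt_q", "bid_q"])["acar"].mean().unstack()
spread = table.mean(axis=0)  # one value per bidder size quintile

assert table.shape == (5, 5)
assert len(spread) == 5
```

Sorting within target quintiles removes part of the bidder-target size correlation, which is why the within-quintile bidder-size spread can differ sharply from the univariate sort.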
Figure 1, Panel (a) shows that removing the correlation between bidder and target size by controlling for target size further increases the effect of bidder size. Deals by large acquirers (top quintile) have ACARs more than 4 percentage points lower than deals done by small bidders. Panel (b) shows that the pattern for target size quintiles is the mirror image of the pattern for bidder size quintiles in Panel (a). Table 5 presents results from regressions of ACARs on bidder and target size, as well as a set of standard variables used in the literature. Specifications (1) and (2) present the full sample results (non-public and public targets), and specifications (3) and (4) show results for the non-public subsample (we comment on specifications (5) and (6) in the next section). Across all specifications (1) to (4) there is a strong negative association between bidder size and bidder returns, and specifications (2) and (4) show there is also a strong positive association between target size and bidder returns. The results are thus consistent with our sorting evidence and show that the size-bidder return patterns in the sorts are not induced by deal characteristics, firm characteristics, or industry characteristics. The regressions in Panel A include year and industry fixed effects and therefore remove potentially confounding time-invariant variation along those dimensions. One specific concern could be that mergers cluster by industry and year, and that our patterns are driven by time-varying industry-level factors, which our year and industry fixed effects are not sufficiently controlling for. To address this, we repeat the tests from Table 5, Panel A, but now include bidder industry year fixed effects and target industry year fixed effects. Specifications (1) to (4) of Table 5, Panel B, show that our results are effectively unchanged. In sum, there are three important takeaways from this section. First, bidder and target size are highly correlated. To accurately measure the incremental impact of one size variable, one needs to control for the other. Second, once the correlation is removed, the size-return patterns for bidder size are the mirror image of the patterns for target size. Third, the stylized fact that smaller bidders make better acquisitions in terms of ACAR is a robust feature of this subspace of the merger universe.

3.2 The Impact of Bidder and Target Size for Public Targets

We now turn to the subset of deals with public targets. Panel B of Table 2 shows that out of a total of about 28,000 deals in our sample, only about 4,000 involve public targets. Hence, the average deal is one involving a non-public target. However, public deals represent about 64% of the dollars spent in takeovers in our sample (Table 2, Panel B), so the average dollar is spent on a public target.
The difference comes from the fact that deal values for public targets are more than ten times larger than deal values in non-public target deals. Economically, the subset of public deals is therefore of central importance. Further adding to the relevance of understanding size patterns in this subset of the data is the fact that many studies in the M&A literature restrict themselves to samples that include only public targets. Table 4 repeats the sorting exercise from Table 3, but this time with public deals. In the simple sort by bidder size, in Panel A, the bidder size effect we previously found for non-public deals is

substantially weaker. However, as noted above, one issue with the simple sorts is that bidder and target size are highly correlated. And the per-quintile size averages of target and bidder size in Panel A suggest this correlation may be even more problematic for public deals. Therefore, we double-sort in Panel C. If we sort by bidder size within target size quintiles, and therefore remove some of the confounding positive correlation with target size, the sign on bidder size flips from negative to positive. Now, larger bidders make better acquisitions, with a significant difference of 0.86 percentage points between the extreme quintiles (t-stat = 2.1). This finding is remarkable, because it suggests a fundamental difference in how bidder size affects bidder returns across the non-public target deal subsample and the public target deal subsample: bidder size affects bidder returns significantly, but with opposite sign. More refined tests strengthen this conclusion. First, the size averages across groups in Panel C show that the double-sort reduces, but does not perfectly remove, the correlation between bidder and target size. To better measure the true incremental effect of bidder size, we regress ACAR on a full set of bidder size quintile dummies (without a constant) and demeaned target size. Figure 1, Panel (c) presents the results, which show that the effects from the double-sort become even more pronounced. While the pattern is not perfectly monotonic, it is obvious from the data that the stylized bidder size effect, by which smaller bidders make better acquisitions, is not present for public deals. Instead, we find again that larger bidders make better acquisitions of public targets. Regressions which control for firm, deal, industry, and year characteristics support these findings.
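A regression of this form can be sketched as follows (simulated data with hypothetical variable names; the positive bidder-size and negative target-size coefficients are wired into the simulation to mimic the public-target pattern, and are not estimated from real data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 3000
df = pd.DataFrame({
    "log_bidder": rng.normal(7.0, 1.5, n),
    "log_target": rng.normal(5.0, 1.5, n),
    "industry": rng.integers(0, 10, n),
    "year": rng.integers(1981, 2015, n),
})
# Simulated public-target pattern: larger bidders and smaller targets
# are associated with higher ACARs.
df["acar"] = (0.004 * df["log_bidder"] - 0.006 * df["log_target"]
              + rng.normal(0.0, 0.05, n))

# Both size variables plus industry x year fixed effects.
fit = smf.ols("acar ~ log_bidder + log_target + C(industry):C(year)", df).fit()
assert fit.params["log_bidder"] > 0 > fit.params["log_target"]
```

The key design point, as in the paper's Table 5, is that both size variables enter the regression together, so each coefficient measures an incremental effect.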
Table 5, Specification (5) shows that, without controlling for target size, the coefficient on bidder size is negative for the public target deal subsample, consistent with the results for the full sample and non-public deals (specifications (1) and (3)). However, once we introduce the control for target size, the coefficient reverses and becomes statistically significant with the opposite sign. This again highlights that controlling for target size is critically important for determining the true incremental impact of bidder size on bidder returns.²

² Some previous work controls for relative size instead of target size. However, in such a regression relative size does not appropriately control for the impact of target size. Rather, the coefficient on bidder size reflects the impact of a change in bidder size keeping relative size fixed, which can only happen if target size changes simultaneously with bidder size. Hence, rather than removing the impact of target size from the bidder size coefficient, controlling for relative size forces the bidder size coefficient to reflect properties of target size, which makes interpreting the outcome of such a regression hard. If the goal is separating the incremental effect of bidder and target size on bidder returns, including both bidder and target size in the regression is the correct approach. We further discuss relative size in Section 5.3 below.

Panel B of Table 5 shows that including bidder industry year fixed effects and target industry

year fixed effects makes the result even stronger. Comparing deals in the same industry-year cell of our data via our fixed effects almost doubles the coefficient on bidder size and increases its statistical significance substantially (t-values are now above 3.5). The effect is also economically large. For example, the results in specification (5) of Table 5, Panel B, imply that a one standard deviation increase in bidder size increases ACARs by 0.9%. Looking at target size reveals a similarly striking difference between the non-public and public subsamples. While larger targets were associated with higher ACARs for non-public targets, they are associated with smaller ACARs for public targets. These patterns show up strongly in simple sorts (Table 4, Panel B) and double-sorts (Table 4, Panel D), where large targets are associated with 2.4 and 1.9 percentage points lower ACARs, respectively. They also show up strongly in Figure 1, Panel (d), where we regress ACAR on target size quintiles while controlling for bidder size. Finally, they show up strongly in the multivariate regressions in Table 5, Panel A, specification (6), and in Panel B, specifications (5) and (6), where we control for bidder industry year fixed effects and target industry year fixed effects.

3.3 Key Result: Reversals

The main result from our new look at the data is that there are strong reversal patterns, both within and across subsamples. Figure 1 summarizes the reversals succinctly. Within subsamples, we observe that bidder returns increase in target size whenever they decrease in bidder size, and vice versa (compare Panel (a) with Panel (b) and Panel (c) with Panel (d)). Across subsamples, we observe that the bidder and target size patterns are mirror images: while, for non-public target deals, bidder returns decrease in bidder size, bidder returns increase in bidder size in the public target deal subsample (compare Panel (a) with Panel (c)).
And while, for non-public target deals, bidder returns increase in target size, bidder returns decrease in target size in the public target deal subsample (compare Panel (b) with Panel (d)). As we have shown above, these are pervasive facts in the data. The reversals show up in sorts, across years, are robust to controlling for a battery of deal and firm characteristics, and also obtain when we compare, in a given year, deals by same-industry acquirers or deals by same-industry targets. Documenting the reversals in the data is the first main contribution of our paper. In a next

step, we show the reversal patterns allow us to make substantial progress on understanding why size matters so much for bidder returns.

4. Using Reversals to Test Existing Explanations in the Literature

The public and non-public target deal samples are major subspaces of the takeover universe. Because each subsample is so important, a good model of the impact of size on bidder returns should make useful predictions for both subsamples. However, as we now show, none of the leading recent theories in the literature we look at can. Intuitively, a main reason is that most existing work focuses only on one panel from Figure 1. (We focus on Figure 1 for ease of exposition only. We have shown above that the patterns in that figure hold up to more rigorous tests and many additional controls.) For example, some studies derive the relation between target size and bidder returns using a sample of public targets, which means they are focusing on Panel (d). While such studies are internally valid on the data they are designed to explain, external validity, i.e., making successful predictions also for other panels in Figure 1, is crucially important. To see why, consider the numerous empirical studies in the literature that work on the subset of public target deals from SDC. If such a study relies on the stylized fact that smaller bidders make better acquisitions, it gets the sign of the size-bidder return relationship precisely wrong. The reason is that the study uses the results from Panel (a) in Figure 1, while working on the subsample described by the patterns in Panel (c). On conceptual grounds, external validity is desirable because it is unsatisfactory if we need four different theories to explain the four panels of Figure 1. The second main contribution of this paper is therefore to investigate whether existing explanations in the literature are externally valid in the above sense.
We propose that we can use the reversals as a simple but sharp test of leading recent explanations for size effects in the literature. The test is powerful, because all size-bidder return relations in Figure 1 are significant with opposite signs across the public and non-public target subsamples, and because sign-flips are very hard to generate for most theories. We identify three leading recent explanations in the literature, two on bidder size and one on target size effects. As we have noted in the introduction, while the economic impact of size on

bidder returns is huge, the number of available explanations in the literature is rather small. Perhaps the most influential explanation for bidder size effects is managerial overconfidence. According to that hypothesis, overconfident managers make worse deals on average. And, because of the self-serving attribution bias, managers in large firms are likely more overconfident on average. In their seminal study on size effects, Moeller, Schlingemann, and Stulz (2004) note that overconfidence may be able to explain why smaller bidders make better acquisitions, i.e., overconfidence may explain the pattern shown in Panel (a) of Figure 1.[3] The reversals pose a challenge to the overconfidence explanation: if managers in larger firms are more overconfident, then larger bidders with more overconfident managers should also make worse acquisitions when they acquire public targets. The fact that larger bidders make better acquisitions when they acquire public targets (Panel (c) in Figure 1) cannot be explained by managerial overconfidence unless one believes managers in large firms are more overconfident than small-firm managers when they acquire a non-public target, but less overconfident than small-firm managers when they acquire a public target. This is implausible. We conclude that the overconfidence explanation is rejected by the reversal test. Let us be clear on why the reversal test is powerful. In a strict mathematical sense, finding one counterexample is sufficient to reject a statement as true. However, because all models are simplifications, and therefore wrong sometimes, finding one empirical counterexample is in general not enough to reject an economic theory. What we must care about is whether a proposed economic mechanism fails to explain an economically significant part of the data.
The value of the reversal test lies precisely there: the non-public and public target subsamples are both widely studied and economically important in terms of number of deals and money invested. So our contribution is not to show that a proposed explanation in the literature, like overconfidence, fails to describe some opaque corner of the data. Our contribution is to show those explanations fail to describe the core of the data.

[3] Moeller, Schlingemann, and Stulz (2004) analyze the impact of bidder size on bidder returns using the full sample of public and non-public targets. But because there are so many non-public target deals (78% in their sample), the full sample reflects the pattern in Panel (a) of Figure 1.

A second prominent explanation for bidder size effects is that bidder size may proxy for agency problems, because being large works like an anti-takeover device. This explanation has influenced both empirical and theoretical work (e.g., Masulis, Wang, and Xie (2007), Gorton, Kahl, and Rosen (2009)). Under this hypothesis, larger bidders would make worse acquisitions on average, because large bidder deals are more likely to be motivated by managerial objectives other than shareholder value creation (for example, empire building, or defensive mergers which help secure a manager's job). This is a reasonable conjecture, which would explain Panel (a) in Figure 1. But, if it is true for acquisitions of non-public targets, it should also be true for acquisitions of public targets. Panel (c) in Figure 1 shows this is not the case. Therefore, the reversal test rejects the agency explanation. A third explanation we consider is on target size. Alexandridis, Fuller, Terhaar, and Travlos (2013) show that target size is negatively related to bidder returns in their sample. Since they restrict their sample to public targets, Panel (d) in Figure 1 captures their results. They propose an explanation for the negative relation between target size and bidder returns based on the idea that target size proxies for deal complexity, and that more complex deals are likely to be less value generating for bidders. It is certainly plausible to assume that larger deals would be more complex. But, if true, then large deals should also be more complex than small deals in the subsample of non-public target deals. The complexity argument would therefore predict lower returns for large deals also for the non-public target deal subsample. However, Panel (b) in Figure 1 shows this is the opposite of what we see in the data. The complexity explanation is therefore not consistent with the reversals. A general pattern emerges. In all the above explanations, size acted as a proxy for some underlying value driver (managerial overconfidence, agency problems, or complexity). And all explanations rest on an ex ante plausible mechanism that links size to that value driver.
In each case, that mechanism puts a sign on the expected correlation between size and the value driver, and therefore a sign on the correlation between size and bidder returns. But, crucially, no explanation predicts that the sign on the correlation changes across public and non-public subsamples. In fact, there is a paradox: the more plausible the ex ante link between size and some hidden value driver, the less likely it is that we should expect a sign reversal across subsamples. We therefore suspect that any reasonable theory which uses size as a proxy for some hidden underlying variable driving merger returns is unlikely to explain the reversals we have shown to exist in the data. In sum, to the best of our knowledge, there is no explanation in the existing literature for the sign-reversals in bidder and target size coefficients we document across public and non-public

subsamples. We emphasize that explanations such as the ones above may well contribute to some of the observed size effects in the data. And we do not suggest these economic forces are irrelevant for merger returns, in general. But we have shown that the proposed mechanisms are by themselves insufficient to explain the first-order effects in the relation between size and bidder announcement returns. It is conceivable that some of the mechanisms above would be able to explain the reversal patterns when combined with a second mechanism, provided that one can tailor the model such that the right effect dominates for the right subsample. While perhaps possible, we believe this is non-trivial, and we have not seen it done in the literature. In any case, however, Occam's Razor suggests that we should prefer one parsimonious model which explains the four panels in Figure 1 using a single underlying economic mechanism to a more complex model which needs different mechanisms to explain different parts of the data. We propose such a model in the next section.

5. A Simple Scaling Explanation

In this section we present a simple scaling explanation of merger returns which predicts the size patterns we documented earlier. Our aim is to capture the first-order effects in the data using one underlying economic mechanism in a framework that is maximally parsimonious.

5.1 The Framework

Let A be the market value of equity of a bidding firm before the takeover announcement. The bidder pays a price of P for the target. We refer to A and P as bidder and target size, respectively. Each dollar invested by the bidder is assumed to yield a net present value of R. R can be deal-specific and depend on a range of factors, but not on size itself. After the deal, the value of the bidder is then:

A_post = A + NPV(Deal)    (1)
       = A + R · P.    (2)

The percentage change in bidder value due to the takeover is therefore given by:

ACAR ≡ (A_post − A) / A = R · (P/A).    (3)

If equation (3) holds for any deal, we get for any sample of deals:

E(ACAR | P, A) = E(R) · (P/A)    (4)
               = E(R) · exp(p − a),    (5)

where p and a denote logs. Equation (4) captures the main predictive content of the model. First, the sign of the average bidder return is equal to the sign of E(R) in the sample. Second, if E(R) > 0, then bidder returns increase in target size p, and decrease in bidder size a. Third, all signs flip when E(R) < 0. There are additional predictions we discuss below. Given these predictions, the reversal patterns in the data are no longer puzzling. For the sample of public target deals, Table 2, Panel A shows a negative average bidder return of −1.4%, which, in the model, implies that E(R) is negative. The model then predicts bidder returns are related negatively to target size and positively to bidder size, consistent with what we have shown in Table 5, specification (6), and in Panels (c) and (d) in Figure 1. Conversely, in the sample of non-public targets, the average ACAR is +1.4%, which implies E(R) > 0. Hence, the model predicts a reversal in signs, compared with the public target sample, which explains the patterns in Table 5, specification (4), and in Panels (a) and (b) in Figure 1. The intuition is simple. A bidder who loses R cents on every dollar invested will lose more if it invests more. Hence, for a given size of the bidder, a larger target implies a larger percentage decrease in the value of the bidder if R is negative. If we fix target size instead, and therefore the dollar loss from the transaction, the percentage drop in the value of the bidder will be smaller, the larger the bidder. For a bidder that gains R cents on every dollar invested, analogous reasoning implies that larger targets and smaller bidders lead to larger percentage changes in bidder value.
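The comparative statics in equations (3)-(5) can be illustrated with a short simulation. The sketch below is our own illustration, not part of the paper's empirical work: all parameter values, distributions, and variable names are assumptions chosen for exposition. It generates deals with ACAR = R · (P/A), where R is drawn independently of size, and checks that the regression coefficients on log target size (p) and log bidder size (a) flip sign with the sign of E(R).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical log sizes: a = log bidder market value, p = log price paid.
a = rng.normal(5.0, 0.75, n)
p = rng.normal(4.0, 0.75, n)

def simulate_acar(mean_r):
    """ACAR = R * (P/A), equation (3), with deal-specific R independent of size."""
    r = rng.normal(mean_r, 0.01, n)
    return r * np.exp(p - a)

def ols_slopes(y):
    """Regress y on a constant, p, and a; return the slopes on p and a."""
    X = np.column_stack([np.ones(n), p, a])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1], beta[2]

# E(R) > 0, as in the non-public target sample: returns should increase
# in target size and decrease in bidder size.
bp_pos, ba_pos = ols_slopes(simulate_acar(+0.05))

# E(R) < 0, as in the public target sample: both signs should flip.
bp_neg, ba_neg = ols_slopes(simulate_acar(-0.05))

print(bp_pos > 0 and ba_pos < 0)
print(bp_neg < 0 and ba_neg > 0)
```

No explanation-specific structure is imposed on R here; the sign reversals arise purely from the scaling term P/A interacting with the sign of E(R).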
We conclude that scaling provides a parsimonious unified framework for understanding (i) why bidder and target size have opposite effects on bidder returns, and (ii) why the signs for each variable flip across public and non-public deals. What the scaling explanation does is work well in the Friedman (1953) sense: it describes a lot

of the size-bidder return data using very few assumptions. What it does not do is speak to what drives R. This is a feature of the model, and not a bug. Our paper proposes a fundamental shift in perspective on how size affects bidder returns compared with the recent M&A literature. As we discussed above, that literature interprets size as a proxy for some underlying variable which is associated with value creation, such as, for example, overconfidence, private benefits, or complexity. Hence, in those explanations, size influences R. In our model, size determines how much value is created or destroyed, but not whether value is created in the first place. We have set up the model to maximally highlight this difference by assuming R does not depend on size. In our view, this assumption is a natural starting point, and it is informative to see how far this benchmark case can take us in explaining the data. The assumption is also in line with prominent prior models of takeovers in which value creation per unit of capital is modelled as a constant (e.g., Jovanovic and Rousseau (2002), Shleifer and Vishny (2003)). But clearly, the assumption is stricter than needed. The qualitative predictions of the model are unchanged as long as R does not move too much with size. Indeed, empirically, all results in this paper are consistent with the view that potential non-linearities in R with respect to size are second-order effects. In Figure B1 in the Appendix we estimate R directly for different size groups. The results show that assuming R to be independent of size is not an unreasonable first-order approximation to the data. A second implication of our approach is that scaling, as a property, is centrally important for understanding how much value is created in takeovers, irrespective of the underlying driver of the sign of R.
Whether R is determined by neoclassical, agency-based, behavioral, or other mechanisms, size will magnify per-dollar value created, and therefore impact bidder returns. We end this section by discussing an important special case: the simplest neoclassical benchmark in which bidders pay, on average, the fair price for the target, such that E(R) = 0. Allowing for idiosyncratic variation for a given deal, for example random valuation mistakes in preparing the DCF forecasts, leads to a purely idiosyncratic firm-specific mean-zero wedge ε between the actual per-dollar net present value and the estimated per-dollar net present value. The expression for ACAR for each deal then becomes:

ACAR = ε · (P/A).    (6)
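Equation (6) can be illustrated numerically. The following sketch is our own toy example (the error standard deviation and the distribution of relative deal size P/A are arbitrary assumptions, not estimates): with a mean-zero valuation error ε, average announcement returns are close to zero, yet the dispersion of returns still scales with relative deal size.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

eps = rng.normal(0.0, 0.10, n)        # mean-zero per-dollar valuation error
rel_size = rng.uniform(0.05, 1.0, n)  # relative deal size P/A
acar = eps * rel_size                 # equation (6)

# On average, bidders pay the fair price: mean ACAR is approximately zero.
mean_acar = acar.mean()

# But return dispersion grows with relative deal size.
small_sd = acar[rel_size < 0.2].std()
large_sd = acar[rel_size > 0.8].std()

print(mean_acar, small_sd, large_sd)
```

The point of the sketch is that scaling matters for how much value is created or destroyed in any given deal even when it is irrelevant for the average deal.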

This special case is interesting for several reasons. First, it provides a natural justification for assuming R to be independent of size. Second, it shows the scaling property of size can be relevant even if, on average, firms pay the fair price. With idiosyncratic valuation errors, scaling influences value creation as soon as ε is not zero. Given that standard valuation tools at firms' disposal (e.g., DCF, multiples, etc.) are notoriously imprecise, ε may frequently be very large. Hence, scaling may frequently be very important for value creation even in this neoclassical benchmark case.

5.2 Additional Testable Implication: Within-Subsample Sign-Reversals

We have shown above that the sign-flips between public and non-public target subsamples are explained by scaling, but not by existing theories on how size influences bidder returns. A skeptical reader may point out, correctly, that the decision to take over a public or a non-public target is not random. Hence, it is theoretically possible that public and non-public deals differ on some unobservable characteristic which is itself correlated with bidder returns and the size variable. In this case, the reversals we document across subsamples could be spuriously induced by an omitted variable. While theoretically possible, it is not at all obvious what unobserved variable or mechanism could induce signs to flip for both target and bidder size precisely in the direction predicted by scaling. Nevertheless, to be conservative, we propose an additional test to rule out that our empirical patterns obtain because public or non-public deals are special. The test is based on the insight that, because equation (3) holds for each deal, we should also be able to detect reversals within the subset of public and non-public deals.
While we see reversals across subsamples, the qualitative pattern within the subsamples, governed by equation (3), is predicted to be the same: as we move from negative ACARs to positive ACARs, the coefficient on bidder size should flip from positive to negative, and the coefficient on target size should flip from negative to positive. Importantly, and as a particularly high hurdle for our simple model, we predict the signs on bidder and target size to flip at the same point within the ACAR distribution. Because the test is within subsample, its results cannot be driven by an omitted variable related to the endogenous decision to acquire a public or non-public target. More broadly, the test, which is a direct implication of the scaling model, further raises the bar for alternative theories because it poses yet another sign-flip to be explained. We propose two alternative approaches to test for within-sample sign-reversals. The first approach is to suitably define subsamples within subsamples, the second is to analyze the distribution of ACARs directly using quantile regressions.

Evidence from Public Target Cash and Stock Deals. To minimize data mining concerns, we propose to look at the cash and stock payment subsamples within public target deals, which are widely studied throughout the literature. For our present purposes, these subsamples are interesting, because cash deals have small but positive announcement returns (+0.4%), while stock deals have strongly negative returns (−2.7%), as can be seen in Table 2, Panel A. In Table 5, Panel C, we rerun our baseline regression separately for the subsamples of public deals paid for by cash and stock, respectively. As predicted by scaling, we observe a sign flip from negative to positive for the bidder size coefficient if we compare the cash deal with the stock deal subsample (specification (2) vs. specification (4)). Similarly, the target size coefficient flips from positive to negative. This shows that our earlier results are not induced by public deals being associated with a particular size pattern for some unobserved reason. An interesting additional finding, which provides further support for the scaling model, is that both size coefficients are noticeably smaller for the all-cash sample. According to equation (4), effects should be smaller, all else equal, the closer the sample average return is to zero. Hence, the smaller coefficients for cash deals are predictions of the scaling model borne out by the data.

Evidence from Quantile Regressions. Testing for within-subsample reversals based on the idea that sign-flips are determined by whether ACAR is positive or negative is complicated by the fact that ACAR is the dependent variable. The reason is that selection on the dependent variable will in general produce biased OLS estimates of the coefficient of interest (e.g., Heckman (1979), Angrist and Pischke (2008)).
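This selection bias is easy to see in a small simulation. The sketch below is our own illustration with an invented data-generating process, not the paper's data: even when the true slope is known, running OLS only on observations with a positive dependent variable biases the estimated slope toward zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Invented process: y = 1.0 * x + noise, so the true slope is exactly 1.
x = rng.normal(0.0, 1.0, n)
y = 1.0 * x + rng.normal(0.0, 1.0, n)

def ols_slope(xs, ys):
    """Univariate OLS slope of ys on xs."""
    return np.cov(xs, ys)[0, 1] / np.var(xs)

full_slope = ols_slope(x, y)                     # close to the true slope of 1
truncated_slope = ols_slope(x[y > 0], y[y > 0])  # biased toward zero

print(full_slope, truncated_slope)
```

Truncating on the sign of y discards exactly those draws of the error term that would pull the fitted line back to its true slope, which is why grouping deals by the sign of ACAR and running OLS within groups is not a valid test.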
In particular, it is not a valid strategy to (i) group observations into positive and negative ACAR subsamples and then to (ii) run OLS regressions on the subsamples. To avoid this issue, we run quantile regressions (see, e.g., Koenker and Hallock (2001)). Specifically, we estimate:

Q_τ(y_i | X_i) = X_i' β_τ,    (7)

where y_i is the ACAR of the i-th deal, Q_τ(·) is the quantile function conditional on the vector of