Policymakers’ recent focus on how Big Tech should be treated under antitrust law has been accompanied by claims that companies like Facebook and Google hold dominant positions in various “markets.” Notwithstanding the tendency to conflate whether a firm is large with whether it holds a dominant position, we must first answer the question most of these claims tend to ignore: “dominant over what?”

For example, as set out in this earlier Truth on the Market post, a recent lawsuit filed by various states and the U.S. Justice Department outlined five areas related to online display advertising over which Google is alleged by the plaintiffs to hold a dominant position. But crucially, none appear to have been arrived at via the application of economic reasoning.

As that post explained, other forms of advertising (such as online search and offline advertising) might form part of a “relevant market” (i.e., the market in which a product actually competes) over which Google’s alleged dominance should be assessed. The post makes a strong case for the actual relevant market being much broader than that claimed in the lawsuit. Of course, some might disagree with that assessment, so it is useful to step back and examine the principles that underlie and motivate how a relevant market is defined.

In any antitrust case, defining the relevant market should be regarded as a means to an end, not an end in itself. While such definitions provide the basis to calculate market shares, the process of thinking about relevant markets also should provide a framework to consider and highlight important aspects of the case. The process enables one to think about how a particular firm and market operates, the constraints that it and rival firms face, and whether entry by other firms is feasible or likely.

Many naïve attempts to define the relevant market will limit their analysis to a particular industry. But an industry could include too few competitors, or it might even include too many—for example, if some firms in the industry generate products that do not constitute strong competitive constraints. If one were to define all cars as the “relevant” market, that would imply that a Dacia Sandero (a supermini model produced by Renault’s Romanian subsidiary Dacia) constrains the price of Maserati’s Quattroporte luxury sports sedan as much as the Ferrari Portofino grand touring sports car does. This is very unlikely to hold in reality.[1]

The relevant market should be the smallest possible group of products and services that contains all such products and services that could provide a reasonable competitive constraint. But that, of course, merely raises the question of what is meant by a “reasonable competitive constraint.” Thankfully, by applying economic reasoning, we can answer that question.

More specifically, we have the “hypothetical monopolist” (HM) test. This test operates by considering whether a hypothetical monopolist (i.e., a single firm that controlled all the products considered part of the candidate relevant market) could profitably undertake a “small but significant, non-transitory increase in price,” which is why it is commonly known as the SSNIP test.[2]

If the hypothetical monopolist could profitably implement this increase in price, then the group of products under consideration is said to constitute a relevant market. On the other hand, if the hypothetical monopolist could not profitably increase the price of that group of products (due to demand-side or supply-side constraints on its ability to increase prices), then that group of products is not a relevant market, and more products need to be included in the candidate relevant market. The process of widening the group of products continues until the hypothetical monopolist can profitably increase prices over that group.

So how does this test work in practice? Let’s use an example to make things concrete. In particular, let’s focus on Google’s display advertising, as that has been a significant focus of attention. Starting from the narrowest possible market, Google’s own display advertising, the HM test would ask whether a hypothetical monopolist controlling these services (and just these services) could profitably increase prices of these services permanently by 5% to 10%.

At this initial stage, it is important to avoid the “cellophane fallacy,” in which a monopolist firm could not profitably increase its prices by 5% to 10% because it is already charging the monopoly price. This fallacy usually arises in situations where the product under consideration has very few (if any) substitutes. But as has been shown here, there are already plenty of alternatives to Google’s display-advertising services, so we can be reasonably confident that the fallacy does not apply here.

We would then consider what is likely to happen if Google were to increase the prices of its online display advertising services by 5% to 10%. Given the plethora of other options (such as Microsoft, Facebook, and Simpli.fi) customers have for obtaining online display ads, a sufficiently high number of Google’s customers are likely to switch away, such that the price increase would not be profitable. It is therefore necessary to expand the candidate relevant market to include those closest alternatives to which Google’s customers would switch.

We repeat the exercise, but now with the hypothetical monopolist also increasing the prices of those newly included products. It might be the case that alternatives such as online search ads (as opposed to display ads), print advertising, TV advertising and/or other forms of advertising would sufficiently constrain the hypothetical monopolist in this case that those other alternatives form part of the relevant market.
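To make the iterative logic concrete, here is a minimal sketch of how the test might be operationalized using a simple “critical loss” check, a standard companion to the SSNIP framework. Every figure below—the 5% price rise, the 40% margin, the candidate markets, and the estimated sales losses—is a hypothetical assumption for exposition, not an estimate from the Google case or any other matter.

```python
# Illustrative sketch of the hypothetical monopolist (SSNIP) test using a
# simple "critical loss" check. All figures and candidate markets below are
# hypothetical assumptions for exposition, not estimates from any real case.

def critical_loss(price_rise, margin):
    """Fraction of sales the hypothetical monopolist can afford to lose
    before a price rise of `price_rise` (e.g., 0.05 for 5%) becomes
    unprofitable, given a gross margin `margin` (as a fraction of price)."""
    return price_rise / (price_rise + margin)

def sustains_price_rise(estimated_loss, price_rise=0.05, margin=0.40):
    """True if the sales actually lost to products outside the candidate
    market fall below the critical loss, i.e., the price rise is profitable."""
    return estimated_loss < critical_loss(price_rise, margin)

# Candidate markets, from narrowest to broadest, each paired with an assumed
# share of sales that would be lost to products *outside* that candidate.
candidate_markets = [
    ("Google display ads only", 0.30),
    ("+ other online display ads", 0.18),
    ("+ online search ads and other advertising", 0.08),
]

for description, estimated_loss in candidate_markets:
    if sustains_price_rise(estimated_loss):
        print(f"Relevant market found: {description}")
        break
    print(f"{description}: price rise unprofitable, widen the candidate market")
```

The point of the sketch is simply that the candidate market keeps widening until the sales lost to products outside it fall below the critical loss, at which point the hypothetical monopolist’s price increase becomes profitable and a relevant market has been found.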

In determining whether an alternative sufficiently constrains our hypothetical monopolist, it is important to consider actual consumer/firm behavior, rather than relying on products having “similar” characteristics. Although constraints can come from either the demand side (i.e., customers switching to another provider) or the supply side (entry/switching by other providers to start producing the products offered by the HM), for market-definition purposes, it is almost always demand-side switching that matters most. Switching by consumers tends to happen much more quickly than does switching by providers, such that it can be a more effective constraint. (Note that supply-side switching is still important when assessing overall competitive constraints, but because such switching can take one or more years, it is usually considered in the overall competitive assessment, rather than at the market-definition stage.)

Identifying which alternatives consumers do and would switch to therefore highlights the rival products and services that constrain the candidate hypothetical monopolist. It is only once the hypothetical monopolist test has been completed and the relevant market has been found that market shares can be calculated.[3]

It is at that point that an assessment of a firm’s alleged market power (or of a proposed merger) can proceed. This is why claims that “Facebook is a monopolist” or that “Google has market power” often fail at the first hurdle (indeed, in the case of Facebook, they recently have).

Indeed, I would go so far as to argue that any antitrust claim that does not first undertake a market-definition exercise with sound economic reasoning akin to that described above should be discounted and ignored.


[1] Some might argue that there is a “chain of substitution” from the Maserati to, for example, an Audi A4, to a Ford Focus, to a Mini, to a Dacia Sandero, such that the latter does, indeed, provide some constraint on the former. However, the size of that constraint is likely to be de minimis, given how many “links” there are in that chain.

[2] The “small but significant” price increase is usually taken to be between 5% and 10%.

[3] Even if a product or group of products ends up excluded from the definition of the relevant market, these products can still form a competitive constraint in the overall assessment and are still considered at that point.

The slew of recent antitrust cases in the digital, tech, and pharmaceutical industries has brought significant attention to the investments many firms in these industries make in “intangibles,” such as software and research and development (R&D).

Intangibles are recognized to have an important effect on a company’s (and the economy’s) performance. For example, Jonathan Haskel and Stian Westlake (2017) highlight the increasingly large investments companies have been making in things like programming in-house software, organizational structures, and, yes, a firm’s stock of knowledge obtained through R&D. They also note the considerable difficulties associated with valuing both those investments and the outcomes (such as new operational procedures, a new piece of software, or a new patent) of those investments.

This difficulty in valuing intangibles has gone somewhat under the radar until relatively recently. There has been progress in valuing them at the aggregate level (see Ellen R. McGrattan and Edward C. Prescott (2008)) and in examining their effects at the level of individual sectors (see McGrattan (2020)). It remains difficult, however, to ascertain the value of the entire stock of intangibles held by an individual firm.

There is a method to estimate the value of one component of a firm’s stock of intangibles. Specifically, the “stock of knowledge obtained through research and development” is likely to form a large proportion of most firms’ intangibles. Treating R&D as a “stock” might not be the most common way to frame the subject, but it does have an intuitive appeal.

What a firm knows (i.e., its intellectual property) is an input to its production process, just like physical capital. The most direct way for a firm to acquire knowledge is to conduct R&D, which adds to its “stock of knowledge,” as represented by its accumulated stock of R&D. In this way, a firm’s accumulated investment in R&D becomes a stock of R&D that it can use in the production of whatever goods and services it wants. Thankfully, there is a relatively straightforward (albeit imperfect) method to measure a firm’s stock of R&D that relies on information obtained from a company’s accounts, along with a few relatively benign assumptions.

This method (set out by Bronwyn Hall (1990, 1993)) uses a firm’s annual expenditures on R&D (a separate line item in most company accounts) in the “perpetual inventory” method to calculate a firm’s stock of R&D in any particular year. This perpetual inventory method is commonly used to estimate a firm’s stock of physical capital, so applying it to obtain an estimate of a firm’s stock of knowledge—i.e., its stock of R&D—should not be controversial.

All this method requires to obtain a firm’s stock of R&D for this year is the firm’s R&D stock last year and its investment in R&D (i.e., its R&D expenditures) this year. This year’s R&D stock is then the sum of this year’s R&D expenditures and the undepreciated portion of last year’s stock that is carried forward into this year.
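In code, that recursion is only a few lines. The sketch below is purely illustrative: the expenditure figures, the prior-year stock, and the 15% depreciation rate (discussed below) are assumptions for exposition, not data for any actual firm.

```python
# A minimal sketch of the perpetual inventory recursion described above.
# The expenditure figures, prior-year stock, and 15% depreciation rate are
# illustrative assumptions, not data for any actual firm.

def rd_stock_series(expenditures, prior_stock, depreciation=0.15):
    """Return the R&D stock for each year of `expenditures`, where each
    year's stock equals that year's R&D expenditure plus the undepreciated
    stock carried forward from the previous year."""
    stocks = []
    stock = prior_stock
    for spend in expenditures:
        stock = spend + (1 - depreciation) * stock
        stocks.append(stock)
    return stocks

# Hypothetical annual R&D expenditures (in $ millions, say)
expenditures = [100, 110, 125, 140, 160]
print(rd_stock_series(expenditures, prior_stock=400))
```

Here `prior_stock` stands in for the starting-value assumption discussed further below.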

As some R&D expenditure datasets include, for example, wages paid to scientists and research workers, this is not exactly the same as calculating a firm’s physical capital stock, which would use only a firm’s expenditures on physical capital. But given that paying people to perform R&D also adds to a firm’s stock of R&D through the increased knowledge and expertise of its employees, it seems reasonable to include this in a firm’s stock of R&D.

As mentioned previously, this method requires making certain assumptions. In particular, it is necessary to assume a rate of depreciation of the stock of R&D each period. Hall suggests a depreciation rate of 15% per year (compared with roughly 7% per year for physical capital), while estimates presented by Hall, along with Wendy Li (2018), suggest that the figure varies widely across industries and can be as high as 50% in some.

The other assumption required for this method is an estimate of the firm’s initial level of stock. To see why such an assumption is necessary, suppose that you have data on a firm’s R&D expenditure running from 1990 to 2016. This means that you can calculate the firm’s stock of R&D for each year via the formula above, once you have its R&D stock in the previous year.

When calculating the firm’s R&D stock for 2016, you need to know what its R&D stock was in 2015, while to calculate its R&D stock for 2015 you need to know its R&D stock in 2014, and so on backward until you reach the first year for which you have data: in this case, 1990.

However, working out the firm’s R&D stock in 1990 requires data on the firm’s R&D stock in 1989. The dataset does not contain any information about 1989, nor the firm’s actual stock of R&D in 1990. Hence, it is necessary to make an assumption regarding the firm’s stock of R&D in 1990.

There are several different assumptions one can make regarding this “starting value.” You could assume it is just a very small number. Or you can assume, as per Hall, that it is the firm’s R&D expenditure in 1990 divided by the sum of the R&D depreciation and average growth rates (the latter being taken as 8% per year by Hall). Note that, given the high depreciation rates for the stock of R&D, it turns out that the exact starting value does not matter significantly (particularly in years toward the end of the dataset) if you have a sufficiently long data series. At a 15% depreciation rate, more than 50% of the initial value disappears after five years.
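A short sketch, under the Hall-style assumptions just described (15% depreciation, 8% average growth, and a purely hypothetical first-year expenditure figure), shows both how the starting value would be computed and how quickly its influence fades:

```python
# Sketch of the Hall-style starting value and how quickly its influence
# fades. The 15% depreciation and 8% growth rates follow the assumptions
# cited above; the 1990 expenditure figure is purely hypothetical.

depreciation = 0.15
growth = 0.08

# Starting value: the first observed year's R&D expenditure divided by the
# sum of the depreciation and average growth rates.
expenditure_1990 = 100  # hypothetical R&D spend in the first year of data
stock_1990 = expenditure_1990 / (depreciation + growth)
print(f"Assumed starting stock for 1990: {stock_1990:.1f}")

# Share of that starting value still embedded in the stock after n years.
for years in (1, 3, 5, 10):
    remaining = (1 - depreciation) ** years
    print(f"After {years} years, {remaining:.0%} of the starting value remains")
```

At a 15% depreciation rate, only around 44% of the starting value remains after five years, which is why the choice of starting value matters little toward the end of a long data series.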

Although there are other methods to measure a firm’s stock of R&D, these tend to provide less information or rely on stronger assumptions than the approach described above does. For example, sometimes a firm’s stock of R&D is measured using a simple count of the number of patents it holds. However, this approach does not take into account the “value” of a patent. Since, by definition, each patent is unique (with differing numbers of years to run, levels of quality, ability to be challenged or worked around, and so on), it is unlikely to be appropriate to use an “average value of patents sold recently” to value any given patent. At least with the perpetual inventory method described above, a monetary value for a firm’s stock of R&D can be obtained.

The perpetual inventory method also provides a way to calculate market shares of R&D in R&D-intensive industries, which can be used alongside current measures. This would be akin to looking at capacity shares in some manufacturing industries. Of course, using market shares in R&D industries can be fraught with issues, such as whether it is appropriate to use a backward-looking measure to assess competitive constraints in a forward-looking industry. This is why any investigation into such industries should also look, for example, at a firm’s research pipeline.
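As a purely hypothetical illustration of that idea, shares of an industry’s total R&D stock can be computed directly from the firm-level stocks obtained above; the firm names and figures below are invented for exposition.

```python
# Purely hypothetical illustration: shares of an industry's total R&D stock,
# computed from firm-level stocks obtained via the perpetual inventory method.
rd_stocks = {"Firm A": 820.0, "Firm B": 450.0, "Firm C": 230.0}  # invented figures
total = sum(rd_stocks.values())
for firm, stock in rd_stocks.items():
    print(f"{firm}: {stock / total:.1%} of industry R&D stock")
```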

Naturally, this only provides for the valuation of the R&D stock and says nothing about valuing other intangibles that are likely to play an important role in a much wider range of industries. Nonetheless, this method could provide another means for competition authorities to assess the current and historical state of R&D stocks in industries in which R&D plays an important part. It would be interesting to see what firms’ shares of R&D stocks look like, for example, in the pharmaceutical and tech industries.