During the 2008 presidential campaign Barack Obama criticized the Bush Administration for “the weakest record of antitrust enforcement of any administration in the last half century” and promised “to reinvigorate antitrust enforcement.”  In particular, he singled out allegedly lax monopolization and merger enforcement as areas needing improvement, and also vowed “aggressive action to curb the growth of international cartels.”

The Obama Administration has now been in office for six years.  Has its antitrust enforcement record been an improvement over the Bush record, more of the same, or is its record worse?  Most importantly, have the Obama Administration’s enforcement initiatives been good or bad for the free market system, and the overall American economy?

On January 29, a Heritage Foundation conference will address these questions.  You can register to attend the conference in person or watch it live at Heritage’s website.

The conference will feature an all-star lineup of top antitrust enforcers and scholars, including four former Justice Department Assistant Attorneys General for Antitrust; a former Federal Trade Commission Chairman; two current Federal Trade Commissioners; five former senior antitrust enforcement officials; a distinguished federal appellate judge famous for his antitrust opinions; and a leading comparative antitrust law expert.  Separate panels will address FTC, Justice Department, and international developments.  Our leadoff speaker will be GWU Law School Professor and former FTC Chairman Bill Kovacic.

As an added bonus, around the time of the conference Heritage will be releasing a new paper by Professor Thom Lambert that analyzes recent Supreme Court jurisprudence and federal antitrust enforcement applying a “limits of antitrust” decision-theoretic framework.  Stay tuned.

On December 11 I published a Heritage Foundation Legal Memorandum on this topic. I concluded that the federal courts have done a fairly good job in harmonizing antitrust with constitutionally-based federalism and First Amendment interests (petitioning, free speech, and religious freedom). Nevertheless, it must be admitted that these “constitutional constraints” somewhat limit the ability of antitrust to promote a procompetitive, pro-efficiency, pro-innovation, pro-consumer welfare agenda. Anticompetitive government action – the most pernicious and long-lasting affront to competition, because it is backed by the coercive power of the state – presents a particularly serious and widespread problem. How can antitrust and other legal principles be applied to further promote economic freedom and combat anticompetitive government action, in a manner consistent with the Constitution?

First, it may be possible to further tweak antitrust to apply a bit more broadly to governmental conduct, without upsetting the constitutional balance.

For instance, in 2013, in Phoebe Putney, the United States Supreme Court commendably held that general grants of corporate powers (such as the power to enter into contracts) to sub-state governmental entities are not in themselves “clear articulations” of a state policy to displace competition. Thus, in that case, a special purpose hospital authority granted general corporate powers by the State of Georgia could not evade federal antitrust scrutiny when it orchestrated a potentially anticompetitive hospital merger. In short, by requiring states to be specific when they authorize regulators to displace competition, Phoebe Putney makes it a bit more difficult to achieve anticompetitive results through routine state governmental processes.

But what about when a subsidiary state entity has been empowered to displace competition? Imposing a greater requirement on states to actively supervise decisions by self-interested state regulatory boards could enhance competition without severely undermining state prerogatives. Specifically, where members of a profession dominate a state-created board that oversees the profession, the risk of self-dealing and consumer harm is particularly high, and therefore the board’s actions should be subject to exacting scrutiny. In its imminent ruling on the Federal Trade Commission’s (FTC) challenge to anticompetitive rules by the dentist-dominated North Carolina State Board of Dental Examiners (rules which forestall competition by storefront teeth whitening services), the Supreme Court will have the opportunity to require that states actively supervise the decisions of self-interested regulators as a prerequisite to federal antitrust immunity. At the very least, such a requirement would make states more cautious before giving a blank check to potentially anticompetitive industry self-regulation. It could also raise the costs of obtaining special government favor, and shed needed light on rent-seekers’ efforts to achieve regulatory capture.

Unfortunately, though, a great deal of anticompetitive governmental activity, both state and federal, is and will remain beyond the bounds of federal antitrust prosecution. What can be done to curb such excesses, given the practical political difficulties in achieving far-reaching pro-competitive legislative and regulatory reforms? My December 11 Heritage Memo highlights a few possibilities rooted in constitutional economic liberties (see also the recent Heritage Foundation special report on economic liberty and the Constitution). One involves putting greater teeth into constitutional equal protection and due process analysis – say, by holding that pure protectionism standing alone does not pass muster as a “rational basis” justification for a facially anticompetitive law. Another approach is to deploy takings law (highlighted in a current challenge to the U.S. Agriculture Department’s raisin cartel) and the negative commerce clause in appropriate circumstances. The utility of these approaches, however, is substantially limited by case law.

Finally, competition advocacy – featuring public statements by competition agencies that describe the anticompetitive effects and welfare harm stemming from specific government regulations or proposed laws – remains a potentially fruitful means for highlighting the costs of anticompetitive government action and building a case for reform. As I have previously explained, the FTC has an established track record of competition advocacy filings, and the International Competition Network is encouraging the utilization of competition advocacy around the world. By shedding light on the specific baleful effects of government actions that undermine normal competitive processes, competition advocacy may over time help build a political case for reform that transcends the inherent limitations of antitrust and constitutional litigation.

It’s easy to look at the net neutrality debate and assume that everyone is acting in their self-interest and against consumer welfare. Thus, many on the left denounce all opposition to Title II as essentially “Comcast-funded,” aimed at undermining the Open Internet to further nefarious, hidden agendas. No matter how often opponents make the economic argument that Title II would reduce incentives to invest in the network, many will not listen because they have convinced themselves that it is simply special-interest pleading.

But whatever you think of ISPs’ incentives to oppose Title II, the incentive for the tech companies (like Cisco, Qualcomm, Nokia and IBM) that design and build key elements of network infrastructure and the devices that connect to it (i.e., essential input providers) is to build out networks and increase adoption (i.e., to expand output). These companies’ fundamental incentive with respect to regulation of the Internet is the adoption of rules that favor investment. They operate in highly competitive markets, they don’t offer competing content and they don’t stand as alleged “gatekeepers” seeking monopoly returns from, or control over, what crosses over the Interwebs.

Thus, it is no small thing that 60 tech companies — including some of the world’s largest, based both in the US and abroad — that are heavily invested in the buildout of networks and devices, as well as more than 100 manufacturing firms that are increasingly building the products and devices that make up the “Internet of Things,” have written letters strongly opposing the reclassification of broadband under Title II.

There is probably no more objective evidence that Title II reclassification will harm broadband deployment than the opposition of these informed market participants.

These companies have the most to lose from reduced buildout, and no reasonable nefarious plot can be constructed to impugn their opposition to reclassification as consumer-harming self-interest in disguise. Their self-interest is worn on their sleeves: more broadband deployment and adoption — which is exactly what the Open Internet proceedings are supposed to accomplish.

If the FCC chooses the reclassification route, it will most assuredly end up in litigation. And when it does, the opposition of these companies to Title II should be Exhibit A in the effort to debunk the FCC’s purported basis for its rules: the “virtuous circle” theory that says that strong net neutrality rules are necessary to drive broadband investment and deployment.

Access to all the wonderful content the Internet has brought us is not possible without the billions of dollars that have been invested in building the networks and devices themselves. Let’s not kill the goose that lays the golden eggs.

During the 2008 presidential campaign, Barack Obama criticized the Bush Administration for “the weakest record of antitrust enforcement of any administration in the last half century” and promised “to reinvigorate antitrust enforcement.” Has the Obama Administration’s antitrust enforcement record over the last six years lived up to this extravagant promise? More specifically, what grade should be assigned to Obama Administration antitrust policy overall?

The Heritage Foundation will explore these questions in a January 29, 2015 conference entitled “Obama Administration Antitrust Policy: A Report Card.” The conference will start with a bang, with keynote remarks by former FTC Chairman, Professor Bill Kovacic – perhaps the most dynamic antitrust orator of our time.  The conference will then feature a free-flowing discussion among antitrust experts, with separate panels discussing the U.S. Federal Trade Commission (FTC), the Justice Department’s Antitrust Division, and international antitrust. Speakers will include top-notch practitioners and scholars who have led the FTC and the Antitrust Division – and one distinguished federal jurist, D.C. Circuit Judge and George Mason Law Professor Douglas Ginsburg, who has been a leading academic, Assistant Attorney General for Antitrust, and OMB Administrator for Information and Regulatory Affairs. Former Assistant Attorney General for Antitrust James Rill, who was the leader in promoting international antitrust convergence, will also speak.

Best of all, the conference (including lunch) is free – all you need to do is register for it at the Heritage Foundation’s website. You won’t want to miss it.

Last week, the George Washington University Center for Regulatory Studies convened a Conference (GW Conference) on the Status of Transatlantic Trade and Investment Partnership (TTIP) Negotiations between the European Union (EU) and the United States (U.S.), which were launched in 2013 and will continue for an indefinite period of time. In launching TTIP, the Obama Administration claimed that this pact would raise economic welfare in the U.S. and the EU through stimulating investment and lowering non-tariff barriers between the two jurisdictions, by, among other measures, “significantly cut[ting] the cost of differences in [European Union and United States] regulation and standards by promoting greater compatibility, transparency, and cooperation.”

Whether TTIP, if enacted, would actually raise economic welfare in the United States is an open question, however. As a recent Heritage Foundation analysis of TTIP explained, a TTIP focus on “harmonizing” regulations could actually lower economic freedom (and welfare) by “regulating upward” through acceptance of the more intrusive approach, and by precluding future competition among alternative regulatory models that could lead to welfare-enhancing regulatory improvements. Thus, the Heritage study recommended that “[a]ny [TTIP] agreement should be based on mutual recognition, not harmonization, of regulations.”

Unfortunately, discussion at the GW Conference indicated that the welfare-superior mutual recognition approach has been rejected by negotiators – at least as of now. In response to a question I posed on the benefits of mutual recognition, an EU official responded that such an “academic” approach is not “realistic,” while a senior U.S. TTIP negotiator indicated that mutual recognition could prove difficult where regulatory approaches differ. I read those diplomatically couched responses as signaling that both sides opposed the mutual recognition approach. This is a real problem. As part of TTIP, U.S. and EU sector-specific regulators are actively engaged in discussing regulatory particulars. There is the distinct possibility that the regulators may agree on measures that raise regulatory burdens for the sectors covered – particularly given the oft-repeated motto at the GW Conference that TTIP must not reduce existing levels of “protection” for health, safety, and the environment. (Those blandishments eschew any cost-benefit calculus to justify existing protection levels.) This conclusion is further supported by public choice theory, which suggests that regulators may be expected to focus on expanding the size and scope of their regulatory domains, not on contracting them. To make things worse, TTIP raises the possibility that the highly successful U.S. tradition of reliance on private sector-led voluntary consensus standards, as opposed to the EU’s preference for heavy government involvement in standard-setting policies, may be undermined. Any move toward greater direct government influence on U.S. standard setting as part of a TTIP bargain would further undermine the vibrancy, competition, and innovation that have led to the great international success of U.S.-developed technical standards.

As a practical matter, however, is there time for a change in direction in TTIP negotiations regarding regulation and standards? Yes, there is. The TTIP negotiators face no true deadline. Moreover, as a matter of political reality, the eventual U.S. statutory adoption of TTIP measures may require the passage by Congress of “fast-track” trade promotion authority (TPA), which provides for congressional up-or-down votes (without possibility of amendment) on legislation embodying trade deals that have been negotiated by the Executive Branch. Given the political sensitivity of trade deals, they cannot easily be renegotiated if they are altered by congressional amendments. (Indeed, in recent decades all major trade agreements requiring implementing legislation have proceeded under TPA.)

If the Obama Administration decides that it wants to advance TTIP, it must rely on a Republican-controlled Congress to obtain TPA. Before it grants such authority, Congress should conduct hearings and demand that Administration officials testify about key aspects of the Administration’s TTIP negotiating philosophy, and, in particular, on how U.S. TTIP negotiators are approaching regulatory differences between the U.S. and the EU. Congress should make it a prerequisite to the grant of TPA that the final TTIP agreement embody welfare-enhancing mutual recognition of regulations and standards, rather than welfare-reducing harmonization. It should vote down any TTIP negotiated deal that fails to satisfy this requirement.

In March 2014, the U.S. Government’s National Telecommunications and Information Administration (NTIA, the Executive Branch’s telecommunications policy agency) abruptly announced that it did not plan to renew its contract with the Internet Corporation for Assigned Names and Numbers (ICANN) to maintain core functions of the Internet. ICANN oversees the Internet domain name system through its subordinate agency, the Internet Assigned Numbers Authority (IANA). In its March statement, NTIA proposed that ICANN consult with “global stakeholders” to agree on an alternative to the “current role played by NTIA in the coordination of the Internet’s [domain name system].”

In recent months Heritage Foundation scholars have discussed concerns stemming from this vaguely-defined NTIA initiative (see, for example, here, here, here, here, here, and here). These concerns include fears that eliminating the U.S. Government’s role in Internet governance could embolden other nations and international organizations (especially the International Telecommunications Union, an arm of the United Nations) to seek to regulate the Internet and limit speech, and create leeway for ICANN to expand beyond its core activities and trench upon Internet freedoms.

Although NTIA has testified that its transition plan would preclude such undesirable outcomes, the reaction to these assurances should be “trust but verify” (especially given the recent Administration endorsement of burdensome Internet common carrier regulation, which appears to be at odds with the spirit if not the letter of NTIA’s assurances).

Reflecting the “trust but verify” spirit, the just-introduced “Defending Internet Freedom Act of 2014” requires that NTIA maintain its existing Internet oversight functions, unless the NTIA Administrator certifies in writing that certain specified assurances have been met regarding Internet governance. Those assurances include findings that the management of the Internet domain name system will not be exercised by foreign governmental or intergovernmental bodies; that ICANN’s bylaws will be amended to uphold First Amendment-type freedoms of speech, assembly, and association; that a four-fifths supermajority will be required for changes in ICANN’s bylaws or fees for services; that an independent process for resolving disputes between ICANN and third parties be established; and that a host of other requirements aimed at protecting Internet freedoms and ensuring ICANN and IANA accountability be instituted.

Legislative initiatives of this sort, while no panacea, play a valuable role in signaling Congress’s intent to hold the Administration accountable for seeing to it that key Internet freedoms (including the avoidance of onerous regulation and deleterious restrictions on speech and content) are maintained. They merit thoughtful consideration.

Microsoft and its allies (the Microsoft-funded trade organization FairSearch and the prolific Google critic Ben Edelman) have been highly critical of Google’s use of “secret” contracts to license its proprietary suite of mobile apps, Google Mobile Services, to device manufacturers.

I’ve written about this at length before. As I said previously,

In order to argue that Google has an iron grip on Android, Edelman’s analysis relies heavily on “secret” Google licensing agreements — “MADAs” (Mobile Application Distribution Agreements) — trotted out with such fanfare one might think it was the first time two companies ever had a written contract (or tried to keep it confidential).

For Edelman, these agreements “suppress competition” with “no plausible pro-consumer benefits.”

Microsoft (via another of its front groups, ICOMP) responded in predictable fashion.

While the hysteria over private, mutually beneficial contracts negotiated between sophisticated corporations was always patently absurd (who ever heard of sensitive commercial contracts that weren’t confidential?), Edelman’s claim that the Google MADAs operate to “suppress competition” with “no plausible pro-consumer benefits” was the subject of my previous post.

I won’t rehash all of those arguments here, but rather point to another indication that such contract terms are not anticompetitive: The recent revelation that they are used by others in the same industry — including, we’ve learned (to no one’s surprise), Microsoft.

Much like the release of Google’s MADAs in an unrelated lawsuit, the ongoing patent licensing contract dispute between Microsoft and Samsung has obliged the companies to release their own agreements. As it happens, they are at least as restrictive as the Google agreements criticized by Edelman — and, in at least one way, even more so.

Some quick background: As I said in my previous post, it is no secret that equipment manufacturers have the option to license a free set of Google apps (Google Mobile Services) and set Google as the default search engine. However, Google allows OEMs to preinstall other competing search engines as they see fit. Indeed, no matter which applications come pre-installed, the user can easily download Yahoo!, Microsoft’s Bing, Yandex, Naver, DuckDuckGo and other search engines for free from the Google Play Store.

But Microsoft has sought to impose even more stringent constraints on its device partners. One of the agreements disclosed in the Microsoft-Samsung contract litigation, the “Microsoft-Samsung Business Collaboration Agreement,” requires Samsung to set Bing as the search default for all Windows phones and precludes Samsung from pre-installing any other search applications on Windows-based phones. Samsung must configure all of its Windows Phones to use Microsoft Search Services as the

default Web Search . . . in all instances on such properties where Web Search can be launched or a Query submitted directly by a user (including by voice command) or automatically (including based on location or context).

Interestingly, the agreement also requires Samsung to install Microsoft Search Services as a non-default search option on all of Samsung’s non-Microsoft Android devices (to the extent doing so does not conflict with other contracts).

Of course, the Microsoft-Samsung contract is expressly intended to remain secret: Its terms are declared to be “Confidential Information,” prohibiting Samsung from making “any public statement regarding the specific terms of [the] Agreement” without Microsoft’s consent.

Meanwhile, the accompanying Patent License Agreement provides that

all terms and conditions in this Agreement, including the payment amount [and the] specific terms and conditions in this Agreement (including, without limitation, the amount of any fees and any other amounts payable to Microsoft under this Agreement) are confidential and shall not be disclosed by either Party.

In addition to the confidentiality terms spelled out in these two documents, there is a separate Non-Disclosure Agreement—to further dispel any modicum of doubt on that score. Perhaps this is why Edelman was unaware of the ubiquity of such terms (and their confidentiality) when he issued his indictment of the Google agreements but neglected to mention Microsoft’s own.

In light of these revelations, Edelman’s scathing contempt for the “secrecy” of Google’s MADAs seems especially disingenuous:

MADA secrecy advances Google’s strategic objectives. By keeping MADA restrictions confidential and little-known, Google can suppress the competitive response…Relatedly, MADA secrecy helps prevent standard market forces from disciplining Google’s restriction. Suppose consumers understood that Google uses tying and full-line-forcing to prevent manufacturers from offering phones with alternative apps, which could drive down phone prices. Then consumers would be angry and would likely make their complaints known both to regulators and to phone manufacturers. Instead, Google makes the ubiquitous presence of Google apps and the virtual absence of competitors look like a market outcome, falsely suggesting that no one actually wants to have or distribute competing apps.

If, as Edelman claims, Google’s objectionable contract terms “serve both to help Google expand into areas where competition could otherwise occur, and to prevent competitors from gaining traction,” then what are the very same sorts of terms doing in Microsoft’s contracts with Samsung? The revelation that Microsoft employs contracts similar to — and similarly confidential to — Google’s highlights the hypocrisy of claims that such contracts serve anticompetitive aims.

In fact, as I discussed in my previous post, there are several pro-competitive justifications for such agreements, whether undertaken by a market leader or a newer entrant intent on catching up. Most obviously, such contracts help to ensure that consumers receive the user experience they demand on devices manufactured by third parties. But more to the point, the fact that such arrangements permeate the market and are adopted by both large and small competitors is strong indication that such terms are pro-competitive.

At the very least, they absolutely demonstrate that such practices do not constitute prima facie evidence of the abuse of market power.

[Reminder: See the “Disclosures” page above. ICLE has received financial support from Google in the past, and I formerly worked at Microsoft. Of course, the views here are my own, although I encourage everyone to agree with them.]

There is always a temptation for antitrust agencies and plaintiffs to center a case around so-called “hot” documents — typically company documents with a snippet or sound bite extracted, sometimes out of context. Some practitioners argue that “[h]ot document[s] can be crucial to the outcome of any antitrust matter.” Although “hot” documents can help catch the interest of the public, a busy judge, or an unsophisticated jury, they often lead to misleading results. More often than not, antitrust cases are resolved on economics and what John Adams called “hard facts,” not snippets from emails or other corporate documents. Antitrust casebooks are littered with cases that initially looked promising based on supposedly hot documents but ultimately failed because the foundations of a sound antitrust case were missing.

As discussed below, this is especially true of a recent case brought by the FTC, FTC v. St. Luke’s, currently pending before the Ninth Circuit Court of Appeals, in which the FTC has consistently relied on “hot” documents at each pleading stage to make its case.

The crafting and prosecution of civil antitrust cases by federal regulators is a delicate balancing act. Regulators must, on the one hand, adhere to well-defined principles of antitrust enforcement and, on the other, appeal to the interests of a busy judge. The simple way of doing this is to use snippets of documents in an attempt to show that the defendants knew they were violating the law.

After all, if federal regulators merely had to properly define geographic and relevant product markets, show a coherent model of anticompetitive harm, and demonstrate that any anticipated harm would outweigh any procompetitive benefits, where is the fun in that? The reality is that antitrust cases typically rely on economic analysis, not snippets of hot documents. Antitrust regulators routinely include internal company documents in their cases to supplement the dry, mechanical nature of antitrust analysis. In isolation, however, these documents can suggest competitive concerns where none exist.

With this in mind, it is vital that antitrust regulators do not build an entire case around what seem to be inflammatory documents. Quotes from executives, internal memoranda about competitors, and customer presentations are the icing on the cake after a proper antitrust analysis. As the International Center for Law and Economics’ Geoff Manne once explained,

[t]he problem is that these documents are easily misunderstood, and thus, while the economic significance of such documents is often quite limited, their persuasive value is quite substantial.

Herein lies the problem illustrated by the Federal Trade Commission’s use of provocative documents in its suit against the vertical acquisition of Saltzer Medical Group, an independent physician group comprising 41 doctors, by St. Luke’s Health System. The FTC seeks to stop the acquisition involving these two Idaho-based health care providers, a $16 million transaction, an amount small in comparison to other health care mergers investigated by the antitrust agencies. The transaction would give St. Luke’s a total of 24 primary care physicians operating in and around Nampa, Idaho.

In St. Luke’s, the FTC used “hot” documents at each stage of its pleadings, from its complaint through its merits brief on appeal. Some of the statements pulled from executives’ emails, notes, and memoranda seem inflammatory, suggesting that St. Luke’s intended to increase prices and to control market share, all in order to strengthen its position in payer contracting. These statements, however, have little grounding in the reality of health care competition.

The reliance by the FTC on these so-called hot documents is problematic for several reasons. First, the selective quoting of internal documents paints the merger as intended solely to increase St. Luke’s profits at the expense of payers, when in reality the merger is premised on the integration of health care services and the move from the traditional fee-for-service model to a patient-centric model. St. Luke’s intention of incorporating primary care into its system is in line with the goals of the Affordable Care Act to promote overall well-being through integration. The District Court in this case recognized that the purpose of the merger was “primarily to improve patient outcomes.” And, in fact, underserved and uninsured patients are already benefitting from the transaction.

Second, the selective quoting suggested a narrow geographic market, and therefore an artificially high level of concentration in Nampa, Idaho. That suggestion contradicts reality: nearly one-third of Nampa residents seek primary care physician services outside of Nampa. The geographic market advanced by the FTC is not a proper market, regardless of whether selected documents appear to support it. Without a properly defined geographic market, it is impossible to determine market share and therefore to prove a violation of the Clayton Antitrust Act.
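The concentration point can be made concrete with a back-of-the-envelope Herfindahl-Hirschman Index (HHI) calculation. The market shares below are invented purely for illustration — they are not the actual figures from the St. Luke’s record — but they show how widening the geographic market mechanically deflates measured concentration:

```python
# Hypothetical illustration only: the shares below are invented and are
# NOT the actual figures from FTC v. St. Luke's.

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percentage points (0-100)."""
    return sum(s ** 2 for s in shares)

# Narrowly drawn market (Nampa only): the merged firm appears dominant.
narrow = [60, 25, 15]            # post-merger shares, percent

# Broader market reflecting that roughly a third of Nampa residents
# seek primary care outside Nampa (e.g., in the nearby Boise area).
broad = [25, 20, 20, 15, 10, 10]

print(hhi(narrow))  # 4450 -- "highly concentrated" under the 2010 Guidelines (above 2500)
print(hhi(broad))   # 1850 -- merely "moderately concentrated" (1500-2500)
```

The 2500 and 1500 thresholds are those of the agencies’ own 2010 Horizontal Merger Guidelines; the exercise shows why everything turns on defining the geographic market correctly rather than on what internal documents happen to call “the market.”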

The DOJ Antitrust Division and the FTC have acknowledged that markets cannot properly be defined solely on spicy documents. Writing in their 2006 commentary on the Horizontal Merger Guidelines, the agencies noted that

[t]he Agencies are careful, however, not to assume that a ‘market’ identified for business purposes is the same as a relevant market defined in the context of a merger analysis. … It is unremarkable that ‘markets’ in common business usage do not always coincide with ‘markets’ in an antitrust context, inasmuch as the terms are used for different purposes.

Third, even if St. Luke’s had the intention of increasing prices, just because one wants to do something, such as raising prices above a competitive level or scaling back research and development expenses — even if it genuinely believes it is able — does not mean that it can. Merger analysis is not a question of mens rea (or subjective intent). Rather, the analysis must show that such behavior will be likely as a result of diminished competition. Regulators must not look at evidence of this subjective intent and then conclude that the behavior must be possible and that a merger is therefore likely to substantially lessen competition. This would be the tail wagging the dog. Instead, regulators must first determine whether, as a matter of economic principle, a merger is likely to have a particular effect. Then, once the analytical tests have been run, documents can support these theories. But without sound support for the underlying theories, documents (however condemning) cannot bring the case across the goal line.

Surely, though, documents suggesting an intent to raise prices should bring an antitrust plaintiff across the goal line? Not so, as Seventh Circuit Judge Frank Easterbrook has explained:

Almost all evidence bearing on “intent” tends to show both greed and desire to succeed and glee at a rival’s predicament. … [B]ut drive to succeed lies at the core of a rivalrous economy. Firms need not like their competitors; they need not cheer them on to success; a desire to extinguish one’s rivals is entirely consistent with, often is the motive behind competition.

As Harvard Law Professor Phil Areeda observed, relying on documents describing intent is inherently risky because

(1) the businessperson often uses a colorful and combative vocabulary far removed from the lawyer’s linguistic niceties, and (2) juries and judges may fail to distinguish a lawful competitive intent from a predatory state of mind. (7 Phillip E. Areeda & Herbert Hovenkamp, Antitrust Law § 1506 (2d ed. 2003).)

So-called “hot” documents may help guide merger analysis, but served up as a main course make a paltry meal. Merger cases rise or fall on hard facts and economics, and next week we will see if the Ninth Circuit recognizes this as both St. Luke’s and the FTC argue their cases.

In my just published Heritage Foundation Legal Memorandum, I argue that the U.S. Federal Trade Commission (FTC) should substantially scale back its overly aggressive “advertising substantiation” program, which disincentivizes firms from providing the public with valuable information about the products they sell.  As I explain:

“The . . . [FTC] has a long history of vigorously combating false and deceptive advertising under its statutory authorities, but recent efforts by the FTC to impose excessive ‘advertising substantiation’ requirements on companies go far beyond what is needed to combat false advertising. Such actions threaten to discourage companies from providing useful information that consumers value and that improves the workings of the marketplace. They also are in tension with constitutional protection for commercial speech. The FTC should reform its advertising substantiation policy and allow businesses greater flexibility to tailor their advertising practices, which would further the interests of both consumers and businesses. It should also decline to seek ‘disgorgement’ of allegedly ‘ill-gotten gains’ in cases involving advertising substantiation.”

In particular, I recommend that the FTC issue a revised policy statement explaining that it will seek to restrict commercial speech to the minimum extent possible, consistent with fraud prevention, and will not require onerous clinical studies to substantiate non-fraudulent advertising claims.  I also urge that the FTC clarify that it will only seek equitable remedies (including injunctions and financial exactions) in court for cases of clear fraud.

In my just-published article in The Antitrust Source, I argue that the law and economics literature on patents and error cost analysis demonstrate that the recent focus by U.S. (and foreign) antitrust enforcers on single-firm patent abuses is misplaced, and may reduce incentives to innovate.  I recommend that antitrust enforcers focus instead on restrictions among competing technologies, which are the primary concern of the 1995 U.S. DOJ-FTC Antitrust-IP Guidelines.  I conclude:

“Patent-antitrust enforcement should “stick to its knitting” and focus on transactions that lessen competition among rival technologies or on wrongful actions (not competition on the merits) designed to artificially inflate the market value of a patent beyond its legitimate scope. New antitrust enforcement initiatives that seek to limit returns within the legitimate scope of the patent are unwise. Even if they appeared to restrain licensing fees in the short term, economic theory and evidence suggest that such “creative antitrust enforcement” would undermine incentives to invest in patenting, thereby weakening the patent system and tending to slow innovation and economic growth. Nations seeking to spur their economies would be well advised to avoid such antitrust adventurism.”