October 30, 2018
Although we are still at the relatively early stages of the commercialization of artificial intelligence (AI), it is clear that privacy and security considerations will be at the forefront of measures to regulate AI as industries increasingly adopt and integrate AI tools and store and utilize the massive amounts of data those tools generate. As acquirers of AI businesses struggle with how to properly value AI assets, potential liabilities associated with AI, including increased regulation, are making the valuation process even more challenging.
AI is a substantial and rapidly growing driver of M&A activity both in the United States and abroad. And new AI companies are being created and funded at a record pace. In 2017, investors poured $15.2 billion into AI startups, a 141 percent increase over 2016, according to CB Insights. That pace has continued into 2018, with Venture Scanner reporting that Q2 2018 saw a record $4.4 billion invested in AI companies, a 19 percent increase from the same period in 2017. During the first quarter of 2018, 20 percent of earnings calls of U.S. publicly listed companies discussed AI, according to a Bain & Co. study. Management and corporate development teams at companies engaged in a broad range of industry sectors are now encouraged to consider adopting AI solutions. This heightened interest is driven by the prospect of significant gains in efficiency and cost reduction, as well as by concerns that competitors are investing in tools that could upend the status quo.
This competitive tension is evident in a swathe of recent acqui-hire deals that value talent at between $5 million and $10 million per AI expert, according to a PitchBook study. But as the number and variety of AI use cases grow, the methods for valuing target companies that fit within the broad AI umbrella have also expanded beyond the traditional talent metrics. Amazon, Google and Microsoft have begun offering enterprise AI solutions that act as an alternative to M&A for established companies looking to build out an AI capability. Accordingly, the talent-based valuation metrics, which are generic and tend to be established by serial acquirers in deals for cutting-edge technology, are being weighed against the cost of building out an AI capability in-house, using third-party AI tools such as Amazon AI, Google’s Cloud AutoML or IBM’s Virtual Assistant. As the capabilities of AI processes become better understood within industries, potential acquirers in M&A transactions are also increasingly able to produce valuations based on the efficiencies that they expect the underlying technology will bring to their business. In addition, some AI startups are able to demonstrate customer and user results as well as cross-selling opportunities through use of their AI tools, which can be another source of valuation data. As the AI M&A market matures, acquirers are establishing valuations using a combination of these valuation factors rather than the simple talent metrics that defined the early market.
Despite these strong drivers for building AI capabilities, acquirers would be wise to apply caution when approaching AI M&A prospects. Even as advances in AI solutions are opening exciting new value propositions for many companies, regulators are increasingly pressed to respond to demands by their constituents to enact stricter regulations on the collection and use of personal data.
The EU’s General Data Protection Regulation (GDPR) was adopted in 2016 and came into effect in all EU member states in May 2018. Companies that transfer, process or maintain the data of EU residents must adhere to the new standards of the GDPR. The GDPR represents the current high-water mark for regulation of data security and stands at the opposite end of the spectrum of regulatory approaches from those adopted in China. China’s approach is designed to foster accelerated development of AI and reflects a lower concern for protection of personally identifiable information. In the United States, the regulatory approach is evolving, but there is evident tension between the opposing considerations of international competition for AI talent (in what is widely referred to as the AI arms race) and the demands by constituencies for protection of personal information. With the adoption of the California Consumer Privacy Act (CCPA) on June 28, California became the first state to adopt comprehensive regulations that establish the rights of consumers regarding control of their personal information, with personal data rights that track a number of the guiding principles contained in the GDPR. Other recent efforts to regulate AI include California SB 1001, signed into law on Sept. 28, which requires that a person who uses a bot in online communication to incentivize a purchase or sale of goods or services or to influence an election must disclose in that communication that they are using a bot.
In addition to regulatory compliance considerations, adopters of AI need to be cognizant of the reputational exposure that use and manipulation of big data through AI tools brings. Owing to the wide publicity generated by data breaches, public awareness of the way that companies safeguard and utilize information collected from customers and users has never been more intense. Whether that reputational exposure is at a general public level or at a narrower, customer or industry level, care should be taken in formulating data protection strategies (and in evaluating those of potential targets) to understand vulnerabilities beyond mere regulatory compliance.
As the upward trend in AI-related deal activity continues, corporate leaders across a vast array of sectors are becoming more educated in data protection matters. Deal professionals should prioritize privacy and data protection due diligence with any target having AI tools or extensive data sets, especially in light of the rapidly evolving regulatory landscape. Acquirers should also consider potential risks and liabilities associated with the integration of data sets and AI tools of potential targets with the acquirer’s existing businesses. Despite advances in developing methodologies for valuing AI companies, values tend to be inconsistent and often come down to how desperate an acquirer is to obtain the technology or prevent that technology from falling into the hands of a competitor.
Craig W. Adas is managing partner of Weil, Gotshal & Manges’ Silicon Valley office and a member of the corporate department. His practice focuses on mergers and acquisitions, private equity and securities, with particular emphasis on private and public acquisitions, leveraged buyouts, dispositions and joint ventures.
Alex Purtill is an associate in the firm’s Silicon Valley office. He participates in the representation of financial and strategic clients in various acquisition transactions, including public and private mergers and acquisitions, divestitures and cross-border matters.
Reprinted with permission from the October 30, 2018 edition of The Recorder© 2018 ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.