Endemic failures on ESG issues cause public harm, can ruin a business's reputation and can ultimately even threaten its existence. Yet one area remains exempt from ESG impact and risk disclosures: responsible artificial intelligence and autonomous systems (AI systems). To overlook AI systems in any material dataset of ESG metrics is like building the spine of responsible business while omitting the spinal cord at its center.
AI systems could be responsible for up to $3.9 trillion of value creation by 2022, according to Gartner, yet they remain a blind risk factor in the ESG equation. The concept of responsible AI has become increasingly prominent, driven by citizen and government concerns about bias, misinformation and a host of other ethical breaches. While numerous well-intentioned AI guidelines have been published by a cross-section of august authorities, they remain largely invisible to the investment community. At the same time, companies' growing reliance on, and manipulation of, customer and user data underscores the need for more responsible data management.
In 2019 at least 55 US companies highlighted the ethical risks of AI in their annual reports. Investor and shareholder concern around responsible AI systems is likely to grow stronger as issues relating to data use become more publicly visible, as regulatory momentum around responsible AI builds, and as investors continue to recognize the financial risks of exposure to firms that are not building or using responsible technology.
IR effects of unethical use of AI in advertising
According to Tom Triscari, a leading programmatic economist, ‘the advertising sector is worth $1.5 tn annually, and AI will likely run or try to run everywhere companies communicate’. Advertising is where AI meets the consumer marketplace. As such, online advertising investment is the canary in the AI coal mine.
Advertising externalities are experienced by every reader: privacy violations, disinformation, user addiction, algorithmic bias, dubious consent, ad fraud, malware, pop-ups, pop-unders and fake dialogue boxes, abuse of special audiences and robotic reviews, to name a few.
In 2018 the Pew Research Center found that 44 percent of younger American users (ages 18 to 29) had deleted the Facebook app from their phones, and that 74 percent of all adult users had either deleted the app, taken a break from checking the platform or adjusted their privacy settings.
Investors are increasingly sensitive to the costs of such consumer abuse and to the potential risks and loss of value it represents. On July 26, 2018, for example, more than $119 billion was wiped off Facebook's market value in the wake of the Cambridge Analytica scandal, in which millions of Facebook users' data was abused. Progressive shareholders are concerned that approaches to content governance have proven ad hoc and ineffectual and pose continued (and growing) risks to shareholder value.
Upping the standard of AI reporting
Progress toward AI metrics is being made. At SASB, Greg Waters, sector lead for communications and technology, has recently developed an assessment of internet content moderation. Scott David at the University of Washington's Applied Physics Laboratory – Information Risk Research Initiative has published an exhaustive analysis, the open-source Atlas of Information Risk Maps. Such diligent analyses and emerging AI risk standards will rapidly evolve into meaningful additional ESG disclosures to be applied to corporate spend on AI supply chains.
SASB, the Carbon Disclosure Project, the Climate Disclosure Standards Board, the GRI and the International Integrated Reporting Council are all working toward a standardized set of ESG metrics. At Sustainability for Sceptics, a recent symposium held at King's College London that brought together business leaders and academics, the state of ESG reporting was compared with the state of annual reporting before the Wall Street crash of 1929. To exclude metrics that measure the impact and risks of AI systems from a standardized set of ESG metrics is to leave a gaping blind spot and miss a once-in-a-generation opportunity. Here are three steps you can take to help manage the risks for your business:
– Check your exposure to AI risk
Harmful AI deployed inside a company's supply chain by vendors, third-party suppliers and contractors can cause a spectrum of harms: data and intellectual property losses, erosion of confidence in the company's integrity, and exploitation culminating in financial losses or outright business failure. If your company's products and processes rely heavily on AI systems in the supply chain, the impact of malevolent AI will be amplified, increasing the risks.
Check with your IT and procurement departments and ask questions like: Are responsible AI requirements included in every request for proposal and contract? Does the supplier have a set of policies and procedures that address how integrity, resilience and quality will be measured (for example, through self-attestation, auditing or certification structures)?
At build time, are systems accessible for assessment from a multi-stakeholder perspective, in convenient, modifiable and open formats that can be retrieved, downloaded, indexed and searched? Such questions will begin to indicate the degree to which your company could be blind-sided by (or protected from) AI risk; the sketch below shows one simple way to turn the answers into a comparable score.
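As a starting point, the minimal Python sketch below turns the questions above into a crude vendor risk score. The field names and the equal weighting are hypothetical illustrations, not an established assessment framework.

```python
# Illustrative sketch: score a vendor's answers to the responsible AI
# questions above. Field names and the equal weighting are hypothetical --
# adapt them to your own procurement questionnaire.
from dataclasses import dataclass

@dataclass
class VendorAIAssessment:
    rfp_includes_responsible_ai: bool  # responsible AI terms in every RfP/contract?
    has_integrity_policies: bool       # documented integrity/resilience/quality policies?
    independently_audited: bool        # audited or certified, not just self-attested?
    open_build_time_access: bool       # systems assessable in open, searchable formats?

    def risk_score(self) -> float:
        """Return 0.0 (low risk) to 1.0 (high risk): the share of unmet criteria."""
        answers = [
            self.rfp_includes_responsible_ai,
            self.has_integrity_policies,
            self.independently_audited,
            self.open_build_time_access,
        ]
        return 1.0 - sum(answers) / len(answers)

vendor = VendorAIAssessment(True, True, False, False)
print(f"Vendor AI risk score: {vendor.risk_score():.2f}")  # 0.50 -> half the criteria unmet
```

Even a rough score like this lets procurement compare suppliers on a like-for-like basis and flag those that warrant deeper due diligence.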
– Understand how your company’s ad spend is exposing you to risk
In the dark world of programmatic buying, online advertising budgets routinely suffer theft and abuse, resulting in bottom-line losses. Funds are often rechanneled to misinformation and bad actors, inadvertently fueling consumer abuse.
The potential risks to the company include reputational harm by association, increased citizen-consumer skepticism, possible legal liability and greater regulatory scrutiny. Your marketing department may be powerless to change the situation – or think it is. Together, you can begin to understand and estimate whether and how your company's online advertising expenditure is (inadvertently) exposing the company to risk.
Ask whether the media platform carrying your advertising discloses the percentage of advertising investment lost to impression fraud, click fraud, malvertising and misattribution, and whether any such disclosure is independently audited. What is the platform's total expenditure on fact-checking as a percentage of product cost? Does the platform have documented processes, procedures and technologies that continually monitor for dark patterns, misuse and/or dual use, explain how instances are identified and handled, and detail any commensurate corrective or preventative measures? Such documentation could help to reduce bottom-line losses and will demonstrate your efforts to buy advertising responsibly; the sketch below gives a sense of the sums at stake.
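To make the exposure tangible, here is a back-of-envelope estimate in Python. Every figure – the budget and each slippage rate – is a hypothetical placeholder that should be replaced with audited numbers from your own platforms.

```python
# Back-of-envelope sketch: estimate how much of an online advertising
# budget may be lost to the slippage categories discussed above.
# All figures are hypothetical placeholders, not industry benchmarks.
annual_ad_spend = 10_000_000  # $10 mn programmatic budget (assumed)

# Assumed, illustrative loss rates; the categories are treated as
# disjoint so the rates can simply be summed (no double counting).
slippage_rates = {
    "impression fraud": 0.05,
    "click fraud": 0.04,
    "malvertising": 0.01,
    "misattribution": 0.03,
}

for category, rate in slippage_rates.items():
    print(f"{category:>18}: ${annual_ad_spend * rate:>12,.0f}")

total_rate = sum(slippage_rates.values())
print(f"{'estimated total':>18}: ${annual_ad_spend * total_rate:>12,.0f} "
      f"({total_rate:.0%} of spend)")
```

Even these modest illustrative rates imply a seven-figure annual loss on a $10 mn budget – exactly the kind of material exposure that standardized disclosure would surface.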
– Build a consensus around the need to integrate AI into ESG metrics
Forward-thinking governance and IR professionals will engage with competitors, investors, regulators and peers to build a broad consensus around the need to integrate responsible AI metrics into ESG databases. Work with major database providers such as Refinitiv, MSCI and Sustainalytics to push for the inclusion of AI metrics. Demonstrate not only your desire for such AI disclosures but also your willingness to provide them on behalf of your company.
A growing body of research suggests that companies in the top quintile of current ESG rankings benefit from better financial performance, easier and cheaper access to capital and improved risk management relative to bottom-quintile performers. The integration of AI system disclosures into existing ESG frameworks will accelerate that competitive advantage for progressive governance and IR professionals and for investors.
Be a change-maker
As the prospect of a single, coherent, global system of ESG reporting appears on the horizon, there’s never been a better time to ensure that the impacts of unethical AI on consumers and citizens are recognized and the risks to companies (as major users of AI systems) are mitigated through timely disclosure.
Properly managed, ESG considerations will have a voice in every AI system decision via acknowledged impact-weighted scoring formulae, of the kind sketched below. If stakeholders – and particularly the investment community – can understand the impacts of AI in a way that is meaningful to the allocation of capital, the change-makers waiting in every company will have their opportunity to step up and push for these disclosures, in the interests of all stakeholders.
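As a purely illustrative example of what such a formula might look like, the short Python sketch below computes a single score for an AI system decision as a weighted average of normalized impact metrics. The metric names and weights are assumptions chosen for illustration; no standard-setter has yet published such a formula.

```python
# Minimal sketch of an impact-weighted scoring formula: one ESG score for
# an AI system decision, built as a weighted average of impact metrics.
# Metric names and weights are illustrative assumptions, not a standard.

def impact_weighted_score(impacts: dict[str, float],
                          weights: dict[str, float]) -> float:
    """Weighted average of normalized impact metrics (each scored 0 to 1)."""
    total_weight = sum(weights.values())
    return sum(weights[k] * impacts[k] for k in weights) / total_weight

impacts = {"privacy": 0.9, "bias": 0.6, "misinformation": 0.7}  # 1.0 = best practice
weights = {"privacy": 0.5, "bias": 0.3, "misinformation": 0.2}  # stakeholder priorities

print(f"ESG/AI score: {impact_weighted_score(impacts, weights):.2f}")  # 0.77
```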