In-house legal and governance teams are among the many groups taking a hard look at artificial intelligence (AI) and figuring out how to make the most of the opportunities it offers while mitigating the risks it presents.
At this stage, AI appears to be more widespread in attorneys’ minds than in their offices. Research conducted around this time last year for a report by FTI Consulting and Relativity found that 20 percent of general counsel and chief legal officers were using some type of AI, with a focus on contracts, e-discovery or privacy matters. That figure was lower than in previous surveys, although two-thirds of respondents said they expected to increase their use of AI in the coming years.
Similarly, an upcoming research report by Corporate Secretary on corporate transactions and boards finds that just 8 percent of respondents say their company has so far used AI and machine learning for M&A due diligence. The uptake is more pronounced at mega-caps, where almost a third (29 percent) of respondents say they have adopted the tools for such work.
What could AI do for legal?
Devika Kornbacher, partner with Clifford Chance and co-head of the firm’s tech group, echoes that last finding, commenting that the use of AI depends on the size and nature of an in-house legal team. Larger groups are more likely to be using non-public tools for work such as producing initial drafts of documents, including contracts and letters, or running analytics and producing reports, she says.
Regina Sam Penti, partner with law firm Ropes & Gray, says in-house counsel are probably using machine learning-based AI in areas such as e-discovery and document review.
According to Kari Endries, assistant secretary and managing counsel with Chevron Corporation, her company already uses AI outside the governance space, such as in robotic ‘dogs’ that can operate in places too hazardous for humans, and the legal team uses it to spot unusual elements in contracts. Now she and her colleagues are working with Chevron’s IT team to explore new ways to use the technology.
One option would be litigation research. Endries’ team has built a database covering the more than 2,000 companies Chevron has operated over time, and a tool that could crawl through all those records would save a lot of time, she says. The technology could also be used to answer stockholder questions, such as queries about a stock certificate issued decades ago by one of those companies.
At present, Chevron controls almost 900 companies, and AI could be very helpful in tracking the resignations of their many directors and officers by linking to a human resources database, Endries notes.
Govenda CEO Marion Lewis says there has been little change in recent years in the tools available to meet the evolving demands governance teams face. Boards have been meeting more frequently, particularly since the onset of the Covid-19 pandemic, and they are dealing with new issues such as cyber-security and climate change, all of which means they need more information and guidance, she notes. This in turn means corporate secretaries need to manage more information and tie it to strategic initiatives, she explains: ‘AI seemed like a no-brainer.’
In response, Govenda earlier this year launched a tool designed to support governance management by focusing on corporate secretaries’ work in board portals. It initially handled administrative matters such as scheduling board meetings, tracking attendance and logging which directors have read relevant materials, and has since added a function to show all documents on a specific initiative or topic.
Later this year, the tool is due to start creating drafts of board meeting minutes in a preferred format. It will also expand to cover committee charters, managing their schedules and compliance. The aim, Lewis says, is to free governance teams to focus on strategic issues.
AI safety: Putting guardrails up
Lawyers are acutely conscious of the need to set up guardrails to prevent the misuse of AI, accidental or otherwise, by company insiders or malevolent outsiders. The dangers are plentiful. Penti, for example, emphasizes that it would be risky for companies in heavily regulated industries to rely on AI where accuracy and compliance are at stake.
Kornbacher says the main areas of concern posed by the use of AI – not just by in-house counsel but by any employee – center on security issues such as cyber-attacks on AI tools or the use of algorithms that reach outside the company, as well as data privacy and intellectual property (IP) leaks, particularly for companies, such as those in healthcare, that rely heavily on their own IP.
Attorneys also have specific duties imposed by state bar associations, Kornbacher notes. For example, they have a duty of confidentiality to their clients, which they must not breach via the use of AI. They must also meet a requirement of technical competence, meaning that they need to have a sound understanding of any AI tools they use.
In addition, Penti says AI raises ethical issues. These include the potential for tools to reinforce bias and discrimination in areas such as recruitment, and fears that AI might manipulate human behavior.
Guardrails around the use of AI should include human review of tools’ work, attorneys agree. They should also include ensuring sufficient cyber-security protections are in place, Kornbacher says. In addition, companies need to develop policies spelling out what employees may or may not do with AI; for example, prohibiting them from putting private information into a tool.
But that is not enough, Kornbacher warns: ‘An AI policy is the place to start but a company needs a governance framework to implement it. Otherwise, it’s just a piece of paper.’
She notes that clients are asking questions about the use of AI both by their legal teams and by their companies as a whole. They are keen to know what types of guardrails to put in place and how to respond if there is pushback against aspects of their AI policies. ‘The only thing that’s worse than not having a policy is not following it,’ Kornbacher comments.
Job destroyer?
A widely held view is that AI will reduce or eliminate the need for lawyers – and other professionals – to spend time on repetitive, low-value work. This has naturally led to concerns that jobs will be cut, particularly as CFOs seek cost savings. But there is also a belief that in-house and external counsel will be freed up to focus on more value-added advisory work.
‘Like [with] all new technologies, some jobs go away but other jobs get created... and I think that’s what is going to happen in [law] firms and what is going to happen in-house,’ Anthony Davis, of counsel with Clyde & Co, said in the May episode of Corporate Secretary’s podcast Governance Matters.
Similarly, Kornbacher comments that the skills required of lawyers will change but that the profession isn’t going away in the near future.
Endries is also not concerned about AI’s impact on staffing levels. She notes that she expected DocuSign, which her team began using more widely during the pandemic, to save some work, but the number of requests the team receives has grown exponentially, filling that gap.
In the meantime, Penti notes that lawyers have made use of technological advances in the past and wouldn’t want to go back to the days of having to red-line documents manually.