Navigating the Benefits and Risks of Generative AI
Expanding use of in-house AI requires law firms to monitor themselves
Artificial intelligence is rapidly weaving itself into our personal and professional lives – in applications that can leave us feeling empowered and overwhelmed in equal measure. As the release of ChatGPT in November 2022 made clear, generative artificial intelligence (AI) has impressive power to synthesise large amounts of information in seconds. Data-heavy organisations are seeing its potential to transform their work – even as they acknowledge the need to refine the technology and verify the accuracy of the results it produces.
Many law firms have been fine-tuning their own generative AI tools in the wake of ChatGPT’s launch, aiming to tap into the benefits of the technology while managing its potential risks. However, the adoption of technology to support legal work is nothing new for law firms. For years, firms have been using technology to automate tasks ranging from e-discovery to conflict checks. The adoption of generative AI is simply the next step in that process – but it does potentially introduce new complexities for firms, their clients and employees.
“Law firms have to consider a range of interests as they use generative AI in their work,” said Sharon Glynn, director and underwriter in the Bond & Specialty department at Travelers Europe. “Some clients may insist that firms use it to drive cost savings, while others may insist on firms not using it because they prefer the benefit of working with a human, or are concerned about the risks to their data. Firms must consider their workforce too. They must balance the needs of current and future employees who are eager to embrace generative AI with those employees who are less enthusiastic. In the process, they must also be mindful of the evolving tools they need to compete in the marketplace.”
Firms forge ahead with generative AI
A number of firms have already been navigating these opportunities and challenges in real time with their own generative AI tools. Allen & Overy, for example, released its “Harvey” model in early 2023 (see sidebar). Travers Smith developed a ChatGPT-like tool that it is using internally and also making available to other firms.
Specifically, the legal technology team at Travers Smith built a chatbot, YCNBot, as an alternative to ChatGPT. While the firm isn’t using YCNBot for client work yet, it is encouraging its lawyers to experiment with it internally and provide feedback. The firm made the code downloadable via GitHub and free through an open-source licence. This allows other firms to use the code to plug into the application programming interfaces (APIs) of Microsoft and OpenAI, then tailor it to their own needs — including enhanced controls around compliance, security and data privacy.1
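YCNBot’s actual code is available on GitHub; purely as an illustration of the kind of firm-side control the article describes, the sketch below shows a hypothetical wrapper that redacts client-identifying data from a prompt before building an OpenAI-style chat request (the model name, system prompt and redaction rule are our assumptions, not YCNBot’s implementation):

```python
import re

# Hypothetical guardrail: strip obvious client-identifying data (here, just
# e-mail addresses) from a prompt before it leaves the firm's systems.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text: str) -> str:
    """Replace e-mail addresses with a placeholder."""
    return EMAIL_RE.sub("[REDACTED EMAIL]", text)

def build_chat_payload(user_prompt: str, model: str = "gpt-4") -> dict:
    """Build the JSON body of an OpenAI-style /chat/completions request.

    The payload is returned rather than sent, so an internal portal can
    log and review it before anything reaches the external API.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are an internal legal research assistant."},
            {"role": "user", "content": redact(user_prompt)},
        ],
    }

payload = build_chat_payload("Summarise the lease for jane.doe@client.com")
print(payload["messages"][1]["content"])
# → Summarise the lease for [REDACTED EMAIL]
```

Returning the payload instead of sending it directly is one simple way a firm can keep a human-reviewable checkpoint between its lawyers and an external model.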
Generative AI tools are triggering the creation of new roles within firms as well – and signalling a new commitment to the ongoing development and refinement of the technology.
In October, Eversheds Sutherland announced it had created a number of new positions to support AI. The firm appointed Nasser Ali Khasawneh, tech litigation partner at the firm, as its first Global Head of AI, a position that reports to Co-CEOs Lee Ranson and Mark Wasserman. In conjunction with that announcement, the firm said it had formed a global AI leadership team, a global AI task force, and a global generative AI skills development programme.
Ranson and Wasserman described generative AI as a “catalyst for a fundamental shift in the legal profession” and “an imperative for the legal industry,” while also emphasising they would be taking a “people plus technology” approach.2
Think of AI as amplified intelligence
The Eversheds Sutherland announcement demonstrates how firms are treating generative AI as a powerful tool that serves firms best when combined with human oversight and contributions. But it’s difficult to regulate a technology that is evolving faster than humans can understand it. Law firms, which are in the business of defining rights and responsibilities and protecting against risk, are in an important position when it comes to identifying the pitfalls of overreliance on AI.
AI impacts more functions of organisations than we may even realise, so firms will need to continuously recalibrate their efforts to monitor it. The regulation of AI applications isn’t simply about getting legal facts right. There are wider-ranging consequences that may impact how well a firm keeps its commitments to stakeholders and how the firm’s leadership and workforce evolve over time.
For instance, Amelia Kallman, a futurist who addressed Travelers legal clients at a conference earlier this year, says many of her client firms have set goals to be carbon neutral by 2030, but they haven’t figured AI’s carbon footprint into those goals – and its footprint is substantial.
“The more powerful AI is, the more energy it requires,” she said. “While data is still being collected, research indicates a generative AI search requires 4-5 times more energy than a regular search-engine query. The energy it took to train GPT-3, with 175 billion parameters, was equivalent to driving 123 gas vehicles for one year, generating 552 tons of CO2. Comparatively, GPT-4 was trained on over a trillion parameters. I believe that while it will inevitably contribute to our footprint, it is also the same technology that will help us to solve some of our greatest climate challenges. It’s up to us to be strategic and responsible today to ensure sustainability for tomorrow.”
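The two numbers in that comparison are at least internally consistent, as a quick back-of-the-envelope check shows (the roughly 4.6-tonne annual figure for a typical passenger vehicle is our assumption for comparison, not a number from the article):

```python
# Sanity-check the quoted comparison: 552 tonnes of CO2 for training GPT-3,
# said to equal 123 petrol vehicles driven for one year.
TRAINING_CO2_TONNES = 552
VEHICLES = 123

per_vehicle = TRAINING_CO2_TONNES / VEHICLES
print(f"Implied emissions per vehicle-year: {per_vehicle:.2f} tonnes")
# ≈ 4.49 tonnes per vehicle per year – close to the ~4.6-tonne figure
# commonly cited for a typical passenger car, so the comparison holds up.
```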
Within firms, AI use also has the power to impact the makeup of the organisation’s leadership and overall workforce. A study by Revelio Labs has found that AI is set to disproportionately replace jobs held by women – largely because women today are less likely to hold leadership and decision-making positions.3 If women or other people in underrepresented groups aren’t at the table when firms are discussing the potential applications of AI across the business, their concerns are more likely to be overlooked.
But this isn’t inevitable. Kallman said: “Significant evidence shows diversity is an advantage, and ultimately the people at the end of these decisions can choose not to let this technology set us even further back in terms of equality and diversity in the workplace.”
Maximise value, minimise risk
So how can firms best step into this new territory and embrace the opportunities it holds – while appreciating the risks it can introduce to legal work? Kallman suggests firms start by identifying potential use cases where AI can accelerate, enhance and improve one aspect of their business. Then assign a team to manage the planning, implementation and monitoring of the technology’s use throughout a specified life cycle.
The next step is establishing an internal governance strategy that gives the firm a means of auditing the internal use of AI and retaining human oversight. This could be an IT strategy that builds a portal or an internal management system. It should be consciously aligned with the firm’s brand values so it accurately reflects and preserves the organisation’s culture, in addition to considering the needs of clients and employees and managing the risk of third-party suppliers.
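What that auditing mechanism looks like will vary by firm, but at its simplest it is a log that ties every AI interaction to a named person and a human review step. The sketch below is a minimal illustration of that idea only – the field names and structure are our assumptions, not a description of any particular firm’s system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIUsageRecord:
    """One auditable entry in a hypothetical internal AI-usage log."""
    user: str                          # who ran the query
    tool: str                          # which AI tool was used
    purpose: str                       # e.g. "contract review", "research"
    reviewed_by: Optional[str] = None  # human reviewer, once checked
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def human_reviewed(self) -> bool:
        return self.reviewed_by is not None

# A governance portal could maintain such a log and surface unreviewed use.
log: list = []
log.append(AIUsageRecord(user="jsmith", tool="YCNBot",
                         purpose="internal research"))

pending = [r for r in log if not r.human_reviewed]
print(f"{len(pending)} record(s) awaiting human review")
# → 1 record(s) awaiting human review
```

Keeping the review step as an explicit, queryable field is one way to make “retaining human oversight” an auditable fact rather than a policy aspiration.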
Having such a framework in place for monitoring AI applications can help a firm to more confidently embrace the opportunities generative AI presents for increasing profits and reducing expenses, while also helping it keep an eye on the evolving risks it may pose.
“Generative AI is ushering in new ways of working for law firms and we’re interested in how firms are adapting to these changes and preparing for the future,” Glynn said. “We’re not necessarily expecting them to be at the front of the pack in adopting new technology, but we’re also not expecting them to turn off the lights and shut the doors in response to these changes, because that’s not a good approach to risk either. It’s about proceeding with change but maintaining risk awareness – much like we would approach any new way of working.”
Law 2.0: AI applications in law firms
As law firms adopt new AI tools to support their work, their internal policies will become more important in monitoring for inaccuracies, bias and other risks. Here are some of the AI-powered tools law firms are using today to manage a range of legal tasks:
Spellbook: Like ChatGPT but for law, this tool has been trained on billions of lines of legal text. It reviews and suggests terms for contracts, detects missing or aggressive terms, clauses, and definitions, and even offers negotiation suggestions.
Rainbird: This tool automates decision-making based on historic thought processes. It allows lawyers to scale expertise while providing evidence of reasoning, variables and quantitative impact often necessary for ensuring compliance. Law firm DAC Beachcroft uses Rainbird to power an intelligent claims triage tool called AlDan, which helps sort potentially valuable recovery claims for the firm’s insurer clients.
Justice Intelligence Platform: This tool finds cases that closely align with a firm’s expertise and scans data to predict a case’s outcome and financial value, helping firms decide which cases could be most beneficial to accept.
CoCounsel: This generative AI-powered legal assistant can research a range of legal questions, draft memos and provide detailed analysis with links to relevant cases. It also highlights cases that have been negatively treated in court.
In-house AI platforms: Law firms are developing their own ChatGPT-like models to support various aspects of legal work. Allen & Overy’s “Harvey” model is just one example. Released in early 2023, Harvey builds on GPT-3 technology and enables lawyers to create legal documents or conduct research by providing simple instructions using natural language. The firm says the tool enhances aspects of legal work including contract analysis, due diligence, litigation and regulatory compliance.
The information provided in this article is intended for use as a guideline and is not intended as, nor does it constitute, legal or professional advice. Travelers does not warrant that adherence to, or compliance with, any recommendations, best practices, checklists, or guidelines will result in a particular outcome. Travelers does not warrant that the information in this presentation constitutes a complete and finite list of each and every item or procedure related to the topics or issues referenced herein. In no event will The Travelers Companies, Inc. or any of its subsidiaries or affiliates be liable in tort or in contract to anyone who has access to or uses this information.
Sources
1 https://www.traverssmith.com/knowledge/knowledge-container/travers-smith-launches-innovative-open-sourced-ycnbot/
2 https://www.eversheds-sutherland.com/lists/shownews.html?News=nasser_ali_khasawneh_appointed_as_global_head_of_ai5393&country=uk
3 https://www.reveliolabs.com/news/macro/ai-exposed-jobs-employ-more-women/