How will new AI technologies shape the future of law? And how can legal professionals ensure that they adhere to legal ethics while they benefit from new technologies?
In part III of our new series, Filevine’s legal futurists Dr. Cain Elliott and Dr. Megan Ma, along with Senior Director of Product Alex McLaughlin, help lawyers answer these questions — and prepare for the future of their practice.
If you missed the previous parts, visit the links below, or view the full AI, Ethics, and Legal: A Deep Dive Into the Future of Legal Tech webinar on YouTube:
- Part I - An AI Primer for Legal Professionals
- Part II - Unveiling the Complexity of Bias and Intellectual Property in AI
Exploring the Implications of AI on Legal Malpractice Insurance
AI is likely to impact legal malpractice insurance. While the precise effects are yet to be determined, experts anticipate significant changes in the insurance landscape as AI becomes more prevalent in the legal industry.
Insurance carriers are expected to adapt their policies and coverage to account for the risks associated with AI usage, and policyholders may face new requirements and inquiries regarding their AI implementation during the renewal process.
The Uncertainty and Anticipated Changes
When asked about the influence of AI on legal malpractice insurance, Alex McLaughlin admits that the answer remains uncertain. However, he emphasizes that the impact is inevitable and predicts that insurance carriers will be working to keep pace with the evolving landscape.
Policyholders can expect adjustments in their insurance policies, such as the inclusion of riders specifically addressing AI usage. The increased scrutiny surrounding AI adoption highlights the growing importance of understanding and managing the risks associated with this technology.
The Potential for Essential AI Tooling
Dr. Cain Elliott raises an intriguing perspective, suggesting that certain AI tooling may become indispensable in the legal profession. Similar to guidelines set by bar associations to stay up-to-date with technology, the use of AI tools could eventually be viewed as a necessity.
Dr. Elliott draws a parallel to the debate surrounding electronic assistance in discovery, where the sheer volume and accessibility of information made adopting technology unavoidable. By the same logic, declining to leverage AI tools could itself create risks and liabilities for a practice.
Drawing Parallels with Cyber Insurance
It’s instructive to consider the parallels between the emergence of cyber insurance riders and the anticipated changes in legal malpractice insurance due to AI. Alex McLaughlin recalls the initial vagueness of cyber insurance questions and requirements, which eventually evolved into comprehensive evaluations to determine cybersecurity coverage.
The introduction of AI-related inquiries and considerations in legal malpractice insurance is expected to follow the same trajectory. While the exact nature of these changes remains uncertain, the evolving nature of AI will require corresponding adaptations in insurance policies.
As AI continues to shape the legal industry, the implications for legal malpractice insurance are becoming increasingly relevant. Policyholders should anticipate changes in their insurance coverage and requirements, with insurers adapting their policies to address the risks associated with AI usage.
While uncertainties persist, the parallel with cyber insurance serves as a reminder of how insurance landscapes can evolve in response to emerging technologies. Proactive engagement with insurers, staying informed about AI-related insurance developments, and implementing responsible AI practices will be crucial for legal professionals navigating these shifts.
The Intersection of Legal Malpractice and Generative AI: Accountability and Prevention
Ensuring Accountability in Legal Drafting
The Mata v. Avianca case serves as a cautionary example: a lawyer used ChatGPT to draft a brief and included citations to fictitious cases. The incident highlights the need for control measures and accountability so that lawyers retain responsibility when using generative AI tools.
Trusting AI-generated content without thorough fact-checking and verification can expose a lawyer to legal malpractice claims.
Building on AI as a Tool, Not a Crutch
Generative AI should be seen as a tool rather than a substitute for human judgment and expertise. Relying solely on AI-generated output can compromise the quality and accuracy of legal work and expose practitioners to malpractice claims. Generative AI can guide legal professionals with valuable insights and suggestions, but it should never be relied upon for final output.
The Role of Citing Sources and Engaging Service Providers
Dr. Cain Elliott shares his experience of staging a debate between chatbots in which the ground rules required them to cite sources. While the citations may not always be genuine, requiring them makes fact-checking and verification possible. It is also crucial for legal professionals to engage in discussions with service providers to understand the limitations and expectations of the AI tools they use.
Clear communication and awareness of the capabilities and constraints of generative AI are essential for responsible and effective usage.
This discussion emphasizes the need for control, accountability, and responsible practices when incorporating generative AI into legal work. Fact-checking, source verification, and engaging in proactive discussions with service providers are crucial steps to prevent legal malpractice concerns.
By treating AI as a tool rather than a crutch, legal professionals can harness its potential while maintaining the integrity and quality of their work.
AI as an Assisting Tool, Not an Encyclopedia
AI is widely used as a search engine or an encyclopedia. But wise legal practitioners use it instead to assist and support their work, not as an authoritative source of facts. Generative AI systems reconstruct information and produce fluent natural language; they do not retrieve verified records. It is crucial for users to recognize these capabilities and limitations, especially when AI is integrated into platforms like Filevine.
Addressing the Impact on Fraud and Criminal Misuse
Legal experts are also concerned about the ways AI can be used to further fraud and other criminal activity, such as fake profiles, false claims, and forged documents created with AI technology.
Preventive measures are necessary to counter unauthorized development and misuse of AI platforms for criminal activities. Balancing the need for safeguards with the positive contributions of AI requires careful consideration from lawmakers.
Embracing Change and Preparing for the Future
The adoption of AI technologies will bring significant changes to various practice areas within the legal profession. Professionals must adapt and explore new approaches to effectively incorporate AI tools into their work.
Understanding the limitations of AI tools, addressing concerns related to fraud prevention, and embracing forthcoming changes are critical for legal professionals. By navigating the evolving landscape of AI with caution and proactive measures, the legal community can harness its potential while mitigating risks and ensuring accountability.
Stay tuned for future blog posts in this series exploring how legal professionals can weigh the potential gains and harms of embracing AI technology.