The recent emergence of Generative AI tools such as ChatGPT has been met with excitement, trepidation, and even doomsday predictions. Certainly, the benefits and risks associated with integrating AI tools into the workforce have been the subject of much debate across industries.
If you are using or considering using AI tools to generate content for your business, you need to be mindful of your business’s potential legal risks. In this article, I discuss some of the legal risks associated with what I refer to as the “unreliability factor” of Generative AI.
The unreliability factor
ChatGPT and other similar AI tools generate content from information gathered primarily from the Internet, which by its nature includes false, inaccurate or misleading information. Moreover, it is common knowledge that generative AI tools have a propensity to “hallucinate” – the industry’s term for when the AI tool “makes things up”. Thus, the content generated by the AI tool can contain outdated, inaccurate or false information. This “unreliability factor” gives rise to potential legal liability for a business that uses the content in its operations or publications.
Civil liability in business dealings
When a business includes false or inaccurate information in its correspondence, contracts, publications, or marketing materials, it may give rise to potential civil liability for fraud, misrepresentation, breach of trust, breach of fiduciary duty, or breach of contract (depending on the circumstances).
Liability for defamation
If a business publishes content that was generated by an AI tool and contains false or misleading information about another person or business, it can face liability for defamation. For example, in a recent defamation lawsuit brought in the US against OpenAI (the company behind ChatGPT), the plaintiff alleged that ChatGPT falsely described him as having been accused of embezzling funds and committing fraud.
Misleading advertising or marketing practices
If your business uses AI to create marketing or advertising content that contains inaccurate information, it may give rise to civil, regulatory or even criminal liability for misleading marketing and advertising practices. For example, pursuant to Canada’s Competition Act, it is against the law to make materially false or misleading representations to promote a product, service or business interest, including online and in-store advertisements, direct mail, social media messages, promotional emails, and endorsements, among other things.
Professional liability risks
If you are a professional, such as an accountant or a lawyer, and you use an AI tool to generate content for your practice that contains inaccurate, false or misleading information, you may face professional negligence claims. You may also face regulatory and professional conduct liability for violating integrity and competency rules, not to mention significant potential harm to your professional reputation.
For example, in a recent case that made headlines around the world, US lawyers had used ChatGPT to generate court documents that included references to fake caselaw. Not only did the lawyers' written arguments cite the fake cases, ChatGPT also generated entirely fabricated judicial decisions, which the lawyers then filed with the court. The lawyers were publicly chastised by the court and ordered to pay fines.
Tips for mitigating the risks
Until Generative AI tools become sufficiently reliable, it is important to take steps to mitigate the risks associated with the unreliability factor. Businesses should implement policies on the use of AI in the workplace. Such policies can range from prohibiting the use of these tools for generating work-related content, to limiting their use to specific circumstances or purposes. If a business permits the use of Generative AI in the workplace, however, it should require its employees to thoroughly fact-check the content against external, reliable sources before using or publishing it.
- The unreliability factor of using AI in the workplace - June 27, 2023