Disclaimer: This Inside South Florida segment is sponsored by Demesmin and Dover Law Firm. All opinions expressed are those of the advertiser and do not necessarily reflect the views of WSFL-TV.
Artificial intelligence may be changing the way we live and work, but can tools like ChatGPT actually be used as evidence in court? Inside South Florida sat down with Hunter Rhyne and Christian Lexima from Demesmin & Dover Law Firm to break down this timely legal question.
The attorneys explained that courts won’t accept AI-generated text as inherently truthful. Instead, any content produced by AI would be tied back to the person using it. For example, if someone used AI to draft a threatening message or scam email, the liability still falls on the individual, not the bot.
They also warned about relying on AI for legal research. While convenient, AI tools can sometimes produce “hallucinations”: made-up citations or inaccurate legal references. Citing those in court could lead to fines or sanctions.
That doesn’t mean AI has no place in the legal world. Rhyne and Lexima see it as a tool for drafting and research, but one whose output must be double-checked against trusted sources to avoid costly mistakes.
For anyone navigating both the fast-moving AI landscape and the law, their advice is to consult a professional before relying on any digital assistant.
For more information, visit youraccidentattorneys.com or call 866-954-6673.