A Canadian lawyer is facing criticism for submitting fictitious court cases generated by an AI-powered chatbot.

The lawyer used an AI chatbot for legal research, and it produced fake court cases, an incident that underscores the risks of relying on unproven technologies in legal proceedings.

Chong Ke, a Vancouver lawyer, is under investigation after allegedly using ChatGPT to prepare legal submissions in a child custody case before the British Columbia supreme court.

According to court documents, Ke was representing a father who wanted to take his children on an overseas trip but was in a dispute with the children's mother, from whom he had separated. Ke allegedly asked ChatGPT for examples of past cases that might apply to her client's circumstances. The chatbot, developed by OpenAI, returned three results, two of which Ke submitted to the court.

Despite repeated requests, the lawyers representing the children's mother were unable to find any record of the cases.

When confronted with the discrepancies, Ke backtracked.

“I was not aware that these two cases could be erroneous. After my colleague alerted me that they could not be found, I conducted my own research and could not identify the problem either,” Ke stated in an email to the court. “I did not intend to mislead the opposing lawyer or the court, and I sincerely apologize for my mistake.”

Chatbots such as ChatGPT are widely used and trained on vast amounts of data, but they remain prone to producing false information, errors commonly referred to as “hallucinations”.

The lawyers representing the mother said Ke's conduct was reprehensible and deserving of rebuke because it forced them to spend significant time and money verifying whether the cited cases were genuine.

The judge overseeing the case declined to award special costs, noting that such a step is reserved for reprehensible conduct or an abuse of process by a lawyer.

Justice David Masuhara said that citing fabricated cases in court filings and other materials is tantamount to making a false statement to the court, and that, left unchecked, such conduct can lead to a miscarriage of justice.

He found, however, that the opposing lawyers were well resourced and had already filed substantial volumes of material in the case, so there was no realistic prospect that the fabricated cases would have slipped through unnoticed.

Masuhara said Ke's actions had generated significant negative publicity and that she had been naive about the risks of using ChatGPT, but he acknowledged that she had taken steps to correct her errors.

“I do not believe she had any intention of deceiving or misleading,” he said. “I accept Ms Ke's apology to the lawyers and the court as genuine. Her remorse was evident from her conduct and oral submissions in court.”

Although Masuhara declined to award special costs, the Law Society of British Columbia has opened an investigation into Ke's conduct.

Spokesperson Christine Tam said in a statement that while the Law Society recognizes the potential benefits of using AI in the delivery of legal services, it has issued guidance to lawyers on its appropriate use and expects them to meet the same standards of conduct as any competent lawyer when relying on AI to serve their clients.

Source: theguardian.com
