Don't Hire ChatGPT to Be Your Lawyer

The latest story showing that ChatGPT can't be trusted to run without human quality assurance comes from the courthouse.

A lawyer used ChatGPT to prepare a court filing, and it created facts out of thin air.

In this week's edition of LawOps: why you shouldn't place absolute trust in ChatGPT for your most important work, plus big news in the world of AI that will affect law firms.

News of the Week

The important things your firm needs to be watching for

Bing Chat Update in Microsoft Windows 11 (Copilot)

Microsoft announced they're adding AI features directly into Windows 11, introducing "Windows Copilot" in their latest update. Many will remember the AI assistant of old, nicknamed "Clippy": an animated paperclip that meant to be helpful but often wasn't.

If your firm runs on Windows 11, be ready for everyone to have access to generative AI tools they can use in their daily work.

Google Ads getting AI Interface

Google announced a new chat interface coming to Google Ads that lets advertisers set up campaigns faster and more easily.

Some of these features will be very helpful to those looking to set up small campaigns and start testing ad performance.

Bing Plugins for Windows

Microsoft is continuing to leverage its partnership with OpenAI by bringing plugins directly into Windows.

Developers will be able to submit plugins that will work across all major Windows/Bing products, which could change how people search for certain products.

Bing Browsing for ChatGPT

In our own testing so far, Bing browsing within ChatGPT has been slow and unreliable. That doesn't mean it will stay that way.

Make sure your site is indexed and ranking well in Bing for core topics just as you would with Google.

If people continue to adopt ChatGPT and find value in its search features, this could become a replacement for traditional search engines.

Don't Hire ChatGPT as Your Lawyer

In a recent case that has been making headlines, a lawyer used ChatGPT to file a lawsuit in federal court.

In a shock to all parties, the brief produced by the AI was found to reference non-existent cases, offering a crucial lesson in understanding and responsibly using AI technology.

A Misstep in the Courtroom

ChatGPT by OpenAI has seen wide-ranging applications across various sectors. But this incident demonstrates the possible pitfalls of over-relying on AI, especially when a client's case and a lawyer's reputation are at stake.

The lawyer in question used ChatGPT to generate a brief and filed it in federal court. However, the AI ended up referencing imaginary cases, creating a legal faux pas that could cost the lawyer their license.

This incident underscores how a lack of understanding of AI workings can lead to significant mistakes.

What Led to this AI Mishap?

Language models like GPT are amazing at predicting the next logical word in a sentence based on the words that precede it.

But these models do not actually 'know' facts. They only learn that certain words tend to appear in close proximity to each other based on their training data. This can lead to what is known as an 'AI hallucination'.

AI hallucinations occur when an AI model generates outputs – facts, narratives, or other data – that appear convincingly real but are actually false or nonsensical.

This is a side effect of the model's training process and its nature of producing content that statistically seems right, even if it might be factually incorrect. In the lawyer's case, this led to the generation of nonexistent legal cases.
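
To make the mechanism concrete, here is a minimal sketch: a toy bigram model, vastly simpler than GPT, trained on a few made-up legal-sounding sentences. It shows how chaining statistically likely next words produces fluent text with no notion of whether the result is true.

```python
import random

# Toy bigram "language model": it learns only which word tends to
# follow which in its training text; it has no notion of facts.
training = (
    "the court ruled for the plaintiff . "
    "the court ruled for the defendant . "
    "see doe v roe for the standard ."
).split()

# Record, for each word, the words observed to follow it.
follows = {}
for a, b in zip(training, training[1:]):
    follows.setdefault(a, []).append(b)

# Generate text by repeatedly picking a statistically plausible next word.
random.seed(0)
word, output = "the", ["the"]
for _ in range(10):
    word = random.choice(follows.get(word, ["."]))
    output.append(word)

# Every adjacent pair of words appeared in the training data, yet the
# sentence as a whole may never have existed and may assert nothing true.
print(" ".join(output))
```

GPT-class models do the same kind of next-word prediction at vastly greater scale and sophistication, which is why their fabrications read so convincingly.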

Understanding and Preventing AI Hallucinations

Determining fact from fiction in AI outputs can be challenging because AI models are remarkably good at making fake items appear real. However, there are strategies to help prevent AI hallucinations:

  • Custom Language Models: These are designed to operate within a specific knowledge domain, reducing the likelihood of generating false or nonsensical outputs.
  • Embeddings in Prompt Context: This involves retrieving relevant source material (located via embedding similarity search) and including it in the prompt, so the model answers from real documents rather than inventing them.
  • Fine-Tuning Models: This process involves training the AI model on a specific task or domain of data, which can improve its accuracy and reliability within that particular context.
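
As an illustration of the second strategy, here is a minimal sketch of grounding a prompt in retrieved source material. The word-count "embeddings" and the sample documents are toy stand-ins (real systems use a learned embedding model and a vector database), but the flow is the same: find the most relevant source, then instruct the model to answer only from it.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a word-count vector. This stands in for a
    # learned embedding model purely to illustrate the idea.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question, documents):
    # Rank the firm's own documents by similarity to the question,
    # then ground the prompt in the best match so the model answers
    # from supplied text instead of inventing citations.
    best = max(documents, key=lambda d: cosine(embed(question), embed(d)))
    return (
        "Answer using ONLY the source below. If the answer is not "
        "in the source, say you do not know.\n\n"
        f"Source: {best}\n\nQuestion: {question}"
    )

docs = [
    "Smith v. Jones (2019) held that the filing deadline is 30 days.",
    "Our firm's style guide requires Bluebook citation format.",
]
prompt = build_prompt("What is the filing deadline?", docs)
print(prompt)
```

The instruction to answer only from the supplied source, combined with retrieval of real documents, is what keeps the model from filling gaps with invented cases.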

The Takeaway: Have a Plan and Don't Go It Alone

While the above strategies can help prevent AI hallucinations, it is important to remember that there is no 'one size fits all' solution when it comes to applying AI.

ChatGPT and similar models are powerful tools for general purposes, but more specialized tools and workflows may be better suited for specific goals.

The key takeaway from this cautionary situation is that AI should be used as a tool to augment, not replace, human expertise.

Businesses seeking to harness the power of AI should ensure they have experienced teams at the helm who understand how to apply AI in a manner that is appropriate, responsible, and optimized for their specific needs.

In the end, we are reminded that AI is an incredibly powerful tool that is reshaping how we work, communicate, and solve problems.

As we move forward into an increasingly AI-driven world, understanding these technologies and how to use them responsibly is more critical than ever.