Is ChatGPT All It’s Cracked Up to Be?

What’s all the buzz around ChatGPT? This chatbot, powered by an artificial intelligence (AI) deep learning model, has made waves lately. Users can ask questions about nearly anything, and the bot engages in conversational dialogue with organized, succinct responses.

ChatGPT’s grasp of language is remarkable and seems superior to that of any other AI model currently available. It was trained on text from the Internet and book collections, and users can provide feedback on responses, which further sharpens the tool’s command of language. General ChatGPT usage is currently free during this initial phase, with advanced membership options available for a fee.

People are already starting to use ChatGPT to answer everyday questions and even for some research purposes. To test its capabilities, users have asked ChatGPT to take standardized tests. While it did not pass all of them, it achieved passing scores on some, including the bar exam and the MCAT.

As with every emerging technology, it is crucial to balance the benefits and risks before putting it to business use. From a legal standpoint, that means understanding the unique collection obstacles and review benefits this technology presents for eDiscovery; other use cases for lawyers and their clients; and how ethics factors into any ChatGPT endeavor.

Putting ChatGPT to Work

When a new technology trends, concerns surface about whether it will replace humans, especially in the legal industry. That sentiment continues here: questions have already arisen about whether ChatGPT and similar AI language models could take over jobs that still require an element of human oversight. In the eDiscovery space, could ChatGPT review and flag responsive documents without human oversight? Or is that far-fetched?

Before answering that question, consider what ChatGPT quickly produced when asked to write a blog post about ChatGPT and eDiscovery:

ChatGPT, a cutting-edge language model developed by OpenAI, has quickly gained recognition for its ability to process and understand natural language. As a result, it has become an invaluable tool for a variety of industries, including the legal sector.

In the field of eDiscovery, the use of ChatGPT has revolutionized the way that attorneys approach the discovery process. With its advanced language processing capabilities, ChatGPT is able to sift through massive amounts of data, including emails, text messages, and social media posts, in a matter of seconds. This allows attorneys to quickly identify relevant information, saving both time and money.

This is just a third of what the chatbot actually returned. The full response also covered the key advantages of using ChatGPT in eDiscovery and closed with thoughts on how use cases for language models will grow in the future. It is probably clearer now why some analysts believe this tool could replace humans in higher-level roles. However, it is important to recognize that although language models such as ChatGPT may prove beneficial, limitations absolutely exist.

Understanding the Limitations

While AI models like ChatGPT are innovative, there are still risks and limitations to account for before using them for business purposes. Take the example above. It returned a lot of helpful information and even formatted the text like a blog post, but what did ChatGPT miss? It provided no information about the tool’s training history, limitations, risks, or ethical considerations. These are all things lawyers and their organizations must weigh before adopting new technology so they can make informed decisions and adequately represent their clients.

Here are five key limitations to grasp and monitor as advanced language models continue to emerge and evolve. Understanding them will help balance the benefits and risks so organizations can make educated assessments about appropriate use cases.

  1. Lawyers will still need to make some relevance and privilege determinations when using ChatGPT for litigation or investigatory review functions. There is currently no strong evidence that this technology can perform these human functions appropriately. As this type of model evolves, it could instead prove well-suited for first-pass review (similar to technology-assisted review, or TAR), with the goal of reducing costs and better balancing legal workflows; a rough sketch of what that could look like appears after this list.
  2. ChatGPT will need to be trained on specific document sets to be useful for a particular organization’s investigation or case. This will require a cost-benefit analysis and a comparison to tools already deployed, as significant training will likely be needed for it to be useful in this scenario.
  3. Sometimes the chatbot will still answer inquiries incorrectly. This could be detrimental when using it for document review, research, settlement evaluation, motion drafting, or contract drafting. That does not mean advanced language models will never be appropriate in such situations; decision-makers just need to weigh the risks and benefits of each use case, which will become easier as more studies and statistics become available.
  4. A good amount of training data will inevitably become stale, which means tools like ChatGPT will need to be continuously trained and updated in order to keep generating good information.
  5. Lawyers must always account for their ethical obligations when dealing with emerging technologies. Confidentiality and competence are two major duties implicated by technology usage. Putting confidential client information into a language model like ChatGPT will waive privilege and can violate the attorney-client relationship, because information included in a prompt is not deleted and can be used for training purposes. Consider these factors before using the tool for document review, contracting, language translation, and other use cases that involve confidential information. Client consent is also crucial when adopting any new technology, and lawyers need to remain informed about the benefits and risks in order to provide competent representation.
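
To make the first-pass review idea in item 1 concrete, below is a minimal sketch of how a language model could be prompted to flag potentially responsive documents. It assumes OpenAI’s public Python client (v1.x); the discovery request and document text are hypothetical placeholders, and this is illustrative only, not a production eDiscovery workflow. Consistent with items 3 and 5, every model output would still need human verification, and no confidential client material should be sent to a public API.

```python
# first_pass_review.py - illustrative sketch only, not a production eDiscovery tool.
# Assumes the `openai` Python package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are assisting with first-pass document review in litigation. "
    "Classify the document below as RESPONSIVE or NOT_RESPONSIVE to this "
    "discovery request: '{request}'. Answer with one word.\n\n"
    "Document:\n{document}"
)

def classify_document(document_text: str, discovery_request: str) -> str:
    """Ask the model for a first-pass responsiveness call on one document."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic output suits a classification task
        messages=[{
            "role": "user",
            "content": PROMPT.format(request=discovery_request, document=document_text),
        }],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Hypothetical example; real workflows would batch documents and log every call.
    label = classify_document(
        "Email from CFO regarding the Q3 revenue restatement...",
        "All documents concerning the Q3 2022 revenue restatement",
    )
    print(label)  # e.g. "RESPONSIVE" -- a human reviewer still makes the final call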

All of these limitations should provide some relief for those who fear this technology. There are simply too many risks and factors that still require legal expertise and human judgment. While it could gain adoption for creating templates, contract management, administrative automation, and some document review, employing it for legal research or brief writing seems unlikely. Tools like ChatGPT cannot account for factors such as a judge’s preferences, unique court processes, or client goals. And unless and until these types of AI models can be brought behind an organization’s firewall, there is no guarantee that sensitive information will be kept confidential.

What should legal organizations do now to stay ahead of the curve? Proceed with caution. Monitor developments with ChatGPT and similar models. Limit use cases until more is known. Create policies and training around the use of this technology. Advise corporate clients about the benefits and risks of using tools like this for business purposes. And, above all, make sure to have internal staff or external partners who understand the technical side of emerging technologies and can serve in a consultative role.

Written by:

Epiq