
AI and Law Teaching

Artificial Intelligence (AI)

While there is no set definition of artificial intelligence (AI), the general consensus is that AI enables machines to attempt to imitate, or even surpass, human cognitive abilities, including sensing, reasoning, analyzing, conversing, problem solving, and possibly even creativity. AI is thus composed of many tools, processes, and algorithms that try to achieve these goals. Below is a list of terms that can aid in understanding AI.

Extractive AI 
  • Algorithms (coded sets of instructions) that select appropriate data based on the datasets they have been trained on. This is the established form of AI that most familiar machines and legal research tools have used for some time.

Generative AI 
  • AI that is capable of creating original content based on the datasets it has been trained on. This new content can take the form of text, images, audio, etc. ChatGPT, a chatbot that creates text in response to prompts, and DALL-E 2, a system that creates images, are examples of generative AI. Both of these tools were produced by OpenAI, an AI research company whose stated mission commits it to "beneficial and safe AI."
Machine Learning (ML)
  • Machine Learning is an ever-growing field of AI in which computers learn to make pattern connections between data and answers without explicit programming of rules. ML is accomplished through statistical techniques: the computer identifies patterns, makes predictions after learning, and derives rules from data analysis (a minimal illustrative sketch appears after this list of terms).
Natural Language Processing (NLP)
  • Natural Language Processing (NLP) uses large language models (LLMs) to parse, understand, and create human language, relying on techniques such as text identification and sentiment analysis to form fluent and relevant responses. LLMs are a type of neural network that learns language skills by analyzing large amounts of text from the Internet, with the aim of correctly predicting the next word (see the second sketch after this list). Methods by which NLP is accomplished include machine learning, language rules, and reinforcement training by humans who review and correct the AI system.
Hallucination 
  • A phenomenon in which generative AI systems fabricate information, go off topic, or produce responses that do not make logical sense. This occurs because of limitations in their infrastructure, datasets, or training. The quality of these systems is often dependent on the scope and quality of the data used in training. Hallucination is the primary concern identified with using generative AI and the reason users are advised to personally check all outputs.
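
For technically inclined readers, the following minimal sketch (in Python, using the scikit-learn library) illustrates the machine learning idea described above: the program is given labeled examples rather than hand-written rules, and it derives its own decision rule from the data. The sample numbers and labels are invented purely for illustration.

    # Illustrative sketch only: a classifier infers a pattern from labeled examples
    # instead of following rules a programmer wrote by hand.
    from sklearn.tree import DecisionTreeClassifier

    # Toy training data: [word count, citation count] for sample documents,
    # labeled 1 if the document is a court opinion and 0 if it is a news article.
    examples = [[5200, 40], [6100, 55], [800, 0], [650, 2]]
    labels = [1, 1, 0, 0]

    model = DecisionTreeClassifier()
    model.fit(examples, labels)            # the model learns its own decision rule from the data

    print(model.predict([[4800, 35]]))     # predicts 1 (court opinion) with no hand-coded rule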

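Similarly, the sketch below illustrates the next-word prediction task that LLMs are trained on, using the small, openly available GPT-2 model through the Hugging Face "transformers" library; the choice of model and prompt is an assumption made only for demonstration.

    # Illustrative sketch only: an LLM continues a prompt one predicted token at a time.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "The court held that the defendant"
    result = generator(prompt, max_new_tokens=10, num_return_sequences=1)
    print(result[0]["generated_text"])     # the prompt plus the model's predicted continuation
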
Legal research companies have been using extractive AI in their search algorithms and research tools for quite some time. Exactly how these tools are constructed is largely unknown, as the companies regard the technology as a trade secret to stay competitive; most of what is known comes from promotional materials. Listed below are some examples of this usage.

  • Lexis: Lexis Answers, Brief Analysis, Fact & Issue Finder, and other Lexis tools

  • Westlaw: WestSearch Plus, Litigation Analysis, Quick Check, and other West tools

  • Bloomberg: Smart Code, Docket Key, Litigation Analytics, Points of Law, and other BLAW tools

A few legal research companies have already released products that have integrated generative AI or have announced development of such tools.

  • Lexis: Integrated Lexis+

    • Lexis's AI chatbot will be trained on Lexis's own legal materials behind its paywall

    • Release expected in July 2023

  • Westlaw: Integrated Westlaw Precision

    • West has partnered with Microsoft, with integration into Office products

  • Casetext: CoCounsel 

    • First "AI Legal Assistant," powered by GPT-4

AI-Enabled Detection Tools

Because generative AI is so easy to access, AI detection technology has been created to combat it. This is of primary importance in education, where the foremost issue on many educators' minds is cheating. However, the majority of these AI detection tools not only use AI themselves for the task (often resulting in false positives) but also cannot seem to keep up with AI advancement as a whole. Some AI-enabled AI detection tools include:

  • Turnitin's AI Writing Detection
  • OpenAI's AI Text Classifier 
  • GPTZero
  • CatchGPT
  • Copyleaks

*Note that many generative AI tools (including detectors) will save any data inputted by users to train their AI models

Most AI-enabled detection tools admit they are not always accurate, and some even give a disclaimer that their results should not be the sole evidence when determining AI use. These tools are always at risk of becoming outdated as they work to catch up with upgraded models. They have also been proven incorrect by experiments and by real students who have suffered from the accusations that false positives bring. Some of these companies and tools have modified their trigger thresholds in response to this issue, potentially weakening their detection efficiency.

Most of these detection tools can be bypassed by simply prompting the AI to increase the complexity of its language or to rewrite its response in more literary language. Because language complexity is usually how these detection tools make their determination, a Stanford study found that they are inherently biased against non-native English writers. AI experts generally agree that AI-enabled detection is a losing battle, as it will not be able to overcome the ever-present obstacles of advancing AI and the potential for errors.

Prevention & Integrated Methods
  • In-Class Writing Assignments
  • Verbal Discussions & Testing
  • Multiple Drafts & Draft Comparison
  • Requiring the Word Processor's Version History (Microsoft Word, Google Docs, etc.)
  • Strict Citation Policy (including for AI outputs)

Other developers are considering options that allow students to use AI but monitor their overall time spent, research gathered, changes in their writing, etc. PowerNotes' "Insight" is a tool that does not use AI for detection but instead analyzes how students' work compares against their final submission. The most glaring issue with all of these detection methods is the increased time commitment required of educators and administrators in evaluating student work to ensure that no plagiarism or cheating has occurred.