OpenAI's ChatGPT:
Released in November 2022, this free chatbot uses the GPT-3.5 LLM and is trained on open Internet data through September 2021. GPT stands for "Generative Pre-trained Transformer." GPT-4, the upgraded model, is currently available only through a paid subscription and is a marked improvement on its predecessor. To compare, ChatGPT/GPT-3.5 scored in the 10th percentile on the Uniform Bar Exam (UBE), while GPT-4 scored in the 90th percentile.
Google's Bard:
Introduced for free in May 2023, Google's chatbot uses LaMDA (Language Model for Dialogue Applications), a Transformer-based AI model. Google invented the Transformer model in 2017, which was a "breakthrough in machine learning." Because Google released this work as open source, the Transformer model has served as a framework for many other systems, including ChatGPT.
Microsoft's Bing:
Launched in February 2023, the Bing search engine integrated OpenAI's newer GPT-4 model into both its search function and its core search algorithm. Microsoft has also added its own "Prometheus" technology to further tailor results and improve safety. Users can choose from precise, balanced, or creative tones, which will generate different responses to the same prompt.
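Microsoft has not disclosed how Prometheus implements these tones, but one generic mechanism many chat systems use is a sampling "temperature" that controls how adventurous the model's word choices are. The sketch below is purely illustrative, with invented word scores, and does not represent Bing's actual implementation.

```python
import math
import random

def sample_next_word(logits: dict[str, float], temperature: float) -> str:
    """Softmax-sample one word; low temperature -> precise, high -> creative."""
    scaled = {word: score / temperature for word, score in logits.items()}
    peak = max(scaled.values())
    weights = {word: math.exp(s - peak) for word, s in scaled.items()}  # numerically stable softmax
    r = random.uniform(0, sum(weights.values()))
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # fallback for floating-point edge cases

# Invented next-word scores after a prompt such as "The court ruled ..."
logits = {"that": 3.0, "in": 2.5, "decisively": 0.5, "poetically": -1.0}
for tone, temp in [("precise", 0.2), ("balanced", 0.7), ("creative", 1.5)]:
    print(tone, [sample_next_word(logits, temp) for _ in range(5)])
```

At a low temperature the sampler almost always picks the highest-scoring word, which reads as precise and repeatable; at a high temperature it spreads probability across unlikely words, which reads as more "creative."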
While there is no set definition of artificial intelligence (AI), the general consensus is that AI enables machines to attempt to imitate or even surpass human cognitive abilities, including sensing, reasoning, analyzing, conversing, problem solving, and possibly even creativity. AI thus comprises many tools, processes, and algorithms that try to achieve these goals. Below is a list of terms that can aid in understanding AI.
Extractive AI:
Algorithms (coded sets of instructions) that select appropriate data from the datasets they have been trained on. This is the established form of AI that most familiar machines and legal research tools have used for some time.
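To make "extractive" concrete, the sketch below shows extraction in miniature: the program writes nothing new, it only scores stored documents against a query and returns the best existing match. The documents and TF-IDF scoring here are invented for illustration and are not any vendor's actual algorithm.

```python
import math
from collections import Counter

# Invented mini-corpus standing in for a legal research database.
documents = {
    "case_a": "negligence duty of care breach causation damages",
    "case_b": "contract offer acceptance consideration breach remedies",
    "case_c": "negligence per se statute violation causation",
}

def tf_idf_score(query: str, doc_id: str) -> float:
    """Score one document against the query with simple TF-IDF weighting."""
    doc_terms = Counter(documents[doc_id].split())
    n_docs = len(documents)
    score = 0.0
    for term in query.lower().split():
        tf = doc_terms[term]  # how often the term appears in this document
        df = sum(term in d.split() for d in documents.values())  # docs containing the term
        if tf and df:
            score += tf * math.log(n_docs / df)  # rare terms weigh more
    return score

def extract_best(query: str) -> str:
    """Extractive AI in miniature: select an existing document, generate nothing."""
    return max(documents, key=lambda doc_id: tf_idf_score(query, doc_id))

print(extract_best("negligence statute"))  # -> "case_c"
```

A generative model, by contrast, would compose a new answer word by word rather than returning a stored document, which is part of what makes generative output harder to verify.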
Legal research companies have been using extractive AI in their search algorithms and research tools for quite some time. Exactly how these tools are built is largely unknown, as these companies regard the technology as a trade secret to stay competitive; most of what is known comes from promotional materials. Listed below are some examples of this usage.
Lexis: Lexis Answers, Brief Analysis, Fact & Issue Finder, and other Lexis tools
Westlaw: WestSearch Plus, Litigation Analysis, Quick Check, and other West tools
Bloomberg: Smart Code, Docket Key, Litigation Analytics, Points of Law, and other BLAW tools
A few legal research companies have already released products that integrate generative AI or have announced the development of such tools.
Lexis: Integrated Lexis+
Lexis's AI chatbot will be trained on Lexis's own legal materials behind its paywall
Release expected in July 2023
Westlaw: Integrated Westlaw Precision
West has partnered with Microsoft, with integration into Office products
Casetext: CoCounsel
First "AI Legal Assistant," powered by GPT-4
With generative AI so easy to access, AI detection technology has been created to combat it. This is of primary importance in education, where cheating is the foremost issue on many educators' minds. However, the majority of these AI detection tools not only use AI themselves for the task (often resulting in false positives), but also cannot seem to keep up with AI advancement as a whole. Some AI-enabled AI detection tools include:
*Note that many generative AI tools (including detectors) will save any data users input in order to train their AI models
Most AI-enabled tools will admit they aren't always accurate, and some even give a disclaimer that their results should not be the sole evidence when determining AI use. These tools are always at risk of becoming outdated as they work to catch up with upgraded models. They have also been proven incorrect both in experiments and by real students who have suffered from the accusations that false positives bring. Some of these companies have raised their trigger thresholds in response to this issue, potentially weakening their detection efficiency.
Most of these detection tools can be bypassed by simply prompting the AI to increase its language complexity or rewrite its response in more literary language. Because language complexity is usually how these detection tools make their determination, a Stanford study found that they carry an inherent bias against non-native English authors. AI experts generally agree that AI-enabled detection is a losing battle, as it will never eliminate the ever-present obstacles of advancing AI and the potential for error.
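As a rough illustration of why complexity-based detection is both bypassable and biased, consider the toy detector below. It flags text whose vocabulary is too "predictable" (a crude stand-in for the perplexity scores real detectors use); rewording the same sentence with rarer vocabulary slips past the flag, and by the same logic the plainer phrasing more common among non-native writers is more likely to be flagged. All frequencies and thresholds here are invented.

```python
import math
from collections import Counter

# Invented corpus frequencies: common words are "predictable" to the model.
WORD_FREQ = Counter({
    "the": 1000, "was": 900, "its": 700, "good": 500, "said": 400,
    "court": 300, "decision": 150, "pronounced": 8, "tribunal": 5,
    "opined": 4, "adjudication": 3, "salutary": 2,
})

def predictability(text: str) -> float:
    """Mean log-frequency of the words; higher = more 'AI-like' under this heuristic."""
    words = text.lower().split()
    return sum(math.log(WORD_FREQ.get(w, 1)) for w in words) / len(words)

def flagged_as_ai(text: str, threshold: float = 5.0) -> bool:
    return predictability(text) > threshold

plain = "the court said the decision was good"
ornate = "the tribunal opined its adjudication was salutary"

print(flagged_as_ai(plain))   # True:  plain wording scores as "machine-predictable"
print(flagged_as_ai(ornate))  # False: same claim, rarer words evade the flag
```

The two sentences make the same claim, yet only the plainer one is flagged, which captures in miniature both the bypass technique and the bias the Stanford study documented.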
Other developers are considering options that allow students to use AI but monitor their overall time spent, research gathered, changes in their writing, and so on. PowerNotes' "Insight" is a tool that does not use AI for detection; instead, it analyzes how students' in-progress work compares against their final submission, as sketched below. The most glaring issue with all of these detection methods is the increased time commitment required of educators and administrators to evaluate student work and ensure that no plagiarism or cheating has occurred.
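The sketch below illustrates the general process-based idea, assuming a hypothetical comparison of collected drafts against the final submission; it is not PowerNotes' published method. A low score would prompt human review rather than an automatic accusation.

```python
import difflib

def traceability(process_docs: list[str], final_text: str) -> float:
    """Best similarity between the final text and any documented process artifact."""
    return max(
        difflib.SequenceMatcher(None, doc, final_text).ratio()
        for doc in process_docs
    )

# Invented example: notes and a draft collected during research.
drafts = [
    "Negligence requires duty, breach, causation, and damages.",
    "Notes on duty-of-care cases: Palsgraf, Donoghue.",
]
final = "Negligence requires duty, breach, causation, and damages, as the cases show."

score = traceability(drafts, final)
print(f"traceability: {score:.2f}")  # a high score means the final text grew out of the drafts
```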