If you cannot attend our LegalTech Series CLE Seminars live online, recordings of those seminars are also available for CLE credit through alternate delivery. A number of alternate delivery recordings are available for viewing, and you can register for them here. Each online alternate delivery recording is $25, and all proceeds from the recordings go to the Law School Carolina Fund. Discounts are also available when purchasing more than one seminar recording.
“Consider Your Own Black Box: Evaluating Human Decision-Making Alongside Artificial Intelligence”
January 13, 2021 — Jack Pringle, Esq., Partner, Adams and Reese, LLP in Columbia
Artificial intelligence (AI) systems are playing significant roles in decision-making processes that affect our lives. However, decisions made in a “black-box” fashion (for example, by algorithms hidden from view or evaluation) rarely inspire confidence or build trust. Moreover, opaque decision-making may run afoul of legal frameworks (for example, the Fair Credit Reporting Act) that require certain decisions to be supported.
Given the significance of the decisions AI systems make, those systems should be explainable and trustworthy.
Scientists from the National Institute of Standards and Technology (NIST) have proposed four fundamental principles of explainable AI:
- Explanation. Systems deliver evidence or reasons for all their outputs.
- Meaningful. Systems provide explanations that are meaningful or understandable to individual users.
- Explanation Accuracy. The explanation correctly reflects the system’s process for generating the output.
- Knowledge Limits. The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output. (If a system has insufficient confidence in its decision, it should not supply a decision to the user.)
[The Four Principles for Explainable Artificial Intelligence, Draft NISTIR 8312, August 2020.]
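The Knowledge Limits principle, for instance, has a simple operational shape: a system should withhold its answer when it is operating outside its design conditions or below a confidence floor. A minimal sketch (the function name and the 0.80 threshold are illustrative assumptions, not anything from the NIST draft):

```python
# Hypothetical illustration of the NIST "Knowledge Limits" principle:
# the system declines to supply a decision when its confidence is too low.

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff; a real system would calibrate this


def decide(label: str, confidence: float) -> str:
    """Return the decision only when confidence meets the threshold."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Insufficient confidence: no decision is supplied to the user.
        return "no decision: confidence below knowledge limits"
    return label


print(decide("approve", 0.93))  # confident enough to answer
print(decide("deny", 0.55))     # withheld under Knowledge Limits
```

The point of the sketch is the refusal branch: an explainable system signals when a question falls outside what it was designed to answer, rather than guessing.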
This NIST draft also asks whether human decision-making can satisfy these principles. NIST concludes that human decision-making can do so only in a limited way (if at all), due to how our brains consciously and unconsciously process information. Comparing AI system decision-making with the human decision process can help us evaluate the relative risks and benefits of using AI systems, and learn more about the upsides and pitfalls of our own human decision-making.
This presentation will address some of the cognitive biases (reasoning flaws) that may affect not only legal decision-making processes but also health and well-being choices. By learning to be aware of cognitive bias and the way it may influence our thoughts and actions, we can improve our decision process — and hopefully our choices.
- 1 hour SA/MH CLE credit (211724ADO)
“The Insurance Defense Incubator”
January 27, 2021 — Jason Lockhart, Mark Davis, Ryan Adams, and John Stroud of McAngus Goudelock & Courie.
Recognizing that ideas and change can be found at any level of our organization, MGC extended an innovation challenge to its attorneys. The Insurance Defense Incubator challenged the firm’s lawyers to develop ideas that would improve processes, increase efficiencies, and lead to enhanced client service. In response to the challenge, 18 attorneys presented their ideas in a “Shark Tank”-style pitch to a team composed of attorneys, practice management specialists, and financial analysts. Proposals were scored on creativity, practicality, disruptiveness, symbiosis, and scale of importance.
The firm is currently at work on two projects: one focusing on workers’ compensation claims and the other on more traditional personal injury litigation.
The purpose of the workers’ compensation project is to collect and analyze data on medical opinions in workers’ compensation claims in order to develop predictive software and other tools that will assist attorneys and clients in estimating exposure and choosing favorable physicians. Using this analysis, the firm will develop technological tools to objectively measure how much a particular physician’s opinion differs from other physicians’ opinions for the same injury. This data-driven tool will provide an aggregate average of this measurement for each physician, taking into account all opinions that physician rendered on claims with more than one opinion. Eventually, the tool will provide additional statistics for each physician, including average treatment time and average overall cost of medical treatment, which will assist in measuring the life of the claim, the cost of the claim, and the selection of favorable physicians.
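As a rough illustration of the deviation measure described above (this is a hypothetical sketch, not the firm’s actual implementation; the impairment-rating figures and physician names are invented), the per-physician aggregate might be computed along these lines:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (claim_id, physician, impairment_rating_percent).
# Only claims with more than one opinion contribute to the measure.
opinions = [
    ("claim-1", "Dr. A", 10.0), ("claim-1", "Dr. B", 25.0),
    ("claim-2", "Dr. A", 15.0), ("claim-2", "Dr. C", 20.0),
    ("claim-3", "Dr. B", 30.0),  # single opinion on this claim: excluded
]

# Group opinions by claim.
by_claim = defaultdict(list)
for claim, doc, rating in opinions:
    by_claim[claim].append((doc, rating))

# For each physician, record how far each of their ratings sits from the
# mean of the *other* opinions on the same claim.
deviations = defaultdict(list)
for claim, entries in by_claim.items():
    if len(entries) < 2:
        continue  # no second opinion to compare against
    for doc, rating in entries:
        others = [r for d, r in entries if d != doc]
        deviations[doc].append(rating - mean(others))

# Aggregate average deviation per physician (positive = rates higher
# than peers on the same claims, negative = rates lower).
avg_deviation = {doc: mean(ds) for doc, ds in deviations.items()}
print(avg_deviation)
```

On these toy figures, Dr. A averages 10 points below the competing opinions while Dr. B averages 15 points above, which is the kind of signal the text describes using to flag favorable physicians.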
Traditionally, attorneys burned outdated copies of case law reporters and read the leftover scorch markings to assess a case. Some newer attorneys have resorted to law school notes and student loan bills. No more, we said at MGC! We’ll use data science!
The purpose of the personal injury litigation project is to give attorneys a tool to accurately evaluate a case based on relevant historical data. No longer will the firm’s attorneys resort to “reading the tea leaves” when explaining to the client how much a case will cost and how much it should settle for. Using historical data from past settlement agreements, along with other factors that determine a case’s value, we hope to gain insights into the legal community not readily available to others. This data-driven tool will provide a concrete foundation on which a case’s evaluation rests. Further, the model will be able to predict outcomes based on the attorneys involved and other data points that may be important but are not usually considered. One example of a question usually not considered is: “How have past economic factors impacted case resolution costs?” Can the market be used as an indicator of how claims will pan out? These are the kinds of analyses our data-driven tool hopes to provide, propelling MGC to the forefront of litigation tactics.
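A bare-bones sketch of the kind of historical-data valuation described above, fitting a single-predictor least-squares line from past settlements (all figures, the choice of “medical specials” as the predictor, and the function name are illustrative assumptions):

```python
from statistics import mean

# Hypothetical history: (claimed_medical_specials, settlement_amount), dollars.
history = [
    (10_000, 28_000), (20_000, 55_000),
    (15_000, 40_000), (30_000, 85_000),
]

# Ordinary least squares fit of settlement on specials (one predictor).
xs = [x for x, _ in history]
ys = [y for _, y in history]
x_bar, y_bar = mean(xs), mean(ys)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in history)
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar


def predict_settlement(specials: float) -> float:
    """Estimate settlement value from medical specials alone."""
    return intercept + slope * specials


print(round(predict_settlement(25_000)))  # prints 70000 for these toy figures
```

A production model would of course use many more features (venue, attorneys involved, economic indicators, as the text suggests) and a richer estimator, but the principle — replacing tea leaves with a fit to historical outcomes — is the same.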
Data science has the potential to transform the practice of law, but implementation has many challenges. The potential for bias and discrimination exists where the predictive model includes personal characteristics such as ethnicity, gender, education, work history, medical history, and venue. As in many AI-generated models, the contributing variables are not visible to the end user. Where the solution is designed to influence human behavior and decision-making, how can we build solutions that weigh the consequences of choices to select moral outcomes, act in accordance with social norms, and have accountability to society as a whole?
In addition to these general concerns about the impact of this emerging technology, lawyers carry an additional burden to ensure the development and application of these technological solutions comport with professional responsibility and ethical obligations. This session will address the ethical challenges of the application of data science to law, the role of the lawyer in building and maintaining technological applications, and the potential conflicts with the rules of professional responsibility that arise with machine learning and utilization of insights from data.
So, where do we go from here? Given the rapidly changing legal and business industries, we believe our willingness to adapt and innovate is more important than ever. In order for us to stay out front and provide our clients with the best service possible, we need to continue to assess our processes and technologies and find ways to improve efficiencies.
Ryan Adams and John Stroud are both associates in MGC’s Charleston, South Carolina office. Mark Davis is a member in MGC’s Charleston, South Carolina office, and Jason Lockhart is a member in MGC’s Columbia, South Carolina office.
- 1 hour Ethics CLE credit (211726ADO)
“The Three Forms of Legal Prediction — Experts, Crowds & Algorithms”
February 10, 2021 — Dr. Daniel Martin Katz, Professor of Law at Chicago-Kent College of Law; Founder & Director of The Law Lab @ Illinois Tech — Chicago-Kent College of Law; and VP, Data Science & Innovation, Elevate Services
In this talk, Dr. Katz will briefly discuss the field of AI and legal technology. Next, he will discuss three forms of prediction — experts, crowds, and algorithms — and highlight the application of these approaches to predicting judicial decisions of the Supreme Court of the United States.
- 1 hour CLE credit (212740ADO)
“AI and Racial Bias”
March 24, 2021 — John Browning, Partner, Spencer Fane LLP.
Co-sponsored by the School of Law Diversity, Equity, and Inclusion (DEI) Task Force.
The field of artificial intelligence (AI) is often regarded with an aura of objectivity and infallibility, perpetuating the idea that “numbers do not lie.” Yet the algorithms that drive AI are human-created, often reflecting human biases — including racial bias. This program will discuss racial bias in algorithms and the legal challenges that have been asserted against the use of such biased algorithms.
While minorities have been disproportionately impacted by biased AI in a number of civil contexts (including employment, credit, and lending decisions), this presentation will focus on the criminal context, including “predictive policing” initiatives and the use of racially biased algorithms in criminal sentencing decisions. The program will examine such cases as the Wisconsin Supreme Court’s 2016 decision in State v. Loomis, and other legal challenges to the use of racially biased algorithmic risk assessments.
As this presentation will discuss, courts considering such challenges are often balancing the proprietary trade secret interests of algorithm developers seeking to protect the “black box” of their technology against the due process rights of criminal defendants. Finally, the program will look at potential measures being considered to address algorithmic bias, such as “Algorithmic Accountability Acts” at the state and national levels.
Ethically, attorneys have the duty, as part of providing competent representation to their clients, to be cognizant of both the benefits and risks associated with technology. Being aware of the benefits — and the risks — of AI includes an awareness of the issue of racial bias and algorithms.
- 1 hour Ethics CLE credit (214601ADO)