Labor and Employment Law

AI in the Workplace: A Primer for Lawyers

By Natalie A. Pierce

I. INTRODUCTION

Artificial intelligence (AI) is everywhere. We use AI-powered Google searches in our personal and professional endeavors. Netflix uses AI algorithms to narrow our next movie choices, and Amazon uses them to steer us toward our next purchases. AI is also present in our workplaces: whether you work at a desk in your home office or in the cockpit of a stealth fighter jet, AI is becoming ubiquitous. This article is a basic primer introducing lawyers to key AI principles. It provides examples of AI uses across myriad job fields, explores some of the associated legal and ethical issues, and offers practical tips to help organizations adopt AI responsibly.

II. AI USES FOR EMPLOYERS

Nearly every workplace leverages AI to increase efficiency, reduce costs, and improve lives. AI uses a combination of algorithms and data to learn, evolve, and (hopefully) improve over time. Here are several specific examples of how employers are using AI-powered tools to those ends:

a. Remote Workforce Management

During the COVID-19 pandemic, many organizations quickly transitioned to remote work. This shift brought both advantages and disadvantages to employees and their employers. Even as the risks of COVID-19 fade, remote work appears to be here to stay for many organizations. One of the issues facing employers is how to effectively monitor remote employees and assess their productivity. Several AI-powered tools have emerged to address this need. For example, RemoteDesk claims that it “is the world’s most advanced AI-based Remote Workforce Management & Employee Monitoring solution for work-at-home compliance.” ActivTrak appears to be a similar program that provides “Employee Productivity Monitoring” and “Remote Workforce Management.”

b. Human Resources (HR)

Many large companies are leveraging AI-powered tools to help with HR functions such as hiring. However, the tools have not always worked as planned, and AI carries several risks, including algorithmic bias. Patrick Huston & Lourdes Fuentes-Slater, The Legal Risks of Bias in Artificial Intelligence, Law360 (May 27, 2020). For example, Amazon abandoned an AI recruiting tool after discovering in 2015 that it showed a strong bias against women. The technology appeared to favor candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured.” Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women, Reuters (Oct. 10, 2018). More recently, companies such as HiredScore have tried to address these concerns, offering services that claim to make hiring more efficient and fair: “[w]e leverage the power of artificial intelligence to deliver deep hiring efficiencies, enhance talent mobility, and help the largest companies in the world enable data-driven Human Resources.” Organizations with significant HR functions may be able to improve operations and reduce bias with the help of AI.

c. Flight Training

One particularly interesting example of AI is the U.S. military’s effort to leverage augmented reality (AR) to improve pilot training. A company called Red 6 uses this technology to allow pilots flying real aircraft to see projections of simulated aircraft through their helmet visors: “[i]n the future, when U.S. Air Force fighter pilots face off in aerial combat training missions, they could be dogfighting the video game version of Chinese and Russian warplanes at a fraction of the cost of using real jets like the F-22 Raptor.” Valerie Insinna, US Air Force’s T-38 trainer could soon dogfight with augmented reality adversaries, Defense News (Mar. 19, 2021). This type of training promises to be cheaper, more realistic, and safer than traditional flight training. More importantly, the concept appears to be scalable to other types of training across other industries.

d. Financial Industry

The use of AI in the financial industry is commonly called “FinTech.” According to a recent Forbes article, FinTech is rapidly changing the financial sector: “[i]nnovation has been unlocked with a single key: artificial intelligence. Almost all new approaches to managing money have AI in their DNA.” Annie Brown, Meet the Fintech Innovators Using AI to Reimagine the Financial Sector, Forbes (Jun. 15, 2021). The article describes AI-powered stock market trading tools that use predictive modeling for “price prediction” to enhance profitability. The piece also notes the power of AI for fraud prevention, explaining how algorithms can identify perpetrators of fraud by measuring hesitation and distraction in a user’s behavior, comparing typing patterns against those of legitimate users, and observing how someone taps or scrolls. Id.

e. Academia

Academic and testing institutions have leveraged similar technologies to detect cheating on papers and exams. For example, California recently administered online bar exams during the pandemic. The testing program recorded audio and video files of the test takers: “The audio-video file is then run against an artificial intelligence tool that flags potential anomalies.” Jake Holland & Sam Skolnik, Cheating Scandal Aside, New Remote Bar Looks a Lot Like Old One, Bloomberg Law (Feb. 1, 2021). California’s effort has been criticized for “false positives” after one-third of all test takers were flagged for potential cheating. Id. This is a good reminder that AI technology is not a cure-all. Like any technology, AI must be monitored to ensure that it works properly and is used correctly.

f. Healthcare

AI has been credited with helping scientists rapidly develop COVID-19 vaccines. Nicole Decario & Oren Etzioni, AI Can Help Scientists Find a Covid-19 Vaccine, Wired (Mar. 28, 2020). AI also assisted employers with their return-to-office policies through contact tracing and predictive modeling. Natalie Pierce, Julie Stockton, & Courtney Chambers, Beyond HIPAA: Inside the Use of AI to Collect COVID-19-Related Information From Employees, Legaltech News (Jun. 18, 2020). Algorithms scoured data such as meeting invites, e-mail traffic, and GPS data from employer-issued computers and cell phones. Other tools helped by tracking information such as employee health status or recent travel. The results assisted employers as they managed business continuity and navigated uncertainty.

g. Legal Field

One of the most obvious examples of AI use in the legal field is eDiscovery. Discovery was once a task relegated to associates and paralegals, who would manually sift through boxes of files searching for responsive documents. Today’s eDiscovery programs perform the same function far faster, more efficiently, and more effectively: they can sift through thousands of documents and e-mails, deduplicate copies of the same e-mail, identify responsive documents, and even flag privileged communications that should not be disclosed to opposing counsel. Chris Egan, 4 Tips for Making Sure Your AI Use in Law is ‘Ethical,’ LegalTech News (Mar. 9, 2021). The most revolutionary change in the legal industry comes with automated legal processes such as those offered by LawGeex, the first AI company in America to obtain a license to practice law. I recently interviewed the founder and Chief Executive Officer of LawGeex, which uses AI-powered software to review contracts for legal issues. In a study at Stanford Law School, the software outperformed human lawyers in identifying known contract law issues: “85% accuracy for lawyers and 95% for AI.” Matt Reynolds, Lawyers warned of AI pitfalls, cybersecurity attacks and deepfake threats, ABA Journal (Mar. 8, 2021); see also Charlie Dunlap, Guest Post: BG Pat Huston on “Future War and Future Law,” Lawfire (Dec. 3, 2018). The AI’s accuracy was impressive, but its speed was even more striking: “It took the average attorney 92 minutes to review those contracts. It took the AI system just 26 seconds.” Id. Automation tools of this type could lead to many changes in the legal profession, including a shift from traditional hourly billing to flat rates for legal tasks.

h. Transportation Industry

The transportation industry is changing rapidly as a result of AI. Self-driving cars are already being tested on the streets of California. Many predict that the long-haul trucking industry (generally interstate trips of 201 miles or more) will eventually be dominated by self-driving trucks, although commentators caution that the transition may be slower than advertised. Maury Gittleman & Kristen Monaco, Automation Isn’t About to Make Truckers Obsolete, Harvard Business Review (Sept. 18, 2019). This trend toward autonomy and efficiency will likely continue, and it is made possible by AI. The AI revolution in the transportation industry extends beyond the vehicles themselves. For example, AI-powered “SmartMaintenance” programs are optimizing maintenance schedules to improve efficiency and reduce repair costs. R. Patrick Huston, A Pacifist-General’s Plan to Win America’s Next War, Articles of War (Apr. 21, 2021). This same technology can be used to streamline operations in other industries.

III. LEGAL & ETHICAL ISSUES ASSOCIATED WITH AI

Unsurprisingly, as with any emerging technology, AI has legal and ethical implications. As legal advisors and counselors to our clients, we should be familiar with some of the basic issues:

a. Intellectual Property

One of the most common legal issues encountered with AI is intellectual property (IP). IP issues often center on protecting the two primary components of AI: the algorithm and the data. “A machine learning system typically comprises a computational model based on an algorithm (or algorithm stack) with a dataset to train it.” Kathy Berry & Yohan Liyanage, INSIGHT: Intellectual Property Challenges During an AI Boom, Bloomberg Law (Oct. 29, 2019). Data have been described as the “new oil” or a new “currency,” so obtaining IP protections can be a very important consideration for clients. Scott A. Snyder, Using Data as Currency: Your Company’s Next Big Advantage, InformationWeek (May 14, 2021).

b. Privacy

Data feed AI’s machine learning cycle and allow it to evolve, which is AI’s greatest strength. However, those same data can also create liabilities if they include personally identifiable information (PII) or other information protected by privacy rules. David A. Teich, Artificial Intelligence and Data Privacy – Turning a Risk into a Benefit, Forbes (Aug. 10, 2020). Lawyers need to be aware of applicable rules, such as the California Consumer Privacy Act (CCPA) (Cal. Civ. Code §§ 1798.100-1798.199 (West 2021)) and the European Union’s General Data Protection Regulation (GDPR). Laws governing individual consent for the collection of PII vary by the type of data collected and by state. That’s just one example of the complexity lawyers face in this area. Another example involves biometrics: “these tools raise privacy, biometric data collection, and data retention concerns that should be carefully examined before adoption. These issues will continue to persist long after the technology’s adoption, and recurring evaluation procedures should be drafted and implemented to mitigate potential liability exposure.” Natalie Pierce & Chase Perkins, Leveraging Emerging Technology in the Post-Pandemic Workplace, HR Daily Advisor (May 20, 2020).

c. Attorney Professional Responsibility

All lawyers have the duty of competence, and this includes a duty of technical competence. The comments to ABA Model Rule 1.1 state that lawyers must “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” ABA Model Rules of Prof’l Conduct, Rule 1.1, cmt. 8. What does this mean in terms of understanding AI? You do not need to understand exactly how AI algorithms work, but you do need to understand the relationship between inputs and outputs. As a general rule, a system’s reliability and the client’s liability are inversely related: the more reliable a system is, the lower the risk of legal liability. Reynolds, supra. This means we need to remember two things about AI: first, the risk of bias; and second, that AI learns and evolves. We should therefore consider advising clients to build in periodic checks and balances to confirm that AI systems remain reliable. In short, our duty of technical competence requires us to stay informed about the basic risks and benefits of AI so that we can effectively advise our clients.

d. Broader Ethical Concerns

AI raises several topics that fall into the category of broader ethical concerns, rather than narrow attorney professional responsibility matters. Given the complexity of AI algorithms, these include efforts to ensure that AI is responsible, equitable, traceable, reliable, and governable. U.S. Department of Defense, DOD Adopts Ethical Principles for Artificial Intelligence (Feb. 24, 2020). One of the most common concerns is the risk of AI bias. For example, as discussed above, Amazon halted use of an AI recruiting tool that showed a strong bias against women. Dastin, supra; see also Huston & Fuentes-Slater, supra. Lawyers should advise clients to carefully weigh the advantages and disadvantages of an AI system so that they can adopt it responsibly.

IV. RECOMMENDATIONS FOR LEVERAGING AI

Organizations that plan to leverage AI should have a coordinated strategy to guide them on the path to success. I offer three general recommendations to assist in this endeavor: (1) establish multi-disciplinary AI teams; (2) leverage human-machine teaming; and (3) integrate cybersecurity defenses.

a. Multi-disciplinary AI teams

AI adoption should involve a coordinated strategy led from the top. The organization’s executive-level leaders should build multi-disciplinary AI teams. These teams should “include technical experts such as coders and data scientists, as well as lawyers and AI ethicists, to effectively integrate AI into their organizations. These teams would enable all parties to provide input from their respective perspectives at all stages in the adoption process.” Huston, supra. This approach has been successfully leveraged by several large companies and government entities, and serves as a good model for any organization adopting AI.

b. Human-Machine Teaming

Organizations adopting AI should also leverage Human-Machine Teaming. Humans outperform computers and machines on some tasks, such as judgment, common sense, and leadership. Machines outperform humans on other tasks, such as digesting large quantities of data, performing rapid computation, and completing repetitive work. The key to successfully adopting AI is to combine humans and machines in a way that leverages the respective strengths of each. Huston, supra; Egan, supra; Reynolds, supra. AI systems change and evolve; as the technology improves, the division of labor between humans and machines will shift to maintain the right balance. Organizations adopting AI technology “should conduct periodic evaluations throughout the life cycle of a technology’s use to ensure maximum compliance of applicable laws in every jurisdiction in which they operate.” Pierce & Perkins, supra.

c. Cybersecurity

Any winning strategy needs a strong offense and a strong defense. Organizations are leveraging AI to improve efficiency, reduce costs, and get ahead of their competition. That is their offense. The corresponding defense must include a strong cybersecurity plan to protect against the increased vulnerabilities created by reliance on AI, including risks of cyber intrusion, data theft, and system disruption. Although often treated as a mere afterthought, cybersecurity should instead be an integral part of any organization’s AI-adoption strategy. Huston, supra; Egan, supra.

V. CONCLUSION

Organizations should leverage technology to stay ahead of their competition. AI has proven to be a valuable tool in nearly every industry, but there is no guaranteed path to success. Organizations must understand AI’s strengths and weaknesses and have a coordinated strategy for its adoption. Strong leadership that leverages the respective strengths of its employees and technology, and that guards against unintended bias and breaches of privacy, stands the best chance of achieving excellence. As lawyers, we are well positioned to advise our clients on how to succeed, and it is our obligation to do so.

Natalie Pierce is a Partner at Gunderson Dettmer in San Francisco, and chair of the firm's labor & employment practice. She was selected as one of Daily Journal’s “Top Artificial Intelligence Lawyers” and “Top Labor and Employment Lawyers.” Ms. Pierce represents technology and life sciences companies. Her practice focuses on the needs of start-ups and emerging growth companies. She counsels companies on incorporating robotics, biometrics, telepresence, artificial intelligence, and other enhancement technologies into the workplace as part of her practice. She earned her bachelor’s at the University of California Berkeley and her law degree from Columbia University School of Law. Ms. Pierce can be contacted at 415.801.4920 or npierce@gunder.com.
