A.I. Fairness and Data Privacy

We are experiencing one of the defining developments of the 21st century: the rise of Artificial Intelligence (“A.I.”). From suggesting which movies to stream to determining which schools our children attend, this technology has found its way into virtually every aspect of our lives. Yet even as A.I. brings countless benefits, a critical question must be addressed: do A.I. systems benefit everyone?

Legal and technology experts alike have detected biases in deployed A.I. systems that mimic the prejudices existing in society. Without proper checks and balances, the deployment of biased A.I. systems presents serious risks in criminal justice, government surveillance, healthcare, credit scoring, online speech rights, and internet privacy.

To protect clients against these new and profound risks, lawyers must become experts in this area of rapid innovation. To help clients navigate this new territory, Eisenberg & Baum has launched its Artificial Intelligence Fairness and Data Privacy practice group to promote fairness and accountability in step with technological advancement. Through nationwide legal advocacy, our team of attorneys works tirelessly to rectify injustices stemming from unfair implementations of A.I. systems. Collaborating with experienced technology experts, we combine legal expertise with rigorous scientific research to decode systemic bias in the A.I. systems used across private and public institutions.

While A.I. was once thought of as a niche field, rapid innovation has allowed it to embed itself in nearly every conceivable part of day-to-day life. This practice group brings together advocates, public organizations, and the scientific community to tackle the following issues:

  • Criminal Justice: Automated risk assessment tools are used in courts to inform decisions about a defendant’s pre- and post-trial incarceration, such as at bail and sentencing hearings. Flawed algorithms and skewed data used in programming these A.I. systems deprive individuals of fundamental liberties and rights.
  • Government Surveillance: Facial recognition technology is widely used in police searches, investigations, and surveillance cameras. However, there are growing concerns over the technology’s error rates and misidentifications, which disproportionately affect women and people of color and can gravely harm a person’s rights to liberty and privacy.
  • Healthcare: Algorithms are used in hospitals to prioritize the care of certain patients over others. Yet research shows that flaws in A.I. decision-making in life-and-death situations may result in discriminatory treatment based on economic status and race. Moreover, the abuse of sensitive health information and of biometric and genetic data poses a threat to one’s privacy.
    • Fixing Bias in Algorithms is Possible, And This Scientist is Doing It (CAI)
    • New Research Finds "Significant Racial Bias" in Commonly-Used Healthcare Algorithm (Emerging Tech Brew)
    • Widely used health care prediction algorithm biased against black people (Berkeley Public Health)
    • Is Artificial Intelligence Worsening COVID-19’s Toll on Black Americans? (Massive Science)
  • Employment: A.I.-based hiring software is now commonplace. Employers screen and interview candidates by relying on algorithms embedded in hiring platforms, and web-based recruiting sites use algorithms to target advertisements to select groups of candidates based on a wide range of personal and social data. However, researchers have detected biases in these platforms, arising from faulty data and proxies, that perpetuate discrimination based on gender, race, disability, and social class.
    • An MIT researcher who analyzed facial recognition software found eliminating bias in AI is a matter of priorities (Business Insider)
    • When the Robot Doesn’t See Dark Skin (New York Times)
    • How AI Technology Discriminates Against Job Candidates With Disabilities (Texas Public Radio)
    • For Some Employment Algorithms, Disability Discrimination by Default (Brookings)
  • Credit Scores Impacting Finance and Housing: A.I.-based credit risk assessments that determine a person’s ability to obtain loans and housing are prone to explicit or implicit bias, depending on the data sources and proxies used in designing the system. Studies show that some risk assessment tools perpetuate discrimination based on race and gender and exacerbate the financial and information gap between the haves and have-nots in society. (A brief illustrative sketch of this proxy effect appears after this list.)
    • Housing Discrimination goes High Tech (Curbed)
    • Consumer-Lending Discrimination in the FinTech Era (UC Berkeley)
  • Online Speech and Information: Content on social media platforms is surreptitiously censored by algorithms based on the speech and visual imagery it contains. Meanwhile, online sources of disinformation pose harm to public health, national security, voting rights, and free speech. Free speech and access to information are the bedrock of democracy, and we are developing strategies to protect these rights within our existing legal framework.
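
For readers curious how a facially neutral algorithm can discriminate without ever being shown race or gender, the short Python sketch below simulates the proxy effect described in the credit-scoring item above. It is purely illustrative: every variable, weight, and threshold in it (the synthetic zip_group proxy, the 0.75 approval cutoff) is invented for this example and does not describe any actual lending system.

    # A minimal, hypothetical sketch of "proxy discrimination": a scoring rule
    # that never sees a protected attribute can still produce starkly unequal
    # outcomes when one of its inputs (here, a synthetic "zip_group") is
    # correlated with that attribute. All numbers below are invented.
    import random

    random.seed(0)

    def make_applicant():
        group = random.choice(["A", "B"])  # protected class, hidden from the model
        # Residential segregation makes zip code correlate with group:
        # group A lands in zip_group 1 with 80% probability, group B with 20%.
        zip_group = 1 if (random.random() < 0.8) == (group == "A") else 0
        # True creditworthiness is identical across groups by construction.
        creditworthy = random.random() < 0.6
        return group, zip_group, creditworthy

    def score(zip_group, creditworthy):
        # A naive model that leans on the proxy alongside a noisy signal
        # of actual creditworthiness.
        return 0.5 * zip_group + 0.5 * creditworthy + random.gauss(0, 0.1)

    THRESHOLD = 0.75  # hypothetical approval cutoff
    approvals = {"A": [0, 0], "B": [0, 0]}  # [approved, total] per group

    for _ in range(100_000):
        group, zip_group, creditworthy = make_applicant()
        approvals[group][1] += 1
        if score(zip_group, creditworthy) > THRESHOLD:
            approvals[group][0] += 1

    for g, (approved, total) in approvals.items():
        print(f"Group {g}: approval rate {approved / total:.1%}")
    # Typical output: group A is approved roughly four times as often as
    # group B, despite identical creditworthiness in both groups.

Because the proxy is correlated with group membership, the two groups receive sharply different approval rates even though their underlying creditworthiness is identical by construction; this is the mechanism researchers point to when "neutral" data sources reproduce discrimination.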

Free Case Evaluation

Speak with our team, for free, about your legal situation.

New York Office 24 Union Square East, Fourth Floor | New York, NY 10003
Los Angeles Office 10100 Santa Monica Blvd., Suite 300 | Los Angeles, CA 90067
Philadelphia Office 1500 Market Street, 12th Floor, East Tower | Philadelphia, PA 19102
Manhasset Office 36 Maple Place | Manhasset, NY 11030
Telephone: (212) 353-8700 | Facsimile: (212) 353-1708