Your Fight For Justice Is Our Team’s Priority

AI Fairness And Data Privacy

Last updated on March 25, 2025

We are experiencing one of the defining developments of the 21st century: the rise of Artificial Intelligence (“AI”). From suggesting which movies to stream to determining which schools our children attend, this technology has found its way into virtually every aspect of our lives. Although AI brings countless benefits, a critical question must be addressed: do AI systems benefit everyone?

Legal and tech experts alike have detected biases in deployed AI systems that mimic the prejudices that exist in society. Without proper checks and balances, the deployment of biased AI systems presents serious risks in criminal justice, government surveillance, health care, credit scoring, online speech rights, and internet privacy.

To protect clients against these new and profound risks, lawyers must become experts in this rapidly innovating field. To help clients navigate this new territory, Eisenberg & Baum, LLP, has launched its Artificial Intelligence Fairness and Data Privacy practice group to promote fairness and accountability in step with technological advancement. Through nationwide legal advocacy, our team of attorneys works tirelessly to rectify injustices stemming from unfair implementations of AI systems. Collaborating with experienced tech experts, we aim to combine our legal expertise with rigorous scientific research to decode systemic bias in the AI systems used by private and public institutions.

While AI was once thought of as a niche area, rapid innovation in this space has allowed AI to embed itself in nearly every conceivable part of day-to-day life. This practice group brings together advocates, public organizations, and the scientific community to tackle the following issues:

  • Criminal Justice: Automated risk assessment tools are used in courts to determine a defendant’s pre- and post-trial incarceration, such as in bail and sentencing hearings. Flawed algorithms and skewed data used in programming these AI systems deprive individuals of fundamental liberties and rights.
  • Government Surveillance: Facial recognition technology is widely used in police searches, investigations, and surveillance cameras. However, there are growing concerns over the technology’s error rates and misidentifications, which disproportionately affect women and individuals of color. These errors can gravely harm a person’s rights to liberty and privacy.
    • NYPD Surveillance Technology Use: Eisenberg & Baum, LLP, Hosts Public Forums with Experts and the Public to Facilitate the Public Commenting Process (Feb. 2021)
  • Health Care: Algorithms are used in hospitals to prioritize the care of certain patients over others. Yet research shows that flaws in AI decision-making in life-and-death situations may result in discriminatory treatment based on economic status and race. Moreover, the abuse of sensitive health information and of biometric and genetic data poses a threat to one’s privacy.
    • Health care Algorithms and Discrimination (Eisenberg & Baum, LLP, Current Investigations)
    • Fixing Bias in Algorithms is Possible, And This Scientist is Doing It (CAI)
    • New Research Finds “Significant Racial Bias” in Commonly-Used Healthcare Algorithm (Emerging Tech Brew)
    • Widely used health care prediction algorithm biased against black people (Berkeley Public Health)
    • Is Artificial Intelligence Worsening COVID-19’s Toll on Black Americans? (Massive Science)
  • Employment: AI-based hiring software is now commonplace. Employers screen and interview candidates by relying on the algorithms that hiring platforms use, and web-based recruiting sites use algorithms to target advertisements to select groups of candidates based on a wide range of personal and social data. However, researchers have detected biases in these platforms, arising from faulty data and proxies, that perpetuate discrimination based on gender, race, disability, and social class.
    • An MIT researcher who analyzed facial recognition software found eliminating bias in AI is a matter of priorities (Business Insider)
    • When the Robot Doesn’t See Dark Skin (New York Times)
    • How AI Technology Discriminates Against Job Candidates With Disabilities (Texas Public Radio)
    • For Some Employment Algorithms, Disability Discrimination by Default (Brookings)
  • Credit Scores Impacting Financial and Housing Opportunities: AI-based credit risk assessments that determine a person’s ability to obtain loans and housing are prone to explicit or implicit bias based on the data sources and proxies used in designing the system. Studies show that some risk assessment tools perpetuate discrimination based on race and gender and exacerbate the financial and information gap between the haves and have-nots in society.
    • Housing Discrimination Goes High Tech (Curbed)
    • Consumer-Lending Discrimination in the FinTech Era (UC Berkeley)
  • Online Speech and Information: Content on social media platforms is surreptitiously censored by algorithms based on the content of speech and visual imagery. Online sources of (dis)information pose harm to public health, national security, voting rights, and free speech rights. Free speech and access to information are the fundamental bedrock of democracy, and we are developing strategies to protect these rights using our existing legal framework.
  • Cyberbullying: Some social media platforms have stated policies asserting that they will remove, ban, and report abusive users, and they have the technological tools available to follow through on these statements, yet they have not done so.
    • Eisenberg & Baum, LLP, files lawsuit on behalf of Carson Bride, a young cyberbullying victim, against the makers of the apps Snapchat, YOLO, and LMK (Eisenberg & Baum, LLP, lawsuit)
    • Suit Against Snap Over Suicide May Test Platform Protections (LA Times)