When Machines Discriminate: The Rise of AI Bias Lawsuits
Over the past three years, businesses in the United States have rapidly adopted artificial intelligence (“AI”) technology – defined broadly as the ability of machines to perform tasks that typically require human intelligence. In particular, many companies now perform critical business functions using machine learning technology, a subset of AI in which computers use algorithms and statistical models to analyze and draw conclusions from data. Much like humans, these AI tools can draw conclusions that are susceptible to bias. Bias in an AI model can arise from the data used to train the model or from the design of the model itself. Data bias occurs when an AI model is trained on a biased data set and then replicates that bias in its conclusions. Algorithmic bias occurs when an AI model is coded to look for certain terms that are more likely to be used by certain groups. Either can lead the AI model to produce biased outcomes.
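As a minimal sketch of how data bias can propagate, consider a toy resume screener. All names, keywords, and data below are invented for illustration and are not drawn from any real case or system: the "model" simply learns which keywords appeared in past hires' resumes. If the historical hiring process favored one group whose resumes tend to contain a proxy term, the model rewards that term and replicates the bias against otherwise comparable candidates.

```python
from collections import Counter

def train_keyword_model(hired_resumes):
    """'Learn' which keywords past hires used (a stand-in for real ML training)."""
    counts = Counter()
    for resume in hired_resumes:
        counts.update(resume.split())
    return counts

def score(model, resume):
    """Score a candidate by keyword overlap with past hires."""
    return sum(model[word] for word in resume.split())

# Hypothetical biased training set: past hires all came from one group whose
# resumes happen to mention "lacrosse" (a proxy term for that group), so the
# trained model rewards that term regardless of job-related skill.
past_hires = [
    "python lacrosse leadership",
    "java lacrosse teamwork",
    "python lacrosse analytics",
]
model = train_keyword_model(past_hires)

candidate_a = "python analytics lacrosse"      # includes the proxy term
candidate_b = "python analytics volunteering"  # same skills, no proxy term

# The proxy term inflates candidate A's score over an equally skilled candidate B.
print(score(model, candidate_a), score(model, candidate_b))
```

The same structure illustrates algorithmic bias: here the bias enters through the training data, but a developer who hard-coded a bonus for a term like "lacrosse" would build the identical skew directly into the model's design.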
Please see the full publication below for more information.