Artificial intelligence algorithm that screens for child neglect raises concerns

(AP Illustration/Peter Hamlin)

Dear Commons Community,

The Associated Press has an article this morning raising concerns about an artificial intelligence algorithm being used in Pittsburgh to assist in placements in child welfare decisions. Critics say it gives a program powered by data mostly collected about poor people an outsized role in deciding families’ fates, and they warn against local officials’ growing reliance on these types of artificial intelligence tools.

If the tool had acted on its own to screen in a comparable rate of calls, it would have recommended that two-thirds of Black children be investigated, compared with about half of all other children reported, according to another study published last month and co-authored by a researcher who audited the county’s algorithm.

Advocates worry that if similar tools are used in other child welfare systems with minimal or no human intervention, akin to how algorithms have been used to make decisions in the criminal justice system, they could reinforce existing racial disparities in the child welfare system.

“It’s not decreasing the impact among Black families,” said Logan Stapleton, a researcher at Carnegie Mellon University. “On the point of accuracy and disparity, (the county is) making strong statements that I think are misleading.”

Because family court hearings are closed to the public and the records are sealed, the AP wasn’t able to identify first-hand any families whom the algorithm recommended be mandatorily investigated for child neglect, nor any cases that resulted in a child being sent to foster care. Families and their attorneys can never be sure of the algorithm’s role in their lives, either, because they aren’t allowed to know the scores.

This is a classic example of the possible misuse of artificial intelligence technology for a deeply sensitive human activity.

The article rightfully raises critical questions.
