A review of a new book about how states are using supposedly scientific algorithms to continue punitive and racist policies toward the poor.
VIRGINIA EUBANKS has written an important new book, Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor, that describes the development of technology to update regulatory systems long used under capitalism to punish poor people under the guise of helping them.
Summarizing the scope of her findings and the comprehensive nature of the surveillance and control, Eubanks writes:
Across the country, poor and working class people are targeted by new tools of digital poverty management and face life-threatening consequences as a result.
Automated eligibility systems discourage them from claiming public resources that they need to survive and thrive. Complex integrated databases collect personal information, with few safeguards for privacy or data security, while offering almost nothing in return. Predictive models and algorithms tag them as risky investments and problematic parents. Vast complexes of social service, law enforcement, and neighborhood surveillance make their every move visible and offer up their behavior for government, commercial, and public scrutiny.
Eubanks calls the aggregate of these new systems of collecting and mining data and using it to automate decision-making a new “digital poorhouse,” constructed with databases, algorithms and risk models. To illustrate the national problem, Eubanks selected three examples.
In 2006, Indiana committed more than $1 billion to a 10-year contract for a consortium of for-profit companies to fully automate the application process for TANF, food stamps and other food programs, and Medicaid.
In Los Angeles, a “coordinated entry system” to match homeless people to housing was launched in 2013, built around an assessment tool using information collected from homeless people to generate a “vulnerability index” score, which in turn is intended to match housing to the un-housed.
Although survey respondents weren’t necessarily told so, their answers could be shared with law enforcement without consent or a warrant.
Also in 2013, Allegheny County, Pennsylvania, which includes Pittsburgh, began a project with academic consultants from New Zealand and California to develop a “predictive analytics” model to assess the risk that parents who are the subject of calls to the state’s child maltreatment hotline will neglect or abuse their children.
These three case studies allow Eubanks to look in some level of detail at how different systems are constructed, populated and used, and how they can be abused or simply malfunction. They provide some basis to ask questions about other systems that are operating or being developed elsewhere.
Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. St. Martin’s Press, 2018, 272 pages, $26.99.
Equally important, focusing on these three cases gives Eubanks the opportunity to visit individuals and families affected by each of the systems, and to allow them to describe their experiences.
She also talks to those tasked with using the high-tech tools, many of them experienced workers subordinated to some degree by an automated process they don’t fully understand. These human stories are compelling, and they help clarify the unique challenges of resisting automated decision-making in welfare and human services.
WHAT EACH of the three examples has in common is that, while they were all touted as minimizing subjective and arbitrary decision-making, and therefore reducing the effect of human bias, none can actually do so. Instead, each bakes biases into an algorithm, risk model or automated response system.
For example, peeling back the assumptions built into the Allegheny algorithm shows how it perpetuates systemic racism in child welfare and, as Eubanks says, automates inequality.
In Allegheny County, the system was intended to predict child maltreatment by comparing new cases to old cases. Since there is no database consisting of cases in which child maltreatment has been proven, “proxies” had to be selected. Those chosen were repeat calls to the hotline or removal into foster care.
In other words, any parent who is the subject of a call is compared to a database of cases in which more than one call was made to the hotline, or in which local authorities removed a child. Matches are made according to 131 variables that look for similarities to those prior cases, even though there is no way of knowing whether maltreatment actually occurred in them.
The variables included receipt of welfare, whether the parent herself was ever in foster care, whether there has been involvement with juvenile or criminal justice systems, and many other experiences that correlate more to race and class than to ability to parent.
Since poor Black parents are scrutinized more closely by “mandated reporters” to the hotline and by the child welfare authorities who investigate the calls, the proxies yield a comparison population that necessarily over-represents Black families. These measures, which Eubanks calls “poverty profiling,” are used to produce a risk score that amounts to traditional systemic bias dressed up as objective math.
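The mechanics of this proxy problem can be made concrete with a toy calculation. This is my own illustration, not the county’s actual model, and the group names and rates are invented for the example: if two populations have identical true rates of maltreatment but one is reported to the hotline three times as often, then any score fit to the proxy label (reports, not maltreatment) will rate the more-surveilled group three times as risky.

```python
# Toy sketch of proxy-label bias (hypothetical numbers, not the
# actual Allegheny model). The proxy outcome is "incident reported",
# which requires both an incident AND surveillance that observes it.

true_rate = {"group_a": 0.05, "group_b": 0.05}   # identical true maltreatment rates
surveillance = {"group_a": 0.2, "group_b": 0.6}  # group_b is reported 3x as often

# Proxy outcome rate = chance an incident occurs and is reported.
proxy_rate = {g: true_rate[g] * surveillance[g] for g in true_rate}

for group, rate in proxy_rate.items():
    print(group, rate)

# Any model trained to predict the proxy reproduces the surveillance
# gap: group_b appears three times "riskier" despite equal true rates.
```

The point of the sketch is that no amount of statistical sophistication downstream can undo this: the bias enters through the choice of proxy label before any of the 131 variables are considered.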
EUBANKS EMPHASIZES the continuity in motivation and ideology that connects high-tech data systems with 19th century “poorhouses,” which imprisoned the poor and deprived them of all dignity and autonomy as a condition of assistance barely sufficient to keep residents alive — and for some, given the high mortality rates within the poorhouses, completely insufficient.
Prior to the development of industrial capitalism, which involved rural displacement, urbanization and employment for wages on a much larger scale than had been seen previously, the problem of providing relief for people who were impoverished, homeless, sick or with some kind of impairment was largely a local concern for people known to one another.
Assistance might be at the level of a village or parish, or it might be provided by a landlord, whose privilege of exploiting agricultural workers was accompanied by the paternalistic expectation that he would care for them. There was inequity and condescension in these relief efforts, no doubt, but there were also ongoing relationships between individuals in the same community.
Once these relationships were disrupted and replaced with the arms-length sale of wage labor and a more urbanized working class, poor relief had to be institutionalized on a larger, more impersonal scale.
Means had to be found to control, regulate and stigmatize the poor — both to maintain the pressure on the working class to accept wage labor on the employers’ terms, and to establish an ideological narrative that held those in poverty as responsible for their circumstances.
The development of institutions and mechanisms to police the poor was undertaken by government bodies and private charities, often acting with quasi-governmental authority.
Periodic struggles of poor and working class people to break free of punitive regulation and create a counter-narrative about inequality and systemic racism have at times been successful, but a backlash has inevitably followed to prioritize punitive policies and stigmatize the poor.
THE POORHOUSES were supplemented and eventually replaced by “scientific charity” and “casework,” which used intrusive investigation based on moralistic, often racist assumptions about the poor and working class, as a means to stigmatize recipients and separate the “deserving” from the “non-deserving” poor.
Organized resistance to the most oppressive aspects of relief administration achieved uneven gains during the Great Depression of the 1930s, although the system retained its distinctions between those deserving of help and those who weren’t.
Gains were also achieved during the welfare rights movement of the 1960s, but a racist backlash quickly emerged, most famously represented by Ronald Reagan’s invocation of Black “welfare queens” as he campaigned for the White House in 1976 and 1980.
Computer technology became a favored response as part of the backlash. As Eubanks notes:
When poor and working people in the United States become a politically viable force, relief institutions and their technologies of control shift to better facilitate cultural denial and to rationalize a brutal return to normalcy. Relief institutions are machines for undermining the collective power of poor and working class people, and for producing indifference in everyone else.
While reading the book, I was reminded of another: 2012’s Killing the Poormaster by Holly Metz, which recounts the story of an unemployed stonemason who was accused of stabbing to death the official in charge of welfare benefits in Hoboken, New Jersey in 1938.
The radical lawyer Samuel Leibowitz staged a political defense, exposing the cruelty, bullying and bigotry of the deceased, and indicting the entire system of poor relief in the process.
The impersonal nature of decision-making using high-tech tools presents new challenges in organizing and publicizing. Not only can you not stab an algorithm, you can’t personify it, and you can’t stage a sit-in against a cloud storage facility.
Eubanks acknowledges these challenges, and the isolating effect on those subjected to automated decision-making, but she ultimately strikes an optimistic note. She emphasizes — and demonstrates — the power of individual stories to overcome indifference.
She also maintains a belief that solidarity is possible because the larger public, both working class and middle class, can comprehend that digital technology developed to control and police the poorest among us can and will be deployed against everyone outside the economic elite.