The Perks of Being a Data Wallflower?

As machine learning and statistical modeling begin to have real consequences in people's lives, we have to ask how these systems are being used and what their impacts are. When an algorithm or model fails to accurately represent a given individual, the consequences can be real: from the seemingly mundane case where a social media feed serves you an advertisement that does not reflect your interests, to the more dire case where you are flagged as a potential security risk. In many cases, and not at all like Callon's scallops, this "dissidence" in representation does not challenge or unravel the efforts in place (Callon, 1984). Algorithms and statistical models hide not only behind a veil of numerical objectivity but also behind the proprietary secrecy of the private organizations that design and implement them.

Is the implication that it all falls on private citizens to conform? Do we make ourselves more predictable for the sake of being better understood, and thus not an outlier in a statistical model used by a private or governing body? Do algorithms simply reflect the human bias that already exists in the world? Do they amplify this bias? Or do they introduce new biases altogether? Is it better to actively produce data and become a more predictable person, or is it better to become what I call a "Data Wallflower"? A Data Wallflower is a person who sits silently, like a flower decoration on the wall, watching the party of social media and internet-goers meander and leave their digital footprints for others to pick up and use. What are the power dynamics involved in withholding and producing public data?

In another post I discussed how algorithms are being used to forecast criminal recidivism and how this is shaping sentencing guidelines. Recently, The Verge reported on a case where the New Orleans Police Department partnered with Palantir, a Silicon Valley-based data mining and predictive analytics firm, to implement a predictive policing strategy. The system combined disparate data sources, including social media data, court filings, public records, phone numbers, and other records held by the police, to create an "individualized crime forecasting" tool. It identified roughly 3,900 people as being at risk of involvement in gun violence. These people were subsequently targeted by the city's "CeaseFire" program, which involved police interventions offering positive incentives (job training, education, health services, etc.) for cooperation, as well as negative consequences, such as harsher sentencing, for those who did not cooperate. In another well-known example, Cambridge Analytica "microtargeted" the electorate with data models and media campaigns in an effort to sway the 2016 US elections. Where else are models and algorithms being used in the social and civic arena?
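To make the mechanics less abstract, here is a deliberately toy sketch, in Python, of what scoring individuals from fused records might look like in principle. Every detail below (the data sources, the weights, the cutoff) is invented for illustration; the reporting does not disclose how Palantir's actual system works, and it remains proprietary.

```python
# Hypothetical illustration only: a toy "risk score" built by merging
# counts from disparate record sources, keyed by person. The fields,
# weights, and threshold are all invented; real systems are undisclosed.

COURT_FILINGS = {"person_a": 2, "person_b": 0}   # prior filings per person
SOCIAL_TIES = {"person_a": 5, "person_b": 1}     # links to flagged accounts
FIELD_STOPS = {"person_a": 3, "person_b": 0}     # police contact records

WEIGHTS = {"filings": 1.5, "ties": 0.8, "stops": 1.0}  # arbitrary weights
THRESHOLD = 4.0                                        # arbitrary cutoff

def risk_score(person: str) -> float:
    """Combine counts from each data source into one weighted score."""
    return (WEIGHTS["filings"] * COURT_FILINGS.get(person, 0)
            + WEIGHTS["ties"] * SOCIAL_TIES.get(person, 0)
            + WEIGHTS["stops"] * FIELD_STOPS.get(person, 0))

for person in ("person_a", "person_b"):
    score = risk_score(person)
    print(f"{person}: score={score:.1f}, flagged={score >= THRESHOLD}")
```

Notice two things: the "at risk" list is simply everyone above an arbitrary cutoff, and a person with thin records, a Data Wallflower, scores low by default. The weights and threshold are value judgments, yet once buried in code they read as neutral arithmetic.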

Seeing all these examples certainly paints a dismal picture of how statistics and machine learning are being used as tools of oppression, manipulation, or careless self-gain. Becoming a passive Data Wallflower seems like a viable way to simply "opt out" until you get a better grasp of how these systems work. The troubling thing is that machine learning algorithms, especially the more sophisticated ones, resist understanding: they are "black boxes" that are very hard, or impossible, to untangle. The reasons why a given model produces a given output are often so complicated and obscure that they cannot be translated into human logic and reasoning. What place do such algorithms have in argumentation? And on the question of becoming a Data Wallflower: is it really that easy to stop producing data and avoid surveillance? Could becoming an outlier by not participating in data production introduce a form of oppression in and of itself? Could there be special privileges for those who participate in data production versus those who do not?
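As a minimal sketch of that interpretability gap, assuming scikit-learn is available, the snippet below trains an interpretable linear model and an opaque ensemble on the same synthetic data. The data and features are placeholders, not any real system's records.

```python
# Sketch of the interpretability gap, using scikit-learn (assumed installed).
# A linear model's reasoning can be read off its coefficients; a forest of
# hundreds of trees gives similar predictions with no comparable story.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: 20 anonymous "behavioral" features per person.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X, y)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# The linear model yields one weight per feature: a human-readable claim
# like "feature 3 pushes the prediction up, feature 7 pushes it down."
print("linear coefficients:", linear.coef_[0][:5].round(2))

# The forest yields only aggregate importances; the actual decision path
# for any one person threads through 300 separate trees.
print("forest importances: ", forest.feature_importances_[:5].round(2))
print("trees consulted per prediction:", len(forest.estimators_))
```

The linear model's coefficients can be read as an argument ("this factor raised the score"), while the forest offers only aggregate importances. That asymmetry is part of what makes contesting an algorithmic decision so difficult.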

Wrapped up in all of these questions of power, control, objectivity, reasoning systems, incommensurability, and representativeness is the question of culture. Who are the arbiters of social algorithms? Perhaps we should be looking at how Silicon Valley views the ideas of progress, morality, and rationality to understand what the future may hold.

 

Callon, M. (1984). Some elements of a sociology of translation: Domestication of the scallops and the fishermen of St Brieuc Bay. The Sociological Review, 32(S1), 196–233.
