The numeric standardisation of humans. Risks and dangers of Super-Scoring

Prepared for the interdisciplinary symposium “Super-Scoring? Data-driven societal technologies in China and Western-style democracies as a new challenge for education.” Cologne, Germany; October 11, 2019.

By Stefan Selke

1 Organisation and Control 

Super-Scoring may sound new and disruptive, but the basic principles of the numeric evaluation and standardisation of humans have a long history. At the beginning of the 20th century, the industrial magnate Henry Ford was among the most well-known people around the globe, comparable to Elon Musk, Jeff Bezos and Mark Zuckerberg today. His socio-political commentary was widely received and highly controversial. In 1922, he formulated his perspective on society as follows: “The security of the people today is that they are unorganised and therefore cannot be trapped.” (Grandin 2009: 181) Of course he was not referring to big data, but to unions, which he considered dangerous. The correlation between organisation and control that Ford pointed to can be traced through the cultural history of social screening.

2 The Beginning: Social Screening in the Analogue World

Henry Ford is the prototype of an authoritarian patriarch, and his reign was deeply ambivalent. On the one hand, he paid extremely high wages, built hospitals and introduced free health care. On the other hand, he ran the infamous “Sociological Department”. All employees were repeatedly questioned about aspects of their personal and intimate lives, including their consumer behaviour, savings, diet, alcohol consumption, and even their sex lives. Ford introduced various regulations that incorporated “well-meant” contemporary ideas of social reform and that were, in part, simply unrealistic. His “Service Department” enforced compliance with these regulations through unannounced home visits and sanctions as needed. Over the years, extensive amounts of data about the lifestyles and conduct of Ford employees were thus collected. For Ford, data was the answer. But what, exactly, was the question? Ford had naïve motives that have resurfaced in different variations up to the present day. First, there is the utopian motive: like many others, Ford assumed that it was desirable and possible to create an ideal world. Then there is the technocratic motive, meaning the belief that “controlled” laboratory conditions can be used to conduct social experiments and realise utopias.

Astonishingly enough, long before Kurt Lewin introduced the term “natural experiment” into social psychology, there had already been numerous “natural” experimental designs that tried to create an ideal world under controlled conditions. This is because technocrats prefer a world with clear rules to one of confusing complexity. Time and again, therefore, alternative concepts to the disappointing present have been designed. There is currently no lack of such ideas for the future: spiralled underwater cities, micro-nations on man-made islands, high-tech oases in desert regions, or even the settlement of the Moon or Mars. Some well-known experimental designs are worth revisiting (see Selke 2020 for a detailed review).

Saltaire near Bradford (Yorkshire) was founded as a Victorian model village in 1851. At the heyday of industrialism, the industrialist Titus Salt, concerned with the welfare of his employees, had an entire town built for them, including a school, library, washhouse, and workhouse. Since the beginning of the 20th century, people in search of meaning have settled in special spiritual zones all over the world. Proponents of the bohemian life reform movement founded Monte Verità near the Swiss town of Ascona, north of Lago Maggiore, in 1900. Fifty years ago, the opening of Auroville was celebrated as a cosmopolitan social laboratory in Southern India. In the 1920s, Henry Ford decided to build his own city state in the middle of the Brazilian Amazon: Fordlândia was a combination of a rubber plantation and a utopian city. After the Second World War, the developer Bill Levitt, responding to the housing needs of returning US war veterans, designed the utopian city of Levittown on Long Island, with 17,000 prefabricated, simple family homes. And none other than Walt Disney planned the utopian version of a perfect “Duckburg”: Celebration, a pedestrian-friendly, completely digitally networked city designed for up to 20,000 people, was realised near Orlando, Florida in 1994.

As different as these projects may initially seem, they all share one obvious basic commonality: all of these utopias were based on rigid regulatory systems developed by the founder himself, compliance with which was meticulously monitored. Depending on the respective founder’s fundamental worldview, organisation and control were used to make certain expectations the focal point of the inhabitants’ conduct. At the same time, this rigid social engineering made these societal models more or less unlivable.

The motto in Saltaire, for example, was: church, not pub. On Sundays, English workers had to refrain from drinking their ale and instead go to church to feign piety to the tune of organ music. In Fordlândia, alcohol was strictly forbidden and there was soya instead of steak. Ford was a professed vegetarian and would have liked to convert all of humanity to vegetarianism. The stipulations of the soya enthusiast did not sit well with the workers in the Brazilian jungle at all: Brazilians love their sugarcane liqueur and a lunch of rice, beans and a serving of meat. Bill Levitt’s set of rules was also drastic, as it only tolerated white neighbours; in Levittown, Blacks were excluded long after segregation was officially abolished. In Disney’s Celebration, a comprehensive rule book stipulated which kinds of cars were allowed to be parked on the street, how yards were to be tended and how street signs and store fronts were to be designed. An agency monitored the observance of these standards, making “polite calls” to disloyal inhabitants when necessary.

The failure of well-meant utopias

The concept of an ideal world may seem attractive. However, these exaggerated utopias all fail sooner or later. Even Bertolt Brecht pointed that out: “All great ideas are doomed to fail because of the people involved.” One of the chroniclers of Monte Verità somewhat spitefully conceded that quite a few staunch followers of the life reform movement became connoisseurs of fine wine “virtually overnight.” Sick of the daily servings of crudités, they snuck away at night from the “mountain of truth” and found their own slice of heaven in donkey salami and red wine in one of the many rustic pubs. According to Auroville’s founding manifesto, everyone should live with equal rights; the toilets of the international community of seekers, however, are cleaned mainly by cheap Indian labour. Fordlândia ended with a revolt of the soya-weary workers, lynch law and fatalities. Saltaire was at least declared a UNESCO World Heritage Site, probably because the reutilisation of old industrial facilities is very profitable. Its current inhabitants can more often be found in the pub than in the church, where a lonely organist has been known to hammer out the melody from the movie “Terminator” on the keyboard. Levittown was turned upside down when the Wechsler family, who were Jewish communists, bought a house “undercover” for a black family, who then moved in. And Celebration is the only city in the world that was sold by the Disney Company “in one piece” after the concept of an ideal community failed. The correlation between organisation and control is obvious here: in all these examples, similarity (homogeneity) is extolled as the guiding organisational principle, and control is exercised both formally (rule books) and informally, through social pressure and shaming. Neither is convivial.

3 Progression: From Social Screening to Digital (Super-) Scoring

Digitalisation intensifies the correlation between organisation and control. A seemingly harmless example of this is lifelogging, or self-tracking. The basic idea is well-known: people measure themselves in an ever-growing number of life spheres, supposedly voluntarily, in the course of popularised everyday activities. This is accompanied by a powerful illusion of control. The driving motive of self-tracking is the idea that one’s own body and conduct of life are the last domains in which measurement still promises control.

There are distinct parallels here. Even the “Sociological Department” at Ford had the same starting point as digital self-tracking: the measurable performance and personal responsibility of the individual. “In capitalism, the only thing that is considered to be acceptable performance is whatever appears to be measurable and calculable” (Distelhorst 2014). Besides pharmacological and psychological remedies, anyone who is afraid of “being thrown out as junk goods” in capitalist competition, as Siegfried Kracauer (2013) aptly put it, will increasingly reach for low-threshold technological remedies for self-optimisation and self-control. Much like the assembly line and the questionnaires at Ford, digital self-tracking technologies strip rituals out of everyday life and substitute them with calculation processes and accountability. The correlation between organisation and control then takes on a whole new dimension through direct measurement and voluntary comparison.

It has already been explained in detail (Selke 2016, Mau 2017) how the socially inclusive character of society, as well as the humanist concept of the human being, is changing. Social institutions (such as law, education, data protection, the health care system) are overwhelmed by the complexity that comes with the subtle differences between types of data in metric cultures. We can therefore speak of rational discrimination when data not only makes differences visible, but when these differences entail social consequences. More and more interconnections between data and life chances result from this principle: we start to perceive ourselves differently when we all observe each other on the basis of data. Descriptive data become normative data. Normative data express, for example, social expectations of ‘correct’ behaviour, ‘correct’ appearance or ‘correct’ performance in the form of numbers. With that, normative data demand certain, socially desirable behaviour – as in Saltaire, Levittown, Fordlândia or Celebration. Little by little, an organisational principle of the social that is focused on differences and deficits is thus established: a constant search for mistakes, a decreased tolerance of errors and an increased sensitivity to deviation, regarding both ourselves and others. Rational discrimination may be based on supposedly objective and rational measurement methods, but these methods create divisions between ‘useful’ and ‘dispensable’ people. Above all, we are witnessing a renaissance of pre-modern appeals to ‘culpability’ in the guise of talk about ‘personal responsibility’. In short: rational observation represents an act of abstraction that alienates people from themselves and from others.
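
How descriptive data tips over into normative data can be made concrete with a deliberately simple sketch. The following Python fragment is purely hypothetical: its metrics, norm corridors and wording are invented for illustration and describe no real scoring system.

```python
# Hypothetical sketch: descriptive measurements become normative data
# the moment an expected "correct" corridor is attached to them.
# All metrics, norms and thresholds here are invented for illustration.

NORMS = {
    "daily_steps":   (8000, None),  # expected minimum, no upper bound
    "sleep_hours":   (7.0, 9.0),    # expected corridor
    "alcohol_units": (None, 1.0),   # expected maximum per day
}

def flag_deviations(measurements: dict) -> list:
    """Return the 'deviations' a scoring system would register."""
    deviations = []
    for metric, value in measurements.items():
        low, high = NORMS[metric]
        if low is not None and value < low:
            deviations.append(f"{metric}: {value} below norm {low}")
        if high is not None and value > high:
            deviations.append(f"{metric}: {value} above norm {high}")
    return deviations

# The numbers are purely descriptive until NORMS exists; afterwards,
# every measurement silently carries a social expectation with it.
print(flag_deviations(
    {"daily_steps": 5200, "sleep_hours": 6.1, "alcohol_units": 2.0}
))
```

The decisive step is not the measurement itself but the norm table: the moment it exists, deviation becomes visible, comparable and sanctionable.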

Forms of technology-based self-care and self-observation are intensified once more when heterogeneous types of data are compiled into a score or index, meaning into one single number. Both new organisational opportunities and new forms of control emerge from this numeric standardisation. First of all, the condensation of data into one number implies a radical reduction of complexity. With Pay-as-you-live (PAYL) insurance policies, complex health behaviour is reduced to one abstract number, and health insurance companies decide on possible incentives for their customers according to these health scores. The same applies to telematics tariffs for vehicle insurance. The following example is surely lesser known: glider pilots can compare their flights in a global Online Contest (OLC). In the evening, a pilot’s performance is converted into an abstract point value, which forms the basis of a ranking of the best pilots. All of these examples have one thing in common: the fundamentally immeasurable scope of a life and the qualitative aspects of conduct are reduced to one single number. The philosopher Harry G. Frankfurt speaks of a lack of respect, criticising the associated “attack on the existential reality” of human beings (Frankfurt 2016).
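
This radical reduction of complexity can itself be shown in a few lines. The sketch below, again with freely invented weights and value ranges rather than any insurer’s actual formula, condenses three incommensurable behaviours into one PAYL-style score and shows how two very different conducts of life collapse into the same number.

```python
# Hypothetical sketch of a PAYL-style composite score: heterogeneous,
# incommensurable behaviours are normalised to [0, 1] and collapsed
# into a single number. Weights and value ranges are invented.

WEIGHTS = {"activity": 0.5, "sleep": 0.3, "nutrition": 0.2}

def normalise(value: float, lo: float, hi: float) -> float:
    """Map a raw measurement onto [0, 1], clamping at the ends."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def health_score(daily_steps: float, sleep_hours: float,
                 veg_portions: float) -> int:
    """Condense three unlike quantities into one 0-100 'health score'."""
    parts = {
        "activity":  normalise(daily_steps, 2000, 12000),
        "sleep":     normalise(sleep_hours, 5.0, 8.0),
        "nutrition": normalise(veg_portions, 0.0, 5.0),
    }
    return round(100 * sum(WEIGHTS[k] * parts[k] for k in WEIGHTS))

# Two very different conducts of life collapse into the same number:
print(health_score(12000, 5.0, 1))  # very active, hardly sleeps -> 54
print(health_score(7000, 7.1, 2))   # moderate in every respect -> 54
```

Once the qualitative differences between these two lives have been averaged away, the tariff logic can only see the 54 – Frankfurt’s “attack on the existential reality” in miniature.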

4 From Industry Rulers to Data Barons

The reduction of complexity by means of an index system leads to new pressures to conform. These ultimately produce copied existences (Luhmann 1991a) and a new vulnerability based on digital data: “If effectively de-identified, and shared, these data could be used for good in the form of a consensual contribution to medical and academic research. However, under current industry norms, these data are vulnerable to being used for any number of unidentified purposes.” (Ajana 2018: 218) This new vulnerability is the result of a radical monopoly that goes far beyond the data collection of Ford’s “Sociological Department.” Ivan Illich criticised the corresponding restrictions early on: radical monopolies deny human beings the opportunity to use their natural abilities, turning people into forced consumers and reducing their autonomy. “It is a very special form of social control.” (Illich 1975: 84)

Henry Ford’s power was based on his industrial complex and the separation of production processes into the smallest possible units. The power of the new data barons, or “greedy institutions” (Coser 2015), by contrast, comes from the separation of every imaginable life process into individual measurable aspects. As early as the 1970s, Joseph Weizenbaum criticised the fact that the world was being transformed into one of numbers without bringing about any advancement in social utopian thought: data was being collected for the sake of data collection, without questioning its meaning (Weizenbaum 1977). The new instruments of data brokerage officially put an end to the era of the industry rulers. “Human beings have a tendency to hoard resources, thus creating wealth for some and poverty for others. (…) It’s plausible, even likely, that this spirit of imperialism will live on in the world of data resource extraction and trade. A new socio-economic category, that of the data baron, is currently in the making.” (Ajana 2018: 227) On the one hand, data are the “natural” resources of the future. On the other hand, civilisational ruptures are to be expected. One of the fundamental questions is therefore how dataveillance is changing the public sphere, and thus society as a whole (Lupton/Michael 2017), and how much room is (still) left for action.

5 Risks and Dangers of Manipulative Technology

The more super-scoring is used in the future, the more humans will be affected by new dangers. In order to properly understand this hypothesis, two useful sociological distinctions must be considered (Luhmann 1991b).

The first is the distinction between risk and danger: while specific actors or institutions can be held accountable for risks, “no one” is responsible for dangers – except the gods, nature or fate. In modern societies, science and technology have increasingly transformed dangers into allocable, predictable risks; weather events, for example, have been domesticated through digital data collection, meteorology and weather apps. In this respect, Max Weber spoke of the “disenchantment of the world.” Paradoxically, big data technologies are rather an example of the “re-enchantment of the world” and thus of the return of dangers, owing to the highly asymmetrical and opaque allocation of roles within society. The second distinction follows from the first: people are affected by the decisions that other people make. The concept of risk correlates with the decision-makers; the concept of danger, in contrast, correlates with the affected persons. As the decision-maker, a doctor may emphasise that the risk of a certain medical procedure is very low; the patient, as the affected person, may have their own, possibly quite different definition of that risk.

“Prophetic” digital technologies should therefore be re-evaluated, for we are increasingly dealing with “smart” or “intelligent” technologies that independently determine their own purposes; the spectrum ranges from smart virtual assistants to complex decision-making machines used for social engineering. As in the case of Henry Ford, a large amount of data is supposed to pave the way to complete security. This is in line with the belief in complete rationality, objectivity and efficiency expressed by the psychologist Steven Pinker (2018): “We have all learned not to believe in unicorns. Now we should also learn how to calculate risks with numbers. I consider quantitative, evidence-based thinking to be an imperative foundation.”

Digital societies’ promise of security has become increasingly evidence-based and thus relies on enormous amounts of data. At the same time, we have all become vulnerable to new dangers, because human beings’ decision-making autonomy is being handed over to algorithms and artificial intelligence. The disastrous part of this is that neither actors nor institutions can be held accountable for the decisions that are made. Dangers are now no longer the work of the gods, nature, or fate, but of decision-making machines based on (self-learning) algorithms. Decisions produced through “blackboxing” (opacity) turn us into affected persons and thus create new dangers.

So we are living in an era in which the illusion is still pursued that ideal societies can be created under controlled experimental conditions, or in controlled human experiments, and that all it takes is to collect and analyse data. At the same time, human beings are increasingly turned into affected persons by dangers that can no longer be externally attributed. In order to deal appropriately with super-scoring, a new kind of communication within society is therefore required – communication about dangers, and not only about risks.

6 From Risk to Danger Communication in the Context of Public Science

This type of communication can be the key contribution of public science. It can tie in with the previously mentioned illusion of ideal worlds and controlled experimental conditions, because even well-meant attempts at social planning or social engineering can backfire. In the best-case scenario, ideal worlds turn into inextricable paradoxes, as when Mark Zuckerberg speaks of developing “the social infrastructure to give people the power to build a global community that works for all of us,” whereby only the power of Facebook grows, turning its users into affected persons. In the worst-case scenario, these worlds mutate into totalitarian and inhumane machineries of coercion. Visions then become prisons. Wherever rules become almighty, control apparatuses, mechanisms of exploitation and tools of alienation emerge. The danger of future ideal worlds lies not so much in the massive use of technology as in the increasing fusion of technology with an ideology that is no longer based on a realistic concept of humans. In the end, exaggerated technocratic utopias lose their legitimation.

It is within this context that the question emerges of how society and science should deal with the depersonalisation of decision-making processes and the subtle transformation of risks into dangers. The most practical way seems to be the mutual observation of decision-makers and affected persons. Decision-makers would thus realise that the risky decisions they are responsible for (e.g. the use of AI in specific fields of application) become dangers for innocent affected persons. By the same token, affected persons would realise that risks arise wherever decisions are made (e.g. in the political system). Super-scoring would be an appropriate testing ground for establishing such constellations of mutual observation. One methodological problem is that while affected persons can easily be identified, the same is not true of the decision-makers behind the decision-making machines.

A shift from risk to danger communication must further acknowledge that neither ideal worlds nor controllable laboratories exist. Controlled laboratories may be the appropriate setting for technical experiments, but not for the simulation of real societies. The production of knowledge in the field of the “social” does not take place in closed laboratories where causal mechanisms are discovered and “nature” is decoded. If the human is to be comprehended, then society must be understood as an open laboratory in which learning takes place differently: a society cannot change if social practices are performed without exceptions to the rules, or if “deviating behaviour” is completely absent. While laboratory experiments react to interference with improved isolation (because the results should not be skewed by a speck of dust), so-called living labs and large-scale human experiments (e.g. the introduction of the Social Credit Score in China) actually use external sources of interference as tools to gain insight. Living experiments must, however, prove themselves in social and ethical terms, not in technological ones. This is the exact problem with super-scoring. Technology needs more ethics, not less.

When the philosopher Hans Ulrich Gumbrecht claimed that industry would pick up again “if people didn’t constantly talk about ethics,” it was, at most, indicative of a shift in the standards of civilisation. If data is the answer, then the fitting question is how humans can coexist well and peacefully. Can data be used for purposes other than organisation and control? It would be good if we could let go of the idea of controlled experiments and the desire for ideal worlds, for only then could real social utopias be developed, instead of merely upholding standard worlds designed according to the criteria of an efficient life. It would also be good to dilute pretentious technological ideas with a dose of practical wisdom, and to conceive of necessary rules as elastic rather than reducing human life to one single number. That would truly be an ideal, and above all livable, world in which we could be humans and not products.

When people become affected by new dangers through scoring, aspects of the human experience are lost. Perhaps this quote by Walter Benjamin is a fitting way to conclude: “What others see as deviations is the data that determines my course.”

Resources

Ajana, Btihaj (Hg.) (2018): Metric Culture. Ontologies of Self-Tracking Practices. Bingley: Emerald.
Coser, Lewis A. (2015): Gierige Institutionen. Soziologische Studien über totales Engagement (im Original: Greedy Institutions. Patterns of Undivided Commitment). Frankfurt a.M.: Suhrkamp.
Distelhorst, Lars (2014): Leistung. Das Endstadium einer Ideologie. Bielefeld: Transcript.
Frankfurt, Harry G. (2016): Ungleichheit. Warum wir nicht alle gleich viel haben müssen. Frankfurt a.M.: Suhrkamp.
Grandin, Greg (2009): Fordlandia. The Rise and Fall of Henry Ford’s Forgotten Jungle City. New York: Picador.
Illich, Ivan (1975): Selbstbegrenzung. Eine politische Kritik der Technik. Reinbek b. Hamburg: Rowohlt.
Kracauer, Siegfried (2013): Die Angestellten. Aus dem neuesten Deutschland. Frankfurt a.M.: Suhrkamp.
Luhmann, Niklas (1991a): »Copierte Existenz und Karriere. Zur Herstellung von Individualität«. In: Riskante Freiheiten. Individualisierung in modernen Gesellschaften. Hg. v. Ulrich Beck und Elisabeth Beck-Gernsheim. Frankfurt a.M.: Suhrkamp, S. 191-200.
Luhmann, Niklas (1991b): Soziologie des Risikos. Opladen: Budrich.
Lupton, Deborah/Michael, Mike (2017): »‚Depends on Who’s Got the Data‘: Public Understandings of Personal Digital Dataveillance«. In: Surveillance & Society, 15/2.
Mau, Steffen (2017): Das metrische Wir. Über die Quantifizierung des Sozialen. Frankfurt a.M.: Suhrkamp.
Pinker, Steven (2018): Interview. In: DER SPIEGEL 8/2018, S. 59.
Selke, Stefan (Hg.) (2016): Lifelogging. Digital self-measurement between disruptive technology and cultural change. Wiesbaden: Springer VS.
Weizenbaum, Joseph (1977): Die Macht der Computer und die Ohnmacht der Vernunft. Frankfurt a.M.: Suhrkamp.