Digital consultant, Konnektiv, Co-founder of the Global Innovation Gathering (GiG) Network, member of the re:publica program team
A short history of predictive mathematics
In 1654 a series of letters exchanged between Blaise Pascal and Pierre de Fermat changed the world we live in. Although humans have speculated about their future, bet on it and attempted to predict it through the ages, they were previously unable to calculate it. Pascal and Fermat opened up the idea and the possibility of predicting the future by calculating probabilities.
Today, we have become used to shaping our lives by calculating risks. How likely am I to find a good job if I study a certain subject at university? How much should I be investing in a pension fund in order to live a comfortable life when I am old? Before Pascal and Fermat, this was an alien way of thinking. What is more, we now shape our world by letting algorithms calculate these risks for us.
In their exchange of letters, Pascal and Fermat created the mathematical foundations needed to work with big data for predictive analytics. Predictive analytics describes the practice of “extracting information from existing data sets in order to determine patterns and predict future outcomes and trends”. A number of statistical techniques are used to conduct predictive analytics, from data mining and modeling to machine learning, all intended to analyse historical information, that is, information gathered in the past, in order to make predictions about the unknown and upcoming. Predictive analytics cannot tell us what will happen in the future, only what is likely to happen based on the input data. However, we often pretend it does exactly that (1).
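The core move of predictive analytics can be shown in a few lines: fit a model to historical observations, then extrapolate. The sketch below uses an ordinary least-squares line fit on invented numbers; it is an illustration of the principle, not any particular product's method.

```python
# Minimal sketch of predictive analytics: fit a trend to historical
# data, then extrapolate. All numbers below are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Historical" observations: year -> measured value (made up)
years = [2015, 2016, 2017, 2018, 2019]
values = [10.0, 12.0, 13.5, 16.0, 18.0]

a, b = fit_line(years, values)
prediction_2020 = a * 2020 + b  # a likelihood-based guess, not a fact
```

Note what the last line actually delivers: a projection of the past trend, nothing more. If 2020 breaks the pattern, the model has no way of knowing.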
Today, we live in a world driven by prediction through big data and algorithms. Every day, we let algorithms decide what movie we might want to watch next, which stocks to invest in, which advertisement we’re most likely to react to and what choices our self-driving cars should make. Our data is gathered with or without consent and harvested by data scientists who use it to “guess the future” (2). This is not at all a negative development per se. Many social use cases are being developed, such as the predictive models developed for the John Jay College of Criminal Justice in New York City to help identify which students were at risk of dropping out of college even though they were close to graduating (3).
In our increasingly complex and information-laden world, algorithms can be an important tool to help understand the world. But big data and AI-based decision making can also perpetuate existing biases and contribute to further entrenching the already existing surveillance economy. Numerous documented cases of predictive policing gone wrong, or of racist jail sentences being handed out due to biased data, have demonstrated the dangers of relying on simplistic data models in sensitive social environments. Data determinism is not just impacting our lives in such extreme or law enforcement situations, but on a daily basis. Social scoring is allowing data to rule over our lives and futures. Whilst we are eager to point the finger at the state surveillance and social scoring system employed in China, we often play down existing policies of Silicon Valley companies imposing their rules on society by using similar technologies. For example, the Airbnb website states: “Every Airbnb reservation is scored for risk before it’s confirmed. We use predictive analytics and machine learning to instantly evaluate hundreds of signals that help us flag and investigate suspicious activity before it happens.” Recently, a number of reports covered the fact that this artificial intelligence is used to mark down users found to be “associated” with fake social network profiles, or whose associated keywords, images or videos involve drugs or alcohol, hate websites or organisations, or sex work. Because of this policy and the ability to crawl the web for information on people’s social media accounts, several sex workers’ accounts were erased, despite them having used Airbnb solely for private, touristic purposes, just like any other user (4).
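Why does keyword-based flagging of this kind misfire? The caricature below makes the failure mode concrete. It is emphatically not Airbnb’s actual system, whose signals and weights are proprietary; the keyword list and threshold are invented for illustration.

```python
# Hypothetical caricature of keyword-based risk scoring. NOT any real
# company's system: the signal list and threshold are invented here,
# purely to show how context-blind signals produce false positives.

RISKY_KEYWORDS = {"alcohol", "drugs", "escort"}  # assumed signal list

def risk_score(profile_text: str) -> int:
    """Count naive keyword hits in a crawled profile text."""
    words = profile_text.lower().split()
    return sum(1 for w in words if w in RISKY_KEYWORDS)

def is_flagged(profile_text: str, threshold: int = 1) -> bool:
    """Flag any profile whose score meets the threshold."""
    return risk_score(profile_text) >= threshold

# A bartender's profile trips the same signal as the behaviour the
# system is meant to catch; the context is invisible to the match.
flagged = is_flagged("I serve alcohol at my bar job")
```

The bartender gets flagged just as readily as the behaviour the score was built to catch, because the model sees tokens, not situations.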
Why is living in a world relying on big data prediction a bad idea?
However, it is not just the negative examples, such as amplifying racist biases through algorithmic jail sentencing and predictive policing, or Orwellian social credit scoring systems, that should make us wary of relying on these tools exclusively. The data-driven realities and futures we are creating are based on data of the past, and will therefore always be a perpetuation of it. We need to complement our big data in order to break free of the data-deterministic structures we are programming today.
In her 2018 TEDxCambridge talk, ethnographer and data scientist Tricia Wang explains why 73% of projects in the 122-billion-dollar big data industry are not profitable. Having more data is “not helping us make better decisions”, because we are leaving out important perspectives that contextualize the data. Wang argues for the humanization of data: she calls big data that has been enriched with non-quantifiable, qualitative data gathered from an ethnographic perspective, data that “delivers depth of meaning”, “thick data”. She draws this conclusion from her own experience and research, for instance in China in 2009, where she predicted the triumph of the smartphone over the feature phone. But Nokia, the client she was doing research for, was unwilling to listen to the stories behind the data at the time and held on to the belief that people would not be willing to invest so much of their income in such a fragile device.
Failed data predictions shocked the world when President Trump came to power and when the UK voted for Brexit. Polls and other forms of prediction failed because the data was read without paying attention to the more nuanced shifts in political alliance and voter mobilization. Further, big data was used to target and directly influence millions of voters via social media channels, in particular Facebook, through its involvement in the Cambridge Analytica scandal and through the platform’s own mechanisms for micro-targeting voters in order to influence their political opinions. Big data failed society. It failed to predict the actual outcomes of the votes, and it failed humanity by undermining a fair democratic process. In order to utilize data effectively, we have to enable ourselves to see what the data does not show us. Tricia Wang warns: “There is no greater risk than being blind to the unknown” (5).
Political imagination and the eye for the unknown
How do we keep an open eye for the unknown? By speculating, by moving away from the data and opening our minds to the possibilities of what lies outside the measurable.
In his 2018 talk, “The Political Tragedy of Data-Driven-Determinism”, Mushon Zer-Aviv describes the process of deskilling through the integration of digital services into our everyday lives. Does it matter if we forget how to do simple arithmetic in our heads or how to use a pen? Whilst it might be acceptable that we are deskilling in the sense of no longer being able to read maps or remember phone numbers, it is not acceptable to lose our ability to imagine different futures. Zer-Aviv reminds us of the importance of maintaining and training “our ability for political imagination” (6). He goes on to explain that the 20th century has shown how one man’s utopia might be another man’s worst nightmare. That is why we must not think of the future as linear and deterministic. Instead, we need to think of futures in the plural. Because we tend to find it easier to formulate non-desirable futures in the form of dystopias, we require tools for the development of desirable futures.
The power of speculation in today’s society
Speculation describes the process of “forming of a theory or conjecture without firm evidence” (7) or “the activity of guessing possible answers to a question without having enough information to be certain” (8). In today’s data-driven society, speculation can be a liberating exercise. As Dunne and Raby argue in their book “Speculative Everything”:
“We believe that by speculating more, at all levels of society, and exploring alternative scenarios, reality will become more malleable and although the future cannot be predicted, we can help set in place… factors that will increase the probability of more desirable futures happening… equally, factors that may lead to undesirable futures can be spotted early on and addressed or at least limited” (9).
Authors from different disciplines, from the worlds of design, business development, games, and political philosophy, provide such tools. In the recent past, a number of methodologies and tools have been developed that invite us to speculate, imagine and create, rather than just calculate, analyze and assess.
Reflections on our digital Future(s)
Around three hundred years after Pascal and Fermat exchanged their letters and enabled humans to calculate the probability of future events, Christopher Strachey created one of the first letter-writing algorithms. In 1952, he produced what has been called the first piece of digital literary art: a “combinatory love letter algorithm for the Manchester Mark 1 computer” (10).
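The combinatory idea behind Strachey’s program is simple: draw words at random from fixed lists and slot them into a letter template. The sketch below is written in that spirit; the word lists and template are invented here, not Strachey’s originals.

```python
import random

# A minimal sketch in the spirit of Strachey's combinatory approach:
# slot randomly chosen words from fixed lists into a letter template.
# The word lists and the template are invented for this illustration.

ADJECTIVES = ["beloved", "darling", "precious", "tender"]
NOUNS = ["longing", "devotion", "fondness", "desire"]

def love_letter(rng: random.Random) -> str:
    """Generate one 'letter' by combining random list entries."""
    adj = rng.choice(ADJECTIVES)
    noun = rng.choice(NOUNS)
    return f"My {adj} dear, you fill me with {noun}."

letter = love_letter(random.Random(0))  # seeded for reproducibility
```

Even this toy version makes the point: a handful of lists and one template yield a combinatorial space of letters far larger than the program itself.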
In 2019, artists like Refik Anadol are experimenting with algorithms and imagination. His installation “Latent Being” is an attempt to create an algorithm that “dreams” about Berlin by creating “imagined” reflections of the city and its visitors. Anadol wants to create machines that allow us “to think beyond our linear life and have a new type of imagination”, thereby stretching the boundaries of what we have so far defined as imagination and speculation (11). Whilst we are teaching our machines to be creative, let us retain that creativity ourselves and explore what we can achieve by combining the two.
Footnotes and Other Stray Thoughts
5 // https://www.ted.com/talks/tricia_wang_the_human_insights_missing_from_big_data?language=en TEDxCambridge, 2018
9 // “Speculative Everything”, Dunne and Raby, p.