During the months-long worldwide lockdowns in response to the Covid-19 pandemic, not only our economies but also our public sphere decisively and irreversibly shifted into a digital realm. The omnipresence of algorithms in our increasingly digitalised public sphere has had a significant impact on the public discourse and agenda. At the same time, we cannot see what is happening inside the ‘black boxes’ where algorithms operate. Do such algorithms-based personalised recommendations uphold our individual freedom of choice, or do they represent a threat to it? Considering the ubiquity of these ‘guiding’ algorithmic mechanisms in online media and culture-related platforms, it is worth understanding how dependent we are on them and how this dependency may affect our future and culture – and how we can use them to strengthen our values and societies. In this article, we reflect on the correlation between algorithms and individual freedom in the increasingly digitalised European cultural domain, taking the quickly growing video-on-demand (VOD) sector as a case in point.
Easy access to any sort of audio-visual content is among the twenty-first-century conveniences that have already become habitual, a part of our daily lives that is almost taken for granted. Anytime, anywhere, on any personal device, we freely search for, find, and watch videos for entertainment as well as for informative and professional purposes. Thanks to video-on-demand (VOD) platforms, such as YouTube and Netflix, we are now liberated from following the fixed schedules of limited numbers of shows and films offered by cable television channels or cinemas. Instead, we are free to choose among an endless variety of programmes and shape our own screening agenda for an evening or a weekend. In our digital 2021, this recent opportunity already seems to be an indispensable element of our very understanding of freedom: freedom of choice, access to information, even freedom of self-identification and self-expression. However, despite the liberating and horizon-widening potential of these developments, are we truly as free and conscious in our choices as we would like to think?
When it comes to personalised use of technologies, the concept of freedom and individual choice is arguably trickier than it seems. The way the content is organised, shown, or promoted in social networks and online platforms follows the logic of an artificial intelligence (AI) system, with its strengths and limitations. When users are looking for new content, the algorithm’s output will recommend things they might want to watch, at that precise moment in time and space, using data collected on their location and online behavioural habits. Recommendation engines are becoming ever more sophisticated in analysing data and fine-tuning the content selection for individual users to suggest what they might be looking for. On the purely technical side, the use of these engines helps optimise the functioning of the platform itself for different purposes (including creating prediction products based on users’ behaviour) as well as helping users navigate the chaotic vortex of continuously emerging and changing information on the Internet.
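To make this mechanism concrete, the core of many recommendation engines can be sketched as a similarity computation between a user's viewing history and the features of candidate content. The following is a minimal, hypothetical illustration – the titles and genre vectors are invented for the example, and real platforms use far more elaborate models – but the underlying principle of matching new content to past behaviour is the same.

```python
import math

# Hypothetical watch-history profile: per-genre viewing counts for one user,
# and candidate titles described by the same genre features.
user_profile = {"drama": 4, "documentary": 1, "comedy": 0}
catalogue = {
    "Courtroom Drama":    {"drama": 5, "documentary": 0, "comedy": 0},
    "Nature Documentary": {"drama": 0, "documentary": 5, "comedy": 0},
    "Sitcom Special":     {"drama": 0, "documentary": 0, "comedy": 5},
}

def cosine(a, b):
    # Cosine similarity between two feature vectors stored as dicts.
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Rank candidates by similarity to the user's existing habits: the
# recommendation mirrors past behaviour rather than exploring beyond it.
ranking = sorted(catalogue, key=lambda t: cosine(user_profile, catalogue[t]),
                 reverse=True)
print(ranking)  # the drama title comes first
```

The point of the sketch is that the output is entirely determined by the data already collected: the user is shown more of what they have watched, at the precise moment the system estimates they are receptive to it.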
While the omnipresence of algorithms in our online searching is already too evident to have remained a secret to anyone, the question is whether algorithms-based personalised recommendations uphold our individual freedom of choice or represent a threat to it. In the light of the EU’s large-scale digital transition, it is worth understanding how dependent we are on AI systems and how this dependency may affect our future and culture – and how we can use those systems to strengthen our values and societies. Moreover, we need to understand the form that our fundamental liberal values and beliefs, with their purely human nature, can take in this quickly developing digital reality that is heavily reliant on algorithms.
In what follows, we reflect on the correlation between algorithms and individual freedom in the increasingly digitalised European cultural domain, taking the quickly growing VOD sector as a case in point. We first discuss the increasing role of recommender systems in Europe’s digital domain and how they are gradually substituting for the human factor in setting the public agenda. After that, we focus on the VOD sector to highlight the potential practical implications of this phenomenon for European culture. In the conclusion, we suggest a vector for finding solutions to this emerging dilemma between technological progress and human freedom.
New co-evolutionary vector: algorithms vs free choice?
The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer. – Alan Turing
During the months-long worldwide lockdowns in response to the Covid-19 pandemic, not only our economies but also our public sphere (from administrative operations to public debates) shifted decisively and irreversibly into a digital realm. The EU’s long-term agenda for large-scale digitalisation is not a remote strategy but a concrete action plan for European economies, societies, and individuals. There may be ongoing debates on the means and ways of achieving it, but there is unanimity on the common goal to prepare Europeans for the new era, particularly to secure the bloc’s strategic autonomy in this domain. While advancements in technology have led to a massive shift towards an interconnected society, these unprecedented developments have also presented us with novel threats – not only of a technological nature (e.g. cybersecurity, privacy) but also related to the philosophical and moral underpinnings of our European way of life.
The use of algorithms, as implicit and ubiquitous elements in organising our digital environment, is gaining in importance across an ever-wider spectrum of areas. Gillespie defines algorithms as ‘encoded procedures for transforming input data into the desired output, based on specified calculations’. Algorithms are either made by humans, through coding by hand, or generated from datasets through machine learning techniques. Consisting of instructions for executing a succession of tasks, algorithms are used to automate various software-driven processes, for example, categorising search results and advertisements. The increased use of complex algorithms has become necessary with the introduction of online applications and services, such as social media and streaming platforms. The speed at which these systems operate and the amount of data they can handle are hard to fathom, and they are fundamental for making sense of that data, extracting information and knowledge that can be put to use afterwards. In addition, with the introduction of machine learning and deep learning, more complex and modern algorithms can learn from each other and even create new algorithms. More complex systems of analysis, such as neural networks, are particularly useful when dealing with big data. Indeed, there is a mutual relation between algorithms and (big) data – the latter being immense datasets that are generated by, but cannot be processed with, traditional information and communication technology (ICT) applications.
While algorithms are used in a variety of circumstances, their impact on our daily lives will only increase during the next decade. This is related to the rollout of new technologies such as next-generation networks and the large-scale deployment of AI techniques, such as machine learning and neural networks, which will affect many aspects of our lives. However, the increasing presence of algorithms itself should not worry us – at least for now. The underlying reason for using algorithms for recommender systems is to provide users with targeted information, based on their habits and needs. For instance, YouTube and Netflix are using algorithms to suggest videos that users might be interested in watching, potentially facilitating our access to what is relevant to us. These processes work by collecting data from users (based on their privacy settings and preferences), such as identifying users’ location, content already watched, and general browsing habits. In addition, the information collected helps online platforms provide targeted advertising, which without doubt constitutes the main source of revenue for digital companies and social media. Social networks and the digital economy have thus significantly benefitted from the evolution of complex algorithms and the automation of computational processes. However, this does not come without further implications.
Algorithms can be defined as a modern co-evolutionary vector. While until recently human society was characterised by people’s relationship with nature and with each other, recommendations-based systems have influenced the way our society has evolved in the last decade and will continue to affect its development in the future. In particular, as the transmission of information has gravitated towards online platforms, this has altered the communicative space and how the public perceives information. On the one hand, in the context of communication through the Internet, information can be extrapolated from a single context and moved ‘from network to network’, making it ‘difficult for traditional gatekeepers, such as public relations professionals and journalists, to control or withhold information […]’. Carrigan and Porpora recently studied this interplay between human identity and our relation to technology and thinking machines. Describing how the digital technological matrix shaped society in the context of AI, they identify different phases of this transformation, up to the creation of the ‘humanted’: an augmented human identity ‘modified by technologies who is both the product and producer of the hybridization of society’.
On the other hand, as a result of the use of personalised recommendation systems, targeting is shifting from a specific audience, or ‘target group’, with predefined interests to a ‘personalised’ approach. This has changed the way information reaches audiences: the use of algorithms both for boosting search engines and for playing on the emotional dimension (that is, suggesting content in social media) detracts from human rationality. In this situation, the individual relies on (or is subject to) the mathematical rules of the algorithms used by the platform rather than on their own will. Herein lies a hidden dialogue between a human-driven factor, that is, somebody actively sharing content on social media or entering their preferences in a search, and automated computing, with the shared or recommended content following predetermined paths established by an algorithm. As a result, the content that becomes ‘viral’ creates a volatile situation, with the human factor possibly being diminished in this interaction and dissemination process.
Algorithms as new agenda-setters
The role of algorithms in (re)shaping our perceptions and everyday culture has recently been the focus of scholarly attention. With the rise of free digital information, it is now the algorithmic system that preselects information for us, based on our perceived preferences. Algorithms not only influence our private everyday lives and choices but, in the increasingly digitalised public sphere, have great potential to impact our political and socio-cultural discourses and agendas. Gillespie has coined the term ‘public relevance algorithms’ to refer to the way algorithms are ‘producing and certifying knowledge’, thereby to a great extent determining what we consider important, timely, and worthy of attention – in political, social, and cultural terms. As a result, the power of algorithms ranges from shaping public tastes and socio-cultural and political agendas to shaping ‘a public’s sense of self’.
What is novel here is not the phenomenon itself but the logic and the principles of filtering and classifying the information flow before it even reaches our eyes and ears. Societies have always had public arbiters whose expert judgement and authority (based on education, experience, achievements, or other qualities) would direct public attention and shape public opinion. Filtering and preselecting information to fit the anticipated needs of a certain target audience has always been among the key functions of the media and the cultural domain. The added value of a newspaper or an art critic consists not merely in transmitting and interpreting the news but, first of all, in identifying what information is relevant for their potential readers/listeners/viewers, thus determining whether certain facts or ideas are even worth mentioning and discussing. From this perspective, not only the audience’s opinion but even its very time and attention has always been to a significant degree directed by certain individuals, recognised and acknowledged as experts and public arbiters in a given domain (those with what Pierre Bourdieu would call social and cultural capital).
Today, with the shift towards digitalisation and a dramatic increase in the amount of information and the speed and scope of its circulation across the globe, the role of the human factor in this preselecting – and agenda-setting – process has decreased significantly, giving more and more power and credibility to technologies and automatisation.
Two theories from the literature are central to a discussion of freedom of choice and algorithms. The designs of both code architecture and nudges are not neutral: their forms reflect aims and decisions. Thus, designs made in the dark and without any kind of scrutiny risk being used to benefit their creators, or without due consideration of the balance of public interests.
Regarding architecture design (that is, coding), Lessig argues that the architecture of software can act as a regulator and constraint on human behaviours since this represents ‘[…] the “built environment” of social life in cyberspace. It is its “architecture”. […] The code or software or architecture or protocols set these features, which are selected by code writers. They constrain some behavior by making other behavior possible or impossible. The code embeds certain values or makes certain values impossible. In this sense, it too is regulation, just as the architectures of real-space codes are regulations.’
Moreover, when it comes to choices, behavioural economics theories such as ‘nudge’ theory can not only help us understand the functioning of complex recommender systems but also give us a broader perspective on the risks and implications. In Thaler and Sunstein’s words, ‘[a] nudge […] is any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.’
When machine learning algorithms are used as decision support tools with big data, as for instance in the case of recommender systems, nudges become a powerful tool. The recipients of these nudges are ‘hypernudged’, meaning that ‘Big Data-driven nudging is […] nimble, unobtrusive and highly potent, providing the data subject with a highly personalized choice environment’. Recommender systems are a ‘very powerful form of choice architecture, shaping user perceptions and behavior in subtle but effective ways through the use of “hypernudge” techniques, undermining an individual’s capacity to exercise independent discretion and judgment’.
What previously depended on the personal choices, socio-cultural capital, and individual preferences of an editor or an expert nowadays relies more and more on statistics, data, and variables and is filtered by algorithms. Even the phenomenon of self-made opinion leaders – such as YouTube and Instagram influencers – has only been possible thanks to the increasing role of recommender systems. After a certain level of views and likes is reached, the probability of a given item of content being considered by algorithms as relevant to an ever-broader audience increases – as does its presence in recommendations and ratings. In this way, in the algorithms-dependent digital public domain, it is popularity that determines relevance – not quality or trustworthiness.
With the advancement of AI systems, scenarios in which content, be it trustworthy or not, spreads quickly among a broad audience and gets beyond human control occur more and more often. Remarkable evidence has been provided by Facebook employees showing that the company does not fully control its recommendation engines, which can allow content of any kind to become viral in a split second. Although the technological might of the platform is commonly used for generating profit, it does not yet possess the means to guarantee that these very instruments are not facilitating the swift spread of unethical or potentially harmful and dangerous ideas, from misinformation on health-related issues to propagating openly discriminatory and hateful content. This evidence alone clearly points to the fact that the advancement of algorithmic technologies is currently not being matched by equally sophisticated gate-keeping engines.
Thus, the use of algorithmic information systems has led to a sea change in how information emerges and circulates in the public domain. In this context, we, liberals, are specifically concerned with how these developments might affect our fundamental values and principles in the long run. The question is whether the growing presence of such ‘guiding’ mechanisms in online media and culture-related platforms truly facilitates our access to the vibrant whirl of diverse content and increases our freedom of choice. Or does it, to the contrary, limit our focus to a certain (most popular or most familiar to us) segment of the available information?
European culture between technological progress and human values
Although the socio-cultural impact of the algorithmic logic behind recommender systems has been widely studied with regard to media and news, it is equally relevant for the cultural domain, or culture-related digital platforms. Due to the use of algorithms and the extensive deployment of recommendation engines, the digitalisation of (popular) culture is accelerating globalisation and ‘has shrunk the world into a much smaller interactive field’. There are a number of consequences and implications of this phenomenon for shaping the cultural horizon of Europeans, as individuals, citizens, and societies. Among the positive socio-cultural effects of this transformation is the fact that, thanks to better connectivity, more people have on-demand worldwide access to informative audio-visual content, such as documentaries, podcasts, and interviews. Anyone with an Internet connection is generally able to select independently what information to consume, in what way, and at what time. This opens up a seemingly limitless scope of constantly emerging cultural products and gives us the freedom to follow our own tastes, preferences, and interests. In an ideal scenario, this broadening of opportunities (in terms of accessibility of diverse content as well as increased personal liberty to select and filter it) allows for shaping one’s individual cultural and intellectual horizon.
However, in practice, algorithms-based recommendation systems present a substantial, even if not yet fully evident, threat to our freedom of choice – and, as a consequence, to our cultural sphere. Following the logic of similarity, which is a fundamental principle of recommendation systems, limits our awareness of diversity, differences, opposition, and alternatives. In fact, algorithms by their very nature are data-based, and this makes them values-dependent: they tend to enhance efficiency to achieve a specific outcome. In that sense, choices made by automated decision-making systems may be ‘an extremely potent tool [because they] translate normative values of stakeholders into actionable math’.
In doing so, they simplify the complexity of the world around us, narrowing our attention down to what is familiar, similar, and alike – and to what a recommendation system is trained to identify as interesting and relevant. Within this process, the abundance of options does not necessarily translate into freedom of choice. On the contrary, by limiting our focus to what is already most familiar to us, it may actually result in a reduction of this freedom. In this way, greater connectedness, as much as globalisation, not only potentially enriches our societies but also threatens to diminish our distinctive cultural specificities, as individuals and as societies. As a result, the use of AI in the cultural sector can lead to a more connected world in which cultural differences and individual preferences are less pronounced. This dynamic fosters a kind of simplistic identity-building to which Chuck Palahniuk’s narrator refers in Fight Club: ‘What kind of [Ikea] dining set defines me as a person?’
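This narrowing dynamic can be illustrated with a toy simulation (all titles, genres, and numbers here are invented for the example, not drawn from any real platform): if the top recommendation is always the title most similar to the current profile, and each watched title feeds back into that profile, the dominant preference can only grow, and the minority interest never resurfaces.

```python
import math

# Two stylised titles and a user with a mild initial preference for drama.
titles = [
    {"drama": 1.0, "documentary": 0.0},
    {"drama": 0.0, "documentary": 1.0},
]
profile = {"drama": 3.0, "documentary": 2.0}

def cosine(a, b):
    # Cosine similarity between the profile and a title's genre vector.
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

shares = []
for _ in range(5):
    best = max(titles, key=lambda t: cosine(profile, t))  # top recommendation
    for genre, value in best.items():                     # the user watches it,
        profile[genre] += value                           # reinforcing the profile
    shares.append(profile["drama"] / sum(profile.values()))

print(shares)  # the dominant genre's share only grows with each round
```

The drama share climbs from 60 per cent towards 80 per cent in five rounds; nothing in the loop ever gives the documentary a chance to be seen again. The sketch is deliberately crude, but it captures why similarity-driven feedback tends towards homogenisation rather than discovery.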
The example of VOD platforms sheds light on the practical implications that algorithms-based recommender systems can have for our cultural field. VOD streaming platforms are online services where users can access audio-visual content, such as videos and films, digitally. The idea behind them is simple: access any video content, anywhere, at any time. The popularity – not to say the omnipresence – of streaming services has increased dramatically in the course of the last decade and is expected to double in the next one. In their functioning, VOD platforms are heavily reliant on personalised recommendation systems, both for organising the platform’s functioning and for promoting specific content. The correct implementation of big data analysis to refine recommender systems is considered a success factor for big VOD providers, enabling them to follow and predict their subscribers’ habits and tastes.
Digital platforms entrust machines with the responsibility to select what is worthy of being promoted, watched, and discussed, thus enabling information and content to follow non-human-driven criteria. In a subtle yet powerful way, the omnipresence of recommender engines subjects the individual to the mathematical rules used by the platform. Does this mean that we are facing a new challenge – a potential clash between the freedom of the Internet and the freedom of the individual’s ‘right to self-identification’? This not only presents an ethical dilemma in itself, it also has far-reaching implications for European – and Europeans’ – overall cultural horizon. While an on-demand platform may offer high-quality content and original products, the mechanical way in which videos are recommended and promoted (or not) threatens to impoverish our public discourses, cultural agenda, and overall horizon. Following the logic of similarity and the growing reliance on mathematically generated guidance might divert public attention away from what could be truly new and thought-provoking, happening far away from us – or, ironically, just in front of us. In this way, the enriching cultural potential of the audio-visual sector can easily be lost, reducing it to a source of cultural fast food, where already known, ‘tasty’, easy-to-process, and accepted content makes us disregard and unintentionally dismiss important socio-cultural shifts, developments, and phenomena.
This issue remains hugely important for the future of the shared European culture. The way culture is promoted, communicated, and disseminated has the potential to shape and transform European society, today and in the future. Although this is not new in history, nowadays it is happening at the speed of a ‘bit’.
Instead of a conclusion: human-centric approach to digitalisation
Given the impact that digital platforms have on modern society, the purely mathematics-driven implementation of recommender systems remains tricky with regard to free choice. The VOD sector, placed at the intersection of culture and technologies, presents a case in point for demonstrating the potential clash between technology – neutral in and of itself from a moral point of view – and human values, culture, and ideological principles. In the context of the digital transformation in Europe, how can we use algorithms-based systems to strengthen our cultural richness and human capital, instead of allowing technological progress to reduce them?
Firstly, while considering the risks that the logic of technological advancement presents to our values-based European project, we should not overlook the potential value of culture in reversing this dynamic. Culture is a strong instrument in strengthening the European project as well as its guiding principle, ‘united in diversity’, while it also minimises the risk of losing human sensibility and critical thinking, both individually and collectively. In other words, not only can technology influence the evolution of the European cultural field, but the European cultural project could – and should – direct the pace of Europe’s technological advancement. The European Commission’s upcoming Media and Audio-visual Action Plan as well as its recent large-scale ‘New European Bauhaus’ initiative acknowledge the EU’s leading role in sustaining the European cultural project. Although it is questionable whether cultural projects should be directed in a top-down manner or include any sort of ideological underpinning, at the current stage in the EU’s history the role of culture is directly linked to preserving the attractiveness of European unity and uniqueness, both internally and externally. Therefore, we must ensure that algorithms do not side-track European cultural heritage and creativity (for example, vis-à-vis both its global and more local competitors). This means ensuring transparency about the very functioning of these recommenders and being capable of foreseeing any potential negative effect they might have. Here again, technology must be carefully examined, with regulatory measures to mitigate those risks, while preserving culture as one of our fundamental values.
Secondly, the key question for our future society is not about the algorithms themselves – it is about who will control them. Such a statement implies that algorithms are impartial when it comes to social dynamics and human interactions. Despite this being an extreme exaggeration, it might represent a pivotal point in the discussion, since the relation between automatisation, culture, and individual freedoms concerns fundamental aspects in the debates on the future of Europe. While Europe’s path towards digitalisation is unavoidable, unstoppable, and represents a step forward in the evolutionary process of our societies, we have discussed how the automatisation of content and culture (in a broad sense) entails the risk of imposing on us convenient boxes or paradigms to satisfy our innate human need for comfort and familiarity. This might come at the expense of morally and intellectually mature liberal democracies. However, while the advancements in technology represent the next big change in the history of humanity, this transformation should be directed by us, not by mathematics and statistics. It is thus essential to put the human factor and human values at the heart of the large-scale implementation of digital means. Recent academic studies provide preliminary insight into the form and shape that this might take. For instance, as a general idea, Avezzù suggests a turn (back) from algorithm-based systems towards human-curated content. Furthermore, consideration of the freedom of choice vs technological progress dilemma should remain central to the approach that we take on the path towards digitalisation.
More specifically, in relation to the audio-visual sector, fostering the diversity of sources and promoting high-quality content requires changing the blind suggestion mechanisms based on the number of views or the virality of content and adding to the recommendation engines criteria based on qualitative parameters that reflect European values and our cultural heritage. For instance, the recommender system of VOD platforms can be nudged to prioritise award-winning and classical films. This, in addition to information about the general functioning of the suggestion algorithm given to the consumer, would allow one to make a free choice and decide whether to follow what is automatically recommended by the system (based on popularity or similarity to one’s search history, for instance) or to explore new strands based on qualitative criteria. While technically it is easy to nudge an algorithm to favour certain criteria or give more weight to certain features while arranging content, doing this is indeed of utmost importance for our cultural domain.
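As a hedged sketch of what such a nudge could look like in practice – the scoring formula, the weight, and the catalogue entries below are assumptions for illustration, not any actual platform's method – a quality signal can simply be blended into the ranking score, with the weight controlling how strongly curated criteria temper popularity:

```python
# Invented catalogue entries: 'relevance' stands for the platform's usual
# popularity/similarity score, 'quality' for a curated signal such as
# award-winning or classic status.
catalogue = [
    {"title": "Viral Blockbuster",     "relevance": 0.9, "quality": 0.2},
    {"title": "Award-Winning Classic", "relevance": 0.5, "quality": 1.0},
]

def nudged_score(item, quality_weight=0.5):
    # Linear blend: quality_weight = 0 recovers the purely
    # popularity-driven ranking; higher values favour curated quality.
    return (1 - quality_weight) * item["relevance"] + quality_weight * item["quality"]

ranked = sorted(catalogue, key=nudged_score, reverse=True)
print([item["title"] for item in ranked])  # the classic now outranks the blockbuster
```

Because every title remains available and the weight can in principle be disclosed or set to zero, such a reordering stays choice-preserving in the sense of Thaler and Sunstein's nudge: it steers attention towards quality without forbidding any option.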
The Commission’s recent Digital Services Act package requires online platforms falling within the scope of the proposal to provide certain formal data on the functioning of the recommender systems they employ (for example, related to the functioning of the algorithms, what data are collected, and for what purposes). At the same time, the metadata, or ‘conditions of recommendability’, fuelling the algorithms behind their recommender systems still operate inside a black box. The latter, however, constitute our main target if we aim to make algorithmic systems instruments to promote both individual freedom and a quality-oriented cultural domain. In this regard, although it is unrealistic to aim for a recommender system fully controlled by humans and their values, given the complexity of such automated systems, technology must nonetheless be re-humanised to the greatest extent possible in order to uphold our fundamental values and reinforce our cultural objectives.
A decisive step in this direction will be introducing choice theory and a choice architecture aimed at building an environment that arranges content according to qualitative criteria defined by humans. Applying the concept of nudges introduces into the equation considerations of quality and the cultural agenda, while taking into account the freedom of choice dilemma. Stemming from psychology, this approach implies that choice architects influence behaviours by exploiting human cognitive biases. What is key here is that nudges are choice-preserving: although they aim to influence human behaviour in a certain direction (for example, following set priorities), humans can always opt out. This could represent a solution for overcoming the risky technical implications of recommender systems, while accommodating the general requirements of safeguarding freedom of choice and avoiding any kind of censorship or intervention by external actors.
In this way, technology will follow not machine rhythms, or algorithms, but human ‘rhythms’, or androrithms. Although this remains a long-term project, for the preservation of liberalism it is fundamental to keep this principle in mind while elaborating our vision for Europe’s digital future. In summary, while the digitalisation of our society is already taking place, any further steps should follow a logic that takes into account our core beliefs, fundamental values, and (cultural) heritage. Human-centred digitalisation should thus be the vector for a liberal approach towards more inclusive growth for individuals, opening up endless opportunities, while sustaining the European cultural project.
Abbasi, M.A., Liu, H., & Zafarani, R. (2014). Social Media Mining: An Introduction. New York: Cambridge University Press.
Avezzù, G. (2017). ‘The Data Don’t Speak for Themselves: The Humanity of VOD Recommender Systems’. Cinema & Cie, 17, 51–66.
Carrigan, M., & Porpora, D.V. (eds.) (2021). Post-Human Futures: Human Enhancement, Artificial Intelligence and Social Theory. Abingdon: Routledge, ISBN 9780815392781.
Castells, M. (2010). The Rise of the Network Society. 2nd ed. Oxford: Wiley-Blackwell, ISBN 978-1-4051-9686-4.
Chen, G.M., & Zhang, K. (2010). ‘New Media and Cultural Identity in the Global Society’. In R. Taiwo (ed.), Handbook of Research on Discourse Behavior and Digital Communication: Language Structures and Social Interaction, pp. 801–815. Hershey, PA: Idea Group Inc.
De Mauro, Andrea, Greco, Marco, & Grimaldi, Michele (2015). ‘What is Big Data? A consensual definition and a review of key research topics’. AIP Conference Proceedings, 1644, AIP Publishing, doi: 10.1063/1.4907823.
European Commission (2020). ‘Shaping Europe’s digital future’. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, COM(2020) 67 final, Brussels, 19 February.
European Union (2018). ‘The New European Bauhaus explained’. January, https://europa.eu/new-european-bauhaus/document/download/45f60059-6776-4fd7-8475-a456a56bbd5d_en.
European Union (2021). ‘About the initiative’. January, https://europa.eu/new-european-bauhaus/about/about-initiative_en.
Fawkes, J., & Gregory, A. (2000). ‘Applying Communication Theories to the Internet’. Journal of Communication Management, 5(2), 109–124.
Gillespie, Tarleton (2014). ‘The Relevance of Algorithms’. In Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot (eds.), Media Technologies: Essays on Communication, Materiality and Society, pp. 167–193. Cambridge, MA: The MIT Press.
Heikkila, Melissa (2021). ‘Facebook’s bad algorithm’. Politico AI: Decoded, 27 October, https://www.politico.eu/newsletter/ai-decoded/facebooks-bad-algorithm-natos-ai-strategy-ai-liability-is-coming/.
Hristova, Stefka, Hong, Soonkwan, & Slack, Jennifer Daryl (eds.) (2020). Algorithmic Culture: How Big Data and Artificial Intelligence Are Transforming Everyday Life. Lanham: Lexington Books.
Jenkins, H., Ford, S., & Green, J. (2013). Spreadable Media: Creating Value and Meaning in a Networked Culture. New York, London: New York University Press.
Jordan, M.I., & Mitchell, T.M. (2015). ‘Machine Learning: Trends, Perspectives, and Prospects’. Science, 349(6245), 255–260.
Leenes, Ronald E. (2011). ‘Framing Techno-Regulation: An Exploration of State and Non-State Regulation by Technology’. Legisprudence (Social Science Research Network), 5(2), 141–169, https://papers.ssrn.com/abstract=2182439.
Lehr, David, & Ohm, Paul (2017). ‘Playing with the Data: What Legal Scholars Should Learn about Machine Learning’. U.C. Davis Law Review, 51, 653–717.
Leonhard, G. (2016a). Technology vs. Humanity – The Coming Clash between Man and Machine. Zurich: The Futures Agency.
Leonhard, G. (2016b). ‘What are androrithms’. https://www.futuristgerd.com/2016/09/what-are-androrithms/.
Lessig, Lawrence (2006). Code: And Other Laws of Cyberspace, Version 2.0. New York: Basic Books.
Möller, Judith, Trilling, Damian, Helberger, Natali, & van Es, Bram (2018). ‘Do Not Blame It on the Algorithm: An Empirical Assessment of Multiple Recommender Systems and Their Impact on Content Diversity’. Information, Communication & Society, 21(7), 959–977, https://www.tandfonline.com/doi/full/10.1080/1369118X.2018.1444076.
Smith, Cooper (2014). ‘Social Networks Are Only Just Getting Started in Mining User Data’. Business Insider, 24 April, http://www.businessinsider.com/social-medias-big-data-future-2014-2.
Statista (2020). ‘Selected online companies ranked by total digital advertising revenue from 2012 to 2020’. June, https://www.statista.com/statistics/205352/digital-advertising-revenue-of-leading-online-companies/.
Statista (2021). ‘Share of respondents who read the written press every day or almost every day in the European Union (EU 28) from 2011 to 2020’. March, https://www.statista.com/statistics/452430/europe-daily-newspaper-consumption/.
Thaler, Richard H., & Sunstein, Cass R. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
Uricchio, William (2017). ‘Data, Culture and the Ambivalence of Algorithms’. In Mirko Tobias Schäfer and Karin van Es (eds.), The Datafied Society: Studying Culture through Data, pp. 125–137. Amsterdam: Amsterdam University Press, DOI: https://doi.org/10.25969/mediarep/12569.
van Drunen, Max (2021). ‘Editorial Independence in an Automated Media System’. Internet Policy Review, 10(3), https://policyreview.info/articles/analysis/editorial-independence-automated-media-system.
Yeung, Karen (2017). ‘‘Hypernudge’: Big Data as a Mode of Regulation by Design’. Information, Communication & Society, 20(1), 118–136.
Yeung, Karen (2018). ‘Algorithmic Regulation: A Critical Interrogation’. Regulation & Governance, 12(4), 505–523.
Zakurdayeva, A. (n.d.). ‘The Future of the Algorithm and Its Benefits for Technology Companies’. Yalantis.com, https://yalantis.com/blog/the-future-of-the-algorithm-economy/.
Zuiderveen Borgesius, F.J., Trilling, D., Moeller, J., Bodó, B., de Vreese, C.H., & Helberger, N. (2016). ‘Should We Worry about Filter Bubbles?’. Internet Policy Review, 5(1), https://doi.org/10.14763/2016.1.401.
An artificial intelligence system (AI system) means ‘software that is developed […] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with’ (European Commission, Artificial Intelligence Act).
 European Commission (2020), ‘Shaping Europe’s digital future’, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, COM(2020) 67 final, Brussels, 19 February.
 Tarleton Gillespie (2014), ‘The Relevance of Algorithms’, in Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot (eds.), Media Technologies: Essays on Communication, Materiality and Society (The MIT Press), p. 1, https://www.microsoft.com/en-us/research/wp-content/uploads/2014/01/Gillespie_2014_The-Relevance-of-Algorithms.pdf.
 M.I. Jordan and T.M. Mitchell (2015), ‘Machine Learning: Trends, Perspectives, and Prospects’, Science, 349(6245), 255.
Andrea De Mauro, Marco Greco, and Michele Grimaldi (2015), ‘What is Big Data? A consensual definition and a review of key research topics’, AIP Conference Proceedings, 1644, 106.
Cooper Smith (2014), ‘Social networks are only just getting started in mining user data’, Business Insider, 24 April, http://www.businessinsider.com/social-medias-big-data-future-2014-2.
 This may vary depending on the application, system, browser, and Terms and Conditions that single companies apply.
It should be clear that an algorithm alone cannot work properly: it needs data collected from users’ behaviour. The process of obtaining data generated by users on social media is called social media (data) mining. The purpose is to analyse these data in order to improve the platform technically as well as to create targeted marketing campaigns. For further information: M.A. Abbasi, H. Liu, and R. Zafarani (2014), Social Media Mining: An Introduction (New York: Cambridge University Press).
Statista (2020), ‘Selected online companies ranked by total digital advertising revenue from 2012 to 2020’, June, https://www.statista.com/statistics/205352/digital-advertising-revenue-of-leading-online-companies/.
A. Zakurdayeva, ‘The future of the algorithm and its benefits for technology companies’, Yalantis.com, https://yalantis.com/blog/the-future-of-the-algorithm-economy/.
 William Uricchio (2017), ‘Data, Culture and the Ambivalence of Algorithms’, in Mirko Tobias Schäfer and Karin van Es (eds.), The Datafied Society: Studying Culture through Data (Amsterdam: Amsterdam University Press), pp. 125–137, DOI: https://doi.org/10.25969/mediarep/12569.
Statista (2021), ‘Share of respondents who read the written press every day or almost every day in the European Union (EU 28) from 2011 to 2020’, March, https://www.statista.com/statistics/452430/europe-daily-newspaper-consumption/.
J. Fawkes and A. Gregory (2000), ‘Applying Communication Theories to the Internet’, Journal of Communication Management, 5(2), 109–124.
M. Carrigan and D.V. Porpora (eds.) (2021), Post-Human Futures: Human Enhancement, Artificial Intelligence and Social Theory (Routledge), ISBN 9780815392781.
 Carrigan and Porpora, Post-Human Futures.
 This leads to a horizontalisation of information dissemination, creating prerequisites for a shift from mass communication to personal communication and determining a hybrid situation of mass self-communication (see M. Castells (2010), The Rise of the Network Society, Wiley Blackwell, ISBN 978-1-4051-9686-4).
 Stefka Hristova, Soonkwan Hong, and Jennifer Daryl Slack (eds.) (2020), Algorithmic Culture: How Big Data and Artificial Intelligence Are Transforming Everyday Life (Lexington Books), https://www.amazon.com/Algorithmic-Culture-Artificial-Intelligence-Transforming/dp/1793635730; H. Jenkins, S. Ford, and J. Green (2013), Spreadable Media: Creating Value and Meaning in a Networked Culture (New York, London: New York University Press).
See Max van Drunen (2021), ‘Editorial Independence in an Automated Media System’, Internet Policy Review, 10(3), https://policyreview.info/articles/analysis/editorial-independence-automated-media-system; Judith Möller, Damian Trilling, Natali Helberger, and Bram van Es (2018), ‘Do Not Blame It on the Algorithm: An Empirical Assessment of Multiple Recommender Systems and Their Impact on Content Diversity’, Information, Communication & Society, 21(7), 959–977, https://www.tandfonline.com/doi/full/10.1080/1369118X.2018.1444076.
 Gillespie, ‘The Relevance of Algorithms’, 168.
 Gillespie, ‘The Relevance of Algorithms’, 168.
 F.J. Zuiderveen Borgesius, D. Trilling, J. Moeller, B. Bodó, C.H. de Vreese, and N. Helberger (2016), ‘Should We Worry about Filter Bubbles?’, Internet Policy Review, 5(1), https://doi.org/10.14763/2016.1.401.
 Lawrence Lessig (2006), Code: And Other Laws of Cyberspace, Version 2.0 (Basic Books), 121–125.
 Richard H. Thaler and Cass R. Sunstein (2008), Nudge: Improving Decisions about Health, Wealth, and Happiness (Connecticut: Yale University Press), 6.
 Karen Yeung (2017), ‘‘Hypernudge’: Big Data as a Mode of Regulation by Design’, Information, Communication & Society, 20(1), 122–123.
Melissa Heikkila (2021), ‘Facebook’s bad algorithm’, Politico AI: Decoded, 27 October, https://www.politico.eu/newsletter/ai-decoded/facebooks-bad-algorithm-natos-ai-strategy-ai-liability-is-coming/.
G.M. Chen and K. Zhang (2010), ‘New Media and Cultural Identity in the Global Society’, in R. Taiwo (ed.), Handbook of Research on Discourse Behavior and Digital Communication: Language Structures and Social Interaction (Hershey, PA: Idea Group Inc.), pp. 12–14.
 David Lehr and Paul Ohm (2017), ‘Playing with the Data: What Legal Scholars Should Learn about Machine Learning’, U.C. Davis Law Review, 51(653), 692; see also Ronald E. Leenes (2011), ‘Framing Techno-Regulation: An Exploration of State and Non-State Regulation by Technology’, Legisprudence (Social Science Research Network), 5(2), 141–169, https://papers.ssrn.com/abstract=2182439.
 https://www.bilgi.edu.tr/tr/etkinlik/10374/algorithms-in-film-television-and-sound-cultures-new-ways-of-knowing-and-storytelling/; Uricchio, ‘Data, Culture and the Ambivalence of Algorithms’, 155.
G. Leonhard (2016a), Technology vs. Humanity – The Coming Clash between Man and Machine (Zurich: The Futures Agency), 133.
European Union (2018), ‘The New European Bauhaus explained’, January, https://europa.eu/new-european-bauhaus/document/download/45f60059-6776-4fd7-8475-a456a56bbd5d_en; see also: European Union (2021), ‘About the initiative’, January, https://europa.eu/new-european-bauhaus/about/about-initiative_en.
Giorgio Avezzù (2017), ‘The Data Don’t Speak for Themselves: The Humanity of VOD Recommender Systems’, Cinema & Cie, 17, 15.
Avezzù, ‘The Data Don’t Speak’, 15.
Avezzù, ‘The Data Don’t Speak’, 15; Leonhard, Technology vs. Humanity.
G. Leonhard (2016b), ‘What are androrithms’, https://www.futuristgerd.com/2016/09/what-are-androrithms/.