Dieter Mersch

Digital Criticism
A Critique of “Algorithmic” Reason

Translated by Michael Turnbull

Published on 10 December 2017

Digital disrupture

Theories of the digital owe their currency to an ambivalent situation. To begin with, they originate in the visions and utopias of the alternative cultural enlightenment of the 1970s, which brought forth not only the personal computer but also media studies and media theories, which examined and reflected on the digital disrupture and now diagnose a caesura as substantial as that of the early modern period and the invention of printing. All the signs and contents of previous cultures are thus under close scrutiny, and are being transformed and overtaken by a development whose further dynamics are barely predictable. According to the general analysis, the technological turn associated with digitization will transform all circumstances of life to such an extent that, following Marshall McLuhan and the book he co-wrote with Quentin Fiore, whose title, ironically, isn’t The Medium is the Message but The Medium is the Massage, it is necessary to speak of a thorough “massage” of the present age. What are the hopes connected with this? What are its boundaries and risks? And why is the actual breaking point the period of social upheaval in the 1970s, and not the late 1940s or early 50s, which saw the mathematical theorization of the phenomenon and the construction of the first computers, then called “electronic brains”?

The origin, and thus the caesura, of digitization in fact lies much earlier, or at least that of its theoretical model and its technological equivalent, which reaches back to the early twentieth century and perhaps even, with Boolean algebra and Charles Babbage’s and Ada Lovelace’s studies on the Analytical Engine, to the middle of the nineteenth century. Alan Turing described the mathematical scheme of his eponymous Turing machine as a “universal machine” capable of simulating every calculable or mathematically representable problem. Moreover, alongside the mid-twentieth-century development of a general semiotics, Claude Shannon’s information theory and Norbert Wiener’s cybernetics brought a new scientific paradigm into the arena that from the early 1960s at the latest began to think about circular communication, self-correcting machines, or cyclical causalities and feedback loops, in order to reconstitute the natural sciences and humanities and thus to reconcile what C.P. Snow described as two different cultures.

Since then there seems to have been hardly a boundary that has resisted the success of digitization: intellectual processes are captured by digital algorithms; the neurosciences still follow, at least partially, the model of the brain as a computer; networks and streams of digital communication form our sense of the social, which is dominated by the idea of integral participation; science and the information society have become unthinkable without digital archives and data processing, just as the economies of desire can almost reliably be predicted and satisfied through algorithmic statistics and big data—a development that likewise goes back to the “probabilistic revolution” of the nineteenth century—to name only a few of the consequences. So why do we locate the caesura in the 1970s, the era of revolt? The thesis proposed here, contrary to the sketchy historical reconstruction above of digitization and computerization as a cultural formation, is that digitization wouldn’t have had this unparalleled success if it had emerged solely, as Friedrich Kittler imagines, from the abuse of “military equipment” and “war machines,” or from a mathematization of economic exchange and communication and their accompanying theories. For neither the invention of the telephone, cinematography, and television—which as we know form the basis of the first media-theoretical reflections, namely for McLuhan and the Canadian School—nor, as Joachim Paech added, that of the tape and video recorder—which permitted portable recordings of otherwise short-lived sounds and moving images, and therefore also their repetition, editing, and quotation, in a word their readability and manipulation—adequately explains the disruption of the 1970s and the caesuras of the present, which are even affecting subjectivization.

The ambivalence of a digital “culture”

On the contrary, it wasn’t the above events, which undoubtedly contributed to the emergence of what is today termed a digital “culture,” that were decisive, but above all the miniaturization of the computer in the same period. This in turn drew on the democratization ideas of the 1960s, and within two decades allowed the diverse media formats to be incorporated into a meta-medium, connected via the already existing telephone networks, and transformed into a single, globally functioning media apparatus. And even this fact, which Kittler, with an eye on his so-called universal discrete machine, also made central to his media-theoretical turn, wouldn’t have been so decisive had it not simultaneously been underpinned by a countercultural movement only possible in California, which in its characteristic mix of social and technological utopianism both celebrated and misunderstood the computer as a tool of emancipation and a counteroffensive against state control. So a vision was needed, whose history Fred Turner attempts to outline in his book From Counterculture to Cyberculture: it was above all the Californian dropout movement, with its critique of civilization, whose basic impulse opposed both the proliferation of capitalist economism and technological power, as demonstrated primarily in its emblem, The Whole Earth Catalog, only to veer within a decade from Romanticism to technological affirmation and to dream up the emergence of new forms of fellowship in hypertext and cyber-communities on the basis of egalitarian participation—exactly inverting its own heritage and critique.

Symptomatic of this “turn” was, among other things, the fact that the author of The Whole Earth Catalog, Stewart Brand, brought out The Whole Earth Software Catalog in 1984 and in 1985 founded the network Whole Earth ’Lectronic Link, known by the acronym WELL, which rapidly advanced to become a pioneer of online communities. As a basic language Brand adopted the philosophemes of systems theory and cybernetics, later joined by post-structuralist vocabularies, whose actual concerns were at the same time unreservedly distorted. “As it turned out,” he wrote later, “psychedelic drugs, communes, and Buckminster Fuller domes were a dead end, but computers were an avenue to realms beyond our dreams.” So the idealisms that passed from Herbert Marcuse’s “great refusal” into the online communities were founded in the fantasy that the PC might advance to a dissidence machine par excellence, to an agent of the unmasking of anti-democratic and authoritarian formations, and transform itself into a new and unimagined critical “tool” that would not only help to develop subversive counter-publics but also enable the creation of different kinds of communities, capable of finally overcoming social structures based on classical hierarchies. But for years nothing more has emerged from this than a capitalist transgression on the basis of multinational IT corporations that own and administer more resources than whole countries and divide 95% of the internet and its communication between them while circulating slogans like “Don’t be evil” or “Making the world a better place.” Beyond any idea of the political and out of sight of national law, they utilize the innocent teaser “You might also enjoy …” to instruct their users, who have mutated into compliant consumers of infinitely accelerated streams of communication and commodities, as to what they will desire or want to enjoy in future—not, as one might think, to control them, but to release them for future consumption (which is worse than control).

Cyberculture then means both the consistent fulfilment of the counterculture, as Fred Turner emphasizes, and its complete ruin, its inversion and distortion, and the utter obliteration of all its former utopias. Rarely has an age witnessed so totally the collapse of its hopes and visions, and thus its own disillusionment, as ours. I propose that the credo of digital disrupture and the constitution of digital cultures and their theories have been duped by this ambivalence, on which they are both founded and to which they also remain enslaved. Our capacity for scientific analysis, criticism, and judgment, together with our perception of the limits of computerization and its “evils,” has indeed been “scarred” by this double illusion.

Power and powerlessness:
the usurpation of the social sphere

The unprecedented triumph of digitization and computerization stands primarily for a new era of power. Power is inscribed into the technological dispositif itself, but in the context of the “digital disrupture” it has to be read as the power of a boundary-breaking mathematization. Mathematization means that access to the world, to the real, and even thought, action, decision-making, communication, and the public and social sphere are submitted in a hegemonic act to the unified scheme of an exclusively quantitative operativity, and are thus algorithmized. The metasystem of mathematization is the network, in the form of many intertwined graphs, a complex, many-dimensional structure of lines, nodes, and edges, which enables the blanket surveillance and probabilistic evaluation of its information streams via “big data,” the control of all connections and thus of interpersonal affects, senses, and desires. The regime of computerization has an almost phantasmal character, paired with the phallic yet infantile belief of its users in their omnipotence over the apparatus and their unconditional possession of its means, and their trust in its apparent promise of granting power over its technical equipment, even over its internal secrets, to anyone with access, keys, and codes. Every step, every action, and every impulse is based on a calculation step, a mathematical function and its formal derivation, which suggests both availability and solubility. Since Alan Turing applied Hilbert’s decision problem to the idea of a virtual “paper machine,” which produced its results simply on the basis of the decidability between “true” and “false” or 0 and 1, of reading and writing movements to the left or right, and of a halting function, calculation of this sort has formed a “decision machine” par excellence, for which there are neither interim states nor vagueness nor indeterminacy (except for the undecidability of the predictability or unpredictability of the totality of machine states itself). But because only two positions are needed, every decidability awakens the fiction of feasibility, which evokes a pragmatics of trial and error that ascribes everything to the probability of actual choices, so long as it is countable and therefore manageable.
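To make the strictly binary, stepwise character of such a “decision machine” concrete, here is a minimal sketch in Python (the transition table, state names, and example tape are invented for illustration; this follows the textbook scheme of a Turing machine rather than Turing’s own notation): at each step the machine reads one symbol, consults a finite rule, writes, moves, and either halts or continues, with no interim states.

```python
# Minimal, illustrative Turing machine: alphabet {"0", "1"} plus a blank "_".
# The transition table and the example tape are invented for illustration;
# they demonstrate only the binary, stepwise decision logic described above.

def run(tape, transitions, state="start", head=0, max_steps=1000):
    """Run a deterministic Turing machine until it halts or exhausts max_steps.

    tape        -- dict mapping integer positions to symbols
    transitions -- dict: (state, symbol) -> (symbol_to_write, move, next_state)
                   with move being -1 (left), +1 (right), or 0 (stay)
    """
    for _ in range(max_steps):
        if state == "halt":
            return state, tape
        symbol = tape.get(head, "_")               # unwritten cells read as blank
        if (state, symbol) not in transitions:     # undefined case: stop
            return state, tape
        write, move, state = transitions[(state, symbol)]
        tape[head] = write                         # write
        head += move                               # move
    return "undecided", tape                       # step budget exhausted

# Example rule set: flip every bit of the input, halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

state, tape = run({0: "1", 1: "0", 2: "1"}, flip_bits)
print(state, [tape[i] for i in sorted(tape)])      # prints: halt ['0', '1', '0', '_']
```

The step limit is only a practical guard: whether an arbitrary machine of this kind ever halts cannot, in general, be decided by another such machine, which is the undecidability alluded to above.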

Digital logic is a strictly discrete and binary logic, admitting neither paradoxes nor a third term, and it consequently upholds the appearance of solubility, as if everything were a question of “either–or,” of correct inference or veracity—similar to the counterstrategy of subversive hacking, which aims to expose the power of anonymous government under the banner of political clarification and makes use of the same mechanisms, hallucinating that these technologies can simply be adopted and controlled. If, aside from the Turing machine and graph theory, the new digital technologies are based on mathematical cybernetics, then not only does the paradigm of control seem ubiquitous, even in the name of the discipline, but conversely technology as a cultural dispositif appears to be entirely governable through mathematics. The semblance of predictability is equivalent to the semblance of the power of the mathematical—as if everything were a question of the right algorithm, an attitude that in turn ruthlessly transfers Laplace’s demon from nature and its causal explanation to the political and its institutions, as well as to society, ultimately in order to anticipate what our desires would have been and what we would have dreamed of. Power then produces and structures not only the real but also the social and alterity, which Jean-Luc Nancy and Maurice Blanchot described, with the expression “désœuvrement,” not as inoperative but as unable to be made operative and therefore calculable.


“Structural change of public life”

The consequences for what used to be called “public life,” as distinct from the privacy of the person, are unforeseeable. Today Web 2.0 offers a battleground of attention, intrigue, and manipulation, along with the acquisition and utilization of personal information, images, and expressions of opinion, which occupy such a nebulous space between “intimate” and “public” that the classical differentiations have lost all critical function. As early as 1961 Jürgen Habermas diagnosed a “structural change in public life,” tracing the changes in the concept of “civil public life” since its establishment by the Enlightenment, and its role for a functioning democracy, up to the 1960s. With the various “countercultural” utopias of the 1970s, and their technological turn in the late 80s and 90s, the process of digitization and the internet seem to have become the legitimate heirs of a critical public life, so that we are dealing with a second, this time technology-driven, structural change. But this remains subject to unrestrained economism, while generating its own form of crisis through the distribution of “alternative facts” and information bubbles with a life of their own, and through the social erosion induced by shitstorms and hate speech.

This raises the question of whether the bases of digitization and networking might not be redetermined. Critical discourse has for too long been solely involved with the distorted images of surveillance, and has prompted a debate exclusively on subversion and decontrol, despite the fact that this reproduces the technical mechanisms it professes to oppose. Just as the technical critique of technical power can do nothing more than affirm technology, WikiLeaks and other critical hacking programs reproduce technical structures without creating alternatives. Julian Assange, Edward Snowden, or viral actions as productive disturbance, following the principle of beating the rulers at their own game, are therefore unable to avoid utilizing the same surveillance they attempt to override in their “leaks.” Freedom can’t be unqualifiedly gained through the negation of power by the same power.

Moreover, the right to anonymity was for years regarded as an established freedom on the internet, despite its abuse through anonymized posts that distribute falsities, inventions, or non-verifiable claims behind the masks of fake identities concealing social bots or robots, in order to undermine the system of trust. The immateriality of communication and the possibility of unlimited connectivity without material barriers are indeed celebrated as among the great achievements of internet culture, but at the same time the disappearance of physical presence and of parrhesia as an attitude of self-confident avowal goes hand in hand with a loss of social responsibility and its inherent ethicality. When people lose their belief in religion, G.K. Chesterton once mocked, they no longer believe in nothing but in everything. Hence the necessity of restoring the faculty of discrimination, which is able to arrest the dissemination of arbitrary information and renew credibility. Suggestions have so far been muddled and ineffective, but this is no argument against them, rather an indication of the difficulty of the task. In addition, the proliferation of an internet communicativity tending toward deregulation has been underestimated from the very beginning, because only the infrastructure and acceleration of the systems have been realized, not their possible regulation. This has to do with increased access and speed of circulation, in a word with what Friedrich Nietzsche denounced as the unconditional will to escalation and excess, which in truth represents nothing other than a glut of economic logic itself. It should also be asked whether current developments toward de-democratization, the emergence of populist tendencies, and internet disinhibition and rabble-rousing are a direct fruit of the network structures themselves, so that the digital public sphere and its formatting inherently bear the stamp of increasing antisocialization.

Technological criticism as cultural criticism:
philosophy as demarcation

The above points also lead to the more fundamental question of the future of a critical and democratic power of political judgment, and to the question of whether the concept of the social hasn’t suffered under the conditions of a communication based exclusively on the directives of a mathematical decision logic, losing its meaning through the substitution, in digitality, of facts and knowledge by data, reflection by reference, practice by operativity, justice by participation, and trust by accessibility. While in the 1980s Jacques Derrida showed the asymmetry of the “gift” to be an indispensable basis of sociality, and Emmanuel Lévinas thought of the equally asymmetrical principle of alterity as antecedent to any possible form of relationship, it should now be asked whether these dimensions have once again been suspended and deleted by the technological conditions of “internet culture.” This has nothing to do with technological criticism as cultural pessimism—in the shadow of which the suspicion of a neoconservative animosity towards progress unavoidably creeps in—but rather with a cultural criticism that, inasmuch as it addresses the reverse sides of a totalizing principle, appears in its increasing futility to be all the more necessary.

The aim, accordingly, is to sound out the boundaries, obstacles, and misapprehensions of what comes under the heading of “digital disrupture,” in order to be able to identify what its “culture”—and its “danger,” to quote Heidegger—might be. This can be indicated in particular by a dialectic of dissolution, truth, and moderation recurrently observed in the history of freedom. Two historical examples can be mentioned here. The rise of the ancient Greek city-states saw the burgeoning of sophism, which transformed the technique of rhetoric into both a method of clarification and one of alienation and power, by exploiting the chronic indistinguishability of conviction and persuasion in order, as the Socratic critique of sophism has it, to make the weaker cause appear the stronger. The power of oration consequently surpassed the force of reason, in that apparently any claim, however groundless, could be advocated. The scandal that truth and lie could be expressed with the same linguistic means, without any binding criteria for distinguishing them, was thus deeply felt. The reaction to this was the emergence of the classical period of Greek philosophy with Plato and Aristotle, who elevated the principle of logos itself to a criterion within language, so as to play it off against sophist fallacies and to implement a rhetorical demarcation line that was able to limit the power of oration.

A second example is the invention of printing in the early modern period, which led to a changed culture of knowledge whose individualization at the same time placed no restrictions on the flood of publications. Apparently anything could be said, alleged, or distributed. Against this excessive production and the associated erosion of certain knowledge, the sciences on the one hand themselves established the (exoteric) standard of the repeatability, public enactment, and verifiability of experiments, while on the other Denis Diderot and Jean-Baptiste d’Alembert conceived the great encyclopedias as a project of social enlightenment and a corrective intended to objectify knowledge. Finally Kant attempted to put a stop to the all too free and boundless pragmatics of reason with his Critique of Pure Reason, in which he understood even rationally undertaken “criticism” as a “restriction of the exercise and scope of reason.” It wasn’t arbitrary assertion that triumphed in public life, said Kant, but rather what had restrained itself through the dressage of reason and its sayability. Once again philosophical reflection provided an answer to the imminent dispersion of truth and freedom, in order to relate each to the other appropriately.

Critique of “algorithmic reason”

In transferring these two sample historical situations to the current “cultures of the digital,” it should accordingly be asked whether a yet unwritten “critique of algorithmic reason,” understood as a restriction of its claim to validity, might be necessary, so as to restrain the overproduction of “truths” and “falsities” on the internet. For it can already be seen that large fields of knowledge and social practice have been taken over by digitization processes—from technical material control, medical expertise, and mathematical reasoning to the regulation of the flow of money or the automatic generation of knowledge and its evaluation and qualification, to name only a few. The problem, however, consists in a double challenge. On the one hand, in the digital context the propositional content of a statement, its actual provision, becomes an exclusive “matter” of data and their calculation, that is, of the mathematical function—often through statistical techniques such as “big data” or “brute force,” which turn judgment into a thing of numerousness and consequently of an averaging-out across a variety of processes and programs. On the other hand, the algorithmic code is totalized and applied even where it fails: for example in relation to moral decisions or legal judgments requiring sagacity and the consideration of individual cases, but also in the semantic analysis of “true” or “truthful” statements as against explicit deception and mystification, or irony and caricature, because verification can’t be left solely to the calculations of probability theory. New, different, and substantiated positions or creative answers must instead be found that are capable of prompting paradigm changes, but that can’t be covered statistically because they don’t correspond to the norm. This also applies to the alternative convictions or opinions that appear in the echo chambers of “social media.”
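How such a statistical reduction of judgment to numerousness and averaging might look can be suggested by a deliberately crude sketch in Python (the word lists, weights, and neutral baseline are invented; no actual fact-checking system is reproduced here): a statement’s “credibility” becomes nothing more than the average of fixed word scores, so that an ironic remark or a genuinely new claim is graded against the statistical norm like any other string of tokens.

```python
# Deliberately crude sketch of a purely statistical "credibility" score.
# Word lists, weights, and the neutral baseline are invented for illustration;
# the point is only that judgment becomes counting and averaging over tokens.

SUSPECT_WORDS = {"shocking", "miracle", "exposed", "hoax", "secret"}
TRUSTED_WORDS = {"study", "according", "data", "report", "measured"}

def credibility_score(statement: str) -> float:
    """Map a statement to one number between 0 and 1 by averaging word weights."""
    tokens = [w.strip(".,;:!?").lower() for w in statement.split()]
    if not tokens:
        return 0.5                                  # nothing to count: neutral
    total = 0.0
    for w in tokens:
        if w in SUSPECT_WORDS:
            total -= 1.0
        elif w in TRUSTED_WORDS:
            total += 1.0
    # Averaging-out: one number stands in for the judgment of the whole.
    return max(0.0, min(1.0, 0.5 + total / (2 * len(tokens))))

# Irony, caricature, or a genuinely novel claim receive exactly the same
# treatment as everything else: they are scored against the norm, no more.
print(credibility_score("According to the report, the data were measured twice"))
print(credibility_score("A shocking secret miracle cure, exposed at last as a hoax"))
```

Whatever threshold is then applied, the procedure can only compare token counts with prior frequencies; it has no access to the difference between a lie, a caricature of a lie, and a claim that is true precisely because it departs from the norm.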

We really need an analysis of algorithmic conditions and their paradoxes and ambiguities, one that gives them an adequate framework and horizon. But what we currently find instead is an algorithmic solution of the algorithmic, much as digital solutions are being offered for the problems of the digital public sphere, in the way that IT corporations, for example, use exclusively mathematical procedures to evaluate and delete “fake news,” inappropriate portrayals, or violations of personal rights. This tends to result in a circularity that leaves the drawing of boundaries and the raising of barriers solely to programming, instead of restoring them to our ethical conscience and to an understanding of what the social could mean today. Any such mechanical limitation, by contrast, remains alien to the machine, just as its inability to decide lies in the impossibility of self-calculation. The nucleus of digital culture should instead be sought where the cultural of culture is located: in human beings and their capacity for self-criticism and reflection.



Dieter Mersch

was Professor of Aesthetics at the Zürcher Hochschule der Künste until his retirement and is President of the Deutsche Gesellschaft für Ästhetik. He studied mathematics and philosophy in Cologne, Bochum, and Darmstadt, and is co-editor of the Internationales Jahrbuch für Medienphilosophie. His main areas of work are philosophical aesthetics, art theory, media philosophy, image theory, the philosophy of music, and continental philosophy of the twentieth and twenty-first centuries.
