
Cite as: « The Philosophical Concept of Algorithmic Intelligence », Spanda Journal, special issue on “Collective Intelligence”, V (2), December 2014, pp. 17-25. The original text is freely available online from Spanda.

“Transcending the media, airborne machines will announce the voice of the many. Still indiscernible, cloaked in the mists of the future, bathing another humanity in its murmuring, we have a rendezvous with the over-language.” Collective Intelligence, 1994, p. xxviii.

Twenty years after Collective Intelligence

This paper was written in 2014, twenty years after L’intelligence collective [the original French edition of Collective Intelligence].[2] The main purpose of Collective Intelligence was to formulate a vision of a cultural and social evolution that would be capable of making the best use of the new possibilities opened up by digital communication. Long before the success of social networks on the Web,[3] I predicted the rise of “engineering the social bond.” Eight years before the founding of Wikipedia in 2001, I imagined an online “cosmopedia” structured in hypertext links. When the digital humanities and the social media had not even been named, I was calling for an epistemological and methodological transformation of the human sciences. But above all, at a time when less than one percent of the world’s population was connected,[4] I was predicting (along with a small minority of thinkers) that the Internet would become the centre of the global public space and the main medium of communication, in particular for the collaborative production and sharing of knowledge and the dissemination of news.[5] In spite of the considerable growth of interactive digital communication over the past twenty years, we are still far from the ideal described in Collective Intelligence. It seemed to me already in 1994 that the anthropological changes under way would take root and inaugurate a new phase in the human adventure only if we invented what I then called an “over-language.” How can communication readily reach across the multiplicity of dialects and cultures? How can we map the deluge of digital data, order it around our interests and extract knowledge from it? How can we master the waves, currents and depths of the software ocean? Collective Intelligence envisaged a symbolic system capable of harnessing the immense calculating power of the new medium and making it work for our benefit. But the over-language I foresaw in 1994 was still in the “indiscernible” period, shrouded in “the mists of the future.” Twenty years later, the curtain of mist has been partially pierced: the over-language now has a name, IEML (acronym for Information Economy MetaLanguage), a grammar and a dictionary.[6]

Reflexive collective intelligence

Collective intelligence drives human development, and human development supports the growth of collective intelligence. By improving collective intelligence we can place ourselves in this feedback loop and orient it in the direction of a self-organizing virtuous cycle. This is the strategic intuition that has guided my research. But how can we improve collective intelligence? In 1994, the concept of digital collective intelligence was still revolutionary. In 2014, this term is commonly used by consultants, politicians, entrepreneurs, technologists, academics and educators. Crowdsourcing has become a common practice, and knowledge management is now supported by the decentralized use of social media. The interconnection of humanity through the Internet, the development of the knowledge economy, the rush to higher education and the rise of cloud computing and big data are all indicators of an increase in our cognitive power. But we have yet to cross the threshold of reflexive collective intelligence. Just as dancers can only perfect their movements by reflecting them in a mirror, just as yogis develop awareness of their inner being only through the meditative contemplation of their own mind, collective intelligence will only be able to set out on the path of purposeful learning and thus move on to a new stage in its growth by achieving reflexivity. It will therefore need to acquire a mirror that allows it to observe its own cognitive processes. Be careful! Collective intelligence does not and will not have autonomous consciousness: when I talk about reflexive collective intelligence, I mean that human individuals will have a clearer and better-shared knowledge than they have today of the collective intelligence in which they participate, a knowledge based on transparent principles and perfectible scientific methods.

The key: A complete modelling of language

But how can a mirror of collective intelligence be constructed? It is clear that the context of reflection will be the algorithmic medium or, to put it another way, the Internet, the calculating power of cloud computing, ubiquitous communication and distributed interactive mobile interfaces. Since we can only reflect collective intelligence in the algorithmic medium, we must yield to the nature of that medium and have a calculable model of our intelligence, a model that will be fed by the flows of digital data from our activities. In short, we need a mathematical (with calculable models) and empirical (based on data) science of collective intelligence. But, once again, is such a science possible? Since humanity is a species that is highly social, its intelligence is intrinsically social, or collective. If we had a mathematical and empirical science of human intelligence in general, we could no doubt derive a science of collective intelligence from it. This leads us to a major problem that has been investigated in the social sciences, the human sciences, the cognitive sciences and artificial intelligence since the twentieth century: is a mathematized science of human intelligence possible? It is language or, to put it another way, symbolic manipulation that distinguishes human cognition. We use language to categorize sensory data, to organize our memory, to think, to communicate, to carry out social actions, etc. My research has led me to the conclusion that a science of human intelligence is indeed possible, but on the condition that we solve the problem of the mathematical modelling of language. I am speaking here of a complete scientific modelling of language, one that would not be limited to the purely logical and syntactic aspects or to statistical correlations of corpora of texts, but would be capable of expressing semantic relationships formed between units of meaning, and doing so in an algebraic, generative mode.[7] Convinced that an algebraic model of semantics was the key to a science of intelligence, I focused my efforts on discovering such a model; the result was the invention of IEML.[8] IEML—an artificial language with calculable semantics—is the intellectual technology that will make it possible to find answers to all the above-mentioned questions. We now have a complete scientific modelling of language, including its semantic aspects. Thus, a science of human intelligence is now possible. It follows, then, that a mathematical and empirical science of collective intelligence is possible. Consequently, a reflexive collective intelligence is in turn possible. This means that the acceleration of human development is within our reach.

The scientific file: The Semantic Sphere

I have written two volumes on my project of developing the scientific framework for a reflexive collective intelligence, and I am currently writing the third. This trilogy can be read as the story of a voyage of discovery. The first volume, The Semantic Sphere 1 (2011),[9] provides the justification for my undertaking. It contains the statement of my aims, a brief intellectual autobiography and, above all, a detailed dialogue with my contemporaries and my predecessors. With a substantial bibliography,[10] that volume presents the main themes of my intellectual process, compares my thoughts with those of the philosophical and scientific tradition, engages in conversation with the research community, and finally, describes the technical, epistemological and cultural context that motivated my research. Why write more than four hundred pages to justify a program of scientific research? For one very simple reason: no one in the contemporary scientific community thought that my research program had any chance of success. What is important in computer science and artificial intelligence is logic, formal syntax, statistics and biological models. Engineers generally view social sciences such as sociology or anthropology as nothing but auxiliary disciplines limited to cosmetic functions: for example, the analysis of usage or the experience of users. In the human sciences, the situation is even more difficult. All those who have tried to mathematize language, from Leibniz to Chomsky, to mention only the greatest, have failed, achieving only partial results. Worse yet, the greatest masters, those from whom I have learned so much, from the semiologist Umberto Eco[11] to the anthropologist Levi-Strauss,[12] have stated categorically that the mathematization of language and the human sciences is impracticable, impossible, utopian. The path I wanted to follow was forbidden not only by the habits of engineers and the major authorities in the human sciences but also by the nearly universal view that “meaning depends on context,”[13] unscrupulously confusing mathematization and quantification, denouncing on principle, in a “knee jerk” reaction, the “ethnocentric bias” of any universalist approach[14] and recalling the “failure” of Esperanto.[15] I have even heard some of the most agnostic speak of the curse of Babel. It is therefore not surprising that I want to make a strong case in defending the scientific nature of my undertaking: all explorers have returned empty-handed from this voyage toward mathematical language, if they returned at all.

The metalanguage: IEML

But one cannot go on forever announcing one’s departure on a voyage: one must set forth, navigate . . . and return. The second volume of my trilogy, La grammaire d’IEML,[16] contains the very technical account of my journey from algebra to language. In it, I explain how to construct sentences and texts in IEML, with many examples. But that 150-page book also contains 52 very dense pages of algorithms and mathematics that show in detail how the internal semantic networks of that artificial language can be calculated and translated automatically into natural languages. To connect a mathematical syntax to a semantics in natural languages, I had to, almost single-handed,[17] face storms on uncharted seas, to advance across the desert with no certainty that fertile land would be found beyond the horizon, to wander for twenty years in the convoluted labyrinth of meaning. But by gradually joining sign, being and thing in turn in the sense of the virtual and actual, I finally had my Ariadne’s thread, and I made a map of the labyrinth, a complicated map of the metalanguage, that “Northwest Passage”[18] where the waters of the exact sciences and the human sciences converged. I had set my course in a direction no one considered worthy of serious exploration since the crossing was thought impossible. But, against all expectations, my journey reached its goal. The IEML Grammar is the scientific proof of this. The mathematization of language is indeed possible, since here is a mathematical metalanguage. What is it exactly? IEML is an artificial language with calculable semantics that puts no limits on the possibilities for the expression of new meanings. Given a text in IEML, algorithms reconstitute the internal grammatical and semantic network of the text, translate that network into natural languages and calculate the semantic relationships between that text and the other texts in IEML. The metalanguage generates a huge group of symmetric transformations between semantic networks, which can be measured and navigated at will using algorithms. The IEML Grammar demonstrates the calculability of the semantic networks and presents the algorithmic workings of the metalanguage in detail. Used as a system of semantic metadata, IEML opens the way to new methods for analyzing large masses of data. It will be able to support new forms of translinguistic hypertextual communication in social media, and will make it possible for conversation networks to observe and perfect their own collective intelligence. For researchers in the human sciences, IEML will structure an open, universal encyclopedic library of multimedia data that reorganizes itself automatically around subjects and the interests of its users.
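
To fix these ideas, here is a minimal sketch, in Python, of the kind of operations just described: reconstituting a text’s internal semantic network, translating expressions into a natural language and measuring the semantic relationship between texts. Everything in it is hypothetical and illustrative; it is not the actual IEML grammar or tooling.

```python
# Purely illustrative sketch: a hypothetical interface mirroring the three operations
# described above (reconstituting a text's semantic network, translating expressions
# into a natural language, measuring semantic relationships between texts).
# None of these names or data structures belong to the actual IEML grammar or tools.

from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Expression:
    """A toy 'IEML expression', reduced here to a set of primitive units of meaning."""
    units: frozenset

def semantic_network(text):
    """Link two expressions of a text whenever they share at least one unit of meaning."""
    return {(a, b) for a, b in combinations(text, 2) if a.units & b.units}

def translate(expr, glossary, lang):
    """Render an expression in a natural language through a multilingual glossary."""
    return " ".join(glossary[lang].get(u, u) for u in sorted(expr.units))

def semantic_proximity(a, b):
    """A toy semantic relationship between two expressions: share of common units."""
    return len(a.units & b.units) / len(a.units | b.units)

# Usage with invented units of meaning:
e1 = Expression(frozenset({"S:collective", "B:intelligence"}))
e2 = Expression(frozenset({"B:intelligence", "T:medium"}))
glossary = {"en": {"S:collective": "collective", "B:intelligence": "intelligence",
                   "T:medium": "medium"}}
print(semantic_network([e1, e2]))            # one link, because e1 and e2 share a unit
print(translate(e1, glossary, "en"))         # 'intelligence collective'
print(round(semantic_proximity(e1, e2), 2))  # 0.33
```

In the metalanguage itself, of course, these networks are meant to be derived from the algebraic structure of the expressions rather than from the naive set intersections used here; the sketch only fixes the general shape of the operations.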

A new frontier: Algorithmic Intelligence

Having mapped the path I discovered in La grammaire d’IEML, I will now relate what I saw at the end of my journey, on the other side of the supposedly impassable territory: the new horizons of the mind that algorithmic intelligence illuminates. Because IEML is obviously not an end in itself. It is only the necessary means for the coming great digital civilization to enable the sun of human knowledge to shine more brightly. I am talking here about a future (but not so distant) state of intelligence, a state in which capacities for reflection, creation, communication, collaboration, learning, and analysis and synthesis of data will be infinitely more powerful and better distributed than they are today. With the concept of Algorithmic Intelligence, I have completed the risky work of prediction and cultural creation I undertook with Collective Intelligence twenty years ago. The contemporary algorithmic medium is already characterized by digitization of data, automated data processing in huge industrial computing centres, interactive mobile interfaces broadly distributed among the population and ubiquitous communication. We can make this the medium of a new type of knowledge—a new episteme[19]—by adding a system of semantic metadata based on IEML. The purpose of this paper is precisely to lay the philosophical and historical groundwork for this new type of knowledge.

Philosophical genealogy of algorithmic intelligence

The three ages of reflexive knowledge

Since my project here involves a reflexive collective intelligence, I would like to place the theme of reflexive knowledge in its historical and philosophical context. As a first approximation, reflexive knowledge may be defined as knowledge knowing itself. “All men by nature desire to know,” wrote Aristotle, and this knowledge implies knowledge of the self.[20] Human beings have no doubt been speculating about the forms and sources of their own knowledge since the dawn of consciousness. But the reflexivity of knowledge took a decisive step around the middle of the first millennium BCE,[21] during the period when the Buddha, Confucius, the Hebrew prophets, Socrates and Zoroaster (in alphabetical order) lived. These teachers involved the entire human race in their investigations: they reflected consciousness from a universal perspective. This first great type of systematic research on knowledge, whether philosophical or religious, almost always involved a divine ideal, or at least a certain “relation to Heaven.” Thus we may speak of a theosophical age of reflexive knowledge. I will examine the Aristotelian lineage of this theosophical consciousness, which culminated in the concept of the agent intellect. Starting in the sixteenth century in Europe—and spreading throughout the world with the rise of modernity—there was a second age of reflection on knowledge, which maintained the universal perspective of the previous period but abandoned the reference to Heaven and confined itself to human knowledge, with its recognized limits but also its rational ideal of perfectibility. This was the second age, the scientific age, of reflexive knowledge. Here, the investigation follows two intertwined paths: one path focusing on what makes knowledge possible, the other on what limits it. In both cases, knowledge must define its transcendental subject, that is, it must discover its own determinations. There are many signs in 2014 indicating that in the twenty-first century—around the point where half of humanity is connected to the Internet—we will experience a third stage of reflexive knowledge. This “version 3.0” will maintain the two previous versions’ ideals of universality and scientific perfectibility but will be based on the intensive use of technology to systematically augment and reflect our collective intelligence, and therefore our capacities for personal and social learning. This is the coming technological age of reflexive knowledge with its ideal of an algorithmic intelligence. The brief history of these three modalities—theosophical, scientific and technological—of reflexive knowledge can be read as a philosophical genealogy of algorithmic intelligence.

The theosophical age and its agent intellect

A few generations earlier, Socrates might have been a priest in the circle around the Pythia; he had taken the famous maxim “Know thyself” from the Temple of Apollo at Delphi. But in the fifth century BCE in Athens, Socrates extended the Delphic injunction in an unexpected way, introducing dialectical inquiry. He asked his contemporaries: What do you think? Are you consistent? Can you justify what you are saying about courage, justice or love? Could you repeat it seriously in front of a little group of intelligent or curious citizens? He thus opened the door to a new way of knowing one’s own knowledge, a rational expansion of consciousness of self. His main disciple, Plato, followed this path of rigorous questioning of the unthinking categorization of reality, and finally discovered the world of Ideas. Ideas for Plato are intellectual forms that, unlike the phenomena they categorize, do not belong to the world of Becoming. These intelligible forms are the original essences, archetypes beyond reality, which project into phenomenal time and space all those things that seem to us to be truly real because they are tangible, but that are actually only pale copies of the Ideas. We would say today that our experience is mainly determined by our way of categorizing it. Plato taught that humanity can only know itself as an intelligent species by going back to the world of Ideas and coming into contact with what explains and motivates its own knowledge. Aristotle, who was Plato’s student and Alexander the Great’s tutor, created a grand encyclopedic synthesis that would be used as a model for eighteen centuries in a multitude of cultures. In it, he integrates Plato’s discovery of Ideas with the sum of knowledge of his time. He places at the top of his hierarchical cosmos divine thought knowing itself. And in his Metaphysics,[22] he defines the divinity as “thought thinking itself.” This supreme self-reflexive thought was for him the “prime mover” that inspires the eternal movement of the cosmos. In De Anima,[23] his book on psychology and the theory of knowledge, he states that, under the effect of an agent intellect separate from the body, the passive intellect of the individual receives intelligible forms, a little like the way the senses receive sensory forms. In thinking these intelligible forms, the passive intellect becomes one with its objects and, in so doing, knows itself. Starting from the enigmatic propositions of Aristotle’s theology and psychology, a whole lineage of Peripatetic and Neo-Platonic philosophers—first “pagans,” then Muslims, Jews and Christians—developed the discipline of noetics, which speculates on the divine intelligence, its relation to human intelligence and the type of reflexivity characteristic of intelligence in general.[24] According to the masters of noetics, knowledge can be conceptually divided into three aspects that, in reality, are indissociable and complementary:

  • the intellect, or the knowing subject
  • the intelligence, or the operation of the subject
  • the intelligible, or what is known—or can be known—by the subject by virtue of its operation

From a theosophical perspective, everything that happens takes place in the unity of a self-reflexive divine thought, or (in the Indian tradition) in the consciousness of an omniscient Brahman or Buddha, open to infinity. In the Aristotelian tradition, Avicenna, Maimonides and Albert the Great considered that the identity of the intellect, the intelligence and the intelligible was achieved eternally in God, in the perfect reflexivity of thought thinking itself. In contrast, it was clear to our medieval theosophists that in the case of human beings, the three aspects of knowledge were neither complete nor identical. Indeed, since the passive intellect knows itself only through the intermediary of its objects, and these objects are constantly disappearing and being replaced by others, the reflexive knowledge of a finite human being can only be partial and transitory. Ultimately, human knowledge could know itself only if it simultaneously knew, completely and enduringly, all its objects. But that, obviously, is reserved only for the divinity. I should add that the “one beyond the one” of the neo-Platonist Plotinus and the transcendent deity of the Abrahamic traditions are beyond the reach of the human mind. That is why our theosophists imagined a series of mediations between transcendence and finitude. In the middle of that series, a metaphysical interface provides communication between the unimaginable and inaccessible deity and mortal humanity dispersed in time and space, whose living members can never know—or know themselves—other than partially. At this interface, we find the agent intellect, which is separate from matter in Aristotle’s psychology. The agent intellect is not limited—in the realm of time—to sending the intelligible categories that inform the human passive intellect; it also determines—in the realm of eternity—the maximum limit of what the human race can receive of the universal and perfectly reflexive knowledge of the divine. That is why, according to the medieval theosophists, the best a mortal intelligence can do to approach complete reflexive knowledge is to contemplate the operation in itself of the agent intellect that emanates from above and go back to the source through it. In accordance with this regulating ideal of reflexive knowledge, living humanity is structured hierarchically, because human beings are more or less turned toward the illumination of the agent intellect. At the top, prophets and theosophists receive a bright light from the agent intellect, while at the bottom, human beings turned toward coarse material appetites receive almost nothing. The influx of intellectual forms is gradually obscured as we go down the scale of degree of openness to the world above.

The scientific age and its transcendental subject

With the European Renaissance, the use of the printing press, the construction of new observation instruments, and the development of mathematics and experimental science heralded a new era. Reflection on knowledge took a critical turn with Descartes’s introduction of radical doubt and the scientific method, in accordance with the needs of educated Europe in the seventeenth century. God was still present in the Cartesian system, but He was only there, ultimately, to guarantee the validity of the efforts of human scientific thought: “God is not a deceiver.”[25] The fact remains that Cartesian philosophy rests on the self-reflexive edge, which has now moved from the divinity to the mortal human: “I think, therefore I am.”[26] In the second half of the seventeenth century, Spinoza and Leibniz received the critical scientific rationalism developed by Descartes, but they were dissatisfied with his dualism of thought (mind) and extension (matter). They therefore attempted, each in his own way, to constitute reflexive knowledge within the framework of coherent monism. For Spinoza, nature (identified with God) is a unique and infinite substance of which thought and extension are two necessary attributes among an infinity of attributes. This strict ontological monism is counterbalanced by a pluralism of expression, because the unique substance possesses an infinity of attributes, and each attribute, an infinity of modes. The summit of human freedom according to Spinoza is the intellectual love of God, that is, the most direct and intuitive possible knowledge of the necessity that moves the nature to which we belong. For Leibniz, the world is made up of monads, metaphysical entities that are closed but are capable of an inner perception in which the whole is reflected from their singular perspective. The consistency of this radical pluralism is ensured by the unique, infinite divine intelligence that has considered all possible worlds in order to create the best one, which corresponds to the most complex—or the richest—of the reciprocal reflections of the monads. As for human knowledge—which is necessarily finite—its perfection coincides with the clearest possible reflection of a totality that includes it but whose unity is thought only by the divine intelligence. After Leibniz and Spinoza, the eighteenth century saw the growth of scientific research, critical thought and the educational practices of the Enlightenment, in particular in France and the British Isles. The philosophy of the Enlightenment culminated with Kant, for whom the development of knowledge was now contained within the limits of human reason, without reference to the divinity, even to envelop or guarantee its reasoning. But the ideal of reflexivity and universality remained. The issue now was to acquire a “scientific” knowledge of human intelligence, which could not be done without the representation of knowledge to itself, without a model that would describe intelligence in terms of what is universal about it. This is the purpose of Kantian transcendental philosophy. Here, human intelligence, armed with its reason alone, now faces only the phenomenal world. Human intelligence and the phenomenal world presuppose each other. Intelligence is programmed to know sensory phenomena that are necessarily immersed in space and time. As for phenomena, their main dimensions (space, time, causality, etc.) correspond to ways of perceiving and understanding that are specific to human intelligence. 
These are forms of the transcendental subject and not intrinsic characteristics of reality. Since we are confined within our cognitive possibilities, it is impossible to know what things are “in themselves.” For Kant, the summit of reflexive human knowledge is in a critical awareness of the extension and the limits of our possibility of knowing. Descartes, Spinoza, Leibniz, the English and French Enlightenment, and Kant accomplished a great deal in two centuries, and paved the way for the modern philosophy of the nineteenth and twentieth centuries. A new form of reflexive knowledge grew, spread, and fragmented into the human sciences, which mushroomed with the end of the monopoly of theosophy. As this dispersion occurred, great philosophers attempted to grasp reflexive knowledge in its unity. The reflexive knowledge of the scientific era neither suppressed nor abolished reflexive knowledge of the theosophical type, but it opened up a new domain of legitimacy of knowledge, freed of the ideal of divine knowledge. This de jure separation did not prevent de facto unions, since there was no lack of religious scholars or scholarly believers. Modern scientists could be believers or non-believers. Their position in relation to the divinity was only a matter of motivation. Believers loved science because it revealed the glory of the divinity, and non-believers loved it because it explained the world without God. But neither of them used as arguments what now belonged only to their private convictions. In the human sciences, there were systematic explorations of the determinations of human existence. And since we are thinking beings, the determinations of our existence are also those of our thought. How do the technical, historical, economic, social and political conditions in which we live form, deform and set limits on our knowledge? What are the structures of our biology, our language, our symbolic systems, our communicative interactions, our psychology and our processes of subjectivation? Modern thought, with its scientific and critical ideal, constantly searches for the conditions and limits imposed on it, particularly those that are as yet unknown to it, that remain in the shadows of its consciousness. It seeks to discover what determines it “behind its back.” While the transcendental subject described by Kant in his Critique of Pure Reason fixed the image a great mind had of it in the late eighteenth century, modern philosophy explores a transcendental subject that is in the process of becoming, continually being re-examined and more precisely defined by the human sciences, a subject immersed in the vagaries of cultures and history, emerging from its unconscious determinations and the techno-symbolic mechanisms that drive it. I will now broadly outline the figure of the transcendental subject of the scientific era, a figure that re-examines and at the same time transforms the three complementary aspects of the agent intellect.

  • The Aristotelian intellect becomes living intelligence. This involves the effective cognitive activities of subjects, what is experienced spontaneously in time by living, mortal human beings.
  • The intelligence becomes scientific investigation. I use this term to designate all undertakings by which the living intelligence becomes scientifically intelligible, including the technical and symbolic tools, the methods and the disciplines used in those undertakings.
  • The intelligible becomes the intelligible intelligence, which is the image of the living intelligence that is produced through scientific and critical investigation.

An evolving transcendental subject emerges from this reflexive cycle in which the living intelligence contemplates its own image in the form of a scientifically intelligible intelligence. Scientific investigation here is the internal mirror of the transcendental subjectivity, the mediation through which the living intelligence observes itself. It is obviously impossible to confuse the living intelligence and its scientifically intelligible image, any more than one can confuse the map and the territory, or the experience and its description. Nor can one confuse the mirror (scientific investigation) with the being reflected in it (the living intelligence), nor with the image that appears in the mirror (the intelligible intelligence). These three aspects together form a dynamic unit that would collapse if one of them were eliminated. While the living intelligence would continue to exist without a mirror or scientific image, it would be very much diminished. It would have lost its capacity to reflect from a universal perspective. The creative paradox of the intellectual reflexivity of the scientific age may be formulated as follows. It is clear, first of all, that the living intelligence is truly transformed by scientific investigation, since the living intelligence that knows its image through a certain scientific investigation is not the same (does not have the same experience) as the one that does not know it, or that knows another image, the result of another scientific investigation. But it is just as clear, by definition, that the living intelligence reflects itself in the intelligible image presented to it through scientific knowledge. In other words, the living intelligence is equally dependent on the scientific and critical investigation that produces the intelligible image in which it is reflected. When we observe our physical appearance in a mirror, the image in the mirror in no way changes our physical appearance, only the mental representation we have of it. However, the living intelligence cannot discover its intelligible image without including the reflexive process itself in its experience, and without at the same time being changed. In short, a critical science that explores the limits and determinations of the knowing subject does not only reflect knowledge—it increases it. Thus the modern transcendental subject is—by its very nature—evolutionary, participating in a dynamic of growth. In line with this evolutionary view of the scientific age, which contrasts with the fixity of the previous age, the collectivity that possesses reflexive knowledge is no longer a theosophical hierarchy oriented toward the agent intellect but a republic of letters oriented toward the augmentation of human knowledge, a scientific community that is expanding demographically and is organized into academies, learned societies and universities. While the agent intellect looked out over a cosmos emanating from eternity, in analog resonance with the human microcosm, the transcendental subject explores a universe infinitely open to scientific investigation, technical mastery and political liberation.

The technological age and its algorithmic intelligence

Reflexive knowledge has, in fact, always been informed by some technology, since it cannot be exercised without symbolic tools and thus the media that support those tools. But the next age of reflexive knowledge can properly be called technological because the technical augmentation of cognition is explicitly at the centre of its project. Technology now enters the loop of reflexive consciousness as the agent of the acceleration of its own augmentation. This last point was no doubt glimpsed by a few pre–twentieth century philosophers, such as Condorcet in the eighteenth century, in his posthumous book of 1795, Sketch for a Historical Picture of the Progress of the Human Mind. But the truly technological dimension of reflexive knowledge really began to be thought about fully only in the twentieth century, with Pierre Teilhard de Chardin, Norbert Wiener and Marshall McLuhan, to whom we should also add the modest genius Douglas Engelbart. The regulating ideal of the reflexive knowledge of the theosophical age was the agent intellect, and that of the scientific-critical age was the transcendental subject. In continuity with the two preceding periods, the reflexive knowledge of the technological age will be organized around the ideal of algorithmic intelligence, which inherits from the agent intellect its universality or, in other words, its capacity to unify humanity’s reflexive knowledge. It also inherits its power to be reflected in finite intelligences. But, in contrast with the agent intellect, instead of descending from eternity, it emerges from the multitude of human actions immersed in space and time. Like the transcendental subject, algorithmic intelligence is rational, critical, scientific, purely human, evolutionary and always in a state of learning. But the vocation of the transcendental subject was to reflexively contain the human universe. However, the human universe no longer has a recognizable face. The “death of man” announced by Foucault[27] should be understood in the sense of the loss of figurability of the transcendental subject. The labyrinth of philosophies, methodologies, theories and data from the human sciences has become inextricably complicated. The transcendental subject has not only been dissolved in symbolic structures or anonymous complex systems, it is also fragmented in the broken mirror of the disciplines of the human sciences. It is obvious that the technical medium of a new figure of reflexive knowledge will be the Internet, and more generally, computer science and ubiquitous communication. But how can symbol-manipulating automata be used on a large scale not only to reunify our reflexive knowledge but also to increase the clarity, precision and breadth of the teeming diversity enveloped by our knowledge? The missing link is not only technical, but also scientific. We need a science that grasps the new possibilities offered by technology in order to give collective intelligence the means to reflect itself, thus inaugurating a new form of subjectivity. As the groundwork of this new science—which I call computational semantics—IEML makes use of the self-reflexive capacity of language without excluding any of its functions, whether they be narrative, logical, pragmatic or other. Computational semantics produces a scientific image of collective intelligence: a calculated intelligence that will be able to be explored both as a simulated world and as a distributed augmented reality in physical space. 
Scientific change will generate a phenomenological change,[28] since ubiquitous multimedia interaction with a holographic image of collective intelligence will reorganize the human sensorium. The last, but not the least, change: social change. The community that possessed the previous figure of reflexive knowledge was a scientific community that was still distinct from society as a whole. But in the new figure of knowledge, reflexive collective intelligence emerges from any human group. Like the previous figures—theosophical and scientific—of reflexive knowledge, algorithmic intelligence is organized in three interdependent aspects.

  • Reflexive collective intelligence represents the living intelligence, the intellect or soul of the great future digital civilization. It may be glimpsed by deciphering the signs of its approach in contemporary reality.
  • Computational semantics holds up a technical and scientific mirror to collective intelligence, which is reflected in it. Its purpose is to augment and reflect the living intelligence of the coming civilization.
  • Calculated intelligence, finally, is none other than the scientifically knowable image of the living intelligence of digital civilization. Computational semantics constructs, maintains and cultivates this image, which is that of an ecosystem of ideas emerging from human activity in the algorithmic medium, an image that can be explored in sensory-motor mode.

In short, in the emergent unity of algorithmic intelligence, computational semantics calculates the cognitive simulation that augments and reflects the collective intelligence of the coming civilization.

[1] Professor at the University of Ottawa

[2] And twenty-three years after L’idéographie dynamique (Paris: La Découverte, 1991).

[3] And before the WWW itself, which would become a public phenomenon only in 1994 with the development of the first browsers such as Mosaic. At the time when the book was being written, the Web still existed only in the mind of Tim Berners-Lee.

[4] Approximately 40% in 2014 and probably more than half in 2025.

[5] I obviously do not claim to be the only “visionary” on the subject in the early 1990s. The pioneering work of Douglas Engelbart and Ted Nelson and the predictions of Howard Rheingold, Joël de Rosnay and many others should be cited.

[6] See The basics of IEML (online at http://wp.me/P3bDiO-9V).

[7] Beyond logic and statistics.

[8] IEML is the acronym for Information Economy MetaLanguage. See La grammaire d’IEML (online at http://wp.me/P3bDiO-9V).

[9] The Semantic Sphere 1: Computation, Cognition and Information Economy (London: ISTE, 2011; New York: Wiley, 2011).

[10] More than four hundred reference books.

[11] Umberto Eco, The Search for the Perfect Language (Oxford: Blackwell, 1995).

[12] “But more madness than genius would be required for such an enterprise”: Claude Levi-Strauss, The Savage Mind (University of Chicago Press, 1966), p. 130.

[13] Which is obviously true, but which only defines the problem rather than forbidding the solution.

[14] But true universalism is all-inclusive, and our daily lives are structured according to a multitude of universal standards, from space-time coordinates to HTTP on the Web. I responded at length in The Semantic Sphere to the prejudices of extremist post-modernism against scientific universality.

[15] Which is still used by a large community. But the only thing that Esperanto and IEML have in common is the fact that they are artificial languages. They have neither the same form nor the same purpose, nor the same use, which invalidates criticisms of IEML based on the criticism of Esperanto.

[16] See IEML Grammar (online at http://wp.me/P3bDiO-9V).

[17] But, fortunately, supported by the Canada Research Chairs program and by my wife, Darcia Labrosse.

[18] Michel Serres, Hermès V. Le passage du Nord-Ouest (Paris: Minuit, 1980).

[19] The concept of episteme, which is broader than the concept of paradigm, was developed in particular by Michel Foucault in The Order of Things (New York: Pantheon, 1970) and The Archaeology of Knowledge and the Discourse on Language (New York: Pantheon, 1972).

[20] At the beginning of Book A of his Metaphysics.

[21] This is the Axial Age identified by Karl Jaspers.

[22] Book Lambda, 9

[23] In particular in Book III.

[24] See, for example, Moses Maimonides, The Guide For the Perplexed, translated into English by Michael Friedländer (New York: Cosimo Classic, 2007) (original in Arabic from the twelfth century). – Averroes (Ibn Rushd), Long Commentary on the De Anima of Aristotle, translated with introduction and notes by Richard C. Taylor (New Haven: Yale University Press, 2009) (original in Arabic from the twelfth century). – Saint Thomas Aquinas: On the Unity of the Intellect Against the Averroists (original in Latin from the thirteenth century) – Herbert A. Davidson, Alfarabi, Avicenna, and Averroes, on Intellect. Their Cosmologies, Theories of the Active Intellect, and Theories of Human Intellect (New York, Oxford: Oxford University Press, 1992). – Henri Corbin, History of Islamic Philosophy, translated by Liadain and Philip Sherrard (London: Kegan Paul, 1993). – Henri Corbin, En Islam iranien: aspects spirituels et philosophiques, 2d ed. (Paris: Gallimard, 1978), 4 vol. – De Libera, Alain Métaphysique et noétique: Albert le Grand (Paris: Vrin, 2005).

[25] In Meditations on First Philosophy, “First Meditation.”

[26] Discourse on the Method, “Part IV.”

[27] At the end of The Order of Things (New York: Pantheon Books, 1970).

[28] See, for example, Stéphane Vial, L’être et l’écran (Paris: PUF, 2013).

Pierre Lévy:


Originally published by the CCCB Lab as an interview with Sandra Alvaro.

Pierre Lévy is a philosopher and a pioneer in the study of the impact of the Internet on human knowledge and culture. In Collective Intelligence. Mankind’s Emerging World in Cyberspace, published in French in 1994 (English translation in 1999), he describes a kind of collective intelligence that extends everywhere and is constantly evaluated and coordinated in real time, a collective human intelligence, augmented by new information technologies and the Internet. Since then, he has been working on a major undertaking: the creation of IEML (Information Economy Meta Language), a tool for the augmentation of collective intelligence by means of the algorithmic medium. IEML, which already has its own grammar, is a metalanguage that includes the semantic dimension, making it computable. This in turn allows a reflexive representation of collective intelligence processes.

In the book Semantic Sphere I. Computation, Cognition, and Information Economy, Pierre Lévy describes IEML as a new tool that works with the ocean of data of participatory digital memory, which is common to all humanity, and systematically turns it into knowledge. A system for encoding meaning that adds transparency, interoperability and computability to the operations that take place in digital memory.

By formalising meaning, this metalanguage adds a human dimension to the analysis and exploitation of the data deluge that is the backdrop of our lives in the digital society. And it also offers a new standard for the human sciences with the potential to accommodate maximum diversity and interoperability.

In “The Technologies of Intelligence” and “Collective Intelligence”, you argue that the Internet and related media are new intelligence technologies that augment the intellectual processes of human beings. And that they create a new space of collaboratively produced, dynamic, quantitative knowledge. What are the characteristics of this augmented collective intelligence?

The first thing to understand is that collective intelligence already exists. It is not something that has to be built. Collective intelligence exists at the level of animal societies: it exists in all animal societies, especially insect societies and mammal societies, and of course the human species is a marvellous example of collective intelligence. In addition to the means of communication used by animals, human beings also use language, technology, complex social institutions and so on, which, taken together, create culture. Bees have collective intelligence but without this cultural dimension. In addition, human beings have personal reflexive intelligence that augments the capacity of global collective intelligence. This is not true for animals but only for humans.

Now the point is to augment human collective intelligence. The main way to achieve this is by means of media and symbolic systems. Human collective intelligence is based on language and technology and we can act on these in order to augment it. The first leap forward in the augmentation of human collective intelligence was the invention of writing. Then we invented more complex, subtle and efficient media like paper, the alphabet and positional systems to represent numbers using ten numerals including zero. All of these things led to a considerable increase in collective intelligence. Then there was the invention of the printing press and electronic media. Now we are in a new stage of the augmentation of human collective intelligence: the digital or – as I call it – algorithmic stage. Our new technical structure has given us ubiquitous communication, interconnection of information, and – most importantly – automata that are able to transform symbols. With these three elements we have an extraordinary opportunity to augment human collective intelligence.

You have suggested that there are three stages in the progress of the algorithmic medium prior to the semantic sphere: the addressing of information in the memory of computers (operating systems), the addressing of computers on the Internet, and finally the Web – the addressing of all data within a global network, where all information can be considered to be part of an interconnected whole. This externalisation of the collective human memory and intellectual processes has increased individual autonomy and the self-organisation of human communities. How has this led to a global, hypermediated public sphere and to the democratisation of knowledge?

This democratisation of knowledge is already happening. If you have ubiquitous communication, it means that you have access to any kind of information almost for free: the best example is Wikipedia. We can also speak about blogs, social media, and the growing open data movement. When you have access to all this information, when you can participate in social networks that support collaborative learning, and when you have algorithms at your fingertips that can help you to do a lot of things, there is a genuine augmentation of collective human intelligence, an augmentation that implies the democratisation of knowledge.

What role do cultural institutions play in this democratisation of knowledge?

Cultural institutions are publishing data in an open way; they are participating in broad conversations on social media, taking advantage of the possibilities of crowdsourcing, and so on. They also have the opportunity to develop an open, bottom-up knowledge management strategy.


A Model of Collective Intelligence in the Service of Human Development (Pierre Lévy, in The Semantic Sphere, 2011). S = sign, B = being, T = thing.

We are now in the midst of what the media have branded the ‘big data’ phenomenon. Our species is producing and storing data in volumes that surpass our powers of perception and analysis. How is this phenomenon connected to the algorithmic medium?

First let’s say that what is happening now, the availability of big flows of data, is just an actualisation of the Internet’s potential. It was always there. It is just that we now have more data and more people are able to get this data and analyse it. There has been a huge increase in the amount of information generated in the period from the second half of the twentieth century to the beginning of the twenty-first century. At the beginning only a few people used the Internet and now almost half of the human population is connected.

At first the Internet was a way to send and receive messages. We were happy because we could send messages to the whole planet and receive messages from the entire planet. But the biggest potential of the algorithmic medium is not the transmission of information: it is the automatic transformation of data (through software).

We could say that the big data available on the Internet is currently analysed, transformed and exploited by big governments, big scientific laboratories and big corporations. That’s what we call big data today. In the future there will be a democratisation of the processing of big data. It will be a new revolution. If you think about the situation of computers in the early days, only big companies, big governments and big laboratories had access to computing power. But nowadays we have the revolution of social computing and decentralized communication by means of the Internet. I look forward to the same kind of revolution regarding the processing and analysis of big data.

Communications giants like Google and Facebook are promoting the use of artificial intelligence to exploit and analyse data. This means that logic and computing tend to prevail in the way we understand reality. IEML, however, incorporates the semantic dimension. How will this new model be able to describe the way we create and transform meaning, and make it computable?

Today we have something called the “semantic web”, but it is not semantic at all! It is based on logical links between data and on algebraic models of logic. There is no model of semantics there. So in fact there is currently no model that sets out to automate the creation of semantic links in a general and universal way. IEML will enable the simulation of ecosystems of ideas based on people’s activities, and it will reflect collective intelligence. This will completely change the meaning of “big data” because we will be able to transform this data into knowledge.

We have very powerful tools at our disposal, we have enormous, almost unlimited computing power, and we have a medium where communication is ubiquitous. You can communicate everywhere, all the time, and all documents are interconnected. Now the question is: how will we use all these tools in a meaningful way to augment human collective intelligence?

This is why I have invented a language that automatically computes internal semantic relations. When you write a sentence in IEML it automatically creates the semantic network between the words in the sentence, and shows the semantic networks between the words in the dictionary. When you write a text in IEML, it creates the semantic relations between the different sentences that make up the text. Moreover, when you select a text, IEML automatically creates the semantic relations between this text and the other texts in a library. So you have a kind of automatic semantic hypertextualisation. The IEML code programs semantic networks and it can easily be manipulated by algorithms (it is a “regular language”). Plus, IEML self-translates automatically into natural languages, so that users will not be obliged to learn this code.

The most important thing is that if you categorize data in IEML it will automatically create a network of semantic relations between the data. You can have automatically-generated semantic relations inside any kind of data set. This is the point that connects IEML and Big Data.
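
As a minimal illustration of this point (the tagging scheme, item names and tags below are invented, not actual IEML), once each item of a data set is categorised with IEML-style semantic metadata, relations between the items can be computed automatically from relations between their tags:

```python
# Minimal sketch, assuming a hypothetical scheme in which every item of a data set
# carries a set of IEML-style semantic tags (all item names and tags are invented).
# Relations between items are then derived automatically from their shared tags.

from itertools import combinations

catalogue = {
    "photo_042":  {"S:memory", "T:city"},
    "article_17": {"S:memory", "B:collective"},
    "dataset_03": {"T:city", "B:mobility"},
}

def semantic_links(items):
    """Link any two items of the data set that share at least one semantic tag."""
    return [(a, b, items[a] & items[b])
            for a, b in combinations(items, 2)
            if items[a] & items[b]]

for a, b, shared in semantic_links(catalogue):
    print(f"{a} <-> {b} via {sorted(shared)}")
# photo_042 <-> article_17 via ['S:memory']
# photo_042 <-> dataset_03 via ['T:city']
```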

So IEML provides a system of computable metadata that makes it possible to automate semantic relationships. Do you think it could become a new common language for human sciences and contribute to their renewal and future development?

Everyone will be able to categorise data however they want. Any discipline, any culture, any theory will be able to categorise data in its own way, to allow diversity, using a single metalanguage, to ensure interoperability. This will automatically generate ecosystems of ideas that will be navigable with all their semantic relations. You will be able to compare different ecosystems of ideas according to their data and the different ways of categorising them. You will be able to choose different perspectives and approaches: for example, the same people interpreting different sets of data, or different people interpreting the same set of data. IEML ensures the interoperability of all ecosystems of ideas. On one hand you have the greatest possibility of diversity, and on the other you have computability and semantic interoperability. I think that it will be a big improvement for the human sciences because today the human sciences can use statistics, but it is a purely quantitative method. They can also use automatic reasoning, but it is a purely logical method. But with IEML we can compute using semantic relations, and it is only through semantics (in conjunction with logic and statistics) that we can understand what is happening in the human realm. We will be able to analyse and manipulate meaning, and there lies the essence of the human sciences.

Let’s talk about the current stage of development of IEML: I know it’s early days, but can you outline some of the applications or tools that may be developed with this metalanguage?

It is still too early; perhaps the first application may be a kind of collective intelligence game in which people will work together to build the best ecosystem of ideas for their own goals.

I published The Semantic Sphere in 2011. And I finished the grammar that has all the mathematical and algorithmic dimensions six months ago. I am writing a second book entitled Algorithmic Intelligence, where I explain all these things about reflexivity and intelligence. The IEML dictionary will be published (online) in the coming months. It will be the first kernel, because the dictionary has to be augmented progressively, and not just by me. I hope other people will contribute.

This IEML interlinguistic dictionary ensures that semantic networks can be translated from one natural language to another. Could you explain how it works, and how it incorporates the complexity and pragmatics of natural languages?

The basis of IEML is a simple commutative algebra (a regular language) that makes it computable. A special coding of the algebra (called Script) allows for recursivity, self-referential processes and the programming of rhizomatic graphs. The algorithmic grammar transforms the code into fractally complex networks that represent the semantic structure of texts. The dictionary, made up of terms organized according to symmetric systems of relations (paradigms), gives content to the rhizomatic graphs and creates a kind of common coordinate system of ideas. Working together, the Script, the algorithmic grammar and the dictionary create a symmetric correspondence between individual algebraic operations and different semantic networks (expressed in natural languages). The semantic sphere brings together all possible texts in the language, translated into natural languages, including the semantic relations between all the texts. On the playing field of the semantic sphere, dialogue, intersubjectivity and pragmatic complexity arise, and open games allow free regulation of the categorisation and the evaluation of data. Ultimately, all kinds of ecosystems of ideas – representing collective cognitive processes – will be cultivated in an interoperable environment.
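
The layering just described can be pictured with a short sketch. It is only an analogy under strong simplifying assumptions; the class names, toy scripts and same-paradigm rule are invented for illustration and do not reproduce the real Script, grammar or dictionary.

```python
# Sketch of the layering described above, under simplifying assumptions: the class
# names, the toy 'scripts' and the same-paradigm rule are invented for illustration
# and do not reproduce the real Script, algorithmic grammar or IEML dictionary.

from dataclasses import dataclass

@dataclass
class Term:
    script: str     # algebraic code of the term (invented placeholder)
    paradigm: str   # the paradigm (system of relations) the term belongs to
    gloss: dict     # translations into natural languages, e.g. {"en": "...", "fr": "..."}

class Dictionary:
    """Terms organized by their scripts; gives content to the graphs built by the grammar."""
    def __init__(self, terms):
        self.by_script = {t.script: t for t in terms}

    def paradigm_of(self, script):
        return self.by_script[script].paradigm

def grammar_to_graph(text, dictionary):
    """Toy 'grammar': relate two scripts of a text when they share a paradigm."""
    return {(a, "same-paradigm", b)
            for a in text for b in text
            if a < b and dictionary.paradigm_of(a) == dictionary.paradigm_of(b)}

# Usage with invented terms:
d = Dictionary([
    Term("S:k.", "acts of knowing", {"en": "to know"}),
    Term("S:m.", "acts of knowing", {"en": "to remember"}),
    Term("T:c.", "places", {"en": "city"}),
])
print(grammar_to_graph(["S:k.", "S:m.", "T:c."], d))
# {('S:k.', 'same-paradigm', 'S:m.')}
```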


Schema from the START – IEML / English Dictionary by Prof. Pierre Lévy FRSC CRC, University of Ottawa, 25 August 2010 (Copyright Pierre Lévy 2010, license Apache 2.0).

Since IEML automatically creates very complex graphs of semantic relations, one of the development tasks that is still pending is to transform these complex graphs into visualisations that make them usable and navigable.

How do you envisage these big graphs? Can you give us an idea of what the visualisation could look like?

The idea is to project these very complex graphs onto a 3D interactive structure. These could be spheres, for example, so you will be able to go inside the sphere corresponding to one particular idea and you will have all the other ideas of its ecosystem around you, arranged according to the different semantic relations. You will be also able to manipulate the spheres from the outside and look at them as if they were on a geographical map. And you will be able to zoom in and zoom out of fractal levels of complexity. Ecosystems of ideas will be displayed as interactive holograms in virtual reality on the Web (through tablets) and as augmented reality experienced in the 3D physical world (through Google glasses, for example).
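
As a rough sketch of the first step of such a visualisation (assuming the open-source networkx library and an invented toy graph rather than a real ecosystem of ideas), a force-directed layout can already assign each idea a position in three dimensions, which an interactive or augmented-reality client could then render:

```python
# Rough sketch of the first step of such a visualisation: assign 3D coordinates to a
# toy semantic graph with a force-directed layout. Assumes the open-source networkx
# library; the graph itself is invented and is not a real IEML ecosystem of ideas.

import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("collective intelligence", "algorithmic medium"),
    ("collective intelligence", "semantic metadata"),
    ("semantic metadata", "ecosystem of ideas"),
    ("algorithmic medium", "ecosystem of ideas"),
])

# Each idea gets (x, y, z) coordinates that a 3D, virtual-reality or augmented-reality
# client could then render as navigable spheres.
positions = nx.spring_layout(G, dim=3, seed=42)
for idea, (x, y, z) in positions.items():
    print(f"{idea:>25}: ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```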

I’m also curious about your thoughts on the social alarm generated by the Internet’s enormous capacity to retrieve data, and the potential exploitation of this data. There are social concerns about possible abuses and privacy infringement. Some big companies are starting to consider drafting codes of ethics to regulate and prevent the abuse of data. Do you think a fixed set of rules can effectively regulate the changing environment of the algorithmic medium? How can IEML contribute to improving the transparency and regulation of this medium?

IEML does not only allow transparency, it allows symmetrical transparency. Everybody participating in the semantic sphere will be transparent to others, but all the others will also be transparent to him or her. The problem with hyper-surveillance is that transparency is currently not symmetrical. What I mean is that ordinary people are transparent to big governments and big companies, but these big companies and big governments are not transparent to ordinary people. There is no symmetry. Power differences between big governments and little governments or between big companies and individuals will probably continue to exist. But we can create a new public space where this asymmetry is suspended, and where powerful players are treated exactly like ordinary players.

And to finish up, last month the CCCB Lab began a series of workshops related to the Internet Universe project, which explore the issue of education in the digital environment. As you have published numerous works on this subject, could you summarise a few key points in regard to educating ‘digital natives’ about responsibility and participation in the algorithmic medium?

People have to accept their personal and collective responsibility. Because every time we create a link, every time we “like” something, every time we create a hashtag, every time we buy a book on Amazon, and so on, we transform the relational structure of the common memory. So we have a great deal of responsibility for what happens online. Whatever is happening is the result of what all the people are doing together; the Internet is an expression of human collective intelligence.

Therefore, we also have to develop critical thinking. Everything that you find on the Internet is the expression of particular points of view that are neither neutral nor objective, but expressions of active subjectivities. Where does the money come from? Where do the ideas come from? What is the author’s pragmatic context? And so on. The more we know the answers to these questions, the greater the transparency of the source… and the more it can be trusted. This notion of making the source of information transparent is very close to the scientific mindset. Because scientific knowledge has to be able to answer questions such as: Where did the data come from? Where does the theory come from? Where do the grants come from? Transparency is the new objectivity.

Originally posted on Blog of Collective Intelligence 2003-2015:


Pierre Lévy is a philosopher and a pioneer in the study of the impact of the Internet on human knowledge and culture. In Collective Intelligence. Mankind’s Emerging World in Cyberspace, published in French in 1994 (English translation in 1999), he describes a kind of collective intelligence that extends everywhere and is constantly evaluated and coordinated in real time, a collective human intelligence, augmented by new information technologies and the Internet. Since then, he has been working on a major undertaking: the creation of IEML (Information Economy Meta Language), a tool for the augmentation of collective intelligence by means of the algorithmic medium. IEML, which already has its own grammar, is a metalanguage that includes the semantic dimension, making it computable. This in turn allows a reflexive representation of collective intelligence processes.

In the book Semantic Sphere I. Computation, Cognition, and Information Economy, Pierre Lévy describes IEML as…



Interview with Nelesi Rodríguez, published in Spanish in the academic journal Comunicación, Estudios venezolanos de comunicación, 2º trimestre 2014, n. 166.

Collective intelligence in the digital age: A revolution just at its beginning

Pierre Lévy (P.L.) is a renowned theorist and media scholar. His ideas on collective intelligence have been essential for understanding some phenomena of contemporary communication, and his research on the Information Economy Meta Language (IEML) is today one of the most promising developments in data processing and knowledge management. In this interview conducted by the team of Comunicación (C.M.) magazine, he explained some of the basic points of his theory and gave us an interesting reading of current topics related to communication and digital media. Nelesi Rodríguez, April 2014.

APPROACH TO THE SUBJECT MATTER

C.M: Collective intelligence can be defined as shared knowledge that exists everywhere, that is constantly measured, coordinated in real time, and that drives the effective mobilization of several skills. In this regard, it is understood that collective intelligence is not a quality exclusive to human beings. In what way is human collective intelligence different from other species’ collective intelligence?

P.L: You are totally right when you say that collective intelligence is not exclusive to the human race. We know that ants, bees and, in general, all social animals have collective intelligence. They solve problems together and, as social animals, they are not able to survive alone. This is also the case with the human species: we are not able to survive alone and we solve problems together.

But there is a big difference, and it is related to the use of language: animals are able to communicate, but they do not have language. I mean, they cannot ask questions, they cannot tell stories, they cannot have dialogues, they cannot communicate about their emotions, their fears, and so on.

So there is language, which is specific to humankind, and with language you have of course better communication and an enhanced collective intelligence; and you also have all that comes with this linguistic ability, that is, technology and the complexity of social institutions: law, religion, ethics, the economy… All these things that animals don’t have. This ability to play with symbolic systems, to play with tools and to build complex social institutions creates a much more powerful collective intelligence for humans.

Also, I would say that there are two important features that come from human culture. The first is that human collective intelligence can improve over the course of history, because each new generation can improve the symbolic systems, the technology and the social institutions; so there is an evolution of human collective intelligence and, of course, we are talking about a cultural evolution, not a biological evolution. And then, finally, maybe the most important feature of human collective intelligence is that each unit of the human collectivity has the ability to reflect, to think by itself. We have individual consciousness; unfortunately for them, the ants don’t. The fact that humans have individual consciousness creates, at the level of social cognition, something that is very powerful. That is the main difference between human and animal collective intelligence.

C.M: Do the writing and digital technologies also contribute to this difference?

P.L: In oral culture there was a certain kind of transmission of knowledge but, of course, when we invented writing systems we were able to accumulate much more knowledge to transmit to the next generations. With the invention of the various writing systems, and then their improvements (the alphabet, paper, the printing press, and then the electronic media), human collective intelligence expanded. So, for example, the ability to build libraries, to organize scientific coordination and collaboration, and the communication supported by the telephone, the radio and the television make human collective intelligence more powerful. I think the main challenge our generation and the next will have to face is to take advantage of the digital tools (the computer, the internet, smartphones, et cetera) to discover new ways to improve our cognitive abilities: our memory, our communication, our problem-solving abilities, our abilities to coordinate and collaborate, and so on.

C.M: In an interview conducted by Howard Rheingold, you mentioned that every device and technology that has the purpose of enhancing language also enhances collective intelligence and, at the same time, has an impact on cognitive skills such as memory, collaboration and the ability to connect with one another. Taking this into account:

  • It is said that today the enhancement of cognitive abilities manifests in different ways: from fandoms and wikis to crowdsourcing projects created with the intent of finding effective treatments for serious illnesses. Do you consider that each of these manifestations contributes in the same way towards the expansion of our collective intelligence?

P.L: Maybe the most important sector where we should put particular effort is scientific research and learning, because we are talking about knowledge: the most important part is the creation of knowledge, the dissemination of knowledge and, generally, collective and individual learning.

Today there is a transformation of communication in the scientific community: more and more journals are open and online, people are working in virtual teams, they communicate over the internet, they use big amounts of digital data, and they process this data with computing power. So we are already witnessing this augmentation, but we are just at the beginning of this new approach.

In the case of learning, I think it is very important that we recognize the emergence of new ways of learning online collaboratively, where people who want to learn help each other, communicate, and accumulate common memories from which they can take what is interesting for them. This collective learning is not limited to schools; it happens in all kinds of social environments. We could call this “knowledge management”, and there is an individual or personal aspect of this knowledge management that some people call “personal knowledge management”: choosing the right sources on the internet, featuring the sources, categorizing information, doing synthesis, sharing these syntheses on social media, looking for feedback, initiating a conversation, and so on. We have to realize that learning is, and always has been, an individual process at its core. Someone has to learn; you cannot learn for someone else. Helping other people to learn is teaching, but the learner is doing the real work. Then, if the learners are helping each other, you have a process of collective learning. Of course, it works better if these people are interested in the same topics or are engaged in the same activities.

The augmentation of collective learning is something very general, and it has increased with online communication. It also happens at the political level: there is augmented deliberation, because people can discuss easily on the internet, and there is also enhanced coordination (for public demonstrations and similar things).

  • C.M: With the passage of time, collective intelligence seems to become less a human quality and more one akin to machines, and this worries more than a few people. What is your stance in the face of this reality?

P.L: There is a process of artificialization of cognition in general that is very old; it began with writing, with books, which are already a kind of externalization or objectification of memory. I mean, a library, for instance, is something that is completely material, completely technical, and without libraries we would be much less intelligent.

We cannot be against libraries because, instead of being pure brain, they are just paper and ink, buildings and index cards. Similarly, it makes no sense to “revolt” against the computer and the internet. It is the same kind of reasoning as with libraries: it is just another technology, more powerful, but it is the same idea. It is an augmentation of our cognitive ability, individual and collective, so it is absurd to be afraid of it.

But we have to distinguish very clearly between the material support and the texts. The texts come from our mind, and the text that is in my mind can be projected on paper as well as onto a computer network. What is really important here is the text.

IEML AND THE FUTURE OF COLLECTIVE INTELLIGENCE

C.M: You’ve mentioned before that what we define today as the “semantic web”, more than being based on semantic principles, is based on logical principles. According to your ideas, this represents a roadblock to making the most of the possibilities offered by digital media. As an alternative, you proposed IEML (Information Economy Meta Language).

  • Could you elaborate on the basic differences between the semantic web and the IEML?

P.L: The so-called “semantic web” (in fact, people now call it the “web of data”, which is a better term for it) is based on very well-known principles of artificial intelligence that were developed in the 70s and the 80s and that were adapted to the web.

Basically, you have a well-organized database and you have rules to compute the relations between different parts of the database, and these rules are mainly logical rules. IEML works in a completely different manner: you have as much data as you want, and you categorize this data in IEML.

IEML is a language: not a computer language, but an artificial human language. So you can say “the sea”, “this person”, or anything… There are words in IEML; there are no words in the semantic web formats, which do not work like this.

In this artificial language that is IEML, each word is in semantic relation with the other words of the dictionary. So all the words are intertwined by semantic relations and are perfectly defined. When you use these words to create sentences or texts, you create new, grammatical relationships between the words.

And from texts written in IEML, algorithms automatically compute the relations inside each sentence, from one sentence to the next, and so on. So you have a whole semantic network inside the text that is automatically computed and, even more, you can automatically compute the semantic relations between any text and any library of texts.

An IEML text automatically creates its own semantic relations with all the other texts, and these IEML texts can automatically translate themselves into natural languages: Spanish, English, Portuguese or Chinese… So, when you use IEML to categorize data, you automatically create semantic links between the data, with all the openness, the subtlety and the ability to say exactly what you want that a language can offer you.

You can categorize any type of content: images, music, software, articles, websites, books, any kind of information. You can categorize these in IEML and, at the same time, you create links within the data because of the links that are internal to the language.

  • C.M: Can we consider metatags, hashtags and Twitter lists as precedents of IEML?

P.L: Yes, exactly. I have been inspired by the fact that people are already categorizing data. They started doing this with social bookmarking sites such as del.icio.us. The act of curation today goes with the act of categorization, of tagging. We do this very often on Twitter, and now we can do it on Facebook, on Google Plus, on YouTube, on Flickr, and so on. The thing is that these tags don’t have the ability to interconnect with other tags and to create a big and consistent semantic network. In addition, these tags are in different natural languages.

From the point of view of the user, it will be the same action, but tagging in IEML will just be more powerful.
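
As a purely hypothetical illustration of that difference, the toy snippet below contrasts plain hashtags (opaque strings with no computable relations) with tags drawn from a small invented dictionary whose terms carry explicit relations; the terms and relations are placeholders of my own, not actual IEML:

```python
# Plain hashtags are opaque strings: nothing connects them automatically.
hashtags = {"#collab", "#learning"}

# A toy "dictionary" in the spirit of IEML: each term comes with explicit
# semantic relations (entirely invented here for illustration).
DICTIONARY = {
    "collaboration": {"is_kind_of": "collective action", "involves": "communication"},
    "learning":      {"is_kind_of": "cognitive process", "involves": "memory"},
    "communication": {"is_kind_of": "collective action", "involves": "language"},
}

def related(term_a, term_b):
    """Two tags are related if one appears among the other's relation
    targets, or if they share a relation target."""
    rel_a, rel_b = DICTIONARY[term_a], DICTIONARY[term_b]
    shared = set(rel_a.values()) & set(rel_b.values())
    return term_b in rel_a.values() or term_a in rel_b.values() or bool(shared)

print(related("collaboration", "communication"))  # True: linked through the dictionary
```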

  • C.M: What will IEML’s initial array of applications be?

P.L: I hope the main applications will be in the creation of collective intelligence games: games of categorization and evaluation of data, a sort of collective curation that will help people to create a very useful memory for their collaborative learning. That, for me, would be the most interesting application and, of course, the creation of an inter-linguistic or trans-linguistic environment.

BIG DATA AND COLLECTIVE INTELLIGENCE

C.M: You’ve referred to big data as one of the phenomena that could take collective intelligence to a whole new level. You’ve also mentioned that, in fact, this type of information can only be processed by powerful institutions (governments, corporations, etc.), and that only when the capacity to read big data is democratized will there truly be a revolution.

Would you say that the IEML will have a key role in this process of democratization? If so, why?

P.L: I think that currently there are two important aspects of big data analytics. First, we have more and more data every day; we have to realize this. And, second, the main producers of this immense flow of data are ourselves: we, the users of the Internet, are producing the data. So currently lots of people are trying to make sense of this data, and here you have two “avenues”:

The first is the more scientific avenue. In the natural sciences you have a lot of data (genetic data, data coming from physics or astronomy) and also something that is relatively new: data coming from the human sciences. This is called “digital humanities”, and it takes data from spaces like social media and tries to make sense of it from a sociological point of view; or you take data from libraries and try to make sense of it from a literary or historical point of view. This is one application.

The second application is in business and in administration, private or public. You have many companies trying to sell such services to other companies and to governments.

I would say that there are two big problems with this landscape:

The first is related to methodology: today we use mainly statistical methods and logical methods. It is very difficult to carry out a semantic analysis of the data, because we do not have a semantic code, and let’s remember that everything we analyze is coded before we analyze it. If you code quantitatively, you get statistical analysis; if you code logically, you get logical analysis. So you need a semantic code to have a semantic analysis. We do not have it yet, but I think that IEML will be that code.

The second problem is the fact that this analysis of data is currently in the hands of very powerful or rich players: big governments and big companies. It is expensive and it is not easy to do; you need to learn how to code, you need to learn how to read statistics…

I think that with IEML, because people will be able to code the data semantically, they will also be able to do semantic analysis with the help of the right user interfaces. They will be able to manipulate this semantic code in natural language; it will be open to everybody.

This famous “revolution of big data” is just at its beginning. In the coming decades there will be much more data and many more powerful tools to analyze it. And it will be democratized; the tools will be open and free.

A BRIEF READING OF THE CURRENT SITUATION IN VENEZUELA

C.M: In the interview conducted by Howard Rheingold, you defined collective intelligence as a synergy between personal and collective knowledge; as an example, you mentioned the curation process that we, as users of social media, carry out and that in most cases serves as resource material for others to use. Regarding this issue, I’d like to analyze a particular situation with you through the lens of collective intelligence:

During the last few months, Venezuela has suffered an important information blackout, a product of the government’s monopolistic grasp on the majority of the media outlets, the censorship efforts of State organisms, and the self-imposed censorship of the country’s last independent media outlets. As a response to this blockade, Venezuelans have taken it upon themselves to stay informed by occupying the digital space. In a relatively short period of time, various non-standard communication networks have been created, verified source lists have been consolidated, applications have been developed, and a sort of code of ethics has been established in order to minimize the risk of spreading false information.

Based on your theory on collective intelligence, what reading could you give of this phenomenon?

P.L: You have already given a response to this; I have little to add. Of course I am against any kind of censorship. We have already seen that many authoritarian regimes do not like the internet, because it represents an augmentation of freedom of expression. Not only in Venezuela but in many countries, governments have tried to limit free expression, and politically active people who are not pro-government have tried to organize themselves through the internet. I think that the new environment created by social media (Twitter, Facebook, YouTube, the blogs, and all the apps that help people find the information they need) helps the coordination and the discussion inside all these opposition movements, and this is the current political aspect of collective intelligence.

E-sphere-copie

An IEML paradigm projected onto a sphere.

Communication presented at The Future of Text symposium IV at Google’s headquarters in London (2014).

Symbolic manipulation accounts for the uniqueness of human cognition and consciousness. This symbolic manipulation is now augmented by algorithms. The problem is that we still have not invented a symbolic system that could fully exploit the algorithmic medium in the service of human development and human knowledge.

E-Cultural-revolutions

The slide above describes the successive steps in the augmentation of symbolic manipulation.

The first revolution is the invention of writing, with symbols endowed with the ability of self-conservation. This leads to a remarkable augmentation of social memory and to the emergence of new forms of knowledge.

The second revolution optimizes the manipulation of symbols, with inventions like the alphabet (Phoenician, Hebrew, Greek, Roman, Arabic, Cyrillic, Korean, etc.), the Chinese rational ideographies, the Indian positional numeral system with a zero, paper, and the early printing techniques of China and Korea.

The third revolution is the mechanization and industrialization of the reproduction and diffusion of symbols, with the printing press, records, movies, radio, TV, etc. This revolution supported the emergence of the modern world, with its nation states, its industries and its experimental, mathematized natural sciences.

We are now at the beginning of a fourth revolution, in which a ubiquitous and interconnected infosphere is filled with symbols – i.e. data – of all kinds (music, voice, images, texts, programs, etc.) that are being automatically transformed. With the democratization of big data analysis, the next generations will see the advent of a new scientific revolution, but this time it will be in the humanities and social sciences.

E-Algorithmic-medium

Let’s have a closer look at the algorithmic medium. Four layers have been added since the middle of the 20th century.

– The first layer is the invention of the automatic digital computer itself. We can describe computation as the processing of data. It is self-evident that computation cannot be programmed if we do not have a very precise addressing system for the data and for the specialized operators/processors that will transform them. At the beginning, these addressing systems were purely local and managed by operating systems.

– The second layer is the emergence of a universal addressing system for computers, the Internet protocol, which allowed for the exchange of data and for collaborative computing across the telecommunication network.

– The third layer is the invention of a universal system for addressing and displaying data (HTTP, HTML), opening onto a hypertextual global database: the World Wide Web. We all know the deep social, cultural and economic impact the Web has had over the last fifteen years.

– The construction of this algorithmic medium is ongoing. We are now ready to add a fourth layer of addressing and, this time, we need a universal addressing system for metadata, in particular for semantic metadata. Why? First, we are still unable to resolve the problem of semantic interoperability across languages, classifications and ontologies. Second, except for some approximate statistical and logical methods, we are still unable to compute semantic relations, including distances and differences. This new symbolic system will be a key element of a future scientific revolution in the humanities and social sciences, leading to a new kind of reflexive collective intelligence for our species. There lies the future of text.

E-IEML-math2

My version of a universal semantic addressing system is IEML, an artificial language that I have invented and developed over the last twenty years.

IEML is based on a simple algebra with six primitive variables (E, U, A, S, B, T) and two operations (+, ×). The multiplicative operation builds the semantic links; it takes three roles: a departure node, an arrival node and a tag for the link. The additive operation gathers several links to build a semantic network, and recursivity builds semantic networks with multiple levels of complexity: the structure is « fractal ». With this algebra, we can automatically compute the internal network corresponding to any variable, as well as the relationships between any set of variables.
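
The following toy sketch mimics the two operations just described; the tuple-based representation is an assumption of mine for illustration and is not the actual IEML Script:

```python
# The six primitive variables: Emptiness, Virtual, Actual, Sign, Being, Thing.
PRIMITIVES = {"E", "U", "A", "S", "B", "T"}

def mul(departure, arrival, tag):
    """Multiplicative operation: builds one semantic link with three roles
    (departure node, arrival node, tag). Each argument may itself be the
    result of a previous operation, which gives the recursive, "fractal"
    layering of complexity."""
    return ("link", departure, arrival, tag)

def add(*links):
    """Additive operation: gathers several links into a semantic network."""
    return ("network",) + links

def internal_network(expr, edges=None):
    """Walk an expression and collect every link it contains, i.e. compute
    the internal network corresponding to that variable."""
    if edges is None:
        edges = []
    if isinstance(expr, tuple):
        if expr[0] == "link":
            edges.append(expr[1:])
        for part in expr[1:]:
            internal_network(part, edges)
    return edges

# Two levels of complexity: a link whose departure node is itself a network.
inner = add(mul("U", "A", "S"))
outer = mul(inner, "B", "T")
print(internal_network(outer))
```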

IEML is still at the stage of fundamental research, but we now have an extensive dictionary – a set of paradigms of some three thousand terms – and grammatical algorithmic rules that conform to the algebra. The result is a language whose texts translate themselves into natural languages, manifest as semantic networks and compute their relationships and differences with one another. Any library of IEML texts then self-organizes into an ecosystem of texts, and data categorized in IEML self-organize according to their semantic relationships and differences.

E-Collective-intel2

Now let’s take an example of an IEML paradigm, for instance the paradigm of “Collective Intelligence in the service of human development”, in order to grasp the meaning of the primitives and the way they are used.

– First, the dialectic between virtual (U) and actual (A) human development is represented by the rows.

– Then, the ternary dialectic between sign (S), being (B) and thing (T) is represented by the columns.

– The result is six broad interdependent aspects of collective intelligence corresponding to the intersections of the rows (virtual/actual) and columns (sign/being/thing).

– Each of these six broad aspects of CI is then decomposed into three sub-aspects corresponding to the sign/being/thing dialectic.

The semantic relations (symmetries and inclusions) between the terms of a paradigm are all explicit and therefore computable. All IEML paradigms are designed with the same principles as this one, and you can build phrases by assembling the terms through multiplications and additions.
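
As a rough illustration of this 2 × 3 structure and of its explicit, computable relations, here is a toy rendering in code; the cell labels are placeholders, not actual dictionary terms:

```python
ROWS = ["virtual (U)", "actual (A)"]
COLS = ["sign (S)", "being (B)", "thing (T)"]

# Six broad aspects of collective intelligence, one per (row, column) cell.
# A real paradigm carries curated terms; these cells are mere placeholders.
paradigm = {(r, c): f"aspect[{r} x {c}]" for r in ROWS for c in COLS}

def related_cells(cell):
    """Explicit (hence computable) relations: two cells are related when they
    share a row (same virtual/actual pole) or a column (same sign/being/thing pole)."""
    r, c = cell
    return [other for other in paradigm if other != cell and (other[0] == r or other[1] == c)]

print(related_cells(("virtual (U)", "sign (S)")))
```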

Fortunately, the fundamental research is now finished. I will spend the next months preparing a demo of the automatic computation of semantic relations between data coded in IEML. Tools to come…

E-Future-text2

Human-dev-CI

E = Emptiness, U = Virtual, A = Actual, S = Sign, B = Being, T = Thing


IEML medium

The algorithmic medium

Before the algorithmic medium came the typographical medium (the printing press, broadcasting), which industrialized and automated the reproduction of information. In the new algorithmic medium, information is de facto ubiquitous, and automation now concentrates on the transformation of information.

The algorithmic medium is built from three interdependent components: the Web as a universal database (big data), the Internet as a universal computer (cloud), and the algorithms in the hands of people.

IEML (the Information Economy MetaLanguage) has been designed to exploit the full potential of the new algorithmic medium.

IEML: who and what is it for?

It would have been impossible to design IEML before the era of automatic computing and, a fortiori, to implement and use it. IEML was designed for digital natives and built to take advantage of the new pervasive social computing supported by big data, the cloud and open algorithms.

IEML is a language

IEML is an artificial language that has the expressive power of any natural language (like English, French, Russian, Arabic, etc.). In other words, you can say in IEML whatever you want and its opposite, with varying degrees of precision.

IEML is an inter-linguistic semantic code

We can describe IEML as a sort of pivot language. Its reading/writing interface pops up in whatever natural language you want, with an IEML text that translates itself into that specific language.

IEML is a semantic metadata system

IEML was also designed as a tagging system supporting semantic interoperability. Its main use is data categorization. As a universal addressing system for concepts, IEML can complement the universal addressing of data on the Web and of processors on the Internet.

IEML is a programming language

An IEML text programs the construction of a semantic network in natural languages and computes its semantic relations and differences with other texts.

IEML is a symbolic system

As with any other symbolic system, IEML results from the interaction of three interdependent layers of linguistic complexity: syntax, semantics and pragmatics.

EN-C-14-MMOM

IEML syntax

IEML syntax is an algebraic topology: this means that a complex network of relations (topology) is coded by an algebraic expression.

IEML Algebra

IEML algebra is based on six basic variables {E, U, A, S, B, T} and two operations {+, ×}. Multiplication builds links (departure node, arrival node, tag) and addition creates graphs by connecting the links. The result of any algebraic operation can be used as the basis for new operations. This recursivity allows the construction of successive layers of complexity.

A computable topology

Each distinct variable of the IEML algebra corresponds to a distinct graph. Given a set of variables, their relations and their semantic differences are computable.

EN-D-10-MMu_MMu

IEML semantics

As it is projected onto an algebraic topology, IEML’s semantics becomes computable.

The semantic projection onto an algebraic topology

– The IEML Script normalizes the notation of algebraic expressions.
– The IEML dictionary is organized as a set of paradigms, a paradigm being a semantic network of terms. Each IEML term can be translated into natural languages.
– With the IEML operations {+, ×} and their recursivity, the IEML grammar allows the construction of morphemes, words, clauses, phrases, complex propositions, texts and hypertexts.

The grammatical algorithms

Embedded in IEML, the grammatical algorithms can compute (see the sketch after this list):
– the intra-textual semantic network corresponding to an IEML text;
– the translation of an IEML semantic network into any chosen natural language;
– the inter-textual semantic network and the semantic differences corresponding to any set of IEML texts.
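
Restated as a hypothetical programming interface (the names, signatures and types are mine, not an existing IEML API), these three computations might look like this:

```python
from typing import Protocol, Iterable, Tuple

SemanticEdge = Tuple[str, str, str]  # (departure, arrival, tag), as in the algebra

class GrammaticalAlgorithms(Protocol):
    """Hypothetical interface mirroring the three computations listed above."""

    def intra_textual_network(self, ieml_text: str) -> Iterable[SemanticEdge]:
        """The semantic network internal to one IEML text."""

    def translate(self, ieml_text: str, language: str) -> str:
        """The rendering of an IEML semantic network in a chosen natural language."""

    def inter_textual_network(self, ieml_texts: Iterable[str]) -> Iterable[SemanticEdge]:
        """The semantic relations and differences across a set of IEML texts."""
```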

IEML pragmatics

IEML pragmatics is oriented towards self-organization and reflexive collective intelligence.

A new approach to data and social networks

When data are categorized in IEML, they self-organize into semantic networks and automatically compute their semantic relations and differences. Moreover, when communities engage in collaborative data curation using IEML, what they get in return is a simulated image of their collective intelligence process.

Modeling ideas as dynamic texts

We can model our collective intelligence as an evolving ecosystem of ideas. In this framework, an idea can be defined as the assembly of a concept, an affect, a percept (a sensory-motor image) and a social context. In a dynamic text, the concept is represented by an IEML text, the affect by credits (positive or negative), the percept by a multimedia dataset and the social context by an author (a player), a community (a semantic game) and a time-stamp.
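
A minimal sketch of such a dynamic text as a data structure, with field names and types that are merely illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Idea:
    """One idea in an ecosystem of ideas, following the four components
    described above; the field names and types are illustrative, not a spec."""
    concept: str                 # an IEML text
    affect: int                  # credits, positive or negative
    percept: List[str] = field(default_factory=list)   # links to a multimedia dataset
    author: str = ""             # the player who proposed the idea
    community: str = ""          # the semantic game it belongs to
    timestamp: datetime = field(default_factory=datetime.now)

idea = Idea(concept="(U x A x S) + (U x A x B)", affect=+3,
            percept=["https://example.org/clip"], author="player-1",
            community="open-curation-game")
print(idea)
```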

Automatic computing of dynamic hypertexts

Thanks to the IEML grammatical algorithms, any set of dynamic texts self-organizes into a dynamic hypertext that represents an ecosystem of ideas in the form of an immersive simulation. A reflexive collective intelligence can then emerge from collaborative data curation.


A reciprocal critique of artificial intelligence and the human sciences

I remember taking part, toward the end of the 1980s, in a Cerisy colloquium on the cognitive sciences attended by some of the great American names in the field, including proponents of the neuro-connectionist and logicist currents. Among the guests, the philosopher Hubert Dreyfus (author, notably, of What Computers Can’t Do, MIT Press, 1972) sharply criticized artificial intelligence researchers for taking no account of the intentionality uncovered by phenomenology. Real human reasoning, he reminded us, is situated, goal-directed, and draws its relevance from a context of interaction. The cognitive sciences, dominated by the logico-statistical current, were unable to account for the horizons of consciousness that illuminate intelligence. Dreyfus was no doubt right, but his critique did not go far enough, because phenomenology was not the only thing being ignored. Artificial intelligence (AI) also failed to integrate into the cognition it claimed to model the complexity of symbolic systems and of human communication, the media that support them, or the pragmatic tensions and social relations that animate them. In this respect, we live today in a paradoxical situation, since AI is enjoying impressive practical success at the very moment when its theoretical failure is becoming obvious.

Practical success, indeed, since the usefulness of statistical algorithms, machine learning, simulations of animal collective intelligence, neural networks and other pattern-recognition systems is plain everywhere. Automatic natural-language processing has never been more popular, as the use of Google Translate shows. The Web of data promoted by the WWW Consortium (directed by Sir Tim Berners-Lee) uses the same type of logical rules as the expert systems of the 1980s. Finally, the social-computation algorithms deployed by search engines and social media prove their effectiveness every day.

But we must also acknowledge the theoretical failure of AI since, despite the multitude of algorithmic tools available, artificial intelligence still cannot exhibit a convincing model of cognition. The discipline has prudently given up trying to simulate intelligence in its entirety. It is clear to any researcher in the human sciences who has practiced a little transdisciplinarity that, because of its teeming complexity, the object of the human sciences (mind, thought, intelligence, culture, society) cannot be taken into account in its entirety by any of the computational theories of cognition currently available. This is why, in practice, artificial intelligence settles for providing a heterogeneous toolbox (logical rules, formal syntaxes, statistical methods, neural or socio-biological simulations…) that offers no general solution to the problem of a mathematical modeling of human cognition.

However, artificial intelligence researchers have an easy answer to their critics from the human sciences: “You claim that our algorithms fail to account for the complexity of human cognition, but you yourselves propose none to remedy the problem. You merely point to a multitude of disciplines, each more ‘complex’ than the last (philosophy, psychology, linguistics, sociology, history, geography, literature, communication…), which have no common metalanguage and have not formalized their objects! How do you expect us to find our way through this jumble?” And this rejoinder is just as sensible as the criticism it answers.


Synthesis of artificial intelligence and the human sciences

What I learned from Hubert Dreyfus at that 1987 colloquium where I met him was not so much that phenomenology would be the key to all the problems of a scientific modeling of the mind (Husserl, the father of phenomenology, in fact thought that phenomenology, a sort of meta-science of consciousness, was impossible to mathematize, and that it even represented the non-mathematizable par excellence, the other of the mathematical science of nature), but rather that artificial intelligence was wrong to look for that key only in the zone lit by the streetlamp of arithmetic, logic and formal neurons… and that philosophers, hermeneuts and specialists in the complexity of meaning should take an active part in the research rather than settle for criticizing. To find the key, we had to widen our gaze and dig through the whole field of the human sciences, however opaque to computation it may seem at first sight. We needed a tool for handling meaning, signification, semantics in general, in a computational mode. Once the immense field of semantic relations had been illuminated by computation, a science of cognition worthy of the name could come into being. Indeed, as soon as a symbolic tool guarantees the computation of relations between signifieds, it becomes possible to compute the semantic relations between concepts, between ideas and between intelligences. Moved by these considerations, I developed the semantic theory of cognition and the metalanguage IEML: from their union results computational semantics.

Specialists in meaning, culture and thought feel helpless before the heterogeneous toolbox of artificial intelligence: nowhere in it do they recognize anything capable of handling the contextual complexity of signification. This is why computational semantics proposes that they wield the algorithmic tools in a coherent way, starting from the semantics of natural languages. Engineers, for their part, get lost in the motley multitude, the artistic vagueness and the lack of conceptual interoperability of the human sciences. By remedying this problem, computational semantics gives them a grip on the abundant tools and concepts of the elusive human sciences. In short, the great project of computational semantics consists in building a bridge between software engineering and the human sciences, so that the latter can put the computational power of information technology at their service and the former can integrate the hermeneutic subtlety and contextual complexity of the human sciences. But an artificial intelligence wide open to the human sciences and capable of computing the complexity of meaning would no longer be the artificial intelligence we know today. And human sciences that equipped themselves with a computable metalanguage, mobilized collective intelligence and finally mastered the algorithmic medium would no longer resemble the human sciences we have known since the eighteenth century: we would have crossed the threshold of a new episteme.


The designer

I grasped as early as the late 1970s that cognition was a social activity, equipped with intellectual technologies. It was already beyond doubt to me that algorithms were going to transform the world. And when I reflect on the meaning of my research activity over the last thirty years, I realize that it has always been oriented toward the construction of cognitive tools based on algorithms.

At the end of the 1980s and the beginning of the 1990s, designing expert systems and developing a method for knowledge engineering led me to discover the power of automatic reasoning (I gave an account of this in De la programmation considérée comme un des beaux-arts, Paris, La Découverte, 1992). Expert systems are programs that represent the knowledge of a group of specialists on a narrow subject by means of rules applied to a carefully structured database. I observed that this formalization of empirical know-how led to a transformation of the cognitive ecology of work collectives, something like a local paradigm shift. I also verified in situ that rule-based systems in fact functioned as tools for communicating expertise within organizations, thus leading to a more effective collective intelligence. Finally, I experienced the limits of purely logic-based cognitive modeling: like today’s ontologies, it only led to compartmentalized micro-worlds of reasoning. The term “artificial intelligence”, which evokes machines capable of autonomous decisions, was therefore misleading.

I then devoted myself to designing a tool for the dynamic visualization of mental models (this project is explained in L’Idéographie dynamique, vers une imagination artificielle, La Découverte, Paris, 1991). This attempt allowed me to explore the semiotic complexity of cognition in general and of language in particular. I came to appreciate the power of tools for representing complex systems as a means of augmenting cognition. But on that occasion I also discovered the limits of non-generative cognitive models, such as the one I had designed. To be truly useful, a tool for intellectual augmentation had to be fully generative, capable of simulating cognitive processes and of bringing forth new knowledge.

At the beginning of the 1990s I co-founded a start-up that marketed software for personal and collective knowledge management. I was involved in particular in the invention of the product, and then in training and advising its users (see Les Arbres de connaissances, with Michel Authier, La Découverte, Paris, 1992). The “knowledge trees” combined a system for the interactive representation of a community’s skills and knowledge with a communication system fostering the exchange and evaluation of knowledge. Unlike the tools of classical artificial intelligence, this one allowed all users to freely enrich the common database. What I took away from my experience in that company was the need to represent pragmatic contexts through immersive simulations, in which each selected set of data (people, knowledge, projects, etc.) reorganizes the space around itself and automatically generates a singular representation of the whole: a point of view. But in the course of this work I also met the challenge of semantic interoperability, which was to hold my attention for the following twenty-five years. Indeed, my experience as a tool builder and as a consultant in intellectual technologies had taught me that it was impossible to harmonize personal and collective knowledge management on a large scale without a common language. The publication of L’intelligence collective in 1994 translated into theory what I had glimpsed in my practice: new algorithm-based tools for cognitive augmentation were going to support unprecedented forms of intellectual collaboration. But the potential of algorithms would only be fully exploited thanks to a metalanguage gathering digitized data into the same semantic coordinate system.

From the mid-1990s onward, while I devoted my free time to designing this coordinate system (which was not yet called IEML), I watched the gradual development of the interactive and social Web. For the first time, the Web offered a universal memory accessible independently of the physical location of its supports and its readers. Multimedia communication between points of the network was instantaneous. One click on the address of a collection of data was enough to reach it. To the designer of cognitive tools that I was, the Web appeared as an opportunity to be seized.

The user

For nearly a quarter of a century I have taken part in many virtual communities and social media, in particular those that provided tools for the collaborative curation of data. Thanks to the social bookmarking platforms Delicious and Diigo, I was able to experiment with the pooling of personal memories to form a collective memory, the cooperative categorization of data, folksonomies emerging from collective intelligence, and tag clouds showing the semantic profile of a set of data. By taking part in the adventure of the Twine platform created by Nova Spivack between 2008 and 2010, I came to appreciate the strengths of collective data management centered on subjects rather than on people. But I also saw at first hand the inefficiency of semantic Web ontologies (used by Twine, among others) for the collaborative curation of data. The success of Twitter and of its ecosystem confirmed for me the power of the collective categorization of data, symbolized by the hashtag, which was eventually adopted by all social media. I quickly understood that tweets were metadata containing the identity of the author, a link to the data, a categorization by hashtag and a few words of appreciation. This structure is highly promising for personal and collective knowledge management. But because Twitter is made first of all for the rapid circulation of information, its potential for a long-term collective memory is not sufficiently exploited. This is why I became interested in data-curation platforms more oriented toward long-term memory, such as Bitly, Scoop.it! and Trove. On various forums I followed the development of semantic search engines, natural-language-processing techniques and big data analytics, without finding there the tools that would allow collective intelligence to cross a decisive threshold. Finally, I watched Google gather the data of the Web into a single base and saw how the Mountain View firm exploited the collective curation of Internet users by means of its algorithms: the results of its search engine are based on the hyperlinks that we create, and therefore on our involuntary collaboration. Everywhere in social media I saw collaborative management and statistical analysis of data developing, but at every step I ran into the semantic opacity that fragmented collective intelligence and limited its growth.

The future algorithmic intelligence will necessarily rest on the universal hypertextual memory. But my experience of collaborative data curation confirmed the hypothesis I had formed at the beginning of the 1990s, even before the development of the Web: as long as semantics was not transparent to computation and interoperable, as long as a universal code had not broken down the barriers between languages and classification systems, our collective intelligence could make only limited progress.

My watching and experimenting fed my work of technical design. During the years when I was building IEML, step by step, through trial and error, versions, reforms and fresh starts, I never became discouraged. My observations confirmed every day that we needed a computable and interoperable semantics. I had to invent the tool for collaborative data curation that would reflect our still separate and fragmented collective intelligences. I could see developing before my eyes the human activity that would use this new tool. I therefore concentrated my efforts on the design of a universal semantic platform where data curation would be automatically converted into a simulation of the curators’ collective intelligence.

My experience as a technical designer and practitioner has always preceded my theoretical syntheses. But, on the other hand, the design of tools had to be combined with the clearest possible knowledge of the function to be equipped. How can we augment cognition without knowing what it is, without knowing how it works? And since, in the case that concerned me, the augmentation rested precisely on a leap of reflexivity, how could I have reflected, mapped or observed something of which I had no model at all? I therefore had to establish a correspondence between an interoperable tool for categorizing data and a theory of cognition. To be continued in my next book: L’intelligence algorithmique.
