
Emergence

Emergence happens through an interdependent circulation of information between two levels of complexity. A code translates, and betrays, information in both directions: bottom-up and top-down.

Nature

According to our model, human collective intelligence emerges from natural evolution. The lower level of quantum complexity translates into a higher level of molecular complexity through atomic stabilization and coding. No more than 120 atomic elements explain the complexity of matter through their connections and reactions. The emergence of the next level of complexity – life – comes from the genetic code, which organisms use as a trans-generational memory. Communication in neuronal networks translates organic life into conscious phenomena, including sense data, pleasure and pain, desire, etc. Thus animal life emerges. Let's note that organic life is intrinsically ecosystemic and that animals have developed many forms of social or collective intelligence. The human level emerges through the symbolic code: language, music, images, rituals and all the complexity of culture. It is only thanks to symbols that we are able to conceptualize phenomena and think reflexively about what we do and think. Symbolic systems are all conventional, but the human species is symbolic by nature, so to speak. Here, collective intelligence reaches a new level of complexity because it is based on collaborative symbol manipulation.

Culture

[WARNING: the next 5 paragraphs can also be found in "Collective intelligence for educators"; if you have already read them, skip to the slide "Algorithmic medium".] The above slide describes the successive steps in the emergence of symbolic manipulation. As in the previous slide, each new layer of cultural complexity emerges from the creation of a coding system.

For most of human history, knowledge was embedded only in narratives, rituals and material tools. The first revolution in symbolic manipulation was the invention of writing, which endowed symbols with the ability to conserve themselves. This led to a remarkable augmentation of social memory and to the emergence of new forms of knowledge. Ideas were reified on an external surface, which is an important condition for critical thinking. A new kind of systematic knowledge was developed: hermeneutics, astronomy, medicine, architecture (including geometry), etc.

The second revolution optimized the manipulation of symbols: the invention of alphabets (Phoenician, Hebrew, Greek, Roman, Arabic, Cyrillic, Korean, etc.), of Chinese rational ideographies, of the Indian positional numeral system with a zero, of paper, and of the early printing techniques of China and Korea. The literate culture based on the alphabet (or on rational ideographies) developed critical thinking further and gave birth to philosophy. At this stage, scholars attempted to derive knowledge from observation and from deduction based on first principles. There was a deliberate effort to reach universality, particularly in mathematics, physics and cosmology.

The third revolution was the mechanization and industrialization of the reproduction and diffusion of symbols: the printing press, records, movies, radio, TV, etc. This revolution supported the emergence of the modern world, with its nation states, its industries and its experimental, mathematized natural sciences. It was only in typographic culture, from the 16th century on, that the natural sciences took the shape we currently enjoy: systematic observation or experimentation, and theories based on mathematical modeling. From the decomposition of theology and philosophy emerged the contemporary humanities and social sciences. But at this stage the human sciences were still fragmented into disciplines and incompatible theories. Moreover, their theories were rarely mathematized or testable.

We are now at the beginning of a fourth revolution, in which a ubiquitous and interconnected infosphere is filled with symbols – i.e. data – of all kinds (music, voice, images, texts, programs, etc.) that are being automatically transformed. With the democratization of big data analysis, the next generations will see the advent of a new scientific revolution… but this time it will be in the humanities and social sciences. The new human science will be based on the wealth of data produced by human communities and on growing computational power. This will lead to a reflexive collective intelligence, in which people will appropriate (big) data analysis and in which the subjects and objects of knowledge will be the human communities themselves.

Algo-medium

Let's have a closer look at the algorithmic medium. Four layers have been added since the middle of the 20th century. Again, we observe the progressive invention of new coding systems, mainly aimed at the addressing of processors, data and metadata.

The first layer is the invention of the automatic digital computer itself. We can describe computation as "processing on data." It is self-evident that computation cannot be programmed unless we have a very precise addressing system for the data and for the specialized operators/processors that transform the data. At the beginning, these addressing systems were purely local and were managed by operating systems.

The second layer is the emergence of a universal addressing system for computers, the Internet protocol, which allows the exchange of data and collaborative computing across telecommunication networks.

The third layer is the invention of a universal system for addressing and displaying data (URLs, HTTP, HTML). Thanks to this universal addressing of data, the World Wide Web is a hypertextual global database that we all create and share. It is obvious that the Web has had a deep social, cultural and economic impact over the last twenty years.
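To make these three layers of addressing tangible, here is a minimal Python sketch (the host example.org is only a placeholder, and the local address printed is specific to one run of one process): it contrasts a purely local data address, the universal address of a computer, and the universal address of a document.

```python
import socket
from urllib.request import urlopen

# Layer 1: a purely local address, managed by the operating system and the
# language runtime (in CPython, id() happens to be the object's memory
# address); it means nothing outside this machine and this process.
data = b"some local data"
print(hex(id(data)))

# Layer 2: the Internet protocol gives every connected computer a universal
# address; here we resolve a host name to its IP address.
print(socket.gethostbyname("example.org"))

# Layer 3: URLs, HTTP and HTML give every published document a universal
# address, so the same data can be fetched and displayed anywhere.
print(urlopen("https://example.org/").read()[:60])
```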

The construction of the algorithmic medium is ongoing. We are now ready to add a fourth layer of addressing and, this time, it will be a universal addressing system for semantic metadata. Why? First, we are still unable to solve the problem of semantic interoperability across languages, classifications and ontologies. Second, except for some approximate statistical and logical methods, we are still unable to compute semantic relations, including distances and differences. This new symbolic system will be a key element of a future scientific revolution in the humanities and social sciences, leading to a new kind of reflexive collective intelligence for our species. Moreover, it will pave the way for the emergence of a new scientific cosmos – not a physical one but a cosmos of the mind that we will build and explore collaboratively. I want to underline strongly here that the semantic categorization of data will stay in the hands of people. We will be able to categorize the data as we want, from many different points of view. All that is required is that we use the same code. The description itself will be free.
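As a deliberately naive illustration of this last point (the tags and the distance measure below are placeholders of my own, not the semantic code proposed here), two communities can describe the same documents freely and still remain comparable, as long as they draw their tags from one shared code:

```python
# Toy illustration of a shared semantic code: the categorization is free,
# the code is common, so semantic relations become computable.
community_a = {"doc1": {"S:education", "S:governance"},
               "doc2": {"S:health"}}
community_b = {"doc1": {"S:education"},
               "doc2": {"S:health", "S:economy"}}

def semantic_distance(x, y):
    """Crude stand-in for a semantic distance: Jaccard distance between
    two sets of tags drawn from the shared code."""
    union = x | y
    return 1.0 - len(x & y) / len(union) if union else 0.0

for doc in ("doc1", "doc2"):
    print(doc, round(semantic_distance(community_a[doc], community_b[doc]), 2))
```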

Algo-intel

Let's now examine the emerging algorithmic intelligence. This new level of symbolic manipulation will be operated and shared in a mixed environment combining virtual worlds and augmented realities. The two lower levels of the above slide represent the current Internet: an interaction between the "internet of things" and the "clouds," where all the data converge in a ubiquitous infosphere… The two higher levels, the "semantic sensorium" and the "reflexive collective intelligence," depict the human condition that will unfold in the future.

The things are material, localized realities that have GPS addresses. Here we are speaking about smart territories, cities, buildings, machines, robots and all the mobile gadgets (phones, tablets, watches, etc.) that we can wear. Through binary code, the things are in constant interaction with the ubiquitous memory in the clouds. Streams of data and information processing reverberate between the things and the clouds.

Once the data are coded by a computable universal semantic addressing system, the data in the clouds will be automatically projected into a new sensorium. In this 3D, immersive and dynamic virtual environment, we will be able to explore through our senses the abstract relationships between people, places and the meaning of digital information. I am not speaking here of a representation, reproduction or imitation of material space, as in, for example, Second Life. We have to imagine something completely different: a semantic sphere in which the cognitive processes of human communities will be modeled. This semantic sphere will empower all its users. Search, knowledge exploration, data analysis and synthesis, collaborative learning and collaborative data curation will be multiplied and enhanced by the new interoperable semantic computing.

We will get reflexive collective intelligence thanks to a scientific, computable and transparent modeling of cognition from real data. This modeling will be based on the semantic code, which provides the "coordinate system" of the new cognitive cosmos. Of course, people will not be forced to understand the details of this semantic code. They will interact in the new sensorium through their preferred natural language (the linguistic codes of the above slide) and their favorite multimedia interfaces. Translation between different languages and optional interface metaphors will be automatic. The important point is that people will dynamically observe, analyze and map their own personal and collective cognitive processes. Thanks to this new reflexivity, we will improve our collaborative learning processes and the collaborative monitoring and control of our physical environments. And this will boost human development!

Collective-Intelligence

The above slide represents the workings of a collective intelligence oriented towards human development. In this model, collective intelligence emerges from an interaction between two levels: virtual and actual. The actual is addressed in space and time, while the virtual is latent, potential or intangible. The two levels function and communicate through several symbolic codes. In any coding system, there are coding elements (signs), coded references (things) and coders (beings). This is why both the actual and the virtual level can be conceptually analysed into three kinds of networks: signs, beings and things.

Actual human development can be analysed into a sphere of messages (signs), a sphere of people (beings) and a sphere of equipment (things) – this last word understood in the broadest possible sense. Of course, the three spheres are interdependent.

Virtual human development is analysed into a sphere of knowledge (signs), a sphere of ethics (beings) and a sphere of power (things). Again, the three spheres are interdependent.

Each of the six spheres is further analysed into three subdivisions, corresponding to the sub-rows on the slide. The mark S (sign) points to the abstract factors, the mark B (being) indicates the affective dimensions and the mark T (thing) shows the concrete aspects of each sphere.

All the realities described in the above table are interdependent, following the actual/virtual and the sign/being/thing dialectics. Any increase or decrease in one "cell" will have consequences in the other cells. This is just one example of the many ways collective intelligence will be represented, monitored and made reflexive in the semantic sensorium…
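Purely as a reading aid, the structure of the table can be sketched as a small data structure (the cell labels come from the paragraphs above; the slide's exact sub-row names are not reproduced here):

```python
# The six spheres of the model, organized by the actual/virtual and the
# sign/being/thing dialectics.
spheres = {
    "actual":  {"sign": "messages",  "being": "people", "thing": "equipment"},
    "virtual": {"sign": "knowledge", "being": "ethics", "thing": "power"},
}

# Each sphere is further divided into abstract (S), affective (B) and
# concrete (T) aspects, giving the grid of interdependent "cells".
aspects = ("S", "B", "T")
cells = [(level, dialectic, aspect)
         for level, row in spheres.items()
         for dialectic in row
         for aspect in aspects]
print(len(cells))  # 18 cells, each one interdependent with the others
```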

To dig into the philosophical concept of algorithmic intelligence, go there.


Proper citation: "The Philosophical Concept of Algorithmic Intelligence," Spanda Journal, special issue on "Collective Intelligence," V (2), December 2014, pp. 17-25. The original text can be found for free online at Spanda.

“Transcending the media, airborne machines will announce the voice of the many. Still indiscernible, cloaked in the mists of the future, bathing another humanity in its murmuring, we have a rendezvous with the over-language.” Collective Intelligence, 1994, p. xxviii.

Twenty years after Collective Intelligence

This paper was written in 2014, twenty years after L’intelligence collective [the original French edition of Collective Intelligence].[2] The main purpose of Collective Intelligence was to formulate a vision of a cultural and social evolution that would be capable of making the best use of the new possibilities opened up by digital communication. Long before the success of social networks on the Web,[3] I predicted the rise of “engineering the social bond.” Eight years before the founding of Wikipedia in 2001, I imagined an online “cosmopedia” structured in hypertext links. When the digital humanities and the social media had not even been named, I was calling for an epistemological and methodological transformation of the human sciences. But above all, at a time when less than one percent of the world’s population was connected,[4] I was predicting (along with a small minority of thinkers) that the Internet would become the centre of the global public space and the main medium of communication, in particular for the collaborative production and sharing of knowledge and the dissemination of news.[5] In spite of the considerable growth of interactive digital communication over the past twenty years, we are still far from the ideal described in Collective Intelligence. It seemed to me already in 1994 that the anthropological changes under way would take root and inaugurate a new phase in the human adventure only if we invented what I then called an “over-language.” How can communication readily reach across the multiplicity of dialects and cultures? How can we map the deluge of digital data, order it around our interests and extract knowledge from it? How can we master the waves, currents and depths of the software ocean? Collective Intelligence envisaged a symbolic system capable of harnessing the immense calculating power of the new medium and making it work for our benefit. But the over-language I foresaw in 1994 was still in the “indiscernible” period, shrouded in “the mists of the future.” Twenty years later, the curtain of mist has been partially pierced: the over-language now has a name, IEML (acronym for Information Economy MetaLanguage), a grammar and a dictionary.[6]

Reflexive collective intelligence

Collective intelligence drives human development, and human development supports the growth of collective intelligence. By improving collective intelligence we can place ourselves in this feedback loop and orient it in the direction of a self-organizing virtuous cycle. This is the strategic intuition that has guided my research. But how can we improve collective intelligence? In 1994, the concept of digital collective intelligence was still revolutionary. In 2014, this term is commonly used by consultants, politicians, entrepreneurs, technologists, academics and educators. Crowdsourcing has become a common practice, and knowledge management is now supported by the decentralized use of social media. The interconnection of humanity through the Internet, the development of the knowledge economy, the rush to higher education and the rise of cloud computing and big data are all indicators of an increase in our cognitive power. But we have yet to cross the threshold of reflexive collective intelligence. Just as dancers can only perfect their movements by reflecting them in a mirror, just as yogis develop awareness of their inner being only through the meditative contemplation of their own mind, collective intelligence will only be able to set out on the path of purposeful learning and thus move on to a new stage in its growth by achieving reflexivity. It will therefore need to acquire a mirror that allows it to observe its own cognitive processes. Be careful! Collective intelligence does not and will not have autonomous consciousness: when I talk about reflexive collective intelligence, I mean that human individuals will have a clearer and better-shared knowledge than they have today of the collective intelligence in which they participate, a knowledge based on transparent principles and perfectible scientific methods.

The key: A complete modelling of language

But how can a mirror of collective intelligence be constructed? It is clear that the context of reflection will be the algorithmic medium or, to put it another way, the Internet, the calculating power of cloud computing, ubiquitous communication and distributed interactive mobile interfaces. Since we can only reflect collective intelligence in the algorithmic medium, we must yield to the nature of that medium and have a calculable model of our intelligence, a model that will be fed by the flows of digital data from our activities. In short, we need a mathematical (with calculable models) and empirical (based on data) science of collective intelligence. But, once again, is such a science possible? Since humanity is a species that is highly social, its intelligence is intrinsically social, or collective. If we had a mathematical and empirical science of human intelligence in general, we could no doubt derive a science of collective intelligence from it. This leads us to a major problem that has been investigated in the social sciences, the human sciences, the cognitive sciences and artificial intelligence since the twentieth century: is a mathematized science of human intelligence possible? It is language or, to put it another way, symbolic manipulation that distinguishes human cognition. We use language to categorize sensory data, to organize our memory, to think, to communicate, to carry out social actions, etc. My research has led me to the conclusion that a science of human intelligence is indeed possible, but on the condition that we solve the problem of the mathematical modelling of language. I am speaking here of a complete scientific modelling of language, one that would not be limited to the purely logical and syntactic aspects or to statistical correlations of corpora of texts, but would be capable of expressing semantic relationships formed between units of meaning, and doing so in an algebraic, generative mode.[7] Convinced that an algebraic model of semantics was the key to a science of intelligence, I focused my efforts on discovering such a model; the result was the invention of IEML.[8] IEML—an artificial language with calculable semantics—is the intellectual technology that will make it possible to find answers to all the above-mentioned questions. We now have a complete scientific modelling of language, including its semantic aspects. Thus, a science of human intelligence is now possible. It follows, then, that a mathematical and empirical science of collective intelligence is possible. Consequently, a reflexive collective intelligence is in turn possible. This means that the acceleration of human development is within our reach.
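To give a rough feel for what "algebraic, generative" means here (a minimal sketch under my own assumptions, not the model the author actually built), imagine a small set of semantic primitives and a composition operation: a finite code then generates an unbounded, fully computable space of units of meaning.

```python
from itertools import product

# Hypothetical sketch of a generative algebra of meaning: six primitives
# and one composition operation generate ever larger layers of units,
# each with a canonical, comparable written form.
PRIMITIVES = ("emptiness", "virtual", "actual", "sign", "being", "thing")

def compose(a, b):
    """Compose two units of meaning into a new, addressable unit."""
    return f"({a}*{b})"

# Layer 1: all pairwise compositions of the primitives (36 units).
layer1 = [compose(a, b) for a, b in product(PRIMITIVES, repeat=2)]

# Layer 2 and beyond: compositions of previous layers; the generativity is
# unbounded, yet every unit remains a finite, computable expression.
print(len(layer1), compose(layer1[0], layer1[7]))
```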

The scientific file: The Semantic Sphere

I have written two volumes on my project of developing the scientific framework for a reflexive collective intelligence, and I am currently writing the third. This trilogy can be read as the story of a voyage of discovery. The first volume, The Semantic Sphere 1 (2011),[9] provides the justification for my undertaking. It contains the statement of my aims, a brief intellectual autobiography and, above all, a detailed dialogue with my contemporaries and my predecessors. With a substantial bibliography,[10] that volume presents the main themes of my intellectual process, compares my thoughts with those of the philosophical and scientific tradition, engages in conversation with the research community, and finally, describes the technical, epistemological and cultural context that motivated my research. Why write more than four hundred pages to justify a program of scientific research? For one very simple reason: no one in the contemporary scientific community thought that my research program had any chance of success. What is important in computer science and artificial intelligence is logic, formal syntax, statistics and biological models. Engineers generally view social sciences such as sociology or anthropology as nothing but auxiliary disciplines limited to cosmetic functions: for example, the analysis of usage or the experience of users. In the human sciences, the situation is even more difficult. All those who have tried to mathematize language, from Leibniz to Chomsky, to mention only the greatest, have failed, achieving only partial results. Worse yet, the greatest masters, those from whom I have learned so much, from the semiologist Umberto Eco[11] to the anthropologist Levi-Strauss,[12] have stated categorically that the mathematization of language and the human sciences is impracticable, impossible, utopian. The path I wanted to follow was forbidden not only by the habits of engineers and the major authorities in the human sciences but also by the nearly universal view that “meaning depends on context,”[13] unscrupulously confusing mathematization and quantification, denouncing on principle, in a “knee jerk” reaction, the “ethnocentric bias” of any universalist approach[14] and recalling the “failure” of Esperanto.[15] I have even heard some of the most agnostic speak of the curse of Babel. It is therefore not surprising that I want to make a strong case in defending the scientific nature of my undertaking: all explorers have returned empty-handed from this voyage toward mathematical language, if they returned at all.

The metalanguage: IEML

But one cannot go on forever announcing one’s departure on a voyage: one must set forth, navigate . . . and return. The second volume of my trilogy, La grammaire d’IEML,[16] contains the very technical account of my journey from algebra to language. In it, I explain how to construct sentences and texts in IEML, with many examples. But that 150-page book also contains 52 very dense pages of algorithms and mathematics that show in detail how the internal semantic networks of that artificial language can be calculated and translated automatically into natural languages. To connect a mathematical syntax to a semantics in natural languages, I had to, almost single-handed,[17] face storms on uncharted seas, to advance across the desert with no certainty that fertile land would be found beyond the horizon, to wander for twenty years in the convoluted labyrinth of meaning. But by gradually joining sign, being and thing in turn in the sense of the virtual and actual, I finally had my Ariadne’s thread, and I made a map of the labyrinth, a complicated map of the metalanguage, that “Northwest Passage”[18] where the waters of the exact sciences and the human sciences converged. I had set my course in a direction no one considered worthy of serious exploration since the crossing was thought impossible. But, against all expectations, my journey reached its goal. The IEML Grammar is the scientific proof of this. The mathematization of language is indeed possible, since here is a mathematical metalanguage. What is it exactly? IEML is an artificial language with calculable semantics that puts no limits on the possibilities for the expression of new meanings. Given a text in IEML, algorithms reconstitute the internal grammatical and semantic network of the text, translate that network into natural languages and calculate the semantic relationships between that text and the other texts in IEML. The metalanguage generates a huge group of symmetric transformations between semantic networks, which can be measured and navigated at will using algorithms. The IEML Grammar demonstrates the calculability of the semantic networks and presents the algorithmic workings of the metalanguage in detail. Used as a system of semantic metadata, IEML opens the way to new methods for analyzing large masses of data. It will be able to support new forms of translinguistic hypertextual communication in social media, and will make it possible for conversation networks to observe and perfect their own collective intelligence. For researchers in the human sciences, IEML will structure an open, universal encyclopedic library of multimedia data that reorganizes itself automatically around subjects and the interests of its users.
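To give an intuition of what such translinguistic communication could look like (my own simplified illustration, not the IEML grammar or its actual algorithms), a text can be stored as a network of language-neutral semantic units that per-language dictionaries render into any natural language:

```python
# Simplified illustration: a text is stored as language-neutral semantic
# triples; dictionaries render the same structure in different languages.
text = [("collective_intelligence", "augments", "human_development")]

dictionaries = {
    "en": {"collective_intelligence": "collective intelligence",
           "augments": "augments",
           "human_development": "human development"},
    "fr": {"collective_intelligence": "intelligence collective",
           "augments": "augmente",
           "human_development": "développement humain"},
}

def render(triples, lang):
    """Render the semantic network in the requested natural language."""
    d = dictionaries[lang]
    return [" ".join(d[unit] for unit in triple) for triple in triples]

print(render(text, "en"))
print(render(text, "fr"))
```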

A new frontier: Algorithmic Intelligence

Having mapped the path I discovered in La grammaire d’IEML, I will now relate what I saw at the end of my journey, on the other side of the supposedly impassable territory: the new horizons of the mind that algorithmic intelligence illuminates. Because IEML is obviously not an end in itself. It is only the necessary means for the coming great digital civilization to enable the sun of human knowledge to shine more brightly. I am talking here about a future (but not so distant) state of intelligence, a state in which capacities for reflection, creation, communication, collaboration, learning, and analysis and synthesis of data will be infinitely more powerful and better distributed than they are today. With the concept of Algorithmic Intelligence, I have completed the risky work of prediction and cultural creation I undertook with Collective Intelligence twenty years ago. The contemporary algorithmic medium is already characterized by digitization of data, automated data processing in huge industrial computing centres, interactive mobile interfaces broadly distributed among the population and ubiquitous communication. We can make this the medium of a new type of knowledge—a new episteme[19]—by adding a system of semantic metadata based on IEML. The purpose of this paper is precisely to lay the philosophical and historical groundwork for this new type of knowledge.

Philosophical genealogy of algorithmic intelligence

The three ages of reflexive knowledge

Since my project here involves a reflexive collective intelligence, I would like to place the theme of reflexive knowledge in its historical and philosophical context. As a first approximation, reflexive knowledge may be defined as knowledge knowing itself. “All men by nature desire to know,” wrote Aristotle, and this knowledge implies knowledge of the self.[20] Human beings have no doubt been speculating about the forms and sources of their own knowledge since the dawn of consciousness. But the reflexivity of knowledge took a decisive step around the middle of the first millennium BCE,[21] during the period when the Buddha, Confucius, the Hebrew prophets, Socrates and Zoroaster (in alphabetical order) lived. These teachers involved the entire human race in their investigations: they reflected consciousness from a universal perspective. This first great type of systematic research on knowledge, whether philosophical or religious, almost always involved a divine ideal, or at least a certain “relation to Heaven.” Thus we may speak of a theosophical age of reflexive knowledge. I will examine the Aristotelian lineage of this theosophical consciousness, which culminated in the concept of the agent intellect. Starting in the sixteenth century in Europe—and spreading throughout the world with the rise of modernity—there was a second age of reflection on knowledge, which maintained the universal perspective of the previous period but abandoned the reference to Heaven and confined itself to human knowledge, with its recognized limits but also its rational ideal of perfectibility. This was the second age, the scientific age, of reflexive knowledge. Here, the investigation follows two intertwined paths: one path focusing on what makes knowledge possible, the other on what limits it. In both cases, knowledge must define its transcendental subject, that is, it must discover its own determinations. There are many signs in 2014 indicating that in the twenty-first century—around the point where half of humanity is connected to the Internet—we will experience a third stage of reflexive knowledge. This “version 3.0” will maintain the two previous versions’ ideals of universality and scientific perfectibility but will be based on the intensive use of technology to augment and reflect systematically our collective intelligence, and therefore our capacities for personal and social learning. This is the coming technological age of reflexive knowledge with its ideal of an algorithmic intelligence. The brief history of these three modalities—theosophical, scientific and technological—of reflexive knowledge can be read as a philosophical genealogy of algorithmic intelligence.

The theosophical age and its agent intellect

A few generations earlier, Socrates might have been a priest in the circle around the Pythia; he had taken the famous maxim “Know thyself” from the Temple of Apollo at Delphi. But in the fifth century BCE in Athens, Socrates extended the Delphic injunction in an unexpected way, introducing dialectical inquiry. He asked his contemporaries: What do you think? Are you consistent? Can you justify what you are saying about courage, justice or love? Could you repeat it seriously in front of a little group of intelligent or curious citizens? He thus opened the door to a new way of knowing one’s own knowledge, a rational expansion of consciousness of self. His main disciple, Plato, followed this path of rigorous questioning of the unthinking categorization of reality, and finally discovered the world of Ideas. Ideas for Plato are intellectual forms that, unlike the phenomena they categorize, do not belong to the world of Becoming. These intelligible forms are the original essences, archetypes beyond reality, which project into phenomenal time and space all those things that seem to us to be truly real because they are tangible, but that are actually only pale copies of the Ideas. We would say today that our experience is mainly determined by our way of categorizing it. Plato taught that humanity can only know itself as an intelligent species by going back to the world of Ideas and coming into contact with what explains and motivates its own knowledge. Aristotle, who was Plato’s student and Alexander the Great’s tutor, created a grand encyclopedic synthesis that would be used as a model for eighteen centuries in a multitude of cultures. In it, he integrates Plato’s discovery of Ideas with the sum of knowledge of his time. He places at the top of his hierarchical cosmos divine thought knowing itself. And in his Metaphysics,[22] he defines the divinity as “thought thinking itself.” This supreme self-reflexive thought was for him the “prime mover” that inspires the eternal movement of the cosmos. In De Anima,[23] his book on psychology and the theory of knowledge, he states that, under the effect of an agent intellect separate from the body, the passive intellect of the individual receives intelligible forms, a little like the way the senses receive sensory forms. In thinking these intelligible forms, the passive intellect becomes one with its objects and, in so doing, knows itself. Starting from the enigmatic propositions of Aristotle’s theology and psychology, a whole lineage of Peripatetic and Neo-Platonic philosophers—first “pagans,” then Muslims, Jews and Christians—developed the discipline of noetics, which speculates on the divine intelligence, its relation to human intelligence and the type of reflexivity characteristic of intelligence in general.[24] According to the masters of noetics, knowledge can be conceptually divided into three aspects that, in reality, are indissociable and complementary:

  • the intellect, or the knowing subject
  • the intelligence, or the operation of the subject
  • the intelligible, or what is known—or can be known—by the subject by virtue of its operation

From a theosophical perspective, everything that happens takes place in the unity of a self-reflexive divine thought, or (in the Indian tradition) in the consciousness of an omniscient Brahman or Buddha, open to infinity. In the Aristotelian tradition, Avicenna, Maimonides and Albert the Great considered that the identity of the intellect, the intelligence and the intelligible was achieved eternally in God, in the perfect reflexivity of thought thinking itself. In contrast, it was clear to our medieval theosophists that in the case of human beings, the three aspects of knowledge were neither complete nor identical. Indeed, since the passive intellect knows itself only through the intermediary of its objects, and these objects are constantly disappearing and being replaced by others, the reflexive knowledge of a finite human being can only be partial and transitory. Ultimately, human knowledge could know itself only if it simultaneously knew, completely and enduringly, all its objects. But that, obviously, is reserved only for the divinity. I should add that the “one beyond the one” of the neo-Platonist Plotinus and the transcendent deity of the Abrahamic traditions are beyond the reach of the human mind. That is why our theosophists imagined a series of mediations between transcendence and finitude. In the middle of that series, a metaphysical interface provides communication between the unimaginable and inaccessible deity and mortal humanity dispersed in time and space, whose living members can never know—or know themselves—other than partially. At this interface, we find the agent intellect, which is separate from matter in Aristotle’s psychology. The agent intellect is not limited—in the realm of time—to sending the intelligible categories that inform the human passive intellect; it also determines—in the realm of eternity—the maximum limit of what the human race can receive of the universal and perfectly reflexive knowledge of the divine. That is why, according to the medieval theosophists, the best a mortal intelligence can do to approach complete reflexive knowledge is to contemplate the operation in itself of the agent intellect that emanates from above and go back to the source through it. In accordance with this regulating ideal of reflexive knowledge, living humanity is structured hierarchically, because human beings are more or less turned toward the illumination of the agent intellect. At the top, prophets and theosophists receive a bright light from the agent intellect, while at the bottom, human beings turned toward coarse material appetites receive almost nothing. The influx of intellectual forms is gradually obscured as we go down the scale of degree of openness to the world above.

The scientific age and its transcendental subject

With the European Renaissance, the use of the printing press, the construction of new observation instruments, and the development of mathematics and experimental science heralded a new era. Reflection on knowledge took a critical turn with Descartes’s introduction of radical doubt and the scientific method, in accordance with the needs of educated Europe in the seventeenth century. God was still present in the Cartesian system, but He was only there, ultimately, to guarantee the validity of the efforts of human scientific thought: “God is not a deceiver.”[25] The fact remains that Cartesian philosophy rests on the self-reflexive edge, which has now moved from the divinity to the mortal human: “I think, therefore I am.”[26] In the second half of the seventeenth century, Spinoza and Leibniz received the critical scientific rationalism developed by Descartes, but they were dissatisfied with his dualism of thought (mind) and extension (matter). They therefore attempted, each in his own way, to constitute reflexive knowledge within the framework of coherent monism. For Spinoza, nature (identified with God) is a unique and infinite substance of which thought and extension are two necessary attributes among an infinity of attributes. This strict ontological monism is counterbalanced by a pluralism of expression, because the unique substance possesses an infinity of attributes, and each attribute, an infinity of modes. The summit of human freedom according to Spinoza is the intellectual love of God, that is, the most direct and intuitive possible knowledge of the necessity that moves the nature to which we belong. For Leibniz, the world is made up of monads, metaphysical entities that are closed but are capable of an inner perception in which the whole is reflected from their singular perspective. The consistency of this radical pluralism is ensured by the unique, infinite divine intelligence that has considered all possible worlds in order to create the best one, which corresponds to the most complex—or the richest—of the reciprocal reflections of the monads. As for human knowledge—which is necessarily finite—its perfection coincides with the clearest possible reflection of a totality that includes it but whose unity is thought only by the divine intelligence. After Leibniz and Spinoza, the eighteenth century saw the growth of scientific research, critical thought and the educational practices of the Enlightenment, in particular in France and the British Isles. The philosophy of the Enlightenment culminated with Kant, for whom the development of knowledge was now contained within the limits of human reason, without reference to the divinity, even to envelop or guarantee its reasoning. But the ideal of reflexivity and universality remained. The issue now was to acquire a “scientific” knowledge of human intelligence, which could not be done without the representation of knowledge to itself, without a model that would describe intelligence in terms of what is universal about it. This is the purpose of Kantian transcendental philosophy. Here, human intelligence, armed with its reason alone, now faces only the phenomenal world. Human intelligence and the phenomenal world presuppose each other. Intelligence is programmed to know sensory phenomena that are necessarily immersed in space and time. As for phenomena, their main dimensions (space, time, causality, etc.) correspond to ways of perceiving and understanding that are specific to human intelligence. 
These are forms of the transcendental subject and not intrinsic characteristics of reality. Since we are confined within our cognitive possibilities, it is impossible to know what things are “in themselves.” For Kant, the summit of reflexive human knowledge is in a critical awareness of the extension and the limits of our possibility of knowing. Descartes, Spinoza, Leibniz, the English and French Enlightenment, and Kant accomplished a great deal in two centuries, and paved the way for the modern philosophy of the nineteenth and twentieth centuries. A new form of reflexive knowledge grew, spread, and fragmented into the human sciences, which mushroomed with the end of the monopoly of theosophy. As this dispersion occurred, great philosophers attempted to grasp reflexive knowledge in its unity. The reflexive knowledge of the scientific era neither suppressed nor abolished reflexive knowledge of the theosophical type, but it opened up a new domain of legitimacy of knowledge, freed of the ideal of divine knowledge. This de jure separation did not prevent de facto unions, since there was no lack of religious scholars or scholarly believers. Modern scientists could be believers or non-believers. Their position in relation to the divinity was only a matter of motivation. Believers loved science because it revealed the glory of the divinity, and non-believers loved it because it explained the world without God. But neither of them used as arguments what now belonged only to their private convictions. In the human sciences, there were systematic explorations of the determinations of human existence. And since we are thinking beings, the determinations of our existence are also those of our thought. How do the technical, historical, economic, social and political conditions in which we live form, deform and set limits on our knowledge? What are the structures of our biology, our language, our symbolic systems, our communicative interactions, our psychology and our processes of subjectivation? Modern thought, with its scientific and critical ideal, constantly searches for the conditions and limits imposed on it, particularly those that are as yet unknown to it, that remain in the shadows of its consciousness. It seeks to discover what determines it “behind its back.” While the transcendental subject described by Kant in his Critique of Pure Reason fixed the image a great mind had of it in the late eighteenth century, modern philosophy explores a transcendental subject that is in the process of becoming, continually being re-examined and more precisely defined by the human sciences, a subject immersed in the vagaries of cultures and history, emerging from its unconscious determinations and the techno-symbolic mechanisms that drive it. I will now broadly outline the figure of the transcendental subject of the scientific era, a figure that re-examines and at the same time transforms the three complementary aspects of the agent intellect.

  • The Aristotelian intellect becomes living intelligence. This involves the effective cognitive activities of subjects, what is experienced spontaneously in time by living, mortal human beings.
  • The intelligence becomes scientific investigation. I use this term to designate all undertakings by which the living intelligence becomes scientifically intelligible, including the technical and symbolic tools, the methods and the disciplines used in those undertakings.
  • The intelligible becomes the intelligible intelligence, which is the image of the living intelligence that is produced through scientific and critical investigation.

An evolving transcendental subject emerges from this reflexive cycle in which the living intelligence contemplates its own image in the form of a scientifically intelligible intelligence. Scientific investigation here is the internal mirror of the transcendental subjectivity, the mediation through which the living intelligence observes itself. It is obviously impossible to confuse the living intelligence and its scientifically intelligible image, any more than one can confuse the map and the territory, or the experience and its description. Nor can one confuse the mirror (scientific investigation) with the being reflected in it (the living intelligence), nor with the image that appears in the mirror (the intelligible intelligence). These three aspects together form a dynamic unit that would collapse if one of them were eliminated. While the living intelligence would continue to exist without a mirror or scientific image, it would be very much diminished. It would have lost its capacity to reflect from a universal perspective. The creative paradox of the intellectual reflexivity of the scientific age may be formulated as follows. It is clear, first of all, that the living intelligence is truly transformed by scientific investigation, since the living intelligence that knows its image through a certain scientific investigation is not the same (does not have the same experience) as the one that does not know it, or that knows another image, the result of another scientific investigation. But it is just as clear, by definition, that the living intelligence reflects itself in the intelligible image presented to it through scientific knowledge. In other words, the living intelligence is equally dependent on the scientific and critical investigation that produces the intelligible image in which it is reflected. When we observe our physical appearance in a mirror, the image in the mirror in no way changes our physical appearance, only the mental representation we have of it. However, the living intelligence cannot discover its intelligible image without including the reflexive process itself in its experience, and without at the same time being changed. In short, a critical science that explores the limits and determinations of the knowing subject does not only reflect knowledge—it increases it. Thus the modern transcendental subject is—by its very nature—evolutionary, participating in a dynamic of growth. In line with this evolutionary view of the scientific age, which contrasts with the fixity of the previous age, the collectivity that possesses reflexive knowledge is no longer a theosophical hierarchy oriented toward the agent intellect but a republic of letters oriented toward the augmentation of human knowledge, a scientific community that is expanding demographically and is organized into academies, learned societies and universities. While the agent intellect looked out over a cosmos emanating from eternity, in analog resonance with the human microcosm, the transcendental subject explores a universe infinitely open to scientific investigation, technical mastery and political liberation.

The technological age and its algorithmic intelligence

Reflexive knowledge has, in fact, always been informed by some technology, since it cannot be exercised without symbolic tools and thus the media that support those tools. But the next age of reflexive knowledge can properly be called technological because the technical augmentation of cognition is explicitly at the centre of its project. Technology now enters the loop of reflexive consciousness as the agent of the acceleration of its own augmentation. This last point was no doubt glimpsed by a few pre–twentieth century philosophers, such as Condorcet in the eighteenth century, in his posthumous book of 1795, Sketch for a Historical Picture of the Progress of the Human Mind. But the truly technological dimension of reflexive knowledge really began to be thought about fully only in the twentieth century, with Pierre Teilhard de Chardin, Norbert Wiener and Marshall McLuhan, to whom we should also add the modest genius Douglas Engelbart. The regulating ideal of the reflexive knowledge of the theosophical age was the agent intellect, and that of the scientific-critical age was the transcendental subject. In continuity with the two preceding periods, the reflexive knowledge of the technological age will be organized around the ideal of algorithmic intelligence, which inherits from the agent intellect its universality or, in other words, its capacity to unify humanity’s reflexive knowledge. It also inherits its power to be reflected in finite intelligences. But, in contrast with the agent intellect, instead of descending from eternity, it emerges from the multitude of human actions immersed in space and time. Like the transcendental subject, algorithmic intelligence is rational, critical, scientific, purely human, evolutionary and always in a state of learning. But the vocation of the transcendental subject was to reflexively contain the human universe. However, the human universe no longer has a recognizable face. The “death of man” announced by Foucault[27] should be understood in the sense of the loss of figurability of the transcendental subject. The labyrinth of philosophies, methodologies, theories and data from the human sciences has become inextricably complicated. The transcendental subject has not only been dissolved in symbolic structures or anonymous complex systems, it is also fragmented in the broken mirror of the disciplines of the human sciences. It is obvious that the technical medium of a new figure of reflexive knowledge will be the Internet, and more generally, computer science and ubiquitous communication. But how can symbol-manipulating automata be used on a large scale not only to reunify our reflexive knowledge but also to increase the clarity, precision and breadth of the teeming diversity enveloped by our knowledge? The missing link is not only technical, but also scientific. We need a science that grasps the new possibilities offered by technology in order to give collective intelligence the means to reflect itself, thus inaugurating a new form of subjectivity. As the groundwork of this new science—which I call computational semantics—IEML makes use of the self-reflexive capacity of language without excluding any of its functions, whether they be narrative, logical, pragmatic or other. Computational semantics produces a scientific image of collective intelligence: a calculated intelligence that will be able to be explored both as a simulated world and as a distributed augmented reality in physical space. 
Scientific change will generate a phenomenological change,[28] since ubiquitous multimedia interaction with a holographic image of collective intelligence will reorganize the human sensorium. The last, but not the least, change: social change. The community that possessed the previous figure of reflexive knowledge was a scientific community that was still distinct from society as a whole. But in the new figure of knowledge, reflexive collective intelligence emerges from any human group. Like the previous figures—theosophical and scientific—of reflexive knowledge, algorithmic intelligence is organized in three interdependent aspects.

  • Reflexive collective intelligence represents the living intelligence, the intellect or soul of the great future digital civilization. It may be glimpsed by deciphering the signs of its approach in contemporary reality.
  • Computational semantics holds up a technical and scientific mirror to collective intelligence, which is reflected in it. Its purpose is to augment and reflect the living intelligence of the coming civilization.
  • Calculated intelligence, finally, is none other than the scientifically knowable image of the living intelligence of digital civilization. Computational semantics constructs, maintains and cultivates this image, which is that of an ecosystem of ideas emerging from human activity in the algorithmic medium, and which can be explored in sensory-motor mode.

In short, in the emergent unity of algorithmic intelligence, computational semantics calculates the cognitive simulation that augments and reflects the collective intelligence of the coming civilization.

[1] Professor at the University of Ottawa

[2] And twenty-three years after L’idéographie dynamique (Paris: La Découverte, 1991).

[3] And before the WWW itself, which would become a public phenomenon only in 1994 with the development of the first browsers such as Mosaic. At the time when the book was being written, the Web still existed only in the mind of Tim Berners-Lee.

[4] Approximately 40% in 2014 and probably more than half in 2025.

[5] I obviously do not claim to be the only “visionary” on the subject in the early 1990s. The pioneering work of Douglas Engelbart and Ted Nelson and the predictions of Howard Rheingold, Joël de Rosnay and many others should be cited.

[6] See The basics of IEML (online at http://wp.me/P3bDiO-9V).

[7] Beyond logic and statistics.

[8] IEML is the acronym for Information Economy MetaLanguage. See La grammaire d’IEML (online at http://wp.me/P3bDiO-9V).

[9] The Semantic Sphere 1: Computation, Cognition and Information Economy (London: ISTE, 2011; New York: Wiley, 2011).

[10] More than four hundred reference books.

[11] Umberto Eco, The Search for the Perfect Language (Oxford: Blackwell, 1995).

[12] “But more madness than genius would be required for such an enterprise”: Claude Levi-Strauss, The Savage Mind (University of Chicago Press, 1966), p. 130.

[13] Which is obviously true, but which only defines the problem rather than forbidding the solution.

[14] But true universalism is all-inclusive, and our daily lives are structured according to a multitude of universal standards, from space-time coordinates to HTTP on the Web. I responded at length in The Semantic Sphere to the prejudices of extremist post-modernism against scientific universality.

[15] Which is still used by a large community. But the only thing that Esperanto and IEML have in common is the fact that they are artificial languages. They have neither the same form nor the same purpose, nor the same use, which invalidates criticisms of IEML based on the criticism of Esperanto.

[16] See IEML Grammar (online at http://wp.me/P3bDiO-9V).

[17] But, fortunately, supported by the Canada Research Chairs program and by my wife, Darcia Labrosse.

[18] Michel Serres, Hermès V. Le passage du Nord-Ouest (Paris: Minuit, 1980).

[19] The concept of episteme, which is broader than the concept of paradigm, was developed in particular by Michel Foucault in The Order of Things (New York: Pantheon, 1970) and The Archaeology of Knowledge and the Discourse on Language (New York: Pantheon, 1972).

[20] At the beginning of Book A of his Metaphysics.

[21] This is the Axial Age identified by Karl Jaspers.

[22] Book Lambda, 9

[23] In particular in Book III.

[24] See, for example, Moses Maimonides, The Guide for the Perplexed, translated into English by Michael Friedländer (New York: Cosimo Classics, 2007) (original in Arabic from the twelfth century). – Averroes (Ibn Rushd), Long Commentary on the De Anima of Aristotle, translated with introduction and notes by Richard C. Taylor (New Haven: Yale University Press, 2009) (original in Arabic from the twelfth century). – Saint Thomas Aquinas, On the Unity of the Intellect Against the Averroists (original in Latin from the thirteenth century). – Herbert A. Davidson, Alfarabi, Avicenna, and Averroes, on Intellect: Their Cosmologies, Theories of the Active Intellect, and Theories of Human Intellect (New York, Oxford: Oxford University Press, 1992). – Henri Corbin, History of Islamic Philosophy, translated by Liadain and Philip Sherrard (London: Kegan Paul, 1993). – Henri Corbin, En Islam iranien: aspects spirituels et philosophiques, 2nd ed. (Paris: Gallimard, 1978), 4 vol. – Alain de Libera, Métaphysique et noétique: Albert le Grand (Paris: Vrin, 2005).

[25] In Meditations on First Philosophy, “First Meditation.”

[26] Discourse on the Method, “Part IV.”

[27] At the end of The Order of Things (New York: Pantheon Books, 1970).

[28] See, for example, Stéphane Vial, L’être et l’écran (Paris: PUF, 2013).


Lecture at Sciences Po Paris, 2 October 2014, 5:30 pm

Here is my presentation (PDF): 2014-Master-Class

Introductory text for the lecture


Reflecting intelligence

What does philosophy teach us about the augmentation of intelligence? "Know thyself," Socrates warns us at the dawn of Greek philosophy. Beneath the multiplicity of traditions and approaches, in the East as in the West, there is one universally recommended path: for human intelligence, the surest way to progress is to reach a higher degree of reflexivity.

Now, since the beginning of the twenty-first century, we have been learning to use symbol-manipulating automata operating in a ubiquitous network. In the algorithmic medium, our personal intelligences interconnect and function as multiple, entangled collective intelligences. Since the new medium hosts a growing share of our memory and our communications, could it not serve as a scientific mirror of our collective intelligences? Nothing prevents the algorithmic medium from soon supporting an objectifiable, measurable overview of how our collective intelligences work and of how each of us participates in them. A meta-level of collective learning will then have been reached.

Indeed, we are facing problems of a scale of complexity greater than any humanity has been able to solve in the past. The collective management of the biosphere, the renewal of energy resources, the planning of the network of megacities in which we now live, and the questions related to human development (prosperity, education, health, human rights) will arise with increasing acuteness in the coming decades and centuries. The density, complexity and growing pace of our interactions demand new forms of intellectual coordination. That is why I have spent my whole life searching for the best way to use the algorithmic medium to augment our intelligence. A few of the books I have published bear witness to this quest: La Sphère sémantique. Computation, cognition, économie de l'information (2011); Qu'est-ce que le virtuel ? (1995); L'Intelligence collective (1994); De la Programmation considérée comme un des beaux-arts (1992); Les Arbres de connaissances (1992); L'Idéographie dynamique (1991); Les Technologies de l'intelligence (1990); La Machine univers. Création, cognition et culture informatique (1987)… After obtaining my Canada Research Chair in Collective Intelligence at the University of Ottawa in 2002, I was able to devote myself almost exclusively to a philosophical and scientific meditation on the best way to reflect collective intelligence with the means of communication available to us today, a meditation I began to report on in The Semantic Sphere and will deepen in L'intelligence algorithmique (forthcoming).

Developing a research program

The great evolutionary leaps, or, if one prefers, the new spaces of forms, are generated by new coding systems. Atomic coding generates molecular forms, genetic coding engenders biological forms, neuronal coding simulates phenomenal forms. Symbolic coding, finally, which is proper to humanity, frees reflexive intelligence and culture.

I find in cultural evolution the same structure as in cosmic evolution: it is progress in symbolic coding that drives the expansion of human intelligence. Indeed, our intelligence always rests on a memory, that is, on a set of ideas that have been recorded, conceptualized and symbolized. It classifies, retrieves and analyzes what it has retained by manipulating symbols. Consequently, the grip that intelligence has on data, as well as the quantity and quality of the information it can extract from them, depend first and foremost on the symbolic systems it uses. When, with the invention of writing, symbols became self-conserving, memory grew and reorganized itself, and a new type of intelligence appeared, belonging to a scribal episteme like that of pharaonic Egypt, ancient Mesopotamia or pre-Confucian China. When the written medium was perfected with paper, the alphabet and positional number notation, memory and symbolic manipulation grew in power, and the literate episteme developed in the Greek, Chinese, Roman, Arab and other empires. The automatic reproduction and diffusion of symbols, from the printing press to the electronic media, multiplied the availability of data and accelerated the exchange of ideas. Born of this mutation, typographic intelligence built the modern world, its industry, its experimental natural sciences, its nation-states and its ideologies unknown to earlier epochs. Thus, according to the power of the symbolic tools it manipulates, memory and collective intelligence evolve, passing through successive epistemes.

[Slide: Evolution of media]

The relationship between the opening of a new space of forms and the invention of a coding system is confirmed once again in the history of science. And since I am seeking an augmentation of reflexive knowledge, modern science gives me precisely the example of a community that reflects on its own intellectual operations and explicitly sets itself the problem of specifying the use it makes of its symbolic tools. Most of the great breakthroughs of modern science have been achieved by unifying a proliferation of disparate forms in a single algebraic net. In physics, the first step was taken by Galileo (1564-1642), Descartes (1596-1650), Newton (1643-1727) and Leibniz (1646-1716). In place of the closed, compartmentalized cosmos of the Aristotelian vulgate inherited from the Middle Ages, the founders of modern science built a homogeneous universe, gathered within the space of Euclidean geometry and whose movements obey the infinitesimal calculus. In the same way, the world of electromagnetic waves was mathematically unified by Maxwell (1831-1879), and that of heat, atoms and statistical probabilities by Boltzmann (1844-1906). Einstein (1879-1955) managed to unify matter, space and time in a single algebraic model. From Lavoisier (1743-1794) to Mendeleev (1834-1907), chemistry emerged from alchemy through the rationalization of its nomenclature and the discovery of conservation laws, culminating in the famous periodic table, in which about a hundred atomic elements are arranged according to a unifying model that explains and predicts their properties. By discovering a genetic code identical for all forms of life, Crick (1916-2004) and Watson (1928-) opened the way to molecular biology.

Finally, has mathematics not progressed through the discovery of new ways of coding problems and solutions? Each advance in the level of abstraction of symbolic coding opens a new field to problem solving. What previously appeared as a multitude of disparate puzzles can then be solved by uniform, simplified procedures. Such was the case with the creation of demonstrative geometry by the Greeks (between the fifth and the second century before the common era) and the formalization of logical reasoning by Aristotle (384-322 BCE). The same ascent toward generality occurred with the creation of analytic geometry by Descartes (1596-1650), and then with the discovery and progressive formalization of the notion of function. At the turn of the nineteenth and twentieth centuries, in the era of Cantor (1845-1918), Poincaré (1854-1912) and Hilbert (1862-1943), the axiomatization of mathematical theories was contemporary with the flourishing of set theory, algebraic structures and topology.

My encyclopedic odyssey has taught me this meta-evolutionary law: intellectual leaps toward higher levels of complexity rest on new coding systems. So I come to the following question: what new coding system will turn the algorithmic medium into a scientific mirror of our collective intelligence? Now, this medium is itself made up of a stack of coding systems: binary coding of numbers; digital coding of written characters, sounds and images; coding of the addresses of information on hard disks, of computers in the network, of data on the Web... The world memory is already technically unified by all these coding systems. But it is still fragmented on the semantic level. What is missing, then, is a new coding system that makes semantics as computable as numbers, sounds and images: a coding system that addresses concepts uniformly, whatever the natural languages in which they are expressed.

[Slide: The algorithmic medium]

In short, if we want to reach a reflexive collective intelligence in the algorithmic medium, we must unify digital memory with an interoperable semantic code, one that breaks down the barriers between languages, cultures and disciplines.
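
As a purely illustrative sketch, and not IEML itself, the following Python fragment shows what "addressing concepts uniformly, whatever the natural language" could look like as a data structure. The concept codes, labels and documents are invented for the example; the point is only that data tagged with language-independent codes can answer queries formulated in different languages.

```python
# Illustrative sketch only: hypothetical concept codes, not actual IEML expressions.

# A language-independent "semantic code" maps each concept to labels
# in several natural languages.
CONCEPT_LABELS = {
    "C:collective-intelligence": {"en": "collective intelligence", "fr": "intelligence collective"},
    "C:memory":                  {"en": "memory",                  "fr": "mémoire"},
    "C:algorithmic-medium":      {"en": "algorithmic medium",      "fr": "médium algorithmique"},
}

# Documents are tagged with concept codes rather than with the words of one language.
DOCUMENTS = [
    {"title": "Post A", "concepts": {"C:collective-intelligence", "C:memory"}},
    {"title": "Post B", "concepts": {"C:algorithmic-medium", "C:memory"}},
]

def search(query: str, lang: str):
    """Return documents whose concept labels in the query's language match the query."""
    query = query.lower()
    hits = {code for code, labels in CONCEPT_LABELS.items() if query in labels[lang].lower()}
    return [doc["title"] for doc in DOCUMENTS if doc["concepts"] & hits]

# The same memory answers queries in English or in French, because the tags
# are language-independent concept codes.
print(search("memory", "en"))    # ['Post A', 'Post B']
print(search("mémoire", "fr"))   # ['Post A', 'Post B']
```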

A techno-scientific survey

Now in possession of my research program, I must assess how far the contemporary algorithmic medium has advanced toward reflexive collective intelligence: we are not so far from it... Between augmented reality and virtual worlds, we communicate in a massively distributed electronic network that is spreading across the planet at an accelerating speed. Billions of users exchange messages, launch data-processing tasks and access all kinds of information by means of a lightweight tablet or a smartphone. Geolocated objects, fixed or mobile, vehicles and people report their position and automatically map their environment. All of them emit and receive flows of information; all of them call on the power of cloud computing. From the efforts of Douglas Engelbart to those of Steve Jobs, electronic computation in all its complexity has been brought within reach of ordinary human sensorimotor activity. By inventing the Web, Sir Tim Berners-Lee gathered all data into a memory addressed by the same URL system. From static text on paper, we have moved to ubiquitous hypertext. The collective writing and editing enterprise of Wikipedia, along with a multitude of other open and collaborative initiatives, has made encyclopedic knowledge, reusable open data and a host of free software tools available to everyone at no cost. From the first newsgroups to Facebook and Twitter, a new form of sociability through the network has taken hold, in which entire populations now take part. Blogs have put publishing within everyone's reach. All of this now being in place, our intelligence must take the decisive step that will allow it to master a higher level of cognitive complexity.

In Silicon Valley, the search is on for ever finer responses to users' desires, all the more so since big data analytics provide the means to draw a faithful portrait of them. But it seems unlikely to me that the incremental improvement of the services offered by the big Web companies, even guided by a good marketing strategy, will lead us spontaneously to the semantic unification of digital memory. The non-commercial "Semantic Web" enterprise promotes useful data standards (XML, RDF) and open knowledge-representation languages (such as OWL), but its many ontologies are heterogeneous and it has failed to solve the problem of semantic interoperability. Among the most advanced projects in computational intelligence, none explicitly aims at creating a new generation of symbolic tools. Some even nurture the chimera of conscious computers becoming autonomous and taking power over the planet with the complicity of post-human cyborgs...

Will light come from academic research on collective intelligence and knowledge management? Since Nonaka's pioneering work at the end of the twentieth century, we have known that sound knowledge management requires tacit knowledge to be made explicit and communicated. The experience of social media has taught us the need to link social and personal knowledge management closely together. In practice, however, knowledge management through social media necessarily involves the distributed curation of an enormous quantity of data. That is why collective curation work can only be coordinated, and data exploited efficiently, by means of a common semantic code. But no one is proposing a solution to the problem of semantic interoperability.

Will help come from the human sciences, by way of the famous digital humanities? The effort to edit corpora and make them freely accessible, to process and visualize data with big data tools, and to organize communities of researchers around this processing is commendable. I subscribe without reservation to the orientation toward free and open resources. But for the moment I see no fundamental work on the immense problems of disciplinary fragmentation, testability of hypotheses and theoretical hyper-locality that prevent the human sciences from emerging from their epistemological Middle Ages. Here again there is no theory of cognition, or of social cognition, capable of coordinating research as a whole, no interoperable system of semantic categorization in sight, and few practical undertakings to put scientific inquiry about the human back into the hands of communities themselves. As for steering technical evolution according to the needs of renewed human sciences, the question does not even seem to arise. What remains, finally, is the critical posture, such as that displayed, for example, by Evgeny Morozov in the United States and by others in Europe or elsewhere. But while the denunciations of the greed of the big Silicon Valley companies, and of the simplistic, even derisory, political, social and cultural conceptions of the naive champions of the algorithm, often hit their mark, one would search in vain among the denouncers for the slightest beginning of a concrete proposal.

In conclusion, I see around me no serious plan for putting the computational power and the torrents of data of the algorithmic medium at the service of a new form of reflexive intelligence. My conviction is drawn from a long study of the problem to be solved. As for my temporary solitude in 2014, as I write these lines, I explain it by the fact that no one else has devoted more than fifteen years full time to solving the problem of semantic interoperability. I console myself with the admirable example of Douglas Engelbart. This visionary invented sensorimotor interfaces and collaborative software at a time when all funding was going to artificial intelligence. It was only many years after he set out his vision of the future in the 1960s that he was followed by industry and by the mass of users, from the end of the 1980s onward. His vision was not only technical. He called on us to cross a decisive threshold in the augmentation of collective intelligence in order to meet the increasingly pressing challenges that still face our species today. I am continuing his work. Having begun to tame automatic computation through our sensorimotor interactions with hypertexts, we must now explicitly use the algorithmic medium as a cognitive extension. My research has strengthened my conviction that no technical solution that ignores the complexity of human cognition will bring us safely to port. We will only obtain an expanded intelligence with a clear theory of cognition and a deep understanding of the springs of the anthropological mutation to come. Finally, on the technical level, gathering the collective wisdom of humanity requires a semantic unification of its memory. It is with all these requirements in mind that I designed and built IEML, the common tool of a new intellectual power and the origin of a scientific revolution.

The springs of a scientific revolution

Carrying out my research program will be no less complex or ambitious than other great scientific and technical projects, such as those that led us to walk on the Moon or to decipher the human genome. This great undertaking will mobilize vast networks of researchers in the human sciences, linguistics and computer science. I have already gathered a small group of engineers and translators at my Research Chair at the University of Ottawa. With the means of a university laboratory in the human sciences, I found the code I was looking for, and I have foreseen how its use will lead to a reflexive collective intelligence.

I was determined not to fall into the trap of superficially adapting some symbolic system of the typographic episteme to the new medium, in the manner of the first railway cars, which looked like stagecoaches. On the contrary, I was convinced that we could only move to a new episteme by means of a symbolic system designed from the outset to unify and exploit the power of the algorithmic medium.


Here, in summary, are the main steps of my reasoning. First, how could I actually augment collective intelligence without scientific knowledge of it? What I need, then, is a science of collective intelligence. I then take one more step in the search for conditions. A science of collective intelligence necessarily presupposes a science of cognition in general, because the collective dimension is only one aspect of human cognition. So I need a science of cognition. But how can human cognition, its culture and its ideas be rigorously modeled without first modeling language, which is one of its essential components? Since the human being is a speaking animal, that is, a specialist in symbol manipulation, a scientific model of cognition must necessarily contain a model of language. Finally, one last stroke of the pick before reaching bedrock: does a science of language not require a scientific language? Wanting a computational science of language without having a mathematical language amounts to claiming to measure lengths without units or instruments. Before building IEML, all I had was an algebraic modeling of syntax: Chomskyan theory and its variants do not extend to semantics. Linguistics gives me precise descriptions of natural languages in all their aspects, including semantics, but it does not provide universal algebraic models. This, I believe, explains the difficulties of machine translation from the 1950s to the present day.

Because the metalanguage IEML provides an algebraic coding of semantics, it makes possible a mathematical modeling of language and cognition, and thus, in the end, opens to our collective intelligence the immense benefit of reflexivity.

IEML, symbolic tool of the new episteme

If I am to help augment human intelligence, our intelligence, I must first understand the conditions under which it functions. To summarize in a few words what many years of research have taught me: intelligence depends above all on symbol manipulation. Just as our hands control tools that augment our material power, it is thanks to its capacity for symbol manipulation that our cognition attains reflexive intelligence. The human organism has the same structure everywhere, but its grip on its physical and biological environment varies according to the techniques it deploys. In the same way, cognition has an invariant functional structure, innate to human beings, but it wields symbolic tools whose power grows as they evolve: writing, printing, electronic media, computers... Intelligence commands its symbolic tools through its ideas and concepts, just as the head commands material tools through the arm and the hand. Symbols, in turn, supply their power to intellectual processes. The strength and subtlety that symbols confer on conceptualization are passed on to ideas and, from there, to communication and memory, ultimately sustaining the capacities of intelligence.

I therefore built the new tool so that it draws the maximum from the new power offered by the global algorithmic medium. IEML is neither a classification system, nor an ontology, nor even a universal super-ontology, but a language. Like any language, IEML weaves together a syntax, a semantics and a pragmatics. But it is an artificial language: its syntax is computable, its semantics translates natural languages, and its pragmatics programs ecosystems of ideas. The syntax, semantics and pragmatics of IEML function interdependently. From the syntactic point of view, the algebra of IEML governs a topology of relations; as a result, the linguistic connections between texts and dynamic hypertexts are computed automatically. From the semantic point of view, a code, that is, a writing system, a grammar and a multilingual dictionary, gives meaning to the algebra. Each variable of the algebra thereby becomes a node of inter-translation between natural languages. Users can then communicate in IEML while using the natural language or languages of their choice. Finally, from the pragmatic point of view, IEML governs the simulation of ecosystems of ideas. Data categorized in IEML organize themselves automatically into dynamic, explorable, self-explanatory hypertexts. In practice, IEML thus functions as a tool for the distributed programming of a global cognitive simulation.
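
To make the two central ideas of this paragraph concrete, here is a minimal toy sketch in Python. It is not the IEML grammar or algebra: the primitives, labels and "shared-symbol" relation are invented for the example. It only illustrates how a multilingual dictionary can turn each symbol into a node of inter-translation, and how relations between texts can be computed automatically from their form.

```python
# Toy illustration only: invented primitives and an invented relation,
# not the actual IEML grammar, dictionary or algebra.

from itertools import combinations

# A tiny multilingual dictionary: each primitive symbol is a node of
# inter-translation between natural languages.
DICTIONARY = {
    "S": {"en": "to know",  "fr": "savoir"},
    "B": {"en": "memory",   "fr": "mémoire"},
    "T": {"en": "together", "fr": "ensemble"},
}

# "Texts" are reduced here to sequences of primitive symbols.
TEXTS = {
    "t1": ["S", "B"],
    "t2": ["B", "T"],
    "t3": ["S", "T"],
}

def gloss(text_id: str, lang: str) -> str:
    """Read a text back in the chosen natural language via the dictionary."""
    return " / ".join(DICTIONARY[sym][lang] for sym in TEXTS[text_id])

def relations():
    """Link two texts whenever they share a primitive symbol."""
    links = []
    for a, b in combinations(TEXTS, 2):
        shared = set(TEXTS[a]) & set(TEXTS[b])
        if shared:
            links.append((a, b, shared))
    return links

print(gloss("t1", "en"))   # to know / memory
print(gloss("t1", "fr"))   # savoir / mémoire
print(relations())         # each pair of texts, with the symbols they share
```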

The algorithmic future of intelligence

Once it has taken this new symbolic tool in hand, our species will leave behind the typographic episteme it has assimilated and assumed, and enter the vast field of algorithmic intelligence. A new memory will take in torrents of data from billions of sources and automatically transform the flood of information into self-organizing dynamic hypertexts. Whereas Wikipedia retains a categorization system inherited from the typographic episteme, a perspectivist encyclopedic library will open itself to every possible classification system. By organizing themselves according to the points of view adopted by their explorers, data categorized in IEML will reflect the multipolar functioning of collective intelligence.

The relations between dynamic hypertexts will be projected into a computed, multisensory fiction explorable in three dimensions. But what the new virtual worlds will simulate is a cognitive reality. Their spatio-temporality will therefore be quite different from that of the material world, since here it is the form of intelligence, and not that of ordinary physical reality, that will open itself to exploration by human sensorimotor activity.

From the collaborative curation of data, new kinds of intellectual and social games will emerge. Collectives of learning, production and action will communicate in a stigmergic mode by sculpting their common memory. In doing so, the players will construct their individual and collective identities. Their emotional tendencies and the directions of their attention will be reflected in the fluctuations and cycles of the common memory.

On the basis of new methods of semantic measurement and accounting based on IEML, the openness and transparency of knowledge-production processes will take on new momentum. The study of cognition and consciousness will have at its disposal not only a new theory but also a new instrument of observation, analysis and simulation. It will become possible to accumulate and share expertise on the cultivation of ecosystems of ideas. We will begin to ask questions about the balance, interdependence and cross-fertility of these ecosystems of ideas. What services do they render to the communities that produce them? What are their effects on human development?

The great project of a union of intelligences to which I am inviting you will not be the fruit of any military conquest, nor of the victory of a political or religious ideology over people's minds. It will result from a cognitive revolution with a techno-scientific foundation. Far from any spirit of a radical clean slate, the new episteme will preserve the concepts of the earlier epistemes. But this legacy of the past will be taken up in a new, broader context and by a more powerful intelligence.

[Image at the head of the article: "Le Miroir" by Paul Delvaux, 1936]


Here is a video that explains, in five minutes and in French, the "why" behind the invention of IEML.

To find out more, you can listen to a roughly forty-minute podcast on France Culture.

IEML (for Information Economy MetaLanguage) is an artificial language with computable semantics that places no limit on the possibilities of expressing new meanings.

Given a text in IEML, algorithms reconstruct the grammatical and semantic network internal to the text, translate this network into natural languages, and compute the semantic relations between this text and other IEML texts. The metalanguage generates an immense group of symmetric transformations between semantic networks, which can be measured and traversed at will by algorithms.
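
As before, this is a hypothetical sketch rather than the actual IEML algorithms: toy "texts" are represented as sets of invented symbols, texts are linked when they share a symbol, and a distance between texts is measured by breadth-first traversal. It is meant only to give a concrete feel for what "measuring and traversing" a network of semantic relations can mean.

```python
# Hypothetical sketch: toy texts and a shared-symbol relation,
# not the actual IEML data structures or algorithms.

from collections import deque
from itertools import combinations

TEXTS = {
    "a": {"S", "B"},
    "b": {"B", "T"},
    "c": {"T", "U"},
    "d": {"U", "A"},
}

# Build an undirected relation graph: two texts are related if they share a symbol.
GRAPH = {t: set() for t in TEXTS}
for x, y in combinations(TEXTS, 2):
    if TEXTS[x] & TEXTS[y]:
        GRAPH[x].add(y)
        GRAPH[y].add(x)

def distance(start: str, goal: str) -> int:
    """Semantic 'distance' = number of relation hops between two texts (BFS)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if node == goal:
            return hops
        for nxt in GRAPH[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return -1  # unreachable

print(distance("a", "d"))  # 3: a-b, b-c, c-d via shared symbols
```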

Used as a metadata system, the metalanguage IEML opens the way to new methods for analyzing large masses of data. In social media, it supports unprecedented forms of translinguistic hypertextual communication and allows networks of conversations to observe and improve their own collective intelligence. For researchers in the human sciences, IEML structures an open, universal encyclopedic library that reorganizes itself automatically according to the interests of its users.

Click here to get La Grammaire d’IEML (the IEML Grammar), with a table of contents, an index and internal hyperlinks.

Devoted to the IEML grammar (in French), this appendix to La sphère sémantique is essentially formal and technical in content. In particular, it demonstrates the computability of IEML and of its semantics, a computability on which the project of algorithmic intelligence rests. An English version will be published soon.

There are no practical tools yet; this is fundamental research whose technical spin-offs will only appear in a few years... Patience...

 


THE SEMANTIC SPHERE 1

Computation, Cognition and the Information Economy.

(Translated by Phyllis Aronoff and Howard Scott)

New advances in digital media offer unprecedented memory capacities, an omnipresent channel of communication, and ever-growing computational power.
We must ask ourselves how we can exploit this medium in order to augment our own social cognitive processes for human development.
Combining a profound knowledge of the humanities and social sciences with an understanding of computer science, Pierre Lévy proposes the collaborative construction of a global hypercortex, coordinated by a computable metalanguage.
By fully recognizing the symbolic and social nature of human cognition, we could transform our current, opaque, global brain into a reflexive collective intelligence.

Amazon: http://www.amazon.com/Semantic-Sphere-Computation-Cognition-Information/dp/1848212518/ref=sr_1_1?ie=UTF8&qid=1310836670&sr=8-1

Written interview in English: http://mastersofmedia.hum.uva.nl/2011/11/01/collective-intelligence-an-interv…

Video interview in English, subtitled in Portuguese, about collective intelligence and the semantic sphere: http://bit.ly/vwTUgi

Review in English, by Yair Neuman: Technology becoming an Hypercortex

Written interview in English and Spanish

More information here

CONTENT OF THE BOOK

1. General Introduction.

Part 1. A Philosophy of Information

2. The Nature of Information.
3. The Symbolic Cognition.
4. The Creative Conversation.
5. Toward a Mutation of Humanities and Social Sciences.
6. Information Economy.

Part 2. Cognition Modeling

7. Introduction to a Scientific Understanding of the Mind.
8. Computer Perspective: Towards a Reflexive Intelligence.
9. Overview of the Semantic Sphere IEML.
10. The Metalanguage IEML
11. The Semantic Machine IEML.
12. The Hypercortex.
13. A Hermeneutic Memory.
14. Humanistic Perspective: Towards Explicit Knowledge.
15. Observe the Collective Intelligence.
