Archives for category: Semantic Sphere

In this paper I put forward a vision for a new generation of cloud-based public communication service designed to foster reflexive collective intelligence. I begin with a description of the current situation, including the huge power and social shortcomings of platforms like Google, Apple, Facebook, Amazon, Microsoft, Alibaba, Baidu, etc. Contrasting with the practice of these tech giants, I reassert the values that are direly needed at the foundation of any future global public sphere: openness, transparency and commonality. But such ethical and practical guidelines are probably not powerful enough to help us cross a threshold in collective intelligence. Only a disruptive innovation in cognitive computing will do the trick. That’s why I introduce “deep meaning”, a new research program in artificial intelligence based on the Information Economy MetaLanguage (IEML). I conclude this paper by evoking possible bootstrapping scenarios for the new public platform.

The rise of platforms

At the end of the 20th century, one percent of the human population was connected to the Internet. In 2017, more than half of the population is connected. Most users interact in social media, search for information, and buy products and services online. But despite the ongoing success of digital communication, there is a growing dissatisfaction with the big tech companies – the “Silicon Valley” – who dominate the new communication environment.

The big techs are the most highly valued companies in the world, and the massive amount of data they possess is considered the most precious commodity of our time. Silicon Valley owns the big computers: the network of physical centers where our personal and business data are stored and processed. Their income comes from the economic exploitation of our data for marketing purposes and from their sales of hardware, software and services. But they also derive considerable power from the knowledge of markets and public opinion that stems from their control of information.

The big cloud companies have mastered new computing techniques that mimic neurons learning a new behavior. These programs are marketed as deep learning or artificial intelligence, even though they have no cognitive autonomy and need intensive training by humans before becoming useful. Despite their well-known limitations, machine learning algorithms have effectively augmented the abilities of digital systems. Deep learning is now used in every economic sector. Chips specialized in deep learning are found in big data centers, smartphones, robots and autonomous vehicles. As Vladimir Putin rightly told young Russians in his speech for the first day of school in fall 2017: “Whoever becomes the leader in this sphere [of artificial intelligence] will become the ruler of the world”.

The tech giants control huge business ecosystems beyond their official legal borders and they can ruin or buy competitors. Unfortunately, the rivalry between the big techs prevents real interoperability between cloud services, even though such interoperability would be in the interest of the general public and of many smaller businesses. As if their technical and economic powers were not enough, the big techs are now playing on the turf of governments. Facebook vouches for our identity and notifies our family and friends that we are safe when a terrorist attack or a natural disaster occurs. Mark Zuckerberg states that one of Facebook’s missions is to ensure that the electoral process is fair and open in democratic countries. Google Earth and Google Street View are now used by several municipalities and governments as their primary source of information for cadastral plans and other geographical or geospatial services. Twitter has become an official global political, diplomatic and news service. Microsoft sells its digital infrastructure to public schools. The Kingdom of Denmark opened an official embassy in Silicon Valley. Cryptocurrencies independent of nation states (like Bitcoin) are becoming increasingly popular. Blockchain-based smart contracts (powered by Ethereum) bypass state authentication and traditional paper bureaucracies. Some traditional functions of government are being taken over by private technological ventures.

This should not come as a surprise. The practice of writing in ancient palace-temples gave birth to government as a separate entity. Alphabets and paper allowed the emergence of merchant city-states and the expansion of literate empires. The printing press, the industrial economy, motorized transportation and electronic media sustained nation-states. The digital revolution will foster new forms of government. Today, we discuss political problems in a global public space, taking advantage of the web and social media, and the majority of humans live in interconnected cities and metropolises. Each urban node wants to be an accelerator of collective intelligence, a smart city. We need to think about public services in a new way. Schools, universities, public health institutions, mail services, archives, public libraries and museums should take full advantage of the internet and de-silo their datasets. But we should go further. Are current platforms doing their best to enhance collective intelligence and human development? How about giving back to the general population the data produced in social media and other cloud services, instead of just monetizing it for marketing purposes? How about giving people access to the cognitive powers unleashed by a ubiquitous algorithmic medium?

Information wants to be open, transparent and common

We need a new kind of public sphere: a platform in the cloud where data and metadata would be our common good, dedicated to the recording and collaborative exploitation of memory in the service of our collective intelligence. The core values orienting the construction of this new public sphere should be openness, transparency and commonality.

Firstly, openness has already been tried and tested in the scientific community, the free software movement, the Creative Commons licenses, Wikipedia and many other endeavors. It has been adopted by several big industries and governments. “Open by default” will soon be the new normal. Openness is on the rise because it maximizes the improvement of goods and services, fosters trust and supports collaborative engagement. It can be applied to data formats, operating systems, abstract models, algorithms and even hardware. Openness also applies to taxonomies, ontologies, search architectures, etc. This notion may be generalized to an open creation, description and interpretation of data. A new open public space should encourage all participants to create, comment on, categorize, assess and analyze its content.

Secondly, transparency is the very basis of trust and the precondition of authentic dialogue. Data and people (including the administrators of a platform) should be traceable and auditable. Transparency should be reciprocal, without distinction between rulers and ruled. Such transparency will ultimately be the basis of reflexive collective intelligence, allowing teams and communities of any size to observe and compare their cognitive activity.

Thirdly, commonality means that people will not have to pay to get access to the new public sphere: everything will be free and public property. Commonality also means transversality: de-siloing and cross-pollination. Smart communities will interconnect and recombine all kinds of useful information: open archives of libraries and museums, free academic publications, shared learning resources, knowledge management repositories, open-source intelligence datasets, news, public legal databases…

From deep learning to deep meaning

The new public platform will be based on the web and its open standards: HTTP, URLs, HTML and so on. Like all current platforms, it will take advantage of distributed computing in the cloud. It will use “deep learning”: an artificial intelligence technology that employs specialized chips and algorithms that roughly mimic the learning process of neurons. Deep learning is used by Google, Facebook, Amazon, Microsoft and by other companies specialized in data analytics. Finally, to be completely up to date, the public platform should enable blockchain-based payments, transactions, contracts and secure records.

If our public platform offers the same technologies as the big tech (cloud, deep learning, blockchain), with the sole difference of openness, transparency and commonality, it may prove insufficient to foster a swift adoption, as is demonstrated by the relative failures of Diaspora (open Facebook) and Mastodon (open Twitter). Such a project may only succeed if it has some technical advantage compared to the existing commercial platforms. Moreover, this technical advantage should have appealing political and philosophical dimensions.

The majority of us do not fancy the dream of autonomous machines, especially considering the current limitations of artificial intelligence. We want instead an artificial intelligence designed for the augmentation of personal and collective human intellect. That’s why, in addition to the current state of the art, the new platform should integrate the brand new deep meaning technology. Deep meaning will expand the actual reach of artificial intelligence, improve the user experience of big data analytics and enable the reflexivity of personal and collective intelligence.

Language as a platform

In a nutshell, deep learning models neurons and deep meaning models language. In order to augment the human intellect, we need both! Deep learning is based on the simulation of neural networks. That is enough to model animal cognition roughly (every animal species has neurons) but not enough to model human cognition. The difference between animal cognition and human reflexive thought comes from language, which adds a layer of semantic addressing on top of neuronal connectivity. Speech production and understanding are innate capacities of individual human brains. But since humanity is a social species, language works only at the social scale. Languages are conventional, shared by members of the same culture and learned by social contact. In human cognition, the categories that organize perception, action, memory and learning are expressed linguistically, so that they may be reflected upon and shared in conversations. A language works like the semantic addressing system of a social virtual database.

The problem with natural languages (English, French, Arabic, etc.) is that they are irregular and do not lend themselves easily to machine understanding or machine translation. The current trend in natural language processing (an important field of artificial intelligence) is to use statistical algorithms and deep learning methods to understand and produce linguistic data. Instead of using statistics, deep meaning adopts a regular and computable metalanguage to organize linguistic and non-linguistic data. IEML (Information Economy MetaLanguage) has been designed to optimize semantic computing. IEML words are built from six primitive symbols and two operations: addition and multiplication. The semantic relations between words follow the lines of their generative operations. Words (whose total number does not exceed 10,000) represent the conceptual building blocks of the language. From these elementary concepts, the generative grammar of IEML allows the construction of propositions at three layers of complexity: words into topics, topics into phrases (facts, events) and phrases into super-phrases (theories, narratives). The highest meaning unit, the text, is a unique set of propositions. Deep meaning technology uses IEML as the semantic addressing system of a social database.
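The generative construction just described can be sketched as a toy algebra: multiplication composes operands into a higher-layer unit, addition gathers alternatives, and semantic relations follow the generative operations. This is an illustrative model only, not the actual IEML implementation; the choice of primitive letters and the tuple encoding are assumptions.

```python
# Toy model of an IEML-like generative algebra (illustrative only;
# the real IEML grammar differs in detail).

# Six hypothetical primitive symbols.
PRIMITIVES = {"E", "U", "A", "S", "B", "T"}

def multiply(substance, attribute, mode):
    """Multiplication composes three operands into a higher-layer unit."""
    return (substance, attribute, mode)

def add(*terms):
    """Addition gathers alternative terms into a paradigm (a set)."""
    return frozenset(terms)

# A word built from primitives:
word = multiply("S", "B", "E")

# Semantic relations follow the lines of the generative operations:
# here, two words are the more closely related the more operands
# they share in the same position.
def shared_operands(w1, w2):
    return sum(a == b for a, b in zip(w1, w2))

other = multiply("S", "T", "E")
print(shared_operands(word, other))   # → 2
```

Because every word is a trace of its own construction, relations and distances between concepts can be computed rather than declared by hand.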

From an analytics angle, deep meaning allows the automatic computation of semantic relations between data and the semantic visualization of large datasets. From the point of view of interoperability, it decompartmentalizes tags, folksonomies, taxonomies, ontologies and languages. On the reflexive side, when online communities categorize, assess and exchange semantic data, they generate explorable ecosystems of ideas that represent their collective intelligence. Note that the vision of collective intelligence proposed here is opposed to the “wisdom of the crowd” model, which assumes independent agents and excludes dialogue and reflexivity. Just the opposite: deep meaning was designed from the beginning to foster dialogue and reflexivity.
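A minimal sketch of how an explorable “ecosystem of ideas” could be derived from categorized data: documents linked by shared categories form a graph that a community can navigate. The document names and plain-string tags below are hypothetical stand-ins; in deep meaning the categories would be IEML expressions whose relations are computed from the grammar rather than from literal overlap.

```python
# Sketch: deriving a graph of ideas from categorized documents.
from itertools import combinations

documents = {
    "doc1": {"collective-intelligence", "semantics"},
    "doc2": {"semantics", "deep-learning"},
    "doc3": {"deep-learning", "neural-networks"},
}

# Two documents are linked when their categories overlap;
# the edge weight counts shared concepts.
edges = {}
for a, b in combinations(documents, 2):
    shared = documents[a] & documents[b]
    if shared:
        edges[(a, b)] = len(shared)

print(edges)   # → {('doc1', 'doc2'): 1, ('doc2', 'doc3'): 1}
```

The resulting weighted graph is the kind of structure a semantic visualization could render for a community inspecting its own cognitive activity.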

The main functions of the new public sphere


In the new public sphere, every netizen has the rights of an author, an editor, an artist, a curator, a critic, a messenger, a contractor and a gamer. The next platform weaves five functions together: curation, creation, communication, transaction and immersion.

By curation I mean the collaborative creation, edition, analysis, synthesis, visualization, explanation and publication of datasets. People posting, liking and commenting on content in social media are already doing data curation, even if crudely and unknowingly. Active professionals in the fields of heritage preservation (libraries, museums), digital humanities, education, knowledge management, data-driven journalism or open-source intelligence practice data curation in a more systematic and mindful manner. The new platform offers a consistent service of collaborative data curation empowered by a common semantic addressing system.

Augmented by deep meaning, our public sphere includes a semantic metadata editor applicable to any document format. It works as a registration system for works of the mind. Communication is ensured by a global Twitter-like public posting system. But instead of current hashtags, which are mere sequences of characters, the new semantic tags translate themselves into all natural languages and interconnect by conceptual proximity. The blockchain layer allows any transaction to be recorded. The platform remunerates authors and curators in collective intelligence coins, according to the public engagement generated by their work. The new public sphere is grounded in the internet of things, smart cities, ambient intelligence and augmented reality. People control their environment and communicate with sensors, software agents and bots of all kinds in the same immersive semantic space. Virtual worlds simulate the collective intelligence of teams, networks and cities.
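A self-translating semantic tag can be sketched as a language-independent identifier mapped to renderings in each natural language, so the same concept displays differently to an English and a French reader. The identifier scheme and the tiny dictionary below are hypothetical; a real USL would be an IEML expression, not an opaque string.

```python
# Sketch of a self-translating semantic tag (hypothetical identifiers).
SEMANTIC_TAGS = {
    "usl:collective-intelligence": {
        "en": "collective intelligence",
        "fr": "intelligence collective",
    },
}

def render(tag_id, lang):
    """Display a semantic tag in the reader's natural language."""
    return SEMANTIC_TAGS[tag_id][lang]

print(render("usl:collective-intelligence", "fr"))
# → intelligence collective
```

Because the identifier, not the rendered string, is what gets posted and indexed, tags in different languages that denote the same concept automatically coincide.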


The design and prototyping of this platform were developed between 2002 and 2017 at the University of Ottawa. A prototype is currently in a pre-alpha version, featuring the curation functionality. An alpha version will be demonstrated in the summer of 2018. How do we bridge the gap from fundamental research to a full-scale industrial platform? Such an endeavor will be much less expensive than the conquest of space and could bring a tremendous augmentation of human collective intelligence. Even if the network effect obviously applies to the new public space, small communities of pioneers will benefit immediately from its early release. On the humanistic side, I have already mentioned museums and libraries, researchers in the humanities and social sciences, collaborative learning networks, data-oriented journalists, knowledge management and business intelligence professionals, etc. On the engineering side, deep meaning opens a new sub-field of artificial intelligence that will enhance current techniques of big data analytics, machine learning, natural language processing, the internet of things, augmented reality and other immersive interfaces. Because it is open source by design, the development of the new technology can be crowdsourced and shared easily among many different actors.

Let’s draw a distinction between the new public sphere, including its semantic coordinate system, and the commercial platforms that will give access to it. This distinction being made, we can imagine a consortium of big tech companies, universities and governments supporting the development of the global public service of the future. We may also imagine one of the big techs taking the lead, associating its name with the new platform and developing hardware specialized in deep meaning. Another scenario is the foundation of a company that will ensure the construction and maintenance of the new platform as a free public service while sustaining itself by offering semantic services: research, consulting, design and training. In any case, a new international school must be established around a virtual dockyard where trainees and trainers progressively build and improve the semantic coordinate system and other basic models of the new platform. Students from various organizations and backgrounds will gain experience in the field of deep meaning and will disseminate the acquired knowledge back into their communities.

What is IEML?

  • IEML (Information Economy MetaLanguage) is an open (GPL3) and free artificial metalanguage that is simultaneously a programming language, a pivot between natural languages and a semantic coordinate system. When data are categorized in IEML, the metalanguage computes their semantic relationships and distances.
  • From a “social” point of view, online communities categorizing data in IEML generate explorable ecosystems of ideas that represent their collective intelligence.
  • Github.

What problems does IEML solve?

  • Decompartmentalization of tags, folksonomies, taxonomies, ontologies and languages (French and English for now).
  • Semantic search, automatic computing and visualization of semantic relations and distances between data.
  • Giving back to the users the information that they produce, enabling reflexive collective intelligence.

Who is IEML for?

Content curators

  • knowledge management
  • marketing
  • curation of open data from museums and libraries, crowdsourced curation
  • education, collaborative learning, connectivist MOOCs
  • monitoring and intelligence

Self-organizing online communities

  • smart cities
  • collaborative teams
  • communities of practice…

Researchers and specialists
  • artificial intelligence
  • data analytics
  • humanities and social sciences, digital humanities

What motivates people to adopt IEML?

  • IEML users participate in the leading edge of digital innovation, big data analytics and collective intelligence.
  • IEML can enhance other AI techniques like machine learning, deep learning, natural language processing and rule-based inference.

IEML tools

IEML v.0

IEML v.0 includes…

  • A dictionary of concepts whose editing is restricted to specialists but whose navigation and use are open to all.
  • A library of tags – called USLs (Uniform Semantic Locators) – whose editing, navigation and use are open to all.
  • An API allowing access to the dictionary, the library and their functionalities (semantic computing).
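A sketch of how a client might consume the dictionary and library API described above. The endpoint paths, the IEML notation used in the keys and the response shape are all assumptions, not the actual IEML API; the HTTP layer is stubbed with an in-memory dict so the example stays self-contained.

```python
# Hypothetical client for the dictionary/library API (stubbed).
FAKE_API = {
    "/dictionary/S:B:E:.": {"translations": {"en": "example concept"}},
    "/library/usl-42": {"usl": "S:B:E:."},
}

def get(path):
    """Stand-in for an HTTP GET against the platform's API."""
    return FAKE_API[path]

# Resolve a library tag to its dictionary concept, then to a
# natural-language rendering.
tag = get("/library/usl-42")
concept = get(f"/dictionary/{tag['usl']}")
print(concept["translations"]["en"])   # → example concept
```

The point of the sketch is the two-step resolution: the library stores tags, the dictionary supplies their semantics and translations.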

Intlekt v.0

Intlekt v.0 is a collaborative data curation tool that allows:
– the categorization of data in IEML,
– the semantic visualization of collections of data categorized in IEML,
– the publication of these collections.

The prototype (to be released in May 2018) will be single-user, but the full-blown app will be social.

Who made it?

The IEML project is designed and led by Pierre Lévy.

It has been financed by the Canada Research Chair in Collective Intelligence at the University of Ottawa (2002-2016).

At an early stage (2004-2011), Steve Newcomb and Michel Biezunski contributed to the design and implementation (parser, dictionary). Christian Desjardins implemented a second version of the dictionary. Andrew Roczniak helped with the first mathematical formalization and implemented a second version of the parser and a third version of the dictionary (2004-2016).

The 2016 version has been implemented by Louis van Beurden, Hadrien Titeux (chief engineers), Candide Kemmler (project management, interface), Zakaria Soliman and Alice Ribaucourt.

The 2017 version (1.0) has been implemented by Louis van Beurden (chief engineer), Eric Waldman (IEML edition interface, visualization), Sylvain Aube (Drupal), Ludovic Carré and Vincent Lefoulon (collections and tags management).


Dice sculpture by Tony Cragg

Having laid out in a previous post the principles of a cartography of collective intelligence, I now turn to human development, which is its correlate: at once the condition and the effect of collective intelligence. As a first step, I will square the semiotic triad sign/being/thing (star/face/cube) to obtain the nine “becomings”, which point toward the main directions of human development.

Map of the becomings

The nine paths that lead from one of the three semiotic poles to itself or to the other two are called becomings in IEML (see the semantic map M:M:. in the IEML dictionary). A becoming can be reduced neither to its point of departure nor to its point of arrival, nor to the sum of the two: it is precisely the in-between, the metamorphosis of one into the other. Thus memory ultimately means “the becoming-thing of the sign”. Note also that each of the nine becomings can turn toward the actual as well as toward the virtual. For example, thought can take as its object sensible reality as well as its own speculations. At the other end of the spectrum, space can refer to the container of physical materiality as well as to the idealities of geometry. In the course of our exploration, we will discover that each of the nine becomings indicates a possible direction of philosophical inquiry. The nine becomings are at once conceptually distinct and really interdependent, since each of them needs the support of the others in order to unfold.


In thought – s. in IEML – both the substance (point of departure) and the attribute (point of arrival) are signs. Thought is, so to speak, the sign squared. It marks the transformation of one sign into another sign, as in deduction, induction, interpretation, imagination and so on.

The concept of thought or intellection is central to the Western idealist tradition that starts from Plato and passes notably through Aristotle, the Neoplatonists, the theologians of the Middle Ages, Kant, Hegel, and on to Husserl. Intellection is also at the heart of Islamic philosophy, both in Avicenna (Ibn Sina) and his continuators in Iranian philosophy up to the seventeenth century, and in the Andalusian Averroes (Ibn Rushd). It is central as well to most of the great philosophies of meditative India. Human existence, and philosophical existence even more so, is necessarily immersed in reflective discursive thought. Where does this thought originate? What are its structures? How can human thought be brought to its perfection? These are questions that philosophical inquiry cannot evade.


Language – b. in IEML – is understood here as a code (in the broadest sense of the term) of communication that actually functions in the human universe. Language is a “becoming-being of the sign”, a transformation of the sign into intelligence, an illumination of the subject by the sign.

Some philosophies take the problems of language and communication as their point of departure. Wittgenstein, for example, made his philosophy revolve largely around the problem of the limits of language. But note that he was also interested in questions of logic and in the problem of truth. In a different style, a philosopher like Peirce never ceased to deepen the question of meaning and of the functioning of signs. Austin explored the theme of speech acts, and so on. This becoming thus designates the semiotic (or linguistic) moment of philosophy. The human being is a speaking being whose existence can only be realized through and in language.


In memory – t. in IEML – the sign as substance is reified in its attribute, the thing. This becoming evokes the elementary gesture of inscription or recording. The becoming-thing of the sign is considered here as the condition of possibility of memory. It governs the very notion of time.

The passage of time and its inscription – memory – was one of the favorite themes of Bergson (author notably of Matter and Memory). Bergson placed the thickness of life and the evolutionary upsurge of creation on the side of memory, in opposition to the physicalist determinism of the nineteenth century (“matter”) and the logico-mathematical mechanism, which he assigned to space. A fine analysis of the passage of time and its inscription is also found in the philosophies of impermanence and karma, such as Buddhism. Evolutionism in general, whether cosmic, biological or cultural, is founded on a dialectic between the passage of time and the retention of a coded memory. Note finally that many great religious traditions are founded on sacred scriptures belonging to the same archetype of inscription. In a sense, because we are inevitably subject to temporal sequentiality, our existence is memory: the short-term memory of perception, the long-term memory of recollection and learning, the individual memory in which collective memories live on and converge.


In society – k. in IEML – a community of beings organizes itself by means of signs. We commit ourselves through promises and contracts. We obey the law. The members of a clan share the same totemic animal. We fight under the same flag. We exchange economic goods by agreeing on their value. We listen to music together and we share the same language. In all these cases, as in many others, a community of humans converges and creates a social unit by attaching itself to the same conventional signifying reality: so many ways of “making society”.

We know that sociology is an offshoot of philosophy. Even before the sociological discipline separated from the common trunk, the social moment of philosophy was illustrated by great names: Jean-Jacques Rousseau and his theory of the contract; Auguste Comte, for whom knowledge culminated in the science of societies; Karl Marx, who made class struggle the engine of history and brought economics, politics and culture in general back to “real social relations”. Durkheim, Mauss, Weber and their successors in sociology and anthropology have inquired into the mechanisms by which we “make society”. The human being is a political animal that cannot but live in society. How can we vivify philia, the bond of friendship between members of the same community? What are the true or the good societies? Spiritual, cosmopolitan, imperial, civic, national…? What are the best political regimes? So many questions that remain open.


In affect – m. in IEML – a being orients itself toward other beings, or determines its most intimate interiority. Affect is understood here as the tropism of subjectivity. Desire, love, hatred, indifference, compassion, equanimity are emotional qualities that circulate between beings.

After the poets, the devout and the actors, Freud, psychoanalysis and a good part of clinical psychology insist on the importance of affect and of the emotional functions for understanding human existence. The importance of “emotional intelligence” has been much emphasized recently. But this is nothing new. Philosophers have long inquired into love (see Plato’s Symposium) and the passions (Descartes himself wrote a treatise on the passions), even if they do not always make them the central theme of their philosophy. Existence necessarily struggles with affective problems, because no human life can escape emotions, attraction and repulsion, joy and sadness. But are the emotions legitimate expressions of our spontaneous nature, or “poisons of the mind” (to use the strong Buddhist expression) that must not be allowed to govern our existence? Or both? Many philosophical schools, in the East as in the West, have praised ataraxia, mental calm or, at the very least, the moderation of the passions. But how can we master the passions, and how can we master them without knowing them?


In the world – n. in IEML – human beings (being as substance) express themselves in their physical environment (thing as attribute). They inhabit this environment, they work it by means of tools, they name its parts and objects and attribute values to them. This is how a culturally ordered world, a cosmos, is constructed.

Nietzsche (who gave a central role to the creation of values), like anthropological thought, bases his approach principally on the concept of “world”, of a cosmos organized by human culture. The all-encompassing Indian notion of dharma ultimately refers to a transcendent cosmic order that seeks to manifest itself down to the smallest details of existence. Philosophical inquiry into justice joins this idea that human acts are in resonance or dissonance with a universal order. But what is the “way” (the Dao of Chinese philosophy) of this order? Is its universality natural or conventional? What principles does it obey?


Truth – d. in IEML – describes a “becoming-sign of the thing”. A reference (a state of things) manifests itself through a declarative message (a sign). A statement is true only if it contains a correct description of a state of things. Authenticity is said of a sign that guarantees a thing.

The logical tradition and analytic philosophy are principally interested in the concept of truth (in the sense of the accuracy of facts and reasonings) as well as in the problems linked to reference. The epistemology and cognitive sciences situated in this current place the construction of true knowledge at the foundation of their approach. But beyond these specializations, the question of truth is an obligatory passage point of philosophical inquiry. Even the most skeptical cannot renounce truth without renouncing their own skepticism. If we want to emphasize its stability and coherence, we will derive it from the laws of logic and from rigorous procedures of empirical verification. But if we want to emphasize its fragility and multiplicity, we will have it secreted by paradigms (in Kuhn’s sense), epistemes, social constructions of meaning, all varying according to times and places.


In life – f. in IEML – a substantial thing (the materiality of the body) takes on the attribute of being, with its quality of subjective interiority. Life thus evokes the physical incarnation of a sentient creature. When a living being eats and drinks, it transforms objectified entities into materials and fuel for the organic processes that support its subjectivity: the becoming-being of the thing.

The empiricists found knowledge on the senses. The phenomenologists analyze, in particular, the way things appear to us in perception. Biologism brings the functioning of the mind back to that of neurons or hormones. So many traditions and points of view which, despite their differences, converge on the human organism, its functions and its sensibility. Many great philosophers were biologists (Aristotle, Darwin) or physicians (Hippocrates, Avicenna, Maimonides…). Chinese medicine and Chinese philosophy are deeply interrelated. It is undeniable that human existence emanates from a living body and that all the events of this existence are inscribed in one way or another in this body.


In space – l. in IEML – whether concrete or abstract, a thing relates to other things and manifests itself in the universe of things. Space is a system of transformation of things. It is built of topological relations and geometric proximities, of territories, envelopes, limits and paths, of closures and passages. Space manifests, as it were, the superlative essence of the thing, just as thought manifested that of the sign and affect that of being.

On a philosophical level, geometers, topologists, atomists, materialists and physicists base their conceptions on space. As I noted above, idealist geometrism and materialist atomism converge on the founding importance of space. Atoms are in the void, that is, in space. Human existence necessarily projects itself into the spatial multitude that it constructs and inhabits: physical or imaginary geographies, urban or rural landscapes, architectures of concrete or of concepts, geometric distances or topological connections, folds and networks without end.

We can thus characterize philosophies according to the becoming or becomings they take as the starting point of their approach, or which constitute their favorite theme. The IEML becomings represent “obligatory passage points” of existence. From its alphabet onward, the metalanguage opens the semantic sphere to the expression of any philosophy whatsoever, exactly like a natural language. But it is also a philosophical language, designed to avoid cognitive blind spots and the limiting habits of thought that come from the exclusive use of a single natural language, from the practice of a single discipline become second nature, or from overly exclusive philosophical points of view. It was built precisely to foster the free exploration of every semantic direction. That is why, in IEML, each philosophy appears as a combination of partial points of view on an integral semantic sphere that can accommodate them all and interweaves them in its radical circularity.


Full citation: “The Philosophical Concept of Algorithmic Intelligence,” Spanda Journal special issue on “Collective Intelligence,” V (2), December 2014, pp. 17-25. The original text is freely available online at Spanda.

“Transcending the media, airborne machines will announce the voice of the many. Still indiscernible, cloaked in the mists of the future, bathing another humanity in its murmuring, we have a rendezvous with the over-language.” Collective Intelligence, 1994, p. xxviii.

Twenty years after Collective Intelligence

This paper was written in 2014, twenty years after L’intelligence collective [the original French edition of Collective Intelligence].[2] The main purpose of Collective Intelligence was to formulate a vision of a cultural and social evolution that would be capable of making the best use of the new possibilities opened up by digital communication. Long before the success of social networks on the Web,[3] I predicted the rise of “engineering the social bond.” Eight years before the founding of Wikipedia in 2001, I imagined an online “cosmopedia” structured in hypertext links. When the digital humanities and the social media had not even been named, I was calling for an epistemological and methodological transformation of the human sciences. But above all, at a time when less than one percent of the world’s population was connected,[4] I was predicting (along with a small minority of thinkers) that the Internet would become the centre of the global public space and the main medium of communication, in particular for the collaborative production and sharing of knowledge and the dissemination of news.[5] In spite of the considerable growth of interactive digital communication over the past twenty years, we are still far from the ideal described in Collective Intelligence. It seemed to me already in 1994 that the anthropological changes under way would take root and inaugurate a new phase in the human adventure only if we invented what I then called an “over-language.” How can communication readily reach across the multiplicity of dialects and cultures? How can we map the deluge of digital data, order it around our interests and extract knowledge from it? How can we master the waves, currents and depths of the software ocean? Collective Intelligence envisaged a symbolic system capable of harnessing the immense calculating power of the new medium and making it work for our benefit. 
But the over-language I foresaw in 1994 was still in the “indiscernible” period, shrouded in “the mists of the future.” Twenty years later, the curtain of mist has been partially pierced: the over-language now has a name, IEML (acronym for Information Economy MetaLanguage), a grammar and a dictionary.[6]

Reflexive collective intelligence

Collective intelligence drives human development, and human development supports the growth of collective intelligence. By improving collective intelligence we can place ourselves in this feedback loop and orient it in the direction of a self-organizing virtuous cycle. This is the strategic intuition that has guided my research. But how can we improve collective intelligence? In 1994, the concept of digital collective intelligence was still revolutionary. In 2014, this term is commonly used by consultants, politicians, entrepreneurs, technologists, academics and educators. Crowdsourcing has become a common practice, and knowledge management is now supported by the decentralized use of social media. The interconnection of humanity through the Internet, the development of the knowledge economy, the rush to higher education and the rise of cloud computing and big data are all indicators of an increase in our cognitive power. But we have yet to cross the threshold of reflexive collective intelligence. Just as dancers can only perfect their movements by reflecting them in a mirror, just as yogis develop awareness of their inner being only through the meditative contemplation of their own mind, collective intelligence will only be able to set out on the path of purposeful learning and thus move on to a new stage in its growth by achieving reflexivity. It will therefore need to acquire a mirror that allows it to observe its own cognitive processes. Be careful! Collective intelligence does not and will not have autonomous consciousness: when I talk about reflexive collective intelligence, I mean that human individuals will have a clearer and better-shared knowledge than they have today of the collective intelligence in which they participate, a knowledge based on transparent principles and perfectible scientific methods.

The key: A complete modelling of language

But how can a mirror of collective intelligence be constructed? It is clear that the context of reflection will be the algorithmic medium or, to put it another way, the Internet, the calculating power of cloud computing, ubiquitous communication and distributed interactive mobile interfaces. Since we can only reflect collective intelligence in the algorithmic medium, we must yield to the nature of that medium and have a calculable model of our intelligence, a model that will be fed by the flows of digital data from our activities. In short, we need a mathematical (with calculable models) and empirical (based on data) science of collective intelligence. But, once again, is such a science possible? Since humanity is a species that is highly social, its intelligence is intrinsically social, or collective. If we had a mathematical and empirical science of human intelligence in general, we could no doubt derive a science of collective intelligence from it. This leads us to a major problem that has been investigated in the social sciences, the human sciences, the cognitive sciences and artificial intelligence since the twentieth century: is a mathematized science of human intelligence possible? It is language or, to put it another way, symbolic manipulation that distinguishes human cognition. We use language to categorize sensory data, to organize our memory, to think, to communicate, to carry out social actions, etc. My research has led me to the conclusion that a science of human intelligence is indeed possible, but on the condition that we solve the problem of the mathematical modelling of language. 
I am speaking here of a complete scientific modelling of language, one that would not be limited to the purely logical and syntactic aspects or to statistical correlations of corpora of texts, but would be capable of expressing semantic relationships formed between units of meaning, and doing so in an algebraic, generative mode.[7] Convinced that an algebraic model of semantics was the key to a science of intelligence, I focused my efforts on discovering such a model; the result was the invention of IEML.[8] IEML—an artificial language with calculable semantics—is the intellectual technology that will make it possible to find answers to all the above-mentioned questions. We now have a complete scientific modelling of language, including its semantic aspects. Thus, a science of human intelligence is now possible. It follows, then, that a mathematical and empirical science of collective intelligence is possible. Consequently, a reflexive collective intelligence is in turn possible. This means that the acceleration of human development is within our reach.

The scientific file: The Semantic Sphere

I have written two volumes on my project of developing the scientific framework for a reflexive collective intelligence, and I am currently writing the third. This trilogy can be read as the story of a voyage of discovery. The first volume, The Semantic Sphere 1 (2011),[9] provides the justification for my undertaking. It contains the statement of my aims, a brief intellectual autobiography and, above all, a detailed dialogue with my contemporaries and my predecessors. With a substantial bibliography,[10] that volume presents the main themes of my intellectual process, compares my thoughts with those of the philosophical and scientific tradition, engages in conversation with the research community, and finally, describes the technical, epistemological and cultural context that motivated my research. Why write more than four hundred pages to justify a program of scientific research? For one very simple reason: no one in the contemporary scientific community thought that my research program had any chance of success. What is important in computer science and artificial intelligence is logic, formal syntax, statistics and biological models. Engineers generally view social sciences such as sociology or anthropology as nothing but auxiliary disciplines limited to cosmetic functions: for example, the analysis of usage or the experience of users. In the human sciences, the situation is even more difficult. All those who have tried to mathematize language, from Leibniz to Chomsky, to mention only the greatest, have failed, achieving only partial results. Worse yet, the greatest masters, those from whom I have learned so much, from the semiologist Umberto Eco[11] to the anthropologist Lévi-Strauss,[12] have stated categorically that the mathematization of language and the human sciences is impracticable, impossible, utopian.
The path I wanted to follow was forbidden not only by the habits of engineers and the major authorities in the human sciences but also by the nearly universal view that “meaning depends on context,”[13] unscrupulously confusing mathematization and quantification, denouncing on principle, in a “knee jerk” reaction, the “ethnocentric bias” of any universalist approach[14] and recalling the “failure” of Esperanto.[15] I have even heard some of the most agnostic speak of the curse of Babel. It is therefore not surprising that I want to make a strong case in defending the scientific nature of my undertaking: all explorers have returned empty-handed from this voyage toward mathematical language, if they returned at all.

The metalanguage: IEML

But one cannot go on forever announcing one’s departure on a voyage: one must set forth, navigate . . . and return. The second volume of my trilogy, La grammaire d’IEML,[16] contains the very technical account of my journey from algebra to language. In it, I explain how to construct sentences and texts in IEML, with many examples. But that 150-page book also contains 52 very dense pages of algorithms and mathematics that show in detail how the internal semantic networks of that artificial language can be calculated and translated automatically into natural languages. To connect a mathematical syntax to a semantics in natural languages, I had to, almost single-handed,[17] face storms on uncharted seas, to advance across the desert with no certainty that fertile land would be found beyond the horizon, to wander for twenty years in the convoluted labyrinth of meaning. But by gradually joining sign, being and thing in turn in the sense of the virtual and actual, I finally had my Ariadne’s thread, and I made a map of the labyrinth, a complicated map of the metalanguage, that “Northwest Passage”[18] where the waters of the exact sciences and the human sciences converged. I had set my course in a direction no one considered worthy of serious exploration since the crossing was thought impossible. But, against all expectations, my journey reached its goal. The IEML Grammar is the scientific proof of this. The mathematization of language is indeed possible, since here is a mathematical metalanguage. What is it exactly? IEML is an artificial language with calculable semantics that puts no limits on the possibilities for the expression of new meanings. Given a text in IEML, algorithms reconstitute the internal grammatical and semantic network of the text, translate that network into natural languages and calculate the semantic relationships between that text and the other texts in IEML. 
The metalanguage generates a huge group of symmetric transformations between semantic networks, which can be measured and navigated at will using algorithms. The IEML Grammar demonstrates the calculability of the semantic networks and presents the algorithmic workings of the metalanguage in detail. Used as a system of semantic metadata, IEML opens the way to new methods for analyzing large masses of data. It will be able to support new forms of translinguistic hypertextual communication in social media, and will make it possible for conversation networks to observe and perfect their own collective intelligence. For researchers in the human sciences, IEML will structure an open, universal encyclopedic library of multimedia data that reorganizes itself automatically around subjects and the interests of its users.
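The pipeline just described (derive a text’s internal semantic network, then compute semantic relationships between networks) can be illustrated schematically. The following Python sketch is purely illustrative and is not the IEML algorithm: the toy parser `parse_semantic_network`, its adjacency-based “relations,” and the Jaccard overlap measure are all my own assumptions, standing in for the far richer grammar-driven machinery the IEML Grammar actually specifies.

```python
# Toy stand-in for the idea of representing texts as semantic networks
# and measuring relationships between them. Hypothetical throughout:
# a real IEML parser derives networks from the metalanguage's grammar,
# not from word adjacency.

def parse_semantic_network(text):
    """Represent a text as a set of (node, relation, node) triples.
    Here the only 'relation' is adjacency between successive units."""
    words = text.lower().split()
    return {(a, "next", b) for a, b in zip(words, words[1:])}

def semantic_similarity(net_a, net_b):
    """Jaccard overlap of two relation sets (an assumed metric)."""
    if not net_a and not net_b:
        return 1.0
    return len(net_a & net_b) / len(net_a | net_b)

n1 = parse_semantic_network("collective intelligence reflects itself")
n2 = parse_semantic_network("collective intelligence observes itself")
score = semantic_similarity(n1, n2)  # shared triple: (collective, next, intelligence)
```

The point of the sketch is only the shape of the computation: once a text is reduced to a calculable network, comparison, measurement and navigation become algorithmic operations rather than acts of interpretation.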

A new frontier: Algorithmic Intelligence

Having mapped the path I discovered in La grammaire d’IEML, I will now relate what I saw at the end of my journey, on the other side of the supposedly impassable territory: the new horizons of the mind that algorithmic intelligence illuminates. Because IEML is obviously not an end in itself. It is only the necessary means for the coming great digital civilization to enable the sun of human knowledge to shine more brightly. I am talking here about a future (but not so distant) state of intelligence, a state in which capacities for reflection, creation, communication, collaboration, learning, and analysis and synthesis of data will be infinitely more powerful and better distributed than they are today. With the concept of Algorithmic Intelligence, I have completed the risky work of prediction and cultural creation I undertook with Collective Intelligence twenty years ago. The contemporary algorithmic medium is already characterized by digitization of data, automated data processing in huge industrial computing centres, interactive mobile interfaces broadly distributed among the population and ubiquitous communication. We can make this the medium of a new type of knowledge—a new episteme[19]—by adding a system of semantic metadata based on IEML. The purpose of this paper is precisely to lay the philosophical and historical groundwork for this new type of knowledge.

Philosophical genealogy of algorithmic intelligence

The three ages of reflexive knowledge

Since my project here involves a reflexive collective intelligence, I would like to place the theme of reflexive knowledge in its historical and philosophical context. As a first approximation, reflexive knowledge may be defined as knowledge knowing itself. “All men by nature desire to know,” wrote Aristotle, and this knowledge implies knowledge of the self.[20] Human beings have no doubt been speculating about the forms and sources of their own knowledge since the dawn of consciousness. But the reflexivity of knowledge took a decisive step around the middle of the first millennium BCE,[21] during the period when the Buddha, Confucius, the Hebrew prophets, Socrates and Zoroaster (in alphabetical order) lived. These teachers involved the entire human race in their investigations: they reflected consciousness from a universal perspective. This first great type of systematic research on knowledge, whether philosophical or religious, almost always involved a divine ideal, or at least a certain “relation to Heaven.” Thus we may speak of a theosophical age of reflexive knowledge. I will examine the Aristotelian lineage of this theosophical consciousness, which culminated in the concept of the agent intellect. Starting in the sixteenth century in Europe—and spreading throughout the world with the rise of modernity—there was a second age of reflection on knowledge, which maintained the universal perspective of the previous period but abandoned the reference to Heaven and confined itself to human knowledge, with its recognized limits but also its rational ideal of perfectibility. This was the second age, the scientific age, of reflexive knowledge. Here, the investigation follows two intertwined paths: one path focusing on what makes knowledge possible, the other on what limits it. In both cases, knowledge must define its transcendental subject, that is, it must discover its own determinations. 
There are many signs in 2014 indicating that in the twenty-first century—around the point where half of humanity is connected to the Internet—we will experience a third stage of reflexive knowledge. This “version 3.0” will maintain the two previous versions’ ideals of universality and scientific perfectibility but will be based on the intensive use of technology to systematically augment and reflect our collective intelligence, and therefore our capacities for personal and social learning. This is the coming technological age of reflexive knowledge with its ideal of an algorithmic intelligence. The brief history of these three modalities—theosophical, scientific and technological—of reflexive knowledge can be read as a philosophical genealogy of algorithmic intelligence.

The theosophical age and its agent intellect

A few generations earlier, Socrates might have been a priest in the circle around the Pythia; he had taken the famous maxim “Know thyself” from the Temple of Apollo at Delphi. But in the fifth century BCE in Athens, Socrates extended the Delphic injunction in an unexpected way, introducing dialectical inquiry. He asked his contemporaries: What do you think? Are you consistent? Can you justify what you are saying about courage, justice or love? Could you repeat it seriously in front of a little group of intelligent or curious citizens? He thus opened the door to a new way of knowing one’s own knowledge, a rational expansion of consciousness of self. His main disciple, Plato, followed this path of rigorous questioning of the unthinking categorization of reality, and finally discovered the world of Ideas. Ideas for Plato are intellectual forms that, unlike the phenomena they categorize, do not belong to the world of Becoming. These intelligible forms are the original essences, archetypes beyond reality, which project into phenomenal time and space all those things that seem to us to be truly real because they are tangible, but that are actually only pale copies of the Ideas. We would say today that our experience is mainly determined by our way of categorizing it. Plato taught that humanity can only know itself as an intelligent species by going back to the world of Ideas and coming into contact with what explains and motivates its own knowledge. Aristotle, who was Plato’s student and Alexander the Great’s tutor, created a grand encyclopedic synthesis that would be used as a model for eighteen centuries in a multitude of cultures. In it, he integrates Plato’s discovery of Ideas with the sum of knowledge of his time. He places at the top of his hierarchical cosmos divine thought knowing itself. 
And in his Metaphysics,[22] he defines the divinity as “thought thinking itself.” This supreme self-reflexive thought was for him the “prime mover” that inspires the eternal movement of the cosmos. In De Anima,[23] his book on psychology and the theory of knowledge, he states that, under the effect of an agent intellect separate from the body, the passive intellect of the individual receives intelligible forms, a little like the way the senses receive sensory forms. In thinking these intelligible forms, the passive intellect becomes one with its objects and, in so doing, knows itself. Starting from the enigmatic propositions of Aristotle’s theology and psychology, a whole lineage of Peripatetic and Neo-Platonic philosophers—first “pagans,” then Muslims, Jews and Christians—developed the discipline of noetics, which speculates on the divine intelligence, its relation to human intelligence and the type of reflexivity characteristic of intelligence in general.[24] According to the masters of noetics, knowledge can be conceptually divided into three aspects that, in reality, are indissociable and complementary:

  • the intellect, or the knowing subject
  • the intelligence, or the operation of the subject
  • the intelligible, or what is known—or can be known—by the subject by virtue of its operation

From a theosophical perspective, everything that happens takes place in the unity of a self-reflexive divine thought, or (in the Indian tradition) in the consciousness of an omniscient Brahman or Buddha, open to infinity. In the Aristotelian tradition, Avicenna, Maimonides and Albert the Great considered that the identity of the intellect, the intelligence and the intelligible was achieved eternally in God, in the perfect reflexivity of thought thinking itself. In contrast, it was clear to our medieval theosophists that in the case of human beings, the three aspects of knowledge were neither complete nor identical. Indeed, since the passive intellect knows itself only through the intermediary of its objects, and these objects are constantly disappearing and being replaced by others, the reflexive knowledge of a finite human being can only be partial and transitory. Ultimately, human knowledge could know itself only if it simultaneously knew, completely and enduringly, all its objects. But that, obviously, is reserved only for the divinity. I should add that the “one beyond the one” of the neo-Platonist Plotinus and the transcendent deity of the Abrahamic traditions are beyond the reach of the human mind. That is why our theosophists imagined a series of mediations between transcendence and finitude. In the middle of that series, a metaphysical interface provides communication between the unimaginable and inaccessible deity and mortal humanity dispersed in time and space, whose living members can never know—or know themselves—other than partially. At this interface, we find the agent intellect, which is separate from matter in Aristotle’s psychology. The agent intellect is not limited—in the realm of time—to sending the intelligible categories that inform the human passive intellect; it also determines—in the realm of eternity—the maximum limit of what the human race can receive of the universal and perfectly reflexive knowledge of the divine. 
That is why, according to the medieval theosophists, the best a mortal intelligence can do to approach complete reflexive knowledge is to contemplate the operation in itself of the agent intellect that emanates from above and go back to the source through it. In accordance with this regulating ideal of reflexive knowledge, living humanity is structured hierarchically, because human beings are more or less turned toward the illumination of the agent intellect. At the top, prophets and theosophists receive a bright light from the agent intellect, while at the bottom, human beings turned toward coarse material appetites receive almost nothing. The influx of intellectual forms is gradually obscured as we go down the scale of degree of openness to the world above.

The scientific age and its transcendental subject

With the European Renaissance, the use of the printing press, the construction of new observation instruments, and the development of mathematics and experimental science heralded a new era. Reflection on knowledge took a critical turn with Descartes’s introduction of radical doubt and the scientific method, in accordance with the needs of educated Europe in the seventeenth century. God was still present in the Cartesian system, but He was only there, ultimately, to guarantee the validity of the efforts of human scientific thought: “God is not a deceiver.”[25] The fact remains that Cartesian philosophy rests on the self-reflexive edge, which has now moved from the divinity to the mortal human: “I think, therefore I am.”[26] In the second half of the seventeenth century, Spinoza and Leibniz received the critical scientific rationalism developed by Descartes, but they were dissatisfied with his dualism of thought (mind) and extension (matter). They therefore attempted, each in his own way, to constitute reflexive knowledge within the framework of coherent monism. For Spinoza, nature (identified with God) is a unique and infinite substance of which thought and extension are two necessary attributes among an infinity of attributes. This strict ontological monism is counterbalanced by a pluralism of expression, because the unique substance possesses an infinity of attributes, and each attribute, an infinity of modes. The summit of human freedom according to Spinoza is the intellectual love of God, that is, the most direct and intuitive possible knowledge of the necessity that moves the nature to which we belong. For Leibniz, the world is made up of monads, metaphysical entities that are closed but are capable of an inner perception in which the whole is reflected from their singular perspective. 
The consistency of this radical pluralism is ensured by the unique, infinite divine intelligence that has considered all possible worlds in order to create the best one, which corresponds to the most complex—or the richest—of the reciprocal reflections of the monads. As for human knowledge—which is necessarily finite—its perfection coincides with the clearest possible reflection of a totality that includes it but whose unity is thought only by the divine intelligence. After Leibniz and Spinoza, the eighteenth century saw the growth of scientific research, critical thought and the educational practices of the Enlightenment, in particular in France and the British Isles. The philosophy of the Enlightenment culminated with Kant, for whom the development of knowledge was now contained within the limits of human reason, without reference to the divinity, even to envelop or guarantee its reasoning. But the ideal of reflexivity and universality remained. The issue now was to acquire a “scientific” knowledge of human intelligence, which could not be done without the representation of knowledge to itself, without a model that would describe intelligence in terms of what is universal about it. This is the purpose of Kantian transcendental philosophy. Here, human intelligence, armed with its reason alone, now faces only the phenomenal world. Human intelligence and the phenomenal world presuppose each other. Intelligence is programmed to know sensory phenomena that are necessarily immersed in space and time. As for phenomena, their main dimensions (space, time, causality, etc.) correspond to ways of perceiving and understanding that are specific to human intelligence. These are forms of the transcendental subject and not intrinsic characteristics of reality. 
Since we are confined within our cognitive possibilities, it is impossible to know what things are “in themselves.” For Kant, the summit of reflexive human knowledge is in a critical awareness of the extension and the limits of our possibility of knowing. Descartes, Spinoza, Leibniz, the English and French Enlightenment, and Kant accomplished a great deal in two centuries, and paved the way for the modern philosophy of the nineteenth and twentieth centuries. A new form of reflexive knowledge grew, spread, and fragmented into the human sciences, which mushroomed with the end of the monopoly of theosophy. As this dispersion occurred, great philosophers attempted to grasp reflexive knowledge in its unity. The reflexive knowledge of the scientific era neither suppressed nor abolished reflexive knowledge of the theosophical type, but it opened up a new domain of legitimacy of knowledge, freed of the ideal of divine knowledge. This de jure separation did not prevent de facto unions, since there was no lack of religious scholars or scholarly believers. Modern scientists could be believers or non-believers. Their position in relation to the divinity was only a matter of motivation. Believers loved science because it revealed the glory of the divinity, and non-believers loved it because it explained the world without God. But neither of them used as arguments what now belonged only to their private convictions. In the human sciences, there were systematic explorations of the determinations of human existence. And since we are thinking beings, the determinations of our existence are also those of our thought. How do the technical, historical, economic, social and political conditions in which we live form, deform and set limits on our knowledge? What are the structures of our biology, our language, our symbolic systems, our communicative interactions, our psychology and our processes of subjectivation? 
Modern thought, with its scientific and critical ideal, constantly searches for the conditions and limits imposed on it, particularly those that are as yet unknown to it, that remain in the shadows of its consciousness. It seeks to discover what determines it “behind its back.” While the transcendental subject described by Kant in his Critique of Pure Reason fixed the image a great mind had of it in the late eighteenth century, modern philosophy explores a transcendental subject that is in the process of becoming, continually being re-examined and more precisely defined by the human sciences, a subject immersed in the vagaries of cultures and history, emerging from its unconscious determinations and the techno-symbolic mechanisms that drive it. I will now broadly outline the figure of the transcendental subject of the scientific era, a figure that re-examines and at the same time transforms the three complementary aspects of the agent intellect.

  • The Aristotelian intellect becomes living intelligence. This involves the effective cognitive activities of subjects, what is experienced spontaneously in time by living, mortal human beings.
  • The intelligence becomes scientific investigation. I use this term to designate all undertakings by which the living intelligence becomes scientifically intelligible, including the technical and symbolic tools, the methods and the disciplines used in those undertakings.
  • The intelligible becomes the intelligible intelligence, which is the image of the living intelligence that is produced through scientific and critical investigation.

An evolving transcendental subject emerges from this reflexive cycle in which the living intelligence contemplates its own image in the form of a scientifically intelligible intelligence. Scientific investigation here is the internal mirror of the transcendental subjectivity, the mediation through which the living intelligence observes itself. It is obviously impossible to confuse the living intelligence and its scientifically intelligible image, any more than one can confuse the map and the territory, or the experience and its description. Nor can one confuse the mirror (scientific investigation) with the being reflected in it (the living intelligence), nor with the image that appears in the mirror (the intelligible intelligence). These three aspects together form a dynamic unit that would collapse if one of them were eliminated. While the living intelligence would continue to exist without a mirror or scientific image, it would be very much diminished. It would have lost its capacity to reflect from a universal perspective. The creative paradox of the intellectual reflexivity of the scientific age may be formulated as follows. It is clear, first of all, that the living intelligence is truly transformed by scientific investigation, since the living intelligence that knows its image through a certain scientific investigation is not the same (does not have the same experience) as the one that does not know it, or that knows another image, the result of another scientific investigation. But it is just as clear, by definition, that the living intelligence reflects itself in the intelligible image presented to it through scientific knowledge. In other words, the living intelligence is equally dependent on the scientific and critical investigation that produces the intelligible image in which it is reflected. 
When we observe our physical appearance in a mirror, the image in the mirror in no way changes our physical appearance, only the mental representation we have of it. However, the living intelligence cannot discover its intelligible image without including the reflexive process itself in its experience, and without at the same time being changed. In short, a critical science that explores the limits and determinations of the knowing subject does not only reflect knowledge—it increases it. Thus the modern transcendental subject is—by its very nature—evolutionary, participating in a dynamic of growth. In line with this evolutionary view of the scientific age, which contrasts with the fixity of the previous age, the collectivity that possesses reflexive knowledge is no longer a theosophical hierarchy oriented toward the agent intellect but a republic of letters oriented toward the augmentation of human knowledge, a scientific community that is expanding demographically and is organized into academies, learned societies and universities. While the agent intellect looked out over a cosmos emanating from eternity, in analog resonance with the human microcosm, the transcendental subject explores a universe infinitely open to scientific investigation, technical mastery and political liberation.

The technological age and its algorithmic intelligence

Reflexive knowledge has, in fact, always been informed by some technology, since it cannot be exercised without symbolic tools and thus the media that support those tools. But the next age of reflexive knowledge can properly be called technological because the technical augmentation of cognition is explicitly at the centre of its project. Technology now enters the loop of reflexive consciousness as the agent of the acceleration of its own augmentation. This last point was no doubt glimpsed by a few pre–twentieth century philosophers, such as Condorcet in the eighteenth century, in his posthumous book of 1795, Sketch for a Historical Picture of the Progress of the Human Mind. But the truly technological dimension of reflexive knowledge really began to be thought about fully only in the twentieth century, with Pierre Teilhard de Chardin, Norbert Wiener and Marshall McLuhan, to whom we should also add the modest genius Douglas Engelbart. The regulating ideal of the reflexive knowledge of the theosophical age was the agent intellect, and that of the scientific-critical age was the transcendental subject. In continuity with the two preceding periods, the reflexive knowledge of the technological age will be organized around the ideal of algorithmic intelligence, which inherits from the agent intellect its universality or, in other words, its capacity to unify humanity’s reflexive knowledge. It also inherits its power to be reflected in finite intelligences. But, in contrast with the agent intellect, instead of descending from eternity, it emerges from the multitude of human actions immersed in space and time. Like the transcendental subject, algorithmic intelligence is rational, critical, scientific, purely human, evolutionary and always in a state of learning. But the vocation of the transcendental subject was to reflexively contain the human universe. However, the human universe no longer has a recognizable face. 
The “death of man” announced by Foucault[27] should be understood in the sense of the loss of figurability of the transcendental subject. The labyrinth of philosophies, methodologies, theories and data from the human sciences has become inextricably complicated. The transcendental subject has not only been dissolved in symbolic structures or anonymous complex systems, it is also fragmented in the broken mirror of the disciplines of the human sciences. It is obvious that the technical medium of a new figure of reflexive knowledge will be the Internet, and more generally, computer science and ubiquitous communication. But how can symbol-manipulating automata be used on a large scale not only to reunify our reflexive knowledge but also to increase the clarity, precision and breadth of the teeming diversity enveloped by our knowledge? The missing link is not only technical, but also scientific. We need a science that grasps the new possibilities offered by technology in order to give collective intelligence the means to reflect itself, thus inaugurating a new form of subjectivity. As the groundwork of this new science—which I call computational semantics—IEML makes use of the self-reflexive capacity of language without excluding any of its functions, whether they be narrative, logical, pragmatic or other. Computational semantics produces a scientific image of collective intelligence: a calculated intelligence that will be able to be explored both as a simulated world and as a distributed augmented reality in physical space. Scientific change will generate a phenomenological change,[28] since ubiquitous multimedia interaction with a holographic image of collective intelligence will reorganize the human sensorium. The last, but not the least, change: social change. The community that possessed the previous figure of reflexive knowledge was a scientific community that was still distinct from society as a whole. 
But in the new figure of knowledge, reflexive collective intelligence emerges from any human group. Like the previous figures—theosophical and scientific—of reflexive knowledge, algorithmic intelligence is organized in three interdependent aspects.

  • Reflexive collective intelligence represents the living intelligence, the intellect or soul of the great future digital civilization. It may be glimpsed by deciphering the signs of its approach in contemporary reality.
  • Computational semantics holds up a technical and scientific mirror to collective intelligence, which is reflected in it. Its purpose is to augment and reflect the living intelligence of the coming civilization.
  • Calculated intelligence, finally, is none other than the scientifically knowable image of the living intelligence of digital civilization. Computational semantics constructs, maintains and cultivates this image, which is that of an ecosystem of ideas emerging from human activity in the algorithmic medium, an ecosystem that can be explored in sensory-motor mode.

In short, in the emergent unity of algorithmic intelligence, computational semantics calculates the cognitive simulation that augments and reflects the collective intelligence of the coming civilization.

[1] Professor at the University of Ottawa

[2] And twenty-three years after L’idéographie dynamique (Paris: La Découverte, 1991).

[3] And before the WWW itself, which would become a public phenomenon only in 1994 with the development of the first browsers such as Mosaic. At the time when the book was being written, the Web still existed only in the mind of Tim Berners-Lee.

[4] Approximately 40% in 2014 and probably more than half in 2025.

[5] I obviously do not claim to be the only “visionary” on the subject in the early 1990s. The pioneering work of Douglas Engelbart and Ted Nelson and the predictions of Howard Rheingold, Joël de Rosnay and many others should be cited.

[6] See The Basics of IEML (online at: )

[7] Beyond logic and statistics.

[8] IEML is the acronym for Information Economy MetaLanguage. See La grammaire d’IEML (online).

[9] The Semantic Sphere 1: Computation, Cognition and Information Economy (London: ISTE, 2011; New York: Wiley, 2011).

[10] More than four hundred reference books.

[11] Umberto Eco, The Search for the Perfect Language (Oxford: Blackwell, 1995).

[12] “But more madness than genius would be required for such an enterprise”: Claude Levi-Strauss, The Savage Mind (University of Chicago Press, 1966), p. 130.

[13] Which is obviously true, but which only defines the problem rather than forbidding the solution.

[14] But true universalism is all-inclusive, and our daily lives are structured according to a multitude of universal standards, from space-time coordinates to HTTP on the Web. I responded at length in The Semantic Sphere to the prejudices of extremist post-modernism against scientific universality.

[15] Which is still used by a large community. But the only thing that Esperanto and IEML have in common is the fact that they are artificial languages. They have neither the same form nor the same purpose, nor the same use, which invalidates criticisms of IEML based on the criticism of Esperanto.

[16] See IEML Grammar (online).

[17] But, fortunately, supported by the Canada Research Chairs program and by my wife, Darcia Labrosse.

[18] Michel Serres, Hermès V. Le passage du Nord-Ouest (Paris: Minuit, 1980).

[19] The concept of episteme, which is broader than the concept of paradigm, was developed in particular by Michel Foucault in The Order of Things (New York: Pantheon, 1970) and The Archaeology of Knowledge and the Discourse on Language (New York: Pantheon, 1972).

[20] At the beginning of Book A of his Metaphysics.

[21] This is the Axial Age identified by Karl Jaspers.

[22] Book Lambda, 9

[23] In particular in Book III.

[24] See, for example, Moses Maimonides, The Guide For the Perplexed, translated into English by Michael Friedländer (New York: Cosimo Classic, 2007) (original in Arabic from the twelfth century). – Averroes (Ibn Rushd), Long Commentary on the De Anima of Aristotle, translated with introduction and notes by Richard C. Taylor (New Haven: Yale University Press, 2009) (original in Arabic from the twelfth century). – Saint Thomas Aquinas, On the Unity of the Intellect Against the Averroists (original in Latin from the thirteenth century). – Herbert A. Davidson, Alfarabi, Avicenna, and Averroes, on Intellect: Their Cosmologies, Theories of the Active Intellect, and Theories of Human Intellect (New York, Oxford: Oxford University Press, 1992). – Henri Corbin, History of Islamic Philosophy, translated by Liadain and Philip Sherrard (London: Kegan Paul, 1993). – Henri Corbin, En Islam iranien: aspects spirituels et philosophiques, 2d ed. (Paris: Gallimard, 1978), 4 vol. – Alain de Libera, Métaphysique et noétique: Albert le Grand (Paris: Vrin, 2005).

[25] In Meditations on First Philosophy, “First Meditation.”

[26] Discourse on the Method, “Part IV.”

[27] At the end of The Order of Things (New York: Pantheon Books, 1970).

[28] See, for example, Stéphane Vial, L’être et l’écran (Paris: PUF, 2013).


Originally published by the CCCB Lab as an interview with Sandra Alvaro.

Pierre Lévy is a philosopher and a pioneer in the study of the impact of the Internet on human knowledge and culture. In Collective Intelligence. Mankind’s Emerging World in Cyberspace, published in French in 1994 (English translation in 1999), he describes a kind of collective intelligence that extends everywhere and is constantly evaluated and coordinated in real time, a collective human intelligence, augmented by new information technologies and the Internet. Since then, he has been working on a major undertaking: the creation of IEML (Information Economy Meta Language), a tool for the augmentation of collective intelligence by means of the algorithmic medium. IEML, which already has its own grammar, is a metalanguage that includes the semantic dimension, making it computable. This in turn allows a reflexive representation of collective intelligence processes.

In the book Semantic Sphere I. Computation, Cognition, and Information Economy, Pierre Lévy describes IEML as a new tool that works with the ocean of data of participatory digital memory, which is common to all humanity, and systematically turns it into knowledge. A system for encoding meaning that adds transparency, interoperability and computability to the operations that take place in digital memory.

By formalising meaning, this metalanguage adds a human dimension to the analysis and exploitation of the data deluge that is the backdrop of our lives in the digital society. And it also offers a new standard for the human sciences with the potential to accommodate maximum diversity and interoperability.

In “The Technologies of Intelligence” and “Collective Intelligence”, you argue that the Internet and related media are new intelligence technologies that augment the intellectual processes of human beings. And that they create a new space of collaboratively produced, dynamic, quantitative knowledge. What are the characteristics of this augmented collective intelligence?

The first thing to understand is that collective intelligence already exists. It is not something that has to be built. Collective intelligence exists at the level of animal societies: it exists in all animal societies, especially insect societies and mammal societies, and of course the human species is a marvellous example of collective intelligence. In addition to the means of communication used by animals, human beings also use language, technology, complex social institutions and so on, which, taken together, create culture. Bees have collective intelligence but without this cultural dimension. In addition, human beings have personal reflexive intelligence that augments the capacity of global collective intelligence. This is not true for animals but only for humans.

Now the point is to augment human collective intelligence. The main way to achieve this is by means of media and symbolic systems. Human collective intelligence is based on language and technology and we can act on these in order to augment it. The first leap forward in the augmentation of human collective intelligence was the invention of writing. Then we invented more complex, subtle and efficient media like paper, the alphabet and positional systems to represent numbers using ten numerals including zero. All of these things led to a considerable increase in collective intelligence. Then there was the invention of the printing press and electronic media. Now we are in a new stage of the augmentation of human collective intelligence: the digital or – as I call it – algorithmic stage. Our new technical structure has given us ubiquitous communication, interconnection of information, and – most importantly – automata that are able to transform symbols. With these three elements we have an extraordinary opportunity to augment human collective intelligence.

You have suggested that there are three stages in the progress of the algorithmic medium prior to the semantic sphere: the addressing of information in the memory of computers (operating systems), the addressing of computers on the Internet, and finally the Web – the addressing of all data within a global network, where all information can be considered part of an interconnected whole. This externalisation of the collective human memory and intellectual processes has increased individual autonomy and the self-organisation of human communities. How has this led to a global, hypermediated public sphere and to the democratisation of knowledge?

This democratisation of knowledge is already happening. If you have ubiquitous communication, it means that you have access to any kind of information almost for free: the best example is Wikipedia. We can also speak about blogs, social media, and the growing open data movement. When you have access to all this information, when you can participate in social networks that support collaborative learning, and when you have algorithms at your fingertips that can help you to do a lot of things, there is a genuine augmentation of collective human intelligence, an augmentation that implies the democratisation of knowledge.

What role do cultural institutions play in this democratisation of knowledge?

Cultural institutions are publishing data in an open way; they are participating in broad conversations on social media, taking advantage of the possibilities of crowdsourcing, and so on. They also have the opportunity to develop an open, bottom-up knowledge management strategy.


A Model of Collective Intelligence in the Service of Human Development (Pierre Lévy, in The Semantic Sphere, 2011). S = sign, B = being, T = thing.

We are now in the midst of what the media have branded the ‘big data’ phenomenon. Our species is producing and storing data in volumes that surpass our powers of perception and analysis. How is this phenomenon connected to the algorithmic medium?

First, let’s say that what is happening now, the availability of big flows of data, is just an actualisation of the Internet’s potential. It was always there. It is just that we now have more data, and more people are able to get at this data and analyse it. There has been a huge increase in the amount of information generated from the second half of the twentieth century to the beginning of the twenty-first. At the beginning only a few people used the Internet, and now almost half of the human population is connected.

At first the Internet was a way to send and receive messages. We were happy because we could send messages to the whole planet and receive messages from the entire planet. But the biggest potential of the algorithmic medium is not the transmission of information: it is the automatic transformation of data (through software).

We could say that the big data available on the Internet is currently analysed, transformed and exploited by big governments, big scientific laboratories and big corporations. That’s what we call big data today. In the future there will be a democratisation of the processing of big data. It will be a new revolution. If you think about the situation of computers in the early days, only big companies, big governments and big laboratories had access to computing power. But nowadays we have the revolution of social computing and decentralized communication by means of the Internet. I look forward to the same kind of revolution regarding the processing and analysis of big data.

Communications giants like Google and Facebook are promoting the use of artificial intelligence to exploit and analyse data. This means that logic and computing tend to prevail in the way we understand reality. IEML, however, incorporates the semantic dimension. How will this new model be able to describe the way we create and transform meaning, and make it computable?

Today we have something called the “semantic web”, but it is not semantic at all! It is based on logical links between data and on algebraic models of logic. There is no model of semantics there. So in fact there is currently no model that sets out to automate the creation of semantic links in a general and universal way. IEML will enable the simulation of ecosystems of ideas based on people’s activities, and it will reflect collective intelligence. This will completely change the meaning of “big data” because we will be able to transform this data into knowledge.

We have very powerful tools at our disposal, we have enormous, almost unlimited computing power, and we have a medium where communication is ubiquitous. You can communicate everywhere, all the time, and all documents are interconnected. Now the question is: how will we use all these tools in a meaningful way to augment human collective intelligence?

This is why I have invented a language that automatically computes internal semantic relations. When you write a sentence in IEML it automatically creates the semantic network between the words in the sentence, and shows the semantic networks between the words in the dictionary. When you write a text in IEML, it creates the semantic relations between the different sentences that make up the text. Moreover, when you select a text, IEML automatically creates the semantic relations between this text and the other texts in a library. So you have a kind of automatic semantic hypertextualisation. The IEML code programs semantic networks and it can easily be manipulated by algorithms (it is a “regular language”). Plus, IEML self-translates automatically into natural languages, so that users will not be obliged to learn this code.

The most important thing is that if you categorize data in IEML it will automatically create a network of semantic relations between the data. You can have automatically-generated semantic relations inside any kind of data set. This is the point that connects IEML and Big Data.
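The claim that categorising data automatically yields a network of semantic relations can be pictured with a toy sketch. Everything below – the catalog, the component sets, the linking rule – is invented for illustration; actual IEML categorisation works through its Script, grammar and dictionary, not bare keyword sets.

```python
# Illustrative sketch only: codes and helper names are invented, not IEML.
from itertools import combinations

# Hypothetical "semantic codes": each data item is categorised by a set of
# primitive components (stand-ins for IEML dictionary terms).
catalog = {
    "doc_a": {"sign", "virtual", "collective"},
    "doc_b": {"sign", "actual", "collective"},
    "doc_c": {"being", "actual", "individual"},
}

def semantic_network(catalog):
    """Derive relations automatically: two items are linked by the
    components their categorisations share."""
    edges = {}
    for (a, ca), (b, cb) in combinations(sorted(catalog.items()), 2):
        shared = ca & cb
        if shared:
            edges[(a, b)] = shared
    return edges

edges = semantic_network(catalog)
# doc_a and doc_b are linked through {"sign", "collective"};
# doc_b and doc_c through {"actual"}; doc_a and doc_c are unrelated.
```

The point the sketch makes is structural: once the categorisation exists, no one has to draw the links by hand – they fall out of the shared terms.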

So IEML provides a system of computable metadata that makes it possible to automate semantic relationships. Do you think it could become a new common language for human sciences and contribute to their renewal and future development?

Everyone will be able to categorise data however they want. Any discipline, any culture, any theory will be able to categorise data in its own way, allowing diversity, while using a single metalanguage to ensure interoperability. This will automatically generate ecosystems of ideas that will be navigable with all their semantic relations. You will be able to compare different ecosystems of ideas according to their data and the different ways of categorising them. You will be able to choose different perspectives and approaches: for example, the same people interpreting different sets of data, or different people interpreting the same set of data. IEML ensures the interoperability of all ecosystems of ideas. On one hand you have the greatest possible diversity, and on the other you have computability and semantic interoperability. I think this will be a big improvement for the human sciences, because today the human sciences can use statistics, but that is a purely quantitative method. They can also use automatic reasoning, but that is a purely logical method. With IEML we can compute using semantic relations, and it is only through semantics (in conjunction with logic and statistics) that we can understand what is happening in the human realm. We will be able to analyse and manipulate meaning, and there lies the essence of the human sciences.
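One way to picture comparing ecosystems of ideas is as a comparison of the relation networks induced by different categorisations of the same data. The following Python sketch is purely illustrative: the document names, the terms and the use of a Jaccard index are my own stand-ins, not part of IEML.

```python
# Hypothetical example: two groups categorise the same three documents with
# different term sets; we compare the resulting relation networks.
def relations(catalog):
    """Pairs of items whose categorisations share at least one term."""
    items = sorted(catalog)
    return {
        (a, b)
        for i, a in enumerate(items)
        for b in items[i + 1:]
        if catalog[a] & catalog[b]
    }

group_1 = {"d1": {"economy"}, "d2": {"economy", "sign"}, "d3": {"being"}}
group_2 = {"d1": {"economy"}, "d2": {"being"}, "d3": {"being"}}

r1, r2 = relations(group_1), relations(group_2)
# Jaccard similarity of the two networks: shared links over all links.
overlap = len(r1 & r2) / len(r1 | r2)
```

Here the two categorisations produce entirely disjoint networks (overlap of 0.0), which is the kind of difference between interpretive perspectives that a common metalanguage would make visible and measurable.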

Let’s talk about the current stage of development of IEML: I know it’s early days, but can you outline some of the applications or tools that may be developed with this metalanguage?

It is still too early; perhaps the first application may be a kind of collective intelligence game in which people will work together to build the best ecosystem of ideas for their own goals.

I published The Semantic Sphere in 2011. And I finished the grammar that has all the mathematical and algorithmic dimensions six months ago. I am writing a second book entitled Algorithmic Intelligence, where I explain all these things about reflexivity and intelligence. The IEML dictionary will be published (online) in the coming months. It will be the first kernel, because the dictionary has to be augmented progressively, and not just by me. I hope other people will contribute.

This IEML interlinguistic dictionary ensures that semantic networks can be translated from one natural language to another. Could you explain how it works, and how it incorporates the complexity and pragmatics of natural languages?

The basis of IEML is a simple commutative algebra (a regular language) that makes it computable. A special coding of the algebra (called Script) allows for recursivity, self-referential processes and the programming of rhizomatic graphs. The algorithmic grammar transforms the code into fractally complex networks that represent the semantic structure of texts. The dictionary, made up of terms organized according to symmetric systems of relations (paradigms), gives content to the rhizomatic graphs and creates a kind of common coordinate system of ideas. Working together, the Script, the algorithmic grammar and the dictionary create a symmetric correspondence between individual algebraic operations and different semantic networks (expressed in natural languages). The semantic sphere brings together all possible texts in the language, translated into natural languages, including the semantic relations between all the texts. On the playing field of the semantic sphere, dialogue, intersubjectivity and pragmatic complexity arise, and open games allow free regulation of the categorisation and the evaluation of data. Ultimately, all kinds of ecosystems of ideas – representing collective cognitive processes – will be cultivated in an interoperable environment.
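As a very rough illustration of what a recursive, machine-checkable script looks like, here is a toy parser in Python. It borrows only the three primitives from the dictionary schema caption (S = sign, B = being, T = thing) and a ternary composition rule; it is emphatically not the real IEML Script, whose coding, layers and paradigms are far more elaborate, and a language with unbounded nesting like this toy one is not strictly regular.

```python
# Toy stand-in for a recursive script: an expression is either a primitive
# (S, B or T) or a parenthesised triple of sub-expressions.
PRIMITIVES = {"S", "B", "T"}

def parse(expr):
    """Return the nested tuple encoded by expr, or raise ValueError."""
    def inner(s, i):
        if i < len(s) and s[i] in PRIMITIVES:
            return s[i], i + 1
        if i < len(s) and s[i] == "(":
            parts = []
            i += 1
            for _ in range(3):  # exactly three sub-expressions per triple
                node, i = inner(s, i)
                parts.append(node)
            if i >= len(s) or s[i] != ")":
                raise ValueError("unclosed triple")
            return tuple(parts), i + 1
        raise ValueError(f"unexpected character at position {i}")
    tree, end = inner(expr, 0)
    if end != len(expr):
        raise ValueError("trailing input")
    return tree

assert parse("(SB(TSB))") == ("S", "B", ("T", "S", "B"))
```

The sketch is meant only to evoke the general mechanism described above: a small set of primitives, a recursive composition rule, and an algorithm that turns any well-formed string back into an explicit structure that further algorithms can traverse.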


Schema from the START – IEML / English Dictionary by Prof. Pierre Lévy FRSC CRC, University of Ottawa, 25 August 2010 (copyright Pierre Lévy 2010, Apache 2.0 license).

Since IEML automatically creates very complex graphs of semantic relations, one of the development tasks that is still pending is to transform these complex graphs into visualisations that make them usable and navigable.

How do you envisage these big graphs? Can you give us an idea of what the visualisation could look like?

The idea is to project these very complex graphs onto a 3D interactive structure. These could be spheres, for example, so you will be able to go inside the sphere corresponding to one particular idea and have all the other ideas of its ecosystem around you, arranged according to the different semantic relations. You will also be able to manipulate the spheres from the outside and look at them as if they were on a geographical map. And you will be able to zoom in and out of fractal levels of complexity. Ecosystems of ideas will be displayed as interactive holograms in virtual reality on the Web (through tablets) and as augmented reality experienced in the 3D physical world (through Google Glass, for example).

I’m also curious about your thoughts on the social alarm generated by the Internet’s enormous capacity to retrieve data, and the potential exploitation of this data. There are social concerns about possible abuses and privacy infringement. Some big companies are starting to consider drafting codes of ethics to regulate and prevent the abuse of data. Do you think a fixed set of rules can effectively regulate the changing environment of the algorithmic medium? How can IEML contribute to improving the transparency and regulation of this medium?

IEML does not only allow transparency, it allows symmetrical transparency. Everybody participating in the semantic sphere will be transparent to others, but all the others will also be transparent to him or her. The problem with hyper-surveillance is that transparency is currently not symmetrical. What I mean is that ordinary people are transparent to big governments and big companies, but these big companies and big governments are not transparent to ordinary people. There is no symmetry. Power differences between big governments and little governments or between big companies and individuals will probably continue to exist. But we can create a new public space where this asymmetry is suspended, and where powerful players are treated exactly like ordinary players.

And to finish up, last month the CCCB Lab began a series of workshops related to the Internet Universe project, which explore the issue of education in the digital environment. As you have published numerous works on this subject, could you summarise a few key points in regard to educating ‘digital natives’ about responsibility and participation in the algorithmic medium?

People have to accept their personal and collective responsibility. Because every time we create a link, every time we “like” something, every time we create a hashtag, every time we buy a book on Amazon, and so on, we transform the relational structure of the common memory. So we have a great deal of responsibility for what happens online. Whatever is happening is the result of what all the people are doing together; the Internet is an expression of human collective intelligence.

Therefore, we also have to develop critical thinking. Everything that you find on the Internet is the expression of particular points of view that are neither neutral nor objective, but an expression of active subjectivities. Where does the money come from? Where do the ideas come from? What is the author’s pragmatic context? And so on. The more we know the answers to these questions, the greater the transparency of the source… and the more it can be trusted. This notion of making the source of information transparent is very close to the scientific mindset, because scientific knowledge has to be able to answer questions such as: Where did the data come from? Where does the theory come from? Where do the grants come from? Transparency is the new objectivity.

Blog of Collective Intelligence (since 2003)




Interview with Nelesi Rodríguez, published in Spanish in the academic journal Comunicación: Estudios venezolanos de comunicación, 2nd quarter 2014, no. 166.

Collective intelligence in the digital age: A revolution just at its beginning

Pierre Lévy (P.L.) is a renowned theorist and media scholar. His ideas on collective intelligence have been essential to the comprehension of certain phenomena of contemporary communication, and his research on the Information Economy Meta Language (IEML) is today one of the biggest promises of data processing and knowledge management. In this interview conducted by the team of Comunicación (C.M.) magazine, he explained some of the basic points of his theory and gave us an interesting reading of current topics related to communication and digital media. Nelesi Rodríguez, April 2014.


C.M: Collective intelligence can be defined as shared knowledge that exists everywhere, that is constantly measured, coordinated in real time, and that drives the effective mobilization of several skills. In this regard, it is understood that collective intelligence is not a quality exclusive to human beings. In what way is human collective intelligence different from other species’ collective intelligence?

P.L: You are totally right when you say that collective intelligence is not exclusive to the human race. We know that ants, bees, and in general all social animals have collective intelligence. They solve problems together and, as social animals, they are not able to survive alone. This is also the case with the human species: we are not able to survive alone, and we solve problems together.

But there is a big difference, and it is related to the use of language: animals are able to communicate, but they do not have language. They cannot ask questions, they cannot tell stories, they cannot have dialogues, they cannot communicate about their emotions, their fears, and so on.

So there is language, which is specific to humankind, and with language you have of course better communication and an enhanced collective intelligence; and you also have everything that comes with this linguistic ability: technology, and the complexity of social institutions such as law, religion, ethics, and the economy. All these things animals don’t have. This ability to play with symbolic systems, to play with tools, and to build complex social institutions creates a much more powerful collective intelligence for humans.

Also, I would say that there are two important features that come from human culture. The first is that human collective intelligence can improve over the course of history, because each new generation can improve the symbolic systems, the technology, and the social institutions; so there is an evolution of human collective intelligence, and of course we are talking about a cultural evolution, not a biological one. Finally, and maybe the most important feature of human collective intelligence, each member of the human collectivity has the ability to reflect, to think for itself. We have individual consciousness; unfortunately for them, the ants don’t. The fact that humans have individual consciousness creates something very powerful at the level of social cognition. That is the main difference between human and animal collective intelligence.

C.M: Do the writing and digital technologies also contribute to this difference?

P.L: In oral culture there was a certain kind of transmission of knowledge, but of course, when we invented writing systems we were able to accumulate much more knowledge to transmit to the next generations. With the invention of the various writing systems, and then their improvements, such as the invention of the alphabet, of paper, of the printing press, and then of the electronic media, human collective intelligence expanded. The ability to build libraries, to organize scientific coordination and collaboration, and the communication supported by the telephone, the radio, and the television made human collective intelligence more powerful. I think the main challenge our generation and the next will have to face is to take advantage of the digital tools (the computer, the internet, the smartphone, etc.) to discover new ways to improve our cognitive abilities: our memory, our communication, our problem-solving abilities, our abilities to coordinate and collaborate, and so on.

C.M: In an interview conducted by Howard Rheingold, you mentioned that every device and technology that has the purpose of enhancing language also enhances collective intelligence and, at the same time, has an impact on cognitive skills such as memory, collaboration, and the ability to connect with one another. Taking this into account:

  • It is said that today, the enhancement of cognitive abilities manifests in different ways: from fandoms and wikis, to crowdsourcing projects that are created with the intent of finding effective treatments for serious illnesses. Do you consider that each of these manifestations contributes in the same way towards the expansion of our collective intelligence?

P.L: Maybe the most important sectors in which to put particular effort are scientific research and learning, because we are talking about knowledge: the creation of knowledge, the dissemination of knowledge, and, more generally, collective and individual learning.

Today there is a transformation of communication in the scientific community: more and more journals are open and online, people are forming virtual teams, they communicate over the internet, and they are using large amounts of digital data and processing this data with computer power. So we are already witnessing this augmentation, but we are just at the beginning of this new approach.

In the case of learning I think it is very important that we recognize the emergence of new ways of learning online collaboratively, where people who want to learn are helping each other, communicating, and accumulating common memories from which they can take what is interesting for them. This collective learning is not limited to schools; it happens in all kinds of social environments. We could call this “knowledge management”, and there is an individual or personal aspect of this knowledge management that some people call “personal knowledge management”: choosing the right sources on the internet, featuring the sources, categorizing information, making syntheses, sharing these syntheses on social media, looking for feedback, initiating a conversation, and so on. We have to realize that learning is, and always has been, an individual process at its core. Someone has to learn; you cannot learn for someone else. Helping other people to learn is teaching, but the learner does the real work. Then, if the learners are helping each other, you have a process of collective learning. Of course, it works better if these people are interested in the same topics or engaged in the same activities.

Collective learning augmentation is something very general, and it has increased with online communication. It also happens at the political level: there is augmented deliberation, because people can discuss easily on the internet, and there is also enhanced coordination (for public demonstrations and similar things).

  • C.M: With the passage of time, collective intelligence becomes less a human quality and more one akin to machines; this worries more than one individual. What is your stance in the wake of this reality?

P.L: There is a process of artificialization of cognition in general that is very old; it began with writing, with books, which are already a kind of externalization or objectification of memory. A library, for instance, is something completely material, completely technical, and without libraries we would be much less intelligent.

We cannot be against libraries because, instead of being pure brain, they are just paper and ink, buildings and index cards. Similarly, it makes no sense to “revolt” against the computer and the internet. It is the same kind of reasoning as with libraries: it is just another technology, more powerful, but the same idea. It is an augmentation of our cognitive ability, individual and collective, so it is absurd to be afraid of it.

But we have to distinguish very clearly between the material support and the texts. The texts come from our mind, and the text that is in my mind can be projected on paper as well as onto a computer network. What is really important here is the text.


C.M: You’ve mentioned before that what we define today as the “semantic web”, more than being based on semantic principles, is based on logical principles. According to your ideas, this represents a roadblock to making the most of the possibilities offered by digital media. As an alternative, you have proposed IEML (Information Economy Meta Language).

  • Could you elaborate on the basic differences between the semantic web and IEML?

P.L: The so-called “semantic web” (in fact, people now call it the “web of data”, which is a better term for it) is based on very well-known principles of artificial intelligence that were developed in the 70s and 80s and that were adapted to the web.

Basically, you have a well-organized database, and you have rules to compute the relations between different parts of the database, and these rules are mainly logical rules. IEML works in a completely different manner: you have as much data as you want, and you categorize this data in IEML.

IEML is a language; not a computer language, but an artificial human language. So you can say “the sea”, “this person”, or anything else. There are words in IEML; there are no words in the semantic web formats, which do not work like this.

In this artificial language that is IEML, each word stands in semantic relations with the other words of the dictionary. So all the words are intertwined by semantic relations and are perfectly defined. When you use these words to create sentences or texts, you create new, grammatical relationships between the words.

And from texts written in IEML, algorithms automatically compute relations inside each sentence, from one sentence to another, and so on. So you have a whole semantic network inside the text that is automatically computed; even more, you can automatically compute the semantic relations between any text and any library of texts.

An IEML text automatically creates its own semantic relations with all the other texts, and these texts in IEML can automatically translate themselves into natural languages: Spanish, English, Portuguese, or Chinese. So, when you use IEML to categorize data, you automatically create semantic links between the data, with all the openness, the subtlety, and the ability to say exactly what you want that a language can offer.

You can categorize any type of content; images, music, software, articles, websites, books, any kind of information. You can categorize these in IEML and at the same time you create links within the data because of the links that are internal to the language.
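The mechanism described above (a dictionary of interlinked concepts whose links carry over to anything categorized with them) can be sketched in a few lines of Python. This is a hypothetical toy model, not actual IEML: the concept identifiers, relations, and item names below are invented for illustration only.

```python
# Toy illustration of semantic tagging: concepts form a small network,
# and items tagged with those concepts inherit links from it.
# These concepts and relations are invented, not real IEML codes.

CONCEPT_RELATIONS = {
    "sea": {"water", "ocean"},
    "ocean": {"sea", "water"},
    "water": {"sea", "ocean"},
    "person": {"human"},
    "human": {"person"},
}

# Each piece of content (image, article, song...) is categorized with concepts.
items = {
    "photo_42.jpg": {"sea", "person"},
    "article_7.html": {"ocean"},
    "song_3.mp3": {"human"},
}

def related(concept_a, concept_b):
    """Two concepts are linked if equal or directly related in the network."""
    return concept_a == concept_b or concept_b in CONCEPT_RELATIONS.get(concept_a, set())

def semantic_links(catalog):
    """Compute the item-to-item links implied by the concept network."""
    links = set()
    names = sorted(catalog)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if any(related(ca, cb) for ca in catalog[a] for cb in catalog[b]):
                links.add((a, b))
    return links

print(semantic_links(items))
```

Here, tagging the photo with “sea” and the article with “ocean” is enough for a link between the two items to be computed automatically, with no link ever stated by hand.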

  • C.M: Can we consider metatags, hashtags, and Twitter lists as a precedent to IEML?

P.L: Yes, exactly. I have been inspired by the fact that people are already categorizing data; they started doing this with social bookmarking sites. The act of curation today goes together with the act of categorization, of tagging. We do this very often on Twitter, and now we can do it on Facebook, on Google Plus, on YouTube, on Flickr, and so on. The thing is that these tags don’t have the ability to interconnect with other tags and to create a big, consistent semantic network. In addition, these tags are in different natural languages.

From the point of view of the user, it will be the same action, but tagging in IEML will just be more powerful.

  • C.M: What will IEML’s initial array of applications be?

P.L: I hope the main applications will be in the creation of collective intelligence games: games of categorization and evaluation of data, a sort of collective curation that will help people create a very useful memory for their collaborative learning. That, for me, would be the most interesting application, along with, of course, the creation of an inter-linguistic or trans-linguistic environment.


C.M: You’ve referred to big data as one of the phenomena that could take collective intelligence to a whole new level. You’ve mentioned as well that in fact this type of information can only be processed by powerful institutions (governments, corporations, etc.), and that only when the capacity to read big data is democratized, will there truly be a revolution.

Would you say that the IEML will have a key role in this process of democratization? If so, why?

P.L: I think that currently there are two important aspects of big data analytics. First, we have more and more data every day; we have to realize this. Second, the main producer of this immense flow of data is ourselves: we, the users of the Internet, are producing the data. So currently lots of people are trying to make sense of this data, and here you have two “avenues”:

The first avenue is the more scientific one. In the natural sciences you have a lot of data (genetic data, data coming from physics or astronomy) and also something relatively new: data coming from the human sciences. This is called “digital humanities”, and it takes data from spaces like social media and tries to make sense of it from a sociological point of view; or you take data from libraries and try to make sense of it from a literary or historical point of view. This is one application.

The second avenue is in business and in administration, private or public. You have many companies that are trying to sell data-analysis services to companies and to governments.

I would say that there are two big problems with this landscape:

The first is related to methodology: today we use mainly statistical and logical methods. It is very difficult to have a semantic analysis of the data, because we do not have a semantic code, and let’s remember that everything we analyze is coded before we analyze it. If you code quantitatively, you get statistical analysis; if you code logically, you get logical analysis. So you need a semantic code to have a semantic analysis. We do not have it yet, but I think that IEML will be that code.
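The difference between the two codings can be made concrete with a small Python sketch. The word-to-concept table below is invented for illustration; it merely stands in for the kind of semantic code (such as IEML) that the answer calls for.

```python
# Sketch of the point above: a purely statistical coding of text misses
# meaning that a semantic coding captures. The word-to-concept table is
# invented for illustration; it stands in for a real semantic code.

doc_a = "the physician healed the patient"
doc_b = "a doctor cured a sick person"

# Statistical coding: bag of words, overlap counted on surface forms.
words_a, words_b = set(doc_a.split()), set(doc_b.split())
statistical_overlap = words_a & words_b  # empty: no shared surface words

# Semantic coding: words are mapped to concept identifiers first.
SEMANTIC_CODE = {
    "physician": "C:HEALER", "doctor": "C:HEALER",
    "healed": "C:CURE", "cured": "C:CURE",
    "patient": "C:SICK_PERSON", "sick": "C:SICK_PERSON",
    "person": "C:SICK_PERSON",
}

def concepts(text):
    """Recode a text as the set of concepts its words express."""
    return {SEMANTIC_CODE[w] for w in text.split() if w in SEMANTIC_CODE}

semantic_overlap = concepts(doc_a) & concepts(doc_b)

print(statistical_overlap)  # set()
print(semantic_overlap)     # three shared concepts
```

The two documents share no surface words, so the statistical comparison finds nothing, while the concept-level comparison finds three shared meanings. This is the sense in which the code applied to the data determines the kind of analysis you can perform.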

The second problem is the fact that this analysis of data is currently in the hands of very powerful or rich players (big governments, big companies). It is expensive and not easy to do: you need to learn how to code, you need to learn how to read statistics…

I think that with IEML, because people will be able to code the data semantically, people will also be able to do semantic analysis with the help of the right user interfaces. They will be able to manipulate this semantic code in natural language; it will be open to everybody.

This famous “revolution of big data” is just at its beginning. In the coming decades there will be much more data and many more powerful tools to analyze it. And it will be democratized; the tools will be open and free.


C.M: In the interview conducted by Howard Rheingold, you defined collective intelligence as a synergy between personal and collective knowledge; as an example, you mentioned the curation process that we, as users of social media, develop and that in most cases serves as resource material for others to use. Regarding this issue, I’d like to analyze a particular situation with you through the lens of collective intelligence:

During the last few months, Venezuela has suffered an important information blackout, the product of the government’s monopolized grasp of the majority of the media outlets, the censorship efforts made by the State’s organisms, and the self-imposed censorship of the last independent media outlets of the country. As a response to this blockade, Venezuelans have taken it upon themselves to stay informed by occupying the digital space. In a relatively short period of time, various non-standard communication networks have been created, verified source lists have been consolidated, applications have been developed, and a sort of ethics code has been established in order to minimize the risk of spreading false information.

Based on your theory on collective intelligence, what reading could you give of this phenomenon?

P.L: You have already given a response to this; I have nothing else to say. Of course I am against any kind of censorship. We have already seen that many authoritarian regimes do not like the internet, because it represents an augmentation of freedom of expression. Not only in Venezuela but in many countries, governments have tried to limit free expression, and people who are politically active and not pro-government have tried to organize themselves through the internet. I think that the new environment created by social media (Twitter, Facebook, YouTube, the blogs, and all the apps that help people find the information they need) helps the coordination and the discussion inside all these opposition movements, and this is the current political aspect of collective intelligence.


Lecture at Sciences Po Paris, October 2, 2014, 5:30 pm

Here is my presentation (PDF): 2014-Master-Class

Introductory text for the lecture

Reflecting intelligence

What does philosophy teach us about the augmentation of intelligence? “Know thyself,” Socrates warns us at the dawn of Greek philosophy. Beneath the multiplicity of traditions and approaches, in the East as in the West, there is one universally recommended path: for human intelligence, the surest way to progress is to reach a higher degree of reflexivity.

Yet since the beginning of the 21st century, we have been learning to use symbol-manipulating automata operating in a ubiquitous network. In the algorithmic medium, our personal intelligences interconnect and function as multiple, intertwined collective intelligences. Since the new medium hosts a growing share of our memory and of our communications, could it not serve as a scientific mirror of our collective intelligences? Nothing prevents the algorithmic medium from soon supporting an objectifiable, measurable overview of how our collective intelligences function and of how each of us participates in them. At that point, a meta-level of collective learning will have been reached.

Indeed, we face problems of a scale of complexity greater than any humanity has been able to solve in the past. The collective management of the biosphere, the renewal of energy resources, the planning of the network of megacities where we now live, and the questions of human development (prosperity, education, health, human rights) will arise with growing acuity in the coming decades and centuries. The density, complexity, and quickening pace of our interactions demand new forms of intellectual coordination. That is why I have spent my whole life searching for the best way to use the algorithmic medium to augment our intelligence. A few of the books I have published bear witness to this quest: La Sphère sémantique. Computation, cognition, économie de l’information (2011); Qu’est-ce que le virtuel ? (1995); L’Intelligence collective (1994); De la Programmation considérée comme un des beaux-arts (1992); Les Arbres de connaissances (1992); L’Idéographie dynamique (1991); Les Technologies de l’intelligence (1990); La Machine univers. Création, cognition et culture informatique (1987)… After obtaining my Canada Research Chair in Collective Intelligence at the University of Ottawa in 2002, I was able to devote myself almost exclusively to a philosophical and scientific meditation on the best way to reflect collective intelligence with the means of communication available to us today, a meditation I began to report in La Sphère sémantique and will deepen in L’intelligence algorithmique (forthcoming).

Developing a research program

Great evolutionary leaps, or, if you prefer, new spaces of forms, are generated by new coding systems. Atomic coding generates molecular forms, genetic coding engenders biological forms, neuronal coding simulates phenomenal forms. Symbolic coding, finally, unique to humanity, sets free reflexive intelligence and culture.

I find in cultural evolution the same structure as in cosmic evolution: advances in symbolic coding drive the expansion of human intelligence. Indeed, our intelligence always rests on a memory, that is, a set of recorded, conceptualized, and symbolized ideas. It classifies, retrieves, and analyzes what it has retained by manipulating symbols. Consequently, intelligence’s grip on data, as well as the quantity and quality of the information it can extract from data, depend first and foremost on the symbolic systems it uses. When, with the invention of writing, symbols became self-preserving, memory grew and reorganized itself, and a new type of intelligence appeared, belonging to a scribal episteme, like that of pharaonic Egypt, ancient Mesopotamia, or pre-Confucian China. When the written medium was perfected with paper, the alphabet, and positional number notation, memory and symbolic manipulation grew in power, and the literate episteme developed in the Greek, Chinese, Roman, Arab, and other empires. The automatic reproduction and diffusion of symbols, from the printing press to the electronic media, multiplied the availability of data and accelerated the exchange of ideas. Born of this mutation, typographic intelligence built the modern world: its industry, its experimental natural sciences, its nation-states, and its ideologies unknown to earlier eras. Thus, following the power of the symbolic tools it manipulates, memory and collective intelligence evolve, passing through successive epistemes.

[Figure: the evolution of media]

The relation between the opening of a new space of forms and the invention of a coding system is confirmed again in the history of science. And since I am in search of an augmentation of reflexive knowledge, modern science gives me precisely the example of a community that reflects on its own intellectual operations and explicitly sets itself the problem of specifying the use it makes of its symbolic tools. Most of the great breakthroughs of modern science were achieved by unifying a proliferation of disparate forms in a single algebraic net. In physics, the first step was taken by Galileo (1564-1642), Descartes (1596-1650), Newton (1643-1727), and Leibniz (1646-1716). In place of the closed, compartmentalized cosmos of the Aristotelian vulgate inherited from the Middle Ages, the founders of modern science built a homogeneous universe, gathered in the space of Euclidean geometry, whose movements obey the infinitesimal calculus. Likewise, the world of electromagnetic waves was mathematically unified by Maxwell (1831-1879), and that of heat, atoms, and statistical probabilities by Boltzmann (1844-1906). Einstein (1879-1955) managed to unify matter, space, and time in a single algebraic model. From Lavoisier (1743-1794) to Mendeleev (1834-1907), chemistry emerged from alchemy through the rationalization of its nomenclature and the discovery of conservation laws, culminating in the famous periodic table, where a hundred or so atomic elements are arranged according to a unifying model that explains and predicts their properties. By discovering a genetic code identical for all forms of life, Crick (1916-2004) and Watson (1928-) opened the way to molecular biology.

Finally, has not mathematics itself progressed through the discovery of new ways of coding problems and solutions? Each advance in the level of abstraction of symbolic coding opens a new field to problem solving. What previously appeared as a multitude of disparate enigmas is then solved by uniform, simplified procedures. Such was the case with the creation of demonstrative geometry by the Greeks (between the 5th and 2nd centuries BCE) and the formalization of logical reasoning by Aristotle (384-322 BCE). The same climb toward generality occurred with the creation of analytic geometry by Descartes (1596-1650), then with the discovery and progressive formalization of the notion of function. At the turn of the 19th and 20th centuries, in the time of Cantor (1845-1918), Poincaré (1854-1912), and Hilbert (1862-1943), the axiomatization of mathematical theories was contemporary with the flowering of set theory, algebraic structures, and topology.

My encyclopedic Odyssey taught me this meta-evolutionary law: intellectual leaps to higher levels of complexity rest on new coding systems. I therefore come to ask the following question: what new coding system will make the algorithmic medium a scientific mirror of our collective intelligence? This medium consists precisely of a stack of coding systems: binary coding of numbers; digital coding of written characters, sounds, and images; coding of the addresses of information on hard drives, of computers on the network, of data on the Web… The world’s memory is already technically unified by all these coding systems, but it is still fragmented on the semantic level. What is missing, then, is a new coding system that makes semantics as computable as numbers, sounds, and images: a coding system that uniformly addresses concepts, whatever the natural languages in which they are expressed.


In short, if we want to reach a reflexive collective intelligence in the algorithmic medium, we must unify digital memory with an interoperable semantic code, one that decompartmentalizes languages, cultures, and disciplines.

A techno-scientific overview

Now in possession of my research program, I must assess how far the contemporary algorithmic medium has advanced toward reflexive collective intelligence: we are not so far off… Between augmented reality and virtual worlds, we communicate in a massively distributed electronic network that spreads across the planet at accelerating speed. Billions of users exchange messages, launch data processing, and access all kinds of information with a light tablet or a smartphone. Fixed or mobile objects, geolocated vehicles and people signal their position and automatically map their environment. All emit and receive streams of information; all draw on the power of cloud computing. From the efforts of Douglas Engelbart to those of Steve Jobs, electronic computation in all its complexity has been brought within reach of ordinary human sensorimotor skills. By inventing the Web, Sir Tim Berners-Lee gathered all data into a memory addressed by the same URL system. From static text on paper, we have moved to ubiquitous hypertext. The collective writing and editing enterprise of Wikipedia, along with a multitude of other open, collaborative initiatives, has put encyclopedic knowledge, reusable open data, and a host of free software tools within everyone’s reach, free of charge. From the first newsgroups to Facebook and Twitter, a new form of networked sociability has taken hold, in which entire populations now participate. Blogs have put publishing within everyone’s reach. All this now being achieved, our intelligence must take the decisive step that will allow it to master a higher level of cognitive complexity.

On the Silicon Valley side, ever finer answers to users’ desires are being sought, all the more effectively as big data analytics provide the means to draw a faithful portrait of those users. But it seems unlikely to me that the incremental improvement of the services delivered by the big Web companies, even guided by a good marketing strategy, will spontaneously lead us to the semantic unification of digital memory. The non-commercial enterprise of the “Semantic Web” promotes useful file standards (XML, RDF) and open ontology languages (such as OWL), but its many ontologies are heterogeneous, and it has failed to solve the problem of semantic interoperability. Among the most advanced projects in computational intelligence, none explicitly aims at creating a new generation of symbolic tools. Some even nourish the chimera of conscious computers becoming autonomous and taking power over the planet with the complicity of post-human cyborgs…

Will light come from academic research on collective intelligence and knowledge management? Since Nonaka’s pioneering work at the end of the 20th century, we have known that sound knowledge management requires making implicit knowledge explicit and communicating it. The experience of social media has taught us the need to closely associate social and personal knowledge management. In practice, however, knowledge management through social media necessarily involves the distributed curation of an enormous quantity of data. That is why collective curation work can only be coordinated, and the data effectively exploited, by means of a common semantic coding. But no one is proposing a solution to the problem of semantic interoperability.

Will help come from the human sciences, via the famous digital humanities? The effort to edit corpora and make them freely accessible, to process and visualize data with big data tools, and to organize communities of researchers around this processing is commendable. I subscribe without reservation to the orientation toward the free and the open. But I see, for the moment, no fundamental work to solve the immense problems of disciplinary fragmentation, testability of hypotheses, and theoretical hyper-locality that prevent the human sciences from emerging from their epistemological Middle Ages. Here again, there is no theory of cognition, or of social cognition, capable of coordinating all the research; no interoperable semantic categorization system in sight; and few practical enterprises to put scientific inquiry into the human back into the hands of the communities themselves. As for steering technical evolution according to the needs of renewed human sciences, the question does not even seem to arise. What finally remains is the critical posture, such as that displayed, for example, by Evgeny Morozov in the United States and by others in Europe or elsewhere. But while the denunciations of the greed of the big Silicon Valley companies, and of the simplistic, even derisory, political, social, and cultural conceptions of the blissful cantors of the algorithm, often hit the mark, one would search in vain among the denouncers for the slightest beginning of a concrete proposal.

In conclusion, I see around me no serious plan for putting the computational power and torrents of data of the algorithmic medium at the service of a new form of reflexive intelligence. My conviction is drawn from a long study of the problem to be solved. As for my temporary solitude in 2014, as I write these lines, I explain it by the fact that no one else has devoted more than fifteen years full-time to solving the problem of semantic interoperability. I console myself by observing the admirable example of Douglas Engelbart. This visionary invented sensorimotor interfaces and collaborative software at a time when all the funding was going to artificial intelligence. It was only many years after he laid out his vision of the future in the 1960s that industry and the mass of users followed him, from the end of the 1980s onward. His vision was not merely technical. He called on us to cross a decisive threshold in the augmentation of collective intelligence in order to meet the ever more pressing challenges that still face our species today. I am continuing his work. Having begun to tame automatic computation through our sensorimotor interactions with hypertexts, we must now explicitly use the algorithmic medium as a cognitive extension. My research has strengthened my conviction that no technical solution ignorant of the complexity of human cognition will bring us to a safe harbor. We will obtain an expanded intelligence only with a clear theory of cognition and a deep understanding of the dynamics of the coming anthropological mutation. Finally, on a technical level, gathering humanity’s collective wisdom requires a semantic unification of its memory.
It is by respecting all these requirements that I designed and built IEML, the common tool of a new intellectual power and the origin of a scientific revolution.

The springs of a scientific revolution

The implementation of my research program will be no less complex or ambitious than other great scientific and technical projects, such as those that led us to walk on the Moon or to decipher the human genome. This great enterprise will mobilize vast networks of researchers in the human sciences, linguistics and computer science. I have already gathered a small group of engineers and translators in my Research Chair at the University of Ottawa. With the means of a university laboratory in the human sciences, I found the code I was looking for and foresaw how its use would lead to a reflexive collective intelligence.

I was determined not to fall into the trap of superficially adapting some symbolic system of the typographic episteme to the new medium, like the first railway cars, which looked like stagecoaches. On the contrary, I was convinced that we could only pass to a new episteme by means of a symbolic system designed from the outset to unify and exploit the power of the algorithmic medium.

Here, in summary, are the main steps of my reasoning. First, how could I effectively augment collective intelligence without scientific knowledge of it? What I need, then, is a science of collective intelligence. I then take a further step in the search for conditions. A science of collective intelligence necessarily presupposes a science of cognition in general, because the collective dimension is only one aspect of human cognition. I therefore need a science of cognition. But how can one rigorously model human cognition, its culture and its ideas, without first modeling language, which is one of its key components? Since the human being is a speaking animal – that is, a specialist in symbolic manipulation – a scientific model of cognition must necessarily contain a model of language. Finally, one last stroke of the pickaxe before hitting bedrock: does a science of language not require a scientific language? Indeed, to want a computational science of language without having a mathematical language is like claiming to measure lengths without units or instruments. Yet before building IEML, all I had was an algebraic modeling of syntax: Chomskyan theory and its variants do not extend to semantics. Linguistics gives me precise descriptions of natural languages in all their aspects, including semantics, but it does not provide me with universal algebraic models. I thus understand the origin of the difficulties of machine translation, from the 1950s to the present day.

Because the IEML metalanguage provides an algebraic coding of semantics, it makes possible a mathematical modeling of language and cognition, and it ultimately opens to our collective intelligence the immense benefit of reflexivity.

IEML, symbolic tool of the new episteme

If I am to contribute to augmenting human intelligence – our intelligence – I must first understand the conditions of its functioning. To summarize in a few words what many years of research have taught me, intelligence depends above all on symbolic manipulation. Just as our hands control tools that augment our material power, it is thanks to its capacity for symbol manipulation that our cognition attains reflexive intelligence. The human organism has the same structure everywhere, but its grip on its physico-biological environment varies according to the techniques it deploys. In the same way, cognition has an invariant functional structure, innate in human beings, but it wields symbolic tools whose power grows as they evolve: writing, printing, electronic media, computers… Intelligence commands its symbolic tools through its ideas and concepts, just as the head commands material tools through the arm and the hand. As for symbols, they supply their power to intellectual processes. The strength and subtlety that symbols confer on conceptualization carry over to ideas and, from there, to communication and memory, ultimately sustaining the capacities of intelligence.

I have therefore built the new tool so that it draws the maximum from the new power offered by the global algorithmic medium. IEML is neither a classification system, nor an ontology, nor even a universal super-ontology, but a language. Like any language, IEML weaves together a syntax, a semantics and a pragmatics. But it is an artificial language: its syntax is computable, its semantics translates natural languages and its pragmatics programs ecosystems of ideas. The syntax, semantics and pragmatics of IEML function interdependently. From the syntactic point of view, the IEML algebra commands a topology of relations. As a result, the linguistic connections between texts and dynamic hypertexts are computed automatically. From the semantic point of view, a code – that is, a writing system, a grammar and a multilingual dictionary – gives meaning to the algebra. Consequently, each variable of the algebra becomes a node of inter-translation between natural languages. Users can then communicate in IEML while using the natural language or languages of their choice. Finally, from the pragmatic point of view, IEML commands the simulation of ecosystems of ideas. Data categorized in IEML organize themselves automatically into dynamic, explorable, self-explanatory hypertexts. In practice, IEML thus works as a tool for the distributed programming of a global cognitive simulation.
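To make the idea of a computable syntax more concrete, here is a toy sketch in Python of an IEML-like algebra. It is purely illustrative: the class names, the layer rule and the helper function are my own assumptions for exposition, not the actual IEML grammar or tooling. It only shows the general principle that expressions built algebraically from a small set of primitives make structural relations between them mechanically computable.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Expr:
    """A toy expression: either a named primitive, or a triple of
    sub-expressions (substance, attribute, mode). Illustrative only."""
    name: Optional[str] = None
    substance: Optional["Expr"] = None
    attribute: Optional["Expr"] = None
    mode: Optional["Expr"] = None

    @property
    def layer(self) -> int:
        """Primitives sit at layer 0; a triple sits one layer above
        its deepest component."""
        if self.name is not None:
            return 0
        return 1 + max(self.substance.layer, self.attribute.layer, self.mode.layer)

def mul(s: Expr, a: Expr, m: Expr) -> Expr:
    """Non-commutative triadic 'multiplication' building a higher-layer expression."""
    return Expr(substance=s, attribute=a, mode=m)

def same_substance(a: Expr, b: Expr) -> bool:
    """One example of a relation that becomes computable from the algebra alone."""
    return a.substance == b.substance

# Six primitive symbols, named after IEML's primitives for flavor.
E, U, A, S, B, T = (Expr(name=n) for n in "EUASBT")

x = mul(S, B, T)          # a layer-1 expression
y = mul(S, T, B)          # order matters: a different expression...
assert x != y
assert x.layer == 1
assert same_substance(x, y)  # ...yet their shared substance is detected mechanically
```

The point of the sketch is only the last line: because meaning-bearing units are algebraic terms rather than opaque strings, relations between them (shared components, layers, symmetries) can be computed rather than annotated by hand.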

The algorithmic future of intelligence

Once it has taken this new symbolic tool in hand, our species will leave behind an assimilated and assumed typographic episteme to enter the vast field of algorithmic intelligence. A new memory will receive torrents of data from billions of sources and automatically transform the flood of information into self-organizing dynamic hypertexts. Whereas Wikipedia retains a categorization system inherited from the typographic episteme, a perspectivist encyclopedic library will open itself to all possible classification systems. By self-organizing according to the viewpoints adopted by their explorers, data categorized in IEML will reflect the multi-polar functioning of collective intelligence.

The relations between dynamic hypertexts will be projected into a computed, multi-sensory fiction explorable in three dimensions. But it is a cognitive reality that the new virtual worlds will simulate. Their spatio-temporality will therefore be quite different from that of the material world, since here it is the form of intelligence, and not that of ordinary physical reality, that will let itself be explored by human sensorimotricity.

From the collaborative curation of data will emerge new kinds of intellectual and social games. Collectives of learning, production and action will communicate in a stigmergic mode by sculpting their common memory. Players will thus construct their individual and collective identities. Their emotional tendencies and the directions of their attention will be reflected in the fluctuations and cycles of the common memory.

Building on new methods of semantic measurement and accounting based on IEML, the openness and transparency of knowledge-production processes will gain new momentum. Studies of cognition and consciousness will have at their disposal not only a new theory but also a new instrument of observation, analysis and simulation. It will become possible to accumulate and share expertise on the cultivation of ecosystems of ideas. We will begin to ask questions about the balance, interdependence and cross-fertility of these ecosystems of ideas. What services do they render to the communities that produce them? What are their effects on human development?

The great project of a union of intelligences to which I am inviting you will not be the fruit of any military conquest, nor of the victory over minds of any political or religious ideology. It will result from a cognitive revolution with a techno-scientific foundation. Far from any spirit of radical tabula rasa, the new episteme will preserve the concepts of earlier epistemes. But this legacy of the past will be taken up in a new, broader context, and by a more powerful intelligence.

[Image at the head of the article: “The Mirror” by Paul Delvaux, 1936]