Archives for category: Semantic Sphere

Sign, symbol, language

Sign

Meaning involves at least three elements playing distinct roles. A sign (1) means something (2) for some being (3). The sign may be any entity or event. What makes it a “sign” is not an intrinsic property but the role that it plays in meaning. The thing indicated by the sign is often called the object or the referent and, again, what makes it the referent of the sign is not an intrinsic property but rather its role in the triadic relation. As for the being, it is often called the subject or the interpreter. It can be a human being, a group, an animal, a machine or any entity or process endowed with self-reference (a self/environment distinction) and interpretation. The interpreter always takes the context into account when interpreting the sign. For example, I (the interpreter) smell some smoke (the sign) and I infer that it comes from some fire (a referent that is part of the context).

Not a pipe

Communication and signs clearly exist at the level of all living organisms. Cells recognize concentrations of poison or food from afar, plants use their flowers to trick insects into their reproductive processes, and animal species practice complex semiotic games, including camouflage and mimicry. Animals – organisms with brains – recognize, interpret and emit signs constantly. Their cognition is already complex: it follows the sensorimotor cycle and involves categorization, feelings and environment mapping. Animals learn by experience, solve problems and communicate, and the social species manifest collective intelligence. All these cognitive properties imply the emission and interpretation of signs. When a wolf growls, there is no need for a long discourse: a clear message is sent to its adversary.

Symbol

A symbol is a special kind of sign that is split in two: the signifier and the signified. The signified (virtual) is a general category or abstract class, and the signifier (actual) is a sensible phenomenon that represents the signified. The signifier may be, for example, a sound, a black mark on white paper or a gesture. The word “tree” is a symbol. It is made of a signifier (the sound) and a signified category: the family of plants with root, trunk, branches and leaves. The relation between the signifier and the signified is conventional and belongs to the symbolic system (the English language) of which the symbol is a part. What we mean by a conventional relation between signified and signifier is that, in the majority of cases, there is no analogy or causal connection between them, for example between the sound “crocodile” and the crocodile species. Different languages use different signifiers to indicate the same signified. Moreover, languages cut up reality – or define their categories – in their own way, depending on the environment and the social games of their speakers. In our example, it is the English language that decides what the signified of “tree” is. The signified is not left to the choice of the interpreter. What the interpreter does decide is the meaning of the word in the particular context of a speech act: is the referent of the word a syntactic tree, a palm tree, a Christmas tree…?
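
To make the two-sided structure of the symbol concrete, here is a minimal sketch in Python; the class and the example words are illustrative only, not drawn from any particular linguistic formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Symbol:
    signifier: str   # the sensible phenomenon: a sound, a written mark, a gesture
    signified: str   # the abstract category the convention points to
    language: str    # the symbolic system the convention belongs to

# Different languages attach different signifiers to the same signified:
# the relation is conventional, not analogical or causal.
tree_en = Symbol("tree", "plant with root, trunk, branches and leaves", "English")
tree_fr = Symbol("arbre", "plant with root, trunk, branches and leaves", "French")

assert tree_en.signified == tree_fr.signified   # same category
assert tree_en.signifier != tree_fr.signifier   # different conventional signifiers
```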

Language

By language, I mean a complete language, a general symbolic system that allows people to think reflexively, ask questions, tell stories, dialogue and engage in complex social interaction. English, French, Spanish, Arabic, Russian, Mandarin Chinese and Esperanto are languages. Every human being is biologically equipped to speak and recognize languages. The linguistic ability is natural, genetic, embedded in our brains and universal. In contrast, languages (like English, French, etc.) are social, conventional, cultural, multiple, evolving and hybridizing. They mix and change according to the transformations of demographic, technological, economic, social and political contexts.

Our natural linguistic ability multiplies the cognitive faculties that we share with other social animals. It empowers reflexive thought, lasting and precise memory, fast learning, long-term planning, large-scale complex coordination and cultural evolution. Animals cannot understand and use linguistic symbols to their full extent; only humans can. Even the best-trained gorilla will not claim that the story of another gorilla is false or exaggerated. It will neither ask you for an appointment on the first Tuesday of next month nor tell you where its grandfather was born.

In animal cognition, the categories that organize perception and action are enacted by neural networks. In human cognition, these categories become explicit thanks to symbols and move to the forefront of our awareness. Ideas become objects of reflection. With language come arithmetic, art, religion, politics, economics and technology. Compared to other social species, human collective intelligence is more powerful and creative because it is supported and augmented by our linguistic ability. Therefore, if we work in data science, artificial intelligence or cognitive computing, it would be useful to understand – and model – not only the functioning of neurons and neurotransmitters, common to all animals, but also the structure and organization of language, unique to our species.

Natural languages contain the possibility of logical reasoning and arithmetic computing, but they cannot be reduced to these features. In this sense, programming languages like Python, JavaScript or C++ are too specialized to be considered complete languages. Their basic units are empty syntactic containers. No grandmother can tell a story in Python to her grandchildren, and there are no words in OWL to say “butter” or “crocodile”.

Grammar

A natural language is made of recursively nested units: a phoneme is an elementary sound, a word is a chain of phonemes, a sentence is a chain of words and a text is a chain of sentences. A language has a finite dictionary of words and syntactic rules for the construction of texts. From its dictionary and set of syntactic rules, a language offers its users the ability to generate – and understand! – an infinity of texts.
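
The claim that a finite dictionary plus syntactic rules yields an unbounded set of texts can be illustrated with a toy generator in Python; the vocabulary and the single subject-verb-object rule are invented for the example.

```python
from itertools import product

# A finite dictionary...
nouns = ["the wolf", "the gazelle", "the lion"]
verbs = ["smells", "watches", "follows"]

# ...and one toy syntactic rule (Subject Verb Object) already yield
# len(nouns) * len(verbs) * len(nouns) distinct sentences; adding recursive
# rules (relative clauses, coordination) makes the set of texts unbounded.
sentences = [f"{s} {v} {o}" for s, v, o in product(nouns, verbs, nouns)]

print(len(sentences))   # 27 sentences from 6 dictionary entries
print(sentences[0])     # "the wolf smells the wolf"
```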

Phonemes

Humans cannot pronounce or recognize several phonemes simultaneously. They can only pronounce one sound at a time, so languages have to obey the constraint of sequentiality. An utterance unfolds as a temporal chain of phonemes, with an acoustic punctuation reflecting its grammatical organization.

Phonemes are generally divided into consonants and vowels. Some languages have “click” consonants (in East and Southern Africa) and others (like Chinese) have tones on their vowels. Despite the great diversity of sounds used to pronounce human languages, the number of conventional sounds in a language is limited: the order of magnitude is between thirty and one hundred.

Words

Phonemes are meaningless sounds with no signified associated with them. The first symbolic unit, with a signifier related to a signified, is the word. By “word” I mean an atomic sense unit. For example, technically, the expression “smallest” contains two words: “small” (meaning tiny) and “est” (meaning the most).

How many words does a language contain? The biggest English dictionary counts about 200,000 words, Latin has 50,000 words, Chinese has 30,000 characters, and biblical Hebrew amounts to 6,000 words. The French classical author Jean Racine was able to evoke the whole range of human passions with only 3,700 words in all of his 13 plays. Most linguists think that, whatever the language, a skillful and cultivated speaker masters around 10,000 words as an order of magnitude.

All languages contain nouns depicting structures or entities and verbs describing actions, events and processes. Depending on the language, there are other types of words, like adjectives, adverbs, prepositions or sense units marking grammatical function, gender, number, person, tense, etc.

Note that a word cannot be true or false. As part of a language, its signifier points to a signified, an abstract category, and not to a state of things. Only a sentence that is spoken in context and purports to describe a reality – a sentence that has a referent – can be true or false.

Sentences

At the level of the sentence, we leave the abstract dictionary of a language to enter the concrete world of speech acts in context. First, let’s distinguish three sub-levels of complexity at the sentence level: the topic, the phrase and the super-phrase. A topic is a super-word indicating a subject, a matter, an object or a process that cannot be described by one single word, e.g., “history of linguistics”, “smartphone” or “tourism in Canada”. Different languages have diverse rules for building topics, like joining root-words to case-words or straight agglutination of words. By relating several topics, a phrase brings to mind an event, an action or a fact, e.g., “I bought her a new smartphone for her twentieth birthday”. A phrase can be verbal, like the previous example, or nominal, like “the blue seat of my father’s car”. Finally, a super-phrase evokes a network of relations between facts or events, like a theory or a narrative. The relationships between phrases can be temporal (after), spatial (behind), causal (because) or logical (therefore), they can underline contrasts (but, despite…) and so on.
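
The three sub-levels can be pictured as nested containers; the following Python sketch only illustrates the containment relation described above and is not a parser or a model of any real grammar.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Topic:            # a super-word: "history of linguistics", "tourism in Canada"
    words: List[str]

@dataclass
class Phrase:           # relates topics into a fact, an action or an event
    topics: List[Topic]

@dataclass
class SuperPhrase:      # relates phrases into a theory or a narrative
    phrases: List[Phrase]
    connectives: List[str]   # "after", "because", "therefore", "but", ...

birthday_gift = Phrase([Topic(["I"]),
                        Topic(["new", "smartphone"]),
                        Topic(["twentieth", "birthday"])])
story = SuperPhrase(phrases=[birthday_gift], connectives=[])
```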

Texts

The highest linguistic unit, the text, results from a punctuated sequence of sentences. A text has a signified resulting from the syntactic rules applied to the signifieds of its words. It also has a referent in the mind of its speaker, a referent that is inferred by its listeners from the signified of the text and from the temporal, spatial and social contexts of its utterance. Even when the text is in fact produced by a computer program, the listener cannot help but imagine a speaker’s intention to mean something and construct a mental model of a referent.

Semantics

When we listen to a speech, we transform a chain of sounds into a semantic network and we infer from this network a new mental model of our situation. Conversely, we are able to transform a mental model into the corresponding semantic network and then this network into a train of phonemes. Semantics is the back and forth translation between chains of phonemes and semantic networks. The semantic networks themselves are multi-layered and can be broken down into three levels: paradigmatic, syntagmatic and textual.

Paradigmatic relations

In any language dictionary, words are generally arranged in paradigms. A paradigm is a set of mutually exclusive words that cover a particular functional or thematic zone. For example, languages may comprise paradigms to indicate the tense (past, present, future) and voice (active, passive) of verbs. Most languages include paradigms for economic actions (buy, sell, lend, repay) or colors (red, blue, yellow…). A speaker may replace a word from a paradigm by another word from the same paradigm and still make sense. In the sentence “I bought a car” you can replace bought by sold, because buy and sell are part of the same paradigm. But you cannot replace bought by yellow. Two words from the same paradigm are both opposed (they do not have the same meaning) and related (they are exchangeable).
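
The substitution test described above can be sketched in a few lines of Python; the paradigm sets are, of course, only illustrative fragments.

```python
# A paradigm: a set of mutually exclusive yet exchangeable words.
paradigms = {
    "economic_action": {"bought", "sold", "lent", "repaid"},
    "color": {"red", "blue", "yellow"},
}

def exchangeable(word_a: str, word_b: str) -> bool:
    """Two words can replace one another only if some paradigm contains both."""
    return any(word_a in p and word_b in p for p in paradigms.values())

print(exchangeable("bought", "sold"))    # True: "I sold a car" still makes sense
print(exchangeable("bought", "yellow"))  # False: "I yellow a car" does not
```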

Words can also be related because they stand in a taxonomic relation, like horse and animal: the English dictionary indicates that a horse is a particular case of an animal. Words can also be composed of smaller words; for example, “metalanguage” comes from meta (beyond, second order) and language.

I will not write down here a complete list of all the relations that can be found between the words of a dictionary. The main point is that the words of a language are not isolated but inter-related by a dense network of semantic connections. In dictionaries, words are always defined and explained by way of other words. Let’s call “paradigmatic” – in a very general sense – the relations between the words of a language. When we hear a sentence using the word “sold”, we know, in an implicit way, that “sold” is a verb, that it is opposed to “bought”, that it is not “lent”, and that it is the past tense of “sell”.

Syntagmatic relations

At a given moment and for a given community of speakers, the relations between words in a language’s dictionary are constant. But in actual speech, the relations between words change according to their syntagmatic – or grammatical – roles. In the two following sentences, “The gazelle smells the presence of the lion” and “The lion smells the presence of the gazelle”, the words “gazelle” and “lion” do not share the same grammatical role, so the words are not connected according to the same syntagmatic networks… therefore the sentences have distinct meanings. Syntagmatic networks can generally be reduced to a grammatical tree of verbal and nominal phrases (search for «syntactic tree» on Google Images).
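
A toy illustration of this point: the two sentences share exactly the same vocabulary, but assigning the words to different grammatical roles produces different syntagmatic networks, hence different meanings. The role labels below are a rough stand-in for a full syntactic tree.

```python
# Same vocabulary in both sentences...
words_1 = sorted("the gazelle smells the presence of the lion".split())
words_2 = sorted("the lion smells the presence of the gazelle".split())
assert words_1 == words_2

# ...but different syntagmatic networks (who plays which role),
# therefore distinct meanings.
roles_1 = {"agent": "gazelle", "verb": "smell", "patient": "lion"}
roles_2 = {"agent": "lion", "verb": "smell", "patient": "gazelle"}
assert roles_1 != roles_2
```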

Textual relations

At the grammatical level, a text is just a recognizable chain of sounds. But at the semantic level, texts are interconnected by relations like linguistic anaphoras and isotopies.

A text’s anaphoric links relate words or sentences to pronouns, conjunctions, etc. For example, when we read a pronoun, anaphora is what tells us which noun – mentioned in a previous or following sentence – it refers to.

On the other hand, isotopies are recurrences of themes that weave the unity of a text: the identity of heroes (characters), genres (love stories or historical novels), places, etc. These redundancies essentially concern words, paradigms, sentences and sentence structures. Iso-topia means “same topic” in Greek. The notion of isotopy also encompasses all kinds of phonetic, prosodic, syntactic and narrative repetitions that help the listener understand the text. From a sheer sequence of sentences, isotopies guide us in constructing an intra-textual semantic network.

Ambiguities

What does it mean to understand a train of phonemes at the semantic level? It means that, from the sequence of sounds, we build a multi-layered semantic network: paradigmatic, syntagmatic and textual. When weaving the paradigmatic layer, we answer questions like: “What is this word? To what paradigms does it belong? Which one of its senses should I consider?” Then we connect words by responding to questions like: “What are the syntagmatic relations between the words in that sentence?” Finally, we interlace texts by recognizing the anaphoras and isotopies that inter-connect their sentences. Our understanding of a text is this three-layered network of sense units.

Ambiguities can happen at all three levels and multiply their effects. In case of homophony, the same sound can point to two different words, like “ate” and “eight”. Sometimes one word conveys several distinct meanings: “mole” can mean an animal that digs galleries or a deep undercover spy. In case of synonymy, the same meaning can be represented by distinct words, like “tiny” and “small”. Amphibologies are syntagmatic ambiguities, as in: “Mary saw the woman on the mountain with a telescope.” Who is on the mountain, Mary or the woman? Moreover, is it the mountain or the woman that has the telescope? Textual relations are even more ambiguous than paradigmatic and syntagmatic ones because the rules for anaphora and isotopy are loosely defined. Text understanding goes beyond grammar and vocabulary. It implies the building and comparison of complex and dynamic mental models. Human beings do not always resolve all the ambiguities of speech correctly, and when they do, it is often by taking into account the pragmatic (or extra-textual) context, which is generally implicit… and out of the reach of computers.

Computers cannot understand or translate texts with only the help of a dictionary and a grammar, because the dictionaries and grammars of natural languages like English or Arabic have local versions, are fuzzy and evolve constantly. Moreover, textual rules change with social contexts, language games and literary genres. Finally, computers cannot engage in the pragmatic context of speech – as human beings do – to disambiguate texts. Natural language processing (a sub-discipline of artificial intelligence) compensates for the irregularity of natural languages by using a lot of statistical calculation and “deep learning” algorithms. Depending on its training set, the algorithm interprets a text by choosing the most probable semantic network. The results of these algorithms have to be validated and improved by human reviewers.
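
As a deliberately simplified picture of the statistical strategy mentioned here, the sketch below disambiguates one polysemous word by scoring each sense against the words of the context; the co-occurrence counts stand in for a trained model and are invented for the example.

```python
# Invented co-occurrence counts: how often each sense of "mole" appears
# near some context words in a (fictional) training corpus.
sense_contexts = {
    "mole (animal)": {"garden": 30, "digging": 25, "soil": 20, "spy": 1},
    "mole (spy)":    {"agency": 40, "undercover": 35, "spy": 30, "garden": 2},
}

def most_probable_sense(context_words):
    """Choose the sense whose training contexts best match the observed context."""
    def score(sense):
        return sum(sense_contexts[sense].get(w, 0) for w in context_words)
    return max(sense_contexts, key=score)

print(most_probable_sense("the mole dug a tunnel in the garden soil".split()))
# -> "mole (animal)": the most probable reading given the toy statistics
```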

Pragmatics

The word “pragmatics” comes from the ancient Greek pragma: “deed, act”. In their pragmatic sense, speeches are “acts” or performances. They do something. A speech may be descriptive and, in this case, it can be true or false. But a speech may also fulfill many other social functions, like ordering, praying, judging, promising, etc. A speech act functions as a move in a game played by its speaker. So, distinct from the semantic meaning that we analyzed in the previous section, the pragmatic meaning of a text is related to the kind of social game that is played by the interlocutors. For example, is the text pronounced on a stage in a play or in a real tribunal? The pragmatic meaning is also related to the real effects of the utterance, effects that are unknown at the moment of its pronunciation. For example: did I convince you? Have you kept your word? In the case of meaning as “real effect”, the sense of a speech can only be known after its utterance, and future events can always modify it. The pragmatic ambiguity of a speech act comes from ignorance about the time and place of the utterance, from ignorance of the precise referents of the speech, from uncertainty about the social game played by the speaker, from the ambivalence or concealment of the speaker’s intentions and, of course, from the impossibility of knowing in advance the effects of an utterance.

Pragmatics is all about the triadic relation between symbols (speeches or texts), interpreters (people or interlocutors) and referents (objects, reality, extra-textual context). At the pragmatic level, any speech is pointing to – and acting on – a referential context that is common to the interlocutors. The pragmatic context is used for the disambiguation of a text’s semantics and for the actualization of its deictic symbols (like: here, you, me, that one there, or next Tuesday). Indeed, specialists in natural language processing often view the pragmatic context from the exclusive angle of disambiguation. But in the dynamics of communication, the pragmatic context is not only a tool for disambiguation but also – and more importantly – the common object that is at stake for the participants. The pragmatic context works like a shared and synchronized memory where interlocutors “write” and “read” their speeches – or other symbolic acts – in order to transform a real social situation.

I put forward in this paper a vision for a new generation of cloud-based public communication services designed to foster reflexive collective intelligence. I begin with a description of the current situation, including the huge power and social shortcomings of platforms like Google, Apple, Facebook, Amazon, Microsoft, Alibaba, Baidu, etc. Contrasting with the practice of these tech giants, I reassert the values that are direly needed at the foundation of any future global public sphere: openness, transparency and commonality. But such ethical and practical guidelines are probably not powerful enough to help us cross a new threshold in collective intelligence. Only a disruptive innovation in cognitive computing will do the trick. That’s why I introduce “deep meaning”, a new research program in artificial intelligence based on the Information Economy MetaLanguage (IEML). I conclude this paper by evoking possible bootstrapping scenarios for the new public platform.

The rise of platforms

At the end of the 20th century, one percent of the human population was connected to the Internet. In 2017, more than half the population is connected. Most users interact on social media, search for information, and buy products and services online. But despite the ongoing success of digital communication, there is a growing dissatisfaction with the big tech companies – “Silicon Valley” – that dominate the new communication environment.

The big tech firms are the most highly valued companies in the world, and the massive amount of data that they possess is considered the most precious good of our time. Silicon Valley owns the big computers: the network of physical data centers where our personal and business data are stored and processed. Their income comes from the economic exploitation of our data for marketing purposes and from their sales of hardware, software and services. But they also derive considerable power from the knowledge of markets and public opinion that stems from their control of information.

The big cloud companies master new computing techniques that roughly mimic the way neurons learn new behaviors. These programs are marketed as deep learning or artificial intelligence even though they have no cognitive autonomy and need intensive training by humans before becoming useful. Despite their well-known limitations, machine learning algorithms have effectively augmented the abilities of digital systems. Deep learning is now used in every economic sector. Chips specialized in deep learning are found in big data centers, smartphones, robots and autonomous vehicles. As Vladimir Putin rightly told young Russians in his speech for the first day of school in fall 2017: “Whoever becomes the leader in this sphere [of artificial intelligence] will become the ruler of the world”.

The tech giants control huge business ecosystems beyond their official legal borders, and they can ruin or buy competitors. Unfortunately, their rivalry prevents real interoperability between cloud services, even though such interoperability would be in the interest of the general public and of many smaller businesses. As if their technical and economic powers were not enough, the big techs are now playing on the turf of governments. Facebook vouches for our identity and warns our family and friends that we are safe when a terrorist attack or a natural disaster occurs. Mark Zuckerberg states that one of Facebook’s missions is to ensure that the electoral process is fair and open in democratic countries. Google Earth and Google Street View are now used by several municipal authorities and governments as their primary source of information for cadastral plans and other geographical or geospatial services. Twitter has become a global political, diplomatic and news service. Microsoft sells its digital infrastructure to public schools. The Kingdom of Denmark has opened an official embassy in Silicon Valley. Cryptocurrencies independent of nation states (like Bitcoin) are becoming increasingly popular. Blockchain-based smart contracts (powered by Ethereum) bypass state authentication and traditional paper bureaucracies. Some traditional functions of government are being taken over by private technological ventures.

This should not come as a surprise. The practice of writing in ancient palace-temples gave birth to government as a separate entity. The alphabet and paper allowed the emergence of merchant city-states and the expansion of literate empires. The printing press, industrial economy, motorized transportation and electronic media sustained nation-states. The digital revolution will foster new forms of government. Today, we discuss political problems in a global public space that takes advantage of the web and social media, and the majority of humans live in interconnected cities and metropolises. Each urban node wants to be an accelerator of collective intelligence, a smart city. We need to think about public services in a new way. Schools, universities, public health institutions, mail services, archives, public libraries and museums should take full advantage of the internet and de-silo their datasets. But we should go further. Are the current platforms doing their best to enhance collective intelligence and human development? How about giving back to the general population the data produced in social media and other cloud services, instead of just monetizing it for marketing purposes? How about giving people access to the cognitive powers unleashed by a ubiquitous algorithmic medium?

Information wants to be open, transparent and common

We need a new kind of public sphere: a platform in the cloud where data and metadata would be our common good, dedicated to the recording and collaborative exploitation of memory in the service of our collective intelligence. The core values orienting the construction of this new public sphere should be openness, transparency and commonality.

Firstly, openness has already been tried out in the scientific community, the free software movement, Creative Commons licensing, Wikipedia and many other endeavors. It has been adopted by several big industries and governments. “Open by default” will soon be the new normal. Openness is on the rise because it maximizes the improvement of goods and services, fosters trust and supports collaborative engagement. It can be applied to data formats, operating systems, abstract models, algorithms and even hardware. Openness also applies to taxonomies, ontologies, search architectures, etc. A new open public space should encourage all participants to create, comment on, categorize, assess and analyze its content.

Secondly, transparency is the very ground for trust and the precondition of an authentic dialogue. Data and people (including the administrators of a platform) should be traceable and auditable. Transparency should be reciprocal, without distinction between the rulers and the ruled. Such transparency will ultimately be the basis for reflexive collective intelligence, allowing teams and communities of any size to observe and compare their cognitive activity.

Commonality means that people will not have to pay to get access to this new public sphere: everything will be free and public property. Commonality also means transversality: de-siloing and cross-pollination. Smart communities will interconnect and recombine all kinds of useful information: open archives of libraries and museums, free academic publications, shared learning resources, knowledge management repositories, open-source intelligence datasets, news, public legal databases…

From deep learning to deep meaning

This new public platform will be based on the web and its open standards like HTTP, URLs, HTML, etc. Like all current platforms, it will take advantage of distributed computing in the cloud, and it will use “deep learning”: an artificial intelligence technology that employs specialized chips and algorithms that roughly mimic the learning process of neurons. Finally, to be completely up to date, the next public platform will enable blockchain-based payments, transactions, contracts and secure records.

If a public platform offers the same technologies as the big tech (cloud, deep learning, blockchain), with the sole difference of openness, transparency and commonality, it may prove insufficient to foster a swift adoption, as is demonstrated by the relative failures of Diaspora (open Facebook) and Mastodon (open Twitter). Such a project may only succeed if it comes up with some technical advantage compared to the existing commercial platforms. Moreover, this technical advantage should have appealing political and philosophical dimensions.

No one really fancies the dream of autonomous machines, especially considering the current limitations of artificial intelligence. Instead, we want an artificial intelligence designed for the augmentation of human personal and collective intellect. That’s why, in addition to the current state of the art, the new platform will integrate the brand-new deep meaning technology. Deep meaning will expand the current reach of artificial intelligence, improve the user experience of big data analytics and enable the reflexivity of personal and collective intelligence.

Language as a platform

In a nutshell, deep learning models neurons and deep meaning models language. In order to augment the human intellect, we need both! Right now, deep learning is based on the simulation of neural networks. That is enough to roughly model animal cognition (every animal species has neurons), but it is not refined enough to model human cognition. The difference between animal cognition and human cognition is the reflexive thinking that comes from language, which adds a layer of semantic addressing on top of neural connectivity. Speech production and understanding are an innate capacity of individual human brains. But because humanity is a social species, language is a property of human societies. Languages are conventional, shared by members of the same culture and learned by social contact. In human cognition, the categories that organize perception, action, memory and learning are expressed linguistically, so they may be reflected upon and shared in conversations. A language works like the semantic addressing system of a social virtual database.

But there is a problem with natural languages (English, French, Arabic, etc.): they are irregular and do not lend themselves easily to machine understanding or machine translation. The current trend in natural language processing, an important field of artificial intelligence, is to use statistical algorithms and deep learning methods to understand and produce linguistic data. But instead of using statistics, deep meaning adopts a regular and computable metalanguage. I designed IEML (Information Economy MetaLanguage) from the beginning to optimize semantic computing. IEML words are built from six primitive symbols and two operations: addition and multiplication. The semantic relations between IEML words follow the lines of their generative operations. The total number of words does not exceed 10,000. From its dictionary, the generative grammar of IEML allows the construction of sentences at three layers of complexity: topics are made of words, phrases (facts, events) are made of topics and super-phrases (theories, narratives) are made of phrases. The highest meaning unit, the text, is a unique set of sentences. Deep meaning technology uses IEML as the semantic addressing system of a social database.
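
To give a concrete feel for what a regular, generative metalanguage can offer, here is a hedged Python sketch of expressions built from a small set of primitives with two composition operations. The primitive names, the operation semantics and the way relations are computed are placeholders of my own; they illustrate the kind of construction described above, not the actual IEML grammar or dictionary.

```python
from dataclasses import dataclass
from typing import Tuple

# Six placeholder primitives standing in for IEML's primitive symbols.
PRIMITIVES = ("p1", "p2", "p3", "p4", "p5", "p6")

@dataclass(frozen=True)
class Expr:
    op: str                        # "primitive", "add" or "mul"
    parts: Tuple["Expr", ...] = ()
    name: str = ""

def prim(name: str) -> Expr:
    assert name in PRIMITIVES
    return Expr("primitive", (), name)

def add(*parts: Expr) -> Expr:
    """Addition: juxtaposing expressions (placeholder semantics)."""
    return Expr("add", parts)

def mul(*parts: Expr) -> Expr:
    """Multiplication: composing expressions into a higher-level unit (placeholder semantics)."""
    return Expr("mul", parts)

def components(e: Expr) -> set:
    """Every expression carries its generative history, so structural
    relations between expressions can be computed rather than guessed."""
    if e.op == "primitive":
        return {e.name}
    return set().union(*(components(p) for p in e.parts))

word_a = mul(prim("p1"), add(prim("p2"), prim("p3")))
word_b = mul(prim("p1"), prim("p4"))

print(components(word_a) & components(word_b))   # {'p1'}: a computable semantic relation
```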

Given large datasets, deep meaning allows the automatic computation of semantic relations between data, semantic analysis and semantic visualizations. This new technology fosters semantic interoperability: it decompartmentalizes tags, folksonomies, taxonomies, ontologies and languages. When online communities categorize, assess and exchange semantic data, they generate explorable ecosystems of ideas that represent their collective intelligence. Note that the vision of collective intelligence proposed here is distinct from the “wisdom of the crowd” model, which assumes independent agents and excludes dialogue and reflexivity. Just the opposite: deep meaning was designed from the beginning to nurture dialogue and reflexivity.

The main functions of the new public sphere


In the new public sphere, every netizen will act as an author, editor, artist, curator, critic, messenger, contractor and gamer. The next platform weaves five functions together: curation, creation, communication, transaction and immersion.

By curation I mean the collaborative creation, editing, analysis, synthesis, visualization, explanation and publication of datasets. People posting, liking and commenting on content on social media are already doing data curation in a primitive, simple way. Professionals active in the fields of heritage preservation (libraries, museums), digital humanities, education, knowledge management, data-driven journalism or open-source intelligence practice data curation in a more systematic and mindful manner. The new platform will offer a consistent service of collaborative data curation empowered by a common semantic addressing system.

Augmented by deep meaning technology, our public sphere will include a semantic metadata editor applicable to any document format. It will work as a registration system for works of the mind. Communication will be ensured by a global Twitter-like public posting system. But instead of the current hashtags, which are mere sequences of characters, the new semantic tags will self-translate into all natural languages and interconnect by conceptual proximity. The blockchain layer will allow any transaction to be recorded. The platform will remunerate authors and curators in collective intelligence coins, according to the public engagement generated by their work. The new public sphere will be grounded in the internet of things, smart cities, ambient intelligence and augmented reality. People will control their environment and communicate with sensors, software agents and bots of all kinds in the same immersive semantic space. Virtual worlds will simulate the collective intelligence of teams, networks and cities.

Bootstrapping

The IEML-based platform was developed between 2002 and 2017 at the University of Ottawa. A prototype is currently in a pre-alpha version, featuring the curation functionality. An alpha version will be demonstrated in the summer of 2018. How can we bridge the gap from fundamental research to a full-scale industrial platform? Such an endeavor will be much less expensive than the conquest of space and could bring a tremendous augmentation of human collective intelligence. Even though the network effect obviously applies to the new public space, small communities of pioneers will benefit immediately from its early release. On the humanistic side, I have already mentioned museums and libraries, researchers in the humanities and social sciences, collaborative learning networks, data-oriented journalists, knowledge management and business intelligence professionals, etc. On the engineering side, deep meaning opens a new sub-field of artificial intelligence that will enhance current techniques of big data analytics, machine learning, natural language processing, the internet of things, augmented reality and other immersive interfaces. Because it is open source by design, the development of the new technology can be crowdsourced and shared easily among many different actors.

Let’s draw a distinction between the new public sphere, including its semantic coordinate system, and the commercial platforms that will give access to it. With this distinction made, we can imagine a consortium of big tech companies, universities and governments supporting the development of the global public service of the future. We may also imagine one of the big techs taking the lead, associating its name with the new platform and developing hardware specialized in deep meaning. Another scenario is the foundation of a company that will ensure the construction and maintenance of the new platform as a free public service while sustaining itself by offering semantic services: research, consulting, design and training. In any case, a new international school must be established around a virtual dockyard where trainees and trainers progressively build and improve the semantic coordinate system and other basic models of the new platform. Students from various organizations and backgrounds will gain experience in the field of deep meaning and will disseminate the acquired knowledge back into their communities.

Radio broadcast (Suisse romande), 25 minutes, in French.

YouTube video (in English), 1 hour.


What is IEML?

  • IEML (Information Economy MetaLanguage) is an open (GPL3) and free artificial metalanguage that is simultaneously a programming language, a pivot between natural languages and a semantic coordinate system. When data are categorized in IEML, the metalanguage computes their semantic relationships and distances.
  • From a “social” point of view, online communities categorizing data in IEML generate explorable ecosystems of ideas that represent their collective intelligence.
  • Github.

What problems does IEML solve?

  • Decompartmentalization of tags, folksonomies, taxonomies, ontologies and languages (French and English for now).
  • Semantic search, automatic computing and visualization of semantic relations and distances between data.
  • Giving back to the users the information that they produce, enabling reflexive collective intelligence.

Who is IEML for?

Content curators

  • knowledge management
  • marketing
  • curation of open data from museums and libraries, crowdsourced curation
  • education, collaborative learning, connectivist MOOCs
  • monitoring, intelligence

Self-organizing online communities

  • smart cities
  • collaborative teams
  • communities of practice…

Researchers

  • artificial intelligence
  • data analytics
  • humanities and social sciences, digital humanities

What motivates people to adopt IEML?

  • IEML users participate in the leading edge of digital innovation, big data analytics and collective intelligence.
  • IEML can enhance other AI techniques like machine learning, deep learning, natural language processing and rule-based inference.

IEML tools

IEML v.0

IEML v.0 includes…

  • A dictionary of concepts whose editing is restricted to specialists but whose navigation and use are open to all.
  • A library of tags – called USLs (Uniform Semantic Locators) – whose editing, navigation and use are open to all.
  • An API allowing access to the dictionary, the library and their functionalities (semantic computing); see the sketch below.
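
As an illustration of what programmatic access could look like, the snippet below issues a plain HTTP request from Python; the base URL, the endpoint path, the query parameter and the response fields are hypothetical placeholders, not the documented routes of the actual IEML API.

```python
import requests

# Hypothetical base URL: replace with the address given in the real API documentation.
BASE_URL = "https://example.org/ieml/api"

def search_dictionary(term: str) -> dict:
    """Query a (hypothetical) dictionary-search endpoint for entries matching a term."""
    response = requests.get(f"{BASE_URL}/dictionary/search",
                            params={"q": term}, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(search_dictionary("collective intelligence"))
```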

Intlekt v.0

Intlekt v.0 is a collaborative data curation tool that allows:
– the categorization of data in IEML,
– the semantic visualization of collections of data categorized in IEML,
– the publication of these collections.

The prototype (to be issued in May 2018) will be single-user, but the full-blown app will be social.

Who made it?

The IEML project is designed and led by Pierre Lévy.

It has been financed by the Canada Research Chair in Collective Intelligence at the University of Ottawa (2002-2016).

At an early stage (2004-2011), Steve Newcomb and Michel Biezunski contributed to the design and implementation (parser, dictionary). Christian Desjardins implemented a second version of the dictionary. Andrew Roczniak helped with the first mathematical formalization, implemented a second version of the parser and a third version of the dictionary (2004-2016).

The 2016 version was implemented by Louis van Beurden, Hadrien Titeux (chief engineers), Candide Kemmler (project management, interface), Zakaria Soliman and Alice Ribaucourt.

The 2017 version (1.0) was implemented by Louis van Beurden (chief engineer), Eric Waldman (IEML edition interface, visualization), Sylvain Aube (Drupal), Ludovic Carré and Vincent Lefoulon (collections and tags management).


Dice sculpture by Tony Cragg

Having laid out in a previous post the principles of a cartography of collective intelligence, I now turn to human development, which is its correlate, at once the condition and the effect of collective intelligence. As a first step, I will square the semiotic triad sign/being/thing (star/face/cube) to obtain the nine “becomings”, which point toward the main directions of human development.

Map of the becomings

The nine paths that lead from one of the three semiotic poles to itself or to the other two are called becomings in IEML (see the semantic map M:M:. in the IEML dictionary). A becoming cannot be reduced to its starting point, to its arrival point, or to the sum of the two: it is the in-between, the metamorphosis of one into the other. Thus memory ultimately means “the sign becoming thing”. Note also that each of the nine becomings can turn toward the actual as well as toward the virtual. For example, thought can take as its object the sensible real as well as its own speculations. At the other end of the spectrum, space can refer to the container of physical materiality as well as to the idealities of geometry. In the course of our exploration, we will discover that each of the nine becomings indicates a possible direction of philosophical exploration. The nine becomings are at once conceptually distinct and really interdependent, since each of them needs the support of the others in order to unfold.

Thought

In thought – s. in IEML – both the substance (starting point) and the attribute (arrival point) are signs. Thought is, in a sense, the sign squared. It marks the transformation of one sign into another sign, as in deduction, induction, interpretation, imagination and so on.

The concept of thought or intellection is central to the Western idealist tradition, which starts with Plato and runs through Aristotle, the Neoplatonists, the theologians of the Middle Ages, Kant and Hegel, all the way to Husserl. Intellection also lies at the heart of Islamic philosophy, both in Avicenna (Ibn Sina) and his continuators in Iranian philosophy up to the seventeenth century, and in the Andalusian Averroes (Ibn Rushd). It remains central for most of the great philosophies of meditative India. Human existence, and even more so philosophical existence, is necessarily immersed in reflective discursive thought. Where does this thought originate? What are its structures? How can human thought be brought to its perfection? So many questions that philosophical inquiry cannot evade.

Language

Language – b. in IEML – is understood here as a code of communication (in the broadest sense of the term) that actually functions in the human universe. Language is a “becoming-being of the sign”, a transformation of the sign into intelligence, an illumination of the subject by the sign.

Some philosophies take the problems of language and communication as their starting point. Wittgenstein, for example, made his philosophy revolve largely around the problem of the limits of language. But note that he was also interested in questions of logic and in the problem of truth. In a different style, a philosopher like Peirce never stopped deepening the question of meaning and of how signs function. Austin explored the theme of speech acts, and so on. We can see that this becoming designates the semiotic (or linguistic) moment of philosophy. The human is a speaking being whose existence can only be realized through and within language.

Memory

In memory – t. in IEML – the sign as substance is reified in its thing attribute. This becoming evokes the elementary gesture of inscription or recording. The sign’s becoming-thing is considered here as the condition of possibility of memory. It governs the very notion of time.

The passage of time and its inscription – memory – was one of Bergson’s favorite themes (he was notably the author of Matter and Memory). Bergson placed the thickness of life and the evolutionary surge of creation on the side of memory, in opposition to the physicalist determinism of the nineteenth century (“matter”) and to logico-mathematical mechanism, which he assigned to space. A fine analysis of the passage of time and its inscription can also be found in the philosophies of impermanence and karma, such as Buddhism. Evolutionism in general, whether cosmic, biological or cultural, is founded on a dialectic between the passage of time and the retention of a coded memory. Finally, let us note that many great religious traditions are founded on sacred scriptures that belong to the same archetype of inscription. In a sense, because we are inevitably subject to temporal sequentiality, our existence is memory: the short-term memory of perception, the long-term memory of recollection and learning, the individual memory in which collective memories relive and converge.

Society

In society – k. in IEML – a community of beings organizes itself by means of signs. We commit ourselves to promises and contracts. We obey the law. The members of a clan share the same totemic animal. We fight under the same flag. We exchange economic goods by agreeing on their value. We listen to music together and we share the same language. In all these cases, as in many others, a community of humans converges and creates a social unity by attaching itself to the same conventional signifying reality: so many ways of “making society”.

We know that sociology is an offshoot of philosophy. Even before the sociological discipline separated from the common trunk, the social moment of philosophy was illustrated by great names: Jean-Jacques Rousseau and his theory of the contract, Auguste Comte, who made knowledge culminate in the science of societies, Karl Marx, who made class struggle the engine of history and reduced the economy, politics and culture in general to “real social relations”. Durkheim, Mauss, Weber and their sociologist and anthropologist successors have inquired into the mechanisms by which we “make society”. The human is a political animal who cannot not live in society. How can we enliven philia, the bond of friendship between members of the same community? What are the true or the good societies? Spiritual, cosmopolitan, imperial, civic, national…? What are the best political regimes? So many questions that remain open.

Affect

In affect – m. in IEML – a being orients itself toward other beings, or determines its most intimate interiority. Affect is understood here as the tropism of subjectivity. Desire, love, hatred, indifference, compassion and equanimity are emotional qualities that circulate between beings.

After the poets, the devout and the actors, Freud, psychoanalysis and a good part of clinical psychology insist on the importance of affect and of the emotional functions for understanding human existence. The importance of “emotional intelligence” has been much emphasized recently. But this is nothing new. Philosophers have long inquired into love (see Plato’s Symposium) and the passions (Descartes himself wrote a treatise on the passions), even if they do not always make them the central theme of their philosophy. Existence necessarily struggles with affective problems, because no human life can escape emotions, attraction and repulsion, joy and sadness. But are emotions legitimate expressions of our spontaneous nature, or “poisons of the mind” (in the strong Buddhist expression) that must not be allowed to govern our existence? Or both? Many philosophical schools, in the East as well as in the West, have praised ataraxia, mental calm or, at the very least, the moderation of the passions. But how can we master the passions, and how can we master them without knowing them?

World

In the world – n. in IEML – human beings (being as substance) express themselves in their physical environment (thing as attribute). They inhabit this environment, work it with tools, name its parts and objects, and attribute values to them. This is how a culturally ordered world, a cosmos, is constructed.

Nietzsche (who gave a central role to the creation of values), like anthropological thought, grounds his approach mainly on the concept of “world”, a cosmos organized by human culture. The all-encompassing Indian notion of dharma ultimately refers to a transcendent cosmic order that seeks to manifest itself down to the smallest details of existence. Philosophical inquiry into justice joins this idea that human acts are in resonance or in dissonance with a universal order. But what is the “way” (the Dao of Chinese philosophy) of this order? Is its universality natural or conventional? What principles does it obey?

Truth

Truth – d. in IEML – describes a “becoming-sign of the thing”. A reference (a state of affairs) manifests itself through a declarative message (a sign). A statement is true only if it contains a correct description of a state of affairs. Authenticity is said of a sign that guarantees a thing.

The logical tradition and analytic philosophy are mainly interested in the concept of truth (in the sense of the accuracy of facts and reasoning) as well as in the problems related to reference. The epistemology and cognitive sciences that belong to this current place the construction of true knowledge at the foundation of their approach. But beyond these specializations, the question of truth is an obligatory waypoint of philosophical inquiry. Even the most skeptical cannot renounce truth without renouncing their own skepticism. If we want to emphasize its stability and coherence, we derive it from the laws of logic and from rigorous procedures of empirical verification. But if we want to emphasize its fragility and multiplicity, we have it secreted by paradigms (in Kuhn’s sense), epistemes and social constructions of meaning, all varying with times and places.

Life

In life – f. in IEML – a substantial thing (the materiality of the body) takes on the attribute of being, with its quality of subjective interiority. Life thus evokes the physical incarnation of a sentient creature. When a living being eats and drinks, it transforms objectified entities into materials and fuel for the organic processes that support its subjectivity: the thing becoming being.

Empiricists ground knowledge in the senses. Phenomenologists analyze, among other things, the way things appear to us in perception. Biologism reduces the functioning of the mind to that of neurons or hormones. So many traditions and points of view that, despite their differences, converge on the human organism, its functions and its sensibility. Many great philosophers were biologists (Aristotle, Darwin) or physicians (Hippocrates, Avicenna, Maimonides…). Chinese medicine and Chinese philosophy are deeply interrelated. It is undeniable that human existence emanates from a living body and that all the events of this existence are inscribed in one way or another in this body.

Space

In space – l. in IEML – whether concrete or abstract, a thing relates to other things and manifests itself in the universe of things. Space is a system of transformation of things. It is built out of topological relations and geometric proximities, of territories, envelopes, limits and paths, of closures and passages. Space manifests, in a way, the superlative essence of the thing, just as thought manifested that of the sign and affect that of being.

On the philosophical plane, geometers, topologists, atomists, materialists and physicists ground their conceptions on space. As I pointed out above, idealist geometrism and materialist atomism converge on the founding importance of space. Atoms are in the void, that is, in space. Human existence necessarily projects itself into the spatial multitude that it constructs and inhabits: physical or imaginary geographies, urban or rural landscapes, architectures of concrete or of concepts, geometric distances or topological connections, folds and networks without end.

We can thus characterize philosophies according to the becoming or becomings that they take as the starting point of their approach, or that constitute their favored theme. The IEML becomings represent “obligatory waypoints” of existence. From its alphabet onward, the metalanguage opens the semantic sphere to the expression of any philosophy whatsoever, exactly like a natural language. But it is also a philosophical language, designed to avoid cognitive blind spots and the limiting reflexes of thought that come from the exclusive use of a single natural language, from the practice of a single discipline that has become second nature, or from overly exclusive philosophical points of view. It was built precisely to favor the free exploration of all semantic directions. This is why, in IEML, each philosophy appears as a combination of partial points of view on an integral semantic sphere that can accommodate them all and interweaves them in its radical circularity.

Ancient hands, Argentina

Full citation: « The Philosophical Concept of Algorithmic Intelligence », Spanda Journal special issue on “Collective Intelligence”, V (2), December 2014, p. 17-25. The original text can be found for free online at Spanda.

“Transcending the media, airborne machines will announce the voice of the many. Still indiscernible, cloaked in the mists of the future, bathing another humanity in its murmuring, we have a rendezvous with the over-language.” Collective Intelligence, 1994, p. xxviii.

Twenty years after Collective Intelligence

This paper was written in 2014, twenty years after L’intelligence collective [the original French edition of Collective Intelligence].[2] The main purpose of Collective Intelligence was to formulate a vision of a cultural and social evolution that would be capable of making the best use of the new possibilities opened up by digital communication. Long before the success of social networks on the Web,[3] I predicted the rise of “engineering the social bond.” Eight years before the founding of Wikipedia in 2001, I imagined an online “cosmopedia” structured in hypertext links. When the digital humanities and the social media had not even been named, I was calling for an epistemological and methodological transformation of the human sciences. But above all, at a time when less than one percent of the world’s population was connected,[4] I was predicting (along with a small minority of thinkers) that the Internet would become the centre of the global public space and the main medium of communication, in particular for the collaborative production and sharing of knowledge and the dissemination of news.[5] In spite of the considerable growth of interactive digital communication over the past twenty years, we are still far from the ideal described in Collective Intelligence. It seemed to me already in 1994 that the anthropological changes under way would take root and inaugurate a new phase in the human adventure only if we invented what I then called an “over-language.” How can communication readily reach across the multiplicity of dialects and cultures? How can we map the deluge of digital data, order it around our interests and extract knowledge from it? How can we master the waves, currents and depths of the software ocean? Collective Intelligence envisaged a symbolic system capable of harnessing the immense calculating power of the new medium and making it work for our benefit. But the over-language I foresaw in 1994 was still in the “indiscernible” period, shrouded in “the mists of the future.” Twenty years later, the curtain of mist has been partially pierced: the over-language now has a name, IEML (acronym for Information Economy MetaLanguage), a grammar and a dictionary.[6]

Reflexive collective intelligence

Collective intelligence drives human development, and human development supports the growth of collective intelligence. By improving collective intelligence we can place ourselves in this feedback loop and orient it in the direction of a self-organizing virtuous cycle. This is the strategic intuition that has guided my research. But how can we improve collective intelligence? In 1994, the concept of digital collective intelligence was still revolutionary. In 2014, this term is commonly used by consultants, politicians, entrepreneurs, technologists, academics and educators. Crowdsourcing has become a common practice, and knowledge management is now supported by the decentralized use of social media. The interconnection of humanity through the Internet, the development of the knowledge economy, the rush to higher education and the rise of cloud computing and big data are all indicators of an increase in our cognitive power. But we have yet to cross the threshold of reflexive collective intelligence. Just as dancers can only perfect their movements by reflecting them in a mirror, just as yogis develop awareness of their inner being only through the meditative contemplation of their own mind, collective intelligence will only be able to set out on the path of purposeful learning and thus move on to a new stage in its growth by achieving reflexivity. It will therefore need to acquire a mirror that allows it to observe its own cognitive processes. Be careful! Collective intelligence does not and will not have autonomous consciousness: when I talk about reflexive collective intelligence, I mean that human individuals will have a clearer and better-shared knowledge than they have today of the collective intelligence in which they participate, a knowledge based on transparent principles and perfectible scientific methods.

The key: A complete modelling of language

But how can a mirror of collective intelligence be constructed? It is clear that the context of reflection will be the algorithmic medium or, to put it another way, the Internet, the calculating power of cloud computing, ubiquitous communication and distributed interactive mobile interfaces. Since we can only reflect collective intelligence in the algorithmic medium, we must yield to the nature of that medium and have a calculable model of our intelligence, a model that will be fed by the flows of digital data from our activities. In short, we need a mathematical (with calculable models) and empirical (based on data) science of collective intelligence. But, once again, is such a science possible? Since humanity is a species that is highly social, its intelligence is intrinsically social, or collective. If we had a mathematical and empirical science of human intelligence in general, we could no doubt derive a science of collective intelligence from it. This leads us to a major problem that has been investigated in the social sciences, the human sciences, the cognitive sciences and artificial intelligence since the twentieth century: is a mathematized science of human intelligence possible? It is language or, to put it another way, symbolic manipulation that distinguishes human cognition. We use language to categorize sensory data, to organize our memory, to think, to communicate, to carry out social actions, etc. My research has led me to the conclusion that a science of human intelligence is indeed possible, but on the condition that we solve the problem of the mathematical modelling of language. I am speaking here of a complete scientific modelling of language, one that would not be limited to the purely logical and syntactic aspects or to statistical correlations of corpora of texts, but would be capable of expressing semantic relationships formed between units of meaning, and doing so in an algebraic, generative mode.[7] Convinced that an algebraic model of semantics was the key to a science of intelligence, I focused my efforts on discovering such a model; the result was the invention of IEML.[8] IEML—an artificial language with calculable semantics—is the intellectual technology that will make it possible to find answers to all the above-mentioned questions. We now have a complete scientific modelling of language, including its semantic aspects. Thus, a science of human intelligence is now possible. It follows, then, that a mathematical and empirical science of collective intelligence is possible. Consequently, a reflexive collective intelligence is in turn possible. This means that the acceleration of human development is within our reach.

The scientific file: The Semantic Sphere

I have written two volumes on my project of developing the scientific framework for a reflexive collective intelligence, and I am currently writing the third. This trilogy can be read as the story of a voyage of discovery. The first volume, The Semantic Sphere 1 (2011),[9] provides the justification for my undertaking. It contains the statement of my aims, a brief intellectual autobiography and, above all, a detailed dialogue with my contemporaries and my predecessors. With a substantial bibliography,[10] that volume presents the main themes of my intellectual process, compares my thoughts with those of the philosophical and scientific tradition, engages in conversation with the research community, and finally, describes the technical, epistemological and cultural context that motivated my research. Why write more than four hundred pages to justify a program of scientific research? For one very simple reason: no one in the contemporary scientific community thought that my research program had any chance of success. What is important in computer science and artificial intelligence is logic, formal syntax, statistics and biological models. Engineers generally view social sciences such as sociology or anthropology as nothing but auxiliary disciplines limited to cosmetic functions: for example, the analysis of usage or the experience of users. In the human sciences, the situation is even more difficult. All those who have tried to mathematize language, from Leibniz to Chomsky, to mention only the greatest, have failed, achieving only partial results. Worse yet, the greatest masters, those from whom I have learned so much, from the semiologist Umberto Eco[11] to the anthropologist Lévi-Strauss,[12] have stated categorically that the mathematization of language and the human sciences is impracticable, impossible, utopian. The path I wanted to follow was barred not only by the habits of engineers and the major authorities in the human sciences but also by the nearly universal view that “meaning depends on context,”[13] by arguments that unscrupulously confuse mathematization with quantification, by knee-jerk denunciations of the “ethnocentric bias” of any universalist approach[14] and by reminders of the “failure” of Esperanto.[15] I have even heard some of the most agnostic thinkers speak of the curse of Babel. It is therefore not surprising that I want to make a strong case for the scientific nature of my undertaking: all explorers have returned empty-handed from this voyage toward mathematical language, if they returned at all.

The metalanguage: IEML

But one cannot go on forever announcing one’s departure on a voyage: one must set forth, navigate . . . and return. The second volume of my trilogy, La grammaire d’IEML,[16] contains the very technical account of my journey from algebra to language. In it, I explain how to construct sentences and texts in IEML, with many examples. But that 150-page book also contains 52 very dense pages of algorithms and mathematics that show in detail how the internal semantic networks of that artificial language can be calculated and translated automatically into natural languages. To connect a mathematical syntax to a semantics in natural languages, I had to, almost single-handed,[17] face storms on uncharted seas, to advance across the desert with no certainty that fertile land would be found beyond the horizon, to wander for twenty years in the convoluted labyrinth of meaning. But by gradually joining sign, being and thing in turn in the sense of the virtual and actual, I finally had my Ariadne’s thread, and I made a map of the labyrinth, a complicated map of the metalanguage, that “Northwest Passage”[18] where the waters of the exact sciences and the human sciences converged. I had set my course in a direction no one considered worthy of serious exploration since the crossing was thought impossible. But, against all expectations, my journey reached its goal. The IEML Grammar is the scientific proof of this. The mathematization of language is indeed possible, since here is a mathematical metalanguage. What is it exactly? IEML is an artificial language with calculable semantics that puts no limits on the possibilities for the expression of new meanings. Given a text in IEML, algorithms reconstitute the internal grammatical and semantic network of the text, translate that network into natural languages and calculate the semantic relationships between that text and the other texts in IEML. The metalanguage generates a huge group of symmetric transformations between semantic networks, which can be measured and navigated at will using algorithms. The IEML Grammar demonstrates the calculability of the semantic networks and presents the algorithmic workings of the metalanguage in detail. Used as a system of semantic metadata, IEML opens the way to new methods for analyzing large masses of data. It will be able to support new forms of translinguistic hypertextual communication in social media, and will make it possible for conversation networks to observe and perfect their own collective intelligence. For researchers in the human sciences, IEML will structure an open, universal encyclopedic library of multimedia data that reorganizes itself automatically around subjects and the interests of its users.
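
To give a concrete, if deliberately naive, picture of what “semantic metadata” can do, here is a minimal sketch in Python. It does not use the real IEML codes, grammar or toolchain; the document names, tags and the similarity measure are hypothetical stand-ins, chosen only to show how items annotated in one shared semantic code can have their mutual relationships computed automatically.

```python
# Purely illustrative: the tags below are invented stand-ins, not real IEML expressions.
from itertools import combinations

# Each document carries a set of semantic tags drawn from one shared metalanguage.
documents = {
    "doc_a": {"collective_intelligence", "algorithm", "language"},
    "doc_b": {"collective_intelligence", "education"},
    "doc_c": {"algorithm", "big_data", "language"},
}

def semantic_overlap(tags_1, tags_2):
    """Jaccard similarity between two tag sets (a crude stand-in for a real semantic distance)."""
    return len(tags_1 & tags_2) / len(tags_1 | tags_2)

# Compute a weighted relation graph between documents from their shared metadata.
relations = {
    (a, b): semantic_overlap(documents[a], documents[b])
    for a, b in combinations(documents, 2)
}

for (a, b), weight in sorted(relations.items(), key=lambda kv: -kv[1]):
    if weight > 0:
        print(f"{a} <-> {b}: semantic relatedness {weight:.2f}")
```

The point of the sketch is only the workflow: once data carries metadata expressed in one shared semantic code, the relation graph between items can be recomputed automatically whenever the collection changes, instead of being drawn by hand.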

A new frontier: Algorithmic Intelligence

Having mapped the path I discovered in La grammaire d’IEML, I will now relate what I saw at the end of my journey, on the other side of the supposedly impassable territory: the new horizons of the mind that algorithmic intelligence illuminates. Because IEML is obviously not an end in itself. It is only the necessary means for the coming great digital civilization to enable the sun of human knowledge to shine more brightly. I am talking here about a future (but not so distant) state of intelligence, a state in which capacities for reflection, creation, communication, collaboration, learning, and analysis and synthesis of data will be infinitely more powerful and better distributed than they are today. With the concept of Algorithmic Intelligence, I have completed the risky work of prediction and cultural creation I undertook with Collective Intelligence twenty years ago. The contemporary algorithmic medium is already characterized by digitization of data, automated data processing in huge industrial computing centres, interactive mobile interfaces broadly distributed among the population and ubiquitous communication. We can make this the medium of a new type of knowledge—a new episteme[19]—by adding a system of semantic metadata based on IEML. The purpose of this paper is precisely to lay the philosophical and historical groundwork for this new type of knowledge.

Philosophical genealogy of algorithmic intelligence

The three ages of reflexive knowledge

Since my project here involves a reflexive collective intelligence, I would like to place the theme of reflexive knowledge in its historical and philosophical context. As a first approximation, reflexive knowledge may be defined as knowledge knowing itself. “All men by nature desire to know,” wrote Aristotle, and this knowledge implies knowledge of the self.[20] Human beings have no doubt been speculating about the forms and sources of their own knowledge since the dawn of consciousness. But the reflexivity of knowledge took a decisive step around the middle of the first millennium BCE,[21] during the period when the Buddha, Confucius, the Hebrew prophets, Socrates and Zoroaster (in alphabetical order) lived. These teachers involved the entire human race in their investigations: they reflected consciousness from a universal perspective. This first great type of systematic research on knowledge, whether philosophical or religious, almost always involved a divine ideal, or at least a certain “relation to Heaven.” Thus we may speak of a theosophical age of reflexive knowledge. I will examine the Aristotelian lineage of this theosophical consciousness, which culminated in the concept of the agent intellect. Starting in the sixteenth century in Europe—and spreading throughout the world with the rise of modernity—there was a second age of reflection on knowledge, which maintained the universal perspective of the previous period but abandoned the reference to Heaven and confined itself to human knowledge, with its recognized limits but also its rational ideal of perfectibility. This was the second age, the scientific age, of reflexive knowledge. Here, the investigation follows two intertwined paths: one path focusing on what makes knowledge possible, the other on what limits it. In both cases, knowledge must define its transcendental subject, that is, it must discover its own determinations. There are many signs in 2014 indicating that in the twenty-first century—around the point where half of humanity is connected to the Internet—we will experience a third stage of reflexive knowledge. This “version 3.0” will maintain the two previous versions’ ideals of universality and scientific perfectibility but will be based on the intensive use of technology to augment and reflect systematically our collective intelligence, and therefore our capacities for personal and social learning. This is the coming technological age of reflexive knowledge with its ideal of an algorithmic intelligence. The brief history of these three modalities—theosophical, scientific and technological—of reflexive knowledge can be read as a philosophical genealogy of algorithmic intelligence.

The theosophical age and its agent intellect

A few generations earlier, Socrates might have been a priest in the circle around the Pythia; he had taken the famous maxim “Know thyself” from the Temple of Apollo at Delphi. But in the fifth century BCE in Athens, Socrates extended the Delphic injunction in an unexpected way, introducing dialectical inquiry. He asked his contemporaries: What do you think? Are you consistent? Can you justify what you are saying about courage, justice or love? Could you repeat it seriously in front of a little group of intelligent or curious citizens? He thus opened the door to a new way of knowing one’s own knowledge, a rational expansion of consciousness of self. His main disciple, Plato, followed this path of rigorous questioning of the unthinking categorization of reality, and finally discovered the world of Ideas. Ideas for Plato are intellectual forms that, unlike the phenomena they categorize, do not belong to the world of Becoming. These intelligible forms are the original essences, archetypes beyond reality, which project into phenomenal time and space all those things that seem to us to be truly real because they are tangible, but that are actually only pale copies of the Ideas. We would say today that our experience is mainly determined by our way of categorizing it. Plato taught that humanity can only know itself as an intelligent species by going back to the world of Ideas and coming into contact with what explains and motivates its own knowledge. Aristotle, who was Plato’s student and Alexander the Great’s tutor, created a grand encyclopedic synthesis that would be used as a model for eighteen centuries in a multitude of cultures. In it, he integrates Plato’s discovery of Ideas with the sum of knowledge of his time. He places at the top of his hierarchical cosmos divine thought knowing itself. And in his Metaphysics,[22] he defines the divinity as “thought thinking itself.” This supreme self-reflexive thought was for him the “prime mover” that inspires the eternal movement of the cosmos. In De Anima,[23] his book on psychology and the theory of knowledge, he states that, under the effect of an agent intellect separate from the body, the passive intellect of the individual receives intelligible forms, a little like the way the senses receive sensory forms. In thinking these intelligible forms, the passive intellect becomes one with its objects and, in so doing, knows itself. Starting from the enigmatic propositions of Aristotle’s theology and psychology, a whole lineage of Peripatetic and Neo-Platonic philosophers—first “pagans,” then Muslims, Jews and Christians—developed the discipline of noetics, which speculates on the divine intelligence, its relation to human intelligence and the type of reflexivity characteristic of intelligence in general.[24] According to the masters of noetics, knowledge can be conceptually divided into three aspects that, in reality, are indissociable and complementary:

  • the intellect, or the knowing subject
  • the intelligence, or the operation of the subject
  • the intelligible, or what is known—or can be known—by the subject by virtue of its operation

From a theosophical perspective, everything that happens takes place in the unity of a self-reflexive divine thought, or (in the Indian tradition) in the consciousness of an omniscient Brahman or Buddha, open to infinity. In the Aristotelian tradition, Avicenna, Maimonides and Albert the Great considered that the identity of the intellect, the intelligence and the intelligible was achieved eternally in God, in the perfect reflexivity of thought thinking itself. In contrast, it was clear to our medieval theosophists that in the case of human beings, the three aspects of knowledge were neither complete nor identical. Indeed, since the passive intellect knows itself only through the intermediary of its objects, and these objects are constantly disappearing and being replaced by others, the reflexive knowledge of a finite human being can only be partial and transitory. Ultimately, human knowledge could know itself only if it simultaneously knew, completely and enduringly, all its objects. But that, obviously, is reserved only for the divinity. I should add that the “one beyond the one” of the neo-Platonist Plotinus and the transcendent deity of the Abrahamic traditions are beyond the reach of the human mind. That is why our theosophists imagined a series of mediations between transcendence and finitude. In the middle of that series, a metaphysical interface provides communication between the unimaginable and inaccessible deity and mortal humanity dispersed in time and space, whose living members can never know—or know themselves—other than partially. At this interface, we find the agent intellect, which is separate from matter in Aristotle’s psychology. The agent intellect is not limited—in the realm of time—to sending the intelligible categories that inform the human passive intellect; it also determines—in the realm of eternity—the maximum limit of what the human race can receive of the universal and perfectly reflexive knowledge of the divine. That is why, according to the medieval theosophists, the best a mortal intelligence can do to approach complete reflexive knowledge is to contemplate the operation in itself of the agent intellect that emanates from above and go back to the source through it. In accordance with this regulating ideal of reflexive knowledge, living humanity is structured hierarchically, because human beings are more or less turned toward the illumination of the agent intellect. At the top, prophets and theosophists receive a bright light from the agent intellect, while at the bottom, human beings turned toward coarse material appetites receive almost nothing. The influx of intellectual forms is gradually obscured as we go down the scale of degree of openness to the world above.

The scientific age and its transcendental subject

With the European Renaissance, the use of the printing press, the construction of new observation instruments, and the development of mathematics and experimental science heralded a new era. Reflection on knowledge took a critical turn with Descartes’s introduction of radical doubt and the scientific method, in accordance with the needs of educated Europe in the seventeenth century. God was still present in the Cartesian system, but He was only there, ultimately, to guarantee the validity of the efforts of human scientific thought: “God is not a deceiver.”[25] The fact remains that Cartesian philosophy rests on the self-reflexive edge, which has now moved from the divinity to the mortal human: “I think, therefore I am.”[26] In the second half of the seventeenth century, Spinoza and Leibniz received the critical scientific rationalism developed by Descartes, but they were dissatisfied with his dualism of thought (mind) and extension (matter). They therefore attempted, each in his own way, to constitute reflexive knowledge within the framework of coherent monism. For Spinoza, nature (identified with God) is a unique and infinite substance of which thought and extension are two necessary attributes among an infinity of attributes. This strict ontological monism is counterbalanced by a pluralism of expression, because the unique substance possesses an infinity of attributes, and each attribute, an infinity of modes. The summit of human freedom according to Spinoza is the intellectual love of God, that is, the most direct and intuitive possible knowledge of the necessity that moves the nature to which we belong. For Leibniz, the world is made up of monads, metaphysical entities that are closed but are capable of an inner perception in which the whole is reflected from their singular perspective. The consistency of this radical pluralism is ensured by the unique, infinite divine intelligence that has considered all possible worlds in order to create the best one, which corresponds to the most complex—or the richest—of the reciprocal reflections of the monads. As for human knowledge—which is necessarily finite—its perfection coincides with the clearest possible reflection of a totality that includes it but whose unity is thought only by the divine intelligence. After Leibniz and Spinoza, the eighteenth century saw the growth of scientific research, critical thought and the educational practices of the Enlightenment, in particular in France and the British Isles. The philosophy of the Enlightenment culminated with Kant, for whom the development of knowledge was now contained within the limits of human reason, without reference to the divinity, even to envelop or guarantee its reasoning. But the ideal of reflexivity and universality remained. The issue now was to acquire a “scientific” knowledge of human intelligence, which could not be done without the representation of knowledge to itself, without a model that would describe intelligence in terms of what is universal about it. This is the purpose of Kantian transcendental philosophy. Here, human intelligence, armed with its reason alone, now faces only the phenomenal world. Human intelligence and the phenomenal world presuppose each other. Intelligence is programmed to know sensory phenomena that are necessarily immersed in space and time. As for phenomena, their main dimensions (space, time, causality, etc.) correspond to ways of perceiving and understanding that are specific to human intelligence. 
These are forms of the transcendental subject and not intrinsic characteristics of reality. Since we are confined within our cognitive possibilities, it is impossible to know what things are “in themselves.” For Kant, the summit of reflexive human knowledge is in a critical awareness of the extension and the limits of our possibility of knowing. Descartes, Spinoza, Leibniz, the English and French Enlightenment, and Kant accomplished a great deal in two centuries, and paved the way for the modern philosophy of the nineteenth and twentieth centuries. A new form of reflexive knowledge grew, spread, and fragmented into the human sciences, which mushroomed with the end of the monopoly of theosophy. As this dispersion occurred, great philosophers attempted to grasp reflexive knowledge in its unity. The reflexive knowledge of the scientific era neither suppressed nor abolished reflexive knowledge of the theosophical type, but it opened up a new domain of legitimacy of knowledge, freed of the ideal of divine knowledge. This de jure separation did not prevent de facto unions, since there was no lack of religious scholars or scholarly believers. Modern scientists could be believers or non-believers. Their position in relation to the divinity was only a matter of motivation. Believers loved science because it revealed the glory of the divinity, and non-believers loved it because it explained the world without God. But neither of them used as arguments what now belonged only to their private convictions. In the human sciences, there were systematic explorations of the determinations of human existence. And since we are thinking beings, the determinations of our existence are also those of our thought. How do the technical, historical, economic, social and political conditions in which we live form, deform and set limits on our knowledge? What are the structures of our biology, our language, our symbolic systems, our communicative interactions, our psychology and our processes of subjectivation? Modern thought, with its scientific and critical ideal, constantly searches for the conditions and limits imposed on it, particularly those that are as yet unknown to it, that remain in the shadows of its consciousness. It seeks to discover what determines it “behind its back.” While the transcendental subject described by Kant in his Critique of Pure Reason fixed the image a great mind had of it in the late eighteenth century, modern philosophy explores a transcendental subject that is in the process of becoming, continually being re-examined and more precisely defined by the human sciences, a subject immersed in the vagaries of cultures and history, emerging from its unconscious determinations and the techno-symbolic mechanisms that drive it. I will now broadly outline the figure of the transcendental subject of the scientific era, a figure that re-examines and at the same time transforms the three complementary aspects of the agent intellect.

  • The Aristotelian intellect becomes living intelligence. This involves the effective cognitive activities of subjects, what is experienced spontaneously in time by living, mortal human beings.
  • The intelligence becomes scientific investigation. I use this term to designate all undertakings by which the living intelligence becomes scientifically intelligible, including the technical and symbolic tools, the methods and the disciplines used in those undertakings.
  • The intelligible becomes the intelligible intelligence, which is the image of the living intelligence that is produced through scientific and critical investigation.

An evolving transcendental subject emerges from this reflexive cycle in which the living intelligence contemplates its own image in the form of a scientifically intelligible intelligence. Scientific investigation here is the internal mirror of the transcendental subjectivity, the mediation through which the living intelligence observes itself. It is obviously impossible to confuse the living intelligence and its scientifically intelligible image, any more than one can confuse the map and the territory, or the experience and its description. Nor can one confuse the mirror (scientific investigation) with the being reflected in it (the living intelligence), nor with the image that appears in the mirror (the intelligible intelligence). These three aspects together form a dynamic unit that would collapse if one of them were eliminated. While the living intelligence would continue to exist without a mirror or scientific image, it would be very much diminished. It would have lost its capacity to reflect from a universal perspective. The creative paradox of the intellectual reflexivity of the scientific age may be formulated as follows. It is clear, first of all, that the living intelligence is truly transformed by scientific investigation, since the living intelligence that knows its image through a certain scientific investigation is not the same (does not have the same experience) as the one that does not know it, or that knows another image, the result of another scientific investigation. But it is just as clear, by definition, that the living intelligence reflects itself in the intelligible image presented to it through scientific knowledge. In other words, the living intelligence is equally dependent on the scientific and critical investigation that produces the intelligible image in which it is reflected. When we observe our physical appearance in a mirror, the image in the mirror in no way changes our physical appearance, only the mental representation we have of it. However, the living intelligence cannot discover its intelligible image without including the reflexive process itself in its experience, and without at the same time being changed. In short, a critical science that explores the limits and determinations of the knowing subject does not only reflect knowledge—it increases it. Thus the modern transcendental subject is—by its very nature—evolutionary, participating in a dynamic of growth. In line with this evolutionary view of the scientific age, which contrasts with the fixity of the previous age, the collectivity that possesses reflexive knowledge is no longer a theosophical hierarchy oriented toward the agent intellect but a republic of letters oriented toward the augmentation of human knowledge, a scientific community that is expanding demographically and is organized into academies, learned societies and universities. While the agent intellect looked out over a cosmos emanating from eternity, in analog resonance with the human microcosm, the transcendental subject explores a universe infinitely open to scientific investigation, technical mastery and political liberation.

The technological age and its algorithmic intelligence

Reflexive knowledge has, in fact, always been informed by some technology, since it cannot be exercised without symbolic tools and thus the media that support those tools. But the next age of reflexive knowledge can properly be called technological because the technical augmentation of cognition is explicitly at the centre of its project. Technology now enters the loop of reflexive consciousness as the agent of the acceleration of its own augmentation. This last point was no doubt glimpsed by a few pre–twentieth century philosophers, such as Condorcet in the eighteenth century, in his posthumous book of 1795, Sketch for a Historical Picture of the Progress of the Human Mind. But the truly technological dimension of reflexive knowledge really began to be thought about fully only in the twentieth century, with Pierre Teilhard de Chardin, Norbert Wiener and Marshall McLuhan, to whom we should also add the modest genius Douglas Engelbart. The regulating ideal of the reflexive knowledge of the theosophical age was the agent intellect, and that of the scientific-critical age was the transcendental subject. In continuity with the two preceding periods, the reflexive knowledge of the technological age will be organized around the ideal of algorithmic intelligence, which inherits from the agent intellect its universality or, in other words, its capacity to unify humanity’s reflexive knowledge. It also inherits its power to be reflected in finite intelligences. But, in contrast with the agent intellect, instead of descending from eternity, it emerges from the multitude of human actions immersed in space and time. Like the transcendental subject, algorithmic intelligence is rational, critical, scientific, purely human, evolutionary and always in a state of learning. But the vocation of the transcendental subject was to reflexively contain the human universe. However, the human universe no longer has a recognizable face. The “death of man” announced by Foucault[27] should be understood in the sense of the loss of figurability of the transcendental subject. The labyrinth of philosophies, methodologies, theories and data from the human sciences has become inextricably complicated. The transcendental subject has not only been dissolved in symbolic structures or anonymous complex systems, it is also fragmented in the broken mirror of the disciplines of the human sciences. It is obvious that the technical medium of a new figure of reflexive knowledge will be the Internet, and more generally, computer science and ubiquitous communication. But how can symbol-manipulating automata be used on a large scale not only to reunify our reflexive knowledge but also to increase the clarity, precision and breadth of the teeming diversity enveloped by our knowledge? The missing link is not only technical, but also scientific. We need a science that grasps the new possibilities offered by technology in order to give collective intelligence the means to reflect itself, thus inaugurating a new form of subjectivity. As the groundwork of this new science—which I call computational semantics—IEML makes use of the self-reflexive capacity of language without excluding any of its functions, whether they be narrative, logical, pragmatic or other. Computational semantics produces a scientific image of collective intelligence: a calculated intelligence that will be able to be explored both as a simulated world and as a distributed augmented reality in physical space. 
Scientific change will generate a phenomenological change,[28] since ubiquitous multimedia interaction with a holographic image of collective intelligence will reorganize the human sensorium. The last, but not the least, change: social change. The community that possessed the previous figure of reflexive knowledge was a scientific community that was still distinct from society as a whole. But in the new figure of knowledge, reflexive collective intelligence emerges from any human group. Like the previous figures—theosophical and scientific—of reflexive knowledge, algorithmic intelligence is organized in three interdependent aspects.

  • Reflexive collective intelligence represents the living intelligence, the intellect or soul of the great future digital civilization. It may be glimpsed by deciphering the signs of its approach in contemporary reality.
  • Computational semantics holds up a technical and scientific mirror to collective intelligence, which is reflected in it. Its purpose is to augment and reflect the living intelligence of the coming civilization.
  • Calculated intelligence, finally, is none other than the scientifically knowable image of the living intelligence of digital civilization. Computational semantics constructs, maintains and cultivates this image: that of an ecosystem of ideas emerging from human activity in the algorithmic medium, an image that can be explored in sensory-motor mode.

In short, in the emergent unity of algorithmic intelligence, computational semantics calculates the cognitive simulation that augments and reflects the collective intelligence of the coming civilization.

[1] Professor at the University of Ottawa

[2] And twenty-three years after L’idéographie dynamique (Paris: La Découverte, 1991).

[3] And before the WWW itself, which would become a public phenomenon only in 1994 with the development of the first browsers such as Mosaic. At the time when the book was being written, the Web still existed only in the mind of Tim Berners-Lee.

[4] Approximately 40% in 2014 and probably more than half in 2025.

[5] I obviously do not claim to be the only “visionary” on the subject in the early 1990s. The pioneering work of Douglas Engelbart and Ted Nelson and the predictions of Howard Rheingold, Joël de Rosnay and many others should be cited.

[6] See The basics of IEML (online at http://wp.me/P3bDiO-9V ).

[7] Beyond logic and statistics.

[8] IEML is the acronym for Information Economy MetaLanguage. See La grammaire d’IEML (online at http://wp.me/P3bDiO-9V ).

[9] The Semantic Sphere 1: Computation, Cognition and Information Economy (London: ISTE, 2011; New York: Wiley, 2011).

[10] More than four hundred reference books.

[11] Umberto Eco, The Search for the Perfect Language (Oxford: Blackwell, 1995).

[12] “But more madness than genius would be required for such an enterprise”: Claude Lévi-Strauss, The Savage Mind (University of Chicago Press, 1966), p. 130.

[13] Which is obviously true, but which only defines the problem rather than forbidding the solution.

[14] But true universalism is all-inclusive, and our daily lives are structured according to a multitude of universal standards, from space-time coordinates to HTTP on the Web. I responded at length in The Semantic Sphere to the prejudices of extremist post-modernism against scientific universality.

[15] Which is still used by a large community. But the only thing that Esperanto and IEML have in common is the fact that they are artificial languages. They have neither the same form nor the same purpose, nor the same use, which invalidates criticisms of IEML based on the criticism of Esperanto.

[16] See IEML Grammar (online at http://wp.me/P3bDiO-9V ).

[17] But, fortunately, supported by the Canada Research Chairs program and by my wife, Darcia Labrosse.

[18] Michel Serres, Hermès V. Le passage du Nord-Ouest (Paris: Minuit, 1980).

[19] The concept of episteme, which is broader than the concept of paradigm, was developed in particular by Michel Foucault in The Order of Things (New York: Pantheon, 1970) and The Archaeology of Knowledge and the Discourse on Language (New York: Pantheon, 1972).

[20] At the beginning of Book A of his Metaphysics.

[21] This is the Axial Age identified by Karl Jaspers.

[22] Book Lambda, 9

[23] In particular in Book III.

[24] See, for example, Moses Maimonides, The Guide For the Perplexed, translated into English by Michael Friedländer (New York: Cosimo Classic, 2007) (original in Arabic from the twelfth century). – Averroes (Ibn Rushd), Long Commentary on the De Anima of Aristotle, translated with introduction and notes by Richard C. Taylor (New Haven: Yale University Press, 2009) (original in Arabic from the twelfth century). – Saint Thomas Aquinas, On the Unity of the Intellect Against the Averroists (original in Latin from the thirteenth century). – Herbert A. Davidson, Alfarabi, Avicenna, and Averroes, on Intellect: Their Cosmologies, Theories of the Active Intellect, and Theories of Human Intellect (New York, Oxford: Oxford University Press, 1992). – Henri Corbin, History of Islamic Philosophy, translated by Liadain and Philip Sherrard (London: Kegan Paul, 1993). – Henri Corbin, En Islam iranien: aspects spirituels et philosophiques, 2nd ed. (Paris: Gallimard, 1978), 4 vol. – Alain de Libera, Métaphysique et noétique: Albert le Grand (Paris: Vrin, 2005).

[25] In Meditations on First Philosophy, “First Meditation.”

[26] Discourse on the Method, “Part IV.”

[27] At the end of The Order of Things (New York: Pantheon Books, 1970).

[28] See, for example, Stéphane Vial, L’être et l’écran (Paris: PUF, 2013).


Originally published by the CCCB Lab as an interview with Sandra Alvaro.

Pierre Lévy is a philosopher and a pioneer in the study of the impact of the Internet on human knowledge and culture. In Collective Intelligence. Mankind’s Emerging World in Cyberspace, published in French in 1994 (English translation in 1999), he describes a kind of collective intelligence that extends everywhere and is constantly evaluated and coordinated in real time, a collective human intelligence, augmented by new information technologies and the Internet. Since then, he has been working on a major undertaking: the creation of IEML (Information Economy Meta Language), a tool for the augmentation of collective intelligence by means of the algorithmic medium. IEML, which already has its own grammar, is a metalanguage that includes the semantic dimension, making it computable. This in turn allows a reflexive representation of collective intelligence processes.

In the book Semantic Sphere I. Computation, Cognition, and Information Economy, Pierre Lévy describes IEML as a new tool that works with the ocean of data of participatory digital memory, which is common to all humanity, and systematically turns it into knowledge. A system for encoding meaning that adds transparency, interoperability and computability to the operations that take place in digital memory.

By formalising meaning, this metalanguage adds a human dimension to the analysis and exploitation of the data deluge that is the backdrop of our lives in the digital society. And it also offers a new standard for the human sciences with the potential to accommodate maximum diversity and interoperability.

In “The Technologies of Intelligence” and “Collective Intelligence”, you argue that the Internet and related media are new intelligence technologies that augment the intellectual processes of human beings. And that they create a new space of collaboratively produced, dynamic, quantitative knowledge. What are the characteristics of this augmented collective intelligence?

The first thing to understand is that collective intelligence already exists. It is not something that has to be built. Collective intelligence exists at the level of animal societies: it exists in all animal societies, especially insect societies and mammal societies, and of course the human species is a marvellous example of collective intelligence. In addition to the means of communication used by animals, human beings also use language, technology, complex social institutions and so on, which, taken together, create culture. Bees have collective intelligence but without this cultural dimension. In addition, human beings have personal reflexive intelligence that augments the capacity of global collective intelligence. This is not true for animals but only for humans.

Now the point is to augment human collective intelligence. The main way to achieve this is by means of media and symbolic systems. Human collective intelligence is based on language and technology and we can act on these in order to augment it. The first leap forward in the augmentation of human collective intelligence was the invention of writing. Then we invented more complex, subtle and efficient media like paper, the alphabet and positional systems to represent numbers using ten numerals including zero. All of these things led to a considerable increase in collective intelligence. Then there was the invention of the printing press and electronic media. Now we are in a new stage of the augmentation of human collective intelligence: the digital or – as I call it – algorithmic stage. Our new technical structure has given us ubiquitous communication, interconnection of information, and – most importantly – automata that are able to transform symbols. With these three elements we have an extraordinary opportunity to augment human collective intelligence.

You have suggested that there are three stages in the progress of the algorithmic medium prior to the semantic sphere: the addressing of information in the memory of computers (operating systems), the addressing of computers on the Internet, and finally the Web – the addressing of all data within a global network, where all information can be considered to be part of an interconnected whole. This externalisation of the collective human memory and intellectual processes has increased individual autonomy and the self-organisation of human communities. How has this led to a global, hypermediated public sphere and to the democratisation of knowledge?

This democratisation of knowledge is already happening. If you have ubiquitous communication, it means that you have access to any kind of information almost for free: the best example is Wikipedia. We can also speak about blogs, social media, and the growing open data movement. When you have access to all this information, when you can participate in social networks that support collaborative learning, and when you have algorithms at your fingertips that can help you to do a lot of things, there is a genuine augmentation of collective human intelligence, an augmentation that implies the democratisation of knowledge.

What role do cultural institutions play in this democratisation of knowledge?

Cultural institutions are publishing data in an open way; they are participating in broad conversations on social media, taking advantage of the possibilities of crowdsourcing, and so on. They also have the opportunity to develop an open, bottom-up knowledge management strategy.


A Model of Collective Intelligence in the Service of Human Development (Pierre Lévy, in The Semantic Sphere, 2011). S = sign, B = being, T = thing.

We are now in the midst of what the media have branded the ‘big data’ phenomenon. Our species is producing and storing data in volumes that surpass our powers of perception and analysis. How is this phenomenon connected to the algorithmic medium?

First let’s say that what is happening now, the availability of big flows of data, is just an actualisation of the Internet’s potential. It was always there. It is just that we now have more data and more people are able to get this data and analyse it. There has been a huge increase in the amount of information generated in the period from the second half of the twentieth century to the beginning of the twenty-first century. At the beginning only a few people used the Internet and now almost half of the human population is connected.

At first the Internet was a way to send and receive messages. We were happy because we could send messages to the whole planet and receive messages from the entire planet. But the biggest potential of the algorithmic medium is not the transmission of information: it is the automatic transformation of data (through software).

We could say that the big data available on the Internet is currently analysed, transformed and exploited by big governments, big scientific laboratories and big corporations. That’s what we call big data today. In the future there will be a democratisation of the processing of big data. It will be a new revolution. If you think about the situation of computers in the early days, only big companies, big governments and big laboratories had access to computing power. But nowadays we have the revolution of social computing and decentralized communication by means of the Internet. I look forward to the same kind of revolution regarding the processing and analysis of big data.

Communications giants like Google and Facebook are promoting the use of artificial intelligence to exploit and analyse data. This means that logic and computing tend to prevail in the way we understand reality. IEML, however, incorporates the semantic dimension. How will this new model be able to describe the way we create and transform meaning, and make it computable?

Today we have something called the “semantic web”, but it is not semantic at all! It is based on logical links between data and on algebraic models of logic. There is no model of semantics there. So in fact there is currently no model that sets out to automate the creation of semantic links in a general and universal way. IEML will enable the simulation of ecosystems of ideas based on people’s activities, and it will reflect collective intelligence. This will completely change the meaning of “big data” because we will be able to transform this data into knowledge.

We have very powerful tools at our disposal, we have enormous, almost unlimited computing power, and we have a medium where communication is ubiquitous. You can communicate everywhere, all the time, and all documents are interconnected. Now the question is: how will we use all these tools in a meaningful way to augment human collective intelligence?

This is why I have invented a language that automatically computes internal semantic relations. When you write a sentence in IEML it automatically creates the semantic network between the words in the sentence, and shows the semantic networks between the words in the dictionary. When you write a text in IEML, it creates the semantic relations between the different sentences that make up the text. Moreover, when you select a text, IEML automatically creates the semantic relations between this text and the other texts in a library. So you have a kind of automatic semantic hypertextualisation. The IEML code programs semantic networks and it can easily be manipulated by algorithms (it is a “regular language”). Plus, IEML self-translates automatically into natural languages, so that users will not be obliged to learn this code.

The most important thing is that if you categorize data in IEML it will automatically create a network of semantic relations between the data. You can have automatically-generated semantic relations inside any kind of data set. This is the point that connects IEML and Big Data.
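
As a purely hypothetical illustration of this point, the following Python sketch categorises two unrelated collections (a book catalogue and a set of short messages) with the same made-up semantic codes; the codes merely echo the sign/being/thing vocabulary used elsewhere in this text and are not actual IEML. Because both collections share one metalanguage, links between their items can be derived automatically.

```python
# Hypothetical sketch: made-up semantic codes, not actual IEML expressions.
from collections import defaultdict

catalogue = [
    {"id": "book-1", "tags": {"S:memory", "T:medium"}},
    {"id": "book-2", "tags": {"B:community", "S:language"}},
]
messages = [
    {"id": "msg-9", "tags": {"S:language", "T:medium"}},
    {"id": "msg-7", "tags": {"B:community"}},
]

# One index over both collections: items that share a code become semantically related.
index = defaultdict(set)
for item in catalogue + messages:
    for tag in item["tags"]:
        index[tag].add(item["id"])

for tag, related_items in sorted(index.items()):
    if len(related_items) > 1:
        print(f"{tag}: {sorted(related_items)}")
```

The same workflow would apply to any data set: the categorisation is the only manual step, and the network of relations follows from it automatically.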

So IEML provides a system of computable metadata that makes it possible to automate semantic relationships. Do you think it could become a new common language for human sciences and contribute to their renewal and future development?

Everyone will be able to categorise data however they want. Any discipline, any culture, any theory will be able to categorise data in its own way, to allow diversity, using a single metalanguage, to ensure interoperability. This will automatically generate ecosystems of ideas that will be navigable with all their semantic relations. You will be able to compare different ecosystems of ideas according to their data and the different ways of categorising them. You will be able to choose different perspectives and approaches. For example, the same people interpreting different sets of data, or different people interpreting the same set of data. IEML ensures the interoperability of all ecosystems of ideas. On one hand you have the greatest possibility of diversity, and on the other you have computability and semantic interoperability. I think that it will be a big improvement for the human sciences because today the human sciences can use statistics, but it is a purely quantitative method. They can also use automatic reasoning, but it is a purely logical method. But with IEML we can compute using semantic relations, and it is only through semantics (in conjunction with logic and statistics) that we can understand what is happening in the human realm. We will be able to analyse and manipulate meaning, and there lies the essence of the human sciences.

Let’s talk about the current stage of development of IEML: I know it’s early days, but can you outline some of the applications or tools that may be developed with this metalanguage?

It is still too early; perhaps the first application will be a kind of collective intelligence game in which people work together to build the best ecosystem of ideas for their own goals.

I published The Semantic Sphere in 2011. And I finished the grammar that has all the mathematical and algorithmic dimensions six months ago. I am writing a second book entitled Algorithmic Intelligence, where I explain all these things about reflexivity and intelligence. The IEML dictionary will be published (online) in the coming months. It will be the first kernel, because the dictionary has to be augmented progressively, and not just by me. I hope other people will contribute.

This IEML interlinguistic dictionary ensures that semantic networks can be translated from one natural language to another. Could you explain how it works, and how it incorporates the complexity and pragmatics of natural languages?

The basis of IEML is a simple commutative algebra (a regular language) that makes it computable. A special coding of the algebra (called Script) allows for recursivity, self-referential processes and the programming of rhizomatic graphs. The algorithmic grammar transforms the code into fractally complex networks that represent the semantic structure of texts. The dictionary, made up of terms organized according to symmetric systems of relations (paradigms), gives content to the rhizomatic graphs and creates a kind of common coordinate system of ideas. Working together, the Script, the algorithmic grammar and the dictionary create a symmetric correspondence between individual algebraic operations and different semantic networks (expressed in natural languages). The semantic sphere brings together all possible texts in the language, translated into natural languages, including the semantic relations between all the texts. On the playing field of the semantic sphere, dialogue, intersubjectivity and pragmatic complexity arise, and open games allow free regulation of the categorisation and the evaluation of data. Ultimately, all kinds of ecosystems of ideas – representing collective cognitive processes – will be cultivated in an interoperable environment.
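
The following toy sketch, written in Python, is only meant to make the recursive idea tangible; it is not the actual IEML Script or grammar. It assumes a drastically simplified structure in which every expression is either a primitive symbol or a triple of lower-level expressions, and a tiny made-up dictionary that glosses the primitives in English (echoing the virtual/actual and sign/being/thing vocabulary used above).

```python
# Toy illustration only: real IEML Script, grammar and dictionary are far richer than this.
from dataclasses import dataclass
from typing import Union

# A tiny made-up "dictionary": primitive symbols glossed in English.
PRIMITIVES = {"U": "virtual", "A": "actual", "S": "sign", "B": "being", "T": "thing"}

@dataclass(frozen=True)
class Triple:
    """One recursive layer: three positions filled by lower-level expressions."""
    substance: "Expr"
    attribute: "Expr"
    mode: "Expr"

Expr = Union[str, Triple]  # an expression is a primitive symbol or a nested triple

def serialize(expr: Expr) -> str:
    """Write an expression as a string so it can be stored, exchanged and indexed."""
    if isinstance(expr, str):
        return expr
    return f"({serialize(expr.substance)} {serialize(expr.attribute)} {serialize(expr.mode)})"

def gloss(expr: Expr) -> str:
    """Produce a rough English gloss of an expression by walking it with the dictionary."""
    if isinstance(expr, str):
        return PRIMITIVES[expr]
    return f"[{gloss(expr.substance)} / {gloss(expr.attribute)} / {gloss(expr.mode)}]"

# A layer-1 expression built from primitives, reused inside a layer-2 expression.
idea = Triple("S", "B", "T")
nested = Triple(idea, "U", "A")
print(serialize(nested))  # ((S B T) U A)
print(gloss(nested))      # [[sign / being / thing] / virtual / actual]
```

Even in this reduced form, the two properties the answer emphasises are visible: the string form can be generated and manipulated by algorithms, while the dictionary gives every algebraic construction a reading in a natural language.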


Schema from the START – IEML / English Dictionary by Prof. Pierre Lévy FRSC CRC, University of Ottawa, 25 August 2010 (copyright Pierre Lévy 2010, license Apache 2.0).

Since IEML automatically creates very complex graphs of semantic relations, one of the development tasks that is still pending is to transform these complex graphs into visualisations that make them usable and navigable.

How do you envisage these big graphs? Can you give us an idea of what the visualisation could look like?

The idea is to project these very complex graphs onto a 3D interactive structure. These could be spheres, for example, so you will be able to go inside the sphere corresponding to one particular idea and you will have all the other ideas of its ecosystem around you, arranged according to the different semantic relations. You will also be able to manipulate the spheres from the outside and look at them as if they were on a geographical map. And you will be able to zoom in and zoom out of fractal levels of complexity. Ecosystems of ideas will be displayed as interactive holograms in virtual reality on the Web (through tablets) and as augmented reality experienced in the 3D physical world (through Google Glass, for example).
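
As a rough indication of how such a projection might start, the sketch below (a standard Fibonacci-sphere layout, written in Python and not tied to any actual IEML interface) spreads the nodes of a small idea graph evenly over a sphere; the resulting coordinates could then be handed to any 3D or augmented-reality viewer.

```python
# Illustrative layout only: evenly distribute idea-graph nodes on a sphere (Fibonacci sphere).
import math

def sphere_layout(node_ids, radius=1.0):
    """Return {node_id: (x, y, z)} positions spread roughly evenly over a sphere."""
    n = len(node_ids)
    golden_angle = math.pi * (3 - math.sqrt(5))
    positions = {}
    for i, node in enumerate(node_ids):
        y = 1 - 2 * (i + 0.5) / n              # height runs from one pole to the other
        ring = math.sqrt(max(0.0, 1 - y * y))  # radius of the horizontal circle at that height
        theta = golden_angle * i
        positions[node] = (radius * ring * math.cos(theta),
                           radius * y,
                           radius * ring * math.sin(theta))
    return positions

ideas = ["memory", "language", "algorithm", "community", "medium"]
for node, (x, y, z) in sphere_layout(ideas).items():
    print(f"{node:10s} x={x:+.2f} y={y:+.2f} z={z:+.2f}")
```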

I’m also curious about your thoughts on the social alarm generated by the Internet’s enormous capacity to retrieve data, and the potential exploitation of this data. There are social concerns about possible abuses and privacy infringement. Some big companies are starting to consider drafting codes of ethics to regulate and prevent the abuse of data. Do you think a fixed set of rules can effectively regulate the changing environment of the algorithmic medium? How can IEML contribute to improving the transparency and regulation of this medium?

IEML does not only allow transparency, it allows symmetrical transparency. Everybody participating in the semantic sphere will be transparent to others, but all the others will also be transparent to him or her. The problem with hyper-surveillance is that transparency is currently not symmetrical. What I mean is that ordinary people are transparent to big governments and big companies, but these big companies and big governments are not transparent to ordinary people. There is no symmetry. Power differences between big governments and little governments or between big companies and individuals will probably continue to exist. But we can create a new public space where this asymmetry is suspended, and where powerful players are treated exactly like ordinary players.

And to finish up, last month the CCCB Lab began a series of workshops related to the Internet Universe project, exploring the issue of education in the digital environment. As you have published numerous works on this subject, could you summarise a few key points in regard to educating ‘digital natives’ about responsibility and participation in the algorithmic medium?

People have to accept their personal and collective responsibility. Because every time we create a link, every time we “like” something, every time we create a hashtag, every time we buy a book on Amazon, and so on, we transform the relational structure of the common memory. So we have a great deal of responsibility for what happens online. Whatever is happening is the result of what all the people are doing together; the Internet is an expression of human collective intelligence.

Therefore, we also have to develop critical thinking. Everything that you find on the Internet is the expression of particular points of view that are neither neutral nor objective but are the expression of active subjectivities. Where does the money come from? Where do the ideas come from? What is the author’s pragmatic context? And so on. The better we know the answers to these questions, the greater the transparency of the source… and the more it can be trusted. This notion of making the source of information transparent is very close to the scientific mindset. Because scientific knowledge has to be able to answer questions such as: Where did the data come from? Where does the theory come from? Where do the grants come from? Transparency is the new objectivity.

Blog of Collective Intelligence (since 2003)

pierre_levy

Pierre Lévy is a philosopher and a pioneer in the study of the impact of the Internet on human knowledge and culture. In Collective Intelligence. Mankind’s Emerging World in Cyberspace, published in French in 1994 (English translation in 1999), he describes a kind of collective intelligence that extends everywhere and is constantly evaluated and coordinated in real time, a collective human intelligence, augmented by new information technologies and the Internet. Since then, he has been working on a major undertaking: the creation of IEML (Information Economy Meta Language), a tool for the augmentation of collective intelligence by means of the algorithmic medium. IEML, which already has its own grammar, is a metalanguage that includes the semantic dimension, making it computable. This in turn allows a reflexive representation of collective intelligence processes.

In the book Semantic Sphere I. Computation, Cognition, and Information Economy, Pierre Lévy describes IEML as…



Interview with Nelesi Rodríguez, published in Spanish in the academic journal Comunicación, Estudios venezolanos de comunicación, 2nd trimester 2014, n. 166

Collective intelligence in the digital age: A revolution just at its beginning

Pierre Lévy (P.L.) is a renowned theorist and media scholar. His ideas on collective intelligence have been essential for the comprehension of some phenomena of contemporary communication, and his research on the Information Economy Meta Language (IEML) is today one of the biggest promises of data processing and knowledge management. In this interview, conducted by the team of the Comunicación (C.M.) magazine, he explained to us some of the basic points of his theory and gave us an interesting reading of current topics related to communication and digital media. Nelesi Rodríguez, April 2014.

APPROACH TO THE SUBJECT MATTER

C.M: Collective intelligence can be defined as shared knowledge that exists everywhere, that is constantly measured, coordinated in real time, and that drives the effective mobilization of several skills. In this regard, it is understood that collective intelligence is not a quality exclusive to human beings. In what way is human collective intelligence different from other species’ collective intelligence?

P.L: You are totally right when you say that collective intelligence is not exclusive to the human race. We know that ants, bees and, in general, all social animals have collective intelligence. They solve problems together and, as social animals, they are not able to survive alone. This is also the case with the human species: we are not able to survive alone and we solve problems together.

But there is a big difference that is related to the use of language: animals are able to communicate, but they do not have language. I mean, they cannot ask questions, they cannot tell stories, they cannot have dialogues, and they cannot communicate about their emotions, their fears, and so on.

So there is language, which is specific to humankind, and with language you have of course better communication and an enhanced collective intelligence; you also have all that comes with this linguistic ability, that is, technology and the complexity of social institutions – law, religion, ethics, economy… All these things that animals don’t have. This ability to play with symbolic systems, to play with tools and to build complex social institutions creates a much more powerful collective intelligence for humans.

Also, I would say that there are two important features that come from human culture. The first is that human collective intelligence can improve during history, because each new generation can improve the symbolic systems, the technology and the social institutions; so there is an evolution of human collective intelligence and, of course, we are talking about a cultural evolution, not a biological evolution. And then, finally, maybe the most important feature of human collective intelligence is that each unit of the human collectivity has the ability to reflect, to think by itself. We have individual consciousness; unfortunately for them, the ants don’t. The fact that humans have individual consciousness creates something very powerful at the level of social cognition. That is the main difference between human and animal collective intelligence.

C.M: Do writing and digital technologies also contribute to this difference?

P.L: In oral cultures there was a certain kind of transmission of knowledge but, of course, when we invented writing systems we were able to accumulate much more knowledge to transmit to the next generations. With the invention of the diverse writing systems, and then their improvements – the invention of the alphabet, of paper, the printing press and then the electronic media – human collective intelligence expanded. The ability to build libraries, to build scientific coordination and collaboration, and the communication supported by the telephone, the radio and the television make human collective intelligence more powerful. I think the main challenge our generation and the next will have to face is to take advantage of the digital tools – the computer, the internet, the smartphones, et cetera – to discover new ways to improve our cognitive abilities: our memory, our communication, our problem-solving abilities, our abilities to coordinate and collaborate, and so on.

C.M: In an interview conducted by Howard Rheingold, you mentioned that every device and technology that has the purpose of enhancing language also enhances collective intelligence and, at the same time, has an impact on cognitive skills such as memory, collaboration and the ability to connect with one another. Taking this into account:

  • It is said that today the enhancement of cognitive abilities manifests in different ways: from fandoms and wikis to crowdsourcing projects created with the intent of finding effective treatments for serious illnesses. Do you consider that every one of these manifestations contributes in the same way towards the expansion of our collective intelligence?

P.L: Maybe the most important sectors where we should put particular effort are scientific research and learning, because we are talking about knowledge: the most important part is the creation and dissemination of knowledge or, more generally, collective and individual learning.

Today there is a transformation of communication in the scientific community: more and more journals are open and online, people are forming virtual teams, they communicate over the internet, they use large amounts of digital data, and they process this data with computer power. So we are already witnessing this augmentation, but we are just at the beginning of this new approach.

In the case of learning, I think it is very important that we recognize the emergence of new ways of learning online collaboratively, where people who want to learn are helping each other, are communicating, and are accumulating common memories from which they can take what is interesting for them. This collective learning is not limited to schools; it happens in all kinds of social environments. We could call this “knowledge management”, and there is an individual or personal aspect of this knowledge management that some people call “personal knowledge management”: choosing the right sources on the internet, featuring the sources, categorizing information, doing syntheses, sharing these syntheses on social media, looking for feedback, initiating a conversation, and so on. We have to realize that learning is, and always has been, an individual process at its core. Someone has to learn; you cannot learn for someone else. Helping other people to learn is teaching, but the learner is doing the real work. Then, if the learners are helping each other, you have a process of collective learning. Of course, it works better if these people are interested in the same topics or if they are engaged in the same activities.

Collective learning augmentation is something very general that has increased with online communication. It also happens at the political level: there is augmented deliberation, because people can discuss easily on the internet, and there is also enhanced coordination (for public demonstrations and similar things).

  • C.M: With the passage of time, collective intelligence seems to become less a human quality and more one that belongs to machines, and this worries more than a few people. What is your stance on this?

P.L: There is a process of artificialization of cognition in general that is very old; it began with writing, with books, which are already a kind of externalization or objectification of memory. I mean, a library, for instance, is something that is completely material, completely technical, and without libraries we would be much less intelligent.

We cannot be against libraries because, instead of being pure brain, they are just paper, ink, buildings and index cards. Similarly, it makes no sense to “revolt” against the computer and the internet. It is the same kind of reasoning as with libraries: it is just another technology, more powerful, but it is the same idea. It is an augmentation of our cognitive ability – individual and collective – so it is absurd to be afraid of it.

But we have to distinguish very clearly between the material support and the texts. The texts come from our mind, but the text that is in my mind can be projected onto paper as well as onto a computer network. What is really important here is the text.

IEML AND THE FUTURE OF COLLECTIVE INTELLIGENCE

C.M: You’ve mentioned before that what we define today as the “semantic web”, more than being based on semantic principles, is based on logical principles. According to your ideas, this represents a roadblock in making the most out of the possibilities offered by digital media. As an alternative, you proposed the IEML (Information Economy Meta Language).

  • Could you elaborate on the basic differences between the semantic web and the IEML?

P.L: The so-called “semantic web” – in fact, people now call it the “web of data”, and that is a better term for it – is based on very well-known principles of artificial intelligence that were developed in the 70s and the 80s and that were adapted to the web.

Basically, you have a well-organized database, and you have rules to compute the relations between different parts of the database, and these rules are mainly logical rules. IEML works in a completely different manner: you have as much data as you want, and you categorize this data in IEML.

IEML is a language – not a computer language, but an artificial human language. So you can say “the sea”, “this person”, or anything… There are words in IEML; there are no words in the semantic web formats, which do not work like this.

In this artificial language that is IEML, each word stands in semantic relations with the other words of the dictionary. So all the words are intertwined by semantic relations and are perfectly defined. When you use these words to create sentences or texts, you create new relationships between the words: grammatical relationships.

And from texts written in IEML, you have algorithms that automatically compute relations inside the sentences, from one sentence to another, and so on. So you have a whole semantic network inside the text that is automatically computed and, even more, you can automatically compute the semantic relations between any text and any library of texts.

An IEML text automatically creates its own semantic relations with all the other texts, and texts in IEML can automatically translate themselves into natural languages: Spanish, English, Portuguese, Chinese… So, when you use IEML to categorize data, you automatically create semantic links between the data, with all the openness, the subtlety and the ability to say exactly what you want that a language can offer.

You can categorize any type of content; images, music, software, articles, websites, books, any kind of information. You can categorize these in IEML and at the same time you create links within the data because of the links that are internal to the language.
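To make this mechanism concrete, here is a minimal toy sketch in Python. It is not the actual IEML engine, and every name in it (SEMANTIC_RELATIONS, TAGS, related_items) is invented for illustration; it only shows the general idea that, because the words of a dictionary are already connected by semantic relations, tagging data items with those words is enough to derive links between the items themselves.

```python
# Toy illustration only – not the IEML implementation. A tiny "dictionary"
# whose words are already linked by semantic relations; tagging items with
# these words automatically links the items through the language itself.

# Hypothetical mini-dictionary: each word maps to semantically related words.
SEMANTIC_RELATIONS = {
    "sea": {"water", "ship"},
    "ship": {"sea", "harbour"},
    "harbour": {"ship", "city"},
    "water": {"sea"},
    "city": {"harbour"},
}

# Data items (photos, articles, videos...) categorized with dictionary words.
TAGS = {
    "photo_042": {"sea"},
    "article_7": {"harbour", "city"},
    "video_13": {"ship"},
}

def related_items(item):
    """Return items whose tags match or are semantically related to this item's tags."""
    neighbourhood = set()
    for word in TAGS[item]:
        neighbourhood.add(word)
        neighbourhood |= SEMANTIC_RELATIONS.get(word, set())
    return {
        other
        for other, words in TAGS.items()
        if other != item and words & neighbourhood
    }

print(related_items("photo_042"))  # {'video_13'}: "sea" is related to "ship"
```

The only point of the sketch is the order of operations: the relations live in the shared dictionary, so every act of tagging inherits them for free, which is what distinguishes this kind of categorization from isolated hashtags.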

  • C.M: Can we consider metatags, hashtags and Twitter lists as precedents of IEML?

P.L: Yes, exactly. I have been inspired by the fact that people are already categorizing data. They started doing this with social bookmarking sites such as del.icio.us. The act of curation today goes with the act of categorization, of tagging. We do this very often on Twitter, and now we can do it on Facebook, on Google Plus, on Youtube, on Flickr, and so on. The thing is that these tags do not have the ability to interconnect with other tags and to create a big, consistent semantic network. In addition, these tags are in different natural languages.

From the point of view of the user, it will be the same action, but tagging in IEML will just be more powerful.

  • C.M: What will IEML’s initial array of applications be?

P.L: I hope the main applications will be in the creation of collective intelligence games: games of categorization and evaluation of data, a sort of collective curation that will help people to create a very useful memory for their collaborative learning. That, for me, would be the most interesting application and, of course, the creation of an inter-linguistic or trans-linguistic environment.

BIG DATA AND COLLECTIVE INTELLIGENCE

C.M: You’ve referred to big data as one of the phenomena that could take collective intelligence to a whole new level. You’ve mentioned as well that, in fact, this type of information can currently only be processed by powerful institutions (governments, corporations, etc.), and that only when the capacity to read big data is democratized will there truly be a revolution.

Would you say that the IEML will have a key role in this process of democratization? If so, why?

P.L: I think that currently there are two important aspects of big data analytics. First, we have more and more data every day; we have to realize this. And, second, the main producer of this immense flow of data is ourselves: we, the users of the Internet, are producing data. So currently lots of people are trying to make sense of this data, and here you have two “avenues”:

The first is the more scientific avenue. In the natural sciences you have a lot of data – genetic data, data coming from physics or astronomy – and also something that is relatively new: data coming from the human sciences. This is called “digital humanities”, and it takes data from spaces like social media and tries to make sense of it from a sociological point of view, or takes data from libraries and tries to make sense of it from a literary or historical point of view. This is one application.

The second avenue is in business and in administration, private or public. You have many companies that are trying to sell such services to companies and to governments.

I would say that there are two big problems with this landscape:

The first is related to methodology: today we mainly use statistical methods and logical methods. It is very difficult to have a semantic analysis of the data, because we do not have a semantic code, and let’s remember that everything we analyze is coded before we analyze it. If you code quantitatively, you get statistical analysis; if you code logically, you get logical analysis. So you need a semantic code to have a semantic analysis. We do not have it yet, but I think that IEML will be that code.

The second problem is the fact that this analysis of data is currently in the hands of very powerful or rich players –big governments, big companies. It is expensive and it is not easy to do –you need to learn how to code, you need to learn how to read statistics…

I think that with IEML – because people will be able to code the data semantically – people will also be able to do semantic analysis with the help of the right user interfaces. They will be able to manipulate this semantic code in natural language, and it will be open to everybody.

This famous “revolution of big data” is just at its beginning. In the coming decades there will be much more data and many more powerful tools to analyze it. And it will be democratized; the tools will be open and free.
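To make the earlier point concrete – that the kind of analysis you can run depends on how the data was coded – here is a minimal, purely illustrative contrast in Python. The topic labels below are invented stand-ins for whatever semantic code would actually be used; nothing here is specific to IEML.

```python
# Toy contrast, illustrative only: the analysis available depends on the coding.

posts = [
    {"id": 1, "word_count": 120, "topics": {"health", "nutrition"}},
    {"id": 2, "word_count": 480, "topics": {"economy"}},
    {"id": 3, "word_count": 95, "topics": {"health"}},
]

# Quantitative coding -> statistical analysis (averages, counts, ...).
mean_length = sum(p["word_count"] for p in posts) / len(posts)

# Semantic coding -> semantic analysis (grouping by shared meaning).
health_related = [p["id"] for p in posts if "health" in p["topics"]]

print(round(mean_length, 1))  # 231.7
print(health_related)         # [1, 3]
```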

A BRIEF READING OF THE CURRENT SITUATION IN VENEZUELA

C.M: In the interview conducted by Howard Rheingold, you defined collective intelligence as a synergy between personal and collective knowledge; as an example, you mentioned the curation process that we, as users of social media, carry out and that in most cases serves as resource material for others to use. Regarding this particular issue, I’d like to analyze a particular situation with you through the lens of collective intelligence:

During the last few months, Venezuela has suffered a serious information blackout, a product of the government’s monopolized grasp of the majority of the media outlets, the censorship efforts of State bodies, and the self-imposed censorship of the last independent media outlets in the country. As a response to this blockade, Venezuelans have taken it upon themselves to stay informed by occupying the digital space. In a relatively short period of time, various non-standard communication networks have been created, verified source lists have been consolidated, applications have been developed, and a sort of code of ethics has been established in order to minimize the risk of spreading false information.

Based on your theory on collective intelligence, what reading could you give of this phenomenon?

P.L: You have already given a response to this; I have nothing else to add. Of course, I am against any kind of censorship. We have already seen that many authoritarian regimes do not like the internet, because it represents an augmentation of freedom of expression. Not only in Venezuela but in many countries, governments have tried to limit free expression, and people who are politically active and not pro-government have tried to organize themselves through the internet. I think that the new environment created by social media – Twitter, Facebook, Youtube, the blogs, and all the apps that help people find the information they need – helps the coordination and the discussion inside all these opposition movements, and this is the current political aspect of collective intelligence.