Pas une pipe

This blog post offers a simple guide to the landscape of signification in language. We’ll begin by distinguishing the various elements that construct meaning. First we’ll look at signs: how they pervade communication between living beings, and how a sign differs from a symbol. A symbol is a special kind of sign, unique to humans, that folds into a signifier (a sound, an image, etc.) and a signified (a category or a concept). We’ll see that the relationship between a signifier and a signified is conventional. Further on, I’ll explain the workings of language, our most powerful symbolic system. I will review successively what grammar is (the recursive construction of sense units), semantics (the relations between these units) and pragmatics (the relations between speech, reference and social context). I’ll end this post by reviewing some of the problems in the field of natural language processing (NLP).

Sign, symbol, language

Sign

Meaning involves at least three actors playing distinct roles. A sign (1) is a clue, a trace, an image, a message or a symbol that means something (2) for someone (3).

A sign may be an entity or an event. What makes it a sign is not its intrinsic properties but the role it plays in meaning. For example, an individual can be the subject of a conversation (thing), the interpreter of a conversation (being), or a clue in an investigation (sign).

A thing, designated by a sign, is often called the object or referent, and – again – what makes it a referent is not its intrinsic properties but the role it plays in the triadic relation.

A being is often called the subject or the interpreter. It may be a human being, a group, an animal, a machine, or any entity or process endowed with self-reference (distinguishing itself from its environment) and interpretation. The interpreter always takes the context into account when it interprets a sign. For example, a puppy (being) understands that a bite (sign) from its playful sibling is part of a game (thing) and not necessarily a real threat in that context.

Generally speaking, communication and signs exist for all living organisms. Cells can recognize concentrations of poison or food from afar; plants use their flowers to enlist insects and birds in their reproductive processes. Animals – organisms with brains or nervous systems – practice complex semiotic games that include camouflage, dance and mimicry. They acknowledge, interpret and emit signs constantly. Their cognition is complex: the sensorimotor cycle involves categorization, feeling and environmental mapping. They learn from experience, solve problems and communicate, and social species manifest collective intelligence. All these cognitive properties imply the emission and interpretation of signs. When a wolf growls, there is no need for a long discourse: a clear message is sent to its adversary.

Symbol

A symbol is a sign divided into two parts: the signifier and the signified. The signified (virtual) is a general category, or an abstract class, and the signifier (actual) is a tangible phenomenon that represents the signified. A signifier may be a sound, a black mark on white paper, a trace or a gesture. For example, let’s take the word “tree” as a symbol. It is made of 1) a signifier: the sound of the spoken word “tree”, and 2) a signified: the concept of a perennial plant with roots, trunk, branches and leaves. The relationship between the signifier and the signified is conventional and depends on the symbolic system the symbol belongs to (in this case, the English language). What we mean by conventional is that in most cases there is no analogy or causal connection between the sound and the concept: for example, between the sound “crocodile” and the actual crocodile species. We use different signifiers to indicate the same signified in different languages. Furthermore, the concepts symbolized by languages depend on the environment and culture of their speakers.

The signified of the sound “tree” is ruled by the English language and is not left to the choice of the interpreter. However, it is in the context of a speech act that the interlocutor understands the referent of the word: is it a syntactic tree, a palm tree, a Christmas tree…? Let’s remember this important distinction: the signified is determined by the language, but the referent depends on the context.
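To make the signified/referent distinction concrete, here is a minimal sketch in Python; the class, the toy lexicon and the context-lookup rule are illustrative assumptions, not a formal semiotic model. The signified is fixed by the language’s convention, while the referent is only resolved when a context is supplied.

```python
from dataclasses import dataclass

@dataclass
class Symbol:
    signifier: str   # tangible form: a sound, a written mark, a gesture...
    signified: str   # abstract category fixed by the convention of the language

# Toy English "lexicon": the signifier/signified pairing is conventional.
LEXICON = {"tree": Symbol("tree", "perennial plant with roots, trunk, branches and leaves")}

def referent(symbol: Symbol, context: dict) -> str:
    # The referent is not stored in the lexicon: it is resolved by the context of the speech act.
    return context.get(symbol.signifier, "referent unknown without more context")

word = LEXICON["tree"]
print(word.signified)                                          # fixed by the language
print(referent(word, {"tree": "the palm tree in my garden"}))  # fixed by the context
```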

Language

A language is a general symbolic system that allows humans to think reflexively, ask questions, tell stories, dialogue and engage in complex social interactions. English, French, Spanish, Arabic, Russian, or Mandarin are all natural languages. Each one of us is biologically equipped to speak and recognize languages. Our linguistic ability is natural, genetic, universal and embedded in our brain. By contrast, any language (like English, French, etc.) is based on a social, conventional and cultural environment; it is multiple, evolving and hybridizing. Languages mix and change according to the transformations of demographic, technological, economic, social and political contexts.

Our natural linguistic abilities multiply our cognitive faculties. They empower us with reflexive thinking, making it easy for us to learn and remember, to plan for the long term and to coordinate large-scale endeavors. Language is also the basis for knowledge transmission between generations. Animals can’t understand, grasp or use linguistic symbols to their full extent; only humans can. Even the best-trained animals can’t evaluate whether a story is false or exaggerated. Koko the famous gorilla will never ask you for an appointment on the first Tuesday of next month, nor will she tell you where her grandfather was born. In animal cognition, the categories that organize perception and action are enacted by neural networks. In human cognition, these categories may become explicit once symbolized and move to the forefront of our awareness. Ideas become objects of reflection. With human language come arithmetic, art, religion, politics, economy and technology. Compared to other social animal species, human collective intelligence is most powerful and creative when it is supported and augmented by our linguistic abilities. Therefore, when working in artificial intelligence or cognitive computing, it is paramount to understand and model both the functioning of neurons and neurotransmitters, common to all animals, and the structure and organization of language, unique to our species.

I will now describe briefly how we shape meaning through language. Firstly, we will review the grammatical units (words, sentences, etc.); secondly, we will explore the semantic networks between these units; and thirdly, the pragmatic interactions between language and extralinguistic realities.

Grammatical units

A natural language is made of recursively nested units: a phoneme, which is an elementary sound; a word, a chain of phonemes; a syntagm, a chain of words; and a text, a chain of syntagms. A language has a finite dictionary of words and syntactic rules for the construction of texts. With its dictionary and set of syntactic rules, a language offers its users the possibility to generate – and understand – an infinity of texts.
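As a rough illustration of this nesting (a sketch only; real grammars are far richer than lists of lists), the units can be modeled as recursively contained sequences:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Phoneme:   # elementary sound
    sound: str

@dataclass
class Word:      # chain of phonemes
    phonemes: List[Phoneme]

@dataclass
class Syntagm:   # chain of words
    words: List[Word]

@dataclass
class Text:      # chain of syntagms
    syntagms: List[Syntagm]

def word(spelling: str) -> Word:
    # Naive stand-in: one "phoneme" per letter, just to show the nesting.
    return Word([Phoneme(letter) for letter in spelling])

text = Text([Syntagm([word(w) for w in "I bought a car".split()])])
print(len(text.syntagms), len(text.syntagms[0].words))  # 1 syntagm, 4 words
```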

Phonemes

Human beings can’t pronounce or recognize several phonemes simultaneously; they can only pronounce one sound at a time. So languages have to obey the constraint of sequentiality. A speech is a chain of phonemes with an acoustic punctuation reflecting its grammatical organization.

Phonemes are meaningless sounds without signification, generally divided into consonants and vowels. Some languages (in Eastern and Southern Africa) also have click consonants, and others (such as Mandarin Chinese) use different tones on their vowels. Despite the great diversity of sounds used to pronounce human languages, the number of conventional sounds in a language is limited: the order of magnitude is between thirty and one hundred.

Words

The first symbolic grammatical unit is the word, a signifier with a signified. By word, I mean an atomic unit of meaning. For example, “small” contains one unit of meaning, but “smallest” contains two: “small” and “est” (a superlative suffix added at the end of a word to indicate the highest degree).

Languages contain nouns depicting structures or entities, and verbs describing actions, events and processes. Depending on the language, there are other types of words, like adjectives, adverbs and prepositions, or sense units that mark grammatical functions such as gender, number, person, tense and case.

Now, how many words does a language hold? It depends. The largest English dictionary counts 200,000 words, Latin has 50,000 words, Chinese 30,000 characters and biblical Hebrew amounts to 6,000 words. The French classical author Jean Racine was able to evoke the whole range of human passions and emotions using only 3,700 words in 13 plays. Most linguists think that, whatever the language, an educated, refined speaker masters about 10,000 words in his or her lifetime.

Sentences

Note that a word alone cannot be true or false. Its signifier points to its signified (an abstract category) and not to a state of things. It is only when a sentence is spoken in a context describing a reality – a sentence with a referent – that it can be true or false.

A syntagm (a topic, a sentence or a super-sentence) is a sequence of words organized by grammatical relationships. When we utter a syntagm, we leave behind the abstract dictionary of a language to enter the concrete world of speech acts in contexts. We can distinguish three sub-levels of complexity in a syntagm: the topic, the sentence and the super-sentence. Firstly, a topic is a super-word that designates a subject, a matter, an object or a process that cannot be described by a single word, e.g. “history of linguistics”, “smartphone” or “tourism in Canada”. Different languages have diverse rules for building topics, like joining the root of a word with a grammatical case (in Latin) or agglutinating words (in German or Turkish). By relating several topics together, a sentence brings to mind an event, an action or a fact, e.g. “I bought her a smartphone for her twentieth birthday”. A sentence can be verbal, as in the previous example, or nominal, like “the leather seat of my father’s car”. Finally, a super-sentence evokes a network of relations between facts or events, as in a theory or a narrative. The relationships between sentences can be temporal (after), spatial (behind), causal (because), logical (therefore), can underline contrasts (but, despite…), and so on.

Texts

The highest grammatical unit is a text: a punctuated sequence of syntagms. The signification of a text comes from the application of grammatical rules by combining its signifieds. The text also has a referent inferred from its temporal, spatial and social context.

In order to construct a mental model of a referent, a reader can’t help but imagine a general intention of meaning behind a text, even when it is produced by a computer program.

Semantic relationships

When we hear a speech, we are actually transforming a chain of sounds into a semantic network, and from this network, we infer a new mental model of a situation. Conversely, we are able to transform a mental model into the corresponding semantic network and then from this network, back into a sequence of phonemes. Semantics is the back and forth translation between chains of phonemes and semantic networks. Semantic networks themselves are multi-layered and can be broken down into three levels: paradigmatic, syntagmatic and textual.


Figure: Hierarchy of grammatical units and semantic relations
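The three layers named in the figure can be pictured as one labeled graph. The sketch below uses networkx (an assumption about tooling; the node and edge labels are illustrative) to record a few paradigmatic, syntagmatic and textual relations in a single structure:

```python
import networkx as nx

G = nx.MultiDiGraph()

# Paradigmatic layer: relations between words of the language itself.
G.add_edge("buy", "sell", layer="paradigmatic", relation="opposition")
G.add_edge("horse", "animal", layer="paradigmatic", relation="taxonomy")

# Syntagmatic layer: grammatical relations inside one sentence.
G.add_edge("gazelle", "smells", layer="syntagmatic", relation="subject")
G.add_edge("smells", "lion", layer="syntagmatic", relation="object")

# Textual layer: relations between sentences of the same text.
G.add_edge("it", "talent", layer="textual", relation="anaphora")

for u, v, data in G.edges(data=True):
    print(f"{u} -> {v}: {data['layer']} ({data['relation']})")
```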

Paradigmatic relationships

In linguistics, a paradigm is a set of semantic relations between words of the same language. They may be etymological or taxonomical relations, oppositions or differences. These relations may be the inflectional forms of a word, like “one apple” and “two apples”. Languages may comprise paradigms to indicate verb tenses (past, present, future) or voice (active, passive). For example, the paradigm for “go” is “go, went, gone”. The notion of paradigm also indicates a set of words which cover a particular functional or thematic area. For instance, most languages include paradigms for economic actions (buy, sell, lend, repay…) or colors (red, blue, yellow…). A speaker may transform a sentence by replacing one word from a paradigm with another from the same paradigm and get a sentence that still makes sense. In the sentence “I bought a car”, you could easily replace “bought” by “sold” because “buy” and “sell” are part of the same paradigm: they have some meaning in common. But in that sentence, you can’t replace “bought” by “yellow”, for instance. Two words from the same paradigm may be opposites (if you are buying, you are not selling) but still related (buying and selling can be interchangeable).
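A crude way to picture the substitution test is to group words into paradigms and only allow replacements inside a paradigm. The paradigm sets below are toy assumptions, just enough to show the idea:

```python
# Toy paradigms: sets of words that share some meaning and can fill the same slot.
PARADIGMS = {
    "economic_action": {"bought", "sold", "lent", "repaid"},
    "color": {"red", "blue", "yellow"},
}

def same_paradigm(a: str, b: str) -> bool:
    return any({a, b} <= words for words in PARADIGMS.values())

def substitute(sentence, old, new):
    # The substitution test: replacement is only allowed inside a paradigm.
    if not same_paradigm(old, new):
        return None
    return [new if w == old else w for w in sentence]

sentence = ["I", "bought", "a", "car"]
print(substitute(sentence, "bought", "sold"))    # ['I', 'sold', 'a', 'car'] — still makes sense
print(substitute(sentence, "bought", "yellow"))  # None — not in the same paradigm
```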

Words can also be related when they are in taxonomic relation, like “horse” and “animal”. The English dictionary describes a horse as a particular case of animal. Some words come from ancient words (etymology) or are composed of several words: for example, the word metalanguage is built from “meta” (beyond, in ancient Greek) and “language”.

In general, the conceptual relationships between words from a dictionary may be qualified as paradigmatic.

Syntagmatic relationships

By contrast, syntagmatic relations describe the grammatical connections between words in the same sentence. In the two sentences “The gazelle smells the presence of the lion” and “The lion smells the presence of the gazelle”, the sets of words are identical, but the words “gazelle” and “lion” do not share the same grammatical role. Since those words are swapped in the syntagmatic structure, the sentences have distinct meanings.
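A dependency parser makes these syntagmatic roles explicit. The sketch below uses spaCy, assuming the library and its small English model en_core_web_sm are installed; the exact labels may vary with the model version:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

for sentence in ("The gazelle smells the presence of the lion.",
                 "The lion smells the presence of the gazelle."):
    doc = nlp(sentence)
    # Each token with its grammatical role and the word it depends on.
    print([(t.text, t.dep_, t.head.text) for t in doc])

# Same vocabulary in both sentences, but "gazelle" and "lion" swap the
# subject role ("nsubj") and the prepositional-object role.
```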

Textual relationships

At the text level, which includes several syntagms, we find semantic relations like anaphoras and isotopies. Let’s consider the super-sentence: “If a man has talent and can’t use it, he’s failed.” (Thomas Wolfe). In this quotation, “it” is an anaphora for “talent” and “he” an anaphora for “a man”. When reading a pronoun (it, he), we resolve the anaphora when we know which noun – mentioned in a previous or following sentence – it refers to. On the other hand, isotopies are recurrences of themes that weave the unity of a text: the identity of heroes (characters), genres (love stories or historical novels), settings, etc. The notion of isotopy also encompasses repetitions that help the listener understand a text.

Pragmatic interactions

Pragmatics weaves the triadic relation between signs (symbols, speeches or texts), beings (interpreters, people or interlocutors) and things (referents, objects, reality, extra-textual context). On the pragmatic level of communication, speeches point to – and act upon – a social context. A speech act functions as a move in a game played by its speaker. So, distinct from the semantic meaning analyzed in the previous section, pragmatic meaning addresses questions like: what kind of act (a piece of advice, a promise, a blame, a condemnation, etc.) is carried by a speech? Is a speech spoken in a play on a stage or in a real courtroom? The pragmatic meaning of a speech also relates to the actual effects of its utterance, effects that are not always known at the moment of the enunciation. For example: “Did I convince you? Have you kept your word?”. The sense of a speech can only be understood after its utterance, and future events can always modify it.

A speech act is highly dependent on cultural conventions, on the identity of speakers and attendees, on time and place, etc. By proclaiming “The session is open”, I am not just announcing that an official meeting is about to start; I am actually opening the session. But I have to be someone relevant, like the president of that assembly, to do so. If I am a janitor and I say “The session is open”, the act is not performed because I have no legitimacy to open the session.
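As a toy illustration of this felicity condition (the roles and messages below are invented for the example), a performative only “does” something when the speaker has the required status:

```python
# Toy felicity check for the performative "The session is open."
AUTHORIZED_ROLES = {"president", "chairperson"}

def open_session(speaker_role: str) -> str:
    # The utterance only performs the act if the speaker has the required status.
    if speaker_role in AUTHORIZED_ROLES:
        return "The session is now open (the act is performed)."
    return "Words were spoken, but no session was opened (the act misfires)."

print(open_session("president"))
print(open_session("janitor"))
```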

If an utterance is descriptive, it’s either true or false. In other cases, if an utterance does something instead of describing a state of things, it has a pragmatic force instead of a truth value.

Resolving ambiguities

We have just reviewed the different layers of grammatical, semantic and pragmatic complexity to better understand the meaning of a text. Now, we are going to examine the ambiguities that may arise during the reading or listening of a text in a natural language.

Semantic ambiguities

How do we go from the sound of a chain of phonemes to the understanding of a text? From a sequence of sounds, we build a multi-layered (paradigmatic, syntagmatic and textual) semantic network. When weaving the paradigmatic layer, we answer questions like: “What is this word? To what paradigm does it belong? Which of its meanings should I consider?”. Then, we connect words together by answering: “What are the syntagmatic relations between the words in that sentence?”. Finally, we comprehend the text by recognizing the anaphoras and isotopies that connect its sentences. Our understanding of a text is based on this three-layered network of sense units.

Furthermore, ambiguities or uncertainties of meaning can arise on all three levels and can multiply their effects. In the case of homophony, the same sound can point to different words, as in “ate” and “eight”. And sometimes the same word may convey several distinct meanings, as in “mole”: (1) a small, nearly blind animal that digs underground galleries, (2) an undercover spy, or (3) a pigmented spot or mark on the skin. In the case of synonymy, the same meaning can apply to distinct words, like “tiny” and “small”. Amphibologies are syntagmatic ambiguities, as in “Mary saw a woman on the mountain with a telescope”: who is on the mountain, and who has the telescope, Mary or the woman? On a higher level of complexity, textual relations can be even more ambiguous than paradigmatic and syntagmatic ones because the rules for anaphoras and isotopies are loosely defined.
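Word-sense disambiguation algorithms try to resolve exactly this kind of paradigmatic ambiguity from the surrounding words. Here is a hedged sketch using NLTK’s implementation of the classic Lesk algorithm (assuming nltk and its WordNet data are installed; Lesk is a simple baseline and often guesses wrong):

```python
from nltk.wsd import lesk
# Assumes the WordNet data has been downloaded beforehand,
# e.g. nltk.download("wordnet") and nltk.download("omw-1.4").

sentences = [
    "The gardener found a mole digging galleries under the lawn",
    "The agency suspected a mole was leaking secrets to a foreign service",
]
for s in sentences:
    # Picks the WordNet sense whose gloss overlaps most with the context words.
    sense = lesk(s.split(), "mole")
    if sense is not None:
        print(sense.name(), "-", sense.definition())
```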

Resolving semantic ambiguities in pragmatic contexts

Human beings don’t always correctly resolve all the semantic ambiguities of a speech, but when they do, it is often because they take into account the pragmatic (or extra-textual) context, which is generally implicit. It is in context that deictic symbols like here, you, me, that one over there, or next Tuesday take their full meaning. Let’s add that comparing the text at hand with the author’s corpus, genre and historical period helps to better discern its meaning. But some pragmatic aspects of a text may remain unknown. Ambiguities can stem from many causes: the precise referents of a speech, the uncertainty of the speaker’s social interactions, the ambivalence or concealment of the speaker’s intentions, and of course not knowing in advance the effects of an utterance.

Problems in natural language processing

Computer programs can’t understand or translate texts with dictionaries and grammars alone. They can’t engage in the pragmatic context of speeches like human beings do to disambiguate texts unless this context is made explicit. Understanding a text implies building and comparing complex and dynamic mental models of text and context.

On the other hand, natural language processing (a sub-discipline of artificial intelligence) compensates for the irregularity of natural languages with massive statistical calculation and deep learning algorithms trained on huge corpora. Depending on its training set, an algorithm can interpret a text by choosing the most probable semantic network among those compatible with a chain of phonemes. The results nevertheless have to be validated and improved by human reviewers.
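As a toy illustration of “choosing the most probable reading”, the sketch below (the counts and candidate readings are invented for the example) picks between the homophones “ate” and “eight” using simple bigram counts standing in for statistics learned from a large corpus:

```python
# Invented bigram counts standing in for statistics learned from a corpus.
BIGRAM_COUNTS = {
    ("she", "ate"): 120, ("she", "eight"): 2,
    ("ate", "breakfast"): 80, ("eight", "breakfast"): 1,
}

def score(words):
    # Product of bigram counts, with add-one smoothing to avoid zeros.
    s = 1.0
    for pair in zip(words, words[1:]):
        s *= BIGRAM_COUNTS.get(pair, 0) + 1
    return s

candidates = [["she", "ate", "breakfast"], ["she", "eight", "breakfast"]]
print(max(candidates, key=score))  # ['she', 'ate', 'breakfast'] — the more probable reading
```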

 

I was happily surprised to be chosen as an “IBM influencer” and invited to the innovation and disruption Forum organized in Toronto on November 16th to celebrate the 100th anniversary of IBM in Canada. With a handful of other people, I had the privilege to meet Bryson Koehler, the CTO of IBM Cloud and Watson (Watson is the name given to IBM’s artificial intelligence). That meeting was of great interest to me: I learned a lot about the current state of cloud computing and artificial intelligence.


Image: Demonstration of a robot at the IBM innovation and disruption forum in Toronto

Contrary to other big tech companies, IBM already existed when I was born in 1956. The company was in the business of computing even before the transistor. IBM adapted itself to electronics and dominated the industry in the era of big central mainframes. It survived the PC revolution when Microsoft and Apple were kings. It navigated the turbulent waters of the social Web despite the might of Google, Facebook and Amazon. IBM is today one of the biggest players in the market for cloud computing, artificial intelligence and business consulting.

The transitions and transformations in IBM’s history were not only technological but also cultural. In the seventies, when I was a young philosopher and new-technology enthusiast, IBM was the epitome of grey-suit, blue-tie, black-attaché-case corporate America. Now, every IBM employee – from the CEO Dino Trevisani to the salesperson – wears jeans. IBM used to be the “anti-Apple” company, but now everybody has a Mac laptop. Instead of proprietary technology, IBM promotes open-source software. IBM posters advertise an all-inclusive and diverse “you” across the spectrum of gender, race and age. Its official management and engineering philosophy is design thinking and, along with the innovative spirit, the greatest of IBM’s virtues is the ability to listen!

Toronto’s Forum was all about innovation and disruption. Innovation is mainly about entrepreneurship: self-confidence, audacity, tenacity, resilience and market orientation. Today’s innovation is “agile”: implement a little bit, test, listen to the clients, learn from your errors, re-implement, etc. As for disruption, it is inevitable, not only because of the speed of digital transformation but also because of cultural shifts and the sheer succession of generations. The argument is fairly simple: instead of being disrupted, be the disruptor! The overall atmosphere of the Forum was positive and inspirational, and it was a pleasure to participate.

There were two kinds of general presentations: by IBM clients and by IBM strategists and leaders. In addition, a lot of stands, product demonstrations and informative mini-talks on various subjects enabled the attendees to learn about current issues like e-health and hospital applications, robotics, data management, social marketing, blockchain and so on. One of the highlights of the day was the interview of Arlene Dickinson (a well-known Canadian TV personality, entrepreneur and investor) by Dino Trevisani, the CEO of IBM Canada himself. Their conversation about innovation in Canada today was both instructive and entertaining.

From my point of view as a philosopher specialized in computing, Bryson Koehler (CTO for IBM Cloud and Watson) gave a wonderful presentation, imbued with simplicity and clarity, yet full of interesting content. Before being an IBMer, Bryson worked for the Weather Channel, so he was familiar with handling exabytes of data! According to Bryson Koehler, the future lies not only in the cloud, that is to say, infrastructure and software as a service, but also in “cloud-native architecture”, where a lot of loosely connected mini-services can be easily assembled like Lego blocks and on top of which you can build agile and resilient applications. Bryson is convinced that all businesses are going to become “cloud natives” because they need the flexibility and security that the cloud provides. To illustrate this, I learned that Watson is not a standalone, monolithic “artificial intelligence” anymore but is now divided into several mini-services, each one with its own API, offered as part of the IBM cloud alongside other services like blockchain, video storage, weather forecasts, etc.

Image: Bryson Koehler at the IBM innovation and disruption Forum in Toronto

Bryson Koehler recognizes that the techniques of artificial intelligence, the famous deep learning algorithms in particular, are much the same among the big competitors in the cloud business (Amazon, Google, Microsoft and IBM). These algorithms are now taught in universities and implemented in open-source programs. So what makes the difference in AI today is not the technique but the quality and quantity of the datasets used to train the algorithms. Since every big player has access to the public data on the web and to the syndicated data (on markets, news, finance, etc.) sold by specialized companies, what makes a real difference is the private data that lies behind the firewalls of businesses. So what is the competitive advantage of IBM? Bryson Koehler sees it in the trust that the company inspires in its clients, and their willingness to entrust their data to its cloud. IBM is “secure by design” and will never use a client’s dataset to train algorithms used by this client’s competitors. Everything boils down to confidence.

At lunchtime, with a dozen other influencers, I had a conversation with researchers at Watson. I was impressed by what I learned about cognitive computing, one of IBM’s leitmotifs. Their idea is that value is not created by replicating the human mind in a computer but by amplifying human cognition in real-world situations. In other words, Big Blue (IBM’s nickname) does not entertain the myth of the singularity. It does not want to replace people with machines but to help its clients make better decisions in the workplace. There is a growing flow of data from which we can learn about ourselves and the world. Therefore we have no choice but to automate the process of selecting relevant information, synthesizing its content and predicting, as much as possible, our environment. IBM’s philosophy is grounded in intellectual humility. In this process of cognitive augmentation, nothing is perfect or definitive: people make errors, machines too, and there is always room to improve our models. Let’s not forget that only humans have goals, ask questions and can be satisfied. Machines are just here to help.

Once the forum was over, I walked along Lake Ontario and thought about the similarity between philosophy and computer engineering: aren’t both building cognitive tools?

Image: Walking meditation in front of Lake Ontario after the IBM innovation and disruption Forum in Toronto

In this paper I put forward a vision for a new generation of cloud-based public communication services designed to foster reflexive collective intelligence. I begin with a description of the current situation, including the huge power and social shortcomings of platforms like Google, Apple, Facebook, Amazon, Microsoft, Alibaba, Baidu, etc. Contrasting with the practice of these tech giants, I reassert the values that are direly needed at the foundation of any future global public sphere: openness, transparency and commonality. But such ethical and practical guidelines are probably not powerful enough to help us cross a new threshold in collective intelligence. Only a disruptive innovation in cognitive computing will do the trick. That’s why I introduce “deep meaning”, a new research program in artificial intelligence based on the Information Economy MetaLanguage (IEML). I conclude this paper by evoking possible bootstrapping scenarios for the new public platform.

The rise of platforms

At the end of the 20th century, one percent of the human population was connected to the Internet. In 2017, more than half the population is connected. Most users interact on social media, search for information, and buy products and services online. But despite the ongoing success of digital communication, there is growing dissatisfaction with the big tech companies – the “Silicon Valley” – that dominate the new communication environment.

The big tech companies are the most highly valued companies in the world, and the massive amount of data they possess is considered the most precious commodity of our time. Silicon Valley owns the big computers: the network of physical centers where our personal and business data are stored and processed. Its income comes from the economic exploitation of our data for marketing purposes and from sales of hardware, software and services. But these companies also derive considerable power from the knowledge of markets and public opinion that stems from their control of information.

The big cloud companies master new computing techniques that mimic the way neurons learn new behaviors. These programs are marketed as deep learning or artificial intelligence even though they have no cognitive autonomy and need intense training on huge masses of data. Despite their well-known limitations, machine learning algorithms have effectively augmented the abilities of digital systems. Deep learning is now used in every economic sector. Chips specialized in deep learning are found in big data centers, smartphones, robots and autonomous vehicles. As Vladimir Putin rightly told young Russians in his speech for the first day of school in fall 2017: “Whoever becomes the leader in this sphere [of artificial intelligence] will become the ruler of the world”.

The tech giants control huge business ecosystems beyond their official legal borders, and they can ruin or buy competitors. Unfortunately, their rivalry prevents real interoperability between cloud services, even though such interoperability would be in the interest of the general public and of many smaller businesses. As if their technical and economic powers were not enough, the big tech companies are now playing on the turf of governments. Facebook vouches for our identity and warns our family and friends that we are safe when a terrorist attack or a natural disaster occurs. Mark Zuckerberg states that one of Facebook’s missions is to ensure that the electoral process is fair and open in democratic countries. Google Earth and Google Street View are now used by several municipalities and governments as their primary source of information for cadastral plans and other geographical or geospatial services. Twitter has become a global political, diplomatic and news service. Microsoft sells its digital infrastructure to public schools. The Kingdom of Denmark has opened an official embassy in Silicon Valley. Cryptocurrencies independent from nation-states (like Bitcoin) are becoming increasingly popular. Blockchain-based smart contracts (powered by Ethereum) bypass state authentication and traditional paper bureaucracies. Some traditional functions of government are being taken over by private technological ventures.

This should not come as a surprise. The practice of writing in ancient palace-temples gave birth to government as a separate entity. Alphabets and paper allowed the emergence of merchant city-states and the expansion of literate empires. The printing press, industrial economy, motorized transportation and electronic media sustained nation-states. The digital revolution will foster new forms of government. Today, we discuss political problems in a global public space taking advantage of the web and social media, and the majority of humans live in interconnected cities and metropolises. Each urban node wants to be an accelerator of collective intelligence, a smart city. We need to think about public services in a new way. Schools, universities, public health institutions, mail services, archives, public libraries and museums should take full advantage of the internet and de-silo their datasets. But we should go further. Are current platforms doing their best to enhance collective intelligence and human development? How about giving back to the general population the data produced in social media and other cloud services, instead of just monetizing it for marketing purposes? How about giving people access to the cognitive powers unleashed by a ubiquitous algorithmic medium?

Information wants to be open, transparent and common

We need a new kind of public sphere: a platform in the cloud where data and metadata would be our common good, dedicated to the recording and collaborative exploitation of memory in the service of our collective intelligence. The core values orienting the construction of this new public sphere should be openness, transparency and commonality.

Firstly, openness has already been put into practice in the scientific community, the free software movement, Creative Commons licensing, Wikipedia and many more endeavors. It has been adopted by several big industries and governments. “Open by default” will soon be the new normal. Openness is on the rise because it maximizes the improvement of goods and services, fosters trust and supports collaborative engagement. It can be applied to data formats, operating systems, abstract models, algorithms and even hardware. Openness also applies to taxonomies, ontologies, search architectures, etc. A new open public space should encourage all participants to create, comment, categorize, assess and analyze its content.

Then, transparency is the very ground for trust and the precondition of an authentic dialogue. Data and people (including the administrators of a platform) should be traceable and auditable. Transparency should be reciprocal, without distinction between the rulers and the ruled. Such transparency will ultimately be the basis for reflexive collective intelligence, allowing teams and communities of any size to observe and compare their cognitive activity.

Commonality means that people will not have to pay to get access to this new public sphere: everything will be free and public property. Commonality also means transversality: de-siloing and cross-pollination. Smart communities will interconnect and recombine all kinds of useful information: open archives of libraries and museums, free academic publications, shared learning resources, knowledge management repositories, open-source intelligence datasets, news, public legal databases…

From deep learning to deep meaning

This new public platform will be based on the web and its open standards like HTTP, URLs, HTML, etc. Like all current platforms, it will take advantage of distributed computing in the cloud and it will use “deep learning”: an artificial intelligence technology that employs specialized chips and algorithms that roughly mimic the learning process of neurons. Finally, to be completely up to date, the next public platform will enable blockchain-based payments, transactions, contracts and secure records.

If a public platform offers the same technologies as the big tech companies (cloud, deep learning, blockchain), with the sole difference of openness, transparency and commonality, it may prove insufficient to foster swift adoption, as demonstrated by the relative failures of Diaspora (open Facebook) and Mastodon (open Twitter). Such a project can only succeed if it comes up with some technical advantage over the existing commercial platforms. Moreover, this technical advantage should have appealing political and philosophical dimensions.

No one really fancies the dream of autonomous machines, especially considering the current limitations of artificial intelligence. Instead, we want an artificial intelligence designed for the augmentation of personal and collective human intellect.

Language as a platform

In order to augment the human intellect, we need both statistical and symbolic AI! Right now, deep learning is based on the simulation of neural networks. That is enough to roughly model animal cognition (every animal species has neurons) but not refined enough to model human cognition. The difference between animal cognition and human cognition is the reflexive thinking that comes from language and explicit modelling. Why not add a layer of semantic addressing on top of neural connectivity? In human cognition, the categories that organize perception, action, memory and learning are expressed linguistically, so they may be reflected upon and shared in conversations. A language works like the semantic addressing system of a social virtual database.

But there is a problem with natural languages (English, French, Arabic, etc.): they are irregular and do not lend themselves easily to machine understanding – I mean real understanding (the projection of data onto semantic networks), not stochastic parroting. The current trend in natural language processing, an important field of artificial intelligence, is to use statistical algorithms and deep learning methods to understand and produce linguistic data. Instead of using only statistics, “deep meaning” adds a regular and computable metalanguage. I have designed IEML (Information Economy MetaLanguage) from the beginning to optimize semantic computing. IEML words are built from six primitive symbols and two operations: addition and multiplication. The semantic relations between IEML words follow the lines of their generative operations. The total number of words does not exceed 3,000. From its dictionary, the generative grammar of IEML allows the construction of recursive sentences that can define complex concepts and their relations. IEML would be the semantic addressing system of a social database.
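To give a feel for what a regular, computable metalanguage can look like, here is a deliberately simplified sketch of a generative algebra with a handful of primitives and two operations. The primitive names, the arity of multiplication and the relation rule are assumptions made for illustration only; this is not the actual IEML specification.

```python
from dataclasses import dataclass
from typing import Tuple

# Six primitive symbols (placeholder names; the real inventory is defined by IEML).
PRIMITIVES = ("E", "U", "A", "S", "B", "T")

@dataclass(frozen=True)
class Sum:                      # "addition": an unordered set of operands
    terms: frozenset

@dataclass(frozen=True)
class Product:                  # "multiplication": an ordered combination of operands
    factors: Tuple[object, ...]

def add(*terms):
    return Sum(frozenset(terms))

def mul(*factors):
    return Product(tuple(factors))

# Two words built from the same primitives by the generative operations.
w1 = mul("U", "A", "S")
w2 = mul("U", "A", "T")

def shared_factors(a: Product, b: Product) -> set:
    # Toy semantic relation: words are related along their shared generative material.
    return set(a.factors) & set(b.factors)

print(shared_factors(w1, w2))   # {'U', 'A'} — the two words are semantically close
```

The point of the sketch is only that, in a regular language, semantic relations can be computed from the way expressions are generated, rather than learned statistically.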

Given large datasets, deep meaning allows the automatic computing of semantic relations between data, semantic analysis and semantic visualizations. This new technology fosters semantic interoperability: it decompartmentalizes tags, folksonomies, taxonomies, ontologies and languages. When online communities categorize, assess and exchange semantic data, they generate explorable ecosystems of ideas that represent their collective intelligence. Note that the vision of collective intelligence proposed here is distinct from the “wisdom of the crowd” model, which assumes independent agents and excludes dialogue and reflexivity. Just the opposite: deep meaning was designed from the beginning to nurture dialogue and reflexivity.

The main functions of the new public sphere


In the new public sphere, every netizen will act as an author, editor, artist, curator, critic, messenger, contractor and gamer. The next platform weaves five functions together: curation, creation, communication, transaction and immersion.

By curation I mean the collaborative creation, editing, analysis, synthesis, visualization, explanation and publication of datasets. People posting, liking and commenting on content in social media are already doing data curation, in a primitive, simple way. Professionals in the fields of heritage preservation (libraries, museums), digital humanities, education, knowledge management, data-driven journalism and open-source intelligence practice data curation in a more systematic and mindful manner. The new platform will offer a consistent service of collaborative data curation empowered by a common semantic addressing system.

Augmented by deep meaning technology, our public sphere will include a semantic metadata editor applicable to any document format. It will work as a registration system for works of the mind. Communication will be ensured by a global, Twitter-like public posting system. But instead of the current hashtags, which are mere sequences of characters, the new semantic tags will self-translate into all natural languages and interconnect by conceptual proximity. The blockchain layer will allow any transaction to be recorded. The platform will remunerate authors and curators in collective intelligence coins, according to the public engagement generated by their work. The new public sphere will be grounded in the internet of things, smart cities, ambient intelligence and augmented reality. People will control their environment and communicate with sensors, software agents and bots of all kinds in the same immersive semantic space. Virtual worlds will simulate the collective intelligence of teams, networks and cities.

Bootstrapping

How do we bridge the gap from fundamental research to a full-scale industrial platform? Such an endeavor would be much less expensive than the conquest of space and could bring a tremendous augmentation of human collective intelligence. Even if the network effect obviously applies to the new public space, small communities of pioneers will benefit immediately from its early release. On the humanistic side, I have already mentioned museums and libraries, researchers in the humanities and social sciences, collaborative learning networks, data-oriented journalists, knowledge management and business intelligence professionals, etc. On the engineering side, deep meaning opens a new sub-field of artificial intelligence that will enhance current techniques of big data analytics, machine learning, natural language processing, the internet of things, augmented reality and other immersive interfaces. Because it is open source by design, the development of the new technology can be crowdsourced and shared easily among many different actors.

Let’s draw a distinction between the new public sphere, including its semantic coordinate system, and the commercial platforms that will give access to it. This distinction being made, we can imagine a consortium of big tech companies, universities and governments supporting the development of the global public service of the future. We may also imagine one of the big tech companies taking the lead, associating its name with the new platform and developing hardware specialized in deep meaning. Another scenario is the foundation of a company that would ensure the construction and maintenance of the new platform as a free public service while sustaining itself by offering semantic services: research, consulting, design and training. In any case, a new international school must be established around a virtual dockyard where trainees and trainers progressively build and improve the semantic coordinate system and other basic models of the new platform. Students from various organizations and backgrounds will gain experience in the field of deep meaning and will disseminate the acquired knowledge back into their communities.

Radio broadcast (French-speaking Switzerland), 25 minutes, in French.

“Sémantique numérique et réseaux sociaux. Vers un service public planétaire” (Digital semantics and social networks: towards a planetary public service), 1 hour, in French.

YouTube video (in English), 1 hour.

Image: Paul and Pierre, from behind

Paul, my cousin, my brother, my friend,

We were born a year apart, almost on the same date, in the mid-1950s, in the Jewish community of Béja, in Tunisia. My father Henri and his sister Nicole – your mother – loved each other tenderly. Our fathers were partners in the same shop, and we played like brothers in the back room.

Very young, history tossed us onto the other shore of the Mediterranean, and we landed in Toulouse. That is where our destinies parted. While your parents held firm and built a stable home, I was carried far from Occitania by the whirlwinds of a family shipwreck. But when I came back to the Pink City to visit my father for the Easter holidays, my beloved aunt Nicole welcomed me into her home and was a true mother to me. Do you remember when we went to the library together, or when you played me a piece of music on the piano that you had just learned? We amused ourselves with a word, a sound, a gesture, with everything and nothing. I still have in my ears the echo of our laughter…

Image: Paul and Pierre at the restaurant

While you were studying medicine, you were also taking philosophy courses at the university, without your parents knowing. But I was in on the secret. In those days, we had Homeric discussions about the great philosophers. When we started working and raising families, we lost sight of each other a little. But what a celebration, what a joy, whenever we had the chance to meet again! Paul, you were my reference, another myself, a different version of my destiny. Our two lives ran parallel; they rhymed like Pierre and Paul.

You were a kind of hero to me: you helped mothers bring their babies into the world! A doctor on call, up at night, you performed emergency operations to save lives. Conscientious and responsible, you always kept up with the latest developments in your specialty. I, four times uprooted, envied the Toulouse physician honorably known in his city, loved by his patients and their families.

I loved wandering for hours in your great humanist’s library. Lucid, you were wary everywhere of the temptation of smug good conscience. You were open, curious about the other, but without ever denying your identity. You did not settle for the opinion of the herd. You were funny, friendly, a generous bon vivant, but also upright, honest and authentic to the point of roughness. I loved you, Paul. Who did not love you? Your humanity showed immediately in your smile and in your gestures.

Image: Paul and Pierre, Shabbat

Everyone has their own Paul Boubli: the son, the brother, the husband, the father, the doctor, the friend, the colleague… My Paul is the karmic twin, the alter ego, the soul mate. Paul! Our dialogue lasted sixty years. My heart breaks a thousand times at the thought of never seeing you again… Nothing erases the pain of losing you. But with your dear wife Véronique you brought into the world and raised four wonderful children who remain with us: Zacharie, Esther, Joseph and Samson. And you leave a legacy: the good you did around you, the sparks you sowed in our lives. Through the wound of my broken heart, I gather these sparks into my memory. To understand, to help, to heal, to give, to light up the world around oneself: that is the example of courage you show each of us. You – prince of a secret Andalusian nobility – from the other side of tears, from the other side of time, you pass the torch on to us.

Bricologie & Sérendipité

We have complex problems to solve, in Edgar Morin’s sense – energy, food, climate disruption, etc. – which we find “intertwined” in the field of transportation. Individually, many people perceive the stakes and have identified solutions. But collectively, the organizations in which they work remain stuck in processes and decision-making patterns, without any real capacity to evolve and transform at the required scale. One lead for explaining this paradox lies in the mechanisms of collective intelligence.

Collective intelligence is a property of living systems that manifests itself when several people interact with a common goal: finding a solution, developing a product, creating a work or carrying out a sporting activity. A band, a football team or a company department carries out different coordinated actions, with more or less success, depending on its collective intelligence.

Indeed, the latter…



“Au pays de Numérix” by Alexandre Moatti dates from 2015, but it is more relevant than ever, now that Mounir Mahjoubi has just been appointed secretary of state for digital affairs in the Edouard Philippe government. Many people expect from the new president of the French Republic, young and reputedly modernist, a “new course” in digital matters in France. One cannot recommend this book highly enough to his entourage.

As for its form, it is a short book, easy to read, which maintains a measured and rational tone. It mostly addresses subjects the author knows first-hand, which does not hurt. Frankly in favor of the cognitive uses of the network and of the “Internet of knowledge”, the author has himself worked in the field of digital libraries, has created several scholarly websites and contributes actively to the French-language Wikipedia. Even if he does not cite these philosophers explicitly, one senses that he is opposed to the hysterical anti-GAFA (Google Apple Facebook Amazon) diatribes of Bernard Stiegler or Eric Sadin, as well as to Alain Finkielkraut’s sweeping negative judgments on the Internet. But he also takes care to point out certain negative or unfortunate aspects of the contemporary internet, and to distinguish himself from the apocalyptic transhumanism of a Raymond Kurzweil or the uncritical lyricism of a Pierre Lévy…

A good part of the book is devoted to the French and European responses to the Google Books project around 2005. Originally, Google wanted to use its data centers and its search algorithm to build a library of Alexandria for modern times: all books available to everyone on the Internet! France and Europe had to rise to the American challenge. But the author shows that their responses obeyed theatrical posturing, logics of political announcement and communication, and strategies of power and of capturing public funds by various institutions, ultimately producing minuscule concrete results. I note, for my part, that even if Google Books exists and provides (free) services to the public and to researchers, the initial project was shattered on copyright legislation, as a recent Wired article explains well. All of this helps us understand the success of illegal but popular ventures such as the Genesis library.

In the land of Numérix, there is a lot of anti-American and anti-capitalist ideology… but the author shows that the state – balkanized by competing ministerial and institutional fiefdoms – in fact works in the service of sectoral or private interests instead of putting France’s technical capacities and the taxpayers’ money at the service of the public. The record is damning: project after project, the lessons of failure are never learned and the same mistakes are repeated. As if, faced with the domination of Silicon Valley, it were enough to express indignation and throw millions of euros out the window for Europe or France to find their place in the world again.

Beyond the various European digital library projects, Alexandre Moatti shows how the collaboration of scholars, the dissemination of knowledge and the influence of high culture on the Internet are blocked. Three culprits work hand in hand: contemporary copyright legislation, inept public policies and the rapacity of the big European scientific publishing houses (Elsevier, Springer). The common-sense arguments put forward by Moatti are not new. They largely take up the ideas of the international open data movement in general and of open science in particular. But the indictment is very well articulated. It also joins contemporary reflections on the necessary reinvention of scientific publishing (see for example the recent article by Marcello Vitali-Rosati).

Closing the book, I could not help thinking that even if the French state were headed by people aware of the capital importance of the internet in the service of knowledge and eager to reform the administration’s bad habits in this regard, their action would not necessarily be crowned with success. Mentalities would have to evolve in depth; teachers, journalists and senior civil servants would have to be convinced. Society as a whole would have to realize that the great digital transformation is not only technical or industrial, but also and above all concerns knowledge and culture. It would have to be understood that the civilization of the future remains to be invented, and that this is not done out of fear and resentment, but with courage, imagination and experimentation.


Public access to the Web’s power of diffusion, along with the streams of digital data that now flow from all human activities, confront us with the following problem: how do we transform these torrents of data into rivers of knowledge? Some enthusiastic observers of the statistical processing of “big data”, such as Chris Anderson (the former editor-in-chief of Wired), hastened to declare that scientific theories – in general! – were now obsolete [See Chris Anderson, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete”, Wired, June 23, 2008]. We would need nothing more than massive data and statistical algorithms operating in data centers: theories – and therefore the hypotheses they propose and the thinking from which they arise – would belong to a bygone stage of the scientific method. Supposedly, the numbers speak for themselves. But this obviously forgets that, prior to any computation, one must determine the relevant data, know exactly what one is counting, and name – that is, categorize – the emerging structures. Moreover, no statistical correlation directly yields causal relations. Those necessarily come from hypotheses that explain the correlations brought out by statistical computation. Under the guise of revolutionary thinking, Chris Anderson and his followers resurrect the old positivist and empiricist epistemology in vogue in the nineteenth century, according to which only inductive reasoning (that is, reasoning based solely on data) is scientific. This position amounts to repressing or passing over in silence the theories – and therefore the risky hypotheses grounded in personal thinking – that are necessarily at work in any process of data analysis and that show themselves in decisions of selection, identification and categorization. One cannot initiate a statistical treatment or interpret its results without any theory. The only choice we have is to leave the theories tacit or to make them explicit. Making a theory explicit allows us to relativize it, compare it with other theories, share it, generalize it, criticize it and improve it [Among the abundant literature on the subject, see in particular the works of two great epistemologists of the twentieth century, Karl Popper and Michael Polanyi]. This is even one of the main components of what is commonly called “critical thinking”, which secondary and university education is supposed to develop in students.

Beyond empirical observation, scientific knowledge has always been concerned with the correct categorization and description of phenomenal data, a description that necessarily obeys more or less formalized theories. By describing functional relations between variables, a theory offers a conceptual grip on the phenomenal world that allows us (at least partially) to predict and master it. Today’s data correspond to what the epistemology of past centuries called phenomena. To keep spinning this metaphor, the algorithms for analyzing data streams correspond to the observation instruments of classical science. These algorithms show us patterns, that is, ultimately, images. But just because we are able to harness the power of the algorithmic medium to “observe” the data does not mean we should stop there. We must now rely on the computing power of the Internet to “theorize” (categorize, model, explain, share, discuss) our observations, without forgetting to place this theorization in the hands of a teeming collective intelligence.

While underlining the distinction between correlation and causality in their 2013 book on big data, Viktor Mayer-Schonberger and Kenneth Cukier announce that we will be more and more interested in correlations and less and less in causality, which places them in the empiricist camp. Their book nevertheless provides an excellent argument against statistical positivism. They tell the beautiful story of Matthew Maury, an American naval officer who, around the middle of the nineteenth century, aggregated the data from the logbooks held in the official archives to draw up reliable maps of winds and currents [In Big Data: A Revolution… (already cited), pp. 73-77]. Admittedly, those maps were built from an accumulation of empirical data. But I respectfully point out to Cukier and Mayer-Schonberger that such an accumulation could never have been useful, or even simply feasible, without the geographic coordinate system of meridians and parallels… which is anything but empirical and data-driven. In the same way, it is only by adopting a semantic coordinate system that we will be able to organize and share data streams in a useful way.

Today, most of the algorithms that drive recommendations and data mining are opaque, since they are protected by the trade secrets of the large Web companies. As for analysis algorithms, most of them are not only opaque but also out of reach of the majority of Internet users, for reasons that are both technical and economic. Yet it is impossible to produce reliable knowledge by secret methods. What is more, if we want to solve the problem of extracting useful information from the deluge of big data, we cannot forever confine ourselves to statistical algorithms working on the kind of organization of digital memory available to us in 2017. Sooner or later – and the sooner the better – we will have to implement an organization of memory designed from the outset for semantic processing. Only a qualitative mutation of computing will let us culturally tame the exponential growth of data and thus transform that data into reflective knowledge.

Let us remember that data science is becoming an essential component of our understanding of economic and social phenomena. No organization can do without it any longer. Unless they are to fly blind, economic, political and social strategies must rely on the art of analyzing big data. But this art does not consist only of statistics and programming. It also includes what Americans call "domain knowledge", which is nothing other than a model, or causal theory, of the reality being analyzed – a theory necessarily of human origin, rooted in practical experience and oriented by ends. It is always humans and their meaning-producing narratives that set the algorithms in motion.

Around the Earth, artificial satellites relay our communications and carry a host of observation instruments and sensors: military intelligence, climate records, ecosystem monitoring, crop surveillance… Closer to the surface lies the zone of low-orbit satellites that connect our smartphones. A little lower, aircraft on autopilot communicate with radar stations and ground bases, while their internal events are recorded in black boxes. Past the cloud layer, the luminous networks of smart metropolises come into view. Cargo ships, shuttles, subways, high-speed trains and fleets of autonomous vehicles exchange signals, stay in contact with satellites and roadside beacons, and hand over their duly identified passengers and goods. Watching every street corner, riddling the subsoil, floating in mid-ocean, keeping watch on coasts and summits, mounted on aerial or underwater drones, antennas, sensors and cameras flood the computing centers with data. Earbuds, gloves and shoes are connected. We now wear bracelets that record our pulse and the chemical composition of our blood and skin, send the data to the cloud for analysis, and receive notifications and health advice in real time… Thanks to the tamper-proof identities of wearable computing, we pass everywhere without searches or passwords. Connected glasses take photos and videos, superimpose virtual layers on ordinary optical vision and project data maps on demand. Our games of domination align themselves with capacities for exploiting memory and speeds of analysis. New political parties gather their members around epistemological theses. Entangled in the global economy and the new transnational public space, our swarms of collective intelligence collaborate and fight on the hyper-connected territories of the great metropolises. Reflecting human thought in the semantic mirror of the cloud, the evolution of ecosystems of ideas unfolds its inexhaustible immersive, multiplayer spectacle. Prosperity, security, influence: everything comes down to one form or another of cognitive optimization… except, perhaps, in the remote, almost deserted analog zones that stretch far from the great centers.

Communication does not entail only the use of words, as a majority of people believe, but whatever method the sender employs to convey a particular message to the audience. Irrespective of the meth…

Source: What I have learnt from the course "Advanced Theories of Communication"


The content of my post explaining how I use social media in my university courses can now be found at this address (ISSN 2386-8562).
This text is the preprint of an article in issue 58 of RED. It will be published as an invited contribution of the "personal history in educational research" type (Personal History as Educational Research).

Same paper in Spanish

DATA

I am available to any research team in education or pedagogy that would like help analyzing the data produced by my two courses, #UOAC (in English) and #UOIM (in French). These data consist of Tweets, Moments and Blogposts. All Moments and Blogposts were published on Twitter with the corresponding hashtags.

An article in Quartier Libre (the student newspaper of the Université de Montréal):

Moments (a personal selection of tweets) from my fall 2016 courses

My winter 2017 course of one hundred second-year students: #uotm17

A selection of Storifys or Moments by my 2016 students (work in progress, more to come):

A unique experience

"How free are we?" It is as beautiful as a Sufi poem!

A selection of blogposts by my students attesting to the effectiveness of the "Twitter in Class" method (work in progress, more to come)

The best course of my entire life

"Taking notes by Tweeting"!

A journey into the world of media

The system is simple, but effective.

Twitter and collective memory

The tweets allowed us to recall the topics discussed in class almost in their entirety.

A course that changed my perception of learning

Innovating in order to teach

"The CMN 1560 course was one of my favorite courses of the fall 2016 term. My professor, Mr. Pierre Lévy, truly changed the way the course material is learned. The great majority of students really enjoyed the course. Toward the end of the term, I saw many tweets expressing how pleasant students had found it. We used social media, specifically Twitter, for collective note-taking. Indeed, like anyone else, I can still access the material I was taught in the course thanks to #uoim on Twitter. There I can find all the notes, questions, remarks and comments that students tweeted in relation to the course content. This interactive element of the course drew in many students and made the material more engaging. Personally, I enjoyed attending the class sessions; I did not find them boring at all."

This course, Advanced theories of communication, was like none other that I had ever taken.

The potential of Twitter for education

Collective intelligence in the classroom

I have learned to use the media that are at my fingertips

Communication happened

Becoming an autonomous thinker

what more professors should do nowadays to make their courses more interactive and stimulating

6 Things I Learned From Pierre Levy

"When we started the class and the professor told us to take notes only via Twitter, I was very skeptical. I did not want to have an open mind towards his new method of teaching, despite the fact that I am considered part of the millennial generation and, you know, we are known for our excessive use of technology"

From the blogpost of Cindi Cai: "Moreover, this Advanced Theories of Communication course not only taught me how to be a good speaker; in class, I also learned how to be a good listener. For example, the professor encouraged students to take part in the class Q&A by tweeting, which gives every single student a chance to ask questions and, at the same time, encourages students to listen to other students' ideas about the subject. As students, we just need to focus on the professor's lecture, catch the points we have questions or doubts about, and raise our questions by tweeting in order to get answers from the professor, while the professor in turn listens to students' opinions by checking the course tag. In this way, students have an equal chance to be heard and to get their answers from the professor, and they also get the opportunity to listen to, or be inspired by, other students' learning stories (Storify, blog post). This teaching method is very interesting to me, because as a university student, what I want from university is not just a degree certificate, but also an opportunity to develop the ability to think broadly, solve problems, and challenge myself. To be honest, before I took this class I was tired of university, because I found every single class I took at the University of Ottawa really boring, and most students, including me, were more like machines: we kept going to every class and studied hard, but we just wanted a better grade, and after the exams or assignments we simply forgot what we had learned in class. This situation made me nervous, and I started to doubt my university life and wonder whether university could really help me in my future development.

Luckily, the CMN 3109 class strongly changed my view of university, because under the professor's unique teaching method I realized that if I just focus on grades, there is a strong possibility that I won't be as prepared for the world outside of university. But if I focus on learning as much as I can, and engage with all the opportunities presented by the class, I will be in a much better position to thrive after I graduate. Just as those communication techniques showed me how to be a better yoga instructor, this course has truly encouraged me to build my knowledge of the whole communication process and to rebuild my confidence as I prepare, positively, for my future career as a yoga teacher…."