Over the last few days I’ve been working through Stephen Downes’ paper Learning Networks and Connective Knowledge. I often struggle to get through these longer works and actually pull anything relevant out of them, and this is definitely a longer work (over 12,000 words, just over 29 pages pasted in Word). I used Diigo to highlight and comment while reading.
Below are quotes from Stephen’s paper and my comments. Stephen’s words are in block quotes; my comments appear below each snippet. If you’d like to see my notes in context and don’t use Diigo, use this annotated link for the page.
This is a bit rough and pretty much my “thinking out loud” from when I was reading, so don’t expect a lot of polished thoughts here. It’s also still over 3000 words, so don’t expect a succinct summary either. This is more of “somewhere in the middle of the process,” and I have many questions remaining.
In other words, cognitivists defend an approach that may be called ‘folk psychology’. “In our everyday social interactions we both predict and explain behavior, and our explanations are couched in a mentalistic vocabulary which includes terms like ‘belief’ and ‘desire’.” The argument, in a nutshell, is that the claims of folk psychology are literally true, that there is, for example, an entity in the mind corresponding to the belief that ‘Paris is the capital of France’, and that this belief is, in fact, what might loosely be called ‘brain writing’ – or, more precisely, there is a one-to-one correspondence between a person’s brain states and the sentence itself.
I’ve never heard cognitivism compared to “folk psychology” before. I’m not totally convinced by this argument. Cognitivist methods do have some research support, after all (think multimedia learning and Clark & Mayer’s e-Learning and the Science of Instruction). But their methods could (at least sometimes) be right even if their explanation of the underlying mechanism is wrong.
We may contrast cognitivism, which is a causal theory of mind, with connectionism, which is an emergentist theory of mind. This is not to say that connectionism does away with causation altogether; it is not a ‘hand of God’ theory. It allows that there is a physical, causal connection between entities, and this is what makes communication possible. But where it differs, crucially, is that the transfer of information does not reduce to this physical substrate. Contrary to the communications-theoretical account, the new theory is a non-reductive theory. The contents of communications, such as sentences, are not isomorphic with some mental state.
For example (and there are many we could choose from), consider Randall O’Reilly on how the brain represents conceptual structures, as described in Modeling Integration and Dissociation in Brain and Cognitive Development. He explicitly rejects the ‘isomorphic’ view of mental contents, and instead describes a network of distributed representations. “Instead of viewing brain areas as being specialized for specific representational content (e.g., color, shape, location, etc.), areas are specialized for specific computational functions by virtue of having different neural parameters…”
For example, when I say, “What makes something a learning object is how we use the learning object,” I am asserting a functionalist approach to the definition of learning objects (people are so habituated to essentialist definitions that my definition does not even appear on lists of definitions of learning objects).
It’s like asking, what makes a person a ‘bus driver’? Is it the colour of his blood? The nature of his muscles? A particular mental state? No – according to functionalism, what makes him a ‘bus driver’ is the fact that he drives buses. He performs that function.
To illustrate this concept, I have been asking people to think of the concept ‘Paris’. If ‘Paris’ were represented by a simple symbol set, we would all mean the same thing when we say ‘Paris’. But in fact, we each mean a collection of different things and none of our collections is the same. Therefore, in our own minds, the concept ‘Paris’ is a loose association of a whole bunch of different things, and hence the concept ‘Paris’ exists in no particular place in our minds, but rather, is scattered throughout our minds.
As we examine the emergentist theory of mind we can arrive at five major implications of this approach for educational theorists:
– first, knowledge is subsymbolic. Mere possession of the words does not mean that there is knowledge; the possession of knowledge does not necessarily result in the possession of the words (and for much more on this, see Michael Polanyi’s discussion of ‘tacit knowledge’ in ‘Personal Knowledge’).
– second, knowledge is distributed. There is no specific ‘mental entity’ that corresponds to the belief that ‘Paris is the capital of France’. What we call that ‘knowledge’ is (an indistinguishable) pattern of connections between neurons. See, for example, Geoffrey Hinton, ‘Learning Distributed Representations of Concepts’.
– third, knowledge is interconnected. The same neuron that is a part of ‘Paris is the capital of France’ might also be a part of ‘My dog is named Fred’. It is important to note that this is a non-symbolic interconnection – this is the basis for non-rational associations, such as are described in the recent Guardian article, ‘Where Belief is Born’.
– fourth, knowledge is personal. Your ‘belief’ that ‘Paris is the capital of France’ is quite literally different from my belief that ‘Paris is the capital of France’. If you think about it, this must be the case – otherwise Gestalt tests would be useless; we would all utter the same word when shown the same picture.
– fifth, what we call ‘knowledge’ (or ‘belief’, or ‘memory’) is an emergent phenomenon. Specifically, it is not ‘in’ the brain itself, or even ‘in’ the connections themselves, because there is no ‘canonical’ set of connections that corresponds with ‘Paris is the capital of France’. It is, rather (and carefully stated), a recognition of a pattern in a set of neural events (if we are introspecting) or behavioural events (if we are observing). We infer to mental contents the same way we watch Donald Duck on TV – we think we see something, but that something is not actually there – it’s just an organization of pixels.
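The subsymbolic, distributed, interconnected, and personal claims in this list map onto the connectionist picture of distributed representation. As a toy sketch only (the unit pool, the string-seeded randomness, and the 0.5 activation threshold are my own illustrative assumptions, not Downes’ or Hinton’s), a ‘belief’ can be modeled as a pattern of activation spread across a shared pool of units:

```python
import random

N_UNITS = 50  # a shared pool of "neurons"; every belief is spread across all of them

def represent(belief, person):
    """Return an activation pattern for a belief.

    The pattern depends on the person as well as the sentence: there is
    no canonical representation of 'Paris is the capital of France'.
    (The string-seeded RNG here is purely illustrative.)
    """
    rng = random.Random(person + "|" + belief)
    return [rng.random() for _ in range(N_UNITS)]

alice_paris = represent("Paris is the capital of France", "alice")
bob_paris = represent("Paris is the capital of France", "bob")
alice_fred = represent("My dog is named Fred", "alice")

# Distributed: the belief lives in the whole pattern, not in any one unit.
print(len(alice_paris))

# Personal: Alice's and Bob's patterns for the same sentence differ.
print(alice_paris == bob_paris)

# Interconnected: some units are strongly active in both of Alice's
# (unrelated) beliefs, so the same "neuron" takes part in each.
shared = [i for i in range(N_UNITS)
          if alice_paris[i] > 0.5 and alice_fred[i] > 0.5]
print(len(shared) > 0)
```

The point of the sketch is only that the ‘knowledge’ is nowhere in particular: delete any single unit and every belief degrades a little, but none of them disappears.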
If this is the case, then the concepts of what it is to know and what it is to teach are very different from the traditional theories that dominate distance education today. Because if learning is not the transfer of mental contents – if there is, indeed, no such mental content that exists to be transported – then we need to ask, what is it that we are attempting to do when we attempt to teach and learn?
…we can identify the essential elements of network semantics.
First, context, that is, the localization of entities in a network. Each context is unique – entities see the network differently, experience the world differently. Context is required in order to interpret signals, that is, each signal means something different depending on the perspective of the entity receiving it.
Second, salience, that is, the relevance or importance of a message. This amounts to the similarity between one pattern of connectivity and another. If a signal creates the activation of a set of connections that were previously activated, then this signal is salient. Meaning is created from context and messages via salience.
Third, emergence, that is, the development of patterns in the network. Emergence is a process of resonance or synchronicity, not creation. We do not create emergent phenomena. Rather, emergent phenomena are more like commonalities in patterns of perception. An emergent pattern requires an interpretation to be recognized; this happens when the pattern becomes salient to a perceiver.
Fourth, memory is the persistence of patterns of connectivity, and in particular, those patterns of connectivity that result from, and result in, salient signals or perceptions.
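Salience as “the similarity between one pattern of connectivity and another” can be sketched numerically. The cosine measure and the example activation patterns below are my own illustrative assumptions, not anything from the paper:

```python
import math

def cosine(a, b):
    """Similarity between two activation patterns: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# A previously activated pattern (the perceiver's "context").
prior = [0.9, 0.1, 0.8, 0.0, 0.7]

# Two incoming signals, encoded as activation patterns over the same units.
familiar = [0.8, 0.2, 0.9, 0.1, 0.6]   # reactivates much of the prior pattern
novel    = [0.0, 0.9, 0.1, 0.8, 0.0]   # activates mostly different units

# The familiar signal is the more salient one: it re-creates the
# activations that were previously there.
print(cosine(prior, familiar) > cosine(prior, novel))  # True
```

On this reading, the same signal would be salient to one perceiver and not to another, because salience is computed against each perceiver’s own prior pattern – which is just the “context” element restated.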
For example, in order to illustrate the observation that ‘knowledge is distributed’ I have frequently appealed to the story of the 747. In a nutshell, I ask, “Who knows how to make a 747 fly from London to Toronto?” The short answer is that nobody knows how to do this – no one person could design a 747, manufacture the parts (including tires and aircraft engines), take it off, fly it properly, tend to the passengers, navigate, and land it successfully. The knowledge is distributed across a network of people, and the phenomenon of ‘flying a 747’ can exist at all only because of the connections between the constituent members of that network.
“What happens,” I asked, “when online learning ceases to be like a medium, and becomes more like a platform? What happens when online learning software ceases to be a type of content-consumption tool, where learning is “delivered,” and becomes more like a content-authoring tool, where learning is created?”
The answer turns out to be a lot like Web 2.0: “The model of e-learning as being a type of content, produced by publishers, organized and structured into courses, and consumed by students, is turned on its head. Insofar as there is content, it is used rather than read— and is, in any case, more likely to be produced by students than courseware authors. And insofar as there is structure, it is more likely to resemble a language or a conversation rather than a book or a manual.”
The idea behind the personal learning environment is that the management of learning migrates from the institution to the learner.
Learning therefore evolves from being a transfer of content and knowledge to the production of content and knowledge.
In a distributed environment, however, the design is no longer defined as a type of process. Rather, designers need to characterize the nature of the connections between the constituent entities.
In effective networks, content and services are disaggregated. Units of content should be as small as possible and content should not be ‘bundled’. Instead, the organization and structure of content and services is created by the receiver.
An effective network is desegregated. For example, in network learning, learning is not thought of as a separate domain. Hence, there is no need for learning-specific tools and processes. Learning is instead thought of as a part of living, of work, of play. The same tools we use to perform day-to-day activities are the tools we use to learn.
Knowledge is a network phenomenon. To ‘know’ something is to be organized in a certain way, to exhibit patterns of connectivity. To ‘learn’ is to acquire certain patterns.
A good student learns by practice, practice and reflection.
A good teacher teaches by demonstration and modeling.
The essence of being a good teacher is to be the sort of person you want your students to become.
The most important learning outcome is a good and happy life.
In essence, on this theory, to learn is to immerse oneself in the network. It is to expose oneself to actual instances of the discipline being performed, where the practitioners of that discipline are (hopefully with some awareness) modeling good practice in that discipline. The student then, through a process of interaction with the practitioners, will begin to practice by replicating what has been modeled, with a process of reflection (the computer geeks would say: back-propagation) providing guidance and correction.
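The parenthetical nod to back-propagation can be made concrete with the smallest possible case: a single connection weight adjusted by feeding the error back. The input, target, learning rate, and loop count here are invented purely for illustration:

```python
# One "neuron" learning by practice and reflection: produce a behaviour,
# compare it with the modeled behaviour, and feed the error back to
# adjust the connection weight. This single step is what back-propagation
# generalizes to whole networks of connections.
w = 0.0        # connection weight, initially untrained
target = 2.0   # the modeled behaviour: output 2.0 for input 1.0
lr = 0.1       # learning rate

for step in range(100):
    x = 1.0
    y = w * x              # practice: produce a behaviour
    error = y - target     # reflection: compare with the model
    w -= lr * error * x    # correction: adjust the connection

print(round(w, 3))  # converges toward the modeled value, 2.0
```

The analogy is loose – human reflection is not literally gradient descent – but the shape is the same: repeated practice plus error-driven correction, rather than transfer of content.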
These environments cut across disciplines. Students will not study algebra beginning with the first principles and progressing through the functions. They will learn the principles of algebra as needed, progressing more deeply into the subject as the need for new knowledge is provoked by the demands of the simulation. Learning opportunities – either in the form of interaction with others, in the form of online learning resources (formerly known as learning objects), or in the form of interaction with mentors or instructors – will be embedded in the learning environment, sometimes presenting themselves spontaneously, sometimes presenting themselves on request.
This does not mean that a ‘science’ of learning is impossible. Rather, it means that the science will be more like meteorology than like (classical) physics. It will be a science based on modeling and simulation, pattern recognition and interpretation, projection and uncertainty.