CCK09: Notes on Learning Networks and Connective Knowledge

Over the last few days I’ve been working through Stephen Downes’ paper Learning Networks and Connective Knowledge. I often struggle to get through these longer works and actually pull anything relevant out of them, and this is definitely a longer work (over 12,000 words, just over 29 pages pasted in Word). I used Diigo to highlight and comment while reading.

Below are quotes from Stephen’s paper and my comments. Stephen’s words are in block quotes; my comments appear below each snippet.  If you’d like to see my notes in context and don’t use Diigo, use this annotated link for the page.

This is a bit rough and pretty much my “thinking out loud” from when I was reading, so don’t expect a lot of polished thoughts here. It’s also still over 3000 words, so don’t expect a succinct summary either. This is more of “somewhere in the middle of the process,” and I have many questions remaining.

In other words, cognitivists defend an approach that may be called ‘folk psychology’. “In our everyday social interactions we both predict and explain behavior, and our explanations are couched in a mentalistic vocabulary which includes terms like ‘belief’ and ‘desire’.” The argument, in a nutshell, is that the claims of folk psychology are literally true, that there is, for example, an entity in the mind corresponding to the belief that ‘Paris is the capital of France’, and that this belief is, in fact, what might loosely be called ‘brain writing’ – or, more precisely, there is a one-to-one correspondence between a person’s brain states and the sentence itself.

I’ve never heard cognitivism compared to “folk psychology” before. I’m not totally convinced by this argument. Cognitivist methods do have some research support, after all. (Think of the multimedia learning principles in Clark & Mayer’s “e-Learning and the Science of Instruction.”) But their methods could (at least sometimes) be right even if their explanation of the underlying mechanism is wrong.

We may contrast cognitivism, which is a causal theory of mind, with connectionism, which is an emergentist theory of mind. This is not to say that connectionism (see also) does away with causation altogether; it is not a ‘hand of God’ theory. It allows that there is a physical, causal connection between entities, and this is what makes communication possible. But where it differs is, crucially: the transfer of information does not reduce to this physical substrate. Contrary to the communications-theoretical account, the new theory is a non-reductive theory. The contents of communications, such as sentences, are not isomorphic with some mental state.

From Wikipedia: “A property of a system is said to be emergent if it is more than the sum of the properties of the system’s parts.” If I understand Stephen’s argument correctly, part of what he’s saying here is that rather than knowledge being exactly what we perceive it to be (a sentence like “Paris is a city in France”), what’s happening in our brains is more than that. When a teacher shares knowledge with a learner, it doesn’t work like a copy machine where the teacher gives the learner a duplicate of the original and then both people have discrete copies of that knowledge.
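
To make the contrast concrete for myself, here’s a tiny toy sketch in Python (entirely my own illustration, not anything from Stephen’s paper; the variable names and numbers are invented). It just contrasts copying a discrete symbol with two minds holding similar-but-different activation patterns for the “same” belief.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Symbolic" storage: the belief is a discrete entry that can be copied verbatim.
symbolic_memory = {"capital_of_france": "Paris"}
copy = dict(symbolic_memory)
print(copy == symbolic_memory)   # True: the learner gets an exact duplicate

# "Distributed" storage: the same belief is a pattern of activation spread
# across many units; no single unit "is" the belief.
n_units = 50
teacher_pattern = rng.normal(size=n_units)

# A second "mind" arriving at the same belief ends up with a related,
# but not identical, pattern (modeled here as the original plus noise).
learner_pattern = teacher_pattern + rng.normal(scale=0.5, size=n_units)

similarity = np.dot(teacher_pattern, learner_pattern) / (
    np.linalg.norm(teacher_pattern) * np.linalg.norm(learner_pattern)
)
print(round(float(similarity), 2))   # high, but not 1.0: similar, never identical
```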

For example (and there are many we could choose from), consider Randall O’Reilly on how the brain represents conceptual structures, as described in Modeling Integration and Dissociation in Brain and Cognitive Development. He explicitly rejects the ‘isomorphic’ view of mental contents, and instead describes a network of distributed representations. “Instead of viewing brain areas as being specialized for specific representational content (e.g., color, shape, location, etc), areas are specialized for specific computational functions by virtue of having different neural parameters…

I struggle a bit with the neurological arguments, but it does seem to make sense that the brain is organized by different functions and not by the symbols we’ve created to communicate. And certainly when you look at brain scans of people doing different tasks, the activity isn’t just in one area: multiple areas of the brain are involved in any complex task. But I’m also cautious about the brain evidence because, frankly, I don’t really understand it that well. I’m also aware of research showing that people find arguments more convincing when they’re presented alongside pictures of brain scans, even when the text is the same. I don’t want to fall prey to that fallacy.

For example, when I say, “What makes something a learning object is how we use the learning object,” I am asserting a functionalist approach to the definition of learning objects (people are so habituated to essentialist definitions that my definition does not even appear on lists of definitions of learning objects).

It’s like asking, what makes a person a ‘bus driver’? Is it the colour of his blood? The nature of his muscles? A particular mental state? No – according to functionalism, what makes him a ‘bus driver’ is the fact that he drives buses. He performs that function.

These are better examples; this makes more sense to me. It does seem to support creating learning environments where content can be used in multiple different ways, which fits with connectivism.

To illustrate this concept, I have been asking people to think of the concept ‘Paris’. If ‘Paris’ were represented by a simple symbol set, we would all mean the same thing when we say ‘Paris’. But in fact, we each mean a collection of different things and none of our collections is the same. Therefore, in our own minds, the concept ‘Paris’ is a loose association of a whole bunch of different things, and hence the concept ‘Paris’ exists in no particular place in our minds, but rather, is scattered throughout our minds.

Back to the cognitivist idea of the teacher as a mental copy machine handing a student a duplicate copy of knowledge: this is the opposite of that. It’s more like 20 artists sitting down to draw the same scene; there will be similarities and overlaps, but nobody’s picture will be the same. This is, perhaps, part of why connectivism makes more sense when applied to learning complex topics. You don’t need connectivism to explain memorizing the state capitals or multiplication tables; the idea of the mental copy machine is probably a functional enough explanation. But if you’re trying to learn a big, gnarly topic, a model that works for regurgitating facts isn’t enough.

As we examine the emergentist theory of mind we can arrive at five major implications of this approach for educational theorists:

– first, knowledge is subsymbolic. Mere possession of the words does not mean that there is knowledge; the possession of knowledge does not necessarily result in the possession of the words (and for much more on this, see Michael Polanyi’s discussion of ‘tacit knowledge’ in ‘Personal Knowledge’).
– second, knowledge is distributed. There is no specific ‘mental entity’ that corresponds to the belief that ‘Paris is the capital of France’. What we call that ‘knowledge’ is (an indistinguishable) pattern of connections between neurons. See, for example, Geoffrey Hinton, ‘Learning Distributed Representations of Concepts’.
– third, knowledge is interconnected. The same neuron that is a part of ‘Paris is the capital of France’ might also be a part of ‘My dog is named Fred’. It is important to note that this is a non-symbolic interconnection – this is the basis for non-rational associations, such as are described in the recent Guardian article, ‘Where Belief is Born’.
– fourth, knowledge is personal. Your ‘belief’ that ‘Paris is the capital of France’ is quite literally different from my belief that ‘Paris is the capital of France’. If you think about it, this must be the case – otherwise Gestalt tests would be useless; we would all utter the same word when shown the same picture.
– fifth, what we call ‘knowledge’ (or ‘belief’, or ‘memory’) is an emergent phenomenon. Specifically, it is not ‘in’ the brain itself, or even ‘in’ the connections themselves, because there is no ‘canonical’ set of connections that corresponds with ‘Paris is the capital of France’. It is, rather (and carefully stated), a recognition of a pattern in a set of neural events (if we are introspecting) or behavioural events (if we are observing). We infer to mental contents the same way we watch Donald Duck on TV – we think we see something, but that something is not actually there – it’s just an organization of pixels.

If this is the case, then the concepts of what it is to know and what it is to teach are very different from the traditional theories that dominate distance education today. Because if learning is not the transfer of mental contents – if there is, indeed, no such mental content that exists to be transported – then we need to ask, what is it that we are attempting to do when we attempt to teach and learn.

I’m finding myself resisting some of these ideas, and I’m not quite sure why. Is it because it’s so different from what I’ve been taught and assumed? Is it because I’m just too used to the folk psychology ideas and need to unlearn them? I still feel like even cognitivism is a “good enough” explanation for some basic kinds of knowledge that do seem to operate as content transfer. Cognitivism isn’t a perfect model, but a simple knowledge transfer model might be good enough for some areas. But maybe education has focused too much on simple knowledge transfer because it’s easy and we have an easy model to explain how it works, and education should be about a lot more than the kinds of learning that cognitivism explains well. The learning theories we believe must affect what we choose to teach, and not just how we choose to teach it.
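
One way I can picture the “knowledge is personal” and “emergent” points is with another toy sketch of my own (nothing like this appears in the paper; the weights and labels are arbitrary): two “minds” whose internal connections are literally different, yet whose observable behaviour is the same.

```python
import numpy as np

# Two "minds" with deliberately different internal connection weights.
# (All values are arbitrary; the point is only that the two sets differ.)
w_mind_a = np.array([[2.0, 0.1],
                     [0.3, 0.2]])
w_mind_b = np.array([[1.5, 0.4],
                     [0.1, 0.9]])

cue = np.array([1.0, 0.0])      # a shared cue, say "capital of France?"
answers = ["Paris", "Lyon"]

for name, weights in [("mind A", w_mind_a), ("mind B", w_mind_b)]:
    activation = weights @ cue  # different internal activation patterns...
    print(name, activation, "->", answers[int(np.argmax(activation))])
# ...yet both settle on "Paris": the same observable answer emerges from
# two different sets of connections.
```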

we can identify the essential elements of network semantics.
First, context, that is, the localization of entities in a network. Each context is unique – entities see the network differently, experience the world differently. Context is required in order to interpret signals, that is, each signal means something different depending on the perspective of the entity receiving it.
Second, salience, that is, the relevance or importance of a message. This amounts to the similarity between one pattern of connectivity and another. If a signal creates the activation of a set of connections that were previously activated, then this signal is salient. Meaning is created from context and messages via salience.
Third, emergence, that is, the development of patterns in the network. Emergence is a process of resonance or synchronicity, not creation. We do not create emergent phenomena. Rather, emergent phenomena are more like commonalities in patterns of perception. It requires an interpretation to be recognized; this happens when a pattern becomes salient to a perceiver.
Fourth, memory is the persistence of patterns of connectivity, and in particular, those patterns of connectivity that result from, and result in, salient signals or perceptions.

Earlier in this section, Stephen says that the constructivist idea of “making meaning” is meaningless. But here he says “Meaning is created from context and messages via salience.” What’s the difference between “making meaning” and “creating meaning”? I don’t get it.
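
The way I’m reading “salience… amounts to the similarity between one pattern of connectivity and another,” it sounds something like a similarity score between activation patterns. Here’s a rough sketch of that reading (my own stand-in, not Stephen’s formalism; cosine similarity is just one convenient measure, and the numbers are made up):

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two activation patterns, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

prior_pattern = np.array([0.9, 0.1, 0.8, 0.0, 0.7])   # connections activated before
signal_a      = np.array([0.8, 0.2, 0.9, 0.1, 0.6])   # re-activates much the same set
signal_b      = np.array([0.0, 0.9, 0.0, 0.8, 0.1])   # activates a different set

print("signal A salience:", round(cosine_similarity(prior_pattern, signal_a), 2))
print("signal B salience:", round(cosine_similarity(prior_pattern, signal_b), 2))
# Signal A scores much higher: it lines up with an existing pattern, so it is
# "salient" in the sense described above; signal B barely registers.
```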

For example, in order to illustrate the observation that ‘knowledge is distributed’ I have frequently appealed to the story of the 747. In a nutshell, I ask, “who knows how to make a 747 fly from London to Toronto?” The short answer is that nobody knows how to do this – no one person could design a 747, manufacture the parts (including tires and aircraft engines), take it off, fly it properly, tend to the passengers, navigate, and land it successfully. The knowledge is distributed across a network of people, and the phenomenon of ‘flying a 747’ can exist at all only because of the connections between the constituent members of that network.

This is an example of complicated knowledge, I think, and not complex, but the idea of complicated knowledge being distributed makes sense.

“What happens,” I asked, “when online learning ceases to be like a medium, and becomes more like a platform? What happens when online learning software ceases to be a type of content-consumption tool, where learning is “delivered,” and becomes more like a content-authoring tool, where learning is created?”
The answer turns out to be a lot like Web 2.0: “The model of e-learning as being a type of content, produced by publishers, organized and structured into courses, and consumed by students, is turned on its head. Insofar as there is content, it is used rather than read— and is, in any case, more likely to be produced by students than courseware authors. And insofar as there is structure, it is more likely to resemble a language or a conversation rather than a book or a manual.”

A summary of e-learning 2.0, although so much of what is being developed is still about content consumption.

The idea behind the personal learning environment is that the management of learning migrates from the institution to the learner.

Learning therefore evolves from being a transfer of content and knowledge to the production of content and knowledge.

I’m not sure if learning always has to be about the “production” of content by the learners; it could be about analyzing, summarizing, aggregating, tagging, etc. Am I really “producing content” with my comments on this article? I don’t feel like I’m producing something new, but I definitely feel like this is e-learning 2.0. I’m building on Downes’ work. But maybe my problem is with how I’m defining “content”; if “content” includes tagging and critiquing and commenting, then I am producing content now.

In a distributed environment, however, the design is no longer defined as a type of process. Rather, designers need to characterize the nature of the connections between the constituent entities.

An interesting idea for instructional design. Usually a big part of what we do as instructional designers is think about the structure and order of learning objects. But if the learning objects are scattered in different places and nonsequential, then the support learners need isn’t being told what order to follow: it’s understanding how the objects relate to each other.

In effective networks, content and services are disaggregated. Units of content should be as small as possible and content should not be ‘bundled’. Instead, the organization and structure of content and services is created by the receiver.

The problem for everyone who has tried reusable learning objects is that it’s so hard to get objects that are really independent and free of context. I think this is a very difficult thing to actually achieve.

An effective network is desegregated. For example, in network learning, learning is not thought of as a Separate Domain. Hence, there is no need for learning-specific tools and processes. Learning is instead thought of as a part of living, of work, of play. The same tools we use to perform day-to-day activities are the tools we use to learn.

This is already happening to some extent. Blogs, wikis, and Twitter weren’t designed as learning tools, but lots of people use them as such. A look at Jane Hart’s top tools collection shows lots of tools used by learning professionals that weren’t originally intended for learning.

Knowledge is a network phenomenon. To ‘know’ something is to be organized in a certain way, to exhibit patterns of connectivity. To ‘learn’ is to acquire certain patterns.

If learning is about acquiring patterns, then “to learn is to practice and reflect” would describe ways of following and reinforcing those patterns. I suspect that for this to really make sense, “pattern” has to mean my individual pattern as a learner; my pattern isn’t the same as Stephen’s, even as I’m learning from him. But my pattern might be similar to Stephen’s, overlap with his, or connect with his.
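
If “to learn is to acquire certain patterns,” the picture in my head is something like a Hebbian update, where units that are repeatedly active together strengthen the connections between them. This is only my own toy illustration; Stephen doesn’t commit to any particular learning rule, and the numbers here are arbitrary:

```python
import numpy as np

# A toy Hebbian-style update: units that fire together strengthen their connection.
# This is just a stand-in for "acquiring a pattern", not a claim about the paper.
n_units = 4
weights = np.zeros((n_units, n_units))
learning_rate = 0.1

experience = np.array([1.0, 0.0, 1.0, 0.0])   # a recurring pattern of co-activation

for _ in range(20):                            # repeated practice...
    weights += learning_rate * np.outer(experience, experience)
np.fill_diagonal(weights, 0.0)                 # ignore self-connections

print(weights.round(1))
# Connections between the co-active units (0 and 2) grow; the pattern is "acquired".
```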

Downes’ Educational Theory:

A good student learns by practice, practice and reflection.
A good teacher teaches by demonstration and modeling.
The essence of being a good teacher is to be the sort of person you want your students to become.
The most important learning outcome is a good and happy life.

One thing I’ve been wrestling with a bit lately is the idea of teachers demonstrating and modeling. It seems like demonstrating and modeling are mostly the same thing. What’s the difference between the two? And I do feel like “teacher” implies something a little more active than being a model off in the distance. What if we say that good teachers model and nurture instead? Nurturing doesn’t imply direct instruction or even most of what we think of as teaching, but it does imply interacting with students in ways that support them and help bring out the best in them.

In essence, on this theory, to learn is to immerse oneself in the network. It is to expose oneself to actual instances of the discipline being performed, where the practitioners of that discipline are (hopefully with some awareness) modeling good practice in that discipline. The student then, through a process of interaction with the practitioners, will begin to practice by replicating what has been modeled, with a process of reflection (the computer geeks would say: back propagation) providing guidance and correction.

This description is helpful, but I again don’t see how demonstrating is different from modeling.
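
The “computer geeks would say: back propagation” aside is the part that clicks for me. The analogy, as I read it, is that reflection compares the attempt against what was modeled and feeds the difference back as a correction. Here’s a bare-bones sketch of that error-driven-correction idea (my own, and strictly a loose analogy: real back propagation works through multi-layer networks, which this simple example doesn’t have):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(20, 3))                 # "situations" the learner encounters
target = x @ np.array([1.0, -2.0, 0.5])      # the practice being modeled

w = np.zeros(3)                              # the learner's initial (empty) pattern
for _ in range(200):
    attempt = x @ w                          # practice: replicate what was modeled
    error = attempt - target                 # reflection: notice where it falls short
    w -= 0.05 * (x.T @ error) / len(x)       # correction: adjust the connections

print(w.round(2))                            # ends up near the modeled [1.0, -2.0, 0.5]
```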

These environments cut across disciplines. Students will not study algebra beginning with the first principles and progressing through the functions. They will learn the principles of algebra as needed, progressing more deeply into the subject as the need for new knowledge is provoked by the demands of the simulation. Learning opportunities – either in the form of interaction with others, in the form of online learning resources (formerly known as learning objects), or in the form of interaction with mentors or instructors – will be embedded in the learning environment, sometimes presenting themselves spontaneously, sometimes presenting themselves on request.

This reinforces what Stephen said earlier about tools not being specific to learning; learning tools should be the tools we live and work and play with, integrated in our daily lives.

This does not mean that a ‘science’ of learning is impossible. Rather, it means that the science will be more like meteorology than like (classical) physics. It will be a science based on modeling and simulation, pattern recognition and interpretation, projection and uncertainty.

This is in the postscript about the futility of traditional empirical research on learning. Maybe this is where I run into problems reconciling the cognitivist research I’ve read (which is all traditional “change one variable” research) with connectivism. This would also explain why some of the cognitivist research that works OK in a lab environment fails in real classrooms; a lab environment doesn’t actually reflect the chaos of a classroom well enough. I’ve heard Stephen make this argument on a number of occasions, but I’ve always assumed that it meant any educational research would be worthless. That isn’t what he’s saying though; he’s saying that educational research is a different type of research. Now it’s finally making sense to me; of course educational research should be more like psychology, where we have trends and patterns but few absolutes.

1 thought on “CCK09: Notes on Learning Networks and Connective Knowledge”

  1. Wow! Thanks for the reference and analysis. I found your take clarifying. I’m not participating in CCK this year, but it sounds like interesting things continue.
    Just 2 comments. (1) Labels (e.g., cognitivism) are not helpful for me because the field is so fragmented; I find it more helpful to speak of the historical development of the ideas of specific theorists (Bruner, Vygotsky, James, Dewey are some of my favorites).
    (2) I find the problem with traditional research is that the interpretations derived go way too far in what they claim. I’ve been working on the following: deduction can result in very secure knowledge of variables because it purposely limits generalizability, but making practice recommendations requires an inductive process that generalizes from multiple sources.
