What are Ideas? Networkological Musings on the Language of Neural Cinema
So it seems Levi has a really interesting post up about ideas (prompted by his reading of Ian Hacking’s work on social constructionism).
And I’ve gotta say, I really agree with just about everything Levi says, and certainly with all the general instincts he describes. The only caveat I have, and I don’t doubt Levi would agree, is that I find the term ‘idea’ as used by Hacking (as in ‘my idea of a dog’) to be a bit imprecise.
What are Ideas?
In networkological thought, there are several words which take up the common term ‘idea.’ Firstly, there are ‘mental images’ – anything that appears before a mind. Boundaries between these are often fuzzy, nested, etc. There are two primary types of mental images – ideas, which are images created from sense perception, or memories or hallucinations or fantasies constructed thereof, and then concepts, which are abstractions from ideas. Of course, the boundaries between these are quite fluid.
For the networkological perspective, mind and matter are two sides of the same thing, so that whenever there’s a mental anything, including mental images, these are simply the mental side of a material structure. In the case of mental images, these are the mental side of the material structures in the brain, the activation patterns of particular neurons. That said, we must be careful to differentiate between mental images (ie: my image of a dog) and the neural and material patterns which create them, for there is no one-to-one correlation going on here, or at least, not a simple one. For there are in fact many different neural patterns which could give rise to a given mental image in different contexts. The brain is a radically distributed whole, and so we cannot simply link the neural correlates of a dog with its image.
What then? Mental images are distinctions we make within the flowing mental image of conscious experience, which itself actually DOES correspond to a given, if temporally extended and processual, neural state. The act of creating a distinct mental image from this flow, this act of distinction, is itself an action caused by the brain, which itself creates a particular set of experiences in consciousness, which can be dissected into mental images, etc.
And it is here that we see the degree to which particular mental images are abstractions, but useful ones. For in fact, these abstractions can be linked to particular objects in the world by means of our inner bodily and spatiotemporal maps, and used to guide action.
When I speak then of ideas, I mean these abstractions of other mental states which are themselves mental states, and which can be thought of, if fuzzily and processually, as the mental side of neural patterns.
Complicating it with Consciousness
There are a few additional factors complicating things here, however. Firstly, the issue of consciousness. Many researchers have argued that only when the rhythmic firing of neurons comes into sync do those neurons bind together, and that this happens for individual objects (ie: the binding of color, texture, sound, and location in spacetime to that given dog walking across my line of sight) but also for consciousness as a whole. That is, consciousness is simply the largest binding of synchronous activity in the brain.
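The binding-by-synchrony picture can be made concrete with a toy simulation. The following sketch is purely illustrative and not drawn from the post or from any neuroscience model the author cites: it uses the well-known Kuramoto phase-oscillator model as a stand-in, where coupled “neurons” pull each other into sync (a rough analogue of binding) and uncoupled ones drift apart. All names (`simulate`, `coupling`, the order parameter `r`) are my own choices for this example.

```python
# Toy Kuramoto-style illustration of "binding by synchrony" (an assumption-laden
# sketch, not the author's model): coupled phase oscillators pull into sync,
# uncoupled ones drift with their own natural frequencies.
import math
import random

def simulate(n=20, coupling=2.0, steps=2000, dt=0.01, seed=0):
    rng = random.Random(seed)
    # Each "neuron" is a phase oscillator with a slightly different frequency.
    freqs = [1.0 + 0.2 * rng.random() for _ in range(n)]
    phases = [2 * math.pi * rng.random() for _ in range(n)]
    for _ in range(steps):
        mean_sin = sum(math.sin(p) for p in phases) / n
        mean_cos = sum(math.cos(p) for p in phases) / n
        new = []
        for f, p in zip(freqs, phases):
            # Coupling term: (K/n) * sum_j sin(theta_j - theta_i), expanded
            # via sin(a - b) = sin(a)cos(b) - cos(a)sin(b).
            pull = coupling * (mean_sin * math.cos(p) - mean_cos * math.sin(p))
            new.append(p + dt * (f + pull))
        phases = new
    # Order parameter r: 1.0 = perfect sync ("bound"), near 0 = unsynced.
    return math.hypot(sum(math.cos(p) for p in phases) / n,
                      sum(math.sin(p) for p in phases) / n)

coupled = simulate(coupling=2.0)    # strongly coupled group locks into sync
uncoupled = simulate(coupling=0.0)  # uncoupled group stays scattered
```

With coupling on, the order parameter climbs toward 1 (the group “binds”); with coupling off, it stays low. The analogy is loose, but it shows why synchrony is a natural candidate for grouping distributed activity into one ensemble.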
Thus, while there are many neural patterns which don’t enter consciousness, and which have mental as well as material aspects, these do not pass all of their mental aspects on (via loops into matter) to the part of our brain which at a given moment, due to sync, is ‘conscious.’ Our visual processing centers, for example, are outside conscious awareness, even if they mediate the relation between sensory and conscious aspects of our brains.
What we experience, then, is a highly filtered set of inputs from the mental side of the neurons which contribute to our conscious mind at any given moment. Thus, my idea of a dog cannot be simply separated from the neural and bodily mental/material patterns which gave rise to it (in all its components, such as color, shape, texture, etc.). But because a dog is always in a situation – against a background – we cannot separate the dog neural patterns from those which form its context, for they form a complex system.
But wait, what happens then, when I think only of my idea of the dog? Well, now my complex conscious synced neural state shifts, and I imagine that dog, stripped from that context. There is some similarity between the two, but if you looked at both neural states under an fMRI, I have no doubt they’d be substantially different. And because there are multiple pathways for the same things in the brain, perhaps examining these states will reveal very little that is necessary and sufficient to determine where the dog’s neural pattern lies in all this.
That said, researchers believe they have been able to locate particular cells that light up for particular people (the famous Bill Clinton experiment was taken as final confirmation of the existence of ‘grandmother cells’). But that given cell only ever activates in concert with others. So really, we are dealing with an abstraction.
Mental Representation and Neural Cinema
Mental images are representations which derive from the mental side of neural patterns in a manner similar to the way in which meanings in language derive from the system of language. Each time you write a word on a page (ie: dog), there is a mental and material side to the imprint on the paper – networkological thought is panpsychist like this. But what makes that particular set of squiggles on the page, which has material and mental sides, also the representative of what is commonly called an idea?
In networkological terms, a word on a page is a seme, the idea behind it is a meme, and all memes have both particular and general sides. That is, when you write the word ‘dog’ on a page, it brings up a particular mental image in your mind, this is the particular side of that meme, but it relates to the general side, which is the word dog in general which circulates in culture. What relates these all – graphemes, memes, semes, phonemes, all these various systems of interconnected matters/minds which are not only themselves, but also, serve to link others via representation – is the manner in which they are related, and habitually related, to each other.
None of which is to say that representation is a special property of language. No, for networkological thought, when a cue ball knocks one billiard ball into another, the first billiard ball can be said to represent the cue ball to the second. Light, and visual perception, a play of photons, is simply a more complex system thereof. All that is, is representational.
So what makes languages special? Limitation. Not every set of squiggles on a page will trigger the idea of a shaggy, smelly, barking creature in my and your minds, because we have agreed to limit the ways in which some squiggles refer to things. There are rules.
And so it goes with mental images. Just as the word dog is an utterance in the English language, one which does not represent the language as a whole, nor is meaningful if separated from it as a part, so a given mental image (ie: of a dog in isolation) is an utterance, as meaningless as the word dog if separated from the context of the English language.
But if meanings in language are composed via the matter of words, which are themselves produced by graphemes, we can say that a similar layering occurs in relation to mental images. Using Hjelmslev’s distinction between expression and content levels, we can say that just as in language we see the levels of grapheme/word/meaning (two intertwined sets of dual layers), so it is with mental images, with levels of inputs from the rest of the brain/global mental image/specific mental image (each of which is supported by activation patterns in specific neuron patterns). Unlike the digital, discrete nature of linguistic signifiers, however, mental images are fuzzy in their distinctions, and in this manner, their boundaries need to be thought of as only always already produced retroactively, not by convention, but by a new act each time.
That is, each mental image is an utterance, but a unique one, for this is a language ‘without a code’, as many theorists have frequently said about cinema. To talk about film, linguistics had to be modified in this manner, and so it is with mental images. The similarities between mental and cinematic dynamics have long been noted. But is this language? It is certainly something like language, and certainly representation, and certainly representation which has limits to it, in the manner of language, even if the rules are never completely set in stone, but are determined on the fly. Language is a bit more formal on some levels (ie: grammar) and much more like this on others (ie: slang), which shows us the manner in which we need to think of sense making as a continuum, with physical representation on one end, mathematics and logic on the other, and everything else, from verbal language to cinema to consciousness, lying somewhere in between.
Binding it Together
What does this then mean? That my ‘idea of a dog,’ isolated from that of a dog in context, relates to the latter as representation: the first represents the second. It abstracts from it according to particular rules of abstraction, laid down by our mental machinery (ie: we tend to abstract aspects of prior mental states which relate to particular bodies or qualities or concepts which have been useful to us in the past, and these factors are determined by things like our evolutionary context, etc.).
I realize I’m telescoping a LOT in here. This will all be described in a much more formal and polished manner in the manuscripts currently under construction (and responses from publishers have been promising so far, so hopefully more news soon!). But here’s an off-the-cuff summary of my thoughts on all of this . . .
To sum up, overall, I do agree with all Levi says – there are no ideas that are not matters in the world. But things get really complicated once we start discussing ideas in general, rather than meanings in relation to language, or things in the world, which are all quite different things. My guess is that Levi would quite agree with all of this, of course, certainly in general, if not in every detail.