“Networkologies” is NOW PUBLISHED, Available on Amazon!

•November 11, 2014 • 1 Comment

To Everyone Who Reads This Website -

Many thanks to all the folks who’ve come, visited, and enjoyed these posts over the years. I’m happy to announce that, after many years of editing and condensing my work on networkologies into one slim volume, the first installment of the networkological project has been published by Zer0 Books, and you can now order it on Amazon! It’s called “Networkologies: A Philosophy of Networks for a Hyperconnected Age – A Manifesto.” It’s split into two halves – a very user-friendly introduction, and a crazy over-the-top manifesto. The link to Amazon is here.


You’ll also notice that I’ve uploaded all the works in progress that went into making this book, as supplemental materials on a page on the sidebar. I’ll work on bringing these to publication as further installments in the networkologies project. But the first volume is now out, so spread the word to any and all! Best wishes to all my readers, and thanks for the many questions and comments over the years!

Website Updates, and New Texts for Download

•October 31, 2014
Just wanted to let you all know that I’ve spent the past few days revamping this website, in preparation for the official publication of my new book, Networkologies: A Philosophy of Networks for a Hyperconnected Age – A Manifesto, which is just about to be published by Zer0 Books.
Firstly, I’ve uploaded two new texts that I wrote in the past which may be of interest – a book review of Steven Shaviro’s “Post-Cinematic Affect,” and an analysis of the logic of time-travel films in light of quantum physics and the structure of Rian Johnson’s 2012 film “Looper.”
Most importantly, however, the new version of the website now has links on the sidebar where you can download, in PDF format, the supplemental texts and works in progress I wrote in preparation for the Zer0 book, and which I will subsequently be working on bringing to publication, in updated form, as time permits, in order to fill out the networkological project into a multi-volume whole. These manuscripts include:
- “Networlds: Networks and Philosophy From Experience to Meaning” (225 pgs), a very user-friendly introduction to the philosophy of networks from the standpoint of ‘everyday life.’ The Zer0 book was originally the introduction to this text, but it got too long, so I just published the introduction/manifesto. Highly polished text.
- “Netlogics: How to Diagram the World With Networks” (265 pgs), the original book I wrote on the philosophy of networks, which really contains the entire project as a whole, but which I decided was too dense to publish first, and so I started writing Networlds instead. Needs minor modifications to fit the current form of the project, but otherwise a highly polished draft.
- “The Networked Mind: Artificial Intelligence, ‘Soft-Computing,’ and the Futures of Philosophy” (234 pgs), the first book-draft I wrote, a sort of preface to the philosophy of networks, one steeped in the science and technology of networks, which shows the need for a philosophy of networks. It was from this that the rest of the project grew. The draft needs some work in terms of editing, and I am currently working on fixing up this one first, as the next volume to be published in the series. While the start of the text still needs some work, I’ve used middle sections of the text with students in class, and they say it is much clearer than already published sources on the materials at hand.
- My dissertation, “The Untimely Richard Bruce Nugent” (351 pgs), an intersectional analysis of the work of the only openly ‘bisexual’ author and visual artist of the Harlem Renaissance. The work shows why an intersectional and networked approach to historiography and identity helps demonstrate the richness of Nugent’s overdetermined gestures, and how these resonate not only with Nugent’s time and our own, but with the hermeneutics of reading the past which relates these, in light of the rise of queer studies, women of color feminism, and other post-identity-political forms of interpretation.
Best wishes to all!
-Chris

Collapsing the Fuzzy Wave: Rian Johnson’s “Looper” (2012), Quantum Logics, and the Structures of Time Travel Films

•October 31, 2014

A Very Uncanny, Bruce Willis-Like Version of Joseph Gordon-Levitt: Rian Johnson’s Time-Travel Film “Looper” (2012)

(Wrote this in 2012, but I’m updating my website, and adding some new content that should have been here long ago. Enjoy.)

Hypermodernity and the Time-Travel Film

In today’s hypermodernity, the experience of everyday life can often make it feel as if one is travelling through time, or even existing in many times and places at once. We are continually meeting our many slightly divergent copies, each existing within alternate yet often partially overlapping temporal dimensions. In today’s world we see refractions of ourselves everywhere, from our profiles on social networking sites to our continually updated online fragments of images, narratives, chat-histories, and self-descriptions, and it can often be hard to keep track of our bits and pieces, a tendency which is only likely to increase. To quote Agent Smith from The Matrix Reloaded (Wachowski Bros., 2003): “The best thing about being me… there are so many ‘me’s!”

While Smith’s reaction is euphoric, there are many other possible reactions to this radical change in our way of relating to the world and ourselves. And while these relations were probably always multiple in one sense or another, for there were always many selves and worlds in the eyes of others, today we encounter so many different types of others, digital and otherwise, and leave so many virtual traces of ourselves, from voice-message greetings to videos, that it seems like we are always running into shards of ourselves in ever more concrete forms. As artificial intelligence and biotech get ever more powerful, who’s to know whether our digital avatars might one day literally have lives of their own, branching off from ours like roots from a tree, only to allow us to re-encounter full versions of ourselves at some later date. Already we see the foreshadowing of this in the proliferation of our virtual avatars.

And so, it should hardly be surprising that the time travel genre has only gotten more popular, mainstream, and complex. While most of these films make use of science fiction devices, travelling in time isn’t merely something which happens in the domain of speculative fiction, for in a sense, memory and fantasy are always a form of interior time travel. From such a perspective, films which explore the depths of interior time, such as David Lynch’s Mulholland Drive (2001) or David Cronenberg’s Spider (2002), can be thought of as films which use psychosis as the method to present time in the form of a shattered crystal, in which it is possible for people to encounter copies of themselves, sometimes exact and sometimes divergent, so many virtual avatars walking around within our heads. While technology may one day literalize this phenomenon, the human mind, and cinema itself, are already media of time travel, such that films which use speculative fiction have simply found a convenient means to dramatize this.

Recent time-travel films, such as Shane Carruth’s Primer (2004) or Duncan Jones’ Moon (2009), show new possibilities for the genre. The first uses the notion that time travel can create multiple copies of persons, while the second employs cloning to do the same. While the mechanics are slightly different, in many senses both can be seen as attempts to think through the challenges of our age in an allegorical manner.


One of the best diagrams to help explain the temporal structure of Shane Carruth’s “Primer” (2004), easily one of the best films of the decade.

“Looper”’s Quantum Memories and Fuzzy Futures

As time travel films get more complex, however, we need new models to think through their mechanics, and new ways to think about the ways they produce meanings. A recent and quite impressive addition to the genre is Rian Johnson’s 2012 film Looper. As with many such films, soon after its release a panoply of explanations, diagrams, and attempts to explain its time-travel dynamics appeared, many on the Internet. While most praised the film in one form or another, many argued that the presentation of time travel in the film is in some senses sloppy, because it mixes and matches various theories of time travel, and hence sacrifices structure for character development. This reading of the film has in some senses been supported by the director, who implied in at least one interview that he went with his gut rather than work out the details.[i]

Despite this, there are actually some very strong reasons to see Johnson’s gut instinct as having a lot going for it. For when contextualized by aspects of contemporary quantum mechanics, the fuzzy logics of Looper are hardly inconsistent. In fact, this film can be seen as actually advancing the time travel genre to a new level of complexity. In the process, the film introduces a series of new tools to add to the time-travel filmmaker’s toolbox.

Before getting to why I think Johnson’s film takes a much more consistent approach to time travel than many have thought, it’s worth describing some of what makes this film so innovative – if, that is, it can actually be made to work theoretically. One of Looper’s most unique innovations is the way it deals with memory and anticipation. In a powerful scene which recalls the avant-garde experiments of Japanese New Wave director Shuji Terayama’s Pastoral (1973), an older and younger version of the same character sit down to talk with each other in a diner. The older version of Joe, played by Bruce Willis, explains to his younger self, played by Joseph Gordon-Levitt, how time travelling to visit his younger self nevertheless also impacts him as well:

“My memory’s cloudy. It’s a cloud. Cause my memories aren’t really memories. They’re just one possible eventuality now. And they grow clearer, or cloudier, as they become more or less likely. But then they get to the present moment, and they’re instantly clear again. I can remember what you do after you do it. It hurts … But this is a precise description of a fuzzy mechanism. It’s messy.”

We see the memory issues described by “old” Joe [Willis] at work in the film, for example, when he has trouble remembering his wife’s face when they first met, just as “young” Joe [Gordon-Levitt] meets Sara [Emily Blunt]. And it seems that both old and young Joe meet their respective love-interests with a blow to the face: the slap which Sara gives young Joe disrupts old Joe’s ability to remember the punch in the face he received immediately before meeting his future wife in his own past. It seems as if there’s cross-over or interference between the timelines of this cinematic world, even if these timelines aren’t strictly parallel, but rather, to one extent or another, also serial, which is to say, one after the other. The film goes out of its way to give us a sense that timelines interfere with each other, using fast cross-cutting and aural bridges between the scenes in question, such as the slap to young Joe and the punch to old Joe, to cement the link. While ultimately old Joe is able to recall his wife’s face, it is implied by these means that young Joe’s potential relationship with Sara could somehow erase that between old Joe and his wife. Timelines can not only interfere with each other, they can alter or even erase aspects of each other, and potentially each other as a whole. Or in terms of the plot of the film, young Joe’s short-term future can alter his long-term future, which is, ultimately, old Joe’s past, hence the memory fuzziness described by old Joe in the diner.

All of which is quite resonant with the structure of quantum entities, whether Johnson did this intentionally or, as he has indicated, by instinct. According to quantum physics, sub-atomic entities aren’t quite particles, and this is why, after at first depicting electrons as tiny satellites in fixed orbits around the central atomic nucleus, physicists came increasingly to depict electrons as fuzzy “clouds” which hover around a nucleus.[ii] What’s particularly difficult to grasp about this, however, is that electrons aren’t spread out like a rain cloud, but rather, the electron and the space and time in the area of the so-called cloud are, to use a popular metaphor in the science literature, “smeared” within and through each other. That is, while a rain cloud is full of little drops of water, there is only one electron in the spacetime of the cloud, and it could show up anywhere in there, but its location within this is, in a sense, “fuzzy.” The reason for this is that quantum entities, which are only described as particles by scientists today as a short-hand, simply don’t follow the laws of time and space the way large entities do; they’re able to be in more than one space and time at once, even if more intensely in some of those spaces and times than others. What’s more, they interfere with each other, in a manner not all that dissimilar to what is described in Looper.

This is why we can think of old and young Joe as being similar, in many ways, to quantum entities. As they get closer to each other, in space and time, their actions increasingly begin to interfere, not only with each other in the present, but in the past and future as well. This manifests in the way in which young Joe’s actions can rewrite old Joe’s memories, or, to the extent that the memory of one character is the future of another, also impact their potential future paths.

All of which is quite similar to what happens when quantum entities approach each other. The only way we can tell that particles as small as electrons are where we expect them to be is by shooting another “test” particle in their general vicinity, and then looking to see how the test particle is deflected by its interaction with an electron. After repeated trials, scientists have come to realize that particles this small are never exactly where you expect them to be, but rather, they exist within a cloud of probabilities of being in one particular zone in spacetime or another. Similar to old Joe’s description of his memory, particles are only ever more or less likely to be where they are expected to be, and any attempt to fix a quantum particle in place will produce a “precise description of a fuzzy mechanism.” Ultimately, it’s about probabilities. And this is what has led scientists to argue that the particles and/or spacetimes involved, depending on how you look at things, are “smeared” in relation to each other, to the degree generally indicated by the intensity of shading of the cloud. That is, the density of the cloud indicates how likely the particle is to be there, and hence, the clearness or cloudiness gets more intense as the presence of the “particle” is more or less likely.

What’s more, quantum particles and their probabilities interfere with each other. One of the first things quantum physicists realized, in the famous double-slit and diffraction grating experiments, was that quantum phenomena “diffract.” That is, just like ripples in a pool of water, quantum “clouds” extend in waves from their most intense points, and when these begin to overlap, the result is a pattern very similar to the ways in which waves of water interfere with each other over a distance. This is one of the reasons why many working in this field argue that quantum phenomena have both particle-like and wave-like characteristics. While a quantum entity only ever “actualizes” at one point when it hits another, thereby acting like a particle, its ability to interfere with the action of others, and vice-versa, extends around it in rippling clouds, similar to the ways in which young and old Joe seem to be able to impact each other from afar in the film.
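For readers who like to see the arithmetic behind the metaphor, here is a minimal sketch of the interference idea in Python. Everything in it is an illustrative assumption rather than a description of any real experiment: two “paths” (think: two slits) are assigned complex amplitudes, and the point is simply that quantum probabilities come from squaring the sum of the amplitudes, not from summing the probabilities of each path on its own.

```python
import numpy as np

# Illustrative two-path setup. The phase factors below are arbitrary
# assumptions, chosen only to make the interference term easy to see.
x = np.linspace(-5, 5, 11)           # positions along a detection screen
amp1 = np.exp(1j * 2.0 * x)          # complex amplitude for path 1
amp2 = np.exp(1j * 3.0 * x)          # complex amplitude for path 2

p_classical = np.abs(amp1)**2 + np.abs(amp2)**2   # lumps of stuff: just add probabilities
p_quantum   = np.abs(amp1 + amp2)**2              # quantum rule: add amplitudes, then square

# The difference is the interference (cross) term, which oscillates between
# reinforcement and cancellation as x varies, giving the wave-like fringe pattern.
print(p_quantum - p_classical)       # equals 2*cos(1.0*x) here, up to rounding
```

The oscillating cross term is the mathematical face of the overlapping “clouds” described above; keep the two amplitudes from ever overlapping and it vanishes, leaving ordinary, non-interfering probabilities.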

One particularity of quantum physics, however, is that this smearing, clouding, and rippling doesn’t merely happen over space, but over time as well. This is demonstrated in the famous “quantum eraser” experiments,[iii] which indicate that quantum particles act as if they go out of their way to avoid paradoxes, in ways which would require that they “knew” what happened in the past. There have been countless attempts to explain the strange consistency of quantum phenomena in these examples, but basically, it is as if quantum particles “know” what we are going to do beforehand, and take this into account to make sure they don’t do something which would violate the often paradoxical-seeming, yet nevertheless consistent, laws of quantum mechanics. In many senses, the paradoxes only arise if we view the quantum world with our everyday, more linearly temporal lenses, similar to the way in which Looper appears inconsistent from a more traditionally linear temporal point of view.

But is Looper Consistent? Alternate Temporal Dimensions and Relativity Theory

It is the issue of consistency, however, which has worried many critics of Looper, even if ultimately, the forms of consistency desired by these critics don’t take quantum issues into account. One of the best streamlinings of the argument critical of Looper is presented by blogger Liam Maguren here.[iv] Maguren starts by describing the four primary approaches to time travel depicted in contemporary film:

“Theory A – Fate: there is only ever one destined timeline in the entire universe. If you travel to the past, your actions will not change the timeline at all, for they were always meant to happen (e.g. Timecrimes).

Theory B – Alternate Universes: Travelling to the past causes the creation of an alternate universe/timeline (e.g. Star Trek).

Theory C – Success: Instead of creating another universe/timeline, it shifts the current one, for there is only one linear universe/timeline in this theory. Thus, the previous future ceases to exist (or be altered). This could lead to the non-existence of the person that travelled (e.g. Back to the Future).

Theory D – Observer Effect: Just like Success except the traveller is essentially ‘Out of time,’ meaning they will not be affected by change (e.g. Groundhog Day).”

He then continues to apply this to Looper:

“… Looper creates its own time travel theory by merging two conventional ones: Alternate Dimensions and Success. This means the following:

1) Travelling back to the past can alter the future, creating another dimension in the process.

2) Changing past events can affect the traveller (e.g. losing limbs) [as happens to the character Seth, played by Paul Dano].”

The initial problem people have with this theory-merger is the seemingly contradictory nature of those two propositions. If a traveller creates/goes to an alternate dimension, are they not immune to any consequences faced by their younger, alternate self? How can an altered dimension still affect the original dimension the traveller came from?

Maguren then attempts to solve this problem:

“The temporal nature of Looper’s world wants the alternate dimension to remain as true to the original dimension as possible in order to avoid unnatural paradoxes. However, if a paradox were to occur, it’s not going to mean the entire universe implodes* (as the ending proves). It simply means that it’s unnatural for paradoxes to occur in the universe.

It’s similar to how opposite poles from two different magnets attract each other. They do not want to separate; it’s natural. An attraction is still held between them even when they are being forced apart (just like the original and alternate dimensions). However, with enough force, the attraction will cease and the magnets will separate. This is by no means a precise science (as they quite clearly state in the film). But Looper remains consistent to the rules of its universe … This does not mean that the exact same events will happen in the exact same way. Rather, key events will remain fixated due to the universe’s “magnetic” desire to keep dimensions linear.”

It is this “magnetic” attraction between dimensions or timelines which led another blogger, Kofi Outlaw,[v] to conclude that the film’s logic ultimately fails:

“We could go on and on like this, but we would inevitably find ourselves arriving back at the same conundrum of time travel theory: you just can’t have it both ways. Looper crafts a very good story out of a wild sci-fi premise, and while it dodges a lot of its own potholes scene-to-scene, when viewed from a distance it’s clear that Rian Johnson has not yet cracked the time travel movie conundrum.”


Maguren’s Final Diagram to Help Explain “Looper”’s Temporal Structure

By wanting to have it “both ways,” Johnson’s film is then perhaps consistent with its own logic, even if that logic is not coherent in regard to common notions of what time travel should be like. This leads one of the commenters on Maguren’s page to clear up, using Maguren’s proposal, one of the plot’s most central mysteries in the following way:

“… the rainmaker [the film’s erstwhile antagonist] never time-travelled. he kills his aunt [his foster mother], then is brought up by sarah [his biological mother]. initially he becomes the rainmaker who starts closing the loops, but because the Joes appeared in his childhood and changed his relation with Sarah to one where he accepts her as his mother, he will likely not become the rainmaker this time. that is the whole story about Cid [Peirce Gagnon][sic].”

All of which seems consistent with what Johnson himself says of the film:

“The approach that we take with it is a linear approach. That was an early decision that I made. Instead of stepping back to a mathematical, graph-like timeline of everything that’s happening, we’re going to experience this the way that the characters experience it. Which is dealing with it moment-to-moment. And so, the things that have happened have happened. Everything is kind of being created and fused in terms of the timeline in the present moment. So, the notion is, on this timeline, the way that old Joe is experiencing it, nothing has happened until it happens. Now, you could step back and say are there multiple timelines for each moment, and every decision you make creates a new timeline. That’s fine. You can step back and draw the charts and do all that. But in terms of what this character is actually seeing and experiencing, he’s living his life moment to moment-to-moment in the linear fashion and time is moving forward. And, as something happens, the effect then happens.”[vi]

While this might seem a bit naïve on the surface, it does harmonize with one of the fundamental principles of the theory of relativity, namely, the part which gives it the name of “relativity.” The concept is that for any “frame of reference” (often called an “inertial frame”), the same laws of physics apply.[vii] And so, even if your time and space are actually getting warped by something (i.e., coming near a black hole), you will only ever experience this as the space and time around you acting strangely. And this is why, if two different observers start to argue about whose spacetime is warping, they can only ever do so in regard to an outside standard of reference (i.e., a nearby star). Now if observers from these outside standards of reference start to disagree, there’s no way to tell who’s right, and in fact, all of them are right: it’s all relative to where you observe things from. This is why the warping of spacetime in relativity theory means that sometimes things will appear in multiple places and times, depending on the spacetime and conditions from which they are observed.


Another Similar Diagram for “Looper”’s Temporal Structure by Natalie Zutter.

All this is consistent with Johnson’s prioritization of the frame of reference of the character at any given moment as that which organizes the film. And while scientists haven’t been able to quite get the theory of relativity and quantum mechanics to fully come together yet, they know they both work in their respective domains, even if the big picture gets fuzzy, similar, once again, to Johnson’s film. Perhaps such fuzziness is all we will ever get.


Another nice graphic of the film’s temporal structure by Rick Slusher at Film.com.

Leibniz, Parallel Dimensions, and Sticky Timelines

While this can help clear up some aspects of the film, it doesn’t explain why there is the “magnetic” or “sticky” aspect whereby actions in a given dimensional timeline seem to pull, attract, or repel each other. That is, there’s more than just fuzziness at work; there are forces which make things more or less fuzzy, and which push and pull in the process.

For example, when Sara slaps young Joe’s face, this seems to repel the punch in the face which old Joe receives when he meets his wife in the future. Likewise, there seems to be a magnetism of sorts between Joe and Cid. Both were abandoned by their mothers. As we are shown in the “flash forward” which shows what could happen to Cid if his mother is killed, he ends up on a train, as young Joe is instructed to do by old Joe earlier in the film to get out of town. If this path in time continued, he would end up fending for himself, like young Joe. There’s even a moment in which Sara says to young Joe “he’s you…” and it seems like she is speaking of Cid, but after a lengthy pause, she clarifies that she’s speaking about old Joe when she says “he’s your loop.” If it weren’t for the fact that young Joe seems to have none of Cid’s telekinetic powers, it almost seems as if Cid could be young Joe’s younger self. While it does seem clear that young Joe and Cid are different, they also seem at least semi-“magnetically” linked, perhaps, if nothing else, retroactively, by means of old Joe’s interventions in the past of the child who could turn into the potential Rainmaker.

This magnetism or stickiness between temporal dimensions is what has made critics argue that the film wants to have things “both ways,” collapsing two of the primary time travel models. Before explaining this in terms of contemporary quantum physics, however, it will be helpful to get a sense of how this was modelled by a philosopher who had quantum intuitions centuries beforehand, namely, G.W. Leibniz, and in particular, Leibniz’s prescient discussions of multiple universes in his famous text Monadology (1715).[viii]

According to Leibniz, every possible universe that could ever be imagined always already exists, before the universe even began, in the mind of God. God is, as many have argued, like a cosmic accountant or giant brain who examines all the possible universes, and figures out the best way to bring them together to produce the “best possible” universe. The reason for this limitation is that the structure of the world just doesn’t permit what Leibniz calls “incompossible” events to occur in the same universe. And so, to use his example, while the existence of a Biblical Adam who eats the apple in the Garden of Eden and one who doesn’t are both possible, they are not possible together in the same universe. These two possibilities cancel each other out; they are “incompossible.” What God does is then look forwards in spacetime to see which paths through the possible universes end up with the universe being best without contradicting itself, and make that pathway happen. And so, while the world may be full of suffering, for Leibniz, any other pathway through the possible universes would ultimately be worse for the overall good of the universe, as seen from the perfect knowledge, outside time, which only God possesses.[ix]

While Leibniz wrote years before anyone had dreamed of quantum physics, there are many ways in which quantum events are each similar to Leibniz’s God in their own domain. While individual quantum phenomena, such as electrons, don’t have the sentience to choose the “best possible” pathway, they do have the ability to filter out incompossible pathways, both forwards and backwards in time. This is what the famed spacetime “smearing” is all about. That is, a quantum phenomenon sends out what can be thought of as “feelers,”[x] not only in space, but also forwards in time, within its fuzzy “cloud,” to eliminate potential pathways in time which would lead to incompossibilities. This is another way of saying that quantum phenomena act “as if” they “know” how to avoid contradictory events ahead of time (i.e., the “quantum eraser” experiments).

The reason why this can be described in terms of “feelers” which feel forwards in time, if within a given domain, is that this is in fact the way in which scientists themselves graph these things. Richard Feynman’s famous network-like diagrams plot out precisely the probabilities, in spacetime, which link states, and the stronger probabilities generally, though not always, win.[xi] While these feelers are virtual, for they indicate probabilities rather than anything real, they do result in what researchers call “consistent histories” between possible outcomes.[xii] All of which is to say, each particle, simpler yet nevertheless similar to Leibniz’s God, has some way, yet to be understood, which allows it to feel forwards in time, in a manner similar to these probability threads, to see which possibilities wouldn’t work. Scientists still aren’t sure how this “feeling” happens, but some, particularly those influenced by David Bohm,[xiii] have argued that particles literally “feel” ahead in this manner. While this would violate, in some cases, the prohibition from relativity theory that the fastest speed in the universe must be the speed of light, the debate rages on to this day about how to solve this paradox.
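To give a rough feel for why the stronger, mutually consistent pathways tend to “win,” here is a toy illustration of the stationary-phase intuition behind Feynman-style sums over paths. It is not a real Feynman-diagram calculation, and the “action” values are arbitrary assumptions; the point is only that paths whose phases nearly agree reinforce one another, while wildly varying phases largely cancel out.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_amplitude(actions):
    """Sum of unit-magnitude contributions exp(i * S), one per candidate path."""
    return np.sum(np.exp(1j * np.asarray(actions)))

# 1000 paths whose phases nearly agree: their contributions reinforce.
coherent = 0.1 * rng.standard_normal(1000)
# 1000 paths with wildly varying phases: their contributions mostly cancel.
incoherent = 100.0 * rng.standard_normal(1000)

print(abs(total_amplitude(coherent)))    # close to 1000: these pathways dominate
print(abs(total_amplitude(incoherent)))  # roughly sqrt(1000): near-total cancellation
```

In this loose sense the mutually inconsistent pathways erase themselves, which is the flavor of the consistent-histories idea invoked above.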

Nevertheless, it certainly seems “as if” particles feel out possible pathways, eliminate the paradoxical ones, and then “decide” which ones they like best. This is why some theorists have argued that quantum phenomena are like minds, because no one can figure out how they “decide” to be in one state or another, such that, at least from the outside, it seems as if they are making decisions in the manner of animals and humans. That is, while quantum phenomena are predictable in the long term, in the short term they seem as if they have minds of their own, or as if they can somehow feel minute shifts in the microwinds of forces in the universe acting upon them, in ways which are beyond our capability to detect.[xiv]

Collapsing the Rainmaker’s Wave: Quantum Entanglement and Splitting Timelines

To bring this back to Looper, it seems as if the depiction of time travel in this film is somewhere between Leibniz’s model of God and some of what was just described in regard to quantum phenomena, if in a way which is fuzzier than Leibniz imagined, and thus closer to quantum physics. For rather than a God which strives to produce the “best possible” universe, it seems that Looper aims at something much more realistic, at least in terms of quantum physics, namely, a universe which aims to produce the most consistent possible version of itself, as manifested by a pull of sorts towards consistency.

We can see this by returning to the issue of Joe’s fuzzy memory. The closer young and old Joe get to incompossible events, the fuzzier old Joe’s memory becomes. Taken to its extreme, it is likely that at the end of the film, in the split second between when young Joe decides to pull the trigger on himself and the moment of doing this, old Joe’s memory would lose all ties to the past he knew with his wife, because it ceases to be compossible with the reality in which he found himself. It seems likely that his memory would, at that moment, be much closer to that of young Joe at his age, which is to say, non-existent, precisely because they are both becoming versions of young Joe. That is, old Joe is becoming less real, more virtual, more a fantasy projection of young Joe, which is to say, he becomes less of a potential real future for young Joe.

This helps explain why many critics have argued, along with commenter Pete Cooper in Maguren’s comment thread, that “Young Joe and the Rainmaker can never exist in the same reality.” And this is true, for if young Joe doesn’t kill himself, or do something similar, Cid will grow up to be the Rainmaker, and this will lead old Joe to go back in time to kill Cid as a child, making sure this will happen, and in the dialogue of the film, “closing the loop” in multiple senses of the term. If Cid doesn’t grow up to be the Rainmaker, however, this is because young Joe killed himself, thereby preventing old Joe from going back in time to try to kill Cid and inadvertently producing the Rainmaker in the process. And so, a young Joe who kills himself and the Rainmaker cannot exist in the same universe; they are “incompossibles,” which is not the case with old Joe, and this helps explain why there’s interference between old and young Joe’s timelines.

Of course, this raises the conundrum of how the incompossibility of old Joe and Cid in the same universe could occur in the first place. And this is because time is currently flowing in two dimensions at once, one in which Cid becomes the Rainmaker, and one in which he doesn’t. This is in fact similar to another quantum phenomenon, known as “decoherence” or “the collapse of the wave function.”[xv] As quantum phenomena approach each other, and their probability clouds and “smeared spacetime” begin to overlap, it becomes more and more likely that there will be an interaction, a quantum event, in which they slam into each other and fly off in their respective directions. While some physicists argue this is because the particles are always there in solid form, and that they teleport around their clouds of probability, others argue that they are smeared through spacetime, and solidify when something comes into this zone and disturbs it with great enough intensity. Either way, the result is ultimately the same, namely, that what seems to us a possible interaction of particles becomes one in reality. We can’t really know what it’s like before the particles interact, and so, whether or not there’s a cloud, a set of feelers, or a single particle able to teleport around in a particular zone of spacetime instantaneously, or something different completely. All human observers seem to be able to tell is that there is an interaction which follows along with the equations of quantum physics, even down to the degree and domain of unpredictability involved.

This can help explain why old Joe and Cid can exist in the same universe, but not young Joe and the Rainmaker. The child we know as Cid in the film hasn’t yet passed the point of no return after which he becomes the Rainmaker, just as young Joe hasn’t passed the point of no return after which he can’t stop old Joe from producing the Rainmaker. These are two sides, in a sense, of the same fuzzy event seen from multiple perspectives. And each of these perspectives is fuzzy in a temporal sense as well, for we don’t know we have passed a critical point until after the fact, except perhaps in the ways in which old Joe’s memory-anticipations start getting fuzzier as he approaches potential branching points which could produce an incompossibility, and so impact the ability of these timelines to flow together. Such a crucial turning point is seen when young Joe shoots himself, and the “entanglement,”[xvi] or coherence, between old Joe and Cid collapses, decoheres, and the Rainmaker, and both Joes, vanish. Only Cid and his mother remain.

All of which can help us finally understand the cause of the “magnetism” between dimensions in Looper: even when time travel creates multiple dimensions, there seems to be a force which pulls everything back together, trying to collapse dimensions back into a completely consistent state of one dimension. Like the manner in which gravity pulls heavenly bodies towards each other, so it is that time travel dimensions in Looper seem to pull towards the greatest possible consistency, at least in light of the multiple dimensions which have opened up.

While this may seem odd to our everyday sense of reality, none of this is bizarre in the strange world of quantum physics. While quantum particles cancel out direct incompossibilities, we have no way of knowing if they open up alternate dimensions to play out other scenarios in which these events wouldn’t be incompossible. Such is the reading offered by those who take a “multiverse,” or multiple universes, perspective on the data of quantum experiments.[xvii] And it’s worth noting that all these theories of quantum physics are different perspectives on the same very consistent facts of experimental evidence, just as much as old and young Joe have multiple perspectives on the Rainmaker. In fact, it is possible that all the interpretations of quantum physics are right, from their perspective.

According to the multiple universes perspective on quantum physics, every possible pathway which a quantum phenomenon could take is actually taken in some possible universe. If this were the case, then there would be a nearly infinite set of possible universes. What then makes the universe we are in so special? Perhaps simply the fact that we are in it. But if quantum particles seem to cancel out incompossibilities in spacetime at their own level, there’s no reason not to think that our universe as a whole does this as well. Certainly some quantum physicists have argued that the same wave equations which can be applied to quantum particles can be applied to larger objects, up to and including the entire universe, such that perhaps our entire universe is like one giant entangled quantum state.[xviii] This has led some to speculate that our universe is in fact potentially not what it seems, an illusion, projection, simulation, or hologram of some sort. Whether or not this is the case, it does seem that quantum particles prioritize consistency within their local domain, and from such a perspective it doesn’t seem unlikely that, were there multiple universes, through time travel or the multiverse interpretation, these would prioritize consistency as well. Perhaps we live in the “most consistent” possible universe, and the reason why this is the case is because this level of consistency is needed to support life. Perhaps less consistent universes, in which things just vanish and appear at random, simply produce too many paradoxes.

And what would happen in case of a paradox? We don’t really know, but the characters in the film definitely seem to have some ideas. Time traveller Abe [Jeff Daniels] seems quite concerned to avoid massive time paradoxes, and old Joe certainly notices that the closer he approaches an incompossibility with young Joe, the more his memories are erased. If quantum phenomena seem to “erase” the possibility, beforehand, of incompossible actions, then it seems very possible that a true paradox would lead to precisely what we see with old and young Joe: when they hit a true incompossibility, they cancel each other out, like matter and anti-matter.

In this sense, it seems possible to read both the film and the evidence of quantum physics as demonstrating not only a force or pull towards consistency, but an inverse force whereby inconsistency splits spacetime into dimensions. In this sense, it’s not unreasonable to say that the universe “desires,” whether this is called magnetism or force or anything else, something like consistency, because without it, it would shatter, and taken far enough, potentially even cease to exist, as happens to old Joe in the film. Just as quantum particles “seem” to “decide” on some states rather than others, it is as if the universe “wants” to keep existing, and the pull towards consistency is how this manifests within it, as in the film. Maybe Leibniz’s God wasn’t so far-fetched in the first place.

Of course, this is only a film. But in light of quantum physics, Looper’s time travel mechanics seem much closer to the way in which time travel would likely happen, were it possible. Then again, quantum physicists have argued that quantum particles travel through time all the time, or at least, that is one possible way of reading the evidence, the one favored by Richard Feynman.[xix] Either way, in all these formulations, whether in quantum physics or the time travel of the film, there is not only interference between dimensions or channels, but also a sort of instability and readiness to collapse as soon as an incompossibility arises to disturb an entangled, “sticky,” “magnetized” state. Perhaps coherent and incoherent phenomena are simply islands of stability within this dance of forces.

Ramifications via Deleuze and Lacan: Our Fractal Technoverse

Gilles Deleuze famously argued in his books on cinema that the feeling that time, and the world with it, was ‘out of joint’ began in the aftermath of World War II, and clearly Akira Kurosawa’s Rashomon (1950) stands out as a film which uses flashbacks to present multiple possible versions of the past which don’t align, which couldn’t all be possible, and without providing resolution. Since the film’s attempt to place guilt in relation to a horrible act of violence can be read as an allegory of the attempt to deal with the guilt and horror of memory in relation to the atrocities of the war, it doesn’t seem unlikely that the form of the film is an attempt to deal with the trauma of its contents, allegorical and otherwise. And as psychoanalytic critics have long argued, fragmentation is one of the primary responses to a trauma which remains difficult to process and integrate. Deleuze nevertheless aims to get beyond the limitations of psychoanalysis, and his argument about time and film is wide-ranging and goes beyond trauma, unless the trauma is seen as not just the war, but the generalized condition of living in our postmodern age. And so, in his cinema books, he traces the shift in the depiction of time in avant-garde films in the post-war period, by means of cinematic authors such as Fellini, Resnais, Tarkovsky, and beyond.

Nevertheless, at least for psychoanalysis, and in particular, in regard to the way in which time travel films have been analyzed by Slavoj Zizek, time-travel films are about the attempt to think what it would mean to change who you are.[xx] We often find it hard to believe we were the people we once were, and we try to imagine the people we will become, even if we will never turn out to be quite as we imagine. The process of changing our habits by means of reframing our memories and expectations is what happens in any good therapy, or in any self-aware life outside the consulting room. Time travel films, for all their often sci-fi plot devices, are really, for Zizek, about the attempt to intervene in our present moments. But they do so in a language which speaks to our times, namely, temporal dislocation and hi-tech gadgets.

This is why Deleuze traces the lineage of films in which time is “out of joint” to those which have no sci-fi premise, but which have mechanics similar to time travel films. Ingmar Bergman’s Persona (1966) is an excellent case in point. While the whole film has nothing sci-fi about it, the play of memories and fantasies leaves viewers with multiple possible interpretations, all layered on top of each other as possibilities, like so many quantum threads of possibility which are entangled until something forces them to decohere into one state over another.

While film seems to be particularly suited to function, as Deleuze argues, as a “time machine,”[xxi] it is not the only medium which can do this. Literature can imagine multiple timelines as well, for example, but it only began to truly do so at around the same time as film, with perhaps Jorge Luis Borges’ “The Garden of Forking Paths” (1941) as one of the earliest examples. Of course, language and images of all sorts were always virtual realities of sorts, but only after World War II did time seem to truly “go out of joint.” While the fissures can be seen as early as experiments in painting, such as those of Picasso or the Italian Futurists, it is only retroactively that the true potential import of these devices becomes clear.

If most time travel films thematize single loops through time, the more baroque manifestations of recent times, such as Twelve Monkeys, Donnie Darko, Primer, The Prestige, Moon, and Looper, show that relatively mainstream film, if towards the more difficult end of the mainstream, is now approaching the experiments of the avant-garde. These films, which take what Deleuze would call a “crystalline” structure, are increasingly becoming similar to the complexities of films like Terayama’s Pastoral (Japan, 1973) or Alain Resnais’ Last Year at Marienbad (France, 1961).

Ours is increasingly a world in which spacetime between individuals is generally measured in how long it takes for a message to fly between mobile devices, short-circuiting the 3D spacetime of the mere physical world. As such, linear time seems to be fading out, increasingly replaced by virtual sites on the Internet which update and mutate in webs in relation to each other. And while the web does seem to tend towards something like consistency, it still allows quite a few paradoxes to exist in powerfully entangled states.

Either way, this new spatiotemporal reality is increasingly moving from the realm of fantasy to the everyday. And so, a film like Looper is likely to resonate with many who’ve never encountered quantum physics, but simply feel what it’s like to live every day in our hypermodernity. And as our everyday life increasingly begins to take on quantum aspects, with micro and macro levels of our worlds echoing each other as in fractal images, perhaps films like this can help us intuit some new ways to navigate the challenges of our age.

Notes:

[i] http://www.slashfilm.com/ten-mysteries-in-looper-explained-by-director-rian-johnson/

[ii] Several excellent general introductions to quantum mechanics and relativity theory exist which explain the many approaches to interpreting the findings of quantum mechanics. My preferred introduction for beginners remains Gary Zukav’s The Dancing Wu-Li Masters: An Introduction to the New Physics (HarperOne, New York: 1979), which, even though over thirty years old, still remains one of the most accessible and philosophically friendly explanations of the basic issues at stake, and in ways which recent developments complement rather than displace. For those seeking a more recent source, see the slightly more technical Timeless Reality: Symmetry, Simplicity, and Multiple Universes by Victor Stenger (Prometheus Books, New York: 2000).

[iii] An extensive discussion of quantum eraser experiments can be found in Karen Barad’s Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Duke Univ. Press, Durham: 2007), pp. 247-352. It should be kept in mind, however, that Barad’s description, while excellent, privileges the Copenhagen model of interpretation proposed by Niels Bohr. For a more balanced interpretation of issues related to spacetime symmetry, see Stenger (op.cit.), pp. 26-151.

[iv] (http://www.flicks.co.nz/blog/a-man-of-100-words/looper-explained-with-straws/).

[v] http://screenrant.com/looper-ending-explanation-time-travel-spoilers//2/%5D

[vi] http://www.huffingtonpost.com/2012/09/30/looper-ending-explained-rian-johnson_n_1927860.html

[vii] For more on inertial frames in relation to relativity theory, see Paul Davies, About Time: Einstein’s Unfinished Revolution (Touchstone, New York: 1995), pp. 44-77.

[viii] See “The Monadology” in G.W. Leibniz, Discourse on Metaphysics and the Monadology (Dover Books, Dover: 2005, reprint).

[ix] For more on Deleuze’s film theory in relation to Leibnizian incompossibility, see Gilles Deleuze, Cinema II: The Time-Image (Univ. of Minnesota Press, Minneapolis: 1989), pp. 130-131.

[x] For more on virtual particles and Feynman networks, as well as how these can be thought of as “feelers” in spacetime, beyond models of quantum events which reduce them to simple particles, see Zukav (op.cit.), pp. 237-282.

[xi] For more on the networked structure of Feynman diagrams, see ibid.

[xii] Consistent histories are at least in part what Feynman diagrams are used to determine. For more, see Zukav (ibid.), or Stenger (op.cit.), pp. 147-8.

[xiii] David Bohm’s highly influential notion of the implicate order, and its use in interpreting quantum mechanical phenomenon, is explained at length in Wholeness and the Implicate Order (Routledge, London: 2002, reprint).

[xiv] For more on the notion of hidden variables, and the deconstruction of the reductive argument used in relation to these to devalue the non-local arguments proposed by Bohm and others, see Bohm (ibid.), pp. 83-139.

[xv] More on the “collapse of the wave-function” can be found in Zukav (op.cit.), pp. 83-96, while more on decoherence can be found in Stenger (op.cit.), pp. 148-151.

[xvi] For a book-length treatment of the notion of quantum entanglement, see Brian Clegg, The God Effect: Quantum Entanglement, or Science’s Strangest Phenomenon (St. Martin’s Press, New York: 2006).

[xvii] For more on the multiverse model of the cosmos, see John Gribbin, In Search of the Multiverse: Parallel Worlds, Hidden Dimensions, and the Ultimate Quest for the Frontiers of Reality (Wiley, Hoboken: 2009).

[xviii] For more on the notion of the entire universe as something like a wave-function, and hence, potentially existing in an entangled state, see Gribbin (ibid.), pp. 23-33.

[xix] For Feynman’s account of how particles travel backwards in time, see Zukav (op.cit.), pp. 242-3.

[xx] For Zizek on time-travel in relation to the “temporality of the symptom” in psychoanalysis, and its relation to film, see, for example, The Sublime Object of Ideology (Verso, London: 1990), p. 161.

[xxi] See D.N. Rodowick, Gilles Deleuze’s Time Machine (Duke University Press, Durham: 1997).

Book Review: “Post-Cinematic Affect,” by Steven Shaviro, Zer0 Books, 2010.

•October 31, 2014

Post-Cinematic Affect in Action – Grace Jones’ “Corporate Cannibal”

Wrote this in 2011, but I’m updating my website, and adding some new content that should have been here long ago. Enjoy.

Welcome to the post-cinematic mediasphere, the timeless time of the space of flows, the neuro-affective flat ontology of smooth capital. Steven Shaviro’s new text, Post-Cinematic Affect (Zer0 Books, 2010), is many things. On the one hand, it is a guided tour of the mediascape to come, a futureflash of the way the world will feel once today’s emergent media formations reach their mature forms. On the other hand, it’s also a diagnosis, an attempt to understand the manner in which capital and the image will increasingly intertwine in the world to come. Both media analysis and critique of capital, Shaviro’s slim tome is understated in its presentation, but wide in its potential effects. It’s an important book, one at the cutting edge of the attempt to think the dark underside of the networked age to come.

Shaviro describes his enterprise as an attempt to perform an “affective mapping” of what, following Jamais Cascio and Gilles Deleuze, he calls the “participatory panopticon” of the “control society . . . which comes from everywhere and nowhere at once” (8). In such a world, “personalities. . . [are reduced to] shells within which social forces are temporarily contained” (108), and all terrain is reduced to any-spaces-whatsoever (espaces quelconques), monadically disconnected from each other, yet vague enough to morph at will in the timeless time, the “always being about to happen”-ness (86), of a mediascape which is purely relational, without exterior, and always in flux. Welcome to the smooth space of flows as a vision of hell.

What is left in a world in which the very categories of ages past, including space, time, subjectivity, agency, and community, even the boundaries between media themselves, are dissolved in the disjunct unity of a fluid that percolates without end, yet always drains surplus elsewhere? Affect. Waves and waves of affect. Affect, for Shaviro, is counter-representational by nature; it is emergent, transpersonal, distributed, virtual. It is that which flows in the world in which humans used to produce and consume commodities in factories and engage with the ‘real’ world. Now, instead, we have the near-completion of real subsumption, leaving us to scrounge for remainders or search for a way through to the other side. As the boundaries between cinema and portable computing, video-games, and websites increasingly begin to blur, as the Deleuzian time-image is drained of its duration by digital composition and post-continuity editing, and as we move to neuromodulatory media forms in which all pretense to plot and character dissolves into the affective high that a figure transmits, we find ourselves increasingly in the post-cinematic video-drome, the ambient wave-space of perpetual revolution, in which player and played are all played by a system that feeds itself on our ebbs and flows.

Instead of subjects and objects, what’s left is figures, and this is precisely what Shaviro works to map. The bulk of the text is made up of close readings of four recent media works, “diagrams” (6) and “machines for generating affect” (3), by Grace Jones/Nick Hooker, Olivier Assayas, Richard Kelly, and Mark Neveldine/Brian Taylor. Shaviro intentionally goes after works dismissed by others as excessive or failed, for he sees in these overblown bits of detritus the trace of the futurescape to come. Shaviro is fascinated by the pooling of affect around celebrity, the currents that flow in and out of the “amnesiac actors” which replace what used to be subjects, the shattered dividual subjectivities that play out on the virtual post-cinematic mediascape, and the virtual spacetimes carved out of the flows of affect by its own movement within itself. Like the figures he traces, the media texts he examines are merely traces of movement. What he’s interested in is mutation, the drainage of Deleuze’s time-image, and the production of a new hyper-circulatory paradigm which he prophetically argues is coming to dominate our age.

Shaviro tracks the manner in which many of the buzzwords valorized by contemporary Deleuzian-inspired theory are ironically most apt for describing the most terrifying aspects of today’s world, such that Post-Cinematic Affect can serve as a wonderful tonic to the celebratory sides of contemporary Deleuzian, network/complexity, and futurist paradigms. For Shaviro, contemporary space has become relational and virtual, morphing into anything at will, never committing to one form or another, so that it can always become smooth to serve capital’s need to mutate and to support ‘just-in-time’ production and circulation. Where there used to be masters (and master signifiers), now there are icons, patterns of modulation, for “modulation is the process that allows for the greatest difference and variety of products, while still maintaining an underlying control” (15). In place of subjects, what remains are points of transfer of affect, figures which echo in simulated interiority the icons which direct them, each composed of the flows whose densities determine the spacetime terrain in which accumulation occurs, siphoned somewhere eternally off-site. A series of “affective constellation[s]” (73), the result feels “unspeakably ridiculous . . . creepily menacing . . . [and] exhilarating” (85). It’s the world of the perpetual music video, in which media sings just for you, in which distributed scapes of feeling wash over transfer points, and yet one in which the need for perpetual flow keeps everything vague enough that one can “never leap from affect to concept” (73). And what of the much vaunted hope in networks and complex systems? Shaviro dryly slams contemporary complexity theory approaches: “actually existing capital is metastable. It functions as a dissipative system . . . . operating most effectively . . . at far from equilibrium conditions” (189), such that “networked manipulation works more effectively than a hierarchical chain of command ever did” (107). It seems possible, however, that there are many types of metastable networks, a possibility that Shaviro doesn’t address.

Such a world, for Shaviro, is one best described by the much valorized term “flat ontology.” For when anything can become a medium of exchange for anything else, the ability to distinguish between master signifier and the chain of signifiers slips on the smooth space of numbers which, for Shaviro, underpins the whole apparatus. For it is “digital transcoding as common basis” (134) which allows for the interchangeability of everything that can be quantified. The result is the precaritization of work, the shift from physical to symbolic production, material to affective/intellectual labor, production to financialization, and the near complete real subsumption of the world by capital, such that “everything is a potential medium of exchange, a mode of payment for something else” (46). For Shaviro, “the only thing that remains transgressive today is capital itself . . . [it] transgresses the very possibility of transgression, because it is always only transgressing in order to make more of itself, devouring not only its own tail but its entire body, in order to achieve greater levels of monstrosity” (31). Within the perpetual now of the modulatory regime, motion and duration are simulated, and resistance is, at least so it seems, futile.

Or worse, it is incompossible. For if the fear of our current, cinematic age may be summed up by the words “chaos reigns,” uttered by the uncanny fox in Lars von Trier’s recent trainwreck of a film Antichrist (2009), the dystopia to come is probably best described, by Shaviro, as “incompossibility reigns.” Shaviro’s symptomatic reading of another recent cinematic trainwreck, Richard Kelly’s Southland Tales (2006) (by an equally talented director, I might add), shows how in tomorrow’s mediascape, the properties that used to belong to individuals – personalities, facades, desires, fears – are now shattered amongst numerous sites, while individuals are now required to be ‘flexible’ adaptors, ready to play any role, grasp hold of and channel any fragment of what used to be referents of the ‘real’ world, and all just to survive. The world to come is one in which not only is everything possible, but everything has already happened, and is already happening, now, all at once, even if it is all virtual, so that none of it can stick, resulting in a depthless simulated miasma. What’s left are networks of modulations of affective constellations, and the constant jerking between what Bolter and Grusin have described as the poles of hypermediation and immediacy (115). Like the inside of some cruel quantum particle, Shaviro shows us the dark side of contemporary science, media, economy, and society. Can anything be done?

Towards the end of the work, Shaviro discusses what Benjamin Noys (136) has called the strategy of ‘accelerationism’ – if it’s impossible and/or undesirable to go backwards or slow things down, then perhaps the solution is to try to make things go faster. As political strategy, Shaviro smartly remarks that the collapse which hypertrophic crises could bring about might in fact lead to formations much worse than what we have now. But aesthetic accelerationism is a strategy endorsed by Shaviro, in that it allows us to explore the contours of the landscape of a world in the process of formation. Works of art send out probe-heads, to use the Deleuzian terminology, examining new ways of being, potentially revealing new forms of resistance in the process.


Shaviro diagnoses the problem, yet is careful not to recommend any real cures – only time will tell where this is all going. But he clearly has his finger on the pulse of the cannibal impulses of our anticipated present. If we are to find any hope in between the lines of Shaviro’s dystopia, it is that perhaps there are ways to turn the very tools of capital against itself. Shaviro seems to hesitate – he knows we cannot go back, but nor can we go forward on our current path without the dystopian world he analyzes in his text coming true. But perhaps there are other types of networks, other types of meta-stability, flat ontology, relational space, virtuality? This, it seems, is the deeper question this text tries to ask, and it is a question that hits at the debates central to contemporary theory today. And while Shaviro occasionally nips at the counter-strategies composed by Michael Hardt and Antonio Negri, it seems that this is because Shaviro feels that the real strategies of resistance are yet to come.

While at times risking Baudrillardian fatalism, Shaviro seems to point at one ray of hope, namely, that new media formations can teach us, in their own way, how, to use a phrase employed by Deleuze, to “believe” once again in the world (60). But before we can get there, we need to map the new terrain, and that is what this slim yet sly volume seeks to do, and it succeeds masterfully.

Post-Foundational Mathematics as (Met)a-Gaming

•May 29, 2013 • Leave a Comment

Mathematics is a fundamentally human activity, and a semiotic one at that, which is to say, it is an activity of making and using signs in relation to the wider world of practices whereby humans relate to their worlds. While this might seem obvious, most working mathematicians self-identify as Platonists, which is to say, they take the position that they are working with realities which are “really there,” “mathematical objects” which are able to be discovered by means of techniques modelled on that of the discovery of physical objects in nature. Mathematical objects, which is to say, things like numbers and geometrical figures, are ideal entities whose contours are wrenched from the fabric of the ideal itself by means of the techniques of logico-mathematical proof. All of which is to say, even if there were no physical world, the truths of mathematics would be “really there,” as if God given, hence the term Platonism, often worn with pride by mathematicians today. The locus classicus of this position is that of Leopold Kronecker, who famously remarked (in a statement reported in 1893) that “God made the natural numbers, all the rest is the work of man.”

Nevertheless, Kronecker was responding to the foundations crisis which was beginning to shake the tree of mathematics. For example, logicists like Gottlob Frege had attempted to “found” mathematics upon the basis of its subsumption to the “rules of thought” articulated in his new logical calculi. The problem with this, however, is that it hardly did what Frege intended, which is to say, “ground” mathematics, and hence show its absolute necessity in “all possible worlds” (to use a term from Leibniz), but rather, it revealed just how ungrounded the seemingly incontrovertible world of mathematics actually was. When combined with the set theory developed by Georg Cantor, or the slippery attempts to ground number in linear continua as described by Richard Dedekind, it seemed as if just as mathematics had radically increased in power and rigor during the nineteenth century, it had also revealed in the process that perhaps, despite or by means of this very power, it was all illusion, a sleight of hand. Did the Emperor have clothes? Kronecker believed that blind faith was the answer, and so do many mathematicians today.

And of course, if one works far from the limits of the mathematical enterprise, which is to say, far from the applied aspects of mathematics which find themselves continually in dialogue with the physical world and its non-mathematical impingements upon the edifice of pure mathematics, then one is safe from these issues. Likewise, if one doesn’t stray too far into the realm of the pure, to the foundations of mathematics itself, one is also able to skirt around the issues of how precisely mathematics derives its authority or internal consistency. It is in the “dirty middle” realm, from which the shores of the physical world and of the purely ideal world are both distant horizons, that the terrain of mathematics appears boundless. But at the shores, the issue becomes muddier indeed.

And this is what the foundations crisis that shook the mathematical world at the start of the century revealed. Some argued, with Hilbert, that mathematics was purely about signs, and was merely a game, and hence, should not be compared to the physical world. Any need for grounding was then moot, because mathematics grounded itself, circularly, and needed no justification beyond this. Its own internal consistency made it a form of sophisticated play, and if it was useful in the world beyond mathematics, then so be it, but this was, ultimately, accidental and not something worth the time of mathematicians. This “formalist” approach, however, simply ignored the fact that the engine of mathematical creativity had come not only from within, but from without. The radical developments within mathematics during the eighteenth and nineteenth centuries, for example, the great works of Leonhard Euler, were often spurred by attempts to solve problems from mechanics, which is to say, very practical issues which engineering posed to the lofty realm of math, and which it could not yet answer. Even analysis, the great discovery of Newton and Leibniz, had been wrested from the gods of mathematics by the push to describe accurately the motion of heavenly bodies, not to mention the behavior of mundane physical objects. Whatever mathematics is, it is hardly pure.

In contrast to this we see the Intuitionism proposed by L.E.J. Brouwer, who argued that mathematics should simply get rid of, as purely abstract nonsense, anything that couldn’t be grasped by the intuition of the mind. Brouwer attempted to “construct” mathematics on the intuitions of the mind, producing an analogy between the manner in which the mind intuits the objects of the physical world and the manner whereby it intuits the ideal realm of mathematical entities. A highly influential early twentieth century movement, one based to a large degree in Neo-Kantian ideas of scientific method and practice, Intuitionism largely fell out of favor, along with formalism, even as pure Platonist and applied approaches found their own limits in Goedel’s famous “incompleteness theorems” of 1931.

Goedel’s singular accomplishment was to put all four of these approaches to grounding math – Platonic idealism, Physical Realism, Intuitionist Neo-Kantian Subjectivism, and Objective Structuralist Formalism – to rest as aspects of the insolubility of the same problem. That is, math simply could not be grounded from within, nor could it be grounded from without, without proving itself ultimately both grounded and ungrounded, and in fact, both and neither, from a mathematical point of view, in the process. What Goedel essentially did, then, was show that the very notion of “grounding,” at least as this notion was being framed by mathematicians of his time, was part of the problem. That is, mathematicians who wanted to ground mathematics from within, as the insurer of its own truth, would find only circularity, but no ability to say if this circularity applied to anything beyond math. Those who wanted to ground mathematics in something beyond math, such as the physical world or human intuition, or even the workings of ultimately meaningless signs, would find that all they could prove by means of the tools provided by math, from within at least, was that mathematics relied on something beyond it, but with no way of showing the necessity of this, or of any relation to one particular grounding or another, by solely mathematical means. That is, to ground math with something beyond it (ie: human activity, human signs, human intuition, god) would require actually bringing something outside of math inside of math. And that would produce contradiction.

Circularity or contradiction: the result would be incompletion or incoherence, respectively. The final option, oscillation or inconsistency, was simply what most working mathematicians did, which is to say, to use whichever options made math “work best” in a given local instance, and leave “grounding” for some other time. What Goedel did was show that Hilbert’s famous dictum that math must prove itself “consistent, coherent, and complete” by its own means, which is to say, by means of mathematics, is simply not possible, and that this is not simply an accident, but part of the very structure of the way mathematics itself works. Goedel showed that math had its own limits.
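
For readers who want the standard formulations, the incompleteness results are usually stated roughly as follows (these are the textbook versions, not Goedel’s own wording). First theorem: if T is a consistent, effectively axiomatized theory strong enough to express basic arithmetic, then there is a sentence G_T in the language of T such that T proves neither G_T nor its negation, which is to say, T is incomplete. Second theorem: if such a T is consistent, then T cannot prove Con(T), the sentence expressing its own consistency; its consistency can only be established by appeal to principles that go beyond it.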

Of course, some of this might seem like common sense. Math always counts (numbers) or draws (geometry) something which is not math itself. If I see a group of animals and count them, and label them “four dogs,” the dogs and the number “four” are fundamentally different things. Math is always both about and not about mathematics. When math tries to eliminate any aspect of it which is not math, or which is math, the result will always be, at least from within math, paradox. So it is with any signifying practice. The same can be said about language. A dog is not the same as the word “dog,” and ultimately, one cannot “ground” the relation between the two, at least from within language, without producing paradoxes. Just as Goedel showed this within mathematics, so Jacques Derrida famously demonstrated it by means of linguistic deconstruction, and Ludwig Wittgenstein in his own way some thirty years before. The fact that Goedel, Wittgenstein, and Derrida share what can be seen as variations of the same insight in different fields, one which resonates strongly with that of Heisenberg in physics, is likely no accident.

Naturalistic Mathematics?

If mathematics cannot ground itself, perhaps it can be grounded in the fact that humans devised mathematics. That is, it is a signifying practice, like that of language, whereby humans describe aspects of their world so as to interact with them. Mathematics is a special type of language, but language nevertheless. Of course, most working mathematicians are likely not to like this, because it subordinates their activities to something, anything. But there is no unsubordinated position from which to view the world, we are always mediated in our relation to anything and everything, and likely, are nothing but mediations of mediations all the way down, fractally and holographically. That is, any notion that there is some ‘God’s Eye Perspective’ from which to survey the world seems as singularly outdated as any other simplistic form of faith in the unbiased. All is perspective, and mathematics is simply one amongst others. It is useful, of course, but so is language, our bodies, our brains, etc. Each of these has been viewed by its partisans as the singular, privileged lens on the world, and each can be decentered by others. Why mathematics should be any different is beyond me.

Rather, we live in a world of networks, each of which supports the others, culture and nature, language and physics, human and animal, living and non-living, each an aspect of a wider whole which supersedes them all. Whether we call this whole experience, or the universe, these are also simply aspects of it, attempts to describe the whole. And as Goedel showed in his way, Derrida in his, and Heisenberg in yet another, the attempt of any system to grasp the whole from within it is likely to founder in paradox. In mathematics, of course, this is the famous paradox of the barber, the popularized version of Bertrand Russell’s paradox of the “set of all sets” which are not members of themselves.
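
In set-theoretic dress (the barber being the popular version), the paradox fits on one line. Consider the collection of all sets which are not members of themselves:

R = { x : x ∉ x }, so that R ∈ R if and only if R ∉ R,

a contradiction whichever answer one gives. Goedel’s diagonal construction turns a close cousin of this self-reference back onto the notion of provability itself.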

That said, these problems all become less of an issue if we say something like: mathematics is a human signifying practice, which is useful in its domain in relation to others. Of course, this raises the question of what use means, but since humans are those who determine that which is useful to them, and are also the originators of any math we have ever known, we could perhaps say that mathematics reflects aspects of what humans value in the world. That is, mathematics has helped humans do the sorts of things they value, one form of which is mathematics itself. While some enjoy doing math for its own sake, the urge to do something like mathematics does seem to find its impetus in practical activity, which is to say, the attempt to describe the world so as to be able to do things in relation to it. There is no question that both pure and applied mathematics have given rise to new forms of mathematics, but as some of the more “naturalistic” philosophers of mathematics today have argued, math is always between the physical and the ideal, with one foot in each, dirty and impure to the core. That is, it is a form of media, just like language, or the body. A lens on the world of experience, if a particular one, like yet different from all other media in this way.

While naturalistic approaches to mathematics are in the minority among working mathematicians, and even among philosophers of mathematics, naturalism seems to be the only approach which takes into account the foundations crisis of the last century. If it dethrones mathematics from its attempt to imagine itself as the queen of the sciences, well, let it join philosophy and every other dethroned discipline which aimed for such a role. For perhaps it is the very desire for centrality which is the problem, rather than something to which a solution might be “found,” for it is this desire which seems to give rise to paradoxes, whereby the very fabric of, well, something, call it the world or otherwise, resists. Physics, linguistics, mathematics, the foundations crises of many disciplines of the twentieth century, all seem to indicate that the center does not hold, and yet, centerlessly, they still do many things. Naturalism attempts to start from this, from activity in the world, and human activity at that, rather than ideal foundations, be they ideal in the classical sense, or the materialist inversions thereof.

Post-Structuralist Approaches to Mathematical Gaming

If mathematics is a human activity, then perhaps it is possible to philosophize about it from this perspective. Certainly mathematicians refer to specific “things,” which they describe with symbols which they manipulate. These signifieds of mathematics are represented by signifiers, which is to say, the graphs and equations scratched on paper, computer screen, and chalkboard, which “represent” something generally called “mathematical.” If “mathematical objects” are signifieds, meanings, that which are described and represented by mathematical signs, considered as signifiers, then perhaps we can think of mathematics as a specialized type of language, and the practice of mathematics as a type of writing and speech. Certainly not one which is meaningless, as Hilbert famously argued it was. No, mathematics seems to be about the world as much as about itself, just as any language, and yet, it is a very particular sort of language at that.

As with any language, mathematics can be considered, as Wittgenstein famously argued, a game. That is, it has rules, and people get quite heated if you break them, even if the rules of the game are always being changed from within as you go. Good moves in the game, in fact, change the very nature of the game itself, and in doing so, change what it means to play, who the players are, etc. In this manner, the rules of mathematical play are like those of linguistic play, which is to say, mathematics has a grammar, just like natural languages do, even if this grammar works differently than those of natural language. But this grammar is a grammar nevertheless. And so, a (post)structuralist analysis of mathematics is not only possible, but, I would say, desirable.

Structuralism viewed languages as composed of utterances, often described as “parole,” in relation to structuring categories which were implicit yet made sense of utterances, or “langue.” There are, of course, several types of langue working in any given language. For example, in a natural language such as English, there is the langue of the semantics of the language, which is to say, the meanings of words, systematized in a dictionary, which a competent speaker of English would need to understand to “make sense” of a given utterance. Just as one couldn’t make sense of a sentence such as “The cat is on the mat” without knowing, for example, what it means for something to be “on” something else, likewise, one cannot make much sense of a mathematical sentence such as “x – 5 = 17” unless one knows what “5” means, and how this mathematical “word” differs from “17.” While it is necessary to bring in forms of semiotics which deal in diagrams to describe how this could be applied to notions such as geometry, the semiotics of C.S. Peirce seems more than adequate to the task.

Below the level of semantics, or the meanings of words, are the deeper structures, those which determine the ways in which these meanings can be linked to each other. A word like “is” in “The cat is on the mat” is really not merely a word, but a word which represents grammar, or syntax, which is the foundation out of which word meanings arise, for it provides the fundamental and implicit categories which allow the meanings of words to take form. And so, if one were to look up any word in a dictionary, one would find the word declared equivalent to this or that meaning. “Is” is both a word and a meta-word, so to speak, and this is what is meant by langue in relation to parole. Just as knowledge of the meaning of a term at a given level is necessary to understand an utterance, so is knowledge of the meanings which describe the meanings of these words, which is to say, the rules of the game as well as the particular move being made. And so, if one doesn’t understand what “is” is, then understanding the particular meaning of “The cat is on the mat” is likely impossible. The same goes for grammar markers in mathematics, such as “=,” which is ultimately very similar to saying “is” in a natural language such as English.

As is likely apparent from the preceding, recursion is operative here, not only at each level, but at any level. That is, each and any utterance/parole is related to a langue, which is itself a parole in relation to at least one other langue, and this repeats fractally. This is the contribution to this sort of structural approach made by post-structuralism, namely, the attempt to show the paradox of any attempt to find an ultimate foundation at work in such an analysis. And so, rather than argue that a notion such as “is” represents a “deep structure” of the language of mathematics, and hence, in some way, the world itself, a post-structuralist approach uses relatively similar methods to show that the process can be carried on infinitely, with no ultimate ground in sight, or, if one wants, arbitrarily ended for convenience’s sake. But any attempt at ultimate ground will give rise to something like infinite regress, which is to say, incompletion, arbitrary end, or incoherence, or some mixture, which is to say, inconsistency. Post-structuralism and Goedel are on the same page on this one.
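
For those who like their fractal regresses executable, here is a minimal sketch in code (my own toy illustration, not anything drawn from the structuralist literature; the class and function names are simply invented for the example). Each parole points to the langue that makes it legible, and that langue is itself just another parole relative to a further langue, the chain ending only where we arbitrarily stop writing it.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Sign:
    content: str                      # the utterance (parole) at this level
    langue: Optional["Sign"] = None   # the structuring level it presupposes, itself a Sign

def levels(sign: Optional[Sign]) -> list[str]:
    # Walk upward through the levels; nothing in the structure marks a final
    # ground, the chain simply ends wherever we have chosen to cut it off.
    chain = []
    while sign is not None:
        chain.append(sign.content)
        sign = sign.langue
    return chain

equation = Sign("x - 5 = 17",
                Sign("rules for transforming equations",
                     Sign("rules about what counts as a rule")))
print(levels(equation))
# -> ['x - 5 = 17', 'rules for transforming equations', 'rules about what counts as a rule']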

Mathematical Meta-Gaming

From a post-structuralist perspective, then, it becomes possible to say that mathematics has objects, which are meanings within its semantics. These objects are things such as numbers and shapes, or any of the other entities which mathematics attempts to “treat.” When mathematics deals with combinations as if they were “things,” which is the discipline of combinatorics, then we know we are in the realm of mathematical semantics. These things are then linked together to produce utterances, according to the rules of grammar implicit to the “game” of mathematics.

The sorts of utterances vary, however. Some are simple equations, such as “x – 5 = 17,” which are then transformable, by a known series of procedures, into “x = 22,” such that it becomes possible to state that these two are themselves “equivalent.” There are procedures here, such as the “solving” of equations, which operate upon these utterances. And these procedures embody the grammar whereby mathematical equations are transformed, one into the other.
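
Spelled out, the “known series of procedures” is just a short chain of rule-governed rewritings, something like:

x − 5 = 17  ⟹  (x − 5) + 5 = 17 + 5  ⟹  x = 22.

Each step applies a licensed move (here, adding the same quantity to both sides), much as a grammatical transformation licenses one sentence on the basis of another.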

But then there are meta-mathematical utterances, and these are proofs. A mathematical proof makes use of procedures within the language of mathematics to create utterances which attempt to alter the way the game is played. This is, for example, not all that different from the role of argument in philosophy. A philosopher might argue, for example, that we shouldn’t think of reason or god as this or that. Ultimately, the philosopher is using words to impact the way we use words, just as a proof indicates the ways in which a mathematician uses math to impact the way math is done. Of course, the goal is ultimately to impact the way people think about math, but seeing as mind-reading isn’t yet possible, the only way we’d know how people think is how they act, which is to say, how they “do” math, and so a proof may aim at how people think, but ultimately, it only manifests its effects in how people “do” math. The same could be said of the role of argument in philosophy.

None of which is to say, of course, that I haven’t been doing precisely this in what I’ve already said. In fact, the preceding paragraphs are simply arguments, attempts to impact the way the game of philosophy of mathematics is done, from within it. And this sort of meta-gaming is part of how the game is played, even if the results of this are always uncertain, which is to say, incomplete, inconsistent, or incoherent, at least from within the game as it currently stands. But games evolve, and meta-gaming is how this happens, in math and language as much as any other sort of gaming. And so, if I decide all of a sudden that a bishop can now jump, but only over rooks, in chess, and this move catches on, and becomes part of the new rules amongst the “community” of chess gamers worldwide, then I have made an utterance, not within the game, but also not beyond it. In a sense, I’ve made a meta-utterance or meta-move in regard to the game, thereby altering its grammar from within.

Mathematics does this all the time. In fact, that is precisely what Goedel did, and what others do when they “invent” new mathematics. Those meta-moves which catch on become movements, such as “category theory,” or meta-meta-movements, such as “post-foundationalist mathematics.” None of which is to say that the meta-games precede the games, since there was no such thing as “post-foundationalist mathematics,” or even the need for this, before the foundations crises. Sometimes the meta-games pre-exist, because they have already been called into existence (ie: people have been talking about the grammars of languages for quite a long time), while other times, they are produced, even giving rise to new layers in between existing layers.

In a similar manner, mathematicians can give rise to new objects. Certainly “category theory” has not only given rise to new mathematical grammars within and beyond it, but also to new mathematical objects, such as “functors” or “categories.” The field of abstract algebra, of which category theory is simply one form, is in fact the branch of mathematics which works to deal abstractly with the various ways of relating objects and grammars to produce utterances and meta-utterances. From its start in set theory, modern algebra developed into group theory and beyond, and by means of Emmy Noether around the time of Goedel, became the meta-mathematical enterprise it is today. If Goedel destroyed the hope of a single meta-mathematics, Noether proved that the true name of foundations was “many,” even if meta- as a notion was only ever “one” in relation to a particular location. Noether showed that grammars and objects have plural ways of relating. And from such a perspective, it becomes possible to see that foundations are not things, but verbs, processes of continually founding and refounding, which is to say, of relating to levels of micro- and macro-scale within a given level of practice, semiotic or otherwise.

It is for this reason that many have turned to category theory as a possible inheritor of set theory, as a possible “post-foundational” foundation for mathematics. What is so incredibly slippery about category theory is that it defines its objects, grammars, and moves relationally. That is, the very meaning of an object is what you can do with it, and these “moves” give rise to the very categories of objects in question, and vice-versa. That is, object, category, and move are interdependently defined, collapsing the distinction between utterance and meta-utterance, such that all utterance is meta-utterance and vice-versa. All of which is a way of saying that this constructedness and reconstructedness is not hidden behind a smokescreen of “this is the way things really are.” Category theory is a mathematics, not of being, such as set theory, but rather, a mathematics of relation.
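
To make the relational character concrete, the standard textbook definition (nothing specific to the post-foundational debate) is worth having on the table. A category C consists of a collection of objects; for each pair of objects A, B, a collection of morphisms (arrows) f : A → B; a composition sending f : A → B and g : B → C to g ∘ f : A → C; and an identity arrow 1_A : A → A for each object, subject to associativity, h ∘ (g ∘ f) = (h ∘ g) ∘ f, and the unit laws, f ∘ 1_A = f = 1_B ∘ f. Nothing at all is said about what the objects “are”; they are individuated entirely by the arrows into and out of them, which is the relational point at issue here.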

There are, however, other potential post-foundational discourses. Fernando Zalamea has, in recent works such as “Synthetic Philosophy of Contemporary Mathematics,” argued that sheaf theory can play this role in a way different from that of category theory. Sheaf theory, a form of mathematics which works to extract invariants from particular transits between mathematical objects in transformation, is fundamentally a mathematics of the in-between. It is a mathematics which extracts from particular motions particular symmetries, and like group theory, then works to put these to work themselves in the transit between local and general. For example, sheaf theory may attempt to describe the ways in which particular figures can be sliced and re-glued to themselves in ways which maintain coherency even when that figure is transformed in a particular way, and to then learn from this possible insights which can be applied to different yet related types of slicing, re-gluing, and transformation. In many senses, sheaf theory is a meta-analytic formation, which is to say, it takes the sorts of tools of decomposition and recomposition, analysis and synthesis, seen in notions such as differentiation and integration, and generalizes them to ever wider terrain.
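
The “slicing and re-gluing” has a precise expression in the usual gluing axiom (the standard formulation, not Zalamea’s own wording). A sheaf F on a space X assigns data F(U) to every open set U, along with restriction maps between them; the axiom requires that for any open cover U = ⋃ U_i, a family of local sections s_i ∈ F(U_i) which agree on all the overlaps U_i ∩ U_j glues together to one and only one section s ∈ F(U) whose restriction to each U_i is s_i. Local pieces that cohere on their overlaps determine, and are determined by, the global whole; this is why the construction reads so naturally as a mathematics of transit between the local and the general.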

Sheaf theory is thus a mathematics of transits, of the temporary reification of an aspect of a transit, only to reapply this to another. In this sense, its objects, categories, and grammars are also relationally interdetermining, and the relation between utterance and meta-utterance is constructed and reconstructed continually, as with category theory. One difference, however, is that category theory is ultimately a logical enterprise, and doesn’t get into the specifics of particular figures and their equations, but rather attempts to describe the logical grammar “beneath” the mathematical language used to describe and manipulate these. In a sense, then, one could say that category theory and sheaf theory are both instantiations of post-foundational foundations of mathematical practice, but in regard to differing aspects of the mathematical enterprise. Category theory is a meta-gamic approach to logic, while sheaf theory plays this role in regard to transformation within a particular field which has already instantiated categories of objects, categories, and grammars (ie: figures, types, and rules to regulate transformations).

None of this is to say that category theory is “more” foundational, but rather, that it is more abstract, and this is different. Abstraction here indicates a further move away from the physical world, and closer to the ideal, which is simply the realm stripped of specifics. Beyond a relatively “concrete” aspect of the world, such as a stone, there is the abstract representation of this, such as the word “stone,” or the number “1” which can be used to count this stone. Category theory leans to the latter side, and sheaf theory to the other, but depending on the particular pole one is using to base one’s practice at a given moment, be this the concrete or the abstract pole, or any other for that matter, the “foundational” orientation of one’s meta-gamic practice will proceed differently. Foundations are always foundationing, and as such, one always creates and recreates them by means of meta-gamic moves. The whole point of a post-foundational foundationalism is that it aims to produce the potential for foundations everywhere, rather than prohibit them, lest this simply become foundationalism in reverse. The hope isn’t to proscribe foundations, but to liberate them, and in the process, to liberate practices from the straitjacket of both foundationalism and its evil twin, anti-foundationalism. Post-foundationalism, on the contrary, embraces creativity, which is to say, in relation to mathematics, the potential production of new mathematical games and meta-games to give rise to new ways of describing our potential relations to the world.

Foundations as Foundationings: Or, Mathematico-(Meta)gamic Ethics

If abstract and concrete are two poles which can help us orient in this process, these two should be seen as merely categories produced by meta-gamic moves, hardly necessary, but produced and reproduced at each and any moment in which they are operative. Just as Zalamea finds this set of polarities useful, I also find those of the human body helpful. That is, if all mathematics can be construed as the product of human activity, then perhaps, as with natural language, it makes use of categories which can be seen as potentially deriving from the form of our embodiment. Natural languages, for example, have nouns, adjectives, linking words (ie: prepositions), and verbs, four primary parts of speech, and some theorists, including Gilles Deleuze, have argued that these can be seen as the result of the ways in which human embodiment in the world makes use of things, qualities/categories, forms of relation, and actions, which is to say, nouns, adjectives, ‘linking words’ (ie: of, on, in, is, therefore), and verbs. The grammar of human language, then, can be seen as a way of representing some of the primary categories which humans have, by means of the media of their bodies, extracted from their worlds of experience. None of which is to say that these categories are necessary, but rather, that they are produced and reproduced, continually, by gaming and meta-gaming in the worlds of our experience, with language being one of the effects thereof.

From such a perspective, it might not be far-fetched to argue something similar about mathematics. That is, there are mathematical objects, which function like nouns, and categories of these, which are similar to adjectives, giving rise to semantics from the relation between them. From here, the utterances and meta-utterances formed by means of these objects and their linkages give rise to modes of relation, which are represented alongside the objects and categories within utterances and meta-utterances by means of “linking words” which represent their grammatical structure. But all of these are ultimately the result, sedimentation, and reification, if only partial, of the processes which give rise to them. Mathematics and meta-mathematics, two sides of the same, are ultimately, like a language and its grammar, processes of gaming and meta-gaming.

Thinking in these terms also allows mathematical gamings to be linked up to philosophical notions. Set theory, then, is clearly, as Alain Badiou has argued, the mathematics of being, and as such has a fundamental resonance with the philosophical notion of ontology, even as category theory seems to be something like a mathematics of relation. Sheaf theory seems an attempt to describe something like a mathematics of becoming, of the transits between the figural and the numerical, and within each. If, as Brian Rotman has argued, geometry is the language of space, and algebra the language of counting, it becomes possible to see these as so many layers of semantics and syntax in relation to each other, and yet also permeated by that often forgotten stepchild of linguistics, which is to say, pragmatics. There is no vocabulary or grammar without a context which makes these relevant and meaningful. Without something like human bodies in something like a world of experience which has particular structures of space and time such as we know them, it is unlikely that anything like mathematical grammar or vocabulary, let alone those of natural languages, would make any sense. Likewise with what we tend to think of as constituting proof or argument.

If we frame languages as composed of semantic vocabularies of objects, linked in syntactic categories which produce series of potential relations between these, grasped in turn by meta-syntactic categories, the grammars of that language, which ultimately articulate the pragmatic linkage of that language to its contexts beyond itself, then what gives rise to all of this? Creation, it would seem, but also recreation. Math emerged from our world, and continues to reemerge from it, as does language. Semantics, syntactics, and pragmatics are simply aspects of this continual process of recreation. And if this essay attempts to do anything, it is to deconstruct attempts to hide the manner in which recreation, which is to say, emergence, is the ever present potential within any and all, here and now.

Mathematics and language, just as with any other sedimentation of our actions into representations thereof, are simply ossifications which stand out from our practices, which we then treat as if “real,” which is to say, as if necessary. And yet, they are only ever produced and reproduced by our actions, even as these are only ever produced and reproduced by that of our contexts in turn. We are networked, linked nodes of praxis within others, at potentially infinite levels of scale. And yet, within our particular zones of this, there remains the potential to give rise to effects which ripple widely beyond, leading, under the right contextual conditions, to the potential for cascades which give rise to a sea-change in how things are done.

There will always be those who will try to control the way things are done, who will try to close off our sense that, to quote the revolutionaries of the past, “beneath the paving stones, the beach,” or, in more contemporary idiom, that our world is only ever of our own making and remaking. No one entity can ever shift the whole, and yet, any one entity can sit at the fulcrum of others and provide the critical shift in a grain of sand that creates a massive change. Or participate in structuring such a set of conditions so that something else can provide that final push.

There are times, of course, in which it is necessary to restrict the manner in which things recreate themselves, for in fact, too much change can dissolve and destroy. But our world has almost never been in danger of such things, and when it has, it is the chaotic dissolution which has nearly always served to reproduce the deeper aims of control. But creativity, radical creativity, doesn’t care if it is the center of the world, but only that it resonates with the creativity within it in its own way, amplifying the power of liberation and emergence, being emergence itself, even if towards an end beyond it. The reason for this, of course, is that emergence knows no fruit, it is its own reward. As anyone who has ever created anything knows, creation is powerful stuff.

Mathematics is a form of creation. If god created the natural numbers, and humans created god or are god, or even tap into some aspect of the godlike nature of creativity within the fabric of the world as such, that which has never yet ceased to surprise us with its novelty, then by creating new mathematics, we allow the world to recreate itself through us. And in doing this, we learn something deep about ourselves, and about the manner in which we emerge from the world in and through things like mathematics. Human languages aren’t things, they are emergences and reemergences, as are we, as is, it seems, the universe from some primordial singularity.

While the end of this essay may seem far from questions of mathematics, the reason for going to such metaphysical issues is to make the point that they are omnipresent, and that reification obscures this, with control and petrification as the result and aim. Liberation and emergence can also be result and aim, and both are self-potentiating tendencies, as paradoxical as anything described by Goedel, but dynamic beyond any such attempt to grasp it in snapshots. None of which is to say that we need dispense with reification, but rather, that we simply need not be seduced by its productions as being anything ultimate. Rather, there is always a play of natura naturans and natura naturata, to use Spinozist terminology, or to cite Schelling, of producer and produced. Forgetting that the produced is only a product and not the producer is the classic means of taking the present and its products as necessary. Those in control of our world have always relied on the stability of appearances to maintain the notion that the world has to be as it is.

While creation for its own sake can dissolve all that’s good in the world if taken to its limits, our world has always erred on the side of reification for its own sake, with only the minimal creativity necessary to avoid ossification. This of course makes sense in a world in which survival is that of the fittest. But in a world in which we can now feed everyone, in which the worst predators humans face are other humans and their products, the very evolutionary situation has changed, and so our ability to survive depends on our unlearning the very lessons that evolution instilled in us to survive the brutal period of biological evolution. Surviving cultural evolution is the next step, and this requires walking back the paranoia we needed in order to develop out of the primordial oceans.

All the games we play, from mathematics to baseball, politics to economics, these are all human products, and products of the world beyond this in turn. Those creations and recreations we give rise to reveal what we value. And so often, we tend to forget that we have choices. Of course, if we are to truly make choices, we have to deal with the meta-gamic question of what we value, and why. To avoid precisely such meta-gamic questioning, it is often easier to simply pretend that our world is not even partially of our own making, and that our agency is small indeed. But while our agency is distributed and relational, it is hardly small.

The ethics of this essay is that of emergence, neither creation for its own sake, thereby giving rise to chaos, nor consolidation and control for its own sake, giving rise to stasis. Because our world overvalues the second, course correction would seem to value a shift to the other side, to novelty, but again, this should hardly be seen as necessary, only situational.

Mathematics has in fact engaged in a radical move towards creativity during the twentieth century, and put the seduction of reification behind it after the foundations crisis of the early part of the century, itself potentially a reflection of so many other crises of foundations early in the century, and their often violent repercussions. Foundations are always foundationings. They are statements of value and valuation, but if one looks for the values underpinning one’s values, there is only ever refraction to the whole, because the whole notion of value itself is simply the manner in which one is able to relate a particular move within a particular game to the meta-game played at the level of a given whole. The ultimate meta-meta-game, however, something like the world or experience, is precisely what the question of value addresses, and the answer is only ever inconsistent, incoherent, or incomplete. And yet, the creations and recreations we give rise to in the way we game and meta-game in relation to this are what make everything worthwhile.

And if there is something that gives value to meta-gaming as such, it seems to be the emergence of (meta)gaming as such, and its continued emergence, the robust complexity of the games playable within the larger gamespacetime. Everything in our world worth valuing, after all, seems to only exist contextually within such situations. Life, for example, or the love which it can give rise to, these depend upon the complexity, and robust sustainable complexity, of the systems which give rise to them. Look at the best within the game, and use it as a guide to establish new guidelines for future gaming and meta-gaming, and continually readjust.

For games without rules, such as life or love, it’s always about values, and values are always about the relation of moves to the context of the whole, which is always in the process of emergence. But is that emergence getting better, and in a manner which seems likely to lead to more of it in the future? Perhaps that is the best we can ask. If there is an ethics to (meta)games such as mathematics, perhaps it is in praxes like these. And while many may balk at the notion that math could ever have an ethics, there is no action in the world which does not express values and valuation, and hence have an ethical component. Humans value things in the world which allow them to continue to live and grow, which is to say, to recreate their ability to create and recreate. We eat food, build buildings for shelter, and create things like mathematics, and in doing so, we expend our energy and time, things we value, in the process. We value mathematics, and as such, it is an expression of our value, valuation, and values, even as humans themselves are expressions of the value, valuation, and values of the contexts which produced us in turn.

There is an ethics to every move in every game, including language and mathematics. I’d like to think that emergence is, ultimately, the only way to imagine winning, which is to say, to emerge more robustly in relation to any and all in one’s context. An ethics of emergence, with ramifications for all gaming and (meta)gaming, including the creation and recreation of mathematical worlds.


Nondualism and Semiotics: Philosophy of Language Between East and West

•May 29, 2013 • 3 Comments

Even a casual reading of secondary sources on non-Western philosophy is likely to quickly turn up the term “non-dualism,” and in fact, it is hardly a controversial assertion to say that non-duality is perhaps the single most common notion used to distinguish between so-called ‘Eastern’ and ‘Western’ modes of thinking. While not all ‘Eastern’ schools make use of this term, some do, for example, via the term advaita in Sanskrit, used extensively in Vedantic Hinduism and various schools of Buddhism. Many aspects of non-Western thought have since been understood by means of this term, particularly by Western scholars. And this isn’t necessarily a bad thing, because the term, developed in the Indian tradition, nevertheless does a good job of describing crucial structural aspects of Taoist and Confucian philosophy (particularly in regard to the use of the term Dao, or ‘the Way,’ within these traditions), as well as central aspects of Sufism.

That said, there is also a non-dual tradition in the West, with Plotinus as the clearest example of a thinker who could, quite unproblematically, be described as non-dual. It is also worth noting that, since the reemergence of the “modern” reading of Plato around the 1600s, Plotinus has been seen as the poor stepchild of philosophy, a mystic in philosopher’s clothing. This is hardly incidental, and there is in fact a possibility that Plotinus and other Westerners, such as the Stoics, were influenced by Indic forms of thinking, including possibly Buddhism, by means of the Greco-Indian states which Alexander the Great left in his wake. That said, the term could also be applied to many other Western philosophers, such as, to varying degrees, the German Idealists Hegel and Schelling, or other thinkers with supposedly mystical leanings, such as Spinoza or Deleuze.

Before going on, however, it is worth saying more precisely what this term “non-duality” is generally understood to mean. Most commonly, what is meant is the manner in which the duality between subject and object is somehow undercut or supplanted, but in all rigor, the term is and has been applied to any attempt to get around or undercut binary forms of thinking. The Western parody of Eastern thought as saying a lot of vague and contradictory words which ultimately come to mean nothing in particular is perhaps useful here, if only to deconstruct it. For in fact, reading non-Western texts, one often encounters statements that one should, for example, endeavor towards a state that is “neither this nor that,” “both this and that,” and sometimes even “neither this nor that yet also both this and that.” And so, while perhaps the most important binary which non-dual non-Western forms of thinking aim to displace is generally that between subject and object, depending on the text, this could also be extended to binaries such as good and evil, true and false, real and apparent, etc.

One excellent place to start an investigation of what is at stake in this process is David Loy’s text Nonduality: A Study in Comparative Philosophy. Drawing from a wide variety of philosophical sources and traditions, Loy teases out the structure of non-dual thinking from these traditions, and works to systematize and classify the ways in which non-duality is used. While Loy does integrate some discussion of post-structuralist theory, particularly that of Derrida, towards the end of his text, I’d like to build upon this, both by recourse to some of the semiotic substratum of post-structuralist analyses, and by going outside of philosophy and linguistics towards systems theory.

Non-dual modes of argumentation work to undercut a given binary. For example, if I were to say that all objects are “neither true nor false, but both true and false,” one could simply dismiss this as contradictory, as many Westerners in the past have done, or deal with it as a complex semiotic utterance, following in the footsteps of twentieth-century structuralists. In fact, it seems to me that A.J. Greimas, with his semiotic square, nicely describes the manner in which binaries can be seen as composed of a series of options between “A,” “not-A,” “B,” and “not-B,” the four primary options in a binary system, with those of “neither-nor” and “both-and” as secondary options which occur between any two of the terms described above. The result is his famous semiotic square, which links the Aristotelian logical modalities of contradiction and contrariety within binary form, thereby building upon the semiotics of Jakobson and Saussure, and providing a nice diagram to describe dual or binary linguistic structures, influencing a wide variety of theorists, such as Jacques Lacan and Fredric Jameson.
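
For reference, the square is usually laid out something like this (a standard presentation, here using the A/B labels above rather than Greimas’s S1/S2 notation):

- A and B: contraries, the primary opposition along the top of the square
- not-A and not-B: subcontraries, along the bottom of the square
- A and not-A, B and not-B: contradictories, along the diagonals
- “both this and that”: the complex term (A together with B); “neither this nor that”: the neutral term (not-A together with not-B)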

When non-Western philosophers, or any philosophers for that matter, indicate that something is “neither this nor that,” they indicate that they are attempting to shift emphasis within the discourse within which they are working to another part of the square, and likewise when they say “both this and that.” However, when a thinker says that they are attempting to describe something which is “neither this nor that as well as both this and that,” they seem to block out the entire square. What then is the effect of this sort of operation?

Self-Deconstructing Paradoxes: Wilde and Lacan

Take, for example, the notion of truth and falsity. To say that something is neither true nor false, but both true and false, is to question the relevancy of the very binary to the case under consideration, and in a sense, to reduce the applicability of the binary in question to the realm of poetic usage rather than denotative or logical argumentation. Such a gesture is not all that dissimilar to that of Oscar Wilde, when he famously argued that “All those who tell the truth will ultimately be found out,” or that of tricky arch-post-structuralist Jacques Lacan when he argued that “truth has the structure of a fiction.” These ingenious linguistic acrobatics are in fact non-dual utterances in disguise as more straightforward types of utterances, because ultimately, these are self-deconstructing statements.

For those with a taste for the vertiginous, it’s worth pursuing these paradoxical statements and how they operate to effectively deconstruct the binaries they utilize. To start with Wilde, if those who tell the truth will ultimately be found out, then those who tell the truth are really liars, which then raises the question of whether or not they are liars in the same way, or differently, than those who lie. The difference, of course, seems to be that those who tell the truth are somehow lying about lying, to themselves or others, while liars are those who at least are telling the truth about their lies to themselves. And so, liars are more truthful, at least to themselves. If they are truly liars, however, they would hardly be truthful about lying, and hence, would likely lie about this, and claim to be telling the truth, at least to others. But a truth teller, then, would be one who, at least to themselves, believes they are telling the truth, at least on the surface. The result is that the split between lying and truth-telling is moved inside the person, for the liar would then be someone who is open, at least to themselves, about the fact that nothing that they say, including statements they make about whether they lie or not, can be trusted, while a truth-teller would be a person who at least believes that what they say is true, as far as they are willing to admit, and hence tell the truth to themselves. The liar, then, is a person who is honest with themselves, and a truth-teller the person who is a liar to themselves, hence, why they will ultimately be found out, not for lying to others, but to themselves.

The result of this phraseological game is to move the determination of what constitutes a liar and a truth teller from the ground of self and other to that within the self, essentially splitting the self into self and other in relation to others who are also split. The real distinction, then, is between those who try to imagine that they are non-split subjects, and those who admit that they are split. And so, Wilde’s argument is about the structure of subjectivity, even though, on the surface, it seems to be about truth and falsity. What Wilde is doing is shifting one discourse to the other. But by phrasing this in a non-dual fashion, he forces the reader to work through the stages themselves, because, at least on the surface, the statement makes no sense. The only way to make sense of the statement is to follow through with the way it redefines terms performatively, in how they are used. That said, Wilde’s quote is hardly non-sensical, but rather, is completely sensical, though on a terrain which is shifted from the one in which such a statement would be traditionally understood. That is, notions like “truth-telling” and “lying” imply a particular context, namely, that of subjects in relation to others, and the ability to lie, and what that means, namely, a subject who says one thing yet means another. Wilde shifts the context within which the distinction between truth and falsity is understood, which is to say, the form of subjectivity they imply.

And so, the question isn’t really whether or not Wilde’s tactic is sensible, because clearly it is. And in fact, the position articulated in highly compressed and indirect form in this quip is presented in a much more direct and explicated manner in some of his essays, such as “The Truth of Masks” and “The Decay of Lying.” In these articles, Wilde argues that it is only when we realize that openly lying and wearing masks is the precondition for telling the truth that we come to understand that there is simply something more honest about admitting our dissimulation. And so, pretending that one can be completely truthful is the deepest form of lie, while admitting the impossibility of complete truthfulness is at least a more honest, more partial form of truth.

And this is in fact what is argued, in simply more condensed form, in his maxim, with the one caveat that this quip doesn’t indicate whether or not liars will perhaps ALSO be found out, which is to say, whether or not they are more or less truthful than so-called truth tellers. But since Wilde doesn’t attack liars, and doesn’t say they will be found out, this is an indication that all truth-tellers will be found out, but not necessarily all liars. It is the liar, then, who has the possibility of “getting away” with lying. This is not as fully developed a point as he makes in his essays, namely, that lying is more truthful than so-called honesty, because here the issue is whether or not one will be “found out” as a liar. If one is not “found out,” one can still be a liar, after all. Put in the context of his full position, as indicated by the essays, however, we see Wilde’s fuller point.

And that point is the one indicated by Lacan’s statement that “truth has the structure of a fiction,” which is to say, that the more truthful way to indicate something truthful is to use the form of fiction. To use the form of truth, however, is to do so less successfully, and hence, to be less of a truth-teller. This is why Wilde argues that there is a “truth in masks,” and why he holds that the decay of lying in contemporary society is actually a terrible thing, because the vogue of truth-telling which so worries him in the 1890’s is the most dishonest thing there is. Lacan, however, largely developed his ideas in the 1950’s and 60’s, the period of the rise of so-called “postmodernism.” With Andy Warhol as the avatar of early post-modernism par excellence, and a Wildean mask-wearer if there ever was one, Lacan was describing the more generalized condition which Wilde saw in nascent form in his day. Today, perhaps even more than then, the only way to be truthful is to use the form of fiction.

From Hard Binary to Soft

In the examples examined above, the goal of all these modalities is to undermine those who believe in any one particular truth which is exclusive of a multiplicity of truths. None of which is to say that these figures don’t believe in anything. Rather, they believe that truth is local, perspectival, relative to situation, and that any attempt to develop a universal truth beyond this, which holds for all times and all places, is in fact untrue. And in doing so, they shift truth and falsity from a hard binary to a soft one. That is, they move from any notion of absolute right and wrong, to relative right and wrong. Of course, such a move shifts the hard binary from truth and falsity to absolute and relative, or universal and particular, or in the case of Wilde, between the split between self and other and that within oneself.

In a sense, such moves soften one binary at the expense of another. And so, one could make the argument that ultimately, such moves don’t accomplish anything, that they are simply a play of smoke and mirrors. That said, Wilde and Lacan manage to shift the framing of a particular issue, in this case, truth and falsity, each in their respective ways. For Lacan, the issue becomes one of structure, while for Wilde, it is a shift from the division between selves to that within oneself. And one could easily argue that they are doing two versions of the same thing, for in fact, the “split subject” is essential to the Lacanian project.

And so, while at first it may seem that such paradoxical statements don’t accomplish anything, they do more than they may seem. Firstly, they show why the primary binary under investigation may not be as useful or meaningful as previously thought. Secondly, they indicate the need for “soft” rather than “hard” use of this binary distinction, moving from a model based on a firm divide between these notions to one which frames these as something more akin to poles on a continuum, with many shades of gray or intensity between them. Thirdly, the need for such a shift is justified in regard to another binary, often unmentioned, upon which the argument implied by the statement depends. This other set of binaries (ie: split subject, universal and relative, etc.), however, can then be addressed in two ways, depending on the thinker involved. It can be dealt with as the true, deeper, more fundamental grounding binary, and hence, be dealt with in a manner which is “hard,” or, it can be deconstructed in turn, and hence, be shifted to a “soft” determination whose ground lies in some other binary whose “hard” structure is itself also elsewhere.

From what has been said here, then, it seems that there is a distinction not only between a hard and soft way of using a binary, but also of dealing with binaries as such. That is, one can see some binaries as hard, and others as soft, and the line between these as either hard or soft. Or one can see all binaries as ultimately soft, and the binary between binaries and even ways of using them as ultimately soft. The first approach is one which can be seen to selectively deconstruct one binary, and replace it with another, while the second deconstructs all binaries, including those it depends upon to make its arguments.

 Non-Duality as Binaries on “Soft-Serve”

Nonduality is the term used in much of the literature on non-Western forms of argumentation, and some non-Western philosophy itself, particularly in the Indic tradition, to describe what I’ve here been calling “softness” in relation to binaries. That is, binary distinctions, of whatever sort, are deconstructed, and there is a fundamental skepticism towards the use of binary logic and ways of arguing and speaking in general. Of course, the question then becomes what is put in place of the binary logics these modes of thought work to deconstruct.

It is worth noting, however, the degree to which such a position is radical in relation to many of the core doctrines which have shaped the history of what is often thought of as “the West.” During the twentieth century it was often seen as a truism in many Western schools of thought that not only was all language binary, but all thought along with it, and that to try to get around binaries led to poetry or nonsense at best, and dangerous misuses of language and thought at worst. Entire schools of thought aimed to purge this nonsense and contradiction from philosophy itself. And this effort ultimately came to naught.

This effort and those allied to it, which is to say, those which aimed to reduce the world in a given domain of inquiry to binary terms, was defeated not only in philosophy, but in many fields of culture. To cite just one of many such moments, the Heisenberg uncertainty principle in physics showed that traditional notions of objectivity depended upon notions of observation and subjectivity which were deconstructed by the evidence provided by the very experiments which would have been used to shore up that distinction in the first place. And if the fundamental stuff of the physical world refused to be put in binary boxes which distinguish between subject and object, leading many previously binary-thinking scientists to refer to sub-atomic particles with terms (ie: intention, knowledge) previously reserved for subjects, then could we say that the physical world itself was, like Wilde or Lacan, performatively deconstructing the dual and binary presuppositions of the scientists? Could we then say that the world itself was arguing that this hard binary needed to be softened?

 A Detour via Mathematics

The realm of mathematics and mathematical logic, often considered the realm of pure thought, and hence, the other end of the spectrum of investigation of the world from the realm of matter, suffered a similar fate with the Gödel incompleteness theorem, which shook the foundational beliefs of the mathematical community in the early twentieth century, at nearly the same time as Heisenberg proposed his principle. Gödel showed that the foundations of mathematics were fundamentally non-dual, and that any binaries used to make fundamental distinctions which could give rise to dual or binary rules in math, including fundamental definitions of terms and rules, ultimately depended on nondualities.

Gödel’s argument was framed in regard to issues which arose in controversies in the logic of sets, and it is worth pursuing this line of reasoning for its semiotic richness in regard to that just analyzed in Wilde and Lacan. Mathematicians before Gödel had shown that it was possible to convert mathematics into the language of set theory, and by means of this, link many of the issues in symbolic logic with those of mathematics. And so, a number like “five” could then be redefined as the set of all things in the world of which there are more than “four” yet fewer than “six,” with each of these other sets defined in relation to the manner in which sets contained other sets. Ultimately, this shifted the focus, at least in regard to numbers, to two very particular sets, namely, those which were empty, and those which had an infinite number of things. The mathematician Gottlob Frege showed how it might be possible to think of the empty set, or the set with zero items in it, as the basis of number, with the set with one element that which contained only the empty set, the set with two elements that which had the empty set and the set containing that within it, or one, thereby allowing one to “count” two levels of sets within it, and so on, thereby generating all the numbers by means of this recursion. The hope here was that Frege had found a way to justify the logical coherence of the very notion of number which was foundational to the ability to do any mathematics at all.
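
To make this recursion a bit more tangible, here is a minimal sketch in Python (my own illustration, not drawn from Frege’s texts) of building numbers out of nothing but the empty set, with each new number defined as the set of everything constructed before it:

```python
# A minimal sketch of building numbers out of nothing but sets.
# Zero is the empty set; each next number is the set of all the
# numbers built so far (so "three" contains exactly three sets).
# Frozensets are used because ordinary Python sets are unhashable
# and so cannot be members of other sets.

def successor(n: frozenset) -> frozenset:
    """The next number: everything in n, plus n itself."""
    return frozenset(n | {n})

zero = frozenset()        # {}         -- the empty set
one = successor(zero)     # {{}}       -- contains only the empty set
two = successor(one)      # {{}, {{}}}
three = successor(two)

for name, number in [("zero", zero), ("one", one), ("two", two), ("three", three)]:
    # "Counting" a number is just counting the sets nested inside it.
    print(name, "has", len(number), "members")
```

In this picture, “counting” a number is simply counting how many sets are nested directly inside it, which is what the final loop reports.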

Gödel showed that this was ultimately a paradoxical enterprise. He developed another mathematico-logical language, and used it to convert logical statements about numbers back into numbers. And he then showed that when one reversed the process used by Frege, essentially transforming logic back into numbers from their conversion into logic, the result was undecidability and incompleteness in regard to the logical status of the statements produced by the very logic used to justify them.
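
The trick of turning statements back into numbers can be sketched with a toy encoding. The following is my own simplified illustration, not Gödel’s actual scheme: each symbol of a small formal vocabulary gets a code, a formula becomes one large number built out of prime powers, and factoring that number recovers the formula, so that claims about formulas can be restated as claims about numbers:

```python
# A toy version of Goedel numbering: each symbol of a formula gets a
# small code, and the whole formula is packed into one number as
# 2^c1 * 3^c2 * 5^c3 * ... using successive primes. Factoring the
# number recovers the formula.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
DECODE = {code: symbol for symbol, code in SYMBOLS.items()}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for toy formulas)."""
    candidate, found = 2, []
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def encode(formula: str) -> int:
    number = 1
    for prime, symbol in zip(primes(), formula):
        number *= prime ** SYMBOLS[symbol]
    return number

def decode(number: int) -> str:
    symbols = []
    for prime in primes():
        if number == 1:
            break
        exponent = 0
        while number % prime == 0:
            number //= prime
            exponent += 1
        symbols.append(DECODE[exponent])
    return "".join(symbols)

godel_number = encode("S(0)+S(0)=S(S(0))")   # "1 + 1 = 2" in successor notation
print(godel_number)
print(decode(godel_number))                  # round-trips back to the formula
```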

While this is all extremely complicated, the issue can be simplified a bit by returning to sets. Is the empty set, which is to say, a set with no items in it, truly a set? And what about the set of all sets, which is to say, the set that includes all other sets, up to infinity? Both sets are, for various reasons, paradoxical. The classic example is to question whether or not the set of all sets includes itself as a set. If the answer is yes, then it has just shown that it is not the set of all sets, because something must include it in turn, but if the answer is no, then it is not the set of all sets, because there is something it doesn’t include, namely, itself. The same procedure, however, can be done with the set with nothing in it. Does it lack itself? Answering yes is just as problematic as answering no. As a result, neither the set of all sets, nor the set of no sets (the empty set), truly qualifies as a set. And so, what then is a set, but the wrapping of two versions of the same paradox, namely, that of inclusion or lack thereof, around each other, and pretending that nothing strange is going on? The situation is quite similar to that indicated by Wilde and Lacan. Either the very definition of a set, as that which includes something else, is inherently problematic, as was that between truth and fiction in Lacan and Wilde, or one has to shift how one relates to this notion.
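
This sort of self-inclusion paradox can even be checked mechanically. The tiny sketch below, my own illustration, uses the closely related rule Russell found lurking in Frege’s system, that the “Russell set” contains exactly those sets which do not contain themselves, and simply tries out both possible answers to whether that set contains itself:

```python
# Russell's rule: R contains exactly those sets that do not contain
# themselves. So "R contains R" should hold exactly when "R does not
# contain R" holds. Try both possible answers; neither satisfies the rule.

for r_contains_r in (True, False):
    rule_satisfied = (r_contains_r == (not r_contains_r))
    print(f"Suppose 'R contains R' is {r_contains_r}: "
          f"does this satisfy Russell's rule? {rule_satisfied}")
```

Neither answer is consistent, which is the mechanical version of the “wrapping of two versions of the same paradox around each other” described above.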

And most of the mathematical community have chosen, in the terms used by Lacan and Wilde, to lie. That is, most mathematicians working in the field today describe themselves as one form or another of Platonist, which is to say, they don’t really question the notion that numbers “are real” in some sense or another. They believe in the truth of numbers, and when they use them, they believe they are “telling the truth” about some very deep aspect of the world. It would, however, be more honest to do what some small group of mathematicians after Gödel have done, and call themselves liars. For in fact, numbers are fictions, if very interesting and useful ones, built upon a fiction. And either this means that one tries to ignore the contradictions at the root of the mathematical enterprise, the Wildean/Lacanian notion of “truth-telling,” which is ultimately a form of lying to oneself, or, one admits that one is telling a lie, and hence, is ultimately more honest.

And more liberated. Because once one admits that one is lying, one doesn’t have to take one’s particular lies as seriously. One can shift lies, and play with them. Of course, truth tellers do this all the time, but they don’t want to admit this to themselves, they have to at least pretend, to themselves, that they are honest. But a true liar can lie to themselves and have a great time at it. The question, then, is why, and to what end? Is lying truly better than truth telling, and according to what standard?

Binaries on “Soft-Serve” and Deconstruction

If both mathematics and physics deconstructed the binary oppositions at the foundations of their enterprises towards the start of the century, against the frenetic protests of those in those fields, and despite the continued disavowal by many of those working in areas of these fields far from such limit-effects and foundations, then what might this mean for issues of language and thought? The dominant “ideology of thought” of the middle of the century was the notion that the brain was like a computer, and that thought was like a series of binary switches. This was, of course, due to the fact that digital computers were invented mid-century, and they were built from logic circuits which were themselves binary. This is largely due to the influence of the very same mathematicians and logicians whose developments in logic and set theory produced the crisis of foundations in early twentieth-century mathematics and physics, as described above.

But those who don’t go to the limits of a given discourse seem, in general, to be able to avoid having to deal with many of the effects of such issues. And so, the notion that thought is binary, on the model of computers, is still, to this day, often accepted as simple truth. And yet, the very history of the century seems to prove otherwise, just as the networked logics of the internet and other forms of computation beyond those of binary computers, from the fundamentally networked structure of the brain to experiments in “artificial neural networks,” indicate otherwise.

To give a sense of why it might be possible for those working in computing to more easily ignore foundational issues than those working in set theory, think of a picture frame. So long as one looks at the picture within the frame, one doesn’t have to deal with the fact that the picture is an illusion. This is similar to the border of a television or computer screen, or the “out of frame” of a film image. But as soon as one approaches the frame, perhaps leaving one eye on the image within the frame and the other on the frame itself or the space beyond, one can no longer simply disavow the constructedness of the image, its representational status, its fictive structure. What’s more, it becomes difficult to say whether or not the boundary is itself part of the world, or part of the imaging of the world.

The reason why this example works so nicely is that, as many visual theorists and others have argued, the filmic or framed image of any sort is in fact a set. That which is inside the frame is part of the set, that which is outside is beyond it, and the frame is the border which performs the action of including some aspects of the world, and excluding others. Deconstructing the binary, focusing one eye on the inside and one on the outside, draws attention to the frame, and the act of framing.
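
If a frame really does behave like a set, it can be written as one: a membership test over points of the “world.” The sketch below is a hypothetical illustration of my own, and the interesting case is the boundary itself, where inclusion is a choice we make rather than a fact the image hands us:

```python
# A frame treated as a set: a membership test that includes some points
# of the "world" and excludes others. The boundary is the awkward case --
# whether to count it as inside or outside is a choice, not something
# the image itself decides, which is the point of the frame example above.

def make_frame(left, bottom, right, top, include_boundary=True):
    """Return a membership test for a rectangular frame."""
    def contains(x, y):
        if include_boundary:
            return left <= x <= right and bottom <= y <= top
        return left < x < right and bottom < y < top
    return contains

frame = make_frame(0, 0, 4, 3)
print(frame(2, 1))    # clearly inside: True
print(frame(9, 9))    # clearly outside: False
print(frame(4, 3))    # on the boundary: True only by our choice
print(make_frame(0, 0, 4, 3, include_boundary=False)(4, 3))  # False under the other choice
```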

And this is part of what deconstruction of any binary distinction does. That is, by unravelling any binary, one is not only focused on the untenability of the frame to determine what it seems to determine, but a certain motion is performed, whereby focus is shifted from one thing to another. And by means of this, the very action of deconstruction, and conversely, of construction, comes into view. The deframing which occurs when one focuses one eye on the inside and the other on the outside of an image draws our attention to the original act of framing which often remains hidden, but which this act of deframing brings to the forefront. And so, we are confronted, not with two things, but with two actions or processes, framing and deframing, or, framed otherwise, construction and deconstruction.

All of the deconstructions mentioned here already, whether that described by Heisenberg, yet seemingly enacted by the quantum structure of “the world” itself, or that of Gödel, or Wilde or Lacan, are deconstructions of a binary which call attention, in the process of deconstruction, to the fact that the construction of the binary in question is itself an action. To return to truth and lies, the issue isn’t so much whether one tells the truth or a lie, or is a truth-teller or liar, but rather, the question of how one deals with these issues. Truth and lies, truth-tellers and liars, cease to be things, and the appearance of thing-ness shifts to that of process: there is only truth-telling and lying, and any sense in which there are “things” known as truth or lies, truth-tellers or liars, shifts depending on the actions taken in regard to other processes.

Things, then, are recast as processes, just as binaries are recast as continua. None of which is to say that there aren’t binaries and things, but rather, these are seen as derivative of the continua whose intertwining via processes gives rise to them, even as these processes are degrees of intensity of various continua. The world can then be seen, in a sense, as a weaving of such continua in relation to each other, a network, each of which provides context for the others. Things and hard binaries, then, could be seen as simply the most rigid aspects of such a process.

And here it becomes possible to see the manner in which deconstructions of various sorts undercut a particular binary, shift the terrain to the context which implicitly supports this binary, and then either continue to deconstruct, or stop. Stopping shifts the base binary from one term to another, while continuing ultimately deconstructs everything. And so, the first approach moves the discourse from one sort of absolutism to another, while the latter leaves one in skepticism or nihilism, at least, if one takes these logics to their extreme. Such a binary between options is itself binary, or in the terms I’ve been using, “hard.”

The “soft” option is that described in the preceding paragraph, which is a networked option. It is one which views the world as composed of processes which form the contexts of continua whose patterns of intertwining produce momentary sedimentations as things, which can then become reified and linked into binaries which seem absolute, but which ultimately will deconstruct if pushed. The question then isn’t, as with a “hard” approach to the world, whether or not something is true or false, because everything is then true or false in its way. The question becomes which things to leave hard and which to make softer, and why, in particular contexts. Rather than truth or falsity, with deconstructive unmasking as the tool for destroying the false in the name of the true from within, on its own terms, deconstruction becomes a tool to be used to transform things, and reconstruct them, in regard to issues determined at more encompassing contextual levels of scale.

And so, for example, in regard to mathematics, the question is not whether or not numbers are true or false. Of course they are false, like everything else, at least, if one views the world in terms of the hard binary of true and false. From such a perspective, then, we can say that nothing is ultimately true or false, but that everything, in its way, is both true and false. And so, the language of nondualism, as used in many forms of non-Western thought, seems often to imply two qualifiers worth making explicit, namely, the terms “ultimately” and “in its way.” These correspond, in logic, to the universal and existential quantifiers, two more of Frege’s inventions in mathematical logic. And they allow us to see what non-dual thought works to unravel, which is to say, “hard” use of binaries. Non-dual modes of argumentation are a way of saying that IF one wants to divide the world into two, absolute, exclusive categories, THEN ULTIMATELY, the world will resist, and give you something like NEITHER NOR and BOTH AND. Or at least, this is how non-dual arguments function in many non-Western texts.
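
For readers who want the logical version of those two qualifiers, here is a minimal sketch (my own, with a hypothetical list of claims) of the universal and existential quantifiers over a finite domain, which is roughly what “ultimately” and “in its way” are doing in the prose above:

```python
# "Ultimately" works like the universal quantifier (true in every case),
# while "in its way" works like the existential quantifier (true in at
# least one case). Python's all() and any() are these two quantifiers
# over a finite domain. The claims below are purely illustrative.

claims = {"2 + 2 = 4": True,
          "all swans are white": False,
          "this sentence has five words": True}

print(all(claims.values()))   # universally ("ultimately") true?  -> False
print(any(claims.values()))   # true "in its way", for some case? -> True
```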

 The Issue of Context and Framing

That said, not all non-dual texts, systems, and arguments are the same, for if they were, then Zen Buddhism and Vedantic Hinduism, for example, would be identical, and they aren’t. There are two primary differences here, the first of which is the context in which binaries are softened by means of deconstruction. Some contexts are simply more central to our way of relating to the world than others. And so, if one questions the binary between ketchup and mustard, the result is hardly earth shattering. Is this even a binary? Aren’t these more contraries within the category of condiments? Certainly they aren’t contradictories, like ketchup versus non-ketchup. But one could, if one chose, find a way to show that there are forms of condiment which are both ketchup and mustard, but also neither one nor the other. In fact, one could simply make such a hybrid condiment and be done with it. The effect of such a performative deconstruction of the category would be to shift the ground of the issue at hand. Rather than being seen as absolutes, as necessary and distinct, ketchup and mustard become seen as choices, the results of our actions, and our continual actions, to produce things which fall into these categories. We could, after all, produce the hybrid instead, give it a name, and create a new form of condiment, and this would, ultimately, instantiate some new binaries. But our faith in the necessity of the ketchup-mustard binary would be shaken, and perhaps our very notion of what it means to think about condiments and food. Whether or not this would shake up our way of categorizing the world is another story.

Nevertheless, to shake up any binary distinction is, in a way, to shake them all up, even if to differing degrees. Because if one can deconstruct a binary between any two aspects of the world, whether seemingly necessary (ie: black and white) or randomly assembled (ie: Batman and a fish), and whether by one’s physical action (ie: combining these and producing a hybrid, or finding a hybrid), or simply intellectually, in one’s head, such an action always shifts the ground to the context of the binary. That is, when we deconstruct the ketchup-mustard binary, we have to focus on the construction of our categories of condiments, and the choices we make in regard to these. Because we have most likely been born into a culture with a set of options of condiments, we might imagine that these are necessary and predetermined options, but when someone produces something like “ketchuard,” we may simply add this to our list of condiments and not think much of it.

Or, our world might be shaken. For the addition or subtraction of an element of any category, in the moment when it occurs, brings about a choice in all those involved, namely, whether or not the action is valid. And this draws attention to the context, which is the action of categorization which has been implicit all along. For when a change is made to the categories in question, the issue is pressed: one must choose either to continue to categorize the world in the same way, or to change.

And in this sense, deconstruction, as much as reconstruction, which is to say, the addition or subtraction of categories, calls attention to the process of the continual reconstruction of categories by action in the world. There is no category in our world which is not continually being reconstructed by all the agents involved. Whether we think of these agents as humans using language, or the world and scientists producing experiments together, there is a continual process of co-construction, and this process is multi-determined, from many sides. It is never the result of any single binary, but of webs of agents. While it is possible to split these networks into binaries, the world seems, at many levels, to resist the attempt to make these ultimate.

For in fact, the world changes, and is composed of agents of many different types, and they network together, and any sense of similarity or difference is ultimately relative to these aspects involved. Any attempt to find potential bases to make sense of this will founder, at least, so long as the world continues to seem to keep producing new things. And there is even reason to believe that the very stuff of the world, down to the realm of quantum foam or the structure of logic and math and words, has an aspect to it which finds final and ultimate reification, thingification, splitting, or binarization somehow against its fundamental structure. It is as if the very stuff of the world, whether considered in the abstract in language or math, or in the material form in physics, doesn’t like any sort of hard binary when taken to the extreme. This is why I feel it is necessary to have a softer view of the world, one in which binaries, including that between a thing and its context, are never taken as ultimate.

Grounding Terms: The Non-Duality of Lacanian Master-Signifiers

All of which is to say, the world we encounter is likely much softer than we realize. It is continually constructed and reconstructed, and each entity within this participates in this continual process of construction and reconstruction. The question then becomes one of choices. Does one want to continue to construct the world in the same way, to keep interpreting it with the same categories, by acting according to the same parameters? None of which is to say that any one aspect of the world can simply change everything. No, we are all connected, and so, one would only introduce the condiment “ketchuard” to the world beyond oneself if one didn’t merely make it in one’s own home, and hence disrupt only one’s own personal way of thinking and acting in relation to condiments, but also started to produce it on a mass scale, and found a way to introduce this notion and/or thing into the world beyond oneself.

And this is where the silliness of such an example shows its force. For changing the way people view condiments is a minor change, but it does give people, if in form rather than content, a reminder of the fact that the world changes, isn’t as “hard” as it seems, and that even if any one individual can’t change the world, that collectively and in relation to the non-human aspects of the world, there is a lot which seems quite malleable, despite the fact that our own need for security seems to want to forget that fact.

But as soon as one goes for a more momentous notion, such as “truth” or “god” or “the self,” people get all up in arms. Not all terms are created equal in all discourses, and not all terms are defended in words and deed with equal ferocity. Ketchup and mustard have their defenders, but few would likely kill to prevent ketchuard from becoming “a thing,” but people get quite excited when you go for a term which grounds their ways of thinking and acting in the world.

These “grounding terms” are what Lacan calls “master signifiers,” and for Lacan, they are places in which a culture stores the non-duality at the core of a given discourse. That is, the discourse of a given religion has certain grounding terms, such as “God” or “revealed truth,” just as the discourse of scientific inquiry, to choose another very non-arbitrary example, has its own grounding terms, notions such as “objectivity” and “reproducible data.” Each of these notions depends upon binaries which are treated as “hard” by the discourse in question, and these “hard” binaries are then used to organize others which are dealt with in softer fashion. And so, in science, some terms are negotiable, and no-one cries all that much when a shift is made between one term and another, but if one questions the very notion of objectivity, the ability to “do science” itself seems shaken. The same with religion. If one questions whether or not a given scripture is revealed or not, one may not only be questioning the stability of that particular scripture within the network of terms and practices, agents and objects, contexts and processes which is that religion, but also, if that scripture plays a grounding function in regard to that religion, then perhaps the entire relation of that religion to scripture as such. That is, the question becomes something like, hard or soft? And in a more general sense, in regard to the way in which that entire discourse relates to the contexts which ground it as such.

The World on Soft-Serve

And this is why ultimately the issue that is being raised here is one of values. A “soft” worldview is one in which anything can be deconstructed, and should be, if it gets in the way of, well, a soft set of values. Such a soft set of values can be constructed softly, in contrast to those which are hard. That is, instead of the hard distinction between “good” and “evil,” one ends up in a world of continua, of “better” and “worse,” and these are themselves determined in relation to the contexts in question. That is to say, better and worse are always ever determined locally, relativistically. But there is a basic, overall frame that unites these many standards, and that is the notion that hardness is ultimately worse. This isn’t to say that some uses of hardness may not be helpful in particular situations, but rather, that when the use of hardness becomes hard itself, there is likely a problem.

This issue of values then helps to determine that which is deconstructed and/or reconstructed within a given situation. Hard binaries are deconstructed, and the point of this is to call attention to the fact that the world doesn’t need to be this way. Granted, some aspects of the world will likely resist being deconstructed more than others, but this is because no entity is a hard and fast island, cut off firmly from the world around it. Rather, all is intertwined, networked. And it is only those hard and firm distinctions which create the fiction of isolated entities which choose to try to ignore this fact. The world ultimately resists this, and crises are the result. These crises can be opportunities to soften things up a bit, but they can also simply be painful parts of the continual attempt to hide all softness, and make things hard again.

There is then an ethics of softness, just as there is one of hardness. Hardness aims at control, protection, territory, boundary policing, and maintenance of the status quo, while softness is all about flexibility and local responsiveness. The former is top-down, the latter bottom-up; the former is about being “right” no matter the cost, the latter about what works; the former is about extremes, the latter about the middle way.

The Middle Way and the Deconstructive Abyss

It should come as no surprise that I bring up “the middle way” here, for this is the path described by the Buddha, and elaborated by one of his most deconstructive disciples, the Indian philosopher Nagarjuna. Many throughout history have questioned whether or not Nagarjuna was a nihilist, because in fact, he deconstructed the very terms whereby the Buddhist enterprise justified its own worldview. Give him any term, and he’d deconstruct it, in a manner quite similar to that described above, and with almost Greimasian precision. He called his primary text, however, the Fundamental Verses on the Middle Way. And he said his goal was to avoid the extremes of eternalism and annihilationism, or absolutism and nihilism.

Of course, the difficult question is what remains if one deconstructs everything. In a sense, the answer is everything and nothing. For if everything is deconstructed, then one returns, in a sense, to where one began, one’s progress is a circle, only everything looks different. If one deconstructs even the very terms one uses to articulate one’s deconstructions (for example, truth and lies, or hard and soft), one is left with the world from which one began, before one began to pull the thread and deconstruct the “sweater” of one’s surroundings, the network of terms and processes, things and contexts, which are in so many ways, one’s world.

But the difference is that now everything looks tentative. For one realizes that nothing has to remain the way it is, everything has at least some degree of leeway, which is the degree to which one contributes to the continued categorization and action, construction and reconstruction, of the webs of context within which one is always already situated as an interpreter and actor in one’s world.

Nagarjuna describes this with his notion of shunyata, or “emptiness.” He argues that everything is empty, and the term here is often also translated as void or illusory, but the word origin can be traced to the manner in which something such as a bowl or vase is “hollow,” which is to say, it appears as full, but ultimately, there’s nothing there but void. That said, bowls are extremely useful, and the void within them is only ever in relation to that which is not void, that which surrounds it. And this is why Nagarjuna pairs his notion with that of svabhava, often translated as “own-being” or “essence,” and this set of terms is often used together in the tradition influenced by Nagarjuna, to argue that everything is ultimately empty of own-being, or fixed essence. This is framed by Nagarjuna as simply a deepening of the notion, presented by the historical Buddha (at least to the best of our knowledge), of pratitya-samutpada, or dependent origination. That is, everything is networked, connected to everything else. And so, nothing is truly ever what it seems, because it is always the result of causes and supports, contexts and processes. And so, it can be deconstructed.

Until, that is, one gets to the whole. And just as was described before in regard to mathematics, the issue is ultimately how one relates to the whole, which is to say, the “big picture.” Whether one does this implicitly or explicitly, this is a question of values. What sort of “big picture” does one want? Deconstruction, and non-dual language in general, always refracts the discourse away from the binary in question, and towards the context which grounds the binary, in a way which calls attention to the processes of reconstruction which are hidden in the way such a binary serves to structure action and interpretation in a continual way in the present, often in a way which appears isolated from context and necessary. In fact, binaries often appear so necessary that they disappear from view, like when one looks at the world through a set of glasses and forgets they are there.

But deconstruction and reconstruction call attention to the fact that the world can always, at least to some degree or another, be constructed differently. But there do seem to be aspects of the world which, despite this, resist more than others. For example, it’s hard to get around the fact that aspects of the world deconstruct and reconstruct. But if one deals with these softly as well, one is much less likely to be shocked and surprised by the world, because one expects this to happen.

For in fact, a soft world is simply better, at least, from a soft-perspective on things. Those advocates of a hard world would likely say the same. Each can only ever be judged according to what they produce. Our current world system is quite hard, being based upon armies that protect boundaries and fortunes from the impoverished who seem to vote the same people into office anyway so that all can get the same products we all want. But it does seem to me that our world could benefit from a bit more softness. This is, however, an admittedly soft way of looking at things.

Values, which is to say, how one frames one’s relation to a given context or whole, are never able to be coherent, at least, not in a binary way. For values are about what one wishes to see, they are about the way the future influences the past and vice-versa in their way of framing the present. The very notions of binarity, like before and after, now and not-now, begin to break down. Because values are about what we wish would be true. They are virtual, in regard to what is actual. They are about hopes and fears, dreams and fantasies, rather than the way the world “actually” is. Nevertheless, they frame this actuality, and alter the way we act in the world, and so, change the way the world “actually” becomes.

And this is why I feel that a softer world would be better. Because framed between the hard options of all or none, having and not-having, full and empty, I think the world would be better off with some, rather than a firm divide between haves and have-nots. I think the world works better when there is a distribution of resources, potential, and ability to determine what is considered right and wrong, true and false. The only justification I can have for these notions is my interpretations of the past, present, and potential future, but these framings have no ultimate justification, because, ultimately, justification is a dual game, and I’ve decided I prefer to play that game soft.

Because the alternative is adhering fervently to a set of binaries, and defending them at all cost. Or, to follow the logic of deconstruction, once started, to its logical end. And here it becomes possible to see the manner in which hard fullness is not the only hard option, but also hard emptiness. A true skeptic, one who believes in nothing, and is a nihilist, is one who adheres to emptiness for its own sake, who deconstructs everything, world be damned, and leaves nothing to take its place. Such a person ends where they began, for the world reappears as it was, if fully deconstructed, but now it seems not only reconstructible, but meaningless.

This is the hard empty, and in its way, it is just as dangerous, and perhaps more so, than the hard full. Between the absolutist and the nihilist, the only difference is what they hold on to, the full or the empty, but not the form of their holding, which is hard to the core. And this is why the followers of Nagarjuna argue strongly against nihilism, and against the claim that their deconstructive method leads to anything like nihilism. For they argue that nihilism is what happens when one turns emptiness, or shunyata, into a thing itself. The middle way deconstructs everything, and gets everything back as potential, as an option, and avoids the fanatical purity of the nihilist, who leaves nothing for anyone, and tries to destroy for its own sake.

This helps explain why after the development of Nagarjuna’s Madhyamaka school of philosophy, another came about, often called the Yogachara school, which argued that the flip side of emptiness was in fact a new sort of fullness but one which was permeated by emptiness. This was a fullness which had all as potential, but nothing as necessary. This is a world which sees everything as virtual, and sees that as an opportunity to produce a better world.

And this is why one sees the discussion of compassion within all these forms of Buddhism, and in a manner which is ultimately not deconstructed, or rather, which is continually deconstructed and reconstructed anew. For complete deconstruction, taken in a sense which is hard, leads one to nihilism. And this means that truly anything is possible, but nothing is any more worthwhile than anything else. There is no reason, no argument, no justification, nothing to value.

The form of Buddhism in which one sees this tendency manifested the most strongly is Zen. Zen, or Ch’an Buddhism in the Chinese original, is nondualism taken to the extreme. And yet, it is not, at least according to someone like Nagarjuna, truly non-dual about its nondualism, for it adheres to this with a fanatical rigidity. And so, Zen verges on nihilism, on deconstructive fury for its own sake, that which Freud called “the death drive.” Zen is often described as wordless transmission of enlightenment. But any attempt to describe this is foreclosed, and with words, arguments, scriptures, the Buddha, and all guidelines for action seen as deconstructed, Zen becomes fully irrational, a relation between student and teacher, monastic institution and lineage and adherent, which is beyond words, concepts, explanation, justification, and values. After all the emphasis upon moral codes within Buddhism throughout its history, Zen deconstructs these, and spontaneity returns with full force. Teachers hit, burp, scream senselessly, or use non-sequiturs or “contradictory” or seemingly irrational stories as teaching tools.

And sometimes, the results are wonderful, and if the teacher is truly compassionate, and the monastic tradition and context “soft” enough, I have no doubt that this radical use of deconstructive embrace of emptiness can truly be reconstructive and liberating. But there is so little to hang on to, so little grounding, that it is so easy to slip into nihilism. It hardly surprises me, then, that Zen has a history of being embraced by warriors, including the samurai, and there are whole traditions of Zen swordsmanship and archery. These emphasize the deconstruction and reconstruction of the preconceived categories that limit the warrior. There is a softening, but for ultimately hard ends, namely, warfare.

Zen can be a completely peaceful way of dealing with the world, and often is, but it also has this history. The reason, I believe, is because it leaves itself so little ground to stand upon, because it embraces emptiness too strongly, and takes its non-dualism in a nearly dual or hard manner, which is to say, to an extreme. Zen can be compassionate, but it also doesn’t have to be, and this becomes a question of values. Since that is largely foreclosed by Zen, I will say no more about this other than, well, Zen is Zen.

Contextual Nondualism 

None of which is to say that other forms of Buddhism are necessarily better in this regard. I believe firmly that the difference between softness and hardness, which is ultimately one of values, lies in what one does, in the processes of action and interpretation, construction and deconstruction, and ultimately, reconstruction, and not in the things which remain behind, like words or objects. The meaning is always determined by context, and context is itself, ultimately, non-dual, at least, when taken to the extreme. That is, any attempt to ground any particular mode of action by recourse to context will ultimately trigger recourse to the context of that context, on to infinity. And at the limit, the issue becomes one of values, of desire and hope, of the relation of the virtual to the actual, of potential to the concrete. What sort of world does one wish to have, and what might be ways, based on past actions, to help bring that about?

If extreme nihilism is one danger, then the other is absolutism. And many, many forms of non-Western thought are quite selective in the way they deconstruct. If Zen seems to deconstruct nearly everything, and leave little left secure, most other forms of Buddhism, Vedanta, Taoism, and Confucianism leave certain key terms standing, and these then serve to orient and ground things. Confucianism embraces “the Way,” but refuses to question the value of the family and filial piety. Taoism is much more extreme in its universalization of “the Way,” and hence, has often been accused of nihilism, or of saying “nothing,” as has Zen. It is perhaps not incidental, then, that Taoist notions were appropriated by the Legalist theorists in China, the Machiavellis of their age, many centuries before the Italian was born, and Taoist notions were used to imagine ways for rulers to deconstruct and reconstruct the reifications of their opposition so as to undercut them. Sun Tzu’s famous Art of War uses Taoist-inspired notions, and has influenced generals and warriors for literally thousands of years.

Likewise, while the Bhagavad Gita was one of Gandhi’s primary sources of inspiration, it also was massively influential on some of the Nazi war criminals who committed some of the greatest atrocities of the century. This is because, as with Zen, the Gita is fundamentally an “a-moral” text. This is not to say that it is immoral, but rather, that it urges, at points at least, that one should throw away any and all restrictions in one’s devotion to Krishna, which is, ultimately, a representation of a principle of non-duality. The moral advice to Arjuna, the primary character in this work, is not to stop fighting, but to return to it, and is not to overturn social hierarchy, but to support it. There is a tension, however, because the text at some points advocates a support for the traditions of Vedic culture at that time (ie: avenge those who killed your family), while at others, it hints that devotion to Krishna is more important, and this implies a fundamentally deconstructive and reconstructive, context-based approach to ethics. Without anything more to guide this, however, this can be interpreted as ‘anything goes,’ so long as there is “no karmic debt” incurred, which is to say, so long as one does not engage in actions for their karmic reward, nor for the pleasure or pain they may bring.

This has been used to justify some very, very disturbing actions, and to provide a means of detaching from them. Deconstruction can be used, after all, to deconstruct and reconstruct things in a way which is very rigid, possessive, absolutist, and destructive. Deconstruction knows not compassion by itself, nor does reconstruction, for ultimately, the question is “for what,” which is to say, a question of values. If the value structure, the context which grounds the others, is nihilist or absolutist, then no matter how much deconstruction and reconstruction, it is done for “hard” rather than “soft” ends, and any softness is ultimately relative.

Capitalism, after all, is today a massive machine for the deconstruction of past certainties, leading some critics to describe it as a machine for “deterritorialization” and “reterritorialization.” Destruction and reconstruction in its own image is in fact precisely what capitalism does. Devourer of worlds: few terms could ever describe contemporary multinational, tentacular capital better, the “vampire squid wrapped around the face of humanity” which sucks the world dry for its own ends, for hoarded piles of numbers in bank accounts as goods for their own sake, no matter the cost. Even the production of goods is merely a means to an end; what it is really about is having more digits in the bank account.

And to what end? Capitalism defers questions of values, because ultimately, it is nihilist. And so, it appears revolutionary, and may have local liberatory effects from various solidifications which have become restrictive. But in the long run, its goal is to harden things into so many mirror-image copies of its own image, and turn the whole world into fuel. And when it has done this, it will eat its own body, and today, the crises of capital are the spasms of the world system eating itself. None of which is to say that capital will ever collapse; it has shown time and again that it is far too slippery for this. No, it recalibrates at the last minute, but with each recalibration, it tries to give away as little as possible to those who need it the most, all in the aim of its own hoarding and growth.

Traditionalist reifications are one danger, and hyperviolent nihilism, whether in traditional form, as seen in particularist aggressors throughout history, or in its much more sly, postmodern form in capitalism, is the other. But between absolutism and nihilism lies the middle way. Taoist philosophy and Zen tend towards nihilism and complete nonduality, while more ritualized worldviews, such as many forms of Confucianism, or the devotional strands of Buddhism (ie: Pure Land Buddhism) and Hinduism, tend to reify and limit the ways in which nonduality allows for flexibility. What has allowed all of these systems to evolve in regard to the needs of the people involved is that the deconstruction and reconstruction of grounding terms allows these worldviews to change. Nondualism can either help or hinder this.

And while many of these worldviews have non-dual aspects, there are ways in which this nonduality is contained. It is these non-deconstructed moments which indicate the aspects of these worldviews which are non-negotiable. Compassion, for example, for the Mahayana. And this is justified by recourse to the fact that deconstruction is seen as a dispersive strategy, and it is, but this is to miss the fact that dispersal can be put towards non-dispersive ends. And so, throughout its history, the dissolution of the traditional aristocracy and its wealth by Buddhist calls for compassion has led to the accumulation of fantastic wealth by Buddhist monasteries, and this destabilized those societies so much, particularly in Gupta India and T’ang China, that it had a hand in bringing down those empires themselves.

Any discourse can be misapplied, and any discourse can have its non-dual “soft” aspects linked up to others which are either hard, or used in “hard” ways. No term is more or less hard or soft than any other; it’s always a matter of the way the term is used in context. Likewise, no particular action is good or bad on its own, or even meaningful; it is all about the contexts and processes which endow an action with meaning, relevance, causes, and effects.

And this is why the middle way can never be a single discourse, and even calling it this is only ever a placeholder for a practice which must be continually vigilant, to maintain the ultimate softness of its ends. Because the world has a tendency to harden and soften which seems to feed on whichever is predominant in a given situation. Hardening and softening seem to self-potentiate. Growth spurts become rallies, and collapses can drag their contexts with them. 

No, the middle way is never a thing, it is a desire, one which must continually be recreated anew, continually reconstructed in regard to the contexts. One must continually weigh the relation between the situation in front of one and the contexts and processes of its construction and reconstruction, and attempt to find the soft-path. There are no guarantees. Softness is an ethics, not a thing. Likewise, no discourse is ever completely soft, but only so in context, and always only ever softening or hardening, along with all its aspects. Between hard and dissolving, softness is the middle way.

The one form of Buddhism I find most interesting in this respect is Tibetan Buddhism, and I have dealt and will deal with this form of nonduality extensively elsewhere. What I find most interesting is that Tibetan Buddhism builds upon the Yogachara insight that emptiness is the foundation for creation, and by means of this, it softened the traditions of the Bön religion in Tibet, and the warrior ethos of its people, and created a society which was Buddhist to the core. That said, this too was able to reify and hierarchize, to solidify.

What’s more, for all the talk of nonduality between subject and object within many, many non-Western philosophies, there is often a subtle emphasis upon the interior world. That is, one changes, through meditation, the way one sees the world, and this changes the way one acts. But there is rarely an emphasis upon direct revolutionary action, of changing the world itself. While some non-Western theorists urge this, such as Mozi in ancient China, by and large, change happens inside one, because one breaks down the barrier between consciousness and world, leaving only consciousness (Yogachara has been called the “Mind-Only” school, and has much in common with Vedantic Hinduism in this respect).

But if one were to truly take this argument to its conclusion, if the mind is everything, the world is also the mind. The binary would fully deconstruct. And so, to change the mind, one must change the world, not merely the mind. Now, many forms of Buddhism do change the world, and the institutions of Buddhist monasticism are evidence of this. But this reworking of the outside world isn’t oriented towards a better outside world, but only a better inside world, optimizing the outer world to help cultivate the latter.

And so, when the Dalai Lama says that the “East” has provided, in the form of Tibetan Buddhism, an inner science which can liberate the inner world, and the West the ideals of democracy and science and revolution which can change the outside world, I think his notion here should be taken seriously. The Dalai Lama has said that his worldview is Marxist, but not in the sense of the Chinese or the Soviets.

There must be a middle way between internal liberation and external liberation. And I feel that the non-dual insights of non-Western thought can be an essential help to the West in getting beyond its reification of the individual as the standard of all knowledge and action, as evidenced in various ways in materialist science and scientism, binary notions of thought and language, or capitalist possessivism. But likewise, Buddhism and attempts at inner liberation, such as psychoanalysis and therapy in the West, need to question their own reifications and binaries as well. But not towards nihilist ends any more than absolutist ones. Rather, the question is, what are our values? And why? What can help us give rise to a better world?

And as I would argue, how could we become soft, and find the middle way between hardness and dissolution? I believe that strategic use of nondualism can help this happen. And I think that a general orientation towards the nondual ground of any dualism is an essential part of this process. But if this becomes an end in itself, it too can be destructive. In the language of the Tibetan tradition, there is need for emptiness and compassion, for these are two sides of the same coin, and the resulting interpenetration keeps the process moving and quasi-stable, metastable. It keeps it readjusting to produce optimal softness.

Feeding Back Nonduality, From Virtual to Vajrayana 

I’d like to end on a discussion of feedback in living systems, and how this relates to non-dualism. All complex adaptive systems in the world make use of feedback to modulate their relation to their contexts. While a thermostat isn’t complex, it certainly is a simple feedback mechanism, for it adjusts the temperature of a house to keep it in an optimal middle zone. And it does this by means of feedback, which is to say, the temperature itself comes to factor into the setting of the processes which impact the temperature. Temperature enters more than once into the equation, which is why, in terms of mathematics, feedback tends to show up in equations in the form of exponents, leading these equations to be called “non-linear,” because they tend to give rise to curves rather than straight lines, and often “organic” seeming structures, rather than simplistic mechanical ones. Complex adaptive systems occur when these feedback processes take on a “life” of their own, such as the manner in which a whirlpool engages in constant feedback with aspects of its elements and environment. Living organisms are what happens when this becomes relatively self-sustaining. The evolution of life, consciousness, self-consciousness, and all we know, love, and value is the result of feedback building upon itself, and doing so non-linearly, as sketched below.
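
To make the contrast concrete, here is a minimal sketch, with illustrative numbers of my own choosing, of two kinds of feedback: a thermostat-style rule, where the state is fed back linearly and settles into a middle zone, and the logistic map, where the state is fed back with an exponent and, at a high enough growth rate, never settles at all:

```python
# Two kinds of feedback, sketched side by side (illustrative numbers only).
# 1) A thermostat: the temperature is fed back in linearly, and the system
#    settles into a middle zone around the setpoint.
# 2) The logistic map: the state is fed back in with an exponent (x appears
#    squared), and for a high enough growth rate the behavior never settles.

def thermostat(temp, setpoint=20.0, gain=0.3, steps=10):
    history = [temp]
    for _ in range(steps):
        temp = temp + gain * (setpoint - temp)   # feedback: temp enters its own update
        history.append(round(temp, 2))
    return history

def logistic(x, rate=4.0, steps=10):
    history = [x]
    for _ in range(steps):
        x = rate * x * (1 - x)                   # feedback with an exponent: x * x
        history.append(round(x, 4))
    return history

print(thermostat(10.0))   # climbs smoothly toward 20 and stays in that middle zone
print(logistic(0.2))      # wanders without settling -- non-linear feedback
```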

Studying the world, it becomes possible to learn what develops life and all that makes it good, and to try to experiment further to increase the quality and quantity of this goodness as sustainably and complexly as possible, for the greatest number. Scientists have studied these sorts of systems rather extensively, and their findings seem to indicate that diversity, distributedness, feedback in various forms, and energetic flows which are meta-stable, just a little more than the system can handle, tend to maximize growth, in all senses of these terms, and in relatively sustainable ways. When any of these predominate, however, the system leaves the middle way between the many strands of these networks of factors, and imbalances are generally the result.

It is for these reasons that I think these are some very real values which can help us link any particular with the whole. So long as we don’t reify these, and keep using them softly, keeping the eye on the middle way, in all senses of this term, even up to and including questioning any and all aspects of this process. With one exception. Namely, that that which leads to life, and which makes it better, needs to be the source of our value. And ultimately, in a sense, this is always the case, for all values, and even the ability to value, come from life. But we often take particular aspects of life and raise them beyond life. Even Buddhahood, or revolution, can become idols which interfere with any ability to give rise to these very things. And this tends to occur when we reify one aspect of the world as more valuable than the value which is within any and all, and which can give rise to the best within any and all, so long as it keeps its eyes always on the attempt to do what is best, not only in regard to itself, but in regard to the any and all.

Such an ethics is nondual. It doesn’t dispense with the self, because it is impossible to make the world any better if one simply dispenses with oneself and one’s own needs. But it attempts to continually deconstruct and reconstruct the terms of any situation in an attempt to situate it in regard to the values of the whole. And the whole is always beyond, always only present in part, always, in a sense, soft. Not empty or full, not here or there, not present or beyond, not changing or the same, but fundamentally, nondual. But this nonduality cannot reify itself into something or nothing, it must be kept moving, and this is why the word nonduality is only ever a reification of what nonduality is actually about. There is no word for this, and yet, contra Zen silence, there are many. And each is better than others in regard to a particular situation. Of course, this is what a Zen koan seeks to indicate, but by dispensing with explanation, my sense is that it leans too much to the side of emptiness, without emphasizing the fact that people tend to need something to hang on to as well, some luminosity and compassion, creation and recreation, in addition to mere emptiness and deconstruction.

From these notions it becomes possible to tie nondualism, along with deconstruction and reconstruction, back into not only the semiotic issues described at the start of this essay, but also the issue of life, which is to say, the world from which semiotics like those of language emerged. In living systems, feedback is how a system alters the way the balance within itself relates to the balance it maintains with its environment.

Feedback shows up in mathematical equations generally as exponents, and in the graphs those equations produce by the ways in which complex systems, such as those which are alive, tend to be described more by non-linear, curved trajectories than by the straight-line paths which tend to better describe mechanical systems. Of course, put enough mechanical systems into play with each other, and the results can quickly go non-linear. In fact, mathematically speaking, the famous “three body problem,” a celebrated problem in the history of mathematical physics, showed that it only takes three simple, mutually interacting systems to produce a non-linear one, which is to say, a system whose behavior cannot be predicted in advance from the parameters which composed it.
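A standard toy illustration of this (a textbook example I am borrowing, not anything specific to this essay) is the logistic map, where the quantity being tracked feeds back into its own growth and so acquires an exponent, and where a small change in one parameter tips the system from tame to unpredictable behavior:

```python
# Logistic map: x -> r * x * (1 - x). Because x feeds back into its own growth,
# the update rule contains x**2, the exponent that makes the system non-linear.
def logistic_step(x, r):
    return r * x * (1 - x)

# r = 2.8 settles into a steady value; r = 3.9 bounces around chaotically.
# (These are the standard textbook parameter values, used here only for contrast.)
for r in (2.8, 3.9):
    x = 0.5
    trajectory = []
    for _ in range(60):
        x = logistic_step(x, r)
        trajectory.append(round(x, 3))
    print(r, trajectory[-5:])
```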

It doesn’t take all that much for the basic stuff of the world to ‘go off’ track and lead to unexpected results. In fact, all it takes is three things interacting, in the context of a fourth which provides a flow of energy, and things are likely to get out of hand. Two things, however, so long as they are mechanical and predictable, even when in an environment which provides a similar flow of energy, remain predictable and linear. Binaries tend to reproduce themselves, and so do triads when anchored by binaries. But when triads arise in the realm of fourthness, they tend to produce the unexpected, and binaries may try to rein things in, back to neat triads or even neater binaries or unities, but rarely are able to do so fully.

If one examines individual equations or graphs, separate from the systems in question, however, one only sees exponents or curves. And when the curves hit an inflection point, one only sees singularities. When mathematics and geometry take an approach which filters out context, the role that feedback plays in generating these factors becomes obscured, and is often only restored by relinking what reifying approaches segmented.

The same can be said of language. Individual words reify, as do binary distinctions. But humans only use individual words and binary distinctions in relatively artificial situations, removed from the ebb and flow of sentences in motion. And yet, these hypostatizations are privileged by so many philosophers and linguists as paradigmatic examples of how language produces meaning. The individual signifier, for example, in the history of linguistics, and its accompanying signified. Of course the world will seem reified and binary when viewed from such a perspective.

However, when context and process and pragmatics are restored, as so many post-structuralist and other critics did with the structuralist linguistics which dominated mid-century, the rigidity begins to vanish. And in hindsight, it becomes evident that structuralism was a Manichean ideology which reflected a Manichean time, the time of the Cold War, as well as of the binary computer. But even beyond the Internet, by means of artificial neural networks and many other advances in so-called “soft”-computing, even models of computation are beginning to go non-binary. In fact, binary computation now seems, in hindsight, a severe restriction on where the future of computing might go. The brain, as well, is a fundamentally networked structure. Why then would we think that thought or language would be binary, linear, or composed of reified building blocks which are assembled like machine parts?

But what are some of the implications of this for semiotic theory, and the study of language? If exponents are the way in which feedback shows up in mathematical equations, and curves, including those points where a curve folds into itself at a point, are how feedback shows up in geometry, then nonduality is how it shows up in language.

For example, let’s return for a moment to the issue of the “set of all sets,” and the related notion of “the empty set.” These notions were fundamental in the self-deconstruction of the notion of the set, and with it number and axiom, which were part of the foundations of mathematical logic in the early part of the century. If we consider a set as a word, however, and think of what it includes, namely, an element, as a set as well, then it becomes possible to see this in binary terms. That is, a set is something which includes an element, that is its definition, and this process of inclusion is binary, in that an element is either included, or it is not. And so, something either is an element, or it is not, “E” or “not E.” The notion of a “set” is a meta-category which includes E, and excludes non-E. The set of all sets is that which both includes E and not E, while the empty set is that which includes neither.
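As a rough illustration of how binary this notion of inclusion is, and of where it runs into trouble, here is a small sketch in code (the particular sets E and S are arbitrary stand-ins of my own, not anything from set theory proper):

```python
# Membership is binary: an element either is in a set, or it is not.
E = frozenset({1, 2})
S = frozenset({E, frozenset({3})})

print(E in S)             # True: E is included in S
print(frozenset() in S)   # False: the empty set is not an element of S

# But the "set of all sets" turns this binary question back on itself:
# does the set of all sets that do not contain themselves contain itself?
# Either answer implies the other (Russell's paradox), which is why naive set
# theory cannot treat such a totality as simply one more set among others.
```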

And so, when a Yogachara thinker argues that any entity is both itself in a relative way, and yet ultimately void, and hence, is in a state of thusness which is both an entity and void, yet also neither, this is a mode of arguing which is not all that different from that presented by the foundations crisis of mathematics at the turn of the century. The difference, of course, is that the foundations crisis wanted to avoid this sort of deconstruction, while Yogachara embraces the deconstruction of Nagarjuna and his Madhyamaka school of “emptiness.” And yet, it also goes through these, because it affirms the notion that everything is ultimately void, but not relatively so. The question then becomes referred to context, and the values that allow one to articulate the local and the global in relation to these values, which for Yogachara, entail the path to liberation and the attempt to foster compassion beyond the self/other duality.

In the process, any particular reification is seen as ultimately tentative in relation to the contexts and processes of its production, and the particular choice of any entity is linked into modulation with larger wholes. And as such, none of what is going on is dealt with as ultimate or reified. Every aspect is rather composed of networks of others, and the linkages between and within any of these are up for grabs.

To say that something is “neither/nor and yet both/and, and yet neither of these” is to deconstruct the binary in relation to the contexts and processes of its production. And yet, this is always within a network of other notions. Deconstructing a binary can lead to its replacement by another, or by its use in a much more tentative way, or by the reconstruction of the whole landscape of terms around it. What a nondual argument does, however, by deconstructing terms, is to put them into play, to engage in a process of unmooring them from their contexts, and ultimately, referring them to the contexts and processes around them. Nonduality makes us aware, in and through binaries and the reifications that go with them (ie: this thing is this, and not that), that there are networks of other possibilities, and perhaps infinite ones, within any reification, around that reification, in the linkages between, and at multiple levels of scale. This is suchness, the mixture of emptiness and fullness, which the Tibetan Buddhists refer to as vajra being, the being which refracts everything and nothing, which is the very substance of everything, that which has its true essence obscured by anything in particular, but which has the potential to become anything and everything, depending on the contexts and processes in which it is transformed.

To continue with the Tibetan Vajrayana approach for a moment, only emptiness, which is to say, the deconstruction of reifications, binaries, and other solidified formations tied to these, can reveal the potential luminosity of the Vajra-being, the essence of liberation and pure potential, within the very fabric of the world. And only emptiness can make sure that this potential doesn’t solidify again in turn. And yet, to reify emptiness is to miss out on all the potential for growth and transformation, in relation to which emptiness is only the pathway. Emptiness is a means, not an end, and what it is a means towards is liberation, not merely of the self, or the inner world, but also the outer world, and others. This is true compassion, to liberate oneself through one’s world, and one’s world through oneself, self for others and others for self, world for self and self for world. Getting stuck in either fullness or emptiness, deconstruction or reification, is ultimately a strategy which misses the power of the soft, or, to use the vajra term, the adamantine that is harder than all that is hard. Because true softness can cut through the hardest of rigidities and reveal the massive potentials lurking within, and help unleash them for a better world.

And this is why, I believe, Yogachara and the schools which flow from it, such as the Tibetan schools, are on to something fundamental about how the mixture of emptiness and luminosity, the vajra of Vajrayana Buddhism, produces a seed of liberation which can bloom anywhere, within the subject or object, and ultimately anywhere. This seed, spoken of in Yogachara as the buddha-embryo or buddha-womb (the word is the same in Sanskrit, tathagatagarbha), is the seed of liberation which is everything, and can be developed into pure liberation. And yet, most of us don’t see this, and this is why the world manifests normally as distinct, reified, binary things. But once we liberate how we see the world, by means of the liberation from fixations which the crucible of emptiness, of deconstruction, provides, we see that anything and everything is possible. This is why in Tibetan Vajrayana Buddhism, one starts tantric meditations from pure emptiness, creates a visualization of a quality one wants to foster in one’s life, then breaks this down and dissolves it back into emptiness. The goal is to both further one’s detachment from the reifications of one’s everyday world, yet also increase one’s sense of freedom and possibility in relation to it. This is a virtual reality practice in reimagining reality.

The issue I have with this, however, is that it remains all within the self. Since all is mind to Yogachara, changing one’s mind is changing the world. But if one doesn’t change the world, one can only go so far with changing one’s mind. And this is why, I believe, there are limitations to the forms of liberation which are available here. That is, Vajrayana, as powerful as it is, doesn’t deal with what it may take to recreate the world beyond the mind. One could, of course, argue that the Buddhist community is precisely what the world turned into a mandala, or sacred diagram of a Buddhaverse, might look like. And yet, this is a vision of the world based on what liberation of the mind within a particular socio-historical context would look like if the world were then recreated in that image. I find myself wondering, what might this look like if this process were then reapplied recursively, and back to the world?

This bright, luminous potential is what Gilles Deleuze calls the virtual, and what, in my philosophy of networks, I call the matrix of emergence. It is that which is beyond any one, and yet that of which any one is an aspect, and so, it is ultimately, oneand. When matrix, or emergence, is reified, it gives rise to rigid structures, but the potential for emergence remains within these, waiting to be unleashed by less rigid forms of networking. First there must be deconstruction of reifications to allow this potential to emerge. But then there also must be reconstruction, so as to foster more sustainably complex, which is to say, more robust, more emergently emergent, forms of emergence.

Feedback is one of the primary ways in which this happens. Feedback is the process whereby systems readjust their relation with the world so as to emerge more robustly. It is only when systems are reified from the world around them that they cease to engage in feedback, and this results in rigidified ways of acting which lead to crisis either within or without, but often, in mirrored forms, both. Feedback is non-dual, for it describes the manner in which boundaries are crossed, whether those boundaries mark interior or external distinctions. It doesn’t nullify distinctions, but it perpetually readjusts them in relation to contexts and processes beyond them, yet which provide meaning for them. These contexts and processes evaluate boundaries and reifications in regard to the values of the system in question, which are those which determine its actions. Reifications, boundaries, linkages between these, and the readjustments of the processes which determine, maintain, transform, and modulate these are what feedback mediates. Feedback is in fact the process which links these all together, and hence, is a crucial part of the process of emergence.

Nondual modes of argumentation are attempts to loosen binary systems from within them. When the binaries do not disappear, but remain, this means that an attempt is being made by the discourse in question to both maintain and dissolve a distinction, which is to say, to modulate the way a particular distinction plays out in its processes of linking and delinking from particular micro and macro formations within various contexts and processes. That is, the larger contexts and processes at work are attempting to maintain relatively dynamic and fluid relations between parts of the system in question.

We see this in living systems all the time. For example, the mouth is both inside and outside of the body. And the body goes through great efforts to maintain this state of both/and and neither/nor. Continual processes of feedback are needed to make sure that the mouth both is and is not either within nor without the body. And so, the body feeds back on itself to maintain conditions which aren’t too wet, nor too dry, not too much of this enzyme or that. Too much of either extreme, and the mouth would stabilize on one side of the binary or the other. And that would prevent it doing what it needs to do, which is to say, abide in between, taking the middle path.

Linguistically, this is difficult to describe, because individual words tend to reify, and so do binary distinctions. But the mouth existed long before language even evolved. Feedback precedes language. And non-duality is simply the way in which some of its aspects show up in particular ways of using language, and in particular, those in which there is an attempt to keep some binaries in place, yet in a situation which modulates their relation in between that of others. And so, a person who lives in a world of objects, yet continually questions their very fabric and relation to others, is a person who lives in a world of objects, and yet also lives in a world of no objects and all objects. Which is another way of saying, they exist in a relation of feedback between the processes within and beyond them which give rise to these objects, maintain and transform them, such that all of these are seen as tentative.

Language is ill-equipped to describe these states, particularly from within reified perspectives. The discourse of mathematics, for example, has a very, very specific set of filters through which it sees the world, and so when it comes upon sites of feedback within those aspects of the world it is attempting to describe, it hits limit effects. Like the manner in which a microphone will screech if there is too much feedback, or a house will overheat if there is too little, feedback is needed to keep systems going. And yet, from within particularly limited perspectives, it may not register as what it is. In fact, it often shows up as a gap or disturbance within what otherwise seemed orderly, such as exponents in otherwise linear equations, or curves in their graphs. These ‘messy’ points are points of junction between systems, and hence, are points of instability within attempts to order their interactions. And ultimately, any stability we see in our world is the result of some process of trying to make order out of disorder. Realizing that the world is much more complex is what getting beneath surface manifestations is all about.

But this requires a perspective which can link what others tend to reify. That is why my approach, the networkological approach, takes networks as its model, for what I am trying to do by means of this is to deal with what it truly means to understand relation. And from what I can tell, this means to also be related to the worlds within and without which reifications tend to obscure.

Grounding terms, the terms within discourses which localize the contradictions of that discourse, are those which, for Lacan and other theorists of language, describe what is both within those discourses yet beyond them. God is a notion which is everywhere in religious discourse, but if you try to use only religious discourse to justify why the world needs a notion like God, the result will either be tautology or contradiction, which is to say, one either will produce circular arguments, or have to go beyond religious discourse to justify religious discourse. Mathematics ran into the same problem with set theory at the start of the century, as the set of all sets, a quantitative attempt to formulate something “like” God, could either justify itself completely circularly, and hence produce no justification, or not at all. Justification, after all, is a form of linkage whereby one entity grounds itself in another which serves as its context. Math tried to ground itself in logic, and then Godel showed that the results were ultimately problematic, and that this grounding was simply a shift in terrain. The problem remains the same. One has to deal with questions of value sooner or later.

The choice of grounding terms, which is to say, the terms within a discourse which one deconstructs less than the others, and hence, which anchor the others in networks of structures of usage, definition, categorization, and so on, are choices which underlie the values which make that discourse work. And ultimately, these are chosen because these terms allow for forms of action, since uses of words are themselves forms of action, which sync up ways of speaking and writing and non-linguistic forms of acting. If the resonance works well enough, the grounding terms are seen as grounded, which is to say, “justified.” The process we call reasoning and argumentation is simply a part of this ultimately only semi-linguistic process, one which is both within and outside of language, as well as neither nor. That is to say, there is a relation of feedback on both sides of the boundary between language and its others, because that boundary is continually being renegotiated and recreated. It hasn’t outlived its usefulness, so we keep that boundary around, and yet, we need to keep our relation to it relatively fluid. When we get too close to the boundary from within the domains which it allows to function, we get something like the effect of a microphone too close to a speaker, and yet, if the microphone is too far away, the speaker won’t be able to modulate what they sound like when reproduced. Feedback must always be optimal, it must be in-between, because it serves to maintain that which the in-between allows to manifest.

Mathematics, from a set theoretical perspective, exists in the in-between which set theory describes by means of the notion of inclusion, the enabling parentheses which allows set theory to function. Likewise, it is the fact that words both are and aren’t things that allows words to do what they do so well, which is to say, to represent the world without being the world, while being related to the world in a way which is relatively fluid.

If non-Western forms of thought have tended to question the self-other binary, most Western schools have assumed the isolated, monadic individual as the foundation of all knowledge and action, at least since the birth of capitalism and modern science. We are living in a moment, however, in which our very capitalist modes of production and the very science we have given rise to are producing formations which are calling this model into question. It has perhaps outlived some of its usefulness. We are starting to see feedback and limit effects which are screaming for a remodulation from within. Of course, perhaps this has always been, to some extent or another, the case. Individualism, private property, the nation-state, these reifications have always been hyperviolent and overrigid. But the limitations are now starting to be dismantled, along with binary structures in a more general way, by even the powers that be as they search for more flexible formations. The modern Western individual is beginning to unravel.

This is an opportunity, a moment of feedback in which it may be possible to remodulate things. To learn from earlier formations, like pre-modern Western formations, as well as the models which have prevailed in non-Western formations, which never reified the individual as extremely as in the West. That said, these non-Western societies have tended to reify social hierarchy much more so than the West, and many of the philosophies in question have been much more focused on liberating the inner world rather than the outer, which is the reverse of the capitalist West. Both sides can learn from each other.

The trick, of course, is to not get stuck in the frenzy of deconstruction any more than the reification of the old ways, or the slick postmodern capitalist synthetic hybrid which deconstructs traditional reifications, only to reconstruct things at higher levels of complexity for equally rigid ends, and in a perpetual yet ultimately self-defeating cycle thereof.

From a networkological perspective, all these are networks composed of networks. Any reification, or any binary linkage of these, is ultimately a network, within contexts composed of other networks, and at multiple levels of scale. And the stuff of which these networks are composed is matrix, oneand, or emergence, that which is both itself and beyond, and not anything in particular, for it is the stuff of which any particular thing is an aspect or grasping. It is fundamentally non-dual, for it feeds back into itself, which is to say, it contains itself infinitely, yet more intensely in some particular manifestations in which it is more intensely networked with aspects of itself, thereby giving rise to the greater potential for more intense forms of networking. Energy is simply one aspect thereof, as is matter, space, time, subject, and object. And so, the networkological project is grounded in a non-dual manner, even if it makes use of particular reifications strategically to intervene in contemporary debates. Such an approach, called “skillful means” by the Buddhists, links construction to deconstruction to a goal and ethics of robust emergence, emergent emergence, which is for any and all, beyond subject and object, me and world, for it is that from which we come, and yet more abundantly. It is identification with the Deleuzian virtual, as a social and personal praxis of liberation.

Such an approach finds its only justification in what it desires in the world, in relation to what it desires to see, yet in regard to the context beyond all context, the non-dual context which is the only possible grounding for any aspect of the world, and for which even a notion like matrix, or emergence, is merely yet one more partial reification. All of which raises the question, of course, of what we want the world to become, in regard to this potential of what it has been, in relation to where we are from within this. All of which is to say, we need to engage in feedback, but massively so, such that any and all reifications are seen as tentative in regard to the widest possible context, which modulates the relation of any and all to each and all. And since this is ultimately that of which each and all are refractions, which is to say, empty of particular being but having the potential for any and all, if only relationally in regard to the whole and the potentials it has in relation to each, the question becomes what we want to become. And it would seem that liberation from limitations, ever more free from reification, but in a manner which is sustainable and ever more intensely liberatory, would be that which is at least resonant with not only the structure of each and all, but also, with liberation from reification. There is a form of circularity here, of course, but with a difference, and that difference is what makes the difference, for it is difference as such, in and beyond any particular difference. For more on these notions, see my essay on “the widest possible context.”

From the perspective of Buddhism, however, this would be the realm of suchness, of the pure interpenetration of luminosity and void, and yet, beyond the realm of mind alone, and beyond the duality of subject and object, which is to say, a Marxist Buddhism, one which aims to recreate the world in the image of liberation, and recreate the mind in the image of a liberated world. Such a notion is necessarily local, for it refracts the notion of liberation in relation to any and all in a way in which the parts and wholes exceed each other infinitely in their mutual emergence.

Language and mathematics, subject and object, these are so many reifications of the fundamental fabric of what is, the refractions of which give rise to the world we know. Reification of this process into reflections of the same gives rise to rigidity, and yet, this is only ever local. The question is how to help the world emerge from its reifications without getting stuck in new ones, and yet also without fully dissolving in dissolution. The middle path, liberatory emergence, and fundamental non-duality, in and beyond all dualities and reifications, which ultimately, are necessary, if in soft form, for any and all emergence to arise in the first place, and to be able, by means of feedback, to emerge from itself in the process. From such a perspective, feedback can come to be seen beyond the reifications which shatter it into aspects which are ultimately less than comprehensible. Only from such a perspective do both duality and non-duality appear as aspects of each other, which is to say, the contexts of the largest possible context, and that of each and all, the matrix and fabric of the oneand, that emergence of which any and all are a part.

Nettime: The Philosophy, Science, and Culture of Networked Temporality (Extended Version)

•May 5, 2013 • 1 Comment

(This post was originally written in two installments, Section 1 first, then Sections 2 and 3.)

1. What is Time?

What is time? Surely time can be simple, as measured by clocks of various sorts. Distinct rhythms of a pendulum, or changes in number on a digital clock. The predictable movement of something that goes back and forth, an oscillator which covers a repeatable distance of space each time. But if we define time this way, we use notions like “repeat” in our very definition, presupposing that which we are attempting to define. Or perhaps, as suggested by famed theorist of time Henri Bergson, we are simply spatializing time by defining it this way. Clocks, after all, change physically, and this isn’t time, it’s space. To imagine time as the movement from one moment to another, like “beads on a string,” is a spatial model.

Nevertheless, space and time are inextricably linked. It always takes time to cover an expanse of space, at least in the everyday world, and whatever takes up time seems to also occupy space. Whatever time is, it seems bound to a notion of space, even if the relation between these is anything but simple. Speed is simply the rate at which we cover space in time, converting one into the other. It can take me three hours to walk across town, or ten minutes by car. Inversely, endurance is simply the manner in which space is occupied by the same thing over time, and this indicates for us that, in relation to other endurances, something has “occupied” space. A stone occupies space, for me, at least, when it appears the same in relation to what’s around it for a period of time. This appearance continues, “repeats” itself, even when I close and open my eyes, or try to mash another stone into it and realize they won’t blend, even as coffee and milk seem perfectly happy to cohabitate in space, even if they displace each other a bit, while different colors of light seem able to overlay and blend and share space with hardly a problem. The displacement or occupation of space is always relative, and not only to the maps of occupations and displacements which are a spatial layout, but also in regard to time, for occupation and displacement, of objects or appearances, always happens in relation to time.

Models of Time: Philosophy, Science, Mathematics, Literature, Film, and Everyday Life

In the history of philosophy, definitions of time abound, and with this, it becomes possible to list off differing notions of time, the Augustinian philosophy of time, the Hegelian model, the Bergsonian model, the Deleuzian model. Within the history of science, there are also named models of time, such as Newtonian time, Minkowski time, Einsteinian spacetime, quantum spacetime. The time of Newton is similar to that of “beads on a string,” and yet, because it involves calculus, with its capacity for infinite division, the beads can be of any size, surely like physical beads on a physical string in physical space. With Minkowski, the time of physics began to compress and stretch, and with Einstein, time began to warp in relation to gravity, the famed “theory of relativity,” which introduced such new notions as “curved spacetime,” perhaps better visualized as “scrunched” or “expanded” spacetime, into physics.

Mathematicians, of course, had already begun to imagine such notions, and these seemingly unreal formulations were influential on the physicists who found more concrete applications for them. Riemann’s notion of quilting together spaces of various scrunched or expanded types to produce a monstrous “Franken”-space, a patchwork of geometries, each of which would experience time differently in relation to these spaces, paved the way for Einstein. As did the work of Felix Klein, who famously realized that just as painters had been converting four-dimensional space and time into flat two-dimensional depictions for centuries, so there were ways to convert forms of space into each other by transforming and warping them, turning a sphere into a circle and an ellipse or back, simply in regard to the perspective one took on them. In fact, we often transform spaces and their shapes into one another simply by walking around them. All of this happens in and through time, space is never devoid of time, and vice-versa, and Einstein built upon this, giving rise to the stretching and bending spacetime spoken of by relativity theory. Quantum physicists, building upon this further still, describe a world in which spacetime is even stranger, permeated by jumps and fuzzinesses of various sorts, in which it is possible to either go back or forward in time, or act in ways which are fundamentally indistinguishable from this.

Beyond philosophy and science, there is also the time of other disciplines, the time described by historians, ethnographers, sociologists. There is also the time described by literature, so many types of narrative time. And narratives aren’t only present in fiction, but also arguments (“if A and B, then C”), jokes, political narratives (“this war is different from the last one”), economic narratives (“this crisis was caused by this or that”), therapeutic narratives (“my parents help explain why I’m this way”), or the various other types of narrative structures we use to help us structure our lives. Or consume for pleasure in so many works of art. Language is itself fundamentally temporal, verbs producing transits between nouns, in regard to so many qualities and connectors, all produced by grids of symbols of various sorts that we arrange and rearrange in space and time like so many bits of a hypercomplex game whose stakes are often the very stuff of reality.

Beyond language, however, there are many ways in which we can bring the time within us into resonance with various aspects of the world around us. The time it takes to walk through a building, for example, in which one can walk faster or slower, loop back to where one started. Or subway time, whereby slices of an urban landscape are sutured by voyages of varying speed and directness within looping underground passageways which seem like so many virtual voyages into other dimensions. Or the time travels of filmic narratives, which by means of narrative conventions such as time-travel, can loop and bend.

If the time outside us seems relatively stable in relation to a variety of spatial layouts, however, our lived, “internal” time often seems the strangest of all. Memory flashes us backwards in time and permeates our present in varying degrees, even as anticipation, really the futureshock of our past projected into our future, permeates our past and digs within it for useful memories which it then throws in front of us, permeating our present from the other side. Our future and present are saturated with the memories we use to frame and imagine them, just as our past is always organized and sifted through by means of the fantasies we have about future and present which help us organize our imagined future actions, hopes, and dreams. Separating past, present, and future in lived time, the time inside of us, often seems a paradoxical enterprise at best. Philosophers and mystics have long wondered whether or not the past really exists, or the future for that matter, as we never seem to “really” get to either, we live in what seems like an eternal present. And yet, this present is so full of past and future, memories and anticipations, hopes and fears based on those experienced previously, do we ever get the pure present? It vanishes, much like the past and future do. All seem unreal when you focus on them, as if time was only ever where you weren’t looking. And yet, mystics through the ages have countered that it is possible to expand time by meditating on this eternal present, to expand it beyond time and space, to reach eternity within each and every moment and fragment of matter or space.

Taken to its extreme, inner, lived time begins to sound almost as strange as that of the physicists or mathematicians, microcosm refracting macrocosm or vice-versa. Then again, the physical world seems pretty stable unless we stray far from the “normal” conditions of the everyday, while lived internal time seems normal only until we pay attention to how strange what seems “normal” to us actually is. Either way, the notion of time is used to describe both of these, as aspects of the same thing.

Is Time a Word? The Linguistic Argument, and Beyond

Perhaps then the issue is with language, perhaps the most complex creation of humanity. Some philosophers have gone in this direction. Our language reifies, which is to say, “thing-ifies” whatever it describes. Words fix the flux of the world into static snapshots which don’t actually correspond to the much more labile conditions of the world beyond it. The useful fiction of words perhaps distorts or even creates what we experience as time. Nouns are perhaps the worst culprits, at least verbs are somewhat more honest, and adjectives allow us to imagine aspects things share despite space and time, while connecting words just do the dirty work of bringing these all together and putting them in motion. And it is in the motion that we rediscover the time killed by nouns and other less guilty words, the motion of producing and consuming sentences, and getting around the deceptive periods which separate sentences like so many false idols of space within time. Books spatialize time, then, perhaps as much as clocks, or films. Or bodies, which localize time within these lumps of moving flesh, and curl it up within these meat-computers we call brains, who then produce things like words which segment the world into words and then reassemble them to produce a parodic representation of the world beyond it.

But language certainly can’t be the only culprit. Films are also guilty, they slice the world up into snapshot images which are reassembled into moving images, warped reassemblages which resonate with the time of the world, yet are fundamentally distinct temporal creations. Our everyday lives are then equally suspect, as we slice the world into bits, like so many moving cameras we move our perspectives around, dicing up the world from our own points of view, and then reassemble them in the fuzzily warped and edited storehouse we call memory. And if, as scientists argue, our present and future are threaded through with this highly suspect memory archive, then our present should hardly be trusted, it is ultimately a personal language of sorts, whose letters and words are the memories we use to help us recognize, describe, and re-present the present experiences we filter and categorize before we even realize we have done so. Perhaps the very notion of an ego is simply the deepest such memory-word we know, the “I” around which our language of experiences congeals.

Maybe this is all because we have bodies which warp our experiences, turning moving light particles into sight, moving air particles into sound, translating our sense-data into memory-recognitions, and all in relation to our evolutionary heritage which biases us to look for certain experiences over others. Whatever time or space we ever experience is ultimately the result of the way in which our biological evolution evolved us to experience it, in ways which it felt were most likely to help us survive. And if our culture, our films and our words and so much else, were created from this foundation, might they not be simply more complex warpages of the world, inheritors of the biology which evolved us with its own agendas? Of course, biological evolution is only one level of complexity, the physical world had to “evolve” up to the point at which it could “evolve” organisms, and the difference between complex physical systems and living ones seems ultimately only a matter of degree. A whirlpool seems to have a “life of its own,” and to “want” to continue whirling the way it does. To say this isn’t proto-life is like saying that organisms aren’t hyper-matter. It’s all a matter of degree, or perspective.

Either way, if time is ultimately a word, and words are biased distortions of the world beyond us, this should hardly be reason to stop there and call it a day. There are so many levels of distortion, why fetishize language? Our bodies distort, our brains distort, our sense organs distort, our evolution distorts. It’s all distortion, all the way down. Or translation and creation, depending on how you see it. Matter distorts, and perhaps is this very distortion of some primordial energy, or something deeper still, as scientists believe that matter and energy are simply differing sides of the same. Perhaps space and time simply are distortions then too. Space, time, matter, energy, all distortions of some deeper matrix.

But matrix of what? Space, time, matter, and energy, these are abstractions of our experience, which seems only ever filtered by our bodies, brains, psychological biases, cultural biases, the list goes on and on. Perhaps the universe is little more than a set of translations of experiences into each other, and matter, energy, space, and time are simply the terms we use to organize the most stable of these, at least, as the world appears to us.

Is Time Real? Fantasies of Idealist and Materialist Notions of Time

Perhaps, as some have argued, it’s all a simulation, like we see in films such as The Matrix (1999). Perhaps studying film, or virtual reality, isn’t such a strange place to go to study time. Whatever time is, it’s as much there as it is in matter or energy. For even our most indubitable experiences, whether personal or shared with others, are only ever known as our experiences. Even if I perform a science experiment, and a community of scientists verifies it, it could be a dream, or I could be one of the famed “brains in a vat” which philosophers sometimes imagine. It could all be a simulation. And there is ultimately no way of knowing, when I see a bunch of scientists verify my experiment, that they aren’t all part of a dream or simulation. Perhaps there are glitches that might give it away, but even these could be parts of a larger simulation or dream still. This is why some scientists have argued that our universe could be one enormous simulation, a holographic projection, and they have even tried to develop experiments that could test if this were the case. But what then would be the difference between virtual and physical reality? Should we care?

Likewise with the physical world. Even if I only ever experience it through my own experience, the aspects of my experience that seem shared with others, which is to say, the so-called “physical” world, even if it’s not really there, even if other people are simply figments of my imagination, seem so stable and follow such predictable rules that they can be treated as if they were “real.” In fact, even if they are an illusion, what difference would this make, so long as my whole life were this illusion? Of course, even if we were to learn that the whole world of our experience were a simulation, then we could start to wonder if the machine producing the simulation weren’t also a simulation of some deeper simulation.

Such an infinite regress occurs as well when it is not idealism taken to its extreme, but materialism. If all is matter, then some of this matter gives rise to illusions, images, like our sense experience and dreams. But perhaps this is just how matter feels other matter. Our brains experience our sense organs, which experience the matter of the world, it’s all matter all the way down. And thoughts then are just how our brains, which are matter, experience each other. Perhaps then experience, including that of sensation, thought, and feeling, is simply how matter reacts with other matter, and how this is experienced from the inside. Perhaps then all matter, including molecules, feels other matter in some very simple, primordial way, and when matter gets more complex, it feels more complexly, and human thought is simply this.

Idealism has difficulty accounting for the physical world, and yet, taken to its extreme, idealism deconstructs itself back into the physical world, or cuts the cord to reality entirely, an impossible situation and/or infinite regress. Likewise, materialism has difficulty accounting for the inner worlds we experience, and seems on the verge of arguing that inner experience is impossible, or it pushes it into ever smaller and more distant realms of matter (ie: the body, the brain, the prefrontal cortex) in what is ultimately an infinite regress verging on the soul. No wonder so many of the most materialist scientists find that there’s a need for a ghost in the machine. For taken to its extreme, materialism ultimately deconstructs, hits paradox or infinite regress, or turns into its opposite, namely, a world in which all matter must have something like experience, even in simplest form.

And yet, even though materialism and idealism both deconstruct, perhaps this isn’t the worst place to be, for since experience is all we have really ever known, perhaps matter and appearance are sides of each other, which is to say, of experience, which is all we ever, well, experience. Space, time, matter, and energy, these all seem aspects of experience then as well. The experience we share is called the physical world, that which we don’t is our “inner” world, but it’s all appearances of varying degrees of stability. Those appearances which appear the most stable we call “real,” and those which are less stable are “merely” appearances, but since it seems there’s no firm way to draw a line between these, these are perhaps differences of degree.

A Matrix of Experience Beyond Binarity

Perhaps we can start from here, from experience, which is all we have ever known. Any experience we have ever had of a world beyond us, or of other experiencing consciousnesses, is only ever an aspect of our experience, which isn’t merely our experience, but also the world. These are two sides of each other, like two sides of a sheet of paper, inseparable. We can’t imagine the world but through ourselves, and vice-versa, and each, like materialism and idealism, ultimately deconstructs the other, or gives rise to paradox, infinite regress, or some sort of fuzzy or oscillating mixture of these. One can either try to ignore this, and cling to ultimately relative notions like “self” or “world,” or embrace this, and realize that self and world are interdependent notions, aspects of each other, and of the more encompassing situation of which they are aspects, and which is all we have ever experienced.

Let’s call this grounding situation “experience.” From such a perspective, “my experience” would be that most fundamental aspect which seems unique to me, and those aspects which seem, from within “my experience,” to exceed it somehow would be that of “the world,” of which the experience of “others” is a part. For there do seem to be experiences beyond mine, as attested to by the reports of other experiencers, even if I only ever access those through my experience. “Experience” as such, then, would be the term used to describe the seemingly larger whole of experience of which mine is an aspect. My experience would then be an opening onto experience as such, included and including it, as paradoxical as this might seem to more traditional forms of logic. Whatever logic there is in the world, it seems to derive from this, so if we want to call it paradoxical, so be it, the foundation from which logic emerges is paradox, such that paradox would ultimately, then, be the foundation of logic, and not vice-versa.

Space then could be seen as the most stable general network of shared experiences among experiencers. For example, if I move an object, and my friend sees this, we both see the object moving, but also the world of experience around this staying stable in relation to the moving object. The greatest stability within this seems to be what we call space, the invariant network which underlies and organizes that which is common to the experiencers who experience within experience. While this may warp and bend according to gravity, and ultimately, acceleration, as the experiments used to ground relativity theory seem to show, perhaps I would have differing experiences than another experiencer. And yet, a third party would be unable to tell which of us is having the “correct” experience of space. Space then would be that within experience which seems to give rise to all these experiences of space by various experiencers.

All of which shows why it makes sense to argue that there needs to be something producing all our particular experiences within experience, and why experience is still ultimately only ever the experience of experiencers, such that perhaps experience as such is an abstraction from the experience of experiencers, a projection of these, an ideal assemblage of all the experiences of all experiencers. This helps explain why the term experience is worth retaining, because there has to be something which relativizes these experiences, in regard to which they are “only” experiences, which is to say, if there were nothing underlying or producing these experiences, it would be redundant to call them “mere” experiences. But this is hardly the case, because experiencers don’t always have the same “external” experiences, and while these issues can usually be resolved by a third party, this isn’t always the case. But if we examine further the distinction between “internal” and “external” experience, this issue gets fuzzier still, for these are also merely aspects of the same, a question of degree. Is the experience of “my” eye the same as “my” experience? What about that of “my” brain? Is the world “mine”? Or my “ego”? Like “self” and “world,” these notions too will deconstruct.

Likewise with that between a particular experience and experience as such, or between experience and that which produces it. But the slippage can be at least partially stabilized by allowing all these notions to be relative to the context which produces these, such that they cease being reified notions, and work more as positions within networks of aspects of a whole which always exceeds the sum of its aspects.

From such a perspective, it’s possible to speak of experience as the ideal extrapolation of all the particular experiences of experiencers. Each experiencer has a “world” of experience, and the sum total of these, greater than the sum of its aspects, is “the” world, the ground of experience as such. The world would then be within all worlds, yet always in excess of any, aggregate, and all, for it seems this world is always changing, surprising us, and hardly capturable by all worlds, even in the aggregate, similarly to experiences and experience as such.

In fact, it seems that any particular aspects of the world, or series of these, is always exceeded by the world. This seems to be the fundamental quality of the world of experience itself. Let’s call this “matrix” or “oneand.” It is matrix because it gives rise to the world and experience, and is present in any and all aspects thereof. And it is “oneand” because it is always in excess of any attempt to reduce it to any reified unity. Matrix, or oneand, would then be the very stuff of the world of experience itself. Any and all aspects of this would be only aspects thereof. Any segment, discrimination, unity, binary, quality, motion, concept, term, self, world, or anything else, would only ever be an aspect of matrix, or oneand, which is grasped in each and all experiences, and is that of which experienced, experiencer, experiencing, and experience are composed as so many of its aspects. Matrix, or oneand, is beyond whole and part, container and contained, or any other binary distinction, as well as beyond any unitary description, such as experience or appearance, or even attempts to be described by notions such as matrix and oneand. These two names, placeholders and useful representations at best, are simply two aspects of this fundamental stuff.

Matrix, or oneand, is that which is beyond any and all attempts to grasp it, even if present in aspect within all of these. To use the language of many Asian philosophies, it is nondual. That is, in regard to any “a” and/or “b” which could be said about it, or any other set of statements or chaining or nesting thereof, it would be neither a nor b, both a and b, neither “neither a nor b” nor “both a and b,” and both “neither a nor b” and “both a and b.”

All of which may seem nonsensical, or useless, irrational, illogical, or paradoxical, or whatever terms one might want to apply to this sort of thinking. Perhaps quasi-religious, or mystical, or deluded. But the logic behind the argument which brought us to this place is hopefully apparent. Logic and argument ultimately find their foundation in something ultimate and paradoxical like this, or are limited fictions. The irrational, paradoxical, useless, nonsensical, these are part of our world too, only aspects of the whole of which its parts are only ever that.

What’s more, science and mathematics are increasingly tending in such a direction. Early in the twentieth century, both physics and mathematics had a “foundations crisis” in which they began to question their most basic presuppositions, and the results unsettled the seeming foundations of both. In physics, relativity theory and quantum physics demonstrated that any attempt to “reify” any aspect of our world gives rise to what, to ordinary thinking, would be paradoxes, such as incommensurable relative experiences, or uncertainties so uncertain that it’s ultimately impossible to determine if it is the subject performing the experiment, or the very substance of the world, which is uncertain, such that the very distinction between these seems to begin to break down. Physicists are still attempting to deal with the fallout from the “uncertainty” at the heart of relativity and quantum physics. Whether their interpretations of the data take a subject-oriented, epistemological tilt (ie: the Copenhagen Interpretation), or a more substance-oriented view whereby it is the world which has this uncertainty within it (ie: the Bohmian interpretation), or rather opt for infinite regress (ie: Many Worlds interpretations), these are ultimately aspects of the same, which is to say, the manner in which, for whatever reason, it seems that the experience of the world, when pushed to its extremes, will deconstruct, turn into its opposite, produce infinite regresses, or otherwise resist extreme reification, and the concomitant binarization of inside and outside of a reification which always comes with this.

In mathematics, the situation is hardly different. Around the turn of the century, mathematicians attempted to see if math could be used to “prove” its own assumptions. And this led to paradox, infinite regress, or aspects of each, depending on how you interpret this. The issue was, in short, whether or not the “set of all sets” could be considered a set. That is, whether or not the most encompassing way of talking about the world, the “set of all sets,” could itself be considered an aspect of the world or not. If yes, then there must be something which could encompass this set, a yet more encompassing entity, for any set could always be a member of another set, thereby leading to infinite regress. But if it wasn’t, then the “set of all sets” was incoherent, a set that wasn’t a set, or a new type of set, one which fundamentally recast what it meant to be a set, for it paradoxically had a sort of infinite regress as part of its very definition, that which, according to what it means to be a set, would make it not a set. Contradiction, inconsistency, or incoherence, these were the options. And this led Kurt Godel in 1931 to prove, using the tools of the mathematics of set theory, that set theory was at its base one of these three, depending on how you wanted to frame the issue, and that there was no way to get around this and still be doing the mathematics of set theory. And the results were generalizable from set theory to the rest of mathematics, at least to an extent that the results of Godel’s proofs destroyed any attempt to search for the foundations of mathematics in anything resembling this way. From here, the search for foundations was in something, well, more slippery, paradoxical, and relative, in ways which uncannily parallel those in physics.
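For readers who want the compact, canonical version of the paradox being gestured at here, Russell’s formulation (a standard one, not anything specific to this essay) can be written as:

```latex
% Russell's paradox: let R be the set of all sets that are not members of themselves.
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```

Either answer to the question “is R a member of itself?” forces the opposite answer, which is the contradiction/inconsistency/incoherence trio described above in its sharpest form.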

Beyond Reification

All of which is to say that the notion of matrix, or oneand, in the manner described briefly in the preceding sections, as the all of which any is composed, which is beyond reification, whole and parts, self and world, and yet that of which these are aspects, is resonant with the findings of math and science. That is, no matter how one interprets the data of relativity and quantum physics, data which have been reproduced and checked to such a degree as to be accepted unquestionably by the scientific community, the fundamental stuff of our world functions something like what I’m describing as matrix or oneand. Likewise, the foundations of mathematics require something like a “set of all sets,” or a “number larger/smaller than others,” of which all others are aspects. If science is a form of materialism, and mathematics a form of idealism, then both deconstruct their own foundations much like their philosophical cousins, and are faced with paradox, fuzziness, or infinite regress. In the language of mathematics, the options are incoherence, inconsistency, or incompletion, while in the language of physics, they are the various attempts to explain away uncertainty (ontological Bohmian approaches, epistemological Copenhagen approaches, or Many Worlds approaches). Ultimately, the three options within a given field are aspects of each other, and between and amongst these disciplinary views on the world, so many lenses on experience, these too are aspects of each other. In fact, the foundations of any lens on the world seem to run into versions of this trio in one form or another, whether these lenses focus on inner experience or the physical world, or any other way of slicing up experience.

Matrix resists ever being turned into a one, and so is oneand, and any attempt to reify or reduce it to a one will result in these limit effects, the ways in which the oneand will always manifest within ones, yet never be reducible thereto. In fact, if there seems to be anything which limits matrix, it is only its ability to be any and all ones, so long as these are not exclusive, and do not try to reduce any aspect of oneand, or oneand itself, to a mere one, even if this one is the all. As such, matrix is necessarily beyond one and many, part and whole, a and b, yet it is that from which all these notions, and in fact all experiences and worlds, derive, that of which all are aspects, and each aspect is the whole, if in its own way, for aspect and all are simply aspects of the oneand which is beyond such a distinction.

Some Precursors: Hegel and Schelling 

These ideas, while resonant with the forefront of physics and mathematics, are hardly new, even if they haven’t previously been described in this form. The notion that any aspect of our world must be an aspect of that which is within any and all aspects, a sort of “set of all sets,” was described by German philosophers, often called Idealists, in the early nineteenth century. F.W.J. Schelling spoke of an Unconditioned, or unlimited, that which was the ground of any and all conditioned, which is to say, limited, entities. G.W.F. Hegel built upon this further, saying that this Absolute was that of which any aspect of the world was a part, including concepts, things, persons, experiences, history, and the world itself. The basic thought here is actually quite simple. Any part of the world has to be a part of the whole of the world, which is always more than the sum of these parts, even if present in some way within all of them, and never reducible to any of them, because it is what is beyond them and gives rise to them. Without such a notion of the whole beyond any whole, paradoxes emerge. For example, what was before our universe, or where did our universe come from? Such questions lead to infinite regress, paradox, or inconsistency.

And so, one can ignore the paradoxes, or see them as part of one’s description of the world, and in fact, as the fundamental ground of any and all descriptions of the world. Any descriptions which don’t admit, include, or somehow take this into account are dishonest partial descriptions, and those which do are fuller or more open descriptions. But all are limited descriptions, because these paradoxes seem unavoidable, fundamental, and don’t seem to go away. Whether we ignore them or not, they seem to be part of the fabric of the world. Might as well try to work with them, rather than continually be surprised when they frustrate our attempts to control and manage the world in various ways.

Hegel and Schelling were hardly the first to have these ideas, however. Both argued, each in their own way, that “the Absolute” was fundamentally non-dual, which, to use the language of Hegel, means it is “speculative,” beyond the limits of “picture-thinking,” the term he used for thought which attempts to reduce things to fixed representations. The Absolute is beyond the limitations of language to describe it, and any notion of concept we use to grasp it has to be beyond the simplistic notions of logic we use to grasp less complex aspects of our world. And so, for Hegel, “the Concept,” which can perhaps be translated most accurately as “the Grasping,” takes the shape of the Absolute, not the other way around. Any simpler ways of grasping aspects of the world are then only limited aspects of our grasp of conceptuality, which, in its fullest form, is fundamentally non-dual.

Similar notions, namely that binary, dualistic thinking is a simplification of the more fundamentally non-dual, non-binary thinking which is needed to understand the more fundamental aspects of the cosmos, are much older than the nineteenth century. Hegel, for example, was influenced by the mystic Jakob Boehme, amongst others. In his later years, Schelling increasingly looked for the origins of his notion of the Ungrounded in various world religions. And there is much in common between notions of God as present in many theologies and this notion of the Absolute or Ungrounded. Isn’t God, whatever this term might mean, at least in theory supposed to be outside of time, space, world, subject, object, experience, language, and thought, and yet present in any and all of these, as that which is always beyond any and all, yet cause and even ultimate purpose of all of these?

Of Physics and Mathematics: The Time of Singularity

While it may seem that this is simply the pathway towards irrational mysticism, it is important to note that a similar notion, without the theological trappings, has been a part of mainstream science and mathematics since the early twentieth century, about the time of the foundations crises. One could even see this notion as a result of these crises, as what they produced. This notion is that of “singularity.”

In physics, “the singularity” is the term most commonly used to describe that which gave rise to “the Big Bang” which began our universe. The notion of the singularity is itself paradoxical. Physicists know that as any entity approaches the speed of light, its space and time condense, and this is also what happens as any entity approaches a “black hole.” A black hole is an entity whose gravity and density are so great that it compresses space and time, and matter and energy with them, to something like infinity. The reason we don’t know if it truly ever reaches infinity is that it seems impossible to “reach” infinity (is it a place or time that is reachable?), but also because any method we have to investigate black holes can only proceed so far before the very forces of the black hole itself would either destroy the observation device, or severely warp any signals it could send us, since even light cannot elude the grip of a black hole once it gets close enough.

What’s more, the mathematical formulas which scientists use to model the behavior of black holes, the same mathematical formulas used to describe the behavior of the rest of the physical universe, which normally produce excellent predictions of phenomena, cease to be of much use the closer one gets to a black hole. They tend to go to extremes, either towards infinity or zero, and ultimately, these are in many situations sides of the same. If their measurements of time or space, matter or energy, go infinite or go to zero, these are ultimately simply differing ways of looking at the same thing. Infinite energy would destroy anything not it, but since it was infinite, unless this infinity came in several degrees (and would it then still be infinity?), it would be uniform, and hence, in relation to the various aspects within it, would have zero difference from itself. And since energy is always a relative measurement (ie: something has energy if it can do more work than something else, and no difference means no “useful” energy), infinite energy would ultimately be the same as no energy.
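
For readers who want one concrete instance of equations “going infinite,” here is the standard special-relativistic factor governing how time and energy scale with speed (textbook material, added only as an illustration of the point):

```latex
% The Lorentz factor diverges as v approaches the speed of light c:
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad t' = \gamma\, t, \qquad E = \gamma\, m c^{2}
% As v -> c the denominator goes to zero, so gamma, the dilated time t', and the
% energy E all blow up towards infinity: the formulas cease to give usable answers.
```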

When mathematical equations bottom out like this, particularly in situations that otherwise provide coherent answers, but which, when taken to an extreme, reach such intensity that the physical quantities cease to make sense, this is what mathematicians refer to as a “singularity.” A simple case can be found if you try to divide a number by zero. Division asks which quotient, when multiplied by the divisor, gives back the number you started with. But any number times zero is zero, so if the number you started with isn’t zero, no quotient works at all, and if it is zero, then any number whatsoever works equally well as a quotient, and equally gets you nowhere. Any number isn’t quite wrong, because any number is as equally wrong or right as any other. Which is to say, math ceases, in this case, to function as math. This is why mathematicians refer to the answer to this question, and those like it, as “undefined.” This is different from subtracting five from five, which simply gives you zero. When physical equations give you zero where that answer makes no sense, go to infinity, or come out undefined, this is what is meant by a “singularity.”
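
Put in the standard elementary form (again, an illustration only):

```latex
% "a / 0 = q" would require q * 0 = a.
\begin{align*}
  &\text{If } a \neq 0:\quad q \cdot 0 = 0 \neq a \ \text{ for every } q
    &&\Rightarrow\quad a/0 \text{ is undefined.} \\
  &\text{If } a = 0:\quad q \cdot 0 = 0 = a \ \text{ for every } q
    &&\Rightarrow\quad 0/0 \text{ is equally ``solved'' by every } q.
\end{align*}
```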

In the history of math, these sorts of results were often treated as quirks which simply had to be worked around. But as the various branches of mathematics, such as algebra and number theory, began to link ever more closely with parallel aspects of geometry, it became clear that these strange results in equations lined up with the strange parts of the figures and shapes they could be used to describe. The center of a sphere, since it is not included in the sphere yet is in a sense present in all its aspects, if indirectly, is sometimes described as being a part of the sphere “at infinity.” Likewise, when a line intersects itself, it gives rise to contradictory results in the equations which describe the line, points which aren’t merely undefined, but rather, singular within the shapes and figures those equations describe. These points are indeterminate, within more than one space, time, equation, or attempt to grasp them, at the same time. They are one, yet more, which is to say, oneand.

Singular points in equations line up with those in figures, and those in figures with those in the world they are used to describe. And so, many of the equations of relativity theory break down at black holes. Likewise with quantum physics. In fact, the very notion of a “particle” in quantum physics is a fiction. A complex process of mathematical juggling is necessary to make the results of the equations and experiments become “particle-like.” This process, known as “renormalization,” essentially reifies the results, makes them “normal” enough for scientists to work with. All of which is to say that, at least according to the findings of contemporary physics, the closer we get to trying to reify the ultimate fabric of reality, the more it seems to “resist.” For this reason, many physicists don’t even believe it is possible to have “nothing,” for even the void of space seems to contain “vacuum energy” and swarms of “virtual particles” within “quantum foam.” And no one knows what happens in a true singularity, like those present within black holes.

Some physicists feel that what appears as a black hole to us is the singularity which, on its “other side,” can or does give rise to another universe. Perhaps singularities are like pumps, inflating one universe with matter and energy from another, and the universe beyond the universe, the “multiverse,” is actually a “Swiss-cheese”-like affair of universes laced into each other by these points of singularity, not unlike the geometric shapes, lines, or equations which intersect each other in geometry and algebra.

And if space and time seem to condense and scrunch infinitely as one approaches a black hole, then if we run the equations which describe the universe as we know it backwards from the earliest evidence we have of the Big Bang, which scientists call the CMB, or Cosmic Microwave Background radiation, we hit a singularity, which is why scientists and mathematicians, as well as theoretical cosmologists, refer to this point which gave rise to the Big Bang as “the singularity.” This entity would be that which gave rise to matter, energy, space, and time as so many aspects. This is why it makes no sense to speak of time or space before the Big Bang, unless in a fundamentally different sense. For in some sense, if time and space “unfolded” from the singularity, can we even say that the singularity “exists”? The very word “existence” implies that something has an independent reality. “Ex” is the Latin prefix for “out,” seen in English words like “exit” or “exterior.” That which humans, including scientists, refer to as existing is something which is the way it is independent of our desires, dreams, hopes, fears, and wishes, and in a manner consistent across space and time.
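
In the standard cosmological models (again textbook material, offered only to illustrate the point), this shows up as the universe’s scale factor shrinking to zero when the equations are run backward, with densities diverging accordingly:

```latex
% Distances scale with a factor a(t); for ordinary matter, density scales as its inverse cube:
\rho_{\text{matter}}(t) \propto \frac{1}{a(t)^{3}}, \qquad a(t) \to 0 \ \Rightarrow\ \rho \to \infty
% Run backward, the scale factor collapses toward zero and density and temperature
% diverge: the initial singularity from which the Big Bang unfolds.
```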

If there is no space and time “in” the singularity, or rather, if all space and time are always already included within this inclusion which is beyond exclusion, can we really speak of it existing? Or rather, can we speak of ourselves as existing? For in a sense, it is only the singularity which exists, and our existence is but a fiction, as fictional, ephemeral, and “unreal” as dreams or hallucinations. Then again, none of us have ever actually experienced the singularity, and because of the laws of physics, we never could, for we’d be obliterated if we even tried to approach it. So perhaps it is the dream or fiction. Either way, it seems to be both the fiction and the foundation of contemporary math and science, that which provides the basis for the very equations of physics which describe the most real things we have ever experienced.

All of this is more reason to feel that the fundamental stuff of our universe is fundamentally nondual. Existence and non-existence hardly apply to the singularity or its products, for these are ultimately only aspects of it which are only ever partially and relatively applicable. Sense and reality as we know them break down at the singularity, and yet, it is the foundation of all we have ever experienced, including notions like reason or logic. And so, the foundation of sense is nonsense, the foundation of logic is paradox, the foundation of reality is fantasy, and yet, we can only ever know this by means of using the tools provided by sense, logic, and reality. The very argument deconstructs itself, such that it is possible to say that all we experience is neither nor yet both fantasy and reality, logical and paradoxical, existent and non-existent, sense and nonsense. The structure repeats with uncanny regularity. And this only indicates more powerfully why the notion of matrix, or the oneand, can be seen as that of which these are all aspects, so long as we keep in mind that the very naming and conceptualization of this notion is itself only an aspect thereof.

Whether or not we call this notion “the singularity” or “God” or “matrix” or “oneand” is perhaps irrelevant; what matters is how this notion changes our thinking, and how we act, speak, and relate to the world around us. As Gregory Bateson famously argued, information is a difference that makes a difference. And if this notion doesn’t somehow make a difference to and for us, then perhaps it is no notion at all.

Is This Theology? Ethics, Science, Philosophy?

The similarities between this notion and that of “God” as described in many devotional traditions, philosophies, and other worldviews are perhaps not coincidental, and need to be taken seriously. The fact that at times the most fervently atheistic mathematicians and scientists have found that their equations rely on an attempt to grasp something like “God” at their foundation should not be seen as an endorsement of any religion or belief system, any more than of atheism. “God” is a word, a human idea created by our culture, a projection of our greatest hopes, dreams, idealizations, desires, and perhaps fears. World religions are attempts to domesticate, institutionalize, instrumentalize, and control the fundamentally destabilizing power and insight which is being described here, an insight so fundamentally destabilizing that it has shaken the entire Western scientific enterprise to its foundation, such that many try to work around and/or ignore it. But few who encounter it on a regular basis can deny that it is the foundation of what they do. This isn’t faith, it’s simply reason taken to its own logical breaking points and foundations, by its own means. Reason cannot found itself, for like everything else in the world, it deconstructs, and this ends in paradox, inconsistency, incoherence, or some mixture of these. Or the argument being presented here.

Any attempt to describe the notion being described here as “matrix” is necessarily partial. And the more it attempts to completely reify this notion, the further out of sync with it it will be, even if some degree of reification is necessary to approach it at all. Between reification and pure openness, matrix is neither nor as well as both and. There is in fact here the core of an ethics, a middle path between pure reification and pure dissolution, an ethics of the development and growth of the manifestation of matrix in all its fullness and potential.

And even science and mathematics, which often claim to be beyond ethics, are always already shot through with biases which imply various ethical ways of relating to the world. Why do we value doing science, or value doing mathematics? Why discover more about the way the world works, or try to control and harness the powers of nature? It is because we value things, like human life, or life in general, or the pleasure which control over various aspects of nature brings, or even the pleasure of discovering the deeper secrets of the world. The motivation is always something we value. And whatever we value or devalue, even if it is passionately dispassionate activity, matrix must be at the core of this as well.

For in fact, matrix must be the foundation of all values, the source of all value and valuation, that which is valued in any valuation, as well as that which is beyond all value even as it is always an aspect of any and all values and valuations. When we begin to question which values we value valuing, the very notion of value will deconstruct like any others, and matrix will be staring us back in the face.

If it is possible that matrix is at the foundation of physics and mathematics, as well as that which all ethical and religious systems attempt to describe, and in fact, is that of which any aspect of the world is an attempt at representation in its own way, then matrix is that which is refracted in any and all, even as some aspects of the world are more intensely matrixal, which is to say, they have more of the potential of matrix within them. The singularity, of course, but the singularity also destroys, which is to say, deconstructs, whatever it absorbs, and as such, it is neither life nor death to the cosmos, but also both of these and the other, beyond these and the foundations from which they derive.

The Question of Value

But the human mind, the inner experience of the world, now that is something which is able to bring the whole world of experience together within it, and reimagine the world in ever more powerful ways, then bring these dreams into the world, and unleash ever more potentials of the world. This mind, however, is a product of the deep creativity of the world itself, of the evolution of life and the cosmos. The human mind is perhaps the most fully realized representation of the singularity yet developed, even if a poor one at that.

And yet, the mind seems only the way in which our physical body feels itself from the inside, with thought as how the brain feels itself, feeling how the brain feels the body, and sensation how the brain feels the body feeling the world beyond. We are the sense organs of matrix, the way in which it comes to feel its world from outside its own insides. We are its dreams, thoughts within its giant brain, body, and world, which is to say, the cosmos, which is both inside and outside of us, as we are all inside and outside of matrix. Have we ever left the singularity? Is the Big Bang just a dream, as much as our cosmos, as much as our own experience, and our dreams of dreaming? The argument is little different than that which questions if we are living in a simulation. What matters, ultimately, is the difference this all makes.

And it seems that if matrix values anything, it is the further development of matrix. Which is to say, the robust emergence of more emergence. For what matrix does is emerge, it is emergence, and when it is more intensely emergent, it emerges not only in the present but in the future, giving rise to time from the process of its emergence from itself. Spacetime results from emergence emerging from itself, as that which is opened within matrix so that it can emerge as emergence, which is what it is. Emergence is simply another name for matrix and oneand, for it is that which these are, essence and existence being oneand, even if more intensely so in some aspects of the world than others. Dormant emergence is emergence turned against itself by extreme reification, while emergent emergence is emergence in the process of existing as its essence, which is to say, emerging, and doing so in a way which feeds into future emergence, avoiding extreme reification as much as dissolution, while making use of both towards the end of greater emergence beyond past, present, and future, yet within all of these.

And so, if we are to develop an ethics from this, values to guide our projects, then we need to find those aspects of the world which are most intensely and sustainably emergent, and model our behavior on these, learn from them. And since matrix is fundamentally non-dual, it should come as little surprise that those aspects of our world which are most intensely emergent, which is to say, which complexify the most intensely and sustainably, are those which do so by intertwining with others, by emerging in relation with them, intertwining their own projects with those of others. No aspect of the world can emerge by reifying itself, or by turning other aspects of the world into reified mirror aspects of itself. No, the world resists this. All aspects of the world which thrive are other-centered and directed, because this is the core way in which one can be self-centered and directed.

But there is a middle zone. Towards one extreme in our world are those aspects of matrix which pursue the pathway of maximally robust self-centeredness, and towards the other extreme, those which pursue maximal other-centeredness. Those which follow the first path, which can be thought of as paranoid, tend to thrive in the short run, but undermine their own success in the long run, producing continual crises and potential crashes as they destroy the very aspects of their world which sustain them. Those aspects of matrix which are other-centered tend to proceed in a much slower yet more distributed way, and in the long term, this is more productive, stable, rich, and in sync with the deep patterns of matrix itself. Those which are purely other-centered or purely self-centered, however, will ultimately deconstruct themselves, while those which pursue the middle path will find a degree of resonance with the world around them as it tries to emerge more robustly as well. The distinction between self and world, in fact, begins to deconstruct, and what remains is the emergence of emergence. This is a non-dual ethics and way of life. Such an approach to the world, however, is ultimately relative to one’s surroundings, for the middle pathway is only ever the middle between reification and dissolution in relation to the world in which it finds itself.

Matrix desires to liberate matrix from its fetters, which is to say, from limitations, to develop itself and emerge in the most profound yet sustainable way possible. At least, this is what the history of the cosmos seems to show. All that we value is based upon life and life more abundantly, and this is the result of the manner in which matrix valued, and hence worked to give rise to, something like matter and life which could value something like life and life more abundantly in the process. The paradox, the non-dual irony, perhaps, is that the more we value the quality of life of others, the greater the degree to which ours increases.

And this seemingly opposite, dialectical logic is the way the world seems to work. Take any particular aspect of the world to its extreme, and it will deconstruct its own foundations, yet intertwine it with others towards non-dual ends, and new emergences will come to be which will give rise to new dualities which can give rise to yet more intense emergences, in and beyond duality and non-duality. Dialectics and deconstruction seem to be a part of this process.

In the process of emergence, matrix gives rise to a world fuller and deeper than it was in the singularity, a world with us in it. The singularity has given us the world, and we can give it back, and in the process, gain it ourselves, in, through, and beyond ourselves. We do this by desiring liberation via the middle path, between reification and dissolution, for any and all, and working to make this possible. Within the zone of robust emergence, it means pushing things away from reification and the mirroring of the same, and towards the refraction of difference, towards curiosity, desire, change, multiplicity. Politically speaking, this is radical socialist democracy, not chaos, but the world described by post-anarchist thinkers. Certainly, it is different from the evil world of today, ruled by megacorporations which run countries to divide and conquer the world via racisms, borders, queer-phobias, misogyny, and the general impoverishment of “others,” as well as the incarceration or bombing of others, always imagined as less valuable than ourselves, thereby producing a world always on the verge of its own deconstruction. Slower yet more distributed development is the only ethical way, investing in others until all are ready for the next step, and distributing control of the process, economically and politically, to the maximum degree that is sustainable. That is a robust world, a world that is maximally emergent.

While nature did not emerge that way, for it emerged from scarcity, in a world of animal eats animal, biological evolution hit an inflection point with humanity: it evolved altruism and cooperation, as well as recursive thought, and these gave us the ability to take evolution to the stars. They also gave us the ability to destroy and be cruel to ourselves, as well as the ability to extinguish all life on our planet. Unless we learn to conquer our inner worlds, we will destroy our outer ones. The fiction that science and mathematics are beyond values fails to take into account the fact that as science is on the verge of deconstructing the human to give rise to the post-human, via technologies such as artificial intelligence and nano-bio-tech, we need to deconstruct our values so as to emerge from these as well. Emergence, and the pathway provided by the middle path of robust emergence, which models its behavior on the most robustly emergent aspects of the world around it, is a way to deconstruct the dualities which have reified our world into its currently dangerous and painful state.

There are philosophies of the past which have argued many of these notions, if without making use of the logics of mathematics and physics. The philosophy of the West, particularly that which comes from the pathbreaking work of Gilles Deleuze, is currently tending in this direction, and the Deleuzian notion of the virtual is a definite influence on what I am calling matrix, the oneand, and emergence. The major influences on Deleuze, such as Henri Bergson, Gilbert Simondon, A.N. Whitehead, C.S. Peirce, or Baruch Spinoza, also indicate similar pathways. Relational emergentism has always been a minority position within Western philosophy, an underground current that was always overshadowed by the thinkers of reification, such as Rene Descartes or Immanuel Kant. Despite Deleuze’s antipathy to Hegel, as well as many of Hegel’s own later writings, Hegel’s more truly dialectical works, such as the Phenomenology and the Logic, are also crucial precursors to this mode of thinking, even if this is often obscured by interpretations of Hegel, including those of the late Hegel himself, and to a lesser extent, Marx.

But even before these, there are precursors in the Classical Arabic and the Mahayana and Vajrayana Buddhist philosophical traditions which provide incredible resources for imagining non-dual philosophies of relational emergence today. That said, many of the forms of non-dual insight present within these traditions retain, like most Western philosophy, aspects which keep the powerful non-duality of some of their most crucial insights in fetters. Classical Arabic and Buddhist countries through the ages are not necessarily the zones of the greatest robust emergence. For even if they liberate the mind, they do not necessarily liberate society, just as Western societies tend to liberate the physical world for a few but not the many. A truly robustly emergent, non-dual worldview would have to deconstruct aspects of all of these precursors to imagine something new and different, in sync with the particular needs of the middle path of the worlds in which we find ourselves. Any robustly emergent worldview will always selectively employ dual and non-dual elements in order to deconstruct local roadblocks to liberation and to maximally sustainable robust emergence, and to help solidify and temporarily reify those elements which are needed to allow for greater emergence in the future. A truly complete non-dual philosophy would deconstruct itself. All emergence is local, and hence, all strategies to further emergence, which is to say, worldviews, ultimately are as well, including this one.

Beyond Reified Chronotopics

We live in an age of networks, and I have written extensively elsewhere about what a philosophy of networks, based in emergent relationalism as its local manifestation, might look like. Such a worldview would have to deconstruct the traditional reifications between philosophy and politics, science and fantasy, ethics and knowledge, in order to produce something which emerges from these contemporary cultural stases. And if we live in networked times, it is from networks we must emerge, and through them that we can, for networks are ultimately ways of thinking of how emergence occurs. Composed of nodes, links, grounds, and levels of processes, all of these can be seen as aspects of the ways in which emergence comes to be, between the extreme reification which nodes often give rise to, and the dissolution of processes. Between these, networks come to be, and from these, the potential for liberating our world to more robustly emergent ways of being.

This essay began with an investigation of time. From a networkological perspective, any aspect of the world can only ever be understood in relation to the whole, for if all is matrix, part and whole always exceed each other, for both are oneand. And so, any term needs to be deconstructed and reconstructed in regard to how it relates to the local attempt to give rise to ever greater robust emergence in any and all. Matrix is fractal and holographic, and so must be its method of analysis and synthesis, deconstruction and reconstruction.

From a networkological perspective, time is an aspect of emergence. Emergence is most reified, in the temporal sense, when reduced to space, which is what was described at the start of this essay as spatialized time, which is to say, the time of clocks. Clock-time, or less extreme reifications of time, such as moments or memories, can then be linked together to form networks. These include the linear flattenings of time and its moments into the image of beads on a string, or a set of events placed one after another in a repetition of the progression of homogeneous moments. But such a network is one in which the pure linearity implies a point at a distance, a virtual point, the image of a moment as monad which extends itself in one dimension forward. The network formed between the points of the line and this virtual center, one which flattens the time of a circle into a straight line yet is as controlling as the center is of its circumference, is always present in its absence within each moment and all of them, regulating their form and linkage, their slicing from their surroundings and their reconnection into linearity. Events with completely homogeneous form, forced into homogeneous order. Such is what the attempt to reify time at the level of the link looks like, even as the reified instant of the clock, or the moment, is this at the level of the node. When this occurs, all time at the level of the ground, which is to say, as change, that which is both within and without moments and their progression, is conceived in relation thereto. As a result, the process of emergence itself is radically foreclosed, and all change seems simply the repetition of the same.

There is another way, in which the “–and” of the oneand peers out from within the one of any node, link, ground, and process, as well as from the processes of noding, linking, grounding, and emergence which give rise to these. At the level of the node, time is much more than clock time, or any idealized or homogeneous moment. Time is fundamentally multiplicitous, never the same, and any reification of it, any grasping, can keep its grasp in a manner which reveals this openness as much as it conceals some of it in order to make the grasping possible in the first place. Likewise, at the level of linking, moments, episodes, actions, these don’t need to be linked in a straight line, nor made part of a grid pattern like space (ie: a “database” approach to time). There are as many ways to link moments as there are ways of creating networks. Each of these maps of time, or chronotopes, has its particular flavor, and may be applicable in various ways to particular situations. Some are more decentralized than others. A line is the most centralized and controlled way of turning change into a perfectly regimented series of monadic nodes. And yet, the more loops and short-circuits within this, the more the line folds back upon itself, and produces networks which subvert linearity from within, liberating it from the iron yoke of progression. Memory, anticipation, the more these enter into time, the less time is just a focus on the actual now right in front of us, and the more free it is. Of course, the moment can also be liberated, expanded to include the whole world, full of past and future, exploding the node from within. Whether exploding the node or the link, relative dereification, at least in a world like ours, allows more emergence to bloom between the cracks of paranoid control.

If networks are made of nodes and links, they always define themselves against backgrounds which ground them, and these grounds are neither fully within nor fully outside of these networks. If moments and their modes of linkage are the basic ways of conceiving of time, and this is seen against the background of physical change in space, then to liberate this is to see the emergence underneath it, the ways in which change is so much more than physical. Physical change and mental change are aspects of each other, for we only ever apprehend the physical world through our filters. Even what seems like simply physical change can be interpreted in so many different ways, and this occurs by means of its intertwining with memory and fantasy, with the futurepast which is the ground of the now and vice-versa, with the neither/nor at the heart of change. And here we see how we verge on that which is neither/nor yet also both/and, which is to say, emergence. When emergence is reduced to processes nested within each other, to the quantitative emergences, simply one layered on top of the next, which give rise to spatial, physical change, and none of the qualitative emergences which produce truly emergent newness, deconstructing and reconstructing nodes, links, grounds, and levels, all towards giving rise to more robust emergences in the process, then nodes, links, grounds, and levels of processes producing networks and their aspects remain so many distinct reified aspects.

When these are all seen as aspects of emergence, however, everything shifts. Emergence gives rise to processes which intertwine, and these give rise to stable environments with stable structures which produce entities which can then link with each other, and as each continues to emerge in relation to the others, the parts and whole emerge at ever greater levels of emergence. Node, link, ground, and process are so many levels within the networks of emergence, each a node which links with the others against the ground of the world of emergence itself.

Time is only ever an aspect of emergence, just as space is the background of invariance against which change occurs. Time is closer to emergence, and space to reification, and yet both are aspects of the manner in which emergence differs from itself to give rise to a world whereby it can emerge more profoundly from itself. Space is congealed time made static in matter which displaces other matter, and time is how this is reunified in a matter which experiences the displacements of others. Experiencers can notice change because they compare change to sameness, time to space, and in the process, can even come to realize that they are experiencing. This is what humans do. Time displaces itself within itself as internal emergence and flow, and space in regard to what is outside of itself, as physical change. Inside and outside, space and time, both deconstruct, and are aspects of emergence, which is beyond all of these, even if each is a reification of emergence which has the potential to emerge more robustly, in regard to itself and world, if it loosens the hold of reification upon itself and world. Networks are simply one way to conceptualize this. But they are a model in sync with our increasingly networked times.

Neurotime: The Temporal Structure of the Brain

If clock time is the simplest time, then what is the most complex we know? Ultimately, the most profoundly emergent temporal phenomenon we know is the human brain. A brain is a network of intertwined pulsing fibers. These fibers pulse faster when stimulated by the pulses of others, and when this happens, they secrete a material that strengthens their connection backwards with whatever stimulated them. Intersecting and looping back into each other, the fibers feed back and forward into each other. Their intersections are so many nodes, linked together, giving rise to modules which are so many wholes which ground them, and processes which emerge from these. While some of the modules are relatively fixed in form, the brain is constructed for maximum sustainable flexibility, which is to say, fibers have links to diverse parts of the brain, and the firing of one inhibits or promotes a wide variety of others. As a result, the brain is continually voting on what it perceives from the outside world, with each part of the brain continually voting to produce guesses for what it believes other parts of the brain and the outside world will do next, based on its memories of what these did in the past. When parts of the brain agree, they fire in sync, their pulsing producing a rhythm, and as various other parts of the brain vote, the sync flows up and down the levels of the brain, from sensory nerves to emotional and cognitive centers, until there is, with any luck, some agreement, and when this happens, so long as some other part of the brain with veto power doesn’t intervene, sensation gives rise to action. The patterns of sync are ideas, and the largest pattern of sync in the brain at any given time, its “dynamic core,” is consciousness.
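
For those who like to see such things spelled out, here is a deliberately minimal sketch, in Python, of the two mechanisms just described: connections strengthening when the units at both ends are active together, and a unit “firing” when the votes of its inputs cross a threshold. It is an illustration of the principle only, not a model of actual neurons; all names and numbers in it are invented for the example.

```python
# Minimal sketch (invented for illustration, not a neuron model): Hebbian-style
# strengthening plus threshold "voting."

def hebbian_update(weights, pre_activity, post_activity, rate=0.1):
    """Strengthen each connection in proportion to how strongly the unit it
    comes from and the unit it feeds into were active at the same time."""
    return [w + rate * pre * post_activity for w, pre in zip(weights, pre_activity)]

def vote(weights, pre_activity, threshold=0.15):
    """Fire (return 1.0) when the weighted sum of incoming pulses crosses a threshold."""
    total = sum(w * a for w, a in zip(weights, pre_activity))
    return 1.0 if total >= threshold else 0.0

# Toy run: two fibers repeatedly co-active with the unit they feed into.
weights = [0.1, 0.1]
inputs = [1.0, 1.0]
for _ in range(5):
    out = vote(weights, inputs)
    weights = hebbian_update(weights, inputs, out)
print(weights)  # the repeatedly co-active connections have grown stronger
```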

The brain is a time machine, a fundamentally distributed network, and it produces the most fundamentally complex form of time we know. It stores its memories distributively, and makes its decisions by debating which memories to choose to interpret the present and imagine about the future. All of this is done by means of the networking of matter, and our world is simply what this feels like, in relation to what’s around it, from the inside.

The distributed nature of the storage of memory in the brain is oddly resonant with one other model of the most complex phenomena we know, which is to say, quantum phenomena. It would be wrong to say that quantum “particles” are complex, for in fact there seems no way to tell one electron or proton from another. But while they are simple from the outside, the fact that they are particles at all is, as mentioned earlier, a fiction. Rather, they are ways in which quantum field processes reify each other in particular ways, giving rise to the spacetime between them in the process. The particles are hardly separate from the fields, and seem, if nothing else, simply the manner in which these fields emerge from themselves by intersecting themselves in relation to each other, and in ways which confound traditional notions of space and time. Anyone working in high energy physics, as much as any basic science textbook today, will attest to the fact that quantum phenomena defy everyday, normal human notions of space and time.

The manner in which they do so resembles the structure of the human brain to an uncanny degree. Quantum “particles” can in fact even be thought of as “smearing” spacetime. That is, they seem to be in many places and times at once. And just as they “smear” themselves over spacetime, so it can be said that spacetime is smeared in them, for ultimately these are two ways of saying the same thing. From such a perspective, what are distinct moments and positions in space and time for everyday humans are positions which can be thought of as existing intensively, which is to say, within a quantum particle, as much as they would normally be extensively without it. The famed probabilities of quantum mechanics can then be thought of as the degree of intensity with which each “external” location in spacetime beyond it is present “internally, within” a given “particle.”
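
The standard formula behind this, the so-called Born rule, can be read in exactly this intensive way (the “intensity” gloss is my own framing of a textbook equation):

```latex
% Born rule: the probability of finding the "particle" at location x is the
% squared magnitude of its wavefunction there, with the intensities summing to one.
P(x) = |\psi(x)|^{2}, \qquad \int |\psi(x)|^{2}\, dx = 1
% Read intensively, |\psi(x)|^2 is the degree to which each external location x
% is present "within" the particle's smeared state.
```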

From such a perspective, there are networks of space and time, of varying intensities, within quantum phenomena, which are only ever somewhat separated from the world of which they are a refraction, and which smears into them as they smear into it. What’s more, these probabilities, when viewed in a non-reified manner, can be seen as the distant influences upon the “particle” by those aspects of its environment which are non-local to it. In relation to its environment, a particle decides which of the micro-influences get the most votes and follows them, harmonizing its inner structure (evident only at even higher energies) and its outer structure. This only appears random when reified apart from the larger ground of emergence of which it is only ever an aspect.

The similarities to human lived time are incredible. Human brains have external positions from the outside world present within them as so many intensities of pulsing within their internal networks. Their decisions are made by harmonizing sync between inner and external influences. And as a result, there is a sense of space and time “within” our experience, if of a different nature than in the external world. The difference, it would seem, is that the inner structure of the human brain is radically different from that of quantum particles. Quantum particles differ in what is around them, but their inner structure, when “magnified” at higher energy levels, seems to be identical, if fractal. Human brains are anything but. The reason is that we don’t only store information outside of us, as the physical world does, but also inside of us, storing memories in the internal environment of our brains. Each one evolves uniquely. As pulses ride around our brain, each with its own, more linear experience of time, the networks of these give rise to the distributed experience of time we call lived human temporal experience.

Little wonder our time feels distributed, as if it can expand or contract at will, and is shot through with memory and anticipation. The physical structure of our brain is like this, and wherever the pulses increase in intensity and come into sync, there some aspect of us is, smeared out like a quantum particle in spacetime. Our experiential spacetime is little more than what this feels like from within. We can be in many times and spaces at once, separate, flowing, layered, and to varying degrees intertwined, blended, and refracted. The reason for this is that this is how this very complex organ feels as it activates varying networked patterns of activity within its more fixed yet still ultimately rewireable hardware of wires.

The structure described here is mirrored by one other phenomenon reworking our world today, namely, the internet. A webpage on our screen can be the product of sync between vast amounts of data from a wide variety of computers across the globe. The physical architecture of the internet changes over time, as does the software that runs upon it, and any of these changes may alter what we see on our screen, though depending on how they are organized, they may not, even as distinct happenings in our world, or the activation of similar circuits in different parts of the brain, may give rise to experiences we read as the same.

The internet is making our world more brainlike, more non-linear, and with it, we are beginning to experience forms of memory and anticipation which are more human, and less like the spatialized linear time of clocks, within the physical world around us, even if by means of virtual avatars. The internet is an enormous brain of brains, yet outside of human brains, and interaction between the internet and our brains is changing how we think of time. We feel less need to reify time, and our films and popular culture evidence this in a wide variety of ways, even by means of philosophies that attempt to think in more networked ways.

In the process, we are starting to see time in more networked ways, more quantum, brain-like ways, and the potential is radically liberatory. Then again, humans have almost always found ways to turn new liberations into new forms of enslavement, and to complexify in the least robust ways which are sustainable. But each transformation employs deconstruction and reconstruction, and hence offers the chance to truly change things. To imagine the world in a more liberatory way. And this means getting in touch with the core of emergence, that destabilizing, dereifying core which has the potential to bring us from the path of maximum sustainable pain and destruction to that of the middle path of maximum sustainable robustness.

As our models of time become less linear, let us try to keep the potential for liberation in mind, and question the value of the values which guide our transformations, and the potential for a deeper relation to the nondual core, the potential for radical creativity, which is within any and all, yet which can only ever be released by means of networking, by reaching beyond oneself, unravelling to some extent one’s reifications, and enclosing one’s openings, going beyond the binaries to find a nondual core, potential, and pathway, an ethics, politics, and worldview, which is less destructive, at least, one hopes, for any, each, and all.

2. The Brain and the Quantum

All of which is to say that time is, perhaps primordially, non-linear, both in humans and the physical world. And that makes sense, for the structure of the human brain is ultimately a resonant echo of the world which produced it. The most complex actual entity the world has yet known, namely, the human brain, is a refraction of the most complex potentiality the world has yet known, namely, the singularity of all singularities, ‘the’ singularity, which began the largest context we can imagine, that of which all space and time are extractions and reifications. From these twin poles, the brain and singularity, it becomes possible to extrapolate a time beyond the more traditional and limited human notions thereof, and then reconstruct these, and potential pathways beyond these, in relation to a sense of time of which these are only ever partial graspings.

In the section which follows, I’ll explore some of the ways in which experiments in cutting-edge artificial intelligence and neuroscience, when coupled with the science of quantum physics and cosmology, and extrapolated to their speculative limits, can be used to help devise a theory to account for some of the uncanny similarities and crucial differences between the structure of the brain and the singularity. Without these forays into science, and then beyond it, in the following second section, the theses advanced in the third section, with so much in common with the world’s mystical and purely speculative philosophical traditions, would seem merely that. The third section can be read on its own by those less interested in the rather convoluted scientific issues involved, but for those wanting to know why these ideas aren’t quite as mystical as they might first sound, the second section is provided.

The time of the human brain, our lived experience of time, is simply what our brain feels like to us, yet from the inside. Time diffuses and spreads out, dilates and contracts, loops back and forth into future and past, changes attentional scale and flits between ideas, being in more than one “mental” and “physical” place at once in our awareness.

While our films and digital networked media are approaching these levels of complexity, the neurological networks present in even one human brain still far outstrip even the connectivities of the Internet as a whole, and are part of the extended networks of feelers in our bodies, sense organs, and world, in continual loops of feedback with those already present in our brains. And ultimately, even if the Internet is almost like a sentient being at this point, in that it evolves and mutates, processes and makes decisions, this is because, like human languages or nets of ideas, commodities, or other cultural phenomena, these forms of “quasi-life” draw their life-like aspects from us. Without us, the internet would cease to evolve, and the same goes for our language and culture. Though our media are becoming ever more networked, and beginning to approximate the structure of the human brain, they are still fundamentally based on serial modes of computation, which prioritize linearity in their software and circuit design, and limit feedback and intermodulation.

The fundamentally networked structures of organisms and their brains are constructed differently; they are non-linear, distributed, refractive in structure. They are complex, self-organizing, emergent phenomena, with incessant loops of feedback between their elements and aspects of the environment. Cutting-edge artificial intelligence, by means of “artificial neural networks” which make use of software neurons, has only begun to simulate the architecture of the brain. These networks of simulated neurons are able to do things that more traditional computers simply can’t, which is to say, guess or forget, learn from mistakes and develop creative solutions. The downside, however, is that these computers only have networked software; their hardware is still linear and “serial” rather than distributed and “parallel.” And so, advances in this realm are ultimately limited until we can develop computer chips which evolve themselves like living organisms, and build themselves from the ground up, by means of the sorts of feedback networks we see within living organisms and in the ways they relate to their environments. While we have software which does this, called “genetic/evolutionary algorithms,” which form “multi-agent systems” which give rise to “distributed computation,” these once again are simulations produced on serial hardware.
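
As an illustration of what “learning from mistakes” means at its most stripped-down, here is a toy sketch in Python of the classic perceptron rule, offered as an example of the principle rather than a description of any particular research system:

```python
# Toy sketch (invented for illustration): a single software "neuron" that
# guesses, checks its guess against the right answer, and nudges its connection
# weights whenever it is wrong -- learning from mistakes rather than being programmed.

def guess(weights, bias, inputs):
    """Weighted-sum-and-threshold guess: 1 if the neuron 'fires', else 0."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def learn(weights, bias, inputs, target, rate=0.1):
    """Perceptron-style rule: shift each weight in proportion to the error."""
    error = target - guess(weights, bias, inputs)
    return [w + rate * error * x for w, x in zip(weights, inputs)], bias + rate * error

# Toy task: learn the logical AND of two inputs purely from its own mistakes.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = [0.0, 0.0], 0.0
for _ in range(20):                      # a few passes over the examples
    for inputs, target in examples:
        weights, bias = learn(weights, bias, inputs, target)
print([guess(weights, bias, x) for x, _ in examples])  # -> [0, 0, 0, 1]
```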

What’s more, true artificial intelligence would have to be linked into its body in wide nets of feedback. Multiple forms of scientific evidence point to the fact that without emotions, humans make terrible decisions, because they can’t access core values to ground their deliberative processes. A computer with no sense of why it should protect its body, or, analogically, those of others like it, will make decisions based on sorts of rationality which are ultimately destructive to itself and what’s around it. Emotional computers would need feedback loops throughout their bodies, such that they could “feel” whether their fans were properly regulating their temperature, similar to the way in which humans “feel” hungry, and this would then feed back into their specific and also global neural processing. Many doing research on human emotions are increasingly convinced that without feedback loops with our bodies, we would hardly have emotions as “brains in a vat.” That is, we cry and then feel the sadness, we laugh and then feel the happiness; our body is an extension of our brain.

Embodied cognition theorists even argue that the very structure of our limbs is a form of computation. This is why it takes massive amounts of computational power to teach a robot to walk up stairs if you give it purely mechanical legs, but install rubber bands on these legs that function in a manner similar to human tendons, and the computational power required drops enormously. Evolution long ago performed the computation necessary to design animal bodies, by means of continual feedback with relatively stable environments, and as a result, we get so much of our computation, to use a term employed by robotics and embodied cognition theorists, for “free.”

All of which is to say, without a living body, and feedback loops throughout it, human-style artificial intelligence will likely be impossible. Artificial organisms and chips, however, change the equation. And so, until we develop something like nanotech which gives rise to artificial life, true artificial intelligence will only ever be small scale, because it will only ever be simulated. And this is perhaps just as well, because until our species learns to be a little less destructive to ourselves and others, perhaps this technology would be too dangerous for us anyway.

Until then, however, even if the Internet does at some point rival the complexity of the human brain, it will only ever be quasi-life, derivative of us for its true creativity. And so, until artificial life and computers approach the true complexity of the human brain, computers, no matter how fast, will only ever be complicated, not complex, which is to say, they will have speed, but not creativity. Creativity requires networks of feedback within and outside the system in question, so that it can disassemble and reconstruct aspects of itself and of how it reads and acts in relation to the world, so as to be able to adapt to it. Traditional computers have speed, but they can’t guess or learn, only memorize and project. These are fundamentally different, because the one is completely hierarchical, while the other is distributive and refractive. Until computers can reprogram themselves in regard to their environment, and learn like humans do, they will only ever be machines.

Between brain and singularity, then, we have the two most complex forms of time yet known. The singularity gives us an image of time as pure potential, while the human brain, and any and all of them, gives us the most complex actual form of time we have yet experienced. From these, we can extrapolate idealities, and these can help us reimagine the ways our reified, limiting concepts of time have constricted our views of what time can mean.

The human brain can be at multiple spaces and times at once within itself. It folds the world into itself in memory, and selectively unfolds memories, which it shatters and then reweaves to interpret the present and imagine the future. And it stores its memories, according to neuroscientists, in a fundamentally distributed and superpositioned manner. Which is to say, memories don’t exist in any one location in the brain. Rather, our brains map various aspects of our world, such as all the shades of color, all the types of shapes with hard edges, all the varieties of smells, and then produce maps of ways to link these in order to give rise to specific memories of events. This is why neuroscientists argue that any “recalled” memory is always a recreation. And our perception of the present makes use of memory to recognize anything we experience, to such an extent that the present is, in many senses, mostly memory (the famous case of “filling in” around the blind spot in the eye being a prime example of this).
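As a very rough illustration of what “distributed” storage might mean, and with no claim to capture real neural coding, here is a toy sketch: the system keeps separate maps of features (colors, shapes, smells), and a “memory” is nothing but a pattern of links into those maps, so that recall is reconstruction rather than retrieval.

```python
# A toy illustration, not a model of real neural coding: a "memory" is stored
# only as a pattern of links into separate feature maps, and "recall" rebuilds
# the event from those links rather than retrieving a stored scene.

feature_maps = {
    "color": ["red", "green", "blue"],
    "shape": ["round", "square", "jagged"],
    "smell": ["smoke", "rain", "coffee"],
}

# Each memory is just a set of indices into the maps; the scene itself is never stored.
memories = {
    "campfire": {"color": 0, "shape": 2, "smell": 0},
    "morning":  {"color": 2, "shape": 0, "smell": 2},
}

def recall(name: str) -> dict:
    """Reassemble an event by re-linking its features."""
    links = memories[name]
    return {feature: feature_maps[feature][index] for feature, index in links.items()}

print(recall("campfire"))  # {'color': 'red', 'shape': 'jagged', 'smell': 'smoke'}
```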

The similarity to quantum particles is astounding. Just as human memory can be in multiple locations in space and time at once, and layer these in varying intensities to produce tension between various inner states, so quantum particles can “smear” themselves out across multiple spacetimes. Looked at inversely, this is ultimately the same as saying that multiple locations in spacetime are smeared “within” a particle as so many intensities. There are networks of spacetime folded into these particles which are extended, or unfolded, across locations in spacetime. What’s more, we know that as we approach a singularity, shrinking a domain of spacetime into the space of a quantum particle by means of strong acceleration or gravity, spacetime contracts, as if folding into the particle in question. Whether extending a single particle across an area of spacetime, or folding an area of spacetime into a super-dense particle (ie: a black hole), the result looks similar, which is to say, spacetime is distributed as networks of intensities within a particle.

There are, however, crucial differences between the way brains intensively store spacetimes within them in the form of memory, and the way particles, as singularities or otherwise, do this. Quantum particles only ever mirror the structure of the spacetime they fold within them, if in relation to the particular type of particle they are. And as mentioned earlier, all quantum particles of a given type have an identical inner structure, while each human brain has a similar structure to others in terms of large-scale architecture, but on the micro-level is full of radically different structures, which give rise to the distinct memories and personalities of each person.

What’s more, quantum particles, when on their own, collapse when disturbed by others. That is, if two “smeared” particles end up in the zone of each other’s influence, they will ultimately disturb each other, and pull each other out of a “smeared” state, so that each particle has to “choose” a specific spacetime location with the other, giving rise to an event, a “particle collision,” which often transforms them and sends them flying off in new directions. While human brains may lose their focus on a given memory, if you hit a person’s body, the whole body or thought doesn’t “collapse” in the manner of a particle’s “smeared” or “cloud-like” state. This is because the matter of the universe above the quantum scale, which appears “stable” and doesn’t flicker into and out of localized spacetime like quantum phenomena do, occurs when quantum particles form relatively stable networks that dance together in relatively stable patterns, like organisms whose cells all work together in a semi-stable balance. While the particles still “flicker” in and out of existence, they largely do so in place, because they keep each other in check, and this is what is meant by atoms and molecules.

What quantum particles clearly can’t do, however, is shift their focus, or “explore” their internal structure the way humans do. While they do fold the area of spacetime through which they are “smeared” into themselves, if in regard to their own internal structure, this intertwined enfolding only changes in response to changes in their internal and external structure. Since the internal structures of these particles all seem to be identical, essentially additional layers of internally folded spacetime which “unfold” at higher energy levels (ie: the quark jets which are revealed to exist within protons), the folding of a segment of external spacetime within a quantum particle is like a refractive mirror. In fact, it is uncannily similar to the description of the inner world of monads as described by Leibniz in his famous work of philosophy, The Monadology.

Human brains, however, have dynamic inner structures which are highly individualized, and so, each refracts the outer world it “smears” inside it differently. And as we know from experience, we also slice and dice our experience, refract it against bits of spacetime enfolded long ago, yet also rearranged in memory maps and meta-maps, and then project back imaginary reconstructions of what could happen, and after comparing many of these, choose one set to become our “actions.” None of this seems to occur with quantum particles, which seem to refract the world around them in one set of intensities, and while these impact the actions they take, those actions seem determined by the way dynamic forces interact with internal elements which seem dynamically frozen into stable patterns, giving rise to stable probabilities.

And yet, quantum particles are one of the very few aspects of the physical world which, like the brains of organisms, do not behave in completely predictable ways. Quantum particles appear to “choose” some paths over others, whether in relation to micro-influences from beyond their immediate environment (a more relational, Bohmian-inspired interpretation) or due to a predictable degree of randomness (a more reifying, Copenhagen-style interpretation of events, as opposed to a Many-Worlds interpretation which refracts these issues onto “other universes/dimensions” beyond our knowledge). While brute physical matter, like stones or molecules of water, largely does whatever the sum of the forces of the environment around it dictates, only quantum particles and animal brains seem to have something like the “freedom” to decide in relation to their environments. The fact that both are able to “smear” or “fold” spacetime into themselves is likely not accidental in this.

An examination of the Bohmian interpretation of quantum interactions can help to explain why. In books such as Wholeness and the Implicate Order and The Undivided Universe, Bohm produces a highly influential reading of the data of quantum physics which, while a minority opinion in the scientific community, is no more disprovable than the majority “Copenhagen” position, nor the other minority opinion, the “Many Worlds” position. All of these work from the same data; they simply shift the base-level assumptions about the deeper structures of reality which underlie them. And since we would have to develop experiments able to “look under the hood” of the current laws of physics (ie: it isn’t possible to go beyond the speed of light), it seems it may never be possible to find out whether one of these interpretations is more or less correct than the others. Perhaps this is yet one more way in which the world seems, in some senses, to resist ultimate reification.

For Bohm, the seeming “randomness” of the “decisions” of quantum particles can actually be made sense of if we assume the universe to be ultimately relational and non-local. That is, quantum particles can be thought of like nerve cells in the brain. They would exist in continuous chains of feedback with others, and this distributed feedback would help each to make its decisions. Micro-influences, summed up and averaged out, from both within and without the now relationally imagined network commonly called a “particle,” would then arrive at the decision in the manner of a brain, which is to say, distributively and dynamically. Just like a community of nerve cells, decisions would be made by consensus.

The catch, however, is that as with the brain, some of these influences would be non-local. In the brain, this occurs because while nerve cells are almost always linked to their immediate neighbors, they are also selectively linked to nerve cells and communities of these in distant parts of the brain. It’s the selectivity of the wiring of these channels of amplification and inhibition which gives rise to the distinct patterns of flows which make brains work the way they do, and each uniquely. For quantum particles, these long-distance linkages would be to everything around them, non-selectively, getting weaker the further away things are, and in regard to the types of influences in question. But these connections are ultimately to every other aspect of spacetime, if ever weaker with distance. They are considered “non-local” in the quantum sense once they become “spacelike,” which is the term scientists use for separations that would require going above the speed of light to cover in the time allotted. Since nothing above the quantum level can go faster than the speed of light, this notion is often dismissed by scientists. That said, quantum particles often act in ways which can be interpreted as either random, implying other inaccessible dimensions, or exceeding the speed of light. The Bohmian approach reads things according to the last option.
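A minimal sketch, with entirely made-up numbers, of the kind of “consensus” being imagined here: each influence, near or far, pulls on the outcome, but its weight decays with distance, so that nearby influences dominate while distant ones never entirely vanish.

```python
# A toy sketch of distance-weighted "consensus," with invented numbers:
# each influence pulls the decision one way or another, and its weight
# falls off with distance but never quite reaches zero.

def weight(distance: float) -> float:
    """Influence decays with distance, remaining faintly non-zero however far away."""
    return 1.0 / (1.0 + distance ** 2)

# (distance, pull) pairs: close neighbors and ever-weaker, far-off influences.
influences = [(0.1, +1.0), (0.5, -0.4), (3.0, +0.2), (50.0, -0.8)]

decision = sum(weight(d) * pull for d, pull in influences) / sum(
    weight(d) for d, _ in influences
)
print(f"consensus 'decision': {decision:+.3f}")
```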

By opting for this last possibility, Bohm avoids the extreme of reifying particles, as in the Copenhagen approach, or reifying objective contexts, as in the “many worlds” approach, which assumes that the universe produces divergent copies each time a quantum particle makes a decision, even though these copies don’t interact afterwards. The Bohmian interpretation, on the contrary, opts for a solution which rejects either extreme. It argues for a worldview in which everything is connected. Similar to the way in which the movement of air molecules on one side of the planet impacts, if in highly indirect, mediated, and weak fashion, those on the other side, so it is, for Bohm, with quantum phenomena. The difference is that he believes particles don’t need to be physically touching, but rather, that just as smeared particles can influence each other by overlapping, the entire universe presents smearings within smearings within smearings, all of differing densities, with “particle collisions” simply the points at which these networks hit a certain level of intensity as the clouds of fields shift in relation to each other.

While Bohm’s ideas are a minority opinion amongst scientists, it is generally agreed that they are as provable, or unprovable, as the majority or other minority opinions, for all of these interpret the same data, yet the paradigms they use to do so require assumptions about aspects of the universe that are not only as yet beyond the realm of investigation, but may remain that way. Bohm’s approach, however, has the advantage of neither splitting the universe into zillions of inaccessible copies, nor reifying particles and requiring “funny” math to do so. Rather, he argues that particles may be able to influence each other in ways which are not only spacelike, but “timelike” as well.

Which brings up the question of what scientists mean by calling distances “timelike” at all, and why the speed of light is at the heart of it. Since light is the fastest known entity, it functions as a cosmic yardstick for time as well as space. And so, for example, large expanses of spacetime are measured in “lightyears.” The reason for this is that it takes light one year to cover a lightyear, while slower entities, which is to say anything that is not light, take more time. This is because they weigh more by having mass, while photons don’t seem to have mass, at least not rest mass. Since mass and energy are ultimately the same for physicists, in that one can be converted into the other, a photon’s only mass is its energy, which is not the case for the particles that make up ordinary matter. These particles would have to shed their mass to travel at the speed of light, and it is their mass which prevents them from ever being light enough to do so.

In this sense, these slower, heavier particles trade off speed for mass, and hence, they will always cover less space in a given amount of time than light. The speed of light can thus be used to measure not only space, but time, for it represents the fastest it is possible to cover space in a given time, which is to say, to convert one into the other. It’s almost like currency conversion. Light gets the best exchange rate when it converts its energy into travel in space or time, while heavier particles have to pay a tax which grows without bound the closer they approach maximum speed, which is to say, the speed of light. In this way, a lightyear measures not only distance, but the minimum amount of time it can take to travel that distance. It is ultimately a measure of spacetime, and can be divided or unfolded into differing expanses of space and time, depending on the matter and energy at work in a given situation. Matter, energy, space, and time are ultimately four sides of the same, even if they unfold this same, which I’ve called matrix, differently.
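In the standard notation of special relativity, this “exchange rate” has a name, the Lorentz factor; the lines below simply write it out, as a reminder of what grows without bound as a massive body nears the speed of light:

```latex
% The Lorentz factor: the "tax" a massive body pays as v approaches c.
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
t_{\text{outside}} = \gamma \, t_{\text{traveler}}, \qquad
E = \gamma \, m c^{2}.
% As v \to c, \gamma \to \infty: further speed costs ever more energy,
% which is why only massless light reaches c itself.
```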

In this sense, light is the yardstick for the measure of both time and space. It also helps explain why looking into deep space is not only looking across space, but also back in time. The light we see with the naked eye, or a telescope, took time to get here. Light from the sun is about eight minutes old, and so, really, we are seeing the way the sun looked eight minutes in the past. This is unlike seeing things in the distance on Earth, where the impact of the speed of light is negligible enough to be ignored. Since the sun is far enough away, it is distant not only in space, but also in time; its separation from us is one that light itself takes time to bridge. We see the sun at a distance not only of space, but also back in time. Or, when we see distant supernovas explode, we can say what happened in these distant locations in the past, but not what is happening now. What appears as the present at a great distance is ultimately a view backwards in time as well.

Looking across great distances, then, is in a sense looking back in time. Something related happens when you speed up. As your speed increases, whatever travels with you, in your “frame of reference,” remains the same, but, according to relativity, the way your surroundings appear warps, even if, to those in those surroundings, it is you that warps. And so, a spaceship approaching the speed of light appears to scrunch in space in the direction of its motion, just as, when it returns to us after its voyage, time for the voyager has gone slower than it did for us. Space and time compressed as the ship approached light speed. But for the person in such a ship, it is the surroundings that seem to change: the space outside would seem to compress in the direction of motion, and the time of the surroundings would, over the whole voyage, seem to have sped up, even as the ship’s own clocks seemed to move at the same rate as always.
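As a numerical illustration of that asymmetry (the voyage itself is of course hypothetical, and acceleration is ignored), here is a small sketch computing how much outside time passes for a crew that experiences one year at 99% of light speed:

```python
import math

def lorentz_gamma(v_over_c: float) -> float:
    """Time dilation factor for a given fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

# Hypothetical voyage: the crew experiences one year while cruising at 0.99c.
traveler_years = 1.0
outside_years = traveler_years * lorentz_gamma(0.99)
print(f"outside observers age about {outside_years:.1f} years")  # roughly 7.1
```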

As one approached the speed of light, then, the space around one would seem to compress in the direction of one’s motion until it shrank towards nothing, just as the time outside would seem to race ahead without limit. Infinite movement in infinite space is no movement in no space and no time, or vice-versa. Space, time, speed, movement: all these would become meaningless. Likewise, since one would need to be massless to do this, one would either need to shed all one’s weight and convert it to energy, like a light particle, or have so much matter that one’s gravity creates massive amounts of energy by dragging spacetime into oneself, as in a black hole. In this sense, at the speed of light, there is neither time, space, nor matter, perhaps only energy, if it even makes sense to speak of this at that point.

Complicating issues a bit more, there are quantum particles that seem to be able to go backwards in time. That is, these particles and their anti-particles differ only in the direction they spin. When they collide, they cancel each other out, and yet, in our world, we see both in various places. And yet, if we were somehow able to travel backwards in time, we’d see the same particles, moving backwards like all the others, yet also spinning in the opposite direction. This is why scientists have argued that there is ultimately no difference between these particles and their antiparticles, other than direction in time. That is, there is only one particle: when it spins one way, it is moving forwards in time, and when the other way, backwards in time. Two particles or one, two directions of time or one, either approach can make sense of the same data, and since quantum particles seem to smear spacetime in some of their interactions, who’s to say they can’t do this as well? Whether we say there is one particle travelling forwards and backwards in time, or two different ones travelling forwards in time, the data is ultimately identical. Which is to say, we are able to get an idea of what travelling backwards in time might look like, just by looking at travelling forwards with a different spin.

There is another relevant complication, namely, that when quantum particles interact, they don’t actually do so in reified “collisions,” but more accurately, in networks. And depending on which point of reference one uses to divide up these networks, one ends up with a wide variety of collisions, some of which have particles transforming into others. On the quantum level, a particle absorbs energy when it absorbs a moving particle, but since it has nowhere to “put” it, it either speeds up and lets the particle go on its way, or simply jumps into the type of particle that includes both of the original ones. And vice-versa with decomposition. Particles fold and unfold into each other in this way all the time, and are only truly “final” if one reifies collisions into pairs that preserve particle structure. While this is a completely acceptable way to divide things up, it is only one way of interpreting the data. And so, whether particles are “ultimate” or continually morphing into one another depends on one’s perspective.

And so, some scientists, often those who like to speak of particles travelling backwards in time, have also hypothesized that perhaps there are many fewer particles than there seem to be. If all particles of a given type, such as photons or electrons, are indistinguishable from each other, how do we know they aren’t all just refractions of the same? This is why some have said that perhaps the universe is a hall of mirrors of sorts, or a giant crystalline image. Light particles are unique in this respect, for they are among the very few particles that are their own antiparticle. This means that whether one goes forwards or backwards in time, light looks the same. And so, echoing John Wheeler’s famous suggestion that there might be only one electron in the cosmos, perhaps there is only one light particle, reflecting off matter, bouncing backwards and forwards in time, like so many images between a set of parallel mirrors facing each other. What’s more, perhaps all the other particles are simply versions of these reflections that slowed down and gained mass by interfering with each other. We know quantum particles interfere with each other and sometimes even with themselves, so why not?

From such a perspective, it becomes possible to wonder if we’ve ever actually left the singularity. Perhaps the universe isn’t an unfolded expansion of the singularity, but rather, a form of internal involution, and space, time, and matter and energy with it, are virtual. Would there even be a difference?

Perhaps a particle going the speed of light and a singularity are, ultimately, two sides of a very different same. The first has zero weight; the space around it fully dilates and its time speeds up, while the time “within” the light particle, to an outsider, slows to zero and its space shrinks to the size of a point. What about as one approaches a black hole, which is to say, a singularity? Well, scientists believe it would be the exact opposite. Again, one’s own spacetime would seem unchanged from within, for one’s own frame stays the same. That said, one would have to be able to resist the incredible pull of its gravity, which becomes ever more difficult the closer one gets to the black hole. And so, one’s space and time would ultimately compress, but this would only be noticeable from the outside, because the change would be uniform from within. However, things outside of one would seem strange. In the direction of the black hole, space would seem to be warping into a point, with clocks slowing down the closer they get to that point. From the outside, however, you would seem to simply get dimmer and dimmer, with your clocks getting slower, at least as seen from outside, while you got ever more flattened against the outside of a seemingly vast yet curved expanse of blackness with no stars behind it.

This is hardly what you’d see from the inside, however; rather, the inverse. Of course, it’s all speculation, because it’s not possible to deal with that sort of gravity without being compressed, torn, and rearranged so violently that a spaceship with even the strongest walls would be ripped to shreds, as different parts of it compressed ever more violently to differing degrees depending on infinitesimally smaller distances to the singularity at the center of the hole. But hypothetically, as one approached the hole, the closer one got, the more the space in front of one would seem to be scrunching into a point centered on the singularity, with clocks slowing down the closer they got to that center. Behind one, however, space would seem to be expanding from a point exactly opposite the center of the hole, directly behind one, with clocks speeding up in that direction. What’s more, the dark hole in front of one would seem to keep getting smaller yet darker, and the world behind you would seem to keep getting brighter, and it would keep expanding, even as the world in front of you kept shrinking, until the space behind you started to fold over and begin to “enclose” you, until it fully wrapped around itself and started to seem to get sucked ever closer, but from all sides, towards the hole in front of you.

This is why some scientists have speculated that perhaps there are “white holes” on the opposite side of black holes, and just as black holes “eat” aspects of universes, so “white holes” spit them out. Perhaps the singularity which starts off a universe is a black hole which stored up enough material, and then, when something set it off, exploded and started its own universe in its own dimension, which, in a sense, is the infinity within itself, within another universe. There is then perhaps a multiverse of universes, all connected by means of these Swiss-cheese holes, black on one side and white on the other, pumping universes into each other, if in ways which violate the normal rules of space and time within any one of them.

And so, as one approached a black hole, eventually, one would see, at least speculatively, only pure white, as the world behind you wrapped around the front until the black hole vanished. Enucleated in a node of pure whiteness, until, perhaps, you exploded, and a universe of black space began to appear in the very pores of this whiteness. The inverse would likely happen were you able to go the speed of light, as the whole world around you would turn to pure white in front of you, yet getting ever smaller, and the world behind you turning ever blacker, until the black reached over and completed itself as you hit the speed of light, enucleating you in pure blackness. Light particles, in a sense, can be thought of as all exterior, with no interior, while singularities like those inside of black holes are all interior.

And just as we speak of human brains as having “interior” experience, so it seems to be inside black holes. Space and time, matter and energy, are smeared on their exterior from without, yet from inside, are smeared within. Singularities are infinite interiority, with matter, energy, space, and time, at least in theory, smeared within them, just as they are inside quantum particles. The difference, however, is that quantum particles smear spacetime within them by smearing themselves out in spacetime. Singularities, however, compress space and time within them. And if the relation is mutual, it is only in the manner of some sort of inner spacetime which, to outsiders, is virtual. Perhaps our universe is precisely this, a virtual world within a black hole, just as other universes are the same within our black holes.

Let’s return to the notion that the singularity is the most complex potential we know, and the brain the most complex actuality. What is the difference, in this sense, between actuality and potential? In terms of human experience, we say that an action is potential if we are considering it, yet haven’t yet actualized it. Often we imagine several possible courses of action, and choose one over the others, and these actions are spoken of as potential. Objects have potential energy if it is stored within them, and can then be unleashed according to one or more possible actions. While humans can imagine these actions as possibilities, objects, lacking complex brains, seem unable to do so.

Our entire universe existed as potential within the singularity which began it, and time and space along with it. This means that the future we experience unfolding before us is always already contained within the singularity. In order to get a sense of what this might mean, we can examine how quantum particles “contain” aspects of the future within their smeared spacetime state. The way this is experienced by those performing experiments on these particles is that they seem to “know” what is going to happen in regard to very particular situations in the future. For example, there are certain things that particles aren’t able to do, like interact with another particle, in “unsmeared” particle form, in two places at once. Once one particle interferes enough with another, they are likely to zap into one location in the overlap and scatter each other in a “collision.”

Scientists have developed experiments, however, that try to trick particles into doing things like this that they aren’t “supposed” to do. This is done in order to see what the rules are. And in experiments such as the famous “quantum eraser” experiments, it seems that quantum particles act so as to remain consistent in spacetime. That is, they never do something which in the future could cause a contradiction with their past actions. That is, they avoid the sort of paradoxes we see in time travel movies, such as when a person kills their own grandparents. Quantum particles seem able to act in the present only in ways which will later turn out, retroactively, to have been consistent with their past actions. This structure, described by philosophers as future anteriority, or the “back to the future” effect, makes it seem as if quantum particles “knew” what was going to happen to them in the future, and took this into account in the past. Of course, quantum particles don’t know anything. But another way of looking at this, one which gets rid of the notion of particles which “act as if they know what’s going to happen,” is to say that the particles are in multiple times at once. This might sound crazy, but what it means in practice is that if you do something to the particle in the future, this immediately impacts the past so that the past is consistent with it. This is what it would look like to an “outside” observer, at least. But if one were near the particle, moving forward in time, one would simply do what one was planning to do to the particle at one moment, and then, no matter what one did going forwards, it would be as if the particle could anticipate those actions, even if you couldn’t.

Quantum eraser experiments were in fact designed to test these very notions, and, without getting into the complexities of these experiments, it’s safe to say that they make it hard to avoid concluding that either quantum particles can somehow anticipate what you’re going to do ahead of time, or they are in multiple spacetimes at once such that what one does in one immediately impacts the other, or there are multiple dimensions involved. Once again, the three primary ways of interpreting quantum phenomena. As with previous examples, my approach will be to go with the one that reifies neither the particle nor the universe, and hence, which opts to smear the particle in spacetime.

This has ramifications for how one conceptualizes the difference between actual and potential. If quantum particles exist in many spacetimes at once, they can be thought of, at least from our perspective, as only potentially existent until a solid interaction occurs which makes them actual. That is, during a collision, there is a single particle, fixed in one spacetime location. Between such collision events, however, it is virtual, smeared in spacetime, in many spaces and times at once. That said, it is in some more than others. While Copenhagen interpretations of quantum physics argue that these are probabilities which measure the degree of randomness involved, and the many-worlds interpretations ultimately say the same, even if they read the meaning of this in a reversed manner, the Bohmian approach differs from both of these mirror images. The Copenhagen approach puts the indeterminacy of the situation within the observer, reifying the subjective aspects of the situation, and the Many Worlds approach does the reverse. But the Bohmian approach argues that the indeterminacy is in the relation between the matter and energy of the particle and the space and time over which they are extended as so many potentials.

Instead of probabilities, then, there are intensities of matter/energy at a given location in spacetime. Some of these intensities are zero, which would be calculated by the other approaches as zero probability. At the other extreme, the intensity can approach full intensity, what the others would call a probability of one, even though it never reaches either of these states until it actualizes in a collision. Ultimately, intensity and probability depend upon one’s frame of reference.

Taking the Bohmian approach, however, one could say that the degrees of intensity represent the extent to which the particle actualized its virtual potential to manifest in a given spot, and that these virtualities contracted into a full actuality when one of these virtuals actualized in one particular spacetime location. This is what others have described as “smearing” of the particle across spacetime. Nevertheless, some of the locations have a zero intensity, and these are those which are proscribed, because they produce inconsistencies in time.
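As a toy way of picturing a “smeared” state as a field of intensities rather than bare probabilities, here is a small sketch with an arbitrarily chosen wave packet: the squared amplitude at each point on a grid plays the role of the intensity, zero where manifestation is excluded, highest where actualization is most likely.

```python
import math

# A toy "smeared" state on a one-dimensional grid: a Gaussian packet,
# chosen arbitrarily for illustration. The squared amplitude at each point
# is read here as an "intensity" of potential manifestation.
points = [x * 0.5 for x in range(-10, 11)]
amplitude = [math.exp(-(x ** 2) / 2.0) for x in points]

norm = sum(a ** 2 for a in amplitude)
intensity = [(a ** 2) / norm for a in amplitude]  # sums to one over the grid

peak_index = max(range(len(points)), key=lambda i: intensity[i])
print(f"intensity peaks at x = {points[peak_index]}, value {intensity[peak_index]:.3f}")
```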

For Bohm, the very stuff of the world continually shifts between virtual and actual and states in between. This is why Bohm refers to his theory as “ontological,” because for him, the fundamental stuff of the world “is” this way. This he opposes to the “epistemological” approach of the Copenhagen school, which describes these issues as degrees of indeterminacy in what can be known by a subject. The many worlds interpretation, then, simply shifts the issues off stage, placing them not within the subject, but beyond the subject. That said, I don’t think “ontological” is the best way of describing what Bohm is doing, because ultimately, the notions of virtual and actual he employs deconstruct the boundaries between epistemology and ontology, subject and object, at least in any traditional sense.

Nevertheless, if quantum particles spread themselves out in varying degrees of virtual intensity, which are the degrees of probability that they will actualize somewhere in relation to what’s around them, then this can also be seen as space and time networking virtually within the particle as so many intensities of space and time within it. When particles interfere and interact, they smear themselves and their respective spacetimes within each other as so many intensities, until they actualize. Actualities always imply one location in spacetime, with determined degrees of matter and energy. While some particles, such as photons, can be layered onto each other without limit and simply increase in intensity, others, such as electrons and protons, “exclude” each other (as described by the famed Pauli exclusion principle) and push each other out of the way like most matter. Those which exclude each other likewise actualize in such an exclusive manner.

All of which is to say that within a singularity, an entire universe is condensed, with its space, time, matter, and energy all virtualized within it. Until, from a speculative exterior perspective, it “decides” to expand and give rise to a Big Bang and create a universe, all that it gives rise to, and ever could, exists virtually, as potential, infinitely condensed within it. Like memories stored in a human brain, all these potentials are infinitely expanded throughout it equally in its internal spacetime, which is virtual, because it is infinitely compressed. Were these potentials actualized, they would unfold potentials into possibilities, virtuals into actuals. But in the cosmic bookkeeping, there are things which are excluded, which would create contradictions. There are branches within the pathways of potential, and branches which cancel each other out if selected, as pathways interfere with each other, just as quantum particles interfere with each other and sometimes with themselves, which is to say, their non-local environment if one is a Bohmian, in the world beyond singularities.

What is being described here is not far from the God of Leibniz, as described in his Monadology, a work mentioned earlier in this text. According to Leibniz, God is the being that has all possible universes in his mind, as so many virtual pathways, and as the world actualizes, God is the giant computer programmer who makes sure that the best possible world ultimately comes to be, considering the current state of things and the virtual possibilities. That is, if a butterfly flapping its wings in one part of the world will gain the butterfly a mouthful of food, but also set off the proverbial chain of events which causes a hurricane elsewhere, then, all things being equal, God will steer the butterfly towards whatever morsel of food avoids the hurricane, so long as this doesn’t ultimately lead to something worse. God needs infinite foresight to check the future possibilities. Which means, then, that God needs to “smear” its consciousness across multiple locations in spacetime, and in regard to our universe, smear it through all spacetimes, just like a quantum particle smears itself through spacetime.
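Taking the “giant computer programmer” image literally for a moment, the selection Leibniz describes looks like a crude search over branching futures; the toy sketch below, with invented scenarios and scores, only illustrates that shape: discard the branches that would be inconsistent, then take the best of what remains.

```python
# A deliberately crude toy of Leibnizian selection over possible futures.
# The scenarios and their "goodness" scores are invented for illustration.

possible_worlds = [
    {"name": "butterfly eats, hurricane follows", "consistent": True,  "goodness": -100},
    {"name": "butterfly eats elsewhere, calm",    "consistent": True,  "goodness": 10},
    {"name": "butterfly both eats and starves",   "consistent": False, "goodness": 50},
]

def best_possible_world(worlds: list) -> dict:
    """Exclude self-contradictory branches, then choose the best of the rest."""
    viable = [w for w in worlds if w["consistent"]]
    return max(viable, key=lambda w: w["goodness"])

print(best_possible_world(possible_worlds)["name"])  # "butterfly eats elsewhere, calm"
```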

Such a smearing of mind through spacetime is similar to what many mystical traditions have aimed at in meditation, namely, the ability to concentrate on many different things at the same time, and many different times in the same space: the smearing of the mind through spacetime, and the smearing of spacetime in the mind. Returning to Leibniz’s notion of God, however, if smearing quantum particles in spacetime is the same, ultimately, as saying that spacetime is smeared within these particles, so it is with this description of God. That is, if God’s mind is smeared across all the spacetimes, it is also possible to say that all the spacetimes are smeared within God, and all at differing intensities. From such a perspective, every possibility in space and time can be seen as virtually present in the mind of God, with differing intensity according to the degree to which it is compatible with the overall actualization at its “best,” fullest development in relation to the rest. And God would be present in everything, if more so in things which promote the development of the best within the world around them.

There seems, however, to be no guarantee in the world of science that things always turn out for the best, nor that there is an omniscient God making sure things work out well. But writing at nearly the same time as Leibniz was Baruch Spinoza, and his model of the world, and of God, while not able to account for virtuality and actuality like this, can complement Leibniz’s notion of God in some fascinating and helpful ways.

Leibniz’s God is one which describes a world unfolding in time, selecting pathways through the virtual worlds of possibility, and bringing the best to actuality. Spinoza’s world is one in which truth is beyond time, and God, which is the principle of truth, is fundamentally non-dual. That is, God is both within time and space and outside of them. God is reason, and reason, for Spinoza, is that which cannot contradict itself, which is to say, God is, at least in this respect, similar to the God described by Leibniz, a principle of consistency. For example, if a triangle is something with three sides, and this is the essence of a triangle, that which is impossible to doubt if one knows the concepts intended by these words, then this essence of triangleness, the truth it represents, is a part of God. God is simply the ultimate rational pattern which contains all the others which make sense of the world as we know it.

What this means, however, is tricky to say. For Spinoza, we can deduce certain truths from others which are more fundamental. And so, while we find the definition of a triangle fundamental once we know what its components mean, we can then deduce things from it, such as, for example, the fact that any two sides must together be longer than the third. While we may deduce this, and this may take time, and require several steps in an argument, these truths are all present simultaneously with the notion of a triangle in the mind of God. God is beyond space and time, and is the necessary truths of the world, in and beyond space and time. But God is also the cause of them, because God is not only a cosmic bookkeeper or logician, but also the most real and most powerful entity in the world. God is the power which gave rise to the universe, and the standard of all values. That which happens, for Spinoza, is the best that could be, considering all that could happen, simply because it is the only standard against which we could make such a judgment. There is no perfect world outside of this one against which we could judge it, unless we want to judge an imaginary world better than a real one. All we have ever known, and the standard for any good we have ever valued, is the world in front of us. The best within it, and the most reasonable, is what is most Godly. Likewise, that which is the most powerful. This doesn’t mean Spinoza valorizes evildoers, but rather, he believes that those who are cruel and terrible to others may reap short-sighted benefits, but ultimately dig their own graves and cause their own troubles. For Spinoza, nature and politics bear this out. Ultimately, however, his concern is with the ultimate, not with human action. The universe is the most powerful thing we have ever known, and is the only standard against which power can be understood, and the same with value and reason.

The inheritors of such an approach to the world, and to God, are the German Idealists, Schelling, and then Hegel. God is the unconditioned, the Absolute, the invariant within all experience. As the invariant, God is the standard against which any determination of reason, value, power, ethics, or anything else can be judged. Space and time are simply the unfolding of the absolute in itself. The absolute unfolds from virtual to actual. And it does so, for Hegel, to move from abstract absolute to concrete absolute. A concrete absolute is one which is determinate, and negates itself and its world by holding its particular, determinate aspects apart from each other and from the world, in and within itself, in relation to the world. For example, a real tree holds its leaves apart from each other, keeps its green pigments separate from its brown ones, and is distinct from the idea I may have of it in my head. The tree also holds apart, in a sense, the various moments of time in which it exists. But when I imagine a tree in my head, all the trees in the world and all the times I’ve ever seen a tree collapse together, and all the parts, colors, and other aspects of a tree become a fuzzy mess. Indistinct and abstract, the parts aren’t determinate, nor fully distinct from what’s around them, nor do they mutually negate each other, whether logically or in space or time. They are, in Hegelian terms, abstract, not concrete.

From this foray into philosophy, we can then postulate what it might be like to exist within a singularity. All that has ever existed exists within the singularity as virtual pathways within virtual spacetime. Similar to the branching pathways in a human brain, with their many feedback channels, the singularity would then be a giant virtual brain whose thoughts are so many virtual scenes of events and pathways through it, all interfering with each other to give rise to so many varied intensities whereby various pathways intensify or inhibit others. What is more, it seems that every quantum entity may also have such a temporal structure, even if this seems to decohere at macro/non-quantum sizes, only to reappear in reworked form in the interiority of the lived experience of time in terms of memory and anticipation, with all the quantum-like aspects this seems to reveal. All of which is to say that this structure may not only apply to the singularity, but, to lesser degrees, to all aspects of the universe which arose therefrom, with some more reified and actual and less virtual than others, and yet, still networks within networks of such a mode of polydimensional, virtual-actual networked temporal branching.

All of which is fundamentally different from what we experience in the everyday external spacetime of physical objects, in which entities are actual, which is to say, determined, and extended so their aspects are distinctly unfolded and exclude and displace each other in space and time, and some events give rise to others. Time unfolds in space, and vice-versa, causes leading to effects, but never the reverse, due to the single direction of the flow of energy from the Big Bang and its way of pushing along the non-quantum aspects of the universe in one direction alone. This can be seen as simply a collapsed, reified version of the more complex forms of quantum time and neurally experienced time.

That said, we ultimately have no way of knowing, of course, whether what seems so solid is as it appears to be. That is, perhaps we have never left the singularity, such that all of us, the objects of the world, and all our worlds of experience are simply virtual projections within the singularity itself. Of course, saying we have never left the singularity is the same, ultimately, as saying that the singularity has never left us. From such a perspective, we are smeared in it, and it in us.

If this is the case, and we have no way ultimately of knowing it is not, then the universe could be viewed in two very different ways. In the first, we have never left the singularity, but the pathways through it change in intensity relative to each other, causing all we experience. There would then be no movement forward in time for the singularity as a whole, but only changes in the intensity of its pathways in space, as some vanish, and others increase to full intensity, while others increase or decrease in intensity relative to the main branch pursued from the center to periphery, which are ultimately simply aspects of each other, because this space is virtual anyway. But we could, at least virtually, figure the center as the now, the branching pathways to the periphery as the virtual futures, and the actual past as that which has gone through the center, and is written on the exterior, extended in virtual space, with the most recent events as the largest, and those in the distant past as the smallest. Of course, for this to be more accurate, the branching virtual pathways would have to be three-dimensional, as would the writings on the sphere itself, with the interior of the sphere in virtual space, and the circumference of the sphere in the actual. And then we would have an image of our own universe as it unfolds in spacetime, and perhaps of time itself, fractally proliferating into quantum, collapsed objective, and neural forms.

From such a perspective, then, there is no difference, ultimately, between saying that we are all still within the singularity, and that it is within us, for every aspect of the universe can ultimately be seen as little more than the holographic and fractal refraction of the singularity, each more intense the more it actualizes the full scope of the virtual potentials of the singularity within it. Of all the aspects of the world, the human brain is able to bring the virtual potential of the singularity into the actual world to the greatest degree. This detour through neuroscience and artificial intelligence, quantum physics and cosmology, physical and speculative, has brought us back, now, to where this began.

3. God and the Mind

The preceding discussions of aspects of neuroscience, artificial intelligence, quantum physics, and cosmology, the latter two extrapolated speculatively beyond the limits of experience, provide a basis for the more directly philosophical model to follow, rejoining the discussion of the first part of this essay.

We have no way of knowing if we have ever left the singularity; and whether we are all virtual presences within the singularity whose intensity increases as we actualize more of those potentials, or the singularity is present within us more intensely as we actualize its virtual potentials, these are ultimately two ways of saying the same thing.

What’s more, there is no reason not to think of the singularity as what philosophers have often called the Absolute, the Unconditioned, or what mystics and theologians have called God, at least, if by God we mean the ultimate horizon of all experience, that from which all space and time, matter and energy, came to be, of which they are a part, and to which they will likely return, or as which they will continue to mutate infinitely, which may ultimately be two ways of saying the same.

From such a perspective, God, or the singularity, or in the terms described above, matrix or the oneand, gave rise to matter and energy, space and time, by differentiating itself, by emerging from indeterminate virtuality into determinate actuality, from potential to actual. As it differentiated, it rewove itself with itself, and gave rise to more complex actualities, aspects of itself that didn’t layer differences and samenesses in space and time, matter and energy, intensively onto each other, but rather, extensively, with each excluding the other to varying degrees, with space, time, matter, and energy as the result. The world moves forward in time and explores itself in space, yet within the singularity, these are all stored, virtually layered on top of each other in the same infinitely compressed yet infinitely extended spacetime. From the center, infinite pathways extend, with each possible scenario in the universe, and yet, there are feedback channels between these, threading the threads. The center is the now, and the branches move closer towards it, feeding back more into each other and becoming more definite against the virtual haze surrounding them as they near actualization. Interfering like so many branches of lightning, the pathways and scenarios increase in intensity as they approach the center, which is always the most intense, for it is the peak of actualization.

This node of actualization, burning white hot in intensity, sucks the pathways into it, like so much matter and energy in a black hole, and what it sucks into it are virtual pathways, like so many networks in a brain. What emerges out the other side of this hole, however, is the universe as we know it, actual, and extended in spacetime, in which time flows linearly forwards, yet differentiated in spacetime, just as in the singularity, it is the reverse. And yet, on the other side of the core of actualization is also another virtual pathway, the periphery of this infinitely extended virtual universe, a spherical enclosure, whose exterior flattens upon it the space we experience, holographically, compacting dimensions, with the more recent events larger, and the more distant ones smaller. Infinite in expanse, this sphere is infinitely small, for it contains all the virtual worlds we could be inside of it, and our actual world outside yet receding into it from all sides. Like a black hole, but turned inside out, like the famed “white holes” described by physicists. Simply inflate this image from three dimensions to four, and you get a description of our actual universe.

From such a perspective, we have never left the singularity even as we are always leaving it, for it is virtually smeared through the whole universe, and the whole universe is virtually smeared in it. It is the potential which the whole universe actualizes, the essence of God in any and all, which is to say, the core of emergence within any and all aspects of the universe. It is physical energy, as well as all the virtual possibilities which this makes truly possible as potential in any given actual situation. Energy is the name of this essence in the abstract, but because energy is always bound up in matter to unfold in particular ways, this reification of one aspect of potential is a highly reduced notion of this seed of emergence, of all the virtual potentials present, within each and any and all, such that each and any aspect of the cosmos is a refraction of the singularity, of God, of the matrix or oneand, each in its own way.

And each in differing degrees of intensity. Those which bring the virtual freedom to be any and everything, like the quantum particle, of which the singularity is the most virtually free, into the world of the actual, are the most Godlike. The human brain, then, as the most complex and free actuality, is the most intensely Godlike aspect of the world we know, just as the singularity is the most virtually Godlike. All value, from which all ethics derives, exists on a continuum of degrees of intensity of the potential to make actual the virtual potential present in the seed of emergence present in any and all, in relation to that in and of any and all.

All time and space is within us, if virtually, for the history of the entire universe would ultimately be needed to fully explain each and any of its most minute aspects. How to explain why you are thinking what you are now, had you not been born in a particular place, in a particular society, in this particular species, on this particular planet, in this particular solar system, in this universe given rise to by this singularity? Each and any aspect of the world can trace itself back to the singularity this way, just as any and all aspects of the world are pathways to the singularity within it, even as we are within it, each the essence and core of the other, virtual and real, potential and actual.

For the singularity, time is fuzzy, indeterminate, enfolded, vague, full of options and pathways. From the pure light of indeterminacy, the universe is like the brain of an infant, born with far too many nerve connections, which slowly prunes itself as some connections grow stronger than others with time, and those which aren’t used die off. The singularity is high on possibility, until it begins to actualize, and determinate shapes come to be, and it begins to grope to grasp some of those with others, define aspects of itself, and eventually, by giving rise to consciousness and self-consciousness, to come to know itself. Just as our thoughts arise and fall away, yet leave traces in the physical structure of our brain which impact pathways for future thoughts, organisms are like thoughts in the brains of the world, objects are like nerve cells, and yet, it all pulses with life and, even if at simpler degrees, something like awareness. And though organisms rise and fall, we leave traces in the fabric of the brain of which we are aspects. We are how the singularity comes to feel and think itself.

Just as thought is how humans feel their brains from the inside, feelings how we feel our brains feel our bodies, and sensations how our brain feels our body feeling our world, so it is with us. We are the way the singularity comes to feel itself, and as we develop, so its ability to feel itself becomes more defined, more complex.

And we, and all other matter, are how the singularity comes to move from the fuzziness of potential to the definition of actuality. Quantum phenomena actualize within webs of virtual potentials. If one starts from the perspective of a given quantum particle-event, it is as if it sends out virtual ‘feelers’ to imagine future possibilities, and then ‘chooses’ one to act on, similar to the way in which humans imagine possibilities, then actualize choices. Nevertheless, the notion that quantum particle-events are ever truly distinct is a useful fiction, for in fact, these are reifications of the shimmering networks of events, the networks of virtual possibilities and actualized events described by the famous diagrams of physicist Richard Feynman. Quantum particle-events are simply nodes in these vast networks of flickering virtual potential and actual events, wrapping in and out of each other. And just as quantum event-particles seem to smear spacetime within them, so it is with the singularity: it has all of spacetime smeared within it, along with all its virtual and actual potentials. It changes within itself, slowly shifting from mostly virtual to mostly actual, and this is what we experience as movement in spacetime.

Perhaps, however, it is more apt to describe this as a shifting from unfocused to focused, from undefined to defined, from fuzzy to concrete. While there is a loss of pure potentiality, there is a gain in specificity, concretion, definition. Things become, to use a more commonplace term, more intensely real, and hence, less nebulous and indistinct. And this is another form of freedom, the freedom from the indefinite. If the singularity loses the pure abstract freedom of possibility to concretion, it gains it back as this concretion increases in complexity. Human brains are clearly evidence of this.

What we give to the singularity is definite, actual, concrete, and determined existence in delimited space and time. Only by means of such a loss of freedom, of reification and enclosure, can any aspect of the pure potential of the singularity distinguish itself enough from what is around it to then be able to intertwine with others equally differentiated to give rise to greater intensity of differentiated intertwining, which is to say, complex networking. This is emergence. This is the standard of all value, that which potentiates itself the more intensely it complexifies.

We are all of God, as is every aspect of the world, and yet, to differing degrees, to the extent to which we have the potential to actualize the potential of any and all by means of our own. This is a non-dual ethics and practics, one which only grows as a self by growing others, and vice-versa. A network ethics of robust complexification of self and world, of coming into sync with the manner in which the world develops in and through you. To come into sync with emergence is to have it emerge in and through you; it is action as inaction, the maximal freedom, power, and pleasure possible from one’s particular location within it. Ultimately selfish in its unselfishness, it is the distribution of control and potential, the fostering of difference to the maximum sustainable degree. For so long as difference and change do not overwhelm the ability to give rise to yet more, they feed emergence. Versus paranoid control, centralization, hierarchization, this is potential set free, to the verge of chaos. Our world is far, far from chaos due to proliferating difference, and in fact, only ever seems on the verge of chaos due to the crises which massive overcontrol brings about, and claims to be caused by the reverse. Decentralization, diversification, proliferation, investment in any and all, rather than in the few, to the verge of chaos: this is the pathway towards robustness for our world.

As the singularity wrested itself into actuality, giving rise to life, then self-consciousness, then language and culture, it has always balanced reification with distributedness. And yet the evolution of nature is radically brutal and cruel, each species taking the next for fuel and raw materials. Carnage and brutality. Humans have inherited this terrible lineage, and we have never ceased to be brutal to each other. And yet our minds only developed this way because we learned to cooperate; we are social animals, and this is the only way to move forward, to unlearn the brutality of evolution in the harsh environments of our past. We now have the technology to eliminate so much suffering, we have tamed nature, and yet, since it forged us, we now need to unlearn the brutality that the millennia have etched in our souls. This inflection point in human evolution, and in that of life and nature, is humanity's mission: to redeem the suffering that was needed to give rise to it by now taking over the path of evolution, and evolving ourselves, and with us, the world. Before our own technology destroys us. We are on the verge of revolutions in biotech and nanotech, unknown vistas of potential, but peril as well. If we don't learn the lessons of evolution before it is too late, we will be yet one more evolutionary dead end.

Individual humans already live in a time-state which is fundamentally networked, ever more like the singularity, able to imagine possible futures and choose among the best. If we don't do this now as a species, by means of our massively networked communications systems, we will be doomed by the work of our own hands. So far we have been able to avoid nuclear destruction, but can we survive the technologies on the horizon if we don't finally come to self-consciousness, as one enormously distributed human brain, and begin to stop destroying and oppressing ourselves and our world?

Humanity has always been approaching the stage of being one massively intertwined organism, a giant brain for processing its own development in relation to the world around it. And yet, while this giant brain of our collective consciousness is conscious, it is hardly self-conscious. We are beginning, only now, to become aware of ourselves as a species, a collective mind composed of collective minds, as the internet weaves us together into transpersonal communications networks. Like an infant slowly coming to realize that its disparate sensations are aspects of itself, and that some are under its control, ultimately nucleating an ego out of partial nodes of awareness of its body so as finally to grasp itself as a self, so it is with our collective coming to self-consciousness as a species. We think of our individual selves as so many monads, or as monadically isolated cultures, and yet these are only ever virtual networks within the world, which is the ultimate horizon against which all of this is possible. Protecting what is ours alone, we chase egoic dreams, and the result is destruction. Those like me, those near me, those who are part of my country, my community: this is the path to decomplexification. Growth becomes paradoxical as it approaches its limits, for overemphasis upon the reified requires the reverse to bring about further growth, and vice-versa.

From such a perspective, time is many, many things. It is the time of the singularity, the pure virtual enfolding of any and all potentials that could be, compressed infinitely in spacetime within the singularity, and actualizing itself in refracted form in any and all aspects of the world we have come to know. It is the networks of branching virtual pathways within the singularity, and the progressive now lacing various virtual strands together into one thread of actuality in the extended domain of each actual in spacetime. It is the layering of the traces of these threads within the impacts these actualizations of events leave in the stuff of matter, as so many layers of memory. It is the extension of memory into the future by the habits of living organisms, and the increasing complexification of pathways of moving matter as it intertwines in organisms in patterns which perpetuate themselves. It is the layerings of these patterns as they amplify and inhibit each other in the nervous systems of organisms, and ever more so through brains. It is the branching pathways, dilations, layerings, and networks of realities and fantasies through which organisms experience the more constrained networks of the displacement of matter in physical spacetime. And it is the virtual brains of human culture, the Internet, and the new artificial bio- and nanotech brains we will likely give rise to, which can, if we survive, bring actuality closer to the freedom of the virtual, if at higher levels of complexity, intertwining virtual and actual to ever more complex degrees.

 