Networkologies is back

•February 22, 2015 • Leave a Comment

Hi All-

Sorry for a long hiatus, largely due to the need to buckle down and get my book finished and published, followed by the rest of my first sabbatical. With a return to teaching and writing, it is also a time to return to blogging. New posts coming soon.

-Chris


“Networkologies” is NOW PUBLISHED, Available on Amazon!

•November 11, 2014 • 1 Comment

To Everyone Who Reads This Website –

Many thanks over the years to all the folks who’ve come and visited and enjoyed these posts. I’m happy to announce that, after many years of editing and condensing my work on networkologies into one slim volume, the first installment of the networkological project has been published by Zer0 Books, and you can now order it on Amazon! It’s called “Networkologies: A Philosophy of Networks for a Hyperconnected Age – A Manifesto.” It’s split into two halves – a very user-friendly introduction, and a crazy over-the-top manifesto. The link to Amazon is here.


You’ll also notice that I’ve uploaded all the works in progress that went into making this book, as supplemental materials on a page on the sidebar. I’ll work on bringing these to publication as further installments in the networkologies project. But the first volume is now out, so spread the word to any and all! Best wishes to all my readers, and thanks for the many questions and comments over the years!

Website Updates, and New Texts for Download

•October 31, 2014 • 1 Comment
Just wanted to let you all know that I’ve spent the past few days revamping this website, in preparation for the official publication of my new book, Networkologies: A Philosophy of Networks for a Hyperconnected Age – A Manifesto, which is about to be published by Zer0 Books.
Firstly, I’ve uploaded two new texts that I wrote in the past that may be of interest – a book review of Steven Shaviro’s “Post-Cinematic Affect,” and an analysis of the logic of time-travel films in light of quantum physics, focusing on the structure of Rian Johnson’s 2012 film “Looper.”
Mostly, however, the new version of the website now has links on the sidebar where you can download, in pdf format, the supplemental texts and works in progress I wrote in preparation for the Zer0 book. I will subsequently be working on bringing these to publication, in updated form, as time permits, in order to fill out the networkological project into a multi-volume whole. These manuscripts include:
– “Networlds: Networks and Philosophy From Experience to Meaning” (225 pgs), a very user-friendly introduction to the philosophy of networks from the standpoint of ‘everyday life.’ The Zer0 book was originally the introduction to this text, but it got too long, so I just published the introduction/manifesto. Highly polished text.
– “Netlogics: How to Diagram the World With Networks” (265 pgs), the original book I wrote on the philosophy of networks, which really contains the entire project as a whole, but which I decided was too dense to publish first, and so started writing Networlds instead. Needs minor modifications to fit the current form of the project, but is otherwise a highly polished draft.
– “The Networked Mind: Artificial Intelligence, ‘Soft-Computing,’ and the Futures of Philosophy” (234 pgs), the first book-draft I wrote, a sort of preface to the philosophy of networks, one steeped in the science and technology of networks, and one which shows the need for a philosophy of networks. It was from this that the rest of the project grew. The draft needs some work in terms of editing, and I am currently working on fixing up this one first, as the next volume to be published in the series. While the start of the text still needs some work, I’ve used middle sections of the text with students in class, and they say it is much clearer than already published sources on the materials at hand.
– My dissertation, “The Untimely Richard Bruce Nugent” (351 pgs), an intersectional analysis of the work of the only openly ‘bisexual’ author and visual artist of the Harlem Renaissance. The work shows why an intersectional and networked approach to historiography and identity helps demonstrate the richness of Nugent’s overdetermined gestures, and how these resonate not only with Nugent’s time and our own, but also with the hermeneutics of reading the past which relate these, in light of the rise of queer studies, women of color feminism, and other post-identity-political forms of interpretation.
Best wishes to all!
-Chris

Collapsing the Fuzzy Wave: Rian Johnson’s “Looper” (2012), Quantum Logics, and the Structures of Time Travel Films

•October 31, 2014 • Leave a Comment

A Very Uncanny, Bruce Willis-Like Version of Joseph Gordon-Levitt: Rian Johnson’s Time-Travel Film “Looper” (2012)

(Wrote this in 2012, but I’m updating my website, and adding some new content that should have been here long ago. Enjoy.)

Hypermodernity and the Time-Travel Film

In today’s hypermodernity, the experience of everyday life can often make it feel as if one is travelling through time, or even existing in many times and places at once. We are continually meeting our many slightly divergent copies, each existing within alternate yet often partially overlapping temporal dimensions. For in today’s world we see refractions of ourselves everywhere, from our profiles on social networking sites to our continually updated online fragments of images, narratives, chat-histories, and self-descriptions, and it can often be hard to keep track of our bits and pieces, a tendency which is only likely to increase. To quote Agent Smith from The Matrix (Wachowski Bros., 1999): “The best thing about being me… there are so many ‘me’s!”

While Smith’s reaction is euphoria, there are many other possible reactions to this radical change in our way of relating to the world and ourselves. And while these were probably always multiple in one sense or another, for there were always many selves and worlds in the eyes of others, today we encounter so many different types of others, digital and otherwise, and leave so many virtual traces of ourselves, from voice-message greetings to videos, that it seems like we are always running into shards of ourselves in ever more concrete forms. As artificial intelligence and biotech get ever more powerful, who’s to know whether our digital avatars might one day literally have lives of their own, branching off from ours like roots from a tree, only to allow us to re-encounter full versions of ourselves again at some later date, even if today we see merely the foreshadowing of this in the proliferation of our virtual avatars.

And so, it should hardly be surprising that the time travel genre has only gotten more popular, mainstream, and complex. While most of these films make use of science fiction devices, however, travelling in time isn’t merely something which happens in the domain of speculative fiction, for in a sense, memory and fantasy are always a form of interior time travel. From such a perspective, films which explore the depths of interior time, such as David Lynch’s Mulholland Drive (2001) or David Cronenberg’s Spider (2002), can be thought of as films which use psychosis as a method to present time in the form of a shattered crystal, in which it is possible for people to encounter copies of themselves, sometimes exact and sometimes divergent, so many virtual avatars walking around within our heads. While technology may one day literalize this phenomenon, the human mind, and cinema itself, are already media of time travel, such that films which use speculative fiction have simply found a convenient means to dramatize this.

Recent time-travel films, such as Shane Carruth’s Primer (2004) or Duncan Jones’ Moon (2009), show new possibilities for the genre. The first uses the notion that time travel can create multiple copies of persons, while the second employs cloning to do the same. While the mechanics are slightly different, in many senses both can be seen as attempts to think through the challenges of our age in an allegorical manner.


One of the best diagrams to help explain the temporal structure of Shane Carruth’s “Primer” (2004), easily one of the best films of the decade.

“Looper”’s Quantum Memories and Fuzzy Futures

As time travel films get more complex, however, we need new models to think through their mechanics, and new ways to think about the ways they produce meanings. A recent and quite impressive addition to the genre is Rian Johnson’s 2012 film Looper. And as with many such films, soon after release there was a panoply of explanations, diagrams, and attempts to explain its time-travel dynamics, many on the Internet. While most praised the film in one form or another, many argued that the presentation of time travel in the film is in some senses sloppy, because it mixes and matches various theories of time travel, and hence sacrifices structure for character development. This reading of the film has in some senses been supported by the director, who implied in at least one interview that he went with his gut rather than work out the details.[i]

Despite this, there are actually some very strong reasons to see Johnson’s gut instinct as having a lot going for it. For when contextualized by aspects of contemporary quantum mechanics, the fuzzy logics of Looper are hardly inconsistent. In fact, this film can be seen as actually advancing the time travel genre to a new level of complexity. In the process, the film introduces a series of new tools to add to the time-travel filmmaker’s toolbox.

Before getting to why I think Johnson’s film takes a much more consistent approach to time travel than many have thought, it’s worth describing some of what makes this film so innovative – if, that is, it can actually be made to work theoretically. One of Looper’s most distinctive innovations is the way it deals with memory and anticipation. In a powerful scene which recalls the avant-garde experiments of Japanese New Wave director Shuji Terayama’s Pastoral (1973), an older and a younger version of the same character sit down to talk with each other in a diner. The older version of Joe, played by Bruce Willis, explains to his younger self, played by Joseph Gordon-Levitt, how time travelling to visit his younger self nevertheless also impacts him as well:

“My memory’s cloudy. It’s a cloud. Cause my memories aren’t really memories. They’re just one possible eventuality now. And they grow clearer, or cloudier, as they become more or less likely. But then they get to the present moment, and they’re instantly clear again. I can remember what you do after you do it. It hurts … But this is a precise description of a fuzzy mechanism. It’s messy.”

We see the memory issues described by “old” Joe [Willis] at work in the film, for example, when he has trouble remembering his wife’s face when they first met, just as “young” Joe [Gordon-Levitt] meets Sara [Emily Blunt]. And it seems that both old and young Joe meet their respective love-interests with a blow to the face: the slap which Sara gives young Joe disrupts old Joe’s ability to remember the punch in the face he received immediately before he met his future wife in his own past. It seems as if there’s cross-over or interference between the timelines of this cinematic world, even if these timelines aren’t strictly parallel, but rather, to one extent or another, also serial, which is to say, one after the other. The film goes out of its way to give us a sense that timelines interfere with each other, using fast cross-cutting and aural bridges between the scenes in question, such as the slap to young Joe and the punch to old Joe, to cement the link. While ultimately old Joe is able to recall his wife’s face, it is implied by these means that young Joe’s potential relationship with Sara could somehow erase that between old Joe and his wife. Timelines can not only interfere with each other, they can alter or even erase aspects of each other, and the potentials of each other as a whole. Or in terms of the plot of the film, young Joe’s short-term future can alter his long-term future, which is, ultimately, old Joe’s past – hence the memory fuzziness described by old Joe in the diner.

All of which is quite resonant with the structure of quantum entities, whether or not Johnson did this intentionally, or, as he has indicated, by instinct. According to quantum physics, sub-atomic entities aren’t quite particles, and this is why, after at first depicting electrons as tiny satellites in fixed orbits around the central atomic nucleus, physicists came increasingly to depict electrons as fuzzy “clouds” which hover around a nucleus.[ii] What’s particularly difficult to grasp about this, however, is that electrons aren’t spread out like a rain cloud; rather, the electron and the space and time in the area of the so-called cloud are, to use a popular metaphor in the science literature, “smeared” within and through each other. That is, while a rain cloud is full of little drops of water, there is only one electron in the spacetime of the cloud, and it could show up anywhere in there, but its location within this is, in a sense, “fuzzy.” The reason for this is that quantum entities, which are only described as particles by scientists today as a shorthand, simply don’t follow the laws of time and space the way large entities do; they’re able to be in more than one space and time at once, even if more intensely in some of those spaces and times than others. What’s more, they interfere with each other, in a manner not all that dissimilar to what is described in Looper.

This is why we can think of old and young Joe as being similar, in many ways, to quantum entities. As they get closer to each other, in space and time, their actions increasingly begin to interfere, not only with each other in the present, but in the past and future as well. This manifests in the way in which young Joe’s actions can rewrite old Joe’s memories, or, to the extent that the memory of one character is the future of another, also impact their potential future paths.

All of which is quite similar to what happens when quantum entities approach each other. The only way we can tell that particles as small as electrons are where we expect them to be is by shooting another “test” particle in their general vicinity, and then looking to see how the test particle is deflected by its interaction with an electron. After repeated trials, scientists have come to realize that particles this small are never exactly where you expect them to be, but rather, they exist within a cloud of probabilities of being in one particular zone in spacetime or another. Similar to old Joe’s description of his memory, particles are only ever more or less likely to be where they are expected to be, and any attempt to fix a quantum particle in place will produce a “precise description of a fuzzy mechanism.” Ultimately, it’s about probabilities. And this is what has led scientists to argue that the particles and/or spacetimes involved, depending on how you look at things, are “smeared” in relation to each other, to the degree generally indicated by the intensity of shading of the cloud. That is, the density of the cloud indicates how likely the particle is to be there, and hence, the clearness or cloudiness gets more intense as the presence of the “particle” is more or less likely.
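
For those who want the textbook shorthand behind this talk of clouds and shading, here is a minimal sketch in standard notation – my own gloss, added for illustration, and not something drawn from the film or from the sources cited in the notes. The cloud is conventionally written as a wavefunction, and the density of the cloud at a point is the squared magnitude of the wavefunction there, read as a probability density (the so-called Born rule):

\[
P(x,t) = |\psi(x,t)|^{2}, \qquad \int_{-\infty}^{\infty} |\psi(x,t)|^{2}\,dx = 1
\]

The shading in the familiar orbital diagrams is simply a plot of this density: darker where the particle is more likely to turn up if disturbed, lighter where it is less so.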

What’s more, quantum particles and their probabilities interfere with each other. One of the first things quantum physicists realized, in the famous double-slit and diffraction-grating experiments, was that quantum phenomena “diffract.” That is, just like ripples in a pool of water, quantum “clouds” extend in waves from their most intense points, and when these begin to overlap, the result is a pattern very similar to the ways in which waves of water interfere with each other over a distance. This is one of the reasons why many working in this field argue that quantum phenomena have both particle-like and wave-like characteristics. While a quantum entity only ever “actualizes” at one point when it hits another, thereby acting like a particle, its ability to interfere with the action of others, and vice-versa, extends around it in rippling clouds, similar to the ways in which young and old Joe seem to be able to impact each other from afar in the film.
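
To make the interference point concrete, here is the standard two-slit sketch in textbook notation, again added only as an illustration rather than taken from the film or its commentators. Writing the amplitudes for a quantum entity to reach the same point on a screen via the first or second slit as two terms, the amplitudes add before being squared, and the cross term is the interference:

\[
P = |\psi_1 + \psi_2|^{2} = |\psi_1|^{2} + |\psi_2|^{2} + 2\,\mathrm{Re}\!\left(\psi_1^{*}\,\psi_2\right)
\]

That last term is what produces the ripple pattern, and it vanishes whenever the two alternatives are made incompatible (for instance, by recording which slit was used) – which is, roughly, the sense in which alternatives that cannot coexist stop interfering.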

One particularity of quantum physics, however, is that this smearing, clouding, and rippling doesn’t merely happen over space, but over time as well. This is demonstrated in the famous “quantum eraser” experiments,[iii] which indicate that quantum particles act as if they go out of their way to avoid paradoxes, in ways which would require that they “knew” what happened in the past. There have been countless attempts to explain the strange consistency of quantum phenomena in these examples, but basically, it is as if quantum particles “know” what we are going to do beforehand, and take this into account to make sure they don’t do something which would violate the often paradoxical-seeming, yet nevertheless consistent, laws of quantum mechanics. In many senses, the paradoxes only arise if we view the quantum world with our everyday, more linearly temporal lenses, similar to the way in which Looper appears inconsistent from a more traditionally linear temporal point of view.

But is Looper Consistent? Alternate Temporal Dimensions and Relativity Theory

It is the issue of consistency, however, which has worried many critics of Looper, even if, ultimately, the forms of consistency desired by these critics don’t take quantum issues into account. One of the best streamlinings of the argument critical of Looper is presented by blogger Liam Maguren here.[iv] Maguren starts by describing the four primary approaches to time travel depicted in contemporary film:

“Theory A – Fate: there is only ever one destined timeline in the entire universe. If you travel to the past, your actions will not change the timeline at all, for they were always meant to happen (e.g. Timecrimes).

Theory B – Alternate Universes: Travelling to the past causes the creation of an alternate universe/timeline (e.g. Star Trek).

Theory C – Success: Instead of creating another universe/timeline, it shifts the current one, for there is only one linear universe/timeline in this theory. Thus, the previous future ceases to exist (or be altered). This could lead to the non-existence of the person that travelled (e.g. Back to the Future).

Theory D – Observer Effect: Just like Success except the traveller is essentially ‘Out of time,’ meaning they will not be affected by change (e.g. Groundhog Day).”

He then continues to apply this to Looper:

“… Looper creates its own time travel theory by merging two conventional ones: Alternate Dimensions and Success. This means the following:

1) Travelling back to the past can alter the future, creating another dimension in the process.

2) Changing past events can affect the traveller (e.g. losing limbs) [as happens to the character Seth, played by Paul Dano].”

The initial problem people have with this theory-merger is the seemingly contradictory nature of those two propositions. If a traveller creates/goes to an alternate dimension, are they not immune to any consequences faced by their younger, alternate self? How can an altered dimension still affect the original dimension the traveller came from?

Maguren then attempts to solve this problem:

“The temporal nature of Looper’s world wants the alternate dimension to remain as true to the original dimension as possible in order to avoid unnatural paradoxes. However, if a paradox were to occur, it’s not going to mean the entire universe implodes* (as the ending proves). It simply means that it’s unnatural for paradoxes to occur in the universe.

It’s similar to how opposite poles from two different magnets attract each other. They do not want to separate; it’s natural. An attraction is still held between them even when they are being forced apart (just like the original and alternate dimensions). However, with enough force, the attraction will cease and the magnets will separate. This is by no means a precise science (as they quite clearly state in the film). But Looper remains consistent to the rules of its universe … This does not mean that the exact same events will happen in the exact same way. Rather, key events will remain fixated due to the universe’s “magnetic” desire to keep dimensions linear.”

It is this “magnetic” attraction between dimensions or timelines which led another blogger, Kofi Outlaw,[v] to conclude that the film’s logic ultimately fails:

“We could go on and on like this, but we would inevitably find ourselves arriving back at the same conundrum: time travel theory: you just can’t have it both ways. Looper crafts a very good story out of a wild sci-fi premise, and while it dodges a lot of its own potholes scene-to-scene, when viewed from a distance it’s clear that Rian Johnson has not yet cracked the time travel movie conundrum.”


Maguren’s Final Diagram to Help Explain “Looper”’s Temporal Structure

By wanting to have it “both ways,” Johnson’s film is then perhaps consistent with its own logic, even if that logic is not coherent in regard to common notions of what time travel should be like. This leads one of the commenters on Maguren’s page to clear up, using Maguren’s proposal, one of the plot’s most central mysteries in the following way:

“… the rainmaker [the film’s erstwhile antagonist] never time-travelled. he kills his aunt [his foster mother], then is brought up by sarah [his biological mother]. initially he becomes the rainmaker who starts closing the loops, but because the Joes appeared in his childhood and changed his relation with Sarah to one where he accepts her as his mother, he will likely not become the rainmaker this time. that is the whole story about Cid [Pierce Gagnon][sic].”

All of which seems consistent with what Johnson himself says of the film:

“The approach that we take with it is a linear approach. That was an early decision that I made. Instead of stepping back to a mathematical, graph-like timeline of everything that’s happening, we’re going to experience this the way that the characters experience it. Which is dealing with it moment-to-moment. And so, the things that have happened have happened. Everything is kind of being created and fused in terms of the timeline in the present moment. So, the notion is, on this timeline, the way that old Joe is experiencing it, nothing has happened until it happens. Now, you could step back and say are there multiple timelines for each moment, and every decision you make creates a new timeline. That’s fine. You can step back and draw the charts and do all that. But in terms of what this character is actually seeing and experiencing, he’s living his life moment to moment-to-moment in the linear fashion and time is moving forward. And, as something happens, the effect then happens.”[vi]

While this might seem a bit naïve on the surface, it does harmonize with one of the fundamental principles of the theory of relativity, namely, the part which gives it the name of “relativity.” The concept is that for any “frame of reference” (often called an “inertial frame”), the same laws of physics apply.[vii] And so, even if your time and space is actually getting warped by something (e.g., coming near a black hole), you will only ever experience this as the space and time around you acting strangely. And this is why, if two different observers start to argue about whose spacetime is warping, they can only ever do so in regard to an outside standard of reference (e.g., a nearby star). Now if observers from these outside standards of reference start to disagree, there’s no way to tell who’s right, and in fact, all of them are right; it’s all relative to where you observe things. This is why the warping of spacetime in relativity theory means that sometimes things will appear in multiple places and times, depending on the spacetime and conditions from which they are observed.
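
For those curious about the standard formula behind this talk of frames and warping, here is a minimal sketch of the special-relativistic case, added purely for illustration (it is not something Johnson or the film invokes). A clock moving at speed v relative to an observer is measured by that observer to tick more slowly, by the Lorentz factor, where the first interval below is the one ticked off by the moving clock and the second is the longer interval the observer measures – and each inertial frame can consistently say this of the other:

\[
\Delta t' = \gamma\,\Delta t, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\]

Since the factor depends only on the relative speed v, neither observer’s description is privileged; the question of whose clock “really” runs slow only makes sense relative to a chosen frame of reference.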


Another Similar Diagram for “Looper”‘s Temporal Structure by Natalie Zutter.

All this is consistent with Johnson’s prioritization of the frame of reference of the character at any given moment as that which organizes the film. And while scientists haven’t been able to quite get the theory of relativity and quantum mechanics to fully come together yet, they know they both work in their respective domains, even if the big picture gets fuzzy – similar, once again, to Johnson’s film. Perhaps such fuzziness is all we will ever get.


Another nice graphic of the film’s temporal structure by Rick Slusher at Film.com.

Leibniz, Parallel Dimensions, and Sticky Timelines

While this can help clear up some aspects of the film, it doesn’t explain why there is the “magnetic” or “sticky” aspect whereby actions in a given dimensional timeline seem to pull, attract, or repel each other. That is, there’s more than just fuzziness at work: there are also forces which make things more or less fuzzy, and which push and pull in the process.

For example, when Sara slaps young Joe’s face, this seems to repel the punch in the face which old Joe receives when he meets his wife in the future. Likewise, there seems to be a magnetism of sorts between Joe and Cid. Both were abandoned by their mothers. As we are shown in the “flash forward” which shows what could happen to Cid if his mother is killed, he ends up on a train, as young Joe is instructed to do by old Joe earlier in the film to get out of town. If this path in time continued, he would end up fending for himself, like young Joe. There’s even a moment in which Sara says to young Joe “he’s you…” and it seems like she is speaking of Cid, but after a lengthy pause, she clarifies that she’s speaking about old Joe when she says “he’s your loop.” If it weren’t for the fact that young Joe seems to have none of Cid’s telekinetic powers, it almost seems as if Cid could be young Joe’s younger self. While it does seem clear that young Joe and Cid are different, they also seem at least semi-“magnetically” connected, perhaps, if nothing else, retroactively, by means of old Joe’s interventions in the past of the child who could turn into the potential Rainmaker.

This magnetism or stickiness between temporal dimensions is what has made critics argue that the film wants to have things “both ways,” collapsing two of the primary time travel models. Before explaining this in terms of contemporary quantum physics, however, it will be helpful to get a sense of how this was modelled by a philosopher who had quantum intuitions centuries beforehand, namely, G.W. Leibniz, and in particular, Leibniz’s prescient discussions of multiple universes in his famous text Monadology (1715).[viii]

According to Leibniz, every possible universe that could ever be imagined always already exists, before the universe even began, in the mind of God. God is, as many have argued, like a cosmic accountant or giant brain who examines all the possible universes, and figures out the best way to bring them together to produce the “best possible” universe. The reason for this limitation is that the structure of the world just doesn’t permit what Leibniz calls “incompossible” events to occur in the same universe. And so, to use his example, while the existence of a Biblical Adam who eats the apple in the Garden of Eden and one who doesn’t are both possible, they are not possible together in the same universe. These two possibilities cancel each other out; they are “incompossible.” What God does, then, is look forwards in spacetime to see which paths through the possible universes end up with the universe being best without contradicting itself, and make that pathway happen. And so, while the world may be full of suffering, for Leibniz, any other pathway through the possible universes would ultimately be worse for the overall good of the universe, as seen from the perfect knowledge, outside time, which only God possesses.[ix]

While Leibniz wrote years before anyone had dreamed of quantum physics, there are many ways in which quantum events are each similar to Leibniz’s God in their own domain. While individual quantum phenomena, such as electrons, don’t have the sentience to choose the “best possible” pathway, they do have the ability to filter out incompossible pathways, both forwards and backwards in time. This is what the famed spacetime “smearing” is all about. That is, a quantum phenomenon sends out what can be thought of as “feelers,”[x] not only in space, but also forwards in time, within the fuzzy “cloud” that it is, to eliminate potential pathways in time which would lead to incompossibilities. This is another way of saying that quantum phenomena act “as if” they “know” how to avoid contradictory events ahead of time (e.g., the “quantum eraser” experiments).

The reason why this can be described in terms of “feelers” which feel forwards in time, if within a given domain, is that this is in fact the way in which scientists themselves graph these things. Richard Feynman’s famous network-like diagrams plot out precisely the probabilities, in spacetime, which link states, and the stronger probabilities generally, though not always, win.[xi] While these feelers are virtual, for they indicate probabilities rather than anything real, they do result in what researchers call “consistent histories” between possible outcomes.[xii] All of which is to say, each particle, simpler yet nevertheless similar to Leibniz’s God, has some way, yet to be understood, which allows it to feel forwards in time, in a manner similar to these probability threads, to see which possibilities wouldn’t work. Scientists still aren’t sure how this “feeling” happens, but some, particularly those influenced by David Bohm,[xiii] have argued that particles literally “feel” ahead in this manner. While this would violate, in some cases, the prohibition from relativity theory that the fastest speed in the universe must be the speed of light, the debate rages on to this day about how to solve this paradox.
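
The “feelers” image has a standard formal counterpart in Feynman’s sum-over-histories approach, for which the diagrams just mentioned do the bookkeeping. What follows is only a schematic gloss in textbook notation, not anything from Zukav or the film: every possible path x(t) between two events contributes an amplitude weighted by its action S, the contributions are summed, and only the squared magnitude of the total behaves like a probability:

\[
A(a \to b) = \sum_{\text{paths}\; x(t)} e^{\,i S[x(t)]/\hbar}, \qquad P(a \to b) = \bigl|A(a \to b)\bigr|^{2}
\]

Paths whose phases clash largely cancel one another in the sum, which is one reasonably precise sense in which routes that don’t “fit together” get filtered out of the final outcome.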

Nevertheless, it certainly seems “as if” particles feel out possible pathways, eliminate the paradoxical ones, and then “decide” which ones they like best. This is why some theorists have argued that quantum phenomena are like minds, because no one can figure out how they “decide” to be in one state or another, such that, at least from the outside, it seems as if they are making decisions in the manner of animals and humans. That is, while quantum phenomena are predictable in the long term, in the short term they seem as if they have minds of their own, or as if they can somehow feel minute shifts in the microwinds of forces in the universe acting upon them in ways which are beyond our capability to detect.[xiv]

Collapsing the Rainmaker’s Wave: Quantum Entanglement and Splitting Timelines

To bring this back to Looper, it seems as if the depiction of time travel in this film is somewhere between Leibniz’s model of God and some of what was just described in regard to quantum phenomena, if in a way which is fuzzier than Leibniz imagined, and hence more like that of quantum physics. For rather than a God which strives to produce the “best possible” universe, it seems that Looper aims at something much more realistic, at least in terms of quantum physics, namely, a universe which aims to produce the most consistent possible version of itself, as manifested by a pull of sorts towards consistency.

We can see this by returning to the issue of Joe’s fuzzy memory. The closer young and old Joe get to incompossible events, the fuzzier old Joe’s memory becomes. Taken to its extreme, it is likely that at the end of the film, in the split second between when young Joe decides to pull the trigger on himself and the moment of doing this, old Joe’s memory would lose all ties to the past he knew with his wife, because it ceases to be compossible with the reality in which he found himself. It seems likely that his memory would, at that moment, be much closer to that of young Joe at his age, which is to say, non-existent, and precisely because they are both becoming versions of young Joe. That is, old Joe is becoming less real, more virtual, more a fantasy projection of young Joe, which is to say, he becomes less of a potential real future for young Joe.

This helps explain why many critics have argued, along with commenter Pete Cooper in Maguren’s comment thread, that “Young Joe and the Rainmaker can never exist in the same reality.” And this is true, for if young Joe doesn’t kill himself, or do something similar, Cid will grow up to be the Rainmaker, and this will lead old Joe to go back in time to kill Cid as a child, making sure this will happen and, in the dialogue of the film, “closing the loop” in multiple senses of the term. If Cid doesn’t grow up to be the Rainmaker, however, this is because young Joe killed himself, thereby preventing old Joe from going back in time to try to kill him and inadvertently producing the Rainmaker in the process. And so, a young Joe who kills himself and the Rainmaker cannot exist in the same universe; they are “incompossibles,” which is not the case with old Joe, and this helps explain why there’s interference between old and young Joe’s timelines.

Of course, this raises the conundrum of how the incompossibility of old Joe and Cid in the same universe could occur in the first place. And this is because time is currently flowing in two dimensions at once, one in which Cid becomes the Rainmaker, and one in which he doesn’t. This is in fact similar to another quantum phenomenon, known as “decoherence” or “the collapse of the wave function.”[xv] As quantum phenomena approach one another, and their probability clouds and “smeared spacetime” begin to overlap, it becomes more and more likely that there will be an interaction, a quantum event, in which they slam into each other and fly off in their respective directions. While some physicists argue this is because the particles are always there in solid form, and that they teleport around their clouds of probability, others argue that they are smeared through spacetime, and solidify when something comes into this zone and disturbs it with great enough intensity. Either way, the result is ultimately the same, namely, that what seems to us a possible interaction of particles becomes one in reality. We can’t really know what it’s like before the particles interact – whether there’s a cloud, a set of feelers, a single particle able to teleport around a particular zone of spacetime instantaneously, or something completely different. All human observers seem to be able to tell is that there is an interaction which follows along with the equations of quantum physics, even down to the degree and domain of unpredictability involved.
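
In the standard notation, the situation just described can be sketched as a two-branch superposition that “collapses” on interaction – a generic textbook schema, added here for illustration rather than taken from the film. Before a sufficiently strong interaction, the system is a weighted sum of alternatives, with the weights fixing how likely each is:

\[
|\Psi\rangle = \alpha\,|A\rangle + \beta\,|B\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1
\]

On interaction or measurement, the system is found in the first alternative with probability |\alpha|^{2} or in the second with probability |\beta|^{2}, and the loss of interference between such branches, as a system becomes entangled with its surroundings, is roughly what “decoherence” names.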

This can help explain why old Joe and Cid can exist in the same universe, but not young Joe and the Rainmaker. The child we know as Cid in the film hasn’t yet passed the point of no return after which he becomes the Rainmaker, just as young Joe hasn’t passed the point of no return after which he can’t stop old Joe from producing the Rainmaker. These are two sides, in a sense, of the same fuzzy event seen from multiple perspectives. And each of these perspectives is fuzzy in a temporal sense as well, for we don’t know we have passed a critical point until after the fact, except perhaps in the ways in which old Joe’s memory-anticipations start getting fuzzier when he approaches potential branching points which could produce an incompossibility, one which could impact the ability of these timelines to flow together. Such a crucial turning point is seen when young Joe shoots himself, and the “entanglement,”[xvi] or coherence, between old Joe and Cid collapses, decoheres, and the Rainmaker, and both Joes, vanish. Only Cid and his mother remain.

All of which can help us finally understand the cause of the “magnetism” between dimensions in Looper: even when time travel creates multiple dimensions, there seems to be a force which pulls everything back together, trying to collapse the dimensions back into the completely consistent state of one dimension. Like the manner in which gravity pulls heavenly bodies towards each other, so it is that time travel dimensions in Looper seem to pull towards the greatest possible consistency, at least in light of the multiple dimensions which have opened up.

While this may seem odd to our everyday sense of reality, none of this is bizarre to the strange world of quantum physics. While quantum particles cancel out direct incompossibilities, we have no way of knowing if they open up alternate dimensions to play out other scenarios in which these events wouldn’t be incompossible. Such is the interpretation offered by those who take a “multiverse” or multiple-universes perspective on the data of quantum experiments.[xvii] And it’s worth noting that all these theories of quantum physics are different perspectives on the same very consistent facts of experimental evidence, just as much as old and young Joe have multiple perspectives on the Rainmaker. In fact, it is possible that all the interpretations of quantum physics are right, from their perspective.

According to the multiple-universes perspective on quantum physics, every possible pathway which a quantum phenomenon could take actually is taken, each in its own possible universe. If this were the case, then there would be a nearly infinite set of possible universes. What then makes the universe we are in so special? Perhaps simply the fact that we are in it. But if quantum particles seem to cancel out incompossibilities in spacetime at their own level, there’s no reason not to think that our universe as a whole does this as well. Certainly some quantum physicists have argued that the same wave equations which can be applied to quantum particles can be applied to larger objects, up to and including the entire universe, such that perhaps our entire universe is like one giant entangled quantum state.[xviii] This has led some to speculate that our universe is in fact potentially not what it seems: an illusion, projection, simulation, or hologram of some sort. Whether or not this is the case, it does seem that quantum particles prioritize consistency within their local domain, and from such a perspective it doesn’t seem unlikely that, were there multiple universes, through time travel or the multiverse interpretation, these would prioritize consistency as well. Perhaps we live in the “most consistent” possible universe, and the reason why this is the case is because this level of consistency is needed to support life. Perhaps less consistent universes, in which things just vanish and appear at random, simply produce too many paradoxes.

And what would happen in case of a paradox? We don’t really know, but the characters in the film definitely seem to have some ideas. Time traveller Abe [Jeff Daniels] seems quite concerned to avoid massive time paradoxes, and old Joe certainly notices that the closer he approaches an incompossibility with young Joe, the more his memories are erased. If quantum phenomena seem to “erase” the possibility, beforehand, of incompossible actions, then it seems very possible that a true paradox would lead to precisely what we see with old and young Joe: when they hit a true incompossibility, they cancel each other out, like matter and anti-matter.

In this sense, it seems possible to read both the film and the evidence of quantum physics as demonstrating not only a force or pull towards consistency, but also an inverse force whereby inconsistency splits spacetime into dimensions. In this sense, it’s not unreasonable to say that the universe “desires,” whether this is called magnetism or force or anything else, something like consistency, because without it, it would shatter, and taken far enough, potentially even cease to exist, in the manner of Joe in the film. Just as quantum particles “seem” to “decide” on some states rather than others, it is as if the universe “wants” to keep existing, and the pull towards consistency is how this manifests within it, as in the film. Maybe Leibniz’s God wasn’t so far-fetched in the first place.

Of course, this is only a film. But in light of quantum physics, Looper’s time travel mechanics seem much closer to the way in which time travel would likely happen were it possible. Then again, quantum physicists have argued that quantum particles travel through time all the time, or at least, that is one possible way of reading the evidence, the one favored by Richard Feynman.[xix] Either way, in all these formations, whether in quantum physics or the time travel of the film, there is not only interference between dimensions or channels, but also a sort of instability and readiness to collapse as soon as an incompossibility arises to disturb an entangled, “sticky,” “magnetized” state. Perhaps coherent and incoherent phenomena are simply islands of stability within this dance of forces.

Ramifications via Deleuze and Lacan: Our Fractal Technoverse

Gilles Deleuze famously argued in his books on cinema that the feeling that time, and the world with it, was ‘out of joint’ began in the aftermath of World War II, and clearly Akira Kurosawa’s Rashomon (1950) stands out as a film which uses flashbacks to present multiple possible versions of the past which don’t align, which couldn’t all be possible, and without providing resolution. Since the film’s attempt to place guilt in relation to a horrible act of violence can be read as an allegory of the attempt to deal with the guilt and horror of memory in relation to the atrocities of the war, it doesn’t seem unlikely that the form of the film is an attempt to deal with the trauma of its contents, allegorical and otherwise. And as psychoanalytic critics have long argued, fragmentation is one of the primary responses to a trauma which remains difficult to process and integrate. Deleuze nevertheless aims to get beyond the limitations of psychoanalysis, and his argument about time and film is wide-ranging and goes beyond trauma, unless the trauma is seen as more than the war, as the generalized condition of living in our postmodern age. And so, in his cinema books, he traces the shift in the depiction of time in avant-garde films in the post-war period, by means of cinematic authors such as Fellini, Resnais, Tarkovsky, and beyond.

Nevertheless, at least for psychoanalysis, and in particular, in regard to the way in which time travel films have been analyzed by Slavoj Zizek, time-travel films are about the attempt to think what it would mean to change who you are.[xx] We often find it hard to believe we were the people we once were, and we try to imagine the people we will become, even if we will never turn out to be quite as we imagine. The process of changing our habits by means of reframing our memories and expectations is what happens in any good therapy, or in any self-aware life outside the consulting room. Time travel films, for all their often sci-fi plot devices, are really, for Zizek, about the attempt to intervene in our present moments. But they do so in a language which speaks to our times, namely, temporal dislocation and hi-tech gadgets.

This is why Deleuze traces the lineage of films in which time is “out of joint” to those which have no sci-fi premise, but which have mechanics similar to time travel films. Ingmar Bergman’s Persona (1966) is an excellent case in point. While the whole film has nothing sci-fi about it, the play of memories and fantasies leaves viewers with multiple possible interpretations, all layered on top of each other as possibilities, like so many quantum threads of possibility which are entangled until something forces them to decohere into one state over another.

While film seems to be, as Deleuze argues, particularly suited to function as a “time machine,”[xxi] it is not the only medium which can do this. Literature can imagine multiple timelines as well, for example, but it only began to truly do this at around the same time as film did, with perhaps Jorge Luis Borges’ “The Garden of Forking Paths” (1941) as one of the earliest examples. Of course, language and images of all sorts were always virtual realities of a kind, but only after World War II did time seem to truly “go out of joint.” While the fissures can be seen as early as experiments with painting, such as those of Picasso or the Italian Futurists, it is only retroactively that the true potential import of these devices becomes clear.

If most time travel films thematize single loops through time, the more baroque manifestations of recent times, such as Twelve Monkeys, Donnie Darko, Primer, The Prestige, Moon, and Looper, show that relatively mainstream film, if towards the more difficult end of the mainstream, is now approaching the experiments of the avant-garde. These films, which take what Deleuze would call a “crystalline” structure, are increasingly becoming similar to the complexities of films like Terayama’s Pastoral (Japan, 1973) or Alain Resnais’ Last Year at Marienbad (France, 1961).

Ours is increasingly a world in which spacetime between individuals is generally measured in how long it takes for a message to fly between mobile devices, short-circuiting the 3D spacetime of the mere physical world. As such, linear time seems to be fading out, increasingly replaced by virtual sites on the Internet which update and mutate in webs in relation to each other. And while the web does seem to tend towards something like consistency, it still allows quite a few paradoxes to exist in powerfully entangled states.

Either way, this new spatiotemporal reality is increasingly moving from the realm of fantasy to the everyday. And so, a film like Looper is likely to resonate with a great many who’ve never encountered quantum physics, but simply feel what it’s like to live every day in our hypermodernity. And as our everyday life increasingly begins to take on quantum aspects, with micro and macro levels of our worlds echoing each other as in fractal images, perhaps films like this can help us intuit some new ways to navigate the challenges of our age.

Notes:

[i] http://www.slashfilm.com/ten-mysteries-in-looper-explained-by-director-rian-johnson/

[ii] Several excellent general introductions to quantum mechanics and relativity theory exist which explain the many approaches to interpreting the findings of quantum mechanics. My preferred introduction for beginners remains Gary Zukav’s The Dancing Wu-Li Masters: An Introduction to the New Physics (HarperOne, New York: 1979), which, even though over three decades old, still remains one of the most accessible and philosophically friendly explanations of the basic issues at stake, and in ways which recent developments complement rather than displace. For those seeking a more recent source, see the slightly more technical Timeless Reality: Symmetry, Simplicity, and Multiple Universes by Victor Stenger (Prometheus Books, New York: 2000).

[iii] An extensive discussion of quantum eraser experiments can be found in Karen Barad’s Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Duke Univ. Press, Durham: 2007), pp. 247-352. It should be kept in mind, however, that Barad’s description, while excellent, privileges the Copenhagen model of interpretation proposed by Niels Bohr. For a more balanced interpretation of issues related to spacetime symmetry, see Stenger (op.cit.), pp. 26-151.

[iv] (http://www.flicks.co.nz/blog/a-man-of-100-words/looper-explained-with-straws/).

[v] http://screenrant.com/looper-ending-explanation-time-travel-spoilers/2/

[vi] http://www.huffingtonpost.com/2012/09/30/looper-ending-explained-rian-johnson_n_1927860.html

[vii] For more on inertial frames in relation to relativity theory, see Paul Davies, About Time: Einstein’s Unfinished Revolution (Touchstone, New York: 1995), pp. 44-77.

[viii] See “The Monadology” in G.W. Leibniz, Discourse on Metaphysics and the Monadology (Dover Books, Dover: 2005, reprint).

[ix] For more on Deleuze’s film theory in relation to Leibnizian incompossibility, see Gilles Deleuze, Cinema II: The Time-Image (Univ. of Minnesota Press, Minneapolis: 1989), pp. 130-131.

[x] For more on virtual particles and Feynman networks, as well as how these can be thought of as “feelers” in spacetime, beyond models of quantum events which reduce them to simple particles, see Zukav (op.cit.), pp. 237-282.

[xi] For more on the networked structure of Feynman diagrams, see ibid.

[xii] Consistent histories are at least in part what Feynman diagrams are used to determine. For more, see Zukav (ibid.), or Stenger (op.cit.), pp. 147-8.

[xiii] David Bohm’s highly influential notion of the implicate order, and its use in interpreting quantum mechanical phenomena, is explained at length in Wholeness and the Implicate Order (Routledge, London: 2002, reprint).

[xiv] For more on the notion of hidden variables, and the deconstruction of the reductive argument used in relation to these to devalue the non-local arguments proposed by Bohm and others, see Bohm (ibid.), pp. 83-139.

[xv] More on the “collapse of the wave-function” can be found in Zukav (op.cit.), pp. 83-96, while more on decoherence can be found in Stenger (op.cit.), pp. 148-151.

[xvi] For a book-length treatment of the notion of quantum entanglement, see Brian Clegg, The God Effect: Quantum Entanglement, or Science’s Strangest Phenomenon (St. Martin’s Press, New York: 2006).

[xvii] For more on the multiverse model of the cosmos, see John Gribbin, In Search of the Multiverse: Parallel Worlds, Hidden Dimensions, and the Ultimate Quest for the Frontiers of Reality (Wiley, Hoboken: 2009).

[xviii] For more on the notion of the entire universe as something like a wave-function, and hence, potentially existing in an entangled state, see Gribbin (ibid.), pp. 23-33.

[xix] For Feynman’s account of how particles travel backwards in time, see Zukav (op.cit.), pp. 242-3.

[xx] For Zizek on time-travel in relation to the “temporality of the symptom” in psychoanalysis, and its relation to film, see, for example, The Sublime Object of Ideology (Verso, London: 1990), p. 161.

[xxi] See D.N. Rodowick, Gilles Deleuze’s Time Machine (Duke University Press, Durham: 1997).

Book Review: “Post-Cinematic Affect,” by Steven Shaviro, Zer0 Books, 2010.

•October 31, 2014 • Leave a Comment

Post-Cinematic Affect in Action – Grace Jones’ “Corporate Cannibal”

Wrote this in 2011, but I’m updating my website, and adding some new content that should have been here long ago. Enjoy.

Welcome to the post-cinematic mediasphere, the timeless time of the space of flows, the neuro-affective flat ontology of smooth capital. Steven Shaviro’s new text, Post-Cinematic Affect (Zer0 Books, 2010), is many things. On the one hand, it is a guided tour of the mediascape to come, a futureflash of the way the world will feel once today’s emergent media formations reach their mature forms. On the other hand, it’s also a diagnosis, an attempt to understand the manner in which capital and the image will increasingly intertwine in the world to come. Both media analysis and critique of capital, Shaviro’s slim tome is understated in its presentation, but wide in its potential effects. It’s an important book, one at the cutting edge of the attempt to think the dark underside of the networked age to come.

Shaviro describes his enterprise as an attempt to perform an “affective mapping” of what, following James Cascio and Gilles Deleuze, he calls the “participatory panopticon” of the “control society . . . which comes from everywhere and nowhere at once” (8). In such a world, “personalities. . . [are reduced to] shells within which social forces are temporarily contained” (108), and all terrain is reduced to any-spaces-whatsoever (espaces quelconques), monadically disconnected from each other, yet vague enough to morph at will in the timeless time, the “always being about to happen”-ness (86), of a mediascape which is purely relational, without exterior, and always in flux. Welcome to the smooth space of flows as a vision of hell.

What is left in a world in which the very categories of ages past, including space, time, subjectivity, agency, and community, even the boundaries between media themselves, are dissolved in the disjunct unity of a fluid that percolates without end, yet always drains surplus elsewhere? Affect. Waves and waves of affect. Affect, for Shaviro, is counter-representational by nature; it is emergent, transpersonal, distributed, virtual. It is that which flows in the world in which humans used to produce and consume commodities in factories and engage with the ‘real’ world. Now, instead, we have the near-completion of real subsumption, leaving us to scrounge for remainders or search for a way through to the other side. As the boundaries between cinema and portable computing, video-games, and websites increasingly begin to blur, as the Deleuzian time-image is drained of its duration by digital composition and post-continuity editing, and as we move to neuromodulatory media forms in which all pretense to plot and character dissolve into the affective high that a figure transmits, we find ourselves increasingly in the post-cinematic video-drome, the ambient wave-space of perpetual revolution, in which player and played are all played by a system that feeds itself on our ebbs and flows.

Instead of subjects and objects, what’s left is figures, and this is precisely what Shaviro works to map. The bulk of the text is made up of close readings of four recent media works, “diagrams” (6) and “machines for generating affect” (3), by Grace Jones/Nick Hooker, Olivier Assayas, Richard Kelly, and Mark Neveldine/Brian Taylor. Shaviro intentionally goes after works dismissed by others as excessive or failed, for he sees in these overblown bits of detritus the trace of the futurescape to come. Shaviro is fascinated by the pooling of affect around celebrity, the currents that flow in and out of the “amnesiac actors” which replace what used to be subjects, the shattered dividual subjectivities that play out on the virtual post-cinematic mediascape, and the virtual spacetimes carved out of the flows of affect by its own movement within itself. Like the figures he traces, the media texts he examines are merely traces of movement. What he’s interested in is mutation, the drainage of Deleuze’s time-image, and the production of a new hyper-circulatory paradigm which he prophetically argues is coming to dominate our age.

Shaviro tracks the manner in which many of the buzzwords valorized by contemporary Deleuzian-inspired theory are ironically most apt for describing the most terrifying aspects of today’s world, such that Post-Cinematic Affect can serve as a wonderful tonic to the celebratory sides of contemporary Deleuzian, network/complexity, and futurist paradigms. For Shaviro, contemporary space has become relational and virtual, morphing into anything at will, never committing to one form or another, so that it can always become smooth to serve capital’s needs to mutate and serve ‘just-in-time’ production and circulation. Where there used to be masters (and master signifiers), now there are icons, patterns of modulation, for “modulation is the process that allows for the greatest difference and variety of products, while still maintaining an underlying control” (15). In place of subjects, what remains are points of transfer of affect, figures which echo in simulated interiority the icons which direct them, each composed of the flows whose densities determine the spacetime terrain in which accumulation occurs, siphoned somewhere eternally off-site. Series of “affective constellation[s]” (73), the result feels “unspeakably ridiculous . . . creepily menacing . . . [and] exhilarating” (85). It’s the world of the perpetual music video, in which media sings just for you, in which distributed scapes of feeling wash over transfer points, and yet one in which the need for perpetual flow keeps everything vague enough that one can “never leap from affect to concept” (73). And what of the much vaunted hope in networks and complex systems? Shaviro dryly slams contemporary complexity theory approaches: “actually existing capital is metastable. It functions as a dissipative system . . . . operating most effectively . . . at far from equilibrium conditions” (189), such that “networked manipulation works more effectively than a hierarchical chain of command ever did” (107). It seems possible, however, that there are many types of metastable networks, a possibility that Shaviro doesn’t address.

Such a world, for Shaviro, is one best described by the much valorized term “flat ontology.” For when anything can become a medium of exchange for anything else, the ability to distinguish between the master signifier and the chain of signifiers slips on the smooth space of numbers which, for Shaviro, underpins the whole apparatus. For it is “digital transcoding as common basis” (134) which allows for the interchangeability of everything that can be quantified. The result is the precaritization of work, the shift from physical to symbolic production, material to affective/intellectual labor, production to financialization, and the near complete real subsumption of the world by capital, such that “everything is a potential medium of exchange, a mode of payment for something else” (46). For Shaviro, “the only thing that remains transgressive today is capital itself . . . [it] transgresses the very possibility of transgression, because it is always only transgressing in order to make more of itself, devouring not only its own tail but its entire body, in order to achieve greater levels of monstrosity” (31). Within the perpetual now of the modulatory regime, motion and duration are simulated, and resistance is, at least so it seems, futile.

Or worse, it is incompossible. For if the fear of our current, cinematic age may be summed up by the words “chaos reigns,” uttered by the uncanny fox in Lars von Trier’s recent trainwreck of a film Antichrist (2009), the dystopia to come is probably best described, by Shaviro, as “incompossibility reigns.” Shaviro’s symptomatic reading of another recent cinematic trainwreck, Richard Kelly’s Southland Tales (2006) (by an equally talented director, I might add), shows how in tomorrow’s mediascape, the properties that used to belong to individuals – personalities, facades, desires, fears – are now shattered amongst numerous sites, while individuals are now required to be ‘flexible’ adaptors, ready to play any role, grasp hold of and channel any fragment of what used to be referents of the ‘real’ world, and all just to survive. The world to come is one in which not only is everything possible, but everything has already happened, and is already happening, now, all at once, even if it is all virtual, so that none of it can stick, resulting in a depthless simulated miasma. What’s left are networks of modulations of affective constellations, and the constant jerking between what Bolter and Grusin have described as the poles of hypermediation and immediacy (115). Like the inside of some cruel quantum particle, Shaviro shows us the dark side of contemporary science, media, economy, and society. Can anything be done?

Towards the end of the work, Shaviro discusses what Benjamin Noys (136) has called the strategy of ‘accelerationism’ – if it’s impossible and/or undesirable to go backwards or slow things down, then perhaps the solution is to try to make things go faster. As political strategy, Shaviro smartly remarks that the collapse a hypertrophic crisis could bring about might in fact lead to formations much worse than what we have now. But aesthetic accelerationism is a strategy Shaviro endorses, in that it allows us to explore the contours of the landscape of a world in the process of formation. Works of art send out probeheads, to use the Deleuzian terminology, examining new ways of being, potentially revealing new forms of resistance in the process.


Shaviro diagnoses the problem, yet is careful not to recommend any real cures – only time will tell where this is all going. But he clearly has his finger on the pulse of the cannibal impulses of our anticipated present. If we are to find any hope in between the lines of Shaviro’s dystopia, it is that perhaps there are ways to turn the very tools of capital against itself. Shaviro seems to hesitate – he knows we cannot go back, but we also cannot go forward on our current path without the dystopian world he analyzes in his text coming true. But perhaps there are other types of networks, other types of meta-stability, flat ontology, relational space, virtuality? This, it seems, is the deeper question this text tries to ask, and it is a question that truly hits at the debates central to contemporary theory today. And while Shaviro occasionally nips at the counter-strategies composed by Michael Hardt and Antonio Negri, it seems that this is because Shaviro feels that the real strategies of resistance are yet to come.

While at times risking Baudrillardian fatalism, Shaviro seems to point at one ray of hope, namely, that new media formations can teach us, in their own way, how, to use a phrase employed by Deleuze, to “believe” once again in the world (60). But before we can get there, we need to map the new terrain, and that is what this slim yet sly volume seeks to do, and succeeds at masterfully.

Post-Foundational Mathematics as (Met)a-Gaming

•May 29, 2013 • Leave a Comment

Mathematics is a fundamentally human activity, and a semiotic one at that, which is to say, it is an activity of making and using signs in relation to the wider world of practices whereby humans relate to their worlds. While this might seem obvious, most working mathematicians self-identify as Platonists, which is to say, they take the position that they are working with realities which are “really there,” “mathematical objects” which are able to be discovered by means of techniques modelled on those of the discovery of physical objects in nature. Mathematical objects, which is to say, things like numbers and geometrical figures, are ideal entities whose contours are wrenched from the fabric of the ideal itself by means of the techniques of logico-mathematical proof. All of which is to say, even if there were no physical world, the truths of mathematics are “really there,” as if God-given, hence the term Platonism, often worn with pride by mathematicians today. The locus classicus of this position is Leopold Kronecker’s famous remark, reported in 1893, that “God made the natural numbers, all the rest is the work of man.”

Nevertheless, Kronecker was responding to the foundations crisis which was beginning to shake the tree of mathematics. For example, logicists like Gottlob Frege had attempted to “found” mathematics upon the basis of its subsumption to the “rules of thought” articulated in his new logical calculi. The problem with this, however, is that it hardly did what Frege intended, which is to say, to “ground” mathematics, and hence show its absolute necessity in “all possible worlds” (to use a term from Leibniz), but rather revealed just how ungrounded the seemingly incontrovertible world of mathematics actually was. When combined with the set theory developed by Georg Cantor, or the slippery attempts to ground number in linear continua described by Richard Dedekind, it seemed that just as mathematics had radically increased in power and rigor during the nineteenth century, it had also revealed in the process that, perhaps despite or by means of this very power, it was all illusion, a sleight of hand. Did the Emperor have clothes? Kronecker believed that blind faith was the answer, and so do many mathematicians today.

And of course, if one works far from the limits of the mathematical enterprise, which is to say, far from the applied aspects of mathematics which find themselves continually in dialogue with the physical world and its non-mathematical impingements upon the edifice of pure mathematics, then one is safe from these issues. Likewise, if one doesn’t stray too far into the realm of the pure, to the foundations of mathematics itself, one is also able to skirt around the issues of how precisely mathematics derives its authority or internal consistency. It is in the “dirty middle” realm, from which the shores of the physical world and the purely ideal world are both distant horizons, that the terrain of mathematics appears boundless. But at the shores, the issues become muddier indeed.

And this is what the foundations crisis that shook the mathematical world at the start of the century revealed. Some argued, with Hilbert, that mathematics was purely about signs, and was merely a game, and hence should not be compared to the physical world. Any need for grounding was then moot, because mathematics grounded itself, circularly, and needed no justification beyond this. Its own internal consistency made it a form of sophisticated play, and if it was useful in the world beyond mathematics, then so be it, but this was ultimately accidental and not something worth the time of mathematicians. This “formalist” approach, however, simply ignored the fact that the engine of mathematical creativity had come not only from within, but from without. The radical developments within mathematics during the eighteenth and early nineteenth centuries, for example the great works of Leonhard Euler, were often spurred by attempts to solve problems from mechanics, which is to say, very practical issues which engineering posed to the lofty realm of math, and which it could not yet answer. Even analysis, the great discovery of Newton and Leibniz, had been wrested from the gods of mathematics by the push to describe accurately the motion of heavenly bodies, not to mention the behavior of mundane physical objects. Whatever mathematics is, it is hardly pure.

In contrast to this, we see the Intuitionism proposed by L.E.J. Brouwer, who argued that mathematics should simply get rid of, as purely abstract nonsense, anything that couldn’t be grasped by the intuition of the mind. Brouwer attempted to “construct” mathematics on the intuitions of the mind, producing an analogy between the manner in which the mind intuits the objects of the physical world and the manner whereby it intuits the ideal realm of mathematical entities. A highly influential early twentieth-century movement, one based to a large degree in Neo-Kantian ideas of scientific method and practice, Intuitionism largely fell out of favor, along with formalism, even as pure Platonist and applied approaches found their own limits in Goedel’s famous “incompleteness theorems” of 1931.

Goedel’s singular accomplishment was to put all four of these approaches to grounding math – Platonic idealism, Physical Realism, Intuitionist Neo-Kantian Subjectivism, and Objective Structuralist Formalism – to rest as aspects of the insolubility of the same problem. That is, math simply could not be grounded from within, nor could it be grounded from without, without proving itself ultimately both grounded and ungrounded, and in fact both and neither, from a mathematical point of view, in the process. What Goedel essentially did, then, was show that the very notion of “grounding,” at least as this notion was being framed by mathematicians of his time, was part of the problem. That is, mathematicians who wanted to ground mathematics from within, as the insurer of its own truth, would find only circularity, but no ability to say if this circularity applied to anything beyond math. Those who wanted to ground mathematics in something beyond math, such as the physical world or human intuition, or even the workings of ultimately meaningless signs, would find that all they could prove by means of the tools provided by math, from within at least, was that mathematics relied on something beyond it, with no way of showing the need for this, or the need for any relation to one particular grounding or another, by solely mathematical means. That is, to ground math with something beyond it (ie: human activity, human signs, human intuition, god) would require actually bringing something outside of math inside of math. And that would produce contradiction.

Circularity or contradiction: the result would be incompletion or incoherence, respectively. The final option, oscillation or inconsistency, was simply what most working mathematicians did, which is to say, to use whichever option made math “work best” in a given local instance, and leave “grounding” for some other time. What Goedel did was show that Hilbert’s famous demand that math prove itself “consistent, coherent, and complete” by its own means, which is to say, by means of mathematics, simply cannot be met, and that this is not simply an accident, but part of the very structure of the way mathematics itself works. Goedel showed that math had its own limits.
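
For readers who want the formal claim behind this, the incompleteness theorems are usually stated roughly as follows; this is the standard textbook formulation rather than anything specific to the argument here, offered only as a gloss:

    % First incompleteness theorem (standard formulation):
    \text{If } T \text{ is consistent, recursively axiomatizable, and contains enough arithmetic, then}
    \exists\, G_T :\quad T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T
    % Second incompleteness theorem: such a theory cannot prove its own consistency.
    T \nvdash \mathrm{Con}(T)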

Of course, some of this might seem like common sense. Math always counts (numbers) or draws (geometry) something which is not math itself. If I see a group of animals and count them, and label them “four dogs,” the dogs and the number “four” are fundamentally different things. Math is always both about and not about mathematics. When math tries to eliminate any aspect of it which is not math, or which is math, the result will always be, at least from within math, paradox. So it is with any signifying practice. The same can be said about language. A dog is not the same as the word “dog,” and ultimately, one cannot “ground” the relation between the two, at least from within language, without producing paradoxes. Just as Goedel showed this within mathematics, so Jacques Derrida famously demonstrated by means of linguistic deconstruction, and Ludwig Wittgenstein in his own way some thirty-odd years before. The fact that Goedel, Wittgenstein, and Derrida share what can be seen as variations of the same insight in different fields, one which resonates strongly with that of Heisenberg in physics, is perhaps no accident.

Naturalistic Mathematics?

If mathematics cannot ground itself, perhaps it can be grounded in the fact that humans devised it. That is, it is a signifying practice, like that of language, whereby humans describe aspects of their world so as to interact with them. Mathematics is a special type of language, but language nevertheless. Of course, most working mathematicians are likely not to like this, because it subordinates their activities to something, anything. But there is no unsubordinated position from which to view the world; we are always mediated in our relation to anything and everything, and likely are nothing but mediations of mediations all the way down, fractally and holographically. That is, any notion that there is some ‘God’s Eye Perspective’ from which to survey the world seems as singularly outdated as any other simplistic form of faith in the unbiased. All is perspective, and mathematics is simply one amongst others. It is useful, of course, but so are language, our bodies, our brains, etc. Each of these has been viewed by its partisans as the singular, privileged lens on the world, and each can be decentered by others. Why mathematics should be any different is beyond me.

Rather, we live in a world of networks, each of which supports the others, culture and nature, language and physics, human and animal, living and non-living, each an aspect of a wider whole which supersedes them all. Whether we call this whole experience, or the universe, these are also simply aspects of it, attempts to describe the whole. And as Goedel showed in his way, Derrida in his, and Heisenberg in yet another, the attempt of any system to grasp the whole from within it is likely to founder in paradox. In mathematics, of course, this is the famous paradox of the barber, Bertrand Russell’s popular illustration of the problem of the “set of all sets that are not members of themselves.”
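
Stated formally, the paradox gestured at here takes only two lines; this is the standard set-theoretic formulation, not anything peculiar to the present argument:

    R = \{\, x \mid x \notin x \,\}
    R \in R \iff R \notin R

Whichever answer one gives to the question of whether R belongs to itself, the definition forces the opposite answer, which is precisely the kind of foundering-in-paradox described above.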

That said, these problems all become less of an issue if we say something like: mathematics is a human signifying practice, which is useful in its domain in relation to others. Of course, this raises the question of what use means, but since humans are those who determine that which is useful to them, and are also the originators of any math we have ever known, then we could perhaps say that mathematics reflects aspects of what humans value in the world. That is, mathematics has helped humans do the sorts of things they value, mathematics itself being one of them. While some enjoy doing math for its own sake, the urge to do something like mathematics does seem to find its impetus in practical activity, which is to say, the attempt to describe the world so as to be able to do things in relation to it. There is no question that both pure and applied mathematics have given rise to new forms of mathematics, but as some of the more “naturalistic” philosophers of mathematics today have argued, math is always between the physical and the ideal, with one foot in each, dirty and impure to the core. That is, it is a form of media, just like language, or the body. A lens on the world of experience, if a particular one, like yet different from all other media in this way.

While naturalistic approaches to mathematics are in the minority among working mathematicians, and even philosophers of mathematics, naturalism seems to be the only approach to mathematics which takes into account the foundations crisis of the early twentieth century. If it dethrones mathematics from its attempt to imagine itself as the queen of the sciences, well, let it join philosophy and every other dethroned discipline which aimed for such a role. For perhaps it is the very desire for centrality which is the problem, rather than something for which a solution must be “found,” for this desire seems to give rise to paradoxes, whereby the very fabric of, well, something, call it the world or otherwise, resists. Physics, linguistics, mathematics, the foundations crises of many disciplines of the twentieth century: they all seem to indicate that the center does not hold, and yet, centerlessly, they still do many things. Naturalism attempts to start from this, from activity in the world, and human activity at that, rather than from ideal foundations, be they ideal in the classical sense, or the materialist inversions thereof.

Post-Structuralist Approaches to Mathematical Gaming

If mathematics is a human activity, then perhaps it may be possible to philosophize about it from this perspective. Certainly mathematicians refer to specific “things,” which they describe with symbols which they manipulate. These signifieds of mathematics are represented by signifiers, which is to say, the graphs and equations scratched on paper, computer screen, and chalkboard, which “represent” something generally called “mathematical.” If “mathematical objects” are signifieds, meanings, that which are described and represented by mathematical signs, considered as signifiers, then perhaps we can think of mathematics as a specialized type of language, and the practice of mathematics as a type of writing and speech. It is certainly not one which is meaningless, as Hilbert famously argued it to be. No, mathematics seems to be about the world as much as about itself, just as any language, and yet, it is a very particular sort of language at that.

Like any language, mathematics can be considered, as Wittgenstein famously argued, a game. That is, it has rules, and people get quite heated if you break them, even if the rules of the game are always being changed from within as you go. Good moves in the game, in fact, change the very nature of the game itself, and in doing so, change what it means to play, the players, etc. In this manner, the rules of mathematical play are like those of linguistic play, which is to say, mathematics has a grammar, just like natural languages do, even if this grammar works differently than those of natural language. But this grammar is a grammar nevertheless. And so, a (post)structuralist analysis of mathematics is not only possible, but I would say, desirable.

Structuralism viewed languages as composed of utterances, often described as “parole,” in relation to structuring categories which were implicit yet made sense of utterances, or “langue.” There are, of course, several types of langue working in any given language. For example, in a natural language such as English, there is the langue of the semantics of the language, which is to say, the meanings of words, systematized in a dictionary, which a competent speaker of English would need to understand to “make sense” of a given utterance. Just as one couldn’t make sense of a sentence such as “The cat is on the mat” without knowing, for example, what a “cat” is, or what it means to be “on” something, likewise, one cannot make much sense of a mathematical sentence such as “x – 5 = 17” unless one knows what “5” means, and how this mathematical “word” differs from that of “17.” While it is necessary to bring in forms of semiotics which deal in diagrams to describe how this could be applied to notions such as geometry, the semiotics of C.S. Peirce seems more than adequate to the task.

Below the level of semantics, or the meanings of words, are the deeper structures, those which determine the ways in which these meanings can be linked to each other. A word like “is” in “The cat is on the mat” is really not merely a word, but a word which represents grammar, or syntax, the foundation out of which word meanings arise, for it provides the fundamental and implicit categories which allow the meanings of words to take form. And so, if one were to look up any word in a dictionary, one would find that the word in question “is” this or that meaning. “Is” is both a word and a meta-word, so to speak, and this is what is meant by langue in relation to parole. Just as knowledge of the meaning of a term at a given level is necessary to understand an utterance, so is knowledge of the meanings which describe the meanings of these words, which is to say, the rules of the game as well as the particular move being made. And so, if one doesn’t understand what “is” is, then understanding the particular meaning of “The cat is on the mat” is likely impossible. The same goes for grammar markers in mathematics, such as “=,” which ultimately, is very similar to saying “is” in a natural language such as English.

As is likely apparent from the preceding, recursion is operative here, not only at each level, but at any level. That is, each and any utterance/parole is related to a langue, which is itself a parole in relation to at least one other langue, and this repeats fractally. This is the contribution made to this sort of structural approach by post-structuralism, namely, the attempt to show the paradox of any attempt to find an ultimate foundation at work in such an analysis. And so, rather than argue that a notion such as “is” represents a “deep structure” of the language of mathematics, and hence, in some way, the world itself, a post-structuralist approach uses relatively similar methods to show that the process can be carried on infinitely, with no ultimate ground in sight, or, if one wants, arbitrarily ended for convenience’s sake. But any attempt at ultimate ground will give rise to something like infinite regress, which is to say, incompletion, arbitrary end, or incoherence, or some mixture, which is to say, inconsistency. Post-structuralism and Goedel are on the same page on this one.

Mathematical Meta-Gaming

From a post-structuralist perspective, then, it becomes possible to say that mathematics has objects, which are meanings within its semantics. These objects are things such as numbers and shapes, or any of the other entities which mathematics attempts to “treat.” When mathematics deals with combinations as if they were “things,” which is the discipline of combinatorics, then we know we are in the realm of mathematical semantics. These things are then linked together to produce utterances, according to the rules of grammar implicit to the “game” of mathematics.

The sorts of utterances vary, however. Some are simple equations, such as “x – 5 = 17,” which are then transformable, by a known series of procedures, into “x = 22,” such that it becomes possible to state that these two are themselves “equal.” There are procedures here, such as the “solving” of equations, the equations themselves being the utterances. And these procedures constitute the grammar whereby mathematical equations are transformed, one into the other.
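
To make the “known series of procedures” concrete, the transformation can be written out step by step, each line licensed by a grammatical rule of the game (here, adding the same quantity to both sides); this is just a schoolbook derivation, spelled out for illustration:

    x - 5 &= 17 \\
    (x - 5) + 5 &= 17 + 5 \qquad \text{(add 5 to both sides)} \\
    x &= 22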

But then there are meta-mathematical utterances, and these are proofs. A mathematical proof makes use of procedures within the language of mathematics to create utterances which attempt to alter the way the game is played. This is, for example, not all that different from the role of argument in philosophy. A philosopher might argue, for example, that we shouldn’t think of reason or god as this or that. Ultimately, the philosopher is using words to impact the way we use words, just as a proof indicates the ways in which a mathematician uses math to impact the way math is done. Of course, the goal is ultimately to impact the way people think about math, but seeing as mind-reading isn’t yet possible, the only way we’d know how people think is by how they act, which is to say, how they “do” math; a proof may aim at how people think, but ultimately, it only manifests its effects in how people “do” math. The same could be said of the role of argument in philosophy.

None of which is to say, of course, that I haven’t been doing precisely this in what I’ve already said. In fact, the preceding paragraphs are simply arguments, attempts to impact the way the game of philosophy of mathematics is done, from within it. And this sort of meta-gaming is part of how the game is played, even if the results of this are always uncertain, which is to say, incomplete, inconsistent, or incoherent, at least from within the game as it currently stands. But games evolve, and meta-gaming is how this happens, in math and language as much as in any other sort of gaming. And so, if I decide all of a sudden that a bishop can now jump, but only over rooks, in chess, and this move catches on, and becomes part of the new rules amongst the “community” of chess gamers worldwide, then I have made an utterance, not within the game, but also not beyond it. In a sense, I’ve made a meta-utterance or meta-move in regard to the game, thereby altering its grammar from within, as sketched below.
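
A toy sketch may help make the distinction between a move and a meta-move concrete. The following Python fragment is purely illustrative; the rule table and function names are hypothetical inventions of this gloss, not any actual chess engine’s API:

    # A minimal sketch: rules as data, moves as plays within the rules,
    # and meta-moves as operations that rewrite the rules themselves.
    rules = {
        "bishop": {"movement": "diagonal", "may_jump_over": []},
        "rook":   {"movement": "orthogonal", "may_jump_over": []},
    }

    def play(piece, rules):
        # An ordinary move: an utterance made within the current grammar.
        return f"{piece} moves {rules[piece]['movement']}"

    def meta_move(rules):
        # A meta-utterance: the bishop may now jump, but only over rooks.
        new_rules = {name: dict(spec) for name, spec in rules.items()}
        new_rules["bishop"]["may_jump_over"] = ["rook"]
        return new_rules

    print(play("bishop", rules))             # a move within the old game
    rules = meta_move(rules)                 # the grammar itself is altered
    print(rules["bishop"]["may_jump_over"])  # ['rook']: the game has changed

The point of the sketch is simply that the second function operates on the rule set rather than within it, which is what distinguishes gaming from meta-gaming in the sense used above.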

Mathematics does this all the time. In fact, that is precisely what Goedel did, and what others do when they “invent” new mathematics. Those meta-moves which catch on become movements, such as “category theory,” or meta-meta-movements, such as “post-foundationalist mathematics.” None of which is to say that the meta-games precede the games, since there was no such thing as “post-foundationalist mathematics,” or even the need for this, before the foundations crises. Sometimes the meta-games pre-exist, because they have already been called into existence (ie: people have been talking about the grammars of languages for quite a long time), while other times, they are produced, even giving rise to new layers in between existing layers.

In a similar manner, mathematicians can give rise to new objects. Certainly “category theory” has not only given rise to new mathematical grammars within and beyond it, but also new mathematical objects, such as “functors” or “categories.” The field of abstract algebra, of which category theory is simply one form, is in fact the branch of mathematics which works to deal abstractly with the various sorts of ways of relating objects and grammars to produce utterances and meta-utterances. From its start in set theory, modern algebra developed into group theory and beyond, and, by means of Emmy Noether around the time of Goedel, became the meta-mathematical enterprise it is today. If Goedel destroyed the hope of a single meta-mathematics, Noether proved that the true name of foundations was “many,” even if meta- as a notion was only ever “one” in relation to a particular location. Noether showed that grammars and objects have plural ways of relating. And from such a perspective, it becomes possible to see that foundations are not things, but verbs, processes of continually founding and refounding, which is to say, of relating to levels of micro- and macro-scale within a given level of practice, semiotic or otherwise.

It is for this reason that many have turned to category theory as a possible inheritor to set theory, as a possible “post-foundational” foundation for mathematics. What is so incredibly slippery about category theory is that it defines its objects, grammars, and moves relationally. That is, the very meaning of an object is what you can do with it, and these “moves” give rise to the very categories of objects in question, and vice-versa. That is, object, category, and move are interdependently defined, collapsing the distinction between utterance and meta-utterance, such that all utterance is meta-utterance and vice-versa. All of which is a way of saying that this constructedness and reconstructedness is not hidden behind a smokescreen of “this is the way things really are.” Category theory is a mathematics, not of being, such as set theory, but rather, a mathematics of relation.
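
For what it is worth, this relational character is visible even in the field’s most elementary definitions. A functor F between two categories, for instance, is characterized entirely by how it acts on objects and on arrows, subject to two laws; the statement below is the standard textbook definition, included only as an illustration of “meaning as what you can do with it”:

    F(\mathrm{id}_A) = \mathrm{id}_{F(A)}
    F(g \circ f) = F(g) \circ F(f)

Nothing about what the objects “are” enters into this; only how arrows compose does.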

There are, however, other potential post-foundational discourses. Fernando Zalamea has, in recent works such as “Synthetic Philosophy of Contemporary Mathematics,” argued that sheaf theory can play this role in a way different from that of category theory. Sheaf theory, a form of mathematics which works to extract invariants from particular transits between mathematical objects in transformation, is fundamentally a mathematics of the in-between. It is a mathematics which extracts from particular motions particular symmetries, and, like group theory, then works to put these to work in transit between the local and the general. For example, sheaf theory may attempt to describe the ways in which particular figures can be sliced and re-glued to themselves in ways which maintain coherency even when the figure is transformed in a particular way, and to then learn from this possible insights which can be applied to different yet related types of slicing, re-gluing, and transformation. In many senses, sheaf theory is a meta-analytic formation, which is to say, it takes the sorts of tools of decomposition and recomposition, analysis and synthesis, seen in notions such as differentiation and integration, and generalizes them to ever wider terrain.
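
The “slicing and re-gluing” described here is codified in what textbooks call the gluing axiom for a sheaf F on a space X; the formulation below is the standard one, offered only as a gloss on the prose above. Given an open cover {U_i} of an open set U, and sections s_i over the U_i that agree wherever they overlap, there is exactly one section over U that restricts to each of them:

    s_i\big|_{U_i \cap U_j} = s_j\big|_{U_i \cap U_j} \ \text{ for all } i, j
    \quad\Longrightarrow\quad \exists!\, s \in F(U) \ \text{ such that } \ s\big|_{U_i} = s_i \ \text{ for all } i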

Sheaf theory is then a mathematics of transits, of the temporary reification of an aspect of a transit, only to reapply this to another. In this sense, its objects, categories, and grammars are also relationally interdetermining, and the relation between utterance and meta-utterance is constructed and reconstructed continually, as with category theory. One difference, however, is that category theory is ultimately a logical enterprise, and doesn’t get into the specifics of particular figures and their equations, but rather, attempts to describe the logical grammar “beneath” the mathematical language used to describe and manipulate these. In a sense, then, one could say that category theory and sheaf theory are both instantiations of post-foundational foundations of mathematical practice, but in regard to differing aspects of the mathematical enterprise. Category theory is a meta-gamic approach to logic, while sheaf theory plays this role in regard to transformation within a particular field which already has instantiated categories of objects, categories, and grammars (ie: figures, types, and rules to regulate transformations).

None of this is to say that category theory is “more” foundational, but rather that it is more abstract, and this is different. Abstraction here indicates a further move away from the physical world, and closer to the ideal, which is simply the realm stripped of specifics. Between a relatively “concrete” aspect of the world, such as a stone, and the purely ideal, there is the abstract representation of this, such as the word “stone,” or the number “1” which can be used to count this stone. Category theory leans to the latter side, and sheaf theory to the other, but depending on the particular pole one is using to base one’s practice at a given moment, be this the concrete or the abstract pole, or any other for that matter, the “foundational” orientation of one’s meta-gamic practice will proceed differently. Foundations are always foundationing, and as such, one always creates and recreates them by means of meta-gamic moves. The whole point of a post-foundational foundationalism is that it aims to produce the potential for foundations everywhere, rather than prohibit them, lest this simply become foundationalism in reverse. The hope isn’t to proscribe foundations, but to liberate them, and in the process, to liberate practices from the straitjacket of both foundationalism and its evil twin, anti-foundationalism. Post-foundationalism, on the contrary, embraces creativity, which is to say, in relation to mathematics, the potential production of new mathematical games and meta-games which give rise to new ways of describing our potential relations to the world.

Foundations as Foundationings: Or, Mathematico-(Meta)gamic Ethics

If abstract and concrete are two poles which can help us orient in this process, these two should be seen as merely categories produced by meta-gamic moves, and hardly necessary, but produced and reproduced at each and any moment in which they are operative. Just as Zalamea finds this set of polarities useful, I also find those of the human body helpful. That is, if all mathematics can be constructed as the product of human activity, then perhaps, as with natural language, it makes use of categories which can be seen as potentially deriving from the form of our embodiment. Natural languages, for example, have nouns, adjectives, linking words (ie: prepositions), and verbs, four primary parts of speech, and some theorists, including Gilles Deleuze, have argued that these can be seen as the result of the ways in which human embodiment in the world makes use of things, qualities/categories, forms of relation, and actions, which is to say, nouns, adjectives, ‘linking words’ (ie: of, on, in, is, therefore), and verbs. The grammar of human language, then, can be seen as a way of representing some of the primary categories which humans have, by means of the media of their bodies, extracted from their worlds of experience. None of which is to say that these categories are necessary, but rather, that they are produced and reproduced, continually, by gaming and meta-gaming in the worlds of our experience, with language being one of the effects thereof.

From such a perspective, it might not be far-fetched to argue something similar about mathematics. That is, there are mathematical objects, which function like nouns, and categories of these, which are similar to adjectives, giving rise to a semantics from the relation between them. From here, the utterances and meta-utterances formed by means of these objects and their meta-linkages into categories give rise to modes of relation, which are represented, alongside the objects and categories, within utterances and meta-utterances by means of “linking words” which represent their grammatical structure. But all of these are ultimately the result, sedimentation, and reification, if only partial, of the processes which give rise to them. Mathematics and meta-mathematics, two sides of the same coin, ultimately, like a language and its grammar, are processes of gaming and meta-gaming.

Thinking in these terms also allows for mathematical gamings to be linked up to philosophical notions as well. Set-theory, then, is clearly, as Alain Badiou has argued, the mathematics of being, and as such, has fundamental resonance with the philosophical notion of ontology, even as category theory seems to be something like a mathematics of relation. Sheaf theory seems an attempt to describe something like a mathematics of becoming, of the transits between the figural and the numerical, and within each. If, as Brian Rotman has argued, geometry is the language of space, and algebra the language of counting, it becomes possible to see these as so many layers of semantics and syntax in relation to each other, and yet, also permeated by that often forgotten stepchild of linguistics, which is to say, pragmatics. There is no vocabulary or grammar without a context which makes these relevant and meaningful. Without something like human bodies in something like a world of experience which has particular structures of space and time such as we know them, it is unlikely that anything like mathematical grammar or vocabulary, let alone those of natural languages, would make any sense. Likewise with what we tend to think of as constituting proof or argument.

If we frame languages as composed of semantic vocabularies of objects, linked in syntactic categories which produce series of potential relations between them, which are in turn grasped in meta-syntactic categories, the grammars of a language, which ultimately articulate the pragmatic linkage of that language to its contexts beyond itself, then what gives rise to all of this? Creation, it would seem, but also recreation. Math emerged from our world, and continues to reemerge from it, as does language. Semantics, syntactics, and pragmatics are simply aspects of this continual process of recreation. And if this essay attempts to do anything, it is to deconstruct attempts to hide the manner in which recreation, which is to say, emergence, is the ever-present potential within any and all, here and now.

Mathematics and language, just as with any other sedimentation of our actions into representations thereof, are simply ossifications which stand out from our practices, which we then treat as if “real,” which is to say, as if necessary. And yet, they are only ever produced and reproduced by our actions, even as these are only ever produced and reproduced by that of our contexts in turn. We are networked, linked nodes of praxis within others, at potentially infinite levels of scale. And yet, within our particular zones of this, there remains the potential to give rise to effects which ripple widely beyond, leading, under the right contextual conditions, to the potential for cascades which give rise to a sea-change in how things are done.

There will always be those who will try to control the way things are done, who will try to close off our sense that, to quote the revolutionaries of the past, “beneath the paving stones, the beach,” or, in more contemporary idiom, that our world is only ever of our own making and remaking. No one entity can ever shift the whole, and yet any one entity can sit at the fulcrum of others and provide the critical shift, the grain of sand that creates a massive change. Or participate in structuring such a set of conditions so that something else can provide that final push.

There are times, of course, in which it is necessary to restrict the manner in which things recreate themselves, for in fact, too much change can dissolve and destroy. But our world has almost never been in danger of such things, and when it has, the chaotic dissolution has nearly always served to reproduce the deeper aims of control. But creativity, radical creativity, doesn’t care if it is the center of the world, only that it resonates with the creativity within it in its own way, amplifying the power of liberation and emergence, being emergence itself, even if towards an end beyond it. The reason for this, of course, is that emergence knows no fruit; it is its own reward. As anyone who has ever created anything knows, creation is powerful stuff.

Mathematics is a form of creation. If god created the natural numbers, and humans created god or are god, or even tap into some aspect of the godlike nature of creativity within the fabric of the world as such, that which has never yet ceased to surprise us with its novelty, then by creating new mathematics, we allow the world to recreate itself through us. And in doing this, we learn something deep about ourselves, and about the manner in which we emerge from the world in and through things like mathematics. Human languages aren’t things, they are emergences and reemergences, as are we, as is, it seems, the universe from some primordial singularity.

While the end of this essay may seem far from questions of mathematics, the reason for going to such metaphysical issues is to make the point that they are omnipresent, and that reification obscures this, with control and petrification as the result and aim. Liberation and emergence can also be result and aim, and both are self-potentiating tendencies, as paradoxical as anything described by Goedel, but dynamic beyond any such attempt to grasp them in snapshots. None of which is to say that we need dispense with reification, but rather, that we simply need not be seduced by its productions as being anything ultimate. Rather, there is always a play of natura naturans and natura naturata, to use Spinozist terminology, or, to cite Schelling, of producer and produced. Forgetting that the produced is only a product and not the producer is the classic means of taking the present and its products as necessary. Those in control of our world have always relied on the stability of appearances to maintain the notion that the world has to be as it is.

While creation for its own sake can dissolve all that’s good in the world if taken to its limits, our world has always erred on the side of reification for its own sake, with only the minimal creativity necessary to avoid ossification. This of course makes sense in a world in which survival is of the fittest. But in a world in which we can now feed everyone, in which the worst predators humans face are other humans and their products, the very evolutionary situation has changed, and so our ability to survive depends on our unlearning the very lessons that evolution instilled in us to survive the brutal period of biological evolution. Surviving cultural evolution is the next step, and this requires walking back the paranoia we needed in order to develop out of the primordial oceans.

All the games we play, from mathematics to baseball, politics to economics, these are all human products, and products of the world beyond this in turn. Those creations and recreations we give rise to reveal what we value. And so often, we tend to forget that we have choices. Of course, if we are to truly make choices, we have to deal with the meta-gamic question of what we value, and why. To avoid precisely such meta-gamic questioning, it is often easier to simply pretend that our world is not even partially of our own making, and that our agency is small indeed. But while our agency is distributed and relational, it is hardly small.

The ethics of this essay is that of emergence, neither creation for its own sake, thereby giving rise to chaos, nor consolidation and control for its own sake, giving rise to stasis. Because our world overvalues the second, course correction would seem to value a shift to the other side, to novelty, but again, this should hardly be seen as necessary, only situational.

Mathematics has in fact engaged in a radical move towards creativity during the twentieth century, and put the seduction of reification behind it after the foundations crisis of the early part of the century, itself potentially a reflection of so many other crises of foundations early in the century, and their often violent repercussions. Foundations are always foundationings. They are statements of value and valuation, but if one looks for the values underpinning one’s values, there is only ever refraction to the whole, because the whole notion of value itself is simply the manner in which one is able to relate a particular move within a particular game to the meta-game played at the level of a given whole. The ultimate meta-meta-game, however, something like the world or experience, is precisely what the question of value addresses, and the answer is only ever inconsistent, incoherent, or incomplete. And yet, the creations and recreations we give rise to in the way we game and meta-game in relation to this is what makes everything worthwhile.

And if there is something that gives value to meta-gaming as such, it seems to be the emergence of (meta)gaming as such, and its continued emergence, the robust complexity of the games playable within the larger gamespacetime. Everything in our world worth valuing, after all, seems to only exist contextually within such situations. Life, for example, or the love which it can give rise to, these depend upon the complexity, and robust sustainable complexity, of the systems which give rise to them. Look at the best within the game, and use it as a guide to establish new guidelines for future gaming and meta-gaming, and continually readjust.

For games without rules, such as life or love, it’s always about values, and values are always about the relation of moves to the context of the whole, which is always in the process of emergence. But is that emergence getting better, and in a manner which seems likely to lead to more of the same in the future? Perhaps that is the best we can ask. If there is an ethics to (meta)games such as mathematics, perhaps it is in praxes like these. And while many may balk at the notion that math could ever have an ethics, there is no action in the world which does not express values and valuation, and hence, which does not have an ethical component. Humans value things in the world which allow them to continue to live and grow, which is to say, to recreate their ability to create and recreate. We eat food, build buildings for shelter, and create things like mathematics, and in doing so, we expend our energy and time, things we value, in the process. We value mathematics, and as such, it is an expression of our value, valuation, and values, even as humans themselves are expressions of the value, valuation, and values of the contexts which produced us in turn.

There is an ethics to every move in every game, including language and mathematics. I’d like to think that emergence is, ultimately, the only way to imagine winning, which is to say, to emerge more robustly in relation to any and all in one’s context. An ethics of emergence, with ramifications for all gaming and (meta)gaming, including the creation and recreation of mathematical worlds.


Nondualism and Semiotics: Philosophy of Language Between East and West

•May 29, 2013 • 3 Comments

Even a casual reading of secondary sources on non-Western philosophy is likely to turn up the term “non-dualism” with great rapidity, and in fact, it is hardly a controversial assertion to say that non-duality is perhaps the single most common notion used to distinguish between so-called ‘Eastern’ and ‘Western’ modes of thinking. While not all ‘Eastern’ schools make use of this term, some do, for example by means of the term advaita in Sanskrit, used extensively in Vedantic Hinduism and various schools of Buddhism. Many aspects of non-Western thought have since been understood by means of this term, particularly by Western scholars. And this isn’t necessarily a bad thing, because the term, developed in the Indian tradition, nevertheless does a good job of describing crucial structural aspects of Taoist and Confucian philosophy (particularly in regard to the use of the term Dao, or ‘the Way,’ within these traditions), as well as central aspects of Sufism.

That said, there is also a non-dual tradition in the West, with Plotinus as the clearest example of a thinker who could, relatively unproblematically, be described as non-dual. It is also worth noting that, since the reemergence of the “modern” reading of Plato around the 1600s, Plotinus has been seen as the poor stepchild of philosophy, a mystic in philosopher’s clothing. This is hardly incidental, and there is in fact a possibility that Plotinus and other Westerners, such as the Stoics, were influenced by Indic forms of thinking, including possibly Buddhism, by means of the Greco-Indian states which Alexander the Great left in his wake. That said, the term could also be applied to many other Western philosophers, such as, to varying degrees, the German Idealists Hegel and Schelling, or other thinkers with supposedly mystical leanings, such as Spinoza or Deleuze.

Before going on, however, it is worth saying more precisely what this term “non-duality” is generally understood to mean. Most commonly, what is meant is the manner in which the duality between subject and object is somehow undercut or supplanted, but in all rigor, the term is and has been applied to any attempt to get around or undercut binary forms of thinking. The Western parody of Eastern thought as saying a lot of vague and contradictory words which ultimately come to mean nothing in particular is perhaps useful here, if only to deconstruct it. For in fact, reading non-Western texts, one often encounters statements that one should, for example, endeavor towards a state that is “neither this nor that,” “both this and that,” and sometimes even “neither this nor that yet also both this and that.” And so, while perhaps the most important binary which non-dual non-Western forms of thinking aim to displace is generally that between subject and object, depending on the text, this could also be extended to binaries such as good and evil, true and false, real and apparent, etc.

One excellent place to start in an investigation of what is at stake in this process is David Loy’s excellent text Nonduality: A Study in Comparative Philosophy. Drawing from a wide variety of philosophical sources and traditions, Loy teases the structure of non-dual thinking from these traditions, and works to systematize and classify the ways in which non-duality is used. While Loy does integrate some discussion of post-structuralist theory, particularly that of Derrida, towards the end of his text, I’d like to build upon this, both by recourse to some of the semiotic substratum of post-structuralist analyses, as well as by going outside of philosophy and linguistics in regard to systems theory.

Non-dual modes of argumentation work to undercut a given binary. For example, if I were to say that all objects are “neither true nor false, but both true and false,” one could simply dismiss this as contradictory, as many Westerners in the past have done, or deal with it as a complex semiotic utterance, following in the footsteps of twentieth-century structuralists. In fact, it seems to me that A.J. Greimas, with his semiotic square, nicely describes the manner in which binaries can be seen as composed of a series of options between “A,” “not-A,” “B,” and “not-B,” as four primary options in a binary system, with those of “neither-nor” and “both-and” as secondary options which occur between any two of the terms described above. The result is his famous semiotic square, which links the Aristotelian logical modalities of contradiction and contrariety within binary form, thereby building upon the semiotics of Jakobson and Saussure, and providing a nice diagram to describe dual or binary linguistic structures, influencing a wide variety of theorists, such as Jacques Lacan and Fredric Jameson.
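
For readers unfamiliar with the device, the square is usually laid out as four positions and three kinds of relation; the labels below follow the standard presentation (S1/S2 here correspond to the “A”/“B” of the paragraph above), with truth and falsity filled in only as an example:

    Positions:   S1 ("true")          S2 ("false")
                 ~S2 ("not-false")    ~S1 ("not-true")

    Relations:   S1 -- S2                   contraries (the upper axis)
                 ~S1 -- ~S2                 subcontraries (the lower axis)
                 S1 -- ~S1,  S2 -- ~S2      contradictories (the diagonals)
                 ~S2 -> S1,  ~S1 -> S2      complementarity or implication (the verticals)

The “neither-nor” and “both-and” options discussed above can then be read as ways of occupying or refusing positions on this square rather than as simple nonsense.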

When non-Western philosophers, or any philosophers at that, indicate that something is “neither this nor that,” they indicate that they are attempting to shift emphasis within the discourse within which they are working to another part of the square, and when they say “both this and that,” likewise. However, when a thinker says that they are attempting to describe something which is “neither this and that as well as both this and that,” they seem to block out the entire square. What then is the effect of this sort of operation?

 Self-Deconstructing Paradoxes: Wilde and Lacan

Take, for example, the notion of truth and falsity. To say that something is neither true nor false, but both true and false, is to question the relevancy of the very binary to the case under consideration, and in a sense, to reduce the applicability of the binary in question to the realm of poetic usage rather than denotative or logical argumentation. Such a gesture is not all that dissimilar to that of Oscar Wilde, when he famously argued that “All those who tell the truth will ultimately be found out,” or of tricky arch-post-structuralist Jacques Lacan when he argued that “truth has the structure of a fiction.” These ingenious linguistic acrobatics are in fact non-dual utterances in disguise as more straightforward types of utterances, because ultimately, these are self-deconstructing statements.

For those with a taste for the vertiginous, it’s worth pursuing these paradoxical statements and how they operate to effectively deconstruct the binaries they utilize. To start with Wilde, if those who tell the truth will ultimately be found out, then those who tell the truth are really liars, which then raises the question of whether or not they are liars in the same way, or differently, than those who lie. The difference, of course, seems to be that those who tell the truth are somehow lying about lying, to themselves or others, while liars are those who at least are telling the truth about their lies to themselves. And so, liars are more truthful, at least to themselves. If they are truly liars, however, they would hardly be truthful about lying, and hence, would likely lie about this, and claim to be telling the truth, at least to others. But a truth teller, then, would be one who, at least to themselves, believes they are telling the truth, at least on the surface. The result is that the split between lying and truth-telling is moved inside the person, for the liar would then be someone who is open, at least to themselves, about the fact that nothing that they say, including statements they make about whether they lie or not, can be trusted, while a truth-teller would be a person who at least believes that what they say is true, as far as they are willing to admit, and hence tell the truth to themselves. The liar, then, is a person who is honest with themselves, and a truth-teller the person who is a liar to themselves, hence, why they will ultimately be found out, not for lying to others, but to themselves.

The result of this phraseological game is to move the determination of what constitutes a liar and a truth-teller from the ground of self and other to that within the self, essentially splitting the self into self and other in relation to others who are also split. The real distinction, then, is between those who try to imagine that they are non-split subjects, and those who admit that they are split. And so, Wilde’s argument is about the structure of subjectivity, even though, on the surface, it seems to be about truth and falsity. What Wilde is doing is shifting one discourse to the other. But by phrasing this in a non-dual fashion, he forces the reader to do the work of these stages, because, at least on the surface, the statement makes no sense. The only way to make sense of the statement is to follow through with the way it redefines terms performatively, in how they are used. That said, Wilde’s quote is hardly nonsensical, but rather, is completely sensical, if on a terrain which is shifted from the one in which such a statement would traditionally be understood. That is, notions like “truth-telling” and “lying” imply a particular context, namely, that of subjects in relation to others, and the ability to lie, and what that means, namely, a subject who says one thing yet means another. Wilde shifts the context within which the distinction between truth and falsity is understood, which is to say, the form of subjectivity they imply.

And so, the question isn’t really whether or not Wilde’s tactic is sensible, because clearly it is. And in fact, the position articulated in highly compressed and indirect form in this quip is presented in a much more direct and explicated manner in some of his essays, such as “The Truth of Masks” and “The Decay of Lying.” In these articles, Wilde argues that it is only when we realize that openly lying and wearing masks is the precondition for telling the truth that we come to understand that there is simply something more honest about admitting our dissimulation. And so, pretending that one can be completely truthful is the deepest form of lie, while admitting the impossibility of complete truthfulness is at least a more honest, more partial form of truth.

And this is in fact what is argued, in more condensed form, in his maxim, with the one caveat that the quip doesn't indicate whether or not liars will perhaps ALSO be found out, which is to say, whether they are more or less truthful than so-called truth-tellers. But since Wilde doesn't attack liars, and doesn't say they will be found out, this is an indication that all truth-tellers will be found out, but not all liars. It is the liar, then, who has the possibility of "getting away" with lying. This is not as fully developed a point as the one he makes in his essays, namely, that lying is more truthful than so-called honesty, because here the issue is only whether or not one will be "found out" as a liar. If one is not "found out," one can still be a liar, after all. Put in the context of his full position, as indicated by the essays, however, we see Wilde's fuller point.

And that point is the one indicated by Lacan's statement that "truth has the structure of a fiction," which is to say, that the more truthful way to indicate something truthful is to use the form of fiction. To use the form of truth, however, is to do so less successfully, and hence, to be less of a truth-teller. This is why Wilde argues that there is a "truth of masks," and why the decay of lying is, for him, actually a terrible thing for society: the vogue of truth-telling which so worries him in the 1890s is the most dishonest thing there is. Lacan, however, largely developed his ideas in the 1950s and '60s, the period of the rise of so-called "postmodernism." With Andy Warhol as the avatar of early postmodernism par excellence, and a Wildean mask-wearer if there ever was one, Lacan was describing in more generalized form the condition which Wilde saw in nascent shape in his own day. Today, perhaps even more than then, the only way to be truthful is to use the form of fiction.

From Hard Binary to Soft

In the examples examined above, the goal of all these modalities is to undermine those who believe in any one particular truth which is exclusive of a multiplicity of truths. None of which is to say that these figures don't believe in anything. Rather, they believe that truth is local, perspectival, relative to situation, and that any attempt to develop a universal truth beyond this, one which holds for all times and all places, is in fact untrue. In doing so, they shift truth and falsity from a hard binary to a soft one. That is, they move from any notion of absolute right and wrong to relative right and wrong. Of course, such a move shifts the hard binary from truth and falsity to absolute and relative, or universal and particular, or, in the case of Wilde, from the split between self and other to that within oneself.

In a sense, such moves soften one binary at the expense of another. And so, one could make the argument that ultimately such moves don't accomplish anything, that they are simply a play of smoke and mirrors. That said, Wilde and Lacan each manage to shift the framing of a particular issue, in this case truth and falsity, in their respective ways. For Lacan, the issue becomes one of structure, while for Wilde, it shifts from the division between selves to that within oneself. And one could easily argue that they are doing two versions of the same thing, for in fact, the "split subject" is essential to the Lacanian project.

And so, while at first it may seem that such paradoxical statements don't accomplish anything, they do more than they may seem. First, they show why the primary binary under investigation may not be as useful or meaningful as previously thought. Second, they indicate the need for "soft" rather than "hard" use of this binary distinction, moving from a model based on a firm divide between these notions to one which frames them as something more akin to poles on a continuum, with many shades of gray or intensity between them. Third, the need for such a shift is justified in regard to another binary, often unmentioned, upon which the argument implied by the statement depends. This other binary (ie: the split subject, universal and relative, etc.), however, can then be addressed in two ways, depending on the thinker involved. It can be treated as the true, deeper, more fundamental grounding binary, and hence be dealt with in a manner which is "hard," or it can be deconstructed in turn, and hence be shifted to a "soft" determination whose ground lies in some other binary whose "hard" structure is itself also elsewhere.

From what has been said here, then, it seems that there is a distinction not only between a hard and soft way of using a binary, but also of dealing with binaries as such. That is, one can see some binaries as hard, and others as soft, and the line between these as either hard or soft. Or one can see all binaries as ultimately soft, and the binary between binaries and even ways of using them as ultimately soft. The first approach is one which can be seen to selectively deconstruct one binary, and replace it with another, while the second deconstructs all binaries, including those it depends upon to make its arguments.

 Non-Duality as Binaries on “Soft-Serve”

Nonduality is the term used in much of the literature on non-Western forms of argumentation, and in some non-Western philosophy itself, particularly in the Indic tradition, to describe what I've here been calling "softness" in relation to binaries. That is, binary distinctions, of whatever sort, are deconstructed, and there is a fundamental skepticism towards the use of binary logic and binary ways of arguing and speaking in general. Of course, the question then becomes what is put in place of the binary logics these modes of thought work to deconstruct.

It is worth noting, however, the degree to which such a position is radical in relation to many of the core doctrines which have shaped the history of what is often thought of as "the West." During the twentieth century it was often seen as a truism in many Western schools of thought that not only was all language binary, but all thought along with it, and that to try to get around binaries led to poetry or nonsense at best, and dangerous misuses of language and thought at worst. Entire schools of thought aimed to purge this nonsense and contradiction from philosophy itself. And this effort ultimately came to naught.

This effort and those allied to it, which is to say, those which aimed to reduce the world in a given domain of inquiry to binary terms, were defeated not only in philosophy but in many fields of culture. To cite one such moment, the Heisenberg uncertainty principle in physics showed that traditional notions of objectivity depended upon notions of observation and subjectivity which were deconstructed by the evidence provided by the very experiments which would have been used to shore up that distinction in the first place. And if the fundamental stuff of the physical world refused to be put in binary boxes which distinguish between subject and object, leading many previously binary-thinking scientists to refer to sub-atomic particles with terms (ie: intention, knowledge) previously reserved for subjects, then could we say that the physical world itself was, like Wilde or Lacan, performatively deconstructing the dual and binary presuppositions of the scientists? Could we then say that the world itself was arguing that this hard binary needed to be softened?

 A Detour via Mathematics

The realm of mathematics and mathematical logic, often considered the realm of pure thought, and hence the other end of the spectrum of investigation of the world from the realm of matter, suffered a similar fate with the Gödel incompleteness theorems, which shook the foundational beliefs of the mathematical community in the early twentieth century, at nearly the same time as Heisenberg proposed his principle. Gödel showed that the foundations of mathematics were fundamentally non-dual, and that any binaries used to make fundamental distinctions which could give rise to dual or binary rules in math, including fundamental definitions of terms and rules, ultimately depended on nondualities.

Gödel's argument was framed in regard to issues which arose in controversies over the logic of sets, and it is worth pursuing this line of reasoning for its semiotic richness in regard to that just analyzed in Wilde and Lacan. Mathematicians before Gödel had shown that it was possible to convert mathematics into the language of set theory, and by means of this, to link many of the issues in symbolic logic with those of mathematics. And so, a number like "five" could be redefined as the set of all collections of things in the world which have more members than "four" yet fewer than "six," with each of these other sets defined in relation to the manner in which sets contain other sets. Ultimately, this shifted the focus, at least in regard to numbers, to two very particular sets, namely, the set which is empty, and the set which has an infinite number of things in it. The mathematician Gottlob Frege showed how it might be possible to think of the empty set, or the set with zero items in it, as the basis of number, with the set for one being that which contains only the empty set, the set for two being that which contains the empty set and the set containing it (that is, one), thereby allowing one to "count" two levels of sets within it, and so on, generating all the numbers by means of this recursion. The hope here was that Frege had found a way to justify the logical coherence of the very notion of number which was foundational to the ability to do any mathematics at all.
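
To make this recursion concrete, here is a minimal Python sketch of building numbers out of nothing but the empty set. It follows the version usually attributed to von Neumann, a close cousin of the construction described above rather than Frege's own definition, and the function names are purely illustrative.

```python
# A minimal sketch: generating the first few "numbers" from nothing but
# the empty set and nesting, in the von Neumann style. Python's
# frozenset stands in for a mathematical set; names are illustrative.

def successor(n):
    """The successor of a 'number' n is the set containing everything
    in n, plus n itself -- so it has exactly one more element."""
    return frozenset(n | {n})

def make_numbers(count):
    """Generate the first `count` numbers, starting from the empty set."""
    numbers = []
    n = frozenset()                      # zero: the empty set
    for _ in range(count):
        numbers.append(n)
        n = successor(n)
    return numbers

if __name__ == "__main__":
    for i, n in enumerate(make_numbers(5)):
        # The "value" of each number is just how many sets it contains.
        print(i, "is a set with", len(n), "elements")
```

The point of the sketch is simply that nothing but the empty set and the act of collecting is needed to get counting off the ground, which is exactly the sort of foundation Gödel's result then unsettles.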

Gödel showed that this was ultimately a paradoxical enterprise. He developed another mathematico-logical language, and used it to convert logical statements about numbers back into numbers. He then showed that when one reversed the process used by Frege, essentially transforming logic back into numbers after their conversion into logic, the result was uncertainty and inconsistency in regard to the logical status of the numbers which were produced by the very logical statements used to justify them.
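
Gödel's actual construction is far more involved, but the core trick of converting statements into numbers can be sketched in a few lines: assign each symbol a code, then pack the sequence of codes into a single integer as the exponents of successive primes, from which the original statement can always be recovered by factoring. The encoding below is a toy of my own devising, not Gödel's 1931 scheme, and the symbol table is arbitrary.

```python
# A toy Gödel-style numbering: a statement becomes one integer by
# packing its symbol codes into the exponents of successive primes.
# This sketches the general trick only, not Gödel's actual encoding.

SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6}
DECODE = {v: k for k, v in SYMBOLS.items()}

def primes(count):
    """Return the first `count` primes by trial division."""
    found = []
    candidate = 2
    while len(found) < count:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(statement):
    """Encode a string of symbols as 2**c1 * 3**c2 * 5**c3 * ..."""
    g = 1
    for p, ch in zip(primes(len(statement)), statement):
        g *= p ** SYMBOLS[ch]
    return g

def decode(number):
    """Recover the statement by reading off each prime's exponent."""
    chars = []
    for p in primes(64):                 # more than enough for a toy
        if number == 1:
            break
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        chars.append(DECODE[exponent])
    return ''.join(chars)

if __name__ == "__main__":
    statement = "S0+S0=SS0"              # "1 + 1 = 2" in a toy notation
    g = godel_number(statement)
    print(g)                             # one very large number
    print(decode(g) == statement)        # True: nothing is lost
```

Once statements about numbers are themselves numbers, statements can refer to themselves, which is the self-referential hinge on which the incompleteness results turn.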

While this is all extremely complicated, the issue can be simplified a bit by returning to sets. Is the empty set, which is to say, a set with no items in it, truly a set? And what about the set of all sets, which is to say, the set that includes all other sets, up to infinity? Both sets are, for various reasons, paradoxical. The classic example is to question whether or not the set of all sets includes itself as a set. If the answer is yes, then it has just shown that it is not the set of all sets, because something must include it in turn, but if the answer is no, then it is not the set of all sets, because there is something it doesn't include, namely, itself. The same procedure, however, can be performed with the set with nothing in it: does it lack itself? Answering yes is just as problematic as answering no. As a result, neither the set of all sets, nor the set of no sets (the empty set), truly qualifies as a set. And so, what then is a set, but the wrapping of two versions of the same paradox, namely, that of inclusion or the lack thereof, around each other, while pretending that nothing strange is going on? The situation is quite similar to that indicated by Wilde and Lacan. Either the very definition of a set, as that which includes something else, is inherently problematic, as was the definition of truth against fiction in Wilde and Lacan, or one has to shift how one relates to this notion.
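
No programming language will let you build such a paradoxical set directly, but the instability of the self-inclusion question can be mimicked with a predicate asked about itself, a standard stand-in for self-membership. The sketch below is my own toy illustration, not anything from the set-theory literature; evaluating the question simply never settles.

```python
# A toy stand-in for the self-inclusion question: instead of a set that
# asks whether it contains itself, a predicate that asks whether it
# applies to itself. The evaluation never settles -- Python eventually
# gives up with a RecursionError, a computational echo of the paradox.

def russell(predicate):
    """'True' exactly when the given predicate does NOT hold of itself."""
    return not predicate(predicate)

if __name__ == "__main__":
    try:
        print(russell(russell))          # does russell hold of itself?
    except RecursionError:
        print("The question of self-inclusion never settles.")
```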

And most of the mathematical community has chosen, in the terms used by Lacan and Wilde, to lie. That is, most mathematicians working in the field today describe themselves as one form or another of Platonist, which is to say, they don't really question the notion that numbers "are real" in some sense or another. They believe in the truth of numbers, and when they use them, they believe they are "telling the truth" about some very deep aspect of the world. It would, however, be more honest if they were, as a small group of mathematicians after Gödel have done, to call themselves liars. For in fact, numbers are fictions, if very interesting and useful ones, built upon a fiction. Either one tries to ignore the contradictions at the root of the mathematical enterprise, the Wildean/Lacanian notion of "truth-telling," which is ultimately a form of lying to oneself, or one admits that one is telling a lie, and hence is ultimately more honest.

And more liberated. Because what one gains in the process of admitting that one is lying is that one doesn't have to take one's particular lies so seriously. One can shift lies, and play with them. Of course, truth-tellers do this all the time, but they don't want to admit this to themselves; they have to at least pretend, to themselves, that they are honest. But a true liar can lie to themselves and have a great time at it. The question, then, is why, and to what end? Is lying truly better than truth-telling, and according to what standard?

Binaries on “Soft-Serve” and Deconstruction

If both mathematics and physics deconstructed the binary oppositions at the foundations of their enterprises towards the start of the century, against the frenetic protests of those in those fields, and despite the continued disavowal by many of those working in areas of these fields far from such limit-effects and foundations, then what might this mean for issues of language and thought? The dominant "ideology of thought" of the middle of the century was the notion that the brain was like a computer, and that thought was like a series of binary switches. This was, of course, due to the fact that digital computers were invented mid-century, and they were based upon circuits whose logic was itself binary. This is largely due to the influence of the very same mathematicians and logicians whose developments in logic and set theory produced the crisis of foundations in early twentieth-century mathematics and physics, as described above.

But those who don't go to the limits of a given discourse seem, in general, to be able to avoid having to deal with many of the effects of such issues. And so, the notion that thought is binary, on the model of computers, is still, to this day, often accepted as simple truth. And yet, the very history of the century seems to prove otherwise, just as the networked logics of the internet and other forms of computation beyond those of binary computers, from the fundamentally networked structure of the brain to experiments in "artificial neural networks," indicate otherwise.

To give a sense of why it might be possible for those working in computing to more easily ignore foundational issues than those working in set theory, think of a picture frame. So long as one looks at the picture within the frame, one doesn't have to deal with the fact that the picture is an illusion. This is similar to the border of a television or computer screen, or the "out of frame" of a film image. But as soon as one approaches the frame, perhaps leaving one eye on the image within the frame and the other on the frame itself or the space beyond, one can no longer simply disavow the constructedness of the image, its representational status, its fictive structure. What's more, it becomes difficult to say whether or not the boundary is itself part of the world, or part of the imaging of the world.

The reason why this example works so nicely is that, as many visual theorists and others have argued, the filmic or framed image of any sort is in fact a set. That which is inside the frame is part of the set, that which is outside is beyond it, and the frame is the border which performs the action of including some aspects of the world and excluding others. Deconstructing the binary, focusing one eye on the inside and one on the outside, draws attention to the frame, and to the act of framing.

And this is part of what deconstruction of any binary distinction does. That is, by unravelling any binary, one is not only confronted with the untenability of the frame to determine what it seems to determine, but a certain motion is performed, whereby focus is shifted from one thing to another. And by means of this, the very action of deconstruction, and conversely, of construction, comes into view. The deframing which occurs when one focuses one eye on the inside and the other on the outside of an image draws our attention to the original act of framing which often remains hidden, but which this act of deframing draws into the forefront. And so, we are confronted, not with two things, but with two actions or processes, framing and deframing, or, framed otherwise, construction and deconstruction.

All of the deconstructions mentioned here, whether that described by Heisenberg, yet seemingly enacted by the quantum structure of "the world" itself, or that of Gödel, or Wilde or Lacan, are deconstructions of a binary which call attention, in the process of deconstruction, to the fact that the construction of the binary in question is itself an action. To return to truth and lies, the issue isn't so much whether one tells the truth or a lie, or is a truth-teller or liar, but rather, the question of how one deals with these issues. Truth and lies, truth-tellers and liars, cease to be things, and the appearance of thing-ness shifts to that of process: there is only truth-telling and lying, and any sense in which there are "things" known as truth or lies, truth-tellers or liars, shifts depending on the actions taken in regard to other processes.

Things, then, are recast as processes, just as binaries are recast as continua. None of which is to say that there aren't binaries and things, but rather, these are seen as derivative of the continua whose intertwining via processes gives rise to them, even as these processes are degrees of intensity of various continua. The world can then be seen, in a sense, as a weaving of such continua in relation to each other, a network, each strand of which provides context for the others. Things and hard binaries, then, could be seen as simply the most rigid aspects of such a process.

And here it becomes possible to see the manner in which deconstructions of various sorts undercut a particular binary, shift the terrain to the context which implicitly supports this binary, and then either continue to deconstruct, or stop. Stopping shifts the base binary from one term to another, while continuing ultimately deconstructs everything. And so, the first approach moves the discourse from one sort of absolutism to another, while the latter leaves one in skepticism or nihilism, at least if one takes these logics to their extreme. Such a binary between options is itself binary, or, in the terms I've been using, "hard."

The "soft" option is that described in the preceding paragraph, which is a networked option. It is one which views the world as composed of processes which form the contexts of continua whose patterns of intertwining produce momentary sedimentations as things, which can then become reified and linked into binaries which seem absolute, but which will ultimately deconstruct if pushed. The question then isn't, as with a "hard" approach to the world, whether or not something is true or false, because everything is then true or false in its way. The question becomes which things to leave hard and which to make softer, and why, in particular contexts. Rather than truth or falsity, with deconstructive unmasking as the tool for destroying the false in the name of the true from within, on its own terms, deconstruction becomes a tool to be used to transform things, and reconstruct them, in regard to issues determined at more encompassing contextual levels of scale.

And so, for example, in regard to mathematics, the question is not whether or not numbers are true or false. Of course they are false, like everything else, at least if one views the world in terms of the hard binary of true and false. From such a perspective, then, we can say that nothing is ultimately true or false, but that everything, in its way, is both true and false. And so, the language of nondualism, as used in many forms of non-Western thought, often seems to imply qualifiers which it is worth indicating, namely, the terms "ultimately" and "in its way." These are, in the terminology of logic, the universal and existential quantifiers, two more of Frege's inventions in mathematical logic. And they allow us to see what non-dual thought works to unravel, which is to say, "hard" use of binaries. Non-dual modes of argumentation are a way of saying that IF one wants to divide the world into two absolute, exclusive categories, THEN ULTIMATELY the world will resist, and give you something like NEITHER NOR and BOTH AND. Or at least, this is how non-dual arguments function in many non-Western texts.
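
Read this way, the two qualifiers can be written out in the very notation Frege helped invent. Taking True(p, c) as a hypothetical predicate meaning "statement p holds in context c" (my own gloss, not standard notation for this argument), the nondual claim above amounts to something like:

```latex
% "Nothing is ultimately true": no statement holds in every context.
\neg \exists p \, \forall c \; \mathrm{True}(p, c)

% "Everything is true in its way": every statement holds in some context.
\forall p \, \exists c \; \mathrm{True}(p, c)
```

The shift from the first quantifier to the second is, in miniature, the shift from a "hard" to a "soft" use of the binary.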

 The Issue of Context and Framing

That said, not all non-dual texts, systems, and arguments are the same, for if they were, then Zen Buddhism and Vedantic Hinduism, for example, would be identical, and they aren't. There are two primary differences here, the first of which is the context in which binaries are softened by means of deconstruction. Some contexts are simply more central to our way of relating to the world than others. And so, if one questions the binary between ketchup and mustard, the result is hardly earth-shattering. Is this even a binary? Aren't these more like contraries within the category of condiments? Certainly they aren't contradictories, like ketchup versus non-ketchup. But one could, if one chose, find a way to show that there are forms of condiment which are both ketchup and mustard, but also neither one nor the other. In fact, one could simply make such a hybrid condiment and be done with it. The effect of such a performative deconstruction of the category would be to shift the ground of the issue at hand. Rather than being seen as absolutes, as necessary and distinct, ketchup and mustard come to be seen as choices, the results of our actions, and our continual actions, to produce things which fall into these categories. We could, after all, produce the hybrid instead, give it a name, and create a new form of condiment, and this would, ultimately, instantiate some new binaries. But our faith in the necessity of the ketchup-mustard binary would be shaken, and perhaps our very notion of what it means to think about condiments and food. Whether or not this would shake up our way of categorizing the world is another story.

Nevertheless, to shake up any binary distinction is, in a way, to shake them all up, even if to differing degrees. Because if one can deconstruct a binary between any two aspects of the world, whether seemingly necessary (ie: black and white) or randomly assembled (ie: Batman and a fish), and whether by one's physical action (ie: combining these and producing a hybrid, or finding a hybrid) or simply intellectually, in one's head, such an action always shifts the ground to the context of the binary. That is, when we deconstruct the ketchup-mustard binary, we have to focus on the construction of our categories of condiments, and the choices we make in regard to these. Because we have most likely been born into a culture with a set menu of condiment options, we might imagine that these are necessary and predetermined, but when someone produces something like "ketchuard," we may simply add this to our list of condiments and not think much of it.

Or, our world might be shaken. For the addition or subtraction of an element of any category, in the moment when it occurs, brings about a choice in all those involved, namely, whether or not the change is valid. And this draws attention to the context, which is the action of categorization which has been implicit all along. For when a change is made to the categories in question, the issue is pressed: one must choose either to continue to categorize the world in the same way, or to change.

And in this sense, deconstruction, as much as reconstruction, which is to say, the addition or subtraction of categories, calls attention to the process of the continual reconstruction of categories by action in the world. There is no category in our world which is not continually being reconstructed by all the agents involved. Whether we think of these agents as humans using language, or as the world and scientists producing experiments together, there is a continual process of co-construction, and this process is multi-determined, from many sides. It is never the result of any single binary, but of webs of agents. While it is possible to split these networks into binaries, the world seems, at many levels, to resist any attempt to make these ultimate.

For in fact, the world changes, and is composed of agents of many different types, and they network together, and any sense of similarity or difference is ultimately relative to the aspects involved. Any attempt to find potential bases to make sense of this will founder, at least so long as the world keeps producing new things. And there is even reason to believe that the very stuff of the world, down to the realm of quantum foam or the structure of logic and math and words, has an aspect to it which finds final and ultimate reification, thingification, splitting, or binarization somehow against its fundamental structure. It is as if the very stuff of the world, whether considered in the abstract, in language or math, or in material form, in physics, doesn't like any sort of hard binary when taken to the extreme. This is why I feel it is necessary to have a softer view of the world, one in which binaries, including that between a thing and its context, are never taken as ultimate.

Grounding Terms: The Non-Duality of Lacanian Master-Signifiers

All of which is to say, the world we encounter is likely much softer than we realize. It is continually constructed and reconstructed, and each entity within it participates in this continual process of construction and reconstruction. The question then becomes one of choices. Does one want to continue to construct the world in the same way, to keep interpreting it with the same categories, by acting according to the same parameters? None of which is to say that any one aspect of the world can simply change everything. No, we are all connected, and so one would only introduce the condiment "ketchuard" to the world beyond oneself if one did more than make it in one's own home, and hence disrupt only one's own personal way of thinking and acting in relation to condiments, but also started to produce it on a mass scale, and found a way to introduce this notion and/or thing into the world beyond oneself.

And this is where the silliness of such an example shows its force. For changing the way people view condiments is a minor change, but it does give people, if in form rather than content, a reminder that the world changes, that it isn't as "hard" as it seems, and that even if any one individual can't change the world, collectively, and in relation to the non-human aspects of the world, there is a lot which seems quite malleable, despite the fact that our own need for security seems to want us to forget this.

But as soon as one goes for a more momentous notion, such as "truth" or "god" or "the self," people get all up in arms. Not all terms are created equal in all discourses, and not all terms are defended in word and deed with equal ferocity. Ketchup and mustard have their defenders, but few would likely kill to prevent ketchuard from becoming "a thing"; people get quite excited, however, when you go for a term which grounds their ways of thinking and acting in the world.

These "grounding terms" are what Lacan calls "master signifiers," and for Lacan, they are places in which a culture stores the non-duality at the core of a given discourse. That is, the discourse of a given religion has certain grounding terms, such as "God" or "revealed truth," just as the discourse of scientific inquiry, to choose another very non-arbitrary example, has its own grounding terms, notions such as "objectivity" and "reproducible data." Each of these notions depends upon binaries which are treated as "hard" by the discourse in question, and these "hard" binaries are then used to organize others which are dealt with in softer fashion. And so, in science, some terms are negotiable, and no one cries all that much when a shift is made between one term and another, but if one questions the very notion of objectivity, the ability to "do science" itself seems shaken. The same with religion. If one questions whether or not a given scripture is revealed, one may not only be questioning the stability of that particular scripture within the network of terms and practices, agents and objects, contexts and processes which is that religion, but also, if that scripture plays a grounding function in regard to that religion, perhaps the entire relation of that religion to scripture as such. That is, the question becomes something like: hard or soft? And in a more general sense, it concerns the way in which that entire discourse relates to the contexts which ground it as such.

The World on Soft-Serve

And this is why ultimately the issue that is being raised here is one of values. A "soft" worldview is one in which anything can be deconstructed, and should be, if it gets in the way of, well, a soft set of values. Such a soft set of values can be constructed softly, in contrast to those which are hard. That is, instead of the hard distinction between "good" and "evil," one ends up in a world of continua, of "better" and "worse," and these are themselves determined in relation to the contexts in question. That is to say, better and worse are only ever determined locally, relativistically. But there is a basic, overall frame that unites these many standards, and that is the notion that hardness is ultimately worse. This isn't to say that some uses of hardness may not be helpful in particular situations, but rather, that when the use of hardness becomes hard itself, there is likely a problem.

This issue of values then helps to determine what is deconstructed and/or reconstructed within a given situation. Hard binaries are deconstructed, and the point of this is to call attention to the fact that the world doesn't need to be this way. Granted, some aspects of the world will likely resist being deconstructed more than others, but this is because no entity is a hard and fast island, cut off firmly from the world around it. Rather, all is intertwined, networked. And it is only those hard and firm distinctions which create the fiction of isolated entities which choose to try to ignore this fact. The world ultimately resists this, and crises are the result. These crises can be opportunities to soften things up a bit, but they can also simply be painful parts of the continual attempt to hide all softness, and make things hard again.

There is then an ethics of softness, just as there is one of hardness. Hardness aims at control, protection, territory, boundary policing, and maintenance of the status quo, while softness is all about flexibility and local responsiveness. The former is top-down, the latter bottom-up; the former is about being "right" no matter the cost, the latter about what works; the former is about extremes, the latter about the middle way.

The Middle Way and the Deconstructive Abyss

It should come as no surprise that I bring up "the middle way" here, for this is the path described by the Buddha, and elaborated by one of his most deconstructive followers, the Indian philosopher Nagarjuna. Many throughout history have questioned whether or not Nagarjuna was a nihilist, because in fact, he deconstructed the very terms whereby the Buddhist enterprise justified its own worldview. Give him any term, and he'd deconstruct it, in a manner quite similar to that described above, and with almost Greimasian precision. He called one of his primary texts, however, the Fundamental Verses on the Middle Way. And he said his goal was to avoid the extremes of eternalism and annihilationism, or absolutism and nihilism.

Of course, the difficult question is what remains if one deconstructs everything. In a sense, the answer is everything and nothing. For if everything is deconstructed, then one returns, in a sense, to where one began, one’s progress is a circle, only everything looks different. If one deconstructs even the very terms one uses to articulate one’s deconstructions (for example, truth and lies, or hard and soft), one is left with the world from which one began, before one began to pull the thread and deconstruct the “sweater” of one’s surroundings, the network of terms and processes, things and contexts, which are in so many ways, one’s world.

But the difference is that now everything looks tentative. For one realizes that nothing has to remain the way it is, everything has at least some degree of leeway, which is the degree to which one contributes to the continued categorization and action, construction and reconstruction, of the webs of context within which one is always already situated as an interpreter and actor in one’s world.

Nagarjuna describes this with his notion of shunyata, or "emptiness." He argues that everything is empty, and the term here is often also translated as void or illusory, but its origin can be traced to the manner in which something such as a bowl or vase is "hollow," which is to say, it appears as full, but ultimately there's nothing there but void. That said, bowls are extremely useful, and the void within them only ever exists in relation to that which is not void, that which surrounds it. And this is why Nagarjuna pairs his notion of emptiness with that of svabhava, often translated as "own-being" or "essence," and this set of terms is often used together in the tradition influenced by Nagarjuna to argue that everything is ultimately empty of own-being, or fixed essence. This is framed by Nagarjuna as simply a deepening of the notion, presented by the historical Buddha (at least to the best of our knowledge), of pratitya-samutpada, or dependent origination. That is, everything is networked, connected to everything else. And so, nothing is truly ever what it seems, because it is always the result of causes and supports, contexts and processes. And so, it can be deconstructed.

Until, that is, one gets to the whole. And just as was described before in regard to mathematics, the issue is ultimately how one relates to the whole, which is to say, the "big picture." Whether one does this implicitly or explicitly, this is a question of values. What sort of "big picture" does one want? Deconstruction, and non-dual language in general, always refracts the discourse away from the binary in question and towards the context which grounds the binary, and in a way which calls attention to the processes of reconstruction which are often hidden in the way such a binary serves to structure action and interpretation in a continual way in the present, often in a way which appears isolated from context and necessary. In fact, binaries often appear so necessary that they disappear from view, like when one looks at the world through a set of glasses and forgets they are there.

But deconstruction and reconstruction call attention to the fact that the world can always, at least to some degree or another, be constructed differently. That said, there do seem to be aspects of the world which, despite this, resist more than others. For example, it's hard to get around the fact that aspects of the world deconstruct and reconstruct. But if one deals with these softly as well, one is much less likely to be shocked and surprised by the world, because one expects this to happen.

For in fact, a soft world is simply better, at least from a soft perspective on things. Those advocates of a hard world would likely say the same of theirs. Each can only ever be judged according to what it produces. Our current world system is quite hard, being based upon armies that protect boundaries and fortunes from the impoverished, who seem to vote the same people into office anyway so that all can get the same products we all want. But it does seem to me that our world could benefit from a bit more softness. This is, however, an admittedly soft way of looking at things.

Values, which is to say, how one frames one's relation to a given context or whole, are never able to be coherent, at least not in a binary way. For values are about what one wishes to see; they are about the way the future influences the past and vice-versa in their way of framing the present. The very notions of binarity, like before and after, now and not-now, begin to break down. Because values are about what we wish would be true. They are virtual, in regard to what is actual. They are about hopes and fears, dreams and fantasies, rather than the way the world "actually" is. Nevertheless, they frame this actuality, and alter the way we act in the world, and so change the way the world "actually" becomes.

And this is why I feel that a softer world would be better. Because framed between the hard options of all or none, having and not-having, full and empty, I think the world would be better off with some, rather than with a firm divide between haves and have-nots. I think the world works better when there is a distribution of resources, potential, and the ability to determine what is considered right and wrong, true and false. The only justification I can have for these notions is my interpretation of the past, present, and potential future, but these framings have no ultimate justification, because, ultimately, justification is a dual game, and I've decided I prefer to play that game soft.

Because the alternative is adhering fervently to a set of binaries, and defending them at all cost. Or, to follow the logic of deconstruction, once started, to its logical end. And here it becomes possible to see the manner in which hard fullness is not the only hard option; there is also hard emptiness. A true skeptic, one who believes in nothing, a nihilist, is one who adheres to emptiness for its own sake, who deconstructs everything, world be damned, and leaves nothing to take its place. Such a person ends where they began, for the world reappears as it was, if fully deconstructed, but now it seems not only reconstructible, but meaningless.

This is the hard empty, and in its way, it is just as dangerous, and perhaps more so, than the hard full. Between the absolutist and the nihilist there is only what they hold on to, the full or the empty, but not the form of their holding, which is hard to the core. And this is why the followers of Nagarjuna argue strongly against nihilism, and against the claim that their deconstructive method leads to anything like nihilism. For they argue that nihilism is what happens when one turns emptiness, or shunyata, into a thing itself. The middle way deconstructs everything, and gets everything back as potential, as an option, and avoids the fanatical purity of the nihilist, who leaves nothing for anyone, and tries to destroy for its own sake.

This helps explain why, after the development of Nagarjuna's Madhyamaka school of philosophy, another came about, often called the Yogachara school, which argued that the flip side of emptiness was in fact a new sort of fullness, but one which was permeated by emptiness. This was a fullness which had everything as potential, but nothing as necessary. This is a worldview which sees everything as virtual, and sees in that an opportunity to produce a better world.

And this is why one sees the discussion of compassion within all these forms of Buddhism, and in a manner which is ultimately not deconstructed, or rather, which is continually deconstructed and reconstructed anew. For complete deconstruction, taken in a sense which is hard, leads one to nihilism. And this means that truly anything is possible, but nothing is more worthwhile than anything else. There is no reason, no argument, no justification, nothing to value.

The form of Buddhism in which one sees this tendency manifested most strongly is Zen. Zen, or Ch'an Buddhism in the Chinese original, is nondualism taken to the extreme. And yet it is not, at least according to someone like Nagarjuna, truly non-dual about its nondualism, for it adheres to this with a fanatical rigidity. And so, Zen verges on nihilism, on deconstructive fury for its own sake, that which Freud called "the death drive." Zen is often described as the wordless transmission of enlightenment. But any attempt to describe this is foreclosed, and with words, arguments, scriptures, the Buddha, and all guidelines for action seen as deconstructed, Zen becomes fully irrational, a relation between student and teacher, monastic institution and lineage and adherent, which is beyond words, concepts, explanation, justification, and values. After all the emphasis upon moral codes within Buddhism throughout history, Zen deconstructs these, and spontaneity returns with full force. Teachers hit, burp, scream senselessly, or use non-sequiturs or "contradictory" or seemingly irrational stories as teaching tools.

And sometimes the results are wonderful, and if the teacher is truly compassionate, and the monastic tradition and context "soft" enough, I have no doubt that this radical deconstructive embrace of emptiness can truly be reconstructive and liberating. But there is so little to hang on to, so little grounding, that it is easy to slip into nihilism. It hardly surprises me, then, that Zen has a history of being embraced by warriors, including the samurai, and that there are whole traditions of Zen swordsmanship and archery. These emphasize the deconstruction and reconstruction of the preconceived categories that limit the warrior. There is a softening, but for ultimately hard ends, namely, warfare.

Zen can be a completely peaceful way of dealing with the world, and often is, but it also has this history. The reason, I believe, is because it leaves itself so little ground to stand upon, because it embraces emptiness too strongly, and takes its non-dualism in a nearly dual or hard manner, which is to say, to an extreme. Zen can be compassionate, but it also doesn't have to be, and this becomes a question of values. Since that question is largely foreclosed by Zen, I will say no more about this other than, well, Zen is Zen.

Contextual Nondualism 

None of which is to say that other forms of Buddhism are necessarily better in this regard. I believe firmly that the difference between softness and hardness, which is ultimately one of values, lies in what one does, in the processes of action and interpretation, construction and deconstruction, and ultimately, reconstruction, and not in the things which remain behind, like words or objects. The meaning is always determined by context, and context is itself, ultimately, non-dual, at least when taken to the extreme. That is, any attempt to ground any particular mode of action by recourse to context will ultimately trigger recourse to the context of that context, on to infinity. And at the limit, the issue becomes one of values, of desire and hope, of the relation of the virtual to the actual, of potential to the concrete. What sort of world does one wish to have, and what might be ways, based on past actions, to help bring that about?

If extreme nihilism is one danger, then the other is absolutism. And many, many forms of non-Western thought are quite selective in the way they deconstruct. If Zen seems to deconstruct nearly everything, and leave little left secure, most other forms of Buddhism, Vedanta, Taoism, and Confucianism leave certain key terms standing, and these then serve to orient and ground things. Confucianism embraces "the Way," but refuses to question the value of the family and filial piety. Taoism is much more extreme in its universalization of "the Way," and hence has often been accused of nihilism, or of saying "nothing," as has Zen. It is perhaps not incidental, then, that Taoist notions were appropriated by the Legalist theorists in China, the Machiavellis of their day, writing well over a thousand years before the Italian was born, and that Taoist notions were used to imagine ways for rulers to deconstruct and reconstruct the reifications of their opposition so as to undercut them. Sun Tzu's famous Art of War uses Taoist-inspired notions, and has influenced generals and warriors for literally thousands of years.

Likewise, while the Bhagavad Gita was one of Gandhi's primary sources of inspiration, it was also massively influential on some of the Nazi war criminals who committed some of the greatest atrocities of the century. This is because, as with Zen, the Gita is fundamentally an "a-moral" text. This is not to say that it is immoral, but rather that it urges, at points at least, that one should throw away any and all restrictions in one's devotion to Krishna, which is, ultimately, a representation of a principle of non-duality. The moral advice to Arjuna, the primary character in this work, is not to stop fighting, but to return to it, and not to overturn social hierarchy, but to support it. There is a tension, however, because the text at some points advocates support for the traditions of Vedic culture at that time (ie: avenge those who killed your family), while at others it hints that devotion to Krishna is more important, and this implies a fundamentally deconstructive and reconstructive, context-based approach to ethics. Without anything more to guide this, however, it can be interpreted as 'anything goes,' so long as there is "no karmic debt" incurred, which is to say, so long as one does not engage in actions for their karmic reward, nor for the pleasure or pain they may bring.

This has been used to justify some very, very disturbing actions, and to provide a means of detaching from them. Deconstruction can be used, after all, to deconstruct and reconstruct things in a way which is very rigid, possessive, absolutist, and destructive. Deconstruction knows not compassion by itself, nor does reconstruction, for ultimately the question is "for what," which is to say, a question of values. If the value structure, the context which grounds the others, is nihilist or absolutist, then no matter how much deconstruction and reconstruction occurs, it is done for "hard" rather than "soft" ends, and any softness is ultimately relative.

Capitalism, after all, is today a massive machine for the deconstruction of past certainties, leading some critics to describe it as a machine for "deterritorialization" and "reterritorialization." Destruction and reconstruction in its own image is in fact precisely what capitalism does. Devourer of worlds: few terms could ever describe contemporary multinational, tentacular capital better, the "vampire squid wrapped around the face of humanity" which sucks the world dry for its own ends, for hoarded piles of numbers in bank accounts as goods for their own sake, no matter the cost. Even the production of goods is merely a means to an end; what it is really about is having more digits in the bank account.

And to what end? Capitalism defers questions of values, because ultimately it is nihilist. And so, it appears revolutionary, and may have local liberatory effects in relation to various solidifications which have become restrictive. But in the long run, its goal is to harden things into so many mirror-image copies of itself, and turn the whole world into fuel. And when it has done this, it will eat its own body, and today the crises of capital are the spasms of the world system eating itself. None of which is to say that capital will ever collapse; it has shown time and again that it is far too slippery for this. No, it recalibrates at the last minute, but with each recalibration, it tries to give away as little as possible to those who need it the most, all in the aim of its own hoarding and growth.

Traditionalist reifications are one danger, and hyperviolent nihilism, whether in traditional form, as seen in particularist aggressors throughout history, or in its much more sly, nihilist postmodern form in capitalism, is the other. But between absolutism and nihilism lies the middle way. Taoist philosophy and Zen tend towards nihilism and complete nonduality, while more ritualized worldviews, such as many forms of Confucianism, or the devotional strands of Buddhism (ie: Pure Land Buddhism) and Hinduism, tend to reify and limit the ways in which nonduality allows for flexibility. What has allowed all of these systems to evolve in regard to the needs of the people involved is that the deconstruction and reconstruction of grounding terms allows these worldviews to change. Nondualism can either help or hinder this.

And while many of these worldviews have non-dual aspects, there are ways in which this nonduality is contained. It is these non-deconstructed moments which indicate the aspects of these worldviews which are non-negotiable. Compassion, for example, for the Mahayana. And this is justified by recourse to the fact that deconstruction is seen as a dispersive strategy, and it is, but this is to miss the fact that dispersal can be put towards non-dispersive ends. And so, throughout its history, the dissolution of the traditional aristocracy and its wealth by Buddhist calls for compassion has led to the accumulation of fantastic wealth by Buddhist monasteries, and this destabilized these societies so much, particularly in Gupta India and T'ang China, that it had a hand in bringing down those empires themselves.

Any discourse can be misapplied, and any discourse can have its non-dual "soft" aspects linked up to others which are either hard, or used in "hard" ways. No term is more or less hard or soft than any other; it is always a matter of the way the term is used in context. Likewise, no particular action is good or bad on its own, or even meaningful; it is all about the contexts and processes which endow an action with meaning, relevance, causes, and effects.

And this is why the middle way can never be a single discourse, and even calling it this is only ever a placeholder for a practice which must be continually vigilant in maintaining the ultimate softness of its ends. Because the world has a tendency to harden and to soften, one which seems to feed on whichever is predominant in a given situation. Hardening and softening seem to self-potentiate. Growth spurts become rallies, and collapses can drag their contexts down with them.

No, the middle way is never a thing; it is a desire, one which must continually be recreated anew, continually reconstructed in regard to its contexts. One must continually weigh the relation between the situation in front of one and the contexts and processes of its construction and reconstruction, and attempt to find the soft path. There are no guarantees. Softness is an ethics, not a thing. Likewise, no discourse is ever completely soft, but only so in context, and always only ever softening or hardening, along with all its aspects. Between the hard and the dissolving, softness is the middle way.

The one form of Buddhism I find most interesting in this respect is Tibetan Buddhism, and I have dealt, and will deal, with this form of nonduality extensively elsewhere. What I find most interesting is that Tibetan Buddhism builds upon the Yogachara insight that emptiness is the foundation for creation, and by means of this, it softened the traditions of the Bön religion in Tibet, and the warrior ethos of its people, and created a society which was Buddhist to the core. That said, this too was able to reify and hierarchize, to solidify.

What’s more, for all the talk of nonduality between subject and object within many, many non-Western philosophies, there is often a subtle emphasis upon the interior world. That is, one changes, through meditation, the way one sees the world, and this changes the way one acts. But there is rarely an emphasis upon direct revolutionary action, of changing the world itself. While some non-Western theorists urge this, such as Mozi in ancient China, by and large, change happens inside one, because one breaks down the barrier between consciousness and world, leaving only consciousness (Yogachara has been called the “Mind-Only” school, and has much in common with Vedantic Hinduism in this respect).

But if one were to truly take this argument to its conclusion, if the mind is everything, then the world is also the mind. The binary would fully deconstruct. And so, to change the mind, one must change the world, not merely the mind. Now, many forms of Buddhism do change the world, and the institutions of Buddhist monasticism are evidence of this. But this reworking of the outside world isn't oriented toward a better outside world, but only a better inside world, and toward optimizing the outer world to help cultivate the latter.

And so, when the Dalai Lama says that the “East” has provided, in the form of Tibetan Buddhism, an inner science which can liberate the inner world, and the West the ideals of democracy and science and revolution which can change the outside world, I think his notion here should be taken seriously. The Dalai Lama has said that his worldview is Marxist, but not in the sense of the Chinese or the Soviets.

There must be a middle way between internal liberation and external liberation. And I feel that the non-dual insights of non-Western thought can be essential in helping the West get beyond its reification of the individual as the standard of all knowledge and action, as evidenced in various ways in materialist science and scientism, binary notions of thought and language, or capitalist possessivism. But likewise, Buddhism and attempts at inner liberation, such as psychoanalysis and therapy in the West, need to question their own reifications and binaries as well. But not towards nihilist ends any more than absolutist ones. Rather, the question is: what are our values? And why? What can help us give rise to a better world?

And, as I would argue, how can we become soft, and find the middle way between hardness and dissolution? I believe that strategic use of nondualism can help this happen. And I think that a general orientation towards the nondual ground of any dualism is an essential part of this process. But if this becomes an end in itself, it too can be destructive. In the language of the Tibetan tradition, there is need for emptiness and compassion, for these are two sides of the same coin, and the resulting interpenetration keeps the process moving and quasi-stable, metastable. It keeps it readjusting to produce optimal softness.

Feeding Back Nonduality, From Virtual to Vajrayana 

I'd like to end on a discussion of feedback in living systems, and how this relates to non-dualism. All complex adaptive systems in the world make use of feedback to modulate their relation to their contexts. While a thermostat isn't complex, it certainly is a simple feedback mechanism, for it adjusts the temperature of a house to keep it in an optimal middle zone. And it does this by means of feedback, which is to say, the temperature itself comes to factor into the setting of the processes which impact the temperature. Temperature enters more than once into the equation, which is why, in mathematical terms, feedback tends to show up in equations in the form of exponents, leading these equations to be called "non-linear," because they tend to give rise to curves rather than straight lines, and often to "organic"-seeming structures rather than simplistic mechanical ones. Complex adaptive systems occur when these feedback processes take on a "life" of their own, such as the manner in which a whirlpool engages in constant feedback with aspects of its elements and environment. Living organisms are what happens when this becomes relatively self-sustaining. The evolution of life, consciousness, self-consciousness, and all we know, love, and value is the result of feedback building upon itself, and doing so non-linearly.
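
To make the thermostat example concrete, here is a minimal sketch of such a feedback loop. All the parameters (setpoint, heating rate, heat loss, outside temperature) are made up purely for illustration; the point is only that the measured temperature feeds back into the decision about heating, which in turn changes the temperature.

```python
# A minimal thermostat sketch: the temperature is both an output of the
# process and an input to it -- which is all "feedback" means here.
# Every parameter below is invented for illustration.

def simulate_thermostat(hours=24, setpoint=20.0, outside=5.0):
    temp = 12.0                              # starting indoor temperature
    heater_on = False
    history = []
    for _ in range(hours * 60):              # one step per minute
        # Feedback: the measured temperature decides the heater state.
        if temp < setpoint - 0.5:
            heater_on = True
        elif temp > setpoint + 0.5:
            heater_on = False
        # The heater state, in turn, changes the temperature.
        temp += 0.2 if heater_on else 0.0    # heating input per minute
        temp += 0.01 * (outside - temp)      # slow heat loss to outside
        history.append(temp)
    return history

if __name__ == "__main__":
    temps = simulate_thermostat()
    # The loop holds the room near the setpoint rather than letting it
    # drift toward the outside temperature.
    print(round(min(temps[-60:]), 1), round(max(temps[-60:]), 1))
```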

Studying the world, it becomes possible to learn what develops life and all that makes it good, and to try to experiment further to increase the quality and quantity of this goodness as sustainably and complexly as possible for the greatest number. Scientists have studied these sorts of systems rather extensively, and their findings seem to indicate that diversity, distributedness, feedback in various forms, and energetic flows which are meta-stable, just a little more than the system can handle, tend to maximize growth, in all senses of these terms, and in relatively sustainable ways. When any one of these predominates, however, the system leaves the middle way between the many strands of these networks of factors, and imbalances are generally the result.

It is for these reasons that I think these are some very real values which can help us link any particular with the whole, so long as we don’t reify them, and keep using them softly, keeping our eye on the middle way, in all senses of this term, even up to and including questioning any and all aspects of this process. With one exception. Namely, that that which leads to life, and which makes it better, needs to be the source of our value. And ultimately, in a sense, this is always the case: all values, and even the ability to value, come from life. But we often take particular aspects of life and raise them beyond life. Even Buddhahood, or revolution, can become idols which interfere with any ability to give rise to these very things. And this tends to occur when we reify one aspect of the world as more valuable than the value which is within any and all, and which can give rise to the best within any and all, so long as it keeps its eyes always on the attempt to do what is best, not only in regard to itself, but to any and all.

Such an ethics is nondual. It doesn’t dispense with the self, because it is impossible to make the world any better if one simply dispenses with oneself and one’s own needs. But it attempts to continually deconstruct and reconstruct the terms of any situation, so as to situate it in regard to the values of the whole. And the whole is always beyond, always only present in part, always, in a sense, soft. Not empty or full, not here or there, not present or beyond, not changing or the same, but fundamentally, nondual. But this nonduality cannot reify itself into something or nothing, it must be kept moving, and this is why “nonduality” is only ever a reification of what nonduality is actually about. There is no word for this, and yet, contra Zen silence, there are many. And each is better than others in regard to a particular situation. Of course, this is what a Zen koan seeks to indicate, but by dispensing with explanation, my sense is that it leans too much to the side of emptiness, without emphasizing the fact that people tend to need something to hang on to as well, some luminosity and compassion, creation and recreation, in addition to mere emptiness and deconstruction.

From these notions it becomes possible to tie nondualism, along with deconstruction and reconstruction, back into not only the semiotic issues described at the start of this essay, but also the issue of life, which is to say, the world from which semiotics like those of language emerged. In living systems, feedback is how a system alters the way it relates the balance within itself to that which it maintains with its environment.

Feedback shows up in mathematical equations generally as exponents, and in the graphs which these equations produce, in the way that complex systems, such as those which are alive, tend to be described by non-linear, curved trajectories rather than the straight-line paths which better describe mechanical systems. Of course, put enough mechanical systems into play with each other, and the results can quickly go non-linear. In fact, mathematically speaking, the famous “three body problem” in the history of mathematical physics finally showed that it takes only three mutually interacting bodies, each simple on its own, to produce a non-linear system, which is to say, a system whose long-term behavior cannot be predicted in advance from the parameters which composed it.
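
A standard toy example makes the same point without the full machinery of the three-body problem: the logistic map, in which a quantity feeds back on itself through a squared term. The sketch below (Python, using the parameter value r = 3.9 that is commonly used to illustrate chaotic behavior) shows two nearly identical starting points parting ways after a few dozen steps:

```python
# The logistic map: x(n+1) = r * x(n) * (1 - x(n)).
# The squared term (x multiplied by itself) is the quantity feeding back
# on its own growth -- the "exponent" the text describes. At r = 3.9,
# two starting points that differ by one part in a million soon disagree
# completely, so long-run behavior can't simply be read off the
# starting parameters.

def logistic(x0, r=3.9, steps=50):
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)   # feedback: x enters the equation twice
        trajectory.append(x)
    return trajectory

a = logistic(0.200000)
b = logistic(0.200001)   # almost identical starting point
for n in (0, 10, 25, 50):
    print(n, round(a[n], 4), round(b[n], 4))
```

Nothing mysterious has been added to the equation; the unpredictability comes entirely from the feedback of the quantity on itself.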

It doesn’t take all that much for the basic stuff of the world to go “off track” and lead to unexpected results. In fact, all it takes is three things interacting, in the context of a fourth which provides a flow of energy, and things are likely to get out of hand. Two things, however, so long as they are mechanical and predictable, even in an environment which provides a similar flow of energy, remain predictable and linear. Binaries tend to reproduce themselves, and so do triads when anchored by binaries. But when triads arise in the realm of fourthness, they tend to produce the unexpected, and binaries may try to rein things in, back to neat triads or even neater binaries or unities, but are rarely able to do so fully.

If one examines individual equations or graphs, separate from the systems in question, however, one only sees exponents or curves. And when the curves hit an inflection point, one only sees singularities. When mathematics and geometry take an approach which filters out context, the role that feedback plays in generating these factors becomes obscured, and is often only restored by relinking what reifying approaches segmented.

The same can be said of language. Individual words reify, as do binary distinctions. But humans only use individual words and binary distinctions in relatively artificial situations, removed from the ebb and flow of sentences in motion. And yet, these hypostatizations are privileged by so many philosophers and linguists as paradigmatic examples of how language produces meaning. The individual signifier, for example, in the history of linguistics, and its accompanying signified. Of course the world will seem reified and binary when viewed from such a perspective.

However, when context and process and pragmatics are restored, as so many post-structuralist and other critics did with the structuralist linguistics which dominated mid-century, the rigidity begins to vanish. And in hindsight, it becomes evident that structuralism was a Manichean ideology which reflected a Manichean time, the time of the Cold War, as well as of the binary computer. But even beyond the Internet, by means of artificial neural networks and many other advances in so-called “soft”-computing, even models of computation are beginning to go non-binary. In fact, binary computation now seems, in hindsight, so limited and restrictive, such a limitation on where the future of computing might go. The brain, as well, is a fundamentally networked structure. Why then would we think that thought or language would be binary, linear, or composed of reified building blocks which are assembled like machine parts?

But what are some of the implications of this for semiotic theory, and the study of language? If exponents are the way in which feedback shows up in mathematical equations, and curves, including those points where a curve folds back into itself at a point, are the way it shows up in geometry, nonduality is how it shows up in language.

For example, let’s return for a moment to the issue of the “set of all sets,” and the related notion of “the empty set.” These notions were fundamental to the self-deconstruction of the notion of the set, and with it number and axiom, which were part of the foundations of mathematical logic in the early part of the century. If we consider a set as a word, however, and we think of what it includes, namely, an element, as a set as well, then it becomes possible to see this in binary terms. That is, a set is something which includes an element, that is its definition, and this process of inclusion is binary, in that an element is either included, or it is not. And so, something either is an element, or it is not, “E” or “not E.” The notion of a “set” is a meta-category which includes E, and excludes non-E. The set of all sets is that which includes both E and not E, while the empty set is that which includes neither.
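
To spell out this binary picture of inclusion, here is a rough formalization (my own notation, meant only as an illustration of the standard paradox behind the foundations crisis, not a historical reconstruction). Treat membership in a set as a two-valued function, and the trouble with the set of all sets appears as a value which can be neither 0 nor 1:

```latex
% Membership as a binary (two-valued) predicate:
%   chi_S(x) = 1 if x is an element of S, 0 otherwise.
\[
  \chi_S(x) \in \{0, 1\}, \qquad
  \chi_{\varnothing}(x) = 0 \ \text{for all } x, \qquad
  \chi_{V}(x) = 1 \ \text{for all } x,
\]
% where \varnothing is the empty set (includes nothing) and V is the
% "set of all sets" (includes everything). Russell's set
%   R = { x : x \notin x }
% then demands
\[
  R \in R \iff R \notin R,
  \qquad\text{i.e.}\qquad
  \chi_{R}(R) = 1 - \chi_{R}(R),
\]
% which has no solution in {0, 1}: the binary of "included / not
% included" cannot close over itself without remainder.
```

The binary of inclusion works perfectly well locally, but the moment it is asked to apply to itself, it produces something that is neither inside nor outside its own terms.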

And so, when a Yogachara thinker argues that any entity is both itself in a relative way, and yet ultimately void, and hence, is in a state of thusness which is both an entity and void, yet also neither, this is a mode of arguing which is not all that different from that presented by the foundations crisis of mathematics at the turn of the century. The difference, of course, is that the foundations crisis wanted to avoid this sort of deconstruction, while Yogachara embraces the deconstruction of Nagarjuna and his Madhyamaka school of “emptiness.” And yet, it also goes through these, because it affirms the notion that everything is ultimately void, but not relatively so. The question then becomes referred to context, and to the values that allow one to articulate the local and the global in relation to them, which for Yogachara entail the path to liberation and the attempt to foster compassion beyond the self/other duality.

In the process, any particular reification is seen as ultimately tentative in relation to the contexts and processes of its production, and the particular choice of any entity is linked into modulation with larger wholes. And as such, none of what is going on is dealt with as ultimate or reified. Every aspect is rather composed of networks of others, and the linkages between and within any of these are up for grabs.

To say that something is “neither nor yet and both and yet neither of these” is to deconstruct the binary in relation to the contexts and processes of its production. And yet, this is always within a network of other notions. Deconstructing a binary can lead to its replacement by another, or to its use in a much more tentative way, or to the reconstruction of the whole landscape of terms around it. What a nondual argument does, however, by deconstructing terms, is to put them into play, to engage in a process of unmooring them from their contexts, and ultimately, of referring them to the contexts and processes around them. Nonduality makes us aware, in and through binaries and the reifications that go with them (i.e.: this thing is this, and not that), that there are networks of other possibilities, and perhaps infinite ones, within any reification, around that reification, in the linkages between, and at multiple levels of scale. This is suchness, the mixture of emptiness and fullness, which the Tibetan Buddhists refer to as vajra being, the being which refracts everything and nothing, which is the very substance of everything, that which has its true essence obscured by anything in particular, but which has the potential to become anything and everything, depending on the contexts and processes in which it is transformed.

To continue with the Tibetan Vajrayana approach for a moment: only emptiness, which is to say, the deconstruction of reifications, binaries, and the other solidified formations tied to these, can reveal the potential luminosity of the Vajra-being, the essence of liberation and pure potential, within the very fabric of the world. And only emptiness can make sure that this potential doesn’t solidify again in turn. And yet, to reify emptiness is to miss out on all the potential for growth and transformation, in relation to which emptiness is only the pathway. Emptiness is a means, not an end, and what it is a means towards is liberation, not merely of the self, or the inner world, but also of the outer world, and of others. This is true compassion, to liberate oneself through one’s world, and one’s world through oneself, self for others and others for self, world for self and self for world. Getting stuck in either fullness or emptiness, deconstruction or reification, is ultimately a strategy which misses the power of the soft, or, to use the vajra term, the adamantine that is harder than all that is hard. Because true softness can cut through the hardest of rigidities and reveal the massive potentials lurking within, and help unleash them for a better world.

And this is why, I believe, Yogachara and the schools which flow from it, such as the Tibetan schools, are on to something fundamental about how the mixture of emptiness and luminosity, the vajra of Vajrayana Buddhism, produces a seed of liberation which can bloom anywhere, within the subject or the object, and ultimately anywhere. This seed, spoken of in Yogachara as the buddha-embryo or buddha-womb (the word is the same in Sanskrit, tathagatagarbha), is the seed of liberation which is everything, and can be developed into pure liberation. And yet, most of us don’t see this, and this is why the world normally manifests as distinct, reified, binary things. But once we liberate how we see the world, by means of the liberation from fixations which the crucible of emptiness, of deconstruction, provides, we see that anything and everything is possible. This is why in Tibetan Vajrayana Buddhism, one starts tantric meditations from pure emptiness, creates a visualization of a quality one wants to foster in one’s life, then breaks this down and dissolves it back into emptiness. The goal is both to further one’s detachment from the reifications of one’s everyday world, and to increase one’s sense of freedom and possibility in relation to it. This is a virtual reality practice in reimagining reality.

The issue I have with this, however, is that it remains all within the self. Since all is mind for Yogachara, changing one’s mind is changing the world. But if one doesn’t change the world, one can only go so far with changing one’s mind. And this is why, I believe, there are limitations to the forms of liberation which are available here. That is, Vajrayana, as powerful as it is, doesn’t deal with what it may take to recreate the world beyond the mind. One could, of course, argue that the Buddhist community is precisely what the world turned into a mandala, or sacred diagram of a Buddhaverse, might look like. And yet, this is a vision of the world based on what liberation of the mind would look like if the world were then recreated in this image at a particular moment in its historical and social context. I find myself wondering: what might this look like if this process were then reapplied recursively, and back to the world?

This bright, luminous potential is what Gilles Deleuze calls the virtual, and what, in my philosophy of networks, I call the matrix of emergence. It is that which is beyond any one, and yet that of which any one is an aspect, and so, it is, ultimately, oneand. When matrix, or emergence, is reified, it gives rise to rigid structures, but the potential for emergence remains within these, waiting to be unleashed by less rigid forms of networking. First there must be deconstruction of reifications to allow this potential to emerge. But then there must also be reconstruction, so as to foster more sustainably complex, which is to say, more robust, more emergently emergent, forms of emergence.

Feedback is one of the primary ways in which this happens. Feedback is the process whereby systems readjust their relation with the world so as to emerge more robustly. It is only when systems are reified apart from the world around them that they cease to engage in feedback, and this results in rigidified ways of acting which lead to crisis either within or without, but often, in mirrored forms, both. Feedback is non-dual, for it describes the manner in which boundaries are crossed, whether the feedback works across internal or external distinctions. It doesn’t nullify distinctions, but it perpetually readjusts them in relation to contexts and processes beyond them, yet which provide meaning for them. These contexts and processes evaluate boundaries and reifications in regard to the values of the system in question, which are those which determine its actions. Reifications, boundaries, linkages between these, and the readjustments of the processes which determine, maintain, transform, and modulate these are what feedback mediates. Feedback is in fact the process which links these all together, and hence is a crucial part of the process of emergence.

Nondual modes of argumentation are attempts to loosen binary systems from within. When the binaries do not disappear, but remain, this means that an attempt is being made by the discourse in question to both maintain and dissolve a distinction, which is to say, to modulate the way a particular distinction plays out in its processes of linking and delinking from particular micro and macro formations within various contexts and processes. That is, the larger contexts and processes at work are attempting to maintain relatively dynamic and fluid relations between parts of the system in question.

We see this in living systems all the time. For example, the mouth is both inside and outside of the body. And the body goes to great efforts to maintain this state of both/and and neither/nor. Continual processes of feedback are needed to make sure that the mouth both is and is not either within or without the body. And so, the body feeds back on itself to maintain conditions which aren’t too wet, nor too dry, not too much of this enzyme or that. Too much of either extreme, and the mouth would stabilize on one side of the binary or the other. And that would prevent it from doing what it needs to do, which is to say, abide in between, taking the middle path.

Linguistically, this is difficult to describe, because individual words tend to reify, and so do binary distinctions. But the mouth existed long before language even evolved. Feedback precedes language. And non-duality is simply the way in which some of its aspects show up in particular ways of using language, especially those in which there is an attempt to keep some binaries in place, yet in a situation which modulates their relation in between that of others. And so, a person who lives in a world of objects, yet continually questions their very fabric and relation to others, is a person who lives in a world of objects, and yet who also lives in a world of no objects and all objects. Which is another way of saying, they exist in a relation of feedback with the processes within and beyond them which give rise to these objects, maintain them, and transform them, such that all of these are seen as tentative.

Language is ill-equipped to describe these states, particularly from within reified perspectives. The discourse of mathematics, for example, has a very, very specific set of filters through which it sees the world, and so when it comes upon sites of feedback within those aspects of the world it is attempting to describe, it hits limit effects. Like the manner in which a microphone will screech if there is too much feedback, or a house will overheat if there is too little, feedback is needed to keep systems going. And yet, from within particularly limited perspectives, it may not register as what it is. In fact, it often shows up as a gap or disturbance within what otherwise seemed orderly, such as exponents in otherwise linear equations, or curves in their graphs. These ‘messy’ points are points of junction between systems, and hence, are points of instability within attempts to order their interactions. And ultimately, any stability we see in our world is the result of some process of trying to make order out of disorder. Realizing that the world is much more complex is what getting beneath surface manifestations is all about.

But this requires a perspective which can link what others tend to reify. That is why my approach, the networkological approach, takes networks as its model, for what I am trying to do by means of this is to deal with what it truly means to understand relation. And from what I can tell, this means to also be related to the worlds within and without which reifications tend to obscure.

Grounding terms, the terms within discourses which localize the contradictions of that discourse, are those which, for Lacan and other theorists of language, describe what is both within those discourses yet beyond them. God is a notion which is everywhere in religious discourse, but if you try to use only religious discourse to justify why the world needs a notion like God, the result will be either tautology or contradiction, which is to say, one will either produce circular arguments, or have to go beyond religious discourse to justify religious discourse. Mathematics ran into the same problem with set theory at the start of the century, as the set of all sets, a quantitative attempt to formulate something “like” God, could either justify itself completely circularly, and hence produce no justification, or not at all. Justification, after all, is a form of linkage whereby one entity grounds itself in another which serves as its context. Math tried to ground itself in logic, and then Gödel showed that the results were ultimately problematic, that this was simply a shift in terrain. The problem remains the same. One has to deal with questions of value sooner or later.

The choice of grounding terms, which is to say, the terms within a discourse which one deconstructs less than the others, and hence, which anchor the others in networks of usage, definition, categorization, and so on, is a choice which underlies the values which make that discourse work. And ultimately, these terms are chosen because they allow for forms of action, since the uses of words are themselves forms of action, which sync up ways of speaking and writing with non-linguistic forms of acting. If the resonance works well enough, the grounding terms are seen as grounded, which is to say, “justified.” The process we call reasoning and argumentation is simply a part of this ultimately only semi-linguistic process, one which is both within and outside of language, as well as neither nor. That is to say, there is a relation of feedback on both sides of the boundary between language and its others, because that boundary is continually being renegotiated and recreated. It hasn’t outlived its usefulness, so we keep that boundary around, and yet, we need to keep our relation to it relatively fluid. When we get too close to the boundary from within the domains which it allows to function, we get something like the effect of a microphone too close to a speaker, and yet, if the microphone is too far away, the speaker won’t be able to modulate what they sound like when reproduced. Feedback must always be optimal, it must be in-between, because it serves to maintain that which the in-between allows to manifest.
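
To put a rough number on the microphone image: the sketch below (Python again, with invented gain and coupling values, not a model of any real audio system) shows why feedback has to stay in-between. The product of the amplifier’s gain and how much of the speaker’s sound leaks back into the microphone decides whether the loop screeches, settles, or hears nothing at all:

```python
# A toy model of the microphone/speaker loop described above.
# Each pass around the loop multiplies the signal by
# (amplifier gain) x (how much speaker sound leaks back into the mic).
# Loop gain > 1: runaway screech. Loop gain near 0: no monitoring at all.
# Useful feedback lives in between. Illustrative numbers only.

def loop(signal=1.0, gain=2.0, coupling=0.4, passes=12):
    levels = [signal]
    for _ in range(passes):
        signal = gain * coupling * signal   # one trip around the loop
        levels.append(signal)
    return levels

print([round(x, 3) for x in loop(coupling=0.6)])   # 2.0 * 0.6 = 1.2 > 1: blows up
print([round(x, 3) for x in loop(coupling=0.4)])   # 0.8 < 1: dies away
print([round(x, 3) for x in loop(coupling=0.0)])   # no feedback: nothing to adjust to
```

Too much coupling and the boundary collapses into noise; none at all and there is no relation left to modulate. The useful range is, precisely, in-between.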

Mathematics, from a set-theoretical perspective, exists in the in-between which set theory describes by means of the notion of inclusion, the enabling parentheses which allow set theory to function. Likewise, it is the fact that words both are and aren’t things that allows words to do what they do so well, which is to say, to represent the world without being the world, while being related to the world in a way which is relatively fluid.

If Western forms of thought assume the self-other binary, most Western schools have also assumed the isolated, monadic individual as the foundation of all knowledge and action since the birth of capitalism and modern science. We are living in a moment, however, in which our very capitalist modes of production, and the very science we have given rise to, are producing formations which are calling this model into question. It has perhaps outlived some of its usefulness. We are starting to see feedback and limit effects which are screaming for a remodulation from within. Of course, perhaps this has always been the case, to some extent or another. Individualism, private property, the nation-state: these reifications have always been hyperviolent and overrigid. But the limitations are now starting to be dismantled, along with binary structures more generally, even by the powers that be as they search for more flexible formations. The modern Western individual is beginning to unravel.

This is an opportunity, a moment of feedback in which it may be possible to remodulate things. To learn from earlier formations, like pre-modern Western formations, as well as from the models which have prevailed in non-Western formations, which never reified the individual as extremely as the West did. That said, these non-Western societies have tended to reify social hierarchy much more so than the West, and many of the philosophies in question have been much more focused on liberating the inner world than the outer, which is the reverse of capitalism. Both sides can learn from each other.

The trick, of course, is to not get stuck in the frenzy of deconstruction any more than the reification of the old ways, or the slick postmodern capitalist synthetic hybrid which deconstructs traditional reifications, only to reconstruct things at higher levels of complexity for equally rigid ends, and in a perpetual yet ultimately self-defeating cycle thereof.

From a networkological perspective, all these are networks composed of networks. Any reification, or any binary linkage of these, is ultimately a network, within contexts composed of other networks, and at multiple levels of scale. And the stuff of which these networks are composed is matrix, oneand, or emergence, that which is both itself and beyond, and not anything in particular, for it is the stuff of which any particular thing is an aspect or grasping. It is fundamentally non-dual, for it feeds back into itself, which is to say, it contains itself infinitely, yet more intensely in some particular manifestations in which it is more intensely networked with aspects of itself, thereby giving rise to the greater potential for more intense forms of networking. Energy is simply one aspect thereof, as are matter, space, time, subject, and object. And so, the networkological project is grounded in a non-dual manner, even if it makes use of particular reifications strategically to intervene in contemporary debates. Such an approach, called “skillful means” by the Buddhists, links construction to deconstruction to a goal and ethics of robust emergence, emergent emergence, which is for any and all, beyond subject and object, me and world, for it is that from which we come, and yet more abundantly. It is identification with the Deleuzian virtual, as a social and personal praxis of liberation.

Such an approach finds its only justification in what it desires in the world, in relation to what it desires to see, yet in regard to the context beyond all context, the non-dual context which is the only possible grounding for any aspect of the world, and for which even a notion like matrix, or emergence, is merely one more partial reification. All of which raises the question, of course, of what we want the world to become, in regard to this potential of what it has been, in relation to where we are from within this. All of which is to say, we need to engage in feedback, but massively so, such that any and all reifications are seen as tentative in regard to the widest possible context, which modulates the relation of any and all to each and all. And this is ultimately that of which each and all are refractions, which is to say, empty of particular being but having the potential for any and all, if only relationally in regard to the whole and the potentials it has in relation to each, such that the question becomes what we want to become. And it would seem that liberation from limitations, ever more free from reification, but in a manner which is sustainable and ever more intensely liberatory, would be that which is at least resonant with not only the structure of each and all, but also with liberation from reification. There is a form of circularity here, of course, but with a difference, and that difference is what makes the difference, for it is difference as such, in and beyond any particular difference. For more on these notions, see my essay on “the widest possible context.”

From the perspective of Buddhism, however, this would be the realm of suchness, of the pure interpenetration of luminosity and void, and yet, beyond the realm of mind alone, beyond the duality of subject and object, which is to say, a Marxist Buddhism, one which aims to recreate the world in the image of liberation, and to recreate the mind in the image of a liberated world. Such a notion is necessarily local, for it refracts the notion of liberation in relation to any and all in a way in which the parts and wholes exceed each other infinitely in their mutual emergence.

Language and mathematics, subject and object, these are so many reifications of the fundamental fabric of what is, the refractions of which give rise to the world we know. Reification of this process into reflections of the same gives rise to rigidity, and yet, this is only ever local. The question is how to help the world emerge from its reifications without getting stuck in new ones, and yet also without fully dissolving in dissolution. The middle path, liberatory emergence, and fundamental non-duality, in and beyond all dualities and reifications, which ultimately are necessary, if in soft form, for any and all emergence to arise in the first place, and to be able, by means of feedback, to emerge from itself in the process. From such a perspective, feedback can come to be seen beyond the reifications which shatter it into aspects which are ultimately less than comprehensible. Only from such a perspective do both duality and non-duality appear as aspects of each other, which is to say, the contexts of the largest possible context, and that of each and all, the matrix and fabric of the oneand, that emergence of which any and all are a part.