Thursday, December 30, 2010

The Poverty of Online Information

This is a post I wrote for another website, Good to Know, Inc., but I wanted to share it on this blog in its unedited form (since they're likely to make edits to my writing). Here it is:

Also, I have no idea why the font is being so screwy.


There are three concepts that, when one looks closely enough, can be seen as a single indivisible triad: narratives, context and memory. They all fundamentally deal with an associative way of thinking and they are noticeably robust: one doesn’t easily forget things (my cousin and I still occasionally debate who was at fault in a scuffle we had when I was seven) and we seem to understand things best when they’re told as a story. They also have another trait in common: they are often lacking, if not non-existent, when I try to find information on the internet.

Take Wikipedia as an example. It certainly has a wealth of raw information; just about anything one can think of seems to have an article, whether your interest is in Hermitian matrices, hardcore bands, Lacanian psychoanalysis or Pokemon trading cards. Even so, I never feel the same sense of fulfillment reading material on Wikipedia that I get from reading a book; I often find myself struggling to walk away with something to hold on to after browsing through this seemingly infinite pool of collective knowledge.

Maybe I’m just of a dying breed of people who hold on to physical books, but more universal reasons seem to be at play. Our brains work in an associative manner; rather than record our memories serially like a video camera or a computer’s hard drive, we create memories by connecting various objects, sensations, thoughts, images, sounds and other past experiences. Conversely, we recall memories from the past by seeing, hearing, feeling or thinking something that we’ve associated with them; anyone remember the scene that everybody talks about from Swann’s Way, where the narrator takes a bite of a madeleine and his entire past rushes back at him?

A single sensation or reference brings back a rapid cascade of memories, and this same phenomenon underlies what might be our memory’s inherent preference for books: we associate a book’s content with the distinct look and feel of its cover, pages, printing, wear and tear. Simply put, you can remember something much better when you attach it to a physical object of some kind.[1] But just as the physical context of knowledge may matter more than we think, so too might the relevance of that knowledge to some larger narrative. The journalist Malcolm Gladwell is often criticized for the lack of empirical rigor behind his theses, but I prefer any one of his books to a series of entries on Wikipedia because I believe him to be a masterful storyteller, able to weave a large body of data, anecdotes and ideas into a single story that makes sense of them. Knowledge is much more easily remembered when there is a narrative that brings it all together.

One way of thinking about this is that a narrative introduces redundancy to knowledge. It’s much easier to remember an idea when it’s linked to a number of other ideas as part of a larger theory. If you learn something in a vacuum, there will be nothing to give you any clues to it should you forget it; but by relating it to something else, we have something to remind us of the original idea.

But is it really the case that there are no narratives on Wikipedia? If I look up David Bowie, I’ll find a summary of his life and career and with it there will be hyperlinks to more detailed summaries of particular works or to biographies of his contemporaries; there’s definitely a narrative. Even so, there’s something about Wikipedia that is too summarized and distilled to be the kind of story that sticks well after I’ve finished reading an article. It offers not so much a story as a summary in which details are relegated to the various articles that it links to.

But these links on Wikipedia are inadequate to finish telling the story. They are details that are told in a complete vacuum, unaware of the fact that I came from a page talking about one thing and not another. In Malcolm Gladwell’s Blink, the defeat of the United States Navy 7th Fleet at the hands of Paul Van Riper’s meager force in the 2002 Millennium Challenge War Games is talked about in the context of how human intuition can be more powerful than the world’s greatest computers, but a Wikipedia article summarizing the thesis of Blink could only tell it in sufficient detail by providing a link to an unrelated page summarizing the incident; one that would fail to have the interpretive richness of Malcolm Gladwell’s own retelling.

A story, as opposed to a simple summary or functional narrative, speaks to us about what is not obvious. Whereas summaries compress, stories interpret by bringing together a wealth of information and constructing a unique idea. Furthermore, through this same process of bricolage, a story is filled with a rich variety of prose and impedimenta that allow us to create a rich network of associations and have an experience that is eminently memorable. A summary, as a simple compression of one or more stories, simply doesn’t have the same informational richness as those stories and as such cannot create the strong mnemonic associations that stories do.

At the end of the day, storytelling is something that is hardwired into us. We comprehend and remember things as stories, not data, and the tradition of storytelling has been a universal ritual as far back as recorded history takes us. This natural affinity of ours towards telling stories can teach us a lot about how we learn and should be taken into consideration as we expand our educational tools into the domains of the internet and cloud computing. The ability to express ideas as dynamic and interactive programs as opposed to static texts and the interconnection of vast amounts of user-created data will doubtless allow us to inform and educate in novel ways; but these innovations must cultivate the human faculty for storytelling if they are to be truly effective as pedagogical tools. As our world becomes increasingly interconnected and hyper-abundant in information, context has become the essence of knowledge; and one cannot understand context without understanding stories.

[1] For further reading on this idea, see Nassim Nicholas Taleb’s notebook, Opacity.

Wednesday, December 15, 2010

I haven't updated for a very long time, but on the agenda (soon-ish) is a series I've wanted to do on how narratives are presented in games. Most of the games are fairly old, but this is the rough sketch so far:

Forking Replays: Starfox 64 vs. The Bouncer

Fantastic Worlds: Super Mario 64 vs. Quest for Glory V

Human Behavior and Imagined Narratives: The Sims vs. Civilization

These are very tentative, really subject to change. Also, if anyone could figure out how to make Creatures work on Windows Vista or Windows 7, I'd be eternally grateful. I'd love to re-discover that old gem.

Sunday, November 21, 2010

Used Bookstores

Used bookstores are like crack. On any given trip there are too many books I feel I can't leave without, and it's especially bothersome when they're overpriced.

Nonetheless, used books are much better than new ones. They're what financial people call a "long option": you buy a ton of them for cheap in the hopes that one of them will have a high impact. With new books, you can't be so surprised; the books are expensive and you come in set on some book that's been published in the last two years or canonized in the last ten. Maybe you get what you were looking for, but you have little better an idea of how satisfied you'll be than if you were to pick up a random book at a used bookstore.

I also find that used bookstores are more likely to have some more obscure things that are maybe a little bit dated but nonetheless more illuminating to a particular field of study. Today I got three books, some more well known than others:

Postmodernism, or, The Cultural Logic of Late Capitalism by Fredric Jameson

Empire of Signs by Roland Barthes

Essays on Non-Conceptual Content

The last one, by the way, is for research purposes. I also have a long option strategy when it comes to research on Bart. There are some books that I'll buy because I specifically think that they'll answer a question I have, but many things surprise me and give me new ideas on what to do. Of course, my reading list has gotten ever longer to the point that I'll refrain from posting it for fear of name-dropping.

I also very much enjoy buying used vinyl records, particularly aesthetically one-track house/techno/trance/trip-hop/etc LPs. I get them if they're a dollar or two and stack them up with some sound equipment I have yet to use again. Weird hobby, yes, but I dream of having some extremely lush jungle of random sounds that I can put together through all of these stripped down building blocks; making soundscapes through bricolage, to be precise.

Aesthetically, I'm a hoarder. I take many tiny fragments in the hope that the right set of them will make a whole that's far more valuable than everything I've collected.

Saturday, November 20, 2010

Narratives are for Socialists!

I've been linking together ideas of social interactions and narratives a lot in some of my posts. For a while, I've been continuing to develop a theory that narratives are a kind of epistemology that is rooted in social interactions. Here's an evolutionary take on the idea:

Human cognition arose from having to infer increasingly complicated patterns (for those who might get the wrong idea, this is not a logical kind of inference; it's pattern recognition in the brain). Originally, pattern recognition developed in order to provoke the kind of response along the lines of "Holy shit, a rattlesnake!".

As primates evolved, they became more sophisticated organisms that also worked in groups. These groups involved social relationships far more complex than schools of fish or flocks of birds (which are quite nicely simulated by programs such as Boids). To keep these groups working, members needed to navigate those relationships effectively, and so individual organisms needed to become more intelligent about social interactions.

Now, this kind of intelligence had very different demands from those of basic sequential reasoning, which was first the basis for competently hunting down an animal and eventually for the kind of sequential logic that gave rise to civilization. If you now turn your attention to stories, it's pretty notable that the vast majority of stories are re-tellings (fictional or otherwise) of social interactions; all the more so when I consider that the only narratives devoid of social interactions (that I can think of) are from modern times.

If this is true, then it's safe to say that stories were originally about people. Stories also have their own internal logic and form that is very different than the kind of modularized rationalist thinking that we usually associate with knowledge. My take on this is that this unique logic was crafted over time in the human mind in accordance with the nuances of social interactions. Rather than try to learn about social interactions through empiricism, which would get your slow-thinking ass killed in the jungle, the human mind developed a vast and messy network of heuristics for comprehending social interactions—in simple terms, we developed an intuition for how people work and were able to express it to one another by means of storytelling.

But why telling stories? Why not just understand social interactions in some way on our own? Two reasons. First, there's an obvious advantage to being able to communicate this knowledge to the rest of the group—to tell one's kids why they shouldn't lie to other people or why they should stay away from the other tribe. Second, being able to transmit and modify this knowledge presents another huge advantage: just as biological evolution embraces random change as a means of creative destruction, having the capacity for storytelling allowed for traditions and arguments between people that allowed stories to change in unexpected ways. Those that were keepers stuck around and those that didn't have much value could be weeded out in time.

Since then, we've been given a large and intractable network of storytelling traditions, genres, themes, allusions and so on; all of which is worth studying, by the way. At some point, I might post about how narrative evolves, since narratives are built on other narratives through two concepts talked about by Jerome Bruner known as canonicity and breach. But I'll leave that for another time.

Wednesday, November 17, 2010

I wasn't entirely satisfied with my last post—I started rushing myself near the end. Might go back and edit it for the sake of clarity. I got pretty lost in my own thoughts and feel like I didn't get out a simple central argument.

For the time being, sick of heavy-handed philosophizing. Will post more impressionistic journals about the city and briefer/more intelligible thoughts on aesthetics.

Monday, November 15, 2010

A New Heuristic for Language

While working on my interactive storytelling engine, Bart, I've been puzzling over a few ideas about language:

1) How do you create a grammar that can sufficiently understand context? The de facto way of understanding grammar is to envision it as a tree (for more technical folk, this is a phrase-structure grammar, give or take a few technicalities):


This isn't just helpful for understanding the meaning and structure of a sentence but also for understanding broader ideas such as an entire story. Now, I should make clear that a grammar only deals with the syntax of a sentence and not its semantics. That means that it's not dealing with what the sentence means (see above) but only with the structure of the sentence.

The essence of these grammars is context. The context of the rest of the sentence tells us what words can come next in a sentence. If the sentence under construction is "Colorless green ideas sleep ____", the blank spot can't be filled with a noun; that doesn't make any sense. At the end of the day, however, these simple hierarchical phrase-structure grammars are not powerful enough to deal with the full expressive power of human language. They also don't deal with the semantics of the sentence; there's no grammar out there that can tell us the above sentence is meaningless and doesn't make any sense.

The last statement I made was loaded for a number of reasons; for those who spotted that, I'll put the question more properly: teaching a computer how to make grammatical sentences is not the same as teaching it how to communicate through natural language. I don't believe that any grammar like the one above could be used to comprehend semantics.
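To make the syntax/semantics split concrete, here's a toy phrase-structure grammar sketched in Python. The rules and vocabulary are my own contrivance (built around Chomsky's famous example), not any standard grammar; the point is just that everything it produces is syntactically well-formed and semantically empty.

```python
import random

# A toy phrase-structure grammar: each symbol rewrites into a sequence
# of symbols, with no notion of meaning anywhere. The categories and
# words are invented for illustration.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "NP"], ["N"]],
    "VP":  [["V", "Adv"], ["V"]],
    "Adj": [["colorless"], ["green"]],
    "N":   [["ideas"]],
    "V":   [["sleep"]],
    "Adv": [["furiously"]],
}

def generate(symbol="S"):
    """Expand a symbol by randomly picking one of its rewrite rules."""
    if symbol not in GRAMMAR:  # terminal: an actual word
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])
    return [word for part in rule for word in generate(part)]

print(" ".join(generate()))
```

Run it a few times and you'll get grammatical nonsense along the lines of "colorless green ideas sleep furiously"; the grammar enforces structure and nothing else, which is exactly the limitation at issue.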

2) So, let's talk a little bit about semantics. The meaning of a sentence and the meaning of a story are very similar in two particular places:

First, just as a story (or a part of a story) has multiple meanings, so do sentences or parts of a sentence; not only do words and phrasings have multiple definitions ("let's talk about [sex] with Doug Funny" vs. "let's talk about [sex with Doug Funny]"), but there's also an inherent ambiguity to pronouns such as "he", "she" or "they" that can only be given definitions in the context of what the person is saying. A good analogue to the latter would be how a story may mention that "Elvis was playing on the jukebox" and so we use what we know of Elvis in the real world to fill in the blank. The range of interpretations is changed by the surrounding context, with new interpretations added and old interpretations removed.

But there is also a corollary: just as we can use many stories to express the same thing, we can also choose from multiple words or sentences to say the same thing. Respectively, these two phenomena are called homonymy and synonymy.

Now consider that synonyms create a sort of redundancy* for figuring out what a sentence means. If a word has multiple meanings, then using a synonymous word or description for the intended meaning can make clear what the writer was getting at. Consider "Roger, who loved carrots, was a real jackrabbit" and "Roger, who didn't always show great judgment at parties, was a real jackrabbit." In each one, the range of interpretations of "jackrabbit" is narrowed down by the surrounding context. It's also worth noting that context itself can be seen as a way of creating redundancy (or, in other cases, more ambiguity).
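A crude way to model this narrowing-by-redundancy is as set intersection over candidate meanings. The meaning sets below are invented for the Roger example, not drawn from any real lexicon; they just show the mechanism.

```python
# Each word or phrase carries a set of candidate meanings; combining
# phrases keeps only the meanings compatible with all of them.
# All meaning sets here are invented for illustration.
MEANINGS = {
    "jackrabbit": {"literal hare", "fast runner", "promiscuous person"},
    "loved carrots": {"literal hare", "health nut"},
    "poor judgment at parties": {"promiscuous person", "heavy drinker"},
}

def interpret(*phrases):
    """Intersect the candidate meanings of every phrase used."""
    return set.intersection(*(MEANINGS[p] for p in phrases))

print(interpret("jackrabbit", "loved carrots"))             # narrows to the hare
print(interpret("jackrabbit", "poor judgment at parties"))  # narrows to the other sense
```

The ambiguous word does the expressive work, and the redundant description weeds out the unwanted interpretations.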

3) Associativity plays a huge role in understanding things like narratives and semantics in general. The following is an illustration of how ideas, memory and everything else fundamentally works in the brain:

Your Brain (artist's rendition)

Your Brain Making an Association!

The neocortex, the area of the brain that makes us "human", is fundamentally a network of neurons that connect with one another through simple rules. There may be different parts that neurologists have identified, but none of these discoveries seems to constitute strict, unbreakable rules; different areas of the brain can fulfill the functions of other areas in cases of brain damage, a case of plasticity (a.k.a. redundancy). To get at the importance of this, let's consider two phenomena:

Ideas: Consider the (masterful) pictures that I drew above. Think of each of the circles as a different idea/concept/description/etc., such as cell-phones or Pride and Prejudice. Now, look at the second illustration and suppose that these two ideas had something in common with each other--you notice this and bam!, they're connected! This is a huge generalization, of course; the brain is packed with billions of neurons (and trillions of connections between them) and each idea would be something more like a cluster of connected neurons, but the same basic idea is at play. Let's now move on to something more relevant...

Memory: For me, the concepts of memory and narrative cannot be separated. Everything that has "happened" to us is what we remember. I once had an argument with a friend where I insisted that dreams were just a giant dump of data from the brain and that the only reason we remember them as stories of any kind is because we make sense of that data after the fact; that that's the way our memory works. He said in return "but then you're just going into the age old question of what's experience and what's memory."

It should be clear at this point that our memories do not work like some serial recording device such as a computer's hard drive or a video cassette; nature has given us a far more sophisticated device. Memory is a network of associations between sensations, ideas, stories and any other kind of experience that one can imagine—thus the reason for the insufferable number of references to Marcel Proust's eating of the madeleine in Remembrance of Things Past. We recall through association; something reminds us of something else.
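The associative picture above can be caricatured in a few lines of code: ideas as nodes, noticed similarities as links, and recall as a short walk outward from a cue. The particular "ideas" below are just stand-ins matching the examples in this post.

```python
from collections import defaultdict

# A cartoon of associative memory: ideas are nodes, and noticing that
# two ideas share something links them. Recall is walking the links.
links = defaultdict(set)

def associate(a, b):
    """Connect two ideas in both directions: you notice it and bam!"""
    links[a].add(b)
    links[b].add(a)

associate("madeleine cookie", "childhood afternoons")
associate("childhood afternoons", "Swann's Way")
associate("cell-phones", "Pride and Prejudice")  # some imagined similarity

def recall(cue, depth=2):
    """A cue triggers a cascade: everything reachable within a few hops."""
    seen, frontier = {cue}, {cue}
    for _ in range(depth):
        frontier = {n for f in frontier for n in links[f]} - seen
        seen |= frontier
    return seen - {cue}

print(recall("madeleine cookie"))  # the cookie drags the rest back with it
```

This is nothing like real neural tissue, obviously, but it captures the one property that matters here: nothing is retrieved by address, only by association.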

4) Associativity has much in common with the basic axioms of modern semiotics**. Consider that signs are not defined in absolute terms but in relative ones. Let me try to explain this in detail.

Imagine for now that when I write a word like /this/, I'm talking about the concept. So, for example, the word "automobile" is representative of an /automobile/. This is roughly where semiotics started with Saussure; each word was an arbitrary token used to reference some object in the real world, so for example:

"Tree" --> /tree/
"Blue" --> /blue/
"Dog" --> /dog/

But now consider the difficulty of defining something in absolute terms. You can say that you like the color blue better than the color red, but you can't say that you like the color blue better than a tree; there's no comparison between a color and a plant. So instead, the idea of a sign system was introduced: things are defined in relation to each other so that they can be compared. Red and blue belong to the system of colors; trees and roses belong to the system of plants. But any one word or concept can belong to multiple systems. For example, if you're talking about what to decorate your garden with, you can say "I like trees better than garden gnomes," whereas you couldn't if you were having a discussion about plants.
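Here's a minimal sketch of that idea: signs grouped into systems, with comparison only defined inside a shared system. The systems and their members are invented for illustration.

```python
# Signs are defined relationally: a preference like "better than" only
# makes sense between signs that share at least one sign system.
SYSTEMS = {
    "colors": {"red", "blue", "green"},
    "plants": {"tree", "rose", "fern"},
    "garden decorations": {"tree", "garden gnome", "birdbath"},
}

def comparable(a, b):
    """List the sign systems within which two signs can be compared."""
    return [name for name, members in SYSTEMS.items()
            if a in members and b in members]

print(comparable("blue", "red"))           # ['colors']
print(comparable("tree", "garden gnome"))  # ['garden decorations']
print(comparable("blue", "tree"))          # [] -- no shared system
```

Note that "tree" shows up in two systems, which is the multiple-membership point from the garden-gnome example: the same sign means differently depending on which system the conversation has put in play.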

I'm going to say for the record that this is all a simplification for many reasons, but I digress. Consider how this is similar to the concepts of homonymy and synonymy. The color blue can be referenced in multiple contexts. Conversely, we can think of the color blue as having multiple definitions; it can mean a color in the rainbow, or it can be part of a list of visual motifs in a movie. In order to see it this way, however, we have to accept that an idea can only be defined in opposition to other ideas. Let's take a look:

Note that while there are multiple contexts for the colors black and white, each of these contexts is defined by talking about how they are different from other things.*** That's how a sign system is formed; there is nothing explicitly defined. We can define a system of colors because of the comparable differences we see between them; same with a system of hats in Spaghetti Westerns. This is the concept that the French structuralists knew as difference. Meaning is relative.

5) So what was this all about? Where's the new heuristic?

Note the big glaring similarity between my semiotic diagram and the diagram of the brain. Did you see it? They're both networks. A sign system is created by the ability to associate ideas with one another, which requires that they have some similarity in form. One might say that in order to contrast two ideas, you first have to compare them (bonus points to those who can find a comparison between cell-phones and Pride and Prejudice.)

Right now we've been talking about the similarity in function between a semiotic network and a neural network. Now I'd like to explain why I think phrase-structure grammars are insufficient for understanding semantic context. From a computational standpoint, they're quite beautiful; even the weakest commonly used phrase-structure grammar, the context-free grammar, is a vast improvement over a more primitive model of computation known as a finite state machine. But one thing that the concept of a phrase-structure grammar doesn't seem to take into account very well is redundancy, which we saw was crucial for understanding how context can be narrowed down by having multiple words available as synonyms for the same concept. In simple terms, they have no built-in mechanism for dealing with ambiguity.

As we know, using many synonyms can narrow down the range of interpretations. But here's the question that many would have: "why should words have multiple meanings? Isn't that silly and just going to confuse us?" Yes, maybe, but it also makes it much, much, much easier to describe something, because we can find a word with the right connotations and then add in redundancy in order to weed out unwanted interpretations (again, I understand this is a simplification.) In essence, when it comes to the meanings of words, sentences, stories or anything else (this concept applies to all of these things), we get something that looks like this:

We should understand language as a network that is capable of combining many different ideas rapidly and unpredictably, but also capable of creating redundancy that allows precise and novel things to be expressed without complete confusion. Yeah, that's it.

I may post a follow up to this essay that more fully investigates the implications of this model. For now, I've actually gotten tired of writing this post. I much prefer talking about Dumbo on a winter afternoon and the like.


*For more technical types, this is a more precise definition of redundancy.

**I don't know entirely who coined which concepts in semiotics; but the semiotic theory that I talk about is largely taken from Umberto Eco's semiotic theory.

***For those who are interested in questions about meta-languages, this heuristic may come to explain away a lot of my problems with it. One of the problems of semiotics is that we need a meta-language to evaluate sign systems. Notice, however, that in defining black and white in opposition to each other in terms of hats and colors in the spectrum, there was a difference in form. One sign system included red whereas the other one didn't, thus making them distinctly different sign systems irrespective of any explicit links.

That is to say, if both systems solely included black and white, they would be the exact same sign system. It may actually be a little more complicated, in that no sign system should be made up of a set of nodes that is a subset of another sign system's set of nodes, but I think the same basic principle is at play here.

Sunday, November 14, 2010

A Digression

Will be posting soon about a new idea for understanding language through concepts taken from neuroscience and (to some degree) semiotics. It's a bit hard to explain, I'm not entirely sure where it's going, but it looks to be an alternative to grammars that doesn't necessarily discriminate between syntax and semantics. It may take a while, however, since I'm trying to come up with something that's simple and applicable enough to actually count for something.

For the time being, something different:

I've been very glad that winter is approaching. November and December are two of my favorite months, despite how many people are bummed by the cold weather and the shorter days; not as much goes on, people aren't out as late, the day ends before 5:00. I've always kind of liked that when it's in moderation.

A quiet and contemplative calm falls over everything and reveals the sacredness of so many places. When there are few people around and I can hear the wind scratch past my ears in the night, I can wander aimlessly and watch the light protrude from the buildings in Brooklyn Heights and the not-so-distant New York City skyline as if it were giving me just enough light to see where I'm going so that I can contemplate more things.

Winter might not be as fun as spring or summer, but it's much more sacred to me. May be posting pictures soon; for now, someone else's will suffice:


Friday, October 22, 2010

Why Intent "Matters" (or: why Beyonce is a bad liar)

First, a disclaimer: if anybody ever tells you that an interpretation of a work is invalid because "that's not what the author intended," throw them off an overpass; they're better off coming back as a lobster.

Second, the opposite view is equally dogmatic and flawed. This is the harder point to make, and I'll start by extolling some of the virtues of this view before kicking it down a notch. A peer of mine once brilliantly said about analyzing Lolita: "If we were to bring Nabokov back from the dead to ask him what he intended, he'd probably just pop out of his grave and start lying." This is quite true on many levels. On the most literal level, there's no way you can guarantee an honest statement of intent; just because an author says it aloud as opposed to writing it, or phrases it directly rather than indirectly (i.e. by telling a story, what a concept!), that doesn't make it any more of a foundation, since it isn't verifiable.

I can take this further by saying that intent simply isn't verifiable. It has no objective existence; you can tell your best friend what your intent in writing a story is and then lie to everybody else, but what you said to your best friend doesn't have any objectively verifiable basis; it only exists in your head! This seems somewhat banal, but it's a less metaphysical way of stating the fact that subjective ideas can only be mapped, they cannot be verified.* So it would seem that talking about intent is silly; it doesn't objectively "exist", it's something we can only infer in some unquantifiable way by privileging someone's "direct" statements or by looking at the biography of an author and using our idea of what their life was like to suggest how some events may have informed their work.

But this argument seems a bit weak. Is anything in literary interpretation ever objectively verifiable? Answering "yes" to that question seems absurd (if you think otherwise, feel free to speak up; I'm just saying that right now I really don't see any good argument.) So what are we doing then in analyzing literature? We're making sense of its impact and its relationship to the world by constructing a narrative of our own. While others may have strongly disagreed with me on the following point, I'll still make it: narratives are fundamentally about people and primarily reflect our existence as social creatures. When we read a novel or watch a TV show, we socially construct characters from the words on the page or the actors on the screen despite what's most likely a complete poverty of information; we may not know the entirety of their (imaginary) life experience, but just like a famous author, we construct a narrative from a limited biography (and body of canonical works; After the Quake isn't representative of Haruki Murakami, I swear!) and create a being with a life of its own.

One of the most important factors in our social construction of human beings is the idea of intentionality. We infer intentions when it comes to everything people do; perhaps as a way of masking all of the noisy details and deviations of a person's behavior or maybe because we really can learn who to trust and who not to.** Without ascribing intentions, we can't construct a picture of a person or empathize with someone; thus the reason why a heroic or tragic story on the news will captivate us but a statistic can only glance off the (somewhat) rational surface of our minds. The same goes for literature; not just in constructing characters but also in how we construct the story as a whole. There is always a narrative voice telling the story, however passive or indirect; and just as we listen closely to a personal story told to a trusted friend, we "listen" to a piece of literature in order to figure out what to make of the story it contains.

To put it another way, every story always has a storyteller, implicit or otherwise. Our idea of the storyteller is informed and constrained by many things: social norms, the conventions of genre, the idea that they're trying to entertain us (think about the last mystery/thriller you saw; you have a pretty good idea of why the most obvious suspect wasn't the traitor/murderer/villain), and so forth. A former professor of mine rightly responded to this by pointing out that many of these ideas are different from a mere statement of intent, because we can make a more objectively verifiable case about things like social customs and generic conventions. I agree with them insofar as making blanket statements about what someone intended isn't a good way to make an argument; our idea of someone's intention is as subjective as the author's own intent, and an argument requires definite common ground. I still find it necessary, however, to acknowledge that we ultimately create an interpretation of the story that is itself a narrative, and in order to do that we create an intent behind the storytelling. But I digress.

All storytellers are actually implicit; even if we know the author or are listening to an orator right in front of us, we don't know every last detail of that person's life; we've constructed a simplified version in order to relate them to the story that they're telling. The point is that this implicit storyteller informs our own effort to make sense of the story, and in order to let such a storyteller inform us about the story we endow them with intentionality.*** Storytelling is fundamentally a social enterprise; it depicts complex social relationships (to the point that we can hate Nina Myers for killing Teri Bauer) with very little information, and it makes an impression on us by allowing us to read into how the story is told. From an evolutionary (and completely hypothetical) perspective, stories began as a means of communication about complex social relationships, and so we're always scrutinizing the storyteller, who must have borne witness to the events and who must have some motive behind telling us the story. But I (once again) digress.

We can't read a story without inferring something about the storyteller's intent, and we can't have any understanding of a story without imagining a common ground between ourselves and the author. This is why Beyonce's songs will never do it for me. She talks about guys leaving her in so many of her songs (Single Ladies, Why Don't You Love Me, Say My Name...), but she's been dating Jay-Z for practically all of her adult life and is now married to him. I can't see very much authenticity in what she's saying; Jay-Z never left her, and she started dating him when she was 21, so it seems unlikely that she has much to be going off of.

People have asked me why I continue to enjoy Lady Gaga despite the seeming artificiality of her songs about partying and seduction (she was a workaholic in school and is even more of one now). That's a good question. I suppose that most of the songs on The Fame seemed somewhat reflexive and ironic to me. I know that's not a very sophisticated critique, but the difference is that Beyonce, for all of her singing and dancing talent and her picture-perfect looks, just comes off to me as too damned earnest for there to be any implicit commentary in her lyrics. Of course, this is just the Beyonce that I've imagined for myself; the truth is that I don't know the first thing about her, and neither does anybody else outside of her personal life.

*My amateur knowledge of phenomenology and semiotics causes me to think that this statement suggests the respective roles of both schools. Semiotics is the study of mappings whereas phenomenology is the study of where mappings come from.

**To be fair, narratives were much more reliable back in the Pleistocene when the world wasn't so damned interconnected. I take no credit for this idea; see The Black Swan by Nassim Taleb.

***A stronger argument that I'm tempted to make is that this idea of the implicit storyteller is necessary to create any sort of narrative context; generic conventions, social customs and even the specific language/dialect that we're reading in are all subjective phenomena independent of the physical text (how could they be physical properties of the text?) that we see as an act of communication between teller and listener. To put this in a more familiar perspective, Peirce concluded that all signs require three elements: a signifier, a signified and an interpretant. Without a social construction that binds the storyteller (imagined or not) and the listener, there's no code with which to link signifier and signified.

Friday, October 15, 2010

Humans aren't Machines

To some, the answer may be "well, duh." To others, it may seem that I'm making a poetically admirable but superficial statement, one undercut by the fact that everything is a collection of small atomic parts that interact to form an emergent system.

As the title suggests, I'd like to make an argument against the latter, but I'd also like to ask the former group to read along. Why? Because this isn't based on a metaphysical defiance of science; it's something of a declaration that science and spirituality are quite compatible. In fact, I will be saying that reducing human beings to machines is deeply unscientific. But I digress.

First things first: yes, using a very loose definition of machine,* you can argue that all living things are machines; but it's a pretty banal and in fact misleading definition. Why misleading? Because the word machine carries a whole lot of connotations, and you can change a word's definition without sufficiently changing its connotations. Machines are generally seen as artificial, unthinking, heartless, cold and calculating (I could go on, but you get the point). But could we really level these accusations at a phenomenon as general and abstract as a system that emerges from interacting parts?

These misleading connotations carry a particular moral hazard; they encourage a very mechanistic and quite possibly nihilist view of the world. I'd like to note that I'm not saying a mechanistic interpretation of existence is bad in itself; that's science, and I happen to think, like most people, that science is a very good thing. But when we take the word machine, with all its implications, and slap it on everything we see regardless of the context in which we're talking, our world view starts to become an outright perversion of science. This isn't just some spiritual quibble; in many ways it leads our thinking to become deeply unscientific. This may sound odd, but in order to make this point I need to touch upon the cultural properties of machines.

Machines were historically created to aid and simplify human labor, starting as tools in the hunter-gatherer period. Since then, we've gotten to the point of automated assembly lines and computers. When making a machine, one generally needs to specify the problem and the solution in a way that can easily be understood as separate parts; not only in order to come up with a feasible design before building it, but also for the sake of being able to figure out what went wrong if the machine fails. For the entirety of civilization, machines have been understood as configurations of parts in which it can be understood how each part contributes to fulfilling the machine's function, and whose actions are predictable. This sounds like a loaded statement, but it follows from the logic of why and how we create tools and machines: it was necessary to produce a reliable outcome, and in order to do so the logic had to be sufficiently simple. Note that this even applies to machines like random number generators; we may not be able to predict the number, but we can fully understand and predict how each part will contribute to the process of creating that random number.

But humans are hardly like this (or any animal, for that matter). We may be able to derive general ideas about living things by discovering the most basic moving parts or performing specific experiments, but the interactions between these parts, and the emergent patterns that come from them, are well beyond our current comprehension. Unlike machines, organisms are almost entirely unpredictable. In the paradigm of physics, this doesn't matter, because physics is only interested in the simplest forces and smallest parts; no physicist has to actually come up with any predictions about the human condition. In that context, it's perfectly safe to define a machine as anything that converts energy from one form to another.

But what of the many other things that we're looking to understand? By taking this definition of machine from physics and glibly applying it to every other schema through which we look at humans, we've turned a blind eye to the fundamental randomness of organisms. By this, I am not making any metaphysical argument about free will, chance or determinism; I'm using randomness in the mathematical sense, in which, if you can't predict something (due to a lack of information or otherwise), then it's random for all intents and purposes. It's no wonder, then, that we've failed to create AI that portrays humans in any lifelike manner, that we still wonder why shamanism or religion is sometimes a better alternative to clinical treatment,** or that a thousand economists with PhDs couldn't see the imminent collapse of the world financial system.

The mechanistic view of the world is appropriate for fields and paradigms whose objects of study can be sufficiently understood in such a light. Once we let this view pervade the rest of our thinking, we end up looking at the map and not the territory. Calling living things "machines" in a general sense does just this: it implies that their parts and behaviors are comprehensible as such and gives the world a false sense of predictability. Reducing the complexities of life in this fashion isn't just offensive, it's moronic.


*The formal definition of machine is an object that converts energy from one form to another. By this definition, anything that materially exists is a machine.

**For the record, I am not trashing clinical treatment of people in need of help. I am the son of a psychologist and a pediatric nurse. I believe that medical professionals are oftentimes helpful because they are usually very scientific in how they rely on data. I should also note that my complaint with the mechanistic view of the universe is not with empiricism; true empiricism acknowledges what we don't know and doesn't rely on representations. What I am suggesting is that in the face of randomness, clinical treatment doesn't always make sense.

Tuesday, October 12, 2010

Data vs. Narrative

This is an odd topic for a post, since I don't think many people think of data and narrative as particularly contrasting with each other, but I think there are important differences and similarities that should be addressed. A narrative, as we know, is a story: something that's told to us to make a point, to amuse us, or to help us make sense of something in general. Data is a collection of raw numbers or labels that links two or more things together; the number of young adults who have jobs, the number of reported car accidents in America in a given year, etc.

So, let's first take care of why I'm bringing up this question in the first place. What's the point of comparing and contrasting data and narratives? Well, a narrative is, generally speaking, a way of making sense of some series of events; it illustrates causality (note to literary theorists: please bear with my questionable simplification; part of the use of this exercise is understanding narratives better, and this is a good starting point). If narratives explain why something happened, then they may be able to tell us something about what's going to happen next. By this definition, a narrative is really no different from a scientific hypothesis (literary theorists: just keep running with it). More simply, we can think of this as being given a bunch of data points and connecting them with a mathematical function of some kind; i.e. fitting them to a curve:

Some data...

And a narrative to explain it!

I should note that it's perfectly acceptable for these narratives to be wrong. A new data point could turn up that doesn't fit the curve I drew, forcing me to draw a new curve. In fact, there are an endless number of curves I could draw to fit those points, some of which may look exactly the same close up but wildly different when you zoom out. As an interesting side note, this illustrates pretty well what Wolfgang Iser calls the inexhaustibility of the text. What I mean by this is that when reading a book, one creates a "world" inside one's head that matches what one has read; but there are an infinite number of "worlds" that could match, with any number of (currently) unnecessary ideas left unconsidered. As the reader continues, they find new statements in the book that have to be accounted for, either by changing previous assumptions or by adding new details to the "world" they've created in their head.*
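The point that endless curves can agree on the same data while diverging elsewhere is easy to make concrete. A minimal sketch in Python (the sample points and both candidate "narratives" below are invented purely for illustration):

```python
# Five observations that happen to lie on the parabola y = x^2.
SAMPLE_POINTS = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0), (4.0, 16.0)]

def simple(x):
    """One 'narrative': a plain parabola."""
    return x ** 2

def wiggly(x):
    """Another 'narrative': the same parabola plus a bump term that
    vanishes at every sample point, so it fits the data equally well."""
    bump = 1.0
    for xi, _ in SAMPLE_POINTS:
        bump *= (x - xi)
    return x ** 2 + 0.05 * bump

# Close up (on the data), the two curves are indistinguishable...
for xi, yi in SAMPLE_POINTS:
    assert abs(simple(xi) - yi) < 1e-9
    assert abs(wiggly(xi) - yi) < 1e-9

# ...but zoomed out, they tell wildly different stories.
print(simple(10.0))   # 100.0
print(wiggly(10.0))   # 1612.0
```

Both functions interpolate the data exactly, yet at x = 10 they disagree by an order of magnitude; which curve is "the" explanation is underdetermined by the points alone.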

So why is a narrative not just a scientific hypothesis or some formula that matches a bunch of data points? The answer can be found in part of my digression: as we read a text, we have to consider new details in evaluating what's going on and why. A more mathematical analogy is that we have to consider more semantic dimensions. We might read a book that takes place in a castle, and at first we only have to know the general outline of a castle; but when the book describes how the castle made the characters feel alone and vulnerable, we then have to think about ways in which that castle may look or be built that would invoke that kind of feeling. This all might be very abstract, mind you, but it still works that way on a fundamental level. Also (and this is important), the details that we come up with in the future are going to depend on the details that we've imagined now; the set of details to be considered is not just latently laid out in the text in some finite way, but the details actually derive from each other. So the semantic dimensions (or types of detail, for the less pretentious) are not in any way pre-determined, and there could be a potentially infinite number of them.

Data, like the sample points I showed in my illustration, is different. There are exactly two dimensions to be considered when fitting a line to those points; we already know the entirety of the semantics. Data is entirely delimited: we don't consider things outside of the traits that are enumerated, and there are values specified for each of these traits on every point. Lines drawn to fit the data are ultimately built to fit a static sign system that never changes (I hoped not to have to use the term "sign system", but I don't know how else to explain it). But once again, there's a catch that might pull data and narrative back together, which I believe to have unknowingly been the source of conflict in a debate I had on this subject with a good friend:

Even if the points exist in a single, unchanging sign system (unlike narratives, in which the sign system changes in unpredictable ways), there's still the question of the scientific hypothesis, or the curve that fits the points. Scientific data and points on a chart may both be delimited, but the hypothesis or the function isn't, really. Oftentimes a scientific discovery is made by thinking outside of the conventional data being analyzed, and the sign system explaining why the data points are generated in a certain way therefore changes. Similarly, with a mathematical function, the way in which the data points are arranged may require you to add increasing amounts of complexity to the function: you may add another degree to the polynomial, or add a trigonometric term, or something else entirely. So the hypothesis/function actually has a surprising amount in common with narratives, in that there is a potentially infinite amount of semantics that can be added.
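The way a hypothesis grows new terms as the data outruns it can be sketched the same way. A toy example (all numbers are hypothetical): a straight line explains every observation until a new point arrives, at which point the "narrative" has to be enriched with a higher-degree term:

```python
def fits(model, points, tol=1e-6):
    """Does the current hypothesis account for every observation?"""
    return all(abs(model(x) - y) <= tol for x, y in points)

# Current hypothesis: a straight line through the origin.
line = lambda x: 2.0 * x
points = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
assert fits(line, points)        # the line explains everything so far

# A new observation arrives that the line can't account for...
points.append((3.0, 9.5))
assert not fits(line, points)

# ...so we add another degree to the polynomial. The cubic below keeps
# the old points (its extra term vanishes at x = 0, 1 and 2) and also
# passes through the new one.
curve = lambda x: 2.0 * x + (3.5 / 6.0) * x * (x - 1.0) * (x - 2.0)
assert fits(curve, points)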

The difference is that whereas a text is inexhaustible, that isn't the case with a set of data; the hypothesis that's suggested doesn't change the attributes of the data that have to be accounted for. There is an argument from literary theorists, however, that essentially says that as you find these new hypotheses, you create a new paradigm in which the attributes of the data are changed to better fit it, and that you therefore face the same intractability as in interpreting a narrative. As tempting as it is to accept this counter-argument, it still seems to be the case that while new paradigms might introduce new ways of formatting the data, scientific hypotheses must still account for the old delimitations, or else they're just solving a straw man for all intents and purposes. Personally, this argument doesn't interest me, so let's move on to something more important:

The question on my mind from the beginning has been one about hypotheses, and it'll take a moment to explain. New hypotheses are constantly created from new observations that may have nothing in common with the data. For example, a theory in microeconomics may spring from something in psychology that is not in the paradigm of economics; it wouldn't show in the data, but it may be the only thing that explains the economic data. Now, let's think about how we perceive the world. When we look at the world, everything is data; our senses send discrete electric impulses to our brains that quantify our experience as data. The semantics of our sensory experiences are finite, so how do we experience narratives? How can things be inexhaustible?

The answer lies in our neurology. The many inputs and outputs of our brains are interconnected through webs of neurons, these connections being associations between observations. These form clusters of connected neurons that we know as ideas. These new ideas, formed from connections between observations (and from other ideas), create new conceptions of meaning that we couldn't even hypothesize about before, offering us a process akin to the text, in which we can create a virtually unlimited number of narratives starting with the finite amount of data that we initially perceive; in other words, we can now draw hypotheses from outside of our original paradigm of experience. Add to this that changes in this web of neurons essentially change the inputs and outputs of our system (we may still have the same sensory receptors, but their signals get processed differently as they travel further through our cortex), and we've recreated the reading process: we connect data with meaning and then re-define the data according to that meaning, creating new semantics from semantics that we ourselves had previously created. Our experience becomes inexhaustible.

I wasn't sure if I'd get to the end of this one, *phew.* It got a bit obscure near the end (my ability to write clearly is waning with my focus), so if anyone wants me to clear anything up feel free.

*For those interested in this concept, I talk more about it in my earlier entry about the movie Inception. You can also read Iser's original theory in his essay The Reading Process: A Phenomenological Approach.

Friday, September 24, 2010

Book Review: The Next 100 Years

I picked this book up in the middle of Barnes and Noble while taking a walk on a Saturday afternoon with not much else to do. I almost immediately wrote the book off, due to my (still firm) belief that any prediction about something like international politics over the next few years, let alone the next hundred, is outright impossible to make with any precision.* I decided to pick it up anyway, since it sounded like it would make for an entertaining piece of fiction about futuristic warfare and geopolitics.

He may not have convinced me that he'll be able to predict anything in the long run, but I was impressed by just about everything else. The author, George Friedman, is incredibly knowledgeable about everything from American academic and cultural movements (i.e. pragmatism and feminism, each of which plays a large role in his assessment of future conflicts) to the relationships between geography, economic power and military dominance. Most impressively, he's suggested a number of scenarios that run completely counter to most people's intuition about what will happen next and how it will happen. I'll sum up his main points briefly, focusing mostly on his ideas about the forces that have been shaping the world in the past century rather than the specifics of his predictions, since this is where I believe the real meat of the book lies.

Friedman suggests that what we are entering right now is distinctively the "American Age." It's not so much that he believes the United States will remain the hegemon for several centuries (although he believes its power will not be endangered until at least the turn of the next century), but that where Atlantic Europe was the economic and cultural center of civilization for hundreds of years, North America, bordering two of the world's major oceans and practically invulnerable to invasion, will be that center of gravity for a very long time to come, and whoever dominates the continent will dominate the world.

Friedman rightfully points out that the GDP of the United States is still greater than the next several countries after it and that despite constituting less than 5% of the world's population, it still accounts for around a quarter of the world's economic output. He attributes American dominance to a number of historical factors such as its domination of North America, the tremendous power of the U.S. Navy after World War II and the spread of the American philosophy of Pragmatism, characterized by the invention and spread of computing. Pragmatism, he says, is a distinctly American philosophy that praises ideas for their practical application and scorns the metaphysical, which although being directly at odds with much of the world, also gave birth to inventions such as the computer, which vastly expanded America's cultural and economic sphere of influence.

Friedman's general assessment of the conflicts happening today, and his predictions of future conflicts, are drawn from his knowledge of Pragmatism among other ideologies. He attributes America's continuing "culture wars," as well as the current conflict between America and the Islamic world, to ideological fault lines. In particular, the spread of Pragmatism has created resentment in those cultures at odds with it; but more strongly, he cites the various movements and struggles arising from the ideological conflict over the status of the family, which he sees as inevitable in the face of the massive technological advancements of the 19th and 20th centuries.

Several advances drastically changed the ways in which families and individuals behaved. Rapid advances in medicine brought down infant mortality to the point that families no longer needed to have many children to ensure financial safety. In fact, as the 20th century progressed, having a lot of kids became economic suicide for many families, as economic conditions demanded a more educated workforce and parents had to send their kids to school for longer periods of time. Meanwhile, with fewer children to raise, women had far more time on their hands and finally had the opportunity to work full time. Divorce also became much less financially dangerous.

Of course, many continued to hold very traditional family values, especially in the early 20th century when the technological and economic revolution had not fully set in. This has since caused a massive conflict between socially conservative institutions and an increasing number of individuals who have shed socially conservative values for a more opportunistic lifestyle. Friedman sees this as being not only at the heart of the American culture war, but also at the heart of the conflict between radical Islam and the West, as evidenced by Osama Bin Laden's letters to America.

These set up many of the initial fault lines in the world that will erupt in the future. Other fault lines include Russia's geopolitical imperative to regain the status of the now defunct Soviet Union in the face of a demographic crisis and an increasingly hostile Eastern Europe; the shared cultural borderland between the United States and Mexico, which fully erupts near the end of the century due to Mexico's newfound economic and cultural clout in the American Southwest; and the emergence of Turkey and Japan as world powers with spheres of influence in the Islamic bloc and coastal China respectively. I should also note that Friedman believes China is due to fragment within the next couple of decades, suggesting that its economic growth is too fragile (a la Japan in the 1980s, the then-feared competitor to the U.S.) and its politics too unstable.

There's quite a lot going on in this book, so I've omitted much of it in order to get to his main points. Friedman doesn't concern himself with global warming or the current financial meltdown. For the former, he sees a drop in world population (and thus material demand) and the emergence of new technology as more than enough to solve it. For the latter, he sees the current crisis as very nasty but ultimately nothing more than another case of the world's economic balance correcting itself. I'm a bit skeptical on these points, but I should note that he doesn't believe there will be no crises ahead. He believes that immigration policy will take a 180-degree turn in the middle of the 21st century, as steadily declining populations in the West create a massive labor shortage. He believes the opposite will happen in 2080, causing the United States to repatriate many Mexican-Americans and creating a divisive split in a largely Hispanic America.

His main point, however, is that there are in fact static forces that allow for some degree of prediction. The geographical and demographic tensions between cultures and nations are relatively fixed and will ultimately be the backdrop for conflicts emerging from ancient rivalries. America's continued control of the seas (and eventually space) means that it's convenient for most of the world to be complicit and, in return, to enter the world economy (or "the American system") under U.S. protection.

The United States' geopolitical objectives will also remain the same: to maintain dominance of North America, to control the world's oceans, and to prevent any regional hegemon from emerging on any continent (really, only Eurasia matters, because the geography of South America, East Asia and Africa doesn't allow for that sort of thing). The two wars being fought today (in which Friedman has two kids serving overseas) are, in his view, a cost-effective way to disrupt a region that Al-Qaeda was trying to unite against the West. With a civil war brewing in Iraq and the Gulf states horrified at America's actions, another Caliphate is unlikely to emerge, he says. For Friedman, this outlook on foreign policy will define the international system in the American age.

Although the future is far from written (whether the randomness comes from free will or from a lack of knowledge), Friedman is right in saying that many major forces are in fact static and the outcomes of many scenarios might just be predictable. The one wild card on which I seriously disagree with him is technology, which relies on very random and severe jumps that open up entirely new opportunities. It seems presumptuous, and too in line with conventional wisdom, to say that space flight, robotics and genetics will advance in some predictable way and define the balance of power in the 21st century. That, in my opinion, is reason enough to distrust his prediction, however well informed it may be.

Most importantly, he has little opinion on any leader now or in the future. America is in a state of manic depression, he says, blaming its supposedly imminent decline on the actions of past presidents and bitterly divided between exuberance and gloom about our current one. But, he suggests, leaders don't really have all that much say. As he describes it, world politics is like a game of chess (please bear with the cliche for a minute): many moves may be possible, but if you understand the game well, there are fewer and fewer moves that actually make sense, however much it might seem to the contrary. That is, with the exception of that one amazing and unexpected move by the grandmaster that turns the entire game on its head; and I don't see how Friedman can possibly account for that.

*That is to say: yes, you can guess that something's going to happen and be right about it, but that doesn't mean you've predicted it. Hence my saying "with any precision"; I'll believe in these kinds of predictions if someone can show me a consistent level of accuracy over time.

Saturday, July 24, 2010

My Interpretation of Inception

I saw Inception a few days ago, and from my collective experience with friends who've seen it, it's the kind of movie you have a strong opinion about; you either love it or you hate it, and there's always a reason or five.

I personally loved the movie; I thought that, in spite of its flaws, the plot was extremely complex, well informed and theoretically adventuresome. I saw it as a metaphysical thought experiment that follows in the footsteps of Christopher Nolan's last non-Batman movie, The Prestige. My fascination came from seeing the movie as an extremely ambitious thought experiment on narrative, epistemology, language and consciousness; which in itself is funny, because it made me realize what a structuralist I've become. I used to get annoyed when I talked to diehard Freudians because they kept bringing up the same interpretive framework for everything, but it's hit me that whenever I watch a movie or look at a piece of art, I end up seeing it as addressing some question or another about narratives and signs that only a structuralist would really care about.

I'm going to write my interpretation in the form of small vignettes since I don't feel that I could give a single elegant interpretive framework. It's also hit me that I've always worked very hard at linearizing my non-linear way of thinking and that it might be okay for me once in a while to present things in a more free-form fashion that doesn't take up so much energy.

Also, if you don't like things that are "pretentious", then you may not want to read this entry; it attempts to connect a lot of very strange concepts. You've been warned.


1) Mal (Marion Cotillard) is quite clearly insane. There are no two ways about it; she's mad. That seems like a rather banal observation, but it is tempting at first, after seeing so many layers of dreaming throughout the movie, to empathize with her and actually believe that she has a valid reason to be skeptical. But when it comes down to it, Mal has no basis for believing that she's in a dream when she kills herself. She can say, "Well, dreams seem real, so there's no way I can prove that this isn't a dream," but she would have to ask that on ANY level of consciousness; in other words, if she was right that she was in a dream, she would still have to ask the same question when she woke up on the next level up.

2) So there's an epistemological question that goes on with these dreams. You can't know whether you're in a dream if there's no reference to the waking world. This is equivalent to Bertrand Russell's problem of metalanguages. You cannot verify the statement "this statement is true" on its own merits; there needs to be some metalanguage that can say whether that statement is true or false and that can directly compare two statements. If anyone's ever told you that you're comparing apples and oranges, it means that you're dealing with two or more objects that do not have a shared system of comparison, i.e. a metalanguage. Of course, if we constantly concerned ourselves with trying to find a transcendent metalanguage (or, for all of you postmodernists out there, a transcendent signified), we'd never get anywhere. Narratives in general, and especially religious narratives, help us deal with this kind of problem; from the common ground of shared narratives, we reach some sort of understanding of the world through dialogue.

Madness is epistemic nihilism, a belief that nothing is sacred (which may yet explain people's reactions to my sense of humor). In my last entry, I pondered how cognitive dissonance allows us to participate in a collective narrative. In Luigi Pirandello's Henry IV, the main character, a disgruntled and possibly schizophrenic impersonator of the German monarch Henry IV, raves about the freedom he's gained from dropping the mask and living in a state of constant flux, without worrying about contradicting any narrative he or others may have made about him. He's become quite literally anti-social, a sociopath. In an interesting digression, he is faced with a choice after murdering his ex-wife's fiance: keep his mask as Henry IV and trap himself in the narrative of madness, or face the consequences of his crime the minute he stops acting like the supposed lunatic who committed the murder; a madman suddenly forced to sanely masquerade as a madman.

Mal is an epistemic nihilist, and she faces the worst possible consequence for it.

3) The way in which dreams operate is based on a phenomenological theory of storytelling in which the reader's consciousness fills in the semantic gaps left by the text. If I say "He walked down the street," you have to come up with your own idea of what that person looks like, what kind of street he walks down and perhaps how he might be walking. In fact, maybe you don't even think about some of those things until I point them out, or until I give a second phrase that reads "and people laughed at the way he walked." Now that I've told you he was laughed at, you accommodate the whole sentence by giving him a funny way of walking.

In Inception, the shared dream is populated with an arbitrary landscape, perhaps a city, a tundra or a hotel. There are people walking around and perhaps even some familiar locations, but for the most part these are all arbitrary landmarks. The dreamer populates the dream using their subconscious, filling in what's left blank with their own personal thoughts. The initial setup of objects demands that the dreamer form some causality between the objects in the dream, and as a result the dreamer creates a narrative out of those objects that reveals information about themselves. This is how the process of extraction works.

4) The process of extraction mirrors how we use language and narratives. Words on their own are arbitrary, meaningless symbols; mere parts of a (very, very messy) syntax. When somebody arranges words or sentences in a particular way, they communicate with us by allowing us to draw connections between those words and to extract information from the context that emerges.

Narratives work in the same fashion. The "dream defenses" in Inception are a perfect example. I personally thought that the dreams having a literal military presence looked silly, as many people may have. Yet even though we know a dream wouldn't really look like that, we understood what the military men signified by seeing them try to kill any and all intruders. Every metaphor requires at least two juxtaposed symbols; it is an invariant representation of a pattern. By juxtaposing two or more symbols, you can imply some sort of causality or context that mirrors something people understand, and thereby turn meaningless symbols (such as, say... a bunch of talking farm animals) into something that helps us understand the world (such as, just for the sake of argument... Animal Farm.)

5) Extraction works by framing a person's consciousness with arbitrary symbols and then causing them to involuntarily fill in the blanks with parts of their personal life. We're simply hard-wired to look for patterns and try to make predictions by accounting for the unseen. The dreamer reveals their secrets by allowing the intruders to see how they connect the dots and account for what isn't present in the dream.

In other words, it is the context of what somebody is saying that reveals the truth. Otherwise, the person could be lying or telling the truth or just saying nonsense; but if we compare it to what they've said before and under what circumstances, then we can assign some sort of information content to the words. The logical conclusion is that the truth about a person comes from what isn't present in what they say; it is found in the juxtaposition of somebody's words, sentences or actions. This is why our English teachers always tell us to "read between the lines."

6) As an extension of (3), art and religion make for very good learning tools because they utilize our brain's incredible talent for storing patterns as invariant representations. Stories represent a wealth of highly abstracted shared wisdom accumulated over the course of civilization; all in a convenient narrative form. But I digress.

7) The dreamscapes created by Ellen Page's character use logical paradoxes in order to "close the loop" of the dream. Most notable are the infinite stairs reminiscent of M.C. Escher. I don't fully know what to make of it, but such paradoxes seem to tie in well with Bertrand Russell's metalanguage problem. The dreams have within them a kind of "incompleteness," in that the dream simply cannot account for all possible questions the dreamer might ask. By making the dream physically loop back on itself in a seamless fashion, Ellen Page's character can make sure that the dreamer will simply not be able to answer certain questions, such as "does this dream end or is it infinite?"
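As a toy illustration of "closing the loop" (my own sketch, not anything from the film): model a Penrose-style staircase as positions that wrap around modulo the number of stairs. Every step goes "up," yet no amount of climbing ever reaches a top, so the question "where does the staircase end?" has no answer from inside.

```python
def climb(steps, n_stairs):
    """Take `steps` upward steps on a staircase that loops back on
    itself: positions wrap around modulo n_stairs, so there is no top."""
    position = 0
    for _ in range(steps):
        position = (position + 1) % n_stairs  # always "up", never out
    return position

print(climb(4, 4))     # one full lap of a 4-stair loop: back at 0
print(climb(1000, 4))  # a thousand steps later, still on the loop: 0
```

The dreamer experiences only the local rule ("each step goes up"), never the global wrap-around that makes the end unreachable.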

In real life, we're faced with a similar phenomenon. Many logical limits exist within our world; one famous result shows that some true statements in mathematics simply cannot be proven within a given formal system. This is Gödel's Incompleteness Theorem. In this sense, the dream is arguably just as real as "real life," because it has a set of symbols upon which we are able to endow a narrative, and it has logical paradoxes which shroud certain epistemic questions about reality.
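For reference, the theorem alluded to here has a standard textbook formulation (this precise statement is mine, not the post's; strictly, the "cannot refute" half uses Rosser's strengthening of Gödel's original argument):

```latex
\textbf{G\"odel's First Incompleteness Theorem.}
If $T$ is a consistent, effectively axiomatized formal theory strong
enough to express elementary arithmetic, then there is a sentence
$G_T$ in the language of $T$ with
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T ,
\]
so $G_T$ is undecidable in $T$; under the standard interpretation
of arithmetic, $G_T$ is true but unprovable in $T$.
\]
```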

8) And logical paradoxes might just be the most essential thing to the fabric of reality. If we had "total knowledge" by which all things were proven with a consistent set of axioms, then we would no longer have any sort of experience, because there would be nothing left to learn. All experience is caused by the activity of the brain making sense of its inputs, and all narrative is us drawing causal links out of an infinite number of possibilities to account for some series of events (or more accurately, some set of symbols.) The existence of epistemic limits means that we continue in the activity of learning and creating narratives.

In my opinion, narratives are the essence of experience because they're about imagining what isn't there; we live ineffably, and things remain ineffable in the absence of irrefutable proof. But perhaps it's also because reality works just like the dreams featured in Inception: the universe just doesn't have enough material to answer all of our questions, and so reality closes itself under a logical paradox that leaves us to continually guess with narratives. The universe is hardly the static, sound and complete ontological entity that the Enlightenment envisioned; it's a constantly fluctuating narrative where ideas like "true" and "false" are far less understandable than we think.

And I think that's why Christopher Nolan made the last scene such a cliffhanger. To be honest, I actually hated that scene; I think it was an extremely un-subtle way to address a question the movie had already implied, and by putting it in the forefront it made people take the problem so literally that they never wondered whether it even mattered if Leonardo DiCaprio was still dreaming.

Thursday, July 8, 2010

Cognitive Dissonance and Narrative Troglodytes

So, this is a theory that I've been thinking about for a long time and I decided I may as well put this up for a first entry.

For those who know about Cognitive Dissonance, you'll know exactly what I'm talking about. If you don't, I'll put it very simply: you're more likely to like somebody if you help them than if they help you. Following this pattern, if you act unfriendly toward someone, you're more likely to dislike them. A little counter-intuitive, but nonetheless supported by a great deal of scientific data. The best explanation for why this happens is that people feel the need to have a consistent narrative of their actions--if certain occurrences don't fit into that narrative, or run completely counter to it, then you have to use up more mental energy to keep multiple narratives running in your head. Considering that both intuition (which constitutes, I'd venture... 95% of our thinking) and memory are associative phenomena, that means you expend a lot of mental energy holding unassociated and seemingly contradictory ideas.

But that explanation (or just-so story, you decide) isn't the point of this entry. Just to show how general the idea behind cognitive dissonance is, I'd like to talk about a related phenomenon. The face has several dozen muscles that can be either active or at rest, and all of our facial expressions are created by activating combinations of these muscles. But by moving any one of these muscles, we send a special electric/chemical impulse that causes us to feel what we're communicating with that muscle. That is, if I make an angry face, I will feel angrier. If I smile more, I'll feel more jovial.

For yet one more example, think about how hard it is for people to lie. People fidget and sometimes break out into laughter (when I was little I used to get nervous telling a story, and my sister would accuse me of lying, which would of course make me smile or laugh more), and our palms get sweaty, which is why polygraphs can (admittedly unreliably) detect lies. Taken together, there seems to be a general pattern here that escapes our everyday intuitions: what we show on the outside is not simply a manifestation of how we feel; in fact, the connection between our feelings and our actions goes both ways.

This isn't very shocking on its own; I think most of us understand what I said to some degree. But I think the implications of this idea have not been thought through. Consider, for starters, why we would evolve in such a way that it's hard to lie, or to act independently of what our own actions imply; after all, if we know everything that's going on in our own heads, why would we need to make inferences from our own actions? I know the question is more complicated than that (and I'll get to it in a moment), but consider the utility, to a band of hunter-gatherers, of nobody in the group having an easy time lying.

In fact, take it one step further and consider what cognitive dissonance (and everything related) does for us. It gives both our actions and our thoughts narrative continuity. I've preferred chocolate to vanilla my entire life; I may order vanilla whimsically once in a while, but I know that I generally prefer chocolate ice cream. It would be very hard for me to know what ice cream to stock the house with if, several times a day, based on my temperament and not on past actions, I despised the flavor of ice cream I liked two hours ago and loved a flavor I was lukewarm about.

Of course, that's a mere nuisance compared to what such volatility could do to my relationships with people. What if I suddenly, for no reason, despised my best friend while we were hanging out, and then the next day was fine with him again? I may have reasons in my own head for it, but if there is no precedent in my actions, then to other people it will look completely random: both the out-of-left-field contempt and the sudden reconciliation the next day. Most of our understanding of the world is based on narratives; we use narratives to make predictions (which would explain why we don't do so well at prediction in a more interdependent modern world), and we rely on predictions to know whether we can hang out with a friend the next day or trust our neighbor to feed the cats. Cognitive dissonance therefore assures that most changes happen gradually; a friend of mine might suddenly have an outburst and storm off once in a while, but I can be pretty sure that that friend is not going to have a change of morality overnight and sell my kidneys on the black market.

So cognitive dissonance is a social mechanism that helps the survival of groups of people. Evolutionarily, this seems in line with the fact that our closest relatives are more social than our more distant relatives. This also leads me to believe that Rousseau had it entirely wrong when he said that humans were meant to be solitary creatures; solitary schmolitary, we're open books!

We're not just social creatures, however; we're also narrative creatures (how independent those two concepts really are, I'll leave to another essay.) For those who don't live entirely under a rock, you know as well as I do that people love reading and telling stories. You also know from high school English class that people love interpreting stories. This instinct, to me, seems to be reflective of the inferences that we make about other individuals as well as groups of people. We fit our own actions into a story that other people can read, and by extension we all participate in a collective narrative.

And narrative is in fact the operative word here. Narrative interpretation has much more to do with associations between ideas than with deductive logic. The brain itself works primarily by finding patterns through association, also known as intuition. Deductive logic, by contrast, is a trick that we only formalized in the classical era and still use in small doses. The basis for our interactions with other people (and perhaps even our conception of the self) is being able to infer patterns and construct a narrative. Without the narrative continuity of cognitive dissonance, people would be black boxes that we could only learn about by deductively testing rigid hypotheses, since there would be too many significant hidden variables for us to know how somebody is going to behave (social conventions also matter in this regard, but they themselves come from the same concept of a shared narrative, which I'll once again have to save for another essay.) So instead of telling you to wear a condom when you go on a date, your parents could remind you to bring your notepad, your data tables and your TI-86 graphing calculator. Sounds perfect to me.

* * * * * *

Also, if you extend the logic of this post, you can forget the "enlightened" political theorists Hobbes, Rousseau, Locke, Shmocke, Crocke. It seems to me that only Edmund Burke was on target about societal behavior.
Hi everyone,

For those who know me (the short version): I have a lot of idle thoughts about narratives, technology, math, uncertainty, systems, subjectivity (blah blah blah) and I thought that I should write them down.

For those who may not (won't be that long): I'm a recent college graduate with a B.A. in Computer Science and English. I study (and mess with) the intersection between narratives, technology and math. I'm working on several projects, one big one that I talk about here. I like just about any subject or idea that I can fit into a larger framework. This blog is meant to contain a lot of my thoughts about subjects ranging from literature and philosophy to neuroscience and economics. Just about every entry in this blog will draw from at least two "disciplines"--I rarely like thinking in a vacuum. Oh, yeah, I also dislike departments and the "specialization" of knowledge rampant in academia, but that's another story.

So if you're interested in any of the subjects I talk about, keep an eye out for new entries. And if you're a contrarian, please be sure to attack my ideas; keep me honest.