Evolving Computers

My oldest brother was born at just the right time to participate in the groundbreaking new world of home computing. My dad bought the family a Commodore 64 and I invested huge swathes of time playing games, occasionally learning very basic computer programming from my brother. A 386 (one of the first modern PCs) and later a Pentium processor found their way into our home. Soon after that, the internet arrived. My brother stayed with the developing technology and is now a programmer at Google.

His kids, on the other hand, have never known a world without the internet, PSPs, streaming video and Bluetooth. Technology has insinuated itself into the fabric of their lives, and they may never experience such a severe paradigm shift as home computing or the internet made when I was a kid. (To give you a sense of my age, I remember the world without the internet, and I remember being frustrated when this new thing called Windows triumphed over my familiar text-based DOS. I also remember typing commands like LOAD”$”,8,1 at the Commodore’s READY prompt.)

For kids today, computers are second nature. They cannot fathom the complexity of these devices because it is masked by the ease of use. A child has no need to consider the circuitry, the silicon, the programming language or the million increments in technical achievement that accumulated to this incredible moment in history, where we can summon seas of information with the click of a button.

But even though they can’t understand the million steps leading to such ease of use, they suffer no handicap when it comes to understanding future computers. Indeed, they are already experts, of a sort, and the evolutionary history of computing hardly matters; the outmoded technologies, both mechanical and programmatic, are no longer useful except to historians or theoreticians. When was the last time LOAD”$”,8,1 did anything useful for me or the people at Google? (Playing Dig Dug on an emulator doesn’t count.)

We might think that kids today are spoiled, reaping rewards accumulated by generations of brilliant minds without fully appreciating it. But this has been the case with all technologies since the wheel. And this time-binding faculty of humankind extends to more than just “technology”.

We are all guilty of this ignorance, for example, with our own bodies. We don’t consider the incredibly complex specialization of our eyes, how perfectly the lens focuses light onto the retina, how the rods and cones react, and how our brains parse information about shape, colour, shade, edges, depth, context and so forth. We just see a red curtain and go about our day.

Everything in our bodies is in fact the end result of innumerable biological adjustments, mutations that have been use-tested and refined through the ages by our ancestors’ survival and procreation. And not one of us needs to know these millions of stages of evolution in order to use our own biocomputer at least decently.

There will always be those who have a passion for the specifics, the evolutionary biologists, art historians and so forth, and we need them to impart their knowledge on the next generation of hungry minds. But you can’t blame someone for being born too late to really get it. It’s all we can do to work with what we’re given, do research when necessary, and move forward. Even my brother, currently on the cutting edge, was born about 150 years after Ada Lovelace, the world’s first computer programmer.

Deepak Chopra’s Cosmic Confusion

In all of history, no topic has been the subject of more bullshit writing than the spiritual side of life. We all live through the lens of our own experience, and it’s commendable to try to explain internal experiences, but because consciousness is such a mysterious and strange aspect of life, an unfortunate majority of opinions about it are sadly misguided.

Deepak Chopra, for example, says that matter is an illusion and consciousness is all there is. This is wrong. I’ve voiced my disagreement with his opinions before, but I assure you I’m not holding a grudge; I’m simply voicing my reaction to the ongoing dissemination of his ideas, which I find pernicious. We should all strive to understand our selves, so I don’t hold his efforts against him, but I would love to share a coffee with the man and let him in on the following:

Matter is real. It is one of the fundamental facts of the universe, as all sane people know. Even most insane people know this. Under some very special conditions, matter gives rise to organisms. As those organisms evolve, some gain tremendous complexity and computational powers to employ for survival, and very few attain what we would call consciousness. Referring to a persistent fact as an “illusion” isn’t helpful.

“Reality is that which, when you stop believing in it, doesn’t go away.” – Philip K. Dick

Some of the functions of consciousness remain a mystery, but we have no evidence for the claim that consciousness is all there is. What we know as consciousness today has only been around for the tiniest sliver of the history of the universe, and there is plenty of evidence to support that claim. How can Chopra’s theories explain prehistory? If consciousness is all there is, does he believe that there was absolutely nothing in the universe until the first conscious being was born? What gave birth to that being?

Chopra’s philosophy seems like redressed Hinduism, where matter is maya (illusion) and we are all facets of Brahman (God). He redresses it with the ill-fitting jargon of quantum physics, a perplexing topic that arose from the exploration of matter. Chopra is certainly no authority on this dense and confusing field of study, and most quantum physicists disagree with his interpretations.

Chopra and his ilk love to refer to materialism as “reductionism”, as if materialism reduces our significance in the universe. But this is bush-league wordplay. Pay attention to how often they use that word and you’ll realize it’s a cheap rhetorical tactic with no relation to how materialism actually describes the world.

And besides, not one facet of our internal experience is “reduced” by materialism. Whatever explanation we throw at it, we all have an internal experience. Belief in God or spirits or a soul—even the belief that we are all biological puppets—doesn’t change the fact that consciousness as we know it arises from the brain. Beliefs don’t change our qualia and don’t change our perceptual apparatus. They only change the explanations we give ourselves for these phenomena. But those explanations are just words.

I’ve experienced the feeling of being in the true presence of divinity. It was a fully conscious experience and it came stamped with an undeniable feeling of authority. I came out of the experience thinking, “Oh, that’s what they mean when they say ‘God’.” In no way does this experience prove that there is some sort of external divine intelligence; it only proves that such a conscious experience is possible. Such a feeling is possible. It’s a beautiful feeling, but it says nothing about the fundaments of the universe, and the experience would have been totally impossible without matter (my brain, for example). I spend time every day cultivating that experience, and I need no belief of any kind to justify it. These are purely pragmatic concerns, denuded of metaphysics.

I’m sure these facts have been laid out for Mr. Chopra over and over again, yet he keeps on with his message, adjusting his pseudo-scientific jargon ever so slightly but failing to learn or change or grow. It makes me question his motives. The fact is that his name has become a brand, and to admit his prior confusion hurts the brand. After all, what does an enlightened spiritual guru need with a net worth of $80 million? He doesn’t need any of your money, and you don’t need any of his nonsense.

Free Debate

Last year I read Free Will by Sam Harris. The book impressed me as a concise demolition of folk psychology’s casual (lazy) assumptions about free will, written with straightforward language and a direct approach. I had a few issues with the book, and as an exercise wrote a “Devil’s Advocate” critique in which I used the last vestiges of the dualism I inherited from growing up with church and Catholic school.

Shortly after I read Free Will, I read Consciousness Explained by Daniel Dennett. The latter struck me as beautifully written, philosophically and scientifically strong, and it razed my already crumbling Cartesian Theater. I remember thinking distinctly that I’d like to read Dennett’s take on Harris’ book, as Free Will openly challenges Dennett’s stance on the issue.

Monday I was happily informed through social media that this has happened. Sam Harris has posted Dennett’s rebuttal here. The rebuttal is a bit long, a bit thorough, and, to my thinking, imperfect, but I highly recommend both Free Will and Dennett’s take on it. Because what’s more fun than sitting at home reading philosophy?

Why should you read Harris’ opinion that free will is an illusion, then read Dennett’s opinion that Harris is wrong? It might seem a bit of a waste on the surface; if neither has the whole answer, what do we gain from these essays? We get a glimpse into a dialogue between two intelligent minds, and dialogue is the reason books like Free Will should be written in the first place.

Newton’s theories of space and time held sway for a couple hundred years until they were shown to be wrong by Einstein. But there could have been no “Einstein” if not for the foundations laid by Newton. Without Newton’s boldness, his willingness to put opinion to paper and publish it, science might have remained a stagnant morass of religious dogma and superstition. Even as Einstein was proving those theories wrong, he was standing on Newton’s shoulders (and the shoulders of many more recent scientists and philosophers).

“[A]ny hypothesis, however absurd, may be useful in science, if it enables a discoverer to conceive things in a new way; but … when it has served this purpose by luck, it is likely to become an obstacle to further advance.” – Bertrand Russell, History of Western Philosophy

It takes guts to point out what you think are mistakes and sloppy thinking in the work of a professional, especially one who is a friend. But Dennett speaks his mind and isn’t worried about Harris’ feelings, because this is what intellectual adults do. They make rational arguments, arguments that come from science and sound philosophy. They are not arguing emotionally, and since Harris posted the rebuttal on his own website, you can be sure he respects the man behind it and thinks the rebuttal worthy of our time, even if it is expressly intent on showing that Harris is wrong.

Why does this behavior seem strange to me? Because so many outspoken debaters fail at it. Deepak Chopra, for example, debates publicly about God but quickly gets emotional and degrades himself by hurling ad hominem attacks, like this muddled thing he co-authored about Sam Harris. This confused article says nothing of significance, continually attacks Harris, and yet is couched as a sort of moral high road for sloppy thinkers who hold onto beliefs despite evidence.

Some scientific figures like Richard Dawkins think some debates can be a bad idea. Dawkins posted this article on his website admonishing Bill Nye the Science Guy for debating evolution with Ken Ham, founder of the Creation Museum (what items this “museum” contains, I do not know). Dawkins thinks these debates give undue credence to propositions that are not falsifiable and lack real evidence.

But this free will debate is completely justified, and I’m excited to read Harris’ response. Dialogues like these enrich our understanding, and though Harris and Dennett share many beliefs, they are two very distinct minds with distinct styles of argument. Neither is liable to make a proposition without either empirical evidence or a strong philosophical argument.

I don’t think this dialogue will resolve the issue once and for all. Certainly neither side will convince everyone in the world if scientists can’t even convince creationists of evolution. But if the debate causes us to question our own beliefs, maybe even shed some of our lazy assumptions, it will have done us good.

Journals, Art, Journeys

When I was young my oldest brother Jeff showed me what an amusing pastime it was to keep a journal. I’ve found this essential. Without keeping a record of the day’s events, we forget most of the coincidences, oddities, and revelations of our lives. Even when we remember the facts of our experience, it’s impossible to recapture the exact feel of events. Most of my life I’ve kept some kind of book on the go, whether it’s just funny lines or ideas or scenes from movies I’d like to see.

It seems important because of this main fact: memories are not real. When you think about an event in your past, your brain does not (spoiler alert) magically go into the past. Our brains attempt to reconstruct our reactions to that experience, but our brains are different now, so the reconstruction is imperfect. Plus, memories can be bent and changed.

Regular journal entries give us a window into our state of mind at the time. This is crucial if you want to understand your life as a journey or narrative, or if you want some sort of proof that you’re getting closer to your goals or developing intellectually.

The same can be said, on the macroscopic scale, of art and science in culture. Art expresses the zeitgeist while science improves our understanding of each moment. We could never have had The Wire without ancient Greek literature, and we could never have invented smartphones without first understanding how radio waves work. This only works when people write it down.

Occasionally an artist makes a conscious effort to draw our attention to cultural development by retelling ancient, fundamentally human stories with current language and culture. The best example is Ulysses by James Joyce. The story is not about a guy named Ulysses in ancient Ithaca, but a man named Leopold Bloom in 20th century Dublin. The title and structure of the novel showcase thousands of years of human values in flux.

“This race and this country and this life produced me…I shall express myself as I am” – James Joyce, A Portrait of the Artist as a Young Man

It can be great to read old, embarrassing journal entries because it means you’ve grown. Without writing it down we have no proof. And without a record it’s sometimes impossible to understand how we could have believed the crazy notions we’ve outgrown. This blog is likely full of ideas I’ve outgrown. I’m fine with that. Years from now I’ll be glad I was observant, honest in my assessments, and most importantly, that I wrote it down.


P.S. There will be no blog post next week because I will be busy eating food. Happy Holidays everyone.

Memetic Evolution

Most of us have a general sense of how biological evolution works. Simple organisms differentiate, mutate, and replicate. Some new traits prove to be good coping mechanisms and help organisms outlive and out-reproduce those with traits ill-suited to the environment. Over millions of years, the accumulated biological changes grow increasingly complex. The survivors win. Their reward is life.

But evolution can explain more than biology. Memetic evolution, for example, proceeds along almost identical lines. Simple ideas, the kind that represented simple objects or situations to our unintelligent ancestors, were encoded in spoken and pictorial languages; Richard Dawkins dubbed these units of transmissible culture “memes”. These ideas are sent from one mind to another, and just like genes, they differentiate, mutate, and replicate into other minds.

Some ideas take hold because they provide a benefit to the organism using them.  An early human who learns how to use a club has an advantage over one who hasn’t learned to use a club. A man who knows how to fashion a sword will do well in a fight against that club-wielding guy. Someone who learns how to string a bow and fire an arrow can stand back in the trees and kill the man with the sword with little risk. And the man who communicates all this information proficiently can show up with a gang of hundreds, each with a homemade bow, and so on, and so on. Now we have nuclear weapons.

As our human powers of rational thinking developed, we had an increasing ability to think abstractly, which had far-reaching benefits. Humanity went from understanding individual problems to understanding types of problems. When we figured out how to handle one type of problem, the individual problems of that type no longer required as much investment in thought. As our abstractions built upon one another, our memetic evolution accelerated exponentially. In almost no time, our memetic, semantic, cultural reality was vastly more complex than our biological reality.

Humanity seems to be the only species with the ability to continue abstracting beyond the first few levels. When we teach other primates sign language, they prove that they can think, but they tend not to read Bertrand Russell or discuss the plot of a good sitcom.

Memes replicate as aggressively as possible, just like genes. Particular genes can pass around the entire world, but it takes generations, decades of effort and luck. Ain’t nobody got time for that! By contrast, how many people came to know the Sweet Brown meme in the short time since it appeared?

As memes mutate and grow more complex, they push the boundaries of the semantic world outward into various specialized niches. This is obvious in our internet-soaked world culture. Nobody can keep up with all the facts of our world. At our current rate of memetic growth, nobody can even keep track of all the facts relevant to their own field of specialization.

New insights gained from trial and error continue to expand all fields of knowledge. If you can combine ideas into something novel, you have pushed the boundary of our semantic world. Notice how ideas normally don’t just appear out of nowhere? Ideas are almost always built upon the foundations of previous ideas.

So wouldn’t it be great if there were a tool that organized memes into easily understandable fragments, and we could each curate our own stream of information so that the knowledge relevant to our interests could be scanned easily as well as studied for detail? Welcome to the world of Twitter.

Twitter acts as an exploder button for memetic evolution. Think of all the Sweet Brown remixes! But seriously, I want a genetic scientist to have all the specialized knowledge available from around the world so progress can continue. I curate my own feed and my knowledge of worthwhile writing and music and film has increased in a dramatic way. Plus it allows us to stay current, so our cultural developments remain on the cutting edge.

The internet has brought us together in unexpected ways. It’s easy to see how much time is wasted on sites like Twitter and Facebook. But it isn’t like that for everyone. Most of the gene swarm on planet Earth died off before our ancestors replicated successfully. Think of how much of Earth’s matter has been incorporated into our 7 billion neighbours. We give our particular genes a great success rate, so it’s a good thing for them that they made us. As for Twitter, could memetic evolution ask for a better medium of proliferation through human minds?

Memes, like genes, want nothing more than to replicate, and they do so in a very chaotic way until they find a best-fit pattern for the environment. We invented Twitter to share information. But our inventions are always on the shoulders of past ideas. So our semantic, memetic world has guided us to invent Twitter, the ultimate replicator (so far) for memes. Are we in the driver’s seat here, willfully directing memes for further progress, or are we being directed by our memes? And anyway, what’s the difference?

Wordfail

Most of my favorite works of art deal with psychological, internal, and (if I may) spiritual problems. I might be in the minority on that, but it’s hard to tell. Most pop cinema and music seem to actively avoid these issues in any serious or thoughtful way, but my view may be skewed by massive PR budgets, while many profound works count on niche marketing and word of mouth.

Two nights ago I was working through an internal process during my meditation, essentially allowing my sensory inputs to drain out and empty, and it occurred to me (not for the first time) that many of these internal obstacles literally defy rational language. The scientific method is a beautiful tool for explaining and enhancing our understanding of our world, but when it comes to internal experiences, scientific language fails to capture the experience in any way I can relate to.

I can talk about the cessation of dialectical thinking, stimulation of the parasympathetic nervous system or increasing respiration for lowering systolic blood pressure, but these descriptions are cold and say nothing about the end-user experience, despite their medical accuracy.

To speak about “turning the light around” captures more of the mysterious essence of the experience, even though this phrase provably does not describe what’s going on in my body. All language is in a sense arbitrary. If we can find language that more closely captures the experience, we should use it.

Scientists have been encroaching on this field for a while now, and with good reason. Some organizations like The David Lynch Foundation try to analyze meditation from a scientific perspective so they may explain it to rational people. This is totally laudable and seemingly essential these days. But I was always more affected by artistic interpretations of internal experiences, art forms that somehow poetically capture the ineffable nature of what’s happening, what it feels like to have internal revelations.

This is where I find uncompromising value in art. Art is the best conveyor of human experience, and exposure to it seems essential to me if we want to mature as human beings.

All communication is symbolic. The word “kite” is not the physical object called a kite. If the best we can do to symbolize an actual kite is to come up with a verbal grunt with sharp sounds on each end—a sound that is intrinsically meaningless—then we are at least slightly lame as a species. The word itself seems complete gibberish to someone without experience of an actual kite. But to watch a film of some kite-flying enthusiasts, or read about a child’s wonder as the wind pulls the kite down a sunny beach, is to learn on more than merely verbal levels.

This is where I cut a lot of slack for religious literature. There are a lot of religious books which, if taken literally, are absurd and stupid. But those books tend to elicit analogical and mystical interpretations that resonate with people in deep ways. Reading The Bhagavad Gita, I never once expected that the events depicted in it really happened. But I was moved by it, and I continue to find it beautiful.

This might be why I value “saying something” over simply making art for money. I am glad to fork over my hard-earned cash for a meaningful experience, and usually annoyed when I walk away from a movie or book thinking, “so what?”

I have written on this previously, if anyone is interested.

Change Your Brain – Pt. 4

In “Change Your Brain” parts 1, 2 and 3, I tried to recommend books that had a positive effect on my behavior. Glancing back over recent posts I’ve noticed a shift in my thinking, and it stands to reason that the book I just finished contributed to that change in a major way.

We can’t know exactly why we are the way we are. Since each of our ‘minds’ arises out of the darkness of unconscious processes, it follows that we should look toward the unconscious when we need a tune-up. Discovering our unconscious assumptions and bringing them into consciousness allows us to shed light on the processes that guide our minds.

The following book might have made me a little more sane.

Science and Sanity by Alfred Korzybski

The book’s full title is Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. This is the foundational text for the branch of study called General Semantics. Its claims rest on the fact that language and science are forms of human behavior. If our behaviors and interpretations of reality are not accurate to the facts of the world, our evaluations, and therefore our future behaviors, will result in harmful shocks, delusions, failures, etc. We use science to communicate facts to one another. These facts offer dependable models. But in our communication and even our thinking, unconscious assumptions can deform the information and leave us with models that are false to the facts of the world. If these unconscious assumptions aren’t remedied, our species will become less sane.

So why pick on Aristotle? Briefly, this work is an attempt to recondition the Western mind. Because Aristotle had the last word on philosophy before the Dark Ages, his theories went untouched for centuries and have become ingrained in Western culture. Though Science and Sanity was published in 1933, we still have a long way to go.

Aristotle inherited the primitive language of his day. The language was formed by cultures that did not have the benefits of rigorous analysis. He inherited a mythologized interpretation of reality, a worldview that explained phenomena in anthropomorphic terms without the checks and balances of science. Aristotle used the language of his day to express the laws of “logic”, thus introducing primitive unconscious assumptions about the world to future generations. World events halted the progress of philosophy after Aristotle and his works became canonized. Simply because he was the last word in reason for hundreds of years, his philosophy took deep root in the Western mind.

Aristotle’s assumption of properties in objects and his use of subject-predicate language take the brunt of Korzybski’s criticism. Words are words and things are things, and never the twain shall meet. No word can ever “be” the thing it describes. When I claim “Mark is lazy”, I overstep empirical means by ascribing to Mark some property of laziness which I have not looked for scientifically. In truth, all I have is my empirical observations of Mark’s behavior. To say “Mark acts lazy” is more accurate to the known facts and describes the world as a dynamic process.

I know this seems like nitpicking, but subject-predicate reasoning leads to unjustified inferences about the world and in extreme cases can lead us to completely false assumptions. Most pernicious is the fact that these assumptions usually go unchecked because they happen unconsciously.

Next on the chopping block is Aristotle’s law of the excluded middle. This is the claim that a thing, A, is either true, or its negation, not-A, is true, and nothing else is possible. This thought pattern oversimplifies observations in the worst way. Korzybski’s revision encourages a revolt from this two-valued logic to an infinite-valued logic. A person can be wholly inside a house, wholly outside a house, or partially inside and partially outside to any conceivable degree.
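
The contrast can be sketched in a few lines of code. This is my own toy illustration, not anything from Korzybski’s text: the function names, the “person in a house” framing, and the 0.5 cutoff are all mine.

```python
# Two-valued vs. infinite-valued logic, using the "person in a house"
# example. A hypothetical sketch; the 0.5 cutoff is arbitrary, which is
# exactly the problem with forcing a two-valued answer.

def inside_two_valued(fraction_inside: float) -> bool:
    """Aristotelian logic: a person is inside the house, or not. Period."""
    return fraction_inside >= 0.5  # forced to invent a cutoff

def inside_infinite_valued(fraction_inside: float) -> float:
    """Infinite-valued logic: a degree of inside-ness, from 0.0 to 1.0."""
    return max(0.0, min(1.0, fraction_inside))

# Someone standing in the doorway, 30% inside:
print(inside_two_valued(0.3))       # False -- the middle is excluded
print(inside_infinite_valued(0.3))  # 0.3   -- the middle is kept
```

The two-valued version throws away information (and smuggles in an arbitrary threshold to do it); the infinite-valued version reports what was actually observed.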

Another major consideration is the elimination of elementalism in language. Elementalism describes the breaking down of concepts into constituent elements that cannot exist outside of the whole. Most famously, Newton broke down our reality into ‘space’ and ‘time’ and this verbal trick led countless scientists on the search for the properties of ‘space’ and ‘time’ which led to failure, of course, since there are no such observable things as ‘space’ or ‘time’. Einstein proved that they are inseparable. When we verbally separate them, we must make sure this separation remains on the verbal level. Words are not things.

Another example is the linguistic dichotomy formed between ‘mind’ and ‘body’, two aspects of a whole that cannot exist independently. A man who researches the properties of ‘mind’ while disregarding ‘body’ does himself a disservice because the properties of ‘mind’ involve the ‘body’, and vice versa, to varying degrees. Entities work as-a-whole, and should be analyzed and spoken of as such.

The harm of Aristotelian systems is that they look for The Truth as opposed to a truth. Science and future humanity need languages that correspond to observable phenomena, that operate within a context and as-a-whole. Accurate descriptions lead to accurate models of the world, and accurate models lead to sanity. As you might tell from the description so far, Science and Sanity reaches far and deep, aiming to completely reformulate many of the thinking-habits of Western culture.

But it doesn’t stop there. You’ll learn about colloidal chemistry, the dynamic gradient, differential calculus, Euclid and Riemann, Einstein and Minkowski, and why nothing truly happens “simultaneously” with anything else. This vast, multidisciplinary approach gives a philosophical and technical basis for using language in clear, unmistakable ways.

Science and Sanity claims that knowledge and language are only accurate when their structure matches the structure of the world. If we rely on words, and the definitions of those words are other words, concrete meaning retreats from us. The true test for a scientifically sound language, according to Korzybski, is that the language matches the structure of the world it represents. More far-reaching still is his insistence that structure is the only true content of knowledge.

Korzybski believes that mathematics most perfectly matches the structure of the world as well as our nervous systems, therefore acting as our most perfect bridge of communication. Since our linguistic processes must make instantaneous assessments of a dynamic world, differential calculus offers an analogy by its ability to provide us with empirically accurate snapshots of processes.
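
To make the analogy concrete, here is a toy sketch of my own (the falling-object example is mine, not Korzybski’s): a finite difference takes an empirically honest “snapshot” of a process at an instant, a static statement about something that never stops changing.

```python
# The calculus analogy: a derivative is a "snapshot" of a dynamic process.
# Hypothetical example -- a ball dropped from 100 m, ignoring air resistance.

def position(t: float) -> float:
    """Height in metres at time t, for an object dropped from 100 m."""
    return 100.0 - 4.9 * t * t

def snapshot_rate(f, t: float, h: float = 1e-6) -> float:
    """Central finite difference: an instantaneous rate estimated at time t."""
    return (f(t + h) - f(t - h)) / (2 * h)

# The rate at t = 2 s: one frozen, accurate statement about ongoing change.
print(round(snapshot_rate(position, 2.0), 2))  # -19.6 (m/s)
```

The snapshot is accurate at the instant it describes and nowhere else, which is roughly the discipline Korzybski wants language to adopt.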

Overall, the work means to enhance our “consciousness of abstracting”, to keep us mindful of the world around us, to differentiate between our observations through lower order nervous centers (sense input) and our higher order abstractions (language, mental models, etc.). “Consciousness of abstracting” offers a scholarly approach to mindfulness, and means to keep us from confusing orders of abstraction. The attempt is to bring scientific clarity to human thought.

While there are large swathes of the book that are quite technical, mathematical and daunting, the underlying principles remain easy to understand (though I should admit that I was somewhat primed for it by Robert Anton Wilson). Chapter to chapter, the exposition is powerful and comprehensive through its nearly 800 pages.

I recommend this book for scientists, linguists, philosophers, and people with time to read.

Free Will: A Devil’s Advocacy

Here are some thoughts on Sam Harris’ Free Will that weren’t in my review. I hope you get a kick out of them.

Much of the strength of Harris’ argument rests on the weak shoulders of the concept of free will, which is vague, flimsy, and usually inadequate. The popular concept of free will, in this sense, is similar to the popular concept of God, which is often argued over but rarely defined. His point, briefly, is that we all inherit accidental conditions when we’re born and throughout our lives that define the range of our experience and reactions. We have no control over these conditions and therefore our will can never be truly “free”.

What Harris does best is to present an argument to the popular, “common sense” assumptions about free will (much like he does with God in this “debate” with a confused Deepak Chopra). Harris’ argument constantly pleads for a causal look at physical phenomena and evidence-based conclusions. He shows over and over again that there is no real evidence to support free will.

Some of the semantic elements of his presentation did not sit right with me, however, so I’ll outline a few of these issues as I see them. I welcome comments and opinions, as I am nothing like an authority on any of these issues. If it’s a bit fragmented, that’s because I agreed with most of what he said, and only took issue here and there.

“Free will is an illusion.”

Rather than open with a working definition of free will—which would put people on the same page immediately, even if it were a weak definition—Harris says that free will IS an illusion. This is his thesis statement. But the “is”, in this case, as in all subject-predicate propositions, over-extends its authority. When we say what something is, what exactly do we mean?

I can predicate all I want, but my statements will never cover all the facts of reality. I can talk about some fundamental, permanent property in an object, and I can make a definitive judgment about objects—say, “the grass is green”—but my judgment doesn’t close the issue altogether and won’t necessarily be true under all conditions (green grass viewed under a red light appears black).

How do I know what something is? I look, or in the case of a priori phenomena, I intuit and reason. I can sense features of the object of inquiry, but I can never know everything about it. Better I should declare my judgment in terms of my frame of reference. The subject-predicate “is” misleads because it unconsciously assumes ultimate, objective authority and falls prey to misguided is/is-not dual logic.

If I translate his “is” claim with this in mind, it becomes “free will appears as an illusion to me.” This simple and honest change of wording strips his argument of its tone of authority. The “to me” implies his epistemology (i.e. free will appears as an illusion according to principles 1, 2, 3, etc.). We shouldn’t speak authoritatively about facts as though they exist alone in a vacuum. I prefer arguments that avoid claiming what something “is” and instead describe the world as it actually works, through verbs expressing process-transactions with an observer.

“There are no self-sustained facts, floating in nonentity.” – Alfred North Whitehead, Process and Reality

The other issue I have with his claim is this: labeling a persistent fact “an illusion” doesn’t make the illusion go away. Even if we wholeheartedly buy Harris’ argument, we still feel free to make the choices presented to us. Calling free will an illusion doesn’t diminish the fact that I make conscious decisions that affect my future. The term “illusion” is not so firm a concept that Average Joe can’t easily misapply it. People will continue to live just as they did before, even if they adjust their assumptions about how much freedom is actually involved.

Hindus believe that the phenomenological world is an illusion, that our senses obscure true divine reality. But Hindus still live with the phenomenological world of sense every day. We can reason away sensations as merely neurological events, but this does little to take us out of the experience of that illusion. I find this thought echoing in my head when Harris writes, “[e]ither our wills are determined by prior causes and we are not responsible for them, or they are the product of chance and we are not responsible for them” because a similar line of reasoning likely led the Hindus to develop their metaphysics of karma.

Epistemology

On this issue I am confused, and I would be happy if someone would clear things up for me. If I accept that physics can explain everything that exists, then material facts must be responsible for false beliefs. If I believe trolls control reality through magic, this belief is explained as an outcome of weird biology or neurochemical activity (psychosis, bad drugs, etc.). So the physical facts of my brain are responsible for my subjective misreading of the world around me. This underlines the importance of communicating my judgments in terms of my own frames of reference (i.e. “I am going under the Great Bridge when I die, according to my beliefs, because I paid the toll and the trolls have chosen me.”).

Science is a semantic activity first and foremost; although it aims at objectivity, empirical testing should always refer to the observer applying tests to the facts at hand (i.e. “Under these conditions, I noticed X.”). The more observers corroborate a fact, the higher the probability that the next observer will see the same thing and further corroborate the observation. By this, the scientific method, we approach facts and laws. But the point here is that experience is the only measure of reality. We can say nothing meaningful about an unobserved world, and if we do we are confusing inference for fact. Free Will falls into this trap, frequently using absolute, objective language where absolute, objective knowledge is not possible. This is a linguistic quibble, obviously; it doesn’t mean that I disagree with the spirit of his argument.

I will quote him as an example: “Consider what it would take to actually have free will. You would need to be aware of all the factors that determine your thoughts and actions, and you would need to have complete control over those factors.” Remove superlatives from this sentence. “Consider what it would take to actually have free will. You would need to be aware of some of the factors that determine your thoughts and actions, and you would need some control over those factors.” Well, we are aware of some of the factors, and we do have some control, so the statement simply begs the definition of “free”. In this case, “free” doesn’t mean “completely free”, but there is wiggle room for free will if we admit that we have some freedom in our conscious actions.

I’d like to return to my question: are thoughts and semantics purely material? Can the various levels of abstraction, the nuance and individuality of human brains and nervous systems be completely accounted for with only empirical evidence? I don’t think so. If you take strict material determinism as your view, you do not have the empirical evidence to claim that all of human experience can be explained by physics, even if it really, really seems that way.

There seems to be some missing explanation, some mystery that translates these so-called simple neuronal firings into complex visible, audible, tactile experiences that we can actually think about in infinite (indefinite) degrees of abstraction. It’s easy to say, “Thought A is caused by the firing of neuron X,” but a thought has a subjective, omni-valent presentation to it and a neuronal firing does not. How consciousness translates neuronal firings into experience seems mostly mysterious to me.

We can measure the brain activity of someone who is meditating or sleeping or doing a puzzle or looking at a red Chevy Nova. We can get reams of data, collated and colour-coded, and that makes us think we have the facts. But try going the other way here. Try looking at brain activity on the page and tell me exactly what it is like to live in that brain, complete with all itinerant facts, all the memories of each element of the event, the fantasies the person calls up consciously and unconsciously because of personal, historical correspondences, etc. It’s impossible. Numbers and data are not experience itself. Math and science are only models of experience. They are usually more correct models than primitive superstitions, sure, but still only models.

If beliefs have a material basis, the materialist says his nervous system and brain are more correct than a faith-based person’s. But both generate models for consideration. New and different models help guide us toward understanding reality as long as we eventually eliminate the false models. There will always be new models of reality because models reflect our collective subjectivity, which evolves and reacts to environmental conditions. To ossify any one model into dogma is to insist on an end to our development in understanding reality.

The Neuroscience of Consciousness

Harris refers several times to the fact that brain scans reveal activity significantly sooner than a person feels he has made a decision. “These findings are difficult to reconcile with the sense that we are the conscious authors of our actions.” But is it possible that the complexity of the brain and nervous system, with our conscious and unconscious abstractions on multiple levels, simply takes longer to register cogently in consciousness than it does to register on an EEG? We don’t know of a more complex machine than the brain/CNS. Since all signals travel at finite speeds, might the time delay be explained by the abstract processing and reprocessing, the neuronal and physiological feedback loops we perform unconsciously to fit events into our worldviews?

He notes that seemingly random neuronal firing originating in the brain has been observed. But how do we know something is random? We call a signal random if it doesn’t follow our idea of patterned stimulus (in this case, it doesn’t fit our ideas of material determinism). Of course, what appears random may in fact be purposeful and not random at all. Pi, for instance, looks like a completely random string of numbers (3.141592654…), but we know it isn’t random. It signifies a concrete relation.
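The pi example is easy to demonstrate in practice: because pi signifies a concrete relation, any two people running the same algorithm will generate exactly the same digits, forever—nothing like a random string. Here is a minimal Python sketch (the function names are my own) using Machin’s formula, π/4 = 4·arctan(1/5) − arctan(1/239), with plain integer arithmetic:

```python
def arctan_inv(x, unity):
    # Integer approximation of arctan(1/x) * unity via the Taylor series
    # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
    power = total = unity // x  # current power of 1/x, scaled by `unity`
    n, sign = 1, 1
    while True:
        power //= x * x
        n += 2
        sign = -sign
        term = power // n
        if term == 0:  # remaining terms are below integer resolution
            break
        total += sign * term
    return total

def pi_digits(digits):
    # First `digits` decimal digits of pi as a string, e.g. "31415...".
    unity = 10 ** (digits + 10)  # 10 guard digits absorb truncation error
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi_scaled = 16 * arctan_inv(5, unity) - 4 * arctan_inv(239, unity)
    return str(pi_scaled // 10 ** 10)[:digits]

print(pi_digits(20))  # prints 31415926535897932384, every run, on any machine
```

The digits pass statistical tests for randomness, yet they are fully determined by the formula—which is exactly the distinction at issue: “looks random” and “is random” are not the same claim.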

If I were to argue for the existence of a soul, I might argue this so-called random firing is not random at all, but direct material evidence of the soul’s activity. The neural action rises from unknown causes in a manner that material determinism can’t explain. Of course if I were to make this argument, I would be stepping outside of the materialist paradigm. Naturally, using a complete unknown as “proof” for anything is totally backwards…but weirder arguments have been made (by Deepak Chopra).

Since it would be impossible to trace all the contributing factors in any decision, many human activities must seem on paper to be randomly generated (from a window of possibilities, tendencies, etc.). If there were an immaterial soul and free will, it would be immeasurable and we could only detect its impulses after the impulse worked through the nervous system and was processed on different levels of abstraction, so the conscious mind might be the last to know of the soul’s impetus.

Freedom

While Harris can safely kill the concept of “freedom” in any argument for free will, he concedes that we do have will and we do make conscious decisions that affect our futures. Our apparent “freedom” falls within a bracket of possibilities. But with intellectual and/or spiritual growth, we continually understand more of our unconscious tendencies and open the window of possible outcomes, thereby increasing our freedom.

“Willpower is itself a biological phenomenon. You can change your life, and yourself, through effort and discipline—but you have whatever capacity for effort and discipline you have in this moment, and not a scintilla more (or less).”

Even arguing against free will, Harris believes willpower exists and is effective. Rather than thinking about humanity as a purely clockwork organism, he grants that our wills are unique to us and we do have a measure of control over our own lives.

“A creative change of inputs to the system—learning new skills, forming new relationships, adopting new habits of attention—may radically transform one’s life. Becoming sensitive to the background causes of one’s thoughts and feelings can—paradoxically—allow for greater creative control over one’s life. It is one thing to bicker with your wife because you are in a bad mood; it is another to realize that your mood and behavior have been caused by low blood sugar. This understanding reveals you to be a biochemical puppet, of course, but it also allows you to grab hold of one of your strings: A bit of food may be all that your personality requires. Getting behind our conscious thoughts and feelings can allow us to steer a more intelligent course through our lives (while knowing, of course, that we are ultimately being steered).”

So Harris’ argument hasn’t damaged our humanity; it has just given us an honest look at what we mean when we say we are free. We definitely are not completely free, and there is no conceivable behavior we can adopt to prove we are free from background causes.

He is a neuroscientist, and his arguments imply material determinism, as when he says, “if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently…” So although he writes as though agnostic about the existence of a soul, Free Will is in fact an argument against the soul. This hidden message shouldn’t surprise anyone, as he is the co-founder of Project Reason, whose mission is to spread science and secular values to the world.

Free Will is a fine piece of work and I recommend it. It does away with a shabby, ill-defined concept in favor of evidence-based conclusions. It might seem a colder way to look at the world, but feeling cold or feeling warm and fuzzy doesn’t change the facts of reality. Nor does his work strip us of spirituality. I am eager to read Harris’ upcoming book, Waking Up: Science, Skepticism, Spirituality, due in 2014.

Free Will by Sam Harris

A little while ago I wrote a review of Free Will by Sam Harris. The review, in which I speak very highly of the book, was aimed at the casual reader, someone who might pick it up out of sheer curiosity, so I avoided some of the semantic and epistemic issues that rubbed me the wrong way. Here is the review, as written. Next week I will post some of the problems I had with the book. These problems didn’t make it into the review because I doubt they will bother the average reader, and because they reflect my own distinct brand of nitpicking.

Article first published as Book Review: Free Will by Sam Harris on Blogcritics.

Free Will by Sam Harris

By now we’ve all noticed the campaign of scientific materialists to discredit religion in the hope that a saner, more scientific society will prevail. This future society, it is hoped, will base decisions on empirically verifiable facts and not superstition. Among those leading the campaign for atheism are Richard Dawkins, the late Christopher Hitchens, Bill Maher, Ricky Gervais, and the co-founder of Project Reason, philosopher and neuroscientist Sam Harris.

Harris’ position is very clear: he believes that physics explains all phenomena and therefore our so-called free will is actually an illusion. Because the decisions we make arise out of our current conditions, and those conditions are the result of innumerable physical influences (brain structure, weather, etc.), our decisions are simply the outcome of a specific organism (us) going through a specific physical history (everything that formed us).

Looking back at any action, it is easy to feel as though we could have done things differently. But we cannot prove this. To say we could have acted differently is to suppose that at least some condition at the time of our decision would have been different. If not for the particular conditions influencing us, why did we decide as we did?

Free Will acknowledges the psychological fact that we all feel we are in control of our decisions. This is, in reality, one of the very few arguments for free will. So Harris asks us to look into the causes of our future decisions. We can tell a story about what influences us and how we will likely act, but this story covers very few of the innumerable factors that actually move us to one action over another. When we finally make a decision and exercise our will, we cannot say the impetus lies solely with our conscious “I.”

The conscious “I,” Harris argues, is simply a function of human organisms. Consciousness is necessary to sort priorities and make decisions, but no supernatural agency is needed to justify this; it is simply an adaptive biological function. Willpower, he claims, is one factor of our consciousness, and we can never truly know why we choose to do one thing over another.

With Harris’ position, there is no need to invent supernatural explanations for any of the facts. God and the soul are comforting fictions but cannot survive in a society that bases its decisions on empirical data.

This book should be fairly convincing to anyone with an open mind. I empathize with the frustration that theists must feel at the disrespect with which many writers and celebrities criticize religion. But Harris is not belligerent at all. His writing is clear, cogent, and makes no unnecessary detours to put down any creed. He instead prefers a straightforward approach, written in firm language.

Far from abstract verbalism, Free Will discusses the practical issues of morality, politics, and justice without yanking the rug out from under them. From Harris’ position justice must still be served, but our inclinations to hate criminals must be reassessed as we realize these criminals aren’t in complete control, but are the unlucky outcomes of bad genes, bad environments, or other maladaptive conditions. The elimination of free will in no way leads to the decay of morality.

What really impresses me about Free Will is the logical, masterful way in which he unfolds his thesis. Many times I finished one chapter with specific questions, only to find those exact questions tackled in the next chapter. Whether the reader is convinced or not, the execution of this little book is far superior to most on the subject.

Solipsism, Semantics and Science, Between You And Me

Previously I wrote that all our experiences of the world happen within our nervous system, that we cannot truly see past our perceptions and experience reality directly. While this is a fact, it doesn’t mean we remain completely separate from each other.

Dictionary.com defines solipsism two ways.

1. Philosophy. The theory that only the self exists, or can be proved to exist.

2. Extreme preoccupation with and indulgence of one’s feelings, desires, etc.; egoistic self-absorption.

Obviously if all we can ever experience happens within our nervous systems, it’s tempting to think that we will always remain apart, that our experiences are never truly shared and don’t even overlap. Too strong a belief in that separation can cause feelings of isolation. But experience happens on many different levels, and some of these levels allow more connection with other people and the world at large.

Nothing in the Universe happens in isolation. Fundamental forces tie all matter together, so everything is related to everything else in a real, physical way, in differing degrees ranging between 0 and ∞ (not inclusive). This is why we have theories like the Butterfly Effect which claim the wind from a butterfly wing can cause a hurricane across the planet. (Careful! Don’t watch the movie of the same name starring Ashton Kutcher.)

Everything that exists is in constant flux, constantly changing and never static. When we talk about any thing, that thing is different one moment to the next (it may change in temperature, mass, and so forth, but at a minimum, the atoms and electrons, etc., are in different positions). So it’s wrong to speak of things as static, unchanging blocks of reality. A static noun implies an unchanging object. It’s much more accurate to discuss reality in terms of process transactions, using active verbs and avoiding the verb is and its other forms (to be, being, was, etc.).

So here we are in a whirling field of activity (of which we are a part). When we observe a part of the universe, we can never know all the details of an event because the characteristics of any event are linked to the details of every other event which are always changing, and so really, the universe is just one big, continuous, ever-changing event, never twice the same. What we perceive are objects abstracted from that event that fall within our range of frequency response.

By frequency response I mean that there is a range of frequencies perceivable by the ear, other frequencies perceivable by the eye, and so forth. These naturally observable frequencies—including others like infrared that, through technology, come within our frequency response—are all we can perceive externally.

So we observe an object, a part of the whole event, and we abstract a set of details. Let’s say I’m watching tennis. Tennis is a sport that depends physically upon the sun, Earth, gravity, nuclear forces, and so forth, even though we do not think about or even perceive these factors. Instead, I focus on the ball, or the short skirts, depending on who’s playing.

The ball or skirt that I perceive is a tiny part of the entire event. The characteristics that I perceive in the ball are finite (because I can only perceive so much), but unlimited (I can always find new characteristics by looking in different ways). So what I perceive, the ball or skirt, will always contain fewer details than the actual event.

But this perception comes together inside my brain. The visual information, audio, movement and relations to surrounding factors (rackets, the net, etc.) all happen on an unspeakable, objective level. My brain compiles the information together into a workable model before I even become aware of it. And I cannot take my perception directly out of my head and place it into the head of my friend. But now that I have a workable model based on perception, I apply a label to the object of my attention; I choose to call it “ball”.

When I call it “ball”, I am applying a verbal label to this non-verbal, objective level of experience. It is the label that I communicate to my friend. But this label is just a label, a semantic tool used to signify my experience. The word “ball” stands for the assembly of perceptions in my brain. The label does not contain the same quantity or quality of information that my perception does. The label has few possible values, because “ball” is a generic term, but for my tennis example, “ball” has one value; the word signifies the actual object being hit back and forth by the players. My label leaves out all the information that I perceive when I perceive the actual ball. But now that I have a label, a means to communicate with my friend, something special happens.

I can apply labels to my experiences and attempt to describe that wordless, objective experience, and my friend can do the same. If I say, “the ball is fuzzy and purple”, my friend can think about what those words mean, or look them up if need be, and say, “actually, you lunatic, the ball is fuzzy and green. Take another look,” at which point I can test my perceptions against his at the verbal level. When I look and find that the ball is green and not purple, I have learned something. I am colourblind.

So while we cannot know reality directly, and we cannot know another’s perceptions, we can communicate with one another to compile more and more information about the experience of our fellow humans. Labels allow us to communicate, which is fundamental for human progress. Without communication, we would still be primitive instead of domesticated primates.

At the label level of life, we can have meaning. There is no such thing as meaning on the objective level of reality, and I doubt the universe as a whole has meaning. Meaning comes from language, and on that level we share reality with our friends.

If we really want to share reality, the key is clear communication. The more thoroughly we communicate our experiences, the more we are connected. This is part of the reason that clear language, proper grammar, and creativity are important to me. There is also a direct link between clear language and clear thinking. At the very least clear language is a symptom of clear thinking. But I have a hunch that clear language can lead to clear thinking. As our rational brains use language and logic to piece together our worldviews, increasing our linguistic capacities can only help the rational process.

Knowing what is communicable and how best to communicate is a key to creativity. Part of that is learning how to differentiate the real from the unreal, fact from fiction, and so forth, so that our friends can weigh our communications accurately. Semantics is essential to how we live and learn; it is how we translate our wordless experience of reality into shared experience. If we can nail down a systematic way of testing experiences against one another, we might learn how the universe operates. This is what science tries to do.

Science is based on a method of experiment and observation, a reduction of hopefully irrelevant variables, and then proper communication of the data to others for verification through further experiments. This is how we methodically tally one person’s experience with another. Through science we learn tendencies about the wordless, objective level of objects, and we can compile theories about the actual events, even the manifold of spacetime in which reality happens. Though science doesn’t prove anything 100%, the more scientific evidence there is for a theory, the more reason there is to believe it.

The goal of science is the discovery of our reality. Science is intentionally sterile to reduce the subjective variables that change so radically from person to person. If we can discover how the universe works independent of our personal experiences, we can fit our personal experiences to the truths of the universe to avoid unpleasant surprises.

In my personal experience, I can apply whatever metaphysics I want. I can believe in faeries, gods, demons, or whatever, and I can talk about them meaningfully and even use them to explain my experience, but this is not science. I might enjoy my metaphysics more than yours, but that doesn’t make them right. Even still, differing viewpoints are essential to scientific testing. The metaphysics of Ptolemy, Galileo, Newton or Einstein helped move science forward because their metaphysics increasingly seemed to tally with the experience of others and the evidence of the day.

As science moves forward it becomes more and more sure of itself. Science continually outmodes metaphysics. That’s progress. It’s crucial that people keep posing new questions about the world as long as theories don’t get in the way of experience. Since theories can alter the power of our investigations, it’s a good idea to pause, take a breath, let the sense data register and be processed by higher abstractions, and try to see things for what they really are. Then, communicate.

Of course, that’s just my opinion. If my opinion tallies with your experiences, feel free to believe me. But you should feel free to not believe me as well. Belief might change your actions and perceptions, but not the external facts of reality.