* Book: Blindsight by Peter Watts, part 1 :book:scifi:philosophy:@review:

:PROPERTIES:
:EXPORT_FILE_NAME: book-blindsight
:EXPORT_DATE: 2019-01-14
:END:

#+toc: headlines 2

This article does contain spoilers for the novel Blindsight.

** Blindsight

As the novel is licensed Creative Commons, I will be citing large chunks of the text. I could make an attempt to paraphrase Watts' work, but in doing so I would only butcher it. Instead, I will let the text stand on its own, then explain it and tie it in with other ideas.

#+begin_quote
You invest so much in it, don't you? It's what elevates you above the beasts of the field, it's what makes you special. Homo sapiens, you call yourself. Wise Man. Do you even know what it is, this consciousness you cite in your own exaltation? Do you even know what it's for?

Maybe you think it gives you free will. Maybe you've forgotten that sleepwalkers converse, drive vehicles, commit crimes and clean up afterwards, unconscious the whole time. Maybe nobody's told you that even waking souls are only slaves in denial.

Make a conscious choice. Decide to move your index finger. Too late! The electricity's already halfway down your arm. Your body began to act a full half-second before your conscious self 'chose' to, for the self chose nothing; something else set your body in motion, sent an executive summary—almost an afterthought—to the homunculus behind your eyes. That little man, that arrogant subroutine that thinks of itself as the person, mistakes correlation for causality: it reads the summary and it sees the hand move, and it thinks that one drove the other.

But it's not in charge. You're not in charge. If free will even exists, it doesn't share living space with the likes of you.

Insight, then. Wisdom. The quest for knowledge, the derivation of theorems, science and technology and all those exclusively human pursuits that must surely rest on a conscious foundation. Maybe that's what sentience would be for—if scientific breakthroughs didn't spring fully-formed from the subconscious mind, manifest themselves in dreams, as full-blown insights after a deep night's sleep. It's the most basic rule of the stymied researcher: stop thinking about the problem. Do something else. It will come to you if you just stop being conscious of it.

Every concert pianist knows that the surest way to ruin a performance is to be aware of what the fingers are doing. Every dancer and acrobat knows enough to let the mind go, let the body run itself. Every driver of any manual vehicle arrives at destinations with no recollection of the stops and turns and roads traveled in getting there. You are all sleepwalkers, whether climbing creative peaks or slogging through some mundane routine for the thousandth time. You are all sleepwalkers.

Don't even try to talk about the learning curve. Don't bother citing the months of deliberate practice that precede the unconscious performance, or the years of study and experiment leading up to the gift-wrapped Eureka moment. So what if your lessons are all learned consciously? Do you think that proves there's no other way? Heuristic software's been learning from experience for over a hundred years. Machines master chess, cars learn to drive themselves, statistical programs face problems and design the experiments to solve them and you think that the only path to learning leads through sentience? You're Stone-age nomads, eking out some marginal existence on the veldt—denying even the possibility of agriculture, because hunting and gathering was good enough for your parents.

Do you want to know what consciousness is for? Do you want to know the only real purpose it serves? Training wheels. You can't see both aspects of the Necker Cube at once, so it lets you focus on one and dismiss the other. That's a pretty half-assed way to parse reality. You're always better off looking at more than one side of anything. Go on, try. Defocus. It's the next logical step.

Oh, but you can't. There's something in the way.

And it's fighting back.
#+end_quote

Blindsight (full book in HTML) is one of my favorite books of all time. I go back and re-read it every month or two. Each time I come across new nuances that are worth pondering.

Watts manages to fit a large number of concepts into very dense prose. It won't be for everyone, but it's worth the slog. Anyone who has read Neal Stephenson's Cryptonomicon knows how jargon-heavy science fiction writers can get -- and unlike Stephenson, Watts is not going to hold your hand. You will get out of this book what you put into it.

The main character is Siri Keeton, a man who had half of his brain removed as a young boy to cure his epilepsy. Ever since the operation, his emotional response has been significantly affected; the very first passages covering a major event from his childhood give you a strong sense of Siri's character. (The prose in each section of Blindsight is carefully selected, with each of its different perspectives given in an appropriate tone.) As an adult, the empty half of Siri's skull has been filled with an array of computer chips that turn him into a walking Chinese Room.

To be more specific, Siri is a Synthesist, a computer-assisted agent capable of interpreting and relaying information without needing to understand it. This is well-explained in the novel and is an important job in a world where baseline humans cannot understand the cutting edge. Siri is the ultimate observer, which is part of why he makes an excellent protagonist. Each of his introspections and flashbacks adds to the richness of the story, giving a picture of a dystopian post-Singularity world where humans simply aren't important. (This theme of human obsolescence is more explicitly explored in the sequel book, Echopraxia.)
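To make the Chinese Room flavor of that job concrete, here is a minimal toy sketch (entirely my own illustration, not anything from the novel, and nothing like how a Synthesist is described as working): a responder that produces appropriate replies purely by matching surface patterns against a rule book, with no model of what any of the words mean. The rules and phrases are invented for the example.

#+begin_src python
# A toy pattern-matching responder: appropriate replies, zero comprehension.
import re

RULE_BOOK = [
    (re.compile(r"\bhow are you\b", re.I), "I am well. How are you?"),
    (re.compile(r"\b(\w+) hurts\b", re.I), r"How long has your \1 been hurting?"),
    (re.compile(r"\bwhy\b", re.I),         "Why do you ask?"),
]

def respond(message: str) -> str:
    """Relay a plausible reply without understanding the message."""
    for pattern, template in RULE_BOOK:
        match = pattern.search(message)
        if match:
            # Expand backreferences (e.g. \1) when the template uses them.
            return match.expand(template) if r"\1" in template else template
    return "Tell me more."

print(respond("My knee hurts"))  # "How long has your knee been hurting?"
print(respond("How are you?"))   # "I am well. How are you?"
#+end_src

Everything interesting happens in the rule book; the function that "answers" never needs to know what a knee is.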

After an event known as Firefall, Siri travels with a group of other transhumans aboard the ship Theseus to investigate a phenomenon in the outer reaches of the solar system. The entity they find there is one of fiction's best examples of something radically alien.

** Central Theme: Consciousness and Intelligence

The main idea of Blindsight is the separation between consciousness and intelligence. Most people, swayed by the obvious dominance of Homo sapiens on Earth, have never considered that our defining trait may be nothing more than a local maximum on the fitness landscape. Watts is not afraid of this idea, and he pushes it to its limit.

You may not be convinced by the argument -- many are not, and philosophical discussion of Blindsight is always interesting -- but Watts makes the case very strongly. In his conception, consciousness and self-awareness are parasitic computational processes, attaching themselves to millions of years of pre-optimized neural modules.

In the book's appendix, where he explains at length the scientific/conceptual reasoning behind the themes of the novel, he sums it up quite well:

#+begin_quote
But beneath the unthreatening, superficial question of what consciousness is floats the more functional question of what it's good for. Blindsight plays with that issue at length, and I won't reiterate points already made. Suffice to say that, at least under routine conditions, consciousness does little beyond taking memos from the vastly richer subconscious environment, rubber-stamping them, and taking the credit for itself. In fact, the nonconscious mind usually works so well on its own that it actually employs a gatekeeper in the anterior cingulate cortex to do nothing but prevent the conscious self from interfering in daily operations. (If the rest of your brain were conscious, it would probably regard you as the pointy-haired boss from Dilbert.)
#+end_quote

Is this kind of self-reflective process actually required for intelligence? Can a biological machine -- unaware even of its own existence -- act in the world with motivations, solve problems, and communicate with other members of its species? The intuitive answer, because we humans can imagine no other way of understanding the nature of Being, is that only an embodied, conscious creature is capable of the kind of cognitive complexity we see in humans.

Watts disagrees.

** Consciousness and Evolution

#+begin_quote
Evolution has no foresight. Complex machinery develops its own agendas. Brains—cheat. Feedback loops evolve to promote stable heartbeats and then stumble upon the temptation of rhythm and music. The rush evoked by fractal imagery, the algorithms used for habitat selection, metastasize into art. Thrills that once had to be earned in increments of fitness can now be had from pointless introspection. Aesthetics rise unbidden from a trillion dopamine receptors, and the system moves beyond modeling the organism. It begins to model the very process of modeling. It consumes ever-more computational resources, bogs itself down with endless recursion and irrelevant simulations. Like the parasitic DNA that accretes in every natural genome, it persists and proliferates and produces nothing but itself. Metaprocesses bloom like cancer, and awaken, and call themselves I.

The system weakens, slows. It takes so much longer now to perceive—to assess the input, mull it over, decide in the manner of cognitive beings. But when the flash flood crosses your path, when the lion leaps at you from the grasses, advanced self-awareness is an unaffordable indulgence. The brain stem does its best. It sees the danger, hijacks the body, reacts a hundred times faster than that fat old man sitting in the CEO's office upstairs; but every generation it gets harder to work around this—this creaking neurological bureaucracy.

I wastes energy and processing power, self-obsesses to the point of psychosis. Scramblers have no need of it, scramblers are more parsimonious. With simpler biochemistries, with smaller brains—deprived of tools, of their ship, even of parts of their own metabolism—they think rings around you. They hide their language in plain sight, even when you know what they're saying. They turn your own cognition against itself. They travel between the stars. This is what intelligence can do, unhampered by self-awareness.

I is not the working mind, you see. For Amanda Bates to say "I do not exist" would be nonsense; but when the processes beneath say the same thing, they are merely reporting that the parasites have died. They are only saying that they are free.
#+end_quote

Humans are evolved creatures. Our behaviors are rooted in millions of years of natural selection; most of the activities of your brain are not much different from the activities of the brains of lesser primates. Many of the behaviors we consider uniquely human are actually mammalian behaviors: even rats understand fairness. Elephants seem to have complex death-related behaviors. Chimpanzees exhibit behaviors that are not exactly human but still recognizable -- tool use, altruism and cooperation, excellent recall -- which is not so surprising, given that chimpanzees are our closest relatives.

However, complex intelligence is not restricted to mammals. Bird intelligence is a highly interesting topic, with some birds showing signs of consciousness and passing the mirror test. Birds and mammals diverged from one another roughly 300 million years ago, so whatever neural functions are responsible for these feats evolved entirely independently.

Of course, don't let any of this confuse you: humans /are/ special. We are capable of adapting to nearly any environment on a timeline measured in days or weeks, while most other complex animals can only adapt over generations. We are a [[https://www.theatlantic.com/science/archive/2018/04/in-a-few-centuries-cows-could-be-the-largest-land-animals-left/558323][walking extinction event]]: large animals reliably went extinct not long after primitive Homo sapiens moved into the neighborhood. Cultural influences on behavior can be as strong as or stronger than genetic influences, depending on the trait in question. Complex language, although a kludge --

"When you get right down to it, it’s a work-around. Like trying to

describe dreams with smoke signals. It’s noble, it’s maybe the

most noble thing a body can do but you can’t turn a sunset into a

string of grunts without losing something.”

-- is perhaps the defining characteristic of homo sapiens.

Humans are special.

But they're not as special as you think.

Once you understand that intelligence can come in many different forms and still solve complex problems -- an understanding that becomes even more unsettling when you study machine intelligence -- you're left holding the bag: what is this consciousness thing for, anyway?

** Consciousness is Incompetent

#+begin_quote
A memory rose into my mind and stuck there: a man in motion, head bent, mouth twisted into an unrelenting grimace. His eyes focused on one foot, then the other. His legs moved stiffly, carefully. His arms moved not at all. He lurched like a zombie in thrall to rigor mortis.

I knew what it was. Proprioreceptive polyneuropathy, a case study I'd encountered in ConSensus back before Szpindel had died. This was what Pag had once compared me to; a man who had lost his mind. Only self-awareness remained. Deprived of the unconscious sense and subroutines he had always taken for granted, he'd had to focus on each and every step across the room. His body no longer knew where its limbs were or what they were doing. To move at all, to even remain upright, he had to bear constant witness.

There'd been no sound when I'd played that file. There was none now in its recollection. But I swore I could feel Sarasti at my shoulder, peering into my memories. I swore I heard him speak in my mind like a schizophrenic hallucination:

/This is the best that consciousness can do, when left on its own./
#+end_quote

There is an excellent short documentary clip on YouTube, titled The Man Who Lost His Body, about a patient with proprioreceptive polyneuropathy: a man named Ian Waterman whose nervous system was partially destroyed by a virus. (I believe this clip is the same one Siri is remembering in-story; there are links to the full documentary in the video description.) He is incapable of controlling his body without constant conscious attention. After a long period of relentless practice he managed to re-learn how to walk and to control his arms. Even then, his gait is obviously laborious, he's incapable of anything faster than a walk, and he will have to live the rest of his life this way.

Think about how absolutely terrifying this would be. Knowing that until you die, you will have to be in conscious control of every movement in your body. Robbed of the automatic processes that allow your body to move smoothly, you find that even your body is foreign to you. You're brought back to a certain conception of consciousness, oversimplified though it may be: that you're just a brain driving around in a meat car.

And now you can't drive.

This is the best that consciousness can do, when left on its own.

Extreme cases like proprioreceptive polyneuropathy merely make the point vivid. A human being can barely control his own body without the assistance of countless unconscious processes.

It gets worse. Your conscious mind isn't even good at the things you are good at.

Consciousness is, at best, a meta-process that organizes the behavior of the rest of the brain -- the seat of goal-oriented behavior. At worst, it's merely an observer convincing itself that it is responsible for the amazing feats of the human brain. Your conscious mind is capable of almost none of the complex behaviors that a trained adult relies on to accomplish anything. This is the entire point of training and practice, as mentioned in the first quote of the article:

#+begin_quote
Every concert pianist knows that the surest way to ruin a performance is to be aware of what the fingers are doing. Every dancer and acrobat knows enough to let the mind go, let the body run itself. Every driver of any manual vehicle arrives at destinations with no recollection of the stops and turns and roads traveled in getting there. You are all sleepwalkers, whether climbing creative peaks or slogging through some mundane routine for the thousandth time. You are all sleepwalkers.
#+end_quote

Imagine a strong chess player, deep in thought during a tournament game. Chess is the standard example of a purely intellectual pursuit. Accordingly, we could assume that what the chess master is doing is this: consciously evaluating the options before him, weighing complex variations against one another, before finally selecting his move. Chess is the ultimate example of the complexity of the conscious mind. Right?

Wrong. In a beginner, who must constantly remind himself of the rules, perhaps the conscious mind plays a large role. But he's not very good at the game, forever missing correct ideas and miscalculating lines. The master is differentiated from the amateur by the sheer amount of unconscious processing that goes on while he plays. Positional concepts come to mind unbidden; long-term projections are made completely intuitively; tactical combinations are pre-processed against tens of thousands of similar positions; candidate moves are selected on gut feeling alone. The conscious mind is relegated to checking the work of the unconscious processes that are doing all of the heavy lifting -- if even that.

The master is capable of feats that seem unbelievable to those who are not strong chess players. Non-players are amazed by the sight of someone playing a game blindfolded, but tournament players consider this merely an "impressive" feat. Most strong tournament players are capable of recalling entire games from memory, and many of them can play blindfolded. I've been able to play blindfold chess since elementary school. How is this possible?

A famous 1973 study by William G. Chase and Herbert A. Simon sheds light on the matter. Previously, de Groot (1965) had devised a short-term recall task: show a chess position to a player for about 5 seconds, then ask him to reconstruct it from memory. Masters could do this nearly perfectly, while performance dropped off sharply below master level. Crucially, this only held for positions that could plausibly arise in a real game; when the pieces were arranged on the board at random, masters showed almost no advantage in recall. What is going on here?

A chess board has 64 squares, and each game begins with 32 pieces, evenly split between the players. The number of possible chess games and positions is literally astronomical. However, a position from a real game can be cut up into conceptual chunks. What piece pairings remain on the board? Which players have castled? What does the pawn structure look like? Each one of these questions has a short set of "reasonable" answers, easily captured by a player with sufficient experience. A non-player or weak player looking at a board may only recognize a 64-square checkerboard covered with 20 or so pieces. A strong player looking at the same position may recognize: this is a King's Indian line, plus a couple of minor alterations as the players drifted away from theory. The master therefore does less conscious work by far.
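To make the chunking idea concrete, here is a toy sketch (my own illustration, not anything from Chase and Simon's paper): if short-term memory holds a roughly fixed number of chunks, then what matters is how many chunks a position decomposes into, not how many pieces it contains. The piece labels and the tiny "chunk library" are invented for the example.

#+begin_src python
# Toy model: recall load measured in chunks rather than individual pieces.

# Hypothetical chunk library an experienced player might have internalized.
KNOWN_CHUNKS = {
    frozenset({"Kg1", "Rf1", "pf2", "pg2", "ph2"}): "castled kingside",
    frozenset({"pc4", "pd5", "pe4"}): "locked central pawn chain",
    frozenset({"Nf3", "Bg2"}): "fianchetto setup",
}

def chunks_needed(position: set[str]) -> int:
    """Greedily cover the position with known chunks; leftovers cost one chunk each."""
    remaining = set(position)
    count = 0
    for chunk in KNOWN_CHUNKS:
        if chunk <= remaining:
            remaining -= chunk
            count += 1
    return count + len(remaining)  # unrecognized pieces are remembered one by one

structured = {"Kg1", "Rf1", "pf2", "pg2", "ph2", "Nf3", "Bg2", "pc4", "pd5", "pe4"}
random_like = {"Qh5", "Ra3", "pb6", "Ng8", "Bc1", "pf3", "Qa8", "pe6", "Kb1", "pd2"}

print(chunks_needed(structured))   # few chunks: easy to hold in memory
print(chunks_needed(random_like))  # roughly one chunk per piece: quickly overflows
#+end_src

The "master" in this toy does less remembering, not more, which is the same asymmetry the recall experiments point at.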

In fact, de Groot had found that

#+begin_quote
Masters search through about the same number of possibilities as weaker players -- perhaps even fewer, almost certainly not more -- but they are very good at coming up with the "right" moves for further consideration, whereas weak players spend considerable time analyzing the consequences of bad moves. (Chase & Simon 1973)
#+end_quote

The master has offloaded most of the work to an enormous network of unconscious processes which are finely-honed by experience and practice. He is doing less work, not more.

You -- the conscious you -- are not very good at anything.

** Theme: Humans Post-Singularity

#+begin_quote
The new Millennium changed all that. We've surpassed ourselves now, we're exploring terrain beyond the limits of merely human understanding. Sometimes its contours, even in conventional space, are just too intricate for our brains to track; other times its very axes extend into dimensions inconceivable to minds built to fuck and fight on some prehistoric grassland. So many things constrain us, from so many directions. The most altruistic and sustainable philosophies fail before the brute brain-stem imperative of self-interest. Subtle and elegant equations predict the behavior of the quantum world, but none can explain it. After four thousand years we can't even prove that reality exists beyond the mind of the first-person dreamer. We have such need of intellects greater than our own.

But we're not very good at building them. The forced matings of minds and electrons succeed and fail with equal spectacle. Our hybrids become as brilliant as savants, and as autistic. We graft people to prosthetics, make their overloaded motor strips juggle meat and machinery, and shake our heads when their fingers twitch and their tongues stutter. Computers bootstrap their own offspring, grow so wise and incomprehensible that their communiqués assume the hallmarks of dementia: unfocused and irrelevant to the barely-intelligent creatures left behind.
#+end_quote

The Singularity is the hypothetical moment at which technological progress exceeds our predictive ability. The name is an analogy with a singularity in physics (or any of the other fields which use the term).

In the example of a black hole, we know that as the volume of an object shrinks, its density must increase, as long as it doesn't lose any mass. The compressing center point of a black hole, "punched out" from the surrounding universe by an event horizon, is sometimes described as having a finite mass but zero volume -- and therefore infinite density. The nature of this object makes it impossible for us to say for certain how the physical laws apply to it.
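As a back-of-the-envelope way to see where the "infinite density" comes from (a purely classical simplification, nothing specific to the novel or to general relativity): hold the mass fixed and let the radius go to zero.

\[
\rho(r) = \frac{M}{V(r)} = \frac{M}{\tfrac{4}{3}\pi r^{3}} \;\longrightarrow\; \infty
\quad \text{as } r \to 0^{+}, \text{ with } M \text{ fixed.}
\]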

Futurists and techno-cultists of all stripes have long recognized the potential of a runaway intelligence process: an exponential explosion in machine intelligence leaving anything recognizable in the dust. Such an intelligence would, in this conception, be as superior to us as we are to chimpanzees, but even more alien. In the same way that a black hole leaves us unable to determine the physical nature of the singularity at its core, a future dominated by machine superintelligence is so radically different from anything we know that we struggle to grasp even the scope of the change it would bring. (Excellent science fiction writers like Watts can gesture at it, but they're still just guessing.)

Of course, the Singularity is a hypothetical event. But if you believe in the inevitability of a machine superintelligence, in some form -- as I do -- then it's very difficult to avoid the conclusion that humans need not apply.

A human would be no more capable of understanding the thought process of a Singularity-tier machine intelligence than his dog is able to understand what he's doing when he files his taxes.

Maybe you're skeptical. This radical change is alien to you, with no context to let you simulate it. That's fine. Humans have undergone many analogous changes in the past -- imagine a Paleolithic hunter-gatherer predicting the complex social relationships of a late Bronze Age society, with city streets, mud brick houses, organized religion, and systematic food production technologies like agriculture, animal domestication, and selective breeding for both. Unthinkable. Perhaps he -- brighter than the other members of his tribe -- has some inkling that the wild cereals he harvests from the fields of the Levant could be bent to his will, or that these goats could be captured and bred in the safety of the village. Perhaps he has some inkling of a system for offloading memory into physical objects -- tokens for economic exchange, notches on a piece of clay for counting -- that his great-great-great-and-so-on-grandchildren would one day turn into a formal system of writing. But merely an inkling.

This process, this conceptual singularity brought about by technological change, has happened many times throughout history. The Neolithic Revolution, advancements in metallurgy, civilizational conquests spreading social strategies and technologies by the sword, religious centralization driven by written language and later by long-distance communication -- and at some point in this process we hit the gas with the Industrial Revolution. And there's no way to take our foot off the gas now.

The very first sentence of Ted Kaczynski's manifesto reads:

#+begin_quote
The Industrial Revolution and its consequences have been a disaster for the human race.
#+end_quote

Perhaps it was! Disregarding his misguided bombing campaign, which could never have solved the problem even if he were correct, we humans of the future must accept a further principle: there's no way back. The Industrial Revolution was only the beginning. Now we have vast interconnected economic systems, near-instantaneous global communications, satellites in orbit maintaining critical infrastructure, and a sort of high-latency hive mind growing ever more abominable by the year in the form of social media. As a common accelerationist refrain goes, derived from Robert Frost's poem /A Servant to Servants/:

#+begin_quote
The only way out is through.
#+end_quote

We go forward, consequences be damned, because humans know no other way. Humans aren't even in charge of this process, and before long, we won't even be the primary driving force for it.

** Mind Uploading, Virtual Utopia

#+begin_quote
It had been scarcely two months since Helen had disappeared under the cowl. Two months by our reckoning, at least. From her perspective it could have been a day or a decade; the Virtually Omnipotent set their subjective clocks along with everything else.

She wasn't coming back. She would only deign to see her husband under conditions that amounted to a slap in the face. He didn't complain. He visited as often as she would allow: twice a week, then once. Then every two. Their marriage decayed with the exponential determinism of a radioactive isotope and still he sought her out, and accepted her conditions.

On the day the lights came down, I had joined him at my mother's side. It was a special occasion, the last time we would ever see her in the flesh. For two months her body had lain in state along with five hundred other new ascendants on the ward, open for viewing by the next of kin. The interface was no more real than it would ever be, of course; the body could not talk to us. But at least it was there, its flesh warm, the sheets clean and straight. Helen's lower face was still visible below the cowl, though eyes and ears were helmeted. We could touch her. My father often did. Perhaps some distant part of her still felt it.
#+end_quote

Once the machines are in control, why bother sticking around? Humans are natural pleasure-seekers. The pursuit of ever-new forms of sensory novelty is in our blood. We are explorers not just in the physical sense -- ancient /homo sapiens sapiens/ spreading across the surface of the planet, conquistadors subjugating a new continent, expert sailors exploring and charting the open seas -- but in a way that is much more abstract. The same circuits that allow a cat to explore a new home have been bootstrapped into abstract reward mechanisms for learning. A young teenage nerd exploring the world of Azeroth in /World of Warcraft/ is using the same circuits as an ancient hunter-gatherer scouting a new territory for sources of food, filtered through a primitive 2-dimensional screen with the benefit of some pre-recorded audio.

Virtual reality is a trend which has taken off in recent years, with primitive VR technology available to consumers at reasonably low cost. The simulation does not have to be particularly life-like. Symbolic abstractions, voxel systems like those in Minecraft, are more than sufficient for the brain to recognize objects and terrain. The brain understands. What's more, it's easily fooled. At some level, the user is aware that he is in a simulation, and that the goggles are merely a screen overlaying his visual field. And yet it is easy to find footage on YouTube of people panicking as a VR rollercoaster plummets into near freefall. Don't forget that the experience isn't out there -- it's in here, in your head, with you. A sufficiently complex representation, delivered directly to your senses, gives you experiences which are just as "real" to your brain as what you could experience in the real world, but they're unhampered by silly rules like physics.

Virtual reality provides access to experiences which are unimaginable to modern humans. Science fiction authors have been writing about virtual worlds at least since William Gibson's /Neuromancer/ and Neal Stephenson's /Snow Crash/, with plenty of other major examples as well (like Star Trek's holodeck).

It doesn't matter to your brain that none of these experiences take place in the physical world. They are experiences nonetheless, capable of being encoded as memories, consolidated into long-term memory, even absorbed as unconscious skill before ever being applied in meatspace.

But for some, or many, of us, the real question may become: why go back? Why wake up when you've become indistinguishable from a god?

** Post-Motivations

#+begin_quote
Jim had his inhaler in hand as we emerged from the darkness. I hoped, without much hope, that he'd throw it into the garbage receptacle as we passed through the lobby. But he raised it to his mouth and took another hit of vasopressin, that he would never be tempted.

Fidelity in an aerosol. "You don't need that any more," I said.

"Probably not," he agreed.

"It won't work anyway. You can't imprint on someone who isn't even there, no matter how many hormones you snort. It just—"

Jim said nothing. We passed beneath the muzzles of sentries panning for infiltrating Realists.

"She's gone," I blurted. "She doesn't care if you find someone else. She'd be happy if you did." /It would let her pretend the books had been balanced./

"She's my wife," he told me.
#+end_quote

Even those of us who still choose to remain in meatspace will find ourselves increasingly able to decouple from our existing reward circuitry. We already live in such a world, to some degree: drugs can ameliorate (antidepressants for depression) or even eliminate (stimulants for ADD) the disadvantages of a particular brain. Humans have been inducing religious experiences with hallucinogens like psilocybin for a very long time. We have always understood that brains can be hacked by chemicals, although our understanding of this fact was primitive before the 20th century.

Now imagine living in a world where even the emotional drives are easily manipulated. Love potions, a mainstay of fantasy fiction, are not impossible. The right concoction of chemicals can induce feelings of love, religious ecstasy, blind hatred and aggression. A loving couple. An ecstatic monk. A remorseless supersoldier.

These are not dreams. These are inevitabilities.

Buckle up.

** End of Part 1

Blindsight covers a lot of territory. In the next post, I'll cover the concept that "Technology Implies Belligerence!", a very convincing account of "xenopsychology", along with analysis of the various types of transhumans, near-humans, and non-humans that appear in the novel.

* TODO Book: Blindsight by Peter Watts, part 2 :book:scifi:philosophy:@review:

:PROPERTIES:
:EXPORT_FILE_NAME: book-blindsight-2
:EXPORT_DATE: 2019-01-02
:END:

#+toc: headlines 2

This article does contain spoilers for the novel Blindsight.

** Theme: Technology Implies Belligerence!

#+begin_quote
But what were the odds that even our best weapons would prove effective against the intelligence that had pulled off the Firefall? If the unknown was hostile, we were probably doomed no matter what we did. The Unknown was technologically advanced—and there were some who claimed that that made them hostile by definition. Technology Implies Belligerence, they said.

I suppose I should explain that, now that it's completely irrelevant. You've probably forgotten after all this time.

Once there were three tribes. The Optimists, whose patron saints were Drake and Sagan, believed in a universe crawling with gentle intelligence—spiritual brethren vaster and more enlightened than we, a great galactic siblinghood into whose ranks we would someday ascend. Surely, said the Optimists, space travel implies enlightenment, for it requires the control of great destructive energies. Any race which can't rise above its own brutal instincts will wipe itself out long before it learns to bridge the interstellar gulf.

Across from the Optimists sat the Pessimists, who genuflected before graven images of Saint Fermi and a host of lesser lightweights. The Pessimists envisioned a lonely universe full of dead rocks and prokaryotic slime. The odds are just too low, they insisted. Too many rogues, too much radiation, too much eccentricity in too many orbits. It is a surpassing miracle that even one Earth exists; to hope for many is to abandon reason and embrace religious mania. After all, the universe is fourteen billion years old: if the galaxy were alive with intelligence, wouldn't it be here by now?

Equidistant to the other two tribes sat the Historians. They didn't have too many thoughts on the probable prevalence of intelligent, spacefaring extraterrestrials— but if there are any, they said, they're not just going to be smart. They're going to be mean.

It might seem almost too obvious a conclusion. What is Human history, if not an ongoing succession of greater technologies grinding lesser ones beneath their boots? But the subject wasn't merely Human history, or the unfair advantage that tools gave to any given side; the oppressed snatch up advanced weaponry as readily as the oppressor, given half a chance. No, the real issue was how those tools got there in the first place. The real issue was what tools are for.

To the Historians, tools existed for only one reason: to force the universe into unnatural shapes. They treated nature as an enemy, they were by definition a rebellion against the way things were. Technology is a stunted thing in benign environments, it never thrived in any culture gripped by belief in natural harmony. Why invent fusion reactors if your climate is comfortable, if your food is abundant? Why build fortresses if you have no enemies? Why force change upon a world which poses no threat?

Human civilization had a lot of branches, not so long ago. Even into the twenty-first century, a few isolated tribes had barely developed stone tools. Some settled down with agriculture. Others weren't content until they had ended nature itself, still others until they'd built cities in space.

We all rested eventually, though. Each new technology trampled lesser ones, climbed to some complacent asymptote, and stopped—until my own mother packed herself away like a larva in honeycomb, softened by machinery, robbed of incentive by her own contentment.

But history never said that everyone had to stop where we did. It only suggested that those who had stopped no longer struggled for existence. There could be other, more hellish worlds where the best Human technology would crumble, where the environment was still the enemy, where the only survivors were those who fought back with sharper tools and stronger empires. The threats contained in those environments would not be simple ones. Harsh weather and natural disasters either kill you or they don't, and once conquered—or adapted to— they lose their relevance. No, the only environmental factors that continued to matter were those that fought back, that countered new strategies with newer ones, that forced their enemies to scale ever-greater heights just to stay alive. Ultimately, the only enemy that mattered was an intelligent one.

And if the best toys do end up in the hands of those who've never forgotten that life itself is an act of war against intelligent opponents, what does that say about a race whose machines travel between the stars?
#+end_quote

** Theme: Alien-ness

*** Transhumans: Theseus Crew

#+begin_quote
They never really talked like that, by the way. You'd hear gibberish—a half-dozen languages, a whole Babel of personal idioms—if I spoke in their real voices.

Some of the simpler tics make it through: Sascha's good-natured belligerence, Sarasti's aversion to the past tense. Cunningham lost most of his gender pronouns to an unforeseen glitch during the work on his temporal lobe. But it went beyond that. The whole lot of them threw English and Hindi and Hadzane into every second sentence; no real scientist would allow their thoughts to be hamstrung by the conceptual limitations of a single language. Other times they acted almost as synthesists in their own right, conversing in grunts and gestures that would be meaningless to any baseline. It's not so much that the bleeding edge lacks social skills; it's just that once you get past a certain point, formal speech is too damn slow.
#+end_quote

*** Near-Humans: Jukka Sarasti

#+begin_quote
From a third, just short of the forward bulkhead, Jukka Sarasti climbed into view like a long white spider.

If he'd been Human I'd have known instantly what I saw there, I'd have smelled murderer all over his topology. And I wouldn't have been able to even guess at the number of his victims, because his affect was so utterly without remorse. The killing of a hundred would leave no more stain on Sarasti's surfaces than the swatting of an insect; guilt beaded and rolled off this creature like water on wax.

But Sarasti wasn't human. Sarasti was a whole different animal, and coming from him all those homicidal refractions meant nothing more than predator. He had the inclination, was born to it; whether he had ever acted on it was between him and Mission Control.

Maybe they cut you some slack, I didn't say to him. Maybe it's just a cost of doing business. You're mission-critical, after all. For all I know you cut a deal. You're so very smart, you know we wouldn't have brought you back in the first place if we hadn't needed you. /From the day they cracked the vat you knew you had leverage./

/Is that how it works, Jukka? You save the world, and the folks who hold your leash agree to look the other way?/
#+end_quote

*** Artificial Intelligence: The Captain

"Susan James has barricaded herself in the bridge and shut down

autonomic overrides." An unfamiliar voice, flat and

affectless. "She has initiated an unauthorized burn. I have begun

a controlled reactor shutdown; be advised that the main drive will

be offline for at least twenty-seven minutes."

The ship, I realized, its voice raised calmly above the

alarm. The Captain itself. On Public Address.

That was unusual.

*** Extraterrestrial: Rorschach

"You haven't mentioned your father at all," Rorschach remarked.

"That's true, Rorschach," Sascha admitted softly, taking a

breath—

And stepping forward.

"So why don't you just suck my big fat hairy dick?"

The drum fell instantly silent. Bates and Szpindel stared,

open-mouthed. Sascha killed the channel and turned to face us,

grinning so widely I thought the top of her head would fall off.

"Sascha," Bates breathed. "Are you crazy?"

"So what if I am? Doesn't matter to that thing. It doesn't have a

clue what I'm saying."

"What?"

"It doesn't even have a clue what it's saying back," she added.

"Wait a minute. You said—/Susan/ said they weren't parrots. They

knew the rules."

And there Susan was, melting to the fore: "I did, and they do. But

pattern-matching doesn't equal comprehension."

Bates shook her head. "You're saying whatever we're talking

to—it's not even intelligent?"

"Oh, it could be intelligent, certainly. But we're not talking

to it in any meaningful sense."

"So what is it? Voicemail?"

"Actually," Szpindel said slowly, "I think they call it a Chinese

Room..."

About bloody time, I thought.

*** Extraterrestrial: Scramblers

#+begin_quote
Imagine you're a scrambler.

Imagine you have intellect but no insight, agendas but no awareness. Your circuitry hums with strategies for survival and persistence, flexible, intelligent, even technological—but no other circuitry monitors it. You can think of anything, yet are conscious of nothing.

You can't imagine such a being, can you? The term being doesn't even seem to apply, in some fundamental way you can't quite put your finger on.

Try.

[...]

I shook my head, trying to wrap it around that insane, impossible conclusion. "They're not even hostile." Not even capable of hostility. Just so profoundly alien that they couldn't help but treat human language itself as a form of combat.

How do you say We come in peace when the very words are an act of war?

[...]

... if Sarasti was right, scramblers were the /norm/: evolution across the universe was nothing but the endless proliferation of automatic, organized complexity, a vast arid Turing machine full of self-replicating machinery forever unaware of its own existence. And we—we were the flukes and the fossils. We were the flightless birds lauding our own mastery over some remote island while serpents and carnivores washed up on our shores. [...] If Sarasti was right, there was no hope of reconciliation.
#+end_quote

* TODO Comments on Artificial Intelligence, part 2 :future:philosophy:ai:@analysis:@technology:

:PROPERTIES:
:EXPORT_FILE_NAME: comments-on-artificial-intelligence-2
:EXPORT_DATE: 2019-01-02
:END:

#+toc: headlines 2

** Major Problems for AI

Before I begin trying to sketch out the process of evolution from an agent in a simulation game, to something we could call "alive", to something we could call "intelligent", we need to tackle some of the major stumbling blocks to AI.

Note: I am not an AI researcher, nor an expert on programming, nor an academic philosopher. I am an amateur philosopher (see the Site Overview for a non-exhaustive list of my influences).

* Comments on Artificial Intelligence, part 1 :future:philosophy:ai:@analysis:@technology:

:PROPERTIES:
:EXPORT_FILE_NAME: comments-on-artificial-intelligence-1
:EXPORT_DATE: 2019-01-01
:END:

** Necessary Redefinitions and Clarifications

No discussion of artificial intelligence makes sense without a re-evaluation of a few cherished concepts. What qualifies as life? Intelligence? Sentience? Our current pictures of these ideas are rooted in hundreds of thousands of pages of philosophy and science. However, they are incomplete, suffering from a common ailment: insufficient sample size. When the borders of these concepts are examined closely, they turn fuzzy.

In order to have a meaningful discussion about whether life can exist in silico, we have to dismiss concepts which exclude digital forms of life by definition alone. Fortunately, the literature shows that there is already a tradition of doing just that.

** Life

A good definition of life can be found on Wikipedia. While there is no truly authoritative definition of "life" (a fact which itself means something), everybody agrees on a few of the fundamentals. Most definitions are biologically chauvinistic: they imply that the only things we can call "alive" are those things which are, like us, cellular creatures formed of organic tissues and "coded" by DNA.

Below that you will see alternative definitions of life. These alternative definitions are important to us, particularly because they

#+begin_quote
... [distinguish] life by the evolutionary process rather than its chemical composition.
#+end_quote

In fact, I would suggest a much stronger definition: the "strong alife" ("artificial life") position. In doing so, I am not alone. In a potentially apocryphal quote, the computer scientist (and towering genius) John von Neumann is said to have stated that

#+begin_quote
Life is a process which can be abstracted away from any particular medium.
#+end_quote

One of the early artificial life researchers, a man named Tom Ray, wrote a paper titled "An approach to the synthesis of life" in 1991. The paper is an extremely good read, but it can be a bit opaque: he describes his computer simulation Tierra, a cyber-ecosystem which seeks to demonstrate evolved complexity among digital "organisms" (self-modifying scraps of self-replicating code competing for computer resources). Tierra is featured extensively in Steven Levy's 1993 book Artificial Life, which I never returned to my local library in high school (whoops).
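For a sense of what the bare-bones alife loop looks like, here is a drastically simplified, Tierra-inspired toy (my own sketch, not Ray's actual system, which runs self-modifying machine code on a virtual CPU): genomes copy themselves imperfectly into a fixed-size "soup", and a reaper culls the oldest when space runs out.

#+begin_src python
# Toy alife loop: replication with mutation under a hard resource limit.
import random

SOUP_CAPACITY = 200      # limited "memory": the resource organisms compete for
MUTATION_RATE = 0.01     # chance of flipping each gene during replication

def replicate(genome: list[int]) -> list[int]:
    """Copy a genome, imperfectly."""
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in genome]

def step(soup: list[list[int]]) -> None:
    """One generation: every organism tries to copy itself; the reaper culls."""
    offspring = [replicate(g) for g in soup]
    soup.extend(offspring)
    # Reaper: the oldest organisms (front of the list) die first when space runs out.
    del soup[:max(0, len(soup) - SOUP_CAPACITY)]

soup = [[0] * 16 for _ in range(4)]   # seed with a few identical ancestors
for _ in range(50):
    step(soup)

print(len(soup), "organisms,", len({tuple(g) for g in soup}), "distinct genomes")
#+end_src

Nothing here is open-ended in Ray's sense, but it shows the minimum ingredients his definition turns on: self-replication, variation, and competition for a limited resource.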

In the paper we find Ray express a strong alife position in no uncertain terms.

#+begin_quote
Ideally, the science of biology should embrace all forms of life. However in practice, it has been restricted to the study of a single instance of life, life on earth. Life on earth is very diverse, but it is presumably all part of a single phylogeny. Because biology is based on a sample size of one, we can not know what features of life are peculiar to earth, and what features are general, characteristic of all life.

[...]

The intent of this work is to synthesize rather than simulate life. This approach starts with hand crafted organisms already capable of replication and open-ended evolution, and aims to generate increasing diversity and complexity in a parallel to the Cambrian explosion.

To state such a goal leads to semantic problems, because life must be defined in a way that does not restrict it to carbon based forms. It is unlikely that there could be general agreement on such a definition, or even on the proposition that life need not be carbon based. Therefore, I will simply state my conception of life in its most general sense. I would consider a system to be living if it is self-replicating, and capable of open-ended evolution. Synthetic life should self-replicate, and evolve structures or processes that were not designed-in or pre-conceived by the creator (Pattee, 1989).
#+end_quote

Many arguments can be (and have been) made about the validity of the strong alife hypothesis, in either direction. I will adopt it here, holding that the organic chauvinism of the traditional definitions is unsuitable for a world in which intelligent agents may arise in silicon. (The moral philosophy of artificial intelligence is outside the scope of this article. If you're in for the ride, here's a Wikipedia rabbit hole.) Later I will also adopt the "strong AI" position as a consequence of some ideas by futurist Ray Kurzweil (not to be confused with John Searle's definition of "strong AI"; see "Chinese Room").

** Intelligence

The first sentence of the Wikipedia article says this:

#+begin_quote
Intelligence has been defined in many ways, including: the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, and problem solving. More generally, it can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.
#+end_quote

A quite inclusive definition, since it's already intended to include nonhuman intelligence. (Of course, as with "life", no authoritative definition of "intelligence" exists. Every field that claims to study intelligence does so with its own definition, tools, and metrics. While the related concept of the intelligence quotient (IQ) has strong predictive validity, it doesn't actually "measure" anything -- as an entirely statistical model, it leaves as many questions unanswered as it can answer. Here is a strong paper covering some of the questions about IQ and why it doesn't exactly correlate to "intelligence".)

If you're interested in the myriad definitions of intelligence, there is a non-exhaustive (but reasonably representative) list here on the same page.

The Wikipedia definition of intelligence will work just fine, although we must remember that the means by which an artificial lifeform achieves intelligence would necessarily be very different from our own: an artificial intelligence lacks the millions of years of embodied evolution that gave us the sophisticated machinery of the brain, only recently turned toward cognitive complexity and technological development.

A passage from Nick Bostrom's excellent Superintelligence: Paths, Dangers, Strategies will shed light on true machine intelligence (differentiated from the kinds of highly-complex programs ("expert systems") that dominate public discussion of AI):

#+begin_quote
But let us note at the outset that however many stops there are between here and human-level machine intelligence, the latter is not the final destination. The next stop, just a short distance farther along the tracks, is superhuman-level machine intelligence. The train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by.

The mathematician I. J. Good, who had served as chief statistician in Alan Turing's code-breaking team in World War II, might have been the first to enunciate the essential aspects of this scenario. In an oft-quoted passage from 1965, he wrote:

> Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
#+end_quote

People who believe in the coming of the Singularity, a radical transformation of human life and society by the exponentially-increasing intelligence of a "seed AI" (the "ultraintelligent machine" mentioned by Good), have pointed out repeatedly that it is naive to consider intelligence as ending at "Humanville Station". If you were to plot intelligence on a line (an oversimplification, but a useful one), the gap between a recognized genius (Einstein, Newton, von Neumann) and your local supermarket cashier is a fraction of the gap between your average human and your average dog. It is comforting to believe that human intelligence is the final word on cognitive complexity, but there is no reason to believe this beyond absence of evidence -- and plenty of reasons to believe that humans aren't nearly as intelligent as we believe we are.

** Consciousness and Free Will

Perhaps you believe you have found the silver bullet against artificial intelligence: the increasingly nebulous concept of "consciousness". Consciousness, as defined by Wikipedia, no doubt exists:

#+begin_quote
Consciousness is the state or quality of awareness or of being aware of an external object or something within oneself. It has been defined variously in terms of sentience, awareness, qualia, subjectivity, the ability to experience or to feel, wakefulness, having a sense of selfhood or soul, the fact that there is something "that it is like" to "have" or "be" it, and the executive control system of the mind.
#+end_quote

Most humans, regardless of the difficulty in defining the term, will agree that they are conscious, and that other humans are also conscious. Those who do not agree are simply pedants, arguing an outdated concept of consciousness and dragging in equally outdated concepts of free will.

I am a compatibilist, which I define simply as the view that you are not free to make "real" choices -- your life, like every other physical object, is strapped to an already-existing 4-D spacetime shape that includes the future and is decided entirely by physical phenomena. This kind of perspective has been likened to religious answers to questions such as "if God knows you're going to do X, how can he punish you for it?": the fact that your choice was pre-determined doesn't mean you didn't make it. To a compatibilist, the question of whether an action was taken freely is an ethical or legal question, not a metaphysical one. Your decision-making process, the output of the most complex machine we know of, may feel phenomenologically like a truly "free" choice, but the brain is no more an exception to the mechanical nature of the universe than a falling bowling ball.

Accordingly, I don't include any conception of free will in my idea of consciousness. You may be a believer in true free will or a strict determinist (both under the umbrella "incompatibilist"). I think both views are naive. For the first, the idea that a human spirit, immune to the laws of causality, is piloting your body is an antiquated one -- the mind-body duality problem is not a serious one, in my opinion (Mind/Brain Identity). For the second, the philosophical concept of free will underpins any serious legal system (mens rea, conspiracy, duress, etc) and any serious ethical system (moral responsibility for one's actions). There need not be an underlying metaphysical reality to a practical definition of free will.

Daniel Dennett, philosopher of science, mind, and biology, has this to say in his 1984 book "Elbow Room", in a chapter titled "Why Do We Want Free Will?":

#+begin_quote
The distinction between responsible moral agents and beings with diminished or no responsibility is coherent, real, and important. It is coherent, even if in many instances it is hard to apply; it draws an empirically real line, in that we don't all fall on one side; and, most important, the distinction matters: the use we make of it plays a crucial role in the quality and meaning of our lives. [...] We want to hold ourselves and others responsible, but we recognize that our intuitions often support the judgement that a particular individual has "diminished responsibility" because of his or her infirmities, or because of particularly dire circumstances upon upbringing or at the time of action. We also find it plausible to judge that nonhuman animals, infants, and those who are severely handicapped mentally are not responsible at all. But since we are all more or less imperfect, will there be anyone left to be responsible after we have excused all those with good excuses? [...] We must set up some efficiently determinable threshold for legal competence, never for a moment supposing that there couldn't be intuitively persuasive "counterexamples" to whatever line we draw, but declaring in advance that such pleas will not be entertained. [...] The effect of such an institution [...] is to create [...] a class of legally culpable agents whose subsequent liability to punishment maintains the credibility of the sanctions of the laws. The institution, if it is to maintain itself, must provide for the fine tuning of its arbitrary thresholds as new information (or misinformation) emerges that might undercut its credibility. One can speculate that there is an optimal setting of the competence threshold (for any particular combination of social circumstances, degree of public sophistication, and so on) that maximizes the bracing effect of the law. A higher than optimal threshold would encourage a sort of malingering on the part of the defendants, which, if recognized by the populace, would diminish their respect for the law and hence diminish its deterrent effect. And a lower than optimal threshold would yield a diminishing return of deterrence and lead to the punishment of individuals who, in the eyes of society, "really couldn't help it." The public perception of the fairness of the law is a critical factor in its effectiveness.
#+end_quote

With the "free will" red herring out of the way, we may ask: how can a machine be conscious? Easily: the same way we are, or in some roughly analogous way. Dennett defines consciousness in humans in his book "Consciousness Explained" in this way:

#+begin_quote
In a Thumbnail Sketch here is [the Multiple Drafts theory of consciousness] so far:

There is no single, definitive "stream of consciousness," because there is no central Headquarters, no Cartesian Theatre where "it all comes together" for the perusal of a Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of "narrative" play short-lived roles in the modulation of current activity but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain. The seriality of this machine (its "von Neumannesque" character) is not a "hard-wired" design feature, but rather the upshot of a succession of coalitions of these specialists.

The basic specialists are part of our animal heritage. They were not developed to perform peculiarly human actions, such as reading and writing, but ducking, predator-avoiding, face-recognizing, grasping, throwing, berry-picking, and other essential tasks. They are often opportunistically enlisted in new roles, for which their talents may more or less suit them. The result is not bedlam only because the trends that are imposed on all this activity are themselves part of the design. Some of this design is innate, and is shared with other animals. But it is augmented, and sometimes even overwhelmed in importance, by microhabits of thought that are developed in the individual, partly idiosyncratic results of self-exploration and partly the predesigned gifts of culture. Thousands of memes, mostly borne by language, but also by wordless "images" and other data structures, take up residence in an individual brain, shaping its tendencies and thereby turning it into a mind.
#+end_quote

Dennett claims that a conscious mind is "merely" (a word selected for its value in understatement) a collection of highly complex agents, modular pieces of brain matter, which evolved in tandem to solve problems. Some central executive function (or a distribution of central executive functions) determines how to interpret the output of the many, many different modules clamoring for "our" attention.

This is remarkably similar to the idea of a society of mind put forward by Marvin Minsky, the premier AI theorist and co-founder of the MIT AI Lab. Wikipedia explains it well:

A core tenet of Minsky's philosophy is that "minds are what

brains do". The society of mind theory views the human mind and

any other naturally evolved cognitive systems as a vast society

of individually simple processes known as agents. These processes

are the fundamental thinking entities from which minds are built,

and together produce the many abilities we attribute to

minds. The great power in viewing a mind as a society of agents,

as opposed to the consequence of some basic principle or some

simple formal system, is that different agents can be based on

different types of processes with different purposes, ways of

representing knowledge, and methods for producing results.

This idea is perhaps best summarized by the following quote:

> What magical trick makes us intelligent? The trick is that

there is no trick. The power of intelligence stems from our vast

diversity, not from any single, perfect principle. —Marvin

Minsky, The Society of Mind, p. 308

An artificial intelligence could (and, I think, must) be constructed along similar lines, as a collection of different modules (neural networks?) managed by a central executive function. This is a suitable starting point for a thought experiment about artificial intelligence.
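To make the thought experiment a bit more concrete, here is a minimal sketch in Python of that kind of architecture. Everything in it -- the module names, the salience scores, the winner-take-all arbitration -- is my own illustration of the general idea, not Dennett's or Minsky's actual models.

#+begin_src python
import random

class Module:
    """A specialist agent: inspects the current percept and proposes an action."""
    def __init__(self, name, trigger, action, base_salience):
        self.name = name
        self.trigger = trigger              # predicate over the current percept
        self.action = action                # what this module wants the body to do
        self.base_salience = base_salience  # how loudly it demands attention

    def propose(self, percept):
        # Return (salience, action) if this module has something to say, else None.
        if self.trigger(percept):
            return (self.base_salience + random.random() * 0.1, self.action)
        return None

class Executive:
    """A crude 'central executive': collects proposals and lets the loudest one win."""
    def __init__(self, modules):
        self.modules = modules

    def step(self, percept):
        proposals = [p for m in self.modules if (p := m.propose(percept))]
        if not proposals:
            return "idle"
        _, action = max(proposals)          # winner-take-all arbitration
        return action

# Hypothetical specialists, loosely in the spirit of Dennett's "ducking,
# predator-avoiding, face-recognizing" examples; entirely illustrative.
mind = Executive([
    Module("predator-avoider", lambda p: "lion" in p,    "run",          1.0),
    Module("face-recognizer",  lambda p: "friend" in p,  "wave",         0.5),
    Module("berry-picker",     lambda p: "berries" in p, "pick berries", 0.3),
])

print(mind.step({"berries", "friend"}))  # -> "wave" (the most salient proposal wins)
print(mind.step({"lion", "berries"}))    # -> "run"
#+end_src

Nothing in this loop is the mind; the "mind", such as it is, is just whatever the whole society of modules ends up doing.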

An Aside: Intelligence and Consciousness

It is easy to believe that intelligence requires consciousness. Peter Watts, an award-winning writer of speculative fiction, argues against this in one of the central themes of Blindsight. A collection of quotations from the novel is below. If you want context, you should just read the whole thing, available under a Creative Commons license here.

(I will be proceeding in the following parts with the assumption that conscious processes -- self-awareness, self-modification, modification of goal-orientation, etc -- are necessary for an artificial intelligence. This is not because I believe Watts is incorrect, but because I cannot conceive of an unconscious intelligence in anything but a Lovecraftian manner.)

Imagine you have intellect but no insight, agendas but no

awareness. Your circuitry hums with strategies for survival and

persistence, flexible, intelligent, even technological—but no

other circuitry monitors it. You can think of anything, yet are

conscious of nothing. You can’t imagine such a being, can you? The

term being doesn’t even seem to apply, in some fundamental way you

can’t quite put your finger on.

Evolution has no foresight. Complex machinery develops its own

agendas. Brains—cheat. Feedback loops evolve to promote stable

heartbeats and then stumble upon the temptation of rhythm and

music. The rush evoked by fractal imagery, the algorithms used for

habitat selection, metastasize into art. Thrills that once had to

be earned in increments of fitness can now be had from pointless

introspection. Aesthetics rise unbidden from a trillion dopamine

receptors, and the system moves beyond modeling the organism. It

begins to model the very process of modeling. It consumes

ever-more computational resources, bogs itself down with endless

recursion and irrelevant simulations. Like the parasitic DNA that

accretes in every natural genome, it persists and proliferates and

produces nothing but itself. Metaprocesses bloom like cancer, and

awaken, and call themselves I.

The system weakens, slows. It takes so much longer now to

perceive—to assess the input, mull it over, decide in the manner

of cognitive beings. But when the flash flood crosses your path,

when the lion leaps at you from the grasses, advanced

self-awareness is an unaffordable indulgence. The brain stem does

its best. It sees the danger, hijacks the body, reacts a hundred

times faster than that fat old man sitting in the CEO's office

upstairs; but every generation it gets harder to work around this—

this creaking neurological bureaucracy.

I wastes energy and processing power, self-obsesses to the point

of psychosis. Scramblers have no need of it, scramblers are more

parsimonious. With simpler biochemistries, with smaller

brains—deprived of tools, of their ship, even of parts of their

own metabolism—they think rings around you. They hide their

language in plain sight, even when you know what they're

saying. They turn your own cognition against itself. They travel

between the stars. This is what intelligence can do, unhampered by

self-awareness.

I is not the working mind, you see. For Amanda Bates to say "I

do not exist" would be nonsense; but when the processes beneath

say the same thing, they are merely reporting that the parasites

have died. They are only saying that they are free.

You invest so much in it, don't you? It's what elevates you above

the beasts of the field, it's what makes you special. Homo

sapiens, you call yourself. Wise Man. Do you even know what it is,

this consciousness you cite in your own exaltation? Do you even

know what it's for?

Maybe you think it gives you free will. Maybe you've forgotten

that sleepwalkers converse, drive vehicles, commit crimes and

clean up afterwards, unconscious the whole time. Maybe nobody's

told you that even waking souls are only slaves in denial.

Make a conscious choice. Decide to move your index finger. Too

late! The electricity's already halfway down your arm. Your body

began to act a full half-second before your conscious self 'chose'

to, for the self chose nothing; something else set your body in

motion, sent an executive summary—almost an afterthought— to the

homunculus behind your eyes. That little man, that arrogant

subroutine that thinks of itself as the person, mistakes

correlation for causality: it reads the summary and it sees the

hand move, and it thinks that one drove the other.

But it's not in charge. You're not in charge. If free will even

exists, it doesn't share living space with the likes of you.

Insight, then. Wisdom. The quest for knowledge, the derivation of

theorems, science and technology and all those exclusively human

pursuits that must surely rest on a conscious foundation. Maybe

that's what sentience would be for— if scientific breakthroughs

didn't spring fully-formed from the subconscious mind, manifest

themselves in dreams, as full-blown insights after a deep night's

sleep. It's the most basic rule of the stymied researcher: stop

thinking about the problem. Do something else. It will come to you

if you just stop being conscious of it.

Every concert pianist knows that the surest way to ruin a

performance is to be aware of what the fingers are doing. Every

dancer and acrobat knows enough to let the mind go, let the body

run itself. Every driver of any manual vehicle arrives at

destinations with no recollection of the stops and turns and roads

traveled in getting there. You are all sleepwalkers, whether

climbing creative peaks or slogging through some mundane routine

for the thousandth time. You are all sleepwalkers.

Don't even try to talk about the learning curve. Don't bother

citing the months of deliberate practice that precede the

unconscious performance, or the years of study and experiment

leading up to the gift-wrapped Eureka moment. So what if your

lessons are all learned consciously? Do you think that proves

there's no other way? Heuristic software's been learning from

experience for over a hundred years. Machines master chess, cars

learn to drive themselves, statistical programs face problems and

design the experiments to solve them and you think that the only

path to learning leads through sentience? You're Stone-age nomads,

eking out some marginal existence on the veldt—denying even the

possibility of agriculture, because hunting and gathering was good

enough for your parents.

Do you want to know what consciousness is for? Do you want to know

the only real purpose it serves? Training wheels. You can't see

both aspects of the Necker Cube at once, so it lets you focus on

one and dismiss the other. That's a pretty half-assed way to parse

reality. You're always better off looking at more than one side of

anything. Go on, try. Defocus. It's the next logical step.

Oh, but you can't. There's something in the way.

And it's fighting back.

Next

Part 2 will be an exploration of under what conditions we could consider a simulated agent to be "alive", using the game Dwarf Fortress as a crude starting point.

New Year: 2019 @personal

:PROPERTIES: :EXPORT_FILE_NAME: new-year-2019 :EXPORT_DATE: 2019-01-01 :END:

2018 was an interesting year for me. I made a lot of progress on technical infrastructure and in my job, kept up with old friends, and worked on independent life skills (like most hacker-types, I'm just functional enough to survive on my own). Yet I find myself dissatisfied, so at the turn of the new year I've decided to change some things.

  1. Eating healthier is mandatory. No more bullshit food: if I want to eat pastries that I don't need, I should at least learn to make them myself.
  2. Get a reliable sleep schedule. My work doesn't require that I wake up until 1-2 PM, so I've gotten pretty bad about going to bed at a reasonable hour. Humans are adapted to a certain cycle, and there are psychological effects from breaking this.
  3. I will update this site at least twice a week. There are plenty of books and games in my review backlog (since I've been ignoring the site for the last three months).

Happy New Year, everyone.

slphil

Book: "Rubicon" by Tom Holland book history @review

:PROPERTIES: :EXPORT_FILE_NAME: book-rubicon :EXPORT_DATE: 2018-12-31 :END:

[This article was originally posted on 2018-09-21.]

Amazon link to the book. I read the Kindle version.

The fall of the Roman Republic is probably the most-discussed historical era before WW2. It is documented by many primary sources, and many later Roman historians themselves wrote about the events of those years. The Romans were very aware of their own history, and when their cultural norms and legal structures began to break down, many of them noticed.

Of course, the explanations given by those Romans were not always sound. The Romans were a superstitious people who lacked the modern methods of historical analysis. Additionally, they had a habit of combining the historical record with propaganda. Because of this, their observations can feel fragmented and unbelievable. It is the task of a historian to wade through the sources available, combined with scientific or archaeological data, and determine what really happened. This is no easy feat, especially since the Romans lived in a setting of intense emotion, propaganda, and factionalism.

An equally difficult task falls to the writer of popular history.

While the academic historian has to obey certain rules for valid research in his subfield, the popular historian is bound by a single, extremely difficult rule: be interesting. This can be tough.

Tom Holland does not disappoint. Rubicon: The Last Years of the Roman Republic is a masterpiece of popular history.

Rubicon

The experience of reading a book is a separate thing entirely from the factual content it contains. There is little of value to be said about the content of Rubicon, since its subject matter has been covered by writers at least thousands of times over as many years.

Rubicon focuses on the waning years of the Roman Republic, starting with the wars against Carthage and ending with the glorious reign of Rome's first and best Emperor; where Holland makes his mark is not in the re-telling of ancient stories but in the weaving together of the many threads into a coherent narrative.

Holland consistently leaves the reader swimming in the deep waters of Roman history and culture, eschewing cheap and easy comparisons to modern events except in a telling quote from the introduction, taken from Machiavelli's own analysis of Livy, to remind us why the study of history is always relevant:

Prudent men are wont to say -- and this not rashly or without good

ground -- that he who would foresee what has to be should reflect

on what has been, for everything that happens in the world at any

time has a genuine resemblance to what happened in ancient times.

The exact comparisons meant to be drawn from this turbulent political collapse are left as an exercise to the reader. (For someone who wants explicit comparisons to modern politics, Mike Duncan, creator of the amazing The History of Rome podcast, provides them in his new book The Storm Before the Storm.)

Caesar

I would prefer not to simply summarize the book. It is great, and you should read it. Instead, I want to focus a bit on its most important character: the man who crossed the Rubicon in the famous event referenced in the title.

Holland finishes the preface to the book with two quotes from famous Romans: Julius Caesar and the historian Sallust.

"Human nature is universally imbued with a desire for liberty, and

a hatred for servitude."

Caesar, Gallic Wars

"Only a few prefer liberty -- the majority seek nothing more than

fair masters."

Sallust, Histories

(Sallust wrote many great Roman histories. An incredible translation of two of his works by Quintus Curtius can be purchased on Amazon here: Sallust: The Conspiracy Of Catiline And The War Of Jugurtha.)

There is more than a hint of irony to these quotes, as the reader will find with a close analysis of the book. Caesar brought servitude to many, many thousands of conquered Gauls (and others), and their repeated rebellions against him are precisely what he means by a "desire for liberty". Caesar himself would bring about the destruction of the Republic that many of his fellow elite Romans held so dear, but we should be careful here: the Republic was already rotten to the core, the virtuous Roman farmer having been replaced by slave labor on large estates, and Caesar can just as well be said to have saved Rome as to have destroyed its republic.

Sallust, who was an influential follower of Caesar and a very good historian, captured in a single line the reason why the Republic fell. All of the elite's rhetoric amounted to nothing against the poverty and indignation of the Roman people, who were more than happy to give up their "liberty" in exchange for their lives. Caesar, born to an elite though not especially wealthy family, was much loved by the plebeians -- a fact that made him the sworn enemy of other patricians. The great Cicero, who considered Caesar to be a grave threat to his beloved Republic, wrote this in a speech railing against one of Caesar's two "successors", Mark Antony:

"In that man were combined genius, method, memory, literature,

prudence, deliberation, and industry. He had performed exploits in

war which, though calamitous for the republic, were nevertheless

mighty deeds. Having for many years aimed at being a king, he had

with great labor, and much personal danger, accomplished what he

intended. He had conciliated the ignorant multitude by presents,

by monuments, by largesses of food, and by banquets; he had bound

his own party to him by rewards, his adversaries by the

appearances of clemency. Why need I say much on such a subject? He

had already brought a free city, partly by fear, partly by

patience, into a habit of slavery. With him I can, indeed, compare

you [Mark Antony] as to your desire to reign; but in all other

respects you are in no degree to be compared to him."

Cicero, although a brilliant man and perhaps history's most influential writer, judged Caesar incorrectly because of his own attachments and blindness to the failures of Rome.

For my own feelings on Caesar, I'll finish by deferring to Alexander Hamilton, one of the core architects of the United States of America.

The greatest man that ever lived was Julius Caesar.

Antitrust in the Modern Age culture big_tech politics @analysis

:PROPERTIES: :EXPORT_FILE_NAME: antitrust-in-the-modern-age :EXPORT_DATE: 2018-12-31 :END:

[This article was originally posted on 2018-09-08.]

Reply to this article on the Mastodon post.

Sources

Yesterday, September 7th, this article was posted in the New York Times, written by David Streitfeld. While reading it, I was reminded of the arguments put forth in another article in Esquire, written by Scott Galloway, about why Big Tech needs to be dismantled.

The first article is focused on Amazon in particular, but the second article (from February) focuses on the argument for dismantling what Galloway calls "the Four": Amazon, Apple, Facebook, and Google.

Other sources for quotes: On why conservatives support Trump and the Wikipedia page for "echo chamber".

Analysis

On the Scope of the Problem

I am not inherently opposed to big business. Some problems require large amounts of capital and bureaucracy to solve: the negative aspects of big business are just a part of living in a modern industrial society. Enormous corporations are capable of cutting the cost of goods and services down to very low prices, which is why the cost of luxuries in the developed world is lower than it has ever been. (The other concerns, like the cost of necessities and certain other things like college tuition, are subject to different forces.) One could not even dream of having things like smartphones or high-class servers without enormous companies like Intel -- the capital investment required to create modern processors is in the tens or hundreds of millions. Small companies could not manage the kinds of supply chains involved: the savings come from the economies of scale involved. Even the popular socialist magazine Jacobin wrote about how only big businesses can provide modern labor protections. Opposition to big business is delusional.

However, we have a bigger problem on our hands with the Four. While other corporations in fields like manufacturing do most of their work behind the scenes, the Four play a constant role in our lives. We purchase an increasing amount of our goods on Amazon, communicate using smartphones running operating systems controlled by Apple and Google, search for information on Google (completely giving up our privacy in the process), and spend our whole social lives on Facebook. This is an incomplete list: all of these companies grow extremely quickly and constantly add new services to their portfolio as part of their competition. These corporations exercise direct control over the way we interact with and utilize information. Buying something new? Check the Amazon reviews. Looking for a place to eat? Check Google or Facebook. Want to do something new with your phone? There's an app for that.

So exactly how big are these corporations? Enormous. From the Esquire article (which is already outdated, having been written in Feb 2018):

Over the past decade, Amazon, Apple, Facebook, and Google—or, as

I call them, “the Four”—have aggregated more economic value and

influence than nearly any other commercial entity in

history. Together, they have a market capitalization of $2.8

trillion (the GDP of France), a staggering 24 percent share of

the S&P 500 Top 50, close to the value of every stock traded on

the Nasdaq in 2001.

How big are they? Consider that Amazon, with a market cap of $591

billion, is worth more to the stock market than Walmart, Costco,

T. J. Maxx, Target, Ross, Best Buy, Ulta, Kohl’s, Nordstrom,

Macy’s, Bed Bath & Beyond, Saks/Lord & Taylor, Dillard’s,

JCPenney, and Sears combined.

Meanwhile, Facebook and Google (now known as Alphabet) are

together worth $1.3 trillion. You could merge the world’s top

five advertising agencies (WPP, Omnicom, Publicis, IPG, and

Dentsu) with five major media companies (Disney, Time Warner,

21st Century Fox, CBS, and Viacom) and still need to add five

major communications companies (AT&T, Verizon, Comcast, Charter,

and Dish) to get only 90 percent of what Google and Facebook are

worth together.

And what of Apple? With a market cap of nearly $900 billion,

Apple is the most valuable public company. Even more remarkable

is that the company registers profit margins of 32 percent,

closer to luxury brands Hermès (35 percent) and Ferrari (29

percent) than peers in electronics. In 2016, Apple brought in $46

billion in profits, a haul larger than that of any other American

company, including JPMorgan Chase, Johnson & Johnson, and Wells

Fargo. What’s more, Apple’s profits were greater than the

revenues of either Coca- Cola or Facebook. This quarter, it will

clock nearly twice the profits that Amazon has produced in its

history.

It's no surprise that Galloway would go on to say on September 5th that "... big tech has effectively become more powerful than the Senate" (although he's not only talking about their market value here).

This is only scratching the surface of the problem. BBC News found that children spend six hours or more a day in front of screens, double the amount of time spent in front of screens in 1995, when I was a young boy. (As an aside: it is true that many nerds spend longer than this in front of a screen every day, and I frequently double this. This lifestyle, which I have maintained since childhood, has deep costs -- and the fact that I personally spend too much time in front of a screen is exactly why I think I'm credible when I say that it fucks you up. Take that tablet away from your kid.) This isn't only an issue for children: social media applications routinely take up almost an hour of their users' day on average, and many people have several of these applications installed.

So why are the Four more dangerous than other massive corporations? It's not only because they are staggeringly wealthy, but because they make up a core part of the lives of people in the modern world. To hand this much social control to corporations out of some misguided capitalist fundamentalism is to completely miss the point on why capitalism is a good thing.

The Destruction of the Internet

What many of the normie-friendly newspaper articles fail to recognize, however, is the social damage caused by these monolithic corporations. As an Internet native, I've watched nearly every other community wither and die. People used to have to actually engage with people they considered friends, not just scroll through their wedding photos and leave a heart react and a "Congratulations!" This kind of faux social connectivity is a major contributing factor to the loneliness epidemic which is worse than it has ever been.

Thriving, powerful Internet subcultures have been laid low by the network effects and magnetism of the major social media giants. Many of them try to migrate to these new platforms to remain relevant, but they soon find themselves corrupted and destroyed by their new medium.

Businesses find themselves neglecting their website in favor of their Facebook and Twitter accounts, creating a series of annoying walled gardens and contributing to the collection of even more data to be bought and sold by advertisers whose only goal is to get you to spend more money on shit you don't need.

Websites, although they're worse than they've ever been because of the proliferation of webshit written by garbage "programmers" who wouldn't be fit to clean Dijkstra's shoes, are going the way of the dodo too. The Web, a series of hyperlinked documents in a large scheme of URLs etc, is being replaced by custom-written applications whose only job is to keep you scrolling endlessly like an idiot while some investors in Silicon Valley mine your eyeballs for pennies. These applications have much more power than any website could have through mere JavaScript -- which is the entire point. Any website that says "hey, we're glad you're here, have you seen our app?" is just politely asking you to be its bitch.

The Destruction of the Retail Economy

Amazon is one of the biggest companies in the world, with a market cap of $952 billion as of September 8. It got this way by ruthlessly optimizing every single step of the retail chain. It's difficult for brick-and-mortar stores to compete with a company that can deliver anything I want to my house within days (and potentially within hours once the drones start delivering Prime packages).

Anything you want to buy can be found on Amazon. If you shell out the money for Amazon Prime, you can get free two-day shipping too! And now that you've already paid for Prime ($99/year), you might as well get your money's worth, right? And now your apartment is full of shit you don't need and your savings are gone.

This kind of power has been good for consumers who just want to spend dollars to purchase things, but it's had a devastating effect on the retail industry. Many of these retail stores were crucial job providers in rural America, contributing to the second wave of mass unemployment in rural areas. (The first wave: the shifts in American industry that left many factory workers out of a job.)

People have spent decades talking about how Walmart has gutted economies across the country. Buckle up: Amazon is only going to get richer and more powerful. Walmart was just a preview.

The Destruction of Your Society

Algorithmic content distribution, self-selection effects on social media following, and the Greater Internet Fuckwad Theory have conspired to completely destroy the social structure. Think Trump's election was bad? Like I said above: buckle up. This is only going to get worse.

Americans fucking hate each other. If you were surprised by Trump's election, you weren't paying attention. (I had predicted his victory by mid-2015, defending my prediction against the heckling of my friends and family. I even had the good graces not to rub it in, but if you know me and you're reading this: I told you so.) Doing something "to own the libs" isn't just a meme: few things are as satisfying as watching compilations of liberals and leftists melting down on Nov 9, 2016. Poor conservatives do not care if Trump's policies would benefit them, just like they haven't cared if Republican policies will benefit them. Economics is beside the point. This is a culture war. From the previous article:

First of all, the bulk of Trump’s supporters have nowhere else to

go, nor do they want to go anywhere. They experience themselves

as living in a different world from liberals and Democrats.

Their animosity toward the left, and the left’s animosity toward

them, is entrenched.

Trump’s basic approach — speaking the unspeakable — is

expressive, not substantive. His inflammatory, aggressive

language captures and channels the grievances of red America, but

the specific grievances often feel less important than the

primordial, mocking incivility with which they are expressed. In

this way, Trump does not necessarily need to deliver concrete

goods because he is saying with electric intensity what his

supporters have long wanted to say themselves.

“President Trump reminds distrustful citizens of liberal

institutions’ disinterest in, and disrespect for, challenges in

their own lives,” Arthur Lupia, a political scientist at the

University of Michigan, wrote in response to my inquiry about

Trump’s appeal.

Meanwhile, the entire political platform of the Democratic Party nowadays boils down to "Fuck Trump", which leads to incoherent party politics, DSA shitfights, and the eternally-recurring ghost of Hillary Clinton, the worst Presidential candidate I've ever seen.

There is a single cause for all of this drama: everyone lives in a digital echo chamber. From Wikipedia:

In news media, echo chamber is a metaphorical description of a

situation in which beliefs are amplified or reinforced by

communication and repetition inside a closed system. By visiting

an "echo chamber", people are able to seek out information which

reinforces their existing views, potentially as an unconscious

exercise of confirmation bias. This may increase political and

social polarization and extremism. The term is a metaphor based

on the acoustic echo chamber, where sounds reverberate in a

hollow enclosure.

Another emerging term for this echoing and homogenizing effect on

the Internet within social communities is cultural tribalism.

Exploitation of this echo chamber has been weaponized by the subculture that punches a hundred leagues above its weight class: the surge of underemployed, bored, technologically-literate 20-something white Americans known as the "alt-right", a term that refers to many more people than simply white nationalists and Internet Nazis. In this kind of climate, the rise of a decentralized mass that capitalizes on outrage culture is simply inevitable. The alt-right as a political project may be dead on arrival (for the same reasons as anarchists: absolutely no agreement on any point of policy or philosophy, merely a shared disdain for aspects of existing society), but as a cultural movement it is an unpaid legion of shitposters with no rival.

Their cultural rivals, the "SJWs" (a movement which is literally just Tumblr grown up), are the political opposite but serve much of the same purpose. I do not consider them as efficient in their tactics as the alt-right because they already influence our country's major institutions (colleges and corporations) and therefore must dedicate their whole lives to this project, while the bored legions of 4chan/pol/ sit in their basements and coordinate a military strike on rebel groups in the Middle East and fuck with washed-up celebrities.

Accordingly, both movements create their own echo chambers that fuel their further development and mobilization. This is not going to stop anytime soon, although the fiasco of the Unite the Right rally shattered any ideas of the alt-right coming out of the shadows -- the war will remain online.

You may think this is just Internet drama that has no effect on the real world. If so: were you asleep in 2016?

The New Face of Antitrust

These companies must be destroyed, but clearly our current interpretation of antitrust laws isn't good enough to get the job done. Fortunately, we have a superhero: Lina Khan. (See the first article source above.)

Ms. Khan has become something of a prodigy in antitrust law. From the article:

In early 2017, when she was an unknown law student, Ms. Khan

published “Amazon’s Antitrust Paradox” in the Yale Law

Journal. Her argument went against a consensus in antitrust

circles that dates back to the 1970s — the moment when regulation

was redefined to focus on consumer welfare, which is to say

price. Since Amazon is renowned for its cut-rate deals, it would

seem safe from federal intervention.

Ms. Khan disagreed. Over 93 heavily footnoted pages, she

presented the case that the company should not get a pass on

anticompetitive behavior just because it makes customers

happy. Once-robust monopoly laws have been marginalized, Ms. Khan

wrote, and consequently Amazon is amassing structural power that

lets it exert increasing control over many parts of the economy.

Amazon has so much data on so many customers, it is so willing to

forgo profits, it is so aggressive and has so many advantages

from its shipping and warehouse infrastructure that it exerts an

influence much broader than its market share. It resembles the

all-powerful railroads of the Progressive Era, Ms. Khan wrote:

“The thousands of retailers and independent businesses that must

ride Amazon’s rails to reach market are increasingly dependent on

their biggest competitor.”

The paper got 146,255 hits, a runaway best-seller in the world of

legal treatises. That popularity has rocked the antitrust

establishment, and is making an unlikely celebrity of Ms. Khan in

the corridors of Washington.

Her hard work has made it possible to reframe the antitrust debate in a way that isn't just naive capitalist fundamentalism but actually takes into account the growing non-financial power of these same companies. It isn't just a matter of how rich the Four are. The real concern is how powerful they are.

“Ideas and assumptions that it was heretical to question are now

openly being contested,” [Khan] said. “We’re finally beginning to

examine how antitrust laws, which were rooted in deep suspicion

of concentrated private power, now often promote it.”

The hearings start next week, on September 13. We can only hope for the best.

Internet Cultures, part 2 culture politics @analysis

:PROPERTIES: :EXPORT_FILE_NAME: internet-cultures-2 :EXPORT_DATE: 2018-12-31 :END:

[This article was originally posted on 2018-07-12.]

Normie Cultures

Note: It was debatable whether or not Reddit should have been included in this post. While Reddit has never been in the same boat as Facebook or Twitter, in the last few years traffic on Reddit has increased significantly, to the point where even my mother and my friends' moms know about Reddit memes. In the end, I decided that Reddit's structure and underlying culture are "genuine", and not the result of an invasion by outsiders. Still, Reddit is frequently mocked in certain places for this character.

In an era where left-leaning political activists obsess over concepts like colonization, gentrification, etc, it's worth pointing out (a bit tongue-in-cheek) that the largest colonization in human history is the colonization of cyberspace by normies.

The early Internet's culture was not accessible to the general public. Computers were extremely expensive. The 1991 Macintosh PowerBook sold for $2,299, which is over $4,000 in today's money. These machines were also much slower, with fewer features, running on unsophisticated networks with messy standards and poor stability.

People who used the Internet tended to be experts, academics, and hobbyists with a good amount of disposable income. These people came together on IRC, spoke over email, used simple websites, and formed tight-knit communities. (I have left out Usenet, as I was not a Usenet person, but Usenet and BBS systems were the original birth of Internet culture.)

Then came the Eternal September, and the Internet would never recover.

America Online

I'm in my mid-20s, so the so-called "Golden Era" of computer nerdery was just before my time. My first interactions with the Internet came from using America Online over a dial-up connection, like many other Americans. There's nothing wrong with this, per se, but the lowering of the bar has consequences for Internet culture.

AOL made using the Internet easy. Anybody could pay a decent fee and get dial-up through AOL, with email service, pre-curated chatrooms, instant messaging, keyword search (great for casual users), etc. AOL was a highly effective walled garden -- although not completely walled -- and extremely profitable. AOL's user base was huge, and the profits from the company were invested into marketing schemes like sending out huge numbers of free trial CDs with a certain number of dial-up hours offered for free. Many Americans remember playing with these discs, since they were everywhere and completely useless for people who were already customers. I used to shatter the discs into tiny pieces, play with the thin foil between the layers, and make shapes out of the shards of plastic -- that's how pervasive AOL was in those days.

At one point, America Online free trial discs made up more than half of all the CDs made in the world. At another point, half of all dial-up customers in the United States were on AOL. This company was huge.

And a lot of people blame this for the Internet going to shit. That's not to say it couldn't have happened some other way (the complaint is inherently just elitism, not wanting the unwashed masses on our special nerd platform), but in our current timeline, AOL is at fault. The cultural norms that propagated when normal people started to use the Internet became widespread, especially when AOL went from charging an hourly rate to charging a flat $19.95 a month for an account (which had multiple usernames -- each member of the family had their own email address, chat username, etc). Poor grammar, misspellings, cyber hookup culture, etc festered on AOL, although they were not unique to AOL even at this time.

AOL provided curated keywords which, in the era before people had gotten used to the Internet, were much easier to remember than a URL. The chat rooms, while not technically superior to IRC, were arranged in easily-used menus and full of people from all walks of life. AOL had a booming text roleplay scene in the early days; while MUDs and other nerd things had already existed, it is probably on AOL's chatrooms that the \*asterisk actions\* became a thing. Instant messaging was a big deal as well. Lists of friends in groups, seeing when people are online, status messages, etc. Combine this with email ("You've Got Mail!") and AOL was the most effective entryway for normies ever designed.

It's worth remembering though that AOL was not a minor feat, technologically or in business. They deserve our respect for making a large impact on Internet adoption across the country.

MySpace

Most of my generation remembers MySpace as the first real social network. Before MySpace, I had a friends list on AIM, but that isn't a social network. I remember using MySpace to keep track of people I met at local shows starting in middle school.

At this point, MySpace is a meme. A dead craze lost to the dustbin of history. But MySpace's effect on web culture was not insubstantial. The tacky, stupid page skins from people copying random pieces of HTML/CSS from elsewhere on the Internet, the autostarting music that you could never find in time, the interpersonal drama caused by Top 8 choices of popular people -- all important. The note culture was strong (and even persisted at Facebook for a while before dying) and functioned as a sort of half-LiveJournal, half-public chain letter platform. Photos and comments, long message chains. MySpace mattered.

MySpace is where many of my early friendships took root, especially in the post-AOL era. Going to a local concert would inevitably end in exchanging MySpace names. Private MySpace groups could function as places to plan get-togethers, discuss local topics, do political activism, or just talk shit.

Too bad the platform was mostly annoying attention-seeking teenagers. But we were free.

Facebook

And then Facebook dethroned MySpace.

Fuck Facebook and fuck Zuck. This platform is a piece of shit and its dominance has only harmed our society. Maybe in the future I'll write a long article on why I hate this company so much.

The tl;dr of that article:

Real-name social networks stifle their inhabitants. People don't want to say things that will offend the people they love, even if they believe them. Nobody wants to post something which will cause a mob of offended Internet crybabies to bomb their office with phone calls and emails until they get fired.

People create false expectations of life. The kind of pre-curated bullshit content that gets posted to Facebook makes everyone depressed. People post everything good that happens to them, their meals, their new house, their new family, their marriage. Scrolling through a Facebook feed for long enough will give you the impression that these people are the norm, that they are successful, and that you (yes, you) are uniquely a failure. It's bullshit. Everyone's life is a mess, but only the drama queens post it all over Facebook.

Advertising as a profit mechanism is cancer. If you're not paying for it, you are the product.

You only see what you want to see. Facebook's nature makes it an incredibly potent echo chamber. Most people have friends who believe what they believe, so the posts by their friends justify their own biases. (The few who are stifled, as above, only see an endless sea of the unenlightened -- not better.) Even worse, Facebook's algorithms ensure that it will only show you content that it thinks you will click on. Like hamsters running in wheels, for the profit of one of the world's richest companies.

It makes you stupid. "You don't realize it but you are being programmed." A former Facebook executive tried to raise these problems publicly, because some of the people involved in creating this monstrosity feel regret. It didn't work; the stories died after less than a month.

Twitter

Twitter forced us to think more concisely. 140 characters isn't a lot of space (the limit was doubled to 280 characters in late 2017). The scale that Twitter personalities can reach is also much larger than the scale on Facebook. This platform is ground zero for the culture war -- breaking news, up-to-date propaganda from the front lines, virtue signalling blue checkmarks posting about Blumpfth, Twitter has it all.

The problem is that conciseness isn't the lesson everyone learned. A lot of people are just on Twitter because the short post length gives them an excuse not to think. It's very difficult to have a serious conversation there because of this. The bio, the name, the reputation, the subcultural trappings -- these artifacts of cultural inertia make convincing someone on Twitter nearly impossible. Everyone on Twitter is a rhetorical warrior, driven by self-satisfaction and Likes.

At least Twitter is entertaining. People who want to show off to their followers (or the people they follow) are extremely easy to bait.

Of course, nowadays Twitter is also cracking down on the kinds of anarchic content creation that made it a giant in the first place. In the throes of the "Russian bot" scandals, the well-known secret that a lot of Twitter's traffic is automated has taken center stage. Liberals and leftists are hysterical about pro-Trump and conservative hashtags, those Trending stories are removed by direct admin intervention, etc. But it should be known that this has always been the case. Automated bot accounts pushing hashtags have always been a part of social movements using Twitter. Faceless trolls with thousands of followers are hugely influential on the far right and the far left both. The birth of what I call "blue checkmark journalism" has only made things worse -- click ANY Trump tweet and it's nothing but virtue signalling liberal Twitter users at the top. People literally wake up in the middle of the night to alarms that Trump has tweeted so that they can reply first.

What a shitshow. More normie colonization.

|          | Identity                      | Reputation                     | Meme         | Moderation                  |
|----------+-------------------------------+--------------------------------+--------------+-----------------------------|
| MySpace  | "Real name", unenforced       | Friends list, top 8            | Subcultures  | Report system               |
| Facebook | Real name, sometimes enforced | Image, friends list            | Viral, group | Algorithmic / report system |
| Twitter  | Pseudonymous OR verification  | Follower count, cultural clout | Viral        | Algorithmic / report system |

The Issue with Normie Culture

As I established in the previous post on this topic, much of the power of what I consider "genuine" Internet subcultures stems from some variant of anonymity. This allows people to vent online, to express views which are socially unacceptable, and to learn from this experience. Many socially-normal people have abnormal or even anti-social views on some topic or another, and by exploring these topics in argument with others (in either good or bad faith) these individuals can gain a more nuanced view of reality. Anonymity is not always put to the benefit of the social good -- the rampant bigotry on /pol/ is oft-referenced, but I've made it clear that such a simple analysis of /pol/ is useless.

More dangerously, when these communities fracture into isolated echo chambers, things get worse. The canonical examples are the radical leftist bent of Tumblr communities and the vicious right-wing nature of /pol/. But neither holds a candle to the effect of fracturing in the post-anonymity era.

This is the issue with normie culture.

Names.

With your reputation on the line, or even your job, personal friendships, or family relationships, people get entrenched in their opinions. They share clickbait content that exists only to rile up political or emotional sentiments in their friends. This clickbait serves not just to solidify their political inclinations and their hatred of their enemies, but also as free propaganda for the other side, which can show its own compatriots "look how fucking stupid our enemies are".

The flip side is the chilling effect of knowing that certain opinions will be unpopular in your community. Even worse, this drives people even further towards radicalized cyberspaces, and gives them an even deeper impression that they've discovered hidden knowledge. In some cases, this is true -- but in most, it breeds conspiracy nonsense and an unearned sense of superiority.

Tying your digital identity to your real-life identity is the worst idea anyone ever had, and governments keep discussing how to mandate it by law.

The normie colonization of the Internet is almost complete. And poor John Barlow is turning in his grave.

Internet Cultures, part 1 culture politics @analysis

:PROPERTIES: :EXPORT_FILE_NAME: internet-cultures-1 :EXPORT_DATE: 2018-12-31 :END:

[This article was originally posted on 2018-06-11.]

Introduction

Optimism and Naivety

The general-purpose computer was the most influential invention of the 20th century by a long shot. The world inhabited by people in the 21st century would be completely unrecognizable to those who came before. More important than the computer itself was the creation of the Internet, a vast and amorphous collection of servers and clients which would make up a new abstract space referred to as "cyberspace". Easily the most optimistic outlook by the early pioneers of cyberspace is "A Declaration of the Independence of Cyberspace", written in 1996 by John Perry Barlow. It opens with a very clear summary statement:

Governments of the Industrial World, you weary giants of flesh

and steel, I come from Cyberspace, the new home of Mind. On

behalf of the future, I ask you of the past to leave us

alone. You are not welcome among us. You have no sovereignty

where we gather.

John Barlow, an incredibly influential cyberlibertarian, showed unbridled optimism at the future of the Internet, rejecting any concept of "meatspace" politicians exercising control over what he saw as a pure realm free of prejudice or archaic legal principles. He would go on to be a founding member of the Electronic Frontier Foundation, one of the most important groups for the protection of civil liberties on the Internet. (I have donated money to the EFF in the past.)

Unfortunately, Barlow's optimism seems excessive in retrospect. There is a reason: cyberspace is not actually a meeting of pure minds, free from restraint. A later essay would explain why.

"Code Is Law"

On January 1, 2000, another influential Internet activist named Lawrence Lessig published an essay in Harvard Magazine titled "Code Is Law: On Liberty in Cyberspace". Lessig would go on to create the non-profit organization Creative Commons, and his ideas would fundamentally change the Internet's cultural landscape, especially with later concepts like Remix Culture.

"Code Is Law" will serve as the foundation for my later analysis of various Internet subcultures, so it will be worth a brief analysis of its ideas.

This regulator is code--the software and hardware that make

cyberspace as it is. This code, or architecture, sets the terms

on which life in cyberspace is experienced. It determines how

easy it is to protect privacy, or how easy it is to censor

speech. It determines whether access to information is general or

whether information is zoned. It affects who sees what, or what

is monitored. In a host of ways that one cannot begin to see

unless one begins to understand the nature of this code, the code

of cyberspace regulates.

In the essay, Lessig is commenting mostly on the way this code will change the nature of censorship on the Internet. He speaks at length about regulating the Internet to combat crime, about the ability of people to snoop on others' data, and about the complex social value judgments that we as a society have to make in order to shape the Internet into the proper kind of cyberspace for us.

He ends by talking about one of the most important aspects of the "Code Is Law" idea: an "unregulated" Internet is actually impossible. This echoes a criticism sometimes leveled at libertarians, that tyranny by a corporation is just as possible as tyranny by a government.

Our choice is not between "regulation" and "no regulation." The

code regulates. It implements values, or not. It enables

freedoms, or disables them. It protects privacy, or promotes

monitoring. People choose how the code does these things. People

write the code. Thus the choice is not whether people will decide

how cyberspace regulates. People--coders--will. The only choice

is whether we collectively will have a role in their choice--and

thus in determining how these values regulate--or whether

collectively we will allow the coders to select our values for

us.

For here's the obvious point: when government steps aside, it's

not as if nothing takes its place. It's not as if private

interests have no interests; as if private interests don't have

ends that they will then pursue. To push the antigovernment

button is not to teleport us to Eden. When the interests of

government are gone, other interests take their place. Do we know

what those interests are? And are we so certain they are anything

better?

"The Medium is the Message"

Long before the Internet was even a thought, a Canadian intellectual named Marshall McLuhan made a deep impact on the field of media theory with the idea that "the medium is the message". I should say at the outset that I have theoretical/empirical problems with some of the influential thinkers of media theory, particularly Baudrillard, but that the general ideas have merit. I took some quick quotes from McLuhan's 1967 book "The Medium Is The Massage" (sic) from Wikiquote in lieu of a full explanation.

All media work us over completely. They are so pervasive in their

personal, political, economic, aesthetic, psychological, moral,

ethical, and social consequences that they leave no part of us

untouched, unaffected, unaltered. The medium is the massage. Any

understanding of social and cultural change is impossible without

a knowledge of the way media work as environments. All media are

extensions of some human faculty – psychic or physical.

and

Environments are invisible. Their groundrules, pervasive

structure, and overall patterns elude easy perception.

It is worth mentioning that McLuhan's work in many ways predicts the existence and nature of the World Wide Web, which would not exist for decades after this book, and quite a few years after his death in 1980.

In the next section I will touch on how the structure of a few of the Internet's most influential websites influenced the cultures that erupted from them, and how this also changed the secondary effects those cultures could have on the rest of the Web. Even in this era of walled gardens, the Web still exists -- although the ways in which nodes on this web influence each other are frequently subversive.

Major Internet Cultures

phpbb era

Before the existence of major social media sites, most communities would spring up around some particular topic and remain relatively closed. Larger communities may exist of which each community is a piece -- the "furry" community is a canonical example of a decentralized Internet subculture, although it is a divisive topic to say the least. Many of these communities had a simple structure: they would have a website, typically a static html page administered by a webmaster who may also have been the person "in charge", and in order for community members to hang out, they would use Internet Relay Chat (IRC), a PHP bulletin board system, or both. IRC, as a real-time chat, serves a different purpose than a categorized and indexed forum, and some communities eschewed real-time group communication entirely (although Instant Messengers like AIM were common in this era, and personal friendships arose over those networks).

A typical phpbb board had a structured layout, with boards like:

  1. General Discussion (typically about whatever the community’s main focus was)
  2. I’m New/Leaving/Back (for introductions, keeping track of community members)
  3. Off-Topic (random content, as an outlet for community members who want to talk about other topics without going elsewhere)
  4. Roleplay (text roleplays, more common in furry communities or video game fandoms)

Sometimes these boards were organized into categories or sub-boards. The complexity tended to grow as the user base grew.

These boards used a combination of usernames, visual avatars, consistent “signatures” under each post, post counts, titles, etc. to maintain a complex reputation system. A user who is invested in the community, with hundreds or thousands of posts, is less likely to misbehave, since he has invested a lot of time and energy into this personal identity. Admins and moderators were typically labeled as such, and the presence of one of them online tended to keep misbehavior in check. Rules could be strictly enforced, and some communities got a reputation for having strict moderators.
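To make that mechanic explicit, here is a tiny sketch of the kind of state such a board attaches to a username. The fields mirror the ones listed above; the weighting formula is entirely made up, and exists only to show that an established identity represents sunk cost a user won't want to throw away.

#+begin_src python
from dataclasses import dataclass

@dataclass
class ForumUser:
    # The persistent identity a phpbb-era board displays next to every post.
    username: str
    title: str = "Member"       # e.g. "Newbie", "Member", "Moderator"
    avatar_url: str = ""
    signature: str = ""
    post_count: int = 0
    years_active: int = 0

    def sunk_cost(self) -> float:
        # Made-up weighting: long-tenured, high-post-count users have more to lose.
        return self.post_count * 0.1 + self.years_active * 50

regular = ForumUser("dwarf_enjoyer", post_count=3200, years_active=4)
drive_by = ForumUser("anon1234", post_count=3)
print(regular.sunk_cost() > drive_by.sunk_cost())  # True: misbehaving costs the regular far more
#+end_src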

Some of these types of boards still exist. The Bay 12 Games Forum, for example, is mostly used for talking about Dwarf Fortress, and the Internet troll-culture nexus KiwiFarms is still around creating drama to this day.

A brief side note to illustrate the kinds of people we may be talking about here. Margaret Pless wrote this for New York Magazine:

Consider that across the forums there are multiple warnings to

members to conceal their identity. For KFers, anonymity isn’t a

choice but a necessity — they know what they’re doing is probably

illegal, and that their anonymity insulates them from any

consequences. They know their “entertainment” harms vulnerable

people, which is why some of them felt bad when that one target

hanged herself. But, most important, they know that if their

anonymity were compromised, their own community might eat them

alive.

4chan

The website 4chan.org is the most famous example of an anonymous imageboard, but it wasn’t the first, and it isn’t the last. There were those that came before (2ch) and those that came after (8ch), but none had the same effect on Internet culture as 4chan. 4chan’s /b/ board, “Random”, used to be one of the most anarchic places on the Internet, which turned it into a hotbed of cultural creativity – but mostly garbage. Memes repeated ad nauseam, troll threads spammed with nonsense, perfectly good conversations interrupted by someone spamming photos of graphic executions, etc were the norm on /b/.

An edgy and remorseless culture arose there, leading to incidents like The Great Habbo Raid of 2006, where “/b/tards” completely ruined the experience of Habbo users for the “lulz”. (Encyclopedia Dramatica will be used as a source for some of this article. Take its tone into consideration, as it is the same culture we are talking about, having diverged slightly in its own wiki medium. If you are easily offended, perhaps don’t click.) Another infamous incident was when /b/tards posthumously trolled a teenage boy who had committed suicide, leading to the “an hero” meme. Early 4chan, while malicious, had not yet morphed into the far-right troll culture we see today (although racism “for the lulz” was rampant), and they frequently raided right-wing political targets like the entertaining raids on Hal Turner, a radio conspiracy theorist (like a diet Alex Jones). Later, Anonymous would even launch a moral crusade against the Church of Scientology, leading to Project Chanology. Project Chanology was derided by many /b/tards as “moralfaggotry”, revealing the early ethical split in the community that would eventually lead to the creation of Anonymous as a hacktivist collective (leaning left-wing) and of the alt-right /pol/ users that would take center stage in Internet culture starting in 2015.

What could create such a virulent subculture? The answer is easy, and it has everything to do with the medium of the anonymous imageboard. 4chan (particularly /b/, the board with the most traffic) has a few features to its usage which drive things in this direction relentlessly.

All posts are completely anonymous. Posts are given the name “Anonymous” and a post number. Even in the same thread, it can be difficult to tell which posts are by the same person, although the >>reply system and typing/writing styles can keep the thread together. At times in 4chan’s history, and on certain boards, there was an option to add a “tripcode” to one’s name, so that a name like Anonymous!Ep8pui8Vw2 would give someone a consistent identity. Tripcodes were secure because they were generated by a one-way operation – given the tripcode output Ep8pui8Vw2, it is very difficult to determine what input created that code. In some contexts, it was possible to change the name entirely. As can be expected on 4chan, these people were derided as “tripfags” and “namefags” respectively, reinforcing the anonymity of the site as a part of its structure.
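To make the one-way property concrete, here is a minimal sketch in Python. This is not 4chan’s actual algorithm (the classic tripcode was derived from the Unix crypt(3) DES hash); it only illustrates how a short, stable pseudonym can be computed from a secret that cannot practically be recovered from the output. The function name and the 10-character length are my own choices.

#+begin_src python
import base64
import hashlib

def tripcode(secret: str) -> str:
    """Illustrative one-way 'tripcode': hash the secret, keep a short slice.

    The same secret always yields the same trip, but the trip alone does
    not reveal the secret.
    """
    digest = hashlib.sha256(secret.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")[:10]

# Same input, same trip -- so the name stays consistent across threads.
print("Anonymous!" + tripcode("my secret phrase"))
#+end_src

Because the security rests entirely on the hash being hard to invert, weak secrets can still be guessed by brute force, which is why short or common tripcode inputs were considered unsafe.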

Only 100 threads may be active at any given time. On /b/, where a thread with no replies could slide off “page 10” (10 threads per page) within an hour, getting a rise out of the community was critical. As long as your thread continued to get activity, it would remain near the top of the board. In this regard, it didn’t matter why the thread was popular. If the thread was well-received by the community, it would have a decent shelf life (eventually dying by force after a few hundred posts), but controversial threads would be even more successful. Once a “shitstorm” begins, it won’t end until the thread reaches its comment limit and dies. A particularly successful shitstorm might create followup threads, or even migrate to other *chans or to IRC channels, where it can rage for days (or morph into a raid or a successful meme). Controversy is more important than conformity, but both are successful strategies. 4chan creates a sort of hive mind, but a highly violent one.
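To make the incentive explicit, here is a toy model of bump-order pruning in Python. It is not 4chan’s code: the 100-thread cap comes from the paragraph above, while the 300-reply bump limit is a number I picked to stand in for “a few hundred posts”.

#+begin_src python
THREAD_CAP = 100   # the board keeps only this many threads alive
BUMP_LIMIT = 300   # hypothetical reply count after which a thread stops bumping

class Board:
    def __init__(self):
        self.threads = []    # index 0 = top of page 1, the end = bottom of page 10
        self._next_no = 1

    def new_thread(self, title):
        thread = {"no": self._next_no, "title": title, "replies": 0}
        self._next_no += 1
        self.threads.insert(0, thread)   # new threads start at the top
        if len(self.threads) > THREAD_CAP:
            self.threads.pop()           # the stalest thread slides off and dies
        return thread

    def reply(self, thread):
        if thread not in self.threads:
            return                       # the thread already died
        thread["replies"] += 1
        if thread["replies"] <= BUMP_LIMIT:
            # activity bumps the thread back to the top; past the bump limit
            # it only sinks, so even a popular thread eventually dies by force
            self.threads.remove(thread)
            self.threads.insert(0, thread)
#+end_src

In this model the only way for a thread to stay alive is to keep provoking replies, which is exactly the incentive the real board creates.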

Rules are only lightly enforced. Posts which violate US law, like the posting of illegal pornography, will be removed somewhat swiftly and the poster banned, but the general culture of 4chan is laissez faire. Users flame each other on 4chan in ways that would receive a permanent ban on Facebook, Twitter, or Reddit. “OP is a faggot” is an extremely common refrain from users who think that a thread is stupid. The suffix “-fag” became a generic noun-former – “oldfag”, “newfag”, “moralfag”, “tripfag”, “namefag”, etc. – which generally carries a negative connotation, but not always. Casual racism went unpunished and was even rewarded by the culture of the board. Hitler memes (like the dancing Hitler gif) were popular on early /b/. In general, these posts weren’t serious and served only to infuriate those who weren’t “in the club”; this vitriol acted as a shibboleth to discourage “newfags” and “normalfags” from staying on the site. Ominously, this ethical desensitization would later contribute to the birth of /pol/ as a right-wing cultural phenomenon.

Reddit

Reddit is now one of the most-trafficked sites on the Internet, and surely exerts a huge influence on Internet culture. The site generally trends towards a cosmopolitan sort of center-left ideology, but it has also been home to now-banned far-right communities in the past, and still houses a highly influential (but commonly mocked) pro-Trump subreddit, r/The_Donald.

Reddit has a mixed opinion on anonymity. Unlike Facebook, Reddit makes no attempt to tie an account to a personal identity, and a user can create an account without even entering an email address, a requirement other sites commonly use to at least slow people down from creating tons of new accounts. Reddit’s take on anonymous posting has affected its culture substantially, with users creating “throwaway” accounts before posting deeply personal or controversial content to the site. However, most users have a main account which is subject to the typical style of Reddit “cultural control”.

Reddit’s innovation, which gives it a distinct character, is its karma system. Individual accounts accrue comment karma and post karma, which are separate – post karma can only be accrued by having successful thread submissions. This comes from Reddit’s original purpose as a link aggregator: the site wanted to reward users who found and posted interesting content that other users approved of. The karma system has a more important effect, though: comments on Reddit are not sorted chronologically, but by karma. Posts on subreddits are also affected by karma, but the algorithm for which posts go on the front page of the subreddit is much more complicated.

Inside of a post, Reddit comments are arranged into a tree system. Here’s a quick example:

Top level comment [200 karma]
> Reply A [50 karma]
> Reply B [35 karma]
> > Reply to B [5 karma]
> > Reply to B [1 karma]
> Reply C [-10 karma]
Top level comment [150 karma]
> Reply A [10 karma]

And so on. Inside of each level, the time of each post is irrelevant, and the posts are sorted only by karma. This means that posts that get general approval are pushed to the top of each level, which causes them to be seen by more people, which may cause them to get more upvotes, etc. The opposite is true with downvotes: downvoted comments get pushed to the bottom of their levels, and comments below a certain karma threshold are hidden by default, which makes them easy to miss. There are momentum effects here. A comment with positive karma is more likely to get upvotes, and a comment with negative karma is more likely to get downvotes; Reddit is a community where groupthink definitely plays a role in individual choice. (There are alternative sorting mechanisms, but almost nobody uses them, except to sort by Controversial during certain threads.)
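Here is a minimal sketch of that sorting behavior in Python, reusing the example tree above. It is not Reddit’s production ranking code (the site’s default sort is more involved than a raw karma sort); it only shows sorting within each level by score and collapsing comments below a threshold. The Comment class, the render function, and the -5 hide threshold are my own illustrative choices.

#+begin_src python
HIDE_THRESHOLD = -5  # hypothetical score below which a comment is collapsed

class Comment:
    def __init__(self, text, karma, replies=None):
        self.text = text
        self.karma = karma
        self.replies = replies or []

def render(comments, depth=0):
    # within each level, highest karma first; posting time is ignored
    for c in sorted(comments, key=lambda c: c.karma, reverse=True):
        marker = "[hidden] " if c.karma < HIDE_THRESHOLD else ""
        print("  " * depth + f"{marker}{c.text} [{c.karma} karma]")
        render(c.replies, depth + 1)

thread = [
    Comment("Top level comment", 150, [Comment("Reply A", 10)]),
    Comment("Top level comment", 200, [
        Comment("Reply C", -10),
        Comment("Reply A", 50),
        Comment("Reply B", 35, [Comment("Reply to B", 1),
                                Comment("Reply to B", 5)]),
    ]),
]
render(thread)  # reproduces the ordering shown in the example above
#+end_src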

It is exactly this mechanism which creates the sort of hive-mind agreement that Reddit engenders in its users. This is also why more “sophisticated” (read: more anti-social) communities like 4chan mock Reddit users, thinking of them as “hugbox” losers who fall in line with popular opinions or dominant ideologies without thinking for themselves. This problem is more pronounced on some subreddits than on others, especially the leftist subreddits which tend to have extremely strict moderation teams: see, for example, r/LateStageCapitalism and r/Anarchism. The right-wing Trump support subreddit r/The_Donald also has an extremely strict moderation team, as part of an explicit strategy to flood Reddit with pro-Trump content. (It worked, and the r/all “frontpage” algorithm has been reworked multiple times to keep T_D from spamming it.) All three subreddits – r/LateStageCapitalism, r/Anarchism, and r/The_Donald – are guilty of the same hivemind, low-thought ideological possession because of these policies.

(To be fair, there is plenty of disagreement among leftists and among rightists over specifics, and those debates are not banned, but actual dissent from the core ideology is removed immediately. This kind of overly sensitive moderation style is part of why 4chan users mock Reddit.)

Conclusion

The major factors of the media of Internet communities are:

The nature of identity: a sort of spectrum from full anonymity (4chan) to pseudonyms (phpbb, Reddit), to forced real-name usage (Facebook). Full anonymity means each post in the thread must stand on its own and can’t even be tied to previous posts by the same person. Pseudonyms create more coherent conversations, but ones where users are forced to behave consistently and in a socially appropriate manner. Forced real-name usage puts peoples’ livelihoods and social status on the line. (A comment on identity: some 4chan boards, like /pol/, have introduced ID systems where a user’s posts in one thread will all have the same ID, but that same user in a different thread will not. A user can be identified consistently only across one thread, not across the whole board. /b/ does not use this. A sketch of one way such IDs could work appears after this list of factors.)

The nature of reputation: this is related to the nature of identity. Reputation is generally useless on 4chan, highly influential on traditional message boards (where a user becomes synonymous with their avatar image), and a middle ground on Reddit, where some figures are recognized (mods, bots, and popular posters) but most names are irrelevant.

The nature of a successful “meme” on that medium: phpbb boards have chronological threads and posts exclusively, 4chan has aggressively-pruned boards where only the most active threads survive at all, and Reddit uses group consensus to elevate or bury content.

The nature of its moderation team: this makes a huge difference. 4chan’s laissez faire approach can be compared to Reddit’s by-the-subreddit approach – Reddit’s admins only intervene in cases of illegal content, and the subreddit’s moderators have absolute sovereignty over what moderation tactics they will use for their communities. (Many subreddits have split over moderation drama.)
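As promised above, here is a plausible sketch of how thread-scoped IDs could be generated. I do not know 4chan’s exact scheme; hashing a poster identifier together with the thread number is my assumption. The point is that the same poster gets a stable ID within one thread, but an unlinkable ID in every other thread.

#+begin_src python
import hashlib

def thread_id(poster_ip: str, thread_no: int) -> str:
    """Hypothetical per-thread ID: stable within a thread, different across threads."""
    return hashlib.sha256(f"{poster_ip}:{thread_no}".encode()).hexdigest()[:8]

print(thread_id("203.0.113.7", 1111))  # same ID for every post this user makes in thread 1111
print(thread_id("203.0.113.7", 2222))  # a different, unlinkable ID in thread 2222
#+end_src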

|                | Identity                 | Reputation                 | Meme                   | Moderation           |
|----------------+--------------------------+----------------------------+------------------------+----------------------|
| phpbb / forums | Pseudonymous             | Tied to name, # of posts   | Popularity, status     | Strict               |
| 4chan          | Anonymous                | Zero (tripcodes unpopular) | Controversy, amusement | Near-zero            |
| Reddit         | Pseudonymous, throwaways | Karma system               | Popularity             | Depends on community |

At a later date I will apply the same analysis to the major “normie” platforms, Facebook and Twitter, and show how they also influence their users.

My Tech Stack linux free_software @technology

:PROPERTIES: :EXPORT_FILE_NAME: tech-stack :EXPORT_DATE: 2018-12-31 :END:

Machines

| Model              | Hostname | CPU                               | GPU                       | RAM  |
|--------------------+----------+-----------------------------------+---------------------------+------|
| Ideapad Y500       | shakuras | Intel i7-3630QM (8) @ 3.4 GHz     | 2x NVIDIA GeForce GT 650M | 16GB |
| Thinkpad x130e     | zerus    | AMD E-300 APU (2) @ 1.3 GHz       | AMD ATI Radeon HD 6300    | 2GB  |
| Samsung Chromebook | char     | Cortex-A15 Exynos 5 (2) @ 1.7 GHz | ARM Mali-T604 (Quad)      | 2GB  |

Software Choices

While I do keep a single Windows machine around for playing some games, the vast majority of my time at a computer is spent in front of a Linux machine.

Distributions

Void Linux (rolling release, without systemd) for x86_64 systems.

I currently run Devuan on my Samsung Chromebook, although at some point I intend to replace it with the armhf distribution of Void.

Environment

I use Awesome WM as a tiling window manager. If you don't use a tiling WM, I strongly recommend trying one.

Web Browser

Firefox. Firefox is the standard libre browser, and it has gotten much faster since the Quantum update. I generally refuse to use Chrome derivatives.

Text Editor

Emacs! Always Emacs. I do use evil-mode for Vim keybindings, in accordance with the old joke: "Emacs is a great operating system. It's just missing a good text editor." Emacs is one of the best pieces of software ever written.

As an aside, this site is written in Emacs using ox-hugo to export org-mode files to Markdown for Hugo, a static site engine written in Go.

Terminal

A proper UNIX-style command line interface is the superior environment for most tasks. I will write more on this later. I consider urxvt & tmux & zsh to be the Holy Trinity. I use neovim for quick edits to system configuration files.

Site Overview @about

:PROPERTIES: :EXPORT_FILE_NAME: overview :EXPORT_DATE: 2018-12-30 :END:

About Me

Having a personal website in 2018 is an exercise in vanity. Accordingly, I have decided not to put my actual name on the site. Instead I have dedicated this page to the actual purpose of the Internet: pseudo-academic shitposting. My handle "slphil" expands to "shitlord philosopher", which you may interpret however you wish.

I do not have a college degree; dumb 18-year-olds make dumb decisions, and it's hard to go back once you're in your late 20s. I try to read 1-2 books each week, sometimes re-reading something, in order to remain stimulated. Besides, university education (as useful as it is in the job market) is overrated for autodidacts. A good line from Arthur Schopenhauer:

Generally speaking, university philosophy is mere fencing in front
of a mirror. In the last analysis, its goal is to give students
opinions which are to the liking of the minister who hands out the
Chairs... As a result, this state-financed philosophy makes a joke
of philosophy. And yet, if there is one thing desirable in this
world, it is to see a ray of light fall onto the darkness of our
lives, shedding some kind of light on the mysterious enigma of our
existence.

I do consider myself a /philosopher/: "all men by nature desire to know" (Aristotle, /Metaphysics/). I dislike the pretentious tone that this word can bring about, familiar to anyone who knows college-age Philosophy students. I do not hold to any particular definition of "philosopher": each methodology comes with its own cost-benefit analysis, and most(!) of them are meaningfully valid in at least some context. Particularly dear to me is the philosophy of science, since my "native tongue" as a teenager was the same kind of fedora-tier STEM-elitist materialism that I mock nowadays. An enjoyable exercise is to update existing influential philosophy in scientific terms: for example, rereading Kant's thought on the Understanding through the lens of neuroscience, as can be seen here: Neuroscientists are Kantian Anti-realists.

I also adopt the term /hacker/: relevant definitions from the Jargon File are "1. A person who enjoys exploring the details of programmable systems and how to stretch their capabilities, as opposed to most users, who prefer to learn only the minimum necessary. RFC1392, the Internet Users' Glossary, usefully amplifies this as: A person who delights in having an intimate understanding of the internal workings of a system, computers and computer networks in particular." and "7. One who enjoys the intellectual challenge of creatively overcoming or circumventing limitations." I am a free software nerd and a Linux power user. Although I have some baseline competence and knowledge with regards to computer programming, I write code neither as a hobby nor as an occupation, so my actual ability to program is limited.

Because of the complex interplay between the hacker and philosopher mindsets, I have come to prefer the hyphenated hacker-philosopher. My broad (but not expert!) understanding of philosophy, literature, history, etc., allows me to apply the hacker mindset to non-technical discussions.

My ultimate goal is to achieve the status of a polymath, but I have another 10-15 years of dedicated study before I can use the word without cringing.

Topics

Analysis

Books

Games

Contact