On being a policy

This is my post for day 8 of the Inkhaven writing retreat.

I’m trying to figure out what’s up with what I’m calling “being a policy”. I’d like to get better at it.

The classic way to decide what actions to take is to generate a bunch of options, evaluate the outcomes of taking each one, and pick the action corresponding to the best predicted outcome. In other words: to think about it.

Thinking, however, is expensive and slow. So we develop a ton of ways to shortcut the process.

One of these ways is to develop a habit. A habit is something that you had to practice a few times, but that eventually became automatic, and that you now do essentially unconsciously and involuntarily. You could stop doing it if you wanted, but first you’d have to notice, then you’d have to decide to stop, then you’d have to practice stopping. I think I would classify things like muscle memory or skills under this. Every time you use the scissors, your brain does not have to recalculate the optimal way to move them. Habits can also be purely internal or cognitive, like automatically trying to generate something to be grateful for whenever you find yourself feeling annoyed about something.

Another type of non-deliberative decision is what I would call a heuristic. Perhaps you have a heuristic that you don’t eat until you’re hungry. By using the hunger signal to fire off the behavior, you save yourself from having to constantly re-decide whether to eat or not. But you might sometimes decide to go against this heuristic. If you’re about to go on a long road trip, you might want to eat a big breakfast right away, so that you don’t have to stop for food for a while. Or perhaps you’re still in the office at 7pm and you’re starving, but you’ve just got a little bit of work left to do before you can send off this email, so you decide to keep working and eat after you’ve sent the email. Heuristics can save you 95% of the cost of deciding, while still being flexible to the context.

There’s a third kind of decision procedure that I would call a policy. A policy is a well-defined rule that you always follow. I probably got this term partly from its use in reinforcement learning, but it also matches the connotation of a company policy. The reason you have a policy is because you deliberated for a while on what the best course of action would be in this recurring, well-defined context, and decided that you always want to take that action. So after you decide on the best course of action, you then install it as a policy. Like a habit, you will now take that action every time, but unlike a habit, it may be very conscious and difficult. Unlike a heuristic, you will not reconsider under varying contexts.

Part of the purpose of a policy is to ensure that you take the action even when other circumstances would give you reason not to. Policies help ensure fair action and better long-term consequences, even at the cost of short-term ones.

A normal person might call a policy a “principle”, a “virtue”, or just “the right thing to do”.

In the field of advanced, mathematically-founded decision theory, the right decision procedure is to maximize expected utility, according to your best predictive models and consistent values.

But according to doubly advanced, meta-mathematically-founded decision theory, the right decision procedure is to first identify the decision procedure that maximizes expected utility, and then install that decision procedure.
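The base-level procedure can be sketched in a few lines of code; the actions, probabilities, and utilities below are entirely made up for illustration.

```python
# Expected-utility maximization over a small, hypothetical action set.
# Each action maps to a list of (probability, utility) outcome pairs.
actions = {
    "work":  [(0.9, 5), (0.1, -1)],  # usually productive, occasionally a slog
    "relax": [(1.0, 2)],             # reliably pleasant
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one action."""
    return sum(p * u for p, u in outcomes)

# Pick the action whose predicted outcome distribution scores best.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "work": 0.9*5 + 0.1*(-1) = 4.4 beats 2.0
```

The doubly-advanced version would run this same maximization one level up: over candidate decision procedures rather than over individual actions.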

I am generally quite good at having predictive models, reasoning through the implications, and then taking the action with the best predicted outcome. But I sure do have some big flaws in that last part. I seem to do a form of hyperbolic discounting, according to which it always feels like a good idea to do the somewhat easier action right now.
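The hyperbolic pattern can be made concrete with a toy calculation; the reward amounts and discount constant below are arbitrary, chosen only to exhibit the characteristic preference reversal.

```python
# Hyperbolic discounting: value(v, t) = v / (1 + k * t).
# Its hallmark is preference reversal: viewed from far away, the
# bigger-later reward wins; up close, the smaller-sooner one suddenly
# looks like the good idea "right now".
K = 1.0  # arbitrary discount constant (units: 1/day)

def discounted(v, delay_days):
    return v / (1 + K * delay_days)

# Choice viewed 10 days out: $110 in 11 days vs $100 in 10 days.
far = (discounted(110, 11), discounted(100, 10))
# Same choice at day 10: $110 tomorrow vs $100 immediately.
near = (discounted(110, 1), discounted(100, 0))

print(far)   # larger-later wins (~9.17 vs ~9.09)
print(near)  # smaller-sooner wins (55.0 vs 100.0)
```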

I do have some policies. For example, I do not drink alcohol. I don’t remember ever deciding on this policy, but it just has zero appeal. There are apparently some good things about alcohol, and I understand that drinking a tiny bit will have no observable effects. But that is not relevant, because I Do Not Drink Alcohol. Another example for me is that when Musk bought twitter, I stopped using twitter. It did not feel like a choice, it felt like “shoot, I can’t use that now, that’s too bad.” Would there still be benefits to using twitter? Absolutely. It seems like the whole machine learning community uses twitter as their primary social network. I’d be more informed about my work. And it really is quite a lot of fun. But that is not relevant, because I Cannot Use Twitter, now.

Both of these examples (and others that I’ve thought of) have two things in common: 1) the rule is about not doing something rather than doing something, and 2) the condition is extremely well-defined. A change I really want to make is something like “choose productive activities more often, and consumptive activities less”. But I don’t want to totally stop consumptive activities. How much is enough?

I think it’s actually extremely common for people to have what I’m calling policies, and to use them as tools for living better lives. I suspect that one reason I don’t seem to know how to pick up this tool is that I’m in love with “reason”, and I’ve spent all my life refining my ability to make good predictive models and assess what actions and outcomes would be valuable. Since installing a policy is partly about preventing you from thinking about what to do each time, it feels somewhat “anti-reason”. But I do in fact endorse and understand the doubly-advanced version of decision theory, and so I would endorse installing more policies.

I just need to find where this tool’s handle is first, before I can pick it up and start using it.

Aesthetics means something

This is my post for day 7 of the Inkhaven writing retreat.

I once joked on twitter that, “When I say that something is aesthetic, what I mean is that I like it and I don’t know why.” This was meant to be a joke, but one which, of course, has a kernel of truth. What is the kernel?

I think I use the concept of “aesthetics” to refer to a reaction I have to certain stimuli which is instantaneous, involuntary, like something being “painful” or “surprising”. I do not deliberate on what is aesthetic in the same way that I deliberate on what is good or true. Instead I deliberate on why that particular thing was aesthetic.

Because it is instantaneous, the experience can usually be captured at the sensory level, like visuals or music. And understanding why I had that reaction is an additional step. I don’t instantly know why I had the positive reaction to the aesthetic stimuli.

I think that often, what causes the positive reaction is that the instantaneous stimuli managed to capture something deeply meaningful. And the efficient resonance from the nerve endings to the soul is what feels aesthetic.

To give some examples:

I once saw a good friend playing taiko drums in a group. If you haven’t experienced them before, taiko drums are thunderous. Some part of me processed the powerful sound as my friend being powerful. The collective production of it was processed as the group being harmoniously powerful. The rhythmicity, yelping, and physical motion of the performers was processed as playful, celebratory.

I once saw one of the Iron Man movies. I don’t even remember which one. Earlier in the movie, we learn that Tony Stark has added a feature to Jarvis where he can hold his hands up in a certain gesture, causing the pieces of his Iron Man suit to be summoned and fly into place around his body. Later in the movie, he’s at his home with Pepper Potts, and the bad guys show up in helicopters and start shooting into his living room. In slow motion, ready to save the day, Stark does the hand gesture, and the suit pieces fly into the room — and assemble around Potts. Instantly, the audience is shown that Stark cares about her safety more than anything else.

I find Islamic mosaics to be enrapturing, almost painfully beautiful. The patterns are complex enough that my whole brain refocuses on them. And then, despite the full attention, I can just never quite manage to understand the whole pattern. As soon as I think I understand what I’m looking at, I saccade my eyes to an adjacent part, and am struck with an unexpected flourishing of novelty. But it’s also clearly not randomness; it’s deliberate, regular, and conveys to me, without a doubt, that another mind existed which brought the pattern into being. That mind wanted me to have this very experience.

I spend quite a lot of time in art museums. There are a number of reasons why, but a major one is because I love experiencing aesthetics; that is, I love being reminded, all the way down, of things that are meaningful to me.

I write so you can make use of my mental models

This is my post for day 6 of the Inkhaven writing retreat.

I think a lot about models. Mental models. The world is too big and complicated for us to memorize everything we experience, and it would take far too long to think through all the implications of everything that we do remember. So what we do instead is build lots of smaller models, small enough to remember, and simple enough to use for predicting and decision making.

We conditionally deploy these models largely based on bottom-up sensations. I haven’t memorized exactly which aisle or shelf my favorite bread is on, nor which breads are immediately to the left and right of it. But I know that when I run out of bread, that will trigger an action for me to decide when to go to the store, and that when I get to the store, the visuals around me will match up enough to a remembered template that I will just skim a bit and then find the bread more or less right away.

All of this will happen automatically, involuntarily, because it is required for navigating the world. But you can also do it intentionally. You can think about whether a given model is failing a bit too often, and whether you should look for a better model. You can try to merge two models into one. You can take a model that’s complicated to use, and see if you can find one that is much more elegant while still being sufficiently accurate.

You can also build models of content that you’ll never personally experience. You can try to understand what went so wrong in 18th century France, or what’s up with Uranus being sideways.

All these models live in our heads. They are little bits of software that we programmed inside our brains by going around living. So they are written in brain language. This is ultimately mathematical, but when you are the math, it doesn’t feel like math.

It turns out that there are other entities in the world, doing the same thing. And they’re pretty cool! We like to interact. But they think in terms of their mental models, and we think in terms of our mental models. Fortunately, there is substantial overlap between how these models work. It turns out that reality has joints, along which any functional agent will have delineated some concepts used in their mental models. We can point at an apple and say “apple”, and then the other entity will know to assign the word “apple” to the mental model that is activated inside their mind when they look at what we pointed at. Language is the means by which we bridge across our individual mental models.

Many people have a craft. They spend their careers paying deep attention to something and practicing it. I am a craftsman of mental models. Unfortunately, you cannot just hand someone a mental model in the same way that you can hand them a sword or fresh vegetables. By default, all my work is stuck inside my head. To invite someone into the shop of a mental model craftsman, you have to communicate the models. If we’re using language, then I have to move around the structure of the model and linearize it. (Models do not like being linearized.)

Fortunately I do enjoy the process of communicating models. It can make the models better, but it’s also just a blast to give people the same experience I had while making it. But it’s still much, much more natural for me to just keep on crafting. So by this point I feel as though I’m sitting in a shop filled with thousands of little gadgets. And the gadgets themselves could be put to better use if other people could use them.

I want to get better at sharing my models. That’s why I’m at Inkhaven.

People should be smaller

This is my post for day 5 of the Inkhaven writing retreat.

Epistemic status: spit-balling some stuff, but confident in the overall concept.

Literally, physically, smaller. I don’t mean “lose weight”; I mean everyone should be like one meter tall. This would be highly advantageous to society and the economy.

Humans are pretty big for an animal. You might imagine that the reason we are the size that we are is because we need a body this big to support a brain this big. But I don’t think there’s any reason to believe that’s true! Shorter people are not less intelligent. We need to eat more to fuel our big brains, but our limbs are far stronger than needed to just carry our head around.

I think the reason we are this size is intra-species competition. If you’re taller, then you can beat up the other guy. So perhaps evolution slowly increased human height as we became more able to feed our bigger bodies. Or possibly we needed to be this tall for persistence hunting. In any case, neither of those are exactly “legitimate” reasons. I don’t think we would lose any of our deep human values if we all got our height cut in half. At this point we’ve mostly agreed not to fight, and we can handle the other predators with tools.

Okay, but why is smaller better? The two main reasons I can think of are 1) injury and 2) energy.

Injury

Small things are proportionally tougher. You can drop a matchbox car on the floor and it will just bounce, but if you drop a real car from four feet up you will need to call more than a mechanic. You can throw an ant off the balcony and it will just walk away, whereas if you faint from standing, you could easily bust your head open.

In these types of injuries, you get hurt because your body has to absorb the force of decelerating your own mass. Your mass is proportional to your volume, but forces are transmitted through surface area. A longer rope is not stronger, but a thicker rope is. So when something twice as big falls, the mass it has to stop is eight times larger (two cubed) but its limb’s cross-sectional surface area is only four times larger (two squared). This disadvantage gets much worse as things get bigger (100 cubed is way, way bigger than 100 squared). This phenomenon is called the square-cube law.
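As a sanity check on the arithmetic, here is the scaling spelled out in code; the scale factors are just illustrative.

```python
# Square-cube law: scale an object's linear size by a factor s.
def scaled_quantities(s):
    mass = s ** 3           # mass grows with volume (s cubed)
    cross_section = s ** 2  # limb strength grows with area (s squared)
    # Relative stress on the limb when it has to stop that mass in a fall:
    stress_ratio = mass / cross_section
    return mass, cross_section, stress_ratio

# Doubling size: 8x the mass but only 4x the strength, so 2x the stress.
print(scaled_quantities(2))    # (8, 4, 2.0)
# At 100x, the mismatch is enormous.
print(scaled_quantities(100))  # (1000000, 10000, 100.0)
```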

This is also why children are good at rock climbing, squirrels can climb trees, and bugs can climb walls.

So if people were smaller, there would be way less injury. Back pain would also be less common. Spooning with your partner would not have that awkward thing where your arm gets squeezed under them. You could also carry heavier things. Imagine going to IKEA and just hefting the whole bed frame over your head and sauntering home.

(You’d still get just as hurt if something else impacts you, like a bullet. Bigger people can take bigger punches, but falls are more dangerous to them.)

Energy

But the real winning reason why being smaller is better is that smaller things require less energy to operate, and everything in the economy scales with energy.

Our bodies would need fewer calories per day. Stores could be smaller. Farms could be smaller. We’d produce fewer carbon emissions. Cars would be smaller. We’d use less fuel. We’d need to produce less steel. Cities could be closer together. Our cargo would be smaller.

(“Wouldn’t we just have more children, until the population saturated our resources again?” I hear someone say. Yeah, probably. But more people means more positive experiences.)

Space exploration would benefit hugely from smaller people. The earth has so much gravity that we can just barely exit it with chemical rockets.

Because smaller things have less inertia, you can move them faster. All of society could go faster. That doesn’t mean you’d be more anxious; your neurons’ signals would also have less distance to travel, so you’d think faster, too.

The future

This is really all moot, because if the future goes well then we’ll all just upload our brains into computers, and “size” won’t be a thing anymore. But it’s fun to think about.

Getting paid ≠ producing value

This is my post for day 4 of the Inkhaven writing retreat.

Some people have jobs just so that they can make enough money to sustain themselves and whatever else they want to do with their lives. This is a perfectly reasonable choice to make. I’m also not writing this to anyone who’s struggling to reach that level of sustenance.

But many people also want their labor to be valuable to society overall. Your job is one of the biggest parts of your life, and it’s natural to want to be able to take pride in what you do.

I care a lot about my work being beneficial, and I’ve found that throughout my career, when I’ve actually sat down and thought about the particulars of my employer, it was often far from clear whether it was positive. It’s easy to assume companies wouldn’t spend so much money on you if it wasn’t valuable; surely someone must be making sure. But I believe this is sadly often not the case. I want to walk you through some more detail of how I think about this, in case it might help you improve your own career choices.

From some perspective, most actions are useless, and many others are harmful; how can you be sure your labor is having positive effects? Sometimes, it’s pretty obvious. If you’re a farmer, you can be pretty sure that the food your labor produces will help nourish and keep alive another person. If you’re providing routine medical care, then you can essentially witness the benefits in real time.

More generally, when people are willing to pay for stuff, that is a pretty strong signal that that thing is actually valuable to them. So it’s reasonable to assume that if there’s a salary, it’s valuable. And I think this is true for the most part. But there are numerous ways that this signal can get messed up.


For a couple years I worked at a software company whose product was helping people get better medical care. This is a good start! But, like many, many software companies in the Bay Area, we were not yet profitable. The company had been growing for several years, and was employing a couple hundred people. Not a casual endeavor. Even the revenue we were taking in was a muddled signal. Our product wasn’t paid for by the people receiving medical care. Instead, it was purchased by other companies, who in turn provided it as one of their employee benefits. So it was used by a small fraction of employees, and the feedback loop was long. A major incentive at play for the purchasing company was that providing our service as an employee benefit looked good: it made potential employees feel better about working for the company. Not a very reliable signal.

And how good was the product? In the beginning, the core product was a ranking of all the doctors in the US. The ranking was marketed as being based on some kind of advanced data analytics. Later, the main product was a system that let patients receive detailed opinions about their medical cases (from doctors high up in the ranking). While I worked there, they continued to branch into other exploratory products. If you were a patient, then using our product probably felt good. You were paired with a real person who would walk you through the process, and help you understand the doctor’s opinion. Feeling good is a real thing, and has its own value. But was anybody checking whether the people who used our product had better medical outcomes?

From the perspective of the engineering team, this product is almost optimized to make you feel good about working on it. The database tables I interacted with contained rows and rows of real human names,¹ people getting help with their very real medical cases. In isolation, I think this part was absolutely positive. It’s not like we were selling snake oil.

But another key consideration in whether a choice is the right one is what the alternatives are. Running this software/healthcare company was costing so, so much money. Not more than usual; it’s just that 200 professionals cost a lot. And those 200 skilled people could have been doing something else. It’s really hard to tell whether this company was worth the value it was producing, given the tangled way that the incentives and signals were flowing.

Everyone was always very professional. The office was clean and crisp. People generally enjoyed working there and rarely spoke badly of the company. There was no sense of working for “the man”. The CEO was amicable and kinda goofy. It would be so, so easy to just let that overall atmosphere carry you through the day, and feel good about your work.

Over time I heard, through casual word of mouth, that the doctor ranking was a very basic statistics model based on only a handful of data points, like the doctor’s standardized test scores. It had not been updated since the very first iteration. The reason why seems to have been some kind of office politics that never leaked out to me in any detail. Honestly, all of this is probably the norm and doesn’t necessarily change my estimate of the company’s value much. But what else are the signals failing to catch?


If the product of your labor has individual paying customers who make it profitable, then that significantly increases the probability that it is net positive.

Though even in that case, there are failure modes. People are often systematically wrong about what’s valuable to them. Or, a whole market sector is designed to squeeze money from people where they wouldn’t endorse it. This is the kind of thing where people could reasonably disagree with any given example, but I’m thinking of things like gambling, ads, or credit cards. There is a type of gambling that is fun, a type of advert that tells me about a useful product I didn’t know existed, and a type of finance that is indispensable for the economy to run efficiently. But there are also types of all these things that feed parasitically off the people who simply don’t have the wherewithal to make good choices. And I’m not sure how many of the zillions of people working in these sectors are checking which kind they are involved in.

Another big factor, especially in tech, is that lots of salaries are ultimately funded by very small groups of people: venture capitalists. Despite the incentives, VCs are human and can be systematically wrong about what is valuable to fund. If you find yourself to be the eventual recipient of a salary that only exists because some VCs took a risk, then you may be able to do your own thinking about the economics of the business and conclude that they were wrong, and that your job is not worth doing according to your own beliefs. They can also be funding things that work towards their values and against the values of society. During the several cryptocurrency bubbles, it was a pretty straightforward strategy to invest in a crypto company that was optimized for hype, sell when the bubble was high enough, and essentially be screwing over everyone else involved with the project.


Things just get really funny when the world contains high-leverage mechanisms. Sources of unprecedented energy are also bombs. Steel mills can be converted almost overnight to a supplier of weapons for an unjust war. Even farms can become subsidized by uncalibrated (if well-meaning) government programs, which destroys the main signal of whether the food is valuable. This leverage gets even more intense in the presence of existential risks like AI. My guess is that basically all current work in AI, outside of the tiny field of AI safety, is negative expected value for humanity.

Of course, you, personally are not always going to be able to think about it and make a better call. I’m certainly not advocating that you do an intensive private investigation into the details of your company’s finances. But I think it’s very common for people to just… not seriously consider that jobs might be net-negative for society? It’s a very uncomfortable thought. So I think most people don’t even start to ask the question.

As human society gets larger, things are getting weirder. The eddy currents being shed off from the turbulence of growth can be so big that you never know you’re inside one. And neither your salary, nor seeing the numbers on the dashboard go up, nor even the fulfilling satisfaction of sharing a victory with your coworkers are that strong of a signal. But I think things are still comprehensible enough that it’s worthwhile for you to spend some time asking the question.

  1. I mean, I personally didn’t have access to production data, but it was close enough. ↩︎

Imagine history like you would a memory

This is my post for day 3 of the Inkhaven writing retreat.

I think I’ve been making a mistake when learning history.

I rely strongly on visualization when I think. When I read fiction, I will naturally visualize everything happening. It doesn’t really even feel like an option; it just feels like part of what it means to be reading. It’s not photo-realistic or anything. In fact, it has the same kind of blurry sense that dreaming has. I assume it’s the exact same machinery.

When I read about history, I do the same thing. I naturally visualize the people and events that I’m reading about. Because, again, that just feels like part of what it means to be reading and understanding the content.

At some point it occurred to me that this modality of visualizing is, for me, different from what happens when I visually recall memories. It has a different feeling somehow. And maybe a different visual style; it’s hard to tell.

I would like to say that this realization happened from something like watching a historical movie, or seeing some of those colorized historical photos.


But I’m pretty sure that it actually happened because I’m now old enough that some of my actual memories are becoming historically relevant.

I’m being confronted with video from 9/11, or depictions of floppy disks, or visual styles changing and being like — hey, hang on. That’s not a historical artifact, that’s a thing that actually hap– and then, yeah, realizing that I’m that old. The quality of photography and video also changed noticeably as I grew up. So I can look at my own childhood photos and realize that even though they have a colorization characteristic of a particular decade, they did in fact happen, and I can compare that to my memories of them happening.

So it has occurred to me that I could try to extrapolate this effect in the opposite mode. (Of course, you can also try this even if you’re not old.) When you read about Darwin, don’t think of that one picture we’ve all seen where he has a long beard. Just try to imagine some actual professor you had, and maybe he happens to have a beard. Somehow, when I say to myself “visualize this as if you remember it” I get an actually different experience in my brain.

It makes it easier for the history I’m learning to connect up to all the mental models and beliefs I already have about real people that I have experienced. I’m more likely to consider that perhaps certain people throughout history may have been autistic like some of my friends, or warmly charismatic like some other friends, or narcissistic demagogues like some people whose choices I may have been witnessing unfold on the news in real time.

This bridging obviously gets way harder if the history in question is further away from my experience. If I wanted to really truly feel the cruelty of King Ashurbanipal slaughtering his enemies from a chariot, I’d have to do a fair bit of work to bring up the right “memory” visual. Not only because the chariots & armor would be foreign to me, but also because I’ve never seen anyone get slaughtered.

I probably can’t afford to do this for all the history I read, but I think it’s very valuable to do so for a select sample. I want my models of history to be fully integrated with my models of my life experiences. Human nature was not different in the past. I want to have fully informed beliefs about things like what disasters may come, and what people’s responses to that might look like.

Are apples made of cells?

This is my post for day 2 of the Inkhaven writing retreat.

Like, probably, but I think this is an interesting and non-trivial question. I’m not actually 100% sure about the answer; I’m probably 97% sure of the answer. But the correct answer doesn’t actually affect the value of thinking about this question.

Even if you know a lot of biology, there is an explanation that a biologist could give about how apples are actually not made of cells which would, I claim, be pretty convincing. How much detail would they have to give before you believed them? Even if it turns out to be false, it’s good practice to know what it feels like on the inside to have your mind changed about something. Can you imagine what the biologist could say that would change your mind? Think about this for a second before reading on.

You learn pretty early on in biology class that all life is made of cells. (We’re not getting into the issue of viruses, here.) It’s pretty much what life is; cells are the level at which the replication occurs, so life begetting life is cells begetting cells.

But cells are extremely complicated and messy and made of lots of sub-parts. And they can produce bulk materials that are useful to the organism: materials which are not themselves made of cells. Tissues need to be made of cells in order to perform complex functionality that is locally responsive to the molecular conditions around it. Cells have the machinery to control what goes in and out, and control which genes get expressed when.

But not all parts of organisms need this.

  • Your hair and fingernails and outer skin layers are kind of cells but kind of not. They’re dead cells, flattened together and drained of most of their contents.
  • The enamel in your teeth is not made of cells. It just needs to sit there being a rock. This does come at the cost of self-repair, hence dentistry being a major category of medical care.
  • Bones are in large part made of minerals, which is to say, tiny rocks, which I would claim are not cells. But these minerals seem to be fractally mixed-in with the myriad organic operations of the bones, almost as though bones are perfectly calibrated to queer the made-of-cell/not-made-of-cells binary.
  • Your bladder is full of, let’s say, non-cellular fluid.

And some cells are very large. Eggs are often cited as large cells (though it’s very unclear to me whether this is true), with the ostrich egg being the largest. Could apples be giant, single plant cells? How sure are you?

So, what’s up with apples? Is there some minimal cellular machinery around the edges that packs in the sugars, pumping nutrients in through the stem, making sure the skin grows in proportion, with the bulk of the apple mass being undifferentiated acellular deliciousness? Or is it cell walls all the way through, crunching crisp copies of every chromosome?

If you were the first discoverer of cells, you would need to spend a while going around and checking different types of tissues before you could justify the generalization “all life is made of cells”. And I think fruit is a category that you would be less justified in generalizing to.

So how could you tell? Are there ways other than looking it up, or using a microscope?

The origin of writing is surprisingly unclear

This is my post for day 1 of the Inkhaven writing retreat.

History is often defined to have begun with writing. That is, something is considered history if we know about it because someone wrote it down. Otherwise, it is considered pre-history. We only know about pre-history by making inferences from physical evidence.

Before we go any further, the typical definition of a writing system is that it must be able to encode, physically and persistently, arbitrary speech from a language. Humans have been using physical markings and symbols to communicate specific concepts like “sun” or the names of kings for a much, much longer time than they have been using a full writing system.

The classic story is this:

The Sumerians invented writing first, around 3400 BCE, with cuneiform. The Egyptians followed suit shortly after, around 3100 BCE. It’s unclear to what degree the Egyptian invention of writing was caused by or influenced by cuneiform; they likely knew about it. Notably later, around 1200 BCE, the people of the Shang dynasty invented oracle bone script, which would evolve into the modern Chinese script. Mayan hieroglyphs extend back to 200 BCE, in a place so far away from Asia that their development is certainly independent.

So: four, maybe three, independent inventions of writing. I think this story is extremely cool and beautiful on its own. But the details of what we know and how we know it reveal a surprisingly rich space of other possible stories.

Disclaimer: I am not a professional in any related field and I find it surprisingly difficult to recover clear facts about these timelines from online research. Everything below represents my general impression more than any specific claims.

How old is it?

Archaeologists (and, to be fair, many scientists) seem to have a tendency to state that something is as old as the oldest existing positive example. This is understandable; there is a long history of people just claiming that stuff is really old, and getting fame and attention for that.

While archaeologists sometimes acknowledge that something could be older, they do not, as a field, seem to be attempting to collaboratively build a probabilistic model of how old things are, taking into account less crisp things like the existence of prerequisite conditions, or the occurrence of events which could have wiped out earlier evidence. Another way to say this is that they don’t seem to think in a very Bayesian way.

But sometimes, you pretty much just know how old something is, because you have records during its entire developmental process.

If you have lots of fossils of dinosaurs, and then lots of fossils of increasingly bird-like dinosaurs, and then lots of fossils of decreasingly dinosaur-like birds, and then lots of fossils of birds — you pretty much can just know how old birds are.

Similarly, we have found stone tools with a range of developmental sophistication going all the way from fancy obsidian daggers back to things that just barely look like deliberately broken rocks, with countless samples smoothly filling in the space.

Textiles, however, essentially all decay within a few thousand years at the absolute longest. So whether or not there was 300,000-year-old clothing, we would not find it.

Similarly, there seem to be tattoos on bodies as far back as the skin is still preserved.

For this reason, it seems to me like we can know exactly how old cuneiform is.

It all started in the 8th millennium BCE with clay balls called “bullae”, which were impressed with symbols that recorded goods exchanged. This evolved over time into simple seal impressions on flat tablets. The denoting of quantities and types of goods grew more complex over time. Pictographs began to be used for what they sounded like, and not just what they depicted. “Proto-cuneiform” is considered to have begun development around 3200 BCE, not reaching full writing system status until more like 2900 BCE. We have hundreds of tablets from this period,1 and a pretty good sense of how it developed.

Egyptian hieroglyphs seem to have a fuzzier record of their proto-writing period. Full writing emerges around 2800 BCE. There are some simpler uses of hieroglyphs dating back to the ink labels on jars from the tomb of King Ka from 3120 BCE, or the ivory tags from Abydos Tomb U-j from 3250 BCE. Given how close this is to the dates for cuneiform, I’m confused about why the consensus is confident that cuneiform was older.

In stronger contrast, oracle bone script, the ancient predecessor of Chinese, seems to have jumped into the historical record as a fully-formed writing system in 1200 BCE. I claim that this means that we have basically no idea how old Chinese writing is. The windfall of artifacts seems to be due to the rulers engaging in a particular practice of divination, leaving records of the query and answer on turtle shells and ox scapula, which are relatively durable. At roughly the same time,2 they started inscribing onto bronze vessels, which are extremely durable. If they were writing on paper, bamboo or silk before that, we wouldn’t see it. There are several artifacts which are thousands of years older than oracle bone script, claimed to be potential predecessors of it. But they are tauntingly sparse.

The history of Mayan seems far fuzzier still. It seems not to have originated with the Maya at all. Another script found nearby is Zapotec, which, while obviously strongly related, is different enough that it is considered undeciphered. And there are a handful of much older artifacts that indicate that this family of scripts may have been developing as far back as the Olmec civilization. These artifacts seem to show symbols that are less likely to constitute a full writing system. But the intermediate record is so sparse that it’s hard to say much. All these artifacts are on stone: the most durable material, and also one of the most effortful to write on. We can be certain the writing was first developed on an easier medium.

Beyond these four script families, there are many other artifacts across the world that may represent independent inventions of writing, like the knotted cords of the Inca, the rongorongo glyphs of Rapa Nui, a competitor to cuneiform, or the mysterious symbols on many Vinča artifacts, the latter potentially dating from 5000–3500 BCE.

When I look at all these facts combined, they seem at least compatible with many different histories. The blank spots in the chronology seem to contain plenty of space for unseen structure.

What does it mean?

It’s hard to say why I find it so compelling to know exactly how the origin of writing went down. In some sense, writing feels like humanity fulfilling its potential. Information is incredibly powerful. It can etch the world onto a grain of sand. It’s like a wormhole that teleports the past into the future. There was a brief period of history when no one on earth could read any of these four scripts. But true language has structure that reflects the structure of the world, and that structure made decipherment possible. Now, all four can largely be read again. Although I do not think writing does much to preserve individual human souls from ultimate annihilation, I think it goes a long way towards preserving humanity.

  1. If you were thinking you’d heard there were hundreds of thousands of cuneiform tablets, those ones are from after the proto-cuneiform period. ↩︎
  2. Since oracle bone script and the bronze scripts are attested from almost the same time, I am also confused about why oracle bone script is universally reported to be the “oldest” form of Chinese writing. EDIT: I go deeper on this question in a future Inkhaven post. ↩︎

The Empty Tattoo

I’ve always thought it would be cool to have a tattoo. No, not “cool”; satisfying. To have a symbol that I like, that is meaningful to me, that I can carry around with me. I would put one on each wrist, because I would see them often, it’s a nice blank spot, and my hands are symbolic of my interaction with the rest of the world.

I am a person who is very interested in and aesthetically aligned with finding things that are singularly good in some way. What is the optimal way to make choices? What is the truth underlying all other truths? What is the best possible future? The most important thing? So when I think about getting a tattoo for myself, I don’t just think of getting a pretty design, or a character from a show, or a commemoration of a past event. I think, what is one message that I fully love? What symbol most represents me? What would be optimal to remind myself of so often?

I could look at it and take regular pleasure in the meaning of the thing. People I meet could ask about it, and I would get to excitedly tell them about the cool meaningful thing.

There are so many great things it could be: the earth; the earliest known writing; neurons; a depiction of Turing machines; of infinity, of an infinite future, of unbounded potential; of striving; of cooperation between sentient beings.

But, I don’t have one yet. And it’s because, well, I haven’t found the “right” thing. I haven’t found a concrete, visual expression of something that passes my bar for being permanently part of my experience.

I know, I know — “you’ll never find the perfect thing”, “nothing’s perfect”, “you’re missing the point of tattoos”, “don’t let the perfect get in the way of the good” etc. etc. But… I don’t think I’m falling into the failure mode that you might be imagining. I don’t feel anxious about getting something and then not liking it later. I’m not sure that there is a perfect thing. I just haven’t thought of anything that really excites me. If 50 years go by and I still don’t have a tattoo, I don’t think I’m going to regret it. I think I’ll just think, “well, I didn’t find the right thing, and that’s perfectly okay”.

But something occurred to me today: an alternative framing. I was pacing around my apartment, trying to make progress on my research. I was thinking about what it means when an entity receives two different inputs and takes two different actions, versus the same action. What does it mean when it takes two different actions, but ones which result in the same outcome? And at the same time, I was thinking about my cool coworker friend who helped me decide to get my ears pierced, and who also has lots of cool tattoos. I looked at my arms and thought about how I don’t have tattoos. And I thought: me not having a tattoo is one of the ways the world can look. What does it mean about the difference between this world and the one where I do have tattoos?

And then, I had the sense that it meant something positive rather than something negative. That it was the result of an action rather than an inaction. That is to say: I have a tattoo. The fact that I have not yet found a suitable symbol is a meaningful and enduring fact about me. The nothingness emblazoned on my arms is a direct causal effect of my continued search. I can look at my arms, and instead of thinking “maybe I should have gotten Turing machines tattooed there years ago”, I can think “there it is, there’s the symbol to myself that I am one who searches, one who is not satisfied”. And in fact that’s essentially what I do think when I look at my arms. Because when I think, “I still wish I had a tattoo” my immediate next thought is, “let’s think harder about what it could be”, which causes me to start thinking about potential essential expressions of ultimate meaning. This is essentially identical to thinking, “I still wish I understood the most important things… let’s keep trying to understand them”.

In almost all of modern mathematics, the objects of discourse are ultimately defined in terms of what are called sets. The number 5 is the set containing the numbers 0 through 4. A function is a set of pairs (where a pair is itself encoded as a set of sets), in which the first thing in each pair (itself some kind of set) gets mapped to the second thing (another set), et cetera. But if everything is in terms of sets, where does it bottom out? The empty set. The empty set is a perfectly valid set; it has no elements. Because nothing is in it, you don’t need to have a pre-existing type of object in order to define the empty set. You can start there.
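To make the construction concrete, here is a minimal Python sketch of the standard von Neumann encoding, where each natural number is the set of all smaller ones, built up from the empty set (the code is my own illustration):

```python
# Von Neumann ordinals: each natural number is the set of all
# smaller naturals, starting from the empty set.
def ordinal(n):
    """Return the von Neumann encoding of n as nested frozensets."""
    s = frozenset()      # 0 is the empty set
    for _ in range(n):
        s = s | {s}      # successor: n + 1 = n ∪ {n}
    return s

five = ordinal(5)
assert len(ordinal(0)) == 0   # the empty set has no elements
assert len(five) == 5         # 5 has exactly five elements
assert ordinal(4) in five     # ...including 4 itself
```

Everything bottoms out at `frozenset()`: no pre-existing objects are needed to get started.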

So that’s my tattoo; the empty tattoo.1 It represents everything I’ve found so far that’s perfect; nothing. The fact that nothing is in it is meaningful, because I have been searching. There are other people who have no tattoo and who have not been searching. You can’t tell us apart by looking, but that’s okay, because I know who I am.

  1. Of course, many people have a tattoo of the empty set itself. But, well, that doesn’t feel right for me. The set containing only the empty set is different from the empty set itself. ↩︎

Orders of Magnitude

This piece is a speech that I gave for the 2020 SF Bay area secular solstice. Solstice was online this year, and a partial recording can be found here. This speech begins at 46:56. It transitions into the moment of darkness, the traditional “midpoint” of solstice where the stage is dark and you are left with your own thoughts for a brief period of time.


Every year, we gather and think about many different themes that are important to us. Many of these themes are about the very good, and the very bad.

We think about humanity’s past. We sing Bitter Wind Blown and reflect on what it would be like to have to light our own fires for warmth. We try to imagine what it would feel like not to know why it gets cold, or why we get sick, or whether there is anything we can do about it.

We think about the present. We share stories of humanity’s astonishing technological accomplishments and feats of cooperation over the last five thousand years.

And sometimes, we think about the present from another perspective. Beside our achievements we see stretching from the past into the present an unbroken thread of suffering, woven thick with the experiences of countless souls.

So at the same time, we must contend with the fact that we are standing on a rising pedestal, flung exponentially higher by our ancestors, and that also, all around us, unacceptable atrocities continue.

How are we to make sense of this? When confronted with the desire to understand everything, and the compulsion to ensure the prosperity of the far future; how are we to stretch our minds across both the enormous losses and enormous gains?

This is a challenge that is with us in normal times. This last year has brought a further complication to the picture. With pervasive fear of sickness, stuttering economies, and our institutions struggling to keep their heads above water, it no longer feels like we are at the apex of human history.

To better understand these extremes, we need to deploy the tools of rationality.

In the third century before the common era, a man named Archimedes wrote an essay called The Sand Reckoner. The ancient Greeks, you see, had a term for a large quantity; a “sand hundred”. The idea was that, although one could see with the naked eye that sand consisted of discrete grains, enumerating all the grains in a sand dune, let alone an entire desert, was beyond possibility; beyond human abilities. But the mind of Archimedes soared above such imagined limitations. He invented a means of manipulating large numbers, which today we would call exponents, and with these he calculated upper bounds on the number of sand grains in the whole of the earth, and indeed, the sand-grain volume of the entire universe, as they believed it to be at the time. In this exercise, Archimedes had reckoned the sand. And in bringing the immensity of the sand inside himself, he also unleashed the mind of humanity onto the universe.
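As an aside, the shape of Archimedes’ calculation can be sketched in a few lines of Python. The constants below are my rough paraphrase of the figures in The Sand Reckoner, not exact quotations; he rounded up at every step and landed on roughly 10^63 grains:

```python
# A back-of-the-envelope reconstruction of the Sand Reckoner bound.
# All constants are my rough paraphrase, not Archimedes' exact figures.
GRAINS_PER_POPPY_SEED = 10_000     # at most 10^4 grains fill a poppy seed
SEEDS_PER_FINGER_BREADTH = 40      # poppy-seed widths per finger-breadth
FINGERS_PER_STADION = 10_000       # a deliberate over-estimate
UNIVERSE_DIAMETER_STADIA = 10**14  # the Aristarchus-style cosmos

# Volume scales as the cube of linear size: a one-finger-breadth sphere
# holds at most 40^3 poppy-seed volumes of sand.
grains_per_finger_sphere = GRAINS_PER_POPPY_SEED * SEEDS_PER_FINGER_BREADTH**3

diameter_in_fingers = UNIVERSE_DIAMETER_STADIA * FINGERS_PER_STADION
total_grains = grains_per_finger_sphere * diameter_in_fingers**3

print(f"upper bound: about 10^{len(str(total_grains)) - 1} grains")
```

With these paraphrased figures the bound comes out around 10^62, within an order of magnitude of his famous result.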

One virtue of rationality is precision. And sometimes, precision is less about knowing decimal places, and more about knowing what order of magnitude you’re on. Another virtue of rationality is scholarship. And the way that I know how to reckon with today’s immensities is to do research, and find statistics that tell me something about what order of magnitude we’re on. 

For example, the Spanish flu of 1918 killed somewhere between 20 and 100 million people. For comparison, malaria kills about half a million people per year, and COVID has killed 1.7 million people. 

In the second quarter of this year, the US GDP had the largest decline on record, and then the third quarter had the largest increase on record, although the net of those was negative. 

It took the world about a year to develop and begin distributing a vaccine. The previous fastest vaccine development was four years, in 1967.

These facts can be objects of meditation. They can be devices in your practice to understand the world around you, to orient your mind, and to choose future actions. Despite the clarity of specific numbers, it can take a long time to really understand what they mean. A lot of exposure is necessary to take these numbers inside yourself. My recommendation is to think about it lightly often, and deeply on occasion.

To think about it lightly often, perhaps form some associations that will let you be incidentally reminded of the good and the bad. For example, every time I see a plane in the sky, I just can’t help but stop for a second, follow its path through the sky with my eyes, and imagine all the people on board. It is truly a miracle that for a modest sum, each of those people can be safely hurled across the surface of this great earth, and in the meantime admire the tops of clouds.

In contrast, whenever I walk down the street and see a padlock, it reminds me that we have failed to solve basic coordination problems between people. While there still exist wars and police, or even fences and padlocks, we have not finished our work.

This is a careful balancing game; you don’t want to be so often optimistic that you lose your sense of urgency in building the future, and you don’t want to be so despondent that you lose your will to try. Minds vary in their makeup, so experiment at your own discretion.

For me, these associations are a light reminder, almost a subconscious one, which give me the opportunity to choose how deeply I want to reflect. The more you traverse the orders of magnitude, the more familiar they become, and the more you will be oriented to the exponential reality.

And tonight is a time to consider it more deeply. So for now, I will leave you with one more statistic on which to reflect in silence for the next minute. The global death rate is about 107 people every minute, or just under two people each second. This year, COVID has added about three deaths for each of those minutes.

[A pendulum clock begins ticking in the background. I blow out the candle and fade to darkness.]