2017: WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?

General Standardization Theory

Try building a tower by piling irregular stones on top of each other. It can be done, eight, nine, sometimes ten stones high. You need a steady hand and a good eye to spot each rock’s surface features. You find such man-made “Zen stone towers” on riverbanks and mountaintops. They last for a while until the wind blows them over. What is the relationship here between skill and height? Take relatively round stones from a riverbank. A child of two can build a tower two stones high. A child of three, with improved hand-eye coordination, can manage three stones. You need experience to get to eight stones. And you need tremendous skill and a lot of trial and error to go higher than ten. Dexterity, patience and experience can get you only so far.

Now, try with a set of interlocking toy bricks as your stones. You can build much higher. More importantly: your three-year-old can build as high as you can. Why? Standardization. The stability comes from the standardized geometry of the parts. The advantage of skill is vastly diminished. The geometry of the interlocking bricks corrects the errors in hand-movement. But structural stability is standardization’s least impressive feat. Its advantages for collaboration are much more significant.

We have long appreciated the advantages of standardization in business. In 1840, the USA had more than 300 railroad companies, many with different gauges (the width between the inner sides of the rails). Many companies refused to agree on a standard gauge because of heavy sunk costs and a desire to preserve barriers to competition. Where two rail lines connected, men had to offload the cargo, sometimes store it and then load it onto new cars. In a series of steps, some by top-down enactment, but mostly by bottom-up coordination, the industry finally standardized gauges by 1886. Other countries saw similar “gauge wars.” England ended them by legislation in 1846.

In the last hundred years, every national government and supranational organization, and virtually every industry has created bodies to deal with standardization. They range from the International Organization for Standardization (ISO) to the World Wide Web Consortium (W3C) to bodies like the “Bluetooth Special Interest Group.” Their goals are always a combination of improved product quality, reputation, safety and interoperability.

What is the best way to achieve optimal standards? Game theory (the study of coordination games) offers a vast body of knowledge, yet setting standards in the real world is not easy. The advantages, however, are huge: landing on even a relatively low local peak is vastly preferable to no coordination at all. Let’s call the sum of this theoretical and practical knowledge from management and game theory the “Special Theory of Standardization”—akin to Einstein’s “Special Theory of Relativity.”
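
The local-peak point can be made concrete with a toy coordination game. The sketch below is not from the essay; the “wide” and “narrow” gauges and their payoffs are invented for illustration. It shows that even the inferior standard is a stable equilibrium, and that either standard beats no coordination.

```python
# Hypothetical two-player coordination game over rail gauges.
# Payoff numbers are illustrative only.
payoffs = {
    ("wide", "wide"):     (10, 10),  # the better standard
    ("narrow", "narrow"): (6, 6),    # an inferior but workable standard (a "low local peak")
    ("wide", "narrow"):   (0, 0),    # no coordination: cargo must be reloaded
    ("narrow", "wide"):   (0, 0),
}

def is_nash_equilibrium(row, col):
    """True if neither player gains by unilaterally switching gauge."""
    strategies = ("wide", "narrow")
    r_payoff, c_payoff = payoffs[(row, col)]
    best_row = max(payoffs[(r, col)][0] for r in strategies)
    best_col = max(payoffs[(row, c)][1] for c in strategies)
    return r_payoff == best_row and c_payoff == best_col

for profile in payoffs:
    print(profile, "equilibrium:", is_nash_equilibrium(*profile))
# Both ("wide", "wide") and ("narrow", "narrow") are equilibria:
# an industry can get stuck on the worse standard, yet both beat a mismatch.
```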

However, standardization is a vastly more powerful concept, one that might lead to a “General Standardization Theory.” Let’s look at a few domains that are undergirded by standardization.

Take matter, which ranges from the elementary particles up through the periodic table’s standardized atoms to an endless number of discrete molecules. Simple chunkiness doesn’t seem to be enough to build a universe. Apparently, that requires standardized chunks. From a “General Standardization Theory” point of view: Is this the optimal standard or just a local peak? Or take living matter. A cell can work only with standardized building blocks (amino acids, carbohydrates, DNA, RNA, etc.). Could something as complex as a cell ever work outside of standards? A “General Standardization Theory” might provide answers on the limits of complexity that can be achieved without standards.

Further up the chain, in biology, the question is how to get huge numbers of unrelated individuals to cooperate flexibly. Some anthropologists name the invention of religion as the solution. Others suggest the evolution of moral sentiments, the invention of written law or Adam Smith’s invisible hand. I suggest that standardization is at least part of the solution. People can cooperate in large numbers without standards through all the known mechanisms. But, eventually, groups that use standards outpace groups that do not. Is there a threshold where cooperation breaks down without the injection of standards?

My hypothesis: yes, but it is much higher than Dunbar’s number of approximately 150 individuals, possibly in the tens of thousands. Interestingly, only Homo sapiens devised standardization; no other animal did. Then again, this advance took even humans a long time—until the fifth millennium BC, which brought the standardization of language (writing), the standardization of value (money) and standardized weights.

2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Self-Aware AI: Not In A Thousand Years.

The widespread fear that AI will endanger humanity and take over the world is irrational. Here is why.

Conceptually, autonomous or artificial intelligence systems can develop in two ways: either as an extension of human thinking or as radically new thinking. Call the first “Humanoid Thinking” (or “Humanoid AI”) and the second “Alien Thinking” (or “Alien AI”).

Almost all AI today is Humanoid Thinking. We use AI to solve problems that are too difficult, time-consuming or boring for our limited human brains to process: electrical grid balancing, recommendation engines, self-driving cars, face recognition, trading algorithms, and the like. These artificial agents work in narrow domains with clear goals that their human creators specify. Such AI aims to accomplish human objectives—often better, with fewer cognitive errors, fewer distractions, fewer outbursts of bad temper and fewer processing limitations. In a couple of decades, AI agents might serve as virtual insurance sellers, doctors, psychotherapists, and maybe even virtual spouses and children.

We will achieve much of this, but such AI agents will be our slaves with no self-concept of their own. They will happily perform the functions we set them up to enact. If screw-ups happen, they will be our screw-ups, due to software bugs or to overreliance on these agents (Daniel C. Dennett’s point). Yes, Humanoid AIs might surprise us every once in a while with novel solutions to specific optimization problems. But in most cases novel solutions are the last thing we want from AI (creativity in the navigation of nuclear missiles, anyone?). That said, Humanoid AI’s solutions will always fit a narrow domain. These solutions will be understandable, either because we understand what they achieve or because we understand their inner workings. In some cases, the code will become too enormous and tangled for any one person to understand, because it is continuously patched. In these cases we can turn it off and start programming a more elegant version. Humanoid AI will bring us closer to the age-old aspiration of having robots do most of the work while humans are free to be creative—or to be amused to death.

Alien Thinking is radically different. Alien Thinking could conceivably become a danger to Humanoid Thinking; it could take over the planet, outsmart us, outrun us, enslave us—and we might not even recognize the onslaught. What sort of thinking will Alien Thinking be? By definition, we can’t tell. It will encompass functionality that we cannot remotely understand. Will it be conscious? Most likely, but it need not be. Will it experience emotion? Will it write bestselling novels? If so, bestselling to us or bestselling to it and its spawn? Will cognitive errors mar its thinking? Will it be social? Will it have a theory of mind? If so, will it make jokes, will it gossip, will it worry about its reputation, will it rally around a flag? Will it create its own version of AI (AI-AI)? We can’t say.

All we can say is that humans cannot construct truly Alien Thinking. Whatever we create will reflect our goals and values, so it won’t stray far from human thinking. You’d need real evolution, not just evolutionary algorithms, for self-aware Alien Thinking to arise. You’d need an evolutionary path radically different from the one that led to human intelligence and Humanoid AI.

So, how do you get real evolution to kick in? Replicators, variation and selection. Once these three components are in place, evolution arises inevitably. How likely is it that Alien Thinking will evolve? Here is a back-of-the-envelope calculation:
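
To make the three ingredients concrete, here is a minimal, purely illustrative replicator loop (not from the essay, and, per the argument above, emphatically not “real” evolution): copies are made, variation is injected, and selection keeps the fitter variants.

```python
# Abstract sketch of replication, variation and selection.
# A toy loop like this is not "real" evolution; it only names the parts.
import random

def evolve(population, fitness, generations=100, mutation_rate=0.1):
    """Toy replicator loop over a list of numbers."""
    for _ in range(generations):
        # Replication with variation: every individual leaves a noisy copy.
        offspring = [x + random.gauss(0, mutation_rate) for x in population]
        # Selection: keep the fitter half of parents plus offspring.
        pool = sorted(population + offspring, key=fitness, reverse=True)
        population = pool[: len(population)]
    return population

# Hypothetical usage: the fittest value is 3, so the population drifts toward it.
result = evolve([random.uniform(-10, 10) for _ in range(20)],
                fitness=lambda x: -(x - 3) ** 2)
print(round(sum(result) / len(result), 2))  # prints a value near 3
```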

First, consider what getting from magnificently complex eukaryotic cells to human-level thinking involved. Achieving human thought required a large portion of the Earth’s biomass (roughly 500 billion tons of eukaryotically bound carbon) over approximately two billion years. That’s a lot of evolutionary work! True, human-level thinking might have happened in half the time. With a lot of luck, even in 10% of the time (that’s 200 million years), but it’s unlikely to have happened any faster. Remember, you need not only massive amounts of time for evolution to generate complex behavior; you also need a petri dish the size of Earth’s surface to sustain this level of experimentation.

Assume that Alien Thinking will be silicon-based, as all current AI is. A eukaryotic cell is vastly more complex than, say, Intel’s latest i7 CPU chip—both in hardware and software. Further assume that you could shrink that CPU chip to the size of a eukaryote. Leave aside the quantum effects that would stop the transistors from working reliably. Leave aside the question of the energy source. You would have to cover the globe with 10^30 microscopic CPUs and let them communicate and fight for two billion years for true thought to emerge.

Yes, processing speed is faster in CPUs than in biological cells, because electrons are easier to shuttle around than atoms. On the other hand, eukaryotes work massively in parallel, whereas Intel’s i7 runs only four processes in parallel (four cores). Eventually, at least to dominate the world, these electrons would need to move atoms to store their software and data in more and more physical places. This necessity will slow their evolution dramatically. It’s hard to say whether, overall, silicon evolution will be faster than biological evolution. We don’t know enough about it. I don’t see a reason why this sort of evolution would be more than two or three orders of magnitude faster than biological evolution (if at all)—which would bring the emergence of self-aware Alien AI down to roughly a million years.
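
Spelling out the arithmetic behind that estimate (a rough sketch using only the essay’s own figures, which are themselves rough):

```python
# Back-of-the-envelope restatement of the numbers above.
biological_runtime_years = 2e9                          # eukaryotes -> human-level thinking
lucky_runtime_years = 0.1 * biological_runtime_years    # the optimistic "10% of the time" case

for speedup in (1e2, 1e3):                              # two to three orders of magnitude
    slow = biological_runtime_years / speedup
    fast = lucky_runtime_years / speedup
    print(f"speedup {speedup:.0e}: {fast:,.0f} to {slow:,.0f} years")
# speedup 1e+02: 2,000,000 to 20,000,000 years
# speedup 1e+03: 200,000 to 2,000,000 years
# Even the most generous combination lands in the hundreds of thousands of
# years, hence "roughly a million years" rather than decades.
```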

What if Humanoid AI becomes so smart it could create Alien AI from the top down? That is where Orgel’s Second Rule kicks in: “Evolution is smarter than you are.” It’s smarter than human thinking. It’s even smarter than humanoid thinking. And, it’s much slower than you think.

Thus, the danger of AI is not inherent to AI, but rests on our over-reliance on it. Artificial Thinking is not going to evolve to self-awareness in our lifetime. In fact, it’s not going to happen in literally a thousand years.

I might be wrong, of course. After all, this back-of-the-envelope calculation applies legacy human thinking to Alien AI—which, by definition, we won’t understand. But that’s all we can do at this stage.

Toward the end of the 1930s, Samuel Beckett wrote in a diary, “We feel with terrible resignation that reason is not a superhuman gift…that reason evolved into what it is, but that it also, however, could have evolved differently.” Replace “reason” with “AI” and you have my argument.

2013: WHAT *SHOULD* WE BE WORRIED ABOUT?

The Paradox of Material Progress.

I recently had dinner with a friend—a prominent IP lawyer—at his mansion in Switzerland, one of the few spots directly on Lake Zurich. As is customary with people who have mansions, he gave me the complete tour, not leaving out the sauna (how many different ways are there to decorate a sauna?). The mansion was a fireworks display of technological progress. My friend could regulate every aspect of every room by touching his iPad. “Material progress”, he said during his show, “will soon come to every home.” Stories of high-tech, high-touch houses have been around for decades, but it was still neat to see one that finally exists. Clearly sensing my lack of amazement, he guided me to his “picture-room.” Photographs on display showed him with his family, on sailboats, on ski slopes, golf courses, tennis courts and on horseback. One photo he seemed especially proud of showed him with the Pope. “A private audience”, he said.

So what do we learn from this that we didn’t learn from The Great Gatsby?

Material progress has spread and will continue to spread. Knowledge is cumulative. At times in our past, knowledge has diminished. The classic case is Tasmania or—on a grander scale—the Middle Ages. But since Gutenberg, it is difficult to imagine that humanity will ever again shed information. Through the accumulation of knowledge and global trade, the goods and services that my lawyer-friend enjoys today soon will be available to the poorest farmer in Zimbabwe. But no matter how much knowledge we accumulate, no matter how cheap computation, communication and information storage become, no matter how seamlessly trade flows, that farmer will never get any closer to a date with the Pope.

See the Pope allegorically as all the goods and services that are immune to technological creation and reproduction. You can vacation on only one St. Barts island. Rauschenberg created just a few originals. Only so many mansions dot the lakeshore in Zurich. Bringing technology to bear won’t help create any more. A date with a virtual Pope will never do the trick.

As mammals, we are status seekers. Non-status-seeking animals don’t attract suitable mating partners and eventually exit the gene pool. Thus goods that convey high status remain extremely important, yet out of reach for most of us. Nothing technology brings about will change that. Yes, one day we might re-engineer our cognition to reduce or eliminate status competition. But until that point, most people will have to live with the frustrations of technology’s broken promise: goods and services will be available to everybody at virtually no cost, yet status-conveying goods will inch even further out of reach. That’s the paradox of material progress.

Yes, luxury used to mean things that made life easier: clean water, central heating, fridges, cars, TVs, smartphones. Today, luxury tends to make your life harder. Displaying and safeguarding a Rauschenberg, learning to play polo and maintaining an adequate stable of horses, or obtaining access to visit the Pope are arduous undertakings. That doesn’t matter. Their very unattainability, the fact that these things are almost impossible to multiply, is what matters.

As global wealth increases, non-reproducible goods will appreciate exponentially. Too much status-seeking wealth and talent is eyeing too few status-delivering goods. The price of non-reproducible goods is even more dependent on the inequality of wealth than on the absolute level of wealth in a society—further contributing to this squeeze.

The promise of technological progress cannot, by definition, be kept. I think we should worry about the consequences, including a conceivable backlash against the current economic ecosystem of technology, capitalism and free trade.
