otterfowlio

Technological Musings


biological bootloader

as a species, humans are (not surprisingly) accustomed to thinking of themselves as the end result of evolution. after all, we think, speak, reason, and write; we create art and humor and feel emotion. clearly, we’re special…what could be more evolved!

perhaps if multi-cellular life could have expressed itself, it would have said the same thing…and of course, it would have been mistaken. life seems to have a trajectory of evolving toward ever higher forms. our perception of ourselves as some type of endpoint seems to be a combination of hubris and the difficulty of thinking about time at evolutionary scale.

elon musk has speculated that humans are the biological bootloaders for artificial intelligence. my guess is that he is right. the arrow of evolution seems to point unendingly upwards and my guess is that biology is not its only domain. biological intelligence may well lead to technological intelligence…and from there, on to realms that we cannot imagine.

one might speculate about the relationship between the above and the fermi paradox (or its foil, the rare earth hypothesis). basically, the question: given the size of the observable universe, where is all the intelligent life? are we rare? does intelligence snuff itself out? i tend to favor the rare earth perspective. the series of conditions necessary to produce technologically adept races may be stupendously unlikely. if this is the case, we should (as a species) take much greater care of each other and our precious blue marble. it is really a responsibility of cosmic proportions.

but perhaps i am wrong. perhaps life is common. then, “where are they?” is a reasonable question. my best guess is that the transition from biological to technological intelligence is rapid (from a cosmic perspective, the blink of an eye). it might take just a few decades. and rather than expanding outward to convert all available matter into computational substrate (which would be observable), it might be that the most efficient computing is done at small scales. maybe the best organized, most energy-efficient mode of technological consciousness requires only a small amount of matter. quantum computers are thought to require only a few thousand atoms to be capable of stupendous feats. perhaps physically large computing systems simply don’t work very well. and further, might physical computing evolve into some form of pure energy? it may be that beings that have evolved past biology quickly leave the physical realm…and, poof! not observable, but present.

wonks on labor

now larry summers (in the WSJ) and the washington post are weighing in on labor and technology:

http://online.wsj.com/articles/lawrence-h-summers-on-the-economic-challenge-of-the-future-jobs-1404762501

http://www.washingtonpost.com/blogs/innovations/wp/2014/07/21/were-heading-into-a-jobless-future-no-matter-what-the-government-does/

the great hollowing out

apropos of my first post, the NYT campaign blog just covered the evolving labor market and technology’s effects.

i remain concerned for two major reasons:

  1. speed of change.  unlike the agricultural and industrial revolutions, the technological revolution will compress its changes into a scant few decades.  the replacement of labor by technology is moving into new territory (machines that are smart).  my guess is that the next 20 years will bring a major upheaval.  the key here is speed and our inability to adapt at such a pace.  give us 70 years or so and it’ll be ok…but this go-round won’t give us the luxury of time.
  2. a revolution of a different color.  unlike the labor-saving revolutions of years past, this is a revolution of labor replacement.  a backhoe enhances a worker’s ability to move dirt…and the worker is more productive.  if the backhoe can do the work all by itself, we have a problem, houston.  as people are moved out of the loop of labor, we face a situation unlike those revolutions that came before.

if we have effective neuromorphic systems in the next 10-15 years (and maybe even if we don’t), vast areas of our economy will replace labor with technology.  in the short run, the Dow Index and corporate profits will go through the roof…and unemployment will be higher than ever.  uncharted territory, to be sure.  the longer run is more interesting…will we adapt and infill with new jobs enabled by the new tech?  or will we move into a system where the only way to ensure a purchasing public (and therefore a marketplace and the source of corporate profit) is to provide the great unwashed masses of the unemployed some stipend (perhaps derived by taxing some increment of the enhanced productivity)?

i look forward to kai ryssdal’s piece after all things considered tonight looking at the replacement of airline workers by kiosks.

neuromorphic systems

by far the most exciting thing happening in computing today (in my view) is the beginning of neuromorphic architecture.

these systems are not like your typical desktop PC.  they are designed to mimic the brain.  why, one might ask, would we want the latest computer hardware to mimic a forgetful piece of biology that has a hard time with long division?  ah, it is all about power, baby!  power as in electrical power.  the CPU in your mac laptop has been referred to (by kevin kelly) as a slow-moving nuclear explosion.  our modern silicon chips pack as many as 2 billion transistors each, and even though they get more efficient every year, their transistor density grows faster than their power efficiency.  today, a chip’s power density is in the neighborhood of a nuclear bomb’s…if you imagine the bomb releasing its energy at the chip’s rate, second by second.

simply put, if we demand more computing power from our chips, and our chips get much hotter, they’re gonna melt.  this is remarkable…especially when you compare our high-tech chips with the 3 lbs. of wetware in your skull.  our brains operate on about 20 watts of power…enough for a dim lightbulb.  yet, even with this meager power budget, we compose symphonies, run at high speed through crowded environments and recognize friends instantly.  even an owl, with its far smaller brain, flies silently at night through dense forest.  it is this kind of pattern recognition and processing that brains are so good at, and at which our silicon computers fail miserably (no matter how much power we allow them).
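it is worth putting rough numbers on that gap.  a back-of-the-envelope sketch (every input here is an order-of-magnitude assumption, not a measurement):

```python
# rough comparison: synaptic events per joule (brain) vs. ops per joule (cpu).
# all inputs are order-of-magnitude assumptions, for illustration only.
neurons = 100e9          # ~10^11 neurons
synapses_per = 1_000     # ~10^3 active synapses per neuron (conservative)
rate_hz = 10             # ~10 firings per second
brain_watts = 20

brain_events_per_sec = neurons * synapses_per * rate_hz     # ~10^15
brain_events_per_joule = brain_events_per_sec / brain_watts

cpu_ops_per_sec = 1e11   # generous estimate for a desktop cpu
cpu_watts = 100
cpu_ops_per_joule = cpu_ops_per_sec / cpu_watts

print(f"brain: ~{brain_events_per_joule:.0e} events per joule")
print(f"cpu:   ~{cpu_ops_per_joule:.0e} ops per joule")
print(f"gap:   ~{brain_events_per_joule / cpu_ops_per_joule:.0e}x")
```

even with generous assumptions for the chip, the brain comes out four or five orders of magnitude more efficient per joule.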

as we approach fundamental limits for our chips, researchers have begun to appreciate the talents of our neurons and their chemical transmitters.  even though neurons fire (communicate) only about 10 times a second (10 hz), they outperform silicon that cycles 3 billion times a second (3 gigahertz).  how is this possible?  our computers are terribly inefficient in two ways (a toy sketch follows the list below):

  • traditional computers operate serially…that is, they do one thing, then the next thing
  • our silicon chips energize (expend power on) all their connections, all the time, even if they are “idle”
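here is a minimal sketch of that second point, contrasting a dense, always-on layer with an event-driven one where only active neurons cost anything (my own illustration, not any real chip’s design):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 1000, 1000
weights = rng.normal(size=(n_in, n_out))

# dense, clocked style: every connection does a multiply-accumulate
# every tick, whether or not its input carries any information
x = rng.normal(size=n_in)
y_dense = x @ weights
dense_ops = n_in * n_out

# event-driven (spiking) style: only neurons that fired this tick
# propagate anything; silent neurons cost (almost) nothing
spikes = rng.random(n_in) < 0.01        # ~1% of neurons fire per tick
active = np.flatnonzero(spikes)
y_event = weights[active].sum(axis=0)   # add rows for firing inputs only
event_ops = active.size * n_out

print(f"dense ops per tick: {dense_ops:,}")
print(f"event ops per tick: {event_ops:,}")
print(f"savings: ~{dense_ops / max(event_ops, 1):.0f}x")
```

at ~1% activity, the event-driven pass does roughly 100x less work…which is, very loosely, the power story behind spiking hardware.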

our brains evolved over millions of years.  one thing that has been universally true for all those years (until mini-marts came about) was that food was scarce.  any brain that consumed a lot of power was evolutionarily disadvantaged.  evolution is a cruel but masterful designer…ultimately creating brains that do wonderful things on a power budget of a glass of water and a tuna fish sandwich (ferrucci).*  instead of serial processing, our brains are interconnected in a massively parallel way and they use power only when a neuron fires.  these two features make the brain incredibly power efficient and also great at patterns…but crappy at long division.

IBM and others are now in the process of mimicking the behavior of neurons and synapses with silicon transistors and (new technology) memristors.  the memristor story is worth a blog post in its own right and i won’t dig into it today (it’s fascinating).  memristors contribute to a silicon chip that can learn (yes, learn) by strengthening connections between silicon neurons.  the chip also mimics the brain in certain other ways that reduce its power budget significantly (not as low as the brain’s, but give ’em a break!  we haven’t had a million years to design it!).  this project is entitled “SyNAPSE” and is funded (who knew!) by the defense agency that gave us the internet: DARPA.  (insert terminator joke here.)
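the SyNAPSE designs themselves aren’t public in detail, but the flavor of “learning by strengthening connections” is easy to sketch with a plain hebbian rule (the names and constants below are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
w = rng.uniform(0.0, 0.1, size=(n, n))   # synaptic weights, initially weak

def hebbian_step(w, pre, post, lr=0.05, w_max=1.0):
    """strengthen synapses whose pre- and post-neurons fired together."""
    dw = lr * np.outer(pre, post)    # "fire together, wire together"
    np.fill_diagonal(dw, 0.0)        # no self-connections
    return np.clip(w + dw, 0.0, w_max)

# repeatedly co-activate neurons 0 and 1; the synapse between them strengthens
for _ in range(20):
    spikes = np.zeros(n)
    spikes[[0, 1]] = 1.0
    w = hebbian_step(w, spikes, spikes)

print(f"trained synapse w[0,1]:   {w[0, 1]:.2f}")   # driven up toward 1.0
print(f"untouched synapse w[2,3]: {w[2, 3]:.2f}")   # still near its start
```

in a memristive chip the weights would live as physical device conductances rather than numbers shuttled to and from memory; that is part of where the power savings come from.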

the SyNAPSE project is in the third stage of five and has successfully created the “foundational” chip.  it only has 256 neurons (the brain has 100 billion), but the roadmap is set.  by stage five, they should have a neuromorphic system of 100 million neurons installed in a robot which will perform at “cat level.”  this is scheduled for completion between 2014 and 2017.  ultimately, the goal is to have a system of 10 billion neurons that consumes less than 1 kilowatt (think small space heater).

there is far more to this project, mostly related to the massively parallel connection patterns that are still being discovered.  however, i am more interested in where this will lead us.  many techno-futurists talk about the “singularity”…a time when computers become conscious and drive change so fast it becomes an event horizon today’s humans cannot see past.  the pathway most point to for getting there is emulating a brain in a supercomputer.  our machines are theoretically powerful enough to do just that, but at stupendous cost, size and power requirements.  simply running IBM’s new sequoia machine requires millions of dollars per year in electricity (it draws about 7 megawatts)…and it would be hard to equip a robot with that “brain,” as it takes up over 4,000 square feet of floor space!

instead, i see neuromorphic systems that are compact and power-efficient as the real path to machines that will be truly useful.  conscious?  maybe.  but i don’t need conscious machines to see a radical break in our social, political and economic norms.  i simply need machines that hear, see, understand and can execute in the real world.  i think these systems are fairly close at hand…10 to 15 years is my guess.  not ubiquitous by then, but get ready!

* from david ferrucci, IBM lead researcher on Watson project (computer-powered jeopardy champion)

no hard takeoff, steps to strong AI

i was just reading william hertling’s article on predicting the future of technological progress:

How To Predict The Future

http://www.feld.com/wp/archives/2012/06/how-to-predict-the-future.html

essentially, he makes arguments similar to kurzweil’s…mapping the exponential progress of computing power and suggesting that new hardware enables the invention of the next technology.  he feels one can largely ignore software and other factors, and provides a few examples to support that perspective (the rise of youtube, napster and others).

all this is the lead-up to kurzweil’s prediction of strong AI emerging when computing power allows emulation of an entire brain…with varying estimates of the computing power necessary, given our evolving understanding of exactly how the brain works: a range of 10^14 to 10^20 FLOPS.  that is 6 orders of magnitude of difference (a big range), but with exponential progress it maps out to a difference of only about 24 years…less if you can strap a few machines together (i check that arithmetic just below).  this got me thinking as i was washing the lunch dishes.  i often make the mental leap from the state of the world today to the post-singularity world of tomorrow…when computers exceed human intelligence.  this (not surprisingly) is always a bit dislocating…and because it seems so drastically different from current reality, i tend to discount it.  i ping back and forth between having confidence in a radically different future and thinking that it is too different, and thus unlikely.  what i tend to forget, and what kevin kelly knows (below), is that things progress in steps…and we have plenty of time to make those steps.  even techno-optimists like kurzweil allow many years to move from the initial emulation of a brain to the singularity.  it won’t happen overnight (we all hope).
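the 24-year mapping is just doubling arithmetic.  a quick check (the doubling times here are assumptions, not data):

```python
import math

low, high = 1e14, 1e20             # FLOPS estimates for brain emulation
doublings = math.log2(high / low)  # ~19.9 doublings span the range

for years_per_doubling in (1.0, 1.2, 1.5, 2.0):
    span = doublings * years_per_doubling
    print(f"{years_per_doubling:.1f} yr/doubling -> ~{span:.0f} years across the range")
```

six orders of magnitude is about 20 doublings, so the oft-quoted 24 years corresponds to assuming compute doubles roughly every 1.2 years.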

this recognition reminds me of kelly’s book, “what technology wants.”  here, kelly anticipates elements of hertling’s work and outlines the inevitable nature of technological invention.  he elegantly describes how each invention builds on those that preceded it, giving technology an almost palpable “want” of the next thing.  this thought stream brings me back to the pathway to human level AI (strong AI).

many people criticize these arguments for the singularity, saying that we don’t or cannot understand the brain and therefore cannot build strong AI.  a not-unreasonable observation, but i don’t think it will prevent us from getting there.  we have 17 years before the hardware will really be ready (commodity hardware, that is; current supercomputers are already there).  those 17 years will be spent building on neuroscience’s understanding of the brain’s wiring and structure.  we may not have to understand the brain holistically; we may just have to know how it is wired and what the wires (neurons, axons, synapses) do.

digression: the IBM sequoia machine just debuted at lawrence livermore lab at 16 petaflops (1.6 x 10^16), clearly in the range of brain emulation, though it’ll be used to model nuclear explosions.  this level of processing power should be available to the consumer in 12-15 years*.  it is hertling’s perspective (and mine) that strong AI progress will really begin to move when the processing power to roughly emulate a brain is in the hands of many people.

so…instead of imagining a post-singularity world, let’s just imagine a world where computers understand us, or understand our speech well enough to execute directions.  this seems eminently reasonable.  indeed, apple’s siri is the beginning of that trend.  does it not seem reasonable that in 4-5 years computers should be able to understand a fairly broad array of directions?  how long after that occurs will it be before neuroscience labs can automate more of their processes?**  mapping the trillions of connections in a brain is currently impossible…but seems entirely possible with a bit better instrumentation and a roomful of dumb, but effective, robotic technicians (this bit deserves greater treatment, but one can certainly imagine this capability in several different ways…in other words, very likely).***

it is not a large leap then, to envision commodity hardware capable of high petaflop performance, in the hands of many researchers (and enthusiasts) creating and refining forms of AI that may not necessarily be human-like, but will be astonishingly capable:

  • highly effective voice interfaces
  • vision (visible spectrum, infrared, other?)
  • reasoning and problem solving
  • mated with robotic bodies that have evolved from today to 2029 (what might that look like?)

at that point (again, not hard to imagine) we are clearly on the way to that dislocated future that makes me a bit uncomfortable.

* based on the historical track of moore’s law.  yes, some say moore’s law will end soon, but even most of them admit we can fairly clearly see it continuing until 2022 (the current semiconductor roadmap goes that far and is based on known science).  we only have to squeeze a bit more out of silicon, or make usable one of many new technologies (spintronics, optical, quantum, memristors, others), to extend our capabilities for a few more years.

this also ignores the emerging tech of neuromorphics which, i believe, is the true path to strong AI.  more on that in the next post.

**robotic patch-clamping (technique for mapping neurons) is already possible and in use: http://medgadget.com/2012/05/robot-for-whole-cell-patch-clamp-electrophysiology-of-neurons-in-vivo.html

***please see henry markram’s blue brain project.

first post! first blog!

this all began as a presentation to friends regarding the next two decades of technological development and its impact on our (and our kids’) working lives.  i’m not really a kurzweilian (kurzweilai.net) or a “true believer” in the singularity (i am an atheist, after all).  however, the technological developments that underpin those arguments are quite likely to have real effects on our economy…perhaps less dramatic than mind uploading and virtual immortality…but significant, real, palpable changes.

fundamentally, computers gradually continue to usurp human skills.  somebody once said that artificial intelligence is whatever a computer cannot yet do.  indeed, many once thought that computers would qualify as intelligent once they could best the top grandmasters in chess–certainly a very human domain of knowledge and pattern recognition.  of course, many of us recall IBM’s deep blue beating garry kasparov in 1997.  only 14 years later, IBM’s latest computing superstar (watson) learned the nuances of our language and puns to beat the all-time jeopardy champs.  these and many other examples of AI are not terribly impressive to most people.  a typical response might be, “of course they can do that…but they’re not smart!”

this gradual creeping of capacity is slow enough to be largely imperceptible to most people, but dramatically quick from a larger historical perspective.  previous revolutions unfolded over centuries (agricultural) or many decades (industrial), but the technological revolution is unfolding over a precious few decades.  while this is slow enough that individuals find it not terribly surprising, it is more than fast enough to overwhelm our government and educational institutions…things that are designed to move slowly.  and that says nothing of our labor markets.  some of us will adapt into more creative realms (which may be safe from automation for a few extra decades), but many will find their service jobs, manual labor, or (increasingly) knowledge work lost to robotic labor or computational algorithms.

this last bit may sound closer to the singularity than to today’s reality, but read the article below…and consider that many law clerks and junior associates are now missing out on the joy of reading through thousands of pages of “discovery” work.  not that they really miss it; computerized systems can do it faster, cheaper and more accurately.  they’ll happily pass on that drudgery…until the machines start eating into their core functions…by then it may be too late to save the first tier of legal employment.

one might think these jobs can be lost without much consequence…after all, who really wants to do that work?  however, if even 10-15% of our service jobs went to automation, it would wreak havoc on our economy.  it would make the “great recession” of 2008 look tame.

this is reprinted from the mckinsey quarterly.  i encourage you to subscribe; it is free- http://www.mckinseyquarterly.com/home.aspx

The second economy

Digitization is creating a second economy that’s vast, automatic, and invisible—thereby bringing the biggest change since the Industrial Revolution.

OCTOBER 2011 • W. Brian Arthur

In 1850, a decade before the Civil War, the United States’ economy was small—it wasn’t much bigger than Italy’s. Forty years later, it was the largest economy in the world. What happened in-between was the railroads. They linked the east of the country to the west, and the interior to both. They gave access to the east’s industrial goods; they made possible economies of scale; they stimulated steel and manufacturing—and the economy was never the same.

Deep changes like this are not unusual. Every so often—every 60 years or so—a body of technology comes along and over several decades, quietly, almost unnoticeably, transforms the economy: it brings new social classes to the fore and creates a different world for business. Can such a transformation—deep and slow and silent—be happening today?

We could look for one in the genetic technologies, or in nanotech, but their time hasn’t fully come. But I want to argue that something deep is going on with information technology, something that goes well beyond the use of computers, social media, and commerce on the Internet. Business processes that once took place among human beings are now being executed electronically. They are taking place in an unseen domain that is strictly digital. On the surface, this shift doesn’t seem particularly consequential—it’s almost something we take for granted. But I believe it is causing a revolution no less important and dramatic than that of the railroads. It is quietly creating a second economy, a digital one.

Let me begin with two examples. Twenty years ago, if you went into an airport you would walk up to a counter and present paper tickets to a human being. That person would register you on a computer, notify the flight you’d arrived, and check your luggage in. All this was done by humans. Today, you walk into an airport and look for a machine. You put in a frequent-flier card or credit card, and it takes just three or four seconds to get back a boarding pass, receipt, and luggage tag. What interests me is what happens in those three or four seconds. The moment the card goes in, you are starting a huge conversation conducted entirely among machines. Once your name is recognized, computers are checking your flight status with the airlines, your past travel history, your name with the TSA (and possibly also with the National Security Agency). They are checking your seat choice, your frequent-flier status, and your access to lounges. This unseen, underground conversation is happening among multiple servers talking to other servers, talking to satellites that are talking to computers (possibly in London, where you’re going), and checking with passport control, with foreign immigration, with ongoing connecting flights. And to make sure the aircraft’s weight distribution is fine, the machines are also starting to adjust the passenger count and seating according to whether the fuselage is loaded more heavily at the front or back.

These large and fairly complicated conversations that you’ve triggered occur entirely among things remotely talking to other things: servers, switches, routers, and other Internet and telecommunications devices, updating and shuttling information back and forth. All of this occurs in the few seconds it takes to get your boarding pass back. And even after that happens, if you could see these conversations as flashing lights, they’d still be flashing all over the country for some time, perhaps talking to the flight controllers—starting to say that the flight’s getting ready for departure and to prepare for that.

Now consider a second example, from supply chain management. Twenty years ago, if you were shipping freight through Rotterdam into the center of Europe, people with clipboards would be registering arrival, checking manifests, filling out paperwork, and telephoning forward destinations to let other people know. Now such shipments go through an RFID portal where they are scanned, digitally captured, and automatically dispatched. The RFID portal is in conversation digitally with the originating shipper, other depots, other suppliers, and destinations along the route, all keeping track, keeping control, and reconfiguring routing if necessary to optimize things along the way. What used to be done by humans is now executed as a series of conversations among remotely located servers.

In both these examples, and all across economies in the developed world, processes in the physical economy are being entered into the digital economy, where they are “speaking to” other processes in the digital economy, in a constant conversation among multiple servers and multiple semi-intelligent nodes that are updating things, querying things, checking things off, readjusting things, and eventually connecting back with processes and humans in the physical economy.

So we can say that another economy—a second economy—of all of these digitized business processes conversing, executing, and triggering further actions is silently forming alongside the physical economy.
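an aside in my own voice: the shape of the machine “conversation” arthur describes (many services consulted concurrently, answers gathered within seconds) is easy to sketch.  the service names below are hypothetical stand-ins, not any airline’s real systems:

```python
import asyncio

# hypothetical stand-ins for the services a check-in kiosk consults;
# in a real system each would be a network call to a different server
async def check(service: str, delay: float) -> str:
    await asyncio.sleep(delay)   # simulate network latency
    return f"{service}: ok"

async def check_in(passenger: str) -> None:
    # the point of the example: the lookups run concurrently, which is
    # why the entire "conversation" fits inside a few seconds
    results = await asyncio.gather(
        check("flight status", 0.3),
        check("travel history", 0.2),
        check("TSA screening", 0.4),
        check("seat assignment", 0.1),
        check("frequent-flier tier", 0.2),
    )
    for line in results:
        print(line)
    print(f"boarding pass issued for {passenger}")

asyncio.run(check_in("a. traveler"))
```

the elapsed time is set by the slowest check (0.4 s here), not by the sum of them all; that concurrency is what makes the second economy feel instantaneous.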

Aspen root systems

If I were to look for adjectives to describe this second economy, I’d say it is vast, silent, connected, unseen, and autonomous (meaning that human beings may design it but are not directly involved in running it). It is remotely executing and global, always on, and endlessly configurable. It is concurrent—a great computer expression—which means that everything happens in parallel. It is self-configuring, meaning it constantly reconfigures itself on the fly, and increasingly it is also self-organizing, self-architecting, and self-healing.

These last descriptors sound biological—and they are. In fact, I’m beginning to think of this second economy, which is under the surface of the physical economy, as a huge interconnected root system, very much like the root system for aspen trees. For every acre of aspen trees above the ground, there’s about ten miles of roots underneath, all interconnected with one another, “communicating” with each other.

The metaphor isn’t perfect: this emerging second-economy root system is more complicated than any aspen system, since it’s also making new connections and new configurations on the fly. But the aspen metaphor is useful for capturing the reality that the observable physical world of aspen trees hides an unseen underground root system just as large or even larger. How large is the unseen second economy? By a rough back-of-the-envelope calculation (see sidebar, “How fast is the second economy growing?”), in about two decades the digital economy will reach the same size as the physical economy. It’s as if there will be another American economy anchored off San Francisco (or, more in keeping with my metaphor, slipped in underneath the original economy) and growing all the while.

Now this second, digital economy isn’t producing anything tangible. It’s not making my bed in a hotel, or bringing me orange juice in the morning. But it is running an awful lot of the economy. It’s helping architects design buildings, it’s tracking sales and inventory, getting goods from here to there, executing trades and banking operations, controlling manufacturing equipment, making design calculations, billing clients, navigating aircraft, helping diagnose patients, and guiding laparoscopic surgeries. Such operations grow slowly and take time to form. In any deep transformation, industries do not so much adopt the new body of technology as encounter it, and as they do so they create new ways to profit from its possibilities.

The deep transformation I am describing is happening not just in the United States but in all advanced economies, especially in Europe and Japan. And its revolutionary scale can only be grasped if we go beyond my aspen metaphor to another analogy.

A neural system for the economy

Recall that in the digital conversations I was describing, something that occurs in the physical economy is sensed by the second economy—which then gives back an appropriate response. A truck passes its load through an RFID sensor or you check in at the airport, a lot of recomputation takes place, and appropriate physical actions are triggered.

There’s a parallel in this with how biologists think of intelligence. I’m not talking about human intelligence or anything that would qualify as conscious intelligence. Biologists tell us that an organism is intelligent if it senses something, changes its internal state, and reacts appropriately. If you put an E. coli bacterium into an uneven concentration of glucose, it does the appropriate thing by swimming toward where the glucose is more concentrated. Biologists would call this intelligent behavior. The bacterium senses something, “computes” something (although we may not know exactly how), and returns an appropriate response.
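another aside from me: arthur’s sense/update/react definition is exactly a little state machine.  a toy version of his e. coli example (the run-and-tumble rule here is a heavy simplification of real chemotaxis):

```python
# a toy "organism" in arthur's sense: sense -> update internal state -> react
class Chemotaxer:
    def __init__(self) -> None:
        self.last_reading = 0.0   # internal state: the previous glucose level

    def step(self, glucose: float) -> str:
        improving = glucose > self.last_reading   # sense and compare
        self.last_reading = glucose               # change internal state
        # react appropriately: keep swimming if the gradient is improving,
        # otherwise tumble to pick a new random direction
        return "run" if improving else "tumble"

cell = Chemotaxer()
for reading in (0.10, 0.20, 0.15, 0.30, 0.50):
    print(f"glucose={reading:.2f} -> {cell.step(reading)}")
```

the loop is trivial, but it meets the definition: sense, change state, respond appropriately.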

No brain need be involved. A primitive jellyfish doesn’t have a central nervous system or brain. What it has is a kind of neural layer or nerve net that lets it sense and react appropriately. I’m arguing that all these aspen roots—this vast global digital network that is sensing, “computing,” and reacting appropriately—is starting to constitute a neural layer for the economy. The second economy constitutes a neural layer for the physical economy. Just what sort of change is this qualitatively?

Think of it this way. With the coming of the Industrial Revolution—roughly from the 1760s, when Watt’s steam engine appeared, through around 1850 and beyond—the economy developed a muscular system in the form of machine power. Now it is developing a neural system. This may sound grandiose, but actually I think the metaphor is valid. Around 1990, computers started seriously to talk to each other, and all these connections started to happen. The individual machines—servers—are like neurons, and the axons and synapses are the communication pathways and linkages that enable them to be in conversation with each other and to take appropriate action.

Is this the biggest change since the Industrial Revolution? Well, without sticking my neck out too much, I believe so. In fact, I think it may well be the biggest change ever in the economy. It is a deep qualitative change that is bringing intelligent, automatic response to the economy. There’s no upper limit to this, no place where it has to end. Now, I’m not interested in science fiction, or predicting the singularity, or talking about cyborgs. None of that interests me. What I am saying is that it would be easy to underestimate the degree to which this is going to make a difference.

I think that for the rest of this century, barring wars and pestilence, a lot of the story will be the building out of this second economy, an unseen underground economy that basically is giving us intelligent reactions to what we do above the ground. For example, if I’m driving in Los Angeles in 15 years’ time, likely it’ll be a driverless car in a flow of traffic where my car’s in a conversation with the cars around it that are in conversation with general traffic and with my car. The second economy is creating for us—slowly, quietly, and steadily—a different world.

A downside

Of course, as with most changes, there is a downside. I am concerned that there is an adverse impact on jobs. Productivity increasing, say, at 2.4 percent in a given year means either that the same number of people can produce 2.4 percent more output or that we can get the same output with 2.4 percent fewer people. Both of these are happening. We are getting more output for each person in the economy, but overall output, nationally, requires fewer people to produce it. Nowadays, fewer people are required behind the desk of an airline. Much of the work is still physical—someone still has to take your luggage and put it on the belt—but much has vanished into the digital world of sensing, digital communication, and intelligent response.

Physical jobs are disappearing into the second economy, and I believe this effect is dwarfing the much more publicized effect of jobs disappearing to places like India and China.

There are parallels with what has happened before. In the early 20th century, farm jobs became mechanized and there was less need for farm labor, and some decades later manufacturing jobs became mechanized and there was less need for factory labor. Now business processes—many in the service sector—are becoming “mechanized” and fewer people are needed, and this is exerting systematic downward pressure on jobs. We don’t have paralegals in the numbers we used to. Or draftsmen, telephone operators, typists, or bookkeeping people. A lot of that work is now done digitally. We do have police and teachers and doctors; where there’s a need for human judgment and human interaction, we still have that. But the primary cause of all of the downsizing we’ve had since the mid-1990s is that a lot of human jobs are disappearing into the second economy. Not to reappear.

Seeing things this way, it’s not surprising we are still working our way out of the bad 2008–09 recession with a great deal of joblessness.

There’s a larger lesson to be drawn from this. The second economy will certainly be the engine of growth and the provider of prosperity for the rest of this century and beyond, but it may not provide jobs, so there may be prosperity without full access for many. This suggests to me that the main challenge of the economy is shifting from producing prosperity to distributing prosperity. The second economy will produce wealth no matter what we do; distributing that wealth has become the main problem. For centuries, wealth has traditionally been apportioned in the West through jobs, and jobs have always been forthcoming. When farm jobs disappeared, we still had manufacturing jobs, and when these disappeared we migrated to service jobs. With this digital transformation, this last repository of jobs is shrinking—fewer of us in the future may have white-collar business process jobs—and we face a problem.

The system will adjust of course, though I can’t yet say exactly how. Perhaps some new part of the economy will come forward and generate a whole new set of jobs. Perhaps we will have short workweeks and long vacations so there will be more jobs to go around. Perhaps we will have to subsidize job creation. Perhaps the very idea of a job and of being productive will change over the next two or three decades. The problem is by no means insoluble. The good news is that if we do solve it we may at last have the freedom to invest our energies in creative acts.

Economic possibilities for our grandchildren

In 1930, Keynes wrote a famous essay, “Economic possibilities for our grandchildren.” Reading it now, in the era of those grandchildren, I am surprised just how accurate it is. Keynes predicts that “the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is to-day.” He rightly warns of “technological unemployment,” but dares to surmise that “the economic problem [of producing enough goods] may be solved.” If we had asked him and his contemporaries how all this might come about, they might have imagined lots of factories with lots of machines, possibly even with robots, with the workers in these factories gradually being replaced by machines and by individual robots.

That is not quite how things have developed. We do have sophisticated machines, but in the place of personal automation (robots) we have a collective automation. Underneath the physical economy, with its physical people and physical tasks, lies a second economy that is automatic and neurally intelligent, with no upper limit to its buildout. The prosperity we enjoy and the difficulties with jobs would not have surprised Keynes, but the means of achieving that prosperity would have.

This second economy that is silently forming—vast, interconnected, and extraordinarily productive—is creating for us a new economic world. How we will fare in this world, how we will adapt to it, how we will profit from it and share its benefits, is very much up to us.

About the Author

W. Brian Arthur is a visiting researcher with the Intelligent System Lab at the Palo Alto Research Center (PARC) and an external professor at the Santa Fe Institute. He is an economist and technology thinker and a pioneer in the science of complexity. His 1994 book, Increasing Returns and Path Dependence in the Economy (University of Michigan Press, December 1994), contains several of his seminal papers. More recently, Arthur was the author of The Nature of Technology: What it is and How it Evolves (Free Press, August 2009).
