otterfowlio

Technological Musings


neuromorphic systems

by far the most exciting thing happening in computing today (in my view) is the beginning of neuromorphic architecture.

these systems are not like your typical desktop PC.  they are designed to mimic the brain.  why, one might ask, do we want the latest computer hardware to mimic a forgetful piece of biology that has a hard time with long division?  ah, it is all about power, baby!  power as in electrical power.  the CPU in your mac laptop has been described (by kevin kelly) as a slow-moving nuclear explosion.  modern silicon chips pack as many as 2 billion transistors each, and even though they get more efficient every year, their transistor density grows faster than their power efficiency.  today, the heat dissipation on a chip is about the same as a nuclear bomb’s (if the bomb released its energy at the same per-second rate as the chip).

simply put, if we demand more computing power from our chips, and our chips get much hotter, they’re gonna melt.  this is remarkable…especially when you compare our high-tech chips with the 3 lbs. of wetware in your skull.  our brains operate on about 20 watts of power…enough for a dim lightbulb.  yet, even with this meager power budget, we compose symphonies, run at high speed through crowded environments and recognize friends instantly.  even an owl, with its even smaller brain, flies silently at night through dense forest.  it is this kind of pattern recognition and processing that brains are so good at, and at which our silicon computers fail miserably (no matter how much power we give them).

as we approach fundamental limits for our chips, researchers have begun to appreciate the talents of our neurons and their chemical transmitters.  even though they move at a sluggish 10 hz (a neuron can fire [communicate] about 10 times a second), they outperform silicon that cycles 3 billion times a second (3 gigahertz).  how is this possible?  our computers are terribly inefficient in two ways:

  • traditional computers operate serially…that is, they do one thing, then the next thing
  • our silicon chips energize (expend power on) all their connections, all the time, even if they are “idle”

our brains evolved over millions of years.  one thing that has been universally true for all those years (until mini-marts came about) was that food was scarce.  any brain that consumed a lot of power was evolutionarily disadvantaged.  evolution is a cruel but masterful designer…ultimately creating brains that do wonderful things on a power budget of a glass of water and a tuna fish sandwich (ferrucci).*  instead of serial processing, our brains are interconnected in a massively parallel way and they use power only when a neuron fires.  these two features make the brain incredibly power efficient and also great at patterns…but crappy at long division.
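
to make this concrete, here is a back-of-the-envelope toy (all the numbers are mine and purely illustrative) comparing a design that powers every connection on every clock tick with one that spends energy only when a neuron fires:

    # toy energy accounting: "always-on" clocked synapses vs. event-driven (spiking) ones.
    # every number below is an illustrative assumption, not a measured figure.

    NEURONS = 1_000_000      # a small toy network
    FANOUT = 1_000           # synapses per neuron
    CLOCK_HZ = 3e9           # a 3 gigahertz clock touches every powered element each cycle
    FIRING_HZ = 10           # a biological neuron fires roughly 10 times a second

    # pretend one "energy unit" is spent each time a synapse is touched
    always_on = NEURONS * FANOUT * CLOCK_HZ       # every synapse, every cycle, busy or idle
    event_driven = NEURONS * FANOUT * FIRING_HZ   # work happens only when a neuron actually fires

    print(f"always-on:    {always_on:.1e} units/sec")
    print(f"event-driven: {event_driven:.1e} units/sec")
    print(f"event-driven does {always_on / event_driven:.0e}x less work for the same network")

the factor that falls out is just the clock rate divided by the firing rate…a cartoon, but it shows why “power only when a neuron fires” is such a big deal.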

IBM and others are now in the process of mimicking the behavior of neurons and synapses with silicon transistors and a new technology: memristors.  the memristor story is worth a blog post in its own right and i won’t dig into it today (fascinating stuff).  memristors contribute to a silicon chip that can learn (yes, learn) by strengthening connections between silicon neurons.  the chip also mimics the brain in certain other ways that reduce its power budget significantly (not as low as the brain’s, but give ’em a break!  we haven’t had a million years to design it!).  this project is called “SyNAPSE” and is funded (who knew!) by the defense agency that gave us the internet: DARPA.  (insert terminator joke here).
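
to give a flavor of what “learning by strengthening connections” means, here is a minimal hebbian-style sketch.  this is emphatically not IBM’s actual circuit, just the general idea in runnable form:

    import random

    # cartoon of "learning by strengthening connections": a simple hebbian-style rule.
    # not IBM's actual SyNAPSE design, just the general idea.

    random.seed(1)
    weight = 0.1        # strength of one synapse (in hardware this could live in a memristor)
    LEARN_RATE = 0.05
    BASELINE = 0.2      # chance the downstream neuron fires with no help from the synapse

    def fires(drive):
        """crude stochastic neuron: more input drive means more likely to spike."""
        return random.random() < min(drive, 1.0)

    for step in range(100):
        pre = fires(0.6)                                      # upstream neuron on its own schedule
        post = fires(BASELINE + weight if pre else BASELINE)  # downstream neuron, nudged by the synapse
        if pre and post:
            # "cells that fire together wire together": strengthen the connection
            weight = min(1.0, weight + LEARN_RATE)

    print(f"synaptic weight after 100 steps: {weight:.2f}")

each coincidence of the two neurons firing nudges the weight upward, which in the hardware version roughly corresponds to changing a memristor’s resistance.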

the SyNAPSE project is in the third stage of five and has successfully created the “foundational” chip.  it only has 256 neurons (the brain has 100 billion), but the roadmap is set.  by stage five, they should have a neuromorphic system of 100 million neurons installed in a robot which will perform at “cat level.”  this is scheduled for completion between 2014 and 2017.  ultimately, the goal is to have a system of 10 billion neurons that consumes less than 1 kilowatt (think small space heater).
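
for scale, here is the quick arithmetic on those power targets, using only the figures above:

    # quick sanity check on the stated power targets (figures from the post)
    brain_neurons, brain_watts = 100e9, 20    # ~100 billion neurons on ~20 watts
    goal_neurons, goal_watts = 10e9, 1000     # end goal: 10 billion neurons under 1 kilowatt

    print(f"brain: {brain_watts / brain_neurons * 1e9:.1f} nanowatts per neuron")  # ~0.2 nW
    print(f"goal:  {goal_watts / goal_neurons * 1e9:.1f} nanowatts per neuron")    # ~100 nW

even the end goal is roughly 500 times hungrier per neuron than biology…which says less about the engineers than about how absurdly good evolution’s design is.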

there is far more to this project, mostly related to the massively parallel connection patterns that are still being discovered.  however, i am more interested in where this will lead us.  many techno-futurists talk about the “singularity”…a time when computers become conscious and drive change so fast it becomes an event horizon today’s humans cannot see past.  the pathway most point to for getting there is emulating a brain in a supercomputer.  our machines are theoretically powerful enough to do just that, but at stupendous cost, size and power.  simply running IBM’s new sequoia machine requires millions of dollars per year in electricity (7 megawatts)…and it would be hard to equip a robot with that “brain” as it takes up over 4,000 square feet of floor space!
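
for the curious, the electricity figure roughly checks out…the $0.10 per kilowatt-hour price below is my assumption:

    # rough electricity bill for a 7 megawatt machine
    megawatts = 7
    kwh_per_year = megawatts * 1000 * 24 * 365                                    # about 61 million kWh
    print(f"~${kwh_per_year * 0.10 / 1e6:.1f} million per year, just for power")  # ~$6.1 million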

instead, i see neuromorphic systems that are compact and power-efficient as the real path to machines that will be truly useful.  conscious?  maybe.  but i don’t need conscious machines to see a radical break in our social, political and economic norms.  i simply need machines that hear, see, understand and can execute in the real world.  i think these systems are fairly close at hand…10 to 15 years is my guess.  not ubiquitous by then, but get ready!

* from david ferrucci, IBM lead researcher on the Watson project (the computer-powered jeopardy champion)

no hard takeoff, steps to strong AI

i was just reading william hertling’s article on predicting the future of technological progress:

How To Predict The Future

http://www.feld.com/wp/archives/2012/06/how-to-predict-the-future.html

essentially, he makes arguments similar to kurzweil’s…mapping the exponential progress of computing power and suggesting that each new generation of hardware enables the invention of the next technology.  he feels one can largely ignore software and other factors, and he provides a few examples to support that perspective (the rise of youtube, napster and others).

all this is the lead-up to kurzweil’s same prediction of strong AI emerging when computing power allows emulation of an entire brain…with varying estimates of the computing power necessary, given our evolving understanding of exactly how the brain works.  the estimates range from 10^14 FLOPS to 10^20.  that is a difference of 6 orders of magnitude (a big range), but with exponential progress it maps out to a difference of only about 24 years (a factor of a million is roughly 20 doublings)…less if you can strap a few machines together.

this got me thinking as i was washing the lunch dishes.  i often make the mental leap from the state of the world today to the post-singularity world of tomorrow…when computers exceed human intelligence.  this (not surprisingly) is always a bit dislocating…and because it seems so drastically different from current reality, i tend to discount it.  this has caused me to ping back and forth between having confidence in a radically different future and thinking that it is too different, and thus unlikely.  what i tend to forget, and what kevin kelly knows (below), is that things progress in steps…and we have plenty of time to make those steps.  even techno-optimists like kurzweil allow many years to move from the initial emulation of a brain to the singularity.  it won’t happen overnight (we all hope).
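
back to that 10^14-to-10^20 range for a second…here is the quick doubling arithmetic behind “only 24 years” (the 1.2-year doubling time is my rounding of moore’s law):

    import math

    # how long exponential progress takes to cross six orders of magnitude,
    # assuming computing power doubles roughly every 1.2 years
    uncertainty = 1e20 / 1e14                  # the full range of brain-emulation estimates
    doublings = math.log2(uncertainty)         # about 20 doublings
    years = doublings * 1.2
    print(f"{doublings:.1f} doublings -> about {years:.0f} years")   # ~24 years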

this recognition reminds me of kelly’s book, “what technology wants.”  here, kelly anticipates elements of hertling’s work and outlines the inevitable nature of technological invention.  he elegantly describes how each invention builds on those that preceded it, giving technology an almost palpable “want” of the next thing.  this thought stream brings me back to the pathway to human-level AI (strong AI).

many people criticize these arguments for the singularity, saying that we don’t (or can’t) understand the brain, and therefore we cannot build strong AI.  a not unreasonable observation, but i don’t think it will prevent us from getting there.  we have 17 years before the hardware will really be ready (commodity hardware, that is; current supercomputers are already there).  those 17 years will be spent building on neuroscience’s understanding of the brain’s wiring and structure.  we may not have to understand the brain holistically; we may just have to know how it is wired and what the wires (neurons, axons, synapses) do.

digression: the IBM sequoia machine just debuted at lawrence livermore lab at 16 petaflops (1.6 x 10^16 FLOPS), clearly in the range of brain emulation, though it’ll be used to model nuclear explosions.  this level of processing power should be available to the consumer in 12-15 years*.  it is hertling’s perspective (and mine) that strong AI progress will really begin to move when the processing power to roughly emulate a brain is in the hands of many people.
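
here is the back-of-the-envelope behind that 12-15 year guess; the ~1 teraflop consumer starting point and the yearly doubling are my assumptions:

    import math

    # when might sequoia-class performance reach the desktop?
    consumer_flops_2012 = 1e12    # assume a ~1 teraflop high-end consumer machine in 2012
    sequoia_flops = 1.6e16        # 16 petaflops, from the post
    doubling_years = 1.0          # assume consumer performance doubles roughly yearly

    years = math.log2(sequoia_flops / consumer_flops_2012) * doubling_years
    print(f"about {years:.0f} years, i.e. somewhere around {2012 + round(years)}")   # ~14 years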

so…instead of imagining a post-singularity world, let’s just imagine a world where computers understand us, or understand our speech well enough to execute directions.  this seems eminently reasonable.  indeed, apple’s siri is the beginning of that trend.  does it not seem reasonable that in 4-5 years computers should be able to understand a fairly broad array of directions?  how long after that occurs will it be before neuroscience labs can automate more of their processes?**  mapping the trillions of connections in a brain is currently impossible…but seems entirely possible with a bit better instrumentation and a roomful of dumb, but effective, robotic technicians (this bit deserves greater treatment, but one can certainly imagine this capability in several different ways…in other words, very likely).***

it is not a large leap, then, to envision commodity hardware capable of high-petaflop performance in the hands of many researchers (and enthusiasts), creating and refining forms of AI that may not necessarily be human-like, but will be astonishingly capable:

  • highly effective voice interfaces
  • vision (visible spectrum, infrared, other?)
  • reasoning and problem solving
  • mated with robotic bodies that have evolved from today to 2029 (what might that look like?)

at that point (again, not hard to imagine) we are clearly on the way to that dislocated future that makes me a bit uncomfortable.

* based on the historical pace of moore’s law.  yes, some say moore’s law will end soon, but even most of them admit we can fairly clearly see it continuing until 2022 (the current semiconductor roadmap goes that far and is based on known science).  we only have to squeeze a bit more out of silicon, or make one of many new technologies usable (spintronics, optical, quantum, memristors, others), to extend our capabilities for a few more years.

this also ignores the emerging tech of neuromorphics, which i believe is the true path to strong AI.  more on that in the next posting.

**robotic patch-clamping (a technique for recording from individual neurons) is already possible and in use: http://medgadget.com/2012/05/robot-for-whole-cell-patch-clamp-electrophysiology-of-neurons-in-vivo.html

***please see henry markram’s blue brain project.
