Exponential Innovation: The 4 Pillars
In this series on “Exponential Innovation,” we have started from the following postulate:
“Exponential growth in the complexity of technology, reflected in increasing computing power and capacity, the explosion of data, and increasingly complex and powerful material sciences, is a reality in our society, and will have an ever-increasing influence over society and the world economy for the foreseeable future.”
That is, simply, the belief that the world we live in today is one in the midst of massive upheaval and change, driven largely by new technologies. As we discussed in our last post, that change is very hard to comprehend from a day-to-day perspective. In our next post, we will talk about the history of technological change, and how it has rapidly transformed our society, and our very understanding of the individual. But today, we will talk about the here and now.
What is the current state of technology? What is now possible, that has never been possible before? We’ll split this discussion into a few categories: Data, Intelligence, Material Science, and Energy. In many ways, these are the cornerstones of technological growth: progress in each supporting and enhancing progress in the others.
Without massive new pools of data, we have come to understand, we cannot develop more intelligent software and neural networks. Without new leaps in material sciences, we cannot build the machines that will harness that data. Without ever more intelligent machines, we cannot make progress in material sciences and energy. And finally, without new sources of clean and cheap energy, we cannot run the society we are building in a sustainable fashion, nor advance on any other front.
Thus it would make little sense to begin with any one topic in particular, so we’ll simply start with the one that currently commands the most attention in our industry: data.
Data, often described as the “new oil” of modern economies, in many ways outpaces all other areas of advance in its growth. In volume alone, human societies now create more usable data, at a faster rate of acceleration, than any other product of humanity in history. Since Gutenberg invented the printing press, few areas of human advancement have been so visibly disruptive to our very way of life, and to our conception of humanity as a whole.
The field is rife with overwhelming statistics, but the most important are probably those about sheer capacity. There is no clearer gauge of fundamental shifts in the way we use data than how much data we actually create and store, and how much capacity we have:
In 2015, IBM estimated that the world created and stored 2.5 quintillion bytes of data per day (2.5 exabytes, or roughly 2.5 million terabytes). We created more data between 2013 and 2015 than in all prior human history combined, and by a rather large margin, depending on how one calculates “data.”
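To put the figure in perspective, the conversion is simple arithmetic. This sketch uses the cited 2.5 quintillion bytes per day and assumes decimal (SI) prefixes:

```python
# Convert the cited daily data-creation figure into more familiar units,
# using decimal (SI) prefixes: 1 TB = 10**12 bytes, 1 EB = 10**18 bytes.
bytes_per_day = 2.5e18  # 2.5 quintillion bytes

terabytes_per_day = bytes_per_day / 1e12
exabytes_per_day = bytes_per_day / 1e18

print(f"{terabytes_per_day:,.0f} TB/day")  # → 2,500,000 TB/day
print(f"{exabytes_per_day:.1f} EB/day")    # → 2.5 EB/day

# At that rate, a full year of output:
exabytes_per_year = exabytes_per_day * 365
print(f"{exabytes_per_year:,.1f} EB/year") # → 912.5 EB/year
```

That is, the daily figure works out to millions of terabytes, and a single year at that pace approaches a zettabyte.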
Though average internet speeds have grown rapidly in the intervening years, they have in many ways been limited by obstacles other than technical feasibility.
Geography, economics, and politics continue to hold back the growth of high-speed internet, but the creation and storage of data, and the growth of data capacity, continue apace. As cartoonist/technologist Randall Munroe details in his fascinating post “FedEx Bandwidth,” the internet (at the time of writing) was capable of transferring 167 terabits per second, with traffic growing at about 30% a year.
On the other hand, the miniaturization of data storage means that if, for the sake of imagination, one were to employ the entire FedEx fleet to ship nothing but microSD cards around the world, the theoretical bandwidth of this “sneakernet” would be 177 petabits per second, roughly a thousand times the capacity of the internet. Wilder still, Fujifilm announced back in 2014 that it could store 154 terabytes on a single data-storage tape (a medium used mainly by large corporations to back up and transfer massive amounts of data).
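Both figures invite a back-of-envelope check. The sketch below first compounds the cited ~30% annual traffic growth to see how long the internet would take to close a thousand-fold gap, then rebuilds a sneakernet estimate from scratch. The fleet size, card capacity, and transit time are illustrative assumptions of this sketch, not figures from Munroe's post:

```python
# 1) How long does ~30%/year compound growth take to close a ~1000x gap?
internet_tbps = 167.0        # cited internet capacity, terabits/second
growth = 1.30                # ~30% traffic growth per year (cited)
sneakernet_tbps = 177_000.0  # cited sneakernet estimate: 177 petabits/second

years = 0
capacity = internet_tbps
while capacity < sneakernet_tbps:
    capacity *= growth
    years += 1
print(years)  # → 27: even fast exponential growth needs decades here

# 2) A from-scratch sneakernet estimate. All inputs below are
#    illustrative assumptions, not FedEx's real numbers.
aircraft = 650
cards_per_aircraft = 10e6    # assume ten million microSD cards per plane
bytes_per_card = 256e9       # assume 256 GB per card
transit_seconds = 24 * 3600  # assume one day door-to-door

bits_per_second = aircraft * cards_per_aircraft * bytes_per_card * 8 / transit_seconds
print(f"{bits_per_second / 1e15:.0f} petabits/second")  # → 154
```

Even with deliberately modest assumptions, the cargo fleet lands in the same hundreds-of-petabits range: physical shipment of dense media really does dwarf wire bandwidth.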
Massive increases in capacity are driven by increasing demand for storage space. And that demand is in turn driven by a massive increase in the volume and types of data we record. Today, the world is awash with new data sources from the Internet of Things. The world is filling up with sensors and microphones and cameras for every conceivable purpose.
And of all that new data, over 90% is “unstructured,” meaning it is not organized in a predefined format, and is created mainly by and for machines completing complex tasks: anything from analyzing a sensor network, to encoding, transferring, and analyzing sound, video, and images, and much else besides.
But just because the density of data storage is increasing doesn’t mean that we are accumulating ever more unused capacity. Quite the opposite: as storage becomes denser and processors become faster and more energy-efficient, we are finding new and novel uses for the data we produce. We are also producing ever more data-intensive content and applications.
(A look at how graphics have changed over 25 years).
What that ocean of new data promises though, is more than just the ability to record and document everything. Rather, the real breakthroughs in current technology are all happening around machine intelligence, for which having massive amounts of new data is of growing importance.
Which brings us to our second pillar: intelligence.
In his fascinating video on the history of AI, Frank Chen of Andreessen Horowitz makes the case that the current apparent “boom” in artificial intelligence is at once an old story and a new one.
As he points out, computer scientists such as Alan Turing have understood since about the 1940s that a programmable computer could, in theory, be given a set of instructions complex enough to mimic the behavior of a human being. This was the basis of the famous Turing Test, which judges a machine intelligent if it can convince a human, through text interactions, that it is a real person.
And yet, in a cycle of booms and busts, or what Chen calls “AI Winters,” the computer industry has repeatedly failed to deliver artificial intelligence that lives up to the expectations set by such theories. At every stage, artificial intelligence has made breakthroughs that appear, on the surface, to be profound, but which ultimately disappoint in their inapplicability to broader uses. At the same time, the goalposts for consumer-level conceptions of AI have continually shifted, and not necessarily with good reason.
In a sense, these booms and busts are of our own making. Human beings are not well equipped to understand AI as a concept. As Chen points out, Hollywood has trained audiences to conceptualize AI as a malevolent, or at least dangerous, and highly powerful version of human consciousness, with all the same emotional motivations a human would have. Of course, this is a very poor reading of Turing’s theories, and a poor way of imagining artificial intelligence.
While AI has never (yet) lived up to the concepts presented in popular culture, or suggested by the narrow range of activities computers already do as well as or better than humans, we also tend to discount AI’s progress over time, as we grow used to its preeminence in domain after domain. In short, whenever AI gains new abilities, we discount their significance because they don’t match our expectations of how AI is supposed to behave.
A machine that exhibits “human-like behavior” would be doing so, as Ray Kurzweil pointed out in his 2011 response to Paul Allen’s critique of his work, for entirely different reasons, and by entirely different mechanisms, than an actual human being. The aim of artificial intelligence is not to replicate the human mind, but to replicate human abilities.
And in that process, machines have made enormous strides over the past 50 years. From syntactic to semantic language processing, machines have caught up with the human ability to process and decode complex language, both spoken and written. Using neural networks and deep learning, machines have taught themselves to identify and categorize pre-defined shapes and, more recently, objects that are not pre-defined, from video, images, and audio.
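The idea of a machine “teaching itself” a rule from examples can be illustrated with the simplest possible neural unit: a perceptron. This toy (pure Python, with a made-up four-example dataset) is a sketch of the principle behind neural networks, not of any modern deep learning system:

```python
# A single perceptron learning the logical AND function from examples.
# Weights start at zero and are nudged whenever a prediction is wrong;
# no human ever writes the classification rule itself.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias term
lr = 0.1        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):            # repeated passes over the data
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]  # adjust only when a prediction is wrong
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

After a handful of passes, the weights settle on a decision rule that reproduces every example correctly, even though the program contains no explicit statement of what AND means.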
In narrow skill sets, machines now regularly best their human counterparts in a wide range of activities, from playing games such as chess, Go, and Jeopardy, to driving cars, flying planes, diagnosing mechanical failures, assessing human health, and much else besides.
But, as Chen also points out, herein lies the strange paradox of man’s relationship with machine. While machines outpace human beings in narrowly defined and highly complex tasks for which much data is available, they routinely fail at very simple tasks, for which data is harder to collect or to contextualize.
Thus, a computer can navigate a car through the complex and extensive roads and highways of a major city, but it cannot guide a robot through preparing a simple cup of tea in a kitchen it has never seen before.
One task appears to humans to be more complex than the other, but in its fine-grained and irregular nature, preparing a cup of tea in an unknown kitchen is, for a machine, far, far more complex than navigating a car on a public road. Where humans must struggle for many months to achieve competence at driving, a machine can learn the behavior in remarkably little time. Yet no machine has been built that can perform the mundane activities for which humans require no preparation whatsoever.
While fanciful notions of literally replicating the structure of the human brain have mostly been abandoned by computer scientists, it is now generally recognized that the way the human mind operates, through massively redundant replication of processes across every area of the brain at once, is an important key to how “digital brains” will evolve in the future.
As Ray Kurzweil has pointed out for many years, the human brain evolved around a set of relatively simple mechanisms. Neurons record, access, and replicate information over time, with no single neuron or part of the brain governing the operation of the whole. No central program exists in human intelligence; instead, the brain is a collection of different networks, each of which are sensitive to different kinds of input, and each of which interact differently with information.
At the same time, the different centers of the brain remember, access, and alter information which the mind (both unconscious and conscious) finds useful in some way. In other words, the whole brain continually reshapes its own inner structure to accommodate its needs over time.
Much modern research into the theory of mind has shed light on this process, and laboratory experiments have shown that the brain is capable of restructuring itself, a phenomenon first observed among children, who can learn multiple languages without the benefit of any understanding of language as a concept.
And what this means for the development of AI is that, more and more, the focus has been and will continue to be on building machine minds capable of rewriting their own internal processes and seeking out information according to their own needs. Deep learning, whereby neural networks use “training data” to examine their own experience and test their previous assumptions against new information, is now the most exciting area of computer science.
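The “test assumptions against new information” loop can be sketched in a few lines: a one-parameter model fit by gradient descent, which simply resumes training when fresh observations arrive. The data points here are invented for illustration:

```python
# Fit y = w * x by gradient descent on mean squared error.
# When new observations arrive, the same loop keeps running: the model
# revises its earlier estimate instead of relying on a fixed rule.
def train(w, data, steps=500, lr=0.01):
    for _ in range(steps):
        # gradient of mean((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

old_data = [(1, 2.0), (2, 4.1), (3, 5.9)]     # early points suggest y ≈ 2x
w = train(0.0, old_data)
print(round(w, 2))  # → 1.99

new_data = old_data + [(4, 12.0), (5, 15.2)]  # newer points suggest y ≈ 3x
w = train(w, new_data)
print(round(w, 2))  # → 2.76, pulled toward the newer trend
```

The second estimate is a compromise between old and new evidence, which is exactly the behavior the paragraph above describes: prior conclusions are not discarded but continuously revised as the data changes.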
The combination of massive amounts of new data with neural networks that can self-direct their exploration of that data, refining their understanding on an evolving basis, promises to broadly transform the way computers work in the very near future. In many respects, from self-driving cars to Google’s DeepMind, this transformation is already well underway.
Later this week, we’ll publish Part 2 of this two-part discussion, covering the last two pillars of exponential innovation in the modern world: Material Science and Energy.