Note from 2025: I wrote this blog in 2012, to celebrate my 30th anniversary as a professional working in the software field. The other blogs in the “30 year” series are retrospectives on my career up to that point. You can find those here, here and here. The current blog forecasts what, at that time, I thought the next 30 years, from 2012 to 2042, might look like. Even though this contains some “notes to myself” toward the end, I’ve decided to leave it exactly as written, warts and all, because, frankly, I’m pretty proud of it. As I look back on these predictions from the perspective of 2025, thirteen years of that original 30-year interval have already passed. While not all of my predictions have happened yet, many have, and I think the rest still seem very likely. I especially like my predictions about AI and what we now call “deep tech” or “hard tech”: the interaction of the digital and the physical world. Incidentally, I cover a portion of the material presented here in a keynote presentation I gave on IoT a couple of years after this was written: https://youtu.be/En7KLoRqxsc?t=2273
As the final installment of the series looking back over my past 30 years in software, I’d like to turn the clock forward and look at what may be in store for us in the year 2042—thirty years in the future.

Moore’s Law, as it’s often stated, says that the power of computing devices doubles every 18 months. This observation and prediction have held true since at least the 1960s. Throughout my career, well-reasoned and convincing arguments have been made almost every year that, while Moore’s Law may continue to work for the next decade or so, beyond that point a fundamental physical limit will be hit that will slow further progress. Each time, however, new discoveries have been made or new approaches taken that allowed the steady improvement of computing power to continue.
So let’s assume that Moore’s Law will hold good for the next three decades, as it has for the last five or more. If that’s the case, by 2042 computing power will have grown to more than a million times (2^20) its current value. Your pocket-sized smart device thirty years from now, if Moore’s Law holds good, will be about twice as powerful as 2012’s fastest supercomputer. That supercomputer resides in a National Laboratory, draws 8.2 megawatts of power, and covers an area of over 4,000 square feet (400 m^2). Your engineering desktop workstation in the year 2042 will be about four times more powerful than that. It will have the same processing power as Google’s entire (estimated) two-million-server global infrastructure does today[i]. And if history is any guide, in 2042 both engineers and smart device users will still be clamoring for faster machines!
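For the curious, here is a minimal sketch of the back-of-the-envelope arithmetic behind these claims. It assumes the 18-month doubling period used throughout this post and the rough 2012 baseline figures from footnote [i] (an iPhone at ~27 GFLOPS, an engineering desktop at ~100 GFLOPS, Titan at ~17.59 petaflops, Google’s fleet at ~100 petaflops); the exact numbers are estimates, not measurements.

```python
# A back-of-the-envelope sketch of the Moore's Law arithmetic above.
# Baseline figures are the 2012 estimates from footnote [i]; the 18-month
# doubling period is the usual statement of Moore's Law assumed in this post.

DOUBLING_PERIOD_YEARS = 1.5
YEARS_AHEAD = 2042 - 2012

growth = 2 ** (YEARS_AHEAD / DOUBLING_PERIOD_YEARS)   # ~2^20, a bit over a million

iphone_2012_flops  = 27e9       # ~27 GFLOPS (Apple A6 class device)
desktop_2012_flops = 100e9      # ~100 GFLOPS (Ivy Bridge class workstation)
titan_2012_flops   = 17.59e15   # Cray Titan peak, ~17.59 PFLOPS
google_2012_flops  = 100e15     # rough estimate for Google's 2012 server fleet

print(f"Growth factor over 30 years: {growth:,.0f}x")
print(f"2042 pocket device vs. 2012 Titan: "
      f"{iphone_2012_flops * growth / titan_2012_flops:.1f}x")
print(f"2042 desktop vs. 2012 Google fleet: "
      f"{desktop_2012_flops * growth / google_2012_flops:.1f}x")
```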

So what will you do with this incredible computing power at your fingertips?
Well, play games obviously. And shop. View and create entertainment and media. And interact with your friends, and do the normal things that humans like to do. If history tells us anything, it’s that people remain the same even when their environment changes radically. Jokes told during Roman times can still be funny, and words written thousands of years ago can still move us, inspire us, and resonate with our own experience. I think we can safely assume that however much our technology changes in the next 30 years, people will still be acting like people, and doing the same types of activities that people have done for millennia—though, of course, in some very different ways as we’ll discuss below.
A more intriguing question, perhaps, than what people will be doing with their machines, is whether other entities will also be acting more like “people” in 2042. Some serious futurists[ii] speculate that when a computing system reaches 10 exaflops (10^19 floating-point operations per second) it has the potential to model, and perhaps even develop, human-scale intelligence. This degree of processing power is roughly 100 times our current estimate of the processing power of Google’s 2012 global infrastructure. If Google keeps upgrading this infrastructure in step with Moore’s Law, their network will exceed this human-intelligence threshold before the year 2025. And further in the future, by the year 2042, a network of just 100 of the then-current standard server machines would have a “human intelligence” level of processing power, at a price affordable by a small business—say, around $200,000 in today’s money.
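Under the same assumptions, here is a small sketch of when that 10-exaflop threshold would be crossed; the ~100 GFLOPS per-server figure at the end is my own illustrative assumption, roughly in line with the desktop figure in footnote [i].

```python
import math

# When does a ~100 PFLOPS fleet, doubling every 18 months, cross 10 exaflops?
# Baseline and threshold figures are taken from the text and footnotes; the
# per-server figure below is an assumption used only for illustration.

DOUBLING_PERIOD_YEARS = 1.5
google_2012_flops = 100e15        # ~100 PFLOPS (footnote [i] estimate)
human_scale_flops = 10e18         # 10 exaflops (Kurzweil's figure, note [ii])

years_to_cross = DOUBLING_PERIOD_YEARS * math.log2(human_scale_flops / google_2012_flops)
print(f"Threshold crossed around {2012 + years_to_cross:.0f}")   # ~2022, i.e. "before 2025"

# In 2042, how many then-standard servers would add up to 10 exaflops?
server_2012_flops = 100e9         # assumed ~100 GFLOPS per 2012 server
server_2042_flops = server_2012_flops * 2 ** ((2042 - 2012) / DOUBLING_PERIOD_YEARS)
print(f"Servers needed in 2042: {human_scale_flops / server_2042_flops:.0f}")  # ~100
```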
This is all speculation, of course, but what would it mean to have computers with human-scale intelligence available to us? Some futurists predict that as we put our most intelligent machines to work designing even more intelligent machines, over time our computers will become so powerful that they will far exceed human intelligence and even human comprehension. Driven by these super-human intelligences, the rate of technological change will go from being fast to being essentially infinite, with new iPhone-scale innovations occurring every few seconds instead of at intervals of years or decades. Depending on their emotional outlook, some speculate this could result in a “diamond age” of infinite wealth and possibilities, while others believe it will lead to the end of human civilization, along the lines of “Skynet” in the Terminator movies.
Futurists sometimes call this point of infinite innovation the “singularity” or, in a phrase that I prefer, the “rapture of the nerds”[iii]. While some scenarios put this hypothetical event within or around our 2042 horizon, I for one am not going to worry about it. I find both the worst and the best case scenarios tend to be the least likely ones to play out. What generally happens is something far more quotidian. It’s true the course of history—of technology or pretty much anything—is often a “punctuated equilibrium”: a period of stability or relatively linear progress that is set in a new direction by a singular event. It is certainly possible that we may be approaching such a singularity within the next 30 years. However, by definition, the future then becomes unpredictable. So let’s assume we continue tottering along somewhere between the extremes of rapture and extinction, and continue to look at where we might end up.
I think the most likely near-term manifestation of large-scale computing power will be really good intelligent agents. I am currently a frequent user of Apple’s voice-controlled Siri system for iPhones and iPads, regularly using Siri to check the weather, play songs, send texts, and perform other tasks. By even the most generous interpretation, however, Siri’s current capabilities are nowhere close to showing real “intelligence”. But I do think it shows the direction. Where even the best commercial voice recognition system in 2012 is mediocre at best by human listening standards, Siri is clearly far better than anything available to consumers 10 years ago. It’s clear that over the course of the next decades, given Moore’s Law, at some point voice recognition will become really good. Likewise the “intelligence” behind it will become much more sophisticated, not just doing what I specifically ask for, but “thinking ahead” and solving real problems for me.
In truth, though, I think few people will need their own intelligent agent. In my own career I’ve had human assistants or “admins” who, of course, already had the human-scale intelligence these automated systems aspire to! I am talking about the type of admin who books trips, schedules meetings, files expense reports and so on—not a project manager or budget analyst. Frankly, during those times when I had a human admin 100% dedicated to me, I needed to make a special effort just to keep them busy. This is because I have always done many administrative tasks myself—using technology—that in my parents’ generation used to be delegated; a trend that will clearly continue and accelerate. My (human) assistant right now is shared between several execs, and that works out well for all of us. My guess is the artificially intelligent assistants in our thirty-year future horizon will be much like this—shared between multiple people, and hosted by businesses. Undoubtedly many businesses will deploy such artificially intelligent systems as “call center agents”—replacing today’s ubiquitous “interactive voice response systems” (“Press 1 for Marketing, 2 for Sales” and so on) with real problem-solving capability—at least one may hope.
Hand in hand with ever-increasing compute power will be the increasing intermingling of the physical and virtual worlds. This is a clear trend. Even our current 2012 generation smart devices contain a wide array of sensors. The smart device now in your pocket or bag can almost certainly sense its geographical location, physical orientation, changes in direction, ambient light, sound, temperature (to some degree), “feel” through its touch screen, “see” through its camera and “hear” through its microphone. Your device can also sense things we humans cannot, such as radio signals. Clearly smart devices in the future will have more and subtler senses—perhaps water-vapor or infrared sensors, pressure and texture sensors (“touch”), environmental chemical sensors[iv] (“smell” and perhaps “taste”), height above ground, micro-location (that is, 3D position accurate to the centimeter or better) and micro-orientation, automatic triangulation between other nearby sensors, automatic information and preference exchanges with other nearby smart devices, and many others.
These future devices will also make better use of the sensors they have because of their increased computing power. While more resolution is always welcome, camera sensors on current-generation smart devices arguably already have enough resolution to support face recognition; yet using the smart devices themselves to do facial recognition of arbitrary individuals—or even of a single individual in varying lighting conditions—is not very satisfactory. This is an issue of “training” (access to accurate data), storage and compute power, as well as battery consumption. Similarly, voice recognition, scene recognition, object recognition and many other processing tasks are just beyond the ability of current devices to do really well. I think it’s a safe bet that 30 years from now, the technology on your smart device will be capable of recognizing any person you point it at and any object it “sees”, and of understanding every word that it “hears”—among many other recognition tasks.
Because of decreasing cost, these smart devices will also be more ubiquitous and more interconnected. We already see this trend, and it will clearly continue. By 2042—and perhaps well before—computing power will be literally everywhere. Not just in our smart phones, home entertainment and businesses, but in our clothing[v], our shoes, our eyeglasses[vi], our walls, our appliances, our cars, everything made out of glass[vii], low-cost packaging, Harry Potter-like “newspapers”, even our toilets and toothbrushes! These devices will all “talk” to each other in complex and artificially intelligent ways, making things like commercial self-steering cars, real-time health and exercise monitoring, and custom-tailored advertising commonplace. While this may seem the stuff of science fiction, it is really not very speculative at all—many of these applications and devices either already exist or are in development today. What will change over the next 30 years is their ubiquity: compute and display power will be literally everywhere.
In addition to being much more aware of and embedded in the world around them, future generation devices will also have an increased ability to manifest the virtual world in the physical world. Right now, our smart devices offer speakers and headphones to produce sound, and increasingly high-resolution color displays to show images. While these are wonderful in their own way, over time our smart devices will have more and more ways of projecting virtually created objects into the real world. Let’s briefly consider two current technologies that I believe will become an integrated fact of life in the future. “Augmented reality” technology—such as that used to paint markers onto football fields in sports broadcasts—overlays or composites virtual images on top of the “live” image being seen through a camera. Because the virtual images are rendered in the same perspective as the camera—using fine-grained orientation and location sensors—the real and the virtual world are combined into a single, seamless view.
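As a minimal sketch of the geometry involved, the snippet below projects a virtual 3D point into the pixel coordinates of a camera whose pose is known (the kind of data those orientation and location sensors provide). It assumes a simple pinhole camera model with made-up numbers; it is an illustration of the principle, not any particular AR system’s API.

```python
import numpy as np

# The geometry behind augmented-reality compositing: given the camera's pose
# (from the device's sensors) and its intrinsics, project a virtual 3D point
# into pixel coordinates so it can be drawn over the live camera image.
# Pinhole model; illustrative only.

def project_point(world_point, cam_position, cam_rotation, focal_px, image_size):
    """Project a 3D world point into 2D pixel coordinates."""
    # Transform the point from world coordinates into camera coordinates.
    p_cam = cam_rotation @ (world_point - cam_position)
    if p_cam[2] <= 0:
        return None  # behind the camera, nothing to draw
    # Perspective divide, then shift to the image center.
    u = focal_px * p_cam[0] / p_cam[2] + image_size[0] / 2
    v = focal_px * p_cam[1] / p_cam[2] + image_size[1] / 2
    return (u, v)

# Example: a virtual marker 10 m in front of the device, slightly to the left.
identity_rotation = np.eye(3)          # device looking straight down +Z
marker = np.array([-1.0, 0.0, 10.0])   # meters, world coordinates
pixel = project_point(marker, np.zeros(3), identity_rotation,
                      focal_px=1500, image_size=(1920, 1080))
print(pixel)  # where to composite the marker onto the camera frame
```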
In the future, smart devices will use transparent “heads-up” displays to overlay what we are actually seeing with information from the virtual world. Google Glass is one such real-world integration initiative[viii]. While the current-generation device lacks the ability to dynamically overlay images in perspective on our current environment, this will clearly be addressed in succeeding generations, and we will have true “augmented reality” throughout our day in everything we see through our glasses or contact lenses. In addition, our devices may incorporate projectors that appear to “beam” images from the virtual world into the physical, along the lines of the holographic images in the Star Wars saga (though you may need to look through a transparent screen to see them)[ix]. Devices will also be able to interface with the physical objects around them and use those objects as display devices and sensors. For example, the windows and walls of your home will become displays for next-generation smart devices, fully integrating your virtual and physical worlds.
Because of improved sensor and display technology, in the future you will interact with your mobile device in increasingly “natural” ways—that is, in ways similar to how you interact with other people and with the physical environment around you. Your mobile device will be able to read your facial expression and tone of voice, for example, and respond to your mood more as a person would. You will use speech and gestures to manipulate real or virtual objects in increasingly natural ways. As a current example, consider the touchscreen and the “swipe” gesture. I have seen both small children and my late mother-in-law—who was in her 90s—adopt this gesture almost without thought.
Current (year 2012) smart devices perform two key functions: (1) they offer us portable compute power as well as the associated displays and sensors to take advantage of it, and (2) they serve as proxies for our identity. There is a strong argument that the first function—that of providing portable computation, input and display power—will be supplemented or even subsumed by the environment around us as computing power becomes genuinely ubiquitous. For example, if every glass surface in our environment can act as a display and input device, the need for a large screen or keyboard in the smart device itself is reduced—meaning that our devices can be made physically smaller.
I personally believe many smart devices will retain enough functionality that they can be used in a self-contained way; but there’s a good case that they won’t. In this case the second function—serving as a proxy for our identity—may become the primary function of smart devices. The original function of our smart devices was, of course, to receive phone calls. In this case, the device was an obvious proxy for our identity—when someone calls “your number” they of course reach your phone, not you personally; that is, the phone is a proxy for you. Similarly, smart devices are now serving as payment and location tokens and, I believe, will increasingly broadcast information about us in the digital realm just as our appearance does in the physical realm. If the primary function of a smart device is to be a stand-in for us in the digital world, having such devices physically embedded in our body does not sound out of the question!
“Virtual Reality” is in some ways the opposite of “Augmented Reality”. Where augmented reality projects the virtual world onto the physical, virtual reality projects the physical world into the virtual. The simplest manifestations of virtual reality are programs that enable you to control virtual beings or “avatars” in a computer-based world. Today in 2012, this has been commonplace for decades in gaming and also in simulations such as “Second Life” and “The Sims”. We have already seen games and simulations grow dramatically in sophistication. Thirty years from now these environments will be extremely rich and immersive, probably utilizing special suits and glasses, gestural interfaces or motion-capture technology to allow you to project yourself into them in a seamless and intuitive way.
As computing power grows closer to having the full capabilities of the human brain, some serious research organizations—including DARPA, the US Defense Advanced Research Projects Agency, which pioneered the Internet—are looking for ways to directly connect the human brain to external sensory apparatus as well as, presumably, virtual worlds[x]. This has real-world application for controlling artificial limbs, and prosthetic devices that respond to brain control are already under development[xi]. As computers capable of simulating every neuron and dendrite in the human brain become a reality, it is not entirely science fiction to imagine that at some point people may be able to “upload” their entire personalities into a computer. If that ever happens, though, it is likely to be outside the 30-year time horizon we are looking at, because, fast as Moore’s Law is, the computing power to do this won’t be readily available quite yet—even in 2042.
A profound extension of the virtual world into the physical world is 3D printing. While it may sound like science fiction, the ability to “print” physical objects in layers has actually been around since the 1980s. What has changed is the reduced cost and increased precision and speed of these devices due to the availability of more processing power.
One type of “additive manufacturing” or “3D printing” that is becoming practical is to use an inkjet printer to deposit one layer at a time of plastic or other material onto a substrate, which is then hardened. A second layer is laid on top of the first, and so on until a complex three-dimensional structure is built up in cross-sections, layer by layer. Scaffolding made of sprayed-on wax or other removable material is used to support voids in more complex objects. By laying down the sections of a three-dimensional object one-by-one—much like paging through the succession of individual cross-sections in a CAT scan—a three-dimensional object of almost arbitrary complexity can be built from a 3D computer model.
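To make the layer-by-layer idea concrete, here is a toy sketch that “slices” a simple voxel model into the horizontal cross-sections a printer would deposit one at a time. Real slicers work on CAD meshes and generate toolpaths for a specific machine, so treat this purely as an illustration of the principle.

```python
import numpy as np

# A toy illustration of slicing: represent a solid as a 3D voxel grid and
# walk through it one horizontal layer at a time, just as an additive
# manufacturing process deposits one cross-section after another.
# (Real slicers work on CAD meshes and emit toolpaths; this only shows the idea.)

def make_sphere(radius, size):
    """A solid sphere as a boolean voxel grid."""
    coords = np.indices((size, size, size)) - (size - 1) / 2
    return (coords ** 2).sum(axis=0) <= radius ** 2

def slice_layers(voxels):
    """Yield each horizontal cross-section from the bottom up."""
    for z in range(voxels.shape[2]):
        yield voxels[:, :, z]

model = make_sphere(radius=4, size=11)
for z, layer in enumerate(slice_layers(model)):
    filled = int(layer.sum())
    print(f"layer {z:2d}: {filled:3d} voxels to deposit")
```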
Many different 3D printing technologies exist in addition to inkjet that can take a computer model and “print” metal, plastic, glass, ceramic or other materials into a real-life object. Even food items have been printed. Perhaps the most intriguing of the many items that have been or will be “additively manufactured” are replacement human organs and bone joints. For at least a decade, researchers have been looking at ways to “print” new organs using computer models and inkjet technology[xii], using the jets to deposit living cellular material onto a cell-nurturing substrate. This is in some ways a holy grail: by printing organs using the patient’s own cultured cells, both the need for donors and the possibility of organ rejection can be avoided. It’s even conceivable that the patient’s own DNA could be synthesized on the spot from digitized sequencing data and used to grow the required cells, making the production of his or her own personalized organs available on demand anywhere in the world. And what about printing entire human beings from scratch? That is getting further into the science fiction realm than I’d like to go in this blog, but it is certainly something to think about.
Compared to printing organs and whole people, printing bones, joints and teeth may seem more prosaic. However, printing custom-shaped replacement teeth is already being done commercially[xiii], and printing artificial hips customized to the patient’s specific bone structure is an active area of research[xiv]. I have had two elderly family members go through hip replacements, and even though the surgery went well, their mobility and comfort were affected afterward. To be able to essentially recreate and implant an exact replica of their original hip would have been of tremendous benefit—and in a few decades at most I think this will be commonplace.
While many challenges remain, thirty years from now it seems very likely to me that the technology of printing custom-made human organs, bones, teeth and joints will have been perfected. Whole humans may take a while longer!
Printing technology also suggests one of several possible paths around obstacles to Moore’s Law, in that it offers a way to create truly “solid” three-dimensional circuits by laying them down one layer at a time. Today, 3D printing technologies for electronics are primarily being considered as a means of creating macro-sized objects like three-dimensional circuit boards. Individual structures in conventional integrated circuits are expected to reach dimensions of 10nm within the next decade—far smaller than the 65nm structures that 3D printing can currently produce[xv],[xvi]. Still, it’s not beyond imagination that within the next 30 years techniques will be found to “print” semiconductors and the structures required for integrated three-dimensional circuitry.
Perhaps the holy grail of “3D printing” would be the ability to “print” molecule by molecule, or even atom by atom, thereby creating entirely new chemical and physical structures. This is similar to the ultimate aim of a discipline called nanotechnology, and the theoretical machines in that discipline that can do manufacturing on the molecular level are called “Assemblers” rather than printers. A molecule-by-molecule assembly technology, along with software control, would among other things allow medical professionals to create “designer” medical compounds tailored specifically for you as an individual[xvii]. While this sounds far-fetched, note that biological processes “programmed” by DNA do indeed have the capability to synthesize new molecules. You yourself were physically produced using such “programmed”, chemically synthesized materials. The goal here would be to do such an assembly process—literally—programmatically under software control[xviii].
A fully realized nanotechnology program would have the ability to create any structure using the chemical materials at hand. Much as a seed can be regarded as a set of programming instructions and chemical machinery that re-organizes dirt, sunlight and water into a tree, nanotechnology would have the ability to create—grow if you will—physical objects using information created in a virtual environment. While this sounds incredible, the pieces are in place to make it happen—and I think it will happen, within the next 30 years. While you may not be able to “grow” a chair by 2042, you will probably be able to design one on your computer and then print it at home or at a nearby service. And, with nanotechnology, once you have the physical object you may then be able to manipulate it into a different shape, color, and texture programmatically, as your needs or wishes change.
At the heart of many of the trends I see converging over the next 30 years are “just in time” production and “bespoke” personalization. I believe you will have an unprecedented level of control over the entertainment you see, the goods you buy, the medicines and healthcare products you use, and the way you interact with the world around you generally. Going one step beyond that, I think many items will be created for you “to order”, “on the fly” as you want them, including software applications.
For example, a physical clothing store of the year 2042—if such a thing exists at all—may have only a very limited selection of sample garments. Those they do have will be mainly to stimulate ideas or showcase the latest fashions. You may try on a sample for “feel” and general appearance, after which your own personalized garment will be created for you, as you wait, in your choice of color and with personalized fit and details (pockets, no pockets and so on). Such boutiques would be primarily for people who enjoy the experience of shopping. People who don’t want to shop can—in the “virtual” world—send their intelligent agent or “avatar” to try on and even select clothes for them, perhaps with the aid of intelligent assistants or human-driven shop assistant avatars. Some clothes may be “printable” at home, obviating the need even for shipping.
These trends are already clear in the “fast fashion” industry (Zara, H&M, etc.), and in the emerging computerized “made to measure” industry. The technology is not really a stretch; what would need to evolve over the next 30 years are the consumer preferences to make this a reality. Due to increasing automation, the economics in many industries have already shifted so that making one unique item costs roughly the same per unit as manufacturing the same item in quantity. I believe the trend will continue so that items made “just for us” and “just in time” will become the norm. This is not just idealism; the huge savings in transportation, inventory and “wastage”—producing items that are never bought and need to be marked down—are very real and very measurable, and are already driving fundamental shifts in the apparel industry. My guess is this will continue, and the trend will spread to other industries as the technology to rapidly produce a “custom” product evolves.
In addition to the just-in-time manufacture of physical goods such as furniture and clothing, I believe many virtual goods such as software will also be created “on the fly” by intelligent agents. Software tools have already progressed to the point where a typical “appstore”-type app for a smartphone now takes only a few days, weeks or at most months for one or a small handful of people to develop, requiring only a few hundred to a few thousand lines of code. This is a huge reduction from the norm of just a few years ago, when teams of dozens or even hundreds of people took 18 months to produce a new product, which typically consisted of hundreds of thousands or millions of lines of code.
Granted, most such “appstore” applications are simple compositions of pre-built components. These apps may leverage sophisticated services that do indeed take significant engineering effort to produce—but many apps can leverage the same services. Simple as they are, appstore-type apps solve real problems and are the kind of thing consumers are looking for most often. I believe in the future many apps will be created on demand by intelligent agents and then just as quickly discarded.
For example, before a recent car trip my wife asked me what the weather would be like along our route. I was momentarily at a loss—I knew how to find the route and I knew how to find the weather at each point along the route. What I didn’t know how to do was combine the two with the estimated-time-of-arrival for each point along the route, and then produce a meaningful display. I ended up finding a pre-built app that gave me what I wanted—and I suppose I could have written it myself given a small amount of effort. However this is exactly the kind of solution an intelligent agent could instantly build out of existing components to solve a problem in real-time.
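As a rough sketch of the kind of composition such an agent might perform, the snippet below stitches together route waypoints, estimated arrival times and forecasts. The route and weather sources are hypothetical stubs of my own invention; a real agent would wire up actual mapping and weather services on demand.

```python
from datetime import datetime, timedelta

# A sketch of the "weather along the route" composition described above.
# The data sources here are stubs standing in for real mapping and weather
# services; an intelligent agent would compose the real ones on demand.

def get_route_waypoints(origin, destination):
    """Hypothetical stub: return (place, hours_from_start) pairs along the route."""
    return [("Springfield", 0.0), ("Riverton", 1.5), ("Lakeside", 3.0)]

def get_forecast(place, when):
    """Hypothetical stub: return a forecast string for a place and time."""
    return "partly cloudy, 18 C"

def weather_along_route(origin, destination, departure):
    report = []
    for place, hours in get_route_waypoints(origin, destination):
        eta = departure + timedelta(hours=hours)           # estimated time of arrival
        report.append((place, eta, get_forecast(place, eta)))
    return report

for place, eta, forecast in weather_along_route("Springfield", "Lakeside",
                                                datetime(2012, 10, 6, 9, 0)):
    print(f"{eta:%H:%M}  {place:<12} {forecast}")
```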
I believe that by the year 2042 “games” and “movies” will have merged into a single form of entertainment that is very immersive. You will be able to both surround yourself with the drama in an “augmented reality” type of fashion—that is, projected onto the physical world—and also to project yourself as a character into the virtual world created by the content you are watching. A simple example of this would be the ability to give the hero or heroine of a conventional movie your own face and idealized appearance, and likewise cast the other characters as you choose. There will still be made-up plots and characters in the future, I believe, but certain plot points will evolve according to your actions within the drama and your preferences. That is, you will be an observer or a character in a play that is partly of your own making. Certainly we will still enjoy passively watching the creations of others—but even those we will probably be able to easily tailor to our preferences.
To non-gamers, it may sound like science fiction. To the gamers among us, though, it is old news—current (2012) generation video games do much of this already. What will evolve, I think, is that the excitement of a great cinema experience will merge seamlessly into the interactivity and immersive quality of video gaming, and the two will in some sense become one. Certainly people will still go to places—theaters—that have more sophisticated equipment than an individual can comfortably afford, both for access to a heightened experience and for the social aspect. However I think our powerful smart devices and their associated intelligent agents will themselves become active participants in the experience, allowing us to continue seamlessly where the cinema experience leaves off.
Finally, a few perhaps paradoxical predictions. Though intelligent agents will take over more of the mundane tasks of programming, design and production, I believe the role of the “designer” will be more important than ever.
Role of the designer
Value of artisanship
And, finally, what happens to all the programmers? Well, someone needs to make all this stuff happen! The programmer is the person who tells the machines what the humans want them to do. Though the technologies and approaches to this will change, that essential mission will remain just as important for the next thirty years as it was for the past thirty. And perhaps more so, because more things are now possible. I think it’s an exciting future we face—and it’s you who will help make it. I wish you much joy on the journey that, as Steve Jobs so rightly said, can itself be the reward.
Coda
Science Fiction writer Arthur C. Clarke was probably one of the most insightful futurists in recent history. In 1968, Clarke made bold predictions about the future in his book “2001: A Space Odyssey”. He was looking across just about the same 30-year time window I’m trying to envision. How well did Clarke do[xix]?
Well, it was mixed.
Even though it was written in 1968, some of Arthur Clarke’s predictions in “2001” were highly accurate: the re-establishment of friendly relations between the US and the countries of the former Soviet Union happened by the early 1990s; video conferencing became popular for businesses in the 1990s and went mainstream for consumers in the early 2000s (Skype first came out in 2003); in-seat personal televisions for in-flight entertainment in airplanes (Clarke has them in spaceships) came out in the late 1980s; and many of the other predictions in his book indeed came true before or shortly after the year 2001.
A number of other devices that Clarke predicted—for example, a networked flat-screen tablet device—have become commonplace, though not by the year 2001. And there were other devices—for example, mobile phones—that had become ubiquitous by 2001, but which were not mentioned by Clarke, at least in this book.
On his major predictions, Clarke was completely wrong—at least on the timing. Even though man actually got to the moon years before Clarke envisioned, we didn’t stay: no permanent bases were established there, and no program of manned exploration has been attempted for any other planet. Intelligent computers with fluent speech and speech recognition, the ability to recognize faces and pictures, and the ability to think autonomously still remain in the future at this writing, more than 10 years after the year 2001 has passed. To date, no signs of intelligent life have been detected on planets other than Earth. True, there are tantalizing hints that microbial life exists or once existed on Mars[xx], and that the number of Earth-like planets is large[xxi], with one perhaps as close as 6.5 light-years away[xxii]. But, bottom line, Clarke’s most provocative predictions have yet to come to pass.
Clarke’s predictions that have so far failed to come true did so for several reasons:
- The external forces that had been driving technology in a particular direction changed. The driving force behind the space program at the time Clarke was writing—the 1960s—was the military competition between the US and the Soviet Union called the Cold War. By the time of the first moon landing in 1969, the competition between the two countries had already shifted to arenas other than space. And by the early 1990s, the Cold War had ended entirely. Without the military imperative behind it, the manned space program and the funding it required did not continue as Clarke had envisioned—hence no colonies on the moon by 2001. While I think that macroeconomic and social forces will continue to drive the electronics industry as they have in the past, I could be wrong. While individual “smart” items themselves get cheaper and cheaper, at present more and more massive concentrations of capital are needed to create those devices in the first place. Nanotechnology—once created—could completely change this equation, but currently it takes billions of dollars to build a new plant (“fab”) to manufacture state-of-the-art display devices, microprocessors and other electronic components. Economic disruption or other changes that limit capital investment in new technologies would slow or conceivably even stop their evolution. New social forces could also cause the technical world to branch out in a direction other than electronics. While I think technical, economic and social forces will continue to favor faster and faster evolution of computer and information technology for the next 30 years at least, events such as an all-out cyberwar, structural economic changes, or greatly heightened privacy concerns could place limits on growth.
- The technical problem was harder to solve than it originally appeared to be. Somewhat surprisingly, speech and object recognition, as well as computer cognition, have proved to be much harder technical problems than Clarke originally imagined. At this writing I would guess that the earliest a HAL-type intelligence will be practical in a spaceship-portable system will be the late 2030s or early 2040s, about 40 years after Clarke’s 2001. In other words, I estimate this goal is still almost as far in the future for us as Clarke thought it was for him, writing 45 years ago. Could my current 30-year forecast turn out to be too conservative or too aggressive? Absolutely. Another possibility for a hard technical problem is something that causes the growth in computing power predicted by Moore’s Law to slow or stop, or that prevents higher-capacity batteries from becoming commercially viable. We may indeed hit some fundamental physical limitation that we can’t figure out how to overcome. I don’t expect this to happen in the next 30 years, but it could.
- Projecting the assumptions of the current time onto a future time. One of the most amusing things when you look at past predictions of the future is the author’s implicit assumption that some commonplace item of the author’s own time would remain commonplace. One of my favorite “golden age of science fiction” (1950s and 1960s) authors, Robert Heinlein, describes people in the far future using slide rules to navigate their space ships. Arthur Clarke features phone booths—rather than mobile or networked phones—and other relics of 1960s culture in 2001. Far from criticizing them, we should recognize that the most difficult part of predicting the future is that we are looking forward from the context of our own time. Like fish in water, we are often so accustomed to what surrounds us here and now that it never occurs to us to question its existence in the future. As with any deep learning experience, the hardest part is to “let go” of our current assumptions—especially when we aren’t even conscious that we are making them.
For my predictions in this blog, from the vantage point of the year 2012, I believe that I am actually being quite conservative about what will happen by 2042. I can see that the manner in which these predictions will come true will probably change—for example, perhaps a technology other than 3D printing may evolve to produce the results I mention. Still, as I write this I think the predictions themselves are very likely. But then again, I’m sure Mr. Clarke felt the same way when he wrote his famous book. The business of predicting the future is ever fraught with peril, because one thing that is entirely predictable is that people will surprise you!
Personalized medicine
Nano-technology—creating macro structures
Bespoke
Printing custom organs and hip replacements
nanotechnology
Intermingling of computers with normal devices
3D printing
immersive—physical and virtual world are seamless
[i] The Apple A6 chip used in the iPhone 5 (with its tri-core PowerVR SGX543MP3 GPU running at 250MHz) has a total processing power of about 27 GFLOPS, per http://en.wikipedia.org/wiki/Apple_System_on_Chips. The most powerful supercomputer known (that is, not classified) in 2012 was the Cray Titan, which at a peak rate of 17.59 petaflops (10^15 FLOPS) was roughly 637,000 times more powerful than an iPhone. The complete Google server farm was estimated to have 2 million separate server machines in 2012 (James Pearn at https://plus.google.com/114250946512808775436/posts/VaQu9sNxJuY), each with a quad-core CPU. Google does not disclose details, but the quad-core Intel i7-950, which shipped in 2009, is a reasonable guess since Google has systems of various ages; these run at about 50 GFLOPS each. Two million servers running at 50 GFLOPS gives 100 petaflops—about 3.7 million times more powerful than a current-generation iPhone. Note that not all of Google’s servers run Search, and not all are working at any given time. A current 2012 engineering-class desktop, containing a microprocessor such as Intel’s i7-3770 (Ivy Bridge), runs at about 100 GFLOPS.
[ii] See, for example, Raymond Kurzweil “Turing’s Prophecy”, http://www.kurzweilai.net/turing-s-prophecy. The 10 exaflop number for human intelligence is from Raymond Kurzweil as quoted in http://www.futuretimeline.net/subject/computers-internet.htm.
[iii] See, for example, http://en.wikipedia.org/wiki/Technological_singularity
[iv] http://www.nevadanano.com/solutions/human-presence-detection/
[v] http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/computer-clothing.htm
[vi] http://www.google.com/glass/start/
[vii] http://www.youtube.com/watch_popup?v=6Cf7IL_eZ38&vq=medium
[viii] http://www.youtube.com/watch?v=9c6W4CCU9M4
[ix] http://www.wired.com/dangerroom/2012/08/iarpa-holograms/
[x] http://www.popsci.com/technology/article/2012-03/achieving-immortality-russian-mogul-wants-begin-putting-human-brains-robots-and-soon; http://www.wired.com/dangerroom/2012/02/darpa-sci-fi/.
[xi] http://www.nlm.nih.gov/medlineplus/news/fullstory_132301.html.
[xii] http://www.cell.com/trends/biotechnology//retrieve/pii/S0167779903000337, cited in http://en.wikipedia.org/wiki/Tissue_engineering. http://www.bbc.co.uk/news/technology-18677627 and http://www.theengineer.co.uk/in-depth/analysis/building-body-parts-with-3d-printing/1002542.article, as cited in http://en.wikipedia.org/wiki/3D_printing.
[xiii] http://www.ft.com/cms/s/2/22affc68-64ee-11e2-934b-00144feab49a.html#axzz2NZArgWsT
[xiv] http://www.economist.com/node/21541382
[xv] Note that the 10nm feature size projected for integrated circuits this decade is already at molecular scale. A small molecule like an amino acid is about 1 nm in size, with 10 nm being fairly typical for a molecule. “Atomic scale” is considered about 0.1 nm—basically the diameter of a single helium atom. By contrast, the wavelength of visible light is enormous—between 400nm to 800nm, depending on the color.
[xvi] http://eetimes.com/electronics-news/4070805/Cheaper-avenue-to-65-nm-, cited in http://en.wikipedia.org/wiki/3D_printing.
[xvii] http://blog.ted.com/2012/06/26/lee-cronin-at-tedglobal2012/, as referenced in http://www.3ders.org/articles/20120627-create-personalized-medicine-using-a-3d-molecule-printer.html
[xviii] http://www.extremetech.com/extreme/143365-3d-printing-cancer-drugs-molecule-by-molecule-using-dna-scaffolds
[xix] Though not entirely serious, thanks for some useful reminders from the blog http://www.currybet.net/cbet_blog/2009/02/how-accurate-was-kubricks-2001.php
[xx] http://www.theverge.com/2013/3/14/4100578/life-on-mars-still-elusive-after-curiosity-viking-other-discoveries
[xxi] http://news.discovery.com/space/earths-exoplanets-solar-systems.htm
[xxii] http://www.newscientist.com/article/dn23271-closest-earthlike-world-could-be-65-light-years-away.html