NOTE FROM 2025: I wrote down a few thoughts at the time of my 30th Anniversary as a software professional, in August of 2012. I didn’t publish it on the company blog, given that some of these thoughts here are more personal and ‘political’. A number of the ideas here informed the series I did publish, though. The 30th Anniversary series itself consists of the following parts: “pre-history” (before I began working professionally in 1982), 1982-1992, 1992-2002, 2002-2012, and my predictions for the next 30 years (as of 2012) from 2012 through 2042.

A flatter world

One of the most profound and significant changes brought on by computer technology has been the “flattening” of the world.

The biggest significance of this to my mind is that people of ability—no matter where in the world they live—now have the opportunity to prosper in a global marketplace, and can reasonably aspire, within their own or their children’s lifetimes, to a globally high standard of living. This certainly applies to software engineers, but because of the effects of technology, also to many less technical folks. By creating a global marketplace, people producing exceptional or unique products can now readily present and sell those products to a global audience, accepting payment and shipping them expeditiously at a reasonable price using software-enabled payment and logistics services.

This global flattening trend has created winners and losers, but on the whole I think it has been a very good thing for the world. In 1982, whole industries and fortunes were built by identifying the producers of goods at point A, and moving and selling those goods to point B. The producers at point A could rather easily be underpaid and exploited, because their own ability to get the goods they made to point B—where they had value—was limited or non-existent, giving the intermediary all the power. That era is over. Fortunes can still be made by scouring the world to carefully select producers, “curating” the products to ensure quality, and then delivering those products to consumers and ensuring their satisfaction in the target market. This much has not and—I believe—will not change. However, producers and consumers may now connect directly with an ease that was literally unthinkable 30 years ago. While not producing a utopia, the possibility of a direct connection at least helps keep things more fair, making winners of both producers and consumers.

Like many US software engineering managers of my generation, I was opposed to outsourcing work to developing countries when it first started to become widespread in the 1990s, in the run-up to “Y2K”. I suppose, in some part of my brain, I was afraid of the impact outsourcing would have on my job and my team. My conscious thinking at the time was that it was challenging enough to create great software with everyone in the same building, without adding communication, timezone and cultural barriers into the mix. This was before the full impact of the internet and affordable mobile telephony—also products of the software revolution—had effectively erased many of the barriers of geography.

Though I had done geographically distributed projects earlier in my career, 2003 marked my first experience with true outsourced product development. I arranged an on-site visit to my first offshore team in Hyderabad, India with the primary—though unspoken—purpose of making sure the workers were not being exploited, fully intending to find another job rather than be part of any exploitation if that’s what I found. I had only had phone contact with India before, and had no idea what to expect—and what I found really was unexpected. I found happy, energetic and intelligent people just like those I worked with in the US, working in conditions very much like those in the US. In fact, their working conditions were considerably nicer than those at a San Francisco startup I had recently worked at!

Working alongside this team, I also got over any qualms I might have had about “taking American jobs” and “moving them overseas”. To me, the whole point of being an American is that you can rise to the level of your ability. These people in Hyderabad were doing exactly that. Standing in India and figuratively looking back at the US, I felt strongly that denying jobs to these great people simply because they lived in India—when they were exactly the same people we would have hired had they been in the US—would be deeply wrong. America has never been an ethnicity or even a geography, but an idea. Hiring or not hiring a person simply because of where they were born or where they lived seemed to me the least patriotic, most un-American thing I could do.

Since that time, I have always held my Indian—and later Belarusian, Ukrainian, Argentinian and other “offshore-based” employees—to exactly the same standards as I would hold a US employee in the same role. I expect my offshore teams to be as good, person-for-person, as my onshore teams—and I have found that they can be and are. I feel I owe that to all concerned: the offshore people I am hiring; the US-based people I’m not hiring for that job; and myself and the other stakeholders involved. I wouldn’t be honest or fair to any of these stakeholders if I assumed that just because someone works in India, China or another “offshore” location they are inherently less capable than an onshore person in the same role—to me that’s simply wrong.

This is not to say that thirty years on, in 2012, the American-based programmer is dead—far from it. While Census Bureau statistics show that about a third of US-based software engineers are immigrants, that implies two-thirds were born in the US! That fact is somewhat surprising from my perspective in Silicon Valley, where nearly all the engineers seem to be from other countries!

Dangers

The dark side of the information age lurks in the possibility of using the technology to control people’s lives rather than to empower them. I think most of us—even the technically literate—are only dimly and occasionally aware of the vast potential for misuse of the information infrastructure that surrounds us. Using this technology, it is an easy matter to track who we know, what we say and do, where we are and have been—and even what our values, aspirations and private fantasies may be. It’s all in the traces we leave on the internet, in our social networks, through our mobile devices, and in our spending patterns. And that is just the information we “control”! Much more information is passively tracked through cameras and other means in our increasingly connected world.

In the wrong hands, this information could result in the grimmest tyranny known to man—one where not only our activities could be tracked and controlled, but even our private thoughts could be discovered and used against us.

As a young man studying physics, I was fascinated by the Manhattan Project—the secret initiative during World War II in which scientists and engineers working in the US created the atomic bomb for the war effort. The most engrossing aspect of that effort to me was not just the physics, but the human dynamic. Even the most idealistic of the scientists involved got so caught up in the intellectual adventure of creating this new technology that they forgot about the horror of unleashing a terrible new weapon on the world. For many participants, the original motivation for creating the bomb was the fear that Nazi Germany—where much of the pioneering work in nuclear physics had originally been done—would develop such a weapon first. Even when it became clear that Germany had no such effort, the American engineers and scientists continued working on the bomb. In most cases this was not because they believed it was still necessary to win the war. Instead, many had become so excited about the intellectual challenge of creating this new technology that they never stopped to reassess whether it was still the right thing to do.

In introspective moments, I have sometimes wondered whether our generation has been part of a sort of inverted Manhattan Project—one that, instead of creating the destructive aspects of our technology first, created the benign and positive ones. This would be as if the original Manhattan Project had focused on the positive uses of nuclear fission—say, medical radioisotopes—rather than building the bomb. In this alternate universe, the Manhattan Project would still have discovered the means to create a devastating weapon, but chosen not to build it. In some ways, by building the positive uses of information technology first, that is what our generation has done.

While this sounds great and very enlightened of us, the problem is that any technology with the power to profoundly change the world for good also has the potential for great destruction. History shows us that technologies with the potential to be used as weapons or tools of oppression eventually will be, in some places and at some times. Much as our parents and grandparents left future generations to address the challenge of nuclear technology, we leave to our descendants the challenge of the responsible use of information technology.

Summary

Thirty years is a long time, and at times I succumb to nostalgia over projects, products and day-to-day working relationships that are no more. Some people I admire have passed away; others have been felled by the “golden bullet” that enriches people to the point that they are no longer active in the software community. Some great ideas, products and companies have come and gone, never to be replaced or repeated.

But my overall feeling looking back on the last 30 years is great satisfaction. I’m not sure that many of us set out with the conscious goal of transforming the world and making it a better place—I think many of us were just excited by the possibilities and happened to be in the right place at the right time with the right skills. 

Lessons learned:

  • Listen to the people just graduating from (or still in) university. Having experience is great in many ways, but in other ways it’s a handicap. It’s easy to develop a closed mind. I’m sure you’ve seen this many times—a person who responds to every new idea with “we tried that x years ago and it didn’t work.” This is why it’s important to listen to recent graduates, and learn through their still-open minds. The new generation of developers does not come with the same preconceptions and prejudices that all of us develop over time; they start “fresh”, and you can always learn something from the way they see the world if you stay open to it. One recent example is the JavaScript language. People of “my generation” tend to view JavaScript as a total kludge. We tend to see it as the way that encapsulation in web applications has been violated and hard-to-maintain business logic has found itself executing on the client. We see JavaScript programs as web pages using cut-and-paste code, with no reuse. Things have changed profoundly, and JavaScript is developing—or, arguably, has developed (it’s still hard for me to say that)—into a first-class language, both client-side and server-side (see the first sketch after this list). This is just one example. It’s not always easy to unlearn what you thought you knew and learn something else. But if you stick with your prejudices instead of being open to what the new joiners are learning and doing, you will rapidly grow stale and lose effectiveness as a developer or a technical manager.
  • Be curious and evaluate whether a new technology is a good solution to a real problem. The reason I was an early adopter of “winning” technologies like C and object orientation was not that I had precognition—it was simply that I recognized these technologies were good solutions to a real problem that I actually had. They turned out to be long-lived in the industry because they solved other people’s problems well too. When choosing a technology, the best approach is to ask yourself if it’s the best solution you’ve seen to a problem that you really have. If the answer is “yes”, and if the solution is simple and elegant, that technology will often be a winner—because more often than not, if it solves your problem it will solve other people’s problems too. Note that “technology” means, literally, the “technique” or method of solving a problem—it doesn’t mean a particular product or a particular company. For example, “C”, “distributed caching” and “AMQP messaging” are, in this sense, technologies. However, there are many embodiments of each of these technologies, supplied by many vendors and open-source suppliers. It is generally NOT a good idea to compromise on your selection of a technology—except in special circumstances (for example, no one understands how to use it, or it’s clearly headed for obsolescence), you want to choose the technology best suited to the job at hand.
  • When choosing an implementation or vendor for a particular technology, you are often better off following the leader. For example, memcached is a currently popular distributed caching mechanism. If you’re choosing to use a specific distributed cache, then the “market leader” or “most popular” is generally your best choice (see the second sketch after this list). Only if the leader genuinely does not suit your most important needs would I choose a less popular system. That is because—just like teenage boys and girls—the most popular gets the most attention. This means the most popular (software, not boys and girls) will evolve the most rapidly, have bugs fixed more quickly, have more active user groups who can answer your questions, have more people available to hire who already know it, and so on. The rule is not always true—there are some fantastic niche players—but in general the most popular implementation of the technology you’ve selected will be your best choice.
  • While it’s always best to meet end-user needs—real or anticipated—keep in mind that it’s generally (though not always) better to ship than not to ship. The “perfect” can be the enemy of the “good”—and often the way to ship something great is to ship something good and constantly make it better. The early days of the BlackBerry are a great example of this. The BlackBerry started out meeting a real need (mobile email) in a minimalist way and kept improving over time until the feature set became very rich. (Their mistake, in my opinion, was not throwing it out and starting over with another minimalist offering when they needed to—but that’s a different discussion.) Starting with a simple beginning and growing isn’t always the best approach. If your initial offering doesn’t solve a real problem, or if it doesn’t meet minimal user expectations or minimum standards, your product is dead with little hope of recovery. But before you hold off shipping until it’s perfect or “complete”, remember that “real artists ship”. It’s surprising the degree to which end users will adapt their behavior to take advantage of your product IF they see sufficient benefit to them (for example, the stylized alphabet used for data entry on Palm Pilots, or the cryptic symbols, typing patterns and acronyms used for text messages—LMAO, etc.).
  • Achieving excellence requires a focus on maximizing strengths rather than on minimizing weaknesses. The tendency of many companies and individuals is to try to fix what’s wrong, rather than to pursue excellence by building on what they already do well. For a software company, this generally takes the form of attempting to fill the “gaps” in its offerings, either by adding features or by making an acquisition that covers some perceived hole. In individuals, it generally takes the form of addressing “areas of improvement” rather than building on the strengths they already have. The alternative for both is to learn to live with their imperfections and instead focus on taking their unique differentiators to the next level. Few actually do this, because leaving a known fault in place entails a lot of risk. If a flaw is going to literally put a company out of business—or will make you as an individual lose a job you want to keep—you need to address the flaw. There’s really no choice. However, I think most companies and individuals have more choices than they realize. A strategy of fixing weaknesses certainly raises the average, but it does not maximize the upside. I think that’s why so many companies—and individuals—muddle along but never achieve excellence. For these people and companies, fear of failure is a stronger driver than the opportunity for success. In some situations this is rational behavior: if we can’t afford to risk losing our job—or having our business temporarily decline—then maybe it makes sense to forget about “excellence” and instead focus our energies on avoiding disaster. But when we make this choice, we should realize precisely what we are doing: we are forfeiting the chance to achieve the best we might be capable of. Companies and individuals that achieve excellence generally do so in spite of big flaws. They focus entirely on what they do—or want to do—well, rather than on being “pretty good” at everything. Like Achilles, companies and individuals have a choice: “a short life filled with glory, or a long life filled with obscurity.” The more we realize we have a choice, and are up-front about making it, the happier we will be as companies and individuals.
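
On the JavaScript point above, here is a minimal sketch of the kind of thing that changed my mind: one file that runs unchanged both server-side under Node.js and client-side in a browser. The module and function names are hypothetical, invented purely for illustration.

```javascript
// greet.js — one hypothetical module, usable client-side and server-side.
(function (root) {
  function greet(name) {
    return 'Hello, ' + name + '!';
  }

  if (typeof module !== 'undefined' && module.exports) {
    // Server-side: loaded under Node.js via require('./greet')
    module.exports = { greet: greet };
  } else {
    // Client-side: loaded in a browser via <script src="greet.js">,
    // where 'this' at the top level is the window object
    root.greet = greet;
  }
})(this);
```

A web page picks up greet as a global from a script tag, while a Node.js program writes var greet = require('./greet').greet; either way it is the same tested code, which is exactly the kind of reuse we old-timers assumed JavaScript could never support.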
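
And on the follow-the-leader point, here is a sketch of what using the most popular distributed cache looks like in practice, assuming the widely used third-party memcached client from npm (npm install memcached) and a memcached server on localhost; the keys and values are made up for the example.

```javascript
// A minimal distributed-cache sketch. The 'memcached' npm client and the
// localhost server address are assumptions; keys and values are invented.
var Memcached = require('memcached');
var cache = new Memcached('localhost:11211');

// Store a value with a 600-second (10-minute) expiry.
cache.set('user:42:profile', JSON.stringify({ name: 'Ada' }), 600, function (err) {
  if (err) { return console.error('cache set failed:', err); }

  // Any process pointed at the same server pool can now read it back;
  // a miss simply yields undefined.
  cache.get('user:42:profile', function (err, data) {
    if (err) { return console.error('cache get failed:', err); }
    console.log(data ? JSON.parse(data) : 'cache miss');
    cache.end(); // release connections so the process can exit
  });
});
```

The point is less the code than the ecosystem around it: because memcached is the most popular choice, mature clients like this exist for nearly every language, and almost any question you hit has already been asked and answered.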

So, the final question in looking back: Did it matter? I’m not planning to retire anytime in the near future—in fact, my father is still an active guy. I firmly believe my best working years are still ahead of me. But the biological fact of life is that I now have more working years behind me than in front of me. At this milestone, with the bulk of my career in my past, I have to ask: Did it make a difference? Did it mean anything more than collecting a paycheck?

Of course, the relationships we have, and have had, with the people we work with are probably the most important aspects of any career, software or otherwise. There’s nothing quite like being part of a great team; it’s a satisfaction all its own. And the software industry has had no shortage of extraordinary characters, some of whom I had a chance to meet along the way—from true geniuses, to a guy who regularly came to work with a parrot on his shoulder, to a woman who had her hair dyed in the colors of the Apple logo (actually I didn’t meet her—but she had only recently left Apple when I joined). And those are just a few. Just thinking about some of these and the other characters puts a grin on my face. I’ve also had the great satisfaction of helping people to advance their careers and skills, as others have helped me. The people have definitely made a difference—they to me and, I hope, I to them. That matters. But did the work itself make any difference?

Certainly over the last 30 years, the industry of which I—and probably you—are a part has changed almost every aspect of how about three-quarters of the world lives (that’s the percentage of people the World Bank estimates now have access to a mobile device). The convergent information, telecom and computer revolution has changed how we work and how we play; how we spend money and how we make it; how we interact with those we care about and how young people date; how we spend our leisure hours and how we learn in school. In fact, it has changed the shape of the world in every way: geopolitically, socially and financially, more profoundly than any war or revolution in history.

If we can say that a soldier in an army helped to win a war, or that a revolutionary in a society helped to create a new form of government, then you and I, as software engineering professionals, can be said to have transformed the world in these profound ways. I and my colleagues; my competitors and friends; my bosses and subordinates; the leaders of our industry and the “freshers” just out of university—together we have changed the world, and mostly for the better. And I, for one, think that matters.

Yes, we did help change the world. (Courtesy of http://www.dailydealmedia.com/789world-bank-says-75-of-global-population-has-a-cell-phone/)