What comes after Moore's Law?

Today's short-lived software applications and cloud services open up possibilities for more specialized processor designs

The literal meaning of Moore’s Law is that CMOS transistor densities double every 18 to 24 months. While not a statement about processor performance per se, in practice performance and density have tracked each other fairly well. Historically, additional transistors were mostly put in service of running at higher clock speeds. More recently, microprocessors have mostly gotten more cores instead.

The practical effect was that the transistors delivered by process shrinks, together with design enhancements, let us count on devices getting some combination of faster, cheaper, smaller, or more integrated at an almost boringly predictable rate.

At a macro level, we’d simply live in a very different world had the successors to Intel’s first microprocessor, the 4004, released in 1971, improved at a rate akin to automobile fuel efficiency rather than their constant doubling.
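
To make that compounding concrete, here’s a back-of-the-envelope sketch in Python. The two-year doubling period, the 2019 endpoint, and the rough 2x fuel-efficiency baseline are all illustrative assumptions rather than precise figures:

```python
# Back-of-the-envelope: compound transistor-density doubling since the 4004 (1971).
years = 2019 - 1971             # illustrative endpoint, not a precise claim
doublings = years / 2           # assume one doubling per two years
moore_factor = 2 ** doublings   # ~16.8 million x

# Illustrative contrast: a fuel-efficiency-style improvement of roughly 2x
# over the same span (an assumption for comparison, not a measurement).
fuel_factor = 2

print(f"Doublings since 1971: {doublings:.0f}")
print(f"Moore's Law-style factor: {moore_factor:,.0f}x")
print(f"Fuel-efficiency-style factor: {fuel_factor}x")
```

Run it and the gap is stark: roughly 24 doublings yield a factor of nearly 17 million, versus low single digits for almost anything else we build.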

Moore’s Law has also led to a computing environment that tends to be dominated by general-purpose designs. When I was an IT industry analyst in the 2000s, I was regularly pitched on all sorts of one-off processor and system architectures for high-performance computing or other specialized needs. Very few of them were ever commercially successful.

The reason is straightforward. If you’re competing against volume designs that are getting better at a rapid, predictable rate, whatever optimizations you come up with will often be matched by the general-purpose system in a year or two. Choosing between sitting back to track Moore’s Law and buying a new, unproven system wasn’t exactly a tough call for most buyers.

However, today, a number of factors are coming together to make specialty processor and system designs far more thinkable than in the past.

The "End of Moore’s Law"

Reasonable people may disagree on exactly where we sit in the Moore’s Law endgame, a question that’s increasingly hard even to frame precisely as process nodes like 10nm become more akin to marketing terms than literal descriptions of feature sizes. Furthermore, Moore’s Law has always been about economically doubling transistor density. At some point, shrinks may be technically feasible but not make sense from a business investment perspective, whether because of semiconductor fab capital costs or low yields.

However, no one seriously disputes that Moore’s Law is getting close to fundamental physical limits as processor features are approaching the size of atoms. That’s not to say there aren’t paths to continued performance improvements. Packaging approaches like 3D stacking, optical and other interconnect technologies, and new materials are among the research areas that could lead to continued performance and functionality improvements.

But, as Robert Colwell, the former director of the Microsystems Technology Office at DARPA, has pointed out, “from 1980 to 2010, clocks improved 3500X and microarchitectural and other improvements contributed about another 50X performance boost.” That’s a truly unprecedented pace of improvement, and there’s no reason to bet on something similar being just around the corner waiting to be pressed into service.
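
Multiplying those two figures together underscores just how steep that curve was. Here’s a quick sketch, treating 1980 to 2010 as a 30-year span and assuming the two factors compound multiplicatively:

```python
# Compound the two improvement factors Colwell cites for 1980-2010.
clock_gain = 3500                  # clock-speed improvement
uarch_gain = 50                    # microarchitecture and other improvements
total = clock_gain * uarch_gain    # 175,000x overall

years = 2010 - 1980
annual = total ** (1 / years)      # implied compound annual growth factor

print(f"Total improvement: {total:,}x")
print(f"Implied compound rate: {(annual - 1) * 100:.0f}% per year")
```

That works out to a 175,000x total improvement, or nearly 50 percent compounded every year for three decades.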

However post-Moore’s Law plays out long-term for hardware, certain other trends are coming together to offer something of a roadmap for the nearer term. These include more ephemeral applications, new workloads, cloud service providers, and open source software. One suspects that we’ll also see some even more fundamental changes in the basic architectures of computer systems over time. Let’s look at four of those trends:

1. Ephemeral applications

We used to think of software in organizations as being much longer-lived than the hardware it ran on. That can still be the case, but we no longer primarily view enterprise software through the lens of enterprise resource planning (ERP) software and other back-end systems. While those are still important assets in many organizations, value for digitally transforming organizations increasingly comes from mobile, web, analytics, and other applications that tend to change quickly in response to changing customer and market needs.

This changes a traditional dynamic, in which a small number of long-lived hardware architectures existed in large part to run slow-changing software systems.

If many applications are going to be tossed, or at least extensively modified, after a couple of years anyway, it’s less important to minimize change in the hardware layer. Hardware stability still matters, but it’s not the near-universal mandate it once was.

2. New workloads, such as machine learning

Significant new workloads also benefit disproportionately from running on processors other than traditional general-purpose CPUs such as those built on the x86 architecture. Machine learning, in particular, depends heavily on linear algebra (such as multiplying together large matrices). This work is compute-intensive but structurally simple, making it almost tailor-made for graphics processing units (GPUs), which companies like Nvidia have spent years developing as video cards. Other algorithms, such as cryptocurrency proof-of-work consensus mechanisms, also benefit greatly from GPUs.
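
To see why the fit is so natural, consider the shape of the computation. The sketch below uses NumPy on the CPU; the matrix sizes are arbitrary, and the point is that every output element is an independent dot product, exactly the kind of uniform, parallel arithmetic GPUs are built for:

```python
import numpy as np

# The heart of many machine learning workloads: multiplying large matrices.
# Every element of the result is an independent dot product, so the work
# parallelizes naturally across the thousands of cores on a GPU.
rng = np.random.default_rng(0)
a = rng.standard_normal((2048, 2048))   # e.g., a batch of activations
b = rng.standard_normal((2048, 2048))   # e.g., a layer's weight matrix

c = a @ b   # roughly 2 * 2048**3 (~17 billion) floating-point operations

print(c.shape)   # (2048, 2048)
```

GPU-backed array libraries such as CuPy expose essentially the same interface, so moving this workload to a GPU is largely a matter of swapping the array type rather than redesigning the algorithm.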

Beyond GPUs, we’re also seeing increased use of even more specialized processors for specific workloads. One good example is the tensor processing unit (TPU), developed by Google for machine learning. Intel’s $16.7 billion acquisition of Altera, a maker of field-programmable gate arrays (FPGAs), can also be seen as part of the transition to a world with more special-purpose chips. CPUs have long been complemented by other types of processors for functions like networking and storage, but specialization is branching out into new areas.

We’ve also seen how ARM has largely won out over x86 in mobile. While the jury is still out on whether ARM will have an increased presence in servers, it’s certainly plausible now that 64-bit versions are available and vendors have agreed to some standards.

Gordon Haff is Technology Evangelist at Red Hat where he works on product strategy, writes about trends and technologies, and is a frequent speaker at customer and industry events on topics including DevOps, IoT, cloud computing, containers, and next-generation application architectures.