Greg Yeric earned his Ph.D. in microelectronics at The University of Texas at Austin in 1993, and he has been at the forefront of research into semiconductor scaling ever since. Now a Fellow leading the Future Silicon Technology group within ARM Research, Yeric will deliver an ARM TechCon keynote this fall. I caught up with him to get a sense of what he’s focused on and where he sees both challenges and opportunities in scaling.
Q: Scaling’s dead, the pundits say. Or… not?
Yeric: This debate has been going on for years. Moore’s Law is proclaimed dead, and yet there always seems to be an innovation that fixes the problem. But given what we are up against in the single-digit nanometer nodes, it’s difficult to deny that the Moore’s Law curve is tailing off. The free ride is slowing down, if not over. Foundries are making a Herculean effort and keeping progress from totally flatlining, but you can see from wafer costs and so forth that it’s not going to be the same free ride.
We have to make this up some way. Maybe more of the burden—and opportunity—is falling to the people who will be attending ARM TechCon—finding ways to squeeze more out of architecture. One opportunity is to radically change what we think of as a system in a package—some kind of heterogeneous integration, such as 2D and 3D structures.
Beyond that, we get to where I’m both scared and excited about possible technology directions. In the past, the reliable Moore’s Law escalator, constantly churning away, precluded investment in more radical technologies. As Moore’s Law slows, more radical shifts become increasingly justifiable to consider. Maybe, for instance, we should invest in an analog neural network—a separate piece of silicon leveraging new non-volatile memory technology that works 100x better than digital approaches for given applications. If you buy that Moore’s Law progress is slowing, then it becomes worthwhile to consider investments in new architecture and software. These are big new problems, but we have to go ahead and consider these things.
There are a lot of technologies out there that excite me. Within ARM and also with our partners, we will need to get outside our comfort zone, our standard architecture, standard MOSFET-based CMOS VLSI, and put together new IP in new ways.
Q: That’s a heck of a challenge.
Yeric: It’s daunting. I started my career at 0.7 microns, and we’re 100 times smaller than that now. I think about the things that seemed hard or impossible at the time—copper, weird lithography tricks, strained silicon and so on. These were each major hurdles at the time. But they were simple things in relation to what we might be talking about in the next 10 years: 3D ICs, photonics, spintronics, new memory physics, etc. It’s so much more daunting than what we used to think was daunting! And it’s a double-edged sword: it’s potentially extremely difficult and requires an uncompartmentalized approach. It’s not just a better transistor with better PPA (power, performance and area). We’re talking about new architectures that must justify extra investment as compared to staying on the standard escalator.
Q: Do we need to engage with universities in different ways to enable this?
Yeric: That’s an interesting question in general. The industry is maturing. We don’t have a Bell Labs. We don’t have the powerhouses we used to have, where the development required to continue Moore’s Law was handled within the industry. But now we might need some fundamental improvements, and the big fundamental research comes out of the universities. We may no longer have the luxury of waiting and saying “we’ll put XYZ-FET on the roadmap 12 years from now.” We have to figure out how to mature these things more quickly.
As an example, at ARM we’ve invested, to various degrees, in a couple of promising non-volatile memory technologies. Another problem we identified was the lack of appropriate benchmarking of new technologies in the academic community. You can’t just quote the IDSAT of a transistor anymore; you have to consider how nanometer-scale chips get built, and that requires a full definition of a potential technology. I can give an example in my talk of how we have tried to help here.
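The gap Yeric describes—a device-level metric pointing one way while a chip-level view points another—can be sketched with a toy comparison. This is purely illustrative and not from the interview: the technology names and all numbers are made up, and the block-level "score" is just one plausible way to fold power, performance and area into a single figure of merit.

```python
# Hypothetical sketch: ranking technologies by a single device metric
# (IDSAT) vs. a block-level power/performance/area (PPA) view.
# All names and numbers are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Tech:
    name: str
    idsat_ua_um: float      # device drive current, uA/um
    block_freq_ghz: float   # frequency of a reference logic block
    block_power_mw: float   # power of that block
    block_area_um2: float   # area of that block

techs = [
    Tech("baseline CMOS", 1000, 2.0, 10.0, 5000),
    Tech("new device X",  1400, 2.2, 14.0, 6500),  # wins on raw IDSAT...
]

# Device-only ranking: just the biggest drive current.
best_device = max(techs, key=lambda t: t.idsat_ua_um)

# Block-level ranking: frequency per (power * area); higher is better.
def ppa_score(t: Tech) -> float:
    return t.block_freq_ghz / (t.block_power_mw * t.block_area_um2)

best_block = max(techs, key=ppa_score)

# ...but the power and area costs flip the block-level ranking.
print(best_device.name, "vs", best_block.name)
```

With these made-up numbers, "new device X" wins on IDSAT while the baseline wins at the block level—the kind of reversal that makes a full technology definition necessary before benchmarking.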
Q: What are some additional technologies that get you excited?
Yeric: I’ve worked for 15 years on these two pieces of the puzzle: How do we squeeze the transistor electrostatically, and how do we print things with smaller feature sizes? What excites me is that we’re hitting fundamental limits there. How do we reset that? There are quantum jumps in switching devices. One direction is spin-based logic, where the state becomes electron spin rather than charge, and thus circuits can operate in the millivolt range. That would get us back on the power-scaling track we’ve been trying to return to. But spintronics may not be able to match the speed of leading-edge CMOS. So in that case, you have to ask yourself, “does a hybrid system make any sense, where some things are partitioned into slower, more energy-efficient spin-based compute and others remain in CMOS?” And then, how do we actually pull that off? That’s a much bigger problem than optimizing a cache or pipeline.
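The power argument behind millivolt-range operation comes down to simple arithmetic: dynamic switching energy scales with the square of the voltage swing. The sketch below is not from the interview—the 1 fF node capacitance and the 10 mV spin-logic swing are assumed values chosen only to show the quadratic effect.

```python
# Illustrative only: why millivolt-range switching matters.
# Dynamic switching energy for a node is E = C * V^2, so cutting the
# swing from ~0.7 V (typical CMOS supply) to a hypothetical ~10 mV
# spin-logic swing reduces energy per switch by (0.7 / 0.01)^2 = 4900x,
# before accounting for any other device differences.

def switching_energy(capacitance_f: float, voltage_v: float) -> float:
    """Energy (joules) to charge a node capacitance to a given voltage."""
    return capacitance_f * voltage_v ** 2

C = 1e-15                           # 1 fF node capacitance (assumed equal for both)
cmos = switching_energy(C, 0.7)     # ~0.7 V CMOS swing
spin = switching_energy(C, 0.010)   # ~10 mV spin-logic swing (hypothetical)

print(f"CMOS: {cmos:.3e} J, spin: {spin:.3e} J, ratio: {cmos / spin:.0f}x")
```

The quadratic dependence is why even a modest-sounding voltage reduction dominates the energy picture—and also why the speed penalty Yeric mentions forces the hybrid-partitioning question rather than a wholesale replacement.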
On the patterning side, EUV is absolutely necessary, but it’s not going to fundamentally reset Moore’s Law for us. We then need to start looking at things like directed self-assembly. The name sounds like science fiction, but it’s maturing quickly; university progress has been amazing. People are now starting to look at this stuff in terms of how it could yield at the chip level. It’s not a complete savior for patterning, but it gives you a big jump if you can accept different constraints. It makes you rethink how you put patterns down on chips. Not to denigrate what we’ve done in the last 10 years, but it has been incremental compared to where we might need to go.
Q: How’d you get into engineering in the first place?
Yeric: I had a really good math teacher in high school. Through her, I got into math and science as my school hobby. She drove us to math contests all over the state, which is saying something given the size of Texas. That exposed me to opportunities in the technology realm, and I became enamored with engineering—the concept of making a new, better widget—before college. For me, at that age, this was the beginning of the personal computer, and I liked the idea of how magical these electronic systems were. I took night classes and picked up programming, and enrolled in electrical engineering without a specific idea as to what exactly. Then, by happenstance, my parents met a guy at a party, found out he worked in the electrical engineering industry, and asked if he wouldn’t mind giving their son a tour of his company. Incredibly lucky, because it turns out he was a co-founder of the company (MOSTEK) but, more importantly, was basically a closeted professor. He offered me an internship, then sat me in his office and taught me as much as he could in a summer—what an address decoder was, how the clock worked on a chip, etc. (I didn’t know at the time that he held a lot of fundamental patents in these areas.) So, in both cases, I had enthusiastic teachers who got me to the point of persevering through a degree in microelectronics—well, three degrees in it.
(Silicon wafer image by Stahlkocher, German Wikipedia, original upload 7 Oct 2004, de:Bild:Wafer 2 Zoll bis 8 Zoll.jpg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=928106)