Interview: Raja Koduri on Intel’s Arc GPUs and Where AI Is Leading Us

Raja Koduri has had a long, storied career in the PC hardware industry, and is best known for developing multiple generations of graphics architectures at AMD and Apple before joining Intel in 2017 to create a new Visual Computing Group. An alumnus of IIT Kharagpur, Koduri currently serves as Executive Vice President and GM of Intel’s Accelerated Computing Systems and Graphics Group. Since joining, he has led the effort to develop a brand-new scalable architecture called Xe, which is the backbone for an entire stack of products delivering graphics as well as massively parallel compute capabilities. Xe can be found in everything from the upcoming Aurora supercomputer to the recently launched Arc GPUs.

Gadgets 360 got the chance to chat with Raja Koduri about his plans for Arc in India and much more, at a roundtable held as part of Intel’s inaugural ConnectiON conference in the country. Here’s what he had to say.

Arc has had a relatively low-key launch in India. Can you talk a little bit about the adoption that you’ve seen, and did you have any targets? Has India as a market proven to be challenging in any specific or different way?

Raja Koduri: Well, I’m hoping we’ll change that “low-keyness” in India starting today! And we have a lot more plans. We are taking a measured approach, region after region. The interest level is very, very high. And [we’re working on] landing more partners in India who can ship good volumes here at good price points. So expect to see a lot more Arc in 2023 and more variations of Arc. The team is learning; this is the first discrete GPU launch in the whole ecosystem, right? So we want to do it step by step. This is one of the main reasons I’m here. I’m going to have interactions with the gaming community. I’m looking forward to talking about a lot more Arc-related stuff. And I’d also like to sample and understand the state of PC gaming in India. It’s been a while for me, so I would like to hear some of the perspectives of folks that are here.

Intel’s own graphics card based on its current top-end discrete GPU, the Arc A770

When you say more is coming in 2023 and there will be more board partners, does that mean the next generation is coming in 2023?

Raja Koduri: I won’t talk too much about Battlemage, but Arc itself has a lot of headroom. You’ll see updates and flavours, and in India we need to hit a really good price point. I think we did hit a good price point for the world, but I want to understand how to do that for India as well. Also in [pre-built] systems, which are the highest-volume. I wouldn’t take the IBC [Intel Branded Card] that we launched as the only product in India, even though I think it’s a beautiful design and we want to sell as many as possible. We also need to get the right form factor with the right power levels, and pricing including the tariff adjustments that happen here in India. There is the extra tax. These are the factors that I’d like to understand, and you’ll see more flavours rather than the next major generation.

So is the focus more on the retail DIY PC builder and buyer or OEMs and prebuilt PCs?

Raja Koduri: Both, but the OEM engagements start a little earlier. Retail DIY builders are near and dear to my heart. I don’t know the volume and numbers in India, but I definitely want to do some cool things for Arc in retail over the next few months in India.

Are you going to be sticking to the roadmap for Battlemage and Celestial?

Raja Koduri: Yes, absolutely.

Are GPU design cycles also around 24 to 30 months long, as with CPUs for Intel?

Raja Koduri: Not really. Doing a new architecture is always very difficult; new architectures take 3-4 years, but after that, once you have a baseline, iterating on it is quite fast. Since we are coming from nothing, we want to iterate fast so that we can catch up to the competition in every segment.

The first generation of Arc GPUs is now available in laptops as well as high-end discrete graphics cards.

You have said that Arc has challenges with games that make excessive numbers of draw calls, and that driver updates might mitigate that. So what’s the timeline on that? Is performance coming up to your expectations?

Raja Koduri: Yes, absolutely. The two APIs that are the most challenging for draw calls are DirectX 9 and DirectX 11. The DX9 driver update should be happening relatively soon, and DX11 shortly thereafter. They are imminent. There’ll be some nice announcements. It will make a huge [difference]; we’re not talking five or ten percent. In some cases, it’ll be much, much larger.

Intel has stated that it is working on optimising for the top 100 games that still use DX11 and DX9. Is that happening? And how quickly is it going to be rolled out?

Raja Koduri: I think you should see a big update before Christmas.

Editor’s note: Intel released a driver update a few days after this statement was made, promising up to 79 percent better performance in CS:GO at 1080p as well as specific support for multiple DX9 and DX11 games.

So what are the major constraints that you face [in developing and popularising Arc]?

Raja Koduri: On the gaming side, the install base of old games is amazing. DirectX 9 is an API that launched in 2002. It’s a 20-year-old API, and there are games that haven’t been touched for more than a decade but are very, very popular. Some of them actually have bugs. It isn’t just our driver; there were wrong uses of the API, but they were bug-compatible with older AMD or Nvidia drivers. So we have to make them work. The user doesn’t really care about what an API is, right? They just plug in an Arc card and run a game and say it doesn’t work. So it’s our responsibility to make it work, no matter where in the stack [an incompatibility lies]. That’s the long tail that we had to check, but we pretty much went through 95–96 percent of all those issues on our path to launch. Now we’re on the last 1–3 percent, and we have managed to hammer through these releases.

As we get to 2023, I’m very optimistic that most of the compatibility challenges we had at the Arc launch will be behind us, and people can focus on all the incredible positives that it brings. Gaming: nice, smooth performance. Media: AV1 encode/decode performance; even $1,000–1,900 GPUs don’t come close to Arc’s transcoding performance. Ray tracing and XeSS are far ahead.

Just price or performance-per-dollar competitiveness with AMD and Nvidia wasn’t the goal. It was the baseline, but we wanted to do something beyond that. So that’s the reason why we did what we did with AV1. Media plays a big role; more people are streaming their gameplay all the time. I don’t want people to have to buy another expensive card [for encoding].
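
For readers who want to try this, here is a minimal sketch (not something discussed in the interview) of reaching Arc’s hardware AV1 encoder from a script, assuming an FFmpeg build with Quick Sync Video (QSV) AV1 support and Intel’s media drivers installed; the exact flags depend on your FFmpeg build, and the file names are purely illustrative.

```python
# A hedged sketch: hand a gameplay recording to FFmpeg and let it offload AV1
# encoding to the Arc GPU via the QSV path. Assumes ffmpeg is on the PATH and
# was built with av1_qsv support; "gameplay_capture.mp4" is a placeholder name.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "gameplay_capture.mp4",  # placeholder source recording
    "-c:v", "av1_qsv",             # hardware AV1 encode on the Arc GPU
    "-b:v", "8M",                  # a streaming-friendly target bitrate
    "-c:a", "copy",                # leave the audio track untouched
    "av1_output.mkv",
], check=True)
```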

Can you talk a little more about AI from the PC perspective?

Raja Koduri: AI is incredible. Right now, with things like Stable Diffusion, you can just download the entire model and create some amazing artwork on your PC. With an Arc card, you don’t need to buy an expensive $2,000 Nvidia GPU.

That’s the democratisation aspect of AI, media and gaming. How that was going to come to a gamer or a PC enthusiast was a little abstract. So it’s pretty serendipitous that Stable Diffusion was released around the same time as the Arc launch, and the dots were connected. I used to talk about this all the time with my team; that AI will change everybody’s workflows. I’m personally excited about it; I’ve installed it and I play around with it myself.
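
As a rough illustration of what Koduri is describing, here is a minimal sketch of running Stable Diffusion locally, assuming the Hugging Face diffusers library and Intel Extension for PyTorch (which exposes Arc GPUs as the “xpu” device); the model ID, prompt, and device routing are illustrative assumptions rather than anything Intel prescribes.

```python
# A minimal sketch (assumptions noted above): generate an image locally with
# Stable Diffusion, running the model on an Intel Arc GPU instead of an
# expensive Nvidia card.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 - registers the "xpu" device
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("xpu")  # move the pipeline onto the Arc GPU

image = pipe("a photorealistic render of a futuristic city at dusk").images[0]
image.save("artwork.png")
```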

AI has many uses, and it’s been the hype word for the past five years or so. From a gaming perspective, if you want to run a game even at 1080p with all features turned on, with ray tracing and so on, you’ve always needed to buy the Rolls-Royce of GPUs. You were always envious of somebody who could afford that, if all you could afford was something that would deliver 720p. I’ve seen people turn off all the options, and then the game doesn’t look anything like they want it to.

Of course that suits us; we have to upsell! But with AI, you don’t necessarily need that amount of compute and bandwidth to deliver similar visual quality. Computer graphics was always about simulating the transport of light, and to make beautiful-looking, compelling [visuals], you have to [draw every frame] in 16 milliseconds with the right kind of light. So AI is really “hacking” that by saying “hey, we don’t need to run all the math; we have enough information in the model that I can infer what you’re trying to do even if you give me a lower-resolution picture”. And that’s going even further with the next generation. Stable Diffusion right now is used to generate static photographs, but some people are playing with videos, and you will start seeing that being applied to games, rather than having these expensive billion-polygon models, or characters with gigabytes of textures. They have an approximate representation of you and can [turn that into] a photorealistic representation in a few milliseconds using the AI portion of the GPU; the XMX units, as we call them. Every game developer right now is looking at how they could incorporate this into their games, because otherwise they’ll be left behind.
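
The data flow Koduri sketches can be illustrated with a toy example: render (or capture) a frame at a lower resolution, then let a learned network reconstruct a higher-resolution image instead of shading every pixel natively. The snippet below is a deliberately simplified, untrained stand-in for a real upscaler such as XeSS, purely to show the shape of the idea.

```python
# Toy sketch of AI upscaling (not Intel's XeSS): the network here is untrained
# and only illustrates the low-res in, high-res out data flow.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    """Illustrative stand-in for a trained super-resolution network."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into a scale-x larger image
        )

    def forward(self, low_res_frame: torch.Tensor) -> torch.Tensor:
        return self.net(low_res_frame)

low_res = torch.rand(1, 3, 720, 1280)  # a frame "rendered" at 720p
upscaler = ToyUpscaler(scale=2)        # a real model would be trained on game frames
high_res = upscaler(low_res)           # approximately 1440p output
print(high_res.shape)                  # torch.Size([1, 3, 1440, 2560])
```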

The third thing that’s going to happen is [it will become] incredibly easy to make content. If [multiple people] are playing a game, everybody’s [game] looks the same, right? It’s the same thing you’re all going through. With AI, your game could look very, very different to anyone else’s; it can transport you into different [experiences]. And that also opens up other possibilities. You could build a level just by speaking. If you want [a friend] to play a certain way, you can just say “hey, I want you to play this”. Just speak to the computer and it will generate a different level for your friend and you to play. It’s kind of like Minecraft on steroids. So that’s what you’re going to see; I find it quite fascinating.

Intel has teased the codenames for the first four generations of consumer Arc GPUs

We’re seeing modern GPUs consuming ridiculous amounts of power, even though manufacturers have moved to more efficient process nodes. 600W and 800W power supplies are becoming the norm now. Will Intel also follow this trend?

Raja Koduri: Performance per Watt, or delivering higher performance at lower power, is my top priority. There will always be someone with some SKU who can say “I’m going to give you more juice”, but my focus is lower power. The other issue I find with just increasing power and bragging about benchmarks is that while it’s good from a marketing standpoint, [there is a limited] number of PC users who can just buy such a card and plug it in. It dramatically reduces your overall market, right?

It’s incredible how complex PCs have become. I don’t do as much DIY as I used to maybe 5-10 years ago, but recently I put two PCs together using both my hardware and the competition’s hardware, and even just getting all these connectors in was like “Hmmmm!” I actually live this, and I found it hard. It was funny; I had to go look at YouTube videos! I really still love the DIY culture. That’s what makes democratisation easy, and I’d love to find ways to continue that. I’ve been thinking about how to do that in a more modular fashion, and I think the PC needs a reboot.

With that mass-market approach, does that mean you’ll primarily focus on mid- and lower-tier SKUs first and then push out high-end ones?

Raja Koduri: High-end has no limit right now. What is the definition of high-end? Is it 600 Watts? Obviously our partners and our customers want some halo SKUs for bragging rights, and we always like to figure out ways to enable that. But my priority at this point is getting that core audience, with one power connector. And that can get you up to 200-225W. If you nail that, and something a little above and a little below, all that falls into the sweet spot.

When Xe first came out, I think a lot of us were excited that the integrated graphics on basic CPUs would get a significant bump, but we haven’t seen that yet. Where do you see the baseline for entry-level integrated graphics, and why hasn’t that progressed significantly recently?

Raja Koduri: Great question! In fact, that was always the plan. Meteor Lake was always the plan but what happened, and this is something we have publicly said, was delays in our Core CPU roadmap and 10nm process. We stayed on 10nm for a couple more generations [than intended]. Advanced Xe graphics were on the next node, ready to go, but the Meteor Lake platform is the one that is going to ship that new Xe graphics with ray tracing and all those great features. So I can’t wait for the world to see Meteor Lake, which will change the entire integrated graphics landscape.

Even for the low-end Pentiums and Celerons? For entry-level PCs and Chromebooks?

Raja Koduri: Yes, you’ll see that.

Raja Koduri has had a long career in the graphics hardware and software industry

On the CPU side, Intel has stated that any architecture can be transposed to any process. Other companies do chiplets and combine blocks fabbed on multiple processes. Is that also something that Intel is doing?

Raja Koduri: Yes. With Meteor Lake, you’ll see that mixing and matching. Graphics and IO can be on different process technologies.

You said a little earlier that PC form factors need to be shaken up. Given what you’re also saying about Meteor Lake, are we going to see a point where low-end GPUs go away altogether? Where there’s that much more integration into a single CPU or SoC? And what does that mean for laptops?

Raja Koduri: That’s a really good question. In particular, in the context of India. My hope, frankly, even though I am a bit conflicted on this, is that integrated will eliminate some of the bottom end of the discrete [GPU market]. I think that’s a good thing for consumers because you get smaller form factors, lower power and lower cost. A discrete GPU needs memory that you have to pay for, and power delivery [circuitry], so there is a bill of materials cost. With integrated, you already have memory for the CPU that you’re sharing, and power delivery. So overall cost-wise and performance-per-dollar-wise it will be so much more compelling than a CPU plus a discrete GPU. This is what Meteor Lake’s focus is. The realistic timeframe for that to trickle down into OEM notebooks… I would say 2024 is the year I would expect a dramatic shift in the PC graphics landscape on how high integrated will go.

Xe scales from integrated GPUs to supercomputers. Is that going to be the way forward now that you have had this experience? Do you have any changed priorities for the next generation of Xe? And on the software side, will oneAPI also filter down to the consumer space?

Raja Koduri: We haven’t broken our vision. Architecture is a hardware-software contract. Will we need to break it? I’m not sure at this point. If we do, it will be for a good reason, like if I find a path to getting [Nvidia GeForce RTX] 4090-class performance at, say, 200W – if only! That’s the level of benefit I need to see before I break it. oneAPI is supported all the way from the smallest integrated chip to the biggest exascale machines.

Finally, to end on a lighter note, are you still involved with Baahubali?

Raja Koduri: The visual effects studio that I founded in 2010 – 12 years, my God! – is still involved. I haven’t spent much time with them over the last five years, not as much as I did on Baahubali, but the next thing that [S S] Rajamouli is working on… he’s spent a lot of time in the US recently, and we’ve spent a lot of time going “Is this AI thing real, how can we take advantage of it for content creation?” It’s in the brainstorming phase right now, but I’m hoping I’ll find some time to spend on that over the next few years.

Some responses have been condensed and slightly edited for clarity.