Everything Announced at Intel’s Lunar Lake AI Chip Event – Video


Speaker 1: As we launched Core Ultra with Meteor Lake, it also introduced this next generation of chiplet-based design. And Lunar Lake is the next step forward, and I'm happy to announce it today. Lunar Lake is a revolutionary design, with new IP blocks for CPU, GPU, and NPU. It'll power the largest number of next-gen AI PCs in the industry. We already have over 80 designs with 20 OEMs that will start shipping in volume [00:00:30] in Q3. First, it starts with a great CPU. And with that, this is our next-generation Lion Cove processor, which has significant IPC improvements and delivers that performance while also delivering dramatic power-efficiency gains. So it's delivering Core Ultra performance at nearly half the power that we had in Meteor Lake, which was already a great chip. The GPU is also a huge [00:01:00] step forward. It's based on our next-generation Xe2 IP, and it delivers 50% more graphics performance.

Speaker 1: And literally, we've taken a discrete graphics card and we've shoved it into this amazing chip called Lunar Lake. Alongside this, we're delivering strong AI compute performance with our enhanced NPU, up to 48 TOPS of performance. And as you heard Satya talk about, our collaboration with Microsoft and Copilot+, [00:01:30] along with 300 other ISVs: incredible software support, more applications than anyone else. Now, some say that the NPU is the only thing that you need, and simply put, that's not true. And now, having engaged with hundreds of ISVs, most of them are taking advantage of CPU, GPU, and NPU performance. In fact, our new Xe2 GPU is an incredible on-device AI performance [00:02:00] engine. Only 30% of the ISVs we've engaged with are using only the NPU; the GPU and the CPU in combination deliver extraordinary performance. The GPU delivers 67 TOPS with our XMX performance, a three-and-a-half-times gain over the prior generation.
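For context, here is a minimal arithmetic sketch of how these engine-level figures could add up to the 120 TOPS platform total cited below. The NPU and GPU numbers are from the keynote; the roughly 5 TOPS CPU contribution is an assumption, not a figure stated in this transcript.

```python
# Hedged arithmetic sketch of Lunar Lake's platform TOPS total.
npu_tops = 48   # enhanced NPU, per the keynote
gpu_tops = 67   # Xe2 GPU with XMX, per the keynote
cpu_tops = 5    # assumed CPU contribution (not stated in this transcript)

print(npu_tops + gpu_tops + cpu_tops)  # -> 120, the claimed platform total
```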

Speaker 1: And since there's been some talk about this other Elite chip coming out and its superiority to x86, I just want to put that to bed right now: [00:02:30] ain't true. Lunar Lake, running in our labs today, outperforms the X Elite on the CPU, on the GPU, and on AI performance, delivering a stunning 120 TOPS of total platform performance. And it's compatible, so you don't have any of those compatibility issues. This is x86 at its finest. Every enterprise, every customer, every historical driver and capability simply works. This is a no-brainer. Everyone should upgrade. [00:03:00] And the final nail in the coffin of this discussion: some say x86 can't win on power efficiency. Lunar Lake busts this myth as well. This radical new SoC architecture and design delivers unprecedented power efficiency, up to 40% lower SoC power than Meteor Lake, which was already very good. Customers are looking for high-performance, cost-effective gen AI training and inferencing solutions, [00:03:30] and they've started to turn to alternatives like Gaudi. They want choice. They want open software and hardware solutions, and time-to-market solutions at dramatically lower TCOs. And that's why we're seeing customers like Naver, Airtel, Bosch, Infosys, and Seekr turning to Gaudi 2. And we're putting these pieces together. We're standardizing through the open source community [00:04:00] and the Linux Foundation. We've created the Open Platform for Enterprise AI to make Xeon and Gaudi a standardized AI solution for workloads like RAG (retrieval-augmented generation).

Speaker 2: So let me start with maybe a quick medical query.

Speaker 1: Okay, so this is Xeon and Gaudi working together on a medical query. So it's a lot of private, confidential, on-prem data being combined with an open-source LLM. Exactly, exactly. Okay, very cool.

Speaker 2: All right, so let's see what our LLM has to say. So [00:04:30] you can see it's a typical LLM. We're getting the text answer here, standard, but it's a multimodal LLM, so we also have this great visual here of the chest X-ray.

Speaker 1: Okay, I'm not good at reading these, so what does this say?

Speaker 2: I'm not great either. But, and I'm going to spare you my typing skills, I'm going to do a little cut-and-pasting here. The nice thing about this multimodal LLM is we can actually ask it questions to further illustrate what's going on here. So this LLM is actually going to analyze this image [00:05:00] and tell us a little bit more about this hazy opacity, such as where it is. So you can see here, it's saying it's down here in the lower left. So once again, just a great example of a multimodal LLM.
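The demo pairs private on-prem documents with an open-source LLM, the retrieval-augmented generation (RAG) pattern mentioned above. Below is a minimal sketch of that pattern in Python; the embedding model, the sample corpus, and the query are illustrative stand-ins, not the stack used on stage.

```python
# Hedged RAG sketch: embed a private corpus, retrieve the closest
# passages for a query, and build a prompt for an open-source LLM.
from sentence_transformers import SentenceTransformer, util

# 1. Embed the private, on-prem corpus once (placeholder documents).
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model
corpus = [
    "Patient 1042: chest X-ray shows hazy opacity in the left lower lobe.",
    "Radiology guide: hazy opacities can indicate fluid or consolidation.",
]
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

# 2. Retrieve the passages most similar to the medical query.
query = "What does the hazy opacity in this chest X-ray suggest?"
query_emb = embedder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
context = "\n".join(corpus[hit["corpus_id"]] for hit in hits)

# 3. Hand query plus retrieved context to an LLM (generation not shown).
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # would be fed to an LLM served on Xeon/Gaudi
```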

Speaker 1: And as you see, Gaudi is not just winning on price; it's also delivering incredible TCO and incredible performance. And that performance is only getting better with Gaudi 3. The Gaudi architecture is the only MLPerf-benchmarked alternative to H100s for LLM training and inferencing, and [00:05:30] Gaudi 3 only makes it stronger. We're projected to deliver 40% faster time-to-train than H100s, and 1.5x versus H200s, and faster inferencing than H100s, delivering 2.3x performance per dollar in throughput versus H100s. In training, Gaudi 3 is expected to deliver 2x the performance per dollar. And this idea is simply music to our customers' [00:06:00] ears: spend less and get more. It's highly scalable and uses open industry standards like Ethernet, which we'll talk more about in a second. And we're also supporting all of the expected open-source frameworks like PyTorch and vLLM, and hundreds of thousands of models are now available on Hugging Face for Gaudi. And with our developer cloud, you can experience Gaudi capabilities firsthand, easily accessible and readily available. But of course, with [00:06:30] this, the entire ecosystem is lining up behind Gaudi 3. And it's my pleasure today to show you the wall of Gaudi 3.
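To make the PyTorch and Hugging Face support concrete, here is a minimal sketch of running a Hugging Face model on a Gaudi accelerator through the Habana PyTorch bridge that ships with Intel's Gaudi software stack. The model ID and prompt are placeholders, and the `optimum-habana` library would be the more production-ready route.

```python
# Hedged sketch: running a Hugging Face causal LM on an Intel Gaudi
# device. Assumes the Gaudi software stack (habana_frameworks) is
# installed; model ID and prompt are illustrative placeholders.
import torch
import habana_frameworks.torch.core as htcore  # Gaudi PyTorch bridge
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # placeholder Hugging Face model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16
).to("hpu")  # "hpu" targets the Gaudi device

inputs = tokenizer("Summarize this radiology note:", return_tensors="pt")
inputs = {k: v.to("hpu") for k, v in inputs.items()}

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
htcore.mark_step()  # flush queued ops in Gaudi's lazy execution mode

print(tokenizer.decode(out[0], skip_special_tokens=True))
```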

Speaker 1: Today we're launching Xeon 6 with E-cores, and we see this as an essential upgrade [00:07:00] for the modern data center: a high core count, high density, and exceptional performance per watt. It's also important to note that this is our first product on Intel 3, and Intel 3 is the third of our five nodes in four years, as we continue our march back to process technology competitiveness and leadership. Next, I'd like you to fill this rack with the equivalent compute capability of the Gen 2 Xeon, using Xeon 6. Okay? [00:07:30]

Speaker 3: Give me a minute or two, I'll make it happen.

Speaker 1: Okay, get with it. Come on, hop to it, buddy. And it's important to think about the data centers. Every data center provider I know today is being crushed by how they upgrade, how they expand their footprint, and the space and flexibility for high-performance computing. They have more demands for AI in the data center. And having a processor with 144 cores, versus 28 cores for Gen 2, [00:08:00] gives them the ability both to condense and to attack these new workloads, with performance and efficiency that was never seen before. So Chuck, are you done?

Speaker 3: I'm done. I wanted a few more reps, but you said equivalent.

Speaker 1: You can put a little bit more in there. Okay. So let me get it: that rack has become this. And what you just saw was E-cores delivering this distinct advantage for cloud-native and hyperscale workloads: 4.2x in media transcode, 2.6x [00:08:30] performance per watt. And from a sustainability perspective, this is just game-changing: a three-to-one rack consolidation over a four-year cycle. Just one 200-rack data center would save 80,000 megawatt-hours of energy, and Xeon is everywhere. So imagine the benefits that this could have across the thousands and tens of thousands of data centers. In fact, if just 500 [00:09:00] data centers were upgraded with what we just saw, this would power almost 1.4 million Taiwanese households for a year, take 3.7 million cars off the road for a year, or power Taipei 101 for 500 years. And by the way, this will only get better. And if 144 cores is good, well, let's put two of them together and have 288 cores. So later this [00:09:30] year, we'll be bringing the second generation of our Xeon 6 with E-cores, a whopping 288 cores. And this will enable a stunning six-to-one consolidation ratio, a better claim than anything we've seen in the industry.
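As a quick back-of-the-envelope check of those consolidation numbers, here is a minimal sketch. Only the core counts and the 3:1 ratio come from the keynote; assuming consolidation scales linearly with cores per socket is this sketch's own simplification.

```python
# Back-of-the-envelope consolidation math. Core counts and the 3:1 ratio
# come from the keynote; linear scaling with core count is an assumption.
gen2_cores = 28     # 2nd Gen Xeon, per the keynote
xeon6_cores = 144   # Xeon 6 with E-cores
xeon6_next = 288    # second-generation Xeon 6 E-core part

print(xeon6_cores / gen2_cores)               # ~5.1x raw cores per socket
rack_ratio = 3                                # claimed 3:1 consolidation
print(rack_ratio * xeon6_next / xeon6_cores)  # doubling cores -> the 6:1 claim
```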


