Every year at Computex, we are bound to see a healthy number of announcements surrounding new technology. This year, more than ever, the show is teeming with AI developments. This time we turn to Nvidia, Intel and Qualcomm to see what they are up to when it comes to AI hardware and applications.
Nvidia: AI giant continues to grow
Those who once saw Nvidia as primarily gamer-focused should have long since adjusted that viewpoint. The rise of data centers changed the company’s priorities years ago, and the hype surrounding AI is something it is only too happy to capitalize on. Rivals exist, but even the biggest among them (Google, Microsoft, Meta) are more than a little dependent on Nvidia’s famed A100 GPUs and other AI-accelerating hardware.
Earlier, we brought you news that Nvidia is putting the GH200 Superchip into full production. It also announced a mountain of other AI developments, including collaborations with Microsoft, marketing company WPP and manufacturers such as Foxconn. The Microsoft deal involves simplifying AI development within the Windows ecosystem, which will soon take leaps forward with the help of Copilot and Dev Home. Specifically, Nvidia wants to work with Microsoft to make its RTX video cards highly suitable for dedicated AI workloads within Windows applications and, of course, games. The collaboration with WPP centers on a “content engine” built on the GPU giant’s Omniverse Cloud platform. This would make it possible to quickly produce “digital twins” (i.e. polygonal variants of physical objects), allowing a marketing campaign to rely on AI-assisted 3D representations of whatever it wants to sell.
Qualcomm: Snapdragons are everywhere, and the company wants to connect them
Qualcomm is a big player when it comes to supplying smartphone SoCs. These days, even Samsung uses them worldwide. Most non-Apple flagship phones contain a Qualcomm Snapdragon chip, while rival MediaTek, despite its significant market share, remains largely confined to the mid-range as a result. Snapdragons also exist just about everywhere else, such as in the Meta Quest Pro VR headset and several smartwatches. Qualcomm chief Alex Katouzian would like these Snapdragons to handle AI workloads collectively, which would let users have all kinds of different devices talking to each other in a way we haven’t seen before.
Indeed, for now, the concept of interconnectivity is quite straightforward for end users. In software terms, you can endlessly extend the interaction between smartwatches, smartphones, smart fridges and smart thermostats. Those who use a smartwatch to measure blood pressure and track running routes often share that data with their phones. However, these devices rarely share the computing resources they offer. It is mere happenstance that a watch has a wrist sensor the phone lacks; they could all conceivably do the same things if they had the required tech on board. It’s not a question of hardware power. What’s missing is the pooling of the computing power these devices possess. Compare this to data center hardware, which increasingly must operate as a whole to perform complex tasks, so that the performance of each component adds up.
It therefore stands to reason that Qualcomm wants to move in this direction for its own consumer hardware as well. The company also points out that local processing is almost always more efficient than sending work to a data center. For now, however, the goal remains somewhat vague: the form this pooling of hardware resources should take is still uncertain. More concrete is Qualcomm’s plan to sell more PCs. Growth in the Arm PC market is being driven by products from Qualcomm and MediaTek. In addition, Windows is paying increasing attention to Arm, which may eventually become a real threat in the desktop market to the x86 architecture backed by AMD and Intel.
Intel: the VPU is the star of Meteor Lake
Intel is also adding AI hardware to its own chips on a larger scale than before. The 14th-generation “Meteor Lake” processors will all include a built-in Vision Processing Unit (VPU), based on technology from Movidius, which Intel acquired; some 13th-generation chips already featured a VPU. This lets AI workloads run locally, so the cloud will not have to do all the work. That will come in handy, for example, with Windows’ AI-integrated features.