Today, GPUs are still the most effective way of training AI systems, while companies are exploring all sorts of hardware for execution. Baidu executes with help from GPUs, for instance, while Microsoft uses programmable chips called FPGAs. Google went so far as to design its own chip, the TPU. But GPUs—originally designed for other purposes—are far from ideal. “They just happen to be what we have,” says Sam Altman, president of the tech accelerator Y Combinator and co-chairman of the open-source AI lab OpenAI. And not everyone has the resources to program their own chips, much less design them from scratch.

That’s where a chip like Nervana comes in. The question is how effective it will be. “We have zero details here,” says Patrick Moorhead, the president and principal analyst at Moor Insights and Strategy, a firm that closely follows the chip business. “We just don’t know what it will do.”

But Altman, for one, is bullish on Intel’s technology. He was an investor in Nervana when it was a startup. “Before that experience, I was skeptical that startups were going to play a really big role in designing new AI,” he told me last week, even before Intel announced its new chip. “Now I have become much more optimistic.”

Intel certainly gives this technology an added boost. Intel chips powered the rise of the PC and the data center machines that serve up the modern Internet. It has the infrastructure needed to build chips at scale. It has the sales operation needed to push them into the market. And after years as the world’s dominant maker of data center chips, it has the leverage needed to get these chips inside the Internet’s biggest players. Intel missed the market for smartphone chips. But it still has a chance with AI.
