Arm Ltd., the U.K. chip intellectual property vendor, is rolling out a machine learning platform designed to boost the functionality of devices at the network edge, beginning with an object detection capability.
Arm, which is focusing heavily on Internet of Things (IoT) processors, said this week its Project Trillium would bring machine learning and neural networking functionality to edge devices via a suite of scalable processors. The processor architecture initially runs at about 5 trillion operations per second, enough, the chip IP vendor said, to perform “daily machine learning tasks.”
Arm also stressed its scalable processor could perform machine learning functions independent of the cloud. “That’s clearly vital… for any device, such as an autonomous vehicle that cannot rely on a stable internet connection,” the company noted in a blog post.
Arm promotes its low-power architecture for mobile and IoT applications. The machine learning platform would draw between 1 and 2 watts, the company noted. Project Trillium also includes links to neural networking frameworks such as Google’s (NASDAQ: GOOGL) TensorFlow and the Caffe deep learning framework.
Along with a machine learning processor, the company also released an object detection chip that it claims is more efficient than standalone CPUs, GPUs or accelerators. The detector has initially been designed to detect people and other objects. Combining the two processors would enable applications such as high-resolution facial recognition on a mobile phone.
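The two-stage idea described above, a detector that flags candidate regions so a second stage can run heavier recognition only on those regions, can be illustrated with a toy sketch. The code below is hypothetical and has nothing to do with Arm's actual silicon; it shows a minimal sliding-window detector over a one-dimensional "image" using normalized correlation against a template.

```python
import numpy as np

def detect(image, template, threshold=0.9):
    """Toy sliding-window detector: return offsets of windows whose
    normalized correlation with the template exceeds the threshold.
    A real detector stage would hand these candidates to a heavier
    recognition stage instead of scanning the whole image with it."""
    w = len(template)
    t = (template - template.mean()) / (template.std() + 1e-9)
    hits = []
    for i in range(len(image) - w + 1):
        win = image[i:i + w]
        z = (win - win.mean()) / (win.std() + 1e-9)
        score = float(np.dot(z, t)) / w  # normalized correlation in [-1, 1]
        if score > threshold:
            hits.append(i)
    return hits

# A 1-D "image" containing the template pattern at offset 3.
image = np.array([0.0, 0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0])
template = np.array([0.0, 5.0, 0.0])
print(detect(image, template))  # → [3]
```

The design point mirrors the article's claim: cheap, specialized detection up front keeps the expensive recognition workload small.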
Along with IoT, the chip design vendor hopes to scale its machine learning framework up to applications such as inference in autonomous vehicles and datacenter servers. (An earlier version of the object detection chip is currently used as a computer vision processor in security cameras.)
“Project Trillium will be the backbone of a world where [machine learning] does not signal a category of device, but a technology functionality found in almost all devices,” the company added.
The other piece of the machine learning initiative is a neural networking software development kit designed to “bridge the gap” between existing frameworks such as Caffe and TensorFlow and underlying processor IP. The “translation” of those frameworks allows them to run on Arm Cortex CPUs and Mali GPUs, the company noted.
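The "translation" concept can be sketched in a few lines of Python. This is an illustrative toy, not Arm's actual SDK: a framework-agnostic graph (standing in for a model exported from Caffe or TensorFlow) is lowered onto a table of concrete kernels, the way an SDK might target a particular CPU or GPU backend.

```python
import numpy as np

# Toy "framework" graph: a list of (op_name, params) nodes,
# standing in for a model exported from Caffe or TensorFlow.
graph = [
    ("dense", {"weights": np.array([[1.0, -1.0], [0.5, 2.0]]),
               "bias": np.array([0.1, 0.2])}),
    ("relu", {}),
]

# Hypothetical "backend": each abstract op maps onto a concrete
# kernel; a real SDK would pick optimized kernels per processor.
KERNELS = {
    "dense": lambda x, p: x @ p["weights"] + p["bias"],
    "relu":  lambda x, p: np.maximum(x, 0.0),
}

def run(graph, x):
    """Execute a translated graph on input x."""
    for op, params in graph:
        x = KERNELS[op](x, params)
    return x

print(run(graph, np.array([1.0, 1.0])))  # → [1.6 1.2]
```

Swapping the kernel table for a different backend, without touching the graph, is the gap-bridging the SDK is meant to provide.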
Arm said Tuesday (Feb. 13) its suite of machine learning building blocks would be available for preview in April, with general availability in mid-2018.
The machine learning push is the latest effort by Arm to move its low-power processor architecture into edge and IoT applications. Along with Project Trillium, the company is expected to launch additional IoT initiatives during an embedded systems conference later this month.
Arm is the latest tech company to release an object detection platform. A vision platform released last month by Facebook is based on the Caffe2 deep learning framework.