Adding artificial intelligence and machine learning to your devices boosts privacy and increases their potential
The artificial intelligence revolution is being underwritten by the cloud. Every decision made by an AI involves sending information to vast data centres, where it's processed before being returned. But our data-hungry world is posing a problem: while we can process data at rapid rates, sending it back and forth is a logistical nightmare. And that's why AI is heading to your pocket.
In essence, this means adding brains to the phones and other technologies we use on a daily basis. "Machine learning and artificial intelligence not only makes devices more autonomous and valuable but also allows them to be more personal depending on what a customer likes or needs," says Vadim Budaev, software development team leader at Scorch AI.
Much of the work in the area is being led by tech's biggest companies, which are adding basic AI and machine learning applications to products as they develop them. Facebook has introduced deep learning that can "capture, analyse, and process pixels" in videos in real-time within its apps. Google's latest framework lets developers build AI into their apps.
Apps are the likely first step for introducing AI to devices, but it's predicted this will quickly move to other products. "An expanding variety of mobile devices will be able to run machine learning," says David Schatsky, a managing director at Deloitte. "Virtual and augmented reality headsets; smart glasses; a new generation of medical devices that will be able to do diagnostics in the field; drones and vehicles; and internet of things devices will combine sensing with local analysis." His company predicts that during 2017, 300 million smartphones will have a built-in neural network machine-learning capability.
The first products using on-device AI and machine learning are starting to appear. Australian startup Lingmo International's in-ear language translator claims to work without Bluetooth or Wi-Fi. Meanwhile, DJI's Phantom 4 drone, released in 2016, uses on-board machine vision to stop it from crashing.
Technology developed by Xnor AI uses CPUs (rather than GPUs) to put AI on devices. The company claims to be able to detect objects in real time on a cellphone. A promotional video and a report from TechCrunch claim its systems can also run on lower-powered devices: a Raspberry Pi, for example, could be used to detect knives and guns.
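Xnor AI's name nods to the trick that makes CPU-only inference plausible: in so-called XNOR networks, weights and activations are reduced to +1/-1, so an expensive multiply-accumulate becomes an XNOR followed by a bit count, operations ordinary CPUs handle cheaply. A minimal sketch of that binarized dot product (illustrative only, not the company's actual implementation):

```python
def binarize(values):
    """Pack the signs of a real-valued vector into a bit mask:
    bit i is 1 if values[i] >= 0 (i.e. +1), else 0 (i.e. -1)."""
    bits = 0
    for i, v in enumerate(values):
        if v >= 0:
            bits |= 1 << i
    return bits

def xnor_dot(a_bits, b_bits, n):
    """Dot product of two length-n +1/-1 vectors stored as bit masks."""
    # XNOR marks the positions where the two signs agree.
    agree = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    matches = bin(agree).count("1")
    # Each agreement contributes +1 to the dot product, each disagreement -1.
    return 2 * matches - n

activations = [0.7, -1.2, 0.1, -0.4]   # signs: +, -, +, -
weights = [0.5, -0.9, -0.3, -0.2]      # signs: +, -, -, -
result = xnor_dot(binarize(activations), binarize(weights), len(activations))
# Signs agree at three of four positions, so the binarized dot product is 2.
```

The point is that the inner loop of a neural network layer collapses to bitwise logic, which is why a phone CPU, or even a Raspberry Pi, can keep up.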
"Where the data sets are smaller or involving more individualised data sets (such as personal information), it will be significantly more practical to process on-device," explains Ofri Ben-Porat, from Pixoneye, a firm using on-device machine learning to scan photos.
When it works, running machine learning on a device has multiple benefits. For a start, processing and decision-making can be quicker because data doesn't need to be beamed to a remote location. Keeping data local also means it doesn't have to be transmitted to the company providing the service, giving users greater privacy. Apple is testing this model through a system it calls differential privacy.
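Apple hasn't published the fine detail of its system, but the textbook illustration of the underlying idea is randomized response: each device adds noise to its own answer before anything leaves the phone, so no individual report can be trusted, yet the population-level statistic can still be recovered. A simplified sketch:

```python
import random

def randomized_response(true_bit):
    """Flip a coin: on heads, report the truth; on tails, report a second
    fair coin flip. Any single report is plausibly deniable."""
    if random.random() < 0.5:
        return true_bit
    return random.random() < 0.5

def estimate_rate(reports):
    """Invert the noise: E[reported] = 0.25 + 0.5 * true_rate."""
    observed = sum(reports) / len(reports)
    return (observed - 0.25) / 0.5

random.seed(0)
true_bits = [i % 10 == 0 for i in range(100_000)]   # true rate: 10%
reports = [randomized_response(b) for b in true_bits]
estimate = estimate_rate(reports)                    # close to 0.10
```

The company collecting the reports learns the aggregate trend without ever holding an honest answer from any one user.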
"Protecting customer information is a major priority for businesses, and we’ve seen in many instances the damage that can be done to a brand where customer data is hacked," Ben-Porat adds. "Processing data on-device alleviates this issue by ensuring that the data is retained on the user’s mobile rather than being transferred to the server."
At present, the difficulty in bringing AI to devices at scale lies in computing power. If phones can't process data quickly enough, AI systems will run down their batteries. Electrical engineers at the Massachusetts Institute of Technology have developed a way to cut the power consumption of neural networks – one of the key systems underlying machine learning – making them more portable.
A new range of chips designed specifically to handle machine learning applications is also in development. Google's Tensor Processing Units power its translate and search systems, while UK startup Graphcore has developed its own machine learning chips. Elsewhere, the field of neuromorphic computing is growing considerably.
On-device artificial intelligence is still in its infancy, but for the wider AI industry to continue to make big breakthroughs it's going to need all the computing power it can get.