China’s Got a Huge Artificial Intelligence Plan

China aims to make the artificial intelligence industry a "new, important" driver of economic expansion by 2020, according to a development plan issued by the State Council.

Policy makers want to be global leaders, with the AI industry generating more than 400 billion yuan ($59 billion) of output per year by 2025, according to an announcement from the cabinet late Thursday. Key development areas include AI software and hardware, intelligent robotics and vehicles, and virtual and augmented reality, it said.

"Artificial intelligence has become the new focus of international competition," the report said. "We must take the initiative to firmly grasp the next stage of AI development to create a new competitive advantage, open the development of new industries and improve the protection of national security."

The plan highlights China’s ambition to become a world power backed by its technology business giants, research centers and military, which are investing heavily in AI. Globally, the technology will contribute as much as $15.7 trillion to output by 2030, according to a PwC report last month. That’s more than the current combined output of China and India.

Economic Ripples

"The positive economic ripples could be pretty substantial," said Kevin Lau, a senior economist at Standard Chartered Bank in Hong Kong. “The simple fact that China is embracing AI and having explicit targets for its development over the next decade is certainly positive for the continued upgrading of the manufacturing sector and overall economic transformation."

Chinese AI-related stocks advanced Friday. CSG Smart Science & Technology Co. climbed as much as 9.3 percent in Shenzhen before closing 3.1 percent higher, while intelligent management software developer Mesnac Co. surged 9.8 percent after hitting the 10 percent daily limit in earlier trading.

AI will have a significant influence on society and the international community, according to an opinion piece by East China University of Political Science and Law professor Gao Qiqi published Wednesday in the People’s Daily, the flagship newspaper of the Communist Party.

PwC found that the world’s second-biggest economy stands to gain more than any other from AI because of the high proportion of output derived from manufacturing.

Read More: AI Seen Adding $15.7 Trillion as Global Economy Game Changer

Another report, from Accenture Plc and Frontier Economics last month, estimated that AI could increase China’s annual growth rate by 1.6 percentage points to 7.9 percent by 2035 in terms of gross value added, a close proxy for GDP, adding more than $7 trillion.

Military, Civilian Initiatives

The State Council directive also called for China’s businesses, universities and armed forces to work more closely in developing the technology.

"We will further implement the strategy of integrating military and civilian developments," it said. "Scientific research institutes, universities, enterprises and military units should communicate and coordinate."

More AI professionals and scientists should be trained, the State Council said. It also called for promoting interdisciplinary research to connect AI with other subjects such as cognitive science, psychology, mathematics and economics.

— With assistance by Xiaoqing Pi, Emma Dai, David Ramli, Ryan Lovdahl, Robin Ganguly, and Jake Ulick

Google’s AI Guru Says That Great Artificial Intelligence Must Build on Neuroscience

Inquisitiveness and imagination will be hard to create any other way.

Demis Hassabis knows a thing or two about artificial intelligence: he founded the London-based AI startup DeepMind, which was purchased by Google for $650 million back in 2014. Since then, his company has wiped the floor with humans at the complex game of Go and begun taking steps toward crafting more general AIs.

But now he’s come out and said that he believes the only way for artificial intelligence to realize its true potential is with a dose of inspiration from human intellect.

Currently, most AI systems are based on layers of mathematics that are only loosely inspired by the way the human brain works. But different types of machine learning, such as speech recognition or identifying objects in an image, require different mathematical structures, and the resulting algorithms are only able to perform very specific tasks.

Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory don’t exist or are only in their infancy in the world of AI.

In a paper published today in the journal Neuron, Hassabis and three coauthors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve.

First, they say, better understanding of how the brain works will allow us to create new structures and algorithms for electronic intelligence. Second, lessons learned from building and testing cutting-edge AIs could help us better define what intelligence really is.

The paper itself reviews the history of neuroscience and artificial intelligence to understand the interactions between the two. It argues that deep learning, which uses layers of artificial neurons to understand inputs, and reinforcement learning, where systems learn by trial and error, both owe a great deal to neuroscience.

But it also points out that more recent advances haven’t leaned on biology as effectively, and that a general intelligence will need more human-like characteristics—such as an intuitive understanding of the real world and more efficient ways of learning. The solution, Hassabis and his colleagues argue, is a renewed “exchange of ideas between AI and neuroscience [that] can create a 'virtuous circle' advancing the objectives of both fields.”

Hassabis is not alone in this kind of thinking. Gary Marcus, a professor of psychology at New York University and former director of Uber’s AI lab, has argued that machine-learning systems could be improved using ideas gathered by studying the cognitive development of children.

Even so, implementing those findings digitally won’t be easy. As Hassabis explains in an interview with The Verge, artificial intelligence and neuroscience have become “two very, very large fields that are steeped in their own traditions,” which makes it “quite difficult to be expert in even one of those fields, let alone expert enough in both that you can translate and find connections between them.”

This famous roboticist doesn’t think Elon Musk understands AI


Earlier this week, at the campus of MIT, TechCrunch had the chance to sit down with famed roboticist Rodney Brooks, the founding director of MIT’s Computer Science and Artificial Intelligence Lab, and the cofounder of both iRobot and Rethink Robotics.

Brooks had a lot to say about AI, including his overarching concern that many people — including renowned AI alarmist Elon Musk — get it very wrong, in his view.

Brooks also warned that despite investors’ fascination with robotics right now, many VCs may underestimate how long these companies will take to build — a potential problem for founders down the road.

Our chat, edited for length, follows.

TC: You started iRobot when there was no venture funding, back in 1990. You started Rethink in 2008, when there was funding but not a lot of interest in robotics. Now, there are both, which seemingly makes it a better time to start a robotics company. Is it?

RB: A lot of Silicon Valley and Boston VCs sort of fall over themselves about how they’re funding robotics [now], so you [as a founder] can get heard.

Despite [investors who say there is plenty of later-stage funding for robotics], I think it’s hard for VCs to understand how long these far-out robotics systems will really take to get to where they can get a return on their investment, and I think that’ll be crunch time for some founders.

TC: There’s also more competition and more patents that have been awarded, and a handful of companies have most of the world’s data. Does that make them insurmountable?

RB: Someone starting a robotics company today should be thinking that maybe at some point, in order to grow, they’re going to have to get bought by a large company that has the deep pockets to push it further. The ecosystem would still use the VC funding to prune out the good ideas from the bad ideas, but going all the way to an IPO may be hard.

Second thing: On this data, yes, machine learning is fantastic, it can do a lot, but there are a lot of things that need to be solved that are not just purely software; some of the big innovations [right now] have been new sorts of electric motors and control systems and gearboxes.

TC: You’re writing a book on AI, so I have to ask you: Elon Musk expressed again this past weekend that AI is an existential threat. Agree? Disagree?

RB: There are quite a few people out there who’ve said that AI is an existential threat: Stephen Hawking, Astronomer Royal Martin Rees, who has written a book about it, and they share a common thread: they don’t work in AI themselves. For those who do work in AI, we know how hard it is to get anything to actually work through product level.

Here’s the reason that people – including Elon – make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning. [But they shouldn’t.] When people saw DeepMind’s AlphaGo beat the Korean champion and then beat the Chinese Go champion, they thought, ‘Oh my god, this machine is so smart, it can do just about anything!’ But I was at DeepMind in London about three weeks ago and [they admitted that things could easily have gone very wrong].

TC: But Musk’s point isn’t that it’s smart but that it’s going to be smart, and we need to regulate it now.

RB: So you’re going to regulate now. If you’re going to have a regulation now, either it applies to something and changes something in the world, or it doesn’t apply to anything. If it doesn’t apply to anything, what the hell do you have the regulation for? Tell me, what behavior do you want to change, Elon? By the way, let’s talk about regulation on self-driving Teslas, because that’s a real issue.

TC: You’ve raised interesting points about this in your writings, noting that the biggest worry about autonomous cars – whether they’ll have to choose between driving into a gaggle of baby strollers versus a group of elderly women – is absurd, considering how often that particular scenario happens today.

RB: There are some ethical questions that I think will slow down the adoption of cars. I live just a few blocks [from MIT]. And three times in the last three weeks, I have followed every sign and found myself at a point where I can either stop and wait for six hours, or drive the wrong way down a one-way street. Should autonomous cars be able to decide to drive the wrong way down a one-way street if they’re stuck? What if a 14-year-old riding in an Uber tries to override it, telling it to go down that one-way street? Should a 14-year-old be allowed to ‘drive’ the car by voice? There will be a whole set of regulations that we’re going to have to have, that people haven’t even begun to think about, to address very practical issues.

TC: You obviously think robots are very complementary to humans, though there will be job displacement.

RB: Yes, there’s no doubt and it will be difficult for the people who are being displaced. I think the role in factories, for instance, will shift from people doing manual work to people supervising. We have a tradition in manufacturing equipment that it has horrible user interfaces and it’s hard and you have to take courses, whereas in consumer electronics [as with smart phones], we have made the machines we use teach the people how to use them. And I do think we need to change our attitude in industrial equipment and other sorts of equipment, to make the machines teach the people how to use them.

TC: But do we run the risk of not taking this displacement seriously enough? Isn’t the reason we have our current administration because we aren’t thinking enough about the people who will be impacted, particularly in the middle of the country?

RB: There’s a sign that maybe I should have seen and didn’t. When I started Rethink Robotics, it was called Heartland Robotics. I’d just come off six years of being an adviser to the CEO of John Deere; I’d visited every John Deere factory. I could see the aging population. I could see they couldn’t get workers to replace the aging population. So I started Heartland Robotics to build robotics to help the heartland.

It’s no longer called Heartland Robotics because I started to get comments like, “Why didn’t you just come out and call it Bible Belt Robotics?” The people in the Midwest thought we were making fun of them. I should have now, in retrospect, thought of that a little deeper.

TC: If you hadn’t started Rethink, what else would you want to be focused on right now?

RB: I’m a robotics guy, so every problem I think I can solve has a robotics solution. But what are the sorts of things that are important to humankind, which the current model of either large companies investing in or VCs investing in, aren’t going to solve? For instance: plastics in the ocean. It’s getting worse; it’s contaminating our food chain. But it’s the problem of the commons. Who is going to fund a startup company to get rid of plastics in the ocean? Who’s going to fund that, because who’s going to [provide a return for those investors] down the line?

So I’m more interested in finding places where robotics can help the world but there’s no way currently of getting the research or the applications funded.

TC: You’re thought of as the father of modern robotics. Do you feel like you have to be out there, evangelizing on behalf of robotics and roboticists, so people understand the benefits, rather than focus on potential dangers?

RB: It’s why I’m right now writing a book on AI and robotics and the future — because people are getting too scared about the wrong things and not thinking enough about what the real implications will be.

Think Tank: Is AI the Future of E-commerce Fraud Prevention?


Michael Reitblat, chief executive officer of Forter, explains how AI can play a key role in fraud prevention.

There’s a lot of debate about what Artificial Intelligence really means, and how we should feel about it. Will it transform our world for the better? Will the machines take over? Will it simply make processes we already perform faster and smoother? As Gartner says in “A Framework for Applying AI in the Enterprise,” “The artificial intelligence acronym ‘AI’ might more appropriately stand for ‘amazing innovations’ that do what we thought technology couldn’t do.”


One way or another, we’re talking about “smart machines” — machines that are trained on existing, historical data, and use that to make accurate deductions or predictions about examples with which they’re presented. The applications are wide-ranging, from medicine to retail to self-driving cars and beyond.


For e-commerce, AI means the ability to deliver capabilities that simply were not possible before. There are two main directions in which this expresses itself:

1) Uncovering trends and audiences: A well-trained e-commerce AI can identify trends in buyer behavior, or interest in new products or experiences, and adapt quickly.

2) Personalization: The experience can be tailored to each customer in ways that were not an option when companies had to configure/design the experience for everyone at once (or maybe have a few versions based on geographies). Customers can be offered the information and products they want, when they want them, in the ways that are best suited to them.

Why I’ve Come to Love AI

As someone who travels a lot, I often have a fairly complex customer story when I shop online. I might be on a work trip to China, using a proxy to shop on a favorite U.S. store with my credit card, which has a New York billing address, sending something to an office in San Francisco to pick up on my next stop. There’s a good chance I’ll be on a mobile device, and since I like to explore new things, I’m often buying something of a kind I’ve never bought before.

All of this makes me unpopular with e-commerce fraud prevention systems. I’ve lost count of the number of times I’ve been rejected, or delayed for days while my order is painstakingly reviewed. Sometimes I’ve moved on by the time the package finally arrives at the place to which I had ordered it.

The thing is, I get it. I was a fraud prevention analyst myself, back in the time before AI was an option. I know exactly how hard these transactions are to get right, from the human perspective. I know how long it can take to review a transaction, and that as an analyst the tendency is always to play it safe — even if that means sending a good customer away.

AI isn’t a magic tool, but properly leveraging AI can enable retailers to have their cake and eat it too: driving sales upward by creating frictionless, speedy buying experiences for consumers while staying completely protected against online payment fraud.

The 3 Unmatched Advantages of AI-based Fraud Protection Systems

Scale: An AI system can “look” at 6,000 data points in every transaction and match them against billions of other transactions to look for patterns, discrepancies, and simple coincidences of events in just a fraction of a second (a minimal sketch of this kind of real-time scoring follows this list). This means that all fraud decisions can happen 100 percent in real time, regardless of how much traffic the site is receiving, or whether the fraud team is down with the flu.

Accuracy: In the last year a well-built and trained fraud protection AI has proven repeatedly that it outperforms even the best human reviewers in accuracy. For retailers the reduction in false declines (good customers mistakenly rejected as fraud) means more sales, and happier consumers, and the reduction in fraud chargebacks means lower costs, and lower risk. Beyond that, it enables new business models that were previously considered too risky, like the growing popularity of the try-and-buy model.

Adaptivity: In fraud prevention, one of the great challenges is the speed of learning necessary to deal with new fraudulent modi operandi. If a fraudster finds a new technique that works, it will spread like wildfire and hundreds of fraudsters will attack thousands of retailers at once. An AI-based solution is the only realistic way for retailers to fight fraud together in this highly dynamic environment, combining their efforts and sharing data in a centralized way to prevent fraudsters from abusing one retailer after another. In fact, AI has the potential to reverse the asymmetry and push the fraudsters back. From the criminal point of view, if a new method to defraud is blocked almost immediately after it is first conceived and tried out, it isn’t worth investing in.
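
To make the scale point concrete, here is a minimal sketch, in Python, of this kind of real-time transaction scoring. It is illustrative only: the features, training data, and model are invented for the example and do not represent Forter's system.

```python
# Minimal sketch of real-time transaction scoring. Illustrative only:
# the features, training data, and model are invented for this example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy training data: each row is a past transaction reduced to numeric
# features (order value, account age in days, billing/shipping distance
# in km, first purchase of this kind?), labeled 0 = legit, 1 = fraud.
X_train = np.array([
    [120.0,  900,    5, 0],
    [2300.0,   2, 8800, 1],
    [75.0,   400,   12, 0],
    [1800.0,   1, 9500, 1],
])
y_train = np.array([0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def score_transaction(features):
    """Return a fraud probability in a fraction of a second."""
    return model.predict_proba(np.array([features]))[0, 1]

# A new order arrives; decide instantly, with no manual review queue.
risk = score_transaction([1500.0, 3, 9100, 1])
print("decline" if risk > 0.5 else "approve", f"(risk={risk:.2f})")
```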

AI is the future of e-commerce fraud prevention. It brings scale, accuracy and adaptivity to improve customer experience, block fraud and increase sales. Some retailers have already started leveraging AI, and they’re gaining a competitive advantage in this highly competitive field. Better fraud prevention is about to become standard. No site can afford to get left behind.

Michael Reitblat is chief executive officer of Forter.

Drones and phones are the next frontier for AI breakthroughs


Adding artificial intelligence and machine learning to your devices boosts privacy and increases their potential

The artificial intelligence revolution is being underwritten by the cloud. Every decision made by an AI involves sending information to vast data centres, where it's processed before being returned. But our data-hungry world is posing a problem: while we can process data at rapid rates, sending it back and forth is a logistical nightmare. And that's why AI is heading to your pocket.

In essence, this means adding brains to the phones and other technologies we use on a daily basis. "Machine learning and artificial intelligence not only makes devices more autonomous and valuable but also allows them to be more personal depending on what a customer likes or needs," says Vadim Budaev, software development team leader at Scorch AI.

Much of the work in the area is being led by tech's biggest companies, which are adding basic AI and machine learning applications to products as they develop them. Facebook has introduced deep learning that can "capture, analyse, and process pixels" in videos in real-time within its apps. Google's latest framework lets developers build AI into their apps.

Apps are the likely first step for introducing AI to devices, but it's predicted this will quickly move to other products. "An expanding variety of mobile devices will be able to run machine learning," says David Schatsky, a managing director at Deloitte. "Virtual and augmented reality headsets; smart glasses; a new generation of medical devices that will be able to do diagnostics in the field; drones and vehicles; and internet of things devices will combine sensing with local analysis." His company predicts that during 2017, 300 million smartphones will have a built-in neural network machine-learning capability.

The first products using on-device AI and machine learning are starting to appear. Australian startup Lingmo International's in-ear language translator claims to work without Bluetooth or Wi-Fi. Meanwhile, DJI's Phantom 4 drone, released in 2016, uses on-board machine vision to stop it from crashing.

Technology developed by Xnor AI uses CPUs (rather than GPUs) to put AI on devices. It claims to be able to detect objects in real time on a cellphone. A promotional video and a report from TechCrunch claim its systems can also run on lower-powered devices: a Raspberry Pi, for example, could be used to detect knives and guns.

"Where the data sets are smaller or involving more individualised data sets (such as personal information), it will be significantly more practical to process on-device," explains Ofri Ben-Porat, from Pixoneye, a firm using on-device machine learning to scan photos.

When successful, there are multiple benefits to running machine learning on a device. To start with, processing and decision-making can be quicker, as data doesn't need to be beamed to a remote location. Keeping data local also means it doesn't have to be transmitted to the company providing the service – giving users greater privacy. Apple is testing this model through a system it calls differential privacy.
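
To illustrate the idea behind that approach, here is a toy sketch of one well-known differential-privacy mechanism, randomized response. This is a generic textbook technique, not Apple's actual implementation: each device perturbs its answer before reporting it, yet the aggregate statistic can still be recovered.

```python
# Toy illustration of randomized response (a classic differential-privacy
# mechanism) -- NOT Apple's actual implementation. Each device flips its
# true yes/no answer with some probability before reporting, so no single
# report is trustworthy, yet the aggregate estimate stays accurate.
import random

P_TRUTH = 0.75  # probability a device reports its true answer

def randomize(bit):
    return bit if random.random() < P_TRUTH else 1 - bit

def estimate_rate(reports):
    # E[report] = p*rate + (1-p)*(1-rate), so invert to recover the rate.
    observed = sum(reports) / len(reports)
    return (observed - (1 - P_TRUTH)) / (2 * P_TRUTH - 1)

true_bits = [1 if random.random() < 0.3 else 0 for _ in range(100_000)]
reports = [randomize(b) for b in true_bits]
print(f"estimated rate: {estimate_rate(reports):.3f}")  # close to 0.30
```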

"Protecting customer information is a major priority for businesses, and we’ve seen in many instances the damage that can be done to a brand where customer data is hacked," Ben-Porat adds. "Processing data on-device alleviates this issue by ensuring that the data is retained on the user’s mobile rather than being transferred to the server".

At present, the difficulty in bringing AI to devices at scale lies in computing power. If phones can't process data quickly enough, AI systems will run down their batteries. Electrical engineers at the Massachusetts Institute of Technology have developed a way for neural networks – one of the key underlying systems behind machine learning – to reduce power consumption and be more portable.

There's also a new range of chips being developed specifically to handle machine-learning applications. Google's Tensor Processing Units power its translation and search systems, while UK startup Graphcore has developed its own machine-learning chips. Elsewhere, the field of neuromorphic computing is growing considerably.

On-device artificial intelligence is still in its infancy, but for the wider AI industry to continue to make big breakthroughs it's going to need all the computing power it can get.


This AI can detect a concussion years after it happened


A new method uses artificial intelligence to accurately detect brain damage caused by concussions years after the trauma happened.

“With 1.6 to 3.8 million concussions per year in the US alone, the prevalence of this injury is alarming…”

While the short-term effects of head trauma can be devastating, the long-term effects can be equally hard on patients. Symptoms may linger for years after the concussion happened. The problem is that it is often hard to say whether those symptoms are being caused by the concussion or by other factors, such as another neurological condition or the normal aging process.

The only way to prove the presence of brain damage caused by concussion years after it occurred was through post-mortem examination. A means of diagnosing concussion in living patients, however, remained elusive.

Artificial intelligence in action

The research team recruited former university athletes between the ages of 51 and 75 who played contact sports such as ice hockey and American football. From that group, the researchers formed a cohort of 15 athletes who reported being concussed in their athletic careers, and a control group of 15 athletes who had not been concussed.

The researchers performed a battery of tests on both groups, including neuropsychological testing, genotyping, structural neuroimaging, magnetic resonance spectroscopy, and diffusion weighted imaging. Then, they pooled the data and fed it to computers that use artificial intelligence software to “learn” the differences between the brain of a healthy athlete versus the brain of a previously concussed athlete.

They found that white matter connections between several brain regions of concussed individuals showed abnormal connectivity that might reflect both degeneration and the brain’s method of compensating for damage.

Using the data, the computers were able to detect concussion with up to 90 percent accuracy.
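
For readers curious about the mechanics, the sketch below mirrors the study's setup in miniature: two groups of 15 athletes, a pooled table of features, and leave-one-out cross-validation. The data is synthetic and the model choice is an assumption; the paper's actual pipeline is more sophisticated.

```python
# Miniature, synthetic version of the study's setup. The data and the
# classifier choice are assumptions made for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_features = 15, 20

healthy = rng.normal(0.0, 1.0, (n_per_group, n_features))
concussed = rng.normal(0.0, 1.0, (n_per_group, n_features))
concussed[:, :5] += 1.5  # stylized abnormal white-matter connectivity

X = np.vstack([healthy, concussed])
y = np.array([0] * n_per_group + [1] * n_per_group)

# With only 30 subjects, leave-one-out cross-validation makes the most
# of the data: train on 29 athletes, test on the held-out one, repeat.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.0%}")
```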


The work, once more thoroughly tested and refined, could have implications for current and future concussion lawsuits.

The National Football League, for example, faced a decade-long lawsuit by former players who claimed it did not do enough to protect them from concussion. The lawsuit was complicated by the fact there was no objective way to determine if the neurological symptoms they experienced were caused by the concussions they received as players.

The National Hockey League is currently facing a similar lawsuit by former players.

Larger sample needed

Sebastien Tremblay, the first author of the paper on the findings, says they need to validate the signature on a larger sample size, using various magnetic resonance imaging (MRI) scanners, before it becomes an effective means to diagnose concussion.

When perfected, the signature could also aid treatment of concussion by providing doctors with an accurate picture of what is causing their patients’ symptoms.

The need for such tools is greater than ever. According to the federal government, reported concussions have increased 40 percent between 2004 and 2014 among young football, soccer, and hockey players.

“With 1.6 to 3.8 million concussions per year in the US alone, the prevalence of this injury is alarming,” says Tremblay, a postdoctoral researcher at the Montreal Neurological Institute and Hospital (The Neuro) at McGill University.

“It is unacceptable that no objective tools or techniques yet exist to diagnose them, not to mention the sheer lack of scientifically valid treatment options. With our work, we hope to provide help to the vast population of former athletes who experience neurological issues after retiring from contact sport,” Tremblay says.

“Future studies, including systematic comparisons with patient groups presenting with other age-related neurological conditions, together with identifying new biomarkers of concussion, would help refine the developed, computer-assisted model of the remote effects of concussion on the ageing brain,” says Louis de Beaumont, a researcher at Université de Montréal and the paper’s senior author.

The study’s results appear in the European Journal of Neuroscience. Additional authors of the paper are from Université de Montréal, the Montreal Neurological Institute and Hospital (The Neuro) at McGill University, and the Ludmer Center for NeuroInformatics. The Canadian Institutes of Health Research (CIHR) funded this study.

Is AI going to be a job killer? Maybe not

There’s no shortage of dire warnings about the dangers of artificial intelligence these days.

Modern prophets, such as physicist Stephen Hawking and investor Elon Musk, foretell the imminent decline of humanity. With the advent of artificial general intelligence and self-designed intelligent programs, new and more intelligent AI will appear, rapidly creating ever smarter machines that will, eventually, surpass us.

When we reach this so-called AI singularity, our minds and bodies will be obsolete. Humans may merge with machines and continue to evolve as cyborgs.

Is this really what we have to look forward to?

AI’s checkered past

Not really, no.

AI, a scientific discipline rooted in computer science, mathematics, psychology, and neuroscience, aims to create machines that mimic human cognitive functions such as learning and problem-solving.

Since the 1950s, it has captured the public’s imagination. But, historically speaking, AI’s successes have often been followed by disappointments – caused, in large part, by the inflated predictions of technological visionaries.

In the 1960s, one of the founders of the AI field, Herbert Simon, predicted that “machines will be capable, within twenty years, of doing any work a man can do.” (He said nothing about women.)

Marvin Minsky, a neural network pioneer, was more direct: “within a generation,” he said, “… the problem of creating ‘artificial intelligence’ will substantially be solved”.

But it turns out that Niels Bohr, the early 20th century Danish physicist, was right when he (reportedly) quipped that, “Prediction is very difficult, especially about the future.”

Today, AI’s capabilities include speech recognition, superior performance at strategic games such as chess and Go, self-driving cars, and revealing patterns embedded in complex data.

These talents have hardly rendered humans irrelevant.

New neuron euphoria

But AI is advancing. The most recent AI euphoria was sparked in 2009 by much faster learning of deep neural networks.

These networks consist of large collections of connected computational units called artificial neurons, loosely analogous to the neurons in our brains. To train a network to “think”, scientists provide it with many solved examples of a given problem.

Suppose we have a collection of medical-tissue images, each coupled with a diagnosis of cancer or no-cancer. We would pass each image through the network, asking the connected “neurons” to compute the probability of cancer.

We then compare the network’s responses with the correct answers, adjusting connections between “neurons” with each failed match. We repeat the process, fine-tuning all along, until most responses match the correct answers.

Eventually, this neural network will be ready to do what a pathologist normally does: examine images of tissue to predict cancer.
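
That training cycle can be boiled down to a few lines of Python. The sketch below uses synthetic data and a single layer of “neurons” (a logistic model) rather than a deep network, but the adjust-on-mismatch loop is the same idea.

```python
# The training cycle described above, stripped to its essentials. A single
# layer of "neurons" scores synthetic "images" for cancer probability, and
# the connection weights are nudged on every mismatch; real systems stack
# many such layers.
import numpy as np

rng = np.random.default_rng(42)
n_images, n_pixels = 200, 64

X = rng.normal(size=(n_images, n_pixels))     # stand-in tissue images
true_w = rng.normal(size=n_pixels)
y = (X @ true_w > 0).astype(float)            # 1 = cancer, 0 = no cancer

w = np.zeros(n_pixels)                        # untrained "connections"
for epoch in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))            # network's cancer probability
    w -= 0.1 * X.T @ (p - y) / n_images       # adjust connections on error

predictions = (1 / (1 + np.exp(-(X @ w)))) > 0.5
print(f"match rate after fine-tuning: {(predictions == y).mean():.0%}")
```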

This is not unlike how a child learns to play a musical instrument: she practices and repeats a tune until she can play it perfectly. The knowledge is stored in the neural network, but it is not easy to explain the mechanics.

Networks with many layers of “neurons” (hence the name “deep” neural networks) only became practical when researchers started using many parallel processors on graphics chips for their training.

Another condition for the success of deep learning is the large sets of solved examples. Mining the internet, social networks and Wikipedia, researchers have created large collections of images and text, enabling machines to classify images, recognise speech, and translate language.

Already, deep neural networks are performing these tasks nearly as well as humans.

AI doesn’t laugh

But their good performance is limited to certain tasks.

Scientists have seen no improvement in AI’s understanding of what images and text actually mean. If we showed a Snoopy cartoon to a trained deep network, it could recognise the shapes and objects – a dog here, a boy there – but would not decipher its significance (or see the humour).

We also use neural networks to suggest better writing styles to children. Our tools suggest improvement in form, spelling, and grammar reasonably well, but are helpless when it comes to logical structure, reasoning, and the flow of ideas.

Current models do not even understand the simple compositions of 11-year-old schoolchildren.


AI’s performance is also restricted by the amount of available data. In my own AI research, for example, I apply deep neural networks to medical diagnostics, which has sometimes resulted in slightly better diagnoses than in the past, but nothing dramatic.

In part, this is because we do not have large collections of patients’ data to feed the machine. But the data hospitals currently collect cannot capture the complex psychophysical interactions causing illnesses like coronary heart disease, migraines or cancer.

Robots stealing your jobs

So, fear not, humans. Febrile predictions of AI singularity aside, we’re in no immediate danger of becoming irrelevant.

AI’s capabilities drive science fiction novels and movies and fuel interesting philosophical debates, but we have yet to build a single self-improving program capable of general artificial intelligence, and there’s no indication that intelligence could be infinite.


Deep neural networks will, however, indubitably automate many jobs. AI will take our jobs, jeopardising the existence of manual labourers, medical diagnosticians, and perhaps, someday, to my regret, computer science professors.

Robots are already conquering Wall Street. Research shows that “artificial intelligence agents” could lead some 230,000 finance jobs to disappear by 2025.

In the wrong hands, artificial intelligence can also cause serious danger. New computer viruses can detect undecided voters and bombard them with tailored news to swing elections.

Already, the United States, China, and Russia are investing in autonomous weapons using AI in drones, battle vehicles, and fighting robots, leading to a dangerous arms race.

Now that’s something we should probably be nervous about.

Artificial Intelligence ushers in the era of superhuman doctors


Non-human intelligence will soon be a standard part of your medical care – if it isn’t already. Can you trust it?

By Kayt Sukel

THE doctor’s eyes flit from your face to her notes. “How long would you say that’s been going on?” You think back: a few weeks, maybe longer? She marks it down. “Is it worse at certain times of day?” Tough to say – it comes and goes. She asks more questions before prodding you, listening to your heart, shining a light in your eyes. Minutes later, you have a diagnosis and a prescription. Only later do you remember that fall you had last month – should you have mentioned it? Oops.

One in 10 medical diagnoses is wrong, according to the US Institute of Medicine. In primary care, one in 20 patients will get a wrong diagnosis. Such errors contribute to as many as 80,000 unnecessary deaths each year in the US alone.

These are worrying figures, driven by the complex nature of diagnosis, which can encompass incomplete information from patients, missed hand-offs between care providers, biases that cloud doctors’ judgement, overworked staff, overbooked systems, and more. The process is riddled with opportunities for human error. This is why many want to use the constant and unflappable power of artificial intelligence to achieve more accurate diagnosis, prompt care and greater efficiency.

AI-driven diagnostic apps are already available. And it’s not just Silicon Valley types swapping clinic visits for diagnosis via smartphone. The UK National Health Service (NHS) is trialling an AI-assisted app to see if it performs better than the existing telephone triage line. In the US and mainland Europe, health

Fake Obama created using AI tool to make phoney speeches


Researchers at the University of Washington have produced a photorealistic former US President Barack Obama.

Artificial intelligence was used to precisely model how Mr Obama moves his mouth when he speaks.

Their technique allows them to put any words into their synthetic Barack Obama’s mouth.

BBC Click finds out more.

See more at Click's website and @BBCClick.

Elon Musk: regulate AI to combat 'existential threat' before it's too late

Elon Musk urges proactive regulation of AI to help prevent the risk of an ‘existential threat’. Photograph: Brian Snyder/Reuters


Tesla and SpaceX CEO says AI represents a ‘fundamental risk to human civilisation’ and that waiting for something bad to happen is not an option

Tesla and SpaceX chief executive Elon Musk has pushed again for the proactive regulation of artificial intelligence because “by the time we are reactive in AI regulation, it’s too late”.

Speaking at the US National Governors Association summer meeting in Providence, Rhode Island, Musk said: “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.

“It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation.”

Musk has previously stated that AI is one of the most pressing threats to the survival of the human race, and that his investments in its development were made with the intention of keeping an eye on how it evolves.

“AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late,” Musk told the meeting. “AI is a fundamental risk to the existence of human civilisation.”

While Musk has repeatedly shared his worries over AI and a pace of development that many see as inevitable, his words appeared to hit home with several of the 32 governors taking part in the meeting, who followed up with questions looking for suggestions on how to go about regulating AI’s development. Musk suggested that the first stage would be to “learn as much as possible” to better understand the problem.

Musk also talked about electric and self-driving cars, saying that at some stage having a non-autonomous vehicle intended for travel rather than recreation would be considered strange and that the biggest threat to autonomous cars would be a hack of the software to take control of a fleet of connected vehicles.

Artificial Intelligence Is Set To Change The Face Of IT Operations


Artificial Intelligence will have a profound impact on the IT industry. The Machine Learning algorithms and models that bring AI to the forefront only get better with data. If these algorithms can learn from existing medical reports and help doctors with diagnosis, the same techniques can be used to improve IT operations. After all, enterprise IT deals with humongous data acquired from servers, operating systems, applications and users. These datasets can be used for creating ML models that assist system administrators, DevOps teams and IT support departments.

Here are a few areas of enterprise IT that AI will significantly impact.

Log Analysis

Analyzing logs is the most obvious use case for AI-driven operations. Every layer of the stack – hardware, operating systems, servers, applications – generates a stream of data that can be collected, stored, processed, and analyzed by ML algorithms. Today this data is used by the IT team to perform audit trails and root cause analysis (RCA) of an event caused by a security breach or a system failure. Traditional log management platforms such as Splunk, Elasticsearch, Datadog, and New Relic are augmenting their platforms with Machine Learning. By bringing AI to log analysis, IT can proactively find anomalies in systems before a failure is reported.
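
As a rough illustration, the sketch below trains a generic anomaly detector on per-minute metrics parsed from logs and flags an abnormal minute. It uses stock scikit-learn rather than any of the vendors' products, and the metrics are invented.

```python
# Rough sketch of proactive anomaly detection on log-derived metrics.
# Each row summarizes one minute of logs: error count, mean latency (ms),
# and requests per second. The numbers are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal_minutes = np.column_stack([
    rng.poisson(2, 1000),            # errors per minute
    rng.normal(120, 15, 1000),       # mean latency
    rng.normal(350, 40, 1000),       # throughput
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_minutes)

incoming = np.array([[40, 900.0, 80.0]])  # error spike + latency blowup
if detector.predict(incoming)[0] == -1:   # -1 means outlier
    print("anomaly: alert the on-call before users report a failure")
```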

Having sensed the opportunity in bringing ML to log management, a few startups are building AI-driven log analysis platforms. These intelligent tools can correlate data from networking gear, servers and applications to pinpoint the issue in real-time.

Going forward, the software will become smart enough to self-diagnose and self-heal to recover from failures. ML algorithms will be embedded right into the source of data including operating systems, databases, and application software.

Capacity Planning

IT architects spend a considerable amount of time planning the resource needs of applications. It can be very challenging to define the server specifications for a complex, multi-tier application deployment. Each physical layer of the application needs to be matched with the number of CPU cores, the amount of RAM, storage capacity and network bandwidth.

In the public cloud environments, this results in identifying the right VM type for each tier. Some of the mature IaaS offerings such as Amazon EC2, Azure VMs and Google Compute Engine offer dozens of VM types making it a difficult choice. Cloud providers regularly add new VM families to support the emerging workloads like Big Data, game rendering, parallel processing, and data warehousing.

Machine Learning can come to the rescue of infrastructure architects by helping them define the right hardware specifications or choose the appropriate instance type in the public cloud. The algorithms learn from existing deployments and their performance to recommend the optimal configuration for each workload.
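
A minimal sketch of such a recommender, assuming a made-up instance catalog and an observed resource profile, might look like this:

```python
# Hedged sketch of an instance-type recommender: match a workload's
# observed peak usage against a catalog and pick the smallest type that
# fits with headroom. The catalog and the profile are made up.
import numpy as np

catalog = {                       # hypothetical VM types: (vCPUs, RAM GB)
    "general.medium": (2, 8),
    "compute.large": (8, 16),
    "memory.large": (4, 32),
}

observed_peak = np.array([3.0, 24.0])   # cores, GB seen in deployment logs

def recommend(peak, headroom=1.25):
    need = peak * headroom
    feasible = {name: spec for name, spec in catalog.items()
                if spec[0] >= need[0] and spec[1] >= need[1]}
    # Crude cost proxy: the smallest total capacity that still fits.
    return min(feasible, key=lambda name: sum(catalog[name]))

print(recommend(observed_peak))          # -> memory.large
```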

It’s a matter of time before the public cloud providers add an intelligent VM recommendation engine for each running workload. This move will reduce the burden on IT architects by assisting them in identifying the right configuration and specifications.

Infrastructure Scaling

Thanks to the elasticity of the cloud, administrators can define auto scaling for applications. Auto scaling can be configured to be proactive or reactive. In proactive mode, admins schedule the scale-out operation before a particular event. For example, if a direct mailer campaign triggered every weekend results in additional load, they can configure the infrastructure to scale out on Friday evening and scale in on Sunday. In reactive mode, the underlying monitoring infrastructure tracks key metrics such as CPU utilization and memory usage to initiate a scale-out operation. When the load returns to normal, the scale-in operation brings the infrastructure back to its original form.

With Machine Learning, IT admins can configure predictive scaling that learns from the previous load conditions and usage patterns. The system will become intelligent enough to decide when to scale with no explicit rules. This design complements capacity planning by adjusting the runtime infrastructure needs more accurately.
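
Here is a simplified sketch of what predictive scaling could look like: it learns an hour-of-week load profile from synthetic history and sizes the fleet ahead of the expected peak. The traffic pattern, per-instance capacity, and headroom factor are all assumptions.

```python
# Simplified predictive scaling on synthetic traffic history.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24 * 7 * 8)                 # 8 weeks of hourly history
phase = hours % (24 * 7)                      # hour-of-week, Monday 00:00 = 0
weekend = (phase >= 114).astype(float)        # Friday 6 p.m. onward is busy
load = 1000 + 800 * weekend + rng.normal(0, 50, hours.size)

# "Model": the average load observed at each hour of the week.
profile = np.array([load[phase == h].mean() for h in range(24 * 7)])

CAPACITY = 250                                # requests/hour per instance

def instances_needed(hour_of_week, headroom=1.2):
    return int(np.ceil(profile[hour_of_week] * headroom / CAPACITY))

# Scale out ahead of Friday evening instead of reacting to the spike.
print(instances_needed(114), "instances for Friday 6 p.m.")
```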

In the coming months, public cloud providers will start adding predictive scaling to their IaaS offering.

Cost Management

Assessing the cost of infrastructure plays a crucial role in IT architecture. Especially in the public cloud, cost analysis and forecast is complex. Cloud providers charge for a variety of components including the usage of VMs, storage capacity, IOPS, internal and external bandwidth, and API calls made by applications.

Machine Learning can accurately forecast the cost of infrastructure. By analyzing the workloads and their usage patterns, it becomes possible to provide a breakup of the cost across various components, applications, departments, and subscription accounts. This would help business units to secure IT budgets more accurately.
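
A toy version of such a forecast, with invented per-component spend history and a simple linear trend, could look like this:

```python
# Toy cost forecast: fit a linear trend per billing component and project
# next month's bill, with a per-component breakdown that could be rolled
# up by application or department. All figures are invented.
import numpy as np

history = {                       # six months of spend in USD
    "vm_hours":  [1200, 1350, 1500, 1640, 1800, 1950],
    "storage":   [300, 320, 345, 370, 400, 430],
    "bandwidth": [150, 180, 160, 210, 220, 240],
}

def forecast_next(series):
    months = np.arange(len(series))
    slope, intercept = np.polyfit(months, series, deg=1)
    return slope * len(series) + intercept    # extrapolate one month out

breakdown = {c: forecast_next(s) for c, s in history.items()}
print({c: round(v) for c, v in breakdown.items()})
print("projected total next month:", round(sum(breakdown.values())))
```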

Intelligent cost management will become a de facto feature of public cloud platforms.

Energy Efficiency

Large enterprises and infrastructure providers are continuing to invest in massive data centers. One of the most complex challenges of managing data centers is power management. The increase in energy costs combined with environmental responsibility has put pressure on the data center industry to improve its operational efficiency.

By applying Machine Learning to power management, data center administrators can dramatically reduce energy usage. Google is pioneering AI-driven power management through DeepMind, the UK-based company the search giant acquired in 2014. Google claims that it managed to reduce the amount of energy used for cooling by up to 40 percent; a graph it published shows how PUE (Power Usage Effectiveness) was adjusted based on the ML recommendations.


[Graph: PUE adjusted based on ML recommendations. Source: DeepMind]

AI-driven power management will become accessible to enterprises to bring energy efficiency into data center management.

Performance Tuning

After an application is deployed in production, a considerable amount of time is spent tuning its performance. Database engines in particular, which deal with a significant volume of transactions, experience reduced performance over time. DBAs step in to drop and rebuild indices and clear the logs to free up space. Almost every workload, including web applications, mobile applications, Big Data solutions, and line-of-business applications, needs tweaking to get optimal performance.

Machine Learning can deliver auto-tuning of workloads. By analyzing the logs and the time taken for common tasks such as processing a query or responding to a request, an algorithm can apply an accurate fix to the problem. It augments log management by taking action instead of escalating the issue to the team. This will directly impact the cost of support and of running enterprise IT help desks.
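
As a bare-bones illustration, the sketch below watches query latencies parsed from logs and triggers a maintenance action when they drift well above their baseline. The rebuild_index() hook is hypothetical; a real system would plug in its own remediation.

```python
# Bare-bones auto-tuning loop: act on latency drift instead of escalating.
# The rebuild_index() hook is hypothetical.
import statistics

baseline = [42, 45, 40, 44, 43, 41, 46, 44]   # recent normal latencies, ms

def should_tune(recent, baseline, factor=2.0):
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return statistics.mean(recent) > mean + factor * stdev

def rebuild_index():
    print("maintenance: dropping and rebuilding the hot index")

recent = [95, 110, 102]                        # degraded query times, ms
if should_tune(recent, baseline):
    rebuild_index()                            # act instead of escalating
```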

Artificial Intelligence will have an enormous impact on the L1 and L2 IT support roles. Most of the issues that are escalated to them will be tackled by intelligent algorithms.

Janakiram MSV is an analyst, advisor, and architect. Follow him on Twitter, Facebook and LinkedIn.