Businesses are expected to use AI to stay ahead of the game - but how do you get started?


Despite the hype, small, medium-sized and sometimes even large businesses are often unsure where to begin: “How can we use artificial intelligence in our organisation, and what value can it bring?” This is a question many company directors and managers have asked themselves. Organisations are often unaware of the vast opportunities already sitting in their data, but they do know they need to get started with AI so as not to be left behind by the competition.

While everyone talks about AI vastly transforming each industry in the near future, many businesses are not sure what exactly this means for their own organisation: What business processes could be automated? What processes could be made more efficient with AI, and where could a machine learning algorithm bring the most value?

So, why have some businesses not yet started using AI? Innovating with AI and machine learning requires access to highly skilled individuals: data scientists who have mastered not only statistics and data visualisation but also complex machine learning and AI methods. Machine learning engineers and AI architects are rare, locating an excellent one is a lengthy process, and hiring them is costly. AI experts often have PhDs in an artificial intelligence field, and many are still doing research in academia, because AI is not a field you become an expert in overnight.

Before we can solve the talent gap, we need to fill the knowledge gap. Companies such as Brainpool AI provide the experts but also help organisations understand how to get started with AI, from data structuring and engineering to identifying machine learning opportunities within the business. Working closely with a company’s in-house teams, Brainpool consultants perform analytics audits: they establish what data is available, what analytics has already been done and how the data should be structured and merged, and they help businesses understand what kinds of questions machine learning can answer and where it can bring the most value.

Say you are a retailer and want to know whether you are offering the right kind of stock - stock that makes your business run efficiently and profitably while offering product ranges that keep your customers happy. You may be wondering whether the set of mayonnaise brands you offer is both satisfactory to your customers and cost-efficient.

Here are some examples of how AI can help us:

  1. AI powered product selection – ensuring the consumer receives the most relevant choice of products based on their online behavior. We see Amazon getting quite good at this.

  2. AI powered stock management – using AI to maximise customer satisfaction while at the same time optimising stock management to ensure the business runs efficiently

  3. Personal health virtual assistant/healthcare bots - AI powered technology can help patients by suggesting what medication or attention is needed based on their described symptoms

  4. Medical diagnostics - millions of tests are being carried out by hospitals today for various illnesses which are hard to detect. AI can enhance speed and accuracy of these tests

  5. Fraud detection – AI can help companies in industries such as telecom or banking detect and prevent fraud with higher accuracy
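
The fraud-detection example above can be sketched with a simple anomaly detector. This is a minimal illustration rather than a production system: the transaction amounts are invented, and the choice of scikit-learn's IsolationForest is just one reasonable starting point.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated transaction amounts: mostly routine, plus a few extreme outliers.
normal = rng.normal(loc=50, scale=10, size=(500, 1))
fraud = np.array([[900.0], [1200.0], [1500.0]])
transactions = np.vstack([normal, fraud])

# Fit an unsupervised anomaly detector; contamination is our guess
# at the fraction of fraudulent transactions.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)  # -1 = anomaly, 1 = normal

# The extreme amounts should be among the flagged transactions.
flagged = transactions[labels == -1].ravel()
print(sorted(flagged)[-3:])
```

In practice the inputs would be many features per transaction (merchant, time, location) rather than a single amount, but the fit-then-flag pattern is the same.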

The range of applications is huge; it would be hard to list them all. When getting started with AI, no matter what application or industry you’re in, it is important to select tools suited to the type of data and the problems you are tackling. AI frameworks such as TensorFlow, H2O, Caffe and PowerAI are some of them. You will also need advice on the languages and environments your organisation should be using, such as R, MATLAB or Python. Artificial intelligence and machine learning experts can help you select the right tools and deliver a portfolio of powerful machine learning solutions to choose from, with a roadmap of how to get started.

The goal is to become self-sufficient and learn exactly what steps you need to take to be ready to start using AI within your business. If you are already using data science, you should get experts to evaluate whether the algorithms your company is using are really state-of-the-art and the best you could be doing.

Don’t wait around, or you’ll be left on the platform while your competitors speed away on the train. Get expert advice from a company like Brainpool and get started with AI today.



Singapore's first robot masseuse starts work

 Credit: Nanyang Technological University


A robot masseuse has started work in Singapore today. Named Emma, short for Expert Manipulative Massage Automation, it specialises in back and knee massages as it mimics the human palm and thumb to replicate therapeutic massages such as shiatsu and physiotherapy.

Emma started work on her first patients today at the NovaHealth Traditional Chinese Medicine (TCM) clinic, working alongside her human colleagues – a physician and a massage therapist.

Emma 3.0 – the first to go into public service – is a third more compact than the first prototype unveiled last year, offers a wider range of massage programmes and provides a massage that patients describe as almost indistinguishable from that of a professional masseuse.

Emma uses advanced sensors to measure tendon and muscle stiffness, together with Artificial Intelligence and cloud-based computing to calculate the optimal massage and to track a patient's recovery over a course of treatments.

Emma is developed by AiTreat, a technology start-up company incubated at Nanyang Technological University, Singapore (NTU Singapore).

Just two years old, AiTreat has a valuation of SGD$10 million (USD $7.3 million) after it recently completed its seed round funding, supported by venture capitalists from Singapore, China and the United States, including Brain Robotics Capital LP from Boston.

Founder of AiTreat and NovaHealth, Mr Albert Zhang, an alumnus of NTU Singapore who led the development of Emma, said the company's technology aims to address workforce shortages and quality consistency challenges in the healthcare industry.

Using Emma in chronic pain management has the potential to create low-cost treatment alternatives in countries where healthcare costs are high and where aging populations have a growing demand for such treatment.

Mr Zhang said that Emma was designed to deliver a clinically precise massage according to the prescription of a qualified traditional Chinese medicine physician or physiotherapist, without the fatigue faced by a human therapist.

"By using Emma to do the labour intensive massages, we can now offer a longer therapy session for patients while reducing the cost of treatment. The human therapist is then free to focus on other areas such as the neck and limb joints which Emma can't massage at the moment," said Mr Zhang, who graduated from NTU's Double Degree programme in Biomedical Sciences and Chinese Medicine.

In Singapore, a conventional treatment package for lower back pain consisting of a consultation, acupuncture and a 20-minute massage would typically range from SGD$60 to SGD$100 (USD$44 to USD$73).

At NovaHealth TCM clinic, a patient could receive the same consultation and acupuncture, but with a 40-minute massage from Emma and a human therapist, for SGD$68 (USD$50).

Emma is housed in a customised room with two massage beds. Located in between both beds, Emma can massage one patient while the physician provides treatments for the second patient, before switching over.

This arrangement ensures Emma is always working on a patient, maximising the productivity of the clinic. It is estimated that staffing requirements to run a clinic can be reduced from five people to three, as Emma does the job of two masseuses.

How Emma works

Emma has a touch screen and a fully articulated robotic limb with six degrees of freedom. Mounted at the end of the limb are two soft massage tips made from silicone, which can be warmed for comfort.

Emma also has advanced sensors and diagnostic functions which can measure the exact stiffness of a particular muscle or tendon.

The data collected from each patient is then sent to a cloud server, where an Artificial Intelligence (AI) computes the exact pressure to be delivered during the massage procedure.

The AI can also track and analyse the progress of the patient, generating a performance report that will allow a physician to measure a patient's recovery using precise empirical data.

This proprietary cloud intelligence is supported by Microsoft, after Mr Zhang and his teammates won the Microsoft Developer Day Start-up Challenge last year.

Once Emma has been proven to improve the productivity and effectiveness of TCM treatments, Mr Zhang hopes the set-up could become a business model for other clinics to follow in future.

AiTreat is currently incubated at NTUitive, the university's innovation and commercialisation arm.

The start-up is supported by the StartupSG-Tech grant, which funds up to SGD$500,000, as well as SPRING Singapore's ACE start-up grant and the Technology for Enterprise Capability Upgrading (T-Up) grant.

The development of Emma is also on the TAG.PASS accelerator programme by SGInnovate, which will see Mr Zhang tie up with overseas teams to target multiple markets such as the US and China.

Chief Executive Officer of NTU Innovation and NTUitive Dr Lim Jui said harnessing disruptive technologies such as robotics and AI to improve everyday life is what Singapore needs to keep its innovative edge.

"To remain competitive in the global arena, start-ups will need to tap on emerging technologies to create a unique product that can tackle current challenges, similar to what AiTreat has done," Dr Lim explained.

"We are proud to have guided Mr Albert Zhang in his vision to bring affordable healthcare solutions to the market for Singapore, which can alleviate some of the chronic pain problems our elderly face."

The official launch of Emma and the NovaHealth clinic today was attended by fellow entrepreneurs and industry leaders, including Mr Inderjit Singh, Chairman of NTUitive, NTU's innovation and enterprise arm, and a member of NTU Board of Trustees.

Mr Inderjit Singh said, "There is great potential for Emma to be of service to society, especially as the population ages. The massage techniques of experienced and renowned TCM physicians can be reproduced in Emma, giving the public easier access to quality treatment. I look forward to future studies which could improve the efficacy of such massages, using herbal ointments containing modern ingredients that improve wear and tear, such as glucosamine."

Running in parallel to Emma's work schedule is a research project to measure and benchmark Emma's efficacy.


AI innovation will trigger the robotics network effect

Image Credit: Oryx Vision


Anyone who has thought about scaling a business or building a network is familiar with a dynamic referred to as the “network effect.” The more buyers and sellers who use a marketplace like eBay, for example, the more useful it becomes. Well, the data network effect is a dynamic in which increased use of a service actually improves the service, such as how machine-learning models generally grow more accurate as a result of training from larger and larger volumes of data.
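
The data network effect described above can be seen in miniature by training the same model on growing amounts of data. A hedged sketch with scikit-learn on a synthetic dataset (the dataset, the logistic-regression model and the sample sizes are all illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# One fixed synthetic problem; we only vary how much training data the model sees.
X, y = make_classification(n_samples=6000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1000,
                                                    random_state=0)

accuracies = {}
for n in (50, 500, 5000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    accuracies[n] = accuracy_score(y_test, model.predict(X_test))

print(accuracies)  # accuracy on held-out data generally climbs with more training data
```

The same dynamic, at vastly larger scale, is what makes a service that gathers more usage data progressively harder to compete with.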

Autonomous vehicles and other smart robots rely on sensors that generate increasingly massive volumes of highly varied data. This data is used to build better AI models that robots rely on to make real-time decisions and navigate real-world environments.

The confluence of sensors and AI at the heart of today’s smart robots generates a virtuous feedback loop, or what we might call a “robotics network effect.” We are currently on the verge of the tipping point that will create this network effect and transform robotics.

The rapid evolution of AI

To understand why robotics is the next frontier of AI, it helps to step back and understand how AI itself has evolved.

Machine intelligence systems developed in recent years are able to leverage huge amounts of data that simply didn’t exist in the mid-1990s when the internet was still in its infancy. Advances in storage and compute have made it possible to quickly and affordably store and process large amounts of data. But these engineering improvements alone can’t explain the rapid evolution of AI.

Open source machine learning libraries and frameworks have played a quiet but equally essential role. When the scientific computing framework Torch was released 15 years ago under a BSD open source license, it included a number of algorithms still commonly used by data scientists, including deep learning, multi-layer perceptrons, support vector machines, and K-nearest neighbors.
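
The algorithms named above are all available today in mainstream open source libraries. A quick, hedged sketch with scikit-learn (the iris dataset and hyper-parameters are illustrative choices, not anything tied to Torch itself):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature scaling helps the perceptron and the SVM converge.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "multi-layer perceptron": MLPClassifier(max_iter=2000, random_state=0),
    "support vector machine": SVC(),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
print(scores)
```

That a few lines like these now cover decades of algorithm research is precisely the "shared repository of knowledge" the paragraph below describes.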

More recently, open source projects like TensorFlow and PyTorch have made valuable contributions to this shared repository of knowledge, helping software engineers with diverse backgrounds develop new models and applications. Domain experts require a vast amount of data to create and train these models. Large incumbents have a huge advantage because they can leverage existing data network effects.

Sensor data and processing power

Light detection and ranging (lidar) sensors have been around since the early 1960s. They’ve since found application in geomatics, archaeology, forestry, atmospheric studies, defense, and other industries. In recent years, lidars have become the preferred sensors for autonomous navigation.

The lidar sensor on Google’s autonomous vehicles generates 750MB of data per second. The 8 computer vision cameras on board collectively generate another 1.8GB per second. All this data has to be crunched in real time, but centralized compute (in the cloud) simply isn’t fast enough for real-time, high-velocity situations. To solve for this bottleneck, we’re decentralizing compute by pushing processing to the edge or, in the case of robots, on board.
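
To get a feel for the numbers above, the combined sensor stream can be totted up directly; the 750MB/s and 1.8GB/s figures come from the text, and the rest is arithmetic:

```python
# Sensor data rates quoted for Google's autonomous vehicle, in GB per second.
lidar_gbps = 0.75     # 750 MB/s from the lidar
cameras_gbps = 1.8    # 1.8 GB/s from the 8 computer-vision cameras

total_gbps = lidar_gbps + cameras_gbps
per_hour_tb = total_gbps * 3600 / 1000  # terabytes generated per hour

print(f"{total_gbps:.2f} GB/s -> {per_hour_tb:.2f} TB/hour")
# 2.55 GB/s -> 9.18 TB/hour
```

Over nine terabytes an hour from a single vehicle makes it obvious why shipping everything to the cloud for real-time decisions is a non-starter.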

The current solution for most of today’s autonomous vehicles is to use two on-board “boxes,” each of which is equipped with an Intel Xeon E5 CPU and 4 to 8 Nvidia K80 GPU accelerators. At peak performance, this consumes over 5000W in electricity. Recent hardware innovations like Nvidia’s new Drive PX Pegasus, which can compute 320 trillion operations per second, are beginning to more effectively address this bottleneck.

AI on the edge

Our ability to both process sensor data and fuse various modalities of data together will continue to drive the evolution of smart robots. In order for this sensor fusion to happen in real time, we need to put our machine learning and deep learning models on the edge. Of course, decentralized AI compounds the demands on decentralized processors.

Thankfully, machine learning and deep learning compute is becoming much more efficient. Graphcore’s intelligent processing units (IPUs) and Google’s tensor processing units (TPUs), for example, are lowering the cost and accelerating the performance of neural networks at scale.

Elsewhere, IBM is developing neuromorphic chips that mimic brain anatomy. Prototypes use a million neurons, with 256 synapses per neuron. The system is particularly well suited to interpret sensory data because it’s designed to approximate the way the human brain interprets and analyzes perceptual data.

All this data coming from sensors means we are on the verge of a robotics network effect, a shift that will have dramatic implications for AI, robotics, and their various applications.

A new world of data

The robotics network effect will enable new technologies and machines to act not only on larger volumes and velocities of data, but also on expanding varieties of data. New sensors will be able to detect and capture data that we might not even be thinking about, bound as we are by the limited nature of human perception. Machines and smart devices will contribute enriched data back onto the cloud and to neighboring agents, informing decision making, enhancing coordination, and playing a vital role in continuous model improvements.

These advancements are coming more quickly than many realize. Aromyx, for example, uses receptors and advanced machine learning models to build sensor systems and a platform for the digital capture, indexing, and search of scent and taste data. The company’s EssenceChip is a disposable sensor that outputs the same biochemical signals that the human nose or tongue sends to the brain when we smell or taste a food or beverage.

Open Bionics is developing robotic prostheses that rely on haptic data collected from sensors within the arm socket to control hand and finger movements. This non-invasive design leverages machine learning models to translate fine muscle tension sensed by the electrodes into complex motor response in the bionic hands.

Sensor data will be instrumental in pushing the boundaries of AI. AI systems will simultaneously expand our ability to process data and discover creative uses for this data. Among other things, this will inspire new robotic form factors capable of collecting even broader modalities of data. As we advance our ability to “see” in new ways, the everyday world around us is rapidly emerging as the next great frontier of discovery.

Alex Housley is the founder and CEO of Seldon, the machine learning deployment platform that gives data science teams new capabilities around infrastructure, collaboration, and compliance.

Santiago Tenorio is a general partner at Rewired, a robotics-focused venture studio investing in applied science and technologies that advance machine perception.

AI in Retail


AI and machine learning are completely transforming the retail industry. Our purchase journey is becoming shorter and more personalised than ever. We see it happening, but do we understand the technology behind it? Here are a few examples of how it is done.

Predictive Sales

Build self-learning models that predict sales, help increase sales revenue and reduce storage costs, using linear latent variable models (LAVA) and/or elastic nets to estimate the latent factors that highlight customers' purchasing behaviour.
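
Of the two techniques mentioned, the elastic net is the more standard; here is a minimal hedged sketch with scikit-learn on synthetic sales-like data (the feature set, coefficients and figures are invented purely for illustration):

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented features: price, promotion flag, day of week, store footfall.
n = 1000
X = np.column_stack([
    rng.uniform(1, 10, n),      # price
    rng.integers(0, 2, n),      # promotion running?
    rng.integers(0, 7, n),      # day of week
    rng.normal(500, 100, n),    # store footfall
])
# Invented ground-truth relationship plus noise.
sales = 200 - 8 * X[:, 0] + 40 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, sales, random_state=0)

# The elastic net mixes L1 and L2 penalties; l1_ratio balances the two,
# shrinking irrelevant features (here, day of week) toward zero.
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X_train, y_train)
print(round(model.score(X_test, y_test), 3))  # R^2 on held-out data
```

A real sales model would draw these features from transaction history rather than a random generator, but the fit-and-score loop is the same.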

Big Data Analytics and Visualisation

Systematic analysis of big data is crucial when exploring under-performing streams of sales revenue. By deploying a combination of large-scale analytics and data visualisation, we can uncover hidden campaign strategies, such as cross-selling, that will lift poorly performing SKUs.

Supply Chain

Implement statistical models that capture the demand and supply uncertainty inherent to the supply chain process. Perturbing these models treats hidden externalities and generates a robust toolkit for modelling the supply chain. Additional areas where machine learning could help your business include planning problems, optimising stock levels and warehouse automation.

Backtesting Campaign Strategies

Campaigns can be costly if they are not implemented correctly and thoroughly backtested. Finely tuned backtesting models will help build well-constructed, cost-effective campaign strategies, giving management at all levels the details and implications for deployment.
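
A backtest in this sense replays a campaign rule against historical data before committing real budget. A toy, hedged sketch (all figures, the uplift assumption and the cost threshold are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented daily sales history: 80 days without a campaign, 20 with one.
baseline = rng.normal(1000, 50, 80)
campaign = rng.normal(1100, 50, 20)   # assume the promo lifts sales ~10%

def backtest_uplift(before, during, cost_per_day=30.0):
    """Estimate daily uplift and whether it covered the campaign's daily cost."""
    uplift_per_day = during.mean() - before.mean()
    return uplift_per_day, uplift_per_day > cost_per_day

uplift, profitable = backtest_uplift(baseline, campaign)
print(f"uplift per day: {uplift:.1f} units, profitable: {profitable}")
```

A production backtest would also control for seasonality and use significance tests rather than a raw difference of means, but the replay-and-compare structure is the core idea.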

Targeted Campaign and Retail Segmentation

Gain a nuanced view of public opinion and target customers more accurately with multilevel regression and poststratification (MRP). Create retail segmentation with artificial neural networks (ANNs), giving you a better understanding of your customers' shopping habits.
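
Retail segmentation can be sketched with clustering; here k-means stands in for the ANN-based approach mentioned above, and the customer figures are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Invented shopper features: annual spend and visits per month,
# drawn from two distinct behavioural groups.
bargain_hunters = np.column_stack([rng.normal(300, 50, 100),
                                   rng.normal(2, 0.5, 100)])
regulars = np.column_stack([rng.normal(2000, 300, 100),
                            rng.normal(12, 2, 100)])
customers = np.vstack([bargain_hunters, regulars])

# Standardise the features, then cluster into two segments.
X = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Each original group should land almost entirely in one segment.
print(np.bincount(segments[:100]), np.bincount(segments[100:]))
```

With real data the number of segments is unknown in advance and would be chosen with a criterion such as the silhouette score.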

For more information visit

-------------------------------- will be exhibiting at the AI Congress 2018. Get your ticket today!

The Incredible Ways Heineken Uses Big Data, The Internet of Things And Artificial Intelligence (AI)

Photographer: Taylor Weidman/Bloomberg


Every industry can benefit from big data, IoT and AI, and that includes brewers. Dutch brewer Heineken has been a worldwide brewing leader for the last 150 years, and today, as the No. 1 brewer in Europe and No. 2 in the world, it is ramping up its results thanks to big data and AI. As the company sets out to compete better in the formidable U.S. beer market, it plans to leverage the vast amounts of data it collects. It currently sells more than 8.5 million barrels of its various beer brands in the U.S., but it hopes to increase those numbers with data-driven improvements and AI augmentation to its operations, marketing, advertising and customer experience.

Heineken Improves Operations through Data Analytics

From forecasting to optimizing delivery routes, Heineken uses data at every stage of the supply chain. Data informs Heineken’s collaborative planning, forecasting and replenishment processes to eliminate inefficiencies throughout the entire chain. Through data analytics the brewer can adjust production when there is high inventory, long production or replenishment lead-times, and seasonal variances in the demand for its products.

The Internet of Things and Heineken’s Ignite Bottle

The brewer is not letting the potential of the Internet of Things (IoT) pass it by. By 2025, the Internet of Things is expected to generate up to $11.1 trillion a year in economic value, according to a McKinsey Global Institute report. Heineken has already dabbled in the IoT with its Ignite bottle, one of the winning ideas in the company’s annual Future Bottle Design Challenge. These interactive bottles have 50 individual components and sensors, including LED lights, that turn beer bottles into connected devices that respond to the beat of the music in clubs and reflect its rhythms so that “every bottle becomes part of the party.” The lights also flicker when the bottle is tipped back for a drink or “cheers” another bottle, and dim when nobody is touching it. The Ignite bottle certainly contributes to a memorable customer experience when enjoying a Heineken brew.

Data-Driven Marketing

Heineken partnered with Walmart on a pilot program with Shopperception, a company that analyzes the behavior of shoppers in front of the shelves and uses the metrics it gathers to create real-time events to drive more conversions. This program helped them gather a tremendous amount of data on how every six-pack or can of Heineken left the store. The brewer and retailer can assess all the data collected to better understand the customer who is purchasing Heineken, as well as what might be the best location in the store to sell beer, and when.

Heineken also has a strong social media following and created partnerships with Facebook and Google to better understand their customers. Now armed with this insight, Heineken can create personalized and event-driven marketing experiences.

5 reasons businesses are struggling with large-scale AI integration

Artificial intelligence is an important vehicle for companies looking to automate processes, reduce the cost of operation, or fuel innovation. Despite the positive influence AI-supported activities have on business, a successful implementation won’t happen overnight. First you need a complete understanding of your business goals, technology needs, and the impact AI will have on customers and employees. The majority of employees face challenges or concerns relating to AI adoption, and that needs addressing.

The implication of successful AI adoption is far reaching for businesses undertaking full-cycle digital transformation, which places equal emphasis on automation, innovation, and learning. While employees may experience trepidation at the prospect of AI reshaping or eliminating day-to-day tasks, their productivity could actually increase because more of their time can be directed toward activities that produce value-driven business outcomes. No matter the role or the business unit, AI, automation, and machine learning are changing how work is performed.

As AI becomes pervasive, companies must face challenges head-on. Executives will need to consider the following five areas as they progress with digital transformation and move to invest more heavily in AI.

1. Legacy infrastructure

The adage “out with the old and in with the new” rings true for decision makers who are assessing whether their current infrastructures are intelligent enough to support today’s technology. AI-supported activities require ingestion of vast amounts of data; thus, infrastructure must be agile and scalable. Traditional structures like software-defined infrastructures (SDIs) aren’t necessarily the best option. While SDIs provide flexibility, the structure is limited by its fixed source code and the administrator writing the scripts. More sophisticated AI algorithms and intelligence systems require smarter structures like AI-defined infrastructures (ADIs) and cloud-based networks that can quickly expand based on business needs.

Moreover, while neural networks have existed for decades, only now is massive computing power available at a reasonable cost, which in turn has helped increase the number of layers in these networks. Each layer adds more intelligence but also consumes enormous computing power, which used to be prohibitively expensive. More layers mean better outcomes.

2. The skills gap

AI is generating demand for new skill sets in the workplace. However, there is currently a widespread shortage of talent possessing the knowledge and capabilities to properly build, fuel, and maintain these technologies within organizations. The lack of well-trained professionals who can build and direct a company’s AI and digital transformation journeys noticeably hinders progress and continues to be a major hurdle for businesses.

To mitigate this, businesses should look inward and invest in on-the-job training and reskilling. For example, LinkedIn just announced it plans to teach all its engineers the basics of using AI. With the proper staff powering AI, employees are able to focus on other critical activities and boost productivity, creating a large ROI. If an enterprise’s digital transformation goal is for AI to become a business accelerator, it needs to be an amplifier of its people. It’s going to take work to give everyone access to fundamental knowledge and skills in problem-finding and to remove the elitism around advanced technology, but the boost to productivity and ROI will be worth it in the end.

3. Ethical dilemmas

While AI is still in early stages, ethical concerns abound. Both proponents and detractors of AI (Elon Musk most famous among the latter group) have focused on who wins and who loses when AI grows more prominent in business and daily life. A recent study that sought to better understand how AI and automation technologies are driving full-cycle digital transformation in various industry sectors found 62 percent of enterprises felt that a successful transition to AI-powered processes and workflows requires stringent ethical standards.

It’s critical that businesses develop guidelines and rules as adoption takes place. An ethical framework with buy-in from leadership will ensure products and services, processes, and employees are treated appropriately with respect to how AI is adopted, used, and expanded. Having moral standards or systems in place ensures issues such as unemployment, bias, and inequality are carefully scrutinized as AI is added to the corporate environment.

4. Data abundance and availability

AI algorithms cannot properly execute without access to data. The more data available, the more accurate and effective AI will be. As systems evolve and more connections between networks, devices, and processes arise, colossal amounts of structured and unstructured data can be accessed.

Before deploying AI, IT teams and data scientists should collect, clean, and label datasets for machine learning algorithms to ingest to improve AI applications. Filtering through these large amounts of data is no small feat considering 80 percent of organizations’ data is unstructured. The better an organization can clean up its data, the sooner it can improve accuracy and expand use of the data. Over time, AI and machine learning will become smarter about analyzing data and making discoveries quickly that can positively affect businesses’ bottom lines.
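
The collect-clean-label step above can be sketched with pandas; the column names, records and labelling rule here are illustrative assumptions, not any particular organization's pipeline:

```python
import pandas as pd

# A messy, invented snippet of raw transaction records.
raw = pd.DataFrame({
    "amount": [19.99, None, 5.00, 250.00, 5.00],
    "country": ["UK", "UK", None, "US", "US"],
})

clean = (raw
         .drop_duplicates()                      # remove exact repeats
         .dropna(subset=["amount"])              # drop rows missing the key field
         .assign(country=lambda d: d["country"].fillna("unknown")))

# A simple labelling rule a domain expert might define for the algorithm:
clean["high_value"] = clean["amount"] > 100

print(clean)
```

Real cleaning pipelines run rules like these over millions of rows and track what each rule dropped, since every discarded record is training signal lost.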

5. Budget concerns

Deploying AI effectively takes a vast amount of time, resources, and budget. While AI cuts costs in the long run, it typically requires significant investment at the start. Enterprises are investing millions of dollars, and smaller companies are investing substantial sums ranging from tens of thousands to hundreds of thousands. Running extensive projects with unstructured data alone could cost your organization up to $500,000, so the investment is significant at any scale.

Businesses that haven’t yet allocated budget for AI should start small by manually auditing the organization to streamline processes and free up employees’ bandwidth. This allows decision makers to clearly see which systems aren’t utilized effectively and which areas could benefit from technology down the road.

The future of business requires artificial intelligence. But AI is also the future of innovation. AI needs its human creators to succeed in order for the technology to become more useful. While some have already adopted AI applications, others are still lagging, which is understandable considering the challenges businesses face during this process. However, once these barriers are overcome, enterprises will finally see how AI can drastically revolutionize businesses, improve processes, and increase employee productivity at scale in the coming years.

Mohit Joshi is president and head of banking, financial services and insurance, and health care and life sciences at Infosys, a multinational corporation that provides business consulting, information technology, and outsourcing services.



The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Learn how your competitors are implementing and monetising AI. Register today for a ticket at an early bird rate!

New surgical robots are about to enter the operating theatre

Cambridge Medical Robotics


ROBOTS have been giving surgeons a helping hand for years. In 2016 there were about 4,000 of them scattered around the world’s hospitals, and they took part in 750,000 operations. Most of those procedures were on prostate glands and uteruses. But robots also helped surgeons operate on kidneys, colons, hearts and other organs. Almost all of these machines were, however, the products of a single company. Intuitive Surgical, of Sunnyvale, California, has dominated the surgical-robot market since its device, da Vinci, was cleared for use by the American Food and Drug Administration in 2000.

That, though, is likely to change soon, for two reasons. One is that the continual miniaturisation of electronics means that smarter circuits can be fitted into smaller and more versatile robotic arms than those possessed by Intuitive’s invention. This expands the range of procedures surgical robots can be involved in, and thus the size of the market. The other is that surgical robotics is, as it were, about to go generic. Many of Intuitive’s patents have recently expired. Others are about to do so. As a result, both hopeful startups and established health-care companies are planning to enter their own machines into the field.

Though the word “robot” suggests a machine that can do its work automatically, both da Vinci and its putative competitors are controlled by human surgeons. They are ways of helping a surgeon wield his instruments more precisely than if he were holding them directly. Da Vinci itself has four arms, three of which carry tiny surgical instruments and one of which sports a camera. The surgeon controls these with a console fitted with joysticks and pedals, with the system filtering out any tremors and accidental movements made by its operator. That, combined with the fact that the system uses keyhole surgery (whereby instruments enter the patient’s body through small holes instead of large cuts, making procedures less invasive), reduces risks and speeds up recovery. But at more than $2m for the equipment, plus up to $170,000 a year for maintenance, da Vinci is expensive. If a new generation of surgical robots can make things cheaper, then the benefits of robot-assisted surgery will spread.

Arms and the man

This summer Cambridge Medical Robotics (CMR), a British company, unveiled Versius, a robot that it hopes to start selling next year (a picture of the machine can be seen above). Unlike da Vinci, in which the arms are all attached to a single cart, Versius sports a set of independent arms, each with its own base. These arms are small and light enough to be moved around an operating table as a surgeon pleases, or from one operating theatre to another as the demands of a hospital dictate. This way, the hospital need not dedicate a specific theatre to robotic surgery, and the number of arms can be tailored to the procedure at hand.

Unlike a da Vinci arm, which is like that of an industrial robot, a Versius arm is built like a human one. It has three joints, corresponding to the shoulder, the elbow and the wrist. This means, according to Martin Frost, CMR’s chief executive, that surgeons will be able to use angles and movements they are already familiar with, instead of having to learn a robot-friendly version of a procedure from scratch. The company has yet to decide how much the arms will cost, but Mr Frost expects that operations which employ Versius will work out to be only a few hundred dollars more expensive than those conducted by humans alone. With da Vinci, the difference can amount to thousands.

Versius will compete with da Vinci on its own turf—abdominal and thoracic surgery. Others, though, want to expand robotics into new areas. Medical Microinstruments (MMI), based near Pisa, in Italy, has recently shown off a robot intended for reconstructive microsurgery, a delicate process in which a surgeon repairs damaged blood vessels and nerves while looking through a microscope. This robot allows the surgeon to control a pair of miniature robotic wrists, 3mm across, that have surgical instruments at their tips.

MMI’s device does away with the control console. Instead, the surgeon sits next to the patient and manipulates the instruments with a pair of joysticks that capture his movements and scale them down appropriately. That means he can move as if the vessels really were as big as they appear through the microscope.

Such a robot could even be used for operating on babies. “In their case,” observes Giuseppe Prisco, MMI’s boss, “even ordinary procedures are microsurgery.” The company is now doing preclinical tests. Dr Prisco reckons the market for robotic microsurgery to be worth $2.5bn a year.

A third new firm hoping to build a surgical robot is Auris Robotics. This is the brainchild of Frederic Moll, one of the founders of Intuitive Surgical (though he left more than ten years ago). Auris remains silent about when its robots will reach the market, but the firm’s patent applications give some clues as to what they might look like when they do. Auris appears to be developing a system of flexible arms with cameras and surgical instruments attached, which could enter a patient’s body through his mouth.

That tallies with an announcement the firm made earlier this year, saying that the robot will first be used to remove lung tumours. Lung cancer is the world’s deadliest sort, killing 1.7m people a year. What makes it so deadly, Auris notes, is that it is rarely stopped early. Opening someone’s thorax and removing parts of his lung is risky and traumatic. It is not always worthwhile if the tumour is still small, because small tumours do not necessarily grow big. If they do, though, they are usually lethal if left in situ—but much harder to remove than when they were small. Auris’s design could ease this dilemma by passing surgical instruments from the mouth into the trachea and thence to the precise point inside the affected lung where they are needed, in order to cut away only as much tissue as required.

Auris, CMR and MMI are all startups. But two giants of the medical industry are also joining the quest to build a better surgical robot. One is Medtronic, the world’s largest maker of medical equipment. The other is Johnson & Johnson, which has teamed up with Google’s life-science division, Verily, to form a joint venture called Verb Surgical.

Like Auris, Medtronic is keeping quiet about the design of its robot. But it has said that it plans to begin using it on patients in 2018. As with Auris, though, some information can be deduced from other sources. In particular, Medtronic has licensed MIRO, a robot developed by Germany’s space agency for the remote control of mechanical arms in space. MIRO is made of lightweight, independent arms. These, presumably, could be fixed directly onto the operating table.

A robot based on MIRO would let surgeons rely on touch as well as sight, since MIRO’s instruments are equipped with force sensors that relay feedback to the joysticks used to operate them, and thus to the operator’s hands. The lack of such haptic feedback (the ability to feel the softness of tissues, and the resistance they offer to the surgeon’s movements) has long been a criticism of da Vinci. Surgeons often rely on touch, for example, to distinguish healthy from tumorous tissue.

Verb Surgical was formed in 2015 and demonstrated its latest prototype to investors earlier this year. Scott Huennekens, the firm’s boss, says the machine will be particularly suitable for gynaecological, urological, abdominal and thoracic surgery. 

Robot, teach thyself

Verb wants not just to build surgical machines, but to get its robots to learn from one another. The firm plans to connect all the machines it sells to the internet. Each bot will record data about, and videos of, every procedure it performs. These will be fed to machine-learning algorithms for analysis, to tease out what works best.

Mr Huennekens compares this to the way Google’s driverless-car division collects data on its vehicles’ journeys in order to improve their performance. A couple of years after its launch, and after processing enough images, the system could start helping surgeons to tell sick tissue from healthy, to decide where nerves and blood vessels are, and to plan procedures accordingly. Later, when the algorithms have swallowed many more years’ worth of data, the robots may be able to help surgeons make complex decisions such as how to deal with unexpected situations, what the best way is to position the robotic arms, and where and how to cut.

As for Intuitive, it, too, has noticed the size of the lung-cancer market. In collaboration with Fosun Pharma, a Chinese firm, it has announced a new system for taking biopsies of early-stage lung cancers in order to determine how threatening they are. It has also announced the launch of the da Vinci X, a lower-cost version of its workhorse. Robots may already be in many theatres, but a bigger part awaits.


The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today for a ticket at an early bird rate!

This futurist isn't scared of AI stealing your job. Here's why

REUTERS/Kim Kyung-Hoon

You know a topic is trending when the likes of Tesla’s Elon Musk and Facebook’s Mark Zuckerberg publicly bicker about its potential risks and rewards. In this case, Musk says he fears artificial intelligence will lead to World War III because nations will compete for A.I. superiority. Zuckerberg, meanwhile, has called such doomsday scenarios “irresponsible” and says he is optimistic about A.I.

But another tech visionary sees the future as more nuanced. Ray Kurzweil, an author and director of engineering at Google, thinks, in the long run, that A.I. will do far more good than harm. Despite some potential downsides, he welcomes the day that computers surpass human intelligence—a tipping point otherwise known as “the singularity.” That’s partly why, in 2008, he cofounded the aptly named Singularity University, an institute that focuses on world-changing technologies. We caught up with the longtime futurist to get his take on the A.I. debate and, well, to ask what the future holds for us all.

Fortune: Has the rate of change in technology been in line with your predictions?

Kurzweil: Many futurists borrow from the imagination of science-fiction writers, but they don’t have a really good methodology for predicting when things will happen. Early on, I realized that timing is important to everything, from stock investing to romance—you’ve got to be in the right place at the right time. And so I started studying technology trends. If you Google how my predictions have fared, you’ll get a 150-page paper analyzing 147 predictions I made about the year 2009, which I wrote in the late ’90s—86% were correct, 78% were exactly to the year.

What’s one prediction that didn’t come to fruition?

That we’d have self-driving cars by 2009. It’s not completely wrong. There actually were some self-driving cars back then, but they were very experimental.

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

He’s not technology.

Have you tried to build models for predicting politics or world events?

The power and influence of governments is decreasing because of the tremendous power of social networks and economic trends. There’s some problem in the pension funds in Spain, and the whole world feels it. I think these kinds of trends affect us much more than the decisions made in Washington and other capitals. That’s not to say they’re not important, but they actually have no impact on the basic trends I’m talking about. Things that happened in the 20th century like World War I, World War II, the Cold War, and the Great Depression had no effect on these very smooth trajectories for technology.

What do you think about the current debate about artificial intelligence? Elon Musk has said it poses an existential threat to humanity.

Technology has always been a double-edged sword, since fire kept us warm but burned down our houses. It’s very clear that overall human life has gotten better, although technology amplifies both our creative and destructive impulses. A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation: It’s very important for your survival to be sensitive to bad news. A little rustling in the leaves may be a predator, and you better pay attention to that. All of these technologies are a risk. And the powerful ones—biotechnology, nanotechnology, and A.I.—are potentially existential risks. I think if you look at history, though, we’re being helped more than we’re being hurt.

How will artificial intelligence and other technologies impact jobs?

We have already eliminated all jobs several times in human history. How many jobs circa 1900 exist today? If I were a prescient futurist in 1900, I would say, “Okay, 38% of you work on farms; 25% of you work in factories. That’s two-thirds of the population. I predict that by the year 2015, that will be 2% on farms and 9% in factories.” And everybody would go, “Oh, my God, we’re going to be out of work.” I would say, “Well, don’t worry, for every job we eliminate, we’re going to create more jobs at the top of the skill ladder.” And people would say, “What new jobs?” And I’d say, “Well, I don’t know. We haven’t invented them yet.”

That continues to be the case, and it creates a difficult political issue because you can look at people driving cars and trucks, and you can be pretty confident those jobs will go away. And you can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.



Are chatbots ready to rule the (customer service) world?

Chatbots Ready to Rule.jpg

With so many conflicting opinions and predictions, it’s hard to tell what the real state of chatbots is, even though they are all the talk around customer service, experience and care these days. Service providers are trying to figure out how they can best leverage the bot opportunity.

The bot race

Tech research firm Forrester recently conducted a survey on behalf of Amdocs, polling 7,200 consumers worldwide, and 31 decision makers from Tier-1 service providers. It reported that 86 percent of consumers regularly engage with bots, while 65 percent of service providers are investing in artificial intelligence (AI) to create and deliver better customer experiences and in chatbot infrastructure and capabilities.

AI research firm TechEmergence notes that chatbots are expected to be the top consumer application for AI over the next five years, while tech analyst company Gartner explains the anticipated acceleration of adoption:

“Chatbots are entering the market rapidly and have broad appeal for users due to efficiency and ease of interaction.”

As such, the firm issues a powerful call to action:

“Customer interactions are moving to conversational interfaces, so marketers need to have a bot strategy if they want to be part of that future.”

In a Forbes article, Shep Hyken, who specializes in customer experience, supports these predictions and suggests that the benefits of chatbots to the enterprise include the fact that they:

  • are available 24/7 – making ‘working hours’ irrelevant;
  • don’t make customers wait for an answer, dispensing with hold times;
  • personalize customers’ experience by delivering only relevant information; and
  • make friends with customers and build the brand.

Chatbot challenges

However, as most consumers know by now – this isn’t always the case:

  • If the chatbot is available but doesn’t understand the inquiry, availability is irrelevant – how many times have you heard or read “I’m sorry, I didn’t get that”?
  • If you don’t have to wait for chatbots, but they cannot process your request or handle more complex engagements and you have to transfer to a live agent, then hold times are back in play (and you’ve had the added frustration of an extra step in the process).
  • If they can connect customer data to the engagement but haven’t got a 360-degree view of the customer, the response might be neither contextual nor timely (again, causing friction).
  • How can they act as the customer’s ‘friend’ if they don’t really understand the request and so cannot sufficiently and effectively fulfill the customer’s needs? Or if they can’t engage in a way that’s naturally conversational, intuitive and personalized?

So, while BusinessInsider believes that a chatbot’s usefulness is limited only by “creativity and imagination,” we know there’s more to it than that. Namely, service providers that want to leverage chatbots and reap the promised rewards of taking customer experience to new heights while decreasing costs will need to:

  • Ensure that personalization does not rely solely on CRM data, but is based on a 360-degree customer view which includes behavior history, channel preference and journey patterns.
  • Ensure the right balance between virtual and live agents, seamlessly transferring the engagement to a live agent as needed, in a way that is transparent to the customer.
  • Make sure that the chatbot understands intents specific to telecoms so they can more accurately address the needs of service providers’ customers.
  • Integrate chatbots with the relevant business systems to make all the required data readily available.
  • Make sure chatbots can turn every care engagement into a commerce opportunity, by presenting the most relevant and timely marketing offer to customers.
  • Optimize each engagement by learning from past interactions.

Accordingly, a successful chatbot strategy should seek to ensure that a bot:

  • uses intelligence and machine learning;
  • is designed for communications and media industries;
  • understands telco-specific intents; and
  • is fully integrated with core back-end systems.
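The “understands telco-specific intents” requirement above can be sketched as a small text classifier. This is a toy illustration, not any vendor’s implementation: the example utterances, intent labels and model choice (TF-IDF features with logistic regression via scikit-learn) are all assumptions made for the sketch.

```python
# Toy intent classifier: map a customer utterance to a telco intent.
# Utterances and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    "my bill is higher than last month", "why was I charged twice",
    "I have no signal at home", "my internet keeps dropping",
    "I want to upgrade my data plan", "what roaming bundles do you offer",
]
intents = ["billing", "billing", "network", "network", "sales", "sales"]

# TF-IDF turns each utterance into word-weight features;
# logistic regression learns which words signal which intent.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(training_utterances, intents)

print(classifier.predict(["I was charged too much on my bill"]))
```

A production bot would train on thousands of labelled utterances per intent and add a confidence threshold, so that low-confidence queries are handed off to a live agent rather than guessed at.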

Learn more about how to achieve each of these critical capabilities in TM Forum’s recently published Quick Insights report, How an Intelligent Chatbot Can Revolutionize the Virtual Agent Experience (page 26).



The Usefulness—and Possible Dangers—of Machine Learning

University of Pennsylvania workshop addresses potential biases in the predictive technique.

Stephen Hawking once warned that advances in artificial intelligence might eventually “spell the end of the human race.” And yet decision-makers from financial corporations to government agencies have begun to embrace machine learning’s enhanced power to predict—a power that commentators say “will transform how we live, work, and think.”

During the first of a series of seven Optimizing Government workshops held at the University of Pennsylvania Law School last year, Aaron Roth, Associate Professor of Computer and Information Science at the University of Pennsylvania, demystified machine learning, breaking down its functionality, its possibilities and limitations, and its potential for unfair outcomes.

Machine learning, in short, enables users to predict outcomes using past data sets, Roth said. These data-driven algorithms are beginning to take on formerly human-performed tasks, like deciding whom to hire, determining whether an applicant should receive a loan, and identifying potential criminal activity.

In large part, machine learning does not differ from statistics, said Roth. But unlike statistics, which aims to create models for past data, machine learning aims to make accurate predictions on new examples.

This eye toward the future requires simplicity. Given a set of past, or “training,” data, a decision-maker can always create a complex rule that predicts a label—say, likelihood of paying back a loan—given a set of features, like education and employment. But a lender does not seek to predict whether a past loan applicant included in a dataset actually paid back a loan given her education and employment, but instead whether a new applicant will likely pay back a loan, explained Roth.

A simple rule might not be perfect, but it will provide more accuracy in the long run, said Roth, because it will more effectively generalize a narrow set of data to the population at large. Roth noted that for more complex rules, algorithms must use bigger data sets to combat generalization errors.
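Roth’s point about simple rules generalising better can be illustrated with a toy experiment. The synthetic “loan” data and the choice of decision trees (via scikit-learn) are assumptions made purely for this sketch:

```python
# Toy comparison of a complex rule vs. a simple rule on synthetic loan data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
# Two illustrative features, standing in for education and employment.
X = rng.normal(size=(n, 2))
# True repayment follows a simple linear rule plus noise.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained (complex) rule memorises the training data...
complex_rule = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# ...while a shallow (simple) rule is forced to generalise.
simple_rule = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

print("complex rule: train", complex_rule.score(X_train, y_train),
      "test", complex_rule.score(X_test, y_test))
print("simple rule:  train", simple_rule.score(X_train, y_train),
      "test", simple_rule.score(X_test, y_test))
```

The complex rule fits the training data perfectly yet typically does worse on held-out examples than the simple rule, matching Roth’s observation that more complex rules demand more data to keep generalization error in check.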

Because machine-learning algorithms work to optimize decision-making, using code and data sets that can be held up to public scrutiny, decision-makers might think machine learning is unbiased. But discrimination can arise in several non-obvious ways, argued Roth.

First, data can encode existing biases. For example, an algorithm that uses training data to predict whether someone will commit a crime should know whether the people represented in the data set actually committed crimes. But that information is not available—rather, an observer can know only whether the people were arrested, and police propensity to arrest certain groups of people might well create bias.

Second, an algorithm created using insufficient amounts of training data can cause a so-called feedback loop that creates unfair results, even if the creator did not mean to encode bias. Roth explained that a lender can observe whether a loan was paid back only if it was granted in the first place. If training data incorrectly show that a group with a certain feature is less likely to pay back a loan, because the lender did not collect enough data, then the lender might continue to deny those people loans to maximize earnings. The lender would never know that the group is actually credit-worthy, because the lender would never be able to observe the rejected group’s loan repayment behavior.
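The feedback loop Roth describes can be made concrete with a toy simulation. Everything here (the repayment rates, the lending threshold and the update rule) is invented for illustration:

```python
# Toy selective-labels feedback loop: the lender only observes repayment
# for loans it actually grants, so an early error about one group is
# never corrected. All numbers are invented for illustration.
import random

random.seed(0)

# Both groups actually repay at the same 80% rate.
TRUE_REPAY_RATE = {"A": 0.8, "B": 0.8}
# But sparse early data left the lender believing group B rarely repays.
believed_rate = {"A": 0.8, "B": 0.2}

for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Lend only when the believed repayment rate clears a profit threshold.
    if believed_rate[group] > 0.5:
        repaid = random.random() < TRUE_REPAY_RATE[group]
        # Update beliefs from the observed outcome (simple running average).
        believed_rate[group] += 0.01 * (repaid - believed_rate[group])
    # If the loan is denied, no outcome is observed and no update happens.

print(believed_rate)
```

Group B’s believed rate stays frozen at 0.2 forever: because it never clears the lending threshold, no loans are made, no repayments are observed, and the mistaken belief is never corrected, while group A’s estimate stays close to the true 0.8.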

Third, different populations might have different characteristics that require separate models. To demonstrate his point, Roth laid out a scenario where SAT scores reliably indicate whether a person will repay a loan, but a wealthy population employs SAT tutors, while a poor population does not. If the wealthy population then has uniformly higher SAT scores, without being on the whole more loan-worthy than the poor population, then the two populations would need separate rules. A broad rule would preclude otherwise worthy members of the poor population from receiving loans. The result of separate rules is both greater fairness and increased accuracy—but if the law precludes algorithms from considering race, for example, and the disparity is racial, then the rule would disadvantage the non-tutored minority.
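A small numeric sketch shows why one pooled rule underperforms in the SAT scenario. The scores, repayment labels and candidate thresholds below are invented for illustration:

```python
# Toy version of the SAT example: tutored (wealthy) applicants' scores are
# uniformly shifted up, with no difference in actual creditworthiness.
# All numbers are invented for illustration.
wealthy = [(1400, True), (1350, True), (1250, False), (1200, False)]
poor    = [(1200, True), (1150, True), (1050, False), (1000, False)]

def accuracy(data, threshold):
    """Predict 'will repay' when score >= threshold; return accuracy."""
    return sum((score >= threshold) == repaid for score, repaid in data) / len(data)

# A single pooled threshold must split the difference between the groups.
pooled = wealthy + poor
best_pooled = max(range(900, 1500, 50), key=lambda t: accuracy(pooled, t))

# Per-group thresholds adapt to each group's shifted score distribution.
print("pooled:", accuracy(pooled, best_pooled))                       # 0.75
print("separate:", (accuracy(wealthy, 1300) + accuracy(poor, 1100)) / 2)  # 1.0
```

Even the best single threshold misclassifies some applicants in each group, while group-specific thresholds classify everyone correctly, which is exactly the fairness-versus-legal-constraint tension Roth raises.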

Finally, by definition, fewer data exist about groups that are underrepresented in the data set. Thus, even though separate rules can benefit underrepresented populations, such rules create new problems, argued Roth. Because the training data used by machine learning will include fewer points, generalization error can be higher than it is for more common groups, and the algorithm can misclassify underrepresented populations with greater frequency—or in the loan context, deny qualified applicants and approve unqualified applicants at a higher rate.

Roth’s presentation was followed by commentary offered by Richard Berk, the Chair of the Department of Criminology. Berk explained that algorithms are unconstrained by design, which optimizes accuracy, but argued that the lack of constraint might be what gives some critics of artificial intelligence some pause. When decision-makers cede control of algorithms, they lose the ability to control the assembly of information, and algorithms might invent variables from components that alone have, for example, no racial content, but when put together, do.

Berk stated that mitigating fairness concerns often comes at the expense of accuracy, leaving policymakers with a dilemma. Before an algorithm can even be designed, a human must make a decision as to how much accuracy should be sacrificed in the name of fairness.

Roth stated that this tradeoff causes squeamishness among policymakers—not because such tradeoffs are new, but because machine learning is often more quantitative, and therefore makes tradeoffs more visible than with human decision-making. A judge, for example, might make an opaque tradeoff by handing down more guilty verdicts, thereby convicting more guilty people at the expense of punishing the innocent. But that tradeoff is not currently measurable. Both Roth and Berk expressed hope that machine learning’s effect of forcing more open conversations about these tradeoffs will lead to better, more consistent decisions.

Penn Law Professor Cary Coglianese, director of the Penn Program on Regulation, introduced and moderated the workshop. Support for the series came from the Fels Policy Research Initiative at the University of Pennsylvania.



Machine Learning in Marketing

The world of marketing is being transformed at such a fast pace it’s getting hard for marketers to follow the newest tech developments that are being introduced every day. The amount of data digital marketing is creating is so big that a lot of media agencies are actually experiencing an information overload, and don’t know how to make use of it. Data becomes a problem where it should be bringing value to a business. Here are a few ways in which machine learning techniques can help.

Customer Segmentation:

Improve customer segments and targeted advertising using machine learning segmentation methods (e.g. cluster analysis, k-means, nearest neighbour). Classify customers using supervised learning models, find new audiences using recommendation systems, and increase the efficiency of your media spend.
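As one concrete sketch of the segmentation methods mentioned above, here is k-means clustering with scikit-learn on invented customer data; the features, numbers and cluster count are illustrative assumptions, not a recipe:

```python
# Toy k-means segmentation: split synthetic customers into two segments.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic customers: columns are monthly spend and visits per month.
low_value = rng.normal([20, 2], [5, 1], size=(50, 2))
high_value = rng.normal([200, 12], [30, 3], size=(50, 2))
customers = np.vstack([low_value, high_value])

# Ask k-means for two segments; an explicit n_init keeps behaviour stable
# across library versions.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(model.cluster_centers_)            # one centre per segment
print(model.predict([[25, 3], [180, 10]]))  # two new customers, two segments
```

In practice the features would come from CRM and transaction data, and the number of clusters would be chosen with diagnostics such as the elbow method or silhouette scores rather than fixed in advance.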

Behavioural Analysis:

Find patterns in the way customers interact with your brand by using predictive modelling and forecasting. Optimise conversions, increase customer satisfaction.

Social Media - Early Opportunity Detection:

Analyse real-time Twitter and Facebook data streams to capture current sentiments with respect to brands, products or adverts. Get a head start on sentiment outbreaks to uncover important opportunities and avoid PR crises.
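A deliberately minimal sketch of the sentiment idea above, using a hand-written word list rather than a trained model; real systems use far richer NLP, and the word lists and posts here are invented:

```python
# Toy lexicon-based sentiment scoring for a stream of social posts.
POSITIVE = {"love", "great", "fast", "amazing", "happy"}
NEGATIVE = {"hate", "slow", "broken", "outage", "angry"}

def sentiment(post: str) -> int:
    """Score a post: each positive word counts +1, each negative word -1."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

stream = [
    "love the new advert",
    "another outage today so angry",
    "service has been great and fast",
]
scores = [sentiment(p) for p in stream]
print(scores)  # [1, -2, 2]
```

A sudden run of negative scores on a brand’s stream is the kind of “sentiment outbreak” signal the passage above describes, flagged early enough to act on.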

Sales and Marketing Integration:

Build an easy-to-navigate interface to measure the integrated impact of sales teams and marketing campaigns. Track the correlation between media budgets and the number of products sold.

Influencers Strategy:

Improve the efficiency of your campaigns using social network analysis. Find and target the audience most susceptible to your message, and use that audience to amplify the impact of your campaigns.

Implicit Survey Design:

Improve accuracy of surveys by using established psychological tools and test instruments, such as gamification. Learn about your audience to make informed decisions.

Neuroscience (Beta):

Optimise website usability and impact using eye-tracking methods. Investigate consumers’ perception of adverts using brain imaging (fMRI) analysis.

For more information about the above methods, or if you’re interested in the different ways in which AI and ML can be used in marketing, visit


Brainpool will be exhibiting at the AI Congress 2018. To meet with them and other leading experts, sign up for your ticket today!


Retail Revolutionized: Three ways to profit from artificial intelligence

Jill Standish

Whether we’re receiving coupons based on our spending, or product suggestions based on other people’s spending, artificial intelligence (AI) is transforming how consumers shop and experience brands. For retailers, meanwhile, AI could increase profits by almost 60%. It could be a game-changer in this labor-intensive sector, augmenting the workforce and enabling employees to become more productive.

Some retailers already recognize ways for AI to complement their human workforce and boost profits. Stitch Fix is a clothing retailer that combines the expertise of fashion stylists with algorithms that analyze unstructured consumer data to deliver hand-picked items based on each customer’s preferences. Another forward-thinking fashion company is Original Stitch, which deploys AI to analyze customers’ photographs of their favorite shirts before custom-tailoring and delivering a brand-new piece of clothing.

Yet some retailers are hesitant about AI, and unsure how they can keep up to speed with the technology – let alone make the most of it. We have identified three ways for these retailers to revolutionize the retail experience using AI.

1. Understand the consumer

AI allows companies to find out more about how customers behave and what they want, giving them confidence that they are stocking the right products, targeting them at the right consumers, and building the right loyalty programs. 

The data they gather from their Web and mobile channels already enables online retailers to develop more detailed and accurate customer profiles. But this sort of insight does not have to be exclusively Web-based: physical retailers could use AI technology to learn about customer activity as they walk around stores. Which displays do customers linger over? Which products do they take off the shelves but then decide not to buy? This sort of data will tell retailers when, where and how to nudge customers toward purchases, and give them the insights they need to improve the customer experience. 

2. Guide them to what they want – and don’t know they want

Similarly, retailers can use AI to make it easier for customers to find what they are looking for – and, crucially, help them find things they don’t yet know they want. 

This is especially valuable for the largest online brands, with their vast range of products. Consumers who feel overwhelmed by the sheer quantity of items will go elsewhere, so retailers that can guide customers in the right direction have a serious competitive advantage. And it is the online retailers that were first to recognize the value of nudging customers toward further purchases by using machine learning to anticipate their needs.

Used sensitively, AI makes customers feel that retailers understand what they want. Progressive retailers are already using AI to provide more sophisticated online recommendations, but they are also looking into tailoring the homepage to each user so they are presented with the items they desire most. 

Consumers already know that the adverts they see online are personalized to them; Google uses AI to tailor its search results for individual users; and some online retailers use structured data to adapt what they show customers according to what they have searched for in the past. What is stopping retailers from customizing each person’s experience of the entire site? 

3. Knock their socks off

Online shopping impresses customers with its ease and efficiency. As AI makes online shopping easier, customers are less likely to go to stores for commodity products such as laundry detergent. But as far as providing memorable experiences goes, physical stores have the upper hand. So, this is the time to start exploring how to use AI to dazzle customers. 

Grocery retailer Coop Italia is a great example. Customers can simply wave a hand over a box of grapes to see nutritional and provenance information on a raised monitor. It also uses “vertical shelving”: touch applications that enable customers to search for other products and find out about related products, promotions, and even waste-disposal.  At some Neiman Marcus department stores, meanwhile, customers can try out a “memory mirror” – a virtual dressing room to compare outfits, see them from 360 degrees and share video clips with friends. 

With so many of us consulting our phones while we shop – to read reviews and research product information – it is only a matter of time before retailers answer these queries on the shop floor, using bots. AI lets them carry out multidimensional conversations with customers through text-based chats, spoken conversations, gestures and even virtual reality. 

This is not hype. AI advances have already given some retailers increased customer loyalty and higher profits. Now retailers have the opportunity to boost their profits further by using AI alongside the human workforce – producing even greater efficiencies, and truly revolutionizing the in-store experience. 

Machine learning: what does the industry want next?


In this guest post, we hear technology insights and tips from Mariano Albera, VP Technology at Expedia Affiliate Network.

Machine learning is more popular in the travel industry now than ever. There’s a simple explanation for that fact: machine learning is more powerful now than ever before.

The appeal of machine learning – essentially a form of artificial intelligence (AI) whereby computers learn without being explicitly programmed with new information – is clear. At exceptional speed, for example, complex algorithms can identify subtle but important data patterns that humans could never have spotted. In ‘learning’ from that information, the ‘machine’ can predict patterns ahead, and then act on that knowledge to maximise future business. In a sense, then, machine learning is a modern and highly sophisticated technological application of a long-established notion – study the past to predict the future.

Machine learning is a modern and highly sophisticated technological application of a long-established notion – study the past to predict the future
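That notion can be made concrete with a deliberately tiny sketch: fit a least-squares line to past monthly booking counts and extrapolate one month ahead. The data, and the idea that a straight line is a sensible model, are invented purely for illustration.

```python
# Minimal "study the past to predict the future": fit a least-squares line
# to past monthly booking counts and extrapolate the next month.

def fit_line(ys):
    """Ordinary least squares for y = a*x + b over x = 0, 1, ..., n-1."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys)) / \
        sum((x - mean_x) ** 2 for x in range(n))
    b = mean_y - a * mean_x
    return a, b

past_bookings = [100, 110, 120, 130]          # invented data
a, b = fit_line(past_bookings)
print(round(a * len(past_bookings) + b))      # 140
```

Real systems use far richer models than a straight line, but the shape of the exercise – estimate parameters from history, then project forward – is the same.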

The practical applications of machine learning, and other forms of AI such as data mining, are many and varied in the travel industry.

The rise of the chatbot

‘Chatbots’ are particularly visible examples of machine learning at work. As the name suggests, chatbots are essentially machines – messenger apps – with which customers seem to have conversations. Armed with the knowledge of the customer’s past bookings, the chatbot can offer targeted recommendations highly likely to be converted into sales. Critically, the chatbot keeps learning from each booking the customer makes, so recommendations become more relevant with every new ‘chat’ and customer interaction. That’s a huge benefit in an industry as personalised as travel. Effectively, the machine learns how to close the deal without human help.

In many ways chatbots are already better than humans. They:

  • Provide low-cost 24/7 customer support.

  • Deliver real-time message translation, so you’re not on the phone at two o’clock in the morning trying to find an English-speaking sales assistant in Tokyo.

  • Are much faster than waiting for a call centre to answer the phone. If you want information about train times, good theatre and the weather in New York, for example, a machine will source and deliver that information to you more rapidly than even the most well-informed human.

Admittedly, chatbots cannot always answer complex questions but their sophistication is constantly improving. By definition, the machines keep learning.

Icelandair, Lufthansa and Austrian Airlines are three carriers to have seen the potential of machine learning and introduced chatbots.

Practical planning, time saving

Machine learning also helps in areas such as planning optimal flight routes. Assessing the millions of flight options on, say, a long-haul, round-trip journey, complex algorithms can learn from past booking data to filter those possibilities down to the small number of most practical or appealing options…all in just seconds.

Another application for machine learning is in addressing the problem of duplicate listings. Online travel agents, for example, gathering data from multiple sources, face issues of misspelling, punctuation and differing word orders that have historically caused problems for computers. Now, however, machines can analyse data and work out for themselves that ‘Delta Air Line’ is actually the same as ‘Delta Airlines’. No more staff time wasted de-duping and no more frustrated customers seeing two listings for exactly the same flight.
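A minimal sketch of how such de-duplication might work, using Python’s standard-library `difflib` rather than any travel company’s actual system: normalise the listing names, then flag pairs whose similarity exceeds a threshold.

```python
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    """Lower-case and drop punctuation and spaces so trivial differences vanish."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def likely_duplicates(a: str, b: str, threshold: float = 0.85) -> bool:
    """Flag two listings as probable duplicates when their normalised
    names are sufficiently similar."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

print(likely_duplicates("Delta Air Line", "Delta Airlines"))    # True
print(likely_duplicates("Delta Airlines", "United Airlines"))   # False
```

Production matchers typically add phonetic encodings, token reordering and learned thresholds, but the core idea – normalise, compare, threshold – is the one described above.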

How EAN is benefiting

As at many travel companies, machine learning is increasingly critical to how we do business at Expedia Affiliate Network (EAN), where we use hundreds of hotel features to rank hotels for our travel partners by relevance to an individual consumer’s preferences.

Like chatbots, we learn from every interaction. Let’s say, for example, that a traveller always selects hotels with high-quality gyms but never shows interest in swimming pools. By monitoring each of his selected and rejected options and bookings, our machines learn that fact without being explicitly programmed with those details. So, when the traveller next books, for example, a flight into Atlanta with a partner airline, he is instantly shown a range of suitable local hotels, prioritising gyms over pools, thus maximising the likelihood of a conversion to sale.
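As a rough illustration of that kind of preference learning (not EAN’s actual ranker; the feature names and the update rule are invented for this sketch), one can keep a weight per hotel feature, nudged up when a traveller books a hotel with that feature and down when they pass one over:

```python
from collections import defaultdict

weights = defaultdict(float)   # learned preference score per hotel feature

def record_choice(chosen, rejected, lr=1.0):
    """Update weights from one booking: features of the chosen hotel go up,
    features of each rejected alternative go down."""
    for feature in chosen:
        weights[feature] += lr
    for hotel in rejected:
        for feature in hotel:
            weights[feature] -= lr / max(len(rejected), 1)

def rank(hotels):
    """Order (name, features) candidates by the traveller's learned preferences."""
    return sorted(hotels, key=lambda h: sum(weights[f] for f in h[1]), reverse=True)

# The traveller repeatedly picks hotels with gyms over hotels with pools.
for _ in range(3):
    record_choice({"gym"}, [{"pool"}, {"pool", "spa"}])

candidates = [("Poolside Inn", {"pool"}), ("Gym Hotel", {"gym"})]
print([name for name, _ in rank(candidates)])   # ['Gym Hotel', 'Poolside Inn']
```

A real system would learn from millions of travellers and far richer signals, but the principle is the same: every selected or rejected option adjusts the model, with no explicit programming of the traveller’s tastes.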

Lessons learnt

At EAN, we are working on using a type of machine learning called ‘deep learning’ to rank and sort hotel images. What we’ve learnt is that the very first thing people glance at within a hotel listing, before considering the hotel name or price, is the image. In fact, it takes us around one twentieth of a second to process an image, so the quality and relevance of the images, and the order in which they are displayed to travellers, is crucial.

More on this in the white paper titled Does Deep Learning Hold Answers?

In the past, we relied on a manual process to select the featured image for a listing, while the other images were randomly ordered or grouped. EAN has over 300,000 properties and over 10 million images, so, as you can imagine, ranking and sorting the images for these manually is difficult. Enter AI to do this automatically.

Looking forward

Our aim is to automatically order and sort the images not just according to image quality, but also to traveller types, customer preferences and seasonality, so that the images most likely to encourage a booking are displayed to each individual consumer.

The speed at which data can be processed, analysed and actioned, is already exceptional, and is improving daily

The good news is that machine learning is advancing fast. The speed at which data can be processed, analysed and actioned, is already exceptional, and is improving daily. Across many industries, not just travel, I’d expect to see machine learning move from niche applications to mission-critical processes.

As is so often the case in issues of computing, the limitations are as much human as technological. Almost every part of the digital user experience can be improved with AI. We all need to think creatively about how machine learning can enhance our activities.

What do we, as an industry, want to do next?

This is a guest post from Mariano Albera, VP Technology, Expedia Affiliate Network


To find out more about how the Travel Industry is adopting AI, check out The AI Congress. The leading international artificial intelligence show, it takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today!

Can robots coexist with humans?

Working together: will robots be central to our future?

With advances in artificial intelligence forging ahead, it’s time to think seriously about how we see robots fitting into our society.

Of all the tech trends dominating headlines at the moment, artificial intelligence (AI) seems to be generating the most debate.

As we continue to develop this technology – at a seemingly exponential rate – we face increasing pressure to examine the role we really want it to fulfil, and how it should be integrated.

With many high-profile figures – from Stephen Hawking to Elon Musk – warning of the potential pitfalls, it may take time before society fully accepts the idea of ubiquitous AI, or “robot workers”. But in reality, there is a strong argument that, far from capping innovation in AI, we should find ways to put it to use in order to stay ahead.

Prudential’s global head of AI, Dr Michael Natusch, says machine learning in a business context grew out of a need for up-to-the-minute analytical tools. “What really drew people’s attention across a wide range of industries to machine learning is the ability to extract insight out of large, multi-structured data sets,” he says. “Drawing understanding and insights in an automated and continuous way – that’s what we really mean by AI.

“The way businesses can use AI is exactly the same way that businesses can use human intelligence. It enables us to make decisions, to understand what’s happening, and do things faster, better and cheaper.”

The labour debate

Of course, this drive to lower costs and save time may mean changes to some job roles we know today. According to a recent PwC report, around 30pc of existing UK jobs face automation over the next 15 years – with manual roles in areas such as manufacturing, transport and retail likely to be most affected.

But PwC’s chief economist John Hawksworth believes this could be a positive move. “Automating more repetitive tasks will eliminate some existing jobs, but could also enable workers to focus on higher value, more rewarding and creative work, removing the monotony from our day jobs,” he explains.

And that’s not to mention the boost in productivity this will bring: “Advances in robotics and AI should also create additional jobs in less automatable parts of the economy as this extra wealth is spent or invested.

“The UK employment rate is at its highest level now since comparable records began in 1971, despite all the advances in digital and other labour-saving technologies we have seen since.”

And as Dr Natusch notes, this widespread concern over AI may be blinding people to its strengths. “Very often, it sounds like AI is in competition with humans, but the real power will come from humans and AI augmenting each other,” he says. “It’s this symbiosis of humans and AI that will drive major advances across a wide range of industries.”

Real robot workers

Of course, cultural factors are just as important as economic ones – especially in sectors such as retail, where robots could prove useful in customer service roles. Hitachi is one company exploring the potential for AI in this area.

Their “symbiotic” robot, named EMIEW3, is designed for customer service, using a cloud-connected “brain” and surveillance cameras to spot people in need of help, communicate with them and offer assistance. Having already been trialled at Tokyo’s busy Haneda Airport, EMIEW3 arrived in the UK for the first time this year.

“These trials are helping to give us a first-hand sense of people’s attitudes to robots, and to see if they find them genuinely helpful,” explains Dr Rachel Jones, senior strategy designer for Hitachi Europe. “It’s also leading to some interesting learnings about the interactions between humans and robots.”

Interestingly, Dr Jones’ team has already noticed contrasts in the way different demographics respond to this new technology. “For example, people in Japan are much more open to innovative technologies, and therefore the introduction of robots is generally embraced more positively,” she explains.

“In Europe and the UK, the reception of robots appears to be more cautious. This raises broader questions about the future of society, including where we want to go with new technologies and how we see robots fitting in.”

But even while we work through the creases from a logistical perspective, Dr Natusch remains positive. “I think AI will enable us to provide services to people that we are not remotely able to do today,” he says. “And that by itself will bring wealth and employment across nations.”

And it’s clear that businesses, in particular, cannot ignore this trend. “I think it’s absolutely imperative to get started with AI today, because ultimately that is what will ensure survival of your organisation,” says Dr Natusch. “Rather than thinking long and hard about your AI strategy, the key thing is to get started with something now.”

Innovations for the future

Modern life is saturated with data, and new technologies are emerging nearly every day – but how can we use these innovations to make a real difference to the world?

Hitachi believes that Social Innovation should underpin everything they do, so they can find ways to tackle the biggest issues we face today.

Visit to learn how Social Innovation is helping Hitachi drive change across the globe.



Local Digital Development agency opens its doors to support innovation.

FIN Digital & ISDM Solutions announced a new initiative named Smart Business Spaces as part of their interactive solution offerings. The two firms have joined together to launch an IoT Playground in their loft-style office space. Located two blocks from the White House, the initiative will support organizations that are seeking opportunities for innovation.

FIN & ISDM will offer dedicated programming to local executives designed to encourage an open exchange of ideas and promote technology. The IoT Playground will host workshops, events, industry-focused trainings, Internet of Things (IoT) demonstrations and Q&A sessions.
"Throughout our time, we've seen that leaders rarely have a space for real conversations about what it means to implement high-tech solutions," said FIN CIO Rakia Finley. "With the support from the DC community we’re excited to change that."

"This initiative will give business leaders a safe space to ask tough questions about the ins and outs of technology. We believe, by doing this, we're supporting D.C.'s vision to create a more diverse and inclusive city that supports the tech economy," said FIN CEO Marcus Finley.
Leaders will get insight on utilizing technologies including audio visual, video, mobile development, smart devices, VR, bots, beacons, and web applications to generate custom solutions for their organization or industry. The programming aims to foster innovation and help organizations turn ideas into reality.

“We’re excited about the power of tech and we want to see local job creators just as excited. It’s our belief that by creating this space for them to come and ask questions they will be empowered to get innovative,” said Stephen Milner, of ISDM Solutions.

The initiative will take place in the joint office of FIN Digital & ISDM Solutions for the remainder of the year with a launch event on September 14th for D.C. leaders.


Healthcare and AI

Healthcare is one of the main industries being transformed by AI. The range of applications of Artificial Intelligence and Machine Learning in healthcare is so broad, it’s hard to think of an area that won’t be transformed over the coming years. A lot of these applications can help save lives, so it’s research that is definitely worth investing in. Here are some examples.

Healthcare bots:

Customer service can be improved with specialized chatbots that interact with patients through chat windows. These bots can automate the scheduling of follow-up appointments, minimise human error by ensuring patients are directed to the appropriate healthcare department, and reduce waiting times.

Disease Identification/Diagnosis:

It is now possible to build state-of-the-art classification algorithms for diagnosing patients from mere mobile phone photos. Rare diseases can be identified with learning algorithms such as functional-gradient boosting (FGB) applied to self-reported behavioural data, allowing people with rare chronic illnesses to be distinguished from those with more common ones.

Personalized Treatment:

Supervised learning allows physicians to select from more limited sets of diagnoses. An example of this is the estimation of patient risk factors relative to symptoms and genetic information. Such models can be calibrated and trained on data from micro-biosensors and mobile phone applications, giving more sophisticated health data to assess treatment efficacy, reduce treatment costs and optimise individual patient health.

Drug Discovery:

Machine learning in early-stage drug discovery can be used to estimate the success rate of initial screening of drug compounds relative to biological factors. The application of methods such as the k-nearest-neighbour algorithm to precision medicine has identified mechanisms in multi-factor diseases, and created alternative treatments and therapies.
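The k-nearest-neighbour idea can be shown in miniature: classify a new compound by majority vote among the most similar labelled examples. The compound data, features and labels below are entirely invented for illustration.

```python
from math import dist
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    neighbours = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Invented toy data: two biological factors per compound, labelled by
# whether similar compounds passed initial screening.
train = [
    ((0.9, 0.8), "pass"), ((0.8, 0.9), "pass"), ((0.7, 0.7), "pass"),
    ((0.2, 0.1), "fail"), ((0.1, 0.3), "fail"), ((0.3, 0.2), "fail"),
]
print(knn_predict(train, (0.75, 0.85)))   # pass
print(knn_predict(train, (0.15, 0.20)))   # fail
```

Real drug-discovery pipelines work with thousands of molecular descriptors rather than two, but the nearest-neighbour principle scales to those spaces unchanged.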

Clinical Trial Research:

Machine learning can help select and identify ideal candidates for clinical trials by sampling from a broader range of data sources that are currently underutilised, such as social media and the number of doctor visits. It can also improve the safety of trial participants by monitoring their health remotely in real time.

Epidemic Outbreak Prediction:

The monitoring and prediction of epidemic outbreaks has been performed successfully by machine learning technologies for a number of years now. By collecting vast amounts of data from satellites, historical healthcare databases and social media, one can train support vector machines and deep neural networks to predict potential outbreaks of diseases such as malaria and Ebola.

If you’re particularly interested in finding out more about any of the above, visit


Brainpool will be exhibiting at the AI Congress 2018. To meet with them and other leading experts, sign up for your ticket today!

Nigel - the robot that could tell you how to vote

Source: KIMERA


The creators of a new artificial intelligence programme hope it could one day save democracy. Are we ready for robots to take over politics?

"Siri, who should I vote for?"

"That's a very personal decision."

Apple's "personal assistant", Siri, doesn't do politics. It has stock, non-committal answers for anything that sounds remotely controversial. Not unlike some politicians in fact.

But the next generation of digital helpers, powered by advances in artificial intelligence (AI), might not be so reticent.

One piece of software being developed by a company in Portland, Oregon, aims to be able to offer advice on every aspect of its users' lives - including which way to vote.

"We want you to trust Nigel, we want Nigel to know who you are and serve you in everyday life," says Nigel's creator Mounir Shita.

"It (Nigel) tries to figure out your goals and what reality looks like to you and is constantly assimilating paths to the future to reach your goals.

"It's constantly trying to push you in the right direction."

Shita's company, Kimera Systems, claims to have cracked the secret of "artificial general intelligence" - independent thinking - something that has eluded AI researchers for the past 60 years.

Instead of learning how to perform specific tasks, like most current AI, Nigel will roam free and unsupervised around its users' electronic devices, programming itself as it goes.

"Hopefully eventually it will gain enough knowledge to be able to assist you in political discussions and elections," says Shita.

Nigel has been met with a certain amount of scepticism in the tech world.

Its achievements have been limited so far - it has learned to switch smartphones to silent mode in cinemas without being asked, from observing its users' behaviour.

But Shita believes his algorithm will have the edge on the other AI-enhanced digital assistants being developed by bigger Silicon Valley players - and he has already taken legal advice on the potential pitfalls of a career in politics for Nigel.

"Our goal, with Nigel, is by this time next year to have Nigel read and write at a grade school level. We are still way off participating in politics, but we are going there," he says.

AI is already part of the political world - with ever more sophisticated algorithms being used to target voters at election time.

Teams of researchers are also competing to produce an algorithm that will halt the spread of "fake news".

Mounir Shita argues that this will be good for democracy, making it infinitely harder for slippery politicians to pull the wool over voters' eyes.

"It's going to be a lot harder to brainwash an AI that has access to a lot of information and can tell a potential voter what the politician said is a lie or is unlikely to be true."

What makes him think anyone would listen to a robot?

Voters are increasingly turning their back on identikit "machine politicians" in favour of all-too-human mavericks, like the most famous Nigel in British politics - Farage - and his friend Donald Trump.

How could AI Nigel - which was named after Mounir Shita's late business partner Nigel Deighton rather than the former UKIP leader - compete with that?

Because, says Shita, you will have learned to trust Nigel - and it will be more in tune with your emotions than a political leader you have only seen on television.

Nigel - robot Nigel, that is - could even have helped voters in the UK make a more informed decision about Brexit, he claims, although it would not necessarily have changed the outcome of the referendum.

"The whole purpose of Nigel is to figure out who you are, what your views are and adopt them.

"He might push you to change your views, if things don't add up in the Nigel algorithm.

"Let me go to the extreme here, if you are a racist, Nigel will become a racist. If you are a left-leaning liberal, Nigel will become a left-leaning liberal.

“There is no one Nigel. Everyone has their own Nigel, and each of those Nigels’ purpose is to adapt to your views. There is no political conspiracy behind this.”

Ian Goldin, professor of globalisation and development at the University of Oxford, also believes AI could have a role to play in debunking political spin and lies.

But he fears politicians have yet to wake up to what it will mean for the future of society or, indeed, their own jobs.

In his book, Age of Discovery: Navigating the Risks and Rewards of Our New Renaissance, Goldin and co-author Chris Kutarna seek a middle ground between apocalyptic visions of humans controlled by robots and the techno-utopian dreams of Silicon Valley's elite.

He tells BBC News: "I think the threats posed by technology are rising as rapidly as the benefits and one hopes that somewhere, in some secret place, people are worrying about it.

"But the politicians certainly aren't talking about it."

Instead of thinking about machine-learning as some distant piece of science fiction, they should "join the dots" to see how it is already changing the political and social landscape, he argues.

He points to a research paper by the Oxford Martin Programme on Technology and Employment, which suggested that Donald Trump owes his US election victory to voters who have had their jobs taken away from them by automation.

"In the machine-learning world innovation happens more rapidly, so the pace of change accelerates," says Goldin.

"That means two things - people get left behind more quickly, so inequality grows more rapidly, and the second thing it means is that you have to renew everything quicker - fibre optics, infrastructure, energy systems, housing stock, mobility and flexibility."

He adds: "They (politicians) are going to have to form a view on whether they throw sand in the wheels. What are they going to do with the workers who are laid off?"

AI evangelists like Mounir Shita have a simple answer to this. And it does not involve throwing sand in the wheels of technology - they see meddling politicians as the enemy and Elon Musk, creator of the Tesla electric car, who has warned about the catastrophic consequences for humanity of unregulated AI, as misguided, at best.

Shita is relaxed about a world where machines do all the work: "I am not envisioning people sitting on their couch eating potato chips, gaining weight, because they have nothing to do. I envision people free from labour and can pursue whatever interests or hobbies they have."

Ian Goldin takes a less rosy view of an AI-enhanced future.

Rather than indulging in hobbies or world travel, those made idle by machines are more likely to be drinking themselves to death or attempting suicide, if recent research into the so-called "diseases of despair" among poorly educated members of the white working class in America is anything to go by, he says.

In the end, it all comes down to two competing views of human nature and whether we want Nigel or something like it in our lives.

  • British politicians, on a House of Lords committee, are set to investigate the economic, ethical and social implications of artificial intelligence over the coming months.




Facebook and Google need humans, not just algorithms, to filter out hate speech

(Reuters/Navesh Chitrakar)


Facebook and Google give advertisers the ability to target users by their specific interests. That’s what has made those companies the giants that they are. Advertisers on Facebook can target people who work for a certain company or had a particular major in college, for example, and advertisers on Google can target anyone who searches a given phrase.

But what happens when users list their field of study as “Jew hater,” or list their employer as the “Nazi Party,” or search for “black people ruin neighborhoods?”

All of those were options Facebook and Google suggested to advertisers as interests they could target in their ad campaigns, according to recent reports by ProPublica and BuzzFeed. Both companies have now removed the offensive phrases that the news outlets uncovered, and said they’ll work to ensure their ad platforms no longer offer such suggestions.

That, however, is a tall technical order. How will either company develop a system that can filter out offensive phrases? It would be impossible for humans to manually sift through and flag all of the hateful content people enter into the websites every day, and there’s no algorithm that can detect offensive language with 100% accuracy; the technology has not yet progressed to that point. The fields of machine learning and natural language processing have made leaps and bounds in recent years, but it remains incredibly difficult for a computer to recognize whether a given phrase contains hate speech.

“It’s a pretty big technical challenge to actually have machine learning and natural language processing be able to do that kind of filtering automatically,” said William Hamilton, a PhD candidate at Stanford University, who specializes in using machine learning to analyze social systems. “The difficulty in trying to know, ‘is this hate speech?’ is that we actually need to imbue our algorithms with a lot of knowledge about history, knowledge about social context, knowledge about culture.”

A programmer can tell a computer that certain words or word combinations are offensive, but there are too many possible permutations of word combinations that amount to an offensive phrase to pre-determine them all. Machine learning allows programmers to feed hundreds or thousands of offensive phrases into computers to give them a sense of what to look for, but the computers are still missing the requisite context to know for sure whether a given phrase is hateful.

“You don’t want to have people targeting ads to something like ‘Jew hater,'” Hamilton said. “But at the same time, if somebody had something in their profile like, ‘Proud Jew, haters gonna hate,’ that may be OK. Probably not hate speech, certainly. But that has the word ‘hate,’ and ‘haters,’ and the word ‘Jew.’ And, really, in order to understand one of those is hate speech and one of those isn’t, we need to be able to deal with understanding the compositionality of those sentences.”
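Hamilton’s point can be seen in a few lines. A naive word-level filter (purely illustrative, not how Facebook or Google actually work) flags both phrases, because it has no notion of how the words compose:

```python
# Purely illustrative keyword filter: flags any phrase containing a blocked
# word, with no understanding of sentence composition or context.
BLOCKLIST = {"hate", "hater", "haters"}

def naive_flag(phrase: str) -> bool:
    """Return True if any word in the phrase, after stripping punctuation,
    appears in the blocklist."""
    words = {word.strip(".,!?'\"").lower() for word in phrase.split()}
    return not BLOCKLIST.isdisjoint(words)

print(naive_flag("Jew hater"))                     # True: correctly flagged
print(naive_flag("Proud Jew, haters gonna hate"))  # True: a false positive
print(naive_flag("Great hotel"))                   # False
```

Distinguishing the first two phrases requires modelling the meaning of the whole sentence, which is exactly the compositionality problem Hamilton describes.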

And the technology, Hamilton said, is simply “not quite there yet.”

The solution will likely require a combination of machines and humans, where the machines flag phrases that appear to be offensive, and humans decide whether those phrases amount to hate speech, and whether the interests they represent are appropriate targets for advertisers. Humans can then feed that information back to the machines, to make the machines better at identifying offensive language.

Google already uses that kind of approach to monitor the content its customers’ ads run next to. It employs temp workers to evaluate websites that display ads served by its network, according to a recent article in Wired, and to rate the nature of their content. Most of those workers were asked to focus primarily on YouTube videos starting last March, when advertisers including Verizon and Walmart pulled their ads from the platform after learning some had been shown in videos that promoted racism and terrorism.

The workers now spend most of their time looking for and flagging those kinds of videos to make sure ads don’t end up on them, according to Wired. Once they’ve identified offensive materials in videos and their associated content, they feed the details to a machine-learning system, and the system can in turn learn to identify such content on its own. It’s not an easy job, however, and some of the temp workers Wired interviewed said they can barely keep up with the amount of content they’re typically tasked with checking.

Google’s chief business officer, Philipp Schindler, echoed that sentiment in an interview with Bloomberg News in April, and cited it as a reason he believed the company should cut humans out of the equation altogether.

“The problem cannot be solved by humans and it shouldn’t be solved by humans,” he said.

Until machines can learn the difference between “Jew hater” and “Proud Jew, haters gonna hate,” though, the problem of identifying and flagging hate speech can only be solved by humans–with smart machines assisting them. And there have to be enough of those humans to make a meaningful impact on the amount of content users of Facebook and Google type into the services every day. It may be far cheaper to throw algorithms and overworked temps at the problem than it would be to hire vast armies of full-time workers, but it’s likely far less effective as well.

Facebook and Google have not yet determined exactly what approach they’ll take to keep offensive targeting options off of their ad platforms. Facebook is still assessing the situation, but is considering limiting which user profile fields advertisers can target, according to Facebook spokesperson Joe Osborne.

“Our teams are considering things like limiting the total number of fields available or adding more reviews of fields before they show up in ads creation,” Osborne said in an email to Quartz. (Ads creation is the area of Facebook where advertisers can customize their ads.)

Google said in a statement that its ad-targeting system already identifies some hate speech, and rejects certain ads altogether, but that the company will continue to work on the problem.

“Our goal is to prevent our keyword suggestions tool from making offensive suggestions, and to stop any offensive ads appearing. We have language that informs advertisers when their ads are offensive and therefore rejected. In this instance, ads didn’t run against the vast majority of these keywords, but we didn’t catch all these offensive suggestions. That’s not good enough and we’re not making excuses. We’ve already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again,” the company said.


The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today!

Robot that can solve the Rubik’s cube and thread a needle conducts Italian orchestra in world first

Tobias Schwarz | Getty Images

  • ABB's dual-armed YuMi robot becomes the first to conduct an orchestra.
  • The robot performed in Pisa Tuesday evening as part of Italy's 'First International Festival of Robotics'.
  • YuMi performed alongside Italian tenor Andrea Bocelli and the Lucca Philharmonic Orchestra.

Italy, a country steeped in ancient tradition, has taken a stride forward in the twenty-first century race towards automation, becoming the first country to showcase a robot-conducted orchestra.

YuMi, a dual-armed robot designed by ABB, accompanied Italian tenor Andrea Bocelli and conducted the Lucca Philharmonic Orchestra at a gala event in Pisa's Teatro Verdi Tuesday evening.

The performance was a world first by a robotic conductor and celebrated Italy's 'First International Festival of Robotics', which kicked off Friday.

YuMi conducted three pieces, including Bocelli's rendition of 'La donna è mobile' from Verdi's Rigoletto and a solo by Maria Luigia Borsi from Puccini's Gianni Schicchi.

The robot was trained by Italian conductor Andrea Colombini. Writing in a blog post ahead of the performance, Colombini described the process as "satisfying, albeit challenging": it consisted first of programming the robot by demonstrating the movements, and then of fine-tuning to synchronize the robot's gestures with the music.

"The gestural nuances of a conductor have been fully reproduced at a level that was previously unthinkable to me. This is an incredible step forward, given the rigidity of gestures by previous robots," Colombini wrote.

Tuesday's performance marks the latest milestone for Swiss-Swedish robotics firm ABB, which first unveiled YuMi in April 2015.

Described as a "collaborative" robot, it is designed to work alongside humans and complement the workforce. It has already demonstrated its ability to solve a Rubik's cube and thread a needle.

However, such advances have faced criticism amid concerns that progress in robotics could outpace new job creation and put jobs at risk.

Colombini insisted that YuMi would not do away with the need for humans to inject "spirit" and "soul" into orchestral performances.

"I imagine the robot could serve as an aid, perhaps to execute, in the absence of a conductor, the first rehearsal, before the director steps in to make the adjustments that result in the material and artistic interpretation of a work of music," he added in his post.





Face-reading AI will be able to detect your politics and IQ, professor says

Professor whose study suggested technology can detect whether a person is gay or straight says programs will soon reveal traits such as criminal predisposition

Your photo could soon reveal your political views, says a Stanford professor. Photograph: Frank Baron for the Guardian

Voters have a right to keep their political beliefs private. But according to some researchers, it won’t be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.

Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.

Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.

Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.

“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.

Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn how to distinguish key traits with a high rate of accuracy. In Kosinski’s “gaydar” study, an algorithm trained on online dating photos correctly identified sexual orientation 91% of the time for men and 83% for women, just by reviewing a handful of photos.

Kosinski’s research is highly controversial, and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.

Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.

This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.

Kosinski said previous studies have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.

Kosinski said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. “A high conservative score … would be a very reliable prediction that this guy is conservative.”



Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump’s campaign, sparking debate about the use of personal voter information in campaigns.

Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”

Some of Kosinski’s suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.

He cited school counselors intervening when they observe children who appear to exhibit aggressive behavior. If algorithms could be used to accurately predict which students need help and early support, that could be beneficial, he said. “The technologies sound very dangerous and scary on the surface, but if used properly or ethically, they can really improve our existence.”

There are, however, growing concerns that AI and facial recognition technologies are actually relying on biased data and algorithms and could cause great harm. It is particularly alarming in the context of criminal justice, where machines could make decisions about people’s lives – such as the length of a prison sentence or whether to release someone on bail – based on biased data from a court and policing system that is racially prejudiced at every step.

Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect if that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not translate to criminal actions: “Even people highly disposed to committing a crime are very unlikely to commit a crime.”

He also cited an example referenced in the Economist – which first reported the sexual orientation study – that nightclubs and sport stadiums could face pressure to scan people’s faces before they enter to detect possible threats of violence.

Kosinski noted that in some ways, this wasn’t much different from human security guards making subjective decisions about people they deem too dangerous-looking to enter.

The law generally considers people’s faces to be “public information”, said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone’s face to produce new information rises to the level of privacy invasion.

Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: “Now, you’re putting people’s lives at risk.”

Even when an AI makes highly accurate predictions, some share of those predictions will still be wrong.

“You’re going down a very slippery slope,” said Keenan, “if one in 20 or one in a hundred times … you’re going to be dead wrong.”
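Keenan's arithmetic compounds quickly at scale. A back-of-the-envelope calculation (all numbers here are illustrative, not from any study) shows that when the predicted trait is rare, even a 95%-accurate classifier mislabels far more people than it correctly identifies:

```python
# Illustrative base-rate arithmetic: a classifier that is right 95% of
# the time, applied to a rare trait across a large population.
# Every number below is hypothetical.

population = 1_000_000   # people scanned
base_rate = 0.01         # 1% actually have the predicted trait
sensitivity = 0.95       # share of true cases correctly flagged
specificity = 0.95       # share of non-cases correctly cleared

have = population * base_rate        # ~10,000 people with the trait
lack = population - have             # ~990,000 without it

true_pos = have * sensitivity        # ~9,500 correctly flagged
false_pos = lack * (1 - specificity) # ~49,500 wrongly flagged

precision = true_pos / (true_pos + false_pos)
print(f"flagged: {true_pos + false_pos:,.0f}")
print(f"wrongly flagged: {false_pos:,.0f}")
print(f"chance a flagged person actually has the trait: {precision:.0%}")
```

Under these assumed rates, roughly five of every six people flagged would not actually have the trait, which is exactly the "dead wrong" scenario Keenan describes.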
