NASA Explores Artificial Intelligence for Space Communications

Credits: NASA

NASA spacecraft typically rely on human-controlled radio systems to communicate with Earth. As the collection of space data increases, NASA is looking to cognitive radio, the infusion of artificial intelligence into space communications networks, to meet demand and increase efficiency.

“Modern space communications systems use complex software to support science and exploration missions,” said Janette C. Briones, principal investigator in the cognitive communication project at NASA’s Glenn Research Center in Cleveland, Ohio. “By applying artificial intelligence and machine learning, satellites control these systems seamlessly, making real-time decisions without awaiting instruction.”

To understand cognitive radio, it’s easiest to start with ground-based applications. In the U.S., the Federal Communications Commission (FCC) allocates portions of the electromagnetic spectrum used for communications to various users. For example, the FCC allocates spectrum to cell service, satellite radio, Bluetooth, Wi-Fi, etc. Imagine the spectrum divided into a limited number of taps connected to a water main.

What happens when no faucets are left? How could a device access the electromagnetic spectrum when all the taps are taken?

Software-defined radios like cognitive radio use artificial intelligence to employ underutilized portions of the electromagnetic spectrum without human intervention. These “white spaces” are currently unused, but already licensed, segments of the spectrum. The FCC permits a cognitive radio to use the frequency while unused by its primary user until the user becomes active again.

In terms of our metaphorical watering hole, cognitive radio draws on water that would otherwise be wasted. The cognitive radio can use many “faucets,” no matter the frequency of that “faucet.” When a licensed device stops using its frequency, cognitive radio draws from that customer’s “faucet” until the primary user needs it again. Cognitive radio switches from one white space to another, using electromagnetic spigots as they become available.
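In code terms, the sense-and-hop behavior described above reduces to a simple loop: sense which licensed channels are idle, use one, and vacate the moment the primary user returns. A minimal Python sketch, with random numbers standing in for real spectrum sensing and all channel frequencies invented for illustration:

```python
import random

CHANNELS = [5150, 5180, 5210, 5240]  # hypothetical channel frequencies, MHz

def sense_idle_channels(channels, rng):
    """Simulated spectrum sensing: return the channels whose licensed
    (primary) user is currently idle. A real radio would measure energy
    or decode beacons rather than flip coins."""
    return [ch for ch in channels if rng.random() < 0.5]

def pick_white_space(channels, rng, current=None):
    """Keep the current channel while it stays free; otherwise hop to
    another idle channel, or back off entirely if none is available."""
    idle = sense_idle_channels(channels, rng)
    if current in idle:
        return current
    return idle[0] if idle else None

rng = random.Random(7)
channel = pick_white_space(CHANNELS, rng)
for _ in range(3):  # periodically re-sense, hopping or yielding as needed
    channel = pick_white_space(CHANNELS, rng, current=channel)
```

The key property of the loop is that the secondary user never claims a spigot the primary user is drawing from: every re-sense either confirms the current channel, hops, or backs off.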

“The recent development of cognitive technologies is a new thrust in the architecture of communications systems,” said Briones. “We envision these technologies will make our communications networks more efficient and resilient for missions exploring the depths of space. By integrating artificial intelligence and cognitive radios into our networks, we will increase the efficiency, autonomy and reliability of space communications systems.”

For NASA, the space environment presents unique challenges that cognitive radio could mitigate. Space weather, electromagnetic radiation emitted by the sun and other celestial bodies, fills space with noise that can interrupt certain frequencies.

“Glenn Research Center is experimenting in creating cognitive radio applications capable of identifying and adapting to space weather,” said Rigoberto Roche, a NASA cognitive engine development lead at Glenn. “They would transmit outside the range of the interference or cancel distortions within the range using machine learning.” 

In the future, a NASA cognitive radio could even learn to shut itself down temporarily to mitigate radiation damage during severe space weather events. Adaptive radio software could circumvent the harmful effects of space weather, increasing science and exploration data returns.

A cognitive radio network could also suggest alternate data paths to the ground. These processes could prioritize and route data through multiple paths simultaneously to avoid interference. The cognitive radio’s artificial intelligence could also allocate ground station downlinks just hours in advance, as opposed to weeks, leading to more efficient scheduling.

Additionally, cognitive radio may make communications network operations more efficient by decreasing the need for human intervention. An intelligent radio could adapt to new electromagnetic landscapes without human help and predict common operational settings for different environments, automating time-consuming processes previously handled by humans.

The Space Communications and Navigation (SCaN) Testbed aboard the International Space Station provides engineers and researchers with tools to test cognitive radio in the space environment. The testbed houses three software-defined radios in addition to a variety of antennas and apparatus that can be configured from the ground or other spacecraft.

“The testbed keeps us honest about the environment in orbit,” said Dave Chelmins, project manager for the SCaN Testbed and cognitive communications at Glenn. “While it can be simulated on the ground, there is an element of unpredictability to space. The testbed provides this environment, a setting that requires the resiliency of technology advancements like cognitive radio.”

Chelmins, Roche and Briones are just a few of the many NASA engineers adapting cognitive radio technologies to space. As with most terrestrial technologies, cognitive techniques can be more challenging to implement in space due to orbital mechanics, the electromagnetic environment and interactions with legacy instruments. In spite of these challenges, integrating machine learning into existing space communications infrastructure will increase the efficiency, autonomy and reliability of these systems.

The SCaN program office at NASA Headquarters in Washington provides strategic and programmatic oversight for communications infrastructure and development. Its research provides critical improvements in connectivity from spacecraft to ground.

For more information about SCaN, visit:

What These 5 Women Are Doing to Solve Tech’s Diversity Problem

Getty Images

From gender-neutral AI to coding

At a time when diversity remains a front-burner issue within the tech industry, this year’s Consumer Electronics Show—the tech world’s largest conference—is surprisingly lacking in, well, diversity. While, in the past, the agenda-setting conference has showcased powerhouse solo women keynoters such as IBM CEO Ginni Rometty, General Motors CEO Mary Barra and former Yahoo CEO Marissa Mayer, this year, CES has chosen, for instance, to present a trio of women executives from A+E Networks, MediaLink and 605, sharing the stage alongside five male execs in a keynote panel.

Not surprisingly, CES’ male-dominated lineup has been widely slammed, with a number of CMOs and other marketing executives publicly criticizing the organization.

CES’ gender imbalance is emblematic of the broader gender inequity issues currently roiling tech. According to Girls Who Code, last year 30,000 men graduated with computer science degrees, compared to 7,000 women. Once they graduate, the statistics are grim. According to Crunchbase, the share of companies with at least one female founder rose to 9 percent between 2009 and 2012—but that number hasn’t budged in five years. The funding picture isn’t much better. According to the Harvard Business Review, among venture capital-backed tech startups, just 9 percent of the entrepreneurs are women.

Not content with the status quo, a number of women in tech are taking the lead to tip the gender scales, creating opportunities for women while at the same time making systemic changes when it comes to culture and thinking about diversity.

Here, Adweek highlights five women working to change the tech industry’s game.


1. Kriti Sharma, vp of artificial intelligence at Sage


What she’s doing: Making AI inclusive

Artificial intelligence may be the buzziest new word in tech circles, but it has a significant gender problem, according to Sharma. For starters, AI assistants like Apple’s Siri and Amazon’s Alexa, which have female voices and personas as their default option, reinforce gender stereotypes. While these female-branded assistants are often used as “helpers,” fielding passive and anodyne questions (e.g., Siri, what’s the temperature?) or conducting household tasks like dimming lights, their male-branded counterparts such as IBM’s Watson, Salesforce’s Einstein and Samsung’s Bixby are touted as muscular, complex problem solvers deployed to such tasks as plugging into a brand’s CRM system and using AI to determine which sales leads are most promising based on past behavior.

Sharma aims to create a more gender-neutral AI industry. At Sage’s two-day “BotCamp” workshops, students get hands-on opportunities learning to build their own chatbots. And Sharma recently hired Sage’s first conversation designer, a role designed specifically to analyze the voice tones and personalities used to create virtual assistants.

Further, Sage’s code of ethics requires developers to follow five guidelines when creating AI, covering everything from how to name virtual assistants to building diverse data sets that help companies make hiring decisions with gender taken out of the equation.

“Women are going to lose twice as many jobs as men due to AI,” Sharma explains, citing research from the Institute for Spatial Economic Analysis. “What we don’t talk about is how [AI] is going to impact different parts of society in different ways. I do a lot of work in that area.”


2. Allison Jones, director of marketing and communications at Code2040


What she’s doing: Getting tech students in the door

Code2040’s mission is to make sure that black and Latinx men and women are well represented in tech. To that end, the 30-person organization provides computer science college students with internships at major companies like Squarespace, Spotify, The New York Times and Goldman Sachs.

The organization also works directly with companies to shake up and realign their internal hiring processes. When Code2040 helped blogging platform Medium hire technical talent, for instance, instead of focusing on the usual factors such as college GPAs, it created face-to-face events with engineering interns so the company could get to know each candidate personally.

While just 20 percent of computer science bachelor’s degrees and 5 percent of the technical workforce are black and Latinx, by 2040 those groups will comprise 40 percent of the U.S. population. “It’s not enough to just connect folks to talent—you have to make sure that your company has the culture that helps them drive, succeed and grow,” says Jones, adding, “The opportunities provide a way to generate wealth. We are building products that need to reflect the communities that are going to be the majority by 2040.”


3. Reshma Saujani, founder and CEO of Girls Who Code


What she’s doing: Teaching thousands of young women to code

In the six years since Saujani, a former attorney, launched Girls Who Code, 53,000 young women have graduated from the program. By the end of 2018, her goal is to nearly double that number, hitting 100,000.

The way Saujani sees it, although the demand for technical roles continues to rise, the percentage of women who actually hold computing roles is falling. The organization’s own research finds that 24 percent of computer scientists in 2017 were women, down from 37 percent in 1995. By 2027, the percentage is expected to slip further to 22 percent.

“I think parity has to be intentional about gender and race,” Saujani says. “We talk a lot about access to computer science education. We should be focused on participation.”

At the same time, she says, simply getting more tech companies to hire women is just the first part of the equation; the second is retention. “What causes women to leave the workforce and college is the lack of community,” Saujani adds.


4. Neha Murarka, co-founder and CEO of Smoogs


What she’s doing: Making bitcoin easy to understand

If the technology industry is dominated by men, think of bitcoin as an even more exclusive boys club.

“It’s a niche within a niche,” says Murarka. As co-founder of the five-person startup, she’s trying to help more women understand the nascent technology. Smoogs powers a media player that digital creators, including publishers and authors, embed into their websites, asking consumers to make small payments in exchange for accessing content. Instead of using a credit card for individual payments, bitcoin safely stores users’ information, allowing them to pay for every second that they watch a video or read an article. Currently, the Nigerian news network BattaBox and author Akul Tripathi are testing Smoogs’ micro-payments for access to a series of articles and books.

In her spare time, Murarka co-hosts London Women in Bitcoin, a meetup aimed at attracting more women into the cryptocurrency space. There, women network while learning about topics such as the ethics behind building bitcoin technology.

“Most of the people who come to us are everyday people from different industries, not just technical industries,” says Murarka, who believes that in order to get more women into tech, they need tech educations.

“In my undergrad and post-grad, I was the only girl in the whole department,” she says. “Even when I was working in my second job in London, we were 22 developers and I was the only girl.”


5. Katharine Zaleski, co-founder and president, PowerToFly


What she’s doing: Helping big brands find talent

In 2014, Zaleski—who had spent years working in media at The Huffington Post, The Washington Post and NowThis News—realized society needed to change the way it talked about women and work.

So, she started PowerToFly with Milena Berry, connecting women with companies. Think of it as an all-women version of LinkedIn: Women create profiles, and then outfits like American Express, Casper and Hearst get lists of qualified female tech candidates. For example, Casper recently posted 10 job openings on the site, including positions for a data engineer, an IT manager and a data and engineering director.

In three years, PowerToFly has created 1 million profiles. In addition to career matchmaking, it runs social and mobile campaigns that advertise companies’ roles through user-acquisition tactics, reaching another 12 million women. In 2017 alone, it sent out 30,000 diverse candidates.

“Companies can no longer say that they have a 'pipeline' problem,” Zaleski says. “When it comes time to interview for a role, not only are we giving them the women that they need to look at immediately, but we’re giving them a lead list and they’re able to say that they’re really interviewing 50/50 male-female.”


Uber and Volkswagen team up with artificial intelligence firm in race to develop self-driving cars

Uber tested its first fleet of self-driving cars in 2016. Credit: Reuters

Nvidia will partner with Uber and Volkswagen as the graphics chipmaker’s artificial intelligence platforms make further gains in the autonomous vehicle industry.

The company, which already has partnerships in the industry with companies such as carmaker Tesla and China’s Baidu, makes computer graphics chips and has also been expanding into technology for self-driving cars.

CEO Jensen Huang told an audience at the CES technology conference in Las Vegas that Uber’s self-driving car fleet was using Nvidia technology to help its autonomous cars perceive the world and make split-second decisions.

Uber has been using Nvidia’s GPU computing technology since its first test fleet of Volvo XC90 SUVs was deployed in 2016 in Pittsburgh and Phoenix.

Uber’s autonomous driving programme has been shaken this year by a lawsuit filed in San Francisco by rival Waymo alleging trade secret theft.

Nevertheless, Nvidia said development of the Uber self-driving programme had gained steam, with one million autonomous miles being driven in just the past 100 days.

With Volkswagen, Nvidia said it was infusing its artificial intelligence technology into the German carmaker’s future lineup, using Nvidia’s new Drive IX platform. The technology will enable so-called “intelligent co-pilot” capabilities based on processing sensor data inside and outside the car.

So far, 320 companies involved in self-driving cars - whether software developers, carmakers and their suppliers, or sensor and mapping companies - are using Nvidia Drive, formerly branded as Drive PX2, the company said.

Nvidia also said its first Xavier processors would be delivered to customers this quarter. The system on a chip delivers 30 trillion operations per second using 30 watts of power.

Bets that Nvidia will become a leader in chips for driverless cars, data centres and artificial intelligence have more than doubled its stock price in the past 12 months, making the Silicon Valley company the third-strongest performer in the S&P 500 during that time.

How one Chinese firm uses A.I. to teach English



Chinese education start-up Liulishuo has developed what it calls the world's first artificial intelligence English teacher.

After years spent gathering data on Chinese people speaking English, the firm employed deep learning to create personalized English courses powered by AI. Available on the firm's mobile app, the courses were launched in 2016 and boast around 50 million registered users.

AI teaching can triple learning efficiency, CEO and Founder Yi Wang told CNBC on the sidelines of the Morgan Stanley Tech, Media & Telecom conference in Beijing.

Schools have long suffered from a short supply of highly qualified teachers, he said, but now "technology, especially AI and mobile internet, has enabled us to extract the best out of the best teachers."

"We're seeing a tidal shift here," he added.

Wang, a former Google product manager, says Liulishuo will eventually move on to other languages as it looks to build "the most intelligent and efficient AI language teacher."



5 Key Artificial Intelligence Predictions For 2018: How Machine Learning Will Change Everything

During 2017 it was hard to escape predictions that artificial intelligence is about to change the world. In 2018, this is unlikely to change. However, an increased focus on repeatable and quantifiable results is likely to ground some of the “big picture” thinking in reality.

Don’t get me wrong - in 2018 AI and machine learning will still be making headlines, and there are likely to be more sensationalized claims about robots wanting to take our jobs or even destroy us. However, stories about real innovation and progress should start to receive more prominence as the promise of smart, learning machines increasingly begins to bear fruit.

Here are my predictions for what we will see in 2018:

  1. There will be less hype and hot air about AI – but a lot more action

With any breakthrough technology comes hype. As the arrival of functional and useful AI has been predicted for centuries, it’s hardly surprising that people want to talk about it now that it’s here.

It also means that there’s inevitably a lot of hot air - for starters, take a look at my rundown of the most common AI myths. This will eventually die down as the media moves on to the “next big thing”. In its place during 2018, I expect we will start to see real progress towards achieving some of the dreams and ambitions that have been talked up over the past few years.

All the indicators show that investment in the development and integration of AI, and in particular machine learning, continues to increase in scale. And importantly, results are starting to appear beyond computers learning to beat humans at board games and TV game shows. I expect 2018 to provide a continuous stream of small but sure steps forward, as machine learning and neural network technology takes on more routine tasks.

Businesses are expected to use AI to stay ahead of the game - but how do you get started?


Despite the big hype, small, medium-size and sometimes even larger businesses are often unsure where to begin: “How can we use artificial intelligence in our organization, and what value can it bring?” This is the question many company directors and managers have asked themselves. Organizations are often unaware of the vast opportunities they are already sitting on in terms of what is possible with their data, but they do know they need to get started with AI so as not to be left behind the competition.

While everyone talks about AI vastly transforming each industry in the near future, many businesses are not sure what exactly this can mean for their own organisation: What business processes could be automated? What processes could be made more efficient with AI, and where could a machine learning algorithm bring the most value?

So, why have some businesses not yet started using AI? Innovating with AI and machine learning requires access to highly skilled individuals: data scientists who master not only statistics and data visualization but also complex machine learning and AI methods. Machine learning engineers and AI architects are rarer still; locating someone excellent is a lengthy process, and hiring them is costly. AI experts often have PhDs in an artificial intelligence field, and many are still doing research in the academic system, because AI is not a field you become expert in overnight.

Before we can solve the talent gap, we need to fill the knowledge gap. There are companies, such as Brainpool AI, which provide the experts but also help organisations understand how they can get started with AI, from data structuring and engineering, to identifying machine learning opportunities within the business. By working closely with the company’s in-house teams, Brainpool consultants perform analytics audits, figuring out what data is available and what data analytics has been done, how their data should be structured and merged, and help businesses understand what kind of questions can be answered with machine learning, and where they can bring the most value.

Say you are a retailer and want to know whether you are offering the right kind of stock to make your business run efficiently and profitably while offering product ranges that make your customers happy. You may be wondering whether the set of mayonnaise brands you carry is both satisfactory to your customers and cost-efficient.

Here are some examples of how AI can help us:

  1. AI-powered product selection – ensuring the consumer receives the most relevant choice of products based on their online behavior. We see Amazon getting quite good at this.

  2. AI-powered stock management – using AI to maximise customer satisfaction while at the same time optimizing stock management so the business runs efficiently

  3. Personal health virtual assistants/healthcare bots – AI-powered technology can help patients by suggesting what medication or attention is needed based on their described symptoms

  4. Medical diagnostics – millions of tests are carried out by hospitals today for illnesses that are hard to detect. AI can enhance the speed and accuracy of these tests

  5. Fraud detection – AI can help companies in industries such as telecom or banking detect and prevent fraud with higher accuracy

The range of applications is huge; it would be hard to list them all. When thinking of getting started with AI, no matter what application or industry you’re in, it is important to select tools suited to the type of data and the problems you are tackling. AI frameworks such as TensorFlow, H2O, Caffe and PowerAI are some of the options. You will also need advice on the languages and environments your organisation should be using, such as R, Matlab or Python. Artificial intelligence and machine learning experts can help you select the right tools and deliver a portfolio of powerful machine learning solutions to choose from, with a roadmap of how to get started.

The goal is to become self-sufficient and learn exactly what steps you need to take to be ready to start using AI within your business. If you are already using data science, you should get experts to evaluate whether the algorithms your company is using are really state of the art and the best you could be doing.

Don’t wait around, otherwise you’ll get left on the platform with your competitors moving away in a speeding train. Get expert advice from a company like Brainpool and get started with AI today.



Singapore's first robot masseuse starts work

Credit: Nanyang Technological University

A robot masseuse has started work in Singapore today. Named Emma, short for Expert Manipulative Massage Automation, it specialises in back and knee massages as it mimics the human palm and thumb to replicate therapeutic massages such as shiatsu and physiotherapy.

Emma started work on her first patients today at the NovaHealth Traditional Chinese Medicine (TCM) clinic, working alongside her human colleagues – a physician and a massage therapist.

Emma 3.0 – the first to go into public service – is a third more compact than the first prototype unveiled last year, offers a wider range of massage programmes and provides a massage that patients describe as almost indistinguishable from a professional masseuse’s.

Emma uses advanced sensors to measure tendon and muscle stiffness, together with Artificial Intelligence and cloud-based computing to calculate the optimal massage and to track a patient's recovery over a course of treatments.

Emma is developed by AiTreat, a technology start-up company incubated at Nanyang Technological University, Singapore (NTU Singapore).

Just two years old, AiTreat has a valuation of SGD$10 million (USD $7.3 million) after it recently completed its seed round funding, supported by venture capitalists from Singapore, China and the United States, including Brain Robotics Capital LP from Boston.

Founder of AiTreat and NovaHealth, Mr Albert Zhang, an alumnus of NTU Singapore who led the development of Emma, said the company's technology aims to address workforce shortages and quality consistency challenges in the healthcare industry.

Using Emma in chronic pain management has the potential to create low-cost treatment alternatives in countries where healthcare costs are high and where aging populations have a growing demand for such treatment.

Mr Zhang said that Emma was designed to deliver a clinically precise massage according to the prescription of a qualified traditional Chinese medicine physician or physiotherapist, without the fatigue faced by a human therapist.

"By using Emma to do the labour intensive massages, we can now offer a longer therapy session for patients while reducing the cost of treatment. The human therapist is then free to focus on other areas such as the neck and limb joints which Emma can't massage at the moment," said Mr Zhang, who graduated from NTU's Double Degree programme in Biomedical Sciences and Chinese Medicine.

In Singapore, a conventional treatment package for lower back pain, consisting of a consultation, acupuncture and a 20-minute massage, typically ranges from SGD$60 to SGD$100 (USD$44 to USD$73).

At NovaHealth TCM clinic, a patient can receive the same consultation and acupuncture, but with a 40-minute massage from Emma and a human therapist, for SGD$68 (USD$50).

Emma is housed in a customised room with two massage beds. Located in between both beds, Emma can massage one patient while the physician provides treatments for the second patient, before switching over.

This arrangement ensures Emma is always working on a patient, maximising the productivity of the clinic. It is estimated that staffing requirements to run a clinic can be reduced from five people to three, as Emma does the job of two masseuses.

How Emma works

Emma has a touch screen and a fully articulated robotic limb with six degrees of freedom. Mounted at the end of the limb are two soft massage tips made from silicone, which can be warmed for comfort.

Emma also has advanced sensors and diagnostic functions which can measure the exact stiffness of a particular muscle or tendon.

The data collected from each patient is then sent to a cloud server, where artificial intelligence (AI) computes the exact pressure to be delivered during the massage procedure.

The AI can also track and analyse the progress of the patient, generating a performance report that will allow a physician to measure a patient's recovery using precise empirical data.
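AiTreat has not published its algorithms, but the loop described above, measure stiffness, compute a pressure, track recovery across sessions, can be caricatured in a few lines. Everything below (units, coefficients, stiffness scores, function names) is invented purely for illustration:

```python
def plan_pressure(stiffness, base=2.0, gain=0.5, max_pressure=5.0):
    """Toy rule: stiffer tissue gets firmer pressure, capped for safety.
    All units and coefficients are hypothetical."""
    return min(base + gain * stiffness, max_pressure)

def recovery_report(stiffness_by_session):
    """Summarise progress across sessions as the drop in measured stiffness,
    the kind of empirical report a physician could review."""
    first, last = stiffness_by_session[0], stiffness_by_session[-1]
    return {"baseline": first, "latest": last,
            "improvement_pct": round(100 * (first - last) / first, 1)}

report = recovery_report([8.0, 6.5, 5.0, 4.0])  # stiffness scores per visit
```

The point of the sketch is the feedback loop: sensor readings drive the treatment plan, and the same readings accumulate into a progress report over the course of treatment.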

This proprietary cloud intelligence is supported by Microsoft, after Mr Zhang and his teammates won the Microsoft Developer Day Start-up Challenge last year.

Once Emma has proven she can improve the productivity and effectiveness of TCM treatments, Mr Zhang hopes the clinic could become a business model for others to follow in future.

AiTreat is currently incubated at NTUitive, the university's innovation and commercialisation arm.

The start-up is supported by the StartupSG-Tech grant, which funds up to SGD$500,000, as well as SPRING Singapore's ACE start-up grant and the Technology for Enterprise Capability Upgrading (T-Up) grant.

The development of Emma is also part of the TAG.PASS accelerator programme by SGInnovate, which will see Mr Zhang tie up with overseas teams to target multiple markets such as the US and China.

Chief Executive Officer of NTU Innovation and NTUitive Dr Lim Jui said harnessing disruptive technologies such as robotics and AI to improve everyday life is what Singapore needs to keep its innovative edge.

"To remain competitive in the global arena, start-ups will need to tap on emerging technologies to create a unique product that can tackle current challenges, similar to what AiTreat has done," Dr Lim explained.

"We are proud to have guided Mr Albert Zhang in his vision to bring affordable healthcare solutions to the market for Singapore, which can alleviate some of the chronic pain problems which our elderly faces."

The official launch of Emma and the NovaHealth clinic today was attended by fellow entrepreneurs and industry leaders, including Mr Inderjit Singh, Chairman of NTUitive, NTU's innovation and enterprise arm, and a member of NTU Board of Trustees.

Mr Inderjit Singh said, "There is great potential for Emma to be of service to society, especially as the population ages. The massage techniques of experienced and renowned TCM physicians can be reproduced in Emma, giving the public easier access to quality treatment. I look forward to future studies which could improve the efficacy of such massages, using herbal ointments containing modern ingredients that improve wear and tear, such as glucosamine."

Running in parallel to Emma's work schedule is a research project to measure and benchmark Emma's efficacy.


AI innovation will trigger the robotics network effect

Image Credit: Oryx Vision

Anyone who has thought about scaling a business or building a network is familiar with a dynamic referred to as the “network effect.” The more buyers and sellers who use a marketplace like eBay, for example, the more useful it becomes. Well, the data network effect is a dynamic in which increased use of a service actually improves the service, such as how machine-learning models generally grow more accurate as a result of training from larger and larger volumes of data.
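The data network effect can be shown with even the simplest "model": a frequency estimate grows more accurate as its training data grows. A toy illustration in Python, with the true probability, sample sizes and trial counts all chosen arbitrarily:

```python
import random

def mean_abs_error(n, trials=300, true_p=0.7):
    """Average error of a frequency estimate fit to n samples,
    averaged over many independently drawn datasets."""
    total = 0.0
    for seed in range(trials):
        rng = random.Random(seed)
        heads = sum(rng.random() < true_p for _ in range(n))
        total += abs(heads / n - true_p)
    return total / trials

# More training data -> lower error: a toy data network effect.
errors = [mean_abs_error(n) for n in (10, 100, 1000)]
```

The error falls roughly with the square root of the data volume, which is why services that attract more usage, and therefore more data, can keep pulling ahead of rivals.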

Autonomous vehicles and other smart robots rely on sensors that generate increasingly massive volumes of highly varied data. This data is used to build better AI models that robots rely on to make real-time decisions and navigate real-world environments.

The confluence of sensors and AI at the heart of today’s smart robots generates a virtuous feedback loop, or what we might call a “robotics network effect.” We are currently on the verge of the tipping point that will create this network effect and transform robotics.

The rapid evolution of AI

To understand why robotics is the next frontier of AI, it helps to step back and understand how AI itself has evolved.

Machine intelligence systems developed in recent years are able to leverage huge amounts of data that simply didn’t exist in the mid-1990s when the internet was still in its infancy. Advances in storage and compute have made it possible to quickly and affordably store and process large amounts of data. But these engineering improvements alone can’t explain the rapid evolution of AI.

Open source machine learning libraries and frameworks have played a quiet but equally essential role. When the scientific computing framework Torch was released 15 years ago under a BSD open source license, it included a number of algorithms still commonly used by data scientists, including deep learning, multi-layer perceptrons, support vector machines, and K-nearest neighbors.

More recently, open source projects like TensorFlow and PyTorch have made valuable contributions to this shared repository of knowledge, helping software engineers with diverse backgrounds develop new models and applications. Domain experts require a vast amount of data to create and train these models. Large incumbents have a huge advantage because they can leverage existing data network effects.

Sensor data and processing power

Light detection and ranging (lidar) sensors have been around since the early 1960s. They’ve since found application in geomatics, archaeology, forestry, atmospheric studies, defense, and other industries. In recent years, lidars have become the preferred sensors for autonomous navigation.

The lidar sensor on Google’s autonomous vehicles generates 750MB of data per second. The 8 computer vision cameras on board collectively generate another 1.8GB per second. All this data has to be crunched in real time, but centralized compute (in the cloud) simply isn’t fast enough for real-time, high-velocity situations. To relieve this bottleneck, we’re decentralizing compute by pushing processing to the edge or, in the case of robots, on board.
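A little arithmetic on the figures just quoted shows why a cloud round-trip is untenable (decimal units assumed):

```python
# Aggregate sensor bandwidth for the vehicle described above:
# 750 MB/s of lidar plus 1.8 GB/s combined from the eight cameras.
LIDAR_MB_PER_S = 750
CAMERAS_MB_PER_S = 1800

total_mb_per_s = LIDAR_MB_PER_S + CAMERAS_MB_PER_S   # 2550 MB/s sustained
tb_per_hour = total_mb_per_s * 3600 / 1_000_000      # ~9.2 TB per driving hour

print(total_mb_per_s)             # 2550
print(round(tb_per_hour, 2))      # 9.18
```

Streaming terabytes per hour to a data centre and back within the milliseconds a vehicle has to react is not realistic, hence the push to process on board.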

The current solution for most of today’s autonomous vehicles is to use two on-board “boxes,” each of which is equipped with an Intel Xeon E5 CPU and 4 to 8 Nvidia K80 GPU accelerators. At peak performance, this consumes over 5000W in electricity. Recent hardware innovations like Nvidia’s new Drive PX Pegasus, which can compute 320 trillion operations per second, are beginning to more effectively address this bottleneck.

AI on the edge

Our ability to both process sensor data and fuse various modalities of data together will continue to drive the evolution of smart robots. In order for this sensor fusion to happen in real time, we need to put our machine learning and deep learning models on the edge. Of course, decentralized AI compounds the demands on decentralized processors.

Thankfully, machine learning and deep learning compute is becoming much more efficient. Graphcore’s intelligent processing units (IPUs) and Google’s tensor processing units (TPUs), for example, are lowering the cost and accelerating the performance of neural networks at scale.

Elsewhere, IBM is developing neuromorphic chips that mimic brain anatomy. Prototypes use a million neurons, with 256 synapses per neuron. The system is particularly well suited to interpret sensory data because it’s designed to approximate the way the human brain interprets and analyzes perceptual data.

The result of all this data coming from sensors means we are on the verge of a robotics network effect, a shift that will have dramatic implications for AI, robotics, and their various applications.

A new world of data

The robotics network effect will enable new technologies and machines to act not only on larger volumes and velocities of data, but also on expanding varieties of data. New sensors will be able to detect and capture data that we might not even be thinking about, bound as we are by the limited nature of human perception. Machines and smart devices will contribute enriched data back onto the cloud and to neighboring agents, informing decision making, enhancing coordination, and playing a vital role in continuous model improvements.

These advancements are coming more quickly than many realize. Aromyx, for example, uses receptors and advanced machine learning models to build sensor systems and a platform for the digital capture, indexing, and search of scent and taste data. The company’s EssenceChip is a disposable sensor that outputs the same biochemical signals that the human nose or tongue sends to the brain when we smell or taste a food or beverage.

Open Bionics is developing robotic prostheses that rely on haptic data collected from sensors within the arm socket to control hand and finger movements. This non-invasive design leverages machine learning models to translate fine muscle tension sensed by the electrodes into complex motor response in the bionic hands.

Sensor data will be instrumental in pushing the boundaries of AI. AI systems will simultaneously expand our ability to process data and discover creative uses for this data. Among other things, this will inspire new robotic form factors capable of collecting even broader modalities of data. As we advance our ability to “see” in new ways, the everyday world around us is rapidly emerging as the next great frontier of discovery.

Alex Housley is the founder and CEO of Seldon, the machine learning deployment platform that gives data science teams new capabilities around infrastructure, collaboration, and compliance.

Santiago Tenorio is a general partner at Rewired, a robotics-focused venture studio investing in applied science and technologies that advance machine perception.

AI in Retail

AI and Machine Learning are completely transforming the retail industry these days. Our purchase journey is becoming shorter and more personalised than ever. We see it happening, but do we understand the technology behind it? Here are a few examples of how it is done.

Predictive Sales

Self-learning models can predict sales, help increase sales revenue, and reduce storage costs. Linear Latent Variable Models (LAVA) and/or Elastic Nets can be used to estimate the latent factors that drive customers' purchasing behaviour.
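A minimal sketch of the elastic-net idea, using a hand-rolled subgradient-descent fit on synthetic data rather than a production forecaster; the feature names are invented for illustration:

```python
import numpy as np

def elastic_net_gd(X, y, alpha=0.05, l1_ratio=0.5, lr=0.01, epochs=2000):
    """Minimise 0.5*||y - Xw||^2 / n + alpha*(l1_ratio*||w||_1
    + 0.5*(1 - l1_ratio)*||w||_2^2) by (sub)gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / n
        grad += alpha * (l1_ratio * np.sign(w) + (1 - l1_ratio) * w)
        w -= lr * grad
    return w

# Synthetic weekly sales: discount depth and promo spend drive units sold;
# the third feature is deliberately irrelevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # [discount, promo_spend, noise_feature]
true_w = np.array([4.0, 2.0, 0.0])
y = X @ true_w + rng.normal(scale=0.5, size=200)

w = elastic_net_gd(X, y)
# w approximately recovers [4, 2, 0]; the L1 term shrinks the useless feature.
```

The L1/L2 mix is what makes elastic nets attractive here: irrelevant drivers get pushed towards zero while correlated useful ones are kept.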

Big Data Analytics and Visualisation

Systematic analysis of big data is crucial when exploring under-performing streams of sales revenue. By deploying a combination of large-scale analytics and data visualisation, we can uncover hidden campaign opportunities, such as cross-selling, that will lift such poorly performing SKUs.

Supply Chain

Implement statistical models that capture the demand and supply uncertainty inherent to the supply chain process. Perturbing these models accounts for hidden externalities and yields a robust toolkit for supply chain modelling. Additional areas where machine learning could help your business include production planning problems, optimising stock levels, and warehouse automation.
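One classical instance of stocking under demand uncertainty is the newsvendor model; here is a minimal sketch (the demand figures and prices are made up for illustration):

```python
def newsvendor_quantity(demand_samples, unit_cost, price, salvage=0.0):
    """Stock the empirical demand quantile at the critical ratio
    (price - cost) / (price - salvage): the profit-maximising order
    quantity under uncertain demand in the classic newsvendor model."""
    critical_ratio = (price - unit_cost) / (price - salvage)
    ordered = sorted(demand_samples)
    idx = min(int(critical_ratio * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Ten weeks of observed demand for one SKU
demand = [90, 110, 100, 120, 95, 105, 130, 85, 115, 100]
q_star = newsvendor_quantity(demand, unit_cost=4.0, price=10.0)
print(q_star)  # 110: with a 60% critical ratio we stock above median demand
```

Richer supply-chain models layer lead times, seasonality and multi-echelon effects on top of this same trade-off between overstock and stock-out cost.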

Backtesting Campaign Strategies

Campaigns can be costly if they are not implemented correctly and thoroughly backtested. Finely tuned back-testing models will help build well-constructed, cost-effective campaign strategies, giving management at all levels the details and implications for deployment.
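At its simplest, backtesting a past campaign means comparing performance inside and outside the campaign window on historical data (figures invented for illustration; a real backtest would also control for seasonality and trend):

```python
# Toy backtest: historical daily unit sales and the indices of the days a
# past campaign ran. Compare mean sales inside vs outside the window.
sales = [100, 98, 103, 140, 150, 145, 99, 101]   # units per day
campaign_days = {3, 4, 5}                         # campaign window (day indices)

on = [s for i, s in enumerate(sales) if i in campaign_days]
off = [s for i, s in enumerate(sales) if i not in campaign_days]
uplift = sum(on) / len(on) - sum(off) / len(off)
print(round(uplift, 1))  # 44.8 extra units per campaign day
```

Comparing that uplift against the campaign's daily cost is what tells management whether the strategy deserves a wider rollout.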

Targeted Campaign and Retail Segmentation

Have a nuanced view of public opinion and target customers more accurately with Multi-level Regression and Poststratification (MRP). Create retail segmentation with artificial neural networks (ANNs), giving you a better understanding of your customers' shopping habits.
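Segmentation itself need not be exotic; a minimal k-means over two invented shopper features already separates bargain hunters from big spenders (the ANN approach mentioned above would replace this with learned representations over far richer features):

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Tiny k-means for illustration only: assign each point to its nearest
    centroid, then recompute centroids as cluster means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(vals) / len(c) for vals in zip(*c)) if c
                     else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Features per shopper: (average basket value, visits per month)
shoppers = [(12, 2), (15, 3), (14, 2), (80, 8), (95, 10), (85, 9)]
centroids, clusters = kmeans(shoppers, k=2)
# The two segments split cleanly: three low-value, three high-value shoppers.
```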

For more information visit

-------------------------------- will be exhibiting at the AI Congress 2018. Get your ticket today!

The Incredible Ways Heineken Uses Big Data, The Internet of Things And Artificial Intelligence (AI)

Photographer: Taylor Weidman/Bloomberg

Every industry can benefit from Big Data, IoT and AI, and that includes brewers. Dutch brewer Heineken has been a worldwide brewing leader for the last 150 years, but today, as the No. 1 brewer in Europe and No. 2 in the world, it is ramping up its results thanks to the use of big data and AI. As the company sets out to better compete in the formidable U.S. beer market, it plans to leverage the vast amounts of data it collects. Currently it sells more than 8.5 million barrels of its various beer brands in the U.S., but it hopes to increase those numbers with data-driven improvements and AI augmentation to its operations, marketing, advertising and customer experience.

Heineken Improves Operations through Data Analytics

From forecasting to optimizing delivery routes, Heineken uses data at every stage of the supply chain. Data informs Heineken’s collaborative planning, forecasting and replenishment processes to eliminate inefficiencies throughout the entire chain. Through data analytics the brewer can adjust production when there is high inventory, long production or replenishment lead-times, and seasonal variances in the demand for its products.

The Internet of Things and Heineken’s Ignite Bottle

The brewer is not letting the potential of the Internet of Things (IoT) pass them by. By 2025, the Internet of Things is expected to generate up to $11.1 trillion a year in economic value according to a McKinsey Global Institute report. Heineken has already dabbled in the IoT with its Ignite bottle—one of the winning ideas for the company’s annual Future Bottle Design Challenge. These interactive bottles have 50 individual components and sensors including LED lights that turn beer bottles into connected devices that respond to the beat of the music in clubs and reflect its rhythms so that “every bottle becomes part of the party.” Its lights also flicker when the bottle is tipped back for a drink, “cheers” another bottle and dims when nobody is touching it.  The Ignite bottle certainly contributes to a memorable customer experience when enjoying a Heineken brew.

Data-Driven Marketing

Heineken partnered with Walmart on a pilot program with Shopperception, a company that analyzes the behavior of shoppers in front of the shelves and uses the metrics it gathers to create real-time events to drive more conversions. This program helped them gather a tremendous amount of data on how every six-pack or can of Heineken left the store. The brewer and retailer can assess all the data collected to better understand the customer who is purchasing Heineken as well what might be the best location in the store to sell beer and when.

Heineken also has a strong social media following and created partnerships with Facebook and Google to better understand their customers. Now armed with this insight, Heineken can create personalized and event-driven marketing experiences.

5 reasons businesses are struggling with large-scale AI integration

Artificial intelligence is an important vehicle for companies looking to automate processes, reduce the cost of operation, or fuel innovation. Despite the positive influence AI-supported activities have on business, a successful implementation won’t happen overnight. First you need a complete understanding of your business goals, technology needs, and the impact AI will have on customers and employees. The majority of employees face challenges or concerns relating to AI adoption, and that needs addressing.

The implication of successful AI adoption is far reaching for businesses undertaking full-cycle digital transformation, which places equal emphasis on automation, innovation, and learning. While employees may experience trepidation at the prospect of AI reshaping or eliminating day-to-day tasks, their productivity could actually increase because more of their time can be directed toward activities that produce value-driven business outcomes. No matter the role or the business unit, AI, automation, and machine learning are changing how work is performed.

As AI becomes pervasive, companies must face challenges head-on. Executives will need to consider the following five areas as they progress with digital transformation and move to invest more heavily in AI.

1. Legacy infrastructure

The adage “out with the old and in with the new” rings true for decision makers who are assessing whether their current infrastructures are intelligent enough to support today’s technology. AI-supported activities require ingestion of vast amounts of data; thus, infrastructure must be agile and scalable. Traditional structures like software-defined infrastructures (SDIs) aren’t necessarily the best option. While SDIs provide flexibility, the structure is limited by its fixed source code and the administrator who is writing the scripts. More sophisticated AI algorithms and intelligence systems require smarter structures like AI-defined infrastructures (ADIs) and cloud-based networks that can quickly expand based on business needs.

Moreover, while neural networks have existed for decades, only now is massive computing power available at a reasonable cost, which in turn has helped increase the number of layers in these networks. Each layer adds more intelligence but also consumes enormous computing power, which used to be prohibitively expensive. More layers generally mean better outcomes.

2. The skills gap

AI is generating a demand for new skill sets in the workplace. However, there is currently a widespread shortage of talent that possesses the knowledge and capabilities to properly build, fuel, and maintain these technologies within an organization. The lack of well-trained professionals who can build and direct a company’s AI and digital transformation journeys noticeably hinders progress and continues to be a major hurdle for businesses.

To mitigate this, businesses should look inward and invest in on-the-job training and reskilling. For example, LinkedIn just announced it plans to teach all its engineers the basics of using AI. With the proper staff powering AI, employees are able to focus on other critical activities and boost productivity, creating a large ROI. If an enterprise’s digital transformation goal is for AI to become a business accelerator, it needs to be an amplifier of its people. It’s going to take work to give everyone access to the fundamental knowledge and skills in problem-finding and remove the elitism around advanced technology, but the boost to productivity and ROI will be worth it in the end.

3. Ethical dilemmas

While AI is still in early stages, ethical concerns abound. Both proponents and detractors of AI (Elon Musk most famous among the latter group) have focused on who wins and who loses when AI grows more prominent in business and daily life. A recent study that sought to better understand how AI and automation technologies are driving full-cycle digital transformation in various industry sectors found 62 percent of enterprises felt that a successful transition to AI-powered processes and workflows requires stringent ethical standards.

It’s critical that businesses develop guidelines and rules as adoption takes place. An ethical framework with buy-in from leadership will ensure products and services, processes, and employees are treated appropriately with respect to how AI is adopted, used, and expanded. Having moral standards or systems in place assures issues such as unemployment, bias, and inequality are carefully scrutinized as AI is added to the corporate environment.

4. Data abundance and availability

AI algorithms cannot properly execute without access to data. The more data available, the more accurate and effective AI will be. As systems evolve and more connections between networks, devices, and processes arise, colossal amounts of structured and unstructured data can be accessed.

Before deploying AI, IT teams and data scientists should collect, clean, and label datasets for machine learning algorithms to ingest to improve AI applications. Filtering through these large amounts of data is no small feat considering 80 percent of organizations’ data is unstructured. The better an organization can clean up its data, the sooner it can improve accuracy and expand use of the data. Over time, AI and machine learning will become smarter about analyzing data and making discoveries quickly that can positively affect businesses’ bottom lines.
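A hypothetical cleaning pass might look like this: drop records with missing fields, normalise labels, and deduplicate before anything reaches a model (the record contents are invented for illustration):

```python
# Raw, messy records as they might arrive from a feedback pipeline.
raw = [
    {"id": 1, "text": "late delivery", "label": "Complaint "},
    {"id": 2, "text": None,            "label": "praise"},       # missing text
    {"id": 3, "text": "great support", "label": "PRAISE"},
    {"id": 1, "text": "late delivery", "label": "complaint"},    # duplicate id
]

seen, clean = set(), []
for rec in raw:
    if rec["text"] is None or rec["id"] in seen:
        continue  # skip incomplete or duplicate records
    seen.add(rec["id"])
    # Normalise the label so "Complaint " and "complaint" become one class.
    clean.append({**rec, "label": rec["label"].strip().lower()})

print(clean)  # two usable records with consistent labels
```

Unglamorous steps like these are where most of the effort on that unstructured 80 percent of data actually goes.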

5. Budget concerns

Deploying AI effectively takes a vast amount of time, resources, and budget. While AI cuts costs in the long run, it typically requires significant investment at the start. Large enterprises are investing millions of dollars, while smaller companies invest sums ranging from tens of thousands to hundreds of thousands. Running extensive projects with unstructured data alone could cost your organization up to $500,000.

Businesses that haven’t yet allocated budget for AI should start small by manually auditing the organization to streamline processes and free up employees’ bandwidth. This allows decision makers to clearly see which systems aren’t utilized effectively and which areas could benefit from technology down the road.

The future of business requires artificial intelligence. But AI is also the future of innovation, and it needs its human creators in order to succeed and become more useful. While some have already adopted AI applications, others are still lagging, which is understandable considering the challenges businesses face during this process. However, once these barriers are overcome, enterprises will finally see how AI can drastically revolutionize businesses, improve processes, and increase employee productivity at scale in the coming years.

Mohit Joshi is president and head of banking, financial services and insurance, and health care and life sciences at Infosys, a multinational corporation that provides business consulting, information technology, and outsourcing services.



The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Learn how your competitors are implementing and monetising AI. Register today for a ticket at an early bird rate!

New surgical robots are about to enter the operating theatre

Cambridge Medical Robotics

ROBOTS have been giving surgeons a helping hand for years. In 2016 there were about 4,000 of them scattered around the world’s hospitals, and they took part in 750,000 operations. Most of those procedures were on prostate glands and uteruses. But robots also helped surgeons operate on kidneys, colons, hearts and other organs. Almost all of these machines were, however, the products of a single company. Intuitive Surgical, of Sunnyvale, California, has dominated the surgical-robot market since its device, da Vinci, was cleared for use by the American Food and Drug Administration in 2000.

That, though, is likely to change soon, for two reasons. One is that the continual miniaturisation of electronics means that smarter circuits can be fitted into smaller and more versatile robotic arms than those possessed by Intuitive’s invention. This expands the range of procedures surgical robots can be involved in, and thus the size of the market. The other is that surgical robotics is, as it were, about to go generic. Many of Intuitive’s patents have recently expired. Others are about to do so. As a result, both hopeful startups and established health-care companies are planning to enter their own machines into the field.

Though the word “robot” suggests a machine that can do its work automatically, both da Vinci and its putative competitors are controlled by human surgeons. They are ways of helping a surgeon wield his instruments more precisely than if he were holding them directly. Da Vinci itself has four arms, three of which carry tiny surgical instruments and one of which sports a camera. The surgeon controls these with a console fitted with joysticks and pedals, with the system filtering out any tremors and accidental movements made by its operator. That, combined with the fact that the system uses keyhole surgery (whereby instruments enter the patient’s body through small holes instead of large cuts, making procedures less invasive), reduces risks and speeds up recovery. But at more than $2m for the equipment, plus up to $170,000 a year for maintenance, da Vinci is expensive. If a new generation of surgical robots can make things cheaper, then the benefits of robot-assisted surgery will spread.

Arms and the man

This summer Cambridge Medical Robotics (CMR), a British company, unveiled Versius, a robot that it hopes to start selling next year (a picture of the machine can be seen above). Unlike da Vinci, in which the arms are all attached to a single cart, Versius sports a set of independent arms, each with its own base. These arms are small and light enough to be moved around an operating table as a surgeon pleases, or from one operating theatre to another as the demands of a hospital dictate. This way, the hospital need not dedicate a specific theatre to robotic surgery, and the number of arms can be tailored to the procedure at hand.

Unlike a da Vinci arm, which is like that of an industrial robot, a Versius arm is built like a human one. It has three joints, corresponding to the shoulder, the elbow and the wrist. This means, according to Martin Frost, CMR’s chief executive, that surgeons will be able to use angles and movements they are already familiar with, instead of having to learn a robot-friendly version of a procedure from scratch. The company has yet to decide how much the arms will cost, but Mr Frost expects that operations which employ Versius will work out to be only a few hundred dollars more expensive than those conducted by humans alone. With da Vinci, the difference can amount to thousands.

Versius will compete with da Vinci on its own turf—abdominal and thoracic surgery. Others, though, want to expand robotics into new areas. Medical Microinstruments (MMI), based near Pisa, in Italy, has recently shown off a robot intended for reconstructive microsurgery, a delicate process in which a surgeon repairs damaged blood vessels and nerves while looking through a microscope. This robot allows the surgeon to control a pair of miniature robotic wrists, 3mm across, that have surgical instruments at their tips.

MMI’s device does away with the control console. Instead, the surgeon sits next to the patient and manipulates the instruments with a pair of joysticks that capture his movements and scale them down appropriately. That means he can move as if the vessels really were as big as they appear through the microscope.

Such a robot could even be used for operating on babies. “In their case,” observes Giuseppe Prisco, MMI’s boss, “even ordinary procedures are microsurgery.” The company is now doing preclinical tests. Dr Prisco reckons the market for robotic microsurgery to be worth $2.5bn a year.

A third new firm hoping to build a surgical robot is Auris Robotics. This is the brainchild of Frederic Moll, one of the founders of Intuitive Surgical (though he left more than ten years ago). Auris remains silent about when its robots will reach the market, but the firm’s patent applications give some clues as to what they might look like when they do. Auris appears to be developing a system of flexible arms with cameras and surgical instruments attached, which could enter a patient’s body through his mouth.

That tallies with an announcement the firm made earlier this year, saying that the robot will first be used to remove lung tumours. Lung cancer is the world’s deadliest sort, killing 1.7m people a year. What makes it so deadly, Auris notes, is that it is rarely stopped early. Opening someone’s thorax and removing parts of his lung is risky and traumatic. It is not always worthwhile if the tumour is still small, because small tumours do not necessarily grow big. If they do, though, they are usually lethal if left in situ—but much harder to remove than when they were small. Auris’s design could ease this dilemma by passing surgical instruments from the mouth into the trachea and thence to the precise point inside the affected lung where they are needed, in order to cut away only as much tissue as required.

Auris, CMR and MMI are all startups. But two giants of the medical industry are also joining the quest to build a better surgical robot. One is Medtronic, the world’s largest maker of medical equipment. The other is Johnson & Johnson, which has teamed up with Google’s life-science division, Verily, to form a joint venture called Verb Surgical.

Like Auris, Medtronic is keeping quiet about the design of its robot. But it has said that it plans to begin using it on patients in 2018. Also like Auris, though, some information can be deduced from other sources. In particular, Medtronic has licensed MIRO, a robot developed by Germany’s space agency for the remote control of mechanical arms in space. MIRO is made of lightweight, independent arms. These, presumably, could be fixed directly onto the operating table.

A robot based on MIRO would let surgeons rely on touch as well as sight, since MIRO’s instruments are equipped with force sensors that relay feedback to the joysticks used to operate them, and thus to the operator’s hands. The lack of such haptic feedback (the ability to feel the softness of tissues, and the resistance they offer to the surgeon’s movements) has long been a criticism of da Vinci. Surgeons often rely on touch, for example, to discern healthy from tumorous tissue.

Verb Surgical was formed in 2015 and demonstrated its latest prototype to investors earlier this year. Scott Huennekens, the firm’s boss, says the machine will be particularly suitable for gynaecological, urological, abdominal and thoracic surgery. 

Robot, teach thyself

Verb wants not just to build surgical machines, but to get its robots to learn from one another. The firm plans to connect all the machines it sells to the internet. Each bot will record data about, and videos of, every procedure it performs. These will be fed to machine-learning algorithms for analysis, to tease out what works best.

Mr Huennekens compares this to the way Google’s driverless-car division collects data on its vehicles’ journeys in order to improve their performance. A couple of years after its launch, and after processing enough images, the system could start helping surgeons to tell sick tissue from healthy, to decide where nerves and blood vessels are, and to plan procedures accordingly. Later, when the algorithms have swallowed many more years’ worth of data, the robots may be able to help surgeons make complex decisions such as how to deal with unexpected situations, what the best way is to position the robotic arms, and where and how to cut.

As for Intuitive, it, too, has noticed the size of the lung-cancer market. In collaboration with Fosun Pharma, a Chinese firm, it has announced a new system for taking biopsies of early-stage lung cancers in order to determine how threatening they are. It has also announced the launch of the da Vinci X, a lower-cost version of its workhorse. Robots may already be in many theatres, but a bigger part awaits.


The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today for a ticket at an early bird rate!

This futurist isn't scared of AI stealing your job. Here's why

REUTERS/Kim Kyung-Hoon

You know a topic is trending when the likes of Tesla’s Elon Musk and Facebook’s Mark Zuckerberg publicly bicker about its potential risks and rewards. In this case, Musk says he fears artificial intelligence will lead to World War III because nations will compete for A.I. superiority. Zuckerberg, meanwhile, has called such doomsday scenarios “irresponsible” and says he is optimistic about A.I.

But another tech visionary sees the future as more nuanced. Ray Kurzweil, an author and director of engineering at Google, thinks, in the long run, that A.I. will do far more good than harm. Despite some potential downsides, he welcomes the day that computers surpass human intelligence—a tipping point otherwise known as “the singularity.” That’s partly why, in 2008, he cofounded the aptly named Singularity University, an institute that focuses on world-changing technologies. We caught up with the longtime futurist to get his take on the A.I. debate and, well, to ask what the future holds for us all.

Fortune: Has the rate of change in technology been in line with your predictions?

Kurzweil: Many futurists borrow from the imagination of science-fiction writers, but they don’t have a really good methodology for predicting when things will happen. Early on, I realized that timing is important to everything, from stock investing to romance—you’ve got to be in the right place at the right time. And so I started studying technology trends. If you Google how my predictions have fared, you’ll get a 150-page paper analyzing 147 predictions I made about the year 2009, which I wrote in the late ’90s—86% were correct, 78% were exactly to the year.

What’s one prediction that didn’t come to fruition?

That we’d have self-driving cars by 2009. It’s not completely wrong. There actually were some self-driving cars back then, but they were very experimental.

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

He’s not technology.

Have you tried to build models for predicting politics or world events?

The power and influence of governments is decreasing because of the tremendous power of social networks and economic trends. There’s some problem in the pension funds in Spain, and the whole world feels it. I think these kinds of trends affect us much more than the decisions made in Washington and other capitals. That’s not to say they’re not important, but they actually have no impact on the basic trends I’m talking about. Things that happened in the 20th century like World War I, World War II, the Cold War, and the Great Depression had no effect on these very smooth trajectories for technology.

What do you think about the current debate about artificial intelligence? Elon Musk has said it poses an existential threat to humanity.

Technology has always been a double-edged sword, since fire kept us warm but burned down our houses. It’s very clear that overall human life has gotten better, although technology amplifies both our creative and destructive impulses. A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation: It’s very important for your survival to be sensitive to bad news. A little rustling in the leaves may be a predator, and you better pay attention to that. All of these technologies are a risk. And the powerful ones—biotechnology, nanotechnology, and A.I.—are potentially existential risks. I think if you look at history, though, we’re being helped more than we’re being hurt.

How will artificial intelligence and other technologies impact jobs?

We have already eliminated all jobs several times in human history. How many jobs circa 1900 exist today? If I were a prescient futurist in 1900, I would say, “Okay, 38% of you work on farms; 25% of you work in factories. That’s two-thirds of the population. I predict that by the year 2015, that will be 2% on farms and 9% in factories.” And everybody would go, “Oh, my God, we’re going to be out of work.” I would say, “Well, don’t worry, for every job we eliminate, we’re going to create more jobs at the top of the skill ladder.” And people would say, “What new jobs?” And I’d say, “Well, I don’t know. We haven’t invented them yet.”

That continues to be the case, and it creates a difficult political issue because you can look at people driving cars and trucks, and you can be pretty confident those jobs will go away. And you can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.


The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today for a ticket at an early bird rate!

Are chatbots ready to rule the (customer service) world?


With so many conflicting opinions and predictions, it’s hard to tell what the real state of chatbots is, even though they are all the talk around customer service, experience and care these days. Service providers are trying to figure out how best to leverage the bot opportunity.

The bot race

Tech research firm Forrester recently conducted a survey on behalf of Amdocs, polling 7,200 consumers worldwide, and 31 decision makers from Tier-1 service providers. It reported that 86 percent of consumers regularly engage with bots, while 65 percent of service providers are investing in artificial intelligence (AI) to create and deliver better customer experiences and in chatbot infrastructure and capabilities.

AI research firm TechEmergence notes that chatbots are expected to be the top consumer application for AI over the next five years, while tech analyst company Gartner explains the anticipated acceleration of adoption:

“Chatbots are entering the market rapidly and have broad appeal for users due to efficiency and ease of interaction.”

As such, the firm issues a powerful call to action:

“Customer interactions are moving to conversational interfaces, so marketers need to have a bot strategy if they want to be part of that future.”

In a Forbes article, Shep Hyken, who specializes in customer experience, supports these predictions and suggests that the benefits of chatbots to the enterprise include the fact that they:

  • are available 24/7 – making ‘working hours’ irrelevant;
  • don’t make customers wait for an answer, dispensing with hold times;
  • personalize customers’ experience by delivering only relevant information; and
  • build rapport with customers and strengthen the brand.

Chatbot challenges

However, as most consumers know by now, this isn’t always the case:

  • If the chatbot is available but doesn’t understand the inquiry, availability is irrelevant. And how many times have you heard or read “I’m sorry, I didn’t get that”?
  • If you don’t have to wait for chatbots, but they cannot process your request or handle more complex engagements and you have to transfer to a live agent, then hold times are back in play (and you’ve had the added frustration of an extra step in the process).
  • If they can connect customer data to the engagement but haven’t got a 360-degree view of the customer, the response might be neither contextual nor timely (again, causing friction).
  • How can they act as the customer’s ‘friend’ if they don’t really understand their request and so cannot sufficiently and effectively fulfill their needs? Or, if they can’t engage in a way that’s naturally conversational, intuitive, and personalized?

So, while BusinessInsider believes that a chatbot’s usefulness is limited only by “creativity and imagination,” we know there’s more to it than that. Namely, service providers that want to leverage chatbots, and reap the promised rewards of taking customer experience to new heights while decreasing costs, will need to:

  • Ensure that personalization does not rely solely on CRM data, but is based on a 360-degree customer view which includes behavior history, channel preference and journey patterns.
  • Ensure the right balance between virtual and live agents, seamlessly transferring the engagement to a live agent as needed, in a way that is transparent to the customer.
  • Make sure that the chatbot understands intents specific to telecoms so they can more accurately address the needs of service providers’ customers.
  • Integrate chatbots with the relevant business systems to make all the required data readily available.
  • Make sure chatbots can turn every care engagement into a commerce opportunity, by presenting the most relevant and timely marketing offer to customers.
  • Optimize each engagement by learning from past interactions.

Accordingly, a successful chatbot strategy should seek to ensure that a bot:

  • uses intelligence and machine learning;
  • is designed for communications and media industries;
  • understands telco-specific intents; and
  • is fully integrated with core back-end systems.

Learn more about how to achieve each of these critical capabilities in TM Forum’s recently published Quick Insights report, How an Intelligent Chatbot Can Revolutionize the Virtual Agent Experience (page 26).



The Usefulness—and Possible Dangers—of Machine Learning

 University of Pennsylvania workshop addresses potential biases in the predictive technique.

Stephen Hawking once warned that advances in artificial intelligence might eventually “spell the end of the human race.” And yet decision-makers from financial corporations to government agencies have begun to embrace machine learning’s enhanced power to predict—a power that commentators say “will transform how we live, work, and think.”

During the first of a series of seven Optimizing Government workshops held at the University of Pennsylvania Law School last year, Aaron Roth, Associate Professor of Computer and Information Science at the University of Pennsylvania, demystified machine learning, breaking down its functionality, its possibilities and limitations, and its potential for unfair outcomes.

Machine learning, in short, enables users to predict outcomes using past data sets, Roth said. These data-driven algorithms are beginning to take on formerly human-performed tasks, like deciding whom to hire, determining whether an applicant should receive a loan, and identifying potential criminal activity.

In large part, machine learning does not differ from statistics, said Roth. But unlike statistics, which aims to create models for past data, machine learning requires accurate predictions on new examples.

This eye toward the future requires simplicity. Given a set of past, or “training,” data, a decision-maker can always create a complex rule that predicts a label—say, likelihood of paying back a loan—given a set of features, like education and employment. But a lender does not seek to predict whether a past loan applicant included in a dataset actually paid back a loan given her education and employment, but instead whether a new applicant will likely pay back a loan, explained Roth.

A simple rule might not be perfect, but it will provide more accuracy in the long run, said Roth, because it will more effectively generalize a narrow set of data to the population at large. Roth noted that for more complex rules, algorithms must use bigger data sets to combat generalization errors.
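Roth’s contrast between complex and simple rules can be made concrete with a toy experiment (entirely illustrative, not from the workshop): a rule that memorizes every past applicant is perfect on the training data but useless on new applicants, while a one-threshold rule generalizes.

```python
import random

random.seed(42)

def draw(n):
    """Synthetic applicants: one feature in [0, 1]; the true label is
    'feature > 0.5', flipped with 10% noise."""
    return [(x, int((x > 0.5) != (random.random() < 0.1)))
            for x in (random.random() for _ in range(n))]

train, test = draw(200), draw(200)

# "Complex" rule: memorize every training example exactly.
memo = {x: y for x, y in train}
complex_rule = lambda x: memo.get(x, 0)   # unseen inputs fall back to 0

# Simple rule: a single threshold.
simple_rule = lambda x: int(x > 0.5)

def accuracy(rule, pts):
    return sum(rule(x) == y for x, y in pts) / len(pts)

print(accuracy(complex_rule, train))  # perfect on past data
print(accuracy(simple_rule, test))    # far better than the memorizer on new data
print(accuracy(complex_rule, test))
```

The memorizer scores 100% on the training set yet near chance on fresh applicants, which is exactly the generalization failure Roth describes.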

Because machine-learning algorithms work to optimize decision-making, using code and data sets that can be held up to public scrutiny, decision-makers might think machine learning is unbiased. But discrimination can arise in several non-obvious ways, argued Roth.

First, data can encode existing biases. For example, an algorithm that uses training data to predict whether someone will commit a crime should know whether the people represented in the data set actually committed crimes. But that information is not available—rather, an observer can know only whether the people were arrested, and police propensity to arrest certain groups of people might well create bias.

Second, an algorithm created using insufficient amounts of training data can cause a so-called feedback loop that creates unfair results, even if the creator did not mean to encode bias. Roth explained that a lender can observe whether a loan was paid back only if it was granted in the first place. If training data incorrectly show that a group with a certain feature is less likely to pay back a loan, because the lender did not collect enough data, then the lender might continue to deny those people loans to maximize earnings. The lender would never know that the group is actually credit-worthy, because the lender would never be able to observe the rejected group’s loan repayment behavior.

Third, different populations might have different characteristics that require separate models. To demonstrate his point, Roth laid out a scenario where SAT scores reliably indicate whether a person will repay a loan, but a wealthy population employs SAT tutors, while a poor population does not. If the wealthy population then has uniformly higher SAT scores, without being on the whole more loan-worthy than the poor population, then the two populations would need separate rules. A broad rule would preclude otherwise worthy members of the poor population from receiving loans. The result of separate rules is both greater fairness and increased accuracy—but if the law precludes algorithms from considering race, for example, and the disparity is racial, then the rule would disadvantage the non-tutored minority.

Finally, by definition, fewer data exist about groups that are underrepresented in the data set. Thus, even though separate rules can benefit underrepresented populations, such rules create new problems, argued Roth. Because the training data used by machine learning will include fewer points, generalization error can be higher than it is for more common groups, and the algorithm can misclassify underrepresented populations with greater frequency—or in the loan context, deny qualified applicants and approve unqualified applicants at a higher rate.

Roth’s presentation was followed by commentary offered by Richard Berk, the Chair of the Department of Criminology. Berk explained that algorithms are unconstrained by design, which optimizes accuracy, but argued that the lack of constraint might be what gives some critics of artificial intelligence some pause. When decision-makers cede control of algorithms, they lose the ability to control the assembly of information, and algorithms might invent variables from components that alone have, for example, no racial content, but when put together, do.

Berk stated that mitigating fairness concerns often comes at the expense of accuracy, leaving policymakers with a dilemma. Before an algorithm can even be designed, a human must make a decision as to how much accuracy should be sacrificed in the name of fairness.

Roth stated that this tradeoff causes squeamishness among policymakers—not because such tradeoffs are new, but because machine learning is often more quantitative, and therefore makes tradeoffs more visible than with human decision-making. A judge, for example, might make an opaque tradeoff by handing down more guilty verdicts, thereby convicting more guilty people at the expense of punishing the innocent. But that tradeoff is not currently measurable. Both Roth and Berk expressed hope that machine learning’s effect of forcing more open conversations about these tradeoffs will lead to better, more consistent decisions.

Penn Law Professor Cary Coglianese, director of the Penn Program on Regulation, introduced and moderated the workshop. Support for the series came from the Fels Policy Research Initiative at the University of Pennsylvania.



Machine Learning in Marketing

The world of marketing is being transformed at such a fast pace it’s getting hard for marketers to follow the newest tech developments that are being introduced every day. The amount of data digital marketing is creating is so big that a lot of media agencies are actually experiencing an information overload, and don’t know how to make use of it. Data becomes a problem where it should be bringing value to a business. Here are a few ways in which machine learning techniques can help.

Customer Segmentation:

Improve customer segments and targeted advertising using machine learning segmentation methods (e.g. cluster analysis, k-means, nearest neighbour). Classify customers using supervised learning models, find new audiences using recommendation systems, and increase the efficiency of your media spend.
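As a sketch of the segmentation idea, a plain k-means pass over toy customer data (the numbers and features are invented for illustration, not tied to any product) looks like this:

```python
import random

random.seed(0)

# Toy customer records: (monthly spend, visits per month). Two loose segments.
customers = ([(random.gauss(20, 3), random.gauss(2, 0.5)) for _ in range(50)]
             + [(random.gauss(80, 5), random.gauss(10, 1)) for _ in range(50)])

def kmeans(points, k, iters=25):
    """Plain k-means: assign each point to its nearest centre, then re-centre."""
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centres[j])))
            clusters[nearest].append(p)
        centres = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centres[j]
                   for j, cl in enumerate(clusters)]
    return centres, clusters

centres, clusters = kmeans(customers, k=2)
low, high = sorted(c[0] for c in centres)
print(round(low), round(high))  # the two spend segments are recovered
```

Each recovered segment can then be targeted with its own creative and media budget; production systems would use a library implementation (e.g. scikit-learn) rather than this hand-rolled loop.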

Behavioural Analysis:

Find patterns in the way customers interact with your brand by using predictive modelling and forecasting. Optimise conversions and increase customer satisfaction.

Social Media - Early Opportunity Detection:

Analyse real-time Twitter and Facebook data streams to capture current sentiments with respect to brands, products or adverts. Get a head start on sentiment outbreaks to uncover important opportunities and avoid PR crises.

Sales and Marketing Integration:

Build an easy to navigate interface to measure an integrated impact of sales teams and marketing campaigns. Track direct correlation between media budgets and number of products sold.

Influencers Strategy:

Improve the efficiency of your campaigns using social network analysis. Find the audiences most susceptible to your message, and use them to amplify the impact of your campaigns.

Implicit Survey Design:

Improve accuracy of surveys by using established psychological tools and test instruments, such as gamification. Learn about your audience to make informed decisions.

Neuroscience (Beta):

Optimise website usability and impact using eye-tracking methods. Investigate consumers’ perception of adverts using brain-imaging (fMRI) analysis.

For more information about the above methods, or if you’re interested in the different ways in which AI and ML can be used in marketing, visit


Brainpool will be exhibiting at the AI Congress 2018. To meet with them and other leading experts, sign up for your ticket today!


Retail Revolutionized: Three ways to profit from artificial intelligence

 Jill Standish

Whether we’re receiving coupons based on our spending, or product suggestions based on other people’s spending, artificial intelligence (AI) is transforming how consumers shop and experience brands. For retailers, meanwhile, AI could increase profits by almost 60%. It could be a game-changer in this labor-intensive sector, augmenting the workforce and enabling employees to become more productive.

Some retailers already recognize ways for AI to complement their human workforce and boost profits. Stitch Fix is a clothing retailer that combines the expertise of fashion stylists with algorithms that analyze unstructured consumer data to deliver hand-picked items based on each customer’s preferences. Another forward-thinking fashion company is Original Stitch, which deploys AI to analyze customers’ photographs of their favorite shirts before custom-tailoring and delivering a brand-new piece of clothing.

Yet some retailers are hesitant about AI, and unsure how they can keep up to speed with the technology – let alone make the most of it. We have identified three ways for these retailers to revolutionize the retail experience using AI.

1. Understand the consumer

AI allows companies to find out more about how customers behave and what they want, giving them confidence that they are stocking the right products, targeting them at the right consumers, and building the right loyalty programs. 

The data they gather from their Web and mobile channels already enables online retailers to develop more detailed and accurate customer profiles. But this sort of insight does not have to be exclusively Web-based: physical retailers could use AI technology to learn about customer activity as they walk around stores. Which displays do customers linger over? Which products do they take off the shelves but then decide not to buy? This sort of data will tell retailers when, where and how to nudge customers toward purchases, and give them the insights they need to improve the customer experience. 

2. Guide them to what they want – and don’t know they want

Similarly, retailers can use AI to make it easier for customers to find what they are looking for – and, crucially, help them find things they don’t yet know they want. 

This is especially valuable for the largest online brands, with their vast range of products. Consumers who feel overwhelmed by the sheer quantity of items will go elsewhere, so retailers that can guide customers in the right direction have a serious competitive advantage. And it is the online retailers that were first to recognize the value of nudging customers toward further purchases by using machine learning to anticipate their needs.

Used sensitively, AI makes customers feel that retailers understand what they want. Progressive retailers are already using AI to provide more sophisticated online recommendations, but they are also looking into tailoring the homepage to each user so they are presented with the items they desire most. 

Consumers already know that the adverts they see online are personalized to them; Google uses AI to tailor its search results for individual users; and some online retailers use structured data to adapt what they show customers according to what they have searched for in the past. What is stopping retailers from customizing each person’s experience of the entire site? 

3. Knock their socks off

Online shopping impresses customers with its ease and efficiency. As AI makes online shopping easier, customers are less likely to go to stores for commodity products such as laundry detergent. But as far as providing memorable experiences goes, physical stores have the upper hand. So, this is the time to start exploring how to use AI to dazzle customers. 

Grocery retailer Coop Italia is a great example. Customers can simply wave a hand over a box of grapes to see nutritional and provenance information on a raised monitor. It also uses “vertical shelving”: touch applications that enable customers to search for other products and find out about related products, promotions, and even waste-disposal.  At some Neiman Marcus department stores, meanwhile, customers can try out a “memory mirror” – a virtual dressing room to compare outfits, see them from 360 degrees and share video clips with friends. 

With so many of us consulting our phones while we shop – to read reviews and research product information – it is only a matter of time before retailers answer these queries on the shop floor, using bots. AI lets them carry out multidimensional conversations with customers through text-based chats, spoken conversations, gestures and even virtual reality. 

This is not hype. AI advances have already given some retailers increased customer loyalty and higher profits. Now retailers have the opportunity to boost their profits further by using AI alongside the human workforce – producing even greater efficiencies, and truly revolutionizing the in-store experience. 

Machine learning: what does the industry want next?


In this guest post, we hear technology insights and tips from Mariano Albera, VP Technology at Expedia Affiliate Network.

Machine learning is more popular in the travel industry now than ever. There’s a simple explanation for that fact: machine learning is more powerful now than ever before.

The appeal of machine learning – essentially a form of artificial intelligence (AI) whereby computers learn without being explicitly programmed with new information – is clear. At exceptional speed, for example, complex algorithms can identify subtle but important data patterns that humans could never have spotted. In ‘learning’ from that information, the ‘machine’ can predict patterns ahead, and then act to process that knowledge to maximise future business. In a sense, then, machine learning is a modern and highly sophisticated technological application of a long-established notion – study the past to predict the future.

The practical applications of machine learning, and other forms of AI such as data mining, are many and varied in the travel industry.

The rise of the chatbot

‘Chatbots’ are particularly visible examples of machine learning at work. As the name suggests, chatbots are essentially machines – messenger apps – with which customers seem to have conversations. Armed with the knowledge of the customer’s past bookings, the chatbot can offer targeted recommendations highly likely to be converted into sales. Critically, the chatbot keeps learning from each booking the customer makes, so recommendations become more relevant with every new ‘chat’ and customer interaction. That’s a huge benefit in an industry as personalised as travel. Effectively, the machine learns how to close the deal without human help.

In many ways chatbots are already better than humans. They:

  • Provide low-cost 24/7 customer support.

  • Deliver real-time message translation, so you’re not on the phone at two o’clock in the morning trying to find an English-speaking sales assistant in Tokyo.

  • Are much faster than waiting for a call centre to answer the phone. If you want information about train times, good theatre and the weather in New York, for example, a machine will source and deliver that information to you more rapidly than even the most well-informed human.

Admittedly, chatbots cannot always answer complex questions but their sophistication is constantly improving. By definition, the machines keep learning.

Icelandair, Lufthansa and Austrian Airlines are three carriers to have seen the potential of machine learning and introduced chatbots.

Practical planning, time saving

Machine learning also helps in areas such as planning optimal flight routes. Assessing the millions of flight options on, say, a long-haul, round-trip journey, complex algorithms can learn from past booking data to filter those possibilities down to the small number of most practical or appealing options…all in just seconds.

Another application for machine learning is in addressing the problem of duplicate listings. Online travel agents, for example, gathering data from multiple sources, face issues of misspelling, punctuation and differing word orders that have historically caused problems for computers. Now, however, machines can analyse data and work out for themselves that ‘Delta Air Line’ is actually the same as ‘Delta Airlines’. No more staff time wasted de-duping and no more frustrated customers seeing two listings for exactly the same flight.
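The de-duplication idea described above can be sketched with a standard string-similarity measure (Python’s difflib here; the normalisation step and the 0.9 threshold are illustrative choices, not EAN’s actual method):

```python
from difflib import SequenceMatcher

def normalise(name):
    """Lowercase and strip spaces and punctuation, so word order
    and formatting quirks matter less."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def likely_duplicates(a, b, threshold=0.9):
    """Flag two listings as probable duplicates if their normalised
    names are sufficiently similar."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

print(likely_duplicates("Delta Air Line", "Delta Airlines"))   # True
print(likely_duplicates("Delta Airlines", "United Airlines"))  # False
```

Real pipelines layer more signals on top (codes, addresses, learned models), but even this simple ratio catches the “Delta Air Line” vs. “Delta Airlines” case from the text.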

How EAN is benefiting

Like many travel companies, machine learning is increasingly critical to how we do business at Expedia Affiliate Network (EAN), where we use hundreds of hotel features to rank hotels for our travel partners by relevance to an individual consumer’s preferences.

Like chatbots, we learn from every interaction. Let’s say, for example, that a traveller always selects hotels with high-quality gyms but never shows interest in swimming pools. By monitoring each of his selected and rejected options and bookings, our machines learn that fact without being explicitly programmed with those details. So, when the traveller next books, for example, a flight into Atlanta with a partner airline, he is instantly shown a range of suitable local hotels, prioritising gyms over pools, thus maximising the likelihood of a conversion to sale.
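EAN’s production ranker is of course far more sophisticated, but the learn-from-every-interaction loop can be sketched as follows (the hotel names, features and update rule are all invented for illustration):

```python
def update(weights, shown, chosen, lr=0.1):
    """Nudge feature weights toward the chosen hotel's features
    and away from the features of every rejected option."""
    for hotel in shown:
        sign = 1 if hotel is chosen else -1
        for feature in hotel["features"]:
            weights[feature] = weights.get(feature, 0.0) + sign * lr
    return weights

def rank(hotels, weights):
    """Order hotels by the learned preference score, best first."""
    score = lambda h: sum(weights.get(f, 0.0) for f in h["features"])
    return sorted(hotels, key=score, reverse=True)

gym_hotel = {"name": "A", "features": {"gym"}}
pool_hotel = {"name": "B", "features": {"pool"}}

w = {}
for _ in range(5):   # the traveller repeatedly picks the gym hotel
    w = update(w, [gym_hotel, pool_hotel], gym_hotel)

print([h["name"] for h in rank([pool_hotel, gym_hotel], w)])  # ['A', 'B']
```

After a handful of interactions, the gym feature carries positive weight and the pool feature negative weight, so gym hotels are shown first without anyone having programmed that preference explicitly.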

Lessons learnt

At EAN, we are working on using a type of machine learning called ‘deep learning’ to rank and sort hotel images. What we’ve learnt is that the very first thing people glance at within a hotel listing, before considering the hotel name or price, is the image. In fact, it takes us around one twentieth of a second to process an image, so the quality and relevance of the images, and the order in which they are displayed to travellers, is crucial. 

You can read more in the white paper titled Does Deep Learning Hold Answers? 

In the past, we relied on a manual process to select the featured image for a listing, while the other images were randomly ordered or grouped. EAN has over 300,000 properties, and over 10 million images so, as you can imagine, ranking and sorting the images for these manually is difficult. Enter AI to do this automatically.

Looking forward

Our aim is to automatically order and sort the images not just according to image quality, but also to traveller types, customer preferences and seasonality, so that the images most likely to encourage a booking are displayed to each individual consumer.

The good news is that machine learning is advancing fast. The speed at which data can be processed, analysed and actioned, is already exceptional, and is improving daily. Across many industries, not just travel, I’d expect to see machine learning move from niche applications to mission-critical processes.

As is so often the case, in issues of computing, the limitations are as much human as technological. Almost every part of the digital user experience can be improved with AI. We all need to think creatively about how machine learning can enhance our activities.

What do we, as an industry, want to do next?

This is a guest post from Mariano Albera, VP Technology, Expedia Affiliate Network


To find out more about how the Travel Industry is adopting AI, check out The AI Congress. The leading international artificial intelligence show, it takes place at the O2 in London on January 30th & 31st.


Can robots coexist with humans?

Working together: will robots be central to our future?

With advances in artificial intelligence forging ahead, it’s time to think seriously about how we see robots fitting into our society.

Of all the tech trends dominating headlines at the moment, artificial intelligence (AI) seems to be generating the most debate.

As we continue to develop this technology – at a seemingly exponential rate – we face increasing pressure to examine the role we really want it to fulfil, and how it should be integrated.

With many high-profile figures – from Stephen Hawking to Elon Musk – warning of the potential pitfalls, it may take time before society fully accepts the idea of ubiquitous AI, or “robot workers”. But in reality, there is a strong argument that, far from capping innovation in AI, we should find ways to put it to use in order to stay ahead.

Prudential’s global head of AI, Dr Michael Natusch, says machine learning in a business context grew out of a need for up-to-the-minute analytical tools. “What really drew people’s attention across a wide range of industries to machine learning is the ability to extract insight out of large, multi-structured data sets,” he says. “Drawing understanding and insights in an automated and continuous way – that’s what we really mean by AI.

“The way businesses can use AI is exactly the same way that businesses can use human intelligence. It enables us to make decisions, to understand what’s happening, and do things faster, better and cheaper.”

The labour debate

Of course, this drive to lower costs and save time may mean changes to some job roles we know today. According to a recent PwC report, around 30pc of existing UK jobs face automation over the next 15 years – with manual roles in areas such as manufacturing, transport and retail likely to be most affected.

But PwC’s chief economist John Hawksworth believes this could be a positive move. “Automating more repetitive tasks will eliminate some existing jobs, but could also enable workers to focus on higher value, more rewarding and creative work, removing the monotony from our day jobs,” he explains.

And that’s not to mention the upshot in productivity this will bring: “Advances in robotics and AI should also create additional jobs in less automatable parts of the economy as this extra wealth is spent or invested.

“The UK employment rate is at its highest level now since comparable records began in 1971, despite all the advances in digital and other labour-saving technologies we have seen since.”

And as Dr Natusch notes, this widespread concern over AI may be blinding people to its strengths. “Very often, it sounds like AI is in competition with humans, but the real power will come from humans and AI augmenting each other,” he says. “It’s this symbiosis of humans and AI that will drive major advances across a wide range of industries.”

Real robot workers

Of course, cultural factors are just as important as economic ones – especially in sectors such as retail, where robots could prove useful in customer service roles. Hitachi is one company exploring the potential for AI in this area.

Their “symbiotic” robot, named EMIEW3, is designed for customer service, using a cloud-connected “brain” and surveillance cameras to spot people in need of help, communicate with them and offer assistance. Having already been trialled at Tokyo’s busy Haneda Airport, EMIEW3 arrived in the UK for the first time this year.

“These trials are helping to give us a first-hand sense of people’s attitudes to robots, and to see if they find them genuinely helpful,” explains Dr Rachel Jones, senior strategy designer for Hitachi Europe. “It’s also leading to some interesting learnings about the interactions between humans and robots.”

Interestingly, Dr Jones’ team has already noticed contrasts in the way different demographics respond to this new technology. “For example, people in Japan are much more open to innovative technologies, and therefore the introduction of robots is generally embraced more positively,” she explains.

“In Europe and the UK, the reception of robots appears to be more cautious. This raises broader questions about the future of society, including where we want to go with new technologies and how we see robots fitting in.”

But even while we work through the creases from a logistical perspective, Dr Natusch remains positive. “I think AI will enable us to provide services to people that we are not remotely able to do today,” he says. “And that by itself will bring wealth and employment across nations.”

And it’s clear that businesses, in particular, cannot ignore this trend. “I think it’s absolutely imperative to get started with AI today, because ultimately that is what will ensure survival of your organisation,” says Dr Natusch. “Rather than thinking long and hard about your AI strategy, the key thing is to get started with something now.”

Innovations for the future

Modern life is saturated with data, and new technologies are emerging nearly every day – but how can we use these innovations to make a real difference to the world?

Hitachi believes that Social Innovation should underpin everything they do, so they can find ways to tackle the biggest issues we face today.

Visit to learn how Social Innovation is helping Hitachi drive change across the globe.


The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today!

Local digital development agency opens its doors to support innovation

FIN Digital & ISDM Solutions announced a new initiative named Smart Business Spaces as part of their interactive solution offerings. The two firms have joined together to launch an IoT Playground in their loft-style office space. Located two blocks from the White House, the initiative will support organizations seeking opportunities for innovation.

FIN & ISDM will offer dedicated programming to local executives designed to encourage an open exchange of ideas and promote technology. The IoT Playground will host workshops, events, industry-focused trainings, Internet of Things (IoT) demonstrations and Q&A sessions.

"Throughout our time, we've seen that leaders rarely have a space for real conversations about what it means to implement high-tech solutions," said FIN CIO Rakia Finley. "With the support from the DC community we’re excited to change that."

"This initiative will give business leaders a safe space to ask tough questions about the ins and outs of technology. We believe, by doing this, we're supporting D.C.'s vision to create a more diverse and inclusive city that supports the tech economy," said Marcus Finley, CEO of FIN.

Leaders will get insight into utilizing technologies including audio-visual, video, mobile development, smart devices, VR, bots, beacons, and web applications to generate custom solutions for their organization or industry. The programming aims to foster innovation and help organizations turn ideas into reality.

“We’re excited about the power of tech and we want to see local job creators just as excited. It’s our belief that by creating this space for them to come and ask questions they will be empowered to get innovative,” said Stephen Milner, of ISDM Solutions.

The initiative will take place in the joint office of FIN Digital & ISDM Solutions for the remainder of the year with a launch event on September 14th for D.C. leaders.