This futurist isn't scared of AI stealing your job. Here's why

REUTERS/Kim Kyung-Hoon

You know a topic is trending when the likes of Tesla’s Elon Musk and Facebook’s Mark Zuckerberg publicly bicker about its potential risks and rewards. In this case, Musk says he fears artificial intelligence will lead to World War III because nations will compete for A.I. superiority. Zuckerberg, meanwhile, has called such doomsday scenarios “irresponsible” and says he is optimistic about A.I.

But another tech visionary sees the future as more nuanced. Ray Kurzweil, an author and director of engineering at Google, thinks, in the long run, that A.I. will do far more good than harm. Despite some potential downsides, he welcomes the day that computers surpass human intelligence—a tipping point otherwise known as “the singularity.” That’s partly why, in 2008, he cofounded the aptly named Singularity University, an institute that focuses on world-changing technologies. We caught up with the longtime futurist to get his take on the A.I. debate and, well, to ask what the future holds for us all.

Fortune: Has the rate of change in technology been in line with your predictions?

Kurzweil: Many futurists borrow from the imagination of science-fiction writers, but they don’t have a really good methodology for predicting when things will happen. Early on, I realized that timing is important to everything, from stock investing to romance—you’ve got to be in the right place at the right time. And so I started studying technology trends. If you Google how my predictions have fared, you’ll get a 150-page paper analyzing 147 predictions I made about the year 2009, which I wrote in the late ’90s—86% were correct, 78% were exactly to the year.

What’s one prediction that didn’t come to fruition?

That we’d have self-driving cars by 2009. It’s not completely wrong. There actually were some self-driving cars back then, but they were very experimental.

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

He’s not technology.

Have you tried to build models for predicting politics or world events?

The power and influence of governments is decreasing because of the tremendous power of social networks and economic trends. There’s some problem in the pension funds in Spain, and the whole world feels it. I think these kinds of trends affect us much more than the decisions made in Washington and other capitals. That’s not to say they’re not important, but they actually have no impact on the basic trends I’m talking about. Things that happened in the 20th century like World War I, World War II, the Cold War, and the Great Depression had no effect on these very smooth trajectories for technology.

What do you think about the current debate about artificial intelligence? Elon Musk has said it poses an existential threat to humanity.

Technology has always been a double-edged sword, since fire kept us warm but burned down our houses. It’s very clear that overall human life has gotten better, although technology amplifies both our creative and destructive impulses. A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation: It’s very important for your survival to be sensitive to bad news. A little rustling in the leaves may be a predator, and you better pay attention to that. All of these technologies are a risk. And the powerful ones—biotechnology, nanotechnology, and A.I.—are potentially existential risks. I think if you look at history, though, we’re being helped more than we’re being hurt.

How will artificial intelligence and other technologies impact jobs?

We have already eliminated all jobs several times in human history. How many jobs circa 1900 exist today? If I were a prescient futurist in 1900, I would say, “Okay, 38% of you work on farms; 25% of you work in factories. That’s two-thirds of the population. I predict that by the year 2015, that will be 2% on farms and 9% in factories.” And everybody would go, “Oh, my God, we’re going to be out of work.” I would say, “Well, don’t worry, for every job we eliminate, we’re going to create more jobs at the top of the skill ladder.” And people would say, “What new jobs?” And I’d say, “Well, I don’t know. We haven’t invented them yet.”

That continues to be the case, and it creates a difficult political issue because you can look at people driving cars and trucks, and you can be pretty confident those jobs will go away. And you can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.


The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today for a ticket at an early bird rate!

Are chatbots ready to rule the (customer service) world?

With so many conflicting opinions and predictions, it’s hard to tell what the real state of chatbots is, even though they are all the talk in customer service, experience and care these days. Service providers are trying to figure out how they can best leverage the bot opportunity.

The bot race

Tech research firm Forrester recently conducted a survey on behalf of Amdocs, polling 7,200 consumers worldwide, and 31 decision makers from Tier-1 service providers. It reported that 86 percent of consumers regularly engage with bots, while 65 percent of service providers are investing in artificial intelligence (AI) to create and deliver better customer experiences and in chatbot infrastructure and capabilities.

AI research firm TechEmergence notes that chatbots are expected to be the top consumer application for AI over the next five years, while tech analyst company Gartner explains the anticipated acceleration of adoption:

“Chatbots are entering the market rapidly and have broad appeal for users due to efficiency and ease of interaction.”

As such, the firm issues a powerful call to action:

“Customer interactions are moving to conversational interfaces, so marketers need to have a bot strategy if they want to be part of that future.”

In a Forbes article, Shep Hyken, who specializes in customer experience, supports these predictions and suggests that the benefits of chatbots to the enterprise include the fact that they:

  • are available 24/7 – making ‘working hours’ irrelevant;
  • don’t make customers wait for an answer, dispensing with hold times;
  • personalize customers’ experience by delivering only relevant information; and
  • make friends with customers and build the brand.

Chatbot challenges

However, as most consumers know by now, this isn’t always the case:

  • If the chatbot is available but doesn’t understand the inquiry, availability is irrelevant. How many times have you heard or read “I’m sorry, I didn’t get that”?
  • If chatbots cannot process your request or handle more complex engagements, and you have to be transferred to a live agent, then hold times are back in play (with the added frustration of an extra step in the process).
  • If they can connect customer data to the engagement but don’t have a 360-degree view of the customer, the response might be neither contextual nor timely (again, causing friction).
  • How can they act as the customer’s ‘friend’ if they don’t really understand the request and so cannot effectively fulfill the customer’s needs? Or if they can’t engage in a way that’s naturally conversational, intuitive, and personalized?

So, while BusinessInsider believes that a chatbot’s usefulness is limited only by “creativity and imagination,” we know there’s more to it than that. Namely, service providers that want to leverage chatbots, and reap the promised rewards of taking customer experience to new heights while decreasing costs, will need to:

  • Ensure that personalization does not rely solely on CRM data, but is based on a 360-degree customer view that includes behavior history, channel preference and journey patterns.
  • Ensure the right balance between virtual and live agents, seamlessly transferring the engagement to a live agent as needed, in a way that is transparent to the customer.
  • Make sure that the chatbot understands intents specific to telecoms so they can more accurately address the needs of service providers’ customers.
  • Integrate chatbots with the relevant business systems to make all the required data readily available.
  • Make sure chatbots can turn every care engagement into a commerce opportunity, by presenting the most relevant and timely marketing offer to customers.
  • Optimize each engagement by learning from past interactions.

Accordingly, a successful chatbot strategy should seek to ensure that a bot:

  • uses intelligence and machine learning;
  • is designed for communications and media industries;
  • understands telco-specific intents; and
  • is fully integrated with core back-end systems.

Learn more about how to achieve each of these critical capabilities in TM Forum’s recently published Quick Insights report, How an Intelligent Chatbot Can Revolutionize the Virtual Agent Experience (page 26).



The Usefulness—and Possible Dangers—of Machine Learning

University of Pennsylvania workshop addresses potential biases in the predictive technique.

Stephen Hawking once warned that advances in artificial intelligence might eventually “spell the end of the human race.” And yet decision-makers from financial corporations to government agencies have begun to embrace machine learning’s enhanced power to predict—a power that commentators say “will transform how we live, work, and think.”

During the first of a series of seven Optimizing Government workshops held at the University of Pennsylvania Law School last year, Aaron Roth, Associate Professor of Computer and Information Science at the University of Pennsylvania, demystified machine learning, breaking down its functionality, its possibilities and limitations, and its potential for unfair outcomes.

Machine learning, in short, enables users to predict outcomes using past data sets, Roth said. These data-driven algorithms are beginning to take on formerly human-performed tasks, like deciding whom to hire, determining whether an applicant should receive a loan, and identifying potential criminal activity.

In large part, machine learning does not differ from statistics, said Roth. But unlike statistics, which aims to create models that fit past data, machine learning aims to make accurate predictions on new examples.

This eye toward the future requires simplicity. Given a set of past, or “training,” data, a decision-maker can always create a complex rule that predicts a label—say, likelihood of paying back a loan—given a set of features, like education and employment. But a lender does not seek to predict whether a past loan applicant included in a dataset actually paid back a loan given her education and employment, but instead whether a new applicant will likely pay back a loan, explained Roth.

A simple rule might not be perfect, but it will provide more accuracy in the long run, said Roth, because it will more effectively generalize a narrow set of data to the population at large. Roth noted that for more complex rules, algorithms must use bigger data sets to combat generalization errors.
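Roth’s point about simple rules generalizing better can be illustrated with a toy sketch (invented data, not Roth’s example): a “complex” rule that memorizes the training set is compared against a single hypothetical threshold rule on held-out data.

```python
import random

random.seed(0)

# Toy data: feature = years employed; label = repaid loan (True/False).
# Repayment mostly tracks employment length, plus 10% label noise.
def make_data(n):
    data = []
    for _ in range(n):
        years = random.randint(0, 10)
        repaid = (years >= 4) if random.random() < 0.9 else (years < 4)
        data.append((years, repaid))
    return data

train, test = make_data(50), make_data(1000)

# "Complex" rule: memorize the training examples (perfect fit to past data).
memo = {}
for years, repaid in train:
    memo[years] = repaid  # later examples overwrite earlier ones

# "Simple" rule: a single threshold.
def simple_rule(years):
    return years >= 4

def accuracy(rule, data):
    return sum(rule(x) == y for x, y in data) / len(data)

complex_acc = accuracy(lambda x: memo.get(x, False), test)
simple_acc = accuracy(simple_rule, test)
print(f"memorizing rule on new data: {complex_acc:.2f}")
print(f"simple threshold on new data: {simple_acc:.2f}")
```

The memorizing rule fits the noisy training labels exactly, while the simple threshold tracks the underlying pattern and tends to hold up better on fresh applicants.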

Because machine-learning algorithms work to optimize decision-making, using code and data sets that can be held up to public scrutiny, decision-makers might think machine learning is unbiased. But discrimination can arise in several non-obvious ways, argued Roth.

First, data can encode existing biases. For example, an algorithm that uses training data to predict whether someone will commit a crime should know whether the people represented in the data set actually committed crimes. But that information is not available—rather, an observer can know only whether the people were arrested, and police propensity to arrest certain groups of people might well create bias.

Second, an algorithm created using insufficient amounts of training data can cause a so-called feedback loop that creates unfair results, even if the creator did not mean to encode bias. Roth explained that a lender can observe whether a loan was paid back only if it was granted in the first place. If training data incorrectly show that a group with a certain feature is less likely to pay back a loan, because the lender did not collect enough data, then the lender might continue to deny those people loans to maximize earnings. The lender would never know that the group is actually credit-worthy, because the lender would never be able to observe the rejected group’s loan repayment behavior.
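The feedback loop Roth describes can be simulated directly. In this hypothetical sketch, two groups repay at the same true rate, but the lender starts with a pessimistic belief about group B and can only update beliefs on loans it actually grants:

```python
import random

random.seed(1)

# Both groups actually repay at the same high rate.
TRUE_REPAY_RATE = {"A": 0.9, "B": 0.9}

# Biased starting beliefs, e.g. from too little initial data on group B.
belief = {"A": 0.9, "B": 0.3}
APPROVE_THRESHOLD = 0.5

observed = {"A": [], "B": []}
for _ in range(1000):
    group = random.choice(["A", "B"])
    if belief[group] >= APPROVE_THRESHOLD:  # lend only if we believe
        repaid = random.random() < TRUE_REPAY_RATE[group]
        observed[group].append(repaid)
        # update belief from everything observed about this group
        belief[group] = sum(observed[group]) / len(observed[group])
    # if the loan is denied, repayment is never observed: belief is frozen

print(f"final belief about A: {belief['A']:.2f}")
print(f"final belief about B: {belief['B']:.2f}")
print(f"loans observed: A={len(observed['A'])}, B={len(observed['B'])}")
```

Group B is never approved, so no repayment data is ever collected and the incorrect belief persists indefinitely, exactly the self-confirming loop Roth describes.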

Third, different populations might have different characteristics that require separate models. To demonstrate his point, Roth laid out a scenario where SAT scores reliably indicate whether a person will repay a loan, but a wealthy population employs SAT tutors, while a poor population does not. If the wealthy population then has uniformly higher SAT scores, without being on the whole more loan-worthy than the poor population, then the two populations would need separate rules. A broad rule would preclude otherwise worthy members of the poor population from receiving loans. The result of separate rules is both greater fairness and increased accuracy—but if the law precludes algorithms from considering race, for example, and the disparity is racial, then the rule would disadvantage the non-tutored minority.
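Roth’s SAT scenario can be made concrete with a tiny worked example (the scores and thresholds below are invented for illustration): a single threshold tuned to the tutored group rejects creditworthy members of the untutored group, while per-group thresholds classify everyone correctly.

```python
# (score, actually creditworthy) pairs; tutoring shifts the wealthy
# group's scores up by ~200 without changing loan-worthiness.
wealthy = [(1400, True), (1350, True), (1150, False), (1100, False)]
poor    = [(1200, True), (1150, True), (950, False),  (900, False)]

def approve(score, threshold):
    return score >= threshold

# One broad rule tuned to the tutored group's scores...
broad = 1300
# ...versus a separate threshold per population.
per_group = {"wealthy": 1300, "poor": 1100}

broad_errors = sum(approve(s, broad) != worthy for s, worthy in wealthy + poor)
split_errors = (
    sum(approve(s, per_group["wealthy"]) != worthy for s, worthy in wealthy)
    + sum(approve(s, per_group["poor"]) != worthy for s, worthy in poor)
)
print(f"broad rule errors:     {broad_errors}")  # rejects worthy poor applicants
print(f"per-group rule errors: {split_errors}")
```

The broad rule makes two errors, both against creditworthy applicants from the untutored group; separate rules make none, which is the fairness-plus-accuracy gain Roth points to.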

Finally, by definition, fewer data exist about groups that are underrepresented in the data set. Thus, even though separate rules can benefit underrepresented populations, such rules create new problems, argued Roth. Because the training data used by machine learning will include fewer points, generalization error can be higher than it is for more common groups, and the algorithm can misclassify underrepresented populations with greater frequency—or in the loan context, deny qualified applicants and approve unqualified applicants at a higher rate.

Roth’s presentation was followed by commentary from Richard Berk, Chair of the Department of Criminology at the University of Pennsylvania. Berk explained that algorithms are unconstrained by design, which optimizes accuracy, but argued that this lack of constraint might be what gives some critics of artificial intelligence pause. When decision-makers cede control to algorithms, they lose the ability to control how information is assembled, and algorithms might construct variables from components that individually have no racial content, for example, but that do when combined.

Berk stated that mitigating fairness concerns often comes at the expense of accuracy, leaving policymakers with a dilemma. Before an algorithm can even be designed, a human must make a decision as to how much accuracy should be sacrificed in the name of fairness.

Roth stated that this tradeoff causes squeamishness among policymakers—not because such tradeoffs are new, but because machine learning is often more quantitative, and therefore makes tradeoffs more visible than with human decision-making. A judge, for example, might make an opaque tradeoff by handing down more guilty verdicts, thereby convicting more guilty people at the expense of punishing the innocent. But that tradeoff is not currently measurable. Both Roth and Berk expressed hope that machine learning’s effect of forcing more open conversations about these tradeoffs will lead to better, more consistent decisions.

Penn Law Professor Cary Coglianese, director of the Penn Program on Regulation, introduced and moderated the workshop. Support for the series came from the Fels Policy Research Initiative at the University of Pennsylvania.



Machine Learning in Marketing

The world of marketing is being transformed at such a fast pace it’s getting hard for marketers to follow the newest tech developments that are being introduced every day. The amount of data digital marketing is creating is so big that a lot of media agencies are actually experiencing an information overload, and don’t know how to make use of it. Data becomes a problem where it should be bringing value to a business. Here are a few ways in which machine learning techniques can help.

Customer Segmentation:

Improve customer segments and targeted advertising using machine learning segmentation methods (e.g. cluster analysis, k-means, nearest neighbour). Classify customers using supervised learning models, find new audiences using recommendation systems, and increase the efficiency of your media spend.
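As a minimal sketch of the clustering idea (plain Python, invented customer data; a production system would use a library such as scikit-learn and many more features), k-means groups customers by visit frequency and basket value:

```python
import math
import random

random.seed(2)

# Hypothetical customer features: (monthly visits, average basket value).
customers = [(2, 15), (3, 18), (1, 12),      # low-engagement shoppers
             (12, 80), (14, 95), (11, 70)]   # high-value shoppers

def kmeans(points, k, iters=10):
    """Tiny k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its assigned points."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(customers, k=2)
for c, members in zip(centroids, clusters):
    print(f"segment centred at {c}: {members}")
```

The two recovered segments could then be targeted with different creative and budgets, which is the media-spend efficiency the paragraph above refers to.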

Behavioural Analysis:

Find patterns in the way customers interact with your brand by using predictive modelling and forecasting. Optimise conversions, increase customer satisfaction.

Social Media - Early Opportunity Detection:

Analyse real-time Twitter and Facebook data streams to capture current sentiments with respect to brands, products or adverts. Get a head start on sentiment outbreaks to uncover important opportunities and avoid PR crises.
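A minimal lexicon-based version of this idea (a toy for illustration; production sentiment systems use trained models) scores each message in a window and flags a spike in negative mentions:

```python
# Tiny hand-built sentiment lexicon (hypothetical words).
POSITIVE = {"love", "great", "amazing", "fast"}
NEGATIVE = {"hate", "broken", "slow", "refund"}

def sentiment(text):
    """Positive-minus-negative word count, after stripping punctuation."""
    words = [w.strip(",.!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

stream = [
    "love the new handset, amazing camera",
    "app is broken again, want a refund",
    "delivery was fast, great service",
]
scores = [sentiment(t) for t in stream]
print(scores)

# Alert if the share of negative mentions in the window crosses a threshold.
if sum(s < 0 for s in scores) / len(scores) > 0.3:
    print("negative sentiment spike: escalate to PR team")
```

Running this windowed check continuously over a live Twitter or Facebook stream is what turns raw mentions into the early-warning signal described above.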

Sales and Marketing Integration:

Build an easy to navigate interface to measure an integrated impact of sales teams and marketing campaigns. Track direct correlation between media budgets and number of products sold.

Influencers Strategy:

Improve the efficiency of your campaigns using social network analysis. Find the audience most susceptible to your message, and use that audience to amplify the impact of your campaigns.

Implicit Survey Design:

Improve accuracy of surveys by using established psychological tools and test instruments, such as gamification. Learn about your audience to make informed decisions.

Neuroscience (Beta):

Optimise website usability and impact using eye-tracking methods. Investigate consumers’ perception of adverts using brain-imaging (fMRI) analysis.

For more information about the above methods, or the different ways in which AI and ML can be used in marketing, visit


Brainpool will be exhibiting at the AI Congress 2018. To meet with them and other leading experts, sign up for your ticket today!


Retail Revolutionized: Three ways to profit from artificial intelligence

Jill Standish

Whether we’re receiving coupons based on our spending, or product suggestions based on other people’s spending, artificial intelligence (AI) is transforming how consumers shop and experience brands. For retailers, meanwhile, AI could increase profits by almost 60%. It could be a game-changer in this labor-intensive sector, augmenting the workforce and enabling employees to become more productive.

Some retailers already recognize ways for AI to complement their human workforce and boost profits. Stitch Fix is a clothing retailer that combines the expertise of fashion stylists with algorithms that analyze unstructured consumer data to deliver hand-picked items based on each customer’s preferences. Another forward-thinking fashion company is Original Stitch, which deploys AI to analyze customers’ photographs of their favorite shirts before custom-tailoring and delivering a brand-new piece of clothing.

Yet some retailers are hesitant about AI, and unsure how they can keep up to speed with the technology – let alone make the most of it. We have identified three ways for these retailers to revolutionize the retail experience using AI.

1. Understand the consumer

AI allows companies to find out more about how customers behave and what they want, giving them confidence that they are stocking the right products, targeting them at the right consumers, and building the right loyalty programs. 

The data they gather from their Web and mobile channels already enables online retailers to develop more detailed and accurate customer profiles. But this sort of insight does not have to be exclusively Web-based: physical retailers could use AI technology to learn about customer activity as they walk around stores. Which displays do customers linger over? Which products do they take off the shelves but then decide not to buy? This sort of data will tell retailers when, where and how to nudge customers toward purchases, and give them the insights they need to improve the customer experience. 

2. Guide them to what they want – and don’t know they want

Similarly, retailers can use AI to make it easier for customers to find what they are looking for – and, crucially, help them find things they don’t yet know they want. 

This is especially valuable for the largest online brands, with their vast range of products. Consumers who feel overwhelmed by the sheer quantity of items will go elsewhere, so retailers that can guide customers in the right direction have a serious competitive advantage. And it is the online retailers that were first to recognize the value of nudging customers toward further purchases by using machine learning to anticipate their needs.

Used sensitively, AI makes customers feel that retailers understand what they want. Progressive retailers are already using AI to provide more sophisticated online recommendations, but they are also looking into tailoring the homepage to each user so they are presented with the items they desire most. 

Consumers already know that the adverts they see online are personalized to them; Google uses AI to tailor its search results for individual users; and some online retailers use structured data to adapt what they show customers according to what they have searched for in the past. What is stopping retailers from customizing each person’s experience of the entire site? 

3. Knock their socks off

Online shopping impresses customers with its ease and efficiency. As AI makes online shopping easier, customers are less likely to go to stores for commodity products such as laundry detergent. But as far as providing memorable experiences goes, physical stores have the upper hand. So, this is the time to start exploring how to use AI to dazzle customers. 

Grocery retailer Coop Italia is a great example. Customers can simply wave a hand over a box of grapes to see nutritional and provenance information on a raised monitor. It also uses “vertical shelving”: touch applications that enable customers to search for other products and find out about related products, promotions, and even waste-disposal.  At some Neiman Marcus department stores, meanwhile, customers can try out a “memory mirror” – a virtual dressing room to compare outfits, see them from 360 degrees and share video clips with friends. 

With so many of us consulting our phones while we shop – to read reviews and research product information – it is only a matter of time before retailers answer these queries on the shop floor, using bots. AI lets them carry out multidimensional conversations with customers through text-based chats, spoken conversations, gestures and even virtual reality. 

This is not hype. AI advances have already given some retailers increased customer loyalty and higher profits. Now retailers have the opportunity to boost their profits further by using AI alongside the human workforce – producing even greater efficiencies, and truly revolutionizing the in-store experience. 

Machine learning: what does the industry want next?


In this guest post, we hear technology insights and tips from Mariano Albera, VP Technology at Expedia Affiliate Network.

Machine learning is more popular in the travel industry now than ever. There’s a simple explanation for that fact: machine learning is more powerful now than ever before.

The appeal of machine learning – essentially a form of artificial intelligence (AI) whereby computers learn without being explicitly programmed with new information – is clear. At exceptional speed, for example, complex algorithms can identify subtle but important data patterns that humans could never have spotted. In ‘learning’ from that information, the ‘machine’ can predict patterns ahead, and then act to process that knowledge to maximise future business. In a sense, then, machine learning is a modern and highly sophisticated technological application of a long-established notion – study the past to predict the future.

Machine learning is a modern and highly sophisticated technological application of a long-established notion – study the past to predict the future

The practical applications of machine learning, and other forms of AI such as data mining, are many and varied in the travel industry.

The rise of the chatbot

‘Chatbots’ are particularly visible examples of machine learning at work. As the name suggests, chatbots are essentially machines – messenger apps – with which customers seem to have conversations. Armed with the knowledge of the customer’s past bookings, the chatbot can offer targeted recommendations highly likely to be converted into sales. Critically, the chatbot keeps learning from each booking the customer makes, so recommendations become more relevant with every new ‘chat’ and customer interaction. That’s a huge benefit in an industry as personalised as travel. Effectively, the machine learns how to close the deal without human help.

In many ways chatbots are already better than humans. They:

  • Provide low-cost 24/7 customer support.

  • Deliver real-time message translation, so you’re not on the phone at two o’clock in the morning trying to find an English-speaking sales assistant in Tokyo.

  • Are much faster than waiting for a call centre to answer the phone. If you want information about train times, good theatre and the weather in New York, for example, a machine will source and deliver that information to you more rapidly than even the most well-informed human.

Admittedly, chatbots cannot always answer complex questions but their sophistication is constantly improving. By definition, the machines keep learning.

Icelandair, Lufthansa and Austrian Airlines are three carriers to have seen the potential of machine learning and introduced chatbots.

Practical planning, time saving

Machine learning also helps in areas such as planning optimal flight routes. Assessing the millions of flight options on, say, a long-haul, round-trip journey, complex algorithms can learn from past booking data to filter those possibilities down to the small number of most practical or appealing options…all in just seconds.

Another application for machine learning is in addressing the problem of duplicate listings. Online travel agents, for example, gathering data from multiple sources, face issues of misspelling, punctuation and differing word orders that have historically caused problems for computers. Now, however, machines can analyse data and work out for themselves that ‘Delta Air Line’ is actually the same as ‘Delta Airlines’. No more staff time wasted de-duping and no more frustrated customers seeing two listings for exactly the same flight.
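A simple fuzzy-matching sketch using Python’s standard library shows the idea; real de-duplication pipelines use richer normalisation and trained matchers, and the 0.85 threshold here is an arbitrary choice for illustration:

```python
from difflib import SequenceMatcher

def normalise(name):
    # Crude canonical form: lowercase, drop dots, unify hyphens.
    return name.lower().replace(".", "").replace("-", " ")

def same_carrier(a, b, threshold=0.85):
    """Treat two listing names as duplicates if they are similar enough."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

print(same_carrier("Delta Air Line", "Delta Airlines"))   # duplicates
print(same_carrier("Delta Air Line", "United Airlines"))  # distinct carriers
```

Scoring every candidate pair this way lets the system merge near-identical listings automatically instead of relying on staff to de-dupe by hand.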

How EAN is benefiting

As at many travel companies, machine learning is increasingly critical to how we do business at Expedia Affiliate Network (EAN), where we use hundreds of hotel features to rank hotels for our travel partners by relevance to an individual consumer’s preferences.

Like chatbots, we learn from every interaction. Let’s say, for example, that a traveller always selects hotels with high-quality gyms but never shows interest in swimming pools. By monitoring each of his selected and rejected options and bookings, our machines learn that fact without being explicitly programmed with those details. So, when the traveller next books, for example, a flight into Atlanta with a partner airline, he is instantly shown a range of suitable local hotels, prioritising gyms over pools, thus maximising the likelihood of a conversion to sale.
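The gym-over-pool example can be sketched as a weighted scoring function. The feature weights below are hypothetical stand-ins for what would be learned from a traveller’s booking history, and the hotel names are invented; the real ranker uses hundreds of features:

```python
# Learned-preference weights for one traveller (hypothetical values):
# strong interest in gyms, near-indifference to pools.
traveller_weights = {"gym": 0.8, "pool": 0.05, "wifi": 0.4}

# Candidate hotels with binary amenity features (invented examples).
hotels = [
    {"name": "Midtown Suites", "gym": 1, "pool": 1, "wifi": 1},
    {"name": "Airport Inn",    "gym": 0, "pool": 1, "wifi": 1},
    {"name": "Fitness Lodge",  "gym": 1, "pool": 0, "wifi": 1},
]

def score(hotel):
    """Weighted sum of the traveller's preferences over hotel features."""
    return sum(traveller_weights[f] * hotel[f] for f in traveller_weights)

ranked = sorted(hotels, key=score, reverse=True)
print([h["name"] for h in ranked])
```

Hotels with gyms rise to the top of the list for this traveller, while the pool-only property drops to the bottom, mirroring the personalised ordering described above.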

Lessons learnt

At EAN, we are working on using a type of machine learning called ‘deep learning’ to rank and sort hotel images. What we’ve learnt is that the very first thing people glance at within a hotel listing, before considering the hotel name or price, is the image. In fact, it takes us around one twentieth of a second to process an image, so the quality and relevance of the images, and the order in which they are displayed to travellers, is crucial.

More on this in the white paper titled Does Deep Learning Hold Answers?

In the past, we relied on a manual process to select the featured image for a listing, while the other images were randomly ordered or grouped. EAN has over 300,000 properties and over 10 million images, so, as you can imagine, ranking and sorting the images manually is difficult. Enter AI to do this automatically.

Looking forward

Our aim is to automatically order and sort the images not just according to image quality, but also to traveller types, customer preferences and seasonality, so that the images most likely to encourage a booking are displayed to each individual consumer.

The speed at which data can be processed, analysed and actioned, is already exceptional, and is improving daily

The good news is that machine learning is advancing fast. The speed at which data can be processed, analysed and actioned, is already exceptional, and is improving daily. Across many industries, not just travel, I’d expect to see machine learning move from niche applications to mission-critical processes.

As is so often the case, in issues of computing, the limitations are as much human as technological. Almost every part of the digital user experience can be improved with AI. We all need to think creatively about how machine learning can enhance our activities.

What do we, as an industry, want to do next?

This is a guest post from Mariano Albera, VP Technology, Expedia Affiliate Network


To find out more about how the Travel Industry is adopting AI, check out The AI Congress. The leading international artificial intelligence show, it takes place at the O2 in London on January 30th & 31st.


Can robots coexist with humans?

Working together: will robots be central to our future?

With advances in artificial intelligence forging ahead, it’s time to think seriously about how we see robots fitting into our society.

Of all the tech trends dominating headlines at the moment, artificial intelligence (AI) seems to be generating the most debate.

As we continue to develop this technology – at a seemingly exponential rate – we face increasing pressure to examine the role we really want it to fulfil, and how it should be integrated.

With many high-profile figures – from Stephen Hawking to Elon Musk – warning of the potential pitfalls, it may take time before society fully accepts the idea of ubiquitous AI, or “robot workers”. But in reality, there is a strong argument that, far from capping innovation in AI, we should find ways to put it to use in order to stay ahead.

Prudential’s global head of AI, Dr Michael Natusch, says machine learning in a business context grew out of a need for up-to-the-minute analytical tools. “What really drew people’s attention across a wide range of industries to machine learning is the ability to extract insight out of large, multi-structured data sets,” he says. “Drawing understanding and insights in an automated and continuous way – that’s what we really mean by AI.

“The way businesses can use AI is exactly the same way that businesses can use human intelligence. It enables us to make decisions, to understand what’s happening, and do things faster, better and cheaper.”

The labour debate

Of course, this drive to lower costs and save time may mean changes to some job roles we know today. According to a recent PwC report, around 30pc of existing UK jobs face automation over the next 15 years – with manual roles in areas such as manufacturing, transport and retail likely to be most affected.

But PwC’s chief economist John Hawksworth believes this could be a positive move. “Automating more repetitive tasks will eliminate some existing jobs, but could also enable workers to focus on higher value, more rewarding and creative work, removing the monotony from our day jobs,” he explains.

And that’s not to mention the boost in productivity this will bring: “Advances in robotics and AI should also create additional jobs in less automatable parts of the economy as this extra wealth is spent or invested.

“The UK employment rate is at its highest level now since comparable records began in 1971, despite all the advances in digital and other labour-saving technologies we have seen since.”

And as Dr Natusch notes, this widespread concern over AI may be blinding people to its strengths. “Very often, it sounds like AI is in competition with humans, but the real power will come from humans and AI augmenting each other,” he says. “It’s this symbiosis of humans and AI that will drive major advances across a wide range of industries.”

Real robot workers

Of course, cultural factors are just as important as economic ones – especially in sectors such as retail, where robots could prove useful in customer service roles. Hitachi is one company exploring the potential for AI in this area.

Its “symbiotic” robot, named EMIEW3, is designed for customer service, using a cloud-connected “brain” and surveillance cameras to spot people in need of help, communicate with them and offer assistance. Having already been trialled at Tokyo’s busy Haneda Airport, EMIEW3 arrived in the UK for the first time this year.

“These trials are helping to give us a first-hand sense of people’s attitudes to robots, and to see if they find them genuinely helpful,” explains Dr Rachel Jones, senior strategy designer for Hitachi Europe. “It’s also leading to some interesting learnings about the interactions between humans and robots.”

Interestingly, Dr Jones’ team has already noticed contrasts in the way different demographics respond to this new technology. “For example, people in Japan are much more open to innovative technologies, and therefore the introduction of robots is generally embraced more positively,” she explains.

“In Europe and the UK, the reception of robots appears to be more cautious. This raises broader questions about the future of society, including where we want to go with new technologies and how we see robots fitting in.”

But even while we work through the creases from a logistical perspective, Dr Natusch remains positive. “I think AI will enable us to provide services to people that we are not remotely able to do today,” he says. “And that by itself will bring wealth and employment across nations.”

And it’s clear that businesses, in particular, cannot ignore this trend. “I think it’s absolutely imperative to get started with AI today, because ultimately that is what will ensure survival of your organisation,” says Dr Natusch. “Rather than thinking long and hard about your AI strategy, the key thing is to get started with something now.”

Innovations for the future

Modern life is saturated with data, and new technologies are emerging nearly every day – but how can we use these innovations to make a real difference to the world?

Hitachi believes that Social Innovation should underpin everything they do, so they can find ways to tackle the biggest issues we face today.

Visit the Hitachi Social Innovation website to learn how Social Innovation is helping Hitachi drive change across the globe.



Local Digital Development agency opens its doors to support innovation.

FIN Digital & ISDM Solutions have announced a new initiative, Smart Business Spaces, as part of their interactive solution offerings. The two firms have joined together to launch an IOT Playground in their loft-style office space. Located two blocks from the White House, the initiative will support organizations seeking opportunities for innovation.

FIN & ISDM will offer dedicated programming to local executives designed to encourage an open exchange of ideas and promote technology. The IOT Playground will host workshops, events, industry-focused trainings, Internet of Things (IoT) demonstrations and Q&A sessions.
"Throughout our time, we've seen that leaders rarely have a space for real conversations about what it means to implement high-tech solutions," said FIN CIO Rakia Finley. "With support from the D.C. community, we’re excited to change that."

"This initiative will give business leaders a safe space to ask tough questions about the ins and outs of technology. We believe that, by doing this, we're supporting D.C.'s vision to create a more diverse and inclusive city that supports the tech economy," said FIN CEO Marcus Finley.
Leaders will get insight into utilizing technologies including audio-visual, video, mobile development, smart devices, VR, bots, beacons and web applications to generate custom solutions for their organization or industry. The programming aims to foster innovation and help organizations turn ideas into reality.

“We’re excited about the power of tech and we want to see local job creators just as excited. It’s our belief that by creating this space for them to come and ask questions they will be empowered to get innovative,” said Stephen Milner, of ISDM Solutions.

The initiative will take place in the joint office of FIN Digital & ISDM Solutions for the remainder of the year with a launch event on September 14th for D.C. leaders.


Healthcare and AI

Healthcare is one of the main industries being transformed by AI. The range of applications of artificial intelligence and machine learning in healthcare is so broad that it’s hard to think of an area which won’t be transformed over the coming years. Many of these applications could help save lives, so it’s research that is definitely worth investing in. Here are some examples.

Healthcare bots:

Customer service can be improved with specialised chatbots that interact with patients through chat windows. These bots can automate the scheduling of follow-up appointments, minimise human error by ensuring patients are directed to the appropriate healthcare department, and reduce response times.
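As a minimal sketch of the routing step such a bot might perform, the snippet below matches keywords in a patient's message to a department. The department names and trigger words are hypothetical; a production system would use a trained intent classifier rather than a hand-written keyword table.

```python
# Hypothetical keyword-to-department router for a healthcare chatbot.
# The departments and trigger words are invented for illustration.
ROUTES = {
    "cardiology":  {"chest", "heart", "palpitations"},
    "dermatology": {"rash", "skin", "mole"},
    "scheduling":  {"appointment", "reschedule", "booking"},
}

def route(message: str, default: str = "general enquiries") -> str:
    """Pick the first department whose trigger words appear in the message."""
    words = set(message.lower().split())
    for department, keywords in ROUTES.items():
        if words & keywords:
            return department
    return default

print(route("I need to reschedule my appointment"))  # scheduling
```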

Disease Identification/Diagnosis:

It is now possible to build state-of-the-art classification algorithms that diagnose patients from nothing more than mobile-phone photos. Learning algorithms such as functional-gradient boosting (FGB) can also identify rare diseases, using self-reported behavioural data to distinguish between people with rare and more common chronic illnesses.

Personalized Treatment:

Supervised learning allows physicians to narrow down to more limited sets of possible diagnoses; an example is the estimation of patient risk factors from symptoms and genetic information. Such models can be calibrated and trained on data from micro-biosensors and mobile-phone applications, giving more sophisticated health data with which to assess treatment efficacy, reduce treatment costs and optimise individual patient health.

Drug Discovery:

Machine learning in early-stage drug discovery can be used to estimate the success rate of initial screening of drug compounds relative to biological factors. The application of nearest-neighbour and clustering methods to precision medicine has identified mechanisms in multi-factor diseases and suggested alternative treatments and therapies.
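To illustrate the nearest-neighbour idea mentioned above, here is a minimal, self-contained sketch that classifies a synthetic patient profile by majority vote among its closest labelled examples. The feature vectors and labels are toy data; no real clinical model is implied.

```python
import math
from collections import Counter

def knn_predict(samples, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest labelled samples."""
    dists = sorted((math.dist(s, query), lab) for s, lab in zip(samples, labels))
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Toy feature vectors: (biomarker level, symptom score) -- invented data.
samples = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels  = ["responder", "responder", "non-responder", "non-responder"]

print(knn_predict(samples, labels, query=(0.15, 0.15)))  # responder
```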

Clinical Trial Research:

Ideal candidates for clinical trials can be selected by sampling from a broader range of data to find features that are currently under-utilised; examples include social-media activity and the number of doctor visits. Machine learning can also be used to improve the safety of trial participants by monitoring their health remotely in real time.

Epidemic Outbreak Prediction:

The monitoring and prediction of epidemic outbreaks has been performed successfully by machine-learning technologies for a number of years now. By collecting vast amounts of data from satellites, historical healthcare databases and social media, one can train support vector machines and deep neural networks to predict potential outbreaks of diseases such as malaria and Ebola.
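As a toy illustration of training a linear classifier on outbreak-style features, here is a minimal perceptron on synthetic data. Real systems use support vector machines or deep networks on satellite and healthcare data; the two features and the labels below are invented purely for the sketch.

```python
# Toy perceptron on two synthetic "outbreak" features, e.g. a rainfall index
# and a case-report rate. Data and features are invented for illustration.
def train_perceptron(data, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 nudges the weights otherwise
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

data   = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]  # 1 = outbreak, 0 = no outbreak

w, b = train_perceptron(data, labels)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print(predict((0.85, 0.9)), predict((0.15, 0.1)))  # 1 0
```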

If you’re particularly interested in finding out more about any of the above, visit the Brainpool website.


Brainpool will be exhibiting at the AI Congress 2018. To meet with them and other leading experts, sign up for your ticket today!

Nigel - the robot that could tell you how to vote

Source: KIMERA


The creators of a new artificial intelligence programme hope it could one day save democracy. Are we ready for robots to take over politics?

"Siri, who should I vote for?"

"That's a very personal decision."

Apple's "personal assistant", Siri, doesn't do politics. It has stock, non-committal answers for anything that sounds remotely controversial. Not unlike some politicians in fact.

But the next generation of digital helpers, powered by advances in artificial intelligence (AI), might not be so reticent.

One piece of software being developed by a company in Portland, Oregon, aims to be able to offer advice on every aspect of its users' lives - including which way to vote.

"We want you to trust Nigel, we want Nigel to know who you are and serve you in everyday life," says Nigel's creator Mounir Shita.

"It (Nigel) tries to figure out your goals and what reality looks like to you and is constantly assimilating paths to the future to reach your goals.

"It's constantly trying to push you in the right direction."

Shita's company, Kimera Systems, claims to have cracked the secret of "artificial general intelligence" - independent thinking - something that has eluded AI researchers for the past 60 years.

Instead of learning how to perform specific tasks, like most current AI, Nigel will roam free and unsupervised around its users' electronic devices, programming itself as it goes.

"Hopefully eventually it will gain enough knowledge to be able to assist you in political discussions and elections," says Shita.

Nigel has been met with a certain amount of scepticism in the tech world.

Its achievements have been limited so far - it has learned to switch smartphones to silent mode in cinemas without being asked, from observing its users' behaviour.

But Shita believes his algorithm will have the edge on the other AI-enhanced digital assistants being developed by bigger Silicon Valley players - and he has already taken legal advice on the potential pitfalls of a career in politics for Nigel.

"Our goal, with Nigel, is by this time next year to have Nigel read and write at a grade school level. We are still way off participating in politics, but we are going there," he says.

AI is already part of the political world - with ever more sophisticated algorithms being used to target voters at election time.

Teams of researchers are also competing to produce an algorithm that will halt the spread of "fake news".

Mounir Shita argues that this will be good for democracy, making it infinitely harder for slippery politicians to pull the wool over voters' eyes.

"It's going to be a lot harder to brainwash an AI that has access to a lot of information and can tell a potential voter what the politician said is a lie or is unlikely to be true."

What makes him think anyone would listen to a robot?

Voters are increasingly turning their back on identikit "machine politicians" in favour of all-too-human mavericks, like the most famous Nigel in British politics - Farage - and his friend Donald Trump.

How could AI Nigel - which was named after Mounir Shita's late business partner Nigel Deighton rather than the former UKIP leader - compete with that?

Because, says Shita, you will have learned to trust Nigel - and it will be more in tune with your emotions than a political leader you have only seen on television.

Nigel - robot Nigel, that is - could even have helped voters in the UK make a more informed decision about Brexit, he claims, although it would not necessarily have changed the outcome of the referendum.

"The whole purpose of Nigel is to figure out who you are, what your views are and adopt them.

"He might push you to change your views, if things don't add up in the Nigel algorithm.

"Let me go to the extreme here, if you are a racist, Nigel will become a racist. If you are a left-leaning liberal, Nigel will become a left-leaning liberal.

"There is no one Nigel. Everyone has their own Nigel, and each of those Nigels' purpose is to adapt to your views. There is no political conspiracy behind this."

Ian Goldin, professor of globalisation and development at the University of Oxford, also believes AI could have a role to play in debunking political spin and lies.

But he fears politicians have yet to wake up to what it will mean for the future of society or, indeed, their own jobs.

In his book, Age of Discovery: Navigating the Risks and Rewards of Our New Renaissance, Goldin and co-author Chris Kutarna seek a middle ground between apocalyptic visions of humans controlled by robots and the techno-utopian dreams of Silicon Valley's elite.

He tells BBC News: "I think the threats posed by technology are rising as rapidly as the benefits and one hopes that somewhere, in some secret place, people are worrying about it.

"But the politicians certainly aren't talking about it."

Instead of thinking about machine-learning as some distant piece of science fiction, they should "join the dots" to see how it is already changing the political and social landscape, he argues.

He points to a research paper by the Oxford Martin Programme on Technology and Employment, which suggested that Donald Trump owes his US election victory to voters who have had their jobs taken away from them by automation.

"In the machine-learning world innovation happens more rapidly, so the pace of change accelerates," says Goldin.

"That means two things - people get left behind more quickly, so inequality grows more rapidly, and the second thing it means is that you have to renew everything quicker - fibre optics, infrastructure, energy systems, housing stock, mobility and flexibility."

He adds: "They (politicians) are going to have to form a view on whether they throw sand in the wheels. What are they going to do with the workers who are laid off?"

AI evangelists like Mounir Shita have a simple answer to this. And it does not involve throwing sand in the wheels of technology - they see meddling politicians as the enemy and Elon Musk, creator of the Tesla electric car, who has warned about the catastrophic consequences for humanity of unregulated AI, as misguided, at best.

Shita is relaxed about a world where machines do all the work: "I am not envisioning people sitting on their couch eating potato chips, gaining weight, because they have nothing to do. I envision people free from labour, able to pursue whatever interests or hobbies they have."

Ian Goldin takes a less rosy view of an AI-enhanced future.

Rather than indulging in hobbies or world travel, those made idle by machines are more likely to be drinking themselves to death or attempting suicide, if recent research into the so-called "diseases of despair" among poorly educated members of the white working class in America is anything to go by, he says.

In the end, it all comes down to two competing views of human nature and whether we want Nigel or something like it in our lives.

  • British politicians, on a House of Lords committee, are set to investigate the economic, ethical and social implications of artificial intelligence over the coming months.




Facebook and Google need humans, not just algorithms, to filter out hate speech

(Reuters/Navesh Chitrakar)


Facebook and Google give advertisers the ability to target users by their specific interests. That’s what has made those companies the giants that they are. Advertisers on Facebook can target people who work for a certain company or had a particular major in college, for example, and advertisers on Google can target anyone who searches a given phrase.

But what happens when users list their field of study as “Jew hater,” or list their employer as the “Nazi Party,” or search for “black people ruin neighborhoods?”

All of those were options Facebook and Google suggested to advertisers as interests they could target in their ad campaigns, according to recent reports by ProPublica and BuzzFeed. Both companies have now removed the offensive phrases that the news outlets uncovered, and said they’ll work to ensure their ad platforms no longer offer such suggestions.

That, however, is a tall technical order. How will either company develop a system that can filter out offensive phrases? It would be impossible for humans to manually sift through and flag all of the hateful content people enter into the websites every day, and there’s no algorithm that can detect offensive language with 100% accuracy; the technology has not yet progressed to that point. The fields of machine learning and natural language processing have made leaps and bounds in recent years, but it remains incredibly difficult for a computer to recognize whether a given phrase contains hate speech.

“It’s a pretty big technical challenge to actually have machine learning and natural language processing be able to do that kind of filtering automatically,” said William Hamilton, a PhD candidate at Stanford University, who specializes in using machine learning to analyze social systems. “The difficulty in trying to know, ‘is this hate speech?’ is that we actually need to imbue our algorithms with a lot of knowledge about history, knowledge about social context, knowledge about culture.”

A programmer can tell a computer that certain words or word combinations are offensive, but there are too many possible permutations of word combinations that amount to an offensive phrase to pre-determine them all. Machine learning allows programmers to feed hundreds or thousands of offensive phrases into computers to give them a sense of what to look for, but the computers are still missing the requisite context to know for sure whether a given phrase is hateful.

“You don’t want to have people targeting ads to something like ‘Jew hater,'” Hamilton said. “But at the same time, if somebody had something in their profile like, ‘Proud Jew, haters gonna hate,’ that may be OK. Probably not hate speech, certainly. But that has the word ‘hate,’ and ‘haters,’ and the word ‘Jew.’ And, really, in order to understand one of those is hate speech and one of those isn’t, we need to be able to deal with understanding the compositionality of those sentences.”
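The limitation Hamilton describes can be seen in a toy sketch: a naive word-level filter flags both of his example phrases, because it matches words without understanding how they compose. This is illustrative code only, not either company's actual system.

```python
# Naive word-level filter: flags any phrase containing a blocklisted word.
BLOCKLIST = {"hate", "hater", "haters"}

def naive_flag(phrase: str) -> bool:
    words = phrase.lower().replace(",", "").split()
    return any(w in BLOCKLIST for w in words)

print(naive_flag("Jew hater"))                     # True (correctly flagged)
print(naive_flag("Proud Jew, haters gonna hate"))  # True (false positive)
```

Both phrases trip the filter; telling them apart requires modelling how the words combine, which is exactly the compositionality problem Hamilton describes.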

And the technology, Hamilton said, is simply “not quite there yet.”

The solution will likely require a combination of machines and humans, where the machines flag phrases that appear to be offensive, and humans decide whether those phrases amount to hate speech, and whether the interests they represent are appropriate targets for advertisers. Humans can then feed that information back to the machines, to make the machines better at identifying offensive language.

Google already uses that kind of approach to monitor the content its customers’ ads run next to. It employs temp workers to evaluate websites that display ads served by its network, according to a recent article in Wired, and to rate the nature of their content. Most of those workers were asked to focus primarily on YouTube videos starting last March, when advertisers including Verizon and Walmart pulled their ads from the platform after learning some had been shown in videos that promoted racism and terrorism.

The workers now spend most of their time looking for and flagging those kinds of videos to make sure ads don’t end up on them, according to Wired. Once they’ve identified offensive materials in videos and their associated content, they feed the details to a machine-learning system, and the system can in turn learn to identify such content on its own. It’s not an easy job, however, and some of the temp workers Wired interviewed said they can barely keep up with the amount of content they’re typically tasked with checking.

Google’s chief business officer, Philipp Schindler, echoed that sentiment in an interview with Bloomberg News in April, and cited it as a reason he believed the company should cut humans out of the equation altogether.

“The problem cannot be solved by humans and it shouldn’t be solved by humans,” he said.

Until machines can learn the difference between “Jew hater” and “Proud Jew, haters gonna hate,” though, the problem of identifying and flagging hate speech can only be solved by humans, with smart machines assisting them. And there have to be enough of those humans to make a meaningful impact on the amount of content users of Facebook and Google type into the services every day. It may be far cheaper to throw algorithms and overworked temps at the problem than it would be to hire vast armies of full-time workers, but it’s likely far less effective as well.

Facebook and Google have not yet determined exactly what approach they’ll take to keep offensive targeting options off of their ad platforms. Facebook is still assessing the situation, but is considering limiting which user profile fields advertisers can target, according to Facebook spokesperson Joe Osborne.

“Our teams are considering things like limiting the total number of fields available or adding more reviews of fields before they show up in ads creation,” Osborne said in an email to Quartz. (Ads creation is the area of Facebook where advertisers can customize their ads.)

Google said in a statement that its ad-targeting system already identifies some hate speech, and rejects certain ads altogether, but that the company will continue to work on the problem.

“Our goal is to prevent our keyword suggestions tool from making offensive suggestions, and to stop any offensive ads appearing. We have language that informs advertisers when their ads are offensive and therefore rejected. In this instance, ads didn’t run against the vast majority of these keywords, but we didn’t catch all these offensive suggestions. That’s not good enough and we’re not making excuses. We’ve already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again,” the company said.



Robot that can solve the Rubik’s cube and thread a needle conducts Italian orchestra in world first

Tobias Schwarz | Getty Images


  • ABB's dual-armed YuMi robot becomes the first to conduct an orchestra.
  • The robot performed in Pisa Tuesday evening as part of Italy's 'First International Festival of Robotics'.
  • YuMi performed alongside Italian tenor Andrea Bocelli and the Lucca Philharmonic Orchestra.

Italy, a country steeped in ancient tradition, has taken a stride forward in the twenty-first century race towards automation, becoming the first country to showcase a robot-conducted orchestra.

YuMi, a dual-armed robot designed by ABB, accompanied Italian tenor Andrea Bocelli and conducted the Lucca Philharmonic Orchestra at a gala event in Pisa's Teatro Verdi Tuesday evening.

The performance was a world first by a robotic conductor and celebrated Italy's 'First International Festival of Robotics', which kicked off Friday.

YuMi conducted three pieces, including Bocelli's rendition of 'La donna è mobile' from Verdi's Rigoletto and a solo by Maria Luigia Borsi from Puccini's Gianni Schicchi.

The robot was trained by Italian conductor Andrea Colombini. Writing in a blog post ahead of the performance, Colombini described the process as "satisfying, albeit challenging"; consisting first of programming via performance and then fine-tuning to synchronize the robot's movements with the music.

"The gestural nuances of a conductor have been fully reproduced at a level that was previously unthinkable to me. This is an incredible step forward, given the rigidity of gestures by previous robots," Colombini wrote.

Tuesday's performance marks the latest milestone for Swiss-Swedish robotics firm ABB, which first unveiled YuMi in April 2015.

Described as a "collaborative" robot, it is designed to perform alongside humans and complement the workforce. Already it has demonstrated its ability to solve a Rubik's cube and thread a needle.

However, such developments have faced criticism amid concerns that advances in robotics could outpace new job creation and lead to job losses.

Colombini insisted that YuMi would not do away with the need for humans to inject "spirit" and "soul" into orchestral performances.

"I imagine the robot could serve as an aid, perhaps to execute, in the absence of a conductor, the first rehearsal, before the director steps in to make the adjustments that result in the material and artistic interpretation of a work of music," he added in his post.





Face-reading AI will be able to detect your politics and IQ, professor says

Professor whose study suggested technology can detect whether a person is gay or straight says programs will soon reveal traits such as criminal predisposition

Your photo could soon reveal your political views, says a Stanford professor. Photograph: Frank Baron for the Guardian


Voters have a right to keep their political beliefs private. But according to some researchers, it won’t be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.

Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.

Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.

Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.

“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.

Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn how to distinguish key traits with a high rate of accuracy. With Kosinski’s “gaydar” AI, an algorithm used online dating photos to create a program that could correctly identify sexual orientation 91% of the time with men and 83% with women, just by reviewing a handful of photos.

Kosinski’s research is highly controversial, and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.

Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.

This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.

Kosinski said previous studies have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.

Kosinski said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. “A high conservative score … would be a very reliable prediction that this guy is conservative.”



Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump’s campaign, sparking debate about the use of personal voter information in campaigns.

Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”

Some of Kosinski’s suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.

He cited school counselors intervening when they observe children who appear to exhibit aggressive behavior. If algorithms could be used to accurately predict which students need help and early support, that could be beneficial, he said. “The technologies sound very dangerous and scary on the surface, but if used properly or ethically, they can really improve our existence.”

There are, however, growing concerns that AI and facial recognition technologies are actually relying on biased data and algorithms and could cause great harm. It is particularly alarming in the context of criminal justice, where machines could make decisions about people’s lives – such as the length of a prison sentence or whether to release someone on bail – based on biased data from a court and policing system that is racially prejudiced at every step.

Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect if that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not translate to criminal actions: “Even people highly disposed to committing a crime are very unlikely to commit a crime.”

He also cited an example referenced in the Economist – which first reported the sexual orientation study – that nightclubs and sport stadiums could face pressure to scan people’s faces before they enter to detect possible threats of violence.

Kosinski noted that in some ways, this wasn’t much different from human security guards making subjective decisions about people they deem too dangerous-looking to enter.

The law generally considers people’s faces to be “public information”, said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone’s face to produce new information rises to the level of privacy invasion.

Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: “Now, you’re putting people’s lives at risk.”

Even an AI that makes highly accurate predictions will still get some percentage of them wrong.

“You’re going down a very slippery slope,” said Keenan, “if one in 20 or one in a hundred times … you’re going to be dead wrong.”
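Keenan’s point can be made concrete with Bayes’ theorem. The sketch below (plain Python) takes the study’s 91% figure for men as sensitivity; the specificity and base rate are assumptions chosen purely for illustration. It shows how, for a rare trait, even an accurate classifier is wrong about most of the people it flags:

```python
# Illustrative only: how a "91% accurate" classifier can still be wrong
# most of the time when the trait it detects is rare in the population.
# The sensitivity comes from the study discussed above; the specificity
# and base rate are assumptions made for the sake of the example.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(has trait | flagged positive), via Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Assume 91% sensitivity, 91% specificity, and a 7% base rate.
ppv = positive_predictive_value(0.91, 0.91, 0.07)
print(f"Chance a flagged person actually has the trait: {ppv:.0%}")
```

Under these assumed numbers, fewer than half of the people the classifier flags actually have the trait, which is exactly the slippery slope Keenan describes.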


The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today!

How artificial intelligence is impacting the service industry

Every single day, millions of dollars are spent in call centres simply answering the same questions thousands of times over. Let us see how AI is going to affect IT service management.

In this age of artificial intelligence, managers at all levels need to accept that a significant chunk of their jobs might be done better and more efficiently by machines. Surveys have found that managers at all levels spend the bulk of their time on administrative tasks, such as making schedules or writing reports. These are the very tasks that are most likely to be automated in the near future. In fact, some companies have already made improvements by transitioning these tasks to AI.

The same research found that to succeed in this age of automation, managers need the skill of judgement, which includes thinking creatively, analysing and interpreting data, and developing strategy. Other skills that stood out as most important are social networking, coaching, and collaboration. These are the skills that will help managers stand out once AI takes over the administrative tasks they perform today. Machines will never completely replace managers, but they will give managers more time. So the real priority for managers should be refocusing on the tasks that only humans can do, drawing on their creativity, collaborative attitude, empathy, and judgement.

Machine vs human

AI is very good at eliminating human error. Humans often deviate from standard, defined processes, and such deviations can be fatal: a critical patient may die on the operating table because of a minor human slip, or a pilot juggling thousands of in-flight computations may cause the plane to crash. In such situations, a well-trained intelligent bot can make decisions that stay as close as possible to the defined standards.

Let us try to understand why automation is being considered a threat to future jobs. It began with basic automation, wherein a particular manual task was programmed to be done by a machine. The task, when done by a machine, was completed faster and, of course, without human error. This led to the automation of more such menial tasks, which improved efficiency; hence, the productivity of the entire organisation grew many times over.

Now with the evolution of technologies like artificial intelligence, machine learning and deep learning, machines can actually learn on their own and be taught to do more complicated tasks which are currently being done by humans.

IT customer support is one such domain: intelligent bots, well trained on large sample datasets, know how to respond to every kind of customer query or ticket. These bots are intuitive enough to adapt and improve without human intervention. They get better with time.
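The core of such a bot can be sketched, in deliberately simplified form, as keyword overlap between an incoming ticket and a set of canned replies; the intents and wording below are invented for illustration, and a production bot would use trained language models rather than word matching:

```python
# Minimal sketch of intent matching for a support bot: score each canned
# answer by keyword overlap with the incoming ticket. Invented examples.

CANNED_REPLIES = {
    "password_reset": ({"password", "reset", "forgot", "login"},
                       "You can reset your password from the sign-in page."),
    "refund":         ({"refund", "charge", "billing", "money"},
                       "I've opened a refund request for you."),
    "outage":         ({"down", "outage", "error", "unavailable"},
                       "We're aware of the issue and working on a fix."),
}

def answer(ticket: str) -> str:
    words = set(ticket.lower().split())
    best = max(CANNED_REPLIES.values(),
               key=lambda kw_reply: len(kw_reply[0] & words))
    keywords, reply = best
    # Fall back to a human agent when nothing matches.
    return reply if keywords & words else "Routing you to a human agent."

print(answer("I forgot my password and cannot login"))
```

The fallback branch matters: a bot that never hands off to a human is how support automation goes wrong.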

Understanding IT service management (ITSM)

IT Service Management, also called ITSM, refers to implementing, managing and delivering quality IT services in the best possible way to meet the needs of a business.

It ensures an appropriate mix of people, processes and technology is in place to provide value to a business. Essentially, ITSM is about value: taking your resources and capabilities and making something valuable for your business.

According to reports, the global outsourced customer services market is projected to reach $84.7 billion by 2020. Another study revealed that companies lose more than $62 billion to poor customer service. Obviously, no company can afford to provide subpar customer support.

Why AI is required in customer service


In other words, providing customer support is expensive. A study of the customer support market found the following:

  1. Around 270 billion phone calls are made to call centres annually, costing around $600 billion.
  2. One out of two incoming calls requires escalation or goes unresolved.
  3. 61 percent of all calls could have been resolved with better access to information.
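A quick back-of-the-envelope check on those figures (plain Python; the numbers come straight from the list above):

```python
# Sanity-check arithmetic on the call-centre figures cited above.
calls_per_year = 270e9          # annual calls to call centres
total_cost = 600e9              # dollars per year
cost_per_call = total_cost / calls_per_year
resolvable_share = 0.61         # share resolvable with better information

print(f"Average cost per call: ${cost_per_call:.2f}")
print(f"Cost of calls better information could resolve: "
      f"${resolvable_share * total_cost / 1e9:.0f}bn")
```

That works out to roughly $2.22 per call, which is why even partial deflection of repeated questions moves real money.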

Entities involved in customer service

Customer support involves several entities; let us see how AI will affect each of them:

  1. Agent: AI can recommend solutions and classifications and help the agent understand the issue, making the agent smarter and enabling the best possible reply to the customer.
  2. Customer: AI can deflect cases by answering questions directly, for example helping customers find the right solution on the website and pushing that solution to them faster. Combined with chatbots, AI responds to customers’ queries faster and more accurately, based on real-time data analysis.
  3. Operations: With AI, one can predict the close time of a customer issue and allocate the case to someone knowledgeable in that specific topic.
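The operations use case, predicting close times and routing tickets, can be sketched as follows; the ticket history and fields are invented, and a real system would learn from far richer features than a per-category average:

```python
# Sketch: estimate a new ticket's close time from the historical average
# of its category, and route it to the agent who has closed the most
# tickets in that category. All data below is invented for illustration.
from collections import defaultdict
from statistics import mean

history = [  # (category, agent, hours_to_close)
    ("network", "dana", 4.0), ("network", "dana", 6.0),
    ("network", "eli", 10.0), ("billing", "eli", 1.0),
    ("billing", "eli", 3.0), ("billing", "dana", 2.0),
]

def estimate_and_route(category):
    durations = [h for c, _, h in history if c == category]
    closers = defaultdict(int)
    for c, agent, _ in history:
        if c == category:
            closers[agent] += 1
    best_agent = max(closers, key=closers.get)
    return mean(durations), best_agent

hours, agent = estimate_and_route("network")
print(f"Estimated close time: {hours:.1f}h, assign to {agent}")
```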

What AI has brought to customer service business

The following are some of the areas where artificial intelligence has proved very effective:

  1. Increased customer satisfaction
  2. Customer attrition reduction
  3. Customer effort reduction
  4. Higher customer service satisfaction level by agent
  5. Reduced agent on-boarding time
  6. Reduced cost per ticket
  7. Improved business outcome
  8. Reduced costs of service operations
  9. Increased revenue

Companies improving customer service by using artificial intelligence

Let us take a look at a few startups working in automating the customer service process at various levels:

  1. Automates customer service and support using artificial intelligence and natural language processing
  2. DigitalGenius: Brings practical applications of deep learning and artificial intelligence to customer service operations of large companies
  3. IPSoft: Assists with service desk support, helps field engineers troubleshoot, and supports procurement
  4. Next IT: Assists with customer service
  5. Digital Reasoning: Scans up to billions of communications from thousands of traders to spot language patterns and identifies potentially fraudulent activity
  6. Luminoso: Analyses customer feedback to propose product design changes; reviews how consumers feel about food items or grocery store experience

What the future holds for ITSM

Artificial intelligence is still at a very early stage and has a long way to go before it replaces the human workforce, but it is moving steadily in that direction and making huge progress across multiple domains.

Anyone concerned about their career should start learning skills that automation is unlikely to affect. These draw on human empathy (sensing the emotions of others) and the ability to think rationally and come up with ways to solve problems.

We can sum up by saying that artificial intelligence is powerful but it is still artificial and does not have the natural powers that humans possess.

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)



Artificial Intelligence Is Learning How To Develop Games

A screenshot from Georgia Tech's clone engine


An AI in Georgia can recreate a game just by watching it being played

Researchers at Georgia Institute of Technology are developing an AI that can recreate a game engine simply by watching gameplay. 

This technology, as detailed in a press release, is being created to help video game developers "speed up game development and experiment with different styles of play." In their most recent experiments, the AI watched two minutes of Super Mario Bros. gameplay, then built its own version of the game by studying the frames and predicting future events.

"To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single 'speedrunner' video, where a player heads straight for the goal," Georgia Institute's communications officer Joshua Preston explained. This approach, he added, presented the most difficult possible scenario for training the AI.

By allowing the AI to study the actual frames of the game, researchers found it was able to predict frames much closer to the real Super Mario Bros. than in other tests the team had run with different methods. This simplifies the process: the AI only needs to watch a video of a game in action to begin replicating the game and learning its engine.

"Our AI creates the predictive model without ever accessing the game’s code, and makes significantly more accurate future event predictions than those of convolutional neural networks,” lead researcher Matthew Guzdial said in the release. “A single video won’t produce a perfect clone of the game engine, but by training the AI on just a few additional videos you get something that’s pretty close.”
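As a toy analogue of learning a predictive model from observed gameplay, the sketch below treats "frames" as symbols and predicts the next one from a first-order frequency table. This is a deliberate simplification for illustration, not the researchers' actual technique, which learns from pixel frames:

```python
# Toy analogue of frame prediction: learn, from an observed sequence of
# game states, which state most often follows each state. Real systems
# model pixel frames; symbols stand in for them here.
from collections import Counter, defaultdict

observed = "run run jump run run run jump run".split()

transitions = defaultdict(Counter)
for current, nxt in zip(observed, observed[1:]):
    transitions[current][nxt] += 1

def predict_next(state):
    # Return the most frequently observed successor of this state.
    return transitions[state].most_common(1)[0][0]

print(predict_next("run"))
```

Even this crude model captures the paper's key idea: a predictive model of "what comes next" is learned purely from watching, without access to the game's code.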

Once the team had their model, there was only one test left: how did it play? A second AI system was then implemented to test the recreated level to ensure the player wouldn't fall through a level – kind of like a QA tester, but instead a highly intricate AI system.

The researchers found "the AI playing with the cloned engine proved indistinguishable compared to an AI playing the original game engine."

"To our knowledge this represents the first AI technique to learn a game engine and simulate a game world with gameplay footage," associate professor of Interactive Computing and co-investigator on the project Mark Riedl said.

The researchers go on to stress that, as of right now, their AI systems work best when the majority of the action happens on screen. Games where action happens away from the player's direct frame of sight might prove difficult for the system.

The nascent technology does raise the question of what sort of impact a more realized version of the AI could have on the game industry. Specifically, could it eliminate the need for certain jobs, like QA tester, in the game industry? 

However, Georgia Tech's Riedl says developers don't need to fear their job security; this technology will be an aid in development, not a replacement. Riedl tells Glixel that this AI will help novice game developers create projects once out of their reach. Using this kind of AI would allow developers with no coding or design experience to show the AI how a game should work, which it would then replicate. 

"Instead of putting people out of work, this will make it possible for people to create games that were otherwise unable to do so," Riedl said. "That makes it possible for more people to create – increasing the size of the pie instead of supplanting individuals. Second, professionals may be able to build games faster by having the system make an initial guess about the mechanics. Working more efficiently doesn’t necessarily put people out of work, but does allow them to make bigger and better games in the time available."

What about QA testers? Well, according to Riedl, they'll still be necessary thanks to one feature they have over AI systems necessary for playing games: the human touch.

"[Video games] are made to be enjoyed by humans," Riedl said. "Because of that you're always going to need humans to actually test the games. AI might help to test things we simply can't test currently but can be formalized mathematically, like game balance ... but one will need to use humans to see if other humans will enjoy the game for the foreseeable future."



Experimenting with machine learning in media

From the Gutenberg printing press in 1440 to virtual reality today, advances in technology have made it possible to discover new audiences and new forms of expression. And there’s more to come.

Machine learning is the latest technology to change how news, entertainment, lifestyle and sports content is created, distributed and monetized. YouTube, for example, has used machine learning to automatically caption more than one billion videos to make them more accessible to the 300 million+ people who are deaf or hard of hearing.

While many media executives are increasingly aware of machine learning, it’s not always apparent which problems are best suited to machine learning and which solutions will have the greatest impact.

Machine learning can help transform your business with new user experiences, better content monetization and reduced operational costs.

Executives, here are three things to keep in mind as you consider and experiment with machine learning to transform your digital business:

  1. The time to experiment with machine learning is right now. The barriers to using machine learning have never been lower. In the same way companies started thinking about investing in mobile 10 years ago, the time to start exploring machine learning is right now. Solutions like Google Cloud Machine Learning Engine have made powerful machine learning infrastructure available to all without the need for investment in dedicated hardware. Companies can start experimenting today with Google Cloud Machine Learning APIs at no charge—and even developers with no machine learning expertise can do it. For example, in less than a day, Time Inc. used a combination of Cloud Machine Learning APIs to prototype a personalized date night assistant that integrated fashion, lifestyle and events recommendations powered by its vast corpus of editorial content.

  2. Bring together key stakeholders from diverse teams to identify the top problems to solve before you start. Machine learning is not the answer to all of your business woes, but a toolkit that can help solve specific, data-intensive problems at scale. With limited time and people to dedicate to machine learning applications, start by bringing together the right decision makers across your business, product and engineering teams to identify the top problems to solve. Once the top challenges are identified, teams need to work closely with their engineering leads to determine technical feasibility and prioritize where machine learning could have the highest impact. Key questions that will help prioritize efforts: Can current technology reasonably solve the problem? What does success look like? What training data is needed, and is that data currently available or does it need to be generated? This was the approach taken during a recent Machine Learning for Media hackathon hosted by Google and the NYC Media Lab, and it paid off with clearer design objectives and better prototypes. For example, the Associated Press saw an opportunity to quickly generate sports highlights from analysis of video footage, so it created an automated, real-time sports highlights tool for editors using the Cloud Video Intelligence API.

  3. Machine learning has a vibrant community that can help you get started. Companies can kickstart their machine learning endeavors by plugging into the vibrant and growing machine learning community. TensorFlow, an open source machine learning framework, offers resources, meetups, and more. And if your company needs more hands-on assistance, Google offers a suite of services through the Advanced Solutions Lab to work side-by-side with companies to build bespoke machine learning solutions. There are also partners with deep technical expertise in machine learning that can help. For example, Quantiphi, a machine learning specialist, has been working closely with media companies to extract meaningful insights from their video content using a hybrid of the Cloud Video Intelligence API and custom models created using TensorFlow. However you decide to integrate machine learning technologies into your business, there's a growing ecosystem of solutions and subject matter experts that are available to help.
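To make the "start experimenting today" point concrete, the sketch below builds a request payload in the Cloud Natural Language REST format, loosely in the spirit of the Time Inc. prototype mentioned above. The endpoint and field names follow the public API documentation, but verify them against the current docs before relying on them; the HTTP call itself is left commented out so the sketch stays self-contained and key-free:

```python
# Hedged sketch of calling one of the Cloud Machine Learning APIs:
# entity analysis via the Cloud Natural Language REST endpoint. Only the
# payload is built here; the network call is commented out.
import json

ENDPOINT = "https://language.googleapis.com/v1/documents:analyzeEntities"

payload = {
    "document": {
        "type": "PLAIN_TEXT",
        "content": "Romantic Italian restaurants near the waterfront",
    },
    "encodingType": "UTF8",
}

body = json.dumps(payload)
# import urllib.request
# req = urllib.request.Request(ENDPOINT + "?key=YOUR_API_KEY",
#                              data=body.encode(), method="POST",
#                              headers={"Content-Type": "application/json"})
# entities = json.load(urllib.request.urlopen(req))
print(body)
```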



Ocado launches Alexa app for voice-activated online shopping

(Credit: Amazon)


Online grocery retailer Ocado has announced it will be the first supermarket in the UK to launch an app for the voice-controlled personal assistant, Amazon Alexa.

The Ocado app for Amazon’s smart home speaker, Echo, will enable customers to use voice commands to add products to an existing order or basket, to check their orders before submitting them, and to find out what products are in season and how best to include them in recipes. They’ll also be able to track deliveries.

In order to understand individual customers’ product preferences, the Ocado Technology team built an Ocado Conversational Service, based on artificial intelligence (AI), which is able to suggest both related and previously bought items for customers to add to their baskets.

Behind the scenes

In a blog post about the new service, the Ocado Technology e-commerce team explains how, when it first started building its Alexa ‘skill’ (a chunk of functionality built to support a specific use for Amazon’s smart speakers), it quickly realized that it would be important to support a “natural, bi-directional conversational flow.”

This is what allows the service to ‘understand’ orders made in different ways, as well as commands that allow a customer to check their basket’s contents, for example, or verify the total price of an order.

According to the blog post, Alexa converts the audio stream into a command (for example, ‘add to basket’) and a search term (such as ‘cheese’), based on examples provided by Ocado, which has trained Alexa to recognize the top 15,000 most commonly searched items from

These text queries are then passed on to the Ocado skill, which also runs on AWS, where the request is processed and an appropriate response is established using internal APIs [application programming interfaces].

(Credit: Ocado Technology)


It’s this response that leads to this two-way conversation, the blog post explains. “If the request can be fulfilled, i.e. we have the item in stock, the Ocado skill will send an output to Alexa; for example, ‘I’ve added Cathedral mature cheddar to Thursday’s Ocado order. Can I help you with anything else?’ However, if the item is out of stock, unavailable or cannot be found, the Ocado skill will not only offer the appropriate notification, but can also make alternative suggestions; ‘Sorry the Cathedral City mature cheddar you usually buy is out of stock. How about trying the Ocado organic mature cheddar instead?’”
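The two-way flow the blog post describes can be sketched as a simple handler that maps a command and search term to a response. The catalogue, wording and function names below are invented for illustration; the real skill runs on AWS against Ocado's internal APIs:

```python
# Sketch of the conversational flow described above: confirm in-stock
# items, suggest alternatives for out-of-stock ones. Invented catalogue.

STOCK = {
    "ocado organic mature cheddar": True,
    "cathedral city mature cheddar": False,
}
ALTERNATIVES = {
    "cathedral city mature cheddar": "Ocado organic mature cheddar",
}

def handle(command, search_term):
    if command != "add to basket":
        return "Sorry, I didn't understand that."
    item = search_term.lower()
    if STOCK.get(item):
        return f"I've added {search_term} to Thursday's Ocado order."
    alt = ALTERNATIVES.get(item)
    if alt:
        return (f"Sorry, the {search_term} you usually buy is out of "
                f"stock. How about trying the {alt} instead?")
    return f"Sorry, I couldn't find {search_term}."

print(handle("add to basket", "Cathedral City mature cheddar"))
```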

This means that shoppers can gradually collate their shopping basket over a few days, as and when they finish items in their kitchens.

Consumer confidence?

Ocado is clearly hoping that this could mean an end to hastily conducted audits of our kitchen cabinets before a shop, or to typing into online systems the reminders previously scribbled on shopping lists, sticky notes and kitchen whiteboards.

“Grocery shopping should be quick, easy and convenient,” said Lawrence Hene, marketing and commercial director at Ocado. “Using voice technology, we’ve made it even easier, by developing our new app that will enable customers to add to their Ocado baskets without even lifting a finger.”

Commenting on the launch, John Rakowski, director of technology strategy at application monitoring and analytics specialist AppDynamics, said that the announcement demonstrates continued momentum in building speech-activated services and a “very intriguing development” in the battle for online supermarket shoppers.

“While there may be some mainstream consumer scepticism about the practical value of voice assistants, we’re certain to see further deployments of the technology by Amazon and other digital retailers in the near future,” he said. “Ten years ago, the launch of the iPhone and the advent of apps drew a fair degree of initial scepticism. Now apps are part of everyday life, and more so, will become crucial in the retail battleground.”  

Meanwhile, Rupal Karia, Fujitsu’s head of commercial for the UK and Ireland, suggested that the pressure is on for retailers to give customers “what they want, before they know they want it.”

In late July, consumer confidence levels in the UK slumped to the same levels seen immediately after the Brexit referendum, against a backdrop of rising inflation and weakening wage growth.

According to Karia, retailers must use technology to differentiate the experience that they can offer customers or face a worrying prospect, namely “being the next generation of retailers to be pushed out of the high street for good.”




The sky-rocketing demand for AI experts results in recruitment revolution


The demand for AI and machine learning experts is sky-rocketing, with a predicted 50%-60% gap between supply and demand by 2018. The AI market, currently valued at an estimated $0.6bn, is projected to reach $37bn by 2025, a 50% compound annual growth rate.

The number of AI and ML projects available is overwhelming, and data scientists are buried in recruiters’ emails every day. It is difficult to find and distinguish the cutting-edge projects, and academics spend too much of their valuable time looking for interesting clients and talking to mass recruiters. At the same time, companies struggle to find the resources they need to innovate and automate their processes with machine learning.

Brainpool was created to solve this problem. It is a matching platform on which data scientists and clients can easily find each other on a project basis, without unnecessary admin, paperwork or recruiters. Both sides of the marketplace are scored on multiple factors, such as a project’s sophistication or difficulty and the skills required to complete it, and Brainpool’s algorithm matches the two sides to ensure both the data scientist and the client are satisfied.
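Brainpool has not published its algorithm, but two-sided scoring of this kind can be sketched as a weighted combination of skill overlap and seniority fit; all fields, names and weights below are assumptions made for illustration:

```python
# Sketch of two-sided matching: score each (data scientist, project)
# pair by skill overlap and level/difficulty fit, then pick the best.
# Fields and weights are invented; a real matcher would use many more.

scientists = [
    {"name": "A", "skills": {"nlp", "deep-learning"}, "level": 3},
    {"name": "B", "skills": {"vision", "deep-learning"}, "level": 2},
]
project = {"skills": {"nlp", "deep-learning"}, "difficulty": 3}

def match_score(scientist, project):
    overlap = len(scientist["skills"] & project["skills"])
    level_fit = -abs(scientist["level"] - project["difficulty"])
    return 2 * overlap + level_fit  # weights are arbitrary assumptions

best = max(scientists, key=lambda s: match_score(s, project))
print(best["name"])
```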

Most members of the pool have PhDs from universities such as UCL, Oxford, Cambridge and Harvard, and have worked for leading AI companies such as Google DeepMind or Spotify. Brainpool gives researchers an opportunity to work on interesting client projects across industries, whilst making sure they have time to continue their research and stay on top of the latest AI and ML developments.

The idea originated in UCL’s computing department: CEO Paula Parpart, who is currently finishing her PhD in computational cognitive science, personally experienced the problem described above and decided to find a solution. Rather than being yet another recruitment platform, Brainpool is an academia-based network of top-level data scientists, where they can exchange ideas, learn from each other, and develop algorithms and products that solve recurring problems across industries.

Pretty much every industry will be completely transformed by AI and ML over the next decade. Make sure you are ready for the change and have the right resources to keep up and stay ahead of the competition.

To find out more visit

Do AI Voice Assistants Have A Place In Business?

Fans of science fiction have long been anticipating always-on voice assistance both at home and while on the job.

While voice assistant systems didn’t come online as early as many anticipated, a number of options are now widely used or in development, and it’s time for businesses to determine whether voice assistance is right for their office.

Android and iOS rule the smartphone world, and nearly all devices running these offer voice assistance through Google Assistant and Siri. While these programs are constantly evolving, they already offer broad capabilities, making it easier to schedule activities, set reminders, and find answers to questions. Microsoft’s Cortana is fast making inroads as well, especially since it made its way to PCs, and Amazon’s Alexa has outdone expectations for providing home-based voice assistance. While these programs make voice assistance popular among consumers, they haven’t made much progress in an office setting.

Office Benefits

Voice assistance programs are great for organizing your personal life, and they can certainly help for scheduling work-related activities. However, true success in the business world will require a more comprehensive approach.


Artificial intelligence powers all voice assistant programs, and tailoring them to business needs is important for gaining more traction. While consumer-targeted voice assistance is great for certain tasks, people in the business world often need finely tuned information, and new AI paradigms might be able to meet this need.

Robust AI

Perhaps the most famous AI program, outside of consumer devices, is IBM’s Watson, which made headlines by scoring strong wins against human opponents on Jeopardy! Since its victory, Watson has gone on to find its way into hospitals and other medical practices, where it can provide medical guidelines based on a large database of scientific literature and excellent natural language recognition capabilities. Other companies are making large investments in the AI field as well, with Qualcomm recently making a large push.


Collaborative Voice Assistance

Perhaps the most valuable field for AI in the office, at least initially, will be collaboration tools. Personal assistance is helpful, but companies operate as teams, and collaboration is the key to success. Voice assistance can help keep everyone updated, and periodic voice reminders can shine in cases where email might be ignored.

Furthermore, voice assistance makes it easier to input new events, potentially encouraging workers to share more information. While popular consumer voice assistance programs can be adjusted to these tasks, it might take a new startup to start transforming the office.

Do People Like Voice Assistance?

Although people frequently try out voice assistance programs, studies show that few continue using them for extended periods of time; most people stop after a few days or weeks. Much like video calling, it might be the case that people simply prefer typed or written notes over automated assistants. To make progress in the business field, voice assistant programs will need to demonstrate real value that lasts beyond when the novelty wears off.

Robotic Service: A Potential Backdoor

Similar to voice assistance, robotic service interfaces are expected to increasingly come online in the coming years, replacing humans during checkout at retail locations. These systems share a number of similarities with voice assistance programs, and packages that combine both front-room and back-room artificial intelligence might provide the breakthrough an office needs to standardize on a voice assistance system.


Fragmentation: Voice Assistance’s Biggest Threat

Large tech companies are investing heavily into voice assistance and AI, and these companies have made tremendous progress. However, businesses want to ensure that they’re investing in technology that will last, and it’s unclear if one voice assistant will eventually reign supreme and pick up the support needed from vendors and third-party developers to thrive in the business environment. Although voice assistance will continue making headway into offices, it’s not clear if or when it will radically change office operations around the globe, or what the killer app will be.

It’s difficult to determine which technologies will eventually take hold. Video calling, a staple of science fiction in the 20th century, has only carved out a niche role, as people seem to prefer voice conversation. Few predicted instant messaging would become popular, but it’s now a significant communication portal. Voice assistance will almost certainly play a role for certain niche purposes, but it remains to be seen how popular it will eventually become.


The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today




You can’t throw a rock in 2017 without hitting some new walk of life where robots are being employed. The latest? A bricklaying robot called SAM100 (Semi-Automated Mason) that builds walls six times faster than a human bricklayer. (And probably about 10 times faster than the majority of Digital Trends writers.)

Created by New York-based company Construction Robotics, SAM is ready and willing to lay 3,000 bricks per day, using its combination of a conveyor belt, robotic arm, and concrete pump. By comparison, a human builder will average around 500 bricks per day.
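The figures quoted above can be sanity-checked with a little arithmetic; the brick rates below are the ones given in the article.

```python
# Quick check of the article's productivity claim:
# SAM lays ~3,000 bricks per day; a human mason averages ~500.
sam_bricks_per_day = 3000
human_bricks_per_day = 500

speedup = sam_bricks_per_day / human_bricks_per_day
print(speedup)  # 6.0 — consistent with the "six times faster" claim
```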

“For a lot of different reasons, the construction industry has been slow to adopt innovation and change,” construction manager Zachary Podkaminer told Digital Trends. “Compare a construction site today from a picture of one years ago and, with the exception of a few tools, it really hasn’t changed all that much. Now it seems the industry is finally evolving and we’re trying to be a part of that by bringing technology to construction sites.”

Costing around $500,000, SAM isn’t cheap, but it’s a potentially transformative tool for future building sites. SAM is already at work on construction sites around the U.S. and recently received an upgrade to SAM OS 2.0, which allows it to lay “soldier course” bricks.

Is Construction Robotics worried that it’s putting human laborers out of business, though?

“We don’t see construction sites being fully automated for decades, if not centuries,” Podkaminer said. “This is about collaboration between human workers and machines. What SAM does is to pick up the bricks, put mortar on them, and puts it on the wall. It still requires a mason to work alongside it. SAM’s just there to do the heavy lifting.”

At present, SAM’s human partner is required to smooth over the concrete before SAM places more bricks. While some people are going to be concerned that robots like this will replace humans on construction sites, if — as Podkaminer notes — robots can do the backbreaking heavy lifting and leave people to do other jobs, that could work out best for all involved.

Plus, we presume it doesn’t shout mean comments about our skinny arms as we walk past the sites it is working on.
