Artificial Intelligence in Oncology: Fantasy or Reality?

Artificial intelligence promises to use the power of data to solve some of the biggest problems of our time. But can it help us treat a disease as complex as cancer?

Artificial intelligence and machine learning are (not that) new technologies that have recently been given a boost by hardware improvements. Through algorithms, they can learn, predict and advise based on vast amounts of data.

The technology’s potential to disrupt all sorts of markets has led to some big investments. In April, the European Commission announced a €20Bn AI strategy for Europe. France also launched its own €1.5Bn program, which was followed by the opening of new R&D facilities by companies like Fujitsu, Facebook and Google DeepMind.

One of the areas where AI is expected to have a major impact is healthcare, where it can be used to interpret the data from the massive databases gathered over the years by companies, healthcare providers and payers. In particular, the treatment of cancer could greatly benefit from the arrival of AI technology.

Why is artificial intelligence relevant in oncology?

Oncologists have been trying for decades to define small subsets of cancer patients that can benefit from a specific treatment. However, the success of targeted therapies has so far been limited. At the moment, medical doctors are overwhelmed with data from imaging, genomics, comorbidities and previous treatments.

This is where AI comes into play. The technology has the potential to crunch the data to predict a patient’s prognosis and advise doctors on the options available, including personalized medicine and clinical trials with experimental therapies.

AI as a diagnostic tool

Some companies are already selling ‘AI as a service’ solutions ranging from early-stage diagnosis to prognosis. For example, in the context of breast cancer, only 5% of women who are recalled after a first screening are indeed sick. This increases costs and is a stressful experience for patients. Therapixel, a startup specializing in medical imaging, is using artificial intelligence to deal with this issue by performing automated mammography analysis.
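
Under the hood, a tool like this is essentially an image classifier that scores how likely a scan is to warrant a recall. The minimal sketch below, in Python with TensorFlow/Keras, is an illustration only: the architecture, input size and single-probability output are our own assumptions, not a description of Therapixel’s actual system.

```python
# Minimal sketch of a mammography "recall or not" classifier.
# Illustrative only: architecture and input size are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_screening_model(input_shape=(256, 256, 1)):
    """Binary classifier: probability that a scan warrants a recall."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # recall probability
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

model = build_screening_model()
model.summary()
```

A real system would be trained on large sets of labelled mammograms and validated against radiologists before going anywhere near a clinic; the value lies precisely in cutting the false recalls described above.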

In the specialty of pathology, AI has shown that it can significantly reduce the error rate of diagnosis compared to a specialist working alone. In this field, Google is developing an augmented reality microscope that uses AI software to assist pathologists in the detection of cancer, which could significantly reduce time-consuming activities such as manual cell counting. IBM has ambitious goals with its AI product Watson for Genomics, although so far its results don’t seem as good as promised.

In Switzerland, Sophia Genetics is using artificial intelligence to pinpoint gene mutations behind cancer to assist doctors in the prescription of the best treatment. Their solution costs on average $50-$200 per genetic evaluation and according to the company it is currently used by more than 420 hospitals in over 60 countries.

Freenome, another deep tech company working on the early detection of cancer, has attracted a $77M round from well-known VCs, including Andreessen Horowitz and Google Verily. Freenome recently announced a strategic collaboration with the Institut Curie to evaluate its AI genomics platform as a tool to predict patients’ responses to immuno-oncology therapies by observing changes in biomarkers circulating in the bloodstream.

AI for precision medicine

Using AI to stratify patients has big potential, but a major bottleneck is that we are still lacking a range of personalized medicine drugs wide enough to treat all these patients. According to Sam Natapoff, analyst at Bloomberg, drug development is “made for AI applications.” This opportunity has attracted large AI developers, big pharma and a huge number of startups. It is estimated that approximately one hundred startups are using AI in the field of drug discovery.

In late 2016, Pfizer announced a collaboration with IBM Watson for Drug Discovery in order to “analyze massive volumes of disparate data sources, including licensed and publicly available data as well as Pfizer’s proprietary data.”

Sanofi and GSK have announced, respectively, $300M and $42M deals with Exscientia, a spin-out of the University of Dundee, Scotland, to identify synergistic combinations of cancer targets and then develop drugs against those targets.

Roche, on top of many other deals including the $1.9Bn acquisition of Flatiron Health and a partnership with GNS Healthcare, is supporting an open research initiative called EPIDEMIUM that brings together multiple players to apply AI to the search for new cancer therapies.

However, this field is still at a very early stage. So far, only the British company BenevolentAI, in partnership with Janssen, has shown concrete results, which have led to a drug candidate now moving to a Phase II trial.

Reducing trial costs

Artificial intelligence has the potential to draw insights from tremendous volumes of real-world data and apply them to the design of clinical trials, which could significantly reduce costs, especially given that patient recruitment alone accounts for about 30% of total clinical trial time.
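
One concrete application is trial matching: screening structured patient records against a trial’s eligibility criteria. The toy sketch below, in Python with pandas, uses invented column names and thresholds, and assumes the genuinely hard step, turning free-text health records into structured fields, has already been done.

```python
# Illustrative sketch of AI-assisted trial matching: filter a patient
# table against machine-readable eligibility criteria.
# Column names, values and criteria are invented for the example.
import pandas as pd

patients = pd.DataFrame({
    "patient_id":  [1, 2, 3],
    "age":         [54, 71, 46],
    "diagnosis":   ["NSCLC", "NSCLC", "breast"],
    "ecog":        [1, 2, 0],    # performance status
    "prior_lines": [1, 3, 0],    # prior lines of therapy
})

trial_criteria = {
    "diagnosis": "NSCLC",
    "age_max": 70,
    "ecog_max": 1,
    "prior_lines_max": 2,
}

eligible = patients[
    (patients["diagnosis"] == trial_criteria["diagnosis"])
    & (patients["age"] <= trial_criteria["age_max"])
    & (patients["ecog"] <= trial_criteria["ecog_max"])
    & (patients["prior_lines"] <= trial_criteria["prior_lines_max"])
]
print(eligible)  # only patient 1 meets all criteria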

Recently, the Horizon 2020 program granted €16M to a huge European consortium — including big names like Institut Curie, Charité, Bayer, Philips and IBM — aiming to use AI technology to improve clinical outcomes in oncology at lower cost.

However, precedents have not been that promising. In 2013, the M.D. Anderson Cancer Center launched a program to test whether IBM Watson could speed up the process of matching patients with clinical trials. In the end, the $62M program proved to be neither efficient nor cost-effective.

Challenges to overcome

Data scientists have to deal with unstructured electronic health records, and with data from multiple sources that was collected and structured for different purposes. Most routine databases lack the quality AI algorithms need to meet the standards required for clinical trials.

From a regulatory perspective, the authorities have been proactive to address issues in the approval process. FDA Commissioner Scott Gottlieb said recently during a conference in Washington: “AI holds enormous promise for the future of medicine,” and “We’re actively developing a new regulatory framework to promote innovation in this space, and support the use of AI-based technologies.”

The very first cloud-based deep learning algorithm was recently approved by the FDA under the category of medical devices, meaning it can be used in clinical routine. In the EU, a legislative proposal for new medical device regulations, not yet adopted, addresses software for medical devices with a medical purpose of “prediction and prognosis.” An additional challenge in Europe is that the recent enforcement of the General Data Protection Regulation (GDPR) is also impacting the development of AI algorithms.

There is undoubtedly fantasy around AI. Entrepreneurs are tempted to ride the hype and the community’s limited understanding of what AI is and what it can do. The AI business value chain should be discussed to clarify the involvement of different stakeholders at every step, from raw algorithms to results. Only then will we be able to move from an overhyped technology with a few proof-of-concept examples to a genuine breakthrough in healthcare.

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

CONGRATULATIONS TO ALL THE 2018 WINNERS…

CONFERENCE AWARDS SALUTES CHAMPIONS OF THE CONFERENCE INDUSTRY...

Conference Awards | www.conferenceawards.co.uk

29th June 2018 | City Central at the HAC, London

GCN Events are delighted to announce the winners of the 9th Annual Conference Awards.

The shortlist saw representation from a diverse range of companies and events held in the UK and overseas, organized by commercial conference companies, agencies, corporates and associations.

Winners were revealed at a ceremony at City Central at the HAC in London with hosting and entertainment provided by Zoe Lyons and Milton Jones, courtesy of headline sponsor Performing Artistes.

A complete list of 2018 winners is below:

Best Conference by a Small Company

C21Media – Content London

Best Conference by an Events Agency

BRANDFUEL – Google Zeitgeist

Best Partnership or Collaboration

MEETinLEEDS – Communication Matters' Augmentative and Alternative Communication Annual Conference

Best Free to Attend Conference

Redactive – RCM Annual Conference and Exhibition

Best Conference Venue - under 500 attendees theatre style

One Great George Street

Best Conference Venue - 500 to 1300 attendees theatre style

Smart Group – Here East

Best Conference Venue - over 1300 attendees theatre style

Manchester Central

Best Event Re-Vamp or Re-Launch

KNect365, part of Informa – London Tech Week

Best Corporate Event

WONDER London – Google Cloud Next 2017

Best Awards Event

Procurement Leaders – World Procurement Awards

Best UK Conference (under 1500 delegates)

Contentive – HRD Summit

Best UK Conference (over 1500 delegates)

Haymarket Media Group – CIPD Annual Conference & Exhibition 2017 – Embracing the New World of Work

Best Overseas Conference of the Year (under 400 delegates)

The Science and Information Organization – Future Technologies Conference 2017

Best Overseas Conference of the Year (over 400 delegates)

Green Power – MIREC Week 2017

Best Conference Venue Customer Service

Green & Fortune – Kings Place

Best Events Operation Team

Festival of Media – Operations Team

Best Event Linked to a Publication

C21Media – Content London

Best New Conference Launch

Oliver Kinross – The AI Congress

Best Large Scale Event or Congress

Euromoney Trading – Investing in African Mining Indaba

Best Conference Series

weCONECT Global Leaders – Smart Mobility Series

 

The full list of finalists and winners and supplementary material can be found on the website: http://conferenceawards.co.uk/winners/

 

Additional silver and bronze honours were also handed to:

* Aesthetics Media – Aesthetics Conference and Exhibition * ALM – Legal Week Connect * America Square Conference Centre * Association of MBAs – * Bioscientifica * C21Media * Campden Wealth * Cheltenham Racecourse * Clarion Events * Conferenz * drp * Edinburgh International Conference Centre * EMAP Publishing * etc.venues * Euromoney Trading * Evenco International * Faversham House * Fitz All Media * Forgather * Global Trade Review * HGA * IAB UK * Incisive Business Media * Incisive Media – Channel Awards * Inntel * Marketforce * Ocean Media Group * QEII Centre * Redactive * Reed Business Information * Sense Media * Surrey County Cricket Club * Terrapinn – World Vaccine Congress Washington 2018 * The Principal Hotel Company – The Principal Manchester * The Telegraph Festival of Business * Tobacco Dock * Warwick Conferences * Whitehall Media * WONDER London

Rory Ross-Russell, joint founder of GCN Events, commented:

“The Conference Awards once again underline the importance of the vibrant conference sector. It was a fantastic showcase of the innovation, dynamism and creativity evident in all the players in the market, and an important day to take stock and celebrate the hard-won achievements of the last year.”

GENDER SPECIFIC NETWORKING – Mind the Gap

Dr Laura Weis
Researcher & Consultant


Despite increasing efforts to remove practices preventing women from moving up the organizational hierarchy, women remain underrepresented in senior management positions. There is still talk of limited social mobility, unequal opportunities and the importance of smashing glass ceilings holding back women.
Several explanations have been put forward for this enduring gender gap. One argument that has recently received considerable attention is that women do not have the same access to career-enhancing networks as men. Networks, defined as informal relations connecting individuals and groups of individuals, are increasingly relevant in organizations. A recent report published by the British Psychological Society (BPS) revealed that for those women who made it to the top of the corporate chain, the ability to build, maintain and use social capital was key to their success.
Academic studies have shown that establishing powerful networks is beneficial for many reasons, including increased motivation, social support, performance and individual career opportunities. Crucially, these studies provide evidence that men and women differ in the structure of their personal networks, as well as in the rewards gained from them. For instance, men often have a greater number of instrumental ties, relationships that provide job-related resources, while women have a greater number of expressive ties, relationships that provide emotional and social support. Consequently, women tend to have smaller networks of stronger relationships, while men see their networks as a way to get ahead and are more interested in what the relationship can yield. Women network to get along with others; men, to get ahead of them.

Furthermore, while men prefer to network with other men on both expressive and instrumental contents, women often choose other women for expressive contents only and prefer to go to males for instrumental contents.


This has two important consequences:

• Since men seek friendship from those men who also provide access to organizational resources, they build so-called “multiplex relations”, characterised by the exchange of both personal and professional resources. These relationships are shown to be key in the process of becoming a senior leader. But women do not tend to build multiplex ties as frequently, preventing themselves from building the deep, trusting relationships with powerful men (and women) that are often necessary for promotion, in particular in high-level jobs where performance is hard to predict.
• The preference of both males and females to have instrumental relations with males results in females rarely being in informal/natural roles of influence (e.g. advice or information giving). This underrepresentation in informal influence positions may negatively affect women’s ability to construct a credible leader identity.

Also, in settings where men predominate in positions of power, women often have a smaller pool of high-status individuals (women and men) to draw on. This difference partly stems from a reluctance of women to undertake the instrumental activities required to build a strong network.
Why? Women often fear that these activities will appear inauthentic and overly instrumental.

Is this fear justified? Research suggests yes. Successful females are often judged much more harshly than males, are seen as more aggressive, self-promoting and power-hungry, and are thus penalized in the form of social rejection.

In spite of these challenges, some women rise to leadership positions, but structural obstacles and cultural biases continue to influence their progress and leadership experiences. As women climb the corporate ladder, they become increasingly scarce, and therefore more visible. This subjects women to greater scrutiny, leading them to become risk-averse, overly focused on details and prone to micromanage, often preventing them from stepping up to the top level.

These network phenomena remain largely unaddressed challenges for women, and one-on-one mentoring and training alone is unlikely to lead to advancement. Women need to network differently and be given opportunities to do so. The network literature suggests women need to network upwards, to more powerful people, more often. Women’s natural networking and high-EQ skills are a great professional asset, but mainly when used strategically. Yet women often shrink back from tactically using that skill, or feel unable to harness it. Companies should give women permission, encouragement and opportunities to build powerful inter- and intra-organizational networks. ‘Who you know and who knows you’ is responsible for a large percentage of career progression, and women’s limited access to powerful networks, as well as a prevailing hesitation about strategic networking, represents an often overlooked barrier to their opportunities.
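
For the analytically minded, the “multiplex tie” idea above can be made concrete with a few lines of network analysis. The toy sketch below uses Python’s networkx library; the people and ties are invented purely for illustration.

```python
# Toy illustration of multiplex ties: count, per person, how many
# contacts provide BOTH professional (instrumental) and personal
# (expressive) support. All names and ties are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ana", "ben",    {"instrumental": True,  "expressive": True}),   # multiplex
    ("ana", "carla",  {"instrumental": False, "expressive": True}),
    ("ben", "dave",   {"instrumental": True,  "expressive": True}),   # multiplex
    ("carla", "dave", {"instrumental": True,  "expressive": False}),
])

for person in G.nodes:
    multiplex = sum(
        1 for _, _, d in G.edges(person, data=True)
        if d["instrumental"] and d["expressive"]
    )
    print(person, "multiplex ties:", multiplex)
```

The research summarised above suggests that, in a real organizational network, men’s multiplex counts would tend to run higher than women’s.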

Brainpool will be exhibiting at the AI Congress on September 11 & 12. Register today to reserve your ticket: https://theaicongress.com/bookyourtickets/

Link to the full article: https://brainpool.ai/blog/

How Artificial Intelligence Could Kill Capitalism

If you believe the hype, then Artificial Intelligence (AI) is set to change the world in dramatic ways soon. Nay-sayers claim it will lead to, at best, rising unemployment and civil unrest, and at worst, the eradication of humanity. Advocates, on the other hand, are telling us to look forward to a future of leisure and creativity as robots take care of the drudgery and routine.

A third camp – probably the largest – is happy to admit that the forces of change at work are too complicated to predict and, for the moment, everything is up in the air. Previous large-scale changes to the way we work (past industrial revolutions) may have been disruptive in the short term. However, in the long term they produced a transfer of labor from countryside to cities, not a lasting downfall of society.

However, as author Calum Chace points out in his latest book, 'Artificial Intelligence and the Two Singularities', this time there’s one big difference. Previous industrial revolutions involved replacing human mechanical skills with tools and machinery. This time it’s our mental functions which are being replaced – particularly our ability to make predictions and decisions. This is something which has never happened before in human history, and no one knows exactly what to expect.

When I recently met with Calum Chace in London, he told me: “A lot of people think it didn’t happen in the past, so it won’t happen now – but everything is different now.

“In the short run, AI will create more jobs as we learn how to work better with machines. But it’s important to think on a slightly longer timescale than the next 10 to 15 years.”

One guiding idea has always been that as machines take care of menial work (be that manual labor, augmenting the abilities of skilled professionals such as doctors, lawyers and engineers, or making routine decisions), humans will be free to spend their time on leisure or creative pursuits.

However, as Chace says, that would require the existence of the “abundance economy” – a Star Trek-like utopia where the means of filling our basic needs - sustenance and shelter - are so highly available that they are essentially free.

Without this happening, humans will find themselves in a situation where they have to go out and compete for whatever paid jobs are still available in the robot-dominated workforce. As a simple example, a fully automated farm would, in theory, provide food at a far cheaper cost than one staffed with human farm hands, machinery operators, administrative staff, distribution operatives and security guards. However, if the owner of the farm still parts with his goods to the highest bidder, there would be inequalities in how that food is distributed among the populace, and the potential for a poverty-stricken underclass which lacks access to adequate sustenance. Nothing new there – of course, this underclass has existed throughout history. However, it doesn’t exactly fit with the idea of the Star Trek utopia we need to have in place before we can comfortably hand the reins to the machines.

This makes it something of a “chicken and egg” problem, and the ideal way for it to play out would seemingly be a gradual and managed transition to a smart machine-driven economy. This process would involve careful oversight of which human roles were being automated, and ensuring that the “plentiful” resources are in place to support those who unfortunately do find that they are being replaced, rather than merely “augmented.”

The problem is that this would require two elements: a concerted and informed effort from governments and regulators to understand the scale of the challenge and enable the right framework for it to happen, and an acceptance by those leading the charge – the tech industry – that there is a more important motive than profit for getting the change right.

Neither of those seems likely to happen any time soon. Despite the “make the world a better place” ethos, big tech’s overriding aim is still to generate growth and profit for its enterprises.

Also, managing the political change could be an even tougher job than persuading a tech CEO that she shouldn’t be focusing on revenue or profits.

“People aren’t stupid,” Chace says, while discussing how automated driving systems look set to erode the employment opportunities for humans whose trade is driving.

“They will see these robots driving around taking people’s jobs, and think ‘it won’t be long until they come for mine’ – and then there will be a panic. And panics lead to very nasty populist politicians, of the left or the right, being elected.”

Chace also doesn’t believe that the concept of universal basic income – currently being trialed in some Scandinavian countries – is the right answer, or at least not in its current form.

“The problem with universal basic income is that it’s basic. If all we can do is give people a basic income, we’ve failed, and society probably isn’t saveable.”

A future where the majority of humans live on a subsistence-level income funded by the fruits of a robotic labor force, while a “1 percent” upper class – those in control of the robots – build their empires and reach for the stars, isn’t appealing to those with an egalitarian mindset. However, it could be the direction we’re heading in.

However, argues Chace, it’s not too late to plot a better course.

“We’ve all got a job to do – to wake up our political leaders who are not thinking about this, and wake up our tech leaders – who seem to be deeply in denial.

“If we do grasp the challenge we can have an amazing world for ourselves, our kids and our grandkids, a world where machines do the boring stuff and humans do the worthwhile, interesting stuff.”

Get your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

How artificial intelligence can teach us about humans

AI tools such as Watson Marketing Assistant can help staff add value in other areas CREDIT: GETTY

Jeremy Waite of IBM on how AI helps firms make increasingly informed decisions about customer behaviour

Fully understanding consumer behaviour can be very challenging for marketers. In light of recent high-profile cases of personal data misuse, customers have been increasingly sceptical about giving up their personal data, preferring to share content privately through messaging apps such as WhatsApp, Snapchat or Messenger. At the same time, GDPR regulations mean businesses are unable to use vast quantities of data they collect on customers without their express permission.

This means that, rather than offering personalisation, marketers have to rely on creating “personas” – types of customers they can target who don’t require personally identifiable information. Coupled with this, they are increasingly turning to the latest augmented or artificial intelligence technologies to build a better picture of consumers and how they are likely to behave in their decision making.
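
To illustrate how personas can be built without personally identifiable information, the sketch below clusters anonymous behavioural features into customer types using k-means from scikit-learn. The features, numbers and choice of two clusters are assumptions for the example, not any vendor’s actual method.

```python
# Sketch of deriving "personas" from anonymous behavioural data.
# Feature names and values are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows: anonymous sessions; columns: visits/week, avg basket (GBP), % mobile
X = np.array([
    [1, 15.0, 0.9],
    [2, 18.0, 0.8],
    [7, 80.0, 0.2],
    [6, 95.0, 0.1],
    [3, 40.0, 0.5],
])

X_scaled = StandardScaler().fit_transform(X)  # put features on one scale
personas = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(personas)  # e.g. casual mobile browsers vs. heavy desktop buyers
```

No individual is identified at any point; marketers target the cluster, not the person.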

One company at the forefront of the AI revolution, IBM, has reinvented its offerings over the course of a century. One of its key areas of development is around its Watson artificial intelligence platform. Watson first gained worldwide attention in 2011 when it beat US game show champions on Jeopardy!, and IBM recently launched Project Debater, demonstrating AI advances not only in understanding natural language, but also in being able to form it into a reasoned debate. Watson has since been developed into a suite of cloud-based AI tools designed to help businesses with their commerce, marketing and supply chain by using a voice- or text-powered intelligent assistant.

For example, thanks to IBM’s acquisition of The Weather Company, including Weather.com and The Weather Channel mobile app, Watson is able to use 2.5 billion internet of things (IoT) sensors across the world to help companies in several ways. “Weather is one of the biggest factors in driving consumer behaviour, such as determining what they are going to buy and when,” says Jeremy Waite, chief strategy officer of Watson Customer Engagement, Europe. “It’s also massively important for logistics such as shipping, especially for brands that need their products to be on the shelves very quickly.”

Whereas 10 years ago brands were in control of the online relationship with customers via email and social media, gathering all the individual data they needed in the process, the power over data has recently shifted back to the consumer. “What makes IBM unique is we are able to use AI to analyse vast amounts of anonymous data in order to find triggers of what companies should do next,” Waite says. “It’s like having your very own voice-activated data scientist and marketer sitting next to you.”

"The average employee wastes a day a week searching for information to do their job which could be done in a few seconds with a digital assistant."

But isn’t there a legitimate concern the adoption of AI technology could lead to reduction in jobs, particularly in marketing? Far from it, says Waite, who believes that Watson should be viewed as helping people to do their jobs more efficiently. “The average employee wastes a day a week searching for information to do their job which could be done in a few seconds with a digital assistant.”

AI tools such as Watson Marketing Assistant can help staff add value in other parts of the business, leaving tasks that can be automated to technology. AI technology has been used by the All England Lawn Tennis Club to analyse tennis matches going back decades, even assessing crowd reaction and assisting media teams in producing highlights packages of Wimbledon. “There’s huge pressure on staff in those two weeks to produce high-quality content,” says Waite. “This technology frees them up to be more creative in other areas.”

Similarly, IBM’s Watson technology is being used at the World Cup in Russia by Fox Sports to allow fans to put their own highlights packages together. Fans can choose the players and tournaments (going back to 1958) they want, and “play types” such as goals, shots, saves and red cards.

AI has an increasing role to play in helping businesses with their digital transformation. AI-powered virtual agents, or “chatbots”, are widely used by organisations to help customers solve the vast majority of queries, and the technology can help automate processes that would take far longer manually, be it cataloguing images and video for sports content providers or producing marketing reports and invoices for clothing manufacturers.
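
To give a flavour of what sits behind the very simplest virtual agents, here is a minimal keyword-based intent matcher in Python. Production systems such as Watson Assistant use trained language models rather than keyword lookups; the intents and answers below are invented.

```python
# Minimal sketch of keyword-based intent matching for a support chatbot.
# Real systems use trained classifiers; these intents are invented.
RESPONSES = {
    "refund":   "I can help with refunds. Could you share your order number?",
    "delivery": "Deliveries usually take 3-5 working days. Want me to track yours?",
    "hours":    "Our stores are open 9am-8pm, Monday to Saturday.",
}

def reply(message: str) -> str:
    text = message.lower()
    for intent, answer in RESPONSES.items():
        if intent in text:          # first matching keyword wins
            return answer
    return "Let me hand you over to a colleague who can help."

print(reply("Where is my delivery?"))
```

The escalation path in the final line matters as much as the matching: queries the bot cannot handle go to a human, which is how most deployments keep customers onside.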

Yet the reality, according to research from PwC, is that 85pc of digital transformation projects still fail. All too often, though, it seems this isn’t a reflection of the technology itself, but because of the people involved – either through a lack of buy-in from the chief executive or a short-term perspective.

“The key is to invest more in people than we do in technology,” says Waite. “Nate Silver famously said that we ask too much of our technology and not enough of ourselves. With a board willing to buy into and back the right vision in the long-term, and a strategy that filters down to the marketing teams, transformation works.”

Only when the culture around digital changes, everyone’s views are aligned and AI becomes a board-level agenda can AI in business really hope to succeed.

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Robots Are Our Friends -- How Artificial Intelligence Is Leveling-Up Marketing

AI is doing the jobs we don't want to do and the jobs we can't do, and improving the jobs we already do.

How many times in the last year have you heard the question, “Is artificial intelligence going to take over our jobs?” For marketers, this question is just as relevant, but I’m here to tell you that robots are our marketing friends.

We are just at the start of tapping into artificial intelligence (AI) for marketing, but there are already a number of great ways that this technology is improving our jobs, not killing them.

AI, in fact, has the potential to do the jobs we don’t want and the jobs we can’t do, and to ultimately help us do the jobs we already do, better. Here's more on each prediction:

AI is doing the jobs we don’t want to do.

The first obvious application of artificial intelligence is to automate the tasks that we humans don't want to do -- those repetitive, low-skill tasks. AI can be easily programmed to do such work and do it faster, more cheaply and more reliably. A great example is the cataloguing of marketing data to be used for analysis.

Say, for example, that you wanted to write a unique blog article on the topic of “video marketing.” In order to figure out a unique angle for your article, you may want to catalogue all of the existing content on the topic of “video marketing” and even categorize each article by website, author and share metrics. This could be a very manual process for a human and something that would invite human error into the process.

Where AI shines is that it can do such repetitive tasks -- but at scale. Imagine that, in the same time frame, a junior marketer could catalogue 100 “video marketing” articles while a machine could catalogue more than 1,000 articles on the same topic, along with 1,000 articles each for 100 more topics.

Such AI ability becomes particularly useful for marketers who are attempting to aggregate data about what’s happening outside of their company -- what’s being published by their competitors, customers or industry peers. There are tens of millions of pieces of content data created every minute, and if marketers want to leverage it, we need to employ machines to help.
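
As a sketch of what that machine cataloguing might look like, the Python snippet below fetches a list of article pages and records the title and author into a CSV. The URL and meta-tag conventions are assumptions, and a production crawler would also need politeness controls such as robots.txt checks and rate limits.

```python
# Toy content-cataloguing crawler: fetch pages, record title and author.
# URLs and meta-tag conventions are assumptions for illustration.
import csv
import requests
from bs4 import BeautifulSoup

urls = ["https://example.com/video-marketing-guide"]  # placeholder list

with open("catalogue.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "title", "author"])
    for url in urls:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        title = soup.title.string.strip() if soup.title and soup.title.string else ""
        author_tag = soup.find("meta", attrs={"name": "author"})
        author = author_tag["content"] if author_tag else "unknown"
        writer.writerow([url, title, author])
```

Point the same loop at 100,000 URLs and the scale argument above makes itself.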

AI is doing the jobs we can’t do.

Not only is AI automating jobs we don’t want to do, it’s also opening the doors to jobs we can’t do. Since AI has the ability to process an infinitely larger dataset than a human can, it can leverage that scale to identify marketing insights that would otherwise be lost.

Say you want to take the next step in that content-marketing data-collection project: you not only want to catalogue all of the “video marketing” content, but to catalogue all of the content being published in your industry more broadly. Ultimately, you'll want to use this catalogue to drive market-informed content campaigns of your own.

Identifying emerging topics or types of articles that garner above-average shares can help direct new content creation to align with existing trends. A given article could have many different qualities that could lead to its success. It’s AI’s ability to tag and compare many data points that ultimately produces the marketing takeaway.

AI’s strength in turning a mass of data into insight truly shines in the noisiest, highest-volume channels that a marketer hopes to master. Social media, content marketing, news and PR are great places to start, but even competitors’ job postings and website changes can be great inputs for marketing campaigns if a business can manage to extract insight out of the noise.

Again, AI-based technologies have the ability to throw out the noise -- whether that means the same old promotional tweet or a website update to fix a typo. Those technologies can then focus on the signal -- like a tweet about an acquisition or a website change that alters a competitor’s pricing. In this way, AI can see both the forest and the trees in online data to surface takeaways for marketers that they would not be able to find manually -- and in real time, at that.
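
Here is a toy version of that signal-versus-noise filter: it diffs two snapshots of a competitor page and flags the change only if it touches terms worth a marketer’s attention. The keyword list and the diff-based approach are illustrative assumptions, not any product’s method.

```python
# Toy noise filter: diff two page snapshots, flag only "signal" changes.
# Keywords and thresholds are illustrative assumptions.
import difflib

SIGNAL_TERMS = ("price", "pricing", "$", "acquisition", "discontinued")

def classify_change(old: str, new: str) -> str:
    diff = [line for line in difflib.unified_diff(old.splitlines(),
                                                  new.splitlines())
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]
    if not diff:
        return "no change"
    changed_text = " ".join(diff).lower()
    if any(term in changed_text for term in SIGNAL_TERMS):
        return "signal: review this change"
    return "noise: ignore"

print(classify_change("Plan A costs $10/mo", "Plan A costs $15/mo"))
# -> signal: review this change
```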

AI is improving the jobs we already do.

By incorporating AI into our marketing, we have the opportunity to free up that expensive, intelligent, creative resource that is a marketer to do higher value work. Instead of collecting data, marketers can analyze it. Instead of sifting through data, marketers can act on it.

By delegating work to AI-driven technologies, marketers can improve their work by creating content they know will stand out, by implementing conversion-optimization strategies observed from competitors’ sites and enabling sales using the latest competitor pricing. And that's just the start of what AI can potentially do.

Get your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Insurers turn to artificial intelligence in war on fraud

Machine learning is helping the insurance industry flag suspicious claims–and even crawl through social media accounts to find fraud.

From bogus claims to shady brokers, insurance fraud costs companies and their customers more than $40 billion a year, the FBI estimates. And that’s excluding medical insurance fraud, which is estimated by industry groups to cost tens of billions more.

The staggering level of criminality costs us all, adding $400 to $700 a year to premiums we pay for our homes, cars, and healthcare, the feds say. There are simply not enough investigators to put a significant dent in the criminality, so the industry is turning to the machines.

Using artificial intelligence to pick out inconsistencies and unusual patterns has quickly become standard for insurance companies, whether they’re looking for sophisticated rings of fraudsters rigging auto accidents or just individuals embellishing how much their damaged property was worth.
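
A minimal sketch of that “unusual pattern” detection, using an off-the-shelf anomaly detector from scikit-learn: the claim features and values below are invented, and real systems use far richer attributes, but the shape of the approach is the same.

```python
# Sketch of anomaly-based claim triage. Features and values invented;
# real systems use far richer claim attributes.
import numpy as np
from sklearn.ensemble import IsolationForest

# columns: claim amount ($), days since policy start, prior claims
claims = np.array([
    [1200, 400, 0],
    [ 900, 650, 1],
    [1500, 500, 0],
    [9800,  12, 4],   # large claim, brand-new policy, many priors
    [1100, 700, 0],
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(claims)
flags = detector.predict(claims)   # -1 = suspicious, 1 = looks normal
print(flags)  # the fourth claim gets routed to a human investigator
```

Note that the model only triages; as the article describes, a flagged claim goes to a person, not a verdict.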

“You can’t not do it–this is kind of part and parcel of modern insurance,” says Jim Guszcza, U.S. chief data scientist at Deloitte Consulting. “You can’t not have machine learning and predictive analytics for claims.”

Among the companies harnessing the power of data is Lemonade, a New York home and renters’ insurance startup founded in 2015 by two tech veterans. CEO Daniel Schreiber says the data-driven approach often lets Lemonade evaluate and pay out claims substantially quicker than many traditional insurers.

In about one third of cases, claims can be approved and paid out essentially instantly by the company’s algorithms, he says. “Even if a human is involved, it’s radically quicker.”

Humans can also still review claims after they’ve been paid, checking up on and improving the automated processes, he says. That way, they can teach the algorithms what to be suspicious of in a claim–just as the machines can highlight suspicious factors they might miss.

“We’re finding that our claims department is responding much faster because it’s now competing with an algorithm,” he says.

For many insurance companies, it’s not so much a competition but a way to more effectively triage claims to let humans dive deep into the ones that need more examination.

“Part of the zeitgeist among insurers today is low-touch or no-touch claims processing,” says James Quiggle, director of communications at the Coalition Against Insurance Fraud, an industry group. “More and more insurance companies are looking toward machines to help deal with often basic scams, thereby freeing up investigators for more complicated aspects for investigations that only humans can handle.”

Sophisticated AI tools can also spot complex patterns of fraud–like groups of connected people filing similar claims, perhaps with overlapping networks of doctors or lawyers, about injuries from deliberately staged car accidents.

“Data crunching can sift through a hugely confusing pile of information, make sense of it, piece it all together in a way that investigators can clearly see and thus create an aha moment where the entire ring is outlined in graphic terms on a computer screen,” says Quiggle. “The whole scam–the whole organization–is minutely outlined in graphic terms that might have taken months to analyze with humans acting alone.”
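
The graph analysis Quiggle describes can be sketched in a few lines: link claimants who share doctors or lawyers, then read off the connected groups. The names and links below are invented for illustration.

```python
# Sketch of fraud-ring detection: connect claimants via shared
# service providers, then list connected groups. All names invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("claimant_1", "dr_smith"), ("claimant_2", "dr_smith"),
    ("claimant_2", "lawyer_jones"), ("claimant_3", "lawyer_jones"),
    ("claimant_4", "dr_patel"),   # unconnected, probably routine
])

for group in nx.connected_components(G):
    claimants = {n for n in group if n.startswith("claimant")}
    if len(claimants) > 1:
        print("possible ring:", sorted(claimants))
# -> possible ring: ['claimant_1', 'claimant_2', 'claimant_3']
```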

BIG BROTHER IS WATCHING

Part of the equation is that ever more data is now available to investigators and their digital assistants, from public social media posts (people with purportedly severe injuries posting pictures from the softball field) to license plate readings and even Fitbit records.

Hanzo, a web archiving and analysis firm with offices in the U.S. and U.K., develops software that insurers can use to pull and sift through data from social media, marketplace sites like eBay and Craigslist, and elsewhere on the web in researching claims.

“Anything you can see in a browser we can effectively collect,” says Keith Laska, the company’s chief commercial officer. “The web-crawling technology then spiders down and tries to sift through thousands of pages of content to find the relevant information.”

That could be evidence that an insurance customer claiming items were stolen from a car listed similar items on a classified site, or checked in on social media at locations far from the scene of the crime when the burglary was said to take place.

Inevitably, experts say, increased use of machine learning and disparate sources of data will raise questions about privacy that could lead to the development of industry standards or regulation.

For instance, Quiggle says, people might look askance at a worker’s compensation insurer flying a drone over an injury victim’s backyard, hoping to find evidence that the person is in better shape than claimed.

“You think this person is doing fun stuff when he or she should be flat on his or her back,” he says. “Where can you go and where can’t you go to try to pierce that person’s privacy veil? Can you fly over that person’s backyard?”

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Artificial Consciousness: How To Give A Robot A Soul

The Terminator was written to frighten us; WALL-E was written to make us cry. Robots can’t do the terrifying or heartbreaking things we see in movies, but still the question lingers: What if they could?

Granted, the technology we have today isn’t anywhere near sophisticated enough to do any of that. But people keep asking. At the heart of those discussions lies the question: can machines become conscious? Could they even develop — or be programmed to contain — a soul? At the very least, could an algorithm contain something resembling a soul?

The answers to these questions depend entirely on how you define these things. So far, we haven’t found satisfactory definitions in the 70 years since artificial intelligence first emerged as an academic pursuit.

Take, for example, an article recently published by the BBC, which tried to grapple with the idea of artificial intelligence with a soul. The authors defined what it means to have an immortal soul in a way that steered the conversation almost immediately away from the realm of theology. That is, of course, just fine, since it seems unlikely that an old robed man in the sky reached down to breathe life into Cortana. But it doesn’t answer the central question — could artificial intelligence ever be more than a mindless tool?

Victor Tangermann, The Birth of Alexa, Photoshop, 2018

That BBC article set out the terms — whether an AI system acts as though it has a soul will be determined by the beholder. For the religious and spiritual among us, a sufficiently advanced algorithm may seem to present a soul. Those people may treat it as such, since they will view the AI system’s intelligence, emotional expression, behavior, and perhaps even a belief in a god as signs of an internal something that could be defined as a soul.

As a result, machines containing some sort of artificial intelligence could simultaneously be seen as an entity or a research tool, depending on who you ask. Like with so many things, much of the debate over what would make a machine conscious comes down to what of ourselves we project onto the algorithms.

“I’m less interested in programming computers than in nurturing little proto-entities,” Nancy Fulda, a computer scientist at Brigham Young University, told Futurism. “It’s the discovery of patterns, the emergence of unique behaviors, that first drew me to computer science. And it’s the reason I’m still here.”

Fulda has trained AI algorithms to understand contextual language and is working to build a robotic theory of mind, a version of the principle in human (and some animal) psychology that lets us recognize others as beings with their own thoughts and intentions. But, you know, for robots.

“As to whether a computer could ever harbor a divinely created soul: I wouldn’t dare to speculate,” added Fulda.

There are two main problems that need resolving. The first is one of semantics: it is very hard to define what it truly means to be conscious or sentient, or what it might mean to have a soul or soul-function, as that BBC article describes it.

The second problem is one of technological advancement. Compared to the technology that would be required to create artificial sentience — whatever it may look like or however we may choose to define it — even our most advanced engineers are still huddled in caves, rubbing sticks together to make a fire and cook some woolly mammoth steaks.

At a panel last year, neuroscientist Christof Koch squared off with David Chalmers, a cognitive scientist, over what it means to be conscious. The conversation bounced between speculative thought experiments regarding machines and zombies (defined as those who act indistinguishably from people but lack an internal mind). It frequently veered away from things that can be conclusively proven with scientific evidence. Chalmers argued that a machine, one more advanced than we have today, could become conscious, but Koch disagreed, based on the current state of neuroscience and artificial intelligence technology.

Neuroscience literature considers consciousness a narrative constructed by our brains that incorporates our senses, how we perceive the world, and our actions. But even within that definition, neuroscientists struggle to define why we are conscious and how best to define it in terms of neural activity. And for the religious, is this consciousness the same as that which would be granted by having a soul? And this doesn’t even approach the subject of technology.

“AI people are routinely confusing soul with mind or, more specifically, with the capacity to produce complicated patterns of behavior,” Ondřej Beran, a philosopher and ethicist at University of Pardubice, told Futurism.

“The role that the concept of soul plays in our culture is intertwined with contexts in which we say that someone’s soul is noble or depraved,” Beran added — that is, it comes with a value judgment. “[In] my opinion what is needed is not a breakthrough in AI science or engineering, but rather a general conceptual shift. A shift in the sensitivities and the imagination with which people use their language in relating to each other.”

Beran gave the example of works of art generated by artificial intelligence. Often, these works are presented for fun. But when we call something that an algorithm creates “art,” we often fail to consider whether the algorithm has merely generated some sort of image or melody, or created something that is meaningful — not just to an audience, but to itself. Of course, human-created art often fails to reach that second bar as well. “It is very unclear what it would mean at all that something has significance for an artificial intelligence,” Beran added.

So would a machine achieve sentience when it is able to internally ponder rather than mindlessly churn inputs and outputs? Or would it truly need that internal something before we as a society consider machines to be conscious? Again, the answer is muddled by the way we choose to approach the question and the specific definitions at which we arrive.

“I believe that a soul is not something like a substance,” Vladimir Havlík, a philosopher at the Czech Academy of Sciences who has sought to define AI from an evolutionary perspective, told Futurism. “We can say that it is something like a coherent identity, which is constituted permanently during the flow of time and what represents a man,” he added.

Havlík suggested that rather than worrying about the theological aspect of a soul, we could define a soul as a sort of internal character that stands the test of time. And in that sense, he sees no reason why a machine or artificial intelligence system couldn’t develop a character — it just depends on the algorithm itself. In Havlík’s view, character emerges from consciousness, so the AI systems that develop such a character would need to be based on sufficiently advanced technology that they can make and reflect on decisions in a way that compares past outcomes with future expectations, much like how humans learn about the world.

But the question of whether we can build a souled or conscious machine only matters to those who consider such distinctions important. At its core, artificial intelligence is a tool. Even more sophisticated algorithms that may skirt the line and present as conscious entities are recreations of conscious beings, not a new species of thinking, self-aware creatures.

“My approach to AI is essentially pragmatic,” Peter Vamplew, an engineer at Federation University, told Futurism. “To me it doesn’t matter whether an AI system has real intelligence, or real emotions and empathy. All that matters is that it behaves in a manner that makes it beneficial to human society.”

To Vamplew, the question of whether a machine can have a soul or not is only meaningful when you believe in souls as a concept. He does not, so it is not. He feels that machines may someday be able to recreate convincing emotional responses and act as though they are human but sees no reason to introduce theology into the mix.

And he’s not the only one who feels true consciousness is impossible in machines. “I am very critical of the idea of artificial consciousness,” Bernardo Kastrup, a philosopher and AI researcher, told Futurism. “I think it’s nonsense. Artificial intelligence, on the other hand, is the future.”

Kastrup recently wrote an article for Scientific American in which he lays out his argument that consciousness is a fundamental aspect of the natural universe, and that people tap into dissociated fragments of consciousness to become distinct individuals. He clarified that he believes that even a general AI — the name given to the sort of all-encompassing AI that we see in science fiction — may someday come to be, but that even such an AI system could never have private, conscious inner thoughts as humans do.

“Siri, unfortunately, is ridiculous at best. And, what’s more important, we still relate to her as such,” said Beran.

Even more unfortunate, there’s a growing suspicion that our approach to developing advanced artificial intelligence could soon hit a wall. An article published last week in The New York Times cited multiple engineers who are growing increasingly skeptical that our machine learning, and even deep learning, technologies will continue to advance as they have in recent years.

I hate to be a stick in the mud. I truly do. But even if we solve the semantic debate over what it means to be conscious, to be sentient, to have a soul, we may forever lack the technology that would bring an algorithm to that point.

But when artificial intelligence first started, no one could have predicted the things it can do today. Sure, people imagined robot helpers à la the Jetsons or advanced transportation à la Epcot, but they didn’t know the tangible steps that would get us there. And today, we don’t know the tangible steps that will get us to machines that are emotionally intelligent, sensitive, thoughtful, and genuinely introspective.

By no means does that render the task impossible — we just don’t know how to get there yet. And the fact that we haven’t settled the debate over where to actually place the finish line makes it all the more difficult.

“We still have a long way to go,” says Fulda. She suggests that the answer won’t come from piecing together algorithms, as we often do to solve complex problems with artificial intelligence.

“You can’t solve one piece of humanity at a time,” Fulda says. “It’s a gestalt experience.” For example, she argues that we can’t understand cognition without understanding perception and locomotion. We can’t accurately model speech without knowing how to model empathy and social awareness. Trying to put these pieces together in a machine one at a time, Fulda says, is like recreating the Mona Lisa “by dumping the right amounts of paint into a can.”

Whether or not the masterpiece is out there, waiting to be painted, remains to be determined. But if it is, researchers like Fulda are vying to be the one to brush the strokes. Technology will march onward, so long as we continue to seek answers to questions like these. But as we compose new code that will make machines do things tomorrow that we couldn’t imagine yesterday, we still need to sort out where we want it all to lead.

Will we be da Vinci, painting a self-amused woman who will be admired for centuries, or will we be Uranus, creating gods who will overthrow us? Right now, AI will do exactly what we tell AI to do, for better or worse. But if we move towards algorithms that begin to, at the very least, present as sentient, we must figure out what that means.

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Artificial intelligence’s greatest contribution may be in health care, and China is leading the way

Andy Chun says China’s early investments in hi-tech medical services are likely to pay off in meeting the health care needs of its massive population, especially as society ages

The greatest contribution that artificial intelligence could make to humanity might be in health care. According to the consultancy firm Frost & Sullivan, AI has the potential to improve medical treatment outcomes by 30-40 per cent and reduce costs by as much as 50 per cent.

This is particularly important for China, with its population of 1.4 billion people. Medical services can be scarce in China’s rural areas while, in urban areas, services are highly strained due to the sheer volume of patients. According to the latest data from the Organisation for Economic Co-operation and Development, China has 1.8 practising doctors per 1,000 people, compared with 2.56 for the United States and 5.1 for Australia.

Adding further stress to China’s health care system is its ageing population. According to the United Nations, China is ageing more rapidly than almost any country in the world, due mainly to its previous one-child policy. By 2050, China’s population over 65 will reach around 330 million, roughly the current total population of the US.

China is not alone in its search for better health through AI. A recent study by Accenture predicts that the AI health care market in the US will reach US$6.6 billion by 2021, up from US$600 million in 2014. However, China already has a smart health care strategy, an integral part of its overall AI strategic plan released in July 2017.

The plan calls for the development of a whole gamut of AI-related health care technologies, such as intelligent diagnosis, wearables, AI health monitoring, robot-assisted surgery, intelligent medical image recognition and medical genomics, with a strong emphasis on elderly care.

Last year, the China Food and Drug Administration included AI diagnostic tools on its list of permitted medical devices. In May, China established a national Chinese Intelligent Medicine Association as a platform for research, exchange and cooperation in AI for health care.

I believe there are three key areas where AI can make a difference: deep learning to analyse medical images, cognitive computing to capture and apply medical knowledge, and AI analytics to provide continuous health monitoring.

The medical profession is particularly well suited to the use of AI. Medical doctors rely greatly on perceptual senses, like vision and hearing, to gather information about patient health. Artificial neural network approaches such as deep-learning are ideal for exactly this type of work.

For example, Google is experimenting with deep-learning in retinal images to provide early detection of diabetic retinopathy with accuracy on a par with experts. In China, researchers have used AI on eye scans to diagnose congenital cataracts as accurately as human doctors.

Using AI deep-learning to process medical images, such as CT scans and X-rays, is particularly hot among China’s start-ups. Radiology departments at top Chinese hospitals routinely handle tens of thousands of scanned images per day. AI deep-learning is already used to analyse and highlight abnormalities. Among big players, Alibaba’s health unit uses AI to interpret CT scans, and Tencent’s Miying uses AI to detect early signs of cancer.

The other important skill in medicine is the ability to learn, recall and apply vast amounts of medical textbook knowledge, and to keep up to date with the newest medical research, journals and pharmaceutical products. AI can use natural language processing and machine learning to read and understand millions of online documents, as well as millions of data points, to help diagnose and recommend treatment.
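
As a toy illustration of that document-reading step, the sketch below trains a TF-IDF text classifier with scikit-learn to triage abstracts by topic, so a clinician sees only the relevant ones. The tiny training set is invented; real systems learn from millions of documents.

```python
# Toy literature-triage classifier. The training abstracts and labels
# are invented; real systems train on millions of documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "EGFR inhibitor shows response in lung adenocarcinoma",
    "New checkpoint blockade trial in melanoma patients",
    "Hospital scheduling improved by process redesign",
    "Statin adherence and cardiovascular outcomes",
]
labels = ["oncology", "oncology", "operations", "cardiology"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(abstracts, labels)
print(clf.predict(["phase II trial of PD-1 inhibitor in lung cancer"]))
# likely -> ['oncology']
```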

Researchers in China are also using AI to capture general medical knowledge. For example, iFlyTek and Tsinghua University successfully created an AI system that not only passed last year’s Chinese medical licensing exam but also scored better than 96 per cent of exam takers. The exam tested not only breadth of knowledge, but also the ability to understand intricate connections between facts and use them to make decisions.

Thanks to more affordable health care wearables that track activities and heart rate, consumers are taking responsibility for monitoring their own health. According to Tractica’s forecast, annual wearable device shipments will increase from 118 million units in 2016 to 430 million units by 2022.

This increased use of wearables means a lot of daily health data will be available online. Big data and AI predictive analytics can continuously monitor this data and alert users to abnormalities before the onset of more serious medical problems.
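
A minimal sketch of such continuous monitoring: compare each new wearable reading against the user’s own recent baseline and flag large deviations. The window size, threshold and values below are illustrative assumptions, not any vendor’s algorithm.

```python
# Toy wearable monitor: flag readings far from the user's own
# recent baseline. Window, threshold and data are invented.
import pandas as pd

hr = pd.Series([62, 64, 61, 63, 65, 62, 99, 64, 63], name="resting_hr")

# Baseline and spread from *previous* readings only, so an outlier
# cannot mask itself inside its own window.
baseline = hr.shift(1).rolling(window=5, min_periods=3).mean()
spread = hr.shift(1).rolling(window=5, min_periods=3).std()

alerts = (hr - baseline).abs() > 3 * spread
print(hr[alerts])  # flags the 99 bpm reading for follow-up
```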

Insurance companies, such as China’s Ping An Health, are starting to integrate wearables into their offerings. For example, customers who live healthier lifestyles get points and rewards. Wearables provide insurance companies with vast amounts of highly valuable customer biometric data, which can be used with AI to offer continuous monitoring and health care advice and also provide discounts to those with healthier lifestyles.

AI relies greatly on data for machine learning and predictive analytics, and China has no shortage, with its population generating massive amounts of real-time medical data. The Chinese population is eager to use technology and adopt AI. China is also unique in its approach to health care that leverages both Western and traditional Chinese medicine.

With increased use of AI, combined with readily available medical and biometric data, China is on its way to providing quality personalised health care to more people at a lower cost, while keeping people healthier through continuous monitoring and alerts. With fewer people getting sick, the workloads for hospitals and medical staff will be reduced. A healthy nation is a wealthy nation, as the saying goes.

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Marks & Spencer teams with Microsoft to bring artificial intelligence into stores

Marks & Spencer is turning to artificial intelligence to sharpen its appeal to customers through a "game-changing" partnership with Microsoft.

The retailer is working with AI experts at the US tech giant to explore how the technology could improve the shopping experience on the high street.

M&S was tight-lipped on how AI might be applied, but chief executive Steve Rowe said it may prove a “game changer” for the UK retail industry.

Microsoft is reportedly working on technology that would eliminate cashiers and checkout lines from stores in a challenge to Amazon’s automated shop.

Mr Rowe said: “M&S is transforming into a digital first retailer, at a time when the sector is undergoing a customer-led revolution.

“We want to be at the forefront of driving value into the customer experience using the power of technology.”

The move is part of a major reboot under Mr Rowe designed to revive the company’s flagging fortunes.

M&S announced a five-year transformation plan last year aimed at “restoring the basics” and transforming its culture towards a faster, leaner and more digitally-focused business.

Mr Rowe previously criticised the retailer for having “cumbersome” and “bureaucratic” structures, and a store estate which is not “fit for the future”.

However, the overhaul has proved painful for the business. M&S was condemned to its second straight year of falling profits in May as it racked up a huge bill for store closures.

Pre-tax profits plunged 62.1pc to £66.8m for the year ending March 2018, largely owing to £321.1m of costs linked to shutting underperforming shops.

M&S is joining forces with Microsoft amid a flurry of experimentation on the high street, as retailers look to find new ways to coax customers into stores following the rise of online shopping.

Waitrose plans to offer in-store health checks by teaming up with private healthcare company Bupa.

Game Digital is trying to step up its financial performance by pushing into the eSports market, creating in-store gaming zones where customers play each other for a fee.

Get your passes for the AI Congress here: https://theaicongress.com/bookyourtickets/

Microsoft acquires AI startup to fuel artificial intelligence capabilities


SAN FRANCISCO: Microsoft announced on Wednesday that it has signed an agreement to acquire Bonsai, an artificial intelligence (AI) startup, to boost its AI and machine learning capabilities.

Microsoft said its acquisition of the small startup is "another major step forward in our vision to make it easier for developers and subject matter experts to build the 'brains' -- machine learning models -- for autonomous systems of all kinds."

In its official blog, Microsoft said Bonsai has developed technology that will let experts with no AI experience work with autonomous systems, reports Xinhua news agency.

"The company is building a general-purpose, deep reinforcement learning platform especially suited for enterprises leveraging industrial control systems such as robotics, energy, HVAC, manufacturing and autonomous systems in general," said the tech giant. 

Bonsai's platform, combined with rich simulation tools and reinforcement learning work from Microsoft Research, will be composed with Azure Machine Learning, running on the Azure cloud with GPUs and Brainwave, it added.

Based in Berkeley, California, Bonsai was founded in 2014 and has around 42 employees. 

Bonsai said on its official website that it is building "the world's first deep reinforcement learning platform that empowers enterprises to build intelligence into real-world systems." 

It claims to have a team that "brings deep experience in machine learning and developer tools from the likes of Microsoft, Uber, Google and Apple." 

Bonsai CEO Mark Hammond worked for Microsoft as an engineer in the late 1990s and early 2000s. 

Microsoft bought another two small AI startup companies, SwiftKey and Maluuba, in 2016 and early 2017. 
 

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

How Artificial Intelligence Can Transform Investing

Artificial intelligence is moving at a rapid pace and will soon be able to identify why an event occurred - this has profound implications for how we invest


Artificial intelligence's potential to help investors and traders was one of the most hotly debated topics at the recent Morningstar Investment Conference in Chicago.

BlackRock managing director and panellist Kevin Franklin kicked off proceedings with a quick Google search on the question: what is AI? The answer given by Google? "When a machine achieves the level of intelligence of a human, then you can call it AI."

While AI is not quite there yet, speakers were excited about AI's future potential to help investors and professionals. One example is the ability of AI to perform "word mining" through text data sets, like earnings call transcripts.

Today, AI is capable of reading through an earnings call and learning to differentiate what is "bullish" from what is "bearish", and that knowledge grows over time. Machines can now identify which word patterns may or may not lead analysts to make rating upgrades or downgrades.
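A minimal sketch of this kind of "word mining", assuming a handful of hand-labelled snippets standing in for earnings-call sentences; a production system would train on far more text, but the pipeline shape is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled snippets standing in for earnings-call sentences.
sentences = [
    "revenue grew ahead of guidance and margins expanded",
    "we are raising our full-year outlook",
    "demand weakened and we are cutting our forecast",
    "one-time charges drove a larger than expected loss",
]
labels = ["bullish", "bullish", "bearish", "bearish"]

# Turn text into word/phrase frequencies, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

print(model.predict(["we expect margin expansion to continue"]))  # likely 'bullish'
```

With more labelled transcripts, the same pipeline could, in principle, learn which phrases tend to precede analyst upgrades or downgrades.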

The next frontier is what some experts call "strong AI": artificial intelligence that not only extracts patterns in data but can also be creative. Strong AI, in theory, should be able to ask, fundamentally, "why did this happen?" and draw conclusions.

Another example of AI helping investing is in trading. As companies seek to build and expand their competitive edge, they will be able to use AI to interpret massive amounts of trading data for every security in the world. With this development, firms can understand who bought and who sold which security, which allows them, for example, to steer their trading strategies away from crowded positions in the marketplace.

Computers Have Their Limits

Morningstar columnist Jon Rekenthaler looked at the use of AI in a recent article about chess and how computers learn.

"Hedge funds have long used artificial intelligence, with their short-term trades. However, neither they nor mutual funds have used machine learning to such an extent for their intermediate- to long-term trades. The vast majority of active decisions continue to be made either by humans, or by programs that obey human instructions."

And Rekenthaler is cautious about how far AI can fully replace human expertise in the financial markets.

"The artificial intelligence program cannot learn by playing itself – at least, not to the extent that it did when teaching itself chess. With a constrained, rules-based contest such as chess, a computer can glean as much insight from a hypothetical match, played against itself, as it can from evaluating an actual contest.

"Not so for the financial markets. It is difficult for a computer program to postulate how all other market participants might behave, so that it can devise a successful counter-strategy."

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

IBM shows off an artificial intelligence that can debate a human and change some minds

IBM Research's experimental artificial intelligence system, Project Debater, with debater Dan Zafrir.
IBM/handout

SAN FRANCISCO — IBM computers famously won at Jeopardy! and beat world-class chess masters. Now, they're taking on human debaters.

At a media gathering here Monday afternoon, a black, artificial intelligence-infused IBM computer with a screen for a face more than held its own against seasoned human debaters.

In one debate face-off, IBM's "Project Debater" AI computer made the case in favor of the government subsidizing space exploration against Israeli debate champion Noa Ovadia, who took the opposite position.

Ovadia was judged the winner by the crowd of journalists in "delivering" the argument—the computer's attempts at humor didn't measure up to the personality of a human — but IBM handily outscored Ovadia on the question of  "knowledge enrichment." 

IBM's computer fared better in a second debate, persuading the crowd that telemedicine is worth pursuing against another human debater, Dan Zafrir. Again, the human prevailed on delivery, but this time only by a slim margin, and the computer was a big winner on knowledge enrichment. At least nine audience members changed their minds on the topic to the computer's point of view.

Noa Ovadia prepares for her debate against the IBM Project Debater Monday, June 18, 2018, in San Francisco.
AP Photo/Eric Risberg

The debaters, both human and computer, were not told the topics in advance. Each side had four minutes to make an opening statement, followed by a four-minute rebuttal and a two-minute closing summary. The computer went first each time.

The San Francisco event was the first time anyone outside the company was able to witness a live IBM debate between a human and its AI system. But IBM researchers have been conducting debates in the lab for a while, on such topics as "should income taxes exist?", "will autonomous cars help safety?" and "should antibiotics be used in our food supply?"

Through the IBM Cloud, the computer scanned billions of sentences to generate a coherent and persuasive position on each topic. The machine then listens to its opponent’s speech and generates what IBM claims is a spontaneous, compelling rebuttal, exhibiting a type of argumentation that until recently was simply out of reach for machines.

“We believe that mastering language is a fundamental frontier that AI has to cross,” says IBM Research director Arvind Krishna. “There’s aspects like speech recognition, speech to text, that AI already does and does quite well. But that is not the same as listening comprehension or constructing a speech that can either be spoken or written or understanding the nuances of claims, meaning what supports a proposition or what may be against a proposition." 

Tech's biggest companies, including IBM, Google, Apple, Microsoft and Facebook, are engaged in a high-stakes race for AI supremacy.

But the ability for a computer to not only persuasively compete in a debate against a live person, but to actually win the argument, is only likely to feed into fears expressed by Tesla and SpaceX CEO Elon Musk and the late cosmologist Stephen Hawking that artificial intelligence could spell doom for human civilization. 

Giving a physical shape to those fears, researchers at MIT used AI to create a psychopathic persona called Norman, named for the creepy character in Alfred Hitchcock’s classic thriller "Psycho," by training it on disturbing image captions found on Reddit.

“I take it in a different way,” Krishna says of AI. “The sheer rate and pace of technology today has made a huge amount of information thrown at us from all kinds of sources….Is there something that I trust that can give me both sides of a position?...Here are the five pros, well-written and here are the five cons, well-written. It lets me form my own opinion.”

Krishna says the computer debater has made great progress over the last couple of years. Two years ago, its debating points were all over the map: the computer could make one or two really brilliant statements and five or ten inane ones. By the end of last year, it began to hold its own, he says.

One key factor is not just the persuasive arguments that the machine may make, but how those points are delivered. IBM used a New York actress as the voice of the computer.

“Just like in real debates, humor has to also play a role, not just a well-crafted logical argument,” Krishna says. The computer “will never do so well as when the human debater can bring in a personal anecdote or personal experience. It doesn’t know how to react to that today.”

Project Debater's idea of a joke: my blood would boil if I had blood.

So what are possible real-life use cases for computers that can debate? Krishna mentions legislators debating critical issues, or lawyers preparing a brief. Students or business executives might also make use of AI debating to help inform an opinion.

Project Debater earned a fan in debater Ovadia. "I’m blown away," she said.  "The technology is so impressive in terms of how many really human cognitive capabilities it’s able to do simultaneously."

Get your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Meet the Artificial Intelligence System That Can Predict the Future


A team of researchers has developed an artificial intelligence system that can allegedly peer into the future.

The artificial intelligence system was developed by researchers from the University of Bonn in Germany. According to reports, it can accurately predict what a person will do over the next few minutes. That is, provided both the recent past and the next few minutes involve cooking.

According to the study, which will be presented at the IEEE Conference on Computer Vision and Pattern Recognition later this month, the researchers trained their machine learning algorithm on hours of video of people cooking breakfast foods and preparing salads. As a result, the AI system can predict the next steps of a salad recipe when it watches a person begin one.

“Accuracy was over 40 percent for short forecast periods, but then dropped the more the algorithm had to look into the future,” Dr. Jürgen Gall, one of the researchers, said. “We want to predict the timing and duration of activities – minutes or even hours before they happen.”

With their AI system, the Bonn researchers hope to improve smart home devices by giving them the capability to recognize what people are doing and what they will do next. For instance, smart speakers could one day remind us if we missed a recipe step while cooking, or tell us to adjust the stove heat while putting ingredients in the casserole.

Nevertheless, this latest development doesn’t mean that the researchers have given their machine the ability to predict people’s intentions like a conscious being. It can only anticipate the next steps in a process, which at the moment is limited to making breakfast and salads, based on what the person has done in the past few minutes.
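The simplest way to see how observed actions can drive next-step prediction is a transition table over action sequences. The sketch below uses a first-order Markov model with invented action labels; the Bonn system works from far richer video features, so treat this purely as an illustration of the idea.

```python
from collections import Counter, defaultdict

# Toy action sequences standing in for annotated cooking videos.
sequences = [
    ["cut_lettuce", "wash_lettuce", "add_dressing", "mix"],
    ["cut_lettuce", "add_tomato", "add_dressing", "mix"],
    ["cut_lettuce", "wash_lettuce", "add_tomato", "mix"],
]

# Count how often each action follows each other action.
transitions = defaultdict(Counter)
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        transitions[current][nxt] += 1

def predict_next(action):
    """Return the most likely next action given the one just observed."""
    followers = transitions[action]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("cut_lettuce"))  # -> 'wash_lettuce'
```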

As cool as a psychic robot would be, we may have to settle for a breakfast-based psychic robot for now. Although, having Alexa let me know when I’m burning my eggs is a start.

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Will artificial intelligence bring a new renaissance?

Society needs to seriously rethink AI's potential, its impact to both our society and the way we live


Artificial intelligence is becoming the fastest disruptor and generator of wealth in history. It will have a major impact on everything. Over the next decade, more than half of the jobs today will disappear and be replaced by AI and the next generation of robotics.

AI has the potential to cure diseases, enable smarter cities, tackle many of our environmental challenges, and potentially redefine poverty. There are still many questions to ask about AI and what can go wrong. Elon Musk recently suggested that under some scenarios AI could jeopardise human survival. 

AI's capacity to analyse data quickly and accurately is enormous. This will enable the development of smarter machines for business.

But at what cost, and how will we control it? Society needs to seriously rethink AI's potential and its impact on both our society and the way we live.

Artificial intelligence and robotics were initially thought to be a danger to blue-collar jobs, but that is changing: white-collar workers, such as lawyers and doctors, who carry out purely quantitative analytical processes are also becoming an endangered species. Some of their methods and procedures are increasingly being replicated and replaced by software.

For instance, researchers at MIT's Computer Science and Artificial Intelligence Laboratory, Massachusetts General Hospital and Harvard Medical School developed a machine learning model to better detect cancer.

They trained the model on 600 existing high-risk lesions, incorporating parameters such as family history, demographics, and past biopsies. It was then tested on 335 lesions, and they found it could predict the status of a lesion with 97 per cent accuracy, ultimately enabling the researchers to identify which lesions would be upgraded to cancer.

Traditional mammograms uncover suspicious lesions, whose status is then tested with a needle biopsy. Abnormalities would then undergo surgery, with around 90 per cent turning out to be benign, rendering the procedures unnecessary. As the amount of data and the number of potential variables grow, human clinicians cannot compete at the same level as AI.
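A minimal sketch of this kind of lesion classifier. The 600-lesion training set and 335-lesion test set mirror the numbers above, but the features here are synthetic stand-ins (the real model used parameters such as family history, demographics and past biopsies), so the accuracy printed is illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 935 high-risk lesions: each row is one patient with
# hypothetical features (age, family-history flag, prior-biopsy count, ...).
rng = np.random.default_rng(42)
X = rng.normal(size=(935, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=935)) > 1  # True = upgraded to cancer

# 600 lesions for training, 335 held out for testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=335, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```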

So will AI take the clinician's job, or will it just provide a better diagnostic tool, freeing up clinicians to build a better connection with their patients?

Confusion around the various terminologies relating to AI can warp the conversation. Artificial general intelligence (AGI) is where machines can successfully perform any intellectual task that a human can do - sometimes referred to as “strong AI”, or “full AI”. That is where a machine can perform “general intelligent actions”.

Max Tegmark, in his recent book Life 3.0, describes AI as a machine or computer that displays intelligence, in contrast with the natural intelligence that you, I and other animals display. AI research is the study of intelligent agents: devices that sense their environment and take actions to maximise their chances of success.

Tegmark refers to Life 3.0 as a representation of our current stage of evolution. Life 1.0 refers to our biological origins, or our hardware, which has been controlled by the process of evolution.

Life 2.0 is the cultural development of humanity. This refers to our software, which drives us and our minds. Education and knowledge have been major influences on this stage of our journey, constantly being updated and upgraded. These versions of Life are shaped by survival of the fittest, education and time.

Life 3.0 is the technological age of humanity. We have effectively reached the point where we can upgrade our own hardware and software - not to the level of the movies, which may be possible in the future but is a while away. These upgrades have all come from our use of technology, advanced materials and drugs that improve our bodies.

The first renaissance

This was a period between the 14th and 17th centuries. The Renaissance encompassed an innovative flowering of Latin and vernacular literatures, beginning with a resurgence of learning based on classical sources. Various theories have been proposed to account for its origins and characteristics, focusing on factors including the social and civic peculiarities of the time.

Renaissance literally means ‘rebirth’: a cultural movement that profoundly affected European intellectual life. It was a time of exploration and of many changes in society; people were able to ask questions and explore the answers.

A ‘Renaissance man’ was a person skilled in multiple disciplines, someone with a broad base of knowledge who pursued multiple fields of study. A good example from the period is Leonardo da Vinci, a master of art, engineering and anatomy, as well as many other disciplines, all with remarkable success.

Einstein was a genius of theoretical physics, but he was not necessarily a Renaissance man. In the past, university students were encouraged to study the liberal arts, the idea being to give a more rounded education.

Not that many of these students were polymaths; the point was that a broad-based education would lead to a more developed mind. As Daniel Pink suggests in A Whole New Mind, the Master of Fine Arts will become the MBA of the future.

The new renaissance

AI is going to free us from many arduous duties in what we do for work. Businesses that embrace these changes will grow; others will go. Robotics and AI are starting to have major social and cultural impacts.

We are seeing more protests against technology, with people becoming activists. The inequality between pay and work is hurting many people. Taxi drivers are affected by Uber, hotels by Airbnb, and many more; the rules have changed, and many are not happy. The situation closely parallels the cottage industries of the industrial age, whose disruption brought the rise of the Luddites, named after the likely mythical Ned Ludd.

The plight of disenfranchised workers faced with innovation, industrial-scale change and the destruction of their industry rings as true today as it did for the Luddites in the industrial age.

“Recently, the term neo-Luddism has emerged to describe opposition to many forms of technology. According to a manifesto drawn up by the Second Luddite Congress in 1996, neo-Luddism is “a leaderless movement of passive resistance to consumerism and the increasingly bizarre and frightening technologies of the Computer Age.” (Wikipedia)

We need to take this time as an opportunity to create a new Renaissance period, enabling more of us to become ‘Renaissance people’ and use our creative and innovative traits. Innovation is what business wants but computers struggle to master.

Jobs of the future will come from this aspect of humanity, but if we are not paying attention and ignore the situation, the neo-Luddites may have a point, potentially creating a situation comparable to when the Luddites began breaking the industrial looms.

Machine-breaking was criminalised in 1721, and the Frame Breaking Act of 1812 made it punishable by death. That is not to say we will get that far, but some are already building their camps and weaponising themselves for just that eventuality.

So, what can we do?

We need to talk about AI and the future. We need to realise that the impacts are imminent and that we need to plan. Jobs are changing and will keep changing, so you need to prepare. Innovation is a top priority for many organisations.

It can no longer be left to the realm of the geeks and techies. We all need to be more innovative and creative; innovation must increase exponentially and become a core competency. It is a matter of a change in mindset, and of developing the right environment and circumstances.

We need to ask more questions to find the right answers. This is an important skill that many have forgotten or lost. We can find many answers on Google but, without the right question, they are worthless.

We need to explore the process of doing just that, asking the right question to achieve the right outcomes.

Get ready for AI and the future because the future is NOW!

Get ready by attending the AI Congress here: https://theaicongress.com/bookyourtickets/

Google won't use artificial intelligence for weapons

PHOTO: REUTERS

[SAN FRANCISCO] Google announced Thursday it would not use artificial intelligence for weapons or to "cause or directly facilitate injury to people," as it unveiled a set of principles for these technologies.

Chief executive Sundar Pichai, in a blog post outlining the company's artificial intelligence policies, noted that even though Google won't use AI for weapons, "we will continue our work with governments and the military in many other areas" including cybersecurity, training, and search and rescue.

The news comes with Google facing pressure from employees and others over a contract with the US military, which the California tech giant said last week would not be renewed.

Mr Pichai set out seven principles for Google's application of artificial intelligence, or advanced computing that can simulate intelligent human behaviour.

He said Google is using AI "to help people tackle urgent problems" such as prediction of wildfires, helping farmers, diagnosing disease or preventing blindness.

"We recognize that such powerful technology raises equally powerful questions about its use," Mr Pichai said in the blog.

"How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."

The chief executive said Google's AI programs would be designed for applications that are "socially beneficial" and "avoid creating or reinforcing unfair bias."

He said the principles also called for AI applications to be "built and tested for safety," to be "accountable to people" and to "incorporate privacy design principles."

Google will avoid the use of any technologies "that cause or are likely to cause overall harm," Mr Pichai wrote.

That means steering clear of "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" and systems "that gather or use information for surveillance violating internationally accepted norms."

The move comes amid growing concerns that automated or robotic systems could be misused and spin out of control, leading to chaos.

Several technology firms have already agreed to the general principles of using artificial intelligence for good, but Google appeared to offer a more precise set of standards.

The company, which is already a member of the Partnership on Artificial Intelligence including dozens of tech firms committed to AI principles, had faced criticism for the contract with the Pentagon on Project Maven, which uses machine learning and engineering talent to distinguish people and objects in drone videos.

Faced with a petition signed by thousands of employees and criticism outside the company, Google indicated the US$10 million contract would not be renewed, according to media reports.

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

APPLE'S PLANS TO BRING ARTIFICIAL INTELLIGENCE TO YOUR PHONE

An attendee takes a photograph with an iPhone at Apple's Worldwide Developers Conference on Monday.
DAVID PAUL MORRIS/BLOOMBERG/GETTY IMAGES

APPLE DESCRIBES ITS mobile devices as designed in California and assembled in China. You could also say they were made by the App Store, launched a decade ago next month, a year after the first iPhone.

Inviting outsiders to craft useful, entertaining, or even puerile extensions to the iPhone’s capabilities transformed the device into the era-defining franchise that enabled Uber and Snapchat. Craig Federighi, Apple’s head of software, is tasked with keeping that wellspring of new ideas flowing. One of his main strategies is to get more app developers to use artificial intelligence tools such as recognizing objects in front of an iPhone’s camera. The hope is that will spawn a new generation of ideas from Apple’s ecosystem of outsourced innovation.

“We have such a vibrant community of developers,” Federighi says. “We saw that if we could give them a big leg up toward incorporating machine learning into their apps they would do some really interesting things.”

He illustrates the point with a demo of an iPad app for basketball coaches called HomeCourt. You don’t have to be a pro; using the app is as easy as pointing an iPad’s camera at action on the court. Then the tricky stuff happens automatically. HomeCourt uses the support for machine learning added to Apple’s mobile operating system last year to analyze the video. The app tracks each time a player shoots, scores, or misses, and logs the shooter’s location on the court. Each event is indexed so a particular play can later be viewed with a single tap.

HomeCourt is built on tools announced by Federighi last summer, when he launched Apple’s bid to become a preferred playground for AI-curious developers. Known as Core ML, those tools help developers who’ve trained machine learning algorithms deploy them on Apple’s mobile devices and PCs.

At Apple’s Worldwide Developer Conference on Monday, Federighi revealed the next phase of his plan to enliven the app store with AI. It’s a tool called Create ML that’s something like a set of training wheels for building machine learning models in the first place. In a demo, training an image-recognition algorithm to distinguish different flavors of ice cream was as easy as dragging and dropping a folder containing a few dozen images and waiting a few seconds. In a session for developers, Apple engineers suggested Create ML could teach software to detect whether online comments are happy or angry, or predict the quality of wine from characteristics such as acidity and sugar content. Developers can use Create ML now but can’t ship apps using the technology until Apple’s latest operating systems arrive later this year.

Apple is far from the first tech company to release software to help developers build machine learning models. Facebook, Amazon, Microsoft, and Google have all done so, with Google’s TensorFlow most popular. Federighi claims none easily fit into an app developer’s regular workflow, limiting machine learning’s potential. “We're really unleashing this capability for this vast developer community,” he says. Create ML is built on top of Apple’s Swift programming language, introduced in 2014 and popular in some developer circles for its ease of use.

Simplifying can bring limitations. Create ML looks useful, but creating complex or unique uses of machine learning requires building something from scratch, says Chris Nicholson, CEO of Skymind, which helps companies with machine learning projects. Predicting events over time, like what a customer will buy next, typically requires something bespoke, he says. “What will make apps stand out is a fully custom, proprietary model,” says Nicholson.

Create ML is also limited to Apple devices. WWDC attendee Wolfram Kerl, CTO of startup Smartpatient, would like to make his company’s medication-tracking app capable of reading the labels on medicines. Apple doesn’t yet offer specific support for reading text from images, and Kerl is hopeful that may change. But he’s also watching Google’s recently launched machine-learning tools for mobile developers, ML Kit. It supports text recognition, and Kerl’s app also has to work on Android. “Google tends to make things work on both platforms,” he says.

Apple says its tools are restricted to its own devices to get the best performance out of its carefully integrated software and hardware. Last year, the company added a “neural engine” to the iPhone’s processor to power machine learning software.

Federighi says Create ML has already proved that it’s ready to help companies improve their apps with machine learning. He points to Memrise, a startup with a popular language-learning app. With the help of Create ML the company added a feature that lets users point their phone at an object to learn its name in different languages. Running Create ML on a MacBook Pro to train the model with 20,000 images, instead of renting a cloud server with conventional software, shortened the process from a day to under an hour, Federighi says.

That speed boost comes from the way Create ML trains new models by adapting ones already built into Apple’s operating systems to power image recognition and other features in the company’s own apps. Re-training an existing algorithm is a standard trick in machine learning known as transfer learning, and can generate good results with less data. Create ML models can also be much smaller, something important for mobile developers, because they build on pre-existing models already on a device. Memrise’s conventional model was 90 megabytes in size; the one made with Create ML was just 3 megabytes.
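Create ML itself is a Swift framework, but the transfer-learning trick it relies on is language-agnostic. Here is a minimal sketch of the same idea in Python with Keras: reuse an ImageNet-trained feature extractor and train only a small new head. The three-flavour ice-cream task and the `train_ds` dataset are assumptions for illustration, not Apple's implementation.

```python
import tensorflow as tf

# Start from a network pre-trained on ImageNet and keep its feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg"
)
base.trainable = False  # reuse the existing features; train only the new head

# New classification head for, say, three ice-cream flavours.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# `train_ds` would be a tf.data.Dataset of (image, label) pairs built from a
# folder of example images, e.g. via tf.keras.utils.image_dataset_from_directory.
# model.fit(train_ds, epochs=5)
```

Because only the small head is trained and the shared base already lives on the device, the resulting model file can be tiny, which is the same effect behind the 90 MB-to-3 MB shrinkage Apple describes.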

Many developers at WWDC liked Federighi’s pitch. Nitish Mehta, a software engineer at Symantec, was planning to attend an in-depth session on Create ML on Tuesday afternoon. It ultimately attracted thousands, some of whom whooped while an Apple engineer coded a fruit detector live on stage.

Mehta has some experience using machine learning, but thinks Create ML could help him and many other developers make broader use of the technology. “If you make it easier, more people will do it,” he says.

Federighi believes that would inevitably change what Apple devices can offer their owners, although he won’t be drawn into predicting exactly how. “So much of the experience on our devices is what third parties end up creating as apps,” he says.

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Artificial Intelligence and Machine Learning in Medical Imaging

Please give an overview of the past research into machine learning and artificial intelligence in medical imaging. What are we currently able to do with this research?

The two major tasks in medical imaging that appear naturally predestined to be solved with AI algorithms are segmentation and classification. Most of the techniques used in medical imaging were conventional image processing or, more broadly formulated, computer vision algorithms.

One can find many works with artificial neural networks, the backbone of deep learning. However, most work focused, and still does, on “handcrafted” features: techniques designed manually to extract useful and differentiating information from medical images.

Some progress was visible in the late 1990s and early 2000s (for instance, the SIFT method in 1999, or visual dictionaries in the early 2000s), but there were no breakthroughs. However, techniques like clustering and classification were in use with moderate success.

K-means (an old clustering method), support vector machines (SVMs), probabilistic schemes, and decision trees and their extended version, ‘random forests’, were among the successful approaches. But artificial neural networks continued to fall short of expectations, not just in medical imaging but in computer vision in general.
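As a flavour of that pre-deep-learning pipeline, here is a minimal sketch: a handcrafted texture descriptor (a local binary pattern histogram, one of the features mentioned later in this interview) feeding an SVM. The two synthetic "tissue" textures are invented stand-ins for real histology patches.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, P=8, R=1):
    """Handcrafted texture descriptor: histogram of local binary patterns."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Synthetic stand-ins for two tissue textures: striped vs. speckled patches.
rng = np.random.default_rng(1)
stripes = np.tile((np.arange(64) % 8 < 4) * 200, (64, 1)).astype(np.uint8)
striped = [np.clip(stripes + rng.integers(0, 20, (64, 64)), 0, 255).astype(np.uint8)
           for _ in range(20)]
speckled = [rng.integers(0, 256, (64, 64)).astype(np.uint8) for _ in range(20)]

# Manually designed features in, shallow classifier out: no deep learning needed.
X = np.array([lbp_histogram(img) for img in striped + speckled])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # the two texture classes separate easily
```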

Shallow networks (consisting of a few layers of artificial neurons) could not solve difficult problems, and deep networks (consisting of many layers) could not be trained because they were too big. By the mid-2000s there was dramatic progress in this field, with the first major success stories arriving in the early 2010s on large datasets like ImageNet.

Suddenly it was possible to recognise cats and cars in an image, perform facial recognition, and automatically label images with captions describing their content. Investigation of the applications of these powerful AI methods in medical imaging started in the past three to four years and is in its infancy, but promising results have been reported here and there.

What applications are there for machine learning and artificial intelligence in medical imaging?

Based on recent publications, it seems that the focus of many researchers is on diagnosis, mainly cancer diagnosis, where the output of the AI software is often a “yes/no” decision for malignant/benign, respectively.

The other stream is working on segmenting (marking) specific parts of the images, again with the main attention of many works being on cancer diagnosis and analysis, but also for treatment planning and monitoring.

However, there is much more that AI can offer to medical imaging. Its potential for radiogenomics, auto-captioning of medical images, recognition of highly non-linear patterns in large datasets, and quantification and visualization of extremely complex image content are just some examples. We are at the very beginning of an exciting path with many bifurcations.

What are the current limitations in the characterization of tissues and their attributes with artificial intelligence? What needs to be done to overcome this?

AI is a large field with a multitude of techniques based on different ideas. Deep learning is just one of them, but it is the one with the most success in recognizing image content in recent years. However, deep learning faces multiple challenges in digital pathology.

First and foremost, it requires a large number of marked (labelled) images, in which the region of interest has been manually delineated by a pathologist, but the general workflow of digital pathology does not produce labelled images. This has led researchers to work on specific cases, e.g. breast cancer, for which a small number of labelled images can be provided to demonstrate the feasibility of deep learning.


Another major challenge for deep learning in digital pathology is the dimensionality of the problem. Pathology images are extremely large, often more than 50,000 by 50,000 pixels, whereas deep networks can only handle small input images, typically no larger than 300 by 300 pixels. Down-sampling images (making them smaller) would result in a loss of information.
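The standard workaround, sketched below, is to tile the huge slide into network-sized patches and keep each patch's coordinates so predictions can be stitched back together. This is a generic illustration, not Kimia Lab's specific pipeline.

```python
import numpy as np

def tile_image(slide, patch=300, stride=300):
    """Split a huge image into fixed-size patches a deep network can accept.

    slide: 2-D (or 3-D) array, e.g. 50,000 x 50,000 pixels.
    Yields (row, col, patch_array) so predictions can be mapped back.
    """
    h, w = slide.shape[:2]
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            yield r, c, slide[r:r + patch, c:c + patch]

# Small demo array standing in for a whole-slide image.
slide = np.zeros((900, 900))
patches = list(tile_image(slide))
print(len(patches))  # -> 9 patches of 300 x 300
```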

A further obstacle in training deep networks is that they generally perform well only if they are fed “balanced” data, that is, almost the same number of images for each category you need to recognize. Imbalanced data impedes generalization, which means the network may make grave mistakes after training.
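One common mitigation is to weight each class inversely to its frequency during training. The sketch below shows the idea with scikit-learn on toy data (95 "benign" examples to 5 "malignant"); the numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

# Imbalanced toy labels: 95 benign patches for every 5 malignant ones.
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 4))
y = np.array([0] * 95 + [1] * 5)

# Weight each class inversely to its frequency so rare cases still matter.
weights = compute_class_weight(class_weight="balanced", classes=np.array([0, 1]), y=y)
print(dict(zip([0, 1], weights)))  # e.g. {0: ~0.53, 1: ~10.0}

# Most scikit-learn classifiers accept the same reweighting directly.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
```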

A final problem worth mentioning is so-called “adversarial attacks”, in which someone with knowledge of the system, or exploiting the presence of artefacts and noise, could fool a deep network into a wrong decision. This is extraordinarily important in medical imaging; we cannot allow algorithms to be fooled when we are dealing with people’s lives.

Intensive research is being conducted on many fronts to find solutions to these and other challenges. Among others, one potential solution being worked on is “transfer learning”: learning in a different domain and transferring the knowledge into the medical domain.

Can we teach the AI with millions of labelled natural photos (e.g., cars, faces, animals, buildings) and then use the acquired knowledge on histopathology images? Other potential remedies are injecting domain knowledge into deep networks, training “generative” models that do not directly deal with classification, and combining deep solutions with conventional algorithms and handcrafted features.

How would the use of medical imaging interplay with other histopathological tests? Could they be replaced with a simple image search?

Definitely not. Image search would be a new facilitator that assists the pathologist and provides new insights. At present we may not have an accurate understanding of where image search would fit most usefully, but we know for sure that the pathologist must remain at the center of all processing.

The tasks that we assign to AI and computer vision will be highly specialized and customized; they naturally cannot render other existing (non-AI) technologies and other modes of testing useless. It’s all about complementing existing procedures with new insights, not replacing them; well, at least this should be the guiding attitude.

Please give an overview of your recent research to advance this field and the techniques that you have used.

At Kimia Lab, we have been working on a multitude of techniques, from deep networks to support vector machines, from local binary patterns to Radon transform, and from deep autoencoders to dimensionality reduction.

Our research philosophy is unconditionally pathologist-centric; we are there to design AI techniques that serve the pathology community. We are convinced that this is the right way of deploying AI, namely as a smart assistant to the pathologist and not a competitor.


We introduced a fundamental shift in our research and refrained from engaging in yes/no classification and instead are conducting many experiments to understand the polymorphic nature of tissue recognition before we attempt to design a final chain for the clinical workflow.

In addition, we have not lost our focus on non-AI computer vision for there are a lot of conventional methods that exhibited mediocre performance back in the day, but can now be rediscovered as partners to the powerful AI by relying on the faster computational platforms available.  

What advantages are there to the Radon transform that you used in your research?

This is one example of our efforts not to lose sight of well-established technologies. The Radon transform is an old technique that has enabled us, among other things, to do computed tomography.

Projections in small and large parts of the image can provide compressed information about tissue characteristics and where significant changes occur. They can serve as inputs to AI algorithms to provide additional information in a setting where multiple technologies work together.

The Radon transform is not only a mathematically sound technology but, in contrast to deep networks, an interpretable one. Why a specific image has been selected can be understood relatively easily when we acquire Radon projections, whereas the millions of multiplications and additions inside a network offer no plausible way of understanding why a specific decision was made.

However, we need deep architectures to learn. Hence, combining the old and the new is something we are heavily investing in.    
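For readers unfamiliar with the transform, the sketch below projects a toy image along a handful of angles and condenses the projections into a small feature vector, the kind of compressed, interpretable input described above. The image and angle count are arbitrary choices for illustration.

```python
import numpy as np
from skimage.transform import radon

# Toy "tissue" image: mostly uniform with one dense region.
image = np.zeros((128, 128))
image[40:60, 70:90] = 1.0

# Project the image along a set of angles; each column of the sinogram is
# one projection and summarises where mass lies along that direction.
angles = np.linspace(0.0, 180.0, 8, endpoint=False)
sinogram = radon(image, theta=angles)

# The projections form a compact, interpretable feature vector for AI models.
features = sinogram.mean(axis=0)
print(features.shape)  # one summary value per projection angle -> (8,)
```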

How can artificially intelligent search and categorization of medical images accelerate disease research and improve patient care?

If we abandon the classification-oriented AI (making yes/no decisions), which aims at eliminating the diagnostic role of the pathologist, then we are left with mining-oriented AI that identifies and extracts similar patterns from large archives of medical images.

Showing similar images to the pathologist when he or she is examining a new case is not in itself extraordinary, unless the retrieved cases are annotated with the information of conclusively diagnosed patients from the past.

Then we have something that has never been done before: we are tapping into the collective wisdom of the physicians themselves to provide them with computational consultation. Consulting other pathologists for difficult cases is a common practice.

However, image search will give us the ability to “computationally” consult hundreds of pathologists across the country (and the globe) through digital records. This will expedite the process, reduce error rates, save lives, free up valuable pathologist time for other tasks (e.g. research and education), and, finally, save costs.
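Mechanically, this kind of consultation is a nearest-neighbour search over an archive of feature vectors linked to confirmed diagnoses. The sketch below uses random vectors and invented labels purely to show the retrieval step.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Archive of past cases: one feature vector per image (e.g. deep features
# or Radon projections), each linked to a confirmed diagnosis.
rng = np.random.default_rng(3)
archive_features = rng.normal(size=(10_000, 128))
archive_diagnoses = rng.choice(["benign", "malignant"], size=10_000)

index = NearestNeighbors(n_neighbors=5).fit(archive_features)

# A new, undiagnosed case: retrieve the most similar past cases and show
# their verified diagnoses to the pathologist as "computational consultation".
query = rng.normal(size=(1, 128))
_, idx = index.kneighbors(query)
print(archive_diagnoses[idx[0]])
```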

Where do you see the future of machine learning with regards to medical imaging?

Perhaps many of us are hoping that radiogenomics will bring a revolutionary change in disease diagnosis, one that, among other things, may make the biopsy superfluous, as some researchers audaciously envision.

However, for the foreseeable future, we should rather look at “consensus building”. The difficulty of medical diagnosis manifests itself clearly in so-called “inter-observer variability”: doctors often cannot agree on a diagnosis or measurement when given the same case.

For some cases like breast and lung cancer the disagreement can approach and even exceed 50% when the exact location of the malignancy is involved. Using AI for identifying and retrieving similar abnormalities and malignancies will open the horizon for building consensus.

If we can find several thousand past cases that can be confidently matched with the data of the current patient, then a “computational consensus” is not far away. The beauty of it is, again, that the AI will not be making any diagnostic decision but simply making the existing medical wisdom accessible, wisdom that currently lies fallow under terabytes of digital dust.

As the technology advances, will there be a need for pathologists in the future?

The tasks and workload of pathologists will certainly go through some transformation, but the sensitive nature of what they do, on the one hand, and the breadth and depth of knowledge they hold, on the other, make them indispensable as the ultimate recognition entities.

It is imaginable that in the near future, by employing high-level visual programming languages, pathologists will design and teach their own AI agents for very specific tasks. Not engineers, not computer scientists; it will be pathologists, with their medical knowledge, who are in charge of exploiting AI’s capabilities.

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Revolutionizing everyday products with artificial intelligence

Mechanical engineering researchers are using AI and machine learning technologies to enhance the products we use in everyday life.

Chelsea Turner/MIT

“Who is Bram Stoker?” Those three words demonstrated the amazing potential of artificial intelligence. It was the answer to a final question in a particularly memorable 2011 episode of Jeopardy!. The three competitors were former champions Brad Rutter and Ken Jennings, and Watson, a super computer developed by IBM. By answering the final question correctly, Watson became the first computer to beat a human on the famous quiz show.

“In a way, Watson winning Jeopardy! seemed unfair to people,” says Jeehwan Kim, the Class ‘47 Career Development Professor and a faculty member of the MIT departments of Mechanical Engineering and Materials Science and Engineering. “At the time, Watson was connected to a super computer the size of a room while the human brain is just a few pounds. But the ability to replicate a human brain’s ability to learn is incredibly difficult.”

Kim specializes in machine learning, which relies on algorithms to teach computers how to learn like a human brain. “Machine learning is cognitive computing,” he explains. “Your computer recognizes things without you telling the computer what it’s looking at.”

Machine learning is one example of artificial intelligence in practice. While the phrase “machine learning” often conjures up science fiction typified in shows like "Westworld" or "Battlestar Galactica," smart systems and devices are already pervasive in the fabric of our daily lives. Computers and phones use face recognition to unlock. Systems sense and adjust the temperature in our homes. Devices answer questions or play our favorite music on demand. Nearly every major car company has entered the race to develop a safe self-driving car.

For any of these products to work, the software and hardware both have to work in perfect synchrony. Cameras, tactile sensors, radar, and light detection all need to function properly to feed information back to computers. Algorithms need to be designed so these machines can process these sensory data and make decisions based on the highest probability of success.

Kim and much of the faculty at MIT’s Department of Mechanical Engineering are creating new software that connects with hardware to create intelligent devices. Rather than building the sentient robots romanticized in popular culture, these researchers are working on projects that improve everyday life and make humans safer, more efficient, and better informed.

Making portable devices smarter

Jeehwan Kim holds up a sheet of paper. If he and his team are successful, one day the power of a supercomputer like IBM’s Watson will be shrunk down to the size of that sheet of paper. “We are trying to build an actual physical neural network on a letter paper size,” explains Kim.

To date, most neural networks have been software-based, built using the conventional approach known as the Von Neumann computing method. Kim, however, has been using neuromorphic computing methods.

“Neuromorphic computer means portable AI,” says Kim. “So, you build artificial neurons and synapses on a small-scale wafer.” The result is a so-called ‘brain-on-a-chip.’

Rather than compute information from binary signaling, Kim’s neural network processes information like an analog device. Signals act like artificial neurons and move across thousands of arrays to particular cross points, which function like synapses. With thousands of arrays connected, vast amounts of information could be processed at once. For the first time, a portable piece of equipment could mimic the processing power of the brain.

“The key with this method is you really need to control the artificial synapses well. When you’re talking about thousands of cross points, this poses challenges,” says Kim.

According to Kim, the design and materials that have been used to make these artificial synapses thus far have been less than ideal. The amorphous materials used in neuromorphic chips make it incredibly difficult to control the ions once voltage is applied.

In a Nature Materials study published earlier this year, Kim found that when his team made a chip out of silicon germanium they were able to control the current flowing out of the synapse and reduce variability to 1 percent. With control over how the synapses react to stimuli, it was time to put their chip to the test.

“We envision that if we build up the actual neural network with material we can actually do handwriting recognition,” says Kim. In a computer simulation of their new artificial neural network design, they provided thousands of handwriting samples. Their neural network was able to accurately recognize 95 percent of the samples.
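To see why synapse variability matters, consider a crossbar as an analog matrix-vector multiply in which every stored weight is perturbed by device variation. The sketch below simulates that effect at roughly the 1 percent variability reported above; the layer sizes and data are invented for illustration.

```python
import numpy as np

# A crossbar computes y = Wx in analog: each synapse is a conductance, and
# currents sum along the columns. Device variability perturbs every weight.
rng = np.random.default_rng(0)

def crossbar_output(W, x, variability=0.01):
    """Matrix-vector product through synapses with ~1% random variation."""
    W_device = W * (1 + rng.normal(scale=variability, size=W.shape))
    return W_device @ x

W = rng.normal(size=(10, 784))   # trained weights for one layer
x = rng.random(784)              # e.g. a flattened handwriting sample

ideal = W @ x
analog = crossbar_output(W, x, variability=0.01)
print(np.max(np.abs(analog - ideal) / np.abs(ideal).max()))  # small at 1% variability
```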

“If you have a camera and an algorithm for the handwriting data set connected to our neural network, you can achieve handwriting recognition,” explains Kim.

While building the physical neural network for handwriting recognition is the next step for Kim’s team, the potential of this new technology goes beyond handwriting recognition. “Shrinking the power of a super computer down to a portable size could revolutionize the products we use,” says Kim. “The potential is limitless – we can integrate this technology in our phones, computers, and robots to make them substantially smarter.”

Making homes smarter

While Kim is working on making our portable products more intelligent, Professor Sanjay Sarma and Research Scientist Josh Siegel hope to integrate smart devices within the biggest product we own: our homes. 

One evening, Sarma was in his home when one of his circuit breakers kept going off. This circuit breaker — known as an arc-fault circuit interrupter (AFCI) — was designed to shut off power when an electric arc is detected to prevent fires. While AFCIs are great at preventing fires, in Sarma’s case there didn’t seem to be an issue. “There was no discernible reason for it to keep going off,” recalls Sarma. “It was incredibly distracting.”

AFCIs are notorious for such ‘nuisance trips,’ which disconnect safe objects unnecessarily. Sarma, who also serves as MIT's vice president for open learning, turned his frustration into opportunity. If he could embed the AFCI with smart technologies and connect it to the ‘internet of things,’ he could teach the circuit breaker to learn when a product is safe or when a product actually poses a fire risk.

“Think of it like a virus scanner,” explains Siegel. “Virus scanners are connected to a system that updates them with new virus definitions over time.” If Sarma and Siegel could embed similar technology into AFCIs, the circuit breakers could detect exactly what product is being plugged in and learn new object definitions over time.

If, for example, a new vacuum cleaner is plugged into the circuit breaker and the power shuts off without reason, the smart AFCI can learn that it’s safe and add it to a list of known safe objects. The AFCI learns these definitions with the aid of a neural network. But, unlike Jeehwan Kim’s physical neural network, this network is software-based.

The neural network is built by gathering thousands of data points during simulations of arcing. Algorithms are then written to help the network assess its environment, recognize patterns, and make decisions based on the probability of achieving the desired outcome. With the help of a $35 microcomputer and a sound card, the team can cheaply integrate this technology into circuit breakers.
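A toy sketch of how such a software network might tell device signatures from genuine arc faults. The spectral features, device names and the `fake_signature` helper are all invented; a real AFCI would train on waveform data captured through the sound card mentioned above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each plugged-in device leaves a signature in the current waveform; here we
# fake simple spectral features for two device types plus genuine arc faults.
rng = np.random.default_rng(5)

def fake_signature(center):
    return rng.normal(center, 0.1, size=(50, 8))

X = np.vstack([fake_signature(0.2), fake_signature(0.6), fake_signature(1.0)])
y = ["vacuum"] * 50 + ["drill"] * 50 + ["arc_fault"] * 50

# A small software neural network, as in the smart AFCI described above.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

reading = fake_signature(1.0)[:1]  # one new waveform reading
if clf.predict(reading)[0] == "arc_fault":
    print("trip breaker")
else:
    print("known safe device; stay on")
```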

As the smart AFCI learns about the devices it encounters, it can simultaneously distribute its knowledge and definitions to every other home using the internet of things.

“Internet of things could just as well be called ‘intelligence of things’,” says Sarma. “Smart, local technologies with the aid of the cloud can make our environments adaptive and the user experience seamless.”

Circuit breakers are just one of many ways neural networks can be used to make homes smarter. This kind of technology can control the temperature of your house, detect when there’s an anomaly such as an intrusion or burst pipe, and run diagnostics to see when things are in need of repair.

“We’re developing software for monitoring mechanical systems that’s self-learned,” explains Siegel. “You don’t teach these devices all the rules, you teach them how to learn the rules.”

Making manufacturing and design smarter

Artificial intelligence can not only help improve how users interact with products, devices, and environments. It can also improve the efficiency with which objects are made by optimizing the manufacturing and design process.

“Growth in automation along with complementary technologies including 3-D printing, AI, and machine learning compels us to, in the long run, rethink how we design factories and supply chains,” says Associate Professor A. John Hart.

Hart, who has done extensive research in 3-D printing, sees AI as a way to improve quality assurance in manufacturing. 3-D printers incorporating high-performance sensors that are capable of analyzing data on the fly will help accelerate the adoption of 3-D printing for mass production.

“Having 3-D printers that learn how to create parts with fewer defects and inspect parts as they make them will be a really big deal — especially when the products you’re making have critical properties such as medical devices or parts for aircraft engines,” Hart explains.  

The very process of designing the structure of these parts can also benefit from intelligent software. Associate Professor Maria Yang has been looking at how designers can use automation tools to design more efficiently. “We call it hybrid intelligence for design,” says Yang. “The goal is to enable effective collaboration between intelligent tools and human designers.”

In a recent study, Yang and graduate student Edward Burnell tested a design tool with varying levels of automation. Participants used the software to pick nodes for a 2-D truss of either a stop sign or a bridge. The tool would then automatically come up with optimized solutions based on intelligent algorithms for where to connect nodes and the width of each part.

“We’re trying to design smart algorithms that fit with the ways designers already think,” says Burnell.

Making robots smarter

If there is anything on MIT’s campus that most closely resembles the futuristic robots of science fiction, it would be Professor Sangbae Kim’s robotic cheetah. The four-legged creature senses its surrounding environment using LIDAR technologies and moves in response to this information. Much like its namesake, it can run and leap over obstacles. 

Kim’s primary focus is on navigation. “We are building a very unique system specially designed for dynamic movement of the robot,” explains Kim. “I believe it is going to reshape the interactive robots in the world. You can think of all kinds of applications — medical, health care, factories.”

Kim sees an opportunity to eventually connect his research with the physical neural network his colleague Jeehwan Kim is working on. “If you want the cheetah to recognize people, voice, or gestures, you need a lot of learning and processing,” he says. “Jeehwan’s neural network hardware could possibly enable that someday.”

Combining the power of a portable neural network with a robot capable of skillfully navigating its surroundings could open up a new world of possibilities for human and AI interaction. This is just one example of how researchers in mechanical engineering may one day collaborate to bring AI research to the next level.

While we may be decades away from interacting with intelligent robots, artificial intelligence and machine learning have already found their way into our routines. Whether it’s using face and handwriting recognition to protect our information, tapping into the internet of things to keep our homes safe, or helping engineers build and design more efficiently, the benefits of AI technologies are pervasive.

The science fiction fantasy of a world overtaken by robots is far from the truth. “There’s this romantic notion that everything is going to be automatic,” adds Maria Yang. “But I think the reality is you’re going to have tools that will work with people and help make their daily life a bit easier.”

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/