New Sony chief says artificial intelligence key to its survival

Group to keep offering PlayStation Vue TV service as it fits its quest for big data

Kenichiro Yoshida, chief executive, says Sony could do more to utilise its own data trove, such as the 80m monthly users on its PlayStation Network © Bloomberg

Sony’s new chief executive has positioned data and artificial intelligence at the centre of its survival strategy, warning that the likes of Amazon and Google pose an existential threat to the Japanese technology and entertainment group.

“The data mega players [such as Google, Amazon and Facebook] are so powerful they are capable of doing all kinds of things,” Kenichiro Yoshida said in his first media session since taking the helm of Sony in April. “The big challenge for our survival lies in the extent to which we can take control of data and AI. I personally feel a strong sense of crisis.” 

The comments by Mr Yoshida come as a resurgent Sony looks to revive investment in entertainment content and technology. A day earlier, the group struck a $2.3bn deal to buy outright control of EMI Music Publishing, taking advantage of a recovery in the music industry driven by streaming services.

Following a decade of deep losses driven by its ailing consumer electronics division, Sony has increased its focus on subscription revenue from online gaming and streaming of videos and music.

As part of that strategy, Mr Yoshida said the company would take a more strategic approach to collecting data from its users across a range of devices and platforms, spanning PlayStation games, financial services and mobile phones.

Sony does not intend to compete directly with the huge data platforms operated by Apple and other technology giants. But Mr Yoshida said Sony could do better in utilising its own data trove — such as the 80m monthly active users on Sony’s online gaming service PlayStation Network — to create content that matched users’ preferences.

“We want to remain close to our users and I think that’s how we can survive,” Mr Yoshida said. 

For this reason, Sony will continue to offer its PlayStation Vue internet streaming TV service despite calls by some analysts to give up the effort in a clearly crowded market led by Netflix. 

“It is clear as crystal that Sony has no competitive advantage in this business. It has not reached even a 1m user base in the last three years,” said Jefferies analyst Atul Goyal. But Mr Yoshida said PS Vue offered valuable real-time data on viewers’ preferences. 

In February, Sony announced plans to launch a ride-hailing service in partnership with several Japanese taxi companies to obtain data on vehicles. The company is looking to expand the sale of image sensors, installed in Apple’s iPhones and other mobile devices, for use in self-driving cars. 

“We want to contribute to the safety of mobility,” Mr Yoshida said, adding that Sony had no plans to make its own vehicle.

Book your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Artificial intelligence - is the UK ready, willing and able to invest?


Nothing short of a concerted effort by the government, and the public and private sectors, will be enough if the UK is to be a world leader of artificial intelligence, argues Mike Rebeiro, head of digital and innovation at law firm Macfarlanes.

As part of its Industrial Strategy unveiled last November, the government identified artificial intelligence (AI) as one of the four 'Grand Challenges' facing the UK.

As such, the Department for Business, Energy & Industrial Strategy's (BEIS) stated ambition is "to put the UK at the forefront of the AI and data revolution", predicting that UK GDP will be 10% higher (or an additional £232bn per year) by 2030 as a direct result of AI.

BEIS recently announced the UK's AI Sector Deal between the government and the private sector, outlining a package of £603m in new private and public sector funding for AI, and up to £324m from existing government funding.

The sector deal focuses on five areas:

• Infrastructure - in addition to the £1bn+ being invested in digital infrastructure, creating new data sharing frameworks to address the barriers of sharing publicly and privately held data to allow for the "fair and equitable data sharing between organisations in the private sector and between the private and public sectors"

• Ideas - boosting research and development spending to 2.4% of GDP by 2027, rising to 3% in the longer term

• People - growing digital skills in the workforce and creating at least 1,000 government-supported AI PhD places by 2025

• Business environment - the creation of a new AI Council, bringing together respected leaders from academia and industry, and the creation of a new government delivery body, the Office for Artificial Intelligence, as well as a new centre for data ethics and innovation

• Places - ensuring that businesses around the UK grow by using AI.

If the government and businesses can achieve these goals, there will be a growing investment and acquisition market in AI technologies and companies within the UK.

The week before the publication of the Sector Deal, the House of Lords Select Committee on Artificial Intelligence report was also published.

The report, AI in the UK: ready, willing and able?, concludes that the "UK is in a strong position to be among the world leaders in the development of artificial intelligence during the 21st century".

Nevertheless, the report also stated that the development of the UK as an AI hub will require not only the application of existing legislation but also new legal frameworks to be put in place.

Unlike other disruptive technologies, many forms of AI have the capacity to learn, make decisions independently and decide the basis upon which they are going to make those decisions, without human involvement or intervention.

Register for your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

How Artificial Intelligence Is Making Chatbots Better For Businesses

Just a few short years ago, having “conversations” in human languages with machines was pretty much universally a frustratingly comedic process.

Adobe Stock

Today that has changed. While natural language processing (NLP) and recognition are far from perfect, thanks to machine learning algorithms they are getting increasingly close to a point where it will be hard to tell whether we are talking to a human or a computer.

Businesses have capitalized on this, with increasing numbers of chatbots deployed, usually in customer service functions but increasingly in internal processes and to assist in training.

At ICLR 2018 in Vancouver, Salesforce’s chief scientist, Richard Socher, presented seven breakthrough pieces of research covering practical advances in NLP including summarization, machine translation and question answering.

He told me “NLP is going to be incredibly important for business – it is going to fundamentally change how we provide services, how we understand sales processes and how we do marketing.

“Particularly on social media, you need NLP to understand the sentiment around your marketing messages and how people perceive your brand.”

Of course, this raises some issues, and one of the most glaring is, do people really want to talk to machines? From a business point of view it makes sense – it’s incalculably cheaper to carry on 1,000 simultaneous customer service conversations with a machine than with the giant human call center which would be needed to do the same job.

But from a customer point of view, are they gaining anything? Unless the service they receive is faster, more efficient and more useful, then they probably aren’t.

“I can’t speak for all chatbot deployments in the world – there are some that aren’t done very well,” says Socher.

“But in our case we’ve heard very positive feedback because when a bot correctly answers questions or fills your requirements it does it very, very fast.”

“In the end, users just want a quick answer, and originally people thought they wanted to talk to a person because the alternative was to go through a ten-minute menu or to listen to ten options and then have to press a button – that’s not fun and it’s not fast or efficient.”

Key to achieving this efficient use of NLP technology are the concepts of aggregation and augmentation. Rather than thinking of a conversation exclusively taking place between one human and one machine, AI and chatbots can be used to monitor and draw insights from every conversation and learn from them how to perform better in the next one.

And augmentation means that the machine doesn’t have to conduct the entire conversation. Chatbots can “step in” for routine tasks such as answering straightforward questions from an organization’s knowledge base, or taking payment details.

In other situations, the speed of real-time analytics available today means that bots can raise an alert when they detect, for example, a customer becoming irate (thanks to sentiment analytics), prompting a human operator to take over the chat or call.
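The escalation pattern described above can be sketched in a few lines. This is a toy illustration rather than any vendor's implementation: the word list, scoring and threshold are all invented for the example, and a production system would use a trained sentiment model instead of keyword counting.

```python
# Toy escalation sketch: hand the conversation to a human agent when
# message sentiment drops below a threshold. The word list, scoring
# and threshold are invented for illustration only.

NEGATIVE_WORDS = {"angry", "ridiculous", "useless", "refund", "terrible"}

def sentiment_score(message: str) -> float:
    """Return a crude score in [0, 1]; lower means more negative."""
    words = message.lower().split()
    if not words:
        return 1.0
    negatives = sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS)
    return max(0.0, 1.0 - negatives / len(words))

def route(message: str, threshold: float = 0.8) -> str:
    """Decide whether the bot keeps the conversation or a human takes over."""
    return "human" if sentiment_score(message) < threshold else "bot"
```

In a real deployment the routing decision would also weigh conversation history and customer value, not a single message.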

Summarization is another highly useful function of NLP, and one which is likely to be increasingly rolled out to chatbots. Internally, bots will be able to quickly digest, process and report business data when it is needed, and new recruits can quickly bring themselves up to speed. For customer-facing functions, customers can receive summarized answers to questions involving product and service lines, or technical support issues.
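As a rough illustration of the idea (not any particular product's method), a minimal extractive summarizer can score each sentence by the frequency of its words across the whole text and keep the top-scoring sentences in their original order:

```python
# Illustrative extractive summarizer: score each sentence by the
# corpus-wide frequency of its words, then emit the highest-scoring
# sentences in the order they originally appeared.
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Preserve the original ordering of the selected sentences
    return " ".join(s for s in sentences if s in top)
```

Modern systems use neural abstractive models rather than frequency counts, but the input/output shape is the same: long text in, short digest out.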

Chatbots are a form of the ‘intelligent assistant’ technology that powers Siri or Google Assistant on your phone, or Cortana on your desktop. Generally, though, they are focused on one specific task within an organization.

One study found that 40% of large businesses have implemented this technology in some form, or will have done so by the end of 2019.

Among those, 46% said that NLP is used for voice to text dictation, 14% for customer services and 10% for other data analytics work.

Chatbots are also increasingly ubiquitous in collaborative working environments such as Slack, where they can monitor conversations between teams and provide relevant facts or statistics at pertinent points in the conversation.

In the future, chatbots will probably be able to take things even further and propose strategy and tactics for overcoming business problems.

Socher tells me “They will probably be able to help us craft marketing messages, based on understanding of the language of all the things that have been successful in the past.”

Another example could be customer service bots which can allocate resources to dealing with customer cases based on the classification and sentiment analysis of the conversations they are having.

As with all AI, the development of NLP is far from a finished process, and the level of conversation we are able to have today will undoubtedly seem archaically stilted and unnatural in just a couple of years’ time.

But today, organizations are clearly becoming more comfortable with the idea of integrating chatbots and intelligent assistants into their processes, and confident that it will lead to improvements in efficiency and customer satisfaction.

Register for your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

How Artificial Intelligence Is Raising The Bar On The Science Of Marketing

Artificial Intelligence, AKA the shiny new toy in the marketer's toolkit. Shutterstock

Artificial Intelligence (AI) has become part of the business landscape. It's now accepted as a technology for many applications and platforms. However, marketing is one of the areas where AI is transforming how the process works. As such, it's also solving some marketing challenges across industries.

However, like other technology slowly making its way into all aspects of work and life, such as the Internet of Things (IoT) and autonomous vehicles, the transformation process of AI in marketing may not quite be there yet. And, that may be for the best. Here's the current state of AI's disruption of marketing.

AI's Impact on Marketing Science

Specific changes from AI's influence on marketing are already being felt, according to Charles (Chuck) Davis, co-founder and CTO of Element Data, the company behind an AI tool called Decision Cloud. “AI has enabled the evolution of search engines, recommendation engines, chatbots and voice data analysis and other technologies employed by marketers every day."

And, companies across industries are starting to understand how to incorporate AI and machine learning into their marketing efforts. Companies like Amazon and Netflix were early adopters. They used this technology to provide personalized recommendations to their customers. Although this marketing tactic is still used successfully, the marketing applications have progressed into many other areas.

Better Decisions Arrive Faster

Being able to make better decisions about your marketing strategy means money well spent and a better return on the budget you do use. If you could see the future, make informed predictions and execute targeted actions, you'd be making the best decisions and garnering the best results for doing so.

Catalant’s Pedro Pereira explains, “In sales and marketing, AI measures customer sentiment and tracks buying habits. Brands and advertisers use the information to make ecommerce more intuitive and for targeted promotions...AI creates efficiencies that wouldn’t be possible without sifting through piles of data.”

As you know, making the right decisions with the data you receive is challenging at best. That's where AI has made the difference. Companies like Element Data, Selligent Marketing Cloud, and SetSchedule are helping marketers take the massive volumes of data that come from all these channels and platforms and group them in a structured way to see what decisions need to be made. Questions about what motivates customers and why they act a certain way can be answered. And, those insights arrive more quickly than any human could ever manage.

By speeding more accurate decisions, business intelligence rapidly grows. As a result, the return increases further. That means more time and money for creating the right campaigns and spending more time interacting with each customer. AI then becomes truly worth its weight in gold.

Personalization Gets Help

Being able to personalize each and every experience for what could be thousands of customers seems like an impossible task. However, that is what today's customers want. Although Amazon and others have proved that it's possible, they have AI to thank. And, so many other companies are seeing the potential.

According to Emme Yllesca, CEO of real estate investment platform Asset Column, “AI provides deep insights, allowing our brand to use that data in order to bridge the gap, resulting in a marketing message that hits the right pain points.” That means matching audience segments with specific problems and solutions, which can deliver a huge uplift in response and success rates.

Aman Naimat, senior vice president of technology & engineering at Demandbase, says personalization is at the crux of why marketing has to and will adopt AI. "Ultimately, marketing is all about how a brand communicates to its prospects and customers, and personalized, relevant customer experiences are the most effective way to reach their target audiences," says Naimat. "Think about how easy it is to filter out spam with the glance of an eye."

Naimat cautions that 1:1 conversations are difficult to have at scale. He believes the only way to achieve personalization at scale is to leverage AI and machine learning applications. "The knowledge you get from AI technology is akin to the knowledge most sales reps have when they research every single buyer in-depth. Today, many companies are already enabling this hyper-personalization at scale, creating context-rich conversations that help businesses understand, connect and relate to their audiences."

Content Marketing Is Efficient

With content in such demand, it's easy to focus on mass production. However, while quantity is important to a certain degree, it shouldn't put quality at risk. What you create must be relevant for numerous audiences but also be adjustable to each segment. As you know, that leads to a considerable amount of content to manage, organize, and put to work.

Creating these content assets most likely also consumed a large number of resources. Therefore, you want to be able to tap them, repurpose them, and leverage them again at will. That's again when AI becomes the marketing superhero. According to Jim Vernon, CEO of RockHer, “The majority of our content management uses artificial intelligence to some degree, allowing us to catalog, search and find any piece of content related to a specific search query.”

Go Deeper Into the Data

To beat out the competition means knowing more about the intended customer and existing base. It's in the data, but it's a race to find it first and understand what to do with it. “Consumer data is a very touchy subject,” says Saro Der Ohanesian, CEO of Vanguard Tax Relief, “and what and how data is collected is a completely different discussion.” As one simple example, we need only look at how much Facebook has been in the news recently over its data collection.

Real estate is a good example of an ideal place to put AI's power to work on marketing to generate more effective results. For example, SetSchedule is a real estate marketing firm that has leveraged AI technology to create connections between realtors and local homeowners, home buyers, and investors to complete more property deals. The company uses AI to identify properties through predictive data. Then, it uses automated marketing to understand timing, seller and buyer intent, market conditions and more to develop leads that close more often than marketing processes that do not use machine learning.

Marcos Meneguzzi, EVP and Head of Cards and Unsecured Lending for HSBC, also sees firsthand how AI is impacting the customer experience in his organization. “Customers want companies to treat them like individuals who matter - not interchangeable sources of revenue. The greatest promise for AI is about optimization of data and the valuable insights they can provide leading to greater personalization. This allows companies like HSBC to enhance and tailor our customer experiences.”

HSBC uses AI to predict the redemption of loyalty program rewards associated with its new suite of credit cards. AI is also leveraged within fraud management, in both models and rule building, to detect anomalous behavior for the protection of the bank's customers and the firm. Launching soon, HSBC’s new chatbot will augment the expertise of its bankers by providing fast and accurate responses to a wide range of questions, reducing the friction of getting answers and ultimately eliminating wait time.

In looking at future applications, Meneguzzi states, “We’re actively evaluating and exploring additional innovative AI use cases across our businesses to deliver superior customer experiences. A number of projects look to improve the customer experience. This includes reducing fraud and card compromises. Others enable more personalized and relevant customer contacts within the personal banking space.”

Share the AI Love

Now, working with sales, customer service, and other areas of the business means sharing information and insights. And, it’s the CMO who can take the lead in pushing these efficiencies throughout the company by working with others on the executive team.

For example, this includes things like contract management. Although companies have typically relied on large sales platforms to cover this task, these platforms haven't been able to optimize the process the way AI could do. The technology can do a lot of the heavy lifting for the legal and sales teams while also protecting the contracts better than any other tools available.

More to Come

These are huge strides AI has made in moving the science of marketing forward. Other opportunities include Decision Intelligence. Not only will it change how CMOs make decisions, but it will also influence consumer decisions related to how, when, and where they spend their money. AI tools will learn what consumers have previously done, mimic that decision-making process, and then understand what to deliver to consumers to influence that decision.

But first, there are other challenges. These include knowing where to start making internal changes to marketing tools to integrate AI with the various types of data, data sources, and channels. However, AI could help determine how you achieve that.

Second, companies have to think about becoming too dependent on AI. Jesse Wolfersberger, Senior Director of Decision Sciences for Maritz Motivation Solutions, recommends that when integrating AI into your business, you have experienced professionals run the show. "Even after that, we recommend substantial testing and gradual roll-outs," he adds. "You don't want to be in the situation where you are taking actions based on an AI's recommendations and have it turn out that an analyst accidentally swapped the revenue column with the cost column.”

Naimat believes there is no risk in marketers becoming too dependent on AI. "Marketers will still need to drive AI tools that will help them do their jobs better and at scale," he said. "In fact, I believe that as AI advances, there will be a new class of marketers whose sole responsibility will be to drive this AI machinery, understand and take advantage of AI algorithms, and strategically point to the right data and goals which in turn will spark the integration between data and marketing, and ultimately, bring them closer together."

The real risk, as Naimat explained, is in the non-adoption of AI, with a loss of competitive advantage that data and insights can provide.

Register for your tickets here: https://theaicongress.com/bookyourtickets/

Fake News And How Artificial Intelligence Tools Can Help

A word of warning to those who infiltrate the content pipeline with information that’s not factual: there’s heightened demand for new methods to distill the mountains of information we are presented with daily down to the unadulterated facts. People crave a way to cut through the opinions, marketing speak and propaganda to get to the truth. And technology just might be the solution we need to become data-driven decision-makers and objectively understand the information.

Adobe Stock

There are reasons why we struggle under the weight of fake or worthless content. Every 60 seconds, 160 million emails are sent, 98,000 tweets are shared on Twitter, 600 videos are uploaded to YouTube and 1,500 blog entries are created. Nobody but a machine could keep up with it all.

Not only do we struggle to determine if politicians are telling us the truth, but marketers try to hook us with all kinds of products that are supposedly just what we need: better than the competition, the safest, the only ones that will deliver the desired results. The hyperbole can be exhausting.

We have never before experienced a time when so much information and so many opinions are thrown at us from so many angles. In response to our struggles, fact-checking organizations dedicated to dissecting and analyzing statements made by politicians and public figures now exist and are becoming increasingly visible.

As data continues to explode, the ability to rummage through it to find the truth required in a situation is essential. Consumers won’t be patient either. They want to find out anything they seek to know and they want to know it now. Brands will have to respond with truth and transparency if they hope to remain competitive.

Businesses are beginning to respond to their customers’ demands for facts. The big data-driven, machine-learning tech that is rolling out gives customers the raw material needed to measure and quantify absolute, objective facts and then act based on those findings, rather than rely on opinions and gut instincts so common today.

Checking Our Ads

AdVerif.ai offers a solution to verify ads so advertisers can keep an eye on where the content is displayed and publishers can check that content meets their policy. The tool augments the job of editorial staff with deep learning and Natural Language Processing capabilities to detect patterns that indicate spam, malware or inappropriate content. It also checks the content of ads and uses AI tools that leverage online knowledge repositories to either confirm facts or highlight potentially fake ones.
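To make the idea of automated policy checking concrete, here is a deliberately simplified, rule-based sketch. The rule names and regex patterns are invented for this example; AdVerif.ai's actual system relies on deep learning and NLP rather than hand-written rules like these.

```python
# Hypothetical rule-based policy check for ad text. The rule names and
# patterns below are invented for this sketch; a real verification
# pipeline would combine learned models with such heuristics.
import re

POLICY_PATTERNS = {
    "miracle-cure claim": re.compile(r"\bmiracle\b|\bcures? everything\b", re.I),
    "fake urgency": re.compile(r"\bact now\b|\bonly \d+ left\b", re.I),
}

def check_ad(text: str) -> list[str]:
    """Return the names of any policy rules the ad text triggers."""
    return [name for name, pattern in POLICY_PATTERNS.items()
            if pattern.search(text)]
```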

Facebook Fact Checking

Especially after the recent backlash against Facebook, the company is on a mission to regain user trust. Facebook has been working with four independent fact-checking organizations—Snopes, Politifact, ABC News and FactCheck.org—to verify the truthfulness of viral stories. New tools designed to avert the spread of misinformation will notify Facebook users when they try to share a story that has been flagged as false by these independent fact-checkers. Facebook has also recently announced plans to open two new AI labs that will work on creating an AI safety net for its users, tackling fake news and political propaganda as well as bullying on its platform.

Transparency of Reds and Whites

Alit Wine is leading the industry in an effort to “shine a light on the places that the wine industry doesn’t talk about,” founder Mark Tarlov says. One of the things that’s typically hush-hush in the industry is how much each element of the winemaking process costs. But not Alit Wine. The company sells wine directly to consumers and details exactly how much each step of production costs for the wines it sells.

Big Brother in Reverse

Usually we’re concerned about government scrutiny of our own affairs. But Contratobook helps citizens scrutinize the work of government and public officials. Launched in Mexico in 2016 by a group of anonymous hackers, it is an open-source platform that allows people to search, filter and comment on more than 1.6 million government bids and contracts dating back to 2002. Citizens who wish to can look at each entry’s details, including contract values, involved parties and start dates, to detect irregular or inaccurate expenses.

Those brands, platforms and companies who build trust with their customer base via transparency and factual information that can be verified with data are expected to have the competitive edge in a world that has grown weary of the widespread dishonesty and misinformation that permeates our culture. Thanks to big data and machine learning, any company can now create more transparent and trustworthy systems we will all benefit from.

Register for your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

With The Emergence of AI, The Future of Mankind is Bright

The next few decades will surely be exciting as we can experience our science fiction fantasies playing out in our everyday lives.


Humans have emerged as the “superior” species on earth, having adapted to our surroundings and significantly altered a wide variety of regions across the world. We have surpassed the intelligence of other species on the planet, and our exploration has had remarkable historical impacts. A quest to find other dominant species has led us to explore other planets and celestial bodies, and so far, we haven’t found one!

Human Imagination

Subsequently, we started dreaming about building machines that could match the ingenuity of humans. We started off by creating all types of machines to help us overcome our limitations. We went on to invent computers to extend our brain power, analyzing and comprehending large amounts of data for insights. With persistence, we continued our expedition in building intelligent machines, and now we have reached a tipping point where machines are reflecting the intellectual prowess of human beings. Thanks to artificial intelligence, we are currently in an era where the AI revolution is rewriting the power of technology. With such fast-paced progression, it’s only a question of time before we reach the state of singularity (when machines surpass human thinking). At present, it’s challenging to anticipate when machines will reach this state; some predictions put it anywhere between 40 and 70 years from now.

Modern Artificial Intelligence

Today, AI-powered machines have an intellect like that of an infant and are limited to mimicking routine and rudimentary tasks. Just as juveniles learn from their environment as they grow, these machines also learn from the environment, developing over time by acquiring numerous skills. Today’s machines are invented and supervised by humans. They are finely designed and maintained with utmost care and taught to think like humans as they advance. The hope is that, in due course, these machines will take over all the human tasks that are neither creative nor desirable, thus liberating us from drudgery. Soon, these machines are expected to be as intelligent as humans, at which point we can witness true collaboration between man and machine.

These collaborative bots should help amplify human imagination, problem-solving and deep-thinking capabilities by several notches. Many mysteries of the universe could be unravelled by the unification of man and machine. This unification helps us overcome our limitations in correlating events, perceiving the deep linkages of causation and finding answers to complex problems. With intelligent machines by our side, we should be able to tackle critical issues such as chronic diseases, global warming and disabilities. Many of our current science fiction scenarios could turn into reality – inter-planetary travel, powering the earth entirely with solar energy, controlling weather patterns, regenerating human organs with flesh and blood, and even defining a new future for creating babies!

The Bright Future

The final frontier in human evolution would be embedding AI in our bodies by implanting smart sensors, chips and electronic prosthetics. These embedded devices should help us see things we cannot see with our eyes, hear sounds we cannot hear today, touch objects (like fire) we cannot touch today, fly like birds and swim in deep oceans, making us superhumans on earth. In this final act, we would move from the unification of man and machine to a higher level – integration.

In the far future, machines may surpass human intelligence and take command and control of the earth, with humans working under the supervision of machines. But, knowing human psychology, it’s doubtful whether we would allow ourselves to be in such a situation – surrendering our supremacy to the machines. Since we are in control of the current machine evolution, we would certainly ensure we instil characteristics in these machines so that they always treat humans as their masters and never cause us any harm.

The next few decades will surely be exciting as we can experience our science fiction fantasies playing out in our everyday lives.

Do you want to know more about the future of artificial intelligence? Register for your tickets for the AI World Congress: https://theaicongress.com/bookyourtickets/

You can now get a degree in artificial intelligence

Starting autumn 2018, the programme will accept a total of only 100 students a year


The Carnegie Mellon School of Computer Science (SCS) has launched the first undergraduate degree in artificial intelligence (AI) in the United States.

Starting autumn 2018, the programme appears fittingly rigorous, accepting a total of only 100 students a year. First-years can only declare themselves AI majors in the spring, after completing core mathematics and computer science classes in the SCS. The 100 second-, third- and fourth-year students who make it onto the programme will take additional courses in statistics and probability, computational modeling, machine learning and symbolic computation.

The degree will also involve an emphasis on ethics and social responsibility, as part of the SCS' desire to use AI to improve social conditions.

"Carnegie Mellon has an unmatched depth of expertise in AI, making us uniquely qualified to address this need for graduates who understand how the power of AI can be leveraged to help people," said Andrew Moore, dean of the School of Computer Science.

This degree programme continues the university's tradition of leading innovations in computer science and AI.

The SCS was one of the first schools in the US dedicated entirely to computer science, and in 1975, Allen Newell and Herbert A. Simon, researchers at Carnegie Mellon, received the A.M. Turing Award for contributions to AI. A total of twelve alumni and faculty have received Turing Awards, and a recent study by the US News and World Report ranked the SCS at Carnegie Mellon as the best computer science college in the US for AI.

 In the UK, many universities already offer bachelor's degrees in computer science with an emphasis or modules in AI, which appears comparable to what Carnegie Mellon has planned.

While many fear that AI will replace humans in a wide array of jobs, initiatives such as this seem determined to use AI to fix social issues and improve quality of life rather than let the technology overtake humanity. If the creation of this programme proves anything, it's that the rise of AI will also create new jobs even as it replaces people in old ones.

"It's an opportunity for us to shape what it means to be a degree program in AI as opposed to offering courses related to AI," said Reid Simmons, director of the new programme, according to the Carnegie website. "We want to be the first to offer an AI undergraduate degree. I'm sure we won't be the last. AI is here to stay."

Register for your tickets for the AI Congress now: https://theaicongress.com/bookyourtickets/

India wants to fire up its A.I. industry. Catching up to China and the US will be a challenge

  • A government-appointed task force has drawn up a plan of recommendations to boost the AI sector in India, from developing AI technologies and infrastructure to data usage and research.
  • But experts said India is unlikely to catch up with China or the U.S. It would first have to resolve stumbling blocks such as poor data quality and a lack of expertise in the field.
  • Still, Asia's third-largest economy has a chance to excel in some domains, such as industrial electronics.

Pradeep Gaur | Mint | Getty Images
A tech start-up at its office in Gurgaon, India

India has ambitions to fire up its artificial intelligence capabilities — but experts say that it's unlikely to catch up with the U.S. and China, which are fiercely competing to be the world leader in the field.

An Indian government-appointed task force has released a comprehensive plan with recommendations to boost the AI sector in the country for at least the next five years — from developing AI technologies and infrastructure, to data usage and research.

The task force, appointed by India's Ministry of Commerce and Industry, proposes that the government work with the private sector to develop technologies, with a focus on smart cities and the country's power and water infrastructure.

It recommends a network of infrastructure — a testing facility, and six centers focusing on research in generating AI technologies, such as robotics, autonomous trucks and advanced financial technology.

A data center could be set up to "develop an autonomous AI machine that can work on multiple data streams in real time," the plan said. Calling data the "fuel that powers AI," the report said data marketplaces and exchanges could allow the "free flow of data."

Yet despite those aspirations, experts said that insufficient research support, poor data quality, and the lack of expertise in the field will be stumbling blocks for India.

Rishi Sharma, an associate research manager for enterprise infrastructure at research firm IDC, said: "India is lagging the global dominance presently in the AI space ... It will take time before (it) positions itself at a global standing."

India's Ministry of Commerce and Industry did not respond to a request for comment from CNBC.

India's plans to deploy A.I.

From crop management to fighting terrorism, there's a plan to deploy AI in 10 sectors in Asia's third-largest economy. Those include manufacturing, health care, agriculture, education and public utilities.

Here are a few areas proposed by the task force:

  • National defense: Securing public and critical infrastructure by predicting terror attacks, and deploying robots for counterterrorism operations.
  • Crop management: Using AI for crop prediction, health management and selection, based on historical data and current factors. Crop monitoring and data collection could be carried out using drones and robots.
  • Environment: To automate and control — at the source — the levels of smoke and waste being released into the air, soil and water.

Can it succeed?

India's efforts come as the AI competition between China and U.S. intensifies, with China aiming to be the world leader in the space by 2030.

India, meanwhile, is late to the game, and will probably not dominate in the field except in a few areas, experts said.

IDC's Sharma said the country needs to resolve some issues first: "India stands a chance to compete at a global level, provided the hurdles are overcome." Challenges, she said, include poor data quality and integrity, as well as a lack of expertise.

Those critiques would not be news to New Delhi.

"The most important challenge in India is to collect, validate ... distribute AI-relevant data and making it accessible to organizations, people and systems without compromising privacy and ethics. Data is the bedrock of AI systems and reliability of AI systems depends primarily on quality and quantity of the data," the government report said.

Milan Sheth, a partner at EY covering intelligent automation, added: "There is a need to reskill a large number of people in a short span of time. It will take a couple of years, but tech developments will also take that same amount of time. To keep pace with adoption, that is the challenge."

While India is unlikely to be able to fully compete anytime soon, it can still aim to be a leader in a few areas such as industrial electronics, Sheth said.

"It will make a bid for dominating in a few areas but can't compete with the U.S. or China on academic investment," he said, adding that very few companies in India are getting sufficient funding for research.

India's GDP could reach $6 trillion in 2027 because of its digitization drive, according to a previous forecast by Morgan Stanley. That would make India the third-largest economy in the world — behind the U.S. and China, which recorded $18.5 trillion and $11.2 trillion in 2016 GDP, respectively.

Register for your tickets for the AI Congress here: https://theaicongress.com/bookyourtickets/

Artificial intelligence could have a big role to play in the way health care is administered


From high speed internet to connected devices, innovation is transforming almost every aspect of our lives. The field of medicine is no different. U.K.-based health care business Babylon Health, for instance, is combining digital technology with human doctors.

The company has grand ambitions. "If we can make health care accessible, affordable, put it in the hands of every human being on earth, if we can do with health what Google did with information, that's a phenomenal thing to have achieved," Ali Parsa, Babylon Health's founder and CEO, told CNBC's Nadine Dereza.

Parsa went on to stress just how much things were changing in the field of medicine. 

"Everything we know about intervention in medicine is being reinvented, whether it is electro biology or synthetic biology, whether it is laser manipulation or audiology intervention, whether it is organ reconstruction or DNA reengineering," he said. "We are reinventing the way we can intervene in your body in a way that we could never imagine before." 

Babylon Health is not the only organisation looking to use technology to transform the way patients are treated. 

"We want to work hard to more quickly diagnose our patients so we can begin treatment, and more quickly diagnosing requires artificial intelligence," Kevin Mahoney, senior vice president and chief administrative officer for the University of Pennsylvania Health System, said.

"It's going to require using big data," he added. "It's going to require looking for those patterns that we don't quite see, but always following it back through the physician who's been trained how to interpret that data."

The issue of whether we will eventually be treated by computers rather than humans is an intriguing one, but Mahoney sought to paint a more collaborative future.

"I'm not advocating that we're ever going to get to the point where the computer treats you," he said.

"But the amount of information that doctors are being told on a daily basis about new treatments, new evidence that's out there, artificial intelligence is going to be required to help condense that down and bring it directly to the patient's room so the doctor can intervene as effectively as possible."

Want to know more about how artificial intelligence will improve our healthcare? Register for your ticket here: https://theaicongress.com/bookyourtickets/

Google just gave a stunning demo of Assistant making an actual phone call

It’s hard to believe AI can interact with people this naturally


Onstage at I/O 2018, Google showed off a jaw-dropping new capability of Google Assistant: in the not too distant future, it’s going to make phone calls on your behalf. CEO Sundar Pichai played back a phone call recording that he said was placed by the Assistant to a hair salon. The voice sounded incredibly natural; the person on the other end had no idea they were talking to a digital AI helper. Google Assistant even dropped in a super casual “mmhmmm” early in the conversation.

Pichai reiterated that this was a real call using Assistant and not some staged demo. “The amazing thing is that Assistant can actually understand the nuances of conversation,” he said. “We’ve been working on this technology for many years. It’s called Google Duplex.”

Duplex really feels like next-level AI stuff, but Google’s chief executive said it’s still very much under development. Google plans to conduct early testing of Duplex inside Assistant this summer “to help users make restaurant reservations, schedule hair salon appointments, and get holiday hours over the phone.”

Pichai says the Assistant can react intelligently even when a conversation “doesn’t go as expected” and veers off course a bit from the given objective. “We’re still developing this technology, and we want to work hard to get this right,” he said. “We really want it to work in cases, say, if you’re a busy parent in the morning and your kid is sick and you want to call for a doctor’s appointment.” Google has published a blog post with more details and soundbites of Duplex in action.

“The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.” Google envisions other use cases like having Assistant call businesses and inquire about their hours to help keep Maps listings up to date. The company says it wants to be transparent about where and when Duplex is being used, as a voice that sounds this realistic and convincing is certain to raise some questions.

In current testing, Google notes that Duplex successfully completes most conversations and tasks on its own without any intervention from a person on Google’s end. But there are cases where it gets overwhelmed and hands off to a human operator. This section on the ins and outs of Duplex is very interesting:

The Google Duplex system is capable of carrying out sophisticated conversations and it completes the majority of its tasks fully autonomously, without human involvement. The system has a self-monitoring capability, which allows it to recognize the tasks it cannot complete autonomously (e.g., scheduling an unusually complex appointment). In these cases, it signals to a human operator, who can complete the task.

To train the system in a new domain, we use real-time supervised training. This is comparable to the training practices of many disciplines, where an instructor supervises a student as they are doing their job, providing guidance as needed, and making sure that the task is performed at the instructor’s level of quality. In the Duplex system, experienced operators act as the instructors. By monitoring the system as it makes phone calls in a new domain, they can affect the behavior of the system in real time as needed. This continues until the system performs at the desired quality level, at which point the supervision stops and the system can make calls autonomously.
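The self-monitoring hand-off Google describes can be pictured as a simple loop: the automated agent handles each turn of the conversation while its confidence stays high, and escalates to a human operator when it judges a request beyond its scripts. This is only an illustrative sketch; the confidence function, threshold and phrases below are invented, not anything from Google's system.

```python
# Hypothetical sketch of a bot-to-human hand-off loop. The threshold and
# the toy confidence heuristic are invented for illustration.
HANDOFF_THRESHOLD = 0.6

def confidence(utterance):
    # Stand-in for a real model's confidence score: the toy system only
    # "knows" a few phrases and reports low confidence on anything else.
    known = {"book a table", "what time", "how many people", "goodbye"}
    return 0.9 if any(k in utterance for k in known) else 0.3

def handle_call(utterances):
    """Route each caller utterance to the bot or to a human operator."""
    transcript = []
    for u in utterances:
        if confidence(u) < HANDOFF_THRESHOLD:
            transcript.append(("human", u))  # too complex: escalate
        else:
            transcript.append(("bot", u))    # handled autonomously
    return transcript

calls = ["book a table", "do you cater for a wedding of 200?", "goodbye"]
for who, u in handle_call(calls):
    print(f"{who}: {u}")
```

In this sketch only the unusual catering question triggers an escalation; the routine turns stay with the bot, mirroring the post's claim that most conversations complete without human involvement.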

Don't forget to register for your ticket here: https://theaicongress.com/bookyourtickets/

AI is so important to Google it’s rebranding its research division

Goodbye Google Research, hello Google AI

Illustration by Alex Castro / The Verge

Every big tech company is an AI company these days, but none more so than Google. To underline the point ahead of its I/O developers conference, the company has rebranded its Google Research division as Google AI, reflecting the centrality of artificial intelligence to the company’s future.

In a blog post announcing the news, the company said the rebrand was to “better reflect [its] commitment” to integrating AI into various services. It follows an organizational reshuffle last month which saw AI product development split from Google’s search efforts, and veteran Googler Jeff Dean take the helm of the new division. A newly revamped homepage for Google AI also emphasizes more than just the company’s consumer products, highlighting recently published research in topics like health and astronomy and open-source tools used by the AI community worldwide, like the machine learning framework TensorFlow. (Important to note: non-AI research will still be done under the new “Google AI” division.)

The homepage for Google AI.

This focus on research and community contrasts slightly with Microsoft, which has also been pushing its AI credentials this week at its Build conference. But for Microsoft the message has been more about AI ethics and morality, with the company launching a new $25 million AI for Accessibility fund to develop the tech for people with disabilities. Google does plenty of work in the field of AI ethics too, but it’s interesting to see these two titans of the tech world trying to differentiate their message on the same subject.

Last month in a letter to investors, Google’s co-founder Sergey Brin warned of the threats posed by AI, like job destruction, biased algorithms, and misinformation. He also called AI “the most significant development in computing in my lifetime.” Google’s rebranding of its research division drives that point home.

Interested in how AI is implemented and monetised in companies? Register for your ticket here: https://theaicongress.com/bookyourtickets/

THE WIRED GUIDE TO ARTIFICIAL INTELLIGENCE

Supersmart algorithms won't take all the jobs, but they are learning faster than ever, doing everything from medical diagnostics to serving up ads.


ARTIFICIAL INTELLIGENCE IS overhyped—there, we said it. It’s also incredibly important.

Superintelligent algorithms aren’t about to take all the jobs or wipe out humanity. But software has gotten significantly smarter of late. It’s why you can talk to your friends as an animated poop on the iPhone X using Apple’s Animoji, or ask your smart speaker to order more paper towels.

Tech companies’ heavy investments in AI are already changing our lives and gadgets, and laying the groundwork for a more AI-centric future.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than by relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
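The "training from examples" idea can be shown in a few lines: instead of a programmer writing rules by hand, the program is given labelled data and generalises to new inputs. The toy data and the 1-nearest-neighbour rule below are illustrative assumptions, not anything from the article, but they capture the contrast with hand-coded rules.

```python
# Learning from examples: classify a new point by finding the most
# similar labelled training example (1-nearest-neighbour).
def nearest_neighbour(examples, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], point))[1]

# Labelled examples: ((height_cm, weight_kg), label)
training = [
    ((150, 45), "small"),
    ((160, 55), "small"),
    ((180, 80), "large"),
    ((190, 95), "large"),
]

print(nearest_neighbour(training, (185, 85)))  # -> large
print(nearest_neighbour(training, (155, 50)))  # -> small
```

No rule for "large" was ever written down; the boundary between the classes is implied entirely by the examples, which is the essence of the machine learning approach described above.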

For most of us, the most obvious results of the improved powers of AI are neat new gadgets and experiences such as smart speakers, or being able to unlock your iPhone with your face. But AI is also poised to reinvent other areas of life. One is health care. Hospitals in India are testing software that checks images of a person’s retina for signs of diabetic retinopathy, a condition frequently diagnosed too late to prevent vision loss. Machine learning is vital to projects in autonomous driving, where it allows a vehicle to make sense of its surroundings.

There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.

 

The Beginnings of Artificial Intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. He had high hopes of a breakthrough toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a proper academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s Arthur Samuel created programs that learned to play checkers. In 1962 one scored a win over a master at the game. In 1967 a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for tasks like understanding language. Others were inspired by the importance of learning to human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone, as computers mastered more tasks that could previously be done only by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by how brain cells work, known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
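The adjust-the-connections mechanism can be demonstrated with the smallest possible "network": a single artificial neuron, the building block that deep networks stack by the millions. This is a minimal sketch with invented numbers; real deep learning uses many layers and far more data, but the training loop is the same in spirit: compute an output, measure the error, nudge each connection against it.

```python
import math

# A single artificial neuron: a weighted sum passed through a squashing
# function, with the weights (connections) adjusted after every example.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=5000, lr=0.5):
    w = [0.0, 0.0]  # connection strengths
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = out - target        # how wrong was the neuron?
            w[0] -= lr * err * x1     # nudge each connection
            w[1] -= lr * err * x2     # against the error
            b    -= lr * err
    return w, b

# Teach the neuron the logical OR function from four examples.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(OR)
for (x1, x2), target in OR:
    print((x1, x2), "->", round(sigmoid(w[0] * x1 + w[1] * x2 + b)))
```

After training, the connections have settled into values that reproduce OR, an ability the neuron built up purely from the example data, just as the paragraph above describes.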

 

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes, and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book co-authored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.

Not everyone was convinced, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data and powerful computer chips could give machines new powers of perception.

In one notable result, researchers at the University of Toronto trounced rivals in an annual competition where software is tasked with categorizing images. In another, researchers from IBM, Microsoft, and Google teamed up to publish results showing deep learning could also deliver a significant jump in the accuracy of speech recognition. Tech companies began frantically hiring all the deep-learning experts they could find.

 

The Future of Artificial Intelligence

Even if progress on making artificial intelligence smarter stops tomorrow, don’t expect to stop hearing about how it’s changing the world.

Big tech companies such as Google, Microsoft, and Amazon have amassed strong rosters of AI talent and impressive arrays of computers to bolster their core businesses of targeting ads or anticipating your next purchase.

They’ve also begun trying to make money by inviting others to run AI projects on their networks, which will help propel advances in areas such as health care or national security. Improvements to AI hardware, growth in training courses in machine learning, and open source machine-learning projects will also accelerate the spread of AI into other industries.

Meanwhile, consumers can expect to be pitched more gadgets and services with AI-powered features. Google and Amazon in particular are betting that improvements in machine learning will make their virtual assistants and smart speakers more powerful. Amazon, for example, has devices with cameras to look at their owners and the world around them.

The commercial possibilities make this a great time to be an AI researcher. Labs investigating how to make smarter machines are more numerous and better-funded than ever. And there’s plenty to work on: Despite the flurry of recent progress in AI and wild prognostications about its near future, there are still many things that machines can’t do, such as understanding the nuances of language, common-sense reasoning, and learning a new skill from just one or two examples. AI software will need to master tasks like these if it is to get close to the multifaceted, adaptable, and creative intelligence of humans. One deep-learning pioneer, Google’s Geoff Hinton, argues that making progress on that grand challenge will require rethinking some of the foundations of the field.

As AI systems grow more powerful, they will rightly invite more scrutiny. Government use of software in areas such as criminal justice is often flawed or secretive, and corporations like Facebook have begun confronting the downsides of their own life-shaping algorithms. More powerful AI has the potential to create worse problems, for example by perpetuating historical biases and stereotypes against women or black people. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on the safety and ethics of AI. For us to truly reap the benefits of machines getting smarter, we’ll need to get smarter about machines.

NASA Explores Artificial Intelligence for Space Communications

Credits: NASA

NASA spacecraft typically rely on human-controlled radio systems to communicate with Earth. As collection of space data increases, NASA looks to cognitive radio, the infusion of artificial intelligence into space communications networks, to meet demand and increase efficiency.

“Modern space communications systems use complex software to support science and exploration missions,” said Janette C. Briones, principal investigator in the cognitive communication project at NASA’s Glenn Research Center in Cleveland, Ohio. “By applying artificial intelligence and machine learning, satellites control these systems seamlessly, making real-time decisions without awaiting instruction.”

To understand cognitive radio, it’s easiest to start with ground-based applications. In the U.S., the Federal Communications Commission (FCC) allocates portions of the electromagnetic spectrum used for communications to various users. For example, the FCC allocates spectrum to cell service, satellite radio, Bluetooth, Wi-Fi, etc. Imagine the spectrum divided into a limited number of taps connected to a water main.

What happens when no faucets are left? How could a device access the electromagnetic spectrum when all the taps are taken?

Software-defined radios like cognitive radio use artificial intelligence to employ underutilized portions of the electromagnetic spectrum without human intervention. These “white spaces” are currently unused, but already licensed, segments of the spectrum. The FCC permits a cognitive radio to use the frequency while unused by its primary user until the user becomes active again.

In terms of our metaphorical watering hole, cognitive radio draws on water that would otherwise be wasted. The cognitive radio can use many “faucets,” no matter the frequency of that “faucet.” When a licensed device stops using its frequency, cognitive radio draws from that customer’s “faucet” until the primary user needs it again. Cognitive radio switches from one white space to another, using electromagnetic spigots as they become available.
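The white-space logic described above amounts to: scan the licensed channels, transmit on one whose primary user is idle, and vacate the moment that user returns. The sketch below models that decision with invented channel counts and occupancy; a real cognitive radio would sense the spectrum and learn occupancy patterns rather than read them from a list.

```python
# Toy model of white-space selection: True means the primary (licensed)
# user is currently active on that channel.
def pick_white_space(occupancy):
    """Return the index of the first idle channel, or None if the band is full."""
    for ch, busy in enumerate(occupancy):
        if not busy:
            return ch
    return None

band = [True, True, False, True, False]

ch = pick_white_space(band)
print("transmitting on channel", ch)  # channel 2 is the first idle one

band[2] = True                        # primary user comes back...
ch = pick_white_space(band)           # ...so the radio hops elsewhere
print("hopped to channel", ch)        # channel 4
```

The hop from channel 2 to channel 4 when the licensed user reappears is the "switching from one white space to another" behaviour the article describes.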

“The recent development of cognitive technologies is a new thrust in the architecture of communications systems,” said Briones. “We envision these technologies will make our communications networks more efficient and resilient for missions exploring the depths of space. By integrating artificial intelligence and cognitive radios into our networks, we will increase the efficiency, autonomy and reliability of space communications systems.”

For NASA, the space environment presents unique challenges that cognitive radio could mitigate. Space weather, electromagnetic radiation emitted by the sun and other celestial bodies, fills space with noise that can interrupt certain frequencies.

“Glenn Research Center is experimenting in creating cognitive radio applications capable of identifying and adapting to space weather,” said Rigoberto Roche, a NASA cognitive engine development lead at Glenn. “They would transmit outside the range of the interference or cancel distortions within the range using machine learning.” 

In the future, a NASA cognitive radio could even learn to shut itself down temporarily to mitigate radiation damage during severe space weather events. Adaptive radio software could circumvent the harmful effects of space weather, increasing science and exploration data returns.

A cognitive radio network could also suggest alternate data paths to the ground. These processes could prioritize and route data through multiple paths simultaneously to avoid interference. The cognitive radio’s artificial intelligence could also allocate ground station downlinks just hours in advance, as opposed to weeks, leading to more efficient scheduling.

Additionally, cognitive radio may make communications network operations more efficient by decreasing the need for human intervention. An intelligent radio could adapt to new electromagnetic landscapes without human help and predict common operational settings for different environments, automating time-consuming processes previously handled by humans.

The Space Communications and Navigation (SCaN) Testbed aboard the International Space Station provides engineers and researchers with tools to test cognitive radio in the space environment. The testbed houses three software-defined radios in addition to a variety of antennas and apparatus that can be configured from the ground or other spacecraft.

“The testbed keeps us honest about the environment in orbit,” said Dave Chelmins, project manager for the SCaN Testbed and cognitive communications at Glenn. “While it can be simulated on the ground, there is an element of unpredictability to space. The testbed provides this environment, a setting that requires the resiliency of technology advancements like cognitive radio.”

Chelmins, Roche and Briones are just a few of many NASA engineers adapting cognitive radio technologies to space. As with most terrestrial technologies, cognitive techniques can be more challenging to implement in space due to orbital mechanics, the electromagnetic environment and interactions with legacy instruments. In spite of these challenges, integrating machine learning into existing space communications infrastructure will increase the efficiency, autonomy and reliability of these systems.

The SCaN program office at NASA Headquarters in Washington provides strategic and programmatic oversight for communications infrastructure and development. Its research provides critical improvements in connectivity from spacecraft to ground.

For more information about SCaN, visit:  www.nasa.gov/scan

What These 5 Women Are Doing to Solve Tech’s Diversity Problem

Getty Images

From gender-neutral AI to coding

At a time when diversity remains a front-burner issue within the tech industry, this year’s Consumer Electronics Show—the tech world’s largest conference—is surprisingly lacking in, well, diversity. While, in the past, the agenda-setting conference has showcased powerhouse solo women keynoters such as IBM CEO Ginni Rometty, General Motors CEO Mary Barra and former Yahoo CEO Marissa Mayer, this year, CES has chosen, for instance, to present a trio of women executives from A+E Networks, MediaLink and 605, sharing the stage alongside five male execs in a keynote panel.

Not surprisingly, CES’ male-dominated lineup has been widely slammed, with a number of CMOs and other marketing executives publicly criticizing the organization.

CES’ gender imbalance is emblematic of the broader gender inequity issues currently roiling tech. According to Girls Who Code, last year 30,000 men graduated with computer science degrees, compared to 7,000 women. Once they graduate, the statistics are grim. According to Crunchbase, the share of companies with at least one female founder increased to 9 percent between 2009 and 2012, but that number hasn’t budged in five years. The funding picture isn’t much better. According to the Harvard Business Review, among tech startups bankrolled by venture capital, just 9 percent of the entrepreneurs are women.

Not content with the status quo, a number of women in tech are taking the lead to tip the gender scales, creating opportunities for women while at the same time making systemic changes when it comes to culture and thinking about diversity.

Here, Adweek highlights five women working to change the tech industry’s game.

 

1. Kriti Sharma, vp of artificial intelligence at Sage


What she’s doing: Making AI inclusive

Artificial intelligence may be the buzziest new word in tech circles, but it has a significant gender problem, according to Sharma. For starters, AI assistants like Apple’s Siri and Amazon’s Alexa, which have female voices and personas as their default option, reinforce gender stereotypes. While these female-branded assistants are often used as “helpers,” fielding passive and anodyne questions (e.g., Siri, what’s the temperature?) or conducting household tasks like dimming lights, their male-branded counterparts such as IBM’s Watson, Salesforce’s Einstein and Samsung’s Bixby are touted as muscular, complex problem solvers deployed to such tasks as plugging into a brand’s CRM system and using AI to determine which sales leads are most promising based on past behavior.

Sharma aims to create a more gender-neutral AI industry. At Sage’s two-day “BotCamp” workshops, students get hands-on opportunities learning to build their own chatbots. And Sharma recently hired Sage’s first conversation designer, a role designed specifically to analyze the voice tones and personalities used to create virtual assistants.

Further, Sage’s code of ethics requires developers to follow five guidelines when creating AI. It covers everything from how to name virtual assistants to building diverse data sets that help companies make hiring decisions with gender taken out of the equation.

“Women are going to lose twice as many jobs as men due to AI,” Sharma explains, citing research from the Institute for Spatial Economic Analysis. “What we don’t talk about is how [AI] is going to impact different parts of society in different ways. I do a lot of work in that area.”

 

2. Allison Jones, director of marketing and communications at Code2040


What she’s doing: Getting tech students in the door

Code2040’s mission is to make sure that black and Latinx men and women are well represented in tech. To that end, the 30-person organization provides computer science college students with internships at major companies like Squarespace, Spotify, The New York Times and Goldman Sachs.

The organization also works directly with companies to shake up and realign their internal hiring processes. When Code2040 helped blogging platform Medium hire its technical talent, instead of focusing on the usual factors such as college GPAs, it worked with Medium to create face-to-face events with engineering interns in order to get to know each candidate personally.

While just 20 percent of computer science bachelor’s degrees and 5 percent of the technical workforce are black and Latinx, by 2040 they will comprise 40 percent of the U.S. population. Says Jones, “It’s not enough to just connect folks to talent—you have to make sure that your company has the culture that helps them drive, succeed and grow.” She adds, “The opportunities provide a way to generate wealth. We are building products that need to reflect the communities that are going to be the majority by 2040.”

 

3. Reshma Saujani, founder and CEO of Girls Who Code


What she’s doing: Teaching thousands of young women to code

In the six years since Saujani, a former attorney, launched Girls Who Code, 53,000 young women have graduated from the program. By the end of 2018, her goal is to nearly double that number, hitting 100,000.

The way Saujani sees it, although the demand for technical roles continues to rise, the percentage of women who actually hold computing roles is falling. The organization’s own research finds that 24 percent of computer scientists in 2017 were women, down from 37 percent in 1995. By 2027, the percentage is expected to slip further to 22 percent.

“I think parity has to be intentional about gender and race,” Saujani says. “We talk a lot about access to computer science education. We should be focused on participation.”

At the same time, she says, simply getting more tech companies to hire women is just the first part of the equation; the second is retention. “What causes women to leave the workforce and college is the lack of community,” Saujani adds.

 

4. Neha Murarka, co-founder and CEO of Smoogs.io


What she’s doing: Making bitcoin easy to understand

If the technology industry is dominated by men, think of bitcoin as an even more exclusive boys club.

“It’s a niche within a niche,” says Murarka. As co-founder of the five-person startup Smoogs.io, she’s trying to help more women understand the nascent technology. Smoogs.io powers a media player that digital creators, including publishers and authors, embed into their websites, asking consumers to make small payments in exchange for accessing content. Instead of charging a credit card for each individual payment, the system uses bitcoin, safely allowing users to pay for every second that they watch a video or read an article. Currently, the Nigerian news network BattaBox and author Akul Tripathi are testing Smoogs.io’s micro-payments as a way to access and read a series of articles and books.
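The pay-per-second model can be sketched in a few lines. Everything below is a hypothetical illustration; the function name, rate and units are invented for clarity, not Smoogs.io’s actual API:

```python
# Toy illustration of pay-per-second micropayments. The rate and helper
# below are invented for illustration, not Smoogs.io's implementation.

SATOSHIS_PER_BTC = 100_000_000  # the smallest bitcoin unit

def accrue_charge(seconds_consumed: int, rate_satoshis_per_second: int) -> int:
    """Total charge, in satoshis, for the content consumed so far."""
    return seconds_consumed * rate_satoshis_per_second

# A viewer watches 90 seconds of video at 5 satoshis per second.
owed = accrue_charge(90, 5)
print(owed)                      # 450 satoshis
print(owed / SATOSHIS_PER_BTC)   # the same amount expressed in BTC
```

The point of metering in such small units is that the charge can grow continuously as the viewer watches, rather than requiring a single up-front card payment.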

In her spare time, Murarka co-hosts London Women in Bitcoin, a meetup event aimed at attracting more women into the cryptocurrency space. Here, women network while learning about such topics as the ethics behind building bitcoin technology.

“Most of the people who come to us are everyday people from different industries, not just technical industries,” says Murarka, who believes that getting more women into tech starts with technical education.

“In my undergrad and post-grad, I was the only girl in the whole department,” she says. “Even when I was working in my second job in London, we were 22 developers and I was the only girl.”

 

5. Katharine Zaleski, co-founder and president, PowerToFly


What she’s doing: Helping big brands find talent

In 2014, Zaleski—who had spent years working in media at The Huffington Post, The Washington Post and NowThis News—realized society needed to change the way it talked about women and work.

So, she started PowerToFly with Milena Berry, connecting women with companies. Think of it as an all-women version of LinkedIn: Women create profiles and then outfits like American Express, Casper and Hearst get lists of qualified, tech-heavy, female candidates. For example, Casper recently posted 10 job openings on the site, including positions for a data engineer, an IT manager and a data and engineering director.

In three years, PowerToFly has created 1 million profiles. In addition to career matchmaking, PowerToFly also runs social and mobile campaigns that advertise companies’ roles through user-acquisition tactics, reaching another 12 million women. It sent out 30,000 diverse candidates in 2017.

“Companies can no longer say that they have a 'pipeline' problem,” Zaleski says. “When it comes time to interview for a role, not only are we giving them the women that they need to look at immediately, but we’re giving them a lead list and they’re able to say that they’re really interviewing 50/50 male-female.”

 

Uber and Volkswagen team up with artificial intelligence firm in race to develop self-driving cars

Uber tested its first fleet of self driving cars in 2016 REUTERS

Nvidia will partner with Uber and Volkswagen as the graphics chipmaker’s artificial intelligence platforms make further gains in the autonomous vehicle industry.

The company, which already has partnerships in the industry with companies such as carmaker Tesla and China’s Baidu, makes computer graphics chips and has also been expanding into technology for self-driving cars.

CEO Jensen Huang told an audience at the CES technology conference in Las Vegas that Uber’s self-driving car fleet was using Nvidia technology to help its autonomous cars perceive the world and make split-second decisions.

Uber has been using Nvidia’s GPU computing technology since its first test fleet of Volvo XC90 SUVs was deployed in 2016 in Pittsburgh and Phoenix.

Uber’s autonomous driving programme has been shaken this year by a lawsuit filed in San Francisco by rival Waymo alleging trade secret theft.

Nevertheless, Nvidia said development of the Uber self-driving programme had gained steam, with one million autonomous miles being driven in just the past 100 days.

With Volkswagen, Nvidia said it was infusing its artificial intelligence technology into the German carmaker’s future lineup, using Nvidia’s new Drive IX platform. The technology will enable so-called “intelligent co-pilot” capabilities based on processing sensor data inside and outside the car.

So far, 320 companies involved in self-driving cars - whether software developers, carmakers and their suppliers, or sensor and mapping companies - are using Nvidia Drive, formerly branded as Drive PX 2, the company said.

Nvidia also said its first Xavier processors would be delivered to customers this quarter. The system on a chip delivers 30 trillion operations per second using 30 watts of power.
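Those two figures imply a striking efficiency number, easy to check with the arithmetic below:

```python
# Efficiency implied by the Xavier figures quoted above:
# 30 trillion operations per second on a 30-watt power budget.
ops_per_second = 30e12
watts = 30.0

ops_per_watt = ops_per_second / watts
print(ops_per_watt)  # 1e12, i.e. one trillion operations per second per watt
```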

Bets that Nvidia will become a leader in chips for driverless cars, data centres and artificial intelligence have more than doubled its stock price in the past 12 months, making the Silicon Valley company the third-strongest performer in the S&P 500 during that time.

How one Chinese firm uses A.I. to teach English


 

Chinese education start-up Liulishuo has developed what it calls the world's first artificial intelligence English teacher.

After years spent gathering data on Chinese people speaking English, the firm employed deep learning to create personalized English courses powered by AI. Available on the firm's mobile app, the courses were launched in 2016 and boast around 50 million registered users.

AI teaching can triple learning efficiency, CEO and Founder Yi Wang told CNBC on the sidelines of the Morgan Stanley Tech, Media & Telecom conference in Beijing.

Schools have long suffered from a short supply of highly qualified teachers, he said, but now "technology, especially AI and mobile internet, has enabled us to extract the best out of the best teachers."

"We're seeing a tidal shift here," he added.

Wang, a former Google product manager, says Liulishuo will eventually move on to other languages as it looks to build "the most intelligent and efficient AI language teacher."

 

 

5 Key Artificial Intelligence Predictions For 2018: How Machine Learning Will Change Everything

During 2017 it was hard to escape predictions that artificial intelligence is about to change the world. In 2018, this is unlikely to change. However, an increased focus on repeatable and quantifiable results is likely to ground some of the “big picture” thinking in reality.

Don’t get me wrong - in 2018 AI and machine learning will still be making headlines, and there are likely to be more sensationalized claims about robots wanting to take our jobs or even destroy us. However, stories about real innovation and progress should start to receive more prominence as the promise of smart, learning machines increasingly begins to bear fruit.

Here are my predictions for what we will see in 2018:

  1. There will be less hype and hot air about AI – but a lot more action

With any breakthrough technology comes hype. As the arrival of functional and useful AI is something that has been predicted for centuries, it’s hardly surprising that people want to talk about it now that it’s here.

It also means that there’s inevitably a lot of hot air - for starters, take a look at my rundown of the most common AI myths. Inevitably this eventually dies down as the media moves on to the “next big thing”. In its place during 2018, I expect we will start to see real progress towards achieving some of the dreams and ambitions that have been talked up over the past few years.

All the indicators show that investment in the development and integration of AI, and machine learning in particular, is continuing to increase in scale. And importantly, results are starting to appear beyond computers learning to beat humans at board games and TV game shows. I expect 2018 to provide a continuous stream of small but sure steps forward, as machine learning and neural network technology takes on more routine tasks.

Businesses are expected to use AI to stay ahead of the game - but how do you get started?


Despite the big hype, small, medium-size and sometimes even larger businesses are often unsure about where to begin: “How can we use artificial intelligence in our organization, and what value can it bring?” This is the question that many company directors and managers have asked themselves. Organizations are often not aware of the vast opportunities they are already sitting on in terms of what is possible with their data, but they do know they need to get started with AI so as not to be left behind the competition.

While everyone talks about AI vastly transforming each industry in the near future, many businesses are not sure what exactly this can mean for their own organisation: What business processes could be automated? What processes could be made more efficient with AI, and where could a machine learning algorithm bring the most value?

So, why have some businesses not yet started using AI? Innovating with AI and machine learning requires access to highly skilled individuals: data scientists who have mastered not only statistics and data visualization but also complex machine learning and AI methods. Machine learning engineers and AI architects are rarer still; locating someone excellent is a lengthy process, and hiring them is costly. AI experts often have PhDs in an artificial intelligence field, and many are still doing research in the academic system, because AI is not a field you become expert in overnight.

Before we can solve the talent gap, we need to fill the knowledge gap. There are companies, such as Brainpool AI, which provide the experts but also help organisations understand how they can get started with AI, from data structuring and engineering to identifying machine learning opportunities within the business. By working closely with a company’s in-house teams, Brainpool consultants perform analytics audits - figuring out what data is available, what analytics has already been done, and how the data should be structured and merged - and help businesses understand what kinds of questions machine learning can answer and where it can bring the most value.

Say you are a retailer and want to know if you are offering the right kind of stock to make your business run efficiently and profitably while offering product ranges that make your customers happy. You may be wondering whether the set of mayonnaise brands you are offering is a satisfactory range for your customers but also cost-efficient.

Here are some examples of how AI can help us:

  1. AI powered product selection – ensuring the consumer receives the most relevant choice of products based on their online behavior. We see Amazon getting quite good at this.

  2. AI powered stock management – using AI to maximise customer satisfaction while at the same time optimizing stock management to ensure the business runs efficiently

  3. Personal health virtual assistant/healthcare bots - AI powered technology can help patients by suggesting what medication or attention is needed based on their described symptoms

  4. Medical diagnostics - millions of tests are being carried out by hospitals today for various illnesses which are hard to detect. AI can enhance speed and accuracy of these tests

  5. Fraud detection – AI can help companies in industries such as telecom or banking detect and prevent fraud with higher accuracy

The range of applications is huge; it would be hard to list them all. When getting started with AI, no matter the application or the industry you’re in, it is important to select tools suited to the type of data and the problems you are tackling. Frameworks such as TensorFlow, H2O, Caffe and PowerAI are some of them. You will also need advice on the languages and libraries your organisation should be using, such as R, Matlab or Python. Artificial intelligence and machine learning experts can help you select the right tools and deliver a portfolio of powerful machine learning solutions to choose from, with a roadmap of how to get started.

The goal is to become self-sufficient and learn exactly what steps you need to take in order to be ready to start using AI within your business. If you are already using data science, you should get experts to evaluate whether the algorithms your company is using are really state of the art and the best you could be doing.

Don’t wait around, otherwise you’ll get left on the platform with your competitors moving away in a speeding train. Get expert advice from a company like Brainpool and get started with AI today.

 

 

Singapore's first robot masseuse starts work

Credit: Nanyang Technological University

A robot masseuse has started work in Singapore today. Named Emma, short for Expert Manipulative Massage Automation, it specialises in back and knee massages as it mimics the human palm and thumb to replicate therapeutic massages such as shiatsu and physiotherapy.

Emma started work on her first patients today at the NovaHealth Traditional Chinese Medicine (TCM) clinic, working alongside her human colleagues – a physician and a massage therapist.

Emma 3.0 – the first to go into public service – is a third more compact than the first prototype unveiled last year, offers a wider range of massage programmes and provides a massage that patients describe as almost indistinguishable from that of a professional masseuse.

Emma uses advanced sensors to measure tendon and muscle stiffness, together with Artificial Intelligence and cloud-based computing to calculate the optimal massage and to track a patient's recovery over a course of treatments.

Emma is developed by AiTreat, a technology start-up company incubated at Nanyang Technological University, Singapore (NTU Singapore).

Just two years old, AiTreat has a valuation of SGD$10 million (USD $7.3 million) after it recently completed its seed round funding, supported by venture capitalists from Singapore, China and the United States, including Brain Robotics Capital LP from Boston.

Founder of AiTreat and NovaHealth, Mr Albert Zhang, an alumnus of NTU Singapore who led the development of Emma, said the company's technology aims to address workforce shortages and quality consistency challenges in the healthcare industry.

Using Emma in chronic pain management has the potential of creating low-cost treatment alternatives in countries where healthcare costs are high, and where aging populations have a growing demand for such treatment.

Mr Zhang said that Emma was designed to deliver a clinically precise massage according to the prescription of a qualified traditional Chinese medicine physician or physiotherapist, without the fatigue faced by a human therapist.

"By using Emma to do the labour intensive massages, we can now offer a longer therapy session for patients while reducing the cost of treatment. The human therapist is then free to focus on other areas such as the neck and limb joints which Emma can't massage at the moment," said Mr Zhang, who graduated from NTU's Double Degree programme in Biomedical Sciences and Chinese Medicine.

In Singapore, a conventional treatment package for lower back pain, consisting of a consultation, acupuncture and a 20-minute massage, would typically range from SGD$60 to SGD$100 (USD$44 to USD$73).

At NovaHealth TCM clinic, a patient could receive the same consultation and acupuncture, but with a 40-minute massage from Emma and a human therapist, for SGD$68 (USD$50).

Emma is housed in a customised room with two massage beds. Located in between both beds, Emma can massage one patient while the physician provides treatments for the second patient, before switching over.

This arrangement ensures Emma is always working on a patient, maximising the productivity of the clinic. It is estimated that staffing requirements to run a clinic can be reduced from five people to three, as Emma does the job of two masseuses.

How Emma works

Emma has a touch screen and a fully articulated robotic limb with six degrees of freedom. Mounted at the end of the limb are two soft massage tips made from silicone, which can be warmed for comfort.

Emma also has advanced sensors and diagnostic functions which can measure the exact stiffness of a particular muscle or tendon.

The data collected from each patient is then sent to a cloud server, where an artificial intelligence (AI) system computes the exact pressure to be delivered during the massage procedure.

The AI can also track and analyse the progress of the patient, generating a performance report that will allow a physician to measure a patient's recovery using precise empirical data.
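As a purely hypothetical illustration of the kind of mapping such a system might compute (the units, range and thresholds below are invented, not AiTreat’s actual model):

```python
# Hypothetical sketch: turn a measured muscle-stiffness reading into a
# massage pressure clamped to a safe range. All numbers are invented
# for illustration; this is not AiTreat's actual algorithm.

def massage_pressure(stiffness: float,
                     min_pressure: float = 5.0,
                     max_pressure: float = 30.0) -> float:
    """Map a 0-100 stiffness score linearly to a pressure in newtons,
    clamped so it never falls outside the safe [min, max] range."""
    pressure = min_pressure + (max_pressure - min_pressure) * stiffness / 100.0
    return max(min_pressure, min(max_pressure, pressure))

print(massage_pressure(0))    # 5.0  - softest setting for a relaxed muscle
print(massage_pressure(50))   # 17.5 - midpoint of the safe range
print(massage_pressure(100))  # 30.0 - capped at the safe maximum
```

The real system presumably also folds in treatment history and physician prescriptions; the clamping step stands in for the safety constraints any such controller would need.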

This proprietary cloud intelligence is supported by Microsoft, after Mr Zhang and his teammates won the Microsoft Developer Day Start-up Challenge last year.

Once Emma has proved she can improve the productivity and effectiveness of TCM treatments, Mr Zhang hopes it could become a business model for other clinics to follow in future.

AiTreat is currently incubated at NTUitive, the university's innovation and commercialisation arm.

The start-up is supported by the StartupSG-Tech grant, which funds up to SGD$500,000, as well as SPRING Singapore's ACE start-up grant and the Technology for Enterprise Capability Upgrading (T-Up) grant.

The development of Emma is also on the TAG.PASS accelerator programme by SGInnovate, which will see Mr Zhang tie up with overseas teams to target multiple markets such as the US and China.

Chief Executive Officer of NTU Innovation and NTUitive Dr Lim Jui said harnessing disruptive technologies such as robotics and AI to improve everyday life is what Singapore needs to keep its innovative edge.

"To remain competitive in the global arena, start-ups will need to tap on emerging technologies to create a unique product that can tackle current challenges, similar to what AiTreat has done," Dr Lim explained.

"We are proud to have guided Mr Albert Zhang in his vision to bring affordable healthcare solutions to the market for Singapore, which can alleviate some of the chronic pain problems which our elderly faces."

The official launch of Emma and the NovaHealth clinic today was attended by fellow entrepreneurs and industry leaders, including Mr Inderjit Singh, Chairman of NTUitive, NTU's innovation and enterprise arm, and a member of NTU Board of Trustees.

Mr Inderjit Singh said, "There is great potential for Emma to be of service to society, especially as the population ages. The massage techniques of experienced and renowned TCM physicians can be reproduced in Emma, giving the public easier access to quality treatment. I look forward to future studies which could improve the efficacy of such massages, using herbal ointments containing modern ingredients that improve wear and tear, such as glucosamine."

Running in parallel to Emma's work schedule is a research project to measure and benchmark Emma's efficacy.


AI innovation will trigger the robotics network effect

Image Credit: Oryx Vision

Anyone who has thought about scaling a business or building a network is familiar with a dynamic referred to as the “network effect.” The more buyers and sellers who use a marketplace like eBay, for example, the more useful it becomes. Well, the data network effect is a dynamic in which increased use of a service actually improves the service, such as how machine-learning models generally grow more accurate as a result of training from larger and larger volumes of data.
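That dynamic can be shown with a minimal sketch (all data here is synthetic, invented for illustration): the identical nearest-neighbour model, given more training examples, generally scores higher on a fixed test set.

```python
import random

random.seed(0)

def make_points(n, cx, cy, label):
    """Sample n noisy 2-D points around a class centre (cx, cy)."""
    return [(random.gauss(cx, 1.0), random.gauss(cy, 1.0), label)
            for _ in range(n)]

def nearest_neighbor_predict(train, x, y):
    """Classify (x, y) with the label of its nearest training point."""
    return min(train, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)[2]

def accuracy(train, test):
    hits = sum(1 for (x, y, label) in test
               if nearest_neighbor_predict(train, x, y) == label)
    return hits / len(test)

# Two well-separated classes, plus a fixed test set to evaluate against.
train = make_points(200, 0, 0, 0) + make_points(200, 4, 4, 1)
test = make_points(100, 0, 0, 0) + make_points(100, 4, 4, 1)

# The same model, trained on more data, generally scores higher.
for n in (5, 50, 200):
    subset = train[:n] + train[200:200 + n]  # n examples of each class
    print(n, round(accuracy(subset, test), 3))
```

This is the data network effect in miniature: nothing about the algorithm changes, yet more usage (more data) improves the service.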

Autonomous vehicles and other smart robots rely on sensors that generate increasingly massive volumes of highly varied data. This data is used to build better AI models that robots rely on to make real-time decisions and navigate real-world environments.

The confluence of sensors and AI at the heart of today’s smart robots generates a virtuous feedback loop, or what we might call a “robotics network effect.” We are currently on the verge of the tipping point that will create this network effect and transform robotics.

The rapid evolution of AI

To understand why robotics is the next frontier of AI, it helps to step back and understand how AI itself has evolved.

Machine intelligence systems developed in recent years are able to leverage huge amounts of data that simply didn’t exist in the mid-1990s when the internet was still in its infancy. Advances in storage and compute have made it possible to quickly and affordably store and process large amounts of data. But these engineering improvements alone can’t explain the rapid evolution of AI.

Open source machine learning libraries and frameworks have played a quiet but equally essential role. When the scientific computing framework Torch was released 15 years ago under a BSD open source license, it included a number of algorithms still commonly used by data scientists, including deep learning, multi-layer perceptrons, support vector machines, and K-nearest neighbors.

More recently, open source projects like TensorFlow and PyTorch have made valuable contributions to this shared repository of knowledge, helping software engineers with diverse backgrounds develop new models and applications. Domain experts require a vast amount of data to create and train these models. Large incumbents have a huge advantage because they can leverage existing data network effects.

Sensor data and processing power

Light detection and ranging (lidar) sensors have been around since the early 1960s. They’ve since found application in geomatics, archaeology, forestry, atmospheric studies, defense, and other industries. In recent years, lidars have become the preferred sensors for autonomous navigation.

The lidar sensor on Google’s autonomous vehicles generates 750MB of data per second. The 8 computer vision cameras on board collectively generate another 1.8GB per second. All this data has to be crunched in real time, but centralized compute (in the cloud) simply isn’t fast enough for real-time, high-velocity situations. To solve for this bottleneck, we’re decentralizing compute by pushing processing to the edge or, in the case of robots, on board.
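The arithmetic behind that bottleneck is easy to reproduce from the figures quoted above:

```python
# Aggregate sensor throughput from the quoted figures: 750 MB/s of
# lidar data plus 1.8 GB/s from the eight computer vision cameras.
lidar_mb_per_s = 750
cameras_gb_per_s = 1.8

total_gb_per_s = lidar_mb_per_s / 1000 + cameras_gb_per_s
print(total_gb_per_s)          # roughly 2.55 GB every second
print(total_gb_per_s * 3600)   # thousands of GB per hour of driving
```

At that rate, round-tripping the raw streams to a remote data centre is impractical, which is why the processing moves on board.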

The current solution for most of today’s autonomous vehicles is to use two on-board “boxes,” each of which is equipped with an Intel Xeon E5 CPU and 4 to 8 Nvidia K80 GPU accelerators. At peak performance, this consumes over 5000W in electricity. Recent hardware innovations like Nvidia’s new Drive PX Pegasus, which can compute 320 trillion operations per second, are beginning to more effectively address this bottleneck.

AI on the edge

Our ability to both process sensor data and fuse various modalities of data together will continue to drive the evolution of smart robots. In order for this sensor fusion to happen in real time, we need to put our machine learning and deep learning models on the edge. Of course, decentralized AI compounds the demands on decentralized processors.

Thankfully, machine learning and deep learning compute is becoming much more efficient. Graphcore’s intelligent processing units (IPUs) and Google’s tensor processing units (TPUs), for example, are lowering the cost and accelerating the performance of neural networks at scale.

Elsewhere, IBM is developing neuromorphic chips that mimic brain anatomy. Prototypes use a million neurons, with 256 synapses per neuron. The system is particularly well suited to interpret sensory data because it’s designed to approximate the way the human brain interprets and analyzes perceptual data.

The result of all this data coming from sensors means we are on the verge of a robotics network effect, a shift that will have dramatic implications for AI, robotics, and their various applications.

A new world of data

The robotics network effect will enable new technologies and machines to act not only on larger volumes and velocities of data, but also on expanding varieties of data. New sensors will be able to detect and capture data that we might not even be thinking about, bound as we are by the limited nature of human perception. Machines and smart devices will contribute enriched data back onto the cloud and to neighboring agents, informing decision making, enhancing coordination, and playing a vital role in continuous model improvements.

These advancements are coming more quickly than many realize. Aromyx, for example, uses receptors and advanced machine learning models to build sensor systems and a platform for the digital capture, indexing, and search of scent and taste data. The company’s EssenceChip is a disposable sensor that outputs the same biochemical signals that the human nose or tongue sends to the brain when we smell or taste a food or beverage.

Open Bionics is developing robotic prostheses that rely on haptic data collected from sensors within the arm socket to control hand and finger movements. This non-invasive design leverages machine learning models to translate fine muscle tension sensed by the electrodes into complex motor response in the bionic hands.

Sensor data will be instrumental in pushing the boundaries of AI. AI systems will simultaneously expand our ability to process data and discover creative uses for this data. Among other things, this will inspire new robotic form factors capable of collecting even broader modalities of data. As we advance our ability to “see” in new ways, the everyday world around us is rapidly emerging as the next great frontier of discovery.

Alex Housley is the founder and CEO of Seldon, the machine learning deployment platform that gives data science teams new capabilities around infrastructure, collaboration, and compliance.

Santiago Tenorio is a general partner at Rewired, a robotics-focused venture studio investing in applied science and technologies that advance machine perception.