Face-reading AI will be able to detect your politics and IQ, professor says

Professor whose study suggested technology can detect whether a person is gay or straight says programs will soon reveal traits such as criminal predisposition

Your photo could soon reveal your political views, says a Stanford professor. Photograph: Frank Baron for the Guardian

Voters have a right to keep their political beliefs private. But according to some researchers, it won’t be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.

Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.

Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.

Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.

“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.

Faces contain a significant amount of information, and given large datasets of photos, sophisticated computer programs can uncover trends and learn to distinguish key traits with a high rate of accuracy. In Kosinski’s “gaydar” study, an algorithm trained on online dating photos correctly identified sexual orientation 91% of the time with men and 83% with women, given just a handful of photos of each person.

Kosinski’s research is highly controversial, and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.

Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.

This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.

Kosinski said previous studies have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.

Kosinski said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. “A high conservative score … would be a very reliable prediction that this guy is conservative.”

Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump’s campaign, sparking debate about the use of personal voter information in campaigns.

Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”

Some of Kosinski’s suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.

He cited school counselors intervening when they observe children who appear to exhibit aggressive behavior. If algorithms could be used to accurately predict which students need help and early support, that could be beneficial, he said. “The technologies sound very dangerous and scary on the surface, but if used properly or ethically, they can really improve our existence.”

There are, however, growing concerns that AI and facial recognition technologies are actually relying on biased data and algorithms and could cause great harm. It is particularly alarming in the context of criminal justice, where machines could make decisions about people’s lives – such as the length of a prison sentence or whether to release someone on bail – based on biased data from a court and policing system that is racially prejudiced at every step.

Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect whether that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not necessarily translate into criminal action: “Even people highly disposed to committing a crime are very unlikely to commit a crime.”

He also cited an example, referenced in the Economist – which first reported the sexual orientation study – of nightclubs and sports stadiums facing pressure to scan people’s faces before they enter to detect possible threats of violence.

Kosinski noted that in some ways, this wasn’t much different from human security guards making subjective decisions about people they deem too dangerous-looking to enter.

The law generally considers people’s faces to be “public information”, said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone’s face to produce new information rises to the level of privacy invasion.

Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: “Now, you’re putting people’s lives at risk.”

Even when an AI makes highly accurate predictions, some percentage of its predictions will still be incorrect.

“You’re going down a very slippery slope,” said Keenan, “if one in 20 or one in a hundred times … you’re going to be dead wrong.”

Contact the author: sam.levin@theguardian.com

--------------------------------------------------------------------------------

The AI Congress is the leading international artificial intelligence show and takes place at the O2 in London on January 30th & 31st.

Uniting international enterprises and business leaders, AI experts, leading tech pioneers and investors that are driving the implementation of AI in global business, the AI Congress will provide an innovative forum to discuss, engage, challenge and discover the incredible opportunities in AI across all major market sectors. Register today! https://www.theaicongress.com/bookyourtickets/

How artificial intelligence is impacting the service industry

Every single day, millions of dollars are spent in call centres simply to answer the same questions, thousands of times over. Let us see how AI is going to affect IT service management.

In this age of artificial intelligence, managers at all levels need to accept that a significant chunk of their jobs might be done better and more efficiently by machines. Surveys have found that managers at all levels spend the bulk of their time on administrative tasks, such as making schedules or writing reports. These are the very tasks most likely to be automated in the near future; in fact, some companies have already made improvements by transitioning these tasks to AI.

The same research found that to succeed in this age of automation, managers need the skill of judgement, which includes thinking creatively, analysing and interpreting data, and developing strategy. Other skills that stood out as most important were social networking, coaching and collaboration. These are the skills that will help managers stand out once AI takes over the administrative tasks they perform today. Machines will never completely replace managers, but they will give managers more time. The real priority for managers, then, should be refocusing on the tasks that only humans can do, using their creativity, collaborative attitude, empathy and power of judgement.

Machine vs human

AI is very good at eliminating human error. Humans often deviate from standard, well-defined processes, and in some settings such deviations can be fatal: a critical patient operated on by a team of doctors may die because of a minor human error, or a pilot juggling thousands of in-flight computations may cause a plane to crash. In such situations, a trained intelligent bot can take decisions that stay as close as possible to the defined standard.

Let us try to understand why automation is considered a threat to future jobs. It began with basic automation, wherein a particular manual task was programmed to be done by a machine. The task, when done by the machine, was completed faster and, of course, without human error. This led to the automation of more such menial tasks, which improved efficiency; as a result, the productivity of entire organisations grew many-fold.

Now, with the evolution of technologies like artificial intelligence, machine learning and deep learning, machines can actually learn on their own and be taught to do more complicated tasks that are currently done by humans.

IT customer support is one such domain: intelligent bots, well trained on large sample datasets, know how to respond to every kind of customer query or ticket. These bots are intuitive enough to adapt and improve themselves without human intervention; they become better with time.

Understanding IT service management (ITSM)

IT Service Management, also called ITSM, refers to implementing, managing and delivering quality IT services in the best possible way to meet the needs of a business.

It ensures an appropriate mix of people, processes and technology is in place to provide value to a business. Essentially, ITSM is about value: taking your resources and capabilities and making something valuable for your business.

According to reports, the global outsourced customer services market is projected to reach $84.7 billion by 2020. Another study revealed that companies lose more than $62 billion due to poor customer service. Clearly, no company can afford to provide mediocre customer support.

Why AI is required in customer service

As noted above, millions of dollars are spent in call centres every single day simply to answer repeated questions. In other words, providing customer support is expensive. A study of the customer support market found the following:

  1. Around 270 billion phone calls are made to call centres annually, costing around $600 billion.
  2. One out of every two incoming calls requires escalation or goes unresolved.
  3. 61 percent of all calls could have been resolved with better access to information.

Entities involved in customer service

A customer support service involves various entities; let us understand how AI is going to affect each of them:

  1. Agent: AI can recommend solutions and classifications and help the agent understand what the issue is, making the agent smarter and able to give the customer the best reply.
  2. Customer: AI can deflect cases by answering questions on the website, helping customers find the right solution themselves, and can suggest how to push solutions to the customer faster. Clubbed together with chatbots, AI helps respond to customers’ queries faster and more accurately, based on real-time data analysis (see the sketch after this list).
  3. Operations: With AI, one can predict the close time of a customer issue and allocate the case to someone knowledgeable in that specific topic.
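
As a concrete illustration of case deflection, here is a minimal sketch of a support bot that matches an incoming query against a small knowledge base using TF-IDF similarity. It is a generic example rather than any vendor's product; the knowledge-base entries and the similarity threshold are invented for illustration.

```python
# Minimal case-deflection sketch: match an incoming query against a small
# knowledge base with TF-IDF similarity (illustrative only; real systems
# use far richer models and data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge-base articles (question -> canned answer).
kb = {
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "How do I track my ticket status?": "Open the portal and check 'My tickets'.",
    "Why is the VPN not connecting?": "Restart the client and verify your token.",
}

questions = list(kb.keys())
vectorizer = TfidfVectorizer().fit(questions)
kb_matrix = vectorizer.transform(questions)

def deflect(query: str, threshold: float = 0.3):
    """Return a canned answer if the query is close enough to a KB entry,
    otherwise escalate to a human agent."""
    sims = cosine_similarity(vectorizer.transform([query]), kb_matrix)[0]
    best = sims.argmax()
    if sims[best] >= threshold:
        return kb[questions[best]]
    return "Escalating to a human agent."

print(deflect("I forgot my password, how can I reset it?"))
```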

What AI has brought to customer service business

The following are some areas where artificial intelligence has proved to be very effective:

  1. Increased customer satisfaction
  2. Customer attrition reduction
  3. Customer effort reduction
  4. Higher customer satisfaction with agent-provided service
  5. Reduced agent on-boarding time
  6. Reduced cost per ticket
  7. Improved business outcome
  8. Reduced costs of service operations
  9. Increased revenue

Companies improving customer service by using artificial intelligence

Let us take a look at a few startups working on automating the customer service process at various levels:

  1. Neva.ai: Automates customer service and support using artificial intelligence and natural language processing
  2. DigitalGenius: Brings practical applications of deep learning and artificial intelligence to customer service operations of large companies
  3. IPSoft: Assists with service desk support, helps field engineers troubleshoot, and supports procurement
  4. Next IT: Assists with customer service
  5. Digital Reasoning: Scans up to billions of communications from thousands of traders to spot language patterns and identify potentially fraudulent activity
  6. Luminoso: Analyses customer feedback to propose product design changes; reviews how consumers feel about food items or grocery store experience

What the future holds for ITSM

Artificial intelligence is still at a very early stage and has a long way to go before it can replace the human workforce, but it is moving steadily in that direction and making huge progress across multiple domains.

Anyone concerned about their career should start learning skills that will not be affected by automation. These draw on human empathy (sensing the emotions of others) and the ability to think rationally and come up with algorithms to solve problems.

We can sum up by saying that artificial intelligence is powerful, but it is still artificial and lacks the natural capabilities that humans possess.

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)

-------------------------------------------------------------------------------------------------------------------

Artificial Intelligence Is Learning How To Develop Games

A screenshot from Georgia Tech's clone engine

An AI in Georgia can recreate a game just by watching it being played

Researchers at Georgia Institute of Technology are developing an AI that can recreate a game engine simply by watching gameplay. 

This technology, as detailed in a press release, is being created to help video game developers "speed up game development and experiment with different styles of play." In the team's most recent experiments, the AI watched two minutes of Super Mario Bros. gameplay, then built its own version of the game by studying the frames and predicting future events.

"To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single 'speedrunner' video, where a player heads straight for the goal," Georgia Institute's communications officer Joshua Preston explained. This school of thought, he added, made the most difficult scenario possible for training the AI.

By letting the AI study the actual frames of the game, the researchers found it could predict future frames far closer to the real frames of Super Mario Bros. than in other tests the team had run with different methods. This simplifies the process: the AI need only watch a video of a game in action to begin replicating the game and learning its engine.

"Our AI creates the predictive model without ever accessing the game’s code, and makes significantly more accurate future event predictions than those of convolutional neural networks,” lead researcher Matthew Guzdial said in the release. “A single video won’t produce a perfect clone of the game engine, but by training the AI on just a few additional videos you get something that’s pretty close.”

Once the team had their model, there was only one test left: how did it play? A second AI system was implemented to test the recreated level and ensure the player wouldn't fall through it – like a QA tester, but a highly intricate AI one.

The researchers found "the AI playing with the cloned engine proved indistinguishable compared to an AI playing the original game engine."

"To our knowledge this represents the first AI technique to learn a game engine and simulate a game world with gameplay footage," associate professor of Interactive Computing and co-investigator on the project Mark Riedl said.

The researchers go on to stress that, as of right now, their AI systems work best when the majority of the action happens on screen. Games where action happens away from the player's direct frame of sight might prove difficult for the system.

The nascent technology does raise the question of what sort of impact a more realized version of the AI could have on the game industry. Specifically, could it eliminate the need for certain jobs, like QA tester, in the game industry? 

However, Georgia Tech's Riedl says developers don't need to fear for their job security; this technology will be an aid to development, not a replacement. Riedl tells Glixel that this AI will help novice game developers create projects once out of their reach. Using this kind of AI, developers with no coding or design experience could show the AI how a game should work, and it would then replicate it.

"Instead of putting people out of work, this will make it possible for people to create games that were otherwise unable to do so," Riedl said. "That makes it possible for more people to create – increasing the size of the pie instead of supplanting individuals. Second, professionals may be able to build games faster by having the system make an initial guess about the mechanics. Working more efficiently doesn’t necessarily put people out of work, but does allow them to make bigger and better games in the time available."

What about QA testers? Well, according to Riedl, they'll still be necessary thanks to the one thing they have that game-playing AI systems lack: the human touch.

"[Video games] are made to be enjoyed by humans," Riedl said. "Because of that you're always going to need humans to actually test the games. AI might help to test things we simply can't test currently but can be formalized mathematically, like game balance ... but one will need to use humans to see if other humans will enjoy the game for the foreseeable future."

-------------------------------------------------------------------------------------------------------------------

Experimenting with machine learning in media

From the Gutenberg printing press in 1440 to virtual reality today, advances in technology have made it possible to discover new audiences and new ways of expressing ideas. And there’s more to come.

Machine learning is the latest technology to change how news, entertainment, lifestyle and sports content is created, distributed and monetized. YouTube, for example, has used machine learning to automatically caption more than one billion videos to make them more accessible to the 300 million+ people who are deaf or hard of hearing.

While many media executives are increasingly aware of machine learning, it's not always apparent which problems are best suited to it and which solutions will have the greatest impact.

Machine learning can help transform your business with new user experiences and better monetization of your content, while reducing your operational costs.

Executives, here are three things to keep in mind as you consider and experiment with machine learning to transform your digital business:

  1. The time to experiment with machine learning is right now. The barriers to using machine learning have never been lower. In the same way companies started thinking about investing in mobile 10 years ago, the time to start exploring machine learning is right now. Solutions like Google Cloud Machine Learning Engine have made powerful machine learning infrastructure available to all without the need for investment in dedicated hardware. Companies can start experimenting today with Google Cloud Machine Learning APIs at no charge—and even developers with no machine learning expertise can do it. For example, in less than a day, Time Inc. used a combination of Cloud Machine Learning APIs to prototype a personalized date night assistant that integrated fashion, lifestyle and events recommendations powered by its vast corpus of editorial content.

  2. Bring together key stakeholders from diverse teams to identify the top problems to solve before you start. Machine learning is not the answer to all of your business woes, but a toolkit that can help solve specific, data-intensive problems at scale. With limited time and people to dedicate to machine learning applications, start by bringing together the right decision makers across your business, product and engineering teams to identify the top problems to solve. Once the top challenges are identified, teams need to work closely with their engineering leads to determine technical feasibility and prioritize where machine learning could have the highest impact. Key questions that will help prioritize efforts are: Can current technology reasonably solve the problem? What does success look like? What training data is needed, and is that data currently available or does it need to be generated? This was the approach taken during a recent Machine Learning for Media hackathon hosted by Google and the NYC Media Lab, and it paid off with clearer design objectives and better prototypes. For example, the Associated Press saw an opportunity to quickly generate sports highlights from analysis of video footage, so it created an automated, real-time sports highlights tool for editors using the Cloud Video Intelligence API (a sketch of calling a Cloud API follows this list).

  3. Machine learning has a vibrant community that can help you get started. Companies can kickstart their machine learning endeavors by plugging into the vibrant and growing machine learning community. TensorFlow, an open source machine learning framework, offers resources, meetups, and more. And if your company needs more hands-on assistance, Google offers a suite of services through the Advanced Solutions Lab to work side-by-side with companies to build bespoke machine learning solutions. There are also partners with deep technical expertise in machine learning that can help. For example, Quantiphi, a machine learning specialist, has been working closely with media companies to extract meaningful insights from their video content using a hybrid of the Cloud Video Intelligence API and custom models created using TensorFlow. However you decide to integrate machine learning technologies into your business, there's a growing ecosystem of solutions and subject matter experts available to help.
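
To give a flavour of how low the barrier now is, here is a minimal sketch of calling one of these Cloud APIs (the Vision API's label detection) over REST with an API key. The key and image path are placeholders; production setups more commonly authenticate with service accounts, and quotas and pricing apply beyond the free tier.

```python
# Minimal sketch: label an image with the Google Cloud Vision REST API.
# YOUR_API_KEY and photo.jpg are placeholders, not working values.
import base64
import requests

with open("photo.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": "YOUR_API_KEY"},
    json=body,
)
for label in resp.json()["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 3))
```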

------------------------------------------------------------------------------------------------------------------

Ocado launches Alexa app for voice-activated online shopping

(Credit: Amazon)

Online grocery retailer Ocado has announced it will be the first supermarket in the UK to launch an app for the voice-controlled personal assistant, Amazon Alexa.

The Ocado app for Amazon’s smart home speaker, Echo, will enable customers to use voice commands to add products to an existing order or basket, to check their orders before submitting them, and to find out which products are in season and how best to include them in recipes. They’ll also be able to track deliveries.

In order to understand individual customers’ product preferences, the Ocado Technology team built an Ocado Conversational Service, based on artificial intelligence (AI), which is able to suggest both related and previously bought items for customers to add to their baskets.

Behind the scenes

In a blog post about the new service, the Ocado Technology e-commerce team explains how, when it first started building its Alexa ‘skill’ (a chunk of functionality built to support a specific use of Amazon’s smart speakers), it quickly realized it would be important to support a “natural, bi-directional conversational flow.”

This is what allows the service to ‘understand’ orders made in different ways, as well as commands that allow a customer to check their basket’s contents, for example, or verify the total price of an order.

According to the blog post, Alexa converts the audio stream into a command (for example, ‘add to basket’) and a search term (such as ‘cheese’), based on examples provided by Ocado, which has trained Alexa to recognize the top 15,000 most commonly searched items from Ocado.com.

These text queries are then passed on to the Ocado skill, which also runs on AWS, where the request is processed and an appropriate response is established using internal APIs [application programming interfaces].
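
Ocado hasn't published its skill code, but the request and response envelopes of any Alexa skill are standard JSON. Below is a heavily simplified sketch of a Lambda-style handler for a hypothetical add-to-basket intent; the intent name, the Product slot and the add_to_ocado_basket helper are all invented for illustration.

```python
# Simplified sketch of an Alexa skill handler in the style described above.
# "AddToBasketIntent", the "Product" slot and add_to_ocado_basket() are
# hypothetical; only the Alexa request/response envelope is standard.

def add_to_ocado_basket(product: str) -> bool:
    """Placeholder for the internal API call the article mentions."""
    return product.lower() != "unobtainium"  # pretend stock check

def speak(text: str, end: bool = False) -> dict:
    """Wrap text in the standard Alexa JSON response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end,
        },
    }

def lambda_handler(event: dict, context=None) -> dict:
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "AddToBasketIntent"):
        product = request["intent"]["slots"]["Product"]["value"]
        if add_to_ocado_basket(product):
            return speak(f"I've added {product} to your order. Anything else?")
        return speak(f"Sorry, {product} seems to be out of stock. "
                     "How about an alternative?")
    return speak("Sorry, I didn't catch that.", end=True)
```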

(Credit: Ocado Technology)

It’s this response that leads to this two-way conversation, the blog post explains. “If the request can be fulfilled, i.e. we have the item in stock, the Ocado skill will send an output to Alexa; for example, ‘I’ve added Cathedral mature cheddar to Thursday’s Ocado order. Can I help you with anything else?’ However, if the item is out of stock, unavailable or cannot be found, the Ocado skill will not only offer the appropriate notification, but can also make alternative suggestions; ‘Sorry the Cathedral City mature cheddar you usually buy is out of stock. How about trying the Ocado organic mature cheddar instead?’”

This means that shoppers can gradually collate their shopping basket over a few days, as and when they finish items in their kitchens.

Consumer confidence?

Ocado is clearly hoping this could mean an end to hastily conducted audits of our kitchen cabinets before a shop, or to typing into online systems the reminders previously scribbled on shopping lists, sticky notes or kitchen whiteboards.

“Grocery shopping should be quick, easy and convenient,” said Lawrence Hene, marketing and commercial director at Ocado. “Using voice technology, we’ve made it even easier, by developing our new app that will enable customers to add to their Ocado baskets without even lifting a finger.”

Commenting on the launch, John Rakowski, director of technology strategy at application monitoring and analytics specialist AppDynamics, said the announcement demonstrates continued momentum in speech-activated services and is a “very intriguing development” in the battle for online supermarket shoppers.

“While there may be some mainstream consumer scepticism about the practical value of voice assistants, we’re certain to see further deployments of the technology by Amazon and other digital retailers in the near future,” he said. “Ten years ago, the launch of the iPhone and the advent of apps drew a fair degree of initial scepticism. Now apps are part of everyday life, and more so, will become crucial in the retail battleground.”  

Meanwhile at Fujitsu, Rupal Karia, head of commercial for the UK and Ireland, suggested that the pressure is on for retailers to give customers “what they want, before they know they want it.”

In late July, consumer confidence levels in the UK slumped to the same levels seen immediately after the Brexit referendum, against a backdrop of rising inflation and weakening wage growth.

According to Karia, retailers must use technology to differentiate the experience that they can offer customers or face a worrying prospect, namely “being the next generation of retailers to be pushed out of the high street for good.”

-------------------------------------------------------------------------------------------------------------------

The sky-rocketing demand for AI experts results in recruitment revolution

The demand for AI and machine learning experts is skyrocketing, with a predicted 50-60% gap between supply and demand by 2018. The AI market, currently estimated at $0.6bn, is expected to reach $37bn by 2025, a compound annual growth rate of around 50%.

The number of available AI and ML projects is overwhelming, and data scientists are buried in recruiters’ emails every day. It is difficult to find and single out the cutting-edge projects, and academics spend too much of their valuable time looking for interesting clients and talking to mass recruiters. At the same time, companies struggle to find the resources they need to innovate and automate their processes with machine learning.

Brainpool was created to solve this problem. It is a matching platform in which data scientists and clients can easily find each other on a project basis, without unnecessary admin, paperwork or recruiters. Both sides of the marketplace are scored on multiple factors, such as the sophistication or difficulty level of a project and the skills required to complete it, and Brainpool’s algorithm matches the two sides to ensure satisfaction for both the data scientist and the client.
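
Brainpool's actual scoring and matching algorithm is not public; purely as a hypothetical illustration, a simple match score could combine the overlap between a project's required skills and a candidate's skill set with how closely the candidate's level matches the project's difficulty.

```python
# Purely hypothetical matching sketch; Brainpool's real scoring factors
# and weights are not public.

def jaccard(a: set, b: set) -> float:
    """Overlap between two skill sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match_score(project_skills: set, candidate_skills: set,
                project_difficulty: float, candidate_level: float) -> float:
    """Combine skill overlap with how well seniority matches difficulty.
    The 0.7/0.3 weighting is an arbitrary illustrative choice."""
    skill_fit = jaccard(project_skills, candidate_skills)
    level_fit = 1.0 - abs(project_difficulty - candidate_level)
    return 0.7 * skill_fit + 0.3 * level_fit

print(match_score({"nlp", "tensorflow"},
                  {"nlp", "tensorflow", "vision"}, 0.8, 0.9))
```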

Most members of the pool have PhDs from universities such as UCL, Oxford, Cambridge and Harvard, and have worked for leading AI companies such as Google DeepMind and Spotify. Brainpool gives researchers an opportunity to work on interesting client projects across industries, while making sure they have time to continue their research and stay on top of the latest AI and ML developments.

The idea originated in UCL’s computing department. CEO Paula Parpart, who is currently finishing her PhD in Computational Cognitive Science, personally experienced the problem described above and decided to find a solution. Rather than being yet another recruitment platform, Brainpool is an academia-based network of top-level data scientists, where they can exchange ideas, learn from each other, and develop algorithms and products that solve recurring problems across industries.

Pretty much every industry will be transformed by AI and ML over the next decade. Make sure you are ready for the change and have the right resources to keep up and stay ahead of the competition.

To find out more visit brainpool.ai

Do AI Voice Assistants Have A Place In Business?

Fans of science fiction have long been anticipating always-on voice assistance both at home and while on the job.

While voice assistant systems didn’t come online as early as many anticipated, there are now a number of options, both widely used and in development, and it’s time for businesses to determine whether voice assistance is right for their office.

Android and iOS rule the smartphone world, and nearly all devices running these offer voice assistance through Google Assistant and Siri. While these programs are constantly evolving, they already offer broad capabilities, making it easier to schedule activities, set reminders, and find answers to questions. Microsoft’s Cortana is fast making inroads as well, especially since it made its way to PCs, and Amazon’s Alexa has outdone expectations for providing home-based voice assistance. While these programs make voice assistance popular among consumers, they haven’t made much progress in an office setting.

Office Benefits

Voice assistance programs are great for organizing your personal life, and they can certainly help with scheduling work-related activities. However, true success in the business world will require a more comprehensive approach.

Artificial intelligence powers all voice assistant programs, and tailoring them to business needs will be important for gaining more traction. While consumer-targeted voice assistance is great for certain tasks, people in the business world often need finely tuned information, and new AI paradigms might be able to meet this need.

Robust AI

Perhaps the most famous AI program outside of consumer devices is IBM’s Watson, which made headlines by scoring strong wins against human opponents on Jeopardy! Since that victory, Watson has found its way into hospitals and other medical practices, where it can provide medical guidance based on a large database of scientific literature and excellent natural-language recognition capabilities. Other companies are making large investments in the AI field as well, with Qualcomm recently making a large push.

Collaborative Voice Assistance

Perhaps the most valuable field for AI in the office, at least initially, will be collaboration tools. Personal assistance is helpful, but companies operate as teams, and collaboration is the key to success. Voice assistance can help keep everyone updated, and periodic voice reminders can shine in cases where email might be ignored.

Furthermore, voice assistance makes it easier to input new events, potentially encouraging workers to share more information. While popular consumer voice assistance programs can be adjusted to these tasks, it might take a new startup to start transforming the office.

Do People Like Voice Assistance?

Although people frequently try out voice assistance programs, studies show that few continue using them for extended periods of time; most people stop after a few days or weeks. Much like video calling, it might be the case that people simply prefer typed or written notes over automated assistants. To make progress in the business field, voice assistant programs will need to demonstrate real value that lasts beyond when the novelty wears off.

Robotic Service: A Potential Backdoor

Similar to voice assistance, robotic service interfaces are expected to increasingly come online in the coming years, replacing humans during checkout at retail locations. These systems share a number of similarities with voice assistance programs, and packages that combine both front-room and back-room artificial intelligence might provide the breakthrough an office needs to standardize on a voice assistance system.

Fragmentation: Voice Assistance’s Biggest Threat

Large tech companies are investing heavily into voice assistance and AI, and these companies have made tremendous progress. However, businesses want to ensure that they’re investing in technology that will last, and it’s unclear if one voice assistant will eventually reign supreme and pick up the support needed from vendors and third-party developers to thrive in the business environment. Although voice assistance will continue making headway into offices, it’s not clear if or when it will radically change office operations around the globe, or what the killer app will be.

It’s difficult to determine which technologies will eventually take hold. Video calling, a staple of science fiction in the 20th century, has only carved out a niche role, as people seem to prefer voice conversation. Few predicted instant messaging would become popular, but it’s now a significant communication portal. Voice assistance will almost certainly play a role for certain niche purposes, but it remains to be seen how popular it will eventually become.

-------------------------------------------------------------------------------------------------------------------

SAM IS A CONSTRUCTION ROBOT THAT CAN LAY BRICKS 6 TIMES FASTER THAN YOU CAN

You can’t throw a rock in 2017 without hitting some new walk of life where robots are being employed. The latest? A bricklaying robot called SAM100 (Semi-Automated Mason) that builds walls six times faster than a human bricklayer. (And probably about 10 times faster than the majority of Digital Trends writers.)

Created by New York-based company Construction Robotics, SAM is ready and willing to lay 3,000 bricks per day, using its combination of a conveyor belt, robotic arm, and concrete pump. By comparison, a human builder will average around 500 bricks per day.

“For a lot of different reasons, the construction industry has been slow to adopt innovation and change,” construction manager Zachary Podkaminer told Digital Trends. “Compare a construction site today from a picture of one years ago and, with the exception of a few tools, it really hasn’t changed all that much. Now it seems the industry is finally evolving and we’re trying to be a part of that by bringing technology to construction sites.”

Costing around $500,000, SAM isn’t cheap, but it’s a potentially transformative tool for future building sites. SAM is already working on sites around the U.S. and recently received an upgrade to SAM OS 2.0, which allows it to lay “soldier course” bricks.

Is Construction Robotics worried that it’s putting human laborers out of business, though?

“We don’t see construction sites being fully automated for decades, if not centuries,” Podkaminer said. “This is about collaboration between human workers and machines. What SAM does is to pick up the bricks, put mortar on them, and put them on the wall. It still requires a mason to work alongside it. SAM’s just there to do the heavy lifting.”

At present, SAM’s human partner is required to smooth over the concrete before SAM places more bricks. While some people are going to be concerned that robots like this will replace humans on construction sites, if — as Podkaminer notes — robots can do the backbreaking heavy lifting and leave people to do other jobs, that could work out best for all involved.

Plus, we presume it doesn’t shout mean comments about our skinny arms as we walk past the sites it is working on.

-------------------------------------------------------------------------------------------------------------------

'Self-driving' lorries to be tested on UK roads

Small convoys of partially driverless lorries will be tried out on major British roads by the end of next year, the government has announced.

A contract has been awarded to the Transport Research Laboratory (TRL) to carry out the tests of vehicle "platoons": up to three lorries travelling in formation, with acceleration and braking controlled by the lead vehicle. But the head of the AA said platoons raised safety concerns.

The TRL will begin trials of the technology on test tracks, but these trials are expected to move to major roads by the end of 2018. The lead vehicle in each platoon will be controlled by a human driver, and humans will also control the steering in the lorries behind, though their acceleration and braking will mirror the lead vehicle's.

Lorries driving close together could see the front vehicle pushing air out of the way, making the other vehicles more efficient and lowering their emissions.

This could lead to fuel savings for companies, which will hopefully be passed on to consumers, Transport Minister Paul Maynard said. The government has been promising such a project since at least 2014. Last year, for example, it announced its intention to carry out platooning trials but was later frustrated when some European lorrymakers declined to participate. A Department for Transport spokesman told the BBC that the experiments are now expected to go ahead as the contract has been awarded.

The TRL has announced its partners for the project:

  • DAF Trucks, a Dutch lorry manufacturer
  • Ricardo, a British smart tech transport firm
  • DHL, a German logistics company

Platooning has been tested in a number of countries around the world, including the US, Germany and Japan. However, British roads present a unique challenge, said Edmund King, president of the AA.

"We all want to promote fuel efficiency and reduce congestion but we are not yet convinced that lorry platooning on UK motorways is the way to go about it," he said, pointing out, for example, that small convoys of lorries can block road signs from the view of other road users."We have some of the busiest motorways in Europe with many more exits and entries." "Platooning may work on the miles of deserted freeways in Arizona or Nevada but this is not America," he added.

His comments were echoed by the RAC Foundation. Its director, Steve Gooding, said: "Streams of close-running HGVs could provide financial savings on long-distance journeys, but on our heavily congested motorways - with stop-start traffic and vehicles jostling for position - the benefits are less certain."

Campaign group the Road Haulage Association said "safety has to come first". Transport Minister Paul Maynard said platooning could lead to cheaper fuel bills, lower emissions and less congestion.

"But first we must make sure the technology is safe and works well on our roads, and that's why we are investing in these trials," he said.

How AI can help make safer baby food (and other products)

Editor’s note: Whether you’re growing cucumbers or building your own robot arm, machine learning can help. In this guest editorial, Takeshi Ogino of Kewpie tells us how they used machine learning to ensure the quality and safety of the ingredients that go into their food products.

Quality control is a challenge for most industries, but in the world of food production, it’s one of the biggest. With food, products are as good as the ingredients that go into them. Raw materials can vary dramatically, from produce box to produce box, or even from apple to apple. This means inspecting and sorting the good ingredients from the bad is one of the most important tasks any food company does. But all that work inspecting by hand can be time-consuming and arduous both in terms of overhead and manpower. So what’s a food company to do?

At Kewpie Corporation, we turned to a surprising place to explore better ways to ensure food quality: artificial intelligence built on TensorFlow.

Although Kewpie Corporation is most famous for our namesake mayonnaise, we’ve been around for 100 years with dozens of products, from dressings to condiments to baby foods. We’ve always believed that good products begin with good ingredients.

Ingredients that are safe and also give you peace of mind

Last October, we began investigating whether AI and machine learning could ensure the safety and purity of our ingredients faster and more reliably than ever.

The project began with a simple question: “What does it mean to be a ‘good’ ingredient?” The ingredients we purchase must be safe, of course, and from trustworthy producers. But we didn’t think that went far enough. Ingredients also need to offer peace of mind. For example, the color of potatoes can vary in ways that have nothing to do with safety or freshness.

Kewpie depends on manual visual detection and inspection of our raw ingredients. We inspect the entire volume of ingredients used each day, which, at four to five tons, is a considerable workload. The inspection process requires a certain level of mastery, so scaling this process is not easy. At times we’ve been bottlenecked by inspections, and we’ve struggled to boost production when needed.

We’d investigated the potential for mechanizing the process a number of times in the past. However, the standard technology available to us, machine vision, was not practical in terms of precision or cost. Using machine vision meant setting sorting definitions for every ingredient. At the Tosu Plant alone we handle more than 400 types of ingredients, and across the company we handle thousands.

That’s when I began to wonder whether using machine learning might solve our problem.

Using unsupervised machine learning to detect defective ingredients

We researched AI and machine learning technology across dozens of companies, including some dedicated research organizations. In the end, we decided to go with TensorFlow. We were impressed with its capabilities as well as the strength of its ecosystem, which is global and open. Algorithms that are announced in papers get implemented quickly, and there’s a low threshold for trying out new approaches.

One great thing about TensorFlow is that it has such a broad developer community. Through Google, we connected with our development partner, BrainPad Inc, who impressed us with their ability to deliver production level solutions with the latest deep learning. But even BrainPad, who had developed a number of systems to detect defective products in manufacturing processes, had never encountered a company with stricter inspection standards than ours. Furthermore, because our inspections are carried out on conveyor belts, they had to be extremely accurate at high speeds. Achieving that balance between precision and speed was a challenge BrainPad looked forward to tackling.

Sorting diced potato pieces at the Tosu Plant.

To kick off the project, we started with one of our most difficult inspection targets: diced potatoes. Because they’re an ingredient in baby food, diced potatoes are subject to the strictest scrutiny both in terms of safety and peace of mind. That meant feeding more than 18,000 line photographs into TensorFlow so that the AI could thoroughly learn the threshold between acceptable and defective ingredients.

Our big breakthrough came when we decided to use the AI not as a ”sorter” but as an ”anomaly detector.” Designing the AI as a sorter meant supervised learning, a machine learning approach that requires a label for each instance in order to train the model accurately. In this case that meant feeding TensorFlow an enormous volume of data on both acceptable and defective ingredients, and it was hugely challenging for us to collect enough defective sample data. By training the system as an anomaly detector instead, we could employ unsupervised learning: we only needed to feed it data on good ingredients. The system then learned how to identify acceptable ingredients and to reject as defective any ingredient that failed to match. With this approach, we achieved both the precision and the speed we wanted, with far fewer defective samples.
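
Kewpie and BrainPad haven't published their model, but the anomaly-detector idea can be sketched with a small convolutional autoencoder in TensorFlow: train it to reconstruct only images of good ingredients, then flag anything it reconstructs poorly. The architecture, image size and error threshold below are illustrative assumptions, not the production system.

```python
# Illustrative anomaly-detector sketch in the spirit described above:
# an autoencoder trained only on images of acceptable ingredients.
# Architecture, image size and threshold are assumptions, not Kewpie's.
import numpy as np
import tensorflow as tf

def build_autoencoder(size=64):
    inputs = tf.keras.Input(shape=(size, size, 3))
    x = tf.keras.layers.Conv2D(16, 3, strides=2, padding="same",
                               activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same",
                               activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same",
                                        activation="relu")(x)
    outputs = tf.keras.layers.Conv2DTranspose(3, 3, strides=2, padding="same",
                                              activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Train on good ingredients only: model.fit(good_images, good_images, ...)
# At inspection time, a high reconstruction error marks an anomaly:
def is_defective(model, image, threshold=0.01):
    error = np.mean((model.predict(image[None, ...]) - image) ** 2)
    return error > threshold
```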

By early April, we were able to test a prototype at the Tosu Plant. There, we ran ingredients through the conveyor belt and had the AI identify which ones were defective. We had great results. The AI picked out defective ingredients with near-perfect accuracy, which was hugely exciting to our staff.

The inspection team at the Tosu Plant.

It’s important to note that our goal has always been to use AI to help our plant staff, not replace them. The AI-enabled inspection system performs a rough removal of defective ingredients, then our trained staff inspects that work to ensure nothing slips through. That way we get “good” ingredients faster than ever and are able to process more food and boost production.

Today we may only be working with diced potatoes, but we can’t wait to expand to more ingredients like eggs, grains and so many others. If all goes well, we hope to offer our inspection system to other manufacturers who might benefit. Existing inspection systems such as machine vision have not been universally adopted in our industry because they're expensive and require considerable space. So there’s no question that the need for AI-enabled inspection systems is critical. We hope, through machine learning, we’re bringing even more safe and reassuring products to more people around the world.

China Citic and Baidu to rely on AI for joint venture bank

Internet search giant Baidu and China Citic Bank have received regulatory approval to launch a direct banking joint venture offering loans and deposit accounts to Chinese consumers.

Baidu is looking to emulate the success of local e-commerce groups Tencent and Alibaba, which have both opened online banks following the relaxation of bank licensing rules by the Chinese authorities in 2015.

China Citic and Baidu have invested $313.34 million in cash as registered capital for the venture, in which Baidu will own 30%. Initially named Baixin Bank when the plans were first reported in 2015, the venture's branding has shifted with the prevailing tide and it will now launch as aiBank, reflecting a penchant for the use of artificial intelligence in running the business.

In a statement, Baidu says: “AI is the core element of the bank’s branding, and the bank will offer a spate of innovative services by riding on Baidu’s technology in AI and massive amounts of data."

Separating Fact From Fiction: The Role Of Artificial Intelligence In Cybersecurity

Credit: Shutterstock

Artificial intelligence (AI) has become such a buzzword that it’s at risk of becoming no more than tech marketing pixie dust. Just sprinkle a little here and suddenly, your solution inherits the foresight of a self-driving Tesla and the simplicity of an Amazon Echo.

As more solutions crowd the cybersecurity market touting the benefits of AI, it’s important to read through the hype. Machine learning (ML) can deliver transformative insights in some domains, but it has limitations. My goal is to help you pick apart vendor claims. If you plan to evaluate a solution that uses ML for cybersecurity, then hopefully this will inform your decision-making -- or at least give you a framework for learning more.

Do You Want Artificial Intelligence Or Machine Learning?

The answer is machine learning. As a cybersecurity practitioner, I tend to be a little prickly on this. AI implies cognitive introspection on the part of the tech -- an ability to improve itself based on understanding its own performance. We're nowhere near this yet.

ML is a subfield of computer science that helps computers learn based on their inputs and decide how to behave without being explicitly programmed to do so. The ML practitioner will approach the task with a large and developing toolset. Different algorithms have different uses, and techniques overlap with computational statistics, mathematical optimization and data mining.

ML Uses Algorithms That Can Learn From Data

An ML algorithm builds a model that represents the behavior of a real-world system from data that represents samples of its behavior. Training can be supervised -- with prelabeled example data -- or unsupervised. Either way, the data needs to be a representative of the real world. Without representative data, no algorithm can offer useful and generalizable insights.

The challenge in cybersecurity is that the initial phases of an attack, such as malware or spear-phishing emails, vary every time the attack is launched, making it impossible to detect and classify with confidence. (This is another way of restating the famous mathematical proof attributed to Alan Turing in the 1930s of the so-called halting problem. In this case, it’s impossible for a computer program to determine whether another program is good or bad.)

With good training data, state-of-the-art ML algorithms can do a pretty good job of training a model that can then be used to sift through new, unlabeled data. The problem is the phrase “pretty good job.” It’s hard to know beforehand just how accurate the classification of new data will be. (Was the training data adequate? Is the model good at teasing apart the grey -- things that may be good, bad, etc.?) What’s beyond doubt, however, is that every algorithm will make mistakes: it could generate false alerts or fail to detect the bad guy.
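Since mistakes are inevitable, it helps to measure them explicitly. A small, purely illustrative sketch of computing the two failure modes named above -- false alerts and missed detections -- from a labeled test set:

// Measure a detector's two failure modes against ground truth.
type Outcome = { predictedBad: boolean; actuallyBad: boolean };

function errorRates(results: Outcome[]) {
  let falseAlerts = 0; // benign activity flagged as malicious
  let misses = 0;      // malicious activity not flagged
  let benign = 0;
  let malicious = 0;
  for (const r of results) {
    if (r.actuallyBad) {
      malicious += 1;
      if (!r.predictedBad) misses += 1;
    } else {
      benign += 1;
      if (r.predictedBad) falseAlerts += 1;
    }
  }
  return {
    falseAlertRate: benign > 0 ? falseAlerts / benign : 0,
    missRate: malicious > 0 ? misses / malicious : 0,
  };
}

A vendor quoting a single "accuracy" number can hide a poor trade-off between these two rates, so ask for both.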

Machine Learning Isn’t Perfect And Can Be Fooled, But It’s Making Progress

To summarize, ML is bad when there’s massive variation in the data that makes training useless. For example, in anti-virus, polymorphism makes every attack using the same underlying malware look different. ML can’t adapt to this variance. Moreover, ML is not perfect. Depending on the techniques used and the domain of application, it will fail to spot all attacks and may falsely classify activity.

Despite these limitations, I’m tremendously excited by the progress being made using ML in cybersecurity, specifically where its application can greatly assist organizations to discover signs of untoward activity and to protect their assets from attack.

Today We Can See User Behavior, App Usage And More

Modern IT infrastructure is increasingly well-instrumented, delivering voluminous log data on user behavior, application use, network traffic, authentication activity and more. First-generation log-processing tools such as Splunk gave IT pros the ability to make Google-like queries on large indexed data stores, which at least made the tasks at hand possible.

Today, the rapid advances in ML, and particularly self-training ML algorithms, offer a powerful new opportunity to automatically sift through massive amounts of data to look for weird stuff -- patterns of behavior that are outliers when compared to the rest of the data in the set. These tools are self-trained, requiring little to no effort from an expert to set up, and adaptable: as more data is aggregated, they can retrain themselves to incorporate new behaviors and adjust their findings.
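As a toy stand-in for the self-training behavior described above, the sketch below flags outliers in a single log metric using a z-score; "retraining" amounts to recomputing the mean and standard deviation as new data arrives. Real products use far richer models, so treat this only as an illustration of the idea.

// Unsupervised outlier detection in miniature: flag values more than
// `threshold` standard deviations from the mean.
function findOutliers(values: number[], threshold = 3): number[] {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
  const std = Math.sqrt(variance);
  return values.filter((v) => std > 0 && Math.abs(v - mean) / std > threshold);
}

// E.g., bytes transferred per user session.
const bytesPerSession = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 11, 12, 950];
console.log(findOutliers(bytesPerSession)); // [950]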

Current solutions have a few drawbacks. Often an anomaly found by ML algorithms can be difficult to understand, as it may be the result of a set of abstract and hard-to-understand data points. In addition, such systems can be poor at teasing apart data that has many points of overlap.

Exciting ML Developments Are Brewing In The Domain Of Protection, At The Point Of Attack

Many “next-gen” vendors claim to detect malware before it executes -- by Turing’s proof, a fool’s errand. But when malware actually executes on an endpoint, it’s easy to spot as a deviation from the known normal behavior of the application it has attacked. Execution also offers a rich source of forensic detail, removing the need to hand-label examples of malicious activity for ML.

But when malware executes, all bets are off: the system is compromised and the attack could spread immediately across the network, as WannaCry did. The only way to avoid potentially disastrous consequences is to let malware execute in isolation, where it can be studied and its behavior mapped. ML, coupled with application isolation, removes the downside of malware execution: isolation contains the breach, ensuring that no data is compromised and that the malware does not move laterally onto the network.

The Future Of This Approach Is Bright

With Microsoft adding capabilities for isolation in its virtualization-based security feature set, I expect to see local learning expand to cover authentication activity and user behavior analysis, in addition to covering a broad set of attack vectors.

Machine learning, applied appropriately, offers exciting new opportunities for cybersecurity. We are witnessing the dawn of a new era of productivity and enhanced protection, but we must avoid the temptation to believe the marketing hype.

Reuters/Fabrizio Bensch

Earlier this month, tech moguls Elon Musk and Mark Zuckerberg debated the pros and cons of artificial intelligence from different corners of the internet. While SpaceX’s CEO is more of an alarmist, insisting that we should approach AI with caution and that it poses a “fundamental existential risk,” Facebook’s founder leans toward a more optimistic future, dismissing “doomsday scenarios” in favor of AI helping us build a brighter future.

I now agree with Zuckerberg’s sunnier outlook—but I didn’t used to.

Beginning my career as an engineer, I was interested in AI, but I was torn about whether advancements would go too far too fast. As a mother with three kids entering their teens, I was also worried that AI would disrupt the future of my children’s education, work, and daily life. But then something happened that pushed me firmly over to the optimists’ side.

An untraditional treatment

Imagine for a moment that you are a pathologist and your job is to scroll through 1,000 photos every 30 minutes, looking for one tiny outlier on a single photo. You’re racing the clock to find a microscopic needle in a massive data haystack.

Now, imagine that a woman’s life depends on it. Mine.

This is the nearly impossible job that pathologists are tasked with every day. To treat the 250,000 women in the US who will be diagnosed with breast cancer this year, each medical worker must analyze an immense amount of cell tissue to identify whether a patient’s cancer has spread. Limited by time and resources, they often get it wrong; a recent study found that pathologists accurately detect tumors only 73.2% of the time.

In 2011 I found a lump in my breast. Both my family doctor and I were confident that it was a fibroadenoma, a common noncancerous (benign) breast lump, but she recommended I get a mammogram to make sure. While the original lump was indeed a fibroadenoma, the mammogram uncovered two unknown “spots.” My journey into the unknown started here.

Since AI imaging was not available at the time, I had to rely solely on human analysis. The next four years were a blur of ultrasounds, biopsies, and surgeries. My well-intentioned network of doctors and specialists were not able to diagnose or treat what turned out to be a rare form of cancer, and repeatedly attempted to remove my recurring tumors through surgery.

After four more tumors, five more biopsies, and two more operations, I was heading toward a double mastectomy and terrified at the prospect of the cancer spreading to my lungs or brain.

I knew something needed to change. In 2015, I was introduced to a medical physicist who decided to take a different approach, using big data and a machine-learning algorithm to spot my tumors and treat my cancer with radiation therapy. While I was nervous about leaving my therapy up to this new technology, it—combined with the right medical knowledge—was able to stop the growth of my tumors. I’m now two years cancer-free.

I was thankful for the AI that saved my life—but then that very same algorithm changed my son’s potential career path.

A short-term shakeup

The positive impact of machine learning is often overshadowed by the doom-and-gloom of automation. Fearing for their own jobs and their children’s future, people often choose to focus on the potential negative repercussions of AI rather than the positive changes it can bring to society.

After seeing what this radiation treatment was able to do for me, my son applied to a university program in radiology technology to explore a career path in medical radiation. He had met countless radiology technicians throughout my years of treatment and was excited to start his training in a specialized program. However, during his application process, the program was cancelled: he was told there were no longer enough jobs in the radiology industry to warrant the program’s continuation. Many positions had been lost to automation—driven by the same technology and machine learning that helped me in my battle with cancer.

This was a difficult period for both my son and me: the very thing that had saved my life prevented him from following the path he had planned. He had to rethink his education mid-application, when it was too late to apply for anything else, and he worried that his backup plans would fall through.

He’s now pursuing a future in biophysics rather than medical radiation, starting with an undergraduate degree in integrated sciences. In retrospect, we both now realize that the experience forced him to rethink his career and unexpectedly opened up his thinking about which research areas will have the most impact on people’s lives in the future.

Although some medical professionals will lose their jobs to AI, the life-saving benefits to patients will be magnificent. Beyond cancer detection and treatment, medical professionals are using machine learning to improve their practice in many ways. For instance, Atomwise applies AI to fuel drug discovery, Deep Genomics uses machine learning to help pharmaceutical companies develop genetic medicines, and Analytics 4 Life leverages AI to better detect coronary artery disease.

While not all transitions from automated roles will be as easy as my son’s pivot to a different scientific field, I believe that AI has the potential to shape our future careers in a positive way, even helping us find jobs that make us happier and more productive.

Forging a path forward

As this technology rapidly develops, the future is clear: AI will be an integral part of our lives and bring massive changes to our society. It’s time to stop debating (looking at you, Musk and Zuckerberg) and start accepting AI for what it is: both the good and the bad.

Throughout the years, I’ve found myself on both sides of the equation, arguing both for and against the advancement of AI. But it’s time to stop taking a selective view of AI, choosing to incorporate it into our lives only when convenient. We must create solutions that mitigate AI’s negative impact and maximize its positive potential. Key stakeholders—governments, corporations, technologists, and more—need to create policies, join forces, and dedicate themselves to this effort.

And we’re seeing great progress. AT&T recently began retraining thousands of employees to keep up with technology advances and Google recently dedicated millions of dollars to prepare people for an AI-dominated workforce. I’m hopeful that these initiatives will allow us to focus on all the good that AI can do for our world and open our eyes to the potential lives it can save.

One day, yours just might depend on it, too.


Elon Musk and AI Experts Call for Total Ban on Robotic Weapons

One hundred and sixteen roboticists and AI researchers, including SpaceX founder Elon Musk and Google Deepmind co-founder Mustafa Suleyman, have signed a letter to the United Nations calling for strict oversight of autonomous weapons, a.k.a. "killer robots." Though the letter itself is more circumspect, an accompanying press release says the group wants "a ban on their use internationally."

Other signatories of the letter include executives and founders from Denmark’s Universal Robots, Canada’s Element AI, and France’s Aldebaran Robotics.

The letter describes the risks of robotic weaponry in dire terms, and says that the need for strong action is urgent. It is aimed at a group of UN officials considering adding robotic weapons to the UN’s Convention on Certain Conventional Weapons. Dating back to 1981, the Convention and parallel treaties currently restrict chemical weapons, blinding laser weapons, mines, and other weapons deemed to cause “unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.”


Robotic warriors could arguably reduce casualties among human soldiers – at least, those of the wealthiest and most advanced nations. But the risk to civilians is the headline concern of Musk and Suleyman’s group, who write that “these can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

The letter also warns that failure to act swiftly will lead to an “arms race” towards killer robots – but that’s arguably already underway. Autonomous weapons systems or precursor technologies are available or under development from firms including Raytheon, Dassault, MiG, and BAE Systems.

Element AI founder Yoshua Bengio had another intriguing warning – that weaponizing AI could actually “hurt the further development of AI’s good applications.” That’s precisely the scenario foreseen in Frank Herbert’s sci-fi novel Dune, set in a universe where all thinking machines are banned because of their role in past wars.

The UN weapons group was due to meet on Monday, August 21, but that meeting has reportedly been delayed until November.

 


Your plane could fly itself by 2025…if you’re cool with that

"This is (kind of) your captain speaking." (Reuters/Kai Pfaffenbach)

Airline passengers will give up leg room, overhead-bin space, and a healthy amount of dignity in exchange for a lower airfare. But many won’t give up human pilots.

A dilemma that sounds like it belongs in science fiction is one that some travelers may grapple with in the near future. “Technically speaking, remotely controlled planes carrying passengers and cargo could appear” by around 2025, the investment bank UBS said in a report released Monday (Aug. 8). A switch to full automation could save the air-transportation industry $35 billion a year and cut passenger fares by around 10%.

Other benefits include allowing airlines to operate without increasingly hard-to-find pilots, avoiding rare but unsavory human behavior, and saving fuel that human pilots can waste.

Travelers aren’t on board yet. Only 17% of the 8,000 people UBS surveyed said they would be likely to take a pilotless flight. “Perhaps surprisingly, half of the respondents said that they would not buy the pilotless flight ticket even if it was cheaper,” the researchers said.

Self-driving cars face less consumer resistance: A UBS survey in 2015 found 30% of people would be likely to ride in one. While some lawmakers are eager to get more self-driving cars on the road, autonomous vehicles will likely have many more physical obstacles to contend with in their paths than airplanes: hard-to-predict movements of pedestrians, altered street signs, and bikers.

Many functions of flight are already automated. Autopilot systems allow planes to cruise on their own. They can even land themselves. Modern planes are outfitted with sensors that relieve pilots from entering data into flight systems.

Boeing said it is planning test flights next year on which artificial intelligence will carry out some tasks handled by pilots. Pilots will still need to make quick decisions that autopilot systems cannot, such as handling heavy turbulence. Boeing’s vice president for production development said in June that the automation wouldn’t be applied until it’s as adept at handling a mid-air crisis as Captain Chesley Sullenberger. After a flock of geese knocked out the engines of a US Airways Airbus A320 shortly after takeoff in New York City, “Sully” landed the plane on water in “the Miracle on the Hudson.”

Over-reliance on automated systems could spell trouble for some pilots if their manual-flying skills atrophy. A report on the Air France crash that killed 228 people over the Atlantic in 2009 called for more manual training for pilots after cockpit errors appeared to cause the Airbus A330 to stall.

UBS says fully automated aircraft could take to the skies as early as the mid-2020s, starting with cargo planes and air taxis; fully automated commercial flights likely won’t take off until the 2040s. Remote operation also raises questions about the security of an aircraft that can be controlled from the ground, and about who would handle unruly passengers.

As UBS notes, technology isn’t the biggest hurdle—it’s convincing regulators and the public that autonomous planes will still fly safely.

Google’s Deeplearn.js brings machine learning to the browser

Thinkstock

The open source GPU-accelerated library supports TypeScript and JavaScript, allowing you to train neural networks or run pre-trained models

Google is offering an open source, hardware-accelerated library for machine learning that runs in a browser. The library is currently supported only in the desktop version of Google Chrome, but the project is working to support more devices. 

The Deeplearn.js library enables training of neural networks within a browser, requiring no software installation or back end. “A client-side ML library can be a platform for interactive explanations, for rapid prototyping and visualization, and even for offline computation,” Google researchers said. “And if nothing else, the browser is one of the world’s most popular programming platforms.”

Using the WebGL JavaScript API for 2D and 3D graphics, Deeplearn.js can conduct computations on the GPU. This offers significant performance gains, getting past the speed limits of plain JavaScript, the researchers said.
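For flavor, here is roughly what a minimal GPU computation looks like with the library's launch-era API. The snippet follows the project's published examples, but treat the exact class names (NDArrayMathGPU, Array1D, Scalar) as a snapshot of a young API that may change:

import { Array1D, NDArrayMathGPU, Scalar } from 'deeplearn';

// Math operations execute as WebGL shader programs on the GPU.
const math = new NDArrayMathGPU();

// scope() cleans up intermediate GPU textures; track() registers
// arrays for that cleanup.
math.scope((keep, track) => {
  const a = track(Array1D.new([1, 2, 3]));
  const b = track(Scalar.new(2));
  const result = math.add(a, b);
  // Copy the result back from the GPU into ordinary numbers.
  console.log(result.getValues()); // Float32Array [3, 4, 5]
});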

Deeplearn.js imitates the structure of the company’s TensorFlow machine intelligence library and of NumPy, a scientific computing package for Python. “We have also implemented versions of some of the most commonly used TensorFlow operations,” the researchers said. “With the release of Deeplearn.js, we will be providing tools to export weights from TensorFlow checkpoints, which will allow authors to import them into webpages for Deeplearn.js inference.”

Although Microsoft’s TypeScript is the language of choice, Deeplearn.js can be used with plain JavaScript. Demos of Deeplearn.js are featured on the project’s homepage. Deeplearn.js joins other projects that bring machine learning to JavaScript and the browser, including TensorFire, which allows execution of neural networks within a webpage, and ML.js, which provides machine learning and numerical analysis tools in JavaScript for Node.js.

Source: http://www.infoworld.com/article/3216464/machine-learning/googles-deeplearnjs-brings-machine-learning-to-the-browser.html


'They said I was too old to work at a startup' – women on ageism in tech

In a sector that values youth, female tech leaders share their experiences of working in the industry

While diversity in tech has come to the fore in recent years, one aspect has been rather overlooked: age. The median age in the UK is 40, but the average age of a Facebook employee is 29, while at Amazon it’s 30, according to research firm Payscale. Given the industry’s gender imbalance, older women are the ones most likely to miss out.

From the founder of a startup to the chief customer officer of a major tech firm, we spoke to seven women aged over 40 about their experiences of navigating the industry.

Karen Quintos, 54, chief customer officer, Dell

‘It’s definitely been challenging for me as a woman on mostly male boards,’ says Dell CCO, Karen Quintos. Photograph: PR

The two main challenges facing women as they get older are a lack of role models and a lack of confidence. It’s definitely been challenging for me as a woman on mostly male boards. I’ve been lucky at Dell, as my male colleagues recognise I may have a different perspective on things sometimes, so I’ll often be asked to cast my eye over whatever’s being worked on.

The key to really continuing to thrive in your career as you get older is recognising and staying close to your advocates. Keep an open mind as to who your advocates might be, as you may find yourself surprised – I definitely was when I discovered who mine were.

Pip Wilson, 41, angel investor and founder of divorce app amicable

I founded my consultancy company in my late 20s and never felt disadvantaged as a woman. However, when I moved into the tech space with amicable in 2015, it was a real shock to see how male-dominated and also how young the industry is. I’ve only come across one woman founder who has successfully raised funding in her 40s.

Only 7% of VC firms have a female partner, and VCs like to invest in companies they understand. Women also tend to be less bullish when seeking investment, and will be honest about the problems with their companies. I also think the startup accelerator system disadvantages older people, particularly women with families. Not everyone can uproot their lives and move to San Francisco for three months, working 16-hour days.

Tanya Cordrey, 50, digital consultant and former chief digital officer of the Guardian

Tech is clearly a very youth-orientated industry. Many companies even pride themselves on the age of their employees, which I find a bit gross. Job hunting in my 50s has definitely felt more challenging than it did even in my 40s. I was once told by someone that I’m too old to work in a startup, which I was quite shocked by. But then, if a company doesn’t want me because I’m a woman over a certain age, then it’s not really a company I want to work for.

It’s important to keep your skills up to date and stay on top of industry trends no matter what your age, but I think this becomes particularly important over 40 because there is an expectation that you’re more out of touch. You definitely have to work twice as hard in this regard.

Dr Sue Black OBE, 55, founder of techmums and honorary professor at University College London

‘The key to thriving as you grow older is finding networks of like-minded people who can mentor and support you,’ says Dr Sue Black.

I haven’t experienced much ageism personally – I actually think it’s become easier for me as I’ve got older, as people have started taking me more seriously in a professional environment. However, I’ve heard some terrible stories from other women [within the tech industry] about how they’ve been patronised or treated in a derogatory manner, even those at the top of their field.

The key to thriving as you grow older – or at any age – is finding networks of like-minded people who can mentor and support you, and vice versa. When I entered the industry in the 1990s I wasn’t meeting many other women in the workplace or at conferences, so I set up BCSWomen, the first online network for women in tech, so we could connect with each other. Now there are all kinds of different women’s networks out there, from programming to leadership.

Jacqueline de Rojas, 54, president of trade association techUK, and non-executive director of Rightmove UK

There is an issue in the industry with people being let go when they’re older. Some companies have a “your face doesn’t fit” mentality – it’s tricky to drum up the same level of enthusiasm around a startup if it’s not seen as being down with the kids. Women also face further problems when they go on maternity leave as their confidence and own perceived skill level can drop.

Personally, I think I’ve managed to avoid this by upgrading my skills regularly. At techUK we have a Returners’ Hub that helps people access return-to-work programmes. It’s important to not let your skills or your network die out.

In my view, older people make more loyal employees as they have fewer financial pressures on them, and aren’t constantly chasing bigger salaries.

Nikki Cochrane, 44, co-founder of Digital Mums

We’re the good news story – in all honesty, we haven’t experienced any ageism or sexism while starting up. There is no doubt that investment is a very male-dominated space, and we found it very challenging to seek out female investors, but we managed.

Our business runs courses teaching mums everything they need to know to start a career in social media management. The women we train are able to empathise and manage customers and clients in a way the average graduate just can’t. 

One of the best things about being a woman over 40 in this industry is the camaraderie. It’s a brilliant age to launch a business because you bring a whole wealth of experience to the table.

Elspeth Briscoe, 44, founder of online courses platform Learning With Experts

Raising investment is difficult but not impossible, says tech startup owner Elspeth Briscoe.

I started my career at eBay and Skype before launching my own company in 2011, and managed to raise £1.5m in investment from VCs and angels while pregnant. It’s difficult, but not impossible. The VC space can be a bit of a boys’ club but it really is like dating – you have to shop around and find people you click with, who believe in what you’re doing.

I actually feel more motivated in my career now than I ever did in my 20s. The role models I’ve had have really helped me too – I worked at eBay when [Hewlett Packard Enterprise chief executive] Meg Whitman was CEO, and she was just fantastic at creating an inclusive working environment. She really inspired me to get to where I am.

When Artificial Intelligence and Human Resources Intersect

GRAEME DAWES - FOTOLIA

AI is taking aim at the very people-oriented human resources profession. Expert Brandon Wirtz gives his take on why that's happening now and what it will mean for us all.

Brandon Wirtz was supposed to be a fifth-generation teacher. Indeed, the founder and CEO of artificial intelligence engine developer Recognant is a teacher -- of robots, not people -- and not the factory floor variety of bots, either. Instead, Wirtz sees AI changing a very human process: human resources.

To reach the place where artificial intelligence and HR meet, Wirtz spends his days educating his various AIs about everything from how to order pizza to what an appropriate pickup line might be. His bots -- "Loki," "Lobby" and "Molly" -- are at different stages of independence and aptitude. Loki, who identifies as female, is perhaps Wirtz's favorite bot -- and the most likely to drive him crazy with questions.

The games Wirtz plays with Loki -- "I Spy" is a particular favorite -- might seem frivolous, but they serve as the basis for the bot's education in how humans think and communicate. And though it may seem unimportant that Loki understands Santa Claus, zombies and Instagram, all of that matters when it comes to artificial intelligence and human resources if she -- or a bot like her -- is going to work in HR dealing with prospective employees. "I know this seems creepy, but it isn't," Wirtz laughed.

There's no fooling a bot

In Wirtz’s view, HR is largely broken and AI is going to fix it. “One of the biggest problems in HR is that you have an interviewer, and they know nothing about the particular job they’re hiring for,” he explained. “Lots of times, an HR person is faking knowledge about the job, so they don’t know enough to know what keywords to be listening for.”

Even if a bot has never heard of, say, Photoshop, it can quickly search the internet and arm itself with enough information to know if an applicant using Corel Draw may lack the necessary experience, Wirtz said. "The AI doesn't have to understand the conversation but can pass the transcription on to the hiring manager and indicate this was not an acceptable answer," he added. "AI is a way to get deeper interactions with interviewees, and it doesn't matter what they talk about because the AI is an expert or at least a jack-of-all-trades."

A robot might have a more in-depth interview with a job applicant, while showing no bias toward the candidate, Wirtz reasoned. “Sometimes it comes down to 'This candidate reminds me of someone I didn't like in high school,' or 'This person and I have bonded over the same hobby,'” he suggested. “Computers don’t have these biases.” However, they can be programmed to search for applicant biases, such as racially derogatory messages posted on social media. And with the right training, a bot can even help sort out very subtle human characteristics like emotional intelligence, sense of humor and even ambition, Wirtz said. “An AI is not very good at making jokes, but it can tell when a human has made a joke, and that can help [the bot] decide whether someone has the right personality for the job.”

If you’re dubious about artificial intelligence and human resources, you’re far from alone, but Wirtz has an answer for skeptics. “A well-trained AI will listen, and humans really don’t,” he asserted. “Say you are looking to hire a test engineer. That’s a job without a lot of upward mobility. So you want to hire someone who’s going to be happy to stay a test engineer. An AI will analyze how many times the person used the future tense, and that’s key information that a human would more than likely have missed.”
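As a purely hypothetical illustration of the kind of signal Wirtz describes -- and not Recognant's actual method -- a naive future-tense count over a transcript might look like this:

// Hypothetical sketch: crude future-tense marker ratio. A real system
// would use proper NLP, not a keyword list.
function futureTenseRatio(transcript: string): number {
  const tokens = transcript
    .toLowerCase()
    .split(/\s+/)
    .map((t) => t.replace(/[^a-z']/g, ""))
    .filter((t) => t.length > 0);
  const markers = new Set(["will", "shall", "gonna"]);
  let count = 0;
  for (let i = 0; i < tokens.length; i++) {
    if (markers.has(tokens[i])) count += 1;
    else if (tokens[i] === "going" && tokens[i + 1] === "to") count += 1;
  }
  return tokens.length > 0 ? count / tokens.length : 0;
}

// A candidate talking mostly about future plans scores higher.
console.log(futureTenseRatio("I will lead a team and I'm going to move up."));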

Early signs of intelligent life

Yet that kind of human analysis is only possible when bots are patiently taught by someone who understands the building blocks of AI. Although Wirtz started coding at the age of 7, he took a hiatus from that for several years to be a YMCA camp counselor before returning to software development. He worked on mind simulation products, which provided key training in psychology and understanding how the human brain works. He then moved on to AI-based content creators.

"I was trying to 'game' Google and create content for fun and profit that wasn't great but didn't suck, either," he acknowledged. "So I learned about content creation, fact extraction and knowledge building." Those were the tools that formed the basis for Loki and her AI colleagues at Recognant and what Wirtz hopes will foster a new approach to artificial intelligence and human resources.

Thanks to AI, applying blindly for a job will become a thing of the past, Wirtz predicted. Instead, a chatbot will appear right after the application is submitted online and strike up a conversation with the applicant. That prescreening exchange will provide the hiring manager with immediate information about the applicant; the applicant may receive feedback from the hiring manager as well.

"From spelling errors to subject knowledge to attitude, an AI can ask and record this exchange," Wirtz said. "If this works, we can get rid of the biases, the emphasis on education instead of experience and a lot more unknowns. And it really is starting to happen today."

Artificial intelligence better than scientists at choosing successful IVF embryos

Getty Images

Scientists are using artificial intelligence (AI) to help predict which embryos will result in IVF success.

In a new study, AI was found to be more accurate than embryologists at pinpointing which embryos had the potential to result in the birth of a healthy baby.

Experts from Sao Paulo State University in Brazil have teamed up with Boston Place Clinic in London to develop the technology in collaboration with Dr Cristina Hickman, scientific adviser to the British Fertility Society.

They believe the inexpensive technique has the potential to transform care for patients and help women achieve pregnancy sooner.

During the process, AI was “trained” in what a good embryo looks like from a series of images.

AI is able to recognise and quantify 24 image characteristics of embryos that are invisible to the human eye.

These include the size of the embryo, texture of the image and biological characteristics such as the number and homogeneity of cells.

During the study, which used cattle embryos, 48 images were evaluated three times each by embryologists and by the AI system.

The embryologists’ findings varied across the three evaluations of each image, but the AI system produced complete agreement.
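The comparison boils down to a simple consistency measure: the fraction of images that receive the same grade on every pass. A sketch, invented here only to make the metric explicit:

// Fraction of images graded identically across repeated evaluations.
// grades[i] holds the grades given to image i on each pass.
function consistency(grades: string[][]): number {
  const agree = grades.filter((g) => g.every((x) => x === g[0])).length;
  return agree / grades.length;
}

// Complete agreement, as reported for the AI system, is a score of 1.
console.log(consistency([["good", "good", "good"], ["poor", "poor", "poor"]])); // 1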

Stuart Lavery, director of the Boston Place Clinic, said the technology would not replace examining chromosomes in detail, which is thought to be a key factor in determining which embryos are “normal” or “abnormal”.

He said: “Looking at chromosomes does work, but it is expensive and it is invasive to the embryo.

“What we are looking for here is something that can be universal.

“Instead of a human looking at thousands of images, actually a piece of software looks at them and is capable of learning all the time.

“As we get data about which embryos produce a baby, that data will be fed back into the computer and the computer will learn.

“What we have found is that the technique is much more consistent than an embryologist, it is more reliable.

“It can also look for things that the human eye can't see.

“We don't think it will replace genetic screening – we think it will be complementary to this type of screening.

“Analysis of the embryo won't improve the chances of that particular embryo, but it will help us pick the best one.

“We won't waste time on treatments that won't work, so the patient should get pregnant quicker.”

He said work was under way to look back at images from parents who had genetic screening and became pregnant. Applying AI to those images will help the computer learn, he said.

Mr Lavery added: “This is an innovative and exciting project combining state of the art embryology with new advances in computer modelling, all with the aim of selecting the best possible embryo for transfer to give all our patients the best possible chance of having a baby.

“Although further work is needed to optimise the technique, we hope that a system will be available shortly for use in a clinical setting.”


Facebook's Artificial Intelligence Robots Shut Down After They Start Talking to Each Other in Their Own Language

REUTERS/Tyrone Siu

Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.

The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

The bizarre discussions came after Facebook challenged its chatbots to negotiate with each other over a trade, attempting to swap hats, balls and books, each of which was given a certain value. But the talks quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.

The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own "shorthand", according to researchers.

The actual negotiations appear very odd, and don't look especially useful:

------------------------------------------

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

------------------------------------------

But there appear to be some rules to the speech. The way the chatbots keep stressing their own name appears to be a part of their negotiations, not simply a glitch in the way the messages are read out.

Indeed, some negotiations carried out entirely in this bizarre language were even concluded successfully.

The changes might have formed as a kind of shorthand, allowing the bots to talk more effectively.

“Agents will drift off understandable language and invent codewords for themselves,” Facebook Artificial Intelligence Research division's visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
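Batra's example describes a trivially decodable convention: repetition encodes quantity. A toy decoder (invented here for illustration, not drawn from FAIR's code) makes the point:

// Read "the the the the the" as a request for five of an item.
function decodeQuantity(utterance: string, item: string): number {
  return utterance.split(/\s+/).filter((token) => token === item).length;
}

console.log(decodeQuantity("the the the the the", "the")); // 5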

That said, it's unlikely that the language is a precursor to new forms of human speech, according to linguist Mark Liberman.

"In the first place, it's entirely text-based, while human languages are all basically spoken (or gestured), with text being an artificial overlay," he wrote on his blog. "And beyond that, it's unclear that this process yields a system with the kind of word, phrase, and sentence structures characteristic of human languages."

The company chose to shut down the chats because "our interest was having bots who could talk to people", researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.)

The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR.

(That paper was published more than a month ago but began to pick up interest this week.)

Facebook's experiment isn't the only time that artificial intelligence has invented new forms of language.

Earlier this year, Google revealed that the AI it uses for its Translate tool had created its own language, which it would translate things into and then out of. But the company was happy with that development and allowed it to continue.

Another study at OpenAI found that artificial intelligence could be encouraged to create a language, making itself more efficient and better at communicating as it did so.
