Scientists Teach Machines How to Learn Like Humans



A team of scientists has developed an algorithm that captures our learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans. The work, which appears in the latest issue of the journal Science, marks a significant advance in the field—one that dramatically shortens the time it takes computers to “learn” new concepts and broadens their application to more creative tasks.

“Our results show that by reverse engineering how people think about a problem, we can develop better algorithms,” explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper’s lead author. “Moreover, this work points to promising methods to narrow the gap for other machine learning tasks.”

The paper’s other authors were Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.

When humans are exposed to a new concept, such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet, they often need only a few examples to understand its makeup and recognize new instances. While machines can now replicate some pattern-recognition tasks previously done only by humans (ATMs reading the numbers written on a check, for instance), machines typically need to be given hundreds or thousands of examples to perform with similar accuracy.

“It has been very difficult to build machines that require as little data as humans when learning a new concept,” observes Salakhutdinov. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”

Salakhutdinov helped launch the recent wave of interest in learning with "deep neural networks" in a paper published in Science almost 10 years ago with his doctoral advisor, Geoffrey Hinton. Their algorithm learned the structure of 10 handwritten character concepts (the digits 0-9) from 6,000 examples each, or 60,000 training examples in total.
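
To give a sense of that data-hungry regime, the following is a minimal sketch of a conventional classifier trained on the full 60,000-image MNIST digit set. It is only an illustration of the standard approach the article contrasts with human learning, not the authors' new model or the original Hinton-Salakhutdinov network; it assumes the PyTorch and torchvision libraries and an arbitrarily chosen small network.

```python
# Illustrative sketch only: a small fully connected network trained on the
# 60,000-image MNIST training set (10 digit classes, roughly 6,000 images
# each). Assumes PyTorch and torchvision; the architecture is arbitrary and
# is not the model described in the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# 60,000 training examples across the 10 digit classes.
train_set = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

# A deliberately simple classifier: 28x28 grayscale images -> 10 digit classes.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One pass over all 60,000 examples; typical pipelines run several such epochs.
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The point of the contrast is the amount of data required: an approach like this sees thousands of examples of each digit, whereas the model described in the new paper is designed to pick up a new character concept from only a handful of examples.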




