Scientists Teach Machines How to Learn Like Humans
December 15, 2015 | New York University | Estimated reading time: 4 minutes
To do so, they developed a “Bayesian Program Learning” (BPL) framework, in which concepts are represented as simple computer programs. For instance, the letter ‘A’ is represented by computer code, resembling the work of a computer programmer, that generates examples of that letter when the code is run. Yet no programmer is required during the learning process: the algorithm programs itself by constructing code to produce the letter it sees. Also, unlike standard computer programs that produce the same output every time they run, these probabilistic programs produce different outputs at each execution. This allows them to capture the way instances of a concept vary, such as the differences between how two people draw the letter ‘A.’
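The varying-output idea can be sketched in a few lines of Python. This is a toy illustration only, not the authors’ actual BPL model: the stroke coordinates and the Gaussian noise are invented for the example. The point is simply that the same “program” for a letter yields a different drawing on every run.

```python
import random

def draw_letter_A(rng):
    """A toy probabilistic 'program' for the letter A: two slanted
    strokes plus a crossbar. Each endpoint is lightly perturbed per
    run, so no two executions produce the same drawing.
    (Illustrative sketch only -- not the paper's BPL model.)"""
    def jitter(v):
        return v + rng.gauss(0, 0.05)
    # Each stroke is a pair of (x, y) endpoints.
    left  = ((jitter(0.0),  jitter(0.0)), (jitter(0.5),  jitter(1.0)))
    right = ((jitter(0.5),  jitter(1.0)), (jitter(1.0),  jitter(0.0)))
    bar   = ((jitter(0.25), jitter(0.5)), (jitter(0.75), jitter(0.5)))
    return [left, right, bar]

rng = random.Random(0)
a1 = draw_letter_A(rng)
a2 = draw_letter_A(rng)
# Two executions of the same program: both are recognizably 'A',
# yet their stroke coordinates differ.
```

Because the program is stochastic, `a1` and `a2` differ in their exact coordinates while sharing the same three-stroke structure, which is how such a program captures variation across instances of one concept.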
While standard pattern recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns “generative models” of processes in the world, making learning a matter of “model building” or “explaining” the data provided to the algorithm. In the case of writing and recognizing letters, BPL is designed to capture both the causal and compositional properties of real-world processes, allowing the algorithm to use data more efficiently. The model also “learns to learn” by using knowledge from previous concepts to speed learning on new concepts—e.g., using knowledge of the Latin alphabet to learn letters in the Greek alphabet. The authors applied their model to over 1,600 types of handwritten characters in 50 of the world’s writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic—and even invented characters such as those from the television series Futurama.
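The compositional idea can also be sketched briefly. In this hypothetical example (the primitive names and noise model are invented for illustration, not drawn from the paper), characters are built by recombining a small shared library of stroke primitives, which is the sense in which knowledge of familiar parts can speed learning of new characters.

```python
import random

# A tiny shared stroke "library". Compositionality means new characters
# are programs that reuse and recombine these familiar primitives.
# (Hypothetical primitives for illustration -- not the paper's inventory.)
PRIMITIVES = {
    "vertical":   [(0.5, 0.0), (0.5, 1.0)],
    "horizontal": [(0.0, 0.5), (1.0, 0.5)],
    "slash":      [(0.0, 0.0), (1.0, 1.0)],
}

def compose_character(part_names, rng):
    """Build a character by composing named primitives, with small
    per-run noise on each point so every rendering varies."""
    strokes = []
    for name in part_names:
        noisy = [(x + rng.gauss(0, 0.03), y + rng.gauss(0, 0.03))
                 for x, y in PRIMITIVES[name]]
        strokes.append(noisy)
    return strokes

rng = random.Random(42)
# A 'plus' character composed from two already-known primitives.
plus_sign = compose_character(["vertical", "horizontal"], rng)
```

A learner that already knows these primitives from one alphabet could, in this toy setting, describe a character from a new alphabet with just a short list of part names, rather than from raw pixels.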
In addition to testing the algorithm’s ability to recognize new instances of a concept, the authors asked both humans and computers to reproduce a series of handwritten characters after being shown a single example of each character, or in some cases, to create new characters in the style of those they had been shown. The scientists then compared the outputs from both humans and machines through “visual Turing tests.” Here, human judges were given paired examples of both the human and machine output, along with the original prompt, and asked to identify which of the symbols were produced by the computer.
While judges’ correct responses varied across characters, for each visual Turing test, fewer than 25 percent of judges performed significantly better than chance in assessing whether a machine or a human produced a given set of symbols.
“Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven’t seen,” notes Tenenbaum. “I’ve wanted to build models of these remarkable abilities since my own doctoral work in the late nineties. We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts—even simple visual concepts such as handwritten characters—in ways that are hard to tell apart from humans.”
The work was supported by grants from the National Science Foundation to MIT’s Center for Brains, Minds and Machines (CCF-1231216), the Army Research Office (W911NF-08-1-0242, W911NF-13-1-2012), the Office of Naval Research (N000141310333), and the Moore-Sloan Data Science Environment at New York University.