A New Way for Machines to See, Taking Shape in Toronto
Along with two graduate students at the University of Toronto, Mr. Hinton, a professor there, built a system
that could analyze thousands of photos and teach itself to identify common objects like flowers and cars with an accuracy that didn’t seem possible.
The new lab is emblematic of what some believe to be the future of cutting-edge tech research:
Much of it is expected to happen outside the United States, in Europe, China and longtime A.I. research centers, like Toronto, that are more welcoming to immigrant researchers.
But these methods are still a long way from producing machines with true intelligence, and new research is needed to deliver the kinds of autonomous machines that so many of the top tech companies are now promising, including conversational computers and driverless cars.
With his capsule networks, Mr. Hinton aims to finally give machines the same three-dimensional perspective
that humans have — allowing them to recognize a coffee cup from any angle after learning what it looks like from only one.
He and his students soon moved to Google, and the mathematical technique
that drove their system — called a neural network — spread across the tech world.
This mathematical idea dates back to the 1950s, but it has found real-world applications only in recent years, thanks to improvements in processing power and the large amounts of data generated by the internet.