Dave Ackley, a longtime computer scientist and professor emeritus at the University of New Mexico, never expected that research he was involved with would win a Nobel Prize in Physics. That changed last week, when his dissertation advisor became a Nobel laureate.

Geoffrey Hinton and John Hopfield were awarded the 2024 Nobel Prize in Physics for breakthrough discoveries in brain-like algorithms that helped pave the way for modern artificial intelligence. Hopfield developed a model of neural networks, now known as the Hopfield Network, that can store and recreate patterns much as human memory does. In the early 1980s, Hinton and his colleague Terry Sejnowski used the Hopfield Network as the foundation for a new neural network known as the Boltzmann Machine. As a graduate student at Carnegie Mellon University and Hinton’s first Ph.D. student, Ackley worked with Hinton on the Boltzmann Machine.
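For readers curious what “storing and recreating patterns” looks like in practice, here is a minimal, purely illustrative sketch of a Hopfield-style network in Python. It is a toy written for this article, not the laureates’ code, and the helper names (train, recall) are invented for the example: patterns are memorized with a simple Hebbian rule, and a corrupted pattern is cleaned up by repeatedly updating neurons until the network settles into the closest stored memory.

```python
# Toy Hopfield-style network (illustrative only; not the laureates' code).
import numpy as np

def train(patterns):
    """Store +/-1 patterns in a weight matrix using the Hebbian rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)        # neurons that fire together wire together
    np.fill_diagonal(W, 0)         # no self-connections
    return W / len(patterns)

def recall(W, state, sweeps=10):
    """Repeatedly update neurons so the state settles into a stored pattern."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

stored = np.array([[1, -1, 1, -1, 1, -1],
                   [1, 1, 1, -1, -1, -1]])
W = train(stored)
noisy = np.array([1, -1, -1, -1, 1, -1])   # first pattern with one bit flipped
print(recall(W, noisy))                     # recovers [ 1 -1  1 -1  1 -1]
```

Real Hopfield Networks work the same way at much larger scale, which is what makes them usable as a form of associative, content-addressable memory.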

A Nobel idea

Ackley started his graduate work before Hinton joined the faculty at Carnegie Mellon. During his first year of study, he became interested in a computational model Hinton had described in a chapter of “Parallel Models of Associative Memory.” Ackley tried to implement the model but struggled to make it work, so he cold-emailed Hinton to better understand how to implement the code. A year later, Hinton became a professor at Carnegie Mellon, and Ackley naturally became his student. Hinton charged Ackley with implementing the Boltzmann Machine idea, and, to their excitement, it worked.

Dave Ackley in the 1980s.

“To this day, I recall sitting in Geoff's office soon after I'd gotten the code working, as we tried it out on a small but famously challenging learning problem. After each round of learning, my code printed out a number giving the remaining error,” Ackley wrote about the experience. “As I read out the numbers, moment by moment, one by one, Geoff plotted them on an impromptu graph on his chalkboard. The numbers were falling! The errors were decreasing! The machine was learning!”

The Boltzmann Machine offered a revolutionary perspective on brain-like algorithms, which use a network of simple components to carry out complex tasks. That work opened the door to further artificial intelligence research, and although the Boltzmann Machine is not directly used in today’s generative AI systems such as ChatGPT, it is widely recognized as having helped make them possible.
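For the technically curious, the “network of simple components” idea can be sketched in a few lines of Python. The example below is a hypothetical toy, not the original 1980s program: each unit in a Boltzmann Machine-style network switches on or off at random, with a probability shaped by signals from its neighbors, so the network as a whole drifts toward low-energy, more mutually consistent states.

```python
# Toy Boltzmann Machine-style sampler (illustrative only; not the original code).
import numpy as np

rng = np.random.default_rng(0)

def energy(W, b, s):
    """Lower energy = a state more consistent with the connection weights."""
    return -0.5 * s @ W @ s - b @ s

def gibbs_sweep(W, b, s, T=1.0):
    """Update each simple unit stochastically, based only on its neighbors."""
    for i in range(len(s)):
        field = W[i] @ s + b[i]
        p_on = 1.0 / (1.0 + np.exp(-field / T))  # probability the unit turns on
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

# A tiny symmetric network of four units with random connection weights.
n = 4
W = rng.normal(size=(n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0)
b = np.zeros(n)
s = rng.integers(0, 2, size=n).astype(float)

for sweep in range(5):
    s = gibbs_sweep(W, b, s)
    print(sweep, s, round(energy(W, b, s), 3))
```

The actual Boltzmann Machine learns by comparing the statistics the network produces while running freely with those it produces while being shown training data; the sketch above only illustrates the sampling half of that story.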

"ChatGPT and the rest of modern machine artificial intelligence, from the point of view of many of us in the 80’s,  is the unbelievable grand success of the brain-like approach,” Ackley said.

The Boltzmann Machine was named after Ludwig Boltzmann, the physicist whose mathematical and statistical ideas are used in the machine. The Nobel Committee emphasized the physics concepts underlying the idea’s implementation.

“Computer science didn’t exist when Nobel was inventing dynamite and feeling guilty about it,” Ackley joked.

When Hinton published his initial work on the Boltzmann Machine, he suggested the research team list their names alphabetically. Ackley ended up listed first on the publication, which to this day remains his most-cited work.

Since the Boltzmann Machine

After Ackley graduated with his Ph.D., he was hired by Bell Communications Research to work on problems of his own choosing. He started out looking at machine learning, but soon moved on to examining artificial life, evolution and how those ideas might be implemented in computing. Ackley would travel to New Mexico to attend annual artificial life conferences in Santa Fe. After being “enchanted by the desert palette,” he moved to the Albuquerque area and became an associate professor in the UNM Department of Computer Science.

Nathan Rackley, Ackley's former student and a former Daily Lobo cartoonist, drew a fun logo for the since-retired professor.

After joining UNM, Ackley became interested in developing cybersecurity systems that borrow ideas from the natural world. Years of research, and growing discouragement over the constant need for patches and temporary security fixes, eventually led him to take his computer science research in a completely new direction.

Since his retirement in 2018, he has focused on developing a new computer architecture that works without a conventional CPU or RAM. He founded the Living Computation Foundation, which seeks to support science and engineering exploring the connections between life and computing. His efforts are documented in the T2 Tile Project, where he builds computer architectures with some of the robustness of the living systems in the world around us. Ackley is working toward a system that puts scalability and robustness first, and correctness and efficiency second.

Hearing the news

To say Ackley was shocked to hear the news about his mentor “Geoff” would be an understatement. 

“I got up around 8 and toddled in to look at my email, and the first thing I saw was an interview request from the New York Times, and I’m going, ‘Is this spam?’” Ackley recalled.

He quickly searched online for news about the Nobel Prize. As he scrolled through the coverage, Ackley was thrilled for Hinton, who has devoted the decades since the Boltzmann Machine to machine learning research.

“As I drilled into it I was like, ‘holy crap! It’s the Boltzmann Machine,’” Ackley said. “Even though I didn’t have the big ideas –– I just did the implementation and experiments and wrote up the paper –– it was still pretty exciting.”
 

Top image: Dave Ackley (second from the left in the back, wearing a blue shirt) attended the first-ever Connectionist Summer School, held at Carnegie Mellon University in 1986. The event was organized by Geoffrey Hinton and Terry Sejnowski, who are also pictured.