Large Language Models (LLMs) are pre-trained, self-supervised foundation models that can be adapted through fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and, more recently, LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs that reached wildly different conclusions. A new possibility has emerged that could explain this divergence: what appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a reverse Turing test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than about the intelligence of the LLMs.
Sejnowski pioneered learning algorithms for neural networks in the 1980s, inventing the Boltzmann machine with Geoffrey Hinton; this was the first learning algorithm for multilayer neural networks and laid the foundation for deep learning. He trained a network called NETtalk to pronounce English text, an early real-world NLP application of neural networks. He also developed an unsupervised learning algorithm for blind source separation, Independent Component Analysis (ICA), which has many practical applications.
Sejnowski is the President of the Neural Information Processing Systems (NeurIPS) Foundation, which organizes the annual NeurIPS conference that was attended by over 16,000 researchers in 2022. He is a member of the National Academy of Sciences, the National Academy of Engineering, the National Academy of Medicine and the American Academy of Arts and Sciences.