Strategies & Market Trends : Technical analysis for shorts & longs
SPY 675.24-1.2%Nov 4 4:00 PM EST

 Public ReplyPrvt ReplyMark as Last ReadFilePrevious 10Next 10PreviousNext  
To: Johnny Canuck who wrote (66594)
From: Johnny Canuck, 10/8/2025 12:01:28 PM
 





John Thornhill

Published 2 hours ago


Oscar Wilde once defined fox hunting as the unspeakable in pursuit of the uneatable. Were he alive today he might describe the quest for artificial general intelligence as the unfathomable in pursuit of the indefinable.
Hundreds of billions of dollars are currently being pumped into building generative AI models in a race to achieve human-level intelligence. But not even the developers fully understand how their models work, or agree on exactly what AGI means.
Instead of hyperventilating about AI ushering in a new era of abundance, wouldn’t it be better to drop the rhetoric and build AI systems for more defined, realisable goals? That was certainly the view at a conference hosted by the University of Southampton and the Royal Society last week. “We should stop asking: is the machine intelligent? And ask: what exactly does the machine do?” said Shannon Vallor, a professor at the University of Edinburgh. She has a point.
The term AGI was first popularised in computing in the 2000s to describe how AI systems might one day perform general-purpose, human-level reasoning (as opposed to narrow AI that excels at one thing). Since then, it has become the holy grail for the industry, used to justify a colossal spending spree.
The leading AI research labs, OpenAI and Google DeepMind, both have an explicit corporate mission to achieve AGI, albeit with varying definitions. OpenAI’s is: “a highly autonomous system that outperforms humans at most economically valuable work”. But even Sam Altman, its chief executive, who has signed $1tn in deals this year to boost computing power, concedes this is not a “super-useful term”.
Even if we accept the term's use, two concerns remain: what happens if we achieve AGI, and what happens if we don't?
The Silicon Valley consensus, as it has been called, suggests that AGI, however defined, is within reach this decade. That goal is being pursued with missionary zeal by the leading AI labs in the belief that it will unleash massive productivity gains and generate huge returns for investors.
Indeed, some west coast tech leaders have founded a $100mn political action committee to support “pro-AI” candidates in the 2026 midterm elections and squash unhelpful regulation. They point to the astonishingly rapid adoption of AI-powered chatbots and scoff at the doomers, or decelerationists, who want to slow progress and hobble the US in its technological race with China.
But AGI is not going to be an instant, miraculous blessing. OpenAI itself acknowledges that it would also come “with serious risk of misuse, drastic accidents, and societal disruption”. This helps explain why insurers are now balking at providing comprehensive cover to the industry.
Some experts, such as Eliezer Yudkowsky and Nate Soares, go way further, warning that a rogue superintelligence could pose an existential threat to humanity. The title of their recent book, If Anyone Builds It, Everyone Dies, pretty much sums up the argument.
Yet not everyone is convinced that AGI’s arrival is imminent. Sceptics doubt that the industry’s favourite party trick of scaling computing power to produce smarter large language models will take us there. “We’re still a few conceptual breakthroughs short of AGI,” says one top researcher.
In a survey conducted by the Association for the Advancement of Artificial Intelligence this year, 76 per cent of the 475 (mostly academic) respondents thought it unlikely or very unlikely that current approaches would yield AGI. That could be a problem: the US stock market appears to rest on the opposite conviction.
Many of the attendees at last week’s event contested the Silicon Valley framing of AI. The world was not on a predestined technological trajectory with only one outcome. Other approaches were worth pursuing rather than betting so much on the deep learning behind generative AI models. AI companies could not just shrug off the problems they were creating today with promises of a more glorious tomorrow. The rest of society should resist being treated “as passengers at the back of a bus” hoping AI takes us somewhere nice, said Vallor.
The 85-year-old computer pioneer Alan Kay, a revered figure in the industry, offered some perspective. He argued that AI could undoubtedly bring real benefits. Indeed, it had helped detect his cancer in an MRI scan. “AI is a lifesaver,” he said.
However, Kay worried that humans are easily fooled and that AI companies cannot always explain how their models produce their results. Software engineers, like aeroplane designers or bridge builders, he said, had a duty of care to ensure their systems did not cause harm or fail. The main theme of this century should be safety.
The best way forward would be to harness humanity’s collective intelligence, steadily amassed over generations. “We already have artificial superhuman intelligence,” Kay said. “It is science.” AI has already produced some exquisite breakthroughs, such as Google DeepMind’s AlphaFold model that predicted the structures of over 200mn proteins — winning the researchers a Nobel.
Kay highlighted his particular concerns about the vulnerabilities of AI-generated code. Citing his fellow computer scientist Butler Lampson, he said: “Start the genies off in bottles and keep them there.” That’s not a bad adage for our AI age.
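Lampson's adage has a direct engineering reading: never let model-written code run loose in the host process. Below is a minimal Python sketch of that containment idea, assuming a Unix host; the file name generated_snippet.py and the specific CPU and memory limits are illustrative assumptions, not anything Kay or Lampson prescribed.

import resource
import subprocess
import sys

def run_untrusted(path, timeout_s=5):
    """Run a generated script in a separate, constrained interpreter
    rather than importing it into the host program (Unix-only)."""
    def limit_resources():
        # Cap CPU seconds and address space before the child starts.
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))
        resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))

    return subprocess.run(
        [sys.executable, "-I", path],  # -I: isolated mode, ignores env vars and user site-packages
        capture_output=True,
        text=True,
        timeout=timeout_s,             # wall-clock cap alongside the CPU cap
        env={},                        # inherit no environment secrets
        preexec_fn=limit_resources,
    )

if __name__ == "__main__":
    try:
        result = run_untrusted("generated_snippet.py")
        print(result.returncode, result.stdout)
    except subprocess.TimeoutExpired:
        print("bottle held: script exceeded its time budget")

A real deployment would go further (no filesystem or network access, a container or seccomp profile), but the design point stands: the generated code gets a bottle, not the run of the house.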
john.thornhill@ft.com