Pastimes : All Things Technology - Media and Know HOW

From: S. maltophilia — 11/10/2025 1:02:09 PM
 
....The last concern is economic. LLMs might stop being cheap to use. Right now, we run them on large companies’ servers, and their use is heavily subsidized. OpenAI (ChatGPT) is spending billions of dollars per year and is years away from profitability. It would take a better financial analyst than I am to sort this all out, but there’s concern that some of the investment money isn’t real, due to circular investments. In any case, worldwide data center costs for generative AI alone are projected to reach $6.7 trillion per year (yes, trillion — roughly twice the GDP of the United Kingdom). Never mind the environmental challenge of even generating the electricity. Do we want to put so much of the world’s computing into just a few companies’ data centers?


So what is the new paradigm?
The key idea behind the new paradigm is that, basically, LLMs aren’t for everything. They have made possible some new computer applications — creative rather than analytic — where the user wants suggestions, rough drafts or sketches, enumerated possibilities, and approximate paraphrases. But it’s time to admit that old-style computing isn’t going to be replaced.

That opens up new roads in two directions. One is smaller LLMs. The reason for training LLMs on all the world’s available text and pictures was the hope that human-like thinking would emerge. If we get back to the basic function of LLMs — pattern recognition and paraphrase — then maybe those training sets are far too big, and we’d be better off with smaller models that can run locally on modest CPUs and GPUs. Your computer would be your own again.

After all, it shouldn’t take trillions of words to model English. We don’t consider Biblical Hebrew a...

askwoody.com