Strategies & Market Trends : 2026 TeoTwawKi ... 2032 Darkest Interregnum


To: Julius Wong who wrote (210717)1/29/2025 10:21:33 PM
From: TobagoJack
 
Re <<2>>

bloomberg.com
Microsoft Probing If DeepSeek-Linked Group Improperly Obtained OpenAI Data
- Microsoft notified OpenAI of activity that may violate terms
- Individuals were observed exfiltrating data using OpenAI API
bloomberg.com

Microsoft Has Kind Words for DeepSeek AI, Offers It to Customers
Dina Bass
30 January 2025 at 09:44 GMT+9


Microsoft Corp. Chief Executive Officer Satya Nadella had some kind words for DeepSeek, the Chinese artificial intelligence startup that roiled his company’s shares earlier this week.

The upstart stunned the US tech industry with an open-source AI model called R1, which it claims rivals or outperforms Western technology at a fraction of the cost.

“DeepSeek has had some real innovations,” Nadella said during an investor call after Microsoft reported quarterly results on Wednesday. “Obviously now that all gets commoditized and it’s going to get broadly used.”

DeepSeek’s feat prompted investors to wonder whether Nadella’s company needs to spend so much money on AI infrastructure. Couldn’t Microsoft and partner OpenAI train their AI models and handle user queries — a process known as inferencing — more cheaply?

Nadella said they already have been doing exactly that.

“We ourselves have been seeing significant efficiency gains both in training and inference for years now,” he said. Microsoft has used its software to wring better performance and cost savings from each new generation of AI models and AI hardware, Nadella said.

He said Microsoft did a lot of the work in partnership with OpenAI. It’s not enough to release the best new model, he added. You have to make it cost-effective to use. “If it’s too expensive to serve, it’s no good, right?” he said.

Microsoft still plans to spend $80 billion on data centers this fiscal year to help it meet demand from customers for its AI products, though the company expects the growth in expenses to taper off in fiscal 2026, which starts July 1.

On Tuesday, Bloomberg News reported that Microsoft and OpenAI are investigating whether a group linked to DeepSeek had obtained data output from OpenAI’s technology without authorization.

That hasn’t stopped Microsoft from offering DeepSeek’s model to customers. On Wednesday, the company said it had added R1 to its Azure AI Foundry, a repository of more than 1,800 models that companies can use to design and manage AI programs.
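
For readers who want to try it, the sketch below shows roughly how a model deployed from Azure AI Foundry can be called with Microsoft's azure-ai-inference Python package. The endpoint URL, API key and the "DeepSeek-R1" deployment name are illustrative placeholders rather than details from the article; the actual values come from the Foundry deployment page.

# Rough sketch, not an official recipe: querying an R1 deployment on Azure
# AI Foundry with the azure-ai-inference SDK. Endpoint, key and model name
# below are placeholders.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-foundry-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                          # placeholder
)

response = client.complete(
    model="DeepSeek-R1",  # assumed deployment name for the R1 model
    messages=[UserMessage(content="Summarize the idea behind chain-of-thought reasoning.")],
)

print(response.choices[0].message.content)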



To: Julius Wong who wrote (210717)1/29/2025 11:40:24 PM
From: TobagoJack
 
re <<DeepSeek sends 2 big signals that shatter tech investor assumptions>>

Team Russia observes, and if correct, then more DeepSeeks in pipeline(s)

rt.com

Chinese education is superior to Western – Telegram founder Durov

Telegram co-founder and CEO Pavel Durov has attributed China’s rapid advancements in artificial intelligence (AI) to its highly competitive education system, emphasizing the nation’s consistent excellence in mathematics and programming.

In a statement on Wednesday, Durov noted that China’s education system fosters “fierce competition” among students, a principle reminiscent of the “highly efficient” Soviet model. He contrasted this with Western educational approaches, where competition is often minimized to avoid hurting students’ feelings.

“Victory and defeat are two sides of the same coin. Eliminate the losers — and you eliminate the winners,” Durov wrote in a post celebrating the Chinese New Year.

Durov emphasized that while telling all students they are champions, regardless of performance, may seem kind, reality will quickly shatter this illusion after graduation.

“Reality, unlike well-meaning school policies, does have public grades and rankings — whether in sports, business, science, or technology,” he wrote on Wednesday, warning that removing transparency in student performance can make school feel meaningless for ambitious teenagers.

“AI benchmarks that demonstrate DeepSeek’s superiority are one of such public rankings. And more are coming,” he cautioned.

“China’s progress in algorithmic efficiency hasn’t come out of nowhere,” Durov explained, noting that Chinese students have “long outperformed others in math and programming at international Olympiads.”

China leads the US in cumulative medal counts in the International Mathematical Olympiad (IMO), with 185 gold medals to 151, as well as 102 to 68 in the International Olympiad in Informatics (IOI). While the US managed to beat China by a mere two points in the 2024 IMO, ending its decade-long reign, some critics attributed the victory to the “disproportionate” Asian representation on the American team.

Durov’s comments come in the wake of the success of Chinese AI startup DeepSeek, which has developed an open-source reasoning model without access to cutting-edge US chips and at a fraction of the cost. He cautioned that unless the US education system undergoes significant reforms to encourage competition and recognize excellence, China’s growing dominance in technology appears inevitable.



To: Julius Wong who wrote (210717)1/30/2025 8:41:06 AM
From: TobagoJack
 
Re <<DeepSeek … big signals … assumption>> … if true, news is on the OMG level … coding was done using machine language, and … too scary to think about implications … shall apprise the Jack of the situation … that (1) DS learning done using Huawei chip, (2) can use any other normal CPU, and (3) Nvidia GPU not necessary

In the meantime,
nature.com

Scientists flock to DeepSeek: how they’re using the blockbuster AI model

Researchers are testing how well the open model can perform scientific tasks — in topics from mathematics to cognitive neuroscience.
29 January 2025
The DeepSeek model is accessible through a chatbot app.

Scientists are flocking to DeepSeek-R1, a cheap and powerful artificial intelligence (AI) ‘reasoning’ model that sent the US stock market spiralling after it was released by a Chinese firm last week.

Repeated tests suggest that DeepSeek-R1’s ability to solve mathematics and science problems matches that of the o1 model, released in September by OpenAI in San Francisco, California, whose reasoning models are considered industry leaders.

Although R1 still fails on many tasks that researchers might want it to perform, it is giving scientists worldwide the opportunity to train custom reasoning models designed to solve problems in their disciplines.

“Based on its great performance and low cost, we believe Deepseek-R1 will encourage more scientists to try LLMs in their daily research, without worrying about the cost,” says Huan Sun, an AI researcher at Ohio State University in Columbus. “Almost every colleague and collaborator working in AI is talking about it.”

Open season

For researchers, R1’s cheapness and openness could be game-changers: using its application programming interface (API), they can query the model at a fraction of the cost of proprietary rivals, or for free by using its online chatbot, DeepThink. They can also download the model to their own servers and run and build on it for free — which isn’t possible with competing closed models such as o1.
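
As a concrete illustration of the API route, here is a minimal sketch of querying R1 programmatically. It assumes DeepSeek's endpoint is OpenAI-compatible and that the reasoning model is exposed under the name "deepseek-reasoner"; both are assumptions to verify against DeepSeek's current documentation, and the key is a placeholder.

# Minimal sketch: calling R1 through an OpenAI-compatible endpoint.
# The base URL, model name and API key are assumptions/placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="<your-deepseek-api-key>",     # placeholder
    base_url="https://api.deepseek.com",   # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",             # assumed identifier for the R1 model
    messages=[{"role": "user", "content": "Prove that the sum of two even integers is even."}],
)

print(response.choices[0].message.content)

Because many local inference servers expose the same chat-completions interface, the same client code can in principle point at a self-hosted copy of the open-weight model by swapping the base URL.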

Since R1’s launch on 20 January, “tons of researchers” have been investigating training their own reasoning models, based on and inspired by R1, says Cong Lu, an AI researcher at the University of British Columbia in Vancouver. That’s backed up by data from Hugging Face, an open-science repository for AI that hosts the DeepSeek-R1 code. In the week since its launch, the site had logged more than three million downloads of different versions of R1, including those already built on by independent users.

Scientific tasks

In preliminary tests of R1’s abilities on data-driven scientific tasks — taken from real papers in topics including bioinformatics, computational chemistry and cognitive neuroscience — the model matched o1’s performance, says Sun. Her team challenged both AI models to complete 20 tasks from a suite of problems they have created, called ScienceAgentBench. These include tasks such as analysing and visualizing data. Both models solved only around one-third of the challenges correctly. Running R1 using the API cost 13 times less than did o1, but it had a slower “thinking” time than o1, notes Sun.

R1 is also showing promise in mathematics. Frieder Simon, a mathematician and computer scientist at the University of Oxford, UK, challenged both models to create a proof in the abstract field of functional analysis and found R1’s argument more promising than o1’s. But given that such models make mistakes, researchers can benefit from them only if they are already skilled enough to tell a good proof from a bad one, he says.

Much of the excitement over R1 is because it has been released as ‘open-weight’, meaning that the learnt connections between different parts of its algorithm are available to build on. Scientists who download R1, or one of the much smaller ‘distilled’ versions also released by DeepSeek, can improve its performance in their field through extra training, known as fine-tuning. Given a suitable data set, researchers could train the model to improve at coding tasks specific to the scientific process, says Sun.
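
To make the fine-tuning idea concrete, here is a minimal sketch using the Hugging Face transformers, datasets and peft libraries to attach LoRA adapters to one of DeepSeek's published distilled R1 checkpoints and train on a small domain data set. The data file, sequence length and hyperparameters are illustrative assumptions, not a recipe from the article.

# Minimal sketch: LoRA fine-tuning of a distilled R1 checkpoint.
# Dataset path and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # distilled R1 variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Attach small LoRA adapters so only a fraction of the weights are trained.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Any instruction-style corpus works here; this file name is a placeholder.
dataset = load_dataset("json", data_files="my_domain_tasks.jsonl", split="train")

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="r1-distill-finetuned",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()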



To: Julius Wong who wrote (210717)1/30/2025 9:20:51 AM
From: TobagoJack
 
Random clips, neo-fireworks, and BYD commercial final version