Technology Stocks : NVIDIA Corporation (NVDA)

From: Frank Sully, 11/17/2021 8:51:21 PM
 
Comments from Seeking Alpha posters “stocks for profit” and “geekinasuit” on AMD, Intel, and NVIDIA, as well as neural networks and AI:

**************************************************************************************
stocks for profit:

AMD has traveled a long twisty road in their rivalry with INTC.

At the height of its monopoly, INTC was the 800-pound gorilla. INTC's market heft dwarfed OEMs and even many server manufacturers. Supply and pricing leverage kept INTC's minions in thrall. INTC paid Dell $1B a year to not use AMD chips. There were likely many others where similar things occurred.

Yet time marches on. Things change. The rise of hyperscalers and the cloud shifted the center of power away from INTC. Strong-arm tactics used on INTC's minions don't work with gargantuan hyperscalers, where compute performance is prized above all else. No bribe or threat, however large, comes close to what can be earned with top-shelf chips in their datacenters. Even longtime INTC stalwart Facebook is switching to AMD.

This sea change in the compute market allows AMD's Zen to thrive. It's no accident that AMD is making faster headway with hyperscalers than with server manufacturers. Hyperscalers look at INTC and see a monkey, not an 800-pound gorilla. That monkey can scream and fling dung, but it doesn't matter: AMD's chips are better, so hyperscalers want them. Server makers are smaller and still flinch when the monkey screams. But HPE is smart and brave enough to ignore the flying dung and embrace AMD. Unfortunately many OEMs are financially weak and beholden to INTC. Some of them have started to break away from INTC's grasp, but this process will take time.

AMD's top down strategy hits INTC where they are weakest. For that effort AMD gets high revenue, high margin, and quite frankly high prestige by winning hyperscalers. These wins generate great PR. More sales will migrate down the food chain to the server market and eventually OEMs.

Ultimately INTC will be holding a sack of BS with no takers for their paltry offerings.

***************************************************************************************

geekinasuit:

LOL "screaming and flying dung" .... well said.

AMD is now dealing with Nvidia in a similar way, aiming at the place where Nvidia is weakest, which is HPC high-precision workloads. The massive performance gains the MI200 has over the A100 are enough to win high-profile adoption over Nvidia, and that will drive the required software development process and raise awareness that AMD is something to consider over Nvidia.

AMD notably left AI benchmarks out of their published MI200 performance disclosure, which suggests they are not beating Nvidia there. However, I expect at some point AMD will begin to rival Nvidia in so-called deep learning tasks. These are optimizations using artificial neural net structures which can get away with relatively low levels of precision, but it's not a black-and-white kind of thing; high precision is also useful for plenty of use cases. AMD will probably work top down as it gains traction, from high precision to lower precision. If the precision can be made variable in a programmable, targeted way (as opposed to globally), there will be additional advantages gained from that.

I want to stress the point that "AI" as it is referred to these days is not what most people think it is. It's actually a kind of optimization algorithm that moves outputs closer to a desired state (given a set of inputs); that's all it is, there's no mystery to it, and anyone can play around with the concept on a home PC. This optimization process comes with a ton of caveats and limitations, and there are many different ways of performing it: for example, the neural-network-like structures are absolutely not required, and neither is the commonly used back-error-propagation algorithm. These are idealized solutions to a much more general problem space.
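A minimal sketch of that idea (illustrative only, assuming Python with numpy): a couple of parameters are nudged at random and kept only when the outputs move closer to the desired state, with no neural network and no backpropagation involved.

# Illustrative sketch only: "learning" as plain optimization of an
# input -> output mapping, with no neural network and no backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# Toy mapping to recover: targets = 3*x - 0.5
x = np.linspace(-1.0, 1.0, 50)
targets = 3.0 * x - 0.5

def loss(params):
    a, b = params
    # How far the current outputs are from the desired state.
    return np.mean((a * x + b - targets) ** 2)

params = rng.normal(size=2)        # random starting guess
best = loss(params)
for _ in range(5000):
    candidate = params + rng.normal(scale=0.05, size=2)  # small random tweak
    c = loss(candidate)
    if c < best:                   # keep only tweaks that move outputs closer
        params, best = candidate, c

print("recovered parameters:", params, "final loss:", best)

Random hill climbing like this is crude, but it shows the basic loop: propose a change, measure how close the outputs are to the desired state, keep whatever helps.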

If we think of AI as what it really is (a general optimization process), then we can see that there are many potential applications where GPUs, FPGAs, and other kinds of accelerators can be used to perform the optimizations (aka "learning"), and as the problem space is expanded to include increasingly wider ranges of use cases, many of these applications will benefit from higher-precision math.
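As a hedged illustration of that precision point (a toy example, not tied to any particular accelerator): accumulating many small updates in low-precision arithmetic can silently lose information that higher precision retains.

# Illustrative sketch only: low-precision accumulation stalls once the running
# total grows large enough that each small increment rounds away to nothing.
import numpy as np

increments = np.full(100_000, 1e-4)

total_fp16 = np.float16(0.0)
for v in increments:
    total_fp16 = np.float16(total_fp16 + np.float16(v))  # 16-bit accumulate

total_fp64 = np.float64(increments).sum()                # 64-bit accumulate

print("float16 total:", float(total_fp16))   # stalls far below the true sum
print("float64 total:", float(total_fp64))   # ~10.0, as expected

Once the running total is large enough, each 1e-4 increment rounds away to nothing in float16, while the float64 sum lands near the true value of 10.0.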

Nvidia is ultimately targeting lower-precision use cases because that's what is mostly in use right now, driven by the need to speed up the excruciatingly slow optimization process. In my very humble opinion, many AI researchers have no idea WTF they are doing and are just trying different random things (some tried many times before) to see what sticks. There is indeed some exceptionally good research going on (it has been happening for decades), but the popularity explosion means that much of the research will be designed to attract grant money and other kinds of attention rather than accomplish anything meaningful, and this is what Nvidia is profiting from right now. Eventually the whole mess will mature to a higher level of sensibility, but we're a few years away from that happening.

***************************************************************************************

geekinasuit:

I lived through the 1980–1993 AI winter. I was an undergraduate student in the mid-80s completing my Bachelor of Computer Science (the field was still young back then) and did a final-year thesis on simulated neural network models. One person walked out of my talk, someone else laughed, and another heckled me: "This is not computer science, it's biology!" At the end of the talk, I got a standing ovation from the remaining attendees, including apologies for the bad behavior I had to endure.

I never lost interest in the field of AI, and I almost got a job in the 1990s coding on a neural network system (there was a brief resurgence of interest for military-type applications, but I'm a pacifist, so it was not for me).

My "problem" is that I know enough to know what's going on with AI today, and it's not as rosy as far too many investors think it is. If we're not being realistic about the problems space and the fact that almost all of the gains today are from massive performance speed up in hardware, then nothing really new is happening and we'll soon be stuck in the same place as before. The scaling issue is already being reached, brute force can only get you so far before exponentially growing costs at diminishing returns shut things down.

Bottom line is that we need new types of AI models. Neural networks may point us in the right direction, but the current concepts are simply not the solution. As I tried to explain, you do not even require neural network models or the backpropagation algorithm to optimize the mappings of inputs to outputs, and the neural network structures are not always the best solution (if ever). Before I had to stop working on neural network solutions (I had to get a real job and pay rent), I ditched the expensive neural network structures and backpropagation in favor of much faster and simpler algorithms that could optimize almost any kind of mapping, as long as it was a smooth curve (required for steepest-descent algorithms).

I'm actually blown away by the fact that backpropagation and neural networks remain the main topic of interest. Basically the whole concept has been stuck in time over the last 30 years, repeating the same old ideas over and over again. There is indeed some excellent research being done, and there are plenty of researchers who realize that the neural network models are nothing like what goes on inside biological brains, but these people will be harmed if interest abruptly shuts down again. It may not happen as badly as before, since there are now some applications that fit the models well enough to keep things chugging along; I hope so.
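A hedged sketch of that simpler approach (a reconstruction for illustration, not the original code): plain steepest descent on a smooth parameterized mapping, with gradients estimated by finite differences, so neither a neural network structure nor backpropagation is needed.

# Illustrative reconstruction only: steepest descent on a smooth mapping using
# finite-difference gradients, with no neural network and no backpropagation.
import numpy as np

x = np.linspace(-1.0, 1.0, 200)
targets = np.tanh(2.0 * x)               # any smooth target curve will do

def model(params, x):
    a, b, c, d = params                  # a simple cubic parameterization
    return a + b * x + c * x**2 + d * x**3

def loss(params):
    return np.mean((model(params, x) - targets) ** 2)

def numeric_grad(params, eps=1e-6):
    g = np.zeros_like(params)
    for i in range(len(params)):
        step = np.zeros_like(params)
        step[i] = eps
        g[i] = (loss(params + step) - loss(params - step)) / (2.0 * eps)
    return g

params = np.zeros(4)
for _ in range(3000):
    params -= 0.2 * numeric_grad(params)  # steepest-descent update

print("fitted parameters:", params, "final loss:", loss(params))

The only requirement is that the loss surface be smooth enough for the gradient estimate to mean something, which is exactly the "smooth curve" condition mentioned above.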

Anyway, fun talking about this stuff, but we're here to make money even if it means investing in concepts that may eventually fail; as long as we get in and out at the right times, we're all good.

***************************************************************************************

geekinasuit:

This is an excellent presentation and discussion of what is going on, and it should be easy for most people to follow (one-hour video):



There are other links I can post; one of the more interesting ones is about biological versions of surprisingly complex intelligent behaviors seen in organisms that have no identifiable "brain" at all. Intelligent behaviors, including communication between individual organisms, have been documented in plants, slime mold, and even some single-cell organisms. It sounds laughable, but it's serious science by respected researchers, not quackery. Here's a great talk on the subject (one-and-a-half-hour video):



I think the bottom line is that neural nets are simply one type of solution within a vastly larger superset of possible solutions, and that while we are seeing rapid gains due to exponential increases in computational power, it's a big mistake to ignore all the other kinds of solutions that are available, many of which will be much simpler to implement and will require much lower levels of computational power. An example of using overly complex solutions is what's going on with self-driving vehicles: some companies are stubbornly focusing exclusively on overly complex solutions that use camera-based vision, despite there being much simpler solutions using a combination of cameras and other sensors such as radar and lidar. The combination of sensors is also more fault tolerant and much more capable, and by extension these solutions are much safer as well.
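A toy illustration of that fault-tolerance argument (sensor names and noise figures are made up, and real systems are far more involved): fusing independent range estimates still gives a usable answer when one sensor drops out.

# Illustrative sketch only: fuse independent range estimates and degrade
# gracefully when a sensor fails, instead of relying on one modality.
import numpy as np

def fuse(readings):
    # Inverse-variance weighted average of the available (value, variance) pairs.
    valid = [(v, var) for v, var in readings if v is not None]
    if not valid:
        return None
    weights = np.array([1.0 / var for _, var in valid])
    values = np.array([v for v, _ in valid])
    return float(np.sum(weights * values) / np.sum(weights))

rng = np.random.default_rng(1)
true_range = 42.0                                 # metres to the obstacle
camera = (true_range + rng.normal(0, 2.0), 4.0)   # noisy vision estimate
radar  = (true_range + rng.normal(0, 0.5), 0.25)  # tighter range estimate
lidar  = (true_range + rng.normal(0, 0.3), 0.09)

print("all sensors fused:", fuse([camera, radar, lidar]))
print("camera dropped out:", fuse([(None, None), radar, lidar]))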

***************************************************************************************

geekinasuit:

One more link for you, but I would not look at it until after going through the other two links (more than once if you are interested) because it gets down to the lowest levels of computational theory.

Have your favorite beverage when watching this talk about computational irreducibility: (one hour 20 minute video)



The idea here is that the problems AI is attempting to solve (including AI itself) are simply very difficult problems with no simple answers, that finding answers is always going to require a lot of time and energy, and that this includes finding the solutions to the problems themselves. Finding the solution to a problem space is often the most time- and energy-intensive operation; gains are only possible by re-using the solution through scaling and generality of use.

BTW, the speaker (Stephen Wolfram) may seem to you like an egotistical lunatic, but I've come to realize he's on the right track and is a very level-headed scientist worth following.

As you said "all models eventually fail" and that is indeed true. We should definitely enjoy and profit from the good times while they last! The trick is to get in and out at the right times.

I may seem pessimistic; however, I'm actually very optimistic that the resurgent neural network party is not over yet. It could go on for at least another 5 years, and as I mentioned, there are some practical applications that fit the models well enough to keep the neural network idea persisting for a long time after the excitement starts to drop off. If you are invested in Nvidia, for example, make the most out of the extreme hype while you can. Hopefully you will be able to make a better-informed decision when/if the party is ending, before the herd does.

You never want to be following the herd: if it’s going in the same direction as you are, it's only by coincidence you are on the same path.

GLTA

***************************************************************************************