Average MSRP is $30K USD.
From one customer alone, Nvidia made $15 billion. Jesus.
Yeah, their pricing already reflects investors' ambitions of selling the pickaxes to the gold rush. IMO, it's never about current success but about the potential behind a company.
Hyperscalers don’t pay anywhere close to MSRP; expect more like $5-8K per chip.
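Quick back-of-envelope on what those figures would imply (a rough sketch only; the $30K MSRP and $5-8K hyperscaler numbers are the estimates quoted above, not confirmed pricing):

    # Implied unit counts for ~$15B of revenue at the price points
    # quoted in this thread (estimates, not confirmed Nvidia pricing).
    revenue = 15e9
    for label, price in [("MSRP ~$30K", 30_000),
                         ("hyperscaler ~$8K", 8_000),
                         ("hyperscaler ~$5K", 5_000)]:
        print(f"{label}: ~{revenue / price:,.0f} chips")

At list price that works out to roughly 500K chips; at the discounted figures it's closer to 1.9-3 million, which is why the per-chip price matters so much to these revenue numbers.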
And this is part of the reason why Nvidia can price their consumer GPUs like they do.
They're going full throttle on B2B, and any capacity that might've been dedicated to the retail market just frees up production for the AI business, which is absolutely, ridiculously profitable for them ATM.
Data center products are bottlenecked by chip-on-wafer-on-substrate (CoWoS) packaging because they use chiplets.
Gaming GPUs aren't; Nvidia has pricing power on consumer GPUs because they're a generation ahead of competitors.
I got to enable a special instance tier for one of our engineering teams in AWS the other day. They come with 6 or 7 of these GPUs. Had to coordinate with our TAMs because they're basically bare-metal hosts and cost so, so much. One of the TAMs told me even internal AWS folks aren't allowed to play with them because of the cost and demand. Crazy.
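If you want to see what those boxes look like on paper, here's a minimal sketch with boto3 (assuming AWS credentials and a region that exposes the public P-family types; the instance type names are just the documented p4d/p5 ones, not whatever special tier our account needed enabled):

    # Minimal sketch: print GPU counts for a couple of large public
    # GPU instance types via the EC2 API. Requires boto3 + credentials.
    # p4d.24xlarge / p5.48xlarge are used only as illustrative names.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instance_types(
        InstanceTypes=["p4d.24xlarge", "p5.48xlarge"])
    for it in resp["InstanceTypes"]:
        for gpu in it.get("GpuInfo", {}).get("Gpus", []):
            print(it["InstanceType"], gpu["Manufacturer"], gpu["Name"],
                  "x", gpu["Count"],
                  f'({gpu["MemoryInfo"]["SizeInMiB"]} MiB each)')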
AMD and their MI300 should gain similar traction and propel AMD to much greater heights, market-cap-wise. Now might be an excellent time to buy…
I mean, who else makes this level of AI accelerator? Nobody. Nobody but AMD and Nvidia can do this right now. Seems to me they are both going to be much, much larger companies in the next 10 years than anyone thought they might be.
I mean, who else makes this level of AI accelerator?
Google, Microsoft, and Amazon are making AI accelerators for their datacenters. Some are for training and others are for inference (running the trained models for services).
I'd be somewhat hesitant until the full benchmarks are out. AMD's higher memory capacity (I think it was) sounded neat, but that advantage could prove very fleeting.
None of those companies are real chip makers. While they may be able to produce a nice custom solution for their own applications, they will never compete with the actual chip makers; AMD and Nvidia will continue to supply a greater and greater share of AI super chips, eventually eliminating most custom solutions.
Yeah, I'm in this industry, and to get an H100/A100 card you're looking at around 40-60 weeks, or at least 4 months. It's not Meta/Facebook alone that uses these cards…
40 weeks is a lot more than 4 months… unless you're saying the low end isn't 40 weeks.