• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: October 25th, 2023


  • Rapidus was created last year and only started clearing the plot of land for its first fab in September. Talking about 1nm production is putting the cart before the horse; they aren’t expecting their first 2nm node to be in production until 2027. Let’s not forget that GloFo had decades of experience, bought IBM’s fabs, and had an EUV machine, and still called it quits on pursuing the leading edge after years of failing. Intel had numerous issues with 10nm, and TSMC and Samsung are now struggling with 3nm. The point being, talk is cheap and execution is extremely hard in this industry.




  • They would need to completely revamp the branding if they want to market toward younger, non-tech-interested buyers: simplify the hardware/software and boost marketing with a cool, luxury-focused image.

    That’s impossible, as Google can’t control what people do with Android. If some OEM wants to make a $50 Android phone, Google can’t stop them.

    As you pointed out, Android still outsells Apple globally, but it should be obvious that most of those phones are budget devices, not flagships.

    Android will never be a premium brand, but there can be premium brands or lineups within Android. This is probably why Google completely killed the Nexus brand, which used to sell decent, very cheap devices; Pixel is now more of a mid-to-high-end brand.



  • You’re overlooking the context of that situation. The whole reason Intel hit a wall with CPU performance was that they were overzealous with the improvements planned for 10nm, and as we know, Intel couldn’t make it happen on time or as planned. Meanwhile, TSMC and Samsung moved to EUV, which paid off big time. AMD soon moved to TSMC and rode on their coattails. Intel has learned from that mistake, which is why they are now open to using TSMC, not only for additional capacity but to make sure they are never stuck on a node while everyone else isn’t.

    So Nvidia wouldn’t get stuck like Intel did, as Nvidia hops around to whichever fab it deems best in terms of performance and pricing. Now, could Nvidia hit a wall with architectural designs? Sure, but so could AMD. Considering that this is Nvidia’s core business, that they have far more money and engineers, and that they are a far more desirable place to work, it’s probably less likely Nvidia will have issues and more likely AMD will.




  • Not sure why people are so pessimistic about future support after seeing all the effort Intel has put into Arc drivers, which obviously involve manual tuning too. APO will never come to every game (most games wouldn’t even benefit much from it, and it would be far too much work), but all Intel would need to do is look at, say, the top 100 games played every year, quickly go through them to see which are underperforming due to scheduling issues (not hard to do), then hand-tune the ones where they expect to find performance left on the table.

    Finding an additional 30% performance with lower power consumption is definitely worth the effort; it’s far cheaper for Intel to go down this route than to get those gains in silicon. And it’s not like Intel has any plans to move away from heterogeneous designs anytime soon; even AMD is now doing them and has its own scheduler issues (X3D cache on only one of two CCDs, and Zen4+Zen4c).

    I’d obviously like to see support on 13th gen and the midrange SKUs too, and ideally not have a separate APO app.
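    At its core, this kind of per-game tuning is just telling the OS scheduler where threads belong. As a loose illustration only (not Intel’s actual APO mechanism), here’s a minimal Linux-only sketch using Python’s `os.sched_setaffinity` to restrict a process to a preferred subset of cores; which core IDs are P-cores versus E-cores varies by CPU, so the “first half are P-cores” split below is purely an assumption for the example.

    ```python
    import os

    # Hypothetical sketch (Linux-only): pin the current process to a subset
    # of cores, the same kind of placement hint a per-game tuning tool applies.
    allowed = sorted(os.sched_getaffinity(0))            # cores we may run on now
    preferred = set(allowed[: max(1, len(allowed) // 2)])  # ASSUMPTION: treat the
                                                           # first half as "P-cores"
    os.sched_setaffinity(0, preferred)                   # restrict ourselves to them
    print(os.sched_getaffinity(0) == preferred)          # affinity now matches
    ```

    A real scheduler hint would be applied per thread and driven by profiling data rather than a fixed split, which is exactly why hand-tuning the top titles is tractable while covering every game is not.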





  • The context of that generational leap makes it far less impressive, and it’s clear that Apple won’t be able to pull this off again in the next two years unless there are major changes.

    Apple used to have M silicon trail A silicon architecture by a generation; this is the first year Apple has put them on the same level, so it’s essentially a two-generation architecture leap this year.

    M2 was on N5P, while M3 is on N3B; that node difference alone can account for half of the uplift.

    I’d actually argue that M3 is underwhelming for what went into it. It also wasn’t ‘one year’, but 16 months.