
Why Many Gamers Are Choosing AMD GPUs Over Nvidia in 2025

July 13, 2025 • Ethan Phillips

Price-to-Performance: AMD’s Value Advantage

One of the biggest reasons gamers are flocking to AMD in 2025 is the superior price-to-performance value of Radeon GPUs. AMD has aggressively priced its latest Radeon RX 7000 and 9000-series cards to undercut equivalent GeForce RTX models in cost per frame. For example, the new midrange Radeon RX 9060 XT 16GB debuted at an MSRP around $350, yet it often outperforms Nvidia’s competing RTX 5060 Ti while costing roughly $80 less. According to TechSpot’s analysis, at MSRP the RX 9060 XT delivers about 17% more performance per dollar than an RTX 5060 Ti 16GB, clearing the roughly 15% value advantage that reviewers argue AMD needs to offset Nvidia’s stronger feature set. In practical terms, a gamer can get similar or better FPS with a Radeon card while spending less money.
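The cost-per-frame math behind claims like this is straightforward. Here is a minimal sketch — the average-FPS figures are hypothetical, chosen only to reproduce the cited ~17% gap; real numbers vary by benchmark suite:

```python
def cost_per_frame(price_usd: float, avg_fps: float) -> float:
    """Dollars paid per frame of average performance (lower is better)."""
    return price_usd / avg_fps

# Hypothetical average FPS across a test suite; illustrative only.
rx_9060_xt = cost_per_frame(price_usd=349, avg_fps=100)   # ~$3.49 per frame
rtx_5060_ti = cost_per_frame(price_usd=429, avg_fps=105)  # ~$4.09 per frame

# Performance per dollar is the inverse, so the Radeon's advantage is:
advantage = (rtx_5060_ti / rx_9060_xt - 1) * 100
print(f"RX 9060 XT: {advantage:.0f}% more performance per dollar")
```

Plugging in each card's MSRP and your own benchmark averages for the games you actually play is a quick way to sanity-check any reviewer's value claim.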

This value gap shows up across multiple tiers. The RX 9060 XT is currently “the best-value graphics card available” in its class, offering significantly lower cost per frame than Nvidia’s 50-series cards in many regions. Even when both cards are selling above MSRP, the Radeon often maintains a double-digit percentage lead in performance per dollar, making it the sensible choice for budget-conscious buyers. AMD’s previous generation RDNA 3 cards (like the RX 7800 XT and 7900 XT/XTX) have also seen price cuts that improve their value, whereas Nvidia’s high-end RTX 40-series launched at steep prices (e.g. $1,199 for the RTX 4080) and only modest discounts have emerged over time. Gamers have taken notice of Nvidia’s generational price hikes and are rewarding AMD for delivering more frames for the buck.

More VRAM for Future-Proofing

Another key factor tilting gamers toward AMD is VRAM – the memory on the GPU. Modern games are increasingly demanding in terms of VRAM, and many recent titles can exceed 8GB usage at high textures or resolutions. Nvidia has been conservative with VRAM on some of its midrange cards (for instance, the RTX 3060 Ti/4060 Ti launched with 8GB, and the RTX 4070 has 12GB), which has led to concerns about “future-proofing.” AMD, by contrast, has made high VRAM a selling point: most current Radeon GPUs in the mid-tier and up come with 12GB, 16GB, or even 24GB memory. This difference became very apparent in 2023-2024 when games like The Last of Us Part I and Resident Evil 4 Remake showed notable performance dips on 8GB cards at higher settings, whereas cards with 12–16GB handled them smoothly.

By mid-2025, 8GB is widely considered insufficient for gaming at anything beyond very low settings. In fact, the 16GB on AMD’s mainstream cards is viewed as a significant advantage. As TechSpot notes, the Radeon 9060 XT’s double VRAM (16GB vs. 8GB) gives it an edge now that 8GB cards “tank performance” once their memory runs out. Nvidia has partially responded by releasing 16GB variants of some models (e.g. an RTX 5060 Ti 16GB, and previously a 16GB 4060 Ti due to community pressure), but those often come at a higher price point that erodes value. For example, an RTX 5060 Ti 16GB launched around $429, notably above the RX 9060 XT’s price, while Nvidia’s cheaper 8GB version of the 5060 Ti is generally not recommended for 2025 gaming.

AMD’s philosophy of “more VRAM for your dollar” resonates with gamers who want to future-proof their systems. A card like the Radeon RX 7800 XT or 9070 (16GB) can comfortably run modern titles at 1440p with ultra textures, whereas a similarly priced Nvidia RTX 4070 or 5070 with 12GB might already be nearing its VRAM limits in certain games. The extra memory also benefits those who mod games or play VR titles, which can consume large amounts of textures. In short, AMD has won goodwill by addressing a pain point (VRAM shortages) that many Nvidia owners experienced in recent years.
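A rough back-of-the-envelope estimate shows why high-resolution textures eat VRAM so quickly. This is a simplification — real engines use block compression (e.g. BC7, roughly 4:1) and stream mips on demand — so treat the result as an upper bound for uncompressed data:

```python
def texture_vram_bytes(width: int, height: int,
                       bytes_per_pixel: int = 4, mipmaps: bool = True) -> int:
    """Approximate VRAM for one uncompressed texture; a full mip chain adds ~1/3."""
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

# One uncompressed 4096x4096 RGBA texture with mips: ~85 MiB.
one_tex = texture_vram_bytes(4096, 4096)

# A scene streaming ~120 such textures (before framebuffers, geometry, BVH):
total_gb = (120 * one_tex) / 1024**3
print(f"~{total_gb:.1f} GB of texture data alone")
```

Even with aggressive compression, it is easy to see how ultra texture packs plus ray-tracing acceleration structures push past an 8GB budget at 1440p and above.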

Driver and Software Improvements

AMD has historically been dogged by a reputation for shaky drivers and weaker software features compared to Nvidia, but in 2025 that narrative has changed considerably. Radeon drivers (the Adrenalin software suite) have matured and are now regarded as stable and feature-rich. Gamers are reporting far fewer issues like black screens or crashes than in past generations. In addition, AMD’s software package offers a robust feature set: Radeon GPUs have built-in streaming/recording tools, performance tuning, and technologies like Radeon Super Resolution and Anti-Lag. AMD even introduced HYPR-RX, a one-click feature that intelligently applies resolution scaling, anti-lag, and sharpening to boost frame rates without much hassle – making it easier for users to optimize games.

Crucially, AMD has been working to close the gap with Nvidia’s proprietary features. With the latest RDNA 3 and RDNA 4 architectures, AMD’s driver optimizations and API support have reduced CPU overhead, which helps in CPU-bound games or high refresh gaming. (In some scenarios, Nvidia’s driver overhead can cause GeForce GPUs to bog down when paired with a slower CPU, whereas AMD’s approach yields slightly higher fps on the same CPU.) This is a technical edge that benefits gamers chasing high frame rates in esports titles or 1080p gaming on mainstream CPUs.

On the software side, while Nvidia still leads in certain areas (more on that later), AMD’s gap has narrowed. For example, AMD’s latest FSR (FidelityFX Super Resolution) technology has progressed to version 4, bringing improved upscaling image quality closer to DLSS. Tom’s Hardware reports that FSR 4 is “a big jump in image quality over FSR 3” and is rolling out to more games. AMD’s frame generation solution (part of FSR 3.x) works on all GPU brands, giving a wider swath of gamers access to frame-doubling, though it lacks the AI refinement of Nvidia’s approach. AMD’s drivers have also integrated features like Radeon Boost (dynamic resolution to improve fps in motion) and Radeon Chill (power-saving frame rate caps) which can be handy for gamers. These improvements signal that AMD’s software ecosystem is no longer a reason to avoid Radeon – in fact, many users appreciate AMD’s more open and developer-friendly approach (such as making FSR available to all and having robust Linux driver support).

That said, Nvidia still offers some premium software tricks – for instance, GeForce Experience with NVENC encoding, RTX Video Super Resolution, and the Nvidia Broadcast suite (for AI noise suppression, virtual backgrounds, etc.). AMD has equivalents (like the AMD Noise Suppression feature and VCE encoder), but Nvidia’s tend to be more polished. Nonetheless, for pure gaming purposes, AMD’s recent driver reliability and the utility of features like Anti-Lag have given gamers confidence that switching to Radeon won’t mean sacrificing a stable experience. AMD even addressed past weaknesses by boosting its ray tracing drivers and adding AI acceleration support (Matrix cores on RDNA4), which we’ll discuss next.

Ray Tracing Performance vs. Real-World Use

There’s no question that Nvidia’s GeForce RTX cards still hold the crown in raw ray tracing performance. Nvidia invested heavily in ray-tracing cores and software like DLSS to make real-time ray tracing feasible, and their lead persists into 2025. High-end GeForce GPUs (RTX 4080/4090 and the newer 50-series) can run ray-traced games significantly faster than AMD equivalents, especially when leveraging DLSS quality or frame generation. For example, in a fully ray-traced title or path-traced game, an Nvidia RTX 5090 or 4080 will outpace a Radeon 7900 XTX or 9070 XT by a noticeable margin. However, the importance of this advantage depends on the gamer.

Many mainstream gamers and competitive players still prioritize high frame rates and visual clarity without ray tracing. In standard rasterized (non-RT) rendering, AMD’s GPUs perform excellently – often matching or beating Nvidia at a given price point. A Radeon RX 7900 XTX (AMD’s last-gen flagship) can keep up with the pricier RTX 4080 in traditional raster performance, which is “quite impressive for AMD and very welcome competition,” as one workstation-focused test noted. The compromise is that when you “turn on every bell and whistle” like ultra ray tracing, AMD’s cards “can’t keep up” with Nvidia’s top GPUs in those workflows. Nvidia’s head start in RT hardware and denoising tech gives them an edge for those who insist on maxed-out visuals.

The crux in 2025 is that many gamers don’t find ultra ray tracing to be a must-have, especially if it means paying a hefty premium or taking a big fps hit. Ray tracing effects (global illumination, reflections, shadows, etc.) undeniably make games look better, but they’re also very performance-intensive. On midrange hardware (the segment where a lot of gamers purchase), enabling ray tracing often isn’t practical without upscaling. For instance, an RTX 4060-class GPU might technically support ray tracing, but trying to use it at 1440p could push frame rates below 60 FPS unless DLSS is used – at which point the visual difference narrows. AMD’s strategy has been to offer great raster performance and sufficient RT for moderate use, betting that many players will run games in raster mode or with only mild ray tracing. This bet seems to be paying off. Gamers who care more about high refresh rates or competitive performance happily disable ray tracing (or run at “Medium” RT at most), negating Nvidia’s advantage. In those scenarios, a Radeon card that costs $100 less can deliver equal or better experience with settings that players commonly use.
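The "DLSS narrows the gap" point comes down to internal render resolution. Both DLSS and FSR "Quality" modes use roughly a 1.5x upscale factor (about 67% per axis), so the GPU shades well under half of the output pixels. A sketch:

```python
def internal_resolution(out_w: int, out_h: int, scale: float = 1.5):
    """Internal render size for a given upscale factor (1.5x ~= 'Quality' mode)."""
    return round(out_w / scale), round(out_h / scale)

# 1440p output with Quality-mode upscaling:
w, h = internal_resolution(2560, 1440)
pixel_ratio = (w * h) / (2560 * 1440)
print(f"Rendering at {w}x{h}, shading {pixel_ratio:.0%} of output pixels")
```

That is why enabling an upscaler routinely recovers the frame rate lost to ray tracing: the expensive per-pixel work runs on roughly 44% of the pixels, and the upscaler reconstructs the rest.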

It’s also worth noting that AMD has improved ray tracing generation-over-generation. RDNA 3 GPUs were about 50% faster in ray tracing than RDNA 2, and the new RDNA 4 (RX 9000 series) reportedly further closes the gap. Tom’s Hardware observes that with the RX 9070 XT, “AMD shored up [its] greatest weaknesses… RT performance and AI, both of which are now much closer to Nvidia’s latest”. In practical terms, a mid-2025 Radeon like the 9070 XT can comfortably handle ray-traced lighting in many games at 1080p or 1440p with FSR enabled – something older Radeons struggled with. It’s mainly in the extreme case (ultra RT at 4K) that Nvidia’s top-end cards pull far ahead – but very few gamers in the mainstream target that scenario. Thus, for ray tracing enthusiasts who want the absolute best visuals (and who likely have the budget for a 5080 or 5090), Nvidia remains the go-to. But for the average gamer, ray tracing is a nice-to-have, not a necessity, and AMD provides plenty of performance in the rasterized modes that dominate today’s gameplay. As one analysis wryly noted, “Nvidia has a large head start with ray tracing at the moment, but AMD is steadily improving”, and for many gaming scenarios the difference is becoming less of a deal-breaker.

AI and Generative Workloads

In the era of AI and machine learning, Nvidia has cultivated a strong reputation – their GPUs are ubiquitous for AI research, deep learning, and recently, consumer-facing generative AI tools. This is largely thanks to Nvidia’s CUDA and Tensor Core technology, as well as a software stack (libraries, frameworks) that is well-optimized for GeForce and professional Quadro/RTX cards. How does this play into gamer choices? For some hobbyist gamers who dabble in AI or content creation, Nvidia’s advantages in AI/generative workloads remain a deciding factor. If you want to run Stable Diffusion image generation, train machine learning models, or use AI-enhanced creative apps, Nvidia GPUs typically offer better performance and compatibility out-of-the-box. The RTX 40-series introduced features like DLSS 3’s AI frame generation and the newer RTX 50-series adds DLSS 4 with transformer-based upscaling and multi-frame generation (inserting 3-4 AI frames vs. 2). These AI-driven features give Nvidia a clear lead in the bleeding-edge intersection of gaming and AI.

AMD has been playing catch-up here. RDNA 4 includes improved AI accelerators (sometimes dubbed “Matrix cores”) to boost AI processing, and the new FSR 4 upscaler incorporates AI for better image quality. This means Radeon owners will start to see benefits like smarter upscaling and possibly other AI-enhanced effects. Nonetheless, Nvidia’s ecosystem for AI and creator workloads is still more robust in mid-2025. Professional content creators (video editors, 3D artists) often favor Nvidia due to superior support in applications like Adobe Premiere, Blender (OptiX render), DaVinci Resolve, and so on. According to Puget Systems, which tests GPUs for content creation, AMD’s Radeon 7000 series can outperform Nvidia’s 40-series in some specific apps, but overall Nvidia still earns the “best GPUs for content creation” crown due to broad excellence across many workflows. In their analysis, AMD held a small lead in certain AI imaging tasks, but Nvidia was “significantly better” in others, and the consistency of Nvidia’s performance plus software support tipped the scales in favor of Nvidia for most creators.

What this means for gamers is that the choice might hinge on whether you use your GPU for more than gaming. Gamers who also stream or produce videos, or those who experiment with AI tools, may lean Nvidia because features like NVENC encoding, AI green screen, or faster GPU computation in productivity apps give them extra value. On the other hand, gamers who primarily game (and perhaps do light content creation that isn’t GPU-intensive) may find AMD’s cards perfectly sufficient. AMD is also involved in open-source AI efforts (like ROCm for compute and supporting Stable Diffusion on Radeon via DirectML), but the ecosystem is young and not as plug-and-play as Nvidia’s. In summary, Nvidia currently retains the edge for AI/generative and creator use-cases, which is why we still see many content creators, AI enthusiasts, and game developers sticking with GeForce. However, for the majority of gamers who don’t heavily use those features, this advantage may not justify Nvidia’s higher prices. AMD’s focus on pure gaming value appeals to those users more than Nvidia’s AI bells and whistles.

Market Context

The market context around GPU launches and availability is also influencing the AMD vs. Nvidia dynamic. Earlier GPU generations (2020–2022) were plagued by supply shortages and scalping, but by 2025 the situation has improved – you can generally buy current-gen cards from both vendors without extraordinary effort. However, pricing and availability still vary by model and region. Notably, AMD’s latest releases in 2025 have seen better initial availability at MSRP in the US than some of Nvidia’s launches. For example, the Radeon RX 9060 XT (16GB) was “available at Micro Center just a week after launch, at its $350 MSRP” in multiple stores. In contrast, Nvidia’s GeForce RTX 5070 Ti, which launched a bit earlier, was often selling well above its $749 MSRP due to demand and limited supply of MSRP units. This meant early adopters of Nvidia’s mid-high cards paid a premium, whereas AMD buyers had a shot at retail pricing.

Recent product launches have shaken up the lineup on both sides. Nvidia began rolling out its GeForce RTX 50-series “Blackwell” GPUs in early 2025, starting with models like the RTX 5070 (March 2025) and followed by the 5070 Ti, 5060 Ti, etc. These cards brought moderate performance gains (on the order of ~20% over 40-series equivalents) and new features like DLSS 4. However, Nvidia also largely maintained its pricing structure – for instance, the RTX 5070 launched at $549 (12GB) and the 5070 Ti at $749 (16GB). Reviewers noted that while the 5070 Ti offers high-end performance (comparable to last generation’s RTX 4080 in some cases), its value proposition was only strong if you could find one at $749, which in practice was difficult. Many AIB (board partner) cards came in higher, and stock of base models was limited at launch.

AMD, on the other hand, surprised the market by launching its RDNA 4 Radeon RX 9000-series in the midrange first (rather than starting at the top as they did with last gen). The RX 9070 XT and RX 9060 XT were announced around May/June 2025 with aggressive pricing – roughly $599–$699 for the 9070 XT (16GB) and $349 for the 9060 XT (16GB). These undercut Nvidia’s equivalents in MSRP. However, real-world retail pricing complicated the picture. The RX 9070 XT has been plagued by inflated street prices, often selling $100–$200 above its MSRP due to initial supply constraints. TechSpot’s data shows that in the US, the 9070 XT was about $200 over MSRP shortly after release, essentially erasing its intended 16% cost-per-frame advantage over the RTX 5070 Ti and making the Nvidia card the better buy until Radeon prices normalize. In contrast, the slightly lower-tier Radeon RX 9070 (non-XT) and the RX 9060 XT have been closer to their MSRPs and thus delivered strong value. By July 2025, the GeForce RTX 5070 (12GB) has emerged as a price/performance darling in the midrange, precisely because Nvidia kept it near MSRP and AMD’s 9070 series pricing ran high. As Tom’s Hardware notes, “$600 RTX 5070s are far more common than $600 RX 9070s,” tilting midrange buyers toward the Nvidia card unless AMD cards get discounted.
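The street-price effect described above is easy to quantify with the same cost-per-frame arithmetic. The FPS figures here are hypothetical, picked so that the MSRP gap (assuming the $599 end of the quoted range) matches the cited ~16% advantage:

```python
def value_advantage(price_a: float, fps_a: float,
                    price_b: float, fps_b: float) -> float:
    """Percent cost-per-frame advantage of card A over card B."""
    return ((price_b / fps_b) / (price_a / fps_a) - 1) * 100

# Hypothetical average FPS, chosen to reflect the cited ~16% MSRP advantage.
fps_9070xt, fps_5070ti = 100, 108

at_msrp = value_advantage(599, fps_9070xt, 749, fps_5070ti)
at_street = value_advantage(599 + 200, fps_9070xt, 749, fps_5070ti)
print(f"9070 XT advantage at MSRP: {at_msrp:+.0f}%, at street price: {at_street:+.0f}%")
```

A $200 markup does not just shrink the Radeon's lead — it flips the comparison in Nvidia's favor, which is exactly why reviewers keep stressing street prices over MSRPs.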

In summary, availability and pricing in mid-2025 favor AMD at the lower end and favor Nvidia in the mid-upper range – the inverse of what one might expect from MSRP alone. At the mainstream/budget end, AMD’s cards are readily available and often selling at or below their intended prices, making them easy recommendations (e.g., RX 7600 and 9060 XT for 1080p gaming, each delivering great value per dollar). In the midrange ($500–$700 segment), Nvidia has managed to flood the market with RTX 5070 cards that stick closer to MSRP, while AMD’s equivalent GPUs have seen markups that reduce their value advantage. At the ultra high-end, Nvidia currently stands unchallenged – the RTX 5080 and 5090 face no direct competition since AMD’s next enthusiast Radeon (the hypothetical RX 9090) isn’t out yet. Nvidia has taken advantage of this by pricing the RTX 5090 exorbitantly (starting around $1,999 MSRP, with actual street prices ~$2,500–$3,000) – a price only the most deep-pocketed enthusiasts will pay. Most gamers aren’t in that stratospheric market, but it underscores that Nvidia currently commands the halo segment, while AMD is content gaining ground with mainstream gamers.

The next few months could see adjustments: as supply catches up, AMD’s RX 9070 XT might drop to its ~$599–649 target, instantly making it a strong value against a $749 5070 Ti. Likewise, Nvidia might introduce “Super” or cut-down variants (there are rumors of an RTX 5050 for the budget segment) which could shift value dynamics again. For now, though, many gamers are seizing the opportunity to get an AMD card that’s actually in stock at a fair price, rather than chase an Nvidia card that might be overpriced or unavailable at its nominal MSRP.

In sum, AMD’s GPUs provide excellent real-world gaming performance at 1080p and 1440p for significantly lower cost, and they hold their own at 4K in rasterized gaming. Nvidia’s cards excel when all the latest tech (RT, DLSS/multi-frame, etc.) is leveraged, and they remain the only option if you absolutely must have ultra-4K with every setting maxed. But for the average gamer’s settings and targets – which often involve normal ultra/high graphics (not extreme RT) and a mix of resolutions – AMD hardware is hitting a sweet spot in 2025.

In essence, AMD is winning over gamers who focus on value, longevity (VRAM), and pure gaming performance for the dollar, as well as those who have an ideological or practical preference for open support. Nvidia retains the loyalty of those who want the absolute best performance and features, especially outside just gaming (content creation, AI), or who are deeply involved in tech where Nvidia has unique offerings. It’s a classic split between a cost-effective choice and a premium choice: AMD is the practical pick for many gamers’ needs, while Nvidia is the premium pick for those who leverage the extras or demand every last FPS with all features on.

Key GPU Model Comparison (AMD vs. Nvidia, 2025)

To summarize the landscape, below is a comparison of some key AMD and Nvidia GPU models in 2025, including their MSRP (USD), VRAM, general performance tier, and a rough value rating:

| GPU Model | MSRP (USD) | VRAM | Performance Tier | Value Rating |
| --- | --- | --- | --- | --- |
| Nvidia RTX 5090 | $1,999 | 32GB | Flagship (4K ultra) | Poor |
| AMD RX 7900 XTX | $999 | 24GB | High-end | Good |
| Nvidia RTX 5070 Ti | $749 | 16GB | Upper midrange | — |
| AMD RX 9070 XT¹ | $649–699 | 16GB | Upper midrange | — |
| Nvidia RTX 5070 | $549 | 12GB | Midrange | Very Good |
| AMD RX 9070² | ~$549 | 16GB | Midrange | Good |
| Nvidia RTX 5060 Ti 16GB³ | $429 | 16GB | Mainstream | Fair |
| AMD RX 9060 XT | $349 | 16GB | Mainstream | Excellent |

1: AMD’s suggested price for the RX 9070 XT is around $649–699, but retail prices have been higher in mid-2025.

2: Approximate MSRP for the RX 9070 (non-XT), assumed similar to the RTX 5070’s $549.

3: The RTX 5060 Ti also has an 8GB variant at a lower MSRP (~$399), but 8GB VRAM is not recommended for 2025 gaming.

This table highlights how AMD and Nvidia stack up in different segments. For instance, at the top-end, Nvidia’s RTX 5090 is unrivaled in performance but is an extremely expensive, low-value proposition for gamers (its value rating is “Poor” because you pay a fortune for a marginal gain). AMD’s current top card (7900 XTX) offers very high performance and a massive 24GB memory at half or a third the price of Nvidia’s best, making it a “Good” value for high-end gaming despite being slightly slower than Nvidia’s fastest. In the midrange, Nvidia’s RTX 5070 has emerged as a strong value (especially since it’s closer to its MSRP) – we rate it “Very Good” because it hits a sweet spot of performance, features, and price. AMD’s competing RX 9070 (non-XT) has more VRAM and similar raster performance, earning a “Good” rating, though its appeal depends on actual street price and whether one needs Nvidia’s extras. Lower down, AMD’s RX 9060 XT shines with an “Excellent” value rating – it’s widely regarded as one of the best buys for mainstream gamers in 2025. Nvidia’s attempt to cater to that segment, the RTX 5060 Ti 16GB, is a positive step (acknowledging the VRAM demand) but given its higher cost, it ends up only a “Fair” value next to the stellar Radeon offering.

In short, the best value GPU overall is often an AMD card in each price bracket, except where Nvidia has cut prices or where AMD’s pricing isn’t holding MSRP. Gamers should use such comparisons to weigh what matters most: pure performance per dollar, or the added features that might justify spending a bit more on Nvidia in certain cases.

Recommendations and Outlook

As of mid-2025, AMD offers the better value for the majority of gamers, especially those focused on gaming performance and longevity per dollar. If you are a gamer with a midrange budget (for example, $300–$600 for a GPU), an AMD Radeon card will likely give you more VRAM and equal or better rasterized performance for the same money compared to Nvidia. Cards like the RX 7800 XT / RX 9070 series are excellent choices for 1440p, and the RX 7600/6600 or RX 9060 XT cover 1080p gamers nicely, often outperforming equivalently priced GeForces in raw FPS. For pure gaming needs, it’s hard to go wrong with AMD right now in those segments – you’ll get a card that can max out games today and is equipped to handle tomorrow’s titles (thanks to generous VRAM and continued driver improvements).

That being said, Nvidia is still the better choice for certain users despite the higher prices. If you’re a content creator who relies on Adobe, Blender, or other GPU-accelerated apps, or if you plan to do a lot of AI/generative work on your PC, Nvidia’s ecosystem will serve you better in most cases (the tooling and support simply favor CUDA). Similarly, if you’re a gamer who must have the absolute highest performance and plans to leverage features like full ray tracing, DLSS 4, and multi-frame generation to their fullest, then a high-end GeForce RTX is the way to go. You’ll pay a premium, but you’ll also get capabilities that AMD’s cards can’t fully match yet (like 3X/4X frame generation, DLSS-only games, advanced Broadcast streaming features, etc.). For example, a streamer who wants to game and stream from the same PC at 4K60 might lean Nvidia for the superior NVENC encoder and AV1 streaming support with polished integration. A competitive gamer who values Nvidia Reflex in every supported title might stick with GeForce. These are niche but valid reasons to favor Nvidia.

For the average consumer building a gaming PC in 2025, however, the recommendation tilts towards AMD for the graphics card. AMD currently offers the best overall value in GPUs – a point echoed by many tech reviews and cost-per-frame analyses. The combination of lower prices, ample VRAM, and strong performance means you simply get more for what you spend. Nvidia’s cards aren’t “bad” by any means – in fact, the RTX 50-series are excellent performers technically. The issue is Nvidia often charges more for a given level of performance, counting on their feature set and brand cachet to justify it. Right now, gamers seem willing to vote with their wallets for AMD’s approach: slightly fewer frills, but a lot more bang for the buck.

When might Nvidia become more competitive again? There are a few possibilities on the horizon. First, as the RTX 50-series matures, Nvidia could enact price cuts or introduce refreshes (e.g., “Ti” or “Super” variants) if sales are weaker than expected against AMD. We saw a hint of this in the RTX 40 generation when Nvidia released a 12GB version of the RTX 3060 and later the 16GB 4060 Ti to address criticisms – they do respond to market pressure eventually. If AMD continues to gain market share through 2025, Nvidia might adjust pricing on cards like the RTX 5070 Ti or 5080 to close the value gap (the TechSpot analysis suggested the 5080 in particular “needs to come down in price” to be reasonable).

The second factor is competition at the high end. Right now Nvidia can price the 5080/5090 sky-high because AMD has no direct competitor above ~$1000. That may change with AMD’s next flagship (call it the RX 9090, pending official naming) possibly launching in late 2025 or early 2026. If AMD releases a card that challenges the RTX 5090, Nvidia might be forced to either lower prices or introduce an even faster variant (a hypothetical 5090 Ti or similar) at the same price to justify the premium. Historically, competition at the top end (think Radeon VII vs RTX 2080, or RX 6900 XT vs RTX 3090) has led to small price battles or bundle offers to sway buyers. So a lot depends on AMD’s roadmap; if they bring the fight to the $1000+ category, Nvidia will have to respond to remain competitive in value at that tier.

Finally, looking further ahead, Nvidia’s next-gen (RTX 60-series) in a couple of years might re-tip the scales. Each generation leap can reset the value equation. Nvidia could focus on efficiency and cost this time, or include more VRAM to avoid criticism, thereby improving their value proposition. Likewise, if gaming workloads in 2-3 years place more emphasis on AI (say, AI-driven NPCs, procedural generation, etc.), Nvidia’s investment in AI hardware could make GeForce cards more broadly appealing, not just for niche uses. In short, Nvidia’s competitiveness will return if either their pricing strategy changes or the technology landscape shifts to favor their strengths.

In conclusion, mid-2025 is something of a renaissance for AMD in the GPU space: they’ve earned the business of many gamers by focusing on the fundamentals – performance, memory, and price. Nvidia remains the choice for bleeding-edge enthusiasts and multi-purpose users who leverage the GeForce’s proprietary advantages. The good news for all consumers is that this rivalry is forcing both companies to innovate and adjust. As gamers, we should keep an eye on upcoming product releases and price movements. But for now, if a friend asks “Which GPU should I buy for my new gaming PC?” – the answer will often be: Save some money and go with AMD, unless you specifically need what Nvidia is offering. Competition has given us options, and in 2025, AMD is the option delivering the best gaming value overall.

Ultimately, whichever brand you choose, make sure it aligns with your particular needs (be it pure gaming, content creation, or VR/AI experiments). Both AMD and Nvidia GPUs can provide fantastic gaming experiences – but this year, it’s AMD that might let you achieve that experience and keep a bit more cash in your pocket, which is a win most gamers will happily take.

Buy the new RX 9070 XT here: https://amzn.to/4lVW1SG