A change in the naming convention of laptop GPUs may dupe gamers into purchasing systems with lower graphics performance than they thought.
Nvidia – the world’s leading PC graphics card manufacturer – makes the majority of GPUs offered in true gaming laptops today. Generally these cards are less powerful than their desktop namesakes, but they still offer a great gaming experience for those who want to play while on the move.
One reason for their lower performance is that they have to fit into a smaller chassis, which makes heat dissipation a problem. To address this, Nvidia introduced its Max-Q technology in 2017, which brought the capabilities of powerful GeForce cards to compact laptops at a slightly reduced Thermal Design Power (TDP). TDP – measured in watts – is the maximum power a chip or component is designed to draw at sustained peak load.
Equivalently, it indicates how much heat the part can produce that the device's cooling system must be able to handle without failure. Generally, the higher the TDP, the better the card's performance, barring the few cases where efficiency gains are large enough to trump raw power.
Max-Q no more

To help consumers distinguish between faster and slower cards, manufacturers mostly indicated in laptop specification sheets whether the card used Max-Q technology or was a full-power mobile GPU (non-Max-Q, sometimes called Max-P). However, with the arrival of the RTX 30 Series on laptops, Nvidia has dropped the Max-Q branding. Instead, manufacturers can now decide for themselves exactly how much power their cards can access, which gives them more room to adapt design and cooling appropriately and to include other Max-Q-related features.
The RTX 3060 can now have a TDP anywhere between 60W and 115W, the RTX 3070 can range from 80W to 120W, while the RTX 3080 can be between 80W and over 150W. This does not include the additional 5W, 10W or 15W which can be unlocked with the Dynamic Boost 2 feature.

RTX 3060 beating the RTX 3080

If a manufacturer chooses not to include the TDP, there is no way of knowing beforehand whether you are, for example, buying a laptop with an RTX 3070 GPU running at 80W or at 120W.
What makes it even more confusing is that a lower-end card with a higher TDP may outperform a superior model with a lower TDP. This means there is no longer a simple hierarchy whereby the RTX 3080 always trumps the RTX 3070, and the RTX 3070 always beats the RTX 3060. Indeed, a report from Computerbase indicated that the raw teraflops of various RTX 3060 mobile cards are higher than those of certain RTX 3070 and RTX 3080 cards running at Max-Q-level TDPs.
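The reason a higher-TDP card can leapfrog a nominally superior one comes down to clock speed: theoretical FP32 throughput scales with the number of CUDA cores times the clock, and a higher power limit lets the chip sustain higher clocks. A minimal sketch of that standard formula in Python – the core counts are the published laptop specs, but the clock speeds here are illustrative stand-ins, since actual clocks depend on each laptop's TDP configuration:

```python
def fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """Theoretical FP32 throughput in teraflops.

    Each CUDA core performs one fused multiply-add per cycle,
    which counts as two floating-point operations.
    """
    return cuda_cores * clock_ghz * 2 / 1000

# Illustrative clocks (assumed, not official): a high-TDP RTX 3060
# sustaining 1.70 GHz can out-run a power-limited RTX 3070 at 1.21 GHz.
rtx_3060_high_tdp = fp32_tflops(3840, 1.70)  # ≈ 13.1 TFLOPS
rtx_3070_low_tdp = fp32_tflops(5120, 1.21)   # ≈ 12.4 TFLOPS
```

This is the same teraflops figure Computerbase used to rank the RTX 30 variants against each other.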
The table compiled by Computerbase lists the 28 known variants of RTX 30 graphics cards for desktops and laptops, arranged according to their teraflops.
Following a backlash over the changes, Nvidia has announced it will require laptop manufacturers to list the GPU's performance specifications in detail. Companies must now disclose attributes such as clock speeds, the wattage of the graphics card, and other features. Consumers will have to peruse these carefully to see whether a "higher-end" card is indeed better than the one in another unit.
One place to turn to would be reputable tech review sites that test a wide variety of laptops. Several sites have called the development a troubling one, and Notebookcheck has gone as far as saying it will start calling out manufacturers who do not disclose the wattage of the GPUs in their laptops.
Popular tech YouTuber Dave Lee (Dave2D) has also started including the wattage of the RTX 30 cards in his laptop reviews, a trend which will likely catch on with other reviewers as well. If you are able to use the computer beforehand and have some time to test it, you can do the following:

1. Check the GPU power usage at idle.
2. Run a GPU benchmark like PassMark with the graphics load at its peak.
3. Check the GPU power usage at this point.
4. Subtract the first measurement from the second.

This should give you a rough indication of the GPU's TDP.
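The steps above can be scripted. A minimal Python sketch, assuming an Nvidia driver with the `nvidia-smi` command-line tool on the PATH; the function names are illustrative, and you would run your benchmark between the two readings:

```python
import subprocess


def read_gpu_power_watts() -> float:
    """Query the GPU's instantaneous power draw via nvidia-smi.

    Requires an Nvidia GPU and drivers; returns the first GPU's
    power draw in watts.
    """
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return float(out.strip().splitlines()[0])


def estimate_tdp(idle_watts: float, load_watts: float) -> float:
    """Rough TDP estimate: peak draw under load minus the idle baseline."""
    return load_watts - idle_watts


# Intended usage (needs real hardware, so not executed here):
#   idle = read_gpu_power_watts()
#   ... run a benchmark such as PassMark at peak graphics load ...
#   load = read_gpu_power_watts()
#   print(f"Estimated GPU TDP: {estimate_tdp(idle, load):.0f} W")
```

Note this only approximates the configured power limit: if the cooling can't sustain peak draw, or Dynamic Boost shifts power between CPU and GPU, the reading will fluctuate, so take the highest sustained value you observe.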