The Scarlett SoC physically has 56 compute units for graphics, but only 52 are enabled in the retail product. The ISSCC presentation spent some time going through the upsides of both options, and ultimately why Microsoft went with 52.
If a defect lands in one of the GPU's compute units or WGPs, which is quite likely because the GPU is the largest part of the processor, then that defect can be absorbed by disabling the affected WGP. The SoC can still be used in a console, and the effective yield is higher.
With a defect rate as low as 0.09, the chances of two defects landing on the same chip are very small. Even so, by shipping a design with only 26 of the 28 WGPs enabled, two fewer than the full complement, almost everything that comes off the manufacturing line can be used – an effective yield increase that reduces the average cost per processor by a third.
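The arithmetic behind this can be sketched with a simple Poisson defect model, the usual first-order way to reason about die yield. The die area (~3.6 cm²) and the interpretation of 0.09 as a per-cm² defect density are assumptions for illustration, not figures from the presentation, and the model optimistically treats every defect as landing somewhere a spare WGP can absorb it:

```python
import math

def poisson_yield(defect_density, die_area_cm2, max_defects_tolerated=0):
    """Fraction of dice that are usable when up to `max_defects_tolerated`
    defects can be absorbed by disabling redundant units.
    Poisson model: P(k defects) = exp(-lam) * lam**k / k!, lam = D0 * A."""
    lam = defect_density * die_area_cm2
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(max_defects_tolerated + 1))

# Illustrative figures (assumed): ~3.6 cm^2 die, 0.09 defects per cm^2.
area, d0 = 3.6, 0.09
perfect = poisson_yield(d0, area, 0)   # every unit must be defect-free
salvage = poisson_yield(d0, area, 2)   # two spare WGPs absorb defects
print(f"perfect-die yield: {perfect:.1%}")
print(f"with two spare WGPs: {salvage:.1%}")
```

Under these assumptions the perfect-die yield comes out around 72%, while tolerating two defects pushes usable dice to nearly 100% – which is roughly where a "cost per processor down by a third" figure comes from.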
Microsoft has already explained that the cost of the processors for this generation of consoles is a lot higher than for the Xbox One X in 2017, and far higher still than for the Xbox One from 2013. This comes down to having roughly the same die area but on a more advanced process node, with more complex steps and structures, large IP blocks (some of which may be licensed), a higher wafer price, and lower yield.
So the opportunity to reduce the cost of the processor by up to a third, at the expense of roughly a 20% power penalty in the GPU for the same performance, isn't a bet to be taken lightly, and no doubt a number of engineers and bean counters weighed up the pros and cons. Different design departments may well have chosen to go the other way.
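Where a power penalty of that order comes from can be sketched with back-of-the-envelope dynamic-power scaling. This is an assumption on my part, not the presentation's method: to match the throughput of 28 WGPs with 26, the clock must rise by about 28/26, and if voltage scales roughly linearly with frequency in that range, dynamic power scales roughly with f·V², i.e. f³:

```python
# Rough sketch (illustrative assumptions only, not Microsoft's numbers).
full_wgps, enabled_wgps = 28, 26
freq_scale = full_wgps / enabled_wgps      # clock uplift for equal throughput
# Assume V scales ~linearly with f here, so dynamic power ~ f * V^2 ~ f^3.
power_scale = freq_scale ** 3
print(f"clock uplift: {freq_scale - 1:.1%}")
print(f"power uplift: {power_scale - 1:.1%}")
```

This crude model lands in the mid-20% range, the same ballpark as the ~20% GPU power tradeoff cited above; the real figure depends on where the part sits on its voltage/frequency curve.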