The CEO of NVIDIA, Jensen Huang, famously stated that "Moore's law is dead". How, then, do we keep getting even more computing power?

We can see from the specs some of the reasons that NVIDIA claims a 6-7X performance improvement for the Orin over the earlier AGX Xavier. The Orin uses the new NVIDIA Ampere architecture with 2048 CUDA cores and 64 Tensor Cores, whereas the AGX Xavier has 512 Volta-architecture CUDA cores with the same number of Tensor Cores. The Orin CPU complex consists of twelve ARM Cortex-A78AE v8.2 64-bit CPUs, grouped in fours. Memory is also faster on the Orin: 32GB of 256-bit LPDDR5 main memory versus the older LPDDR4 of the Xavier. The Orin offers 15W, 30W and 50W power modes.

Something that we have a tough time testing on other platforms is the multi-threaded performance of CPUs. On mobile devices, thermal constraints largely prevent these devices from showcasing the true microarchitectural scaling characteristics of a platform, as it's not possible to maintain high clock speeds. NVIDIA's Jetson AGX dev kit doesn't have these issues thanks to its oversized cooler, which makes it a fantastic opportunity to test Xavier's eight Carmel cores, as all of them are able to run at peak frequency without worries about thermals (in our test-bench scenario).

SPEC2006 Rate uses the same workloads as the Speed benchmarks, with the only difference being that we launch more instances (processes) simultaneously. Rate scores are calculated by measuring the execution time of all instances against the same reference times used for the Speed score, with the final score then multiplied by the number of instances. I chose to run the rate benchmarks with both 4 and 8 instances to showcase the particularity of NVIDIA's Carmel CPU-core configuration. To showcase this better, let's compare the scaling factor in relation to the single-core Speed results: here the four-core run performs roughly as expected, while the eight-core scores come in somewhat lower than you'd expect.

The majority of the 4C workloads scale near the optimal 4x factor that we'd expect, with only a few exceptions posting slightly worse scaling figures. The 8C workloads, however, showcase significantly worse scaling factors, and it's here where things get interesting. Because Xavier's CPU complex consists of four CPU clusters, each with a pair of Carmel cores sharing a 2MB L2 cache, the CPU cores can be resource-constrained. In fact, by default the kernel scheduler on the AGX tries to populate one core in every cluster first, before it schedules anything on the secondary core within a cluster. In effect this means that in the 4C rate scenarios each core essentially has exclusive use of its cluster's 2MB L2 cache, while in the 8C rate runs the L2 cache has to be actively shared between the two cores, resulting in resource contention and worse performance per core.

The only workloads that aren't nearly as affected by this are those which are largely execution-unit bound and put less stress on the data plane of the CPU complex; this can be seen in the great scaling of workloads such as 456.hmmer. On the opposite end, workloads such as 462.libquantum suffer dramatically under this kind of CPU setup: the cores are severely bandwidth-starved and can't process things any faster than if just one core per cluster were active. The same analysis can be applied to the floating-point rate benchmarks. Some of the less memory-sensitive workloads have little trouble scaling well with core count, while others such as 433.milc and 470.lbm again showcase quite bad performance figures when run on all cores of the system.

I didn't get to measure full power figures for all rate benchmarks, but in general the power of an additional core in a separate cluster scales near-linearly with the power figures of the previously discussed single-core Speed runs, meaning the 4-core rate benchmarks showcase active power usage of around ~12-15W. Here I saw total system power consumption of up to ~31W for workloads such as 456.hmmer (~22W active), while more bottlenecked workloads ran around ~21-25W for the total platform (~12-16W active). Because performance doesn't scale linearly with the additional secondary-cluster cores, the power increase was also more limited.

Overall, NVIDIA's design choices for Xavier's CPU cluster are quite peculiar. It's possible that NVIDIA either sees the majority of workloads targeted at the AGX as unproblematic in terms of their scaling, or expects that in actual automotive use-cases each core in a cluster would operate in lock-step with the other, theoretically eliminating possible resource-contention issues at the level of the shared L2 cache.
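The rate-score arithmetic described earlier (same reference times as the Speed score, final result scaled by the instance count) can be sketched as follows. All times here are hypothetical placeholders, not actual SPEC reference data:

```python
# Sketch of SPEC2006 Speed vs Rate score arithmetic.
# The numbers below are hypothetical placeholders, not real SPEC reference times.

def speed_score(ref_time, exec_time):
    # Single-instance score: reference time divided by measured run time.
    return ref_time / exec_time

def rate_score(ref_time, exec_times):
    # All copies run concurrently; the run is timed by the slowest copy,
    # and the result is multiplied by the number of instances.
    slowest = max(exec_times)
    return len(exec_times) * ref_time / slowest

ref = 9330.0                                  # hypothetical reference time (s)
single = 620.0                                # hypothetical single-copy run time (s)
four_copies = [640.0, 645.0, 650.0, 655.0]    # hypothetical 4-copy run times (s)

speed = speed_score(ref, single)
rate4 = rate_score(ref, four_copies)
scaling = rate4 / speed   # scaling factor relative to the single-core Speed run
print(speed, rate4, scaling)
```

With placeholder times like these, where each copy runs only slightly slower than the single-instance case, the scaling factor lands just below the ideal 4x, which is the pattern the 4C results above show.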
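The one-core-per-cluster placement that the AGX scheduler defaults to can also be forced explicitly by pinning each benchmark instance. A minimal Linux sketch using `os.sched_setaffinity`; note that the cluster-to-logical-CPU numbering here is an assumption, not taken from the board:

```python
import os

# Mimic the AGX scheduler's default placement: fill one Carmel core per
# cluster before touching the second core in any cluster.
# ASSUMPTION: logical CPUs pair up as {0,1} {2,3} {4,5} {6,7}, one pair per
# cluster -- verify with `lscpu` or /sys/devices/system/cpu on real hardware.
ONE_CORE_PER_CLUSTER = [0, 2, 4, 6]

def pin_to(cpu):
    """Restrict the calling process to a single logical CPU (Linux only).

    Returns True if the pin took effect, False on non-Linux systems or
    when the requested CPU doesn't exist on this machine.
    """
    try:
        os.sched_setaffinity(0, {cpu})
        return sorted(os.sched_getaffinity(0)) == [cpu]
    except (AttributeError, OSError, ValueError):
        return False

# A 4-copy rate run would fork one benchmark process per cluster and call
# pin_to(cpu) in each child before starting the workload.
```

An 8-copy run would instead pin to all eight logical CPUs, putting two instances on each shared 2MB L2, which is exactly the contention scenario discussed above.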
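The "total platform" and "active" power figures quoted above differ by a constant idle floor, roughly 9W as implied by the ~31W total / ~22W active pair. A trivial sketch of that bookkeeping; the split into a fixed idle baseline is my assumption, not a separately measured figure:

```python
# Split total platform power into a workload-attributable (active) component
# and an idle floor. ASSUMPTION: a fixed ~9W idle baseline, implied by the
# quoted ~31W total vs ~22W active readings, not separately measured.
IDLE_BASELINE_W = 9.0

def active_power(total_w, idle_w=IDLE_BASELINE_W):
    """Active power = total platform power minus the idle floor."""
    return total_w - idle_w

print(active_power(31.0))  # 456.hmmer-style peak reading
print(active_power(21.0))  # lower bound of the bottlenecked workloads
```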