RFO Benchmark v3.1 (for Revit 2016-2018)
RFO Benchmark v3.1, for Revit 2016-2018, is now available at RFO!
This year brought a lot of changes. Probably too many, to be honest. And this summer has been my busiest in 10 years on the work front, so the combination was just not ideal for getting a side project done.
Some of the changes are under-the-hood stuff in the Journals, such as an issue that made switching views rather slow. It didn’t impact the final results, because the only thing being timed was the repeating Refresh and the View Cube manipulation after the view change. But the high-repetition tests that folks use for in-house validation and purchase decisions could easily take an extra hour to run. That needed to be fixed! The journals also got a lot of cleanup, mostly replacing repetitive sequences with VBScript code right in the Journal. That will make maintenance going forward MUCH easier.
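The journals themselves are VBScript, and the post doesn’t show the actual cleanup, so purely as an illustration of the pattern (emitting a timed sequence from a loop instead of pasting it into the journal N times), here is a minimal sketch in Python; the command string is a placeholder, not real Revit journal syntax:

```python
# Illustrative only: real RFO Benchmark journals embed VBScript loops.
# This sketch just shows the repetition-vs-loop idea; the command
# string below is a placeholder, not guaranteed journal syntax.

def build_refresh_block(repetitions):
    """Emit the timed refresh sequence once per repetition,
    instead of pasting the same lines into the journal N times."""
    command = 'Jrn.Command "Ribbon" , "ID_REFRESH_PLACEHOLDER"'
    return "\n".join(command for _ in range(repetitions))

block = build_refresh_block(25)
print(block.count("Jrn.Command"))  # 25
```

The win is exactly the one described above: changing a repeat count or a command becomes a one-line edit rather than a find-and-replace across hundreds of pasted lines.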
Some of the other under-the-hood stuff relates to benchmark automation. Previously the Revit files in the Resources folder were the ones used during processing, which meant that only one machine could run the benchmark at a time when using a network-shared RFO Benchmark. Some folks were copying the whole benchmark to the local machine and running it there, and that’s great. But having the ability to benchmark en masse from a shared location is useful, so with v3.1 those resource files are now copied to a dedicated machine-specific sub-folder in Resources, and those copies are used. That makes en masse benchmarking much easier.
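The post doesn’t include the copying logic (the benchmark itself is driven by PowerShell), so the outline below is only a sketch of the idea: copy the shared resource files into a per-machine sub-folder so concurrent runs don’t collide. The hostname-based folder name and the `.rvt` glob are my assumptions, not the benchmark’s actual code:

```python
import shutil
import socket
from pathlib import Path

def prepare_machine_resources(resources_dir):
    """Sketch: copy shared Revit resource files into a per-machine
    sub-folder so several machines can benchmark concurrently from
    one network share. Folder naming and file pattern are assumptions."""
    resources = Path(resources_dir)
    machine_dir = resources / socket.gethostname()
    machine_dir.mkdir(exist_ok=True)
    for f in resources.glob("*.rvt"):
        # Each machine works on its own copies, never the shared originals.
        shutil.copy2(f, machine_dir / f.name)
    return machine_dir
```

Keying the sub-folder to the machine name means two workstations hitting the same share each get a private working set, which is the whole point of the v3.1 change.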
However, the main change this year was a big revision in the tests themselves, especially on Graphics. For the longest time we have really wanted a way to hammer a card hard, because the test has been lightweight enough that you really don’t see any difference between a $50 Radeon R7 and an $800 Quadro M4000. Last year I added the Expanded test, which does the entire Model Creation test in the 9th position of a 3×3 grid; the other 8 positions are links of the final RFO Benchmark model. The net result is a much heavier test, but still not very “real world”, in that the model barely pushes up against 8GB of RAM, and that wasn’t even a big model 5 years ago. So, for v3.1 I added a GPU Hammer test. This is an extended GPU test with more timed items, and the model is a 5×5 grid of links. That’s nearly three times the complexity of the Expanded test (25 positions versus 9), and it really does hammer on the hardware. I ran some tests with GPU-Z running, and the GPU regularly hit 50-80% utilization, which for Revit is really hammering. The net result, I think, is that we finally have a way to validate some hardware decisions. Spoiler: that Quadro M4000 may still not be worth the money, but at least you might have the data to defend that position. On the other hand, defending the position that a $50 card isn’t a professional card and you need something better just got a lot easier.
One last change that’s worth noting is some improvements in the Revit build identification code. Each new build of Revit is much more than the bug fixes and new features of the .# releases; there are performance improvements as well. Currently 2018 is rather slower than 2016 & 2017, but early benchmarking of 2017 showed it slower than 2015 & 2016, and now it’s on par or faster. That’s a result of improvements in the various updates, and as such the benchmark numbers aren’t much use without also knowing the build number. Last year I introduced build reporting, but Autodesk is constantly changing where that information is found, so post-2017.1 things didn’t get reported right. Now the code accurately reflects the current build for all three years, and it will be easier to maintain moving forward, because 2018.1 marks a full YEAR of Autodesk doing it consistently the same way. That can’t last much longer, can it?
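The post doesn’t say exactly where the build string is read from; one plausible place is the header of a Revit journal file, which records a line along the lines of `' Build: 20180829_1515(x64)`. The hedged sketch below tries a couple of candidate patterns, since the exact format has moved around between releases; the patterns and the sample line are assumptions, not the benchmark’s actual code:

```python
import re

def extract_build(journal_text):
    """Sketch: pull a Revit build identifier out of journal header
    text. The format has shifted between releases, so try a strict
    pattern first and fall back to a looser one. Patterns here are
    illustrative, not guaranteed to match every build."""
    patterns = [
        r"Build:\s*([0-9]{8}_[0-9]{4}(?:\(x64\))?)",  # e.g. 20180829_1515(x64)
        r"Build:\s*(\S+)",                             # looser fallback
    ]
    for pat in patterns:
        m = re.search(pat, journal_text)
        if m:
            return m.group(1)
    return None

print(extract_build("' Build: 20180829_1515(x64)"))  # 20180829_1515(x64)
```

An ordered list of patterns is also easy to extend when Autodesk inevitably moves or reformats the information again, which is the maintainability point made above.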
And one last comment, and a feeble excuse for being late. I retired Revit 2015 support, but the test is available for Revit 2016-2018. I will continue with that scheme of supporting at least 2 years back for any given revision of the benchmark. I think being able to compare performance over a few years, as well as over a few updates, is worthwhile. But going older than 2 years has diminishing returns, and especially with the Verticals of 2016 and earlier it was a lot of extra work.
Anyway, next year should be a little better, because the PowerShell underpinning the benchmark won’t need much revision, and with 2016 no longer supported the journals will be easier to manage, since there won’t be a special case for Revit Structure any more. And I am promising myself, and everyone else, right here and now, that I won’t be changing the benchmark wholesale unless something really major comes up. Let’s hope it doesn’t!