The new RFO Benchmarks are here!
Well, after a LOOONG delay, I can finally say the new RFO benchmark V3 is available. If you just want to get to the benchmarking, the link is at the bottom. But if you are curious, here’s a little missive about what’s new, what’s changed, what I learned, and all that jazz.
Part of the initial delay was an issue with Revit 2017.0 DWG export performance. When I tested the early implementation of v2 for 2017, I was seeing 2017 take twice as long as 2016 had, and I wasn’t initially sure if it was an actual bug in Revit, an issue with the journals in 2017, or something else entirely. I didn’t want to include the test if it was falsely suggesting a performance issue, but I also didn’t want to publicly claim a performance issue until I was sure. So, lots of benchmarking and manually exporting with a stopwatch, on multiple machines, both virtual and real, both mine and others’, led me to conclude that it was real. I made Autodesk aware of the issue and waited to see if it was Revit, or my journals, or what.
And in the meantime I decided to add some Color Fill Plan and View Template tests, because those are tasks we all do all the time (you are using View Templates, aren’t you?!?).
And I wanted to revise the Export tests, because…
1: They needed their own group, removed from the more “daily chore” Model creation group.
2: They needed revised journals, as the old journals included the user-interaction part in the timing, which isn’t really a machine performance test.
3: Exports needed expanding to include DWF as a comparison to DWG, and, after a conversation with Autodesk turned a light on for me, I wanted to add print tests as well.
All are performance-constrained tasks, but also tasks that we don’t do all day every day, so they needed to be treated differently.
And, having started down that path, I made some discoveries. Like… Revit Structure has never had Color Fills! WTF, Autodesk? With those tests implemented for all the other verticals and providing meaningful data, and with Autodesk abandoning the verticals concept with 2017 anyway, I didn’t want to leave them out. So I had to add some VBS to the journals so as to not have even more failures for the long-suffering RST benchmarkers, who have seen a disproportionate share of issues over the years.
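For the curious, that guard amounts to something like the sketch below. This is a minimal standalone VBScript illustration, not the actual journal code: the flavor variable, how it gets set, and the echoed messages are all placeholders of my own, and it assumes the journal engine tolerates basic VBScript control flow around the recorded commands.

```vbscript
' Minimal sketch (not the benchmark's actual journal code): skip the Color Fill
' steps when the Revit flavor being tested is Structure, which has no Color Fills.
Option Explicit
Dim strFlavor
strFlavor = "RST"   ' placeholder: the real value would come from the benchmark wrapper

If strFlavor = "RST" Then
    ' Revit Structure: skip the Color Fill steps instead of letting the journal fail
    WScript.Echo "Skipping Color Fill test for " & strFlavor
Else
    ' For the other flavors, the recorded Color Fill journal commands would run here
    WScript.Echo "Running Color Fill test for " & strFlavor
End If
```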
And, upon completing the Print tests, which (brilliantly, thanks to some great input from Autodesk) use the Microsoft ‘XPS Document Writer’ printer, I discovered that not everyone has that printer installed; just 99% of people do. So more VBS-in-journal trickery followed.
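The printer check boils down to something like this hedged sketch: enumerate the installed printers via WMI and only queue the print test when the XPS Document Writer actually shows up. The specifics here (the WMI query and how the result would feed back into the journal) are my own illustration, not the benchmark’s actual code.

```vbscript
' Sketch only: detect whether the Microsoft XPS Document Writer is installed,
' so the print test can be skipped gracefully instead of erroring out.
Option Explicit
Dim objWMI, colPrinters, objPrinter, blnHasXPS
blnHasXPS = False

Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
Set colPrinters = objWMI.ExecQuery("SELECT Name FROM Win32_Printer")

For Each objPrinter In colPrinters
    If InStr(1, objPrinter.Name, "XPS Document Writer", vbTextCompare) > 0 Then
        blnHasXPS = True
    End If
Next

If blnHasXPS Then
    WScript.Echo "XPS Document Writer found; print test can run."
Else
    WScript.Echo "XPS Document Writer missing; skipping the print test."
End If
```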
Then, just as I thought I was wrapping up, Revit 2017.1 dropped. And it changed how updates are logged in the registry. That mattered because (again, thanks to some insights from the Factory) I had added some code to determine the patch status, since R2 updates have been showing a solid performance increase over FCS and the benchmarks didn’t identify this at all. And it mattered because I had gotten lazy and implemented that bit of code in a way that worked, until 2017.1 changed the rules. So, back into the code to revise things to how I SHOULD have done it to start with.
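The patch detection amounts to reading the installed-update information out of the registry. The sketch below shows the general idea with WScript.Shell’s RegRead; the key path and value name are placeholders (the real locations differ by release, which is exactly the sort of thing 2017.1 changed), so treat it as illustration rather than the benchmark’s actual lookup.

```vbscript
' Sketch only: read a patch/update marker from the registry and fall back
' gracefully when the value is not there (e.g. a base FCS install).
' The key path below is a placeholder, not an actual location Revit uses.
Option Explicit
Dim objShell, strPatch
Set objShell = CreateObject("WScript.Shell")

On Error Resume Next
strPatch = objShell.RegRead("HKLM\SOFTWARE\Autodesk\Revit\2017\UpdateLevel")
If Err.Number <> 0 Then
    strPatch = "FCS (no update detected)"
    Err.Clear
End If
On Error GoTo 0

WScript.Echo "Patch status: " & strPatch
```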
And, after all those changes, it was obvious the 2017 test was meaningless as a comparison with previous years’ v2 benchmarks, so I had to implement v3 for at least a few years prior to allow comparisons.
And that brings us to today. The benchmarks can now be downloaded here, with links to the Results threads.
And woot to that!
Oh, and in case you are wondering, that DWG export issue WAS a bug. And Autodesk fixed it in 2017.1. And the new benchmark shines a light on that. And woot to that, too.