CodeSOD: All in the Timing
We like using constants instead of magic numbers. Today, we see an unusual misuse of them. It's unusual because, while it's a set of bad choices, it's not quite a `#define ONE 1` level of silliness.
First, a little background. Benjamin inherited a building automation system, implemented in Microsoft's Visual C++ 6.0, way back in the 90s. As of the late 2010s, not only was it still in use, it was still pinned to that same compiler version. So it was not exactly the most modern of applications, but it accomplished the business goals, albeit with a lot of bugs, errors, and mysterious glitches.
Here is how the application checked how long an operation took:
```
clock_t start = clock();
pCache->UpdateChanges(g_hUpdateThread);
double duration = ((double)(clock() - start) / CLOCKS_PER_SEC) * 1000; // duration is in ms
```
Now, this is a pretty mild-looking block of code. At a quick glance, the only thing that leaps out is that 1000 appears as a bare constant. We capture the time at the start, take the difference against the time at the end, and then convert our units.
And that's where the code leads to facepalms. On Microsoft's C runtime, `clock` returns elapsed ticks at a rate of one tick per millisecond, which is why `CLOCKS_PER_SEC` is 1000. So this code divides by 1,000 to convert milliseconds into seconds, then multiplies by 1,000 to convert right back into milliseconds.
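To see the round trip concretely, here's a minimal sketch, assuming `CLOCKS_PER_SEC` is 1000 as it is on Microsoft's runtime; `do_work` is a hypothetical stand-in for the cache update, and nothing here beyond the timing expression comes from the original codebase:

```
#include <cstdio>
#include <ctime>

// Hypothetical stand-in for pCache->UpdateChanges(g_hUpdateThread).
void do_work() {}

int main() {
    clock_t start = clock();
    do_work();
    clock_t end = clock();

    // The original expression: ticks -> seconds -> back to milliseconds.
    double ms_roundtrip = ((double)(end - start) / CLOCKS_PER_SEC) * 1000;

    // Where CLOCKS_PER_SEC is 1000, the division and multiplication cancel,
    // so the raw tick difference already *is* the duration in milliseconds.
    double ms_direct = (double)(end - start);

    printf("%f %f\n", ms_roundtrip, ms_direct); // identical on that runtime
    return 0;
}
```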
It's an interesting case where the name of the constant ends up being misleading and confusing, despite seeming perfectly clear. The number of clock ticks is the number of milliseconds. The unit we chose to represent that, though, is "clocks", which conceals the equivalence and invites exactly this kind of round-trip conversion.
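For contrast, a modern C++ rewrite would keep the units in the type system rather than in a constant's name. This is a sketch using `std::chrono`, not anything from the original application; `do_work` again stands in for the real work:

```
#include <chrono>
#include <cstdio>

// Hypothetical stand-in for pCache->UpdateChanges(g_hUpdateThread).
void do_work() {}

int main() {
    using namespace std::chrono;

    // steady_clock is monotonic, which is what you want for elapsed time.
    auto start = steady_clock::now();
    do_work();
    auto elapsed = steady_clock::now() - start;

    // The duration type carries its units; no bare 1000, no misleading name.
    long long ms = duration_cast<milliseconds>(elapsed).count();
    printf("took %lld ms\n", ms);
    return 0;
}
```

With a `duration` type, converting to milliseconds is an explicit, named operation, so there's no constant whose name can mislead.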
Benjamin adds: "If Microsoft ever decides to change the function clock() to return something other than milliseconds, this code will get its chance to shine like the star it was born to be!"