What is the unchecked keyword good for? Part two

by ericlippert
from Fabulous adventures in coding

Last time I explained why the designers of C# wanted to have both checked and unchecked arithmetic in C#: unchecked arithmetic is fast and dangerous, checked arithmetic is slightly slower but turns subtle, easy-to-miss mistakes into program-crashing exceptions. It seems clear why there is a "checked" keyword in C#, but since unchecked arithmetic is the default, why is there an "unchecked" keyword?
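
As a quick refresher, here is a small sketch of the difference at runtime; the variable names are mine, not from the original post:

int big = int.MaxValue;
int one = 1;
// Unchecked arithmetic silently wraps around to int.MinValue.
int wrapped = unchecked(big + one); // -2147483648
// Checked arithmetic turns the same overflow into a crash.
int crashed = checked(big + one);   // throws System.OverflowException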

There are a bunch of reasons; here are the ones that immediately come to mind.

First reason: constant integer arithmetic is always checked by default. This can be irritating. Suppose, for example, you have some interop code and you wish to create a constant for the error code E_FAIL:

const int E_FAIL = 0x80004005;

That's a compile-time error because that number is too big to fit into an int. But you might not want to use a uint. You might think, well, I'll just say

const int E_FAIL = (int)0x80004005;

But that is also illegal, because constant conversions are likewise checked by default; we still have a conversion that is going to fail at compile time. What you have to do is turn off checked arithmetic for the constant:

const int E_FAIL = unchecked((int)0x80004005);

Second reason: you might have a block of code in which you want all the arithmetic to be checked, but there is one part - say, the inside of a performance-sensitive loop - where you want to get the maximum speed, and are willing to turn off checked arithmetic there and there alone.
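
A minimal sketch of what that might look like; the method, its data, and the multiplier are invented for illustration:

long SumWithWraparound(int[] items)
{
    checked
    {
        long total = 0;
        unchecked
        {
            // Hot loop: accept int wraparound here in exchange for
            // skipping the per-operation overflow checks.
            foreach (int item in items)
                total += item * 31;
        }
        return total;
    }
}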

Third reason: C# allows you to change the default to checked arithmetic for non-constant integer math via a compiler flag. If you've done so, and you need to turn it back off again on a temporary basis, then you have to use the unchecked keyword.
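
For instance, supposing the project is compiled with the csc /checked flag (or the equivalent CheckForOverflowUnderflow project property), a sketch of opting back out locally might look like this; the method and names are invented:

// The project-wide default is now checked, so arithmetic throws on overflow...
int Combine(int left, int right)
{
    // ...except here, where we explicitly want wraparound semantics.
    return unchecked(left * 31 + right);
}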

Fourth reason: the unchecked block can be used as a form of self-documenting code, to say "I am aware that the operation I'm doing here might overflow, and that's fine with me." For example, I'll often write something like:

int GetHashCode()
{
    unchecked
    {
        int fooCode = this.foo == null ? 0 : this.foo.GetHashCode();
        int barCode = this.bar == null ? 0 : this.bar.GetHashCode();
        return fooCode + 17 * barCode;
    }
}

The "unchecked" emphasizes to the reader that we fully expect that multiplying and adding hash codes could overflow, and that this is OK; we want to be truncating to 32 bits and we expect that the numbers will be large.

There were a bunch of good comments to the previous post; among the questions posed in those comments were:

What do you think of the compiler switch that changes from unchecked to checked arithmetic as the default?

I'm not a big fan of this approach, for several reasons. First, hardly anyone knows about the switch; there's a user education problem here. Second, I like it when the text of the program can be understood correctly by the reader without having to know the details of the compilation process. Third, it adds testing burden; now there are two ways that every program can be compiled, and that means that there are more test cases in the test matrix.

The C# team is often faced with problems where they have to balance breaking backwards compatibility with improving a feature, and many times the users advocating for the feature suggest "put in a compiler switch that preserves the backwards compatibility" (or, more rarely "put in a switch that turns on the feature", which is the safer option.) The C# team has historically been quite resistant to adding more switches. We're stuck with the "checked" switch now, but I think there's some regret about that.

Should checked arithmetic have been the default?

I understand why the desire was there to make unchecked arithmetic the default: it's familiar, it's faster, a new language is going to be judged in part on benchmarks, and so on. But with hindsight, I would rather that checked arithmetic had been the default, and that users be forced to turn it off for precisely those situations where inner-loop performance is genuinely impacted by this nano-optimization. We have other safety features like array bounds checking on by default; it makes sense to me that arithmetic bounds checking would be on by default as well. But again, we're stuck with it now.

