> Even though the values are declared u_int, they seem to be used in the
> code as signed numbers (maybe that's a problem), so returning EINVAL for
> a number that, cast to signed, is negative seems appropriate.

Under no circumstances at all. If the error were raised by the compiler and the code rejected until corrected, that might be OK; but at run time the "-1" does NOT exist at all. It is only a pattern of zeroes and ones, which might just as well be a valid unsigned value. Rejecting such patterns on the premise that the programmer might have made a mistake and used a negative number in the code fails to consider the case where the programmer actually intended to use such a big unsigned value.

If the var is a u_int, then every bit pattern that can be interpreted as an unsigned integer should be accepted. Otherwise it should be declared a different type, or an acceptable range of values should be stated clearly somewhere and tested through appropriate variables/constants/macros. What if the next computer generation allows for bigger iov_len values and there is room for valid values bigger than 2**31 - 1? What if the machine uses a different convention for representing sign? What if...?

The problem is not that the bit pattern *might* have been interpreted as a negative number by a hypothetical human, but that the acceptable limits are not well defined/tested. The code should not test the hypothetical intention of a hypothetical sloppy programmer while disregarding possibly legitimate values. It should define legal values and test them, nothing more, nothing less.

jr
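P.S. A minimal sketch of what "define legal values and test them" could look like, assuming POSIX's documented limits (SSIZE_MAX and IOV_MAX from <limits.h>) are the stated range; the function name check_iov is purely illustrative, not from the code under discussion. The point is that EINVAL is returned only when a value exceeds the defined limit, never because its high bit happens to be set:

    #include <errno.h>
    #include <limits.h>
    #include <stddef.h>
    #include <sys/uio.h>

    static int
    check_iov(const struct iovec *iov, int iovcnt)
    {
            size_t total = 0;
            int i;

            /* IOV_MAX is the documented limit on the vector length. */
            if (iovcnt <= 0 || iovcnt > IOV_MAX)
                    return EINVAL;

            for (i = 0; i < iovcnt; i++) {
                    /*
                     * SSIZE_MAX is the explicit, documented limit on
                     * the total transfer size; test directly against
                     * it.  Any iov_len that fits is legal, no matter
                     * what bit pattern it holds.
                     */
                    if (iov[i].iov_len > (size_t)SSIZE_MAX - total)
                            return EINVAL;  /* exceeds defined limit */
                    total += iov[i].iov_len;
            }
            return 0;
    }

This happens to mirror the condition POSIX itself specifies for readv(): EINVAL when the sum of the iov_len values overflows a ssize_t, i.e. a limit defined in terms of legal values rather than a sloppy sign test.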