I was fiddling around with some toy code and needed to find the most significant digit of an integer. I wondered whether there was a cleverer way to find it, something like John Carmack’s fast inverse square root.
Here’s the classic method, which repeatedly divides by 10 in a loop:
int msd(int num)
{
    if (num < 0)
        num *= -1;
    while (num >= 10)
        num /= 10;
    return num;
}
Calling this function a ridiculous number of times yielded an average total runtime of 270,589 microseconds.
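For context, here’s a minimal sketch of the kind of timing loop behind a number like that, using std::chrono; the call count below is illustrative, not the exact figure I used.

#include <chrono>
#include <cstdio>

// The classic division-loop version from above.
int msd(int num)
{
    if (num < 0)
        num *= -1;
    while (num >= 10)
        num /= 10;
    return num;
}

int main()
{
    constexpr int kCalls = 100'000'000;  // illustrative call count
    volatile int sink = 0;               // keeps the optimizer from discarding the calls

    auto start = std::chrono::steady_clock::now();
    for (int i = 1; i <= kCalls; ++i)
        sink = msd(i);
    auto stop = std::chrono::steady_clock::now();

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
    std::printf("total runtime: %lld microseconds (last digit %d)\n",
                static_cast<long long>(us), static_cast<int>(sink));
    return 0;
}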
Here’s Google’s AI-suggested method:
int getMSD(int num) {
    if (num == 0) {
        return 0;
    }
    int digits = std::log10(std::abs(num)) + 1;
    return num / static_cast<int>(std::pow(10, digits - 1));
}
The same ridiculous number of calls yielded an average total runtime of 2,241,092 microseconds. That’s over eight times slower than the classic division-loop method!
Additionally, Google AI’s code produces a C4244 warning (conversion from ‘double’ to ‘int’, possible loss of data) where the result of std::log10 is assigned to an int. Finally, I believe that Google’s solution is fundamentally incorrect: it returns negative values for negative inputs. As I understand “most significant digit”, a digit is one of the numbers zero through nine, so any function returning a most significant digit should return a single, non-negative value in the range 0 to 9.
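For example, with the <int> template argument restored, a quick check shows the sign problem:

#include <cmath>
#include <cstdio>
#include <cstdlib>

// Copy of the AI-suggested function above.
int getMSD(int num) {
    if (num == 0) {
        return 0;
    }
    int digits = std::log10(std::abs(num)) + 1;
    return num / static_cast<int>(std::pow(10, digits - 1));
}

int main()
{
    // digits comes out to 3, so the function returns -345 / 100, i.e. -3 rather than 3.
    std::printf("getMSD(-345) = %d\n", getMSD(-345));
    return 0;
}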