venta: (Default)
[personal profile] venta
Is there a name for the psychological effect whereby you think you don't know the answer to a question just because you've been told it's hard ?

(The cuts here are just provided to explain the concepts behind the question; they can be safely ignored if you know, or don't want to know.)

Recently someone mentioned that they get a lot of programmers in for interview who can't express the integer -1 in hex. Hexadecimal (or hex) means writing numbers in base 16. "Ordinary" numbers are base 10, so for anything bigger than 9 you move to two digits. Using hex, you can write 0-9 as normal, then use A-F to express numbers up to 15. The prefix "0x" is often used to show that it's not a decimal number.

So:
9 = 0x9
10 = 0xA
31 = 0x1F
etc.
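If you want to check conversions like these at a prompt, Python's built-in hex() does the job (it prints the letters in lower case):

```python
# hex() turns an integer into its 0x-prefixed base-16 string.
print(hex(9))   # 0x9
print(hex(10))  # 0xa
print(hex(31))  # 0x1f
```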

Expressing negative numbers gets a bit more complicated; to say -10 you don't just use -0xA. Instead you (approximately) imagine that you have a fixed number of digits to store your number in. Let's say we can only have four digits of hex, so the 'biggest' number we can write is 0xFFFF.

If we add 1 to that, we get 0x10000 - but we only have four digits, so we've effectively said that adding one returns us to zero (0x0000). This is basically the same effect you get when your car's mileometer runs out of digits and rolls round to zero again.
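The rollover is easy to simulate: masking with 0xFFFF throws away everything above the lowest sixteen bits, which is exactly the mileometer dropping the digit it hasn't got room for. A quick sketch:

```python
# Four hex digits = 16 bits; the mask simulates having only four digits.
MASK = 0xFFFF

result = (0xFFFF + 1) & MASK
print(hex(result))  # 0x0 - adding one rolls round to zero
```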

So, if we add 1 to 0xFFFF and get 0, 0xFFFF must be equal to -1. Right ? Well, yes. Obviously you need to have a convention to say whether you're doing negative numbers, so you know whether to interpret 0xFFFF as a really big number, or as -1.

(Note for techies: two's complement always scares me if I think about it too hard. If anyone can provide a more coherent explanation, please do.)
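For what it's worth, one way to make the convention mechanical is a little helper (hypothetical, just for illustration) that reinterprets an unsigned value as two's complement: if the top bit is set, subtract 2 to the power of the bit width.

```python
def to_signed(value, bits=16):
    # Two's complement reinterpretation: values with the top bit set
    # represent value - 2**bits.
    if value >= 1 << (bits - 1):
        value -= 1 << bits
    return value

print(to_signed(0xFFFF))  # -1
print(to_signed(0x7FFF))  # 32767, the biggest positive 16-bit value
```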

Now, I think "express -1 in hex" is a very easy question. Thinking of ways in which it might go wrong, I asked how big an integer was. An integer is a whole number: 1, 2, 23472875, etc. A computer will have a set idea of how much memory it uses to store an integer. A bit can store either a 0 or a 1, and 32 bits is fairly normal for an integer. I've heard people claim that an integer is a 32-bit value; it needn't be. If you assume it is, and the computer you're writing code for uses 16-bit or 64-bit integers, Bad Shit will ensue.

32 bits. So that's 0xFFFFFFFF, then. Except, because I knew a lot of people got it wrong, I assumed I was falling into some elephant trap. I wasn't - 0xFFFFFFFF is the correct answer.
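The same masking trick confirms the 32-bit answer. Python's integers are arbitrary precision, so -1 on its own isn't stored in 32 bits, but masking it down to 32 bits shows the pattern with every bit set:

```python
# Mask -1 to 32 bits to see its two's complement representation.
print(hex(-1 & 0xFFFFFFFF))  # 0xffffffff
```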

Is there a name for that, other than paranoia and lack of self-confidence ?