cmp byte ptr [eax], 90 ; compare the byte EAX points to with 90h.
EAX holds the address of some memory location. If you check what is at that address ('Follow in Dump' if you use OllyDbg), you will see the byte that 90h is compared against.
In your piece of code, the range 00401000 to 0040105F is being scanned for the byte 90h. ECX is the counter holding the number of bytes left to scan.
If 90h is found, the algo jumps out of the loop (je 00401014).
Since the memory being scanned is the memory we are executing, you can be pretty sure it is checking whether you nopped out code (patched it with 90h).
I think you got confused by the Intel operand order:
:0040100A 2BC8 sub ecx, eax ; ecx = ecx - eax => 4198495 - 4198400 = 95 => so I could do the math
What should I do in this case?
cmp byte ptr [eax], 90 ; -> this means it takes the byte at [EAX] and compares it with 90 [90 here is in decimal form, right?]. I just wonder how I could find out what exactly the value of byte ptr [eax] is?
If you are coming from a C/C++ background it may be easier to think of EAX and ECX as pointers. When a register is wrapped in square brackets it does the same thing as the * operator in C: get me the contents of this memory address.
cmp byte ptr[eax], 90
if (*somePtr == 0x90)
As a general question, I wonder why the output does not specify that this number is in hex (by either appending an h to it or prefixing it with 0x); it seems a much cleaner way to see the difference immediately.
Well, nobody's ever explained it to me this way, but I suspect it's because:
A) It takes fewer characters to represent the same value - FF = 255, so smaller listings. Imagine an old mainframe where 1 kilobyte was a HUGE amount of memory.
B) Since everything else in programming is represented in hex, why bother with the processing overhead of reading a hex byte, converting it to base 10, and displaying it? It can already take a vast amount of time to disassemble a prog, so why make it take longer?
C) Since the default/de facto standard is to use hex, why belabor the obvious by marking every value with 0xnn or nn (h)?
There are some progs that still use 0xnn when displaying bytes, but I really have no idea why. The only time it would really be necessary is if the program had the ability to display values in number systems with different bases - such as some hex editors, which can show hex, decimal, octal, etc.
Hex was chosen simply because 16 is a power of 2...
It's easier to convert between binary and hex because the same logic is behind them, both bases being powers of 2. As you may have noticed, a nibble (one hex digit) is 4 bits, and an octal digit (for that matter) is 3 bits...
Which is why both were chosen, and that's the reason base64 was next in line (using base 32 would have been redundant).
Easier transformation for us humans. But still, we have 10 fingers and 10 toes, so we count in tens. That's why everything in our lives has a sense of decimal base to it. Arabs, Japanese, Indians, Europeans - everyone used the decimal base because that is our limit: we can only count up to 10 on our fingers. Unless we're disfigured and got an extra toe/finger/etc.
IDA, Olly, and real disassembling engines use hex exclusively (unless stated or requested otherwise), while the VS debugger uses decimal because many people are still trapped in a cage of 10 fingers.
That's why it will always stay this way. It's human nature.