+ 10

Which is faster: a program written in decimal, hexadecimal, or binary (running time and compile time)?

Why are several numbers inside lyra2Z.cl written using hexadecimal, and not only decimal? https://github.com/djm34/sgminer-msvc2015/blob/master/kernel/lyra2Z.cl If hexadecimal is faster than decimal, how can I prove it? If decimal is faster than hexadecimal, how can I prove it? This question is related to: https://sololearn.com/Discuss/1522035/what-s-the-difference-between-ulong4-and-ulong4-what-s-the-meaning-or-purpose-using-symbol

27th Sep 2018, 10:04 AM
Adi Pratama
17 Answers
+ 7
@Rishikesh Mind explaining how you got to your answer? There is no difference in speed when using hexadecimal over decimal; both compile to the same machine code.

However, hexadecimal is more readable in certain situations. For example, when doing bitwise operations it is much easier to see the binary representation in your head than with decimal numbers: 0x10, 0x20, 0x40, 0x80, 0x100 is easier to recognize than 16, 32, 64, 128, 256. In his code there is a number which reads 0x0100000000000000; it's quite obvious that only 1 bit is set here. In the decimal representation, 72057594037927936, it's not that obvious anymore.

Hexadecimal is also used when working close to memory, for example when reading from or writing to specific memory locations (think of game trainers) or when working with pointers.
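For instance, here is a minimal C sketch (assuming a C99 compiler; the variable names are just illustrative) showing that a hexadecimal literal and its decimal equivalent produce the exact same value:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The same value written two ways; the compiler turns both
       literals into identical bits, so there is no runtime cost. */
    uint64_t hex_form = 0x0100000000000000;    /* only bit 56 is set */
    uint64_t dec_form = 72057594037927936ULL;  /* same number in decimal */

    printf("%d\n", hex_form == dec_form);      /* prints 1 */
    return 0;
}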
27th Sep 2018, 3:16 PM
Dennis
+ 8
Okay, so if different numeral systems can't make much difference in a program's compilation/execution time, do you have any suggestions for optimizing the execution time of this program? https://github.com/djm34/sgminer-msvc2015/blob/master/kernel/lyra2Z.cl
30th Sep 2018, 7:45 PM
Adi Pratama
+ 7
It's not always about speed. It's about being more human-friendly (at least among programmers and electrical engineers). From a machine's point of view, both representations boil down to a sequence of 0's and 1's which correspond to voltage levels inside transistors.

You might think that since hexadecimal's base is a power of 2 (2⁴), it's probably the more convenient choice when the goal is optimal performance. But as I said, the machine treats both systems the same.

As for the usage you linked from GitHub: the bit-mask 0xFFFFFFFF in the macro definition on line 61 is probably more understandable and compact than (1111 1111 1111 1111 1111 1111 1111 1111)₂, (4294967295)₁₀, or 2³² − 1, when there's no built-in macro like UINT_MAX¹ available, or when there is one but it doesn't fit the requirements of the numeric constants.

____
¹ https://en.cppreference.com/w/c/types/limits
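To illustrate, a small C sketch (assuming a platform where unsigned int is 32 bits) showing that all these spellings denote the same constant:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Three spellings of the same 32-bit mask. */
    unsigned int mask_hex = 0xFFFFFFFFu;
    unsigned int mask_dec = 4294967295u;

    printf("%d\n", mask_hex == mask_dec);  /* prints 1 */
    printf("%d\n", mask_hex == UINT_MAX);  /* prints 1 where unsigned int is 32 bits */
    return 0;
}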
28th Sep 2018, 8:07 AM
Babak
+ 7
Adi Pratama-Universitas Brawijaya Your assumption is simply wrong. I wonder what makes you think that different numeral systems make a difference in a program's compilation/execution time. The machine only sees the 0-and-1 representation of the constants, whether you defined them as hex, oct, dec, or pure binary. If your claim were true, people would have to be crazy to define their programs' magic numbers in decimal, which is what they do most of the time!
28th Sep 2018, 11:36 AM
Babak
+ 5
Adi Pratama-Universitas Brawijaya Now you really make me laugh! First, you must tell us the current execution time of the "whole" project, measured with an instrumentation tool, if you believe there's a bottleneck in this particular file. Second, even at a glance, anyone can tell that this project uses the lowest-level operations to squeeze every clock cycle out of the GPU, so don't expect to see non-portable assembly code lying around the codebase. Third, this is not a hello-world program where you can simply ask everyone for a hint. The project is a hardware-specific cryptocurrency algorithm, which is a pretty big deal. If you are eager to use and improve it, you should probably ask the original developer(s) for reliable instructions. And with more patience, of course! ;)
1st Oct 2018, 6:09 AM
Babak
+ 5
Which instrumentation tool should be used to check the execution time of this project? https://github.com/djm34/sgminer-msvc2015/blob/master/kernel/lyra2Z.cl
1st Oct 2018, 8:15 PM
Adi Pratama
+ 4
So, there is a compile-time difference between hexadecimal and decimal, even though it is minimal. How about the running time? Is it also different?
27th Sep 2018, 3:44 PM
Adi Pratama
+ 3
@Adi Pratama-Universitas Brawijaya Since hexadecimal and decimal compile to the same machine code, their running times are equal.

It's just much easier to convert hexadecimal to binary: every hexadecimal digit maps to exactly 4 bits.

0x8F = 2 digits = 8 bits: 8 = 1000, F = 1111, result = 1000 1111
0x25DC = 4 digits = 16 bits: 2 = 0010, 5 = 0101, D = 1101, C = 1100, result = 0010 0101 1101 1100

You only have to remember the binary representations of the numbers from 0 to 15.
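Here's a small C sketch of that per-digit expansion (print_nibbles is an illustrative name, not from any library):

#include <stdio.h>
#include <stdint.h>

/* Print a 16-bit value as four 4-bit groups, one group per hex digit. */
static void print_nibbles(uint16_t x) {
    for (int shift = 12; shift >= 0; shift -= 4) {
        unsigned nibble = (x >> shift) & 0xF;  /* one hex digit */
        for (int bit = 3; bit >= 0; bit--)
            putchar(((nibble >> bit) & 1) ? '1' : '0');
        if (shift > 0)
            putchar(' ');
    }
    putchar('\n');
}

int main(void) {
    print_nibbles(0x25DC);  /* prints 0010 0101 1101 1100 */
    return 0;
}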
27th Sep 2018, 3:49 PM
Dennis
+ 3
@Adi Pratama-Universitas Brawijaya Not sure why you think my answer implied that code written in binary (0b111111 instead of 63) is faster. I was simply talking about readability, which in turn improves your programming speed. Writing literals in binary would, if anything, slow down compilation slightly (no hard numbers on this, though): many more characters are needed to write 0b11111111111111 than 16383, which increases file size, and it usually doesn't help readability either. In case you mean writing code in binary directly: I have written code using only a hex editor, and it's not a fun thing.
28th Sep 2018, 11:24 AM
Dennis
+ 2
@Rishikesh Oh, I'm not disagreeing with you; it does make some sense. But I also think that the compile-time difference between hexadecimal and decimal is so minimal that it really doesn't matter.
27th Sep 2018, 3:39 PM
Dennis
+ 1
A program written in hexadecimal is faster than one written in decimal.
27th Sep 2018, 2:53 PM
Rishikesh Jadhav
+ 1
Well, when you are dealing with big datasets and a tight time limit, every small change matters. Dennis, let's discuss this topic if you disagree with me.
27th Sep 2018, 3:32 PM
Rishikesh Jadhav
+ 1
Dennis It's minimal, but it's not zero, right? It's just like how C is faster than C++, even if only by less than a hundredth of a second. Hope this makes sense 😊
27th Sep 2018, 3:44 PM
Rishikesh Jadhav
+ 1
I don't think it will make any difference in runtime.
27th Sep 2018, 3:46 PM
Rishikesh Jadhav
+ 1
I asked my teacher this question a while back, and it turns out that hex is read more quickly than dec. It makes sense; when choosing colors, for example, rgb(123,456,789) uses more characters than #123456.
27th Sep 2018, 6:52 PM
Logomonic Learning
+ 1
Hexadecimal text is a tiny bit faster to turn into numbers, but the difference is so small that you won't see it on modern machines.
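If you want to measure it yourself, here's a rough C sketch (the timing methodology is deliberately simplistic and only illustrative) comparing how long the standard strtoull takes to parse the same value from hex versus decimal text:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const char *hex_text = "0x0100000000000000";
    const char *dec_text = "72057594037927936";
    const int iterations = 10000000;
    volatile unsigned long long sink = 0;  /* keeps the loops from being optimized away */

    clock_t t0 = clock();
    for (int i = 0; i < iterations; i++)
        sink += strtoull(hex_text, NULL, 16);
    clock_t t1 = clock();
    for (int i = 0; i < iterations; i++)
        sink += strtoull(dec_text, NULL, 10);
    clock_t t2 = clock();

    printf("hex: %.3fs, dec: %.3fs\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}

Either way, this conversion cost is paid once at compile time, not at run time.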
28th Sep 2018, 11:08 AM
VcC
0
Dennis Adi Pratama-Universitas Brawijaya I don't know how to prove it, but I can tell you this much: every piece of code you write in a high-level language like C++ is converted into assembly language, and the compiler does this job. After that, the assembly language is converted into machine code, which the computer understands. When we program in assembly language, we always use hexadecimal values. So if we use hexadecimal numbers in a high-level language like C++, the compiler's work is reduced, because it doesn't have to convert the numbers into hexadecimal. That's the reason, I guess, why hexadecimal is faster than decimal.
27th Sep 2018, 3:26 PM
Rishikesh Jadhav