What is the difference between type conversion and type casting?
It would be great if you could explain with an example :P
Basically, conversion is automatic: the compiler does it for you. For example, if you write `float a = 5;`, the compiler converts the int `5` to the float `5.0`, so int to float happens implicitly.

Casting is something coders do themselves. Say you want to divide 5 by 2: if you write `a = 5 / 2;`, both operands are ints, so integer division gives you 2, even if `a` is a float. You have to cast one operand to float manually, `a = (float)5 / 2;`, and then you get the desired 2.5.

Of course, that's just a fraction of the subject, but it should be enough for now. Read the manual for a deeper understanding. :)
How do I generate a list with <N> unique elements randomly selected from another (already existing) list?
Given three integer matrices A(m*n), B(m*n) and C(m*n), print the one with the most zero elements, along with the code.
I've been coding this for hours and can't make the values of the 5 random numbers round off to 2 decimal places like in the examples.
Overloading + operator