IS FUNCTIONAL PROGRAMMING SLOW?
Let me give a bit of context: C is like assembly with a bit of syntactic sugar. It's full of mutable variables in its for loops, but those loops are fast, perhaps because assembly works the same way: a register is incremented or decremented (via a single processor instruction) until the loop stops. In my mind, C translates easily to assembly. Functional programming, by contrast, would rather have everything immutable, including the elements of a loop, e.g. iterating over an object that contains or generates all the possible values of that loop. Which means that if something has to change at some point in the program (change is inevitable, and not just in loops), those immutable state objects/structures have to be deallocated while new state objects are created, over and over. I cannot fathom this ever surpassing traditional C-like procedural programming in efficiency (both speed and memory) at the fundamental machine level. Am I right to believe that?
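To make the contrast concrete, here is a small sketch of the two styles side by side, written in C++ so both fit in one file. The function names are mine, and the recursive version is only one way a functional language might express the same computation:

```cpp
#include <cassert>
#include <vector>

// Imperative style: one mutable accumulator, updated in place on
// every iteration, much like a register incremented in assembly.
int sum_imperative(const std::vector<int>& xs) {
    int total = 0;                      // mutable state
    for (std::size_t i = 0; i < xs.size(); ++i)
        total += xs[i];                 // in-place update
    return total;
}

// Functional style: no visible mutation; each call produces a new
// value instead of updating an old one.
int sum_functional(const std::vector<int>& xs, std::size_t i = 0) {
    return i == xs.size() ? 0 : xs[i] + sum_functional(xs, i + 1);
}
```

Naively the second version pays for a call per element, which is the worry raised above; whether it actually runs slower depends on how much of that the compiler can optimize away.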
3/5/2021 4:53:44 AM · ChillPill 🌶
30 Answers
I also like the following answer from the above SO post: "As for now, functional languages aren't used heavily for industry projects, so not enough serious work goes into optimizers. Also, optimizing imperative code for an imperative target is probably way easier. Functional languages have one feat that will let them outdo imperative languages really soon now: trivial parallelization. Trivial not in the sense that it is easy, but that it can be built into the language environment, without the developer needing to think about it. The cost of robust multithreading in a thread-agnostic language like C is prohibitive for many projects." I guess this is what Arsenic was hinting at?
I think this is not totally correct, because you are forgetting all the compiler optimisations. Let's see what optimisations are done in Haskell, for instance:
- common subexpression elimination: a shared subexpression is computed only once
- inlining of functions, thanks to the function definition being available
- specialisation of functions (like templates in C++: the same function is automatically generated multiple times to be more efficient; zero cost at runtime)
- laziness: an expression is evaluated only if its value is actually needed
- fusion: quoting the wiki, in the case of lists, "these intermediate lists never exist in memory entirely". So there are optimisations to avoid allocation and deallocation of objects as much as possible.
- stream fusion: pipelines are optimised into efficient for loops (which also means only one memory allocation for the whole operation)
From: https://wiki.haskell.org/GHC_optimisations
And you can look at benchmarks! Haskell is efficient!
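What fusion buys can be shown by hand in C++ (a sketch; GHC derives the second form from the first automatically, whereas here I wrote both versions myself):

```cpp
#include <cassert>
#include <vector>

// Naive pipeline: filter the evens, double them, then sum. The
// intermediate vectors exist only to be consumed immediately.
int sum_doubled_evens_naive(const std::vector<int>& xs) {
    std::vector<int> evens;
    for (int x : xs)
        if (x % 2 == 0) evens.push_back(x);     // "filter"
    std::vector<int> doubled;
    for (int x : evens)
        doubled.push_back(x * 2);               // "map"
    int total = 0;
    for (int x : doubled) total += x;           // "sum"
    return total;
}

// Fused version: the same computation as a single loop, with no
// intermediate lists allocated at all.
int sum_doubled_evens_fused(const std::vector<int>& xs) {
    int total = 0;
    for (int x : xs)
        if (x % 2 == 0) total += x * 2;
    return total;
}
```

The point of the wiki quote is that you get to write code shaped like the first function and have the compiler run something shaped like the second.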
In terms of speed I think you are right. At the lowest level everything boils down to mutable operations (changing bits in memory). Immutability is a higher-level concept that helps the programmer control state changes throughout the program. It's a bubble to protect us from our own fallacies. If we talk about memory usage, I would say it depends on the programming language. For example, Clojure applies "persistent data structures" and "structural sharing". This eliminates the need to allocate lots of new memory when we only change a small part of a large data structure. Some languages are not so graceful with memory management of immutable data. https://augustl.com/blog/2019/you_have_to_know_about_persistent_data_structures/
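A minimal sketch of the structural-sharing idea, using a persistent singly linked list in C++ (the `Node`/`cons` names are mine; real persistent vectors like Clojure's are trees, but the sharing principle is the same):

```cpp
#include <cassert>
#include <memory>

// An immutable list node. "Updating" a list never modifies existing
// nodes; it allocates new nodes that point into the old structure.
struct Node {
    int value;
    std::shared_ptr<const Node> next;
};
using List = std::shared_ptr<const Node>;

// Prepend a value: one allocation, and the entire tail is shared
// with every other list that was built on it.
List cons(int value, List tail) {
    return std::make_shared<const Node>(Node{value, std::move(tail)});
}
```

Two lists built on the same tail literally point at the same nodes, so a "new" list after a small change costs one node, not a full copy.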
I think functional programming has the benefit of concisely describing complicated operations with short expressions. Take map and filter: they also translate to for loops eventually, but they are declarative, saying "run this function over my data", whereas a for loop is imperative by nature, saying "take each value from this group of values and do something with it". Maybe that does not sound very different, but functional languages can very easily compose (chain) multiple functions in ways that would require nesting many for loops imperatively. This programming style conveys the intention of the program to the developer more clearly, but it obviously has additional costs compared to imperative instructions that change memory bit by bit.
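Even C++ can be written in this declarative shape with the standard algorithms (a sketch; the function name is mine):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Declarative style: say *what* should happen to the data and let
// the library algorithms supply the loops.
int sum_of_squares(std::vector<int> xs) {
    std::transform(xs.begin(), xs.end(), xs.begin(),
                   [](int v) { return v * v; });       // "map"
    return std::accumulate(xs.begin(), xs.end(), 0);   // "reduce"
}
```

The intent (square everything, then sum) is visible at a glance, where the equivalent hand-written loop buries it in index bookkeeping.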
Théophile I agree that all those optimizations help performance, but take laziness for example. You create yourself a handle and only get an evaluation when it's needed, at a later time. At the assembly level it sounds to me like two function blocks: calling the first block only returns a pointer used to call the second block later. Every call, at the assembly level, means stuff being pushed and popped off the stack. I don't see that ever being faster than traditional C programming, where a value is returned immediately.
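The "handle plus later evaluation" picture can be spelled out as a hand-rolled thunk in C++ (a sketch; `LazyInt` and `force` are my names, and a real lazy runtime is far more sophisticated):

```cpp
#include <cassert>
#include <functional>

// A thunk: the computation is stored as a closure and only run when
// forced, roughly what a lazy language does behind the scenes. The
// cost concern is visible here: force() is an indirect call where
// eager code would just read a value. Memoization softens it: the
// closure runs at most once, and later forces are a plain read.
struct LazyInt {
    std::function<int()> compute;
    bool evaluated = false;
    int cached = 0;

    int force() {
        if (!evaluated) {
            cached = compute();   // pay the call cost once
            evaluated = true;
        }
        return cached;            // every later force is cheap
    }
};
```

So yes, a forced thunk costs more than an immediate value, but laziness also lets expressions that are never needed cost nothing at all, which is where it can win.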
Good question and it also depends on whether "slow" applies to: 1) writing code, 2) understanding and editing prewritten existing code, 3) compiling or 4) running the written code. :) The answers to those four different options are likely different. Worth creating a matrix and weighing the options?
Now, let's consider another thing: C is meant to be a system programming language. That means you have full control over what you are doing. I don't know of any pure functional language meant to be a system programming language; they are usually not designed to be low-level. However, as we can see in the case of Haskell, functional programming languages can be very efficient in the end. Yes, procedural programming in C is fast. But I could argue that procedural programming in Python is not. 😉
I'd wager that most people write faster C than assembly, and most people write faster C# than C. That's because the compiler generally does a pretty good job optimizing while you get tired eventually. There's no reason we can't keep climbing the abstraction ladder so long as compilers follow suit. There is no "inherent" slowness in any language. Immutability isn't really a show-stopper, if you have a function that operates on immutable data then the compiler can probably reason about your code a lot better, optimize better, and optimize away all the copying later. It's a win-win for both you and the compiler. That's in an ideal world anyway.
I would say the loss in speed a functional language incurs by avoiding mutable state can easily be balanced by the fact that, for the same reason, parallel processing is easier to implement (less mutable state means less synchronisation and locking overhead). And I think in every case where speed practically matters, a multithreaded functional program will always have an edge over a single-threaded procedural C program.
#define CPP ChillPill if you try to parallelise stuff in C, you will eventually run into issues where mutable variables are shared between concurrent processes, which ends up costing extra runtime overhead (you then have to create some syncing mechanism involving locking and unlocking mutexes). On the other hand, if you have a functional program consisting of pure functions, it will not only be easier to multithread but will also avoid that overhead (a pure function has no shared mutable state).
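A sketch of that point in C++ (the function name and the fixed two-thread split are mine): because the worker function is pure and each thread writes only its own slice of the output, no mutex is needed at all.

```cpp
#include <algorithm>
#include <cassert>
#include <thread>
#include <vector>

// Apply a pure function (squaring) to a vector across two threads.
// Each thread reads shared immutable input and writes a disjoint
// range of the output, so there is nothing to lock.
std::vector<int> parallel_square(const std::vector<int>& xs) {
    std::vector<int> out(xs.size());
    const std::size_t n_threads = 2;
    const std::size_t chunk = (xs.size() + n_threads - 1) / n_threads;
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < n_threads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(begin + chunk, xs.size());
            for (std::size_t i = begin; i < end; ++i)
                out[i] = xs[i] * xs[i];   // pure, disjoint writes
        });
    }
    for (auto& w : workers) w.join();
    return out;
}
```

The moment the workers had to update a shared accumulator instead, you would be back to mutexes or atomics, which is exactly the overhead being described.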
Arsenic while it's true that one will often need locks of some kind to prevent data races, it is also possible to multithread without needing shared mutable data. It all depends on the task to be performed, but I get what you mean.
Arsenic what about pthreads in C? You can multithread in C too.
"Optimizing imperative code for an imperative target is probably way easier" yeah, that's what I was thinking too. C code to assembly is like translating French to English: easy, compared to translating Martian to English. hhh
Jholler Fortran is not as bad as I thought. It seems to take only 1.6 times longer (Intel compiler) than C++ compiled with g++, according to the table below (if you trust it), also from the same SO post.
Relative speed / Language:
1.0 C++ (GNU g++)
1.1 C (GNU gcc)
1.2 ATS
1.5 Java 6 -server
1.5 Clean
1.6 Pascal (Free Pascal)
1.6 Fortran (Intel)
1.8 Haskell (GHC)
2.0 C# (Mono)
2.1 Scala
2.2 Ada 2005 (GNAT)
2.4 Lisp (SBCL)
3.9 Lua (LuaJIT)
Sonic I would like to see the kind of benchmark used. Interesting. Someone said to me recently: "C++ is the best"
Having said that I have seen pthreads used successfully in a medium scale industry project. The performance was good and I heard experts say that it won't buckle unless you used more than about 70 threads on that particular Linux architecture. But I don't know as I am not an expert myself.
Kiwwi# this link given by Arsenic has some benchmarks: https://github.com/kostya/benchmarks In one of the benchmarks C took twice as long and consumed twice as much energy (in joules) as its C++ counterpart, but the C++ program consumed 3 times as much memory. In the Base64 benchmark, though, C was twice as fast as C++. It depends a lot on what the program is doing. It is also probably down to compiler optimisation, but don't ask me for all the details as a lot of it is beyond me.
#define CPP ChillPill that's probably right although I've never seen or heard Martian lol.
Sonic C++ faster than C, and Java faster than C#? How?