+ 1

[Solved] Memory allocation in C. Does every byte matter in modern computing?

I've been trying to learn memory allocation in C. I've played with malloc, calloc, realloc & free. Now I'm reading a sentence of unknown length using scanf(). I've found that I'm allocating a fixed amount to each word, e.g. 16 chars. I could do some pointer maths and keep growing my heap block by 16 chars each time I reach the end of a word (which might be 1 character, or 5, or 15). Then I wouldn't have bytes of memory holding lots of unnecessary '\0' terminators. My question: am I being silly trying to save a few bytes? https://code.sololearn.com/c24uafGqxgo5/?ref=app https://code.sololearn.com/cJ9cGXhpa8Xs/?ref=app
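
For reference, a minimal sketch of the grow-as-you-go idea (not the linked code; the name read_word, the starting capacity of 16, and whitespace-delimited words are my assumptions):

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Read one whitespace-delimited word of unknown length from 'in',
   growing the buffer with realloc as needed. Returns a heap string
   the caller must free(), or NULL on EOF or allocation failure. */
char *read_word(FILE *in)
{
    size_t cap = 16, len = 0;          /* start small, grow on demand */
    char *buf = malloc(cap);
    if (!buf) return NULL;

    int c;
    while ((c = fgetc(in)) != EOF && isspace(c))
        ;                              /* skip leading whitespace */
    if (c == EOF) { free(buf); return NULL; }

    do {
        if (len + 1 == cap) {          /* keep one slot for '\0' */
            char *tmp = realloc(buf, cap * 2);
            if (!tmp) { free(buf); return NULL; }
            buf = tmp;
            cap *= 2;
        }
        buf[len++] = (char)c;
    } while ((c = fgetc(in)) != EOF && !isspace(c));

    buf[len] = '\0';                   /* exactly one terminator */
    return buf;
}

A final realloc(buf, len + 1) after the loop would trim the block to the exact size, if every byte really matters.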

25th Feb 2022, 12:05 PM
HungryTradie
6 Answers
+ 1
For implementing dynamically sized structures, I always advise typedef'ing a fat pointer structure: a structure that keeps not just the address of the heap block where the data is stored, but also its current size and total capacity. Bundling these up into one structure keeps closely connected data together and lets it be passed as a unit to functions. It also enables you to provide such structures as opaque types with a set of well-written and tested library functions. If this flowing word thing is a Code Coach, I will implement it in a dynamic fashion and post it here for you to discuss.
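
As a minimal sketch of such a fat pointer (the names Buffer, buffer_init and buffer_free are mine, purely for illustration):

#include <stdlib.h>

/* A "fat pointer": the heap address travels together with
   its current length and total capacity. */
typedef struct {
    char  *data;     /* heap block           */
    size_t size;     /* bytes currently used */
    size_t capacity; /* bytes allocated      */
} Buffer;

int buffer_init(Buffer *b, size_t cap)
{
    b->data = malloc(cap);
    if (!b->data) return -1;
    b->size = 0;
    b->capacity = cap;
    return 0;
}

void buffer_free(Buffer *b)
{
    free(b->data);
    b->data = NULL;
    b->size = b->capacity = 0;
}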
26th Feb 2022, 7:59 AM
Ani Jona 🕊
+ 2
Where to start? 🤔 Well, are you being silly? Yes and no. If you are targeting a modern-day home computer, then yes, you are being silly: memory is abundant. If you are targeting a resource-restricted device, e.g. an embedded system, then no: memory is a scarce commodity.

As I mentioned earlier, the usual resize strategy is to double the space on each realloc, and I said it keeps the amortised time complexity constant. Perhaps this is a good time to elaborate. Let's say that insertion into an array costs c time, where c is a constant reflecting the time needed to calculate the address from the base and offset and to store some data there. If the current capacity is N, then inserting the (N+1)-th element will, in the worst case, require all previous N elements to be copied to a new location. If that takes d time per element, then inserting the (N+1)-th element costs c + N*d time. But since you doubled the size, you now have N new empty slots to fill before the next copy, so the N*d copying cost is spread over N cheap insertions, adding d to each of them. So inserting one element is c + d, i.e. constant time amortised.
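
To make the doubling concrete, a hedged sketch of an append with that resize strategy (push_char is my name for it; error handling kept minimal):

#include <stdlib.h>

/* Append one char, doubling capacity when full: the
   amortised-constant strategy described above. */
int push_char(char **buf, size_t *len, size_t *cap, char c)
{
    if (*len == *cap) {
        size_t ncap = *cap ? *cap * 2 : 1;
        char *tmp = realloc(*buf, ncap);  /* copies happen here */
        if (!tmp) return -1;              /* old block still valid */
        *buf = tmp;
        *cap = ncap;
    }
    (*buf)[(*len)++] = c;                 /* the constant-cost part */
    return 0;
}

Starting from buf = NULL with len = cap = 0 works too, because realloc(NULL, n) behaves like malloc(n).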
26th Feb 2022, 7:44 AM
Ani Jona 🕊
+ 2
[Continued, post 2] But the time complexity is still constant, and that is the point behind this strategy! The obvious downside: if the memory required is 1025 bytes and you have just doubled to 1024, then you allocate 2048 bytes, with 1023 of them unused. So a different school will tell you to always grow by a constant increment. But that makes the time complexity of insertion non-constant, because the copies then dominate. Which to choose is up to you, after considering all aspects of the actual system you are designing. For a general-purpose dynamic structure on a typical home PC, the aim is to keep constant-time insertion; hence a doubling strategy is used, because memory is cheap and easy to come by.
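
As a back-of-envelope illustration of that trade-off, this toy program counts element copies under both strategies, assuming every realloc moves the whole block (the worst case); N and STEP are arbitrary choices of mine:

#include <stdio.h>

int main(void)
{
    const long long N = 1000000, STEP = 16;
    long long cap, copies;

    /* doubling: total copies stay below 2*N */
    copies = 0;
    for (cap = 1; cap < N; cap *= 2)
        copies += cap;        /* each growth copies cap elements */
    printf("doubling: %lld copies (below 2*N)\n", copies);

    /* constant step: total copies grow like N*N/(2*STEP) */
    copies = 0;
    for (cap = STEP; cap < N; cap += STEP)
        copies += cap;
    printf("constant: %lld copies (~N*N/(2*STEP))\n", copies);
    return 0;
}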
26th Feb 2022, 7:51 AM
Ani Jona 🕊
+ 2
All right. I wrote it in the playground, so I could not fully test it, but it seems to work correctly. https://code.sololearn.com/c1WDri42R5LO
26th Feb 2022, 12:00 PM
Ani Jona 🕊
+ 1
Amazing, Ani Jona 🕊! The next module of my diploma is microcontrollers, and I've heard that saving memory is very important there. I'm going to work on my competency with structures. Thanks.
26th Feb 2022, 8:02 AM
HungryTradie
0
Oh, I forgot to go back to the C course: lesson 36.1 covers what I should learn for this.
25th Feb 2022, 12:21 PM
HungryTradie