Understanding Memory Management, Part 1: C (educatedguesswork.org)
229 points by ekr____ 5 days ago | 90 comments
returningfory2 2 days ago | root | parent | next |
I feel like this comment is misleading because it gives the impression that the code in the article is wrong or unsafe, whereas I think it's actually fine? In the article, in the case when `tmp == NULL` (in your notation) the author aborts the program. This means there's no memory leak or unsafety. I agree that one can do better of course.
dataflow a day ago | root | parent |
You're confusing the code with the program it compiles to. The program is fine, okay. But the code is only "fine" or "safe" if you view it as the final snapshot of whatever it's going to be. If you understand that the code also influences how it's going to evolve in the future (and which code doesn't?) then no, it's not fine or safe. It's brittle and making future changes more dangerous.
Really, there's no excuse whatsoever for not having a separate function that takes the pointer by reference & performs the reallocation and potential termination inside itself, and using that instead of calling realloc directly.
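For example, a minimal sketch of such a helper (names made up):

    #include <stdio.h>
    #include <stdlib.h>

    /* Reallocate or die: the caller can never lose the original pointer
       or forget the NULL check, because failure terminates the program. */
    static void *xrealloc(void *ptr, size_t size) {
        void *tmp = realloc(ptr, size);
        if (tmp == NULL) {
            fprintf(stderr, "out of memory\n");
            abort();
        }
        return tmp;
    }

    /* "By reference" at the call site: the variable is updated in place. */
    #define XREALLOC(p, size) ((p) = xrealloc((p), (size)))

    /* Usage: XREALLOC(lines, (num_lines + 1) * sizeof(*lines)); */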
returningfory2 21 hours ago | root | parent |
This is an article introducing people to memory management, targeted at beginners. The code snippets are there to illustrate the ideas. The author made the correct pedagogical decision to prioritize readability over optimal handling of an OOM edge case that would be confusing to introduce to beginner readers at this early stage.
Talking about "making future changes" seems to be missing the point of what the author is doing. They're not committing code to the Linux kernel. They're writing a beginner's article about memory management.
dataflow 20 hours ago | root | parent |
> This is an article introducing people to memory management, targeted at beginners
I realize, and that's what makes it even worse. First impressions have a heck of a lot stronger effect than 10th impressions. Beginners need to learn the right way in the beginning, not the wrong way.
When did "safety first" stop being a thing? This is like skipping any mention of goggles when teaching chemistry or woodworking for "pedagogical reasons". You're supposed to first teach your students the best way to do things; then you can teach them how to play fast and loose if it's warranted. Not the other way around!
returningfory2 20 hours ago | root | parent |
The code in the article is not wrong. It is not unsafe. The author explicitly handles the OOM case correctly. It is true that there are more optimal ways to do it if you do have an OOM handling strategy.
And no, you're not supposed to teach your students the best way to do things at the start. That's not how teaching works. You start with the simpler (but still correct) way, and then work towards the best way. This is why introductions to Rust are full of clone calls. The best Rust code minimizes the number of clones. But when you're introducing people to something, you don't necessarily do the optimal thing first because that disrupts the learning process.
dataflow 20 hours ago | root | parent |
> The code in the article is not wrong. It is not unsafe. The author explicitly handles the OOM case correctly.
And hence we circle back to what I just wrote above: you're confusing the code with the program that it compiles to. Because the code isn't there solely for the purpose of being compiled into a program, it's also serving as a stepping stone for other things (learning, modification, whatever). https://news.ycombinator.com/item?id=42733611
If it helps to phrase it differently: the code might be "compile-safe", but not "modification-safe" or "learning-safe".
returningfory2 16 hours ago | root | parent |
I don't see why the code is not "learning safe". The code presents the simplest safe way to handle an OOM condition. Seems basically perfect for a _beginners guide_ to manual memory management.
dataflow 16 hours ago | root | parent |
It's not learning-safe because it teaches said learners to write bad code like this.
tptacek a day ago | root | parent | prev | next |
I was looking for a place to hang this comment and here's as good as any: the right way to handle this problem in most C code is to rig malloc, realloc, and strdup up to explode when they'd return NULL. Proper error handling of a true out-of-memory condition is pretty treacherous, so most of the manual error handling stuff you see on things like realloc and malloc are really just performative. In an application setting like this --- not, like, the world's most popular TLS library or something --- aborting automatically on an allocation failure is totally reasonable.
Since that's essentially what EKR is doing here (albeit manually), I don't think this observation about losing the original `lines` pointer is all that meaningful.
dundarious 18 hours ago | root | parent |
After using this malloc-auto-abort() style for many many years, I've come to believe that, if only for the better error handling properties, manual memory management should primarily be done via explicit up-front arena allocation using OS APIs like mmap/VirtualAlloc, then a bump allocator within the arena.
It helps in the vast majority of cases where sensible memory bounds are known or can be inferred, and it means that all system memory allocation errors* can be dealt with up front with proper error handling (including perhaps running in a more restrictive mode with less memory), and then all application memory allocation errors (running out of space in the arena) can be auto-abort() as before (and be treated as bugs). The other huge benefit is that there is no free() logic for incremental allocations within the arena; you just munmap/VirtualFree the arena in its entirety when done.
Of course, there are cases where there are no sensible memory bounds (in space or perhaps in time) and where this method is not appropriate without significant modification.
*modulo Linux's overcommit... which is a huge caveat
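A minimal sketch of that arena style, assuming a POSIX system with MAP_ANONYMOUS (all names invented):

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    /* One up-front reservation; OS-level failure is handled here, once. */
    typedef struct {
        unsigned char *base;
        size_t cap;
        size_t used;
    } Arena;

    static int arena_init(Arena *a, size_t cap) {
        void *p = mmap(NULL, cap, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return -1;                /* proper error handling, up front */
        a->base = p;
        a->cap = cap;
        a->used = 0;
        return 0;
    }

    /* Bump allocation; exhausting the arena is treated as a bug. */
    static void *arena_alloc(Arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;   /* keep allocations 16-byte aligned */
        if (a->cap - a->used < n) {
            fprintf(stderr, "arena exhausted\n");
            abort();
        }
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* No per-allocation free(): drop the whole arena at once. */
    static void arena_free_all(Arena *a) {
        munmap(a->base, a->cap);
    }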
tptacek 17 hours ago | root | parent |
I feel like the prospect of using arenas and pools is further evidence that malloc and realloc should abort on failure, because you're right: if you're using an arena, you've not only taken application-layer control over allocation, but you've also implicitly segregated out a range of allocations for which you presumably have a strategy for exhaustion. The problem with malloc is that it's effectively the system allocator, which means the whole runtime is compromised when it fails. Yes: if you want to manually manage allocation failures, do it by using a pool or arena allocator on top of malloc.
dundarious 11 hours ago | root | parent |
Yes, fundamentally my point is that it's pretty much always useful to separate OS allocation from application-level "allocation" (more like consumption of allocated memory than true allocation), and, that application-level "allocation" should always auto-abort() or at least provide a trivially easy way to auto-abort().
So I agree, given malloc and friends are a combination of OS and application-level allocators, they should auto-abort(). I don't focus on malloc and friends though, because I'm not a fan of using the Rube Goldberg machine of "general purpose" allocators in most non-trivial situations. They're complicated hierarchies of size-based pools, and free lists, and locks, and on and on.
ekr____ 2 days ago | root | parent | prev | next |
Author here.
Thanks for the flag. As you have probably noticed, I just abort the program a few lines below on realloc failure, so this doesn't leak so much as crash. However, this is a nice example of how fiddly C memory management is.
witrak 18 hours ago | root | parent |
Taking into account how thoroughly you explain all the intricate details of memory handling, it's strange that the example doesn't clearly comment on the simplified handling of a failed allocation (which leads to a potentially risky situation).
Saying "this is a nice example of how fiddly C memory management is" in the discussion is a bit too little - perhaps the intended readers of the article would prefer an explicit warning there, just so they know they shouldn't forget to abort the program as you do.
lionkor 2 days ago | root | parent | prev | next |
Very odd that an article trying to teach memory management would miss this; it should be common knowledge to anyone who has used realloc, just like checking the return of any allocation call.
bluetomcat 2 days ago | root | parent | next |
They treat an OOM situation as exceptional and immediately call abort() in case any allocation function returns NULL. The specification of these functions allows you to handle OOM situations gracefully.
josephg a day ago | root | parent |
> The specification of these functions allows you to handle OOM situations gracefully.
In theory, sure. But vanishingly little software actually deals with OOM gracefully. What do you do? Almost any interaction with the user may result in more memory allocations in turn - which presumably may also fail. It’s hard to even test OOM on modern systems because of OS disk page caching.
Honestly, panicking on OOM is a totally reasonable default for most modern application software. In languages like rust, this behaviour is baked in.
PhilipRoman 2 days ago | root | parent | prev |
>checking the return of any allocation call
I would say this is pointless on many modern systems unless you also disable overcommit, since otherwise any memory access can result in a crash, which is impossible to check for explicitly.
kevin_thibedeau a day ago | root | parent | prev |
abort() isn't an option on all modern systems.
o11c 2 days ago | root | parent | prev | next |
There's another bug, related to performance - this involves a quadratic amount of memory copying unless your environment can arrange for zero-copy.
ekr____ 2 days ago | root | parent | next |
Author here. Quite so. See footnote 3: https://educatedguesswork.org/posts/memory-management-1/#fn3
"If you know you're going to be doing a lot of reallocation like this, many people will themselves overallocate, for instance by doubling the size of the buffer every time they are asked for more space than is available, thus reducing the number of times they need to actually reallocate. I've avoided this kind of trickery to keep this example simple."
Karellen 2 days ago | root | parent | prev |
Surely that's only the case if realloc() actually resizes and copies on every call? Which it normally doesn't?
I thought that most implementations of realloc() would often "round up" internally to a larger size allocation, maybe power-of-two, maybe page size, or something? So if you ask for 20 bytes, the internal bookkeeping sets aside 32, or 4096, or whatever. And then if you realloc to 24 bytes, realloc will just note that the new allocation fits in the amount it's reserved for you and return the same buffer again with no copying?
o11c 2 days ago | root | parent |
Some implementations might round up to encourage reuse:
* memory-checking allocators never do.
* purely-size-based allocators always do.
* extent-based allocators try to, but this easily fails if you're doing two interleaving allocations.
* the mmap fallback does only if allowing the kernel to choose addresses rather than keeping virtual addresses together, unless you happen to be on a kernel that allows not leaving a hole
Given that there's approximately zero overhead to do it right, just do it right (you don't need to store capacity, just compute it deterministically from the size).
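A sketch of one way to "compute it deterministically from the size" (one possible policy, not the only one):

    /* Capacity is a pure function of the element count: the next power of
       two at or above it. Nothing extra is stored; realloc is only needed
       when capacity_for(old_count) != capacity_for(new_count). */
    static size_t capacity_for(size_t count) {
        size_t cap = 8;               /* arbitrary minimum */
        while (cap < count)
            cap *= 2;
        return cap;
    }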
tliltocatl 2 days ago | root | parent | prev | next |
If you exit the program (as in the process) you don't need to free anything.
astrobe_ 2 days ago | root | parent | prev | next |
The program abort()s if the reallocation fails. But indeed, for an educational example, it's not good to be too smart.
I believe the test if(!num_lines) is unnecessary, because reallocating a NULL pointer is equivalent to malloc(). This is also a bit "smart", but I think it is also more correct because you don't use the value of one variable (num_lines is 0) to infer the value of another (lines is NULL).
To go further, an opened-ended structure like:
struct
{
unsigned count;
char* lines[];
};
... could also be preferable in practice. But actually writing good C is not the topic of TFA.
atiedebee a day ago | root | parent |
> I believe the test if(!num_lines) is unnecessary, because reallocating a NULL pointer is equivalent to malloc().
I thought that this behaviour was deprecated in C23, but according to cppreference it is still there[0].
Am I thinking of realloc with 0 size, or was this actually a thing that was discussed?
pjmlp 21 hours ago | root | parent |
Section 7.24.3.7 The realloc function
https://open-std.org/jtc1/sc22/wg14/www/docs/n3096.pdf
> If ptr is a null pointer, the realloc function behaves like the malloc function for the specified size. Otherwise, if ptr does not match a pointer earlier returned by a memory management function, or if the space has been deallocated by a call to the free or realloc function, or if the size is zero, the behavior is undefined. If memory for the new object is not allocated, the old object is not deallocated and its value is unchanged.
pjmlp 2 days ago | root | parent | prev | next |
It is even flagged as such on Visual Studio analyser.
aa-jv 2 days ago | root | parent | prev |
Actually, no. You've just committed one of the cardinal sins of the *alloc()'s, which is: NULL is an acceptable return, so errno != 0 is the only way to tell if things have gone awry.
The proper use of realloc is to check errno always ... because in fact it can return NULL in a case which is not considered an error: lines is not NULL but requested size is zero. This is not considered an error case.
So, in your fix, please replace all checking of tmp == NULL, instead with checking errno != 0. Only then will you have actually fixed the OP's unsafe, incorrect code.
spiffyk 2 days ago | root | parent | next |
From `malloc(3)`:
Nonportable behavior
The behavior of these functions when the requested size is zero is glibc specific; other implementations may return NULL without setting errno, and portable POSIX programs should tolerate such behavior. See realloc(3p).
POSIX requires memory allocators to set errno upon failure. However, the C standard does not require this, and applications portable to non-POSIX platforms should not assume this.
anymouse123456 2 days ago | root | parent |
As someone writing C for POSIX and embedded environments, this clarification is super helpful.
cozzyd 2 days ago | root | parent | prev |
In this case, if (num_lines+1)*(sizeof(char*)) is zero, that is certainly unintended
AdieuToLogic a day ago | prev | next |
The example strdup implementation:
char *strdup(const char *str) {
    size_t len = strlen(str);
    char *retval = malloc(len);
    if (!retval) {
        return NULL;
    }
    strcpy(retval, str);
    return retval;
}
Has a very common defect. The malloc call does not reserve enough space for the NUL byte required for successful use of strcpy, thus introducing heap corruption.
Also, assuming a NULL pointer is bitwise equal to 0 is not portable.
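For reference, a corrected sketch (renamed to avoid clashing with the POSIX strdup); the key change is allocating strlen(str) + 1 bytes:

    #include <stdlib.h>
    #include <string.h>

    char *my_strdup(const char *str) {
        size_t len = strlen(str) + 1;   /* include the terminating NUL */
        char *retval = malloc(len);
        if (retval == NULL) {
            return NULL;
        }
        memcpy(retval, str, len);       /* copies the NUL as well */
        return retval;
    }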
msarnoff a day ago | root | parent |
re: the bitwise representation of NULL, evaluating a pointer in a Boolean context has the intended behavior regardless of the internal representation of a null pointer.
See the C FAQ questions 5-3 and 5-10, et al. https://c-faq.com/null/
samsquire 2 days ago | prev | next |
Thanks for such a detailed article.
In my spare time working with C as a hobby I am usually in "vertical mode", which is different from how I would work (carefully) at work: just getting things done end-to-end as fast as possible, without being careful at every step that there are no memory errors. Since I am just trying to get something working end-to-end, I do not actually worry about memory management when writing C; I let the operating system handle freeing memory. I am trying to get the algorithm working in my hobby time.
And since I wrote everything in Python or Javascript initially, I am usually porting from Python to C.
If I were using Rust, it would force me to be careful in the same way, due to the borrow checker.
I am curious: we have reference counting and we have Profile guided optimisation.
Could "reference counting" be compiled into a debug/profiled build and then detect which regions of time we free things in before or after (there is a happens before relation with dropping out of scopes that reference counting needs to run) to detect where to insert frees? (We Write timing metadata from the RC build, that encapsulates the happens before relationships)
Then we could recompile with a happens-before relation file that has correlations where things should be freed to be safe.
EDIT: Any discussion about those stack diagrams and alignment should include a link to this wikipedia page;
jvanderbot 2 days ago | root | parent | next |
> which is just getting things done end-to-end as fast as possible, not careful at every step that we have no memory errors.
One horrible but fun thing a former professor of mine pointed out: If your program isn't going to live long, then you never have to deallocate memory. Once it exits, the OS will happily clean it up for you.
This works in C or perhaps lazy GC languages, but for stateful objects where destructors do meaningful work, like in C++, this is dangerous. This is one of the reasons I hate C++ so much: Unintended side effects that you have to trigger.
> Could "reference counting" be compiled into a debug/profiled build and then detect which regions of time we free things in before or after (there is a happens before relation with dropping out of scopes that reference counting needs to run) to detect where to insert frees?
This is what Rust does, kinda.
C++ also does this with "stack" allocated objects - it "frees" (calls destructor and cleans up) when they go out of scope. And in C++, heap allocated data (if you're using a smart pointer) will automatically deallocate when the last reference drops, but this is not done at compile time.
Those are the only two memory management models I'm familiar with enough to comment on.
MarkSweep 2 days ago | root | parent | next |
There is this old chestnut about “null garbage collectors”:
https://devblogs.microsoft.com/oldnewthing/20180228-00/?p=98...
> This sparked an interesting memory for me. I was once working with a customer who was producing on-board software for a missile. In my analysis of the code, I pointed out that they had a number of problems with storage leaks. Imagine my surprise when the customer's chief software engineer said "Of course it leaks". He went on to point out that they had calculated the amount of memory the application would leak in the total possible flight time for the missile and then doubled that number. They added this much additional memory to the hardware to "support" the leaks. Since the missile will explode when it hits its target or at the end of its flight, the ultimate in garbage collection is performed without programmer intervention.
jvanderbot 2 days ago | root | parent | next |
Rapid disassembly as GC. Love it.
Have you heard the related story about the patriot missile system?
https://www.cs.unc.edu/~smp/COMP205/LECTURES/ERROR/lec23/nod...
Not a GC issue, but fun software bug.
gpderetta a day ago | root | parent | prev |
Until the software is reused for a newer model with longer range and they forget to increase the RAM size.
But of course that would never happen, would it?
pjmlp 2 days ago | root | parent | prev |
The wonders of corrupted data, stale advisory locks and UNIX IPC leftovers, because they weren't properly flushed, or closed before process termination.
jvanderbot 2 days ago | root | parent |
I'll narrow my scope more explicitly:
close(x) is not memory management - not at the user level. This should be done.
free(p) has no O/S side effects like this in C - this can be not-done if you don't malloc all your memory.
You can get away with not de-allocating program memory, but (as mentioned) that has nothing to do with freeing OS / kernel / networking resources in C.
PhilipRoman 2 days ago | root | parent |
Most kernel resources are fairly well behaved, as they will automatically decrement their refcount when a process exits. Even mutexes have a "robust" flag for this exact reason. Programs which rely on destructors or any other form of orderly exit are always brittle and should be rewritten to use atomic operations.
pjmlp a day ago | root | parent |
Which kernel, on which specific OS?
This is a very non-portable assumption, even if we constrain it to UNIX/POSIX flavours.
PhilipRoman a day ago | root | parent |
As far as assumptions go, it's actually one of the most portable ones and for a good reason, considering it is a basic part of building a reliable system. Quoting POSIX:
Consequences of Process Termination
Process termination caused by any reason shall have the following consequences:
[..] All of the file descriptors, directory streams, conversion descriptors, and message catalog descriptors open in the calling process shall be closed.
[..] Each attached shared-memory segment is detached and the value of shm_nattch (see shmget()) in the data structure associated with its shared memory ID shall be decremented by 1.
For each semaphore for which the calling process has set a semadj value (see semop()), that value shall be added to the semval of the specified semaphore.
[..] If the process is a controlling process, the controlling terminal associated with the session shall be disassociated from the session, allowing it to be acquired by a new controlling process.
[..] All open named semaphores in the calling process shall be closed as if by appropriate calls to sem_close().
Any memory locks established by the process via calls to mlockall() or mlock() shall be removed. If locked pages in the address space of the calling process are also mapped into the address spaces of other processes and are locked by those processes, the locks established by the other processes shall be unaffected by the call by this process to _Exit() or _exit().
Memory mappings that were created in the process shall be unmapped before the process is destroyed.
Any blocks of typed memory that were mapped in the calling process shall be unmapped, as if munmap() was implicitly called to unmap them.
All open message queue descriptors in the calling process shall be closed as if by appropriate calls to mq_close().
caspper69 2 days ago | root | parent | prev | next |
Nothing is going to tell you where to put your free() calls to guarantee memory safety (otherwise Rust wouldn't exist).
There are tools that will tell you they're missing, however. Read up on Valgrind and ASAN.
In C, non-global variables go out of scope when the function they are created in ends. So if you malloc() in a fn, free() at the end.
If you're doing everything with globals in a short-running program, let the OS do it if that suits you (makes me feel dirty).
This whole problem doesn't get crazy until your program gets more complicated: once you have a lot of pointers among objects with different lifetimes, or you decide to add some concurrency (or parallelism), or you have a lot of cooks in the kitchen.
In the applications you say you are writing, just ask yourself if you're going to use a variable again. If not, and it is using dynamically-allocated memory, free() it.
Don't psych yourself out, it's just C.
And yes, there are ref-counting libraries for C. But I wouldn't want to write my program twice, once to use the ref-counting library in debug mode and another to use malloc/free in release mode. That sounds exhausting for all but the most trivial programs.
SkiFire13 2 days ago | root | parent | prev | next |
> I am curious: we have reference counting and we have Profile guided optimisation.
> Could "reference counting" be compiled into a debug/profiled build and then detect which regions of time we free things in before or after (there is a happens before relation with dropping out of scopes that reference counting needs to run) to detect where to insert frees?
Profile guided optimizations can only gather information about what's most probable, but they can't give knowledge about what will surely happen. For freeing, however, you most often want that knowledge, because not freeing will result in a memory leak (and freeing too early will result in a use-after-free, which you definitely want to avoid, so the analysis needs to be conservative!). In the end this can only be an _optimization_ (just like profile guided _optimization_s are just optimizations!) on top of a workflow that is ok with leaking everything.
mgaunard 2 days ago | root | parent | prev |
In C, not all objects need to be their own allocated entity (like they are in other languages). They can be stored in-line within another object, which means the lifetime of that object is necessarily constrained by that of its parent.
You could make every object its own allocated entity, but then you're losing most of the benefits of using C, which is the ability to control memory layout of objects.
pjmlp 2 days ago | root | parent |
Like any systems programming language, including those that predate C by a decade, it doesn't allow full control without compiler extensions; if you really want full control of the memory layout of objects, Assembly is the only way.
gizmo686 a day ago | root | parent |
In practice C lets you control memory layout just fine. You might need to use __attribute__((packed)), which is technically non-standard.
I've written hardware device drivers in pure C where you need to peek and poke at specific bits on the memory bus. I defined a struct that matched the exact memory layout that the hardware specifies, then cast an integer to a pointer to that struct type. At that point I could interact with the hardware by directly reading/writing fields of the struct (most of which were not even byte aligned).
It is not quite that simple, as you also have to deal with bypassing the cache, memory barriers, possibly virtual memory, and finding the errata that clarifies the originally published register address was completely wrong. But I don't think any of that is what people mean when they say "memory layout".
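A sketch of that pattern, with an entirely made-up register block and base address:

    #include <stdint.h>

    /* Hypothetical peripheral registers; the layout must match the
       hardware documentation exactly, hence the packed attribute. */
    struct uart_regs {
        volatile uint32_t status;     /* offset 0x0; bit 0 = TX ready (invented) */
        volatile uint32_t control;    /* offset 0x4 */
        volatile uint32_t data;       /* offset 0x8 */
    } __attribute__((packed));

    #define UART0_BASE 0x40001000u    /* made-up physical/bus address */

    static void uart_send(uint8_t byte) {
        /* Integer-to-pointer cast: implementation-defined behavior. */
        struct uart_regs *uart = (struct uart_regs *)(uintptr_t)UART0_BASE;
        while ((uart->status & 0x1u) == 0)
            ;                         /* spin until the TX-ready bit is set */
        uart->data = byte;
    }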
pjmlp a day ago | root | parent |
Now split the struct across registers in C.
You are aware that some of those casting tricks are UB, right?
gizmo686 a day ago | root | parent |
Casting integers to pointers in C is implementation-defined, not UB. In practice compilers define these casts as the natural thing for the architecture you are compiling to. Since mainstream CPUs don't do anything fancy with pointer tagging, the implementation-defined behavior does exactly what you expect it to do (unless you forget that you have paging enabled and cannot simply point to a hardware memory address).
If you want to control register layout, then C is not going to help you, but that is not typically what is meant by "memory layout".
And if you want to control cache usage ... Some architectures do expose some black magic which you would need to go to assembly to access. But for the most part controlling cache involves understanding how the cache works, then controlling the memory layout and accesses to work well with the cache.
writebetterc a day ago | prev | next |
This post caused me to create an account. This C code is not good. Writing C is absolutely harder than Python, but you're making it so much harder than it has to be. Your program is buggy as heck, has very finicky cleanup code, and so on.
Here's a much easier way to write the program:
1. Dump whole file into buffer as one string
2. Find newlines in buffer, replace with NULs. This also lets you find each line and save a pointer to it in another buffer
3. qsort the buffer of line pointers you found
4. Print everything
5. Free both buffers
Or, as a C program: https://godbolt.org/z/38nq1MorM
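A condensed, self-contained sketch of those steps (it reads stdin rather than a named file and ignores a trailing line with no newline; it is not the code behind the link):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int cmp(const void *a, const void *b) {
        return strcmp(*(char *const *)a, *(char *const *)b);
    }

    int main(void) {
        /* 1. Dump the whole input into one buffer. */
        size_t cap = 1 << 16, len = 0, n;
        char *buf = malloc(cap);
        if (!buf) abort();
        while ((n = fread(buf + len, 1, cap - len, stdin)) > 0) {
            len += n;
            if (len == cap) {
                char *tmp = realloc(buf, cap *= 2);
                if (!tmp) abort();
                buf = tmp;
            }
        }

        /* 2. Replace newlines with NULs and record where each line starts. */
        size_t nlines = 0, lines_cap = 1024;
        char **lines = malloc(lines_cap * sizeof(*lines));
        if (!lines) abort();
        for (size_t i = 0, start = 0; i < len; i++) {
            if (buf[i] == '\n') {
                buf[i] = '\0';
                if (nlines == lines_cap) {
                    char **tmp = realloc(lines, (lines_cap *= 2) * sizeof(*tmp));
                    if (!tmp) abort();
                    lines = tmp;
                }
                lines[nlines++] = buf + start;
                start = i + 1;
            }
        }

        /* 3. Sort the line pointers and print them. */
        qsort(lines, nlines, sizeof(*lines), cmp);
        for (size_t i = 0; i < nlines; i++)
            puts(lines[i]);

        /* 4. Free both buffers. */
        free(lines);
        free(buf);
        return 0;
    }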
commandlinefan 13 hours ago | root | parent |
> Dump whole file into buffer as one string
... unless the file is too big to fit into memory?
9999_points 2 days ago | prev | next |
Memory arenas should be taught to all programmers and become the default method of memory management.
_bohm 2 days ago | root | parent | next |
They're a great fit in many situations but certainly not all. Why not teach programmers a variety of allocation strategies and how to recognize when each might be a good fit?
caspper69 2 days ago | root | parent |
I initially read your username as boehm, and I was like wow, ok, this is a guy who knows his memory. :)
What situations would an arena allocator prove problematic or non-optimal, aside from the many allocations/deallocations scenario?
This is an area I'm very interested in, so any info would be appreciated.
_bohm 2 days ago | root | parent |
In general, everything allocated within an arena has its lifetime tied to that arena. In lots of situations this is a fine or even desirable property (e.g., a single request context in a server application), but can be a tough restriction to work with in situations where you need fine-grained deallocations and possibly want to reuse freed space. The lifetime property can also be a pain to work with in multithreaded scenarios, where you might have multiple threads needing to access data stored in a single arena. Another situation that comes to mind is large long-lived allocations where you might want to have some manual defragmentation in place for performance reasons.
caspper69 2 days ago | root | parent | prev |
I agree with you 100%. I think arenas are a much lighter burden for the programmer to reason about than lifetimes & access patterns.
But arenas can have one big drawback, and that is if you do a lot of allocations and deallocations, especially in long-running routines, you can essentially leak memory, because arenas are not usually freed until they are going out of scope. This can vary depending on the language and the implementation, though.
My thought to counteract that though is you could offer a ref-counted arena just for this scenario, but I'm not sure what exactly that would look like (automatic once refs hit 0? offer a purge() function like a GC?). I haven't wrapped my head around the ergonomics yet.
eddieh 20 hours ago | prev | next |
Too bad the first program in the article leaks its file descriptor.
Memory is but one resource you need to manage. File descriptors are the first oft overlooked resource in a long list of neglected finite resources.
erlkonig a day ago | prev | next |
Using abort() every time malloc and kin fail isn't really satisfying anything except the idea that the program should crash before showing incorrect results.
While the document itself is pretty good otherwise, this philosophical failing is a problem. It should give examples of COPING with memory exhaustion, instead of just imploding every time. It should also mention using "ulimit -Sd 6000" or something to lower the limit to force the problems to happen (that one happens to work well with vi).
Memory management is mature when programs that should stay running - notably user programs, system daemons, things where simply restarting will lose precious user data or other important internal data - HANDLE exhaustion, clean up any partially allocated objects, then either inform the user or keep writing data out to files (or something) and freeing memory until allocation starts working again. E.g. Vi informs the user without crashing, like it should.
This general philosophy is one that I've seen degrade enormously over recent years, and a trend we should actively fight against. And this trend has been greatly exacerbated by memory overcommit.
returningfory2 21 hours ago | root | parent |
It's a beginners article about memory management. I think it's weird that so many comments here are judging the code snippets as if they're commits to production systems. When writing articles like these there are pedagogical decisions to be made, such as simplifying the examples to make them easier to understand.
the_arun a day ago | prev | next |
> If we just concatenate the values in memory, how do we know where one line ends and the next begins? For instance, maybe the first two names are "jim" and "bob" or maybe it's one person named "jimbob", or even two people named "jimbo" and "b".
Don't we have a newline character? I thought we can read newline as `0xA` in Rust?
_bohm 2 days ago | prev | next |
This is a fantastic post. I really feel like these concepts should be introduced to programmers much earlier on in their education and this article does a great job of presenting the info in an approachable manner.
1970-01-01 2 days ago | prev | next |
This was a great (re)introduction to the fundamentals. Worthy of a bookmark.
numeromancer 2 days ago | prev | next |
Just no.
address = X
length = *X
address = address + 1
while length > 0 {
    address = address + 1
    print *address
}
ekr____ 2 days ago | root | parent | next |
Author here. You're quite right that this isn't the thing you would normally do. I'm just trying to help people work through the logic of the system with as few dependencies as possible, hence this (admittedly yucky) piece of pseudocode which isn't really C or Rust or Python or anything...
helothereycomb a day ago | root | parent |
At least update "length" in the loop, since it would go into an infinite loop the way it is now in any of those languages.
jll29 2 days ago | prev | next |
Great post for intermediate programmers who started programming in Python and who should now learn what's under the hood to get to the next level of their education. Sometimes (perhaps most of the time) we should ignore the nitty-gritty details, but the moment comes when you need to know the "how": because you need more performance, need to sort out an issue, or need to do something that requires low-level action.
There are few sources like this post targeting that intermediate group of people: you get lots of beginner YouTube clips and Web tutorials, and on HN you get discussions about borrow checking in Rust versus garbage collection in Go, how to generate the best code for it, and who has the best Rope implementation; but there is little to educate yourself from the beginner level up to the level where you can begin to grasp what that second group is talking about. So thanks for this educational piece that fills a gap.
imbnwa a day ago | root | parent |
Which is why it sucks that the top comments are pedantry over what is proper C code, or about how to optimize the article's code, all missing the point that we're learning concepts that can be corrected later.
commandlinefan 13 hours ago | root | parent |
> pedantry over what is proper C code
As soon as I clicked on the link and saw there was C code included, I knew how the comment section was going to go...
juanbafora 2 days ago | prev | next |
Thanks for sharing. These are core concepts for understanding coding better.
sylware 2 days ago | prev | next |
Avoid the C standard library allocator as much as you can; go directly to the mmap system call with your own allocator if you know you won't be running on a CPU without an MMU.
If you write a library, let the user code install its own allocator.
commandlinefan 13 hours ago | root | parent | next |
> go directly to mmap system call
TFA said that, too... IIRC (and based on a quick googling), mmap is for memory-mapping files into the virtual address space. I thought sbrk() was used for low-level adjustment of available memory and malloc was responsible for managing an allocation handed to it by the sbrk() call. Or has that fallen out of fashion since I last did low-level C programming?
the-smug-c-one 2 days ago | root | parent |
What modern OS doesn't have the equivalent of mmap? Just some #ifdefs. I didn't think I'd ever hear "use malloc, because it's portable".
sylware is pretty much right anyway. Try to avoid malloc, or write smaller allocators on top of malloc.
pjmlp 2 days ago | root | parent |
Why bother with some #ifdefs when the ISO C standard library already does the job?
the-smug-c-one a day ago | root | parent |
Because you're probably writing a much larger program so some ifdefs aren't a big deal :-).
keyle a day ago | root | parent |
This is a silly argument because at the end of the day, once you make your code portable, you've now duplicated 99% of malloc and free, and you've left a mess for the team or next guy to maintain on top of everything else. You've successfully lowered the abstraction floor which is already pretty low in C.
jeffbee 2 days ago | root | parent | prev |
"malloc" is a weakly-bound symbol that can be overridden, on every system I've used. I don't know if some standard defines it to be weak. Anyway the point is that malloc is not necessarily a call to the C standard library function. It can be anything.
kevin_thibedeau a day ago | root | parent | next |
The linker doesn't try to resolve symbols it's already seen while static linking. This doesn't require a weak linkage flag for overriding system library functions since libc is linked at the end by default when static linking or at runtime when dynamic.
sylware 2 days ago | root | parent | prev |
"weakly-bound symbol" implies your a using a complex runtime library/binary format (like ELF).
A portable and clean design for a library is to allow to override the internal allocator via the API (often part of the init function call).
Look at vulkan3D, which does many things right and does this very part right. On the other side, you have some parts of the ALSA lib API which still require using the C lib free (may be obsolete though).
jeffbee 2 days ago | root | parent |
No, it doesn't mean that, because malloc can be replaced at build-time as well. But I agree that interfaces should avoid doing their own allocations and should let the caller dictate it.
sylware 2 days ago | root | parent |
Look at vulkan3D which does it right(TM).
aslihana 2 days ago | prev |
The comics at the beginning hahaha :D
bluetomcat 2 days ago | next |
This isn't proper usage of realloc:
In case it cannot service the reallocation and returns NULL, it will overwrite "lines" with NULL, but the memory that "lines" referred to is still there and needs to be either freed or used.
The proper way to call it would be:
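(A sketch, assuming the article's `lines` and `num_lines` variables.)

    char **tmp = realloc(lines, (num_lines + 1) * sizeof(char *));
    if (tmp == NULL) {
        /* lines is still valid here: free it, or keep using the old block */
        free(lines);
        /* ...report the failure or abort(), as appropriate... */
    } else {
        lines = tmp;
    }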