Zsh kills Python process with plenty of available VM

On a MacBook Pro (16 GB RAM, 500 GB SSD, macOS Sequoia 15.7.1, M3 chip), I am running some python3 code that requires lots of RAM. Sure enough, once physical memory is exhausted, swapfiles of about 1 GB each start being created, which I can see in /System/Volumes/VM. This folder has about 470 GB of available space at the start of the process (I can see this through Get Info). However, once about 40 swapfiles have been created, for a total of about 40 GB of virtual memory occupied (and thus still plenty of available space in VM), zsh kills the python process responsible for the RAM usage (notably, it does not kill another python process using only about 100 MB of RAM). What's going on here? All the documentation I was able to consult says macOS is designed to use all available storage on the startup disk (which must be the one I am using, since I have only one disk and the aforementioned available space reflects this).


Then why does zsh kill the process with so much VM still available? Also, I changed the shell from zsh to bash; I am not sure whether this makes a difference (though it is still zsh that kills the process, not bash). One last note: I do not have administrator rights on this device, so I could not run dmesg to retrieve more precise information, but I doubt my employer put a cap on RAM usage on my profile, since this should not be possible on macOS, and even if it were, I suppose it would show up when I check the available space in VM with Get Info.


Thanks for any insight you can share on this issue. Is it a known bug or something? I could not find anything recent on it.

MacBook Pro 13″

Posted on Dec 6, 2025 6:47 AM

Question marked as Top-ranking reply

Posted on Dec 6, 2025 1:50 PM

gggg87 wrote:

On a MacBook Pro, 16GB of RAM, 500 GB SSD, OS Sequoia 15.7.1, M3 chip, I am running some python3 code that requires lots of RAM and sure enough, once physical memory is exhausted, swapfiles of about 1GB each start being created, which I can see in /System/Volumes/VM. This folder has about 470 GB of available space at the start of the process

Really? Are you sure about that? That computer would have to be practically empty. Note that when macOS tells you about "available" storage, it's talking about many different things, and only one of them is truly "free" storage.


I changed the shell from zsh to bash, not sure whether this makes a difference (though it is still zsh that kills the process, not bash).

It's not zsh that's killing the process. It's the kernel. Although this statement does have me concerned.


on macOS I run it in a tmux session from which I activate a conda environment

Perhaps you should have led with that. This alone would easily explain my concerns above. First of all, macOS doesn't come with Python, so there's that. Once the word "conda" enters the chat, all bets are off.


I have no way to even open /var/log/system.log due to lack of permissions (in the console I can only see log reports

Logging on macOS is a nightmare. It does require root to see the logs. You'll never find them looking at /var/log/system.log or any file. You can get a live stream using Console or use the "log" command line tool, after learning the predicate language, of course.


I was able to reproduce some version of this. I can run a Python script that just starts allocating memory. Once it hits 87 GB, it gets killed. When my Python script crashed, it reported the following in Console:


default	16:08:49.519563-0500	kernel	memorystatus: killing largest compressed process Python [39054] 86002 MB


My app was definitely using only compressed RAM. I wasn't getting any swap usage.
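For anyone who wants to reproduce this kind of kill, a minimal allocator along these lines does the trick. This is only a sketch (the chunk size and cap are arbitrary illustration values, not anything from the original test); raising the cap high enough will eventually get the process killed by memorystatus rather than stopping cleanly.

```python
# Memory-pressure probe: allocate fixed-size chunks, touching each page
# so the memory is actually committed (and compressible), until a
# self-imposed cap is hit or the allocation fails.
def allocate_until(limit_bytes, chunk=2**30):
    hoard = []  # keep references so nothing is freed
    total = 0
    while total + chunk <= limit_bytes:
        block = bytearray(chunk)
        block[::4096] = b"\x01" * len(block[::4096])  # touch every page
        hoard.append(block)
        total += chunk
        print(f"allocated {total / 2**30:.2f} GiB")
    return total

if __name__ == "__main__":
    # Small demo cap; raise limit_bytes (e.g. to 90 * 2**30) to provoke
    # the memorystatus kill described above.
    allocate_until(limit_bytes=256 * 2**20, chunk=64 * 2**20)
```

While it runs, watching Activity Monitor's "Memory Pressure" graph and compressed-memory column shows the compressor absorbing the allocations until the kill.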


But the executioner here is the iOS "memorystatus" architecture.


And no, that's not a typo. You're just using a really big iPhone. 😄


So where to go from here? You can research "memorystatus" if you want. I don't know if there's a solution.


Keep in mind that there's a fundamental problem here. You don't have that much RAM. Assuming there isn't some huge bug or memory leak in the script, then it's simply trying to use VM as a data store. That's really not a good idea. Even if it works, it would be really slow. Sure, that will work on Linux, because you can turn everything off and tweak it in any number of ways. That's not allowed for iPhones.


The most likely problem is simply a buggy Python script. You've already said that similar code runs for weeks with no problem. How is this code different?


Do you need to run this on a Mac? I've seen cases of needing to run one specific script that absolutely must have crazy amounts of RAM. An easy solution is an AWS EC2 instance with crazy amounts of RAM. Running it for an hour or two might cost you $12.


At the other extreme, I've seen people convinced that they were doing Really Important Work and their scripts really needed crazy amounts of RAM. Rather than test for $12, they bought 5 Linux servers with 1 TB RAM @ $20,000 each. As you can imagine, it was really awkward when I fixed the bug. 😄 Did I mention the bug was related to a conda-style environment? 😄 Just sayin'.

26 replies

Dec 6, 2025 9:22 AM in response to gggg87

If you think zsh is killing your Python program, then switch to bash, ksh, or tcsh, and see if they kill your program too.


Does “ulimit -a” indicate any limitations that might be affecting your program?


Be aware that all that virtual memory has an operating system cost: the OS has to maintain page tables in real memory. Once you create enough virtual memory, the OS may run out of real memory to hold the page tables.
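As a quick check from inside the process itself, the stdlib resource module exposes the same limits ulimit reports. A sketch (not every limit name is defined on every platform, hence the getattr guard):

```python
import resource

# Print soft/hard values for the limits most relevant to memory.
# RLIMIT_AS is the address-space (virtual memory) cap;
# RLIM_INFINITY means "unlimited".
def fmt(value):
    return "unlimited" if value == resource.RLIM_INFINITY else str(value)

for name in ("RLIMIT_AS", "RLIMIT_DATA", "RLIMIT_RSS", "RLIMIT_NPROC"):
    limit = getattr(resource, name, None)
    if limit is None:
        continue  # limit not defined on this platform
    soft, hard = resource.getrlimit(limit)
    print(f"{name}: soft={fmt(soft)} hard={fmt(hard)}")
```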

Dec 7, 2025 8:37 AM in response to gggg87

gggg87 wrote:

So that's the context.

I totally understand. I've been there myself, many times.


Regarding your comment on iOS, I was under the understanding that macOS is fundamentally different from iOS in this regard: iOS does not allow swap due to containerisation, everything must be on RAM and therefore when it is out of RAM it kills the processes starting from the most demanding.

First of all, iOS and macOS are fundamentally identical in one key concept. They are proprietary operating systems whose internal behaviour in this regard is officially undocumented.


In certain cases, through careful reading of the official (and current) documentation, you might be able to come to an educated understanding (or guess, if you will) regarding how the operating system works internally.


Note that anything posted outside of an "apple.com" domain is going to be unhelpful. Sometimes it's correct. Sometimes it's wrong. There's no way to tell unless you already know. Most of the time, you still have to do the work of reading the documentation and making your own analysis. Then you might be able to apply that reasoning to those internet theories and either prove or disprove them.


Taking all of this kind of guesstimation analysis into consideration, I can state that iOS and macOS are essentially the same operating system. What you regard as "macOS" is just one of many API "skins" that adapt iOS behaviour to a particular platform. The macOS skin is relatively large and complicated compared to the iPadOS or HomePod skins. But ultimately, it's just a compatibility layer on iOS.


And yes, modern iOS most definitely supports swap.


So either there is a memory leak (but how can there be one if the same code does not leak on Linux?

That's very easy. It's so easy, in fact, that this should be your first assumption in most cases of unconstrained memory usage. However, you did describe the same kind of unconstrained memory usage on Linux. It could still be a memory leak, just not a macOS-specific memory leak due to a poor effort at porting the code to macOS. I would still say this is the most likely explanation. But I understand that would be a more difficult issue to fix and justify. You would have to fix it and test it on all supported Linux platforms first. And to get the fix incorporated, you would have to convince people that it wasn't a macOS bug, which would be virtually impossible.


I am not familiar with this concept, apologies if what I say makes no sense here) or the actual space available on this almost empty Mac is much smaller than what is reported.

Unfortunately, in this area, it's the macOS operating system that makes no sense. Unless you're aware of the problem, and specifically checked the two (2) places where the operating system will tell you the actual amount of free storage, then the most likely assumption is that you're simply out of free storage. This problem is so common that it should be assumed to be the cause of any apparent storage issue unless proven otherwise via a screenshot from one of those two places. Any other claims regarding free or available storage should be automatically discounted.


Checking with du -sh returns the same values as getinfo on the VM volume.

I don't know. du -sh sure doesn't look meaningful from here. I know getinfo on the VM volume will be wrong.


You can use Disk Utility if you know where to look. As it is a GUI app, it's rather difficult to explain. If you're familiar with Linux, it's much easier to use "df -h". It's a little bit awkward just because, here, and here only, the "Avail" column is what you want. That really means "free" storage. But anywhere else, "Available" doesn't mean free storage. This is what I meant about it making no sense.
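If you would rather query this from Python than parse df output, the stdlib shutil.disk_usage is backed by the same statvfs numbers, so its free field should correspond to df's Avail column rather than the Finder's "Available". A sketch:

```python
import shutil

# `free` here is the space actually available to the process (statvfs
# f_bavail), not the Finder's "Available", which also counts purgeable
# storage the system might reclaim later.
usage = shutil.disk_usage("/")
print(f"total: {usage.total / 2**30:.1f} GiB")
print(f"used:  {usage.used / 2**30:.1f} GiB")
print(f"free:  {usage.free / 2**30:.1f} GiB")
```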



Dec 6, 2025 11:00 AM in response to MrHoffman

I am not familiar with memory leaks, but the reason this is not a problem on Linux is that on Linux it is clear what the memory limit is (the preallocated swap + RAM): the process gets killed when that limit is reached, and I can monitor closely that the limit is slowly approached over time, with no sudden jumps in memory usage or anything of the sort. Everything behaves as expected.

On macOS instead, I can see in advance the theoretical preallocated size of the VM volume on the startup disk, but this is nowhere near what the process gets to use before it is killed. It is almost as if that size is only a theoretical limit, and in actuality there is a much smaller limit (around 40 GB of VM), perhaps due to other limitations in the system and how it handles memory. In this case, I would like to find out what the actual limit is, if there is one, or find ways to track the parameter responsible.

I asked the Gurobi team about this too and am still awaiting a reply, and I will also ask the IT team at work to see whether the issue is related to me not having root privileges, though I suspect it is not, since all the available space I can see from my account should be the space allocated to me as a user. That is why I am also asking over here. BobHarris mentions page table memory issues, for example, which may reduce the actual VM one can use compared to the theoretical limit, but I am not sure how to track that.
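One low-effort way to track that steady climb from inside the script itself is to log the process's peak resident set size periodically. A stdlib-only sketch (note the unit quirk: ru_maxrss is reported in bytes on macOS but in kilobytes on Linux):

```python
import resource
import sys

# Peak resident set size of this process, normalised to bytes.
# macOS reports ru_maxrss in bytes, Linux in kilobytes.
_SCALE = 1 if sys.platform == "darwin" else 1024

def peak_rss_bytes():
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * _SCALE

# Call this periodically (e.g. once per optimisation iteration) and log
# the value to see whether usage climbs steadily or jumps suddenly.
print(f"peak RSS: {peak_rss_bytes() / 2**20:.1f} MiB")
```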

Dec 7, 2025 8:19 AM in response to gggg87

I did a small test, by creating a program that kept allocating large arrays and never releasing them.

Its memory usage went as high as 130 GB. Eventually the process was killed.


It is worth pointing out that my mac has 24GB of RAM, and I had a lot of free space on the system drive.

Oh, and this was not using Python; it was a small C++ program. It is possible that Python itself may impose some memory limitations. I would like to do a similar test in Python, but I may not have enough free time until maybe Thursday.

Dec 6, 2025 7:21 AM in response to gggg87

It is almost impossible to answer your question without any idea of what your python code is doing. There could be other reasons why it is failing, other than available RAM.


I understand that you cannot share the code, but you can at least tell us the EXACT error message that you get.

Regarding zsh vs bash: if the error message shows zsh, then it is for sure zsh being run. This could be because you did not actually switch to bash, or you did, but the script launches zsh to run the python command.



Dec 6, 2025 9:04 AM in response to gggg87

One other relevant thing is the python environment - and by that I mean not just the python version, but also what packages are installed, and which versions of them.

Perhaps you can create a python virtual environment that matches the one on the linux machine, this might help in determining if it is really a difference of OS, or perhaps some python packages that are different.


You may be experiencing one case of dependency hell.
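To compare the two environments, dumping every installed package as name==version on each machine and diffing the results is often enough; the stdlib importlib.metadata can do it without relying on pip. A sketch:

```python
from importlib import metadata

# One "name==version" line per installed distribution, sorted so the
# output can be diffed directly against the dump from the other machine.
pins = sorted(
    f"{dist.metadata['Name']}=={dist.version}"
    for dist in metadata.distributions()
)
print("\n".join(pins))
```

Running this on both the Mac and the Linux machine and diffing the two outputs quickly shows whether the environments actually match.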

Dec 6, 2025 10:26 AM in response to gggg87

Years ago, I had a situation where an app created lots of sibling processes. No single process used too much memory, but after it started about 100 sibling processes, the system would crash. This took a few days to occur.


It was not too many processes; it was the virtual memory. Once I figured out what was happening, I could see excessive swapfile creation, and Activity Monitor's total memory usage was off the charts. But because each offending process was not using much memory or CPU, none of them was at the top of the usage list in Activity Monitor, so it was difficult to figure out initially.


I have seen similar virtual memory exhaustion in other operating systems going back to Tru64 UNIX and HP-UX. Not so much on Linux, mostly because what I do on Linux is file system development, not so much running apps. I live mostly in vim, which is not a memory issue.

Dec 7, 2025 4:16 PM in response to gggg87

gggg87 wrote:

Thanks for pointing at df -h, it does report a slightly different value than getinfo and du, but not by a significant amount: 343Gi available (at the moment, since the code has been restarted, only 2Gi used) and 3.6G ifree.

Just remember that when you saw them have the same value, it means only that you were lucky that one time. Get Info in the Finder is not reliable. That value for "Available" is not the same as free.


Also note that, in "df" output, it is the "Avail" column that represents "free" storage. The "iused" and "ifree" columns are something completely different. The Finder's "Available" value is yet another thing that is completely different.


Essentially, the behaviour of the code is the right one

I'm going to have to go ahead and disagree with you on that one. What's the point of a tool that always crashes anyway? Is it reporting something usable before then?


The tool I mentioned was one of the OSM (Open Street Map) processing tools. There are many different versions. The traditional ones use a database and take days or weeks to run on the full planet, depending on your RAM. But once you get it into that database, you can keep it running with real time updates, assuming you can afford a server that powerful.


But there is one tool that will generate tiles from a planet file in only an hour. But it can only do this by doing all processing in RAM (real RAM, not swap). So this is a no-brainer. Spend days or weeks testing, failing, testing. Or test with a smaller dataset in 5 minutes, then do the whole world for $10 and be done with it. And if you want to update next month or next year, spend another $10 - big deal.


But some tool that runs and consumes all swap space is simply wrong. I'm sorry, but this is a pretty common result here in the forums. Someone wants to do something, similar to the way they did it elsewhere, but it was always wrong. So when the Mac does things properly, they get frustrated because it won't let them do it wrong. Sorry. That's just the way it is. The operating system was designed for running on a billion different cell phones. Allowing any app on those phones to consume all RAM and crash the device is just a non-starter. If that's your only option, then your only option is Linux.

Dec 7, 2025 4:58 PM in response to gggg87

gggg87 wrote:

…Thus, it is not abnormal that the memory usage keeps raising, it is the expected behaviour as long as this happens steadily (which is what is going on for what I was able to monitor over several days)…


App virtual memory usage that increases without bound is a broken design.


While the app might well produce useful information before crashing, the design is fundamentally broken.


It is quite reminiscent of a 16-bit app that runs off the end of PDP-11 or Apple II address space. Such an app can be hard to debug, and hard to support, too. And hard to predict, as you are tussling with here. Falling off the end of the available address space is bad.


How the app might determine its limits and how much virtual memory is available would be the usual next discussion.


If the app meets your needs on Linux or *BSD or whatever (before crashing), by all means use that. Presumably, that means there is more swap space available, or more RAM, or both, too.

Dec 6, 2025 10:05 AM in response to Luis Sequeira1

I do not think that the package dependencies are the issue, since the other python process has been running for weeks, but since it uses far less memory, it has never been killed, even though it is in the same environment, using the very same packages, only running a slightly different optimisation routine, which is not memory intensive.


In conclusion, the discriminant seems to really just be memory usage, and I was wondering if this is a known macOS behaviour/bug, due to some old posts pointing at something similar, but on different versions of the OS.

Dec 6, 2025 12:53 PM in response to gggg87

gggg87 wrote:

Thank you for your reply, the ulimit -a parameters have been reported below in another reply, they seem fine to me. Is there a way to check in real time if the page tables are getting close to some limits? I did not know of that.

I am not aware of any way to see the kernel's page tables.


Generally, when hitting virtual memory limits, you get a dialog box saying "Your system has run out of application memory". However, this is generally triggered because a GUI app could not allocate any more memory from the kernel, and the GUI framework throws up the dialog box.


However, yours is a command-line-invoked program, so you are not using any of the GUI frameworks. I think in this situation either the process kills itself, or the kernel kills it. Mostly I'm guessing at this point, so do not waste too much time chasing these concepts. Also, I know nothing about Python and its exception handling when a system call fails (these days I'm mostly a C programmer who dabbles in shell scripts, Perl, and awk scripts).
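On the Python side, the distinction matters: a failed allocation inside the interpreter raises MemoryError, which the script can catch, whereas a memorystatus kill is a SIGKILL that no process can catch or handle. A small sketch demonstrating both:

```python
import signal
import sys

# A failed allocation raises MemoryError, which is catchable:
try:
    bytearray(sys.maxsize)  # absurdly large request; fails immediately
except (MemoryError, OverflowError) as exc:
    print(f"caught in-process: {type(exc).__name__}")

# A memorystatus/jetsam kill is SIGKILL; even trying to install a
# handler for it fails, because SIGKILL cannot be caught, blocked,
# or ignored:
try:
    signal.signal(signal.SIGKILL, lambda signum, frame: None)
except (OSError, ValueError, RuntimeError) as exc:
    print(f"cannot trap SIGKILL: {exc}")
```

So a script killed by memorystatus exits with no Python traceback at all, while a script that merely exhausted its own allocator dies with a MemoryError traceback, which is one way to tell the two cases apart.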


There are the vm_stat, vmmap, and heap commands. I have NOT used them, so I cannot tell you whether they will give you anything useful.


Most of my virtual memory knowledge comes from writing a virtual memory hardware diagnostic in assembly language back in the '70s for a 16-bit mini-computer (it even had front panel switches 😁). Because of that diagnostic program I got up close and personal with virtual memory, and I have tried to pay attention to various virtual memory implementations over the years. In reality, the basic concepts have not changed all that much, even the key hardware component, the Translation Look-aside Buffer (TLB). The changes are in where the hardware is implemented (from a separate hardware card to part of the CPU chip; heck, CPU chips were just barely around in the '70s, when I think there were just the Intel 4004 and the 8008), and instead of the hardware doing more of the work, page table management has moved into software.

Dec 6, 2025 1:12 PM in response to BobHarris

Thank you for sharing. I do not think that is the issue here though, since besides the default background processes of the OS, I only have the terminal, the 2 python processes and tmux running. In Activity Monitor I do not see any child processes; all the RAM used is essentially due to this one python process, which takes about 50-60 GB of memory at the time it is killed (this translates to only 30-40 GB of VM due to compression, I believe), and I took special care to limit the number of threads usable by gurobipy to 4, to limit memory usage. Considering that ulimit displays 2666 as the max number of processes, I do not think I am anywhere near that number, even accounting for all the background bloat of macOS, unless gurobipy does things in the background that macOS does not like and that do not show in Activity Monitor.
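A defensive option worth testing (a sketch, not anything Gurobi-specific): have the script impose its own memory cap via setrlimit, so allocations fail with a catchable MemoryError before reaching whatever threshold triggers the kill. Be warned that enforcement varies by platform: Linux honours RLIMIT_AS, while macOS enforcement of these limits has historically been inconsistent, so treat this as best-effort.

```python
import resource

def cap_memory(max_bytes):
    """Best-effort self-imposed memory cap. Failures are swallowed
    because some platforms reject or silently ignore these limits."""
    for name in ("RLIMIT_AS", "RLIMIT_DATA"):
        limit = getattr(resource, name, None)
        if limit is None:
            continue  # limit not defined on this platform
        _, hard = resource.getrlimit(limit)
        try:
            resource.setrlimit(limit, (max_bytes, hard))
        except (ValueError, OSError):
            pass  # cap exceeds the hard limit, or not permitted

cap_memory(40 * 2**30)  # e.g. refuse to grow past ~40 GiB
```

If the cap does take effect, the process dies with a Python MemoryError traceback at the chosen limit instead of vanishing silently, which at least makes the failure observable.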

Dec 7, 2025 5:47 AM in response to MrHoffman

Thank you for sharing the commands. What would be really helpful in this case would be a command that, instead of VM usage, reports the residual VM available to processes. Given what you say, it seems that either Get Info run on the VM volume is just spitting out a theoretical value that is nowhere near the actual one, or there are other resources at play that I cannot look into (for that I'll wait for a reply from Gurobi on how this routine is implemented on macOS). So, is there a way to get the actual value of residual remaining (i.e. available) space in the VM volume from the terminal? If I run a standard du -sh I get the same values as those from Get Info; is the output of this command reliable in terms of describing the actual space remaining?
