Zsh kills Python process with plenty of available VM

On a MacBook Pro (M3 chip, 16 GB of RAM, 500 GB SSD, macOS Sequoia 15.7.1), I am running some python3 code that requires a lot of RAM, and sure enough, once physical memory is exhausted, swapfiles of about 1 GB each start being created, which I can see in /System/Volumes/VM. This folder has about 470 GB of available space at the start of the process (I can see this through Get Info). However, once about 40 or so swapfiles have been created, for a total of about 40 GB of virtual memory occupied (and thus still plenty of available space in VM), zsh kills the python process responsible for the RAM usage (notably, it does not kill another python process using only about 100 MB of RAM). What's going on here? All the documentation I was able to consult says macOS is designed to use all available storage on the startup disk (which must be the disk I am using, since I have only one disk and the available space mentioned above reflects this).


So why does zsh kill the process with so much VM still available? Also, I changed the shell from zsh to bash; I am not sure whether this makes a difference (though it is still zsh that kills the process, not bash). One last note: I do not have administrator rights on this device, so I could not run dmesg to retrieve more precise information, but I doubt my employer put a cap on RAM usage on my profile, since this should not be possible on macOS, and even if it were, I suppose it would show up in the available space I see when I Get Info the VM volume.


Thanks for any insight you can share on this issue. Is it a known bug or something? I could not find anything recent on it.

MacBook Pro 13″

Posted on Dec 6, 2025 6:47 AM

Question marked as Top-ranking reply

Posted on Dec 6, 2025 1:50 PM

gggg87 wrote:

On a MacBook Pro, 16GB of RAM, 500 GB SSD, OS Sequoia 15.7.1, M3 chip, I am running some python3 code that requires lots of RAM and sure enough, once physical memory is exhausted, swapfiles of about 1GB each start being created, which I can see in /System/Volumes/VM. This folder has about 470 GB of available space at the start of the process

Really? Are you sure about that? That computer would have to be practically empty. Note that when macOS tells you about "available" storage, it's talking about many different things, and only one of them is truly "free" storage.


I changed the shell from zsh to bash, not sure whether this makes a difference (though it is still zsh that kills the process, not bash).

It's not zsh that's killing the process. It's the kernel. Although this statement does have me concerned.


on macOS I run it in a tmux session from which I activate a conda environment

Perhaps you should have led with that. This alone would easily explain my concerns above. First of all, macOS doesn't come with Python, so there's that. Once the word "conda" enters the chat, all bets are off.


I have no way to even open /var/log/system.log due to lack of permissions (in the console I can only see log reports

Logging on macOS is a nightmare. It does require root to see the logs. You'll never find them looking at /var/log/system.log or any file. You can get a live stream using Console or use the "log" command line tool, after learning the predicate language, of course.
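For example, here is a sketch of pulling the relevant entries from Python after the fact (assuming your account is allowed to read them at all; the predicate is just a guess at what to search for):

import subprocess

# Search the last two hours of the unified log for memorystatus kill messages.
# A non-admin account may or may not be allowed to see these kernel entries.
cmd = ["log", "show", "--last", "2h", "--style", "syslog",
       "--predicate", 'eventMessage CONTAINS "memorystatus"']
print(subprocess.run(cmd, capture_output=True, text=True).stdout)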


I was able to reproduce some version of this. I can run a Python script that just starts allocating memory. Once it hits 87 GB, it gets killed. When my Python script was killed, Console reported the following:


default	16:08:49.519563-0500	kernel	memorystatus: killing largest compressed process Python [39054] 86002 MB


My app was definitely using only compressed RAM. I wasn't getting any swap usage.
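For reference, here is a minimal sketch of the kind of throwaway script I mean (a reconstruction, not my exact code); it just keeps allocating zero-filled 1 GB buffers until memorystatus kills it:

import time

blocks = []
gb = 0
while True:
    # 1 GB of zero-filled memory per iteration; zero pages compress very well,
    # which is why this showed up as compressed memory rather than swap.
    blocks.append(bytearray(1024 ** 3))
    gb += 1
    print(f"allocated {gb} GB", flush=True)
    time.sleep(0.5)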


But the executioner here is the iOS "memorystatus" architecture.


And no, that's not a typo. You're just using a really big iPhone. 😄


So where to go from here? You can research "memorystatus" if you want. I don't know if there's a solution.


Keep in mind that there's a fundamental problem here. You don't have that much RAM. Assuming there isn't some huge bug or memory leak in the script, then it's simply trying to use VM as a data store. That's really not a good idea. Even if it works, it would be really slow. Sure, that will work on Linux, because you can turn everything off and tweak it in any number of ways. That's not allowed on iPhones.


The most likely problem is simply a buggy Python script. You've already said that similar code runs for weeks with no problem. How is this code different?


Do you need to run this on a Mac? I've seen cases of needing to run one specific script that absolutely must have crazy amounts of RAM. An easy solution is an AWS EC2 instance with crazy amounts of RAM. Running it for an hour or two might cost you $12.


At the other extreme, I've seen people convinced that they were doing Really Important Work and their scripts really needed crazy amounts of RAM. Rather than test for $12, they bought 5 Linux servers with 1 TB RAM @ $20,000 each. As you can imagine, it was really awkward when I fixed the bug. 😄 Did I mention the bug was related to a conda-style environment? 😄 Just sayin'.

26 replies

Dec 7, 2025 6:09 AM in response to etresoft

Thank you, yes, I know that this could easily be resolved in the cloud, but the maximum I can access is 64 GB of RAM at the moment. I have already requested 120 GB, but things are proceeding slowly for some reason, so I figured I could exploit macOS behaviour on this almost empty laptop I have sitting there. It is empty because I rarely use it. Even though swapping is going to be much slower than RAM, it seems it is not slower than the IT guys, who have a huge backlog of things to do before allocating 120 GB of RAM to my instance on their server; I have been waiting for a month already. So that's the context.


Regarding your comment on iOS, I was under the impression that macOS is fundamentally different from iOS in this regard: iOS does not allow swap due to containerisation, everything must stay in RAM, and therefore when it runs out of RAM it kills processes starting from the most demanding. On macOS, before it gets that close to running out of RAM, the system starts swapping for as long as there is available space on the startup disk. So either there is a memory leak (but how can there be one if the same code does not leak on Linux? I am not familiar with this concept, apologies if what I say makes no sense here), or the actual space available on this almost empty Mac is much smaller than what is reported. However, no matter how small, it is hard to believe that it has only 40 GB remaining; I really only put a handful of scripts on it. Checking with du -sh returns the same values as Get Info on the VM volume, so the available space in VM should really be all that is reported. I will try to install a separate Python version, bypassing the conda environment, and see whether this also happens in a regular Python environment, or whether it is just conda that is messing with VM management. If you are positive that macOS does not have additional limits on VM, this should be worth a try and would at least rule something out. Thanks again.

Dec 7, 2025 10:14 AM in response to etresoft

Thanks for pointing me to df -h. It does report a slightly different value than Get Info and du, but not by a significant amount: 343Gi available (and, at the moment, since the code has been restarted, only 2Gi used) and 3.6G ifree.

Regarding the possibility of a memory leak, I doubt it, because Gurobi does free memory when it is no longer needed. I can observe that the occupied storage sometimes decreases by 1Gi (to be fair, I checked this on Linux, as on the Mac laptop it is less easy to check because things go much faster on that machine), usually when events in the routine make certain branches of the search tree no longer needed. The reason for the memory usage is that branch and bound algorithms proceed by branching, and this creates far more mini-problems to solve than are rendered useless by comparison with the new solutions found. Thus it is not abnormal that the memory usage keeps rising; it is the expected behaviour as long as it happens steadily (which is what is going on, from what I was able to monitor over several days). Essentially, the behaviour of the code is the right one; the problem is that it gets killed way too early, at about 40 GB of swap. I read somewhere that the read/write time getting longer with a lot of swap could be the reason for the routine getting killed, but I have no idea whether this is true or not. Do you happen to have experience with that claim?

Dec 7, 2025 10:27 AM in response to MrHoffman

Thanks for sharing the commands. There is something weird about the value returned by sysctl vm.swapusage: it seems to report only the swap capacity of the swapfiles that exist at the moment, or something along those lines, not the full capacity of the VM volume. For example, it is returning only 2 GB of total swap and only about 1.6 GB of free swap right now. But I know for a fact that before the code got killed I reached about 40 GB of swap, so this command must be talking about some other form of VM... or maybe it refers only to the swapfiles currently created, not accounting for the future ones that could be created if needed.
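For instance, here is a quick sketch I could use to watch this value over time (just polling the same command from Python); if that reading is right, the total should grow as macOS adds swapfiles:

import subprocess, time

# Poll the swap-usage sysctl once a minute and print a timestamped line.
while True:
    out = subprocess.run(["sysctl", "vm.swapusage"],
                         capture_output=True, text=True).stdout.strip()
    print(time.strftime("%H:%M:%S"), out, flush=True)
    time.sleep(60)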

Dec 6, 2025 8:49 AM in response to Luis Sequeira1

Thank you for your reply. The script is not the problem: I also run it under Linux on different machines (Ubuntu and Linux Mint) and it runs correctly until all system memory (physical + virtual) is exhausted, at which point it is killed with an out-of-memory message. I have no problem sharing details, I just think they are not relevant since it works everywhere except on macOS. It is a gurobipy branch and bound optimisation routine, so it is pretty much all automatic; I have no power over what the routine does, nor do I know the details, since it is proprietary software. It just keeps printing a log on screen in real time as it progresses, reporting various numbers every 5 seconds or so, but I can guarantee that it runs as expected, since it does so on Linux. The only difference is that while on Linux the code is run in a tmux session from which I activate a Python environment, on macOS I run it in a tmux session from which I activate a conda environment, which should really be just the same. Given the above, the problem should really just be how macOS handles processes that require lots of virtual memory, since this is the only difference with respect to what's going on. Further, the fact that only the process with the highest memory usage is killed, while the other python process keeps running, leads me to believe that if I were able to run dmesg, I would see an OOM SIGKILL.


You are actually right about the shell: although I set it to bash in the terminal's settings, when I started the tmux session from which I am running the python code, it somehow reverted back to zsh; I just checked with echo $SHELL. Regardless, this removes that variable, and the shell is not the problem.


Regarding error messages, as I explained, when I run dmesg the system says that I do not have permissions to do that. All I can see is

zsh: killed

but I have no way to even open /var/log/system.log due to lack of permissions (in the console I can only see log reports, whereas system.log is empty and says 'unable to read the file'). Is there another way, not requiring root permissions, to determine whether it was an OOM kill signal?


Regarding other possible limits on resources I am able to assess, running 'limit' displays nothing wrong: almost everything is set to unlimited. The only limits are stack size (7KB, pretty much like on the Linux Mint machine, which has 8, so this should not be the issue), core file size (0, same as in Linux Mint), processes (2666, which is less than in Linux Mint, which has roughly 47000) and file descriptors (2560; I do not have this setting in Linux Mint).
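For completeness, here is a quick standard-library sketch to print the same limits from inside the Python interpreter itself, in case the shell's limits differ from what the process actually sees:

import resource

# Resource limits as seen by this Python process (RLIM_INFINITY prints as -1).
for name in ("RLIMIT_AS", "RLIMIT_DATA", "RLIMIT_STACK",
             "RLIMIT_RSS", "RLIMIT_NPROC", "RLIMIT_NOFILE"):
    soft, hard = resource.getrlimit(getattr(resource, name))
    print(f"{name:<14} soft={soft} hard={hard}")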

Dec 6, 2025 9:59 AM in response to gggg87

So, to summarize my understanding of this unspecified-error question (and here without intending to be sarcastic or derogatory): this proprietary third-party app runs out of virtual memory on Linux and that's okay, but it runs out of virtual memory on Mac and that's bad, because of the different ways the two systems end what was seemingly described as a virtual-memory-leaking runaway app?


Last I checked, recent macOS offers 18 exabytes of virtual memory, limited by how much backing storage is available (in the boot partition) for that usage. (Details)
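For context, that 18-exabyte figure is essentially the full 64-bit address space; a quick check:

# 2**64 bytes expressed in exabytes (1 EB = 10**18 bytes)
print(2**64 / 10**18)   # ≈ 18.4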


As another source of assistance with this app, have you checked with the gurobipy forum?

Dec 6, 2025 1:14 PM in response to gggg87

There is no virtual memory system in existence that can exceed the aggregate capacity of available memory and available backing storage. How that virtual memory addressing limit is reached, and what resource quotas might be enforced, and where that backing storage might be allocated, and whether the virtual memory system supports dynamically increasing backing storage, and details such as memory compression, varies.


Put more succinctly, you get as much virtual memory as there is (non-wired) main memory and backing storage (swap) available.


Mapping how Linux does virtual memory management onto another platform won’t work well. XNU descended from Mach, and mostly still follows the Mach memory management scheme and norms, not those of Linux. If you want more control over virtual memory allocations and backing storage than macOS (and quite possibly Linux) offers, maybe OpenVMS x86-64 is a better operating system choice.


Memory-related shell commands:

ps -o pid,rss,vsz,pmem,comm {pid}    # process information for {pid}
vmmap -w {pid} | head -10            # process-level virtual memory
vm_stat 1                            # system-wide statistics, sampled every second

# maybe also useful
sysctl vm.swapusage                  # swap usage; the Activity Monitor app easily shows this, too
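If you want numbers from inside the Python process itself, the standard library's getrusage works too; a quick sketch (note that macOS reports ru_maxrss in bytes while Linux uses kilobytes):

import resource, sys

def peak_rss_mb():
    # Peak resident set size of this process, in MB.
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # macOS reports ru_maxrss in bytes, Linux in kilobytes.
    return peak / 1024**2 if sys.platform == "darwin" else peak / 1024

print(f"peak RSS so far: {peak_rss_mb():.1f} MB")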



More reading: Memory and Virtual Memory


Fundamentally, trying to fix an app with no source code and with what might seem to be a virtual memory leak — or an unusual app design — is likely best left to the app developers.



Most of what I know of virtual memory comes from operating system development work on an early 32-bit virtual memory system, and then later with 64-bit operating system development work. (Far fewer front panel lights on these.) No app that tries to consume all resources is going to be particularly predictable though, as there are always competing activities on (most) modern systems. And a Python app running under the control of the Python interpreter only adds to the complexity of trying to predict the inevitable failure.

Dec 7, 2025 8:37 AM in response to gggg87

gggg87 wrote:

Thank you for sharing the commands. What would be really helpful in this case would be a command that, instead of VM usage, reports the residual VM available to processes. It seems that, given what you say, either the Get Info run on the VM volume is just spitting out a theoretical value that is nowhere near the actual one, or there are other resources at play that I cannot look into (for that I'll wait for a reply from Gurobi on how what this routine does is implemented on macOS). So, is there a way to get the actual value of the residual remaining (i.e. available) space in the VM volume from the terminal? If I run a standard du -sh I get the same values as those I get from Get Info; is the output of this command reliable in terms of describing the actual space remaining?


There isn’t a way to do that because there isn’t a way to do that. Yes, that reads like a tautology. But think about it. By the time the data needed to answer that question can be gathered, it will have changed.


If you were on a near-completely quiescent, single-user, non-multitasking system, with no other active apps either using storage directly or using storage indirectly for virtual memory paging, and with minimal or no system activity, then maybe, probably. But you are not.


So you have just spent system resources to get an answer that is somewhere between unreliable and wrong. The best you can get is a snapshot of what resources happen to be available right now.


Pragmatically, the app can allocate virtual storage and catch the allocation failure that way, or can allocate main storage and catch the error. And whatever the Python memory management implementation is itself doing here just adds to the “fun”.
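In Python terms, a sketch of that "allocate and catch the failure" approach might look like the following; note that it only covers an ordinary allocation failure, since a memorystatus kill arrives as an uncatchable signal:

chunks = []
try:
    while True:
        chunks.append(bytearray(100 * 1024 * 1024))   # 100 MB per iteration
except MemoryError:
    # Reached only if an allocation itself fails; a memorystatus SIGKILL
    # terminates the process without raising anything catchable.
    print(f"allocation failed after about {len(chunks) * 100} MB; shutting down cleanly")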


iOS virtual memory doesn’t page user-written memory: https://alwaysprocessing.blog/2022/02/20/size-matters

Dec 7, 2025 8:39 AM in response to gggg87

(continued due to 5000 character limit)


Here is the meaningful portion of my "df -H" output:


Filesystem        Size    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/disk3s3s1    995G     12G    197G     6%    451k  1.9G    0%   /
devfs             374k    374k      0B   100%    1.3k     0  100%   /dev
/dev/disk3s6      995G   3221M    197G     2%       3  1.9G    0%   /System/Volumes/VM
/dev/disk3s4      995G   8114M    197G     4%    1.3k  1.9G    0%   /System/Volumes/Preboot
/dev/disk3s2      995G     56M    197G     1%     110  1.9G    0%   /System/Volumes/Update
/dev/disk1s2      524M   6312k    504M     2%       1  4.9M    0%   /System/Volumes/xarts
/dev/disk1s1      524M   6042k    504M     2%      30  4.9M    0%   /System/Volumes/iSCPreboot
/dev/disk1s3      524M   2707k    504M     1%      64  4.9M    0%   /System/Volumes/Hardware
/dev/disk3s1      995G    773G    197G    80%    3.3M  1.9G    0%   /System/Volumes/Data
/dev/disk3s7      995G    143M    197G     1%     243  1.9G    0%   /Volumes/shared
map auto_home       0B      0B      0B   100%       0     0     -   /System/Volumes/Data/home


/dev doesn't count.

/System/Volumes/xarts, /System/Volumes/iSCPreboot, and /System/Volumes/Hardware are part of a separate, much smaller container, so they don't count against the main container's free space; they do use storage, but more at a hardware level.

All the other volumes share the same container, so the "Avail" column is the same for each of them. That's my free storage. I'm well above 100 GB, so I'm doing alright. However, this is not true:


I do not have 212 GB free.


And this can be tricky. The case of the "-H" parameter is important. If I do "df -h", then it uses traditional math where 1K is 1024 instead of "new math" where 1K is 1000. But these are computers. It's really 1024, but that's not pretty, is it?
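For scale, here is the same 995G figure from the df output above converted between the two conventions (just arithmetic):

# 995 GB as reported by df -H (powers of 1000), expressed in GiB (powers of 1024)
print(995e9 / 1024**3)   # ≈ 926.7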


I will try to install a separate python version bypassing the conda environment and see if in a regular python environment this happens also, or if it is just conda that is messing with VM management.

I doubt conda is at fault in this respect. I was just speculating that conda was probably responsible for keeping zsh in there when you thought you had changed it.


I was able to somewhat reproduce this problem using Xcode's Python. In my case, it only used compressed memory but still killed it eventually. I think that was probably because my Python script was fake. It was just allocating the RAM, not really using it. Virtual memory is a constant battle between the operating system and apps both lying to each other about the RAM that they need, want, and are actually given.
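If you want to see that difference for yourself, here is a quick sketch (not what I actually ran): zero-filled buffers compress to almost nothing, while random bytes can't be compressed and will force real swap once RAM runs out:

import os

zeros, noise = [], []
for _ in range(8):
    # Zero-filled pages: the memory compressor shrinks these to almost nothing.
    zeros.append(bytearray(512 * 1024 * 1024))
    # Random bytes: effectively incompressible, so they must stay resident or be swapped.
    noise.append(os.urandom(512 * 1024 * 1024))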


If you are positive that macOS does not have additional limits on VM, this should be worth a try and would at least rule something out, thanks again.

I'm not positive about anything. It's all guesswork. After all, you mentioned "Sequoia 15.7.1". I don't have that operating system available right now. I'm not even sure my Sequoia system is running that version. But I can tell you with absolute certainty that these fundamental behaviours can, and will, change even between these minor updates. I did my test with Python on Tahoe, so that alone might explain why I only got compressed RAM.

The only thing I am sure about is that a well-written program shouldn't consume memory without bounds. If you're attempting to run a poorly-written program, then you can't fault the operating system for detecting and killing it before it destabilizes the system.

