Optimal pagefile for 16GB RAM?
Title.
Omega 6 Jul @ 4:06pm 
Default, let Windows decide.
A&A 6 Jul @ 4:25pm 
Default or use as much as you need (you will always need it)
Last edited by A&A; 6 Jul @ 4:26pm
The commonly cited optimal page file size is between 1.5x and 4x your physical RAM.

Ideally, you just leave it at "default" unless you are using an SSD (Solid State Drive) and want a fixed size to reduce read/write access on it (to prolong its lifespan).

Understand that 1GB of RAM is 1024MB.

Therefore: 1024 x 16 x 1.5 = 24576MB
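
If you want that rule of thumb for other RAM sizes too, here is a tiny sketch of the same arithmetic (plain Python, nothing Windows-specific; the 1.5x and 4x multipliers are just the rule-of-thumb figures above):

def pagefile_range_mb(ram_gb):
    """Rule-of-thumb page file range (1.5x to 4x physical RAM), in MB."""
    ram_mb = ram_gb * 1024  # 1 GB = 1024 MB
    return int(ram_mb * 1.5), ram_mb * 4

for ram_gb in (8, 16, 32):
    low, high = pagefile_range_mb(ram_gb)
    print(f"{ram_gb} GB RAM -> {low} MB to {high} MB (1.5x to 4x)")
# 16 GB RAM -> 24576 MB to 65536 MB (1.5x to 4x)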

If you go under System > About > Advanced system settings:
Advanced (tab) > Performance Settings > Advanced (tab) > Virtual Memory (Change)

Select "Custom size" and set both Initial size (MB) and Maximum size (MB) to the same value. The dialog also shows the currently allocated space and a recommended allocation below, which you can take into consideration.

This figure can start to scale back down if you have a lot of RAM (32GB or more), so you can actually start to reduce the Windows page file. Just note: full crash dumps need a page file at least the size of your RAM, and certain old games expect one as well.
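
If you'd rather check what is currently configured without clicking through that dialog, something like this rough sketch should do it (assumes Windows with PowerShell; to my understanding Win32_PageFileSetting usually returns nothing while the page file is still automatically managed, so treat an empty result as "Windows is handling it"):

import subprocess

def run_ps(command):
    """Run a PowerShell command and return its text output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# True means "Automatically manage paging file size for all drives" is on.
print(run_ps("(Get-CimInstance Win32_ComputerSystem).AutomaticManagedPagefile"))

# Manual Initial/Maximum sizes (in MB), if any have been set.
print(run_ps("Get-CimInstance Win32_PageFileSetting | "
             "Select-Object Name, InitialSize, MaximumSize | Format-List"))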
Last edited by Azza ☠; 6 Jul @ 7:07pm
Originally posted by Biggus Nickus:
Title.

Leave Windows to decide it.
I suggest a RAM upgrade instead, to at least 32GB. It is not super expensive.
If space & money are super tight, then maybe a 16GB Optane drive to store the pagefile.
_I_ 6 Jul @ 8:43pm 
Set its min/max to half your RAM, or the most you will ever need.
Windows doesn't always manage its size well.
For 16 GB RAM, I recommend a 16 GB "fixed" pagefile. 16 GB is exactly 16,384 MB.

More would decrease speed and performance.

I would also recommend buying 2x 16 GB RAM in dual channel, from the same manufacturer and model. And in your case, older RAM sticks are not that expensive anymore.
Last edited by N3tRunn3r; 6 Jul @ 11:51pm
Originally posted by Omega:
Default, let Windows decide.
Another first post by Omega, another /thread situation?

To further add to this, the optimal one is almost always system managed, unless you're already having issues with it on system managed. The majority of people, even including many who claim they do have problems with it, don't.

Windows memory management is complicated. Unless you understand Windows memory management at a pretty deep level, and I almost guarantee you none of the people who reply here do (and this goes for myself), changing it is ill-advised. Most people who do understand the topic deeper than most tend to advise not changing this stuff unless you have to. Take that for what it's worth.

At best (assuming you're not already having issues that you can tell for certain are down to the default page file values), you gain nothing. At worst, you introduce unnecessary drawbacks. "But these drawbacks never happened to me" tends to be the reason you'll get from the people advising you to change this.

People can't help themselves from chasing placebo, or seeking out advice to set something to Y value for X amount of RAM. There's no such thing, because the needs of the page file are in accordance with your workload, not your RAM capacity. Since none of us here know the exact needs of your system at all times, none of us can just give you values that are sufficient. But your system itself does know what it needs. On system managed, it can adjust the page file to your needs in real time. It's adaptable and flexible. It will use little if it needs little, but is able to grow if it needs to.

If you enact a manual quota and it needs more, you get "uh oh, memory is low" panic messages from Windows instead. And that's the best case scenario. Better hope it doesn't crash or prevent you from saving unsaved session/scratch data in the process. So why would you want to take that stability and flexibility away? For what gain that justifies this? The gain that, despite decades and decades of time, the proponents of "you should change it" have failed to produce?

To make it all worse, because most people don't understand memory management, if you go seeking help when your commit charge is exhausting your commit limit, most people won't know what the problem is. The same proponents who told you to change it will instead start blaming something totally irrelevant.
Originally posted by Azza ☠:
Ideally, you just leave it as "default" unless you are using a SSD (Solid State Drive) and want a fixed size to reduce read/write access upon it (for prolonging lifespan).
For clarity, this is not how it works. You are not incurring any extra wear and tear when Windows merely adjusts the allowance of the page file limits. Windows is merely reserving the space when it raises (or lowers) the allowance. It's not like it's... writing/erasing that much filler data for no reason when it does this. You are therefore not saving on SSD wear and tear with a fixed page file size.
Omega 7 Jul @ 11:57am 
Originally posted by Illusion of Progress:
Originally posted by Omega:
Default, let Windows decide.
Another first post by Omega, another /thread situation?
A what?
Set a custom min/max of 8192 (8GB in other words), been using that for yearsssss now and never once had an issue.
Originally posted by ☥ - CJ -:
Set a custom min/max of 8192 (8GB in other words), been using that for yearsssss now and never once had an issue.

Been using that since I started the thread and everything seems smoother. We'll see, could be in my head.

Thanks for the replies!
Originally posted by Omega:
Originally posted by Illusion of Progress:
Another first post by Omega, another /thread situation?
A what?
There's a lot of times where you'll give the first reply to a thread, and it will basically be all that's needed because it's a summary of what is correct. I was saying this is one of those times.
Originally posted by Biggus Nickus:
Been using that since I started the thread and everything seems smoother. We'll see, could be in my head.
It is placebo, because setting a quota on your commit charge can't make your system smoother, but as long as it feels faster to you, you do you.

Here's a summary of what you just did and what it means.

You enacted a commit limit of 24 GB. Full stop. Before, your system would adjust your limits and let it grow if need be. Now it can't.

Now as long as your commit charge doesn't approach 24 GB (16 GB RAM plus 8 GB page file overhead), you'll be fine and it will perform about as it already was before (no better, and maybe slightly worse, but imperceptibly so, so let's say it's the same).

If it approaches 24 GB, however, Windows will start slowing/throwing messages of being low on memory/etc. Worst case scenario, you lose unsaved data (due to being unable to save since the program can't continue, or because it has crashed).

Keep an eye on your commit charge now that you did this. Task Manager shows this as the first value in the "Committed" section (the second number is the commit limit, which you just locked to 24 GB, so it will be stuck there now; and if Windows approaches this, instead of being able to raise it like it normally would, it will throw its hands up and crash on you if it is reached).
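
If you want to watch those two numbers outside of Task Manager, here's a minimal sketch using ctypes and the Win32 GlobalMemoryStatusEx call; as far as I know, ullTotalPageFile is the commit limit and ullAvailPageFile is the commit headroom still left under it, so the difference approximates the commit charge:

import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    # Layout of the struct GlobalMemoryStatusEx fills in.
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),   # commit limit
        ("ullAvailPageFile", ctypes.c_ulonglong),   # commit headroom left
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

gb = 1024 ** 3
commit_limit = status.ullTotalPageFile / gb
commit_charge = (status.ullTotalPageFile - status.ullAvailPageFile) / gb
print(f"Commit charge: {commit_charge:.1f} GB of {commit_limit:.1f} GB limit")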

You should never take "this works for me" as good advice. You don't know what that person's memory capacity or memory workload is like, and unless their workload matches yours exactly, what works for them is completely useless for you.
Last edited by Illusion of Progress; 8 Jul @ 1:12pm
Omega 8 Jul @ 1:19pm 
Originally posted by Illusion of Progress:
Originally posted by Omega:
A what?
There's a lot of times where you'll give the first reply to a thread, and it will basically be all that's needed because it's a summary of what is correct. I was saying this is one of those times.
More discussion is clearly still needed, for OP does not quite seem to understand yet.
Omega 8 Jul @ 1:27pm 
Originally posted by Biggus Nickus:
Originally posted by ☥ - CJ -:
Set a custom min/max of 8192 (8GB in other words), been using that for yearsssss now and never once had an issue.

Been using that since I started the thread and everything seems smoother. We'll see, could be in my head.

Thanks for the replies!
You are imagining it. If you want the best possible performance, disable the pagefile.

The only problem with doing this is that if you use more memory than you have, Windows will either lock up or start killing processes to save itself from crashing.


The pagefile is just offloading data from memory onto the disk. It may offload for various reasons. The main abstract reasons being:
- You are short on free memory
- Data stored in memory is never being accessed and is thus wasting RAM

The disk is much slower than memory; if programs are running from the pagefile, it will significantly hurt performance. So ideally you do not want to use it at all.


That is why I recommend keeping it default and automatic. If you have lots of RAM and never exceed it, Windows will only create a small pagefile to offload some basic things. If you commonly max your memory, Windows will create a larger one so it can offload more stuff.

Having a static pagefile means it can't dynamically expand; if you suddenly max your RAM, start swapping to the pagefile, and then also max the pagefile, stuff will start crashing.
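
If you want to see whether the pagefile is actually being leaned on at all (rather than just sitting there), a rough sketch along these lines should work, assuming PowerShell is available; Win32_PageFileUsage reports its sizes in MB:

import subprocess

# Ask WMI how much of the pagefile is in use right now and at its peak.
query = ("Get-CimInstance Win32_PageFileUsage | "
         "Select-Object Name, AllocatedBaseSize, CurrentUsage, PeakUsage | Format-List")
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", query],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())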
Last edited by Omega; 8 Jul @ 1:38pm
A&A 8 Jul @ 2:29pm 
Huh? That 8GB, or more, shouldn't make the system run faster in all cases.

Your computer needs a page file. No doubt. But when you're running a program that uses about 9GB of 16GB of RAM, the operating system usually doesn't care how big your page file is, and it basically makes no difference. But if you get close to 16 GB, the pre-allocated (reserved) space on your drive should always be faster compared to the case with dynamic size. Perhaps it would be even better if the page file is on a separate partition, as swap is on Linux.

For myself, I kinda found a very niche use case. If I have two PCs connected directly via 10/25/40Gbit Ethernet or something even faster, and PC 1 has, let's say, 16GB DDR3, I can create a RAM disk there, then create a virtual hard disk on it and attach it to PC 2. Then I can do whatever I want with this RAM: running a page file, using it for file caching, or other stuff.
Last edited by A&A; 8 Jul @ 2:30pm
Optimal is 0: no pagefile, no thrashing drives, no slowdown or crash due to a corrupt file.
RAM will always be faster, even than a new M.2 NVMe drive.
For most games, barring heavy simulators and professional apps, 16GB is enough to run in RAM.
Unfortunately MS made Win 10 (and possibly 11) unable to run some games unless the memory leak "feature" is on.

Date Posted: 6 Jul @ 4:05pm
Posts: 20