Monday, December 6, 2010

VisualWorks Memory Policy for 3 GB Memory Usage

As I wrote earlier, VisualWorks 7.7.1 can use close to 3 GB of memory when running on 64-bit Windows 7. My initial attempt, overriding #defaultMemoryUpperBound, is however broken. It turns out other parameters are derived from this number and are modified when it changes. Some of these modifications cause problems allocating memory, and the change also causes the memory policy unit tests included with VisualWorks to fail.

A better solution is to continue subclassing LargeGrainMemoryPolicy, but override #initialize to do the following:

                initialize
                    super initialize.
                    self memoryUpperBound: 1024 * 1024 * (1024 * 3 - 128)

This should work well: it keeps the default values of LargeGrainMemoryPolicy while allowing growth beyond 512 MB. I have not used this new memory policy long enough to confirm that it causes no problems, and it might need modifications to deal with what happens when too much memory is consumed.

You do not need to subclass LargeGrainMemoryPolicy; you could simply set #memoryUpperBound: through the setter on a LargeGrainMemoryPolicy instance. I chose subclassing because I think it makes it clear that the system is creating its own policy, and it makes it easier to add other modifications. More about this later...
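As a minimal sketch of installing such a subclass from a workspace: the class name BigMemoryPolicy is hypothetical (any LargeGrainMemoryPolicy subclass with the #initialize override above would do), and the class-side #install send follows the pattern shown in the comment thread below.

```smalltalk
"BigMemoryPolicy is a hypothetical subclass of LargeGrainMemoryPolicy
 whose instance-side #initialize contains the override shown above.
 Class-side #install makes it the active memory policy for the image."
BigMemoryPolicy install
```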

12 comments:

Andrés said...

Runar, by overriding the initialize method and calling super, the change to the memoryUpperBound instance variable does not propagate to the other amounts that depend on it (see the #initialize method in the memory policy classes provided). What I'd do is send the class #defaultMemoryUpperBound:, and then send the class #install.
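Andrés's suggestion, sketched as a workspace expression. The subclass name BigMemoryPolicy is an assumption; the two class-side sends mirror the ones he uses later in this thread.

```smalltalk
"Set the class-side default first, so the amounts derived from
 memoryUpperBound are computed from it during initialization,
 then install the policy."
BigMemoryPolicy defaultMemoryUpperBound: 1024 * 1024 * 1024 * 3.
BigMemoryPolicy install
```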

Runar Jordahl said...

I tried overriding #defaultMemoryUpperBound (which should have the same effect as what you suggest), but this caused problems ( http://www.cincomsmalltalk.com/userblogs/runarj/blogView?entry=3466302418 ). Therefore I now set this value after running initialize.

Andrés said...

What were the problems, specifically?

Runar Jordahl said...

First of all, thank you so much for following up. It shows that Cincom engineers really care for the product.


The problem I ran into was:

I subclassed LargeGrainMemoryPolicy and implemented #defaultMemoryUpperBound to

^1024 * 1024 * 1024 * 3

This memory policy is described here: http://www.cincomsmalltalk.com/userblogs/runarj/blogView?entry=3466302418

When I used this memory policy, the system would fail from time to time. I then ran the memory policy unit tests, which are included in VisualWorks 7.7.1 as parcels. Many of the tests failed, indicating that simply overriding #defaultMemoryUpperBound with a larger number does not work.


I then decided to set #memoryUpperBound: after initialization has happened, as described in this blog post. The unit tests then pass, and so far I have not had any problems when running the image.

All my tests are run on a 64-bit Windows 7 system with 8 GB of RAM.

Andrés said...

Runar, I cannot reproduce the test failures you refer to. I made a subclass of LargeGrainMemoryPolicy, and then I did the following:

TestLargeGrainMemoryPolicy defaultMemoryUpperBound: (1 bitShift: 30)

TestLargeGrainMemoryPolicy install

Then I ran the tests in the memory policy checker parcel. All the tests passed.

Andrés said...

Whoops, sorry about that, I forgot you meant 3 GB, not 1 GB. I get three test failures now; I will look at these and let you know what's going on.

Andrés said...

Ahh, I see. The incrementalAllocationThreshold grows with the preferredGrowthIncrement, and it grows so much that eventually the soft low space limit becomes zero to avoid getting too close to the hard low space limit.

This test failure will only occur when there is not a whole lot of free memory in the image. As soon as the memory policy requests another chunk of memory, the test will pass because the available free old space bytes will grow. So, sometimes you will see the test failure, and sometimes you won't.

Nevertheless, this issue should not have caused the system to "fail from time to time". Other than the test failures, what problems did you see?

Runar Jordahl said...

We had out of memory messages from the VM, and then the VM terminated. Typically, it would take some time before these errors occurred.

If needed, I could try to reproduce these again and report back to you.

Andrés said...

Runar, would you mind reproducing the problems? I'm a bit concerned that the memory policy in question might be operating outside its use scope, so it may have miscalculated the hard low space limit. Or maybe there is still an issue somewhere that requires increasing the low memory threshold. Do you think you can get me a workspace expression that causes the crash?

Runar Jordahl said...

I tried reproducing the problem, but failed. When I originally got it, I was running hundreds of unit tests from a console.

Andrés said...

Let me know if you run into problems. Also, did you change the sizesAtStartup?

Runar Jordahl said...

I will let you know if I manage to reproduce it. I tried running a lot of allocation/deallocation tests, but none failed.

When I initially had the problems, I did not change sizesAtStartup.