Why does my Delphi program's memory keep growing?

I am using Delphi 2009 which has the FastMM4 memory manager built into it.

My program reads in and processes a large dataset. All memory is freed correctly whenever I clear the dataset or exit the program. It has no memory leaks at all.

Using the CurrentMemoryUsage routine given in spenwarr's answer to: How to get the Memory Used by a Delphi Program, I have displayed the memory used by FastMM4 during processing.
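
(For reference, a routine along these lines can be built on GetProcessMemoryInfo from the psAPI unit that ships with Delphi. This is only a sketch and spenwarr's exact code may differ; it returns the process's working set size in KB.)

uses
  Windows, psAPI;

// Sketch of a working-set query; spenwarr's original may differ in detail.
// Returns the current process's working set size in kilobytes, or 0 if the
// query fails.
function CurrentMemoryUsage: Cardinal;
var
  pmc: TProcessMemoryCounters;
begin
  Result := 0;
  FillChar(pmc, SizeOf(pmc), 0);
  pmc.cb := SizeOf(pmc);
  if GetProcessMemoryInfo(GetCurrentProcess, @pmc, SizeOf(pmc)) then
    Result := pmc.WorkingSetSize div 1024;
end;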

What seems to be happening is that the memory in use grows after every process-and-release cycle, e.g.:

1,456 KB used after starting my program with no dataset.

218,455 KB used after loading a large dataset.

71,994 KB after clearing the dataset completely. If I exit at this point (or any point in my example), no memory leaks are reported.

271,905 KB used after loading the same dataset again.

125,443 KB after clearing the dataset completely.

325,519 KB used after loading the same dataset again.

179,059 KB after clearing the dataset completely.

378,752 KB used after loading the same dataset again.

It seems that my program's memory use is growing by about 53,400 KB upon each load/clear cycle. Task Manager confirms that this is actually happening.

I have heard that FastMM4 does not always release all of the program's memory back to the operating system when objects are freed, so that it has some on hand when more is needed. But this continual growth bothers me. Since no memory leaks are reported, I can't identify a problem.

Does anyone know why this is happening, if it is bad, and if there is anything I can or should do about it?


Thank you dthorpe and Mason for your answers. You got me thinking and trying things that made me realize I was missing something. So detailed debugging was required.

As it turns out, all my structures were being properly freed upon exit, but the memory release after each cycle during the run was not complete. The program was accumulating blocks that would have shown up as a detectable leak on exit if my exit cleanup had not been correct - but it was, so nothing was ever reported.

There were some StringLists and other structures I needed to clear between the cycles. I'm still not sure how my program worked correctly with the extra data from the earlier cycles still present, but it did. I'll probably research that further.
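
For illustration only (the names below are hypothetical, not the actual structures from the program), the fix amounted to explicitly clearing the per-cycle containers before the next load, along these lines:

// Hypothetical sketch - the real field names are not given in the post.
// Containers reused across load/clear cycles must be emptied explicitly,
// otherwise their contents accumulate without ever being reported as a
// leak on exit.
procedure TDataProcessor.ClearDataset;
begin
  FRecords.Clear;            // the dataset contents themselves
  FStatusLines.Clear;        // TStringList of per-cycle status messages
  FreeAndNil(FLookupCache);  // rebuilt lazily on the next load
end;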

This question has been answered. Thanks for your help.

The CurrentMemoryUsage utility you linked to reports your application's working set size. Working set is the total number of pages of virtual memory address space that are mapped to physical memory addresses. However, some or many of those pages may have very little actual data stored in them. The working set is thus the "upper bound" of how much memory your process is using. It indicates how much address space is reserved for use, but it does not indicate how much is actually committed (actually residing in physical memory) or how much of the pages that are committed are actually in use by your application.

Try this: after you see your working set size creep up after several test runs, minimize your application's main window. You will most likely see the working set size drop significantly. Why? Because Windows performs a SetProcessWorkingSetSize(-1) call when you minimize an application which discards unused pages and shrinks the working set to the minimum. The OS doesn't do this while the app window is normal sized because reducing the working set size too often can make performance worse by forcing data to be reloaded from the swap file.

To get into it in more detail: Your Delphi application allocates memory in fairly small chunks - a string here, a class there. The average memory allocation for a program is typically less than a few hundred bytes. It's difficult to manage small allocations like this efficiently on a system-wide scale, so the operating system doesn't. It manages large memory blocks efficiently, particularly at the 4k virtual memory page size and 64k virtual memory address range minimum sizes.
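
Those two granularities can be read straight from the OS; a small console sketch using GetSystemInfo from the Windows unit:

program ShowGranularity;
{$APPTYPE CONSOLE}

uses
  Windows;

// Prints the two granularities mentioned above: typically a 4 KB page size
// and a 64 KB allocation granularity (the minimum unit VirtualAlloc reserves).
var
  SysInfo: TSystemInfo;
begin
  GetSystemInfo(SysInfo);
  Writeln('Page size (bytes):              ', SysInfo.dwPageSize);
  Writeln('Allocation granularity (bytes): ', SysInfo.dwAllocationGranularity);
end.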

This presents a problem for applications: applications typically allocate small chunks, but the OS doles out memory in rather large chunks. What to do? Answer: suballocate.

The Delphi runtime library's memory manager and the FastMM replacement memory manager (and the runtime libraries of just about every other language or toolset on the planet) both exist to do one thing: carve up big memory blocks from the OS into smaller blocks used by the application. Keeping track of where all the little blocks are, how big they are, and whether they've been "leaked" requires some memory as well - called overhead.
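
That bookkeeping is visible from inside the program: on FastMM-based Delphi versions (2006 and later, including Delphi 2009), GetMemoryManagerState in the System unit reports how much address space the small, medium and large block pools are currently holding. A rough sketch (console output assumed):

// Rough sketch: summarise how much address space the memory manager is
// holding in its small, medium and large block pools.
procedure ReportHeapUsage;
var
  State: TMemoryManagerState;
  i: Integer;
  SmallReserved: Cardinal;
begin
  GetMemoryManagerState(State);
  SmallReserved := 0;
  for i := Low(State.SmallBlockTypeStates) to High(State.SmallBlockTypeStates) do
    Inc(SmallReserved, State.SmallBlockTypeStates[i].ReservedAddressSpace);
  Writeln('Small block address space reserved:  ', SmallReserved div 1024, ' KB');
  Writeln('Medium block address space reserved: ', State.ReservedMediumBlockAddressSpace div 1024, ' KB');
  Writeln('Large block address space reserved:  ', State.ReservedLargeBlockAddressSpace div 1024, ' KB');
end;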

In situations of heavy memory allocation/deallocation, there can be situations in which you deallocate 99% of what you allocated, but the process's working set size only shrinks by, say, 50%. Why? Most often, this is caused by heap fragmentation: one small block of memory is still in use in one of the large blocks that the Delphi memory manager obtained from the OS and divvied up internally. The internal count of memory used is small (300 bytes, say) but since it's preventing the heap manager from releasing the big block that it's in back to the OS, the working set contribution of that little 300 byte chunk is more like 4k (or 64k depending on whether it's virtual pages or virtual address space - I can't recall).
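
A contrived sketch of that effect: allocate a large number of roughly 300-byte blocks, then free all but a scattered handful. The bytes actually in use collapse, but the working set stays high because each surviving block pins the larger chunk it lives in.

// Contrived fragmentation demo (sketch only). After the second loop, 99.9%
// of the allocated bytes have been freed, yet the working set barely
// shrinks: the scattered survivors keep the large OS-level blocks that
// FastMM carved them from pinned in place.
procedure FragmentationDemo;
var
  Blocks: array of Pointer;
  i: Integer;
begin
  SetLength(Blocks, 500000);
  for i := 0 to High(Blocks) do
    GetMem(Blocks[i], 300);        // ~150 MB of small allocations
  for i := 0 to High(Blocks) do
    if i mod 1000 <> 0 then        // free all but every 1000th block
    begin
      FreeMem(Blocks[i]);
      Blocks[i] := nil;
    end;
  // Check the working set here (e.g. with the CurrentMemoryUsage routine
  // from the question) before freeing the survivors.
  for i := 0 to High(Blocks) do
    if Blocks[i] <> nil then
      FreeMem(Blocks[i]);
end;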

In a heavy memory intensive operation involving megabytes of small memory allocations, heap fragmentation is very common - particularly if memory allocations for things not related to the memory intensive operation are going on at the same time as the big job. For example, if crunching through your 80MB database operation also outputs status to a listbox as it progresses, the strings used to report status will be scattered in the heap amongst the database memory blocks. When you release all the memory blocks used by the database computation, the listbox strings are still out there (in use, not lost) but they are scattered all over the place, potentially occupying an entire OS big block for each little string.

Try the minimize window trick to see if that reduces your working set. If it does, you can discount the apparent "severity" of the numbers returned by the working set counter. You could also add a call to SetProcessWorkingSetSize after your big compute operation to purge the pages that are no longer in use.
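
A minimal sketch of that trim call, assuming the Windows unit is in scope (Delphi 2009 is 32-bit, so -1 is written here as an unsigned 32-bit value):

uses
  Windows;

// Ask Windows to page out as much of this process's working set as it can,
// which is what the OS does when the main window is minimized. Call it
// sparingly (e.g. once, right after a large operation); trimming too often
// forces pages to be reloaded from disk and hurts performance.
procedure TrimWorkingSet;
begin
  SetProcessWorkingSetSize(GetCurrentProcess, $FFFFFFFF, $FFFFFFFF);
end;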