Memory fragmentation

I’ve been doing a lot of work trying to figure out why, after loading a lot of pages, much of your memory seems to disappear. I’ve tested all sorts of things — disabling extensions, plugins, images, etc. I’ve run leak tools over and over looking for things we might be leaking. Occasionally I’ll find something small we’re actually leaking, but more often than not I don’t see any real leaks. This led me to wonder where our memory went.

Firefox has a lot of internal caches for performance reasons. These include the back/forward cache (which speeds up loading pages when you hit back), the image cache (which keeps images in memory so they load faster), the font cache, the textrun cache (short-lived, but used to cache computed glyph indices, metrics, and such), etc. In Gecko 1.9 we also introduced the cycle collector, which reclaims cycles of XPCOM objects that reference counting alone can’t free, and we’ve got the JS garbage collector. All of these things mean we could be holding on to a bunch of objects taking up space, so we want to eliminate those from the picture. I released the RAMBack extension earlier this week, which clears most of these things.

So, if it is none of these things, what is going on? Why, after a while, do we end up using more memory than we should if we aren’t leaking and our caches are clear? At least part of it seems to be due to memory fragmentation.

Let me give you some examples (with pictures!):

Loading the browser with about:blank as my homepage:

This represents a heap size of 12,589,696 bytes: a total of 11,483,864 bytes in used blocks and 1,105,832 bytes in free blocks of varying sizes.

Each block in the image represents 4096 bytes of memory. Blocks range from solid black (completely used) to white (mostly free).
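
To make the shading concrete, here is a minimal sketch of how such an image can be produced from per-page usage data. This is an illustration, not the actual tool; the function name and the grayscale PGM output are invented for the sketch. Each pixel is one 4096-byte page, shaded from black (fully used) to white (fully free).

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Write a PGM image with one pixel per 4096-byte heap page: 0 (black)
    // for a fully used page through 255 (white) for a fully free one.
    // usedPerPage holds the used byte count of each page from a heap walk.
    void WriteHeapMap(const char* path, const std::vector<size_t>& usedPerPage,
                      int width) {
        const size_t kPageSize = 4096;
        int height = (int)((usedPerPage.size() + width - 1) / width);
        FILE* f = fopen(path, "w");
        fprintf(f, "P2\n%d %d\n255\n", width, height);
        for (size_t i = 0; i < (size_t)width * (size_t)height; ++i) {
            size_t used = i < usedPerPage.size() ? usedPerPage[i] : 0;
            fprintf(f, "%d\n", (int)(255 - (255 * used) / kPageSize));
        }
        fclose(f);
    }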


Loading a bunch of windows and closing them and clearing my caches

Although you can get similar results on many sites, schrep gave me this TripAdvisor hotel search page which opens up lots of windows with lots of pages. To generate this image, I loaded the URL, waited for all of the pages to open, closed them all, loaded about:blank, and then ran RAMBack. At the end of that, here is the result:

Our heap is now 29,999,872 bytes! 16,118,072 of that is used (up 4,634,208 bytes from before… which caches am I forgetting to clear?). The rest, a whopping 13,881,800 bytes, is in free blocks! These are mostly scattered in between tiny used blocks. This is bad.

Light green blocks are completely free pages. I’ve highlighted those because the OS could page them out if it wanted to. You’ll notice there aren’t very many light green squares…

So… what does this mean?

Well, it means that any allocation larger than 4KB has to go at the end of the heap because we can’t really fit it anywhere earlier. This is bad for a variety of reasons, including performance, and it makes it very difficult for us to get big chunks of contiguous memory to give back to the OS. This makes us look big!

Yeah, duh, I already knew fragmentation was bad… Now what?

Well, there are many things we can do. Thanks to vlad and dtrace, I’ve got call-stack distributions of all of our mallocs and can tell where the most allocations come from. As you might imagine, given the size of our codebase, we do allocations from lots and lots of different places. Fortunately, there are several hot spots. Those include JavaScript, strings, SQLite, CSS parsing, HTML parsing, and array growing. For some of these we don’t need to heap allocate and can just do temporary allocations on the stack. For others we can’t, but we can use arenas (as we already do for some layout objects) to help reduce fragmentation. For example, we could have several arenas to allocate small strings out of; just during startup we do over 40,000 string allocations between 8 and 64 bytes. As a last resort, we could replace malloc and new entirely with something more generally better. I don’t think we should do this until we’ve done as much of the other things as possible.
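
To make the arena idea concrete, here is a minimal bump-allocator sketch. It illustrates the technique, assuming fixed 64KB chunks and 8-byte alignment; it is not Gecko’s actual arena code, and the class and member names are invented:

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // A trivial bump-pointer arena: each allocation just advances a pointer
    // within the current chunk, and everything is released at once when the
    // arena is destroyed, so short-lived strings never punch holes in the
    // general heap. Individual frees are deliberately no-ops.
    class Arena {
        static const size_t kChunkSize = 65536;
        std::vector<char*> mChunks;
        char* mCur;
        size_t mRemaining;

    public:
        Arena() : mCur(0), mRemaining(0) {}
        ~Arena() {
            for (size_t i = 0; i < mChunks.size(); ++i)
                free(mChunks[i]);
        }

        // Assumes size <= kChunkSize, plenty for the 8-64 byte strings above.
        void* Allocate(size_t size) {
            size = (size + 7) & ~size_t(7);   // keep 8-byte alignment
            if (size > mRemaining) {
                mCur = static_cast<char*>(malloc(kChunkSize));
                mChunks.push_back(mCur);
                mRemaining = kChunkSize;
            }
            void* p = mCur;
            mCur += size;
            mRemaining -= size;
            return p;
        }
    };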

I’ll be filing bugs and posting more details shortly.

Thoughts, suggestions, and comments welcome!

Edit: I found a small bug in the code I used to generate my images which resulted in fewer light green (empty) blocks than there should have been. I’ve updated the images to show properly.

89 thoughts on “Memory fragmentation”

  1. fredrik

    Very interesting stuff, thanks.

    Jason Orendorff wrote some about fragmentation and what MMgc did to alleviate it a while back. Sounds like Moz2 may solve a lot of these issues.

    http://blog.mozilla.com/jorendorff/2007/10/30/improving-malloc-locality/

    It would be interesting to see how much improvement one would see if there was a drop-in malloc/free library with some of the things noted here and on Orendorff’s blog. Google’s TCMalloc was mentioned in the comments there, it does seem to be drop-in but I’m not sure if it will work with Mozilla out-of-the-box (I’d do some testing myself if compiling Fx didn’t take 6 hours).

    Reply
  2. pd

Maybe it’s the compression of your image, but there seem to be a lot of white blocks that you say are empty, yet only two green blocks that are empty but haven’t been reclaimed by the OS.

    Is this an accurate assessment?

    What’s the difference between white blocks and green blocks?

    Reply
  3. Stuart Parmenter

    fredrik: I’ve tried to use both Hoard and tcmalloc. I’ve had issues getting Mozilla to run with both. Hoard’s problem seems to be a VC71 bug, so I just need to upgrade. Not sure what tcmalloc’s issue is. We should definitely look at these allocators and see if using them makes sense, but we should do as much in our code to alleviate the problem before switching, imho.

    pd: in theory none of the “white blocks” are actually rgb(255, 255, 255) (aka fully empty), even if some are really close. I highlighted the completely empty blocks just so you could distinguish them from, say, rgb(254, 254, 254) (which is a mostly empty block). The first thing I thought of when looking at fragmentation was images. I’ve run these tests with images completely disabled and it doesn’t really help much. Images allocate such big contiguous blocks that they tend not to cause as much fragmentation. That said, we may want to look at allocating them directly in virtual memory instead of our heap so that they can be paged out more easily… although this would be slower.

    Anonymous (#2): The images are currently built using a two-step process. Step one walks the heap using some code I wrote on Windows, which dumps the heap and its blocks to a file. Step two uses a Python script I wrote, based on one vlad wrote, to generate the images. I’d like to integrate this into RAMBack if possible so others can generate images.
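
    For the curious, step one looks roughly like this with the Win32 HeapWalk API. This is an illustrative reconstruction, not the actual dumper; DumpHeap and the log format are invented for the sketch:

        #include <windows.h>
        #include <cstdio>

        // Walk one heap and log each entry's address, size, and state.
        // (Region and uncommitted-range entries get lumped in with "free"
        // here for brevity; a real dumper might classify them separately.)
        void DumpHeap(HANDLE heap, FILE* out) {
            PROCESS_HEAP_ENTRY entry;
            entry.lpData = NULL;     // NULL means start at the first entry
            HeapLock(heap);          // keep the walk consistent
            while (HeapWalk(heap, &entry)) {
                if (entry.wFlags & PROCESS_HEAP_ENTRY_BUSY)
                    fprintf(out, "used %p %lu\n", entry.lpData,
                            (unsigned long)entry.cbData);
                else
                    fprintf(out, "free %p %lu\n", entry.lpData,
                            (unsigned long)entry.cbData);
            }
            HeapUnlock(heap);
        }

        // e.g. DumpHeap(GetProcessHeap(), stdout);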

    Reply
  4. Stuart Parmenter

    Tom: I have tried the Low Fragmentation Heap on Windows and I don’t see much difference. Maybe slight wins, but we allocate so, so, so many things. (The images in my post were made up of nearly 180,000 blocks on the heap.) dtrace results (on Mac) show us doing more than 3.2 million allocations <= 128 bytes! That is just loading my home page. I think we can cut these down a lot. I’ll post dtrace data for people to look over in the next few days.

    Reply
  5. cocheddar

    I remember back in the day on a Mac you could set how much memory an app could have. So say you’d like an application such as Firefox to have 64000KB initially; all you had to do was Get Info and change the memory requirements. I wonder why applications/operating systems no longer support this feature. I think the slowdown is all this dynamic allocation business.

    IAMNAP = I am not a programmer

    Reply
  6. Roman G

    Generally, assuming firefox is written in c++, the solution is easy – just use your own allocator. This should both improve speed and solve the fragmentation problem. I’m sure there’re a lot of allocators available – just pick one and plug it in.

    Reply
  7. Neil

    Perversely, this may be rather a good thing. Since fragmentation is now so clearly identified as the major source of memory bloat, any improvement in fragmentation performance will result in immediate app-wide improvements in memory footprint.

    Possible approaches to the problem, other than standard heap-allocation methods:
    * relocation of allocated areas by changing methods that use them to use handles (as in the original Mac operating system)
    * allocation of groups of related small allocations from a larger dedicated pre-allocated block, allowing the whole arena to be freed at once when the group of small objects is deallocated

    Reply
  8. Steve Chapel

    This story is on digg under “firefox memory fragmentation”. However, my informative posts are being buried. Could others head on over to digg and digg any informative posts and bury the sensationalist ones? Thanks!

    Reply
  9. Rob Taylor

    It might be good to look at the slice allocator from GLib. It’s very good when you have a lot of small items of a known size.

    Reply
  10. James

    If you want a good site to test with, freebase.com uses a lot of javascript. Enough that I use a different profile or restart after a session there because performance drops through the floor. If you need an invite, send me an email.

    Reply
  11. stas

    Well, if there are no leaks, then even the small allocations won’t remain. You say:

    (only up 2,425,047 bytes from before)

    as if it is nothing. Where do these 2,425,047 bytes in small chunks come from, and why are they not released? If they were released too, there wouldn’t be any increase in fragmentation either, right?

    Reply
  12. Ed Schouten

    In reply to the OpenBSD malloc suggestion from an anonymous commenter: that is about the same malloc() implementation as used in FreeBSD 6 and lower (often referred to as phkmalloc, written by phk@). FreeBSD 7 will have a new malloc() implementation, called jemalloc, that tends to be a lot faster in threaded setups. What about porting that one to Linux?

    Reply
  13. Hugo Heden

    This article by Andrei Alexandrescu and Emery Berger comes to mind, though I am not at all sure it is relevant here:

    Policy-Based Memory Allocation
    — Fine-tuning your memory management

    “The way your application allocates memory can have a dramatic effect on its performance. Modern general-purpose memory allocators are pretty efficient, but there’s always room for improvement…”

    http://www.ddj.com/cpp/184402039

    Reply
  14. Chris

    I don’t want to drag that discussion here but you may be interested to know that your analysis agrees with theories on triggers for bug 263160.

    Reply
  15. Brodie

    Many projects look to changing memory allocators as a panacea for all problems. Below are the results of a study of a number of projects that used custom memory allocators, usually in order to improve performance. In many cases it doesn’t help at all, or there would have been better performance from using a general-purpose allocator.

    It is worthwhile keeping in mind that there may be no performance benefits from changing allocators, and it should only be done with objective testing before and after to ensure that it was worthwhile.

    http://www.cs.umass.edu/~emery/pubs/berger-oopsla2002.pdf

    Reply
  16. Stijn Vogels

    RAM usage certainly has been a big problem for FF. I cannot wait to give this a try when I get back to my other computer tomorrow. If it turns out positive, I may even start recommending it to friends and other users. Kudos to you for having the idea!

    Reply
  17. Harry

    I saw this from digg, very interesting.

    Out of curiosity, is there room for optimization that would bring more allocations under 4k?

    Reply
  18. Finite

    If fragmentation is the problem, couldn’t defragmentation be a solution? Obviously, it would be ideal to avoid the fragmentation in the first place, but as long as there is still some wouldn’t it make sense (during idle time after loading a page) to malloc new blocks of memory and copy the little chunks in there, and then free the old blocks with the little chunks so the OS can reuse them? Unlike with a hard drive, it seems like with random access memory it shouldn’t be too expensive to be constantly defragging when idle. Disclaimer #1: The last time that I defragged a hard disk was on a state-of-the-art Mac with System 7. Disclaimer #2: IANACC (C coder) so maybe I’m missing some obvious reason why this suggestion makes no sense, but I was surprised to find this thread didn’t have the term “defrag” in it yet so I had to ask… why not just defrag?

    Reply
  19. Stuart Parmenter

    Finite: malloc and friends return pointers so you can’t just move the block of memory that they point to without updating the pointer to that block of memory. If you store handles instead of pointers you could update where they point to and basically defragment. Doing something like this could be good in certain places, but would be slower and isn’t practical for general use.
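
    To make that concrete, here is a toy sketch of handle-based indirection (hypothetical; HandleTable and its methods are invented for illustration, and a real version would also need free-slot reuse and locking):

        #include <cstdlib>
        #include <cstring>
        #include <vector>

        // Callers hold a stable index into a table instead of a raw pointer,
        // so a compactor may move the underlying storage whenever it likes,
        // as long as it updates the one table slot that points to it.
        struct HandleTable {
            std::vector<void*> mSlots;

            size_t Alloc(size_t size) {
                mSlots.push_back(malloc(size));
                return mSlots.size() - 1;    // the handle is just an index
            }

            void* Deref(size_t handle) { return mSlots[handle]; }

            // "Defragment" one block: copy it somewhere better and fix up
            // the single pointer to it. Raw C pointers allow no such fix-up.
            void Relocate(size_t handle, size_t size) {
                void* fresh = malloc(size);
                memcpy(fresh, mSlots[handle], size);
                free(mSlots[handle]);
                mSlots[handle] = fresh;      // existing handles stay valid
            }
        };

    The extra load on every dereference is where the slowness comes from.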

    Reply
  20. w.h.

    I chuckled while reading this, because I’ve seen this exact same bug caused by the exact same reasons.

    My suggestion, which I wasn’t able to talk folks into implementing:

    Write a copying garbage collector for those bits of code that are always allocating. It saves you some memory management effort on those bits, nips the fragmentation problem in the bud, and brings you all the closer to Greenspun’s tenth.

    And, as somebody who’s written special-purpose GC code for a C++ codebase, will take such a short amount of time (as in days) that once you’ve done the dirty work, you’ll wonder why you bothered doing anything else.

    Reply
  21. Peter

    FYI: This problem is entirely solved in modern languages that use references and a garbage collector (anything newer than 1958 really ought to have one — that’s when LISP came out). Aside from the improvements of better cache utilization, most garbage collectors will rearrange memory to compact it and eliminate fragmentation.

    If there was some way to begin shifting Mozilla to references and a garbage collector, this problem would go away.

    This is one of the many reasons why VMs now have better performance than C/C++ code (the others having to do with cache utilization, and in a few years, with utilization of parallel CPUs).

    Reply
  22. Rich

    RE: Finite’s discussion of defragmentation.

    Memory in general cannot be defragmented. I think you’re thinking of a disk, which can be defragmented. Remember that on a disk, each block has other parts of the disk pointing to it; when you defragment, you can update these pointers to point to the new, better location. C, in general, doesn’t work like that. You don’t really know what or even how many pointers point to an object. You can’t change the location of an object since you have no hope of updating all the pointers to it. Once allocated, an object is stuck where it is, no matter how bad of a spot it is.

    There is one sort of exception. A pointer to a pointer, sometimes called a handle, can have the second pointer change. You always refer to it using the first pointer, and some special calls that know it’s a pointer to a pointer. MacOS previous to MacOS X used to use handles. When the OS decided to defragment the heap, it was called compaction. This would be hard to do in a multithreaded program, since you don’t have control over when any of the threads access the handle. Even single threaded MacOS 7 had system calls that moved memory and caused problems. On every call to the OS, you always had to check the docs to see if it may compact memory, and if so, your handles might be invalid.

    One place I guess you could “defragment memory” would be on a realloc call (reallocate a previously allocated block). In this case, you already know the block could move, so in this case, you always force a move and have it help with fragmentation by reallocating for efficiency. My guess is that reallocs() are such a small number of allocations in general that the effort here may not be worth it for most cases.

    Reply
  23. Adam B

    Harry: It is difficult to ‘defrag’, as you have suggested, because of pointers. If you move a block to another location, you may invalidate a pointer somewhere that was pointing to it. Because an allocated block does not keep a comprehensive list of all pointers referencing it, you couldn’t just go through and update every pointer that was.

    Reply
  24. john r pierce

    a very old fashioned algorithm that I recall working fairly nicely was to break the heap up into several regions, say, ‘small’, ‘medium’ and ‘large’. within each region, allocations are rounded up to multiples of some reasonable size. maybe the small heap is for objects under 256 bytes and allocates in 16-byte increments, the medium heap is for objects 256 < size < 4096 and allocates in increments of 512 bytes, and the big heap handles objects 4096 < size < 65536 and allocates in 8192-byte blocks. each of these heaps in turn is allocated from the system heap in fairly large increments, say, 64kbytes.

    obviously, you histogram your actual allocations to determine what the size increments should be.
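
    As a sketch, the size-class selection for those example numbers might look like the following (the thresholds and increments are just the illustrative values from the comment above, and the names are invented):

        #include <cstddef>

        enum Region { SMALL, MEDIUM, LARGE, SYSTEM };

        struct SizeClass { Region region; size_t rounded; };

        // Round a request up to its region's increment; each region would
        // carve its space out of the system heap in large (say 64KB) slabs.
        SizeClass Classify(size_t size) {
            SizeClass sc;
            if (size < 256) {
                sc.region = SMALL;  sc.rounded = (size + 15) & ~size_t(15);
            } else if (size < 4096) {
                sc.region = MEDIUM; sc.rounded = (size + 511) & ~size_t(511);
            } else if (size < 65536) {
                sc.region = LARGE;  sc.rounded = (size + 8191) & ~size_t(8191);
            } else {
                sc.region = SYSTEM; sc.rounded = size;  // too big; go direct
            }
            return sc;
        }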

    Reply
  25. Paul

    Computer games have been dealing with memory fragmentation problems for years, and they pack memory more efficiently than probably any other kind of large-scale software. There are some techniques game developers use that aren’t being mentioned in the discussion here. You might want to talk to an experienced professional game developer to get some answers. The seasoned game development community is often appalled at the way PC programs use memory, though it is understood that PC software is more often developed with speed of coding in mind rather than efficiency of memory.

    Reply
  26. Paolo Bonzini

    I would use allocation pools that are devoted to allocating objects of a particular type. So, all Blah objects (that all have size 20) come from a particular pool.

    Since similar objects often have similar lifetimes, it is more common to free pages completely, and these completely free pages can be used by another pool.

    Reply
  27. CAFxX

    Wouldn’t it be easier to always allocate blocks in ascending address order? i.e. when a block is freed it is placed in a free list; make the free list ordered by block address and always return the first one (i.e. the one with the smallest address).
    I don’t know if this could be slower than per-page free lists, but surely it uses less memory.
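
    A minimal sketch of that policy for a single block size (illustrative only; a real allocator would keep one such list per size class, and the name is invented):

        #include <set>

        // Freed blocks are kept ordered by address, and allocation always
        // hands back the lowest-address one, biasing reuse toward the front
        // of the heap instead of scattering it.
        struct AddressOrderedFreeList {
            std::set<void*> mFree;  // std::set iterates in ascending order

            void Free(void* p) { mFree.insert(p); }

            void* Alloc() {
                if (mFree.empty()) return 0;  // caller falls back to malloc
                void* p = *mFree.begin();
                mFree.erase(mFree.begin());
                return p;
            }
        };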

    Reply
  28. Noel Grandin

    It would be instructive to go through the history of the Linux kernel with regard to memory fragmentation, since you’re following the same path they have already trodden.

    Executive summary:

    – extensive use of slabs aka. arenas

    – splitting up arenas by allocation size

    – splitting up arenas by allocation lifetime.
    i.e. try to avoid mixing long-lived and short-lived types of objects because that increases the fragmentation.

    – put objects that can be flushed when necessary into their own arenas.

    Reply
  29. Robert

    john r pierce: Most allocators do this by default nowadays; the Windows one is the exception (this feature is in the “Low Fragmentation Heap” allocator).

    Unfortunately this problem needs a more detailed solution. From the look of that map, it would appear that the free lists need sorting so that more allocations happen from the start of memory. But the most important need is some sort of lifetime control, so that allocations that die together are allocated from the same location. That way you have a better chance of more green blocks that can be returned to the OS (or at least to a page allocator).

    BTW: I _really_ like those pictures!

    Reply
  30. JFred

    I’ve been coding for several decades now. Three things:

    1. If you have areas where the size of the allocated block is always the same, consider preallocating an array of these blocks and thereafter simply moving them back and forth from a free list to the place they are used. With linked list operations. Very fast and no frags. Requires recoding.

    2. Look at the http://www.microquill.com site. They’ve been selling diagnostic tools and replacement malloc/free systems for years now. Their claim to fame is threaded malloc/free but their tools address fragmentation with memory pools et cetera. You will find their tech support material enlightening.

    3. Using malloc/free all over the place is just not a good idea. Unfortunately, that’s the way they teach it.

    Reply
  31. anonymouse

    My firefox bloats daily to over 1GB in size. No extensions or plugins involved, using only latest and official builds.

    I for one am very happy this is being actively addressed.

    Thanks!

    Reply
  32. Alex

    Try to implement your own memory manager.
    Implement a class to handle a pool of fixed size blocks and have a few instances with different sizes to provide a good fit for your usage patterns.

    template <size_t BlockSize>
    class BlockAllocator
    {
    public:
        void* alloc();
        bool free(void*);
        bool isManagedByMe(void*);
    };

    Measure the kinds of blocks you are using and how many at peak.

    Say you have a list of block managers ordered by increasing block size. The alloc method should try to allocate from those block managers until one succeeds or, if all fail, fall back to the OS alloc function.

    I have used this a few times on computer games and it works very well.

    Let me know if you have any question.

    I use Firefox a lot, with many open tabs, and it usually uses 100–200MB; 300MB is not uncommon. I am considering switching to Opera because of this. I hope you have luck fixing the issue.

    Reply
  33. Finite

    Thanks for the informative answers, I see what you mean about the problem of not being able to update all those pointers.

    So, how hard would it be to replace all the calls to malloc etc with wrappers that use handles instead? Not quite so easy as just doing a search and replace, I’m guessing? :)

    Also, thank you very much for your work on this, and for the excellent pictures and clear description of the issue. I’m one of the many who have long wondered why it is that I need to restart FF [at least] every other day lest it eats all my swap. So, it is great to hear that someone is making progress on this.

    Reply
  34. Max

    Would it be feasible to annotate malloc calls with a lifetime hint? Caches get allocated LONG_LIFE, but temporaries get allocated TEMPORARY, etc. And then you can use separate arenas for different lifetimes. And if you have a default arena, you can annotate the allocation hotspots as necessary.
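
    A sketch of what that could look like (the names, the enum, and the toy arena are all invented for illustration; a real version would flush the temporary arena at a safe point):

        #include <cstddef>
        #include <cstdlib>

        enum Lifetime { TEMPORARY, DEFAULT_LIFE, LONG_LIFE };

        // A deliberately tiny bump arena standing in for a real one.
        struct Arena {
            char mBuf[1 << 20];     // one fixed 1MB region for the sketch
            size_t mUsed;
            Arena() : mUsed(0) {}
            void* Allocate(size_t n) {
                n = (n + 7) & ~size_t(7);
                if (mUsed + n > sizeof(mBuf)) return malloc(n); // overflow
                void* p = mBuf + mUsed;
                mUsed += n;
                return p;
            }
        };

        static Arena tempArena;       // flushed wholesale at a safe point
        static Arena longLivedArena;  // caches and other durable objects

        // Route each allocation by its expected lifetime so long-lived and
        // short-lived objects never interleave on the same pages.
        void* HintedAlloc(size_t size, Lifetime hint) {
            switch (hint) {
                case TEMPORARY: return tempArena.Allocate(size);
                case LONG_LIFE: return longLivedArena.Allocate(size);
                default:        return malloc(size); // unannotated callers
            }
        }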

    Reply
  35. Nicolas

    This seems to explain why RAM usage in Firefox keeps growing indefinitely. I rarely shut down my PC, so after a week it is terrible; what I do is kill FF, then start it again and restore the session.

    There is also a different problem, which is why I started killing it in the first place: CPU usage. Even if I’m not using FF, it suddenly starts using most of the CPU, and this has happened since version 1, especially when I leave it on for weeks or when using hibernation. I hope someone finds the problem. IE7, though I’d rather not use it, is a lot faster and doesn’t have these problems. You can open the same pages in both and leave them for some time to compare.

    Reply
  36. Chris Ovenden

    Firstly, I’d like to express my gratitude for addressing this problem, whose existence has been denied by some, and coming up with something forward-looking and constructive.

    I am a heavy Firefox user and supporter, and have a serious and, to some, dismaying multiple-tab habit. I have seen FF creep towards 1GB memory usage in the course of a day. Recent versions do seem a lot better in this regard, however.

    I’m not a C++ programmer, and know practically nothing about GC, so feel free to correct my ignorance, but I would have thought that closing a tab was the moment to trigger some kind of memory-compacting routine, which could perhaps lie idle the rest of the time. We know that some memory has been/should be reclaimed at this point, and there should be no noticeable pause, because a previously rendered page is being shown (or nothing, if it’s the last tab).

    Reply
  37. bobh

    Maybe a mix of Adam B’s multiple heaps and a garbage collection routine. For garbage collection, you might need double indirection so that the program stores a pointer to the memory block and always accesses that pointer to get the memory, then the garbage collection can move the block and adjust the pointer (or copy the block and then change the pointer).

    If multi-threading is a problem, then you need locks on each region of the storage area. A given thread has to first check if the lock is open, then put its ID in the lock, then access memory, then unlock that region. Maybe divide the storage into 10 regions, or whatever works, to allow more multi-threading and fewer traffic jams. If a thread needs to store new data, it would check the locks to find an open region. The garbage routine might run during otherwise idle time.

    Reply
  38. Baczek

    i’m with the 1958 guys – a copying GC would actually improve performance and memory usage, and kill some memory leaks (if there were any) as a side effect.

    Reply
  39. Stephane Rodriguez

    If you are on Windows, I recommend measuring page faults (there is a column for them in the Task Manager). The count seems pretty high with Firefox, which means many malloc()/VirtualAlloc() calls causing fragmentation, causing slowness.

    Reply
  40. Carlos

    Glad to see someone is working on this.
    Having a PC running 24/7 for months, I need to regularly close/restart FF because every 4 or 5 days it grows to use over 500MB of RAM.

    When I restart and open the exact same tabs, it then uses a lot less memory.
    I have a picture of it at my post (the post is in Portuguese, though):

    http://ptnik.blogspot.com/2007/11/browsers-com-alzheimer-e-outras-perdas.html

    It would be nice for it to regularly “auto-clean” itself: either time-based, via a manual command, or just when it reaches a preset memory-use amount.

    But worse than the actual memory use is that it gets sluggish when it starts using all that memory. And for me, the most important thing in any program is the “response time”.

    Reply
  41. Lex Spoon

    Interesting stuff. A few thoughts:

    First, surely you should start by getting the best general purpose allocator you can. No, it’s not a silver bullet, but why make things hard on yourself? You can stick to the usual malloc/free API and semantics, while getting better prevention of fragmentation. Plus, Firefox will have good memory allocation on all platforms, including those whose native malloc implementation is terrible.

    Second, instead of writing fancy allocators with various constraints on how you use them (all allocations the same size, all references via handles, etc.), you might consider transitioning code over to a garbage collected language such as Javascript. As I understand, Firefox is already following that trend, so this is just another reason to keep that trend going.

    Anyway, good luck, and thanks for posting! There is a lot of theory about memory management, but it is hard to tell how things really work until people write experience reports like this one.

    Reply
  42. dc

    I was the first to bring to light the serious problem that memory allocation brings to software development.
    Do programmers really care about where their software goes?

    Reply
  43. Seth Wagoner

    As a commercial add-on developer I think this is hugely significant, and you’re to be congratulated for establishing so clearly the source of one of the biggest causes of blogospheric complaint about Firefox, and sometimes about addons. If we ever meet in person, remind me to buy you a beer (and hopefully they’ll have a good Kiwi beer available!).

    Reply
  44. Pingback: Leaks? Memory? We never forgot about you. « pavlov.net

  45. Pingback: Interfaccia di Firefox 3 - Videogiochi Forum su Multiplayer.it

  46. Julien C

    “As you might imagine, given the size of our codebase, we do allocations from lots and lots of different places. Fortunately, there are several hot spots. Those include Javascript, strings, sqlite, CSS parsing, HTML parsing, and array growing. For some of these we don’t need to heap allocate and can just do temporary allocations on the stack.”

    Don’t do this. You will turn the current Minimo effort into a nightmare. Many mobile platforms have a very small stack and you don’t want to put big objects like long strings on the stack. The heap is still the way to go.

    “As as last resort, we could replace malloc and new entirely with something more generally better. I don’t think we should do this until we’ve done as much of the other things as possible.”

    As far as I know, the other “big players” in multiplatform web browsers (I’m thinking of a Norwegian one right now) use memory pools and custom allocators. Why do you seem to think it’s bad?

    Reply
  47. Steve Chapel

    Berend de Boer said:

    “Boy, am I glad I use a language with garbage collection and not only that, a movable garbage collector, so memory is automatically compacted for me (http://www.eiffel.com/)”

    First, garbage collectors do not ensure you cannot leak memory (http://developers.slashdot.org/article.pl?sid=07/11/17/0552247). Second, Java has a compacting garbage collector, and we all know how memory-hungry Java programs can be. And third, which browsers are implemented with a compacting garbage collector?

    There are good reasons why these products are not more widely used than they are. There is no panacea for memory problems, only tradeoffs.

    Reply
  48. pavlov Post author

    Julien: We certainly shouldn’t go crazy with stack allocations, and we need to be careful with recursion and the like. Most of our “large” stack allocations will be arrays and strings, both of which are easy to limit in size.
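
    The usual pattern for that is a buffer that is inline when small and heap-allocated when large, in the spirit of Gecko’s auto strings (this standalone class is invented for illustration):

        #include <cstddef>
        #include <cstdlib>

        // Lives on the stack when the request fits in InlineSize bytes and
        // quietly falls back to the heap when it doesn't, so a small stack
        // (e.g. on mobile, per Julien's concern) only ever pays InlineSize.
        template <size_t InlineSize>
        class AutoBuffer {
            char mInline[InlineSize];
            char* mPtr;

        public:
            explicit AutoBuffer(size_t size)
                : mPtr(size <= InlineSize
                           ? mInline
                           : static_cast<char*>(malloc(size))) {}
            ~AutoBuffer() {
                if (mPtr != mInline) free(mPtr);
            }
            char* get() { return mPtr; }
        };

        // Usage: AutoBuffer<256> buf(len); // len <= 256 never hits the heap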

    Reply
  49. Pingback: Firefox needs more focus on its core development tasks.

  50. Kelly

    tcmalloc seems to work fine with firefox. The LD_PRELOAD method works but you end up with some extra small heap files from all the shell scripts that start firefox-bin. However, the zone allocator implementation in nspr probably makes it much less effective.

    I was also able to compile it in pretty easily, but I didn’t bother trying to make it an easier-to-configure option via .mozconfig. I tried using --enable-wrap-malloc, but tcmalloc isn’t a wrapper like that.

    Instructions are posted here:

    http://siliconvista.blogspot.com/2007/11/compiling-firefox-with-tcmalloc.html

    Reply
  51. pavlov Post author

    Kelly: I’ll have images with tcmalloc pretty soon. We’ve got a new tool to replay allocation logs with different allocators so we can see how they compare. Early reports show both tcmalloc and nedmalloc being about 10% faster on pure malloc/free speed. Not sure how much faster things would be in tests that matter. I’m still hooking up code to get the fragmentation info out of tcmalloc that I need.

    Reply
  52. Roger Glynn

    I am not capable of interpreting or using the level of info presented here. Basically, I have not added any extensions to Firefox (or any other program or browser). However, I typically see 700K or more memory used by Firefox. Right now on this PC (just started) I have 58K (now 66K, now 51K) of FF memory with 2 FF windows and a total of 3 tabs open. My other (newer/faster) XP machine with 2G memory is basically locked up right now — i.e., clicking on the tabs at the bottom of the desktop will not switch to the application — with 325K memory usage. I basically need 2 PCs to keep one going at any given time. It seems that the emperor has no clothes??? Internet Explorer also seems to get “jammed up” fairly often. The reason I prefer FF is that it reopens websites whenever FF is closed; very handy. However, once again, it is not “worth it”. What can be done to help this situation for the generic computer person?

    Reply
  53. pavlov Post author

    Roger: Ideally you won’t need to do anything except upgrade to Firefox 3 once it is out. We’ll be getting lots of great fixes in to Firefox 3 that should help you a lot.

    Reply
  54. pavlov Post author

    Clayon: Aside from disabling extensions and cleaning out your profile from time to time, the contents of that article are pretty bogus. Especially its suggestion to use config.trim_on_minimize which will just hide things from you and slow things down. It won’t really help anything.

    Reply
  55. MonkeyBoy

    Memory fragmentation has long been addressed by people with serious languages that depend on garbage collection, such as Lisp.

    Copying garbage collection (and to some extent generational garbage collection) was invented to deal with the fragmentation problem.

    From poking around I can’t see if FF3 incorporates the Tamarin project [1] that uses a common JavaScript/”ECMAScript 4th edition” engine provided by Adobe, or how it garbage collects.

    [1] http://www.mozilla.org/projects/tamarin/

    Reply
  56. pavlov Post author

    MonkeyBoy: Firefox 3 doesn’t use Tamarin. Most of Gecko (the platform Firefox is built on top of) is written in C++ with heavy uses of pointers and is not garbage collected. Our Javascript engine (SpiderMonkey) does do garbage collection on its own objects, but most of those objects are in arenas already so they don’t play a role in the fragmentation I’ve described here.

    Reply
  57. Pingback: Adding pretty pictures to memfault « Chris Wilson’s Weblog

  58. Pingback: FireFox memory leaks | Rick Tech

  59. Pingback: kuidas ilma flopidraivita buuditav CD teha at fruktlog

  60. Pingback: Firefox 2 mit neuer Betaphase… by + mzungu’s weblog +

  61. Pingback: Things n’ Stuff » Blog Archive » More on Firefox and some on memory leaks

  62. Pingback: Scurz’s blog » Les fuites de memoire avec Iceweasel (= Firefox)

  63. Chris

    “As a last resort, we could replace malloc and new entirely with something more generally better.”

    As Julien C said, I don’t see why you think this a last resort. Because it would require changes throughout the codebase? That’s true, but I think the first step should be to have a single unified memory manager, even if right now all it does is call malloc and free. It would give you a lot of flexibility for monitoring and customizing your memory management.

    Reply
  64. Pingback: Firefox 3 Beta 2 released « Dataland

  65. Pingback: Firefox 3 Memory Usage « pavlov.net

  66. Pingback: Il consumo di memoria di Firefox 3 » Macpod.it
