Memory fragmentation

I’ve been doing a lot of work trying to figure out why, after loading a lot of pages, much of your memory seems to disappear. I’ve tested all sorts of things: disabling extensions, plugins, images, etc. I’ve run leak tools over and over looking for things we might be leaking. Occasionally I’ll find something small we’re actually leaking, but more often than not I don’t see any real leaks. This led me to wonder where our memory went.

Firefox has a lot of internal caches for performance reasons. These include the back/forward cache (which helps speed up loading pages when you hit back), the image cache (which keeps images in memory to help load them faster), the font cache, the textrun cache (short-lived, but used to cache computed glyph indices, metrics, and such), etc. In Gecko 1.9 we also introduced the cycle collector, which finds and frees cycles of XPCOM objects that would otherwise never be destroyed. We’ve also got the JS garbage collector. All of these things mean we could be holding on to a bunch of objects that take up space, so we want to eliminate those from the picture. I released the RAMBack extension earlier this week, which clears most of these things.

So, if it is none of these things, what is going on? Why after a while do we end up using more memory than we should be if we aren’t leaking and our caches are clear? At least part of it seems to be due to memory fragmentation.

Let me give you some examples (with pictures!):

Loading the browser with about:blank as my homepage:

This represents a heap size of 12,589,696 bytes: 11,483,864 bytes in used blocks and 1,105,832 bytes in free blocks of varying sizes.

Each block in the image represents 4096 bytes of memory. Blocks range from solid black (completely used) to white (mostly free).


Loading a bunch of windows, closing them, and clearing my caches

Although you can get similar results on many sites, schrep gave me this TripAdvisor hotel search page, which opens up lots of windows with lots of pages. To generate this image, I loaded the URL, waited for all of the pages to open, closed them all, loaded about:blank, and then ran RAMBack. Here is the result:

Our heap is now 29,999,872 bytes! 16,118,072 bytes of that is used (up 4,634,208 bytes from before… which caches am I forgetting to clear?). The rest, a whopping 13,881,800 bytes, sits in free blocks, mostly scattered between tiny used blocks. This is bad.

Light green blocks are completely free pages. I’ve highlighted those because the OS could page them out if it wanted to. You’ll notice there aren’t very many light green squares…

So… what does this mean?

Well, it means that any allocation larger than 4 KB has to go at the end of the heap, because we can’t fit it anywhere earlier. This is bad for a variety of reasons, including performance. It also makes it very difficult for us to get big chunks of contiguous memory to give back to the OS, which makes us look big!

Yeah, duh, I already knew fragmentation was bad… Now what?

Well, there are many things we can do. Thanks to vlad and dtrace, I’ve got call stack distributions of all of our mallocs and can tell where the most allocations come from. As you might imagine, given the size of our codebase, we do allocations from lots and lots of different places. Fortunately, there are several hot spots: JavaScript, strings, sqlite, CSS parsing, HTML parsing, and array growing. For some of these we don’t need to heap allocate at all and can just do temporary allocations on the stack. For others we can’t, but we can use arenas (as we already do for some layout objects) to help reduce fragmentation. For example, we could have several arenas to allocate small strings out of; just during startup we do over 40,000 string allocations between 8 and 64 bytes. As a last resort, we could replace malloc and new entirely with something generally better. I don’t think we should do that until we’ve done as much of the other work as possible.
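
To make the arena idea concrete, here is a minimal sketch of a bump-pointer arena of the sort small strings could be carved out of. It is illustrative only (the class name, chunk size, and alignment are made up), not actual Gecko code:

#include <cstddef>
#include <cstdlib>
#include <vector>

// Minimal bump-pointer arena: small allocations come out of large
// chunks instead of being scattered across the general heap, and
// everything is released in one shot when the arena goes away.
// Assumes size <= chunkSize.
class Arena {
public:
    explicit Arena(size_t chunkSize = 64 * 1024)
        : mChunkSize(chunkSize), mNext(0), mRemaining(0) {}
    ~Arena() {
        for (size_t i = 0; i < mChunks.size(); ++i)
            free(mChunks[i]);
    }

    void* Allocate(size_t size) {
        size = (size + 7) & ~size_t(7);      // keep 8-byte alignment
        if (mRemaining < size) {             // current chunk is exhausted
            mNext = static_cast<char*>(malloc(mChunkSize));
            mChunks.push_back(mNext);
            mRemaining = mChunkSize;
        }
        char* result = mNext;
        mNext += size;
        mRemaining -= size;
        return result;
    }
    // Note: no per-object free; the arena frees all of its chunks at once.

private:
    size_t mChunkSize;
    std::vector<char*> mChunks;
    char* mNext;
    size_t mRemaining;
};

A string class could then carve its 8-64 byte buffers out of a handful of such arenas instead of hitting malloc every time.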

I’ll be filing bugs and posting more details shortly.

Thoughts, suggestions, and comments welcome!

Edit: I found a small bug in the code I used to generate my images which resulted in fewer light green (empty) blocks than there should have been. I’ve updated the images so they display properly.

89 Comments on “Memory fragmentation”

  1. fredrik Says:

    Very interesting stuff, thanks.

    Jason Orendorff wrote some about fragmentation and what MMgc did to alleviate it a while back. Sounds like Moz2 may solve a lot of these issues.

    http://blog.mozilla.com/jorendorff/2007/10/30/improving-malloc-locality/

    It would be interesting to see how much improvement one would see if there was a drop-in malloc/free library with some of the things noted here and on Orendorff’s blog. Google’s TCMalloc was mentioned in the comments there, it does seem to be drop-in but I’m not sure if it will work with Mozilla out-of-the-box (I’d do some testing myself if compiling Fx didn’t take 6 hours).

  2. pd Says:

    Maybe it’s the compression of your image, but there seem to be a lot of white blocks that you say are empty, yet only two green blocks that are empty and haven’t been reclaimed by the OS.

    Is this an accurate assessment?

    What’s the difference between white blocks and green blocks?

  3. Anonymous Says:

    So, are you planning to fix this before FF 3 comes out?

  4. RyanVM Says:

    Can you post the bug numbers here when you file them? Thanks!

  5. Bo Says:

    Great post, very insightful research. I’d love to see more posts about your work on this.

  6. Anonymous Says:

    Great post! What did you use to generate the pictures?


  7. fredrik: I’ve tried to use both hoard and tcmalloc. I’ve had issues getting Mozilla to run with both. Hoard’s problem seems to be a VC71 bug, so I just need to upgrade. Not sure what tcmalloc’s issue is. We should definitely look at these allocators and see if using them makes sense, but we should do as much in our code to alleviate the problem before switching, imho.

    pd: in theory none of the “white blocks” are actually rgb(255, 255, 255) (aka fully empty), even if some are really close. I highlighted the completely empty blocks just so you could distinguish them from, say, rgb(254,254,254) (a mostly empty block). Images were the first thing I thought of when looking at fragmentation. I’ve run these tests with them completely disabled and it doesn’t really help much. Images allocate such big contiguous blocks that they tend not to cause as much fragmentation. That said, we may want to look at allocating them directly in virtual memory instead of our heap so that they can be paged out more easily… although this would be slower.

    Anonymous (#2): The images are currently built using a two-step process. Step one is to walk the heap using some code I wrote on Windows which dumps the heap and blocks to a file. The second step uses a python script I wrote, based on one vlad wrote, to generate the images. I’ll look at integrating this into RAMBack if possible so others can generate images.
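
    Roughly, the walking half looks like this: a simplified sketch using the Win32 HeapWalk API that just tallies used vs. free bytes (the real code dumps every block to a file):

    #include <windows.h>
    #include <cstdio>

    // Walk the default process heap and tally used vs. free bytes.
    int main()
    {
        HANDLE heap = GetProcessHeap();
        SIZE_T used = 0, freeBytes = 0;

        PROCESS_HEAP_ENTRY entry;
        ZeroMemory(&entry, sizeof(entry));
        entry.lpData = NULL;                  // start from the beginning

        HeapLock(heap);
        while (HeapWalk(heap, &entry)) {
            if (entry.wFlags & PROCESS_HEAP_ENTRY_BUSY)
                used += entry.cbData;         // allocated block
            else if (entry.wFlags == 0)
                freeBytes += entry.cbData;    // free block
        }
        HeapUnlock(heap);

        printf("used: %lu bytes, free: %lu bytes\n",
               (unsigned long)used, (unsigned long)freeBytes);
        return 0;
    }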

  8. Tom Says:

    Have you tried the Win32 Low Fragmentation Heap? It might be interesting to see how it performs. Details available here:
    http://msdn2.microsoft.com/en-us/library/aa366750.aspx


  9. Tom: I have tried the Low Fragmentation Heap on Windows and I don’t see much difference. Maybe slight wins, but we allocate so so so many things. (The images in my post were made up of nearly 180,000 blocks on the heap.) dtrace results (on Mac) show us doing more than 3.2 million allocations <= 128 bytes! That is just loading my home page. I think we can cut these down a lot. I’ll post dtrace data for people to look over in the next few days.

  10. cocheddar Says:

    I remember back in the day on a Mac you could set how much memory an app could have. Say you wanted an application such as Firefox to have 64000KB initially; all you had to do was go to Get Info and change the memory requirements. I wonder why applications/operating systems no longer support this feature. I think the slowdown is all this dynamic allocation business.

    IANAP = I am not a programmer

  11. Roman G Says:

    Generally, assuming Firefox is written in C++, the solution is easy: just use your own allocator. This should both improve speed and solve the fragmentation problem. I’m sure there are a lot of allocators available; just pick one and plug it in.

  12. Anonymous Says:

    Improved memory allocator for decreasing memory fragmentation. Try it:

    http://mr.himki.net/OpenBSD_malloc_Linux.c

    HOWTO:
    1. Compiling:
    gcc -shared -fPIC -O2 OpenBSD_malloc_Linux.c -o malloc.so

    2. Running:
    LD_PRELOAD=/path/to/malloc.so firefox

  13. Martin Wolf Says:

    Great post. Keep posting! :)

  14. Roman G Says:

    to add to my previous comment, here’s one allocator: http://www.nedprod.com/programs/portable/nedmalloc

  15. Neil Says:

    Perversely, this may be rather a good thing. Since fragmentation is now so clearly identified as the major source of memory bloat, any improvement in fragmentation performance will result in immediate app-wide improvements in memory footprint.

    Possible approaches to the problem, other than standard heap-allocation methods:
    * relocation of allocated areas by changing methods that use them to use handles (as in the original Mac operating system)
    * allocation of groups of related small allocations from a larger dedicated pre-allocated block, allowing the whole arena to be freed at once when the group of small objects is deallocated

  16. Steve Chapel Says:

    This story is on digg under “firefox memory fragmentation”. However, my informative posts are being buried. Could others head on over to digg and digg any informative posts and bury the sensationalist ones? Thanks!

  17. Rob Taylor Says:

    It might be good to look at the slice allocator from glib. It’s very good when you have a lot of small items of a known size.

  18. James Says:

    If you want a good site to test with, freebase.com uses a lot of javascript. Enough that I use a different profile or restart after a session there because performance drops through the floor. If you need an invite, send me an email.

  19. stas Says:

    Well, if there are no leaks, then even the small allocations won’t remain. You say:

    (only up 2,425,047 bytes from before)

    as if it is nothing. Where do these 2,425,047 bytes in small chunks come from, and why are they not released? If they were released too, there wouldn’t be any increase in fragmentation either, right?

  20. Ed Schouten Says:

    Reply to the OpenBSD-malloc post by an anonymous guy: That is about the same malloc() implementation as used in FreeBSD 6 and lower (often referred to as phkmalloc, written by phk@). FreeBSD 7 will have a new malloc() implementation, called jemalloc, that tends to be a lot faster in threaded setups. What about porting that one to Linux?


  21. Why aren’t garbage collectors with heap compaction more commonly used?

  22. Hugo Heden Says:

    This article by Andrei Alexandrescu and Emery Berger comes to mind, though I am not at all sure it is relevant here:

    Policy-Based Memory Allocation
    – Fine-tuning your memory management

    “The way your application allocates memory can have a dramatic effect on its performance. Modern general-purpose memory allocators are pretty efficient, but there’s always room for improvement…”

    http://www.ddj.com/cpp/184402039

  23. Chris Says:

    I don’t want to drag that discussion here but you may be interested to know that your analysis agrees with theories on triggers for bug 263160.

  24. Brodie Says:

    Many projects look to changing memory allocators as a panacea for all problems. The following is a study of a number of projects that use custom memory allocators, usually in order to improve performance. In many cases it doesn’t help at all, or a general-purpose allocator would have given better performance.

    It is worthwhile keeping in mind that there may be no performance benefits from changing allocators, and it should only be done with objective testing before and after to ensure that it was worthwhile.

    http://www.cs.umass.edu/~emery/pubs/berger-oopsla2002.pdf


  25. Great work on this, thanks!

  26. Stijn Vogels Says:

    RAM usage certainly has been a big problem for FF. I cannot wait to give this a try when I get back to my other computer tomorrow. If it turns out positive, I may even start recommending it to friends and other users. Kudos to you for having the idea!

  27. Harry Says:

    I saw this from digg, very interesting.

    Out of curiosity, is there room for optimization that would bring more allocations under 4k?

  28. Finite Says:

    If fragmentation is the problem, couldn’t defragmentation be a solution? Obviously, it would be ideal to avoid the fragmentation in the first place, but as long as there is still some, wouldn’t it make sense (during idle time after loading a page) to malloc new blocks of memory, copy the little chunks in there, and then free the old blocks with the little chunks so the OS can reuse them? Unlike with a hard drive, it seems like with random access memory it shouldn’t be too expensive to be constantly defragging when idle. Disclaimer #1: The last time I defragged a hard disk was on a state-of-the-art Mac with System 7. Disclaimer #2: IANACC (C coder), so maybe I’m missing some obvious reason why this suggestion makes no sense, but I was surprised to find this thread didn’t have the term “defrag” in it yet, so I had to ask… why not just defrag?


  29. Finite: malloc and friends return pointers, so you can’t just move the block of memory that they point to without updating the pointer to that block. If you store handles instead of pointers, you can update where they point and basically defragment. Doing something like this could be good in certain places, but it would be slower and isn’t practical for general use.
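
    In sketch form, the handle indirection looks like this (purely illustrative; not something we actually do):

    #include <cstdlib>
    #include <cstring>
    #include <vector>

    // Callers hold an index (the "handle") into a table instead of a
    // raw pointer, so the allocator is free to move the storage.
    struct HandleTable {
        std::vector<void*> slots;

        size_t Alloc(size_t size) {
            slots.push_back(malloc(size));
            return slots.size() - 1;       // hand back the index
        }
        void* Deref(size_t handle) { return slots[handle]; }

        // "Defragment" one block: copy it somewhere better and fix up
        // the single slot that points at it. With raw pointers there
        // is no way to find and fix every reference.
        void Move(size_t handle, size_t size) {
            void* fresh = malloc(size);
            memcpy(fresh, slots[handle], size);
            free(slots[handle]);
            slots[handle] = fresh;
        }
    };

    Every access pays an extra indirection (and a multithreaded program would need locking around moves), which is part of why it isn’t practical for general use.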

  30. w.h. Says:

    I chuckled while reading this, because I’ve seen this exact same bug caused by the exact same reasons.

    My suggestion, which I wasn’t able to talk folks into implementing:

    Write a copying garbage collector for those bits of code that are always allocating. It saves you some memory management effort on those bits, nips the fragmentation problem in the bud, and brings you all the closer to Greenspun’s tenth.

    And, speaking as somebody who’s written special-purpose GC code for a C++ codebase, it will take such a short amount of time (as in days) that once you’ve done the dirty work, you’ll wonder why you bothered doing anything else.

  31. Peter Says:

    FYI: This problem is entirely solved in modern languages that use references and a garbage collector (anything newer than 1958 really ought to have one — that’s when LISP came out). Aside from the improvements of better cache utilization, most garbage collectors will rearrange memory to compact it and eliminate fragmentation.

    If there were some way to begin shifting Mozilla to references and a garbage collector, this problem would go away.

    This is one of the many reasons why VMs now have better performance than C/C++ code (the others having to do with cache utilization, and in a few years, with utilization of parallel CPUs).

  32. Rich Says:

    RE: Finite’s discussion of defragmentation.

    Memory, in general, cannot be defragmented. I think you’re thinking of a disk, which can be defragmented. Remember that on a disk, each block has other parts of the disk pointing to it. When you defragment, you can update these pointers to point to the new, better location. C, in general, doesn’t work like that. You don’t really know what or even how many pointers point to an object. You can’t change the location of an object since you have no hope of updating all the pointers to it. Once allocated, an object is stuck where it is, no matter how bad a spot it is in.

    There is one sort of exception. A pointer to a pointer, sometimes called a handle, can have the second pointer change. You always refer to it using the first pointer, and some special calls that know it’s a pointer to a pointer. Mac OS versions prior to Mac OS X used handles. When the OS decided to defragment the heap, it was called compaction. This would be hard to do in a multithreaded program, since you don’t have control over when any of the threads access the handle. Even single-threaded Mac OS 7 had system calls that moved memory and caused problems. On every call to the OS, you always had to check the docs to see if it might compact memory, and if so, your handles might be invalid.

    One place where I guess you could “defragment memory” would be on a realloc call (reallocating a previously allocated block). In this case, you already know the block can move, so you could always force a move and have it help with fragmentation by reallocating for efficiency. My guess is that realloc() calls are such a small share of allocations in general that the effort may not be worth it for most cases.

  33. Adam B Says:

    Harry: It is difficult to ‘defrag’, as you have suggested, because of pointers. If you move a block to another location, you may invalidate a pointer somewhere that was pointing to it. Because an allocated block does not keep a comprehensive list of all pointers referencing it, you couldn’t just go through and update every pointer that did.

  34. Adam B Says:

    Whoops… I’m afraid I confused which names belonged to which posts. My post was meant for Finite, not Harry.

  35. john r pierce Says:

    a very old-fashioned algorithm that I recall working fairly nicely was to break the heap up into several regions, say, ‘small’, ‘medium’, and ‘large’. within each region, allocations are rounded up to multiples of some reasonable size. maybe the small heap is for objects under 256 bytes, and it allocates in 16 byte increments; the medium heap is for objects 256 < size < 4096, and it allocates in increments of 512 bytes; then the big heap handles objects 4096 < size < 65536 and allocates in 8192 byte blocks. each of these heaps in turn is allocated from the system heap in fairly large increments, say, 64kbytes.

    obviously, you histogram your actual allocations to determine what the size increments should be.
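
    In code, that bucketing might look like this (a sketch using the exact cut-offs from the comment above):

    #include <cstddef>

    // Round n up to the next multiple of step.
    size_t RoundUp(size_t n, size_t step) {
        return (n + step - 1) / step * step;
    }

    // Pick the rounded allocation size for each region.
    size_t SizeClass(size_t request) {
        if (request < 256)   return RoundUp(request, 16);    // small heap
        if (request < 4096)  return RoundUp(request, 512);   // medium heap
        if (request < 65536) return RoundUp(request, 8192);  // big heap
        return request;            // huge: straight from the system heap
    }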

  36. Paul Says:

    Computer games have been dealing with memory fragmentation problems for years. Computer games pack memory more efficiently than probably any other kind of large-scale software. There are some techniques game developers use that aren’t being mentioned in the discussion here. You might want to talk to an experienced professional game developer to get some answers. The seasoned game development community is often appalled at the way PC programs use memory, though it is understood that PC software is more often developed with speed of coding in mind rather than memory efficiency.


  37. I would use allocation pools that are devoted to allocating objects of a particular type. So, all Blah objects (that all have size 20) come from a particular pool.

    Since similar objects often have similar lifetimes, it is more common to free pages completely, and these completely free pages can be used by another pool.
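
    A bare-bones sketch of such a per-type pool (hypothetical code, not from Mozilla):

    #include <cstdlib>

    // All objects of type T come from T's dedicated free list, so
    // same-sized objects with similar lifetimes sit together.
    template <typename T>
    class TypedPool {
        union Node { Node* next; char storage[sizeof(T)]; };
        Node* mFree;
    public:
        TypedPool() : mFree(0) {}
        void* Allocate() {
            if (mFree) {                  // reuse a dead object's slot
                Node* n = mFree;
                mFree = n->next;
                return n;
            }
            return malloc(sizeof(Node));  // real code would carve pages
        }
        void Free(void* p) {
            Node* n = static_cast<Node*>(p);
            n->next = mFree;              // push onto the free list
            mFree = n;
        }
    };

    // Usage: TypedPool<Blah> gBlahPool;
    //        Blah* b = new (gBlahPool.Allocate()) Blah();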

  38. CAFxX Says:

    Wouldn’t it be easier to always allocate blocks in ascending address order? i.e., when a block is freed it is placed in a free list; make the free list ordered by block address and always return the first one (i.e. the one with the smallest address).
    I don’t know if this would be slower than per-page free lists, but it surely uses less memory.
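
    Something like this, perhaps (a sketch; it ignores splitting oversized blocks, and the linear search is the real cost question):

    #include <cstddef>
    #include <map>

    // Free list kept sorted by block address; allocation returns the
    // lowest-addressed block that fits, per the suggestion above.
    class AddressOrderedFreeList {
        std::map<void*, size_t> mFree;    // address -> block size
    public:
        void Free(void* p, size_t size) { mFree[p] = size; }

        void* Allocate(size_t size) {
            std::map<void*, size_t>::iterator it = mFree.begin();
            for (; it != mFree.end(); ++it) {
                if (it->second >= size) { // first (lowest-address) fit
                    void* p = it->first;
                    mFree.erase(it);
                    return p;
                }
            }
            return 0;                     // caller falls back to malloc
        }
    };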

  39. Noel Grandin Says:

    It would be instructive to go through the history of the Linux kernel with regard to memory fragmentation, since you’re following the same path they have already trodden.

    Executive summary:

    - extensive use of slabs aka. arenas

    - splitting up arenas by allocation size

    - splitting up arenas by allocation lifetime.
    i.e. try to avoid mixing long-lived and short-lived types of objects because that increases the fragmentation.

    - put objects that can be flushed when necessary into their own arenas.

  40. Robert Says:

    john r pierce: Most allocators do this by default nowadays; the Windows one is the exception (this feature is in the “Low Fragmentation Heap” allocator)

    Unfortunately, this problem needs a more detailed solution. From the look of that map it would appear that the free lists need sorting so that more allocations happen from the start of memory. But the most important need is some sort of lifetime control, so that allocations that die together are allocated from the same location. That way you have a better chance of more green blocks that can be returned to the OS (or at least to a page allocator).

    BTW: I _really_ like those pictures!

  41. JFred Says:

    I’ve been coding for several decades now. Three things:

    1. If you have areas where the size of the allocated block is always the same, consider preallocating an array of these blocks and thereafter simply moving them back and forth between a free list and the place they are used, with linked list operations. Very fast and no frags. Requires recoding.

    2. Look at the http://www.microquill.com site. They’ve been selling diagnostic tools and replacement malloc/free systems for years now. Their claim to fame is threaded malloc/free but their tools address fragmentation with memory pools et cetera. You will find their tech support material enlightening.

    3. Using malloc/free all over the place is just not a good idea. Unfortunately, that’s the way they teach it.

  42. anonymouse Says:

    My firefox bloats daily to over 1GB in size. No extensions or plugins involved, using only latest and official builds.

    I for one am very happy this is being actively addressed.

    Thanks!

  43. Alex Says:

    Try implementing your own memory manager.
    Implement a class to handle a pool of fixed-size blocks and have a few instances with different sizes to provide a good fit for your usage patterns.

    template <size_t BlockSize>   // fixed size of the blocks in this pool
    class BlockAllocator
    {
    public:
        void* alloc ();
        bool free ( void* );
        bool isManagedByMe ( void* );
    };

    Measure the kinds of blocks you are using and how many at peak.

    Say you have a list of block managers ordered by increasing block size. The alloc method should try to allocate from those block managers until one alloc succeeds or, if all fail, fall back to the OS alloc function.

    I have used this a few times on computer games and it works very well.

    Let me know if you have any questions.

    I use Firefox a lot, with many open tabs, and it usually uses 100-200 MB; 300 MB is not uncommon. I am considering switching to Opera because of this. I hope you have luck fixing the issue.

  44. Finite Says:

    Thanks for the informative answers, I see what you mean about the problem of not being able to update all those pointers.

    So, how hard would it be to replace all the calls to malloc etc. with wrappers that use handles instead? Not quite as easy as doing a search and replace, I’m guessing? :)

    Also, thank you very much for your work on this, and for the excellent pictures and clear description of the issue. I’m one of the many who have long wondered why I need to restart FF [at least] every other day lest it eat all my swap. So, it is great to hear that someone is making progress on this.

  45. Max Says:

    Would it be feasible to annotate malloc calls with a lifetime hint? Caches get allocated LONG_LIFE, but temporaries get allocated TEMPORARY, etc. And then you can use separate arenas for different lifetimes. And if you have a default arena, you can annotate the allocation hotspots as necessary.
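
    Sketched out, the hint could just pick one of several arenas (hypothetical names, not a real API; malloc stands in for a real arena to keep this short):

    #include <cstddef>
    #include <cstdlib>

    enum Lifetime { TEMPORARY, PAGE_LIFETIME, LONG_LIFE, NUM_LIFETIMES };

    // One region of storage per lifetime class, so short-lived blocks
    // never end up wedged between long-lived ones.
    struct LifetimeArena {
        void* Allocate(size_t size) { return malloc(size); } // stand-in
    };

    static LifetimeArena gArenas[NUM_LIFETIMES];

    void* AllocWithHint(size_t size, Lifetime hint) {
        return gArenas[hint].Allocate(size);  // route by caller's hint
    }

    // Hotspots get annotated: AllocWithHint(n, TEMPORARY) for parser
    // scratch, AllocWithHint(n, LONG_LIFE) for caches, and everything
    // else can use a default arena.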

  46. Nicolas Says:

    This seems to explain why RAM usage in Firefox keeps growing indefinitely. I rarely shut down my PC, so after a week it is terrible; what I do is kill FF, then start it and restore the session.

    There is also a different problem, which is why I started killing it in the first place: CPU usage. Even if I’m not using FF, it suddenly starts using most of the CPU; this has happened since version 1, especially when I leave it on for weeks or when using hibernation. I hope someone finds the problem. IE7, although I’d rather not use it, is a lot faster and doesn’t have these same problems. You can open the same pages in both and leave them for some time to compare.

  47. Chris Ovenden Says:

    Firstly, I’d like to express my gratitude for addressing this problem, whose existence has been denied by some, and for coming up with something forward-looking and constructive.

    I am a heavy Firefox user and supporter, and have a serious and, to some, dismaying multiple-tab habit. I have seen FF creep towards 1GB memory usage in the course of a day. Recent versions do seem a lot better in this regard, however.

    I’m not a C++ programmer, and know practically nothing about GC, so feel free to correct my ignorance, but I would have thought that tab closing was the moment to trigger some kind of memory compacting routine, which could perhaps lie idle the rest of the time. We know that some memory has been/should be reclaimed at this point, and there should be no noticeable pause because a previously-rendered page is being shown (or nothing, if it’s the last tab).


  48. Stuart,

    Why not ping the guys at U of Texas and see if they can help?

    http://www.cs.utexas.edu/users/oops/papers.html

    They have probably forgotten more about allocators than most of us will ever learn.

    Dejan


  49. Is there anything the end user can do to prevent memory leaking? I read that Firefox 2.0.0.8 had the leak problems solved; is this not so?

  50. bobh Says:

    Maybe a mix of Adam B’s multiple heaps and a garbage collection routine. For garbage collection, you might need double indirection so that the program stores a pointer to the memory block and always accesses that pointer to get the memory, then the garbage collection can move the block and adjust the pointer (or copy the block and then change the pointer).

    If multi-threading is a problem, then you need locks on each region of the storage area. A given thread has to first check that the lock is open, then put its ID in the lock, then access memory, then unlock that region. Maybe divide the storage into 10 regions, or whatever works, to allow more multi-threading and fewer traffic jams. If a thread needs to store new data, it would check the locks to find an open region. The garbage routine might run during otherwise idle time.

  51. James Richardson Says:

    Suggest you look at Hoard and HeapLayers.

  52. Dmitry-Sh Says:

    Is it possible to get the same illustrations for other browsers (IE, Opera, Safari)?

  53. Olivier Says:

    glib (gtk’s base library) has an internal memory allocator for small fixed-size memory chunks. It is supposedly very fast (I haven’t seen any data myself). It’s called the slice allocator: http://library.gnome.org/devel/glib/unstable/glib-Memory-Slices.html

  54. Baczek Says:

    I’m with the 1958 guys: a copying GC would actually improve performance and memory usage, and kill some memory leaks (if there were any) as a side effect.

  55. Stephane Rodriguez Says:

    If you are on Windows, I recommend measuring “page faults” (there is a column in the Task Manager). The count seems pretty high with Firefox. It means many malloc()/VirtualAlloc() calls, causing fragmentation, causing slowness.

  56. Carlos Says:

    Glad to see someone is working on this.
    Having a PC running 24/7 for months, I need to regularly close/restart FF because every 4 or 5 days it grows to use over 500 MB of RAM.

    When I restart and open the exact same tabs, it then uses a lot less memory.
    I have a picture of it at my post (the post is in Portuguese, though):
    http://ptnik.blogspot.com/2007/11/browsers-com-alzheimer-e-outras-perdas.html

    It would be nice for it to regularly “auto-clean” itself, either time-based, via a manual command, or just when it reaches a preset memory use amount.

    But worse than the actual memory use is that it gets sluggish when it starts using all that memory. And for me, the most important thing in any program is its response time.

  57. Lex Spoon Says:

    Interesting stuff. A few thoughts:

    First, surely you should start by getting the best general purpose allocator you can. No, it’s not a silver bullet, but why make things hard on yourself? You can stick to the usual malloc/free API and semantics, while getting better prevention of fragmentation. Plus, Firefox will have good memory allocation on all platforms, including those whose native malloc implementation is terrible.

    Second, instead of writing fancy allocators with various constraints on how you use them (all allocations the same size, all references via handles, etc.), you might consider transitioning code over to a garbage collected language such as Javascript. As I understand, Firefox is already following that trend, so this is just another reason to keep that trend going.

    Anyway, good luck, and thanks for posting! There is a lot of theory about memory management, but it is hard to tell how things really work until people write experience reports like this one.

  58. dc Says:

    I was the first to bring to light the serious problem that memory allocation brings to software development.
    Do programmers really care about where their software goes?

  59. Seth Wagoner Says:

    As a commercial add-on developer, I think this is hugely significant, and you’re to be congratulated for establishing so clearly the source of one of the biggest causes of blogospheric complaint about Firefox, and sometimes about addons. If we ever meet in person, remind me to buy you a beer (and hopefully they’ll have a good Kiwi beer available!).

  60. Berend de Boer Says:

    Boy, am I glad I use a language with garbage collection, and not only that, a movable garbage collector, so memory is automatically compacted for me (http://www.eiffel.com/)

    There is a reason these products exist…


  61. [...] look at what is going on under the hood. We’ve long had suspicions that we were being hurt by memory fragmentation, but it wasn’t until recently that we had built good tools to fully diagnose the [...]


  62. [...] after further investigation, the result is that this phenomenon depends on memory fragmentation (link). It seems a good number of measures to limit the phenomenon will arrive in Firefox 3 (another [...]

  63. mike2007 Says:

    interesting

  64. Julien C Says:

    “As you might imagine, given the size of our codebase, we do allocations from lots and lots of different places. Fortunately, there are several hot spots. Those include Javascript, strings, sqlite, CSS parsing, HTML parsing, and array growing. For some of these we don’t need to heap allocate and can just do temporary allocations on the stack.”

    Don’t do this. You will turn the current Minimo effort into a nightmare. Many mobile platforms have a very small stack, and you don’t want to put big objects like long strings on the stack. The heap is still the way to go.

    “As as last resort, we could replace malloc and new entirely with something more generally better. I don’t think we should do this until we’ve done as much of the other things as possible.”

    As far as I know, the other “big players” in multiplatform web browsers (I’m thinking of a Norwegian one right now) use memory pools and custom allocators. Why do you seem to think it’s bad?

  65. Steve Chapel Says:

    Berend de Boer said:

    “Boy, am I glad I use a language with garbage collection and not only that, a movable garbage collector, so memory is automatically compacted for me (http://www.eiffel.com/)”

    First, garbage collectors do not ensure that you cannot leak memory (http://developers.slashdot.org/article.pl?sid=07/11/17/0552247); second, Java has a compacting garbage collector, and we all know how memory-hungry Java programs can be; and third, which browsers are implemented with a compacting garbage collector?

    There are good reasons why these products are not more widely used than they are. There is no panacea for memory problems, only tradeoffs.

  66. pavlov Says:

    Julien: We certainly shouldn’t go crazy with stack allocations; we need to be careful with recursion and the like. Most of our “large” stack allocations will be arrays and strings, and for both it would be easy to limit their size.
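
    The usual pattern for that is a stack buffer with a heap fallback, roughly like this (a simplified sketch, not our actual string/array classes; works for plain-old-data element types):

    #include <cstddef>
    #include <cstdlib>

    // Small cases live entirely on the stack; anything over the inline
    // capacity spills to the heap, so stack growth stays bounded.
    template <typename T, size_t InlineCapacity>
    class AutoBuffer {
        T mInline[InlineCapacity];   // stack storage for the common case
        T* mData;
    public:
        explicit AutoBuffer(size_t count) : mData(mInline) {
            if (count > InlineCapacity)      // too big: use the heap
                mData = static_cast<T*>(malloc(count * sizeof(T)));
        }
        ~AutoBuffer() {
            if (mData != mInline)
                free(mData);
        }
        T* get() { return mData; }
    };

    // Usage: AutoBuffer<char, 64> buf(len); // no malloc when len <= 64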


  67. [...] usage. Several days back we gained some very interesting and insightful information concerning the rather severe memory fragmentation issues that Firefox suffers from. After reading that article, it is clear that the Mozilla team needs to do some serious work with [...]

  68. Kelly Says:

    tcmalloc seems to work fine with Firefox. The LD_PRELOAD method works, but you end up with some extra small heap files from all the shell scripts that start firefox-bin. However, the zone allocator implementation in NSPR probably makes it much less effective.

    I was also able to compile it in pretty easily, but I didn’t bother trying to make it an easier-to-configure option via .mozconfig. I tried using --enable-wrap-malloc, but tcmalloc isn’t a wrapper like that.

    Instructions are posted here:

    http://siliconvista.blogspot.com/2007/11/compiling-firefox-with-tcmalloc.html

  69. pavlov Says:

    Kelly: I’ll have images with tcmalloc pretty soon. We’ve got a new tool to replay allocation logs with different allocators so we can see how they compare. Early reports show both tcmalloc and nedmalloc being about 10% faster on pure malloc/free speed. Not sure how much faster things would be in tests that matter. I’m still hooking up code to get the fragmentation info out of tcmalloc that I need.

  70. Roger Glynn Says:

    I am not capable of interpreting or using the level of info presented here. Basically, I have not added any extensions to Firefox (or any other program or browser). However, I typically see 700K or more memory used by Firefox. Right now on this PC (just started) I have 58K (now 66K, now 51K) FF memory with 2 FF windows and a total of 3 tabs open. My other (newer/faster) XP machine with 2G memory is basically locked up right now (i.e., clicking on the tabs at the bottom of the desktop will not switch to the application) with 325K memory usage. I basically need 2 PCs to keep one going at any given time. It seems that the emperor has no clothes??? Internet Explorer also seems to get “jammed up” fairly often. The reason I prefer FF is that it reopens websites whenever FF is closed; very handy. However, once again, it is not ‘worth it’. What can be done to help this situation for the generic computer person?

  71. pavlov Says:

    Roger: Ideally you won’t need to do anything except upgrade to Firefox 3 once it is out. We’ll be getting lots of great fixes in to Firefox 3 that should help you a lot.

  72. Kroc Camen Says:

    A picture is worth 4096 Bytes :)

  73. pavlov Says:

    Clayon: Aside from disabling extensions and cleaning out your profile from time to time, the contents of that article are pretty bogus. Especially its suggestion to use config.trim_on_minimize, which will just hide things from you and slow things down. It won’t really help anything.

  74. MonkeyBoy Says:

    Memory fragmentation has long been addressed by people with serious languages that depend on garbage collection, such as Lisp.

    Copying garbage collection (and to some extent generational garbage collection) was invented to deal with the fragmentation problem.

    From poking around I can’t tell whether FF3 incorporates the Tamarin project [1], which uses a common JavaScript/”ECMAScript 4th edition” engine provided by Adobe, or how it garbage collects.

    [1] http://www.mozilla.org/projects/tamarin/

  75. pavlov Says:

    MonkeyBoy: Firefox 3 doesn’t use Tamarin. Most of Gecko (the platform Firefox is built on top of) is written in C++ with heavy use of pointers and is not garbage collected. Our JavaScript engine (SpiderMonkey) does do garbage collection on its own objects, but most of those objects are in arenas already, so they don’t play a role in the fragmentation I’ve described here.


  76. [...] in Uncategorized After seeing Vlad’s pretty pictures for firefox’s fragmentation, I wanted some of that bling for myself. Hence the development of a prototypical GUI for memfault, [...]


  77. [...] developers seem to have a pretty good idea where the issue is. They’re just not 100% sure how to fix [...]


  78. [...] betas, which promise lots of new and cool things (and they are finally seriously tackling the performance and memory problems). But I can’t be bothered to write about all of that right now; civ4 wants [...]


  79. [...] deals with the tiresome problem of Firefox’s growing memory consumption (memory leaks) (here is a blog post about it). A quick look at my Task Manager: 202MB of RAM after the Firefox restart from [...]


  80. [...] Stuart Parmenter investigated this issue and his results are that this is all about memory fragmentation. As the blog post that I read that references this claims – Firefox doesn’t use a lot of memory, it just looks like it. This is of course a gross underestimation of the problem, because it doesn’t matter what the user perceives as high memory usage, but what the operating system perceives as high memory usage and as Stuart Parmenter’s results show – the operating system sees Firefox gobbling lots of memory and doesn’t care that the browser doesn’t use most of it. [...]


  81. [...] the first article explains the problem a bit via examples (images) and explains the real problem well [...]

  82. Chris Says:

    “As as last resort, we could replace malloc and new entirely with something more generally better.”

    As Julien C said, I don’t see why you think this is a last resort. Because it would require changes throughout the codebase? That’s true, but I think the first step should be to have a single unified memory manager, even if right now all it does is call malloc and free. It would give you a lot of flexibility for monitoring and customizing your memory management.

  83. Sandro Magi Says:

    Two papers analyzing memory allocation that are of interest:

    Reconsidering Custom Memory Allocation
    http://www.cs.umass.edu/%7Eemery/pubs/berger-oopsla2002.pdf

    Scalable Locality-Conscious Multithreaded Memory Allocation
    http://people.cs.vt.edu/~scschnei/papers/ismm06.pdf

    They both propose generalizations of heaps and arenas.


  84. [...] For a technical perspective on Firefox memory leaks (and memory usage), be sure to check out Pavlov, a Firefox developer. [...]


  85. Standard ways of dealing with fragmentation are fixed-size memory allocators and arenas. Both map naturally onto a browser’s workflow. See here:

    http://1-800-magic.blogspot.com/2007/11/guerilla-guide-to-native-memory.html


  86. [...] about before, long running applications such as ours can wind up wasting a lot of space due to memory fragmentation. This can occur as a result of mixing lots of various sized allocations and can leave a lot of [...]


  87. [...] running for a long time can easily end up wasting a lot of memory because of its fragmentation: having to use many allocations of different sizes, there can remain a quantity of [...]

  88. ca1 Says:

    Rich: Google TLB. Of course you can defragment memory, but you must be the OS, running in privileged mode.

