Torsten Curdt’s weblog

Memory on the iPhone

Now here is the question: how much memory does the iPhone have? And how much of it is OK for an application to use? Curious as I am, I did some research and wrote a little test project. This got me some odd results.

According to what you find on the net, the iPhone has a total memory size of 128 MB, and your application should not use more than 46 MB of it. But what exactly happens when you try to allocate more?

So I wrote a little application that continuously allocates more and more memory. In fact it mallocs/frees the memory on every timer tick.

Now this is where it gets odd. While the sysctl() calls do report something around 128 MB of RAM, I can malloc() way beyond that. In fact, calling malloc(700000000) does not fail at all! When I run the application on the iPhone itself, it stops at around 719 MB. When I run it through Instruments, the whole device freezes at around 46 MB. This has been reproduced on 2.1 and 2.2 on different devices.

- (void)tick
{
    allocated = allocated + size;

    if (allocatedPtr) {
        free(allocatedPtr);
    }

    allocatedPtr = malloc(allocated);

    if (!allocatedPtr) {
        NSLog(@"out of memory at %ld", allocated);
    }
}

Later I found someone who stumbled across the same thing.

Lazyweb: what is going on here?

You can download the test application here. But as a disclaimer: you run this code at your own risk!


So after all, this means the result of malloc() has different semantics than I expected. Adding the following piece of code makes the program behave and gives the expected result.

    long *p = (long*)allocatedPtr;
    long count = allocated / sizeof(long);
    long i;
    for (i = 0; i < count; i++) {
        *p++ = 0x12345678;
    }

So it turns out that if you allocate (and use!) around 46-50 MB in your iPhone application, it will just get terminated.

  • Simo
    Yes, as mentioned, same happens in linux. As a funny trivia, this is documented in Bugs section of malloc: http://linux.die.net/man/3/mal...
  • It's called memory 'overcommit', and it's a pretty standard feature on most Unixes.
  • @philippe: wouldn't that be cool? :)

    I did some further tests. And it turns out using calloc or just writing some tiny little portion of the memory is not good enough. But if you access/fill the whole allocated block, you get the expected result. So it seems that Vas is right - not that this is a surprise ;)

    ...but I still find it surprising, as this has a couple of implications for malloc/calloc. Is it the same on a desktop? Did I just never notice before? That was an interesting exercise.
  • It's possibly just optimistic page assignment - pages mapped with no backing store until you actually write to them (they possibly read as zero, or may obtain a backing store on read as well). Try actually writing one byte every 4096 and see how far you can get.
  • philippe
    719MB, impressive :)

    Did you try to use calloc instead of malloc? Maybe the allocation does not fail, but the memory may not be usable...