Vincent Gable’s Blog

May 24, 2008

memcpy, memmove, and Speed over Safety

Filed under: Design, Programming
― Vincent Gable on May 24, 2008

“We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.”

–Knuth, Donald. Structured Programming with go to Statements, ACM Journal Computing Surveys, Vol 6, No. 4, Dec. 1974. p.268.

If you’ve programmed in C, you’ve seen memcpy (pronounced “mem-copy”). It’s the preferred way to copy a block of memory somewhere. There is a safer and equally easy-to-use alternative, memmove. But it has never gained traction, in spite of its advantages. As we will see, this is unfortunate, and may be an example of a programming culture that values (superficial) speed over safety.

The Difference

The difference between memcpy and memmove is simple: when the source and destination blocks of memory overlap (for example, if they are the same), memmove works, but memcpy‘s behavior is undefined. It might work. It might crash. It might corrupt data. It might behave differently during debugging.
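
To make the overlap case concrete, here is a minimal sketch (my illustration, not code from this post) where the source and destination overlap, so memmove is required and memcpy would be undefined:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[] = "abcdef";
        /* Shift the string one byte to the left. Source and destination
           overlap, so memmove is safe here and memcpy is not. */
        memmove(buf, buf + 1, strlen(buf + 1) + 1);
        printf("%s\n", buf);   /* prints "bcdef" */
        return 0;
    }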

memmove is safer.

memcpy can be faster, and usually is. There are fewer restrictions on its implementation, so more can be done to optimize it. But not necessarily a lot more; in fact, it could even be slower than memmove, and sometimes it is. On some systems, memcpy may simply be memmove.

Faster

So how much faster can memcpy be than memmove? Both take O(N) time to copy N bytes of data, so in some computer-science circles memcpy wouldn’t be considered faster than memmove at all.

Algorithmically, memmove needs one extra if-statement before it starts to copy, to determine whether it must copy front-to-back or back-to-front. (See this reference implementation for an example.) The only other advantage memcpy may have is esoteric processor-specific instructions that assume restricted pointers. So unless there is a “memcpy instruction”, we can expect the difference in speed to be pretty small.
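
To illustrate, here is a minimal sketch of that extra check (a simplified version in the spirit of the linked reference implementation, not actual libc code; my_memmove is a made-up name):

    #include <stddef.h>

    /* Copy front-to-back unless the destination starts inside the
       source block, in which case copy back-to-front so no source
       byte is overwritten before it has been read. */
    void *my_memmove(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        if (d < s) {
            while (n--) *d++ = *s++;      /* front-to-back */
        } else if (d > s) {
            d += n;
            s += n;
            while (n--) *--d = *--s;      /* back-to-front */
        }
        return dst;
    }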

But the real proof is in the pudding and … memmove is faster than memcpy! At least on my laptop, with this test, which basically copies 4MB of memory 100 times. See for yourself:

$ gcc -O0 memcpy_memove_lab.c && ./a.out
   Surprise!
   memmove is 1.404409 times faster than memcpy

$ gcc -O3 memcpy_memove_lab.c && ./a.out
   Surprise!
   memmove is 1.054571 times faster than memcpy

“This must be an unfair test!”, you’re probably thinking. Yes, it is. Or at least it’s a dangerously narrow test. But it is also an honest and simple one: it is the first code I hammered out to get some numbers for this article. Proper benchmarking is very difficult, and I’m not going to attempt it.
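
For the curious, the naive test has roughly this shape (an assumed reconstruction; the actual memcpy_memove_lab.c linked below may differ). Note that it never touches the buffers before timing, which, as the comments below point out, penalizes whichever function runs first:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    enum { SIZE = 4 * 1024 * 1024, RUNS = 100 };

    int main(void)
    {
        int i;
        char *src = malloc(SIZE);
        char *dst = malloc(SIZE);
        if (!src || !dst)
            return 1;

        /* Time memcpy first, then memmove, on the same buffers. */
        clock_t t0 = clock();
        for (i = 0; i < RUNS; i++)
            memcpy(dst, src, SIZE);
        clock_t t1 = clock();
        for (i = 0; i < RUNS; i++)
            memmove(dst, src, SIZE);
        clock_t t2 = clock();

        printf("memcpy:  %f seconds\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("memmove: %f seconds\n", (double)(t2 - t1) / CLOCKS_PER_SEC);

        free(src);
        free(dst);
        return 0;
    }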

The real lesson from this naive benchmark is that you must measure your code before concluding that an optimization really is faster. I would never have guessed that memmove would be up to 40% faster at copying 4MB. But it was, in this particular instance.

On a related note, a significantly faster memcpy (say 2x) won’t have an appreciable impact on an application’s performance, unless the application spends a surprisingly large portion of its time copying memory (Amdahl’s law). For example, let’s say that 1 out of every 20 seconds is spent copying memory with memmove. (That’s a lot of time just moving bits around! Programs should do something with bits, not just move them.) So we replace memmove with a memcpy that is a full 2x faster (which is optimistic). Surprisingly, we only get a 2.5% speedup! Remember that 1 in 20 is only 5% of the program’s total time; cutting that time in half eliminates just 2.5% of the program’s total execution time. If we can find an optimization that speeds up the other 95% of the program by just 2.7%, we get better performance overall.
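
Here is a quick sketch of that arithmetic as code (my own illustration of Amdahl’s law, not from the original post):

    #include <stdio.h>

    /* Amdahl's law: overall speedup when a fraction p of the
       runtime is accelerated by a factor s. */
    static double amdahl(double p, double s)
    {
        return 1.0 / ((1.0 - p) + p / s);
    }

    int main(void)
    {
        /* 5% of runtime spent copying, copy routine made 2x faster: */
        printf("overall speedup: %f\n", amdahl(0.05, 2.0));  /* ~1.026 */
        return 0;
    }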

So memcpy is unlikely to make a large difference to program performance, in general. Switching memcpy implementations is a “local optimization” that has much less value than changing the algorithm that requires all that copying. It may even suddenly become slower when hardware is upgraded.

Safer

How much safer is memmove? This is a hard dimension to quantify, and I don’t have a satisfying answer. My instinct tells me that it isn’t dramatically safer. I don’t have any data to support this, but I believe it’s very rare to be copying memory onto itself, compared to other memory-management errors like a double-free().

But the bottom line is that there are fewer ways your program can fail if you use memmove instead of memcpy. Period. And since memcpy‘s behavior is undefined when the source and destination overlap, it can be a vicious bitch to debug.

Speed over Safety

memcpy is preferred by a significant majority of C programmers. I don’t know exactly how many, but a Google Fight shows that memcpy is almost 6x more talked about than memmove (as of 2008-04-11). Anecdotally, memmove is mostly unheard of in my experience. It seems the call of “faster” really is a siren’s song for developers, luring them into dangerous code.

I think this is very unfortunate, especially because the performance advantage of memcpy just isn’t that big in general! (Sometimes it’s even harmful.) Given the unreliability of software, anything that eliminates bugs is a Very Good Thing.

I wish I knew the full story of memcpy winning the popularity contest with memmove. By accident or design, it has left us with a programming culture that values superficial speed over safety.

For Further Reading:

Optimizing memcpy — includes some graphs showing the tradeoffs between optimizing for large chunks of memory (say copying pictures), and small data structures.

Why aren’t my optimizations optimizing? — “Optimizing code is a tricky business.”

memcpy_memove_lab.c — The naive benchmark from this article, plus a reference implementation of memcpy and memmove.

8 Comments

  1. I played with your benchmark a little. If I turn on basic optimization (e.g. -O1 or -O2, etc) I can get different results.

    Also, if I swap the order of the for-loops (i.e. do memmove first and memcpy second), I can also get different results.

    Combining the two things, I always get memcpy as the faster run.

    (Tested on Intel Core 2 Duo iMac).

    Comment by Eric Wing — January 16, 2009 @ 4:16 pm

    Thanks Eric, that really shows how variable low-level optimizations can be. I would not have guessed that something could be faster with no optimizations and with full optimizations, but slower with the levels in between.

    The for-loop ordering thing sounds like bad benchmarking on my part. I should have touched the data first to warm up any caches. I suspect the first iteration of the first loop to run may be taking much longer while data is fetched into the caches. That’s why timing memmove first made it slower than memcpy. Mea culpa.

    Also, it occurred to me that since restrict is now a keyword in C99, the compiler might be able to help out a little more with making sure memcpy is being used correctly, so memcpy is a less dangerous default choice today.

    But it’s still telling, and perhaps disturbing, that memcpy is the cultural choice.

    Comment by Vincent Gable — January 17, 2009 @ 3:28 am

  3. A few months back, I spent a few days doing extensive benchmarks on various assembler versions of memcpy (x86, SSE w/ non-temporal writes, prefetching, etc), and found that it’s not very difficult to beat the compiler if you’re doing memcpy’s of more than a few bytes at a time (especially if you use non-temporal writes). On an Intel P8400 w/ 3GB DDR2-1066, I’d easily double the memory bandwidth (in the average case) over the best inlined-code that the MSVC compiler could generate, and in certain cases, the performance was an order of magnitude higher (especially if the working set is resident entirely in L1 cache).

    memcpy is slow because compilers generally don’t generate great code. If memmove is faster (especially 40% faster!), it’s because the compiler probably isn’t doing its job (excluding the aforementioned point about warming up the caches). I can’t think of any possible circumstances where memmove could be faster. It should degenerate into memcpy when the blocks don’t overlap. :\

    memcpy makes perfect sense to choose as the “natural” method of copying memory: in my experience, most memcpy’s generally occur on two buffers that can’t possibly overlap, and so memcpy is the natural, most sensible, and definitely the best choice for doing the copy. We can all agree that more constraints == more possible optimisation, even if the compiler doesn’t always do it this way. :) Using memmove in these cases wouldn’t be wrong, but it also wouldn’t be good practice. Not much point in adding code to handle situations that can’t possibly occur (excluding, of course, cosmic ray interference).

    However! After working at Ericsson in embedded development, I tend to agree with at least considering memmove. We have a lot of subtle problems with memory corruption that could be solved by this, and in a worryingly high number of places, we also have _really_ dubious reinvention of the memmove wheel:

    > if ((uint32)buf1 + size < (uint32)buf2)
    >     memcpy()
    > else
    >     strange_memmove_style_code

    I guess the obvious bottom line, if you’re using C/C++ and using memcpy/memmove, is: if you can’t use them correctly, then it’s time to move to an easier language. Or time to start having less coffee and more sleep. ;)

    Comment by Anonymous — October 17, 2009 @ 10:51 am

  4. I can’t think of any possible circumstances where memmove could be faster. It should degenerate into memcpy when the blocks don’t overlap.

    Here’s a hypothetical one. memcpy is implemented in hand-tuned assembly, and optimized for large chunks of memory. memmove is very simple, less than 6 lines of C (something like my implementation, but with memcpy_{backwards,forward} inlined). But the memmove function is put inline in a header.

    Now imagine a loop that copies lots of little data structures.

    Invoking memcpy will incur the overhead of a function call, but memmove won’t, because it’s inlined by the compiler. The result: the loop goes faster with memmove. Even if we did inline memcpy, its use of naked assembly would prevent the compiler from optimizing the function it was put in, and that would probably be a bigger performance hit.

    But I haven’t actually tested this scenario. And I believe that if you don’t test, you don’t know, when it comes to hardware-level optimizations.

    Comment by Vincent Gable — October 18, 2009 @ 1:50 am

  5. In your example you run memcpy() on uncached memory, memmove() afterwards so it can run on cached memory.
    For fairness you should iterate at least once over both of the memory arrays (and write something to every byte in it) before you start any benchmarking.

    Tom S.

    Comment by Tom Strainer — November 18, 2009 @ 5:19 am

  6. In your example you run memcpy() on uncached memory, memmove() afterwards so it can run on cached memory.

    Yes. I guess I didn’t point that out clearly enough in comment #2 and the article. The benchmark isn’t a measure of real-world memmove() vs. memcpy() performance.

    What is interesting about that benchmark, and why I left it in the article, is that it shows how small (and unpredictable) the performance gains from using the less-safe memcpy() generally are. Here I am copying 4194304 bytes of data a hundred times, and changing functions has less of an impact on performance than changing the order they are called in. (The term “premature optimization” comes to mind…)

    In some cases using memcpy() matters, but not, I think, in most cases.

    For fairness you should iterate at least once over both of the memory arrays (and write something to every byte in it) before you start any benchmarking.

    Yes, I need to warm up the caches. Or make sure they both start cold. But that’s not as easy as just touching every byte once! Each array is twice the size of my L2 cache, so if I wrote to every byte of A, then ditto for B, I’d only have the last half of B in cache. That’s still not fair!

    Comment by Vincent Gable — November 18, 2009 @ 5:44 am

    If the memory areas don’t overlap, memmove should be just a call to memcpy, so both calls would have the same performance; otherwise the compiler / standard libraries aren’t optimized. But if the memory areas do overlap, memmove may have to copy the memory backwards, which is always much slower (memory caches and other parts of the hardware aren’t optimized for that, and there is nothing we can do about it). However, as you said, memcpy could destroy the data, and that’s not a viable option.

    In summary, memmove and memcpy should be equally fast (otherwise, use another implementation), and the overhead of testing whether the memory overlaps is negligible (but memmove can never be faster in real scenarios). So I agree, memmove should always be preferred over memcpy.

    Comment by Santiago — January 20, 2010 @ 11:10 pm

    I disagree: it is usually obvious when using memcpy that the memory will never overlap, and thus there is no danger. The function is commonly used by others (so it’s not making code harder to follow for the sake of efficiency).

    If you are going to add inefficiency, then do it where mistakes actually commonly happen. (E.g. buffer overruns, including strings, and pointers to memory that has not been allocated or has already been freed, including from bad error handling.)

    Speed: there are many different ways to do both memcpy and memmove in the real world, so one situation doesn’t mean much. Intel processors, for example, have dedicated memcpy-type instructions, and sometimes the SSE instruction set. Most modern processors can move 4 bytes much faster than 1.

    Comment by David — February 8, 2012 @ 8:49 am
