====JAVA IS NOT FASTER THAN C====

A common claim among Java programmers (and among practically no one else) is that Java is faster than C. Let's get this straight once and for all: that claim is false.

This article is grouped into sections and is intended to be read linearly. If you just want a quick refutation of some of the more common arguments, I try to cover the major mistakes in the relevant sections. The reader is assumed to be familiar with basic computer science topics and the technology involved; a nice overview of the latter may be found on the Wikipedia page on this topic. In general, topics proceed from the well-known to the arcane. More advanced readers may like to skip some of the more common, basic mistakes presented first. I have deliberately simplified these early sections; the technical reader will be relieved to know that a more complete picture is addressed later. I try to give a fair treatment of this issue. Please contact me if you find any technical inaccuracies.

====COMPILATION AND THE JVM====

First, C programs are translated to assembly language, which is assembled into machine code: raw bytes representing individual instructions. This translation is essentially direct, so there is no unnecessary garbage for the computer to process; what runs is just your algorithm. You cannot get faster than that; the notion doesn't even make sense. When a Java compiler is implemented well, it will allocate an executable segment at run time, dump a compiled binary representation of the program into that segment, and then execute that segment, emulating as necessary. So at best, and only if you ignore that extra overhead, a running Java program could theoretically equal the speed of a C program. But that doesn't happen.

You hear about the Java Virtual Machine (JVM) all the time. That's because the JVM is the virtual machine that the Java compiler writes code for, and this virtual machine is generally itself a program running on the CPU. This level of indirection makes Java at best half the speed of C. The penalty doesn't always apply, since Java bytecode is compiled down to native code rather than interpreted whenever possible, but that is by no means guaranteed. As far as I know, for example, no Java compiler compiles classes' constructors to native code; for technical reasons they are always interpreted.

Java also has to jump through more computational hoops than C does. In the name of standards compliance (which isn't necessarily a bad thing), Java code behaves consistently across platforms in a way that C code famously does not, even between compilers; this applies in particular to library routines (sin, cos, exp) and to floating-point operations. That consistency requires a level of emulation and checking that slows Java down. Additionally, features like bounds-checking on arrays, and the lack of guaranteed function inlining or tail-call elimination, add extra runtime cost. Also, see the architectures section.

====ARCHITECTURE-SPECIFIC OPTIMIZATION AND JIT====

C code, since it is compiled separately for each target architecture, can take advantage of special CPU instructions available on that architecture, often without any programmer input. Even C programmers are largely unaware that these "free" optimizations happen, which is why novice compiler writers are perpetually confused by the "slowness" of their first, basic compilers. Strangely, then, when Java is not precompiled, its proponents claim that JIT compilation allows it to achieve platform-specific optimizations as though this were something new. C programs already get exactly that for free, just by virtue of being compiled for their target architecture!
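For the curious, here is a minimal sketch of what "free" architecture-specific optimization looks like on the C side. The function name and compiler flags are illustrative assumptions on my part; the exact instructions emitted depend entirely on the compiler, its version, and the target.

    #include <stddef.h>

    /* Plain, portable C: no intrinsics, no architecture-specific source.
     * With optimization enabled (e.g. "gcc -O3 -march=native"), mainstream
     * compilers will typically auto-vectorize this loop into SIMD
     * instructions (SSE/AVX on x86); the exact code depends on the target. */
    void saxpy(float *y, const float *x, float a, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

Nothing in the source mentions SSE or AVX; the compiler chooses those instructions because it knows the target architecture at compile time.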
Saying that Java sometimes can isn't really a point in Java's favor. I would also like to point out that JIT compilation, while its cost is amortized, is yet more runtime overhead. Even when precompiled, as a result of the "write once, run anywhere" ideal, a Java program written for a smartphone will run in fundamentally the same way as on a supercomputer; perhaps a little inconsistency is exactly what we want! By trying to make a Java binary completely portable, the compiler must pander to the lowest common denominator of CPU functionality anyway. In fairness, a truly portable binary is a laudable and genuinely impressive feat. While a C program compiled for x86 will run on most desktops and laptops, it can fail on many other systems. This, of course, can be rectified by recompiling the C program for the target architecture. So Java may be marginally more portable than C in the sense that C programs are best recompiled for each platform, but that's not what's at issue here. The point is that the extra layer Java inserts for architectural abstraction doesn't come without a significant performance hit. Furthermore, any (runtime!) optimizations a JIT compiler can do only approach what C compilers regularly do automatically; they aren't some magical advantage.

====BENCHMARKS====

You will on occasion see benchmarks proclaimed to show Java having an advantage over C. To adapt a famous quip: "In the computer industry, there are three kinds of lies: lies, damn lies, and benchmarks." The backstory behind these benchmarks, then, is crucial. Java programs preallocate huge swaths of memory (e.g. "Hello World" compiled with Java 1.7 runs with 10 MB of RAM) and then cut into it as they need to. This hides the latency of the malloc ("new") operation so well that many Java programmers don't appreciate what the semantics of this operation actually imply: borrowing memory from the operating system, which arranges an internal structure through the MMU. Since in my experience Java programmers typically don't have this kind of in-depth familiarity with memory anyway, Java always does this preallocation so that carelessly written, allocation-bound programs can perform better. What such benchmarks fail to control for is the fact that Java programs have effectively already allocated all of their memory; they have an unfair advantage! This wasteful technique is sometimes justified, and C can use it too (it's called a "pooled allocator"; a minimal sketch follows). If you write the same program in C using one, you'll see that it handily beats Java performance-wise. Any fair benchmark will show this.
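Here is that sketch: a tiny arena-style pool allocator in C. The names (pool_t, pool_init, pool_alloc) and the 16-byte alignment choice are mine, for illustration; a real allocator would also handle growth, per-type alignment requirements, and freeing strategies.

    #include <stdlib.h>
    #include <stdint.h>
    #include <stddef.h>

    /* A minimal bump/arena allocator: one big up-front allocation, after
     * which each "allocation" is just a pointer bump -- no system calls,
     * no per-object bookkeeping. Names and sizes are illustrative. */
    typedef struct {
        uint8_t *base;   /* start of the preallocated block */
        size_t   size;   /* total bytes reserved            */
        size_t   used;   /* bytes handed out so far         */
    } pool_t;

    static int pool_init(pool_t *p, size_t size) {
        p->base = malloc(size);          /* the one real allocation */
        p->size = size;
        p->used = 0;
        return p->base != NULL;
    }

    static void *pool_alloc(pool_t *p, size_t n) {
        n = (n + 15u) & ~(size_t)15u;    /* keep 16-byte alignment */
        if (p->used + n > p->size) return NULL;
        void *ptr = p->base + p->used;
        p->used += n;
        return ptr;
    }

    static void pool_destroy(pool_t *p) { free(p->base); }

Individual objects are never freed one by one; the whole pool is released at once, which is exactly the allocation pattern that such benchmarks reward.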
====GLOBAL OPTIMIZATION====

About the best argument Java programmers have for speed is the notion that Java compilers optimize their code more than C compilers do. This is actually (kind of) true, but it is misleading to say so without qualification. The kind of optimization referred to is global optimization, and the claim is that, since Java restricts so many language features, a Java compiler can optimize more quickly. (I'm leery that this tradeoff is actually an "advantage", but let's examine the issue anyway.) The "global" optimizations that are possible are usually limited to function boundaries and eliminate useless instructions (e.g. unnecessary loads and stores added by careless programmers), shaving off a few clock cycles in the best case, which may be significant for compute-bound applications. C compilers are perfectly capable of this too; even MSVC can do it. Standard graph-coloring register allocation removes such trivialities, and it is among the first optimizations added to new C compilers.

Java programmers also say that C compilers can't optimize tight control flow and operations (goto, switch, pointer arithmetic, etc.). They actually often can, but that's not the important point. Those language features make C faster than it would be otherwise, and when they're used properly, they make ordinary C much faster than even hand-tuned Java code could hope to be, simply because in this regard C is a more expressive language. The (false) assertion that this faster-than-Java code can't be optimized further by a compiler is beside the point. Furthermore, because C is more expressive at the low level, it allows the programmer to make unsafe (hard to prove correct) optimizations that compilers can't. Such low-level functionality permits correspondingly low-level optimization techniques by programmers, who can make assumptions about the program (e.g. based on expected data) that compilers, by their very nature, cannot. (The "restrict" keyword is a good example of one such C99 feature that can produce massive performance benefits in cache-bound applications; there is no equivalent feature in Java.)

Java programmers may be correct that Java compilers optimize more quickly. But we're not talking about compiler speed; we're talking about program speed. In any case, C compilers often take longer than Java compilers anyway: even if Java compilers optimize more quickly, C compilers optimize for a longer time. This section was intended to be more of a guideline to the issues. In reality, while I suspect that C compilers can perform better global optimization than Java compilers, as the paragraphs above should suggest, I don't think that either side can claim to be clearly, obviously better. I should also note that global optimization is almost never worth its cost anyway, no matter what language you're using. The performance difference either way is usually small. So let's talk about the big difference, then, in the next section.

====MEMORY AND CACHE AWARENESS====

The most important difference is a cultural one: C programmers are accustomed to thinking about how memory is used: datatype size and, importantly, locality in space and time. Java programmers, including (and this is key) many of the ones I have encountered in the wild, often haven't even heard of these terms, let alone lived them. You can do memory-aware computation in Java (kind of, and it's more difficult), but the important point is that when the language is taught, instructors often fail to impart how important it is to do so.

One assignment I have seen given to (C) programming students is to optimize a program to make it 100 times faster. At first it seems impossible; the code is just a couple of for loops and some simple arithmetic. But then, as you think about it, you realize that the algorithm isn't cache-aware. You start asking: will this fit in a cache line? Is a recursive function call to improve locality worth the overhead, given that the compiler won't be able to make it tail-recursive? Unrolling this loop saves arithmetic but hurts the instruction cache. As you work through the assignment, you get a better feel for how you make these tradeoffs all the time. Every programmer, in my opinion, should be required to do a similar exercise. The point is that Java deliberately obfuscates these memory considerations, and Java programmers, in my experience, are consequently unaware of them. One upper-division (Java) computer science class I observed was baffled by the classic demonstration of exchanging two nested loops walking over a 2D array, sketched below.
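Here is that demonstration, roughly as it is usually presented (the array size N is an arbitrary illustrative choice). Both functions compute the same sum; the first walks memory in the order C stores it (row-major), while the second strides across rows, so on typical hardware it runs several times slower purely because of cache misses.

    #include <stddef.h>

    #define N 4096

    /* Row-major traversal: consecutive iterations touch consecutive
     * addresses, so almost every access hits the cache. */
    double sum_fast(double a[N][N]) {
        double s = 0.0;
        for (size_t i = 0; i < N; ++i)
            for (size_t j = 0; j < N; ++j)
                s += a[i][j];
        return s;
    }

    /* Column-major traversal of the same data: each access jumps
     * N * sizeof(double) bytes, so nearly every access misses the cache. */
    double sum_slow(double a[N][N]) {
        double s = 0.0;
        for (size_t j = 0; j < N; ++j)
            for (size_t i = 0; i < N; ++i)
                s += a[i][j];
        return s;
    }

The arithmetic is identical; only the order of memory accesses differs.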
But even if Java programmers were more aware of the memory operations all programs must by necessity use, Java provides no more than superficial access to them. Most datatypes' backing storage is completely opaque. There is usually no way to know what internal representation something has, and since Java code manages memory implicitly, there is no way to know how anything is actually laid out. For example, the fact that every Java object carries extra information around with it (8 bytes of header, 12 for arrays) means that low-level API calls (e.g. OpenGL) can't be fed an array of objects naturally the way they can be fed an array of structs in C (see the sketch at the end of this section). This necessitates crockish workarounds that hurt readability and undeniably hurt performance. Much more importantly, this extra data hurts cache locality. Larger data means more frequent fetches from RAM, a greater chance of (or worse) thrashing, and higher memory usage and power consumption.

Java must also load a massive amount of data before doing anything. It is a virtual machine, after all, and as discussed previously, huge preallocations occur for even the most basic of programs. This has become enough of a problem that some JRE distributions keep a JVM running at all times, starting it with the OS, precisely because JVMs are so slow to start! Arguments about instruction speed (where C wins) don't matter very much when so much of an application's performance hinges on memory usage (where C programmers tend to know more and have far better means to work with it).
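Here is the layout sketch promised above. The vertex_t type and its fields are hypothetical, and the actual API call is omitted; the point is only what the memory looks like.

    #include <stddef.h>

    /* A hypothetical vertex record. In C the programmer controls the exact
     * layout: three floats of position and two of texture coordinates,
     * packed contiguously with no per-object headers. */
    typedef struct {
        float x, y, z;   /* position           */
        float u, v;      /* texture coordinate */
    } vertex_t;

    /* 1000 vertices occupy exactly 1000 * sizeof(vertex_t) contiguous
     * bytes. This block can be handed as-is to an API that expects packed
     * vertex data, and iterating over it touches memory sequentially,
     * which is friendly to the cache. */
    static vertex_t mesh[1000];

    /* In Java, a Vertex[] would instead hold references to 1000 separately
     * allocated, header-carrying objects scattered across the heap. */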
====ARGUMENTS FOR PRACTICALITY====

Lastly, one argument Java programmers make is that, since Java makes it so easy to write optimized code, in practice Java is faster. First, as discussed above, Java actively prevents the writing of tight code by managing memory for you and by restricting the available language features and instructions. Additionally, the culture surrounding Java does not place high value on algorithmic efficiency, and Java programmers tend to follow suit. Second, "in practice", C actually is much faster than Java. Why do you think C, C++, and FORTRAN are the de facto languages of high-performance computing? People who care about their programs being fast use these languages because they actually are the fastest. If there is any rule I have seen borne out in my years of programming, it is that, yes, in practice, code written in C (and C-like languages) is faster and of higher quality than the same kind of code written in Java. Since Java is more forgiving, poorly written, memory-rude programs do sometimes run faster on it; but though these statistical flukes are trumpeted as though they were the rule, they should not be taken to imply that all code runs faster. Such cases are rare, and meaningless besides, since no one should judge the performance of good code by how well an environment handles bad code. Third, "in practice" isn't what's at issue here. C is clearly faster than Java. If I were to concede that, when a programmer is incompetent, Java's heavy optimization and wasteful memory preallocation take up the slack better than C does, nothing would really be gained. Arguing so would be as nonsensical as arguing that a pickup truck is faster than a race car because the race car is harder to drive.

====CONCLUSION====

Java is, in my opinion, not exactly a bad language. I dislike it for other reasons, but in all honesty, performance is not usually (or maybe even often) a deciding factor. RAM is cheap, and clock cycles are short and plentiful. More importantly, there are more, and larger, layers of cache. The performance argument against Java does not have as much weight as it once did. C is faster, but one doesn't really care except in the most demanding applications; truth be told, Java is fast enough for most applications. I write this discourse, then, not to argue that Java is hopelessly sluggish, but to combat the rising tide of misinformation about Java's utility. Yes, Java is quite fast. No, it is not faster than C.