Hooray for managed code

For a long time I have hoped that managed code would one day beat statically compiled code. Managed code can make software more secure and CPU-architecture-independent, and it makes generating executable code easier. The only remaining problem is performance. Theoretically, managed code should be faster than natively compiled code, because the runtime has optimization opportunities that a static compiler and linker, which do not understand the generated binary code, lack. But in practice, and especially in public perception, it has always trailed behind.
The benchmarks published by OSNews suggest that, at least in the commercial space, managed code is already competitive and can beat unmanaged C code. The nice thing about the OSNews article is that it's probably the first time someone was (stupid|brave) enough to violate Microsoft's licensing agreement and publish benchmarks for C# - but even Java did very well against gcc.

Unfortunately the usefulness of the benchmarks is very limited. They do not measure important things like the quality of inlining, the cost of virtual method invocation (or, more importantly for virtual methods, how good the VM is at inlining them at runtime) and the cost of object allocation. They also ignore language-specific features that cost performance. A good example is Java's immutable String class, which causes object allocations for all modifying string operations. But it still gives hope for a better future. Congratulations, managed code :)
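To make the String point concrete, here is a small C++ sketch of the same cost pattern (std::string is mutable, so the immutable style is simulated by returning a fresh copy on each operation, the way Java's String methods do):

```cpp
#include <string>

// Immutable style, as in Java: every "modification" produces a brand-new
// string, which means one allocation plus a full copy per operation.
std::string build_immutable(std::size_t n) {
    std::string s;
    for (std::size_t i = 0; i < n; ++i)
        s = s + 'x';          // fresh allocation and O(i) copy each time
    return s;                 // O(n^2) work overall
}

// Mutable style: append in place, amortized O(1) per character.
std::string build_mutable(std::size_t n) {
    std::string s;
    s.reserve(n);             // a single up-front allocation
    for (std::size_t i = 0; i < n; ++i)
        s += 'x';
    return s;                 // O(n) work overall
}
```

Both produce the same string; only the allocation behaviour differs. In Java the mutable variant is what StringBuilder is for.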


I'm sorry, but the benchmarks on OSNews are very, very dubious. If you take a look at the comments, people have run Linux + gcc benchmarks far faster than the ones in the test, and the ICC benchmarks should be faster still on the P4 the reviewer uses. The flags chosen for compilation are also dubious, as is the form of the code itself.

The Java entry is also fairly pointless, because it doesn't represent normal Java operation (lacking the use of objects and method calls) and also because it fails to use VMs such as IBM's, which are substantially faster.

All in all, only the .Net tests are in their favoured situation, and even then, apparently the VB.Net test is messed up in compilation as well, because it is running in a VB6 compatibility mode.

Worst of all, this junk has now been linked on Slashdot.

By luke chatburn at Fri, 01/09/2004 - 21:41

Well, this benchmark is crap imho.
He just "tested" integer/double arithmetic. For a start this has nothing to do with "real" problems (besides some special cases, of course), but it is also exactly the kind of workload where the VM shouldn't affect runtime, besides some loading overhead: no allocations/deallocations, no virtual function calls etc.
On the other hand I don't understand how C can perform slower than managed code. When optimization is on and one takes a look at these really simple functions, one can hardly imagine that something running in a VM is any better than the minimal/optimal assembler code the compiler should generate for such simple functions. If I take a look at the assembler output of gcc -O3, I don't see where it should lose to the managed code during the calculations.

By rischwa at Fri, 01/09/2004 - 23:17

> On the other hand I don't understand how C can perform slower than managed code.

It can happen when the C code uses functions in different compilation units. With bytecode you can inline even code from other compilation units; the usual C linkers can't do this (because they don't understand the binary code).
Another C issue is aliasing: it's not always possible to know whether there is a pointer to a variable, which can make it impossible to keep variables in registers across function calls into other compilation units. In a language that does not have pointers you don't have this problem.
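A minimal C++ sketch of the aliasing problem (the names here are made up for illustration, and touch() stands in for a function that, in a real build, would live in another compilation unit):

```cpp
// In a real build, touch() would be defined in a separate file, so the
// compiler compiling total() could not see what it does to *p.
void touch(int* p) { ++*p; }

// Because a pointer to the counter has escaped into touch(), a static C
// compiler must assume *p may change across every call and reload it from
// memory on each iteration instead of caching it in a register. A VM that
// sees both bodies at runtime can inline touch() and keep *p in a register.
int total(int* p, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i) {
        sum += *p;   // forced memory load: touch() may have written *p
        touch(p);
    }
    return sum;
}
```

With counter starting at 1 and n = 4, total() sums 1+2+3+4, which only comes out right if *p really is re-read after every call.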

By tjansen at Sat, 01/10/2004 - 00:16

> On the other hand I don't understand how C can perform slower than managed code.
It should have been "... slower than managed code in this case", of course :).
It's obvious that managed code can have advantages, e.g. hot-spot optimization (dunno if that's the right name), which can't be done without a VM/interpreter.
But I just couldn't see how managed code could have an advantage in this case (though it shouldn't have had much of a disadvantage either, as I said in my previous post). Theoretically the calculations could have been optimized down to a constant by the compilers.

By rischwa at Sat, 01/10/2004 - 14:18

The trigonometry benchmark may use functions from different compilation units, and thus may be harder to optimize in C (I have no idea what the generated code on Win32 looks like, the trigonometry functions could also be inlined or even built-in). The simple integer benchmark probably just shows how well the compiler is optimizing, and gcc does not look very good in this particular example.

By tjansen at Sat, 01/10/2004 - 17:18

As Tim said, it could be that the code is written in a way that a C compiler finds hard to optimise. Honestly, though, for this task it should be quite obvious and pose no problems to the compiler. That said, the author of the article had never written any C++ before, and barely any C in anger either.

Moreover, the C test was done with GCC running under Cygwin and passing through to the Windows libraries. Using GCC under Linux, on Athlons roughly performance-equivalent to the P4 the author used, *HALVED* the run time to 34 seconds - a full 14 seconds better than Visual C++.Net.

In short, comparing the favoured results - GCC in a POSIX environment, where it is most likely to be used, against Visual C++.Net under Windows, its favoured environment - produces a clear victory for unmanaged C++ under GCC: VC++.Net takes 141% of the time GCC needs to execute this code.

I'm not saying it is good code, but there you go.

By luke chatburn at Sat, 01/10/2004 - 02:48

A few months ago, a German computer magazine (c't -> www.heise.de) had a comparison between C++, C#, Java 1.4.2 and Delphi, testing things like virtual function invocation, object allocation etc.
For C++ they used MS VC++ and Borland's compiler.
Their result was that C# performed best, with Java on par in most of the tests, but C++ performed the poorest on every single test. If you took a look at their benchmarks, though, you knew why. The authors had both published books about C# and Delphi, but obviously had no idea about C++. Their tests weren't even standards-conformant. Changing a single line in the code made the C++ run 2 times faster, outperforming everything else. Changing the rest (like call by reference in a recursive function instead of call by value) and disabling debugging mode :) made the C++ run 8 times faster than before, leaving absolutely no chance to the others.
Unfortunately they never had the fairness to publish a counterstatement or admit that they had done something wrong, so a lot of people out there without knowledge of C++ probably believe this crap they published (c't is a really good and imho the most professional all-round computer magazine, so it's not only idiots reading it).
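For what the call-by-value mistake looks like, here is a hypothetical reconstruction in C++ (not the actual c't code): passing a large object by value into a recursive function copies it at every level of the recursion.

```cpp
#include <string>

// By value: the whole string is copied at every level of the recursion,
// so counting n characters does n full copies - O(n^2) work in total.
std::size_t count_by_value(std::string s, std::size_t i = 0) {
    return i == s.size() ? 0 : 1 + count_by_value(s, i + 1);
}

// The one-line fix: take the string by const reference - no copies at all.
std::size_t count_by_ref(const std::string& s, std::size_t i = 0) {
    return i == s.size() ? 0 : 1 + count_by_ref(s, i + 1);
}
```

Both return the same count; only the copying differs - exactly the kind of change that can make a naive C++ benchmark look several times slower than it needs to be.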

By rischwa at Sat, 01/10/2004 - 14:24