According to Herb Sutter, C++ (and C and Fortran) are unmatched for performance per dollar and, therefore, he believes C++ will once again top the programming language charts as mobile devices make power consumption (which he equates with performance) a high priority again.
Herb said this in his recent lecture Why C++?, and other Microsoft employees from the Visual C++ team have called this prophecy the "C++ Renaissance".
Similar statements about the superior performance of C++ were commonplace over 15 years ago, when Java was new and unproven, but we now know they were almost entirely wrong. The world migrated from native to managed languages over 10 years ago and never looked back, because managed performance is more than adequate. Furthermore, the implementations of managed languages have improved substantially since then, and even toy benchmarks now show them competing with, or even beating, native code. More importantly, the difficulty of optimizing large code bases means that many real-world applications are substantially faster when written using modern tools. For example, we recently helped a client translate 10,000 lines of numerical C++ into F# and were easily able to make the result 10× faster than the original C++. This is typical because C++ is much more difficult to optimize, particularly in the context of multicore parallelism, and developers in the real world rarely have the time to do so if they are still using C++.
So why would anyone promote C++ now, when it has so many disadvantages? Several reasons, we believe. Firstly, C++0x was finally released as C++11 last year and Microsoft's Visual C++ team hope to ride a wave of hype following its release. Secondly, Microsoft want to draw people back into C++ as a route to vendor lock-in. We have seen several big code bases written in "C++" using Microsoft's tools, and none of them were remotely portable due to their extensive use of proprietary features. Ironically, Herb plays up portability in his preaching and even goes so far as to assert that C++ has the advantage of offering a single string type that is compatible with the operating system. Moreover, C++ is particularly bad for metaprogramming and, consequently, it is unusually difficult to translate code mechanically from C++ to other languages. Thirdly, what else do Microsoft's Visual C++ team have left to hype?
Fortunately, this recent hype from Microsoft seems to have been largely ignored...
Here is the current trend in the UK job market for C++ developers:
Is it about to change?
Herb Sutter responded to some of my tweets about this. His responses are remarkable.
Firstly, he states that "no way should you need barriers to get sequential consistency". This is an amazing statement because all mainstream languages (Java, .NET, C++) and all dominant CPU architectures (ARM, Intel, AMD, PowerPC) require barriers to achieve sequential consistency. Relaxed memory ordering is ubiquitous precisely because it allows hardware to hide memory latency effectively. The paper "Realization And Performance Comparison Of Sequential And Weak Memory Consistency Models In Network-on-Chip Based Multi-Core Systems" found that performance is around 40% worse when sequential consistency is imposed. ARM's Principal Software Engineer even went so far as to call sequentially consistent memory models a "nostalgic fantasy".
In reality, ARM and PowerPC have weak memory models that allow many memory operations to be reordered and x86 has a stronger memory model that prohibits most reordering but still allows reads to be moved before independent writes and, consequently, is also not sequentially consistent. Note that Herb's dream of sequential consistency is at odds with the desire for more latency hiding that he expressed in his latest article.
Then he predicts that CPU vendors will adopt stronger memory models within 2 years. Perhaps Herb has inside knowledge of Microsoft leaning on ARM to adopt the x86 memory model in order to ease Microsoft's Windows 8 port. Only time will tell if Herb's prediction is accurate, but two interesting observations can be made. Firstly, Apple have sold tens of millions of iPad 2s running the 500,000 apps on their App Store without anyone having complained about bugs caused by the weak memory model of the multicore ARM chips inside them. Secondly, Intel weakened the x86 memory model when they added SSE and, consequently, also added the LFENCE, SFENCE and MFENCE instructions. So Herb is predicting that Intel will do a U-turn and that ARM will throw away one of their largest performance advantages.
Finally, Herb asserts that "x86 is the canonical example of a strong hardware memory model". Although many people can be seen asserting that x86 has a strong memory model, many experts describe it as a weak memory model because it does reorder memory operations and does not provide sequential consistency. For example, computer science researchers in this field from the University of Cambridge describe x86 as having a weak memory model in their paper "x86-TSO: A Rigorous and Usable Programmer's Model for x86 Multiprocessors". And computer science researchers from the University of Oxford describe it as a weak memory model in their paper "Soundness of Data Flow Analyses for Weak Memory Models".