In particular, the higher-complexity FPGAs that became available from early 2008 onward were ideally suited for this step, as they allowed a top-down partitioning of functionality across software (running as programs on an embedded core) and hardware (synthesized, and/or in the form of fixed IP blocks/functions). In this flow the C2RTL tools would have played an important part by mapping the relevant parts of the system (described in C) onto the FPGA. Given that programming is an engineering discipline that (has to) tolerate bugs, FPGAs are ideal: when a (critical) bug is encountered, it can be fixed easily and cheaply, especially in comparison to “fixing” a bug in a high-end digital ASIC.
The availability of these C2RTL synthesis tools had been driven early on by the (high-end digital) ASIC design flows. Of course, in the context of an ASIC design, such tools were part of a very complicated design flow, representing a significant investment and operated by a highly expensive design team. These teams quickly figured out that the available C2RTL tools required hardware engineers with some C-language skills rather than software engineers. This was because simply providing a block of C code would not produce useful results: the code had to be “massaged” to include significant hardware-level detail to ensure optimal results. In effect, the bottleneck was not removed but moved up a level: from RTL-savvy programmers to hardware-savvy software programmers. The amount of hardware knowledge required would vary somewhat, and in general the C2RTL tools, such as those from Synfora (now Synopsys), Mentor Graphics and AutoESL (now Xilinx), would generate excellent results once the C code on the input side had been formatted to properly represent the target micro-architecture. Of course, within the cost structure of a high-end ASIC design flow, such efforts could be supported, and the (EDA-type) business model used by the C2RTL tool vendors was an equally natural fit.
All this is orthogonal to the software engineering community that stands on the system/software side: there is little or no (RTL-level) hardware knowledge available, EDA-type business models are prohibitively expensive, and having blocks of “RTL” is only useful if the interconnect between the hardware and the software is also provided. The ASIC market, with its dwindling number of design starts per year, could not sustain these tools. The FPGA market, on the other hand, with its continuously increasing number of design starts, could not support these tools either, both from a usability perspective (the lack of hardware-savvy software engineers, and the fact that the tools did not solve the full problem: blocks of RTL were still required) and from a cost perspective. All this resulted in the fading of these tools, punctuated by the departure of Mentor Graphics from this market as it recently sold its Catapult-C tools group to Calypto.
My reason for taking you along this history trail is to argue two key viewpoints with you:
The first point, I believe, is solidly illustrated by the history of the C2RTL tools over the last decades: building a bridge from the hardware world to the software world, as imagined by the respective companies, turned out to be a bridge to an island, the island of the hardware-savvy software engineers. Despite the quality of the tools, there weren’t enough people living on that island. Immigration (of FPGA designers) turned out to be prohibitively expensive for most, given the business model and the fact that it did not bring them closer to the full system-level solution.
The second point concerns the very nature of the departure point: software (and of course I am referring here to higher-level languages, in particular C and C++). Software is inherently sequential, and any attempt to map it to hardware requires not only the ability to properly synthesize it but also to properly parallelize its execution, both in hardware and in software.
It is only by addressing the combination of the above viewpoints that a truly software-enabled design flow will result.