I recently listened in on a webcast where the Wilson Research Group and Mentor Graphics 2012 Functional Verification Study was presented. You can read about it here. The slides show that the percentage of non-FPGA respins caused by logic or functional flaws was ≈55% in 2004 and ≈48% in 2012. That’s an improvement of only seven percentage points in eight years! They also show that only 25% of development teams use functional coverage as sign-off criteria. Why such a meager following?
Shortly into my management career I was working for an early-stage chip start-up; we were executing to an aggressive schedule and taking many shortcuts. When we entered the system-level verification stage, progress slowed. We were finding more bugs than expected and spending too much time and effort debugging simple mistakes in the design. Sound familiar?
We made the choice to stop and finish writing the module-level specifications, on the premise that the project would take less time if we committed to this path. We reviewed each module’s interface and functionality and made sure they were aligned and coded to the same rules. We then reran module-level verification against the updated specs. When that work was complete, we restarted system-level verification and, after weeding out the remaining minor problems, spent a couple of months running pseudo-random verification before we found another bug. We kept validating through the fab cycle with no new bugs found.
It didn’t take long to find “The Bug” once the chip was soldered down. It took even less time to isolate the problem: a path that should have been sequential was coded combinational, making one of the features unusable. After all our planning, time, and effort, how could we have missed it?
Over the next 16 months, we taped out additional designs, but we changed our release criteria based on what we learned from the first chip. We added three requirements:
- Functional coverage at the module level
- Functional coverage at the system level
- Performance measurement of every test at the system level
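As a rough sketch of what the first two requirements can look like in practice, here is how coverage counters might be kept in a “C” behavioral model. This is my own illustration under assumed names (the events, counts, and structure are invented, not the team’s actual code):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical functional coverage points pulled from a module-level spec:
 * each is a named event the spec says must be exercised before sign-off. */
typedef struct {
    const char *event;  /* spec event, e.g. "fifo_full_backpressure" */
    unsigned    hits;   /* times the behavioral model observed it    */
} cov_point;

static cov_point module_cov[] = {
    { "fifo_full_backpressure", 0 },
    { "retry_after_nak",        0 },
    { "back_to_back_writes",    0 },
};
enum { NCOV = sizeof module_cov / sizeof module_cov[0] };

/* Called from the behavioral model wherever the spec event occurs. */
static void cov_hit(const char *event)
{
    for (int i = 0; i < NCOV; i++)
        if (strcmp(module_cov[i].event, event) == 0)
            module_cov[i].hits++;
}

/* Sign-off gate: percentage of spec events seen at least once. */
static int cov_percent(void)
{
    int covered = 0;
    for (int i = 0; i < NCOV; i++)
        if (module_cov[i].hits > 0)
            covered++;
    return covered * 100 / NCOV;
}
```

A modern flow would express this with SystemVerilog covergroups instead, but the principle is the same: the bins come from the spec’s required behaviors, and sign-off waits for 100%.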
Our architecture documents had been written with flow charts describing the functionality. Our chip specifications either reused those diagrams or created new ones to match the intended implementation. One of the designers realized we could extract the functional coverage from these diagrams. We added functional coverage both in the “C” behavioral model and from the flowcharts, but only for the modules that had changed; adding coverage to modules already validated didn’t make sense. We wrote “C” code to extract the chip’s performance during each test and verify it was within spec. We ran simulations until we achieved full coverage. The chip tape-outs were successes: there were architectural and layout issues, but we found no functional or performance problems.
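The per-test performance check can be pictured with a small C helper along these lines. This is a sketch: the cycle budget, throughput floor, and field names are invented for illustration, not the actual chip’s spec:

```c
#include <assert.h>

/* Hypothetical spec budget: a transaction must complete in <= 64 cycles,
 * and sustained throughput must stay >= 9 transactions per 1000 cycles. */
#define MAX_CYCLES_PER_TXN  64
#define MIN_TXN_PER_KCYCLE   9

typedef struct {
    unsigned long cycles;  /* cycles consumed by the test            */
    unsigned long txns;    /* transactions completed during the test */
    unsigned long worst;   /* worst-case cycles for any single txn   */
} perf_log;

/* Returns 1 if the measured performance is within spec, 0 otherwise. */
static int perf_within_spec(const perf_log *p)
{
    if (p->worst > MAX_CYCLES_PER_TXN)
        return 0;
    /* compare txns/cycles against 9/1000 without floating point */
    return p->txns * 1000 >= (unsigned long)MIN_TXN_PER_KCYCLE * p->cycles;
}
```

Running a check like this on every system-level test is what catches a design that is functionally correct but misses its performance targets.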
The timeframe above was over a decade ago. We didn’t have SystemVerilog or uVM. Assertion Based Verification wasn’t even a buzzword. Formal Verification was still in its infancy. Solid Oak Technologies? Not even envisioned yet. And yet, we used functional coverage as our signoff criteria.
What we learned was to follow these 3 steps for success:
- Write a complete and detailed specification – Don’t take shortcuts with your specifications. As the experience above shows, until you write the complete specification you don’t really know what you are creating or how it is intended to work. Write the details in such a way that extracting the intent is easy.
- Code RTL from the specification and follow the KISS principle – I can’t tell you how many times I’ve reverse-engineered code into flowcharts only to find the designer doesn’t recognize the logic, because it has been modified so many times from the original design intent. Too often, coding starts before the spec is completed or before the design intent is fully understood. The result is band-aided code that is neither simple nor understandable, and that no longer matches the original intent.
- Extract the functional coverage from the specification, not the RTL – If it’s not in the specification, why code it or waste time testing it? Today’s design complexity requires RTL to be created in manageable modules; no one codes a whole chip in a single file. Write specs at the module level and extract the design intent from them.
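One way to picture the third step, purely as a sketch under my own assumptions (the decision names are hypothetical, and this is not CoverAll output): each decision diamond in a spec flowchart becomes two coverage points, one per outcome, and sign-off is refused until every outcome has been exercised.

```c
#include <assert.h>
#include <string.h>

/* Each decision in the spec's flowchart yields two coverage points:
 * the 'yes' outcome and the 'no' outcome. */
typedef struct {
    const char *decision;   /* flowchart diamond, e.g. "crc_ok?"    */
    unsigned    taken;      /* times the 'yes' branch was exercised */
    unsigned    not_taken;  /* times the 'no' branch was exercised  */
} branch_cov;

static branch_cov flow[] = {
    { "crc_ok?",      0, 0 },
    { "buffer_full?", 0, 0 },
};
enum { NBR = sizeof flow / sizeof flow[0] };

/* Called by the test bench or model when a spec decision is reached. */
static void branch_hit(const char *decision, int outcome)
{
    for (int i = 0; i < NBR; i++)
        if (strcmp(flow[i].decision, decision) == 0) {
            if (outcome) flow[i].taken++;
            else         flow[i].not_taken++;
            return;
        }
}

/* Sign-off: every flowchart outcome seen at least once. */
static int flow_covered(void)
{
    for (int i = 0; i < NBR; i++)
        if (flow[i].taken == 0 || flow[i].not_taken == 0)
            return 0;
    return 1;
}
```

Because the coverage points are generated from the flowchart rather than the RTL, they measure the design intent, not whatever the code happens to do.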
Here is an example design that uses functional coverage as the sign-off criterion. It contains an implementation of a SHA-2/256 hash generator, complete with the module-level specification, RTL, assertions, SystemVerilog and uVM test benches, the formal scripts for Mentor’s Questa® Formal, and a functional coverage report. All of this was generated from the diagrams in the specification using Solid Oak’s CoverAll™ software; only the test benches were modified to obtain the functional coverage.
Jim O’Connor – President, CEO and Founder of Solid Oak Technologies