If you research Assertion-Based Verification (ABV), you’ll run into the terms “observability” and “controllability” a lot. Observability means assertions detect design errors at their source, at the instant the error occurs, which ultimately decreases debug time.
Low controllability occurs when the number of simulation vectors needed to exhaustively stimulate a design becomes prohibitive. Assertions, when used as targets for formal analysis, can exhaustively verify blocks and interfaces and thereby improve controllability.
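To make both terms concrete, here is a minimal SVA sketch (the module and signal names are illustrative, not from any specific design): an inline assertion flags an illegal state encoding on the exact cycle it occurs, and the very same property handed to a formal tool becomes an exhaustive proof target.

```systemverilog
// Hypothetical example: an inline assertion that fires the instant the
// state register leaves its legal one-hot encoding, pinpointing the
// error at its source rather than at a later output miscompare.
module fsm_checks (
  input logic       clk,
  input logic       rst_n,
  input logic [2:0] state   // assumed one-hot: IDLE=001, BUSY=010, DONE=100
);
  // Observability: fails on the exact cycle the encoding is violated.
  a_state_onehot: assert property (
    @(posedge clk) disable iff (!rst_n) $onehot(state)
  ) else $error("state lost one-hot encoding: %b", state);

  // Controllability: a formal tool checks this same property
  // exhaustively, with no simulation vectors required.
endmodule
```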
Assertions aren’t just monitors and formal targets: they also capture design intent and report coverage information. With the proper use of assertions, and the reporting of passes as well as failures, the metrics are available to know when functional verification is complete.
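The coverage-reporting side is typically done with SVA cover properties, which record passes of a scenario rather than failures. A sketch, with assumed handshake signal names:

```systemverilog
// Hypothetical sketch: cover properties turn design-intent scenarios
// into functional-coverage metrics by recording each time they occur.
module handshake_cov (
  input logic clk, rst_n,
  input logic req, ack
);
  // Count every completed request/acknowledge handshake
  // (ack within 1 to 4 cycles of req, an assumed protocol bound)...
  c_handshake: cover property (
    @(posedge clk) disable iff (!rst_n) req ##[1:4] ack
  );
  // ...and the back-to-back request case, which is often under-exercised.
  c_back2back: cover property (
    @(posedge clk) disable iff (!rst_n) (req && ack) ##1 req
  );
endmodule
```

At sign-off, a zero hit count on either cover property shows a hole in the stimulus, not in the design.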
So how do we define the “proper use of assertions”? Solid Oak’s approach to generating assertions revolves around its CoverAll™ assertion, sequence and cover property generation capabilities based on design intent extracted from flow, FSM and timing diagrams. CoverAll generates low-level assertions from the flow diagrams and interface assertions from the timing diagrams developed by designers. Designers and verifiers should also add high-level assertions to cover functionality not easily described by these diagrams. I’ve talked about my own experience in using these methods in a previous post.
I went searching for data on other assertion experiences and I found a 2008 DAC paper titled “Assertion-Based verification of a 32 thread SPARC™ CMT Microprocessor” by Babu Turumella and Mukesh Sharma of Sun Microsystems. In this paper, the authors state:
“Exhaustive property checking, design defect isolation and functional coverage measurement are some of the key challenges of design verification. This paper describes how an assertion based approach successfully addressed these challenges for the verification of an enterprise class chip-multi-threaded (CMT) SPARC™ microprocessor.”
Look at this figure from the paper.
“First chart shows that majority of the bugs found using assertions were found in simulation. Part of this can be attributed to the amount of effort that went into simulation based verification, relative to formal verification. However, the blocks that were verified formally had lower number of bugs at the chip level, compared to blocks that were verified solely by simulation at unit level.
Second chart shows distribution of bugs found by different types of assertions. Although Inline [low-level] assertions contributed to majority of bugs, their utility was more significant during the initial design phase. Interface assertions found most of the critical bugs in the design.”
Here’s what I recommend for the proper use (and generation) of assertions:
- Generate the low-level assertions from your module-level design specifications. If you use Solid Oak’s CoverAll tool, that process is automated from the design intent flow and FSM diagrams you include in your specification. See an example here. You can also generate the RTL from these diagrams. Is this a conflict, i.e. generating RTL and assertions from the same source? No, because although you wouldn’t expect assertion failures, the low-level assertions are valuable as functional coverage sign-off targets in both simulation and formal verification. I’ve found cases where formal triggered failures because I allowed the tool free rein on my interfaces. The constraints (i.e. assumes) I subsequently added weren’t always clearly stated in the specification, but they definitely belonged there, and formal found them for me.
- Generate the interface assertions from your design specification. A properly written specification should contain a complete and unambiguous definition of all the interfaces on a module. If you use Solid Oak’s CoverAll tool, once again, that process is automated from the design intent timing diagrams you include in your specification. If the interface is extremely flexible and somewhat complex, PCIe for example, you may have to rely on third-party VIP. Even then, it’s still a good idea to capture the known sequences you expect to occur and obtain coverage on them. The Sun Microsystems paper, as well as my own experience, indicates that most of the bugs will occur here: designs are typically assigned to a team of designers and verifiers who may not all interpret the interfaces in the same manner, or constrain them properly, as I noted above.
- Generate the high-level assertions as you write your specification, and create a set of assertions to be used across the entire design team for common functions like FIFOs [e.g. overflow and underflow] and counters [e.g. rollover sequence]. This is where engineering experience pays off: knowing where to place these assertions and how many to create. These assertions should capture both expected behavior and conditions that are prohibited. A lot of this can be captured in a well written design specification. I typically include covergroups and cover properties here as well, for critical structures in the design where I want increased observability.
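A team-wide library for the common functions mentioned above might look like the following sketch. The port names (`push`, `pop`, `full`, `empty`) and parameters are assumptions for illustration, not any particular library’s API:

```systemverilog
// Hypothetical reusable checkers, shared across the design team.

// FIFO: prohibited behavior (overflow and underflow).
module fifo_checks (
  input logic clk, rst_n,
  input logic push, pop,
  input logic full, empty
);
  a_no_overflow: assert property (
    @(posedge clk) disable iff (!rst_n) !(push && full && !pop)
  ) else $error("FIFO overflow");

  a_no_underflow: assert property (
    @(posedge clk) disable iff (!rst_n) !(pop && empty)
  ) else $error("FIFO underflow");
endmodule

// Counter: expected rollover behavior, plus a cover property on the
// rollover sequence itself for increased observability.
module counter_checks #(parameter int W = 8) (
  input logic         clk, rst_n,
  input logic         en,
  input logic [W-1:0] count
);
  a_rollover: assert property (
    @(posedge clk) disable iff (!rst_n)
    (en && count == '1) |=> (count == '0)
  );

  c_rollover: cover property (
    @(posedge clk) disable iff (!rst_n) en && count == '1
  );
endmodule
```

Checkers like these can be attached with SystemVerilog `bind` (e.g. `bind my_fifo fifo_checks u_chk (.*);`, assuming matching port names) so designers get the checks without editing their RTL.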
I’d like to hear about your experiences with ABV and how you deploy assertions. Where do you find most of your bugs and how do you define closure?
Jim O’Connor – President, CEO and Founder of Solid Oak Technologies