Assertions are logic constructs like any other, so it is imperative that they be coded to accurately describe design behavior; otherwise, both design and verification time will be spent debugging false assertion failures, negating the benefit of using assertions in the first place.
Traditionally, assertions are debugged by running dynamic simulations or by using formal tools to exercise the logic and the assertions; failures in the design and in the assertions are then corrected iteratively as verification proceeds. Simple assertions often require little debugging because the rules they convey are fairly absolute. For instance, the assertion "AWVALID must be de-asserted during reset" (~ARESETn |-> ~AWVALID) is difficult to code incorrectly.
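As a minimal sketch, the reset rule above could be written as a complete concurrent assertion. The clock name ACLK follows AXI naming convention, and the property and assertion labels are illustrative, not drawn from any standard library:

```systemverilog
// Reset rule: AWVALID must be low whenever ARESETn is low.
// ACLK and the labels below are assumed names for illustration.
property p_awvalid_reset;
  @(posedge ACLK) ~ARESETn |-> ~AWVALID;
endproperty

a_awvalid_reset: assert property (p_awvalid_reset)
  else $error("AWVALID asserted while ARESETn is low");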
More complex assertions present a different issue. When coding an assertion that describes the interaction between two or more signals, the designer typically envisions a waveform capturing that behavior and then writes an SVA to describe the relationships. The problem is that SVA is a temporal modeling language, which is inherently different from modeling digital logic*. This often leads to cases where the designer's intent does not match the actual behavior of the coded SVA.
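A hypothetical handshake illustrates the gap between intent and behavior (req, ack, and clk are illustrative names, not from the source):

```systemverilog
// Intent: once req rises, ack must arrive within 1 to 3 cycles.
// Pitfall: this version starts a new evaluation attempt on EVERY
// cycle req is high, not just the rising edge, so a req held for
// three cycles spawns three overlapping attempts and failures are
// reported later than the designer expects.
property p_ack_naive;
  @(posedge clk) req |-> ##[1:3] ack;
endproperty

// Closer to the likely intent: trigger only on the rising edge of req.
property p_ack_edge;
  @(posedge clk) $rose(req) |-> ##[1:3] ack;
endproperty
```

Both properties are legal SVA; only a waveform of their attempts makes the difference in behavior obvious.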
The designer therefore needs to close the loop between assertion intent (the mental image of a waveform) and actual assertion behavior (the coded SVA). Zazz addresses this by generating a small randomized testbench around each individual assertion, allowing the designer to immediately observe the behaviors associated with it, thus closing the loop between intent and behavior. As an added benefit, this process often reveals additional signal relationships that should be described to constrain behavior more fully, leading to even higher-quality assertions.
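As a rough sketch of the idea, not of Zazz's actual generated output, a hand-written bench of this kind wraps one assertion in randomized stimulus so its pass/fail behavior can be observed directly on a waveform. All signal names and probabilities here are assumptions for illustration:

```systemverilog
// Hand-written analogue of a per-assertion randomized bench
// (illustrative only; not the tool's actual output).
module tb_ack_assertion;
  logic clk = 0, req = 0, ack = 0;
  always #5 clk = ~clk;

  // The single assertion under study.
  a_ack: assert property (@(posedge clk) $rose(req) |-> ##[1:3] ack)
    else $error("ack did not follow req within 1-3 cycles");

  // Randomized stimulus: toggle req and ack with simple probabilities
  // so both passing and failing attempts appear in the waveform.
  initial begin
    repeat (200) begin
      @(posedge clk);
      req <= ($urandom_range(0, 3) == 0);  // ~25% chance req is high
      ack <= ($urandom_range(0, 1) == 0);  // ~50% chance ack is high
    end
    $finish;
  end
endmodule
```

Running such a bench for even a few hundred cycles quickly surfaces vacuous passes, overlapping attempts, and unintended failure timing.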