VLSI Design and EDA Tools PDF
Electronic Design Automation
Introduction
Charles E. Stroud, ... Yao-Wen Chang, in Electronic Design Automation, 2009
About This Chapter
Electronic design automation (EDA) is at the center of technology advances that improve human life and are used every day. Given an electronic system modeled at the electronic system level (ESL), EDA automates the design and test processes: verifying the correctness of the ESL design against the specifications of the electronic system, taking the ESL design through various synthesis and verification steps, and finally testing the manufactured electronic system to ensure that it meets the specifications and quality requirements. The electronic system can be a printed circuit board (PCB) or simply an integrated circuit (IC). The integrated circuit can be a system-on-chip (SOC), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
On one hand, EDA comprises a set of hardware and software co-design, synthesis, verification, and test tools that check the ESL design, translate the corrected ESL design to the register-transfer level (RTL), and then take the RTL design through the synthesis and verification stages at the gate level and switch level to eventually produce a physical design described in graphics data system II (GDSII) format that is ready for signoff, fabrication, and manufacturing test (commonly referred to as the RTL-to-GDSII design flow). On the other hand, EDA can be viewed as a collection of design automation and test automation tools that automate the design and test tasks, respectively. The design automation tools deal with the correctness aspects of the electronic system across all levels, be it ESL, RTL, gate level, switch level, or physical level. The test automation tools manage the quality aspects of the electronic system, be it defect level, test cost, or ease of self-test and diagnosis.
This chapter gives a more detailed introduction to the various types and uses of EDA. We begin with an overview of EDA, including some historical perspectives, followed by a more detailed discussion of various aspects of logic design, synthesis, verification, and test. Next, we discuss the important and essential process of physical design automation. The intent is to orient the reader for the remaining chapters of this book, which cover related topics from ESL design modeling and synthesis (including high-level synthesis, logic synthesis, and physical synthesis) to verification and test.
URL: https://www.sciencedirect.com/science/article/pii/B9780123743640500084
Why Open Source?
In Sarbanes-Oxley IT Compliance Using COBIT and Open Source Tools, 2005
IT Infrastructure
Because electronic design automation (EDA) tools have strong historical roots in UNIX, NuStuff has already embraced open source and Linux technologies to a great extent. NuStuff recognized early on the cost-saving benefits of migrating away from proprietary UNIX and Windows systems on both the client and server sides for engineering, while concurrently maintaining mostly Windows-centric clients for nonengineering and support personnel. To consolidate its IT infrastructure, the company has standardized on Linux in the server room and eliminated as many Windows servers as possible, although it does have a few proprietary and legacy applications that run only in a Windows environment.
URL: https://www.sciencedirect.com/science/article/pii/B9781597490368500070
System on Chip (SoC) Design and Test
Swarup Bhunia, Mark Tehranipoor, in Hardware Security, 2019
3.1.2.4 Automatic Test Pattern Generation (ATPG)
ATPG is an electronic design automation (EDA) method used to find an input (or test) sequence that, when applied to a digital circuit, enables testers to distinguish between the correct circuit behavior and the faulty circuit behavior caused by defects. These algorithms usually operate with a fault generator program, which creates the minimal collapsed fault list, so that the designer need not be concerned with fault generation [5]. Controllability and observability measures are used in all major ATPG algorithms. The effectiveness of ATPG is measured by the number of modeled defects, or fault models, that are detected and by the number of generated patterns. These metrics generally indicate test quality (higher with more fault detection) and test application time (higher with more patterns). ATPG efficiency is another important consideration. It is influenced by the fault model under consideration, the type of circuit under test (combinational, synchronous sequential, or asynchronous sequential), the level of abstraction used to represent the circuit under test (register, gate, transistor), and the required test quality [12].
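To make the idea concrete, the following minimal SystemVerilog sketch (a hypothetical illustration, not from this chapter) shows what a single ATPG-style test pattern accomplishes: the vector {a=1, b=1} excites a modeled stuck-at-0 fault on input a of an AND gate and propagates the resulting difference to an observable output. An ATPG tool searches for such a pattern automatically for every fault in the collapsed fault list.
// Hypothetical sketch: one test pattern distinguishing a fault-free AND gate
// from the same gate with a modeled "a stuck-at-0" defect.
module stuck_at_demo;
  logic a, b;
  logic y_good, y_faulty;

  assign y_good   = a & b;       // fault-free circuit
  assign y_faulty = 1'b0 & b;    // same circuit with input 'a' stuck at 0

  initial begin
    a = 1'b1; b = 1'b1;          // candidate test pattern
    #1;
    if (y_good !== y_faulty)
      $display("Pattern {a=1,b=1} detects the a/0 fault (good=%b, faulty=%b)",
               y_good, y_faulty);
  end
endmodule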
Today, because of very large circuit sizes and shortened time-to-market requirements, all ATPG algorithms are performed by commercially available EDA tools. Figure 3.7 illustrates the basic ATPG flow. The tool first reads in the design netlist and library models; then, after building the model, it checks the test design rules that are specified in the test protocol file. If any violations occur in this step, the tool reports the violated rules as warnings or errors, depending on the severity. Using the ATPG constraints specified by the users, the tool performs ATPG analysis and generates a test pattern set. If the test coverage meets the users' needs, the test patterns are saved in files with a specific format. Otherwise, the users can modify the ATPG settings and constraints, and rerun ATPG.
It is worth noting that there are two coverage metrics: test coverage and fault coverage. Test coverage is the percentage of detected faults among those that are detectable, and it gives the most meaningful measure of test pattern quality. Fault coverage is the percentage of detected faults among all faults, so it gives no indication of how many faults are undetectable. Usually, test coverage is used in practice as an effectiveness measure of the test patterns generated by the ATPG tool.
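As a rough summary of how most ATPG tools compute these two numbers (the exact fault classes counted as undetectable vary from tool to tool), the metrics can be written as:
\[ \text{Test coverage} = \frac{\#\,\text{detected faults}}{\#\,\text{total faults} - \#\,\text{undetectable faults}} \times 100\% \qquad \text{Fault coverage} = \frac{\#\,\text{detected faults}}{\#\,\text{total faults}} \times 100\% \]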
URL: https://www.sciencedirect.com/science/article/pii/B9780128124772000083
Introduction to Microelectronics
In Top-Down Digital VLSI Design, 2015
1.3.4 Electronic design automation software
The VLSI industry has long since become entirely dependent on electronic design automation (EDA) software. There is not a single step that could be completed without the assistance of sophisticated computer programs; the sheer quantity of data necessary to describe a multi-million transistor chip makes this impossible. The design flow outlined in the previous section gives a rough idea of the variety of CAE/CAD programs that are required to pave the way for VLSI and FPL design. Almost every box in fig.1.13 stands for yet another tool.
While a few vendors can take pride in offering a range of products that covers all stages from system-level decision making down to physical layout, their efforts tend to focus on relatively small portions of the overall flow for reasons of market penetration and profitability. Frequent mergers and acquisitions are another characteristic trait of the EDA industry. Truly integrated design environments and seamless design flows are hardly available out of the box.
Also, the idea of integrating numerous EDA tools over a common design database and with a consistent user interface, once promoted as front-to-back environments, aka frameworks, has lost momentum in the marketplace in favor of point tools and the "best in class" approach. Design flows are typically pieced together from software components of various origins. The prevalence of software tools, design kits, and cell libraries from multiple sources in conjunction with the absence of industry-wide standards adds to the complexity of maintaining coherent design environments. Many of the practical difficulties with setting up efficient design flows are left to EDA customers and can sometimes become a real nightmare. Hopefully this trend will be reversed one day when customers are willing to pay more attention to design productivity than to layout density and circuit performance.
URL: https://www.sciencedirect.com/science/article/pii/B9780128007303000010
Introduction to SystemVerilog Assertions
Erik Seligman, ... M V Achutha Kiran Kumar, in Formal Verification, 2015
Property Syntax and Examples
The most common types of properties are created using triggered implication constructs, sequence |-> property and sequence |=> property. The left hand side, or antecedent, of an implication must be a sequence. The right hand side, or consequent, can be a sequence or a property. The difference between the two operators is that |-> checks the property on the same clock tick when the sequence is matched, while the |=> operator checks one tick later. Triggered properties are vacuously true (that is, true for trivial reasons) on cycles where their triggering sequence is not matched. Figure 3.10 shows some examples of simple properties and traces where they are true and false at various times.
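As a small sketch with hypothetical signals req and ack (not the book's figure), the two property fragments below differ only in when the consequent is checked; on any tick where req is not matched, both are vacuously true:
req |-> ack   // overlapping: ack must hold on the same tick in which req is matched
req |=> ack   // non-overlapping: ack must hold one tick after req is matched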
We should also make an important point here about these triggered implication properties: they provide a significant advantage for validation over other ways of writing logically equivalent properties, because many EDA tools take advantage of the information in the triggering condition. For example, an overlapping |-> operation is often logically equivalent to a simple Boolean operation, as in the example below:
a1_boolean: assert property (!cat || dog);
a1_trigger: assert property (cat |-> dog);
However, with the triggered version, EDA tools can offer improved debug features in numerous ways:
- Simulation debug tools can indicate cycles when the assertion was triggered, as well as when it passed or failed.
- FV tools can generate an implicit cover point to check whether the triggering condition is possible. This also allows them to automatically report when an assertion is proven vacuously: it cannot be violated because the triggering condition cannot be met.
Thus, we recommend that whenever you have multiple potential ways to write a property, you try to state it as a triggered implication if possible.
Tip 3.9
Use triggered properties (|->, |=>) in preference to other forms of properties when possible. This enables many EDA tools to provide specialized visualization and debug features.
Another set of useful tools for constructing properties is the linear temporal logic (LTL) operators. A full discussion of LTL operators is probably more detail than you need at this point, so we will refer you to LRM section 16.12 if you wish to learn the full story. But LTL operators are critical if you want to create "liveness properties": properties that specify aspects of potentially infinite execution traces. Probably the most useful LTL operators in practice are s_until and s_eventually. For the most part, these operators are exactly what they sound like: s_until specifies that one property must be true until another property (which must occur) is true, and s_eventually specifies that some expression must eventually be true. The s_ prefix on these operators stands for "strong," indicating that in an infinite trace the specified conditions must happen at some point. (You can omit the s_ prefix to get "weak" versions of these operators, which means that an infinite trace that never hits the condition is not considered a violation.) Infinite traces may sound odd if you are used to simulation, but they can be analyzed by FV tools; we will discuss this issue more in upcoming chapters. Figure 3.11 shows some examples of these LTL operators.
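For illustration, here are hedged property fragments using the strong operators (hypothetical req/gnt signals, not the book's figure):
$rose(req) |-> (req s_until gnt)   // req stays asserted until a grant, and the grant must arrive
gnt |-> (s_eventually !gnt)        // every grant is eventually withdrawn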
One more very useful class of properties is negated sequences. SVA provides a not operator, which enables us to check cases in which a sequence is not matched. We did not include this in the sequence section above because technically a negated sequence is a property object, not a sequence object. This means that it cannot be used as the left-hand side of an implication operator. However, a negated sequence is often very useful as a property to check that some unsafe condition never occurs. Figure 3.12 shows some examples of negated sequences used as properties.
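Since that figure is not reproduced here, the following fragments are a rough sketch of the style involved (hypothetical arbiter signals, not the book's exact examples):
not (gnt[0] ##1 gnt[1])    // two different grants never occur in back-to-back cycles
not (req[0] ##[1:3] err)   // an error never follows a request within three cycles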
Like sequences, properties are not actually meaningful unless included in an assertion statement; once again, the examples above are just fragments at this point. Here are some examples of assertion statements that might be useful in our arbiter model.
// assume after gnt 0, req 0 falls within 5 cycles
req0_fall_within_5: assume property
($rose(gnt[0]) |=> ##[1:5] $fell(req[0]));
// assert that any request0 is granted within 20 cycles
gnt0_within_20: assert property
($rose(req[0]) |-> ##[1:20] gnt[0]);
// assert that any grant on 0 is eventually withdrawn
gnt0_fall: assert property
($rose(gnt[0]) |-> s_eventually (!gnt[0]));
URL: https://www.sciencedirect.com/science/article/pii/B9780128007273000034
Parallel Computing
G.M. Megson, I.M. Bland, in Advances in Parallel Computing, 1998
2 Practice
From the abstract design we employ Electronic Design Automation tools to synthesize actual FPGA circuits. In this case we have chosen to target the XC4000 series FPGA from Xilinx. A sample array cell (a Mutation cell) is given in Fig 2. The reader is directed to [7] for a detailed discussion of the design and implementation of this array. All the other array cells have been designed in this manner, and they are combined to form either a single device or a number of FPGAs arranged in a pipeline. Here we focus on details of the implementation of the cells and arrays not covered in the reference. Specifically, we investigate the general problem of implementing systolic arrays on FPGAs.
The period of the system clock, and hence the speed of the device, depends on the longest electrical path within the design: the critical path. The systolic principle, as applied to VLSI, keeps this path short by using only locally connected (systolic) array cells. For FPGAs we expect this principle to apply. On implementation, however, we discover that little of the abstract structure of the systolic arrays is preserved. Instead, the place and route software attempts a global optimization and scatters the constituent logic blocks across the device. This is illustrated by Figs 3 and 4. Fig 3 shows the floorplan of the 5-celled array. Although the FPGA logic cells (CLBs) are concentrated into one corner of the device, no routing structure has been preserved. Fig 4 shows a single mutation cell. Here the logic blocks themselves have been scattered.
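As a first-order reminder of why the critical path matters (a general relation for synchronous logic, not a result from this paper), the minimum clock period is bounded by the register overhead plus the longest register-to-register combinational delay:
\[ T_{\text{clk}} \ge t_{\text{clk-to-q}} + t_{\text{comb,critical}} + t_{\text{setup}}, \qquad f_{\max} = 1 / T_{\text{clk,min}} \]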
To improve matters we tried to place the design sensibly by hand. This, surprisingly, results in a design with a worse critical path. We believe this is due to overloading the limited routing resources in a particular area. An area/performance trade-off, made when placing and routing designs, is implied. This trade-off is complicated by designs that use the majority of the FPGA (such as the Roulette Wheel array). As available CLBs are used up, the design/CLB density increases. This has two effects: firstly, the area required per cell is reduced, a benefit; secondly, the critical path delay is increased, a disadvantage. Systolic arrays are often preferred to other architectures because they are scalable and easily replicated across the device. For an FPGA implementation on (at least) the XC4000 series, these advantages are lost.
Given these results, we can conclude that the coarse-grained structure of the XC4000 FPGA is not particularly suitable for systolic array implementations. We can further postulate that a suitable device would have many simple cells with abundant routing, connected in a local fashion. The latest device from Xilinx, the XC6200 series of FPGAs [8], appears to exhibit these properties. We look forward to assessing this new device in terms of its suitability for systolic array implementations.
URL: https://www.sciencedirect.com/science/article/pii/S0927545298800956
Fundamentals of algorithms
Chung-Yang (Ric) Huang, ... Kwang-Ting (Tim) Cheng, in Electronic Design Automation, 2009
Publisher Summary
This chapter presents fundamental algorithms for electronic design automation (EDA) research and development—from classic graph theories and practical heuristic approaches to theoretical mathematical programming techniques. The chapter goes through the fundamentals of algorithms that are essential for readers to appreciate the various EDA technologies. Many EDA problems can either be represented in graph data structures or be transformed into graph problems. The most representative ones, for which efficient algorithms have been well studied, are elaborated. Readers should be able to use these graph algorithms in solving many of their research problems. Heuristic algorithms that yield suboptimal, yet reasonably good, results are usually adopted as practical approaches; several selected heuristic algorithms are also covered. The mathematical programming algorithms, which provide the theoretical analysis of problem optimality, are explored, and the chapter focuses on the mathematical programming problems that are most common in EDA applications.
URL: https://www.sciencedirect.com/science/article/pii/B9780123743640500114
Heterogeneous Computing: An Emerging Paradigm of Embedded Systems Design
Abderazak Ben Abdallah, in Computational Frameworks, 2017
3.2.4 IP cores
An IP core is a block of logic or a software library that we use to design an SoC based on a single core or multiple cores. These software and hardware IPs are designed and highly optimized in advance (a time-to-market consideration) by specialized companies and are ready to be integrated into a new design. For example, we may buy a software library to perform some complex graphic operations and integrate that library with our existing code. We may also obtain such code freely from an open-source site online. Universal asynchronous receiver/transmitters (UARTs), CPUs, Ethernet controllers, and PCI interfaces are all examples of hardware IP cores.
As essential elements of design reuse, IP cores are part of the growing electronic design automation industry trend toward repeated use of previously designed components. Ideally, an IP core should be entirely portable, meaning that the core can easily be integrated (plug-and-play style) into any vendor technology or design methodology. Of course, some IPs are not standard and may need some kind of interface (called a wrapper) before they can be integrated into a design. IP cores fall into one of two main categories, i.e., soft cores and hard cores:
1) Soft IP core: Soft IP cores refer to circuits that are available at a higher level of abstraction, such as the register-transfer level (RTL). These types of cores can be customized by the user for specific applications.
2) Hard IP core: A hard IP core is one where the circuit is available at a lower level of abstraction, such as the layout level. For this type of core, it is impossible to customize it to suit the requirements of the embedded system. As a result, there are limited opportunities for optimizing the cost functions by modifying the hard IP.
A good IP core should be configurable so that it can meet the needs of many different designs. It should also have a standard interface so that it can be integrated easily. Finally, a good IP core should come with a complete set of deliverables: synthesizable RTL, complete test benches, synthesis scripts, and documentation. The example shown in Figure 3.15 is a hardware IP core from the FPGA provider Altera [ALT 12].
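As a hedged illustration of what "configurable" means for a soft IP core (a generic sketch, not an actual vendor deliverable), consider a small parameterized FIFO whose data width and depth are chosen per design at integration time:
// Hypothetical soft IP sketch: a synchronous FIFO configurable via parameters.
// Depth is assumed to be a power of two for simplicity.
module simple_fifo #(
  parameter int WIDTH = 8,
  parameter int DEPTH = 16
) (
  input  logic             clk,
  input  logic             rst_n,
  input  logic             push,
  input  logic             pop,
  input  logic [WIDTH-1:0] din,
  output logic [WIDTH-1:0] dout,
  output logic             full,
  output logic             empty
);
  localparam int AW = $clog2(DEPTH);
  logic [WIDTH-1:0] mem [DEPTH];
  logic [AW:0] wr_ptr, rd_ptr;      // extra bit distinguishes full from empty
  logic [AW:0] count;

  assign count = wr_ptr - rd_ptr;   // modular occupancy count
  assign full  = (count == DEPTH);
  assign empty = (count == 0);
  assign dout  = mem[rd_ptr[AW-1:0]];

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      wr_ptr <= '0;
      rd_ptr <= '0;
    end else begin
      if (push && !full) begin
        mem[wr_ptr[AW-1:0]] <= din;
        wr_ptr <= wr_ptr + 1'b1;
      end
      if (pop && !empty)
        rd_ptr <= rd_ptr + 1'b1;
    end
  end
endmodule
A reusable core like this would ship together with its test benches, synthesis scripts, and documentation, as described above.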
URL: https://www.sciencedirect.com/science/article/pii/B978178548256450003X
Fundamentals of CMOS design
Xinghao Chen, Nur A. Touba, in Electronic Design Automation, 2009
2.5.3 Layout design
Although most of the chip-level physical layout design activities are done by running automated EDA tools, most physical layout design library cells (a.k.a. books) are still created and fine-tuned manually with the help of EDA tools such as a layout editor. In this subsection, we highlight a few physical layout design examples of small CMOS circuit blocks. The layer-overlapping color display seen on designers' computer screens is known as symbolic layout. A chip-level symbolic layout display is often called the artwork. Once a chip-level physical layout design is verified against engineering metrics (such as DRC, timing, yield) and approved, EDA tools are used to extract manufacturing mask data from the physical layout data for production masks.
Figure 2.48 shows a symbolic layout of a classic CMOS inverter that uses the n-well process. The layout design uses one metal layer. Typically, cells and blocks in a library have the same height so that wires for VDD and GND can be aligned precisely throughout a chip. With this CMOS inverter, space is left between the n-channel transistor and the p-channel transistor so that this inverter cell maintains the same height as the other cells to be described in this subsection. Note that, whenever possible, n-well contacts (with VDD) are placed along the VDD supply line, and substrate contacts are placed along GND. These contacts are necessary to provide good grounding for the well and the substrate. Once a cell is created manually, it is important to check for any physical layout design rule violations. Typically, EDA tools provide such a function, known as a design rule check (DRC). It is important to note that, when performing DRC with an EDA tool, the correct rule set must be specified. For example, to check this CMOS inverter layout design for DRC violations, the n-well-based design rule set must be specified in the application. Inappropriate use of a design rule set would result in either failing to discover or wrongly identifying DRC violations.
Figure 2.49 shows a symbolic layout for a 2-input NAND gate that uses one metal layer and the n-well process. Because of this limitation, its two inputs are accessed at different sides. Typically, library cells would have their inputs on one side and their outputs on the other side. This can effectively reduce the overall wire length when cells are used in functional blocks. When a second metal layer is available, input b in Figure 2.49 can easily be rerouted to the West along the side of input a.
Figure 2.50 shows a symbolic layout of a 3-input OR followed by a 2-input NAND block, which uses one metal layer and the n-well process. Because it also uses one metal layer, the inputs of the block are accessed from both sides, and the output goes out on the left side. When a second metal layer is available, one can reroute inputs to the West and the output to the East. As an alternative, the inputs can also be routed for access from the South by extending the Poly wires beyond GND.
Note that in Figure 2.50, the p-channel transistor controlled by input a is one third of the size of the p-channel transistors controlled by inputs b, c, and d. This is because the p-channel transistors of inputs b, c, and d are connected in series, and by the transistor equivalence theory, the equivalent size of the series-connected p-channel transistors controlled by inputs b, c, and d is the same as the size of the p-channel transistor of input a.
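The underlying first-order relation (ignoring body effect and other second-order effects) is that identical transistors in series combine like conductances, so each of the k series devices must be made k times wider to match a single reference device:
\[ \left(\frac{W}{L}\right)_{\text{series}} = \left[\sum_{i=1}^{k}\left(\frac{W}{L}\right)_i^{-1}\right]^{-1} \quad\Longrightarrow\quad W_i = k\,W_{\text{ref}} \text{ for each of the } k \text{ series devices (here } k = 3\text{)} \]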
Figure 2.51 shows a symbolic layout of grading-series transistors in an AND dynamic CMOS block [Weste 1994] with 4 inputs. The layout design uses transistors of varying sizes, according to their position in the series structure, to reduce delay. The n-channel transistor closest to the output is the smallest, with the n-channel transistors increasing in size as they are placed nearer to GND. The switching time is reduced because there is less capacitance at the output. With older technologies, this provided a 15% to 30% performance boost. However, with submicron technologies, the improvement is much less, at 2% to 4% in some cases. Nevertheless, the example demonstrates how layout designs of blocks can be optimized.
It is worth noting that multiple techniques can often be applied to a block. As an exercise, readers can attempt to improve the design of Figure 2.51 by first analyzing and identifying the problems associated with the design, and then modifying the circuit and layout designs using the techniques discussed in this chapter to improve circuit speed and reduce transistor count, silicon area, and power consumption.
URL: https://www.sciencedirect.com/science/article/pii/B9780123743640500096
Formal verification
Erik Seligman, ... M V Achutha Kiran Kumar, in Formal Verification, 2015
FV in Real Design Flows
Based on the approaches above, there are a number of specific techniques that have been developed, using modern EDA tools, to leverage FV throughout the SOC design flow. In this section, we briefly review the major methods that we will discuss in the remainder of this book. Figure 1.5 illustrates the major stages of a VLSI design flow that we are emphasizing in this book, and where the FV methods we describe fit in.
- Assertion-Based Verification (ABV). This is the use of assertions, usually expressed in a language like SystemVerilog Assertions (SVA), to describe properties that must be true of RTL. In some cases, properties can fully describe the specification of a design. Using ABV does not in itself guarantee you will be doing true FV, since assertions can also be checked in simulation, and in fact such use accounts for the majority of ABV in the industry today. However, ABV is a key building block that enables FV.
- Formal Property Verification (FPV). This refers to the use of formal tools to prove assertions. FPV is a very general technique and can be further subdivided into numerous additional techniques:
  - Early Design Exercise FPV. This refers to using FPV's ability to analyze RTL at early stages, in order to help gain insights into initial functionality and find early bugs.
  - Full Proof FPV. This is the classic use of FPV to replace simulation and verify that an RTL model correctly implements its specification, described typically as a set of assertions.
  - Bug Hunting FPV. This refers to the use of FPV to supplement simulation, in cases where it is not possible to exhaustively verify a design. It can still be a very powerful technique, finding rare corner-case bugs and gaining theoretical coverage exponentially greater than simulation.
  - Unreachable Coverage Elimination. If your primary validation method is simulation, there will often be some point in the project when a few targeted cover states or lines of code have not been reached, and the validation team is struggling to figure out if they are testable or not. FPV can be used to identify reachable and unreachable states and regions of a design.
  - Specialized FPV Apps. This refers to FPV as applied to particular problems, such as verifying adherence to a known protocol, ensuring correct SOC connectivity, ensuring correct control register implementation, and finding post-silicon bugs. In each of these cases, attention to the characteristics of the particular problem can improve the productivity of the FPV flow involved.
- Formal Equivalence Verification (FEV). This refers to using formal techniques to compare two models and determine if they are equivalent. The models might each be high-level models, RTL, or schematics, or this might involve the comparison between two of these levels of abstraction.
  - Schematic FV. This refers to the use of FEV to check that a schematic netlist, resulting from synthesis or hand-drawn at transistor level, properly implements its RTL. This was the first type of FEV to become commercially viable, back in the 1990s, and was a critical enabler for the rise of powerful synthesis tools. Now nearly every company uses this technique, even if they do not do much FV in general.
  - Feature Change Verification. With the rising reuse of existing RTL models in SOC design, there are many cases where a piece of RTL is expected to continue to work for currently defined functionality, but has the code modified or changed for new use cases. There are also cases of nonfunctional changes, where we expect functionality to remain exactly the same while the model is changed to fix timing or other issues. FEV can be used to compare the two pieces of RTL, with any new functionality shut off, and make sure the previously existing usage modes will continue to work.
  - High-Level Model Equivalence Verification. This refers to the comparison of a high-level model, such as a SystemC model of the general functionality of a design, to an RTL model. This kind of comparison involves many challenges, since high-level models can be much more abstract than RTL. This is a new and emerging technology at the time of this writing, but it is advancing fast enough that we will include some discussion of this topic in our FEV chapter.
URL: https://www.sciencedirect.com/science/article/pii/B9780128007273000010
Source: https://www.sciencedirect.com/topics/computer-science/electronic-design-automation