# Examples

Practical examples and use cases for OpenTestability.
## Verilog Auto-Pipeline Examples

### Example 1: One-Command COP Analysis

The simplest way to analyze a Verilog file:

```bash
# Start tool
./opentest

# One command - everything automatic!
opentest> cop -i priority_encoder.v
[INFO] Verilog auto-pipeline mode
[INFO] Parsing Verilog file...
[INFO] Creating DAG...
[INFO] Detecting reconvergence (simple algorithm)...
[INFO] Reconvergent sites: 13
[INFO] Running COP analysis...
[✓] Results: data/results/priority_encoder_cop.txt
[✓] JSON: data/results/priority_encoder_cop.json

# With a custom output file
opentest> cop -i priority_encoder.v -o my_results.txt
```
### Example 2: One-Command SCOAP Analysis

```bash
opentest> scoap -i serial_ALU.v
[INFO] Verilog auto-pipeline mode
[INFO] Parsing Verilog file...
[INFO] Creating DAG...
[INFO] Detecting reconvergence...
[INFO] Fanout points extracted: 12
[INFO] Running SCOAP analysis...
[✓] Results: data/results/serial_ALU_scoap.json

# With verbose output
opentest> scoap -i serial_ALU.v -v
```
### Example 3: Auto Commands with Reconvergence

```bash
# COP with automatic reconvergence and parallel processing
opentest> auto-cop -i circuit.txt -w 4
[INFO] Auto-COP mode
[INFO] Creating DAG...
[INFO] Detecting reconvergence...
[INFO] Reconvergent sites: 25
[INFO] Circuit: 45,000 gates (sequential mode)
[✓] COP completed: data/results/circuit_cop.txt

# SCOAP with automatic reconvergence
opentest> auto-scoap -i large_circuit.txt -w 8
[INFO] Auto-SCOAP mode
[INFO] Circuit: 180,000 gates (parallel mode activated)
[INFO] Processing 35 reconvergent cones
[✓] SCOAP completed with 3.6x speedup
```
### Example 4: Python API - Auto-Pipeline

```python
from opentestability.core.cop import run_cop
from opentestability.core.scoap import run as run_scoap

# COP with automatic everything
run_cop('circuit.txt', 'cop_results.txt',
        reconvergence_algorithm='auto',  # Auto-select algorithm
        auto_parallel=True,              # Parallel if >100k gates
        max_workers=4)                   # 4 worker threads

# SCOAP with automatic everything
run_scoap('circuit.txt', 'scoap_results.json',
          reconvergence_algorithm='simple',  # Use the simple algorithm
          auto_parallel=True,
          max_workers=8)
```
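The `auto_parallel` behavior shown above reduces to a simple size check: parallel processing kicks in only past a gate-count cutoff (100k gates, per the log messages in Example 3). A minimal sketch of that decision, where `choose_mode` and its threshold are illustrative rather than part of the OpenTestability API:

```python
def choose_mode(gate_count, auto_parallel=True, threshold=100_000):
    """Pick 'parallel' only when auto_parallel is on and the circuit is large.

    Hypothetical helper for illustration; not an OpenTestability function.
    """
    if auto_parallel and gate_count > threshold:
        return "parallel"
    return "sequential"

# 45,000 gates stays sequential; 180,000 gates goes parallel
print(choose_mode(45_000))   # sequential
print(choose_mode(180_000))  # parallel
```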
## Test Point Insertion Examples

### Example 1: Basic TPI Workflow

A complete workflow from Verilog to the enhanced design:

```bash
./opentest

# Step 1: Parse design
opentest> parse -i priority_encoder.v
[✓] Parsed priority_encoder.v

# Step 2: Generate COP metrics with JSON output
opentest> cop -i data/parsed/priority_encoder.json -j
[✓] COP analysis completed: data/results/priority_encoder_cop.json

# Step 3: Insert test points
opentest> tpi -i data/parsed/priority_encoder.json \
              -m data/results/priority_encoder_cop.json \
              -t 50 -n 10
[✓] Test points inserted: data/TPI/priority_encoder_tp.v

Results:
  Original: 17 gates, 34 signals
  Enhanced: 22 gates, 39 signals (5 test points)
  Gate overhead: 29.4%
```
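The 29.4% figure is just the five added buffer gates measured against the original 17-gate design; a quick check of the arithmetic:

```python
original_gates = 17
enhanced_gates = 22  # 5 observation-point buffers added by TPI

overhead = (enhanced_gates - original_gates) / original_gates * 100
print(f"Gate overhead: {overhead:.1f}%")  # Gate overhead: 29.4%
```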
### Example 2: TPI with SCOAP Metrics

Using SCOAP metrics instead of COP:

```bash
# Generate SCOAP metrics
opentest> scoap -i data/parsed/design.json
[✓] SCOAP analysis completed

# Insert test points using SCOAP
opentest> tpi -i data/parsed/design.json \
              -m data/results/design_scoap.json \
              -t 60 -n 15 -v

Verbose output:
  Candidates analyzed: 45 signals
  Threshold: 60 (SCOAP complexity)
  Selected for test points: 12 signals
  Observation points designed: 10
  Control points designed: 2
  Validation: PASSED
```
### Example 3: Conservative TPI

Insert only the most critical test points:

```bash
# Conservative approach - only the worst signals
opentest> tpi -i large_design.json \
              -m large_design_cop.json \
              -t 20 -n 5

Results:
  Candidates: 234 signals below threshold 20
  Test points inserted: 5 (most critical)
  Focus: zero-observability signals
```
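The selection policy above amounts to: keep the signals whose metric falls below the threshold (`-t`), rank them worst-first, and cap the count at `-n`. A simplified sketch of that policy with made-up metric values; the real orchestrator also decides between control and observation points:

```python
def select_test_points(metrics, threshold, max_points):
    """Pick the worst (lowest-scoring) signals below the threshold.

    Illustrative model of the -t / -n selection; not the actual TPI code.
    """
    candidates = {sig: val for sig, val in metrics.items() if val < threshold}
    ranked = sorted(candidates, key=candidates.get)  # lowest score = worst
    return ranked[:max_points]

# Hypothetical testability scores for five internal nets
metrics = {'n1': 5, 'n2': 0, 'n3': 18, 'n4': 42, 'n5': 2}
print(select_test_points(metrics, threshold=20, max_points=2))  # ['n2', 'n5']
```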
### Example 4: Aggressive TPI

Improve many signals for maximum coverage:

```bash
# Aggressive approach - improve many signals
opentest> tpi -i design.json \
              -m design_cop.json \
              -t 80 -n 50

Results:
  Candidates: 156 signals below threshold 80
  Test points inserted: 50
  Coverage improvement: 15.3% estimated
```
### Example 5: TPI Output Analysis

An example of the generated Verilog:

```verilog
// Original: priority_encoder.v
module priority_encoder (
    input  [7:0] in,
    input        en,
    output [2:0] out,
    output       valid
);

// Enhanced: priority_encoder_tp.v
module priority_encoder_tp (
    // Original interface
    in,
    en,
    out,
    valid,
    // Test point interface - DFT
    tp_obs_out_0,
    tp_obs_n_15,
    tp_obs_n_14
);
    input  [7:0] in;
    input        en;
    output [2:0] out;
    output       valid;

    // DFT outputs (observation points)
    output tp_obs_out_0;
    output tp_obs_n_15;
    output tp_obs_n_14;

    // Original logic + test point buffers
    BUFX2 U_TP_OBS_1 (.A(out[0]), .Y(tp_obs_out_0));
    BUFX2 U_TP_OBS_2 (.A(n_15),   .Y(tp_obs_n_15));
    BUFX2 U_TP_OBS_3 (.A(n_14),   .Y(tp_obs_n_14));
endmodule
```
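One way to sanity-check a generated wrapper is to count the observation buffers it instantiates. A small sketch that scans the netlist text for the `U_TP_OBS` instance prefix shown above (assuming that naming convention holds in your version):

```python
import re

# Sample from the generated wrapper shown above
netlist_text = """
BUFX2 U_TP_OBS_1 (.A(out[0]), .Y(tp_obs_out_0));
BUFX2 U_TP_OBS_2 (.A(n_15),   .Y(tp_obs_n_15));
BUFX2 U_TP_OBS_3 (.A(n_14),   .Y(tp_obs_n_14));
"""

def count_observation_points(text):
    """Count observation-point buffer instances by their U_TP_OBS_<n> prefix."""
    return len(re.findall(r'\bU_TP_OBS_\d+\b', text))

print(count_observation_points(netlist_text))  # 3
```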
### Example 6: Python API for TPI

```python
from opentestability.core.testpoint import TPIOrchestrator

# Create TPI orchestrator
tpi = TPIOrchestrator()

# Run complete TPI flow
success, report = tpi.run(
    netlist_path='data/parsed/design.json',
    metrics_path='data/results/design_cop.json',
    output_path='data/TPI/design_tp.v',
    threshold=50,
    max_points=10
)

# Get summary statistics
summary = tpi.get_summary()
print(f"Test points inserted: {summary['test_points_inserted']}")
print(f"Gate overhead: {summary['gate_overhead_percent']:.1f}%")
```
## Quick Start Examples

### Example 1: Basic Analysis

Analyze a simple priority encoder:

```bash
# Start tool
./opentest

# Parse Verilog
opentest> parse -i priority_encoder.v
[✓] Parsed priority_encoder.v

# Create DAG
opentest> dag -i priority_enc_parsed.json
[✓] DAG saved to priority_enc_dag.json

# Run SCOAP
opentest> scoap -i priority_enc.txt
[✓] SCOAP analysis completed

# Visualize
opentest> visualize -i priority_enc_dag.json
[✓] Graph saved to priority_enc_graph.png
```
### Example 2: Compare Algorithms

Compare all reconvergence algorithms:

```bash
opentest> compare -i serial_alu_dag.json

Comparison Summary:
  Baseline: 15 reconvergences
  Simple:   18 reconvergences
  Advanced: 22 reconvergences
[✓] Comparison saved
```
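The summary lends itself to quick post-processing. A sketch that computes how many extra reconvergence sites each algorithm reports relative to the baseline, using the counts from the output above:

```python
# Reconvergence counts from the comparison summary above
counts = {'baseline': 15, 'simple': 18, 'advanced': 22}

for algo in ('simple', 'advanced'):
    extra = counts[algo] - counts['baseline']
    pct = extra / counts['baseline'] * 100
    print(f"{algo}: +{extra} sites ({pct:.0f}% more than baseline)")
```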
## Python API Examples

### Example 3: Programmatic Analysis

```python
#!/usr/bin/env python3
"""Complete analysis script."""
import json
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent / "src"))

from opentestability.parsers.verilog_parser import parse
from opentestability.core.dag_builder import create_dag_from_netlist
from opentestability.core.scoap import run as scoap_run
from opentestability.core.reconvergence import analyze_reconvergence
from opentestability.visualization.graph_renderer import visualize_gate_graph


def analyze_circuit(circuit_name):
    """Complete testability analysis workflow."""
    print(f"Analyzing {circuit_name}...")

    # Step 1: Parse
    print("  [1/5] Parsing Verilog...")
    parse(f"{circuit_name}.v", f"{circuit_name}.txt")

    # Step 2: Create DAG
    print("  [2/5] Creating DAG...")
    dag_path = create_dag_from_netlist(f"{circuit_name}_parsed.json")

    # Step 3: SCOAP
    print("  [3/5] Running SCOAP analysis...")
    scoap_path = scoap_run(f"{circuit_name}.txt",
                           f"{circuit_name}_scoap.json",
                           json_flag=True)

    # Step 4: Reconvergence
    print("  [4/5] Detecting reconvergence...")
    reconv_path = analyze_reconvergence(f"{circuit_name}_dag.json")

    # Step 5: Visualize
    print("  [5/5] Generating visualization...")
    viz_path = visualize_gate_graph(f"{circuit_name}_dag.json")

    # Load and summarize results
    with open(scoap_path, 'r') as f:
        scoap_data = json.load(f)
    with open(reconv_path, 'r') as f:
        reconv_data = json.load(f)

    print("\n✓ Analysis Complete:")
    print(f"  Signals analyzed: {len(scoap_data)}")
    print(f"  Reconvergences found: {len(reconv_data)}")
    print(f"  Visualization: {viz_path}")

    return {
        'scoap': scoap_data,
        'reconvergence': reconv_data,
        'dag': dag_path,
        'visualization': viz_path
    }


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python analyze.py <circuit_name>")
        sys.exit(1)
    circuit = sys.argv[1]
    results = analyze_circuit(circuit)
```
### Example 4: Custom SCOAP Analysis

```python
"""Find hard-to-test signals using SCOAP."""
import json

from opentestability.core.scoap import run as scoap_run
from opentestability.utils.file_utils import get_project_paths

# Run SCOAP
scoap_run("serial_alu.txt", "scoap_results.json", json_flag=True)

# Load results
paths = get_project_paths()
with open(paths['results'] / "scoap_results.json", 'r') as f:
    scoap = json.load(f)

# Find hard-to-test signals
threshold = 10
hard_signals = []
for signal, metrics in scoap.items():
    total_controllability = metrics['CC0'] + metrics['CC1']
    observability = metrics['CO']
    if total_controllability > threshold or observability > threshold:
        hard_signals.append({
            'signal': signal,
            'CC0': metrics['CC0'],
            'CC1': metrics['CC1'],
            'CO': metrics['CO'],
            'testability_score': total_controllability + observability
        })

# Sort by testability (worst first)
hard_signals.sort(key=lambda x: x['testability_score'], reverse=True)

# Report
print("Hard-to-Test Signals:")
print(f"{'Signal':<15} {'CC0':<5} {'CC1':<5} {'CO':<5} {'Score':<6}")
print("-" * 40)
for sig in hard_signals[:10]:  # Top 10
    print(f"{sig['signal']:<15} {sig['CC0']:<5} {sig['CC1']:<5} "
          f"{sig['CO']:<5} {sig['testability_score']:<6}")
```
### Example 5: Visualize with SCOAP Colors

```python
"""Color circuit nodes by testability."""
import json

from opentestability.visualization.graph_renderer import (
    load_dag, create_graph_visualization
)
from opentestability.utils.file_utils import get_project_paths

# Load data
edges, labels, pi, po = load_dag("serial_alu_dag.json")
paths = get_project_paths()
with open(paths['results'] / "serial_alu_scoap.json", 'r') as f:
    scoap = json.load(f)

# Create graph
graph = create_graph_visualization(edges, labels, pi, po)

# Color by testability
for node in labels:
    if node in scoap:
        cc_total = scoap[node]['CC0'] + scoap[node]['CC1']
        # Color scale: green (easy) → yellow → red (hard)
        if cc_total <= 5:
            color = 'lightgreen'
        elif cc_total <= 10:
            color = 'yellow'
        elif cc_total <= 15:
            color = 'orange'
        else:
            color = 'red'
        n = graph.get_node(node)
        n.attr['fillcolor'] = color
        n.attr['style'] = 'filled'
        n.attr['label'] = f"{node}\\nCC:{cc_total}"  # literal \n: Graphviz line break

graph.layout(prog='dot')
graph.draw('testability_colored.png')
print("Visualization saved: testability_colored.png")
```
## Batch Processing Examples

### Example 6: Analyze Multiple Circuits

```python
"""Batch analyze all circuits in the input directory."""
import json

from opentestability.utils.file_utils import get_project_paths
from opentestability.parsers.verilog_parser import parse
from opentestability.core.dag_builder import create_dag_from_netlist
from opentestability.core.scoap import run as scoap_run

paths = get_project_paths()

# Find all Verilog files
verilog_files = list(paths['input'].glob("*.v"))
print(f"Found {len(verilog_files)} circuits\n")

results_summary = []
for vfile in verilog_files:
    basename = vfile.stem
    print(f"Processing {basename}...")
    try:
        # Parse
        parse(vfile.name, f"{basename}.txt")

        # DAG
        dag_path = create_dag_from_netlist(f"{basename}_parsed.json")

        # SCOAP
        scoap_path = scoap_run(f"{basename}.txt",
                               f"{basename}_scoap.json",
                               json_flag=True)

        # Load results
        with open(scoap_path, 'r') as f:
            scoap_data = json.load(f)

        # Calculate statistics
        cc_values = [m['CC0'] + m['CC1'] for m in scoap_data.values()]
        co_values = [m['CO'] for m in scoap_data.values()]
        results_summary.append({
            'circuit': basename,
            'signals': len(scoap_data),
            'avg_controllability': sum(cc_values) / len(cc_values),
            'avg_observability': sum(co_values) / len(co_values),
            'max_controllability': max(cc_values),
            'max_observability': max(co_values)
        })
        print(f"  ✓ {basename} completed\n")
    except Exception as e:
        print(f"  ✗ Error: {e}\n")

# Save summary
summary_path = paths['results'] / "batch_summary.json"
with open(summary_path, 'w') as f:
    json.dump(results_summary, f, indent=2)

print("\nBatch analysis complete!")
print(f"Summary saved to: {summary_path}")

# Print table
print("\nResults Summary:")
print(f"{'Circuit':<20} {'Signals':<8} {'Avg CC':<8} {'Avg CO':<8}")
print("-" * 50)
for r in results_summary:
    print(f"{r['circuit']:<20} {r['signals']:<8} "
          f"{r['avg_controllability']:<8.2f} {r['avg_observability']:<8.2f}")
```
## Advanced Examples

### Example 7: Critical Path Analysis

```python
"""Identify critical paths using reconvergence data."""
import json

from opentestability.utils.file_utils import get_project_paths

paths = get_project_paths()

# Load reconvergence data
with open(paths['reconvergence_output'] / "circuit_dag_reconv.json", 'r') as f:
    reconv_data = json.load(f)

# Load SCOAP data
with open(paths['results'] / "circuit_scoap.json", 'r') as f:
    scoap_data = json.load(f)

# Analyze critical paths
critical_paths = []
for rc in reconv_data:
    fanout = rc['fanout_point']
    reconverge = rc['reconverge_point']
    paths_list = rc.get('paths', [])

    # Criticality score: harder-to-control fanout stems, harder-to-observe
    # reconvergence points, and more parallel paths all raise the score
    fanout_cc = scoap_data.get(fanout, {}).get('CC0', 0) + \
                scoap_data.get(fanout, {}).get('CC1', 0)
    reconverge_co = scoap_data.get(reconverge, {}).get('CO', 0)
    path_count = len(paths_list)
    criticality = fanout_cc + reconverge_co + (path_count * 2)

    critical_paths.append({
        'fanout': fanout,
        'reconverge': reconverge,
        'path_count': path_count,
        'criticality': criticality,
        'fanout_controllability': fanout_cc,
        'reconverge_observability': reconverge_co
    })

# Sort by criticality (most critical first)
critical_paths.sort(key=lambda x: x['criticality'], reverse=True)

# Report the top 10 critical paths
print("Top 10 Critical Reconvergent Paths:")
print(f"{'Fanout':<10} {'Reconverge':<12} {'Paths':<6} {'Criticality':<12}")
print("-" * 50)
for cp in critical_paths[:10]:
    print(f"{cp['fanout']:<10} {cp['reconverge']:<12} "
          f"{cp['path_count']:<6} {cp['criticality']:<12}")

# Save for further analysis
with open(paths['results'] / "critical_paths.json", 'w') as f:
    json.dump(critical_paths, f, indent=2)
```
### Example 8: Sequential Circuit Analysis

```python
"""Analyze a sequential circuit with flip-flops."""
from opentestability.parsers.verilog_parser import parse_verilog_netlist
from opentestability.core.dag_builder import build_dag

# Parse sequential circuit
modules = parse_verilog_netlist("data/input/serial_ALU.v")
module_data = modules['serial_alu']

# Separate flip-flops from combinational gates
flip_flops = []
combinational_gates = []
for gate_type, inst_name, conns in module_data['instances']:
    if 'DFF' in gate_type or 'LATCH' in gate_type:
        flip_flops.append({
            'type': gate_type,
            'name': inst_name,
            'connections': conns
        })
    else:
        combinational_gates.append({
            'type': gate_type,
            'output': conns.get('Y', conns.get('Z', 'unknown')),
            'inputs': [v for k, v in conns.items()
                       if k not in ['Y', 'Z', 'Q', 'QN']]
        })

print("Circuit Analysis:")
print(f"  Total gates: {len(module_data['instances'])}")
print(f"  Flip-flops: {len(flip_flops)}")
print(f"  Combinational gates: {len(combinational_gates)}")

print("\nFlip-flop Details:")
for ff in flip_flops:
    print(f"  {ff['type']}: {ff['name']}")
    print(f"    Q output: {ff['connections'].get('Q', 'N/A')}")
    print(f"    D input: {ff['connections'].get('D', 'N/A')}")

# Build DAG of the combinational portion only
edges, labels = build_dag(combinational_gates)
print("\nCombinational DAG:")
print(f"  Nodes: {len(labels)}")
print(f"  Edges: {len(edges)}")
```
## Integration Examples

### Example 9: Export to Other Tools

```python
"""Export analysis results to other EDA tool formats."""
import csv
import json

from opentestability.utils.file_utils import get_project_paths

paths = get_project_paths()

# Load SCOAP results
with open(paths['results'] / "circuit_scoap.json", 'r') as f:
    scoap = json.load(f)

# Export to CSV
csv_path = paths['results'] / "scoap_export.csv"
with open(csv_path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Signal', 'CC0', 'CC1', 'CO', 'Testability_Score'])
    for signal, metrics in scoap.items():
        score = metrics['CC0'] + metrics['CC1'] + metrics['CO']
        writer.writerow([signal, metrics['CC0'], metrics['CC1'],
                         metrics['CO'], score])
print(f"Exported to CSV: {csv_path}")

# Export to Cadence Genus TCL format
tcl_path = paths['results'] / "testability_report.tcl"
with open(tcl_path, 'w') as f:
    f.write("# OpenTestability SCOAP Results\n")
    f.write("# Generated testability report\n\n")
    for signal, metrics in scoap.items():
        if metrics['CC0'] + metrics['CC1'] > 15:
            f.write(f"# Hard to control: {signal}\n")
            f.write(f"report_timing -from {signal}\n")
print(f"Exported to TCL: {tcl_path}")
```
## Testing and Validation

### Example 10: Validate Against Known Results

```python
"""Validate SCOAP calculations against expected values."""
from opentestability.core.scoap import calculate_scoap

# Simple test circuit: z = NOT((a AND b) OR c)
gates = [
    {"type": "AND2X1", "output": "n1", "inputs": ["a", "b"]},
    {"type": "OR2X1", "output": "n2", "inputs": ["n1", "c"]},
    {"type": "INVX1", "output": "z", "inputs": ["n2"]}
]
inputs = ["a", "b", "c"]
outputs = ["z"]

# Calculate
results = calculate_scoap(gates, inputs, outputs)

# Expected values per the standard SCOAP recurrences:
#   primary inputs: CC0 = CC1 = 1
#   AND: CC1 = sum of input CC1s + 1, CC0 = min input CC0 + 1
#   OR:  CC0 = sum of input CC0s + 1, CC1 = min input CC1 + 1
#   INV: CC0 = input CC1 + 1,        CC1 = input CC0 + 1
expected = {
    'a': {'CC0': 1, 'CC1': 1},
    'b': {'CC0': 1, 'CC1': 1},
    'c': {'CC0': 1, 'CC1': 1},
    'n1': {'CC0': 2, 'CC1': 3},
    'n2': {'CC0': 4, 'CC1': 2},
    'z': {'CC0': 3, 'CC1': 5}
}

# Validate
all_correct = True
for signal, exp_metrics in expected.items():
    for metric, exp_value in exp_metrics.items():
        actual_value = results[signal][metric]
        if actual_value != exp_value:
            print(f"MISMATCH: {signal}.{metric} = {actual_value}, "
                  f"expected {exp_value}")
            all_correct = False

if all_correct:
    print("✓ All SCOAP calculations correct!")
else:
    print("✗ Some calculations incorrect")
```
## More Examples

Check the `examples/` directory for:

- Real circuit netlists
- SDC timing constraints
- Genus TCL scripts
- Reconvergence integration examples
- Batch processing scripts
- Additional Python examples