Vighnesh Iyer
I'm a PhD student in the EECS department at UC Berkeley advised by Prof. Bora Nikolic.
I work in the areas of design methodology, hardware modeling and verification, ML for DV, and power modeling. In the past, I have built a high-performance monadic RTL testbench API, designed a VIP and random-generator library for a Chisel circuit testing framework, built RTL coverage prediction models, and explored specification mining for bug localization.
Conference Reviews
Misc Articles
- Machine Learning for Chip Placement: The Saga
- Ideas for Hammer's Next API
- Chiplet (CompArch) Research in Academia: Is it Sensible?
- Discussion with Eric Quinnell from Tesla
- Undergrad Projects in the SLICE Lab (Hardware Verification)
Research Topics
Research Agenda
High-performance testbench APIs and a DV environment at parity with SystemVerilog/UVM
- Use a high-level general-purpose language (Scala) to describe testbench logic, VIPs, scoreboards, and constrained random stimulus generators. Prove that we don't need to be tied down to the crippled and poorly supported SystemVerilog language, and we don't have to sacrifice performance either.
- First-class support for polyglot testbenches (e.g. using Python libraries for linear algebra or ML, C/C++ for driver/kernel code for co-simulation) on a unified runtime (e.g. GraalVM)
- March towards feature parity with the industry-standard toolchain (UVM + SystemVerilog + VCS/Xcelium): a temporal property specification language and functional coverage APIs in Scala for Chisel
- Particular focus in Fall 2023:
- Extending our prior work on SimCommand to improve its feature set and testbench performance, including performance optimizations within chiseltest
- Standardizing interfaces throughout Chisel RTL codebases to enable unified VIPs and test environments
- Reviving cosimulation infrastructure for accelerators, such as Gemmini, to evaluate large workloads accurately without resorting to FPGA simulation
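The monadic testbench style described above can be illustrated with a toy interpreter: testbench logic is written as a pure description of simulator interactions (poke, peek, step) that a runtime later executes. This is a minimal Python sketch of the idea only, not the actual SimCommand API (which is in Scala); all names here (`Poke`, `Peek`, `Step`, `run`) are hypothetical, and the DUT is stood in for by a plain dict.

```python
# Toy sketch of a monadic/coroutine-style testbench API.
# Hypothetical names throughout; a real interpreter would drive an
# RTL simulator instead of a dict.
from dataclasses import dataclass


@dataclass
class Poke:  # drive a DUT signal to a value
    signal: str
    value: int


@dataclass
class Peek:  # sample a DUT signal
    signal: str


class Step:  # advance the simulator by one clock cycle
    pass


def run(testbench, dut_state):
    """Interpret a generator-based testbench against `dut_state`,
    a dict standing in for DUT signals. Returns (result, cycles)."""
    cycles = 0
    gen = testbench()
    try:
        cmd = next(gen)
        while True:
            if isinstance(cmd, Poke):
                dut_state[cmd.signal] = cmd.value
                cmd = gen.send(None)
            elif isinstance(cmd, Peek):
                cmd = gen.send(dut_state.get(cmd.signal, 0))
            elif isinstance(cmd, Step):
                cycles += 1
                cmd = gen.send(None)
    except StopIteration as stop:
        return stop.value, cycles


def tb():
    # Testbench logic reads as straight-line code, but each `yield`
    # is a command handed to the interpreter.
    yield Poke("io.in", 42)
    yield Step()
    value = yield Peek("io.in")
    return value


result, cycles = run(tb, {})
```

Because the testbench is just a value handed to an interpreter, the same description could be run against different backends (a software simulator, a transaction-level model), which is one motivation for separating command description from execution.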
Machine learning for coverage closure, bug hunting, constraint tuning, regression suite construction, and intelligent fuzzing
- Investigating techniques for predicting RTL-level coverage from stimulus / random generator features
- Evaluating different methods for solving the 'missing data problem' associated with blackbox supervised learning approaches
- Investigating the utility of fine-grained input features in predicting complex output features such as time-domain coverage metrics
- Evaluating coverage model-guided bug hunting / state exploration techniques and constraint tuning approaches for targeting specific coverpoints (e.g. semantic fuzzing using constrained random)
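The core idea of coverage prediction above is to treat it as supervised learning: map features of the stimulus (or of the random generator's knob settings) to whether a coverpoint is hit. This is a deliberately tiny, stdlib-only sketch under assumed synthetic data; the dataset, features, and model are all hypothetical, and real coverage models would use far richer features than a two-knob linear boundary.

```python
# Toy illustration of coverage prediction as supervised learning:
# logistic regression from stimulus features to coverpoint hit/miss.
# Synthetic data and all names are hypothetical.
import math
import random

random.seed(0)


def make_dataset(n=200):
    """Each stimulus has two generator knob settings in [0, 1); the
    (synthetic) coverpoint fires when their sum exceeds a threshold."""
    data = []
    for _ in range(n):
        x = [random.random(), random.random()]
        hit = 1 if x[0] + x[1] > 1.0 else 0
        data.append((x, hit))
    return data


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def train(data, epochs=300, lr=0.5):
    """Per-sample gradient descent on the logistic log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            g = p - y  # gradient of log-loss w.r.t. the logit
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b


def accuracy(data, w, b):
    correct = sum(
        1
        for x, y in data
        if (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1)
    )
    return correct / len(data)


data = make_dataset()
w, b = train(data)
acc = accuracy(data, w, b)
```

The "missing data problem" shows up immediately in this framing: we only observe coverage for stimuli we actually simulated, so the training set is biased toward the regions the generator already explores.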
The first area is about demonstrating that verification can be more ergonomic and performant than the status quo. It is engineering-focused, but still has many unanswered research questions.
The second area is research-focused: we are working on techniques that may not pan out. ML has been very successful on continuous-domain problems and learning fuzzy relationships, but less successful on discrete-domain problems with strict combinatorial relationships.