
Yield (Circuit)


Yield is a critical metric in integrated circuit (IC) reliability engineering, measuring the proportion of manufactured chips that meet specified performance and functional requirements. These specifications may include timing, power, area, and noise margins, among others. Despite highly controlled manufacturing processes, inherent variability in semiconductor fabrication can cause deviations that affect final circuit behavior.

In modern semiconductor production, yield directly influences manufacturing cost and product viability. High yield means more functional chips per wafer, reducing cost per chip and maximizing economic return. Conversely, low yield leads to increased waste, reduced throughput, and higher production costs, especially in advanced process nodes such as 7 nm and below, where each wafer represents a significant financial investment.

To mitigate these risks, circuit designers must address yield proactively during the design phase. This involves not only estimating the yield under expected process variations but also optimizing the design to make it more robust. Yield considerations are now an integral part of electronic design automation (EDA) workflows, where simulation and optimization tools help engineers navigate the complex trade-offs between performance, area, power, and manufacturability.

Consequently, two key challenges arise in yield-centric design: yield estimation (also referred to as yield analysis), which seeks to accurately compute the probability of a circuit meeting specifications under variation; and yield optimization, which aims to adjust design parameters to improve this probability. Both tasks are essential for ensuring that designs are both functional and cost-effective in real-world fabrication.

Background


As semiconductor technology advances into nanometer scales, integrated circuits become increasingly sensitive to process-induced variability. The manufacturing of ICs involves numerous steps—such as photolithography, etching, doping, and deposition—each of which introduces variability at different scales. These variations manifest both globally (across wafers or dies) and locally (within a single die), influencing critical electrical parameters such as threshold voltage, channel length, oxide thickness, and mobility.[1]

One key source of variability is random dopant fluctuation (RDF), where the discrete and stochastic nature of dopant atoms in the transistor channel leads to unpredictable threshold voltage shifts. Similarly, line-edge roughness (LER) causes variations in transistor gate dimensions, further affecting drive strength and switching speed. Other contributors include oxide thickness variations, work-function fluctuations, and stress-induced mobility changes.

These variations are especially consequential for analog and mixed-signal circuits, where performance often hinges on precise matching of components and predictable analog behavior.[2][3] For example, mismatch in current mirrors or differential pairs can lead to significant gain errors or offset voltages, degrading circuit accuracy. Moreover, high-speed or high-frequency circuits—such as those in RF applications—are vulnerable to parasitic variation and layout-dependent effects, making yield degradation even more pronounced.

In digital designs, process variations can lead to timing violations, increased leakage currents, or reduced noise margins, particularly in timing-critical paths. For memory circuits like SRAM, variation-induced read/write failures and access time degradation are well-documented challenges. As a result, ensuring that circuits perform correctly under these variations is a central concern across all IC domains.

To capture the statistical nature of these effects, yield is defined as the probability that a circuit, fabricated under such variations, satisfies all performance constraints. This probabilistic view allows designers to analyze how process variability translates into performance uncertainty, and how often these uncertainties push a design outside of acceptable limits.

As variability continues to grow with each process node, the challenge of maintaining high yield becomes more complex. Addressing this challenge requires close collaboration between technology developers, circuit designers, and EDA tool providers to develop models, methodologies, and algorithms that can accurately predict, analyze, and mitigate the impact of process variations throughout the IC design and fabrication pipeline.

Formulation

Formal definition of yield (or failure rate): from a geometric perspective, failure-rate estimation corresponds to computing the area of the region of the variation space where circuit performance lies above the failure threshold. However, since circuit performance can only be evaluated via simulation, there is no explicit analytical expression for it.

This section presents a formal definition of circuit yield. Let $x \in \mathcal{X}$ denote the design parameters (e.g., the length and width of a transistor) that are controlled by the designer, where $\mathcal{X}$ defines the feasible design space. The manufacturing process variations are assumed to be fully captured by the random variables $v$. Without loss of generality, the variational parameters (also referred to as process parameters) are often modeled as a standard normal distribution:

$$v \sim \mathcal{N}(\mathbf{0}, \mathbf{I}).$$

Circuit performance is determined by both design and variation parameters, and can be modeled as a deterministic function $f(x, v)$. If all performance specifications are satisfied, the circuit is considered qualified. To formalize this, an indicator function is defined to determine whether a given circuit instance meets all required specifications:

$$I(x, v) = \begin{cases} 1, & \text{if } f_i(x, v) \le t_i \text{ for all } i = 1, \dots, m, \\ 0, & \text{otherwise,} \end{cases}$$

where $t_i$ is the $i$-th performance specification. For a given design $x$, yield is defined as the probability that a manufactured circuit meets all design specifications under process variations, which is mathematically formalized as:

$$Y(x) = \mathbb{E}_{v \sim \mathcal{N}(\mathbf{0}, \mathbf{I})}\left[ I(x, v) \right] = \Pr\left( f_i(x, v) \le t_i,\; i = 1, \dots, m \right).$$

In some literature, for convenience, the failure rate $P_f(x) = 1 - Y(x)$ is used as an equivalent representation of yield.
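For illustration, the minimal Python sketch below instantiates these objects on a hypothetical toy circuit. The analytic performance() function is an assumed stand-in for the simulator-evaluated $f(x, v)$, and the single specification threshold is arbitrary; a real $f(x, v)$ has no closed form.

```python
def performance(x, v):
    # Hypothetical analytic metric standing in for the simulator-evaluated
    # f(x, v); a real f(x, v) requires a SPICE-level simulation.
    return x[0] + x[1] * v[0] + 0.5 * v[1] ** 2

def indicator(x, v, threshold=3.0):
    # I(x, v) = 1 when every specification f_i(x, v) <= t_i holds;
    # here a single toy specification f(x, v) <= threshold.
    return float(performance(x, v) <= threshold)

# Yield Y(x) is the expectation of indicator(x, v) over v ~ N(0, I);
# the failure rate is P_f(x) = 1 - Y(x).
```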

Yield estimation


The yield estimation problem concerns how to accurately estimate the yield value with minimal simulation cost, especially for high-dimensional circuits. The main challenge lies in the fact that the performance function $f(x, v)$ can only be evaluated through time-consuming circuit simulations. As a result, the yield function $Y(x)$ does not have a closed-form expression and must be estimated using numerical methods.

Currently, the industrial gold standard for yield estimation is the Monte Carlo (MC) method, which approximates the yield as:

$$\hat{Y}(x) = \frac{1}{N} \sum_{i=1}^{N} I\left(x, v^{(i)}\right),$$

where $v^{(1)}, \dots, v^{(N)}$ are independent and identically distributed samples drawn from the distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$. The major drawback of the MC method lies in its inefficiency. In practice, yield targets are often extremely high, leading to a rare-event setting where only a small subset of samples violate the specifications. Consequently, a large number of simulations is required to achieve statistically accurate yield estimates: the relative error of the MC estimator scales as $1/\sqrt{N P_f}$, so achieving 10% relative error requires roughly $100/P_f$ simulations (for example, about $10^7$ simulations when $P_f = 10^{-5}$), which may take several hours.[4]
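A minimal Monte Carlo estimator under the toy model above; the sample count and design point are arbitrary illustrations, and indicator() is the toy function sketched in the Formulation section.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_yield(x, n_samples=10_000, dim=2):
    # Draw i.i.d. process-variation samples v ~ N(0, I) and average
    # the pass/fail indicator to estimate Y(x).
    v = rng.standard_normal((n_samples, dim))
    passes = np.array([indicator(x, vi) for vi in v])
    return passes.mean()

x_design = [1.0, 0.8]
print(f"Estimated MC yield: {mc_yield(x_design):.4f}")
```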

To improve the efficiency of yield estimation, one can either reduce the number of required simulations or lower the cost of each simulation. These two strategies correspond to two major classes of acceleration techniques: importance sampling and surrogate modeling, respectively.

Comparison between importance sampling and Monte Carlo: Monte Carlo sampling draws random samples from the entire space of variational parameters according to a standard normal distribution. In contrast, importance sampling generates samples from a proposal distribution that is biased toward the failure region, thereby improving the efficiency of yield estimation.

Importance sampling (IS) is a statistical technique used to improve the efficiency of Monte Carlo simulations, particularly in rare-event scenarios where standard sampling would require an impractically large number of evaluations.[5][6][7][8] In the context of yield estimation, IS focuses computational resources on the failure region—the set of process variation instances that cause a circuit to violate specifications—which is otherwise sparsely represented under normal distributions.

Instead of sampling from the original probability distribution of process variations (typically a standard normal), IS introduces a proposal distribution that over-samples the critical or rare regions. The estimator then corrects for this bias using a likelihood ratio, ensuring that the final yield estimate remains unbiased.

Mathematically, importance sampling can be expressed as:

$$\hat{Y}_{\mathrm{IS}}(x) = \frac{1}{N} \sum_{i=1}^{N} \frac{p\left(v^{(i)}\right)}{q\left(v^{(i)}\right)} \, I\left(x, v^{(i)}\right),$$

where $v^{(1)}, \dots, v^{(N)}$ are independent and identically distributed samples drawn from the proposal distribution $q(v)$, and $p(v)$ is the original density of the process variations. By carefully designing a suitable proposal distribution $q(v)$, importance sampling can obtain sufficient failure samples with significantly fewer simulations, enabling accurate yield estimation.

The key challenge in IS is choosing a good proposal distribution $q(v)$. An ideal $q(v)$ would concentrate samples in the failure region without entirely neglecting other feasible domains, thereby balancing variance reduction with estimator accuracy. A poorly chosen $q(v)$ can lead to high variance or numerical instability due to large likelihood ratios.
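A minimal mean-shift importance-sampling sketch under the same toy model. The shift vector mu is an assumed stand-in for what a real method would compute, for example a minimum-norm failure point; indicator() is the toy function from the Formulation section.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_failure_rate(x, shift, n_samples=10_000):
    # Proposal q = N(shift, I) biases samples toward the failure region.
    v = shift + rng.standard_normal((n_samples, shift.shape[0]))
    # Likelihood ratio p(v)/q(v) for p = N(0, I), q = N(shift, I):
    # p/q = exp(||shift||^2 / 2 - v . shift).
    w = np.exp(0.5 * shift @ shift - v @ shift)
    fails = np.array([1.0 - indicator(x, vi) for vi in v])
    # Weighting by p/q keeps the estimator unbiased despite the
    # biased sampling distribution.
    return np.mean(w * fails)

mu = np.array([2.5, 0.0])  # assumed shift toward the failure boundary
print(f"IS failure-rate estimate: {is_failure_rate([1.0, 0.8], mu):.2e}")
```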

Surrogate modeling employs data-driven machine learning techniques to approximate the behavior of computationally expensive circuit simulators. The key idea is to learn a predictive model that maps input design and process parameters to output performance metrics, mimicking the true simulator with much lower evaluation cost.

Once trained to a sufficient level of accuracy, the surrogate model can be used as a drop-in replacement for the original simulator in yield estimation workflows. This substitution drastically reduces the number of high-fidelity simulations required, making it possible to perform large-scale statistical analysis, optimization, and sensitivity studies within reasonable computational budgets.

Surrogate modeling is especially beneficial in high-dimensional design spaces or when the simulation time per sample is prohibitively high. It enables rapid exploration of the design space and supports iterative processes such as Bayesian optimization and adaptive sampling.

Common surrogate models include Gaussian processes (GP),[9] conditional normalizing flows (CNF),[10] low-rank tensor approximations,[11] Bayesian neural networks,[12] and radial basis function networks.[13]

These approaches enable more efficient use of simulation budgets, especially in high-dimensional or compute-intensive scenarios, and form the foundation for scalable yield-aware design methodologies.
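As an illustration of the surrogate idea, the sketch below fits a scikit-learn Gaussian-process regressor to a small batch of evaluations of the toy performance() function used earlier, then estimates yield by Monte Carlo on the surrogate alone. The kernel choice and sample counts are arbitrary assumptions, not a prescription.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
x_design = [1.0, 0.8]

# Small training set: each row plays the role of one expensive simulation.
v_train = rng.standard_normal((200, 2))
y_train = np.array([performance(x_design, vi) for vi in v_train])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(v_train, y_train)

# Large-sample yield estimate on the cheap surrogate: no new simulations.
v_test = rng.standard_normal((50_000, 2))
print(f"Surrogate yield estimate: {np.mean(gp.predict(v_test) <= 3.0):.4f}")
```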

Yield optimization


While accurate yield estimation is critical, the ultimate objective for circuit designers is to optimize yield—that is, to find a design configuration that maximizes the probability of meeting performance specifications under process variations. Yield optimization balances two competing goals: maximizing the statistical success rate (yield) and minimizing the computational cost associated with evaluating different design candidates.

Formally, yield optimization can be posed as the following optimization problem:

$$x^{*} = \arg\max_{x \in \mathcal{X}} Y(x),$$

where $Y(x)$ denotes the yield function corresponding to design $x$, and $\mathcal{X}$ is the feasible design space.

A major challenge in this formulation is that $Y(x)$ has no closed-form expression. Evaluating $Y(x)$ for any given design point requires computationally intensive simulations, often via Monte Carlo methods or their advanced variants. Furthermore, the yield function is generally non-convex, non-smooth, and lacks gradient information, making traditional gradient-based optimization algorithms inapplicable.[9]

To address this, yield optimization is often treated as a black-box optimization problem, where the objective function can only be accessed through function evaluations (i.e., simulations). Among black-box methods, Bayesian optimization (BO) has become particularly prominent due to its sample efficiency.[14]

In Bayesian yield optimization, a Gaussian Process (GP) model is used as a surrogate to approximate the unknown yield function. This model captures both predictions and associated uncertainty. At each iteration, an acquisition function—such as Expected Improvement (EI) or Upper Confidence Bound (UCB)—is optimized to select the next design point. The selected point is then evaluated using a yield estimation method, and the GP model is updated with the new data. This process is repeated iteratively until a satisfactory yield is achieved or the simulation budget is exhausted.[9][3][15][16]
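A compact sketch of this loop under the toy setup from the estimation section: the mc_yield() estimator plays the role of the expensive yield evaluation, and the square design domain, Matérn kernel, iteration budget, and random candidate grid for optimizing the acquisition are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)

def expected_improvement(mu, sigma, best):
    # EI for maximization; guard against zero predictive deviation.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# A handful of initial designs with (expensively) estimated yields.
X = rng.uniform(0.0, 2.0, size=(5, 2))
Y = np.array([mc_yield(x) for x in X])

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, Y)
    # Coarse random candidate set; real flows use a dedicated
    # acquisition optimizer instead.
    cand = rng.uniform(0.0, 2.0, size=(512, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, Y.max()))]
    X = np.vstack([X, x_next])
    Y = np.append(Y, mc_yield(x_next))  # one yield evaluation per step

print(f"Best design {X[np.argmax(Y)]} with estimated yield {Y.max():.4f}")
```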

This framework enables designers to navigate complex, high-dimensional design spaces with relatively few expensive evaluations. In addition to Bayesian methods, alternative black-box algorithms—such as Differential Evolution (DE)—have also been explored for yield optimization, especially in cases where model assumptions for Bayesian methods do not hold or evaluation noise is high.[13]

As circuit designs continue to grow in complexity and variability, efficient yield optimization remains a vital component of design automation, bridging the gap between theoretical robustness and practical manufacturability.

Existing works


Yield estimation


Importance sampling


Minimum-norm importance sampling (MNIS)[5] proposes a highly efficient technique specifically designed for rare-event analysis in memory circuits, particularly 6T-SRAM. It introduces a norm minimization principle inspired by Large Deviation Theory to construct an effective proposal distribution for IS, achieving massive speedups over standard Monte Carlo simulation while maintaining high accuracy.

Scaled-sigma sampling (SSS)[7] accelerates rare-event analysis by amplifying process variations to increase failure rates during simulation. Failure probabilities under nominal conditions are then inferred through statistical extrapolation. By avoiding direct sampling in low-probability regions, SSS significantly reduces simulation cost while maintaining accuracy.

Hyperspherical Clustering and Sampling (HSCS)[17] is a method combining hyperspherical presampling with clustering to identify multiple failure regions. It builds mixture IS distributions based on spherical exploration of the parametric space. HSCS is shown to be more robust than prior methods in covering disjoint failure regions, providing accurate yield estimation with high sample efficiency.

Adaptive Importance Sampling (AIS)[6] proposes an adaptive method to address the challenge of estimating extremely low failure probabilities in memory circuits. Unlike static IS methods, AIS iteratively updates the sampling distribution by resampling weighted failure samples, improving both efficiency and robustness. It significantly outperforms traditional methods in both single and multiple failure region scenarios, achieving over 4000× speedup in complex circuits.

Adaptive clustering and sampling (ACS)[18] addresses multi-modal failure analysis by clustering observed failures and constructing a weighted Gaussian mixture for guided sampling. The method adaptively explores cone-shaped subspaces, achieving high sample efficiency and outperforming conventional techniques in high-dimensional settings, such as 576-dimensional SRAM circuits.

Every failure is a lesson (EFIAL)[8] enhances minimum-norm importance sampling by incorporating all observed failure samples instead of relying solely on the closest one. The resulting proposal distribution is tuning-free and scales linearly with the number of failures. EFIAL yields up to 13.5× speedup and notable accuracy improvements, especially when combined with pre-sampling techniques such as onion sampling.

Variational importance sampling (VIS)[19] formulates yield estimation as a variational optimization problem. Unlike traditional norm-based methods, VIS places the optimal mean shift vector beyond the failure boundary and employs full-covariance models, skew-normal distributions, and mixture models for multi-modal failures. These refinements enable up to 29× speedup and improved accuracy in rare-event estimation.

Surrogate modeling

Low-Rank Tensor Approximation is a technique for representing high-dimensional functions using a compact sum of low-rank components. It reduces computational complexity by capturing key patterns with fewer parameters, making it efficient for modeling problems with many input variables. Low-rank tensor approximation-based yield estimation (LRTA)[11] constructs a sparse polynomial surrogate using tensor decomposition to model high-dimensional circuit performance. An adaptive sampling strategy focuses training near failure regions, enabling accurate yield estimation with up to 6300× speedup over Monte Carlo, even in dimensions as high as 597.

Gaussian process (GP) is a non-parametric regression model that defines a distribution over functions, where any finite set of input points has a joint Gaussian distribution. It provides both predictive means and uncertainty estimates, making it well suited for modeling complex systems with limited data and for guiding adaptive sampling. Bayesian yield analysis (BYA)[9] uses independent Gaussian processes to model circuit performance metrics and applies a Bernoulli link function to convert outputs into pass/fail probabilities. A convolutional-entropy-based active learning criterion[20] selects new samples that most reduce predictive uncertainty, balancing exploration and exploitation around the yield decision margin and enabling accurate and efficient yield estimation in high-dimensional spaces. Adaptive Shrinkage Deep Kernel Learning (ASDK)[21] combines deep kernel Gaussian processes with a shrinkage-based feature selection mechanism to identify dominant process variables. The surrogate is refined through entropy-based sampling near failure boundaries, allowing scalable and accurate yield estimation in problems with hundreds of dimensions.

Bayesian Neural Network (BNN) is a probabilistic extension of standard neural networks that introduces uncertainty by treating weights and biases as random variables with prior distributions. This enables it to model predictive uncertainty and prevent overfitting, making it suitable for surrogate modeling in data-scarce, high-dimensional settings. Bayesian Neural Network-based yield estimation (BNN-YE) constructs a global surrogate model of circuit performance using a Bayesian neural network trained on a small number of SPICE simulations. It estimates yield by running Monte Carlo on the trained surrogate, eliminating the need for additional simulations. The method achieves high accuracy with strong generalization due to the regularization effect of Bayesian inference, providing up to 100× speedup over standard Monte Carlo in SRAM yield estimation.

Surrogate modeling combined with importance sampling


Some methods combine surrogate modeling with importance sampling, using a learned surrogate to replace expensive SPICE simulations and designing proposal distributions to guide sampling. Yield is then estimated efficiently through importance sampling based on the surrogate.

Radial Basis Function Network (RBF) is a type of feedforward neural network that uses radial basis functions (typically Gaussian) as activation functions in the hidden layer. It is well suited for interpolation tasks, offering smooth approximations of complex functions with relatively simple training and fast evaluation. Adaptive Online Surrogate Modeling (AOSM)[13] combines radial basis function networks with importance sampling to accelerate yield estimation. At each iteration, only the most informative sample, typically the current minimum-norm failure, is used to update the surrogate model. This surrogate then replaces SPICE simulations in subsequent importance sampling steps, achieving high efficiency with minimal training cost and preserving accuracy in rare-event estimation.
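The sketch below illustrates the generic surrogate-plus-importance-sampling pattern rather than AOSM itself: it reuses the Gaussian-process surrogate gp fitted in the earlier surrogate-modeling example as a drop-in replacement for the simulator inside the mean-shift importance-sampling estimator, with the shift vector again an assumed illustration.

```python
import numpy as np

def surrogate_is_failure_rate(gp, shift, n_samples=50_000, threshold=3.0):
    # Importance sampling as before, but the pass/fail evaluation goes
    # to the trained surrogate rather than to a SPICE simulation.
    rng = np.random.default_rng(4)
    v = shift + rng.standard_normal((n_samples, shift.shape[0]))
    w = np.exp(0.5 * shift @ shift - v @ shift)   # likelihood ratio p/q
    fails = gp.predict(v) > threshold             # surrogate evaluation
    return np.mean(w * fails)

mu = np.array([2.5, 0.0])  # assumed mean shift toward the failure region
print(f"Surrogate-IS failure rate: {surrogate_is_failure_rate(gp, mu):.2e}")
```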

Yield optimization


Adaptive Bayesian Yield Optimization (ABYO)[3] formulates yield maximization as a Bayesian optimization problem. It leverages Gaussian processes and expected improvement for efficient sampling, and introduces a two-stage Monte Carlo estimator to adaptively allocate computational effort. This approach achieves up to 8× speedup while preserving accuracy in analog and SRAM circuit yield optimization.

Adaptive Online Surrogate Modeling (AOSM)[13] accelerates SRAM yield optimization by combining population-based optimization with online-trained surrogate models. Building on efficient yield estimation, AOSM searches for high-yield designs via two-norm optimization of the optimal mean-shift vector and adaptively refines surrogate accuracy near optimal candidates. This achieves up to 150× speedup, enabling fully automatic and robust SRAM yield optimization within hours.

Bayesian Yield Analysis with Bayesian Optimization (BYA-BO)[9] accelerates yield optimization by leveraging an efficient Bayesian yield estimation framework. Building on accurate and uncertainty-aware yield estimation, BYA-BO uses active learning and a unified Bayesian surrogate to rapidly identify high-yield designs. This approach achieves up to 5× speedup over previous state-of-the-art methods in SRAM and adder circuit optimization.

Kernel Density Estimation-based Bayesian Optimization (KDE-BO)[22] reframes high-sigma yield estimation as a surrogate-guided importance sampling task. It combines Gaussian process surrogates with kernel density estimation (KDE) to construct accurate importance distributions, and adaptively refines both models during optimization. This approach enables over 100× speedup while maintaining accuracy in high-sigma yield analysis and design.

Max-value Entropy Search-based Bayesian Optimization (MES-BO)[23] accelerates yield optimization by leveraging accurate, uncertainty-aware Bayesian yield estimation. Building on efficient yield estimation with Gaussian process regression, MES introduces an information-theoretic acquisition function to better explore the design space, reducing simulation cost while achieving robust high-yield designs for analog and SRAM circuits.

Optimal Proposal Transfer (OPT)[10] formulates yield optimization as a proposal distribution transfer problem. OPT leverages conditional normalizing flows to transfer optimal proposals between designs, achieving robust and efficient yield estimation without surrogate models. This enables up to 12× speedup and consistent high-yield results on analog and SRAM circuits. The paper was nominated for Best Paper at the 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD).

Bayesian Neural Network-based Yield Estimation and Optimization (BNN-YEO)[12] accelerates yield optimization by first constructing an accurate Bayesian neural network surrogate for yield estimation. Building on this foundation, BNN-YEO employs a smooth indicator approximation to enable efficient, gradient-based search for high-yield circuit designs. This approach achieves up to 20× speedup over previous methods in SRAM yield optimization, while effectively preventing overfitting.


References

  1. ^ Lin, Yibo; Alawieh, Mohamed Baker; Ye, Wei; Pan, David Z. (2018). "Machine Learning for Yield Learning and Optimization". 2018 IEEE International Test Conference (ITC). IEEE: 1–10. doi:10.1109/TEST.2018.8624733. ISBN 978-1-5386-8382-8.
  2. ^ Liu, Bo; Fernandez, Francisco V.; Gielen, Georges G. E. (2011). "Efficient and Accurate Statistical Analog Yield Optimization and Variation-Aware Circuit Sizing Based on Computational Intelligence Techniques". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 30 (6): 793–805. doi:10.1109/TCAD.2011.2106850. ISSN 1937-4151.
  3. ^ a b c Wang, Mengshuo; Lv, Wenlong; Yang, Fan; Yan, Changhao; Cai, Wei; Zhou, Dian; Zeng, Xuan (2018). "Efficient Yield Optimization for Analog and SRAM Circuits via Gaussian Process Regression and Adaptive Yield Estimation". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 37 (10): 1929–1942. doi:10.1109/TCAD.2017.2778061. ISSN 1937-4151.
  4. ^ "Analog Design Centering and Sizing". SpringerLink. doi:10.1007/978-1-4020-6004-5.pdf.
  5. ^ a b Dolecek, Lara; Qazi, Masood; Shah, Devavrat; Chandrakasan, Anantha (2008). "Breaking the simulation barrier: SRAM evaluation through norm minimization". 2008 IEEE/ACM International Conference on Computer-Aided Design: 322–329. doi:10.1109/ICCAD.2008.4681593.
  6. ^ a b Shi, Xiao; Liu, Fengyuan; Yang, Jun; He, Lei (2018-06-24). "A fast and robust failure analysis of memory circuits using adaptive importance sampling method". Proceedings of the 55th Annual Design Automation Conference. DAC '18. New York, NY, USA: Association for Computing Machinery: 1–6. doi:10.1145/3195970.3195972. ISBN 978-1-4503-5700-5.
  7. ^ a b Sun, Shupeng; Li, Xin; Liu, Hongzhou; Luo, Kangsheng; Gu, Ben (2015). "Fast Statistical Analysis of Rare Circuit Failure Events via Scaled-Sigma Sampling for High-Dimensional Variation Space". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 34 (7): 1096–1109. doi:10.1109/TCAD.2015.2404895. ISSN 1937-4151.
  8. ^ a b Xing, Wei; Liu, Yanfang; Fan, Weijian; He, Lei (2024-11-07). "Every Failure Is A Lesson: Utilizing All Failure Samples To Deliver Tuning-Free Efficient Yield Evaluation". Proceedings of the 61st ACM/IEEE Design Automation Conference. DAC '24. New York, NY, USA: Association for Computing Machinery: 1–6. doi:10.1145/3649329.3657381. ISBN 979-8-4007-0601-1.
  9. ^ a b c d e Yin, Shuo; Jin, Xiang; Shi, Linxu; Wang, Kang; Xing, Wei W. (2022-08-23). "Efficient bayesian yield analysis and optimization with active learning". Proceedings of the 59th ACM/IEEE Design Automation Conference. DAC '22. New York, NY, USA: Association for Computing Machinery: 1195–1200. doi:10.1145/3489517.3530607. ISBN 978-1-4503-9142-9.
  10. ^ a b Liu, Yanfang; Dai, Guohao; Cheng, Yuanqing; Kang, Wang; Xing, Wei W. (2023). "OPT: Optimal Proposal Transfer for Efficient Yield Optimization for Analog and SRAM Circuits". 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD): 1–9. doi:10.1109/ICCAD57390.2023.10323689.
  11. ^ a b Shi, Xiao; Yan, Hao; Huang, Qiancun; Zhang, Jiajia; Shi, Longxing; He, Lei (2019-06-02). "Meta-Model based High-Dimensional Yield Analysis using Low-Rank Tensor Approximation". Proceedings of the 56th Annual Design Automation Conference 2019. DAC '19. New York, NY, USA: Association for Computing Machinery: 1–6. doi:10.1145/3316781.3317863. ISBN 978-1-4503-6725-7.
  12. ^ a b Dou, Zhenxing; Cheng, Ming; Jia, Ming; Wang, Peng (2024-11-07). "BNN-YEO: an efficient Bayesian Neural Network for yield estimation and optimization". Proceedings of the 61st ACM/IEEE Design Automation Conference. DAC '24. New York, NY, USA: Association for Computing Machinery: 1–6. doi:10.1145/3649329.3658242. ISBN 979-8-4007-0601-1.
  13. ^ a b c Yao, Jian; Ye, Zuochang; Wang, Yan (2015). "An Efficient SRAM Yield Analysis and Optimization Method With Adaptive Online Surrogate Modeling". IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 23 (7): 1245–1253. doi:10.1109/TVLSI.2014.2336851. ISSN 1557-9999.
  14. ^ Pelikan, Martin (2005), Pelikan, Martin (ed.), "Bayesian Optimization Algorithm", Hierarchical Bayesian Optimization Algorithm: Toward a new Generation of Evolutionary Algorithms, Berlin, Heidelberg: Springer, pp. 31–48, doi:10.1007/978-3-540-32373-0_3, ISBN 978-3-540-32373-0, retrieved 2025-06-05
  15. ^ Zhang, Shuhan; Yang, Fan; Zhou, Dian; Zeng, Xuan (2020). "Bayesian Methods for the Yield Optimization of Analog and SRAM Circuits". 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC): 440–445. doi:10.1109/ASP-DAC47756.2020.9045614.
  16. ^ Weller, Dennis D.; Hefenbrock, Michael; Beigl, Michael; Tahoori, Mehdi B. (2022). "Fast and Efficient High-Sigma Yield Analysis and Optimization Using Kernel Density Estimation on a Bayesian Optimized Failure Rate Model". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 41 (3): 695–708. doi:10.1109/TCAD.2021.3064440. ISSN 1937-4151.
  17. ^ Wu, Wei; Bodapati, Srinivas; He, Lei (2016). "Hyperspherical Clustering and Sampling for Rare Event Analysis with Multiple Failure Region Coverage". Proceedings of the 2016 on International Symposium on Physical Design. ACM: 153–160. doi:10.1145/2872334.2872360. ISBN 978-1-4503-4039-7.
  18. ^ Shi, Xiao; Yan, Hao; Wang, Jinxin; Xu, Xiaofen; Liu, Fengyuan; Shi, Longxing; He, Lei (2019-04-04). "Adaptive Clustering and Sampling for High-Dimensional and Multi-Failure-Region SRAM Yield Analysis". Proceedings of the 2019 International Symposium on Physical Design. ISPD '19. New York, NY, USA: Association for Computing Machinery: 139–146. doi:10.1145/3299902.3309748. ISBN 978-1-4503-6253-5.
  19. ^ Liu, Yanfang; He, Lei; Xing, Wei W. (2025-04-09). "Beyond the Yield Barrier: Variational Importance Sampling Yield Analysis". Proceedings of the 43rd IEEE/ACM International Conference on Computer-Aided Design. ICCAD '24. New York, NY, USA: Association for Computing Machinery: 1–9. doi:10.1145/3676536.3676672. ISBN 979-8-4007-1077-3.
  20. ^ Yin, Shuo; Jin, Xiang; Shi, Linxu; Wang, Kang; Xing, Wei W. (2022). "Efficient bayesian yield analysis and optimization with active learning". Proceedings of the 59th ACM/IEEE Design Automation Conference. ACM: 1195–1200. doi:10.1145/3489517.3530607. ISBN 978-1-4503-9142-9.
  21. ^ Yin, Shuo; Dai, Guohao; Xing, Wei W. (2023-01-16). "High-Dimensional Yield Estimation Using Shrinkage Deep Features and Maximization of Integral Entropy Reduction". Proceedings of the 28th Asia and South Pacific Design Automation Conference. ACM: 283–289. doi:10.1145/3566097.3567907. ISBN 978-1-4503-9783-4.
  22. ^ Weller, Dennis D.; Hefenbrock, Michael; Beigl, Michael; Tahoori, Mehdi B. (2022). "Fast and Efficient High-Sigma Yield Analysis and Optimization Using Kernel Density Estimation on a Bayesian Optimized Failure Rate Model". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 41 (3): 695–708. doi:10.1109/TCAD.2021.3064440. ISSN 0278-0070.
  23. ^ Zhang, Shuhan; Yang, Fan; Zhou, Dian; Zeng, Xuan (2020-01-13). "Bayesian Methods for the Yield Optimization of Analog and SRAM Circuits". 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE: 440–445. doi:10.1109/ASP-DAC47756.2020.9045614. ISBN 978-1-7281-4123-7.