[NOTE in progress] Simulation Optimization
Quick notes on simulation optimization, with some of my own thoughts. Mainly based on: Handbook of Simulation Optimization, Michael Fu
Table of Contents
Overview
Discrete Optimization
Three fundamental types of errors:
Optimality Conditions
Different scenarios depending on the solution space size:
Ranking and Selection
Ordinal Optimization (OO)
Globally Convergent Adaptive Random Search
Locally Convergent Adaptive Random Search
Commercial Solvers
Overview
This is the book's overview chapter, which can also be read as an overview of the whole field.
- SimuOpt: optimize when the objective function cannot be computed directly but can be simulated, with noise (the focus is on stochastic simulation environments).
One way to classify the methods: Discrete vs. Continuous
- Discrete Optimization
- Solution space is small -> Ranking & Selection (based on statistics or simulation budget allocation)
- Solution space is large but finite -> Ordinal Optimization (no need to estimate every candidate accurately; we only need to know their order. Much faster (exponential) convergence)
- Solution space is countably infinite -> Random Search (globally or locally convergent)
- Continuous Opt
- RSM (Response Surface Methodology). Also has constraint considerations and robust variants
- Stochastic Approximation (Robbins-Monro, Kiefer-Wolfowitz, and simultaneous perturbation stochastic approximation (SPSA) for high-dimensional problems)
- SAA (Sample Average Approximation), with consideration of stochastic constraints
- Random Search, with focus on estimation and on the search procedure. Model-based RS is a newer class that samples candidates from a probability model over the solution space and updates that model iteratively.
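SPSA, mentioned above, is easy to sketch. The following is a minimal illustration, not a tuned implementation: the gain constants `a`, `c` and the noisy quadratic test function are my own assumptions. The point is that the gradient estimate needs only two noisy evaluations per iteration, regardless of dimension.

```python
import random

def spsa_minimize(noisy_f, x0, a=0.2, c=0.1, n_iter=500, seed=0):
    """SPSA sketch: estimate the gradient from TWO noisy evaluations per
    iteration, whatever the dimension, via a random +/-1 perturbation."""
    rng = random.Random(seed)
    x = list(x0)
    d = len(x)
    for k in range(1, n_iter + 1):
        ak = a / k           # step size, decaying as in Robbins-Monro
        ck = c / k ** 0.25   # perturbation size, decaying more slowly
        delta = [rng.choice((-1.0, 1.0)) for _ in range(d)]
        f_plus = noisy_f([xi + ck * di for xi, di in zip(x, delta)])
        f_minus = noisy_f([xi - ck * di for xi, di in zip(x, delta)])
        # Every coordinate reuses the same two evaluations.
        x = [xi - ak * (f_plus - f_minus) / (2.0 * ck * di)
             for xi, di in zip(x, delta)]
    return x

# Illustrative noisy quadratic with minimum at (1, -2).
noise = random.Random(42)
noisy = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + noise.gauss(0, 0.01)
sol = spsa_minimize(noisy, [0.0, 0.0])
```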
Since stochasticity is the keyword, some base knowledge is important for DO as well as for CO.
- Statistics
- How to estimate a solution
- How to know solution x is better than y
- How to know to what extent we are covering the optimal solution in the search
- How many replications do we need...
- Hypothesis testing
- Stochastic constraints
- Variance reduction
- ...
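Most of the statistical questions above reduce to replication-based estimates and comparisons. A small sketch (the simulator and its true means are made up for illustration):

```python
import math
import random

def estimate(sim, x, n, seed=0):
    """Estimate g(x) = E[Y(x)] from n i.i.d. replications;
    return the sample mean and its standard error."""
    rng = random.Random(seed)
    ys = [sim(x, rng) for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, math.sqrt(var / n)

# Made-up simulator: true means g(0) = 1.0 and g(1) = 1.5, N(0, 1) noise.
sim = lambda x, rng: (1.0 if x == 0 else 1.5) + rng.gauss(0, 1)

m0, se0 = estimate(sim, 0, 1000, seed=1)
m1, se1 = estimate(sim, 1, 1000, seed=2)

# "Is x=0 better (smaller) than x=1?": approximate 95% CI for g(0) - g(1).
diff = m0 - m1
half_width = 1.96 * math.sqrt(se0 ** 2 + se1 ** 2)
x0_better = diff + half_width < 0   # whole interval below zero
```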
Discrete Optimization
Three fundamental types of errors:
- The optimal solution is never simulated (about search)
- The optimum that was simulated is not selected (about estimation)
- The one that was selected is not well estimated (about estimation)
Optimality Conditions
- are needed to 1) ensure the correctness of the algo; 2) define the stopping criteria
- for unconstrained non-linear optimization, we stop at a stationary point
- for integer optimization, we check the gap between LB and UB
- here for SBO, it's difficult because:
- the cost of solution g(x) can only be estimated
- no structural info can be used to prune solution zone
- complete enumeration of the solution space is often computationally intractable
Different scenarios depending on the solution space size:
- Small. Fewer than a few hundred candidates. The key is then to estimate all solutions well and return the best. Practically we analyze the Probability of Selection Correctness (PSC). The algorithm stops once PSC = P{x* is truly the best} ≥ 1 − α, where x* is the selected best solution.
- Large.
- Impossible to simulate all candidates. The idea is then to find a "good enough" solution, meaning that x* is among the top-t solutions with a certain probability. This is used in ordinal optimization.
- Or, choose methods with a global convergence guarantee (convergence to the set of global optima) or a local convergence guarantee (convergence to the set L of all local optima, which depends on the definition of the neighborhood structure). Local optimality can be tested statistically by controlling the Type I and Type II errors, because a neighborhood is often not large.
- Hypothesis testing: if the hypothesis is right, what is the probability of our observation? This is a proof by contradiction, emphasizing rejection rather than acceptance.
- (Meta)heuristics are often found in commercial solvers. These algorithms work well for difficult deterministic integer programs, and they are somewhat tolerant of sampling variability. However, they typically do not satisfy any optimality conditions for DOvS problems and may be misled by sampling variability.
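The statistical local-optimality test mentioned above can be sketched roughly as follows. This is a simplified one-sided z-test per neighbor using common random numbers, not the book's exact procedure; the simulator and threshold are illustrative assumptions.

```python
import math
import random

def looks_locally_optimal(sim, x_star, neighbors, n=200, z=2.0, seed=0):
    """Pair x* with each neighbor using common random numbers and reject
    local optimality if some neighbor is significantly better (one-sided
    z-test per neighbor; z controls the per-comparison Type I error)."""
    rng = random.Random(seed)
    for y in neighbors:
        diffs = []
        for _ in range(n):
            u = rng.random()                      # common random number
            diffs.append(sim(x_star, u) - sim(y, u))
        mean = sum(diffs) / n
        se = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1) / n)
        if mean > z * se:   # neighbor y looks significantly better (minimizing)
            return False
    return True

# Made-up 1-D problem: g(x) = (x - 3)^2; noise driven by the shared u.
sim = lambda x, u: (x - 3) ** 2 + (u - 0.5) * (1 + 0.1 * x)
```

Because the neighborhood is small, testing every neighbor like this stays cheap, which is exactly why local optimality is testable while global optimality is not.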
Ranking and Selection
Two formulations are concerned:
- indifference-zone formulation (IZF)
- Bayesian formulation (BF)
IZF (Frequentist)
Assume the least-favorable (slippage) configuration, in which the best solution x1 is exactly δ-better than all the others; this is the most difficult case. The objective is to find x1, which is at least δ-better than all the others.
Based on the principles of the above three procedures, further procedures include:
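As a sanity check on the IZ setting, the PSC under the slippage configuration can be estimated by brute-force Monte Carlo. All parameters here (k, δ, replication counts) are illustrative:

```python
import random

def psc_under_slippage(k=5, delta=0.5, n=50, trials=2000, seed=0):
    """Monte Carlo PSC estimate under the slippage configuration:
    solution 0 is exactly delta better (here: larger mean) than the
    k-1 identical competitors; each gets n replications, pick the max."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        means = [sum(rng.gauss(delta if i == 0 else 0.0, 1)
                     for _ in range(n)) / n
                 for i in range(k)]
        if max(range(k), key=lambda i: means[i]) == 0:
            correct += 1
    return correct / trials

p_50 = psc_under_slippage(n=50)   # more replications -> higher PSC
p_5 = psc_under_slippage(n=5)
```

Real IZ procedures run this logic in reverse: they choose the replication counts so that the PSC exceeds a prescribed 1 − α.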
BF (Bayesian) (Does not provide PSC guarantees)
Used when prior information is available.
Helps to choose the next solution to explore, based on prior information, previous sample results, and the remaining simulation budget. This involves an MDP and can possibly be solved by ADP/RL.
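One simple instance of this idea (my illustration, not a procedure from the book) is Thompson sampling with normal posteriors: the posterior combines the prior with past samples, and the next solution to simulate is chosen from posterior draws.

```python
import random

def thompson_allocate(sim, k, budget, seed=0):
    """Keep a normal posterior over each solution's mean (known variance 1,
    flat prior) and simulate next the solution whose posterior draw is
    best -- an allocation driven by prior + past samples."""
    rng = random.Random(seed)
    n = [0] * k       # replications spent per solution
    s = [0.0] * k     # running sum of observations per solution
    for i in range(k):                    # one observation each to start
        s[i] += sim(i, rng)
        n[i] += 1
    for _ in range(budget - k):
        # Posterior of mean i is N(s[i]/n[i], 1/n[i]); draw and take the max.
        draws = [rng.gauss(s[i] / n[i], (1.0 / n[i]) ** 0.5) for i in range(k)]
        i = max(range(k), key=lambda j: draws[j])
        s[i] += sim(i, rng)
        n[i] += 1
    return n, [s[i] / n[i] for i in range(k)]

# Made-up maximization problem: solution 2 has the best true mean.
true_mu = [0.0, 0.3, 1.0, 0.2]
sim = lambda i, rng: true_mu[i] + rng.gauss(0, 1)
counts, means = thompson_allocate(sim, 4, 400)
```

Note how the budget concentrates on the promising solution instead of being split evenly, which is the efficiency gain of the Bayesian view.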
Conclusion
Branke et al. found that no R&S procedure is dominant in all situations (thousands of problem structures were tested). BF is often more efficient in terms of the number of samples, but it does not provide the correct-selection guarantee that the frequentist formulation does.
Ordinal Optimization (OO)
When the solution space is large, OO proposes "soft optimization": select a subset S from the whole solution space and limit the analysis to S. We are interested in the probability that |S ∩ T| ≥ k, where T is the set of the top-t solutions in the whole space; k is called the alignment level, and P{|S ∩ T| ≥ k} is the alignment probability.
Two basic ideas behind OO:
OO is more an analysis than a new algorithm; the procedure will be:
In practice I don't think this is so interesting, since it mostly just tells you that the larger |S| is, the better.
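Still, the blind-pick alignment probability is easy to estimate, which shows the flavor of the analysis (the numbers below are made up):

```python
import random

def alignment_probability(N, s, t, k, trials=20000, seed=0):
    """Blind-pick baseline: S is a uniformly random size-s subset of N
    solutions, T the fixed set of the top-t; estimate the probability
    that S and T overlap in at least k solutions."""
    rng = random.Random(seed)
    top = set(range(t))       # label the top-t solutions 0 .. t-1
    hits = 0
    for _ in range(trials):
        S = rng.sample(range(N), s)
        if sum(1 for x in S if x in top) >= k:
            hits += 1
    return hits / trials

# 1000 solutions, pick 50 blindly, want at least one of the top 20.
p = alignment_probability(N=1000, s=50, t=20, k=1)
```

Even blind picking 5% of a 1000-solution space already catches a top-2% solution roughly two times out of three; a search smarter than blind picking only does better.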
Globally Convergent Adaptive Random Search
Designed for large but finite solution spaces. Guarantees that the estimated best solution converges to the set of global optima with probability 1 as the simulation budget goes to infinity.
Generic GCARS:
Several algorithms are described:
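A rough sketch of the generic scheme, under my own simplifying assumptions (a 1-D integer solution space, a fixed exploration probability): each iteration samples a solution, adds one replication, and re-estimates the best. The uniform-exploration mixture is what keeps every solution sampled infinitely often, which is the source of the global convergence guarantee.

```python
import random

def gcars(sim, space, budget, seed=0, explore=0.3):
    """Generic adaptive random search sketch (minimization): mix uniform
    exploration (ensures every x is simulated infinitely often as the
    budget grows) with sampling near the current sample-best."""
    rng = random.Random(seed)
    n = {x: 0 for x in space}
    s = {x: 0.0 for x in space}
    x = rng.choice(space)
    s[x] += sim(x, rng)
    n[x] += 1
    best = x
    for _ in range(budget - 1):
        if rng.random() < explore:
            x = rng.choice(space)                         # global exploration
        else:
            x = rng.choice([y for y in space if abs(y - best) <= 1])  # local
        s[x] += sim(x, rng)
        n[x] += 1
        sampled = [y for y in space if n[y] > 0]
        best = min(sampled, key=lambda y: s[y] / n[y])    # current sample-best
    return best

# Made-up problem: minimize (x - 7)^2 + N(0, 1) noise over {0, ..., 19}.
best = gcars(lambda x, rng: (x - 7) ** 2 + rng.gauss(0, 1),
             list(range(20)), 4000)
```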
Locally Convergent Adaptive Random Search
Similar to GCARS, but with a statistical procedure that tests the local optimality of the returned solution x*.
COMPASS (Convergent Optimization via Most-Promising-Area Stochastic Search)
AHA (Adaptive Hyperbox Algo)
Like COMPASS, but defines the neighborhood as the hyperbox around x*: H(x*) = {x : l_i ≤ x_i ≤ u_i, i = 1, ..., d}, where d is the dimension of x and each l_i, u_i is set by the visited solution closest to x* from below/above in coordinate i.
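A sketch of that hyperbox construction (my reading of it: per coordinate, the nearest visited values on each side of x* pin the faces, and sides with no such point stay unbounded):

```python
import math

def hyperbox(x_star, visited):
    """AHA-style neighborhood sketch: for each coordinate i, the closest
    visited value below x_star[i] gives the lower face l_i and the closest
    above gives the upper face u_i; unpinned sides stay at +/- inf."""
    d = len(x_star)
    lo = [-math.inf] * d
    hi = [math.inf] * d
    for y in visited:
        if tuple(y) == tuple(x_star):
            continue                  # x* itself does not pin any face
        for i in range(d):
            if y[i] < x_star[i]:
                lo[i] = max(lo[i], y[i])
            elif y[i] > x_star[i]:
                hi[i] = min(hi[i], y[i])
    return lo, hi

# Example in 2-D: visited points pin the box around x* = (5, 5).
lo, hi = hyperbox((5, 5), [(2, 5), (5, 9), (8, 1), (5, 5)])
```

The box needs only O(d) bookkeeping per visited point, which is why AHA scales to higher dimensions better than COMPASS's constraint-based most-promising area.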
Commercial Solvers
Most simulation modeling software includes an SBO tool, but most of these are based on R&S or on meta-heuristics like SA. Meta-heuristics have been observed to be effective on difficult deterministic optimization problems, but they usually provide no performance guarantees. Some advice:
Conclusion
Most of the above-mentioned algorithms are black-box algorithms that do not depend on problem structure. Problem structure can be considered, for instance, in defining the neighborhood in LCRS.