Bayesian Optimization

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Revision as of 19:08, 27 November 2021

Author: Deepa Korani (dmk333@cornell.edu)

Steward: Fengqi You

Introduction

Bayesian Optimization is a sequential model-based approach to solving optimization problems. In particular, it prescribes a prior belief over the possible objective functions and then sequentially refines this model as data are observed, via Bayesian posterior updating. [1]
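
Written out, the posterior update described above is an application of Bayes' rule. The notation below (an objective f and observed data D_{1:t}) is a standard convention assumed here for illustration, not taken verbatim from the references:

P(f \mid \mathcal{D}_{1:t}) \propto P(\mathcal{D}_{1:t} \mid f)\, P(f)

where \mathcal{D}_{1:t} = \{(x_1, y_1), \ldots, (x_t, y_t)\} denotes the evaluations observed so far, P(f) is the prior belief over objective functions, and P(f \mid \mathcal{D}_{1:t}) is the refined posterior belief that guides where to evaluate next.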

Bayesian Optimization is useful in machine learning. Many machine learning tasks are black-box optimization problems, in which the objective function is a black-box function [2] whose analytical expression is unknown; Bayesian optimization is well suited to such problems because it attempts to find the global optimum in a minimum number of steps.

Bayesian Optimization has produced strong results on a wide variety of design problems. Applications include robotics, environmental monitoring, combinatorial optimization, adaptive Monte Carlo, and reinforcement learning. [3]

Theory, Methodology, and/or Algorithmic Discussion

Bayesian Optimization incorporates a prior belief about the objective function and updates that prior with samples drawn from the function, yielding a posterior that approximates the objective increasingly well. [1]
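
To make this loop concrete, below is a minimal sketch of Bayesian optimization in Python, assuming a Gaussian process surrogate (via scikit-learn) and the expected improvement acquisition function. The one-dimensional objective, the bounds, and all parameter values are hypothetical stand-ins chosen for illustration; they are not taken from the references.

# Minimal Bayesian optimization loop (illustrative sketch; all names and values are hypothetical).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Stand-in for an expensive black-box function (unknown in practice).
    return -np.sin(3.0 * x) - x**2 + 0.7 * x

bounds = np.array([[-1.0, 2.0]])  # search interval for the single input dimension

def expected_improvement(X, gp, y_best, xi=0.01):
    # Expected improvement acquisition function (maximization convention).
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    imp = mu - y_best - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

# Initial design: a few random evaluations of the objective.
rng = np.random.default_rng(0)
X_sample = rng.uniform(bounds[0, 0], bounds[0, 1], size=(3, 1))
y_sample = objective(X_sample).ravel()

# Gaussian process surrogate encoding the prior belief over objective functions.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)

for _ in range(10):
    # 1. Posterior update: refit the surrogate to everything observed so far.
    gp.fit(X_sample, y_sample)
    # 2. Maximize the acquisition function over a dense grid of candidate points.
    X_grid = np.linspace(bounds[0, 0], bounds[0, 1], 1000).reshape(-1, 1)
    x_next = X_grid[np.argmax(expected_improvement(X_grid, gp, y_sample.max()))].reshape(1, -1)
    # 3. Spend one expensive evaluation at the proposed point and record it.
    X_sample = np.vstack([X_sample, x_next])
    y_sample = np.append(y_sample, objective(x_next).ravel())

print("Best input found:", X_sample[np.argmax(y_sample)], "objective value:", y_sample.max())

Each iteration refits the surrogate to all observations so far (the posterior update) and spends an expensive objective evaluation only at the point that maximizes expected improvement, which is how the method keeps the number of evaluations small.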

References

1) M. Krasser, "Bayesian optimization," http://krasserm.github.io/2018/03/21/bayesian-optimization/

2) E. Brochu, V. M. Cora, and N. de Freitas, "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning," https://arxiv.org/abs/1012.2599

3) B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas, "Taking the Human Out of the Loop: A Review of Bayesian Optimization," https://dash.harvard.edu/bitstream/handle/1/27769882/BayesOptLoop.pdf;sequence=1
