Difference between revisions of "RMSProp"

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
== Introduction ==
RMSProp, short for root mean square propagation, is an optimization algorithm for training Artificial Neural Networks (ANNs) in machine learning. It is a more recently developed algorithm than the Stochastic Gradient Descent (SGD) algorithm and the momentum method, and it is one of the foundations of the Adam algorithm.
It is an unpublished optimization algorithm using an adaptive learning rate, first proposed by Geoff Hinton in lecture 6 of the Coursera course [https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf “Neural Network for Machine Learning”]. Remarkably, this informally introduced, unpublished algorithm has become widely used today.
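The adaptive learning rate idea can be sketched as follows: RMSProp keeps a running average of the squared gradient and divides each update by its square root, so parameters with consistently large gradients take smaller steps. This is a minimal sketch, not the lecture's exact code; the decay rate (0.9) and learning rate (0.01) are illustrative values.

```python
# Minimal sketch of the RMSProp update rule for a single parameter.
# cache holds the running average of squared gradients E[g^2].
def rmsprop_step(theta, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    cache = decay * cache + (1.0 - decay) * grad ** 2      # update E[g^2]
    theta = theta - lr * grad / (cache ** 0.5 + eps)       # scaled step
    return theta, cache

# Example: minimize f(x) = x^2 (gradient 2x), starting from x = 5.
x, cache = 5.0, 0.0
for _ in range(500):
    x, cache = rmsprop_step(x, 2.0 * x, cache)
# x is now close to the minimizer at 0
```

Because the gradient is normalized by its recent magnitude, the effective step size stays near the learning rate regardless of the raw gradient scale.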
== Theory and methodology ==
'''Artificial Neural Network'''
An Artificial Neural Network can be regarded as an analogue of the human brain, the conscious center of Artificial Intelligence (AI): it imitates, in a simplified form, what happens in the brain when humans think. Scientists try to keep the concept of the ANN close to that of its biological ‘parent’, the real neuron.
[[File:A single Artificial Neuron presented as a mathematic function|thumb]]
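The figure's idea of a single artificial neuron as a mathematical function can be sketched as a weighted sum of inputs plus a bias, passed through an activation function. This is a hedged illustration; the sigmoid activation and the example weights are assumptions chosen for demonstration, not values from the article.

```python
import math

# A single artificial neuron: weighted sum of inputs plus a bias,
# followed by an activation function (sigmoid, for illustration).
def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# With these weights the weighted sum is 0.5*1.0 + (-0.25)*2.0 + 0.0 = 0,
# and sigmoid(0) = 0.5.
out = neuron([1.0, 2.0], [0.5, -0.25], 0.0)
```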

Revision as of 00:40, 19 November 2020

Author: Jason Huang (SysEn 6800 Fall 2020)

Steward: Allen Yang, Fengqi You