C# (CSharp) Encog.Neural.Networks.Training.Lma LevenbergMarquardtTraining - 7 examples found. These are the top rated real world C# (CSharp) examples of Encog.Neural.Networks.Training.Lma.LevenbergMarquardtTraining extracted from open source projects. You can rate examples to help improve the quality of examples.
Trains a neural network using the Levenberg-Marquardt algorithm (LMA). This training technique is based on the mathematical method of the same name. The LMA interpolates between the Gauss-Newton algorithm (GNA) and the method of gradient descent (similar to what is used by backpropagation). The lambda parameter determines the degree to which GNA and gradient descent are used: a lower lambda results in heavier use of GNA, whereas a higher lambda results in heavier use of gradient descent. Each iteration starts with a low lambda, which is increased if the iteration fails to improve the neural network. Once lambda is high enough, the training method effectively reverts to pure gradient descent. This allows the neural network to be trained efficiently in cases where GNA provides the optimal training time, while retaining the ability to fall back to the more primitive gradient descent method. Note that LMA finds only a local minimum, not a global minimum.

References:
C. R. Souza. (2009). Neural Network Learning by the Levenberg-Marquardt Algorithm with Bayesian Regularization. Available from: http://crsouza.blogspot.com/2009/11/neural-network-learning-by-levenberg_18.html
http://www.heatonresearch.com/wiki/LMA
http://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm
http://en.wikipedia.org/wiki/Finite_difference_method
http://mathworld.wolfram.com/FiniteDifference.html
http://www-alg.ist.hokudai.ac.jp/~jan/alpha.pdf
http://www.inference.phy.cam.ac.uk/mackay/Bayes_FAQ.html
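For context, each LMA iteration solves the damped normal equations (JᵀJ + λI) δ = Jᵀe, where J is the Jacobian of the network errors and λ is the lambda parameter described above: λ near zero yields a Gauss-Newton step, while a large λ shrinks the step toward gradient descent. The sketch below shows one way this class is typically used, following the standard Encog 3.x XOR setup; the namespaces, the 2-3-1 network shape, and the 0.01 error threshold are assumptions for illustration, not taken from this page.

using System;
using Encog.Engine.Network.Activation;
using Encog.ML.Data;
using Encog.ML.Data.Basic;
using Encog.Neural.Networks;
using Encog.Neural.Networks.Layers;
using Encog.Neural.Networks.Training.Lma;

public class LmaXorExample
{
    public static void Main()
    {
        // XOR truth table used as the training set.
        double[][] input =
        {
            new[] { 0.0, 0.0 }, new[] { 1.0, 0.0 },
            new[] { 0.0, 1.0 }, new[] { 1.0, 1.0 }
        };
        double[][] ideal =
        {
            new[] { 0.0 }, new[] { 1.0 },
            new[] { 1.0 }, new[] { 0.0 }
        };
        IMLDataSet trainingSet = new BasicMLDataSet(input, ideal);

        // Build a small 2-3-1 feedforward network (an arbitrary choice for this sketch).
        var network = new BasicNetwork();
        network.AddLayer(new BasicLayer(null, true, 2));
        network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
        network.AddLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        network.Structure.FinalizeStructure();
        network.Reset();

        // LMA training: lambda is adjusted internally each iteration,
        // rising when an iteration fails to reduce the error.
        var train = new LevenbergMarquardtTraining(network, trainingSet);

        int epoch = 1;
        do
        {
            train.Iteration();
            Console.WriteLine("Epoch " + epoch + " error: " + train.Error);
            epoch++;
        } while (train.Error > 0.01 && epoch < 100); // 0.01 stop threshold is an assumed value
    }
}

Because LMA builds and inverts a Jacobian-based matrix each iteration, it tends to converge in far fewer epochs than backpropagation on small networks like this one, at the cost of more work per iteration.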