learngd
Gradient descent weight and bias learning function
Syntax
[dW,LS] = learngd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learngd('code')
Description
learngd is the gradient descent weight and bias learning function.

[dW,LS] = learngd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs:
W | S-by-R weight matrix (or S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |
and returns

dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
Learning occurs according to learngd's learning parameter, shown here with its default value.

LP.lr - 0.01 | Learning rate |
info = learngd('code') returns useful information for each supported code character vector:
'pnames' | Names of learning parameters |
'pdefaults' | Default learning parameters |
'needg' | Returns 1 if this function uses gW or gA |
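To make the three info queries above concrete, here is a minimal Python mimic of this query interface (the function name `learngd_info` is hypothetical and not part of the toolbox; the values come from the parameter table above):

```python
def learngd_info(code):
    """Hypothetical mimic of learngd('code') info queries (not the toolbox API).

    'pnames'    -> names of learning parameters
    'pdefaults' -> default learning parameters
    'needg'     -> 1 because learngd uses the gradient gW
    """
    info = {
        "pnames": ["lr"],         # learngd has one learning parameter
        "pdefaults": {"lr": 0.01},  # default learning rate from the table above
        "needg": 1,               # learngd consumes the gradient gW
    }
    return info[code]

print(learngd_info("pdefaults"))  # the default learning parameters
```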
Examples
Here you define a random gradient gW for a weight going to a layer with three neurons from an input with two elements. Also define a learning rate of 0.5.
gW = rand(3,2); lp.lr = 0.5;
Because learngd only needs these values to calculate a weight change (see "Algorithms" below), use them to do so.
dW = learngd([],[],[],[],[],[],[],gW,[],[],lp,[])
Algorithms
learngd calculates the weight change dW for a given neuron from the neuron's input P and error E, and the weight (or bias) learning rate LR, according to gradient descent:

dw = lr*gW
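The update rule above can be sketched outside MATLAB as well. The following Python rendering of the same elementwise rule is illustrative only (the function name `learngd_step` is hypothetical, not part of the toolbox):

```python
def learngd_step(gW, lr):
    """Gradient descent weight change: dW = lr * gW, elementwise.

    gW : S-by-R gradient matrix as a list of row lists
    lr : scalar learning rate
    Returns the S-by-R weight (or bias) change matrix dW.
    """
    return [[lr * g for g in row] for row in gW]

# Mirrors the MATLAB example's shapes: a 3-by-2 gradient and lr = 0.5
gW = [[0.2, -0.4],
      [1.0,  0.6],
      [-0.8, 0.1]]
dW = learngd_step(gW, lr=0.5)
print(dW)  # each entry is half the corresponding gradient entry
```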
Version History
Introduced before R2006a