I can't really figure out what's wrong with my neural network class (Lua)

local matrix = require("matrixx")
local Class = require("class")
NeuralNetwork = Class{}

function NeuralNetwork:init(input_nodes, hidden_nodes, output_nodes)
    self.input_nodes = input_nodes
    self.hidden_nodes = hidden_nodes
    self.output_nodes = output_nodes

    self.weights_ih = matrix(self.hidden_nodes, self.input_nodes, math.random())
    self.weights_ho = matrix(self.output_nodes, self.hidden_nodes, math.random())

    self.bias_h = matrix(self.hidden_nodes, 1, math.random())
    self.bias_o = matrix(self.output_nodes, 1, math.random())

    self.learning_rate = 0.1
end

function NeuralNetwork:feedforward(input_array)
    -- Generating the hidden outputs
    local inputs = matrix(input_array)
    for i = 1, #input_array do
        inputs[i][1] = input_array[i]
    end

    local hidden = self.weights_ih * inputs
    hidden = hidden + self.bias_h

    -- Activation function
    hidden = matrix.map(hidden, tanh)

    -- Generating the output's output
    local output = self.weights_ho * hidden
    output = output + self.bias_o
    output = matrix.map(output, tanh)

    return output
end

function NeuralNetwork:train(input_array, target_array)
    -- Generating the hidden outputs
    local inputs = matrix(input_array)
    for i = 1, #input_array do
        inputs[i][1] = input_array[i]
    end

    local hidden = self.weights_ih * inputs
    hidden = hidden + self.bias_h

    -- Activation function
    hidden = matrix.map(hidden, tanh)

    -- Generating the output's output
    local outputs = self.weights_ho * hidden
    outputs = outputs + self.bias_o
    outputs = matrix.map(outputs, tanh)

    -- Convert targets array to a matrix object
    local targets = matrix(#target_array, 1)
    for i = 1, #target_array do
        targets[i][1] = target_array[i]
    end

    -- Calculate the error
    local output_errors = targets - outputs

    -- Calculate gradient
    local gradients = matrix.map(outputs, tanhd)
    gradients = gradients * output_errors
    gradients = gradients * self.learning_rate

    -- Calculate deltas
    local hidden_T = matrix.transpose(hidden)
    local weight_ho_deltas = gradients * hidden_T

    -- Adjust the weights by deltas
    self.weights_ho = self.weights_ho + weight_ho_deltas
    -- Adjust the bias by its deltas (which is just the gradients)
    self.bias_o = self.bias_o + gradients

    -- Calculate the hidden layer errors
    local who_t = matrix.transpose(self.weights_ho)
    local hidden_errors = who_t * output_errors

    -- Calculate hidden gradient
    local hidden_gradient = matrix.map(hidden, tanhd)
    hidden_gradient = hidden_gradient * hidden_errors * self.learning_rate

    -- Calculate input->hidden deltas
    local inputs_T = matrix.transpose(inputs)
    local weight_ih_deltas = hidden_gradient * inputs_T

    self.weights_ih = self.weights_ih + weight_ih_deltas
    -- Adjust the bias by its deltas (which is just the gradients)
    self.bias_h = self.bias_h + hidden_gradient

    self.weights_ih:print()
    print()
    self.weights_ho:print()
    print()
end

function sigmoid(x)
    return 1 / (1 + math.exp(-x))
end

function dsigmoid(x)
    return sigmoid(x) * (1 - sigmoid(x))
end

function tanh(x)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))
end

function tanhd(x)
    return 1 / math.cosh(x)^2
end

-- MAIN
local nn = NeuralNetwork(2, 2, 1)
local training_data = {
    { inputs = {0, 1}, target = {1} },
    { inputs = {1, 1}, target = {0} },
    { inputs = {1, 0}, target = {1} },
    { inputs = {0, 0}, target = {0} }
}

for i = 1, 30 do
    local data = training_data[math.floor(math.random(#training_data))]
    nn:train(data.inputs, data.target)
end

nn:feedforward({0, 1}):print()
nn:feedforward({1, 1}):print()
nn:feedforward({0, 0}):print()
nn:feedforward({1, 0}):print()


I wrote this NeuralNetwork class. I used a class library and a matrix library: class and matrix, respectively.

It all seems correct to me (ideally, at least), but when I instantiate a new NN with 2 inputs, 2 hidden neurons and 1 output and try to solve XOR, it doesn't work.

What am I missing? Maybe I misunderstood the matrix library. I hope someone can help me.

EDIT:
I added a map function to the library to apply a math function to every number in a matrix.



function matrix.map( m1, func )
    local mtx = {}
    for i = 1, #m1 do
        mtx[i] = {}
        for j = 1, #m1[1] do
            mtx[i][j] = func(m1[i][j])
        end
    end
    return setmetatable( mtx, matrix_meta )
end
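For example, a quick usage sketch (this assumes matrixx keeps the row-table constructor matrix{{...}} from stock matrix.lua):

local m = matrix{{1, 2}, {3, 4}}
local squared = matrix.map(m, function (x) return x * x end)
squared:print()  -- {{1, 4}, {9, 16}}; m itself is unchanged, since map builds a new matrix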
  • What exactly happens when you instantiate it?

    – dan
    Jan 4 at 0:29

  • There is no function matrix.map in the module matrix.lua.

    – Egor Skriptunoff
    Jan 4 at 3:27

  • @dan: it should create a new neural network with the given (inputs, hidden neurons, outputs). @EgorSkriptunoff Oh, right! Thanks. I created that function; it applies a function to every number in a matrix. I tested it and it works. I'll edit my question, thank you.

    – caralse
    Jan 4 at 4:06
  • BTW, the line local inputs = matrix(input_array) is subject to this bug when your NN has more than 3 inputs. Happily, it doesn't apply to your current NN, which is very small.

    – Egor Skriptunoff
    Jan 4 at 10:48
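A minimal sketch of the workaround for that pitfall: build the n-by-1 input column explicitly, using the same matrix(rows, cols) constructor form that train already uses for targets in the question:

local inputs = matrix(#input_array, 1)  -- same constructor form as the targets matrix
for i = 1, #input_array do
    inputs[i][1] = input_array[i]
end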
  • Your error is in the line local gradients = matrix.map(outputs, tanhd). You're passing the result of tanh to tanhd (you should pass the argument of tanh to tanhd instead). In other words, you have f'(f(x)) instead of f'(x).

    – Egor Skriptunoff
    Jan 4 at 11:52
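Building on that last comment, a hedged sketch of the usual fix: since outputs and hidden already hold the activated value y = tanh(x), the derivative can be computed from y directly, using the identity tanh'(x) = 1 - tanh(x)^2 = 1 - y^2. The helper name tanhd_from_output is made up for illustration:

-- Hypothetical helper: tanh derivative computed from the activated value y = tanh(x),
-- via the identity tanh'(x) = 1 - tanh(x)^2 = 1 - y^2
function tanhd_from_output(y)
    return 1 - y * y
end

-- Then, inside NeuralNetwork:train, map it over the activated matrices:
-- local gradients = matrix.map(outputs, tanhd_from_output)
-- local hidden_gradient = matrix.map(hidden, tanhd_from_output)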
lua neural-network artificial-intelligence perceptron luajit

asked Jan 4 at 0:21, edited Jan 4 at 4:11 – caralse (113)