In `notes/MultiLayerPerceptron.ipynb`, there is a dimensional conflict in this function:
```python
import torch
from torch.autograd import Variable  # note: Variable is deprecated in modern PyTorch

def mlp_fun(x, Weight, Bias, Func):
    # Wrap the input; no gradient is tracked for x itself
    f = Variable(x, requires_grad=False)
    NumOfLayers = len(Weight)
    for i in range(NumOfLayers):
        # Affine transform followed by the layer's activation
        f = Func[i](torch.matmul(Weight[i], f) + Bias[i])
    return f
```
I printed the shapes at every step for a (1, 2, 1)-sized network; the results are below:
While the result of `torch.matmul(Weight[0], x)` is a 1x2 matrix, `Bias[0]` is a 2x1 column vector, so their sum broadcasts to a 2x2 matrix.
This leads to a dimensional conflict in the resulting `f`.
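For reference, here is a minimal sketch that reproduces the broadcast I am describing (the tensor names `z`, `b`, `x`, `W0`, and `b0` are placeholders of mine, not from the notebook), followed by one shape-consistent arrangement, assuming the intent is for `f` to stay a column vector:

```python
import torch

# Reproduce the reported mismatch: a 1x2 row plus a 2x1 column
z = torch.randn(1, 2)      # shape of torch.matmul(Weight[0], x) as printed
b = torch.randn(2, 1)      # shape of Bias[0] as printed
print((z + b).shape)       # torch.Size([2, 2]): broadcasting, not an error

# One possible fix (my assumption, not the notebook's code): keep
# everything column-shaped so the affine result matches the bias
x  = torch.randn(1, 1)     # input as a 1x1 column
W0 = torch.randn(2, 1)     # first-layer weight, (out=2, in=1)
b0 = torch.randn(2, 1)     # first-layer bias as a 2x1 column
print((torch.matmul(W0, x) + b0).shape)  # torch.Size([2, 1]): shapes agree
```

Because PyTorch broadcasts here instead of raising an error, the 2x2 result silently flows into the next layer, which is why the conflict only becomes visible when the shapes are printed.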