Hi @KindXiaoming,
Many thanks and congratulations on this amazing work. I have been trying to understand the architecture of the network, and I want to report what appears (to the best of my understanding) to be an inconsistency between the paper and the implementation.
Please look at the screenshot from the paper below:
However, this is inconsistent with the implementation, where a bias term is added to the sum of all outputs from the previous layer before it is passed as input to the next layer.
Here is a screenshot from the code:
Hi, thank you for reporting this. Yes, in the code the bias term is included by default, but I did not write it out in the paper since it can be absorbed into any of the activation functions. Bias terms are needed for sparsity regularization (without them, the regularization seems to behave strangely, but maybe there is a better way).
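To illustrate the absorption argument: since a node's output is a sum of activations, adding a constant bias to the sum is equivalent to shifting any single activation by that constant. A minimal numeric check (the activations and bias below are made-up stand-ins, not the actual trained splines):

```python
import numpy as np

# Hypothetical learned activations on the incoming edges of one node,
# plus a hypothetical layer bias b.
phis = [np.sin, np.tanh, np.square]
b = 0.7
x = np.array([0.3, -1.2, 0.5])  # outputs of the previous layer

# Node output as implemented: sum of activations plus an explicit bias.
with_bias = sum(phi(xi) for phi, xi in zip(phis, x)) + b

# Same output with the bias absorbed into the first activation.
absorbed = [lambda t: np.sin(t) + b] + phis[1:]
no_bias = sum(phi(xi) for phi, xi in zip(absorbed, x))

assert np.isclose(with_bias, no_bias)
```

So the bias is not expressively necessary, which is presumably why the paper omits it; it matters in practice only through its interaction with the regularizer.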
Hi,
for a trained model, is there any way to get the equation that defines the activation function (spline) on a specific connection?
Since we can already get the plots, I would also like to get the equation, e.g., f(x) = ax + b for a linear activation, including the values of a and b.
Best,
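Pending an official API for this, one generic workaround is to sample the learned activation on a dense grid and fit a candidate functional form by least squares. The `phi` below is a hypothetical stand-in for a callable that evaluates one trained spline; in practice you would replace it with whatever evaluates the activation on your chosen connection:

```python
import numpy as np

# Hypothetical stand-in for one learned spline activation phi(x);
# here we pretend the network learned an exactly linear map.
def phi(x):
    return 2.0 * x + 0.5

# Sample the activation on a dense grid over its input range.
xs = np.linspace(-1.0, 1.0, 200)
ys = phi(xs)

# Least-squares fit of f(x) = a*x + b to the sampled values.
# np.polyfit returns coefficients from highest degree down.
a, b = np.polyfit(xs, ys, deg=1)
print(f"f(x) ~ {a:.3f} * x + {b:.3f}")
```

For activations that are not well approximated by a low-degree polynomial, the same idea works with any parametric family (fit via least squares) at the cost of choosing the family yourself.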