Now, it's been ages since I dabbled with neural nets, so this might be completely silly, but can't a change in dimension be thought of as forcing the weights to/from certain nodes to be zero?


Ah, that would stop certain dimensions from changing, but the output would still be the same size.


While technically still the same size, I think he's proposing that it's, in a sense, isomorphic to a dimension change if the fix to zero propagates through the remaining layers (until the next 'change', that is).

Or something like that.


I hadn't entirely fleshed out the idea, but yeah.

Take a simple NN with 3 layers: 5 neurons in the input layer, 3 in the hidden layer, and 1 in the output layer.

You can emulate it inside a uniform 5-5-5 net: force the inputs to neurons 4 and 5 in the hidden layer to zero, and the inputs to neurons 2-5 in the output layer to zero (and ignore their outputs). I'm assuming the transfer function obeys f(0) = 0; if not, fix their outputs to zero as well.
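
For concreteness, a rough numpy sketch of the masking (a toy example of my own, assuming a 5-5-5 net with tanh activations, which satisfy f(0) = 0):

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((5, 5))  # input -> hidden weights
    W2 = rng.standard_normal((5, 5))  # hidden -> output weights

    # Zero the rows feeding hidden neurons 4-5 and output neurons 2-5.
    mask1 = np.ones((5, 5)); mask1[3:, :] = 0.0
    mask2 = np.ones((5, 5)); mask2[1:, :] = 0.0

    def forward(x):
        h = np.tanh((W1 * mask1) @ x)  # h[3:] == 0: effectively a 3-wide hidden layer
        y = np.tanh((W2 * mask2) @ h)  # y[1:] == 0: y[0] is the single real output
        return y[0]

    print(forward(rng.standard_normal(5)))  # behaves like a 5-3-1 net

(If you were training this, you'd mask the gradients the same way so the zeroed weights stay zero.)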

My thought was this would be similar to how you enforce boundary conditions when solving partial differential equations by directly setting the value of certain matrix elements before running the solver.
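
Something like this (a hypothetical 1D Poisson problem, my own illustration, not from the thread): to pin u(0) = 0 and u(1) = 1, overwrite the boundary rows of the linear system before solving:

    import numpy as np

    n = 6
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):              # interior rows discretize u'' = 0
        A[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
    A[0, 0] = 1.0; b[0] = 0.0              # pin u(0) = 0
    A[n - 1, n - 1] = 1.0; b[n - 1] = 1.0  # pin u(1) = 1
    print(np.linalg.solve(A, b))           # linear ramp from 0 to 1

The solver still sees the full-size system, but the pinned rows force the solution onto a lower-dimensional family, much like the zeroed weights above.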

Again, may be completely silly.



