Backpropagation with asymmetric weights

A number of recent papers have explored learning in deep neural networks without backpropagation, often motivated by its apparent biological implausibility. …

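The "asymmetric weights" in the title usually refers to feedback-alignment-style learning, where the backward pass uses a fixed random matrix in place of the transposed forward weights. Below is a minimal NumPy sketch of that mechanism, under that assumption; the network, data, and the feedback matrix `B` are hypothetical and are not taken from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data, purely for illustration.
X = rng.standard_normal((200, 10))
Y = X @ rng.standard_normal((10, 3))

# Two-layer network: h = tanh(x W1), y_hat = h W2.
W1 = 0.1 * rng.standard_normal((10, 20))
W2 = 0.1 * rng.standard_normal((20, 3))
# Fixed random feedback matrix, used where backprop would use W2.T.
B = 0.1 * rng.standard_normal((3, 20))

lr = 0.01
for _ in range(500):
    H = np.tanh(X @ W1)
    err = H @ W2 - Y
    # Backprop would propagate the error with err @ W2.T;
    # feedback alignment replaces W2.T with the fixed random B.
    delta_h = (err @ B) * (1.0 - H ** 2)
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print("final MSE:", float(np.mean(err ** 2)))
```

Despite the feedback weights never being trained, the hidden layer tends to align with them over time, which is what lets the loss keep decreasing.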

Nelder Mead Optimization with F# + Fable

Gradient descent is a spectacularly effective optimization technique, but it’s not the only method for optimizing non-convex functions. …

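The post's title points to an F# + Fable implementation; as a language-neutral sketch of the same gradient-free idea, here is a minimal example using SciPy's built-in Nelder-Mead solver. The test function and starting point are just illustrative, not from the post.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(v):
    # Classic non-convex test function with its global minimum at (1, 1).
    x, y = v
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

# Nelder-Mead only evaluates the objective; no gradients are required.
result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(result.x, result.fun)  # result.x should be close to [1, 1]
```

Because the method works purely from function evaluations, it applies unchanged to objectives that are noisy or not differentiable, at the cost of slower convergence than gradient-based methods.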

Conditional Random Fields for Company Names

Let’s assume we have a sequence of words, and we want to predict, as accurately as possible, whether each word is a noun, a verb, or some other part of speech. …

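To make the sequence-labeling setup concrete, here is a minimal sketch assuming the sklearn-crfsuite library (not necessarily what the post uses); the sentence, labels, and features are hypothetical, with tokens tagged as part of a company name or not.

```python
import sklearn_crfsuite

def token_features(sentence, i):
    # Per-token features; the CRF combines them with neighboring labels.
    word = sentence[i]
    return {
        "lower": word.lower(),
        "istitle": word.istitle(),
        "isupper": word.isupper(),
        "prev": sentence[i - 1].lower() if i > 0 else "<START>",
        "next": sentence[i + 1].lower() if i + 1 < len(sentence) else "<END>",
    }

# Hypothetical training example: label each token as company name (NAME) or other (O).
sentences = [["Shares", "of", "Acme", "Corp", "rose", "sharply"]]
labels = [["O", "O", "NAME", "NAME", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```

The key point is that, unlike a per-word classifier, the CRF scores whole label sequences, so evidence like "the previous token was tagged NAME" can influence the current tag.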