After a little bit of reading on these two terms, I have the impression they are used for the same thing. So is there actually a difference between these two concepts, and if so, how are they different?
Solved – Difference between invertible NN and flow-based NN
machine-learning, neural-networks, normalizing-flow
Best Answer
After some more reading I came to the following conclusion:
Normalizing flows are invertible NNs $f$ that also have a tractable Jacobian determinant $\det D_x f$ as well as a tractable inverse $f^{-1}$. This allows for the following interpretation: let $X \sim p_X$ and $Z \sim p_Z$ be random variables with $Z = f(X)$. Then, by the change-of-variables formula, $$p_X(x) = p_Z(f(x)) \left| \det D_x f \right| .$$ Because $f$ has a tractable inverse $f^{-1}$, we can easily sample from either of the two distributions $p_X, p_Z$ by sampling from the other one and applying the transformation above.
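To make the change-of-variables formula concrete, here is a minimal numeric sketch. A scalar affine map $f(x) = ax + b$ stands in for the invertible network (a real flow would be a deep model, but the formula is the same); the density of $X$ computed via the formula is checked against the known closed-form density:

```python
import math

def normal_pdf(x, mean=0.0, std=1.0):
    """Density of a univariate Gaussian."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# Toy invertible map f(x) = a*x + b, so |det D_x f| = |a|
a, b = 2.0, 1.0
f = lambda x: a * x + b

# Change of variables: p_X(x) = p_Z(f(x)) * |det D_x f|, with Z ~ N(0, 1)
x = 0.3
p_x_via_flow = normal_pdf(f(x)) * abs(a)

# Direct check: if Z = a*X + b ~ N(0, 1), then X ~ N(-b/a, 1/|a|)
p_x_direct = normal_pdf(x, mean=-b / a, std=1.0 / abs(a))

print(p_x_via_flow, p_x_direct)  # the two values agree
```

The same bookkeeping carries over to the multivariate case, where $|a|$ is replaced by the absolute Jacobian determinant.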
This could be applied in the following way (just as an example): we could train $f$ such that $p_X$ represents a distribution of images (e.g. MNIST) and $p_Z$ a Gaussian. Then we can easily sample from the image distribution by drawing $Z \sim p_Z$ (Gaussian) and transforming it back via $X = f^{-1}(Z) \sim p_X$.