Tuesday, March 27, 2018

A Primer on Diffusion: Random Walks in 1D

Consider a particle, initially at the origin, hopping around randomly on a 1D lattice. At each step, the particle tosses a fair coin and decides whether to jump left or right.

A particular trajectory of the particle may look like the following:


Suppose the particle makes \(n_{+}\) hops to the right and \(n_{-}\) hops to the left. Then the total number of steps is \(N = n_{+} + n_{-}\), and the final position is \(x = n_{+} - n_{-}\).
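To make the setup concrete, here is a minimal Python sketch (my own illustration, not part of the original post) that simulates a few such trajectories with NumPy and plots them with matplotlib:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)  # seeded for reproducibility

N = 200       # number of steps per walk
n_walks = 5   # number of independent trajectories

for _ in range(n_walks):
    # each step is +1 or -1 with equal probability (the fair coin toss)
    steps = rng.choice([-1, 1], size=N)
    # the position after each step is the running sum of the steps
    x = np.concatenate(([0], np.cumsum(steps)))
    plt.plot(x, drawstyle="steps-post")

plt.xlabel("number of steps")
plt.ylabel("position $x$")
plt.show()

Each call to rng.choice plays the role of the coin tosses; the number of \(+1\) entries is \(n_{+}\) and the number of \(-1\) entries is \(n_{-}\).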

The process is probabilistic, and the outcome of any single trajectory is impossible to predict. However, let us enumerate the number of ways in which a random walk of \(N\) steps results in \(n_{+}\) hops to the right. This is given by \begin{align*}
W(x, N) & = {}^N C_{n_{+}}\\
& = \dfrac{N!}{(N-n_{+})! \, n_{+}!}\\
& = \dfrac{N!}{n_{-}! \, n_{+}!}
\end{align*} Note that \(n_{+}\) and \(n_{-}\) are fixed by \(x\) and \(N\) through \(n_{\pm} = (N \pm x)/2\), which is why \(W\) can be written as a function of \(x\) and \(N\). The probability \(p(x, N)\) of ending up at \(x\) after \(N\) steps is obtained by dividing \(W(x, N)\) by the total number of paths. Since there are two possible choices at each step, the total number of paths is \(2^N\).
\[p(x, N) = \dfrac{W(x,N)}{2^N}.\]
For large \(N\), Stirling's approximation is \(N! \approx \sqrt{2 \pi N} \, (N/e)^N\). For \(x \ll N\), this leads to a Gaussian form, \[p(x, N) = \dfrac{1}{\sqrt{2 \pi N}} \exp\left(-\dfrac{x^2}{2N}\right).\]
Both distributions have the same Gaussian shape. However, because the exact \(p(x,N)\) is discrete (after \(N\) steps, \(x\) can only take values with the same parity as \(N\)), while the expression above is a continuous density, they have different normalizations, and hence different actual values of \(p(x,N)\).
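As a quick numerical check (a sketch of my own, not from the original post), one can compare the exact binomial probability with the Gaussian form above:

from math import comb, exp, pi, sqrt

N = 100  # number of steps

def p_exact(x, N):
    # exact lattice probability; x must have the same parity as N
    n_plus = (N + x) // 2
    return comb(N, n_plus) / 2**N

def p_gauss(x, N):
    # continuous Gaussian approximation with variance N
    return exp(-x**2 / (2 * N)) / sqrt(2 * pi * N)

for x in (0, 10, 20, 30):
    print(x, p_exact(x, N), p_gauss(x, N))

For these parameters the two agree in shape, with the exact lattice values coming out close to twice the Gaussian density, which is the normalization difference mentioned above (only every other lattice site is reachable after \(N\) steps).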

Sunday, March 25, 2018

Links: Probability, Statistics, and Monte Carlo

1. A beautiful visual introduction to some concepts in probability and statistics (link)
Seeing Theory was created by Daniel Kunin while an undergraduate at Brown University. The goal of this website is to make statistics more accessible through interactive visualizations (designed using Mike Bostock’s JavaScript library D3.js).
It starts from relatively basic concepts and touches on some intermediate-level topics (Basic Probability, Compound Probability, Probability Distributions, Frequentist Inference, Bayesian Inference, Regression Analysis).

2. You are not a Monte Carlo Simulation (link)

It is now well-established that humans feel the pain of loss more strongly than the pleasure of an equivalent amount of gain. This interesting quirk may be shelved as an unfortunate cognitive bias (like confirmation bias); something for our rational mind to overcome.

However, one can ask the follow-up question: why? A first-level explanation is as follows: Suppose you invest $100. You lose 50% one day, and gain 100% the next day. You are now back to square one ($100 * 0.50 * 2.0 = $100). A gain twice the size of the loss was necessary just to stay neutral.

Corey Hoffstein argues remarkably well that, for individuals, average outcomes are less meaningful than median outcomes; that a logarithmic scale for utility is more appropriate than a linear one; and that loss aversion, that silly behavioral quirk, might be a powerful survival technique that helps us live to fight another day.
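To see the mean-versus-median point numerically, here is a small Monte Carlo sketch (my own illustration using the 50%-loss / 100%-gain numbers from above; it is not an example taken from Hoffstein's post):

import numpy as np

rng = np.random.default_rng(0)

n_paths = 100_000   # simulated investors
n_days = 50         # daily bets per investor

# each day the stake either loses 50% or gains 100%, with equal probability
factors = rng.choice([0.5, 2.0], size=(n_paths, n_days))
wealth = 100 * factors.prod(axis=1)

print("mean final wealth:  ", wealth.mean())
print("median final wealth:", np.median(wealth))

The arithmetic mean is dragged far above $100 by a handful of lucky paths, while the median investor ends up roughly where they started, which is the sense in which the average outcome misrepresents the typical one.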

I loved this insightful post. If nothing else, do yourself a favor and read the summary.


Thursday, March 22, 2018

Links to matplotlib Resources

I wanted to pull together a list of matplotlib resources that I need to consult frequently.

1. SciPy Lectures: The entire series is great, including the introduction to matplotlib.

2. Tutorials from J.R. Johansson and Nicholas P. Rougier

3. A couple of my own Jupyter notebooks on customizing styles and multiplots.