*2018*

Inspired by **a blog post** by Vedran Sekara, I figured it would be fun to write some code that takes an image of a face and converts it into a network.

The algorithm is straightforward. You take the face, convert it to greyscale, then sample points on that image with a bias towards the dark regions. Finally, you connect each point to a number of its nearest neighbors. What you are left with is a facework.
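The steps above can be sketched in a few lines of NumPy. This is a minimal sketch, not the author's actual code: the helper names (`sample_points`, `knn_edges`) are my own, the "face" here is just a synthetic dark square, and the nearest-neighbor search is brute force for clarity.

```python
import numpy as np

def sample_points(gray, n_points, rng):
    """Sample pixel coordinates with probability proportional to darkness."""
    # Darkness = inverted greyscale intensity (0 = white, high = black).
    p = 1.0 - gray.astype(float) / 255.0
    p = p.ravel() / p.sum()
    idx = rng.choice(gray.size, size=n_points, replace=False, p=p)
    ys, xs = np.unravel_index(idx, gray.shape)
    return np.column_stack([xs, ys])

def knn_edges(points, k):
    """Connect each point to its k nearest neighbors (brute force)."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    edges = set()
    for i, row in enumerate(d):
        for j in np.argsort(row)[:k]:
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

# Tiny synthetic "face": a dark square on a light background.
rng = np.random.default_rng(0)
img = np.full((50, 50), 230, dtype=np.uint8)
img[15:35, 15:35] = 20  # the dark region attracts most of the samples
pts = sample_points(img, 200, rng)
edges = knn_edges(pts, 3)
```

For a real image you would load the photo with Pillow, convert it with `.convert("L")`, and hand the resulting array to the same two functions.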

Thrilled as I was to see this actually work, I immediately became concerned that dark-skinned people would be harder to netface. The solution was simply to *exponentiate and renormalize* the probability distribution that you sample from. The probability distribution is essentially the greyscale image of the face, and if you raise it to higher powers (in other words, if the initial probability distribution is $p$ and you change it to $p^2$ or higher), you increase the contrast of the sampling.
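The exponentiate-and-renormalize trick is two lines. A sketch, with the exponent name (`gamma`) chosen by me:

```python
import numpy as np

def sharpen(p, gamma):
    """Raise the sampling distribution to a power and renormalize it."""
    q = p ** gamma
    return q / q.sum()

# A toy 4-pixel distribution: higher gamma shifts mass
# towards the darkest (highest-probability) pixels.
p = np.array([0.1, 0.2, 0.3, 0.4])
q = sharpen(p, 2.0)
```

With `gamma = 1` nothing changes; as `gamma` grows, the sampler increasingly ignores light regions and piles points onto the darkest ones.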

Below you see Akon (the most dark-skinned celebrity I can think of) at different levels of contrast. It's not perfect, but you can kinda tell it's him.

The sampling process is essentially iterative, so another fun thing one can do is plot the evolution of the facework. I do this by stitching together faceworks with an increasing number of samples and rendering the result as a gif.
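One way to get those growing frames is to draw one long sample sequence up front and let frame $t$ use only the first $n_t$ points, so each frame's facework contains the previous one. A sketch (the author may well do this differently):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the flattened greyscale distribution of a face.
p = np.linspace(1.0, 10.0, 100)
p /= p.sum()

# One fixed ordering of samples; frames are nested prefixes of it.
order = rng.choice(100, size=100, replace=False, p=p)
frame_sizes = [10, 25, 50, 100]
frames = [order[:n] for n in frame_sizes]
```

Each prefix would then be run through the point-connecting step, rendered, and the renders stitched into a gif (Pillow can do this via `Image.save` with `save_all=True` and `append_images`).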