These are the top selfies in Karpathy’s study, taken from all over the internet.
Using two million self-portraits collected from the web, Stanford computer science graduate student Andrej Karpathy trained an artificial neural network to sort out which selfies are good and which are bad.
Karpathy used a convolutional neural network with 140 million parameters and fed it millions of photos. His experiment began by running a script to collect images on the web tagged "#selfie". After narrowing the initial five million images down to two million, he ranked each photo's positive responses (likes) against the size of its audience (followers): within each group of 100 images with similar follower counts, the 50 with the most likes were labeled good and the remaining 50 bad.
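The grouping-and-labeling step above can be sketched in a few lines. This is a hedged illustration, not Karpathy's actual code: the field names (`id`, `likes`, `followers`) and the dictionary input format are assumptions made for the example.

```python
def label_selfies(selfies, group_size=100):
    """Sort posts by follower count, split them into groups of
    `group_size`, and within each group mark the half with the most
    likes as positive (good) and the rest as negative (bad)."""
    ranked = sorted(selfies, key=lambda s: s["followers"])
    labels = {}
    for i in range(0, len(ranked), group_size):
        # Compare likes only among accounts with similar audience sizes.
        group = sorted(ranked[i:i + group_size],
                       key=lambda s: s["likes"], reverse=True)
        half = len(group) // 2
        for j, post in enumerate(group):
            labels[post["id"]] = (j < half)  # True = "good" selfie
    return labels
```

Grouping by follower count first means a photo is judged against peers with a similar audience, so a post with few likes from a small account isn't unfairly compared to a celebrity's.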
His findings reveal some interesting similarities. All of the top 100 selfies were of women, most followed the classic rule of thirds with the face occupying the top third of the frame, and most of the subjects had long hair. On the flip side, bad selfies usually had dim lighting, were group shots, or had the subject's head occupying most of the frame.
Karpathy concluded that “a good portion of the variability between what makes a good or bad selfie can be explained by the style of the image” and not just by the attractiveness of the person.
Now you know what makes a good selfie. Read more about his findings on his blog.
Here are the “worst” in the study. They are often taken in the dark and involve group shots.
A glimpse at the selfies included in the study.
Karpathy also programmed the network to automatically crop an image to yield the strongest selfie.
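The automatic cropping idea can be sketched as an exhaustive search: slide a window over the image, score each candidate crop, and keep the best one. This is a hedged illustration only; the `score_fn` below is a toy brightness scorer standing in for Karpathy's trained network, and the plain-list image format is an assumption for the example.

```python
def best_crop(image, crop_h, crop_w, score_fn):
    """Slide a crop_h x crop_w window over `image` (a 2D list of
    pixel values) and return the (top, left) offset of the crop
    that `score_fn` rates highest, along with its score."""
    h, w = len(image), len(image[0])
    best, best_score = None, float("-inf")
    for top in range(h - crop_h + 1):
        for left in range(w - crop_w + 1):
            crop = [row[left:left + crop_w]
                    for row in image[top:top + crop_h]]
            s = score_fn(crop)
            if s > best_score:
                best_score, best = s, (top, left)
    return best, best_score

# Toy scorer: prefers brighter regions. A real system would instead
# feed each candidate crop through the trained selfie-ranking network.
def brightness(crop):
    return sum(sum(row) for row in crop)
```

In practice a real network is too slow to score every possible window, so such systems typically evaluate a coarse grid of candidate crops rather than every pixel offset.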
from TAXI Daily News http://ift.tt/1kY1dGR