
Keeping Abreast of Pornographic Research in Computer Science
Posted on: 2009-04-25 08:00:09

Burgeoning numbers of Ph.D.s and grad students are choosing to study pornography. Techniques for the analysis of "objectionable images" are gaining increased attention (and grant money) from governments and research institutions around the world, as well as Google. But what, exactly, does computer science have to do with porn? In the name of academic pursuit, let's roll up our sleeves and plunge deeply into this often hidden area that lies between the covers of top-shelf research journals.


One cannot do research in image processing without an encounter with Lena (pronounced Lenna). The image of the woman in a feathered hat has become the de facto test image for many algorithms, appearing in thousands of articles and conference papers. And it is of pornographic pedigree:

Alexander Sawchuk estimates that it was in June or July of 1973 when he, then an assistant professor of electrical engineering at the USC Signal and Image Processing Institute (SIPI), along with a graduate student and the SIPI lab manager, was hurriedly searching the lab for a good image to scan for a colleague's conference paper. They had tired of their stock of usual test images, dull stuff dating back to television standards work in the early 1960s. They wanted something glossy to ensure good output dynamic range, and they wanted a human face. Just then, somebody happened to walk in with a recent issue of Playboy.

The engineers tore away the top third of the centerfold so they could wrap it around the drum of their Muirhead wirephoto scanner, which they had outfitted with analog-to-digital converters (one each for the red, green, and blue channels) and a Hewlett Packard 2100 minicomputer. The Muirhead had a fixed resolution of 100 lines per inch and the engineers wanted a 512 x 512 image, so they limited the scan to the top 5.12 inches of the picture, effectively cropping it at the subject's shoulders.

The rest of the story (and the rest of Lena) can be found here. Indeed, the 70s marked the beginning of a long relationship between computer science and pornography. However, after the birth of the world wide web, things really got hot and heavy.

Finding Naked People

In the 1990s the world wide web began to explode, pumping information of all kinds into the homes of the technologically savvy at rates as high as 9600 bits per second. It was the time when search engines such as Webcrawler, Altavista, and Yahoo began the arduous task of spidering the scattered bits of information in Internet servers everywhere. The problem was that someone might search for a completely innocuous query such as the Trojan Room Coffee Pot, and come up with images that were unexpected, inappropriate, and, depending on one's tastes, objectionable.

It's not likely to be on his business card, but David A. Forsyth is an expert in web pornography, having served on the NRC committee for this topic. It is evident from his web page that he has a sense of humour, which explains the superbly descriptive title for his 1996 paper, Finding Naked People. Forsyth was one of the first researchers to study the problem of identifying objectionable content.

One of Forsyth's research areas is tracking people in images and videos and figuring out their pose. In the general case, the system has to cope with the fact that people can wear clothes. It would be easier if the subjects all wore the same colour, or didn't wear anything at all. Finding Naked People describes a way of first masking out areas of skin. The areas are then grouped together into human figures (visualized by drawing a stick figure on the image). The crux of the paper is the grouping algorithm. The grouper knows rules such as how limbs fit together into a body, and the fact that a person cannot have more than two arms. Using the rules, it figures out how to superimpose a body onto the skin patches. If it can successfully do this, the image is probably a naked person. If it cannot, then it is probably something else, like a lamp.

Here is a visualization of the skin probability field from the paper, with the grouper output segments superimposed on top:

More probability masks can be found in Proceedings of the 4th European Conference on Computer Vision, volume II on page 598. Be careful -- the pages tend to stick.

It's better with more than one

Finding Naked People piqued a lot of interest in the field of objectionable images, and the skin matching idea is now the first step in many algorithms. However, as James Ze Wang of Stanford notes, "it takes about 6 minutes on a workstation for the figure grouper in their algorithm to process a suspect image passed by the skin filter."

In their System for Screening Objectionable Images, Wang and his colleagues describe the WIPE™ method for screening content. They use a wavelet edge detection algorithm to obtain the shape information in the image. Edge detection transforms an image into the outlines of its objects. Wavelet edge detection lets them tune the detector to pick up sharp or increasingly blurry edges until well-defined shapes appear.

Image moments allow one to treat any shape as a flat, physical object (like a plate). You can figure out the centre of gravity, axis of symmetry, and other properties that don't change when you move, rotate, or change the size of the object. This typically results in a set of 3 to 7 numbers that you can use to compare how similar shapes are. They were used in early OCR (optical character recognition) algorithms circa 1962.
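
The "flat plate" analogy can be made concrete. Below is a plain NumPy sketch of the seven classical Hu moment invariants — a small, standard set for illustration, not the WIPE code itself (which uses a larger set of normalized moments). Raw moments give the centre of gravity, central moments remove translation, normalized central moments remove scale, and Hu's combinations remove rotation:

```python
import numpy as np

def hu_moments(img):
    """Compute the 7 Hu invariant moments of a 2-D (binary or grayscale) image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    img = img.astype(float)
    m00 = img.sum()                                  # total "mass" of the plate
    cx = (x * img).sum() / m00                       # centre of gravity
    cy = (y * img).sum() / m00

    def mu(p, q):                                    # central moment: translation invariant
        return ((x - cx) ** p * (y - cy) ** q * img).sum()

    def nu(p, q):                                    # normalized: also scale invariant
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = nu(2, 0), nu(0, 2), nu(1, 1)
    n30, n03, n21, n12 = nu(3, 0), nu(0, 3), nu(2, 1), nu(1, 2)
    return np.array([                                # Hu's combinations: rotation invariant
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])
```

Slide the same shape to a different corner of the image and the seven numbers come out the same — which is exactly why they are useful for comparing outlines.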

Wang uses both edge detection and image moments in the analysis. His algorithm differs from modern ones in that an image must pass a series of YES/NO tests. Later algorithms would combine the detectors using statistical methods and output a probability estimate.

  1. If the image is small, it is assumed to be an icon, and allowed. Icons (such as a mail envelope) were frequently used on the world wide web in the 1990s.
  2. If the image contains few continuous tones, it is considered to be a drawing and is allowed to pass.
  3. If a great portion of the image's colours are human skin colours, then the image is rejected as porn. The algorithm is pretty smart -- if a patch identified as skin has lots of edges in it, it is probably not really skin and is removed from the analysis. (This also serves as the texture matching step.)
  4. Finally, the edge (outline) image is converted into 21 numbers representing the translation, scale, and rotation invariant moments. If the 21 numbers are too close to anything already in the database, the image is rejected.
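
The cascade structure is easy to see in code. Here is a minimal sketch, where the feature measurements arrive precomputed in a dict and every threshold is a hypothetical stand-in, not a value from the paper:

```python
def wipe_screen(f):
    """A sketch of a WIPE-style cascade of YES/NO tests.

    `f` is a hypothetical dict of precomputed measurements; all
    thresholds below are illustrative, not the published ones.
    """
    # Test 1: tiny images are assumed to be icons (mail envelopes, bullets).
    if f["width"] < 64 or f["height"] < 64:
        return "pass (icon)"
    # Test 2: few continuous tones means a drawing, which is allowed through.
    if f["continuous_tone_fraction"] < 0.10:
        return "pass (drawing)"
    # Test 3: mostly skin-coloured (after texture filtering) means rejection.
    if f["skin_fraction"] > 0.50:
        return "reject (mostly skin)"
    # Test 4: moment vector too close to a known objectionable shape.
    if f["min_moment_distance"] < 0.05:
        return "reject (matches known shape)"
    return "pass"
```

The appeal of a cascade is that cheap tests run first: most web images of the era were icons or drawings and never reached the expensive moment-matching stage.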
Here are some examples where the algorithm fails. We have blurred them to protect the eyes of the gentle reader. For high resolution versions, you'll have to refer to Proceedings of the 4th International Workshop on Interactive Distributed Multimedia Systems and Telecommunication Services on page 20 (the dog-eared one).

Getting a leg up on skin models

Skin detection is an important step in porn detection, but figuring out which colours represent skin is a hard problem. Colour depends on the lighting used in the photo, the ethnicity of the participants, and the quality and noise level. Michael J. Jones and James M. Rehg at Compaq studied the problem in detail. They first manually labeled hundreds of images, highlighting all the areas that were skin using a custom drawing application. Once you have billions of pixels that you know are skin, and billions that you know are not, you can easily classify them using introductory math: Bayes' rule.

The paper describes how to find the probability function, P, using a database of images painstakingly highlighted by an army of enthusiastic research interns. However, as a porn detector, the method needs work.

It will be obvious to anyone who has bought a digital camera recently how to improve this system. Before reading on, can you spot the solution?

Taking the ogle out of Google

In recent years, Google has had its hands full with the problem of pornographic imagery. Henry A. Rowley, Yushi Jing, and Shumeet Baluja at the Mountain View campus have developed a system that combines skin detection with a number of other features. After applying face detection, they can deduce that the pixels around a detected face represent skin colour, and use them to find other skin pixels in the image. If the face takes up the majority of the image, as in a portrait, the image is safe. They use a colour histogram to detect artificial images such as screen shots. (So dirty cartoons are safe?)

Doing what only Google could, they must have set a record for the rate of pornographic analysis. They evaluated the speed of the algorithm on a corpus of around 1.5 billion thumbnail images of less than 150x150 pixels. "Processing the entire corpus took less than 8 hours," the team bragged, "using 2,500 computers."

Bags of visual words (Arm, leg, or . . .?)

In 2008, Thomas Deselaers et al. came up with a unique way of finding porn, borrowed from the world of artificial intelligence. Large news databases can automatically classify articles based on the words in them. Articles containing the names of political figures or sports jargon can easily be categorized by machines that don't need to really understand what the article is about. Techniques exist for the machines to learn on their own which words or names are important. The same methods can be applied to images, using visual words.

To create the visual vocabulary, they extract image patches around "points of interest" -- parts of the image that are likely to contain features. The patches are then scaled to a common size and analyzed using PCA to find commonalities. It is similar to face detection, but for things that aren't faces. Colour is taken into account in the analysis; because colour is part of the "vocabulary" already, a separate skin detection step is unnecessary.
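
A toy version of the bag-of-visual-words pipeline can be sketched in NumPy. To keep it self-contained, the interest-point detector is replaced with a dense grid of grayscale patches, PCA is omitted, and plain k-means builds the vocabulary; none of the parameters come from the paper:

```python
import numpy as np

def extract_patches(img, size=8, stride=8):
    """Cut a grayscale image into small patch vectors -- a stand-in for
    the paper's interest-point detector, using a dense grid instead."""
    h, w = img.shape
    return np.array([img[y:y + size, x:x + size].ravel()
                     for y in range(0, h - size + 1, stride)
                     for x in range(0, w - size + 1, stride)])

def build_vocabulary(patches, k=16, iters=20, seed=0):
    """Cluster patch vectors with plain k-means; each centroid becomes
    one 'visual word' in the vocabulary."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((patches[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = patches[labels == j].mean(axis=0)
    return centers

def bag_of_words(img, centers):
    """Describe an image as a histogram of its nearest visual words --
    just as a news article is described by its word counts."""
    patches = extract_patches(img)
    labels = np.argmin(((patches[:, None] - centers) ** 2).sum(-1), axis=1)
    return np.bincount(labels, minlength=len(centers))
```

Once every image is a fixed-length word histogram, any off-the-shelf text classifier can be trained on it, which is the whole point of the analogy.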

Using this technique, Deselaers is even able to go beyond simple YES/NO classification and reach a new level of precision. The algorithm rates images into one of five categories of increasing offensiveness: benign, lightly dressed, partly nude, fully nude, and porn. The paper contains examples from each category, and is guaranteed to offend somebody.

Corpus non indutus

At the end of the Google paper, the authors speculate on how to spur further advances:

...because of the ubiquity of the Internet, search engines, and the widespread proliferation of electronic images, adult-content detection is an important problem to address. To improve the rate of progress in this field it would be useful to establish a large fixed test set which can be used by both researchers and commercial ventures.

Yes, bring on the grant-sponsored porn, so that researchers can make the world a better place. But despite the years of study, one question remains unanswered: if such a corpus existed, how would we find it?

For a good time, read this


Joe Beese

2009-04-26 00:02:18
"And it is porn?"

You have a very strange definition of pornography. It's hard to imagine a more tasteful or artistic nude.


2009-04-26 12:55:45
@Joe - I think Steve means it is porn because it came from Playboy, not because of her outrageously seductive eyes ;)


2009-05-31 23:56:59
Thank you for the article; it was illuminating. I found it very interesting how mathematics can translate everything, even pornography, into an equation.

Anyway, I have a question for you: do you have any idea how to find data or studies about pornography and race?

I want to know:

- actor men/women percentages across the different races

- consumer men/women percentages across the different races



