There are many photos of Tom Hanks, but none like the images of the leading everyman shown at the Black Hat computer security conference Wednesday: They were made by machine learning algorithms, not a camera.
Philip Tully, a data scientist at security company FireEye, generated the hoax Hankses to test how easily open source software from artificial intelligence labs could be adapted to misinformation campaigns. His conclusion: “People with not a lot of experience can take these machine learning models and do pretty powerful things with them,” he says.
Seen at full resolution, FireEye’s fake Hanks images have flaws like unnatural neck folds and skin textures. But they accurately reproduce the familiar details of the actor’s face, like his brow furrows and green-gray eyes, which gaze coolly at the viewer. At the scale of a social network thumbnail, the AI-made images could easily pass as real.
Read the full article in Wired.