Facebook says that it is banning “deepfakes,” those high-tech doctored videos and audio clips that are essentially indistinguishable from the real thing.
That’s excellent news — an important step in the right direction. But the company didn’t go quite far enough, and important questions remain.
Policing deepfakes isn’t simple. As Facebook pointed out in its announcement this week, media can be manipulated for benign reasons, for example to make video sharper and audio clearer. Some forms of manipulation are clearly meant as jokes, satires, parodies or political statements — as, for example, when a rock star or politician is depicted as a giant. That’s not Facebook’s concern.
Facebook’s announcement also makes it clear that even if a video is not removed under the new policy, other safeguards might be triggered. If, for example, a video contains graphic violence or nudity, it will be taken down. And if it is determined to be false by independent third-party fact-checkers, those who see it or share it will see a warning informing them that it is false. Its distribution will also be greatly reduced in Facebook’s News Feed.
The new approach is a major step in the right direction, but two problems remain. The first is that even if a deepfake is involved, the policy does not apply if it depicts deeds rather than words — a fabricated video showing a candidate committing a crime, for instance, rather than saying something she never said. Nothing in the new policy would address those depictions. That’s a serious gap. The second problem is that the prohibition is limited to products of artificial intelligence or machine learning. But why?
Suppose that videos are altered in other ways — for example, by slowing them down so as to make someone appear drunk or drugged, as in the case of an infamous doctored video of Nancy Pelosi.
Or suppose that a series of videos, directed against a candidate for governor, is produced not with artificial intelligence or machine learning, but nonetheless in such a way as to run afoul of the policy’s first condition; that is, the videos have been edited or synthesised so as to make the average person think that the candidate said words that she did not actually say. What matters is not the particular technology used to deceive people, but whether unacceptable deception has occurred.
Facebook must fear that a broader prohibition would create a tough line-drawing problem. In its public explanation, it also noted that if it “simply removed all manipulated videos flagged by fact-checkers as false,” the videos would remain available elsewhere online. By labelling them as false, the company said, “We’re providing people with important information and context.” Facebook seems to think that removal does less good, on balance, than a clear warning: “False.”
Maybe so. But in the context of deepfakes, Facebook has now concluded that removal is better than a warning — and in terms of human psychology, that’s almost certainly the right conclusion.