Google’s new deepfake policy takes a firmer stand against one of the most concerning uses of AI, and it’s a good start to combating the problem. Google’s existing guidelines on the removal of fake, non-consensual explicit content have been streamlined to help block such images from appearing in Search results. It all hinges on a victim successfully meeting the requirements for making a request, but once they do, they will have a better chance of limiting the spread of such images.
The deepfake problem hasn’t gone away, and Google can’t do anything about the content on other apps and websites, which is where the most damage is done, but this is still a big step toward slowing the spread of non-consensual content. The company has committed to “investing in industry-wide partnerships and expert engagement to tackle it as a society,” which is what we want to see.

Image: Google
Understanding the Google Deepfake Policy
Google product manager Emma Higham states, “These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.” The rise of deepfakes is undeniably linked to the parallel ascent of AI, but the circulation of explicit deepfakes has more to do with people than with any particular flaw in the technology. If a user’s attempt to find such deepfakes can be nipped in the bud, all the better.
Two changes in Google’s deepfake removal policy take center stage here. The first is that the company will make it easier to remove non-consensual explicit content from its search results and filter out similar results. The second is an improvement to the ranking system, which will reduce the visibility of such results by surfacing high-quality, non-explicit content instead. For a closer look at how Google intends to combat deepfakes, let’s dive right in.
Google Simplifies the Process of Explicit Content Removal
When users want explicit deepfake content removed from Google search results, they first have to reach out to the company and file a removal request. Three criteria must be met to initiate the process: the requester must be identifiably depicted in the images, confirm that the images are both fake and explicit, and verify that the images were distributed without their consent. These policies are not new; they have been in use for a while, but Google’s response to the request is being updated.
Once the Google team approves a request, the company will begin to filter out that content as well as “all explicit results on similar searches about them.” This prevents users from modifying their search terms to access the content and limits similar images from surfacing. Once an image is removed, Google will also continue scanning for duplicates and eliminate those as well.
Higham says these efforts are meant to give people added peace of mind, particularly if they worry about similar content resurfacing in the future. Without having used Google’s deepfake removal process ourselves, we can’t comment on its efficiency or response speed. The hope remains that no one will have to test the system anytime soon, but our experience with reality tells us that hope will be short-lived.

Image: Pexels
The Search Ranking System is Also Changing as Google Combats Deepfakes
Google’s explicit content removal policy will now adjust the ranking system so that when someone searches for terms related to explicit deepfakes, the system will prioritize high-quality, non-explicit content, such as related news stories or articles that discuss the impact of deepfakes. This helps suppress sites that willingly spread non-consensual content in favor of more legitimate web pages. Now, when someone searches Google for deepfakes, the results will attempt to prioritize content that talks about deepfakes rather than the offending images themselves.
This system of down-ranking explicit fake content is a good idea. Repeat violators will find their sites pushed further down the results, which makes the deepfake removal process more efficient and limits those same sites from resurfacing their repeat offenses.
Google has received considerable criticism recently for multiple reasons, from its own AI blunders to its partnership with Reddit, which strengthens its monopoly on the search industry. The company also faced backlash over a password bug that caused some Chrome users to lose access to their saved passwords. All controversy aside, it’s good to see the company take action in favor of its community.
Is the Google Policy on Explicit Content Removal Enough to Resolve the Issue?
Most people looking for specific information turn to the search option in their preferred apps or to their browsers. Google’s updated policy should help cut off a searcher’s access to deepfakes in top results, but the delay between a victim filing a report and the results being blocked can still be a difficult time for everyone involved. Considering the number of platforms where such images are shared, there is also the added challenge of permanently ending the proliferation of these images.
Google has also acknowledged the technical challenge of separating non-consensual content from explicit but consensual content, such as what is available through movies and other media. Regardless, the company appears to be headed in the right direction.
The US Copyright Office has also pushed for better laws and regulation of AI-created replicas with its own reports, but it will be a while before lawmakers can set firm guidelines in place for the misuse of AI. Other social media platforms have also introduced their own rules and policies around the use of AI and content generated with it, but more effective regulations will take time.
The mob mentality that surfaces whenever a deepfake is “leaked” is the greatest obstacle to building a safe space where such behavior is not repeated. Until we can combat that mindset, we give Google credit for taking the problem more seriously.