A recent Fastly Threat Insights Report revealed that more than one-third of global internet traffic is made up of bots. With 36% of online activity attributed to bots and only 64% originating from humans, we're edging closer to the future predicted by the dead internet theory, which isn't a pleasant thought.
According to the dead internet theory, human activity online is being replaced by bots designed to keep us engaged and drive traffic toward target websites and platforms, even if the content within those spaces is also artificially generated to keep us coming back for more. The internet was once built by humans for humans, but we can no longer be certain that this is true; the nature of the internet is changing as we speak.
36% of Global Internet Traffic Originates from Bots
The Fastly report centered on cybersecurity concerns, and many interesting facts and figures came to light during the study, but one odd data point that emerged was that 36% of global internet traffic could be attributed to bots: software applications or scripts that tirelessly perform automated tasks. Bots are not a new feature of the internet and have been around for years, but they are significantly more common in 2024 than they were even five years ago.
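The report's headline figure implies that traffic can be sorted into bot and human requests in the first place. A minimal, purely hypothetical sketch of the crudest such heuristic, matching known automation markers in the User-Agent header, might look like the following (the signature list and function name are illustrative assumptions, not Fastly's actual methodology, which relies on far richer signals):

```python
# Hypothetical sketch: naive bot-vs-human traffic classification based on
# User-Agent substrings. Real traffic-analysis systems use many more signals
# (IP reputation, behavior, TLS fingerprints); this only illustrates the idea.

BOT_SIGNATURES = ("bot", "crawler", "spider", "python-requests", "curl")

def is_probable_bot(user_agent: str) -> bool:
    """Return True if the User-Agent contains a known automation marker."""
    ua = user_agent.lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

# A tiny made-up request log to show how a "bot share" figure could be derived.
requests_log = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "python-requests/2.31.0",
]

bot_share = sum(is_probable_bot(ua) for ua in requests_log) / len(requests_log)
```

A heuristic this simple is trivially evaded, which is precisely why sophisticated bots are so hard to count accurately.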
The rise of internet bots has been one of the unfortunate consequences of the evolution of the internet, and the increasing use of AI has only made matters worse. Bots are not always unwanted and intrusive. Companies employ bots to interact with visitors on their websites, helping them navigate a page or find information relevant to them, although one could argue that even these applications are often unwanted, intrusive, and unpleasant.
The pop-up messages stalk users as they try to read an article or evaluate a product, ruining the experience of being on a website. Still, on the off chance that it might help one in every hundred visitors engage with the page, companies continue to invest in their personalized bots.
Is the Rise of Internet Bots a Bad Thing?
The rise of internet bots is fueled by the increasing number of companies adding bots to their support systems and platforms, but these uses are only a small part of a much larger problem. There are more malicious uses of bots at play. These little collections of code are capable of generating fake news and redirecting traffic toward specified platforms to inflate the activity on the page.
YouTubers have seen odd engagement patterns from bots, and Instagram pages are regularly visited by a slew of them soliciting interaction of any kind. While the bots do help creators boost engagement on their pages and have their content reach more “real” viewers, they also create an unreliable system of inflated views.
When are you going to do this YouTube?
Please help fight bots in the comments.This makes our channel engagement bad, spam’s can mislead audiences and so much. One cannot focus on the comment section because of this @YouTube @YouTubeIndia @YouTubeCreators @YTCreatorsIndia pic.twitter.com/CVrO7Sp4Ft
— asif.eth (@asifeth) August 23, 2024
So I’m not the only one this is happening to, but my latest video is being swarmed with literally hundreds of bots, with NSFW avatars, and names. They’re auto replying to every comment with random copies of other comments.
I hope YouTube is doing something about this ASAP. pic.twitter.com/uZtQ2jbQ7W
— Pikasprey 🐻☕ Android VTuber (@Pikasprey) October 13, 2021
Tech companies are aware of the issue, but it also doesn’t hurt them when these bots inflate ad views and, as a result, ad revenue. If 500 bots view an ad on YouTube while only 100 real viewers actually found the video, both the company and the YouTuber benefit from the system. There are entire bot farms across the globe where such fake views and engagement can be bought to push content to a larger number of viewers.
These farms generate clicks and interactions with the target content to convince an advertiser that their ad has reached a large audience, or to convince an app’s algorithm that a particular piece of content appeals to viewers and should be shown to more of them. Businesses ultimately suffer because the hundreds of dollars spent on creating the perfect targeted ads never generate equivalent returns, considering those ads were built on misleading data points for an audience that doesn’t exist.
The bots that generate fake news traffic are the worst application of this technology, creating false and misleading content that they know will generate the clicks they crave. With the inclusion of AI, these bots are getting smarter and more efficient than ever before.
What Is the Dead Internet Theory?
The Dead Internet Theory suggests that human activity on the Internet is being replaced by automated, computer-generated bots and AIs that control the ebb and flow of the online space.
The theory has been around since the late 2010s, circulating on 4chan and various other discussion forums and gaining support as people began to see evidence of it in their everyday interactions with the internet.
From the early days of DeviantArt, we’ve reached a point where images can be generated with a handful of keywords and the click of a button. AI tools are capable of generating news excerpts with their very own clickbait titles, customized for different platforms. Creating and spreading information is easier than ever before.
Deepfakes can be constructed with considerable ease, and recognizable celebrity voices can be used to create clips of them saying things they never actually said. The initial content generation still requires a human hand to start the process, but with AI becoming “smarter,” the dead internet theory feels like a more real possibility. Political campaigns built around falsified content are already beginning to emerge, and while Google has made some attempts to combat sexually explicit deepfakes in its search results, it is hardly enough to address the larger problem.
If you believe the global growth of bot-driven internet traffic is not a problem that will affect you, the issue might hit closer to home than you think. Most dating platforms have their own bot problems, along with the usual stream of fake accounts. According to CNN, the FTC sued Match Group for enticing people to sign up for paid plans using deceptive business practices that included applying a “looser standard for preventing non-subscribers from seeing messages from potentially fraudulent accounts than it did for its paying customers.”
This would tempt customers to pay to view the profiles that had expressed interest in them, only to later see a “fraudulent communication” notice. Match defended itself, stating that this wasn’t the work of romance scammers but of “spam, bots, and other users attempting to use the service for their own commercial purposes.” This is only one example of how bots affect the goings-on of daily life. In many cases, bots have become smart enough to generate realistic content that makes it harder for algorithms to detect what is human-created and what isn’t.
“a full half of YouTube traffic was ‘bots masquerading as people,’ a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake.” https://t.co/xx59djakBI
— Daniel McCarthy (@ToryAnarchist) December 27, 2018
AI bots are replacing human activity and human-generated content at great speed, blurring the line between what is real and what is fake. Checks and safeguards have yet to keep up with the scale of the issue. The rise of internet bots is more dangerous than we think, and the problem needs to be addressed as quickly as possible. It is becoming increasingly important for individuals to be careful consumers who don’t trust the information available online without substantial research and discussion of their own.