As much as we’d like to dole out the credit to the many players who were working on AI for years before it became a mainstream conversation, we have to admit that ever since OpenAI burst onto the scene with ChatGPT, we’ve been steamrolling towards global AI adoption in nearly everything we do. From the smallest business decision to the evolution of wearable technology, myriad industries have been caught up in the pursuit of artificial intelligence in one way or another. PwC estimates that AI’s contribution to the global economy could reach $15.7 trillion by 2030, with $6.6 trillion of that figure coming from increased productivity and $9.1 trillion from consumption-side effects. These big numbers appear entirely plausible considering how relentlessly we’ve pursued quality AI.

Flashy applications of AI are what we discuss most often, and the appeal of these technological advances is undeniable. The recently launched Rabbit R1, revealed at the Consumer Electronics Show in Las Vegas, introduced the potential of AI to become a full-time personal assistant that manages our many apps and how we use them. From ordering groceries to booking flights, the AI pocket companion promises to handle your personal accounts and do it all for you. Do we need a device for tasks simple enough to handle on our own? Not really, but the Tamagotchi-styled object promises to take care of your needs instead of the other way around, and with four batches of 10,000 devices sold out within days of its launch, there is clearly a market for such AI-powered technology.

The Humane AI Pin announced in 2023 also raised some eyebrows, but the response to that device has been comparatively underwhelming. The applications of AI go far beyond making our lives more interesting, however. Adoption in the healthcare industry has been slower due to the sensitive nature of the work involved, yet many transformative initiatives are underway there as well. From running diagnostics to maintaining health records, there are many uses for AI in healthcare, but the burden of “switchover disruptions” is just one aspect of the problem when it comes to deploying AI in the field.

AI In Healthcare—Alarm Bells Ringing

The idea is an ingenious one: if AI could simplify the process of providing healthcare, expand who has access to it, conduct delicate procedures, and improve the overall quality of our lives, then finding ways to integrate artificial intelligence into the field should be a top priority. The problem is that we’re still quite a way from perfecting such technology, and as exciting as AI is, there remains a pervasive sense of mistrust around it, a concept that is still novel and unfamiliar to the majority of the population.

To integrate AI into the healthcare field, the service would need to function flawlessly before it could be trusted to do its job, and this is where the major concern over “AI hallucinations” comes into play. Such technology has been known to confabulate and present false, unsupported information on occasion, some AI tools more than others. Under any circumstance that is an undesirable outcome, but more so in the healthcare industry, where lives hinge on accurate assessments. Humans also make the wrong call every once in a while, but from what we understand of AI, they are more likely than a model to catch the mistake and take corrective action.

Another ethical concern with AI is the lack of the intuitive judgment that may be necessary when diagnosing a particular patient. Various cultures and populations have distinct differences in what is or isn’t normal for their biology, and the treatments provided need to be adapted accordingly. An AI tool may be well suited to matching symptoms against a 10-point checklist, but unless every cultural characteristic and variable is fed into the system, the AI service will remain insufficient and incomplete.
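To make that limitation concrete, here is a minimal, purely illustrative Python sketch of checklist-style screening. The measurement names, reference ranges, and example readings are assumptions invented for illustration, not clinical guidance; the point is only that a single, population-agnostic checklist has nowhere to encode the demographic or cultural context a clinician would weigh.

```python
# Hypothetical sketch of a naive checklist-style screener.
# All names, ranges, and readings below are illustrative only.

NORMAL_RANGES = {
    # One-size-fits-all reference ranges; real healthy baselines vary
    # across populations, which this lookup table cannot express.
    "resting_heart_rate": (60, 100),
    "fasting_glucose_mg_dl": (70, 99),
}

def flag_abnormal(readings: dict[str, float]) -> list[str]:
    """Flag any reading outside the single, population-agnostic range."""
    flags = []
    for name, value in readings.items():
        low, high = NORMAL_RANGES.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            flags.append(name)
    return flags

# A patient whose healthy baseline sits below the generic range
# (e.g. a trained athlete's resting heart rate) gets flagged purely
# because the checklist carries no context about who they are.
print(flag_abnormal({"resting_heart_rate": 55, "fasting_glucose_mg_dl": 96}))
```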

It is also important to consider how such tools are trained. Today’s wave of AI mainly takes the form of large language models, and training these models requires extensive datasets to be fed into the system. This allows the model to “learn” and respond better, but in the process it raises critical questions about the legitimacy of the data as well as the legality of using it.

Ethical Concerns Surrounding the Training of LLMs

One of the major areas of conflict around AI models is the material used to train them. For these LLMs to work, they require enormous amounts of data to serve as the raw material from which the model builds its knowledge base and picks up the nuances of language needed to answer your questions. Depending on the use case, a model may scour the World Wide Web as one of many sources, building on that information to provide you with results.

In a basic sense, much of this information is freely available online anyway, so it shouldn’t necessarily matter whether people find it through you or someone else, but it does make a difference. By gathering data and presenting it in one place, a chatbot like ChatGPT may simplify the search for someone looking for an answer, but it also eliminates the traffic you might have seen on your website. Moreover, if the sources feeding these models are unreliable or unverified, the AI can only end up reproducing inaccurate information and presenting it as fact when it is not.

Many artists, writers, and other creatives have been struggling with their creative works being “stolen” by AI. Content produced by image generators is just that: reproductions of existing artists’ styles, merged and overlapped to create something new. Is this ethical or fair to the original artists? AI companies claim it is fair use, but when hours of work and effort by artists are smushed together in seconds to create something new, those artists lose out on any credit and revenue for the original work. By making it cheaper to turn to an AI bot for image generation, companies indirectly discourage customers from turning to artists while simultaneously profiting off their works.

The New York Times recently sued OpenAI and Microsoft for training their AI models on content kept behind a paywall. The chatbots were accused of displaying information users would otherwise have to pay the NYT to access, often regurgitating content without crediting The New York Times as the source. Considering the hours, sometimes months, of labor that goes into articles published by the news organization, it is understandable why it felt a lawsuit was necessary. OpenAI has claimed that memorizing and reproducing content is a rare bug it is working to drive to zero, but the fact that it still occurs is far from acceptable and needs to be treated as a pressing concern now, before these AI tools are rolled out to the masses.

This is far from the only lawsuit against AI platforms, OpenAI chief among them, and it is likely only the beginning of many more to come. While the conversation around copyright and fair use is becoming more commonplace, another matter coming to the fore is the privacy concern surrounding LLMs and generative AI.

What Is Left of Privacy in the Modern World?

The idea of privacy has felt like a foregone concern in the digital era we live in, and constant surveillance is no longer out of the ordinary. The conspiracy theory that birds aren’t real and are instead drones spying on American citizens is both comical and worrying: we’re all hyper-aware of the possibilities of surveillance, yet often irrationally angry at the wrong thing. Many users of assistants like Alexa and Siri have found their devices responding to questions that were not directed at them, indicating just how enthusiastically these devices are always listening. And listen they should, as their entire purpose is to be available whenever you need them, whether to play your favorite song or add some snacks to your grocery list.

Now, with the advent of AI, there are growing concerns about the use of personal information collected from the innumerable sources where data is available. If these devices are always listening, how much insight do they have into your daily conversations and personal opinions? Again, these AI models need to be trained on real information, and it doesn’t get more real than your daily, lived experience. This doesn’t necessarily have to be a scary idea. Amazon has been steadily refining its recommendation systems, and the mechanism is known for predicting your purchases before you even know you need something. It does this by reviewing your purchase history, consolidating products you have looked at in the past, compiling a list of items that similar shoppers purchased, and throwing in offers that might appeal to you enough to hit the purchase button. It makes for a convenient shopping experience, customized right from your couch at home.
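A stripped-down sketch of the “similar shoppers also bought” part of that mechanism is shown below. The order histories and item names are made up for illustration, and a real system like Amazon’s relies on far richer signals (browsing history, offers, learned embeddings), but the sketch captures the basic co-purchase idea described above.

```python
from collections import Counter

# Illustrative, made-up order histories; each set is one past order.
ORDER_HISTORY = [
    {"coffee", "mug", "filter"},
    {"coffee", "filter", "grinder"},
    {"mug", "tea"},
]

def recommend(basket: set[str], top_n: int = 3) -> list[str]:
    """Rank items that co-occur most often with the current basket."""
    counts = Counter()
    for order in ORDER_HISTORY:
        if basket & order:                 # shares at least one item
            counts.update(order - basket)  # count the rest as candidates
    return [item for item, _ in counts.most_common(top_n)]

# A shopper with coffee in their basket is nudged toward filters first,
# because filters co-occur with coffee most often in past orders.
print(recommend({"coffee"}))  # ['filter', 'mug', 'grinder']
```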

This isn’t the only real-time benefit we’re seeing from AI. LitterCam is an AI service that can identify when drivers litter from their vehicles; after human verification, a request is sent to the Driver and Vehicle Licensing Agency (DVLA) for driver details so that a fine can be issued to the offender. It is a fascinating application of AI to a problem we never imagined it would address, but it does raise the question of how easy it has become to track our movements and collect personal information from them.

Similarly, facial recognition technology (FRT) has seen quite an uptick in adoption, with the global market predicted to grow to roughly $12.67 billion by 2028, according to PR Newswire. On one hand, biometrics are often a much safer way to protect your data than a password anyone can memorize, so the more refined such tech becomes, the more efficiently you may be able to boost your data security. At the same time, there is a lack of transparency in how organizations collect this information and how it is put to use, and with FRT built into surveillance systems even in public spaces, there is no way to consent to the collection of such data. More concerning still, AI is also handing mischief-makers the tools they need to fool the technology.

Adversa, a company that aims to promote trust in AI, recently had its AI Red Team demonstrate how a facial recognition system could be tricked, using a photo of yourself, into believing you were someone else entirely. Using its curiously named system, the Adversarial Octopus, the company fooled PimEyes, reportedly one of the most advanced facial recognition search engines, into identifying a picture of a reporter as Elon Musk. The platform was befuddled merely by adding noise to the image, and if such tools are misused, hackers across the globe could have a much easier time bypassing any security measure you have in place that relies on FRT.
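Adversa has not published how the Adversarial Octopus actually works, so the snippet below is only a generic sketch of the broad idea: add a small, barely visible perturbation to an image until a face-matching model’s embedding drifts toward a different identity. The embed function here is a placeholder for a hypothetical face-embedding model, and the random-search strategy and noise budget are assumptions chosen purely for illustration.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder: returns a deterministic pseudo-random 128-d vector per
    # image. A real attack would query an actual face-recognition model
    # here; this stub only lets the loop below run end to end.
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    return rng.standard_normal(128)

def attack(image: np.ndarray, target_embedding: np.ndarray,
           budget: float = 0.03, steps: int = 200) -> np.ndarray:
    """Nudge pixels with small noise so the embedding moves toward a target."""
    best = image
    best_dist = np.linalg.norm(embed(image) - target_embedding)
    for _ in range(steps):
        noise = np.random.uniform(-budget, budget, size=image.shape)
        candidate = np.clip(image + noise, 0.0, 1.0)  # change stays tiny
        dist = np.linalg.norm(embed(candidate) - target_embedding)
        if dist < best_dist:
            best, best_dist = candidate, dist
    # To a human the result looks like the original photo; to the matcher,
    # its embedding now sits closer to the impersonation target.
    return best
```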

AI might be simplifying our lives, but it can also simplify the job of those who intend to disrupt them. Deepfakes are a prime example of just how terrifying a world built on AI can be, when anyone can morph your image or voice into anything else they like at the touch of a button.

Cybersecurity Concerns May Need to Be Addressed First

While we grapple with what AI does and what it learns in the process, the biggest safety concern might be the vulnerabilities in the security of AI systems themselves. Data poisoning, intentional or not, could allow someone to manipulate a system into providing results or making decisions in their favor. Similarly, hackers who gain access to these private systems, perhaps an AI tool built to support a specific business, could find a one-stop shop for all their ill-intended needs. From business details to the financial information of its customer base, the AI could reveal information never intended for the public eye.
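As a toy illustration of label-flipping data poisoning, not tied to any particular product or incident, the sketch below trains a crude nearest-centroid “fraud detector” on made-up data and then flips a slice of training labels. The scenario and numbers are assumptions chosen purely to show how poisoned training data can drag a decision boundary in an attacker’s favor.

```python
import numpy as np

# Synthetic training data: legitimate transactions cluster near (0, 0),
# fraudulent ones near (3, 3). Entirely made up for illustration.
rng = np.random.default_rng(0)
legit = rng.normal(loc=[0, 0], scale=0.5, size=(200, 2))
fraud = rng.normal(loc=[3, 3], scale=0.5, size=(200, 2))

def centroids(legit_pts, fraud_pts):
    return legit_pts.mean(axis=0), fraud_pts.mean(axis=0)

def classify(point, legit_c, fraud_c):
    """Assign the label of the nearer class centroid."""
    if np.linalg.norm(point - fraud_c) < np.linalg.norm(point - legit_c):
        return "fraud"
    return "legit"

suspicious = np.array([1.7, 1.7])  # an attacker-crafted transaction

clean = centroids(legit, fraud)
print("clean model:", classify(suspicious, *clean))        # -> fraud

# Poisoning step: 30% of fraud examples are relabeled as legitimate,
# dragging the "legit" centroid toward the fraud cluster.
n_flip = int(0.3 * len(fraud))
poisoned_legit = np.vstack([legit, fraud[:n_flip]])
poisoned_fraud = fraud[n_flip:]
poisoned = centroids(poisoned_legit, poisoned_fraud)
print("poisoned model:", classify(suspicious, *poisoned))   # -> legit
```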

Samsung banned the use of ChatGPT and other AI tools on all of its workers’ company devices after a data leak back in May. The company followed up by developing its own internal AI tool, which is now available to its customers as well, but not every company can create an AI tool from scratch. Companies like Microsoft and Amazon have begun offering AI services that businesses can use to build their own internal tools, but this raises the question of how secure these systems are to begin with. A single central security flaw could leave every business building on that AI exposed to online threats, which is undesirable no matter how you look at it.

Just as AI providers have been working to ramp up the security of their services, cybercriminals have been devising ever more ingenious ways of misusing that same AI to further their attacks. A report released by Deep Instinct back in August last year consolidated input from more than 650 senior security operations professionals in the U.S. It found that 75 percent of those professionals had witnessed an increase in cyberattacks over the preceding 12 months, and 85 percent of them attributed the rise to bad actors using generative AI. While bigger companies might have the resources to predict and respond to this surge of malicious intent, small businesses may not be able to find the resources or develop the know-how in time to navigate these threats.

Not all hope is lost, and not all is dark in the world of AI. In a PwC study, 69 percent of senior executives said their organizations are planning to use generative AI for cybersecurity over the next 12 months, and their strategies could offer other businesses useful insight as well. IBM’s Cost of a Data Breach 2023 global survey found that organizations making extensive use of security AI and automation saved up to $1.8 million in data breach costs and identified breaches earlier. The IBM Security QRadar Suite is just one of many security options for companies to consider while adopting AI. The takeaway is that if a business decides artificial intelligence is a priority for its future, its integration of AI must be a comprehensive one that looks beyond the benefits to the actual implications of using such technology.

The ethical use and application of AI is not likely to be resolved anytime soon, just as we are far from determining whether AI art is really “art.” While we work through the many challenges we will continue to face with the use of AI, one thing to keep in mind is the importance of maintaining a precautionary approach. Not every bit of AI technology needs to become part of your business model just because it appears to be the “next big thing,” and security must remain a top priority for your company in the long run if you want to avoid being overwhelmed by the loopholes of a technology we are only beginning to understand.