The risks of artificial intelligence to individuals and businesses have grown as the technology has advanced. When it first emerged in the 1950s, AI was an exciting concept, uniting mathematics, electronics, and automation in a game of checkers. Today, artificial intelligence is widely viewed with apprehension because it is commonly used to violate privacy, influence opinions, and eliminate jobs. Because bad actors use the technology with malicious intent, we live in an age where it is difficult to know what is real and what is not.
AI is used on social media platforms to track a user's likes, follows, and comments, and then deliver the news, information, and advertisements its algorithms deem appropriate for that individual. Because humans program the algorithms, their personal biases and intentions affect whom the technology targets and what curated content an individual sees. With the development of generative AI, artificial intelligence that can produce text, images, and other media, those seeking to influence opinions can now create fake news and deliver it to the masses through social media channels. Google recently warned of a rise in generative AI attacks, with cyber criminals using phishing emails, texts, and social media messages to steal money and gain access to sensitive information.
Generative AI is also used to target corporate employees with phishing emails designed to access company data. A data breach poses one of the most destructive threats to a business because it can lead to a lengthy disruption of operations, damage to reputation, and regulatory penalties when sensitive information is compromised. Considering the threat AI poses to businesses, it is ironic that 35% of businesses currently use AI in their operations. The implementation of artificial intelligence contributed to more than 210,000 workers being laid off in 2023.
So, what can be done about the risks society faces from the growth of artificial intelligence? In October, the Biden-Harris Administration announced plans to establish the U.S. Artificial Intelligence Safety Institute (USAISI) to lead the government's efforts on AI safety and trust. Meanwhile, individuals should be wary of any unexpected text or email, even when it appears to come from an employer, friend, or family member. To spot AI-generated images and videos, look for blurred backgrounds or facial features that appear off-center, and listen for words and phrases that are repeated.