The other day, I was mindlessly going about my business when I received a Facebook notification on my phone. Someone I had never even spoken to in high school had commented on a post I'd shared earlier.
This struck a nerve. When the hell did people become so damn opinionated that they feel the need to comment on a near stranger's post? Obviously, the girl had cared enough that she couldn't restrain herself!
What has social media done to us, and what is it doing to us?!
The Beginning of a Chain Reaction
Digital technology has become the cornerstone of my (the millennial) generation. Social media, in particular, has changed the way we connect with each other and with organizations, and has transformed the way we engage with information. We are more confident, self-expressive, and very socially aware. Thus, many of us are vocal and opinionated. At the same time, years of media priming have desensitized us and decreased our trust in the information we see on social media. It is safe to say that social media shapes the approaches companies must adopt to reach us.
Companies once only had to build a relationship with their consumers. More recently, they are also expected to be authentic, political, and socially active and responsible. Our social media dependency, and the skepticism that comes with it, makes it increasingly difficult for marketers to reach us. In response, marketers are turning to Artificial Intelligence (AI) in an attempt to build a more personalized relationship with us.
Market research conducted by the Interactive Advertising Bureau and Winterberry Group earlier this month revealed that "artificial intelligence and blockchain technology are expected to become bigger priorities this year." Marketers, publishers, and tech developers plan to occupy their time with AI-related functions such as "cross-device audience recognition" and "predictive modeling and/or segmentation." Through digital channels, AI-powered bots and technology not only mine our information by watching and recording our digital interactions but also learn to automate and improve themselves through algorithms. Considering that social media and virtual assistants (a form of AI) have already penetrated our daily personal and physical lives, this increased dependency on AI for personal data raises privacy concerns. And while the imminent dangers of AI don't appear to extend beyond privacy issues, those who are more perceptive warn of more serious ramifications surrounding AI's automation process and the fact that the Internet and such digital technology remain almost unregulated.
In The Past…
Both Stephen Hawking and Elon Musk have warned us that while AI can save humanity, it can also be "our greatest existential threat." If we fail to prepare for the pitfalls, we face not only the threat of "powerful autonomous weapons," "new ways for the few to oppress the many," and "economic disruption," but also, "once it develops to the point that it can improve and replicate itself," the impending doom that AI will supersede and replace humans entirely. However far-fetched these scenarios seem, we have already crossed a threshold that allows AI to distort our reality and disrupt humanity.
You Guessed It, Fake News
As some have mentioned before me, a Buzzfeed article explains that "our platformed and algorithmically optimized world is vulnerable." Platforms' use of AI to prioritize "clicks, shares, ads, and money over quality of information" has already facilitated misinformation campaigns, propaganda, and polarized opinions. AI has grown sophisticated enough to make false information appear credible. It seamlessly actualized this previously implausible threat right under our noses, before we even realized it was happening. However, fake news is only the beginning.
The article further explains, “this past summer, more than one million fake bot accounts flooded the FCC’s open comments system to ‘amplify the call to repeal net neutrality protections,'” thus “undermining the authenticity of the entire open comments system.” Additionally, people have begun adopting AI for audio and video manipulation to create realistic and believable video clips. Taken together, the probable dangers of AI become far more realistic, and its potential can be much more sinister. The lack of regulation puts us on the path towards laser phishing and diplomacy manipulation.
Laser phishing is essentially "using AI to scan things, like our social media presences, and craft false but believable messages from people we know." The heightened use of AI has led some to believe that this is inevitable. The fear with diplomacy manipulation, meanwhile, is that these "increasingly believable AI-powered bots will be able to effectively compete with real humans for legislator and regulator attention because it will be too difficult to tell the difference."
Is It AI Or Is It Us?
Some argue that the problem lies in the capitalistic nature of the tech industry. These people think that for AI to become catastrophic, as Elon Musk and Stephen Hawking believe, it must demonstrate insight: the ability to recognize its own condition. Well, news flash, IT HAS ALREADY HAPPENED! Generative adversarial network (GAN) technology "is a neural network capable of learning without human supervision." It has "imagination and introspection" and "can tell how well the generator is doing without relying on human feedback."
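For the curious, here's a rough sketch of the idea behind a GAN: two networks, a generator and a discriminator, train against each other, and the generator's only "feedback" is the discriminator's score rather than a human's judgment. This toy example is my own illustration (written in PyTorch; the network sizes and the made-up data distribution are assumptions for demonstration, not taken from any of the articles above):

```python
# A toy GAN: the generator learns to mimic a simple data distribution,
# while the discriminator scores how "real" each sample looks --
# no human labels are involved beyond the data itself.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from a normal distribution centered at 4
    real = torch.randn(64, 1) + 4.0
    fake = generator(torch.randn(64, 8))

    # The discriminator learns to tell real samples from fakes
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # The generator learns to fool the discriminator -- its only
    # "feedback" is the discriminator's score, not a human judgment
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

The point isn't the specific numbers; it's that the whole loop runs with no human telling either network how well it's doing.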
On the other hand, there are those who pledge only to use AI for good. However, without set boundaries in our capitalistic economy, it is difficult to distinguish what is ethical and what is not. I mean, is the way that marketers mine our data ethical? Moreover, should the automatic abilities of AI fall into the wrong hands and be used to exploit our digital media consumption, the consequences could be just as disastrous, if not more so. Just as millennials have tuned out advertisements on social media, people may eventually stop paying attention to the news and become desensitized to it, "and that fundamental level of informedness required for functional democracy becomes unstable." Not so far off from Elon and Stephen's fears now, is it?
The fact of the matter is that rapid advancements in technology have already made what once seemed impossible a reality. While social media platforms such as Facebook act as content accelerators, the increased use of AI technology, even by marketers, brings these looming threats much closer to hand. Even those who were skeptics are now more receptive to these possibilities. Advisors urge that we must "seriously consider the implications" of our actions and "explore the worst-case scenarios." It started with social media and marketing strategies; all it takes now is one wrong move.
So…
What are your thoughts on where we are headed? As marketers increase their use of AI to learn every detail of our lives, do you share the fears surrounding AI? Should carefully thought-out boundaries be set to limit marketers' use of AI to access our information and target us? Should we, individually and as humanity, be more aware of our behaviors and monitor them?