When it comes to AI, Americans hold sharply divided opinions about whether it is helping them or hurting them. A recent study by the Pew Research Center dug into the areas where people are most concerned about the use of AI.
The research aimed to discover how people perceive AI in their daily lives. Respondents were given different scenarios where AI is used and asked whether they think it hurts or helps them.
Today, we will discuss all those daily life scenarios and find out which one most people find concerning.
Keeping Personal Information Private
Privacy and ethical data handling are among the public's biggest worries about AI. 59% of Americans who participated in the survey believe that AI is more detrimental than beneficial when it comes to protecting personal privacy, highlighting fears of data breaches and misuse.
The study also notes a significant level of uncertainty among the public, with 32% of participants admitting they are unaware of how AI systems collect, process, and use their data. Only 8% view AI’s role in privacy positively, underscoring the pressing need for more transparent AI operations and robust data protection measures to build public trust.
Interestingly, the demographic breakdown shows that respondents with a college education or degree were the most likely to believe AI is detrimental to privacy, while those with less formal education either believed AI helped or were unsure of its effect.
Customer Service
Customer service (CS) heavily relies on humans and their soft skills, yet AI is slowly replacing that as well. Not many people seem to agree with this shift: almost 34% of the respondents believed that using AI for customer service is hurting more than it is helping, while 28% think AI should handle CS.
Those who are against using AI for CS believe that AI cannot replicate human empathy and understanding, which are crucial for effective customer service. CS issues are often complex, requiring human judgment and problem-solving skills.
On the contrary, those who support AI for CS might say that AI-powered chatbots can handle a large volume of routine inquiries quickly and efficiently. This allows human agents to address more complex issues.
Police Maintaining Public Safety
AI can analyze data in bulk to identify patterns and make predictions. It can help identify potential crime hotspots, allowing police to allocate resources effectively. This is why 24% of the respondents feel it is a good idea for police to use AI. However, using AI in surveillance and data analysis raises concerns about privacy infringement. There is also a risk that AI systems could perpetuate existing biases in law enforcement, leading to discriminatory outcomes. So, 26% of the respondents believe police should avoid using AI for maintaining public safety.
Deepfakes
Deepfakes can be used to create compelling fake videos or audio clips of public figures saying or doing things they never did, leading to widespread misinformation and an erosion of trust in media. With the rise of AI, they are only becoming more realistic. Through such technology, individuals can be subjected to character assassination, damaging their personal and professional lives.
Weapons Automation
Autonomous weapons systems raise several concerns related to ethics and safety. Autonomous weapons operate without direct human intervention, raising questions about accountability and the potential for unintended consequences. Plus, autonomous weapons may struggle to distinguish between combatants and civilians, leading to increased civilian casualties.
Concentration of Power
A few large tech companies could dominate the AI landscape, reducing competition and innovation. Powerful entities could also develop AI systems that perpetuate existing biases or discriminate against specific demographics. In the worst case, governments could use AI to surveil citizens, control information, and suppress dissent, which is a serious concern.
Dependence on AI
Overreliance on AI is one of the biggest concerns. Increased automation could lead to significant job losses, causing economic disruption. Overreliance could also erode human skills and capabilities, making people more dependent on technology. As AI becomes more embedded in our daily lives, ethical questions about its use and impact will only become harder to address.
Online Product & Service Recommendations
The majority of the participants are satisfied with this aspect of AI. According to the study, 49% of the participants believe that AI has helped find relevant products and services online, while only 15% think otherwise.
This makes sense: with countless options online, recommendations help users quickly narrow their choices to items that align with their preferences and needs. Personalized recommendations also help users discover new products or services, expanding their horizons and potentially surfacing hidden gems. And tailored recommendations create a more engaging and satisfying shopping experience, as users feel understood and valued.
Auto-driving Cars & Trucks
Auto-driving cars are futuristic for sure, but they also draw a mixed response. While 37% of the respondents agree that self-driving cars and trucks would be of help, 19% disagree. Since such vehicles are still rare on the roads, 44% of the respondents remain unsure how they feel about them.
The reason behind people’s support for this idea could be optimism that it could reduce accidents caused by human error. Self-driving cars can also manage traffic flow better, leading to less congestion and faster travel times. Besides, they could help people with disabilities who cannot drive, providing greater independence.
However, those who are not in favor of it could be doubtful about the reliability of the technology, especially in complex driving conditions. Besides, vulnerability to hacking could lead to catastrophic consequences.
AI-assisted Healthcare
Doctors using AI for improved healthcare is still a very new concept, yet people are hopeful about it: 37% of the respondents believe it helps more than it hurts. This can be explained by the fact that AI can analyze medical data faster and more accurately than human doctors, identifying patterns and detecting diseases earlier, which would lead to more accurate and timely diagnoses. On a broader level, AI can speed up the discovery of new drugs by simulating molecular interactions and predicting drug efficacy.
A recent Forbes article mentioned that Microsoft will soon launch a GPT-4-compatible AI technology that will work as a universal translator for healthcare information. This should greatly help patients understand their own health records.
However, 20% of participants believe that doctors using AI for treatment does more harm than good. A major concern could be that AI could replace doctors or other healthcare professionals, leading to job losses. There are also concerns about the security and privacy of the patient data used to train AI algorithms.
Self-Assessed Healthcare Using AI
When it comes to self-assessed healthcare, a plurality of respondents (33%) believed that AI is helpful, while 19% disagreed. Here, concern and acceptance are nearly neck and neck. Supporters point out that AI-powered health apps and devices can bring healthcare information and services to people in remote or underserved areas.
The other group might find AI diagnoses untrustworthy. Notably, according to the study, most higher-educated participants agreed that AI-assisted healthcare can be a positive help, while those with less education or awareness were mostly unsure about it.
Finding Information Online
So far, people have used search engines to find information online; with new generative AI models, however, this has changed quite a bit. People now enter their search query, and the integrated AI generates an answer to their question.
33% of the respondents find this convenient, as they don't have to visit websites individually, saving them time. On the other hand, 27% of the participants felt concerned about it. This could be because AI models can sometimes generate incorrect or misleading information, a phenomenon known as hallucination. And because they are trained on particular datasets, AI models often carry biases, so their answers can be skewed. Most importantly, overreliance on AI for information could hinder the development of critical thinking skills.