AI – The Good, Bad, and Scary
- by nlqip
AI and machine learning (ML) optimize processes by recommending ways to boost productivity, shorten cycles, and maximize efficiency. AI also augments human capital by performing mundane, repetitive tasks 24×7 without the need for rest, minimizing human error.
AI offers numerous benefits to society. But as much as AI can propel human progress forward, without proper guidance it can also work to our detriment. We need to understand the risks and challenges that come with AI. Growing your knowledge in the new era of AI will help you and your organization evolve.
AI can be a battlefield of good and evil: there is the power to do good and the power to do harm. Here are some examples of the Good, Bad, and Scary of AI.
Good
- Cybersecurity – Detect and respond to cyber-attacks with automation capabilities at machine speed and predict behavioral anomalies and defend against cyber threats before an actual attack occurs
- Banking & Finance – Detect and prevent fraud, manage risks, enable personalized services, and automate financial-decision processing
- Healthcare – Optimize patient interactions, develop personalized treatment plans, attain better patient experience, improve patient data accuracy, and reduce misfiled patient records
- Manufacturing – Predict maintenance, detect defects and quality issues, enhance productivity, generate product & component designs, and optimize inventory & demand forecasting
- Retail – Secure self-checkout that helps loss prevention, optimize retail operations & supply chain, and enhance customer experiences
- Smart cities & IoT – Manage autonomous and self-driving vehicle traffic, manage energy consumption, optimize water usage, and streamline waste management through real-time sensor data
- Telecom – Predict network congestion and proactively reroute traffic to avoid outages
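The detection idea behind the cybersecurity bullet above can be sketched with a simple statistical baseline: learn what normal behavior looks like, then flag values that deviate too far from it. This is a minimal illustration, not a production detector; the traffic numbers and the 3-sigma threshold are assumptions chosen for the example.

```python
from statistics import mean, stdev

def is_anomalous(baseline, value, threshold=3.0):
    """Flag `value` if its z-score against the baseline exceeds the threshold."""
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(value - mu) / sigma > threshold

# Hypothetical requests-per-minute observed during normal operation.
baseline = [120, 118, 125, 122, 119, 121, 123, 117]

print(is_anomalous(baseline, 950))  # True  -> possible attack traffic
print(is_anomalous(baseline, 124))  # False -> within the normal range
```

Real AI-driven defenses replace the z-score with learned models over many features (failed logins, bytes transferred, process behavior), but the pattern is the same: model the normal, alert on the abnormal.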
Bad
- Cybercriminals – Leverage AI-powered tools and social engineering to steal identities, generate ransomware attacks, perform targeted nation-state attacks, and destroy critical national infrastructure
- Computing resources – Require heavy power supplies, high-Thermal-Design-Power (TDP) hardware, graphics processing units (GPUs), and large amounts of random access memory (RAM)
- Environmental impact – The toll that intensive computing resources take on the carbon footprint and the environment
- Energy cost – Rising electric power usage, water consumption for cooling, and growing computational costs translate into carbon emissions
- Bias & Discrimination – Propagate biases as a result of bad training data, incomplete data, and poorly trained AI models
- Inequality – Widen the gap between the rich and poor and increase inequality in society
- Privacy – Loss of data privacy from insecure AI systems, unencrypted data sources, and misuse & abuse
- Skills loss – Erode human critical-thinking skills for uncovering root issues and solving complex problems, along with the ability to write at a college and professional level
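The energy and carbon items above reduce to simple arithmetic: energy drawn, times datacenter overhead (PUE), times the grid's carbon intensity. Every number below (GPU power draw, runtime, PUE, grid intensity) is a hypothetical placeholder, chosen only to show the calculation.

```python
def training_emissions_kg(power_watts, hours, pue, grid_kg_per_kwh):
    """Estimated CO2 (kg) = energy (kWh) x datacenter overhead (PUE) x grid intensity."""
    energy_kwh = power_watts / 1000 * hours
    return energy_kwh * pue * grid_kg_per_kwh

# Hypothetical workload: 8 GPUs at 400 W each, running for 30 days,
# with a PUE of 1.5 and a grid intensity of 0.4 kg CO2 per kWh.
kg = training_emissions_kg(power_watts=8 * 400, hours=30 * 24,
                           pue=1.5, grid_kg_per_kwh=0.4)
print(round(kg))  # 1382 kg CO2 under these assumed figures
```

The point of the sketch is that emissions scale linearly with each factor, so cutting any one of them – more efficient hardware, shorter runs, greener grids – cuts the footprint proportionally.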
Scary
- Job loss and displacement – Replace humans with robots across every sector to perform highly skilled professional jobs
- Overreliance on AI – Rely heavily on AI to make important decisions like electing medical procedures, making life or death decisions, or choosing political candidates
- Dominance of AI – Potential ability of AI to surpass human intelligence and take control
- Monopoly by tech – A select number of tech companies could monopolize the economy and exert undue influence over the fabric of our daily lives, from buying patterns to everyday decision-making
- Deepfakes – Generate deepfakes with manipulated videos and images to influence discussions on social media and online forums
- Propaganda & Disinformation – Deploy human and bot campaigns to spread disinformation and propaganda to manipulate public opinion
- Censorship – AI chatbots that restrict access to media content and remove unfavorable online speech pose a risk to internet freedom and a democratic society
In the example of deepfakes and disinformation, how does an AI system verify the authenticity of a video or image of an individual? How does it validate the source and validity of the information? How does one separate fact from fiction? And how do we overcome skepticism when the information is in fact true?
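One partial answer to the authenticity question is cryptographic provenance: the publisher signs a digest of the media at creation time, so anyone can later check that the bytes are unchanged and came from that publisher. The sketch below uses an HMAC with a shared key as a simplified stand-in for the public-key signatures that real provenance standards (such as C2PA) use; the key and image bytes are made up for illustration.

```python
import hashlib
import hmac

def sign_media(data: bytes, key: bytes) -> str:
    """Publisher signs the media's SHA-256 digest (stand-in for a real PKI signature)."""
    return hmac.new(key, hashlib.sha256(data).digest(), hashlib.sha256).hexdigest()

def verify_media(data: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_media(data, key), signature)

key = b"publisher-secret"                       # hypothetical signing key
original = b"\x89PNG...original image bytes..."  # hypothetical media content
tag = sign_media(original, key)

print(verify_media(original, key, tag))                # True  -> untouched
print(verify_media(original + b"tampered", key, tag))  # False -> modified
```

Provenance does not tell you whether the content is *true*, only whether it is unaltered since signing and who vouched for it, which is why it complements rather than replaces source validation.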
Concerns found in research surveys
The bad and scary of AI are not far-fetched. These concerns raise more than just eyebrows among average consumers; they hit close to home. In Cisco's 2023 consumer privacy survey,
- 75% of respondents were concerned they could lose their jobs or be replaced by Gen AI
- 72% of respondents indicated that having products and solutions audited for bias would make them “somewhat” or “much more” comfortable with AI
- 86% were concerned that the information they get from Gen AI could be wrong and could be detrimental to humanity
According to a 2023 Pew Research survey,
- 52% of Americans were more concerned than excited about the increased use of AI. Among those familiar with AI, concern about its growing role climbed 16 points to 47%, while concern among those who had heard only a little about AI grew 19 points to 58% from the previous year
- When it came to AI for good, such as doctors providing patient care or people finding products and services they are interested in online, Americans perceived AI as more helpful than hurtful – 46% and 49% respectively. Demographics with higher education and income were more likely to say AI is having a positive impact
- Despite the good and benefits that come with AI, loss of data privacy stood out as a major concern across demographics. 53% of Americans said AI is doing more to hurt than help people keep their personal information private. 59% of college graduates said the same.
Navigating through the uncertainties
While some arguments predict doomsday, a number of AI experts contend that it is not all doom and gloom.
We are far from overreliance on AI or dominance by AI. We are far from machine intelligence surpassing human intelligence, let alone achieving it. Another perspective is how machine intelligence can augment human capabilities, as we see with Google search. Further, how can humans and machines communicate and work together more intelligently than either could alone?
When we look at the advancement of technology, from personal computing on the PC to the Internet revolutionizing the way we communicate and connect, there is a lot we can glean and learn. The Internet transformed the way we work and spawned jobs that never existed before, ranging from web developers to data scientists, application developers, software engineers, search engine optimization specialists, digital marketers, social media managers, and many more with the growth of 5G wireless and the Internet of Things (IoT).
Each of these roles increased productivity and solved problems. We can expect the same as the generative AI market continues to evolve and mature – boost productivity, solve new problems more efficiently, and see a new wave of job creation.
- Developing a mindset – GenAI is still in its early days, and we simply don't know enough to predict every negative implication for society, so pessimism is premature. The new era of AI opens doors to infinite possibilities for creativity and innovation. An optimistic view is the positive impact AI could have on mitigating existential threats such as climate change – and even the risks of AI itself.
- Training and Education – AI isn’t going away. What’s scarier than the scary of AI itself, is the fear holding back your personal growth. Education is key. Determine what new skills and capabilities your current job and the next generation workforce will need. Learn to use AI tools. There are hundreds of AI tools available in every industry.
- Managing Risk – Managing the inherent risk of AI starts with awareness of its risks and challenges. Cybercriminals aren't going away either. While not every criminal can be prevented from creating the "scary" with AI, combatting AI-powered criminals with a modern, AI-powered next-gen security operations center (SOC) is necessary to achieve resilience.
- Securing AI – Embedding security from the start and throughout each stage of the AI system development lifecycle is not only best practice but also mitigates the use of AI by adversaries for bad ends
- Regulating Privacy – The U.S. needs a national privacy and data protection law; unlike the EU with GDPR, it has none today. In fact, the EU is on its way to becoming the first global body to regulate AI. Without a comprehensive approach to privacy and data protection with clear legal regulation, the U.S. lags in its ability to protect its citizens' security and privacy.
- Developing Trust – Being more transparent and explaining how AI systems and tools work, ensuring human involvement, and instituting an AI ethics management program would make more individuals comfortable with AI.
These are just a few tips to help individuals and organizations – both private and public – navigate the uncertainties of AI: the Good, the Bad, and the Scary. Ultimately, the fate of humanity is up to each of us, our organizations, and society as a whole. Let's hope we make the right choices. Our future depends on it.
To learn more, explore our Cybersecurity consulting services.