AI in the workplace: The good, the bad, and the algorithmic
- by nlqip
Artificial Intelligence (AI) is a hot topic at the moment. It’s everywhere. You probably already use it every day. That chatbot you’re talking to about your lost parcel? Powered by conversational AI. The ‘recommended’ items lined up under your most frequently bought Amazon purchases? Driven by AI/ML (machine learning) algorithms. You might even use generative AI to help write your LinkedIn posts or emails.
But where do we draw the line? When AI can tackle monotonous, repetitive tasks, and research and create content at a much faster pace than any human could, why would we even need humans at all? Is the ‘human element’ actually required for a business to function? Let’s dig deeper into the benefits, challenges, and risks, and ask who is the best candidate (or entity?) for the job: robot or human?
Why AI works
AI has the power to optimize business processes and reduce the time spent on tasks that eat into employees’ productivity and business output during the working day. Companies are already adopting AI for multiple functions, whether that’s reviewing resumes for job applications, identifying anomalies in customer datasets, or writing content for social media.
And it can do all this in a fraction of the time it would take a human. In circumstances where early diagnosis and intervention are everything, the deployment of AI can have a hugely positive impact across the board. For example, an AI-enhanced blood test could reportedly help predict Parkinson’s disease up to seven years before the onset of symptoms – and that’s just the tip of the iceberg.
Thanks to their ability to uncover patterns in vast amounts of data, AI technologies can also support the work of law enforcement agencies, including by helping them identify likely crime hotspots and predict trends. AI-driven tools also have a role to play in combating crime and other threats in the online realm and in helping cybersecurity professionals do their jobs more effectively.
AI’s ability to save businesses money and time is nothing new. Think about it: the less time employees spend on tedious tasks such as scanning documents and uploading data, the more time they can spend on business strategy and growth. In some cases, full-time contracts may no longer be needed, so the business would spend less money on overheads (understandably, this isn’t great for employment rates).
AI-based systems may also help eliminate the risk of human error. There is a reason for the saying ‘we’re only human’: we can all make mistakes, especially after five coffees, only three hours of sleep, and with a looming deadline. AI-based systems can work around the clock without ever getting tired. In a way, they offer a level of reliability you will not get from even the most detail-oriented and methodical human.
The limitations of AI
Make no mistake, however: on closer inspection, things do get a little more complicated. While AI systems can minimize errors associated with fatigue and distraction, they are not infallible. AI, too, can make errors and ‘hallucinate’; i.e., spout falsehoods while presenting them as if they were correct, especially if there are issues with the data it was trained on or with the algorithm itself. In other words, AI systems are only as good as the data they are trained on (and curating that data requires human expertise and oversight).
Carrying on this theme, while humans can claim to be objective, we are all susceptible to unconscious bias based on our own lived experiences, and it is hard, impossible even, to turn that off. AI doesn’t inherently create bias; rather, it can amplify existing biases present in the data it is trained on. Put differently, an AI tool trained on clean and unbiased data can indeed produce purely data-driven outcomes and help correct biased human decision-making. That said, this is no mean feat, and ensuring fairness and objectivity in AI systems requires continuous effort in data curation, algorithm design, and ongoing monitoring.
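To make that point concrete, here is a minimal, purely illustrative sketch in Python. The tiny ‘hiring history’ dataset and the rate-based ‘model’ are invented assumptions for demonstration, not any real system, but they show how a model fitted to skewed data simply reproduces that skew in its outputs.

```python
from collections import defaultdict

# Hypothetical, invented hiring records: (group, hired). The imbalance is
# already baked into this data; the algorithm below does not create it.
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def train(records):
    """'Learn' the historical hire rate per group - a stand-in for a real classifier."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {group: hires[group] / totals[group] for group in totals}

model = train(history)
print(model)  # {'A': 0.75, 'B': 0.25} - the skew in the data, reproduced verbatim
```

Swap in a balanced dataset and the same code produces balanced scores – which is exactly why data curation matters as much as the algorithm itself.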
A 2022 study showed that 54% of technology leaders said they were very or extremely concerned about AI bias. We’ve already seen the disastrous consequences that using biased data can have on businesses. For example, owing to biased datasets used by a car insurance company in Oregon, women were charged approximately 11.4% more for their car insurance than men – even when everything else was exactly the same! This can easily lead to a damaged reputation and loss of customers.
Because AI is fed on expansive datasets, it also raises questions of privacy. When it comes to personal data, malicious actors may find ways to bypass privacy protocols and access this data. While there are ways to create a more secure data environment across these tools and systems, organizations still need to be vigilant about any gaps in their cybersecurity, given the extra attack surface that AI entails.
Additionally, AI cannot understand emotions the way (most) humans do. People on the other side of an interaction with AI may miss the empathy and understanding they would get from a real human. This can hurt the customer/user experience, as shown by the game World of Warcraft, which lost millions of players after replacing its customer service team – real people who would even go into the game themselves to show players how to perform actions – with AI bots that lack that humor and empathy.
Because it works from a limited dataset, AI can also lack context, which causes issues around data interpretation. For example, cybersecurity experts may have background knowledge of a specific threat actor, enabling them to identify and flag warning signs that a machine may miss if they don’t align neatly with its programmed algorithm. It’s these intricate nuances that have the potential for huge consequences further down the line, for both the business and its customers.
So while AI may lack context and understanding of its input data, humans lack an understanding of how their AI systems work. When AI operates in ‘black boxes’, there is no transparency into how or why the tool arrived at the outputs or decisions it provided. Being unable to identify the ‘workings out’ behind the scenes can cause people to question its validity. Additionally, if something goes wrong or the input data is poisoned, this ‘black box’ scenario makes it hard to identify, manage, and solve the issue.
Why we need people
Humans aren’t perfect. But when it comes to talking and resonating with people and making important strategic decisions, surely humans are the best candidates for the job?
Unlike AI, people can adapt to evolving situations and think creatively. Unbound by the predefined rules, limited datasets, and prompts that AI relies on, humans can use their initiative, knowledge, and past experiences to tackle challenges and solve problems in real time.
This is particularly important when making ethical decisions and balancing business (or personal) goals with societal impact. For example, AI tools used in hiring processes may not consider the broader implications of rejecting candidates based on algorithmic biases, and the knock-on consequences this could have for workplace diversity and inclusion.
As AI output is generated by algorithms, it also runs the risk of being formulaic. Consider generative AI used to write blogs, emails, and social media captions: repetitive sentence structures can make copy clunky and less engaging to read. Content written by humans will most likely have more nuance, perspective, and, let’s face it, personality. Especially for brand messaging and tone of voice, it can be hard to mimic a company’s communication style using the strict algorithms AI follows.
With that in mind, while AI might be able to provide a list of potential brand names, for example, it’s the people behind the brand who really understand their audiences and know what would resonate best. And with empathy and the ability to ‘read the room’, humans can better connect with others, fostering stronger relationships with customers, partners, and stakeholders. This is particularly useful in customer service. As mentioned earlier, poor customer service can lead to lost brand loyalty and trust.
Last but not least, humans can adapt quickly to evolving conditions. If you need an urgent company statement about a recent event or need to pivot away from a campaign’s targeted message, you need a human: reprogramming and updating AI tools takes time, and in some situations that delay simply isn’t an option.
What’s the answer?
The most effective approach to cybersecurity is not to rely solely on AI or humans but to use the strengths of both. This could mean using AI to handle large-scale data analysis and processing while relying on human expertise for decision-making, strategic planning, and communications. AI should be used as a tool to aid and enhance your workforce, not replace it.
AI lies at the heart of ESET products, enabling our cybersecurity experts to focus their attention on creating the best solutions for ESET customers. Learn how ESET leverages AI and machine learning for enhanced threat detection, investigation, and response.