AI Will Replace Those Who Don’t Learn How To Use AI
Whether you are excited or concerned about the future of AI correlates with how much you understand about AI and what is coming. There is too much to unpack in a single article, so let’s start with a couple of foundational concepts.
AI will replace people who don’t learn how to use AI
Several recent studies show that only 20% of the workforce is concerned about AI replacing them, 60% are excited about how it will help them, and the remaining 20% are uncertain. Employees know their jobs will change but are not afraid of it. AI is more than a potential opportunity to boost a company’s operations; it can also be used to build better organizations. Companies are already using AI to create sustainable talent pipelines, drastically improve ways of working, and make faster, data-driven decisions. One of the biggest questions about AI is, “How will AI replace people?” The simple answer is that AI will replace people who don’t learn how to use AI.
Marketing and data analysis are projected to be among the first roles that will need reskilling. These are areas where AI shines: it can take large amounts of data and market information and produce solid analysis in minutes. We need to teach our employees not only how to use AI but how to use it responsibly. Upskilling our current workforce means helping them not only grow in their current jobs but also expand what they can offer. One of the best places to start is prompt engineering, sometimes called prompting.
What do I need to know about AI?
The architecture behind today’s generative AI boom is the Transformer, which Google introduced in 2017. Since then, the growth has been exponential. We recently met Ross Hartman of Kiingo AI. Ross is considered a thought leader on AI and is a business consultant who helps lay out a roadmap so your business can thrive with AI. You can schedule a Discovery Call with Ross. As Ross recently said, “AI is the worst today that it will ever be.” We’ve looked at this before but believe it is worth repeating, so we have gathered some basic definitions that may help.
- AI (Artificial Intelligence). The goal of AI is to give machines the ability to complete tasks that would normally require human intelligence. Examples we have grown accustomed to include vision systems and statistical analysis.
- Generative AI is the ability to create “human-like” content, whether audio, visual, or text. Many of us use AI assistants like Alexa, Siri, and Google Assistant daily when we say, “Alexa, turn on the lights” or “Hey Siri, what is the weather going to be today?” It’s easy to go from “Wow, that’s cool” to “OMG, machines will replace us” without breaking it down. It’s important to know that much of modern AI builds on shared underlying architectures, so a single breakthrough can carry over into other technology.
- ChatBots – A chatbot is like a parrot: it can mimic and repeat words it has heard, with some grasp of context, but without necessarily understanding the meaning.
- A Large Language Model (LLM) is technology that learns words and phrases from large volumes of text and predicts the next word or suggests phrases. This lets it handle language, reasoning, and logic that we otherwise consider “human capabilities.” Think of the suggested phrase that pops up in your email to complete a thought.
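The core idea, predicting the next word from what came before, can be sketched with a toy word-pair counter in Python. This is our own drastically simplified illustration; real LLMs use neural networks trained on vast amounts of text, but the objective is the same:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent follower of a given word.
corpus = (
    "the weather is sunny today . "
    "the weather is rainy today . "
    "the weather is sunny again ."
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("weather"))  # -> "is"
print(predict_next("is"))       # -> "sunny" (seen twice, vs. "rainy" once)
```

A real LLM does the same kind of next-word prediction, just with billions of learned parameters instead of a lookup table, which is what makes its suggestions feel fluent rather than mechanical.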
- Prompting is learning how to give AI clear, effective instructions, and it is probably the most important place to start.
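To make prompting concrete, here is a minimal sketch contrasting a vague request with a structured one. The role/context/task/format layout shown is a common convention we use for illustration, not an official standard:

```python
# A vague prompt leaves the model guessing about audience, scope, and output.
vague_prompt = "Tell me about our sales."

# A structured prompt spells out role, context, task, and desired format.
structured_prompt = "\n".join([
    "Role: You are a marketing analyst.",
    "Context: Q3 sales figures for our three product lines are pasted below.",
    "Task: Summarize the most important trend and one recommended action.",
    "Format: Two short bullet points.",
])

print(structured_prompt)
```

The structured version tends to produce far more usable answers, because the model no longer has to guess what kind of response you want.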
Risks and Threats
A recent study shows that 11% of the data employees paste into ChatGPT is confidential. Misuse, whether intentional or accidental, can have serious consequences. AI relies on data, and many existing platforms will use the data that users put into them. This is why developing an AI policy within our organizations needs to be an immediate conversation. Data is routinely stored and used for AI to learn. Knowing which platforms are the most secure, and how that data is kept private, is paramount. All of us need to understand that financial or customer data entered into some platforms will be used, and consequently stored, indefinitely.
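One practical guardrail an AI policy can mandate is screening text for obviously confidential patterns before it ever reaches an external AI tool. The sketch below is our own illustration, not a real data-loss-prevention product, and the patterns are examples only:

```python
import re

# Illustrative only: mask common confidential patterns before a prompt
# is sent to an external AI service. Real DLP tools are far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com paid with card 4111 1111 1111 1111."
print(redact(prompt))
```

Even a simple filter like this makes the policy conversation concrete: employees see exactly what should never leave the building in a prompt.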
We all see how we currently behave on social media and neighborhood apps, and we can probably all agree that there are many things we, as human beings, are not proud of.
Mo Gawdat, author and former chief business officer of Google X, says it best in a BBC article: “The answer to our future, if we were to re-imagine it, is not found in trying to control the machines or program them in ways that restrict them to serving humanity, it’s found in raising them like a sentient being, and literally raising them like one of our children. And as we observe how humanity has been behaving in front of those machines – the way we respond to tweets or the way we interact with the news and so on – we are not being very good parents; we are not showing the best of us. And if the machines were to mimic our intelligence, and become more of who we are, we are in trouble. The only way we can get our future to be re-imagined as a Utopia, is to actually start behaving like the kinds of parents who could teach those machines the values that would make them want to care about us.”
Making way for Applied AI
Half the respondents in a recent survey say their organization is unprepared to react to future shocks. Those able to bounce forward quickly out of serial crises will gain significant advantages over those that don’t.