Artificial intelligence is here to stay. Many industries and systems already make use of AI, including banks, social media platforms and airports. Some corporation’s AI may already know what your face looks like. On the downside, maybe we are one day closer to the spooky realities of the movies “Minority Report” or “The Terminator.”
Choosing to be terrified is an honest and acceptable response. However, as with the introduction of the internet in the early 1990s, we are about to hit another gear in computer productivity. When you have the option, how do you determine an AI engagement level that is appropriate and beneficial for you?
Back in my day, to do a book report about “To Kill a Mockingbird,” you actually had to read the book. There wasn’t an option for an AI platform to type it for you. You have probably heard or said something like this about AI in the last year. Is that a fair assessment of what is happening?
Students using AI to write their book reports miss the point of the assignment and don’t learn anything from the book. However, figuring out how to prompt AI to write a unique, accurate book report is also a skill. Don’t get mad at me yet — I’m not advising students to use AI to cheat on assignments. However, a new, valuable skill set is being developed right now by early AI adopters: finding the balance between AI productivity and common sense.
Ask the Right Questions
In this fictional example, I’m an expert sports data analyst for ESPN. Specifically, let’s say I focus on what plays work best against the Detroit Lions in the fourth quarter. I use this information to tell the ESPN camera crew where to focus to get the best shots of every play based on patterns I’ve observed. I already have an enormous database of information that shows a specific running play works for a 10-yard gain 80 percent of the time in a fourth-quarter scenario.
If I uploaded all this data to an AI platform and asked it to predict another play that would work for a big gain, it may give me a good answer. With my sports knowledge, I could evaluate the AI answer, do some spot checks or write it off as a good or bad suggestion for where to point the cameras in the next game, but I wouldn’t have to rework all the data.
The AI platform might use my data as evidence to train its model to better answer similar questions. Someone who works for the Chicago Bears could ask the same AI about which plays are most successful against the Lions in the fourth quarter. They may get an excellent AI answer detailing these scenarios.
However, as the fictional ESPN analyst, I might not have intended this. In a sense, I trained the AI well with hard-fought data from my research to give the answer to this fourth-quarter question to anyone who knows how to ask for it.
The Detroit Lions could also use this data. They could take this information and decide to sub in a less-tired linebacker in the fourth quarter to prevent that play from being effective or change their defensive coverage. This information is now publicly accessible for anyone who knows how to ask for it, but is it now worthless if the Lions are also reacting to it?
AI helps crunch numbers in a million ways in a few seconds, but it relies on the users to help determine if the answer is good. If the AI said to focus all the ESPN cameras on the punter on first down, I would know that is a dumb answer and bad data was used to generate it. Balancing quick AI data analysis and common sense, I would know the underlying data was faulty, or my search question wasn’t easy for the AI to understand.
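The kind of number-crunching described above is worth making concrete. Here is a toy sketch, in Python, of ranking plays by fourth-quarter success rate from a play log. Every play name, yardage figure and threshold here is invented for illustration; this is the sort of tally an analyst could spot-check an AI answer against, not anyone's actual workflow.

```python
# Toy sketch: rank plays by how often they gain big yardage in the
# fourth quarter. All play names and numbers are hypothetical.
from collections import defaultdict

# Each record: (play_name, quarter, yards_gained)
play_log = [
    ("outside_run", 4, 12), ("outside_run", 4, 11), ("outside_run", 4, 3),
    ("screen_pass", 4, 2), ("screen_pass", 4, 15), ("outside_run", 4, 10),
    ("screen_pass", 2, 20),  # not the fourth quarter, ignored below
]

def success_rates(log, quarter=4, big_gain=10):
    """Share of each play's snaps in `quarter` gaining >= big_gain yards."""
    attempts = defaultdict(int)
    successes = defaultdict(int)
    for play, q, yards in log:
        if q != quarter:
            continue
        attempts[play] += 1
        if yards >= big_gain:
            successes[play] += 1
    return {play: successes[play] / attempts[play] for play in attempts}

print(success_rates(play_log))  # {'outside_run': 0.75, 'screen_pass': 0.5}
```

If an AI platform claimed the screen pass was the near-certain winner, a quick tally like this would be one way to spot-check whether the underlying data actually supports it.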
In a PHCP scenario, let’s say I own a 30-truck plumbing company. I am trying to figure out if it is worthwhile to have a service tech on call 24/7. Generally, the techs don’t like to do it, and most of the time, no customers call. However, every once in a while, one of those midnight service calls is an enormous win for the company and leads to additional profitable business.
AI could lead me toward a solution based on the phone call records I have that document types of service requests, local weather information and an event calendar for this ZIP code. AI might help me determine that when it is below 20 degrees at night on days an electronic dance music concert is at the big arena downtown, there is a high likelihood of getting a no-hot-water call from any of the hotels within walking distance of the arena.
Perhaps the concertgoers who are sweaty from dancing all night take a shower in their hotel rooms in a short window of time after the concert. This large demand and the slower recovery rate of the water heaters in the winter are the underlying recipe for a service call.
It would make sense to schedule the tech who lives nearby to be on call those nights, but not all nights or even all concert nights, only the dance music nights. In fact, I might use my common sense and have the tech stay at one of the downtown hotels that night to respond immediately if needed.
It is realistic to assume that AI could help find this type of pattern with the proper data and questioning. You could also figure out that pattern without AI, but it may take years to crunch all those numbers and find a pattern on your own.
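To make the pattern above tangible, here is a minimal sketch of the rule the AI might surface: flag only the nights where a cold snap overlaps a dance music concert. The dates, temperatures and event labels are all invented, and a real analysis would start from messy call records rather than a tidy list, but the shape of the logic is the same.

```python
# Toy sketch: pick on-call nights where a below-20-degree low overlaps
# an EDM concert downtown. All dates and figures are hypothetical.

def on_call_nights(nights, cold_below=20):
    """Return dates worth staffing: cold nights that also have an EDM show."""
    return [
        n["date"]
        for n in nights
        if n["low_temp_f"] < cold_below and n["event"] == "edm_concert"
    ]

nights = [
    {"date": "2025-01-10", "low_temp_f": 12, "event": "edm_concert"},
    {"date": "2025-01-11", "low_temp_f": 35, "event": "edm_concert"},
    {"date": "2025-01-12", "low_temp_f": 8,  "event": "hockey_game"},
    {"date": "2025-01-13", "low_temp_f": 15, "event": "edm_concert"},
]

print(on_call_nights(nights))  # ['2025-01-10', '2025-01-13']
```

The hard part isn't this filter; it's discovering which columns matter in the first place, which is exactly where the years of manual number-crunching would otherwise go.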
When used well, AI could, to some extent, help you count cards instead of gambling blindly. Who wouldn’t want to do that?
AI isn’t making novice users experts. It doesn’t filter “To Kill a Mockingbird” through the lived experience of a 16-year-old’s brain. Expert users who know how to ask specific questions and evaluate the accuracy of an answer combine the best of both. They use AI to do hundreds of hours of work in minutes and apply common sense to determine whether it was done well.
As odd as it sounds, some 16-year-old is figuring this out right now instead of reading “To Kill a Mockingbird.” While he cheats himself out of the literature learning experience, he develops a new skill. He uses AI to aggregate a ton of data and form an opinion — similar to reading a book and writing a report.
Again, I’m not advocating using AI to write the book report, but AI is an emerging tool that could boost our research productivity the same way the internet was a quicker way to access information than driving to the library to read an encyclopedia.
Is AI worth the privacy risk of inputting data? Is the information that you input safe? It would be best to assume no. The terms and conditions of AI platforms are vague at best. Some platforms will give you an answer to the question, “Does this AI utilize user-provided data to update and train its model?” Most either don’t give you specifics or note that their terms and conditions may change.
If you avoid AI, you risk being left behind or outpaced by your competitors — similar to someone who insists on using an abacus to add numbers instead of a calculator or an Excel spreadsheet. There is no perfect answer for when to use or avoid AI. However, the potential productivity benefits are enormous if you know how to ask the right questions and evaluate the validity of the answer.