AI Threats to CEOs
By Dan Hilbert, CEO
The Myths and Dangers of AI for CEOs
(That Vendors Don't Often Tell You)
The Business Rage of AI
AI is all the rage in the software world. Enterprise software vendors feel compelled to claim and hype their AI because nearly all of their competitors are doing the same, yet in most cases what they are selling is a Learning System, not legitimate AI.
As a CEO, your teams are asking you to approve budgets for AI software that vendors claim will make you younger and regrow your hair! Do the vendors explain the teachings and rules of their AI? Seldom. It is their "secret sauce." With vendor competition so fierce, they simply can't expose their core code to competitors. But there is a compromise that works for both sides, CEOs and vendors.
CEO Risk Management Responsibilities
In many industries, Risk Management is the main responsibility of the CEO. The first time I led a software company for investors, I was truly stunned by my contract: half a page covered my compensation package, two and a half pages covered my behavior, and nine pages covered my Risk Management responsibilities.
As CEOs, we must assess the possible risk of an immature, unconstrained, incorrectly taught AI with lax guidance and overly broad data access. How do you assess this? Our CEO paradox is that we need effective AIs to compete.
Below, I provide “7 Rules for Assessing Risk and Value of AI Software.”
Quick History of AI
I was personally involved in the earliest stages of AI development, long before it was called AI. In my early years of headhunting in Austin, I became a specialist in NLP (Natural Language Processing). Overnight, IBM, Schlumberger, Eagle-Signal, and MCC began hiring the few NLP experts in the country for their Austin offices. NLP was foundational code for AI and still is in most advanced AIs.
Before NLP matured into legitimate AI, advanced software vendors used sophisticated Learning Systems, which remain a bedrock of many quality AIs. Learning Systems are built on advanced mathematics, complex rules, and usually Big Data. The coded rules give a Learning System safety and direction, reducing business risk.
The 7 Rules for Assessing Risk and Value of AI Software
A poorly designed, uncontained AI will create unexpected risks for you and your business. As CEOs, we don't want to stand in front of our Boards explaining, "The AI assessment team I assigned just didn't get it right." You know how well that goes over: think "Led Zeppelin" and "Where's my exec headhunter file?"
The following is a basic guide for CEOs assessing the risk and potential of a new AI software product:
1. Explain all the data the AI can access.

2. What are the basics of your AI's "teachings"?

3. Is this a constrained AI or an unconstrained AI? In business, as CEOs, we can't take major risks with unconstrained AI. Hallucinations occur. Incorrect correlations and predictions occur.

4. What basic rules does your AI have for containment? Can you guarantee in the contract that the AI will not explore outside of these boundaries?

5. When an AI can touch government or industry regulations, how does it handle this? Since we also deal with workforce data, particularly diversity data and its government regulations, we built a Diversity module that checks every AI recommendation before allowing it to reach our clients (a sketch of such a gate follows this list).

6. How will the IT team responsible for the AI application monitor its recommendations and predictions? This can be difficult. Some IT specialists are "doom and gloom" experts; it's in their nature. Others want to try everything new and shiny. Your job is to find a trusted mix. AI brings some level of change management and new learning, and we can't allow ourselves to be too influenced by either extreme. (A monitoring sketch also follows the list.)

7. Is the potential business value of this AI worth the risk? The risk to your reputation? Your career? Your ability to lead your company and take care of your employees and investors?
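To make rule 5 concrete, here is a minimal sketch of the kind of compliance gate described above. The module name, rule logic, and data fields are hypothetical illustrations, not our production code; the point is simply that every AI recommendation passes a deterministic, human-written check before anything reaches a client.

```python
# Hypothetical compliance gate: every AI recommendation must pass
# human-written rule checks before it is released to a client.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    candidate_id: str
    action: str                       # e.g. "advance", "reject"
    reason: str
    attributes_used: set = field(default_factory=set)

# Attributes that regulations say may never drive a recommendation
# (illustrative list only).
PROTECTED_ATTRIBUTES = {"age", "gender", "race", "religion", "disability"}

def diversity_check(rec: Recommendation) -> tuple[bool, str]:
    """Return (approved, explanation) for a single AI recommendation."""
    used_protected = rec.attributes_used & PROTECTED_ATTRIBUTES
    if used_protected:
        return False, f"Blocked: recommendation relied on {sorted(used_protected)}"
    return True, "Approved: no protected attributes used"

def release_to_client(rec: Recommendation) -> None:
    approved, explanation = diversity_check(rec)
    if approved:
        print(f"SEND {rec.candidate_id}: {rec.action} ({explanation})")
    else:
        # Blocked recommendations are held for human review, never sent.
        print(f"HOLD {rec.candidate_id}: {explanation}")

release_to_client(Recommendation("c-101", "advance", "strong skills match",
                                 attributes_used={"skills", "tenure"}))
release_to_client(Recommendation("c-102", "reject", "model score low",
                                 attributes_used={"age", "skills"}))
```

The design choice that matters: the gate is ordinary, auditable code that sits outside the AI, so a blocked recommendation can be explained to a regulator line by line.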
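For rule 6, here is an equally minimal sketch of what "monitoring AI recommendations" can mean in practice, assuming a hypothetical append-only audit log; a real deployment would feed these records into the IT team's existing review tooling.

```python
# Hypothetical audit trail: log every AI recommendation so the IT team
# can review what the system decided, when, and why.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_recommendations.log"  # assumed append-only log file

def log_recommendation(model: str, inputs: dict, output: str, confidence: float) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_recommendation("talent-match-v2", {"role": "engineer"}, "advance", 0.87)
```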