FOMO forcing you to move from GenAI to Agentic AI?
If your data and security challenges with GenAI are keeping you from getting your full ROI, you'll face the same challenges with Agentic AI
Companies are currently exploring Agentic AI, at least according to the sentiment conveyed by tech vendors and driven by FOMO (fear of missing out), without having fully figured out how to incorporate Generative AI (GenAI) into their operations. It's like expecting a baby who has just learned to roll over to start walking immediately.
Let me illustrate this with an example from a recent personal experience. I was forced to look for new home insurance because I live in a wildfire area. My neighborhood in Folsom, CA, wasn't deemed high-risk until the wildfires in Southern California last year, even though I have been here since 2007 (I'm sure this has nothing to do with climate change; I hope the sarcasm is coming through). As a result, the major insurance company I was dealing with wanted to cancel my home insurance. Consequently, I decided not to let them continue providing my auto insurance or personal umbrella insurance either.
To make a long story short, I tried using their chatbot to cancel these two policies. The chatbot was adept at taking my questions and converting them into prompts, processing them through its GenAI system, most likely a large language model (LLM) augmented with internal knowledge such as the chat histories of its best agents, company insurance policies, and so forth. However, after several exchanges, the chatbot ultimately transferred me to a human representative at a call center. To my dismay, this representative repeated many of the same questions I had already answered during my conversation with the chatbot. When I asked why, the call center agent explained that the chatbot did not have access to my data. For security reasons, the agent had to ask preliminary questions to verify my identity, query the customer database themselves, and only then retrieve my information and cancel my policies.
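To make the failure mode concrete, here is a minimal, purely illustrative sketch of that interaction pattern. None of the names (`PolicyBot`, `KNOWLEDGE_BASE`) refer to the insurer's real system; the point is only that a chatbot with general knowledge but no customer-data access can converse fluently yet must escalate every actual action to a human.

```python
# Illustrative only: a chatbot that can answer general questions from its
# internal knowledge base but has no access to the customer database,
# so any account-level action must be escalated to a human agent.

KNOWLEDGE_BASE = {
    "cancel policy": "To cancel a policy, we must verify your identity and look up your account.",
    "coverage": "Standard policies cover fire, theft, and liability.",
}

class PolicyBot:
    """Answers general questions; escalates anything requiring customer data."""

    def __init__(self, has_customer_data_access: bool = False):
        self.has_customer_data_access = has_customer_data_access

    def handle(self, question: str) -> str:
        for topic, answer in KNOWLEDGE_BASE.items():
            if topic in question.lower():
                # An *action* (like cancellation) needs the customer record,
                # which this bot cannot reach for security/integration reasons.
                if topic == "cancel policy" and not self.has_customer_data_access:
                    return "ESCALATE: transferring you to a human agent."
                return answer
        return "ESCALATE: transferring you to a human agent."

bot = PolicyBot()
print(bot.handle("What does my coverage include?"))
print(bot.handle("I want to cancel policy #12345"))  # escalates: no data access
```

The design gap is the `has_customer_data_access` flag: until the data pipeline and security model allow the bot to act on customer records, it remains a conversational front end to the same human workflow.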
As intelligent as the chatbot was, the core issue was related to data management and security. Last year, while working on diagrams for my previous company to show how our database would fit into both GenAI and Agentic AI systems, I revisited a report from MIT Technology Review Insights titled "Data Strategies for AI Leaders." Although that report is now over two years old, one of its three key takeaways still rings true, and I’d like to quote it here:
“The rise of AI exacerbates longstanding challenges in data management—data governance, security, and privacy (cited by 59%), data quality and timeliness (53%), and data integration (48%)—and may supply the urgency needed to finally address them.”
It’s evident that one, two, or all three of these challenges prevented this major insurance company from transforming its chatbot into a tool that delivers meaningful actions rather than merely providing a humanlike but ultimately fruitless conversation. Security and privacy concerns, coupled with insufficient data integration, rendered the interaction frustrating and negated any value from their investment in an AI-driven chatbot. Essentially, the company left me on hold listening to bad instrumental versions of songs that remind me how old I am, falling back on a system they already had, while declaring victory to the folks running AI surveys when asked, “Have you fully integrated AI into your workflow and customer-facing systems?”
It doesn’t matter whether it’s GenAI or Agentic AI; the issues all center on data security, data pipelines, and their integration into AI platforms. More on that in a subsequent post.
