
Getting Started with Generative AI

November 7, 2025 | Alex Terlecky

As AI becomes more prominent in daily life and the workplace, it can be hard to keep up with these emerging technologies and to know how, or whether, to use them. Especially in the public sector, where organizations manage public data, critical infrastructure, and tight budgets, AI might seem like technology best kept at arm’s length.

But the reality is that AI is here to stay—in fact, it’s been with us for decades in the form of computer chess, IBM’s Watson, and digital assistants such as Apple’s Siri. In this article, we want to help you understand what it means when we say “AI,” how you can take baby-steps toward creating operational efficiencies, and what you can do to prepare for what’s next. 

What we mean when we say AI  

Nowadays, AI has become a catch-all for any kind of automated technology. However, it’s important to understand the nuances when discussing AI, given the varying degrees of sophistication and the speed of advancements. Today’s AI is not yesterday’s AI, and it won’t be tomorrow’s either. 

Prior to the LLM (large language model) boom of the last few years, early AI systems were rule-based and relied on predefined algorithms. They handled narrow jobs such as image recognition, or operated as the simple chatbots that populated early instant messaging systems and provided often-inaccurate customer service. These systems could perform specific tasks, such as playing chess, but lacked the ability to learn from new data or adapt to novel situations.

Today, we have generative AI tools such as ChatGPT, Google’s Gemini, and Anthropic’s Claude, which do exactly what the name suggests: they generate text, images, audio, and even video based on user prompts. Because they are trained on vast amounts of human language, these models can understand context and produce coherent, contextually relevant outputs that rival human-produced content.

The way they are able to do this is still through predictive models, but these models use advanced techniques like unsupervised or semi-supervised learning for training and neural networks to generate new data. Essentially, if you look under the hood of an LLM like ChatGPT (strictly referring to text generation), you will find a program that is estimating the probability of each possible next word in the sentence it is producing.

When you enter a prompt asking a question, the program still operates off an algorithm and selects the next word with the highest probability. If the word “the” has been assigned a 62% probability of being the next word, while “a” has only a 21% probability, the next word will be “the.” You can even see a simpler version of this happening in real time when you are working on a Word document and the software suggests a possible ending to your sentence.
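To make the “next word” idea concrete, here is a minimal sketch in Python. The word list and probabilities are invented for illustration; a real model scores tens of thousands of candidate tokens with a neural network rather than a hand-typed table, and it often samples among likely words rather than always taking the single most probable one.

```python
# Illustrative only: the vocabulary and probabilities below are invented.
# A real LLM scores tens of thousands of possible tokens with a neural network.
candidate_next_words = {
    "the": 0.62,
    "a": 0.21,
    "an": 0.09,
    "this": 0.08,
}

# Greedy decoding: pick whichever word has the highest probability.
next_word = max(candidate_next_words, key=candidate_next_words.get)
print(next_word)  # -> "the"
```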

Now, contrast that with the future of AI: the Holy Grail of Artificial General Intelligence (AGI). This is a theoretical form of AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to human intelligence. AGI would be able to reason, plan, solve complex problems, and understand complex ideas.

Unlike current AI systems, which are specialized and limited to specific tasks, AGI would have the flexibility and adaptability of human cognition. It would be capable of transferring knowledge from one domain to another and learning from experience in a more generalized way.  

Although it may be a lot to digest, at their core these are simply tools for us to practice with, utilize, and master to help our organizations grow, increase our productivity, and shift our focus away from rote tasks toward a more meaningful member experience.

Where to begin? 

On the long road of AI development, we’ve arrived at a point where businesses and organizations have access to new tools that can help increase efficiency, improve productivity, and begin to take over repetitive tasks such as data entry, research, and drafting emails or blog posts (Disclosure: the Risk Management Review is committed to producing original written content; while we use AI for research, we do not use AI for written content generation or editing purposes).

While this will undoubtedly result in job loss and job restructuring, taking the appropriate steps now can help mitigate and control these variables, while still using the available AI tools to manage personal workloads and shift attention to areas that can benefit from it, such as membership and customer service.

The first step is to create a space for team discussions and conversations about AI. These discussions will be important for addressing both the positives and negatives of this emerging technology. But one thing is certain: this is not a technology that is going away. Having discussions with your team now can help position your organization to decide what AI use cases you are comfortable with, which use cases are off limits, and what areas to pay attention to as the technology continues to rapidly advance. And by introducing a space for these conversations to take place, you give staff an appropriate forum to air concerns, discuss new developments, and recommit to your organizational values and how AI may or may not align with them.

A good place to start is by adding a discussion item to your team meetings, or to an upcoming meeting of the Board of Directors. These don’t have to be structured and can operate in an open-discussion format. Although there might not always be a lively discussion, this has two effects: first, it sends a message to your team that leadership is committed to understanding how this technology fits within your organization; second, it offers a space where pro-AI staff can talk about how they’re using it to change their jobs and anti-AI staff can articulate their concerns.

Next steps 

Once you’ve waded into the waters of AI and started having discussions with your staff, there are a few practical next steps for managing AI integrations, which now appear everywhere from Google searches to Microsoft products to many companies’ customer service departments.

The first is to create an internal AI-use policy. This allows you to take what you’ve learned from your staff or Board discussions and get something concrete into writing, so staff understand what is acceptable and unacceptable in terms of AI use. This document can help you audit your current AI use, evaluate compliance and regulatory obligations, and establish accountability. It can also be externally facing, showing your customers or members how you are committed to appropriate AI use. An AI-use policy is a helpful tool and should be treated as a living document; given how fast this technology is advancing, the document will grow and change as well.

Beyond this step, organizations can begin to experiment with pilot projects that test the limits of AI. You can start by identifying a problem within your organization or workflow and imagining a solution that generative AI can help deliver. Focus on small, specific tasks and determine the data you need to accomplish this goal. For example, creating an AI agent can be a starting point to see whether LLMs are up to snuff at assisting internal staff searching for answers to their questions, or customers looking for help understanding your services. Remember that failure is an acceptable outcome when learning what works and what doesn’t for your organization.
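As one illustration of how small such a pilot can be, the sketch below outlines one possible shape for an internal Q&A helper: find the most relevant policy snippet, then ask a language model to answer using only that snippet. Everything here is hypothetical; call_llm() is a placeholder for whichever model provider your AI-use policy permits, and the snippets stand in for your own internal documents.

```python
# A minimal sketch of an internal Q&A helper. All names and snippets are
# hypothetical; call_llm() is a placeholder for the model provider your
# AI-use policy permits.

FAQ_SNIPPETS = {
    "password reset": "Staff can reset passwords through the IT portal; "
                      "tickets are answered within one business day.",
    "wire transfer limits": "Wire transfers above the daily limit require a "
                            "second approver from the operations team.",
}

def find_relevant_snippet(question: str) -> str:
    """Very rough keyword match; a real pilot might use search or embeddings."""
    question_lower = question.lower()
    for topic, snippet in FAQ_SNIPPETS.items():
        if any(word in question_lower for word in topic.split()):
            return snippet
    return "No relevant policy snippet found."

def call_llm(prompt: str) -> str:
    """Placeholder: replace with a call to the model your organization allows."""
    return "[model response would appear here]"

def answer_question(question: str) -> str:
    snippet = find_relevant_snippet(question)
    prompt = (
        "Answer the staff question using only the policy snippet below. "
        "If the snippet does not answer it, say so.\n\n"
        f"Snippet: {snippet}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer_question("How do I reset my password?"))
```

Even a toy like this surfaces the practical questions quickly: what data the helper is allowed to see, how answers are checked, and who is accountable for the result.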

Finally, a long-term step is to begin thinking about managing your unstructured data. A majority of data, with some analyst estimates as high as 90%, is unstructured, meaning it exists as text, video, audio, and emails scattered across places that are not centralized or easily accessible the way a searchable database is. But as AI continues to advance, it will become powerful enough to access, convert, and interpret this data, giving users access to it the same way they access structured data. By identifying the places your data lives within your organization, you can prepare strategies for organizing and accessing it, ultimately allowing a data stream to flow between your organization’s unstructured data and any AI systems you employ.
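A modest way to start is simply taking inventory of where that unstructured data lives. The sketch below (the folder path and file types are assumptions, not a prescription) walks a shared drive and counts documents by type, which can inform a later effort to organize or index them.

```python
# Rough inventory of unstructured files on a shared drive.
# The path and file types are assumptions; substitute your own.
from collections import Counter
from pathlib import Path

SHARED_DRIVE = Path("/shared/organization-files")
INTERESTING_TYPES = {".pdf", ".docx", ".eml", ".txt", ".mp3", ".mp4"}

def inventory(root: Path) -> Counter:
    """Count files by extension to see what unstructured data you actually have."""
    counts = Counter()
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in INTERESTING_TYPES:
            counts[path.suffix.lower()] += 1
    return counts

if __name__ == "__main__":
    for extension, count in inventory(SHARED_DRIVE).most_common():
        print(f"{extension}: {count} files")
```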

Learning more by learning more 

While AI use and development may seem daunting, worrisome, or like an exciting new frontier, the best advice for managing it is to continue learning about it and practicing with it. You can access free versions of ChatGPT and begin practicing prompt engineering, or you can read a news article about the technology each day. In any case, by learning a little bit at a time, you will have a leg up on those who don’t.

We’re not trying to sugarcoat the risks of AI. It will displace jobs, and there are ethical concerns to be aware of. But at the end of the day, users, leaders, and organizations are going to decide what is appropriate and what is off-limits. By committing to learning more, experimenting, and offering your voice to the discussion, you can help shape what happens with AI and what our shared future looks like.