Our previous artificial intelligence (AI) series provided some broad context on common AI terms and theoretical use cases. Our next series will provide a beginning-to-end map of how AI can be piloted, scaled and analyzed in a financial setting.
Each article in the series will present AI development through the lens of a financial company planning to test and deploy AI applications in its systems.
As a refresher, let’s define the main pillars of data-driven, automated intelligence systems:
- Data science generates knowledge and insights
- Machine learning identifies patterns and enables predictions from data
- Artificial intelligence performs actions previously believed to require human intelligence
Our prior series also covered how these pillars combine to support predictive applications, allowing businesses to better estimate customer responses.
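To make those pillars concrete, here is a minimal sketch in Python of a customer-response model using scikit-learn. Everything here — the features, data and thresholds — is a hypothetical illustration, not a production recipe:

```python
# A minimal sketch of the pillars in practice: historical data (data
# science) feeds a model that finds patterns (machine learning) and
# predicts customer responses (an AI-style action).
# All features, data and thresholds below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical customer features: [account_age_years, monthly_logins]
X = rng.normal(loc=[5.0, 12.0], scale=[2.0, 4.0], size=(500, 2))
# Hypothetical label: did the customer respond to an offer?
# (Response is loosely tied to login activity, purely for illustration.)
y = (X[:, 1] + rng.normal(scale=2.0, size=500) > 12.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Machine learning: fit a model that captures the pattern in the data.
model = LogisticRegression().fit(X_train, y_train)

# Prediction: estimate how a new customer is likely to respond.
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")
print(f"Response probability for a new customer: "
      f"{model.predict_proba([[4.0, 15.0]])[0, 1]:.2f}")
```

In practice, the data science work of selecting and validating features typically dwarfs the modeling step shown here.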
Piloting an AI application
Perhaps you’ve decided to test an AI application with a small-scale pilot. What steps are usually taken during a piloting phase? What questions should you ask before starting your own pilot program?
Based on our experience with our own internal pilots, we recommend answering the following questions:
- What exactly are you testing? Are you testing the technology itself, or its feasibility within your broader systems? People don’t always know how to react to an AI pilot program, so set clear goals for the purpose of your testing (see the evaluation sketch after this list).
- How does it map back to tangible business benefits? This is the “why” of your testing rationale. It helps clarify to senior stakeholders what they can expect from the results.
- How will people interface with the program? Controlled pilots offer only a glimpse of an AI application’s full functionality (and drawbacks), so consider how users will interact with the system beyond the pilot environment.
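One way to keep the purpose of a pilot explicit is to write the success criteria down as data and check measured results against them automatically. Here is a minimal sketch; the metric names and thresholds are hypothetical placeholders for whatever business benefits your pilot targets:

```python
# A minimal sketch: encode pilot success criteria up front, then check
# measured pilot results against them. All metric names, thresholds and
# results below are hypothetical.

# Success criteria agreed with stakeholders before the pilot starts.
SUCCESS_CRITERIA = {
    "prediction_accuracy": 0.80,    # model quality
    "avg_response_seconds": 2.0,    # feasibility in broader systems
    "analyst_adoption_rate": 0.50,  # how people interface with it
}

# Results measured during the pilot (illustrative numbers).
pilot_results = {
    "prediction_accuracy": 0.84,
    "avg_response_seconds": 1.6,
    "analyst_adoption_rate": 0.40,
}

def evaluate_pilot(results: dict, criteria: dict) -> bool:
    """Report each metric against its target; lower is better for latency."""
    passed = True
    for metric, target in criteria.items():
        value = results[metric]
        ok = value <= target if metric == "avg_response_seconds" else value >= target
        print(f"{metric}: {value} (target {target}) -> {'PASS' if ok else 'FAIL'}")
        passed = passed and ok
    return passed

print("Pilot success:", evaluate_pilot(pilot_results, SUCCESS_CRITERIA))
```

Writing the criteria down before the pilot starts also makes it harder to rationalize disappointing results after the fact.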
Scaling an AI application
Congratulations, you successfully piloted your AI application. Now, you want to scale that performance across the business. What steps usually go into this scaling phase? What questions tend to arise during the scaling process?
Here’s what we recommend asking at this stage:
- How will you move from feasibility to visibility? Pilot programs are all about testing the feasibility of a given technology. Once it’s deemed feasible, you need a plan to publish your results internally or externally.
- How different is the target architecture from the pilot system? Pilots are, by definition, small-scale tests of new applications. If the eventual client-facing architecture differs greatly from the pilot system, you may need to factor in more time for adoption (see the sketch after this list).
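One common industry technique for bridging that gap (not specific to any vendor or to this series) is a shadow deployment: the candidate AI model scores live requests alongside the legacy system, but only the legacy output reaches clients. A minimal sketch, with hypothetical stand-in models:

```python
# A minimal sketch of a shadow deployment: the candidate AI model runs
# alongside the legacy system on the same requests, but only the legacy
# answer is returned to clients. Disagreements are logged for review.
# Both "models" here are hypothetical stand-ins.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

def legacy_decision(request: dict) -> str:
    # Stand-in for the existing rules-based system.
    return "approve" if request["score"] >= 600 else "review"

def candidate_decision(request: dict) -> str:
    # Stand-in for the piloted AI model.
    return "approve" if request["score"] >= 580 else "review"

def handle_request(request: dict) -> str:
    served = legacy_decision(request)     # clients see only this
    shadow = candidate_decision(request)  # logged, never served
    if shadow != served:
        log.info("disagreement on %s: legacy=%s candidate=%s",
                 request["id"], served, shadow)
    return served

for req in [{"id": "A1", "score": 640}, {"id": "A2", "score": 590}]:
    handle_request(req)
```

Logged disagreements give you a preview of how the new model would behave in the target architecture before any customer sees its output.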
Analyzing results from an AI application
Now that you’ve piloted and scaled an AI application for customer-facing use, how will you track results and map them back to program objectives? Aligning results to goals is typical of any project, and AI applications are no different.
Here’s what we recommend asking at this stage:
- Did you discover new metrics or objectives? You may have uncovered new details during your testing phase or found that the results you measured differed from your hypotheses.
- How do the new processes compare to your current systems? Provide a substantive comparison between the new AI application and your legacy platform (one way to frame it is sketched below). What are the significant takeaways?
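To ground that comparison, you might compute the same metric for both systems over a shared set of evaluation cases. A minimal sketch with hypothetical outcome data:

```python
# A minimal sketch comparing the new AI application to a legacy platform
# on the same evaluation cases. All labels and predictions below are
# hypothetical.

# 1 = customer responded to the offer, 0 = did not.
actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
legacy    = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]  # legacy platform's calls
ai_system = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # new AI application's calls

def accuracy(predicted, truth):
    """Fraction of cases where the system's call matched the outcome."""
    return sum(p == t for p, t in zip(predicted, truth)) / len(truth)

print(f"Legacy accuracy: {accuracy(legacy, actual):.0%}")
print(f"AI accuracy:     {accuracy(ai_system, actual):.0%}")
```

Scoring both systems on identical cases keeps the comparison fair; a single headline metric is easier to communicate to senior stakeholders than a grab bag of incommensurable numbers.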
Piloting, scaling and analyzing an AI application can position your organization for more efficient workflows while helping manage risk from emerging cybersecurity threats.
If you’re interested in learning more about AI applications in finance, check out this series of AI articles.