The question of how GPT’s large language models (LLMs) will influence BT teams’ day-to-day work in the years ahead, however, appears less certain.
Assuming that GPT’s models continue to advance at a quick pace—as seems to be the case with the release of GPT-4—and that organizations remain eager to use them, here are a few predictions on how they’ll influence BT teams in the years to come.
BT teams will (almost) treat GPT models as members of their team
To get more value from a given GPT model, BT teams will look for personalized, actionable output from it.
With this in mind, they’ll run a module for a given GPT model inside their network so that it can access, and be trained on, all of their assets. The module should then be able to provide all kinds of recommendations and take actions on behalf of BT teams.
For example, it can learn how a process like quote-to-cash works at your organization through the documentation you’ve built out. When you pair this with the knowledge it pulls from external sources, the GPT model should be able to provide intelligent recommendations for improving your process. Taking it a step further, if it can access your instance of an enterprise automation platform like Workato, it can go on to modify your existing quote-to-cash automations and/or build new ones.
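To make this concrete, here is a minimal, hypothetical sketch of how internal documentation might be paired with a question before it’s handed to a model. The `internal_docs` contents, the keyword-overlap retrieval, and the function names are all illustrative assumptions, not a real product API; in practice the assembled prompt would be sent to your provider’s chat endpoint.

```python
# Hypothetical sketch: grounding a GPT model in internal process docs.
# `internal_docs`, `retrieve_relevant`, and `build_prompt` are illustrative
# names, not a real API. Retrieval here is naive keyword overlap; a real
# deployment would use embeddings or a search index.

def retrieve_relevant(docs: dict, query: str, top_n: int = 2) -> list:
    """Pick the docs whose text shares the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda item: len(query_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:top_n]]

def build_prompt(docs: dict, question: str) -> str:
    """Assemble a prompt that pairs internal context with the question."""
    relevant = retrieve_relevant(docs, question)
    context = "\n\n".join(f"[{name}]\n{docs[name]}" for name in relevant)
    return f"Internal context:\n{context}\n\nQuestion: {question}"

# Toy stand-ins for the documentation a BT team has built out.
internal_docs = {
    "quote-to-cash": "Sales sends a quote, finance approves, billing issues the invoice.",
    "onboarding": "HR creates accounts, IT provisions hardware, manager assigns training.",
}

prompt = build_prompt(internal_docs, "How can we speed up our quote-to-cash invoice step?")
# `prompt` would then be sent to the model via your provider's chat API.
```

The design point is that the model answers from your documentation rather than from general knowledge alone, which is what makes its recommendations specific to your organization’s version of a process.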
BT teams will make significant mistakes due to “hallucinations”
You’ve likely heard about—and have even had first-hand experience with—ChatGPT’s faulty output.
The reality is that in its current form—and likely for the foreseeable future—GPT’s family of large language models will occasionally “hallucinate”, or produce unexpected, inaccurate information. What’s more, this misinformation will likely be conveyed with confidence and appear alongside information that’s correct, making the inaccuracies difficult to pinpoint.
This means that if a BT team takes the output at face value, they might end up making decisions that lead them to fall short of their goals. Moreover, these decisions might adversely impact lines of business, thereby hurting the trust and reputation that the BT team has worked so hard to build and maintain.
While GPT-4 and other models that come out in the future will help cut back on errors, BT teams should continue to pressure test the output they receive and use the models inside their networks so that they become smarter with respect to their organization.
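One way to pressure test output is to check a model’s recommendation against a source of truth before acting on it. The sketch below is a hypothetical guardrail, assuming recommendations arrive in a simple structured form: it flags any system the model references that doesn’t exist in your inventory, so a hallucinated integration gets routed to a human instead of being auto-applied. The `known_systems` set and the recommendation format are illustrative assumptions.

```python
# Hypothetical guardrail: before acting on a model's recommendation, verify
# that every system it references actually exists in your inventory.
# `known_systems` and the recommendation format are illustrative assumptions.

known_systems = {"salesforce", "netsuite", "workato"}

def pressure_test(recommendation: dict) -> tuple:
    """Return (safe_to_apply, unknown_systems) for a model-suggested change."""
    referenced = {s.lower() for s in recommendation.get("systems", [])}
    unknown = sorted(referenced - known_systems)
    return (len(unknown) == 0, unknown)

suggestion = {
    "action": "sync approved quotes to billing",
    "systems": ["Salesforce", "NetSuite", "QuickBooks"],  # QuickBooks may be hallucinated
}

ok, unknown = pressure_test(suggestion)
# ok is False and unknown == ["quickbooks"], so a human reviews before applying.
```

Checks like this don’t catch every hallucination, but they turn “take the output at face value” into “trust, then verify against what you know to be true.”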
Here’s more on why that last point is so crucial:
BT teams will focus even more on understanding and solving business problems
The technical skills that BT personnel relied on in the past (e.g. coding) are being used less and less with the rise of low-code/no-code tools—and both current and future GPT models will only accelerate this trend. That said, BT’s interpersonal skills and ability to problem-solve when engaging with lines of business will remain critical, if not become more so.
Flood summarizes all of this succinctly:
“We can use technology to solve technology problems, but we still need people to solve business problems.”
In addition, as BT teams continue using ChatGPT, they’ll become better at asking initial questions and follow-ups to arrive at their desired output. BT teams will likely find this skill handy when engaging with business stakeholders to uncover solutions and their associated requirements.
So, how are you currently using (or planning on using) large language models to transform your team’s operations? And what are your thoughts on how they’ll influence BT teams? You can share your answers to these questions, and pose them to thousands of your peers, in our Systematic+ Community.