How large language models will impact the integration and automation space


Note: This article was published on 5/9/2023. We’ve gone on to launch AI@Work, a suite of AI features and resources to power your automations.

Large language models (LLMs), like GPT-4, are quickly transforming how employees, teams, and organizations operate.

Employees can use them to streamline tedious, day-to-day tasks (e.g., responding to emails) so they can focus on higher-order work, while specific teams can leverage them to develop and implement innovative solutions. For instance, product leaders can use LLMs to deliver intuitive, personalized, and powerful product features, allowing users to see a faster time-to-value.

An integration platform as a service (iPaaS) can, unsurprisingly, also benefit from LLMs, but only when the solution applies these models carefully to account for data privacy.

We’ll explain how a modern iPaaS (as well as an embedded iPaaS) stands to gain from incorporating LLMs, the measures you’ll need to take to safeguard your data when using them, and our tentative plans for adopting them.

Implement even more powerful automations

There’s obviously significant value in using a modern iPaaS like Workato to integrate internal applications (or your customers’) and automate processes end-to-end, whether that’s order-to-cash, lead routing, incident management, etc.

But when you add generative AI into specific steps within a workflow automation, it becomes that much more impactful to your users. This is especially true when the generative AI can be accessed from the applications your employees already work in (e.g. Slack).

For instance, our friends over at Arcanum AI built automated solutions that use Workato and GPT to transform various financial processes. 

Here’s how they implemented the solution for financial reporting:

1. An employee accesses Arcanum AI’s finance assistant, “Archie”, by typing in the command “/archie”.

2. The employee selects a task Archie can perform; in this case, it’s writing a financial report.

3. The employee specifies the data they want Archie to pull from, including the month and specific financial information.

4. Once the employee clicks “Send”, Archie pulls the requested data from the company’s accounting software and uses GPT to write the financial report in near real time; Archie then shares a preview of the report with the employee.

5. Archie lets the employee view, share, or download the report with the click of a button.
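The five steps above can be sketched as a simple handler. Note that `fetch_financials` and `call_gpt` below are hypothetical stand-ins for the accounting-software pull and the GPT call that a real Workato recipe would make:

```python
# Hypothetical sketch of an Archie-style flow: Slack command -> fetch data -> GPT -> report.
# fetch_financials and call_gpt are placeholders, not real Workato or Arcanum AI APIs.

def fetch_financials(month: str) -> dict:
    # Placeholder for a pull from the accounting system.
    return {"month": month, "revenue": 120_000, "expenses": 85_000}

def call_gpt(prompt: str) -> str:
    # Placeholder for an LLM completion request; a real recipe would route
    # this through an LLM connector instead.
    return f"Draft report based on: {prompt}"

def handle_slash_command(command: str, month: str, metrics: list[str]) -> str:
    # Step 1: the employee invokes the assistant via a slash command.
    if command != "/archie":
        raise ValueError("unknown command")
    # Steps 2-3: pull the requested month and keep only the requested metrics.
    data = fetch_financials(month)
    requested = {k: v for k, v in data.items() if k in metrics or k == "month"}
    # Step 4: have the LLM draft the report from the selected data.
    prompt = f"Write a financial report for {month} using: {requested}"
    return call_gpt(prompt)

report = handle_slash_command("/archie", "April", ["revenue", "expenses"])
print(report)
```

In a production recipe, the preview and download actions (step 5) would be separate workflow steps triggered by the Slack message buttons.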

Take additional precautions for protecting data that’s used in integrations and automations 

Your applications likely use highly sensitive data (some of which might fall into the bucket of personally identifiable information, or PII), whether that’s related to your employees, clients, prospects, and so on. Connecting your applications to LLMs, therefore, needs to be approached and managed carefully.

Here are some tips that can help:

  • Read the terms and conditions of the LLMs’ APIs. More specifically, try to understand how they use your data. There will likely be a variety of LLM APIs you can connect to over time, so compare them from a data security perspective and choose the one that best meets your requirements.
  • Go through the proper development lifecycle for these integrations and automations. And, once pushed to production, review the data that’s flowing through to the LLM’s connector to ensure it isn’t receiving anything that’s unintended.
  • Securely authenticate to LLMs’ APIs. Workato can help you do just that with our OpenAI Connector.
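As a concrete illustration of the second and third tips, here’s a minimal, hypothetical sketch of scrubbing obvious PII before a prompt reaches an LLM API, and of reading the API key from the environment rather than hard-coding it. (Regex-based redaction is a simplification; production setups would typically use a dedicated DLP or PII-detection tool.)

```python
import os
import re

# Simple regex-based redaction for emails and US-style phone numbers.
# This is a sketch; real deployments need broader, more robust PII detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact_pii(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

def build_llm_request(prompt: str) -> dict:
    # Read the key from the environment instead of embedding it in the recipe.
    api_key = os.environ.get("LLM_API_KEY", "")
    return {
        "headers": {"Authorization": f"Bearer {api_key}"},
        "body": {"prompt": redact_pii(prompt)},
    }

req = build_llm_request("Summarize the call with jane.doe@example.com, 555-123-4567.")
print(req["body"]["prompt"])
```

Reviewing (and, where appropriate, redacting) the payload at this point in the workflow is exactly the kind of check worth adding before data flows to an LLM connector in production.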

Workato will embrace LLMs aggressively—yet cautiously 

Finally, in addition to providing connectors for LLMs like GPT, you might be asking yourself how we plan to incorporate LLMs into our platform. 

While I won’t go into too many specifics (we’ll publish separate articles that do this in the near future), I can say that our team is actively exploring and building co-pilot* capabilities with GPT-4.

Note: we may not use “co-pilot” when we take the associated capabilities live.

This’ll allow our users to implement automations faster and more effectively. For example, a user could simply explain what they’re hoping to solve and our co-pilot could build out specific steps of the automation—or even build it end-to-end. 

We’re also exploring other ways the co-pilot can assist users, such as building custom connectors faster by using the co-pilot to generate code in our Connector SDK.

That said, as we evaluate our options and make these platform investments, we’re prioritizing our customers’ data security. That’ll likely lead us to make these co-pilot capabilities optional, and it might even push us to use our own custom-trained LLM (where we’d fine-tune a pre-trained, open-source model on automation data). The latter would not only eliminate the security risks of sharing metadata with third parties but also allow our co-pilot to be more accurate when carrying out tasks.

There’s a lot more to come on this soon. In the meantime, you can learn how Workato and LLMs can work together today by watching the recording of a webinar I recently co-hosted with the brilliant Asa Cox (the Founder and CEO of Arcanum AI). 

We covered specific automations they’ve built with Workato and GPT (like the one I highlighted earlier), and Asa addressed more high-level questions and concerns around GPT, such as its future in industries that have to follow strict data privacy and security standards.

Watch the webinar recording