How to write a function

Learn all you need to know about creating Voiceflow function code

📘

Additional resources

Functions CMS: Learn how to manage and organize your functions effectively using Voiceflow's Content Management System.
Using the Function Step: Instructions on how to implement and configure the Function step on the Voiceflow canvas.
Functions Starter Pack: Start by importing some utility functions directly into your project. Click to download import file.

🚧

Environment limitations

Please note that certain JavaScript methods, such as setTimeout(), are not supported out of the box because they depend on browser or Node.js runtime APIs and are not part of the ECMAScript (JavaScript) language specification itself. This JavaScript reference document describes all of the built-in objects supported by functions code.
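For example, a host timer like setTimeout() will fail at runtime, while ECMAScript built-ins such as Math, Date, and JSON work as expected (a minimal sketch):

// Not available: setTimeout() depends on a browser/Node.js runtime API
// setTimeout(() => {}, 1000); // expected to fail inside a function

// Available: ECMAScript built-ins such as Math, Date, and JSON
const roll = Math.ceil(Math.random() * 6);
const timestamp = new Date().toISOString();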

Introduction to Functions

In Voiceflow, functions allow you to create reusable, user-defined steps that can perform tasks ranging from simple text manipulation to making complex API calls. This guide will walk you through the process of coding a function and utilizing network requests within Voiceflow.

Example Functions

Functions Starter Pack: Start by importing some utility functions directly into your project. Click to download import file.

Extract Chunks from a Knowledge Base Response:Click to Download

Send a Query to Mistral 7xB and Parse Answer (via together.ai): Click to Download

Implementing Function Code

A function in Voiceflow is defined by two main components: the function interface and the function code. The function interface outlines the inputs, outputs, and paths, while the function code dictates the behaviour of the function. In this document, we'll be covering the function code only.

Starting with the Main Function

Every function's code is wrapped in a main function that is the default export. This is the entry point Voiceflow executes when the function step is triggered.

export default async function main(args) {  
  // Your function logic goes here  
}

Processing Input Variables

The function accepts a single value, called the arguments object (args), which contains the data passed into the function by the function step. In this example, args.inputVars contains a single field called text:

const { text } = args.inputVars;
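If the interface defines several input variables, you can destructure them the same way (a sketch with hypothetical variable names):

const { firstName, lastName } = args.inputVars; // assumes the interface defines these inputs
const fullName = `${firstName} ${lastName}`;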

Performing Transformations

In the function, you may perform operations such as transforming text:

const uppercaseText = text.toUpperCase();

Returning Runtime Commands

The function concludes by returning an object containing runtime commands:

return {  
  outputVars: {  
    output: uppercaseText  
  },  
  next: {  
    path: 'success'  
  },  
  trace: [  
    {  
      type: 'text',  
      payload: {  
        message: `Converting ${text} to ${uppercaseText}`  
      }  
    }  
  ],  
}

The runtime commands include:

  • Output Variables Command: Assigns values to output variables.
  • Next Command: Directs the assistant to exit the function step through a specific port.
  • Trace Command: Generates traces that form part of the agent's response.
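Putting these pieces together, the complete text-transformation function looks like this (assuming the interface defines a text input variable, an output output variable, and a success path):

export default async function main(args) {
  // Read the input variable defined on the function interface
  const { text } = args.inputVars;

  // Transform the text
  const uppercaseText = text.toUpperCase();

  // Return runtime commands: set an output, choose a path, and emit a trace
  return {
    outputVars: {
      output: uppercaseText
    },
    next: {
      path: 'success'
    },
    trace: [
      {
        type: 'text',
        payload: {
          message: `Converting ${text} to ${uppercaseText}`
        }
      }
    ],
  };
}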

Making Network Requests

Voiceflow functions have access to a modified fetch API for making network requests. This enables functions to interact with third-party APIs or your own backend services.

Example: GET Request with the Fetch API
Here's how to make a GET request to retrieve data from an API:

export default async function main(args) {
  const response = await fetch(`https://cat-fact.herokuapp.com/facts`);
  const responseBody = response.json; // Voiceflow's fetch exposes the parsed body as a `json` property (see the Voiceflow Fetch API section below)

  // ... (process responseBody)
}

Mapping Data from Response

To map and process the data from the API response, use JavaScript array methods like map:

const facts = responseBody.map(fact => fact.text);

Creating Traces from Data

Create traces for each item you want to include in the assistant’s response:

return {  
  next: {  
    path: 'success'  
  },  
  trace: facts.map(text => ({  
    type: "text",  
    payload: {  
      message: text  
    }  
  }))  
}

The finished function should look like the example below. Don't forget to add a path named success to the function interface.
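export default async function main(args) {
  // Fetch a list of cat facts from the public API
  const response = await fetch(`https://cat-fact.herokuapp.com/facts`);
  const responseBody = response.json; // Voiceflow Fetch API: `json` is a property

  // Keep only the text of each fact
  const facts = responseBody.map(fact => fact.text);

  // Emit one text trace per fact, then exit through the 'success' path
  return {
    next: {
      path: 'success'
    },
    trace: facts.map(text => ({
      type: "text",
      payload: {
        message: text
      }
    }))
  };
}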

When you run this function within a Voiceflow project, the assistant will recite the fetched cat facts, then move on to the next step through the 'success' port. You can link this to a text step that, for example, could say "Done!" to signal the end of the interaction.

Specification

🚧

Node modules imports

Functions code does not fully support module imports, in either the CommonJS or ES Module format.

Function Code Specification

  • Written in JavaScript / ECMAScript.
  • Contains a default exported main function.
  • Accepts a single argument called the arguments object.
  • The arguments object contains a field called inputVars containing the input variable values passed by the step.
  • Returns runtime commands to dictate the assistant's actions.

Runtime Commands

RuntimeCommands is a JSON object that, when returned, specifies the behaviour of a function step. Three types of commands are supported:

  • Next Command: Dictates the path to follow after the function executes.
  • Output Variables Command: Sets the output variables with the values to be used later in the conversation.
  • Trace Command: Produces traces as part of the agent's response.

The schema for the runtime commands is given below as a TypeScript interface:

interface RuntimeCommands {
	next?: {
		path: string;
	};
	trace?: Trace[];
	outputVars?: Record<string, string | number | boolean>;
}

📘

Next command with a default port

If the function has no paths defined, then a default port is automatically generated. You do not need to send a next command to leave through the default port.
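For example, a function with no paths defined can return only output variables, and the conversation will still continue through the default port (a minimal sketch):

export default async function main(args) {
  const { text } = args.inputVars;

  // No next command: the step exits through the automatically generated default port
  return {
    outputVars: {
      output: text.trim()
    }
  };
}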

Supported Traces

Traces are response segments from an interaction with the assistant. Voiceflow supports various trace types, including text, visual content, cards, and more. Below are the TypeScript schemas for the supported trace types:

Available on all project types

interface VisualTrace {
  type: "visual",
  payload: {
    image: string;
  }
}

interface DebugTrace {
  type: "debug",
  payload: {
    message: string;
  }
}

Available on chat projects

interface TextTrace {
  type: "text";
  payload: {
    message: string;
  }
}

interface Button {
  name: string;
  payload: {
    actions: Array<{
      type: "open_url",
      url: string;
    }>;
  }
}

interface Card {
  imageUrl: string;
  title: string;
  description: {
    text: string;
  };
  buttons?: Array<Button>;
}

interface CardTrace {
  type: "cardV2",
  payload: Card;
}

interface CarouselTrace {
  type: "carousel",
  payload: {
    cards: Array<Card>;
  }
}
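For example, a trace command that renders a single card on a chat project could look like this (a sketch with placeholder content and URLs):

return {
  trace: [
    {
      type: "cardV2",
      payload: {
        imageUrl: "https://example.com/image.png", // placeholder image URL
        title: "Example card",
        description: {
          text: "A short description displayed under the title."
        },
        buttons: [
          {
            name: "Learn more",
            payload: {
              actions: [
                {
                  type: "open_url",
                  url: "https://example.com" // placeholder link
                }
              ]
            }
          }
        ]
      }
    }
  ]
};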

Available on voice projects

interface SpeakTrace {
  type: "speak";
  payload: {
    message: string;
  }
}

interface AudioTrace {
  type: "audio";
  payload: {
    src: string; // `src` must be base64 audio data encoded as a string
  }
}
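On voice projects the same pattern applies; for example, returning a spoken message (a minimal sketch):

return {
  trace: [
    {
      type: "speak",
      payload: {
        message: "Here is a cat fact for you."
      }
    }
  ]
};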

Voiceflow Fetch API

Functions code has access to a modified fetch API, called the Voiceflow Fetch API. It is mostly identical to the standard Fetch API, but there are some important differences.

For example, to perform a POST request:

  1. The first argument of fetch is the URL of the server
  2. The second argument is an options object supporting the standard fetch options such as method, headers, and body:

// Assuming `name` and `age` were read from args.inputVars
await fetch(
  `<YOUR-NGROK-URL-HERE>`,
  {
    method: 'POST',
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name,
      age
    })
  }
);

The main difference from the standard Fetch API is how you access the response body of a fetch request. In the standard Fetch API, you would use the .json() method. In the Voiceflow Fetch API, the response body is instead available through the `json` field.

// Standard Fetch API
const response = await fetch("https://someurl.com");
const responseBody = await response.json();

// Voiceflow Fetch API
const responseBody = (await fetch("https://someurl.com")).json;

Extended Fetch Options

To change how the response body is parsed, you may pass a third argument to fetch called the extended fetch options. For example, to parse the response as plain text instead, you would do the following:

const response = await fetch(
	"https://someurl.com", 
	requestInit, // your standard fetch options object
	{ parseType: 'text' }
);
const responseContent = response.text;

The type definition for the extended fetch options is given below:

export interface ExtendedFetchOptions {
    parseType?: 'arrayBuffer' | 'blob' | 'json' | 'text';
}

To retrieve the parsed result, read the corresponding property of the return value, like so:

const data = (await fetch(url, requestInit, { parseType: 'arrayBuffer' })).arrayBuffer;

const data = (await fetch(url, requestInit, { parseType: 'blob' })).blob;

const data = (await fetch(url, requestInit, { parseType: 'json' })).json;

const data = (await fetch(url, requestInit, { parseType: 'text' })).text;

Conclusion

By following the steps and specifications provided in this guide, you can implement robust functions within Voiceflow. These functions can transform user input, interact with APIs, and control the flow of the conversation, enhancing the capabilities of your Voiceflow assistant.

Happy coding, and we look forward to seeing what you build with Voiceflow!