3: The platform in action
Learn about harnessing the broader backend platform capabilities to connect the Reactor to external resources like third-party APIs
In Parts 1 and 2 you iterated on a fullstack chat app, using query and mutation functions to implement new business logic and the Convex React client to invoke those functions from the frontend.
So far, all the app's data and functions have been self-contained within the Convex platform: a pure, deterministic oasis where you enjoy end-to-end type-safety and transactional guarantees that your data will never be incorrect or inconsistent.
But what happens when you need to interact with the "real" world outside of Convex? How do you call external APIs, access data stored elsewhere, or perform any other "side effects" your specific app might need?
To find out, in this module you'll modify the chat app to integrate an AI chat agent powered by OpenAI's API! Along the way, you'll learn how to:
- Write Convex actions to connect the Reactor to arbitrary external resources
- Select the right runtime environment for an action and set needed env vars
- Access an API from a Convex action
- Run mutations and queries from actions to execute business logic
- Use the Convex scheduler to invoke an action from a mutation
Ready to get your GPT on? Let's go!
Before you begin: Get an OpenAI Account and API Key
The AI agent in the chat will be powered by the OpenAI API, which allows you to use OpenAI's state-of-the-art LLMs programmatically.
To access this API, you'll need to create a free OpenAI account and generate a new secret API key.
GPT API FTW!
Let's use the now-famous GPT language model to create an AI agent that will be able to answer users' questions from the chat. We'll use OpenAI's Chat Completions API to request "completions" (responses) to user input.
Connect your project to OpenAI
Install the OpenAI client
OpenAI provides an npm package for easy access to their APIs from JS/TS. Open a terminal in the convex-tour-chat root directory and install the openai package as a project dependency:
npm install openai
Set your API key
On the Convex dashboard (remember you can open it with npx convex dashboard), navigate to your deployment's Settings page. There you'll have the option to add new variables to your environment, which is the safest way to access a secret like an API key from your functions.
Create an environment variable named OPENAI_API_KEY and set its value to the secret key you generated for your OpenAI account.
Screenshot: find the Settings page and add an environment variable on the dashboard
Now your Convex project has everything it needs to access the API. Time to make it happen with an action function!
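Environment variables you add on the dashboard are exposed to your functions through process.env at runtime. Here's a minimal standalone sketch of the pattern (plain TypeScript, not Convex-specific; the readApiKey helper and sample value are illustrative, not part of any Convex API):

```typescript
// Minimal sketch: reading a secret from an environment map. The variable
// name must exactly match the one created on the dashboard.
function readApiKey(env: Record<string, string | undefined>): string {
  const apiKey = env.OPENAI_API_KEY;
  if (apiKey === undefined || apiKey === "") {
    // Failing fast gives a clearer error than a mysterious 401 from OpenAI
    throw new Error("OPENAI_API_KEY is not set for this deployment");
  }
  return apiKey;
}

// In a real Convex function you would pass process.env here
console.log(readApiKey({ OPENAI_API_KEY: "sk-example" })); // prints "sk-example"
```

Failing fast on a missing key at module load time makes misconfiguration obvious the first time the function runs.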
Write your first action
Fire up OpenAI
Remember that queries and mutations run deterministically, enabling transactional guarantees that keep your data consistent, correct, and automatically reactive. For this reason they cannot call third-party APIs. Actions are the escape hatch for interacting with the outside world from Convex.
Like queries and mutations, actions live in TypeScript modules within the convex/ directory in your project's root.
Create a module for your AI action
Create a new file convex/openai.ts where you'll import the openai client library you installed earlier and instantiate a client with the API key from your project's environment.
"use node";
import OpenAI from "openai";
// Initialize the OpenAI client with the given API key
const apiKey = process.env.OPENAI_API_KEY!;
const openai = new OpenAI({ apiKey });
Start up the development server with npm run dev, if it's not running already.
Get to node your runtime
What's that little "use node" doing at the start of the file?
By default, actions run in the same Convex Runtime as queries and mutations. This makes them fast to update and run, but the runtime currently implements a limited set of capabilities – not every npm package works in it.
For cases where you need libraries or features the default runtime doesn't support, Convex actions can be configured to run in a "traditional" Node runtime by placing them in a file that starts with the "use node" directive.
What are the differences between the Convex and Node runtimes for actions?
Read about the advantages and disadvantages of each runtime in detail here: Function Runtimes
A little less conversation, a little more action
OK, now you're ready to actually get your AI on!
Analogous to query() and mutation(), Convex provides an action() constructor that defines an action function, accepting an object that defines the function's args and handler.
Get ready for action
In convex/openai.ts, import the action() constructor and export a new action called chat that accepts a messageBody string as its argument. Similar to mutations, action handlers accept two arguments: an ActionContext ctx and an arguments object as defined in args.
Solution
"use node";

import OpenAI from "openai";
import { action } from "./_generated/server";
import { v } from "convex/values";

// Initialize the OpenAI client with the given API key
const apiKey = process.env.OPENAI_API_KEY!;
const openai = new OpenAI({ apiKey });

export const chat = action({
  args: {
    messageBody: v.string(),
  },
  handler: async (ctx, args) => {
    // TODO
  },
});
Great, now you just need to draw the rest of the owl!
In the action's handler, you'll use the openai client instantiated earlier to call the OpenAI Chat endpoint. Looking at the API documentation, it expects a language model name ('gpt-3.5-turbo' works for this app) and a messages array that gives the model the context of the chat to be completed.
Get GPT to complete the chat
Complete the handler function body to get a response from openai.chat.completions.create. Pass gpt-3.5-turbo as the model name, and in the messages array provide two messages: one 'system' message that tells GPT how you want it to respond, and one 'user' message that passes on the message content to respond to.
Then, grab the text content of GPT's answer from the response from OpenAI, which is a nested object of the form (simplified):
{
  choices: [{ message: { content: "This is the response text from me, GPT" } }]
}
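Because the content lives a few levels deep in the response (and may be absent), optional chaining plus a fallback keeps the extraction safe. Here's a standalone sketch using a hand-written sample object rather than a real API response (the ChatResponse type is a simplification for illustration, not the library's actual type):

```typescript
// Simplified shape of a chat completion response (illustrative only)
type ChatResponse = {
  choices: { message?: { content: string | null } }[];
};

// Sample payload standing in for what the API would return
const response: ChatResponse = {
  choices: [{ message: { content: "This is the response text from me, GPT" } }],
};

// Optional chaining guards against a missing choice, message, or content
const messageContent = response.choices[0]?.message?.content;
console.log(messageContent ?? "No content in response");
// prints "This is the response text from me, GPT"
```

The same ?. and ?? pattern works on the real response object, since the shape above mirrors the nesting the API documents.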
Solution
handler: async (ctx, args) => {
  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      {
        // Provide a 'system' message to give GPT context about how to respond
        role: "system",
        content:
          "You are a terse bot in a group chat responding to questions with 1-sentence answers.",
      },
      {
        // Pass on the chat user's message to GPT
        role: "user",
        content: args.messageBody,
      },
    ],
  });
  // Pull the message content out of the response
  const messageContent = response.choices[0].message?.content;
},
Almost there! Time to go back to the first argument of the action's handler function. When the action runs, Convex will pass an ActionContext as the first argument to the handler, which includes the utility method runMutation (along with runQuery and runAction, which you don't need right now). This gives actions the opportunity to invoke other Convex functions as needed.
Similar to the useMutation hook on the frontend, runMutation accepts a Convex function belonging to the api object Convex generates from your codebase.
Send GPT's response as a new message
Import the api into your openai.ts module:
import { api } from "./_generated/api";
In the action's handler, use ctx.runMutation to execute the existing api.messages.send mutation to add a new message to the chat, passing through the chat completion response you received from OpenAI (or a fallback string in case the response didn't have any content, for whatever reason):
Solution
Your convex/openai.ts module should now look something like this:
"use node";

import OpenAI from "openai";
import { action } from "./_generated/server";
import { api } from "./_generated/api";
import { v } from "convex/values";

// Initialize the OpenAI client with the given API key
const apiKey = process.env.OPENAI_API_KEY!;
const openai = new OpenAI({ apiKey });

export const chat = action({
  args: {
    messageBody: v.string(),
  },
  handler: async (ctx, args) => {
    const response = await openai.chat.completions.create({
      model: "gpt-3.5-turbo", // "gpt-4" also works, but is so slow!
      messages: [
        {
          // Provide a 'system' message to give GPT context about how to respond
          role: "system",
          content:
            "You are a terse bot in a group chat responding to questions with 1-sentence answers.",
        },
        {
          // Pass on the chat user's message to GPT
          role: "user",
          content: args.messageBody,
        },
      ],
    });
    // Pull the message content out of the response
    const messageContent = response.choices[0].message?.content;
    // Send GPT's response as a new message
    await ctx.runMutation(api.messages.send, {
      author: "ChatGPT",
      body: messageContent || "Sorry, I don't have an answer for that.",
    });
  },
});
Put your action into action
To make sure the chat action works as intended, you can test-run it in the Dashboard's "Functions" tab, or with the CLI using the convex run command (substituting in your own question, of course!):
npx convex run openai:chat '{"messageBody":"What is a serverless function?"}'
If all went well, you'll see a new document in the messages table, and in the chat itself!
From mutation to action
At this point your action still isn't connected to the UI, so there is no way to trigger it from the chat. Time to fix that!
Right on schedule
As mentioned earlier, queries and mutations are deterministic functions that always run in the Convex Runtime, whereas actions can be nondeterministic and run in the Node runtime. If a deterministic mutation called a nondeterministic action directly, that determinism would be lost!
However, the Convex scheduler provides a safe way for mutations to indirectly invoke other functions (whether queries, mutations, or actions). Using the scheduler, a mutation can "queue up" an action to run after the mutation has successfully executed, which allows Convex to make sure that the mutation did not encounter errors before trying to run the action.
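To make the "queue up after success" idea concrete, here's a toy analogy in plain TypeScript. This is not Convex's actual implementation (real scheduling is durable and honors the delay argument, which this sketch omits); it only illustrates the ordering guarantee: jobs scheduled during a mutation are buffered and dispatched only after the mutation body finishes without throwing.

```typescript
// Toy analogy, not real Convex internals: the scheduler records jobs during
// the mutation and dispatches them only after the mutation succeeds.
type Job = () => void;

async function runMutationWithScheduler(
  mutation: (schedule: (job: Job) => void) => Promise<void>
): Promise<void> {
  const queued: Job[] = [];
  const schedule = (job: Job) => queued.push(job);

  // If the mutation throws, we never reach the dispatch loop below,
  // so no scheduled job runs for a failed mutation.
  await mutation(schedule);

  for (const job of queued) job();
}

// Demo: the scheduled job runs only after the mutation body completes
runMutationWithScheduler(async (schedule) => {
  schedule(() => console.log("scheduled job ran"));
  console.log("mutation body finished");
});
// prints "mutation body finished", then "scheduled job ran"
```

This buffering is what lets a deterministic mutation safely trigger a nondeterministic action: the action only ever runs against a successfully committed mutation.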
Schedule the Chat action after the send mutation
In convex/messages.ts, edit the send mutation to schedule the openai:chat action after sending the new message. To do this, you'll need to use the scheduler object from the ctx MutationContext that the handler receives as its first argument.
Within the handler body, add a call to ctx.scheduler.runAfter to run the api.openai.chat action after the mutation has completed. The asynchronous runAfter method takes 3 arguments:
- the duration in milliseconds the scheduler should wait before running the scheduled function (0 milliseconds makes sense in this case)
- the function to schedule
- an arguments object to pass through to the scheduled function (in this case, the message body)
You probably don't want the AI agent to respond to every message in the chat, so wrap the scheduled action in a conditional so that it only responds to messages starting with @gpt, and check the author so it won't respond to itself.
Solution
import { api } from "./_generated/api";
// ...

export const send = mutation({
  args: { body: v.string(), author: v.string() },
  handler: async (ctx, args) => {
    const { body, author } = args;
    // Send a new message.
    await ctx.db.insert("messages", { body, author });
    if (body.startsWith("@gpt") && author !== "ChatGPT") {
      // Schedule the chat action to run immediately
      await ctx.scheduler.runAfter(0, api.openai.chat, {
        messageBody: body,
      });
    }
  },
});
Enjoy your new GPT BFF!
Your AI chat agent is now ready to go! Try chatting with it using the @gpt tag.
Recap
- Convex actions let you access arbitrary external resources, such as third-party libraries and APIs
- Environment Variables can be added to your Convex project to save secrets such as API keys
- Unlike queries and mutations, which always run in the Convex Runtime, actions can optionally run in Node using the "use node" directive
- Deterministic mutation functions can't call actions directly, but they can indirectly invoke them using the scheduler
Go forth and Convex!
You've now completed all 3 parts of the Convex Tour, and built an AI-enabled chat app in the process - amazing work!
But we hope this is just the beginning of your Convex journey, so on the next page we've collected some resources you might want to explore next. Choose your own adventure!