
Log Streams

Log streams enable streaming of log events from your Convex deployment to supported destinations, such as Datadog or a custom webhook.

Convex deployments generate log events in a variety of ways, from console.log calls within Convex functions to audit trails of configuration changes in your deployment. By default, all logs are visible on your Convex deployment's Logs page, and function-generated logs can also be viewed in your browser console or via the Convex CLI. These provide a quick and easy way to explore your most recent logs, while log streams support more complex querying and storage use cases.

Currently, log streams are only available to Pro users.

Log Stream Integrations are in beta

Log Stream Integrations are currently a beta feature. If you have feedback or feature requests, let us know on Discord!

Configuring log streams

To configure an integration, navigate to the Deployment Settings in the Dashboard and open the "Integrations" tab in the sidebar. This page lists your configured integrations, their current health status, and the other integrations available to be configured. To configure an integration, click on its card and follow the setup directions.


We currently support the following log streams, with plans to support many more:

Datadog

Configuring a Datadog log stream requires specifying:

  • The site location of your Datadog deployment
  • A Datadog API key
  • A comma-separated list of tags that will be passed via the ddtags field in all payloads sent to Datadog. This can be used to include any other metadata useful for querying or categorizing the Convex logs ingested by your Datadog deployment; see the sketch after this list.
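
For example, tags configured as env:prod and team:backend (hypothetical values) are joined into a single comma-separated string in the ddtags field of every payload. Here is a rough TypeScript illustration of just that field; the surrounding payload shape is determined by Convex and simplified here:

// Illustrative sketch only: shows how configured tags end up in ddtags.
const tags = ["env:prod", "team:backend"]; // your configured tags
const payload = {
  ddtags: tags.join(","), // "env:prod,team:backend"
  message: "[LOG] 'My log message'",
};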


Axiom

Configuring an Axiom log stream requires specifying:

  • The name of your Axiom dataset
  • An Axiom API key
  • An optional list of attributes and their values to be included in all log events sent to Axiom. These are sent via the attributes field in the Ingest API.

Webhook

A webhook log stream is the simplest and most generic stream: it pipes logs via POST requests to any URL you configure. The only parameter required to set up this stream is the desired webhook URL.
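
As a sketch of the receiving side, here is a minimal Node.js endpoint written in TypeScript. It assumes each POST body is a JSON payload of log events and that a 200 response acknowledges receipt; adapt the parsing and handling to your own pipeline:

import http from "node:http";

// Minimal webhook receiver sketch: accepts POSTed JSON log events.
http
  .createServer((req, res) => {
    if (req.method !== "POST") {
      res.writeHead(405).end();
      return;
    }
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      try {
        const events = JSON.parse(body);
        console.log("received log events:", events);
        res.writeHead(200).end(); // acknowledge delivery
      } catch {
        res.writeHead(400).end(); // reject malformed payloads
      }
    });
  })
  .listen(8080);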

Log event data model (beta)

Log events have a well-defined JSON schema that allows building complex, type-safe pipelines for ingesting log events.

This data model is currently in beta, so it is subject to change.

System fields

System fields are reserved fields that are included on log events and prefixed with an underscore.

All log events include the following system fields (a TypeScript sketch follows the list):

  • _topic: string that categorizes a log event by its internal source
  • _timestamp: Unix epoch timestamp in milliseconds, represented as an integer
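
In a TypeScript pipeline, these shared fields could be modeled as a base interface that the topic-specific schemas below extend (a sketch, not an official type definition):

// Base shape shared by all log events; topic-specific events extend it.
interface BaseLogEvent {
  _topic: string; // categorizes the event by its internal source
  _timestamp: number; // Unix epoch milliseconds, as an integer
}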

Log sources

This section outlines the source and data model of all log events.

console logs

Convex function logs via the console API.

Schema:

  • _topic = "_console"
  • _timestamp = Unix epoch timestamp in milliseconds
  • _functionType = "query" | "mutation" | "action" | "httpAction"
  • _functionPath =
    • If this is an HTTP action, this is a string of the HTTP method and URL pathname, e.g. POST /my_endpoint
    • Otherwise, this is the path to the function within the convex/ directory, including an optional module export identifier, e.g. myDir/myFile:myFunction.
  • _functionCached = true | false. This field is only set if _functionType = "query" and indicates whether this log event came from a cached function execution.
  • message = payload string of the arguments passed to the console API
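
Sketched as a TypeScript type, extending the BaseLogEvent interface from the system fields section (the field names follow the schema above; the type itself is illustrative):

// Console log event shape, per the schema above.
type FunctionType = "query" | "mutation" | "action" | "httpAction";

interface ConsoleLogEvent extends BaseLogEvent {
  _topic: "_console";
  _functionType: FunctionType;
  _functionPath: string; // e.g. "myDir/myFile:myFunction" or "POST /my_endpoint"
  _functionCached?: boolean; // only present when _functionType is "query"
  message: string; // stringified console arguments
}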

Example query log event:

{
  "_topic": "_console",
  "_timestamp": 1695066350531,
  "_functionType": "query",
  "_functionPath": "myDir/myFile",
  "_functionCached": true,
  "message": "[LOG] 'My log message'"
}

Function execution record logs

Each function execution emits a record of its outcome and resource usage.

Schema:

  • _topic = "_execution_record"
  • _timestamp = Unix epoch timestamp in milliseconds
  • _functionType = "query" | "mutation" | "action" | "httpAction"
  • _functionPath = path to the function within the convex/ directory, including the module export identifier
  • _functionCached = true | false. This field is only set if _functionType = "query" and indicates whether this log event came from a cached function execution.
  • status = "success" | "failure"
  • reason = error message from function. Only set if status = "failure"
  • executionTimeMs = length of execution of this function in milliseconds
  • databaseReadBytes = the database read bandwidth used by this function in bytes
  • databaseWriteBytes = the database write bandwidth used by this function in bytes
  • storageReadBytes = the file storage read bandwidth this function used in bytes
  • storageWriteBytes = the file storage write bandwidth this function used in bytes
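
Sketched as a TypeScript type (reusing BaseLogEvent and FunctionType from the earlier sketches), together with a hypothetical consumer-side helper that pulls out failures:

// Execution record event shape, per the schema above. The bandwidth
// fields are marked optional because some events omit them, as in the
// HTTP action example below.
interface ExecutionRecordEvent extends BaseLogEvent {
  _topic: "_execution_record";
  _functionType: FunctionType;
  _functionPath: string;
  _functionCached?: boolean; // only present when _functionType is "query"
  status: "success" | "failure";
  reason?: string; // error message; only set when status is "failure"
  executionTimeMs: number;
  databaseReadBytes?: number;
  databaseWriteBytes?: number;
  storageReadBytes?: number;
  storageWriteBytes?: number;
}

// Hypothetical helper: collect failed executions from a batch of events.
function failedExecutions(events: ExecutionRecordEvent[]): ExecutionRecordEvent[] {
  return events.filter((event) => event.status === "failure");
}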

Example execution record log from an HTTP action:

{
  "_topic": "_execution_record",
  "_timestamp": 1695066350531,
  "_functionType": "httpAction",
  "_functionPath": "POST /sendImage",
  "status": "failure",
  "reason": "Unexpected Error: Some error message\n\n at ....",
  "executionTimeMs": 73
}

Audit trail logs

Audit logs of deployment events.

Schema:

  • _topic = "_audit_log"
  • _timestamp = Unix epoch timestamp in milliseconds
  • action = "create_environment_variable" | "update_environment_variable" | "delete_environment_variable" | "replace_environment_variable" | "push_config" | "build_indexes" | "change_deployment_state"
  • actionMetadata = object whose fields depend on the value of the action field.

Example push_config audit log:

{
  "_topic": "_audit_log",
  "_timestamp": 1695066350531,
  "action": "push_config",
  "actionMetadata": {
    "modules": {
      "added": ["ffmpeg.js", "fetch.js", "test.js"],
      "removed": ["removed.js"]
    }
  }
}
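
Because actionMetadata varies by action, a consumer might model audit events as a TypeScript discriminated union on the action field. Here is a sketch covering only the push_config branch, based on the example above; the metadata shapes for the other actions are not shown:

// One branch of a discriminated union over audit log actions; other
// actions carry differently shaped actionMetadata.
interface PushConfigAuditLogEvent extends BaseLogEvent {
  _topic: "_audit_log";
  action: "push_config";
  actionMetadata: {
    modules: {
      added: string[];
      removed: string[];
    };
  };
}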

Verification logs

Internal logging events used to verify access to a log stream.

Schema:

  • _topic = "_verification"
  • _timestamp = Unix epoch timestamp in milliseconds.
  • message = "Convex connection test"

Guarantees

Log streams provide a best-effort delivery guarantee for log events. Events are buffered in memory and sent out in batches to your deployment's configured streams. This means that logs can be dropped if ingestion throughput is too high. Similarly, due to network retries, it is possible for a log event to be duplicated in a log stream.
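
If your pipeline needs uniqueness, consider deduplicating on the consumer side. A naive in-memory sketch, keyed on each event's full JSON serialization since log events carry no unique identifier (this assumes consistent field ordering in the payloads you receive):

// Naive deduplication sketch: keys each event by its serialized form.
const seen = new Set<string>();

function dedupe<T>(events: T[]): T[] {
  return events.filter((event) => {
    const key = JSON.stringify(event);
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}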

That's it! Your logs are now configured to stream out. If there is a log streaming destination that you would like to see supported, please let us know!