
Manage Log Streaming

Learn how to enable and configure Log Streaming in your Section project.


Log streaming on Section is a standard unified logging layer built on managed Fluentd (a Cloud Native Computing Foundation project) infrastructure.

Section supports, among others, the following Fluentd destination endpoints:

  • AWS S3
  • Datadog
  • Elastic Cloud
  • Elasticsearch
  • Google Cloud
  • Grafana Cloud Loki
  • Logtail
  • New Relic
  • Rsyslog
  • Splunk
  • Sumo Logic

Log streaming is enabled by adding a single configuration file to your project's git repository.

High Level Overview

Here’s an overview of how to initiate log streaming:

  1. Review your logging provider's documentation (e.g. Datadog or Splunk) and obtain the required API key or credentials.
  2. Clone your Section Project git repository to your local computer.
  3. Add a new file to the root of your repository named fluent-match.conf.
  4. Paste the example Fluentd match configuration from below into the new file.
  5. Replace the text INSERT_YOUR_LOGGING_PROVIDER_KEY_HERE in the file with your Provider API key.
  6. Save the file.
  7. Commit and push the changes.
  8. Logs will begin streaming within 30-60 seconds.
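The steps above can be sketched as shell commands. This is demonstrated in a scratch local repository so it can be run anywhere; in practice you would run the same commands inside your cloned Section project repository and finish with `git push`. The Datadog plugin is used here purely as an illustrative file body.

```shell
# Steps 2-7 above, demonstrated in a scratch git repository.
# In your real Section project repository, finish with `git push`.
git init -q log-streaming-demo
cd log-streaming-demo
git config user.email "you@example.com"
git config user.name "Your Name"

# Steps 3-6: create fluent-match.conf at the repository root.
# Substitute your provider's configuration and replace the placeholder key.
cat > fluent-match.conf <<'EOF'
<match **>
  @type datadog
  api_key INSERT_YOUR_LOGGING_PROVIDER_KEY_HERE
</match>
EOF

# Step 7: commit the file (then push in your real repository).
git add fluent-match.conf
git commit -q -m "Enable log streaming"
```

Once the commit is pushed to Section, the platform picks up the file automatically; no further deploy step is needed.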

See Enable Log Streaming below for provider-specific examples of the configuration file (fluent-match.conf).

Your Section project logs should begin to appear in your log streaming destination within a few minutes.

Enable Log Streaming

Follow the steps above to add a fluent-match.conf file to your Section project git repository, using the example contents below for your chosen destination.


The fluent-match.conf file cannot contain the @include directive, and must be smaller than 1 Megabyte.
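Both constraints can be checked locally before pushing. The following is a minimal sketch (the `check_fluent_match` helper is hypothetical, not part of Section's tooling):

```shell
# Hypothetical pre-push sanity check for the constraints above:
# no @include directive, and a file size under 1 MB (1048576 bytes).
check_fluent_match() {
  f="$1"
  if grep -q '@include' "$f"; then
    echo "FAIL: $f contains an @include directive"
    return 1
  fi
  if [ "$(wc -c < "$f")" -ge 1048576 ]; then
    echo "FAIL: $f is 1 MB or larger"
    return 1
  fi
  echo "OK: $f passes both checks"
}

# Example run against a small, include-free file.
printf '<match **>\n  @type stdout\n</match>\n' > /tmp/fluent-match.conf
check_fluent_match /tmp/fluent-match.conf
# prints "OK: /tmp/fluent-match.conf passes both checks"
```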


AWS S3

Ensure your S3 bucket already exists, and create an AWS IAM user with an API access key that has at least the s3:PutObject action on the bucket contents. An example IAM policy is available here
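The linked example policy is not reproduced here, but a minimal IAM policy granting s3:PutObject on the bucket contents looks like the following (YOUR_BUCKET_NAME is a placeholder for your actual bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}
```

Scoping the resource to `/*` (the bucket contents) rather than the bucket itself is what s3:PutObject requires.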

This example uploads one log file per PoP every 5 minutes; each log file is named with a timestamp followed by a unique identifier.

<match **>
  @type s3

  # Credentials for the IAM user created above - replace the placeholders
  aws_key_id INSERT_YOUR_AWS_ACCESS_KEY_ID_HERE
  aws_sec_key INSERT_YOUR_AWS_SECRET_ACCESS_KEY_HERE
  s3_bucket INSERT_YOUR_BUCKET_NAME_HERE
  s3_region INSERT_YOUR_BUCKET_REGION_HERE

  check_bucket false
  check_object false
  path logs/ # adjust as required
  # Default s3_object_key_format is "%{path}%{time_slice}_%{index}.%{file_extension}"
  s3_object_key_format %{path}%{time_slice}_%{uuid_flush}.%{file_extension}
  store_as gzip # or "json" for uncompressed logs
  time_slice_format %Y%m%d%H%M

  <buffer time>
    @type file
    path /var/log/fluent/s3
    timekey 300 # 5 minute partition
    timekey_wait 30s
    timekey_use_utc true
    chunk_limit_size 256m
  </buffer>
</match>


Datadog

Follow the steps below to enable Log Streaming to Datadog:

  1. Follow the Datadog documentation to create a Datadog API key to use with your Section project.
  2. Clone your Section Project git repository to your local computer.
  3. Add a new file to the root of your repository named fluent-match.conf.
  4. Paste the example Fluentd match configuration from below into the new file.
  5. Replace the text INSERT_YOUR_DATADOG_API_KEY_HERE in the file with your Datadog API key created above.
  6. Save the file.
  7. Add the fluent-match.conf file to your git repository, commit the change, and push the commits to Section.

Example Datadog configuration for Fluentd:

<match **>
  @type datadog
  @id awesome_agent
  api_key INSERT_YOUR_DATADOG_API_KEY_HERE

  <buffer>
    @type memory
    flush_thread_count 4
    flush_interval 3s
    chunk_limit_size 5m
    chunk_limit_records 500
  </buffer>
</match>

Grafana Cloud Loki

The Grafana Cloud URL and credentials are available in the Loki Stack Details page. Your Grafana Cloud API Key should have the MetricsPublisher role.

Note: use either <label>...</label> or extra_labels to set at least one label. (Docs)

<match **>
  @type loki
  # URL and credentials from your Loki Stack Details page - replace the placeholders
  url INSERT_YOUR_GRAFANA_CLOUD_LOKI_URL_HERE
  username INSERT_YOUR_GRAFANA_CLOUD_USERNAME_HERE
  password INSERT_YOUR_GRAFANA_CLOUD_API_KEY_HERE
  extra_labels {"env":"dev"}
  flush_interval 10s
  flush_at_shutdown true
  buffer_chunk_limit 1m
</match>

Google Cloud

Google Cloud requires a separate credentials file to be added to your repository. Add the fluent-match.conf as follows to send all your logs to Google Cloud.

<match **>
  @type google_cloud
  use_metadata_service false
  vm_id none
  zone none
</match>

Then add a second file alongside it named fluent-google-cloud.json. See Creating a service account in Google's documentation for how to generate this file, then download it from the console and add it to the root of your Section repository as fluent-google-cloud.json.


Logtail

Make sure you replace YOUR_LOGTAIL_SOURCE_TOKEN below with the source token from your Logtail source.

<match *>
  @type logtail
  @id output_logtail
  source_token YOUR_LOGTAIL_SOURCE_TOKEN
  flush_interval 2 # in seconds
</match>

New Relic

Replace YOUR_LICENSE_KEY below with your New Relic license key. For more details, see New Relic's Fluentd log forwarding documentation.

Note: The New Relic plugin for Fluentd overwrites the message field with the contents of the log field before sending the data to New Relic, therefore the record_transformer filter is required for logs to be shipped appropriately.

<filter **>
  @type record_transformer
  enable_ruby true
  <record>
    offset ${record["log"]["offset"]}
  </record>
  remove_keys log
</filter>

<match **>
  @type newrelic
  license_key YOUR_LICENSE_KEY
</match>

Sumo Logic

Your SUMOLOGIC_COLLECTOR_URL can be found by going to your collection under App Catalog then Collection and clicking on the Show URL link on the collection.

<match **>
  @type sumologic
  # Replace with the Show URL value from your collection
  endpoint SUMOLOGIC_COLLECTOR_URL
  log_format json
  open_timeout 10
</match>

Disable Log Streaming

To disable Log Streaming, delete the fluent-match.conf file from your Section Project git repository, then commit and push the change.
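As a sketch, again demonstrated in a scratch repository so it is self-contained; in your real project repository only the last two commands plus `git push` are needed:

```shell
# Set up a scratch repository that already has log streaming enabled.
git init -q disable-demo
cd disable-demo
git config user.email "you@example.com"
git config user.name "Your Name"
printf '<match **>\n  @type stdout\n</match>\n' > fluent-match.conf
git add fluent-match.conf
git commit -q -m "Enable log streaming"

# Disable log streaming: remove the file and commit the deletion.
git rm -q fluent-match.conf
git commit -q -m "Disable log streaming"
```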