API Key Rotation Options for Azure

To ensure uninterrupted access to the API, we recommend renewing your key as soon as possible. You can easily renew your key by following these steps:

Option 1 - Automate API Key Rotation

This option ensures that your API key is automatically rotated without any manual intervention, guaranteeing that you always have a valid API key.

We recommend this integration pattern for automating API key rotation for consumer applications hosted in an Azure environment.

API Key Rotation Options for AWS

To ensure uninterrupted access to the API, we recommend renewing your key as soon as possible. You can easily renew your key by following these steps:

Option 1 - Automate API Key Rotation

This option ensures that your API key is automatically rotated without any manual intervention, guaranteeing that you always have a valid API key.

We recommend this integration pattern for automating API key rotation for consumer applications hosted in an AWS environment.

Publisher End to End Journey

Publishing an API involves making it available in a developer portal, ensuring that its endpoints are stable and reliable for other applications to use. As an API publisher, you are responsible for monitoring and managing daily API usage and handling various administrative tasks related to your APIs.

By following the steps below, you can ensure a smooth and efficient process for publishing your API, making it accessible and reliable for developers and applications.

Note: Please ensure you are logged in to Anywhere Confluence before accessing the links below.

API Key Rotation User Guide

Local Machine
  • AWS API Key Rotator
  • Azure API Key Rotator

Datadog Integration and best practices

Getting Started

This blog provides a high-level overview of the steps and best practices for integrating your product or application with Datadog.

Here are the steps for integration:

  • Set up the integration with Datadog - please refer to Datadog Integration with AWS for details.

  • Adjust Datadog defaults and set up logs - you are free to choose any format for logging; please refer to Log Structuring and Best Practices for details. This is the most critical section.

  • Set up dashboards, monitors, and alerts - please refer to Dashboards, Monitors and Alerts for details.

  • Integrate your dashboards and alerts with your application/product - please refer to the Integrate your dashboard and alert sections to learn about multiple ways of integrating dashboards with your application. You can also publish alerts to Teams/Slack channels or create ServiceNow tickets as needed by your application.

Datadog Integration with AWS

Please reach out to Kang Ko (kang.ko@anywhere.re) to set up the integration.

PLEASE NOTE - Datadog costs can be easily controlled by adjusting defaults, reducing noise, and logging business-process errors. A good practice is to send all data to Datadog but use only the logs needed to support business-process monitoring, to keep costs low.

Datadog’s Amazon Web Services integration collects logs, events, and all metrics from CloudWatch for over 90 AWS services.

Different Integration Ways

Before analyzing each technique, let's look at how each method works.

Single-click integration (logs and metrics)

If you are looking to get started quickly, this integration configures your account with the Lambda Forwarder and metric polling. See those methods below for details on what the 1-click integration sets up.

Lambda Forwarder (logs)

Datadog Forwarder is an AWS Lambda function that ships logs, custom metrics, and traces from your environment to Datadog. The Forwarder can:

  • Forward CloudWatch, ELB, S3, CloudTrail, VPC, SNS, and CloudFront logs to Datadog.

  • Forward S3 events to Datadog.

  • Forward Kinesis data stream events to Datadog (only CloudWatch logs are supported).

  • Forward custom metrics from AWS Lambda functions using CloudWatch logs.

  • Forward traces from AWS Lambda functions using CloudWatch logs.

  • Generate and submit enhanced Lambda metrics (aws.lambda.enhanced.*) parsed from the AWS REPORT log: duration, billed_duration, max_memory_used, timeouts, out_of_memory, and estimated_cost.

For more information on setting up the Datadog Forwarder, please refer to this link.

Kinesis Firehose (logs and metrics)

Within your AWS account, you configure a Kinesis Firehose delivery stream to relay to Datadog. You then subscribe the Firehose to CloudWatch log groups and CloudWatch metrics. The subscriptions forward logs and metrics to the Firehose. The Firehose buffers data until a time limit expires or the buffer exceeds a specified size, at which point the Firehose forwards the data to Datadog.

In this architecture, Amazon Kinesis Data Streams receives data from an on-premises application running the KPL. The data is transmitted over the VPN, reaches the Kinesis data stream through a VPC endpoint, and then feeds the Amazon Kinesis Data Firehose flow that delivers it to Datadog.

Metric polling (metrics)

After configuring an IAM Role in your AWS account, you configure Datadog to crawl your account for Cloudwatch Metrics. The Datadog crawler runs every ~10 minutes and extracts new data points from Cloudwatch.

Datadog Agent (logs and metrics)

The Datadog Agent is a daemon installed on an EC2 instance or as a container in a Docker cluster. The Agent collects and receives logs and metrics from its machine (and cluster, if desired) and sends them to Datadog.

Firelens

FireLens is an AWS logging driver that allows you to route logs from Docker containers running on ECS. This option requires running a sidecar, which is an additional Docker container running alongside your main container.

Comparison

The comparison below covers the granularity of the data and the metadata available to filter out unwanted or noisy logs and metrics, as well as how easy each method is to configure for containers and serverless.

Method | Granularity | Containers | Serverless
Lambda Forwarder (Automatic) | Poor | Easy | Easy
Lambda Forwarder (Manual) | Great | Medium | Medium
Kinesis Firehose | Great | Medium | Medium
Datadog Agent | Great | Hard | N/A
Metric polling | Poor | Easy | Easy
Firelens | Great | Medium | N/A

Which option is best?

The one-click integration is excellent if you want to try out Datadog but lacks control and will become a burden to your team and wallet when you add environments and AWS accounts.

For most teams, we recommend a combination of two options to achieve full observability: the Datadog Agent and Kinesis Firehose.

If you are running EC2 workloads, you need to install the Datadog Agent. However, if you are running an ECS or EKS cluster, it is best to run the Datadog Agent as a daemon container that runs on every worker node.

For everything else, we prefer the Kinesis option to gain control over noise and cost.

Adjust Datadog defaults and Best Practices

Use Ingestion Control to set Datadog defaults. Managing these defaults will significantly reduce your Datadog costs.

Ingestion Control Best Practices

Ingestion Control

Ingestion controls affect what traces are sent by your applications to Datadog. APM metrics are always calculated based on all traces and are not impacted by ingestion controls.

The Ingestion Control page provides visibility at the Agent and tracing libraries level into the ingestion configuration of your applications and services. From the ingestion control configuration page, you can:

  • Gain visibility on your service-level ingestion configuration and adjust trace sampling rates for high throughput services.

  • Understand which ingestion mechanisms are responsible for sampling most of your traces.

  • Investigate and act on potential ingestion configuration issues, such as limited CPU or RAM resources for the Agent.

Exclusion Rules

By default, log indexes have no exclusion filter: all logs matching the index filter are indexed.

Because your logs are not all equally valuable, exclusion filters control which logs flowing into your index should be removed. Excluded logs are discarded from indexes, but they still flow through Live Tail and can be used to generate metrics and be archived.

To add an exclusion filter:

  1. Navigate to Log Indexes.

  2. Expand the index for which you want to add an exclusion filter.

  3. Click Add an Exclusion Filter.

Exclusion filters are defined by a query, a sampling rule, and an active/inactive toggle:

  • The default query is *, meaning all logs flowing into the index would be excluded. Scope the exclusion filter down to only a subset of logs with a log query.

Example: the query below excludes logs with a 200 status code so that only logs for other status codes are indexed:
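For illustration, assuming a standard web-access-log facet (the attribute name depends on the facets in your own logs):

@http.status_code:200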

  • The default sampling rule is Exclude 100% of logs matching the query. Adjust the sampling rate from 0% to 100%, and decide whether the sampling rate applies to individual logs or to groups of logs defined by the unique values of any attribute.

  • The default toggle is active, meaning logs flowing into the index are discarded according to the exclusion filter configuration. Toggle it to inactive to ignore this exclusion filter for new logs flowing into the index.

Note: Logs are only processed by the first active exclusion filter they match. If a log matches an exclusion filter (even if the log is not sampled out), it ignores all following exclusion filters in the sequence.

Use drag and drop on the list of exclusion filters to reorder them according to your use case.

Switch off, switch on

You might not need your DEBUG logs until your platform undergoes an incident or you want to carefully observe the deployment of a critical version of your application. Set up a 100% exclusion filter on status:DEBUG, and toggle it on and off from the Datadog UI or through the API when required.
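As a hedged sketch of the API route: exclusion filters live on the index configuration, so they can be toggled through the Logs Indexes API. The field names below are written from memory of the public API and should be verified against the current reference; the index name "main" is an example, and this update replaces the full index configuration, so include its existing fields.

# Enable (or set is_enabled to false to disable) a DEBUG exclusion filter on the "main" index
curl -X PUT "https://api.datadoghq.com/api/v1/logs/config/indexes/main" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: ${DD_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
-d '{
  "filter": {"query": ""},
  "exclusion_filters": [
    {
      "name": "exclude-debug",
      "is_enabled": true,
      "filter": {"query": "status:DEBUG", "sample_rate": 1.0}
    }
  ]
}'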

Summary across all environments

Get an overview of the total ingested data over the past hour, and an estimation of your monthly usage against your monthly allocation, calculated with the active APM infrastructure (hosts, Fargate tasks, and serverless functions).

If the monthly usage is under 100%, the projected ingested data fits in your monthly allotment. A monthly usage value over 100% means that the monthly ingested data is projected to be over your monthly allotment.

Managing ingestion for all services at the Agent level

Before going into your services' ingestion configuration in the tracing libraries, note that a share of the ingested volume is controllable from the Datadog Agent.

You can control three ingestion mechanisms by configuring sampling in the Datadog Agent:

  • Head-based Sampling: When no sampling rules are set for a service, the Datadog Agent automatically computes sampling rates to be applied in libraries, targeting 10 traces per second per Agent. The setting DD_APM_MAX_TPS allows you to change the target number of traces per second.

  • Error Spans Sampling: For traces not caught by head-based sampling, the Datadog Agent catches local error traces up to 10 traces per second per Agent. The setting DD_APM_ERROR_TPS allows you to change the target number of traces per second.

  • Rare Spans Sampling: For traces not caught by head-based sampling, the Datadog Agent catches local rare traces up to 5 traces per second per Agent. The setting DD_APM_DISABLE_RARE_SAMPLER allows you to disable the collection of rare traces.
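For example, these settings can be supplied as environment variables on the Agent. The variable names are the ones listed above; the values below are purely illustrative.

DD_APM_MAX_TPS=20                  # raise the head-based sampling target from the default 10 traces/s per Agent
DD_APM_ERROR_TPS=5                 # lower the target for locally caught error traces
DD_APM_DISABLE_RARE_SAMPLER=true   # turn off collection of rare traces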

Control in the application

Do not overfeed logs into Datadog, as you may end up struggling to analyze and monitor them. You can segregate mandatory log attributes from optional log attributes. When a service has attributes with large values, you can define OPTIN services, which let you select only the particular service attributes that are important to log. Only when a service has OPTIN set to true are the related large attribute values logged.

This way, only the required content is in place, and the logs can be analyzed in an optimal way.

Filter logs

To send only a specific subset of logs to Datadog, use the log_processing_rules parameter in your configuration file with the exclude_at_match or include_at_match type.

Exclude at match

PARAMETER | DESCRIPTION
exclude_at_match | If the specified pattern is contained in the message, the log is excluded and not sent to Datadog.

For example, to filter OUT logs that contain a Datadog email address, use the following log_processing_rules:

Configuration file:

logs:
  - type: file
    path: /my/test/file.log
    service: cardpayment
    source: java
    log_processing_rules:
    - type: exclude_at_match
      name: exclude_datadoghq_users
      ## Regexp can be anything
      pattern: \w+@datadoghq.com

Include at match

PARAMETER | DESCRIPTION
include_at_match | Only logs with a message that includes the specified pattern are sent to Datadog. If multiple include_at_match rules are defined, all rule patterns must match for the log to be included.

For example, to filter IN logs that contain a Datadog email address, use the following log_processing_rules:

Configuration file :

logs:
  - type: file
    path: /my/test/file.log
    service: cardpayment
    source: java
    log_processing_rules:
    - type: include_at_match
      name: include_datadoghq_users
      ## Regexp can be anything
      pattern: \w+@datadoghq.com

If you want to match one or more patterns you must define them in a single expression:

logs:
  - type: file
    path: /my/test/file.log
    service: cardpayment
    source: java
    log_processing_rules:
    - type: include_at_match
      name: include_datadoghq_users
      pattern: abc|123

If the patterns are too long to fit legibly on a single line you can break them into multiple lines:

logs:
  - type: file
    path: /my/test/file.log
    service: cardpayment
    source: java
    log_processing_rules:
    - type: include_at_match
      name: include_datadoghq_users
      pattern: "abc\
|123\
|\\w+@datadoghq.com"

Log Structuring and Best Practices

Logs are one of the most valuable assets a developer has. The goal of structured logging is to bring a more defined format and richer detail to your logging. So, let's look at some of the ways to make logs structured and efficient.

Users can log data in any format of their choice and use Grok parsing to extract the information they need for product/application monitoring. Both JSON and Grok parsing can be used as needed.

JSON Structure

If your application logs are in JSON format, Datadog automatically parses the log messages to extract log attributes, as in the example below:
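A hypothetical JSON log line (the field names are examples, not prescribed by Datadog); each key becomes a log attribute such as level, service, or http.status_code:

{
  "timestamp": "2019-11-19T14:37:58Z",
  "level": "INFO",
  "service": "payment",
  "message": "Hello World",
  "http": { "status_code": 200, "method": "GET" }
}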

For other formats, Datadog allows you to enrich your logs with the help of Grok Parser. 

XML Structure

If your logs are in XML format, the XML parser transforms messages into JSON.

Log:

<book category="CHILDREN">
  <title lang="en">Harry Potter</title>
  <author>J K. Rowling</author>
  <year>2005</year>
</book>

With the following parsing rule:

Rule:

rule %{data::xml}

Result:

{
  "book": {
    "year": "2005",
    "author": "J K. Rowling",
    "category": "CHILDREN",
    "title": {
      "lang": "en",
      "value": "Harry Potter"
    }
  }
}

Notes:

  • If the XML contains tags that have both an attribute and a string value between the two tags, a value attribute is generated.

    • For example: <title lang="en">Harry Potter</title> is converted to {"title": {"lang": "en", "value": "Harry Potter" } }

  • Repeated tags are automatically converted to arrays.

    • For example: <bookstore><book>Harry Potter</book><book>Everyday Italian</book></bookstore> is converted to { "bookstore": { "book": [ "Harry Potter", "Everyday Italian" ] } }

ASCII control characters

If your logs contain ASCII control characters, they are serialized upon ingestion. These can be handled by explicitly escaping the serialized value within your grok parser.


GROK Parser

The Grok syntax provides an easier way to parse logs than pure regular expressions. The Grok Parser enables you to extract attributes from semi-structured text messages.

Grok comes with reusable patterns to parse integers, IP addresses, hostnames, etc. These values must be sent into the grok parser as strings.

You can write parsing rules with the %{MATCHER:EXTRACT:FILTER} syntax:

  • Matcher: A rule (possibly a reference to another token rule) that describes what to expect (number, word, notSpace, etc.).

  • Extract (optional): An identifier representing the capture destination for the piece of text matched by the Matcher.

  • Filter (optional): A post-processor of the match to transform it.

Example for a classic unstructured log:

john connected on 11/08/2017

With the following parsing rule:

MyParsingRule %{word:user} connected on %{date("MM/dd/yyyy"):date}

After processing, the following structured log is generated:
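The result would look roughly like the following; the date matcher converts the matched date into a Unix timestamp in milliseconds, and the value shown is an approximation for 11/08/2017 UTC:

{
  "user": "john",
  "date": 1510099200000
}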

Steps to create GROK Parsing

  • To create a Grok parser processor, you first need a pipeline created with a predefined log source. Once you have the pipeline, you can add processors to it.

  • The Grok parser comes with automatically generated parsing rules, and these rules are based on the pipeline's query.

  • When no parsing rule matches, the sample shows NO MATCH and the data is not parsed. In that case, define a rule that matches your log sample (see the sketch after this list).

  • Once the rule produces a MATCH for your log, the logs are parsed into structured attributes.
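As a hypothetical illustration (the log line, rule name, and attribute names below are invented for this example), a rule matching a simple application log could look like:

Log:

2023-01-05 12:00:01 INFO cardpayment Payment authorized in 35 ms

Rule:

paymentRule %{date("yyyy-MM-dd HH:mm:ss"):timestamp} %{word:level} %{word:service} %{data:msg}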

Scrub sensitive data from your logs

If your logs contain sensitive information that needs redacting, configure the Datadog Agent to scrub sensitive sequences by using the log_processing_rules parameter in your configuration file with the mask_sequences type.

This replaces all matched groups with the value of the replace_placeholder parameter. For example, to redact credit card numbers:

Configuration file :

logs:
 - type: file
   path: /my/test/file.log
   service: cardpayment
   source: java
   log_processing_rules:
      - type: mask_sequences
        name: mask_credit_cards
        replace_placeholder: "[masked_credit_card]"
        ##One pattern that contains capture groups
        pattern: (?:4[0-9]{12}(?:[0-9]{3})?|[25][1-7][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})

With Agent version 7.17+, the replace_placeholder string can expand references to capture groups such as $1, $2, and so forth. If you want a string to follow the capture group with no space in between, use the format ${<GROUP_NUMBER>}.

For instance, to scrub user information from the log User email: foo.bar@example.com, use:

  • pattern: "(User email: )[^@]*@(.*)"

  • replace_placeholder: "$1 masked_user@${2}"

This sends the following log to Datadog: User email: masked_user@example.com
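Putting that pattern and placeholder into the same log_processing_rules layout shown earlier might look like this (the rule name is illustrative):

logs:
  - type: file
    path: /my/test/file.log
    service: cardpayment
    source: java
    log_processing_rules:
      - type: mask_sequences
        name: mask_user_email
        replace_placeholder: "$1 masked_user@${2}"
        pattern: "(User email: )[^@]*@(.*)"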

Logging Traceid

While logging each request, include an attribute with a unique value so that the particular log can be traced when investigating an issue. Since this is metadata, pass a “trace-id” as a header parameter. Define an attribute to log this trace id so that the log can easily be filtered by it. The same log can then be traced across the end-to-end flow, in either the back end or the consumer application logs, using the same trace id.

Logging attributes

To form a meaningful log, a few important parameters should be logged so that the proper details are available when analyzing the log.

Using the logged attributes, you can create facets in Datadog to filter on those attributes.

Below are some examples of important attributes (a sample log using them follows the list):

  • Host

  • IP address

  • URL

  • Trace id

  • Timestamp

  • Source

  • Target

  • Name of the API/Service

  • Environment

  • Response Status code

  • Message

  • Headers

  • Time duration
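A hypothetical log entry carrying these attributes (all names and values are invented for illustration) might look like:

{
  "host": "ip-10-0-1-23",
  "ip_address": "10.0.1.23",
  "url": "/v1/payments",
  "trace-id": "4bf92f3577b34da6",
  "timestamp": "2023-01-05T12:00:01Z",
  "source": "cardpayment",
  "target": "payment-gateway",
  "api_name": "payment-api",
  "environment": "prod",
  "status_code": 502,
  "message": "Payment gateway timed out",
  "duration_ms": 30000,
  "headers": { "content-type": "application/json" }
}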

Facets

Facets are user-defined tags and attributes from your indexed logs. They are meant for either qualitative or quantitative data analysis. As such, you can use them in the Log Explorer to search, filter, and group your logs.

Facets also allow you to manipulate your logs in your log monitors, log widgets in dashboards, and notebooks.

Qualitative facets

Use qualitative facets when you need:

  • To get relative insights for values. For instance, create a facet on http.network.client.geoip.country.iso_code to see the top countries most impacted per number of 5XX errors on your NGINX web access logs, enriched with the Datadog GeoIP Processor.

  • To count unique values. For instance, create a facet on user.email from your Kong logs to know how many users connect every day to your website.

  • To frequently filter your logs against particular values. For instance, create a facet on an environment tag to scope troubleshooting down to development, staging, or production environments.

Note: Although it is not required to create facets to filter on attribute values, defining them on attributes that you often use during investigations can help reduce your time to resolution.

TYPES

Qualitative facets can have a string or numerical (integer) type. While assigning the string type to a dimension works in all cases, using an integer type on a dimension enables range filtering on top of all the aforementioned capabilities. For instance, http.status_code:[200 TO 299] is a valid query to use on an integer-type dimension. See search syntax for reference.

MEASURES

Use measures when you need:

  • To aggregate values from multiple logs. For instance, create a measure on the size of tiles served by the Varnish cache of a map server and keep track of the average daily throughput, or top-most referrers per sum of tile size requested.

  • To range filter your logs. For instance, create a measure on the execution time of Ansible tasks, and see the list of servers having the most runs taking more than 10s.

  • To sort logs against that value. For instance, create a measure on the amount of payments performed with your Python microservice. You can then search all the logs, starting with the one with the highest amount.

Metrics

Log-based metrics are a cost-efficient way to summarize log data from the entire ingest stream. This means that even if you use exclusion filters to limit what you store for exploration, you can still visualize trends and anomalies over all of your log data at 10s granularity for 15 months.

With log-based metrics, you can generate a count metric of logs that match a query or a distribution metric of a numeric value contained in the logs, such as request duration.

Billing Note: Metrics created from ingested logs are billed as Custom Metrics.

Generate a log-based metric

To generate a new log-based metric, go to the Configuration page of your Datadog account and select the Generate Metrics tab, then the New Metric+ button.

You can also create metrics from an Analytics search by selecting the “Generate new metric” option from the Export menu.

Add a new log-based metric

  1. Input a query to filter the log stream: The query syntax is the same as for the Log Explorer Search. Only logs ingested with a timestamp within the past 20 minutes are considered for aggregation.

  2. Select the field you would like to track: Select * to generate a count of all logs matching your query or enter a log attribute (for example, @network.bytes_written) to aggregate a numeric value and create its corresponding count, min, max, sum, and avg aggregated metrics. If the log attribute facet is a measure, the value of the metric is the value of the log attribute.

  3. Add dimensions to group by: By default, metrics generated from logs do not have any tags unless explicitly added. Any attribute or tag dimension that exists in your logs (for example, @network.bytes_written, env) can be used to create metric tags. Metric tag names are equal to the originating attribute or tag name, without the @.

  4. Add percentile aggregations: For distribution metrics, you can optionally generate p50, p75, p90, p95, and p99 percentiles. Percentile metrics are also considered custom metrics, and billed accordingly.

  5. Name your metric: Log-based metric names must follow the custom metric naming convention.

Note: Data points for Log-based metrics are generated at ten second intervals.
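For teams automating this, the same metric can be created through Datadog's Logs Metrics API (v2). The sketch below is assembled from memory of the public API, so verify the exact field names against the current API reference; the metric name, query, and group-by are illustrative.

# Create a log-based distribution metric over a duration attribute
curl -X POST "https://api.datadoghq.com/api/v2/logs/config/metrics" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: ${DD_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
-d '{
  "data": {
    "type": "logs_metrics",
    "id": "payment.request.duration",
    "attributes": {
      "compute": { "aggregation_type": "distribution", "path": "@duration" },
      "filter": { "query": "service:cardpayment" },
      "group_by": [ { "path": "env", "tag_name": "env" } ]
    }
  }
}'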

Update a log-based metric

After a metric is created, the following fields can be updated:

  • Stream filter query: To change the set of matching logs to be aggregated into metrics

  • Aggregation groups: To update the tags or manage the cardinality of the generated metrics

  • Percentile selection: Check or uncheck the Calculate percentiles box to remove or generate percentile metrics

To change the metric type or name, a new metric must be created.

Logs usage metrics

Usage metrics are estimates of your current Datadog usage in near real-time. They enable you to:

  • Graph your estimated usage.

  • Create monitors around your estimated usage.

  • Get instant alerts about spikes or drops in your usage.

  • Assess the potential impact of code changes on your usage in near real-time.

Log Management usage metrics come with three tags that can be used for more granular monitoring:

An extra status tag is available on the datadog.estimated_usage.logs.ingested_events metric to reflect the log status (info, warning, etc.).
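As an example, a monitor on the estimated ingested-events metric named above might use a query like the following; the time window and threshold are illustrative, so validate the query in the monitor editor:

sum(last_4h):sum:datadog.estimated_usage.logs.ingested_events{*}.as_count() > 1000000000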

Dashboards, Monitors and Alerts

Create a Datadog Dashboard

Datadog Dashboards enable you to efficiently monitor your infrastructure and integrations by displaying and tracking key metrics. Datadog provides a set of out-of-the-box dashboards for many features and integrations.

Steps to create New Dashboard

  • To create a dashboard, click +New Dashboard on the Dashboard List page or New Dashboard from the navigation menu.

  • Enter a dashboard name and choose a layout option.

For more information on creating Dashboard please refer to this link.

A common way to start a dashboard is by encountering a similar dashboard already in use and adjusting it to suit your needs. If you find a dashboard that answers many of the questions you want your dashboard to answer:

  • Clone it by opening the dashboard and selecting Clone dashboard from the Settings menu (the gear icon on the right-hand side). This creates an unlinked copy of the dashboard; changes you make in the new copy do not affect the source dashboard.

  • Edit the clone by opening it and clicking Edit widgets.

  • Delete widgets you do not need by selecting Delete from the widget’s Settings menu.

  • Move things around to suit your needs. Groups and individual widgets can be dragged and dropped into new locations in the dashboard.

  • Copy in widgets you like from other dashboards by hovering over the widget and typing Command + C (Ctrl + C on Windows). Paste it into your dashboard by opening the dashboard and typing Command + V (Ctrl + V on Windows).

  • Use the Export to Dashboard option provided by many Datadog views for data they show. For example, Logs Explorer and Log Analytics views have shared options to export logs lists and metrics to dashboards.

Monitors

Monitors allow you to watch a metric or check that you care about and notify your team when a defined threshold has been exceeded.

Create Monitors

Create a monitor using the specified options.

MONITOR TYPES

The type of monitor chosen from:

  • anomaly: query alert

  • APM: query alert or trace-analytics alert

  • composite: composite

  • custom: service check

  • event: event alert

  • forecast: query alert

  • host: service check

  • integration: query alert or service check

  • live process: process alert

  • logs: log alert

  • metric: query alert

  • network: service check

  • outlier: query alert

  • process: service check

  • rum: rum alert

  • SLO: slo alert

  • watchdog: event alert

  • event-v2: event-v2 alert

  • audit: audit alert

  • error-tracking: error-tracking alert.

For more information, see Creating Monitors.

Alerts

This guide shows how to create alerts that do not notify on every single group meeting the condition, but only when a given percentage of them does. This is helpful, for example, if you want a monitor that alerts only when a given percentage of hosts or containers reaches a critical state.

Create Alerts

Suppose you want to receive a notification when 40 percent of hosts have CPU usage above 50 percent. Leverage the min_cutoff and count_nonzero functions:

  • Use the min_cutoff function to count the number of hosts that have CPU usage above 50 percent.

  • Use the count_nonzero function to count the total number of hosts.

  • Divide one by the other for the resulting percentage of hosts with CPU usage above 50 percent.

  • Then, set the condition to alert if the percentage of hosts in that condition reaches 40 percent.

This monitor tracks the percentage of hosts that have a CPU usage above 50 percent within the last ten minutes and generates a notification if more than 40 percent of those hosts meet the specified condition.
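Put together, the monitor query could look roughly like the sketch below, using the function names referenced above; treat it as a sketch and validate it in the monitor editor against the linked guide:

sum(last_10m):count_nonzero(min_cutoff(avg:system.cpu.user{*} by {host}, 50)) / count_nonzero(avg:system.cpu.user{*} by {host}) * 100 > 40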

For more information on alerts, please refer to this link.

Integrate Alerts with your application

Webhook

Webhooks enable you to connect to your services and alert them when a metric alert is triggered.

Setup

Go to Integrations in Datadog, search for the Webhooks integration, and install it. You can then start configuring webhooks by providing a webhook name and the webhook URL.

Note: Custom headers must be in JSON format.

Usage

To use your webhook, add @webhook-<WEBHOOK_NAME> in the text of the metric alert you want to trigger the webhook. It triggers a POST request to the URL you set with the following content in JSON format. The timeout for any individual request is 15 seconds. Datadog only issues a retry if there is an internal error (badly formed notification message), or if Datadog receives a 5XX response from the webhook endpoint. Missed connections are retried 5 times.
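For example, a monitor message that triggers a hypothetical webhook named my-service-webhook (the webhook name and the {{host.name}} template variable are illustrative) might read:

{{#is_alert}}
CPU usage is above the defined threshold on {{host.name}}. @webhook-my-service-webhook
{{/is_alert}}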

To add your own custom fields to the request, you can also specify your own payload in the Payload field. If you want your payload to be URL-encoded, check the Encode as form checkbox and specify your payload in JSON format. Use the variables in the following section.

The webhook integration provides predefined variables that you can use when sending data to the integrating application. Refer to the variables in this link.

Email

When alerts are triggered in Datadog, you can integrate with email to notify the respective team members. If your application does not have an existing Datadog integration, and you don’t want to create a custom Agent check, you can send events with email. This can also be done with messages published to an Amazon SNS topic; read the Create Datadog Events from Amazon SNS Emails guide for more information.

Setup

Before you can send events with email, you need a dedicated email address from Datadog:

  • Log in to your Datadog account.

  • From the Account menu at the bottom left, select Organization Settings.

  • Click the Events API emails tab.

  • Choose the format for your messages from the Format dropdown (Plain text or JSON).

  • Click the Create Email button.

The Events API emails section displays all the emails available for your applications and who created them.

For more information on email, please refer to this link.

MS Teams Integration

The RapDev Microsoft Teams integration monitors call quality reports and provides metrics, monitors and dashboards that provide insights into call activity and experience.

This integration includes

  • Multiple dashboards

  • Recommended monitors on call quality metrics

  • Metric lookup tables for call metadata and performance qualifiers

The Microsoft Teams integration requires minimum permissions on your Active Directory tenant and is simple to install, enabling your organization to quickly deploy and begin reporting on Microsoft Teams call quality reports.

Integrate Dashboard with your application     

IFrame

Using an iframe, you can embed a Datadog dashboard or a widget on your website.

Embed Dashboard

To share an entire dashboard publicly, generate a URL:

  • On the dashboard’s page, click the settings cog in the upper right.

  • Select Generate public URL.

  • Under Time & Variable Settings, configure your desired options for the time frame and whether users can change it, as well as which tags are visible for selectable template variables.

  • Copy the URL and click Done.

To authorize one or more specific email addresses to view a dashboard page:

  • On the dashboard’s page, click the settings cog in the upper right.

  • Select Generate public URL.

  • Select Only specified people for indicating who can access this dashboard.

  • Input the email addresses for people you would like to share your dashboard with.

  • Under Time & Variable Settings, configure your desired options for the time frame and whether users can change it, as well as which tags are visible for selectable template variables.

  • (Optional) Copy the URL to share; the specified email addresses also receive an email with the link.

  • Click Done.

For more information on dashboard sharing, please refer to this link.

Embed Widget

To share a graph from a Timeboard or Screenboard:

  • For the graph you want to share, click the pencil icon in the upper right corner.

  • Under the Graph your data section, select the Share tab.

  • Pick a timeframe for your graph.

  • Pick a graph size.

  • Choose to include the legend or not.

  • Get the embed code with the Generate embed code button.

Revoke

To revoke the keys used to share individual (embedded) graphs:

  • Navigate to Integrations -> Embeds to see a list of all shared graphs.

  • Click on the Revoke button next to the graph you want to stop sharing.

  • The graph is moved to the Revoked list.

Applying restrictions:

You can restrict access on a per IP address basis to your dashboard. Email Datadog support to enable the IP address include listing feature that allows administrators to provide a list of IP addresses that have access to shared dashboards. Once enabled, manage your restrictions on your organization’s Security page.

Dark mode:

Dark mode is available on public dashboards for individual users. Click the sun or moon icon in the upper right to toggle between modes. Additionally, the URL parameter theme is available. The possible values are dark and light.

Datadog APIs

Datadog provides extensive APIs, from sending log data to Datadog to managing dashboards and alerts.

Send logs

POST https://http-intake.logs.datadoghq.com/api/v2/logs

Send your logs to your Datadog platform over HTTP. Limits per HTTP request are:

  • Maximum content size per payload (uncompressed): 5MB

  • Maximum size for a single log: 1MB

  • Maximum array size if sending multiple logs in an array: 1000 entries

Sample Curl:

# Curl command
curl -X POST "https://http-intake.logs.datadoghq.com/api/v2/logs" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: ${DD_API_KEY}" \
-d @- << EOF
[
  {
    "ddsource": "nginx",
    "ddtags": "env:staging,version:5.1",
    "hostname": "i-012345678",
    "message": "2019-11-19T14:37:58,995 INFO [process.name][20081] Hello World",
    "service": "payment"
  }
]
EOF

Manage Dashboards

Let’s look at some of the APIs you can use to manage dashboards.

Interact with dashboard lists through the API to make it easier to organize, find, and share all of your dashboards with your team and organization.

Create a new Dashboard

POST https://api.datadoghq.com/api/v1/dashboard

Request body :

Get a Dashboard

GET https://api.datadoghq.com/api/v1/dashboard/{dashboard_id}

Path Parameters :

Get all Dashboard

GET https://api.datadoghq.com/api/v1/dashboard

Query Strings :

Update a Dashboard

PUT https://api.datadoghq.com/api/v1/dashboard/{dashboard_id}

Path Parameters :

Delete a Dashboard

DELETE https://api.datadoghq.com/api/v1/dashboard/{dashboard_id}

Path Parameters :

For more information on Dashboard APIs, please refer to this link.
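For example, calling the Get a Dashboard endpoint listed above with curl would look like this; the dashboard ID is a placeholder, and DD-API-KEY and DD-APPLICATION-KEY are the standard Datadog authentication headers:

# Curl command
curl -X GET "https://api.datadoghq.com/api/v1/dashboard/<dashboard_id>" \
-H "DD-API-KEY: ${DD_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DD_APP_KEY}"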

 

 

ABAC - Policies

What is Policy from ABAC Standpoint?

A policy is a set of rules that can be applied to a specific API product to ensure that only the desired access is allowed for a specific consumer request.

You can set a policy using the attributes (properties) of the subject, object, or action, rather than their literal identities, to control access.

 

Why is Policy needed for ABAC?

ABAC - Users

What is user from ABAC Standpoint?

Users are part of an organization and an application. An admin can add users and grant them access to create, manage, and view ABAC policies.

 

Why are users needed for ABAC?

Users create, view, and modify ABAC policies that are specific to their application/organization.

Users with admin access can set policy rules at the application level to control consumer access to specific attributes.

 

How to manage Users using ABAC UI?

ABAC - Application

 

What is application from ABAC Standpoint?

An application is simply an API product that is available on the developer portal for consumer use.

  • From an ABAC standpoint, the application should have the same name as the API product (the name that appears on the developer portal).
  • The organization owns API products (applications), and ABAC policies control access to these API products to facilitate attribute-level access.

 

Why is application needed for ABAC?

ABAC - Organization

 

What is Organization from ABAC Standpoint?

An organization is a group that:

  • Owns the API product from an ABAC standpoint; it can be any publisher group that owns or wants to publish an API product.
  • Owns API products (applications); ABAC policies control access to these API products to facilitate attribute-level access.

 

Why is Organization needed for ABAC?