...
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. You can use Route 53 to perform domain registration and DNS routing. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications.
Device Information
Entity | Particulars |
---|---|
Vendor Name | Amazon Web Services |
Product Name | Route 53 |
Type of Device | Cloud |
Collection Method
Log Type | Ingestion label | Preferred Logging Protocol - Format | Log Collection Method | Data Source |
---|---|---|---|---|
AWS Route 53 DNS | AWS_ROUTE_53 | Cloud Storage - Structured /JSON | C2C | https://cloud.google.com/chronicle/docs/reference/feed-management-api#amazon_s3 |
Device Configuration
Adaptive MxDR supports log collection using S3 or SQS.
...
Public DNS query logs are sent directly to a CloudWatch log group. To create a CloudWatch log group, refer to the following page: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html
Resolver query logs can be sent to an S3 bucket. To create an S3 bucket, refer to the following page: https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html
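Once Resolver query logs land in the bucket, each line is a JSON record. The sketch below parses one record; the field names follow the Resolver query logging format, but the sample record itself is invented for illustration.

```python
import json

# Hypothetical sample of a Route 53 Resolver query log record (JSON Lines).
# Field names follow the Resolver query logging schema; all values are invented.
sample = (
    '{"version":"1.100000","account_id":"111122223333","region":"us-east-1",'
    '"vpc_id":"vpc-0123456789abcdef0","query_timestamp":"2024-05-01T12:00:00Z",'
    '"query_name":"example.com.","query_type":"A","query_class":"IN",'
    '"rcode":"NOERROR","answers":[{"Rdata":"93.184.216.34","Type":"A","Class":"IN"}],'
    '"srcaddr":"10.0.0.5","srcport":"53211","transport":"UDP"}'
)

record = json.loads(sample)
print(record["query_name"], record["rcode"])  # example.com. NOERROR
```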
We support two methods to ingest logs into Chronicle from AWS Route 53. The Lambda+S3+SQS method is preferred, while the Data Firehose method is secondary.
Based on our observations, Firehose generally incurs higher costs.
Method 1: Lambda+S3+SQS
To configure logging for DNS queries
...
Add tags as per your organization's policy and select Configure query logging.
...
If you would like to configure SQS, follow the steps below to attach SQS to S3:
Follow all the steps provided above to enable logging to S3.
Create SQS and attach it with S3. Please refer: Configuring AWS Simple Queue Service (SQS) with S3 Storage
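For S3 to deliver event notifications to the queue, the SQS queue policy must allow `s3.amazonaws.com` to send messages. A minimal sketch of such a policy follows; the account ID, region, queue name, and bucket name are placeholders for illustration, not values from this guide.

```python
import json

# Minimal SQS queue policy allowing S3 to publish event notifications.
# Account ID, region, queue name, and bucket name are placeholders.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:route53-logs-queue",
        "Condition": {
            "ArnLike": {"aws:SourceArn": "arn:aws:s3:::route53-logs-bucket"}
        },
    }],
}

print(json.dumps(queue_policy, indent=2))
```

Attach this policy to the queue, then configure the bucket's event notifications to target the queue for `s3:ObjectCreated:*` events.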
Method 2: Amazon Data Firehose
More details on Firehose configuration can be found on the page below:
Amazon Data Firehose to Chronicle: C2C-Push
Important links:
Refer to the page below for the required IAM user and KMS key policies for S3, SQS, and KMS.
...
Below are the URLs that must be allowed for connectivity (identify the exact URLs for your services and regions by referring to the AWS documentation):
IAM: for any logging source, the IAM URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/iam-service.html
S3: for an S3 or SQS logging source, the S3 URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/s3.html
SQS: for an SQS logging source, the SQS URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/sqs-service.html
Integration Parameters
S3
Property | Default Value | Description |
---|---|---|
REGION | N/A | Select the region of your S3 bucket. |
S3 URI | N/A | The S3 URI to ingest. It is a combination of the S3 bucket name and prefix. Example: `s3://<S3 bucket name>/<prefix>` |
URI IS A | N/A | The type of file indicated by the URI. Valid values are: **FILES**: the URI points to a single file, which will be ingested with each execution of the feed. **FOLDERS**: the URI points to a directory; all files contained within the directory will be ingested with each execution of the feed. **FOLDERS_RECURSIVE**: the URI points to a directory; all files and directories contained within the indicated directory will be ingested, including all files and directories within those directories, and so on. |
SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are: **SOURCE_DELETION_NEVER**: never delete files from the source. **SOURCE_DELETION_ON_SUCCESS**: delete files and empty directories from the source after successful ingestion. **SOURCE_DELETION_ON_SUCCESS_FILES_ONLY**: delete files from the source after successful ingestion. |
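The S3 URI parameter combines the bucket name and an optional prefix. A small helper (hypothetical, not part of the feed configuration) that splits such a URI back into the bucket/prefix pair most S3 client calls expect:

```python
def split_s3_uri(uri: str) -> tuple[str, str]:
    """Split an s3://<bucket>/<prefix> URI into (bucket, prefix)."""
    scheme, _, rest = uri.partition("://")
    if scheme.lower() != "s3":
        raise ValueError(f"not an S3 URI: {uri!r}")
    bucket, _, prefix = rest.partition("/")
    return bucket, prefix

print(split_s3_uri("s3://my-route53-logs/resolver/"))  # ('my-route53-logs', 'resolver/')
```

The bucket and prefix names above are placeholders; substitute the values from your own feed configuration.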
SQS:
Parameter | Default Value | Description |
---|---|---|
REGION | N/A | Select the region of your S3 bucket. |
QUEUE NAME | N/A | The SQS queue name. |
ACCOUNT NUMBER | N/A | The account number for the SQS queue and S3 bucket. |
QUEUE ACCESS KEY ID | N/A | The 20-character ID associated with your Amazon IAM account. |
QUEUE SECRET ACCESS KEY | N/A | The 40-character access key associated with your Amazon IAM account. |
SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are: **SOURCE_DELETION_NEVER**: never delete files from the source. **SOURCE_DELETION_ON_SUCCESS**: delete files and empty directories from the source after successful ingestion. **SOURCE_DELETION_ON_SUCCESS_FILES_ONLY**: delete files from the source after successful ingestion. |
S3 BUCKET ACCESS KEY ID | No | The 20-character ID associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket. |
S3 BUCKET SECRET ACCESS KEY | No | The 40-character access key associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket. |
ASSET NAMESPACE | No | To assign an asset namespace to all events that are ingested from a particular feed, set the asset namespace here. |
Integration via Amazon Data Firehose:
Configure Amazon Data Firehose on the Google Chronicle instance and copy the Endpoint URL and Secret key.
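When Firehose delivers to an HTTP endpoint such as Chronicle, it passes the configured access key in the `X-Amz-Firehose-Access-Key` request header, which the endpoint validates against the secret key copied above. A minimal sketch of building those headers; the secret value shown is a placeholder, and no real request is sent.

```python
def firehose_headers(secret_key: str) -> dict[str, str]:
    # Firehose HTTP endpoint delivery carries the configured access key
    # in the X-Amz-Firehose-Access-Key header; the receiving endpoint
    # checks it against the secret key generated during feed setup.
    return {
        "Content-Type": "application/json",
        "X-Amz-Firehose-Access-Key": secret_key,
    }

print(firehose_headers("placeholder-secret"))
```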