About The Device
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. You can use Route 53 to perform domain registration and DNS routing. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications.
Device Information
Entity | Particulars |
---|---|
Vendor Name | Amazon Web Services |
Product Name | Route 53 |
Type of Device | Cloud |
Collection Method
Log Type | Ingestion label | Preferred Logging Protocol - Format | Log Collection Method | Data Source |
---|---|---|---|---|
AWS Route 53 DNS | AWS_ROUTE_53 | Cloud Storage - Structured/JSON | C2C | https://cloud.google.com/chronicle/docs/reference/feed-management-api#amazon_s3 |
Device Configuration
Adaptive MxDR supports log collection using S3 or SQS.
Prerequisite
Public DNS query logs are sent directly to a CloudWatch log group. Please refer to the following page to create a CloudWatch log group: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html
Resolver query logs can be sent to an S3 bucket. Please refer to the following page to create an S3 bucket: https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html
We support two methods to ingest logs into Chronicle from AWS Route 53. The Lambda+S3+SQS method is preferred, while the Data Firehose method is secondary.
Based on our observations, Firehose generally incurs higher costs.
Method 1: Lambda+S3+SQS
To configure logging for DNS queries
Sign in to the AWS Management Console and open the Route 53 console at https://console.aws.amazon.com/route53/
In the navigation pane, choose Hosted zones.
Choose the hosted zone that you want to configure query logging for.
Click Configure query logging.
In Configure query logging, choose an existing Log group.
If you receive an alert about permissions (this happens if you haven't configured query logging with the new console before), do one of the following:
If you already have 10 resource policies, you can't create any more. Select one of your resource policies and choose Edit. Editing gives Route 53 permission to write logs to your log groups. Choose Save. The alert disappears and you can continue to the next step.
If you have never configured query logging before (or if you haven't yet created 10 resource policies), you need to grant Route 53 permission to write logs to your CloudWatch Logs log groups. Choose Grant permissions. The alert disappears and you can continue to the next step.
Click Create.
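Granting permissions in the steps above results in a CloudWatch Logs resource policy along the following lines. This is a sketch only; the account ID, region, and log group prefix are placeholders, and Route 53 public DNS query logging requires the log group to be in us-east-1.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Route53LogsToCloudWatchLogs",
      "Effect": "Allow",
      "Principal": { "Service": "route53.amazonaws.com" },
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/route53/*"
    }
  ]
}
```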
Chronicle does not support direct collection from CloudWatch log groups. To continuously export these logs to an S3 bucket, please refer to the page below. Please note that Lambda execution incurs separate costs. CloudWatch Log Group to S3 Bucket Export Automation using Lambda
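A Lambda-based export can be sketched as below, assuming a scheduled invocation (for example, hourly via EventBridge). The log group, bucket, and prefix names are illustrative placeholders; only the `create_export_task` call and its parameters come from the CloudWatch Logs API.

```python
import datetime

def export_window(now, minutes=60):
    """Compute the [start, end) millisecond timestamps for the previous
    export interval -- pure logic, independent of AWS."""
    end = now.replace(second=0, microsecond=0)
    start = end - datetime.timedelta(minutes=minutes)
    to_ms = lambda dt: int(dt.timestamp() * 1000)
    return to_ms(start), to_ms(end)

def lambda_handler(event, context):
    # boto3 is preinstalled in the Lambda runtime; imported here so this
    # module also loads in environments without it.
    import boto3
    logs = boto3.client("logs")
    start_ms, end_ms = export_window(
        datetime.datetime.now(datetime.timezone.utc)
    )
    # Log group, bucket, and prefix below are hypothetical placeholders.
    logs.create_export_task(
        logGroupName="/aws/route53/example.com",
        fromTime=start_ms,
        to=end_ms,
        destination="my-route53-log-bucket",
        destinationPrefix="route53/query-logs",
    )
```

Note that `create_export_task` is asynchronous and only one export task can run per account at a time, so a production version should poll task status before starting the next export.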
To configure logging for resolver queries
Sign in to the AWS Management Console and open the Route 53 console at https://console.aws.amazon.com/route53/
In the navigation pane, under Resolver, choose Query logging.
Select Configure query logging.
In Configure query logging, provide a name for your configuration.
Select S3 bucket as the destination for the query logs.
After selecting the destination, select the VPCs to log queries for. (Resolver logs DNS queries that originate in the VPCs that you choose here.)
Select Add VPC.
Select your VPC and click Add.
Add tags per your organization's policy and select Configure query logging.
If you would like to configure SQS, please follow the steps below.
Follow all the steps provided above to enable logging to S3.
Create an SQS queue and attach it to the S3 bucket. Please refer to: Configuring AWS Simple Queue Service (SQS) with S3 Storage
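Once S3 event notifications are attached to the queue, each SQS message body carries an S3 notification describing the newly created object. The sketch below extracts the bucket and key from one such body; the structure follows the S3 event message format, and the bucket/key values are placeholders.

```python
import json

# A trimmed S3 "ObjectCreated" event notification as it arrives in an
# SQS message body (values are illustrative placeholders).
message_body = json.dumps({
    "Records": [
        {
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "my-route53-log-bucket"},
                "object": {"key": "route53/query-logs/2024-01-01.json.gz"},
            },
        }
    ]
})

def objects_from_notification(body):
    """Yield (bucket, key) pairs for each record in one S3 notification."""
    for rec in json.loads(body).get("Records", []):
        yield rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"]

pairs = list(objects_from_notification(message_body))
```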
Method 2: Amazon Data Firehose
Please find more details on Firehose configuration on the page below:
Amazon Data Firehose to Chronicle: C2C-Push
Important links:
Please refer to the page below for the required IAM user and KMS key policies for S3, SQS, and KMS:
IAM User and KMS Key Policies Required for AWS
For more details on how to get the required credentials for the integration parameters, please refer to:
Get Credentials for AWS Storage
Below are the URLs that need to be allowed for connectivity (please identify the URLs in the AWS documentation according to your services and regions):
IAM: For any logging source, the IAM URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/iam-service.html
S3: For an S3 or SQS logging source, the S3 URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/s3.html
SQS: For an SQS logging source, the SQS URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/sqs-service.html
Integration Parameters
SQS:
Parameter Display Name | Required | Description |
---|---|---|
REGION | Yes | Select the region of your S3 bucket |
QUEUE NAME | Yes | The SQS queue name. |
ACCOUNT NUMBER | Yes | The account number for the SQS queue and S3 bucket. |
QUEUE ACCESS KEY ID | Yes | This is the 20-character ID associated with your Amazon IAM account. |
QUEUE SECRET ACCESS KEY | Yes | This is the 40-character access key associated with your Amazon IAM account. |
SOURCE DELETION OPTION | Yes | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. |
S3 BUCKET ACCESS KEY ID | No | This is the 20-character ID associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket. |
S3 BUCKET SECRET ACCESS KEY | No | This is the 40-character access key associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket. |
ASSET NAMESPACE | No | To assign an asset namespace to all events that are ingested from a particular feed, set the |
Integration via Amazon Data Firehose:
Configure Amazon Data Firehose on the Google Chronicle instance and copy the endpoint URL and secret key.
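When Firehose delivers to an HTTP endpoint such as the Chronicle push endpoint, it POSTs a JSON body whose records carry base64-encoded data. The sketch below builds and decodes a minimal body of that shape; the request ID, timestamp, and log line are placeholders, and the structure follows the Firehose HTTP endpoint request format.

```python
import base64
import json

# A sample log line a receiver would recover from one delivered record.
log_line = '{"query_name": "example.com.", "rcode": "NOERROR"}'

# Minimal sketch of a Firehose HTTP-endpoint delivery body
# (requestId and timestamp are illustrative placeholders).
request_body = json.dumps({
    "requestId": "ed4acda5-034f-9f42-bba1-f29aea6d7d8f",
    "timestamp": 1712345678901,
    "records": [
        {"data": base64.b64encode(log_line.encode()).decode()}
    ],
})

# The receiving endpoint base64-decodes each record to recover the line.
decoded = [
    base64.b64decode(r["data"]).decode()
    for r in json.loads(request_body)["records"]
]
```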