
About the Device

AWS Security Hub provides you with a comprehensive view of your security state in AWS and helps you check your environment against security industry standards and best practices.

Security Hub collects security data from across AWS accounts, services, and supported third-party partner products and helps you analyze your security trends and identify the highest priority security issues.

AWS Security Hub reduces the effort of collecting and prioritizing security findings across accounts, from AWS services and AWS partner tools. The service ingests data using a standardized findings format, the AWS Security Finding Format (ASFF).
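For illustration, a trimmed sketch of a finding in ASFF, shown here as a Python dict (all field values are hypothetical placeholders):

# A trimmed, hypothetical Security Hub finding in ASFF.
# Only a small subset of ASFF fields is shown.
finding = {
    "SchemaVersion": "2018-10-08",
    "Id": "example-finding-id",
    "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/securityhub",
    "GeneratorId": "example-generator-id",
    "AwsAccountId": "111122223333",
    "Types": ["Software and Configuration Checks"],
    "CreatedAt": "2024-05-08T06:32:17Z",
    "UpdatedAt": "2024-05-08T06:32:17Z",
    "Severity": {"Label": "HIGH"},
    "Title": "Example finding title",
    "Description": "Example finding description.",
    "Resources": [{"Type": "AwsS3Bucket", "Id": "arn:aws:s3:::example-bucket"}],
}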

Device Information

Vendor Name: Amazon Web Services

Product Name: Security Hub

Type of Device: Cloud

Collection Method

Log Type: AWS Security Hub

Ingestion label: AWS_SECURITY_HUB

Preferred Logging Protocol - Format: Cloud Storage - JSON

Log Collection Method: C2C - Storage

Data Source: https://cloud.google.com/chronicle/docs/reference/feed-management-api#amazon_s3

Device Configuration

Prerequisite:

We support two methods to ingest logs into Chronicle from AWS Security Hub. The Lambda+S3+SQS method is preferred, while the Data Firehose method is secondary.

Based on our observations, Firehose generally incurs higher costs.


Method 1: Lambda+S3+SQS

Please follow the steps below to forward Security Hub findings to a CloudWatch log group (a scripted boto3 equivalent is sketched after the steps):

  1. Navigate to Amazon EventBridge, click Rules, and then click Create rule.

  2. Provide details such as Name, Description, and Rule type, and click Next.

  3. In Build event pattern, select Other.

  4. For Event source, choose Event Pattern and select Security Hub.

    1. Choose Custom pattern (JSON editor).

    2. Copy the following example pattern and paste it into the Event pattern text area. Be sure to replace the existing brackets.

{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Custom Action", "Security Hub Findings – Imported"]
}


  5. On the target screen, select AWS service as the target type and choose CloudWatch log group as the target.

    1. Select the log group.

  6. On the review and create screen, review your configured event pattern and target, and click Create rule.


If you have enabled Security Hub, findings will be forwarded to the configured CloudWatch log group.
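For reference, a minimal boto3 sketch of the same rule creation (the rule name, region, account number, and log group ARN are hypothetical placeholders; the console steps above remain the documented path):

import json
import boto3

# All names, the region, and ARNs below are hypothetical placeholders.
events = boto3.client("events", region_name="us-east-1")

# Same event pattern as in the JSON editor step above.
pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Custom Action",
                    "Security Hub Findings - Imported"],
}

events.put_rule(
    Name="securityhub-to-cloudwatch",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
    Description="Forward Security Hub findings to a CloudWatch log group",
)

# Point the rule at an existing CloudWatch log group. Note: when a rule is
# created via the API rather than the console, the log group may also need a
# resource policy that allows events.amazonaws.com to deliver to it.
events.put_targets(
    Rule="securityhub-to-cloudwatch",
    Targets=[{
        "Id": "securityhub-log-group",
        "Arn": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/events/securityhub",
    }],
)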

As Adaptive MxDR supports logging through S3 or SQS, we have to export logs from CloudWatch to S3. Please follow the page below to automate continuous export of logs from a CloudWatch log group to an S3 bucket using Lambda: CloudWatch Log Group to S3 Bucket Export Automation using Lambda
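As a rough sketch of what that automation does (the linked page remains the authoritative guide), a Lambda handler can call the CloudWatch Logs CreateExportTask API on a schedule; the log group, bucket, and prefix names below are hypothetical:

import time
import boto3

logs = boto3.client("logs")

# Hypothetical names; the S3 bucket policy must allow CloudWatch Logs to
# write to the bucket, as described on the linked page.
LOG_GROUP = "/aws/events/securityhub"
BUCKET = "example-securityhub-logs"
PREFIX = "securityhub"

def lambda_handler(event, context):
    now_ms = int(time.time() * 1000)
    # Export the previous hour of logs; schedule this function hourly
    # (for example, via an EventBridge schedule) for continuous export.
    logs.create_export_task(
        taskName=f"securityhub-export-{now_ms}",
        logGroupName=LOG_GROUP,
        fromTime=now_ms - 3600 * 1000,
        to=now_ms,
        destination=BUCKET,
        destinationPrefix=PREFIX,
    )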

Once logs are available in the S3 bucket, please configure SQS using the following steps:

  1. Follow the guide provided above to export logs from the CloudWatch log group to the S3 bucket.

  2. Create an SQS queue and attach it to the S3 bucket (a minimal notification-configuration sketch follows). Please refer to: Configuring AWS Simple Queue Service (SQS) with S3 Storage
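For illustration, attaching the queue to the bucket amounts to an S3 event notification such as the following boto3 sketch (bucket name and queue ARN are hypothetical; the linked SQS configuration page is the authoritative guide):

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and queue ARN. The queue's access policy must allow
# s3.amazonaws.com to send messages, per the linked configuration page.
s3.put_bucket_notification_configuration(
    Bucket="example-securityhub-logs",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:securityhub-queue",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)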

Method 2: Amazon Data Firehose

Please find more details on the Firehose configuration on the page below:

Amazon Data Firehose to Chronicle: C2C-Push


Integration parameters

SQS:

REGION (Required): Select the region of your S3 bucket.

QUEUE NAME (Required): The SQS queue name.

ACCOUNT NUMBER (Required): The account number for the SQS queue and S3 bucket.

QUEUE ACCESS KEY ID (Required): The 20-character ID associated with your Amazon IAM account.

QUEUE SECRET ACCESS KEY (Required): The 40-character access key associated with your Amazon IAM account.

SOURCE DELETION OPTION (Required): Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are:

  • SOURCE_DELETION_NEVER: Never delete files from the source.

  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.

  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.

S3 BUCKET ACCESS KEY ID (Optional): The 20-character ID associated with your Amazon IAM account. Specify only if using a different access key for the S3 bucket.

S3 BUCKET SECRET ACCESS KEY (Optional): The 40-character access key associated with your Amazon IAM account. Specify only if using a different access key for the S3 bucket.

ASSET NAMESPACE (Optional): To assign an asset namespace to all events that are ingested from a particular feed, set the "namespace" field within details. The namespace field is a string.

Integration via Amazon Data Firehose:

Configure Amazon Data Firehose on the Google Chronicle instance and copy the Endpoint URL and Secret key.
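For illustration only, a delivery stream pointed at the Chronicle HTTPS endpoint can be sketched with boto3 as below; the stream name, role, backup bucket, endpoint URL, and access key are hypothetical placeholders, and the linked Firehose page is the authoritative guide:

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# The endpoint URL and secret (access) key come from the Firehose feed
# configured on the Chronicle instance; all other values are hypothetical.
firehose.create_delivery_stream(
    DeliveryStreamName="securityhub-to-chronicle",
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Name": "Chronicle",
            "Url": "https://example-chronicle-endpoint.googleapis.com",
            "AccessKey": "CHRONICLE_SECRET_KEY",
        },
        # Records that fail delivery are backed up to S3.
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::example-firehose-backup",
        },
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
    },
)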

 
