
Log Type: Symantec Event export

Ingestion label: SYMANTEC_EVENT_EXPORT

Preferred Logging Protocol - Format: JSON

Log Collection Method: C2C

Data Source: https://cloud.google.com/chronicle/docs/reference/feed-management-api#symantec-event-export (C2C - Storage)

Device Configuration

  1. Sign in to the SEP 15/14.2 console.

  2. Select Integration.

  3. Click Client Application and copy the Customer ID and Domain ID, which are used when you create a Chronicle feed.

  4. Click + Add and provide an application name.

  5. Navigate to the Details page and perform the following actions:

    • In the Devices Group Management section, select View.

    • In the Alerts & Events Rule Management section, select View.

    • In the Investigation Incident section, select View.

  6. Click Save.

  7. Click the menu (vertical ellipses) at the end of the application name and click Client Secret.

...

  1. Copy the CLIENT ID, CLIENT SECRET & OAUTH CREDENTIALS (OAUTH REFRESH TOKEN), which are required when you configure the Chronicle feed. 

...

Events can be streamed to cloud storage data buckets. Add or edit a Data Bucket stream type to stream and export events into your cloud storage bucket.

  1. Log in to the cloud console.

  2. Navigate to Integration > Event Stream.

  3. Click Add to add a new event stream. Otherwise, select an existing event stream in the grid and edit the fields in the event stream details flyout.

  4. In Add Event Stream, select Data Bucket as the Stream Type.

  5. Type a Name for the event stream that you are configuring for the cloud storage.

  6. In Data Bucket, configure the following options:

Bucket: Type the bucket name that you have already created for your cloud storage.

Provider: Select a cloud platform from the options provided:

  • Amazon S3

  • Google Cloud Storage

  • Microsoft Azure Storage

Region: Type the region of the data center where your cloud storage account is created.

Bucket State: Enable the event stream to initiate streaming of events into your selected cloud storage bucket.

Directory: Type the directory location in your cloud storage bucket where the event files are stored.

Log Rotation: Set the Time and Size per your organization's retention policy.

  7. Type the KEY and SECRET, the unique identifiers of your storage account on the cloud platform.

    These identifiers authorize Symantec Endpoint Security to stream events into the cloud storage.

    The unique identifier is known by a different name on each cloud platform listed in the PROVIDER drop-down menu.

Refer to this KB for details on configuring the cloud platforms.

  8. Uncheck the COMPRESSION checkbox.

  9. In the Query Filter section, search and filter the event type_ids from the EVENT TYPE ID list to include the corresponding events in the event stream.

If the event type_ids you selected are already queried by other streams, you must enable the event stream you are creating so that those events continue streaming with no data loss.

  10. Keep OCSF schema unchecked.

  11. Click Test Connection to verify that the connection with the cloud storage account is established.

  12. Click Create.

Streaming into data buckets is not supported for tenants of the India data center.
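The Query Filter selects events by their type_id field. As a simplified sketch of the effect (the event shapes and type_id values below are placeholders for illustration; real Symantec Endpoint Security events carry many more fields):

```python
# Placeholder events; real SES events have many more fields than shown here.
events = [
    {"type_id": 8001, "message": "process launch"},
    {"type_id": 8007, "message": "file detection"},
    {"type_id": 8020, "message": "policy change"},
]

# The set of type_ids chosen in the Query Filter section.
selected_type_ids = {8001, 8020}

# Only events whose type_id is selected are included in the stream.
streamed = [e for e in events if e["type_id"] in selected_type_ids]
print([e["type_id"] for e in streamed])  # → [8001, 8020]
```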


If you used AWS S3 as the storage provider, follow the steps below to attach an SQS queue to the S3 bucket:

  1. Follow all the steps provided above to store logs in the S3 bucket.

  2. Create an SQS queue and attach it to the S3 bucket. Refer to Configuring AWS Simple Queue Service (SQS) with S3 Storage.
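Attaching SQS to S3 amounts to two pieces of configuration: a queue access policy that lets S3 publish to the queue, and an event-notification rule on the bucket. A minimal sketch of the two JSON documents involved (the account, bucket, and queue values are placeholders; the linked page above remains the authoritative procedure):

```python
import json

# Placeholder identifiers; substitute your own values.
ACCOUNT = "123456789012"
BUCKET = "ses-event-export-bucket"
QUEUE_ARN = f"arn:aws:sqs:us-east-1:{ACCOUNT}:ses-event-queue"

# Queue access policy: allow S3 (from this bucket and account only) to send messages.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": QUEUE_ARN,
        "Condition": {
            "ArnLike": {"aws:SourceArn": f"arn:aws:s3:::{BUCKET}"},
            "StringEquals": {"aws:SourceAccount": ACCOUNT},
        },
    }],
}

# Bucket notification: publish object-created events to the queue.
notification_config = {
    "QueueConfigurations": [{
        "QueueArn": QUEUE_ARN,
        "Events": ["s3:ObjectCreated:*"],
    }]
}

print(json.dumps(queue_policy, indent=2))
print(json.dumps(notification_config, indent=2))
```

The queue policy would be applied in the queue's Access policy (console or SQS API), and the notification via the bucket's Event notifications settings or boto3's `put_bucket_notification_configuration`.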

Refer to the following page for the required IAM user policies:

IAM User and KMS Key Policies Required for AWS

For details on how to obtain the credentials required for the integration parameters, refer to:

Get Credentials for AWS Storage

Integration Parameters

Amazon SQS:

Parameter Display Name | Default Value | Description

REGION | N/A | Select the region of your S3 bucket.

QUEUE NAME | N/A | The SQS queue name.

ACCOUNT NUMBER | N/A | The account number for the SQS queue and S3 bucket.

QUEUE ACCESS KEY ID | N/A | The 20-character ID associated with your Amazon IAM account.

QUEUE SECRET ACCESS KEY | N/A | The 40-character access key associated with your Amazon IAM account.

SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are:

  • SOURCE_DELETION_NEVER: Never delete files from the source.

  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.

  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.

S3 BUCKET ACCESS KEY ID | N/A | The 20-character ID associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket.

S3 BUCKET SECRET ACCESS KEY | N/A | The 40-character access key associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket.

ASSET NAMESPACE | N/A | To assign an asset namespace to all events that are ingested from a particular feed, set the "namespace" field within details. The namespace field is a string.
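The difference between the two deletion-on-success modes is whether directories that become empty are also cleaned up. A small local simulation of the options (pure illustration; Chronicle performs this on the remote bucket, not on local paths):

```python
import shutil
import tempfile
from pathlib import Path

def apply_deletion(root: Path, option: str) -> None:
    """Simulate Chronicle's source-deletion options on a local tree."""
    if option == "SOURCE_DELETION_NEVER":
        return
    # Both remaining modes delete the ingested files.
    for f in sorted(root.rglob("*")):
        if f.is_file():
            f.unlink()
    if option == "SOURCE_DELETION_ON_SUCCESS":
        # Additionally remove directories left empty, deepest first.
        for d in sorted(root.rglob("*"), reverse=True):
            if d.is_dir() and not any(d.iterdir()):
                d.rmdir()

# Build a tiny date-partitioned tree like an export bucket might contain.
root = Path(tempfile.mkdtemp())
(root / "2024" / "01").mkdir(parents=True)
(root / "2024" / "01" / "events.json").write_text("{}")

apply_deletion(root, "SOURCE_DELETION_ON_SUCCESS_FILES_ONLY")
dirs_left = [p for p in root.rglob("*") if p.is_dir()]
print(len(dirs_left))  # 2: the empty date directories survive in FILES_ONLY mode

apply_deletion(root, "SOURCE_DELETION_ON_SUCCESS")
final_left = list(root.rglob("*"))
print(final_left)  # []: empty directories are removed as well

shutil.rmtree(root, ignore_errors=True)
```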

Microsoft Azure Blob Storage:

Parameter Display Name | Default Value | Description

AZURE URI | N/A | The URI pointing to an Azure Blob Storage blob or container.

URI IS A | Directory which includes subdirectories | The type of object indicated by the URI. Valid values are:

  • FILES: The URI points to a single blob that will be ingested with each execution of the feed.

  • FOLDERS_RECURSIVE: The URI points to a Blob Storage container.

SOURCE DELETION OPTION | Never delete files | Source file deletion is not supported in Azure. This field's value must be set to SOURCE_DELETION_NEVER.

Shared Key OR SAS Token |  | A shared key, a 512-bit random string in base64 encoding, authorized to access Azure Blob Storage. Required if not specifying a SAS token. Alternatively, a Shared Access Signature authorized to access the Azure Blob Storage container.

ASSET NAMESPACE |  | To assign an asset namespace to all events that are ingested from a particular feed, set the "namespace" field within details. The namespace field is a string.
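Which URI IS A value applies follows from the shape of the AZURE URI: a path that stops at the container is a container (FOLDERS_RECURSIVE), while a path that continues past the container names a single blob (FILES). A standard-library sketch (the account and container names are placeholders):

```python
from urllib.parse import urlparse

def uri_is_a(azure_uri: str) -> str:
    """Guess the URI IS A setting from an Azure Blob Storage URI."""
    path = urlparse(azure_uri).path.strip("/")
    # The first path segment is the container; anything after it names a blob.
    _, _, blob = path.partition("/")
    return "FILES" if blob else "FOLDERS_RECURSIVE"

print(uri_is_a("https://myacct.blob.core.windows.net/ses-events"))
# → FOLDERS_RECURSIVE
print(uri_is_a("https://myacct.blob.core.windows.net/ses-events/2024/01/events.json"))
# → FILES
```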

Google Cloud Storage:

Parameter Display Name | Default Value | Description

STORAGE BUCKET URI | N/A | The URI which corresponds to the Google Cloud Storage bucket. The format is the same format used by gsutil to specify a resource.

URI IS A | N/A | The type of object indicated by bucketUri. Valid values are:

  • FILES: The URI points to a single file which will be ingested with each execution of the feed.

  • FOLDERS: The URI points to a directory. All files contained within the directory will be ingested with each execution of the feed.

  • FOLDERS_RECURSIVE: The URI points to a directory. All files and directories contained within the indicated directory will be ingested, including all files and directories within those directories, and so on.

SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Google Security Operations. This reduces storage costs. Valid values are:

  • SOURCE_DELETION_NEVER: Never delete files from the source.

  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.

  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.
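The three URI IS A modes differ only in how much of the tree under the URI is picked up. A local pathlib analogy (illustrative only; the feed operates on gs:// URIs, not local paths):

```python
import tempfile
from pathlib import Path

def select_files(target: Path, uri_is_a: str) -> list:
    """Mimic which objects each URI IS A mode would ingest."""
    if uri_is_a == "FILES":
        return [target]                                       # the single named file
    if uri_is_a == "FOLDERS":
        return [p for p in target.iterdir() if p.is_file()]   # direct children only
    if uri_is_a == "FOLDERS_RECURSIVE":
        return [p for p in target.rglob("*") if p.is_file()]  # the whole subtree
    raise ValueError(uri_is_a)

# A bucket-like tree: one file at the top, one in a subdirectory.
root = Path(tempfile.mkdtemp())
(root / "a.json").write_text("{}")
(root / "sub").mkdir()
(root / "sub" / "b.json").write_text("{}")

n_folders = len(select_files(root, "FOLDERS"))              # 1: a.json only
n_recursive = len(select_files(root, "FOLDERS_RECURSIVE"))  # 2: a.json and sub/b.json
print(n_folders, n_recursive)  # → 1 2
```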