Salesforce eCDN

About The Device

Salesforce eCDN is an integrated feature of Salesforce B2C Commerce, powered by Cloudflare, that optimizes content delivery by distributing it across a network of global servers. This network ensures that digital assets, such as images and videos, are served from the server closest to the end user, reducing load times and improving site performance. The eCDN also enhances security through features such as a Web Application Firewall (WAF) and HTTPS enforcement, ensuring that content delivery is both fast and secure. Its configuration revolves around zones and custom hostnames, allowing precise control over how and where content is delivered across different instances of the platform.

Device Information

Vendor Name: Salesforce

Product Name: eCDN

Type of Device: Cloud

Collection Method:

Log Type: Salesforce eCDN (Cloudflare WAF)

Ingestion Labels: CLOUDFLARE_WAF, CLOUDFLARE_AUDIT, CLOUDFLARE

Preferred Logging Protocol - Format: API Pull

Log Collection Method: C2C-Push

Data Source: Feed management API  |  Google Security Operations  |  Google Cloud

Device Configuration

AMxDR supports log collection using S3 and SQS.

Prerequisites:

  1. Access token generated using an API Client with the sfcc.cdn-zones and sfcc.cdn-zones.rw scopes assigned to it. Refer to the vendor document to create an API Client, assign scopes, and generate an access token: https://developer.salesforce.com/docs/commerce/commerce-api/guide/authorization-for-admin-apis.html

  2. An Amazon S3 bucket. Refer to the following page to create an S3 bucket: Getting started with Amazon S3 - Amazon Simple Storage Service

  3. AWS Simple Queue Service (SQS) configured with S3 storage. Refer to the following document to configure SQS with S3 storage: Accenture MDR Quick Start Guide in Configuring AWS Simple Queue Service (SQS) with S3 Storage
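As a rough sketch of prerequisite 3, the S3-to-SQS wiring can be done with the AWS CLI along the following lines. The bucket and queue names here are placeholders, and your queue policy must also allow the S3 service to send messages; treat the referenced Quick Start Guide as the authoritative procedure.

```shell
# Hedged sketch only -- bucket and queue names are placeholders;
# follow the Accenture MDR Quick Start Guide for the full configuration.
BUCKET="example-bucket"
QUEUE="ecdn-log-queue"
REGION="us-east-1"

# Create the queue and look up its ARN.
aws sqs create-queue --queue-name "$QUEUE" --region "$REGION"
QUEUE_URL=$(aws sqs get-queue-url --queue-name "$QUEUE" --region "$REGION" \
  --query QueueUrl --output text)
QUEUE_ARN=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
  --attribute-names QueueArn --query 'Attributes.QueueArn' --output text)

# Notify the queue whenever a log object is written to the bucket.
aws s3api put-bucket-notification-configuration --bucket "$BUCKET" \
  --notification-configuration "{
    \"QueueConfigurations\": [{
      \"QueueArn\": \"$QUEUE_ARN\",
      \"Events\": [\"s3:ObjectCreated:*\"]
    }]
  }"
```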

Device Configuration:

  1. Update S3 Bucket Policy:

a. Edit the policy below and paste it into S3 > Bucket > Permissions > Bucket Policy, replacing the Resource value with your own bucket path. The AWS Principal is owned by the CDN provider and must not be changed.

{
  "Id": "PutObjPolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowObjectFromeCDNLogpush",
      "Action": ["s3:PutObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Principal": {
        "AWS": ["arn:aws:iam::391854517948:user/cloudflare-logpush"]
      }
    }
  ]
}
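If you prefer the CLI over the console, the same policy can be attached with the AWS CLI roughly as follows; "example-bucket" is a placeholder for your own bucket name.

```shell
# Hedged sketch: attach the Logpush bucket policy via the AWS CLI
# instead of the S3 console. "example-bucket" is a placeholder.
cat > /tmp/ecdn-bucket-policy.json <<'EOF'
{
  "Id": "PutObjPolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowObjectFromeCDNLogpush",
      "Action": ["s3:PutObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Principal": {
        "AWS": ["arn:aws:iam::391854517948:user/cloudflare-logpush"]
      }
    }
  ]
}
EOF

aws s3api put-bucket-policy --bucket example-bucket \
  --policy file:///tmp/ecdn-bucket-policy.json
```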
  2. Create Ownership Token

a. To create an eCDN Logpush job, you must demonstrate ownership of the S3 bucket. The ownership token file is written to the S3 bucket destination. You must create an ownership challenge token for each path, as tokens are unique for different paths within the same S3 bucket.

curl "https://{shortcode}.api.commercecloud.salesforce.com/cdn/zones/v1/organizations/f_ecom_bcxj_prd/zones/{zoneId}/logpush/ownership" \
  --request 'POST' \
  --header 'Authorization: Bearer <access_token>' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "destinationPath": "s3://example-bucket/log?region=us-east-1" }'

You can use the special string {DATE} in the URL path to separate logs into daily subdirectories; for example s3://customer-bucket/logs/{DATE}?region=us-east-1&sse=AES256. The name of the directory is replaced with the date in YYYYMMDD format (for example, 20230215) when the logs are stored.

  3. Create Logpush Job

curl "https://{shortcode}.api.commercecloud.salesforce.com/cdn/zones/v1/organizations/f_ecom_bcxj_prd/zones/{zoneId}/logpush/jobs" \
  --location \
  --request 'POST' \
  --header 'Authorization: Bearer <access_token>' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "name": "ExampleLogPushJob",
    "destinationPath": "s3://example-bucket/log?region=us-east-1",
    "logType": "firewall_events",
    "ownershipChallengeToken": "Put your ownership challenge token here"
  }'

Required Parameters:

  • Name: Use your site name as the job name; the name cannot be changed after the job is created.

  • Destination Path: Provide the S3 bucket path for receiving logs. Additional configuration parameters, such as the region, must be included.

  • Ownership Challenge Token: Provide the ownership challenge token that you created in the previous step.

  • Do not use logFields filters; logs should be pushed with all available fields.

  4. Enable Logpush Job

a. The job is not enabled when it is created. You must enable the Logpush job to start receiving logs in the S3 bucket.
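A sketch of the enable call is below. The PUT verb, the job-resource URL, and the "enabled" flag are assumptions based on the job-creation endpoint above; confirm the exact request against the vendor's CDN Zones API reference before use. The placeholder values stand in for your tenant short code, zone ID, and the job ID returned by the create-job response.

```shell
# Hedged sketch -- the PUT verb and "enabled" payload are assumptions;
# confirm against the CDN Zones API reference for enabling a Logpush job.
SHORTCODE="your-shortcode"   # placeholder: tenant short code
ZONE_ID="your-zone-id"       # placeholder: zone from the earlier steps
JOB_ID="your-job-id"         # placeholder: ID from the create-job response

URL="https://${SHORTCODE}.api.commercecloud.salesforce.com/cdn/zones/v1/organizations/f_ecom_bcxj_prd/zones/${ZONE_ID}/logpush/jobs/${JOB_ID}"

curl "$URL" \
  --request 'PUT' \
  --header 'Authorization: Bearer <access_token>' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "enabled": true }'
```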

  5. Repeat steps 1 to 4 to create a new Logpush job for 'http_requests', setting "logType": "http_requests" in step 3.
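Applied to step 5, the create-job call from step 3 would look roughly like this for the second job. The job name "ExampleHttpRequestsJob" is a placeholder (job names must be unique), and the ownership challenge token is the one generated for the same destination path.

```shell
# Second Logpush job for HTTP request logs; mirrors step 3 with
# "logType" switched to "http_requests". All values are placeholders.
PAYLOAD='{
  "name": "ExampleHttpRequestsJob",
  "destinationPath": "s3://example-bucket/log?region=us-east-1",
  "logType": "http_requests",
  "ownershipChallengeToken": "Put your ownership challenge token here"
}'

curl "https://{shortcode}.api.commercecloud.salesforce.com/cdn/zones/v1/organizations/f_ecom_bcxj_prd/zones/{zoneId}/logpush/jobs" \
  --location \
  --request 'POST' \
  --header 'Authorization: Bearer <access_token>' \
  --header 'Content-Type: application/json' \
  --data-raw "$PAYLOAD"
```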

Refer to the page below for the required IAM user policies.

https://mdrkb.atlassian.net/wiki/x/hoCdEQ

Below are the URLs that must be allowed for connectivity (identify the URLs for your services and regions by referring to the AWS documentation):

IAM: For any logging source, the IAM URL should be allowed.

AWS Identity and Access Management endpoints and quotas - AWS General Reference

S3: For an S3 or SQS logging source, the S3 URL should be allowed.

Amazon Simple Storage Service endpoints and quotas - AWS General Reference

SQS: For an SQS logging source, the SQS URL should be allowed.

Amazon Simple Queue Service endpoints and quotas - AWS General Reference

Integration Parameters:

SQS:

REGION (Required): The region of your S3 bucket.

QUEUE NAME (Required): The SQS queue name.

ACCOUNT NUMBER (Required): The account number for the SQS queue and S3 bucket.

QUEUE ACCESS KEY ID (Required): The 20-character ID associated with your Amazon IAM account.

QUEUE SECRET ACCESS KEY (Required): The 40-character access key associated with your Amazon IAM account.

SOURCE DELETION OPTION (Required): Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are:

  • SOURCE_DELETION_NEVER: Never delete files from the source.

  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.

  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.

S3 BUCKET ACCESS KEY ID (Optional): The 20-character ID associated with your Amazon IAM account. Specify only if using a different access key for the S3 bucket.

S3 BUCKET SECRET ACCESS KEY (Optional): The 40-character access key associated with your Amazon IAM account. Specify only if using a different access key for the S3 bucket.

ASSET NAMESPACE (Optional): To assign an asset namespace to all events ingested from a particular feed, set the "namespace" field within details. The namespace field is a string.

About Accenture:
Accenture is a leading global professional services company that helps the world’s leading businesses, governments and other organizations build their digital core, optimize their operations, accelerate revenue growth and enhance citizen services—creating tangible value at speed and scale. We are a talent and innovation led company with 738,000 people serving clients in more than 120 countries. Technology is at the core of change today, and we are one of the world’s leaders in helping drive that change, with strong ecosystem relationships. We combine our strength in technology with unmatched industry experience, functional expertise and global delivery capability. We are uniquely able to deliver tangible outcomes because of our broad range of services, solutions and assets across Strategy & Consulting, Technology, Operations, Industry X and Accenture Song. These capabilities, together with our culture of shared success and commitment to creating 360° value, enable us to help our clients succeed and build trusted, lasting relationships. We measure our success by the 360° value we create for our clients, each other, our shareholders, partners and communities. Visit us at www.accenture.com.

About Accenture Security
Accenture Security is a leading provider of end-to-end cybersecurity services, including advanced cyber defense, applied cybersecurity solutions and managed security operations. We bring security innovation, coupled with global scale and a worldwide delivery capability through our network of Advanced Technology and Intelligent Operations centers. Helped by our team of highly skilled professionals, we enable clients to innovate safely, build cyber resilience and grow with confidence. Follow us @AccentureSecure on Twitter or visit us at www.accenture.com/security.

Legal notice: Accenture, the Accenture logo, and other trademarks, service marks, and designs are registered or unregistered trademarks of Accenture and its subsidiaries in the United States and in foreign countries. All trademarks are properties of their respective owners. This document is intended for general informational purposes only and does not take into account the reader’s specific circumstances, and may not reflect the most current developments. Accenture disclaims, to the fullest extent permitted by applicable law, any and all liability for the accuracy and completeness of the information in this presentation and for any acts or omissions made based on such information. Accenture does not provide legal, regulatory, audit, or tax advice. Readers are responsible for obtaining such advice from their own legal counsel or other licensed professionals.