About the Device
Log files are the records that Linux stores so administrators can track and monitor important events about the server, kernel, services, and applications running on it. In this post, we’ll go over the top Linux log files server administrators should monitor.
What are Linux log files?
Log files are a set of records that Linux maintains for the administrators to keep track of important events. They contain messages about the server, including the kernel, services and applications running on it.
Linux provides a centralized repository of log files that can be located under the /var/log directory.
The log files generated in a Linux environment can typically be classified into four different categories:
Application Logs
Event Logs
Service Logs
System Logs
Why monitor Linux log files?
Log management is an integral part of any server administrator’s responsibility.
By monitoring Linux log files, you can gain detailed insight into server performance, security, error messages, and underlying issues. If you want to take a proactive rather than a reactive approach to server management, regular log file analysis is essential.
Device Information
Entity | Particulars |
---|---|
Vendor Name | Amazon Web Services |
Product Name | Linux |
Type of Device | Cloud |
Collection Method
Log Type | Ingestion label | Preferred Logging Protocol - Format | Log Collection Method |
---|---|---|---|
Linux Auditing System (AuditD) | AUDITD | Syslog KV/Unstructured | C2C |
Unix system | NIX_SYSTEM | Syslog Unstructured | C2C |
Device Configuration
Pre-requisites:
An AWS account that you can sign in to.
A user who is a Global Administrator or has administrator rights for AWS EC2, Systems Manager, CloudWatch, and IAM.
To configure Linux Audit System (AUDITD) service logging in Linux OS
The steps below have been validated on the following Linux distributions: Ubuntu 22.04.4 LTS, RHEL 9.3, Debian 12.5, and SUSE Linux Enterprise 15.5.
Log in to the Linux CLI with root or equivalent privileges.
Deploy and enable the audit daemon and the audit dispatching framework by running the command for your distribution.
If you have already deployed the daemon and framework, you can skip this step.
For Debian and Ubuntu OS: sudo apt-get install auditd audispd-plugins
For RedHat OS: sudo yum install audit audispd-plugins
For SUSE OS: sudo zypper install audit audit-audispd-plugins
Enable the auditd service:
sudo systemctl enable auditd.service
To enable logging of all commands run by any user, including root, add the following lines to /etc/audit/rules.d/audit.rules:
-a exit,always -F arch=b64 -S execve
-a exit,always -F arch=b32 -S execve
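If you prefer to apply this change from the shell, the following is a minimal sketch; it assumes the default rules file path shown above and the augenrules and auditctl utilities that ship with auditd:

```bash
# Append the two execve rules to the audit rules file
sudo tee -a /etc/audit/rules.d/audit.rules > /dev/null <<'EOF'
-a exit,always -F arch=b64 -S execve
-a exit,always -F arch=b32 -S execve
EOF

# Regenerate and load the rule set, then list the active rules to confirm
sudo augenrules --load
sudo auditctl -l
```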
Verify that the parameters in the /etc/audit/plugins.d/syslog.conf file match the following values:
active = yes
direction = out
path = /sbin/audisp-syslog
type = always
args = LOG_LOCAL6
format = string
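A quick way to check those values from the shell (on some older distributions the plugin file is located at /etc/audisp/plugins.d/syslog.conf instead):

```bash
# Print the relevant keys from the audisp syslog plugin configuration
sudo grep -E '^(active|direction|path|type|args|format)' /etc/audit/plugins.d/syslog.conf
```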
In the case of SUSE, enable the rsyslog traditional file format, which produces timestamps in the expected format.
Open /etc/rsyslog.conf
Make sure the following line is present and not commented out: $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
Restart the auditd and rsyslog services.
systemctl restart auditd.service
systemctl restart rsyslog.service
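To confirm that both services restarted cleanly and that audit events are being generated (the forwarded copy is written wherever rsyslog routes the local6 facility, typically /var/log/messages on RHEL/SUSE or /var/log/syslog on Debian/Ubuntu):

```bash
# Both commands should print "active"
systemctl is-active auditd rsyslog

# Recent raw audit records written by auditd
sudo tail -n 5 /var/log/audit/audit.log

# Recent syscall (execve) events recorded by the audit subsystem
sudo ausearch -m SYSCALL -ts recent | head -n 20
```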
To configure the unified CloudWatch agent to fetch logs from the EC2 Linux instances
Set up an IAM instance profile role for Systems Manager.
Create an IAM role.
Attach all the policies shown in the reference image and save the role.
Attach the created role to the Linux EC2 server machine(s).
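If you prefer the AWS CLI over the console, the sketch below outlines the same role setup. The role name and instance ID are placeholders, the trust policy file is assumed to allow ec2.amazonaws.com to assume the role, and the two managed policies are the ones commonly attached for the CloudWatch agent with Systems Manager; match them to the policies shown in your reference image.

```bash
# Create the role with an EC2 trust policy (ec2-trust-policy.json must allow
# the ec2.amazonaws.com service principal to call sts:AssumeRole)
aws iam create-role --role-name CloudWatchAgentServerRole \
  --assume-role-policy-document file://ec2-trust-policy.json

# Attach the managed policies typically required for the agent plus Systems Manager
aws iam attach-role-policy --role-name CloudWatchAgentServerRole \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
aws iam attach-role-policy --role-name CloudWatchAgentServerRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# Expose the role to EC2 as an instance profile and attach it to the instance
aws iam create-instance-profile --instance-profile-name CloudWatchAgentServerRole
aws iam add-role-to-instance-profile --instance-profile-name CloudWatchAgentServerRole \
  --role-name CloudWatchAgentServerRole
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=CloudWatchAgentServerRole
```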
To install the CloudWatch agent on the Linux server:
Navigate to AWS Systems Manager > Run Command
Select AWS-ConfigureAWSPackage
Enter AmazonCloudWatchAgent as the Name in Command Parameters.
Select the target Linux machine(s) where you want to install the AWS CloudWatch agent.
Scroll down and click Run.
Once the command executes, select the instance ID and view the output; it should not contain any errors.
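The same installation can also be triggered from the AWS CLI; this is a sketch with a placeholder instance ID, and the command ID in the second call comes from the output of the first:

```bash
# Run the AWS-ConfigureAWSPackage document to install the CloudWatch agent
aws ssm send-command \
  --document-name "AWS-ConfigureAWSPackage" \
  --targets "Key=InstanceIds,Values=i-0123456789abcdef0" \
  --parameters 'action=Install,name=AmazonCloudWatchAgent'

# Check the invocation status and output afterwards
aws ssm list-command-invocations --details --command-id <command-id>
```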
Creating the configuration file and saving it to the AWS Systems Manager Parameter Store.
Log in to the instance that was selected as the target for the CloudWatch agent installation in the previous step.
Start a terminal and launch
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
Provide a response to every query asked by the configuration wizard, as shown in the reference screenshots.
Select “YES” for CollectD only if it is installed; otherwise the configuration will fail.
In the case of Linux, Adaptive MxDR supports the /var/log/secure and /var/log/messages files, so provide the log file path twice and an appropriate log group name for each (a sketch of the resulting configuration appears after this step).
In the case of SUSE, Adaptive MxDR supports /var/log/messages only.
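For reference, the wizard typically produces a configuration like the following for the two log files above; the log group names and output location are placeholders, and the wizard also offers to store the JSON in Parameter Store:

```bash
# Illustrative only: the agent configuration the wizard writes, shown here as a heredoc
cat <<'EOF' > /opt/aws/amazon-cloudwatch-agent/bin/config.json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          { "file_path": "/var/log/secure",   "log_group_name": "linux-secure",   "log_stream_name": "{instance_id}" },
          { "file_path": "/var/log/messages", "log_group_name": "linux-messages", "log_stream_name": "{instance_id}" }
        ]
      }
    }
  }
}
EOF
```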
Once done, check the Systems Manager Parameter Store for the newly created configuration file containing the parameters you provided.
Navigate to AWS Systems Manager > Parameter Store
Copy the newly created Parameter Store name.
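You can also view the stored configuration from the CLI; the parameter name below is a placeholder, so substitute the name you copied:

```bash
# Print the JSON configuration stored by the wizard
aws ssm get-parameter --name "AmazonCloudWatch-linux" \
  --query 'Parameter.Value' --output text
```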
Start the agent, pointing it to the configuration file, using AWS Systems Manager.
To start the newly configured CloudWatch agents on the instances, use Run Command.
Navigate to AWS Systems Manager > Run Command
On the next page, search for AmazonCloudWatch-ManageAgent using the search box.
Specify the name of the newly created Parameter Store configuration file.
Select the target Linux machine(s) where you want to run the command. Scroll down and click Run.
Retain the default values for the other parameters, i.e., the timeout is kept at 600 seconds.
Retain the default values for Rate Control.
We are currently not sending any output to the S3 bucket; leave it unchecked.
SNS notifications are presently disabled; once enabled, they can be configured with the SNS details. Leave it unchecked.
The AWS command line interface command remains unchanged; use the default value.
Once you have finished configuring all the settings mentioned above, click Run. This prompts Systems Manager to start the CloudWatch agent on the instance, initiating the log flow (an equivalent AWS CLI invocation is sketched below).
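An equivalent AWS CLI invocation of the same Run Command document, with a placeholder instance ID and Parameter Store name:

```bash
# Point the agent at the stored configuration and restart it
aws ssm send-command \
  --document-name "AmazonCloudWatch-ManageAgent" \
  --targets "Key=InstanceIds,Values=i-0123456789abcdef0" \
  --parameters 'action=configure,mode=ec2,optionalConfigurationSource=ssm,optionalConfigurationLocation=AmazonCloudWatch-linux,optionalRestart=yes'
```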
Navigate to CloudWatch > Logs to see the logs being collected. The new log groups are created automatically once the agent starts shipping logs from the instance.
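From a workstation with the AWS CLI (v2 for the tail subcommand), you can also confirm that the new log groups exist and tail recent events; the prefix and log group name are placeholders:

```bash
# List log groups created by the agent
aws logs describe-log-groups --log-group-name-prefix linux-

# Stream the last 15 minutes of events from one of them
aws logs tail linux-messages --since 15m
```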
Please refer to the following page to check the required IAM user policies: Accenture MDR Quick Start Guide in Configuring IAM User and KMS Key Policies
Below are the URLs that must be allowed for connectivity (identify the exact URLs from the AWS documentation according to your services and regions):
IAM: For any logging source, the IAM URL should be allowed:
https://docs.aws.amazon.com/general/latest/gr/iam-service.html
CloudWatch: For the CloudWatch logging source, the CloudWatch URL should be allowed:
https://docs.aws.amazon.com/general/latest/gr/cwl_region.html
Please note that if the CyberHub is installed on AWS, select "Application running on an AWS compute service" while creating the access key.
Because Adaptive MxDR supports logging through S3 or SQS, logs must be exported from CloudWatch to S3. Follow the page below to automate continuous export of logs from a CloudWatch log group to an S3 bucket using Lambda.
CloudWatch Log Group to S3 Bucket Export Automation using Lambda
Once logs are available in the S3 bucket, if you wish to configure SQS, follow the steps below:
Follow the guide provided above to export logs from the CloudWatch log group to the S3 bucket.
Create an SQS queue and attach it to S3 (a minimal sketch follows). Please refer to: Configuring AWS Simple Queue Service (SQS) with S3 Storage
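As a minimal sketch of the S3-to-SQS attachment, the bucket name, region, account number, and queue name below are placeholders, and the linked page covers the SQS access policy that must also be in place:

```bash
# Send "object created" notifications from the export bucket to the SQS queue
aws s3api put-bucket-notification-configuration \
  --bucket my-cloudwatch-export-bucket \
  --notification-configuration '{
    "QueueConfigurations": [
      {
        "QueueArn": "arn:aws:sqs:us-east-1:123456789012:linux-logs-queue",
        "Events": ["s3:ObjectCreated:*"]
      }
    ]
  }'
```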
Important Links:
For more details on how to get the required credentials for the integration parameters, please refer to: Get Credentials for AWS Storage
Please refer to the page below to check the required IAM user and KMS key policies for S3, SQS, and KMS: IAM User and KMS Key Policies Required for AWS
Below are the URLs that must be allowed for connectivity (identify the exact URLs from the AWS documentation according to your services and regions):
IAM: For any logging source, the IAM URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/iam-service.html
S3: For the S3 or SQS logging source, the S3 URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/s3.html
SQS: For the SQS logging source, the SQS URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/sqs-service.html
Integration Parameters
S3:
Property | Default Value | Description |
---|---|---|
REGION | N/A | Select the region of your S3 bucket |
S3 URI | N/A | The S3 URI to ingest (a combination of the S3 bucket name and prefix, e.g. S3://<S3 bucket name>/<prefix>) |
URI IS A | N/A | The type of file indicated by the URI. Valid values are: |
SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are: |
SQS:
Property | Default Value | Description |
---|---|---|
REGION | N/A | Select the region of your S3 bucket |
QUEUE NAME | N/A | The SQS queue name. |
ACCOUNT NUMBER | N/A | The account number for the SQS queue and S3 bucket. |
QUEUE ACCESS KEY ID | N/A | This is the 20 character ID associated with your Amazon IAM account. |
QUEUE SECRET ACCESS KEY | N/A | This is the 40 character access key associated with your Amazon IAM account. |
SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are: |