I found that the OpenSearch data node's disk space was full, so I executed DELETE index_prefix* from Dev Tools in OpenSearch Dashboards. After execution, the index name suffix no longer contains the time format. What should I do to fix it?

Note

The following operation deletes the index that is currently being written to, resulting in data loss.

1. Open the Centralized Logging with OpenSearch console, find the pipeline that has this issue, and choose View details.

2. Go to Monitoring > Lambda Processor, and click the link (starting with /aws/lambda/CL-xxx) under Lambda Processor.

3. Go to the Lambda console > Configuration > Concurrency, choose Edit, select Reserve concurrency, and set it to 0.

4. Open OpenSearch Dashboards, go to Dev Tools, input DELETE your_index_name, and send the request.

5. Input GET _cat/indices/your_index_name and send the request. If "status" is 404 and "type" is index_not_found_exception in the returned result, the deletion succeeded. Otherwise, repeat step 4.

6. Input POST /your_index_name/_rollover and send the request.

7. Go to the Lambda console > Configuration > Concurrency, choose Edit, then either select Reserve concurrency and set it to the value you want, or select Use unreserved account concurrency, and save.

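If you prefer the command line, here is a minimal sketch of the same procedure using the AWS CLI and the OpenSearch REST API. The function name, endpoint, index name, and credentials are placeholders, not values the solution guarantees; substitute the ones from your own pipeline:

```bash
# Placeholders -- substitute your own pipeline's values.
FUNCTION=CL-xxx-LogProcessorFn              # hypothetical processor function name
ENDPOINT=https://your-opensearch-endpoint   # Amazon OpenSearch Service domain endpoint
INDEX=your_index_name
AUTH=user:password                          # OpenSearch credentials

# Step 3: pause the log processor so nothing writes during cleanup.
aws lambda put-function-concurrency \
    --function-name "$FUNCTION" \
    --reserved-concurrent-executions 0

# Steps 4-5: delete the broken index, then confirm it is gone (expect a 404
# with type "index_not_found_exception").
curl -u "$AUTH" -X DELETE "$ENDPOINT/$INDEX"
curl -u "$AUTH" "$ENDPOINT/_cat/indices/$INDEX"

# Step 6: roll over so a new index with the time-format suffix is created.
curl -u "$AUTH" -X POST "$ENDPOINT/$INDEX/_rollover"

# Step 7 (the "Use unreserved account concurrency" variant): remove the
# reserved-concurrency setting to resume normal processing.
aws lambda delete-function-concurrency --function-name "$FUNCTION"
```
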
Standard Operating Procedure for Proxy Stack Connection Problems

When I access OpenSearch Dashboards through the proxy, the browser shows a 504 Gateway Timeout error

Possible root causes:

a. The instances keep terminating and initializing:

   i. Wrong security group

b. The instances are not terminating:

   i. VPC peering request not accepted

   ii. Peering with the wrong VPC

   iii. Route table has the wrong routes

c. In either case, check whether VPC peering is working, for example with the CLI sketch below.

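As one way to verify peering from the command line, the following sketch lists the peering connection status and the routes of a route table; both IDs are placeholders for your own resources:

```bash
# Placeholders -- substitute the IDs from your proxy stack's VPC setup.
PEERING_ID=pcx-0123456789abcdef0
ROUTE_TABLE_ID=rtb-0123456789abcdef0

# An accepted, healthy peering connection reports status "active".
aws ec2 describe-vpc-peering-connections \
    --vpc-peering-connection-ids "$PEERING_ID" \
    --query 'VpcPeeringConnections[].Status.Code'

# Confirm the route table has a route whose target is the peering connection.
aws ec2 describe-route-tables \
    --route-table-ids "$ROUTE_TABLE_ID" \
    --query 'RouteTables[].Routes[]'
```
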
When I access OpenSearch Dashboards through the proxy, the browser shows "Site can't be reached"

Possible root causes:

1. The Application Load Balancer is deployed inside a private subnet.

2. The proxy stack has just been redeployed; it takes at least 15 minutes for DNS servers to resolve the new Load Balancer endpoint address.

Solution:

1. If the ALB was deployed in the wrong location, delete the proxy stack and create a new one.

2. Wait for 15 minutes; you can verify DNS resolution as sketched below.

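One quick way to tell the two causes apart is to resolve the proxy's domain name and the new ALB endpoint directly; both names in this sketch are placeholders:

```bash
# Placeholders -- substitute your proxy custom domain and the new ALB DNS name.
dig +short proxy.example.com
dig +short CL-Proxy-ALB-0123456789.us-east-1.elb.amazonaws.com

# If the ALB name resolves but your domain does not (or returns stale
# addresses), DNS has not caught up yet -- wait up to 15 minutes and retry.
```
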
I set the log collection path to /log_path/*.log. What will be the impact?

Note

Normally we don't recommend using the wildcard * as a prefix for matching logs. If there are hundreds or even thousands of files in the directory, this will seriously slow down Fluent Bit's log collection, so we recommend removing outdated files on a regular basis, for example as sketched below.

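One minimal way to do that cleanup, assuming a 30-day retention (adjust the path and age to your environment):

```bash
# Delete .log files under /log_path not modified in the last 30 days.
# Run this from cron (e.g. daily) to keep the directory small for Fluent Bit.
find /log_path -name '*.log' -type f -mtime +30 -delete
```
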
The log file names are the same for different systems, but the log path contains the system name in order to differentiate between them. I wish to create one pipeline to handle this. How should I set the log path?

Note

Let's go through an example.

Suppose we have 3 environments: dev, staging, and prod. The log paths are /log_path/dev/jvm.log, /log_path/staging/jvm.log, and /log_path/prod/jvm.log. In this scenario, if you wish to create only one pipeline, you can set the log path as follows:

/log_path/*/jvm.log

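For reference, this path maps onto the tail input that the log agent uses. A minimal sketch of the corresponding Fluent Bit input section follows; the Tag value is an assumption, since the solution generates its own:

```
[INPUT]
    Name    tail
    # Hypothetical tag; CLO generates its own tag value.
    Tag     app.jvm
    # The single wildcard segment matches dev, staging, and prod.
    Path    /log_path/*/jvm.log
```
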
In an EKS environment, I am using DaemonSet mode to collect logs, but my logs are not written to standard output. How should I configure the YAML file for deployment?

When you create a pipeline in CLO and select EKS as the log source, the system automatically generates the deployment content in YAML format to help you deploy Fluent Bit. In the generated YAML file, change the log path to match /your_log_path/ and remove the Parser cri_regex line.
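
A minimal sketch of what the tail input inside the generated Fluent Bit configuration might look like after those two changes; the Tag and Path values are placeholders, and the rest of the generated YAML stays as-is:

```
[INPUT]
    Name    tail
    # Hypothetical tag; keep whatever tag the generated YAML uses.
    Tag     app.logs
    # Point Path at your own log location instead of the container stdout path.
    Path    /your_log_path/*.log
    # The generated "Parser cri_regex" line is removed here because these
    # logs are not in container-runtime (CRI) stdout format.
```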
+
diff --git a/en/search/search_index.json b/en/search/search_index.json
index f004dc3b..7714c753 100644
--- a/en/search/search_index.json
+++ b/en/search/search_index.json
@@ -1 +1 @@
-{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"The Centralized Logging with OpenSearch solution provides comprehensive log management and analysis functions to help you simplify the build of log analytics pipelines. Built on top of Amazon OpenSearch Service, the solution allows you to streamline log ingestion, log processing, and log visualization. You can leverage the solution in multiple use cases such as to abide by security and compliance regulations, achieve refined business operations, and enhance IT troubleshooting and maintenance. Use this navigation table to quickly find answers to these questions: If you want to \u2026 Read\u2026 Know the cost for running this solution Cost Understand the security considerations for this solution Security Know which AWS Regions are supported for this solution Supported AWS Regions Get started with the solution quickly to import an Amazon OpenSearch Service domain, build a log analytics pipeline, and access the built-in dashboard Getting started Learn the operations related to Amazon OpenSearch Service domains Domain management Walk through the processes of building log analytics pipelines AWS Services logs and Applications logs Encountering issues when using the solution Troubleshooting Go through a hands-on workshop designed for this solution Workshop This implementation guide describes architectural considerations and configuration steps for deploying the Centralized Logging with OpenSearch solution in the AWS cloud. It includes links to CloudFormation templates that launches and configures the AWS services required to deploy this solution using AWS best practices for security and availability. The guide is intended for IT architects, developers, DevOps, data engineers with practical experience architecting on the AWS Cloud.","title":"Overview"},{"location":"implementation-guide/alarm/","text":"There are different types of log alarms: log processor alarms, buffer layer alarms, and source alarms (only for application log pipeline). The alarms will be triggered when the defined condition is met. Log alarm type Log alarm condition Description Log processor alarms Error invocation # >= 10 for 5 minutes, 1 consecutive time When the number of log processor Lambda error calls is greater than or equal to 10 within 5 minutes (including 5 minutes), an email alarm will be triggered. Log processor alarms Failed record # >= 1 for 1 minute, 1 consecutive time When the number of failed records is greater than or equal to 1 within a 1-minute window, an alarm will be triggered. Log processor alarms Average execution duration in last 5 minutes >= 60000 milliseconds In the last 5 minutes, when the average execution time of log processor Lambda is greater than or equal to 60 seconds, an email alarm will be triggered. Buffer layer alarms SQS Oldest Message Age >= 30 minutes When the age of the oldest SQS message is greater than or equal to 30 minutes, it means that the message has not been consumed for at least 30 minutes, an email alarm will be triggered. Source alarms (only for application log pipeline) Fluent Bit output_retried_record_total >= 100 for last 5 minutes When the total number of retry records output by Fluent Bit in the past 5 minutes is greater than or equal to 100, an email alarm will be triggered. You can choose to enable log alarms or disable them according to your needs. Enable log alarms Sign in to the Centralized Logging with OpenSearch console. 
In the left navigation bar, under Log Analytics Pipelines , choose AWS Service Log or Application Log . Select the log pipeline created and choose View details . Select the Alarm tab. Switch on Alarms if needed and select an exiting SNS topic. If you choose Create a new SNS topic , you need to provide email address for the newly-created SNS topic to notify. Disable log alarms Sign in to the Centralized Logging with OpenSearch console. In the left navigation bar, under Log Analytics Pipelines , choose AWS Service Log or Application Log . Select the log pipeline created and choose View details . Select the Alarm tab. Switch off Alarms .","title":"Log alarms"},{"location":"implementation-guide/alarm/#enable-log-alarms","text":"Sign in to the Centralized Logging with OpenSearch console. In the left navigation bar, under Log Analytics Pipelines , choose AWS Service Log or Application Log . Select the log pipeline created and choose View details . Select the Alarm tab. Switch on Alarms if needed and select an exiting SNS topic. If you choose Create a new SNS topic , you need to provide email address for the newly-created SNS topic to notify.","title":"Enable log alarms"},{"location":"implementation-guide/alarm/#disable-log-alarms","text":"Sign in to the Centralized Logging with OpenSearch console. In the left navigation bar, under Log Analytics Pipelines , choose AWS Service Log or Application Log . Select the log pipeline created and choose View details . Select the Alarm tab. Switch off Alarms .","title":"Disable log alarms"},{"location":"implementation-guide/faq/","text":"Frequently Asked Questions General Q: What is Centralized Logging with OpenSearch solution? Centralized Logging with OpenSearch is an AWS Solution that simplifies the building of log analytics pipelines. It provides to customers, as complementary of Amazon OpenSearch Service, capabilities to ingest and process both application logs and AWS service logs without writing code, and create visualization dashboards from out-of-the-box templates. Centralized Logging with OpenSearch automatically assembles the underlying AWS services, and provides you a web console to manage log analytics pipelines. Q: What are the supported logs in this solution? Centralized Logging with OpenSearch supports both AWS service logs and EC2/EKS application logs. Refer to the supported AWS services , and the supported application log formats and sources for more details. Q: Does Centralized Logging with OpenSearch support ingesting logs from multiple AWS accounts? Yes. Centralized Logging with OpenSearch supports ingesting AWS service logs and application logs from a different AWS account in the same region. For more information, see cross-account ingestion . Q: Does Centralized Logging with OpenSearch support ingesting logs from multiple AWS Regions? Currently, Centralized Logging with OpenSearch does not automate the log ingestion from a different AWS Region. You need to ingest logs from other regions into pipelines provisioned by Centralized Logging with OpenSearch. For AWS services which store the logs in S3 bucket, you can leverage the S3 Cross-Region Replication to copy the logs to the Centralized Logging with OpenSearch deployed region, and import incremental logs using the manual mode by specifying the log location in the S3 bucket. 
For application logs on EC2 and EKS, you need to set up the networking (for example, Kinesis VPC endpoint, VPC Peering), install agents, and configure the agents to ingest logs to Centralized Logging with OpenSearch pipelines. Q: What is the license of this solution? This solution is provided under the Apache-2.0 license . It is a permissive free software license written by the Apache Software Foundation. It allows users to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software under the terms of the license, without concern for royalties. Q: How can I find the roadmap of this solution? This solution uses GitHub project to manage the roadmap. You can find the roadmap here . Q: How can I submit a feature request or bug report? You can submit feature requests and bug report through the GitHub issues. Here are the templates for feature request , bug report . Q: How can I use stronger TLS Protocols to secure traffic, namely TLS 1.2 and above? By default, CloudFront uses the TLSv1 security policy along with a default certificate. Changing the TLS settings for CloudFront depends on the presence of your SSL certificates. If you don't have your own SSL certificates, you won't be able to alter the TLS setting for CloudFront. In order to configure TLS 1.2 or above, you will need a custom domain. This setup will enable you to enforce stronger TLS protocols for your traffic. To learn how to configure a custom domain and enable TLS 1.2+ for your service, you can follow the guide provided here: Use a Custom Domain with AWS AppSync, Amazon CloudFront, and Amazon Route 53 . Setup and configuration Q: Can I deploy Centralized Logging with OpenSearch on AWS in any AWS Region? Centralized Logging with OpenSearch provides two deployment options: option 1 with Cognito User Pool, and option 2 with OpenID Connect. For option 1, customers can deploy the solution in AWS Regions where Amazon Cognito User Pool, AWS AppSync, Amazon Kinesis Data Firehose (optional) are available. For option 2, customers can deploy the solution in AWS Regions where AWS AppSync, Amazon Kinesis Data Firehose (optional) are available. Refer to supported regions for deployment for more information. Q: What are the prerequisites of deploying this solution? Centralized Logging with OpenSearch does not provision Amazon OpenSearch clusters, and you need to import existing OpenSearch clusters through the web console. The clusters must meet the requirements specified in prerequisites . Q: Why do I need a domain name with ICP recordal when deploying the solution in AWS China Regions? The Centralized Logging with OpenSearch console is served via CloudFront distribution which is considered as an Internet information service. According to the local regulations, any Internet information service must bind to a domain name with ICP recordal . Q: What versions of OpenSearch does the solution work with? Centralized Logging with OpenSearch supports Amazon OpenSearch Service, with OpenSearch 1.3 or later. Q: What are the index name rules for OpenSearch created by the Log Analytics Pipeline? You can change the index name if needed when using the Centralized Logging with OpenSearch console to create a log analytics pipeline. If the log analytics pipeline is created for service logs, the index name is composed of - - -<00000x>, where you can define a name for Index Prefix and service-type is automatically generated by the solution according to the service type you have chosen. 
Moreover, you can choose different index suffix types to adjust index rollover time window. YYYY-MM-DD-HH: Amazon OpenSearch will roll the index by hour. YYYY-MM-DD: Amazon OpenSearch will roll the index by 24 hours. YYYY-MM: Amazon OpenSearch will roll the index by 30 days. YYYY: Amazon OpenSearch will roll the index by 365 days. It should be noted that in OpenSearch, the time is in UTC 0 time zone. Regarding the 00000x part, Amazon OpenSearch will automatically append a 6-digit suffix to the index name, where the first index rule is 000001, rollover according to the index, and increment backwards, such as 000002, 000003. If the log analytics pipeline is created for application log, the index name is composed of - -<00000x>. The rules for index prefix and index suffix, 00000x are the same as those for service logs. Q: What are the index rollover rules for OpenSearch created by the Log Analytics Pipeline? Index rollover is determined by two factors. One is the Index Suffix in the index name. If you enable the index rollover by capacity, Amazon OpenSearch will roll your index when the index capacity equals or exceeds the specified size, regardless of the rollover time window. Note that if one of these two factors matches, index rollover can be triggered. For example, we created an application log pipeline on January 1, 2023, deleted the application log pipeline at 9:00 on January 4, 2023, and the index name is nginx-YYYY-MM-DD-<00000x>. At the same time, we enabled the index rollover by capacity and entered 300GB. If the log data volume increases suddenly after creation, it can reach 300GB every hour, and the duration is 2 hours and 10 minutes. After that, it returns to normal, and the daily data volume is 90GB. Then OpenSearch creates three indexes on January 1, the index names are nginx-2023-01-01-000001, nginx-2023-01-01-000002, nginx-2023-01-01-000003, and then creates one every day Indexes respectively: nginx-2023-01-02-000004, nginx-2023-01-03-000005, nginx-2023-01-04-000006. Q: Can I deploy the solution in an existing VPC? Yes. You can either launch the solution with a new VPC or launch the solution with an existing VPC. When using an existing VPC, you need to select the VPC and the corresponding subnets. Refer to launch with Cognito User Pool or launch with OpenID Connect for more details. Q: I did not receive the email containing the temporary password when launching the solution with Cognito User Pool. How can I resend the password? Your account is managed by the Cognito User Pool. To resend the temporary password, you can find the user pool created by the solution, delete and recreate the user using the same email address. If you still have the same issue, try with another email address. Q: How can I create more users for this solution? If you launched the solution with Cognito User Pool, go to the AWS console, find the user pool created by the solution, and you can create more users. If you launched the solution with OpenID Connect (OIDC), you should add more users in the user pool managed by the OIDC provider. Note that all users have the same privileges. Pricing Q: How will I be charged and billed for the use of this solution? The solution is free to use, and you are responsible for the cost of AWS services used while running this solution. You pay only for what you use, and there are no minimum or setup fees. Refer to the Centralized Logging with OpenSearch Cost section for detailed cost estimation. Q: Will there be additional cost for cross-account ingestion? No. 
The cost will be same as ingesting logs within the same AWS account. Log Ingestion Q: What is the log agent used in the Centralized Logging with OpenSearch solution? Centralized Logging with OpenSearch uses AWS for Fluent Bit , a distribution of Fluent Bit maintained by AWS. The solution uses this distribution to ingest logs from Amazon EC2 and Amazon EKS. Q: I have already stored the AWS service logs of member accounts in a centralized logging account. How should I create service log ingestion for member accounts? In this case, you need to deploy the Centralized Logging with OpenSearch solution in the centralized logging account, and ingest AWS service logs using the Manual mode from the logging account. Refer to this guide for ingesting Application Load Balancer logs with Manual mode. You can do the same with other supported AWS services which output logs to S3. Q: Why there are some duplicated records in OpenSearch when ingesting logs via Kinesis Data Streams? This is usually because there is no enough Kinesis Shards to handle the incoming requests. When threshold error occurs in Kinesis, the Fluent Bit agent will retry that chunk . To avoid this issue, you need to estimate your log throughput and set a proper Kinesis shard number. Please refer to the Kinesis Data Streams quotas and limits . Centralized Logging with OpenSearch provides a built-in feature to scale-out and scale-in the Kinesis shards, and it would take a couple of minutes to scale out to the desired number. Q: How to install log agent on CentOS 7? Log in to your CentOS 7 machine and install SSM Agent manually. sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm sudo systemctl enable amazon-ssm-agent sudo systemctl start amazon-ssm-agent Go to the Instance Group panel of Centralized Logging with OpenSearch console, create Instance Group , select the CentOS 7 machine, choose Install log agent and wait for its status to be offline . Log in to CentOS 7 and install fluent-bit 1.9.3 manually. export RELEASE_URL = ${ FLUENT_BIT_PACKAGES_URL :- https ://packages.fluentbit.io } export RELEASE_KEY = ${ FLUENT_BIT_PACKAGES_KEY :- https ://packages.fluentbit.io/fluentbit.key } sudo rpm --import $RELEASE_KEY cat << EOF | sudo tee /etc/yum.repos.d/fluent-bit.repo [fluent-bit] name = Fluent Bit baseurl = $RELEASE_URL/centos/VERSION_ARCH_SUBSTR gpgcheck=1 repo_gpgcheck=1 gpgkey=$RELEASE_KEY enabled=1 EOF sudo sed -i 's|VERSION_ARCH_SUBSTR|\\$releasever/\\$basearch/|g' /etc/yum.repos.d/fluent-bit.repo sudo yum install -y fluent-bit-1.9.3-1 # Modify the configuration file sudo sed -i 's/ExecStart.*/ExecStart=\\/opt\\/fluent-bit\\/bin\\/fluent-bit -c \\/opt\\/fluent-bit\\/etc\\/fluent-bit.conf/g' /usr/lib/systemd/system/fluent-bit.service sudo systemctl daemon-reload sudo systemctl enable fluent-bit sudo systemctl start fluent-bit 4. Go back to the Instance Groups panel of the Centralized Logging with OpenSearch console and wait for the CentOS 7 machine status to be Online and proceed to create the instance group. Q: How can I consume CloudWatch custom logs? You can use Firehose to subscribe CloudWatch logs and transfer logs into Amazon S3. Firstly, create subscription filters with Amazon Kinesis Data Firehose based on this guide . Next, follow the instructions to learn how to transfer logs to Amazon S3. Then, you can use Centralized Logging with OpenSearch to ingest logs from Amazon S3 to OpenSearch. Log Visualization Q: How can I find the built-in dashboards in OpenSearch? 
Please refer to the AWS Service Logs and Application Logs to find out if there is a built-in dashboard supported. You also need to turn on the Sample Dashboard option when creating a log analytics pipeline. The dashboard will be inserted into the Amazon OpenSearch Service under Global Tenant . You can switch to the Global Tenant from the top right coder of the OpenSearch Dashboards.","title":"FAQ"},{"location":"implementation-guide/faq/#frequently-asked-questions","text":"","title":"Frequently Asked Questions"},{"location":"implementation-guide/faq/#general","text":"Q: What is Centralized Logging with OpenSearch solution? Centralized Logging with OpenSearch is an AWS Solution that simplifies the building of log analytics pipelines. It provides to customers, as complementary of Amazon OpenSearch Service, capabilities to ingest and process both application logs and AWS service logs without writing code, and create visualization dashboards from out-of-the-box templates. Centralized Logging with OpenSearch automatically assembles the underlying AWS services, and provides you a web console to manage log analytics pipelines. Q: What are the supported logs in this solution? Centralized Logging with OpenSearch supports both AWS service logs and EC2/EKS application logs. Refer to the supported AWS services , and the supported application log formats and sources for more details. Q: Does Centralized Logging with OpenSearch support ingesting logs from multiple AWS accounts? Yes. Centralized Logging with OpenSearch supports ingesting AWS service logs and application logs from a different AWS account in the same region. For more information, see cross-account ingestion . Q: Does Centralized Logging with OpenSearch support ingesting logs from multiple AWS Regions? Currently, Centralized Logging with OpenSearch does not automate the log ingestion from a different AWS Region. You need to ingest logs from other regions into pipelines provisioned by Centralized Logging with OpenSearch. For AWS services which store the logs in S3 bucket, you can leverage the S3 Cross-Region Replication to copy the logs to the Centralized Logging with OpenSearch deployed region, and import incremental logs using the manual mode by specifying the log location in the S3 bucket. For application logs on EC2 and EKS, you need to set up the networking (for example, Kinesis VPC endpoint, VPC Peering), install agents, and configure the agents to ingest logs to Centralized Logging with OpenSearch pipelines. Q: What is the license of this solution? This solution is provided under the Apache-2.0 license . It is a permissive free software license written by the Apache Software Foundation. It allows users to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software under the terms of the license, without concern for royalties. Q: How can I find the roadmap of this solution? This solution uses GitHub project to manage the roadmap. You can find the roadmap here . Q: How can I submit a feature request or bug report? You can submit feature requests and bug report through the GitHub issues. Here are the templates for feature request , bug report . Q: How can I use stronger TLS Protocols to secure traffic, namely TLS 1.2 and above? By default, CloudFront uses the TLSv1 security policy along with a default certificate. Changing the TLS settings for CloudFront depends on the presence of your SSL certificates. 
If you don't have your own SSL certificates, you won't be able to alter the TLS setting for CloudFront. In order to configure TLS 1.2 or above, you will need a custom domain. This setup will enable you to enforce stronger TLS protocols for your traffic. To learn how to configure a custom domain and enable TLS 1.2+ for your service, you can follow the guide provided here: Use a Custom Domain with AWS AppSync, Amazon CloudFront, and Amazon Route 53 .","title":"General"},{"location":"implementation-guide/faq/#setup-and-configuration","text":"Q: Can I deploy Centralized Logging with OpenSearch on AWS in any AWS Region? Centralized Logging with OpenSearch provides two deployment options: option 1 with Cognito User Pool, and option 2 with OpenID Connect. For option 1, customers can deploy the solution in AWS Regions where Amazon Cognito User Pool, AWS AppSync, Amazon Kinesis Data Firehose (optional) are available. For option 2, customers can deploy the solution in AWS Regions where AWS AppSync, Amazon Kinesis Data Firehose (optional) are available. Refer to supported regions for deployment for more information. Q: What are the prerequisites of deploying this solution? Centralized Logging with OpenSearch does not provision Amazon OpenSearch clusters, and you need to import existing OpenSearch clusters through the web console. The clusters must meet the requirements specified in prerequisites . Q: Why do I need a domain name with ICP recordal when deploying the solution in AWS China Regions? The Centralized Logging with OpenSearch console is served via CloudFront distribution which is considered as an Internet information service. According to the local regulations, any Internet information service must bind to a domain name with ICP recordal . Q: What versions of OpenSearch does the solution work with? Centralized Logging with OpenSearch supports Amazon OpenSearch Service, with OpenSearch 1.3 or later. Q: What are the index name rules for OpenSearch created by the Log Analytics Pipeline? You can change the index name if needed when using the Centralized Logging with OpenSearch console to create a log analytics pipeline. If the log analytics pipeline is created for service logs, the index name is composed of - - -<00000x>, where you can define a name for Index Prefix and service-type is automatically generated by the solution according to the service type you have chosen. Moreover, you can choose different index suffix types to adjust index rollover time window. YYYY-MM-DD-HH: Amazon OpenSearch will roll the index by hour. YYYY-MM-DD: Amazon OpenSearch will roll the index by 24 hours. YYYY-MM: Amazon OpenSearch will roll the index by 30 days. YYYY: Amazon OpenSearch will roll the index by 365 days. It should be noted that in OpenSearch, the time is in UTC 0 time zone. Regarding the 00000x part, Amazon OpenSearch will automatically append a 6-digit suffix to the index name, where the first index rule is 000001, rollover according to the index, and increment backwards, such as 000002, 000003. If the log analytics pipeline is created for application log, the index name is composed of - -<00000x>. The rules for index prefix and index suffix, 00000x are the same as those for service logs. Q: What are the index rollover rules for OpenSearch created by the Log Analytics Pipeline? Index rollover is determined by two factors. One is the Index Suffix in the index name. 
If you enable the index rollover by capacity, Amazon OpenSearch will roll your index when the index capacity equals or exceeds the specified size, regardless of the rollover time window. Note that if one of these two factors matches, index rollover can be triggered. For example, we created an application log pipeline on January 1, 2023, deleted the application log pipeline at 9:00 on January 4, 2023, and the index name is nginx-YYYY-MM-DD-<00000x>. At the same time, we enabled the index rollover by capacity and entered 300GB. If the log data volume increases suddenly after creation, it can reach 300GB every hour, and the duration is 2 hours and 10 minutes. After that, it returns to normal, and the daily data volume is 90GB. Then OpenSearch creates three indexes on January 1, the index names are nginx-2023-01-01-000001, nginx-2023-01-01-000002, nginx-2023-01-01-000003, and then creates one every day Indexes respectively: nginx-2023-01-02-000004, nginx-2023-01-03-000005, nginx-2023-01-04-000006. Q: Can I deploy the solution in an existing VPC? Yes. You can either launch the solution with a new VPC or launch the solution with an existing VPC. When using an existing VPC, you need to select the VPC and the corresponding subnets. Refer to launch with Cognito User Pool or launch with OpenID Connect for more details. Q: I did not receive the email containing the temporary password when launching the solution with Cognito User Pool. How can I resend the password? Your account is managed by the Cognito User Pool. To resend the temporary password, you can find the user pool created by the solution, delete and recreate the user using the same email address. If you still have the same issue, try with another email address. Q: How can I create more users for this solution? If you launched the solution with Cognito User Pool, go to the AWS console, find the user pool created by the solution, and you can create more users. If you launched the solution with OpenID Connect (OIDC), you should add more users in the user pool managed by the OIDC provider. Note that all users have the same privileges.","title":"Setup and configuration"},{"location":"implementation-guide/faq/#pricing","text":"Q: How will I be charged and billed for the use of this solution? The solution is free to use, and you are responsible for the cost of AWS services used while running this solution. You pay only for what you use, and there are no minimum or setup fees. Refer to the Centralized Logging with OpenSearch Cost section for detailed cost estimation. Q: Will there be additional cost for cross-account ingestion? No. The cost will be same as ingesting logs within the same AWS account.","title":"Pricing"},{"location":"implementation-guide/faq/#log-ingestion","text":"Q: What is the log agent used in the Centralized Logging with OpenSearch solution? Centralized Logging with OpenSearch uses AWS for Fluent Bit , a distribution of Fluent Bit maintained by AWS. The solution uses this distribution to ingest logs from Amazon EC2 and Amazon EKS. Q: I have already stored the AWS service logs of member accounts in a centralized logging account. How should I create service log ingestion for member accounts? In this case, you need to deploy the Centralized Logging with OpenSearch solution in the centralized logging account, and ingest AWS service logs using the Manual mode from the logging account. Refer to this guide for ingesting Application Load Balancer logs with Manual mode. 
You can do the same with other supported AWS services which output logs to S3. Q: Why there are some duplicated records in OpenSearch when ingesting logs via Kinesis Data Streams? This is usually because there is no enough Kinesis Shards to handle the incoming requests. When threshold error occurs in Kinesis, the Fluent Bit agent will retry that chunk . To avoid this issue, you need to estimate your log throughput and set a proper Kinesis shard number. Please refer to the Kinesis Data Streams quotas and limits . Centralized Logging with OpenSearch provides a built-in feature to scale-out and scale-in the Kinesis shards, and it would take a couple of minutes to scale out to the desired number. Q: How to install log agent on CentOS 7? Log in to your CentOS 7 machine and install SSM Agent manually. sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm sudo systemctl enable amazon-ssm-agent sudo systemctl start amazon-ssm-agent Go to the Instance Group panel of Centralized Logging with OpenSearch console, create Instance Group , select the CentOS 7 machine, choose Install log agent and wait for its status to be offline . Log in to CentOS 7 and install fluent-bit 1.9.3 manually. export RELEASE_URL = ${ FLUENT_BIT_PACKAGES_URL :- https ://packages.fluentbit.io } export RELEASE_KEY = ${ FLUENT_BIT_PACKAGES_KEY :- https ://packages.fluentbit.io/fluentbit.key } sudo rpm --import $RELEASE_KEY cat << EOF | sudo tee /etc/yum.repos.d/fluent-bit.repo [fluent-bit] name = Fluent Bit baseurl = $RELEASE_URL/centos/VERSION_ARCH_SUBSTR gpgcheck=1 repo_gpgcheck=1 gpgkey=$RELEASE_KEY enabled=1 EOF sudo sed -i 's|VERSION_ARCH_SUBSTR|\\$releasever/\\$basearch/|g' /etc/yum.repos.d/fluent-bit.repo sudo yum install -y fluent-bit-1.9.3-1 # Modify the configuration file sudo sed -i 's/ExecStart.*/ExecStart=\\/opt\\/fluent-bit\\/bin\\/fluent-bit -c \\/opt\\/fluent-bit\\/etc\\/fluent-bit.conf/g' /usr/lib/systemd/system/fluent-bit.service sudo systemctl daemon-reload sudo systemctl enable fluent-bit sudo systemctl start fluent-bit 4. Go back to the Instance Groups panel of the Centralized Logging with OpenSearch console and wait for the CentOS 7 machine status to be Online and proceed to create the instance group. Q: How can I consume CloudWatch custom logs? You can use Firehose to subscribe CloudWatch logs and transfer logs into Amazon S3. Firstly, create subscription filters with Amazon Kinesis Data Firehose based on this guide . Next, follow the instructions to learn how to transfer logs to Amazon S3. Then, you can use Centralized Logging with OpenSearch to ingest logs from Amazon S3 to OpenSearch.","title":"Log Ingestion"},{"location":"implementation-guide/faq/#log-visualization","text":"Q: How can I find the built-in dashboards in OpenSearch? Please refer to the AWS Service Logs and Application Logs to find out if there is a built-in dashboard supported. You also need to turn on the Sample Dashboard option when creating a log analytics pipeline. The dashboard will be inserted into the Amazon OpenSearch Service under Global Tenant . You can switch to the Global Tenant from the top right coder of the OpenSearch Dashboards.","title":"Log Visualization"},{"location":"implementation-guide/include-dashboard/","text":"You can access the built-in dashboard in Amazon OpenSearch to view log data. For more information, see Access Dashboard . 
You can click the below image to view the high-resolution sample dashboard.","title":"Include dashboard"},{"location":"implementation-guide/monitoring/","text":"Types of metrics The following types of metrics are available on the Centralized Logging with OpenSearch console. Log source metrics Fluent Bit FluentBitOutputProcRecords - The number of log records that this output instance has successfully sent. This is the total record count of all unique chunks sent by this output. If a record is not successfully sent, it does not count towards this metric. FluentBitOutputProcBytes - The number of bytes of log records that this output instance has successfully sent. This is the total byte size of all unique chunks sent by this output. If a record is not sent due to some error, then it will not count towards this metric. FluentBitOutputDroppedRecords - The number of log records that have been dropped by the output. This means they met an unrecoverable error or retries expired for their chunk. FluentBitOutputErrors - The number of chunks that have faced an error (either unrecoverable or retrievable). This is the number of times a chunk has failed, and does not correspond with the number of error messages you see in the Fluent Bit log output. FluentBitOutputRetriedRecords - The number of log records that experienced a retry. Note that this is calculated at the chunk level, and the count increased when an entire chunk is marked for retry. An output plugin may or may not perform multiple actions that generate many error messages when uploading a single chunk. FluentBitOutputRetriesFailed - The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit which applies to chunks. Once the Retry_Limit has been reached for a chunk, it is discarded and this metric is incremented. FluentBitOutputRetries - The number of times this output instance requested a retry for a chunk. Network Load Balancer SyslogNLBActiveFlowCount - The total number of concurrent flows (or connections) from clients to targets. This metric includes connections in the SYN_SENT and ESTABLISHED states. TCP connections are not terminated at the load balancer, so a client opening a TCP connection to a target counts as a single flow. SyslogNLBProcessedBytes - The total number of bytes processed by the load balancer, including TCP/IP headers. This count includes traffic to and from targets, minus health check traffic. Buffer metrics Log Buffer is a buffer layer between the Log Agent and OpenSearch clusters. The agent uploads logs into the buffer layer before being processed and delivered into the OpenSearch clusters. A buffer layer is a way to protect OpenSearch clusters from overwhelming. Kinesis Data Stream KDSIncomingBytes \u2013 The number of bytes successfully put to the Kinesis stream over the specified time period. This metric includes bytes from PutRecord and PutRecords operations. Minimum, Maximum, and Average statistics represent the bytes in a single put operation for the stream in the specified time period. KDSIncomingRecords \u2013 The number of records successfully put to the Kinesis stream over the specified time period. This metric includes record counts from PutRecord and PutRecords operations. Minimum, Maximum, and Average statistics represent the records in a single put operation for the stream in the specified time period. KDSPutRecordBytes \u2013 The number of bytes put to the Kinesis stream using the PutRecord operation over the specified time period. 
KDSThrottledRecords \u2013 The number of records rejected due to throttling in a PutRecords operation per Kinesis data stream, measured over the specified time period. KDSWriteProvisionedThroughputExceeded \u2013 The number of records rejected due to throttling for the stream over the specified time period. This metric includes throttling from PutRecord and PutRecords operations. The most commonly used statistic for this metric is Average. When the Minimum statistic has a non-zero value, records will be throttled for the stream during the specified time period. When the Maximum statistic has a value of 0 (zero), no records will be throttled for the stream during the specified time period. SQS SQSNumberOfMessagesSent - The number of messages added to a queue. SQSNumberOfMessagesDeleted - The number of messages deleted from the queue. Amazon SQS emits the NumberOfMessagesDeleted metric for every successful deletion operation that uses a valid receipt handle, including duplicate deletions. The following scenarios might cause the value of the NumberOfMessagesDeleted metric to be higher than expected: - Calling the DeleteMessage action on different receipt handles that belong to the same message: If the message is not processed before the visibility timeout expires, the message becomes available to other consumers that can process it and delete it again, increasing the value of the NumberOfMessagesDeleted metric. Calling the DeleteMessage action on the same receipt handle: If the message is processed and deleted, but you call the DeleteMessage action again using the same receipt handle, a success status is returned, increasing the value of the NumberOfMessagesDeleted metric. SQSApproximateNumberOfMessagesVisible - The number of messages available for retrieval from the queue. SQSApproximateAgeOfOldestMessage - The approximate age of the oldest non-deleted message in the queue. After a message is received three times (or more) and not processed, the message is moved to the back of the queue and the ApproximateAgeOfOldestMessage metric points at the second-oldest message that hasn't been received more than three times. This action occurs even if the queue has a redrive policy. Because a single poison-pill message (received multiple times but never deleted) can distort this metric, the age of a poison-pill message isn't included in the metric until the poison-pill message is consumed successfully. When the queue has a redrive policy, the message is moved to a dead-letter queue after the configured Maximum Receives . When the message is moved to the dead-letter queue, the ApproximateAgeOfOldestMessage metric of the dead-letter queue represents the time when the message was moved to the dead-letter queue (not the original time the message was sent). Log processor metrics The Log Processor Lambda is responsible for performing final processing on the data and bulk writing it to OpenSearch. TotalLogs \u2013 The total number of log records or events processed by the Lambda function. ExcludedLogs \u2013 The number of log records or events that were excluded from processing, which could be due to filtering or other criteria. LoadedLogs \u2013 The number of log records or events that were successfully processed and loaded into OpenSearch. FailedLogs \u2013 The number of log records or events that failed to be processed or loaded into OpenSearch. ConcurrentExecutions \u2013 The number of function instances that are processing events. 
If this number reaches your concurrent executions quota for the Region, or the reserved concurrency limit on the function, then Lambda throttles additional invocation requests. Duration \u2013 The amount of time that your function code spends processing an event. The billed duration for an invocation is the value of Duration rounded up to the nearest millisecond. Throttles \u2013 The number of invocation requests that are throttled. When all function instances are processing requests and no concurrency is available to scale up, Lambda rejects additional requests with a TooManyRequestsException error. Throttled requests and other invocation errors don't count as either Invocations or Errors. Invocations \u2013 The number of times that your function code is invoked, including successful invocations and invocations that result in a function error. Invocations aren't recorded if the invocation request is throttled or otherwise results in an invocation error. The value of Invocations equals the number of requests billed.","title":"Monitoring"},{"location":"implementation-guide/monitoring/#types-of-metrics","text":"The following types of metrics are available on the Centralized Logging with OpenSearch console.","title":"Types of metrics"},{"location":"implementation-guide/monitoring/#log-source-metrics","text":"","title":"Log source metrics"},{"location":"implementation-guide/monitoring/#fluent-bit","text":"FluentBitOutputProcRecords - The number of log records that this output instance has successfully sent. This is the total record count of all unique chunks sent by this output. If a record is not successfully sent, it does not count towards this metric. FluentBitOutputProcBytes - The number of bytes of log records that this output instance has successfully sent. This is the total byte size of all unique chunks sent by this output. If a record is not sent due to some error, then it will not count towards this metric. FluentBitOutputDroppedRecords - The number of log records that have been dropped by the output. This means they met an unrecoverable error or retries expired for their chunk. FluentBitOutputErrors - The number of chunks that have faced an error (either unrecoverable or retrievable). This is the number of times a chunk has failed, and does not correspond with the number of error messages you see in the Fluent Bit log output. FluentBitOutputRetriedRecords - The number of log records that experienced a retry. Note that this is calculated at the chunk level, and the count increased when an entire chunk is marked for retry. An output plugin may or may not perform multiple actions that generate many error messages when uploading a single chunk. FluentBitOutputRetriesFailed - The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit which applies to chunks. Once the Retry_Limit has been reached for a chunk, it is discarded and this metric is incremented. FluentBitOutputRetries - The number of times this output instance requested a retry for a chunk.","title":"Fluent Bit"},{"location":"implementation-guide/monitoring/#network-load-balancer","text":"SyslogNLBActiveFlowCount - The total number of concurrent flows (or connections) from clients to targets. This metric includes connections in the SYN_SENT and ESTABLISHED states. TCP connections are not terminated at the load balancer, so a client opening a TCP connection to a target counts as a single flow. 
SyslogNLBProcessedBytes - The total number of bytes processed by the load balancer, including TCP/IP headers. This count includes traffic to and from targets, minus health check traffic.","title":"Network Load Balancer"},{"location":"implementation-guide/monitoring/#buffer-metrics","text":"Log Buffer is a buffer layer between the Log Agent and OpenSearch clusters. The agent uploads logs into the buffer layer before being processed and delivered into the OpenSearch clusters. A buffer layer is a way to protect OpenSearch clusters from overwhelming.","title":"Buffer metrics"},{"location":"implementation-guide/monitoring/#kinesis-data-stream","text":"KDSIncomingBytes \u2013 The number of bytes successfully put to the Kinesis stream over the specified time period. This metric includes bytes from PutRecord and PutRecords operations. Minimum, Maximum, and Average statistics represent the bytes in a single put operation for the stream in the specified time period. KDSIncomingRecords \u2013 The number of records successfully put to the Kinesis stream over the specified time period. This metric includes record counts from PutRecord and PutRecords operations. Minimum, Maximum, and Average statistics represent the records in a single put operation for the stream in the specified time period. KDSPutRecordBytes \u2013 The number of bytes put to the Kinesis stream using the PutRecord operation over the specified time period. KDSThrottledRecords \u2013 The number of records rejected due to throttling in a PutRecords operation per Kinesis data stream, measured over the specified time period. KDSWriteProvisionedThroughputExceeded \u2013 The number of records rejected due to throttling for the stream over the specified time period. This metric includes throttling from PutRecord and PutRecords operations. The most commonly used statistic for this metric is Average. When the Minimum statistic has a non-zero value, records will be throttled for the stream during the specified time period. When the Maximum statistic has a value of 0 (zero), no records will be throttled for the stream during the specified time period.","title":"Kinesis Data Stream"},{"location":"implementation-guide/monitoring/#sqs","text":"SQSNumberOfMessagesSent - The number of messages added to a queue. SQSNumberOfMessagesDeleted - The number of messages deleted from the queue. Amazon SQS emits the NumberOfMessagesDeleted metric for every successful deletion operation that uses a valid receipt handle, including duplicate deletions. The following scenarios might cause the value of the NumberOfMessagesDeleted metric to be higher than expected: - Calling the DeleteMessage action on different receipt handles that belong to the same message: If the message is not processed before the visibility timeout expires, the message becomes available to other consumers that can process it and delete it again, increasing the value of the NumberOfMessagesDeleted metric. Calling the DeleteMessage action on the same receipt handle: If the message is processed and deleted, but you call the DeleteMessage action again using the same receipt handle, a success status is returned, increasing the value of the NumberOfMessagesDeleted metric. SQSApproximateNumberOfMessagesVisible - The number of messages available for retrieval from the queue. SQSApproximateAgeOfOldestMessage - The approximate age of the oldest non-deleted message in the queue. 
After a message is received three times (or more) and not processed, the message is moved to the back of the queue and the ApproximateAgeOfOldestMessage metric points at the second-oldest message that hasn't been received more than three times. This action occurs even if the queue has a redrive policy. Because a single poison-pill message (received multiple times but never deleted) can distort this metric, the age of a poison-pill message isn't included in the metric until the poison-pill message is consumed successfully. When the queue has a redrive policy, the message is moved to a dead-letter queue after the configured Maximum Receives . When the message is moved to the dead-letter queue, the ApproximateAgeOfOldestMessage metric of the dead-letter queue represents the time when the message was moved to the dead-letter queue (not the original time the message was sent).","title":"SQS"},{"location":"implementation-guide/monitoring/#log-processor-metrics","text":"The Log Processor Lambda is responsible for performing final processing on the data and bulk writing it to OpenSearch. TotalLogs \u2013 The total number of log records or events processed by the Lambda function. ExcludedLogs \u2013 The number of log records or events that were excluded from processing, which could be due to filtering or other criteria. LoadedLogs \u2013 The number of log records or events that were successfully processed and loaded into OpenSearch. FailedLogs \u2013 The number of log records or events that failed to be processed or loaded into OpenSearch. ConcurrentExecutions \u2013 The number of function instances that are processing events. If this number reaches your concurrent executions quota for the Region, or the reserved concurrency limit on the function, then Lambda throttles additional invocation requests. Duration \u2013 The amount of time that your function code spends processing an event. The billed duration for an invocation is the value of Duration rounded up to the nearest millisecond. Throttles \u2013 The number of invocation requests that are throttled. When all function instances are processing requests and no concurrency is available to scale up, Lambda rejects additional requests with a TooManyRequestsException error. Throttled requests and other invocation errors don't count as either Invocations or Errors. Invocations \u2013 The number of times that your function code is invoked, including successful invocations and invocations that result in a function error. Invocations aren't recorded if the invocation request is throttled or otherwise results in an invocation error. The value of Invocations equals the number of requests billed.","title":"Log processor metrics"},{"location":"implementation-guide/release-notes/","text":"Date Changes March 2023 Initial release. April 2023 Released version 1.0.1 Fixed deployment failure due to S3 ACL changes. June 2023 Released version 1.0.3 Fixed the EKS Fluent Bit deployment configuration generation issue. 
August 2023 Released version 2.0.0 Added feature of ingesting log from S3 bucket continuously or on-demand Added log pipeline monitoring dashboard into the solution console Supported one-click enablement of pipeline alarms Added an option to automatically attach required IAM policies when creating an Instance Group Displayed an error message on the console when the installation of log agent fails Updated Application log pipeline creation process by allowing customer to specify a log source Added validations to OpenSearch domain when importing a domain or selecting a domain to create log pipeline Supported installing log agent on AL2023 instances Supported ingesting WAF (associated with CloudFront) sampled logs to OpenSearch in other regions except us-east-1 Allowed the same index name in different OpenSearch domains September 2023 Released version 2.0.1 Fixed the following issues: Automatically adjust log processor Lambda request's body size based on AOS instance type When you create an application log pipeline and select Nginx as log format, the default sample dashboard option is set to \"Yes\" Monitoring page cannot show metrics when there is only one dot The time of the data point of the monitoring metrics does not match the time of the abscissa November 2023 Released version 2.1.0 Added Light Engine to provide an Athena-based serverless and cost-effective log analytics engine to analyze infrequent access logs Added OpenSearch Ingestion to provide more log processing capabilities, with which OSI can provision compute resource (OCU)and pay per ingestion capacity Supported parsing logs in nested JSON format Supported CloudTrail logs ingestion from the specified bucket manually Fix can not list instances when creating instance group issue Fix the EC2 instance launch by the Auto Scaling group will fail to pass the health check issue December 2023 Released version 2.1.1 Fixed the following issues: Instance should not be added to the same Instance Group Cannot deploy CLO in UAE region Log ingestion error in light engine when not specified time key in the log config May 2024 Released version 2.2.0 Added the feature of ingesting logs from Windows instances Added the support of Light Engine for log analytics in AWS service logs (VPC Flow logs, Amazon RDS logs, AWS CloudTrail logs) and application logs (Syslog as log source and Amazon S3 as log source) Supported Unix time Added auto-generated tag 'CLOSolutionCostAnalysis' to view solution cost in AWS Billing and Cost Management service Added AWS Lambda concurrency configuration during pipeline creation Added newly supported Regions: Asia Pacific (Hyderabad, Jakarta,Melbourne), Israel (Tel Aviv), Canada (Calgary), Europe (Spain, Zurich), Middle East (UAE) Added new version release notification in the solution web console","title":"Revisions"},{"location":"implementation-guide/source/","text":"Visit our GitHub repository to download the source code for this solution. The solution template is generated using the AWS Cloud Development Kit (CDK) . Refer to the README.md file for additional information.","title":"Developer guide"},{"location":"implementation-guide/trouble-shooting/","text":"Troubleshooting The following help you to fix errors or problems that you might encounter when using Centralized Logging with OpenSearch. Error: Failed to assume service-linked role arn:x:x:x:/AWSServiceRoleForAppSync The reason for this error is that the account has never used the AWS AppSync service. You can deploy the solution's CloudFormation template again. 
AWS has already created the role automatically when you encountered the error. You can also go to AWS CloudShell or the local terminal and run the following AWS CLI command to Link AppSync Role aws iam create-service-linked-role --aws-service-name appsync.amazonaws.com Error: Unable to add backend role Centralized Logging with OpenSearch only supports Amazon OpenSearch Service domain with Fine-grained access control enabled. You need to go to Amazon OpenSearch Service console, and edit the Access policy for the Amazon OpenSearch Service domain. Error\uff1aUser xxx is not authorized to perform sts:AssumeRole on resource If you see this error, please make sure you have entered the correct information during cross account setup , and then please wait for several minutes. Centralized Logging with OpenSearch uses AssumeRole for cross-account access. This is the best practice to temporary access the AWS resources in your member account. However, these roles created during cross account setup take seconds or minutes to be affective. Error: PutRecords API responded with error='InvalidSignatureException' Fluent-bit agent reports PutRecords API responded with error='InvalidSignatureException', message='The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.' Please restart the fluent-bit agent. For example, on EC2 with Amazon Linux2, run command: sudo service fluent-bit restart Error: PutRecords API responded with error='AccessDeniedException' Fluent-bit agent deployed on EKS Cluster reports \"AccessDeniedException\" when sending records to Kinesis. Verify that the IAM role trust relations are correctly set. With the Centralized Logging with OpenSearch console: Open the Centralized Logging with OpenSearch console. In the left sidebar, under Log Source , choose EKS Clusters . Choose the EKS Cluster that you want to check. Click the IAM Role ARN which will open the IAM Role in AWS Console. Choose the Trust relationships to verify that the OIDC Provider, the service account namespace and conditions are correctly set. You can get more information from Amazon EKS IAM role configuration My CloudFormation stack is stuck on deleting an AWS::Lambda::Function resource when I update the stack. How to resolve it? The Lambda function resides in a VPC, and you need to wait for the associated ENI resource to be deleted. The agent status is offline after I restart the EC2 instance, how can I make it auto start on instance restart? This usually happens if you have installed the log agent, but restart the instance before you create any Log Ingestion. The log agent will auto restart if there is at least one Log Ingestion. If you have a log ingestion, but the problem still exists, you can use systemctl status fluent-bit to check its status inside the instance. I have switched to Global tenant. However, I still cannot find the dashboard in OpenSearch. This is usually because Centralized Logging with OpenSearch received 403 error from OpenSearch when creating the index template and dashboard. This can be fixed by re-run the Lambda function manually by following the steps below: With the Centralized Logging with OpenSearch console: Open the Centralized Logging with OpenSearch console, and find the AWS Service Log pipeline which has this issue. Copy the first 5 characters from the ID section. E.g. you should copy c169c from ID c169cb23-88f3-4a7e-90d7-4ab4bc18982c Go to AWS Console > Lambda. 
Error from Fluent-bit agent: version `GLIBC_2.25' not found
This error is caused by an old version of glibc. Centralized Logging with OpenSearch versions later than 1.2 require glibc-2.25 or above, so you must upgrade the existing version on the EC2 instance first. The upgrade commands for different kinds of OS are shown as follows:

Important
We strongly recommend that you try the commands in a test environment first. Any upgrade failure may cause severe loss.
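Before running the OS-specific commands below, you can check which glibc version is currently installed; the first line of the output shows the version:

ldd --version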
Redhat 7.9
For Redhat 7.9, the whole process takes about 2 hours, and at least 10 GB of storage is needed.

# install libraries
yum install -y gcc gcc-c++ m4 python3 bison fontconfig-devel libXpm-devel texinfo bzip2 wget
echo /usr/local/lib >> /etc/ld.so.conf

# create tmp directory
mkdir -p /tmp/library
cd /tmp/library

# install gmp-6.1.0
wget https://ftp.gnu.org/gnu/gmp/gmp-6.1.0.tar.bz2
tar xjvf gmp-6.1.0.tar.bz2
cd gmp-6.1.0
./configure --prefix=/usr/local
make && make install
ldconfig
cd ..

# install mpfr-3.1.4
wget https://gcc.gnu.org/pub/gcc/infrastructure/mpfr-3.1.4.tar.bz2
tar xjvf mpfr-3.1.4.tar.bz2
cd mpfr-3.1.4
./configure --with-gmp=/usr/local --prefix=/usr/local
make && make install
ldconfig
cd ..

# install mpc-1.0.3
wget https://gcc.gnu.org/pub/gcc/infrastructure/mpc-1.0.3.tar.gz
tar xzvf mpc-1.0.3.tar.gz
cd mpc-1.0.3
./configure --prefix=/usr/local
make && make install
ldconfig
cd ..

# install gcc-9.3.0
wget https://ftp.gnu.org/gnu/gcc/gcc-9.3.0/gcc-9.3.0.tar.gz
tar xzvf gcc-9.3.0.tar.gz
cd gcc-9.3.0
mkdir build
cd build/
../configure --enable-checking=release --enable-languages=c,c++ --disable-multilib --prefix=/usr
make -j4 && make install
ldconfig
cd ../..

# install make-4.3
wget https://ftp.gnu.org/gnu/make/make-4.3.tar.gz
tar xzvf make-4.3.tar.gz
cd make-4.3
mkdir build
cd build
../configure --prefix=/usr
make && make install
cd ../..

# install glibc-2.31
wget https://ftp.gnu.org/gnu/glibc/glibc-2.31.tar.gz
tar xzvf glibc-2.31.tar.gz
cd glibc-2.31
mkdir build
cd build/
../configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin --disable-sanity-checks --disable-werror
make all && make install
make localedata/install-locales

# clean tmp directory
cd /tmp
rm -rf /tmp/library

Ubuntu 22
sudo ln -s /snap/core20/1623/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
sudo ln -s /snap/core20/1623/usr/lib/x86_64-linux-gnu/libssl.so.1.1 /usr/lib/x86_64-linux-gnu/libssl.so.1.1
sudo ln -s /usr/lib/x86_64-linux-gnu/libsasl2.so.2 /usr/lib/libsasl2.so.3

Amazon Linux 2023
sudo su -
yum install -y wget perl unzip gcc zlib-devel
mkdir /tmp/openssl
cd /tmp/openssl
wget https://www.openssl.org/source/openssl-1.1.1s.tar.gz
tar xzvf openssl-1.1.1s.tar.gz
cd openssl-1.1.1s
./config --prefix=/usr/local/openssl11 --openssldir=/usr/local/openssl11 shared zlib
make
make install
echo /usr/local/openssl11/lib/ >> /etc/ld.so.conf
ldconfig
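For Amazon Linux 2023, you can optionally verify that the locally built OpenSSL 1.1 is in place after the commands finish (a sanity check that assumes the default install layout produced by the --prefix above):

/usr/local/openssl11/bin/openssl version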
Uninstall the Centralized Logging with OpenSearch solution

Warning
You will encounter an IAM role missing error if you delete the Centralized Logging with OpenSearch main stack before you delete the log pipelines.

The Centralized Logging with OpenSearch console launches additional CloudFormation stacks to ingest logs. If you want to uninstall the solution, we recommend that you delete the log pipelines (including
AWS Service log pipelines and application log pipelines) before uninstalling the solution.

Step 1. Delete Application Log Pipelines

Important
Delete all log ingestions before deleting an application log pipeline.

1. Go to the Centralized Logging with OpenSearch console; in the left sidebar, choose Application Log.
2. Click the application log pipeline to view its details.
3. In the Ingestion tab, delete all the application log ingestions in the pipeline.
4. Uninstall or disable the Fluent Bit agent.
EC2 (optional): after the log ingestion is removed from the Instance Group, Fluent Bit automatically stops shipping logs, so stopping Fluent Bit on your instances is optional. Here are the commands for stopping the Fluent Bit agent:
sudo service fluent-bit stop
sudo systemctl disable fluent-bit.service
EKS DaemonSet (mandatory): if you have chosen to deploy the Fluent Bit agent using DaemonSet, you need to delete your Fluent Bit agent. Otherwise, the agent will continue to ship logs to the Centralized Logging with OpenSearch pipelines.
kubectl delete -f ~/fluent-bit-logging.yaml
EKS Sidecar (mandatory): remove the fluent-bit agent from your .yaml file, and restart your pod.
5. Delete the application log pipeline.
6. Repeat step 2 to step 5 to delete all your application log pipelines.

Step 2. Delete AWS Service Log Pipelines
1. Go to the Centralized Logging with OpenSearch console; in the left sidebar, choose AWS Service Log.
2. Select and delete the AWS Service Log pipelines one by one.

Step 3. Clean up imported OpenSearch domains
1. Delete the access proxy, if you have created the proxy using the Centralized Logging with OpenSearch console.
2. Delete the alarms, if you have created alarms using the Centralized Logging with OpenSearch console.
3. Delete the VPC peering connection between the Centralized Logging with OpenSearch VPC and the OpenSearch VPC:
Go to the AWS VPC Console.
Choose Peering connections in the left sidebar.
Find and delete the VPC peering connection between the Centralized Logging with OpenSearch VPC and the OpenSearch VPC. You may not have a peering connection if you did not use the "Automatic" mode when importing OpenSearch domains.
4. (Optional) Remove the imported OpenSearch domains. (This does not delete the Amazon OpenSearch domains in the AWS account.)

Step 4. Delete the Centralized Logging with OpenSearch stack
1. Go to the CloudFormation console.
2. Find the CloudFormation stack of the Centralized Logging with OpenSearch solution.
3. (Optional) Delete the S3 buckets created by Centralized Logging with OpenSearch.

Important
The S3 bucket whose name contains LoggingBucket is the centralized bucket for your AWS service logs. You might have enabled AWS services to send logs to this S3 bucket; deleting it will cause those services to fail to send logs.

Choose the CloudFormation stack of the Centralized Logging with OpenSearch solution, and select the Resources tab.
In the search bar, enter AWS::S3::Bucket. This will show all the S3 buckets created by the solution; the Physical ID field is the S3 bucket name.
Go to the S3 console, and find each S3 bucket using the bucket name.
Empty and delete the S3 bucket.
4. Delete the CloudFormation stack of the Centralized Logging with OpenSearch solution.
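If you prefer to empty and remove a bucket from the command line, a minimal sketch is shown below. The bucket name is a placeholder to replace with the Physical ID found above. Note that if versioning is enabled on the bucket, aws s3 rm does not delete old object versions, and you may still need to empty the bucket from the S3 console:

aws s3 rm s3://<your-clo-bucket-name> --recursive
aws s3 rb s3://<your-clo-bucket-name>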
Upgrade the solution

Time to upgrade: approximately 20 minutes

Warning
The following upgrade documentation only supports Centralized Logging with OpenSearch version 2.x and later. If you are using an older version, such as v1.x or any version of Log Hub, refer to the Discussions on GitHub.

Upgrade Overview
Use the following steps to upgrade the solution on the AWS console.
Step 1. Update the CloudFormation stack
Step 2. Refresh the web console

Step 1. Update the CloudFormation stack
1. Go to the AWS CloudFormation console.
2. Select the Centralized Logging with OpenSearch main stack, and click the Update button.
3. Choose Replace current template, and enter the specific Amazon S3 URL according to your initial deployment type. Refer to Deployment Overview for more details.
Cognito User Pool:
Launch with a new VPC: https://solutions-reference.s3.amazonaws.com/centralized-logging-with-opensearch/latest/CentralizedLogging.template
Launch with an existing VPC: https://solutions-reference.s3.amazonaws.com/centralized-logging-with-opensearch/latest/CentralizedLoggingFromExistingVPC.template

OpenID Connect (OIDC):
Launch with a new VPC in AWS Regions: https://solutions-reference.s3.amazonaws.com/centralized-logging-with-opensearch/latest/CentralizedLoggingWithOIDC.template
Launch with an existing VPC in AWS Regions: https://solutions-reference.s3.amazonaws.com/centralized-logging-with-opensearch/latest/CentralizedLoggingFromExistingVPCWithOIDC.template
Launch with a new VPC in AWS China Regions: https://solutions-reference.s3.amazonaws.com/centralized-logging-with-opensearch/latest/CentralizedLoggingWithOIDC.template
Launch with an existing VPC in AWS China Regions: https://solutions-reference.s3.amazonaws.com/centralized-logging-with-opensearch/latest/CentralizedLoggingFromExistingVPCWithOIDC.template

4. Under Parameters, review the parameters for the template and modify them as necessary.
5. Choose Next.
6. On the Configure stack options page, choose Next.
7. On the Review page, review and confirm the settings. Check the box I acknowledge that AWS CloudFormation might create IAM resources.
8. Choose Update stack to deploy the stack.
You can view the status of the stack in the Status column of the AWS CloudFormation console. You should receive an UPDATE_COMPLETE status in approximately 15 minutes.

Step 2. Refresh the web console
Now you have completed all the upgrade steps. Click the refresh button in your browser.
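If you manage the stack from the command line instead, the console steps above roughly correspond to the sketch below. The stack name and parameter key are placeholders; pick the template URL that matches your deployment type from the list above, and repeat the ParameterKey entry for each existing parameter you want to keep unchanged:

aws cloudformation update-stack \
  --stack-name <your-main-stack-name> \
  --template-url https://solutions-reference.s3.amazonaws.com/centralized-logging-with-opensearch/latest/CentralizedLogging.template \
  --parameters ParameterKey=<ExistingParameterName>,UsePreviousValue=true \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM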
Application Log Analytics Pipelines

Centralized Logging with OpenSearch supports ingesting application logs from the following log sources:
Amazon Instance Group: the solution automatically installs the log agent (Fluent Bit 1.9), collects application logs on EC2 instances, and then sends the logs to Amazon OpenSearch.
Amazon EKS cluster: the solution generates an all-in-one configuration file for customers to deploy the log agent (Fluent Bit 1.9) as a DaemonSet or Sidecar. After the log agent is deployed, the solution starts collecting pod logs and sends them to Amazon OpenSearch Service.
Amazon S3: the solution either ingests logs in the specified Amazon S3 location continuously or performs one-time ingestion. You can also filter logs based on an Amazon S3 prefix or parse logs with a custom Log Config.
Syslog: the solution collects syslog logs through the UDP or TCP protocol.

Amazon OpenSearch Service is suitable for real-time log analytics and frequent queries, and has full-text search capability. As of release 2.1.0, the solution supports log ingestion into Light Engine, which is suitable for non-real-time log analytics and infrequent queries, and has SQL-like search capability. After creating a log analytics pipeline, you can add more log sources to it. For more information, see add a new log source.

Important
If you are using Centralized Logging with OpenSearch to create an application log pipeline for the first time, you are recommended to learn the concepts and supported log formats and log sources.

Supported Log Formats and Log Sources
The following table lists the log formats supported by each log source. For more information about how to create log ingestion for each log format, refer to Log Config.

Log Format | Instance Group | EKS Cluster | Amazon S3 | Syslog
Nginx | Yes | Yes | Yes | No
Apache HTTP Server | Yes | Yes | Yes | No
JSON | Yes | Yes | Yes | Yes
Single-line Text | Yes | Yes | Yes | Yes
Multi-line Text | Yes | Yes | Yes (not supported in Light Engine mode) | No
Multi-line Text (Spring Boot) | Yes | Yes | Yes (not supported in Light Engine mode) | No
Syslog RFC5424/RFC3164 | No | No | No | Yes
Syslog Custom | No | No | No | Yes
Windows Event | Yes | No | No | No
IIS Configuration Mode | Yes | No | No | No

Log Config
The Centralized Logging with OpenSearch solution supports creating log configs for the following log formats: JSON, Apache, Nginx, Syslog, single-line text, and multi-line text. For more information, refer to supported log formats and log sources. The following describes how to create a log config for each log format.

Create a JSON config
1. Sign in to the Centralized Logging with OpenSearch console.
2. In the left sidebar, under Resources, choose Log Config.
3. Choose Create a log config.
4. Specify Config Name.
5. Choose JSON in the log type dropdown list.
6. In the Sample log parsing section, paste a sample JSON log and click Parse log to verify that the log parsing is successful. The JSON type supports nested JSON with a maximum nesting depth of X. If your JSON log sample is nested JSON, choosing Parse log displays a list of field type options for each layer. If needed, you can set the corresponding field type for each layer of fields, or choose Remove to delete a field, in which case its type will be automatically inferred by OpenSearch. For example:
{"timestamp": "2023-11-06T08:29:55.266Z", "correlationId": "566829027325526589", "processInfo": { "startTime": "2023-11-06T08:29:55.266Z", "hostname": "ltvtix0apidev01", "domainId": "e6826d97-a60f-45cb-93e1-b4bb5a7add29", "groupId": "group-2", "groupName": "grp_dev_bba", "serviceId": "instance-1", "serviceName": "ins_dev_bba", "version": "7.7.20210130" }, "transactionSummary": { "path": "https://www.leadmission-critical.info/relationships", "protocol": "https", "protocolSrc": "97", "status": "exception", "serviceContexts": [ { "service": "NSC_APP-117127_DCTM_Get Documentum Token", "monitor": true, "client": "Pass Through", "org": null, "app": null, "method": "getTokenUsingPOST", "status": "exception", "duration": 25270 } ] } }
7. Check that each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types.

Note
You must specify the datetime of the log using the key "time". If not specified, the system time will be added. For nested JSON, the Time Key must be on the first level.

8. Specify the Time format. The format syntax follows strptime; see the strptime manual for details.
9. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution will ingest only logs that match ALL the specified conditions.
10. Select Create.
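Before pasting a sample into the console, you can optionally confirm that it is valid JSON on any machine with jq installed (a quick local check, illustrative only and not part of the solution itself):

echo '{"time": "2023-11-06T08:29:55.266Z", "status": "exception"}' | jq .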
Create an Apache HTTP server log config
Apache HTTP Server (httpd) is capable of writing error and access log files to a local directory. You can configure Centralized Logging with OpenSearch to ingest Apache HTTP server logs.
1. Sign in to the Centralized Logging with OpenSearch console.
2. In the left sidebar, under Resources, choose Log Config.
3. Click the Create a log config button.
4. Specify Config Name.
5. Choose Apache HTTP server in the log type dropdown menu.
6. In the Apache Log Format section, paste your Apache HTTP server log format configuration. It is in the format of /etc/httpd/conf/httpd.conf and starts with LogFormat. For example:
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
7. (Optional) In the Sample log parsing section, paste a sample Apache HTTP server log to verify that the log parsing is successful. For example:
127.0.0.1 - - [22/Dec/2021:06:48:57 +0000] "GET /xxx HTTP/1.1" 404 196 "-" "curl/7.79.1"
8. Choose Create.
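If you are unsure which LogFormat lines are active on a server, you can list them straight from the configuration file before copying one into the console (the path below is the default for httpd on Amazon Linux/RHEL and may differ on your system):

grep -i '^\s*LogFormat' /etc/httpd/conf/httpd.conf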
Create an Nginx log config
1. Sign in to the Centralized Logging with OpenSearch console.
2. In the left sidebar, under Resources, choose Log Config.
3. Click the Create a log config button.
4. Specify Config Name.
5. Choose Nginx in the log type dropdown menu.
6. In the Nginx Log Format section, paste your Nginx log format configuration. It is in the format of /etc/nginx/nginx.conf and starts with log_format. For example:
log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"';
7. (Optional) In the Sample log parsing section, paste a sample Nginx log to verify that the log parsing is successful. For example:
127.0.0.1 - - [24/Dec/2021:01:27:11 +0000] "GET / HTTP/1.1" 200 3520 "-" "curl/7.79.1" "-"
8. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution will ingest only logs that match ALL the specified conditions.
9. Select Create.

Create a Syslog config
1. Sign in to the Centralized Logging with OpenSearch console.
2. In the left sidebar, under Resources, choose Log Config.
3. Click the Create a log config button.
4. Specify Config Name.
5. Choose Syslog in the log type dropdown menu. Note that Centralized Logging with OpenSearch also supports Syslog in JSON format and single-line text format.

RFC5424
1. Paste a sample RFC5424 log. For example:
<35>1 2013-10-11T22:14:15Z client_machine su - - - 'su root' failed for joe on /dev/pts/2
2. Choose Parse Log.
3. Check that each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types.

Note
You must specify the datetime of the log using the key "time". If not specified, the system time will be added.

4. Specify the Time format. The format syntax follows strptime; see the strptime manual for details. For example:
%Y-%m-%dT%H:%M:%SZ
5. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution will ingest only logs that match ALL the specified conditions.
6. Select Create.

RFC3164
1. Paste a sample RFC3164 log. For example:
<35>Oct 12 22:14:15 client_machine su: 'su root' failed for joe on /dev/pts/2
2. Choose Parse Log.
3. Check that each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types.

Note
You must specify the datetime of the log using the key "time". If not specified, the system time will be added. Since there is no year in the RFC3164 timestamp, it cannot be displayed as a time histogram in the Discover interface of Amazon OpenSearch.

4. Specify the Time format. The format syntax follows strptime; see the strptime manual for details. For example:
%b %d %H:%M:%S
5. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution will ingest only logs that match ALL the specified conditions.
6. Select Create.

Custom
1. In the Syslog Format section, paste your Syslog log format configuration. It is in the format of /etc/rsyslog.conf and starts with template or $template. The format syntax follows Syslog Message Format. For example:
<%pri%>1 %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%\n
2. In the Sample log parsing section, paste a sample syslog log to verify that the log parsing is successful. For example:
<35>1 2013-10-11T22:14:15.003Z client_machine su - - 'su root' failed for joe on /dev/pts/2
3. Check that each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types.

Note
You must specify the datetime of the log using the key "time". If not specified, the system time will be added.

4. Specify the Time format. The format syntax follows strptime; see the strptime manual for details.
5. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution will ingest only logs that match ALL the specified conditions.
6. Select Create.
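Once a syslog pipeline is up, you can send a test message to its endpoint from any Linux host with the util-linux logger tool. The endpoint below is a placeholder for the address the pipeline exposes; use -d for UDP or -T for TCP, matching the protocol you configured:

logger -n <syslog-endpoint> -P 514 -d "test message from logger"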
Create a single-line text config
1. Sign in to the Centralized Logging with OpenSearch console.
2. In the left sidebar, under Resources, choose Log Config.
3. Click the Create a log config button.
4. Specify Config Name.
5. Choose Single-line Text in the log type dropdown menu.
6. Write the regular expression in Rubular to validate it first, then enter the value. For example:
(?<remote_addr>\S+)\s*-\s*(?<remote_user>\S+)\s*\[(?<time_local>\d+/\S+/\d+:\d+:\d+:\d+)\s+\S+\]\s*"(?<request_method>\S+)\s+(?<request_uri>\S+)\s+\S+"\s*(?<status>\S+)\s*(?<body_bytes_sent>\S+)\s*"(?<http_referer>[^"]*)"\s*"(?<http_user_agent>[^"]*)"\s*"(?<http_x_forwarded_for>[^"]*)".*
7. In the Sample log parsing section, paste a sample single-line text log and click Parse log to verify that the log parsing is successful. For example:
127.0.0.1 - - [24/Dec/2021:01:27:11 +0000] "GET / HTTP/1.1" 200 3520 "-" "curl/7.79.1" "-"
8. Check that each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types.

Note
You must specify the datetime of the log using the key "time". If not specified, the system time will be added.

9. Specify the Time format. The format syntax follows strptime; see the strptime manual for details.
10. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution will ingest only logs that match ALL the specified conditions.
11. Select Create.

Create a multi-line text config
1. Sign in to the Centralized Logging with OpenSearch console.
2. In the left sidebar, under Resources, choose Log Config.
3. Click the Create a log config button.
4. Specify Config Name.
5. Choose Multi-line Text in the log type dropdown menu.

Java - Spring Boot
1. For Java Spring Boot logs, you can provide a simple log format. For example:
%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger : %msg%n
2. Paste a sample multi-line log. For example:
2022-02-18 10:32:26.400 ERROR [http-nio-8080-exec-1] org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.ArithmeticException: / by zero] with root cause java.lang.ArithmeticException: / by zero at com.springexamples.demo.web.LoggerController.logs(LoggerController.java:22) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke
3. Choose Parse Log.
4. Check that each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types.

Note
You must specify the datetime of the log using the key "time". If not specified, the system time will be added.

5. Specify the Time format. The format syntax follows strptime; see the strptime manual for details.
6. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution will ingest only logs that match ALL the specified conditions.
7. Select Create.

Custom
For other kinds of logs, you can specify the first-line regex pattern. For example: (?