The Aerospike Knowledge Base has moved to https://support.aerospike.com. Content on https://discuss.aerospike.com is being migrated to either https://support.aerospike.com or https://docs.aerospike.com. Maintenance on articles stored in this repository ceased on December 31st 2022 and this article may be stale. If you have any questions, please do not hesitate to raise a case via https://support.aerospike.com.
FAQ - What materials and logs are likely to be required by Aerospike Support?
Detail
All support cases are individual and may require specific diagnostic information. This document lists basic case types and the logs that may be required. This is a general guide and not definitive. Having these logs available upon opening the support case can reduce waiting times and speed up the investigation.
Answer
The following logs and materials may be required:
In all support cases
In all cases, Aerospike Support is likely to request a collectinfo. This contains vital information about system configuration and performance. To generate one:
- Go to the Aerospike website (https://aerospike.com)
- Download and install the latest tools release on your system
- Run the following command:
$ sudo asadm -e collectinfo
It is important to use the latest tools release, as asadm is continually improved to gather more useful information.
Cluster integrity issues
In these cases it is helpful to have the following:
- /var/log/aerospike/aerospike.log from all nodes in the cluster making sure to cover the time the issue started
- Output from any network monitoring such as SAR
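When attaching aerospike.log, it can help to trim each node's log down to the incident window first. The sketch below is illustrative only: it builds a synthetic stand-in for /var/log/aerospike/aerospike.log (the log content shown is invented) and filters by timestamp, assuming Aerospike's default "Mon DD YYYY HH:MM:SS GMT:" line prefix.

```shell
# Illustrative only: isolate the incident window from a node's log before
# attaching it to a case. A synthetic file stands in for
# /var/log/aerospike/aerospike.log; adjust the pattern to your own window.
LOG=/tmp/aerospike_sample.log
cat > "$LOG" <<'EOF'
Aug 11 2016 09:58:01 GMT: INFO (info): cluster-size 5
Aug 11 2016 10:00:02 GMT: WARNING (hb): node departed
Aug 11 2016 10:05:03 GMT: INFO (info): cluster-size 4
EOF
# Everything between 10:00 and 10:09 on Aug 11
grep -E '^Aug 11 2016 10:0[0-9]' "$LOG" > /tmp/aerospike_window.log
```

Err on the side of a generous window; Aerospike Support can discard excess lines far more easily than it can recover missing ones.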
XDR issues - Aerospike 3.8 and above
- /var/log/aerospike/aerospike.log from all nodes in the local and remote DC
- Any output from monitoring tools such as Graphite
XDR issues - releases prior to Aerospike 3.8
- /var/log/aerospike/aerospike.log from all nodes in the local and remote DC
- /var/log/aerospike/asxdr.log from all nodes in the local and remote DC
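Since both aerospike.log and asxdr.log are needed from every node, bundling them per node before upload can save a round trip with Support. A minimal sketch follows; the touch commands create stand-in files purely for illustration, and the staging path and archive name are arbitrary choices.

```shell
# Illustrative only: bundle a node's logs into one per-node archive.
# On a real node, copy the actual /var/log/aerospike/aerospike.log and
# asxdr.log into the staging directory instead of the stand-in files.
STAGE=/tmp/aerospike_case_logs
mkdir -p "$STAGE"
touch "$STAGE/aerospike.log" "$STAGE/asxdr.log"
# Name the archive after the host so per-node bundles stay distinguishable
tar -czf "/tmp/aerospike_logs_$(uname -n).tgz" -C /tmp aerospike_case_logs
```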
Any crash including OOM kills / suspected memory leaks
- /var/log/aerospike/aerospike.log from affected node(s)
- top output from around the time of the crash
- System kernel messages (e.g. dmesg output)
- Any monitoring output, e.g. from Graphite
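The kernel messages and top output above can be snapshotted with standard Linux tools. A hedged sketch, with arbitrary output paths; the "|| true" guards keep it safe on hosts where dmesg or top are restricted or unavailable:

```shell
# Illustrative sketch: snapshot kernel messages and process state after a
# crash or suspected OOM kill. Output paths are example values.
dmesg > /tmp/dmesg_snapshot.txt 2>/dev/null || true
top -b -n 1 > /tmp/top_snapshot.txt 2>/dev/null || true
# OOM kills leave distinctive kernel messages worth checking for
grep -i -E 'out of memory|oom|killed process' /tmp/dmesg_snapshot.txt || true
```

Capture these as soon as possible after the event; dmesg is a ring buffer and older messages are eventually overwritten.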
Other useful pieces of information when opening the case
- Have there been any recent system changes?
- Has the data volume changed?
- Has this happened before? If so, how often, is there a pattern to the occurrence?
- Does this affect all cluster nodes or just some?
- If this is AWS, are the nodes in the same availability zone?
- If this is Google Compute Engine, were live migrations happening?
In situations where one node has been affected over a period of time, or any situation where multiple nodes are affected, an incident timeline is invaluable. Aerospike Support can then match it against log and diagnostic information to build a picture of what happened.
Notes
- On Red Hat/CentOS 7.x systems, systemd may be managing logging. In this case, extract logs as discussed in the following document:
http://www.aerospike.com/docs/operations/manage/systemd
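On such systemd-managed hosts, journalctl can export the service log for a time window. A minimal sketch, assuming the unit is named "aerospike" (verify the unit name on your system) and using example time bounds; the "|| true" guard keeps it safe on hosts without journald:

```shell
# Illustrative only: export the Aerospike service journal for an incident
# window. The unit name and timestamps are assumptions; adjust as needed.
journalctl -u aerospike --since "2016-08-11 09:00" --until "2016-08-11 11:00" \
    > /tmp/aerospike_journal.log 2>/dev/null || true
```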
- The Aerospike log location is configured in aerospike.conf, so it may vary from the paths shown above. The logging stanza in aerospike.conf gives the definitive location. An example is shown below:
logging {
    file /var/log/aerospike/aerospike.log {
        context any info
    }
}
- Earlier versions of Aerospike may log to citrusleaf.log; this can be considered analogous to aerospike.log.
Keywords
INFORMATION SUPPORT CASE COLLECTINFO LOGS
Timestamp
11th August 2016