In the event there is an issue you would like to report to ScyllaDB support, you need to submit logs and other files which help the support team diagnose the issue. Only the ScyllaDB support team members can read the data you share.
In general, there are two types of issues:
ScyllaDB failure - There is some kind of failure, possibly due to a connectivity issue, a timeout, or otherwise, where the ScyllaDB server or the ScyllaDB nodes are not working. These cases require you to send ScyllaDB Doctor vitals and ScyllaDB logs, as well as Core Dump files (if available), to ScyllaDB support.
ScyllaDB performance - You have noticed some type of degradation of service with ScyllaDB reads or writes. If it is clearly a performance issue and not a failure, refer to Report a performance problem.
Once you have used our diagnostic tools to report the current status, you need to Send files to ScyllaDB support for further analysis.
Make sure the ScyllaDB system logs are configured to report info-level messages; see install debug info.
Note
If you are unsure which reports need to be included, Open a support ticket or GitHub issue and consult with the ScyllaDB team.
ScyllaDB Doctor is a troubleshooting tool that checks the node status regarding system requirements, configuration, and tuning. The collected information is output as a .vitals.json file and an archive file with ScyllaDB logs.
You need to run the tool on every node in the cluster.
Download ScyllaDB Doctor as a Linux package or a generic tarball:
Ubuntu/Debian (DEB): https://downloads.scylladb.com/downloads/scylla-doctor/deb/
RHEL/Rocky (RPM): https://downloads.scylladb.com/downloads/scylla-doctor/rpm/
Tarball: https://downloads.scylladb.com/downloads/scylla-doctor/tar/
Run ScyllaDB Doctor on every node in the cluster.
If you installed ScyllaDB Doctor with DEB or RPM, you can run it with the scylla-doctor command.
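For example, assuming the packaged command accepts the same --save-vitals option as the tarball version, a run could look like this:
sudo scylla-doctor --save-vitals <unique-host-id>.vitals.json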
If you downloaded the tarball, extract the scylla_doctor.pyz file and copy the file to all nodes in the cluster. Next, execute the following command from the directory where you copied scylla_doctor.pyz on each node:
sudo ./scylla_doctor.pyz --save-vitals <unique-host-id>.vitals.json
Make sure you provide a unique host identifier in the filename, such as the host IP.
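For example, to embed the node's primary IP address in the filename automatically (a convenience sketch, not part of the tool itself):
sudo ./scylla_doctor.pyz --save-vitals "$(hostname -I | awk '{print $1}')".vitals.json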
Running ScyllaDB Doctor will generate:
<unique-host-id>.vitals.json - ScyllaDB Doctor vitals
scylla_logs_<timestamp>.tar.gz - ScyllaDB logs
Authenticated Clusters
If CQL authentication is enabled on the cluster, you need to additionally
provide CQL credentials with permissions to perform the DESCRIBE SCHEMA
command using the following parameters:
-sov CQL,user,<CQL user name> -sov CQL,password,<CQL password>
ScyllaDB Doctor employs cqlsh installed on a given node using the provided credentials. Make sure to set up any additional configuration required to use cqlsh, such as TLS-related information, in the .cqlshrc file.
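For example, a full invocation on an authenticated cluster could look like this (the user name and password are placeholders):
sudo ./scylla_doctor.pyz --save-vitals <unique-host-id>.vitals.json -sov CQL,user,scylla_admin -sov CQL,password,scylla_password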
Collect the .vitals.json and log files from each node into a local directory with a name identifying your cluster and compress them into an archive.
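If the nodes are reachable over SSH, one way to gather the files into such a directory is a short loop like the following sketch (the host names and directory name are illustrative):
mkdir my_cluster_123
for host in node1 node2 node3; do
  scp "$host:*.vitals.json" "$host:scylla_logs_*.tar.gz" my_cluster_123/
done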
In the following example, the Linux tar command is used to compress the files in the my_cluster_123 directory:
tar czvf my_cluster_123_vitals.tgz my_cluster_123
Upload the archive using the instructions in the Send files to ScyllaDB support section.
When ScyllaDB fails, it creates a core dump which can later be used to debug the issue. The file is written to /var/lib/scylla/coredump. If there is no file in the directory, see Troubleshooting Core Dump.
Procedure
The core dump file can be very large. Make sure to compress it using xz or a similar tool.
xz -z core.21692
Upload the compressed file to upload.scylladb.com. See Send files to ScyllaDB support.
In the event the /var/lib/scylla/coredump directory is empty, the following solutions may help. Note that this section covers only some of the reasons why a core dump file is not created. In some cases, the core dump is missing not because it was written to the wrong location or because the system is not configured to generate core dumps, but because the failure itself prevented the core dump from being created or left it inaccessible.
If ScyllaDB restarts for some reason and there is no core dump file, the systemd core dump configuration needs to be modified.
Procedure
Open the custom configuration file /etc/systemd/coredump.conf.d/custom.conf.
Refer to generate core dumps for details.
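A minimal override might look like the following sketch (the keys are standard systemd coredump.conf options; the size values are illustrative and should be adjusted to your node's RAM and free disk space):
[Coredump]
Storage=external
Compress=yes
ProcessSizeMax=64G
ExternalSizeMax=64G
MaxUse=128G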
Note
You will need spare disk space larger than the amount of RAM used by ScyllaDB.
If the /var/lib/scylla/coredump directory is empty even after you changed the custom configuration file, it might be that the Automatic Bug Reporting Tool (ABRT) is running and all core dumps are piped directly to it.
Procedure
Check the /proc/sys/kernel/core_pattern file. If it contains something similar to |/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t %h %e 636f726500, replace the contents with core.
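For example, you can inspect the current pattern and reset it as follows (the change does not persist across reboots unless also applied through sysctl configuration):
cat /proc/sys/kernel/core_pattern
echo core | sudo tee /proc/sys/kernel/core_pattern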
If you are experiencing a performance issue when using ScyllaDB, let us know and we can help. To save time and increase the likelihood of a speedy solution, it is important to supply us with as much information as possible.
Include the following information in your report:
Complete ScyllaDB Doctor Vitals
A Server Metrics Report
A Client Metrics Report
The contents of your tracing data. See Collecting Tracing Data.
There are two types of metrics you need to collect: ScyllaDB Server and ScyllaDB Client (node). The ScyllaDB Server metrics can be displayed using an external monitoring service like ScyllaDB Monitoring Stack or they can be collected using scyllatop and other commands.
Note
It is highly recommended to use the ScyllaDB monitoring stack so that the Prometheus metrics collected can be shared.
There are several commands you can use to see if there is a performance issue on the ScyllaDB Server. Note that checking the CPU load using top is not a good metric for checking ScyllaDB. Use scyllatop instead.
Note
To help the ScyllaDB support team assess your problem, it is best to pipe the results to a file which you can attach with ScyllaDB Doctor vitals and ScyllaDB logs.
Check the gauge-load metric. If the load is close to 100%, the bottleneck is the ScyllaDB CPU.
scyllatop *gauge-load
Check if one of the ScyllaDB cores is busier than the others:
sar -P ALL
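For example, to sample all CPUs once per second for 30 seconds and save the output to a file you can attach to the report (the interval and count are illustrative):
sar -P ALL 1 30 > sar_all_cpus.txt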
Check the load on one CPU (CPU 0 in this example):
perf top -C0
Check if the disk utilization percentage is close to 100%. If yes, the disk might be the bottleneck.
Use iostat -x 1 to observe the disk utilization.
Collect run time statistics.
sudo perf record --call-graph dwarf -C 0 -F 99 -p $(ps -C scylla -o pid --no-headers) -g sleep 10
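If you want a quick local look at the recorded profile before uploading it, you can open it with perf report (perf record writes perf.data to the current directory by default):
sudo perf report -i perf.data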
Alternatively, you can run sudo ./collect-runtime-info.sh, which does all of the above except scyllatop and uploads the compressed result to S3.
The contents of the script are as follows:
#!/bin/bash -e
mkdir report
rpm -qa > ./report/rpm.txt
journalctl -b > ./report/journalctl.txt
df -k > ./report/df.txt
netstat > ./report/netstat.txt
sar -P ALL > ./report/sar.txt
iostat -d 1 10 > ./report/iostat.txt
sudo perf record --call-graph dwarf -C 0 -F 99 -p $(ps -C scylla -o pid --no-headers) -g --output ./report/perf.data sleep 10
export report_uuid=$(uuidgen)
echo $report_uuid
tar c report | xz > report.tar.xz
curl --request PUT --upload-file report.tar.xz "scylladb-users-upload.s3.amazonaws.com/$report_uuid/report.tar.xz"
echo $report_uuid
You can also see the results in the ./report directory.
When using Grafana and Prometheus to monitor ScyllaDB, sharing the metrics stored in Prometheus is very useful. This procedure shows how to gather the metrics from the monitoring server.
Procedure
Validate that the Prometheus instance is running:
docker ps
Download the database, using your CONTAINER ID instead of a64bf3ba0b7f:
sudo docker cp a64bf3ba0b7f:/prometheus /tmp/prometheus_data
Compress the data directory:
sudo tar -zcvf /tmp/prometheus_data.tar.gz /tmp/prometheus_data/
Upload the file you created in step 3 to upload.scylladb.com (see Send files to ScyllaDB support).
Check the client CPU using top. If the CPU is close to 100%, the bottleneck is the client CPU. In this case, you should add more loaders to stress ScyllaDB.
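For example, to capture a few batch-mode snapshots of top output into a file you can attach to the report (the sample count is illustrative):
top -b -n 3 > client_top.txt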
Once you have collected and compressed your reports, send them to ScyllaDB for analysis.
Procedure
Generate a UUID:
export report_uuid=$(uuidgen)
echo $report_uuid
Upload all required report files:
curl -X PUT https://upload.scylladb.com/$report_uuid/yourfile -T yourfile
For example, with the ScyllaDB Doctor vitals:
curl -X PUT https://upload.scylladb.com/$report_uuid/my_cluster_123_vitals.tgz -T my_cluster_123_vitals.tgz
The UUID you generated replaces the variable $report_uuid at runtime. yourfile is any file you need to send to ScyllaDB support.
If you have not done so already, supply ScyllaDB support with the UUID. Keep in mind that although the ID you supply is public, only ScyllaDB support team members can read the data you share. In the ticket/issue you open, list the documents you have uploaded.
Procedure
Do one of the following:
If you are a ScyllaDB customer, open a Support Ticket and include the UUID within the ticket.
If you are a ScyllaDB user, open an issue on GitHub and include the UUID within the issue.
See ScyllaDB benchmark results for an example of the level of detail required in your reports.