Replace a Dead Node in a Scylla Cluster¶

Replacing a dead node causes the other nodes in the cluster to stream data to the replacement node. This operation can take a long time, depending on the data size and network bandwidth.

This procedure is for replacing one dead node. To replace more than one dead node, run the full procedure to completion one node at a time.

Prerequisites¶

  • Verify the status of the node using the nodetool status command. A node with status DN is down and needs to be replaced.

Datacenter: DC1
Status=Up/Down
State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)                         Host ID         Rack
UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c   B1
UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e   B1
DN  192.168.1.203  124.42 KB  256     32.6%             675ed9f4-6564-6dbd-can8-43fddce952gy   B1

Warning

It’s essential to ensure that the replaced (dead) node never comes back to the cluster, as this might lead to a split-brain situation. Remove the replaced (dead) node from the cluster network or VPC.

  • Log in to the dead node and, if you can, manually remove the data with the following commands:

    sudo rm -rf /var/lib/scylla/data
    sudo find /var/lib/scylla/commitlog -type f -delete
    sudo find /var/lib/scylla/hints -type f -delete
    sudo find /var/lib/scylla/view_hints -type f -delete
    
  • Log in to one of the nodes in the cluster with (UN) status and collect the following information from it:

    • cluster_name - cat /etc/scylla/scylla.yaml | grep cluster_name

    • seeds - cat /etc/scylla/scylla.yaml | grep seeds:

    • endpoint_snitch - cat /etc/scylla/scylla.yaml | grep endpoint_snitch

    • Scylla version - scylla --version
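The collection step above can be combined into a single script. This is a minimal sketch: it assumes the default configuration path /etc/scylla/scylla.yaml, and the CONF variable is an addition of this sketch that lets you point it at a non-standard location.

```shell
#!/bin/sh
# Collect the cluster settings needed to configure the replacement node.
# CONF defaults to the standard path; override it for non-standard installs.
CONF="${CONF:-/etc/scylla/scylla.yaml}"

grep -E '^(cluster_name|endpoint_snitch):' "$CONF"   # cluster_name and snitch
grep -E '^[[:space:]]*- seeds:' "$CONF"              # seed node list

# Print the Scylla version if the binary is on this node's PATH.
if command -v scylla >/dev/null 2>&1; then
    scylla --version
fi
```

Run it on any node with (UN) status; note down the output for use in the scylla.yaml of the new node.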

Procedure¶

Note

If your Scylla version is earlier than Scylla Open Source 4.3 or Scylla Enterprise 2021.1, check if the dead node is a seed node by running cat /etc/scylla/scylla.yaml | grep seeds:.

  • If the dead node’s IP address is listed, the dead node is a seed node. Replace the seed node following the instructions in Replacing a Dead Seed Node.

  • If the dead node’s IP address is not listed, the dead node is not a seed node. Replace it according to the procedure below.

We recommend checking the seed node configuration of all nodes. Refer to Seed Nodes for details.

  1. Install Scylla on a new node; see Getting Started for instructions. Follow the Scylla installation procedure up to the scylla.yaml configuration phase. Ensure that the Scylla version of the new node is identical to that of the other nodes in the cluster.

    Note

    Make sure to use the same Scylla patch release on the new (replacement) node to match the rest of the cluster. Adding a node with a different release to the cluster is not recommended. For example, use the following to install a specific Scylla patch release (substitute your deployed version):

    • Scylla Enterprise - sudo yum install scylla-enterprise-2018.1.9

    • Scylla open source - sudo yum install scylla-3.0.3

  2. In the scylla.yaml file, edit the parameters listed below. The file can be found under /etc/scylla/.

    • cluster_name - Set the selected cluster_name

    • listen_address - IP address that Scylla uses to connect to other Scylla nodes in the cluster

    • seeds - Set the seed nodes

    • auto_bootstrap - By default, this parameter is set to true. It allows new nodes to migrate data to themselves automatically.

    • endpoint_snitch - Set the selected snitch

    • rpc_address - Address for client connection (Thrift, CQL)

  3. Add the replace_address_first_boot parameter to the scylla.yaml configuration file on the new node. The value of this parameter should be the IP address of the node being replaced. The line can be added anywhere in the file, and after a successful node replacement there is no need to remove it. (Note: the obsolete replace_address parameter is not supported and should not be used.)

    For example (using the address of the failed node from above):

    replace_address_first_boot: 192.168.1.203
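One way to add the line without opening an editor is to append it with tee, as the RAID section later on this page does. A sketch, using the example address above:

```shell
# Append the replacement parameter to the new node's scylla.yaml.
echo 'replace_address_first_boot: 192.168.1.203' | sudo tee --append /etc/scylla/scylla.yaml
```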

  4. Start the Scylla node:

    sudo systemctl start scylla-server
    

    Or, for Docker (with the some-scylla container already running):

    docker exec -it some-scylla supervisorctl start scylla
    

  5. Verify that the node has been added to the cluster using nodetool status command.

    For example:

    Datacenter: DC1
    Status=Up/Down
    State=Normal/Leaving/Joining/Moving
    --  Address        Load       Tokens  Owns (effective)                         Host ID         Rack
    UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c   B1
    UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e   B1
    DN  192.168.1.203  124.42 KB  256     32.6%             675ed9f4-6564-6dbd-can8-43fddce952gy   B1
    

    192.168.1.203 is the dead node.

    The replacing node, 192.168.1.204, will be bootstrapping data. nodetool status will not show 192.168.1.204 during the bootstrap:

    Datacenter: dc1
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address    Load       Tokens       Owns    Host ID                               Rack
    UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c   B1
    UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e   B1
    

    Use nodetool gossipinfo to see that 192.168.1.204 is in HIBERNATE status.

    /192.168.1.204
      generation:1553759984
      heartbeat:104
      HOST_ID:655ae64d-e3fb-45cc-9792-2b648b151b67
      STATUS:hibernate,true
      RELEASE_VERSION:3.0.8
      X3:3
      X5:
      NET_VERSION:0
      DC:DC1
      X4:0
      SCHEMA:2790c24e-39ff-3c0a-bf1c-cd61895b6ea1
      RPC_ADDRESS:192.168.1.204
      X2:
      RACK:B1
      INTERNAL_IP:192.168.1.204
    
    /192.168.1.203
      generation:1553759866
      heartbeat:2147483647
      HOST_ID:655ae64d-e3fb-45cc-9792-2b648b151b67
      STATUS:shutdown,true
      RELEASE_VERSION:3.0.8
      X3:3
      X5:0:18446744073709551615:1553759941343
      NET_VERSION:0
      DC:DC1
      X4:1
      SCHEMA:2790c24e-39ff-3c0a-bf1c-cd61895b6ea1
      RPC_ADDRESS:192.168.1.203
      RACK:B1
      LOAD:1.09776e+09
      INTERNAL_IP:192.168.1.203
    

    After the bootstrapping is over, nodetool status will show:

    Datacenter: DC1
    Status=Up/Down
    State=Normal/Leaving/Joining/Moving
    --  Address        Load       Tokens  Owns (effective)                         Host ID         Rack
    UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c   B1
    UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e   B1
    UN  192.168.1.204  124.42 KB  256     32.6%             675ed9f4-6564-6dbd-can8-43fddce952gy   B1
    
  6. Run the nodetool repair command on the node that was replaced to make sure that the data is synced with the other nodes in the cluster. You can use Scylla Manager to run the repair.

    Note

    When Repair Based Node Operations (RBNO) for replace is enabled, there is no need to rerun repair.
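If you script the replacement, you can wait for the new node to report UN before starting the repair. This is a hedged sketch, not part of the official procedure: NODE and the 30-second polling interval are assumptions, and STATUS_CMD is overridable so the loop can be exercised without a live cluster.

```shell
#!/bin/sh
# Poll cluster status until the replacement node shows as UN (Up/Normal),
# after which it is safe to start the repair.
NODE="${NODE:-192.168.1.204}"
STATUS_CMD="${STATUS_CMD:-nodetool status}"

until $STATUS_CMD | grep -q "^UN  *$NODE"; do
    echo "waiting for $NODE to finish bootstrapping..."
    sleep 30
done

echo "$NODE is UN; starting repair"
nodetool repair
```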

Set Up RAID Following a Restart¶

If you need to restart (stop and start, not reboot) an instance with ephemeral storage, such as an EC2 i3 or i3en node, you should be aware that:

ephemeral volumes persist only for the life of the instance. When you stop, hibernate, or terminate an instance, the applications and data in its instance store volumes are erased. (see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html)

In this case, the node’s data will be erased by the restart. To remedy this, you need to recreate the RAID array.

  1. Stop the Scylla server on the node you restarted. The remaining commands will run on this node as well.

    sudo systemctl stop scylla-server
    

    Or, for Docker (without stopping the some-scylla container):

    docker exec -it some-scylla supervisorctl stop scylla
    

  2. Run the following command to comment out the Scylla entry in /etc/fstab, so that the invalid RAID disk is not mounted after reboot:

    sudo sed -e '/.*scylla/s/^/#/g' -i /etc/fstab
    
  3. Run the following command to enable auto_bootstrap after restart to sync the data:

    sudo sed -e '/auto_bootstrap:.*/s/False/True/g' -i /etc/scylla/scylla.yaml
    
  4. Run the following command, replacing 172.30.0.186 with the listen_address / rpc_address of the node that you are restarting:

    echo 'replace_address_first_boot: 172.30.0.186' | sudo tee --append /etc/scylla/scylla.yaml
    
  5. Run the following command to re-create the RAID:

    sudo /opt/scylladb/scylla-machine-image/scylla_create_devices
    
  6. Start the Scylla server:

    sudo systemctl start scylla-server
    

    Or, for Docker (with the some-scylla container already running):

    docker exec -it some-scylla supervisorctl start scylla
    

Sometimes the public or private IP address of the instance changes after a restart. If so, refer to the replace procedure above.
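The file edits in steps 2-4 can be rehearsed on scratch copies before applying them with sudo to the real /etc/fstab and /etc/scylla/scylla.yaml. The following sketch uses sample file contents (both the RAID device entry and the auto_bootstrap line are hypothetical):

```shell
#!/bin/sh
# Rehearse the RAID-restart edits on scratch copies of the two files.
FSTAB_COPY=$(mktemp)
YAML_COPY=$(mktemp)
printf '/dev/md0 /var/lib/scylla xfs noatime 0 0\n' > "$FSTAB_COPY"  # sample scylla mount
printf 'auto_bootstrap: False\n' > "$YAML_COPY"                      # sample setting

# Step 2: comment out the Scylla RAID mount so it is not mounted after reboot.
sed -e '/.*scylla/s/^/#/g' -i "$FSTAB_COPY"
# Step 3: re-enable auto_bootstrap so the data is synced after restart.
sed -e '/auto_bootstrap:.*/s/False/True/g' -i "$YAML_COPY"
# Step 4: set replace_address_first_boot to this node's own address.
echo 'replace_address_first_boot: 172.30.0.186' >> "$YAML_COPY"

cat "$FSTAB_COPY" "$YAML_COPY"
```

Once the output looks right, run the same sed and tee commands from the procedure against the real files.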
