Handling Cluster Membership Change Failures

A failure, such as a loss of power, may happen in the middle of a cluster membership change (that is, bootstrap, decommission, removenode, or replace). If that happens, you should ensure that the cluster is brought back to a consistent state as soon as possible. Further membership changes might be impossible until you do so.

For example, a node that crashed in the middle of decommission might leave the cluster in a state where it considers the node to still be a member, but the node itself will refuse to restart and communicate with the cluster. This particular case is very unlikely - it requires a specifically timed crash to happen, after the data streaming phase of decommission finishes but before the node commits that it left. But if it happens, you won’t be able to bootstrap other nodes (they will try to contact the partially-decommissioned node and fail) until you remove the remains of the node that crashed.

Handling a Failed Bootstrap

If a failure happens when trying to bootstrap a new node to the cluster, you can try bootstrapping the node again by restarting it.

If the failure persists, or you decide that you don’t want to bootstrap the node anymore, follow the instructions in the cleaning up after a failed membership change section to remove the remains of the bootstrapping node. You can then clear the node’s data directories and attempt to bootstrap it again.
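
After the cleanup, a minimal sketch of clearing the node and retrying the bootstrap, assuming a default package installation (systemd service scylla-server, data under /var/lib/scylla); adjust the service name and paths to your setup:

    sudo systemctl stop scylla-server
    # Remove the node’s data so the next start performs a fresh bootstrap
    # (default ScyllaDB directories assumed).
    sudo rm -rf /var/lib/scylla/data/* \
                /var/lib/scylla/commitlog/* \
                /var/lib/scylla/hints/* \
                /var/lib/scylla/view_hints/*
    sudo systemctl start scylla-server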

Handling a Failed Decommission

There are two cases, depending on whether the failure happened before or after the node tried to leave the token ring.

Most likely, the failure happened during the data repair/streaming phase, before the node tried to leave the token ring. To determine which case applies, look for a log message containing “leaving token ring” in the logs of the node that you tried to decommission. For example:

INFO  2023-03-14 13:08:38,323 [shard 0] storage_service - decommission[5b2e752e-964d-4f36-871f-254491f4e8cc]: leaving token ring
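
One way to search for the message, assuming the node runs under systemd and logs to the journal:

    journalctl -u scylla-server | grep 'leaving token ring'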

If the message is not present, the failure happened before the node tried to leave the token ring. In that case you can simply restart the node and attempt to decommission it again.

If the message is present, the node attempted to leave the token ring, but it might have left the cluster only partially before the failure. Do not try to restart the node. Instead, you must make sure that the node is dead and remove any leftovers using the removenode operation. See cleaning up after a failed membership change. Trying to restart the node after such a failure results in unpredictable behavior - it may restart normally, it may refuse to restart, or it may even try to rebootstrap.

If you don’t have access to the node’s logs anymore, assume the second case (the node might have attempted to leave the token ring): do not try to restart the node; instead, follow the cleaning up after a failed membership change section.

Handling a Failed Removenode

Simply retry the removenode operation.
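
For example, assuming a hypothetical host ID for the node you are removing:

    # Re-run removenode with the same host ID as the failed attempt
    # (the UUID below is a placeholder).
    nodetool removenode e3f29a8b-2f4c-4a1d-9c6e-1b2a3c4d5e6f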

If you somehow lost the host ID of the node that you tried to remove, follow the instructions in cleaning up after a failed membership change.

Handling a Failed Replace

Replace is a special case of bootstrap in which the bootstrapping node tries to take the place of another, dead node. You can retry a failed replace operation by restarting the replacing node.

If the failure persists, or you decide that you don’t want to perform the replace anymore, follow the instructions in the cleaning up after a failed membership change section to remove the remains of the replacing node. You can then clear the node’s data directories and attempt the replace again. Alternatively, you can use removenode to remove the dead node that you initially tried to replace, and perform a regular bootstrap.
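
After the cleanup, a minimal sketch of retrying the replace on the replacing node, assuming a default installation and the replace_node_first_boot option; the paths and service name may differ in your setup:

    sudo systemctl stop scylla-server
    # Clear the replacing node’s data so the replace starts from scratch
    # (default ScyllaDB directories assumed).
    sudo rm -rf /var/lib/scylla/data/* /var/lib/scylla/commitlog/* \
                /var/lib/scylla/hints/* /var/lib/scylla/view_hints/*
    # In /etc/scylla/scylla.yaml, keep replace_node_first_boot pointing at the
    # host ID of the dead node being replaced, e.g.:
    #   replace_node_first_boot: <host ID of the dead node>
    sudo systemctl start scylla-server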

Cleaning up after a Failed Membership Change

After a failed membership change, the cluster may contain remains of a node that tried to leave or join - other nodes may consider the node a member, possibly in a transitioning state. It is important to remove any such “ghost” members. Their presence may reduce the cluster’s availability or performance, or prevent further membership changes.

You need to determine the host IDs of any potential ghost members, then remove them using the removenode operation. Note that after a failed replace, there may be two different host IDs that you’ll want to find and run removenode on: the new replacing node and the old node that you tried to replace. (Or you can remove the new node only, then try to replace the old node again.)

Step One: Determining Host IDs of Ghost Members

  • After a failed bootstrap, you need to determine the host ID of the node that tried to bootstrap, if it managed to generate one (it might not have chosen a host ID yet if it failed very early in the procedure, in which case there’s nothing to remove). Look for a message containing system_keyspace - Setting local host id to in the node’s logs; it will contain the node’s host ID. For example: system_keyspace - Setting local host id to f180b78b-6094-434d-8432-7327f4d4b38d. One way to search the logs is shown in the sketch after this list. If you don’t have access to the node’s logs, read the generic method below.

  • After a failed decommission, you need to determine the host ID of the node that tried to decommission. You can search the node’s logs as in the failed bootstrap case (see above), or you can use the generic method below.

  • After a failed removenode, you need to determine the host ID of the node that you tried to remove. You should already have it, since executing a removenode requires the host ID in the first place. But if you lost it somehow, read the generic method below.

  • After a failed replace, you need to determine the host ID of the replacing node. Search the node’s logs as in the failed bootstrap case (see above), or you can use the generic method below. You may also want to determine the host ID of the replaced node - either to attempt replacing it again after removing the remains of the previous replacing node, or to remove it using nodetool removenode. You should already have the host ID of the replaced node if you used the replace_node_first_boot option to perform the replace.
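
As an example of searching the logs (mentioned in the list above), assuming the node runs under systemd and logs to the journal:

    journalctl -u scylla-server | grep 'Setting local host id to'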

If you cannot determine the ghost members’ host IDs using the suggestions above, use the method described below. The approach differs depending on whether Raft is enabled in your cluster.

If Raft is enabled in your cluster:

  1. Make sure there are no ongoing membership changes.

  2. Execute the following CQL query on one of your nodes to retrieve the Raft group 0 ID:

    select value from system.scylla_local where key = 'raft_group0_id'
    

    For example:

    cqlsh> select value from system.scylla_local where key = 'raft_group0_id';
    
     value
    --------------------------------------
     607fef80-c276-11ed-a6f6-3075f294cc65
    
  3. Use the obtained Raft group 0 ID to query the set of all cluster members’ host IDs (which includes the ghost members), by executing the following query:

    select server_id from system.raft_state where group_id = <group0_id>
    

    Replace <group0_id> with the group 0 ID that you obtained. For example:

    cqlsh> select server_id from system.raft_state where group_id = 607fef80-c276-11ed-a6f6-3075f294cc65;
    
     server_id
    --------------------------------------
     26a9badc-6e96-4b86-a8df-5173e5ab47fe
     7991e7f5-692e-45a0-8ae5-438be5bc7c4f
     aff11c6d-fbe7-4395-b7ca-3912d7dba2c6
    
  4. Execute the following CQL query to obtain the host IDs of all token ring members:

    select peer, host_id, up from system.cluster_status;
    

    For example:

    cqlsh> select peer, host_id, up from system.cluster_status;
    
     peer      | host_id                              | up
    -----------+--------------------------------------+-------
     127.0.0.3 |                                 null | False
     127.0.0.1 | 26a9badc-6e96-4b86-a8df-5173e5ab47fe |  True
     127.0.0.2 | 7991e7f5-692e-45a0-8ae5-438be5bc7c4f |  True
    

    The output of this query is similar to the output of nodetool status.

    We included the up column to see which nodes are down and the peer column to see their IP addresses.

    In this example, one of the nodes tried to decommission and crashed as soon as it left the token ring but before it left the Raft group. Its entry will show up in system.cluster_status queries with host_id = null, like above, until the cluster is restarted.

  5. A host ID belongs to a ghost member if:

    • It appears in the system.raft_state query but not in the system.cluster_status query,

    • Or it appears in the system.cluster_status query but does not correspond to any remaining node in your cluster.

    In our example, the ghost member’s host ID was aff11c6d-fbe7-4395-b7ca-3912d7dba2c6 because it appeared in the system.raft_state query but not in the system.cluster_status query.

    If you’re unsure whether a given row in the system.cluster_status query corresponds to a node in your cluster, you can connect to each node in the cluster and execute select host_id from system.local (or search the node’s logs) to obtain that node’s host ID, collecting the host IDs of all nodes in your cluster; a sketch of this is shown after this list. Then check whether each host ID from the system.cluster_status query appears in your collected set; if not, it’s a ghost member.

    A good rule of thumb is to look at the members marked as down (up = False in system.cluster_status) - ghost members are eventually marked as down by the remaining members of the cluster. But remember that a real member might also be marked as down if it was shut down or partitioned away from the rest of the cluster. If in doubt, connect to each node and collect their host IDs, as described in the previous paragraph.
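
The following sketch collects each node’s host ID over CQL; the IP addresses are placeholders for your own nodes, and it assumes cqlsh can reach them without extra authentication options:

    # Print each node’s host ID, or a note if the node cannot be reached.
    for ip in 127.0.0.1 127.0.0.2 127.0.0.3; do
      echo -n "$ip: "
      cqlsh -e "select host_id from system.local;" "$ip" \
        | grep -Eo '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}' \
        || echo "unreachable"
    done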

If Raft is not enabled in your cluster:

  1. Make sure there are no ongoing membership changes.

  2. Execute the following CQL query on one of your nodes to obtain the host IDs of all token ring members:

    select peer, host_id, up from system.cluster_status;
    

    For example:

    cqlsh> select peer, host_id, up from system.cluster_status;
    
     peer      | host_id                              | up
    -----------+--------------------------------------+-------
     127.0.0.3 | 42405b3b-487e-4759-8590-ddb9bdcebdc5 | False
     127.0.0.1 | 4e3ee715-528f-4dc9-b10f-7cf294655a9e |  True
     127.0.0.2 | 225a80d0-633d-45d2-afeb-a5fa422c9bd5 |  True
    

    The output of this query is similar to the output of nodetool status.

    We included the up column to see which nodes are down.

    In this example, one of the 3 nodes tried to decommission but crashed while it was leaving the token ring. The node is in a partially-left state and will refuse to restart, but other nodes still consider it a normal member. We’ll have to use removenode to clean up after it.

  3. A host ID belongs to a ghost member if it appears in the system.cluster_status query but does not correspond to any remaining node in your cluster.

    If you’re unsure whether a given row in the system.cluster_status query corresponds to a node in your cluster, you can connect to each node in the cluster and execute select host_id from system.local (or search the node’s logs) to obtain that node’s host ID, collecting the host IDs of all nodes in your cluster. Then check if each host ID from the system.cluster_status query appears in your collected set; if not, it’s a ghost member.

    A good rule of thumb is to look at the members marked as down (up = False in system.cluster_status) - ghost members are eventually marked as down by the remaining members of the cluster. But remember that a real member might also be marked as down if it was shut down or partitioned away from the rest of the cluster. If in doubt, connect to each node and collect their host IDs, as described in the previous paragraph.

    In our example, the ghost member’s host ID is 42405b3b-487e-4759-8590-ddb9bdcebdc5 because it is the only member marked as down and we can verify that the other two rows appearing in system.cluster_status belong to the remaining 2 nodes in the cluster.

In some cases, even after a failed topology change, there may be no ghost members left - for example, if a bootstrapping node crashed very early in the procedure or a decommissioning node crashed after it committed the membership change but before it finalized its own shutdown steps.

If any ghost members are present, proceed to the next step.

Step Two: Removing the Ghost Members

Given the host IDs of ghost members, you can remove them using removenode; follow the documentation for the removenode operation.
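
For example, to remove the ghost member found in the first example above, run removenode with its host ID against a live node in the cluster:

    nodetool removenode aff11c6d-fbe7-4395-b7ca-3912d7dba2c6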

If you execute removenode too quickly after a failed membership change, an error similar to the following might appear:

nodetool: Scylla API server HTTP POST to URL '/storage_service/remove_node' failed: seastar::rpc::remote_verb_error (node_ops_cmd_check: Node 127.0.0.2 rejected node_ops_cmd=removenode_abort from node=127.0.0.1 with ops_uuid=0ba0a5ab-efbd-4801-a31c-034b5f55487c, pending_node_ops={b47523f2-de6a-4c38-8490-39127dba6b6a}, pending node ops is in progress)

In that case simply wait for 2 minutes before trying removenode again.

If removenode returns an error like:

nodetool: Scylla API server HTTP POST to URL '/storage_service/remove_node' failed: std::runtime_error (removenode[12e7e05b-d1ae-4978-b6a6-de0066aa80d8]: Host ID 42405b3b-487e-4759-8590-ddb9bdcebdc5 not found in the cluster)

and you’re sure that you’re providing the correct Host ID, it means that the member was already removed and you don’t have to clean up after it.
