Note
If you upgraded your cluster from version 5.4, see After Upgrading from 5.4.
You can remove nodes from your cluster to reduce its size.
Run the nodetool status command to check the status of the nodes in your cluster.
Datacenter: DC1
Status=Up/Down
State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c  B1
UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e  B1
UN  192.168.1.203  124.42 KB  256     32.6%             675ed9f4-6564-6dbd-can8-43fddce952gy  B1
If the node status is Up Normal (UN), run the nodetool decommission command to remove the node you are connected to. Using nodetool decommission is the recommended method for cluster scale-down operations. It prevents data loss by ensuring that the node you’re removing streams its data to the remaining nodes in the cluster.
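For example, assuming you are logged in to the node you want to remove (192.168.1.203 in the sample output above), the command itself is simply:
nodetool decommission
The command runs against the local node and typically returns only once streaming has completed, so it may take a while on nodes that hold a lot of data.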
If the node is Joining, see Safely Remove a Joining Node.
If the node status is Down, see Removing an Unavailable Node.
Warning
Review the current disk space utilization on the existing nodes and make sure that the data streamed from the node being removed can fit into the disk space available on the remaining nodes. If there is not enough disk space on the remaining nodes, the removal of a node will fail. Add more storage to the remaining nodes before starting the removal procedure.
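As a rough check (a sketch, assuming the default data directory /var/lib/scylla used later in this procedure), you can compare the Load column reported by nodetool status with the free disk space on each remaining node:
nodetool status
df -h /var/lib/scylla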
Run the nodetool netstats command to monitor the progress of the token reallocation.
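For example, to refresh the streaming statistics every 10 seconds (assuming the standard watch utility is available on the node):
watch -n 10 nodetool netstats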
Run the nodetool status command to verify that the node has been removed.
Datacenter: DC1
Status=Up/Down
State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c  B1
UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e  B1
Manually remove the data and commit log stored on that node.
When a node is removed from the cluster, its data is not automatically removed. You need to manually remove the data to ensure it is no longer counted against the load on that node. Delete the data with the following commands:
sudo rm -rf /var/lib/scylla/data
sudo find /var/lib/scylla/commitlog -type f -delete
sudo find /var/lib/scylla/hints -type f -delete
sudo find /var/lib/scylla/view_hints -type f -delete
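As an optional sanity check (assuming the default directory layout used in the commands above), you can confirm how much data remains under the ScyllaDB directory:
sudo du -sh /var/lib/scylla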
If the node status is Down Normal (DN), you should try to restore it. Once the node is up, use the nodetool decommission command (see Removing a Running Node) to remove it.
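How you restore the node depends on why it went down. On a systemd-based installation, for example, bringing the ScyllaDB service back up after a transient failure might look like this:
sudo systemctl start scylla-server
nodetool status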
If all attempts to restore the node have failed and the node is permanently down, you can remove the node by running the nodetool removenode command, providing the Host ID of the node you are removing. See nodetool removenode for details.
Warning
Using nodetool removenode is a fallback procedure that should only be used when a node is permanently down and cannot be recovered. You must never use nodetool removenode to remove a running node that can be reached by other nodes in the cluster.
Example:
nodetool removenode 675ed9f4-6564-6dbd-can8-43fddce952gy
The nodetool removenode command notifies the other nodes that the token ranges owned by the removed node need to be moved and that they should redistribute the data using streaming. The command does not guarantee the consistency of the rebalanced data if the stream sources do not have the most recent data. In addition, if some nodes are unavailable or another error occurs, the nodetool removenode operation will fail. To ensure a successful operation and preserve consistency among replicas, you should:
Make sure the status of all other nodes in the cluster is Up Normal (UN). If one or more nodes are unavailable, see nodetool removenode for instructions.
Run a full cluster repair before nodetool removenode, so that all existing replicas have the most up-to-date data (see the example below).
In the case of node failures during the removenode operation, re-run repair before running nodetool removenode again (not required when Repair Based Node Operations (RBNO) for removenode is enabled).
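For the repair step above, one option is to run a primary-range repair on each node in the cluster, one node at a time (or to use ScyllaDB Manager, if it is available). A minimal sketch, run on every node in turn:
nodetool repair -pr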
The procedure described above applies to clusters where consistent topology updates are enabled. The feature is automatically enabled in new clusters.
If you’ve upgraded an existing cluster from version 5.4, ensure that you have manually enabled consistent topology updates. Without consistent topology updates enabled, you must consider the following limitations while applying the procedure:
It’s essential to ensure that the removed node never comes back to the cluster, as its return might adversely affect your data (data resurrection or loss). To prevent the removed node from rejoining the cluster, remove that node from the cluster network or VPC.
You can only remove one node at a time. You need to verify that the node has been removed before removing another one.
If nodetool decommission starts executing but fails in the middle, for example, due to a power loss, consult the Handling Membership Change Failures document.