Note
Switching from one type of snitch to another is NOT supported for clusters where one or more keyspaces have tablets enabled. Keep in mind that newly created keyspaces have tablets enabled by default.
This procedure describes the steps required to switch from one type of snitch to another, for example when expanding the cluster by adding data-centers in different locations. Snitches determine how ScyllaDB distributes replicas across the cluster. The exact procedure depends on whether the cluster topology changes.
Note - Switching a snitch requires a full cluster shutdown, so it is highly recommended to choose the right snitch for your needs during the cluster setup phase.
| Cluster Status | Needed Procedure |
|---|---|
| No change in network topology | Set the new snitch |
| Network topology was changed | Set the new snitch and run repair |
Changes in network topology mean that there are changes in the racks or data-centers where the nodes are located.
For example:
No topology changes
Original cluster: a three-node cluster in a single data-center using SimpleSnitch or Ec2Snitch.
Change to: three nodes in one data-center and one rack using GossipingPropertyFileSnitch or Ec2MultiRegionSnitch.
Topology changes
Original cluster: three nodes using SimpleSnitch or Ec2Snitch in a single data-center.
Change to: nine nodes in two data-centers using GossipingPropertyFileSnitch or Ec2MultiRegionSnitch (adding a new data-center).
Stop all the nodes in the cluster.
sudo systemctl stop scylla-server
docker exec -it some-scylla supervisorctl stop scylla
(without stopping some-scylla container)
In the scylla.yaml file, edit the endpoint_snitch parameter. The file can be found under /etc/scylla/. Change the endpoint_snitch on all the nodes in the cluster.
For example:
endpoint_snitch: GossipingPropertyFileSnitch
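This edit must be applied on every node. A minimal sketch of scripting the change with `sed`, where the local file path and the target snitch are assumptions (on a real node the file is /etc/scylla/scylla.yaml, and you would run this on each node, e.g. over ssh):

```shell
# Create a stand-in scylla.yaml to demonstrate the edit
# (on a real node, use /etc/scylla/scylla.yaml instead).
CONF=scylla.yaml
printf 'endpoint_snitch: SimpleSnitch\n' > "$CONF"

# Replace the endpoint_snitch value in place.
sed -i 's/^endpoint_snitch:.*/endpoint_snitch: GossipingPropertyFileSnitch/' "$CONF"

# Show the resulting setting.
grep '^endpoint_snitch' "$CONF"
```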
In the cassandra-rackdc.properties file, edit the rack and data-center information.
For example, with Ec2MultiRegionSnitch:
For a node in the us-east-1 region, us-east is the data-center name and 1 is the rack location.
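With GossipingPropertyFileSnitch, the data-center and rack are set explicitly in this file. A minimal sketch, where the dc and rack values are illustrative and should match your own topology:

```ini
# cassandra-rackdc.properties (illustrative values)
dc=us-east
rack=1
```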
Start all the nodes in the cluster in parallel.
sudo systemctl start scylla-server
docker exec -it some-scylla supervisorctl start scylla
(with some-scylla container already running)
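Starting the nodes in parallel can be scripted. A minimal sketch, where the hostnames are assumptions and `echo` stands in for running the start command remotely (e.g. `ssh "$host" sudo systemctl start scylla-server`):

```shell
# Simulate starting ScyllaDB on all nodes in parallel.
# In a real cluster, replace the echo with:
#   ssh "$host" sudo systemctl start scylla-server
for host in node1 node2 node3; do
  ( echo "starting scylla-server on $host" > "start_$host.log" ) &
done
wait   # block until every background start command has returned
```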
Run a full repair (consult the table above to see whether this action is needed).
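When the topology changed, repair must run on every node. A minimal sketch of planning that run, where the hostnames are assumptions and `echo` stands in for executing the command remotely (e.g. `ssh "$host" nodetool repair -pr`, so each node repairs its primary ranges):

```shell
# Sketch: plan a repair across the cluster after a topology change.
# In a real cluster, replace the echo with:
#   ssh "$host" nodetool repair -pr
NODES="node1 node2 node3"
for host in $NODES; do
  echo "$host: nodetool repair -pr"
done > repair_plan.txt
cat repair_plan.txt
```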