Automated leader election

Since version 2.6.1, Tarantool has built-in functionality for managing automated leader election in a replica set. This functionality increases the fault tolerance of Tarantool-based systems and decreases dependency on external tools for replica set management.

To learn how to configure and monitor automated leader elections, check Managing leader elections.

Tarantool uses a modification of Raft, an algorithm for synchronous replication and automated leader election. A complete description of the Raft algorithm is available in the corresponding paper.

Synchronous replication and leader election in Tarantool are implemented as two independent subsystems. This means that synchronous replication can be configured while an alternative algorithm is used for leader election; the built-in election mechanism, in turn, does not require the use of synchronous spaces. Synchronous replication is covered in a dedicated section of the documentation. The leader election process is described below.
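
As noted above, election can be enabled on its own, without synchronous spaces. A minimal sketch in the YAML configuration (the candidate value makes the instance eligible to become a leader):

replication:
  election_mode: candidate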

Note

The system behavior can be specified exactly according to the Raft algorithm. To do this, use only synchronous spaces and set the election quorum to the cluster majority, N / 2 + 1 (see the notes on quorum and synchronous replication below).

Automated leader election in Tarantool guarantees that, at any given moment, a replica set has at most one leader, that is, a node available for writes. All the other nodes accept read requests only.

When the election is enabled, the life cycle of the replica set is divided into so-called terms. Each term is described by a monotonically growing number. After the first boot of a node, its term equals 1. When a node detects that it is not a leader and there has been no leader in the replica set for some time, it increases its term and starts a new election round.

Leader election happens via voting. The node that starts the election votes for itself and sends vote requests to the other nodes. Each instance votes for the first node whose request it receives and then, for the rest of the term, waits for a leader to be elected without taking any further action.

The node that collects a quorum of votes, defined by the replication.synchro_quorum parameter, becomes the leader and notifies the other nodes about it. A split vote can also happen, when no node receives a quorum of votes. In this case, after a random timeout, each node increases its term and starts a new election round, unless a vote request with a greater term arrives during that time. Eventually, a leader is elected.
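
For illustration, the quorum can be pinned to the canonical majority. Assuming the string-formula form of the option, where N evaluates to the number of registered replicas, a sketch:

replication:
  synchro_quorum: "N / 2 + 1"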

If any unfinalized synchronous transactions are left from the previous leader, the new leader finalizes them automatically.

All the non-leader nodes are called followers. The nodes that start a new election round are called candidates. The elected leader sends heartbeats to the non-leader nodes to let them know it is alive.

If there are no heartbeats for a period of replication.timeout * 4, a non-leader node starts a new election if the following conditions are met (a timing sketch follows the list):

  • The node has a quorum of connections to other cluster members.
  • None of these cluster members can see the leader node.
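
As a timing sketch, with the setting below (the value of 1 second is an illustrative assumption, not a recommendation), a silent leader triggers a new election after 1 * 4 = 4 seconds:

replication:
  timeout: 1    # heartbeat timeout in seconds; a leader silent for
                # timeout * 4 = 4 seconds is considered gone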

Note

A cluster member considers the leader node to be alive if the member has received heartbeats from the leader at least once during the last replication.timeout * 4 seconds, and there are no replication errors (the connection is not broken due to a timeout or an error).

Terms and votes are persisted by each instance to preserve certain Raft guarantees.

When voting, nodes prefer the instances that store the newest data. So if the old leader managed to send some data to a quorum of replicas before becoming unavailable, that data will not be lost.

When election is enabled, there must be a connection between every pair of nodes, so that the topology forms a full mesh. This is needed because election messages, such as vote requests and other internal communication, require a direct connection between the nodes.

In the classic Raft algorithm, a leader doesn't track its connectivity to the rest of the cluster. Once elected, it considers itself the leader until it receives a new term from another cluster node. This can lead to a split situation if the other nodes elect a new leader upon losing connectivity to the previous one.

The issue is resolved in Tarantool version 2.10.0 by introducing the leader fencing mode. The mode is switched by the replication.election_fencing_mode configuration parameter. When fencing is set to soft or strict, the leader resigns its leadership if it has fewer than replication.synchro_quorum live connections to the cluster nodes. The resigning leader becomes a follower in the current election term and turns read-only. Leader fencing can be turned off by setting replication.election_fencing_mode to off.

In soft mode, a connection is considered dead if there are no responses for 4 * replication.timeout seconds both on the current leader and the followers.

In strict mode, a connection is considered dead if there are no responses for 2 * replication.timeout seconds on the current leader and for 4 * replication.timeout seconds on the followers. This improves chances that there is only one leader at any time.

Fencing applies to the instances that have the replication.election_mode set to "candidate" or "manual".
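
A hedged sketch of enabling the stricter mode in the YAML configuration (the values are illustrative):

replication:
  election_mode: candidate
  election_fencing_mode: strict   # the leader resigns once it loses the
                                  # quorum of live connections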

There can still be a situation when a replica set has two leaders working independently (so-called split-brain). It can happen, for example, if a user mistakenly lowered the replication.synchro_quorum below N / 2 + 1. In this situation, to preserve the data integrity, if an instance detects the split-brain anomaly in the incoming replication data, it breaks the connection with the instance sending the data and writes the ER_SPLIT_BRAIN error in the log.

Eventually, there will be two sets of nodes with the diverged data, and any node from one set is disconnected from any node from the other set with the ER_SPLIT_BRAIN error.

Once noticing the error, a user can choose any representative from each of the sets and inspect the data on them. To reconcile the data, the user should remove it from the nodes of one set and reconnect them to the nodes of the other set that have the correct data.

Any node taking part in the election process replicates data only from the most recently elected leader. This helps avoid a situation where an old leader keeps trying to push changes to replicas after a new leader has been elected.

Term numbers also act as a kind of filter. For example, if election is enabled on two nodes and the term of node1 is less than the term of node2, then node2 will not accept transactions from node1.
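
With the YAML-based configuration, the election-related options live under the replication section; the skeleton below lists them: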

replication:
  election_mode: <string>
  election_fencing_mode: <string>
  election_timeout: <seconds>
  timeout: <seconds>
  synchro_quorum: <count>
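
For reference, a hedged sketch of the equivalent box.cfg-style settings (assuming Tarantool 2.11 or later, where the fencing option is named election_fencing_mode; the values are illustrative):

box.cfg{
    election_mode = 'candidate',
    election_fencing_mode = 'soft',
    election_timeout = 5,       -- seconds to wait before a new round after a split vote
    replication_timeout = 1,    -- heartbeat timeout, seconds
    replication_synchro_quorum = 'N / 2 + 1',
}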

It is important to know that being a leader is not the only requirement for a node to be writable. The leader should also satisfy the following requirements:

  • The database.mode option is set to rw.
  • The leader shouldn’t be in the orphan state.

Nothing prevents you from setting the database.mode option to ro, but the leader won’t be writable then. The option doesn’t affect the election process itself, so a read-only instance can still vote and become a leader.
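
To check whether an instance is actually writable, one can inspect box.info.ro; box.info.ro_reason, available in recent versions, hints at why the instance is read-only. A sketch:

tarantool> box.info.ro
---
- true
...

tarantool> box.info.ro_reason
---
- election
...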

To monitor the current state of a node regarding the leader election, use the box.info.election function.

Example:

tarantool> box.info.election
---
- state: follower
  vote: 0
  leader: 0
  term: 1
...
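
Beyond polling box.info.election, recent versions also expose a built-in box.election event. A hedged sketch of subscribing to leadership changes (assuming Tarantool 2.10 or later and the standard log module; the payload fields, such as term and leader, are assumptions based on the documented event):

local log = require('log')

-- The handler fires immediately with the current state
-- and then on every election update.
box.watch('box.election', function(key, value)
    log.info('election update: term = %s, leader id = %s',
             tostring(value.term), tostring(value.leader))
end)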

The Raft-based election implementation logs all its actions with the RAFT: prefix. The logged actions include handling new Raft messages, node state changes, voting, and term bumps.

Leader election doesn’t work correctly if the election quorum is set to a value less than or equal to <cluster size> / 2. In that case, a split vote can lead to a state where two leaders are elected at once.

For example, suppose there are five nodes. When the quorum is set to 2, node1 and node2 can both vote for node1. node3 and node4 can both vote for node5. In this case, node1 and node5 both win the election. When the quorum is set to the cluster majority, that is (<cluster size> / 2) + 1 or greater, the split vote is impossible.

That should be considered when adding new nodes. If the majority value is changing, it’s better to update the quorum on all the existing nodes before adding a new one.

Also, the automated leader election doesn’t bring many benefits in terms of data safety when used without synchronous replication. If the replication is asynchronous and a new leader gets elected, the old leader is still active and considers itself the leader. In that case, nothing stops it from accepting requests from clients and making transactions. Non-synchronous transactions are successfully committed because they are not checked against the quorum of replicas. Synchronous transactions fail because they cannot collect the quorum: most of the replicas reject these old leader’s transactions since it is not a leader anymore.
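
To actually get quorum-based safety, the spaces holding critical data must therefore be synchronous. A minimal sketch (the space and index names are illustrative):

-- Transactions touching this space are committed only after
-- a quorum of replicas confirms them.
local accounts = box.schema.space.create('accounts', {is_sync = true})
accounts:create_index('primary')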
