
Setup

This chapter explains how to download and set up Tarantool Enterprise Edition and run a sample application provided with it.

The recommended system requirements for running Tarantool Enterprise are as follows.

To fully ensure the fault tolerance of a distributed data storage system, at least three physical computers or virtual servers are required.

For testing/development purposes, the system can be deployed on fewer servers; however, such configurations are not recommended for production use.

  1. As host operating systems, Tarantool Enterprise Edition supports Red Hat Enterprise Linux and CentOS versions 7.5 and higher.

    Note

    Tarantool Enterprise can run on other systemd-based Linux distributions but it is not tested on them and may not work as expected.

  2. glibc 2.17-260.el7_6.6 or higher is required. Check the installed version and update it if needed:

    $ rpm -q glibc
    glibc-2.17-196.el7_4.2
    $ yum update glibc
    

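If rpm is not available (for example, when checking a non-RPM distribution used for local testing), the installed glibc version can also be read via ldd, which is shipped as part of glibc itself:

```shell
# Print the installed glibc version; ldd is part of glibc,
# so this works on any glibc-based distribution.
ldd --version | head -n 1
```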
Hereinafter, “storage servers” or “Tarantool servers” are the computers used to store and process data, and “administration server” is the computer used by the system operator to install and configure the product.

The Tarantool cluster has a full mesh topology, so all Tarantool servers must be able to exchange traffic on the TCP/UDP ports used by the cluster's instances (see advertise_uri: <host>:<port> and config: advertise_uri: '<host>:<port>' in /etc/tarantool/conf.d/*.yml for each instance). For example:

# /etc/tarantool/conf.d/*.yml

myapp.s2-replica:
  advertise_uri: localhost:3305 # this is a TCP/UDP port
  http_port: 8085

all:
  ...
  hosts:
    storage-1:
      config:
        advertise_uri: 'vm1:3301' # this is a TCP/UDP port
        http_port: 8081
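
Since the set of ports to open is driven by these advertise_uri values, one way to enumerate them is to pull the port numbers out of the configuration files with standard text tools. A minimal sketch, using a sample config written to the hypothetical path /tmp/sample.yml:

```shell
# Write a sample instance config (hypothetical path), then extract
# the TCP/UDP port numbers from its advertise_uri lines.
cat > /tmp/sample.yml <<'EOF'
myapp.s2-replica:
  advertise_uri: localhost:3305
  http_port: 8085
EOF
grep -oE 'advertise_uri: .*:[0-9]+' /tmp/sample.yml | grep -oE '[0-9]+$'   # prints: 3305
```

In a real deployment, the same pipeline can be pointed at /etc/tarantool/conf.d/*.yml to list every port the firewall must allow.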

To configure remote monitoring or to connect via the administrative console, the administration server should be able to access the following TCP ports on Tarantool servers:

  • 22 to use the SSH protocol,
  • the ports specified in the instance configuration, to monitor HTTP metrics.
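On RHEL/CentOS hosts where firewalld is enabled, these ports also have to be opened in the host firewall. A sketch, assuming the sample ports from the configuration above (3301 for cluster traffic, 8081 for HTTP metrics); run as root and adjust the port numbers to your instances:

```shell
# Open the sample cluster port (TCP and UDP) and the HTTP metrics port;
# 3301 and 8081 are the example values from the config above.
firewall-cmd --permanent --add-port=3301/tcp --add-port=3301/udp
firewall-cmd --permanent --add-port=8081/tcp
firewall-cmd --reload
```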

Additionally, it is recommended to apply the following settings for sysctl on all Tarantool servers:

$ # TCP KeepAlive setting
$ sysctl -w net.ipv4.tcp_keepalive_time=60
$ sysctl -w net.ipv4.tcp_keepalive_intvl=5
$ sysctl -w net.ipv4.tcp_keepalive_probes=5

This optional tuning of the Linux network stack helps detect broken network connections faster when a server physically fails. To achieve maximum performance, you may also need to configure other network stack parameters that are not specific to the Tarantool DBMS. For more information, refer to the Network Performance Tuning Guide section of the RHEL 7 documentation.
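Note that settings applied with sysctl -w are lost on reboot. To make the keepalive settings persistent, they can be written to a drop-in file under /etc/sysctl.d/ (the file name below is an assumption); this requires root:

```shell
# Persist the keepalive settings across reboots; the drop-in file name
# is hypothetical. `sysctl --system` re-applies all configured values.
cat > /etc/sysctl.d/90-tarantool-keepalive.conf <<'EOF'
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 5
net.ipv4.tcp_keepalive_probes = 5
EOF
sysctl --system
```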

The latest release packages of Tarantool Enterprise are available in the customer zone on the Tarantool website. Contact support@tarantool.io for access.

Each package is distributed as a tar + gzip archive and includes the following components and features:

  • Static Tarantool binary for simplified deployment in Linux environments.
  • tt command-line utility that provides a unified command-line interface for managing Tarantool-based applications. See tt CLI utility for details.
  • Tarantool Cluster Manager – a web-based interface for managing Tarantool EE clusters. See Tarantool Cluster Manager for details.
  • Selection of open and closed source modules.
  • A sample application walking you through all included modules.

Archive contents:

  • tarantool is the main executable of Tarantool.

  • tt command-line utility.

  • tcm is the Tarantool Cluster Manager executable.

  • tarantoolctl is the utility script for installing supplementary modules and connecting to the administrative console.

    Important

    tarantoolctl is deprecated in favor of the tt CLI utility.

  • examples/ is the directory containing sample applications:

    • pg_writethrough_cache/ is an application showcasing how Tarantool can cache data written to, for example, a PostgreSQL database;
    • ora_writebehind_cache/ is an application showcasing how Tarantool can cache writes and queue them to, for example, an Oracle database;
    • docker/ is an application designed to be easily packed into a Docker container.
  • rocks/ is the directory containing a selection of additional open and closed source modules included in the distribution as an offline rocks repository. See the rocks reference for details.

  • templates/ is the directory containing template files for your application development environment.

  • deprecated/ is a set of modules that are no longer supported:

    • vshard-zookeeper-orchestrator is a Python application for launching the orchestrator;
    • zookeeper-scm files are the ZooKeeper integration modules (they require usr/ libraries).

The delivered tar + gzip archive should be uploaded to a server and unpacked:

$ tar xvf tarantool-enterprise-sdk-<version>.tar.gz

No further installation is required as the unpacked binaries are almost ready to go. Go to the directory with the binaries (tarantool-enterprise) and add them to the executable path by running the script provided by the distribution:

$ source ./env.sh

Make sure you have sufficient privileges to run the script and that the file is executable. If needed, adjust its permissions and ownership with chmod and chown.
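To check that env.sh took effect, verify that the binaries now resolve from PATH (a sketch; it assumes the current shell has sourced the script from the unpacked tarantool-enterprise directory):

```shell
# Each command should print a path inside the unpacked SDK directory
# or the corresponding version string.
command -v tarantool
tarantool --version
tt version
```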

Next, set up your development environment as described in the developer’s guide.
