Settings reference

Common server settings

In the current version of Neo4j, the clustering-related parameters use the causal_clustering namespace. This will be replaced with a more suitable namespace in an upcoming release.

Each entry below lists the parameter, the instance type(s) it applies to, and an explanation.

dbms.clustering.enable

Single

This setting allows a SINGLE instance to form a cluster with one or more READ_REPLICA instances. Must be set to true on the SINGLE instance; the setting is ignored on CORE and READ_REPLICA instances.
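
For illustration, a minimal sketch of the relevant neo4j.conf lines for such a topology (the hostname is an assumption):

  # On the SINGLE instance
  dbms.mode=SINGLE
  dbms.clustering.enable=true

  # On each READ_REPLICA instance, pointing discovery at the SINGLE instance
  dbms.mode=READ_REPLICA
  causal_clustering.initial_discovery_members=single01.example.com:5000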

dbms.mode

All

This setting configures the operating mode of the database. In version 4.4, there are three possible modes:

  1. SINGLE — a standalone instance that is not part of a cluster

  2. CORE — a Core instance in a cluster

  3. READ_REPLICA — a Read Replica instance in a cluster

Example: dbms.mode=READ_REPLICA defines a Read Replica instance

dbms.read_only

-

This setting is not supported for instances in a cluster.

causal_clustering.minimum_core_cluster_size_at_formation

Core

Minimum number of Core instances required to form the initial cluster.

Example: causal_clustering.minimum_core_cluster_size_at_formation=3 specifies that the cluster will form when at least three Core instances have discovered each other.

causal_clustering.minimum_core_cluster_size_at_runtime

Core

The minimum size of the dynamically adjusted voting set (which only Core members may be a part of).

Adjustments to the voting set happen automatically as the availability of Core instances changes, whether because of explicit operations such as starting or stopping a member, or unintended issues such as network partitions. Note that this dynamic scaling of the voting set is generally desirable, since under some circumstances it increases the number of instance failures that can be tolerated.

A majority of the voting set must be available before members are voted in or out.

Example: causal_clustering.minimum_core_cluster_size_at_runtime=3 specifies that the cluster should not try to dynamically adjust below three Core instances in the voting set.
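
As a sketch, a typical three-Core deployment keeps both thresholds aligned at three:

  # Form the cluster once three Cores have discovered each other
  causal_clustering.minimum_core_cluster_size_at_formation=3
  # Do not let the voting set shrink below three members
  causal_clustering.minimum_core_cluster_size_at_runtime=3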

causal_clustering.discovery_type

Core and Read Replica

This setting specifies the strategy that the instance uses to determine the addresses for other instances in the cluster to contact for bootstrapping. Possible values are: LIST, DNS, SRV, and K8S.

LIST

Treat causal_clustering.initial_discovery_members as a list of addresses of Core instances to contact for discovery.

DNS

Treat causal_clustering.initial_discovery_members as a domain name to resolve via DNS. Expect DNS resolution to provide A records with the IP addresses of Core instances to contact for discovery, on the port specified in causal_clustering.initial_discovery_members.

SRV

Treat causal_clustering.initial_discovery_members as a domain name to resolve via DNS. Expect DNS resolution to provide SRV records containing the hostnames or IP addresses, and the ports, of Core instances to contact for discovery.

K8S

Access the Kubernetes list service API to derive the addresses of Core instances to contact for discovery. Requires causal_clustering.kubernetes.label_selector to be set to a Kubernetes label selector matching the Kubernetes services that each front a Core instance, and causal_clustering.kubernetes.service_port_name to be set to the name of the service port identifying the discovery port of those services. The value of causal_clustering.initial_discovery_members is ignored for this option.
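
Since K8S is the one discovery type without an example below, here is a hedged sketch; the label selector value and port name are assumptions and must match your Kubernetes service definitions:

  causal_clustering.discovery_type=K8S
  # Assumed label on the Kubernetes services fronting each Core instance
  causal_clustering.kubernetes.label_selector=app=neo4j-core
  # Assumed name of the service port that exposes the discovery protocol
  causal_clustering.kubernetes.service_port_name=discovery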

The value of this setting determines how causal_clustering.initial_discovery_members is interpreted. Detailed information about discovery and discovery configuration options is given in Discovery using DNS with multiple records.

Example: causal_clustering.discovery_type=DNS combined with causal_clustering.initial_discovery_members=cluster01.example.com:5000 fetches all DNS A records for cluster01.example.com and attempts to reach Neo4j instances listening on port 5000 at each A record's IP address.

causal_clustering.initial_discovery_members

Core and Read Replica

The network addresses of an initial set of Core instance members that are available to bootstrap this Core or Read Replica instance. In the default case, the initial discovery members are given as a comma-separated list of address/port pairs, and the default port for the discovery service is :5000.

It is good practice to set this parameter to the same value on all Core instances.

The behavior of this setting can be modified by configuring the setting causal_clustering.discovery_type. This is described in detail in Discovery using DNS with multiple records.

Example: causal_clustering.discovery_type=LIST combined with causal_clustering.initial_discovery_members=core01.example.com:5000,core02.example.com:5000,core03.example.com:5000 attempts to reach Neo4j instances listening on core01.example.com, core02.example.com and core03.example.com, all on port 5000.

causal_clustering.discovery_advertised_address

All

The address/port setting that specifies where the instance advertises that it listens for discovery protocol messages from other members of the cluster. If this instance is included in the initial_discovery_members of other cluster members, the value there must exactly match this advertised address.

Example: causal_clustering.discovery_advertised_address=192.168.33.20:5001 indicates that other cluster members can communicate with this instance using the discovery protocol at host 192.168.33.20 and port 5001.

causal_clustering.raft_advertised_address

Core

The address/port setting that specifies where the Neo4j instance advertises to other members of the cluster that it listens for Raft messages within the Core cluster.

Example: causal_clustering.raft_advertised_address=192.168.33.20:7000 advertises to other cluster members that this instance accepts Raft messages at 192.168.33.20 on port 7000.

causal_clustering.transaction_advertised_address

All

The address/port setting that specifies where the instance advertises that it listens for requests for transactions in the transaction-shipping catchup protocol.

Example: causal_clustering.transaction_advertised_address=192.168.33.20:6001 advertises to other cluster members that this instance serves transactions at 192.168.33.20 on port 6001.

causal_clustering.discovery_listen_address

All

The address/port setting that specifies which network interface and port the Neo4j instance binds to for the cluster discovery protocol.

Example: causal_clustering.discovery_listen_address=0.0.0.0:5001 will listen for cluster membership communication on any network interface at port 5001.

causal_clustering.raft_listen_address

Core

The address/port setting that specifies which network interface and port the Neo4j instance binds to for cluster communication. This setting must be configured in coordination with the address the instance advertises in causal_clustering.raft_advertised_address.

Example: causal_clustering.raft_listen_address=0.0.0.0:7000 listens for cluster communication on any network interface at port 7000.

causal_clustering.transaction_listen_address

All

The address/port setting that specifies which network interface and port the Neo4j instance binds to for cluster communication. This setting must be configured in coordination with the address the instance advertises in causal_clustering.transaction_advertised_address.

Example: causal_clustering.transaction_listen_address=0.0.0.0:6001 listens for cluster communication on any network interface at port 6001.
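
To make the advertised/listen pairing concrete, here is a sketch of all six address settings for one Core instance, reusing the illustrative address from the examples above:

  # Bind to all interfaces; advertise a routable address to the cluster
  causal_clustering.discovery_listen_address=0.0.0.0:5001
  causal_clustering.discovery_advertised_address=192.168.33.20:5001
  causal_clustering.raft_listen_address=0.0.0.0:7000
  causal_clustering.raft_advertised_address=192.168.33.20:7000
  causal_clustering.transaction_listen_address=0.0.0.0:6001
  causal_clustering.transaction_advertised_address=192.168.33.20:6001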

causal_clustering.store_copy_max_retry_time_per_request

Core and Read Replica

Condition for when a store copy should eventually fail. A request may be retried any number of times, as long as the configured time has not elapsed. For very large stores, or in other situations where file transfer is slow, this value can be increased.

Example: causal_clustering.store_copy_max_retry_time_per_request=60min

Multi-data center settings

Each entry below lists the parameter and an explanation.

causal_clustering.multi_dc_license

Enables multi-data center features. Requires appropriate licensing.

Example: causal_clustering.multi_dc_license=true will enable the multi-data center features.

causal_clustering.server_groups

A list of group names for the server, used when configuring load balancing and replication policies.

Example: causal_clustering.server_groups=us,us-east will add the current instance to the groups us and us-east.

causal_clustering.leadership_priority_group.<database>

The group of servers that should be preferred when selecting leaders for the specified database. If the instance currently acting as leader for this database is not a member of the configured server group, the cluster will attempt to transfer leadership to an instance which is a member. It is not guaranteed that leadership will always be held by a server in the desired group; for example, no member of the desired group may be available or have up-to-date store contents. The cluster seeks to preserve availability over respecting the leadership_priority_group setting.

To set a default leadership_priority_group for all databases that do not have an explicitly set leadership_priority_group, the <database> can be omitted. See causal_clustering.leadership_priority_group.

Example: causal_clustering.leadership_priority_group.foo=us ensures that if leadership for foo is held by a server not configured with causal_clustering.server_groups=us, the cluster will attempt to transfer leadership to a server which is.
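
As a sketch combining the default and per-database forms (the group and database names are illustrative):

  # This server belongs to the groups us and us-east
  causal_clustering.server_groups=us,us-east
  # Default leadership priority group for databases without an explicit entry
  causal_clustering.leadership_priority_group=us
  # Per-database override for the database foo
  causal_clustering.leadership_priority_group.foo=us-east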

causal_clustering.upstream_selection_strategy

An ordered list, in descending order of preference, of the strategies that a Read Replica uses to choose the upstream server from which to pull transactional updates.

Example: causal_clustering.upstream_selection_strategy=connect-randomly-within-server-group,typically-connect-to-random-read-replica configures the Read Replica to first try to connect to any other instance in the group(s) specified in causal_clustering.server_groups. If no live instances are found in those groups, it connects to a random Read Replica. A value of user-defined enables custom strategy definitions using the setting causal_clustering.user_defined_upstream_strategy.

causal_clustering.user_defined_upstream_strategy

Defines the configuration of upstream dependencies. Can only be used if causal_clustering.upstream_selection_strategy is set to user-defined.

Example: causal_clustering.user_defined_upstream_strategy=groups(north2); groups(north); halt() looks for servers in the north2 server group. If none are available, it looks in the north server group. Finally, if no servers can be resolved in any of the previous groups, the rule chain is stopped via halt().

causal_clustering.load_balancing.plugin

The load balancing plugin to use. One pre-defined plugin named server_policies is available by default.

Example: causal_clustering.load_balancing.plugin=server_policies will enable custom policy definitions.

causal_clustering.load_balancing.config.server_policies.<policy-name>

Defines a custom policy under the name <policy-name>. Note that load balancing policies are cluster-global configurations and should be defined in exactly the same way on all Core machines.

Example: causal_clustering.load_balancing.config.server_policies.north1_only=groups(north1)->min(2); halt(); defines a load balancing policy named north1_only.
Queries are sent only to servers in the north1 server group, provided at least two of them are available. If there are fewer than two servers in north1, the chain is halted.

By default, the load balancer sends read requests only to replicas/followers, which means these two servers must be of that kind. To allow reads on the leader, set causal_clustering.cluster_allow_reads_on_leader to true.
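
Putting the pieces together, a sketch of a complete server-side load balancing configuration, reusing the policy from the example above:

  # Enable the built-in policies plugin
  causal_clustering.load_balancing.plugin=server_policies
  # Route queries to the north1 group while at least two of its servers are up
  causal_clustering.load_balancing.config.server_policies.north1_only=groups(north1)->min(2); halt();
  # Also allow read queries to be routed to the leader
  causal_clustering.cluster_allow_reads_on_leader=true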