Configure a Neo4j Helm deployment

Helm differs from package managers, such as apt, yum, and npm, because, in addition to installing applications, it also allows rich configuration of them. The custom configuration is expressed declaratively in a YAML-formatted file and passed to Helm during installation.

For more information, see Helm official documentation.

Create a custom values.yaml file

  1. Ensure your Neo4j Helm chart repository is up to date and get the latest charts. For more information, see Configure the Neo4j Helm chart repository.

  2. To see what options are configurable on the Neo4j Helm chart that you want to deploy, run helm show values against that chart, for example, neo4j/neo4j-standalone, neo4j/neo4j-cluster-core, neo4j/neo4j-cluster-read-replica, neo4j/neo4j-cluster-headless-service, or neo4j/neo4j-cluster-loadbalancer:

    helm show values neo4j/neo4j-standalone
    # Default values for Neo4j.
    # This is a YAML-formatted file.
    
    ## @param nameOverride String to partially override common.names.fullname
    nameOverride: ""
    ## @param fullnameOverride String to fully override common.names.fullname
    fullnameOverride: ""
    # disableLookups will disable all the lookups done in the helm charts
    # You can enable this when executing helm commands with --dry-run command
    disableLookups: false
    
    neo4j:
      # Name of your cluster
      name: ""
    
      # If password is not set or empty a random password will be generated during installation
      password: ""
    
      # Existing secret to use for initial database password
      passwordFromSecret: ""
    
      # Neo4j Edition to use (community|enterprise)
      edition: "community"
      # set edition: "enterprise" to use Neo4j Enterprise Edition
      #
      # To use Neo4j Enterprise Edition you must have a Neo4j license agreement.
      #
      # More information is also available at: https://neo4j.com/licensing/
      # Email inquiries can be directed to: licensing@neo4j.com
      #
      # Set acceptLicenseAgreement: "yes" to confirm that you have a Neo4j license agreement.
      acceptLicenseAgreement: "no"
      #
      # set offlineMaintenanceModeEnabled: true to restart the StatefulSet without the Neo4j process running
      # this can be used to perform tasks that cannot be performed when Neo4j is running such as `neo4j-admin dump`
      offlineMaintenanceModeEnabled: false
      #
      # set resources for the Neo4j Container. The values set will be used for both "requests" and "limit".
      resources:
        cpu: "1000m"
        memory: "2Gi"
    
      #add labels if required
      labels:
    
    
    # Volumes for Neo4j
    volumes:
      data:
        #labels for data pvc on creation (Valid only when mode set to selector | defaultStorageClass | dynamic | volumeClaimTemplate)
        labels: { }
        #Set it to true when you do not want to use the subPathExpr
        disableSubPathExpr: false
        # REQUIRED: specify a volume mode to use for data
        # Valid values are share|selector|defaultStorageClass|volume|volumeClaimTemplate|dynamic
        # To get up-and-running quickly, for development or testing, use "defaultStorageClass" for a dynamically provisioned volume of the default storage class.
        mode: ""
    
        # Only used if mode is set to "selector"
        # Will attach to existing volumes that match the selector
        selector:
          storageClassName: "manual"
          accessModes:
            - ReadWriteOnce
          requests:
            storage: 100Gi
          # A helm template to generate a label selector to match existing volumes n.b. both storageClassName and label selector must match existing volumes
          selectorTemplate:
            matchLabels:
              app: "{{ .Values.neo4j.name }}"
              helm.neo4j.com/volume-role: "data"
    
        # Only used if mode is set to "defaultStorageClass"
        # Dynamic provisioning using the default storageClass
        defaultStorageClass:
          accessModes:
            - ReadWriteOnce
          requests:
            storage: 10Gi
    
        # Only used if mode is set to "dynamic"
        # Dynamic provisioning using the provided storageClass
        dynamic:
          storageClassName: "neo4j"
          accessModes:
            - ReadWriteOnce
          requests:
            storage: 100Gi
    
        # Only used if mode is set to "volume"
        # Provide an explicit volume to use
        volume:
          # If set an init container (running as root) will be added that runs:
          #   `chown -R <securityContext.fsUser>:<securityContext.fsGroup>` AND `chmod -R g+rwx`
          # on the volume. This is useful for some filesystems (e.g. NFS) where Kubernetes fsUser or fsGroup settings are not respected
          setOwnerAndGroupWritableFilePermissions: false
    
          # Example (using a specific Persistent Volume Claim)
          # persistentVolumeClaim:
          #   claimName: my-neo4j-pvc
    
        # Only used if mode is set to "volumeClaimTemplate"
        # Provide an explicit volumeClaimTemplate to use
        volumeClaimTemplate: {}
    
      # provide a volume to use for backups
      # n.b. backups will be written to /backups on the volume
      # any of the volume modes shown above for data can be used for backups
      backups:
        #labels for backups pvc on creation (Valid only when mode set to selector | defaultStorageClass | dynamic | volumeClaimTemplate)
        labels: { }
        #Set it to true when you do not want to use the subPathExpr
        disableSubPathExpr: false
        mode: "share" # share an existing volume (e.g. the data volume)
        share:
          name: "data"
    
      # provide a volume to use for logs
      # n.b. logs will be written to /logs/$(POD_NAME) on the volume
      # any of the volume modes shown above for data can be used for logs
      logs:
        #labels for logs pvc on creation (Valid only when mode set to selector | defaultStorageClass | dynamic | volumeClaimTemplate)
        labels: { }
        #Set it to true when you do not want to use the subPathExpr
        disableSubPathExpr: false
        mode: "share" # share an existing volume (e.g. the data volume)
        share:
          name: "data"
    
      # provide a volume to use for csv metrics (csv metrics are only available in Neo4j Enterprise Edition)
      # n.b. metrics will be written to /metrics/$(POD_NAME) on the volume
      # any of the volume modes shown above for data can be used for metrics
      metrics:
        #labels for metrics pvc on creation (Valid only when mode set to selector | defaultStorageClass | dynamic | volumeClaimTemplate)
        labels: { }
        #Set it to true when you do not want to use the subPathExpr
        disableSubPathExpr: false
        mode: "share" # share an existing volume (e.g. the data volume)
        share:
          name: "data"
    
      # provide a volume to use for import storage
      # n.b. import will be mounted to /import on the underlying volume
      # any of the volume modes shown above for data can be used for import
      import:
        #labels for import pvc on creation (Valid only when mode set to selector | defaultStorageClass | dynamic | volumeClaimTemplate)
        labels: { }
        #Set it to true when you do not want to use the subPathExpr
        disableSubPathExpr: false
        mode: "share" # share an existing volume (e.g. the data volume)
        share:
          name: "data"
    
      # provide a volume to use for licenses
      # n.b. licenses will be mounted to /licenses on the underlying volume
      # any of the volume modes shown above for data can be used for licenses
      licenses:
        #labels for licenses pvc on creation (Valid only when mode set to selector | defaultStorageClass | dynamic | volumeClaimTemplate)
        labels: { }
        #Set it to true when you do not want to use the subPathExpr
        disableSubPathExpr: false
        mode: "share" # share an existing volume (e.g. the data volume)
        share:
          name: "data"
    
    #add additional volumes and their respective mounts
    #additionalVolumes:
    #  - name: neo4j1-conf
    #    emptyDir: {}
    #
    #additionalVolumeMounts:
    #  - mountPath: "/config/neo4j1.conf"
    #    name: neo4j1-conf
    
    # ldapPasswordFromSecret defines the secret which holds the password for ldap system account
    # Secret key name must be LDAP_PASS
    # This secret is accessible by Neo4j at the path defined in ldapPasswordMountPath
    ldapPasswordFromSecret: ""
    
    # The above secret gets mounted to the path mentioned here
    ldapPasswordMountPath: ""
    
    #nodeSelector labels
    #please ensure the respective labels are present on one of the cluster nodes or else helm charts will throw an error
    nodeSelector: {}
    #  label1: "value1"
    #  label2: "value2"
    
    # Services for Neo4j
    services:
      # A ClusterIP service with the same name as the Helm Release name should be used for Neo4j Driver connections originating inside the
      # Kubernetes cluster.
      default:
        # Annotations for the K8s Service object
        annotations: { }
    
      # A LoadBalancer Service for external Neo4j driver applications and Neo4j Browser
      neo4j:
        enabled: true
    
        # Annotations for the K8s Service object
        annotations: { }
    
        spec:
          # Type of service.
          type: LoadBalancer
    
          # in most cloud environments LoadBalancer type will receive an ephemeral public IP address automatically. If you need to specify a static ip here use:
          # loadBalancerIP: ...
    
        # ports to include in neo4j service
        ports:
          http:
            enabled: true #Set this to false to remove HTTP from this service (this does not affect whether http is enabled for the neo4j process)
            # uncomment to publish http on port 80 (neo4j default is 7474)
            # port: 80
          https:
            enabled: true #Set this to false to remove HTTPS from this service (this does not affect whether https is enabled for the neo4j process)
            # uncomment to publish https on port 443 (neo4j default is 7473)
            # port: 443
          bolt:
            enabled: true #Set this to false to remove BOLT from this service (this does not affect whether Bolt is enabled for the neo4j process)
            # Uncomment to explicitly specify the port to publish Neo4j Bolt (7687 is the default)
            # port: 7687
          backup:
            enabled: false #Set this to true to expose backup port externally (n.b. this could have security implications. Backup is not authenticated by default)
            # Uncomment to explicitly specify the port to publish Neo4j Backup (6362 is the default)
            # port: 6362
    
      # A service for admin/ops tasks including taking backups
      # This service is available even if the deployment is not "ready"
      admin:
        enabled: true
        # Annotations for the admin service
        annotations: { }
        spec:
          type: ClusterIP
        # n.b. there is no ports object for this service. Ports are autogenerated based on the neo4j configuration
    
      # A "headless" service for admin/ops and Neo4j cluster-internal communications
      # This service is available even if the deployment is not "ready"
      internals:
        enabled: false
        # Annotations for the internals service
        annotations: { }
        # n.b. there is no ports object for this service. Ports are autogenerated based on the neo4j configuration
    
    
    # Neo4j Configuration (yaml format)
    config:
      dbms.config.strict_validation: "false"
    
      # The amount of memory to use for mapping the store files.
      # The default page cache memory assumes the machine is dedicated to running
      # Neo4j, and is heuristically set to 50% of RAM minus the Java heap size.
      #dbms.memory.pagecache.size: "74m"
    
      #The number of Cypher query execution plans that are cached.
      #dbms.query_cache_size: "10"
    
      # Java Heap Size: by default the Java heap size is dynamically calculated based
      # on available system resources. Uncomment these lines to set specific initial
      # and maximum heap size.
      #dbms.memory.heap.initial_size: "317m"
      #dbms.memory.heap.max_size: "317m"
    
    apoc_config: {}
    #  apoc.trigger.enabled: "true"
    #  apoc.jdbc.apoctest.url: "jdbc:foo:bar"
    
    #apoc_credentials allow you to set configs like apoc.jdbc.<aliasname>.url and apoc.es.<aliasname>.url via a kubernetes secret mounted on the provided path
    #ensure the secret exists beforehand or else an error will be thrown by the helm chart
    #aliasName , secretName and secretMountPath are compulsory fields. Empty fields will result in error
    #please ensure you are using the compatible apoc-extended plugin jar while using apoc_credentials
    #please ensure the secret is created with the key named as "URL"
      #Ex: kubectl create secret generic jdbcsecret --from-literal=URL="jdbc:mysql://30.0.0.0:3306/Northwind?user=root&password=password"
    apoc_credentials: {}
    #   jdbc:
    #    aliasName: "jdbc"
    #    secretName: "jdbcsecret"
    #    secretMountPath: "/secret/jdbcCred"
    #
    #   elasticsearch:
    #     aliasName: "es"
    #     secretName: "essecret"
    #     secretMountPath: "/secret/esCred"
    
    
    # securityContext defines privilege and access control settings for a Pod. Making sure that we don't run Neo4j as the root user.
    securityContext:
      runAsNonRoot: true
      runAsUser: 7474
      runAsGroup: 7474
      fsGroup: 7474
      fsGroupChangePolicy: "Always"
    
    # containerSecurityContext defines privilege and access control settings for a Container. Making sure that we don't run Neo4j as the root user.
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 7474
      runAsGroup: 7474
    
    # Readiness probes are set to know when a container is ready to be used.
    # Because Neo4j uses Java these values are large to distinguish between long Garbage Collection pauses (which don't require a restart) and an actual failure.
    # These values should mark Neo4j as not ready after at most 5 minutes of problems (20 attempts * max 15 seconds between probes)
    readinessProbe:
      failureThreshold: 20
      timeoutSeconds: 10
      periodSeconds: 5
    
    # Liveness probes are set to know when to restart a container.
    # Because Neo4j uses Java these values are large to distinguish between long Garbage Collection pauses (which don't require a restart) and an actual failure.
    # These values should trigger a restart after at most 10 minutes of problems (40 attempts * max 15 seconds between probes)
    livenessProbe:
      failureThreshold: 40
      timeoutSeconds: 10
      periodSeconds: 5
    
    # Startup probes are used to know when a container application has started.
    # If such a probe is configured, it disables liveness and readiness checks until it succeeds
    # When restoring Neo4j from a backup it's important that startup probe gives time for Neo4j to recover and/or upgrade store files
    # When using Neo4j clusters it's important that startup probe give the Neo4j cluster time to form
    startupProbe:
      failureThreshold: 1000
      periodSeconds: 5
    
    # top level setting called ssl to match the "ssl" from "dbms.ssl.policy"
    ssl:
      # setting per "connector" matching neo4j config
      bolt:
        privateKey:
          secretName:  # we set up the template to grab `private.key` from this secret
          subPath:  # we specify the privateKey value name to get from the secret
        publicCertificate:
          secretName:  # we set up the template to grab `public.crt` from this secret
          subPath:  # we specify the publicCertificate value name to get from the secret
        trustedCerts:
          sources: [ ] # a sources array for a projected volume - this allows someone to (relatively) easily mount multiple public certs from multiple secrets for example.
        revokedCerts:
          sources: [ ]  # a sources array for a projected volume
      https:
        privateKey:
          secretName:
          subPath:
        publicCertificate:
          secretName:
          subPath:
        trustedCerts:
          sources: [ ]
        revokedCerts:
          sources: [ ]
    
    # Kubernetes cluster domain suffix
    clusterDomain: "cluster.local"
    
    # Override image settings in Neo4j pod
    image:
      imagePullPolicy: IfNotPresent
      # set a customImage if you want to use your own docker image
      # customImage: my-image:my-tag
    
      #imagePullSecrets list
      #  imagePullSecrets:
      #    - "demo"
    
      #imageCredentials list for which secret of type docker-registry will be created automatically using the details provided
      # registry , username , password , email are compulsory field for an imageCredential , without any ,  helm chart will throw an error
      # imageCredential name should be part of the imagePullSecrets list or else the respective imageCredential will be ignored and no secret creation will be done
    #  imageCredentials:
    #    - registry: ""
    #      username: ""
    #      password: ""
    #      email: ""
    #      name: ""
    
    statefulset:
      metadata:
        #Annotations for Neo4j StatefulSet
        annotations:
    #      imageregistry: "https://hub.docker.com/"
    #      demo: alpha
    
    # additional environment variables for the Neo4j Container
    env: {}
    
    # Other K8s configuration to apply to the Neo4j pod
    podSpec:
    
      #Annotations for Neo4j pod
      annotations:
        #imageregistry: "https://hub.docker.com/"
        #demo: alpha
    
      nodeAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      nodeSelectorTerms:
    #        - matchExpressions:
    #            - key: topology.kubernetes.io/zone
    #              operator: In
    #              values:
    #                - antarctica-east1
    #                - antarctica-west1
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        preference:
    #          matchExpressions:
    #            - key: another-node-label-key
    #              operator: In
    #              values:
    #                - another-node-label-value
    
      # Anti Affinity
      # If set to true then an anti-affinity rule is applied to prevent database pods with the same `neo4j.name` running on a single Kubernetes node.
      # If set to false then no anti-affinity rules are applied
      # If set to an object then that object is used for the Neo4j podAntiAffinity
      podAntiAffinity: true
    
      #Add tolerations to the Neo4j pod
      tolerations:
    #    - key: "key1"
    #      operator: "Equal"
    #      value: "value1"
    #      effect: "NoSchedule"
    #    - key: "key2"
    #      operator: "Equal"
    #      effect: "NoSchedule"
    #      value: "value2"
    
      #Priority indicates the importance of a Pod relative to other Pods.
      # More Information : https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
      priorityClassName: ""
    
      #This indicates that the neo4j instance be included to the loadbalancer. Can be set to exclude to not add the stateful set to loadbalancer
      loadbalancer: "include"
    
      # Name of service account to use for the Neo4j Pod (optional)
      # this is useful if you want to use Workload Identity to grant permissions to access cloud resources e.g. cloud object storage (AWS S3 etc.)
      serviceAccountName: ""
    
      # How long the Neo4j pod is permitted to keep running after it has been signalled by Kubernetes to stop. Once this timeout elapses the Neo4j process is forcibly terminated.
      # A large value is used because Neo4j takes time to flush in-memory data to disk on shutdown.
      terminationGracePeriodSeconds: 3600
    
      # initContainers for the Neo4j pod
      initContainers: [ ]
    
      # additional runtime containers for the Neo4j pod
      containers: [ ]
    
    # print the neo4j user password set during install to the `helm install` log
    logInitialPassword: true
    
    # Jvm configuration for Neo4j
    jvm:
      # If true any additional arguments are added after the Neo4j default jvm arguments.
      # If false Neo4j default jvm arguments are not used.
      useNeo4jDefaultJvmArguments: true
      # additionalJvmArguments is a list of strings. Each jvm argument should be a separate element:
      additionalJvmArguments: []
      # - "-XX:+HeapDumpOnOutOfMemoryError"
      # - "-XX:HeapDumpPath=/logs/neo4j.hprof"
      # - "-XX:MaxMetaspaceSize=180m"
      # - "-XX:ReservedCodeCacheSize=40m"

    You can amend any of these settings. Passing that file during installation overrides the default Helm chart configuration of the Neo4j installation on Kubernetes and the configuration of the Neo4j database itself.

  3. Create the neo4j-values.yaml file with your preferred configuration. For example:

    # neo4j-values.yaml
    
    neo4j:
      password: "my-password"
      resources:
        cpu: "2"
        memory: "5Gi"
    
    volumes:
      data:
        mode: "defaultStorageClass"
    
    # Neo4j configuration (yaml format)
    config:
      dbms.default_database: "neo4j"
      dbms.config.strict_validation: "true"
  4. Pass the neo4j-values.yaml file during installation.

    helm install <release-name> neo4j/neo4j-standalone -f neo4j-values.yaml

    To see the values that have been set for a given release, use helm get values <release-name>.

    Some examples of possible K8s configurations
    • Configure (or disable completely) the Kubernetes LoadBalancer that exposes Neo4j outside the Kubernetes cluster by modifying the services.neo4j object in the values.yaml file (see the sketch after this list).

    • Set the securityContext used by Neo4j Pods by modifying the securityContext object in the values.yaml file.

    • Configure manual persistent volume provisioning or set the StorageClass to be used for the Neo4j persistent storage.
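
    For example, a values.yaml snippet that disables the external LoadBalancer and dynamically provisions the data volume from a named StorageClass could look like the following sketch (the StorageClass name "neo4j" is only a placeholder):

    # neo4j-values.yaml (sketch)
    services:
      neo4j:
        # disable the LoadBalancer service that exposes Neo4j outside the Kubernetes cluster
        enabled: false

    volumes:
      data:
        # dynamic provisioning using the named StorageClass
        mode: "dynamic"
        dynamic:
          storageClassName: "neo4j"
          accessModes:
            - ReadWriteOnce
          requests:
            storage: 100Gi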

    Some examples of possible Neo4j configurations
    • All Neo4j configuration (neo4j.conf) settings can be set directly on the config object in the values.yaml file.

    • Neo4j can be configured to use SSL certificates contained in Kubernetes Secrets by modifying the ssl object in the values file.

Set Neo4j configuration

The Neo4j Helm chart does not use a neo4j.conf file. Instead, the Neo4j configuration is set in the Helm deployment’s values.yaml file under the config object.

The config object should contain a string map of neo4j.conf setting name to value. For example, this config object configures the Neo4j metrics:

# Neo4j configuration (yaml format)
config:
  metrics.enabled: "true"
  metrics.namespaces.enabled: "false"
  metrics.csv.interval: "10s"
  metrics.csv.rotation.keep_number: "2"
  metrics.csv.rotation.compression: "NONE"

All Neo4j config values must be YAML strings. It is important to put quotes around the values, such as "true", "false", and "2", so that they are handled correctly as strings.

All neo4j.conf settings are supported except for dbms.jvm.additional. Additional JVM settings can be set on the jvm object in the Helm deployment values.yaml file, as shown in the example:

# Jvm configuration for Neo4j
jvm:
  additionalJvmArguments:
  - "-XX:+HeapDumpOnOutOfMemoryError"
  - "-XX:HeapDumpPath=/logs/neo4j.hprof"

To find out more about configuring Neo4j and the neo4j.conf file, see Configuration and The neo4j.conf file.

Set an initial password

You can set an initial password for accessing Neo4j in the values.yaml file. If no initial password is set, the Neo4j Helm chart will automatically generate one. In cluster deployments, the same password must be set for all cluster members.

neo4j:
 # If not set or empty a random password will be generated
 password: ""

The password will be printed out in the Helm install output, unless --set logInitialPassword=false is used.

The initial Neo4j password is stored in a Kubernetes Secret. The password can be extracted from the Secret using this command:

kubectl get secret <release-name>-auth -oyaml | yq -r '.data.NEO4J_AUTH' | base64 -d

To change the initial password, follow the steps in Reset the Neo4j user password.

Once you change the password in Neo4j, the password stored in Kubernetes Secrets will still exist but will no longer be valid.

Configure SSL

The Neo4j SSL Framework can be used with Neo4j Helm charts. You can specify SSL policy objects for bolt, https, cluster, backup, and fabric. SSL public certificates and private keys to use with a Neo4j Helm deployment must be stored in Kubernetes Secrets.

To enable Neo4j SSL policies, configure the ssl.<policy name> object in the Neo4j Helm deployment’s values.yaml file to reference the Kubernetes Secrets containing the SSL certificates and keys to use. This example shows how to configure the bolt SSL policy:

ssl:
 bolt:
   privateKey:
     secretName: bolt-cert
     subPath: private.key
   publicCertificate:
     secretName: bolt-cert
     subPath: public.crt

When a private key is specified in the values.yaml file, the Neo4j SSL policy is enabled automatically. To disable a policy, add dbms.ssl.policy.<policy name>.enabled: "false" to the config object.

Unencrypted HTTP is not disabled automatically when HTTPS is enabled. If HTTPS is enabled, add dbms.connector.http.enabled: "false" to the config object to disable HTTP.
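
For example, the following config snippet shows both settings, disabling a policy (here, bolt) and turning off unencrypted HTTP:

# Neo4j configuration (yaml format)
config:
  # disable the bolt SSL policy even though a private key is configured for it
  dbms.ssl.policy.bolt.enabled: "false"
  # disable unencrypted HTTP when HTTPS is enabled
  dbms.connector.http.enabled: "false"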

For more information on configuring SSL policies, see SSL Framework configuration.

The following examples show how to deploy a Neo4j cluster with configured SSL policies.

Create a self-signed certificate

If you do not have a self-signed certificate to use, follow the steps to create one:

  1. Create a new folder for the self-signed certificate. This example uses the /neo4j-ssl folder.

    mkdir neo4j-ssl
    cd neo4j-ssl
  2. Create the private.key and public.crt for the self-signed certificate by using the openssl command and passing all the values in the subj argument:

    openssl req -newkey rsa:2048 -nodes -keyout private.key -x509 -days 365 -out public.crt -subj "/C=GB/ST=London/L=London/O=Neo4j/OU=IT Department"
  3. Verify that the private.key and public.crt files are created:

    ls -lst
    Example output
    -rw-r--r--  1 user  staff  1679  28 Dec 15:00 private.key
    -rw-r--r--  1 user  staff  1679  28 Dec 15:00 public.crt

Configure an SSL policy using a tls Kubernetes secret

This example shows how to configure an SSL policy for intra-cluster communication using a self-signed certificate stored in a tls Kubernetes secret.

  1. Create a Kubernetes TLS secret using the public.crt and private.key files:

    You must have a Kubernetes cluster running and the kubectl command installed. For more information, see Prerequisites.
    1. To create a TLS secret, use the tls option and a secret name, e.g., neo4j-tls:

      kubectl create secret tls neo4j-tls --cert=/path/to/neo4j-ssl/public.crt --key=/path/to/neo4j-ssl/private.key
    2. Verify that the secret is created:

      kubectl get secret
      Example output
      NAME                  TYPE                                  DATA   AGE
      neo4j-tls             kubernetes.io/tls                     2      4s
    3. Verify that the secret contains the public.crt and private.key files:

      kubectl get secret neo4j-tls -o yaml
      Example output
      apiVersion: v1
      data:
        tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLakNDQWhJQ0NRRGRYYVg1Y29mczdEQU5CZ2txaGtpRzl3MEJBUXNGQURCWE1Rc3dDUVlEVlFRR0V3SkgKUWpFUE1BMEdBMVVFQ0F3R1RHOXVaRzl1TVE4d0RRWURWUVFIREFaTWIyNWtiMjR4RGpBTUJnTlZCQW9NQlU1bApielJxTVJZd0ZBWURWUVFMREExSlZDQkVaWEJoY25SdFpXNTBNQjRYRFRJeU1USXlPREl4TURjeU5sb1hEVEl6Ck1USXlPREl4TURjeU5sb3dWekVMTUFrR0ExVUVCaE1DUjBJeER6QU5CZ05WQkFnTUJreHZibVJ2YmpFUE1BMEcKQTFVRUJ3d0dURzl1Wkc5dU1RNHdEQVlEVlFRS0RBVk9aVzgwYWpFV01CUUdBMVVFQ3d3TlNWUWdSR1Z3WVhKMApiV1Z1ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDBRc0c2Ukwrd3hxZSt3CjJGSWljZldVaUFtdmNqeVdlS0lKaThuT2tBSGIvSTYzUUU2L3ZpR3RNeEI3S28xdUJLNlVPZXBaeU91UzE2bUMKaitpMDAwbmFnWkR3RGNyRXd3UUE1cTBGMC90VXB5UHBaL1p3clhEaGFDOXhzVnFnVms0TXl5aUtTNzRIOUc2UgprUUV4dHBaNFArcTlaRHVFVk1KVGVaL2pQNGZoTkg2MUpSTVdORTJ3NjNUWkx2ZGMyUitXL2U5N3h2TGQ5Y0FnCjlqTm9FMHo5UHRmczB2L2lyUGhuUHpzWHQ5bzE0MWlnOVFZNjNtMzBxQ0NaYnpMRlR6WFgvdTUvTSsycFB3WXoKcUNOTUZYYW1ITlAxdlRPWFlRTG1iYW1JdVplYnVPNEVlUHZ6WUVXSmEyUi9oTmhtUDNvM2tRVFAzdmF1UEFjZQpSQlJZS09NQ0F3RUFBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBWVh6SkIzNU55cExDUEdvQXVJdGt5dHBFCk1ZeSs5YnlrV3BEVnhRaXZLUHQ3d1NUaWZjNU1QdW5NUy9xYmxnaWprWm5IaWVLeEdoU3lQL283QndtMzJDSnAKQUFsSjJ3RjhIVXVSSGpYcUU5dkNQeFdtVlVJS2ExOWN5V0tUYWhySWU1eWZkQWNkbUJmRzJNWnY0dEdFeWxsUgo0Vk81STdRNjVWZDlGQnB0U3JjS3R1WUtBUzg2RTBHZmlmMWxCakdUZTFZbkhvK1RZTVpoVEUvN3RlNHZ1M251CjA4Y1BmbS9RYThSNFBXZDZNbXVDaTJYcDduWVlEMmp3WklCSENtMUU3U1RrdS9JRk5kOWFWRW91VG5KR1pCWFcKeWVzWG9OMXhOb3kvMXZFdElhV2xXZW1GcGo4clJ6VGJQekQ1TEpiNDBSRFVOTXN3NytLUXczV3BBMjVKUHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
        tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktZd2dnU2lBZ0VBQW9JQkFRQzlFTEJ1a1Mvc01hbnYKc05oU0luSDFsSWdKcjNJOGxuaWlDWXZKenBBQjIveU90MEJPdjc0aHJUTVFleXFOYmdTdWxEbnFXY2pya3RlcApnby9vdE5OSjJvR1E4QTNLeE1NRUFPYXRCZFA3VktjajZXZjJjSzF3NFdndmNiRmFvRlpPRE1zb2lrdStCL1J1CmtaRUJNYmFXZUQvcXZXUTdoRlRDVTNtZjR6K0g0VFIrdFNVVEZqUk5zT3QwMlM3M1hOa2ZsdjN2ZThieTNmWEEKSVBZemFCTk0vVDdYN05MLzRxejRaejg3RjdmYU5lTllvUFVHT3Q1dDlLZ2dtVzh5eFU4MTEvN3VmelB0cVQ4RwpNNmdqVEJWMnBoelQ5YjB6bDJFQzVtMnBpTG1YbTdqdUJIajc4MkJGaVd0a2Y0VFlaajk2TjVFRXo5NzJyandICkhrUVVXQ2pqQWdNQkFBRUNnZ0VBTmF6OVNnYXlJazVmUG90b2Zya0V2WUh6dFR3NEpIZGJ2RFVWbUsrcU5yenIKME9DNXd5R3dxd0x2RW1qRlJlM01Lbnd1alJmOGNOVDVvVWhON3ZVWFgwcEhxb3hjZmdxcWl3SnVld1RDa0FJUwppYUdFUUhUdzZMRTEwUEpvTmFCN29DRUZ0SGErMWk2UCtLd2ZETVcrWHEyNUI3M0pMUlIrczhUYkxNZHBpL3VvCjRmTFNJV0xDV09MZThUTlU0ck5vVDQ0enY0eUhUOXAyV3liSUNrL3F5bVV3bTlhRHFnYzRJRzk4YXVVNG5JYVQKenk2T3NBODdONW9FME0rcHlUdEFJcmxRZFBXUzBBZ28xZUJCcWplL1I3MTI4TmdHVzhOTzVMWDBtNit2YzhyVgpaTHh0N1d0NThucXR2WlI3QTF5SU9lbWtocHl2Q3hrNVRxSmZQRlJxRVFLQmdRRGlOL3NBZncrZ1dvTFpLbTNyCm50WVkyRW9TOTBkQ0wwd293SGFGa0lsL2hIWXduQi9qaWlnMU5ZbEhDNzNPVDdDc2UycS9tS0xhMzZBVHlpTHcKZjN1T0J3NmNFZ2RJZlU5aDBtZjJCNFZXdEVEeDJMSU94MEtZU2VrYldTTVZQZ2w2SkhNb3hLdjNMbEx5R1RiMApZQmtKVmpRdkVLS1dpa1FLMUdPYnZtdzFWUUtCZ1FEVjlJLzc5WFJuN1EzZ3M4Z2JqZWhDejRqNHdtNWdpNFM2CkVsVzVJWkFidDh3QWZPdVIzUm4wQW41NFl0ZW1HUk1seDF0L1ZUM1IzK0J5bmVTZEgzbUJ6eVJEQysvRGhBTlYKNVZPckk5SFhnVTRMSElVMmNwVVZxYVo3N1J1b2JnTmlDenBmOVZPVkNadzdmQzRPYkFqcTMzQ3RtT2taR0hRbAo2dkJtNm1ubFZ3S0JnQ2FnOW95TUplZjA3TGtXcExTQ1ovN1FHRDRLMmJFMGtHVzVEOFFZL1ZHNEZkS1JKbVRkCmQ2WTJZUjJ2cEpheFJ2TDlGQ3BwYncyKzkvL0pHWlJGd0p4dEdoS09oWTNjVUF6ZE9BRnNJVm0vNkFNa1JLdC8KWFNEU0ppc1VXb2hMRXFVM3lpNWcveGh6WVppVHM2MmhKMFZQNGhOVFhPQWw5aDUvVEE4UlFqc05Bb0dBTm84Twp5R2xuTGJrOWVMZGZwK2NmK3ltQS9DNVloellNdW9aQ1pkc3hMR0JLSFRXOXZJeHRPZFFJL0JuNGM5cWhEMWt1CjgrR0F5aXdVeUNXTFRxWGdEa0lNTlN5dUQyVnlsRXpPY1MzSkxQTkVPNEVpVnlnUTdGMCtud3R2cWh1anNUUzcKeGd5Qks5Z3ZodHU3d3VHNXhHc0dDTDZkY2xEU0RYbERwSHJTVmpFQ2dZQWx0STNjMzJxaG5KU2xHSGhjdW1wRwpReGpvYnJBUUxUa3dyOWk2TkNuS0EyNVR1SVFXa1NiN2JUWWtuUi80WDhOT2w2U2EvYm9QK2dncWNJM0haSk05CkxJRnpPUTFWT1luQ2ZYZVd0SmlHQklwUExadFdobnA3NGVhdmJKYW9udlhVVGNZcm5qcytIWGhpaFhjOUhENWsKeEJEaWJKYUlEbXg2T1FpVWI2RndJZz09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
      kind: Secret
      metadata:
        creationTimestamp: "2023-01-03T16:04:11Z"
        managedFields:
        - apiVersion: v1
          fieldsType: FieldsV1
          fieldsV1:
            f:data:
              .: {}
              f:tls.crt: {}
              f:tls.key: {}
            f:type: {}
          manager: kubectl
          operation: Update
          time: "2023-01-03T16:04:11Z"
        name: neo4j-tls
        namespace: default
        resourceVersion: "77665"
        uid: 7feb1270-476e-4efd-afbe-57dc797768d9
      type: kubernetes.io/tls
  2. Configure the ssl object in the ssl-values.yaml file using the created secret:

    ssl:
    # setting per "connector" matching neo4j config
      bolt:
        privateKey:
          secretName: neo4j-tls
          subPath: tls.key
        publicCertificate:
          secretName: neo4j-tls
          subPath: tls.crt
      https:
        privateKey:
          secretName: neo4j-tls
          subPath: tls.key
        publicCertificate:
          secretName: neo4j-tls
          subPath: tls.crt
        trustedCerts:
          sources:
          - secret:
              name: neo4j-tls
              items:
              - key: tls.crt
                path: public.crt
      cluster:
        privateKey:
          secretName: neo4j-tls
          subPath: tls.key
        publicCertificate:
          secretName: neo4j-tls
          subPath: tls.crt
        trustedCerts:
          sources:
          - secret:
              name: neo4j-tls
              items:
              - key: tls.crt
                path: public.crt
        revokedCerts:
          sources: [ ]

Now you are ready to deploy the Neo4j cluster using the configured ssl-values.yaml file and the Neo4j Helm charts.

Configure an SSL policy using a generic Kubernetes secret

This example shows how to configure an SSL policy for intra-cluster communication using a self-signed certificate stored in a generic Kubernetes secret.

  1. Create a Kubernetes generic secret using the public.crt and private.key files:

    You must have a Kubernetes cluster running and the kubectl command installed. For more information, see Prerequisites.
    1. Get the Base64-encoded value of your public.crt and private.key:

      cat public.crt | base64
      cat private.key | base64
    2. Using the Base64-encoded values of your public.crt and private.key, create a secret.yaml file:

      apiVersion: v1
      data:
        public.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLakNDQWhJQ0NRRGRYYVg1Y29mczdEQU5CZ2txaGtpRzl3MEJBUXNGQURCWE1Rc3dDUVlEVlFRR0V3SkgKUWpFUE1BMEdBMVVFQ0F3R1RHOXVaRzl1TVE4d0RRWURWUVFIREFaTWIyNWtiMjR4RGpBTUJnTlZCQW9NQlU1bApielJxTVJZd0ZBWURWUVFMREExSlZDQkVaWEJoY25SdFpXNTBNQjRYRFRJeU1USXlPREl4TURjeU5sb1hEVEl6Ck1USXlPREl4TURjeU5sb3dWekVMTUFrR0ExVUVCaE1DUjBJeER6QU5CZ05WQkFnTUJreHZibVJ2YmpFUE1BMEcKQTFVRUJ3d0dURzl1Wkc5dU1RNHdEQVlEVlFRS0RBVk9aVzgwYWpFV01CUUdBMVVFQ3d3TlNWUWdSR1Z3WVhKMApiV1Z1ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDBRc0c2Ukwrd3hxZSt3CjJGSWljZldVaUFtdmNqeVdlS0lKaThuT2tBSGIvSTYzUUU2L3ZpR3RNeEI3S28xdUJLNlVPZXBaeU91UzE2bUMKaitpMDAwbmFnWkR3RGNyRXd3UUE1cTBGMC90VXB5UHBaL1p3clhEaGFDOXhzVnFnVms0TXl5aUtTNzRIOUc2UgprUUV4dHBaNFArcTlaRHVFVk1KVGVaL2pQNGZoTkg2MUpSTVdORTJ3NjNUWkx2ZGMyUitXL2U5N3h2TGQ5Y0FnCjlqTm9FMHo5UHRmczB2L2lyUGhuUHpzWHQ5bzE0MWlnOVFZNjNtMzBxQ0NaYnpMRlR6WFgvdTUvTSsycFB3WXoKcUNOTUZYYW1ITlAxdlRPWFlRTG1iYW1JdVplYnVPNEVlUHZ6WUVXSmEyUi9oTmhtUDNvM2tRVFAzdmF1UEFjZQpSQlJZS09NQ0F3RUFBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBWVh6SkIzNU55cExDUEdvQXVJdGt5dHBFCk1ZeSs5YnlrV3BEVnhRaXZLUHQ3d1NUaWZjNU1QdW5NUy9xYmxnaWprWm5IaWVLeEdoU3lQL283QndtMzJDSnAKQUFsSjJ3RjhIVXVSSGpYcUU5dkNQeFdtVlVJS2ExOWN5V0tUYWhySWU1eWZkQWNkbUJmRzJNWnY0dEdFeWxsUgo0Vk81STdRNjVWZDlGQnB0U3JjS3R1WUtBUzg2RTBHZmlmMWxCakdUZTFZbkhvK1RZTVpoVEUvN3RlNHZ1M251CjA4Y1BmbS9RYThSNFBXZDZNbXVDaTJYcDduWVlEMmp3WklCSENtMUU3U1RrdS9JRk5kOWFWRW91VG5KR1pCWFcKeWVzWG9OMXhOb3kvMXZFdElhV2xXZW1GcGo4clJ6VGJQekQ1TEpiNDBSRFVOTXN3NytLUXczV3BBMjVKUHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
        private.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktZd2dnU2lBZ0VBQW9JQkFRQzlFTEJ1a1Mvc01hbnYKc05oU0luSDFsSWdKcjNJOGxuaWlDWXZKenBBQjIveU90MEJPdjc0aHJUTVFleXFOYmdTdWxEbnFXY2pya3RlcApnby9vdE5OSjJvR1E4QTNLeE1NRUFPYXRCZFA3VktjajZXZjJjSzF3NFdndmNiRmFvRlpPRE1zb2lrdStCL1J1CmtaRUJNYmFXZUQvcXZXUTdoRlRDVTNtZjR6K0g0VFIrdFNVVEZqUk5zT3QwMlM3M1hOa2ZsdjN2ZThieTNmWEEKSVBZemFCTk0vVDdYN05MLzRxejRaejg3RjdmYU5lTllvUFVHT3Q1dDlLZ2dtVzh5eFU4MTEvN3VmelB0cVQ4RwpNNmdqVEJWMnBoelQ5YjB6bDJFQzVtMnBpTG1YbTdqdUJIajc4MkJGaVd0a2Y0VFlaajk2TjVFRXo5NzJyandICkhrUVVXQ2pqQWdNQkFBRUNnZ0VBTmF6OVNnYXlJazVmUG90b2Zya0V2WUh6dFR3NEpIZGJ2RFVWbUsrcU5yenIKME9DNXd5R3dxd0x2RW1qRlJlM01Lbnd1alJmOGNOVDVvVWhON3ZVWFgwcEhxb3hjZmdxcWl3SnVld1RDa0FJUwppYUdFUUhUdzZMRTEwUEpvTmFCN29DRUZ0SGErMWk2UCtLd2ZETVcrWHEyNUI3M0pMUlIrczhUYkxNZHBpL3VvCjRmTFNJV0xDV09MZThUTlU0ck5vVDQ0enY0eUhUOXAyV3liSUNrL3F5bVV3bTlhRHFnYzRJRzk4YXVVNG5JYVQKenk2T3NBODdONW9FME0rcHlUdEFJcmxRZFBXUzBBZ28xZUJCcWplL1I3MTI4TmdHVzhOTzVMWDBtNit2YzhyVgpaTHh0N1d0NThucXR2WlI3QTF5SU9lbWtocHl2Q3hrNVRxSmZQRlJxRVFLQmdRRGlOL3NBZncrZ1dvTFpLbTNyCm50WVkyRW9TOTBkQ0wwd293SGFGa0lsL2hIWXduQi9qaWlnMU5ZbEhDNzNPVDdDc2UycS9tS0xhMzZBVHlpTHcKZjN1T0J3NmNFZ2RJZlU5aDBtZjJCNFZXdEVEeDJMSU94MEtZU2VrYldTTVZQZ2w2SkhNb3hLdjNMbEx5R1RiMApZQmtKVmpRdkVLS1dpa1FLMUdPYnZtdzFWUUtCZ1FEVjlJLzc5WFJuN1EzZ3M4Z2JqZWhDejRqNHdtNWdpNFM2CkVsVzVJWkFidDh3QWZPdVIzUm4wQW41NFl0ZW1HUk1seDF0L1ZUM1IzK0J5bmVTZEgzbUJ6eVJEQysvRGhBTlYKNVZPckk5SFhnVTRMSElVMmNwVVZxYVo3N1J1b2JnTmlDenBmOVZPVkNadzdmQzRPYkFqcTMzQ3RtT2taR0hRbAo2dkJtNm1ubFZ3S0JnQ2FnOW95TUplZjA3TGtXcExTQ1ovN1FHRDRLMmJFMGtHVzVEOFFZL1ZHNEZkS1JKbVRkCmQ2WTJZUjJ2cEpheFJ2TDlGQ3BwYncyKzkvL0pHWlJGd0p4dEdoS09oWTNjVUF6ZE9BRnNJVm0vNkFNa1JLdC8KWFNEU0ppc1VXb2hMRXFVM3lpNWcveGh6WVppVHM2MmhKMFZQNGhOVFhPQWw5aDUvVEE4UlFqc05Bb0dBTm84Twp5R2xuTGJrOWVMZGZwK2NmK3ltQS9DNVloellNdW9aQ1pkc3hMR0JLSFRXOXZJeHRPZFFJL0JuNGM5cWhEMWt1CjgrR0F5aXdVeUNXTFRxWGdEa0lNTlN5dUQyVnlsRXpPY1MzSkxQTkVPNEVpVnlnUTdGMCtud3R2cWh1anNUUzcKeGd5Qks5Z3ZodHU3d3VHNXhHc0dDTDZkY2xEU0RYbERwSHJTVmpFQ2dZQWx0STNjMzJxaG5KU2xHSGhjdW1wRwpReGpvYnJBUUxUa3dyOWk2TkNuS0EyNVR1SVFXa1NiN2JUWWtuUi80WDhOT2w2U2EvYm9QK2dncWNJM0haSk05CkxJRnpPUTFWT1luQ2ZYZVd0SmlHQklwUExadFdobnA3NGVhdmJKYW9udlhVVGNZcm5qcytIWGhpaFhjOUhENWsKeEJEaWJKYUlEbXg2T1FpVWI2RndJZz09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
      kind: Secret
      metadata:
        name: neo4j-tls
        namespace: default
      type: Opaque
    3. Create the generic secret using the kubectl create command and the secret.yaml file:

      kubectl create -f /path/to/secret.yaml
      Example output
      secret/neo4j-tls created
    4. Verify that the secret is created:

      kubectl get secret
      Example output
      NAME        TYPE     DATA   AGE
      neo4j-tls   Opaque   2      23m
  2. Configure the ssl object in the ssl-values.yaml file using the created secret:

    ssl:
    # setting per "connector" matching neo4j config
      bolt:
        privateKey:
          secretName: neo4j-tls
          subPath: private.key
        publicCertificate:
          secretName: neo4j-tls
          subPath: public.crt
      https:
        privateKey:
          secretName: neo4j-tls
          subPath: private.key
        publicCertificate:
          secretName: neo4j-tls
          subPath: public.crt
        trustedCerts:
          sources:
          - secret:
              name: neo4j-tls
              items:
              - key: public.crt
                path: public.crt
      cluster:
        privateKey:
          secretName: neo4j-tls
          subPath: private.key
        publicCertificate:
          secretName: neo4j-tls
          subPath: public.crt
        trustedCerts:
          sources:
          - secret:
              name: neo4j-tls
              items:
              - key: public.crt
                path: public.crt
        revokedCerts:
          sources: [ ]

Now you are ready to deploy the Neo4j cluster using the ssl-values.yaml file and the Neo4j Helm charts.

Deploy a Neo4j cluster with SSL certificates

Deploy a Neo4j cluster using the Neo4j Helm charts and the ssl-values.yaml file.

  1. Install the core-1:

    helm install core-1 neo4j/neo4j-cluster-core --set neo4j.acceptLicenseAgreement=yes --set neo4j.password=my-password --set neo4j.edition="enterprise" --set volumes.data.mode=defaultStorageClass -f ~/Documents/neo4j-ssl/ssl-values.yaml
  2. Repeat the command from the previous step for core-2 and core-3.

  3. Verify that the Neo4j cluster is running:

    kubectl get pods
    Example output
    NAME                       READY   STATUS    RESTARTS   AGE
    core-1-0                   1/1     Running   0          2m
    core-2-0                   1/1     Running   0          2m
    core-3-0                   1/1     Running   0          2m
  4. Connect to one of the cores and verify that the /certificates/cluster directory contains the certificates:

    kubectl exec -it core-1-0 -- bash
    neo4j@core-1-0:~$ cd certificates/
    neo4j@core-1-0:~/certificates$ ls -lst
    Example output
    total 12
    4 drwxr-xr-x 2 root root 4096 Jan  3 16:07 bolt
    4 drwxr-xr-x 3 root root 4096 Jan  3 16:07 cluster
    4 drwxr-xr-x 3 root root 4096 Jan  3 16:07 https
    neo4j@core-1-0:~/certificates$ cd cluster/
    neo4j@core-1-0:~/certificates/cluster$ ls -lst
    Example output
    total 8
    4 -rw-r--r-- 1 root neo4j 1159 Jan  3 16:06 public.crt
    0 drwxrwsrwt 3 root neo4j  100 Jan  3 16:06 trusted
    4 -rw-r--r-- 1 root neo4j 1704 Jan  3 16:06 private.key
    neo4j@core-1-0:~/certificates/cluster$ cd trusted/
    neo4j@core-1-0:~/certificates/cluster/trusted$ ls -lst
    Example output
    total 4
    4 -rw-r--r-- 1 root neo4j 1159 Jan  3 16:06 public.crt
  5. Exit the pod:

    exit
  6. Install a load balancer for the Neo4j cluster:

    helm install lb neo4j/neo4j-cluster-loadbalancer
  7. Verify that the load balancer is also running:

    kubectl get services  | grep lb
    Example output
    lb-neo4j           LoadBalancer   10.0.177.22    20.62.191.227   7474:31164/TCP,7473:32555/TCP,7687:30541/TCP  36m
  8. Connect to the Neo4j cluster using one of the following options:

    • Neo4j Browser:

      1. Open a web browser and go to https://<EXTERNAL-IP>:7473 (in this example, https://20.62.191.227:7473/browser/). You should see the Neo4j Browser.

      2. Authenticate using the user neo4j and the password you set when deploying the cores, in this example, my-password.

      3. Verify that the cluster is online by running :sysinfo or CALL dbms.cluster.overview():

        Cluster sysinfo
      4. Run CALL dbms.listConfig('ssl') YIELD name, value to verify that the configuration is deployed as expected.

    • Cypher Shell:

      1. Open a terminal and connect to one of the cluster pods:

        kubectl exec -it core-1-0 -- bash
      2. Navigate to the bin directory and connect to core-1 using cypher-shell:

        neo4j@core-1-0:~$ cd bin
        neo4j@core-1-0:~/bin$ ./cypher-shell -u neo4j -p my-password -a neo4j+ssc://core-1.default.svc.cluster.local:7687
        Example output
        Connected to Neo4j using Bolt protocol version 4.4 at neo4j+ssc://core-1.default.svc.cluster.local:7687 as user neo4j.
        Type :help for a list of available commands or :exit to exit the shell.
        Note that Cypher queries must end with a semicolon.
        neo4j@neo4j>
      3. Verify that the cluster is online by running CALL dbms.cluster.overview():

        neo4j@neo4j> CALL dbms.cluster.overview();
        Example output
        +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
        | id                                     | addresses                                                                                           | databases                               | groups                                         |
        +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
        | "8edca247-a917-48d8-bfae-624509f3e515" | ["bolt://core-2.default.svc.cluster.local:7687", "http://localhost:7474", "https://localhost:7473"] | {neo4j: "LEADER", system: "FOLLOWER"}   | ["$(bash -c 'echo pod-${POD_NAME}')", "cores"] |
        | "e6ccd8ba-ad23-47b2-a417-8c0b925e1e7a" | ["bolt://core-3.default.svc.cluster.local:7687", "http://localhost:7474", "https://localhost:7473"] | {neo4j: "FOLLOWER", system: "FOLLOWER"} | ["$(bash -c 'echo pod-${POD_NAME}')", "cores"] |
        | "6c7ee59b-ad7e-4819-a29a-ab6c5838d7a2" | ["bolt://core-1.default.svc.cluster.local:7687", "http://localhost:7474", "https://localhost:7473"] | {neo4j: "FOLLOWER", system: "LEADER"}   | ["$(bash -c 'echo pod-${POD_NAME}')", "cores"] |
        +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
        
        3 rows
        ready to start consuming query after 11 ms, results consumed after another 1 ms
        neo4j@neo4j>
      4. Run CALL dbms.listConfig('ssl') YIELD name, value; to verify that the configuration is deployed as expected.

Configure SSO

Neo4j supports SSO authentication and authorization through identity providers implementing the OpenID Connect (OIDC) standard.

To configure the Neo4j Helm deployment to use SSO authentication, first configure your identity provider for authentication and authorization using ID tokens, and then configure the Neo4j Helm deployment to use that identity provider by adding the SSO settings to the values.yaml file.

For more information on how to configure your identity provider and what settings you should define, see Neo4j Single Sign-On (SSO) configuration.

An example of configuring Neo4j to use Azure SSO for authentication
config:
  dbms.security.oidc.azure.audience: "00f3a7d3-d855-4849-9e3c-57d7b6e12794"
  dbms.security.oidc.azure.params: "client_id=00f3a7d3-d855-4849-9e3c-57d7b6e12794;response_type=code;scope=openid profile email"
  dbms.security.oidc.azure.well_known_discovery_uri: "https://login.microsoftonline.com/da501982-4ca7-420c-8926-1e65b5bf565f/v2.0/.well-known/openid-configuration"
  dbms.security.authorization_providers: "oidc-azure,native"
  dbms.security.authentication_providers: "oidc-azure,native"
  dbms.security.oidc.azure.display_name: "Azure SSO on K8s"
  dbms.security.oidc.azure.auth_flow: "pkce"
  dbms.security.oidc.azure.config: "principal=unique_name;code_challenge_method=S256;token_type_principal=id_token;token_type_authentication=id_token"
  dbms.security.oidc.azure.claims.username: "sub"
  dbms.security.oidc.azure.claims.groups: "groups"
  dbms.security.oidc.azure.authorization.group_to_role_mapping: "e197354c-bd75-4524-abbc-d44325904567=editor;fa31ce67-9e4d-4999-bf6d-25c55258d116=publisher"

sub is the only claim guaranteed to be unique and stable. Other claims, such as email or preferred_username, may change over time and should not be used for authentication. Neo4j may assign permissions to a user based on this username value in a hybrid authorization configuration, so changing the username claim from sub is not recommended. For details, see the Microsoft documentation as well as the OpenID specification.

Configure LDAP password through secret

Enterprise Edition

To configure the Neo4j Helm deployment to use the LDAP system password through a secret, create a Kubernetes secret containing the LDAP password, and then add the secret name and the mount path to the values.yaml file.

  1. Create a secret with the LDAP password by running the following command. The secret must have a key called LDAP_PASS.

    kubectl create secret generic <secret-name> --from-literal=LDAP_PASS=<ldap-password>
  2. Add the secret name to the values.yaml file.

    # ldapPasswordFromSecret defines the secret which holds the password for ldap system account
    # Secret key name must be LDAP_PASS
    # This secret is accessible by Neo4j at the path defined in ldapPasswordMountPath
    ldapPasswordFromSecret: "" (1)
    # The above secret gets mounted to the path mentioned here
    ldapPasswordMountPath: "" (2)
    1  — The secret name as it appears in the Kubernetes cluster.
    2  — The path where the secret will be mounted as a volume in the Neo4j container.
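
For example, using a hypothetical secret named ldapsecret and an arbitrary mount path:

kubectl create secret generic ldapsecret --from-literal=LDAP_PASS='my-ldap-password'

# values.yaml
ldapPasswordFromSecret: "ldapsecret"
ldapPasswordMountPath: "/config/ldapPassword"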

Configure resource allocation

CPU and memory

The resources (CPU and memory) for the Neo4j container are configured by setting the neo4j.resources object in the values.yaml file. The resource requests specify how much CPU and memory the Neo4j container needs, while the resource limits set an upper bound on the resources the container is allowed to use.

neo4j:
  resources:
    requests:
     cpu: "1000m"
     memory: "2Gi"
    limits:
     cpu: "2000m"
     memory: "4Gi"

If no separate resource requests and limits are specified, the values set directly in the resources object are used for both the Neo4j container's resource requests and its limits.

neo4j:
  resources:
    cpu: "2"
    memory: "5Gi"

The minimum for a Neo4j instance is 0.5 CPU and 2GB memory.
If invalid values, or values below the minimum, are provided, Helm throws an error, for example:

Error: template: neo4j-standalone/templates/_helpers.tpl:157:11: executing "neo4j.resources.evaluateCPU" at <fail (printf "Provided cpu value %s is less than minimum. \n %s" (.Values.neo4j.resources.cpu) (include "neo4j.resources.invalidCPUMessage" .))>: error calling fail: Provided cpu value 0.25 is less than minimum.
 cpu value cannot be less than 0.5 or 500m

JVM heap and page cache

You configure Neo4j to use the memory provided to the container by setting the parameters dbms.memory.heap.max_size and dbms.memory.pagecache.size. Combined, they must not exceed the memory configured for the Neo4j container.
In Kubernetes, processes in the Neo4j container that exceed the configured memory limit are killed by the underlying operating system. Therefore, it is recommended to allow an additional 1GB of memory headroom, so that heap + pagecache + 1GB < available memory.

For example, a 5GB container could be configured like this:

neo4j:
  resources:
    cpu: "2"
    memory: "5Gi"

# Neo4j configuration (yaml format)
config:
  dbms.memory.heap.initial_size: "3G"
  dbms.memory.heap.max_size: "3G"
  dbms.memory.pagecache.size: "1G"

dbms.memory.pagecache.size and dbms.memory.heap.initial_size are not the only settings available in Neo4j to manage memory usage. For full details of how to configure memory usage in Neo4j, see Performance - Memory Configuration.

Configure a service account

In some deployment situations, it may be desirable to assign a Kubernetes service account to the Neo4j pod, for example, if processes in the pod need to connect to services that require service account authorization. To configure the Neo4j pod to use a Kubernetes service account, set podSpec.serviceAccountName to the name of the service account to use.

For example:

# neo4j-values.yaml
neo4j:
  password: "my-password"

podSpec:
  serviceAccountName: "neo4j-service-account"

The service account must already exist. In the case of clusters, the Neo4j Helm chart creates and configures the service account.
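
If the service account does not already exist in the namespace, you can create it with kubectl before installing the chart, for example:

kubectl create serviceaccount neo4j-service-account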

Configure a custom container image

The Helm chart uses the official Neo4j Docker image that matches the version of the Helm chart. To configure the Helm chart to use a different container image, set the image.customImage property in the values.yaml file.

This can be necessary when public container repositories are not accessible for security reasons. For example, this values.yaml file configures Neo4j to use my-container-repository.io as the container repository:

# neo4j-values.yaml
neo4j:
  password: "my-password"

image:
  customImage: "my-container-repository.io/neo4j:4.4-enterprise"

Configure and install APOC core only

APOC core is shipped with Neo4j, but it is not installed in the Neo4j plugins directory. If APOC core is the only plugin that you want to add to Neo4j, it is not necessary to perform plugin installation as described in Install Plugins. Instead, you can configure the helm deployment to use APOC core by upgrading the deployment with these additional settings in the values.yaml file:

  1. Configure APOC core by loading and unrestricting the functions and procedures you need (for more details, see the APOC installation guide). For example:

    config:
      dbms.directories.plugins: "/var/lib/neo4j/labs"
      dbms.security.procedures.unrestricted: "apoc.cypher.doIt"
      dbms.security.procedures.allowlist: "apoc.math.maxInteger,apoc.cypher.doIt"
  2. Under apoc_config, configure the APOC settings you want, for example:

    apoc_config:
      apoc.trigger.enabled: "true"
      apoc.jdbc.neo4j.url: "jdbc:foo:bar"
      apoc.import.file.enabled: "true"
  3. Run helm upgrade to apply the changes:

    helm upgrade <release-name> neo4j/neo4j-standalone -f values.yaml
  4. After the Helm upgrade rollout is complete, verify that APOC core has been configured by running the following Cypher query using cypher-shell or Neo4j Browser:

    RETURN apoc.version()
  5. Verify the APOC configs using the apoc.config.list() procedure:

    CALL apoc.config.list() YIELD key, value WHERE key = "apoc.jdbc.neo4j.url" RETURN *;

Configure credentials for plugin’s aliases using APOC-extended

From 4.4.23, the Neo4j Helm chart supports configuring credentials for the plugin's aliases using a Kubernetes secret mounted on the provided path. This feature is available for apoc.jdbc.<aliasname>.url and apoc.es.<aliasname>.url via APOC-extended.

The secret must be created beforehand and must contain a key named URL; otherwise, the Helm chart throws an error. For example: kubectl create secret generic jdbcsecret --from-literal=URL="jdbc:mysql://30.0.0.0:3306/Northwind?user=root&password=password"

Under apoc_credentials, configure aliasName, secretName, and secretMountPath. For example:

apoc_credentials:
  jdbc:
    aliasName: "jdbc"
    secretName: "jdbcsecret"
    secretMountPath: "/secret/jdbcCred"

  elasticsearch:
    aliasName: "es"
    secretName: "essecret"
    secretMountPath: "/secret/esCred"

Install Plugins

There are several recommended methods for adding Neo4j plugins to Neo4j Helm chart deployments: an automatic plugin download, an init container, a custom container image, or a plugins volume. Each method is described in the following sections.

Add plugins using an automatic plugin download

You can configure the Neo4j deployment to automatically download and install plugins. If licenses are required for the plugins, you must provide the licenses in a secret.

Install GDS Community Edition (CE)

GDS Community Edition does not require a license. To add GDS CE, configure the Neo4j values.yaml file and set the env object to download the plugin:

neo4j:
  name: licenses
  acceptLicenseAgreement: "yes"
  edition: enterprise
volumes:
  data:
    mode: defaultStorageClass
env:
  NEO4J_PLUGINS: '["graph-data-science"]'
config:
  dbms.security.procedures.unrestricted: "gds.*"

Install GDS Enterprise Edition (EE) and Bloom plugins

To install GDS EE and Bloom, you must provide a license for each plugin. You provide the licenses in a secret.

  1. Create a secret containing the licenses:

    kubectl create secret generic gds-bloom-license --from-file=gds.license --from-file=bloom.license
  2. Configure the Neo4j values.yaml file using the secret as the /licenses volume mount, and set the env to download the plugins:

    neo4j:
      name: licenses
      acceptLicenseAgreement: "yes"
      edition: enterprise
    volumes:
      data:
        mode: defaultStorageClass
      licenses:
        mode: volume
        volume:
          secret:
            secretName: gds-bloom-license
            items:
              - key: gds.license
                path: gds.license
              - key: bloom.license
                path: bloom.license
    env:
      NEO4J_PLUGINS: '["graph-data-science", "bloom"]'
    config:
      gds.enterprise.license_file: "/licenses/gds.license"
      dbms.security.procedures.unrestricted: "gds.*,bloom.*"
      server.unmanaged_extension_classes: "com.neo4j.bloom.server=/bloom,semantics.extension=/rdf"
      dbms.security.http_auth_allowlist: "/,/browser.*,/bloom.*"
      dbms.bloom.license_file: "/licenses/bloom.license"

Add plugins using an init container

Init containers are specialized containers that run before the main containers (in this case, the Neo4j container) in a pod. You can use an init container to add plugins that are hosted on an internal file server and are available to you via the internal network.

In the following example, a YAML file called plugin_initContainer.yaml configures an init container that downloads a plugin, in this case APOC, from an internal file server into the /plugins directory, which is backed by a persistent volume. A Neo4j standalone server is then deployed using this file. When the Neo4j container starts, it automatically installs the plugins from the /plugins directory.

  1. Configure what operations the init container should run using the initContainers property in the plugin_initContainer.yaml file.

    Some Neo4j plugins, such as Bloom and GDS Enterprise, require a license activation key, which needs to be placed in a directory accessible by the Neo4j Docker container, for example, mounted to /licenses (default). To obtain a valid license, reach out to your Neo4j account representative or write to licensing@neo4j.com.

    neo4j:
      resources:
        cpu: "0.5"
        memory: "2G"
    ​
      password: "password"
    ​
      edition: "enterprise"
      acceptLicenseAgreement: "yes"
    ​
    ​
    volumes:
      data:
        mode: defaultStorageClass
      plugins:
        mode: "share"
        share:
          name: "data"
    # licenses:
      # mode: "share"
      # share:
      #   name: "data"
    podSpec:
      initContainers:
        - name: get-plugins
          command: ["wget", "/path/to/the/downloaded/plugin-file/apoc-version-all.jar", "-O", "/plugins/apoc.jar"]
      # - name: get-licenses
      #   command: ["wget", "/path/to/the/downloaded/plugin-file/plugin-file-version-all.jar", "-O", "/licenses/plugin-file.license"]
    ​
    config:
      dbms.directories.plugins: "/plugins"
      # dbms.directories.licenses: "/licenses"
      dbms.security.procedures.unrestricted: "apoc.*"
      dbms.config.strict_validation: "false"
      dbms.security.procedures.allowlist: "apoc.*"
    
    apoc_config:
      apoc.trigger.enabled: "true"
      apoc.jdbc.neo4j.url: "jdbc:foo:bar"
      apoc.import.file.enabled: "true"
  2. Deploy a Neo4j standalone server using the plugin_initContainer.yaml file and the neo4j/neo4j-standalone Helm chart:

    helm install neo4j neo4j/neo4j-standalone -f ~/path/to/plugin_initContainer.yaml
  3. After the pod changes its status from Init to Running, verify that the mount has the apoc.jar:

    kubectl exec -it standalone-0 -- bash
    Example output
    neo4j@standalone-0:/$ cd /plugins
    neo4j@standalone-0:/plugins$ ls -lst
    total 21140
    21140 -rw-r--r-- 1 neo4j neo4j 21645102 Nov 28 12:03 apoc.jar

    Wait for the pod to change its state to READY.

  4. Using the <EXTERNAL-IP> of the LoadBalancer service, open the Neo4j Browser at http://<EXTERNAL-IP>:7474/browser.

  5. Use the password that you have configured in the plugin_initContainer.yaml file.

  6. Run a query to verify that the Apoc plugin is installed and configured. For example, you can use the following query to get the value of your jdbc URL:

    CALL apoc.config.list() YIELD key, value WHERE key = "apoc.jdbc.neo4j.url" RETURN *;

    It should return "jdbc:foo:bar" as configured in the apoc.config.

Add plugins using a custom container image

The best method for adding plugins to Neo4j running in Kubernetes is to create a new Docker container image that contains both Neo4j and the Neo4j plugins. This way, you can ensure when building the container that the correct plugin version for the Neo4j version of the container is used and that the resulting image encapsulates all Neo4j runtime dependencies.

The Neo4j Bloom plugin (https://neo4j.com/download-center/#bloom) requires a license activation key, which needs to be placed in a directory accessible by the Neo4j Docker container, for example, mounted to /licenses (default). To obtain a valid license, reach out to your Neo4j account representative or write to licensing@neo4j.com.

Building a Docker container image that is based on the official Neo4j Docker image and does not override the official image’s ENTRYPOINT and COMMAND is the recommended method to use with the Neo4j Helm chart, as shown in this example Dockerfile:

ARG NEO4J_VERSION
FROM neo4j:${NEO4J_VERSION}

# copy my-plugins into the Docker image
COPY my-plugins/ /var/lib/neo4j/plugins

# install the apoc core plugin that is shipped with Neo4j
RUN cp /var/lib/neo4j/labs/apoc-* /var/lib/neo4j/plugins

Once the docker image has been built, push it to a container repository that is accessible to your Kubernetes cluster.

CONTAINER_REPOSITORY="my-container-repository.io"
IMAGE_NAME="my-neo4j"

# export this so that it's accessible as a docker build arg
export NEO4J_VERSION=4.4.34-enterprise

docker build --build-arg NEO4J_VERSION --tag ${CONTAINER_REPOSITORY}/${IMAGE_NAME}:${NEO4J_VERSION} .
docker push ${CONTAINER_REPOSITORY}/${IMAGE_NAME}:${NEO4J_VERSION}

To use the image that you have created, set image.customImage in the Neo4j Helm deployment's values.yaml file to point at it. For more details, see Configure a custom container image.
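
Continuing the example above, the corresponding values.yaml entry would reference the pushed tag:

# neo4j-values.yaml
image:
  customImage: "my-container-repository.io/my-neo4j:4.4.34-enterprise"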

Many plugins require additional Neo4j configuration to work correctly. Plugin configuration should be set on the config object in the Helm deployment’s values.yaml file. In some cases, plugin configuration can cause Neo4j’s strict config validation to fail. Strict config validation can be disabled by setting dbms.config.strict_validation: "false".

Add plugins using a plugins volume

An alternative method for adding Neo4j plugins to a Neo4j Helm deployment uses a plugins volume mount. With this method, the plugin jar files are stored on a Persistent Volume that is mounted to the /plugins directory of the Neo4j container.

The Neo4j Bloom plugin (https://neo4j.com/download-center/#bloom) requires a license activation key, which needs to be placed in a directory accessible by the Neo4j Docker container, for example, mounted to /licenses (default). To obtain a valid license, reach out to your Neo4j account representative or write to licensing@neo4j.com.

The simplest way to set up a persistent /plugins volume is to share the Persistent Volume that is used for storing Neo4j data. This example shows how to configure that in the Neo4j Helm deployment values.yaml file:

# neo4j-values.yaml
volumes:
  data:
    # your data volume configuration
    ...

  plugins:
    mode: "share"
    share:
      name: "data"

Details of different ways to configure volume mounts are covered in Mapping volume mounts to persistent volumes.

The Neo4j container now has an empty /plugins directory backed by a persistent volume. Plugin jar files can be copied onto the volume using kubectl cp. Because it is backed by a persistent volume, plugin files will persist even if the Neo4j pod is restarted or moved.

Neo4j loads plugins only on startup. Therefore, you must restart the Neo4j pod to load them once all plugins are in place.

For example:

# Copy plugin files into the Neo4j container:

# kubectl cp takes a single source, so copy the plugin jar files one at a time
for plugin in my-plugins/*; do
  kubectl cp "$plugin" <namespace>/<neo4j-pod-name>:/plugins/
done

# Restart Neo4j
kubectl rollout restart statefulset/<neo4j-statefulset-name>

# Verify plugins are still present after the restart:

kubectl exec <neo4j-pod-name> -- ls /plugins