Setup: High Availability

High Availability

Curator can be configured to run in a high availability (HA) infrastructure to improve uptime for your users and to handle a higher concurrent user load. The standard components of the HA infrastructure are:

  • Load balancer - Domain name is pointed here. Routes user traffic to one of the application nodes. Example: AWS Elastic Load Balancer (ELB).
  • Application nodes (2 or more) - Where Curator is installed. Example: AWS Elastic Compute Cloud (EC2).
  • Database - The application nodes will use this to store data. Having a single database keeps things synchronized. Example: AWS Relational Database Service (RDS).
  • Filesystem - The application nodes will use this to store thumbnails, backups, etc. Example: AWS Elastic File System (EFS).

[HA diagram]

Note: the example above shows AWS services, but Azure and other cloud providers are viable options.

Requirements

Below are the specific requirements for each component of the infrastructure and what needs to be completed prior to the install.

  • Load Balancer
    • This is where SSL will need to be handled. The most common setup is to terminate SSL at the load balancer as opposed to having certificates on each application node.
    • Prior to install: The load balancer does not need to be ready before the install; it can be configured afterwards.
  • Application nodes
    • The same server requirements for the standard Curator installation exist for each application node. Those requirements can be found here: https://curator.interworks.com/requirements.
    • Prior to install: The application nodes need to be spun up and have root SSH or admin RDP access. The SSH or RDP access should be tested before the installation (see the connectivity check after this list).
  • Database
    • The database should be separate from the application nodes. Although it can be configured inside one of the application nodes, doing so defeats the purpose of high availability: if that node goes down, none of the others will be functional.

    • Prior to install: The database needs to be spun up, an empty database needs to be created (it can be called curator for simplicity), and it needs to be accessible from each application node. Test the accessibility from each application node with something like this:

      sudo mysql -u $DBUSER --password=$DBPASSWORD -h $DBHOST -e "SHOW DATABASES"
      

      Replace the variables with your database credentials and host. The empty database you created should appear in the output.

  • Filesystem
    • Like the database, the filesystem needs to be separate from the application nodes to prevent unnecessary downtime. Cloud services (AWS, Azure, etc.) have great options for this with built-in redundancy. If you aren't using a cloud service and your infrastructure runs on Windows, you can use the database node to host the shared filesystem as a network drive.
    • Prior to install: The filesystem needs to be spun up and accessible from each application node. This should be tested by creating a simple text file and ensuring it's visible from each application node (see the sketch after this list).
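
To verify SSH access to the application nodes (Linux hosts in this sketch), you can run a quick connectivity check from your admin workstation. The node names and key path below are placeholders; substitute your own. Windows nodes would be verified with an RDP session instead.

      # Hypothetical node names and key path; replace with your own values.
      for NODE in app-node-1 app-node-2; do
          ssh -i ~/.ssh/curator_ha.pem root@"$NODE" 'hostname && uptime'
      done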
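
To verify the shared filesystem on Linux nodes, one approach is to write a test file from one node and read it from the others. The mount point below (/mnt/curator-share) is an assumption; use wherever the shared filesystem is actually mounted on your application nodes.

      # On the first application node (mount point is an assumed example):
      echo "ha filesystem test" | sudo tee /mnt/curator-share/ha-test.txt

      # On each remaining application node, confirm the file is visible, then clean up:
      cat /mnt/curator-share/ha-test.txt
      sudo rm /mnt/curator-share/ha-test.txt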

Other Infrastructures

There are numerous ways to build high availability, including different patterns for the database and filesystem setups. What's described above is certainly not the only method, but it is the simplest for Curator. We're happy to help troubleshoot if you take a different route or run into issues with the infrastructure detailed above. While we're fully responsible for errors arising from the application code, the success of the infrastructure will depend on the team that manages it.

Post Install

Once the install and configuration are complete at the server level, make sure to go to the Curator backend > Settings > Curator > Worker Nodes and add each application node to the list. This ensures that when a software upgrade is initiated or the application cache is cleared, every node stays synchronized.