HashiBox is a local environment that simulates a highly-available cloud with Consul, Nomad, and Vault. The OSS and Enterprise versions of each product are supported. Consul Connect is enabled and uses Vault as the CA provider.

The main goal of HashiBox is to provide a local setup that respects environment parity, so a cloud platform can be simulated end-to-end before going to production.
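Since Consul Connect relies on Vault as its CA, the Consul servers carry a `connect` stanza pointing at Vault. The snippet below is a minimal sketch of what such a configuration looks like, not the repository's actual files: the Vault address, token, and PKI paths are placeholders.

```hcl
# Minimal sketch of a Consul agent configuration using Vault as the
# Connect CA provider. The address, token, and PKI paths are placeholders.
connect {
  enabled     = true
  ca_provider = "vault"

  ca_config {
    address               = "https://192.168.60.10:8200"
    token                 = "<vault-token>"
    root_pki_path         = "connect_root"
    intermediate_pki_path = "connect_intermediate"
  }
}
```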
This repository simulates a region called `us`, in which there are 3 datacenters:

- `us-west-1` with IP addresses 192.168.x.10.
- `us-west-2` with IP addresses 192.168.x.20.
- `us-east-1` with IP addresses 192.168.x.30.
In each datacenter, we install 2 nodes:

- One acting as a server for Consul, Nomad, and Vault, with an IP address of the form 192.168.60.x.
- One acting as a client for Consul and Nomad, with an IP address of the form 192.168.61.x. Docker is also installed on this node so Nomad jobs can run inside containers.
Here is a summary schema to better understand how this works:
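The following plain-text sketch summarizes the layout described above, with one server and one client node per datacenter:

```
Region us
├── us-west-1
│   ├── server  192.168.60.10  (Consul, Nomad, Vault)
│   └── client  192.168.61.10  (Consul, Nomad, Docker)
├── us-west-2
│   ├── server  192.168.60.20  (Consul, Nomad, Vault)
│   └── client  192.168.61.20  (Consul, Nomad, Docker)
└── us-east-1
    ├── server  192.168.60.30  (Consul, Nomad, Vault)
    └── client  192.168.61.30  (Consul, Nomad, Docker)
```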
## Cloning the repository
You can clone the repository with:

```sh
$ git clone https://github.com/nunchistudio/hashibox
```
Before continuing with the installation, it's important to take a look at the directory structure to better understand how the repository is organized:
- `Vagrantfile`: This is the file used to set up the required nodes with Vagrant. It also takes care of exposing the private network of each node, with the IP addresses given earlier (a hypothetical sketch is shown after this list).
- `Makefile`: This file populates environment variables and automates every task within the environment (a hypothetical excerpt also follows this list).
- `bolt.yaml`: Required file to leverage the Bolt command-line interface within this directory.
- `inventory.yaml`: This file is used by Bolt and allows us to organize the nodes into groups, so tasks can be run against a specific group of nodes, such as every node acting as a client or every node in the `us` region (see the inventory sketch after this list).
- `scripts/`: This contains the automation scripts used in the `Makefile`, executed on your local machine.
- `modules/`: This contains the Bolt tasks and plans to execute on the remote nodes.
- `uploads/`: This directory contains the files uploaded to each node, in each datacenter, for each region:
  - `us/`: Applied for the `us` region.
    - `_defaults/`: This directory contains the default configuration files that will be applied to all nodes present in the region.
    - `<datacenter>/`: Each directory contains the configuration files specific to its datacenter.
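As a reference for how the `Vagrantfile` exposes the private network, here is a minimal sketch for the `us-west-1` datacenter. It is not the repository's actual file: the box and machine names are assumptions, and only the IP addresses follow the scheme described earlier.

```ruby
# Hypothetical sketch of a Vagrantfile exposing a private network per node.
# The box and machine names are illustrative; the IPs follow the
# addressing scheme described above for the us-west-1 datacenter.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  config.vm.define "us-west-1-server" do |node|
    node.vm.hostname = "us-west-1-server"
    node.vm.network "private_network", ip: "192.168.60.10"
  end

  config.vm.define "us-west-1-client" do |node|
    node.vm.hostname = "us-west-1-client"
    node.vm.network "private_network", ip: "192.168.61.10"
  end
end
```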
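The `Makefile` itself is not reproduced here; as a rough idea of the kind of automation it wraps, a couple of targets could chain Vagrant and Bolt as below. The target and plan names are hypothetical.

```make
# Hypothetical excerpt: the target and plan names are illustrative only.
up:
	vagrant up

provision:
	bolt plan run hashibox --targets all
```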
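Finally, a Bolt `inventory.yaml` grouping the nodes could be organized along these lines. The group names and SSH settings are assumptions; only the IP addresses come from the addressing scheme above.

```yaml
# Hypothetical sketch of a Bolt inventory grouping nodes by role.
# Group names and transport settings are illustrative.
groups:
  - name: servers
    targets:
      - 192.168.60.10
      - 192.168.60.20
      - 192.168.60.30
  - name: clients
    targets:
      - 192.168.61.10
      - 192.168.61.20
      - 192.168.61.30

config:
  transport: ssh
  ssh:
    user: vagrant
    host-key-check: false
```

With such groups in place, Bolt can target them directly, e.g. `bolt command run 'uptime' --targets clients` to run a command on every client node.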