ArcEdge on Equinix Metal

This guide provides a step-by-step workflow to deploy ArcEdge on Equinix Metal. ArcEdge is deployed as a virtual machine on an Equinix Metal server running Ubuntu, using the KVM hypervisor managed through libvirt.

Prerequisites

  1. Sign up for Equinix Metal at https://metal.equinix.com/start/ with a valid payment method
  2. Ensure your SSH keys are added to your Equinix Metal account so that you can connect to your server once it is provisioned. Follow the steps on adding your SSH key to your account at https://metal.equinix.com/developers/docs/accounts/ssh-keys/
  3. Ensure the ArcEdge QCOW2 image is accessible and can be copied or downloaded. You may need to sign up for an Arrcus account if you have not already done so; contact Arrcus customer support for help.
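
If Arrcus publishes a checksum for the image, it is worth verifying the download before use. The filename below is a placeholder, not the actual image name:

# Hypothetical filename; compare the output against the checksum Arrcus provides
sha256sum arcedge-<version>.qcow2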

Deploying the Server on Equinix Metal

  1. There are multiple ways to deploy servers on the Equinix Metal platform; see https://metal.equinix.com/developers/docs/deploy. If you are not sure which method is right for you, follow the steps to provision a server on demand as outlined at https://metal.equinix.com/developers/docs/deploy/on-demand
  2. Choose the following options for the server if you are not certain:

a. Location (Choose appropriately)

b. Hardware (c3.small.x86)

c. Operating system (Ubuntu 20.10)

d. SSH keys (use the customize option so that only specific keys are granted access)

3. Once the server is up and running, make sure you can log in via ssh root@<server-ip-address>. Note that the server is automatically assigned one public IPv4 and one public IPv6 address.
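
As an alternative to the console workflow above, the server can also be provisioned with the Equinix Metal CLI. The following is a sketch only; the project ID and metro are placeholders, and the plan and operating-system slugs are assumptions you should confirm against your own account before running:

# Provision a c3.small.x86 server on demand (hypothetical project ID and metro;
# verify the exact plan and OS slug values for your account)
metal device create \
  --project-id <project-id> \
  --hostname arcedge-host-01 \
  --plan c3.small.x86 \
  --metro <metro> \
  --operating-system ubuntu_20_10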

Installing and Configuring Virtualization Software

Log in to the server via ssh and run the following commands to install the KVM and libvirt software.

  1. Install the KVM and libvirt packages used to create and manage virtual machines with the following command.

sudo apt install -y qemu qemu-kvm libvirt-daemon libvirt-clients bridge-utils virt-manager
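
If the install fails with package-not-found errors, the package index on a freshly provisioned server is likely stale; refresh it and retry:

sudo apt update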

2. Verify installation

# Verify that libvirtd is active/running
sudo systemctl status libvirtd

# Enable libvirtd so it starts now and on every boot
sudo systemctl enable --now libvirtd

# Verify that the kvm kernel modules are loaded
lsmod | grep -i kvm

# Verify that virsh lists an empty table of VM domains
virsh list --all
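
If the kvm modules are missing, confirm that the CPU exposes hardware virtualization extensions; a non-zero count here means VT-x or AMD-V is available:

# Count CPU flag lines for Intel VT-x (vmx) or AMD-V (svm); should be > 0
grep -cE '(vmx|svm)' /proc/cpuinfo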

3. Edit "/etc/libvirt/qemu.conf" to add the following line

security_driver = "none"
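
If you prefer a non-interactive edit, appending the line achieves the same result (assuming security_driver is not already set elsewhere in the file):

# Append the setting instead of editing the file by hand
echo 'security_driver = "none"' | sudo tee -a /etc/libvirt/qemu.conf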

4. Restart libvirtd to pick up the config change

sudo systemctl restart libvirtd

Configuring the network for ArcEdge deployments

When configuring the network for ArcEdge deployments, virtual machines can run in either NAT mode or Route mode. Since we want the ArcEdge VMs to have their own public IP addresses, we will run them in Route mode. To configure the ArcEdge deployment in Route mode, follow the steps outlined below.

  1. Reserve public IPv4 addresses on Equinix Metal using this guide. Reserve a /29 subnet to create up to 5 ArcEdge VMs on the same server; if a single VM is sufficient, reserve a /30 subnet.
  2. Follow the “Configure the network” section of the following guide to configure the network in Route mode.

a. Make sure the vmbr0 bridge interface is up and running.

sudo systemctl restart libvirtd

root@da-c3-small-x86-01:~# ip addr show vmbr0
10: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:2d:a1:37 brd ff:ff:ff:ff:ff:ff
    inet 147.28.143.169/29 brd 147.28.143.175 scope global vmbr0
       valid_lft forever preferred_lft forever
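
If the linked guide is unavailable, a routed network can also be defined directly through libvirt. The following is a minimal sketch, assuming the /29 block shown above; substitute the gateway address and netmask of your own reservation:

# Hypothetical routed-network definition using the reserved /29 block
cat > /tmp/vmbr0.xml <<'EOF'
<network>
  <name>vmbr0</name>
  <forward mode='route'/>
  <bridge name='vmbr0' stp='on' delay='0'/>
  <ip address='147.28.143.169' netmask='255.255.255.248'/>
</network>
EOF
virsh net-define /tmp/vmbr0.xml
virsh net-start vmbr0
virsh net-autostart vmbr0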

Deploying the ArcEdge VM

  1. Download the ArcEdge qcow2 image from the Arrcus Downloads page.
  2. Copy the ArcEdge image to /var/lib/libvirt/images on the Equinix Metal server.
  3. Change ownership and permissions.

a. Run "chown libvirt-qemu /var/lib/libvirt/images/<arcedge-image>"

b. Run "chgrp libvirt /var/lib/libvirt/images/<arcedge-image>"

4. Use this command template to install the VM:

virt-install --name arcedge-vm-<number> \

--ram 8192 \

--disk /var/lib/libvirt/images/<arcedge-image> \

--import \

--vcpus 8 \

--os-type linux \

--os-variant ubuntu20.04 \

--network bridge=vmbr0 \

--graphics none \

--console pty,target_type=serial

Immediately after running the "virt-install" command, a VM console will appear. Allow some time for the VM to boot.
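
You can detach from the console at any time with Ctrl+] and manage the VM from the host with standard virsh commands:

# Confirm the new VM domain is running
virsh list --all

# Reattach to the VM serial console (Ctrl+] detaches again)
virsh console arcedge-vm-<number>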

5. Log in to the ArcEdge image with root/YouReallyNeedToChangeThis

ArcEdge (c) Arrcus, Inc.

localhost login: root
Password:
Last login: Tue Mar 29 20:11:50 UTC 2022 on ttyS0
Linux localhost 4.19.84-arrcus #1631485384 SMP Thu Feb 24 22:30:02 UTC 2022 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

6. Configure SSH

root@localhost:~# cli
Welcome to the ArcEdge CLI
root connected from 127.0.0.1 using console on localhost
root@localhost# config
Entering configuration mode terminal
root@localhost(config)# system ssh-server enable true
root@localhost(config)# system aaa authentication admin-user admin-password Arrcus2021
root@localhost(config)# system ssh-server permit-root-login true
root@localhost(config)# commit
Commit complete.
root@localhost(config)# end

7. Check the DHCP address of the VM with "show interface ma1 dynamic"

root@localhost# show interface ma1 dynamic
dynamic ip 147.28.143.171
dynamic prefix-length 29
dynamic ipv6-link-local fe80::5054:ff:fe5f:f209

8. Verify SSH to the public IPv4 address over the public Internet

ssh root@147.28.143.171
The authenticity of host '147.28.143.171 (147.28.143.171)' can't be established.
ECDSA key fingerprint is SHA256:XUPXzu9WxQjTkCwotls7J24IlzIr7ia+YjIFXthEnqY.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '147.28.143.171' (ECDSA) to the list of known hosts.
ArcEdge (c) Arrcus, Inc.
root@147.28.143.171's password:
root@localhost:~# cli
Welcome to the ArcEdge CLI
root connected from 70.234.233.187 using ssh on localhost

Creating tunnels between ArcEdge instances

  1. Deploy two ArcEdge instances using the instructions in the previous section.
  2. To create an IPsec tunnel between the instances, follow the instructions provided in the IPSEC Configuration Guide. If this link is not accessible, please contact Arrcus support for help.