
Knowledge Base

Welcome to the VergeOS Knowledge Base

Overview

Key Points

  • The VergeOS Knowledge Base contains troubleshooting guides, best practices, and how-to articles
  • Use the search function, categories, and tags to find relevant information quickly
  • Articles follow a consistent format with overview, prerequisites, steps, and troubleshooting sections
  • Updated regularly to reflect the latest VergeOS features and improvements

Welcome to the VergeOS Knowledge Base! This resource is designed to help you find solutions quickly, optimize your VergeOS environment, and troubleshoot common issues. Whether you're new to VergeOS or an experienced administrator, this guide will help you navigate our documentation effectively.

How to Use the Knowledge Base

Search Functionality

The fastest way to find specific information is through our search function:

  1. Use the search bar at the top of the page
  2. Enter keywords related to your question or issue
  3. Review the search results to find relevant articles
  4. For more focused results, use specific terms (e.g., "UEFI VM boot" instead of just "boot")

Browsing by Category

Articles are organized into logical categories to help you browse related content:

  • Getting Started: Introductory guides and basic concepts
  • Installation: Guides for installing and setting up VergeOS
  • Virtual Machines: Everything related to virtual machines
  • Networking: Network configuration and troubleshooting
  • Storage & vSAN: Storage management and optimization
  • Tenants: Multi-tenancy setup and management
  • Troubleshooting: Common issues and their solutions
  • API & Automation: Programmatic access and automation guides
  • Best Practices: Recommendations for optimal performance

Using Tags

Tags provide another way to discover related content:

  1. Articles are tagged with relevant keywords
  2. Click on a tag to see all articles with that tag
  3. Use tags to find specific technologies or features (e.g., "UEFI", "virtio", "snapshot")

Understanding Article Structure

Each article follows a consistent format to help you quickly find the information you need:

  1. Title & Overview: A brief description of what the article covers
  2. Key Points: Important takeaways highlighted at the beginning
  3. Prerequisites: What you need before starting
  4. Steps: Clear, numbered instructions
  5. Troubleshooting: Common issues and solutions
  6. Additional Resources: Related articles and external references
  7. Document Information: Last update date and applicable VergeOS version

Types of Content

How-To Guides

Step-by-step instructions for completing specific tasks, such as:

  • Creating a new VM
  • Setting up network routing
  • Configuring backups

Troubleshooting Articles

Guides to diagnose and resolve common issues:

  • VMs not booting
  • Network connectivity problems
  • Storage performance issues

Best Practices

Recommendations for optimizing your VergeOS environment:

  • VM sizing guidelines
  • Network design principles
  • Storage tier usage

Reference Information

Detailed technical information:

  • API documentation
  • Configuration parameters
  • System requirements

Contributing to the Knowledge Base

We continuously improve our documentation based on user feedback:

  • At the bottom of each article, you'll find a "Need Help?" section where you can provide feedback
  • If you discover outdated information or errors, please let us know
  • Suggest new topics that should be covered in the Knowledge Base

Staying Updated

The Knowledge Base is regularly updated to reflect the latest VergeOS features, improvements, and fixes:

  • Check the "Last Updated" date at the bottom of each article
  • Articles are tagged with the applicable VergeOS version
  • New content is added regularly based on user feedback and product updates

Additional Support Resources

If you can't find what you're looking for in the Knowledge Base:

  • Product Guide: For comprehensive information about VergeOS features and capabilities
  • Support Portal: Submit support tickets and track their progress
  • Community Forums: Connect with other VergeOS users and share experiences

Feedback

Need Help?

If you have suggestions for improving the Knowledge Base or can't find the information you're looking for, please reach out to our support team. We're committed to making our documentation as helpful and comprehensive as possible.


Document Information

  • Last Updated: 2025-03-10
  • VergeOS Version: 4.13.0

VM and Tenant Hot-Plug Capabilities

This article explains which resources can be modified on running virtual machines and tenants without requiring a power cycle, and which changes require a restart.

Virtual Machine Hot-Plug

The Allow Hotplug setting on a VM (enabled by default) controls whether drives and NICs can be added or removed while the VM is running.

What Can Be Hot-Plugged

| Resource | Hot-Plug | Notes |
|---|---|---|
| Drives | Yes | Guest OS must support hot-add; the Virtio-SCSI interface is recommended |
| NICs | Yes | Widely supported by modern guest operating systems |
| Drive Resize | Yes | Virtio-SCSI drives can be expanded without a power cycle; the guest OS may need to rescan or extend the filesystem |

What Requires a Power Cycle

| Resource | Power Cycle Required | Notes |
|---|---|---|
| RAM | Yes | Memory changes always require the VM to be powered off and back on |
| CPU Cores | Yes | Core count changes always require a power cycle |
| Console Type | Yes | VNC/Spice/Serial changes take effect on next power on |
| Video Card | Yes | Video adapter changes require a power cycle |
| Machine Type | Yes | Chipset changes require a power cycle |
| UEFI/BIOS | Yes | Boot mode changes require a power cycle |

Hot-Plug Requirements

For drive and NIC hot-plug to work:

  1. Allow Hotplug must be enabled on the VM (this is the default)
  2. The guest operating system must support hot-plug for the device type
  3. For drives, using the Virtio-SCSI interface provides the best hot-plug compatibility

Performing Hot-Plug Operations

Adding a drive or NIC while VM is running:

  1. Navigate to the VM dashboard
  2. Click Drives or NICs on the left menu
  3. Click New to add the device
  4. The device appears in the guest OS (may require a rescan in some operating systems)

Removing a drive or NIC while VM is running:

  1. Navigate to the VM dashboard
  2. Click Drives or NICs on the left menu
  3. Select the device to remove
  4. Click Hotplug on the left menu to safely detach it
  5. Once detached, the device can be deleted

Guest OS Considerations

Before hot-removing a drive, ensure it is not in use by the guest OS. Unmount filesystems and remove from any volume groups or RAID arrays first.
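
The guest-side steps can be sketched for a Linux guest as follows (the device name sdb and mount point /mnt/data are placeholders; exact sysfs paths vary by distribution and controller, and the commands require root):

```shell
# After hot-adding a drive: rescan all SCSI hosts so the new disk appears
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
lsblk   # the new disk (e.g. /dev/sdb) should now be listed

# After expanding an existing Virtio-SCSI drive: rescan that device's size
echo 1 > /sys/class/block/sdb/device/rescan

# Before hot-removing a drive: unmount it, then tell the kernel to detach it
umount /mnt/data
echo 1 > /sys/class/block/sdb/device/delete
```

Windows guests typically detect hot-plugged devices automatically, or after a "Rescan Disks" in Disk Management.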


Tenant Node Resources

Tenant nodes behave differently from VMs regarding resource changes.

Tenant Resources That Can Be Changed Live

| Resource | Live Change | Notes |
|---|---|---|
| RAM | Yes | Tenant node memory can be increased or decreased without a restart |
| CPU Cores | Yes | Tenant node cores can be increased or decreased without a restart |
| Storage | Yes | Storage tiers can be added or expanded without a restart |

Tenant nodes are designed for flexible resource allocation. You can adjust a tenant node's RAM and cores from the parent system without interrupting workloads running inside the tenant.

How to Modify Tenant Node Resources

  1. From the tenant dashboard, click Nodes on the left menu
  2. Double-click the tenant node to modify
  3. Click Edit on the left menu
  4. Adjust Cores and/or RAM as needed
  5. Click Submit

The changes take effect immediately without requiring the tenant or its workloads to restart.

Resource Planning

While tenant node resources can be changed on the fly, the resources must actually be available on the physical host. The system validates availability when the change is submitted.


Summary

| Component | RAM | CPU | Drives | NICs |
|---|---|---|---|---|
| VM | Power cycle | Power cycle | Hot-plug | Hot-plug |
| Tenant Node | Live change | Live change | N/A | N/A |

For VMs, plan RAM and CPU requirements before powering on, as changes require a power cycle. Drives and NICs can be adjusted on the fly.

For tenants, resources can be adjusted at any time without service interruption, making it easy to scale tenant environments as needs change.

CPU Overprovisioning and Resource Planning

Overview

CPU overprovisioning (also called overcommit) allows you to allocate more virtual CPU cores to workloads than you have physical cores available. This guide explains how VergeOS handles CPU resources, the implications of overprovisioning, and best practices for capacity planning.

How VergeOS CPU Allocation Works

Virtual vs Physical Cores

When you assign vCPUs to a VM, you're allocating virtual cores that are scheduled onto physical CPU cores. VergeOS does not reserve physical cores exclusively for VMs—instead, it uses time-slicing to share physical resources.

Key points:

  • vCPUs are not pinned to physical cores by default
  • Multiple vCPUs from different VMs can share the same physical core
  • The hypervisor scheduler manages CPU time allocation

The "Max cores per machine" Setting

This cluster setting controls the maximum number of CPU cores that can be allocated to a single workload (VM, tenant node, or NAS service).

Location: Infrastructure > Clusters > [Cluster Name] > Edit

Critical Constraints

  • This value should never exceed the total physical cores on your smallest node
  • In most cases, keep it within a single CPU socket for optimal NUMA performance
  • VMs exceeding this limit after a change cannot migrate until cores are reduced

CPU Overcommit Ratios

What is an Overcommit Ratio?

The ratio of total allocated vCPUs to total physical cores:

Overcommit Ratio = Total Allocated vCPUs / Total Physical Cores

Example: A 2-node cluster with 32 cores each (64 total) running VMs with 96 total vCPUs has a 1.5:1 overcommit ratio.
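
The ratio from the example above can be checked with a few lines (a minimal illustration; the function name is ours, not VergeOS tooling):

```python
def overcommit_ratio(total_vcpus: int, total_physical_cores: int) -> float:
    """Ratio of allocated vCPUs to physical cores across the cluster."""
    return total_vcpus / total_physical_cores

# 2-node cluster, 32 cores each, 96 vCPUs allocated in total
ratio = overcommit_ratio(96, 2 * 32)
print(f"{ratio}:1")  # 1.5:1
```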

| Workload Type | Ratio | Notes |
|---|---|---|
| Light/Office workloads | 4:1 to 6:1 | Desktop VMs, file servers |
| Mixed general purpose | 2:1 to 4:1 | Typical enterprise mix |
| Database/Application servers | 1:1 to 2:1 | Performance-sensitive |
| High-performance computing | 1:1 or less | CPU-bound workloads |

Start Conservative

Begin with lower ratios and increase based on monitoring. It's easier to add capacity than recover from poor performance.

Performance Implications

When Overcommit Works Well

  • Bursty workloads: VMs that have occasional CPU spikes but are mostly idle
  • Diverse timing: Workloads that peak at different times
  • I/O-bound applications: VMs waiting on disk or network more than CPU

When Overcommit Causes Problems

  • CPU-bound workloads: Applications constantly using 100% CPU
  • Latency-sensitive applications: Real-time systems, VoIP, trading
  • Simultaneous demand: All VMs needing CPU at the same time

Signs of Excessive Overcommitment

  1. High CPU ready time: VMs waiting for physical CPU availability
  2. Inconsistent performance: Applications perform well at some times and poorly at others
  3. Guest OS showing high CPU: The guest reports high CPU usage while the hypervisor shows lower utilization
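
In a Linux guest, time spent waiting for the physical CPU shows up as "steal" time (the st column in vmstat or top). A small sketch parsing the aggregate cpu line of /proc/stat (the field layout is standard Linux; the idea of flagging sustained high steal is a rule of thumb, not a VergeOS threshold):

```python
def steal_percent(proc_stat_cpu_line: str) -> float:
    """Return steal time as a percentage of total CPU time.

    /proc/stat 'cpu' field order: user nice system idle iowait irq softirq steal ...
    """
    fields = [int(x) for x in proc_stat_cpu_line.split()[1:]]
    total = sum(fields)
    steal = fields[7] if len(fields) > 7 else 0
    return 100.0 * steal / total if total else 0.0

# Sample aggregate line; on a live guest, read the first line of /proc/stat instead
sample = "cpu 4705 150 1120 16250 520 30 45 900 0 0"
pct = steal_percent(sample)
print(f"steal: {pct:.1f}%")  # steal: 3.8%
```

Sustained steal in the high single digits or above is a sign the host's CPU pool is contended.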

Capacity Planning

Calculating Available CPU Capacity

For a cluster with N+1 redundancy:

Available Cores = (Nodes - 1) × Cores per Node
Usable vCPUs = Available Cores × Target Overcommit Ratio

Example: 4-node cluster, 32 cores each, 2:1 target ratio

  • Available: (4 - 1) × 32 = 96 cores
  • Usable vCPUs: 96 × 2 = 192 vCPUs
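
The N+1 calculation can be sketched as a small helper (the function name is illustrative, not a VergeOS API):

```python
def usable_vcpus(nodes: int, cores_per_node: int, overcommit: float) -> tuple[int, int]:
    """N+1 capacity: reserve one node for failover, then apply the overcommit ratio.

    Returns (available physical cores, usable vCPUs).
    """
    available_cores = (nodes - 1) * cores_per_node
    return available_cores, int(available_cores * overcommit)

cores, vcpus = usable_vcpus(4, 32, 2.0)
print(cores, vcpus)  # 96 192
```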

Migration Considerations

When a node fails or enters maintenance:

  • All VMs must fit on the remaining nodes
  • Each VM must fit within the "Max cores per machine" setting
  • VMs with many vCPUs may become stranded if no single node can host them

Migration Readiness

If a VM has 64 vCPUs but your nodes only have 32 cores, that VM cannot migrate during maintenance or failure events. Keep VM core counts within single-node capacity.

Best Practices

General Guidelines

  1. Monitor before allocating: Understand actual CPU usage patterns before adding capacity
  2. Size VMs appropriately: Start with fewer vCPUs and increase based on need
  3. Reserve headroom: Keep 20-30% capacity available for bursts and failover
  4. Use CPU limits sparingly: They prevent VMs from using available idle resources

Cluster Design

  1. Consistent node sizing: Makes capacity planning simpler
  2. Plan for N+1: Always assume one node will be unavailable
  3. Document assumptions: Record your overcommit targets and reasons

VM Configuration

  1. Match vCPUs to workload: More vCPUs does not always mean better performance
  2. Consider NUMA: For large VMs, keep vCPUs within NUMA node boundaries
  3. Test performance: Benchmark with realistic workloads

Monitoring CPU Health

Key Metrics to Watch

| Metric | Healthy Range | Action if Exceeded |
|---|---|---|
| Cluster CPU utilization | < 70% average | Add nodes or reduce VMs |
| Node CPU utilization | < 80% sustained | Check VM distribution |
| Individual VM CPU | Varies by workload | Right-size or investigate |

Using the VergeOS Dashboard

  1. Navigate to Infrastructure > Clusters
  2. View CPU utilization graphs
  3. Click individual nodes to see per-node metrics
  4. Check VM CPU statistics under each VM's dashboard

For more details on cluster monitoring, see Clusters Overview.

FAQ

Can I assign more vCPUs than physical cores to a single VM?

Yes, but it's rarely beneficial. A VM with more vCPUs than physical cores on a node can experience scheduling delays as the hypervisor waits for enough cores to become available simultaneously.

Does VergeOS support CPU pinning?

VergeOS does not support CPU pinning (affinity). Under the hood, VergeOS uses the Linux Completely Fair Scheduler (CFS) for CPU scheduling. Each vCPU is mapped to a Linux process/thread, and all vCPU threads share the same CFS run queue. The scheduler uses global fairness logic to determine which process gets CPU time and in what order.

This design ensures optimal resource utilization and maintains VM mobility for live migration and failover. When you oversubscribe CPU resources, you are sharing that pool of physical cores with other VMs and tenants.

How does this affect software licensing?

Some software (Oracle, SQL Server) is licensed per physical core or socket. VergeOS's dynamic scheduling means you cannot "hard partition" CPU resources. Consult your software vendor's virtualization licensing policies—many offer per-vCPU or per-VM licensing models that work better with modern hypervisors.

What about NUMA?

For VMs with many vCPUs, VergeOS attempts to keep memory and CPU allocations within the same NUMA node when possible. For best NUMA performance, keep VM vCPU counts at or below a single socket's core count.

Tenant External IP Quick Start Guide

Overview

This guide covers the most common scenarios for providing external/public IP addresses to VMs running inside VergeOS tenants. Whether you need a single VM accessible from the internet or an entire IP block for a customer, this guide walks through the configuration step by step.

Related Documentation

For more advanced scenarios including virtual switch ports and detailed NAT configurations, see How to Use External IPs in Tenants.

Prerequisites

  • A VergeOS system with at least one tenant configured
  • Available public/external IP addresses on your root External network
  • Administrative access to both root and tenant environments

Scenario 1: Single External IP for One Tenant VM

Use case: You have one public IP and want a specific tenant VM to be accessible from the internet (e.g., for RDP, SSH, or web services).

Step 1: Assign the IP to the Tenant (Root Level)

  1. Navigate to Networks > External (your root external network)
  2. Click IP Addresses in the left menu
  3. Click New to add a new IP address
  4. Configure:
    • Type: Virtual IP
    • IP Address: Enter your public IP (e.g., 203.0.113.50)
    • Owner Type: Tenant
    • Owner: Select your tenant
  5. Click Submit
  6. Return to the External network dashboard and click Apply Rules

Step 2: Assign the IP to the Tenant Network (Tenant Level)

  1. Log into the tenant UI
  2. Navigate to Networks > External
  3. Click IP Addresses - you should see the IP with description "External IP from service provider"
  4. Select the IP and click Edit
  5. Set:
    • Owner Type: Network
    • Owner: Select the internal network where your VM is connected
  6. Click Submit
  7. Click Apply Rules on the External network

Step 3: Create NAT Rules for the VM

On the tenant's internal network (where your VM is connected):

  1. Navigate to Rules in the left menu
  2. Create a DNAT rule (incoming traffic):
    • Name: Inbound to VM
    • Action: Translate
    • Direction: Incoming
    • Destination Type: My IP Addresses
    • Destination: Select the external IP
    • Target Type: IP/Custom
    • Target: Enter the VM's internal IP (e.g., 10.0.0.50)
  3. Create an SNAT rule (outgoing traffic):
    • Name: Outbound from VM
    • Action: Translate
    • Direction: Outgoing
    • Source Type: IP/Custom
    • Source: Enter the VM's internal IP
    • Target Type: My IP Addresses
    • Target: Select the external IP
    • Pin: Top
  4. Click Apply Rules

Step 4: Allow Traffic Through Firewall

Still on the tenant's internal network rules:

  1. Create an Accept rule for your service:
    • Name: Allow RDP (or your service)
    • Action: Accept
    • Protocol: TCP
    • Destination Port: 3389 (or your service port)
    • Destination Type: My IP Addresses
    • Destination: Select the external IP
  2. Click Apply Rules

Your VM should now be accessible from the internet on the specified port.


Scenario 2: IP Block for Multiple Tenant VMs

Use case: You have a /29 or larger block of public IPs and want to assign them directly to VMs.

Step 1: Create a Network Block (Root Level)

  1. Navigate to Networks > External
  2. Click Network Blocks in the left menu
  3. Click New
  4. Configure:
    • Network Block: Enter your CIDR block (e.g., 203.0.113.48/29)
    • Owner Type: Tenant
    • Owner: Select your tenant
  5. Click Submit
  6. Click Apply Rules
  7. Navigate to Tenant Networks, filter by "Needs FW Apply: Yes"
  8. Select your tenant's network and click Apply Rules

Step 2: Create a Network from the Block (Tenant Level)

  1. Log into the tenant UI
  2. Navigate to Networks > External
  3. Click Network Blocks - you should see your block
  4. Select the block and click New Network
  5. The network settings are pre-configured with:
    • Address Type: Static (with your block's addressing)
    • DHCP: Enabled with available IPs
  6. Give the network a Name (e.g., Public-Network)
  7. Click Submit
  8. Click Power On to start the network

Step 3: Connect VMs

  1. Edit your VM and add a NIC connected to the new public network
  2. The VM can either:
    • Use DHCP to receive an IP automatically
    • Be configured with a static IP from the block

Firewall Considerations

VMs on the public network are directly exposed. Configure firewall rules on the network or within the guest OS to restrict access.


Scenario 3: External Access Without Public IP on VM

Use case: You want internet users to reach a tenant VM, but the VM should keep its private IP.

This is the same as Scenario 1 but uses only the DNAT/SNAT rules. The VM keeps its internal IP while the NAT rules translate traffic.

Advantages:

  • VM doesn't need to know about the public IP
  • Simpler VM configuration
  • Can change public IPs without reconfiguring VMs


Troubleshooting

VM Not Accessible from Internet

  1. Check rule application: Ensure "Apply Rules" was clicked on all affected networks
  2. Verify IP ownership chain: Root External → Tenant External → Tenant Internal Network
  3. Check NAT rules: Both DNAT (inbound) and SNAT (outbound) are required
  4. Test internally first: Can you reach the VM from within the tenant?
  5. Review firewall rules: Is traffic being blocked before reaching NAT?

VM Can't Reach Internet

  1. Check SNAT rule: Ensure outbound translation is configured
  2. Verify default route: The tenant's external network needs proper routing
  3. Check DNS: Verify DNS is configured on the VM or DHCP is providing it

Ping Works But Services Don't

  1. Check port-specific rules: Accept rules are needed for each service port
  2. Verify service is running: Check the service is listening on the VM
  3. Check guest firewall: Windows Firewall or iptables may be blocking

Quick Reference

| Task | Location | Action |
|---|---|---|
| Assign IP to tenant | Root External > IP Addresses | Set Owner Type: Tenant |
| Assign IP to network | Tenant External > IP Addresses | Set Owner Type: Network |
| Create DNAT rule | Tenant Internal > Rules | Action: Translate, Direction: Incoming |
| Create SNAT rule | Tenant Internal > Rules | Action: Translate, Direction: Outgoing |
| Allow traffic | Tenant Internal > Rules | Action: Accept, specify port |

How to Configure a Volume for VM Exports

The VergeOS NAS supports a volume type specifically for exporting selected virtual machines (VMs). The VM Export volume provides a controlled way to export selected VMs to a NAS volume that can then be synchronized to a remote storage system (e.g. existing NAS appliance) or shared via CIFS or NFS for access by external backup tools or other applications.

This export mechanism can be useful for customers who want to synchronize VergeOS VM snapshots to storage hardware they already own, rather than deploying an additional VergeOS system for backup purposes. While VergeOS recommends using its native replication features for the most efficient and fully integrated protection workflow, the VM Export Volume offers flexibility for environments that need to meet compliance requirements or maintain external portability of their VM data.

The following steps describe how to configure and use a NAS-hosted VM export volume.

Preparing the VMs for Export

  1. Edit each VM you want to export:
    • Navigate to the VM settings and enable the option for Allow Export.

Setting Up the NAS Service

To host the VM export volume, you will need a NAS service. Use an existing NAS service or create a new service using the following instructions:

  1. Navigate to NAS > List.

A listing of current NAS services is displayed. Select an existing service, or continue with the following steps to create a new one.

  2. Click New.
  3. Provide Name, Hostname, TimeZone, and Networking for the NAS service.
  4. Click Submit to initialize the NAS service.

Starting the NAS Service

  1. Select the NAS service from the list.
  2. Click Power On to bring the NAS online and prepare it to host the export volume.

Creating a NAS User

You’ll need to create a user to access the NAS:

  1. Navigate to NAS > List > double-click the NAS Service to be used.
  2. Click NAS Users > New.
  3. Provide a username and password.
  4. Click Submit to save the new NAS user.

Creating a New Volume for VM Export

  1. Select NAS > + New Volume from the top menu.
  2. Configure the volume:
    • NAS Service: select the NAS service from above
    • Name: provide a name for the volume, e.g. "VM-export"
    • Filesystem Type: Verge.io VM Export
    • Quiesced: Typically should be selected to provide application-consistent VM snapshots. ⚠️ VM Guest Agent must be installed and registered to provide a quiesced VM snapshot.
    • Max exports to store: default=3; determines the maximum number of export instances that will be stored at a time
    • Enable current folder: default=enabled. Exports are stored in folders named by the date/time of the export. With this option enabled, an additional folder named "current" always contains a copy of the most recent export, providing a stable, absolute path for retrieving the latest VM snapshots.
  3. Click Submit.

After you click Submit, the export volume’s dashboard opens where you can run operations on it. To access this dashboard later, navigate to NAS > Volumes in the top menu, then double-click the volume in the list.

Running the VM Export (manual start)

  1. Under Export VMs (mid-page), click the Start button to initiate the VM export process.
  2. Confirm by clicking Yes at the prompt.

Accessing the Exported Data

You can quickly view the contents of the export volume using the Browse option on the left menu.

To access the exported VM snapshots, set up a CIFS or NFS share for the NAS volume:

Setting up a CIFS Share for the Exported Data

  1. Create a CIFS Share:

    • Navigate to NAS > CIFS Shares > New.
    • Select the export volume as the target volume.
    • Provide a Share Name and assign the NAS user you created earlier to access the share.
  2. Access the Share:

    • Browse to \\<NAS IP or DNS name>\<share name>.
    • Use the NAS user credentials when prompted.

For Windows Users

You may need to edit the Group Policy (GPO) or modify the Windows Registry to connect using the Guest account if Guest mode is enabled.

Setting up an NFS Share for the Exported Data

Instructions for creating an NFS share can be found in the NFS share documentation.

Synchronizing Exported Data to an External System

Exported VM data can be pushed to an external system using a NAS volume sync.

  1. Create a Remote Volume to mount the external volume to the VergeOS NAS (requires standard NFS or CIFS access)
  2. Create a Volume Sync to synchronize data to the Remote Volume. Volume syncs can be started on-demand manually and can also be scheduled using the Start Profile setting.

Automating the VM Export

You can schedule regular exports by configuring a task and a schedule trigger.

Available Schedules

VergeOS includes multiple pre-installed schedules (e.g., "Daily at midnight"). Refer to the Schedules Guide for instructions on creating custom schedules.

Creating a Scheduled Export Task

  1. Navigate to the VM Export Volume: NAS > Volumes > double-click the export volume.
  2. Scroll down to the Export VMs section and click the Tasks button.
  3. Click New on the left menu to create a new export task.
  4. Configure the new task fields:
    • Name: provide a descriptive name, e.g. start-vm-export
    • Object Type: VM Export (pre-selected when accessed from the volume)
    • Object: select the export volume (pre-selected when accessed from the volume)
    • Action: Start Export
  5. Click Submit to save the new task.

Assigning a Schedule to the Task

  1. After submitting, the task detail page opens. Click Schedule Triggers in the left menu.
  2. Click New on the left menu.
  3. Select the desired Schedule from the dropdown list (e.g., "Daily at midnight").
  4. Click Submit to activate the scheduled export.

Configuring Export Settings

Use the Settings button in the Export VMs section to modify export options such as the quiesce setting and maximum exports to store.

By completing these steps, you will have a VM export volume configured to generate exportable snapshots of selected VMs and make them available to third‑party backup solutions or external storage systems.

For most environments, VergeOS’s built‑in snapshot and replication features remain the most efficient and integrated method for protecting and synchronizing VM data between VergeOS systems. The VM Export volume workflow is intended for scenarios where compliance policies, existing storage investments, or portability requirements call for VM data to be maintained outside the primary VergeOS infrastructure. This provides organizations with flexibility and assurance that their VM data can be integrated into broader backup strategies or external storage ecosystems when needed.


Document Information

  • Last Updated: 2026-01-23
  • VergeOS Version: 26.0.2.2

NAS Volume Browser API Reference

Overview

Key Points

  • The volume_browser API is asynchronous - create a job, then poll for results
  • You must include ?fields=id,status,result when polling or the result won't be returned
  • Use empty string "" for the root directory path (not /)
  • The NAS service VM must be running to browse volumes

The volume_browser API provides file system browsing capabilities for NAS volumes. This is useful for automation, integrations, and building custom file management tools.

Prerequisites

  • A running NAS service with at least one online volume
  • API access with appropriate permissions
  • The volume's SHA1 key identifier (found in the volume dashboard URL or API)

How It Works

Browsing a volume is a two-step process:

  1. POST to /api/v4/volume_browser to create a browse job
  2. GET to /api/v4/volume_browser/{job_id}?fields=id,status,result to poll for results

Step 1: Create a Browse Request

Endpoint

POST /api/v4/volume_browser

Request Body

{
  "volume": "62db5fcd888082246b9346c0e65311334d91ed2c",
  "query": "get-dir",
  "params": {
    "dir": "",
    "limit": 1000,
    "offset": null,
    "filter": {
      "extensions": ""
    },
    "volume": "62db5fcd888082246b9346c0e65311334d91ed2c",
    "sort": ""
  }
}

Field Reference

| Field | Type | Required | Description |
|---|---|---|---|
| volume | string | Yes | Volume key (SHA1 hash identifier) |
| query | string | Yes | Operation type: get-dir, rename, delete, paste |
| params | object | Yes | Query parameters (see below) |

Params Object

| Field | Type | Description |
|---|---|---|
| dir | string | Directory path to browse. Use "" for root. |
| limit | integer | Maximum number of entries to return (e.g., 1000) |
| offset | integer/null | Pagination offset; null for the first page |
| filter.extensions | string | Filter by file extensions (empty string for all) |
| volume | string | Volume key (must match the top-level volume) |
| sort | string | Sort field (empty string for default) |

Response

{
  "location": "/v4/volume_browser/9a00434b882b9933512cc9d3abfd557a182d8fd3",
  "dbpath": "volume_browser/9a00434b882b9933512cc9d3abfd557a182d8fd3",
  "$row": 1,
  "$key": "9a00434b882b9933512cc9d3abfd557a182d8fd3"
}

The $key field contains the job ID needed for polling.

Step 2: Poll for Results

Endpoint

GET /api/v4/volume_browser/{job_id}?fields=id,status,result

Critical: Request the Result Field

The result field is NOT returned by default. You must explicitly request it with ?fields=id,status,result. Without this parameter, you will only receive status information.

Without ?fields=id,status,result:

{
  "id": "9a00434b882b9933512cc9d3abfd557a182d8fd3",
  "query": "get-dir",
  "status": "complete",
  "command": ""
}

With ?fields=id,status,result:

{
  "id": "9a00434b882b9933512cc9d3abfd557a182d8fd3",
  "status": "complete",
  "result": [
    {"name": "documents", "size": 4096, "date": 1706120819, "type": "directory"},
    {"name": "file.txt", "size": 1024, "date": 1769198797, "type": "file"}
  ]
}

Status Values

| Status | Description |
|---|---|
| running | Job is still processing |
| complete | Job finished successfully |
| error | Job failed (check result for the error message) |

Polling Strategy

1. POST to create job
2. Wait 200-500ms
3. GET with ?fields=id,status,result
4. If status == "running", wait and retry (up to 30 attempts)
5. If status == "complete", process result
6. If status == "error", handle error

Result Format

When status is complete, the result field contains an array of file/directory entries:

[
  {
    "name": "document.pdf",
    "n_name": "document.pdf",
    "size": 102400,
    "date": 1769136871,
    "type": "file"
  },
  {
    "name": "images",
    "n_name": "images",
    "size": 4096,
    "date": 1769197982,
    "type": "directory"
  }
]

Entry Fields

| Field | Type | Description |
|---|---|---|
| name | string | File or directory name |
| n_name | string | Normalized name (lowercase) |
| size | integer | Size in bytes |
| date | integer | Modification time (Unix timestamp) |
| type | string | "file" or "directory" |
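
The date values are plain Unix timestamps, so converting them for display is straightforward (a small illustration using the sample directory entry shown earlier):

```python
from datetime import datetime, timezone

entry = {"name": "documents", "size": 4096, "date": 1706120819, "type": "directory"}

modified = datetime.fromtimestamp(entry["date"], tz=timezone.utc)
print(modified.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2024-01-24 18:26:59 UTC
```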

Empty Directories

For empty directories, result will be an empty array:

{
  "id": "8cb12559b689f5a52472bd8882dde1c095b2ab64",
  "status": "complete",
  "result": []
}

Examples

cURL

# Step 1: Create browse job
JOB_ID=$(curl -s -X POST "https://your-vergeos.example.com/api/v4/volume_browser" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "volume": "62db5fcd888082246b9346c0e65311334d91ed2c",
    "query": "get-dir",
    "params": {
      "dir": "",
      "limit": 1000,
      "offset": null,
      "filter": {"extensions": ""},
      "volume": "62db5fcd888082246b9346c0e65311334d91ed2c",
      "sort": ""
    }
  }' | jq -r '."$key"')

# Step 2: Poll for results (IMPORTANT: include fields parameter)
sleep 1
curl -s "https://your-vergeos.example.com/api/v4/volume_browser/${JOB_ID}?fields=id,status,result" \
  -H "Authorization: Bearer $TOKEN" | jq

Python

import requests
import time

def browse_volume(base_url, token, volume_key, path=""):
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json"
    }

    # Step 1: Create browse job
    payload = {
        "volume": volume_key,
        "query": "get-dir",
        "params": {
            "dir": path,  # Use "" for root
            "limit": 1000,
            "offset": None,
            "filter": {"extensions": ""},
            "volume": volume_key,
            "sort": ""
        }
    }

    response = requests.post(
        f"{base_url}/api/v4/volume_browser",
        headers=headers,
        json=payload,
        verify=False
    )
    job_id = response.json()["$key"]

    # Step 2: Poll for results
    for _ in range(30):
        time.sleep(0.5)

        # IMPORTANT: Request the result field explicitly
        response = requests.get(
            f"{base_url}/api/v4/volume_browser/{job_id}?fields=id,status,result",
            headers=headers,
            verify=False
        )
        data = response.json()

        if data["status"] == "complete":
            return data.get("result") or []
        elif data["status"] == "error":
            raise Exception(f"Browse failed: {data.get('result')}")

    raise TimeoutError("Browse operation timed out")

Troubleshooting

Common Issues

Result field is empty or missing

  • You must include ?fields=id,status,result in your GET request
  • Without this parameter, only status information is returned

"VM must be in running state to issue a query"

  • The NAS service VM is not running
  • Navigate to NAS > NAS Services and start the service

"Error getting volumes VM service: No such file or directory"

  • The volume's NAS service doesn't exist or was deleted
  • Verify the volume is associated with a valid NAS service

"Resource '/v4/volume_browser/' not found"

  • Empty job ID in poll request
  • Ensure you extract $key correctly from the POST response

Common Mistakes

  1. Using path instead of dir - The field is named dir, not path
  2. Sending params as JSON string - The params field must be an object, not a JSON-encoded string
  3. Missing params fields - All of dir, limit, offset, filter, volume, and sort should be included in the params object
  4. Forgetting ?fields=id,status,result - Without this, no file data is returned
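To make mistakes 1 and 2 concrete, compare a malformed request body with a correct one (field names and the volume key are taken from the examples above):

```python
import json

# WRONG: uses "path" instead of "dir", and encodes params as a JSON string
wrong = {
    "volume": "62db5fcd888082246b9346c0e65311334d91ed2c",
    "query": "get-dir",
    "params": json.dumps({"path": ""}),
}

# RIGHT: "dir" field, params as a real object, all expected fields present
right = {
    "volume": "62db5fcd888082246b9346c0e65311334d91ed2c",
    "query": "get-dir",
    "params": {
        "dir": "",
        "limit": 1000,
        "offset": None,
        "filter": {"extensions": ""},
        "volume": "62db5fcd888082246b9346c0e65311334d91ed2c",
        "sort": "",
    },
}

print(type(wrong["params"]).__name__)  # str  -- will be rejected
print(type(right["params"]).__name__)  # dict -- serialized correctly by json=
```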

Requirements

  • The NAS service VM must be running to browse volumes
  • The volume must be online (mounted)
  • User must have read permissions on the volume

Additional Resources

Feedback

Need Help?

If you need further assistance or have any questions about this article, please don't hesitate to reach out to the VergeOS Support Team.

Removing a Node from a VergeOS Cluster

Overview

This guide covers the process of removing a physical node from a VergeOS system. This procedure is used for hardware decommissioning, cluster downsizing, or node replacement scenarios.

Important Limitation

Node removal can only be performed on the last node in the system. Nodes must be removed in reverse order (last added, first removed).

Prerequisites

Before removing a node, ensure the following:

  • vSAN Health: All vSAN tiers must be healthy (green status)
  • Cluster Size: At least two nodes must remain in the cluster after removal
  • Recent Snapshot: A current system snapshot is recommended before proceeding
  • Backup Access: Verify IPMI or console access is available for troubleshooting

Removal Process

Step 1: Enable Maintenance Mode

  1. Navigate to System > Nodes.
  2. Double-click the last node in the system.
  3. Click Enable Maintenance on the left menu.
  4. Click Yes to confirm.
  5. Wait for all workloads to migrate off the node. Monitor the Running Machines section until it is empty.

Non-Migratable Workloads

VMs with GPU passthrough or host CPU type must be powered off manually before the node can enter maintenance mode.

Step 2: Offline and Delete Drives

Before powering off the node, all drives must be removed from the vSAN.

  1. From the node's dashboard, click Drives on the left menu.
  2. Select all drives listed for the node.
  3. Click Offline from the action menu.
  4. Click Yes to confirm.
  5. Wait for all drives to show Offline status.
  6. With all drives still selected, click Delete from the action menu.
  7. Click Yes to confirm the deletion.

Wait for Data Migration

After deleting the drives, the vSAN will migrate data to remaining drives. Monitor the vSAN dashboard and wait for repairs to complete before proceeding to the next step.

Step 3: Power Off the Node

Verify vSAN Health Before Proceeding

Before powering off the node, navigate to Infrastructure > vSAN Tiers and confirm all tiers are green (healthy). Do not proceed until the vSAN has fully recovered from the drive removal in Step 2.

  1. Once all drives are deleted and the node status shows Maintenance Mode, click Power Off on the left menu.
  2. Click Yes to confirm.
  3. Wait for the node to fully power down.

Step 4: Delete the Node

  1. With the node powered off, click Delete on the left menu.
  2. Click Yes to confirm the deletion.

Step 5: Wait for Final Repairs

After the node is deleted, the vSAN will perform final data redistribution across the remaining nodes.

  1. Navigate to System > vSAN to monitor repair progress.
  2. Wait for all tiers to return to green (healthy) status before performing any other operations.

Do Not Interrupt Repairs

Do not power off, restart, or remove additional nodes while vSAN repairs are in progress. Allow the repairs to complete fully.

Post-Removal Verification

After repairs complete:

  • Verify all vSAN tiers show green status
  • Confirm cluster resources are balanced across remaining nodes
  • Check system logs for any warnings or errors

Additional Resources

Secure Boot and Boot Integrity for Physical Nodes

Key Points

  • VergeOS does not use traditional UEFI Secure Boot for physical nodes
  • VergeOS implements its own boot integrity verification that prevents tampered images from booting
  • This approach provides practical tamper protection without the limitations of UEFI Secure Boot

Overview

VergeOS takes a different approach to boot security than traditional UEFI Secure Boot. While VergeOS physical nodes do not use BIOS-level Secure Boot, the system implements its own boot integrity mechanism that provides robust protection against tampering.

VergeOS Boot Integrity Protection

VergeOS has implemented its own methodology to ensure boot integrity. This mechanism verifies that the VergeOS image has not been modified or tampered with before allowing the system to boot.

How It Works

When a VergeOS node starts up, the system validates the integrity of the boot image. If the image has been tampered with or modified in any way, VergeOS will refuse to boot. This ensures that only authentic, unmodified VergeOS software runs on your infrastructure.
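VergeOS does not publish the internals of this check, but the general technique behind boot-image integrity is digest comparison: hash the image and compare against a known-good value. A purely illustrative sketch (this is not VergeOS's actual implementation, and the image contents are hypothetical):

```python
import hashlib

def image_is_untampered(image_bytes: bytes, expected_digest: str) -> bool:
    """Return True when the image's SHA-256 digest matches the known-good value."""
    return hashlib.sha256(image_bytes).hexdigest() == expected_digest

# Hypothetical image contents and its known-good digest
good_image = b"verge-os-boot-image"
known_digest = hashlib.sha256(good_image).hexdigest()

print(image_is_untampered(good_image, known_digest))              # True: boot proceeds
print(image_is_untampered(good_image + b"tamper", known_digest))  # False: boot is refused
```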

Comparison with Traditional UEFI Secure Boot

  • Tamper protection: UEFI Secure Boot prevents boot of an unsigned/modified OS; VergeOS Boot Integrity prevents boot of tampered VergeOS images.
  • Alternative OS boot: UEFI Secure Boot blocks unsigned operating systems entirely; VergeOS Boot Integrity does not prevent booting a different OS.
  • Implementation: UEFI Secure Boot is enforced at the BIOS/firmware level and requires signed keys; VergeOS Boot Integrity is software-level verification.
  • Practical security: UEFI Secure Boot protects against unauthorized OS loading; VergeOS Boot Integrity protects against VergeOS image tampering.

The key difference is timing and scope:

  • UEFI Secure Boot operates at the BIOS level and won't even attempt to boot an unsigned or unregistered operating system
  • VergeOS Boot Integrity allows the boot process to begin but will not complete boot if the VergeOS image has been tampered with

This means that while someone could theoretically install a completely different operating system on the hardware, they cannot modify the VergeOS image itself and have it boot successfully. Any tampering with VergeOS system files will be detected and prevent boot.

Why VergeOS Does Not Use UEFI Secure Boot

Traditional UEFI Secure Boot presents several challenges that make it impractical for VergeOS:

Certification Requirements

UEFI Secure Boot requires operating systems to be signed with keys that are registered in the system firmware. This process is controlled by a limited set of gatekeepers:

  • Microsoft maintains control over the primary Secure Boot key infrastructure
  • Only a small number of Linux distributions have registered Secure Boot keys (primarily Red Hat and Ubuntu)
  • Most Linux distributions that support Secure Boot actually bootstrap through Ubuntu's signed bootloader (shim) to load their own GRUB bootloader

Practical Limitations

For specialized operating systems like VergeOS, obtaining and maintaining Secure Boot certification would require:

  • Ongoing relationship with certificate authorities
  • Re-signing with each update
  • Dealing with potential key revocation scenarios

Security Implications

Practical Security

For most deployment scenarios, VergeOS boot integrity protection provides equivalent practical security to UEFI Secure Boot. Your VergeOS infrastructure is protected against image tampering, which is the primary concern for production environments.

What VergeOS Boot Integrity Protects Against

  • Modification of VergeOS system files
  • Injection of malicious code into the VergeOS image
  • Tampering with the boot process after VergeOS installation

What Requires Physical Security Controls

  • Installation of an entirely different operating system (requires physical access and reformatting storage)
  • BIOS/firmware-level attacks (mitigate with physical security and firmware passwords)

Physical Security

As with any infrastructure, physical security of your nodes remains important. VergeOS boot integrity protects the software layer, while physical access controls protect against hardware-level attacks.

Disabling UEFI Secure Boot

Before installing VergeOS on physical nodes, you must disable UEFI Secure Boot in the system BIOS/UEFI settings. The specific steps vary by hardware manufacturer, but generally involve:

  1. Enter the BIOS/UEFI setup during system boot (typically F2, Del, or F10)
  2. Navigate to the Security or Boot section
  3. Locate the Secure Boot option and set it to Disabled
  4. Save changes and exit
  5. Proceed with the VergeOS installation

Pro Tip

Document your BIOS settings before making changes. Some enterprise servers may have additional security settings that interact with Secure Boot, such as TPM configuration or boot device restrictions.

Troubleshooting

Common Issues

  • Problem: Node fails to boot after VergeOS installation
    Solution: Verify that UEFI Secure Boot is disabled in BIOS settings. Some systems may re-enable Secure Boot after firmware updates.

  • Problem: "Boot integrity check failed" error message
    Solution: This indicates the VergeOS image has been modified. Re-download and reinstall VergeOS from an official source.

  • Problem: Cannot find Secure Boot option in BIOS
    Solution: Check under Security, Boot, or Authentication menus. Some systems label it as "UEFI Security" or require an administrator/setup password to access.

Additional Resources

Need Help?

If you need further assistance or have any questions about this article, please don't hesitate to reach out to the VergeOS Support Team.

Automation Example

Send Slack Channel Notification and Email Alert on Sync Error

Key Points

  • The VergeOS Task Engine allows you to automate operations, triggered by specific system conditions, events, or scheduled times. Using modular, reusable components (tasks, events, schedules, and webhooks) you can easily configure automation tailored to your environment.
  • The following example demonstrates how tasks, events and webhooks work together to automatically send simultaneous alerts (Slack channel and email) in response to any error generated by a sync operation.

Use Case

Administrators of a DR/BC service provider need to be notified immediately if a nightly sync job encounters errors. Early notification gives them time to investigate and attempt remediation, increasing the likelihood that they can reinitiate the sync and complete it within the available synchronization window.

The company uses multiple redundant alerting systems to notify administrators of critical issues: one system delivers alerts via email, while another posts messages to a dedicated Slack channel.

Using VergeOS Task Engine components, administrators can instantly trigger alerts in both systems whenever an important sync job reports errors. The automation consists of a webhook, two tasks, and an event. The steps below walk you through the full configuration:

1. Configure the Webhook

The webhook defines the target URL and authentication method required by the external Slack system.

System > Tasks Dashboard > New Webhook

Create webhook - Light Mode Create webhook - Dark Mode

2. Create a Task to Send the Email

This task defines the email alert that will be sent.

System > Tasks Dashboard > New Task

Create email alert task - Light Mode Create email alert task - Dark Mode

3. Create a Task to Send the Webhook

This task defines the message payload that will be sent to Slack via the webhook.

System > Tasks Dashboard > New Task

Create Slack webhook task - Light Mode Create Slack webhook task - Dark Mode
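The payload itself is ordinary Slack incoming-webhook JSON: an object with a text field. A hedged sketch of what the task posts (the webhook URL and message text are placeholders; the actual payload is configured in the task form):

```python
import json
import urllib.request

# Placeholder values: substitute your real webhook URL and message text
webhook_url = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"
payload = {"text": "Sync job reported an error; investigate before the sync window closes."}

request = urllib.request.Request(
    webhook_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment to actually post the notification:
# urllib.request.urlopen(request)
print(json.dumps(payload))
```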

4. Assign an Event Trigger to the Send-slack-payload Task

Assigning an event trigger allows us to specify the condition (a sync error) that will fire the task.

From the Send-slack-payload task dashboard: Event Triggers > New

Assign Event Trigger to Slack Task- Light Mode Assign Event Trigger to Slack Task- Dark Mode

5. Assign an Event Trigger to the 'send-email-alert' Task

We apply the same event trigger to this task so that the email alert is also sent when there is a sync error.

System > Tasks Dashboard > Tasks > select the send-email-alert task > Event Triggers > New

Assign Event Trigger to Email Task - Light Mode Assign Event Trigger to Email Task - Dark Mode

This automation ensures that administrators are notified immediately when a sync job encounters an error, allowing them to act promptly, providing the best chance to resolve the issue before the synchronization window closes.

Triggers Based on Multiple Objects

In this example, the trigger is tied to a single outgoing sync. If you want the same trigger to apply to multiple, or even all, outgoing sync jobs, assign a shared Tag to those syncs. You can then configure the trigger to fire whenever any sync with that tag produces an error.

Troubleshooting

Common Issues

  • Webhook not firing: Verify the webhook URL is correct and the external service (Slack) is accessible from your VergeOS environment.
  • Email not received: Check that SMTP settings are properly configured under System > Settings > SMTP.
  • Event trigger not activating: Ensure the trigger is assigned to the correct sync object and the event type is set to "Error".

Additional Resources

Need Help?

If you need further assistance or have any questions about this article, please don't hesitate to reach out to the VergeOS Support Team.

Power on/Power off VMs Based on User Login/Logout and Schedule

Automated Task Example

Key Points

  • The VergeOS Task Engine allows you to automate operations, triggered by specific events or scheduled times. Using modular, reusable components (tasks, events, schedules, and webhooks) you can easily configure automation tailored to your environment.
  • The following example demonstrates how tags, tasks, events, and schedules work together to seamlessly bring workloads online and offline as needed, improving resource efficiency.

Use Case

User JThompson relies on multiple GPU-powered virtual machines to perform 3D modeling and animation work. These VMs consume significant compute and memory resources, and leaving them running when idle is wasteful.

By configuring automation to power them on only when needed - when JThompson logs into the system, and to shut them down when the user logs out or at a scheduled time (for example, every Friday at 6pm), we can ensure that resources are available exactly when needed while avoiding unnecessary usage.

The automation consists of creating tags, defining tasks, and attaching event and schedule triggers. The steps below walk you through the full configuration:

1. Create a Tag Category and Tag

A tag category organizes related tags. We then create a specific tag within that category to designate the VMs to control with this automation.

System > Tags > New

Create tag category - Light Mode Create tag category - Dark Mode

Double-click category created above > New

New tag - Light Mode

New tag - Dark Mode

2. Assign the Tag to the VMs to Automatically Power On/Off

This will identify the VMs that should be controlled by the automation.

Virtual Machines > List > select VMs > Assign Tags > select the tag from above

Assign tag to VMs - Light Mode

Assign tag to VMs - Dark Mode

The VMs will now show the assigned tag in the Tags column.

VMs with tag - Light Mode

VMs with tag - Dark Mode

3. Create a Task to Power On VMs

This task defines the action of starting up the tagged virtual machines.

System > Tasks Dashboard > New Task

Task to power on tagged VMs - Light Mode

Task to power on tagged VMs  - Dark Mode

4. Configure an Event Trigger for User Login

Here we define the activity that will invoke the task (JThompson logs into the system).

From the new task dashboard: Event Triggers > New

Event trigger user login - Light Mode

Event trigger user login - Dark Mode

5. Create a Task to Power Off the VMs

This defines the action of powering down the tagged virtual machines.

System > Tasks Dashboard > New Task

Task to power off tagged VMs - Light Mode

Task to power off tagged VMs - Dark Mode

6. Configure an Event Trigger for User Logout

This configures the task to launch when JThompson logs out.

From the new task dashboard:
Event Triggers > New

Event trigger user logout - Light Mode

Event trigger user logout - Dark Mode

7. Create a Schedule for Fridays at 6:00pm

Creating a schedule allows us to define specific dates/times. After creating the schedule it can be applied to our task and other tasks.

System > Tasks Dashboard > New Schedule

Schedule COB - Light Mode

Schedule COB - Dark Mode

8. Create a Schedule Trigger for the Power Off

We apply the schedule (Fridays at 6pm) to the task to automatically power off the VMs every Friday evening.

From the dashboard of the new task:
Schedule Triggers > New

Schedule trigger - Light Mode

Schedule trigger - Dark Mode

Verification

  • Log in as JThompson → tagged VMs should power on automatically
  • Log out → VMs should power off
  • At Friday 6:00pm → VMs should power off even if the user is still logged in

Troubleshooting

  • If VMs do not power on, verify the tag is assigned to each VM.
  • If schedule triggers do not fire, confirm the system time zone is correct.
  • If login/logout triggers fail, ensure the user account name matches exactly.

This automation ensures that the GPU‑powered VMs are only active when the designated user is logged in. When the user logs out, or at the scheduled cutoff time (Friday 6pm), the VMs are powered down to conserve resources. The pattern can be applied to other high‑resource workloads such as integration test environments, interactive machine learning, CAD rendering stations, or financial modeling clusters, or any system that benefits from running only when needed.

Utilizing an iGPU for AI Acceleration

Key Points

  • iGPUs (built directly into many modern server CPUs) can provide accelerated AI inference for small to medium-sized AI models
  • An iGPU can deliver lower heat output, reduced power consumption, and solid throughput compared to CPU‑only execution or discrete GPUs
  • Using an iGPU enables AI workloads to take advantage of onboard GPU power that would otherwise often be underutilized

More Information

An iGPU (integrated graphics processing unit) is built directly into the CPU rather than existing as a separate graphics card. Within VergeOS, an iGPU can be used for private AI acceleration, allowing you to leverage hardware already present in many modern servers. This approach reduces the processing burden on CPU cores while typically requiring less power and cooling than dedicated add‑on GPUs. Because iGPUs are optimized for parallel matrix and vector operations, they are generally more efficient than CPUs for inference workloads.

iGPUs share system memory with the CPU, which can be a bottleneck. However, for small to medium models this limitation is usually manageable. If your goal is maximum performance, discrete GPU hardware remains the preferred option.

For lightweight inference tasks (e.g., small language models), iGPUs are often more power‑efficient. For heavy training workloads, neither CPU nor iGPU is ideal.

On personal servers running modest AI models, iGPU inference offers a sweet spot: less heat, lower power draw, and decent throughput.

Benefits of Using an iGPU

  • Lower power consumption and reduced cooling requirements compared to CPU‑only execution
  • Leverages hardware already included in CPUs, avoiding additional purchase costs
  • Better throughput per watt than CPU cores
  • Offloads compute burden from CPU cores, improving overall system responsiveness

Modern servers often include capable iGPUs. Even older or less powerful servers can provide meaningful acceleration by utilizing this built‑in hardware.

Prerequisites

  • Server CPU must have an integrated GPU (iGPU)
  • Verify iGPU is enabled in BIOS settings
  • IOMMU/VT-d (Intel) or AMD-Vi (AMD) may need to be enabled for passthrough

High-Level Steps

To use iGPU for private AI models, the following high-level steps are necessary:

  • Create Resource Group: type = "Host GPU"
  • Add iGPU device(s) from physical nodes to the resource group
  • Assign the resource group to one or more AI models

Configuring a Resource Group with an iGPU

  1. Navigate to Infrastructure > Nodes and double‑click the node containing the iGPU.
  2. Select the PCI Devices card or left‑menu option.
  3. From the device list, set the Type filter (top of the Type column) to Display Controller to show only display devices.
  4. Select the iGPU device from the filtered list. ⚠️ Be careful when selecting devices for resource groups: choosing the wrong device (for example, a storage or network controller) can disrupt node operation.
  5. Click Make Resource in the left menu.
  6. Create a new Resource Group:
    If no groups exist, the entry form will appear automatically.
    Otherwise, select Attach to: --New Group-- and click Next
  7. Configure Resource Group fields:
    • Name: Provide a descriptive name, e.g. "iGPU"
    • Type: Select Host GPU
    • Max vRAM: Limit the amount of system RAM available to the iGPU (default = 0; no limit).
      On systems running other workloads, set a max vRAM to prevent contention between those workloads and iGPU usage.
      If max vRAM is set too low, models may fail to load and produce errors.
  8. Click Submit to save the new Resource Group with the selected iGPU.
  9. Follow the prompts at the top of the dashboard to View the node, place the node into Maintenance Mode (see message at top of node dashboard), and Reload the driver (prompted at the top of the dashboard once the node is in maintenance mode).
  10. Exit Maintenance Mode on the node once the driver reload completes.
  11. Navigate to Infrastructure > Resources > Groups and verify your new resource group appears with Enabled checked. Double-click the group to confirm the Node Resources section lists your iGPU device.

Adding Additional Node iGPUs to the Same Resource Group

Pooling iGPUs Across Nodes

By adding iGPUs from multiple nodes into the same resource group, you create a shared pool of acceleration resources that AI models can draw from. For more details, see Resource Groups

To add more iGPUs:

  • Repeat steps 1–5 above.
  • When prompted for a resource group to Attach to, select the existing Host GPU resource group.
  • Follow dashboard prompts to place the node into maintenance mode and reload the driver.
  • Exit maintenance mode after the reload completes.

Assigning the iGPU Resource Group to a Model

Once configured, assign the Host GPU resource group to your AI model:

  1. Navigate to Private AI > Models.
  2. Select the AI model you want to accelerate.
  3. Click Edit in the left menu.
  4. In the GPU Resource Group Allocation field, select your iGPU resource group.
  5. Click Submit to save.

The model will now draw from any available iGPUs in the resource group. For more information on Private AI configuration, see Private AI Configuration.

Managing Files

How To Create, Upload, and Manage Files

The Files section provides a central location to upload files for use in VergeOS. It supports ISOs, VM disk images, logos for custom branding, and general file sharing between sites and tenants.

How to Upload Files to VergeOS

To upload a file, ensure that it is in one of the supported formats.

Other extensions can be uploaded to the server but may not be recognized as usable by VergeOS.

  1. Select Files from the top menu and click Upload.
  2. You may also choose Upload from URL if you are sharing a file from another site or have a URL.

Once the file transfer completes, it will be available in the Files list for use.

  1. To create a Public Link, select a file and click Add Public Link from the left menu.

  2. Choose the format for sharing the file:

  • Anonymous (uuid): Appends the file's UUID to the end of the link.
  • Custom: Allows for a custom name to be used in the link.
  • Use file name: Appends the existing file name and extension to the link.

Expiration Type
  • Never Expire: The link will remain active indefinitely.
  • Set Date: Set a specific expiration date and time for the link.

The Public Link can be shared with other systems, for general file sharing, or with local tenants to provide access without requiring an internet download. However, sharing via this method uses network bandwidth. For a more efficient way to share files to a tenant, see the Sharing Files to Tenants guide.

In the Files section, you can also:

  • Manipulate Public Links
  • Download files
  • Edit file names and storage tiers
  • View and Remove References to files
  • Delete files

Document Information

  • Last Updated: 2024-08-29
  • VergeOS Version: 4.12.6

Importing VMs from Files

Importing via Files allows you to import a single VM at a time by uploading VM data files (such as VMX, VMDK, OVF, VHD/X) to VergeOS and then selecting them for import.

Importing a VM (Configuration and Disks) from Files

Hyper-V VMs

Hyper-V VMs should be exported to OVA/OVF or VMware formats before upload, or you can use the Create VM Shell, Import VM Disks method described below to create the VM first, and then import disks.

  1. Upload the configuration and disk image files to the vSAN. For instructions, see Managing Files.
  2. Click Virtual Machines on the top menu.
  3. Select + New VM.
  4. From the options list, select --Import from Files--. The files uploaded to the vSAN will appear on the right under Selections Available. Click to select the VM configuration file (e.g., *.vmx, *.ovf).
  5. Click Next at the bottom of the screen.
  6. The VM Name will default to the name of the configuration file unless you specify a custom name.
  7. By default, the Preserve MAC Address option is selected. If you wish to assign a new MAC address to the VM, deselect this option.
  8. Select the Preferred Tier, or leave it at the default. This specifies the storage tier for the VM's disks. See Preferred Tier Usage for more details.
  9. Click Submit to create the VM. The new VM's dashboard will be presented.

Create VM Shell, Import VM Disks

If you cannot import the entire configuration, you can create a VM shell (a disk-less VM) and then import individual disk files.

  1. Upload the disk image files to the vSAN. See Managing Files for details.
  2. Create a new Custom VM with appropriate hardware specifications. See the Creating VMs section in the VergeOS help guide.
  3. Add a new drive to the VM, ensuring you select Import Disk in the Files field.
  4. Choose the correct Interface (IDE, SATA, virtio-scsi, virtio-legacy, etc.). Using SATA often helps with driver compatibility in guest OSs.
  5. Select the File from the list of uploaded files (*.vhd, *.vhdx, *.qcow, raw, etc.).
  6. Repeat for additional drives if necessary.
  7. Start the VM and verify that it boots correctly.

Supported File Types

The following file types are supported for VM imports using files:

  • IMG (Raw Disk Image)
  • RAW (Binary Disk Image)
  • QCOW (Legacy QEMU)
  • QCOW2 (QEMU, Xen)
  • VDI (VirtualBox)
  • VHD/VPC (Legacy Hyper-V)
  • VHDX (Hyper-V)
  • OVA (VMware, VirtualBox)
  • OVF (VMware, VirtualBox)
  • VMDK (VMware)
  • VMX (VMware)

Troubleshooting Issues

Failure to Boot into the OS

This is often a driver issue. You may encounter a Windows Inaccessible Boot Device error or similar.

Steps to resolve:

  1. Change the drive interface from virtio-scsi to IDE or SATA. This often resolves driver issues.
  2. Once the guest OS boots, install the virtio drivers by attaching them via a virtual CD-ROM or downloading them from virtio-win.
  3. Shut down the VM.
  4. Switch the drive interface back to virtio-scsi.
  5. Start the VM again.

Document Information

  • Last Updated: 2024-08-29
  • VergeOS Version: 4.12.6

Customizing the VergeOS User Interface

VergeOS supports custom branding at every tenancy layer (if permitted), using the Themes feature. This guide walks you through modifying the user interface (UI) with your organization’s branding elements—including logos, colors, and favicons.

What's New in VergeOS v26

  • Light and Dark Modes: VergeOS includes default light and dark themes, and supports custom themes based on either mode.
  • Multiple Theme Variants: Administrators can create multiple branded themes, such as light and dark versions, and make them available for user selection.

How to Change your UI Branding

Follow these steps to customize your VergeOS environment:

  1. Upload Logo and Icon files:
    • Upload the desired logo in .png or .jpg format. For best results, use a 144x36 image for the large logo and 44x44 for the small logo.
    • If desired, upload a favicon in .ico format.

Instructions for uploading files to the vSAN can be found in the Managing Files guide.

  2. Create one or more Custom Themes to implement your branding:
    Refer to the Themes Product Guide for instructions.

  3. Control User Theme Options:
    All enabled themes are presented as options for users to select (from the utility bar). To enforce exclusive custom branding, you must disable the standard default VergeOS themes (light/dark).

Best Practices for UI Customization

Visual Design

  • Create Light and Dark Variants: Offer both light and dark versions of your custom theme to support different lighting environments and user preferences. Light mode enhances visibility in bright settings, while dark mode reduces glare and can conserve battery life on OLED devices.
  • Consistency: Ensure that the colors and logos you choose for the branding align with your organization's style guide for consistency.
  • Logo Dimensions: Upload logos with the correct dimensions to prevent distortion or scaling issues.

Accessibility & Readability

  • Readability: Use contrasting colors for text and background elements to ensure readability across different devices.
  • Test Across Themes: Verify that all UI elements remain legible and visually coherent in both light and dark modes.

Theme Management & User Experience

  • Preview Before Enabling: Setting a new theme to disabled initially can give you time to verify branding guidelines and usability before enabling it for user selection.
  • Disable Unused Themes: Users can select from all enabled themes. To enforce exclusive branding, disable the default VergeOS themes (light/dark); remove any outdated custom themes you no longer wish to offer.
  • Name Themes Clearly: Use descriptive names (e.g., “Acme Light” or “Acme Dark”) to help users easily identify the appropriate version.

By following these steps and using the available options, you can fully customize the VergeOS UI to reflect your organization's brand. Make sure to save your settings once you're satisfied with the changes.


Document Information

  • Last Updated: 2025-10-29
  • VergeOS Version: 26.0

Configure Authoritative DNS in VergeOS

Overview

This guide walks you through configuring authoritative DNS services in VergeOS using the built-in BIND DNS server. You'll learn how to enable DNS services on a network, create DNS views for access control, configure DNS zones for your domains, and manage DNS records. By the end of this guide, you'll have a fully functional authoritative DNS server running in your VergeOS environment.

Authoritative DNS allows your VergeOS system to be the definitive source for DNS records in domains you control, providing complete management over domain name resolution while integrating with your existing network infrastructure.

What You'll Learn

After completing this guide, you will be able to:

  • Enable BIND DNS services on VergeOS networks and understand the configuration options
  • Create and configure DNS views to control access and implement split-horizon DNS
  • Set up DNS zones for domains you want to host authoritatively
  • Manage DNS records including A, CNAME, MX, and other record types
  • Configure firewall rules to allow DNS traffic while maintaining security
  • Set up zone transfers for redundant DNS configurations
  • Verify DNS functionality and troubleshoot common issues

Common User Questions This Guide Answers:

  • How do I enable authoritative DNS on my VergeOS network?
  • What's the difference between LAN and WAN DNS views and how do I configure them?
  • How do I create a DNS zone for my domain?
  • What firewall rules do I need for public-facing DNS services?
  • How do I add different types of DNS records?
  • How do I set up DNS zone transfers for redundancy?
  • Why isn't my DNS server providing internet resolution to internal clients?

Requirements

VergeOS Version: v4.0 or later
Access Level: Cluster Admin or Tenant Admin permissions
Network Prerequisites: Properly configured network with external connectivity (for public DNS)
Domain Requirements: Domain registered with ability to modify name server settings (for public DNS)

Background Knowledge:

  • Understanding of DNS fundamentals (zones, records, resolution)
  • Familiarity with VergeOS network configuration
  • Basic understanding of firewall rules

Time Estimate

Completion Time: 20-45 minutes
Setup Prerequisites: 10-15 minutes (firewall rules, network verification)

Quick Reference

| Action | Location | Purpose |
|---|---|---|
| Enable BIND DNS | Networks > Edit Network > DNS: Bind | Activates authoritative DNS services |
| Create DNS View | Networks > DNS Views > New | Configure client access policies |
| Create DNS Zone | Networks > DNS Views > [View] > DNS Zones > New | Add domain for authoritative hosting |
| Add DNS Records | Networks > DNS Views > [View] > DNS Zones > [Zone] | Manage domain records |
| Configure Firewall | Networks > Rules > New | Allow DNS traffic (port 53 UDP/TCP) |
| Test DNS | Networks > Diagnostics > DNS Lookup | Verify DNS functionality |

Step-by-Step Configuration

Step 1: Enable BIND DNS on a Network

The first step is enabling BIND DNS services on the network that will provide authoritative DNS. You can enable DNS on any existing network (internal or external), though external networks are typically used for public-facing DNS services.

  1. Navigate to your target network: - Networks > List - Select the network where you want to enable DNS services - Click Edit from the left menu

  2. Configure DNS settings: - Locate the DNS dropdown field - Select Bind from the available options:

    • Bind - Enables full authoritative DNS server capabilities
    • Simple - Basic DNS forwarding (not authoritative)
    • Disabled - No DNS services
    • Other Network - Forward to another network's DNS
  3. Configure additional network settings if needed: - Verify IP Address Type is set to Static (required for DNS services) - Ensure appropriate DNS server list entries for upstream resolution - Configure DHCP settings if clients will receive DNS configuration automatically

  4. Submit the configuration: - Click Submit to save the network configuration - The network will need to be restarted to apply DNS changes

Important Network Behavior Change

When you enable BIND on a network, recursion is disabled by default. This means internal systems will use the network's default route for internet DNS resolution until you configure DNS views with recursion enabled.

Step 2: Create DNS Views

DNS views control how the DNS server responds to different types of clients. This section explains how to create both internal (LAN) and external (WAN) views for split-horizon DNS configuration.

  1. Access DNS Views: - From your DNS-enabled network dashboard - Click DNS Views from the left menu - You'll see an empty list ready for view configuration

  2. Create the LAN (Internal) View: - Click New from the left menu - Configure the LAN view settings:

    • Network: Select your DNS network from the dropdown
    • Name: Enter LAN (or another descriptive internal name)
    • Recursion: Check the box to enable recursion
    • Order ID: Enter 1 (views are processed in order)
  3. Configure LAN view client matching: - In the Match Clients section, click the + icon to add client networks - Add your internal network ranges (these clients will get recursive DNS):

    • 10.0.0.0/8 (private Class A networks)
    • 172.16.0.0/12 (private Class B networks)
    • 192.168.0.0/16 (private Class C networks)
    • Add any other internal networks used in your environment

    Then set the remaining view fields: - Match Destinations: Leave empty for most configurations - Max Cache Size: Set to 32 MB (or adjust based on your needs)
  4. Create the WAN (External) View: - Click New from the left menu again - Configure the WAN view settings:

    • Network: Select your DNS network from the dropdown
    • Name: Enter WAN (or another descriptive external name)
    • Recursion: Leave unchecked (disabled for security)
    • Order ID: Enter 2 (processed after LAN view)
  5. Configure WAN view client matching: - Match Clients: Leave empty (matches all other clients not caught by LAN view) - Match Destinations: Leave empty - Max Cache Size: Set to 32 MB or as appropriate

  6. Submit both views: - Click Submit for each view configuration - Both views should now appear in your DNS Views list

View Processing Logic

Views are processed in order based on Order ID. The first view that matches the client's IP address will handle the request. LAN view (Order ID 1) catches internal clients and provides recursion, while WAN view (Order ID 2) catches all other clients and provides authoritative answers only.
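
For readers familiar with BIND, the two views correspond roughly to configuration like the following. This is an illustrative sketch only: VergeOS generates and manages the actual BIND configuration, and its exact contents may differ.

```
// Illustrative BIND view configuration -- VergeOS writes the real
// named.conf; this sketch only mirrors the settings described above.
view "LAN" {
    match-clients { 10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16; };
    recursion yes;          // internal clients get recursive resolution
    max-cache-size 32m;
};
view "WAN" {
    match-clients { any; }; // everything not matched by the LAN view
    recursion no;           // authoritative answers only
    max-cache-size 32m;
};
```

Because BIND evaluates views top to bottom, placing the LAN view first (Order ID 1) is what guarantees internal clients never fall through to the recursion-disabled WAN view.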

Step 3: Configure DNS Zones

DNS zones define the domains for which your server is authoritative. This section covers creating zones and configuring their basic settings.

  1. Access zone configuration: - From the DNS Views list, click on your LAN view - Click DNS Zones from the left menu - Click New to create a new zone

  2. Configure basic zone settings: - Type: Select Primary (this server will be the master) - Domain: Enter your domain name (e.g., company.com) - Email: Enter the admin email in DNS format (e.g., admin.company.com) - Name Server: Enter the FQDN of your name server:

    • For external name servers: ns1.example.com. (note the trailing period)
    • For in-domain name servers: ns1.company.com. (requires glue records)
  3. Configure zone timing settings: - Default TTL: Set to 1h (3600 seconds) for most environments - Negative Cache TTL: Set to 10m (600 seconds) - Refresh Interval: Set to 3h (10800 seconds) for zone transfers - Retry Interval: Set to 30m (1800 seconds) for failed transfers - Expiry Period: Set to 3w (1814400 seconds) for zone expiration

  4. Configure zone transfer settings (for redundancy): - Notify: Select Yes to enable change notifications - Also Notify: Enter IP addresses of secondary DNS servers, semicolon-separated (e.g., 192.168.1.100;203.0.113.50) - Allow Transfer: Enter IP addresses authorized for zone transfers (e.g., 192.168.1.100;203.0.113.50) - Forwarders: Leave empty for authoritative zones
  5. Submit the zone configuration: - Click Submit to create the zone - The zone will appear in your DNS Zones list

  6. Create the zone in the WAN view (for public access): - Go back to DNS Views and select your WAN view - Repeat the zone creation process with identical settings - This ensures both internal and external clients can resolve the domain

Name Server Recommendations

For public domains, use name servers outside your domain (e.g., ns1.dnsProvider.com) to avoid circular dependencies. If using in-domain name servers (e.g., ns1.yourDomain.com), you must configure glue records at your domain registrar.
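
The zone timing settings from step 3 map directly onto the zone's SOA record. A sketch of the equivalent zone-file header, for readers who want to see where each value lands (illustrative only; VergeOS writes the actual zone file, including the serial number):

```
$TTL 1h                ; Default TTL
company.com.  IN  SOA  ns1.example.com. admin.company.com. (
        2025091101     ; Serial number (incremented on each change)
        3h             ; Refresh interval
        30m            ; Retry interval
        3w             ; Expiry period
        10m )          ; Negative cache TTL
```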

Step 4: Add DNS Records

With zones created, you can now add DNS records for your domains. This section covers creating common record types.

  1. Access DNS records: - Navigate to DNS Views > [Your View] > DNS Zones - Click on your domain zone to access its records - You'll see automatically created NS (Name Server) records

  2. Create A records (IPv4 address mappings): - Click New from the left menu - Configure the A record:

    • Type: Select A
    • Host: Enter the hostname (e.g., www for www.company.com)
    • Value: Enter the IPv4 address (e.g., 192.168.1.100)
    • TTL: Use 0 for default, or specify custom TTL
    • Order ID: Use ascending numbers (1, 2, 3, etc.)
  3. Create CNAME records (aliases): - Click New from the left menu - Configure the CNAME record:

    • Type: Select CNAME
    • Host: Enter the alias name (e.g., webmail)
    • Value: Enter the target FQDN with trailing period (e.g., mail.company.com.)
    • TTL: Use 0 for default
    • Order ID: Continue sequential numbering
  4. Create MX records (mail servers): - Click New from the left menu - Configure the MX record:

    • Type: Select MX
    • Host: Leave blank for domain apex, or enter subdomain
    • Value: Enter priority and mail server (e.g., 10 mail.company.com.)
    • TTL: Use 0 for default
    • Order ID: Continue sequential numbering
  5. Create domain apex record: - Click New from the left menu - Configure the apex A record:

    • Type: Select A
    • Host: Enter the full domain with trailing period (e.g., company.com.)
    • Value: Enter the IPv4 address for the bare domain
    • TTL: Use 0 for default
    • Order ID: Use 1 (typically first record)
  6. Repeat in both views: - Add identical records to both LAN and WAN views - This ensures consistent resolution for internal and external clients - Consider different IP addresses for split-horizon configurations

FQDN Format Requirements

Always use fully qualified domain names (ending with a period) in the Value field to avoid confusion. For Host fields, you can use either relative names (e.g., www) or FQDNs (e.g., www.company.com.). Using FQDNs consistently is recommended.
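
In standard zone-file terms, the records from this step look roughly like the following. Hostnames and addresses are examples only; in VergeOS you enter these through the UI fields described above rather than editing a zone file.

```
; Illustrative records -- names and addresses are examples only.
company.com.  IN  A      203.0.113.10         ; domain apex
www           IN  A      192.168.1.100
mail          IN  A      192.168.1.50
webmail       IN  CNAME  mail.company.com.    ; alias -> canonical name
company.com.  IN  MX     10 mail.company.com. ; MX targets an A record, never a CNAME
```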

Step 5: Configure Network Firewall Rules

DNS services require specific firewall rules to allow client queries and zone transfers. This section covers the necessary rule configurations.

  1. Navigate to network rules: - From your DNS network dashboard - Click Rules from the left menu - Review existing rules to avoid conflicts

  2. Create UDP DNS rule (for standard queries): - Click New from the left menu - Configure the UDP rule:

    • Name: Allow DNS UDP
    • Action: Select Accept
    • Direction: Select Incoming
    • Protocol: Select UDP
    • Port/Port Range: Enter 53
    • Source: Leave as Any for public DNS, or specify networks for private DNS
    • Interface: Select Any or specific interface
  3. Create TCP DNS rule (for zone transfers and large responses): - Click New from the left menu - Configure the TCP rule:

    • Name: Allow DNS TCP
    • Action: Select Accept
    • Direction: Select Incoming
    • Protocol: Select TCP
    • Port/Port Range: Enter 53
    • Source: For security, specify only authorized secondary DNS server IPs
    • Interface: Select Any or specific interface
  4. Apply firewall rules: - Click Submit for each rule - From the network dashboard, click Apply Rules from the left menu - Confirm rule application to activate the changes

TCP Rule Security

For public-facing DNS, only allow TCP port 53 from authorized secondary DNS servers to prevent potential zone transfer attacks. UDP port 53 can typically be allowed from anywhere for standard DNS queries.

Step 6: Verify DNS Configuration

After configuration, it's important to test your DNS services to ensure proper functionality.

  1. Test DNS from the network: - Navigate to your DNS network dashboard - Click Diagnostics from the left menu - Select DNS Lookup from the Query dropdown - Configure the test:

    • Host: Enter a domain you configured (e.g., www.company.com)
    • Query Type: Select A or appropriate record type
    • DNS Server: Leave blank to use local DNS, or specify 127.0.0.1
    • Click Send to perform the lookup
  2. Test recursive resolution (internal clients): - Use the same Diagnostics interface - Test external domain resolution (e.g., google.com) - Resolution should succeed for LAN view clients and fail for WAN view clients

  3. Test from external clients: - Use external DNS testing tools (e.g., nslookup, dig) - Query your DNS server's public IP address - Verify authoritative responses for your domains

  4. Verify zone transfer functionality (if configured): - From secondary DNS servers, test zone transfer requests - Check DNS logs for successful transfer notifications - Verify record synchronization between primary and secondary

Testing Tips

Use external DNS testing websites like whatsmydns.net to verify global DNS propagation. Test both authoritative responses (for your domains) and recursive resolution (for internal clients) to ensure complete functionality.

Advanced Configuration

Setting Up Zone Transfers

Zone transfers provide redundancy by replicating your DNS zones to secondary servers.

  1. Configure primary zone for transfers: - In your zone configuration, enable Notify: Yes - Add secondary server IPs to Also Notify field - Add same IPs to Allow Transfer field - Ensure TCP port 53 firewall rules allow these IPs

  2. Configure secondary server: - Create a new zone with Type: Secondary - Specify the primary server's IP address - No records need to be manually created (they're transferred automatically)

Split-Horizon DNS Implementation

Provide different answers to internal vs external clients:

  1. Create different records in LAN vs WAN views: - LAN view: Internal IP addresses for services - WAN view: Public IP addresses for the same services - Clients get appropriate addresses based on their location

  2. Example configuration: - LAN view: mail.company.com → 192.168.1.50 (internal mail server) - WAN view: mail.company.com → 203.0.113.50 (public mail server IP)

Troubleshooting Index

| Problem/Symptom | Likely Cause | Solution |
|---|---|---|
| Internal clients can't reach internet sites | Recursion not enabled in LAN view | Enable recursion in the LAN DNS view |
| External clients get no DNS response | Firewall blocking UDP port 53 | Add firewall rule allowing UDP port 53 |
| Zone transfer fails | TCP port 53 blocked or IP not authorized | Check firewall rules and Allow Transfer settings |
| DNS queries time out | Network connectivity issues | Verify network routing and firewall rules |
| Records not resolving | FQDN format incorrect | Ensure trailing periods on FQDNs in Value fields |
| Secondary not receiving updates | Notify settings incorrect | Verify Also Notify IPs and notification settings |

Common DNS Resolution Issues

Problem: Clients cannot resolve external domains

  • Check: LAN view recursion setting enabled
  • Check: Upstream DNS servers configured on network
  • Check: Default gateway configured for internet access

Problem: Public DNS queries not working

  • Check: Firewall allows UDP port 53 from internet
  • Check: Network has public IP address
  • Check: Domain registrar has correct name server settings

Problem: Zone transfers not working

  • Check: TCP port 53 allowed from secondary servers
  • Check: Secondary server IPs in Allow Transfer list
  • Check: Network connectivity between primary and secondary

Next Steps

After successfully configuring authoritative DNS, consider these additional implementations:

Monitoring and Maintenance:

  • Set up DNS query logging and monitoring
  • Implement automated DNS record management
  • Plan for DNS server updates and maintenance

Advanced Features:

  • Explore DNSSEC implementation for enhanced security
  • Consider DNS load balancing for high availability
  • Investigate integration with dynamic DNS services

Related Documentation:

For additional support with DNS configuration, contact VergeOS Support with specific details about your implementation requirements and any error messages encountered.


Document Information

  • Document Type: Knowledge Base Configuration Guide
  • Category: Network Services
  • Last Updated: 2025-09-11
  • VergeOS Version: 4.12 and later

VM Lifecycle Management API Overview

Key Points

  • Complete REST API for virtual machine lifecycle management and automation
  • Programmatically create, configure, manage, and delete VMs using HTTP endpoints
  • Four-stage API workflow: Creation → Power Management → Configuration → Advanced Operations
  • Comprehensive API documentation optimized for developers and automation tools

Stage: API Overview (Entry Point)
Input: Developer requirements, automation needs
Output: Guided workflow to specific API documentation
Navigation Path:

  • Start here → Choose specific operation → Follow detailed guides
  • Complete lifecycle: Creation → Power → Configuration → Advanced

This Document Helps With

  • "VM API overview and getting started"
  • "REST API workflow for VM management"
  • "API endpoint reference and navigation"
  • "Developer onboarding for VergeOS APIs"
  • "Infrastructure as code planning"
  • "Automation workflow design"
  • "API integration guidance"
  • "VM management automation strategy"

API Workflow Stages

1. VM Creation API

Create virtual machines programmatically with drives, devices, and network interfaces using REST API calls. → See: VM Creation

2. VM Power Management API

Start, stop, reboot, and monitor VM power states through API endpoints. → See: VM Power Management

3. VM Configuration API

Modify CPU, RAM, storage drives, and network settings via API calls. → See: VM Configuration

4. Advanced VM Operations API

Clone VMs, create snapshots, delete virtual machines, and troubleshoot issues using API methods. → See: VM Advanced Operations

Key API Concepts

  • VM Key vs Machine Key: API distinction for VM settings vs hardware operations
  • REST API Authentication: Bearer token authentication required for all API calls
  • Resource Groups: UUID-based API parameters for device passthrough (GPUs, PCI devices)
  • Storage Tiers: API-configurable performance levels (1-5) for drive placement
  • HTTP Methods: GET, POST, PUT, DELETE operations for complete VM lifecycle management

Primary API Endpoints

| API Operation | HTTP Method | REST Endpoint | Purpose |
|---|---|---|---|
| Create VM | POST | /api/v4/vms | Create new virtual machine |
| VM Actions | POST | /api/v4/vm_actions | Power operations, clone, snapshot |
| Update VM | PUT | /api/v4/vms/{id} | Modify VM configuration |
| Add Storage | POST | /api/v4/machine_drives | Attach drives to VM |
| Add Network | POST | /api/v4/machine_nics | Configure network interfaces |
| Add Devices | POST | /api/v4/machine_devices | Attach GPU/PCI devices |
| Delete VM | DELETE | /api/v4/vms/{id} | Remove virtual machine |
| VM Status | GET | /api/v4/vms/{id} | Query VM information |
| Power State | GET | /api/v4/machine_status/{id} | Check runtime status |

API Quick Reference by Stage

| Stage | Primary Operations | Key Endpoints | Documentation |
|---|---|---|---|
| Creation | Create VM, Add drives, Add NICs, Add devices | POST /api/v4/vms, POST /api/v4/machine_* | VM Creation |
| Power | Start, Stop, Reboot, Monitor | POST /api/v4/vm_actions, GET /api/v4/machine_status | VM Power Management |
| Config | Update CPU/RAM, Manage drives, Manage NICs | PUT /api/v4/vms, POST/PUT/DELETE /api/v4/machine_* | VM Configuration |
| Advanced | Clone, Snapshot, Delete, Troubleshoot | POST /api/v4/vm_actions, DELETE /api/v4/vms | VM Advanced Operations |

Common API Error Scenarios

  • Authentication Issues: Invalid API key, expired tokens, insufficient permissions
  • Resource Constraints: Insufficient storage, memory limits, CPU quotas exceeded
  • Configuration Errors: Invalid parameters, missing required fields, malformed JSON
  • State Conflicts: VM already running, operation in progress, resource locked
  • Network Issues: API timeout, connection refused, service unavailable
  • Validation Errors: Invalid VM names, unsupported configurations, constraint violations

API Getting Started Guide

  1. Create VMs - REST API calls to create virtual machines
  2. Power Control - API endpoints to start, stop, and manage VMs
  3. Configure VMs - API methods to modify VM settings and hardware
  4. Advanced Operations - API calls for cloning, backup, and troubleshooting

Common API Use Cases

  • Automated VM Provisioning: Create VMs programmatically for cloud automation
  • Infrastructure as Code: Manage virtual machine infrastructure through API calls
  • DevOps Integration: Integrate VM management into CI/CD pipelines using REST APIs
  • Monitoring and Alerting: Query VM status and power states via API endpoints
  • Backup and Recovery: Automate VM snapshots and cloning through API operations
  • Resource Management: Programmatically manage VM CPU, memory, and storage allocation

API Authentication

All VM lifecycle API endpoints require authentication:

curl -H "Authorization: Bearer YOUR_API_KEY" \
     -H "Content-Type: application/json" \
     https://your-vergeos.example.com/api/v4/vms
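
In automation scripts, the repeated headers above are worth wrapping in a small helper. A minimal sketch, assuming `VERGE_URL` and `VERGE_API_KEY` environment variables (both names are this example's convention, not a VergeOS requirement):

```shell
#!/bin/sh
# Minimal wrapper: every VergeOS API call needs the same auth headers.
# VERGE_URL and VERGE_API_KEY are assumed to be set in the environment.
vapi() {
  path=$1; shift
  curl -s \
    -H "Authorization: Bearer ${VERGE_API_KEY}" \
    -H "Content-Type: application/json" \
    "$@" "${VERGE_URL}${path}"
}

# Usage examples (GET by default; pass extra curl options for other methods):
# vapi /api/v4/vms
# vapi /api/v4/vm_actions -X POST -d '{"action":"clone","vm":"42","params":{"name":"copy"}}'
```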

Start with the VM Creation API documentation to begin automating your virtual machine management workflows.

VM Advanced Operations API

Key Points

  • Clone VMs with complete configuration and drive copying
  • Create and restore VM snapshots for backup and recovery
  • Safely delete VMs with automatic resource cleanup
  • Comprehensive error handling and troubleshooting guidance

This guide covers advanced virtual machine operations in VergeOS, including cloning, snapshot management, deletion, and troubleshooting. These operations provide powerful capabilities for VM lifecycle management and disaster recovery.

Stage: VM Advanced Operations (4 of 4)
Input: VM key (42), operation type, parameters
Output: Cloned VMs, snapshots, cleanup confirmation
Previous: VM configured → VM Configuration
Common Operations:

  • Clone for templates → New VM creation cycle
  • Snapshot for backup → Recovery workflows
  • Delete for cleanup → End of lifecycle

This Document Helps With

  • "How to clone VMs via API"
  • "Creating VM snapshots and backups"
  • "Restoring VMs from snapshots"
  • "Safely deleting VMs and cleanup"
  • "VM troubleshooting and diagnostics"
  • "Template creation workflows"
  • "Disaster recovery operations"
  • "Bulk VM management"
  • "Resource cleanup automation"

Quick Reference

Primary Endpoints

  • VM Actions: POST /api/v4/vm_actions
  • VM Deletion: DELETE /api/v4/vms/{vm_key}
  • VM Listing: GET /api/v4/vms

Key Actions

  • clone: Create complete VM copy
  • snapshot: Create VM snapshot
  • restore: Restore from snapshot

Authentication

-H "Authorization: Bearer YOUR_API_KEY"
-H "Content-Type: application/json"

Prerequisites

VM must exist → See VM Creation

API Quick Reference

| Operation | Method | Endpoint | Key Type | Purpose |
|---|---|---|---|---|
| Clone VM | POST | /api/v4/vm_actions | VM key | Create complete copy |
| Create Snapshot | POST | /api/v4/vm_actions | VM key | Point-in-time backup |
| Restore Snapshot | POST | /api/v4/vm_actions | VM key | Recovery operation |
| List Snapshots | GET | /api/v4/vms | Filter query | Find snapshots |
| Delete VM | DELETE | /api/v4/vms/{id} | VM key | Complete removal |
| VM Status | GET | /api/v4/vms/{id} | VM key | Configuration check |
| Operation Status | GET | /api/v4/machine_status/{id} | Machine key | Runtime monitoring |

Troubleshooting Index

  • 409 Conflict: Clone name exists, VM already running, operation in progress
  • 507 Insufficient Storage: Not enough space for clone, snapshot storage full
  • 403 Forbidden: API key permissions, VM access denied, cluster restrictions
  • 404 Not Found: VM not found, snapshot not found, invalid VM key
  • 408 Request Timeout: Clone operation timeout, snapshot creation timeout
  • 422 Unprocessable Entity: Invalid clone parameters, snapshot restore conflict
  • 500 Internal Server Error: Storage system issues, hypervisor problems, cluster failures

VM Cloning

POST /api/v4/vm_actions

Description: Creates a complete copy of a VM including all drives and configuration.

Basic Clone

{
  "params": {
    "name": "test clone",
    "quiesce": "true"
  },
  "action": "clone",
  "vm": "42"
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "params": {
      "name": "test clone",
      "quiesce": "true"
    },
    "action": "clone",
    "vm": "42"
  }'

Response Example:

{
  "response": {
    "vmkey": "43",
    "machinekey": "55",
    "machinestatuskey": "55",
    "clusterkey": "1"
  }
}
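
The returned `vmkey` is what subsequent calls need. A sketch of extracting it using only the shell and Python's standard library (the sample JSON is the response shown above):

```shell
# Parse the clone response to pull out the new VM's key.
response='{"response":{"vmkey":"43","machinekey":"55","machinestatuskey":"55","clusterkey":"1"}}'

vmkey=$(printf '%s' "$response" |
  python3 -c 'import sys, json; print(json.load(sys.stdin)["response"]["vmkey"])')

echo "$vmkey"   # → 43
```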

Advanced Clone Options

{
  "params": {
    "name": "production-clone",
    "description": "Production server clone for testing",
    "preserve_macs": "true",
    "preserve_device_uuids": "true",
    "quiesce": "true",
    "cluster": "2"
  },
  "action": "clone",
  "vm": "42"
}

Clone Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Name for the cloned VM |
| description | string | No | Description for the clone |
| quiesce | string | No | Quiesce VM before cloning ("true"/"false") for data consistency |
| preserve_macs | string | No | Preserve MAC addresses ("true"/"false") |
| preserve_device_uuids | string | No | Preserve device UUIDs ("true"/"false") |
| cluster | string | No | Target cluster ID |

Clone Options

  • Quiesce: Use "quiesce": "true" to ensure data consistency by pausing the VM briefly
  • Preserve MACs: Use "preserve_macs": "true" to keep the same MAC addresses (may cause network conflicts)
  • Preserve Device UUIDs: Use "preserve_device_uuids": "true" to maintain device identifiers
  • Cross-Cluster: Specify different cluster ID to clone to another cluster

Clone Workflow Example

# Step 1: Create clone
curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "params": {
      "name": "backup-clone-'"$(date +%Y%m%d)"'",
      "description": "Automated backup clone",
      "quiesce": "true"
    },
    "action": "clone",
    "vm": "42"
  }'

# Step 2: Verify clone creation (using returned vmkey)
curl "https://your-vergeos.example.com/api/v4/vms/43?fields=name,description,created" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Step 3: Check clone power state
curl "https://your-vergeos.example.com/api/v4/machine_status/55?fields=powerstate,status" \
  -H "Authorization: Bearer YOUR_API_KEY"

VM Snapshots

Creating Snapshots

{
  "vm": "42",
  "action": "snapshot",
  "params": {
    "name": "pre-update-snapshot",
    "description": "Before system update"
  }
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "vm": "42",
    "action": "snapshot",
    "params": {
      "name": "pre-update-snapshot",
      "description": "Before system update - '"$(date)"'"
    }
  }'

Restoring from Snapshots

{
  "vm": "42",
  "action": "restore",
  "params": {
    "snapshot_id": "snapshot-67890"
  }
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "vm": "42",
    "action": "restore",
    "params": {
      "snapshot_id": "snapshot-67890"
    }
  }'

Listing VM Snapshots

GET /api/v4/vms

Use filters to find snapshots:

curl "https://your-vergeos.example.com/api/v4/vms?filter=is_snapshot%20eq%20true%20and%20name%20contains%20'web-server'" \
  -H "Authorization: Bearer YOUR_API_KEY"

Find All Snapshots for a VM:

curl "https://your-vergeos.example.com/api/v4/vms?filter=is_snapshot%20eq%20true%20and%20parent_vm%20eq%2042" \
  -H "Authorization: Bearer YOUR_API_KEY"
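
The `%20` sequences above are URL-encoded spaces. If you build filters dynamically, encode them rather than hand-writing the escapes; a sketch using Python's standard library from the shell:

```shell
# Build the filter in plain text, then URL-encode it for the query string.
filter="is_snapshot eq true and parent_vm eq 42"

encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$filter")

echo "$encoded"   # → is_snapshot%20eq%20true%20and%20parent_vm%20eq%2042
# curl "https://your-vergeos.example.com/api/v4/vms?filter=${encoded}" \
#   -H "Authorization: Bearer YOUR_API_KEY"
```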

Snapshot Management Workflow

# Step 1: Create snapshot before maintenance
curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "vm": "42",
    "action": "snapshot",
    "params": {
      "name": "maintenance-snapshot-'"$(date +%Y%m%d-%H%M)"'",
      "description": "Pre-maintenance snapshot"
    }
  }'

# Step 2: List snapshots to find snapshot ID
curl "https://your-vergeos.example.com/api/v4/vms?filter=is_snapshot%20eq%20true%20and%20parent_vm%20eq%2042&fields=name,description,created" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Step 3: Restore if needed (after maintenance issues)
curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "vm": "42",
    "action": "restore",
    "params": {
      "snapshot_id": "found-snapshot-id"
    }
  }'

VM Deletion and Cleanup

Complete VM Deletion

DELETE /api/v4/vms/{vm_key}

Description: Deletes a VM and automatically removes all associated resources including drives, NICs, devices, and configurations.

curl -X DELETE "https://your-vergeos.example.com/api/v4/vms/42" \
  -H "Authorization: Bearer YOUR_API_KEY"

Response: 200 OK on successful deletion.

Automatic Cleanup

When you delete a VM using DELETE /api/v4/vms/{vm_key}, VergeOS automatically removes:

  • All drives attached to the VM
  • All network interfaces (NICs)
  • All devices (GPU, PCI passthrough, USB, TPM, etc.)
  • VM configuration and metadata
  • Cloud-init files and configurations
  • VM notes and documentation
  • Associated machine resources

Pre-Deletion Considerations

Before deleting a VM, consider:

  1. Data Backup: Ensure important data is backed up
  2. Snapshots: VM snapshots may be deleted with the VM
  3. Dependencies: Check if other systems depend on this VM
  4. Network Configuration: Note any special network configurations
  5. Licensing: Consider software licensing implications

Safe Deletion Process

Step 1: Power Off the VM

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "kill",
    "vm": "42"
  }'

Step 2: Verify Power State

curl "https://your-vergeos.example.com/api/v4/machine_status/54?fields=powerstate" \
  -H "Authorization: Bearer YOUR_API_KEY"

Step 3: Create Final Backup (Optional)

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "params": {
      "name": "final-backup-before-deletion",
      "description": "Final backup before VM deletion"
    },
    "action": "clone",
    "vm": "42"
  }'

Step 4: Delete VM and All Resources

curl -X DELETE "https://your-vergeos.example.com/api/v4/vms/42" \
  -H "Authorization: Bearer YOUR_API_KEY"

VM Key Usage

Use the VM key (e.g., 42) from the VM creation response or VM listing, not the machine key. The deletion process automatically handles all associated machine resources.

No Manual Cleanup Required

Unlike some virtualization platforms, VergeOS handles complete resource cleanup automatically. You do not need to manually:

  • Delete individual drives
  • Remove network interfaces
  • Detach devices
  • Clean up configuration files
  • Remove machine status entries

The single DELETE /api/v4/vms/{vm_key} operation handles all cleanup automatically.

Finding Orphaned Resources

# Find drives without associated machines
curl "https://your-vergeos.example.com/api/v4/machine_drives?filter=machine%20eq%20null" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Find NICs without associated machines
curl "https://your-vergeos.example.com/api/v4/machine_nics?filter=machine%20eq%20null" \
  -H "Authorization: Bearer YOUR_API_KEY"

Error Handling and Troubleshooting

Common Error Scenarios

VM Creation Failures

Error: 400 Bad Request - Invalid machine type

{
  "error": "Invalid machine_type 'invalid-type'. Valid options: pc, q35, pc-i440fx-*, pc-q35-*"
}

Solution: Use valid machine types from the supported list.

Power State Conflicts

Error: 409 Conflict - VM already running

{
  "error": "Cannot power on VM: already in running state"
}

Solution: Check current power state before issuing power commands.

Resource Constraints

Error: 507 Insufficient Storage

{
  "error": "Insufficient storage space in tier 3 for requested disk size"
}

Solution: Choose a different storage tier or reduce the disk size.

Clone Failures

Error: 409 Conflict - Clone name already exists

{
  "error": "VM with name 'test clone' already exists"
}

Solution: Use unique names for cloned VMs.
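In automation, the error bodies above can be surfaced uniformly before deciding whether to retry or abort. A small pure helper (the name `describe_error` is hypothetical, for illustration only):

```python
def describe_error(status_code: int, body: dict) -> str:
    """Format a VergeOS error response body into a readable message."""
    return f"VergeOS API error {status_code}: {body.get('error', 'unknown')}"

# Example with the clone-name conflict shown above:
#   describe_error(409, {"error": "VM with name 'test clone' already exists"})
```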

Monitoring VM Operations

Checking Operation Status

Many VM operations are asynchronous. Monitor progress using:

# Check VM status
curl "https://your-vergeos.example.com/api/v4/vms/42?fields=machine%23status" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Check machine status for runtime information
curl "https://your-vergeos.example.com/api/v4/machine_status/54?fields=status,powerstate" \
  -H "Authorization: Bearer YOUR_API_KEY"
Operation Timeouts

Set appropriate timeouts for long-running operations:

  • VM Creation: 5-10 minutes
  • Clone Operations: 10-30 minutes (depending on size)
  • Snapshot Creation: 2-5 minutes
  • Snapshot Restore: 5-15 minutes
  • Power State Changes: 30-60 seconds
  • VM Deletion: 2-5 minutes
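The timeout guidance above can be encoded as a client-side polling budget. A sketch with hypothetical names (`OPERATION_TIMEOUTS`, `polls_for` are not VergeOS API constructs):

```python
# Client-side timeout budget per operation type, in seconds.
# Values mirror the upper ends of the ranges listed above.
OPERATION_TIMEOUTS = {
    "creation": 600,   # VM creation: 5-10 minutes
    "clone": 1800,     # clone operations: 10-30 minutes
    "snapshot": 300,   # snapshot creation: 2-5 minutes
    "restore": 900,    # snapshot restore: 5-15 minutes
    "power": 60,       # power state changes: 30-60 seconds
    "deletion": 300,   # VM deletion: 2-5 minutes
}

def polls_for(operation: str, poll_interval: int = 10) -> int:
    """Number of status polls that fit in an operation's timeout budget."""
    return OPERATION_TIMEOUTS[operation] // poll_interval
```

For example, `polls_for("clone")` yields 180 polls at a 10-second interval, a sensible `max_retries` for a retry helper.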

Retry Logic

Implement retry logic for transient failures:

import time
import requests

def wait_for_operation_completion(vm_id, max_retries=20):
    """Wait for VM operation to complete"""
    for attempt in range(max_retries):
        response = requests.get(
            f"https://your-vergeos.example.com/api/v4/vms/{vm_id}",
            params={"fields": "machine#status#status as operation_status"},
            headers={"Authorization": "Bearer YOUR_API_KEY"}
        )

        status = response.json().get("operation_status", "")
        if status not in ["cloning", "snapshotting", "restoring"]:
            return True

        time.sleep(10)  # Wait 10 seconds between checks

    return False

# Example usage
if wait_for_operation_completion("42"):
    print("Operation completed successfully")
else:
    print("Operation timed out")

Debugging VM Issues

Check VM Configuration
# Get complete VM configuration
curl "https://your-vergeos.example.com/api/v4/vms/42?fields=most" \
  -H "Authorization: Bearer YOUR_API_KEY"
Check Machine Status
# Get runtime status and errors
curl "https://your-vergeos.example.com/api/v4/machine_status/54?fields=most" \
  -H "Authorization: Bearer YOUR_API_KEY"
Check System Resources
# Check cluster resources
curl "https://your-vergeos.example.com/api/v4/clusters/1?fields=resources" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Check storage tiers
curl "https://your-vergeos.example.com/api/v4/storage_tiers" \
  -H "Authorization: Bearer YOUR_API_KEY"

Best Practices

  1. Always test operations in development environments first
  2. Create snapshots before major changes
  3. Monitor resource usage during operations
  4. Implement proper error handling in automation scripts
  5. Use descriptive names for clones and snapshots
  6. Clean up unused resources regularly
  7. Document operational procedures for your team

Related Operations

Need Help?

For additional support with VM advanced operations:

  • Check the VergeOS documentation portal
  • Contact VergeOS support with specific error messages
  • Review system logs for detailed error information
  • Consult the VergeOS community forums

VM Configuration API

Key Points

  • Modify VM settings like CPU, RAM, console, and video through REST API
  • Manage drives with resizing, adding, and removal capabilities
  • Update network interfaces and their configurations
  • Add documentation notes to VMs for operational tracking

This guide covers modifying virtual machine configurations in VergeOS after creation, including CPU/RAM updates, drive management, network interface changes, and adding operational notes.

Stage: VM Configuration (3 of 4)
Input: VM key (42) + Machine key (54), configuration changes
Output: Updated VM settings, modified hardware
Previous: VM powered on → VM Power Management
Common Next Steps:

  • Advanced operations → VM Advanced Operations
  • Power cycle for changes → VM Power Management

This Document Helps With

  • "How to change VM CPU and RAM"
  • "Adding storage drives to existing VMs"
  • "Resizing VM drives and storage"
  • "Managing VM network interfaces"
  • "Adding notes and documentation to VMs"
  • "Hotplug operations and live changes"
  • "VM performance tuning"
  • "Storage expansion workflows"
  • "Network reconfiguration"

Quick Reference

Primary Endpoints

  • VM Settings: PUT /api/v4/vms/{id}
  • VM Notes: POST /api/v4/note_actions
  • Drive Management: POST/PUT/DELETE /api/v4/machine_drives
  • NIC Management: POST/PUT/DELETE /api/v4/machine_nics

Key Concepts

  • VM key: Use for VM settings (CPU, RAM, console)
  • Machine key: Use for hardware (drives, NICs, devices)
  • Hotplug: Some changes can be applied live; others require a VM restart

Authentication

-H "Authorization: Bearer YOUR_API_KEY"
-H "Content-Type: application/json"

Prerequisites

VM must be created first → See VM Creation

API Quick Reference

| Operation | Method | Endpoint | Key Type | Purpose |
|---|---|---|---|---|
| Update VM | PUT | /api/v4/vms/{id} | VM key | CPU, RAM, console settings |
| Add Note | POST | /api/v4/note_actions | VM key | Documentation |
| Add Drive | POST | /api/v4/machine_drives | Machine key | Storage expansion |
| Resize Drive | PUT | /api/v4/machine_drives/{id} | Drive key | Increase disk size |
| Remove Drive | DELETE | /api/v4/machine_drives/{id} | Drive key | Storage removal |
| Add NIC | POST | /api/v4/machine_nics | Machine key | Network interface |
| Update NIC | PUT | /api/v4/machine_nics/{id} | NIC key | Network changes |
| Remove NIC | DELETE | /api/v4/machine_nics/{id} | NIC key | Interface removal |

Troubleshooting Index

  • 400 Bad Request: Invalid RAM size, invalid CPU count, malformed JSON
  • 409 Conflict: VM must be stopped, hotplug not supported, resource in use
  • 507 Insufficient Storage: Tier full, disk size too large, quota exceeded
  • 403 Forbidden: API key permissions, VM access denied, cluster restrictions
  • 422 Unprocessable Entity: Drive cannot be shrunk, invalid interface type
  • 404 Not Found: VM not found, drive not found, NIC not found, invalid vnet

CPU and RAM Updates

PUT /api/v4/vms/{id}

Description: Updates VM configuration. Uses the VM key (not machine key) for VM-level settings.

Request Body Example:

{
  "ram": 16384,
  "cpu_cores": 3,
  "console": "spice",
  "video": "qxl",
  "show_advanced": "true",
  "nested_virtualization": "true",
  "disable_hypervisor": "true"
}

Complete API Call:

curl -X PUT "https://your-vergeos.example.com/api/v4/vms/42" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "ram": 16384,
    "cpu_cores": 3,
    "console": "spice",
    "video": "qxl",
    "show_advanced": "true",
    "nested_virtualization": "true",
    "disable_hypervisor": "true"
  }'

Common Configuration Parameters

| Parameter | Type | Description | Restart Required |
|---|---|---|---|
| ram | integer | RAM in MB | Usually yes |
| cpu_cores | integer | Number of CPU cores | Usually yes |
| console | string | Console type (spice, vnc, none) | On next start |
| video | string | Video adapter (qxl, virtio, std, cirrus) | On next start |
| nested_virtualization | string | Enable nested virtualization ("true"/"false") | Yes |
| disable_hypervisor | string | Disable hypervisor ("true"/"false") | Yes |
| guest_agent | string | Enable guest agent ("true"/"false") | On next start |
| uefi | string | Enable UEFI boot ("true"/"false") | Yes |
| secure_boot | string | Enable secure boot ("true"/"false") | Yes |

VM Key vs Machine Key

  • VM settings (CPU, RAM, console, video): Use VM key (e.g., 42) with /api/v4/vms/{vm_key}
  • Hardware changes (drives, NICs, devices): Use machine key (e.g., 54) with /api/v4/machine_* endpoints

Configuration Changes

  • CPU and RAM changes usually require VM restart
  • Console and video changes take effect on next VM start
  • Nested virtualization and hypervisor settings require VM restart
  • Always check allow_hotplug setting for hot-add capabilities
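Because some of these settings only take effect after a restart, a script can apply the change and then read back the need_restart flag that the VM record exposes (see the VM information response example in the power management guide). A sketch with placeholder host and key; the helper names are hypothetical:

```python
import requests

BASE = "https://your-vergeos.example.com/api/v4"    # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

def changed_settings(**settings):
    """Build a PUT body containing only the settings actually supplied."""
    return {k: v for k, v in settings.items() if v is not None}

def update_vm(vm_key, **settings):
    """Apply settings via the VM key, then report whether a restart is needed."""
    requests.put(f"{BASE}/vms/{vm_key}", headers=HEADERS,
                 json=changed_settings(**settings), timeout=30)
    vm = requests.get(f"{BASE}/vms/{vm_key}",
                      params={"fields": "need_restart"},
                      headers=HEADERS, timeout=30).json()
    return bool(vm.get("need_restart"))

# Usage: if update_vm("42", ram=16384, cpu_cores=3):
#            print("Power cycle the VM to apply the new settings")
```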

Adding Notes to VMs

POST /api/v4/note_actions

Description: Adds or updates notes for a VM in the VergeOS UI for documentation purposes.

Request Body:

{
  "owner": "vms/42",
  "action": "update",
  "params": {
    "text": "This is a test VM"
  }
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/note_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "owner": "vms/42",
    "action": "update",
    "params": {
      "text": "Production web server - Updated 2025-08-24"
    }
  }'

Parameters:

| Name | Type | Required | Description |
|---|---|---|---|
| owner | string | Yes | Resource identifier (format: "vms/{vm_key}") |
| action | string | Yes | Action to perform ("update") |
| params.text | string | Yes | Note text content |

VM Notes

Notes are visible in the VergeOS UI and help with VM documentation, maintenance schedules, or configuration details. Use the VM key (not machine key) in the owner field.

Drive Management

Adding New Drives

Use the machine drives endpoint to add storage after VM creation:

POST /api/v4/machine_drives
{
  "machine": "54",
  "name": "Data Drive",
  "interface": "virtio-scsi",
  "media": "disk",
  "disksize": 536870912000,
  "preferred_tier": "2"
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/machine_drives" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "machine": "54",
    "name": "Data Drive",
    "interface": "virtio-scsi",
    "media": "disk",
    "disksize": 536870912000,
    "preferred_tier": "2"
  }'

Resizing Drives

PUT /api/v4/machine_drives/{drive_id}

Description: Increases the size of an existing drive. Note that drives can only be expanded, not shrunk.

{
  "disksize": 1073741824000
}

Complete API Call:

curl -X PUT "https://your-vergeos.example.com/api/v4/machine_drives/55" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "disksize": 1073741824000
  }'

Drive Resizing

  • Drives can only be expanded, never shrunk
  • The guest OS may need to be configured to recognize the new size
  • Some file systems require manual expansion after drive resize
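The disksize values in these examples are raw byte counts in GiB multiples. A small conversion helper (hypothetical name) avoids off-by-a-factor mistakes:

```python
def gib_to_bytes(gib: int) -> int:
    """Convert GiB to the raw byte count the disksize field expects."""
    return gib * 1024 ** 3

# The values used in this guide's examples:
#   gib_to_bytes(500)  -> 536870912000   (the 500 GiB "Data Drive" add)
#   gib_to_bytes(1000) -> 1073741824000  (the resize example)
```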

Removing Drives

Before deletion, drives must be hot-unplugged if the VM is running:

Step 1: Hot-Unplug Drive (if VM is running)
curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "vm": "42",
    "action": "hotplugdrive",
    "params": {
      "device": "drive-id-here",
      "unplug": true
    }
  }'
Step 2: Delete the Drive
DELETE /api/v4/machine_drives/{drive_id}
curl -X DELETE "https://your-vergeos.example.com/api/v4/machine_drives/55" \
  -H "Authorization: Bearer YOUR_API_KEY"

Drive Management Examples

Adding a CDROM/ISO
{
  "machine": "54",
  "media": "cdrom",
  "interface": "ahci",
  "media_source": "7"
}
Adding an Import Drive
{
  "machine": "54",
  "name": "Ubuntu Server",
  "description": "Ubuntu 22.04 LTS",
  "interface": "virtio-scsi",
  "media": "import",
  "media_source": 123,
  "preferred_tier": "3"
}

Network Interface Management

Adding NICs

POST /api/v4/machine_nics
{
  "machine": "54",
  "name": "Secondary Network",
  "interface": "virtio",
  "vnet": "8",
  "enabled": true
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/machine_nics" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "machine": "54",
    "name": "Secondary Network",
    "interface": "virtio",
    "vnet": "8",
    "enabled": true
  }'

Updating NIC Configuration

PUT /api/v4/machine_nics/{nic_id}
{
  "vnet": "10",
  "enabled": true
}

Complete API Call:

curl -X PUT "https://your-vergeos.example.com/api/v4/machine_nics/78" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "vnet": "10",
    "enabled": true
  }'

Removing NICs

DELETE /api/v4/machine_nics/{nic_id}
curl -X DELETE "https://your-vergeos.example.com/api/v4/machine_nics/78" \
  -H "Authorization: Bearer YOUR_API_KEY"

NIC Configuration Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| machine | string | Yes | Machine ID |
| vnet | string | Yes | Virtual network ID |
| name | string | No | NIC name |
| interface | string | No | NIC interface type (virtio, e1000, rtl8139) |
| enabled | boolean | No | NIC enabled state |

Virtual Network Keys

The vnet parameter uses the network's key/ID. You can find network keys by listing available networks via the networks API endpoint.

Complete Configuration Workflow

Here's an example of updating a VM's complete configuration:

# Step 1: Update VM settings (CPU, RAM, console)
curl -X PUT "https://your-vergeos.example.com/api/v4/vms/42" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "ram": 32768,
    "cpu_cores": 8,
    "console": "spice",
    "video": "virtio"
  }'

# Step 2: Add operational note
curl -X POST "https://your-vergeos.example.com/api/v4/note_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "owner": "vms/42",
    "action": "update",
    "params": {
      "text": "Upgraded to 32GB RAM and 8 cores for increased workload - 2025-08-24"
    }
  }'

# Step 3: Add additional storage
curl -X POST "https://your-vergeos.example.com/api/v4/machine_drives" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "machine": "54",
    "name": "Application Data",
    "interface": "virtio-scsi",
    "media": "disk",
    "disksize": 1073741824000,
    "preferred_tier": "2"
  }'

# Step 4: Add secondary network interface
curl -X POST "https://your-vergeos.example.com/api/v4/machine_nics" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "machine": "54",
    "name": "Management Network",
    "interface": "virtio",
    "vnet": "5",
    "enabled": true
  }'

Configuration Best Practices

Before Making Changes

  1. Check VM Status: Ensure VM is in appropriate state for changes
  2. Backup Important Data: Create snapshots before major changes
  3. Review Dependencies: Consider impact on running applications
  4. Plan Downtime: Some changes require VM restart

After Making Changes

  1. Verify Configuration: Check that changes were applied correctly
  2. Test Functionality: Ensure VM operates as expected
  3. Update Documentation: Add notes about configuration changes
  4. Monitor Performance: Watch for any performance impacts

Hotplug Considerations

# Check if VM supports hotplug
curl "https://your-vergeos.example.com/api/v4/vms/42?fields=allow_hotplug" \
  -H "Authorization: Bearer YOUR_API_KEY"

Hotplug Support

The allow_hotplug setting enables hot-adding and hot-removing drives and NICs while the VM is running:

  • Drives: Can be added/removed on the fly (guest OS must support it; Virtio-SCSI recommended)
  • NICs: Can be added/removed on the fly (widely supported by guest operating systems)
  • CPU/RAM: Changes always require a VM power cycle

See VM Hot-Plug Capabilities for complete details.
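A script can make the same allow_hotplug check before attempting a live change. A sketch, assuming the same placeholder host and key as the other examples (`can_hotplug` is a hypothetical helper name):

```python
import requests

BASE = "https://your-vergeos.example.com/api/v4"    # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

def can_hotplug(vm_key):
    """Return True if the VM allows live drive/NIC changes."""
    vm = requests.get(f"{BASE}/vms/{vm_key}",
                      params={"fields": "allow_hotplug"},
                      headers=HEADERS, timeout=30).json()
    return bool(vm.get("allow_hotplug"))

# Usage: plan a restart window when can_hotplug("42") is False
```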

Error Handling

Common Configuration Errors

Error: 400 Bad Request - Invalid RAM size

{
  "error": "RAM size must be at least 512 MB and at most 1048576 MB"
}

Error: 409 Conflict - VM must be stopped

{
  "error": "Cannot modify CPU cores while VM is running without hotplug support"
}

Solution: Stop the VM or check hotplug capabilities before making changes.

Error: 507 Insufficient Storage

{
  "error": "Insufficient storage space in tier 2 for requested disk size"
}

Solution: Choose a different storage tier or reduce the disk size.

Related Operations

Need Help?

For additional support with VM configuration:

  • Check the VergeOS documentation portal
  • Contact VergeOS support with specific error messages
  • Review system logs for detailed error information
  • Consult the VergeOS community forums

VM Power Management API

Key Points

  • Control VM power states through REST API endpoints
  • Support for graceful and forced power operations
  • Monitor VM power state and runtime status
  • Understand VM key vs Machine key for different status checks

This guide covers managing virtual machine power states in VergeOS, including starting, stopping, rebooting, and monitoring VMs. The VergeOS API provides comprehensive power management capabilities with both graceful and forced operations.

Stage: VM Power Management (2 of 4)
Input: VM key (42) from creation, power operation type
Output: Power state changes, runtime status
Previous: VM created → VM Creation
Common Next Steps:

  • Configure VM settings → VM Configuration
  • Advanced operations → VM Advanced Operations

This Document Helps With

  • "How to start/stop VMs via API"
  • "Checking VM power status"
  • "Graceful vs forced VM shutdown"
  • "VM reboot and reset operations"
  • "Monitoring VM power state"
  • "Power management automation"
  • "VM startup troubleshooting"
  • "Scheduled power operations"
  • "Resource optimization through power control"

Quick Reference

Primary Endpoints

  • Power Actions: POST /api/v4/vm_actions
  • VM Status: GET /api/v4/vms/{id}
  • Power State: GET /api/v4/machine_status/{machine_id}

Key Actions

  • poweron: Start VM
  • poweroff: Graceful shutdown (ACPI)
  • kill: Force power off
  • reset: Reboot VM

Authentication

-H "Authorization: Bearer YOUR_API_KEY"
-H "Content-Type: application/json"

Prerequisites

VM must be created first → See VM Creation

API Quick Reference

| Operation | Method | Endpoint | Key Type | Purpose |
|---|---|---|---|---|
| Power On | POST | /api/v4/vm_actions | VM key | Start virtual machine |
| Power Off | POST | /api/v4/vm_actions | VM key | Graceful shutdown (ACPI) |
| Force Off | POST | /api/v4/vm_actions | VM key | Immediate termination |
| Reboot | POST | /api/v4/vm_actions | VM key | Restart VM |
| VM Info | GET | /api/v4/vms/{id} | VM key | Configuration data |
| Power State | GET | /api/v4/machine_status/{id} | Machine key | Runtime status |

Troubleshooting Index

  • 409 Conflict: VM already running, VM not running, power state mismatch
  • 507 Insufficient Resources: Not enough cluster resources, memory/CPU unavailable
  • 403 Forbidden: API key permissions, cluster access denied, VM access restricted
  • 404 Not Found: Invalid VM key, VM deleted, machine key not found
  • 408 Request Timeout: Power operation timeout, VM unresponsive, cluster communication failure
  • 500 Internal Server Error: Hypervisor issues, node problems, storage failures

Starting VMs

POST /api/v4/vm_actions

Description: Powers on a virtual machine and waits for it to reach running state.

Power On Request:

{
  "action": "poweron",
  "params": {},
  "vm": "42"
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "poweron",
    "params": {},
    "vm": "42"
  }'

Response: 201 Created when action is initiated.

Best Practices

  • Always verify VM configuration before powering on
  • Ensure all required drives and network interfaces are attached
  • Check cluster resource availability
  • Verify VM is not already running to avoid conflicts
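The last point can be automated by reading the power state before issuing the action. A sketch with placeholder host and key (`poweron_if_stopped` is a hypothetical helper name):

```python
import requests

BASE = "https://your-vergeos.example.com/api/v4"    # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

def poweron_if_stopped(vm_key, machine_key):
    """Power on a VM only if it is not already running, avoiding a 409."""
    status = requests.get(f"{BASE}/machine_status/{machine_key}",
                          params={"fields": "powerstate"},
                          headers=HEADERS, timeout=30).json()
    if status.get("powerstate"):
        return False  # already running; nothing to do
    requests.post(f"{BASE}/vm_actions", headers=HEADERS,
                  json={"action": "poweron", "params": {}, "vm": vm_key},
                  timeout=30)
    return True

# Usage: poweron_if_stopped("42", "54")
```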

Stopping VMs

Graceful Power Off (ACPI)

Description: Sends an ACPI shutdown signal to the guest operating system, allowing it to shut down cleanly.

{
  "action": "poweroff",
  "vm": "42"
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "poweroff",
    "vm": "42"
  }'

Force Power Off (Kill)

Description: Immediately terminates the VM without allowing the guest OS to shut down cleanly. Use only when graceful shutdown fails.

{
  "action": "kill",
  "vm": "42"
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "kill",
    "vm": "42"
  }'

Force Power Off

Using kill action may cause data loss or corruption. Always try graceful poweroff first and only use kill when necessary.

Rebooting VMs

Graceful Reboot (ACPI)

Description: Sends an ACPI reset signal to the guest operating system for a clean restart.

{
  "action": "reset",
  "params": {
    "graceful": true
  },
  "vm": "42"
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "reset",
    "params": {
      "graceful": true
    },
    "vm": "42"
  }'

Hard Reset (Power Cycle)

Description: Immediately restarts the VM without allowing the guest OS to shut down cleanly.

{
  "action": "reset",
  "vm": "42"
}

Complete API Call:

curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "reset",
    "vm": "42"
  }'

VM Status and Information

GET /api/v4/vms/{id}

Description: Retrieves VM configuration and metadata using various field filters.

Get Complete VM Information:

curl "https://your-vergeos.example.com/api/v4/vms/42?fields=most" \
  -H "Authorization: Bearer YOUR_API_KEY"

Response Example:

{
  "$key": 42,
  "name": "test",
  "machine": 54,
  "description": "test vm",
  "enabled": true,
  "created": 1755991665,
  "modified": 1755993248,
  "is_snapshot": false,
  "machine_type": "pc-q35-9.0",
  "allow_hotplug": true,
  "guest_agent": true,
  "cpu_cores": 3,
  "cpu_type": "host",
  "ram": 16384,
  "console": "spice",
  "video": "qxl",
  "sound": "none",
  "os_family": "linux",
  "rtc_base": "utc",
  "boot_order": "cd",
  "console_pass_enabled": false,
  "usb_tablet": true,
  "uefi": true,
  "secure_boot": false,
  "serial_port": false,
  "boot_delay": 5,
  "uuid": "821e96ec-2479-7cc4-7c14-c623557bdd2b",
  "need_restart": false,
  "console_status": 42,
  "cloudinit_datasource": "none",
  "imported": false,
  "created_from": "custom",
  "migration_method": "auto",
  "note": "This is a test VM",
  "power_cycle_timeout": 0,
  "allow_export": true,
  "creator": "admin",
  "nested_virtualization": true,
  "disable_hypervisor": true,
  "usb_legacy": false
}

VM Power State and Runtime Status

GET /api/v4/machine_status/{machine_id}

Description: Retrieves the actual runtime status and power state of a VM using the machine key.

Check VM Power State:

curl "https://your-vergeos.example.com/api/v4/machine_status/54?fields=most" \
  -H "Authorization: Bearer YOUR_API_KEY"

Stopped VM Response Example

{
  "$key": 54,
  "machine": 54,
  "running": false,
  "migratable": true,
  "node": null,
  "migrated_node": null,
  "migration_destination": null,
  "started": 1755993338,
  "local_time": 0,
  "status": "stopped",
  "status_info": "",
  "state": "offline",
  "powerstate": false,
  "last_update": 1755993358,
  "running_cores": 3,
  "running_ram": 16384,
  "agent_version": "",
  "agent_features": [],
  "agent_guest_info": []
}

Running VM Response Example

{
  "$key": 44,
  "machine": 44,
  "running": true,
  "migratable": true,
  "node": 3,
  "migrated_node": null,
  "migration_destination": null,
  "started": 1755460982,
  "local_time": 0,
  "status": "running",
  "status_info": "",
  "state": "online",
  "powerstate": true,
  "last_update": 1755993927,
  "running_cores": 6,
  "running_ram": 12288,
  "agent_version": "",
  "agent_features": [],
  "agent_guest_info": []
}

VM vs Machine Status

  • VM Information (/api/v4/vms/{vm_key}): Configuration, settings, and metadata
  • Power State (/api/v4/machine_status/{machine_key}): Runtime status, power state, and resource usage
  • Always use the machine key (not VM key) to check actual power state and runtime status

Status Fields

  • powerstate: Boolean indicating if VM is powered on
  • running: Boolean indicating if VM is currently running
  • status: Text status ("running", "stopped", etc.)
  • state: Overall state ("online", "offline")
  • node: Which physical node the VM is running on (null if stopped)

Power State Monitoring

Checking Power State Only

For quick power state checks, you can request specific fields:

# Check just the power state
curl "https://your-vergeos.example.com/api/v4/machine_status/54?fields=powerstate,running,status" \
  -H "Authorization: Bearer YOUR_API_KEY"

Response:

{
  "powerstate": true,
  "running": true,
  "status": "running"
}

Monitoring Power State Changes

import time
import requests

def wait_for_power_state(machine_id, desired_state, max_retries=10):
    """Wait for VM to reach desired power state"""
    for attempt in range(max_retries):
        response = requests.get(
            f"https://your-vergeos.example.com/api/v4/machine_status/{machine_id}",
            params={"fields": "powerstate,running,status"},
            headers={"Authorization": "Bearer YOUR_API_KEY"}
        )

        data = response.json()
        if data.get("powerstate") == desired_state:
            return True

        time.sleep(5)  # Wait 5 seconds between checks

    return False

# Example usage
if wait_for_power_state("54", True):
    print("VM is now running")
else:
    print("VM failed to start within timeout")

Common Power Management Workflows

Safe VM Shutdown Workflow

# Step 1: Attempt graceful shutdown
curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"action": "poweroff", "vm": "42"}'

# Step 2: Wait and check status (repeat as needed)
sleep 30
curl "https://your-vergeos.example.com/api/v4/machine_status/54?fields=powerstate,status" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Step 3: Force shutdown if graceful failed (after reasonable timeout)
curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"action": "kill", "vm": "42"}'
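The same workflow in Python, with polling in place of a fixed sleep. The host and key are placeholders and `shutdown` is a hypothetical helper name:

```python
import time
import requests

BASE = "https://your-vergeos.example.com/api/v4"    # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

def shutdown(vm_key, machine_key, grace_seconds=120):
    """Attempt graceful poweroff; fall back to kill after grace_seconds."""
    requests.post(f"{BASE}/vm_actions", headers=HEADERS,
                  json={"action": "poweroff", "vm": vm_key}, timeout=30)
    deadline = time.time() + grace_seconds
    while time.time() < deadline:
        state = requests.get(f"{BASE}/machine_status/{machine_key}",
                             params={"fields": "powerstate"},
                             headers=HEADERS, timeout=30).json()
        if not state.get("powerstate"):
            return "graceful"  # guest shut down cleanly
        time.sleep(5)
    requests.post(f"{BASE}/vm_actions", headers=HEADERS,
                  json={"action": "kill", "vm": vm_key}, timeout=30)
    return "forced"

# Usage: shutdown("42", "54")
```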

VM Restart Workflow

# Step 1: Graceful reboot
curl -X POST "https://your-vergeos.example.com/api/v4/vm_actions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "reset",
    "params": {"graceful": true},
    "vm": "42"
  }'

# Step 2: Monitor restart progress
curl "https://your-vergeos.example.com/api/v4/machine_status/54?fields=powerstate,status,node" \
  -H "Authorization: Bearer YOUR_API_KEY"

Error Handling

Common Power Management Errors

Error: 409 Conflict - VM already running

{
  "error": "Cannot power on VM: already in running state"
}
Solution: Check current power state before issuing power commands.

Error: 409 Conflict - VM not running

{
  "error": "Cannot power off VM: not in running state"
}
Solution: Verify VM is actually running before attempting shutdown.

Error: 507 Insufficient Resources

{
  "error": "Insufficient cluster resources to start VM"
}
Solution: Check cluster resource availability or reduce VM resource requirements.

Operation Timeouts

Set appropriate timeouts for power operations:

  • Power On: 30-60 seconds
  • Graceful Shutdown: 60-120 seconds
  • Force Shutdown: 10-30 seconds
  • Reboot: 60-120 seconds

Related Operations

Need Help?

For additional support with VM power management:

  • Check the VergeOS documentation portal
  • Contact VergeOS support with specific error messages
  • Review system logs for detailed error information
  • Consult the VergeOS community forums

VM Creation API

Key Points

  • Create VMs with essential configuration parameters using REST API
  • Support for recipe-based VM creation with complex configurations
  • Add drives, devices, and network interfaces after VM creation
  • Understand VM key vs Machine key distinctions for different operations

This guide covers creating virtual machines in VergeOS, from basic VM creation through adding drives, devices, and network interfaces. The VergeOS API provides comprehensive endpoints for VM creation and hardware configuration.

Stage: VM Creation (1 of 4)
Input: API credentials, cluster info, VM specifications
Output: VM key (42) + Machine key (54)
Next: Use keys for power management
Common Next Steps:

  • Power on VM → VM Power Management
  • Configure settings → VM Configuration
  • Advanced operations → VM Advanced Operations

This Document Helps With

  • "How to create a VM via API"
  • "Adding drives during VM setup"
  • "Attaching GPU/PCI devices to VMs"
  • "VM creation with cloud-init"
  • "Understanding VM vs machine keys"
  • "Setting up network interfaces for new VMs"
  • "Recipe-based VM provisioning"
  • "Bulk VM creation automation"
  • "Infrastructure as code VM deployment"

Quick Reference

Primary Endpoints

  • Create VM: POST /api/v4/vms
  • Add Drive: POST /api/v4/machine_drives
  • Add Device: POST /api/v4/machine_devices
  • Add NIC: POST /api/v4/machine_nics

Key Parameters

  • name: VM identifier (required)
  • cluster: Target cluster ID
  • machine: Machine ID from VM creation (for hardware additions)
  • resource_group: UUID for device passthrough

Authentication

-H "Authorization: Bearer YOUR_API_KEY"
-H "Content-Type: application/json"

Next Steps

After VM creation → Power Management (VM Power Management)

API Quick Reference

| Operation | Method | Endpoint | Key Type | Purpose |
|---|---|---|---|---|
| Create VM | POST | /api/v4/vms | Returns both | Initial creation |
| Add Drive | POST | /api/v4/machine_drives | Machine key | Hardware |
| Add Device | POST | /api/v4/machine_devices | Machine key | GPU/PCI passthrough |
| Add NIC | POST | /api/v4/machine_nics | Machine key | Network interface |
| Power On | POST | /api/v4/vm_actions | VM key | Control |
| Check Status | GET | /api/v4/machine_status/{id} | Machine key | Monitor |

Troubleshooting Index

  • 409 Conflict: VM name exists, already running, permission denied
  • 400 Bad Request: Invalid parameters, missing required fields, invalid JSON
  • 507 Insufficient Storage: Tier full, reduce size, choose different tier
  • 403 Forbidden: API key permissions, cluster access denied
  • 404 Not Found: Invalid cluster ID, missing media source, invalid resource group
  • 422 Unprocessable Entity: Invalid drive interface, unsupported media type

Prerequisites

  • Valid VergeOS API credentials with VM management permissions
  • Understanding of VergeOS concepts: clusters, vnets, media sources, and resource groups
  • Basic knowledge of REST API principles and JSON formatting

Authentication

All VM creation operations require authentication using either:

  • API Key: Include in the Authorization header as Bearer YOUR_API_KEY
  • Basic Authentication: Username and password for interactive sessions
  • Session Token: For web-based integrations
# Using API Key
curl -H "Authorization: Bearer YOUR_API_KEY" \
     -H "Content-Type: application/json" \
     https://your-vergeos.example.com/api/v4/vms

Basic VM Creation

POST /api/v4/vms

Description: Creates a new virtual machine with the specified configuration.

Request Parameters:

| Name | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Unique VM name |
| description | string | No | VM description |
| cluster | string | No | Target cluster ID (numeric string) |
| ram | integer | No | RAM in MB (default: 1024) |
| cpu_cores | integer | No | Number of CPU cores (default: 1) |
| guest_agent | string | No | Enable guest agent ("true"/"false") |
| console_pass_hash | string | No | Console password hash (empty string if not used) |
| video | string | No | Video adapter type (virtio, std, cirrus, etc.) |
| rtc_base | string | No | RTC base setting (utc, localtime) |
| uefi | string | No | Enable UEFI boot ("true"/"false") |

Request Body Example:

{
  "name": "web-server-01",
  "description": "Production web server",
  "cluster": "1",
  "ram": 8192,
  "cpu_cores": 4,
  "guest_agent": "true",
  "console_pass_hash": "",
  "video": "virtio",
  "rtc_base": "utc",
  "uefi": "true"
}

Response Example:

{
  "location": "/v4/vms/42",
  "dbpath": "vms/42",
  "$row": 42,
  "$key": "42",
  "response": {
    "machine": "54"
  }
}

Response Fields:

| Field | Type | Description |
| --- | --- | --- |
| location | string | API endpoint for the created VM |
| dbpath | string | Database path for the VM record |
| $row | integer | Database row number |
| $key | string | VM ID (used for subsequent API calls) |
| response.machine | string | Machine ID (used for drives, NICs, devices) |

Error Responses:

  • 400 Bad Request: Invalid configuration parameters
  • 409 Conflict: VM name already exists
  • 403 Forbidden: Insufficient permissions

VM Key vs Machine Key

  • VM key (e.g., "42"): Use for VM settings like CPU, RAM, console
  • Machine key (e.g., "54"): Use for hardware like drives, NICs, devices
  • You get both keys in the VM creation response
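As a small illustration, both keys can be extracted from the creation response shown above. This is a sketch; `extract_keys` is our own helper name, not part of any official VergeOS client:

```python
import json

def extract_keys(create_response: dict) -> tuple:
    """Return (vm_key, machine_key) from a POST /api/v4/vms response.

    vm_key      -> VM-level settings (CPU, RAM, console)
    machine_key -> hardware operations (drives, NICs, devices)
    """
    return create_response["$key"], create_response["response"]["machine"]

# The response body from the example above
body = json.loads('''
{
  "location": "/v4/vms/42",
  "dbpath": "vms/42",
  "$row": 42,
  "$key": "42",
  "response": {"machine": "54"}
}
''')

vm_key, machine_key = extract_keys(body)
print(vm_key, machine_key)  # 42 54
```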

Recipe-Based VM Creation

VergeOS supports complex VM creation using recipes that include drives, network interfaces, and devices.

Complete VM with Recipe Configuration

{
  "name": "enterprise-vm",
  "description": "Enterprise application server",
  "cluster": "1",
  "cpu_cores": 8,
  "ram": 16384,
  "guest_agent": "true",
  "video": "virtio",
  "rtc_base": "utc",
  "uefi": "true",
  "secure_boot": "true",
  "console_pass_hash": "",
  "cloudinit_datasource": "nocloud",
  "cloudinit_files": [
    {
      "name": "user-data",
      "contents": "#cloud-config\nusers:\n  - name: admin\n    sudo: ALL=(ALL) NOPASSWD:ALL\n    ssh_authorized_keys:\n      - ssh-rsa AAAAB3NzaC1yc2E..."
    },
    {
      "name": "meta-data",
      "contents": "instance-id: enterprise-vm-001\nlocal-hostname: enterprise-vm"
    }
  ]
}

Adding Drives

Drives must be created separately after VM creation using the machine drives endpoint.

POST /api/v4/machine_drives

Request Parameters:

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| machine | string | Yes | Machine ID from VM creation |
| name | string | No | Drive name |
| media | string | No | Media type (disk, cdrom, import, clone, efidisk) |
| interface | string | No | Drive interface (virtio-scsi, ide, ahci, etc.) |
| disksize | integer | No | Disk size in bytes (for new disks) |
| preferred_tier | string | No | Storage tier (1-5) |
| media_source | string | No | Source media ID (for import/clone/cdrom) |
| show_pt | string | No | Override Preferred Tier ("true"/"false"); overrides the media source's default tier |

Creating a Boot Drive

{
  "machine": "54",
  "name": "OS Drive",
  "media": "disk",
  "interface": "virtio-scsi",
  "disksize": 2199023255552,
  "preferred_tier": "1"
}

Response Example:

{
  "location": "/v4/machine_drives/54",
  "dbpath": "machine_drives/54",
  "$row": 54,
  "$key": "54"
}
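Because disksize is expressed in raw bytes, it is easy to be off by an order of magnitude. The 2199023255552 in the request above is exactly 2 TiB; a small conversion helper (our own sketch, not VergeOS tooling) keeps this explicit:

```python
def gib_to_bytes(gib: int) -> int:
    """Binary gibibytes -> bytes, the unit expected by disksize."""
    return gib * 1024 ** 3

def tib_to_bytes(tib: int) -> int:
    """Binary tebibytes -> bytes."""
    return tib * 1024 ** 4

print(tib_to_bytes(2))    # 2199023255552 (the 2 TiB boot drive above)
print(gib_to_bytes(100))  # 107374182400 (a 100 GiB drive)
```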

Attaching CDROM/ISO

{
  "machine": "54",
  "media": "cdrom",
  "interface": "ahci",
  "media_source": "7"
}

Response Example:

{
  "location": "/v4/machine_drives/55",
  "dbpath": "machine_drives/55",
  "$row": 55,
  "$key": "55"
}

Importing from Media Source

{
  "machine": "54",
  "name": "Ubuntu Server",
  "description": "Ubuntu 22.04 LTS",
  "interface": "virtio-scsi",
  "media": "import",
  "media_source": 123,
  "preferred_tier": "3"
}

Adding Devices (GPU, PCI Passthrough, etc.)

POST /api/v4/machine_devices

Description: Attaches hardware devices like GPUs, PCI devices, USB devices, or TPM to a virtual machine.

Request Parameters:

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| machine | string | Yes | Machine ID |
| resource_group | string | Yes | Resource group UUID for the device |
| settings_args | object | No | Device-specific settings (empty object for basic passthrough) |

PCI Passthrough GPU

{
  "machine": "54",
  "resource_group": "1f67f07e-f653-db95-c475-01b8a2ea0ff1",
  "settings_args": {}
}

Response Example:

{
  "location": "/v4/machine_devices/2",
  "dbpath": "machine_devices/2",
  "$row": 2,
  "$key": "2",
  "response": {
    "uuid": "934e250b-a13c-bd8f-104d-a31995b06eba"
  }
}

Finding Resource Groups

The resource_group parameter identifies the specific hardware device to attach. Use these endpoints to find available resource group UUIDs:

  • GET /api/v4/resource_groups - General hardware devices (GPUs, PCI devices, USB, etc.)
  • GET /api/v4/node_nvidia_vgpu_devices - NVIDIA vGPU devices specifically

Finding Available Devices

# General hardware devices
curl "https://your-vergeos.example.com/api/v4/resource_groups" \
  -H "Authorization: Bearer YOUR_API_KEY"

# NVIDIA vGPU devices
curl "https://your-vergeos.example.com/api/v4/node_nvidia_vgpu_devices" \
  -H "Authorization: Bearer YOUR_API_KEY"

Adding Network Interfaces

POST /api/v4/machine_nics

Request Parameters:

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| machine | string | Yes | Machine ID |
| vnet | string | Yes | Virtual network ID (key of the target network) |
| name | string | No | NIC name |
| interface | string | No | NIC interface type (virtio, e1000, etc.) |
| enabled | boolean | No | NIC enabled state |

Example:

{
  "machine": "54",
  "vnet": "3"
}

Response Example:

{
  "location": "/v4/machine_nics/78",
  "dbpath": "machine_nics/78",
  "$row": 78,
  "$key": "78"
}

Virtual Network Keys

The vnet parameter uses the network's key/ID. For example, vnet "3" might be your external network. You can find network keys by listing available networks via the networks API endpoint.
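Assuming the networks listing returns rows with $key and name fields (the row shape here is inferred from the other endpoints in this article, so verify it against your own listing), a lookup can be sketched as:

```python
def find_vnet_key(networks: list, name: str) -> str:
    """Return the $key of the network named `name`, for use as the vnet parameter."""
    for net in networks:
        if net.get("name") == name:
            return net["$key"]
    raise KeyError(f"no network named {name!r}")

# Illustrative listing data only
nets = [
    {"$key": "1", "name": "Core"},
    {"$key": "3", "name": "External"},
]
print(find_vnet_key(nets, "External"))  # 3
```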

Complete VM Creation Example

Here's a complete workflow for creating a VM with drives, devices, and network interfaces:

# Step 1: Create the VM
curl -X POST "https://your-vergeos.example.com/api/v4/vms" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "production-server",
    "description": "Production application server",
    "cluster": "1",
    "ram": 16384,
    "cpu_cores": 8,
    "guest_agent": "true",
    "video": "virtio",
    "uefi": "true"
  }'

# Response: VM key = 42, Machine key = 54

# Step 2: Add boot drive
curl -X POST "https://your-vergeos.example.com/api/v4/machine_drives" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "machine": "54",
    "name": "Boot Drive",
    "media": "disk",
    "interface": "virtio-scsi",
    "disksize": 107374182400,
    "preferred_tier": "1"
  }'

# Step 3: Add network interface
curl -X POST "https://your-vergeos.example.com/api/v4/machine_nics" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "machine": "54",
    "vnet": "3"
  }'

# Step 4: Add GPU (optional)
curl -X POST "https://your-vergeos.example.com/api/v4/machine_devices" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "machine": "54",
    "resource_group": "1f67f07e-f653-db95-c475-01b8a2ea0ff1",
    "settings_args": {}
  }'
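The same four steps can be scripted with Python's standard library. The endpoints and payloads below are taken directly from the curl example; the helper names (`make_poster`, `create_vm_with_hardware`) and the stub transport are our own illustration, not a VergeOS SDK:

```python
import json
import urllib.request

def make_poster(base_url: str, api_key: str):
    """Return a post(path, payload) callable that talks to the VergeOS API."""
    def post(path: str, payload: dict) -> dict:
        req = urllib.request.Request(
            f"{base_url}/{path}",
            data=json.dumps(payload).encode(),
            headers={"Authorization": f"Bearer {api_key}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    return post

def create_vm_with_hardware(post) -> str:
    """Steps 1-3 of the workflow above; returns the machine key."""
    vm = post("vms", {"name": "production-server", "cluster": "1",
                      "ram": 16384, "cpu_cores": 8, "guest_agent": "true",
                      "video": "virtio", "uefi": "true"})
    machine = vm["response"]["machine"]  # hardware calls use the machine key
    post("machine_drives", {"machine": machine, "name": "Boot Drive",
                            "media": "disk", "interface": "virtio-scsi",
                            "disksize": 107374182400, "preferred_tier": "1"})
    post("machine_nics", {"machine": machine, "vnet": "3"})
    return machine

# Dry run with a stub transport (no network) to show the call order:
calls = []
def stub(path, payload):
    calls.append(path)
    return {"response": {"machine": "54"}} if path == "vms" else {}

print(create_vm_with_hardware(stub))  # 54
print(calls)  # ['vms', 'machine_drives', 'machine_nics']
```

Against a live system you would build the transport with make_poster("https://your-vergeos.example.com/api/v4", "YOUR_API_KEY") and pass it in place of the stub.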

Need Help?

For additional support with VM creation:

  • Check the VergeOS documentation portal
  • Contact VergeOS support with specific error messages
  • Review system logs for detailed error information
  • Consult the VergeOS community forums

Understanding VergeOS VM Memory Management

Overview

VergeOS takes a different approach to virtual machine memory management compared to platforms like VMware and Nutanix. Understanding how VergeOS handles memory allocation and reporting is essential for effective capacity planning, performance optimization, and troubleshooting. This article explains why VergeOS memory usage reporting differs from guest operating system reports and the advantages of this design choice.

What You'll Learn

  • Why VergeOS shows allocated memory rather than active memory usage
  • How VergeOS memory management differs from memory ballooning platforms
  • The performance and reliability benefits of VergeOS's approach
  • Best practices for monitoring memory across host and guest levels
  • How this design improves capacity planning and workload migration reliability

Key Concepts

Memory Allocation vs. Memory Usage

Memory Allocation: The amount of physical RAM reserved by the hypervisor for a virtual machine, regardless of how much the guest OS and applications are actively using.

Memory Usage: The amount of memory actually consumed by applications and the operating system within the virtual machine.

In VergeOS, when you assign 8GB of RAM to a VM, the hypervisor immediately reserves 8GB of physical memory on the host, even if the guest OS shows only 2GB in use.

Why VergeOS Shows Allocated Memory

When memory is allocated to a VM, the hypervisor must reserve that full amount in physical RAM regardless of what applications inside the VM are actually using. This is because the guest operating system could request access to any portion of its allocated memory at any time, and the hypervisor must guarantee that memory is available.

VergeOS displays this reserved/allocated memory because it represents the actual physical resources consumed on the host, providing a true picture of resource utilization for capacity planning and performance management.

How VergeOS Differs from Other Platforms

VergeOS Approach: No Memory Ballooning

VergeOS intentionally avoids memory ballooning techniques used by other virtualization platforms. Key characteristics include:

  • Allocated vs. Used: VergeOS shows what's allocated to each VM, which typically isn't the same as guest-level usage
  • Performance First: This eliminates ballooning overhead and complexity
  • Predictable Resource Allocation: What you see is exactly what's reserved on the physical host

Traditional Ballooning Approach

Many virtualization platforms use memory ballooning drivers that:

  • Dynamically report memory usage to the hypervisor
  • Allow memory "overcommitment" by sharing unused memory between VMs
  • Require special drivers (balloon drivers) within each guest OS
  • Create complexity in memory management and potential performance impacts

Benefits of VergeOS's Memory Management Design

1. Predictable Performance

By eliminating balloon driver overhead, VergeOS provides more predictable VM performance. There's no dynamic memory management that could impact application response times or cause unexpected memory pressure.

2. Simplified Capacity Planning

With clear allocation visibility, administrators can easily calculate:

  • Total memory committed across all VMs
  • Available memory capacity for new workloads
  • Resource utilization without complex ballooning calculations
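Since the dashboard figure is the reserved allocation, the capacity math stays simple. A sketch with made-up numbers (and ignoring hypervisor overhead):

```python
def free_capacity_mb(node_ram_mb: int, vm_allocations_mb: list) -> int:
    """Physical RAM left for new workloads after all VM reservations."""
    return node_ram_mb - sum(vm_allocations_mb)

# A node with 256 GB of RAM hosting three VMs (allocations in MB)
node_ram = 256 * 1024
vms = [8192, 16384, 32768]
print(free_capacity_mb(node_ram, vms))  # 204800 MB available for new VMs
```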

3. Enhanced Reliability

VergeOS avoids dynamic memory management issues that can occur with ballooning, such as:

  • Memory reclamation delays
  • Guest OS memory pressure during balloon inflation
  • Potential application instability during memory operations

4. Guaranteed Migration Success

Critical for High Availability: VergeOS's approach ensures predictable workload migration during node failures. Since the full allocated memory is always reserved, the system can guarantee that all VMs can be migrated to available nodes without memory overcommitment surprises.

If VergeOS used memory ballooning, it could not ensure reliable migration of all workloads to another node during a failure scenario, as the actual memory requirements might exceed the target node's capacity when balloons are deflated.

Migration Reliability

Memory ballooning can create unpredictable migration scenarios. When VMs that appeared to use less memory suddenly require their full allocation during migration, target nodes may lack sufficient resources, potentially causing migration failures during critical moments.

Memory Monitoring Best Practices

Host-Level Monitoring (VergeOS Dashboard)

Use VergeOS dashboards to monitor:

  • Total allocated memory across all VMs on each node
  • Available physical memory for new VM deployments
  • Memory utilization trends for capacity planning
  • Node memory status during maintenance and migration operations

Guest-Level Monitoring

Within each VM, use appropriate tools to monitor:

  • Application memory consumption for performance tuning
  • Operating system memory usage for optimization
  • Memory leaks or excessive usage by specific processes
  • Guest-level performance metrics for troubleshooting

Combined Monitoring Strategy

For comprehensive memory management:

  1. Capacity Planning: Use VergeOS allocation data to plan hardware expansion
  2. Performance Optimization: Use guest-level data to tune applications
  3. Troubleshooting: Compare host allocation with guest usage to identify issues
  4. Resource Optimization: Right-size VMs based on actual guest usage patterns

Practical Example

Consider this scenario:

  • VM Allocated Memory: 8GB (shown in VergeOS)
  • Windows Task Manager: Shows 3GB used
  • Physical Host: Has 8GB reserved for this VM

This is normal and expected behavior. The VergeOS dashboard correctly shows that 8GB of physical memory is committed to this VM, while the guest OS shows its internal usage of that allocated memory.

Troubleshooting Memory Issues

When VergeOS Shows High Memory Usage

If VergeOS shows high memory utilization:

  1. Review VM allocations: Check if VMs are over-allocated for their actual needs
  2. Plan capacity expansion: High allocation percentages indicate need for more physical RAM
  3. Optimize VM sizing: Consider reducing allocations for underutilized VMs

When Guest OS Shows Memory Pressure

If applications report memory issues while VergeOS shows available allocation:

  1. Check guest OS configuration: Verify VM has adequate allocated memory
  2. Review application requirements: Ensure sufficient memory is allocated
  3. Monitor memory leaks: Look for applications consuming excessive memory over time

Memory Performance Issues

For memory-related performance problems:

  1. Verify adequate allocation: Ensure VMs have sufficient memory allocated
  2. Check host memory pressure: Avoid overcommitting total physical RAM
  3. Review storage impact: Memory pressure can cause increased swap activity

Best Practices for Memory Management

Right-Sizing Virtual Machines

  • Start with manufacturer-recommended memory allocations
  • Monitor guest-level usage over time to identify optimization opportunities
  • Avoid significant over-allocation that wastes physical resources
  • Leave adequate buffer for memory spikes and growth

Capacity Planning

  • Plan physical memory capacity based on total VM allocations, not guest usage
  • Account for hypervisor overhead and system memory requirements
  • Maintain 10-15% free capacity for maintenance and unexpected demand
  • Consider future growth when sizing new nodes

Performance Optimization

  • Allocate sufficient memory to avoid guest OS memory pressure
  • Use memory monitoring tools within VMs to identify optimization opportunities
  • Consider workload patterns when planning memory allocation
  • Test application performance with different memory allocations

Next Steps

To deepen your understanding of VergeOS memory management, see the additional resources below.

Additional Resources

For specific questions about memory allocation in your environment, consult the VergeOS support team at support@verge.io or review the performance monitoring sections in the product documentation.


Document Information

  • Last Updated: 2024-08-15
  • VergeOS Version: 4.12.6+

Scaling Up a vSAN

Standard Operating Procedure Required

Please review the complete vSAN Scale Up SOP for comprehensive preparation, execution, and verification procedures.

To scale up a vSAN, follow the steps below. However, before proceeding, ensure that your current vSAN has at least 30% free capacity.

Important

  • All drives in a tier must be alike. If a drive of an incorrect size is added to an existing tier, the tier will only be able to use the space of the smallest drive.
  • Ensure that your vSAN has at least 30% free capacity unless you are doubling the capacity. If the free space is less than 30% and you are not doubling the drive count, consider scaling out by adding a node or opening up a support ticket for assistance.

Related Documentation

Required Reading: The vSAN Scale Up Standard Operating Procedure contains essential preparation, verification, and troubleshooting steps that must be completed before and after this scale up process.

Steps to Scale Up

  1. Physically add the drives or Fibre Channel LUNs on the node you want to scale up.

  2. Log in to the host system's UI and select the appropriate cluster you want to scale up from the top compute cluster section on the home page.

  3. Select the node that you are scaling up.

  4. Refresh the system to recognize the new drives:
       • Select Refresh from the left menu, and choose Drives & NICs from the dropdown.
       • Confirm by selecting Yes.

  5. Select the Scale Up option on the left menu.

  6. The page will now show the newly inserted drives in an offline state. Select the drive(s), then under Node Drives, select the Scale Up function.

  7. Select the appropriate tier for the drive(s) and submit.

Upon completion, the screen will refresh and the drives will disappear from the view. Go back to the main page, where you will see the vSAN tiers change color to yellow, indicating a repair state. This is expected, and the vSAN will return to a green/healthy state after a few minutes, showing the newly added tier or increased space on an existing tier.

Repeat these steps for each node as necessary.


Document Information

  • Last Updated: 2025-07-27
  • VergeOS Version: 4.13

Tenant Node Planning Guide

Overview

This guide outlines key considerations for determining an optimal number of tenant nodes, compute resource allocation, and placement strategies for VergeOS tenant deployments. Effective tenant node design supports optimal performance and resource utilization while maintaining the isolation and security benefits of tenant environments.

Prerequisites

Before using this guide, you should have a foundational understanding of tenant concepts; refer to the Tenants Overview if you are new to VergeOS tenants.

Purpose and Scope

This guide helps administrators make informed decisions about:

  • An appropriate number of tenant nodes based on tenant requirements
  • Resource allocation strategies across tenant nodes
  • Physical placement considerations for tenant nodes

What Are Tenant Nodes?

Tenant Nodes Simulate Physical Hosts

Tenant nodes are virtual servers that simulate physical VergeOS nodes, closely replicating their functionality, to create a private tenant environment. Each tenant consists of one or more tenant nodes that collectively provide compute, storage, and networking infrastructure for the tenant's workloads while maintaining separation and privacy using the tenant's encapsulated network.

Tenant Node Characteristics

Secure Inter-Host Communication

  • A tenant can be securely scaled across multiple physical hosts
  • The tenant's protected encapsulated network allows its nodes to communicate with each other securely

Mobility

  • Designed for portability across physical infrastructure
  • Migration between physical hosts for maintenance or load balancing
  • Automatic failover to other physical nodes during hardware failures
  • Live migration capabilities with no service interruption

Matched Resource Allocation

  • Tenant nodes can be deployed across clusters or hosts with different hardware configurations (including specialized equipment like vGPUs) to match varied workload requirements within the tenant.

Horizontal and Vertical Scaling

  • A tenant node's resources can be increased or decreased without a restart
  • Tenant nodes can be added to scale compute resources across multiple physical hosts
  • Existing tenant architecture is seamlessly expanded with new tenant nodes

Single-Node Tenants

Key Points

  • Single-node tenants provide redundancy through automatic failover
  • A single tenant node is preferred when it can satisfy resource requirements
  • Additional tenant nodes can be added, non-disruptively, as needed to scale a tenant's resources

A tenant can run on a single tenant node while still providing redundancy because the system employs a "watchdog" mechanism that automatically restarts a tenant node on a new physical host if its physical server fails or the virtual tenant node becomes non-responsive for a period. For maintenance operations, a temporary tenant node is automatically created to seamlessly live migrate tenant workloads.

If RAM and core requirements are met with a single tenant node and there are no network or device needs that require tenant nodes to be on multiple hosts, a single node is often preferable for simplicity.

Scaling Flexibility

VergeOS tenants provide non-disruptive resource scaling; you can add resources to your tenant without interfering with running workloads. Tenant nodes should typically be planned and deployed based on current or near-term workload needs, with resources increased as needed. This approach avoids wasting allocated, unused resources.

Non-disruptive scaling options:

  • Add resources to existing tenant nodes
  • Add additional tenant nodes to distribute load
  • Migrate tenant nodes to different physical hosts
  • Scale storage independently of compute resources

For detailed procedures on increasing tenant resources, refer to the Increase Tenant Resources documentation.

Multiple-Node Tenants

For larger tenant deployments or those requiring varied hardware specifications, more than one tenant node may be necessary. The following sections outline conditions that necessitate multiple nodes.

1. Compute resource needs exceeding workload maximums

Multiple tenant nodes are needed when a tenant requires more compute resources than the cluster will allow within a single machine. The amount of memory and number of cores that can be allocated to a single tenant node is limited by cluster settings: Max RAM per machine and Max cores per machine.
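The minimum node count implied by these limits is a ceiling division over both RAM and cores. This sketch (our own helper, not a VergeOS API) reproduces the kind of reasoning used in the examples later in this guide:

```python
import math

def min_tenant_nodes(required_ram_gb: int, required_cores: int,
                     max_ram_gb: int, max_cores: int) -> int:
    """Fewest tenant nodes that satisfy total RAM and core requirements
    under the cluster's Max RAM / Max cores per machine settings."""
    return max(math.ceil(required_ram_gb / max_ram_gb),
               math.ceil(required_cores / max_cores))

# 128 GB of workloads against a 64 GB-per-machine limit -> 2 nodes
print(min_tenant_nodes(128, 16, 64, 16))  # 2
# 16 GB / 8 cores fits comfortably in one node
print(min_tenant_nodes(16, 8, 64, 16))    # 1
```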

2. Tenant workload requirements

Some application requirements necessitate multiple tenant nodes to allow distributing workloads across physical servers:

  • Clustered Applications: Tenants employing clustered applications (e.g., web farms, Hadoop, database clusters) commonly have requirements to run multiple instances on different physical hosts for high availability, load balancing, or parallel processing.

  • Mixed Hardware Capabilities: To provide a tenant with varying performance profiles or specialized pass-through hardware (vGPU, PCI, USB devices), it may be necessary to deploy multiple tenant nodes running on different physical VergeOS nodes or clusters.

  • Regulatory Requirements: Some tenants may have compliance requirements for hardware separation among workloads.

Determining Tenant Resource Requirements

CPU Requirements

  • Assess the total CPU cores needed for all planned workloads
  • Consider peak usage patterns and performance requirements
  • Account for different workload types (CPU-intensive vs. I/O-bound) where it might make sense to deploy to different nodes or clusters with specialized hardware capabilities.

Memory Requirements

  • Calculate total RAM needed across all planned virtual machines
  • Include memory for planned tenant infrastructure services, such as NAS, AI, etc.
  • The system handles memory overhead through built-in processes.

No Manual Overhead Calculation Required

The VergeOS system automatically accounts for hypervisor and storage overhead when allocating resources to tenant nodes. The memory you assign is fully available to the tenant for distributing among its own workloads.

Right-Sizing Strategy

It's generally recommended to right-size tenant compute resources to match actual workload demands rather than allocating surplus capacity for future growth. This approach optimizes resource utilization and allows for organic scaling as requirements evolve.

Example Configurations

The following examples illustrate different tenant node configurations to demonstrate key planning concepts and requirements.

Example 1 - Small, Single-node Tenant

Scenario:

  • A tenant with only 3 VMs, no special requirements
  • Host cluster settings allow for Max RAM per machine: 64GB RAM and Max cores per machine: 16
  • Host environment includes multiple physical nodes, each containing the same passthrough devices available for tenant workloads

Requirements:

  • Total of 16GB RAM and 8 cores for current tenant workloads
  • Some tenant workloads have a requirement for vGPU devices

Configuration:

  • Tenant Nodes: 1
  • Resources: 16 GB RAM, 8 cores

Rationale: A single node provides sufficient resources for the workload while maintaining simplicity. Automatic failover of tenant nodes ensures redundancy without additional complexity.

Scaling Path:
Add more RAM/cores to the tenant node as resource needs grow (an additional 48GB RAM and 8 cores can be added to this initial node), or add a second tenant node if resource needs exceed 64 GB/16 cores.

Example 2 - Mid-Sized Tenant Running High-Availability Web Applications

Scenario:

  • A mid-sized tenant running customer-facing web applications with a multi-instance, load-balanced/High-availability setup
  • Host cluster settings allow for Max RAM per machine: 128GB RAM and Max cores per machine: 16

Requirements:

  • 4 web servers (2 per host for HA)
  • 2 database servers (primary/replica on separate hosts)

Configuration:

  • Tenant Nodes: 2
  • Node 1: 64 GB RAM, 12 cores (hosts 2 web servers + database primary)
  • Node 2: 64 GB RAM, 12 cores (hosts 2 web servers + database replica)
  • Tenant VMs use HA Group settings to maintain node anti-affinity, helping distribute workloads across separate hosts

This KB article provides information about using HA Groups for node anti-affinity

Rationale: Although host cluster settings allow for enough resources within one tenant node, multiple nodes ensure that the tenant's web servers and database components run on different physical hosts to meet the tenant application HA requirements.

Scaling Path: Add more RAM/cores to the existing nodes as compute needs grow, or add additional tenant node(s) if requirements begin to exceed 256GB/32 cores or further physical workload separation becomes necessary.

Example 3 - Mixed Workload Tenant with Specialized Hardware

Scenario:

  • Tenant customer needs standard compute resources, high-performance for video rendering, and GPU acceleration for video processing workloads
  • Host environment has multiple clusters with varying hardware configurations and performance profiles
  • Applicable Host clusters:
    • Standard (mid-range processors, standard processor/core ratio); Max RAM per machine: 64GB RAM and Max cores per machine: 16
    • vGPU Cluster (high-end processors, vGPU devices); Max RAM per machine: 64GB RAM and Max cores per machine: 16
    • High-Performance Cluster (high-end processors, memory dense); Max RAM per machine: 128GB RAM and Max cores per machine: 16

Requirements:

  • 128GB/16 cores for standard VMs (file servers and management tools)
  • 64GB/16 cores for GPU-accelerated VMs for video rendering
  • 48GB/12 cores for high-performance host for editing workstations

Configuration:

  • Tenant Nodes: 4
  • Node 1: 64 GB RAM, 8 cores (Standard Cluster for file servers)
  • Node 2: 64 GB RAM, 8 cores (Standard Cluster for file servers/management tools)
  • Node 3: 64 GB RAM, 16 cores (vGPU Cluster for video processing rendering workstations)
  • Node 4: 48 GB RAM, 12 cores (High-Performance Cluster for editing workstations)

Rationale: Multiple nodes allow placement on different clusters with different hardware capabilities: standard, GPU-equipped, and high performance. Two nodes are needed in the Standard cluster to provide the needed 128GB RAM for file servers and management tools.

Hardware Matching: Each tenant node is placed on physical infrastructure that matches its workload requirements (tenant node Cluster setting), optimizing both performance and cost.

Scaling Path: For each node/cluster: add more RAM/cores to the existing node if the cluster max settings allow, or add additional tenant node(s).

Example 4 - Tenant with Clustered-Application Requirements

Scenario:

  • Enterprise tenant customer running a distributed analytics platform requiring multi-host deployment for application load balancing and redundancy functions.
  • Host cluster settings allow for Max RAM per machine: 96GB RAM and Max cores per machine: 16

Requirements:

  • Support for 3 application servers across 3 physical hosts (48GB RAM/8 cores each)
  • Support for 3 database servers (16GB RAM/4 cores each)
  • Support for 2 data processing servers (16GB RAM/8 cores each)

Configuration:

  • Tenant Nodes: 4
  • Node 1: 64 GB RAM, 12 cores (1 application server + 1 database server)
  • Node 2: 64 GB RAM, 12 cores (1 application server + 1 database server)
  • Node 3: 64 GB RAM, 12 cores (1 application server + 1 database server)
  • Node 4: 32 GB RAM, 8 cores (2 data processing servers)
  • Tenant VMs use HA Group settings to maintain node anti-affinity, helping distribute workloads across separate hosts

This KB article provides information about using HA Groups for node anti-affinity

Rationale: Four tenant nodes ensure application instances are run across multiple physical hosts while maintaining the ability to run all services.

Scaling Path: Add more RAM/cores to the existing tenant nodes as cluster max settings allow; configure additional nodes when resource needs cannot be met with the original four.

VM Advanced Options

Overview

The VM Advanced Options field allows power users to fine-tune virtual machine parameters beyond what's available in the standard UI. These options provide granular control over CPU features, hardware emulation, and device behavior.

Use with Caution

Advanced options can significantly impact VM performance and stability. Only modify these settings if you understand their implications. Incorrect values may prevent your VM from starting.

Format

Advanced options use a simple key-value format, with one option per line:

option1=value1
option2=value2
option3=value3
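The format is simple enough to validate before pasting into the UI. A sketch of a parser (our own illustration, not VergeOS code):

```python
def parse_advanced_options(text: str) -> dict:
    """Parse one option=value pair per line, skipping blank lines."""
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key, sep, value = line.partition("=")
        if not sep:
            raise ValueError(f"missing '=' in line: {line!r}")
        options[key] = value
    return options

sample = "cpu.threads=2\nmem-prealloc=1"
print(parse_advanced_options(sample))  # {'cpu.threads': '2', 'mem-prealloc': '1'}
```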

Available Options

CPU and Memory

CPU Threads

cpu.threads=2
Sets the number of CPU threads per core. Default is 1.

Use case: Enabling SMT (Simultaneous Multi-Threading) for applications that benefit from hyperthreading.

Memory Pre-allocation

mem-prealloc=1
Pre-allocates all VM memory at startup instead of allocating on demand.

Use case: Reduces memory allocation latency for performance-critical workloads. Useful for real-time applications or when using hugepages.

UUID Configuration

You can customize the VM's UUID (Universally Unique Identifier) to match specific requirements, such as software licensing or migration scenarios.

System UUID

smbios.type1.uuid=550e8400-e29b-41d4-a716-446655440000

Sets the system UUID presented to the guest OS. This is the primary UUID that most software checks.

Use cases: - Migrating VMs from other platforms while preserving licensing - Software that validates against a specific UUID - Cloning VMs that need unique identifiers

UUID Format

UUIDs must be in standard format: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (8-4-4-4-12 hexadecimal characters). Invalid formats will prevent the VM from starting.

Generating UUIDs

  • On Linux: uuidgen
  • On Windows PowerShell: [guid]::NewGuid()
  • Online: Use any UUID generator tool
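In Python, the standard library can both generate a UUID and check the 8-4-4-4-12 shape before an invalid value prevents the VM from starting (the validator below is our own sketch):

```python
import re
import uuid

# 8-4-4-4-12 hexadecimal groups, as required by smbios.type1.uuid
UUID_RE = re.compile(r"^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$")

def valid_smbios_uuid(value: str) -> bool:
    """True if `value` matches the standard UUID format."""
    return bool(UUID_RE.match(value))

print(valid_smbios_uuid("550e8400-e29b-41d4-a716-446655440000"))  # True
print(valid_smbios_uuid("not-a-uuid"))                            # False
print(valid_smbios_uuid(str(uuid.uuid4())))                       # True
```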

SMBIOS Customization

SMBIOS (System Management BIOS) options allow you to customize the hardware information presented to the guest OS.

Type 0 - BIOS Information

smbios.type0.vendor=American Megatrends Inc.
smbios.type0.version=2.0
smbios.type0.date=01/01/2023

Type 1 - System Information

smbios.type1.product=Custom Server
smbios.type1.version=1.0
smbios.type1.sku=SKU123
smbios.type1.family=Server Family

Type 2 - Baseboard Information

smbios.type2.manufacturer=Custom Manufacturer
smbios.type2.product=Custom Board
smbios.type2.version=1.0

Type 3 - Enclosure Information

smbios.type3.manufacturer=Custom Chassis
smbios.type3.version=1.0
smbios.type3.sku=CHASSIS123

Type 4 - Processor Information

smbios.type4.version=Intel(R) Xeon(R) CPU E5-2680 v4
smbios.type4.manufacturer=Intel

Use cases for SMBIOS:

  • Software licensing that checks hardware signatures
  • Applications expecting specific hardware configurations
  • Testing scenarios requiring specific system identification

Network Interface Tuning

For each NIC, you can tune queue parameters using the NIC's asset ID:

nic1.txqueuelen=2000
nic1.numtxqueues=4
nic1.numrxqueues=4

Use cases:

  • High-throughput network applications
  • Reducing network latency
  • Optimizing for specific network workloads

Machine-Specific Parameters

Customize QEMU machine parameters:

machine.cap-cfpc=broken
machine.cap-sbbc=broken
machine.cap-ibs=broken

Use cases:

  • Working around CPU security mitigation issues
  • Compatibility with specific guest operating systems
  • Performance optimization for trusted environments

RTC (Real-Time Clock) Options

rtc.drift-fix=slew

Use cases:

  • Fixing time drift issues in VMs
  • Synchronization requirements for time-sensitive applications

Device-Specific Options

You can set parameters for any device using its asset ID:

device1.guest-reset=true
device1.guest-resets-all=false

For drives:

drive1.cache=writeback
drive1.detect-zeroes=on

Common Use Cases

High-Performance Computing

cpu.threads=2
mem-prealloc=1
nic1.numtxqueues=8
nic1.numrxqueues=8

Windows Licensing Compliance

smbios.type1.manufacturer=Dell Inc.
smbios.type1.product=PowerEdge R740
smbios.type1.serial=ABC123

Preserving UUID After Migration

smbios.type1.uuid=550e8400-e29b-41d4-a716-446655440000
Use when migrating a VM from another platform and the guest OS has software tied to the original UUID.

Network Optimization

nic1.txqueuelen=5000
nic1.numtxqueues=4
nic1.numrxqueues=4

Best Practices

Testing Recommendations

  1. Test advanced options in a non-production environment first
  2. Document any advanced options you use for future reference
  3. Only add options that solve specific problems or requirements
  4. Monitor VM performance after applying advanced options

Troubleshooting

If your VM fails to start after adding advanced options:

  1. Remove all advanced options and try starting the VM
  2. Add options back one at a time to identify the problematic setting
  3. Check the VM logs for specific error messages
  4. Verify the syntax - ensure each option is on its own line with no extra spaces

Version Compatibility

Some advanced options may not be available on all VergeOS versions. Options are processed dynamically, so unsupported options are typically ignored rather than causing errors.

Document Information

  • Last Updated: 2026-01-24
  • VergeOS Version: 26.1

How to Add a TPM Device to a Virtual Machine

Overview

This article provides step-by-step instructions for adding a Trusted Platform Module (TPM) device to a virtual machine in VergeOS. TPM devices provide hardware-based security features including secure boot, encryption key management, and attestation capabilities.

Key Points

  • TPM devices enable hardware-based security features
  • Requires UEFI boot mode for full functionality
  • VM restart required after adding TPM device
  • Guest OS may require additional configuration

Prerequisites

Before adding a TPM device to your VM, ensure the following requirements are met:

  • UEFI Boot Mode: The VM should be configured to use UEFI boot for optimal TPM functionality
  • Supported Guest OS: Ensure your guest operating system supports TPM devices
  • VM Power State: The VM should be powered off before adding the TPM device
  • Proper Permissions: You must have modify permissions for the virtual machine

UEFI Boot Requirement

For Windows VMs requiring TPM (such as Windows 11), UEFI boot mode is mandatory. Legacy BIOS mode will not support TPM functionality.

Steps to Add TPM Device

1. Access VM Configuration

  1. Navigate to the VM Dashboard
    • Go to Virtual Machines > List
    • Double-click your target VM from the list

  2. Power off the VM if it's currently running
    • Click Power Off from the left menu if the VM is running
    • Wait for the VM to completely shut down

2. Enable UEFI Boot (if not already enabled)

  1. From the VM dashboard, click Edit in the left menu
  2. Locate the UEFI Boot option and enable it
  3. Click Submit to save the changes

UEFI Conversion

If converting an existing VM from Legacy BIOS to UEFI, create a snapshot before making changes to enable easy rollback if needed.

3. Add TPM Device

  1. From the VM dashboard, click Devices in the left menu
  2. Click New from the left menu
  3. Configure the TPM device settings:
    • Name: Enter a descriptive name (e.g., "TPM-2.0") or leave blank for auto-generation
    • Type: Select TPM from the dropdown
    • Description (optional): Add administrative notes about the device
    • Version: Select the TPM version (typically TPM 2.0 for modern requirements)

  4. Click Submit to create the TPM device

4. Start the Virtual Machine

  1. From the VM dashboard, click Power On in the left menu
  2. Wait for the VM to boot completely
  3. Access the VM console to verify TPM functionality

Guest OS Configuration

Windows Configuration

For Windows guests (especially Windows 11):

  1. Verify TPM Detection
    • Open Device Manager
    • Look for the "Security devices" section
    • Confirm the TPM device is listed and functioning

  2. Enable TPM in Windows
    • Run tpm.msc from the Run dialog
    • Verify TPM status shows as "Ready for use"
    • Initialize the TPM if prompted

  3. Configure BitLocker (if needed)
    • Go to Control Panel > System and Security > BitLocker Drive Encryption
    • Follow the prompts to enable BitLocker with TPM

Linux Configuration

For Linux guests:

  1. Check TPM Detection

    ls /dev/tpm*
    

  2. Install TPM Tools (if needed)

    # Ubuntu/Debian
    sudo apt-get install tpm2-tools
    
    # RHEL/CentOS
    sudo yum install tpm2-tools
    

  3. Verify TPM Functionality

    tpm2_getcap properties-fixed
    

Troubleshooting

Common Issues

  1. TPM Not Detected in Guest OS
    • Solution: Verify UEFI boot is enabled and the VM has been restarted
    • Check guest OS TPM driver support

  2. Windows 11 Installation Requirements
    • Solution: Ensure both UEFI boot and TPM 2.0 are enabled before installation
    • Use Windows 11 compatible installation media

  3. TPM Initialization Errors - Solution:

    1. Power off the VM completely
    2. Remove and re-add the TPM device
    3. Restart the VM and retry initialization

  4. BitLocker Configuration Issues
    • Solution: Ensure TPM is properly initialized before configuring BitLocker
    • Check the Windows TPM management console (tpm.msc) for status

Performance Considerations

  • TPM devices have minimal performance impact on VM operations
  • No additional CPU or memory resources required
  • TPM operations are handled efficiently by the VergeOS hypervisor

Best Practices

  1. Backup Before Changes
    • Create a VM snapshot before adding TPM devices
    • Test TPM functionality in a development environment first

  2. Security Configuration
    • Enable Secure Boot alongside TPM for enhanced security
    • Configure appropriate TPM policies based on security requirements

  3. Documentation
    • Document TPM configuration for compliance and audit purposes
    • Maintain records of TPM-enabled VMs for security tracking

  4. Updates and Maintenance
    • Keep guest OS TPM drivers updated
    • Monitor TPM device status regularly
    • Include TPM configuration in VM documentation

Supported Features

With TPM enabled, your VM can support:

  • Secure Boot: Verify boot integrity and prevent unauthorized boot modifications
  • BitLocker Drive Encryption: Hardware-based encryption key management
  • Windows Hello: Biometric authentication (with additional hardware)
  • Device Attestation: Verify device integrity and compliance
  • Certificate Storage: Secure storage for digital certificates


Document Information

  • Last Updated: 2025-07-02
  • VergeOS Version: 4.12.6+
  • Applies to: All VergeOS environments with TPM support

Proper VergeOS System Shutdown Procedure

Overview

This document provides the step-by-step procedure for properly shutting down a VergeOS system. Following the correct shutdown sequence is critical to ensure data integrity, prevent corruption, and maintain the health of your VergeOS environment.

Prerequisites

  • Administrative access to the VergeOS UI
  • Understanding of your cluster topology
  • Identification of all running workloads and tenants
  • Knowledge of controller node locations (Node1 & Node2)

Shutdown Sequence

Step 1: Inventory Running Workloads

Before beginning the shutdown process, you must identify all active workloads across your environment.

  1. Navigate to each Node Dashboard in your cluster
  2. Review the Running Machines section on each node
  3. Document all running workloads including:
    • Virtual machines (VMs)
    • Tenant nodes
    • VMware backup services
    • NAS services
    • Any other active services

Step 2: Shutdown Tenant Workloads

If tenants are running on any nodes in your cluster:

  1. Log into each tenant environment that has active workloads
  2. Gracefully shut down all running workloads within each tenant
  3. Verify shutdown completion before proceeding to the next step

Important

Ensure all tenant workloads are properly shut down before proceeding. Failing to do so may result in data loss or corruption.

Step 3: Power Off Host-Level Workloads

After all tenant workloads are shut down, power off all remaining workloads on each node:

  1. Virtual Machines (VMs): Use the graceful shutdown option when possible
  2. Tenant Nodes: Ensure these are powered off after their internal workloads
  3. VMware Backup Services: Stop any active backup operations
  4. NAS Services: Safely stop all NAS-related services
  5. Other Services: Power off any remaining active services

vNet Containers

vNet containers do not need to be manually stopped. They will be gracefully stopped automatically during the cluster shutdown process.

Step 4: Shutdown Individual Nodes

Once all workloads are stopped:

  1. Navigate to the Cluster Dashboard for the cluster you wish to power off
  2. Select Power Off from the left-hand menu
  3. The system will begin shutting down each node in the cluster
  4. Monitor the shutdown progress through the cluster dashboard

Step 5: Shutdown the Entire Cluster

After individual nodes have been shut down:

  1. Navigate to System → Clusters
  2. Select the cluster you want to shut down
  3. Select Power Off from the left menu
  4. Confirm the shutdown when prompted

Multi-Cluster Environment Considerations

Critical Warning

If your environment contains multiple clusters, you must ALWAYS shut down the cluster containing the controller nodes (Node1 & Node2) LAST.

Shutdown Order for Multi-Cluster Environments:

  1. Shut down all non-controller clusters first
  2. Shut down the controller cluster (containing Node1 & Node2) last

This ensures that cluster coordination and management services remain available until all other clusters are safely shut down.

Alternative Method: API Shutdown

For advanced users or automation purposes, you can use the VergeOS API to shutdown clusters:

POST /v4/cluster_actions
{
    "cluster": [cluster_id],
    "action": "shutdown",
    "params": "{}"
}
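
The request above can be sketched in Python. The endpoint and body follow the example; the base URL, the integer cluster ID, and the omitted authentication are placeholder assumptions for illustration:

```python
# Sketch: build the cluster shutdown request shown above.
# Sending it would use an authenticated HTTP POST (e.g. urllib.request
# or curl); authentication details are environment-specific and omitted.
import json

def build_shutdown_request(base_url: str, cluster_id: int):
    url = f"{base_url}/v4/cluster_actions"
    body = json.dumps({
        "cluster": cluster_id,   # placeholder: your cluster's numeric ID
        "action": "shutdown",
        "params": "{}",
    })
    return url, body

url, body = build_shutdown_request("https://vergeos.example.com/api", 1)
print(url)   # https://vergeos.example.com/api/v4/cluster_actions
print(body)
```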

Proper Power-On Sequence

When powering your VergeOS system back on, follow this sequence:

Single Cluster Environment:

  1. Power on Node1 first
  2. Wait for Node1 to come online completely
  3. Power on Node2
  4. Power on remaining nodes one at a time, waiting approximately 1 minute between each
  5. Verify system status on the main dashboard (should show Green and Online)

Multi-Cluster Environment:

  1. Power on the controller cluster first (Node1, then Node2, then remaining controller nodes)
  2. Wait for controller cluster to be fully online
  3. Power on other clusters following the single-cluster sequence for each

Verification and Monitoring

After completing the shutdown or startup process:

  1. Check the main dashboard for system status indicators
  2. Verify all nodes show appropriate status (Offline for shutdown, Online for startup)
  3. Monitor system logs for any errors or warnings
  4. Test critical services after startup to ensure proper operation

Troubleshooting

Common Issues During Shutdown:

Workloads Won't Shut Down Gracefully:

  • Check guest OS ACPI settings
  • Use "Hard Reset" or "Kill Power" options as a last resort
  • Review VM power management settings

Nodes Won't Enter Shutdown:

  • Verify all workloads are stopped
  • Check for stuck or non-responsive VMs
  • Review node logs for error messages

Cluster Shutdown Fails:

  • Ensure individual nodes are properly shut down first
  • Check cluster status and connectivity
  • Verify no active migrations or maintenance operations

Getting Help:

If you encounter issues during the shutdown process:

  1. Document the error messages and current system state
  2. Check the VergeOS logs for detailed error information
  3. Contact VergeOS support with specific details about the issue

Best Practices

  • Plan shutdown windows during low-usage periods
  • Notify users before beginning shutdown procedures
  • Document your specific environment including cluster topology and critical workloads
  • Test the shutdown process in non-production environments first
  • Maintain current backups before performing system shutdowns
  • Use maintenance mode for individual nodes when possible instead of full shutdowns

Summary Checklist

  • Inventory all running workloads across all nodes
  • Shut down tenant workloads gracefully
  • Power off all host-level workloads (VMs, services, etc.)
  • Navigate to Cluster Dashboard and select Power Off
  • Navigate to System → Clusters and power off the entire cluster
  • For multi-cluster: Shut down controller cluster (Node1 & Node2) LAST
  • Verify shutdown completion through dashboard monitoring
  • Document any issues encountered for future reference

Following this procedure ensures a safe and controlled shutdown of your VergeOS environment while maintaining data integrity and system health.


Document Information

  • Document Type: Knowledge Base Article
  • Category: System Administration
  • Tags: shutdown, power-off, cluster-management, system-maintenance
  • Applies to: VergeOS 4.12.6 and later versions

vSAN Tier Status (Journal Walks)

Overview

This page is designed to help you understand VergeFS status metrics provided on the vSAN Tier Dashboard. These metrics provide insight related to Journal Walks, the processes that continually monitor and support vSAN data integrity.

Monitoring vSAN tier status information covered on this page is typically unnecessary during normal operation (general vSAN health and activity can be monitored on the Main Dashboard). The following details are intended for troubleshooting or for users interested in viewing Journal Walk activity specifics. This dashboard is most useful when investigating an issue or tracking the progress of a Journal Walk, such as during an update process.

Journal Walks

VergeFS employs a process called Journal Walks (also referred to as "Walks") to continually verify storage fidelity and safeguard against risks like hardware failures, silent bitrot, power disruptions, and misleading device write confirmations. These walks are automatically triggered, scanning each node to verify possession of its expected data blocks. If any data blocks are missing (whether from device issues, planned node reboots, or environmental disruptions), VergeFS proactively performs repairs to restore consistency.

Journal Walks operate as a background process; system operations proceed normally while a Journal Walk is in progress.

The system executes three types of Journal Walks:

  • Partial (differential) Walk - targets data changed since the last walk transaction for quicker validation
  • Full Walk - scans all data across all nodes
  • Mixed Walk - occurs when a non-controller node reboots; only that node is fully scanned, while other nodes are differentially scanned.

Accessing vSAN Tier Status Information

Navigate to: Infrastructure > vSAN Tiers > double-click the desired tier. This displays the dashboard for the selected vSAN tier. Refer to the Status tile on this page.

Status Data

  • Redundant: (checkbox) Reflects whether the vSAN tier is currently verified as redundant. If unchecked, maintenance mode will be disabled to prevent disruption. The box may appear unchecked during a full Journal Walk until redundancy is confirmed. It also remains unchecked if redundancy cannot be verified, such as when a node is offline after the Journal Walk completes.

  • Encrypted: (checkbox) Shows whether data in the vSAN tier is encrypted. Encryption status is set during installation and remains fixed; this setting cannot be modified after deployment.

  • Working: (checkbox) Indicates that a Journal Walk is actively running for this tier. If no snapshots or data changes are occurring, walks may complete too quickly to register as “working” in the UI.

  • Full Walk: (checkbox) Flags whether a full Journal Walk is in progress. Full walks are triggered by events such as controller startup or topology changes (e.g., node offline or added, drive failure, etc.).

When a node other than the active controller reboots, a Mixed Walk is triggered instead.

  • Walk Progress: Displays the current Journal Walk’s progress as a percentage, or shows “Idle” if no walk is active.

  • Last Walk Time (ms): Duration in milliseconds of the most recent Journal Walk.

  • Last Full Walk Time (ms): Duration in milliseconds of the most recent Full Journal Walk.

  • Current Transaction: A unique ID representing the latest transaction. This value increments with each Journal Walk, whether full, mixed, or differential.

  • Transaction Start Time: Timestamp indicating when the current or most recent Journal Walk began. Useful for diagnosing prolonged or stalled operations. (see Journal Walk Duration below).

  • Repairs: Displays the current count of missing data blocks detected on the tier. It’s normal to see a non-zero value after events such as node failures, maintenance operations, or updates. VergeFS Journal Walks automatically identify and work to correct these detected blocks using redundant data stored on other nodes. If redundancy fails (e.g. double node failure), the system will try to retrieve blocks from a configured repair server. Persistent repair counts (i.e. after several transaction increments) may indicate manual resolution is needed, and contacting VergeIO Support is recommended in such cases.

If missing data blocks have already been detected and a repair server isn’t yet configured, it’s not too late. Setting up a repair server now allows VergeFS to automatically attempt recovery of those blocks during subsequent Journal Walks.

  • Bad Drives: Indicates the number of drives missing since the current Journal Walk began. It’s common to see a non-zero value here after node reboots, maintenance, or updates; this doesn’t automatically signal a drive failure. Missing drives are typically related to offline nodes or detection delays at walk start. If no nodes are offline and this field shows a count, review drive and node status via the System Dashboard for further insight.
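
The guidance on persistent repair counts can be expressed as a simple check. This is hypothetical monitoring logic written for illustration (not a VergeOS feature): it flags repair counts that remain non-zero across several transaction increments, which is the pattern that warrants contacting support.

```python
# Hypothetical helper: given (transaction_id, repairs) samples read from
# the Status tile over time, flag repairs that persist across several
# Journal Walk transactions.

def repairs_persistent(samples, min_increments=3):
    """samples: list of (transaction_id, repairs) tuples, oldest first."""
    nonzero = [(txn, rep) for txn, rep in samples if rep > 0]
    if not nonzero:
        return False
    first_txn, last_txn = nonzero[0][0], nonzero[-1][0]
    # Persistent = repairs still present now, AND non-zero counts have
    # spanned at least min_increments transaction increments.
    return samples[-1][1] > 0 and (last_txn - first_txn) >= min_increments

# Repairs cleared by subsequent walks: normal after a node reboot
assert not repairs_persistent([(100, 12), (101, 4), (102, 0), (103, 0)])
# Repairs surviving several walks: may need manual resolution
assert repairs_persistent([(100, 12), (101, 12), (102, 12), (103, 12)])
```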

Journal Walk Duration

Walk timespans are variable, with several factors that can affect durations, including:

  • Use of NVME Tier 0 for metadata
  • Available memory on controller nodes
  • Quantity of data on the tier
  • Amount of data changes since the last transaction

Walk Time Considerations

  • Updates involve full and mixed walks, so the duration of these operations affects the length of required maintenance windows.
  • The time required for large deletions and data migrations between tiers depends on differential walk times.
  • Systems that follow published sizing and design recommendations should experience acceptable walk durations. For example, walks triggered during update operations generally fit within standard maintenance windows.

Walk Time Optimization

Walk times depend on the tier size and rate of data change. Adequate resources and proper network design significantly impact walk performance.

Tips to Optimize Journal Walk Times
  • Follow recommended Node Sizing Requirements (e.g. dedicated tier 0 using NVME drives, right-sizing controller memory for your environment)
  • Implement Network Design recommendations (e.g. adequate internode bandwidth of at least 10Gb, isolated, dedicated core networks)
  • Avoid overprovisioning workload RAM on compute-and-storage (HCI) nodes.
  • When possible, schedule maintenance operations that trigger Full or Mixed Walks during scheduled maintenance windows, while avoiding concurrent heavy I/O operations.

If you have questions or concerns about the timeframe of walk transactions, please contact our support team for assistance.

Routing Layer 2 Networks with VergeOS

Routing IP Traffic to L2 physical networks

This article walks through an efficient method of routing layer 3 IP traffic to layer 2 physical networks using the VergeOS networking system. There are multiple ways to achieve this; the objective of this article is to provide clear and concise guidance on one common scenario. It should be particularly useful to engineers and administrators of Edge deployments in which physical devices like phones, CCTV cameras, or network equipment must communicate with workloads in VergeOS that may be running on different IP subnets. Using this method, an operator can achieve direct local communication between those physical devices outside of VergeOS and the internal VergeOS-managed networks.

Steps Overview

  1. Create Layer2 External network
    • It is important that this network is created with no IP information. If the IP block is already assigned to this interface, it cannot be added to the default route in the next step.
  2. Create IP block on the External network that will route the traffic (Likely default External)
    • By adding the IP block here, we're telling VergeOS to expect this IP traffic to come in via this interface, and in our next step we will tell Verge networking where to send it.
    • Assign IP block to step 1 external network (or tenant)
    • Upon saving, VergeOS will automatically create route(s) to send inbound traffic with matching destination IPs to our Layer2 network which may also have VMs attached, allowing all clients to communicate across networks within Verge (with appropriate firewall rules) and to networks outside of Verge via the default route.
  3. Return to the Layer2 External to Assign IPs and other L3 options
    • Depending on your upstream configuration, you may need to set additional routes outside of Verge in upstream routers to route traffic destined for this subnet (192.168.2.0/24 in our example) via the Verge External IP.

Helpful Related Documents

Introduction to Network Blocks : Network Blocks Overview
Routing Basics : Routing Internal Networks
Network Rules : VergeOS Product Guide - Network Rules
Network Troubleshooting : VergeOS Product Guide - Network Troubleshooting

Demo Scenario

In our sample scenario below, we'll be routing the 192.168.2.0/24 address space via the VergeOS External interface to a layer2 network named "l2demo". In this scenario, any VM attached to l2demo with a static IP (in the correct address space) or with DHCP enabled would be able to reach the internet, as would any physical devices outside of Verge, via the L2 interface. Inbound traffic could be allowed and controlled via further rules. Outbound traffic could be restricted via further rules as well, paying attention to rule order.

Demo Scenario Details
  • IP Block to Route: 192.168.2.0/24
  • Verge External IP: 10.1.1.2
  • Verge External L2 Network: l2demo
  • L2 Network ID : 1010
  • Verge Upstream Gateway: 10.1.1.1
  • Verge Physical Interface: phys1
Create the L2 network
From the Networks Menu:
  1. Click New External
  2. Name: l2demo
  3. Description: Optional
  4. Layer 2 Type: vLan
  5. Layer 2 ID: 1010
  6. IP Address Type: None
  7. Interface Network: phys1
  8. Click Submit to save.
  9. Click "Power On" on the network page after saving. - Ensure the network powers up without errors in the Log box
Create IP block on External and Assign it
  1. Click "Network Blocks" on the External network page
  2. Click "New"
  3. Network Block: 192.168.2.0/24
  4. Owner Type: Network
  5. Owner: l2demo
  6. Click Submit. You will be returned to the Network Blocks page, now showing your new block and the Owner it is assigned to.
Assign IP info to L2 network
  1. Return to the l2demo network
  2. Click "Rules" - Confirm Firewall rules are awaiting application. Check to confirm there is now an Outgoing route; the new rule. We'll apply it later. - Click Cancel to exit back to the network page
  3. Click Edit to assign IP info to the network.
  4. IP Address Type: Static
  5. IP Address: 192.168.2.1
  6. Network Address 192.168.2.0/24
  7. DNS: Simple
  8. DNS Server: 10.1.1.1
  9. DHCP: enable
  10. Dynamic DHCP: enable
  11. DHCP Start Address: 192.168.2.200
  12. DHCP Stop Address: 192.168.2.254
  13. Click Submit to save

At this point you have created everything you should need, but the changes are still pending firewall application and a network restart to add Layer 2 to l2demo.

  1. Return to your Networks Dashboard and click All Networks
  2. Note the External and l2demo networks marked as "Needs FW Apply" and l2demo "Needs Restart"
  3. Restart the l2demo network by selecting it and clicking "Restart"
  4. Accept the offer to apply firewall rules on restart and click "yes" to confirm
  5. Apply FW rules on the External network by selecting it and clicking "Apply Rules", then click "yes" to confirm
  6. OUTSIDE OF VERGE (in the upstream router): Set a route on 10.1.1.1 to send 192.168.2.0/24 via 10.1.1.2, and set any other required policies for traffic on that device

Tenant Variation

When applying the above process to a Tenant there are two generally common implementations.

  • Ideally, a tenant VM that needs to access a physical layer 2 network would do so via routes, creating appropriate routes and rules to allow traffic from an internal tenant network to a layer 2 network; e.g. our previous "l2demo" network via the 192.168.2.1 gateway.
  • If Native layer 2 access is required inside tenant networks, see Provide Layer 2 Access to a Tenant to create a Virtual Switch Port connecting the tenant network to the Layer 2 External interface outside of the tenant.

Warning

If devices cannot reach the internet, double-check that:

  • The upstream route to 192.168.2.0/24 is set
  • Firewall rules are applied in the correct order
  • The VM or device IP/subnet matches the assigned block
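
The subnet and DHCP-range checks can be verified quickly with Python's standard ipaddress module, using the demo scenario's values:

```python
# Verify a device address against the routed block and DHCP range
# from the demo scenario (192.168.2.0/24, dynamic pool .200-.254).
import ipaddress

block = ipaddress.ip_network("192.168.2.0/24")
gateway = ipaddress.ip_address("192.168.2.1")
dhcp_start = ipaddress.ip_address("192.168.2.200")
dhcp_stop = ipaddress.ip_address("192.168.2.254")

device = ipaddress.ip_address("192.168.2.57")  # example static address
print(device in block)                    # True: inside the routed block
print(dhcp_start <= device <= dhcp_stop)  # False: static, outside the DHCP pool
print(gateway in block)                   # True
```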


IPsec Example - Dedicated Public IP

The following IPsec example utilizes a dedicated public IP address for a VPN tunnel. The VPN router is bridged to an existing internal network to provide Layer 2 connectivity to that network.

IPsec is a complex framework that supports a vast array of configuration combinations with many ways to achieve the same goal, making it impossible to provide one-size-fits-all instructions. Sample configurations are given for reference and should be tailored to meet the particular environment and requirements.

Consult the IPsec Product Guide Page for step-by-step general instructions on creating an IPsec tunnel.

  • VPN Network Name: vpn-ipsec
  • VPN Router address: 192.168.0.254
  • Local VPN network: 192.168.0.0/24
  • Remote VPN network: 10.10.0.0/16
  • Bridged Internal Network Name: Internal-xyz
  • External Network Name: External

Static Lease

Navigate to Internal-xyz > IP Addresses > New to reserve a static address for the VPN router on this internal network, preventing another entity from taking the same IP address. Full instructions for creating a static lease can be found here: Create a DHCP Static Lease.

VPN Static Lease

VPN Network Configuration

VPN Network Config

Phase 1

Phase 1 Configuration

Phase 2

Phase 2 Configuration

Assigned Public IP Address

The public address must be Assigned from the External network to the VPN network.

Assign Public IP

Default VPN Network Rules

Default Firewall Rules - The following necessary firewall rules are created automatically when a VPN network is created:

  • Allow IKE: Accept incoming UDP traffic on port 500 to My Router IP
  • Allow IPsec NAT-Traversal: Accept incoming UDP traffic on port 4500 to My Router IP
  • Allow ESP: Accept incoming ESP protocol traffic to My Router IP
  • Allow AH: Accept incoming AH protocol traffic to My Router IP

Review Rules

These rules can be modified to restrict to specific source addresses, where appropriate.

Additional VPN Network Rules

Additional rules need to be created on our new VPN network:

Translate Rule: VPN Translate to Router

The translate rule must be moved to the top of the rules list, before the Accept Rules. Instructions for changing the order of rules can be found in the Product Guide: Network Rules - Change the Order of Rules

Default Route Rule: VPN Default Route Rule

Internal Network Rule

A routing rule is needed on Internal-xyz to route its VPN traffic to the VPN network.

VPN Default Route Rule

New rules must be applied on each network to put them into effect.

IPsec Example - Tenant/NAT UI Address

The following example configures an IPsec peer within a VergeOS tenant. In this example, the dedicated IP address used for accessing the tenant UI is also used for the IPsec tunnel, with NAT rules in place to direct tunnel traffic appropriately.

This example pertains to a tenant using a dedicated IP address; tenants using a shared address (via proxy/PAT rules) will require different configuration.

IPsec is a complex framework that supports a vast array of configuration combinations with many ways to achieve the same goal, making it impossible to provide one-size-fits-all instructions. Sample configurations are given for reference and should be tailored to meet the particular environment and requirements.

Consult the IPsec Product Guide Page for step-by-step general instructions on creating an IPsec tunnel.

Host Configuration

Assigning the UI address to a tenant automatically creates rules on the host system (external and tenant networks) to channel traffic appropriately. No further configuration should be needed on the host.

All configuration outlined below is done within the tenant system.

VPN Network Configuration

VPN Network Configuration

Phase 1

Phase 1 Configuration

Phase 2

Phase 2 Configuration

Default VPN Network Rules

Default Firewall Rules - The following necessary firewall rules are created automatically when a VPN network is created:

  • Allow IKE: Accept incoming UDP traffic on port 500 to My Router IP
  • Allow IPsec NAT-Traversal: Accept incoming UDP traffic on port 4500 to My Router IP
  • Allow ESP: Accept incoming ESP protocol traffic to My Router IP
  • Allow AH: Accept incoming AH protocol traffic to My Router IP

Review Rules

These rules can be modified to restrict to specific source addresses, where appropriate.

Additional VPN Network Rules

Additional rules need to be created on our new VPN network:

VPN NAT Rule: VPN NAT Rule

The incoming NAT rule must be moved to the top, before the Accept Rules. Instructions for changing the order of rules can be found in the Product Guide: Network Rules - Change the Order of Rules

Default Route Rule: VPN Default Route Rule

VPN SNAT Rule: VPN Nat Rule

External Network Rules

Translate rules are necessary on the tenant's external network, to send IPsec traffic to the VPN network:

External UDP NAT Rule:

External ESP NAT Rule:

External AH NAT Rule:

Connecting Internal Networks to the VPN

Routing can be configured between the VPN network and other internal networks to provide tunnel access to those networks; see How to Configure Routing Between Networks.

New rules must be applied on each network to put them into effect.

Enabling System SSH Access

Key Points

  • SSH access to a VergeOS system is generally not needed because full access is provided from the UI.
  • SSH should only be enabled for specific hardware diagnostics or other special circumstances.
  • Although VergeOS employs many safety protections, opening SSH on any system can introduce vulnerability.

Important SSH Security Procedures

  • Always use source-controlled external rules to strictly limit SSH access to trusted addresses.
  • Enable SSH access on a temporary basis; disable rules again when done with the session.

Steps to Enable SSH access

SSH Access rules are created automatically, in a disabled state, during system installation.

  1. Enable the core network rule: Navigate to the Core network dashboard, modify the "SSH Access" rule, select the Enabled option and Submit to save the change. ssh-rule-core.png

  2. Add source control to the external network rule: Navigate to the external network dashboard, modify the "SSH Access" rule to configure specific source IP address(es) and/or address range(s) to tightly control access.

  3. Enable the external network rule: select the Enabled option and Submit to save the change.
    Ex. External Network Rule: ssh-rule-external.png

  4. Apply Rules to both networks.

Warning

Danger

  • VergeOS is a specialized kernel, with a read-only overlay. Do not install additional Debian packages or applications as they can conflict with VergeOS operation and cause system malfunction or data loss. Additionally, extraneous programs are wiped at reboot.
  • Check with VergeOS support before making any modifications at the command line. Issues resulting from unsanctioned command-line changes are the sole responsibility of the customer.

How to Use Root External IPs in Tenants in VergeOS

In VergeOS Virtual Data Centers leveraging Tenants, a tenant may need a public IP address (External IP) from the root External network for use by VMs inside the tenant space.

Using Network Blocks

Network Blocks can be used to assign a group of IPs as a single unit, and often represent the most straightforward method of using an External IP inside a tenant space for VM NICs. They are created on the root External network, or the external interface upon which the IPs you intend to use are routable.

Pro/Cons

Pro - Allows direct assignment of a public IP to a tenant VM's network interface.
Pro - Leverages built-in Layer 3 functions, maintaining full visibility of the configuration for diagnostic and troubleshooting.
Con - Requires a minimum of 4 total IP addresses (a /30 network block) to deliver 1 usable IP to a device.

Creating a Network Block

In the VergeOS system root:

  1. Navigate to the network that represents the 'edge' of your VergeOS system; this is very often the root External network.
  2. Click Network Blocks from the left menu.
  3. Click New to create a new block.
  4. Enter your Network Block IP address in CIDR format. (a.b.c.d/n)
  5. Optional : Add a helpful description to your block.
  6. Since we're using this block with a tenant, set "Owner Type" to Tenant and set "Owner" to the tenant you'd like to assign this block to.
  7. Click Submit to save and assign the Block.

Block Addressing

When creating Network Blocks, VergeOS validates the Block when applying the firewall rules in further steps. Failure to set a proper starting (network) IP for a given range will result in an error. For example, 10.1.2.108/30 is a valid block, while 10.1.2.110/30, although it falls within the same /30 range, is not a valid network address and will fail to validate in the following steps and not function. Check with a subnet calculator if you are unsure.
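The alignment requirement can be checked programmatically; Python's standard ipaddress module applies the same rule (a sketch for sanity-checking, not part of VergeOS):

```python
import ipaddress

def is_valid_block(cidr: str) -> bool:
    """Return True if cidr starts on a proper network boundary."""
    try:
        # strict=True rejects CIDRs whose host bits are set, i.e.
        # blocks that do not start on an aligned network address
        ipaddress.ip_network(cidr, strict=True)
        return True
    except ValueError:
        return False

print(is_valid_block("10.1.2.108/30"))  # True: 108 is a multiple of 4
print(is_valid_block("10.1.2.110/30"))  # False: host bits set, not a network address
```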

  1. Return to the main page for the network you just added the Block to.
  2. At the top, note the "Rule changes need to be applied" message. You may click "Apply Rules" here, or click "Rules" in the left menu to validate before applying. You may also leave rule application for later, but you must return and apply before your Block will be routed and functional.
  3. Navigate to the Tenant Networks view and, using the top filters, select "Needs FW Apply" "Yes"; your tenant network should be listed.
  4. Select the network with a click, then Apply Rules in the left menu to complete delivery of the Network Block to the tenant.

In the Tenant interface:

  1. Navigate to your External network. You will find your root-assigned Network Block listed with the description "External Network Address from service provider".
  2. Select the Block with a click, then click New Network on the left menu. This will create a new Internal network in the tenant using the address block.
  3. Note the Address Type is automatically set to Static with the Network Block address already set. Dynamic DHCP will also be enabled, with the remaining usable IP range already filled in.
  4. Give the new network a name.
  5. When finished with any other customizations you require, click Submit to create the network.
  6. You will exit to the new network. Click Power On from the left menu to bring the network online. You will be presented a confirmation window to prevent accidental power-on.

Attach a VM NIC to the network with DHCP enabled to have the IP assigned to the NIC automatically. Alternatively, set the IP address(es) within your guest OS manually.

Creating a Virtual Switch Port

Virtual Switch Ports are another common method to consume root-level IP space by tenant workloads. These are roughly analogous to physical wires in that they allow Layer 2 network traffic to "skip" routed network segments, in this case allowing a Tenant Internal network to communicate directly with a network outside of VergeOS. This may be a "WAN" network directly, or another network configured outside of VergeOS that has usable address space routable to the internet.

Pro/Cons

Pro - Simple configuration within VergeOS, bypassing internal Layer 3 routing configuration.
Pro - Allows direct usage of External IPs on edge devices by consumers.
Pro - Minimal address space overhead; only the IP addresses used by clients.
Con - Virtual Switch Ports only function when both networks they connect are running on the same node.
  • This requires that the External network and the tenant's Node1 network use a High Availability (HA) Group to stay on the same node, which may impact HA event expectations.
Con - May make troubleshooting and diagnostics more difficult by removing VergeOS WebUI and native routing visibility.

For instructions on creating a Virtual Switch Port, see Creating a Virtual Switch Port.

Once your Virtual Switch Port is in place, virtual machines and other workloads with NICs connected to the Internal network it is attached to will have a Layer 2 connection out of VergeOS and will function similarly to a VLAN in a traditional switch with regard to addressing and routing.

Address Translation

If a workload must have a consistent IP address, but does not need the address assigned to it directly, Address Translation may be the best method. This allows standard Layer 3 routing from your Public/External IP pool to any given workload in VergeOS via the built-in Rules and Networking system.

Pro/Cons

Pro - Follows standard and well understood routing conventions.
Pro - Allows full route visibility and control within VergeOS WebUI panels, which can aid troubleshooting and future changes.
Con - Does not allow the end device to be assigned the Public IP natively.
Con - Not all network traffic survives Address Translation, particularly if source/destination validation is required.

Due to the very extensive and flexible nature of VergeOS's network possibilities, we will provide 2 example configurations, with the Address Translation at differing points in the routing journey.

DNAT and SNAT Rules on Tenant Internal Network

  1. Navigate to the network that represents the 'edge' of your VergeOS system; this is very often the root External network.
  2. Click IP Addresses from the left menu.
  3. Click New to create a new IP.
  4. Set "Type" to Virtual IP.
  5. Fill in the IP Address.
  6. Optional : Add a helpful description.
  7. Set "Owner Type" to "Tenant"
  8. Set "Owner" to the tenant you'd like to assign this IP to.
  9. Click Submit to save.
  10. You will be returned to the "IP Addresses" view.
  11. Return to the main page for the network you just added the IP Address to.
  12. At the top, note the "Rule changes need to be applied" message. You may click "Apply Rules" here, or click "Rules" in the left menu to validate before applying. You may also leave rule application for later, but you must return and apply before your Address will be routed and functional.
  13. Navigate to the Tenant Networks view and, using the top filters, select "Needs FW Apply" "Yes"; your tenant network should be listed.
  14. Select the network with a click, then Apply Rules in the left menu to complete delivery of the IP Address to the tenant.

In the Tenant interface:

  1. Navigate to your External network. You will find your root-assigned IP Address listed with the description "External IP from service provider".
  2. Select the IP Address with a click, then click Edit on the left menu.
  3. Set "Owner Type" to "Network".
  4. Set "Owner" to the network your VM's NIC is attached to.
  5. Click Submit to save.
  6. Return to the Tenant External network view.
  7. Click Apply Rules to activate the automatic rule created to route your IP.
  8. Navigate to the Tenant network you set in step 4.

DNAT Option: (If your workload is compatible)

  1. Click Rules in the left panel.
  2. Click New in the left panel to create a new Rule.
  3. Give your rule a Name.
  4. Optional : Write a helpful Description.
  5. Set "Action" to Translate.
  6. Set "Destination Type" to My IP Addresses.
  7. Select the IP Address you've passed along from the list.
  8. Set "Target" to either:
    • "Type" My IP Addresses: select the Static IP Address you have already configured for this VM NIC in VergeOS.
    • "Type" IP/Custom: manually enter the local static IP you have already set on the VM NIC.
  9. Click Submit to save.

SNAT Configuration: (Required for outgoing translation)

  1. Click Rules in the left panel.
  2. Click New in the left panel to create a new Rule.
  3. Give your rule a Name.
  4. Optional : Write a helpful Description.
  5. Set "Action" to Translate.
  6. Set "Source" to either (using the same IP as the "Target" from the previous steps):
    • "Type" My IP Addresses: select the Static IP Address you have already configured for this VM NIC in VergeOS.
    • "Type" IP/Custom: manually enter the local static IP you have already set on the VM NIC.
  7. Set "Destination Type" to My IP Addresses.
  8. Select the IP Address you've passed along from the list.
  9. Set "Pin" to Top to place this Rule above others, ensuring it is applied early.
  10. Click Submit to save.
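Conceptually, the DNAT rule rewrites the destination of inbound packets to the public IP, and the SNAT rule rewrites the source of outbound packets from the VM. A small illustrative sketch (the addresses are hypothetical, and this is a simplification of what the rules do, not VergeOS code):

```python
PUBLIC_IP = "203.0.113.10"  # root-assigned External IP (hypothetical)
LOCAL_IP = "192.168.0.50"   # static IP configured on the VM NIC (hypothetical)

def dnat_inbound(packet: dict) -> dict:
    """DNAT Translate rule: inbound traffic to the public IP is sent to the VM's local IP."""
    if packet["dst"] == PUBLIC_IP:
        packet = {**packet, "dst": LOCAL_IP}
    return packet

def snat_outbound(packet: dict) -> dict:
    """SNAT Translate rule: outbound traffic from the VM appears to come from the public IP."""
    if packet["src"] == LOCAL_IP:
        packet = {**packet, "src": PUBLIC_IP}
    return packet

inbound = dnat_inbound({"src": "198.51.100.7", "dst": PUBLIC_IP})
outbound = snat_outbound({"src": LOCAL_IP, "dst": "198.51.100.7"})
print(inbound["dst"])   # 192.168.0.50
print(outbound["src"])  # 203.0.113.10
```

Without the SNAT half, return traffic would leave with the local source address and replies to the public IP would break for many protocols.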

Routing Option: (May be useful if your workload is not DNAT compatible)

If your workload does not support DNAT, clients must reach it at the IP natively assigned to the device, and you have only 1 IP available, there is an alternative to the "DNAT Option" above: follow the "DNAT Option" instructions with one change, at Step 5 set "Action" to Route rather than Translate. This sends traffic for the public IP to the VM via its native private IP.

Then, inside the guest OS, set the Public/External IP as a secondary address on the VM NIC with a /32 (255.255.255.255) subnet so the guest accepts the routed traffic. The "SNAT Configuration" steps will likely still be needed: outbound traffic from the VM NIC originates from the local IP, not the secondary public address, so Source NAT must rewrite it on the way out. This option is entirely guest-OS dependent and may not work in all situations.


Document Information

  • Last Updated: 2025-02-17
  • VergeOS Version: 4.13.3

Settings that Influence VM Node Selection

Each time a VM is powered on or migrated, the system decides where to run the VM based on user-specified VM options as well as balancing workloads across available nodes.

VM options used in deciding node selection for running a VM:

  • HA Group

    • Node Affinity: (value starts with a "+", e.g. "+commapp") The system attempts to run VMs with the same HA Group value on the same node. This is used to coalesce application-related workloads to a single physical node for performance optimization.
    • Node Anti-affinity: (value does NOT start with "+", e.g. "webservers") VMs with the same HA Group value are run on separate nodes to provide high availability of applications or services.
  • Preferred Node: a specific node is selected as the first choice

  • Preferred Cluster: nodes in specified cluster used as first choice
  • Failover Cluster: nodes in specified cluster used as next choice when preferred cluster is not available

For more information about these and other VM options, see: Product Guide - Virtual Machine Fields
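The affinity/anti-affinity behavior can be illustrated with a small sketch (simplified placement logic for illustration only, not the actual VergeOS scheduler):

```python
def pick_node(vm_ha_group, placements, nodes):
    """Pick a node for a VM.

    placements maps node name -> set of HA Group values already running there.
    """
    if not vm_ha_group:
        return nodes[0]
    if vm_ha_group.startswith("+"):
        # Affinity: prefer a node already running the same HA Group
        for node in nodes:
            if vm_ha_group in placements.get(node, set()):
                return node
    else:
        # Anti-affinity: prefer a node NOT running the same HA Group
        for node in nodes:
            if vm_ha_group not in placements.get(node, set()):
                return node
    return nodes[0]  # fall back to normal balancing if no node satisfies the preference

placements = {"node1": {"+commapp"}, "node2": {"webservers"}}
print(pick_node("+commapp", placements, ["node1", "node2"]))    # node1 (coalesce together)
print(pick_node("webservers", placements, ["node2", "node1"]))  # node1 (keep separate)
```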

Provide Layer 2 Access to a Tenant - LEGACY

Newer Configuration Method

In VergeOS 26.0 and later, consider using Tenant Layer 2 Networks for a simpler, more streamlined approach to providing Layer 2 access to tenants. This method automatically creates the necessary networks within the tenant environment.

Terminology Update

Virtual Switch Ports were previously called "Virtual Wires" in earlier versions of VergeOS documentation. The functionality remains the same - only the terminology has been updated for clarity.

High-Level Steps

  1. Prepare the physical network: verify VLANs are configured on the appropriate physical switch ports so that they are accessible within the VergeOS environment.

Warning

VLANs 1 & 100-102 cannot be used in a Virtual Switch Port capacity. These VLANs are reserved for internal traffic. These IDs can, however, be remapped to other VLAN IDs for tenant consumption.

  2. Create the Virtual Switch Port: determine whether the tenant will need access to a single VLAN or multiple VLANs, as this determines the Virtual Switch Port configuration:

Virtual Switch Port Host Placement

When using a Virtual Switch Port, both networks participating in that Virtual Switch Port must be on the same host. Failure to meet this requirement can lead to network connectivity issues.

  3. Add VLANs Inside the Tenant

Creating a 1:1 Virtual Switch Port

  1. Ensure the VLAN(s) have been configured in the VergeOS UI. If not, follow the steps to create VLAN(s) here.
  2. Select Networks and then Dashboard from the top menu to open the Networks Dashboard.
  3. Select Virtual Switch Ports in the left menu to view all Virtual Switch Ports in the environment.
  4. Select New to create the first half of the Virtual Switch Port:

    • Name: a descriptive name, e.g., VLAN from host, etc.
    • Network: the external network with the corresponding VLAN to pass to the tenant
    • Destination Wire: field should display --Empty List-- or select --None--
    • PVID: 1.
      Example Configuration:
  5. Submit your changes and return to the Virtual Switch Ports list view.

  6. Select New to create the second half of the Virtual Switch Port:

    • Name: a name to identify the wire such as vlan id, tenant, purpose, etc
    • Network: the tenant network, typically named tenant_'$TENANTNAME'.
    • Destination Wire: the other half of the Virtual Switch Port created above.
    • PVID: VLAN ID of the network being attached.
      Example Configuration:
  7. Submit your changes.

  8. Navigate to the Networks Dashboard, select Networks, and Apply Rules for both networks connected by the Virtual Switch Ports.

Creating a Trunk Mode Virtual Switch Port

Bridge Mode Required

To use trunk mode Virtual Switch Ports, the corresponding physical network (tied to node NICs) must be set to Bridge mode.

Set the Physical Network to Bridge Mode

  1. From the top menu, select Networks > List.
  2. Double-click the Physical Network (NIC) that the VLANs are trunked to on the physical switch.

Tip

A physical Network typically has "Switch" appended to the name and represents a physical NIC on a node. You can filter the list of networks by "Type" to display only the physical networks.

  3. Select Edit to enter the network configuration page.
  4. In the configuration page, enable Physical Bridged to activate Bridge Mode. It is best to set the On Power Loss setting to Power On so that the network starts up automatically after a system power loss.
  5. Submit your changes.
  6. Reboot the necessary nodes for Bridge Mode to become active.

Follow proper Maintenance Mode procedures when rebooting a node to avoid workload disruptions.

Configuring a Trunk Mode Virtual Switch Port

  1. Ensure the physical network is set to Bridged Mode and is powered on.
  2. Navigate to Networks > Virtual Switch Ports.
  3. Select New to create the first half of the Virtual Switch Port.

    • Name: identify the wire, e.g., "trunk from host"
    • Network: physical network with the corresponding VLAN to pass to the tenant.
    • Destination Wire: should display --Empty List-- or select --None--
    • PVID: 0
    • Allowed VLAN List: comma-delimited and with ranges as necessary
      Example Configuration:
  4. Submit your configuration.

  5. Select New to create the second half of the Virtual Switch Port.

    • Network: the tenant network that the VLAN will be passed to, typically named tenant_'$TENANTNAME'.
    • PVID: 0
    • Allowed VLAN List: comma-delimited and with ranges as necessary
      Example Configuration:
  6. Submit your changes.

  7. Navigate to the Networks Dashboard, select Networks, and Apply Rules for both networks connected by the Virtual Switch Ports.
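The Allowed VLAN List fields above accept comma-delimited IDs with ranges (e.g. "10,20,30-35"). A small sketch of how such a string expands (illustrative only; the exact syntax VergeOS accepts may differ):

```python
def expand_vlan_list(spec: str) -> list[int]:
    """Expand a comma-delimited VLAN spec with ranges into a sorted list of VLAN IDs."""
    vlans = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            vlans.update(range(int(lo), int(hi) + 1))
        elif part:
            vlans.add(int(part))
    return sorted(vlans)

print(expand_vlan_list("10,20,30-35"))  # [10, 20, 30, 31, 32, 33, 34, 35]
```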

Add VLANs Inside the Tenant

  1. Navigate to the tenant UI and log in.
  2. Select Networks > + New External from the top menu.
  3. Configure settings:
    • Name: a label to identify the network (name, vlan ids, purpose, etc.)
    • Layer 2 Type: VLAN
    • Layer 2 ID: VLAN ID
    • Interface Network: Physical
    • IP Address Type: None
      Example Configuration:

Leave other fields at default settings unless specific configuration needed. For information about additional external network options, see: How to Create an External Network

  4. Submit your configuration.
  5. Attach workloads to the network for Layer 2 access to networks outside VergeOS.

Troubleshooting Steps

Traffic is not reaching the virtual machine

  • Confirm firewall rules related to the Virtual Switch Port have been applied.
  • Verify the destination tenant network and VLAN network are in the "Running" state and reside on the same physical node.
  • Ensure VLANs are trunked to the correct physical node ports.

Force Power Off a VM Using the API

Overview

Key Points

  • Use the VergeOS API to force power off a stuck VM
  • Requires API/Swagger access
  • Process involves multiple API calls to ensure accurate targeting
  • Should only be used when normal power off methods fail

This guide explains how to force power off a non-responsive virtual machine (VM) using the VergeOS API when standard power-off methods are unsuccessful.

Prerequisites

  • Access to the VergeOS UI with administrative privileges
  • The name of the stuck/non-responsive VM
  • Basic understanding of API operations

Important

This procedure should only be used when standard power-off methods have failed. Forcing a VM to power off can lead to data loss or corruption if not used carefully.

Steps

1. Access the API Documentation

  1. Navigate to System in the VergeOS UI
  2. Click on API Documentation (also known as Swagger)

2. Locate the VM ID

  1. In the API interface, locate and expand the VMs table
  2. Click the blue GET button
  3. In the parameters section, set filter to name eq your_vm_name to find a specific VM
  4. Click Execute
  5. Note the Machine number from the response

3. Get Machine Status ID

  1. Navigate to the machines table
  2. Click the blue GET /machines/{id}
  3. In the parameters, set id to the Machine number from the previous response and set fields to status
  4. Click Execute
  5. From the response, note the status value

4. Verify Machine Status

  1. Go to the machine_status table
  2. Click the blue GET /machine_status/{id}
  3. In the parameters, set id to the status value from step 3 and set fields to most
  4. Click Execute
  5. Verify this is the correct VM by checking:
    • Number of cores
    • RAM allocation
    • Status information
    • Machine number

5. Force Power Off

  1. In the machine_status table, click PUT
  2. Enter the status number as the resource id
  3. In the request body, enter the following JSON:
{
    "running": false,
    "migratable": true,
    "status": "stopped",
    "state": "offline"
}
  4. Click Execute
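The calls above can also be scripted. The sketch below only builds the request pieces shown in this guide; the base URL and exact endpoint paths are assumptions, so verify them against your system's API Documentation (Swagger) page before wiring these to an HTTP client such as requests:

```python
import json

BASE_URL = "https://verge.example.com/api/v4"  # assumed base path; confirm in Swagger

def vm_lookup_request(vm_name: str) -> tuple[str, dict]:
    """GET /vms filtered by name, as in step 2 (Locate the VM ID)."""
    return f"{BASE_URL}/vms", {"filter": f"name eq {vm_name}"}

def force_off_request(status_id: int) -> tuple[str, str]:
    """PUT /machine_status/{id} with the force power-off body from step 5."""
    body = {
        "running": False,
        "migratable": True,
        "status": "stopped",
        "state": "offline",
    }
    return f"{BASE_URL}/machine_status/{status_id}", json.dumps(body)

url, payload = force_off_request(42)  # 42 is a placeholder status id
print(url)
print(payload)
```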

6. Verify Power Off

  1. Return to the VergeOS UI
  2. Verify that the VM shows as powered off

Troubleshooting

Common Issues

  • If the VM doesn't show as powered off after the API call, wait a few minutes for the status to update
  • If the status doesn't update, verify that all IDs were correct in the previous steps
  • In case of errors, check the API response for specific error messages

Additional Notes

  • Always document the VM's ID and status values before making changes
  • Consider taking a snapshot of the VM before forcing power off if possible
  • Monitor the VM after forcing power off to ensure it starts properly when needed

Feedback

Need Help?

If you do not feel comfortable with this process, please reach out to our support team for assistance.


Document Information

  • Last Updated: 2024-01-28
  • VergeOS Version: All

How to Import a Linux VM (RHEL, CentOS, SUSE, Debian, Ubuntu)

Overview

Key Points

  • Linux distributions install drivers only for detected hardware during installation.
  • Imported VMs may fail to boot due to missing virtio drivers for VergeOS hardware.
  • You can resolve boot issues by adjusting VM configuration and regenerating the initramfs.
  • This guide covers RHEL, CentOS, Fedora, SUSE, openSUSE, Debian, and Ubuntu.

This guide explains how to import Linux virtual machines from other hypervisors into VergeOS. It addresses potential problems like VMs not booting or lacking network connectivity after migration, and provides distribution-specific instructions for regenerating the initramfs.

Prerequisites

  • Access to VergeOS and the VergeOS UI.
  • Familiarity with the hypervisor environment and VM configuration.
  • Imported VM files must be present in the VergeOS environment.

Steps

1. Update VM Hardware Configuration

  1. Change all hard drives to virtio-scsi:

    • In the VergeOS UI, navigate to the VM's settings.
    • For each hard drive, change the interface to virtio-scsi for optimal performance and compatibility.
  2. Change all NICs to virtio:

    • Ensure that all network interface cards (NICs) are set to virtio for enhanced networking support.
  3. Adjust Boot Order:

    • Make sure that the OS disk is listed as ID 0 in the boot order.

2. Boot into Rescue Mode

  1. Start the VM:

    • Power on the VM, and during boot, hold the Left Shift key to access the GRUB boot menu.
  2. Select Rescue Mode:

    • In the GRUB menu, select the rescue mode to boot into a minimal recovery environment.

SUSE/openSUSE Alternative

If you cannot boot to rescue mode from GRUB (common with SLES15 and openSUSE Leap 15), mount an installation ISO to the VM and boot from it. The ISO will provide a "Rescue System" option.

3. Mount the Root Filesystem and Chroot

Once booted into rescue mode, log in as root and mount the root filesystem.

  1. Find the root partition:

    If you don't know which partition contains the root filesystem, list all available partitions:

    cat /proc/partitions
    

    For systems using LVM, list all logical volumes:

    lvdisplay
    
  2. Mount the root partition:

    Mount the root partition or logical volume to /mnt:

    mount /dev/<device_name> /mnt
    

    Replace <device_name> with your root partition (e.g., sda2, mapper/vg0-root).

  3. Verify the mount:

    Check that you mounted the correct filesystem by listing its contents:

    ls /mnt
    

    You should see directories like /root, /boot, /home, /etc, and /var.

  4. Bind the virtual filesystems:

    Use the following for-loop to bind the necessary virtual filesystems:

    for i in proc sys dev run; do mount --rbind /$i /mnt/$i; done
    

    Alternatively, mount them individually:

    mount --rbind /proc /mnt/proc
    mount --rbind /sys /mnt/sys
    mount --rbind /dev /mnt/dev
    mount --rbind /run /mnt/run
    
  5. Chroot into the mounted filesystem:

    chroot /mnt
    
  6. Mount remaining filesystems:

    After chrooting, mount any additional partitions defined in fstab:

    mount -a
    

4. Rebuild Initramfs

After chrooting into the system, regenerate the initramfs with the necessary virtio drivers.

RHEL / CentOS / Fedora / SUSE (using dracut)

Run the following command to regenerate the initramfs:

dracut -f --regenerate-all --add-drivers "virtio_blk virtio_net virtio_pci"

This adds drivers for virtio_blk (block device), virtio_net (network device), and virtio_pci (PCI bus) to the initramfs.

Debian / Ubuntu (using update-initramfs)

If dracut is not available (common on Debian 10 and Ubuntu), use update-initramfs instead:

  1. Add the virtio modules to the initramfs configuration:

    cat >> /etc/initramfs-tools/modules << EOF
    virtio_pci
    virtio_blk
    virtio_net
    EOF
    
  2. Regenerate the initramfs:

    update-initramfs -u
    

5. Reboot and Verify

  1. Exit the chroot environment:

    exit
    
  2. Reboot the VM:

    reboot
    
  3. Verify Boot and Network Connectivity:

    • Confirm that the VM boots successfully and that network connectivity is functional via the virtio NIC.

Troubleshooting

Common Issues

  • VM is not booting:
  • Solution: Double-check the boot order in the VM settings. The OS disk must be set as ID 0.
  • No network connectivity:
  • Solution: Ensure that NICs are set to virtio and that the initramfs was rebuilt with the appropriate network drivers.

Additional Resources

Feedback

Need Help?

If you have any questions or encounter issues while importing a VM, please reach out to our support team for assistance.

Configuring VergeOS as an OIDC Client

Overview

Key Points

  • Configure VergeOS to use OIDC authentication
  • Connect to a VergeOS OIDC identity provider
  • Enable automatic user creation and synchronization
  • Customize login appearance and behavior

This guide explains how to configure a VergeOS system or tenant to authenticate using another VergeOS system as an OIDC identity provider.

Prerequisites

  • Access to the VergeOS OIDC provider system
  • Well Known Configuration URL from the provider
  • Client ID and Client Secret from the provider
  • Administrative access to the client VergeOS system
  • Full URL of the client VergeOS system

Steps

1. Access Authorization Settings

  • Click System in the top menu
  • Select Auth Sources
  • Click New

2. Configure Basic Settings

  • Name: Enter an identifier for this auth source (appears on login button)
  • Driver: Select OpenID (Well Known Config)
  • Base URL: Enter the Well Known Configuration URL
  • Redirect URI: Enter the full URL of this VergeOS system
  • Client ID: Paste the client ID from the provider
  • Client Secret: Paste the client secret from the provider

3. Configure Authentication Parameters

Default values typically work best for these settings:

  • Token hint parameter: Leave as post_logout_redirect_uri
  • Redirect parameter: Leave as post_logout_redirect_uri
  • Scope: Leave as openid profile email groups
  • Group Scope: Leave as groups

Check these boxes for optimal functionality:

  • Decode ID Token
  • Update Remote User
  • Update User Email Address
  • Update User Display Name
  • Update Group Membership
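The "Decode ID Token" option refers to reading the claims carried in the OIDC ID token, which is a JWT whose segments are base64url-encoded. A minimal illustration of what decoding the payload involves (the sample token is constructed locally for the example, and the signature is not verified):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT without verifying the signature."""
    payload_b64 = token.split(".")[1]
    # base64url data must be padded to a multiple of 4 before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample token (header.payload.signature) purely for illustration
claims = {"sub": "jdoe", "email": "jdoe@example.com", "groups": ["admins"]}
seg = lambda d: base64.urlsafe_b64encode(json.dumps(d).encode()).decode().rstrip("=")
token = f'{seg({"alg": "none"})}.{seg(claims)}.sig'

print(decode_jwt_payload(token)["email"])  # jdoe@example.com
```

The sub, email, and groups claims are what drive the "Update Remote User", "Update User Email Address", and "Update Group Membership" options above.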

4. Configure User Creation

Choose your preferred user creation method:

  • Auto-Create Users: Enter .* to create all users automatically
  • Auto-Create Users in Group: Specify groups for restricted auto-creation

5. Customize Login Appearance

Optionally configure:

  • Button background color
  • Button text color
  • Custom Font Awesome icon
  • Icon color (using HEX codes)

6. Save Configuration

  • Click Submit to create the authorization source

Best Practices

  • Test authentication with a test user before rolling out widely
  • Keep debug mode disabled unless troubleshooting
  • Document your configuration choices for future reference
  • Regularly verify that user synchronization is working as expected

Troubleshooting

Common Issues

  • Authentication Fails

    • Verify Client ID and Secret are correct
    • Check Well Known Configuration URL
    • Ensure Redirect URI matches exactly
  • User Sync Issues

    • Verify Group Scope is enabled
    • Check group membership settings
    • Enable Debug Mode temporarily
  • Login Button Missing

    • Verify authorization source is enabled
    • Check login styling settings
    • Clear browser cache

Additional Resources

Feedback

Need Help?

If you encounter any issues while configuring OIDC client settings or have questions about this process, please don't hesitate to contact our support team.


Document Information

  • Last Updated: 2024-01-22
  • VergeOS Version: 4.12 and later

Setting Up VergeOS as an Identity Provider with OIDC

Overview

Key Points

  • Create an OIDC application to establish VergeOS as an identity provider
  • Enable single sign-on for other VergeOS systems and tenants
  • Configure centralized authentication with third-party providers
  • Support multiple client systems with a single OIDC setup

This guide walks you through the process of configuring VergeOS as an identity provider using OpenID Connect (OIDC), allowing centralized authentication for multiple VergeOS systems and tenants.

Prerequisites

  • Administrative access to the VergeOS system
  • Valid SSL certificate installed on the VergeOS system
  • Basic understanding of OIDC concepts
  • URLs of client systems that will use this authentication

Steps to Create an OIDC Application

  1. Access OIDC Settings
    • Navigate to System > OIDC Applications from the top menu
    • Click New

  2. Configure Basic Settings
    • Enter a descriptive Name for the application
    • Check the Enabled box
    • Add an optional Description

  3. Set Up Redirect URIs
    • Enter the callback URL(s) where users will be redirected after authentication
    • Format: https://your-system-name.example.com
    • Multiple URIs can be added for different client systems

Using Wildcards

You can use wildcards in redirect URIs:

  • For multiple systems: https://examplecorp-site*.example.com
  • For multiple subdomains: https://vergesystem.*.example.com
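Wildcard patterns of this kind can be sanity-checked with simple glob matching; Python's fnmatch approximates the behavior (the exact matching rules VergeOS applies may differ, so treat this as a sketch):

```python
from fnmatch import fnmatchcase

pattern = "https://examplecorp-site*.example.com"

# '*' matches any run of characters in the URI
print(fnmatchcase("https://examplecorp-site1.example.com", pattern))   # True
print(fnmatchcase("https://examplecorp-site42.example.com", pattern))  # True
print(fnmatchcase("https://othercorp.example.com", pattern))           # False
```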

  4. Configure Authentication Options
    • Force Authorization Source: Optionally select a third-party provider
    • Map User: Choose if all verified users should map to a specific account
    • Set Scope Settings (Profile, Email, Groups)
    • Configure access restrictions if needed

  5. Save Configuration
    • Click Submit to create the OIDC application
    • The system will generate a Client ID and Secret

Retrieving Client Credentials

  1. Access Application Dashboard - Navigate to System > OIDC Applications - Double-click your OIDC application

  2. Copy Required Information - Client ID: Copy using the displayed value or copy icon - Client Secret: Use the copy icon (value is hidden) - Well Known Configuration URL: Copy the displayed URL

Best Practices

  • Create separate OIDC applications for different client groups
  • Regularly review and update access restrictions
  • Use specific redirect URIs instead of wildcards when possible
  • Document which systems are using each OIDC application

Troubleshooting

Common Issues

  • Authentication Fails

    • Verify SSL certificate is valid and not expired
    • Check redirect URI matches exactly
    • Ensure client ID and secret are correctly copied
  • Scope Access Denied

    • Verify required scopes are enabled
    • Check user permissions in restriction settings
  • Redirect Problems

    • Confirm URL format matches redirect URI
    • Verify wildcard patterns if used
    • Check for SSL certificate issues

Additional Resources

Feedback

Need Help?

If you encounter any issues while setting up OIDC or have questions about this process, please don't hesitate to contact our support team.


Document Information

  • Last Updated: 2024-08-29
  • VergeOS Version: 4.12 and later

Updating the VergeOS System

Overview

Key Points

  • System updates should be performed during a maintenance window
  • Updates can be performed with zero downtime when adequate resources are available
  • System updates are only run from the host system (top-level parent)
  • Tenant systems are automatically updated from their host system
  • Updates can be scheduled or performed on-demand
  • The system automatically handles workload migration during updates

This guide provides detailed instructions for performing system updates in VergeOS, whether on-demand or scheduled.

Prerequisites

  • Administrative access to the VergeOS Cloud Dashboard
  • Adequate system resources to allow workload migration during updates
  • A maintenance window (recommended, though not required due to zero-downtime capability)

Performing On-Demand Updates

1. Check for Updates

  1. Navigate to System > Updates in the Cloud Dashboard
  2. Click Check For Updates in the left menu
  3. Click Yes to confirm
    • The Packages section will show available updates
    • A cloud icon indicates downloadable packages
    • Version information displays current and available versions

2. Download Updates

  1. Click Download in the left menu
  2. Click Yes to confirm
  3. Wait for the download to complete

3. Install Updates

  1. Click Install in the left menu
  2. Click Yes to confirm
  3. Wait for installation to complete
    • Status will show "Idle - Reboot Required" when ready
    • The Reboot option will become enabled

Note

Updates that don't include VergeOS package changes won't require full node reboots, but still need the Reboot option to apply changes.

4. Apply Updates

  1. Click Reboot in the left menu
  2. Click Yes to confirm - The system will process one node at a time:
    • Node enters maintenance mode
    • Workloads migrate to other nodes
    • Application restarts/node reboots
    • Node exits maintenance mode
    • Progress shows in the Status field
    • Nodes Updated status tracks completion

Tip

Use Cancel Reboot to halt automatic reboots if needed (e.g., for workload rebalancing)

Scheduling Updates

1. Create Update Task

  1. Navigate to System > Updates > Tasks
  2. Click New in the left menu

2. Configure Schedule

  1. Choose scheduling option:
    • One-time: Keep default "Does Not Repeat"
    • Recurring: Select frequency (weekly, bi-weekly, monthly)
  2. Set Start Date and time
  3. For recurring tasks, optionally set end date

3. Configure Task Details

  1. Enter required Name
  2. Add optional Description
  3. Select Task Type:
    • Choose "Download, Install, and Reboot" for complete update
  4. Optional: Enable Delete After Running
  5. Click Submit to save

Best Practices

  • Schedule updates during low-usage periods and during maintenance windows
  • Ensure adequate system resources for workload migration
  • Monitor system during update process
  • Keep regular backups before major updates
  • Review available updates before applying

Troubleshooting

Common Issues

  • Issue: Workloads fail to migrate
  • Solution: Verify adequate resources on target nodes
  • Issue: Update process hangs
  • Solution: Check system logs and contact support if needed
  • Issue: Node fails to rejoin after reboot
  • Solution: Review logs and network connectivity

Feedback

Need Help?

If you encounter any issues during the update process or have questions, please reach out to our support team.


Document Information

  • Last Updated: 2024-12-19
  • VergeOS Version: All

Adding Tier 0 to an Existing System

Overview

Key Points

  • Tier 0 is normally configured during initial installation
  • This procedure is for special cases requiring post-installation configuration
  • Requires careful attention to device paths and hardware compatibility

This guide outlines the process for adding Tier 0 storage to an existing VergeOS system. While Tier 0 is typically configured during installation, these steps provide a method for adding it to production systems that cannot be reinstalled.

Critical Warning

  • This procedure should only be performed by qualified VergeOS engineers or under direct support guidance
  • Selected devices will be formatted and all existing data will be destroyed
  • Incorrect device path selection can seriously damage your system

Prerequisites

Before beginning this procedure, ensure:

  • Storage devices are physically installed in the system
  • Tier 0 devices are consistent across controller nodes
  • Hardware meets specifications from the Node Sizing Guide

Steps

1. Identify Device Paths

  1. Navigate to System > vSAN Diagnostics.
  2. Select Get Node Device List from the Query dropdown
  3. Click Send
  4. Identify unused devices (marked as "vsan = false")
  5. Note the device paths (/dev/sd*) for each controller node

Tip

Verify current vSAN drive assignments by checking vSAN Tiers > [select tier] > Drives to avoid selecting drives already in use.

2. Add Drives to Tier 0

For each drive:

  1. In vSAN Diagnostics:
    • Set Query to Add Drive to vSAN
    • Select the appropriate Node (node0 or node1)
    • Enter the correct Path for the device
    • Set Tier to Tier 0
    • Configure Swap setting

Swap Configuration

  • Enable swap on only ONE storage tier
  • If swap is enabled on another tier, disable it for Tier 0
  • Contact VergeOS Support for guidance on swap configuration if needed
  2. Enter the verification phrase: Yes I know what I'm doing
  3. Click Send to execute

3. Verify Configuration

  1. Monitor the system dashboard for tier status - Status will show "online-no redundancy" during meta migration
  2. Refresh node information: - Navigate to each controller node's dashboard - Select Refresh > Drives & NICs

Post-Configuration

Monitor the vSAN tier status in the system dashboard. The tier should transition from "online-no redundancy" to "online" once meta migration completes.

Additional Resources


Document Information

  • Last Updated: 2024-11-25
  • VergeOS Version: 4.13

Change External Network to Bonded with Tagged VLAN

Overview

Key Points

  • This procedure creates an active-backup bond across VLAN-tagged physical networks.
  • It is recommended for bare-metal installations limited to two NICs per node.
  • System downtime is not required to make this change.

This guide outlines the process to create a bonded external network across VLAN-tagged physical networks. This method provides optimal redundancy for bare-metal installations limited to two NICs per node, allowing two independent core-fabric networks and a single-VLAN, bonded external network.

Prerequisites

Warning

  • This process should be performed with local server access, because external network changes can affect remote UI access. Local access also allows you to test the bond configuration by removing one of the network cables to verify expected bond failover.
  • Before making any significant system changes, confirm you have the username/password for the "admin" user (user ID #1), in case command-line operations become needed. (Hint: the "Key=1" parameter appears in the URL of that user's dashboard.)

Steps

  1. Navigate to the External Network dashboard
    • Networks > Dashboard > Externals
    • Double-click External Network
    • Click Edit on the left menu
  2. Change Layer 2 Type to vLAN and enter appropriate Layer 2 ID (VLAN number).
  3. Select the option to Enable Bonding.
  4. Select the Physical Networks you want to participate in the bonding.
  5. Click Submit to save the change.

Post Configuration

  1. Check the external network by accessing the UI from a remote connection.
  2. Test Bond failover: Navigate to the external network dashboard and select NICs to view the network adapters. Physically disconnect one network cable. The UI should now indicate the NIC is in a "Down" status; verify remote UI access is still available.

Verify core network redundancy is in place before disconnecting network cables.

Troubleshooting

Common Issues

  • Problem: Loss of remote access
  • Solution:
    1. Check correct VLAN was entered in the external network config
    2. Verify network switch ports are correctly configured for the VLAN tag.

Additional Resources


Emulating USB Devices in VergeOS

Overview

Key Points

  • Create emulated USB devices for VMs
  • Enables legacy application support
  • Allows hotplugging storage for driver installation
  • Requires specific VM configuration

Prerequisites

Before creating an emulated USB device, ensure your VM meets these requirements:

  • VM settings:
  • Allow Hotplug must be enabled
  • Machine Type must be 9.0 or higher
  • VirtIO drivers installed in the guest OS

Machine Type Changes

Before modifying a VM's machine type, create a short-term snapshot (24-hour expiration) to enable rollback if needed.

VirtIO Driver Installation

  • Linux distributions typically include VirtIO drivers
  • For Windows, drivers are available in VergeOS custom ISOs or can be downloaded from: VirtIO Drivers

Steps to Create USB Device

  1. Access VM Settings - Navigate to the VM dashboard (Virtual Machines > List > Double-click your target VM )

  2. Create New Drive - Click Drives in the left menu - Select New from the left menu

  3. Configure USB Device - Set Media to either Disk or Clone Disk - Set Interface to USB - Optional: Enter a custom name (system will auto-generate if blank) - If using Clone Disk, select the appropriate *.raw file

  4. Enable Device - Click Submit to create the device - Select the new USB device from the drive list - Click HotPlug in the left menu

For additional drive configuration options, see: VM Drives

Troubleshooting

Common Issues

  • Problem: Device stays in "hot plugging" status
  • Solution:
    1. Check VM dashboard logs for errors
    2. Verify all prerequisites are met
    3. Try restarting the VM if settings were recently changed
  • Problem: Device shows as offline
  • Solution: Ensure hotplug is enabled and VirtIO drivers are installed

Document Information

  • Last Updated: 2024-11-19
  • VergeOS Version: 4.13

Configuring BGP Hold Down Timers

BGP (Border Gateway Protocol) hold timers are critical for maintaining stable BGP sessions between routers. This document will guide you through configuring the BGP hold down timers to 5 seconds for the keepalive interval and 15 seconds for the hold time.

Prerequisites

  1. Basic BGP Configuration: You should have a basic BGP configuration set up.
  2. Basic Knowledge of FRR Configuration: Familiarity with FRR configuration commands and procedures.

Configuration Steps

Step 1: Setup BGP

  1. Create a new External Network.
  2. Set its IP address type to BGP/OSPF.
  3. Set an ASN (Autonomous System Number).
  4. Define the IP address and Network Address.
  5. If this is a VLAN, configure the Layer 2 ID.
  6. Select an interface network.

Step 2: Open the BGP Network

  1. Open the network you created.
  2. Select Routers from the left menu.
  3. Open the ASN you defined during network creation.
  4. Select New from the left menu.
  5. Select Timers from the command menu.
  6. Under Parameters, enter bgp x y where x is the keepalive interval and y is the hold time. For example, bgp 5 15.
  7. Select Submit. This will return you to the Router page.
  8. Navigate back to the BGP network. A restart is required for the recent changes to take effect. Click Restart to apply changes.

Step 3: Verify the Setting

  1. Navigate back to the BGP network you configured.
  2. Select Network Diagnostics from the left menu.
  3. Choose FRRRouting BGP/OSPF from the Query dropdown.
  4. Run the default command show running-config.
  5. The settings modified in Step 2 should now appear in the running configuration.

For more information on other values and variables, refer to the FRR documentation.
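
As a point of reference, the timers entered in Step 2 should surface in the show running-config output from Step 3 roughly as follows. This is a sketch based on standard FRR syntax; the ASN 65001 is a placeholder, and your neighbor statements will differ:

```
router bgp 65001
 timers bgp 5 15
```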

Device Passthrough Advanced Configuration (Manual Creation/Editing of Resource Rules)

Although allowing auto-generation of resource rules (e.g. when you select a device and use the Make Resource menu option) is easiest and usually recommended, there may be situations where it may be useful to manually create a resource rule or to modify an auto-generated resource rule.

It is important to read and be familiar with PCI Passthrough Risks and Precautions before making passthrough configurations.

Manually Create a New Resource Rule

  1. Navigate to Infrastructure > Resources.
  2. Click Rules (ui card or on the left menu).
  3. Click New on the left menu.
  4. Provide a Name for the rule; a descriptive name can be helpful in future administration.
  5. Select the Resource Group to which the resource rule will apply.
  6. Select a specific Node or select --None-- to apply the rule to all nodes.
  7. Select the Type (PCI, USB, SR-IOV, or NVIDIA vGPU).
  8. Leave the default value set to --None-- in the field labeled Automatically created based on PCI Device.
  9. Configure device filters as desired; filter fields will vary depending on the device type selected (see below). An Advanced Entry option is also available.

For information on installed PCI devices to use in filters, consult the PCI devices listing: navigate to Infrastructure > Resources > PCI Devices. To show additional fields, right-click in the heading section and select from the full list of available columns.

Edit an Existing Resource Rule

  1. Navigate to the Associated Resource Group dashboard (Infrastructure > Resources > Groups > double-click the particular group).
  2. In the Rules section, locate and click the desired resource rule.
  3. Click Edit on the left menu.
  4. Node selection and PCI Filters can be modified as needed. An Advanced Entry option is also available.

Note: The Advanced Entry option allows you to manually input filter syntax rather than using the filter entry fields. Generally, it is preferable to allow system-generated syntax based on your filter field selections.

Setting Up Storware on VergeOS

This guide outlines the steps for configuring Storware on VergeOS to protect your virtual machines.

For more comprehensive information on Storware's capabilities and additional backup configuration options, visit the Storware Backup and Recovery Documentation.

Prerequisites

  • VergeOS on version 4.13 or higher.
  • Access to a Storware Backup and Recovery instance on version 7 or higher.
  • Credentials for an account with the appropriate permissions to configure both VergeOS and Storware.

Setup a dedicated Verge NAS Service for Storware

  1. Deploy the NAS Service.
  2. Configure NFS Settings:
  • Before powering on the NAS service, click on Edit NFS Settings.
  • Enable NFSv4 by selecting the checkbox for this option.
  • Click Submit to save the changes.
  • Power on the NAS service.

Depending on the size of your environment, you may want to increase the CPU and RAM allocated to the NAS service. Storware recommends 8 cores and 12 GB of RAM as a good starting point.


Adding Your VergeOS System to Storware

  1. Log in to Storware:
    • Access the Storware Backup and Recovery management console.
  2. Add VergeOS as a Virtual Environment:
    • Navigate to Virtual Environments > Virtualization Providers and click Create.
    • Select VergeOS as the Virtualization Provider.
  3. Configure the Connection Details:
    • General Tab:
      • URL: Enter the VergeOS URL in the format https://<VERGE_IP>.
      • Username: Provide the username for VergeOS.
      • Password: Enter the password for the specified user.
    • Verge Settings Tab:
      • Enter the name of the NAS service created in the previous step.
  4. Test the Connection:
    • Select the newly added VergeOS system from the list.
    • Click Test Connectivity to verify that Storware can successfully communicate with the VergeOS environment.

Important Notes

NFS Version Selection

Enabling NFSv4 on VergeOS ensures compatibility with modern backup solutions like Storware, providing improved security and performance.

Snapshot Optimization

Using Storware's snapshot management in conjunction with VergeOS's built-in vSAN capabilities allows for efficient incremental backups, reducing the time and storage required for VM protection.


Feedback

Need Help?

If you have any questions or encounter issues while setting up Storware on VergeOS, please reach out to our support team for assistance.


Document Information

  • Last Updated: 2024-11-07
  • VergeOS Version: 4.13

How to Create an External Network

This guide provides steps for creating an external network in VergeOS. The example assumes that the physical network in VergeOS is named External Switch, the VLAN for the new network is 50, and a static IP address is being used.

Steps

  1. Access Network Configuration:
  • Navigate to Networks > New External.
  2. Configure Network Settings:
  • Network Name: Enter a name for your network. In this example, use WAN1.
  • Layer 2 Type: Set to vLAN.
  • Layer 2 ID: Enter the VLAN ID, in this example, 50.
  • MTU: Leave as 1500 (advanced users may adjust this as needed).
  • Interface Network: Select the physical network, in this example, External Switch.
  3. Configure Network Router:
  • IP Address Type: Select Static. (If using DHCP, select it here and skip the remaining router steps.)
  • IP Address: Enter the IP address for this network. Example: 192.168.212.2.
  • Network Address: Enter the network address in CIDR format. Example: 192.168.212.0/24.
  • Gateway Monitoring: Enabling this feature is recommended for network reliability.
  4. Save and Activate the Network:
  • Click Save and wait for the network to power on. Once it displays as Running, proceed to set up routing rules.
  5. Add Default Routing Rule:
  • Click on Rules and select New.
  • Rule Name: Enter a name for this rule, such as default route.
  • Action: Select Route.
  • Direction: Choose Outgoing.
  • Source and Destination Filters: Leave as any and default since this is the default route.
  • Target:
    • Type: Select IP/Custom.
    • Target IP: Enter the router IP of your gateway. Example: 192.168.212.1.
  • Click Save, then Apply Rules.

Feedback

Need Help?

If you have any questions or encounter issues while creating an external network, please reach out to our support team for assistance.


Document Information

  • Last Updated: 2024-10-30
  • VergeOS Version: 4.12.6

API Guide

Overview

The VergeOS API allows developers to interact with the VergeOS system programmatically. It provides access to system operations such as creating virtual machines, managing resources, and interacting with billing and catalog repositories. The API uses standard REST-like conventions and supports multiple authentication methods. This guide provides an overview of the VergeOS API, endpoint documentation, example requests, and error handling.

This document outlines the usage of the API. Detailed information for the API can be found within the VergeOS UI as a Swagger documentation page, which is dynamically generated and shows a complete listing of available API tables and operations.

Swagger Interface

To access the Swagger documentation in the VergeOS UI:

  1. Login to the VergeOS system with valid credentials.
  2. Select System from the top menu.
  3. Select API Documentation.
  4. The Swagger documentation page will open. This page provides detailed examples for each API operation, including the ability to test the API directly.

Swagger Documentation Example

  1. Select an individual table and choose one of the available GET/POST/DELETE/PUT options to view and test API actions.

  2. Specify the parameters and click the Execute Button to run the API command. This will return the response, which includes the response body, header, and a curl example.

API Basics

HTTP Methods

The VergeOS API uses standard HTTP methods like GET, POST, PUT, and DELETE for resource manipulation.

GET Parameters

  • fields: Specify which fields to return in the result set.
  • filter: Filter the result set based on certain criteria.
  • sort: Sort the results by a specified field.
  • limit: Limit the number of returned results.

Authentication

All API requests must be made over HTTPS and require authentication using basic access authentication or a session token.

Additional Notes

  • Rate Limits: The API supports a maximum of 1000 requests per hour per API key.
  • Data Formats: All responses are returned in JSON format.
  • Pagination: Endpoints that return large sets of data support pagination using offset and limit query parameters.
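
The pagination note above can be sketched as a small helper that walks a collection page by page. The offset and limit parameter names come from this guide; the resource name and row count are illustrative:

```python
def page_urls(resource, total, limit=100):
    """Yield paginated GET paths using the offset and limit query parameters."""
    for offset in range(0, total, limit):
        yield f"/api/v4/{resource}?fields=most&limit={limit}&offset={offset}"

# A hypothetical collection of 250 rows fetched 100 at a time
for url in page_urls("vms", total=250):
    print(url)
```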

Authentication

VergeOS supports two methods of authentication:

  1. Basic HTTP Authentication
    The API is available only through SSL.

  2. Token-based Authentication
    Developers must request a token from the API by posting to the /sys/tokens endpoint. The token is then passed in subsequent API requests in the x-yottabyte-token header.

Example Authentication Request

To obtain a token:

curl --header "X-JSON-Non-Compact: 1" --basic \
  --data-ascii '{"login": "USERNAME", "password": "PASSWORD"}' \
  --insecure --request "POST" \
  --header 'Content-Type: application/json' \
  'https://your-verge-instance.com/api/sys/tokens'

Example response:

{
   "location":"\\/sys\\/tokens\\/3a334563456378845634563b7b82d2efcadce9",
   "dbpath":"tokens\\/3a334563456378845634563b7b82d2efcadce9",
   "$row":1,
   "$key":"3a334563456378845634563b7b82d2efcadce9"
}

Use the token from the "$key" field in all subsequent requests:

x-yottabyte-token: 3a334563456378845634563b7b82d2efcadce9

To log out, send a DELETE request to the /sys/tokens/{token} endpoint.

Example Logout Request

DELETE /sys/tokens/3a334563456378845634563b7b82d2efcadce9
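
In a script, the handshake above reduces to one POST plus a header on every later call. A minimal stdlib-only sketch of the response handling; the token value is the placeholder from the example above:

```python
import json

def parse_token_response(body: str) -> str:
    """Extract the session token from a POST /sys/tokens response ($key field)."""
    return json.loads(body)["$key"]

def auth_headers(token: str) -> dict:
    """Headers to attach to subsequent authenticated API requests."""
    return {"x-yottabyte-token": token, "accept": "application/json"}

# Example response body, abridged from the documentation above
body = json.dumps({"$row": 1, "$key": "3a334563456378845634563b7b82d2efcadce9"})
token = parse_token_response(body)
print(auth_headers(token))
```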

Example Virtual Machines

The VMs section of the VergeOS API allows users to manage virtual machines programmatically. It includes endpoints to list, create, modify, and delete VMs.

Retrieve a List of Virtual Machines

Endpoint:
GET /v4/vms?fields=most

Description:
Retrieves a list of all VMs in the system with details such as CPU cores, RAM, machine type, and configuration details.

Example Request:

curl -X 'GET' \
  'https://your-verge-instance.com/api/v4/vms?fields=most' \
  -H 'accept: application/json' \
  -H 'x-yottabyte-token: <your-token>'

Example Response:

[
  {
    "$key": 1,
    "name": "CentOS 7 (Latest) 1.0-7",
    "machine": 7,
    "cpu_cores": 2,
    "cpu_type": "Cascadelake-Server",
    "ram": 2048,
    "os_family": "linux",
    "is_snapshot": true,
    "boot_order": "cd",
    "rtc_base": "utc",
    "console": "vnc",
    "uefi": false,
    "secure_boot": false,
    "serial_port": true,
    "uuid": "d3914756-4ec5-9dfe-5c45-b28af2fd3d73",
    "created": 1724435418,
    "modified": 1724435418
  }
]

Overview of the Data Returned:

  • $key: The unique identifier for the VM.
  • name: The name of the virtual machine.
  • machine: Machine ID associated with the VM.
  • cpu_cores: Number of CPU cores allocated to the VM.
  • ram: Amount of RAM allocated (in MB).
  • os_family: Operating system type.
  • uuid: Universally unique identifier (UUID) for the VM.
  • created: The creation timestamp.
  • modified: The last modified timestamp.
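
Once decoded, the response is an ordinary JSON list keyed by the fields above. A short sketch using values abridged from the example response (note is_snapshot: snapshot rows appear alongside live VMs in the listing):

```python
import json

# Abridged from the example response above
body = """[
  {"$key": 1, "name": "CentOS 7 (Latest) 1.0-7", "cpu_cores": 2,
   "ram": 2048, "os_family": "linux", "is_snapshot": true}
]"""

vms = json.loads(body)
# Index rows by $key and exclude snapshots when summarizing allocated RAM
by_key = {vm["$key"]: vm for vm in vms}
active_ram_mb = sum(vm["ram"] for vm in vms if not vm["is_snapshot"])
print(len(by_key), "rows,", active_ram_mb, "MB RAM in non-snapshot VMs")
```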

Create a New Virtual Machine

Endpoint:
POST /v4/vms

Description:
Creates a new virtual machine with specific configuration details, such as CPU cores, RAM, machine type, boot order, etc.

Example Request:

curl -X 'POST' \
  'https://your-verge-instance.com/api/v4/vms' \
  -H 'accept: application/json' \
  -H 'x-yottabyte-token: <your-token>' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "rest",
    "description": "test",
    "machine_type": "pc-q35-9.0",
    "allow_hotplug": true,
    "cpu_cores": 1,
    "cpu_type": "Broadwell",
    "ram": 1024,
    "os_family": "linux",
    "boot_order": "cd",
    "uefi": false,
    "note": "test vm"
  }'

Example Response:

{
  "location": "/v4/vms/36",
  "dbpath": "vms/36",
  "$row": 36,
  "$key": "36"
}

Overview of the Data Returned:

  • location: The location of the newly created VM resource.
  • dbpath: Database path of the new VM.
  • $row: Row ID of the VM.
  • $key: Unique key for the VM.

Example Virtual Networks (Vnets)

The Vnets section of the VergeOS API allows users to manage virtual networks (Vnets) programmatically. It includes endpoints to retrieve, create, and manage network resources, including internal and external networks, and allows advanced options like rate limiting.

Retrieve Vnet Details

Endpoint:
GET /v4/vnets?fields=most

Description:
Retrieves a list of all Vnets in the system with details such as network type, MTU, DHCP settings, and DNS configuration.

Example Request:

curl -X 'GET' \
  'https://your-verge-instance.com/api/v4/vnets?fields=most' \
  -H 'accept: application/json' \
  -H 'x-yottabyte-token: <your-token>'

Example Response:

[
  {
    "$key": 6,
    "name": "Internal Test 1",
    "advanced_options": {
      "dnsmasq": [
        "--dhcp-boot=netboot.xyz.kpxe,,192.168.10.20",
        "--dhcp-match=set:efi-x86,option:client-arch,6",
        "--dhcp-boot=tag:efi-x86,netboot.xyz.efi,,192.168.10.20",
        "--dhcp-match=set:efi-x86_64,option:client-arch,7",
        "--dhcp-boot=tag:efi-x86_64,netboot.xyz.efi,,192.168.10.20",
        "--dhcp-match=set:efi-x86_64,option:client-arch,9",
        "--dhcp-boot=tag:efi-x86_64,netboot.xyz.efi,,192.168.10.20"
      ]
    },
    "type": "internal",
    "layer2_type": "vxlan",
    "network": "192.168.100.0/24",
    "mtu": 9000,
    "dhcp_enabled": true,
    "dhcp_start": "192.168.100.100",
    "dhcp_stop": "192.168.100.200",
    "rate_limit": 0,
    "rate_limit_type": "mbytes/second",
    "gateway": ""
  }
]

Overview of the Data Returned:

  • $key: The unique identifier for the Vnet.
  • name: Name of the virtual network.
  • advanced_options: Advanced options passed to services running inside the network; in this example, netboot flags for dnsmasq.
  • type: Network type (e.g., "internal").
  • layer2_type: The type of Layer 2 networking, such as VXLAN.
  • network: Network CIDR block.
  • mtu: Maximum Transmission Unit (MTU) size.
  • dhcp_enabled: Indicates whether DHCP is enabled for this network.
  • dhcp_start: Starting IP address for the DHCP pool.
  • dhcp_stop: Ending IP address for the DHCP pool.
  • rate_limit: Rate limit for the network (in mbytes/second).
  • gateway: The default gateway for the network.

Create an Internal Network with Rate Limiting

Endpoint:
POST /v4/vnets

Description:
Creates a new internal virtual network with rate limiting and DHCP settings.

Example Request:

curl -X 'POST' \
  'https://your-verge-instance.com/api/v4/vnets' \
  -H 'accept: application/json' \
  -H 'x-yottabyte-token: <your-token>' \
  -H 'Content-Type: application/json' \
  -d '{
    "name":"int1",
    "description":"workloads",
    "type":"internal",
    "mtu":"9000",
    "network":"192.168.80.0/24",
    "gateway":"192.168.80.1",
    "dnslist":"1.1.1.1",
    "dhcp_enabled": true,
    "rate_limit": 100,
    "rate_limit_burst": 500,
    "dhcp_start":"192.168.80.100",
    "dhcp_stop":"192.168.80.200"
  }'

Example Response:

{
  "location": "/v4/vnets/8",
  "dbpath": "vnets/8",
  "$row": 8,
  "$key": "8"
}

Overview of the Data Returned:

  • location: The location of the newly created Vnet resource.
  • dbpath: Database path of the new Vnet.
  • $row: Row ID of the Vnet.
  • $key: Unique key for the Vnet.


Resources

Below is an example URL used to query a list of machines.

Example: https://user1:xxxxxx@server1.verge.io/api/v4/machines?fields=all

The URL breaks down as follows:

  • user1: the user name
  • xxxxxx: the user password
  • server1.verge.io: the server host name or IP
  • /api/v4/machines: the resource location (URI)
  • filter=, fields=, sort=, limit=: GET options, described below

GET Options

Fields
  • Specify which fields to return in the result set (may also be a view if there is one defined for the table schema).
  • all returns every field.
  • most returns most fields except for argument fields and rows.
  • summary returns fields marked as 'summary' in their schema.
  • Example: fields=name,email,enabled,groups[all] as all_groups,collapse(groups[name]) as first_groups_name

Field functions: collapse, datetime, upper, lower, count, diskspace, display, hex, sha1, sum, avg, min, max

Filter
  • Filter result sets by specified criteria.
  • Similar to OData.
  • Example: filter=enabled eq true and size gt 1048576.
  • Example: filter=cputype eq 'qemu' or cputype eq 'kvm'.
Operators:

  • eq: Equal
  • ne: Not equal
  • gt: Greater than
  • ge: Greater than or equal
  • lt: Less than
  • le: Less than or equal
  • bw: Begins with
  • ew: Ends with
  • and: Logical and
  • or: Logical or
  • cs: Contains string (case sensitive)
  • ct: Contains text (case insensitive)
  • rx: Regex match
Sort
  • Sort results by the specified field.
  • Example: sort=+name.
  • Example: sort=-id.
Limit
  • limit (integer) limits the result set to a specified number of entries. A value of 0 means unlimited.
  • Example: limit=1.
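
The options above compose into an ordinary query string. A small stdlib-only helper that assembles one; the resource and filter values are illustrative, and urlencode percent-escapes the spaces and the leading + in the sort value:

```python
from urllib.parse import urlencode

def build_query(resource, **options):
    """Assemble a GET path from the fields/filter/sort/limit options above."""
    params = {k: v for k, v in options.items() if v is not None}
    return f"/api/v4/{resource}?{urlencode(params)}"

url = build_query("machines",
                  fields="most",
                  filter="enabled eq true and ram gt 1024",
                  sort="+name",
                  limit=10)
print(url)
```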

Generic HTTP Response Codes

  • 400 - Bad Request: The request was invalid.
  • 401 - Failed Login / Login Required: Authentication failed or is required.
  • 403 - Permission Denied: You lack the required permissions.
  • 404 - Resource Not Found: The requested row or API does not exist.
  • 405 - Not Permitted: The operation is not allowed.
  • 409 - Row Exists: The resource already exists.
  • 422 - Failed Validation / Invalid Parameter: Validation failed.
  • 500 - Internal Server Error: An unhandled error occurred.
POST-Specific
  • 201 - Created: A new row/resource was successfully created.
Websocket-Specific (Used for VNC/SPICE)
  • 101 - Switching Protocols: The protocol was successfully switched.
PUT/GET/DELETE
  • 200 - Success: The operation completed successfully.

Schema Table Definitions

Field Types
  • bool
  • text
  • string
  • num
  • uint8
  • uint16
  • uint32
  • uint64
  • int8
  • int16
  • int32
  • int64
  • enabled
  • created
  • created_ms
  • created_us
  • modified
  • modified_ms
  • modified_us
  • filename
  • filesize
  • fileused
  • fileallocated
  • filemodified
  • json
  • row
  • rows
Schema Owner / Parent Field
  • Owner Field: If the owner field is null, normal permissions apply. If the owner field has a value, permissions are replaced by a permission check to the owner.
  • Parent Field: The permission check is applied to the row itself, and if permissions fail, permissions are also checked on the parent row.

Full Table Schema

To retrieve a table’s schema, append $table to the URI:

/api/v4/machines/$table (replace "machines" with the table name).

You will be prompted for credentials; VergeOS admin credentials are required. The output is JSON: Firefox renders it in a readable viewer by default, while other browsers may need a JSON formatter or external tool for readability.
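Because `$table` is a literal path segment rather than a variable, take care that a shell or template engine does not expand it. A small helper makes the intent explicit (the host name below is a placeholder):

```python
def schema_url(base: str, table: str) -> str:
    """Build the schema URI; "$table" is a literal segment, not a variable."""
    return f"{base}/api/v4/{table}/$table"

# Placeholder host; substitute your VergeOS system's address.
print(schema_url("https://vergeos.example.com", "machines"))
# → https://vergeos.example.com/api/v4/machines/$table
```

The same caution applies in curl: single-quote the URL so the shell does not treat `$table` as an environment variable.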


Example Errors

Example Error (HTTP Code 422)
{
  "err": "Validation error on field: 'dhcp_start' - 'fails validation test'"
}
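The error body shown above is plain JSON with a single `err` key, so a client can extract the message directly (a sketch based only on this example; other endpoints may return additional keys):

```python
import json

# The example 422 body from above, as received over the wire.
body = '{"err": "Validation error on field: \'dhcp_start\' - \'fails validation test\'"}'

payload = json.loads(body)
message = payload.get("err", "unknown error")  # fall back if "err" is absent
print(message)
```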

VergeOS uses standard HTTP status codes to indicate the result of an API request.

  • 400 Bad Request: The request is invalid or cannot be processed.
  • 401 Unauthorized: The API key is missing or invalid.
  • 403 Forbidden: The API key lacks the required permissions.
  • 404 Not Found: The resource does not exist.
  • 500 Internal Server Error: A server error occurred.

Document Information

  • Last Updated: 2024-11-14
  • VergeOS Version: 4.12.6

Terraform VergeIO Provider

The Terraform VergeIO Provider enables the integration and automation of VergeOS infrastructure with Terraform. It allows users to define, manage, and scale VergeOS resources as part of Infrastructure as Code (IaC) workflows.

For the latest provider documentation and examples, refer to the provider's GitHub repository.


Example Usage

For more detailed usage examples, check the docs folder in the GitHub repository.

Example Configuration

provider "vergeio" {
  host     = "https://some_url_or_ip"
  username = "my_user"
  password = "my_password"
  insecure = false  # Use true if using self-signed SSL certificates
}

resource "vergeio_vm" "new_vm" {
  name         = "NEW VM"
  description  = "NEW TF VM"
  enabled      = true
  os_family    = "linux"
  cpu_cores    = 4
  machine_type = "q35"
  ram          = 8192
}

Initializing and Applying

To apply the configuration:

terraform init && terraform apply

Configuration Reference

  • host (Required): URL or IP address for the VergeOS system or tenant.
  • username (Required): Username for the VergeOS system or tenant.
  • password (Required): Password for the provided username.
  • insecure (Optional): Set to true for systems using self-signed SSL certificates.

Resources

The following VergeOS resources can be managed via Terraform:

vergeio_drive
vergeio_member
vergeio_network
vergeio_nic
vergeio_user
vergeio_vm

Data Sources

The following data sources are available for querying VergeOS resources:

vergeio_clusters
vergeio_groups
vergeio_mediasources
vergeio_networks
vergeio_nodes
vergeio_version
vergeio_vms

Testing a Sample Configuration

To test your configuration, create a main.tf file in your Terraform workspace:

terraform {
  required_providers {
    vergeio = {
      source = "vergeio/cloud/vergeio"
    }
  }
}

provider "vergeio" {
  host     = "https://someURLorIP"
  username = "username"
  password = "password"
}

resource "vergeio_vm" "new_vm" {
  name         = "NEW VM"
  description  = "NEW TF VM"
  enabled      = true
  os_family    = "linux"
  cpu_cores    = 4
  machine_type = "q35"
  ram          = 8192
}

Then, run the following command:

terraform init && terraform apply

Document Information

  • Last Updated: 2024-09-03
  • VergeOS Version: 4.12.6

Updating a VergeOS System with Airgap License

Overview

Key Points

  • System updates should be performed during a maintenance window.
  • This guide details the process of manually updating a VergeOS system using an air-gap license.
  • The update is performed from an ISO file, so systems without internet access can be kept up to date.
  • Ensure you have a valid air-gap license and the latest ISO file before starting.

This guide provides a step-by-step process to manually update your air-gapped VergeOS system using an ISO file.

Prerequisites

  • Access to the VergeOS Cloud Dashboard.
  • The latest VergeOS update ISO file.
  • A valid air-gap license.
  • A recent backup of your VergeOS system.

Steps

  1. Download the Update ISO

    • Visit the VergeOS updates page at https://updates.verge.io/download.
    • Download the latest VergeOS release ISO file.

!!! tip "Pro Tip" Ensure that the ISO file corresponds to your current VergeOS version to avoid compatibility issues.

  2. Upload the ISO to VergeOS

    • Log in to your VergeOS environment.
    • Navigate to Files.
    • Upload the downloaded ISO file to the Files section.

!!! note The upload process may take a few minutes depending on your network speed.

  3. Configure Update Settings

    • Go to System > Updates > Edit Settings.
    • In the Update Source dropdown menu, select -- Update ISO --.
    • Choose the ISO file you just uploaded from the Files section.
    • Click Submit to save the settings.

  4. Perform the Update

    • Return to the Updates section and click Check For Updates.
    • Once the update is detected, click Download.
    • After the download completes, click Install.
    • Follow the prompts to Reboot the system to apply the updates.

!!! warning "Important" Do not interrupt the update process. Ensure that the system remains powered on and connected during the update.

Troubleshooting

Common Issues

  • Issue: Update not detected after uploading the ISO.
  • Solution: Ensure the ISO was uploaded correctly and reselect it in the Update Source settings.

  • Issue: Errors during the update process.
  • Solution: Check system logs for detailed error messages and verify that your air-gap license is valid.

  • Issue: System fails to reboot after the update.
  • Solution: Contact Verge support for assistance.


Need Help?

If you encounter any issues during the update process or have any questions, please reach out to our support team.


Document Information

  • Last Updated: 2024-08-19
  • VergeOS Version: 4.12.6

Requesting an Airgap License for VergeOS

Air-gap licensing is not common and requires justification. Please see Licensing and Software Updates for more information.

Overview

Key Points

  • VergeOS requires a valid license for operation
  • Air-gapped environments need a special airgap license
  • The process involves generating a license request file and emailing it to Verge

This guide walks you through the process of requesting an airgap license for VergeOS systems in environments without outbound Internet access.

Prerequisites

  • Access to the VergeOS Cloud Dashboard
  • A working email client on a machine that can send external emails
  • Understanding of your system's airgapped status

Steps

  1. Navigate to System Settings

    • From the System Dashboard, click System on the left menu
    • Click Settings on the left menu
  2. Initiate License Request

    • In the License section, click the Request License button
  3. Generate License Request File

    • A popup window titled "Request Generated" will appear
    • This window displays information about the license request file
  4. Download Request File

    • Click the Download Request File button
    • Save the license request file to your local machine
  5. Prepare Email to Verge

    • Click the Email license@Verge.io button
    • This opens your default email client with a pre-addressed email
  6. Send License Request

    • Attach the downloaded license request file to the email
    • Provide additional information in the email body (e.g., company name, purpose of license)
    • Send the email to Verge's licensing team

What Happens Next

  1. Verge processes your request and generates an airgap license file
  2. You receive a reply email with the airgap license file attached
  3. Upload the received license file to your VergeOS system (covered in a separate guide)

Processing Time

If you haven't received a response within 2 business days, please follow up with Verge's support team.

Important Considerations

  • Ensure the system requesting the license is the one you intend to license
  • Keep the license request file secure
  • For multiple systems, repeat this process for each system individually

Troubleshooting

Common Issues

  • Problem: Unable to generate license request file
  • Solution: Verify your access permissions in the VergeOS Cloud Dashboard

  • Problem: Email client doesn't open automatically
  • Solution: Manually compose an email to license@Verge.io and attach the downloaded request file

Need Help?

If you encounter any issues while requesting an airgap license or have questions about this process, please don't hesitate to contact our support team.