Oracle Linux Virtualization Manager and Veeam Integration

Over the past few days I’ve been taking the opportunity to familiarise myself with KVM-based hypervisors and the integrations available with Veeam. Below are a few notes and lessons learned along the way in getting the system deployed and running. My deployment was a minimal setup: a single KVM host deployed on top of my existing VMware lab. Some of the notes reflect that, in case someone is looking to do something similar, either on VMware or another hypervisor.

Deploying Upstream oVirt

My first attempt to get this solution up and running was with the upstream release of oVirt on CentOS Stream 9. Whether the issue was with CentOS or with the specific version of oVirt I was deploying, there was no end of frustration in getting the solution up, to the point that I eventually gave up. Aside from my initial brainfart of not exposing hardware-assisted virtualisation to the guest OS, I had multiple instances of the Ansible playbook failing and throwing error messages and logs that were difficult to interpret. The only error message displayed was along the lines of "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch." The error log then stated "Network not found: no network with matching name 'default'", with no indication of whether that network was supposed to have been brought up by the playbook or whether there was a pre-existing problem of some kind. Eventually I decided that the benefit of the bleeding edge wasn’t worth the pain and opted instead for Oracle Linux Virtualization Manager.

Remember to expose hardware-assisted virtualisation to the guest OS when running nested hypervisors, otherwise you’ll get a failed deployment.
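A quick way to confirm the flag actually made it through to the guest is to look for the vmx (Intel VT-x) or svm (AMD-V) CPU flag inside the nested host before kicking off the deployment. A minimal sketch (the /proc/cpuinfo path is Linux-specific):

```python
import re


def has_hw_virt(cpuinfo: str) -> bool:
    """Return True if the Intel (vmx) or AMD (svm) virtualisation flag is present."""
    return re.search(r"\b(vmx|svm)\b", cpuinfo) is not None


# Usage on the nested guest:
#   with open("/proc/cpuinfo") as f:
#       print("exposed" if has_hw_virt(f.read())
#             else "missing - enable nested virtualisation on the outer hypervisor")
```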

Deploying OLVM

A slight annoyance in getting OLVM up and running is that it is currently only documented as supported on Oracle Linux 8, despite version 9 having been out for quite a while. That minor grievance aside, getting OLVM up and running is fairly painless, and the documentation is easy to navigate. Once again I hit a hiccup due to the nature of my deployment: I wasn’t able to connect to the newly deployed appliance because I hadn’t allowed Forged Transmits on the vSphere port group backing my nested host, which effectively dropped any packets destined for the management console running inside my KVM host. On fixing that I was able to get to the Manager Portal and log in. For previous versions of oVirt/OLVM, the default login username was admin@internal, based on the details I’ve seen; with more recent versions this has changed to admin@ovirt. This isn’t particularly well documented and took a bit of digging to find, but I eventually stumbled across the detail in an oVirt forum and got in.
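Before chasing login problems, it can be worth confirming the engine itself is reachable. This is a hedged sketch probing the engine's unauthenticated health endpoint; the FQDN is a placeholder, and certificate verification is disabled only because a fresh lab engine ships with a self-signed certificate:

```python
import ssl
import urllib.request


def health_url(engine_fqdn: str) -> str:
    """Build the oVirt/OLVM engine health-check URL (no login required)."""
    return f"https://{engine_fqdn}/ovirt-engine/services/health"


def check_engine(engine_fqdn: str) -> str:
    """Fetch the health page; a fresh engine typically reports the DB as up."""
    # Lab-only: skip certificate verification for the self-signed engine cert.
    # In production, trust the engine CA instead.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(health_url(engine_fqdn), context=ctx) as resp:
        return resp.read().decode()


# Usage (placeholder FQDN):
#   print(check_engine("olvm.lab.local"))
```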

Connecting to Veeam Backup & Replication

Not to be outdone on the username front, I was again stymied when connecting Veeam to the new KVM Manager to test out the backup process. Downloading and installing the plugin was a simple experience, as we’ve come to expect; however, when it came time to connect to the KVM Manager, none of the usernames I put forward would work. I tried the above admin@ovirt, admin@internal, root, and even went through the linked article’s process to add a new user to the OLVM system in case that was needed to get access. None of them resolved anything. This again comes down to breaking changes recently introduced to oVirt upstream in the form of a new authentication engine. What needed to be done was to append “@internalsso” to the username so that the credentials in the oVirt Engine are used instead of the Keycloak authentication system. With that debacle out of the way, it was a smooth process to get the Veeam proxy deployed, running, and connected to Veeam.
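A quick way to confirm which username form the engine accepts, before typing it into Veeam, is a basic-auth probe of the oVirt REST API. This is a hedged sketch: the FQDN and password are placeholders, and the helper simply appends the “@internalsso” suffix described above:

```python
import base64
import ssl
import urllib.request


def with_internal_sso(username: str) -> str:
    """Append '@internalsso' so the engine's internal credentials are used
    instead of the Keycloak authentication system."""
    return username + "@internalsso"


def api_probe(engine_fqdn: str, username: str, password: str) -> int:
    """Basic-auth GET against the top-level API; HTTP 200 means the
    credentials are accepted."""
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"https://{engine_fqdn}/ovirt-engine/api",
        headers={"Authorization": f"Basic {creds}"},
    )
    # Lab-only: skip verification for the self-signed engine certificate.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.status


# Usage (placeholder FQDN and password):
#   user = with_internal_sso("admin@ovirt")   # 'admin@ovirt@internalsso'
#   print(api_probe("olvm.lab.local", user, "changeme"))
```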

Next Steps

While I now have a basic nested host installed, I’m still lacking any workloads on it to see what day-to-day activity on OLVM is like, so I’ll be trying that out and seeing what the process is like to back up and restore VMs to and from the host. This is less about performance, given the nested nature of the environment, and more about seeing what can be done and how it gets done. There are some interesting out-of-the-box Grafana dashboards that will be worthwhile to explore too, to compare the visibility of various metrics against vCenter. Outside of OLVM, I’ll be exploring OpenShift Virtualization to see how the system works and to understand how backups of VMs work when they sit on top of a Kubernetes cluster. If it’s a successful trial, I wouldn’t be surprised if in time we saw OSV or another KubeVirt offering begin to challenge for virtualisation dominance, given that it allows a unified management and integration approach across both containerised and virtualised workloads. Time will tell.
