You can use {the-lc} {project-first} to migrate virtual machines from the following source providers to {virt} destination providers:
- VMware vSphere
- {rhv-full} ({rhv-short})
- {osp}
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote {virt} clusters
The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.
This release has the following technical changes:
In earlier releases of {project-short}, users had to specify a fingerprint when creating a vSphere provider. This required users to retrieve the fingerprint from the server that vCenter runs on. {project-short} no longer requires this fingerprint as an input, but rather computes it from the specified certificate in the case of a secure connection or automatically retrieves it from the server that runs vCenter/ESXi in the case of an insecure connection.
The user interface console has improved the process of creating a migration plan. The new migration plan dialog enables faster creation of migration plans.
It includes only the minimal settings that are required, while you can configure advanced settings separately. The new dialog also provides defaults for network and storage mappings, where applicable. The new dialog can also be invoked from the Provider > Virtual Machines tab, after selecting the virtual machines to migrate. It also better aligns with the user experience in the {ocp} console.
Virtual machine preferences have replaced {ocp-name} templates. {project-short} currently falls back to using {ocp-name} templates when a relevant preference is not available.
Custom mappings of guest operating system type to virtual machine preference can be configured by using config maps, either to use custom virtual machine preferences or to support additional guest operating system types.
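For illustration, such a mapping could be expressed as a config map similar to the following sketch; the map name, namespace, and keys shown here are assumptions for illustration, not documented defaults:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: os-to-preference-map   # hypothetical name; use the map your deployment is configured to read
  namespace: openshift-mtv
data:
  # guest operating system type -> virtual machine preference (illustrative entries)
  rhel8_64Guest: rhel.8
  windows2019srv_64Guest: windows.2k19
----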
Migration from OVA is no longer a Technical Preview and is now a fully supported feature.
{project-short} creates the VM with its desired Running state on the target provider, instead of creating the VM and then running it as an additional operation. (MTV-794)
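For context, in {virt} the desired run state is part of the VirtualMachine specification, so the VM can be created already set to run. A minimal sketch, with illustrative name and resources:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: migrated-vm        # illustrative name
spec:
  running: true            # desired Running state at creation; no separate start operation
  template:
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 2Gi
----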
The {project-short} web console can no longer download must-gather logs. With this update, you must download must-gather logs by using CLI commands. For more information, see Must Gather Operator.
{project-short} no longer runs pvc-init pods during cold migration from a vSphere provider to the {ocp-name} cluster that {project-short} is deployed on. However, in other flows where data volumes are used, they are set with the cdi.kubevirt.io/storage.bind.immediate.requested annotation, and CDI runs first-consume pods for storage classes with the volume binding mode WaitForFirstConsumer.
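For reference, a data volume carrying this annotation might look like the following sketch; the name and storage class are illustrative:

[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: migrated-disk-0    # illustrative name
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"   # bind immediately despite WaitForFirstConsumer
spec:
  storage:
    storageClassName: wffc-storage-class   # illustrative WaitForFirstConsumer class
    resources:
      requests:
        storage: 20Gi
  source:
    blank: {}
----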
This section provides features and enhancements introduced in {project-full} 2.6.
You can now perform cold migrations from a vSphere provider of VMs whose virtual disks are encrypted by Linux Unified Key Setup (LUKS). (MTV-831)
You can now specify the primary disk when you migrate VMs from vSphere with more than one bootable disk. This avoids {project-short} automatically attempting to convert the first bootable disk that it detects while it examines all the disks of a virtual machine. This feature is needed because the first bootable disk is not necessarily the disk that the VM is expected to boot from in {virt}. (MTV-1079)
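In the Plan CR, the primary disk is specified per VM; a minimal, abridged sketch, assuming the rootDisk field selects the disk to convert (IDs and values illustrative, provider and mapping references omitted):

[source,yaml]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: vsphere-plan       # illustrative name
  namespace: openshift-mtv
spec:
  # ...provider and mapping references omitted for brevity
  vms:
    - id: vm-431           # illustrative vSphere VM ID
      rootDisk: /dev/sdb   # the disk the VM should boot from after migration
----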
You can now access the UI of a remote cluster when you create a source provider. For example, if the provider is a remote {rhv-full} ({rhv-short}) cluster, {project-short} adds a link to the remote {rhv-short} web console when you define the provider. This feature makes it easier for you to manage and debug a migration from remote clusters. (MTV-1054)
You can now specify a CA certificate that can be used to authenticate the server that runs vCenter or ESXi, depending on the specified SDK endpoint of the vSphere provider. (MTV-530)
You can now specify a CA certificate that can be used to authenticate the API server of a remote {ocp-name} cluster. (MTV-728)
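As an illustration, the CA certificate is supplied through the provider's secret; the following sketch assumes a secret layout with url, token, and cacert fields, with illustrative names and placeholder values:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: remote-cluster-secret              # illustrative name
  namespace: openshift-mtv
type: Opaque
stringData:
  url: https://api.remote.example.com:6443   # illustrative API endpoint
  token: <service-account-token>             # placeholder
  cacert: |                                  # CA certificate used to authenticate the API server
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
----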
{project-short} enables the configuration of vSphere providers with the SDK of ESXi. You need to select ESXi as the Endpoint type of the vSphere provider and specify the URL of the SDK of the ESXi server. (MTV-514)
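A minimal sketch of such a provider definition, assuming the sdkEndpoint setting of the Provider CR (host names illustrative):

[source,yaml]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: esxi-provider                        # illustrative name
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://esxi-host.example.com/sdk     # URL of the SDK of the ESXi server
  settings:
    sdkEndpoint: esxi                        # select the ESXi SDK instead of vCenter
  secret:
    name: esxi-provider-secret               # illustrative name
    namespace: openshift-mtv
----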
{project-short} supports the migration of VMs that were created from images in {osp}. (MTV-644)
{project-short} supports migrations of VMs that are set with Fibre Channel (FC) LUNs from {rhv-short}. As with other LUN disks, you need to ensure the {ocp-name} nodes have access to the FC LUNs. During the migrations, the FC LUNs are detached from the source VMs in {rhv-short} and attached to the migrated VMs in {ocp-name}. (MTV-659)
{project-short} sets the CPU type of migrated VMs in {ocp-name} to their custom CPU type in {rhv-short}. In addition, a new option was added to migration plans that are set with {rhv-short} as a source provider to preserve the original CPU types of source VMs. When this option is selected, {project-short} identifies the CPU type based on the cluster configuration and sets this CPU type for migrated VMs whose source VMs are not set with a custom CPU type. (MTV-547)
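A minimal, abridged sketch of a plan that uses this option, assuming the preserveClusterCpuModel field (plan name illustrative, other required fields omitted):

[source,yaml]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: rhv-plan                 # illustrative name
  namespace: openshift-mtv
spec:
  # ...provider, mapping, and VM references omitted for brevity
  preserveClusterCpuModel: true  # set the cluster-derived CPU type on migrated VMs without a custom CPU type
----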
Red Hat Enterprise Linux (RHEL) 9 does not support RHEL 6 as a guest operating system. Therefore, RHEL 6 is not supported in {ocp-name} Virtualization. With this update, a validation of the RHEL 6 guest operating system was added to {ocp-name} Virtualization. (MTV-413)
The ability to retrieve CA certificates, which was available in previous versions, has been restored. The vSphere Verify certificate option is in the add-provider dialog. This option was removed in the transition to the {ocp} console and has now been added back to the console. This functionality is also available for {rhv-short}, {osp}, and {ocp-name} providers. (MTV-737)
{project-short} validates the availability of a VDDK image that is specified for a vSphere provider on the target {ocp-name} cluster as part of the validation of a migration plan. {project-short} also checks whether the libvixDiskLib.so symbolic link (symlink) exists within the image. If the validation fails, the migration plan cannot be started. (MTV-618)
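For reference, the image that this validation checks is the one specified in the provider settings; a minimal, abridged sketch with illustrative values:

[source,yaml]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider                     # illustrative name
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk       # illustrative vCenter SDK URL
  settings:
    vddkInitImage: quay.io/example/vddk:8    # illustrative image reference; must contain libvixDiskLib.so
  secret:
    name: vsphere-provider-secret            # illustrative name
    namespace: openshift-mtv
----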
{project-short} presents a warning when attempting to migrate a VM that is set with a TPM device from {rhv-short} or vSphere. The migrated VM in {ocp-name} would be set with a TPM device but without the content of the TPM device on the source environment. (MTV-378)
With this update, you can edit plans that have failed to migrate any VMs. Some plans fail or are canceled because of incorrect network and storage mappings. You can now edit these plans until they succeed. (MTV-779)
The validation service includes default validation rules for virtual machines from the Open Virtual Appliance (OVA). (MTV-669)
This release has the following resolved issues:
In earlier releases of {project-short}, there was an issue with the incorrect handling of single and double quotes in interface configuration (ifcfg) files, which control the software interfaces for individual network devices. This issue has been resolved in {project-short} 2.6.7, covering additional IP configurations on Red Hat Enterprise Linux, CentOS, Rocky Linux, and similar distributions. (MTV-1439)
In earlier releases of {project-short}, there was an issue with the preservation of netplan-based network configurations. This issue has been resolved in {project-short} 2.6.7: static IP configurations are now preserved when netplan (netplan.io) is used, by using the netplan configuration files to generate udev rules for known mac-address and ifname tuples. (MTV-1440)
In earlier releases of {project-short}, there was an issue with the accidental leakage of error messages into udev .rules files. This issue has been resolved in {project-short} 2.6.7, with a static IP persistence script added to the udev rule file. (MTV-1441)
In earlier releases of {project-short}, there was a runtime error of invalid memory address or nil pointer dereference, caused by an attempt to dereference a nil pointer. This issue has been resolved in {project-short} 2.6.6. (MTV-1353)
In earlier releases of {project-short}, the scheduler could place all migration pods on a single node. When this happened, the node ran out of resources. This issue has been resolved in {project-short} 2.6.6. (MTV-1354)
In earlier releases of {project-short}, a vulnerability was found in the Forklift Controller: there was no verification of the authorization header beyond ensuring that it used bearer authentication. Without an authorization header and a bearer token, a 401 error occurred, but the presence of any token value returned a 200 response with the requested information. This issue has been resolved in {project-short} 2.6.6.
For more details, see (CVE-2024-8509).
In earlier releases of {project-short}, during the migration of Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 virtual machines (VMs) from VMware to {ocp} (OCP), the names of the network interfaces were modified, and the static IP configuration for the VM was no longer functional. This issue has been resolved for static IPs in Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 in {project-short} 2.6.5. (MTV-595)
In earlier releases of {project-short}, Windows VMs (Windows 2022) configured with multiple disks, which were Online before the migration, were Offline after a successful migration from {rhv-short} or VMware to {ocp} by using {project-short}. Only the C:\ primary disk was Online. This issue has been resolved for basic disks in {project-short} 2.6.4. (MTV-1299)
For details of the known issue of dynamic disks being Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd, see (MTV-1344).
In earlier releases of {project-short}, when migrating a Windows 2022 Server with a static IP address assigned and the Preserve static IPs option selected, the IP address was preserved after a successful migration, but the subnet mask, gateway, and DNS servers were not. This resulted in an incomplete migration, and the customer had to log in locally from the console to fully configure the network. This issue has been resolved in {project-short} 2.6.4. (MTV-1286)
After a successful Windows 2022 server guest migration by using {project-short} 2.6.1, the qemu-guest-agent was not completely installed: the Windows scheduled task was created, but it was set to run 4 hours in the future instead of the intended 2 minutes. (MTV-1325)
In earlier releases of {project-short}, a flaw was discovered in the stdlib package of the Go programming language, where a malformed DNS message could cause golang's net package to enter an infinite loop. This vulnerability primarily threatens web-facing applications and services that rely on Go for DNS queries. This issue has been resolved in {project-short} 2.6.3.
For more details, see (CVE-2024-24788).
In earlier releases of {project-short}, virt-v2v copied disks sequentially (vSphere only) because of a problem with the way {project-short} interpreted the controller_max_vm_inflight setting for vSphere to schedule migrations. This issue has been resolved in {project-short} 2.6.3. (MTV-1191)
In earlier versions of {project-short}, cold migrations from a vSphere provider with an ESXi SDK endpoint failed if any network was used except for the default network for disk transfers. This issue has been resolved in {project-short} 2.6.3. (MTV-1180)
In earlier versions of {project-short}, warm migrations over an ESXi network from a vSphere provider with a vCenter SDK endpoint were stuck in the DiskTransfer state because {project-short} was unable to locate image snapshots (vSphere only). This issue has been resolved in {project-short} 2.6.3. (MTV-1161)
In earlier versions of {project-short}, after cold migrations, there were leftover PVCs that had a status of Lost instead of being deleted, even after the migration plan that created them was archived and deleted. Investigation showed that this was because importer pods were retained after copying by default, rather than only in specific cases. This issue has been resolved in {project-short} 2.6.3. (MTV-1095)
In earlier versions of {project-short}, some VMs that were imported from vSphere were not mapped to a template in {ocp-short}, while other VMs with the same guest operating system were mapped to the corresponding template. Investigations indicated that this was because vSphere stopped reporting the operating system after not receiving updates from VMware tools for some time. This issue has been resolved in {project-short} 2.6.3 by taking the value of the operating system from the output of the inspection that virt-v2v performs on the disks. (MTV-1046)
A flaw was discovered in the implementation of the HTTP/2 protocol in the Go programming language (net/http, x/net/http2), which impacts previous versions of {project-short}. There were insufficient limitations on the number of CONTINUATION frames sent within a single stream. An attacker could potentially exploit this to cause a denial-of-service (DoS) attack. This flaw has been resolved in {project-short} 2.6.2.
For more details, see (CVE-2023-45288).
mtv-api-container: A flaw was found in the html/template Golang standard library package, which impacts previous versions of {project-short}. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the html/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in {project-short} 2.6.2.
For more details, see (CVE-2024-24785).
mtv-validation-container: A flaw was found in the net/mail Golang standard library package, which impacts previous versions of {project-short}. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. Because this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in {project-short} 2.6.2.
For more details, see (CVE-2024-24784).
mtv-api-container: A flaw was found in the crypto/x509 Golang standard library package, which impacts previous versions of {project-short}. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in {project-short} 2.6.2.
For more details, see (CVE-2024-24783).
mtv-api-container: A flaw was found in the net/http Golang standard library package, which impacts previous versions of {project-short}. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in {project-short} 2.6.2.
For more details, see (CVE-2023-45290).
In earlier releases of {project-short}, migration of VMs failed because the migration was stuck in the AllocateDisks phase. As a result, the migration did not progress, and PVCs were not bound. The root cause of the issue was that ImageConversion did not run when target storage was set to wait-for-first-consumer. The problem was resolved in {project-short} 2.6.2. (MTV-1126)
In earlier releases of {project-short}, forklift-controller panicked when a user attempted to import VMs that had direct LUNs. The problem was resolved in {project-short} 2.6.2. (MTV-1134)
In {project-short} 2.6.0, there was a problem in copying VMs with multiple disks from VMware vSphere and from OVA files. The migrations appeared to succeed, but all the disks were transferred to the same PV in the target environment, while the other disks were empty. In some cases, bootable disks were overwritten, so the VM could not boot. In other cases, data from the other disks was missing. The problem was resolved in {project-short} 2.6.1. (MTV-1067)
In {project-short} 2.6.0, migrations from one {ocp} cluster to another failed when the time to transfer the disks of a VM exceeded the time to live (TTL) of the Export API in {ocp-name}, which was set to 2 hours by default. The problem was resolved in {project-short} 2.6.1 by setting the default TTL of the Export API to 12 hours, which greatly reduces the possibility of an expiration of the Export API. Additionally, you can increase or decrease the TTL setting as needed. (MTV-1052)
In earlier releases of {project-short}, if a VM was configured with a disk that was on a datastore that was no longer available in vSphere at the time a migration was attempted, the forklift-controller crashed, rendering {project-short} unusable. In {project-short} 2.6.1, {project-short} presents a critical validation for VMs with such disks, informing users of the problem, and the forklift-controller no longer crashes, although it cannot transfer the disk. (MTV-1029)
In earlier releases of {project-short}, the PV was not removed when the OVA provider was deleted. This has been resolved in {project-short} 2.6.0, and the PV is automatically deleted when the OVA provider is deleted. (MTV-848)
In earlier releases of {project-short}, when migrating a VM that has a snapshot from VMware, the VM that was created in {ocp-name} Virtualization contained the data in the snapshot but not the latest data of the VM. This has been resolved in {project-short} 2.6.0. (MTV-447)
In earlier releases of {project-short}, when you canceled and deleted a failed migration plan after a PVC was created and the populate pods were spawned, the populate pods and PVC were not deleted. You had to delete the pods and PVC manually. This issue has been resolved in {project-short} 2.6.0. (MTV-678)
In earlier releases of {project-short}, when migrating from {ocp} to {ocp}, the version of the source provider cluster had to be {ocp} version 4.13 or later. This issue has been resolved in {project-short} 2.6.0: a validation is now shown when migrating from versions of {ocp-name} earlier than 4.13. (MTV-734)
In earlier releases of {project-short}, multiple disks from different storage domains were always mapped to a single storage class, regardless of the storage mapping that was configured. This issue has been resolved in {project-short} 2.6.0. (MTV-1008)
In earlier releases of {project-short}, a VM that was migrated from an OVA that did not include the firmware type in its OVF configuration was set with UEFI. This was incorrect for VMs that were configured with BIOS. This issue has been resolved in {project-short} 2.6.0, as {project-short} now consumes the firmware that is detected by virt-v2v during the conversion of the disks. (MTV-759)
In earlier releases of {project-short}, when configuring a transfer network for vSphere hosts, the console plugin created the Host CR before creating its secret. The secret should be specified first so that it can be validated before the Host CR is posted. This issue has been resolved in {project-short} 2.6.0. (MTV-868)
In earlier releases of {project-short}, when adding an OVA provider, the error message ConnectionTestFailed instantly appeared, although the provider had been created successfully. This issue has been resolved in {project-short} 2.6.0. (MTV-671)
In earlier releases of {project-short}, the ConnectionTestSucceeded condition was set to True even when the URL was different from the API endpoint for the RHV Manager. This issue has been resolved in {project-short} 2.6.0. (MTV-740)
In earlier releases of {project-short}, migrating a VM that was placed in a Data Center stored directly under /vcenter in vSphere succeeded. However, the migration failed when the Data Center was stored inside a folder. This issue was resolved in {project-short} 2.6.0. (MTV-796)
The OVA inventory watcher detects file changes, including deleted files. Updates from the ova-provider-server pod are now sent every five minutes to the forklift-controller pod, which updates the inventory. (MTV-733)
In earlier releases of {project-short}, the error logs lacked clear information to identify the reason for a failure to create a PV on a destination storage class that does not have a configured storage profile. This issue was resolved in {project-short} 2.6.0. (MTV-928)
In earlier releases of {project-short}, an earlier failed migration could leave an outdated ovirtvolumepopulator. When starting a new plan for the same VM to the same project, the CreateDataVolumes phase did not create populator PVCs when transitioning to CopyDisks, causing the CopyDisks phase to stall indefinitely. This issue was resolved in {project-short} 2.6.0. (MTV-929)
For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.
This release has the following known issues:
Warning: Warm migration and remote migration flows are impacted by multiple bugs. It is strongly recommended to fall back to cold migration until this issue is resolved. (MTV-1366)
When migrating virtual machines (VMs) with older Linux distributions, such as CentOS 7.0 and 7.1, from VMware to {ocp}, the names of the network interfaces change, and the static IP configuration for the VM no longer functions. This issue is caused by RHEL 7.0 and 7.1 still requiring virtio-transitional. Workaround: Manually update the guest to RHEL 7.2, or update the VM specification after migration to use virtio-transitional, as sketched below. (MTV-1382)
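A minimal sketch of the post-migration VM specification change, assuming KubeVirt's useVirtioTransitional device setting is the relevant knob (VM name illustrative):

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: migrated-centos7-vm           # illustrative name
spec:
  template:
    spec:
      domain:
        devices:
          useVirtioTransitional: true # expose virtio-transitional device models for older guests
----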
Dynamic disks are Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd. (MTV-1344)
The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)
Migrating VMs with LUKS-encrypted disks is supported from vSphere only: migrations from {rhv-short} and {osp} do not fail, but the encryption key might be missing on the target {ocp} cluster.
Warm migration from {rhv-short} fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)
When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target {ocp} cluster.
Warm migrations and migrations to remote {ocp} clusters from vSphere do not support the same guest operating systems that are supported in cold migrations and migrations to the local {ocp} cluster. This limitation might be caused by differences between RHEL 8 and RHEL 9.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.
When migrating VMs that are installed with RHEL 9 as a guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)
When migrating a virtual machine (VM) with NVMe disks from vSphere, the migration process fails, and the web console shows that the Convert image to kubevirt stage is running but did not finish successfully. (MTV-963)
Migrating an image-based VM without the virtual_size field can fail on a block mode storage class. (MTV-946)
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
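A minimal sketch of archiving a plan before deleting it, using the plan's archived field (plan name illustrative):

[source,yaml]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: failed-plan        # illustrative name
  namespace: openshift-mtv
spec:
  archived: true           # archive first so temporary resources are cleaned up, then delete the plan
----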
Migrating VMs with independent persistent disks from VMware to OCP-V fails. (MTV-993)
When vSphere does not receive updates about the guest operating system from the VMware tools, it considers the information about the guest operating system to be outdated and ceases to report it. When this occurs, {project-short} is unaware of the guest operating system of the VM and is unable to associate it with the appropriate virtual machine preference or {ocp-name} template. (MTV-1046)
The migration process fails when migrating an image-based VM from {osp} to the default project. (MTV-964)
For a complete list of all known issues in this release, see the list of Known Issues in Jira.