You can use {the-lc} {project-first} to migrate virtual machines from the following source providers to {virt} destination providers:
- VMware vSphere
- {rhv-full} ({rhv-short})
- {osp}
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote {virt} clusters
The release notes describe technical changes, new features and enhancements, and known issues for {project-full}.
This release has the following technical changes:
In this version of {project-short}, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.
{project-short} enables migrations from vSphere source providers by not enforcing Extended Master Secret (EMS). This enables migrating from all vSphere versions that {project-short} supports, including migrations that do not meet 2023 FIPS requirements.
The user interface for creating and updating providers now aligns with the look and feel of the {ocp} web console and displays up-to-date data.
The old UI of {project-short} 2.3 can no longer be enabled by setting `feature_ui: true` in `ForkliftController`.
{project-short} 2.5.6 can be deployed on {ocp-name} 4.15 clusters.
This release has the following features and improvements:
In {project-short} {project-version}, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)
Note: Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by {project-short}. {project-short} supports only OVA files created by VMware vSphere.
In {project-short} {project-version}, you can now use Red Hat {virt} provider as a source provider and a destination provider. You can migrate VMs from the cluster that {project-short} is deployed on to another cluster, or from a remote cluster to the cluster that {project-short} is deployed on. (MTV-571)
During the migration from {rhv-full} ({rhv-short}), direct Logical Units (LUNs) are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not work yet for Fibre Channel. (MTV-329)
In addition to standard password authentication, {project-short} supports the following authentication methods: Token authentication and Application credential authentication. (MTV-539)
The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)
You can now create a VMware vSphere source provider without specifying a VMware Virtual Disk Development Kit (VDDK) init image. However, it is strongly recommended that you create a VDDK init image to accelerate migrations.
In {project-short} 2.5.3, deployment on {ocp-name} Kubernetes Engine (OKE) has been enabled. For more information, see About {ocp-name} Kubernetes Engine. (MTV-803)
In {project-short} 2.5.4, migration of VMs to destination storage classes that have encrypted RADOS Block Devices (RBD) volumes is now supported.
To make use of this new feature, set the value of the `controller_block_overhead` parameter to `1Gi`, following the procedure in Configuring the MTV Operator. (MTV-851)
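For reference, a minimal sketch of where this setting sits in the `ForkliftController` custom resource is shown below. The resource name and namespace are the defaults used in a typical installation and might differ in your environment; other fields are omitted for brevity.

apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Reserves 1Gi of block overhead to allow migration to storage classes
  # with encrypted RBD volumes, as described above.
  controller_block_overhead: 1Gi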
This release has the following known issues:
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
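For example, a plan can be archived before it is deleted by setting `archived: true` in its spec. The sketch below assumes the default installation namespace; the plan name is a placeholder.

apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan_name>         # placeholder: the name of the migration plan
  namespace: openshift-mtv  # placeholder: the namespace where the plan was created
spec:
  # Archiving the plan cleans up its temporary resources before deletion.
  archived: true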
The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)
vSphere only: Migrations from {rhv-short} and OpenStack do not fail, but the encryption key may be missing on the target {ocp} cluster.
Warm migration from {rhv-short} fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)
When migrating a VM with multiple disks to more than one storage class of type `hostPath`, the VM might fail to be scheduled. Workaround: Use shared storage on the target {ocp} cluster.
Warm migrations and migrations to remote {ocp} clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local {ocp} cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.
When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)
When adding an OVA provider, the error message `ConnectionTestFailed` can appear, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to `Ready`, this means that the OVA server pod creation has failed. (MTV-671)
An outdated `ovirtvolumepopulator` in the namespace, left over from an earlier failed migration, stops a new plan of the same VM when it transitions to the `CopyDisks` phase. The plan remains in that phase indefinitely. (MTV-929)
The migration fails to build the Persistent Volume Claim (PVC) if the destination storage class does not have a configured storage profile. The `forklift-controller` raises an error message without a clear reason for failing to create a PVC. (MTV-928)
For a complete list of all known issues in this release, see the list of Known Issues in Jira.
This release has the following resolved issues:
Versions of the package `jsrsasign` before 11.0.0, used in earlier releases of {project-short}, are vulnerable to Observable Discrepancy in the RSA PKCS1.5 or RSA-OAEP decryption process. This discrepancy means an attacker could decrypt ciphertexts by exploiting this vulnerability. However, exploiting this vulnerability requires the attacker to have access to a large number of ciphertexts encrypted with the same key. This issue has been resolved in {project-short} 2.5.5 by upgrading the package `jsrsasign` to version 11.0.0.
For more information, see CVE-2024-21484.
A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of {project-short}, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.
This issue has been resolved in {project-short} 2.5.2. It is advised to update to this version of {project-short} or later.
For more information, see CVE-2023-44487 (Rapid Reset Attack) and CVE-2023-39325 (Rapid Reset Attack).
A flaw was found in the Gin-Gonic Gin Web Framework, which is used by {project-short}. The filename parameter of the `Context.FileAttachment` function was not properly sanitized. This flaw could allow a remote attacker to bypass security restrictions caused by improper input validation of the filename parameter of the `Context.FileAttachment` function. A maliciously created filename could cause the `Content-Disposition` header to be sent with an unexpected filename value, or otherwise modify the `Content-Disposition` header.
This issue has been resolved in {project-short} 2.5.2. It is advised to update to this version of {project-short} or later.
For more information, see CVE-2023-29401 (Gin-Gonic Gin Web Framework) and CVE-2023-26125.
A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means {project-short} versions before {project-short} 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the `OverlappingFieldsCanBeMergedRule.ts` file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)
This issue has been resolved in {project-short} 2.5.2. It is advised to update to this version of {project-short} or later.
For more information, see CVE-2023-26144.
A flaw was found in the `otelhttp` handler of OpenTelemetry-Go. This flaw means {project-short} versions before {project-short} 2.5.3 are vulnerable to a memory leak caused by `http.user_agent` and `http.method` having unbound cardinality, which could allow a remote, unauthenticated attacker to exhaust the server's memory by sending many malicious requests, affecting the availability. (MTV-795)
This issue has been resolved in {project-short} 2.5.3. It is advised to update to this version of {project-short} or later.
For more information, see CVE-2023-45142.
A flaw was found in Golang. This flaw means {project-short} versions before {project-short} 2.5.3 are vulnerable to QUIC connections not setting an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. With the fix, connections now consistently reject messages larger than 65KiB in size. (MTV-708)
This issue has been resolved in {project-short} 2.5.3. It is advised to update to this version of {project-short} or later.
For more information, see CVE-2023-39322.
A flaw was found in Golang. This flaw means {project-short} versions before {project-short} 2.5.3 are vulnerable to processing an incomplete post-handshake message for a QUIC connection, which causes a panic. (MTV-693)
This issue has been resolved in {project-short} 2.5.3. It is advised to update to this version of {project-short} or later.
For more information, see CVE-2023-39321.
A flaw was found in the Golang `html/template` package used in {project-short}. This flaw means {project-short} versions before {project-short} 2.5.3 are vulnerable, as the `html/template` package did not properly handle occurrences of `<script`, `<!--`, and `</script` within JavaScript literals in `<script>` contexts. This flaw could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)
This issue has been resolved in {project-short} 2.5.3. It is advised to update to this version of {project-short} or later.
For more information, see CVE-2023-39319.
A flaw was found in the Golang `html/template` package used in {project-short}. This flaw means {project-short} versions before {project-short} 2.5.3 are vulnerable, as the `html/template` package did not properly handle HTML-like `<!--` and `-->` comment tokens, nor hashbang `#!` comment tokens, in `<script>` contexts. This flaw could cause the template parser to improperly interpret the contents of `<script>` contexts, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)
This issue has been resolved in {project-short} 2.5.3. It is advised to update to this version of {project-short} or later.
For more information, see CVE-2023-39318.
In earlier releases of {project-short} {project-version}, the log files downloaded from the UI could contain logs that are related to an earlier migration plan. (MTV-783)
This issue has been resolved in {project-short} 2.5.3.
In earlier releases of {project-short} {project-version}, the size of disks that were extended in {rhv-short} was not adequately monitored. This resulted in the inability to migrate virtual machines with extended disks from a {rhv-short} provider. (MTV-830)
This issue has been resolved in {project-short} 2.5.3.
In earlier releases of {project-short} {project-version}, the filesystem overhead for new persistent volumes was hard-coded to 10%. The overhead was insufficient for certain filesystem types, resulting in failures during cold migrations from {rhv-short} and OSP to the cluster where {project-short} is deployed. For other filesystem types, the hard-coded overhead was too high, resulting in excessive storage consumption.
In {project-short} 2.5.3, the filesystem overhead is no longer hard-coded and can be configured. If your migration allocates persistent volumes without CDI, you can adjust the file system overhead by adding the following label and value to the `spec` portion of the `forklift-controller` CR:

spec:
  `controller_filesystem_overhead: <percentage>` (1)

(1) The percentage of overhead. If this label is not added, the default value of 10% is used. This setting is valid only if the `storageclass` is `filesystem`. (MTV-699)
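In context, the setting belongs in the `spec` section of the `ForkliftController` custom resource. The sketch below assumes the default resource name and installation namespace; the value of 15 is illustrative only and is interpreted as a percentage.

apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Illustrative value: reserves 15% file system overhead for persistent
  # volumes that are allocated without CDI. If omitted, 10% is used.
  controller_filesystem_overhead: 15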
In earlier releases of {project-short}, the create and update provider forms could have presented stale data.
This issue is resolved in {project-short} {project-version}: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)
In earlier releases of {project-short}, the `Migration Controller` service did not automatically delete snapshots that were created during the migration of source virtual machines in OpenStack.
This issue is resolved in {project-short} {project-version}: all snapshots created during the migration are removed after the migration has been completed. (MTV-620)
In earlier releases of {project-short}, the `Migration Controller` service did not delete snapshots automatically after a successful warm migration of a VM from {rhv-short}.
This issue is resolved in {project-short} {project-version}: the snapshots generated during the migration are removed after a successful migration, and the original snapshots are not removed after a successful migration. (MTV-349)
In earlier releases of {project-short}, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in {rhv-short}, and therefore the `ovirt-engine` rejected the snapshot creation or disk transfer operation.
This issue is resolved in {project-short} {project-version}: the cutover operation is triggered, but it is not performed at that time because the VM is locked. Once the precopy operation completes, the cutover operation is performed. (MTV-686)
In earlier releases of {project-short}, triggering a warm migration while there was an ongoing operation in {rhv-short} that locked the VM caused the migration to fail because it could not trigger the snapshot creation.
This issue is resolved in {project-short} {project-version}: warm migration does not fail when an operation that locks the VM is performed in {rhv-short}. The migration starts when the VM is unlocked. (MTV-687)
In earlier releases of {project-short}, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.
This issue is resolved in {project-short} {project-version}: PVCs and PVs are deleted when a migrated VM is deleted. (MTV-492)
In earlier releases of {project-short}, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.
This issue is resolved in {project-short} {project-version}: PVCs are deleted when the migration plan is archived and deleted. (MTV-493)
In earlier releases of {project-short}, VMs with multiple disks that were migrated might not have been able to boot on the target {ocp} cluster.
This issue is resolved in {project-short} {project-version}: VMs with multiple disks that are migrated can boot on the target {ocp} cluster. (MTV-433)
In {project-short} releases 2.4.0-2.5.3, cold migrations from vSphere to the local cluster on which {project-short} was deployed did not take a specified transfer network into account. This issue is resolved in {project-short} 2.5.4. (MTV-846)
In {project-short} 2.5.6, the virt-v2v arguments include `--root first`, which mitigates an issue with multi-boot VMs where the pod fails. This is a fix for a regression that was introduced in {project-short} 2.4, in which the `--root` argument was dropped. (MTV-987)
In earlier releases of {project-short} {project-version}, populator pods were always restarted on failure, which made it difficult to gather the logs from the failed pods. In {project-short} 2.5.3, the number of restarts of populator pods is limited to three. On the third and final restart, the populator pod remains in the failed state so that its logs can be gathered by `must-gather` and so that the `forklift-controller` recognizes that this step has failed. (MTV-818)
A vulnerability found in the Node.js Package Manager (npm) IP package can allow an attacker to obtain sensitive information and gain access to normally inaccessible resources. (MTV-941)
This issue has been resolved in {project-short} 2.5.6.
For more information, see CVE-2023-42282.
A flaw was found in the versions of the Golang `net/http/internal` package that were used in earlier releases of {project-short}. This flaw could allow a malicious user to send an HTTP request and cause the receiver to read more bytes from the network than are in the body (up to 1 GiB), causing the receiver to fail reading the response, possibly leading to a Denial of Service (DoS). This issue has been resolved in {project-short} 2.5.6.
For more information, see CVE-2023-39326.
For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.
It is recommended to upgrade from {project-short} 2.4.2 to {project-short} {project-version}.
When upgrading from {project-short} 2.4.0 to a later version, the operation fails with an error that says the field `spec.selector` of the `forklift-controller` deployment is immutable. Workaround: Remove the `forklift-controller` custom resource of type `ForkliftController` from the installed namespace, and recreate it. Refresh the {ocp} console once the `forklift-console-plugin` pod runs to load the upgraded {project-short} web console. (MTV-518)
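As a sketch of the workaround, the recreated `ForkliftController` custom resource can be as minimal as the following, assuming the default installation namespace and that the Operator defaults are acceptable for your deployment:

apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec: {}  # assumption: with an empty spec, the Operator applies its default settings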