Chapter 6 Understanding Server Pools and Oracle VM Servers
- 6.1 How are Oracle VM Servers Added?
- 6.2 What are Server Roles?
- 6.3 How is Maintenance Performed on an Oracle VM Server?
- 6.4 Configuring NTP for Oracle VM Servers
- 6.5 Rebooting and Changing Power State of Oracle VM Servers
- 6.6 What are Oracle VM Server States?
- 6.7 What are Server Pools used for in Oracle VM?
- 6.8 How are Server Pools Created?
- 6.9 How do Server Pool Clusters Work?
- 6.10 Unclustered Server Pools
- 6.11 How does High Availability (HA) Work?
- 6.12 What are Server Pool Policies?
- 6.13 What are Anti-Affinity Groups?
- 6.14 What are Server Processor Compatibility Groups?
Oracle VM Servers are the workhorses of an Oracle VM environment. It is important to understand their purpose within a deployment, how they are added to a running environment, and how they are maintained. A brief overview of Oracle VM Server, its components, and its general relationship to other Oracle VM entities is provided in Section 2.2, “What is Oracle VM Server?”.
Some discussion about how servers are added to a deployment is provided in Section 6.1, “How are Oracle VM Servers Added?”. Details on server maintenance are provided in Section 6.3, “How is Maintenance Performed on an Oracle VM Server?”.
Oracle VM Servers are grouped together to form server pools. These server pools provide a mechanism to share physical and virtual resources to host virtual machines, perform virtual machine migration, high availability (HA), and so on. Server pools are discussed in Section 6.7, “What are Server Pools used for in Oracle VM?”.
Server pools can take advantage of Oracle VM's clustering facility to achieve HA. Implementation of a clustered server pool is very easy using Oracle VM Manager, but it is important to gain a deeper understanding of the technologies involved and how clustering is facilitated. A thorough description of clustered server pools is provided in Section 6.9, “How do Server Pool Clusters Work?”
While the most common deployment strategy tends to take advantage of Oracle VM's clustering facility, it is possible to configure a server pool to run in an unclustered arrangement. This approach is discussed in Section 6.10, “Unclustered Server Pools”.
While clustering helps to provide HA for the servers within a server pool, Oracle VM allows you to configure whether or not HA should be applied to individual virtual machines. We take a closer look at HA in Section 6.11, “How does High Availability (HA) Work?”
Oracle VM provides additional features specific to how servers and server pools are configured. These include the ability to optimize resource usage or power consumption (see Section 6.12, “What are Server Pool Policies?”); the ability to define which individual servers virtual machines should be allowed to run on (see Section 6.13, “What are Anti-Affinity Groups?”); and the ability to create groups of servers that have compatible processor types to facilitate live migration of virtual machines (see Section 6.14, “What are Server Processor Compatibility Groups?”).
6.1 How are Oracle VM Servers Added?
After Oracle VM Server has been installed, it is in an unowned state and has no relationship to any single Oracle VM deployment. For an Oracle VM Server to be used, it must be added to an Oracle VM Manager instance that can take ownership of the server. This process is known as server discovery.
Server discovery is a straightforward process performed from within Oracle VM Manager, using one of the provided interfaces. The Oracle VM Manager uses a provided IP address or hostname and password to attempt a connection to the Oracle VM Agent running on Oracle VM Server. When authenticated, the server is added to the Oracle VM Manager where various configuration actions such as adding networks or storage, grouping within a server pool, or server updates can be performed. The actions required to discover servers within the Oracle VM Manager Web Interface are discussed in Discover Servers in the Oracle VM Manager User's Guide.
Each Oracle VM Manager instance is assigned a UUID that identifies it to the Oracle VM Servers that it discovers. The Oracle VM Agent running on an Oracle VM Server stores the UUID of the Oracle VM Manager that discovers and takes ownership of it. The UUID is stored on both Oracle VM Manager and the Oracle VM Server. On Oracle VM Manager, you can find the UUID in /u01/app/oracle/ovm-manager-3/.config. On Oracle VM Server, the UUID of the owning Oracle VM Manager instance is stored in a Berkeley DB database file. You can dump its contents by running the following as the root user on the Oracle VM Server system:

# cd /etc/ovs-agent/db
# ovs-agent-db dump_db server
In the case of an Oracle VM Server that is in an unowned state, Oracle VM Manager automatically takes ownership of the server upon discovery. However, for servers that report an owned state, Oracle VM Manager cannot take ownership of the server until the Oracle VM Manager that already has ownership relinquishes it. This is important as configuration actions performed by two separate Oracle VM Manager instances could result in conflicts. Therefore, it is not possible to perform any server configuration until an Oracle VM Manager instance actually has ownership of the server. Taking ownership of a server using the Oracle VM Manager Web Interface is described in Edit Server in the Oracle VM Manager User's Guide.
In addition to the UUID of the Oracle VM Manager that owns a server, each Oracle VM Server also stores a large amount of configuration data locally. This means that if the Oracle VM Manager instance experiences downtime, the environment can continue to function normally. It also means that if an Oracle VM Manager instance is entirely replaced, a fair portion of the configuration can be loaded directly from each newly discovered server. Typical examples of this include networking and clustering information as well as Storage Connections. While this may provide some consolation, information such as the aliases for different elements is not stored on each server, so recovery of an Oracle VM Manager instance in this way is not recommended. A proper backup strategy should be followed instead. It is also important to realize that for this to work, the new Oracle VM Manager instance must have the same UUID as the original instance.
This has implications for server discovery. If an Oracle VM Server has been under the ownership of a separate Oracle VM Manager instance, its existing networking, clustering and storage configuration is not automatically loaded into Oracle VM Manager at the same time. Only the fundamental information about the server is loaded into Oracle VM Manager. This is known as partial discovery. Full discovery can only be performed once ownership of the server is under the control of the Oracle VM Manager instance. In a case where an alternate instance of Oracle VM Manager has ownership of a server, it would need to relinquish ownership first. This can only be done for servers that are not part of a server pool and that do not have a repository presented to them. If a server is partially discovered by an Oracle VM Manager instance and the original Oracle VM Manager instance releases ownership, the Oracle VM Manager instance that wishes to take ownership must perform a full rediscovery of the server. The Oracle VM Manager Web Interface automatically triggers server rediscovery when you attempt to take ownership of a server.
6.2 What are Server Roles?
Not all Oracle VM Servers within a server pool may be required for the purpose of running virtual machines. Oracle VM Servers are also required to perform utility functions, including performing actions on storage repositories on behalf of Oracle VM Manager. While all Oracle VM Servers can be configured to be capable of performing both functions, Oracle VM provides the option to separate servers within a server pool based on their functional roles. These roles are defined as follows:
- VM Server role: The VM Server role enables an Oracle VM Server to run virtual machines.
- Utility Server role: A Utility Server is favored for operations other than hosting virtual machines, for example, importing virtual machine templates and virtual appliances, creating virtual machine templates from virtual appliances, and creating repositories. If no Utility Servers are available, a non-utility server is chosen to do the work.
The role of a server can be configured within Oracle VM Manager once the server is part of a server pool. At least one server within the server pool must be assigned the VM Server role, or no virtual machines can run in the server pool. When you add an Oracle VM Server to a server pool, both the Utility Server role and the VM Server role are selected by default. The VM Server role is required to run virtual machines; the Utility Server role is not required and is advisory only.
Since Oracle VM Servers are capable of being configured for either or both roles, role separation is used for performance purposes. For instance, in the case of the import of a virtual machine template or virtual appliance, it is preferable that there is a server dedicated to this task, to reduce the impact of such a task on the running of virtual machines within the server pool.
You cannot change the VM Server role for an Oracle VM Server that is not in a server pool, or that has any virtual machines assigned to it, whether running or not. You must own the Oracle VM Server to change the VM Server role. You can change the Utility Server role at any time, as long as you own the Oracle VM Server.
More information about the configuration of server roles can be found at Edit Server in the Oracle VM Manager User's Guide.
6.3 How is Maintenance Performed on an Oracle VM Server?
Once an Oracle VM Server is installed and discovered, an administrator has no need to directly access the system unless directed to do so by Oracle Support. All maintenance and configuration is handled directly from the Oracle VM Manager instance that has ownership of the server. This makes system administration such as performing server updates, configuring networks, attaching storage, configuring clustering and the management of virtual machines much easier for a systems administrator and reduces the security implications of provisioning access to multiple systems within a deployment.
Oracle VM Manager is able to communicate directly with each Oracle VM Server via the Oracle VM Agent installed and running on the server. Communications are secured using TLS and authentication against the Oracle VM Agent is required. The password used for the Oracle VM Agent is configured at install time, and is required by Oracle VM Manager to perform server discovery. Once Oracle VM Manager has taken ownership of the Oracle VM Server, authentication of the Oracle VM Manager against the Oracle VM Agent is achieved using certificates to improve the security of the Oracle VM Agent. This ensures that only the Oracle VM Manager that has ownership of an Oracle VM Server is authorized to perform any action on the server. The Oracle VM Agent is responsible for performing all configuration changes and maintenance initiated from the Oracle VM Manager.
Oracle VM Server software updates and upgrades are handled by configuring one or more server update repositories for the Oracle VM Servers from within Oracle VM Manager. One or more update repositories can be configured for x86-based Oracle VM Servers, or for SPARC-based servers. These repositories are available to all Oracle VM Servers with that hypervisor type. You can also override these repositories at the server pool level. When a repository has been configured, Oracle VM Manager reports whether or not updates are available for each Oracle VM Server in the environment. Updating a server can be triggered from Oracle VM Manager. See Update Server in the Oracle VM Manager User's Guide for more information about Oracle VM Server updates.
During an update, the Oracle VM Server is automatically put into maintenance mode. Maintenance mode is used to lock a server to prevent it from starting any further virtual machines. If the server is part of a clustered server pool, setting maintenance mode also triggers a best attempt to automatically perform a live-migration of any virtual machines running on the server to an alternate server within the server pool. If live-migration fails for any reason, the upgrade is terminated and an error is returned, otherwise all virtual machines are migrated off of the server and the upgrade can proceed. This allows an update to be performed without affecting existing services. If the server requires a reboot after an upgrade, this is performed automatically. Once the server has rebooted, it rejoins the cluster and resumes its normal functions. Note that it is possible to set maintenance mode for an Oracle VM Server by editing the server properties. See Edit Server in the Oracle VM Manager User's Guide for more information about maintenance mode.
Additional networking configuration is also handled from within Oracle VM Manager. During installation, a primary network is defined to allow the server to connect to the Server Management network. From this point onward, any further network configuration is performed from within Oracle VM Manager. Networking for Oracle VM is described in more detail in Chapter 5, Understanding Networks.
The process of defining Storage Connections and mount points is also handled from within Oracle VM Manager. These connections and how they are created are discussed in more detail in Chapter 3, Understanding Storage.
It should be clear that other than the initial installation required for an Oracle VM Server, all administrative tasks related to server maintenance and configuration are handled by Oracle VM Manager. This centralized administration not only eases the management of servers, but also improves security and reduces the likelihood of configuration errors that could result in server downtime.
6.4 Configuring NTP for Oracle VM Servers
In environments where data is constantly shared between systems, it is important that time is properly synchronized. This is particularly important within Oracle VM where clustering is frequently used and where virtual machines can move between Oracle VM Servers as required. The most common approach to this requirement is to configure an NTP server to run on the same host where you install Oracle VM Manager. Each Oracle VM Server can then use the Oracle VM Manager host as an NTP server. You can find out more about the installation and configuration of NTP in Configure the NTP Service on the Oracle VM Manager Host in the Oracle VM Installation and Upgrade Guide.
Oracle VM Servers that are not under the ownership of any Oracle VM Manager instance are either configured to point to localhost (127.127.1.0) for NTP, or to one or more NTP servers that are configured for each server. Once an Oracle VM Server has been discovered and ownership of it is taken within Oracle VM Manager, it becomes possible to configure the NTP servers that should be used for that Oracle VM Server. Oracle VM Manager allows you to configure these NTP servers directly within the server configuration in the Oracle VM Manager Web Interface. See Edit Server in the Oracle VM Manager User's Guide for more information on how to configure NTP for each server. If you need to batch edit NTP for a large number of servers, the preferred method is to use the Oracle VM Manager Command Line Interface or the Oracle VM Web Services API. See edit Server for an example of how to edit an Oracle VM Server to configure NTP using the Oracle VM Manager Command Line Interface.
Removing all NTP servers from an Oracle VM Server configuration results in the configuration reverting to its default state, where the Oracle VM Server is configured to point to localhost as its NTP server. The same behavior applies when an Oracle VM Server is released from ownership of an Oracle VM Manager instance. Therefore, whenever an Oracle VM Server is newly discovered and the Oracle VM Manager instance takes ownership, the NTP configuration on the Oracle VM Server always starts out pointing to localhost.
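The localhost default described here corresponds to the standard ntpd local clock driver. As an illustrative sketch (not the literal file shipped with any particular Oracle VM Server release), such a fallback configuration fragment might look like:

```
# /etc/ntp.conf (illustrative fragment; actual contents may differ by release)
# Use the local undisciplined clock as the fallback time source:
server 127.127.1.0
fudge  127.127.1.0 stratum 10
```

Once Oracle VM Manager assigns real NTP servers, entries of this kind are replaced by server lines pointing at those hosts.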
6.5 Rebooting and Changing Power State of Oracle VM Servers
Oracle VM Servers can be rebooted from Oracle VM Manager, using either the web user interface or the command line interface. See Restart Server in the Oracle VM Manager User's Guide. A reboot has different consequences depending on the hardware architecture that the Oracle VM Server is running on.
On x86 hardware, a reboot causes the entire system to restart. This means that depending on the configuration of your server pool and whether the server has been put into maintenance mode first, all virtual machines that are running on the server are either migrated to an alternative server or stopped, before the server is rebooted.
On SPARC hardware, a reboot is only applied to the control domain. This means that any virtual machines running on the SPARC-based Oracle VM Server continue to run while the control domain is rebooted. However, network access and disk I/O are blocked while the control domain is rebooting, unless a second service domain or shadow domain is available to facilitate this activity. See Configuring a Secondary Service Domain in the Oracle VM Administrator's Guide. Once the control domain has finished rebooting, network access and disk I/O are restored for the running virtual machines and activity resumes as normal. If you are using server pool clustering, virtual machines may be migrated to an alternative server if the reboot of the control domain takes too long or does not complete for some reason.
It is worthwhile noting that a power cycle has the same effect on both x86 and SPARC servers. The entire system is restarted. All virtual machines are either stopped or migrated, depending on whether or not clustering is enabled for the server pool that the server belongs to. Killing an Oracle VM Server from within Oracle VM Manager has an equivalent effect to a power cycle. See Kill Server in the Oracle VM Manager User's Guide for more information on this.
Standby mode is not supported for Oracle VM Server. Do not attempt to put an Oracle VM Server into standby mode. Setting a server's power state to standby may cause the server to freeze.
6.6 What are Oracle VM Server States?
Oracle VM Manager tracks the different running states for each Oracle VM Server at regular intervals. At any point, it is possible to check the running state for any Oracle VM Server within the Oracle VM Manager Web Interface or the Oracle VM Manager Command Line Interface. There are five different running states that apply to Oracle VM Servers. The following list describes each running state and the scenarios that apply to each of these:
- STARTING:
  - This is the initial state that is set for an Oracle VM Server when it is started via Oracle VM Manager.
  - This is also the state that is set for an Oracle VM Server very early in the discovery process. Once the discovery is finished, the state is updated and set to RUNNING.
- RUNNING:
  - This is the state that is set for an Oracle VM Server when it is running and the Oracle VM Manager instance is able to authenticate and communicate with the Oracle VM Agent on the server.
  - This state is set at the end of a successful Oracle VM Server discovery operation.
- STOPPING:
  - When an attempt is made to stop, restart or kill an Oracle VM Server, it is immediately set into the STOPPING state.
- STOPPED:
  - When an Oracle VM Server cannot be contacted for an extended period of time, it first receives a DISCONNECT event and eventually an OFFLINE event, and its state is set to STOPPED.
- UNKNOWN:
  - An Oracle VM Server is only ever set to the UNKNOWN state if the Oracle VM Manager instance does not own it, or the last authentication attempt against it failed.
6.7 What are Server Pools used for in Oracle VM?
A server pool consists of one or more Oracle VM Servers, and represents a logical grouping of the servers where a particular set of virtual machines can run. It is a requirement of all server pools that the Oracle VM Servers within them share the same CPU architecture. All servers within a server pool must be in the same physical location. Stretching server pools across geographical locations is not supported.
Oracle VM deployments can vary in design. It is equally plausible that one deployment may only use a single server pool for all of its virtualization requirements, while another deployment may consist of several server pools either catering for different hardware platforms or for complete virtual machine separation.
Server pools help to provide horizontal scalability. If you find that a server pool does not have sufficient resources, such as CPU or memory, to run the virtual machines, you can expand the server pool by adding more Oracle VM Servers. This process is discussed in Edit Server Pool in the Oracle VM Manager User's Guide. In this way, a server pool can be described as the set of resources available to a group of virtual machines.
Therefore, before creating a server pool, it is useful to consider how many Oracle VM Servers are to be included in the server pool, and which function(s) or role(s) each Oracle VM Server is to perform. See Section 6.2, “What are Server Roles?” for more information on server functions and roles. Each virtual machine running in a server pool requires resources to be available to it, such as CPU, network, and memory. You should size your server pool accordingly.
Oracle VM's usual deployment architecture utilizes server pools, with shared access to storage across Oracle VM Servers in the server pool. Virtual machines are stored on the shared storage and loaded onto any one of the Oracle VM Servers to balance the workloads of the server pool.
In a deployment that uses shared storage that is accessible to all of the Oracle VM Servers in the server pool, many other facilities are available to ensure high availability and excellent failover. A server pool can be configured as a cluster, so that virtual machines are automatically live-migrated between servers in the event of server downtime.
Since the virtual machines are not bound to any specific Oracle VM Server in the server pool, virtual machines are not prevented from starting up simply because an individual Oracle VM Server happens to be down for maintenance or otherwise unavailable at the time. Further, options are provided to specify the start policy for the virtual machines in the server pool. The start policy can implement a load-balancing algorithm that assures that a virtual machine is only started on the Oracle VM Server with the most resources available. Load balancing is achieved using the same algorithms used for Distributed Power Management (DPM) and for the Distributed Resource Scheduler (DRS). Load-balancing further helps assure the maximum aggregate performance from the server pool.
When you create a server pool in Oracle VM, you specify:
- Server pool name and description.
- Whether or not to activate the cluster.
- A server pool file system for the global heartbeat and other cluster information.
The server pool name and description are used as friendly identifiers within Oracle VM Manager to reference different server pools and to understand their purpose.
If you opt to enable clustering, the servers in the server pool are clustered and all of the configuration steps required are performed automatically through Oracle VM Manager. In this case, it is necessary for you to configure the server pool file system that is to be used to store cluster information. See Section 6.9, “How do Server Pool Clusters Work?” for more information. If you opt for a server pool without clustering, see Section 6.10, “Unclustered Server Pools”.
6.8 How are Server Pools Created?
A server pool consists of at least one, but usually multiple, Oracle VM Servers. All Oracle VM Servers in a server pool should have CPUs in the same CPU family and of the same type. If they are not in the same CPU family and type, some operations such as live migration may fail. Though the CPUs should be in the same CPU family, they may have differing configurations, such as a different number of cores. Other hardware components on the host computer may also differ, such as the amount of RAM, the number and size of disk drives, and so on.
Although the host computers may have differing configurations, Oracle recommends that all Oracle VM Servers in a server pool are identical. Oracle VM Manager contains rules for processor compatibility groups. If live migration is attempted between incompatible processors, an error message is displayed.
Before creating a server pool, you must have:
- IP addresses or hostnames for the Oracle VM Servers.
- Password to access the Oracle VM Agent installed on the Oracle VM Server(s).

Note: The Oracle VM Agent password should be the same on each Oracle VM Server in a server pool. Once an Oracle VM Server is under the ownership of an Oracle VM Manager instance, password authentication is replaced with certificate-based authentication to improve the security of the Oracle VM Agent.
A clustered server pool must have a dedicated file system (either a NAS export, or a LUN) to use as the server pool's file system. Oracle recommends that you create this storage with a size of at least 10 GB. If you are creating a SPARC-based server pool, only an NFS file system is supported for the server pool file system.
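For illustration, if the server pool file system is provided as an NFS export, the export might be declared on the storage server along these lines (the path and the ovs1/ovs2 host names are hypothetical; the recommended 10 GB applies to the underlying volume, not to the export entry itself):

```
# /etc/exports on the NFS storage server (illustrative sketch)
/exports/poolfs    ovs1(rw,sync,no_root_squash)    ovs2(rw,sync,no_root_squash)
```

For a LUN-based server pool file system, the equivalent step is presenting a dedicated FC or iSCSI LUN of at least the same size to every server in the pool.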
The Create Server Pool icon is available on the Servers and VMs tab within the Oracle VM Manager Web Interface and is used to open the Create Server Pool wizard that guides you through the server pool creation process.
For information on creating a server pool, see Create Server Pool in the Oracle VM Manager User's Guide.
6.9 How do Server Pool Clusters Work?
The implementation of server pool clustering within Oracle VM differs depending on whether the Oracle VM Servers used in the server pool are based on an x86 or SPARC architecture; however, the behavior of a server pool cluster, and the way in which it is configured within Oracle VM Manager, is largely seamless regardless of the platform used. This means that server pool clustering is handled automatically by Oracle VM Manager as soon as you enable it for a server pool, as long as the necessary requirements are met to allow clustering to take place.
In this section we discuss how server pool clusters work for each of the different architectures, and describe the requirements for clustering to be enabled.
6.9.1 Clustering for x86 Server Pools
Oracle VM works in concert with Oracle OCFS2 to provide shared access to server pool resources residing in an OCFS2 file system. This shared access feature is crucial in the implementation of high availability (HA) for virtual machines running on x86 Oracle VM Servers that belong to a server pool with clustering enabled.
OCFS2 is a cluster file system developed by Oracle for Linux, which allows multiple nodes (Oracle VM Servers) to access the same disk at the same time. OCFS2, which provides both performance and HA, is used in many applications that are cluster-aware or that have a need for shared file system facilities. With Oracle VM, OCFS2 ensures that Oracle VM Servers belonging to the same server pool access and modify resources in the shared repositories in a controlled manner.
The OCFS2 software includes the core file system, which offers the standard file system interfaces and behavioral semantics and also includes a component which supports the shared disk cluster feature. The shared disk component resides mostly in the kernel and is referred to as the O2CB cluster stack. It includes:
- A disk heartbeat to detect live servers.
- A network heartbeat for communication between the nodes.
- A Distributed Lock Manager (DLM) which allows shared disk resources to be locked and released by the servers in the cluster.
OCFS2 also offers several tools to examine and troubleshoot the OCFS2 components. For detailed information on OCFS2, see the OCFS2 documentation at:
http://oss.oracle.com/projects/ocfs2/documentation/
Oracle VM decouples storage repositories and clusters so that if a storage repository is taken offline, the cluster is still available. A loss of one heartbeat device does not force an Oracle VM Server to self-fence.
When you create a server pool, you have a choice to activate the cluster function which offers these benefits:
- Shared access to the resources in the repositories accessible by all Oracle VM Servers in the cluster.
- Protection of virtual machines in the event of a failure of any Oracle VM Server in the server pool.
You can choose to configure the server pool cluster and enable HA in a server pool, when you create or edit a server pool within Oracle VM Manager. See Create Server Pool and Edit Server Pool in the Oracle VM Manager User's Guide for more information on creating and editing a server pool.
During server pool creation, the server pool file system specified for the new server pool is accessed and formatted as an OCFS2 file system. This formatting creates several management areas on the file system, including a region for the global disk heartbeat. Oracle VM formats the server pool file system as an OCFS2 file system whether the file system is accessed by the Oracle VM Servers as an NFS share, an FC LUN or an iSCSI LUN. See Section 3.8, “How is Storage Used for Server Pool Clustering?” for more information on how storage is used for the cluster file system and the requirements for a stable cluster heartbeat function.
As Oracle VM Servers are added to a newly created server pool, Oracle VM:
- Creates the cluster configuration file and the cluster time-out file.
- Pushes the configuration files to all Oracle VM Servers in the server pool.
- Starts the cluster.
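The cluster configuration file pushed to each server follows the standard OCFS2 cluster.conf layout. A minimal fragment for a two-node cluster might look like the following sketch; the node names, IP addresses, and cluster name shown here are hypothetical, and the file on a real deployment is generated by Oracle VM and should not be edited by hand:

```
# /etc/ocfs2/cluster.conf (illustrative sketch; generated by Oracle VM)
cluster:
        node_count = 2
        name = mypool

node:
        ip_port = 7777
        ip_address = 192.168.1.10
        number = 0
        name = ovs1
        cluster = mypool

node:
        ip_port = 7777
        ip_address = 192.168.1.11
        number = 1
        name = ovs2
        cluster = mypool
```

Note that the ip_port value matches the port used by the network heartbeat, and the ip_address for each node is its address on the network carrying the Cluster Heartbeat role.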
Cluster timeout can only be configured during server pool creation. The cluster timeout determines how long a server should be unavailable within the cluster before failover occurs. Setting this value too low can cause false positives, where failover may occur due to a brief network outage or a sudden load spike. Setting the cluster timeout to a higher value can mean that a server is unavailable for a lengthier period before failover occurs. The maximum value for the cluster timeout is 300 seconds, which means that in the event that a server becomes unavailable, failover may only occur 5 minutes later. The recommended approach to setting timeout values is to use the functionality provided within Oracle VM Manager when creating or editing a server pool. See Create Server Pool and Edit Server Pool in the Oracle VM Manager User's Guide for more information on setting the timeout value.
On each Oracle VM Server in the cluster, the cluster configuration file is located at /etc/ocfs2/cluster.conf, and the cluster time-out file is located at /etc/sysconfig/o2cb. Note that it is imperative that every node in the cluster has the same o2cb parameters set. Do not attempt to configure different parameters for different servers within a cluster, or the heartbeat device may not be mounted. Cluster heartbeat parameters within the cluster timeout file are derived from the cluster timeout value defined for a server pool during server pool creation. The following algorithms are used to set the listed o2cb parameters accordingly:
- o2cb_heartbeat_threshold = (timeout/2) + 1
- o2cb_idle_timeout_ms = (timeout/2) * 1000
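As a quick sanity check, the two formulas above can be evaluated in a small shell sketch; the 120-second timeout used here is just an example value, not a recommendation:

```shell
#!/bin/sh
# Derive the o2cb heartbeat parameters from a cluster timeout value,
# following the formulas above: threshold = (timeout/2) + 1, and
# idle timeout = (timeout/2) * 1000 milliseconds.
timeout=120   # example cluster timeout in seconds

o2cb_heartbeat_threshold=$(( timeout / 2 + 1 ))
o2cb_idle_timeout_ms=$(( timeout / 2 * 1000 ))

echo "O2CB_HEARTBEAT_THRESHOLD=${o2cb_heartbeat_threshold}"
echo "O2CB_IDLE_TIMEOUT_MS=${o2cb_idle_timeout_ms}"
```

With a 120-second cluster timeout, this yields a heartbeat threshold of 61 and an idle timeout of 60000 ms, which is the kind of value pair Oracle VM writes into /etc/sysconfig/o2cb.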
Starting the cluster activates several services and processes on each of the Oracle VM Servers in the cluster. The most important processes and services are discussed in Table 6.1, “Cluster services”.
| Service | Description |
|---|---|
| o2net | The o2net process creates TCP/IP intra-cluster node communication channels on port 7777 and sends regular keep-alive packets to each node in the cluster to validate whether the nodes are alive. Intra-cluster node communication uses the network with the Cluster Heartbeat role. By default, this is the Server Management network; you can, however, create a separate network for this function. See Section 5.6, “How are Network Functions Separated in Oracle VM?” for information about the Cluster Heartbeat role. Make sure the firewall on each Oracle VM Server in the cluster allows network traffic on the heartbeat network. By default, the firewall is disabled on Oracle VM Servers after installation. |
| o2hb-diskid | The server pool cluster also employs a disk heartbeat check. The o2hb process is responsible for the global disk heartbeat component of the cluster. The heartbeat feature uses a file in the hidden region of the server pool file system. Each pool member writes to its own block of this region every two seconds, indicating it is alive, and also reads the region to maintain a map of live nodes. If a pool member stops writing to its block, the Oracle VM Server is considered dead and is fenced, which temporarily removes it from the active server pool. This fencing process allows the active Oracle VM Servers in the pool to access the resources of the fenced Oracle VM Server. When the fenced server comes back online, it rejoins the server pool. However, fencing is internal behavior that does not result in noticeable changes to the server pool configuration. For instance, the Oracle VM Manager Web Interface still displays a fenced Oracle VM Server as a member of the server pool. |
| o2cb | The o2cb service is central to cluster operations. When an Oracle VM Server boots, the o2cb service starts automatically. This service must be up for the mount of shared repositories to succeed. |
| ocfs2 | The OCFS2 service is responsible for the file system operations. This service also starts automatically. |
| ocfs2_dlm and ocfs2_dlmfs | The DLM modules (ocfs2_dlm, ocfs2_dlmfs) and processes (user_dlm, dlm_thread, dlm_wq, dlm_reco_thread, and so on) are part of the Distributed Lock Manager. OCFS2 uses a DLM to track and manage locks on resources across the cluster. It is called distributed because each Oracle VM Server in the cluster only maintains lock information for the resources it is interested in. If an Oracle VM Server dies while holding locks for resources in the cluster, for example, a lock on a virtual machine, the remaining Oracle VM Servers in the server pool gather information to reconstruct the lock state maintained by the dead Oracle VM Server. |
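The disk heartbeat described above (each pool member writes to its own block of the heartbeat region every two seconds, and reads the region to build a map of live nodes) can be sketched with a toy model. The class and names below are purely illustrative; the real heartbeat region lives in a hidden area of the server pool file system and is managed by o2hb, not by user code:

```python
class HeartbeatRegion:
    """Toy model of the heartbeat region in the hidden area of the server
    pool file system: one block per pool member, holding a counter the
    member increments every two seconds to indicate it is alive."""

    def __init__(self, node_ids):
        self.blocks = {n: 0 for n in node_ids}

    def write_beat(self, node_id):
        self.blocks[node_id] += 1  # the member proves it is alive

def live_node_map(previous, current):
    """Compare two successive reads of the region: a member whose block
    has not changed since the last read is a candidate for fencing."""
    return {n: current[n] > previous[n] for n in current}

region = HeartbeatRegion(["server1", "server2", "server3"])
snapshot = dict(region.blocks)
region.write_beat("server1")
region.write_beat("server2")  # server3 misses its heartbeat window
print(live_node_map(snapshot, region.blocks))
# {'server1': True, 'server2': True, 'server3': False}
```

In the real cluster, a member that misses enough consecutive beats (the o2cb heartbeat threshold) is fenced by a machine reset so that the surviving members can safely recover its resources.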
Do not manually modify the cluster configuration files, or start and stop the cluster services. Oracle VM Manager automatically starts the cluster on Oracle VM Servers that belong to a server pool. Manually configuring or operating the cluster may lead to cluster failure.
When you create a repository on a physical disk, an OCFS2 file system is created on the physical disk. This occurs for local repositories as well. The resources in the repositories, for example, virtual machine configuration files, virtual disks, ISO files, templates and virtual appliances, can then be shared safely across the server pool. When a server pool member stops or dies, the resources owned by the departing server are recovered, and the change in status of the server pool members is propagated to all the remaining Oracle VM Servers in the server pool.
Figure 6.1, “Server Pool clustering with OCFS2 features” illustrates server pool clustering, the disk and network heartbeats, and the use of the DLM feature to lock resources across the cluster.

Figure 6.1, “Server Pool clustering with OCFS2 features” represents a server pool with three Oracle VM Servers. The server pool file system associated with this server pool resides on an NFS share. During server pool creation, the NFS share is accessed, a disk image is created on the NFS share and the disk image is formatted as an OCFS2 file system. This technique allows all Oracle VM Server pool file systems to be accessed in the same manner, using OCFS2, whether the underlying storage element is an NFS share, an iSCSI LUN or a Fibre Channel LUN.
The network heartbeat, which is illustrated as a private network connection between the Oracle VM Servers, is configured before creating the first server pool in your Oracle VM environment. After the server pool is created, the Oracle VM Servers are added to the server pool. At that time, the cluster configuration is created, and the cluster state changes from off-line to heartbeating. Finally, the server pool file system is mounted on all Oracle VM Servers in the cluster and the cluster state changes from heartbeating to DLM ready. As seen in Figure 6.1, “Server Pool clustering with OCFS2 features”, the heartbeat region is global to all Oracle VM Servers in the cluster, and resides on the server pool file system. Using the network heartbeat, the Oracle VM Servers establish communication channels with other Oracle VM Servers in the cluster, and send keep-alive packets to detect any interruption on the channels.
For each newly added repository on a physical storage element, an OCFS2 file system is created on the repository, and the repository is usually presented to all Oracle VM Servers in the pool. Figure 6.1, “Server Pool clustering with OCFS2 features” shows that Repository 1 and Repository 2 are accessible by all of the Oracle VM Servers in the pool. While this is the usual configuration, it is also feasible that a repository is accessible by only one Oracle VM Server in the pool. This is indicated in the figure by Repository 3, which is accessible by Oracle VM Server 1 only. Any virtual machine whose resources reside on this repository cannot take advantage of the high availability feature afforded by the server pool.
Note that repositories built on NFS shares are not formatted as OCFS2 file systems. See Section 3.9, “Where are Virtual Machine Resources Located?” for more information on repositories.
Figure 6.1, “Server Pool clustering with OCFS2 features” shows several virtual machines with resources in shared Repositories 1 and 2. As virtual machines are created, started, stopped, or migrated, the resources for these virtual machines are locked by the Oracle VM Servers needing these resources. Each Oracle VM Server ends up managing a subset of all the locked resources in the server pool. A resource may have several locks against it. An exclusive lock is requested when anticipating a write to the resource while several read-only locks can exist at the same time on the same resource. Lock state is kept in memory on each Oracle VM Server as shown in the diagram. The distributed lock manager (DLM) information kept in memory is exposed to user space in the synthetic file system called dlmfs, mounted under /dlm. If an Oracle VM Server fails, its locks are recovered by the other Oracle VM Servers in the cluster and virtual machines running on the failed Oracle VM Server are restarted on another Oracle VM Server in the cluster. If an Oracle VM Server is no longer communicating with the cluster via the heartbeat, it can be forcibly removed from the cluster. This is called fencing. An Oracle VM Server can also fence itself if it realizes that it is no longer part of the cluster. The Oracle VM Server uses a machine reset to fence. This is the quickest way for the Oracle VM Server to rejoin the cluster.
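Because dlmfs is a synthetic file system mounted under /dlm, the lock domains a server participates in can be inspected from user space. The helper below is an illustrative sketch only: it simply lists the /dlm directory on an Oracle VM Server in a clustered pool, and returns an empty list on machines where dlmfs is not mounted:

```python
import os

DLMFS_MOUNT = "/dlm"  # dlmfs mount point, as described above

def list_lock_domains(mount=DLMFS_MOUNT):
    """Return the DLM lock domains this server participates in, or an
    empty list when dlmfs is not mounted (e.g. an unclustered server)."""
    if not os.path.isdir(mount):
        return []
    return sorted(os.listdir(mount))

for domain in list_lock_domains():
    print(domain)
```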
6.9.1.1 Troubleshooting Cluster-related Problems for x86 Server Pools
There are some situations where removing an Oracle VM Server from a server pool may generate an error. Typical examples include the situation where an OCFS2-based repository is still presented to the Oracle VM Server at the time that you attempt to remove it from the server pool, or if the Oracle VM Server has lost access to the server pool file system or the heartbeat function is failing for that Oracle VM Server. The following list describes steps that can be taken to handle these situations.
- Make sure that there are no repositories presented to the server when you attempt to remove it from the server pool. If this is the cause of the problem, the error that is displayed usually indicates that there are still OCFS2 file systems present. See Present or Unpresent Repository in the Oracle VM Manager User's Guide for more information.
- If a pool file system is causing the remove operation to fail, other processes might be working on the pool file system during the unmount. Try removing the Oracle VM Server at a later time.
- In a case where you try to remove a server from a clustered server pool on a newly installed instance of Oracle VM Manager, it is possible that the file server has not been refreshed since the server pool was discovered in your environment. Try refreshing all storage and all file systems on your storage before attempting to remove the Oracle VM Server.
- In the situation where the Oracle VM Server cannot be removed from the server pool because the server has lost network connectivity with the rest of the server pool, or the storage where the server pool file system is located, a critical event is usually generated for the server in question. Try acknowledging any critical events that have been generated for the Oracle VM Server in question. See Events Perspective in the Oracle VM Manager User's Guide for more information. Once these events have been acknowledged you can try to remove the server from the server pool again. In most cases, the removal of the server from the server pool succeeds after critical events have been acknowledged, although some warnings may be generated during the removal process. Once the server has been removed from the server pool, you should resolve any networking or storage access issues that the server may be experiencing.
- If the server is still experiencing trouble accessing storage and all critical events have been acknowledged and you are still unable to remove it from the server pool, try to reboot the server to allow it to rejoin the cluster properly before attempting to remove it again.
- If the server pool file system has become corrupt for some reason, or a server still contains remnants of an old stale cluster, it may be necessary to completely erase the server pool and reconstruct it from scratch. This usually involves performing a series of manual steps on each Oracle VM Server in the cluster and should be attempted with the assistance of Oracle Support.
6.9.2 Clustering for SPARC Server Pools
Since the Oracle OCFS2 file system caters for Linux and not for Solaris, SPARC server pools are unable to use this file system to implement clustering functionality. Therefore, clustering for SPARC server pools cannot be implemented on physical disks and is limited to using NFS storage to host the cluster file system. Clustering for SPARC server pools relies on an additional package, which must be installed in the control domain of each Oracle VM Server in the server pool. This package contains a distributed lock manager (DLM) that is used to facilitate the cluster. Installation of this package is described in more detail in Installing the Distributed Lock Manager (DLM) Package in the Oracle VM Installation and Upgrade Guide.
The DLM package used to achieve clustering for SPARC server pools is a port of the tools that are used within OCFS2 on Linux, but exclude the actual OCFS2 file system itself. The DLM package includes:
- A disk heartbeat to detect live servers.
- A network heartbeat for communication between the nodes.
- A Distributed Lock Manager (DLM) which allows shared disk resources to be locked and released by the servers in the cluster.
The only major difference between clustering on SPARC and on x86 is that there are limitations to the types of shared disks that SPARC infrastructure can use to host the cluster file system. Without OCFS2, clustering depends on a file system that is already built to facilitate shared access, which is why only NFS is supported for this purpose. Unlike in x86 environments, when NFS is used to host a SPARC server pool file system, an OCFS2 disk image is not created on the NFS share. Instead, the cluster data is simply stored directly on the NFS file system itself.
With this information in mind, the description provided in Section 6.9.1, “Clustering for x86 Server Pools” largely applies equally to clustering on SPARC, although the implementation does not use OCFS2.
A final point to bear in mind is that clustering for SPARC server pools is only supported where a single control domain has been configured on all of the Oracle VM Servers in the server pool. If you have decided to make use of multiple service domains, you must configure an unclustered server pool. See Section 6.10, “Unclustered Server Pools” for more information.
6.10 Unclustered Server Pools
When creating a server pool, you specify whether the servers in the pool will be part of a cluster or not. In most cases, you create a clustered server pool. You can create a non-clustered pool when all servers in the pool are expected to use only NFS shares as repositories. If your Oracle VM Servers are also expected to access repositories on shared physical disks, then these servers must be part of a clustered server pool. Unclustered server pools are more common in SPARC environments, where the use of multiple service domains prevents clustering but offers a more fault-tolerant and robust platform.
Figure 6.2, “Unclustered Server Pools Using Only NFS Storage” illustrates server pools in an unclustered configuration, with shared access to resources on NFS storage but no HA features for the servers.

Non-clustered server pools do not require a server pool file system.
A non-clustered server pool does not support HA for virtual machines deployed on its servers. If a server fails, the virtual machines on this server have to be restarted manually on a server in this server pool, or possibly on a server in another server pool, if that server pool also has access to the repositories needed for deploying the virtual machines on the failed server. Live Migration is supported between servers in a non-clustered pool if the servers have the same CPU affinity (same family and type of CPU).
Converting non-clustered server pools to clustered server pools is not supported in Release 3.4 of Oracle VM.
6.11 How does High Availability (HA) Work?
Oracle VM has high availability (HA) functionality built in. Even though there is only one Oracle VM Manager in the environment, it distributes vital information over the servers it manages, so that in case of failure the Oracle VM Manager and its infrastructure database can be rebuilt. For virtual machine HA, Oracle VM Servers can be clustered so that if one server fails, the virtual machines can be automatically migrated to another server as all virtual machine data is on shared storage and not directly on the Oracle VM Server. In case of predictable failures or scheduled maintenance, virtual machines can be moved to other members of the server pool using live migration.
In addition, Oracle VM supports HA networking and storage, but these are configurations the system administrator must implement outside Oracle VM Manager (RAID, multipathing, etc.).
You can set up HA to help ensure the uninterrupted availability of a virtual machine. If HA is configured and an Oracle VM Server is restarted or shut down, the virtual machines running on it are either restarted on, or migrated to, another Oracle VM Server.
HA also takes precedence over HugePages rules. For example, suppose you have a server pool with two servers. You enable HugePages on the virtual machines running on server A, but not on the virtual machines running on server B, and you enable HA for all virtual machines on both servers. If either server A or server B stops running, the virtual machines are migrated to the server that is still running. This migration occurs despite the rule that prevents virtual machines with different HugePages settings from running on the same server.
If you have set the inbound migration lock feature on an Oracle VM Server, then the Oracle VM Manager does not create or migrate new virtual machines on that server, but virtual machines already running on the server may be migrated to other Oracle VM Servers in a server pool.
If you have HA configured for a server, the inbound migration lock feature does not protect a server from inbound migration when failover occurs.
See Section 7.12, “How Can I Protect Virtual Machines?” for more information on using the inbound migration lock feature.
The following are the prerequisites to implement HA:
- The server pool must contain multiple Oracle VM Servers. HA cannot be implemented with a stand-alone Oracle VM Server.
- The server pool must be clustered.
- All Oracle VM Servers must be Oracle VM Server Release 3.0 or above.
- Each instance of Oracle VM Server must be at the same release version for live migration to succeed. Virtual machines cannot live migrate to an instance of Oracle VM Server at an earlier release version than the Oracle VM Server where the virtual machine is running. This condition can prevent HA from functioning successfully.
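The prerequisites above can be summarized in a short sketch that validates a server pool description before relying on HA. The field names are illustrative only, not an Oracle VM Manager API:

```python
def ha_prerequisites_met(pool):
    """Check the HA prerequisites listed above for a server pool.

    pool: {"clustered": bool, "server_versions": ["3.4.6", ...]}
    (the field names are illustrative, not an Oracle VM Manager API)
    """
    versions = pool["server_versions"]
    checks = {
        "multiple_servers": len(versions) > 1,     # no stand-alone server
        "clustered_pool": pool["clustered"],       # pool must be clustered
        "release_3_or_above": all(int(v.split(".")[0]) >= 3 for v in versions),
        "matching_releases": len(set(versions)) == 1,  # needed for live migration
    }
    return all(checks.values()), checks

ok, checks = ha_prerequisites_met(
    {"clustered": True, "server_versions": ["3.4.6", "3.4.6"]})
print(ok)
# True
```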
To use HA, you must enable it both on the server pool and on the virtual machines, as shown in Figure 6.3, “Enabling HA”. When HA is enabled at both levels and an Oracle VM Server is shut down or fails, the virtual machines are migrated or restarted on another available Oracle VM Server.

To automatically configure the server pool cluster and enable HA in a server pool, the server pool must be created with clustering enabled. See Create Server Pool in the Oracle VM Manager User's Guide for more information on creating a server pool.
To enable HA on a virtual machine, high availability must be enabled when you create or edit a virtual machine. See Create Virtual Machine and Edit Virtual Machine in the Oracle VM Manager User's Guide for more information on creating and editing a virtual machine.
The following conditions apply to HA environments:
- If HA is enabled and you want to restart, shut down, or delete an Oracle VM Server, you must first migrate the running HA-enabled virtual machines to another available Oracle VM Server. For information on migrating virtual machines, see Migrate or Move Virtual Machines in the Oracle VM Manager User's Guide.
- If there are no Oracle VM Servers available, HA-enabled virtual machines are shut down (powered off) and are restarted when an Oracle VM Server becomes available.
- If an Oracle VM Server fails, all running virtual machines are restarted automatically on another available Oracle VM Server. Note that this occurs after the cluster timeout has elapsed for the Oracle VM Server within the cluster. See Section 6.9, “How do Server Pool Clusters Work?” for more information.
- If an Oracle VM Server fails and no other Oracle VM Servers are available, all running virtual machines are restarted when an Oracle VM Server becomes available.
- If you shut down an HA-enabled virtual machine from within the guest operating system, the virtual machine automatically restarts. To shut down an HA-enabled virtual machine, you must stop the virtual machine from Oracle VM Manager. See Stop Virtual Machines in the Oracle VM Manager User's Guide.
Figure 6.4, “HA in effect for an Oracle VM Server failure” shows an Oracle VM Server failing and the virtual machines restarting on other Oracle VM Servers in the server pool.
You should test your HA configuration to ensure it is properly configured in the event of a real failure.
Figure 6.5, “HA in effect for an Oracle VM Server restart or shut down” shows an Oracle VM Server restarting or shutting down and the virtual machines migrating to other Oracle VM Servers in the server pool. In this example, the virtual machines are running and so live migration can be performed and the virtual machines continue to run, uninterrupted. Live migration is not a feature of HA, but can be used in conjunction with, or independently of, HA. For more information on live migration, see Migrate or Move Virtual Machines in the Oracle VM Manager User's Guide.
If you do not have HA enabled, before you shut down an Oracle VM Server, you should migrate all virtual machines to another Oracle VM Server (either using standard virtual machine migration or live migration), or have them automatically migrated by placing the server into maintenance mode.
6.12 What are Server Pool Policies?
Managing load and reducing power consumption are two of the major benefits of virtualization. When a server is already under significant load, it is preferable to distribute the virtual machines that it is running across less utilized servers within the server pool. Equally, during periods of low utilization it is preferable to consolidate virtual machines across as few servers as possible so that unused servers can be powered off to reduce energy consumption.
Oracle VM provides a facility to handle this kind of behavior automatically. This facility is handled by creating a server pool policy. Server pool policies allow you to define the different options that you wish to support. These are defined as follows:
- Distributed Resource Scheduling: Used to optimize resource utilization through a form of load balancing. See Section 6.12.1, “Distributed Resource Scheduler (DRS)” for more information.
- Distributed Power Management: Used to reduce energy consumption by consolidating virtual machines across fewer servers. See Section 6.12.2, “Distributed Power Management (DPM)” for more information.
It is also possible to apply these policies to the networks that are available within a server pool, by setting network utilization thresholds that trigger these behaviors within the server pool. This is discussed in more detail in Section 6.12.3, “DRS/DPM Network Policies”.
If you have set the inbound migration lock feature to disallow new virtual machines on an Oracle VM Server, then any server pool policies you set are restricted from migrating virtual machines, or creating new ones on the server. See Section 7.12, “How Can I Protect Virtual Machines?” for more information on using the inbound migration lock feature.
All Oracle VM Servers must have matching release numbers for either of these server pool policies to be effective. If the release numbers for Oracle VM Servers do not match for a significant length of time, virtual machines running on Oracle VM Servers with higher release numbers are unable to live migrate to Oracle VM Servers with lower release numbers. These policies perform a check before attempting a migration and prevent the migration if the target server does not have a matching release number. Therefore, in an environment with mixed server versions, server pool policies may be unable to take effect.
6.12.1 Distributed Resource Scheduler (DRS)
The Distributed Resource Scheduler (DRS) optimizes virtual machine resource utilization in a server pool. DRS automatically moves running virtual machines to another Oracle VM Server in a server pool if any of the Oracle VM Servers exceed a specified CPU threshold for a specified period of time. DRS continuously samples performance data from every Oracle VM Server and every virtual machine.
The movement of virtual machines is policy-driven. When a threshold is reached, Oracle VM Manager live migrates the running virtual machine from one Oracle VM Server to another, without down time. Oracle VM Manager allows you to specify a DRS threshold for each server pool, and to choose which Oracle VM Servers participate in the policy.
See Define or Edit Server Pool Policies in the Oracle VM Manager User's Guide for information on enabling and configuring the DRS in a server pool.
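The DRS decision described above, live migrating virtual machines away from servers whose CPU utilization stays over the threshold for a sustained period, can be illustrated with a simplified sketch. The threshold, window length, and sample data below are hypothetical; the real scheduler runs inside Oracle VM Manager:

```python
def drs_candidates(cpu_samples, threshold_pct, sustain_samples):
    """Return servers whose CPU utilization exceeded the DRS threshold
    for the whole sampling window, i.e. candidates for having virtual
    machines live-migrated away.

    cpu_samples: {server: [utilization %, oldest first]}
    """
    over = []
    for server, samples in cpu_samples.items():
        recent = samples[-sustain_samples:]
        if len(recent) == sustain_samples and all(s > threshold_pct for s in recent):
            over.append(server)
    return over

samples = {
    "server1": [40, 45, 42, 41],
    "server2": [80, 85, 90, 88],   # sustained high load
    "server3": [30, 95, 20, 25],   # brief spike only, not sustained
}
print(drs_candidates(samples, threshold_pct=75, sustain_samples=3))
# ['server2']
```

Requiring the threshold to be exceeded across the whole window, rather than on a single sample, mirrors why DRS acts on sustained load rather than momentary spikes.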
In addition, you can define the default start-up policy for all of your virtual machines at the server pool level. The default VM start policy is Best Server, which determines VM placement based on DRS and DPM algorithms. As of Release 3.4.5, a new VM start policy option named Balance Server is available, which optimizes the CPU and memory utilization across the servers in a pool. It is possible to override the default policy within the configuration of each virtual machine.
See Create Server Pool in the Oracle VM Manager User's Guide for additional information on VM start policies.
6.12.2 Distributed Power Management (DPM)
Distributed Power Management (DPM) is used during periods of relatively low resource utilization to increase the consolidation ratio, so that virtual machines run on fewer Oracle VM Servers. DPM dynamically migrates virtual machines from under-utilized Oracle VM Servers. Oracle VM Servers left without running virtual machines can then be powered off, conserving power until they are needed again.
DPM aims to keep only the minimum necessary number of Oracle VM Servers running. If a periodic check reveals that an Oracle VM Server's CPU utilization is operating below a user-set level, virtual machines are live migrated to other Oracle VM Servers in the same server pool.
When all virtual machines are migrated, the Oracle VM Server is shut down.
If an Oracle VM Server exceeds the DPM policy CPU threshold, Oracle VM Manager looks for other Oracle VM Servers to which it can migrate virtual machines from the busy Oracle VM Server. If no powered-on Oracle VM Servers are available, Oracle VM Manager finds and starts an Oracle VM Server using its Wake-On-LAN capability. When that Oracle VM Server is running, Oracle VM Manager off-loads virtual machines from the busy Oracle VM Server to the newly started Oracle VM Server to balance the overall load. It is a prerequisite that all servers participating in DPM have Wake-On-LAN enabled in the BIOS for the physical network interface that connects to the dedicated management network.
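The two DPM decisions, consolidating under-utilized servers and waking a powered-off server over Wake-On-LAN when the pool is busy, can be sketched in simplified form. The thresholds and server names are hypothetical, and real DPM logic runs inside Oracle VM Manager:

```python
def dpm_pass(cpu_util, low_pct, high_pct, powered_off):
    """One simplified DPM decision pass over a server pool.

    cpu_util: {server: CPU utilization %} for powered-on servers
    powered_off: powered-off servers available for Wake-On-LAN
    """
    # Consolidation: under-utilized servers are candidates to have their
    # virtual machines live-migrated away so they can be powered off.
    shutdown = [s for s, u in cpu_util.items()
                if u < low_pct and len(cpu_util) > 1]
    # Expansion: if every powered-on server is over the threshold and a
    # powered-off server exists, start one via Wake-On-LAN.
    wake = []
    if cpu_util and all(u > high_pct for u in cpu_util.values()) and powered_off:
        wake = [powered_off[0]]
    return {"shutdown": shutdown, "wake_on_lan": wake}

# Both servers busy and server3 powered off: wake server3 to take load.
print(dpm_pass({"server1": 85, "server2": 90}, low_pct=20, high_pct=75,
               powered_off=["server3"]))
# {'shutdown': [], 'wake_on_lan': ['server3']}
```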
Oracle VM Manager allows you to specify a DPM threshold for each server pool, and to choose which Oracle VM Servers participate in the policy.
See Define or Edit Server Pool Policies in the Oracle VM Manager User's Guide for information on enabling and configuring DPM in a server pool.
6.12.3 DRS/DPM Network Policies
Both the DRS and DPM policies can also be set for the networks used by Oracle VM Servers in a server pool. When a network used by an Oracle VM Server exceeds its threshold, virtual machines are migrated to other Oracle VM Servers to either balance the resources used (DRS), or reduce the power used (DPM). Each network on an Oracle VM Server can have a threshold set. The threshold applies to either the received data or the transmitted data. If the threshold is set to, say, 50%, then when an Oracle VM Server's receive or transmit traffic on that network exceeds 50% of the theoretical capacity of the network, the Oracle VM Server is deemed to be over the threshold. The theoretical capacity of a network on an Oracle VM Server is equal to the port speed of the physical Ethernet adapter on the Oracle VM Server. If the network is bonded in a fail-over configuration, the port capacity is equal to the port speed of one of the Ethernet adapters. If the network is bonded on an Oracle VM Server with link aggregation, the network capacity is equal to the sum of the speeds of the bonded Ethernet adapters.
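The capacity and threshold rules above can be expressed as a short sketch (the port speeds and traffic figures are illustrative):

```python
def network_capacity_mbps(port_speed_mbps, bonded_ports, aggregated):
    """Theoretical network capacity per the rules above: a fail-over bond
    counts one port's speed; link aggregation sums the bonded ports."""
    if bonded_ports > 1 and aggregated:
        return port_speed_mbps * bonded_ports
    return port_speed_mbps

def over_threshold(rx_mbps, tx_mbps, capacity_mbps, threshold_pct):
    """A server is over the policy threshold when either its receive or
    its transmit traffic exceeds the given share of capacity."""
    limit = capacity_mbps * threshold_pct / 100.0
    return rx_mbps > limit or tx_mbps > limit

# Two 10 Gbit/s ports bonded with link aggregation -> 20 Gbit/s capacity;
# a 50% threshold therefore triggers above 10 Gbit/s in either direction.
cap = network_capacity_mbps(10_000, bonded_ports=2, aggregated=True)
print(cap, over_threshold(rx_mbps=12_000, tx_mbps=3_000,
                          capacity_mbps=cap, threshold_pct=50))
# 20000 True
```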
You set the network policies for DRS and DPM when you set up the server pool policy. See Define or Edit Server Pool Policies in the Oracle VM Manager User's Guide for information on enabling and configuring network DRS and DPM policies in a server pool.
It is important to understand that a network policy can be defined for a server pool, even if that network is not used by any servers in the server pool. In this case, the policy is simply ignored, however if a server with the network attached is added to the server pool at a later date, the policy is automatically enabled for the network attached to that server. If you define a network policy on a server pool and later remove all of the servers that had that network attached, the policy still remains enforced on the server pool. Therefore, it is always good practice to regularly check the server pool policy when adding servers to a server pool, since an old policy may be in place that affects the behavior of the network.
6.13 What are Anti-Affinity Groups?
Anti-affinity groups specify that specific virtual machines should never run on the same Oracle VM Server. An anti-affinity group applies to all the Oracle VM Servers in a server pool. You may want to set up anti-affinity groups when you want to build in redundancy or load balancing for specific applications in your environment.
If you add a virtual machine to an anti-affinity group that already has a virtual machine in the group running on the same Oracle VM Server, the job is aborted and the virtual machine is not added to the group. To add the virtual machine to the anti-affinity group, migrate it to another Oracle VM Server, then add it to the group.
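The check that causes the job to abort can be sketched as follows. The VM and server names, and the placement mapping, are hypothetical; Oracle VM Manager performs this validation internally:

```python
def can_add_to_group(vm, group_vms, placement):
    """Check whether a VM may join an anti-affinity group: the add job
    is aborted if another group member already runs on the same server.

    placement: {vm: server the VM currently runs on}
    Returns (allowed, conflicting group members).
    """
    same_server = [other for other in group_vms
                   if placement.get(other) == placement.get(vm)]
    return (False, same_server) if same_server else (True, [])

# vm3 runs on server1, which already hosts group member vm1:
placement = {"vm1": "server1", "vm2": "server2", "vm3": "server1"}
print(can_add_to_group("vm3", {"vm1", "vm2"}, placement))
# (False, ['vm1'])
```

When the check fails, the remedy the text describes is to migrate the virtual machine to another Oracle VM Server first, then add it to the group.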
If you have set the inbound migration lock feature to disallow new virtual machines on an Oracle VM Server, then any anti-affinity groups you set are restricted from migrating virtual machines, or creating new ones on the server. See Section 7.12, “How Can I Protect Virtual Machines?” for more information on using the inbound migration lock feature.
Details on creating, editing, and deleting anti-affinity groups are available in Anti-Affinity Groups Perspective in the Oracle VM Manager User's Guide.
6.14 What are Server Processor Compatibility Groups?
To ensure successful virtual machine live migration, Oracle VM Manager requires the processor family and model number of the source and destination computers to be the same. A server processor compatibility group is a group of Oracle VM Servers with compatible processors, where a virtual machine running on one Oracle VM Server can safely be migrated to, and continue to run on, another Oracle VM Server. Although Oracle VM Manager contains rules for server processor compatibility, you can create custom compatibility groups that span different processor families and models, provided you are certain that the applications running within your virtual machines can survive such migrations. If live migration is attempted between incompatible processors, an error message is displayed and the migration fails. You should therefore be absolutely certain that migrations are fully supported between all of the servers that belong to a custom server processor compatibility group.
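The default compatibility rule, matching processor family and model number, can be sketched as a simple predicate. The dictionary field names here are illustrative, not an Oracle VM Manager API:

```python
def same_default_compat_group(server_a, server_b):
    """Servers fall into the same default server processor compatibility
    group when their processor family and model number match, per the
    rule described above.

    Each server is described as e.g. {"family": 6, "model": 85};
    the field names are illustrative, not an Oracle VM API.
    """
    return (server_a["family"] == server_b["family"]
            and server_a["model"] == server_b["model"])

print(same_default_compat_group({"family": 6, "model": 85},
                                {"family": 6, "model": 85}))  # True
print(same_default_compat_group({"family": 6, "model": 85},
                                {"family": 6, "model": 79}))  # False
```

A custom compatibility group effectively overrides this predicate for servers you have verified yourself, which is why mixing unverified processor models in one group risks failed live migrations.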
All Oracle VM Servers are added to a default server processor compatibility group as they are discovered. A default server processor compatibility group is created when an Oracle VM Server is discovered if that Oracle VM Server has a processor that is new and unique to Oracle VM Manager. This happens automatically to ensure that live migration and high availability functions can be performed safely and without errors. You should never remove or edit the default server processor compatibility groups directly.
Each server processor compatibility group may include Oracle VM Servers that are members of one or more server pools. An Oracle VM Server may be included in multiple server processor compatibility groups. You can create server processor compatibility groups and select which Oracle VM Servers to include according to your needs. There is no limit to the number of server processor compatibility groups you may have. It is important to understand that when you create a server processor compatibility group, you are defining which servers can take part in live migration and other high availability functions. If you create a server processor compatibility group that contains servers with incompatible processors, live migration and many other functions may fail within your environment. Therefore, you should only create server processor compatibility groups if you are confident that live migration can take place across all of the servers within the group.
Since server processor compatibility groups are used to define which servers may be used for successful virtual machine live migration, it is worth reiterating that live migration is only supported between servers with matching release numbers. If you have an environment where there are mixed server versions, these servers should not be in the same compatibility group unless you are in the process of upgrading all of your servers to the same release.
More information on configuring server processor compatibility groups is available in Server Processor Compatibility Perspective in the Oracle VM Manager User's Guide.