
How to Jailbreak iOS8.0-8.1.2


Jailbreak tutorial for iOS 8.0-8.1.2

Supported devices: iPhone 4s through iPhone 6 Plus; iPad, iPad Air/Air 2; iPad mini/mini 2/mini 3; iPod touch 5



Download the jailbreak tool from here

 

Confirm that your iOS device version is 8.0-8.1.2

Check the version via:
Settings - General - About - Version

Attention: if a device updated over the air (OTA) fails to jailbreak,
please download the latest firmware and flash it, then retry the jailbreak.

Back up your data before jailbreaking

Please use iTunes to back up your data to avoid unnecessary data loss.

Turn off passcode and "Find My iPhone" function

Before jailbreaking, please make sure the passcode and the "Find My iPhone" function are turned off, or the jailbreak will fail.

1. Turn off passcode via: Settings - Passcode - Turn Passcode Off
2. Turn off "Find my iPhone" via: Settings - iCloud - Find My iPhone - Click to turn off

Attention: if you forget your Apple ID or password, click here to find it via the official Apple website.


Download and open the TaiG Jailbreak tool for iOS 8 and click the "Start" button. The program will run automatically; no other action is needed.

During the jailbreak, please keep the device connection stable and wait patiently.

After the jailbreak, the device will restart automatically, and you will see the result.

Unlock your device; if Cydia appears on the screen, the untethered jailbreak is finished. Now you can enjoy TaiG Jailbreak.


Windows Server 2012 Unattended Installation

One of the common tasks that you may do on a regular basis is installing Windows Server 2012 and you want to automate it as much as possible. Using similar tools to those used with Windows Server 2008 R2, you can create an autounattend.xml file that you inject into the Windows Server 2012 installation ISO for a hands-free install.


The same set of instructions would also work for Windows 8, but you will need to use the Windows 8 install.wim image, and also pay attention to whether you want to automate a 32-bit or 64-bit installation. The autounattend.xml can contain configuration data for both, so only one XML file is needed. If you want to download a pre-configured autounattend.xml file, you can find it here in my SkyDrive folder.

1. Download the Windows ADK (Assessment and Deployment Kit) from this link. Never mind that it says Windows 8, as it will work with Windows Server 2012 since they are the same code base.

2. Start the installation process and after a long download select the two options below (Deployment Tools and Windows Preinstallation Environment (Windows PE)). WinPE is technically optional, but in case you need it in the future, I’d install it anyway.


3. After the installation completes go to the Start Menu and select Windows Kits > Windows ADK > Windows System Image Manager.

4. Mount the ISO image of Windows Server 2012, go to the sources directory and copy install.wim to a local drive, such as D.

5. From the File menu click on Select Windows Image, find the install.wim you copied, then select the edition that you want to build an answer file for. I selected SERVERSTANDARD.
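
If you want to confirm the exact edition names inside install.wim (you will need the image name again, verbatim, in step 16), the Deployment Tools installed earlier include DISM; a minimal sketch, assuming install.wim was copied to D:\ as in step 4 (the index numbers and names on your media may differ):

dism /Get-WimInfo /WimFile:D:\install.wim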


6. Next it will complain that a catalog cannot be found, so it will build one for you, taking a few minutes. After the catalog is built, from the file menu select New Answer File.

7. Scroll through the Components pane and select amd64_Microsoft-Windows-International-Core-WinPE_6.2.9200.0_neutral, as shown below, right click, and add to Pass 1.


8. In the Answer File pane click on the component, and fill in the language settings as appropriate. In this case it is configured for US English. You can find a list of the codes here. You also need to configure the SetupUILanguage setting.



9. Configuring the disk partitions is tedious, but required. To do that, find amd64_Microsoft-Windows-Setup_6.2.9200.0_neutral and add it to Pass 1 as well.


10. In the Answer File pane right click DiskConfiguration and select Insert New Disk. Right click on CreatePartitions and select Insert New CreatePartition. Configure the partition as shown below. This will create a 100MB primary boot partition. Note: The default in 2008 R2 was 100MB, but in 2012 this is now 350MB. I would suggest using 350MB instead of the 100MB in the screenshot.


11. Create a second partition, but this time set Extend to true, and don’t configure a size. This will use the remainder of the disk size.


12. In the Answer File pane click on Disk and change the ID to 0 and WillWipeDisk to true.


13. Right click on ModifyPartition and select Insert New ModifyPartition. Configure the partition as shown below.


14. Add a second ModifyPartitions and configure as shown below:


15. Drill down to the OSImage option and configure as shown below:


16. Right click InstallFrom and select Insert New Metadata. Configure the metadata as shown below. To determine the proper label just think back to when you opened the Windows image (step 5) and enter the image name exactly as it is listed.


17. Configure the InstallTo and use DiskID 0 and PartitionID 2.


18. Configure the UserData options as shown below.


19. Configure the UserDataProductKey option. The key you use will vary depending on how you are going to activate it (KMS or MAK). You should use the GVLK (generic volume license key) that Microsoft publishes here if you use a KMS server, or your MAK key.


20. Add the amd64_Microsoft-Windows-Shell-Setup_6.2.9200.0_neutral component to Pass 4 specialize.


21. In the Answer File pane click on amd64_Microsoft-Windows-Shell-Setup_6.2.9200.0_neutral  and configure the highlighted items below (use the same key as before). You can change the computer name, or leave it blank and it will create a random name upon installation. For a list of timezone values, click here.


22. Add the amd64_Microsoft-Windows-Shell-Setup_6.2.9200.0_neutral component to Pass 7 oobesystem. In the Answer File pane click on amd64_Microsoft-Windows-Shell-Setup_6.2.9200.0_neutral and configure the highlighted items below.


23. Normally I configure autologon for a count of 2, so my image build process goes quicker and in case I forget the administrator password I configured in the answer file I can reset it during the first two reboots. You will also need to configure the password. Enter a password, and when the answer file is written it will be encrypted.


24. Under UserAccounts, configure the AdministratorPassword with the same password you entered for the AutoLogon information.
 
25. Save the file as autounattend.xml and verify that no errors are shown in the validation pane. You will see a lot of warnings, but that is normal.
 
26. Open the Windows Server 2012 ISO image in an ISO editor, like UltraISO. Add the autounattend.xml file to the ROOT of the ISO image. Save the ISO, and then configure a VM or physical server to boot from it and verify that there are no prompts or errors during the installation process. Note that the disk configuration and data will be wiped during the installation process.
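
If you prefer the command line to an ISO editor, the Deployment Tools installed in step 2 also include oscdimg, which can rebuild a bootable ISO after you copy the disc contents to a folder and place autounattend.xml in that folder's root. A minimal sketch, assuming a BIOS-bootable image and hypothetical paths (a UEFI-bootable image needs a boot entry for efisys.bin instead):

oscdimg -m -n -bC:\WS2012\boot\etfsboot.com C:\WS2012 C:\WS2012-unattend.iso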

How to Get Started Creating Oracle Solaris Kernel Zones in Oracle Solaris 11

Oracle Solaris 11 is a complete, integrated, and open platform engineered for large-scale enterprise environments. Its built-in Oracle Solaris Native Zones technology provides application virtualization through isolated, encapsulated, and highly secure environments that run on top of a single, common kernel. As a result, native zones provide a highly efficient, scalable, zero-overhead virtualization solution that sits at the core of the platform.


With the inclusion of the Kernel Zones feature, Oracle Solaris 11.2 provides a flexible, cost-efficient, cloud-ready solution that is perfect for the data center.

This article describes how to create a kernel zone in Oracle Solaris 11.2, as well as how to configure the kernel zone to your requirements, install it, and boot it.

You will learn about the two main methods of installing a kernel zone: direct installation and installation via an ISO image. In addition, you will learn about a third installation method that enables you to convert a native zone to a kernel zone. You will learn how to update a kernel zone so that it uses a different Oracle Solaris release than the release that is running in the host machine's kernel.

The examples in this article will leave you familiar with the basic procedures for installing, configuring, and managing kernel zones in Oracle Solaris 11.2.

Note: This article demonstrates how to update a kernel zone from Oracle Solaris 11.2 to a later release through examples that mention "Oracle Solaris 11.3" and "Oracle Solaris Next." These examples are purely hypothetical and are for demonstration purposes only; no release later than Oracle Solaris 11.2 is currently available.

About Oracle Solaris Zones and Kernel Zones

Oracle Solaris Zones let you isolate one application from others on the same operating system (OS), allowing you to create a user-, security-, and resource-controlled environment suitable to that particular application. Each Oracle Solaris Zone can contain a complete environment and also allows you to control different resources such as CPU, memory, networking, and storage.

The system administrator who owns the host system can choose to closely manage all the Oracle Solaris Zones on the system. Alternatively, the system administrator can assign rights to other system administrators for specific Oracle Solaris Zones. This flexibility lets you tailor an entire computing environment to the needs of a particular application.

Kernel zones, the newest type of Oracle Solaris Zones, provide all the flexibility, scalability, and efficiency of Oracle Solaris Zones while adding the capability to have zones with independent kernels. This capability is highly useful when you are trying to coordinate the updating of multiple zone environments belonging to different owners.

With kernel zones, the updates can be done at the level of an individual kernel zone at a time that is convenient for each owner. In addition, applications that have specific version requirements can run side by side on the same system and benefit from the high consolidation ratios that Oracle Solaris Zones provide.

Benefits of Each Installation Method

In this article, we will create three kernel zones using different methods:
  • The first method will show how quickly and easily you can create a new kernel zone using a direct installation—that is, an installation based on the OS running on the host system. This is an extremely useful method for getting additional kernel zone environments up and running quickly in response to a new application or user demands.
  • Using the second method, you will learn how to create a kernel zone from an ISO image. This is useful when it is desirable to deploy a specific kernel version to support an application or environment.
  • Using the final method, you will learn how to convert a native zone to a kernel zone. This is useful when you want to update an application or service to run on a later kernel version without affecting the other services running on the system.
Figure 1 summarizes what we will do:


Figure 1. Illustration of the three methods for creating kernel zones

Prerequisites

There are a couple of tasks that need to be completed before we create our first kernel zone. We need to check that the hardware is capable of running kernel zones, and we also need to provide a hint to the system about application memory usage.

Checking the Hardware Capabilities

Kernel zones will run only on certain types of hardware, as follows:
  • Intel CPUs with CPU virtualization (VT-x) enabled in BIOS and with support for Extended Page Tables (EPT), such as Nehalem or newer CPUs
  • AMD CPUs with CPU virtualization (AMD-v) enabled in BIOS and with support for Nested Page Tables (NPT), such as Barcelona or newer CPUs
  • sun4v CPUs with a "wide" partition register, for example, Oracle's SPARC T4 or SPARC T5 processors running a supported firmware version and Oracle's SPARC M5, SPARC M6, or newer processors
You can easily check that the system is capable of running kernel zones by using the virtinfo command, as shown in Listing 1:
 
root@global:~# virtinfo
NAME CLASS
non-global-zone supported
kernel-zone supported

Listing 1

You can see from the output in Listing 1 that kernel zones are supported.

There are some other hardware prerequisites; for a full list, see the Oracle Solaris Kernel Zones documentation.

Providing Information About Application Memory Usage

When using kernel zones, it is necessary to provide a hint to the system about application memory usage. This information is used to limit the growth of the ZFS Adaptive Replacement Cache (ARC) so that more memory stays available for applications and, in this case, for the kernel zones themselves.

Providing this hint is achieved by setting the user_reserve_hint_pct parameter. A script is provided for doing this, and the current recommendation is to set the value to 80.
 
root@global:~# ./set_user_reserve.sh -f 80
Adjusting user_reserve_hint_pct from 0 to 80
Adjustment of user_reserve_hint_pct to 80 successful.

You can find this script and more information by visiting the My Oracle Support website and then accessing Doc ID 1663862.1.

Creating Your First Kernel Zone Using the Direct Installation Method

For a full discussion on all the steps involved in creating a kernel zone and configuring all its attributes, please see Creating and Using Oracle Solaris Kernel Zones. This article will concentrate on a subset of the steps to demonstrate how to quickly get a kernel zone instance up and running.
 

Prerequisites

First, check the status of the ZFS file system and the network, as shown in Listing 2:
 
demo@global:~$ zfs list | grep zones
rpool/VARSHARE/zones 16.5G 348G 32K /system/zones

demo@global:~$ dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 unknown --
net2 phys 1500 unknown --
net0 phys 1500 up --
net3 phys 1500 unknown --

Listing 2

Note: In Listing 2, there are no ZFS datasets associated with any specific zones. We will see later how these are created for you as you install zones. Also note that there are no virtual network interface card (VNIC) devices.

Let's also check the Oracle Solaris version of the global zone, as shown in Listing 3, because we will use this information later:
 
root@global:~# uname -a
SunOS global 5.11 11.2 i86pc i386 i86pc

Listing 3

In Listing 3, we can see the version is Oracle Solaris 11.2.

Note: In this article, we will use uname as a quick way of showing the kernel version of the system. However, that is not the recommended way to check the system version. The recommended way is to query the entire package, as shown in Listing 4, which also indicates that the version is Oracle Solaris 11.2. (See "Understanding Oracle Solaris 11 Package Versioning" for an explanation about how to decipher the output when you query the entire package.)

demo@dcsw-79-168:~$ pkg list entire
NAME (PUBLISHER) VERSION IFO
entire 0.5.11-0.175.2.0.0.41.0 i--

Listing 4

We can also see that the system has a publisher set up:
 
root@global:~# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://ipkg.us.oracle.com/solaris11/dev/
 

Step 1: Create the Kernel Zone

Let's start by creating our first kernel zone using the command line, as shown in Listing 5:
 
root@global:~# zonecfg -z myfirstkz create -t SYSsolaris-kz

Listing 5

In Listing 5, note that all we need to supply is the zone name (myfirstkz) and the kernel zone brand (SYSsolaris-kz).

By default, all Oracle Solaris Zones are configured to have an automatic VNIC called anet, which gives us a network device automatically. We cannot see this network device, but it is automatically created upon booting the zone (and also automatically destroyed upon shutdown). We can check this by using the dladm command:
 
root@global:~# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 unknown --
net2 phys 1500 unknown --
net0 phys 1500 up --
net3 phys 1500 unknown --

We can also see that, as of yet, no storage has been created for our kernel zone:
 
root@global:~# zfs list | grep zones
rpool/VARSHARE/zones 16.5G 348G 32K /system/zones

We can verify that the kernel zone is now in the configured state by using the zoneadm command:
 
root@global:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- myfirstkz configured - solaris-kz excl

Let's take a look at the default settings for the kernel zone that we have created. We can do this by passing the info option to the zonecfg command, as shown in Listing 6:
 
root@global:~# zonecfg -z myfirstkz info
zonename: myfirstkz
brand: solaris-kz
autoboot: false
autoshutdown: shutdown
bootargs:
pool:
scheduling-class:
hostid: 0x3888f5a3
tenant:
anet:
lower-link: auto
allowed-address not specified
configure-allowed-address: true
defrouter not specified
allowed-dhcp-cids not specified
link-protection: mac-nospoof
mac-address: auto
mac-prefix not specified
mac-slot not specified
vlan-id not specified
priority not specified
rxrings not specified
txrings not specified
mtu not specified
maxbw not specified
rxfanout not specified
vsi-typeid not specified
vsi-vers not specified
vsi-mgrid not specified
etsbw-lcl not specified
cos not specified
evs not specified
vport not specified
id: 0
device:
match not specified
storage: dev:/dev/zvol/dsk/rpool/VARSHARE/zones/myfirstkz/disk0
id: 0
bootpri: 0
capped-memory:
physical: 2G

Listing 6

From the output in Listing 6, we can see that the zone is called myfirstkz, that it is a kernel zone (brand: solaris-kz), that we have a boot disk (and its location is dev:/dev/zvol/dsk/rpool/VARSHARE/zones/myfirstkz/disk0) and, finally, that we have 2 GB of physical memory assigned to this kernel zone.

What we don't see is the amount of CPU resources we have for this kernel zone. When nothing is specified, the default is to have one virtual CPU assigned. We'll see how to verify this later when we boot the kernel zone.

Step 2: Install the Kernel Zone

Now that the kernel zone has been created, we need to install it.

For this first installation, we are going to use what is called a direct installation. With a direct installation, the installer runs on the host. It will create and format the kernel zone's boot disk and install Oracle Solaris packages on that disk, using the host's package publishers. Since the installer is running on the host, it can install only the exact version of Oracle Solaris that is actively running on the host.

This installation method makes use of the Oracle Solaris 11 Image Packaging System. You will need to make sure you have access to your Image Packaging System repository; in this case, we have network access to our repository. For more details on the Image Packaging System, see "Introducing the Basics of Image Packaging System (IPS) on Oracle Solaris 11."

Run the following command to install the myfirstkz kernel zone:

root@global:~# zoneadm -z myfirstkz install
Progress being logged to /var/log/zones/zoneadm.20140724T124406Z.myfirstkz.install
pkg cache: Using /var/pkg/publisher.
Install Log: /system/volatile/install.7395/install_log
AI Manifest: /tmp/zoneadm6814.Voa43n/devel-ai-manifest.xml
SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
Installation: Starting ...

Creating IPS image
Installing packages from:
solaris
origin: http://ipkg.us.oracle.com/solaris11/dev/
The following licenses have been accepted and not displayed.
Please review the licenses for the following packages post-install:
consolidation/osnet/osnet-incorporation
Package licenses may be viewed using the command:
pkg info --license

DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 483/483 64276/64276 543.8/543.8 11.6M/s

PHASE ITEMS
Installing new actions 87529/87529
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Installation: Succeeded
Done: Installation completed in 538.018 seconds.

We can check on the status of the myfirstkz kernel zone using the zoneadm command:
 
root@global:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- myfirstkz installed - solaris-kz excl

Note: A kernel zone needs a boot disk on which it is installed; by using the command shown in Listing 7, we can see that this boot disk has been created for us:
 
root@global:~# zfs list | grep zones
rpool/VARSHARE/zones 16.5G 348G 32K /system/zones
rpool/VARSHARE/zones/myfirstkz 16.5G 348G 31K /system/zones/myfirstkz
rpool/VARSHARE/zones/myfirstkz/disk0 16.5G 361G 2.92G -

Listing 7

You can see in Listing 7 that the /myfirstkz/disk0 dataset has been created automatically for you.
 

Step 3: Boot the Kernel Zone and Complete the System Configuration

The final step in getting myfirstkz up and running is to boot it and set up the system configuration. We will boot the zone and then access its console using one command at the command line, as shown in Listing 8, so the majority of the console output can be seen:
 
root@global:~# zoneadm -z myfirstkz boot; zlogin -C myfirstkz
[Connected to zone 'myfirstkz' console]
Boot device: disk0 File and args:
reading module /platform/i86pc/amd64/boot_archive...done.
reading kernel file /platform/i86pc/kernel/amd64/unix...done.
SunOS Release 5.11 Version 11.2 64-bit
Copyright (c) 1983, 2014, Oracle and/or its affiliates. All rights reserved.
Loading smf(5) service descriptions: 183/183
Configuring devices.

Listing 8

Note: The -C option to zlogin shown in Listing 8 lets us access the zone console; the command will bring us into the zone and let us work within the zone.

Because no system configuration files are available, the System Configuration Tool starts up, as shown in Figure 2.

             Figure 2. Initial screen of the System Configuration Tool

Press F2 to continue.

In the System Identity screen (shown in Figure 3), enter myfirstkz as the computer name, and then
press F2 to continue.

             Figure 3. System Identity screen

In the Network screen (shown in Figure 4), enter the network settings appropriate for your network and then press F2. Here we will select Automatically.

             Figure 4. Network screen

In the Time Zone: Regions screen (shown in Figure 5), choose the time zone region appropriate for your location. In this example, we chose Europe. Then press F2.

             Figure 5. Time Zone: Regions screen

In the Time Zone: Locations screen (shown in Figure 6), choose the time zone location appropriate for your location, and then press F2.

             Figure 6. Time Zone: Locations screen

In the Time Zone screen (shown in Figure 7), choose the time zone appropriate for your location, and then press F2.

             Figure 7. Time Zone screen

In the Locale: Language screen (shown in Figure 8), choose the language appropriate for your location, and then press F2.

             Figure 8. Locale: Language screen

In the Locale: Territory screen (shown in Figure 9), choose the language territory appropriate for your location, and then press F2.

             Figure 9. Locale: Territory screen

In the Date and Time screen (shown in Figure 10), set the date and time, and then press F2.

             Figure 10. Date and Time screen

In the Keyboard screen (shown in Figure 11), select the appropriate keyboard, and then press F2.

             Figure 11. Keyboard screen

In the Users screen (shown in Figure 12), choose a root password and enter information for a user account. Then press F2.

            Figure 12. Users screen

In the Support — Registration screen (shown in Figure 13), enter your My Oracle Support credentials. Then press F2.

             Figure 13. Support — Registration screen

In the Support — Network Configuration screen (shown in Figure 14), choose how you will send configuration data to Oracle. Then press F2.

 
             Figure 14. Support — Network Configuration screen

In the System Configuration Summary screen (shown in Figure 15), verify that the configuration you have chosen is correct and apply the settings by pressing F2.

            Figure 15. System Configuration Summary screen

The zone will continue booting, and soon you will see the console login:
 
SC profile successfully generated as:
/etc/svc/profile/sysconfig/sysconfig-20140724-130314/sc_profile.xml

Exiting System Configuration Tool. Log is available at:
/system/volatile/sysconfig/sysconfig.log.287
Hostname: myfirstkz


myfirstkz console login:

The zone is now ready to be logged in to. For this example, we will now exit the console using the "~." escape sequence.

You can check that your zone is booted and running using the zoneadm command:
 
root@global:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 myfirstkz running - solaris-kz excl

As promised, a VNIC was automatically created for us when the zone was booted. We can verify this by using the dladm command shown in Listing 9:
 
root@global:~# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 unknown --
net2 phys 1500 unknown --
net0 phys 1500 up --
net3 phys 1500 unknown --
myfirstkz/net0 vnic 1500 up net0

Listing 9

In Listing 9, we can see the VNIC is listed as myfirstkz/net0.

Step 4: Log In to Your Kernel Zone

The last step is to log in to your zone and have a look. You can do this from the global zone using the zlogin command, as shown in Listing 10:
 
root@global:~# zlogin myfirstkz
[Connected to zone 'myfirstkz' pts/1]
Oracle Corporation SunOS 5.11 11.2 June 2014
root@myfirstkz:~# uname -a
SunOS myfirstkz 5.11 11.2 i86pc i386 i86pc 
 
root@myfirstkz:~# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 dhcp ok 10.134.79.210/24
lo0/v6 static ok ::1/128
net0/v6 addrconf ok fe80::8:20ff:fe47:ca30/10 
 
root@myfirstkz:~# dladm show-link
LINK CLASS MTU STATE OVER
net0 phys 1500 up -- 
 
root@myfirstkz:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 4.65G 10.7G 32.5K /rpool
rpool/ROOT 2.58G 10.7G 31K legacy
rpool/ROOT/solaris-5 2.58G 10.7G 2.08G /
rpool/ROOT/solaris-5/var 510M 10.7G 508M /var
rpool/VARSHARE 2.52M 10.7G 2.43M /var/share
rpool/VARSHARE/pkg 63K 10.7G 32K /var/share/pkg
rpool/VARSHARE/pkg/repositories 31K 10.7G 31K /var/share/pkg/repositories
rpool/VARSHARE/zones 31K 10.7G 31K /system/zones
rpool/dump 1.03G 10.8G 1.00G -
rpool/export 96.5K 10.7G 32K /export
rpool/export/home 64.5K 10.7G 32K /export/home
rpool/export/home/demo 32.5K 10.7G 32.5K /export/home/demo
rpool/swap 1.03G 10.8G 1.00G - 
 
root@myfirstkz:~# zonename
global 
 
root@myfirstkz:~# exit
logout

[Connection to zone 'myfirstkz' pts/1 closed]

Listing 10

Note: In Listing 10, we did not use the -C option for the zlogin command, which means we are not accessing the zone via its console. This is why we can simply exit the shell at the end to leave the zone.

Let's look at the output shown in Listing 10 to see what we have:
  • The output of the uname command shows that we are running on Oracle Solaris 11.2—the same kernel version used in the global zone in which our myfirstkz kernel zone is running.
  • The output of the ipadm command shows the IP address for myfirstkz. There are four entries: two loopback devices (IPv4 and IPv6), our IPv4 net0 device with an IP address of 10.134.79.210 and, finally, an IPv6 net0 device.
  • The output of the dladm command shows our automatically created net0 VNIC.
  • The output of the zfs list command shows our ZFS dataset.
  • Finally, the output of the zonename command shows that our zone name is global. With native zones, this would be the actual zone name. However, a kernel zone actually runs a full kernel instance, so users running inside the kernel zone have their own instance of a global zone.
If you want to determine the zone name from inside the kernel zone, you can use the virtinfo command:
 
root@global:~# zlogin myfirstkz
[Connected to zone 'myfirstkz' pts/1]
Oracle Corporation SunOS 5.11 11.2 June 2014 
 
root@myfirstkz:~# virtinfo -c current get zonename
NAME CLASS PROPERTY VALUE
kernel-zone current zonename myfirstkz
root@myfirstkz:~# exit
logout

Note: From within myfirstkz, we cannot see any information about the global zone; we can see only the attributes of our own zone.

You have now verified that myfirstkz is up and running. You can give the login information to your users to allow them to complete the setup of their team's kernel zone as if it were a single system.

Updating a Kernel Zone to a Later Oracle Solaris Release

One of the main features of kernel zones is the ability to run your kernel zone with a different kernel version from that of the host global zone.

Starting with Oracle Solaris 11.2, kernel zones support both backwards and forwards compatibility. What that means in practice is that you can not only have a kernel zone running Oracle Solaris 11.2 on a host running a later Oracle Solaris version, say Oracle Solaris 11.3 (when it is available), but you can also have a kernel zone running a later Oracle Solaris version, say Oracle Solaris 11.3, on a host running Oracle Solaris 11.2. Figure 16 illustrates this capability.

               Figure 16. Example of forward and backward compatibility of kernel zones

Updating myfirstkz to a Later Oracle Solaris Release

Let's update our kernel zone to use a later Oracle Solaris release—a hypothetical "Oracle Solaris Next" release—rather than the release running on the host (Oracle Solaris 11.2).

First, let's use the command shown in Listing 11 to look at what boot environments we have from the host global zone:
 
root@global:~# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
solaris - - 44.67M static 2012-01-26 18:59
solaris-1 - - 47.78M static 2014-06-25 08:12
solaris-2 - - 46.71M static 2014-06-25 08:40
solaris-3 - - 1.03G static 2014-06-25 23:30
solaris-3-backup-1 - - 221.0K static 2014-07-10 14:23
solaris-4 NR / 52.77G static 2014-07-20 18:41
solaris-backup-1 - - 144.0K static 2012-01-26 19:28

Listing 11

In Listing 11, we could see a list of native zone boot environments, if there were any. However, we will not see kernel zone boot environments listed, because a kernel zone has its own boot disk.

Let's check what our current publisher is and point the kernel zone to a publisher that has a newer kernel. We start by logging in to myfirstkz, as shown in Listing 12:
 
root@global:~# zlogin myfirstkz
[Connected to zone 'myfirstkz' pts/2]
Oracle Corporation SunOS 5.11 11.2 June 2014 
 
root@myfirstkz:~# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://ipkg.us.oracle.com/solaris11/dev/ 
 
root@myfirstkz:~# pkg set-publisher -G '*' -g http://ipkg.us.oracle.com/solaris-n/dev/ solaris 
 
root@myfirstkz:~# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://ipkg.us.oracle.com/solaris-n/dev/

Listing 12

Note that in this example, we will use an internally created repository. You will be able to reproduce this for yourself as later releases of Oracle Solaris become available. In Listing 12, we can see that we are running Oracle Solaris 11.2, and we have set our publisher to point to the dev repository.

Before we update, let's look at what the kernel zone sees as its boot environment:
 
root@myfirstkz:~# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
solaris-5 NR / 7.91M static 2014-07-24 13:44

Now, let's update our kernel zone, as shown in Listing 13:
 
root@myfirstkz:~# pkg update --accept
------------------------------------------------------------
Package: pkg://solaris/consolidation/osnet/osnet-incorporation@5.12,5.12-
5.12.0.0.0.52.0:20140714T022826Z
License: lic_OTN

You acknowledge that your use of this Oracle Solaris software product
is subject to (i) the license terms that you accepted when you obtained
the right to use Oracle Solaris software; or (ii) the license terms that
you agreed to when you placed your Oracle Solaris software order with
Oracle; or (iii) the Oracle Solaris software license terms included with
the hardware that you acquired from Oracle; or, if (i), (ii) or (iii)
are not applicable, then, (iv) the OTN License Agreement for Oracle
Solaris (which you acknowledge you have read and agree to) available at
http://www.oracle.com/technetwork/licenses/solaris-cluster-express-license-
167852.html.
Note: Software downloaded for trial use or downloaded as replacement
media may not be used to update any unsupported software.



Packages to remove: 37
Packages to install: 57
Packages to update: 432
Mediators to change: 4
Create boot environment: Yes
Create backup boot environment: No
DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 526/526 23157/23157 363.4/363.4 16.7M/s

PHASE ITEMS
Removing old actions 6580/6580
Installing new actions 9594/9594
Updating modified actions 24807/24807
Updating package state database Done
Updating package cache 469/469
Updating image state Done
Creating fast lookup database Done
Updating package cache 1/1

A clone of solaris-5 exists and has been updated and activated.
On the next boot the Boot Environment solaris-6 will be
mounted on '/'. Reboot when ready to switch to this updated BE.

Updating package cache 1/1

---------------------------------------------------------------------------
NOTE: Please review release notes posted at:

http://www.oracle.com/pls/topic/lookup?ctx=solaris11&id=SERNS
---------------------------------------------------------------------------

Listing 13

In the command shown in Listing 13, we use the --accept option to automatically accept any licenses. We can see in the output that a boot environment has been created. Let's look at what that is, as shown in Listing 14:
 
root@myfirstkz:~# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
solaris-5 N / 7.91M static 2014-07-24 13:44
solaris-6 R - 7.39G static 2014-07-24 17:04

Listing 14

In Listing 14, we can see from the R next to the solaris-6 boot environment that it is the boot environment that will be active after a reboot.
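
If you decided not to switch at this point, you could reactivate the previous boot environment before rebooting; a minimal sketch:

root@myfirstkz:~# beadm activate solaris-5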

Finally, let's reboot the zone.
 
root@myfirstkz:~# reboot

[Connection to zone 'myfirstkz' pts/2 closed]
root@global:~#

We are now back in the host global zone, and we can use zoneadm to check the status of our kernel zone, as shown in Listing 15:
 
root@global:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
3 myfirstkz running - solaris-kz excl

Listing 15

As shown in Listing 15, our kernel zone has already rebooted and is running again.

Let's log back in, as shown in Listing 16, and check what kernel version we are running:
 
root@global:~# zlogin myfirstkz
[Connected to zone 'myfirstkz' pts/2]
Oracle Corporation SunOS 5.n sn_52 June 2014

Listing 16

Listing 16 shows we are running a completely different kernel: the hypothetical "Oracle Solaris Next."

Let's run the command shown in Listing 17 to take a final look at the boot environments before we leave this kernel zone:
 
root@myfirstkz:~# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
solaris-5 - - 12.35M static 2014-07-24 13:44
solaris-6 NR / 7.50G static 2014-07-24 17:04
root@myfirstkz:~# exit
logout

[Connection to zone 'myfirstkz' pts/2 closed]

Listing 17

In Listing 17, we can see that we are running in the new boot environment.

Installing a Kernel Zone from an ISO Image

Sometimes you might not want to do a direct installation with a kernel zone; you might want to install from an ISO image instead. This is supported for kernel zones, and this section will show how to do that.

We will also use this opportunity to explore how to allocate some dedicated CPU resources to the kernel zone, as well as how to add some extra memory and increase the size of its boot disk.

Step 1: Configure Dedicated CPU Resources and More Memory

Let's create a new kernel zone similar to what we did before, but this time we will use the zonecfg command to add some dedicated CPU resources.

Let's start by checking how many CPU resources we have:
 
root@global:~# psrinfo -t
socket: 0
core: 0
cpus: 0,8
core: 1
cpus: 1,9
core: 2
cpus: 2,10
core: 3
cpus: 3,11
socket: 1
core: 8
cpus: 4,12
core: 9
cpus: 5,13
core: 10
cpus: 6,14
core: 11
cpus: 7,15

Now, let's create a new kernel zone called iso-kz and then add four CPUs' worth of dedicated CPU resources to it:

root@global:~# zonecfg -z iso-kz create -t SYSsolaris-kz
root@global:~# zonecfg -z iso-kz
zonecfg:iso-kz> add dedicated-cpu
zonecfg:iso-kz:dedicated-cpu> set ncpus=4
zonecfg:iso-kz:dedicated-cpu> end
zonecfg:iso-kz> verify
zonecfg:iso-kz> commit
zonecfg:iso-kz> exit

We can check that the zone creation and resource configuration worked by using the zonecfg command:

root@global:~# zonecfg -z iso-kz info dedicated-cpu
dedicated-cpu:
ncpus: 4
cpus not specified
cores not specified
sockets not specified

You can set a kernel zone to have either virtual CPUs or dedicated CPUs. The difference between the two types is basically about sharing.
  • With a virtual CPU, the CPU resource is shared with the rest of the system or other zones, and it can be leveraged in cases where the kernel zone is not busy.
  • With a dedicated CPU, the CPU resource is exclusive to the kernel zone and will never be used by anything other than that specific kernel zone (see the sketch after this list).
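For comparison, a fixed number of virtual CPUs is configured through the virtual-cpu resource rather than dedicated-cpu; the two are alternatives, so you would remove one resource before adding the other. A minimal sketch, assuming the same iso-kz zone without its dedicated-cpu resource:

root@global:~# zonecfg -z iso-kz
zonecfg:iso-kz> add virtual-cpu
zonecfg:iso-kz:virtual-cpu> set ncpus=4
zonecfg:iso-kz:virtual-cpu> end
zonecfg:iso-kz> verify
zonecfg:iso-kz> commit
zonecfg:iso-kz> exit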
We can also use the zonecfg command to add some extra memory to the kernel zone:
 
root@global:~# zonecfg -z iso-kz
zonecfg:iso-kz> select capped-memory
zonecfg:iso-kz:capped-memory> set physical=3g
zonecfg:iso-kz:capped-memory> end
zonecfg:iso-kz> verify
zonecfg:iso-kz> commit
zonecfg:iso-kz> exit 
 
root@global:~# zonecfg -z iso-kz info capped-memory
capped-memory:
physical: 3G

Step 2: Install the Kernel Zone with a Bigger Disk

It's now time to install our zone. We will use a hypothetical Oracle Solaris 11.3 ISO image to do this and we will also increase the size of the install disk. The default is a 16 GB disk, so let's increase that to 24 GB. Listing 18 shows how you do this at installation time:

root@global:~# zoneadm -z iso-kz install -b /root/sol-11_3-42-text-x86.iso -x install-size=24g

Listing 18

In Listing 18, you can see that this time we installed from the text installer image.

Once we have answered the usual installation questions, we can log in to our zone, as shown in Listing 19:
 
root@global:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
3 myfirstkz running - solaris-kz excl
5 iso-kz running - solaris-kz excl

root@global:~# zlogin iso-kz
[Connected to zone 'iso-kz' pts/2]
Oracle Corporation SunOS 5.11 11.3 June 2014
root@:~#
root@solarisiso-kz:~# psrinfo -t
socket: 0
core: 0
cpu: 0
core: 1
cpu: 1
core: 2
cpu: 2
core: 3
cpu: 3
root@:~# exit
logout

[Connection to zone 'iso-kz' pts/2 closed]

Listing 19

In Listing 19, we can see the four dedicated CPUs we assigned, and we can see that we are running a release different from that of the host global zone.

Before we move on, let's shut down our two kernel zones:
 
root@global:~# zoneadm -z myfirstkz shutdown
root@global:~# zoneadm -z iso-kz shutdown

Converting a Native Zone to a Kernel Zone

The final operation to try is converting a native zone to a kernel zone, which is made especially easy through the use of Oracle Solaris Unified Archives.

In this example, we will use a native zone that has already been created. If you are not sure how to create a native zone, see "How to Get Started Creating Oracle Solaris Zones in Oracle Solaris 11."

Let's start by having a look at the native zone we are going to convert, as shown in Listing 20:
 
root@global:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
6 native-zone running /system/zones/native-zone solaris excl
- myfirstkz installed - solaris-kz excl
- iso-kz installed - solaris-kz excl

root@global:~# zlogin native-zone
[Connected to zone 'native-zone' pts/2]
Oracle Corporation SunOS 5.11 11.2 June 2014
root@native-zone:~# touch my_special_files
root@native-zone:~# zonename
native-zone
root@native-zone:~# exit
logout

[Connection to zone 'native-zone' pts/2 closed]
root@global:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- myfirstkz installed - solaris-kz excl
- iso-kz installed - solaris-kz excl
- native-zone installed /system/zones/native-zone solaris excl

Listing 20

In Listing 20, we can see our native-zone is already up and running, and we have logged in and created a file called my_special_files. This example is just to reflect any configuration that we might have done when taking a zone from a real environment. Finally, we checked the zone name, logged out, and shut down the native zone.

Note: One of the big advantages of using a Unified Archive to capture a zone is that you can do the capture on a running zone, which means you can avoid outages to end users. In this case, because we want to convert our native zone to a kernel zone (rather than clone the native zone), we shut down the native zone.
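
The shutdown itself is the same one-line operation used earlier for the kernel zones; a minimal sketch:

root@global:~# zoneadm -z native-zone shutdown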

Now let's create a Unified Archive of our native zone:
 
root@global:~# archiveadm create -z native-zone ./native-zone-archive.uar
Initializing Unified Archive creation resources...
Unified Archive initialized: /root/native-zone-archive.uar
Logging to: /system/volatile/archive_log.26165
Executing dataset discovery...
Dataset discovery complete
Preparing archive system image...
Beginning archive stream creation...
Archive stream creation complete
Beginning final archive assembly...
Archive creation complete

Once we have created the archive, we can examine it to see what it contains:
 
root@global:~# ls -l
total 5602777
-rw-r--r-- 1 root root 901992448 Jul 1 06:18 0.175.2_ai_i386.iso
-rw-r--r-- 1 root root 1308958720 Jul 24 17:27 native-zone-archive.uar
-rw-r--r-- 1 demo staff 675102720 Jul 24 07:37 sol-11_2-42-text-x86.iso

root@global:~# archiveadm info -v ./native-zone-archive.uar
Archive Information
Creation Time: 2014-07-24T21:54:08Z
Source Host: global
Architecture: i386
Operating System: Oracle Solaris 11.3 X86
Recovery Archive: No
Unique ID: e1bf0d42-338b-e879-fec4-ab78290ef55c
Archive Version: 1.0

Deployable Systems
'native-zone'
OS Version: 0.5.11
OS Branch: 0.175.3.0.0.1.0
Active BE: solaris
Brand: solaris
Size Needed: 978MB
Unique ID: f488ea7c-ab1e-6cc4-d407-c60fce1e3818
AI Media: 0.175.3_ai_i386.iso
Root-only: Yes

Next, let's configure a new kernel zone, and when we are ready to install it, we will pass in the archive, as shown in Listing 21:
 
root@global:~# zonecfg -z converted-zone-kz create -t SYSsolaris-kz
root@global:~# zoneadm -z converted-zone-kz install -a ./native-zone-archive.uar
Progress being logged to /var/log/zones/zoneadm.20140724T233807Z.converted-zone-kz.install
[Connected to zone 'converted-zone-kz' console]
Boot device: cdrom1 File and args: -B install=true,auto-shutdown=true -B aimanifest=/system/shared/ai.xml
reading module /platform/i86pc/amd64/boot_archive...done.
reading kernel file /platform/i86pc/kernel/amd64/unix...done.
SunOS Release 5.11 Version 11.2 64-bit
Copyright (c) 1983, 2014, Oracle and/or its affiliates. All rights reserved.
Remounting root read/write
Probing for device nodes ...
Preparing image for use
Done mounting image
Configuring devices.
Hostname: solaris
Using specified install manifest : /system/shared/ai.xml

solaris console login:
Automated Installation started
The progress of the Automated Installation will be output to the console
Detailed logging is in the logfile at /system/volatile/install_log
Press RETURN to get a login prompt at any time.

23:40:15 Install Log: /system/volatile/install_log
23:40:15 Using XML Manifest: /system/volatile/ai.xml
23:40:15 Using profile specification: /system/volatile/profile
23:40:15 Starting installation.
23:40:15 0% Preparing for Installation
23:40:15 100% manifest-parser completed.
23:40:15 100% None
23:40:15 0% Preparing for Installation
23:40:18 1% Preparing for Installation
23:40:18 2% Preparing for Installation
23:40:19 3% Preparing for Installation
23:40:19 4% Preparing for Installation
23:40:19 5% archive-1 completed.
23:40:21 8% target-discovery completed.
23:40:23 Pre-validating manifest targets before actual target selection
23:40:23 Selected Disk(s) : c1d0
23:40:24 Pre-validation of manifest targets completed
23:40:24 Validating combined manifest and archive origin targets
23:40:24 Selected Disk(s) : c1d0
23:40:24 9% target-selection completed.
23:40:24 10% ai-configuration completed.
23:40:24 9% var-share-dataset completed.
23:40:29 10% target-instantiation completed.
23:40:29 10% Beginning archive transfer
23:40:29 Commencing transfer of stream: ce6d4b69-ad85-e7e1-aaf7-fdbfdc17f001-0.zfs
to rpool
23:40:35 30% Transferring contents
23:40:39 50% Transferring contents
23:40:43 70% Transferring contents
23:40:54 86% Transferring contents
23:41:09 Completed transfer of stream: 'ce6d4b69-ad85-e7e1-aaf7-fdbfdc17f001-0.zfs'
from file:///system/shared/uafs/OVA
23:41:12 Archive transfer completed
23:41:31 90% generated-transfer-965-1 completed.
23:41:31 90% Beginning IPS transfer
23:41:31 Setting post-install publishers to:
23:41:31 solaris
23:41:31 origin: http://ipkg.us.oracle.com/solaris11/dev/
23:41:32 90% generated-transfer-965-2 completed.
23:41:32 Changing target pkg variant. This operation may take a while
23:51:17 90% apply-pkg-variant completed.
23:51:21 Setting boot devices in firmware
23:51:21 91% boot-configuration completed.
23:51:21 91% update-dump-adm completed.
23:51:21 92% setup-swap completed.
23:51:22 92% device-config completed.
23:51:23 92% apply-sysconfig completed.
23:51:23 93% transfer-zpool-cache completed.
23:51:36 98% boot-archive completed.
23:51:36 98% transfer-ai-files completed.
23:51:37 98% cleanup-archive-install completed.
23:51:38 100% create-snapshot completed.
23:51:39 100% None
23:51:39 Automated Installation succeeded.
23:51:39 You may wish to reboot the system at this time.
Automated Installation finished successfully
Shutdown requested. Shutting down the system
Log files will be available in /var/log/install/ after reboot
svc.startd: The system is coming down. Please wait.
root@global:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- myfirstkz installed - solaris-kz excl
- iso-kz installed - solaris-kz excl
- native-zone installed /system/zones/native-zone solaris excl
- converted-zone-kz installed - solaris-kz excl

Listing 21

As we can see in Listing 21, the install process completed successfully and we have an installed kernel zone.

Let's boot up our newly converted zone and have a look at it, as shown in Listing 22:
 
root@global:~# zoneadm -z converted-zone-kz boot
root@global:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
3 converted-zone-kz running - solaris-kz excl
- duckstack unavailable - solaris-kz excl
- native-zone installed /system/zones/native-zone solaris excl
root@global:~# zlogin converted-zone-kz
[Connected to zone 'converted-zone-kz' pts/1]

Oracle Corporation SunOS 5.11 11.2 June 2014
root@unknown:~# ls
my_special_files

Listing 22

In Listing 22, we can see that the contents of our native zone have been preserved.

Conclusion

In this article, we explored how to create, install, boot, and configure a kernel zone. You learned that Oracle Solaris Kernel Zones have the ability to run kernel versions that are different from the kernel version running on the host. We also saw how to do a direct installation and an installation based on an ISO image. Finally, we saw how to convert a native zone to a kernel zone using Unified Archives.

How to Install and Configure a Two-Node Cluster for Oracle Solaris Cluster 4.1 Using SR-IOV/IB Interfaces

This article is intended to help a new or experienced Oracle Solaris user to quickly and easily install and configure Oracle Solaris Cluster software for two nodes, including the creation of Single Root I/O Virtualization/InfiniBand (SR-IOV/IB) devices. It provides a step-by-step procedure to simplify the process.

This article does not cover the configuration of highly available services. For more details on how to install and configure other Oracle Solaris Cluster software configurations, see the Oracle Solaris Cluster Software Installation Guide.

This article uses the interactive scinstall utility to configure all the nodes of a cluster quickly and easily. The interactive scinstall utility is menu driven. The menus help reduce the chance of mistakes and promote best practices by using default values and prompting you for information specific to your cluster. The utility also helps prevent mistakes by identifying invalid entries. Finally, the scinstall utility eliminates the need to manually set up a quorum device by automating the configuration of a quorum device for your new cluster.

Note: This article applies to the Oracle Solaris Cluster 4.1 release. For more information about the Oracle Solaris Cluster release, see the Oracle Solaris Cluster 4.1 Release Notes.

Overview of SR-IOV


SR-IOV is a PCI-SIG standards-based I/O virtualization specification. SR-IOV enables a PCIe function, known as a physical function (PF), to create multiple lightweight PCIe functions, known as virtual functions (VFs). VFs show up and operate like regular PCIe functions. The address space for a VF is well contained, so a VF can be assigned to a virtual machine (a logical domain, or LDom) with the help of a hypervisor. SR-IOV provides a high degree of sharing compared to the other forms of direct hardware access available in LDom technology, that is, PCIe bus assignment and direct I/O.

Prerequisites, Assumptions, and Defaults


This section discusses several prerequisites, assumptions, and defaults for two-node clusters.

Configuration Assumptions


This article assumes the following configuration is used:

  • You are installing the two-node cluster on Oracle Solaris 11.1 and you have basic system administration skills.
  • You are installing the Oracle Solaris Cluster 4.1 software.
  • The cluster hardware is a supported configuration for Oracle Solaris Cluster 4.1 software.
  • This is a two-node cluster for SPARC T4-4 servers from Oracle. SR-IOV is only supported on servers based on Oracle's SPARC T4 (or above) processors.
  • Each cluster node is an I/O domain.
  • Each node has two spare network interfaces to be used as private interconnects, also known as transports, and at least one network interface that is connected to the public network.
  • iSCSI shared storage is connected to the two nodes.
  • Your setup looks like Figure 1. You might have fewer or more devices, depending on your system or network configuration.

In addition, it is recommended that you have console access to the nodes during cluster installation, but this is not required.

Figure 1. Oracle Solaris Cluster Hardware Configuration

Prerequisites


Perform the following prerequisite tasks:

  1. Ensure that Oracle Solaris 11.1 SRU13 is installed on both the SPARC T4-4 systems.
  2. Perform the initial preparation of public IP addresses and logical host names.

    You must have the logical names (host names) and IP addresses of the nodes to configure a cluster. Add those entries to each node's /etc/inet/hosts file or to a naming service if such a service (for example, DNS, NIS, or NIS+ maps) is used. The example in this article uses a NIS service.
    Table 1 lists the configuration used in this example.
    Table 1. Configuration
    Component   Name            Interface   IP Address
    cluster     phys-schost     -           -
    node 1      phys-schost-1   igbvf0      1.2.3.4
    node 2      phys-schost-2   igbvf0      1.2.3.5

  3. Create SR-IOV VF devices for the public, private, and storage networks.

    You have to create the VF devices on the corresponding adapters for public, private, and storage networks in the primary domain and assign the VF devices to the logical domains that will be configured as cluster nodes.
    Type the commands shown in Listing 1 on the control domain phys-primary-1:
    root@phys-primary-1# ldm ls-io|grep IB
    /SYS/PCI-EM0/IOVIB.PF0 PF pci_0 primary
    /SYS/PCI-EM1/IOVIB.PF0 PF pci_0 primary
    /SYS/PCI-EM0/IOVIB.PF0.VF0 VF pci_0 primary
    root@phys-primary-1# ldm start-reconf primary
    root@phys-primary-1# ldm create-vf /SYS/MB/NET2/IOVNET.PF0
    root@phys-primary-1# ldm create-vf /SYS/PCI-EM0/IOVIB.PF0
    root@phys-primary-1# ldm create-vf /SYS/PCI-EM1/IOVIB.PF0
    root@phys-primary-1# ldm add-domain domain1
    root@phys-primary-1# ldm add-vcpu 128 domain1
    root@phys-primary-1# ldm add-mem 128g domain1
    root@phys-primary-1# ldm add-io /SYS/MB/NET2/IOVNET.PF0.VF1 domain1
    root@phys-primary-1# ldm add-io /SYS/PCI-EM0/IOVIB.PF0.VF1 domain1
    root@phys-primary-1# ldm add-io /SYS/PCI-EM1/IOVIB.PF0.VF1 domain1
    root@phys-primary-1# ldm ls-io | grep domain1
    /SYS/MB/NET2/IOVNET.PF0.VF1 VF pci_0 domain1
    /SYS/PCI-EM0/IOVIB.PF0.VF1 VF pci_0 domain1
    /SYS/PCI-EM0/IOVIB.PF0.VF2 VF pci_0 domain1
    Listing 1
    The VF IOVNET.PF0.VF1 is used for the public network. IB VF devices have partitions that host both private network and storage network devices.
    Repeat the commands shown in Listing 1 on phys-primary-2. The I/O domain domain1 on both nodes must be installed with Oracle Solaris 11.1 SRU13 before installing the cluster software.

Note: To learn more about SR-IOV technology, take a look at the documentation for Oracle VM Server for SPARC 3.1. For information about InfiniBand VFs, see "Using InfiniBand SR-IOV Virtual Functions."

Defaults


The scinstall interactive utility in the Typical mode installs the Oracle Solaris Cluster software with the following defaults:

  • Private-network address 172.16.0.0
  • Private-network netmask 255.255.248.0
  • Cluster-transport switches switch1 and switch2
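
With these defaults, launching the utility in Typical mode is a single command, run as root on the node from which you configure the cluster; a minimal sketch (the path assumes the Oracle Solaris Cluster packages have already been installed, as described later in this article):

# /usr/cluster/bin/scinstall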

Perform the Preinstallation Checks


  1. Temporarily enable rsh or ssh access for root on the cluster nodes.
  2. Log in to the cluster node on which you are installing Oracle Solaris Cluster software and become superuser.
  3. On each node, verify the /etc/inet/hosts file entries. If no other name resolution service is available, add the name and IP address of the other node to this file.

    The /etc/inet/hosts file on node 1 has the following information.
    # Internet host table
    #
    ::1 phys-schost-1 localhost
    127.0.0.1 phys-schost-1 localhost loghost

    The /etc/inet/hosts file on node 2 has the following information.
    # Internet host table
    #
    ::1 phys-schost-2 localhost
    127.0.0.1 phys-schost-2 localhost loghost
  4. On each node, verify that at least one shared storage disk is available.

    In this example, the following disks are shared between the two nodes: c0t600A0B800026FD7C000019B149CCCFAEd0 and c0t600A0B800026FD7C000019D549D0A500d0.
    # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c4t0d0
    /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
    /dev/chassis/SYS/HD0/disk
    1. c4t1d0
    /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@1,0
    /dev/chassis/SYS/HD1/disk
    2. c0t600144F0CD152C9E000051F2AFE20007d0
    /scsi_vhci/ssd@g600144f0cd152c9e000051f2afe20007
    3. c0t600144F0CD152C9E000051F2AFF00008d0
    /scsi_vhci/ssd@g600144f0cd152c9e000051f2aff00008
  5. On each node, ensure that the right OS version is installed.

    # more /etc/release
    Oracle Solaris 11.1 SPARC
    Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
    Assembled 06 November 2013
  6. Ensure that the network interfaces are configured as static IP addresses (not DHCP or of type addrconf), as displayed by the ipadm show-addr -o all command (see the example after this list).

    If the network interfaces are not configured as static IP addresses, then run the command shown in Listing 2 on each node, which will unconfigure all network interfaces and services.
    If the nodes are configured as static, go to the "Configure the Oracle Solaris Cluster Publisher" section.
    # netadm enable -p ncp defaultfixed
    Enabling ncp 'DefaultFixed'
    phys-schost-1: Sep 27 08:19:19 phys-schost-1 in.ndpd[1038]: Interface net0 has been removed from
    kernel. in.ndpd will no longer use it
    Sep 27 08:19:19 phys-schost-1 in.ndpd[1038]: Interface net1 has been removed from kernel
    . in.ndpd will no longer use it
    Sep 27 08:19:19 phys-schost-1 in.ndpd[1038]: Interface net2 has been removed from kernel
    . in.ndpd will no longer use it
    Sep 27 08:19:20 phys-schost-1 in.ndpd[1038]: Interface net3 has been removed from kernel
    . in.ndpd will no longer use it
    Sep 27 08:19:20 phys-schost-1 in.ndpd[1038]: Interface net4 has been removed from kernel
    . in.ndpd will no longer use it
    Sep 27 08:19:20 phys-schost-1 in.ndpd[1038]: Interface net5 has been removed from kernel
    . in.ndpd will no longer use it
    Listing 2
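    For reference, on a node that is already configured with static addresses, ipadm reports address objects of type static, similar to the following hypothetical output (interface names and addresses will differ on your systems):
    # ipadm show-addr
    ADDROBJ           TYPE     STATE        ADDR
    lo0/v4            static   ok           127.0.0.1/8
    net0/v4           static   ok           1.2.3.4/24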
  7. On each node, type the following commands to configure the naming services and update the name service switch configuration:

    # svccfg -s svc:/network/nis/domain setprop config/domainname = hostname: nisdomain.example.com
    # svccfg -s svc:/network/nis/domain:default refresh
    # svcadm enable svc:/network/nis/domain:default
    # svcadm enable svc:/network/nis/client:default
    # /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/host = astring: \"files nis\"
    # /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/netmask = astring: \"files nis\"
    # /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/automount = astring: \"files nis\"
    # /usr/sbin/svcadm refresh svc:/system/name-service/switch
  8. Bind each node to the NIS server.

    # ypinit -c
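    After the binding completes, ypwhich should report the NIS server that each node is bound to (a quick sanity check; the server name shown here is an assumption):
    # ypwhich
    nisserver.example.com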
  9. Reboot each node to make sure that the new network setup is working fine.

Configure the Oracle Solaris Cluster Publisher


There are two main ways to access the Oracle Solaris Cluster package repository, depending on whether the cluster nodes have internet access (directly or through a web proxy): using the repository hosted on pkg.oracle.com or using a local copy of the repository.

Using a Repository Hosted on pkg.oracle.com


To access either the Oracle Solaris Cluster Release or Support repository, obtain the SSL public and private keys.

  1. Go to http://pkg-register.oracle.com.
  2. Choose the Oracle Solaris Cluster Release or Support repository.
  3. Accept the license.
  4. Request a new certificate by choosing the Oracle Solaris Cluster software and submitting a request. This displays a certification page that contains download buttons for the key and certificate files.
  5. Download the key and certificate files and install them, as described in the returned certification page.
  6. Configure the ha-cluster publisher with the downloaded SSL keys to point to the selected repository URL on pkg.oracle.com.

    This example uses the release repository:
    # pkg set-publisher \
    -k /var/pkg/ssl/Oracle_Solaris_Cluster_4.key.pem \
    -c /var/pkg/ssl/Oracle_Solaris_Cluster_4.certificate.pem \
    -g https://pkg.oracle.com/ha-cluster/release/ ha-cluster

Using a Local Copy of the Repository


To access a local copy of the Oracle Solaris Cluster Release or Support repository, download the repository image.

  1. Download the repository image from the Oracle Technology Network or Oracle Software Delivery Cloud. To download the repository image from Oracle Software Delivery Cloud, select Oracle Solaris as the Product Pack on the Media Pack Search Page.
  2. Mount the repository image and copy the data to a shared file system that all the cluster nodes can access.

    # mount -F hsfs <image-file> /mnt
    # rsync -aP /mnt/repo /export
    # share /export/repo
  3. Configure the ha-cluster publisher.

    This example uses node 1 as the system that shares the local copy of the repository:
    # pkg set-publisher -g file:///net/phys-schost-1/export/repo ha-cluster

Install the Oracle Solaris Cluster Software Packages


  1. On each node, ensure that the correct Oracle Solaris package repositories are published.

    If they are not, unset the incorrect publishers and set the correct ones. The installation of the ha-cluster packages is highly likely to fail if it cannot access the solaris publisher.
    # pkg publisher
    PUBLISHER TYPE STATUS URI
    solaris origin online
    ha-cluster origin online
  2. On each cluster node, install the ha-cluster-full package group.

    # pkg install ha-cluster-full

    Packages to install: 68
    Create boot environment: No
    Create backup boot environment: Yes
    Services to change: 1

    DOWNLOAD PKGS FILES XFER (MB)
    Completed 68/68 6456/6456 48.5/48.5

    PHASE ACTIONS
    Install Phase 8928/8928

    PHASE ITEMS
    Package State Update Phase 68/68
    Image State Update Phase 2/2
    Loading smf(5) service descriptions: 9/9
    Loading smf(5) service descriptions: 57/57
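    As a quick sanity check, you can confirm on each node that the package group is now installed:
    # pkg list ha-cluster-full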

Configure the Oracle Solaris Cluster Software


  1. On each node of the cluster, identify the network interfaces that will be used for the private interconnects.

    In this example, 8513 and 8514 are the PKEYs for a private IB partition that is used for transport. 8503 is the PKEY for a private storage network that is used to configure iSCSI storage from an Oracle ZFS Storage Appliance with an IB connection.
    The Oracle ZFS Storage Appliance has the IP address 192.168.0.61 configured on the InfiniBand network. The priv1 and priv2 IB partitions are used as private interconnects for the private network. The storage1 and storage2 partitions are used for the storage network.
    Type the following commands on node 1:
    phys-schost-1# dladm show-ib |grep net
    net6 21290001EF8BA2 14050000000001 1 up localhost 0a-eth-1 8031,8501,8511,8513,8521,FFFF
    net7 21290001EF8BA2 14050000000008 2 up localhost 0a-eth-1 8503,8514,FFFF
    phys-schost-1# dladm create-part -l net6 -P 8513 priv1
    phys-schost-1# dladm create-part -l net7 -P 8514 priv2
    phys-schost-1# dladm create-part -l net6 -P 8503 storage1
    phys-schost-1# dladm create-part -l net7 -P 8503 storage2
    phys-schost-1# dladm show-part
    LINK PKEY OVER STATE FLAGS
    priv1 8513 net6 up ----
    priv2 8514 net7 up ----
    storage1 8503 net6 up ----
    storage2 8503 net7 up ----
    phys-schost-1# ipadm create-ip storage1
    phys-schost-1# ipadm create-ip storage2
    phys-schost-1# ipadm create-ipmp -i storage1 -i storage2 storage_ipmp0
    phys-schost-1# ipadm create-addr -T static -a 192.168.0.41/24 storage_ipmp0/address1
    phys-schost-1# iscsiadm add static-config iqn.1986-03.com.sun:02:a87851cb-4bad-c0e5-8d27-dd76834e6985,192.168.0.61

    Type the following commands on node 2:
    phys-schost-2# dladm show-ib |grep net
    net9 21290001EF8FFE 1405000000002B 2 up localhost 0a-eth-1 8032,8502,8512,8516,8522,FFFF
    net6 21290001EF4E36 14050000000016 1 up localhost 0a-eth-1 8031,8501,8511,8513,8521,FFFF
    net7 21290001EF4E36 1405000000000F 2 up localhost 0a-eth-1 8503,8514,FFFF
    net8 21290001EF8FFE 14050000000032 1 up localhost 0a-eth-1 8503,8515,FFFF
    phys-schost-2# dladm create-part -l net6 -P 8513 priv1
    phys-schost-2# dladm create-part -l net7 -P 8514 priv2
    phys-schost-2# dladm create-part -l net6 -P 8503 storage1
    phys-schost-2# dladm create-part -l net7 -P 8503 storage2
    phys-schost-2# dladm show-part
    LINK PKEY OVER STATE FLAGS
    priv1 8513 net6 up ----
    priv2 8514 net7 up ----
    storage1 8503 net6 up ----
    storage2 8503 net7 up ----
    phys-schost-2# ipadm create-ip storage1
    phys-schost-2# ipadm create-ip storage2
    phys-schost-2# ipadm create-ipmp -i storage1 -i storage2 storage_ipmp0
    phys-schost-2# ipadm create-addr -T static -a 192.168.0.42/24 storage_ipmp0/address1
    phys-schost-2# iscsiadm add static-config iqn.1986-03.com.sun:02:a87851cb-4bad-c0e5-8d27-dd76834e6985,192.168.0.61
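    If the target does not show up on a node after adding the static configuration, ensure that static discovery is enabled, then rebuild the device nodes and list the discovered targets (a quick check; output omitted):
    # iscsiadm modify discovery --static enable
    # devfsadm -i iscsi
    # iscsiadm list target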
  2. On each node, ensure that the Oracle Solaris Service Management Facility services are not in the maintenance state.

    # svcs -x
  3. On each node, ensure that the service network/rpc/bind:default has the local_only configuration set to false.

    # svcprop network/rpc/bind:default | grep local_only
    config/local_only boolean false

    If not, set the local_only configuration to false.
    # svccfg
    svc:> select network/rpc/bind
    svc:/network/rpc/bind> setprop config/local_only=false
    svc:/network/rpc/bind> quit
    # svcadm refresh network/rpc/bind:default
    # svcprop network/rpc/bind:default | grep local_only
    config/local_only boolean false
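    Alternatively, the same change can be made non-interactively, which is convenient for scripting:
    # svccfg -s network/rpc/bind setprop config/local_only=false
    # svcadm refresh network/rpc/bind:default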
  4. From one of the nodes, start the Oracle Solaris Cluster configuration. This will configure the software on the other node as well.

    In this example, the following command is run on node 2, phys-schost-2.
    # /usr/cluster/bin/scinstall
    *** Main Menu ***

    Please select from one of the following (*) options:

    * 1) Create a new cluster or add a cluster node
    * 2) Print release information for this cluster node
    * ?) Help with menu options
    * q) Quit

    Option: 1

    From the Main menu, type 1 to choose the first menu item, which can be used to create a new cluster or add a cluster node.
    *** Create a New Cluster ***


    This option creates and configures a new cluster.
    Press Control-D at any time to return to the Main Menu.


    Do you want to continue (yes/no) [yes]?

    Checking the value of property "local_only" of service svc:/network/rpc/bind
    ...
    Property "local_only" of service svc:/network/rpc/bind is already correctly
    set to "false" on this node.

    Press Enter to continue:

    Answer yes and then press Enter to go to the installation mode selection. Then select the default mode: Typical.
    >>> Typical or Custom Mode <<<

    This tool supports two modes of operation, Typical mode and Custom
    mode. For most clusters, you can use Typical mode. However, you might
    need to select the Custom mode option if not all of the Typical mode
    defaults can be applied to your cluster.

    For more information about the differences between Typical and Custom
    modes, select the Help option from the menu.
    Please select from one of the following options:

    1) Typical
    2) Custom

    ?) Help
    q) Return to the Main Menu

    Option [1]: 1

    Provide the name of the cluster. In this example, type the cluster name as phys-schost.
    >>> Cluster Name <<<

    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique
    within the namespace of your enterprise.

    What is the name of the cluster you want to establish? phys-schost

    Provide the name of the other node. In this example, the name of the other node is phys-schost-1. Finish the list by pressing ^D. Answer yes to confirm the list of nodes.
    >>> Cluster Nodes <<<

    This Oracle Solaris Cluster release supports a total of up to 16
    nodes.

    List the names of the other nodes planned for the initial cluster
    configuration. List one node name per line. When finished, type
    Control-D:

    Node name (Control-D to finish): phys-schost-1
    Node name (Control-D to finish):

    ^D


    This is the complete list of nodes:

    phys-schost-2
    phys-schost-1

    Is it correct (yes/no) [yes]?

    The next two screens configure the cluster's private interconnects, also known as the transport adapters. Select the priv1 and priv2 IB partitions.
    >>> Cluster Transport Adapters and Cables <<<

    Transport adapters are the adapters that attach to the private cluster
    interconnect.

    Select the first cluster transport adapter:

    1) net1
    2) net2
    3) net3
    4) net4
    5) net5
    6) priv1
    7) priv2
    8) Other

    Option: 6

    Adapter "priv1" is an Infiniband adapter.

    Searching for any unexpected network traffic on "priv1" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.

    The "dlpi" transport type will be set for this cluster.

    For node "phys-schost-2",
    Name of the switch to which "priv1" is connected [switch1]?

    Each adapter is cabled to a particular port on a switch. And, each
    port is assigned a name. You can explicitly assign a name to each
    port. Or, for Ethernet and Infiniband switches, you can choose to
    allow scinstall to assign a default name for you. The default port
    name assignment sets the name to the node number of the node hosting
    the transport adapter at the other end of the cable.

    For node "phys-schost-2",
    Use the default port name for the "priv1" connection (yes/no) [yes]?

    Select the second cluster transport adapter:

    1) net1
    2) net2
    3) net3
    4) net4
    5) net5
    6) priv1
    7) priv2
    8) Other

    Option: 7

    Adapter "priv2" is an Infiniband adapter.

    Searching for any unexpected network traffic on "priv2" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.

    The "dlpi" transport type will be set for this cluster.

    For node "phys-schost-2",
    Name of the switch to which "priv2" is connected [switch2]?

    For node "phys-schost-2",
    Use the default port name for the "priv2" connection (yes/no) [yes]?

    The next screen configures the quorum device. Select the default answers for the questions asked in the Quorum Configuration screen.
    >>> Quorum Configuration <<<

    Every two-node cluster requires at least one quorum device. By
    default, scinstall selects and configures a shared disk quorum device
    for you.

    This screen allows you to disable the automatic selection and
    configuration of a quorum device.

    You have chosen to turn on the global fencing. If your shared storage
    devices do not support SCSI, such as Serial Advanced Technology
    Attachment (SATA) disks, or if your shared disks do not support
    SCSI-2, you must disable this feature.

    If you disable automatic quorum device selection now, or if you intend
    to use a quorum device that is not a shared disk, you must instead use
    clsetup(1M) to manually configure quorum once both nodes have joined
    the cluster for the first time.

    Do you want to disable automatic quorum device selection (yes/no) [no]?

    Is it okay to create the new cluster (yes/no) [yes]?
    During the cluster creation process, cluster check is run on each of the new
    cluster nodes. If cluster check detects problems, you can
    either interrupt the process or check the log files after the cluster
    has been established.

    Interrupt cluster creation for cluster check errors (yes/no) [no]?

    The final screens print details about the configuration of the nodes and the installation log's file name. The utility then reboots each node in cluster mode.
    Cluster Creation

    Log file - /var/cluster/logs/install/scinstall.log.3386

    Configuring global device using lofi on phys-schost-1: done

    Starting discovery of the cluster transport configuration.

    The following connections were discovered:

    phys-schost-2:priv1 switch1 phys-schost-1:priv1
    phys-schost-2:priv2 switch2 phys-schost-1:priv2

    Completed discovery of the cluster transport configuration.

    Started cluster check on "phys-schost-2".
    Started cluster check on "phys-schost-1".

    ...
    ...
    ...

    Refer to the log file for details.
    The name of the log file is /var/cluster/logs/install/scinstall.log.3386.


    Configuring "phys-schost-1" ... done
    Rebooting "phys-schost-1" ...
    Configuring "phys-schost-2" ...
    Rebooting "phys-schost-2" ...

    Log file - /var/cluster/logs/install/scinstall.log.3386

    When the scinstall utility finishes, the installation and configuration of the basic Oracle Solaris Cluster software is complete. The cluster is now ready for you to configure the components you will use to support highly available applications. These cluster components can include device groups, cluster file systems, highly available local file systems, and individual data services and zone clusters. To configure these components, refer to the Oracle Solaris Cluster 4.1 documentation library.
  5. Verify on each node that multiuser services for the Oracle Solaris Service Management Facility (SMF) are online. Ensure that the new services added by Oracle Solaris Cluster are all online.

    # svcs -x
    # svcs multi-user-server
    STATE STIME FMRI
    online 9:58:44 svc:/milestone/multi-user-server:default
  6. From one of the nodes, verify that both nodes have joined the cluster.

    # cluster status
    === Cluster Nodes ===
    --- Node Status ---

    Node Name Status
    --------- ------
    phys-schost-1 Online
    phys-schost-2 Online


    === Cluster Transport Paths ===
    Endpoint1 Endpoint2 Status
    --------- --------- ------
    phys-schost-1:priv1 phys-schost-2:priv1 Path online
    phys-schost-1:priv2 phys-schost-2:priv2 Path online

    === Cluster Quorum ===
    --- Quorum Votes Summary from (latest node reconfiguration) ---
    Needed Present Possible
    ------ ------- --------
    2 3 3

    --- Quorum Votes by Node (current status) ---

    Node Name Present Possible Status
    --------- ------- -------- ------
    phys-schost-1 1 1 Online
    phys-schost-2 1 1 Online


    --- Quorum Votes by Device (current status) ---

    Device Name Present Possible Status
    ----------- ----- ------- -----
    d1 1 1 Online


    === Cluster Device Groups ===

    --- Device Group Status ---

    Device Group Name Primary Secondary Status
    ----------------- ------- ------- ------


    --- Spare, Inactive, and In Transition Nodes ---
    Device Group Name Spare Nodes Inactive Nodes In Transition Nodes
    ----------------- --------- -------------- --------------------

    --- Multi-owner Device Group Status ---
    Device Group Name Node Name Status
    ----------------- ------- ------


    === Cluster Resource Groups ===
    Group Name Node Name Suspended State
    ---------- --------- --------- -----

    === Cluster Resources ===

    Resource Name Node Name State Status Message
    ------------- --------- ----- --------------

    === Cluster DID Devices ===
    Device Instance Node Status
    --------------- --- ------
    /dev/did/rdsk/d1 phys-schost-1 Ok
    phys-schost-2 Ok
    /dev/did/rdsk/d2 phys-schost-1 Ok
    phys-schost-2 Ok
    /dev/did/rdsk/d3 phys-schost-1 Ok
    /dev/did/rdsk/d4 phys-schost-1 Ok
    /dev/did/rdsk/d5 phys-schost-2 Ok
    /dev/did/rdsk/d6 phys-schost-2 Ok

    === Zone Clusters ===

    --- Zone Cluster Status ---

    Name Node Name Zone HostName Status Zone Status
    ---- --------- ------------- ------ ----------

Verify High Availability (Optional)


This section describes how to create a failover resource group with a LogicalHostname resource for a highly available network resource and an HAStoragePlus resource for a highly available ZFS file system on a zpool resource.

  1. Identify the network address that will be used for this purpose and add it to the /etc/inet/hosts file on the nodes. In this example, the host name is schost-lh.

    The /etc/inet/hosts file on node 1 contains the following information:
    # Internet host table
    #
    ::1 localhost
    127.0.0.1 localhost loghost
    1.2.3.4 phys-schost-1 # Cluster Node
    1.2.3.5 phys-schost-2 # Cluster Node
    1.2.3.6 schost-lh

    The /etc/inet/hosts file on node 2 contains the following information:
    # Internet host table
    #
    ::1 localhost
    127.0.0.1 localhost loghost
    1.2.3.4 phys-schost-1 # Cluster Node
    1.2.3.5 phys-schost-2 # Cluster Node
    1.2.3.6 schost-lh

    schost-lh will be used as the logical host name for the resource group in this example. This resource is of the type SUNW.LogicalHostname, which is a preregistered resource type.
  2. From one of the nodes, create a zpool with the two shared storage disks /dev/did/rdsk/d1s0 and /dev/did/rdsk/d2s0. In this example, the entire capacity of each disk was assigned to slice 0 using the format utility.

    # zpool create -m /zfs1 pool1 mirror /dev/did/dsk/d1s0 /dev/did/dsk/d2s0
    # df -k /zfs1
    Filesystem 1024-blocks Used Available Capacity Mounted on
    pool1 20514816 31 20514722 1% /zfs1

    The created zpool will now be placed in a highly available resource group as a resource of type SUNW.HAStoragePlus. This resource type has to be registered before it is used for the first time.
  3. To create a highly available resource group to house the resources, on one node, type the following command:

    # /usr/cluster/bin/clrg create test-rg
  4. Add the network resource to the test-rg group.

    # /usr/cluster/bin/clrslh create -g test-rg -h schost-lh schost-lhres
  5. Register the storage resource type.

    # /usr/cluster/bin/clrt register SUNW.HAStoragePlus
  6. Add the zpool to the group.

    # /usr/cluster/bin/clrs create -g test-rg -t SUNW.HAStoragePlus -p  zpools=pool1 hasp-res
  7. Bring the group online:

    # /usr/cluster/bin/clrg online -eM test-rg
  8. Check the status of the group and the resources:

    # /usr/cluster/bin/clrg status

    === Cluster Resource Groups ===

    Group Name Node Name Suspended Status
    ---------- --------- --------- ------
    test-rg phys-schost-1 No Online
    phys-schost-2 No Offline



    # /usr/cluster/bin/clrs status

    === Cluster Resources ===

    Resource Name Node Name State Status Message
    ------------- ------- ----- --------------
    hasp-res phys-schost-1 Online Online
    phys-schost-2 Offline Offline

    schost-lhres phys-schost-1 Online Online - LogicalHostname online.
    phys-schost-2 Offline Offline


    The command output shows that the resources and the group are online on node 1.
  9. To verify availability, switch over the resource group to node 2 and check the status of the resources and the group.

    # /usr/cluster/bin/clrg switch -n phys-schost-2 test-rg
    # /usr/cluster/bin/clrg status

    === Cluster Resource Groups ===

    Group Name Node Name Suspended Status
    ---------- --------- --------- ------
    test-rg phys-schost-1 No Offline
    phys-schost-2 No Online

    # /usr/cluster/bin/clrs status

    === Cluster Resources ===

    Resource Name Node Name State Status Message
    ------------- --------- ----- --------------
    hasp-res phys-schost-1 Offline Offline
    phys-schost-2 Online Online

    schost-lhres phys-schost-1 Offline Offline - LogicalHostname offline.
    phys-schost-2 Online Online - LogicalHostname online.
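
    To fail the group back to node 1, run the same switch command with the other node name; the status output should then mirror the output above with the roles reversed:
    # /usr/cluster/bin/clrg switch -n phys-schost-1 test-rg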

How to Configure a Failover Oracle Solaris Kernel Zone

This article is intended to help a new or experienced Oracle Solaris administrator to quickly and easily configure an Oracle Solaris Kernel Zone in failover mode using the Oracle Solaris Cluster HA for Oracle Solaris Zones agent.

About Oracle Solaris Cluster Failover Zones


Oracle Solaris Zones include support for fully independent and isolated environments called kernel zones, which provide a full kernel and user environment within a zone. Kernel zones increase operational flexibility and are ideal for multitenant environments where maintenance windows are significantly harder to schedule. Kernel zones can run at a different kernel version from the global zone and can be updated separately without requiring a reboot of the global zone. You can also use kernel zones in combination with Oracle VM Server for SPARC for greater virtualization flexibility.

This article describes how to set up a failover kernel zone on a two-node cluster.

Configuration Assumptions


This article assumes the following configuration is used:

  • The cluster is installed and configured with Oracle Solaris 11.2 and Oracle Solaris Cluster 4.2.
  • The repositories for Oracle Solaris and Oracle Solaris Cluster are configured on the cluster nodes.
  • The cluster hardware is a supported configuration for Oracle Solaris Cluster 4.2 software.
  • The cluster is a two-node SPARC cluster. (However, the installation procedure is applicable to x86 clusters as well.)
  • Each node has two spare network interfaces to be used as private interconnects, also known as transports, and at least one network interface that is connected to the public network.
  • SCSI shared storage is connected to the two nodes.
  • Your setup looks like Figure 1. You might have fewer or more devices, depending on your system or network configuration.

It is recommended that you have console access to the nodes during administration, but this is not required.

Figure 1. Oracle Solaris Cluster hardware configuration

Prerequisites


Ensure the following prerequisites are met:

  1. The boot disk of a kernel zone in an HA zone configuration must reside on a shared disk.
  2. The zone must be configured on each cluster node where the zone can fail over.
  3. The zone must be active on only one node at a time, and the zone's address must be plumbed on only one node at a time.
  4. Make sure you have a shared disk available to host the zonepath for the failover zone. You can use /usr/cluster/bin/scdidadm -L or /usr/cluster/bin/cldevice list to see the shared disks. Each cluster node has a path to the shared disk.
  5. Verify that the Oracle Solaris operating system version is at least 11.2.

    root@phys-schost-1:~# uname -a
    SunOS phys-schost-1 5.11 11.2 sun4v sparc sun4v
  6. Verify that the kernel zone brand package, brand/brand-solaris-kz, is installed on the host.

    root@phys-schost-1# pkg list brand/brand-solaris-kz
    NAME (PUBLISHER) VERSION IFO
    system/zones/brand/brand-solaris-kz 0.5.11-0.175.2.0.0.41.0 i--
  7. Run the virtinfo command to verify that kernel zones are supported on the cluster nodes. The following example shows that kernel zones are supported on the host phys-schost-1.

    root@phys-schost-1:~# virtinfo
    NAME CLASS
    logical-domain current
    non-global-zone supported
    kernel-zone supported
  8. Identify two shared disks, one for the boot disk and the other for the suspend disk. Kernel zones support cold and warm migration during switchover, but suspend and resume are supported only if the kernel zone has a suspend resource property in its configuration; without a suspend device, warm migration is not possible. This example uses shared disks d7 and d8. You can use suriadm to look up the URIs for both disks.

    root@phys-schost-1:~# /usr/cluster/bin/scdidadm -L d7 d8
    7 phys-schost-1:/dev/rdsk/c0t60080E500017B5D80000084D52711BB9d0 /dev/did/rdsk/d7
    7 phys-schost-2:/dev/rdsk/c0t60080E500017B5D80000084D52711BB9d0 /dev/did/rdsk/d7
    8 phys-schost-1:/dev/rdsk/c0t60080E500017B5D80000084B52711BAEd0 /dev/did/rdsk/d8
    8 phys-schost-2:/dev/rdsk/c0t60080E500017B5D80000084B52711BAEd0 /dev/did/rdsk/d8
    root@phys-schost-1:~# suriadm lookup-uri /dev/did/dsk/d7
    dev:did/dsk/d7
    root@phys-schost-1:~# suriadm lookup-uri /dev/did/dsk/d8
    dev:did/dsk/d8
  9. The zone source and destination must be on the same platform for zone migration. On x86 systems, the vendor as well as the CPU revision must be identical. On SPARC systems, the zone source and destination must be on the same hardware platform. For example, you cannot migrate a kernel zone from a SPARC T4 host to a SPARC T3 host.

Enable a Kernel Zone to Run in a Failover Configuration Using a Failover File System


In a failover configuration, the zone's zonepath must reside on a highly available file system. Oracle Solaris Cluster provides the SUNW.HAStoragePlus service to manage a failover file system.

  1. Register the SUNW.HAStoragePlus (HASP) resource type.

    phys-schost-1# /usr/cluster/bin/clrt register SUNW.HAStoragePlus
  2. Create the failover resource group.

    phys-schost-1# /usr/cluster/bin/clrg create sol-kz-fz1-rg
  3. Create a HAStoragePlus resource to monitor the disks that are used as boot or suspend devices for the kernel zone.

    root@phys-schost-1:~# clrs create -t SUNW.HAStoragePlus -g sol-kz-fz1-rg \
    -p GlobalDevicePaths=dsk/d7,dsk/d8 sol-kz-fz1-hasp-rs
    root@phys-schost-1:~# /usr/cluster/bin/clrg online -emM -n phys-schost-1 sol-kz-fz1-rg
  4. Create and configure the zone on phys-schost-1. You must ensure that the boot and suspend devices reside on shared disks. For configuring a two-node cluster, execute the following commands on phys-schost-1 and then replicate the zone configuration to phys-schost-2.

    root@phys-schost-1:~# zonecfg -z sol-kz-fz1 'create -b; set brand=solaris-kz;
    add capped-memory; set physical=2G; end;
    add device; set storage=dev:did/dsk/d7; set bootpri=1; end;
    add suspend; set storage=dev:did/dsk/d8; end;
    add anet; set lower-link=auto; end;
    set autoboot=false;
    add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
  5. Verify that the zone is configured.

    phys-schost-1# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / solaris shared
    - sol-kz-fz1 configured - solaris-kz excl
  6. Install the zone using zoneadm and then boot the zone.

    root@phys-schost-1:~# zoneadm -z sol-kz-fz1 install
    Progress being logged to /var/log/zones/zoneadm.20140829T212403Z.sol-kz-fz1.install
    pkg cache: Using /var/pkg/publisher.
    Install Log: /system/volatile/install.4811/install_log
    AI Manifest: /tmp/zoneadm4203.ZLaaYi/devel-ai-manifest.xml
    SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Installation: Starting ...
    Creating IPS image
    Installing packages from:
    solaris
    origin: http://solaris-publisher.domain.com/support/sru/
    ha-cluster
    origin: http://cluster-publisher.domain.com/solariscluster/sru/
    The following licenses have been accepted and not displayed.
    Please review the licenses for the following packages post-install:
    consolidation/osnet/osnet-incorporation
    Package licenses may be viewed using the command:
    pkg info --license

    DOWNLOAD PKGS FILES XFER (MB) SPEED
    Completed 482/482 64261/64261 544.1/544.1 1.9M/s

    PHASE ITEMS
    Installing new actions 87569/87569
    Updating package state database Done
    Updating package cache 0/0
    Updating image state Done
    Creating fast lookup database Done
    Installation: Succeeded
    Done: Installation completed in 609.014 seconds.
  7. Verify that the zone is successfully installed and boots up.

    phys-schost-1# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / solaris shared
    - sol-kz-fz1 installed - solaris-kz excl
  8. In another window, log in to the zone's console and boot the zone. Follow the prompts through the system configuration interactive screens to configure the zone.

    phys-schost-1# zlogin -C sol-kz-fz1
    phys-schost-1# zoneadm -z sol-kz-fz1 boot
  9. Shut down the zone and switch the resource group to another node available in the list of resource group nodes.

    phys-schost-1# zoneadm -z sol-kz-fz1 shutdown
    phys-schost-1# zoneadm -z sol-kz-fz1 detach -F
    phys-schost-1# /usr/cluster/bin/clrg switch -n phys-schost-2 sol-kz-fz1-rg
    phys-schost-1# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / solaris shared
    - sol-kz-fz1 configured - solaris-kz excl
  10. Copy the zone configuration to the second node and create the kernel zone on the second node using the configuration file.

    root@phys-schost-1:~# zonecfg -z sol-kz-fz1 export -f /var/cluster/run/sol-kz-fz1.cfg
    root@phys-schost-1:~# scp /var/cluster/run/sol-kz-fz1.cfg phys-schost-2:/var/cluster/run/
    root@phys-schost-1:~# rm /var/cluster/run/sol-kz-fz1.cfg
    root@phys-schost-2:~# zonecfg -z sol-kz-fz1 -f /var/cluster/run/sol-kz-fz1.cfg
    root@phys-schost-2:~# rm /var/cluster/run/sol-kz-fz1.cfg
  11. Attach the zone and verify that the zone can boot on the second node. Log in from another session to ensure that the zone boots up fine.

    root@phys-schost-2:~# zoneadm -z sol-kz-fz1 attach -x force-takeover
    root@phys-schost-2:~# zoneadm -z sol-kz-fz1 boot
    root@phys-schost-2:~# zlogin -C sol-kz-fz1
  12. Shut down and detach the zone.

    root@phys-schost-2:~# zoneadm -z sol-kz-fz1 shutdown
    root@phys-schost-2:~# zoneadm -z sol-kz-fz1 detach -F
  13. Install the failover zone agent if it is not already installed.

    root@phys-schost-1# pkg install ha-cluster/data-service/ha-zones
    root@phys-schost-2# pkg install ha-cluster/data-service/ha-zones
  14. To create the resource from any one node, edit the sczbt_config file and set the parameters as shown below.

    root@phys-schost-2:~# clrt register SUNW.gds
    root@phys-schost-2:~# cd /opt/SUNWsczone/sczbt/util
    root@phys-schost-2:~# cp -p sczbt_config sczbt_config.sol-kz-fz1-rs
    root@phys-schost-2:~# vi sczbt_config.sol-kz-fz1-rs
    RS=sol-kz-fz1-rs
    RG=sol-kz-fz1-rg
    PARAMETERDIR=
    SC_NETWORK=false
    SC_LH=
    FAILOVER=true
    HAS_RS=sol-kz-fz1-hasp-rs
    Zonename="sol-kz-fz1"
    Zonebrand="solaris-kz"
    Zonebootopt=""
    Milestone="svc:/milestone/multi-user-server"
    LXrunlevel="3"
    SLrunlevel="3"
    Mounts=""
    Migrationtype="warm"
  15. Register the zone-boot resource by running the sczbt_register script with the configuration file created in the previous step, and then enable the resource.

    root@phys-schost-2:~# ./sczbt_register -f ./sczbt_config.sol-kz-fz1-rs
    sourcing ./sczbt_config.sol-kz-fz1-rs
    Registration of resource sol-kz-fz1-rs succeeded.
    root@phys-schost-2:~# /usr/cluster/bin/clrs enable sol-kz-fz1-rs
  16. Check the status of the resource groups and resources.

    root@phys-schost-2:~# /usr/cluster/bin/clrs status -g sol-kz-fz1-rg
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    ------------------- ------------- ----- -------------------
    sol-kz-fz1-rs phys-schost-1 Online Online - Service is online.
    phys-schost-2 Offline Offline

    sol-kz-fz1-hasp-rs phys-schost-1 Online Online
    phys-schost-2 Offline Offline
    root@phys-schost-2:~#
  17. Log in using the zlogin -C sol-kz-fz1 command to verify that the zone successfully boots up, and then switch the resource group to the other node to test switchover.

    root@phys-schost-2:~# /usr/cluster/bin/clrg switch -n phys-schost-1 sol-kz-fz1-rg
    root@phys-schost-2:~# /usr/cluster/bin/clrs status -g sol-kz-fz1-rg
    === Cluster Resources ===

    Resource Name Node Name State Status Message
    ------------------- ---------- ----- -------------------
    sol-kz-fz1-rs phys-schost-1 Online Online
    phys-schost-2 Offline Offline

    sol-kz-fz1-hasp-rs phys-schost-1 Online Online
    phys-schost-2 Offline Offline
    root@phys-schost-2:~#

How To Create a Calibre Ebook Server on Ubuntu 14.04

Calibre is a free and open source ebook manager.

Although Calibre is probably better known for its desktop client, it can also act as a powerful server, allowing you to access your ebooks from anywhere in the world (or share your collection with friends). Keeping your ebooks on a server is great, as you aren't reliant on having the same reading device with you whenever you want to read. And if you go traveling, you don't need to worry about taking your ebook collection with you!

The server includes a simple and elegant browser front-end that allows you to search for and download books from your library. It also has a mobile-friendly site built in, making it easy to download books straight to an e-reader – even to ones with only the most basic web functionality.

For example, Calibre's browser works with the Kindle Touch, which can download books directly even though the device only has an e-ink display and an experimental browser.

In this article we'll look at how to install, set up, and use Calibre on a Ubuntu 14.04 server. We'll also take a look at how to use the calibredb command to create, customize, and maintain your ebook database right from the server.

For this tutorial we'll cover:

  • Installing Calibre
  • Creating an ebook library, or importing an existing one
  • Making Calibre server a background service
  • Automatically adding new books to the library

By the end of this tutorial, you'll have a small initial library to which you can easily add new books!

Prerequisites


Please make sure you have these prerequisites:

  • Ubuntu 14.04
  • A sudo user

Examples in this tutorial are shown for a virtual or physical machine running a fresh installation of Ubuntu 14.04, but they should be easily adaptable to other operating systems.

Step 1 — Installing Calibre


Calibre is available from the APT software repositories, but as advised by its creators it is far better to install from the binaries provided on their website. Calibre is updated very frequently and the version in the repos tends to lag behind.

Luckily, the creators of Calibre have made this very simple to do. Just run the following Python command on your server. Before running the command, please double-check the official Calibre site in case the command has been changed.

Install Calibre (make sure you scroll to get the entire command):

sudo -v && wget -nv -O- https://raw.githubusercontent.com/kovidgoyal/calibre/master/setup/linux-installer.py | sudo python -c "import sys; main=lambda:sys.stderr.write('Download failed\n'); exec(sys.stdin.read()); main()"

You will notice some warnings about failed desktop integration, but these are safe to ignore, since you are installing Calibre on a remote server.
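
To confirm that the installation succeeded, ask the server binary for its version (a quick check; any version string printed here indicates a working install):

calibre-server --version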

Step 2 — Installing Dependencies


The Calibre command line tool calibredb is used for various operations on your Calibre library, such as adding or importing books, and fetching metadata and covers for books.

We'll take a look at how to use some of these commands later, but for now we'll just install two dependencies. The first is ImageMagick, without which calibredb won't run; and the second is xvfb which we'll use to run calibredb in a virtual X display server – in order to sidestep issues caused by running Calibre in a non-display environment.

To install these just run the following commands.

Update your package lists:

sudo apt-get update

Install xvfb:

sudo apt-get install xvfb

Install ImageMagick:

sudo apt-get install imagemagick

Step 3 — Creating the Library


Now we're almost ready to start running the server. We need to get some books to serve, however.

You may well have your own ebook library already, so we'll look at two ways of doing this.

  1. Add ebook files directly; we'll grab a couple from Project Gutenberg
  2. Import an existing Calibre library; useful if you're already running the desktop version of Calibre

Getting Books


First let's make a directory for our Calibre library. This example creates the directory in your user's home directory, although you could place it anywhere on the server. Run the following commands:

mkdir ~/calibre-library
mkdir ~/calibre-library/toadd

We've created two directories: the first, ~/calibre-library is the one that Calibre will organize automatically, while we'll add books manually to the toadd sub-directory. Later, we'll take a look at how to automate this process too.

Now we'll grab some books from Project Gutenberg. For this tutorial we'll download Pride and Prejudice by Jane Austen and A Christmas Carol by Charles Dickens.

Change to the toadd directory to get started.

cd ~/calibre-library/toadd

Download the two ebooks:

wget http://www.gutenberg.org/ebooks/1342.kindle.noimages -O pride.mobi
wget http://www.gutenberg.org/ebooks/46.kindle.noimages -O christmascarol.mobi

Calibre relies somewhat on file extensions to correctly add books, so the -O flag in the wget command specifies a more friendly filename. If you downloaded a different format from Gutenberg (such as .epub) then you need to change the file extension accordingly.

Adding the Books to Calibre's Database


Now we need to add these books to the Calibre database using the calibredb command through the xvfb virtual display we installed earlier. To do this, run:

xvfb-run calibredb add ~/calibre-library/toadd/* --library-path ~/calibre-library

The asterisk means that Calibre will add all books found in the toadd directory to the library, in the calibre-library directory. You might see an error about not finding a cover (we chose to download the .mobi files without images), but you should also see confirmation that the books were added to the Calibre database.

Sample output:

Failed to read MOBI cover
Backing up metadata
Added book ids: 1, 2
Notifying calibre of the change

That's all we need to start seeing the first results. Let's test out the server. Run:

calibre-server --with-library ~/calibre-library

The command won't produce any output, but will just appear to hang in your terminal. This is fine for now; we'll look at daemonizing it properly later. Now open a web browser and navigate to:

  • http://your_server_ip:8080

Replace your_server_ip with your Ubuntu machine IP address. You should see the main page of your library, looking similar to the screenshot below.


If you click on the All books link, you should see the two books that we added earlier. You can click on the Get button below either book to download it.


Uploading an Existing Calibre Library


If you're already running the desktop version of Calibre and already have your library set up, you can import it to your server easily.

Double-check your current library folder for a file called metadata.db. If this file exists, then everything should just work without any additional configuration.

Upload your entire library folder to your server.
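
For example, from your local machine you could copy the folder with scp (the source path here is an assumption; adjust both paths to match your setup):

scp -r ~/Calibre\ Library/ your_user@your_server_ip:/path/to/calibre-library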

Then, run this command:

calibre-server --with-library /path/to/calibre-library

This will add your existing library in its entirety to the server. You can add more books to it on the server by placing the book files in the toadd directory, as explained in this tutorial.

Step 4 — Making Calibre a Background Service


We don't really want to keep a shell open with the calibre-server command running in it just to keep the server running.

While we could add the --daemonize flag to the command, there are better ways to do it. Below we'll look at how easy it is to make calibre-server into a service so that it will automatically start on system reboot and so that we can very easily start, stop, or restart the process.

Until recently, the way to achieve this was to write complex scripts and put them in the /etc/init.d/ directory. The currently recommended way is to use a far simpler Upstart script, which is a .conf file placed in the /etc/init/ directory. We'll take a look at how to do this:

If the server is still running, hit CTRL + C in your terminal to stop it.

Now create a new configuration file:

sudo nano /etc/init/calibre-server.conf

Create the Upstart script, being sure to replace the variables according to your environment:

description "Calibre (ebook manager) content server"

start on runlevel [2345]
stop on runlevel [^2345]

respawn

env USER='myusername'
env PASSWORD='mypassword'
env LIBRARY_PATH='/home/user/calibre-library'
env MAX_COVER='300x400'
env PORT='80'

script
exec /usr/bin/calibre-server --with-library $LIBRARY_PATH --auto-reload \
--max-cover $MAX_COVER --port $PORT \
--username $USER --password $PASSWORD
end script

Paste this into your text editor and save it. (CTRL + X, then Y, then ENTER). We'll look at what each line does below:

  • The first line is just a description to help you (or others) know what the script does
  • The next two lines state the runlevels at which your script should start and stop, since Upstart allows for order specification so that scripts that rely on each other start in the right order. Runlevel 1 is for essential system services, so we start on runlevel 2, by which time we know that the network and anything else we need will be up and running
  • respawn means that if the service stops unexpectedly, Upstart will try to restart it

The next lines are all variables that we pass to the calibre-server command. Before, we just used the minimum of specifying the --with-library option, but we can see now how much flexibility Calibre offers. Above, we've specified:

  • Username and password to access the library from the web (please change these from the examples provided)
  • Library location path, as before
  • Max image size for book cover images (this is useful to make the page load more quickly)
  • Port number (here we've changed it to 80; change this to something else if you already use port 80 to serve standard web pages, etc.)
  • Finally, in the script section (known as a stanza) we run the main command using exec, and passing in all our variables. The /usr/bin/calibre-server part is the path to the executable

Once you've saved the script and closed the editor, start up the server:

sudo start calibre-server

This time you should see this output, but with a different process number:

calibre-server start/running, process 7811

Now use a browser to navigate to your server's IP address or domain name.

You should see a popup form asking for the username and password. These should be the ones you added to the Upstart script. Enter these and you'll be taken to your ebook library as before.

The server can now easily be stopped, started, and restarted using the following commands:

sudo service calibre-server stop
sudo service calibre-server start
sudo service calibre-server restart

This makes managing the server a lot easier than having to manually deal with daemon processes and process IDs!
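
You can also check whether the job is currently running:

sudo status calibre-server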

The site by default has a mobile version that works nicely with smaller-screen devices such as phones and e-readers. This should load automatically if you visit the site from a mobile device.

Step 5 — Creating a Cron Job to Add Books Automatically


We can write a simple cron job to watch our toadd directory for new books.

Every 10 minutes it will look for files in the /home/user/calibre-library/toadd/ directory, add any files in there to our Calibre database, and then remove the original files. (Calibre makes copies of the files when it adds them to our library so we don't need the originals once the add has taken effect.) This means that if you transfer book files via scp, ssh, etc. to this directory from your main machine, or just download them directly into the toadd directory, then they'll automatically be added to your Calibre database and be available for download from your library!
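
For example, a new book could be pushed over from your local machine like this (the filename is just an illustration):

scp mybook.epub user@your_server_ip:/home/user/calibre-library/toadd/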

To create a cron job, execute:

crontab -e

You might have to make a selection about your preferred text editor.

At the end of the file add the line:

*/10 * * * * xvfb-run calibredb add /home/user/calibre-library/toadd/ -r --library-path /home/user/calibre-library && rm /home/user/calibre-library/toadd/*

The first part of the line (*/10 * * * *) means that the command should be run every ten minutes. The rest is essentially the calibredb command we ran manually earlier: it adds all the books from the toadd folder to the database and then removes the original files.

That's that. You can now access your ebooks from anywhere in the world.

Note: The search results in Calibre aren't sorted by relevance, so if you enter a common term you often find unrelated books before the one you're looking for. However, you can specify to search only by title or author, which does help a lot, and the browse options (browse alphabetically by Author, for example) are very well implemented as well.

Conclusion


You now have a Calibre server that runs as a background service, serves your library over the web, and automatically imports any new books you drop into the toadd directory.

How To Install SchoolTool Student Information System on Ubuntu 14.04

SchoolTool is an open-source student information system and an alternative to Blackboard or Pearson’s PowerSchool. It can be used to manage any of the following records a school might keep:

  • Achievement and goal tracking
  • Attendance journals
  • Event calendars
  • Gradebooks
  • Guardian/parent, staff, and student contact information
  • Infraction/intervention reports

School administrators, clerks, students, and teachers can access SchoolTool using a typical web browser. Unlike PowerSchool, it does not require the Java Runtime Environment.

Prerequisites


Make sure you have these prerequisites before you begin.

  • A server with at least 2GB of RAM running Ubuntu 14.04. Depending on the number of SchoolTool users, you may need more memory
  • One server per school. SchoolTool does not natively support multiple schools on the same server; i.e., a single district-wide deployment
  • A sudo user to execute day-to-day commands

Step 1 — Adding SchoolTool’s Package Repository


The SchoolTool team does not publish its software on the official Ubuntu package repositories, so you will need to add the address of their repository to your server:

sudo add-apt-repository ppa:schooltool-owners/2.8

When prompted, press ENTER.

Step 2 — Installing SchoolTool


Now that SchoolTool’s repository is added to your server, update your server’s package list.

sudo apt-get update

Then, install SchoolTool.

sudo apt-get install schooltool

SchoolTool will install a vast assortment of Python packages along with the SchoolTool software itself, so this can take a few minutes. Python is the programming language that SchoolTool is written in. Those of you who are experienced sysadmins will notice that SchoolTool does not require a LAMP stack for serving web pages or storing data. According to SchoolTool’s developers, the use of Python apps provides more stability in day-to-day operation and during program updates.

Step 3 — Allowing Public Access


By default, SchoolTool is accessible only from the computer where it's installed. In this section, we will open it up to public Internet access.

Open SchoolTool’s paste.ini configuration file on your server in nano, a terminal-based text editor.

sudo nano /etc/schooltool/standard/paste.ini

Use the down arrow on your keyboard to move your cursor towards the bottom of the file. You will see this:

[server:main]
use = egg:zope.server
host = 127.0.0.1
port = 7080

Use the arrow and BACKSPACE keys to replace 127.0.0.1 with 0.0.0.0.

[server:main]
use = egg:zope.server
host = 0.0.0.0
port = 7080

Press the CONTROL + X keys simultaneously for a moment. At the bottom of your screen, nano will ask you this:

Save modified buffer (ANSWERING "No" WILL DESTROY CHANGES) ?                    
Y Yes
N No ^C Cancel

Press the Y key on your keyboard to save your changes to the configuration file.

To apply the changes to SchoolTool, you will need to restart it.

sudo service schooltool restart
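
To confirm that SchoolTool is now listening on all interfaces rather than only on localhost, you can check the listening sockets; you should see port 7080 bound to 0.0.0.0:

sudo netstat -plnt | grep 7080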

Now you can take a moment to view SchoolTool in your browser, to make sure everything is working so far.

Open your browser, and visit the URL http://example.com:7080 or http://your_server_ip:7080, depending on your desired configuration. Note that for now, you need to add the :7080 port number. The next section will show you how to access the server on the default port (80), which should make it easier for more users to access.

You should see the default calendar page.


(Optional) Step 4 — Configuring Port 80 Access


SchoolTool’s default port is 7080. However, most users will be more comfortable accessing it on port 80, the standard HTTP port. That means people will be able to access the server at example.com rather than example.com:7080.

If you do not have any programs (e.g., Apache or Nginx) using port 80 on your server, you can change SchoolTool’s default port to 80 by following the instructions in this section. If you already have a program on your server that uses port 80, you will have to use the default port 7080 or create a new server specifically for SchoolTool.

You can use iptables to forward port 80 to port 7080. Assuming your server is connected to the internet using the interface eth0, use the following commands to accomplish this.

Execute these three commands on your server to set up port forwarding:

sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -i eth0 -p tcp --dport 7080 -j ACCEPT
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 7080
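
Note that iptables rules added this way do not survive a reboot. One simple way to persist them (an optional extra step, not required for the forwarding itself) is the iptables-persistent package, which offers to save the current rules during installation:

sudo apt-get install iptables-persistent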

Now you will be able to log into SchoolTool using the URL http://example.com or http://your_server_ip, depending on your desired configuration.

Step 5 — Logging in to SchoolTool


Use your favorite web browser to access SchoolTool. SchoolTool’s home page is the Calendar page by default.

In the upper right-hand corner, click on the Log in link.



Use the default SchoolTool login credentials:

  • Username: manager
  • Password: schooltool

Then press the Log in button to log in.


You're now logged in to SchoolTool.


Step 6 — Making Basic Configuration Changes


Now that you have logged into SchoolTool, you will want to make the following changes:

  • Change the manager account’s password
  • Specify your school’s name
  • Specify your school’s logo

First we'll update the password for the manager account. Do not leave this with the default password; otherwise, anyone could log into the account.

Click on the Home tab in the navigation menu located at the top of the web page. Next, click on the Settings > Password link in the left-hand navigation menu.


Type in the current password, schooltool. Then type in your new password twice, and press the Apply button. Please choose a strong password.

A Password changed successfully popup will appear. From now on, you should use this password to log in to the SchoolTool control panel.



Next we'll change your school's name and logo.

Click on the School tab in the navigation bar located at the top of the web page. Then, click on the Customize > School Name link in the left-hand navigation menu.

Replace Your School with the name of your school. In this tutorial, we'll use TechSupportPK Academy.

Click on the Choose File button to upload an image from your computer to use as your school’s logo. Once you have selected an image to use, press the Submit button to save your changes.



When the page reloads, SchoolTool will use the name and logo of your school instead of its defaults.

Conclusion

Now that you have installed SchoolTool, you have a free alternative to BlackBoard or PowerSchool that will allow your school to manage student records from a browser-based application.

You'll want to add teachers and students, set up grade books, and more. To access the full SchoolTool manual, refer to The SchoolTool Book, a knowledge base maintained by the developers of SchoolTool.

Hyper-V Virtual CPUs Explained


Did your software vendor indicate that you can virtualize their application, but only if you dedicate one or more CPU cores to it? Not clear on what happens when you assign CPUs to a virtual machine? You are far from alone.


Physical Processors are Never Assigned to Specific Virtual Machines

This is the most important note. Assigning 2 vCPUs to a system does not mean that Hyper-V plucks two cores out of the physical pool and permanently marries them to your virtual machine. I’ve seen IBM systems that do something like this, but I don’t believe that any other hypervisor does. Hyper-V certainly doesn’t. You can’t actually assign a physical core to a VM at all. So, does this mean that a vendor's request to dedicate a core just can’t be met? Well, not exactly. More on that toward the end.

Start by Understanding Operating System Processor Scheduling

Let’s kick this off by looking at how CPUs are used in regular Windows. Here’s a shot of my Task Manager screen:


Nothing fancy, right? Looks familiar, right?

Now, back when computers never, or almost never, came in multi-CPU, multi-core boxes, we all knew that computers couldn’t really multitask. They had one CPU with one core, so there was only one possible thread of execution. But aside from the fancy graphical updates, Task Manager then looked pretty much like Task Manager now: a long list of running processes, each with a metric indicating what percentage of the CPU's time it was using.

Then, as now, each line item you see is a process (or, in recent Task Manager versions, a process group). A process might consist of one or many threads. A thread is nothing more than a sequence of CPU instructions (key word: sequence).

What happens is that the operating system (in Windows, this started with Windows 95 and NT) stops a running thread, preserves its state, and then starts another thread. After a bit of time, it repeats those operations for the next thread. Remember that this is pre-emptive: it is the operating system that decides when a new thread will run.

The thread can beg for more, and you can set priorities that affect where a process goes in line, but the OS is in charge of thread scheduling.

The only difference today is that you have multiple cores and/or multiple CPUs in practically every system (as well as hyper-threading in Intel processors), so Windows can actually multi-task now.

Taking These Concepts to the Hypervisor

Because of its role as a thread manager, Windows can be called a “supervisor” (very old terminology that you really never see anymore): a system that manages processes that are made up of threads.  Hyper-V is a hypervisor: a system that manages supervisors that manage processes that are made up of threads. Pretty easy to understand, right?

Task Manager doesn’t work the same way for Hyper-V, but the same thing is going on. There is a list of partitions, and inside those partitions are processes and threads. The thread scheduler works pretty much the same way. What follows is a rethought version of the original image that was submitted for the book, changed to avoid plagiarism:


 Of course, there are always going to be a lot more than just nine threads going at any given time. They’ll be queued up in the thread scheduler.

What About Processor Affinity?

You probably know that you can affinitize threads in Windows so that they always run on a particular core or set of cores. As far as I know there’s no way to do that in Hyper-V with vCPUs. Doing so would be of questionable value anyway; dedicating a thread to a core is not the same thing as dedicating a core to a thread, which is what many people really want to try to do. You can’t prevent a core from running other threads in the Windows world.

How Does Thread Scheduling Work?

The simplest answer is that Hyper-V makes the decision at the hypervisor level and doesn’t really let the guests have any input. Guest operating systems decide which of their own threads they wish to run, but it is the hypervisor that decides when and where those threads get physical CPU time. The image I presented is necessarily an oversimplification, as it’s not simple first-in-first-out. NUMA plays a role, for instance. Really understanding this topic requires a fairly deep dive into some complex ideas, and that level of depth is not really necessary for most administrators.

The first thing that matters is that (affinity aside) you never know where any given thread is going to actually execute. A thread that was paused to yield CPU time to another thread may very well be assigned to another core when it is resumed.

Did you ever wonder why an application consumes right at 50% of a dual core system and each core looks like it’s running at 50% usage? That behavior indicates a single-threaded application. Each time it is scheduled, it consumes 100% of the core that it’s on. The next time it’s scheduled, it goes to the other core and consumes 100% there.

When the performance is aggregated for Task Manager, that’s an even 50% utilization for the app. Since the cores are handing the thread off at each scheduling event and are mostly idle while the other core is running that app, they amount to 50% utilization for the measured time period. If you could reduce the period of measurement to capture individual time slices, you’d actually see the cores spiking to 100% and dropping to 0% (or whatever the other threads are using) in an alternating pattern.

What we’re really concerned with is the number of vCPUs assigned to a system and priority.

What Does the Number of vCPUs I Select Actually Mean?

You should first notice that you can’t assign more vCPUs to a virtual machine than you have physical cores in your host.


So, a virtual machine’s CPU count means the maximum number of threads that it is allowed to operate on physical cores at any given time. I can’t set that virtual machine to have more than two vCPUs because the host only has two cores. Therefore, there is nowhere for a third thread to be scheduled.

But, if I had a 24-core system and left this VM at 2 vCPUs, then it would only ever send a maximum of two threads up to the hypervisor for scheduling. Other threads would be kept in the guest’s thread scheduler (the supervisor), waiting their turn.
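
If you prefer to script this instead of clicking through the GUI, the Hyper-V PowerShell module exposes the same setting. A minimal sketch, assuming a VM named 'SQL01' (a made-up name) on the local host; note that the VM generally has to be powered off before the count can change:

# Allow at most two of the guest's threads to run on physical cores at once
Set-VMProcessor -VMName 'SQL01' -Count 2

# Confirm the setting
Get-VMProcessor -VMName 'SQL01' | Select-Object VMName, Count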

But Can’t You Assign More Total vCPUs to all VMs than Physical Cores?

Absolutely. Not only can you, but you’re almost definitely going to. It’s no different than the fact that I’ve got 40+ processes “running” on my dual core laptop right now. I can’t actually run more than two threads at a time, but I’m always going to have far more than two threads scheduled.

Windows has been doing this for a very long time now, and Windows is so good at it (usually) that most people don’t even pause to consider just what’s going on. Your VMs (supervisors) will bubble up threads to run and Hyper-V (hypervisor) will schedule them the way (mostly) that Windows has been scheduling them ever since it outgrew cooperative scheduling in Windows 3.x.

What’s The Proper Ratio of vCPU to pCPU/Cores?

This is the question that’s on everyone’s mind. I’ll tell you straight: in the generic sense, this question has no answer.

Sure, way back when, people said 1:1. Some people still say that today. And you know, you can do it. It’s wasteful, but you can do it. I could run my current desktop configuration on a quad-socket, 16-core server and I’d never have any contention. But, I probably wouldn’t see much performance difference. Why? Because almost all my threads sit idle almost all the time. If something needs 0% CPU time, what does giving it its own core do? Nothing, that’s what.

Later, the answer was upgraded to 8 vCPUs per 1 physical core. OK, sure, good.

Then it became 12.

And then the recommendations went away.

They went away because they were dumb. I mean, it was probably a good rule of thumb that was built out of aggregated observations and testing, but really, think about it. You know that mostly, operating threads will be evenly distributed across whatever hardware is available.

So then, the number of physical CPUs needed doesn’t depend on how many virtual CPUs there are. It’s entirely dependent on what the operating threads need. And, even if you’ve got a bunch of heavy threads going, that doesn’t mean their systems will die as they get pre-empted by other heavy threads. It really is going to depend on how many other heavy threads they wait for.

I’m going to let you in on a dirty little secret about CPUs: Every single time a thread runs, no matter what it is, it drives the CPU at 100% (power-throttling changes the clock speed, not workload saturation). The CPU is a binary device; it’s either processing or it isn’t.

The 100% or 20% or 50% or whatever number you see is completely dependent on a time measurement. If you see it at 100%, it means that the CPU was completely active across the measured span of time. 20% means it was running a process 1/5th of the time and idle the other 4/5ths.

What this means is that a single thread can’t actually consume 100% of the CPU the way people think it can, because Windows/Hyper-V will pre-empt it when it’s another thread’s turn. You can actually have multiple “100%” CPU threads running on the same system.

The problem is that a normally responsive system expects some idle time, meaning that some threads will simply let their time slice go by, freeing it up so other threads get CPU access more quickly. When you have multiple threads always queuing for active CPU time, the overall system becomes less responsive because the other threads have to wait longer for their turns. Using additional cores will address this concern as it spreads the workload out.

What this means is, if you really want to know how many physical cores you need, then you need to know what your actual workload is going to be. If you don’t know, then go with the 8:1 or 12:1 ratios, because you’ll probably be fine.

What About Reserve and Weighting (Priority)?

I don’t recommend that you tinker with CPU settings unless you really understand what’s going on. Let the thread scheduler do its job. Just like setting CPU priorities on threads in Windows can get the uninitiated into trouble in a hurry, fiddling with hypervisor vCPU settings can throw a wrench into the operations. In fact, I’ll confess that I haven’t spent a great deal of time testing it because I trust the hypervisor enough.

Let’s look at the config screen:


The first group of boxes is the reserve. The first box represents the percentage that I want to set, and its actual meaning depends on how many vCPUs I’ve given the VM. In this case, I have a 2 vCPU system on a dual core host, so the two boxes will be the same. If I set 10 percent reserve, that’s 10 percent of the total physical resources. If I drop this down to 1 vCPU, then 10 percent reserve becomes 5 percent physical. The second box, which is grayed out, will be calculated for you as you adjust the first box.

The reserve is a hard minimum… sort of. If the total of all reserve settings of all virtual machines on a given host exceeds 100%, then at least one virtual machine isn’t going to start. But, if a VM’s reserve is 0%, then it doesn’t count toward the 100% at all (seems pretty obvious, but you never know). But, if a VM with a 20% reserve is sitting idle, then other processes are allowed to use up to 100% of the available processor power… until such time as the VM with the reserve starts up. Then, once the CPU capacity is available, the reserved VM will be able to dominate up to 20% of the total computing power. Because time slices are so short, it’s effectively like it always has 20% available, but it does have to wait like everyone else.

So, that vendor that wants a dedicated CPU? If you really want to honor their wishes, this is how you do it. You enter whatever number in the top box that makes the second box the equivalent processor power of however many pCPUs/cores they think they need. If they want one whole CPU and you have a quad core host, then make the second box show 25%. Do you really have to? Well, I don’t know. Their software probably doesn’t need that kind of power, but if they can kick you off support for not listening to them, well… don’t get me in the middle of that. The real reason virtualization densities never hit what the hypervisor manufacturers say they can do is because of software vendors’ arbitrary rules, but that’s a rant for another day.

The next two boxes are the limit. Now that you understand the reserve, you can understand the limit. It’s a resource cap. It keeps a greedy VM’s hands out of the cookie jar.

The final box is the weight. As indicated, this is relative. Every VM set to 100 (the default) has the same pull with the scheduler, but they’re all beneath all the VMs that have 200, so on and so forth. If you’re going to tinker, this is safer than fiddling with reserves because you can’t ever prevent a VM from starting by changing relative weights. What the weight means is that when a bunch of VMs present threads to the hypervisor thread scheduler at once, the higher weighted VMs go first. That’s it, that’s all.
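
All three of these knobs are also exposed through PowerShell on Set-VMProcessor. A minimal sketch, assuming a VM named 'VendorApp' (a made-up name); as I read it, Reserve and Maximum correspond to the first box of each pair in the GUI, with the grayed-out totals calculated for you:

# Reserve: the hard-ish minimum discussed above
# Maximum: the limit, a simple resource cap
# RelativeWeight: scheduling priority relative to other VMs (default is 100)
Set-VMProcessor -VMName 'VendorApp' -Reserve 25 -Maximum 100 -RelativeWeight 200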

But What About Hyper-Threading?

Hyper-Threading is an Intel-specific technology that lets a single core process two separate instructions in parallel (called pipelines). Neat, right? One problem: the pipelines run in lockstep.

If the instruction in pipeline one finishes before the one in pipeline two, pipeline one sits and does nothing. But, that second pipeline shows up as another core. So the question is, do you count it? As far as I know, the official response is: No, Hyper-Threading should not be counted toward physical cores when considering hypervisor processing capabilities. Me, I’m a little more lenient. It’s not quite as good as another actual core, but it’s not useless either. Your mileage may vary.

How to Reset A Forgotten Hyper-V Admin Password with a Windows CD

As an IT Professional, you might find yourself blessed with the unfortunate scenario of working on a Hyper-V server that is not able to authenticate to the domain and the cached domain credentials are no longer working. In addition to this predicament, you learn that there is no documentation for the local administrator password. Either the client who you’re working for doesn’t know the local administrator password or the previous engineer who built the server is no longer working for your company and the standard passwords aren’t working.


A 3rd-party password cracker application will allow you to reset the local administrator password. The drawback is that you have to pay for it, and in my experience they don’t always work. Follow the steps below and use the Ease of Access exploit to change the local administrator password instead.

The Ease of Access exploit modifies the Windows system files to enable you to open a command prompt at the Windows login screen. This command prompt runs as the system account, allowing you to add, create, or edit local accounts. The only tool needed for this exploit is a Windows CD.

It can be a Windows Vista, Windows 7, Windows 8, Server 2008, Server 2008 R2, Server 2012, or Server 2012 R2 CD. Even though a Windows Server 2008 CD is used in this example, Linux distributions and the Ultimate Boot CD will also work. The commands would be the same; the only difference would be the way you get to the command prompt.

How to reset your admin password

To get started, boot the server to a Windows CD.  Browse to the repair section and open up the command line tool.

Inside the command line, change directories to the windows installation directory. It will usually be the C, D, or E drive. In this example the D drive contains the Windows system files. Type the following commands:

D:
cd \Windows\System32
ren utilman.exe utilman.exe.old
copy cmd.exe utilman.exe


This will change directories to the System32 directory and rename the utilman.exe file, which is the executable that allows users to open the Ease of Access menu. This menu allows users to modify the contrast of the screen and access features such as the Magnifier and Narrator. After renaming the utilman.exe file, cmd.exe is copied and renamed as “utilman.exe”. Now when users click on the Ease of Access tool at the Windows login screen, the command prompt will appear instead of the normal menu.

Reboot the host and start up normally. At the login screen click on the ease of access button in the lower left corner. A command prompt running as the system account will appear.




Now you can either enable and reset the local administrator password or create an additional account and add it to the local administrators group.

Change the Local Administrator Password

Type the following commands to change the local administrator password and enable the account if it’s disabled:

net user administrator newpassword
net user administrator /active:yes



Create an Additional Local Administrator Account

Type the following commands to add another local administrator account:

net user newadmin P@ssw0rd /add
net localgroup administrators newadmin /add


You can now log in to the server with the new local administrator password or with the additional admin account. Once logged in, make sure to revert the Ease of Access menu back to normal by typing the following command (the /y switch overwrites the copied cmd.exe without prompting):

copy /y utilman.exe.old utilman.exe

 

What if I Am Using Windows Server Core?

Windows Server Core is an installation option that is available during the initial install of Windows Server 2008 and higher. Essentially, this installs Windows without the graphical user interface. The utilman.exe file is not included in a Server Core installation, so when you boot to the Windows CD you can skip the part where you back up the utilman.exe executable. Type in the following commands:

D:
cd \Windows\System32
copy cmd.exe utilman.exe


This will copy the command line executable and rename it as the utilman.exe file. When the host is rebooted, it will think the utilman.exe file exists and the Ease of Access button will respond by opening the command prompt when clicked on.

How Do I Protect My Server Against This Exploit?

The easiest and most inexpensive way to protect against this exploit is to set a BIOS password on the Hyper-V host and change the boot order to exclude CD-ROMs and USB drives. This would protect against an internal attacker that compromises the out-of-band management utility on the host.

However, if the attacker gains physical access to the server, they can reset the BIOS password and still use this exploit. This is why access to the server room should always be secured.

Hyper-V and Networking – Part 1: Mapping the OSI Model

After storage, Hyper-V’s next most confusing subject is networking. There is a dizzying array of choices and possibilities. To make matters worse, many administrators don’t actually understand that much about the fundamentals because, up until now, they’ve never really had to.

 

Why It Matters

In the Windows NT 4.0 days, the Microsoft Certified Systems Engineer exam track required passage of “Networking Essentials” and the electives included a TCP/IP exam. Neither of these exams had a counterpart in the Windows 2000 track and, although I haven’t kept up much with the world of certification since the Windows 2003 series, I’m fairly certain that networking has largely disappeared from Microsoft certifications.

That’s both a blessing and a curse. Basic networking isn’t overly difficult, and a working knowledge can be absorbed through simple hands-on experience. More advanced, and sometimes even intermediate, skills can be involved and require a fair level of dedication. If all you really need to do is plug a Windows Server into an existing network and get it going, then a lot of that is probably excess detail that you can leave to someone else.

There are certification, expertise, and career tracks available just for networking, and the network engineers and administrators that earn them deserve to have their own world separate from system engineering and administration. Learning all of that is burdensome for systems administrators and is unlikely to pay dividends, especially with the risk of skill rot.

The downside is that it’s no longer good enough to know how to set up a vendor team and slam in some basic IP information. Too many systems people have ignored the networking stacks in favor of their servers and applications and are now playing catch-up as integrated teaming, datacenter bridging, software-defined networking, and other technologies escape the confines of netops and intrude into the formerly tidy world of sysops.

The first post of this series will (re)introduce you to the fundamentals of networking that you will build the rest of your Hyper-V networking understanding upon.

The OSI Model

If you’ve never heard the phrases “All People Seem To Need Data Processing” or “Please Do Not Throw Sausage Pizza Away”, then someone along your technical education path has done you a great disservice (or you learned the OSI model in a non-English language). These are mnemonics used by students to drill for exams that test on the seven layers of the OSI model, which obviously worked because I can still recall them fifteen years later:
  1. Physical
  2. Data Link
  3. Network
  4. Transport
  5. Session
  6. Presentation
  7. Application
Oddly enough, I’ve never been asked on any test what “OSI” stands for and I had to look that up: Open Systems Interconnection. Now you know what to put on that blank Apples to Apples card if you want to never be invited to a party again.

The reason that we have two mnemonics is that traffic travels both ways in the model. If your application is Skype, then the model covers your voice being broken into a rush of electrons (from seventh down to first layer) and back into something that might sound almost like you on the other side of an ocean (from first up to seventh layer).

The OSI model is a true model in that it does nothing but describe how a complete networking stack might look. In practice, there is nothing that perfectly matches to this model. The idea is that each of the seven layers performs a particular function in network communications, but only knows enough to interoperate with the layer immediately above and immediately below. So, no jumping from the physical layer to the presentation layer, for instance.

I’m not going to spend a bunch of time on the seven layers. There are a lot of great references and guides available on the Internet, so if you really care, do some searching and find the resource that suits your learning model. If you’re in systems or network administration/engineering, layers six and seven will likely never be of any real concern to you. You might occasionally care about layer five. We’re really focused on layers one through four, and that’s what we’ll talk about.

Use the following diagram as a visual reference for the upcoming sections:


Layer One

In theory, layer one is extremely easy to understand; it’s all in the name: “physical”. This is the electrons and the wires and fiber and switch ports and network adapters and such. It’s the world of twisted pairs and CAT-5e and crossover. In practice, it’s always messier.

This is also the world of crosstalk and interference and phrases like, “Hey, we can use the ballasts in these fluorescent lights as anchors for the network cable, right?” and, “I only use cables with red jackets because they have better throughput than the blue ones,” and all sorts of other things that are pretty maddening if you spend too much time thinking about them. We’ll move along quickly. Just be aware that cables and switches are important, and they break, and they need to be cared for.

Layer Two

Layer two is where things start to get fuzzy. From this point upward, everything exists because we say it does. It’s the first level at which those pulses of light and electron bursts take on some sort of meaning. For us in the Hyper-V world, it’s mostly going to be Ethernet.

The unit of communication in Ethernet is the frame. In keeping with our layered model concept, the frame starts out as a sequence of light or electric pulses that some physical device, like a network adapter, reinterprets into digital bits. The Ethernet specification says that a series of bits has a particular format and meaning.

An incoming series of these bits starts with a header and is then followed by what is expected to be a data section (called the payload), and ends with a set of validation bits. This is the first demonstration point of the OSI model: layer one handles all the nasty parts of converting pulses to data bits and back. Layer two is only aware of and concerned with the ordering of these bits.

The Ethernet Frame

By tearing apart the Ethernet frame header, we can see most of the basic features that live in this layer.

The first thing of note is the source and destination MAC addresses (“media access control address”). On any Windows machine, run IPCONFIG /ALL and you’ll find the MAC address in the Physical Address field. Run Get-NetAdapter in PowerShell and you can retrieve the value of the MacAddress field or the LinkLayerAddress field.
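
For instance, both of those PowerShell fields should show the same value (MacAddress is an alias of LinkLayerAddress); a quick sketch:

# Two ways at the same address
ipconfig /all | findstr /i "Physical"
Get-NetAdapter | Select-Object Name, MacAddress, LinkLayerAddress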

The MAC address comes in six binary octets, usually represented in two-digit hexadecimal number groupings, like this: E0-06-E6-2A-CD-FB. In case it’s not obvious, the hyphens are only present to make it human readable. You’ll sometimes see colons used, or no delimiters at all. Every network device manufacturer (and some other entities) has its own prefix or prefixes, indicated in the first three octets. If you search the Internet for “MAC address prefix lookup”, you’ll find a number of sites that allow you to identify the actual manufacturer of the network chip on your branded adapter.

The presence of the MAC address in the Ethernet frame tells us that layer 2 is what deals with these addresses. Therefore, it could also be said that this is the level at which we will find ARP (address resolution protocol), although, as a tangent, it could also be considered as layer 3. Either way, all data that travels across an Ethernet network knows only about MAC addresses. There is no other addressing scheme available here.

TCP/IP and its attendant IP addresses have no presence in Ethernet, and unless you get really deep into the technicalities, TCP/IP isn’t considered to be in layer two at all. It’s vital that you understand this, as it is a common stumbling point that presents a surprisingly high barrier to comprehension. As a bit of a trivia piece, the ability to manage MAC addresses and tables is what differentiates a switch from a hub.

Next, we might encounter the 802.1q tag. This is the technology that enables VLANs to work. This is a potentially confusing topic and will get its own section later. For now, just be aware that, if present, VLAN information lives in the Ethernet frame which means it is part of layer 2. Layer 3 and upward have no idea that VLANs even exist.

What puts layer two right in the face of the Windows administrator is the fact that the Hyper-V virtual switch and Windows network adapter teaming live at this level. Without an ability to parse the Ethernet frame, teaming cannot work at all. It must be able to work with MAC addresses. The Hyper-V virtual switch is a switch, and as such it must also be aware of MAC addresses. It also happens to be a smart switch, so it must also know how to work with 802.1q VLAN tags.

A fairly recent addition to the Ethernet specification is Datacenter Bridging (DCB). This is an advanced subject that I might write a dedicated article about, as it is a large and complex topic in its own right. The basic goal of DCB is to overcome the lossy nature of Ethernet in the datacenter, where data loss is both unnecessary and undesirable. There are a number of implementations, but the Ethernet versions include some way of tagging the frame.

The significance is that Windows can apply a DCB tag to traffic and DCB-aware physical switches are able to process and prioritize traffic according to these tags. You need a fairly large TCP/IP network for this to be an issue of major concern as most LANs see so little contention that any data loss usually indicates a broken component.

The final thing we’re going to talk about here is the payload. In the modern Windows world, the content of this payload is a TCP/IP packet. It doesn’t have to be that, though. In days of yore, it might have been an IPX/SPX packet. Or a NetBEUI packet. Or anything. All that Ethernet cares about is the destination MAC address. Once the frame is delivered, layer two will unpackage the packet and deliver it up to layer three to deal with.

Layer Three

Here is where we first begin to encounter TCP/IP. A couple of things to note here. First, TCP/IP is not really a protocol, but a protocol group. TCP is one of them, IP is another, so on and so forth. Second, it’s also where you really start to see that the layers of the OSI model are only conceptual, because a number of things could be considered to exist in multiple layers simultaneously.

Layer three is where we start talking about the packet as opposed to the frame. Ethernet (or Token Ring, or any other layer 2 protocol… it doesn’t really matter) has delivered the frame and the payload has been extracted for processing. Everything layer 2-related is now gone: no MAC address. No 802.1q tag. In general, the network adapter driver is the first and last thing in your Windows system to know anything about the Ethernet frame. After that, Windows takes over with the TCP/IP stack.

What we have at this level is IP. The stand-out feature of IP is, of course, the IP address. This is a four-octet binary number that is usually represented in dotted-decimal notation, like this: 192.168.25.37. IP is the addressing mechanism of layer three.

TCP/IP traffic is packaged in the packet. In many ways, it looks similar to the Ethernet frame. It has a defined sequence that includes a header and a data section. Inside the header, we find source and destination IP addresses. This is also the point at which we can start thinking about routing.

A very important fact to know when you’re testing a network is that ICMP (which means PING for most of us) lives in layer 3, not layer 4. You need to be aware of this because you will see behaviors in ICMP that don’t make a lot of sense when you try to think of them in terms of layer 4 behavior, especially in comparison to TCP and UDP. We’ll talk about this again once we are introduced to layer 4.

What’s not here is the Hyper-V virtual switch. It has no IP address of its own and is generally oblivious to the fact that IP addresses exist. When you “share” the physical adapter that a Hyper-V switch is assigned to, what actually happens is that a virtual network adapter is created for the management operating system. That virtual adapter “connects” to the Hyper-V virtual switch at layer one (which is, of course, virtual).

It does the work of bringing the layer two information off the Hyper-V switch into the layer three world of the management operating system. So, the virtual switch and virtual adapter are in layers one and two, but only the adapter can be said to meaningfully participate in layer three at all.
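
You can watch this happen from PowerShell. A minimal sketch, assuming a physical adapter named 'Ethernet' (yours may be named differently):

# Create an external virtual switch and a management OS virtual adapter in one step
New-VMSwitch -Name 'External' -NetAdapterName 'Ethernet' -AllowManagementOS $true

# The new virtual adapter appears alongside the physical ones
Get-NetAdapter | Where-Object InterfaceDescription -like 'Hyper-V Virtual Ethernet*'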

The Hyper-V Server/Windows Server team is also not really in layer three. You do create a team interface, but it also works much like Hyper-V’s virtual adapter.

Layer Four

Layer four is where we find much more of the TCP/IP stack, particularly TCP and UDP. The OSI model is really fuzzy at this point, because these protocols are advertised right there in the TCP/IP packet header, which is definitely a layer three object. However, it is the TCP/IP control software operating in this layer that is responsible for the packaging and handling of these various packets, and the actual differences are seen inside the payload portion of the packet. For the most part, Hyper-V administrators don’t really need to think much about layer four operations, but having no understanding of them will hurt.

The features that we see in layer four are really what made TCP/IP the most popular protocol. This is especially true for TCP, which allows for packets to be lost while preventing data loss. TCP packets are tracked from source to destination, and if one never arrives, the recipient can signal for a retransmission. So, if a few packets in a stream happen to travel a different route and arrive out of order, this protocol can put them back into their original intended pattern. UDP does not do this, but it shares TCP’s ability to detect problems.

This capability is really what separates layer three from layer four, and why ICMP doesn’t behave like a layer four protocol. For instance, if you’re running a Live Migration and a ping is dropped, that doesn’t mean that TCP will be affected at all. I’ve heard it said that ICMP is designed to find network problems and that’s why it fails when other protocols don’t. That’s true to some degree, but it’s also because the functionality that allows TCP and UDP to deal with aberrations in the network is not a layer 3 function.

Encapsulation

The best summary of the process described by the OSI model is that networking is a series of encapsulations. The following illustration shows this at the levels we’ve discussed:

Each successive layer takes the output of the previous layer, and depending on the direction that the data is flowing, either encapsulates it with that layer’s information or unpacks the data for further processing.

What’s Next

In the next installment of this series, we’ll start to see application of these concepts by tearing into VLANs.

Hyper-V and Networking – Part 2: VLANs

In the first part of this series, we started with the foundational concepts of networking in the OSI model and took a brief look at where Hyper-V components live in that model. In this part, we’ll build on that knowledge by looking at the operation of VLANs and how they work within the context of a Hyper-V deployment.


VLANs

“VLAN” is an acronym for Virtual Local Area Network. This concept predates most of the virtualization technologies that are common today and should not be confused with anything in the computer virtualization or software-defined networking worlds.

To recap the first post, the VLAN exists at layer 2 in the OSI model. The significance is that this has nothing to do with IP addresses, although some of the basic rules are similar. Everything at this level is done by MAC address. Within any given Ethernet network, all endpoints must have a MAC address and all MAC addresses must be unique. Usually that’s not tough, given that any single prefix can have almost 17 million unique MAC addresses and manufacturers with that many devices also have more than one prefix.

What a VLAN does is set up a unique Ethernet network. The definition that I commonly see is that each VLAN represents a distinct “broadcast network”. While this explanation is the simplest, I dislike it as it is often confused with IP broadcasting. In Ethernet networking, a “broadcast network” indicates all the endpoints that will receive a frame transmitted to FF-FF-FF-FF-FF-FF. Even though it’s not a wonderful definition, most of the others get so complicated that you forget the question. So, rather than dig into it like that, just look at the functionality.

In a simple network using very basic switches, that means every single plugged-in and live endpoint will receive a broadcast. Switches capable of setting up VLANs can specify that some of their switch ports are members of a particular VLAN. So, an Ethernet frame traveling on VLAN 1 will not be delivered to any switchport that is not a member of VLAN 1. However, this distinction is entirely “virtual”. The only thing that sets the frames apart is the value in the 802.1q tag — and it might not even be there.

Many network devices can’t process an 802.1q tag at all. If an Ethernet frame is received with this tag, a great many devices will treat the frame as malformed. Most of the rest will ignore it. For this reason, you rarely want to point frames with 802.1q tags at physical devices other than switches. Usually, the tag is only present when a frame passes through a “trunk”. A trunk is a single connection that carries multiple VLANs. In all other cases, switches keep track of which VLAN the traffic for a given port should be in.

Access Mode/PVID

In Cisco and Microsoft networking, ports can be set in “access mode”. If they aren’t set with anything else, then they will always communicate using untagged frames. In the Cisco world, this could be subject to the device’s default VLAN. The Hyper-V switch has no default VLAN, or perhaps more properly, the default is always untagged. If the port is set to a specific VLAN, then that port becomes a member of that VLAN. Its frames are still untagged, but the switch will only allow that port to communicate with other devices on the same VLAN. The non-Microsoft/non-Cisco world counterpart to an access mode port is the PVID (port VLAN identifier). Only the term is different; the functionality is the same. To reiterate, access mode ports move traffic untagged, but only within a single specified VLAN.

Trunk Mode/Tagged

In Cisco and Microsoft networking, ports can be set in “trunk mode” instead of access mode. The purpose of trunk mode is to connect to another switch. In trunk mode, the port can become a member of multiple VLANs. VLANs that the trunk should process are added to the allowed list for Cisco and Microsoft switches. On most other devices, a port is considered to be in the equivalent of trunk mode by virtue of having one or more VLANs specified in its “tagged” list. Frames with other tags are discarded.

Native/Untagged VLANs

A trunk on a Cisco or Microsoft switch can be assigned a native VLAN. It then becomes a member of the network shared by all ports on the switch assigned to that VLAN, just as it would if that were simply another of the allowed VLANs. Only one VLAN can be configured on a trunk in this method, or traffic in these VLANs would be inextricably combined into a single network. This is because frames that this trunk sends to the other switch travel without an 802.1q tag, and all incoming frames without a tag are treated as members of this VLAN. For a Cisco/Microsoft switch, this will be determined by the receiving trunk’s native VLAN setting. For all others, the equivalent of the native VLAN is set by designating a single untagged VLAN for the trunk.

Comparing Cisco/Microsoft to other Manufacturers

The nice thing about the Cisco/Microsoft switches is that the usage of “access mode” and “trunk mode” makes it pretty clear what’s going on. For everyone else, use “PVID” instead of “access mode” and “tagged” and/or “untagged” instead of “trunk mode”.

Hypothetical VLAN Configuration

Let’s look at an example. I have one Cisco 20-port smart switch (or a Microsoft Hyper-V external virtual switch) and one Netgear 20-port smart switch. I’m going to configure them as identically as possible, then plug them into each other.

In my network, I’ve decided that servers should participate in VLAN 10, desktops should participate in VLAN 20, and printers should participate in VLAN 30. On each switch, I want to designate ports 1-6 for servers, 7-12 for desktops, 13-18 for printers, and 20 for the link between switches. I will leave 19 unused.

Access Port Configurations

On the Cisco switch, I will set ports 1-6 in access mode for VLAN 10. I will set ports 7-12 in access mode for VLAN 20. I will set ports 13-18 in access mode for VLAN 30. I will plug in the devices as normal and just give them IP addresses as normal.

On the Netgear switch, I will assign ports 1-6 a PVID of 10. I will assign ports 7-12 a PVID of 20. I will assign ports 13-18 a PVID of 30.

Trunk Port Configurations

On the Cisco switch, I will set port 20 to trunk mode. I will give it a native VLAN of 10 and add 10, 20, and 30 to its allowed VLANs list.

On the Netgear switch, I will set port 20 so that it allows untagged traffic for VLAN 10 and tagged traffic for VLANs 20 and 30. As a matter of habit, I also usually set these with a PVID that matches the untagged VLAN, but I don’t think that’s actually necessary.

Results

Now that this is done, I’ll plug in servers, desktops, and printers to all these ports as indicated. I will use a standard patch cable to connect port 20 on the Cisco switch to port 20 on the Netgear. Now, the server plugged into port 1 on the Cisco switch will be able to send and receive Ethernet frames directly to/from devices plugged into ports 2-6 and 20 on the Cisco switch and ports 1-6 and 20 on the Netgear switch. Those frames will never be tagged.

A desktop plugged into port 7 on the Cisco switch will be able to send and receive Ethernet frames directly to/from devices in ports 8-12 on that switch and 7-12 on the other. However, unlike the VLAN 10 ports, its frames will cross the inter-switch link with an 802.1q tag of 20. Once these tagged frames arrive on the receiving switch, the tag is removed. But, the switch will only allow those frames to be visible to ports that are members of VLAN 20. This is the same behavior that will be seen for members of VLAN 30.

Port Channels/Link Aggregation

I know that at least a few of you are looking at that diagram and wondering why I wouldn’t put ports 19 and 20 on both switches into an aggregated link. The answer is: actually, I would. That’s not what this article is about though. We’ll revisit all that in a later installment.

Management VLANs

I haven’t talked about management VLANs, which is mostly because they’re not really in-scope for Hyper-V. Hyper-V either tags a frame or it doesn’t; it doesn’t really have a concept of a default VLAN or a management VLAN.

The management VLAN is really just a VLAN that compliant network devices will respond on. So, if I assign an IP of 192.168.75.100 to a switch, it will give that IP address a presence inside its designated management VLAN’s Ethernet space. The default for most devices is VLAN 1, and for security purposes, it’s usually recommended that you change it. Since the Hyper-V switch has no IP address, it has no purpose for a management VLAN. The management VLANs are trunked and (un)tagged and accessed and PVID’d just like any other VLAN, although also for security purposes, it’s best if only network infrastructure devices’ management IPs are in it.

Hyper-V VLANs

The Hyper-V terminology usually lines up better with Cisco than with the others (confusing or not, those others are actually following the standards most closely). The first difference is that you never really see anything about virtual switch ports unless you travel deep down the path where most are afraid to tread. Usually, you set these options directly on the virtual network adapter. That works well in this case because Hyper-V controls the switch, the switch ports, and the virtual network adapters. So, it doesn’t really need to distinguish between the port and the adapter.

When the Hyper-V virtual switch takes over a physical network adapter, that network adapter becomes an “uplink”. So, instead of the Cisco switch in our example network above, imagine that it’s a Hyper-V switch. The host’s physical adapter it’s assigned to is plugged into port 20 on the Netgear.
The big thing to be aware of is that we don’t, and can’t, configure this particular trunk “port” in Hyper-V. So, no native/untagged VLAN when connecting in to the physical network.

This just means that Hyper-V isn’t going to be able to strip VLAN IDs from traffic leaving virtual machines. It will be able to accept incoming untagged traffic, of course. Just leave the VLAN ID off of any virtual adapter that you want to receive traffic from the connected switch’s native VLAN. The Hyper-V switch’s “uplink” port will send the outgoing traffic of any VM without a defined VLAN without an 802.1q tag, so it will end up in the connected switch’s native VLAN.

Besides that, the Hyper-V virtual switch behaves like any other switch. Switch ports are set in access mode. As previously stated, any port without a defined VLAN winds up in the connected switch’s native or untagged VLAN. All other ports communicate on their designated VLAN, which is analogous to Cisco’s access mode or a non-Cisco switch’s PVID.
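
In PowerShell, putting a virtual adapter into access mode is a one-liner. A minimal sketch, assuming a VM named 'Web01' (a made-up name):

# Access mode on VLAN 20: frames leave the uplink tagged with 20
Set-VMNetworkAdapterVlan -VMName 'Web01' -Access -VlanId 20

# Back to untagged, which lands the VM in the physical switch's native VLAN
Set-VMNetworkAdapterVlan -VMName 'Web01' -Untagged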

VLAN Isolation

By their very nature, VLANs are isolated. You can’t move traffic from one VLAN to another without communicating through a device that is present on both VLANs. There is a capacity issue, however, as the 802.1q tag allows a maximum of 4,094 usable VLANs. There is a functionality issue if you need to connect duplicate IP subnets to the same router. While exceedingly rare in the SMB space, these situations are not uncommon at companies that host networks for others. As an example, you might have two clients who both use 192.168.0.0/24. The easiest way to deal with that is to simply use a dedicated VLAN for each client and ensure that they can only communicate off of their own VLANs by using a router that employs network address translation (NAT). But, this is still subject to the scalability issue.

The first option is to use Private VLANs. This requires nothing other than what Hyper-V itself can provide. With this configuration, each virtual adapter that will use a private VLAN has three configuration steps, as sketched below. First, it is set as promiscuous, community, or isolated. Second, it is given a primary VLAN. Finally, depending on the mode, it is given a secondary VLAN or a list of allowed secondary VLANs. How it behaves depends on the first option that was set. This isn’t something I want to spend a lot of time on, first because I’m not very well-versed in it myself and second, because I expect that if you really need it, you have access to a network engineer who can explain it to you and/or help you configure it.
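
A minimal sketch of those three steps, assuming a VM named 'Tenant01' (a made-up name) and isolated mode:

# Mode, primary VLAN, and secondary VLAN, in that order
Set-VMNetworkAdapterVlan -VMName 'Tenant01' -Isolated -PrimaryVlanId 100 -SecondaryVlanId 200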

The second option is to use network virtualization. This requires VMM, but it also allows you to create fully isolated networks. I also won’t spend much time on this, partially because I feel it’s been done better elsewhere, partially because I’m also not well-versed in it, and partially because, just like private VLANs, I don’t see where the majority of installations will have an immediate need for it. But, because I think that the hybrid cloud will eventually encroach on most of our networks, I do think this is something that’s worthwhile for most Hyper-V administrators to learn in the long run. Just not necessarily right away.

Hyper-V Switch Trunk Mode

The Hyper-V switch allows you to set a virtual adapter into trunk mode. The usefulness of this is in the eye of the beholder, since unlike a physical switch, a Hyper-V switch can only provide a connection between virtual machines and a single physical switch. So, aside from the trunk that exists on an external virtual switch represented by the physical NIC, it can’t trunk to any other switch. Also, as you’ll remember from the first post in this series, VLANs exist at layer 2. IP addresses don’t appear until layer 3.

The practical meaning of that is that all the things you’re used to configuring inside virtual machines know nothing about VLANs. The frames will arrive with the 802.1q tag intact, but you’ll need an application inside the guest that knows what to do with it in order to make any use of this feature. What I see people trying to do is set up the Windows Remote Access Service (RAS, formerly Routing and Remote Access Service, or RRAS) to process VLANs. RAS works at layer 3 and above, so it can’t process these VLAN tags.
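
If you do have a guest that can make use of the tags, trunk mode is configured like this; a minimal sketch, assuming a VM named 'VRouter01' (a made-up name):

# Trunk VLANs 10, 20, and 30 into the guest; untagged frames belong to VLAN 10
Set-VMNetworkAdapterVlan -VMName 'VRouter01' -Trunk -AllowedVlanIdList '10,20,30' -NativeVlanId 10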

Next Steps

As a natural segue, the next thing we’re going to do is step up from layer 2 to layer 3 and look at IP addressing and routing within the context of Hyper-V.

Hyper-V and Networking – Part 3: IP Routing

In the previous post, we dug into VLANs, which are a layer 2 concept in the OSI model. In this piece, we’re going to step up into layer 3 and look at IP addressing and routing and how they work with Hyper-V.


How IP Interacts with Ethernet, Part 1

As you recall from the first post in this series, each layer works with the layers immediately above and immediately below to package the transmission on the way down or unpackage it on the way up. For a TCP/IP packet to be able to participate on an Ethernet network, it needs to be placed into an Ethernet frame. Once that happens, its “TCP/IP-ness” is masked from Ethernet. The frame then moves according to MAC addresses. This is helpful to understand for this post, as it is very similar to the way that TCP/IP packets move in a TCP/IP network. Let’s start with a crude visualization:


Once the frame is packaged like this, it’s dropped onto the Ethernet broadcast network. If it encounters a switch, that switch will attempt to discern where the destination MAC address is. If it’s in a switch port on that switch, it will deliver the frame to that port. If it’s not, it will talk to all of the switches that it knows. If any of the other switches know where the MAC is, the switch that currently possesses the frame will deliver it to the switch port or aggregated ports that connect to the switch that knows how to get there.
There, you’ve just had an eight-second shot of how switches track down the location of MAC addresses (the related job of mapping an IP address to a MAC address belongs to ARP, the address resolution protocol). If, on the other hand, the frame lands on a hub, then the hub will just repeat the frame on all of its other ports. An endpoint that receives an Ethernet frame that wasn’t sent to its MAC or to the broadcast address (FF in every octet) will normally be set to silently discard the frame.

Relating Ethernet to IP

A TCP/IP packet looks very similar to the Ethernet frame above, although it is much busier:


Aside from the order being reversed, the addressing portion is the same. As with the Ethernet frame, if a device receives a TCP/IP packet and the host portion of the destination address is not the broadcast address or that device’s address, then the packet is usually discarded. Be aware that in TCP/IP lingo, “host” means any IP endpoint, such as a Hyper-V host, a virtual adapter, a desktop computer’s NIC, a networked printer’s NIC, a smartphone’s Wi-Fi adapter, etc.

Seems sort of redundant, doesn’t it? You have source and destination addresses in both layer 2 and layer 3. I’ve never worked in protocol design so I don’t know if it’s strictly necessary to have addressing information in both layers. What is certain is that this duality allows TCP/IP to be routed. Where Ethernet frames are permanently limited to a single broadcast network, TCP/IP routers can theoretically get a TCP/IP packet anywhere in the world. Not only that, but they aren’t restricted to an Ethernet network:


This is all possible because of layer separation. In the first hop, the Ethernet frame contains the source MAC address of the sending computer and the destination MAC address of the local router. Routers are considered to be layer 3 devices because upon receipt of a frame, they’ll disassemble it, repackage it into a new frame, and send it on to the next device. All the while, the TCP/IP packet remains intact. In order to do that, the router must be able to inspect at least the IP header.

IP Subnet

When I used to teach classes, this seemed to be the most universally despised subject, at least at the outset. It requires math. Not only does it require math, but it requires binary math. Just saying this caused many students to give up in abject despair. If that’s you, take heart! If you can tell the difference between a 0 and 1, then you know everything you need to in order to understand IP subnetting. While students might certainly lie on course evaluations, it always seemed like they ended these classes with a much better attitude than they started them. Or maybe they were just excited to be able to leave. Either way, this is really not a complicated subject.

When you assign an IP address to a system, you’ve probably noticed there are only two required entries: the IP address and a subnet mask. The subnet mask has exactly one functional purpose: to determine whether or not an outbound packet can be sent right to the destination or if it needs to go through a router first. That’s all that it does. Logically, what it does is determine which portion of the IP address references the network and which portion references the host. The place people get stuck on this is that because they are more comfortable thinking in decimal, they try to process subnet masks in decimal. I’ve met a handful of people who can do that very quickly, but for the most part, it is much easier to operate in binary.

IP Classes

Even though the terminology around IP classes has pretty much faded into the annals of history, understanding them helps with comprehension of subnetting. In days of yore, there were three unicast address classes: A, B, and C. Class A had a subnet mask of 255.0.0.0, B was 255.255.0.0, and C was 255.255.255.0. If you look, you can still find tables that show you which address ranges are considered to belong to each of these classes. The reason that we like these so much is that they work well with our decimal way of thinking. Remember that in an IP address, there are four octets. An octet is nothing more than a byte, or a string of 8 bits. A single bit can have a value of 0 or 1. Eight bits together can have a value from 0 through 255. Or, in hexadecimal, 00 through FF. Or, in binary, 00000000 through 11111111. So, we can start with this simple table:

            Decimal     Binary       Hexadecimal
Minimum     0           00000000     00
Maximum     255         11111111     FF

Armed with this knowledge, let’s look at a basic IP and subnet mask combination.

                          Decimal                                         Binary                                  Hexadecimal
IP Address        192.168.0.10       11000000.10101000.00000000.00001010     C0.A8.00.0A
Subnet Mask     255.255.255.0     11111111.11111111.11111111.00000000        FF.FF.FF.00

Let’s look at what this gives us. We compare the subnet mask against the IP address. From this comparison, we can determine which part of the IP address describes the network and which describes the actual host. All you have to do is look at the Binary column in the table, and it should be pretty obvious. For each position in the IP address that lines up with a 1 in the subnet mask, that is part of the network. Each position in the IP address that lines up with a 0 in the subnet mask represents part of the host’s address.

So, we now know that this host will consider every single other host whose IP starts with 192.168.0 to be on the same network. It considers itself to be the host with the number 10, and other hosts on that network will be numbered 1 through 9 and 11 through 254. 192.168.0.0 represents the IP address of the network (host ID of all 0s) and 192.168.0.255 is the broadcast address for that network (host ID of all 1s). See? Just by being able to tell the difference between a 0 and a 1 (and maybe how to use Calc.exe to convert from decimal to binary), you figured out the networking details for this computer.
In case you don’t know how to use Calc.exe, let me help you:

 If you do this a lot, you can set yourself up with a PowerShell script to do it even more quickly.
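
A minimal sketch of such a script (ConvertTo-BinaryIP is just a name I made up):

# Convert a dotted-decimal IPv4 address to dotted-binary
function ConvertTo-BinaryIP {
    param([string]$IPAddress)
    ($IPAddress.Split('.') | ForEach-Object {
        [Convert]::ToString([int]$_, 2).PadLeft(8, '0')   # one octet -> eight bits
    }) -join '.'
}

ConvertTo-BinaryIP '192.168.0.10'    # 11000000.10101000.00000000.00001010
ConvertTo-BinaryIP '255.255.255.0'   # 11111111.11111111.11111111.00000000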

Classless Inter-Domain Routing (CIDR)

We used a Class C subnet mask because it falls along octet boundaries which makes it easier on the eyes (and maybe the brain), but the basic pattern is established here. A subnet mask always begins with a 1 and always (well, almost always) ends with a 0. There is always an unbroken line of 1s and (almost) always a following line of unbroken zeros.

What makes this “almost” is that 255.255.255.255 is a valid subnet mask, so sometimes the zeros won’t be there. However, in no case can the line of 1s and the line of 0s ever intermingle.

As we found out, trying to break everything up into hard-line classes of A, B, or C ran us out of valid networking combinations pretty quickly. So, the idea of “classless” IPs was born. This is very simple: it just means that the network/host breaks don’t necessarily fall in line with neat octet boundaries. People get scared here, because now you see subnet masks like 255.255.248.0, and most people can’t immediately parse what that means. Keep calm, and calc on:

255.255.248.0 = 11111111.11111111.11111000.00000000.

Just line that up against your IP, and you know exactly what’s going on. For simplicity’s sake, let’s just use the same 192.168.0.10 that we started with.

                          Decimal                                  Binary                                        Hexadecimal
IP Address        192.168.0.10       11000000.10101000.00000000.00001010     C0.A8.00.0A
Subnet Mask     255.255.248.0     11111111.11111111.11111000.00000000       FF.FF.F8.00

This is a little less fun than the previous edition because again, we have a network section of 192.168.0.0, but the host portion is now 0.10 instead of just 10. It’s effectively indistinguishable from the Class C description. For that, we use CIDR notation. This just appends /xx to the end of the network and means nothing more than the total number of 1s used in the subnet mask. So, the network is 192.168.0.0/21 (two full octets of 8 plus one partial octet of 5: 8 + 8 + 5 = 21). It’s also nice because we can express an entire IP address this way as well: 192.168.0.10/21 means the same thing as 192.168.0.10 with a subnet mask of 255.255.248.0. What’s changed from the Class C example is the scope of our network.

Instead of it stopping at FF in the final octet, it stops at 00000111.11111111 in the final two octets. So, the broadcast address in 192.168.0.0/21 is 192.168.7.255. The valid range of host addresses for this network is 192.168.0.1 through 192.168.7.254. If you don’t understand at this point, it’s not because you’re not smart enough, it’s because you’re over-thinking it. You’re letting those little dots get in the way of comprehension. All that you need to do to figure out what IPs fit in the same network is to determine how far you can go without modifying a number that’s blocked off by a 1 in the subnet mask. This is why most of us work this stuff in binary. It’s actually easier.
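
You can check that arithmetic in PowerShell as well. A minimal sketch that derives the network and broadcast addresses for 192.168.0.10/21 with the same mask logic described above (the IPAddress constructor needs PowerShell 5 or later):

$prefix = 21
$bytes  = ([System.Net.IPAddress]::Parse('192.168.0.10')).GetAddressBytes()
[Array]::Reverse($bytes)                          # network byte order -> little endian
$addr   = [BitConverter]::ToUInt32($bytes, 0)
$mask   = ([uint32]::MaxValue -shl (32 - $prefix)) -band [uint32]::MaxValue

$network   = $addr -band $mask
$broadcast = $network -bor (-bnot $mask -band [uint32]::MaxValue)
foreach ($value in $network, $broadcast) {
    $out = [BitConverter]::GetBytes([uint32]$value)
    [Array]::Reverse($out)
    ([System.Net.IPAddress]::new($out)).ToString()   # 192.168.0.0, then 192.168.7.255
}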

Pre-CIDR, that example would have probably been called “supernetting”. We took a Class C range and made it bigger. The traditional /24 break would have meant networks of 192.168.0.0/24, 192.168.1.0/24, 192.168.2.0/24, and so on. In the case of our /21, the next network is 192.168.8.0/21, then 192.168.16.0/21, etc. When we first started doing all this stuff, subnetting was more common. 

We would take a Class C range and make it smaller, as in converting it to a /25. Doing this, we could split out networks into smaller chunks, and then we’d have, perhaps, a server IP network and a desktop IP network, etc. You get the idea. Sub/supernetting is really not what I want to talk about here, I just want you to understand what the subnet mask does.

Putting the Subnet to Practical Use

As I said before, the subnet mask has exactly one practical purpose. That’s for the sending host to determine whether or not the destination host is on the same network. That’s all it does. Pay attention here: this is a one-way calculation. It only comes into play in the send operation.

Example 1

Let’s start with an easy one. My desktop computer’s IP is 192.168.25.55/24. I want to talk to a computer with IP address 192.168.25.37. Here’s the process:

                           Decimal            Binary
Source IP Address          192.168.25.55      11000000.10101000.00011001.00110111
Destination IP Address     192.168.25.37      11000000.10101000.00011001.00100101
Subnet Mask                255.255.255.0      11111111.11111111.11111111.00000000
Source Network             192.168.25.0       11000000.10101000.00011001.00000000
Destination Network        192.168.25.0       11000000.10101000.00011001.00000000
Source Host                55                 00110111
Destination Host           37                 00100101

The source host looks at the source network and the destination network and realizes they’re the same. How does it know this? By lining up the subnet mask against the source and the destination network:
                          Bits 1-8    Bits 9-16   Bits 17-24   Bits 25-32
Source                    11000000    10101000    00011001     00110111
Destination               11000000    10101000    00011001     00100101
Subnet Mask               11111111    11111111    11111111     00000000

If there is any variance at any point of either address in the columns that line up against the leading march of 1s in the subnet mask, then the source and destination IPs are on different networks. It’s that simple.
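
A quick PowerShell sketch of that comparison, using the same bitwise trick as before with the addresses from this example:

$source      = [System.Net.IPAddress]'192.168.25.55'
$destination = [System.Net.IPAddress]'192.168.25.37'
$mask        = [System.Net.IPAddress]'255.255.255.0'

# The same result from both ANDs means the same network: deliver locally, no router needed
($source.Address -band $mask.Address) -eq ($destination.Address -band $mask.Address)    # True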

Example 2

Let’s look at a “harder” one.
192.168.1.42/25 wants to talk to 192.168.37.42.

                                                             Decimal                 Binary
Source IP Address                               192.168.1.42           11000000.10101000.00000001.00101010
Destination IP Address                        192.168.37.42         11000000.10101000.00100101.00101010
Subnet Mask                                        255.255.255.128     11111111.11111111.11111111.10000000
Source Network                                   192.168.1.0             11000000.10101000.00000001.00000000
Destination Network                            192.168.37.0           11000000.10101000.00100101.00000000
Source Host                                          42                            00101010
Destination Host                                   42                            00101010

We can (hopefully) see right away that the source and destination networks are different. So, the sender will need to go through a router. The fact that the host numbers are identical is irrelevant.

Getting to a Router

For our second example, we understand that we need to find a router. How do we do that? Well, you've probably filled out an entry for a "default gateway" a few hundred or thousand times in your career. But have you ever thought much about it? Here's a dump of the routing tables on my system, restricted to IPv4 and the adapter that gets me onto my live network:
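
(The original screenshot is gone; reconstructed from the entries discussed below, a PowerShell dump of the same information would look roughly like this. The interface index of 3 is an assumption.)

PS C:\> Get-NetRoute -AddressFamily IPv4 -InterfaceIndex 3 | Select-Object DestinationPrefix, NextHop

DestinationPrefix      NextHop
-----------------      -------
255.255.255.255/32     0.0.0.0
224.0.0.0/4            0.0.0.0
192.168.25.255/32      0.0.0.0
192.168.25.55/32       0.0.0.0
192.168.25.0/24        0.0.0.0
0.0.0.0/0              192.168.25.1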


The DestinationPrefix column defines target networks for outbound packets. Starting at the top and working our way down, we begin with 255.255.255.255/32. 255.255.255.255 is THE broadcast address for TCP/IP. Any adapter that receives a packet with that IP as the destination should process it. So, this first entry says that if any packet appears with a destination address that perfectly matches 255.255.255.255 (the /32 means "identical match", remember?), then that packet should be processed by 0.0.0.0 (that's what NextHop means). 0.0.0.0 is nowhere, so it effectively means "process here, do not forward".

Next we see 224.0.0.0/4. This is for multicast and is beyond the scope of this article. However, it is basically handled the same way as 255.255.255.255/32, except that it applies to any packet destined to 224.0.0.0/4.

Based on our discussion above, do you recognize the significance of 192.168.25.255/32? This is the broadcast address for the 192.168.25.0/24 network. So, this line says, “any packet with a destination IP address that perfectly matches 192.168.25.255 should be processed here”.

Next, we see 192.168.25.55/32. 192.168.25.55 is the IP address on this adapter. You might be wondering why this line is necessary. Basically, the IP-versus-subnet-mask calculation is performed on every single packet that touches an adapter, no matter what. So, since the IP of this system is 192.168.25.55, if I ping 192.168.25.55, this /32 entry is the most specific match, and due to it our adapter knows that means "process here".

192.168.25.0/24 should also be familiar, as it is the network that this adapter is a member of. This is a bit trickier than the entries above. Traffic matching this entry is processed on this adapter, but if a received packet doesn't match the local adapter's address and isn't intended to be outbound, then it will be discarded. In a switched network, this doesn't happen a lot, but it's very common on networks that use hubs. If it is intended to be outbound, then it will be delivered into the local network. Either way, the "process here" still applies, hence the 0.0.0.0 hop entry.

The final entry is where routing comes into play. The rules are applied in a top-down fashion, so any packet that gets to this point can be considered "none of the above"… hence, default gateway. So, any packet that hasn't already been trapped will be passed off to the device with an IP address of 192.168.25.1 for it to deal with. Hopefully, that device is a router; otherwise, that traffic is going nowhere.

So, now that we’ve read that list in techno-speak, let’s read it in English.
  1. Is this a general broadcast packet (255.255.255.255/32)? If yes, I’ll process it. If not, send it to #2.
  2. Is this a multicast packet (224.0.0.0/4)? If yes, I’ll process it. If not, send to #3.
  3. Is this a local broadcast packet (192.168.25.255/32)? If yes, I’ll process it. If not, send it to #4.
  4. Is this packet addressed to me (192.168.25.55/32)? If yes, I’ll process it. If not, send it to #5.
  5. Is this packet destined for the local network (192.168.25.0/24)? If yes, I’ll process it. If not, send it to #6.
  6. Whatever this is, it is not my problem. Send it to the gateway (192.168.25.1).
We can sort of ignore the first four rules and focus on the last two. Essentially, if it's on the same network, we'll process it on the same network (rule #5). Otherwise, we kick it to a gateway. We can insert specific gateways, if we like. For instance, I might have an isolated network that sits behind a different router, holding, say, a secured file server with data I want to be accessible to my home computers but with no route of its own out to the Internet.

Let’s say I make that network 192.168.100.0/24. The router that connects my 192.168.25.0/24 network to that one has an IP address of 192.168.25.2. To make that work, I need to tell my adapter that packets destined to 192.168.100.0/24 need to go to 192.168.25.2. The PowerShell is like this:

# Send traffic destined for 192.168.100.0/24 to the router at 192.168.25.2 (interface index 3 is this adapter)
New-NetRoute -DestinationPrefix 192.168.100.0/24 -InterfaceIndex 3 -NextHop 192.168.25.2

My route dump now looks like this:
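
(Again reconstructing the missing image, with the same assumptions as before; note the new entry above the default gateway.)

PS C:\> Get-NetRoute -AddressFamily IPv4 -InterfaceIndex 3 | Select-Object DestinationPrefix, NextHop

DestinationPrefix      NextHop
-----------------      -------
255.255.255.255/32     0.0.0.0
224.0.0.0/4            0.0.0.0
192.168.100.0/24       192.168.25.2
192.168.25.255/32      0.0.0.0
192.168.25.55/32       0.0.0.0
192.168.25.0/24        0.0.0.0
0.0.0.0/0              192.168.25.1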


The placement of the rule for 192.168.100.0/24 means that if something needs to go through that gateway, it will be processed before the default gateway. The placement of the default gateway in the last rule literally means that it’s the “gateway of last resort”. If it can’t deal with the packet, then the packet is doomed.

How IP Interacts with Ethernet, Part 2

After all that, now we get to circle back to the beginning. What does process at "0.0.0.0" really mean, anyway? It means that the local adapter can't just send this on to a gateway. It has to deal with it on its own. So, what it will do is decide if it's going to keep the packet or not. Even though it might seem obvious to you whether a packet is inbound or outbound, computers really aren't that smart. They have to calculate the direction of the packet. So, if it's got a packet that matches on a 0.0.0.0 rule, the next thing it will do is compare the source and destination addresses. If the source is not the local IP, then it's inbound. The adapter will discard the frame if the destination indicates that it's not meant for this host, or it will send the packet up to layer 4. If the source is the local IP, then it will send it down to layer 2 for Ethernet processing.

Here, then, is where it gets interesting. If the destination is the local subnet, then the layer 2 portion of this process needs to go out and get the MAC address that corresponds to that IP address. This is where ARP comes into play. I don’t know if there’s any built-in command-line way to go out and resolve an IP to a MAC address, but Windows does cache entries. So, if you have communicated with any given IP address, then you can ask Windows for its MAC:
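
(The original screenshot is gone; reconstructed from the address and MAC referenced in the next paragraph, it would have looked roughly like this. The ping timing is illustrative.)

C:\Users\Rock>ping 192.168.25.10

Pinging 192.168.25.10 with 32 bytes of data:
Reply from 192.168.25.10: bytes=32 time<1ms TTL=128

C:\Users\Rock>arp -a 192.168.25.10

Interface: 192.168.25.55 --- 0x3
  Internet Address      Physical Address      Type
  192.168.25.10         a0-b3-cc-e4-ef-bb     dynamic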


What happened when Windows was assembling the frame for the PING packets was that it went out and got the MAC address for the adapter that’s using 192.168.25.10. It then built the Ethernet frame with a source MAC address of the local adapter and a destination MAC address of a0-b3-cc-e4-ef-bb. Then, it kicked it out onto the physical network for the switch to deal with.

OK, so now we understand local traffic. What about non-local traffic? Take a look at this:
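
(The missing image showed a successful conversation with an Internet host that never appears in the local ARP cache; a reconstruction, with an illustrative round-trip time:)

C:\Users\Rock>ping 64.91.220.229

Pinging 64.91.220.229 with 32 bytes of data:
Reply from 64.91.220.229: bytes=32 time=41ms TTL=52

C:\Users\Rock>arp -a 64.91.220.229
No ARP Entries Found.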


What? How can this be? How can we communicate with that destination IP without its MAC address? The answer is, “It’s not our problem”. Whose problem is it? See for yourself:
C:\Users\Rock>arp -a 192.168.25.1

Interface: 192.168.25.55 --- 0x3
  Internet Address      Physical Address      Type
  192.168.25.1          a8-39-44-20-0d-d0     dynamic


TCP/IP knew that 64.91.220.229 wasn't on the local subnet. There's no specific route entry that matches 64.91.220.0/24, so it bubbled down to the gateway of last resort. That gateway is 192.168.25.1. So, while the TCP/IP packet kept its source of 192.168.25.55 and its destination of 64.91.220.229, the Ethernet frame used the source MAC address of the local adapter and a destination MAC address that belongs to the router. Once the router received the packet, it stripped off the Ethernet frame and made a new one with a destination MAC address of the next router in the series and its own source MAC address. 

Now, I know some of you are about to compose a nastygram about the network designation of 64.91.220.0/24, but you need to wait until after the next section. For completeness, my router also performs network address translation, so it would have also replaced the source IP address with its own, but that’s beyond the scope of this article and unnecessary knowledge for the comprehension of basic TCP/IP routing.

The Return Trip

So, we know that almost all TCP/IP communication is two-way. You request a web page, you receive a web page. You copy a file, you get acknowledgement. Etc. But, there is really no such thing in TCP/IP as "return traffic". Packets are sent and packets are received. They aren't really "returned". So, what can, and does, happen is that you can send to a host but it can't send to you. If you ever take a TCP/IP exam, there will probably be lots of sample configurations in which they'll ask you which computers can talk to which other computers, based solely on IP addresses and subnet masks.

I can pretty much promise you that you’ll have multiple situations where Computer A can send to Computer B, but Computer B can’t send to computer A, all over a subnet mismatch. When I said above that my system just knows that 64.91.220.0/24 is a network other than its own, that’s because as far as it’s concerned, 255.255.255.0 is the only subnet mask that matters in the whole world and the only two networks that exist are 192.168.25.0/24 and that other network that isn’t 192.168.25.0/24.

The actual network that 64.91.220.229 lives in is something for the host with IP 64.91.220.229 to be concerned with. When that system "replies" to a packet that I sent it, it has to go through the exact same process of figuring out whether I'm on its local subnet and, if not, which router to send it to.
 

TCP/IP and VLANs

I need to take a brief detour here and talk about VLANs. Because of the way network engineers tend to build things, a lot of non-networking people come away believing that VLANs and IP subnets have some relation. They do not. If I put 192.168.3.10/24 in VLAN 2 and 192.168.3.11/24 in VLAN 3, those adapters will never be able to talk to each other. They will try to reach each other through the local Ethernet broadcast network, but they exist in different Ethernet broadcast networks. Putting a router in won't help, because the router will think they're both on the same subnet.

IP addresses are a layer 3 component and VLANs are a layer 2 component. Routers don’t actually cross VLANs, either. Routers only cross IP subnets. What happens, though, is that network engineers will give routers a layer-2 presence in a particular VLAN and give it an IP address that shares a subnet with the other devices in that VLAN, which gives the appearance that what the router is doing is sending from one VLAN to another. While technically true, all it’s doing is moving traffic from one IP subnet to another. This could just as easily be done without using VLANs at all. The purpose of VLANs is to wall off these Ethernet broadcast networks to cut down on the effects of Ethernet broadcasts and to guard against network breaches that work at layer 2.

Application to Hyper-V

Overall, Hyper-V is oblivious to layer 3 and IP addresses. Hyper-V Network Virtualization is more or less a layer 3 operation, but the parts that really matter come in VMM’s extensions, not so much with Hyper-V’s innate powers. Without a basic understanding of how IP addressing works, the function of the Hyper-V switch and the configuration of Hyper-V hosts, especially in a failover cluster, is far more confusing than it needs to be.

What’s Next

From here, we’re going to move on from basic theory into technologies that are more Hyper-V centric. We’ll start with link aggregation and teaming.

Hyper-V and Networking – Part 4: Link Aggregation and Teaming

In part 3, I showed you a diagram of a couple of switches that were connected together using a single port. I mentioned then that I would likely use link aggregation to connect those switches in a production environment. Windows Server introduced the ability to team adapters natively starting with the 2012 version. Hyper-V can benefit from this ability.

      To save you from needing to click back to part 3, here is the visualization again:


      Port 19 is empty on each of these switches. That's not a good use of our resources. But, we can't just go blindly plugging in a wire between them, either. Even if we configure ports 19 just like we have ports 20 configured, it still won't work. In fact, either of these approaches will fail with fairly catastrophic effects. That's because we'll have created a loop.
      Imagine that we have configured ports 19 and 20 on each switch identically and wired them together. Then, switch port 1 on switch 1 sends out a broadcast frame. Switch 1 will know that it needs to deliver that frame to every port that's a member of VLAN 10. So, it will go to ports 2-6 and, because they are trunk ports with a native VLAN of 10, it will also deliver it to 19 and 20. Ports 19 and 20 will carry the frame over to switch 2. When it comes out on port 19, it will try to deliver it to ports 1-6 and 20. When it comes out on port 20, it will try to deliver it to ports 1-6 and port 19. So, the frame will go back to ports 19 and 20 on switch 1, where it will repeat the process. Because Ethernet doesn't have a time to live like TCP/IP does (at least, as far as I know, it doesn't), this process will repeat infinitely. That's a loop.

      Most switches can identify a loop long before any frames get caught up. The way Cisco switches will handle it is by cutting off the offending loop ports. So, if it’s the only connection that switch has with the outside world, all its endpoints will effectively go out. I’ve never put any other manufacturer into a loop, so I’m not sure how the various other vendors will deal with it. No matter what, you can’t just connect switches to each other using multiple cables without some configuration work.

      Port Channels and Link Aggregation

      The answer to the above problem is found in Port Channels or Link Aggregation. A port channel is Cisco’s version. Everyone else calls it link aggregation. Cisco does have some proprietary technology wrapped up in theirs, but it’s not necessary to understand that for this discussion. So, to make the above problem go away, we would assign ports 19 and 20 on the Cisco switch into a port channel. On any other hardware vendor, we would assign them to a link aggregation group (LAG). Once that’s done, the port channel or LAG is then configured just like a single port would be, as in trunk/(un)tagged or access/PVID. What’s really important to understand here is that the MAC addresses that the switch assigned to the individual ports are gone. The MAC address now belongs to the port channel/LAG. MAC addresses that it knows about on the connecting switch are delivered to the port channel, not to a switch port.

      LAG Modes

      It’s been quite a while since I worked on a Cisco environment, but as I recall, a port channel is just a port channel. You don’t need to do a lot of configuration once it’s set up. For other vendors, you have to set up the mode. We’re going to see these modes again with the Windows NIC team, so we’ll get acquainted with that first.

      NIC Teaming

      Now we look at how this translates into the Windows and Hyper-V environment. For a number of years, we’ve been using NIC teaming in our data centers to provide a measure of redundancy for servers. This uses multiple connections as well, but the most common types don’t include the same sort of cooperation between server and switch that you saw above between switches. Part of it is that a normal server doesn’t usually host multiple endpoints the way a switch does, so it doesn’t really need a trunk mode.

      A server is typically not concerned with VLANs. So, usually a teamed interface on a server isn’t maintaining two active connections. Instead, it has its MAC address registered on one of the two connected switch ports and the other is just waiting in reserve. Remember that it can’t actually be any other way, because a MAC address can only appear on a single port. So, even though a lot of people thought that they were getting aggregated bandwidth, they really weren’t. But, the nice thing about this configuration is that it doesn’t need any special configuration on the switch, except perhaps if there is a security restriction that prevents migration of MAC addresses.

      New, starting in Windows/Hyper-V Server 2012, is NIC teaming built right into the operating system. Before this, all teaming schemes were handled by manufacturers’ drivers. There are three teaming modes available.

      Switch Independent

      This works like the traditional teaming mode: the switch doesn't need to participate. Ordinarily, the Hyper-V switch will register all of its virtual adapters' MAC addresses on a single port, so all inbound traffic comes through a single physical link. We'll discuss the exceptions in another post. Outbound traffic can be sent using any of the physical links.

      The great benefit of this method is that it can work with just about any switch, so small businesses don’t need to make special investments in particular hardware. You can even use it to connect to multiple switches simultaneously for redundancy. The downside, of course, is that all incoming traffic is bound to a single adapter.

      Static

      The Hyper-V virtual switch and many physical switches can operate in this mode. The common standard is 802.3ad, but not all implementations are equal. In this method, each member is grouped into a single unit as explained in the Port Channels and Link Aggregation section above. Both switches (whether physical or virtual) must have their matching members configured into a static mode.

      MAC addresses on all sides are registered on the overall aggregated group, not on any individual port. This allows incoming and outgoing traffic to use any of the available physical links. The drawbacks are that the switches all have to support this and you lose the ability to split connections across physical switches (with some exceptions, as we’ll talk about later).

      If a connection experiences troubles but isn't down, then the static switch will experience problems that might be difficult to troubleshoot. For instance, if you create a static team on four physical adapters in your Hyper-V host but only three of the switch's ports are configured in a static trunk, then the Hyper-V system will still attempt to use all four.

      LACP

      LACP stands for “Link Aggregation Control Protocol”. This is defined in the 802.1ax standard, which supersedes 802.3ad. Unfortunately, there is a common myth that gives the impression that LACP provides special bandwidth consolidation capabilities over static aggregation. This is not true. An LACP group is functionally like a static group.

      The difference is that connected switches communicate using LACPDU packets to detect problems in the line. So, if the example setup at the end of the Static teaming section used LACP instead of Static, the switches would detect that one side was configured using only 3 of the 4 connected ports and would not attempt to use the 4th link. Other than that, LACP works just like static. The physical switch needs to be setup for it, as does the team in Windows/Hyper-V.
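
      As a rough sketch of how you select these modes on the Windows side, the built-in teaming cmdlet accepts the mode by name (the team and adapter names here are hypothetical):

      # Create a team of two physical adapters; -TeamingMode accepts
      # SwitchIndependent, Static, or Lacp
      New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC1','NIC2' -TeamingMode Lacp

      For the Static and Lacp modes, the matching configuration still has to exist on the physical switch, as described above.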

      Bandwidth Usage in Aggregated Links

      Bandwidth usage in aggregated links is a major confusion point. Unfortunately, it’s not a simple matter of all physical links being simply combined into one bigger one. It’s more likely that load-balancing will occur than bandwidth aggregation.

      In most cases, the sending switch/team controls traffic flow. Specific load-balancing algorithms will be covered in another post. However it chooses to perform it, the sending system will transmit on a specific link. But, any given communication will almost exclusively use only one physical link. This is mostly because it helps ensure that the frames that make up a particular conversation arrive in order.

      If they were broken up and sent down separate pipes, contention and buffering would dramatically increase the probability that they would be scrambled before reaching their destination. TCP and a few other protocols have built-in ways to correct this, but this is a computationally expensive operation that usually doesn’t outweigh the restrictions of simply using a single physical link.
      Another reason for the single-link restriction is simple practicality.

      Moving a transmission through multiple ports from Switch A to Switch B is fairly trivial. From Switch B to Switch C, it becomes less likely that enough links will be available. The longer the communications chain, the more likely a transmission won’t have the same bandwidth available as the initial hop. Also, the final endpoint is most likely on a single adapter. The available methods to deal with this are expensive and create a drag on network resources.

      The implications of all this may not be immediately clear. A quick explanation is that no matter what teaming mode you pick, when you run a network performance test across your team, the result is going to show the maximum speed of a single team member. But, if you run two such tests simultaneously, it might use two of the links. What I normally see is people trying to use a file copy to test bandwidth aggregation. Aside from the fact that file copy is a horrible way to test anything other than permissions, it's not going to show anything more than the speed of a single physical link.

      The exception to the sender-controlling rule is the switch-independent teaming mode. Inbound traffic is locked to a single physical adapter as all MAC addresses are registered in a single location. It can still load-balance outbound traffic across all ports. If used with the Hyper-V port load-balancing algorithm, then the MAC addresses for virtual adapters will be evenly distributed across available physical adapters. Each virtual adapter can still only receive at the maximum speed of a single port, though.

      Stacking Switches

      Some switches have the power to “stack”. What this means is that individual physical switches can be combined into a single logical unit. Then, they share a configuration and operate like a single unit. The purpose is for redundancy. If one of the switch(es) fails, the other(s) will continue to operate. What this means is that you can split a static or LACP inter-switch connection, including to a Hyper-V switch, across multiple physical switch units. It’s like having all the power of the switch independent mode with none of the drawbacks.

      One concern with stacked switches is the interconnect between them. Some use a special interlink cable that provides very high data transfer speeds. With those, the only bad thing about the stack is the monetary cost. Cheaper stacking switches often just use regular Ethernet or 1 Gb or 2 Gb fiber links. This could lead to bandwidth contention between the stack members. Since most networks use only a fraction of their available bandwidth at any given time, this may not be an issue. For heavily loaded core switches, a superior stacking method is definitely recommended.

      Aggregation Summary

      Without some understanding of load-balancing algorithms, it’s hard to get the complete picture here. These are the biggest things to understand:
      • The switch independent mode is the closest to the original mode of network adapter teaming that has been in common use for years. It requires that all inbound traffic flow to a single adapter. You cannot choose this adapter. If combined with the Hyper-V switch port load-balancing algorithm, virtual switch ports are distributed evenly across the available adapters and each will use only its assigned port for inbound traffic.
      • Static and LACP modes are common to the Windows/Hyper-V Server NIC team and most smart switches.
      • Not all static and LACP implementations are created equally. You may encounter problems connecting to some switches.
      • LACP doesn’t have any capabilities for bandwidth aggregation that the static method does not have.
      • Bandwidth aggregation occurs by balancing different communications streams across available links, not by using all possible paths for each stream.

      What’s Next

      While it might seem logical that the next post would be about the load-balancing algorithms, that’s actually a little more advanced than where I’m ready for this series to proceed. Bandwidth aggregation using static and LACP modes is a fairly basic concept in terms of switching. I’d like to continue with the basics of traffic flow by talking about DNS and protocol bindings.

      Hyper-V and Networking – Part 5: DNS

      The last couple of posts in this series have dealt with how Ethernet frames and IP packets get to their destination. In this post, we’ll step up a little bit and look at the role DNS plays in getting those packets to the correct IP address. We’ll see how this works in general and the issues specific to a Hyper-V environment.

       

      DNS

      DNS is a remarkably simple, yet just as remarkably misunderstood technology. I’ve lost track of the number of truly brilliant people I’ve met that struggle with it. So, if you’re confused by DNS, you’re in good company. Let’s see what we can do to get rid of the confusion.

      Its usage has expanded over the years, but the basic problem that DNS addresses is that humans are not very good at remembering numbers. We have a comparatively good ability to recall words, especially when using associative techniques. Computers, as you've probably noticed, have the opposite problem. Not only do they like numbers, they store "words" as numbers. DNS was designed to bridge this gap.

      At its core, DNS (domain name system) is a directory that matches names with numbers. That’s about as complicated as it gets. It has a few other functions, but they all boil down to matching a name to a number. There are a couple of common analogies that easily illustrate this concept.

      The first is phone numbers. Each phone line has a globally unique number. Remembering more than a few of them is extremely difficult. So, phone directories were created. For most of my life, these directories were manifested in large printed books known as the “white pages”, which first lost ground to basic speed dial, and are now built right into everyone’s smart phone.

      The second is physical street addresses. In the U.S., the ZIP+4 code alone specifies the exact destination. It's usually accompanied by a larger, more complicated arrangement that is more human-comprehensible.

      Just as those systems match human-decipherable addresses to more specific locations, DNS matches human-readable names to IP addresses. Like the telephone system, it does so in a precisely ordered, hierarchical fashion. This system works from right-to-left. As you work toward the left, each successive element becomes more specific. In a fully-qualified domain name, the left-most object represents the final destination. We’ll work with the TechSupport website as an example:

      www.techsupportpk.com.

      Each element is separated by a dot. In a complete DNS name, a dot does exist at the very right-most position, although it’s usually suppressed. This dot represents the root domain; nothing can be any higher in the hierarchy. This root domain contains all other domains.

      www.techsupportpk.com.

      Moving to the left, the next object we encounter is “com”. This is known as a top-level domain, which is occasionally abbreviated to TLD. “com” is one of several well-known top-level domains that are available on the Internet. Recently, changes to the public registration system have added quite a number of additional top-level domains. Some of these are protected, like edu and gov. Others are open to registrars to provide naming services to customers.

      www.techsupportpk.com.

      After “com”, we find another period. The periods are how DNS separates elements, but have no special meaning in any position other than the root domain.

      www.techsupportpk.com.

      Next, we reach “techsupportpk”. The contents of this element are generally just called the domain. In this case, the company called “Techsupportpk” has registered the domain name of “techsupportpk” underneath the top-level domain of “com”. As the legal registrants, they can put just about anything inside this domain that they like and no one else is allowed to use it.

      www.techsupportpk.com.

      The next element is "www". I believe that this particular character string is largely responsible for most of the confusion that people feel when talking about DNS. Because it is the left-most element, it is the most specific part of the entire DNS name. In a traditional fully-qualified DNS name, this would refer to a single computer called "www". However, since the dawn of the web page, it seems that everyone has a computer called "www" sitting on the Internet, serving up web traffic. It is the ubiquity of computers named "www" that confuses people when you tell them that it is the most specific element of the web name, not the least.

      www.techsupportpk.com.

      In traditional DNS parlance, the “www” portion is usually called the hostname. Of course, in the modern era, there probably isn’t a computer actually called “www”. It’s probably a hardware load-balancer or a group of computers operating in round-robin or something of the sort. But the point is, “www” is a singular entity that has a specific IP address that is the target of the fully-qualified name “www.techsupportpk.com”.

      So, to match it back to the street address analogy, the right-most period means “the world”, “com” means the country, “techsupportpk” is the city, and “www” is the street and house number. To match it to the phone number analogy, the right-most period is again “the world”, “com” is the country code, “techsupportpk” is the area code and prefix, and “www” is the final set of numbers. These analogies are obviously not completely perfect, but the concepts are the same.

      DNS becomes considerably easier to understand when you move away from “www”. Another DNS name that’s common to many Internet entities is “support”. For example, “support.microsoft.com” refers to an entity named “support” that is part of the domain “microsoft” that is a member of the TLD “com”.

      Confusing Conventions

      The usage of “www” isn’t the only way that people trip up over DNS. Another is that we’ve also developed tools, especially web servers and browsers, to reduce the complexity in a way that masks the true operation of DNS. For instance, if you just tell your browser to go to “techsupportpk.com”, you’ll land on “http://www.techsupportpk.com”. Some websites, such as http://sourceforge.net, mask the computer name out entirely. This very common behavior has led people to believe that they are connecting to “techsupportpk.com” and that the “www” is just an optional relic. Even though the actual name of the system you’re connecting to really doesn’t matter, it’s still there and is separate from its containing domain name.

      What about https:// and other URL components?

      Another stumbling block for many people is the fact that a DNS name is just one part of a uniform resource locator (URL). A fully-qualified DNS name is composed of nothing more than alphanumeric identifiers (hyphens are also acceptable) separated by periods. The other elements of a URL have different purposes and are not part of DNS. The first of these elements that you usually encounter is the protocol identifier, such as “http://”. Other common protocol identifiers are “https://” and “file://”. What these do is identify to the browser (or any URL-friendly application) that it should connect using a specific protocol. As with “www”, “http://” is so ubiquitous that it’s sometimes assumed to be attached to the DNS name somehow.

      A complete URL also contains a resource identifier. For example, http://www.techsupportpk.com/support.php refers to a resource named "support.php" that is served by the entity named "www" on the domain "techsupportpk" which is a member of the TLD named "com". Most web sites employ default resources, which means that they automatically serve up a specifically-named entity, such as index.html, whenever the client browser doesn't request a resource. In terms of a web server, such a resource is always delivered in some fashion. For other URLs, such as ldap://dc1.domain.local, just access to the target system may be enough. The important fact for this discussion is that a trailing slash (/) and anything that comes after it is not part of the DNS name.

      Subdomains

      The above examples are the most common and basic format for a DNS name. The dotted notation can continue to proceed to the left with subdomains. Consider:

      chat1.chatnet.socialmediaco.com

      The above refers to the entity named “chat1” which is a member of subdomain “chatnet” which is a part of the domain “socialmediaco” which is under the TLD “com”.

      Fully-Qualified and not-so-Fully-Qualified

      In the early days of DNS, you really only thought about hostnames, domain names, and fully-qualified domain names (FQDN). For an individual host, the first two would combine to form the latter. So, going back to our original example, “www” is the hostname, “techsupportpk.com” is the domain name, and “www.techsupportpk.com” is the FQDN for the “www” host. But, as work was done to simplify even the DNS system for the average user, other things began to be lost. To reuse another example, http://sourceforge.net is missing a hostname. http://intranet is missing a domain name. Any time you see a DNS name that appears to stray from the format shown above, just be aware that it’s likely that you’re looking at a name that is somewhat less than fully-qualified. There are methods employed that take these partial inputs and resolve them into FQDNs. We’ll look at these later.

      DNS Server Operation

      If you have access to a Microsoft DNS system, I find that DNS is pretty simple to understand just by poking around in the console. Here’s a screenshot of mine:


      A brief explanation of the items:
      1. This is the hostname of the DNS server itself.
      2. This is the domain name that the DNS server hosts records for. Notice that it’s underneath the “Forward Lookup Zones” section. This just means that it processes requests by name and returns IP addresses. A “Reverse Lookup Zone” processes requests by IP and returns names.
      3. There are a number of DNS record types, which we’ll look at in the next section.
      4. Notice all the entries marked “(same as parent folder)”. These are records that don’t have hostnames of their own. We’ll revisit this in a later section.
      5. This is a dialog box for a DNS record. Notice the components.
      What we see here is a DNS server that will respond to queries for records in the siron.int domain using its own database. In this regard, this is an authoritative server, because it doesn’t need to query another DNS server. If it receives a request for a domain that it is not authoritative for, such as techsupportpk.com, it will forward that request. With Microsoft DNS, we can configure both standard forwarders and conditional forwarders. These simply tell this DNS server where to go when it is asked for a record in any domain it is not authoritative for. A standard forwarder is used for all unknown requests while a conditional forwarder is just an exception.

      These concepts are not really germane to this discussion, but simple enough to research on your own. If you don’t want to use forwarders, all Microsoft DNS servers are configured to use root hints by default. These are well-known DNS servers on the Internet that sit in the root domain (“.”). As you can imagine, these DNS servers are busy, so sometimes they don’t resolve as quickly.

      Hopefully, the screenshot makes it obvious that DNS is really just a simple directory listing.

      DNS Record Types

      I’m not going to provide a detailed presentation on DNS record types as they aren’t germane to my goal with this post, but I would be remiss to not provide a few basics.
      • A: The A record is the most fundamental of all DNS records. All it does is match a specific fully-qualified domain name to a specific IP address. An example of an A record can be seen in the above screenshot. Multiple A records can point to the same IP without problems.
      • CNAME: The CNAME, or “Canonical Name” record points one name to another CNAME or an A record. A very common usage of this is web servers. Your actual web server might be named “IIS.domain.tld”, but you can create a CNAME that points “www” to “IIS.domain.tld”. The great benefit of CNAME records is that you can very easily point an existing name to a new one without any major infrastructure changes. You can also have multiple CNAMEs pointing to the same A or CNAME. Web host providers often do this. It allows multiple tenants to utilize the same web server without collisions.
      • MX: “MX” stands for “mail exchanger”. When you instruct your e-mail client to send an e-mail to somebody@techsupportpk.com, your e-mail client or mail server will look for an MX record in the techsupportpk.com domain and send your e-mail to that location.
      • NS: “NS” is a name server. NS records are how DNS servers in disparate domains talk to each other and are utilized by tools such as NSLOOKUP.
      • AAAA: These are nothing more than A records for IPv6.
      This isn’t an exhaustive list, but it gives you an idea of what goes on in a DNS server. Just as you can have multiple records pointing to the same back-end target, you can also have multiples of the same type of record with the same name that point to different back-end targets. This is typically done for redundancy, but it can also be used for a form of poor-man’s load-balancing. For example, you can have an A record for “www” that points to 192.168.10.10 and another “www” record that points to 192.168.10.11. When multiple records with the same name exist, they are provided in round-robin fashion as requests come in. The danger here is that DNS doesn’t do any validation. If one of those two hosts goes down, then every other inbound request will result in the IP address of a dead server being sent back to the client. Most clients are smart enough to try again when that happens.

      How DNS Resolution Works

      DNS is part of the TCP/IP suite and works way up in layer 7, if you’re still thinking about the OSI model. Its purpose is to facilitate communication way down at layer 1, and it has a long way to go to get there. The flow is pretty simple, though.
      1. An application requests a connection to a resource by name, such as www.techsupportpk.com. The system requires a matching IP address before it can process this communication.
      2. The local DNS cache and the HOSTS file are checked first.  If a matching IP is found, it is returned to the requesting application. Otherwise, DNS processing continues.
      3. The system checks to see if it has been configured with the addresses of any DNS servers. If not, it attempts to use other name resolution methods, such as NetBIOS broadcasting.
      4. If the system does have DNS servers configured, it sends a request to one of them, asking about the resource name. Usually, this defaults to A and CNAME requests, but the DNS server can be queried for any record type.
      5. The DNS server can respond in a number of ways
        1. If the DNS server is authoritative for the domain and it has a record that matches the request, it will return that record
        2. If the DNS server is authoritative for the domain and it does not have a record that matches the request, it will return a message that the record doesn’t exist
        3. If the DNS server is not authoritative for the domain and is not configured to use forwarders or root hints, it will return a message that it cannot locate the record
        4. If the DNS server is not authoritative for the domain and is configured to use forwarders or root hints, it will forward the request appropriately. This request can generally continue across a number of DNS servers, so it may take some time for an acknowledgement of any kind.
      6. If the DNS server is not responding or returns an “I don’t know” response (as in 5c), the client can then choose to use an alternate DNS server, if it knows of any. If that doesn’t work, it will then fall back to broadcast.
      Once the requesting application has received a valid IP, it can then perform the original communication.

      That workflow is for an FQDN. What if you haven’t got one?

      First, you might only have a hostname. For example, maybe your IT department has you connect to a SharePoint server by using “http://intranet”. A DNS server won’t be able to process that request correctly. By default, your connection will automatically append its own domain to any request without one. You also have the power to provide it with additional search suffixes. If it can’t get a response for a hostname request using its own domain, it will then step through each configured suffix in order, sending out a unique request for each one until it either gets a response or runs out of suffixes.
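
      You can inspect and adjust that suffix behavior with the DnsClient cmdlets; a sketch (the suffixes are placeholders):

      # Show the current global suffix search list
      Get-DnsClientGlobalSetting

      # Provide additional suffixes to try, in order, for names without a domain
      Set-DnsClientGlobalSetting -SuffixSearchList @('corp.contoso.com', 'contoso.com')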

      Second, you might only have a domain name, such as in the http://sourceforge.net example above. In this case, the request will be handled by a wildcard record. This is an A or CNAME record that has nothing in its name (or with some public DNS providers, an asterisk), but does have an IP or CNAME target. You saw those in the above screenshot as “(same as parent folder)”.

      DNS in a Microsoft Environment

      When Active Directory appeared in Windows 2000 Server, it brought with it a heavy reliance on DNS. If DNS doesn’t work, then pretty much nothing else works either. Some of this is simple convenience. DNS lookups are a lot easier to perform and understand than LDAP lookups. Storing a changeable IP in an LDAP database for quick look-up is much less efficient than DNS. LDAP’s mechanisms provide far more power than DNS, though. Active Directory is able to leverage both of these technologies to great effect – when used properly.

      DNS is pretty easy to understand, and Microsoft DNS is pretty easy to use. To make a DNS server authoritative for a domain, all you really have to do is use the wizard to add that domain. It will then respond to all inbound requests for that domain and will not forward them on. Although I removed them for my screenshot, I’ve used that feature in the past as a cheap way to block access to certain things, like ad delivery sites. All you have to do is create a Forward Lookup Zone for the offending domain and create a wildcard entry that points to 127.0.0.1. Then, traffic bound for any host in that domain will break instantly. Hopefully it’s understood that this doesn’t prevent IP connectivity and could have unintended side effects. Use with caution.
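
      On a 2012-era Microsoft DNS server, that cheap block can be sketched in two lines of PowerShell (the domain name is just an example):

      # Become authoritative for the offending domain...
      Add-DnsServerPrimaryZone -Name 'ads.example.com' -ZoneFile 'ads.example.com.dns'
      # ...then send every host in that domain to the loopback address
      Add-DnsServerResourceRecordA -ZoneName 'ads.example.com' -Name '*' -IPv4Address 127.0.0.1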

      Most functions of domain membership rely on DNS. When a domain member is turned on, it reaches out using DNS to contact a domain controller for authentication purposes. When a domain member needs to talk to another domain member, it starts the conversation using DNS.

      Where a lot of people get themselves into trouble is by overthinking DNS. What I’ve seen a lot of is people plugging the local AD DNS entry into all their domain computers, but then plugging in an ISP’s DNS address for the alternate. “In case the AD DNS system goes down,” they say. It sounds good, except that you can’t really control which DNS server that a system might query on any given request. So, machines configured like this will have periodic timeouts and communications failures on the LAN that are difficult to reproduce.

      That’s because they occasionally ask an Internet DNS server for the address of a name that’s not on the Internet, and that request gets passed all over the world. Eventually, it times out and the local system is then allowed to try other methods that might work, such as broadcast. The answer is to use only your Microsoft DNS server for local machines. It can be annoying if that system goes down, but not nearly as annoying as persistent failures.

      DNS and Hyper-V

      There are two major places that DNS issues can cause problems in Hyper-V.

      The first is for guests. I see a lot of people setting up virtual machines and plugging them into internal or private virtual switches. Their goal is to get the higher speed of the internal hardware rather than having the external switch and its pesky physical NIC getting involved. Before getting into DNS, there are a couple of expectation problems here. For one, the Hyper-V switch is already supposed to be smart enough to not send local traffic out the physical uplink. Often, this gets broken because the source and destination IPs are on different subnets, so the traffic has to hop out onto the physical network to go find a router. The other is that the internal and private switches don’t really seem to perform all that much more quickly than the network hardware. Your mileage will vary.

      But, using the internal/private switches isn’t without its merits. You just have to be aware of what DNS is going to do. If you try to communicate using DNS names, it will always work exactly as described above. The source system will ask the DNS server for an IP, and the DNS server will provide one. The source system isn’t going to ask the DNS server for IPs on specific subnets or anything like that, because that’s not what DNS does. It’s just going to provide the next IP in the rotation. So, if you want to force communications on a specific virtual switch, then you need to set up a specific subnet for it. Then, you can either use manual entries in the HOSTS file(s) or you can leverage custom A and CNAME records that point to the desired destination IP. When setting this up, remember that DNS is a one-way event; the source system asks DNS for the target’s IP, but all communication after that is strictly by IP.

      The other DNS issue for Hyper-V is multi-homed hosts. A lot of times, people set up dedicated management adapters, but then also choose to share the virtual switch’s adapter with the host. That results in a multi-homed system. Hosts that use multiple NICs for SMB multichannel are multi-homed. Clustered Hyper-V systems are multi-homed. The problem is that, by default, all adapters with valid IP addresses are registered in DNS. So, anything trying to communicate with a multi-homed Hyper-V host by name is going to get whichever address the DNS server provides. That address could be on an isolated subnet. The fix is simple: disable DNS registration for everything except the management adapter.
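
      That fix is a one-liner per adapter; a sketch (the interface alias is whatever your non-management adapter happens to be called):

      # Stop a non-management adapter from registering its address in DNS
      Set-DnsClient -InterfaceAlias 'vEthernet (LiveMigration)' -RegisterThisConnectionsAddress $false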

      What’s Next

      In the next installment in this series, I’m going to discuss ports, sockets, and their relationship to applications.

      Hyper-V and Networking – Part 6: Ports, Sockets and Applications

      In many ways, this particular post won’t have a great deal to do with Hyper-V itself. It will earn its place in this series by helping to clear up a common confusion point I see being posted on various Hyper-V help forums. People have problems moving traffic to or from virtual machines, and, unfortunately, spend a lot of time working on the virtual switch and the management operating system.

      Ports

      From the previous parts of this series, you should now have a basic understanding of how traffic moves between computers using the TCP/IP protocol suite. Rarely is traffic simply between two computers, though. Usually, specific applications are communicating. Web server and web browser, SQL server and business client, control system and telnet client. All of these applications could be running on any given system simultaneously (you should probably separate the servers, though). Because so many network-enabled applications could be co-existing, it’s not enough for a computer to just fire packets at a target IP address. More is necessary in order for those packets to find their way to the destination application. The answer to this problem is the port.

      Ports are used with the TCP and UDP protocols and are really nothing more than a numerical identifier that’s in the header of the packet. A server (piece of software) will tell the system that it wants to process all incoming TCP and/or UDP traffic tagged with that specific port number. This is known as listening. Of course, most communication is two-way. So, in addition to the destination port, the packet also contains a source port. When the server responds to the client, it will use that destination port.
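
      You can watch listening in action on any Windows 8/Server 2012 system; every socket that a server application has opened for incoming traffic shows up with its port number:

      # List every TCP port the local system is listening on
      Get-NetTCPConnection -State Listen | Sort-Object LocalPort |
          Select-Object LocalAddress, LocalPort, OwningProcess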

      Ports are allotted 2 bytes in the packet, giving them a range of 0000 through FFFF in hexadecimal or 0-65535 in decimal. I wasn’t around for the discussions on this, but 65535 is the maximum value for an unsigned 16-bit integer, and 16 bits was the largest integer size that was universally common among processors during the rise of IPv4, so I suspect a correlation.

      According to standards, this 2-byte range is broken into three groups: system ports (commonly referred to as well-known ports) are in the range of 0-1023, user ports are from 1024 to 49151, and dynamic ports start at 49152 and run the series out at 65535. The first two are controlled by the Internet Engineering Task Force (IETF) and I personally feel they are somewhat misleadingly named. The IETF retains rigid control over the so-called system ports, but they are not reserved for operating systems or anything like that. They are for common services, such as web and telnet.

      Anyone can apply to the Internet Assigned Numbers Authority (IANA) to have one of the user ports assigned to his/her application or service, such as 5900 for “remote frame buffer”, which is the protocol used by VNC. The final range is open for pretty much anyone to use for anything.

      You’ll notice that the previous paragraph opened with the qualifier of “standards”. That’s because there’s really no way to enforce what happens on any given port. Port 80 is “well-known” to be the port for web servers to use, but it’s trivially simple to code any application to listen on that port or any other.

      I promise you some pictures and further explanation, but I think this is a great place to segue into a discussion of sockets.

      Sockets

      Ports are nice, but they can only get you so far. An application can register a port, but that just facilitates communications. A port alone does not a communications channel make. This is where the socket comes in.

      Sockets are a simple thing. They are the point of contact for a network-enabled application on a computer. Sockets have their own addresses, which are just a combination of the IP address of the host and a specific port. They are what makes TCP and UDP communications possible. In order for proper communication to occur, a TCP or UDP packet requires a destination socket address. Let’s examine the flow:

      Problem: A user wants to retrieve a web page from the Techsupportpk web site. He types http://www.techsupportpk.com into the web browser and presses the Go button.
      1. From the above, the web client knows two things: the user wants the default page from www.techsupportpk.com on the http protocol.
      2. The first thing it does is resolve the hostname www.techsupportpk.com to an IP address using DNS: 64.91.230.229
      3. Because the user specified http, it knows to use port 80.
      4. Therefore, the destination socket is 64.91.230.229:80.
      The destination portion of the packet is now ready for transmission. But, that’s not quite enough. Since the user is asking the web server to deliver a web page, the target server needs to know where to send that page data. The mechanics to handle this are built right into TCP and UDP. The source system first inserts its own IP address in the source IP portion of the packet. Next, it produces a number from the range of dynamic ports (usually at random), and inserts that as well. That’s the source socket. The packet that it sends out looks like this:
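
      (The original packet diagram is gone; in essence, it showed the two socket addresses riding in the packet headers, something like the following. The private source address and the dynamic source port are illustrative.)

      Source socket:        192.168.0.55:49500
      Destination socket:   64.91.230.229:80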

      As you’ll recall from previous discussions, traffic doesn’t really move in a message-response fashion. It’s all a send operation. So, what happens is that the destination application is provided with the source socket information. Remember how I said that the OSI model is just a model and that practice is always different? This is one of those places. The complete layer 3 packet doesn’t necessarily survive all the way into layer 7, but the application layer is aware of both the source IP address and the source port. So, when it processes the request and wants to send back a “reply”, it simply reverses the way the socket information was placed in the packet that it received. Outside the contents of the data portion, the packets going back to the source will be the inverse of those coming in:


      What you’re seeing above is the motion of traffic between two sockets. A server application will always have a socket prepared for incoming traffic. This is called listening. A listening socket doesn’t care where the packet came from. It only cares that it was sent to the socket’s address. The socket belonging to the originating client application, however, will (or should) only accept traffic on its socket that is an inverse match for the request that it made. By maintaining a hash table of the destination socket addresses and dynamic source ports that it has made requests on, the application can easily manage multiple connections to multiple destinations. By maintaining a hash table of the sockets, a host can easily manage the traffic for multiple server and/or client applications. 

      Network Address Translation

      There is one inaccuracy in the sample image illustrating the communications chain. The source IP that I used is from a private range. These ranges (10.0.0.0/8, 169.254.0.0/16, 172.16.0.0/12, and 192.168.0.0/16) are not allowed on the open Internet. Any packet with one of these addresses as a source or destination will be dropped by the first Internet router that processes it.

The purpose of these private ranges is to address IP address starvation. Within IPv4, there aren’t nearly enough addresses for every device worldwide to have its own. But, any organization is free to use private ranges, as it’s guaranteed that duplicate addresses in these spaces cannot collide across the Internet. Organizations then link up to the rest of the Internet using just one or only a few public IPs.

      In the above diagram, all six of those companies, all of various sizes, connect to the Internet consuming only a single public IP address apiece. Their internal networks are much larger, and some even use the same addressing scheme as other corporations. Network address translation (NAT) is the technology that facilitates this, and it’s very easy to understand. 
      When a web browser sends a request to a web site, it can remember all the socket information that was used in the request. When a packet comes in with the socket information reversed, that’s how it knows that it has received a response to that particular request.


      This is the same concept that NAT operates upon. The web browser sends its packets out, where they eventually reach the router that divides the private network from the public Internet. Unlike a standard router, a NAT router is going to make modifications to the layer 3, and possibly even the layer 4, portion of the packet.
      Just like the requesting application builds a hash table out of the source port and target sockets in order to match incoming packets with requests, the NAT router keeps its own table comparing source sockets, destination sockets, and its own replacement source sockets. The router won’t always need to replace the source port, but it often will in order to prevent collisions from multiple source machines attempting to connect to the same target IP using the same source port. When a responding packet is received that is an inverse match for an item in its table, the NAT router performs the same replacement in reverse so that the sending application can also correctly identify incoming packets. 
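To make that bookkeeping concrete, here’s a hypothetical PowerShell sketch of the translation table. It is not a real router, just the accounting one performs; the public IP and sockets are invented for illustration:

$publicIP = '203.0.113.10'    # the router's single public address (example value)
$natTable = @{}               # original source socket + destination -> replacement source socket
$nextPort = 49152             # replacement source ports come from the dynamic range

function Add-NatMapping([string]$SourceSocket, [string]$DestinationSocket) {
    $key = "$SourceSocket->$DestinationSocket"
    if (-not $natTable.ContainsKey($key)) {
        # record a replacement source socket for this outbound flow
        $natTable[$key] = "${publicIP}:$script:nextPort"
        $script:nextPort++
    }
    $natTable[$key]           # the packet leaves with this as its source socket
}

# Two internal hosts that happened to pick the same dynamic port no longer collide:
Add-NatMapping '192.168.1.10:49152' '108.168.254.197:80'    # -> 203.0.113.10:49152
Add-NatMapping '192.168.2.10:49152' '108.168.254.197:80'    # -> 203.0.113.10:49153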

      Application to Hyper-V

      Hyper-V is largely unconcerned with most of what we’ve talked about in this article. The significance is that some people seem to become agitated when they learn that the Hyper-V virtual switch is a switch, not a router. It can, and does, perform the MAC address replacements that we saw in part 3, but it doesn’t track source and destination ports the same way. In fact, barring the use of an extension, the only way Hyper-V becomes at all concerned with ports is if you establish ACLs. These ACLs allow you to selectively allow or deny communications to/from specific ports, among other criteria.
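As a minimal sketch of those ACLs, the extended port ACL cmdlets added in 2012 R2 look like this; the VM name and rule values are examples only:

# Allow inbound HTTP to the virtual adapter, then deny everything else inbound
Add-VMNetworkAdapterExtendedAcl -VMName svweb -Action Allow -Direction Inbound -Protocol TCP -LocalPort 80 -Weight 10
Add-VMNetworkAdapterExtendedAcl -VMName svweb -Action Deny -Direction Inbound -Weight 1

Higher weights are evaluated first, so the Allow rule wins for port 80 traffic.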

      While the virtual switch is probably Hyper-V’s biggest network component, that’s certainly not all it does. Many of its other functions are facilitated by SMB, which uses the well-known port of 445. The management operating system also needs network communications to function, just like any other Windows installation. If you poke around in the default firewall rules, you’ll find a number of important services, such as those belonging to remote access applications.

      What’s Next

      In the next installment in this series, I’m going to refresh an older post about bindings in Hyper-V.

      Hyper-V and Networking – Part 7: Bindings

So far, this series has spent most of its time focused on the methods that systems use to decide where traffic should go and how it should get there. A logical next step is to discuss how bindings work. What this article will do is bring the concepts of that topic in line with the aims of this series, along with an explanation of the mechanics of binding.


      Network Binding Basics

      The concept of binding is fairly simple. It determines what capabilities will be available to a given adapter. You can choose what will be bound to an adapter and their order of precedence.
      Let’s start by looking at the bindings for a typical physical network adapter:


      Each line item represents a technology available for use by this adapter. They are organized, beginning with clients, then services, and finally protocols. The adapter is prevented from using any unchecked items. For instance, many organizations don’t need the advanced features of IPv6 and will unbind it to simplify management. 
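For example, assuming an adapter named Ethernet, unbinding IPv6 takes one line of PowerShell:

Disable-NetAdapterBinding -Name 'Ethernet' -ComponentID ms_tcpip6
Get-NetAdapterBinding -Name 'Ethernet'    # lists the same items as the dialog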

      Binding Order

      Binding order is not of great importance in Hyper-V, although a lot of people spend a great deal of time trying to perfect it for their systems. I recommend that you ignore binding order for your Hyper-V host. But, we can spend a little bit of time taking a closer look at what happens.

      First, you get here by starting on the Network Connections screen. Press ALT to make the menu bar appear. Click Advanced, then Advanced Settings:


      This will reward you with the following screen:
      In the top area, you can change the order that Windows chooses to access adapters when it doesn’t have any other hints. In my observations, this doesn’t work reliably, which is one of the reasons I recommend you not use it for Hyper-V. 

      Reasons People Try to Use Binding Order for Hyper-V

      The most prominent reason people attempt to manipulate the binding order in Hyper-V is when the host is joined to a cluster. This is because a cluster requires multiple adapters for the management operating system, resulting in a multi-homed system. When such a system is not configured properly, it may choose the wrong adapter for some communications, resulting in a wide variety of problems.

      The other reason is a misunderstanding of the Hyper-V switch. All Hyper-V installations, cluster or not, require a management adapter for the operating system and the virtual switch for the guests. Unfortunately, too many people take this to mean that the Hyper-V switch needs some sort of network presence of its own the same way that a regular adapter does. So, they try to find ways to modify the bindings and binding order between the management adapter and the virtual switch. This should never be done.

      Bindings and the Hyper-V Virtual Switch

      Here’s a picture of what an adapter properly bound to the Hyper-V virtual switch looks like:

      The Broadcom bit is just there because it’s a Broadcom card and I have the entire Broadcom package installed. Other than that, the only thing bound to this adapter is the Hyper-V Extensible Switch. That’s exactly as it should be. Attempting to bind anything else should be blocked by the system, but if you manage to do it, you’ll probably break something. 
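You can verify this state from PowerShell; the adapter name below is just an example:

Get-NetAdapterBinding -Name 'SLOT 2' | Where-Object Enabled | Format-Table DisplayName, ComponentID
# Expect the Hyper-V Extensible Virtual Switch (vms_pp) and little or nothing else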
      Beyond that, network bindings have no real purpose for Hyper-V. Adapters used for the management operating system are as dependent upon bindings as any other adapter in Windows. The same goes for the virtual adapters inside virtual machines. Configure them as you would any adapter in the respective operating systems.

      The Multiplexor

      Right beneath the Hyper-V virtual switch in the above screenshot, you can see the Microsoft Network Adapter Multiplexor Protocol. This is the magical component that makes native adapter teaming work in Windows Server. As with adapters bound to the Hyper-V switch, adapters bound to the multiplexor must be bound to nothing else.

      What it All Means

      To risk oversimplification, all these components are just software. Most of them act as intermediaries between the bound adapter and anything else that wants to use it. In almost all cases, that “anything else” will be the operating system. Most other software has no real concept of network adapters. They request communications of some sort and the operating system does most of the work. This is especially true when it comes to teaming.

      One exception is the Hyper-V virtual switch. Hyper-V is well aware of this software component. It is through this particular binding that Hyper-V is able to control how traffic flows in and out of all connected virtual adapters. The fact that nothing else is bound is why the management operating system has no visibility into anything that happens on the Hyper-V virtual switch. Configuring firewalls and other such OS level tools will have absolutely no effect on or access to traffic on the Hyper-V switch unless they are specifically designed as extensions to the virtual switch.

      What’s Next

      At this point, we’ve covered the major aspects of generic networking in relation to Hyper-V. Now it’s time to start moving into technologies that are more Hyper-V specific.

      Hyper-V and Networking Part 8: Load-Balancing Algorithms

We’ve had a long run of articles in this series that mostly looked at general networking technologies. Now we’re going to look at a technology that gets us closer to Hyper-V. Load-balancing algorithms are a feature of the native network team, which can be used with any Windows Server installation but is especially useful for balancing the traffic of several operating systems sharing a single team.


      The selected load-balancing method is how the team decides to utilize the team members for sending traffic. Before we go through these, it’s important to reinforce that this is load-balancing. There isn’t a way to just aggregate all the team members into a single unified pipe.

      I will periodically remind you of this point, but keep in mind that the load-balancing algorithms apply only to outbound traffic. The connected physical switch decides how to send traffic to the Windows Server team. Some of the algorithms have a way to exert some influence over the options available to the physical switch, but the Windows Server team is only responsible for balancing what it sends out to the switch.

      Hyper-V Port Load-Balancing Algorithm

This method is commonly chosen and recommended for all Hyper-V installations based solely on its name. This is a poor reason. The name wasn’t picked because it’s the automatic best choice for Hyper-V, but because of how it operates.

      The operation is based on the virtual network adapters. In versions 2012 and prior, it was by MAC address. In 2012 R2, and presumably onward, it will be based on the actual virtual switch port. Distribution depends on the teaming mode of the virtual switch.

      Switch-independent: Each virtual adapter is assigned to a specific physical member of the team. It sends and receives only on that member. Distribution of the adapters is just round-robin. The impact on VMQ is that each adapter gets a single queue on the physical adapter it is assigned to, assuming there are enough left.

      Everything else: Virtual adapters are still assigned to a specific physical adapter, but this will only apply to outbound traffic. The MAC addresses of all these adapters appear on the combined link on the physical switch side, so it will decide how to send traffic to the virtual switch. Since there’s no way for the Hyper-V switch to know where inbound traffic for any given virtual adapter will be, it must register a VMQ for each virtual adapter on each physical adapter. This can quickly lead to queue depletion.
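For reference, here is a sketch of building such a team and attaching a virtual switch to it; the team and NIC names are examples:

New-NetLbfoTeam -Name HVTeam -TeamMembers NIC1, NIC2 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
New-VMSwitch -Name GuestSwitch -NetAdapterName HVTeam -AllowManagementOS $false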

      Recommendations for Hyper-V Port Distribution Mode

      If you somehow landed here because you’re interested in teaming but you’re not interested in Hyper-V, then this is the worst possible distribution mode you can pick. It only distributes virtual adapters. The team adapter will be permanently stuck on the primary physical adapter for sending operations. The physical switch can still distribute traffic if the team is in a switch-dependent mode.
      By the same token, you don’t want to use this mode if you’re teaming from within a virtual machine. It will be pointless.

      Something else to keep in mind is that outbound traffic from a VM is always limited to a single physical adapter. For 10 Gb connections, that’s probably not an issue. For 1 Gb, think about your workloads.

      For 2012 (not R2), this is a really good distribution method for inbound traffic if you are using the switch-independent mode. This is the only one of the load-balancing modes that doesn’t force all inbound traffic to the primary adapter when the team is switch-independent. If you’re using any of the switch-dependent modes, then the best determinant is usually the ratio of virtual adapters to physical adapters. 

      The higher that number is, the better result you’re likely to get from the Hyper-V port mode. However, before just taking that and running off, I suggest that you continue reading about the hash modes and think about how it relates to the loads you use in your organization.

      For 2012 R2 and later, the official word is that the new Dynamic mode universally supersedes all applications of Hyper-V port. I have a tendency to agree, and you’d be hard-pressed to find a situation where it would be inappropriate. That said, I recommend that you continue reading so you get all the information needed to compare the reasons for the recommendations against your own system and expectations.

      Hash Load-Balancing Algorithms

      The umbrella term for the various hash balancing methods is “address hash”. This covers three different possible hashing modes in an order of preference. Of these, the best selection is the “Transport Ports”. The term “4-tuple” is often seen with this mode. All that means is that when deciding how to balance outbound traffic, four criteria are considered. These are: source IP address, source port, destination IP address, destination port.

      Each time traffic is presented to the team for outbound transmission, it needs to decide which of the team members it will use. At a very high level, this is just a round-robin distribution. But, it’s inefficient to simply set the next outbound packet onto the next path in the rotation. Depending on contention, there could be a lot of issues with stream sequencing. So, as explained in the earlier linked posts, the way that the general system works is that a single TCP stream stays on a single physical path. In order to stay on top of this, the load-balancing system maintains a hash table. A hash table is nothing more than a list of entries with more than one value, with each entry being unique from all the others based on the values contained in that entry.

      To explain this, we’ll work through a complete example. We’ll start with an empty team passing no traffic. A request comes in to the team to send from a VM with IP address 192.168.50.20 to the Techsupportpk web address. The team sends that packet out the first adapter in the team and places a record for it in a hash table:

Source IP       | Source Port | Destination IP    | Destination Port | Physical Adapter
192.168.50.20   | 49152       | 108.168.254.197   | 80               | 1

Right after that, the same VM requests a web page from the Microsoft web site. The team compares it to the first entry:

Source IP       | Source Port | Destination IP    | Destination Port | Physical Adapter
192.168.50.20   | 49152       | 108.168.254.197   | 80               | 1
192.168.50.20   | 49153       | 65.55.57.27       | 80               | ?

      The source ports and the destination IPs are different, so it sends the packet out the next available physical adapter in the rotation and saves a record of it in the hash table. This is the pattern that will be followed for subsequent packets; if any of the four fields for an entry make it unique when compared to all current entries in the table, it will be balanced to the next adapter.

      As we know, TCP “conversations” are ongoing streams composed of multiple packets. The client’s web browser will continue sending requests to the above systems. The additional packets headed to the Techsupportpk site will continue to match on the first hash entry, so they will continue to use the first physical adapter.

      IP and MAC Address Hashing

      Not all communications have the capability of participating in the 4-tuple hash. For instance, ICMP (ping) messages only use IP addresses, not ports. Non-TCP/IP traffic won’t even have that. In those cases, the hash algorithm will fall back from the 4-tuple method to the most suitable of the 2-tuple matches. These aren’t as granular, so the balancing won’t be as even, but it’s better than nothing.

      Recommendations for Hashing Mode

      If you like, you can use PowerShell to limit the hash mode to IP addresses, which will allow it to fall back to MAC address mode. You can also limit it to MAC address mode. I don’t know of a good use case for this, but it’s possible. Just check the options on New- and Set-NetLbfoTeam. In the GUI, you can only pick “Address Hash” unless you’ve already used PowerShell to set a more restrictive option.
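A quick sketch, using the example team from earlier:

Set-NetLbfoTeam -Name HVTeam -LoadBalancingAlgorithm IPAddresses    # 2-tuple IP hash
# MacAddresses and TransportPorts are the other hash choices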
For 2012 (not R2), this is the best solution in non-Hyper-V teaming, including teaming within a virtual machine. For Hyper-V, it’s good when you don’t have very many virtual adapters or when the majority of the traffic coming out of your virtual machines is highly varied in a way that would produce a high number of balancing hits. Web servers are likely to fit this profile.

In contrast to Hyper-V Port balancing, this mode will always balance outbound traffic regardless of the teaming mode. But, in switch-independent mode, all inbound traffic comes across the primary adapter. This is not a good combination for high quantities of virtual machines whose traffic balance is heavier on the receive side. This is part of the reason that the Hyper-V port mode almost always makes more sense in a switch-independent mode, especially as the number of virtual adapters increases.

For 2012 R2, the official recommendation is the same as with the Hyper-V port mode. You’re encouraged to use the new Dynamic mode. Again, this is generally a good recommendation that I’m inclined to agree with. However, I still recommend that you keep reading so you understand all your options.

      Dynamic Balancing

      This mode is new in 2012 R2, and it’s fairly impressive. For starters, it combines features from the Hyper-V port and Address Hash modes. The virtual adapters are registered separately across physical adapters in switch independent mode so received traffic can be balanced, but sending is balanced using the Address Hash method. In switch independent mode, this gives you an impressive balancing configuration. This is why the recommendations are so strong to stop using the other modes. 
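Switching the example team over is a one-liner on 2012 R2:

Set-NetLbfoTeam -Name HVTeam -LoadBalancingAlgorithm Dynamic
Get-NetLbfoTeam -Name HVTeam | Format-List TeamingMode, LoadBalancingAlgorithm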

However, if you’ve got an overriding use case, don’t be shy about using one of the older modes. I suppose it’s possible that limiting virtual adapters to a single physical adapter for sending might have some merits in some cases.

There’s another feature added by the Dynamic mode that its name is derived from. It makes use of flowlets. I’ve read a whitepaper that explains this technology. To say the least, it’s a dense work that’s not easy for mortals to follow. The simple explanation is that it is a technique that can break an existing TCP stream and move it to another physical adapter. Pay close attention to what that means: the Dynamic mode cannot, and does not, send a single TCP stream across multiple adapters simultaneously. The odds of out-of-sequence packets and encountering interim or destination connections that can’t handle the parallel data is just too high for this to be feasible at this stage of network evolution. What it can do is move a stream from one physical adapter to another.

      Let’s say you have two 10 GbE cards in a team using Dynamic load-balancing. A VM starts a massive outbound file transfer and it gets balanced to the first adapter. Another VM starts a small outbound transfer that’s balanced to the second adapter. A third VM begins its own large transfer and is balanced back to the first adapter. The lone transfer on the second adapter finishes quickly, leaving two large transfers to share the same 10 Gb adapter. Using the Hyper-V port or any address hash load-balancing method, there would be nothing that could be done about this short of canceling a transfer and restarting it, hoping that it would be balanced to the second adapter. With the new method, one of the streams can be dynamically moved to the other adapter, hence the name “Dynamic”. Flowlets require the split to be made at particular junctions in the stream. It is possible for Dynamic to work even when a neat flowlet opportunity doesn’t present itself.

      Recommendations for Dynamic Mode

      For the most part, Dynamic is the way to go. The reasons have been pretty well outlined above. For switch independent modes, it solves the dilemma of choosing Hyper-V port for inbound balancing against Address Hash for outbound balancing. For both switch independent and dependent modes, the dynamic rebalancing capability allows it to achieve a higher rate of well-balanced outbound traffic.
      It can’t be stressed enough that you should never expect a perfect balancing of network traffic. 

Normal flows are anything but even or predictable, especially when you have multiple virtual machines working through the same connections. The Dynamic method is generally superior to all other load-balancing methods, but you’re not going to see perfectly level network utilization by using it.

      Remember that if your networking goal is to enhance throughput, you’ll get the best results by using faster network hardware. No software solution will perform on par with dedicated hardware.

      Google’s Chrome Remote Desktop for iOS now available


      Google is not shy when it comes to bringing its apps, those made famous on the Android platform, over to the competition. Today, they do it again with the Chrome Remote Desktop app.

On Monday, Google officially launched the Chrome Remote Desktop app that will, as the name suggests, essentially extend the desktop in a secure fashion over to the iPhone or iPad of your choice. To make the magic happen, a user will have to set up the Chrome Remote Desktop app on their desktop of choice (which is available through the Chrome Web Store as a separate download), and then set up the app on the iOS-based device of their choice as well.


It works on devices that run iOS 7.0 or later, for what it’s worth, and it is a universal app, so it will work on both the iPhone and the iPad.

      It’s available now for free, through the source link below. Do you think you’ll check it out?

      Download link:


    • Chrome Remote Desktop — Free

How to Install and Configure TCP/IP Routing in a Hyper-V Guest


      One of the great things about the Hyper-V virtual switch is that it can be used to very effectively isolate your virtual machines from the physical network. This grants them a layer of protection that’s nearly unparalleled. Like any security measure, this can be a double-edged sword. Oftentimes, these isolated guests still need some measure of access to the outside world, or they at least need to have access to a system that can perform such access on their behalf.

There are a few ways to facilitate this sort of connection. The biggest buzzword-friendly solution today is network virtualization, but that currently requires additional software (usually System Center VMM) and a substantial degree of additional know-how. For most small, and even many medium-sized organizations, this is an unwelcome burden not only in terms of financial expense, but also in training/education and maintenance.

      A simpler solution that’s more suited to smaller and less complicated networks is software routing. Because we’re talking about isolation using a Hyper-V internal or private switch, such software would need to be inside a virtual machine on the same Hyper-V host as the isolated guests.

         
          External and Private Switch

      The routing VM would be represented in that image as the Dual-Presence VM.

      Choosing a Software Router

      There are major commercial software routing solutions available, such as Vyatta. There are free routing software packages available, such as VyOS, the community fork for Vyatta. These are all Linux-based, and as such, are not within my scope of expertise. However, it should be possible to deploy them as Hyper-V guests.

      What we’re going to look at in this article is Microsoft’s Routing and Remote Access Service. It’s an included component of Windows Server, and it’s highly recommended that an RRAS system perform no other functions. If you have a spare virtualization right, then this environment is free for you. Otherwise, you’ll need to purchase an additional Windows Server license.

Do not enable RRAS in Hyper-V’s management operating system! Doing so does not absolve you of the requirement to provide a virtualization right, and the networking performance of both the management operating system and RRAS will be degraded. It’s also an unsupported configuration. To make matters worse, the performance will be unpredictable.


      Step 1: Build the Virtual Machine

      Sizing a software router depends heavily upon the quantity of traffic that it’s going to be dealing with. However, systems administrators that don’t work with physical routers very often are usually surprised at just how few hardware resources are in even some of the major physical devices. My starting recommendation for the routing virtual machine is as follows:
      • vCPUs: 2
      • Dynamic Memory: 512MB Startup
      • Dynamic Memory: 256MB Minimum
      • Dynamic Memory: 2GB Maximum
      • Disk: 1 VHDX, Dynamically Expanding, 80GB (expect < 20 GB use, fairly stagnant growth)
      • vNIC: 1 to connect to the external switch
      • vNIC: 1 per private subnet per private/internal switch (wait on this during initial build)
      • OS: latest Windows Server that you are licensed for
      • This VM won’t necessarily need to be a member of your domain. If it’s going to be sitting in the perimeter, then you might consider leaving it out. Another option is to leave it in the domain but with higher-than-usual security settings. A great thing to do for machines such as this is disable cached credentials, whether it’s left in the domain or not.
      Once the virtual machine is deployed, monitor it for CPU contention, high memory usage, and long network queue lengths. Adjust provisioning upward as necessary.

      During VM setup, I would only create a single virtual adapter to start, and place it on the external virtual switch. Then use PowerShell to rename that adapter:

Get-VMNetworkAdapter -VMName svrras | Rename-VMNetworkAdapter -NewName External

      Then, boot up the virtual machine and install Windows Server (I’ll be using 2012 R2 for this article). Rename the adapter inside Windows Server as well to reflect that it connects to the outside world. These steps will help you avoid a lot of issues later. If you’ve already created your adapters and would like a way to identify them, you can disconnect them from their switches and watch which show as being unplugged in the virtual machine.
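A quicker way to match virtual adapters to their switches is simply to list them; the VM name here is from the example configuration:

Get-VMNetworkAdapter -VMName svrras | Format-Table Name, SwitchName, MacAddress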

      Before proceeding, ensure you have added all virtual adapters necessary, one for each switch. The virtual adapter connected to the external virtual switch is the only one that requires complete IP information. It should have a default gateway pointing to the next external hop and it should know about DNS servers. It can use any IP on that network that you have available. It only needs a memorable IP if other systems will need to be able to send traffic directly to hosts on the isolated network.

      All other adapters require only an IP and a subnet mask. The IP that you choose will be used as the default gateway for all other systems on that switch, so you’ll probably want it to be something easy to remember. If you’re using the router’s operating system as a domain member or in some other situation in which it will be able to register its IP addresses in DNS, make sure that you disable DNS registration for all adapters other than the one that’s in the primary network.

      For reference, my test configuration is as follows:
      • System name: SVRRAS
      • “External” adapter: IP: 192.168.25.254, Mask: 255.255.255.0, GW 192.168.25.1
      • “Isolated” adapter: IP: 172.16.0.1, Mask: 255.255.0.0, GW: None
      In order for this to work, you’ll need to make some adjustments to the Windows Firewall. If you want, you can just turn it off entirely. However, it is a moderately effective barrier and better than nothing. We’ll follow up on the firewall after the RRAS configuration explanation.

      Step 2: Installing and Configuring RRAS

      Once Windows is installed and you have all the necessary network adapters connected to their respective virtual switches, the next thing to do is install RRAS. This is done through the Add Roles and Features wizard, just like any other Windows Server role. Choose Remote Access on the Roles screen:

          RRAS Server Role

On the Role Services screen, check the box for Routing. You’ll be prompted to add several other items in order to fully enable routing, which will include DirectAccess and VPN (RAS) on the same screen:

          RRAS Role Services

      After this, just proceed through, accepting the suggested settings.
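If you prefer PowerShell, the same role installation should reduce to a single line (a sketch for 2012 R2):

Install-WindowsFeature -Name Routing -IncludeManagementTools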

      Once the installation is complete, a reboot is unnecessary. You just need to configure the router. Open up the Start menu, and find the Routing and Remote Access snap-in. Open that up, and you’ll find something similar to the following screenshot. Right-click on your server’s name and click Configure and Enable Routing and Remote Access.

                                Configure RRAS

On the next screen, you have two choices for our purposes. The first is Network Address Translation (NAT). This mode works like a home Internet router does. All the virtual machines behind the router will appear to have the same IP address. This is a great solution for isolation purposes. It’s also useful when you haven’t got a hardware router available and want to connect your virtual machines into another network, such as the Internet. Your second choice is to select Custom configuration. This mode allows you to build a standard router, in which the virtual machines on the private network can be reached from other virtual machines by using their IP addresses. I won’t be illustrating this method as it doesn’t do a great deal for isolation.

      On the next screen, you’ll tell the routing system which of the adapters is connected to the external network:

                            RRAS Adapter Selection

      On the next screen, you’ll choose how addressing is handled on the private networks. The first option will set up your RRAS system to perform DHCP services for them and forward their DNS requests out to the DNS server(s) you specified for the external network. If you choose the second option, you can build your own addressing services as desired. I’m just going to work with the first option for the sake of ease (and a quicker article):

                           RRAS Name Services

      Once this wizard completes, you’ll be returned to the main RRAS screen where the server should now show online (with a small upward pointing green arrow). That’s really all it takes to configure RRAS inside a Hyper-V guest.

      Step 3: Configure the Virtual Machines

      This is probably the best part: there is no necessary configuration at all. Just attach the virtual machines to the private switch and leave them in the default networking configuration. Here’s a screenshot from a Windows 7 virtual machine I connected to my test isolated switch:

          RRAS in a VM: Demo

      You’ll see that it’s using the IP information of SVRRAS’s adapter on the isolated switch for DHCP, gateway, and DNS information. As long as the firewall is down or properly set on the RRAS system, it will be able to communicate as expected.

      Windows Firewall on the RRAS System

The easiest thing to do with the Windows Firewall is to just turn it off. If this system is sitting at your perimeter, that’s a pretty bad idea. While it may not be the best firewall available, it does pretty well for a software firewall and seems to have the fewest negative side effects in that genre.

      For many, a build of this kind exposes them to a completely new configuration style, one that many hardware firewall administrators have dealt with for a very long time. In a traditional software firewall build running on a single system, you usually only have to worry about inbound and outbound traffic on a single adapter. Now, you have to worry about it on at least two adapters. This is a visualization:

       RRAS and Windows Firewall

      When configuring Windows Firewall, you have to be mindful of how this will work. In order to allow guests on the private network to access web sites on the external network, you’ll need to open port 80 INBOUND on the adapter connected to the private network. This kind of management can get tedious, but it’s very effective if you want to further isolate those guests.

What you might want to do instead is leave the firewall on for the adapter that connects to the external network but disable it for the private network. Then, your private guests will receive the default levels of protection from the router VM’s firewall. You could then turn off the firewalls in the guests, if you like. In order to do this, open up Windows Firewall with Advanced Security on the router VM (or target the MMC from a remote computer). In the center pane, click Windows Firewall Properties or, in the left pane, right-click Windows Firewall with Advanced Security and click Properties. Next to Protected network connections, click Customize. In the dialog that appears, uncheck the box that represents the network adapter on the private switch. Click OK all the way out. I’ve renamed my adapters to “Isolated” and “External” for the following screenshot:

         RRAS Firewall Adapter Selection
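The same exclusion can be scripted; “Isolated” is the renamed private-side adapter from the reference configuration above:

Set-NetFirewallProfile -All -DisabledInterfaceAliases Isolated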

      DMZs and Port Forwarding in RRAS

      You might want to selectively expose some of your guests on the private network to inbound traffic from the external network. Personally, I don’t see much value in using the DMZ mode. It’s functionally equivalent and less computationally expensive to just connect a “DMZ” virtual machine directly to the external switch and not have it go through a routing VM at all. Port forwarding (sometimes called “pincushioning”), on the other hand, does have its uses.

      Open the Routing and Remote Access console. Expand your server, then expand the IP version (IPv4 or IPv6) that you want to configure forwarding for. Click NAT. In the center pane, locate the interface that is connected to the external switch. Right-click it and click Properties.

      If you wish to configure one or more “DMZ” virtual machines, the Address Pool tab is where this is done. First, you create a pool that contains the IP address(es) that you want to map to the private VM(s). For instance, if you want to make 192.168.25.78 the IP address of a private VM, then you would enter that IP address with a subnet mask of 255.255.255.255. Once you have your range(s) configured, you use the Reservations button to map the external address(es) to the private VMs address(es).

      For port forwarding, go to the Services and Ports tab. If you check one of the existing boxes, it will present you with a prefilled dialog where all you need to enter is the IP address of the private VM for which you wish to forward traffic:

                                      RRAS Port Forwarding

In this screenshot, the On this address pool entry field is unavailable. That’s because I did not add any other IP addresses on the Address Pool tab. If you do that, then the external adapter will have multiple IPs assigned to it. If you don’t use those additional IPs for DMZ purposes, then you can use them here. The reason for doing so is that multiple private virtual machines can then have port forwarding for the same service. One use case is if you have two or more virtual machines on your private switch serving web pages and you want to make them all visible to computers on the external network.

      VLANs and RRAS

      This is a topic that comes up often. If you’ve read our networking series, then hopefully you already know how this works. VLANs are a layer 2 concept while routing is layer 3. Therefore, VLAN assignments for virtual machines on your private virtual switch will have no relation to any other VLANs anywhere else. Using VLANs on the private switch is probably not useful unless you need to isolate them from each other. If that’s the case, then you’ll need to make a distinct virtual adapter inside the routing virtual machine for each of those VLANs, or it won’t be able to communicate with the other guests on that VLAN.
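If you do go that route, tagging a router adapter for a VLAN is straightforward; the names and VLAN ID below are examples:

Set-VMNetworkAdapterVlan -VMName svrras -VMNetworkAdapterName Isolated -Access -VlanId 10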

      Easy Routing

      That’s really all there is to getting routing working through a Hyper-V virtual machine. This is a great solution for isolating virtual machines and for test labs.

      Do remember that it’s not as efficient or as robust as using a true hardware router.

      Red Hat Enterprise Virtualization 3.0 Step by Step Guide

      The Red Hat Enterprise Virtualization platform comprises various components which work seamlessly together, enabling the system administrator to install, configure and manage a virtualized environment. After reading this guide, you will be able to set up Red Hat Enterprise Virtualization as represented in the following diagram:


      Figure 1.1. Overview of Red Hat Enterprise Virtualization components

      Prerequisites

      The following requirements are typical for small- to medium-sized installations. Note that the exact
      requirements of the setup depend on the specific installation, sizing and load. Please use the following requirements as guidelines:

      Red Hat Enterprise Virtualization Manager
• Minimum - Dual core server with 4 GB RAM, 25 GB free disk space and a 1 Gbps network interface.
• Recommended - Dual socket/quad core server with 16 GB RAM, 50 GB free disk space on multiple disk spindles and a 1 Gbps network interface.
The breakdown of the server requirements is as below:
• For the Red Hat Enterprise Linux 6 operating system: minimum 1 GB RAM and 5 GB local disk space.
• For the Manager: minimum 3 GB RAM, 3 GB local disk space and 1 Gbps network controller bandwidth.
• If you wish to create an ISO domain on the Manager server, you need a minimum of 15 GB disk space.
Note: The Red Hat Enterprise Virtualization Manager setup script, rhevm-setup, supports the
en_US.UTF-8, en_US.utf8, and en_US.utf-8 locales. Ensure that you install the Red Hat
Enterprise Virtualization Manager on a system where the locale in use is one of these
supported values.

      A valid Red Hat Network subscription to the following channels:
• The Red Hat Enterprise Virtualization Manager (v.3 x86_64) channel, also referred to as rhel-x86_64-server-6-rhevm-3, which provides Red Hat Enterprise Virtualization Manager.
• The JBoss Application Platform (v 5) for 6Server x86_64 channel, also referred to as jbappplatform-5-x86_64-server-6-rpm, which provides the supported release of the application platform on which the manager runs.
• The RHEL Server Supplementary (v. 6 64-bit x86_64) channel, also referred to as rhel-x86_64-server-supplementary-6, which provides the supported version of the Java Runtime Environment (JRE).
      A client for connecting to Red Hat Enterprise Virtualization Manager.
      • Microsoft Windows (7, XP, 2003 or 2008) with Internet Explorer 7 and above
      • Microsoft .NET Framework 4
      For each Host (Red Hat Enterprise Virtualization Hypervisor or Red Hat Enterprise Linux)
      • Minimum - Dual core server, 10 GB RAM and 10 GB Storage, 1 Gbps network interface.
      • Recommended - Dual socket server, 16 GB RAM and 50 GB storage, two 1 Gbps network interfaces.
The breakdown of the server requirements is as below:
• For each host: AMD-V or Intel VT enabled, AMD64 or Intel 64 extensions, minimum 1 GB RAM, 3 GB free storage and a 1 Gbps network interface.
      • For virtual machines running on each host: minimum 1 GB RAM per virtual machine.
      • Valid Red Hat Network subscriptions for each host. You can use either Red Hat Enterprise Virtualization Hypervisor or Red Hat Enterprise Linux hosts, or both.

      For each Red Hat Enterprise Virtualization Hypervisor host:
• The Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) channel, also referred to as rhel-x86_64-server-6-rhevh.
• For each Red Hat Enterprise Linux host: The Red Hat Enterprise Virt Management Agent (v 6 x86_64) channel, also referred to as rhel-x86_64-rhev-mgmt-agent-6.
      Storage and Networking
• At least one of the supported storage types (NFS, iSCSI and FCP).
      • For NFS storage, a valid IP address and export path is required.
      • For iSCSI storage, a valid IP address and target information is required.
      • Static IP addresses for the Red Hat Enterprise Virtualization Manager server and for each host server.
      • DNS service which can resolve (forward and reverse) all the IP addresses.
      • An existing DHCP server which can allocate network addresses for the virtual machines.
      Virtual Machines
      • Installation images for creating virtual machines, depending on which operating system you wish to use.
      • Microsoft Windows XP, 7, 2003 or 2008.
      • Red Hat Enterprise Linux 3, 4, 5 or 6.
      • Valid licenses or subscription entitlements for each operating system.
      Red Hat Enterprise Virtualization User Portal
      • A Red Hat Enterprise Linux client running Mozilla Firefox 3.6 and higher or a Windows client running Internet Explorer 7 and higher.

      Install Red Hat Enterprise Virtualization

The Red Hat Enterprise Virtualization platform consists of at least one Manager and one or more hosts.

      Red Hat Enterprise Virtualization Manager provides a graphical user interface to manage the
      physical and logical resources of the Red Hat Enterprise Virtualization infrastructure. The Manager is installed on a Red Hat Enterprise Linux 6 server, and accessed from a Windows client running
      Internet Explorer.

      Red Hat Enterprise Virtualization Hypervisor runs virtual machines. A physical server running
      Red Hat Enterprise Linux can also be configured as a host for virtual machines on the Red Hat
      Enterprise Virtualization platform.

      Install Red Hat Enterprise Virtualization Manager

                                      Figure 2.1. Install Red Hat Enterprise Virtualization Manager

      The Manager is the control center of the Red Hat Enterprise Virtualization environment. It allows you to define hosts, configure data centers, add storage, define networks, create virtual machines, manage user permissions and use templates from one central location.

      The Red Hat Enterprise Virtualization Manager must be installed on a server running Red Hat
      Enterprise Linux 6, with minimum 4 GB RAM, 25 GB free disk space and 1 Gbps network interface.

      Install Red Hat Enterprise Linux 6 on a server. When prompted for the software packages to
      install, select the default Basic Server option. See the Red Hat Enterprise Linux Installation
      Guide for more details.

      Note: During installation, remember to set the fully qualified domain name (FQDN) and IP for the
      server.


      If the classpathx-jaf package has been installed, it must be removed because it conflicts with some of the components required to support JBoss in Red Hat Enterprise Virtualization Manager. Run:

      # yum remove classpathx-jaf

      If your server has not been registered with the Red Hat Network, run:

      # rhn_register

      To complete registration successfully you need to supply your Red Hat Network username and
      password. Follow the onscreen prompts to complete registration of the system.
      After you have registered your server, update all the packages on it. Run:

      # yum -y update

      Reboot your server for the updates to be applied.

      Subscribe the server to the required channels using the Red Hat Network web interface.

a. Log on to Red Hat Network (http://rhn.redhat.com/).
b. Click Systems at the top of the page.
c. Select the system to which you are adding channels from the list presented on the screen, by clicking the name of the system.
d. Click Alter Channel Subscriptions in the Subscribed Channels section of the screen.
e. Select the following channels from the list presented on the screen:
Red Hat Enterprise Virtualization Manager (v.3 x86_64)
RHEL Server Supplementary (v. 6 64-bit x86_64)
JBoss Application Platform (v 5) for 6Server x86_64 (note that this channel is listed under "Additional Services Channels for Red Hat Enterprise Linux 6 for x86_64")
f. Click the Change Subscription button to finalize the change.

      You are now ready to install the Red Hat Enterprise Virtualization Manager. Run the following
      command:

      # yum -y install rhevm

      This command will download the Red Hat Enterprise Virtualization Manager installation software
      and resolve all dependencies.

      When the packages have finished downloading, run the installer:

# rhevm-setup

Note: rhevm-setup supports the en_US.UTF-8, en_US.utf8, and en_US.utf-8 locales.
You will not be able to run this installer on a system where the locale in use is not one of
these supported values.


      The installer will take you through a series of interactive questions as listed in the following
      example. If you do not enter a value when prompted, the installer uses the default settings which
      are stated in [ ] brackets.

      Example: Red Hat Enterprise Virtualization Manager installation

Welcome to RHEV Manager setup utility
HTTP Port [8080] :
HTTPS Port [8443] :
Host fully qualified Domain Name, note that this Name should be fully
resolvable [rhevm.demo.redhat.com] :
Password for Administrator (admin@internal) :
Database password (required for secure authentication with the locally
created database) :
Confirm password :
Organization Name for the Certificate: Red Hat
The default storage type you will be using ['NFS'| 'FC'| 'ISCSI'] [NFS] :
ISCSI
Should the installer configure NFS share on this server to be used as an ISO
Domain? ['yes'| 'no'] [no] : yes
Mount point path: /data/iso
Display Name for the ISO Domain: local-iso-share
Firewall ports need to be opened.
You can let the installer configure iptables automatically overriding the current configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example iptables file found under /usr/share/rhevm/conf/iptables.example
Configure iptables ? ['yes'| 'no']: yes


      Important points to note:
• The default ports 8080 and 8443 must be available to access the manager on HTTP and HTTPS respectively.
      • If you elect to configure an NFS share it will be exported from the machine on which the manager is being installed.
      • The storage type that you select will be used to create a data center and cluster. You will then be able to attach storage to these from the Administration Portal.
      You are then presented with a summary of the configurations you have selected. Type yes to
      accept them.

      Example: Confirm Manager installation settings

RHEV Manager will be installed using the following configuration:
=================================================================
http-port: 8080
https-port: 8443
host-fqdn: rhevm.demo.redhat.com
auth-pass: ********
db-pass: ********
org-name: Red Hat
default-dc-type: ISCSI
nfs-mp: /data/iso
iso-domain-name: local-iso-share
override-iptables: yes
Proceed with the configuration listed above? (yes|no): yes


      The installation commences. The following message displays, indicating that the installation was
      successful.

      Example: Successful installation

      Installing:
      Creating JBoss Profile... [ DONE ]
      Creating CA... [ DONE ]
      Setting Database Security... [ DONE ]
      Creating Database... [ DONE ]
      Updating the Default Data Center Storage Type... [ DONE ]
      Editing JBoss Configuration... [ DONE ]
      Editing RHEV Manager Configuration... [ DONE ]
Configuring the Default ISO Domain... [ DONE ]
      Starting JBoss Service... [ DONE ]
      Configuring Firewall (iptables)... [ DONE ]


**** Installation completed successfully ******


Your Red Hat Enterprise Virtualization Manager is now up and running. You can log in to the Red Hat Enterprise Virtualization Manager's web administration portal with the username admin (the administrative user configured during installation) in the internal domain. Instructions to do so are provided at the end of this chapter.

Important: The internal domain is automatically created upon installation, however no new users can be added to this domain. To authenticate new users, you need an external directory service. Red Hat Enterprise Virtualization supports IPA and Active Directory, and provides a utility called rhevm-manage-domains to attach new directories to the system.


      Install Hosts

                             Figure 2.2. Install Red Hat Enterprise Virtualization Hosts

      After you have installed the Red Hat Enterprise Virtualization Manager, install the hosts to run your
      virtual machines. In Red Hat Enterprise Virtualization, you can use either Red Hat Enterprise
      Virtualization Hypervisor or Red Hat Enterprise Linux as hosts.



      Install Red Hat Enterprise Virtualization Hypervisor

      This document provides instructions for installing the Red Hat Enterprise Virtualization Hypervisor using a CD. For alternative methods including PXE networks or USB devices, see the Red Hat Enterprise Linux Hypervisor Deployment Guide.

      Before installing the Red Hat Enterprise Virtualization Hypervisor, you need to download the hypervisor image from the Red Hat Network and create a bootable CD with the image. This procedure can be performed on any machine running Red Hat Enterprise Linux.

      To prepare a Red Hat Enterprise Virtualization Hypervisor installation CD
• Download the latest version of the rhev-hypervisor* package from Red Hat Network. The list of hypervisor packages is located at the Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) channel.
a. Log on to Red Hat Network (http://rhn.redhat.com/).
b. Click Systems at the top of the page.
c. From the list presented on the screen, select the system on which the Red Hat Enterprise
Virtualization Manager is installed by clicking on its name.
d. Click Alter Channel Subscriptions in the Subscribed Channels section of the screen.
e. Select the Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) channel from the list presented on the screen, then click the Change Subscription button to finalize the change.


      Log in to the system on which the Red Hat Enterprise Virtualization Manager is installed. You must log in as the root user to install the rhev-hypervisor package. Run the following command:

# yum install "rhev-hypervisor*"

      The hypervisor ISO image is installed into the /usr/share/rhev-hypervisor/ directory.

      Insert a blank CD into your CD writer. Use the cdrecord utility to burn the hypervisor ISO image
      onto your disc. Run:


      # cdrecord dev=/dev/cdrw /usr/share/rhev-hypervisor/rhev-hypervisor.iso


You have created a Red Hat Enterprise Virtualization Hypervisor installation CD; now you can use it to boot the machine designated as your hypervisor host. For this guide you will use the interactive installation, where you are prompted to configure your settings in a graphical interface. Use the following keys to navigate around the installation screen:

      Menu Navigation Keys

Use the Up and Down arrow keys to navigate between selections. Your selections are highlighted in white.
The Tab key allows you to move between fields.
Use the Spacebar to tick checkboxes, represented by [ ] brackets. A marked checkbox displays
with an asterisk (*).
To proceed with the selected configurations, press the Enter key.

      To configure Red Hat Enterprise Virtualization Hypervisor installation settings

Insert the Red Hat Enterprise Virtualization Hypervisor 6.2-3.0 installation CD into your CD-ROM
drive and reboot the machine. When the boot splash screen displays, press the Tab key and
select Boot to boot from the hypervisor installation media. Press Enter.

      On the installation confirmation screen, select Install RHEV Hypervisor and press Enter.

      The installer automatically detects the drives attached to the system. The selected disk for
      booting the hypervisor is highlighted in white. Ensure that the local disk is highlighted, otherwise
      use the arrow keys to select the correct disk. Select Continue and press Enter.

      You are prompted to confirm your selection of the local drive, which is marked with an asterisk.
      Select Continue and press Enter.

      Enter a password for local console access and confirm it. Select Install and press Enter. The
      Red Hat Enterprise Virtualization Hypervisor partitions the local drive, then commences
      installation.

      Once installation is complete, a dialog prompts you to Reboot the hypervisor. Press Enter to
      confirm. Remove the installation disc.

After the hypervisor has rebooted, you will be taken to a login shell. Log in as the admin user with
the password you provided during installation to enter the Red Hat Enterprise Virtualization
Hypervisor management console.

      On the hypervisor management console, there are eight tabs on the left. Press the Up and Down
      keys to navigate between the tabs and Enter to access them.

      Select the Network tab. Configure the following options:

HostName: Enter the hostname in the format of hostname.domain.example.com.
DNS Server: Enter the Domain Name Server address in the format of 192.168.0.254. You can use up to two DNS servers.
NTP Server: Enter the Network Time Protocol server address in the format of rhel.pool.ntp.org. This synchronizes the hypervisor's system clock with that of the manager. You can use up to two NTP servers. Select Apply and press Enter to save your network settings.

The installer automatically detects the available network interface devices to be used as the management network. Select the device and press Enter to access the interface configuration menu. Under IPv4 Settings, tick either the DHCP or Static checkbox.
If you are using static IPv4 network configuration, fill in the IP Address, Netmask and Gateway fields.

To confirm your network settings, select OK and press Enter.

      Select the RHEV-M tab. Configure the following options:

Management Server: Enter the Red Hat Enterprise Virtualization Manager domain name in the format of rhevm.demo.redhat.com.
Management Server Port: Enter the management server port number. The default is 8443.
Connect to the RHEV Manager and Validate Certificate: Tick this checkbox if you wish to verify the RHEV-M security certificate.
Set RHEV-M Admin Password: This field allows you to specify the root password for the hypervisor, and enable SSH password authentication from the Red Hat Enterprise Virtualization Manager.

      Select Apply and press Enter. A dialog displays, asking you to connect the hypervisor to the Red Hat Enterprise Virtualization Manager and validate its certificate. Select Approve and press Enter. A message will display notifying you that the manager configuration has been successfully updated.

      Under the Red Hat Network tab, you can register the host with the Red Hat Network.
      This enables the host to run Red Hat Enterprise Linux virtual machines with proper RHN entitlements. Configure the following settings:

      Enter your Red Hat Network credentials in the Login and Password fields.
To select the method by which the hypervisor receives updates, tick either the RHN or Satellite checkbox, and fill in the RHN URL and RHN CA fields as appropriate.

      To confirm your RHN settings, select Apply and press Enter.
Accept all other default settings. For information on configuring security, logging, kdump and
remote storage, see the Red Hat Enterprise Virtualization documentation.

      Finally, select the Status tab. Select Restart and press Enter to reboot the host and
      apply all changes.

      You have now successfully installed the Red Hat Enterprise Virtualization Hypervisor. Repeat this
      procedure if you wish to use more hypervisors. The following sections will provide instructions on how to approve the hypervisors for use with the Red Hat Enterprise Virtualization Manager.

      Install Red Hat Enterprise Linux Host

      You now know how to install a Red Hat Enterprise Virtualization Hypervisor. In addition to hypervisor hosts, you can also reconfigure servers which are running Red Hat Enterprise Linux to be used as virtual machine hosts.

      To install a Red Hat Enterprise Linux 6 host

      On the machine designated as your Red Hat Enterprise Linux host, install Red Hat Enterprise
Linux 6.2. Select only the Base package group during installation.
      If your server has not been registered with the Red Hat Network, run the rhn_register command as root to register it. To complete registration successfully you will need to supply your Red Hat Network username and password. Follow the onscreen prompts to complete registration of the system.

      # rhn_register

      Subscribe the server to the required channels using the Red Hat Network web interface.

a. Log on to Red Hat Network (http://rhn.redhat.com/).
b. Click Systems at the top of the page.
c. Select the system to which you are adding channels from the list presented on the screen,
by clicking the name of the system.
d. Click Alter Channel Subscriptions in the Subscribed Channels section of the
screen.
e. Select the Red Hat Enterprise Virt Management Agent (v 6 x86_64) channel
from the list presented on the screen, then click the Change Subscription button to
finalize the change.
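
As an alternative to the web interface, the rhn-channel utility can subscribe a registered system from the command line. A minimal sketch, assuming the channel label given in the prerequisites (you will be prompted for your Red Hat Network credentials):

# rhn-channel --add --channel=rhel-x86_64-rhev-mgmt-agent-6

You can verify the subscription afterwards with rhn-channel --list.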

      Red Hat Enterprise Virtualization platform uses a number of network ports for management and
      other virtualization features. Adjust your Red Hat Enterprise Linux host's firewall settings to allow
      access to the required ports by configuring iptables rules. Modify the /etc/sysconfig/iptables file so it resembles the following example:

:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [10765:598664]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 16514 -j ACCEPT
-A INPUT -p tcp --dport 54321 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT

Ensure that the iptables service is configured to start on boot and has been restarted, or started for the first time if it was not already running. Run the following commands:

      # chkconfig iptables on
      # service iptables restart

      You have now successfully installed a Red Hat Enterprise Linux host. As before, repeat this procedure if you wish to use more Linux hosts. Before you can start running virtual machines on your host, you have to manually add it to the Red Hat Enterprise Virtualization Manager via the administration portal, which you will access in the next step. 

                  Figure 2.3. Connect to the Manager administration portal

      Now that you have installed the Red Hat Enterprise Virtualization Manager and hosts, you can log in to the Manager administration portal to start configuring your virtualization environment. The web-based administration portal can be accessed using a Windows client running Internet Explorer.
       
      Before logging in, install .NET Framework 4 and modify the default security settings on the machine used to access the web administration portal. The example below is applicable for Windows 2008.

      To configure Windows client to access the administration portal

      To install .NET Framework 4, download it from 
      http://www.microsoft.com/download/en/details.aspx?id=17718. Run this executable as a user with administration access to the system. 

Next, disable Internet Explorer Enhanced Security Configuration. Click Start → Administrative
Tools → Server Manager. On the Security Information pane in the Server Manager window, click Configure IE ESC. Select Off for Administrators and Users to disable the security configuration. Click OK.

To add the administration portal to the browser's list of trusted sites, open a browser and click on
Tools → Internet Options. Click on the Security tab.

Select Trusted Sites. Click Sites to display the Trusted Sites dialog. Enter the URL for your administration portal in the Add this website to the zone textbox. Click Add, then Close.

      Click the Custom Level... button. Locate the XAML browser applications item in the list, ensure that it is set to Enable, then click OK. 

      Restart Internet Explorer to access the administration portal.

      Log In to Administration Portal

      Now that the prerequisites have been resolved, you can log in to the Red Hat Enterprise Virtualization
      Manager administration portal. Ensure that you have the administrator password configured during
      installation as instructed in Example, “Red Hat Enterprise Virtualization Manager installation”.

      To connect to Red Hat Enterprise Virtualization web management portal

Open a browser and navigate to https://domain.example.com:8443/RHEVManager.
Substitute domain.example.com with the URL provided during installation.

      If this is your first time connecting to the administration portal, Red Hat Enterprise Virtualization
      Manager will issue security certificates for your browser. Click the link labelled this certificate to trust the ca.cer certificate. A pop-up displays, click Open to launch the Certificate dialog. Click Install Certificate and select to place the certificate in Trusted Root Certification Authorities store.

      Back on the browser screen, click the link labelled here and follow the prompts to install the RHEV-GUI-CertificateInstaller executable. A pop-up displays again, this time click Run.
      Note that the actual certificate installation is preceded by an ActiveX installation.
      When complete, a new link labelled here appears. Click on it to reload the administration portal.

The portal login screen displays. Enter admin as your User Name, and enter the Password
      that you provided during installation. Ensure that your domain is set to Internal. Click Login.

      You have now successfully logged in to the Red Hat Enterprise Virtualization web administration portal. Here, you can configure and manage all your virtual resources. The functions of the Red Hat Enterprise Virtualization Manager graphical user interface are described in the following figure and list:

         Figure 2.4 . Administration Portal Features

Header: This bar contains the name of the logged-in user, the sign out button, and the option to
configure user roles.

Navigation Pane: This pane allows you to navigate between the Tree, Bookmarks and Tags tabs. In the Tree tab, tree mode allows you to see the entire system tree and provides a visual representation of your virtualization environment's architecture.

      Resources Tabs: These tabs allow you to access the resources of Red Hat Enterprise Virtualization. You should already have a Default Data Center, a Default Cluster, a Host waiting to be approved, and available Storage waiting to be attached to the data center.

      Results List: When you select a tab, this list displays the available resources. You can perform
      a task on an individual item or multiple items by selecting the item(s) and then clicking the relevant
      action button. If an action is not possible, the button is disabled.

      Details Pane: When you select a resource, this pane displays its details in several subtabs.
      These subtabs also contain action buttons which you can use to make changes to the selected
      resource.

Once you are familiar with the layout of the administration portal, you can start configuring your virtual environment.

      Configure Red Hat Enterprise Virtualization

      Now that you have logged in to the administration portal, configure your Red Hat Enterprise Virtualization environment by defining the data center, host cluster, networks and storage. Even though this guide makes use of the default resources configured during installation, if you are setting up a Red Hat Enterprise Virtualization environment with completely new components, you should perform the configuration procedure in the sequence given here.

      Configure Data Centers

                                      Figure 3.1. Configure Data Center

A data center is a logical entity that defines the set of physical and logical resources used in a managed virtual environment. Think of it as a container which houses clusters of hosts, virtual machines, storage and networks.

      By default, Red Hat Enterprise Virtualization creates a data center at installation. Its type is configured from the installation script. To access it, navigate to the Tree pane, click Expand All, and select the Default data center. On the Data Centers tab, the Default data center displays.

      Figure 3.2. Data Centers Tab

      The Default data center is used for this document, however if you wish to create a new data center
      see the Red Hat Enterprise Virtualization Administration Guide.

      Configure Cluster

                                     Figure 3.3. Populate Cluster with Hosts

A cluster is a set of physical hosts that are treated as a resource pool for a set of virtual machines.
Hosts in a cluster share the same network infrastructure, the same storage and the same type of CPU.
      They constitute a migration domain within which virtual machines can be moved from host to host.
      By default, Red Hat Enterprise Virtualization creates a cluster at installation. 

To access it, navigate to the Tree pane, click Expand All and select the Default cluster. On the Clusters tab, the Default cluster displays.

         Figure 3.4 . Clusters Tab

      For this document, the Red Hat Enterprise Virtualization Hypervisor and Red Hat Enterprise Linux hosts will be attached to the Default host cluster. If you wish to create new clusters, or live migrate virtual machines between hosts in a cluster, see the Red Hat Enterprise Virtualization Evaluation Guide.

      Configure Networking

                                     Figure 3.5. Configure Networking

At installation, Red Hat Enterprise Virtualization defines a Management network for the default data center. This network is used for communication between the manager and the host. New logical networks - for example for guest data, storage or display - can be added to enhance network speed and performance. All networks used by hosts and clusters must be added to the data center they belong to.

      To access the Management network, click on the Clusters tab and select the default cluster. Click the Logical Networks tab in the Details pane. The rhevm network displays.

          Figure 3.6. Logical Networks Tab

      The rhevm Management network is used for this document, however if you wish to create new logical networks see the Red Hat Enterprise Virtualization Administration Guide.

      Configure Hosts

                                      Figure 3.7. Configure Hosts

      You have already installed your Red Hat Enterprise Virtualization Hypervisor and Red Hat Enterprise
      Linux hosts, but before they can be used, they have to be added to the Manager. The Red Hat
      Enterprise Virtualization Hypervisor is specifically designed for the Red Hat Enterprise Virtualization
platform, therefore it only needs a simple click of approval. Conversely, Red Hat Enterprise Linux is a general purpose operating system, therefore reconfiguring it as a host requires additional
      configuration.

      Approve Red Hat Enterprise Virtualization Hypervisor Host

      The Hypervisor you installed in Section “Install Red Hat Enterprise Virtualization Hypervisor” is automatically registered with the Red Hat Enterprise Virtualization platform. It displays in the Red Hat Enterprise Virtualization Manager, and needs to be approved for use.

      To set up a Red Hat Enterprise Virtualization Hypervisor host
      On the Tree pane, click Expand All and select Hosts under the Default cluster. On the Hosts tab, select the name of your newly installed hypervisor.

         Figure 3.8. Red Hat Enterprise Virtualization Hypervisor pending approval

      Click the Approve button. The Edit and Approve Host dialog displays. Accept the defaults
      or make changes as necessary, then click OK.

                     Figure 3.9. Approve Red Hat Enterprise Virtualization Hypervisor

      The host status will change from Non Operational to Up.

      Attach Red Hat Enterprise Linux Host

      In contrast to the hypervisor host, the Red Hat Enterprise Linux host you installed in Section,
      “Install Red Hat Enterprise Linux Host” is not automatically detected. It has to be manually attached to the Red Hat Enterprise Virtualization platform before it can be used.

      To attach a Red Hat Enterprise Linux host
      On the Tree pane, click Expand All and select Hosts under the Default cluster. On the Hosts
      tab, click New.
      The New Host dialog displays.

                   Figure 3.10. Attach Red Hat Enterprise Linux Host

      Enter the details in the following fields:
      • Data Center: the data center to which the host belongs. Select the Default data center.
      • Host Cluster: the cluster to which the host belongs. Select the Default cluster.
      • Name: a descriptive name for the host.
      • Address: the IP address, or resolvable hostname of the host, which was provided during installation.
      • Root Password: the password of the designated host; used during installation of the host.
      • Configure iptables rules: This checkbox allows you to override the firewall settings on the host with the default rules for Red Hat Enterprise Virtualization.
If you wish to configure this host for Out of Band (OOB) power management, select the Power Management tab. Tick the Enable Power Management checkbox and provide the required information in the following fields:
      • Address: The address of the host.
      • User Name: A valid user name for the OOB management.
      • Password: A valid, robust password for the OOB management.
      • Type: The type of OOB management device. Select the appropriate device from the drop down list.
alom: Sun Advanced Lights Out Manager
apc: American Power Conversion MasterSwitch network power switch
bladecenter: IBM BladeCenter Remote Supervisor Adapter
drac5: Dell Remote Access Controller for Dell computers
eps: ePowerSwitch 8M+ network power switch
ilo: HP Integrated Lights Out standard
ilo3: HP Integrated Lights Out 3 standard
ipmilan: Intelligent Platform Management Interface
rsa: IBM Remote Supervisor Adapter
rsb: Fujitsu-Siemens RSB management interface
wti: Western Telematic Inc Network PowerSwitch
cisco_ucs: Cisco Unified Computing System Integrated Management Controller

      • Options: Extra command line options for the fence agent. Detailed documentation of the options available is provided in the man page for each fence agent.

      Click the Test button to test the operation of the OOB management solution.
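
The same check can also be run from a host's command line with the matching fence agent. A hypothetical example using fence_ipmilan, assuming an IPMI controller at 192.168.0.20 with the credentials shown (substitute your own):

# fence_ipmilan -a 192.168.0.20 -l admin -p password -o status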

      If you do not wish to configure power management, leave the Enable Power Management checkbox unmarked.

Click OK. If you have not configured power management, a pop-up window prompts you to confirm if you wish to proceed without power management. Select OK to continue.

The new host displays in the list of hosts with a status of Installing. Once installation is complete, the status will update to Reboot and then Awaiting. When the host is ready for use, its status changes to Up.

      You have now successfully configured your hosts to run virtual machines. The next step is to prepare data storage domains to house virtual machine disk images.

      Configure Storage

                                      Figure 3.11. Configure Storage

      After configuring your logical networks, you need to add storage to your data center.

      Red Hat Enterprise Virtualization uses a centralized shared storage system for virtual machine disk
      images and snapshots. Storage can be implemented using Network File System (NFS), Internet Small
Computer System Interface (iSCSI) or Fibre Channel Protocol (FCP). Storage definition, type and function, are encapsulated in a logical entity called a Storage Domain. Multiple storage domains are
      supported.

For this guide you will use two types of storage domains. The first is an NFS share for ISO images of installation media. You have already created this ISO domain during the Red Hat Enterprise Virtualization Manager installation.

The second storage domain will be used to hold virtual machine disk images. For this domain, you need at least one of the supported storage types. You have already set a default storage type during
      installation as described in Section, “Install Red Hat Enterprise Virtualization Manager”. Ensure that you use the same type when creating your data domain.

      Select your next step by checking the storage type you should use:
      1. Navigate to the Tree pane and click the Expand All button. Under System, click Default. On
        the results list, the Default data center displays.
2. On the results list, the Storage Type column displays the type you should add.
      3. Now that you have verified the storage type, create the storage domain:

      For NFS storage, refer to Section, “Create an NFS Data Domain”.
      For iSCSI storage, refer to Section, “Create an iSCSI Data Domain”.
      For FCP storage, refer to Section, “Create an FCP Data Domain”.

      Create an NFS Data Domain

      Because you have selected NFS as your default storage type during the Manager installation, you will
      now create an NFS storage domain. An NFS type storage domain is a mounted NFS share that is
      attached to a data center and used to provide storage for virtual machine disk images.

      To add NFS storage:

      Navigate to the Tree pane and click the Expand All button. Under System, select the Default
      data center and click on Storage. The available storage domains display on the results list. Click
      New Domain.

      The New Storage dialog box displays.

          Figure 3.12. Add New Storage

      Configure the following options:
      • Name: Enter a suitably descriptive name.
      • Data Center: The Default data center is already pre-selected.
• Domain Function / Storage Type: In the drop down menu, select Data → NFS. The storage domain types not compatible with the Default data center are grayed out. After you select your domain type, the Export Path field appears.
      • Use Host: Select any of the hosts from the drop down menu. Only hosts which belong in the pre-selected data center will display in this list.
• Export path: Enter the IP address or a resolvable hostname of the chosen host, followed by the export path. The export path should be in the format of 192.168.0.10:/data or domain.example.com:/data. The export itself must already exist on the storage server; see the sketch after this procedure.
      Click OK. The new NFS data domain displays on the Storage tab. It will remain with a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
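
A minimal sketch of preparing such an export on the storage server, assuming a directory named /data and the vdsm user and group ID (36) that Red Hat Enterprise Virtualization expects to own the export; adjust the path and export options to your environment:

# mkdir -p /data
# chown 36:36 /data
# echo '/data *(rw)' >> /etc/exports
# service nfs restart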

      You have created an NFS storage domain. Now, you need to attach an ISO domain to the data center
      and upload installation images so you can use them to create virtual machines. Proceed to Section, “Attach and Populate ISO domain”.

      Create an iSCSI Data Domain

      Because you have selected iSCSI as your default storage type during the Manager installation, you will now create an iSCSI storage domain. Red Hat Enterprise Virtualization platform supports iSCSI storage domains spanning multiple pre-defined Logical Unit Numbers (LUNs).

      To add iSCSI storage:

On the side pane, select the Tree tab. On System, click the + icon to display the available data
      centers.
      Double click on the Default data center and click on Storage. The available storage domains
      display on the results list. Click New Domain.
The New Domain dialog box displays.

          Figure 3.13. Add iSCSI Storage

      Configure the following options:
      • Name: Enter a suitably descriptive name.
      • Data Center: The Default data center is already pre-selected.
      • Domain Function / Storage Type: In the drop down menu, select Data → iSCSI.
• The storage domain types which are not compatible with the Default data center are grayed out. After you select your domain type, the Use Host and Discover Targets fields display.
      • Use host: Select any of the hosts from the drop down menu. Only hosts which belong in this data center will display in this list.
      To connect to the iSCSI target, click the Discover Targets bar. This expands the menu to
      display further connection information fields.

        Figure 3.14 . Attach LUNs to iSCSI domain

      Enter the required information:
      • Address: Enter the address of the iSCSI target.
      • Port: Select the port to connect to. The default is 3260.
      • User Authentication: If required, enter the username and password.
      Click the Discover button to find the targets. The iSCSI targets display in the results list with a
      Login button for each target.

Click Login to display the list of existing LUNs. Tick the Add LUN checkbox to use the selected
LUN as the iSCSI data domain.

Click OK. The new iSCSI data domain displays on the Storage tab. It will remain with a Locked
      status while it is being prepared for use. When ready, it is automatically attached to the data
      center.

      You have created an iSCSI storage domain. Now, you need to attach an ISO domain to the data center
and upload installation images so you can use them to create virtual machines. Proceed to Section, “Attach and Populate ISO domain”.

      Create an FCP Data Domain

      Because you have selected FCP as your default storage type during the Manager installation, you will
      now create an FCP storage domain. Red Hat Enterprise Virtualization platform supports FCP storage domains spanning multiple pre-defined Logical Unit Numbers (LUNs).

      To add FCP storage:

On the side pane, select the Tree tab. On System, click the + icon to display the available data
      centers.

      Double click on the Default data center and click on Storage. The available storage domains
      display on the results list. Click New Domain.

The New Domain dialog box displays.

         Figure 3.15. Add FCP Storage

      Configure the following options:
      • Name: Enter a suitably descriptive name.
      • Data Center: The Default data center is already pre-selected.
      • Domain Function / Storage Type: Select FCP.
      • Use Host: Select the IP address of either the hypervisor or Red Hat Enterprise Linux host.
• The list of existing LUNs displays. On the selected LUN, tick the Add LUN checkbox to use it as the FCP data domain.
      Click OK. The new FCP data domain displays on the Storage tab. It will remain with a Locked
      status while it is being prepared for use. When ready, it is automatically attached to the data
      center.

      You have created an FCP storage domain. Now, you need to attach an ISO domain to the data center
      and upload installation images so you can use them to create virtual machines. Proceed to Section, “Attach and Populate ISO domain”.

      Attach and Populate ISO domain

      You have defined your first storage domain to store virtual guest data, now it is time to configure your second storage domain, which will be used to store installation images for creating virtual machines. You have already created a local ISO domain during the installation of the Red Hat Enterprise Virtualization Manager. To use this ISO domain, attach it to a data center.

      To attach the ISO domain
      Navigate to the Tree pane and click the Expand All button. Click Default. On the results list,
      the Default data center displays.

      On the details pane, select the Storage tab and click the Attach ISO button.

      The Attach ISO Library dialog appears with the available ISO domain. Select it and click OK.

         Figure 3.16. Attach ISO Library

The ISO domain appears in the results list of the Storage tab. It displays with the Locked status
as the domain is being validated, then changes to Inactive.

      Select the ISO domain and click the Activate button. The status changes to Locked and then to
      Active.

Media images (CD-ROM or DVD-ROM in the form of ISO images) must be available in the ISO repository for virtual machines to use. To upload these images, Red Hat Enterprise Virtualization provides a utility that copies the images and sets the appropriate permissions on the file. The file provided to the utility and the ISO share have to be accessible from the Red Hat Enterprise Virtualization Manager.

      Log in to the Red Hat Enterprise Virtualization Manager server console to upload images to the ISO
      domain.

      To upload ISO images

      Create or acquire the appropriate ISO images from boot media. Ensure the path to these images
      is accessible from the Red Hat Enterprise Virtualization Manager server.

      The next step is to upload these files. First, determine the available ISO domains by running:

# rhevm-iso-uploader list

      You will be prompted to provide the admin user password which you use to connect to the
      administration portal. The tool lists the name of the ISO domain that you attached in the previous
      section.

ISO Storage Domain List:
      local-iso-share

      Now you have all the information required to upload the required files. On the Manager console,
      copy your installation images to the ISO domain. For your images, run:

# rhevm-iso-uploader upload -i local-iso-share [file1] [file2] ... [fileN]

      You will be prompted for the admin user password again, provide it and press Enter.
      Note that the uploading process can be time consuming, depending on your storage performance.
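
For example, to upload a single Red Hat Enterprise Linux installation image (the filename here is hypothetical):

# rhevm-iso-uploader upload -i local-iso-share rhel-server-6.2-x86_64-dvd.iso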

      After the images have been uploaded, check that they are available for use in the Manager
      administration portal.
      • Navigate to the Tree and click the Expand All button.
      • Under Storage, click on the name of the ISO domain. It displays in the results list. Click on it to display its details pane.
• On the details pane, select the Images tab. The list of available images should be populated with the files which you have uploaded. In addition, the RHEV-toolsSetup.iso and virtio-win.vfd images should have been automatically uploaded during installation.
          Figure 3.17. Uploaded ISO images

      Now that you have successfully prepared the ISO domain for use, you are ready to start creating virtual machines.

      Manage Virtual Machines

      The final stage of setting up Red Hat Enterprise Virtualization is the virtual machine lifecycle - spanning the creation, deployment and maintenance of virtual machines; using templates; and configuring user permissions. This chapter will also show you how to log in to the user portal and connect to virtual machines.

      Create Virtual Machines
                         Figure 4 .1. Create Virtual Machines

      On Red Hat Enterprise Virtualization, you can create virtual machines from an existing template, as a
      clone, or from scratch. Once created, virtual machines can be booted using ISO images, a network boot (PXE) server, or a hard disk. This document provides instructions for creating a virtual machine using an ISO image.

In your current configuration, you should have at least one host available for running virtual machines, and you should have uploaded the required installation images to your ISO domain. This section guides you through the creation of a Red Hat Enterprise Linux 6 virtual server. You will perform a normal attended installation using a virtual DVD.

      To create a Red Hat Enterprise Linux server

      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, click New Server.

          Figure 4 .2. Create New Linux Virtual Machine

You only need to fill in the Name field and select Red Hat Enterprise Linux 6.x as your
Operating System. You may alter other settings but in this example we will retain the defaults.
Click OK to create the virtual machine.

      A New Virtual Machine - Guide Me window opens. This allows you to add networks and
      storage disks to the virtual machine.

                                       Figure 4 .3. Create Virtual Machines

      Click Configure Network Interfaces to define networks for your virtual machine. The parameters in the following figure are recommended, but can be edited as necessary. When you have configured your required settings, click OK.

                                                   Figure 4 .4 . New Network Interface configurations

      You are returned to the Guide Me window. This time, click Configure Virtual Disks to add
      storage to the virtual machine. The parameters in the following figure are recommended, but can
      be edited as necessary. When you have configured your required settings, click OK.

                                                 Figure 4 .5. New Virtual Disk configurations

      Close the Guide Me window by clicking Configure Later. Your new RHEL 6 virtual machine will
      display in the Virtual Machines tab.

      You have now created your first Red Hat Enterprise Linux virtual machine. Before you can use your
      virtual machine, install an operating system on it.

To install the Red Hat Enterprise Linux guest operating system

      Right click the virtual machine and select Run Once. Configure the following options:

                                                Figure 4 .6. Run Red Hat Enterprise Linux Virtual Machine

      • Attach CD: Red Hat Enterprise Linux 6
      • Boot Sequence: CD-ROM
      • Display protocol: SPICE
      Retain the default settings for the other options and click OK to start the virtual machine.

Select the virtual machine and click the Console icon. As this is your first time connecting to
the virtual machine, allow the installation of the SPICE ActiveX control and the SPICE client.

      After the SPICE plugins have been installed, select the virtual machine and click the Console icon
      again. This displays a window to the virtual machine, where you will be prompted to begin installing the operating system.

      After the installation has completed, shut down the virtual machine and reboot from the hard drive.

      You can now connect to your Red Hat Enterprise Linux virtual machine and start using it.

      Create a Windows Virtual Machine

      You now know how to create a Red Hat Enterprise Linux virtual machine from scratch. The procedure of creating a Windows virtual machine is similar, except that it requires additional virtio drivers. This example uses Windows 7, but you can also use other Windows operating systems. You will perform a normal attended installation using a virtual DVD.

      To create a Windows desktop

      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, click New Desktop.

         Figure 4 .7. Create New Windows Virtual Machine

You only need to fill in the Name field and select Windows 7 as your Operating System. You
may alter other settings but in this example we will retain the defaults. Click OK to create the virtual
machine.

      A New Virtual Machine - Guide Me window opens. This allows you to define networks for
      the virtual machine. Click Configure Network Interfaces. See Figure 4.4, “New Network
      Interface configurations” for details.

      You are returned to the Guide Me window. This time, click Configure Virtual Disks to add
      storage to the virtual machine. See Figure 4.5, “New Virtual Disk configurations” for details.

      Close the Guide Me windows. Your new Windows 7 virtual machine will display in the Virtual
      Machines tab.

To install the Windows guest operating system

      Right click the virtual machine and select Run Once. The Run Once dialog displays as in
      Figure 4.6, “Run Red Hat Enterprise Linux Virtual Machine”. Configure the following options:
      • Attach Floppy: virtio-win
      • Attach CD: Windows 7
      • Boot sequence: CD-ROM
      • Display protocol: SPICE
      Retain the default settings for the other options and click OK to start the virtual machine.

      Select the virtual machine and click the Console icon. This displays a window to the virtual
      machine, where you will be prompted to begin installing the operating system.

      Accept the default settings and enter the required information as necessary. The only change you must make is to manually install the VirtIO drivers from the virtual floppy disk (vfd) image. To do so, select the Custom (advanced) installation option and click Load Driver. Press Ctrl and select:
      • Red Hat VirtIO Ethernet Adapter
      • Red Hat VirtIO SCSI Controller
      The installation process commences, and the system will reboot itself several times.

Back on the administration portal, when the virtual machine's status changes back to Up, right click on it and select Change CD. From the list of images, select RHEV-toolsSetup to attach the Guest Tools ISO which provides features including USB redirection and SPICE display optimization.

      Click Console and log in to the virtual machine. Locate the CD drive to access the contents of the Guest Tools ISO, and launch the RHEV-toolsSetup executable. After the tools have been installed, you will be prompted to restart the machine for changes to be applied.

      You can now connect to your Windows virtual machine and start using it.

      Using Templates

                          Figure 4 .8. Create Templates

      Now that you know how to create a virtual machine, you can save its settings into a template. This template will retain the original virtual machine's configurations, including virtual disk and network interface settings, operating systems and applications. You can use this template to rapidly create replicas of the original virtual machine.

      Create a Red Hat Enterprise Linux Template

      To make a Red Hat Enterprise Linux virtual machine template, use the virtual machine you created in Section, “Create a Red Hat Enterprise Linux Virtual Machine” as a basis. Before it can be used, it has to be sealed. This ensures that machine-specific settings are not propagated through the template.

      To prepare a Red Hat Enterprise Linux virtual machine for use as a template

      Connect to the Red Hat Enterprise Linux virtual machine to be used as a template. Flag the
      system for re-configuration by running the following command as root:

      # touch /.unconfigured

      Remove ssh host keys. Run:

      # rm -rf /etc/ssh/ssh_host_*

      Shut down the virtual machine. Run:

      # poweroff

      The virtual machine has now been sealed, and is ready to be used as a template for Linux virtual
      machines.

      To create a template from a Red Hat Enterprise Linux virtual machine

      In the administration portal, click the Virtual Machines tab. Select the sealed Red Hat
      Enterprise Linux 6 virtual machine. Ensure that it has a status of Down.

Click Make Template. The New Virtual Machine Template displays.

         Figure 4 .9. Make new virtual machine template

      Enter information into the following fields:
      • Name: Name of the new template
      • Description: Description of the new template
      • Host Cluster: The Host Cluster for the virtual machines using this template.
      • Make Private: If you tick this checkbox, the template will only be available to the template's creator and the administrative user. Nobody else can use this template unless they are given permissions by the existing permitted users.
      Click OK. The virtual machine displays a status of "Image Locked" while the template is being
      created. The template is created and added to the Templates tab. During this time, the action
      buttons for the template remain disabled. Once created, the action buttons are enabled and the
      template is ready for use.

      Clone a Red Hat Enterprise Linux Virtual Machine

      In the previous section, you created a Red Hat Enterprise Linux template complete with pre-configured storage, networking and operating system settings. Now, you will use this template to deploy a preinstalled virtual machine.

      To clone a Red Hat Enterprise Linux virtual machine from a template

      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, click New Server.

         Figure 4 .10. Create virtual machine based on Linux template

      On the General tab, select the existing Linux template from the Based on Template
      list.
Enter a suitable Name and appropriate Description, then accept the default values
inherited from the template in the rest of the fields. You can change them if needed.
      Click the Resource Allocation tab. On the Provisioning field, click the drop down
      menu and select the Clone option.

                Figure 4 .11. Set the provisioning to Clone

      Retain all other default settings and click OK to create the virtual machine. The virtual machine
      displays in the Virtual Machines list.

      Create a Windows Template

      To make a Windows virtual machine template, use the virtual machine you created in Section,
      “Create a Windows Virtual Machine” as a basis.

      Before a template for Windows virtual machines can be created, it has to be sealed with sysprep. This ensures that machine-specific settings are not propagated through the template.

      Note that the procedure below is applicable for creating Windows 7 and Windows 2008 R2 templates.

      To seal a Windows virtual machine with sysprep

      In the Windows virtual machine to be used as a template, open a command line terminal and type
      regedit.

      The Registry Editor window displays. On the left pane, expand HKEY_LOCAL_MACHINE →
      SYSTEM → SETUP.

      On the main window, right click to add a new string value using New → String Value. Right click
      on the file and select Modify. When the Edit String dialog box displays, enter the following
      information in the provided text boxes:
      • Value name: UnattendFile
      • Value data: a:\sysprep.inf
Launch sysprep from C:\Windows\System32\sysprep\sysprep.exe.
• Under System Cleanup Action, select Enter System Out-of-Box-Experience (OOBE).
• Tick the Generalize checkbox if you need to change the computer's system identification number (SID).
• Under Shutdown Options, select Shutdown.

      Click OK. The virtual machine will now go through the sealing process and shut down
      automatically.
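
The same seal can be applied non-interactively from a command prompt inside the guest. A sketch, assuming you want the OOBE, Generalize and Shutdown choices described above; the reg add step mirrors the UnattendFile value set earlier in the Registry Editor:

C:\> reg add "HKEY_LOCAL_MACHINE\SYSTEM\Setup" /v UnattendFile /t REG_SZ /d a:\sysprep.inf
C:\> C:\Windows\System32\sysprep\sysprep.exe /oobe /generalize /shutdown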

      To create a template from an existing Windows machine

      In the administration portal, click the Virtual Machines tab. Select the sealed Windows 7
      virtual machine. Ensure that it has a status of Down and click Make Template.

      The New Virtual Machine Template displays. Enter information into the following fields:
      • Name: Name of the new template
      • Description: Description of the new template
      • Host Cluster: The Host Cluster for the virtual machines using this template.
      • Make Public: Check this box to allow all users to access this template.
Click OK. In the Templates tab, the template displays the "Image Locked" status icon while it is
being created. During this time, the action buttons for the template remain disabled. Once created,
the action buttons are enabled and the template is ready for use.

      You can now create new Windows machines using this template.

      Create a Windows Virtual Machine from a Template

      This section describes how to create a Windows 7 virtual machine using the template created in
      Section, “Create a Windows Template”.

      To create a Windows virtual machine from a template
      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, click New Desktop.
      • Select the existing Windows template from the Based on Template list.
• Enter a suitable Name and appropriate Description, and accept the default values inherited from the template in the rest of the fields. You can change them if needed.
Retain all other default settings and click OK to create the virtual machine. The virtual machine
displays in the Virtual Machines list with a status of "Image Locked" until the virtual disk is created.
      The virtual disk and networking settings are inherited from the template, and do not have to be
      reconfigured.

Click the Run icon to turn it on. This time, the Run Once steps are not required as the
      operating system has already been installed onto the virtual machine hard drive. Click the green
      Console button to connect to the virtual machine.

      You have now learned how to create Red Hat Enterprise Linux and Windows virtual machines with and without templates. Next, you will learn how to access these virtual machines from a user portal.

      Using Virtual Machines

      Now that you have created several running virtual machines, you can assign users to access them from the user portal. You can use virtual machines the same way you would use a physical desktop.

      Assign User Permissions
                          Figure 4 .12. Assign Permissions

      Red Hat Enterprise Virtualization has a sophisticated multi-level administration system, in which
      customized permissions for each system component can be assigned to different users as necessary.
      For instance, to access a virtual machine from the user portal, a user must have either UserRole or
      PowerUserRole permissions for the virtual machine. These permissions are added from the manager
      administration portal.

      To assign PowerUserRole permissions
      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, select the virtual machine you would like to assign a user to.

On the Details pane, navigate to the Permissions tab. Click the Add button.

The Add Permission to User dialog displays. Enter a Name, or User Name, or part thereof in
the Search textbox, and click Go. A list of possible matches displays in the results list.

            Figure 4 .13. Add PowerUserRole Permission

      Select the check box of the user to be assigned the permissions. Scroll through the Assign
      role to user list and select PowerUserRole. Click OK.

      Log in to the User Portal

                          Figure 4 .14 . Connect to the User Portal

      Now that you have assigned PowerUserRole permissions on a virtual machine to the user named admin, you can access the virtual machine from the user portal. To log in to the user portal, all you
      need is a Linux client running Mozilla Firefox or a Windows client running Internet Explorer.

      If you are using a Red Hat Enterprise Linux client, install the SPICE plug-in before logging in to the User Portal. Run:

      # yum install spice-xpi

      To log in to the User Portal

Open your browser and navigate to https://domain.example.com:8443/UserPortal.
Substitute domain.example.com with the Red Hat Enterprise Virtualization Manager server
address.

      The login screen displays. Enter your User Name and Password, and click Login.

      You have now logged into the user portal. As you have PowerUserRole permissions, you are taken by
      default to the Extended User Portal, where you can create and manage virtual machines in addition to
      using them. This portal is ideal if you are a system administrator who has to provision multiple virtual machines for yourself or other users in your environment.

          Figure 4 .15. The Extended User Portal

      You can also toggle to the Basic User Portal, which is the default (and only) display for users with
      UserRole permissions. This portal allows users to access and use virtual machines, and is ideal for
      everyday users who do not need to make configuration changes to the system.

          Figure 4 .16. The Basic User Portal

      You have now completed the Quick Start Guide, and successfully set up Red Hat Enterprise
      Virtualization. However, this is just the first step for you to familiarize yourself with basic Red Hat
      Enterprise Virtualization operations. You can further customize your own unique environment based on your organization's needs by working with our solution architects.

      Introduction

      The Red Hat Enterprise Virtualization platform comprises various components which work seamlessly together, enabling the system administrator to install, configure and manage a virtualized environment. After reading this guide, you will be able to set up Red Hat Enterprise Virtualization as represented in the following diagram:

      Figure 1.1. Overview of Red Hat Enterprise Virtualization components

      Prerequisites

      The following requirements are typical for small- to medium-sized installations. Note that the exact
      requirements of the setup depend on the specific installation, sizing and load. Please use the following requirements as guidelines:

      Red Hat Enterprise Virtualization Manager
• Minimum - Dual core server with 4 GB RAM, 25 GB free disk space and 1 Gbps network interface.
• Recommended - Dual socket/quad core server with 16 GB RAM, 50 GB free disk space on multiple disk spindles and 1 Gbps network interface.
The breakdown of the server requirements is as below:
• For the Red Hat Enterprise Linux 6 operating system: minimum 1 GB RAM and 5 GB local disk space.
• For the Manager: minimum 3 GB RAM, 3 GB local disk space and 1 Gbps network controller bandwidth.
• If you wish to create an ISO domain on the Manager server, you need a minimum of 15 GB disk space.
Note: The Red Hat Enterprise Virtualization Manager setup script, rhevm-setup, supports the
en_US.UTF-8, en_US.utf8, and en_US.utf-8 locales. Ensure that you install the Red Hat
Enterprise Virtualization Manager on a system where the locale in use is one of these
supported values.

      A valid Red Hat Network subscription to the following channels:
• The Red Hat Enterprise Virtualization Manager (v.3 x86_64) channel, also referred to as rhel-x86_64-server-6-rhevm-3, which provides Red Hat Enterprise Virtualization Manager.
• The JBoss Application Platform (v 5) for 6Server x86_64 channel, also referred to as jbappplatform-5-x86_64-server-6-rpm, which provides the supported release of the application platform on which the manager runs.
• The RHEL Server Supplementary (v. 6 64-bit x86_64) channel, also referred to as rhel-x86_64-server-supplementary-6, which provides the supported version of the Java Runtime Environment (JRE).
      A client for connecting to Red Hat Enterprise Virtualization Manager.
      • Microsoft Windows (7, XP, 2003 or 2008) with Internet Explorer 7 and above
      • Microsoft .NET Framework 4
      For each Host (Red Hat Enterprise Virtualization Hypervisor or Red Hat Enterprise Linux)
      • Minimum - Dual core server, 10 GB RAM and 10 GB Storage, 1 Gbps network interface.
      • Recommended - Dual socket server, 16 GB RAM and 50 GB storage, two 1 Gbps network interfaces.
The breakdown of the server requirements is as below:
• For each host: AMD-V or Intel VT enabled, AMD64 or Intel 64 extensions, minimum 1 GB RAM, 3 GB free storage and 1 Gbps network interface.
      • For virtual machines running on each host: minimum 1 GB RAM per virtual machine.
      • Valid Red Hat Network subscriptions for each host. You can use either Red Hat Enterprise Virtualization Hypervisor or Red Hat Enterprise Linux hosts, or both.

      For each Red Hat Enterprise Virtualization Hypervisor host:
• The Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) channel, also referred to as rhel-x86_64-server-6-rhevh.
• For each Red Hat Enterprise Linux host: The Red Hat Enterprise Virt Management Agent (v 6 x86_64) channel, also referred to as rhel-x86_64-rhev-mgmt-agent-6.
      Storage and Networking
• At least one of the supported storage types (NFS, iSCSI and FCP).
      • For NFS storage, a valid IP address and export path is required.
      • For iSCSI storage, a valid IP address and target information is required.
      • Static IP addresses for the Red Hat Enterprise Virtualization Manager server and for each host server.
      • DNS service which can resolve (forward and reverse) all the IP addresses.
      • An existing DHCP server which can allocate network addresses for the virtual machines.
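
To verify the forward and reverse resolution requirement, you can query the DNS service from any Linux machine; the hostname and address below are the examples used elsewhere in this guide:

# host rhevm.demo.redhat.com
# host 192.168.0.254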
      Virtual Machines
      • Installation images for creating virtual machines, depending on which operating system you wish to use.
      • Microsoft Windows XP, 7, 2003 or 2008.
      • Red Hat Enterprise Linux 3, 4, 5 or 6.
      • Valid licenses or subscription entitlements for each operating system.
      Red Hat Enterprise Virtualization User Portal
      • A Red Hat Enterprise Linux client running Mozilla Firefox 3.6 and higher or a Windows client running Internet Explorer 7 and higher.

      Install Red Hat Enterprise Virtualization

The Red Hat Enterprise Virtualization platform consists of at least one Manager and one or more hosts.

      Red Hat Enterprise Virtualization Manager provides a graphical user interface to manage the
      physical and logical resources of the Red Hat Enterprise Virtualization infrastructure. The Manager is installed on a Red Hat Enterprise Linux 6 server, and accessed from a Windows client running
      Internet Explorer.

      Red Hat Enterprise Virtualization Hypervisor runs virtual machines. A physical server running
      Red Hat Enterprise Linux can also be configured as a host for virtual machines on the Red Hat
      Enterprise Virtualization platform.

      Install Red Hat Enterprise Virtualization Manager

                                      Figure 2.1. Install Red Hat Enterprise Virtualization Manager

      The Manager is the control center of the Red Hat Enterprise Virtualization environment. It allows you to define hosts, configure data centers, add storage, define networks, create virtual machines, manage user permissions and use templates from one central location.

      The Red Hat Enterprise Virtualization Manager must be installed on a server running Red Hat
      Enterprise Linux 6, with minimum 4 GB RAM, 25 GB free disk space and 1 Gbps network interface.

      Install Red Hat Enterprise Linux 6 on a server. When prompted for the software packages to
      install, select the default Basic Server option. See the Red Hat Enterprise Linux Installation
      Guide for more details.

      Note: During installation, remember to set the fully qualified domain name (FQDN) and IP for the
      server.


      If the classpathx-jaf package has been installed, it must be removed because it conflicts with some of the components required to support JBoss in Red Hat Enterprise Virtualization Manager. Run:

      # yum remove classpathx-jaf
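
If you are unsure whether the package is present, you can check first; rpm -q prints the package version if it is installed and reports that it is not installed otherwise:

# rpm -q classpathx-jaf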

      If your server has not been registered with the Red Hat Network, run:

      # rhn_register

      To complete registration successfully you need to supply your Red Hat Network username and
      password. Follow the onscreen prompts to complete registration of the system.
      After you have registered your server, update all the packages on it. Run:

      # yum -y update

      Reboot your server for the updates to be applied.

      Subscribe the server to the required channels using the Red Hat Network web interface.

      a. Log on to Red Hat Network (http://rhn.redhat.com/).
b. Click Systems at the top of the page.
      c. Select the system to which you are adding channels from the list presented on the screen,
      by clicking the name of the system.
      d. Click Alter Channel Subscriptions in the Subscribed Channels section of the
      screen.
      e. Select the following channels from the list presented on the screen.
Red Hat Enterprise Virtualization Manager (v.3 x86_64)
RHEL Server Supplementary (v. 6 64-bit x86_64)
JBoss Application Platform (v 5) for 6Server x86_64 (note that this
channel is listed under "Additional Services Channels for Red Hat Enterprise Linux 6 for
x86_64")
      Click the Change Subscription button to finalize the change.
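
Alternatively, a registered system can be subscribed to the same three channels from the command line with the rhn-channel utility; the channel labels below are the ones listed in the prerequisites:

# rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3
# rhn-channel --add --channel=jbappplatform-5-x86_64-server-6-rpm
# rhn-channel --add --channel=rhel-x86_64-server-supplementary-6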

      You are now ready to install the Red Hat Enterprise Virtualization Manager. Run the following
      command:

      # yum -y install rhevm

      This command will download the Red Hat Enterprise Virtualization Manager installation software
      and resolve all dependencies.

      When the packages have finished downloading, run the installer:

# rhevm-setup

Note: rhevm-setup supports the en_US.UTF-8, en_US.utf8, and en_US.utf-8 locales.
You will not be able to run this installer on a system where the locale in use is not one of
these supported values.
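
To check which locale is in use before running the installer:

# echo $LANG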


      The installer will take you through a series of interactive questions as listed in the following
      example. If you do not enter a value when prompted, the installer uses the default settings which
      are stated in [ ] brackets.

      Example: Red Hat Enterprise Virtualization Manager installation

Welcome to RHEV Manager setup utility
HTTP Port [8080] :
HTTPS Port [8443] :
Host fully qualified domain name, note that this name should be fully
resolvable [rhevm.demo.redhat.com] :
Password for Administrator (admin@internal) :
Database password (required for secure authentication with the locally
created database) :
Confirm password :
Organization Name for the Certificate: Red Hat
The default storage type you will be using ['NFS'|'FC'|'ISCSI'] [NFS] : ISCSI
Should the installer configure NFS share on this server to be used as an ISO
Domain? ['yes'|'no'] [no] : yes
Mount point path: /data/iso
Display name for the ISO Domain: local-iso-share
Firewall ports need to be opened.
You can let the installer configure iptables automatically overriding the
current configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example iptables
file found under /usr/share/rhevm/conf/iptables.example
Configure iptables ? ['yes'|'no']: yes


      Important points to note:
• The default ports 8080 and 8443 must be available to access the manager on HTTP and HTTPS respectively.
      • If you elect to configure an NFS share it will be exported from the machine on which the manager is being installed.
      • The storage type that you select will be used to create a data center and cluster. You will then be able to attach storage to these from the Administration Portal.
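Before accepting the defaults, you can confirm that nothing is already listening on ports 8080 and 8443 using standard RHEL 6 tools:

# netstat -tlnp | egrep ':(8080|8443) '

If the command produces no output, both ports are free.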
      You are then presented with a summary of the configurations you have selected. Type yes to
      accept them.

      Example: Confirm Manager installation settings

RHEV Manager will be installed using the following configuration:
=================================================================
http-port: 8080
https-port: 8443
host-fqdn: rhevm.demo.redhat.com
auth-pass: ********
db-pass: ********
org-name: Red Hat
default-dc-type: ISCSI
nfs-mp: /data/iso
iso-domain-name: local-iso-share
override-iptables: yes
Proceed with the configuration listed above? (yes|no): yes


      The installation commences. The following message displays, indicating that the installation was
      successful.

      Example: Successful installation

Installing:
Creating JBoss Profile... [ DONE ]
Creating CA... [ DONE ]
Setting Database Security... [ DONE ]
Creating Database... [ DONE ]
Updating the Default Data Center Storage Type... [ DONE ]
Editing JBoss Configuration... [ DONE ]
Editing RHEV Manager Configuration... [ DONE ]
Configuring the Default ISO Domain... [ DONE ]
Starting JBoss Service... [ DONE ]
Configuring Firewall (iptables)... [ DONE ]


**** Installation completed successfully ******


Your Red Hat Enterprise Virtualization Manager is now up and running. You can log in to the Red Hat Enterprise Virtualization Manager's web administration portal with the username admin (the
administrative user configured during installation) in the internal domain. Instructions to do so are
      provided at the end of this chapter.

Important: The internal domain is automatically created upon installation, however no new users can be added to this domain. To authenticate new users, you need an external directory service. Red Hat
Enterprise Virtualization supports IPA and Active Directory, and provides a utility called rhevm-manage-domains to attach new directories to the system.
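As an illustrative sketch only (the exact flags are assumptions; run rhevm-manage-domains -h on your installation for the supported syntax), attaching a directory might look like:

# rhevm-manage-domains -action=add -domain=directory.example.com -user=admin -interactive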


      Install Hosts

                             Figure 2.2. Install Red Hat Enterprise Virtualization Hosts

      After you have installed the Red Hat Enterprise Virtualization Manager, install the hosts to run your
      virtual machines. In Red Hat Enterprise Virtualization, you can use either Red Hat Enterprise
      Virtualization Hypervisor or Red Hat Enterprise Linux as hosts.



      Install Red Hat Enterprise Virtualization Hypervisor

      This document provides instructions for installing the Red Hat Enterprise Virtualization Hypervisor using a CD. For alternative methods including PXE networks or USB devices, see the Red Hat Enterprise Linux Hypervisor Deployment Guide.

      Before installing the Red Hat Enterprise Virtualization Hypervisor, you need to download the hypervisor image from the Red Hat Network and create a bootable CD with the image. This procedure can be performed on any machine running Red Hat Enterprise Linux.

      To prepare a Red Hat Enterprise Virtualization Hypervisor installation CD
• Download the latest version of the rhev-hypervisor* package from Red Hat Network. The list of hypervisor packages is located at the Red Hat Enterprise Virtualization Hypervisor
  (v.6 x86_64) channel.
      a. Log on to Red Hat Network (http://rhn.redhat.com/).
b. Click Systems at the top of the page.
      c. From the list presented on the screen, select the system on which the Red Hat Enterprise
      Virtualization Manager is installed by clicking on its name.
      d. Click Alter Channel Subscriptions in the Subscribed Channels section of the screen.
e. Select the Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) channel from the list presented on the screen, then click the Change Subscription button to finalize the change.


      Log in to the system on which the Red Hat Enterprise Virtualization Manager is installed. You must log in as the root user to install the rhev-hypervisor package. Run the following command:

# yum install "rhev-hypervisor*"

      The hypervisor ISO image is installed into the /usr/share/rhev-hypervisor/ directory.

      Insert a blank CD into your CD writer. Use the cdrecord utility to burn the hypervisor ISO image
      onto your disc. Run:


      # cdrecord dev=/dev/cdrw /usr/share/rhev-hypervisor/rhev-hypervisor.iso
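If /dev/cdrw does not exist on your machine, or the ISO file name carries a version suffix, you can confirm both before burning:

# ls /usr/share/rhev-hypervisor/
# cdrecord -scanbus

The first command shows the exact ISO name; the second lists the available recorder devices.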


      You have created a Red Hat Enterprise Virtualization Hypervisor installation CD, now you can use it to boot the machine designated as your hypervisor host. For this guide you will use the interactive installation where you are prompted to configure your settings in a graphical interface. Use the following keys to navigate around the installation screen:

      Menu Navigation Keys

      Use the Up and Down arrow keys to navigate between selections. Your selections are highlighted in white.
The Tab key allows you to move between fields.
Use the Spacebar to tick checkboxes, represented by [ ] brackets. A marked checkbox displays
with an asterisk (*).
      To proceed with the selected configurations, press the Enter key. 

      To configure Red Hat Enterprise Virtualization Hypervisor installation settings

      Insert the Red Hat Enterprise Virtualization Hypervisor 6.2-3.0 installation CD into your CD-ROM
drive and reboot the machine. When the boot splash screen displays, press the Tab key and
      select Boot to boot from the hypervisor installation media. Press Enter.

      On the installation confirmation screen, select Install RHEV Hypervisor and press Enter.

      The installer automatically detects the drives attached to the system. The selected disk for
      booting the hypervisor is highlighted in white. Ensure that the local disk is highlighted, otherwise
      use the arrow keys to select the correct disk. Select Continue and press Enter.

      You are prompted to confirm your selection of the local drive, which is marked with an asterisk.
      Select Continue and press Enter.

      Enter a password for local console access and confirm it. Select Install and press Enter. The
      Red Hat Enterprise Virtualization Hypervisor partitions the local drive, then commences
      installation.

      Once installation is complete, a dialog prompts you to Reboot the hypervisor. Press Enter to
      confirm. Remove the installation disc.

After the hypervisor has rebooted, you will be taken to a login shell. Log in as the admin user with
      the password you provided during installation to enter the Red Hat Enterprise Virtualization
      Hypervisor management console.

      On the hypervisor management console, there are eight tabs on the left. Press the Up and Down
      keys to navigate between the tabs and Enter to access them.

      Select the Network tab. Configure the following options:

HostName: Enter the hostname in the format of hostname.domain.example.com.
DNS Server: Enter the Domain Name Server address in the format of 192.168.0.254. You can use up to two DNS servers.
NTP Server: Enter the Network Time Protocol server address in the format of rhel.pool.ntp.org. This synchronizes the hypervisor's system clock with that of the manager's. You can use up to two NTP servers. Select Apply and press Enter to save your network settings.

      The installer automatically detects the available network interface devices to be used as the management network. Select the device and press Enter to access the interface configuration menu. Under IPv4 Settings, tick either the DHCP or Static checkbox.
If you are using static IPv4 network configuration, fill in the IP Address, Netmask and Gateway fields.

To confirm your network settings, select OK and press Enter.

      Select the RHEV-M tab. Configure the following options:

Management Server: Enter the Red Hat Enterprise Virtualization Manager domain name in the format of rhevm.demo.redhat.com.
Management Server Port: Enter the management server port number. The default is 8443.
Connect to the RHEV Manager and Validate Certificate: Tick this checkbox if you wish to verify the RHEV-M security certificate.
Set RHEV-M Admin Password: This field allows you to specify the root password for the hypervisor, and enable SSH password authentication from the Red Hat Enterprise Virtualization Manager.

      Select Apply and press Enter. A dialog displays, asking you to connect the hypervisor to the Red Hat Enterprise Virtualization Manager and validate its certificate. Select Approve and press Enter. A message will display notifying you that the manager configuration has been successfully updated.

      Under the Red Hat Network tab, you can register the host with the Red Hat Network.
      This enables the host to run Red Hat Enterprise Linux virtual machines with proper RHN entitlements. Configure the following settings:

      Enter your Red Hat Network credentials in the Login and Password fields.
      To select the method by which the hypervisor receives updates, tick either the RHN or Satellite checkboxes. Fill in the RHN URL and RHN CA fields.

      To confirm your RHN settings, select Apply and press Enter.
Accept all other default settings. For information on configuring security, logging, kdump and
remote storage, see the Red Hat Enterprise Linux Hypervisor Deployment Guide.

      Finally, select the Status tab. Select Restart and press Enter to reboot the host and
      apply all changes.

      You have now successfully installed the Red Hat Enterprise Virtualization Hypervisor. Repeat this
      procedure if you wish to use more hypervisors. The following sections will provide instructions on how to approve the hypervisors for use with the Red Hat Enterprise Virtualization Manager.

      Install Red Hat Enterprise Linux Host

      You now know how to install a Red Hat Enterprise Virtualization Hypervisor. In addition to hypervisor hosts, you can also reconfigure servers which are running Red Hat Enterprise Linux to be used as virtual machine hosts.

      To install a Red Hat Enterprise Linux 6 host

On the machine designated as your Red Hat Enterprise Linux host, install Red Hat Enterprise
Linux 6.2. Select only the Base package group during installation.
      If your server has not been registered with the Red Hat Network, run the rhn_register command as root to register it. To complete registration successfully you will need to supply your Red Hat Network username and password. Follow the onscreen prompts to complete registration of the system.

      # rhn_register

      Subscribe the server to the required channels using the Red Hat Network web interface.

a. Log on to Red Hat Network (http://rhn.redhat.com/).
b. Click Systems at the top of the page.
      c. Select the system to which you are adding channels from the list presented on the screen,
      by clicking the name of the system.
      d. Click Alter Channel Subscriptions in the Subscribed Channels section of the
      screen.
e. Select the Red Hat Enterprise Virt Management Agent (v 6 x86_64) channel
      from the list presented on the screen, then click the Change Subscription button to
      finalize the change.

      Red Hat Enterprise Virtualization platform uses a number of network ports for management and
      other virtualization features. Adjust your Red Hat Enterprise Linux host's firewall settings to allow
      access to the required ports by configuring iptables rules. Modify the /etc/sysconfig/iptables file so it resembles the following example:

:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [10765:598664]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 16514 -j ACCEPT
-A INPUT -p tcp --dport 54321 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT

Ensure that the iptables service is configured to start on boot and has been restarted, or started for the first time if it was not already running. Run the following commands:

      # chkconfig iptables on
      # service iptables restart
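You can then list the active rules to confirm that the expected ACCEPT entries are loaded:

# iptables -nL --line-numbers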

      You have now successfully installed a Red Hat Enterprise Linux host. As before, repeat this procedure if you wish to use more Linux hosts. Before you can start running virtual machines on your host, you have to manually add it to the Red Hat Enterprise Virtualization Manager via the administration portal, which you will access in the next step. 

                  Figure 2.3. Connect to the Manager administration portal

      Now that you have installed the Red Hat Enterprise Virtualization Manager and hosts, you can log in to the Manager administration portal to start configuring your virtualization environment. The web-based administration portal can be accessed using a Windows client running Internet Explorer.
       
      Before logging in, install .NET Framework 4 and modify the default security settings on the machine used to access the web administration portal. The example below is applicable for Windows 2008.

      To configure Windows client to access the administration portal

      To install .NET Framework 4, download it from 
      http://www.microsoft.com/download/en/details.aspx?id=17718. Run this executable as a user with administration access to the system. 

Next, disable Internet Explorer Enhanced Security Configuration. Click Start → Administrative
Tools → Server Manager. On the Security Information pane in the Server Manager window, click Configure IE ESC. Select Off for Administrators and Users to disable the security configuration. Click OK.

To add the administration portal to the browser's list of trusted sites, open a browser and click on
Tools → Internet Options. Click on the Security tab.

Select Trusted Sites. Click Sites to display the Trusted Sites dialog. Enter the URL for your administration portal in the Add this website to the zone textbox. Click Add, then Close.

      Click the Custom Level... button. Locate the XAML browser applications item in the list, ensure that it is set to Enable, then click OK. 

      Restart Internet Explorer to access the administration portal.

      Log In to Administration Portal

      Now that the prerequisites have been resolved, you can log in to the Red Hat Enterprise Virtualization
      Manager administration portal. Ensure that you have the administrator password configured during
      installation as instructed in Example, “Red Hat Enterprise Virtualization Manager installation”.

      To connect to Red Hat Enterprise Virtualization web management portal

Open a browser and navigate to https://domain.example.com:8443/RHEVManager.
Substitute domain.example.com with the URL provided during installation.

      If this is your first time connecting to the administration portal, Red Hat Enterprise Virtualization
      Manager will issue security certificates for your browser. Click the link labelled this certificate to trust the ca.cer certificate. A pop-up displays, click Open to launch the Certificate dialog. Click Install Certificate and select to place the certificate in Trusted Root Certification Authorities store.

      Back on the browser screen, click the link labelled here and follow the prompts to install the RHEV-GUI-CertificateInstaller executable. A pop-up displays again, this time click Run.
      Note that the actual certificate installation is preceded by an ActiveX installation.
      When complete, a new link labelled here appears. Click on it to reload the administration portal.

The portal login screen displays. Enter admin as your User Name, and enter the Password
      that you provided during installation. Ensure that your domain is set to Internal. Click Login.

      You have now successfully logged in to the Red Hat Enterprise Virtualization web administration portal. Here, you can configure and manage all your virtual resources. The functions of the Red Hat Enterprise Virtualization Manager graphical user interface are described in the following figure and list:

Figure 2.4. Administration Portal Features

      Header: This bar contains the name of the logged in user, the sign out button, the option to
      configure user roles.

Navigation Pane: This pane allows you to navigate between the Tree, Bookmarks and Tags tabs. In the Tree tab, tree mode allows you to see the entire system tree and provides a visual representation of your virtualization environment's architecture.

      Resources Tabs: These tabs allow you to access the resources of Red Hat Enterprise Virtualization. You should already have a Default Data Center, a Default Cluster, a Host waiting to be approved, and available Storage waiting to be attached to the data center.

      Results List: When you select a tab, this list displays the available resources. You can perform
      a task on an individual item or multiple items by selecting the item(s) and then clicking the relevant
      action button. If an action is not possible, the button is disabled.

      Details Pane: When you select a resource, this pane displays its details in several subtabs.
      These subtabs also contain action buttons which you can use to make changes to the selected
      resource.

Once you are familiar with the layout of the administration portal, you can start configuring your virtual environment.

      Configure Red Hat Enterprise Virtualization

      Now that you have logged in to the administration portal, configure your Red Hat Enterprise Virtualization environment by defining the data center, host cluster, networks and storage. Even though this guide makes use of the default resources configured during installation, if you are setting up a Red Hat Enterprise Virtualization environment with completely new components, you should perform the configuration procedure in the sequence given here.

      Configure Data Centers

                                      Figure 3.1. Configure Data Center

A data center is a logical entity that defines the set of physical and logical resources used in a managed virtual environment. Think of it as a container which houses clusters of hosts, virtual machines, storage and networks.

      By default, Red Hat Enterprise Virtualization creates a data center at installation. Its type is configured from the installation script. To access it, navigate to the Tree pane, click Expand All, and select the Default data center. On the Data Centers tab, the Default data center displays.

      Figure 3.2. Data Centers Tab

      The Default data center is used for this document, however if you wish to create a new data center
      see the Red Hat Enterprise Virtualization Administration Guide.

      Configure Cluster

                                     Figure 3.3. Populate Cluster with Hosts

A cluster is a set of physical hosts that are treated as a resource pool for a set of virtual machines.
      Hosts in a cluster share the same network infrastructure, the same storage and the same type of CPU.
      They constitute a migration domain within which virtual machines can be moved from host to host.
      By default, Red Hat Enterprise Virtualization creates a cluster at installation. 

To access it, navigate to the Tree pane, click Expand All and select the Default cluster. On the Clusters tab, the Default cluster displays.
Figure 3.4. Clusters Tab

      For this document, the Red Hat Enterprise Virtualization Hypervisor and Red Hat Enterprise Linux hosts will be attached to the Default host cluster. If you wish to create new clusters, or live migrate virtual machines between hosts in a cluster, see the Red Hat Enterprise Virtualization Evaluation Guide.

      Configure Networking

                                     Figure 3.5. Configure Networking

At installation, Red Hat Enterprise Virtualization defines a Management network for the default data center. This network is used for communication between the manager and the host. New logical networks - for example for guest data, storage or display - can be added to enhance network speed and performance. All networks used by hosts and clusters must be added to the data center they belong to.

      To access the Management network, click on the Clusters tab and select the default cluster. Click the Logical Networks tab in the Details pane. The rhevm network displays.

          Figure 3.6. Logical Networks Tab

      The rhevm Management network is used for this document, however if you wish to create new logical networks see the Red Hat Enterprise Virtualization Administration Guide.

      Configure Hosts

                                      Figure 3.7. Configure Hosts

      You have already installed your Red Hat Enterprise Virtualization Hypervisor and Red Hat Enterprise
      Linux hosts, but before they can be used, they have to be added to the Manager. The Red Hat
      Enterprise Virtualization Hypervisor is specifically designed for the Red Hat Enterprise Virtualization
platform, therefore it only needs a simple click of approval. Conversely, Red Hat Enterprise Linux is a general purpose operating system, therefore reconfiguring it as a host requires additional
      configuration.

      Approve Red Hat Enterprise Virtualization Hypervisor Host

      The Hypervisor you installed in Section “Install Red Hat Enterprise Virtualization Hypervisor” is automatically registered with the Red Hat Enterprise Virtualization platform. It displays in the Red Hat Enterprise Virtualization Manager, and needs to be approved for use.

      To set up a Red Hat Enterprise Virtualization Hypervisor host
      On the Tree pane, click Expand All and select Hosts under the Default cluster. On the Hosts tab, select the name of your newly installed hypervisor.

         Figure 3.8. Red Hat Enterprise Virtualization Hypervisor pending approval

      Click the Approve button. The Edit and Approve Host dialog displays. Accept the defaults
      or make changes as necessary, then click OK.

                     Figure 3.9. Approve Red Hat Enterprise Virtualization Hypervisor

      The host status will change from Non Operational to Up.

      Attach Red Hat Enterprise Linux Host

      In contrast to the hypervisor host, the Red Hat Enterprise Linux host you installed in Section,
      “Install Red Hat Enterprise Linux Host” is not automatically detected. It has to be manually attached to the Red Hat Enterprise Virtualization platform before it can be used.

      To attach a Red Hat Enterprise Linux host
      On the Tree pane, click Expand All and select Hosts under the Default cluster. On the Hosts
      tab, click New.
      The New Host dialog displays.

                   Figure 3.10. Attach Red Hat Enterprise Linux Host

      Enter the details in the following fields:
      • Data Center: the data center to which the host belongs. Select the Default data center.
      • Host Cluster: the cluster to which the host belongs. Select the Default cluster.
      • Name: a descriptive name for the host.
      • Address: the IP address, or resolvable hostname of the host, which was provided during installation.
      • Root Password: the password of the designated host; used during installation of the host.
      • Configure iptables rules: This checkbox allows you to override the firewall settings on the host with the default rules for Red Hat Enterprise Virtualization.
If you wish to configure this host for Out of Band (OOB) power management, select the Power Management tab. Tick the Enable Power Management checkbox and provide the required information in the following fields:
      • Address: The address of the host.
      • User Name: A valid user name for the OOB management.
      • Password: A valid, robust password for the OOB management.
      • Type: The type of OOB management device. Select the appropriate device from the drop down list.
alom         Sun Advanced Lights Out Manager
apc          American Power Conversion MasterSwitch network power switch
bladecenter  IBM BladeCenter Remote Supervisor Adapter
drac5        Dell Remote Access Controller for Dell computers
eps          ePowerSwitch 8M+ network power switch
ilo          HP Integrated Lights Out standard
ilo3         HP Integrated Lights Out 3 standard
ipmilan      Intelligent Platform Management Interface
rsa          IBM Remote Supervisor Adapter
rsb          Fujitsu-Siemens RSB management interface
wti          Western Telematic Inc Network PowerSwitch
cisco_ucs    Cisco Unified Computing System Integrated Management Controller

      • Options: Extra command line options for the fence agent. Detailed documentation of the options available is provided in the man page for each fence agent.

      Click the Test button to test the operation of the OOB management solution.

      If you do not wish to configure power management, leave the Enable Power Management checkbox unmarked.

Click OK. If you have not configured power management, a pop-up window prompts you to confirm if you wish to proceed without power management. Select OK to continue.

The new host displays in the list of hosts with a status of Installing. Once installation is complete, the status will update to Reboot and then Awaiting. When the host is ready for use, its status changes to Up.

      You have now successfully configured your hosts to run virtual machines. The next step is to prepare data storage domains to house virtual machine disk images.

      Configure Storage

                                      Figure 3.11. Configure Storage

      After configuring your logical networks, you need to add storage to your data center.

      Red Hat Enterprise Virtualization uses a centralized shared storage system for virtual machine disk
      images and snapshots. Storage can be implemented using Network File System (NFS), Internet Small
Computer System Interface (iSCSI) or Fibre Channel Protocol (FCP). Storage definition, type and function, are encapsulated in a logical entity called a Storage Domain. Multiple storage domains are
      supported.

For this guide you will use two types of storage domains. The first is an NFS share for ISO images of installation media. You have already created this ISO domain during the Red Hat Enterprise Virtualization Manager installation.

The second storage domain will be used to hold virtual machine disk images. For this domain, you need at least one of the supported storage types. You have already set a default storage type during
      installation as described in Section, “Install Red Hat Enterprise Virtualization Manager”. Ensure that you use the same type when creating your data domain.

Identify the storage type you should use by checking the Default data center:
      1. Navigate to the Tree pane and click the Expand All button. Under System, click Default. On
        the results list, the Default data center displays.
      2. On the results list, the Storage Type column displays the type you should add.
      3. Now that you have verified the storage type, create the storage domain:

      For NFS storage, refer to Section, “Create an NFS Data Domain”.
      For iSCSI storage, refer to Section, “Create an iSCSI Data Domain”.
      For FCP storage, refer to Section, “Create an FCP Data Domain”.

      Create an NFS Data Domain

      Because you have selected NFS as your default storage type during the Manager installation, you will
      now create an NFS storage domain. An NFS type storage domain is a mounted NFS share that is
      attached to a data center and used to provide storage for virtual machine disk images.
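The export itself must already exist on an NFS server that is reachable by your hosts. As a minimal sketch (the path and export options are assumptions; RHEV hosts access NFS storage as the vdsm user with UID and GID 36), preparing an export on a Red Hat Enterprise Linux server might look like:

# mkdir -p /data/images
# chown 36:36 /data/images
# echo '/data/images *(rw,sync,no_subtree_check)' >> /etc/exports
# service nfs restart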

      To add NFS storage:

      Navigate to the Tree pane and click the Expand All button. Under System, select the Default
      data center and click on Storage. The available storage domains display on the results list. Click
      New Domain.

      The New Storage dialog box displays.

          Figure 3.12. Add New Storage

      Configure the following options:
      • Name: Enter a suitably descriptive name.
      • Data Center: The Default data center is already pre-selected.
• Domain Function / Storage Type: In the drop down menu, select Data → NFS. The storage domain types not compatible with the Default data center are grayed out. After you select your domain type, the Export Path field appears.
      • Use Host: Select any of the hosts from the drop down menu. Only hosts which belong in the pre-selected data center will display in this list.
• Export path: Enter the IP address or a resolvable hostname of the chosen host. The export path should be in the format of 192.168.0.10:/data or domain.example.com:/data
      Click OK. The new NFS data domain displays on the Storage tab. It will remain with a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.

      You have created an NFS storage domain. Now, you need to attach an ISO domain to the data center
      and upload installation images so you can use them to create virtual machines. Proceed to Section, “Attach and Populate ISO domain”.

      Create an iSCSI Data Domain

      Because you have selected iSCSI as your default storage type during the Manager installation, you will now create an iSCSI storage domain. Red Hat Enterprise Virtualization platform supports iSCSI storage domains spanning multiple pre-defined Logical Unit Numbers (LUNs).

      To add iSCSI storage:

On the side pane, select the Tree tab. On System, click the + icon to display the available data
centers.
Double click on the Default data center and click on Storage. The available storage domains
display on the results list. Click New Domain.
The New Domain dialog box displays.

          Figure 3.13. Add iSCSI Storage

      Configure the following options:
      • Name: Enter a suitably descriptive name.
      • Data Center: The Default data center is already pre-selected.
      • Domain Function / Storage Type: In the drop down menu, select Data → iSCSI.
• The storage domain types which are not compatible with the Default data center are grayed out. After you select your domain type, the Use Host and Discover Targets fields display.
      • Use host: Select any of the hosts from the drop down menu. Only hosts which belong in this data center will display in this list.
      To connect to the iSCSI target, click the Discover Targets bar. This expands the menu to
      display further connection information fields.

Figure 3.14. Attach LUNs to iSCSI domain

      Enter the required information:
      • Address: Enter the address of the iSCSI target.
      • Port: Select the port to connect to. The default is 3260.
      • User Authentication: If required, enter the username and password.
      Click the Discover button to find the targets. The iSCSI targets display in the results list with a
      Login button for each target.

Click Login to display the list of existing LUNs. Tick the Add LUN checkbox to use the selected
      LUN as the iSCSI data domain.

Click OK. The new iSCSI data domain displays on the Storage tab. It will remain with a Locked
      status while it is being prepared for use. When ready, it is automatically attached to the data
      center.

You have created an iSCSI storage domain. Now, you need to attach an ISO domain to the data center
and upload installation images so you can use them to create virtual machines. Proceed to Section, “Attach and Populate ISO domain”.

      Create an FCP Data Domain

      Because you have selected FCP as your default storage type during the Manager installation, you will
      now create an FCP storage domain. Red Hat Enterprise Virtualization platform supports FCP storage domains spanning multiple pre-defined Logical Unit Numbers (LUNs).

      To add FCP storage:

On the side pane, select the Tree tab. On System, click the + icon to display the available data
      centers.

      Double click on the Default data center and click on Storage. The available storage domains
      display on the results list. Click New Domain.

The New Domain dialog box displays.

         Figure 3.15. Add FCP Storage

      Configure the following options:
      • Name: Enter a suitably descriptive name.
      • Data Center: The Default data center is already pre-selected.
      • Domain Function / Storage Type: Select FCP.
      • Use Host: Select the IP address of either the hypervisor or Red Hat Enterprise Linux host.
• The list of existing LUNs displays. On the selected LUN, tick the Add LUN checkbox to use it as the FCP data domain.
      Click OK. The new FCP data domain displays on the Storage tab. It will remain with a Locked
      status while it is being prepared for use. When ready, it is automatically attached to the data
      center.

      You have created an FCP storage domain. Now, you need to attach an ISO domain to the data center
      and upload installation images so you can use them to create virtual machines. Proceed to Section, “Attach and Populate ISO domain”.

      Attach and Populate ISO domain

      You have defined your first storage domain to store virtual guest data, now it is time to configure your second storage domain, which will be used to store installation images for creating virtual machines. You have already created a local ISO domain during the installation of the Red Hat Enterprise Virtualization Manager. To use this ISO domain, attach it to a data center.

      To attach the ISO domain
      Navigate to the Tree pane and click the Expand All button. Click Default. On the results list,
      the Default data center displays.

      On the details pane, select the Storage tab and click the Attach ISO button.

      The Attach ISO Library dialog appears with the available ISO domain. Select it and click OK.

         Figure 3.16. Attach ISO Library

The ISO domain appears in the results list of the Storage tab. It displays with the Locked status
      as the domain is being validated, then changes to Inactive.

      Select the ISO domain and click the Activate button. The status changes to Locked and then to
      Active.

Media images (CD-ROM or DVD-ROM in the form of ISO images) must be available in the ISO repository for the virtual machines to use. For this purpose, Red Hat Enterprise Virtualization provides a utility that copies the images and sets the appropriate permissions on the file. The file provided to the utility and the ISO share have to be accessible from the Red Hat Enterprise Virtualization Manager.

      Log in to the Red Hat Enterprise Virtualization Manager server console to upload images to the ISO
      domain.

      To upload ISO images

      Create or acquire the appropriate ISO images from boot media. Ensure the path to these images
      is accessible from the Red Hat Enterprise Virtualization Manager server.

      The next step is to upload these files. First, determine the available ISO domains by running:

# rhevm-iso-uploader list

      You will be prompted to provide the admin user password which you use to connect to the
      administration portal. The tool lists the name of the ISO domain that you attached in the previous
      section.

ISO Storage Domain List:
      local-iso-share

      Now you have all the information required to upload the required files. On the Manager console,
      copy your installation images to the ISO domain. For your images, run:

# rhevm-iso-uploader upload -i local-iso-share [file1] [file2] ... [fileN]

      You will be prompted for the admin user password again, provide it and press Enter.
      Note that the uploading process can be time consuming, depending on your storage performance.

      After the images have been uploaded, check that they are available for use in the Manager
      administration portal.
      • Navigate to the Tree and click the Expand All button.
      • Under Storage, click on the name of the ISO domain. It displays in the results list. Click on it to display its details pane.
• On the details pane, select the Images tab. The list of available images should be populated with the files which you have uploaded. In addition, the RHEV-toolsSetup.iso and virtio-win.vfd images should have been automatically uploaded during installation.
          Figure 3.17. Uploaded ISO images

      Now that you have successfully prepared the ISO domain for use, you are ready to start creating virtual machines.

      Manage Virtual Machines

      The final stage of setting up Red Hat Enterprise Virtualization is the virtual machine lifecycle - spanning the creation, deployment and maintenance of virtual machines; using templates; and configuring user permissions. This chapter will also show you how to log in to the user portal and connect to virtual machines.

      Create Virtual Machines
Figure 4.1. Create Virtual Machines

      On Red Hat Enterprise Virtualization, you can create virtual machines from an existing template, as a
      clone, or from scratch. Once created, virtual machines can be booted using ISO images, a network boot (PXE) server, or a hard disk. This document provides instructions for creating a virtual machine using an ISO image.

In your current configuration, you should have at least one host available for running virtual machines, and have uploaded the required installation images to your ISO domain. This section guides you through the creation of a Red Hat Enterprise Linux 6 virtual server. You will perform a normal attended installation using a virtual DVD.

      To create a Red Hat Enterprise Linux server

      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, click New Server.

Figure 4.2. Create New Linux Virtual Machine

You only need to fill in the Name field and select Red Hat Enterprise Linux 6.x as your
Operating System. You may alter other settings but in this example we will retain the defaults.
      Click OK to create the virtual machine.

      A New Virtual Machine - Guide Me window opens. This allows you to add networks and
      storage disks to the virtual machine.

Figure 4.3. Create Virtual Machines

      Click Configure Network Interfaces to define networks for your virtual machine. The parameters in the following figure are recommended, but can be edited as necessary. When you have configured your required settings, click OK.

Figure 4.4. New Network Interface configurations

      You are returned to the Guide Me window. This time, click Configure Virtual Disks to add
      storage to the virtual machine. The parameters in the following figure are recommended, but can
      be edited as necessary. When you have configured your required settings, click OK.

Figure 4.5. New Virtual Disk configurations

      Close the Guide Me window by clicking Configure Later. Your new RHEL 6 virtual machine will
      display in the Virtual Machines tab.

      You have now created your first Red Hat Enterprise Linux virtual machine. Before you can use your
      virtual machine, install an operating system on it.

To install the Red Hat Enterprise Linux guest operating system

      Right click the virtual machine and select Run Once. Configure the following options:

Figure 4.6. Run Red Hat Enterprise Linux Virtual Machine

      • Attach CD: Red Hat Enterprise Linux 6
      • Boot Sequence: CD-ROM
      • Display protocol: SPICE
      Retain the default settings for the other options and click OK to start the virtual machine.

Select the virtual machine and click the Console icon. As this is your first time connecting to
the virtual machine, allow the installation of the SPICE ActiveX component and the SPICE client.

      After the SPICE plugins have been installed, select the virtual machine and click the Console icon
      again. This displays a window to the virtual machine, where you will be prompted to begin installing the operating system.

      After the installation has completed, shut down the virtual machine and reboot from the hard drive.

      You can now connect to your Red Hat Enterprise Linux virtual machine and start using it.

      Create a Windows Virtual Machine

      You now know how to create a Red Hat Enterprise Linux virtual machine from scratch. The procedure of creating a Windows virtual machine is similar, except that it requires additional virtio drivers. This example uses Windows 7, but you can also use other Windows operating systems. You will perform a normal attended installation using a virtual DVD.

      To create a Windows desktop

      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, click New Desktop.

Figure 4.7. Create New Windows Virtual Machine

You only need to fill in the Name field and select Windows 7 as your Operating System. You
      may alter other settings but in this example we will retain the defaults. Click OK to create the virtual
      machine.

      A New Virtual Machine - Guide Me window opens. This allows you to define networks for
      the virtual machine. Click Configure Network Interfaces. See Figure 4.4, “New Network
      Interface configurations” for details.

      You are returned to the Guide Me window. This time, click Configure Virtual Disks to add
      storage to the virtual machine. See Figure 4.5, “New Virtual Disk configurations” for details.

      Close the Guide Me windows. Your new Windows 7 virtual machine will display in the Virtual
      Machines tab.

To install the Windows guest operating system

      Right click the virtual machine and select Run Once. The Run Once dialog displays as in
      Figure 4.6, “Run Red Hat Enterprise Linux Virtual Machine”. Configure the following options:
      • Attach Floppy: virtio-win
      • Attach CD: Windows 7
      • Boot sequence: CD-ROM
      • Display protocol: SPICE
      Retain the default settings for the other options and click OK to start the virtual machine.

      Select the virtual machine and click the Console icon. This displays a window to the virtual
      machine, where you will be prompted to begin installing the operating system.

      Accept the default settings and enter the required information as necessary. The only change you must make is to manually install the VirtIO drivers from the virtual floppy disk (vfd) image. To do so, select the Custom (advanced) installation option and click Load Driver. Press Ctrl and select:
      • Red Hat VirtIO Ethernet Adapter
      • Red Hat VirtIO SCSI Controller
      The installation process commences, and the system will reboot itself several times.

Back on the administration portal, when the virtual machine's status changes back to Up, right click on it and select Change CD. From the list of images, select RHEV-toolsSetup to attach the Guest Tools ISO which provides features including USB redirection and SPICE display optimization.

      Click Console and log in to the virtual machine. Locate the CD drive to access the contents of the Guest Tools ISO, and launch the RHEV-toolsSetup executable. After the tools have been installed, you will be prompted to restart the machine for changes to be applied.

      You can now connect to your Windows virtual machine and start using it.

      Using Templates

Figure 4.8. Create Templates

      Now that you know how to create a virtual machine, you can save its settings into a template. This template will retain the original virtual machine's configurations, including virtual disk and network interface settings, operating systems and applications. You can use this template to rapidly create replicas of the original virtual machine.

      Create a Red Hat Enterprise Linux Template

      To make a Red Hat Enterprise Linux virtual machine template, use the virtual machine you created in Section, “Create a Red Hat Enterprise Linux Virtual Machine” as a basis. Before it can be used, it has to be sealed. This ensures that machine-specific settings are not propagated through the template.

      To prepare a Red Hat Enterprise Linux virtual machine for use as a template

      Connect to the Red Hat Enterprise Linux virtual machine to be used as a template. Flag the
      system for re-configuration by running the following command as root:

      # touch /.unconfigured

      Remove ssh host keys. Run:

      # rm -rf /etc/ssh/ssh_host_*
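Optionally (this step is an addition, not part of the original procedure), you can also clear the persistent network device rules so that machines cloned from the template re-detect their network interfaces on first boot:

# rm -f /etc/udev/rules.d/70-persistent-net.rules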

      Shut down the virtual machine. Run:

      # poweroff

      The virtual machine has now been sealed, and is ready to be used as a template for Linux virtual
      machines.

      To create a template from a Red Hat Enterprise Linux virtual machine

      In the administration portal, click the Virtual Machines tab. Select the sealed Red Hat
      Enterprise Linux 6 virtual machine. Ensure that it has a status of Down.

Click Make Template. The New Virtual Machine Template displays.

Figure 4.9. Make new virtual machine template

      Enter information into the following fields:
      • Name: Name of the new template
      • Description: Description of the new template
      • Host Cluster: The Host Cluster for the virtual machines using this template.
      • Make Private: If you tick this checkbox, the template will only be available to the template's creator and the administrative user. Nobody else can use this template unless they are given permissions by the existing permitted users.
      Click OK. The virtual machine displays a status of "Image Locked" while the template is being
      created. The template is created and added to the Templates tab. During this time, the action
      buttons for the template remain disabled. Once created, the action buttons are enabled and the
      template is ready for use.

      Clone a Red Hat Enterprise Linux Virtual Machine

      In the previous section, you created a Red Hat Enterprise Linux template complete with pre-configured storage, networking and operating system settings. Now, you will use this template to deploy a preinstalled virtual machine.

      To clone a Red Hat Enterprise Linux virtual machine from a template

      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, click New Server.

Figure 4.10. Create virtual machine based on Linux template

      On the General tab, select the existing Linux template from the Based on Template
      list.
      Enter a suitable Name and appropriate Description, then accept the default values
inherited from the template in the rest of the fields. You can change them if needed.
      Click the Resource Allocation tab. On the Provisioning field, click the drop down
      menu and select the Clone option.

Figure 4.11. Set the provisioning to Clone

      Retain all other default settings and click OK to create the virtual machine. The virtual machine
      displays in the Virtual Machines list.

      Create a Windows Template

      To make a Windows virtual machine template, use the virtual machine you created in Section,
      “Create a Windows Virtual Machine” as a basis.

      Before a template for Windows virtual machines can be created, it has to be sealed with sysprep. This ensures that machine-specific settings are not propagated through the template.

      Note that the procedure below is applicable for creating Windows 7 and Windows 2008 R2 templates.

      To seal a Windows virtual machine with sysprep

      In the Windows virtual machine to be used as a template, open a command line terminal and type
      regedit.

      The Registry Editor window displays. On the left pane, expand HKEY_LOCAL_MACHINE →
      SYSTEM → SETUP.

      On the main window, right click to add a new string value using New → String Value. Right click
      on the file and select Modify. When the Edit String dialog box displays, enter the following
      information in the provided text boxes:
      • Value name: UnattendFile
      • Value data: a:\sysprep.inf
Launch sysprep from C:\Windows\System32\sysprep\sysprep.exe
      • Under System Cleanup Action, select Enter System Out-of-Box-Experience (OOBE).
      • Tick the Generalize checkbox if you need to change the computer's system identification number (SID).
      • Under Shutdown Options, select Shutdown.

      Click OK. The virtual machine will now go through the sealing process and shut down
      automatically.

      To create a template from an existing Windows machine

      In the administration portal, click the Virtual Machines tab. Select the sealed Windows 7
      virtual machine. Ensure that it has a status of Down and click Make Template.

      The New Virtual Machine Template displays. Enter information into the following fields:
      • Name: Name of the new template
      • Description: Description of the new template
      • Host Cluster: The Host Cluster for the virtual machines using this template.
      • Make Public: Check this box to allow all users to access this template.
Click OK. In the Templates tab, the template displays the "Image Locked" status icon while it is
      being created. During this time, the action buttons for the template remain disabled. Once created,
      the action buttons are enabled and the template is ready for use.

      You can now create new Windows machines using this template.

      Create a Windows Virtual Machine from a Template

      This section describes how to create a Windows 7 virtual machine using the template created in
      Section, “Create a Windows Template”.

      To create a Windows virtual machine from a template
      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, click New Desktop.
      • Select the existing Windows template from the Based on Template list.
• Enter a suitable Name and appropriate Description, and accept the default values inherited from the template in the rest of the fields. You can change them if needed.
Retain all other default settings and click OK to create the virtual machine. The virtual machine
      displays in the Virtual Machines list with a status of "Image Locked" until the virtual disk is created.
      The virtual disk and networking settings are inherited from the template, and do not have to be
      reconfigured.

Click the Run icon to turn it on. This time, the Run Once steps are not required as the
      operating system has already been installed onto the virtual machine hard drive. Click the green
      Console button to connect to the virtual machine.

      You have now learned how to create Red Hat Enterprise Linux and Windows virtual machines with and without templates. Next, you will learn how to access these virtual machines from a user portal.

      Using Virtual Machines

      Now that you have created several running virtual machines, you can assign users to access them from the user portal. You can use virtual machines the same way you would use a physical desktop.

      Assign User Permissions
Figure 4.12. Assign Permissions

      Red Hat Enterprise Virtualization has a sophisticated multi-level administration system, in which
      customized permissions for each system component can be assigned to different users as necessary.
      For instance, to access a virtual machine from the user portal, a user must have either UserRole or
      PowerUserRole permissions for the virtual machine. These permissions are added from the manager
      administration portal.

      To assign PowerUserRole permissions
      Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster.
      On the Virtual Machines tab, select the virtual machine you would like to assign a user to.

On the Details pane, navigate to the Permissions tab. Click the Add button.

The Add Permission to User dialog displays. Enter a Name, or User Name, or part thereof in
the Search textbox, and click Go. A list of possible matches displays in the results list.

Figure 4.13. Add PowerUserRole Permission

      Select the check box of the user to be assigned the permissions. Scroll through the Assign
      role to user list and select PowerUserRole. Click OK.

      Log in to the User Portal

Figure 4.14. Connect to the User Portal

      Now that you have assigned PowerUserRole permissions on a virtual machine to the user named admin, you can access the virtual machine from the user portal. To log in to the user portal, all you
      need is a Linux client running Mozilla Firefox or a Windows client running Internet Explorer.

      If you are using a Red Hat Enterprise Linux client, install the SPICE plug-in before logging in to the User Portal. Run:

      # yum install spice-xpi
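You can confirm the plug-in package is present before opening Firefox:

# yum list installed spice-xpi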

      To log in to the User Portal

Open your browser and navigate to https://domain.example.com:8443/UserPortal.
      Substitute domain.example.com with the Red Hat Enterprise Virtualization Manager server
      address.

      The login screen displays. Enter your User Name and Password, and click Login.

      You have now logged into the user portal. As you have PowerUserRole permissions, you are taken by
      default to the Extended User Portal, where you can create and manage virtual machines in addition to
      using them. This portal is ideal if you are a system administrator who has to provision multiple virtual machines for yourself or other users in your environment.

Figure 4.15. The Extended User Portal

      You can also toggle to the Basic User Portal, which is the default (and only) display for users with
      UserRole permissions. This portal allows users to access and use virtual machines, and is ideal for
      everyday users who do not need to make configuration changes to the system.

Figure 4.16. The Basic User Portal

      You have now completed the Quick Start Guide, and successfully set up Red Hat Enterprise
      Virtualization. However, this is just the first step for you to familiarize yourself with basic Red Hat
      Enterprise Virtualization operations. You can further customize your own unique environment based on your organization's needs by working with our solution architects.