
Free Fault Tolerant Load Balancing using Citrix NetScaler Express (Part 2) - Citrix StoreFront/Web Interface and XML Broker


In this second part I describe how to load balance the Citrix StoreFront/Web Interface and the Citrix Desktop Delivery Controller (XML Broker) services.


If you would like to read the other parts of this article series, please see Part 1 and Part 3.

Introduction

In the first article of this series I described the installation and configuration of a highly available, fault-tolerant, free NetScaler VPX Express set-up. This set-up can be used to load balance all kinds of services for free.

Citrix StoreFront/Web Interface

Let’s start with the configuration of the Citrix StoreFront load balancing. Logically we need two Citrix StoreFront servers for this set-up and a free IP address in the range of the NetScaler VPX Express appliances. I prefer to have the Citrix StoreFront servers joined in a StoreFront Server Group to guarantee that the StoreFront configuration is identical.

          Figure 1: Citrix StoreFront Server Group

Within the NetScaler GUI the first step is to go to the Configuration tab, followed by Traffic Management – Load Balancing – Servers.

      Figure 2: NetScaler Load Balancing

In this part we need to specify the actual servers hosting the StoreFront role. In my case these are SRV-VBN007 and SRV-VBN014. Add each node by specifying the server name and the IP address.

                        Figure 3: Create Server Node

When both servers are added they will be shown in the servers overview.

       Figure 4: StoreFront Servers added

The next step is to create Monitors for these StoreFront servers. NetScaler includes a specific StoreFront monitor that actually checks whether the StoreFront store is available. To create it, go to Traffic Management – Load Balancing – Monitors within the Configuration tab and again choose the Add button.

      Figure 5: Configure Monitors

First you need to enter a name for the monitor. You can use whatever you want (but keep the name logical). Select StoreFront as the type, so the specific StoreFront monitor functionality will be applied. Additionally, you can specify the Destination IP under the standard parameters, but you are not required to do so: if this value is empty, the IP address of the bound server will be used. If you enter a destination IP address you need to create a monitor for each StoreFront server; when leaving this value empty, one monitor can be applied to multiple servers.

     Figure 6: Create Monitor

At the special parameters tab the Store Name of the StoreFront configuration needs to be entered. Also ensure that the StoreFront Account Service box is checked.

               Figure 7: Configure Store Name of the StoreFront monitor

After we have created the monitor we are ready to create the services. These can be found at Traffic Management – Load Balancing – Services, again within the Configuration tab. Choose the Add button once again.

        Figure 8: Load Balancing Services

First we need to provide a service name. Again you can type in whatever you like, but use a logical naming convention; I’m using SFSRV_<> as an example. As we already created the servers earlier, we can now select Existing Server and choose the corresponding server. Select the protocol and port number. In my example I don’t have certificates, so I’m using HTTP on port 80. However, for a production environment I advise using SSL on port 443.

                              Figure 9: Load Balancing Service set-up

The service will be created and shown in the next window. By default the standard HTTP monitor is applied, but we would like to add our created StoreFront monitor. Scroll down to Monitors and choose the > symbol behind 1 Service to Load Balancing Monitor Binding.

   Figure 10: Load Balancing Service created and change monitors
The Service Load Balancing Monitor Binding window opens. To add our monitor, we need to choose the option Add Binding.

      Figure 11: Service Load Balancing Monitor Binding

We already created the monitor, so choose the > symbol again and pick the created monitor from the list. Leave the other values in their default state.

                       Figure 12: Selecting Load Balancing Monitor Binding

The binding is replaced by the just-selected monitor. Close this window via the Close button (it can take some time for the Current State to change). Choose Done to close the service window as well.

    Figure 13: Changes Load Balancing Monitor Binding

Repeat these steps for the second StoreFront server. At the end you have two services defined, and their state should be Up (based on the StoreFront monitor we have bound to each service).

        Figure 14: Services created

The last step within the NetScaler management console is to create the actual virtual server, which will be the entry point for the end-user to access the StoreFront infrastructure. Go to Traffic Management – Load Balancing – Virtual Servers (under Configuration) and use the Add button again.

     Figure 15: Create Virtual Server

Provide a name for the virtual server. Again you can provide whatever you want. Specify the protocol the virtual server should respond to. In my case I will use HTTP on port 80; when using certificates, use SSL. Lastly, enter the IP address for this virtual server.

 


Figure 16: Add servers

Just as for the StoreFront servers, the next step is to set up the monitor part. Go to Traffic Management – Load Balancing – Monitors and add a monitor.

                               Figure 17: Basic Settings Virtual Server

The virtual server will be created. Next we need to assign the created services to this virtual server as resources. Choose the > symbol behind No Load Balancing Virtual Server Service Binding.

       Figure 18: Load Balancing Virtual Server configuration

In the Service Binding window choose the > Symbol again to select the services.

         Figure 19: Service Binding

Select the corresponding services to add those to the virtual server configuration.

     Figure 20: Selecting corresponding services.

The services are shown in the Select Service field, leave the other values default and choose the Bind button.

                                  Figure 21: Service Binding

Finally we need to specify the Persistence options. These can be found under Advanced in the right pane. For StoreFront we can use several methodologies; one of the best practices is to use COOKIEINSERT, with SOURCEIP as the backup persistence. Leave the other settings at their defaults.

       Figure 22: Persistence configuration

Choose “Done” once more to finalize the configuration. The status will turn green after a short moment and the StoreFront services are now load balanced using a Citrix NetScaler.

    Figure 23: Load Balancing Virtual Server configuration finalized

The last step is to create a pointer to the virtual server. I created an A record in DNS with the name vbn-sf pointing to the 192.168.21.110 IP address. Now users can connect to StoreFront via this load-balanced service. Don’t forget to save the configuration using the floppy icon.
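If you prefer to script this configuration rather than click through the GUI, the same StoreFront load-balancing objects can also be created through the NetScaler Nitro REST API, for example from PowerShell. The sketch below is only an illustration of that approach: the NSIP, credentials and server IP addresses are placeholders, the store and object names follow the examples used above, and the exact Nitro property names (and whether bindings are added with PUT or POST) may vary between firmware versions, so verify them against your appliance's Nitro documentation before use.

```powershell
# Minimal sketch: create the StoreFront load-balancing objects via the Nitro REST API.
# NSIP, credentials, server IPs and the store name are example/placeholder values.
$ns   = "http://192.168.21.100"   # NetScaler NSIP (placeholder)
$hdrs = @{ "X-NITRO-USER" = "nsroot"; "X-NITRO-PASS" = "nsroot"; "Content-Type" = "application/json" }

function Invoke-Nitro($method, $object, $body) {
    Invoke-RestMethod -Method $method -Uri "$ns/nitro/v1/config/$object" `
                      -Headers $hdrs -Body ($body | ConvertTo-Json -Depth 5)
}

# 1. Server objects for both StoreFront nodes (IP addresses are placeholders)
Invoke-Nitro Post "server" @{ server = @{ name = "SRV-VBN007"; ipaddress = "192.168.21.107" } }
Invoke-Nitro Post "server" @{ server = @{ name = "SRV-VBN014"; ipaddress = "192.168.21.114" } }

# 2. StoreFront monitor with the store name and account service check (Figure 7)
Invoke-Nitro Post "lbmonitor" @{ lbmonitor = @{ monitorname = "mon_storefront"; type = "STOREFRONT";
                                                storename = "Store"; storefrontacctservice = "YES" } }

# 3. One HTTP service per StoreFront node, each bound to the StoreFront monitor
foreach ($srv in "SRV-VBN007", "SRV-VBN014") {
    Invoke-Nitro Post "service" @{ service = @{ name = "SFSRV_$srv"; servername = $srv; servicetype = "HTTP"; port = 80 } }
    Invoke-Nitro Put "service_lbmonitor_binding" @{ service_lbmonitor_binding = @{ name = "SFSRV_$srv"; monitor_name = "mon_storefront" } }
}

# 4. Virtual server with COOKIEINSERT persistence and SOURCEIP backup, then bind the services
Invoke-Nitro Post "lbvserver" @{ lbvserver = @{ name = "vsrv_storefront"; servicetype = "HTTP"; ipv46 = "192.168.21.110";
                                                port = 80; persistencetype = "COOKIEINSERT"; persistencebackup = "SOURCEIP" } }
foreach ($srv in "SRV-VBN007", "SRV-VBN014") {
    Invoke-Nitro Put "lbvserver_service_binding" @{ lbvserver_service_binding = @{ name = "vsrv_storefront"; servicename = "SFSRV_$srv" } }
}

# 5. Save the running configuration (the floppy icon in the GUI)
Invoke-RestMethod -Method Post -Uri "$ns/nitro/v1/config/nsconfig?action=save" -Headers $hdrs `
                  -Body (@{ nsconfig = @{} } | ConvertTo-Json)
```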

If you are still using a Citrix Web Interface, you can also use the same set-up. The only difference is that at the monitoring configuration you select Citrix-Web-Interface as the type and specify the site path at the special parameters.

                         Figure 24: Web Interface Monitor

 

Citrix Delivery Controller

The Citrix Delivery Controller component can also be load balanced using the Citrix NetScaler VPX Express. Within Citrix XenDesktop/XenApp you can configure multiple Delivery Controllers, including a built-in load balancing mechanism. However, using NetScaler load balancing has one big advantage: you only need to specify the actual Delivery Controller server names once, in the NetScaler. In the other configuration options you specify the virtual server name, so when there are changes to the Desktop Delivery Controllers you only need to change the NetScaler configuration.

The first step is to specify the servers that run the Delivery Controller component within Traffic Management – Load Balancing – Servers, as we also did for the Citrix StoreFront servers. In this article I’m using the same servers; if you have other servers, refer to the StoreFront section above for how to add a server.

The next step is to create the monitor for the Delivery Controller. Again give the monitor a name and specify Citrix-XD-DDC as the type. Optionally specify a destination IP; if none is specified, the IP of the server the monitor is applied to will be used, which makes the monitor usable for multiple services.

     Figure 25: DDC Monitor

After the monitor is created, we can start to set up the services within Traffic Management – Load Balancing – Services. Just as with StoreFront, a service name should be provided. As we already created the servers, we can select an existing server. Specify HTTP as the protocol and fill in the port the XML Broker traffic is set to. The default is port 80 with XenDesktop/XenApp 7.x; in my case it’s running on port 8080.

                     Figure 26: DDC Service

Follow the same procedure to change the monitor of this service: press the > symbol and change the monitor binding to the monitor you just created. Repeat the service creation and monitor binding for the second (and any additional) Desktop Delivery Controllers.

    Figure 27: Changing service Load Balancing Monitor Binding

After creating the services for all available Delivery Controllers, we can finalize the set-up by creating the virtual server via Traffic Management – Load Balancing – Virtual Servers, using the Add button. The same procedure used to create the StoreFront virtual server applies: first specify a name, a unique IP address, the protocol and the port number; secondly, bind the just-created services to the virtual server; thirdly, configure persistence based on SOURCEIP.
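For completeness, the sketch below shows the same Delivery Controller objects created via the Nitro REST API from PowerShell, reusing the $ns, $hdrs and Invoke-Nitro helper from the StoreFront sketch earlier in this article. The virtual server IP, the object names and the assumption that both controllers answer XML traffic on port 8080 are illustrative only.

```powershell
# Sketch: Delivery Controller monitor, services and virtual server via the Nitro API
# (reuses $ns, $hdrs and Invoke-Nitro from the StoreFront example; values are placeholders).
Invoke-Nitro Post "lbmonitor" @{ lbmonitor = @{ monitorname = "mon_xd_ddc"; type = "CITRIX-XD-DDC" } }

foreach ($srv in "SRV-VBN007", "SRV-VBN014") {
    Invoke-Nitro Post "service" @{ service = @{ name = "DDCSRV_$srv"; servername = $srv; servicetype = "HTTP"; port = 8080 } }
    Invoke-Nitro Put "service_lbmonitor_binding" @{ service_lbmonitor_binding = @{ name = "DDCSRV_$srv"; monitor_name = "mon_xd_ddc" } }
}

# Virtual server with SOURCEIP persistence (IP address is a placeholder)
Invoke-Nitro Post "lbvserver" @{ lbvserver = @{ name = "vsrv_ddc"; servicetype = "HTTP"; ipv46 = "192.168.21.111";
                                                port = 8080; persistencetype = "SOURCEIP" } }
foreach ($srv in "SRV-VBN007", "SRV-VBN014") {
    Invoke-Nitro Put "lbvserver_service_binding" @{ lbvserver_service_binding = @{ name = "vsrv_ddc"; servicename = "DDCSRV_$srv" } }
}
```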

    Figure 28: Virtual Server DDC

Don’t forget to save the configuration using the floppy icon. Now we are ready to use the DDC Load Balancing Virtual Server, for example in the StoreFront configuration.

                 Figure 29: Using the DDC Virtual Server in the StoreFront configuration

If you are still using a Citrix XenApp 6.x infrastructure, the same steps can be used to load balance the XML service. Within the monitor the type should be changed to CITRIX-XML-SERVICE; the other steps and configuration are exactly the same.

 

Summary

In the first part I described how to set up and configure a free, highly available load balancing infrastructure based on the Citrix NetScaler VPX Express. In this second part we used this infrastructure to set up a load-balanced, highly available StoreFront and Desktop Delivery Controller infrastructure. In an upcoming article I’m going to describe the steps for load balancing the Microsoft RD Web Access and RD Connection Broker components.


Free Fault Tolerant Load Balancing using Citrix NetScaler Express (Part 3)


In this third part I’m describing how to load balance Microsoft Remote Desktop Web Access and Microsoft RD Connection Broker using the Citrix NetScaler infrastructure.

If you would like to read the other parts of this article series, please see Part 1 and Part 2.
In the first article of this series I described the installation and configuration of a highly available, fault-tolerant, free NetScaler VPX Express set-up. This set-up can be used to load balance all kinds of services for free. In the second part I described how to load balance the Citrix StoreFront/Web Interface and Citrix Desktop Delivery Controller (XML Broker) services.

Microsoft Remote Desktop Web Access

The first component we will set up in a fault-tolerant load balancing configuration is Microsoft Remote Desktop Web Access. This component is based on IIS, so we need to load balance it on the HTTP/HTTPS protocol. The first step is to add the servers within the Citrix NetScaler configuration via Traffic Management – Load Balancing – Servers under the Configuration tab. Choose the Add button. In the image below there are already servers available from my second article.

       Figure 1: Adding Servers

Within the Create Server window we need to specify the server name and the corresponding IP address. Logically you need to redo this step for all the servers that will be part of the load balance group. For this article I will add two servers, VBN-SRV016 (192.168.21.216) and VBN-SRV017 (192.168.21.217).

                          Figure 2: Create Server window

Normally we would also set up a monitor to check whether the service is still responding. However, RD Web Access is plain HTTP/HTTPS traffic and these monitors are already available by default within NetScaler, so we don’t need to create one and can continue directly with creating the services. The services are created at Traffic Management – Load Balancing – Services, again within the Configuration tab. Again use the Add button to set up the service.

      Figure 3: Create Services

Provide a logical name for the Service Name. I use RDWASRV_<> as a naming convention. Because we already added the server earlier, we can select Existing Server and select the server in the drop-down box. Choose the corresponding protocol. As I used HTTP in the previous article, I will now use a secured connection: for HTTPS you select SSL_BRIDGE and the corresponding port (by default 443).

                                Figure 4: Basic Settings Load Balancing Service

After pressing OK the service is created. Next we change the monitor binding. Select the > symbol at the end of 1 Service to Load Balancing Monitor Binding.

       Figure 5: Load Balance Service created, changing monitors

By default the tcp-default monitor is bound to a service. To change this default behavior, choose Add Binding.

      Figure 6: Service Load Balancing Monitor Binding

Click the > symbol at the Select Monitor option.

               Figure 7: Selecting Monitor Binding

A list of available monitors will be shown. Select the monitor HTTPS.

      Figure 8: Select https monitor

Leave the other settings default and choose the Bind button.

                    Figure 9: https monitor selected

Now the Load Balancing Service is fully configured. Repeat these steps for the other servers that will be part of the load balancing infrastructure.

    Figure 10: https monitor selected

After the creation of the services, we are ready to set-up the actual virtual server which will be the access point of the RD Web Access users. To set-up the virtual server go to Configuration then Traffic Management – Load Balancing – Virtual Servers and start the process via the Add button.

         Figure 11: Virtual Servers

Provide a name for the virtual server. Again you can name it whatever you like, but a name that explains the functionality makes sense over time. Select SSL_BRIDGE as the protocol and IP Address as the IP Address Type. Next enter the IP address for the virtual server, followed by the port number on which the virtual server will be accessed.

                              Figure 12: Create Virtual Server

When the OK button is pressed, the virtual server will be created. After the creation, services should be assigned to this virtual server. Choose the > symbol after No Load Balancing Virtual Server Service Binding.

       Figure 13: Add services after virtual server creation

Press the > symbol at Select Service to add services to the virtual server.

                              Figure 14: Select Service Binding

A list of configured services is shown. Pick the services you have just configured for this virtual server. In my case these are VBN-SRV016 and VBN-SRV017.

   Figure 15: Select the required services

The services are now selected and can be connected to the virtual server using the Bind button.

                                    Figure 16: Service Binding selected

After binding the services, persistence is automatically configured as SSLSESSION. If required you can change the persistence setting using the pencil icon, but this is optional.

      Figure 17: Virtual Server configuration finished

The last step is to create a DNS record pointing to the Virtual Server IP address. For this article I created this for internal access, so I can add it to my local DNS.
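On a Windows DNS server this record can also be created with the DnsServer PowerShell module. The zone name, host name and IP address below are placeholders; substitute the values for your own environment.

```powershell
# Sketch: create an A record for the RD Web Access virtual server (placeholder values).
Add-DnsServerResourceRecordA -ZoneName "domain.local" -Name "vbn-rdweb" -IPv4Address "192.168.21.115"
```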

                              Figure 18: Creating DNS record for the virtual server

 

Microsoft Remote Desktop Connection Broker

Another component within the Remote Desktop infrastructure that is a very good candidate for load balancing through the free NetScaler Express edition is the Remote Desktop Connection Broker. In this section I will describe the steps to set up a load-balanced RD Connection Broker via the NetScaler VPX Express.

The first step is to add the servers running the RD Connection Broker role into the NetScaler configuration. For this article I’m using the same servers as I used for the RD Web Access set-up, so I can skip this step. See the RD Web Access steps on how to add a server in the NetScaler Express via Traffic Management – Load Balancing – Servers under Configuration.

Unfortunately there is no special monitor available within the NetScaler for monitoring the RD Connection Broker component. The NetScaler has a specific RDP script available, but that only works for machines hosting the RD Session Host role.

                 Figure 19: RDP monitoring available within the NetScaler, but cannot be used for the RD Connection Broker

So we can skip the monitor part for this component and directly start creating the services for the RD Connection Broker role. Go to Traffic Management – Load Balancing – Services within Configuration followed by the Add button.

           Figure 20: Load Balancing - Services

Provide a logical name for the service. I’m using the convention RDCBSRV_<>, but you can fill in whatever you need. As we already added the servers earlier we now can choose the existing server option and select the corresponding server. Next select RDP as the protocol with port 3389.

         Figure 21: Add Load Balancing Service

When the service is created we normally add a specific monitor, but as just mentioned there is no monitor available. The only option available is using the default tcp-default monitor, which checks that port 3389 is responding. Repeat this step for the other servers hosting the RD Connection Broker role.

   Figure 22: No specific monitor needs to be added

After pressing the Done button the service is fully created and is available for the next step - creating the virtual server.

       Figure 23: Services created

Creating the Virtual Server is done via Traffic Management – Load Balancing – Virtual Servers, again within the Configuration tab.

        Figure 24: Creating Virtual Servers

Creating the Virtual Server starts with providing a name for the virtual server. This name is just for administrative purposes, so fill in a logical name. Secondly, the protocol needs to be set to RDP and the IP Address Type to IP Address. Fill in the IP address on which the virtual server will be accessed. Lastly, check that the port number is 3389.

              Figure 25: Create Load Balancing Virtual Server

After the basic settings we need to assign the corresponding services to the virtual server. Choose the > symbol at No Load Balancing Virtual Service Binding.

       Figure 26: Basic Settings Virtual Server, add services

The Service Binding window will open. Select the > symbol to select the services.

          Figure 27: Select Service

Select the services that are hosting the role.

      Figure 28: Select the services

The services are now selected and available to bind to the virtual server.

                    Figure 29: Services to bind to the virtual server

After adding the services, we need to use the OK button to continue with the next step.

      Figure 30: Service bound, press OK for the next step

After pressing OK the Traffic Settings appear, just accept the default values by pressing Done.

       Figure 31: Traffic Settings are set

After some time the Virtual Server status will change to green and the load balancing service is available.

     Figure 32: Virtual Server is up and running

To make sure that settings are retained when the NetScaler is rebooted, don’t forget to save your configuration using the floppy disk icon.

                                    Figure 33: Saving the configuration

The last step is to create a DNS record so the service can be reached on an FQDN. Choose a logical name and assign it to the IP address of the Virtual Server. Remember that this FQDN needs to be configured within the RD Connection Broker configuration, so either use the name you are already using (which probably means changing existing DNS records) or write down the new FQDN and use it in the RDCB wizard.
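If you manage the Remote Desktop deployment with PowerShell, the client access name can be pointed at the load-balanced FQDN with the RemoteDesktop module. The connection broker and FQDN values below are placeholders for your own deployment.

```powershell
# Sketch: set the deployment's client access name to the load-balanced FQDN (placeholder values).
Import-Module RemoteDesktop
Set-RDClientAccessName -ConnectionBroker "VBN-SRV016.domain.local" -ClientAccessName "rdcb.domain.local"
```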

                             Figure 34: Creating a DNS record

 

Remote Desktop Gateway

The last component that could be load balanced is the RD Gateway. However, for this component all communication flows through the load balancer. Because the NetScaler VPX Express is limited to 10 Mbit, it’s not a good idea to use the free version of the NetScaler VPX Express for this functionality, as you will run into the bandwidth restriction pretty quickly. If you upgrade to a licensed edition, however, you can use the NetScaler to load balance the RD Gateway. For this article this component is not suitable, so I won't go into detail about this set-up.

 

Summary

In the first part I described the steps to install and configure a Citrix NetScaler VPX Express as a highly available and fault-tolerant infrastructure. In the second part we described how to use the NetScaler infrastructure to load balance the Citrix StoreFront/Web Interface and Citrix Delivery Controller components. In this third and last article we built a load-balanced environment for Remote Desktop Web Access and Remote Desktop Connection Broker. The NetScaler VPX series offers lots of possibilities, of which I showed some examples that can be arranged with the free VPX Express edition.

Installing, Configuring, and Using Hyper-V in Windows 10


In this article, you learn how to install Hyper-V in Windows 10 and how to configure basic Hyper-V settings. You are also introduced to the Windows PowerShell cmdlet that allows you to perform Hyper-V installation, and how you can configure Hyper-V using Hyper-V Manager and Virtual Switch Manager. In addition, you learn how to create a new Virtual Machine and how to connect to it using the VMConnect tool.

Requirements for Hyper-V on Windows 10

Before installing Hyper-V on Windows 10, you must first ensure that you have one of the operating system editions that support Hyper-V. If you have a Windows 10 Pro, Enterprise, or Education edition, then you can enable Hyper-V on your system. However, if you own Windows 10 Home edition, then you will have to upgrade to one of the supported editions before you can install and use Hyper-V.

In terms of hardware requirements, you must have a system with at least 4 GB of RAM. Of course, the more virtual machines that you want to run simultaneously, the more memory that you will need in your system.

A crucial hardware requirement is that your system must have a 64-bit processor that supports Second Level Address Translation (SLAT). Hyper-V began to leverage SLAT in Windows Server 2008 R2 to reduce the overhead required in virtual to physical address mappings for virtual machines. Based on AMD-V Rapid Virtualization Indexing (RVI) and Intel VT Extended Page Tables (EPT) architectures, supported AMD and Intel processors can maintain address mappings and perform the two levels of address space translations required for each virtual machine in hardware. 

Leveraging the hardware eliminates the need to perform these tasks within Hyper-V, reducing the complexity of the hypervisor and the context switches needed to manage virtual machine page faults. The reduction in processor and memory overhead associated with SLAT improves scalability with respect to the number of virtual machines that can be executed concurrently.

Verifying Hardware Compatibility for Hyper-V on Windows 10

In order to verify if your system is hardware compatible before you try to install Hyper-V in Windows 10, open a command prompt (cmd.exe) and run systeminfo.exe. If your system is hardware compatible, you will see the Hyper-V related entries shown in Figure 1.


              Figure 1: Hyper-V Requirements from Systeminfo.exe

If all of these requirements have a value of Yes, then your system is ready for Hyper-V installation. If some of these values have a value of No, such as Virtualization Enabled in Firmware and Data Execution Prevention Available, then you may have to enable them in your system BIOS. The Second Level Address Translation and VM Monitor Mode Extensions are hardware properties, so if these do not display a Yes value, then you either have to upgrade your system processor if that is possible, or select a different system with a compatible processor architecture.
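If you prefer PowerShell over cmd.exe, the same requirement information can be queried there as well. The sketch below assumes the Get-ComputerInfo cmdlet available in Windows 10 / PowerShell 5.1; the Select-String variant simply filters the systeminfo.exe output.

```powershell
# Sketch: list the Hyper-V requirement fields (hypervisor present, SLAT, VM monitor
# mode extensions, virtualization enabled in firmware, DEP available).
Get-ComputerInfo -Property "HyperV*"

# Alternative: show the Hyper-V Requirements block from the systeminfo.exe output.
systeminfo.exe | Select-String -Context 0,4 "Hyper-V Requirements"
```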

 

Installing Hyper-V on Windows 10

Once you have verified the hardware compatibility of your system, you can start the Hyper-V installation.
  1. Click on the Windows button, and select Programs and Features (Figure 2).
         Figure 2: Selecting Programs and Features
  2. As shown in Figure 3, click on Turn Windows features on or off to configure new Windows features.
          Figure 3: Selecting Turn Windows Features On or Off
  3. From the Windows Features options, click Hyper-V, ensure that all the sub-options are selected, and then click OK (Figure 4).


           Figure 4: Selecting the Hyper-V Features
  4. After the Hyper-V features are installed, click Restart now in the Windows Features dialog (Figure 5).
           Figure 5: Windows Features Dialog after Hyper-V Feature Installation

In order to verify Hyper-V installation success on your system, open a command prompt and run systeminfo.exe again. You will see that the Hyper-V related entries have changed (Figure 6).

    Figure 6: Hyper-V Requirements from Systeminfo.exe after Hyper-V Installation

 

Using PowerShell to Install Hyper-V

If you prefer to use the command line to install Hyper-V on your system, you can use PowerShell instead of going through the graphical user interface (GUI).
  1. Click on the Windows button, select Search, and then enter PowerShell (Figure 7).

          Figure 7: Using Search to Find Windows PowerShell
  2. Right-click on Windows PowerShell and then select Run as Administrator.
  3. In Windows PowerShell, enter the cmdlet shown in Figure 8, and then press the Enter key.


          Figure 8: Windows PowerShell Cmdlet to Install Hyper-V
  4. Once the Windows PowerShell cmdlet successfully completes, you will see the output shown in Figure 9.

         Figure 9: Windows PowerShell Display after Hyper-V Installation
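The cmdlet from Figure 8 is not reproduced in the text above; a common way to enable the feature from an elevated PowerShell prompt, including all of its sub-features, is the DISM cmdlet shown below (treat this as a sketch and review the feature name for your build before running it).

```powershell
# Enable the Hyper-V feature and all sub-features, then reboot to complete the installation.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
Restart-Computer
```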

 

Configuring Hyper-V Settings using Hyper-V Manager

There are two main ways that you can configure Hyper-V after installation. You can continue to use Windows PowerShell cmdlets, or if you are more comfortable using a GUI, you can use Hyper-V Manager.
  1. In order to run Hyper-V Manager, click on the Windows button, select Search, and then enter Hyper-V Manager (Figure 10).

          Figure 10: Using Search to Find Hyper-V Manager
  2. Click on Hyper-V Manager to run the application.
  3. In Hyper-V Manager, in the Actions pane, click Hyper-V Settings (Figure 11).

 
     Figure 11: Hyper-V Manager Console
  4. On the Hyper-V Settings page, select Virtual Hard Disks to specify the default drive and folder to store virtual hard disk files (C:\Virtual Machines, in this example), and then click Apply (Figure 12).
 
     Figure 12: Hyper-V Settings – Virtual Hard Disks Page
  5. On the Hyper-V Settings page, select Virtual Machines to specify the default folder to store virtual machine configuration files (C:\Virtual Machines, in this example), and then click Apply (Figure 13).
 Figure 13: Hyper-V Manager – Virtual Machines Page
  6. On the Hyper-V Settings page, select Storage Migrations to set the number of simultaneous storage migrations that are allowed (4 allowed, in this example), and then click Apply (Figure 14).

     Figure 14: Hyper-V Manager – Storage Migrations Page
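The same three host settings can be applied with the Hyper-V PowerShell module; the paths and the storage migration limit below are the example values used in this walkthrough.

```powershell
# Sketch: set the default VHD/VM paths and the simultaneous storage migration limit.
Set-VMHost -VirtualHardDiskPath "C:\Virtual Machines" `
           -VirtualMachinePath "C:\Virtual Machines" `
           -MaximumStorageMigrations 4

# Verify the settings.
Get-VMHost | Format-List VirtualHardDiskPath, VirtualMachinePath, MaximumStorageMigrations
```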

 

Creating a Virtual Switch using Virtual Switch Manager

Similar to a network interface card (NIC) in a physical system, a virtual switch allows you to create one or more virtual networks and to use these virtual networks to interconnect virtual machines. You can create External, Internal, and Private virtual switches in Hyper-V. External switches allow you to interconnect virtual machines and provide them with connectivity to external physical networks by redirecting the traffic through a physical network adapter in the host. Internal virtual switches allow you to interconnect virtual machines and provide them with the ability to communicate with the Hyper-V host, but the virtual machines are isolated from external physical networks. Private virtual switches allow you to interconnect virtual machines, but they are isolated from both the Hyper-V host and from external physical networks. You can use the Virtual Switch Manager to create and manage virtual switches in Hyper-V.
  1. In Hyper-V Manager, in the Actions pane, click Virtual Switch Manager.
  2. Select New virtual network switch, then in the Actions pane, select External, and then click Create Virtual Switch (Figure 15).


     Figure 15: Virtual Switch Manager
  3. In the Virtual Switch Properties pane, name the virtual switch Public, allow the management operating system to share the network adapter bound to the virtual switch, and then click Apply (Figure 16).
 
     Figure 16: Virtual Switch Manager – External Virtual Switch Creation
  4. After you click Apply, you will see a warning message, as shown in Figure 17, indicating a potential loss of network connectivity while Hyper-V creates the new virtual switch.


                                      Figure 17: Warning Message before Virtual Switch Creation
  5. Click Yes to proceed with the virtual switch creation.
  6. Click OK to close the Virtual Switch Manager.
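The equivalent PowerShell step is a single New-VMSwitch call; the physical adapter name below is a placeholder, so list the candidates with Get-NetAdapter first.

```powershell
# Sketch: create an external switch named Public that shares the host's physical adapter.
Get-NetAdapter
New-VMSwitch -Name "Public" -NetAdapterName "Ethernet" -AllowManagementOS $true
```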

 

Creating a New Virtual Machine using Hyper-V Manager

After you have configured these very basic Hyper-V settings and created a new virtual switch, you can move on to create a new virtual machine using Hyper-V Manager.
  1. In Hyper-V Manager, in the Actions pane, click New, and then click Virtual Machine (Figure 18).

 
     Figure 18: Creating a new Virtual Machine using Hyper-V Manager
  2. In the New Virtual Machine Wizard, click Next to skip the Before You Begin page (Figure 19).
       Figure 19: New Virtual Machine Wizard – Before You Begin
  3. On the Specify Name and Location page, enter a name for the new virtual machine (Demo Virtual Machine, in this example), and then click Next to continue (Figure 20).


      Figure 20: New Virtual Machine Wizard – Specify Name and Location
  4. On the Specify Generation page, select the Generation option that supports the guest operating system you will install in the virtual machine. Generation 1 supports most 32-bit guest operating systems, while Generation 2 supports most 64-bit versions of Windows and newer Linux and FreeBSD operating systems. If in doubt, review which generation option best supports your selected guest operating system. Once you have made a selection, click Next to continue (Figure 21).
       Figure 21: New Virtual Machine Wizard – Specify Generation
  5. On the Assign Memory page, enter a value for the amount of memory to allocate to your new virtual machine at startup (1024 MB, in this example), and leave the Dynamic Memory option selected to allow Hyper-V to manage the memory allocation. After you have entered a memory value, click Next (Figure 22).


       Figure 22: New Virtual Machine Wizard – Assign Memory
  6. On the Configure Network page, select the virtual switch that you just created (Public, in this example), and then click Next (Figure 23).


       Figure 23: New Virtual Machine Wizard – Configure Networking
  7. On the Connect Virtual Hard Disk page, review the options and then click Next (Figure 24).


       Figure 24: New Virtual Machine Wizard – Connect Virtual Hard Disk
  8. On the Installation Options page, select the option to Install an operating system from a bootable image file, click Browse to navigate and select the ISO image, and then click Next (Figure 25).


        Figure 25: New Virtual Machine Wizard – Installation Options
  9. On the Completing the New Virtual Machine Wizard page, review the settings, and then click Finish to create the virtual machine and close the wizard (Figure 26).


       Figure 26: Completing the New Virtual Machine Wizard
  10. After Hyper-V creates the new virtual machine, you can view it in Hyper-V Manager (Figure 27).

 
     Figure 27: Viewing the New Virtual Machine in Hyper-V Manager
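The wizard steps above map to a single New-VM call plus two follow-up cmdlets in the Hyper-V module. The VHDX size and ISO path below are illustrative assumptions; the name, memory, generation and switch follow the example values from the wizard.

```powershell
# Sketch: create the same virtual machine from PowerShell.
New-VM -Name "Demo Virtual Machine" `
       -Generation 2 `
       -MemoryStartupBytes 1024MB `
       -NewVHDPath "C:\Virtual Machines\Demo Virtual Machine.vhdx" `
       -NewVHDSizeBytes 60GB `
       -SwitchName "Public"

# Enable Dynamic Memory and attach the installation ISO (path is a placeholder).
Set-VM -Name "Demo Virtual Machine" -DynamicMemory
Add-VMDvdDrive -VMName "Demo Virtual Machine" -Path "C:\ISO\install.iso"
```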

 

Finishing Guest Operating System Installation using VMConnect

After the creation of the new virtual machine hard disk and configuration files, you have to start the virtual machine to finish the guest operating system installation using the selected image file.
  1. In Hyper-V Manager, double-click on the new virtual machine to launch the VMConnect tool.
  2. In VMConnect, click on the green Start button, and follow the instructions on the screen to boot the virtual machine, as shown in Figure 28.
 
     Figure 28: Starting a new Virtual Machine using VMConnect
  3. As shown in Figure 29, the new virtual machine boots into the Windows setup, and you can complete the installation in the same manner as you would on a physical system.

     Figure 29: Completing Windows Setup using VMConnect
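The same two steps can be scripted: Start-VM boots the machine and vmconnect.exe opens the console session (the host name localhost assumes you are running this on the Hyper-V host itself).

```powershell
# Sketch: start the virtual machine and open a VMConnect console to it.
Start-VM -Name "Demo Virtual Machine"
vmconnect.exe localhost "Demo Virtual Machine"
```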

After you complete the virtual machine guest operating system installation, you can continue to access and manage it using VMConnect. To name just a few options, VMConnect allows you to start, turn off, shut down, save, pause, and reset the virtual machine. Table 1 contains a more extensive list of VMConnect commands and the alternate key sequence or action you can use to execute each command.

Command | Alternate Key Sequence
Start the virtual machine | CTRL+S
Turn off the VM | (toolbar button)
Shut down a VM | (toolbar button)
Save | (toolbar button)
Pause | (toolbar button)
Reset | (toolbar button)
Mouse release | CTRL+ALT+LEFT arrow
Send CTRL+ALT+DELETE to the virtual machine | CTRL+ALT+END
Switch from full-screen mode back to windowed mode | CTRL+ALT+BREAK
Use enhanced session mode | (toolbar button)
Open the settings for the virtual machine | CTRL+O
Create a checkpoint | CTRL+N or select Action > Checkpoint
Revert to a checkpoint | CTRL+E
Do a screen capture | CTRL+C
Return mouse clicks or keyboard input to the physical computer | Press CTRL+ALT+LEFT arrow and then move the mouse pointer outside of the virtual machine window. This is the mouse release key combination and it can be changed in the Hyper-V settings in Hyper-V Manager.
Send mouse clicks or keyboard input to the virtual machine | Click anywhere in the virtual machine window. The mouse pointer may appear as a small dot when you connect to a running virtual machine.
Change the settings of the virtual machine | Select File > Settings.
Connect to an image file (.iso) or a virtual floppy disk file (.vfd) | Click Media on the menu. Virtual floppy disks are not supported for Generation 2 virtual machines.
Table 1: VMConnect Commands and Key Sequences

 

Conclusion

In this article, you learned about the hardware and software prerequisites to install Hyper-V in Windows 10 using the GUI and the Windows PowerShell cmdlet. Hyper-V configuration is performed using Hyper-V Manager, or Windows PowerShell if you prefer to manage systems from the command line. You can use Virtual Switch Manager to create new networks that allow the connection of virtual machines to each other, the Hyper-V host, or external physical networks. You can create new virtual machines using Hyper-V Manager, and connect to them using the VMConnect tool. All of these tools should look very familiar to anyone who has used previous versions of Hyper-V on Windows desktop and server operating system editions.



Hyper-V Optimization Tips (Part 1)

The first article in this series examines disk cache settings for virtual disks attached to virtual machines running on Hyper-V hosts.

If you would like to read the next part in this article series please go to Hyper-V Optimization Tips (Part 2).

 

Introduction

Disk write caching is a performance feature introduced with Windows Server 2003 and Windows XP that enables the operating system and applications to run faster by allowing them to not have to wait for data write requests to be committed to disk. 

But while these "delayed writes" can help Windows run faster, they also have a risk associated with them. That's because a sudden hardware failure, software crash or power outage could cause the cached data to be lost. 

The result can be that Windows thinks certain data was written to disk whereas in actual fact the writes weren't committed to disk. In addition, file system corruption and/or data loss may occur. Having a backup power source such as a UPS can help mitigate such risks.

For scenarios where data integrity is more important than performance it's important that disk caching be disabled. One example of such a scenario is Active Directory domain controllers where disk write caching should always be disabled to prevent corruption of the directory database and/or loss of important security information for the domain. 

In fact when you promote a Windows Server system to the role of domain controller, Windows automatically disables its write cache function. On the other hand there are also some applications where disk write caching always needs to be enabled. 

An example of this is Microsoft Exchange Server which uses the Windows write cache function for its own transactional logging function. This is one reason why it's generally not a good idea to deploy Exchange Server on a domain controller as explained here.

Understanding disk write caching

Disk write caching can be enabled or disabled on a per-volume basis in the Windows operating system by configuring the settings found by opening Computer Management, selecting Disk Management, right-clicking on a disk and selecting the Policies tab on the Properties sheet as shown in Figure 1 below:

                              Figure 1: Settings for configuring disk write caching

To begin with, it's important that you understand the difference between the two settings shown in the above figure. The first setting, "Enable write caching on the device", which is enabled by default, tells your storage hardware to signal to Windows that a write request has been completed even though the actual data to be written has not yet been flushed from the intermediate hardware cache (volatile storage i.e. memory) to the final storage location (non-volatile storage i.e. the disk).

Data from write requests is usually only held for a brief time interval as the storage hardware usually flushes its cache automatically when the hardware is idle. Certain operating system commands involving NTFS can also force cached data to be flushed to disk. If the power to the system fails while data still remains in the cache, data loss or corruption may cause applications to fail or the operating system to crash.

The second configuration setting, "Turn off Windows write-cache buffer flushing on the device", has to do with write requests that have been tagged by the operating system as "write-through" using the ForceUnitAccess flag. When a write request has been tagged as write-through, the storage hardware is supposed to guarantee that the data has been written to non-volatile (disk) storage and has not been temporarily stored in some intermediate cache on the storage hardware.

Enterprise storage hardware can accomplish this in several ways, one of the more common approaches being when the intermediate cache on the storage hardware is battery-backed, which enables dirty (cached) writes to be completed (flushed to disk in proper sequential order) even when the server system itself experiences a power failure or operating system crash. If you enable the "Turn off Windows write-cache buffer flushing on the device" setting however, the ForceUnitAccess flag is removed from any write requests that are tagged with this flag.

This results in greater use of the cache and thus better write performance, but this option should only be enabled if a UPS is present that backs up the power for hardware along the entire I/O path (or if the machine is a laptop with a working battery in it). Because of the potential added risk of data loss that may occur, this second write caching setting is not enabled by default on Windows Server systems. For more information and usage guidance on these two disk write caching settings, see the Performance Tuning Guidelines for Windows Server 2012 R2 on MSDN.
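If you want to inventory these cache settings across hosts rather than click through Disk Management, the Storage module on recent Windows versions exposes them via Get-StorageAdvancedProperty. Treat the sketch below as an assumption to verify on your build: the cmdlet and the IsDeviceCacheEnabled/IsPowerProtected properties are how I recall them being exposed, and not every disk or controller reports these values.

```powershell
# Sketch: report the device cache state for each physical disk (verify cmdlet
# availability on your Windows build; some devices do not report these properties).
Get-PhysicalDisk | ForEach-Object {
    $props = $_ | Get-StorageAdvancedProperty
    [pscustomobject]@{
        Disk                 = $_.FriendlyName
        IsDeviceCacheEnabled = $props.IsDeviceCacheEnabled
        IsPowerProtected     = $props.IsPowerProtected
    }
}
```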


Scenarios for modifying the default write cache settings

Concerning the first write caching setting, I've already mentioned an example of a scenario where write caching is automatically disabled: on domain controllers. But really for any application where data integrity is paramount, you may want to consider disabling the disk cache to ensure that all writes are committed to disk storage before the storage subsystem reports success for the write request. If this is the case however then be sure to disable write caching both in Windows and in the firmware of your storage controller.

And for some fascinating history concerning the second setting, be sure to see the post titled Dangerous setting is dangerous: This is why you shouldn't turn off write cache buffer flushing on Raymond Chen's blog The Old New Thing. Be sure to read the comments to this post too as they give some additional useful insights--for example the comment that says "Lots of hard drives cheat and do write caching internally--even when the protocol doesn't allow for it" which is kind of scary. Raymond's basic point in his post seems to be that the second setting should never be selected and instead should be eliminated from the Windows UI. 

However as Emmanuel Bergerat points out in his MSDN post titled The checkbox that saves you hours, there are actually some real world scenarios where selecting the second checkbox makes sense. Finally, note that some server systems have firmware (BIOS or UEFI) settings you can use to configure intermediate caching for the storage subsystem, so it's not just the Windows settings that you need to be aware of--to configure write caching you must do so both in the operating system and on the storage controller.


Disk write caching on virtual machines

The question we want to focus on for the remainder of this article is the impact of using these settings for virtual machines running on Hyper-V hosts. Figure 2 shows a virtual machine named SERVER03 running Windows Server 2012 R2 that is opened in Virtual Machine Connection on a Hyper-V host named HOST40, which is also running Windows Server 2012 R2. When we try to clear the "Enable write caching on the device" checkbox to disable write caching on the virtual hard disk (VHD) for this virtual machine, the error dialog shown appears informing us that this action is not allowed:

    Figure 2: Attempting to disable write caching for a virtual hard disk.

If you click OK in the above dialog box, the default write caching settings are restored and a warning message is displayed:

   Figure 3: You cannot disable write caching on the virtual hard disk.

Let's think about this more carefully. First, it makes total sense that Hyper-V doesn't allow you to change the write cache settings for virtual hard disks attached to virtual machines. After all, a virtual hard disk isn't really a storage device at all, it's simply a file (.vhd or .vhdx) stored on the file system of the storage device used by the host. Since it's just a file, a virtual hard disk doesn't have any form of disk cache associated with it. 

So what really matters in this kind of scenario is whether you need to enable or disable disk write caching on the underlying physical storage on which the virtual machine's VHD or VHDX is stored. The type of write caching used by the host's physical storage obviously depends on the type of storage device the host is using, which might be internal or directly-attached HDD or SSD storage, hardware RAID, HBA to fiber channel SAN, and so on.

But if you cannot disable write caching for this virtual hard disk then why was it possible in earlier versions of Hyper-V to disable write caching in a virtual machine using the above Policies property sheet? The answer (as I've been told by a Hyper-V expert at Microsoft) is simply that there was a bug in the Windows ataport and Hyper-V storage stack in earlier versions of Hyper-V that allowed you to change the disk write caching setting of the system drive of a virtual machine if that system drive was backed by a virtual hard disk that used virtual IDE (vIDE). 

This bug gave users the impression they could disable write caching to improve data integrity for write operations to the virtual hard disk, but in reality all it was really doing was creating the potential for data loss and corruption of the virtual hard disk should the underlying Hyper-V host experience a power outage or an unplanned restart (see KB2853952 for details). Microsoft released a fix for this issue as described in that KB article, but the point is that write caching isn't configurable for virtual hard disks on virtual machines--and nor should it be.

But while Hyper-V won't allow you to disable write caching on a virtual hard disk by clearing the "Enable write caching on the device" setting in the guest operating system of a virtual machine, Hyper-V does allow you to select the second setting "Turn off Windows write-cache buffer flushing on the device" as demonstrated in the next screenshot:

    Figure 4: You can however turn off write-cache buffering on this virtual disk.

Why is that and what's the point of doing this? First, remember that it's the first setting that controls whether write caching is enabled on the disk or not. And since a virtual hard disk isn't really a disk at all, that setting has no meaning as far as virtual disks are concerned. But the second setting is different and does have meaning as it controls the cache flush on/off settings for the disk. When you select the second setting, cache flushes will essentially pretend to succeed--at least at the level of the software stack. 

These flushes can be costly for certain kinds of physical storage devices like hard disk drives and SATA SSDs since the command queue is drained when the dirty data in the cache is flushed. Flushes also usually have their own built-in cost associated with how the storage controller processes them. So when you select this setting in the guest OS for a virtual hard disk in a virtual machine, you might see some performance improvement for applications running in the virtual machine. But always remember that it's the host's disk cache settings that are the important ones as far as data integrity are concerned.

Additional resources

For more information on how caching works in the Hyper-V virtual storage stack, see KB2801713 Hyper-V storage: Caching layers and implications for data consistency. Remember also that disk caching is only one of many issues associated with storage on Hyper-V hosts. 



Hyper-V Optimization Tips (Part 2)


This article covers some tips that can help you optimize and troubleshoot storage performance on Hyper-V hosts.


If you would like to read the first part in this article series please go to Hyper-V Optimization Tips (Part 1).

Introduction

In the previous article in this series we examined disk caching settings and how they should be configured on both Hyper-V hosts and on the virtual machines running on these hosts. In this article we will examine the dependence of Hyper-V performance on the underlying storage subsystem of those hosts. In particular we will be examining clustered Hyper-V hosts and how to optimize and troubleshoot performance on those systems through judicious choice of storage hardware and tuning of the storage subsystem on the hosts. Since this is a broad topic that is heavily dependent on your choice of storage hardware vendor, we will only be examining a few key aspects of this subject here.


Identifying storage bottlenecks

For the scenario we are going to examine, let's assume we have a four-node Windows Server 2012 R2 Hyper-V host cluster with CSV storage that is hosting a dozen virtual machines running as front-end web servers for a large-scale web application. Let's also assume that these virtual machines are using the Virtual Fibre Channel feature of Windows Server 2012 R2 Hyper-V which lets you connect to Fibre Channel SAN storage from within a virtual machine via Fibre Channel host bus adapters (HBAs) on the host cluster nodes.

Users of your web application have been complaining that the performance of the application is often "slow" from their perspective. But "slow" from the end-user's perspective is rather subjective, so what would be a more accurate way to measure application performance? One measure you could look at is the disk answer time, that is, the average response times of the CSV volumes of your host cluster. The following table which associates application performance levels with disk answer times and compares them with raw storage experiences was shared with me by a colleague who works in the field with customers that have large Hyper-V host clusters deployed.

Performance | Average disk answer time | Discussion
Very good | Less than 5 milliseconds | This level of performance is similar to that provided by a dedicated SAS disk
Good | Between 5 and 10 msec | This level of performance is similar to that provided by a dedicated SATA disk
Satisfactory | Between 10 and 20 msec | This level of performance is generally not acceptable for I/O intensive workloads such as when databases are involved
Poor, needs attention | Between 20 and 50 msec | This level of performance may cause users to remark that the application sometimes "feels slow"
A serious bottleneck | More than 50 msec | This level of performance will generally cause users to complain
Table 1: Associating application performance level with disk answer time

If the average disk answer time is more than 20 msec then you should do some performance monitoring of your system to try and determine the cause of the problem. The performance counters you should usually start collecting to monitor disk performance on your Hyper-V hosts are these:

\LogicalDisk(*)\Avg. Disk sec/Read
\LogicalDisk(*)\Avg. Disk sec/Write

It's usually best to focus on logical disk counters instead of physical disk counters because applications and services running on Windows Server utilize logical drives represented as drive letters, whereas the actual physical disk (LUN) being presented to the operating system may consist of multiple physical disk drives arranged in a disk array.
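A quick way to sample these counters is Get-Counter; the sampling interval and count below are arbitrary examples.

```powershell
# Sketch: sample logical-disk latency every 5 seconds for one minute.
# Values are reported in seconds, so 0.020 corresponds to 20 msec.
$counters = "\LogicalDisk(*)\Avg. Disk sec/Read",
            "\LogicalDisk(*)\Avg. Disk sec/Write"
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12
```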

Resolving disk bottleneck problems

Once you've identified that your clustered Hyper-V host is experiencing performance problems because of a storage bottleneck, there are a number of different steps you can take to try and resolve or mitigate this problem. The steps described in this section are not meant to be exhaustive but can often be of help in such "the application feels slow" scenarios.

Follow your storage vendor's best practices
The first thing you should probably do once you've identified storage as a performance bottleneck for your Hyper-V cluster is to check whether your storage vendor has a "best practices" document that covers different Hyper-V scenarios. Storage vendors often create such documentation based on well-understood I/O patterns for different kinds of workloads, and if you can closely match your vendor's documentation to the kind of workload your own application runs on your clustered Hyper-V hosts, then you should make sure you're adhering to the various recommendations your storage vendor makes for that kind of workload.

By following your storage vendor's recommendations you may find that you have resolved or at least have mitigated your performance problem. On the other hand, you might find little or no improvement by following your storage vendor's advice. That's because storage profiling in "lab" environments sometimes doesn't translate well to the "real" world where users sometimes behave in unpredictable ways and multi-tier applications can be more complex in their behavior than is typically seen with "sample" applications.

Use faster disks in your storage array
Using your storage vendor's software, monitor the load on your storage array to see whether the average load is unacceptably high. If it is, one obvious step you can take is to replace slower disks with faster ones, for example 15k SAS disks. In general, if you want to ensure optimal performance of your storage array, SAS disks of either 10k or 15k should always be preferred over SATA disks of any speed.

Use RAID 10 instead of RAID 5
Traditionally, RAID 5 (striping with parity) has been the most popular RAID level used for servers. RAID 10 (mirroring with striping) on the other hand uses a striped array of disks that are mirrored to a second identical set of striped disks. RAID 10 provides the best read-and-write performance of any RAID level, but only at the expense of needing twice as many disks for a given amount of total storage. 

So if you can afford to dedicate the extra storage resources to your host cluster, use RAID 10 for the storage utilized by your virtual machines via Virtual Fibre Channel. In any case, you should generally not be using either RAID 5 or RAID 6 (double parity RAID) for storage used by virtualized workloads, as these levels handle the random write I/O typical of such workloads poorly. There may be exceptions to this rule, but the only way to properly identify them is to monitor the read/write performance of different RAID levels for your application so you have the evidence to select the most appropriate RAID level for your particular scenario.
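If you want to compare RAID levels with a repeatable, synthetic read/write mix before committing to one, Microsoft's free DiskSpd utility is one way to do so. The run below is only a sketch: the target path, test file size, duration and 30 percent write ratio are assumptions that should be adjusted to resemble your actual workload.

# Illustrative DiskSpd run against a test file on the candidate volume (T: is an assumed drive letter).
# -c10G creates a 10GB test file, -d120 runs for 120 seconds, -r -b8K issues random 8KB I/O,
# -w30 uses 30% writes, -t4 -o8 uses 4 threads with 8 outstanding I/Os each,
# -Sh disables software and hardware buffering, -L captures latency statistics.
.\diskspd.exe -c10G -d120 -r -b8K -w30 -t4 -o8 -Sh -L T:\diskspd-test.dat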

Ensure also that you have as many RAID sets as you have nodes in your Hyper-V host cluster. In other words, having a four-node host cluster means you should have four RAID sets configured on your storage array i.e. one RAID set per host.

Check your storage controller configuration
Make sure the firmware of your storage controller has been updated with the latest rev from your storage vendor to ensure optimal performance of your storage array. If your storage controller is experiencing high CPU load then the disks in your storage array are probably too slow and should be upgraded to faster disks (and if possible to SAS type as mentioned earlier).

Also, if you haven't enabled write caching on your storage controller you may want to do this as it can increase I/O capacity by 20% or more depending on your workload and the kind of RAID level you have implemented. Of course there are other considerations involved with regard to write caching, see Part 1 in this series for more concerning this.


Conclusion

We'll examine some other tips for improving storage performance in Hyper-V environments in future articles in this series.



A new Android banking trojan is also ransomware


The Xbot is not widespread yet but is targeting devices in Australia and Russia.

A new kind of Android malware steals online banking credentials and can hold a device's files hostage in exchange for a ransom, delivering a particularly nasty one-two punch. The malware, called Xbot, is not widespread yet and appears to be just targeting devices in Australia and Russia, wrote researchers with Palo Alto Networks in a blog post on Thursday.


But they believe whoever is behind Xbot may try to expand its target base.

"As the author appears to be putting considerable time and effort into making this Trojan more complex and harder to detect, it’s likely that its ability to infect users and remain hidden will only grow," Palo Alto wrote.

Xbot uses a technique called activity hijacking to carry out attacks aimed at stealing online banking and personal details. 

It essentially allows the malware to launch a different action when someone tries to launch an application. Users are unaware that they're actually using the wrong program or function.

Activity hijacking takes advantage of features in Android versions prior to 5.0. Google has since developed defenses against it, so only older devices or those that have not been updated would be affected.

In one type of attack, Xbot monitors the app a user has launched. If it is a particular online banking app, Xbot intervenes and displays an interface that obscures the real app.

The bogus interface is actually downloaded from a command-and-control server and displayed using WebView, Palo Alto wrote. The legitimate applications are not actually tampered with.

"So far we’ve found seven different faked interfaces," Palo Alto wrote. "We identified six of them – they’re imitating apps for some of the most popular banks in Australia. The interfaces are very similar to these banks’ official apps’ login interfaces. If a victim fills out the form, the bank account number, password, and security tokens will be sent," to the command-and-control server.

Xbot can also bring up an interface through WebView saying the device has been infected with CryptoLocker, a well-known ransomware program. Ransomware encrypts files and then asks for payment for the decryption key. In this case, the attackers ask for US$100 to be paid through a spoofed PayPal site.
Xbot will actually encrypt files on the device's external storage. However, the encryption algorithm used is weak, and it would be possible to recover the files, Palo Alto wrote.

Xbot can also scrape the phone for personal data, such as contacts, SMSes and phone numbers and send the data to the attackers.



How To Downgrade From Windows 10 To Windows 8.1, 8 or 7


Windows 10 not your cup of tea? Here's how to get back to your previous version of Win7 or Win8.1

Hundreds of millions of Windows 10 users can’t be wrong -- or can they? I hear from people every day who tried the Win10 upgrade and for a variety of reasons -- broken drivers, incompatible programs, unfamiliarity, fear of snooping, doubt about Win10’s future -- want to get back to their good ol’ Windows 7 or 8.1.


If you performed an upgrade using Microsoft’s tools and anointed techniques, rolling back should be easy. Operative term: “should.” Unfortunately, many people find that Win10 is a one-way trip -- sometimes for very good reason.

Here’s a thorough rundown of what you should expect, during the upgrade, then amid the rollback, along with a list of what frequently goes wrong and a bunch of tips on how to make the round trip less painful.
If you’ve upgraded from Win7 or Win8.1 to Win10 and you love your new system, more power to ya. But if you have a nagging doubt -- or want to know what’s in store if you decide to move back -- this report details what awaits.

Anatomy of a hassle-free rollback

Most people who want to roll back from Windows 10 to their previous version of Windows have no problem with the mechanics. Providing you still qualify for a rollback (see the next section), the method for moving back is easy.

Caveat: If your original Windows 7 or Windows 8.1 system had log-on IDs with passwords, you’ll need those passwords to log in to the original accounts. If you changed the password while in Windows 10 (local account), you need your old password, not your new one. If you created a new account while in Windows 10, you have to delete it before reverting to the earlier version of Windows.

Step 1. Before you change any operating system it’s a good idea to make a full system backup. Many people recommend Acronis for the job, but Windows 10 has a good system image program as well. It’s identical to the Windows 7 version, but it’s hard to find. To get to the system image program, in the Win10 Cortana search box, type Windows Backup, press Enter, on the left click Create a System Image, and follow the directions.


Click Start > Settings > Update & security > Recovery, and you’ll see an entry to “Go back to Windows 7” or “Go back to Windows 8.1.”
Step 2. In Windows 10. Click Start > Settings > Update & security > Recovery. On the right, you’ll see an entry to “Go back to Windows 7” (see screenshot) or “Go back to Windows 8.1,” depending on the version of Windows from whence you came.

If you don’t see the “Go back to” option and are using an administrator account, you’ve likely fallen victim to one of the many gotchas that surround the upgrade. See the next section -- and don’t get your hopes up.


When going back to a previous Windows, you’re given the choice to keep your files or remove everything.
Step 3. If you choose “Go back to a previous Windows,” you’re given a choice (screenshot), analogous to the choice you made when you upgraded to Windows 10, to either “Keep my files” or “Remove everything.” The former keeps your files (as long as they’re located in the usual places), so changes you made to them in Windows 10 will appear back in Windows 7 (or 8.1). The latter wipes out all of your files, apps, and settings, as you would expect.

Step 4. The Windows rollback software wants to know why you are rolling back, offers to check for updates in a last-ditch attempt to keep you in the Windows 10 fold, warns you “After going back you’ll have to reinstall some programs” (a problem I didn’t encounter with my rather pedestrian test programs), thanks you for trying Windows 10, then lets you go back.

Step 5. After a while (many minutes, sometimes hours) you arrive back at the Windows 7 (or 8.1) log-on screen. Click on a log-on ID and provide a password; you’re ready to go with your old version.
I found, in extensive testing, that “Keep my files” does, in spite of the warning, restore apps (programs) and settings to the original apps and settings -- the ones that existed when you upgraded from Win7 to Win10.

Any modifications made to those programs (for example, applying security updates to Office programs) while using Windows 10 will not be applied when you return to Win7 -- you have to apply them again.
On the other hand, changes made to your regular files while working in Windows 10 -- edits made to Office documents, for example, or new files created while working with Windows 10 -- may or may not make it back to Windows 7.

I had no problems with files stored in My Documents; edits made to those documents persisted when Windows 10 rolled back to Windows 7. But files stored in other locations (specifically in the \Public\Documents folder or on the desktop) didn’t make it back: Word docs created in Win10 simply disappeared when rolling back to Win7, even though they were on the desktop, or in the Public Documents folder.

One oddity may prove useful: If you upgrade to Windows 10, create or edit documents in a strange location, then roll back to Windows 7 (or 8.1), those documents may not make the transition. Amazingly, if you then upgrade again to Windows 10, the documents may re-appear. You can retrieve the “lost” documents, stick them in a convenient place (such as on a USB drive or in the cloud), then roll back to Windows 7, and pull the files back again.

Important lesson: Back up your data files before you revert to an earlier version of Windows. If you lose a file while going from Windows 7 to Windows 10, you can usually find it from inside Win10 in the hidden Windows.old folder. But when you go back from Win10 to Win7, there is no Windows.old folder.

Impediments to rollbacks

Microsoft promises that you can upgrade to Windows 10, then roll back, if you perform the rollback within 30 days. While that’s true to a first approximation, the details are a shade more complex.

When you perform an in-place upgrade from Windows 7 (or 8.1) to Windows 10, the installer creates three hidden folders:
  1. C:\Windows.old
  2. C:\$Windows.~BT
  3. C:\$Windows.~WS
Those folders can be very large. Upgrading from a clean Windows 7 machine with Office 2010 installed, C:\Windows.old runs 21GB.
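If you want to see how much space these rollback folders are taking up on your own machine, a quick (if slow-running) sketch using PowerShell is shown below; run it from an elevated prompt, and expect zeros for any folder that no longer exists.

# Total the size of the hidden rollback folders.
$folders = 'C:\Windows.old', 'C:\$Windows.~BT', 'C:\$Windows.~WS'
foreach ($folder in $folders) {
    $bytes = (Get-ChildItem -LiteralPath $folder -Recurse -Force -ErrorAction SilentlyContinue |
              Measure-Object -Property Length -Sum).Sum
    '{0,-16} {1,8:N1} GB' -f $folder, ($bytes / 1GB)
}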




Deleting C:\Windows.old, C:\$Windows.~BT, or C:\$Windows.~WS -- or any of their contents -- will prevent you from rolling your system back.
Deleting the hidden C:\Windows.old folder, either of the other two folders, or any of their contents, will trigger a “We’re sorry, but you can’t go back” message (screenshot). Those are the folders that hold all of your old system, including programs and data. Generally, it’s difficult to delete the folders manually, but if you run Disk Cleanup in Windows 10, opt to Clean up System files, and check the box marked Previous Windows installation(s), your Windows.old folder disappears and can’t be retrieved.

(Older posts suggest that running the Windows Media Creation tool will delete the $Windows.~BT folder. That may have been true six months ago, but it looks like Microsoft fixed the problem.)

Although it isn’t well documented, apparently the Win10 upgrade installer sets a Scheduled Task to delete those files -- they take up a lot of room, and understandably, Microsoft wants to give that room back to you. I couldn’t find any associated setting in Task Scheduler, nor could I find any documentation about the task, so the removal of those files after 30 days may be more complicated than most assume. Others have found that moving (or renaming) those files, then moving them back after the 30 days has expired, does not reload the rollback mechanism. If you think you can be tricky and hide the files, returning them when you want them, I’ve found no indication that’s possible.

You can, however, roll back from Windows 10 to Windows 7, then roll forward again. By rerunning the downgrade/upgrade cycle within the 30-day window, you’re good for another 30 days. I’ve rolled back and forth four different times on the same machine, with no noticeable problems.

There are other situations where either Windows.old never gets generated, or it is stripped of all of your programs and data. That’s what happens with a clean install.

It shouldn’t be any surprise that if you run the Windows Media Creation tool, use it to “Upgrade now,” and in the dialog marked “Choose what to keep,” specify Nothing, you won’t be able to roll back to your original programs or files. This is a common technique for performing a clean install of Windows 10 -- highly recommended to make sure Win10 is more stable. Unfortunately, it also removes your ability to go back to Win7 or 8.1.

In the same vein, if you upgrade to Windows 10, use either the Media Creation Tool or the Windows 10 “Reset this PC” function (Start > Settings > Update & security > Recovery), then tell Windows that you want to “Remove everything / Removes all of your personal files, apps and settings,” the key folders will be removed, and you can’t revert to your old version of Windows.

I’ve seen a lot of advice for recovering the three key hidden folders, should they be deleted. Unfortunately, I haven’t witnessed any approach that works consistently.


That thing about the 30-day clock

After 30 days, you're up the ol' creek without a paddle. If you want to go back to Win7 or 8.1, you have to re-install it from scratch, and you're responsible for moving your apps and data.

If you made a system backup before you upgraded to Win10, you can, of course, go back to that backup. Usual system backup rules: What you get is an exact copy of what you had at the point you made the backup.

If you're coming close to your 30 days, and are the cautious type, you should consider rolling back (taking into account the disappearance of files in unusual places), then rolling forward again. That resets the clock, so you get an additional 30 days to see if you like the Win10 experience.

It's not clear how Microsoft sets the 30 day clock. You'd think it would be a Scheduled Task, but I looked high and low and couldn't find it. (I was anticipating a hack where you could re-schedule the task manually.) But what is clear is that once the files necessary to roll back are wiped out, you're SOL.


What to do if the wheels fall off

In my experience, the rollback to Windows 7 and 8.1 works remarkably well, given the caveats mentioned previously. I have heard of problems, though, ranging from icons that don’t display properly on the recovered desktop, to missing data, to programs/drivers that aren’t working correctly, even though they used to work fine.

If you can’t get Windows to roll back and absolutely detest Windows 10, you’re up against a very tough choice. The only option I’ve found that works reliably is to re-install your original version of Windows from scratch. On some machines, the old recovery partition still exists, and you can bring back your old version of Windows by going through the standard recovery partition technique (which varies from manufacturer to manufacturer), commonly called a “Factory restore.” More frequently, you get to start all over with a fresh install of Windows 7 or 8.1.

That is a completely different can of worms. There are raging debates about the availability and legality of copies of Windows 7 -- suffice it to say that Microsoft doesn’t have any legal source of the bits for individuals. If you’re very lucky and you have the right kind of key, you can download an ISO of Windows 8.1 on an official Microsoft site.

I had a friend stuck in a similar situation, where Windows 10 was unstable. Rolling back from Win10 to Win7 left him with a system that constantly crashed. My suggestion: Back up his data as best he could, rerun the upgrade, then go to Windows 10. Inside Windows 10, run a Reset (Start > Settings > Update & security > Recovery), then “Remove everything / Removes all of your personal files, apps and settings.” That triggers a clean install of Windows 10. He may not like Windows 10, but running that clean install made it substantially more stable. He learned to live with it.
Your mileage may vary, of course.



Windows and Hyper-V Containers in Windows Server 2016


This article presents an overview of the new Windows and Hyper-V container technology that Microsoft has developed for release in Windows Server 2016.

Background

If you have been working in IT for some time, you know that container technology is nothing new; until recently, though, it has mostly been utilized in Linux environments. IT professionals that work on Windows infrastructures are rediscovering the technology through product releases from companies like Docker and Microsoft. With the release of Windows Server 2016, Microsoft provides an implementation of containers named Windows Server Containers and Hyper-V Containers, and also makes this technology available in Microsoft Azure.

At a high level, a container leverages operating system virtualization to allow execution and isolation of different applications and services on a single host system without having to worry about whether or not each application is compatible with the others. Every application or service running in a container has its own view of the operating system, processes, registry, file system, and network. This offers a less rigorous isolation boundary than Hyper-V virtual machines (VM), but provides a virtualization environment that is more efficient with less overhead to run trusted applications.

Operating System Virtualization

Operating system virtualization is based on the abstraction of the operating system layer to support multiple, isolated partitions or containers on a single-instance host operating system. The virtualization is accomplished by multiplexing access to the kernel while ensuring that no single container can take down the host system. Figure 1 shows the basic architecture implemented with this approach.

                              Figure 1: Basic operating system–level virtualization architecture

This technique results in very low virtualization overhead and can yield high partition density. However, there are limitations with this type of solution. The primary limitation is the inability to run a heterogeneous operating system mix on a given server because all partitions share a single operating system kernel. 

In addition, any operating system kernel update affects all virtual environments. For these reasons, operating system–level virtualization tends to work best for largely homogeneous workload environments. You might remember a product named Virtuozzo Containers from Parallels that was based on operating system–level virtualization. Virtuozzo Containers was extensively adopted and deployed by the Web hosting industry to build high-density infrastructures, offering isolated Web services.


Windows Containers

In Windows Server 2016, Microsoft adopts the Windows Containers nomenclature to describe the partitions that are created on top of the operating system virtualization layer. This is a smart move that avoids confusing containers with Hyper-V partitions that are based on machine-level virtualization. It also helps to clarify the operating model for Windows Containers, which is to allow the faster deployment of applications to run side-by-side on the same version of the operating system while providing isolation and security for each application, and minimizing the virtualization overhead.

When you create a Windows Container, you instantiate an isolated sandbox on top of the Windows Server 2016 host operating system. Conceptually, you can think of the Windows Server 2016 host operating system as a read-only base image. The new Windows Container that you create will contain the modifications that you make, such as installing a new application and its dependencies, or modifying settings. 

The underlying Windows Server 2016 host operating system image is not modified. However, you can save your new Windows Container environment as a new image and save it in an image repository. A big advantage of these images is that they are generally much smaller in size than a VHD-based image because only file-based modifications are stored instead of the entire virtual machine with guest operating system and applications. 

You can deploy a Windows Container image created on a particular Windows Server 2016 host on any other Windows Server 2016 container, and even within a virtual machine if a specific workload requires a higher level of isolation. However, there is no requirement to save a Windows Container as a new image, allowing you to create temporary Windows Containers that can be easily destroyed without saving any of the contents.


Hyper-V Containers

A Hyper-V Container is essentially a Windows Container that is running in a Hyper-V partition. With a Hyper-V Container, you are able to further isolate a workload from the physical host operating system. While Windows Containers offer greater partition density and performance, Hyper-V Containers provide a greater degree of isolation, ensuring that the code running in a container cannot impact the host operating system. In a multi-tenant scenario, the ability to have a nested virtualization solution like Hyper-V Containers may be necessary to satisfy more rigorous security, isolation, or regulatory requirements.

With that said, Windows Container images can be deployed in both Windows Containers and Hyper-V Containers without any changes, simply by setting a specific runtime type flag when you create the new container. This enables you to quickly deploy an application and its dependencies in either type of container, and allows you the flexibility of changing the degree of isolation as requirements change for a specific application.
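As an illustration of how small that switch is in practice, the Docker tooling that Microsoft supports on Windows Server 2016 exposes the container type as an isolation option at run time; the image name below is just a placeholder, not a real image.

# Run the same (placeholder) image as a Windows Server Container using process isolation...
docker run -it --isolation=process my-iis-image cmd
# ...or as a Hyper-V Container, without changing the image at all.
docker run -it --isolation=hyperv my-iis-image cmd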

Windows Containers Network Configuration

Windows Containers provide you the flexibility to access your applications on the network using two different methods. A container can be configured with an externally accessible IP address using DHCP or it can obtain an IP address from the container host using Network Address Translation (NAT). If you choose to assign an external IP address to a Windows Container using DHCP, the container will communicate on the network using its own MAC address. This configuration requires MAC spoofing. You would select this IP address assignment option if you require each Windows Container to have a routable address on your network.

Using NAT, the container host assigns a private IP address to a Windows Container, and a specified Windows Container port is mapped to a port on the container host. The application running in the container is accessible to external clients by specifying a combination of the IP address and port on the container host. The container host forwards the network traffic to the destination Windows Container using an internal NAT table that maps the container host port to the Windows Container NAT address and port number pair. You should select this IP address assignment option if you are going to deploy a large number of containers and do not require or want to manage a large number of routable IP addresses.
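To make the NAT option concrete, here is a hedged sketch of two ways such a port mapping is typically expressed: the Docker port-mapping syntax and the Windows NetNat cmdlets. The NAT name, image name, addresses and port numbers are assumptions for illustration only.

# Docker-style: map host port 8080 to port 80 inside the container (image name is a placeholder).
docker run -d -p 8080:80 my-web-image

# NetNat-style: add a static mapping on an existing NAT object named "ContainerNat" (assumed name).
Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP `
    -ExternalIPAddress 0.0.0.0 -ExternalPort 8080 `
    -InternalIPAddress 172.16.0.2 -InternalPort 80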

Using Windows and Hyper-V Containers on a Physical Host

As you can see in Figure 2, Microsoft has made it possible for you to create and use Windows Containers and Hyper-V Containers on the same physical host, if so needed. In this example, the Hyper-V role is enabled on the physical host. In the parent partition, you can deploy Windows Containers each with their own abstracted view of the host operating system. In addition, you can also create one or more VMs, or child partitions, each with their own guest operating system, in which you can deploy Hyper-V Containers. Each VM guest OS must still be some flavor of Windows Server 2016, such as a Server Core or Nano Server configuration. While the Windows Containers share the host operating system as their base image, Hyper-V Containers running in the same VM share the guest operating system as their base image.

                  Figure 2: Windows and Hyper-V Containers on a Physical Host

 

Docker Integration

Microsoft’s plan is to provide you with a couple of options to manage your container environment. If you are a Microsoft-only shop, you can use PowerShell and WMI to manage your container infrastructure on Windows Server 2016. However, in order to promote rapid adoption of the container technology, Microsoft will also support Docker, an open source system widely used in the Linux community to package, deploy, and manage containers. 

Docker will allow you to centrally manage containers across both your Windows Server 2016 and Linux infrastructures. Docker will still not allow you to deploy Windows Containers on Linux or Linux containers on Windows Server 2016 since the physical host operating system is shared with the containers. However, you will have access to the Docker Hub, Docker Engine, and Docker Client on Windows Server 2016.

The Docker Hub is an image repository that contains a large collection of container images that you can pull down to deploy on your systems, and also push container images to share with the Docker community. The Docker Engine will allow you to build, run and orchestrate containers on Windows Server 2016. The Docker client will allow you to use the same interface used in the Linux environment to manage containers. With these two container management options, Microsoft hopes to satisfy the requirements of IT staff that work in homogenous and heterogeneous environments.

Conclusion

In Windows Server 2016, Windows and Hyper-V Containers provide a new, lighter operating system virtualization option for the quick deployment of applications in your infrastructure. Both options require Windows Server 2016 as the host operating system and base image for the containers. 

Windows Containers are most suitable for deployment in a trusted multi-tenancy environment where the containerized applications trust each other, and there is little risk of violating the container isolation boundary through the mistaken or malicious misbehavior of applications. 

Windows Containers support high automation deployment and scalability factors with less virtualization overhead, resulting in more resources for running applications. Hyper-V Containers provide a higher degree of isolation from the host operating system that is better suited for running applications in a non-trusted multi-tenancy environment, or in environments where workloads have regulatory requirements that drive a higher, more stringent security boundary.





Deploying Exchange Server 2016 (Part 1)



Preparing Active Directory to support Exchange Server 2016 and configuring the prerequisites to install a new Exchange Server 2016.


If you would like to read the next part in this article series please go to Deploying Exchange Server 2016 (Part 2).

It is finally here: on October 1st, the Microsoft Exchange Team released the new Exchange Server 2016, and the blog title of the announcement was: Forged in the cloud. Now available on-premises.

That statement makes total sense. This new Exchange Server 2016 is not a new product per se; Microsoft has been testing and improving it on millions of mailboxes in their Office 365 environment before releasing the product on-premises.

There are a lot of new features waiting to be explored, and I’m sure all of them will be covered here at MSExchange.org. I would like to mention just a couple of these new features that caught my attention during the release of the RTM version, as follows:
  • Architecture changes. There are only two roles: Mailbox and Edge Transport. All servers installed on the internal/Active Directory network will have the Mailbox role, which simplifies our lives a lot.
  • Outlook on the web (formerly known as Outlook Web Access, Outlook Web App, or just OWA) is the same used by Office 365, and some of the new features are: platform-specific experience for phones (iOS and Android), new single-line view, calendar improvements, better performance and so forth.
  • Search improvements (search for events in the Calendar, Search and People suggestions) and better performance when using Outlook 2016
  • Faster eDiscovery searches including Public Folder discoveries
  • ReFS is the recommended file system for Exchange Server 2016
This article series is divided into two articles. In this first article we are going to cover the Active Directory preparation to support Exchange Server 2016 and the prerequisites that must be in place to install an Exchange Server 2016. In the second and final article of this series we will cover both methods to install Exchange Server 2016 (command-line and setup wizard), and how to troubleshoot and check the installation.

Preparing Active Directory to support Exchange Server 2016

The Active Directory preparation to support Exchange Server has always been a hot topic in IT forums. Basically, we have two ways to prepare Active Directory to support Exchange Server 2016: using setup.exe from the command line, which gives more flexibility in larger organizations, or using the graphical user interface (setup wizard), which will prepare Active Directory automatically.

By default, the first setup of Exchange Server 2016 will prepare the Active Directory, and to make that work we must add the component RSAT-ADDS to the Windows Components of the future Exchange Server.
If you have a small environment, and there is no specific reason to prepare the Active Directory from a different server, then using the default settings during the setup wizard will work for you.

However, if you have a large environment, with perhaps different teams to manage Active Directory and Messaging, you may want to prepare the Active Directory first, and then afterwards work on the Exchange Server 2016 installation. There are a couple of items that must be validated before starting the manual Active Directory preparation, as follows:
  • You should prepare the Active Directory schema from a server located on the same Active Directory site of your Schema Master server.
    Hint: In order to find your current Schema Master server, run netdom query fsmo on any Domain Controller
  • The account that prepares the schema must be a member of the Enterprise Admins group for all steps of this guide, and member of Schema Admins group just for the first command of this section (/PrepareSchema)
    Hint: Use the command net user <YourAccountName> /domain to validate the group membership of the account, and check that Enterprise Admins and Schema Admins are listed.
  • Make sure that your Active Directory replication is working before starting the Active Directory preparation (a quick check is shown below)
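A quick, hedged way to make that replication check is with the built-in repadmin utility, run from an elevated prompt on a Domain Controller or a machine with the AD DS management tools installed:

# Summarize replication status for all Domain Controllers and flag failures and largest deltas.
repadmin /replsummary
# Show detailed inbound replication status for every Domain Controller.
repadmin /showrepl *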
Now that we covered the basic requirements, here are the steps to prepare the Active Directory to support Exchange Server 2016:
  1. Copy the Exchange Server 2016 installation files to the server
  2. Validate that you have RSAT-ADDS installed (if it is a Domain Controller, then the component is already there), you can check if it is really installed using the Get-WindowsFeature RSAT-ADDS cmdlet using Windows PowerShell (Figure 01)
    Figure 01
  3. Open command prompt as administrator (for this scenario the command prompt is better than Windows PowerShell) and from the root of the Exchange Server 2016 installation files run the following command to prepare the Schema (Figure 02).
    This step will add/modify classes and attributes to support Exchange Server 2016 in your environment.
Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

      Figure 02
  4. After preparing the Schema, the next step is to prepare the Active Directory (Figure 03), and it can be done using the following command.
    This step will create/update containers, objects and other items in Active Directory; at this point your Organization is established in Active Directory (we are not defining an Organization name because we are adding a server to an existing Organization).
Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms

       Figure 03
  5. The final step is to prepare a specific domain (Figure 04) or all domains in your forest. The first command below prepares all domains; the second prepares only the specified domain. The rule of thumb here is to always prepare any domain where an Exchange Server will be installed or where mailbox-enabled or mail-enabled users will exist.
Setup.exe /PrepareAllDomains /IAcceptExchangeServerLicenseTerms
Setup.exe /PrepareDomain:Patricio.ca /IAcceptExchangeServerLicenseTerms

    Figure 04

Windows Server and basic requirements for Exchange Server 2016…

There are a few requirements to install Exchange Server 2016 on any given Windows Server, and the following list covers most of them, as follows:
  • Operating System must be either Windows Server 2012 R2 (recommended) or Windows Server 2012
  • The server must be part of a domain (we will be deploying the Mailbox role)
  • The Active Directory forest functional level must be at least Windows Server 2008
  • If an Exchange Server 2016 server is being introduced into an existing organization, then the following rules must be followed:
    • Exchange Server 2007 and older versions are not supported. You must get rid of those servers by transitioning to Exchange Server 2010/2013
    • Exchange Server 2010 servers must be running at least Update Rollup 11 for Exchange 2010 Service Pack 3 on all servers
    • Exchange Server 2013 servers must be running at least Cumulative Update 10 on all servers
In the second article of this series, we are going to cover both situations: a new Exchange Organization with Exchange Server 2016, or adding a new Exchange Server 2016 into an Exchange Server 2010/2013 organization.

Installing additional prerequisites…

The Exchange Server 2016 installation is a straightforward process when all prerequisites are installed properly.

The first requirement is .NET Framework 4.5.2, and a common best practice is to make sure that the server has all Windows Updates installed before moving into production. That gives us an opportunity to kill two birds with one stone: run Windows Update on the future Exchange Server and make sure that you select the Important update called Microsoft .NET Framework 4.5.2 for Windows 8.1 and Windows Server 2012 R2 for x64-based systems, as shown in Figure 05. After that, restart the server.

   Figure 05
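If you are not sure whether .NET Framework 4.5.2 is already present, you can check the Release value in the registry before running Windows Update; as a rule of thumb, a Release value of 379893 or higher indicates 4.5.2 or later.

# Read the .NET Framework 4.x Release value; 379893 corresponds to .NET Framework 4.5.2.
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
if ($release -ge 379893) { "At least .NET Framework 4.5.2 is installed (Release $release)" }
else                     { "An older .NET Framework build is installed (Release $release)" }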
 
The next step is to install the Windows components. We do have the option to force the setup wizard to install them automatically during setup, but that would require an additional restart, which can be avoided.
Open Windows PowerShell as administrator and run the following cmdlet (Figure 06). Now that we know the details of the Active Directory preparation, you can decide whether or not to add RSAT-ADDS to the list (it is the last item of the cmdlet).

   Figure 06
 
Install-WindowsFeature AS-HTTP-Activation, Desktop-Experience, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Clustering-CmdInterface, RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation,RSAT-ADDS

The next step is to install the Unified Communications Managed API 4.0 Runtime; after downloading the file, just execute it. The installation process is simple (Figure 07 shows the initial page): just leave the default settings. After installing the tool, the server can be restarted.

                  Figure 07
 
Last but not least are the Exchange Server 2016 installation files. The download is an .exe file (around 1.6GB) which, when executed, will ask for the location to extract the Exchange Server 2016 installation files; in this article series we will use C:\EX16, as shown in Figure 08.

                                          Figure 08

 

Conclusion

In this article we went through the process to prepare Active Directory to support a new Exchange Server 2016 installation, and the prerequisites that must be in place before starting the installation process.

In the next article, we will cover the installation process using command-line and setup wizard (graphical user interface), and the setup differences between adding a server in an existing organization and creating a brand new Exchange Organization.

Additional Information
If you would like to read the next part in this article series please go to Deploying Exchange Server 2016 (Part 2).


Deploying Exchange Server 2016 (Part 2)


In the previous article we covered all steps required to prepare the Active Directory to support Exchange Server 2016, and we installed all prerequisites required on the future Exchange Server box. In this article we will cover how to deploy Exchange Server 2016 using command-line and setup wizard.


If you would like to read the first part in this article series please go to Deploying Exchange Server 2016 (Part 1).

Adding an Exchange Server 2016 into an existing Organization (setup wizard)

The process to add additional Exchange Server 2016 servers is always the same, and it does not matter if it is an Exchange Organization running only 2016, or running legacy versions (Exchange 2010 or 2013).

Using Windows Explorer, go to the Exchange 2016 installation folder, right-click on setup.exe and then click on Run as administrator (Figure 01).

                                                      Figure 01

In the Check for Updates? page, the Exchange 2016 setup can check for and download any available updates to be used in the current installation process. Select Connect to the Internet and check for updates and click next.

In the Downloading Updates page, either a list of the updates found or a message saying that no updates were found will be displayed. Click on Next.

In the Introduction page (Figure 02). An Exchange Server 2016 welcome page will be displayed, nothing to configure here, just click next.

               Figure 02

In the License Agreement page. After reading and accepting the license agreement, select I accept the terms in the license agreement and click next.

In the Recommended Settings page, the administrator can define whether usage feedback (information about utilization sent to the product team for future improvements) and online checking for errors will be enabled or disabled. In this article we are using the default value, which is Use recommended settings; click next.

In the Server Role Selection page (Figure 03). Here is the biggest change when compared with Exchange Server 2013. In the new architecture of the product there are only two roles available: Mailbox and Edge Transport role. Select Mailbox Role, and click Next.

Note: We are selecting Automatically install Windows Server roles and features that are required to install Exchange Server for safety reasons, even though we installed all prerequisites following the instructions in the first article of this series.

          Figure 03

In the Installation Space and Location page, the disk space requirements and availability will be displayed. We will use the default location to store the Exchange 2016 installation, which is C:\Program Files\Microsoft\Exchange Server\V15; click next.

In the Malware Protection Settings page. By default, the malware protection is enabled, we will leave default values and then click next.

In the Readiness Checks page (Figure 04), all warning and error items will be listed; if there are no error messages, the setup process can continue. We got a warning about MAPI over HTTP, which is not currently enabled. Click on Install to start the installation process.

Note:
If you are curious about the checks performed by the Exchange Server 2016 Setup, the following link provides detailed information about all of them: https://technet.microsoft.com/EN-US/library/jj150508(v=exchg.160).aspx.

                  Figure 04
The final page of the Exchange 2016 setup will be similar to the Figure 05, where the setup confirms the completion of the installation process, click on Finish.


                  Figure 05
 

Adding an Exchange Server 2016 into an existing Organization (command-line)

We will explore the second method to install Exchange Server 2016, which is using the command line. It is important to know that all options that you have in the Exchange Server 2016 setup wizard (and more, to be honest) are also available on the command line. In order to identify all options for any specific action, we can run setup.exe /? to obtain more information and see the switches available.

If you just want to install an Exchange Server 2016 with a Mailbox role using default values, the following command line will be enough (Figure 06).

Setup.exe /Mode:Install /Role:Mailbox /IAcceptExchangeServerLicenseTerms

        Figure 06
 

Certificate issues after adding a new Exchange Server 2016…

When the topic is certificates, the recommended best practice is to use a public certificate and split-brain DNS. By doing that, a single namespace can be used for both internal and external web services on Exchange Server (the rule applies to Exchange 2016/2013/2010 and 2007).

In the current scenario of this article series, we have an Exchange Server 2013 configured to use webmail.patricio.ca for all web services, and the DNS has an entry for that same host pointing to the Exchange Server 2013 server. However, after completing the installation of the new Exchange Server 2016, the error message (Figure 07) will start to pop up on some of the clients, and if we look closely we will see the name of the Exchange Server 2016 that we have just introduced in our Exchange Organization.

                              Figure 07

The reason is that any new Exchange Server 2016 (the same rule applies to older versions of the product) will configure the AutoDiscoverServiceInternalURI attribute with the FQDN (Fully Qualified Domain Name) of the server, and since the certificate does not match that name, the result is that certificate error. The issue will occur on the internal network in the Active Directory site where the new Exchange Server 2016 was installed.

In Exchange Server 2016 we have a new cmdlet to retrieve the internal Autodiscover settings, Get-ClientAccessService (the former Get-ClientAccessServer is still valid but will be removed in a future version).

In order to identify what is causing the issue in the configuration, we will run the following cmdlet and the results are shown in Figure 08.

Get-ClientAccessService | ft Name,AutoDiscoverServiceInternalURI -AutoSize

   Figure 08
 
The fastest way to solve the issue is to change the AutoDiscoverServiceInternalURI of the new server to point to the existing, valid URL; this way the clients will no longer receive certificate pop-up messages, and all traffic will go to that host.

After installing the certificate on the new Exchange Server 2016 and verifying that the new server is ready for prime time, the administrator can change the DNS entry webmail.patricio.ca to point to the new Exchange Server 2016 box, and at that time the clients will not notice the change.
In order to change the URL, the following cmdlet can be used:

Set-ClientAccessService -Identity <ServerName> -AutoDiscoverServiceInternalUri https://webmail.patricio.ca/Autodiscover/Autodiscover.xml

Creating the Exchange Organization through setup wizard

An Exchange Organization is created during the initial installation of an Exchange Server, and the Organization is at the Forest Level and it will stay in your Active Directory forever (you can remove the configuration manually but that is not the point).
If that is the first time ever that Exchange Server is being installed, an additional step is required which is the definition of the Organization Name. That setting will show up as a new page during the wizard (Figure 09), that new page will be located between Installation Space and Location page and Malware Protection Settings page.
The installation process when creating a new organization and when adding an additional server is the same (the only exception is this additional page).

                  Figure 09
 

Checking the installation and basic troubleshooting…

Independent of the process used to install Exchange Server 2016, there are a couple of basic steps that can be used to test the brand new server. The first thing that the administrator will notice after the installation of the product is a new set of icons on the application list (Figure 10), and we will be using Exchange Management Shell to perform a couple of tests.

                                          Figure 10
A basic cmdlet for troubleshooting and to get an overall status of the service is Test-ServiceHealth and we can see it in action in Figure 11. This cmdlet will list all services required for each component and it will help the administrator to identify services that are not running.

   Figure 11
 
In some cases, the Exchange Server 2016 setup may fail, and in order to identify the issue the first step is to check the log files created during the installation process, which can be found in the C:\ExchangeSetupLogs folder (Figure 12). Those log files contain a lot of information about the process, and if you take your time going through them you will probably be able to pinpoint the issue.

                 Figure 12
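Rather than reading the whole log, one hedged shortcut is to pull only the error entries out of the main setup log; the file name (ExchangeSetup.log) and the [ERROR] tag below reflect the typical format of that log, but adjust them if your folder contents differ.

# List error entries from the main Exchange setup log, with line numbers for context.
Select-String -Path 'C:\ExchangeSetupLogs\ExchangeSetup.log' -Pattern '\[ERROR\]' |
    Select-Object LineNumber, Line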

 

Conclusion

After working on the Active Directory preparation and prerequisites in the first article, we completed our series by going over the process to install Exchange Server 2016 using both methods (command-line and setup wizard), and we finished by showing how the administrator can check the services after installation, avoid certificate pop-ups, and check the logs created during the installation.

If you would like to read the first part in this article series please go to Deploying Exchange Server 2016 (Part 1).

Migrating a small organization from Exchange 2010 to Exchange 2016

Migrating from Exchange 2010 to Exchange 2016

In this article we will migrate from Exchange 2010 to Exchange 2016. In this short series we’ll be focusing on the implementation and migration steps to move from Exchange 2010 to Exchange 2016, rather than implementing features like Database Availability Groups or configuring load balancing. Therefore, we’ll focus on a smaller organization with a relatively simple deployment.


Introduction

The latest version of Exchange Server brings the latest cloud-based developments and reliability improvements to on-premises Exchange. In this series we will walk through the steps required to implement Exchange 2016 into your current Exchange 2010 organization and migrate mailboxes across.


Planning for Deployment

Before you begin it’s important to understand that a key architectural change has been made in Exchange 2016. Exchange 2010 had a number of separate roles: Client Access, Hub Transport, Mailbox and Unified Messaging.

In Exchange 2016 only a single role is used, the Mailbox role. This role contains all of the necessary components.

Our example organization is Goodman Industries, who have a single Exchange 2010 multi-role server and will migrate over to a single Exchange 2016 mailbox server.

                          Figure 1: An overview of our topology

In the example above, you’ll see our source server EX1401 running Exchange 2010. Our target server will be EX1601. In a larger organization this would most likely be highly available, so we’d have multiple domain controllers (rather than just AD01) and use Database Availability Groups on the source and target.


Naming and Services

Our first step is to define names used by clients to access Exchange. Co-existence with Exchange 2010, 2013 and 2016 allows sharing of the same HTTPS names for Autodiscover, OWA, ActiveSync and other services, making it easy to transition across and reduce the risk of implementing co-existence.

Old Exchange 2010 Namespaces       | New Exchange 2016 Namespaces
mail.goodmanindustries.com         | mail.goodmanindustries.com
autodiscover.goodmanindustries.com | autodiscover.goodmanindustries.com
Table 1

Exchange Server Sizing

The environment we’ll be implementing Exchange 2016 on is virtualised, running Hyper-V in our example.
CPUs           | Cores | SPECint_rate2006 score | Host RAM | Disks Available
2 x Intel Xeon | 12    | 367                    | 256GB    | 24 x 4TB 3.5” 7.2K RPM SAS (RAID 10)
Table 2
We have also collected statistics from the existing environment:
Number of mailboxes | Average Message Size | Average Received | Average Sent | Average Mailbox Size
150                 | 75KB                 | 30               | 15           | 1GB
Table 3

To calculate the requirements, we’ll use version 7.8 or higher of the Exchange Server Role Requirements Calculator. This supports both Exchange 2013 and Exchange 2016, so be sure to select the correct version when using the tool.

When sizing the solution, the following factors will form design constraints:
  • The solution will not use Exchange-level high availability (Database Availability Groups) and will instead rely on Hyper-V for availability.
  • The Exchange 2016 environment will provide quota limits of 5GB per user.
  • We’ll configure the maximum number of databases to be 5.
  • We’ll use a VSS-based backup solution rather than Exchange Native Protection – simply because it’s a non-HA simple environment.
Our output from the role requirements calculator results in the following server specification:

Hostname | Virtual CPU | RAM  | OS Disk | Page file Disk | Physical disks required | Database virtual disks | Log virtual disks | Restore LUN
EX1601   | 1 x vCPU    | 16GB | 100GB   | 20GB           | 4 x 4TB                 | 5 x 291GB              | 5 x 5GB           | 1 x 213GB
Table 4

The Virtual CPU column specifies how many CPU cores should be assigned to the virtual machine, as does the RAM column. The OS disk will hold the operating system, the Exchange installation and the transport databases.

The Physical Disks column represents how many of the available physical disks are needed to support the deployment and meet the requirements for performance and space. In the virtual environment, these will be presented as virtual disks and will be used for database and log files respectively.

You'll note that we're still splitting databases and logs. For an implementation making use of Exchange Native Protection we wouldn't look to do this, but for an implementation in a virtual environment that takes advantage of backups this is still required. We've also included an additional virtual disk to use as a restore LUN.

Splitting databases from logs ensures that if a log disk fills up, databases will not be corrupted. It also ensures that the loss or corruption of a single virtual disk doesn't require a full restore of Exchange.


Updating the environment

Updating Exchange Server 2010

The minimum supported patch level for Exchange Server 2010 is Service Pack 3 with Update Rollup 11.
Exchange 2010 Service Pack 3 is available here. Exchange 2010 SP3 Update Rollup 11 is available here. Install it, or a newer version if it is available.


Directory Service Requirements

The last few versions of Exchange had reasonably light requirements on AD functional levels. Now that Windows Server 2003 R2 has finally gone out of support, the minimum Forest Functional Level and Domain Functional Level have been raised; the minimum supported FFL/DFL is now Windows Server 2008 or above.


Updating Outlook Clients

Exchange 2016 supports Outlook 2010 and above on Windows, and on the Mac Outlook 2011 and higher. Outlook 2007 is no longer supported, but may work.

All versions of Outlook 2016 and Outlook 2013 are supported. Outlook 2010 is supported with the April 2015 update (KB2965295).

Update clients to the minimum supported version required before implementing Exchange 2016. Newer versions of Outlook will work with Exchange 2010 without issue.


Preparing the server for Exchange 2016

Exchange 2016 supports Windows 2012 and Windows 2012 R2. In our series we'll use Windows 2012 R2.
We'll be using physical disks to support Exchange 2016 and then creating virtual disks atop our Hyper-V environment. In Hyper-V, our new VM looks like this:

                         Figure 2: Hyper-V Configuration for our Virtual Machine

We'll then proceed and install Windows Server 2012 R2 on the virtual machine used for Exchange 2016, then configure it with correct network settings, install the latest Windows updates and join it to our domain.


Storage Overview

Exchange Server 2016 supports NTFS and ReFS for Exchange databases and log files, and supports NTFS for operating system and Exchange binaries.

ReFS is recommended, with data integrity features switched off; therefore, we’ll format all Exchange database and log disks using this filesystem.

In addition to making sure we're using the recommended filesystem, we will create mount points to represent the disks and their purpose:

Disk           | Mount Point
Page file      | E:
Database 1     | C:\ExchangeDatabases\DB01
Database 2     | C:\ExchangeDatabases\DB02
Database 3     | C:\ExchangeDatabases\DB03
Database 4     | C:\ExchangeDatabases\DB04
Database 1 Log | C:\ExchangeDatabases\DB01_Log
Database 2 Log | C:\ExchangeDatabases\DB02_Log
Database 3 Log | C:\ExchangeDatabases\DB03_Log
Database 4 Log | C:\ExchangeDatabases\DB04_Log
Restore LUN    | C:\ExchangeDatabases\Restore
Table 5


Initializing Disks

We will then bring storage online, initialize and then format and mount the storage. Launch Disk Management by right-clicking the Start Button:

                                                          Figure 3: Opening Disk Management

Within Disk Management, we will see in the upper panel the system disk, C:, and the System Reserved partition. These also display in the lower pane, as partitions contained within the primary disk.

All newly added disks will typically be shown as offline. We'll need to first change each of these disks to an online state before we prepare them. This is accomplished by right clicking each disk and simply choosing Online. Perform this step, as shown below, across all new disks before proceeding:

                                         Figure 4: Using Disk Management to bring a disk online

After bringing the disks online, we will now select one of the disks, right click and choose Initialize Disk:

                                                         Figure 5: Initializing disks for Exchange use

This will allow us to initialize all new disks in a single operation. We'll ensure all disks are selected (in our case all 12 additional disks), then select GPT (GUID Partition Table), which is recommended for Exchange and supports disk sizes over 2TB, should they be required:

                                               Figure 6: Selecting the GPT partition type

 


 

Configuring disks

We’ll now create our first volume for the Page File. In our design, this is not to be located on a mount point, so we don’t need to create a folder structure to support it. We can simply right click and choose New Simple Volume:

                                           Figure 7: Creating a new volume for the page file

The New Simple Volume Wizard will launch. We’ll be provided with the opportunity to assign our drive letter, mount in an empty folder (which we will use for the database and log volumes) or not to assign a drive letter or path. We’ll choose a drive letter, in this case, D:

                                  Figure 8: Assigning a drive letter to our page file disk

After choosing the drive letter, we’ll then move on to formatting our first disk.

                                  Figure 9: Formatting our page file disk

After formatting the page file volume, we will format and mount our database and log volumes.
The process to create the ReFS volumes with the correct settings requires PowerShell. An example function is shown below that we will use to create the mount point, create a partition and format the volume with the right settings.

function Format-ExchangeDisk
{
    param($Disk, $Label, $BaseDirectory="C:\ExchangeDatabases")
    # Create the mount point folder for this volume
    New-Item -ItemType Directory -Path "$($BaseDirectory)\$($Label)"
    # Create a single partition using all available space on the disk
    $Partition = Get-Disk -Number $Disk | New-Partition -UseMaximumSize
    if ($Partition)
    {
        # Format as ReFS with integrity streams disabled, then mount it on the folder
        $Partition | Format-Volume -FileSystem ReFS -NewFileSystemLabel $Label -SetIntegrityStreams:$False
        $Partition | Add-PartitionAccessPath -AccessPath "$($BaseDirectory)\$($Label)"
    }
}


Check and alter the script for your needs. To use the function, paste the script into a PowerShell prompt. The new function will be available as a cmdlet, Format-ExchangeDisk. Before using the script we need to know which disks to format. In Disk Management examine the list of disks. We’ll see the first one to format as ReFS is Disk 2:

          Figure 10: Checking the first disk number to use for Exchange data

Format the disk using the PowerShell function we’ve created above:

                                  Figure 11: Formatting an Exchange data disk using ReFS
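As an illustration of how the function is called, assuming Disk 2 holds the first database and the remaining disk numbers follow the order of the mount point table above (check the numbering in Disk Management for your own environment first), the calls look like this:

Format-ExchangeDisk -Disk 2 -Label DB01
Format-ExchangeDisk -Disk 3 -Label DB02
Format-ExchangeDisk -Disk 4 -Label DB03
Format-ExchangeDisk -Disk 5 -Label DB04
Format-ExchangeDisk -Disk 6 -Label DB01_Log
Format-ExchangeDisk -Disk 7 -Label DB02_Log
Format-ExchangeDisk -Disk 8 -Label DB03_Log
Format-ExchangeDisk -Disk 9 -Label DB04_Log
Format-ExchangeDisk -Disk 10 -Label Restore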

After formatting all disks, they should show with correct corresponding labels:

          Figure 12: Viewing disks after formatting as ReFS

Configuring Page file sizes

Page file sizes for each Exchange Server must be configured correctly. Each server should have the page file configured to be the amount of RAM, plus 10MB, up to a maximum of 32GB + 10MB.
To configure the Page file size, right click on the Start Menu and choose System:

                                                         Figure 13: Accessing system settings

The system information window should open within the control panel. Choose Advanced system settings, as shown below:

                        Figure 14: Navigating to Advanced system settings

Next, the System Properties window will appear with the Advanced tab selected. Within Performance, choose Settings:

                                            Figure 15: Opening Performance settings

We will then adjust the Virtual Memory settings and perform the following actions:
  • Unselect Automatically manage paging file size for all drives
  • Set a page file size to match the current virtual machine RAM, plus 10MB, for example:
    • 8GB RAM = 8192MB RAM = 8202MB page file
    • 16GB RAM = 16384MB RAM = 16394MB page file
You’ll see the result of this for our virtual machine illustrated below:

                                            Figure 16: Configuring the page file size
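If you prefer to script this change rather than click through the dialogs, a minimal sketch using the CIM cmdlets is shown below. The 8202MB value assumes an 8GB virtual machine, a reboot is still needed for the new size to take effect, and depending on the current configuration the Win32_PageFileSetting instance may only exist once automatic management has been disabled:

# Disable automatic page file management
Get-CimInstance -ClassName Win32_ComputerSystem | Set-CimInstance -Property @{AutomaticManagedPagefile = $false}
# Set a fixed page file of RAM + 10MB (here 8GB RAM = 8202MB)
Get-CimInstance -ClassName Win32_PageFileSetting | Set-CimInstance -Property @{InitialSize = 8202; MaximumSize = 8202}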

After making this change you may be asked to reboot.
You don’t need to do so at this stage as we will be installing some pre-requisites to support the Exchange installation.


Configuring Exchange 2016 prerequisites

To install the pre-requisites, launch an elevated PowerShell prompt, and execute the following command:

Install-WindowsFeature AS-HTTP-Activation, Desktop-Experience, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Clustering-CmdInterface, RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation, RSAT-ADDS

After installation of the components a reboot is required before we can install the other pre-requisites needed for Exchange 2016 installation.

First we’ll install the .Net Framework 4.5.2.

                                            Figure 17: Installing .Net Framework 4.5.2

Next, install the Microsoft Unified Communications Managed API Core Runtime, version 4.0.
After download, launch the installer. After copying a number of files required, the installer provides information about the components it will install as part of the Core Runtime setup:

                                            Figure 18: Installing the Unified Comms Managed API

No special configuration is needed after install as it’s a supporting component used by Unified Messaging.
Our final pre-requisite is to download and extract the Exchange 2016 installation files themselves.
At the time of writing, the latest version of Exchange 2016 is the RTM version.

Note that because each Cumulative Update and Service Pack for Exchange 2016 is a full build of the product, you do not need to install the RTM version first and then update if a CU/SP has been released. Download the latest version available.
After download, run the self-extracting executable and choose an appropriate location to extract files to:

                                          Figure 19: Extracting the files for Exchange 2016

Installing Exchange Server 2016

We will install Exchange Server 2016 via the command line. It’s also possible to perform the setup using the GUI, however the command line options allow us to perform each critical component, such as schema updates, step-by-step.

Installation Locations

As recommended by the Exchange 2016 Role Requirements Calculator, we will be placing the Transport Database - the part of Exchange that temporarily stores in-transit messages - on the system drive, therefore it makes a lot of sense to use the default locations for Exchange installation.

The default installation location for Exchange 2016 is within C:\Program Files\Microsoft\Exchange Server\V15.

Preparing Active Directory

Our first part of the Exchange 2016 installation is to perform the Schema update. This step is irreversible; therefore, it is essential that a full backup of Active Directory is performed before we perform this step.
While logged on as a domain user that's a member of both the Enterprise Admins and Schema Admins groups, launch an elevated command prompt and change directory into the location we've extracted the Exchange setup files, C:\Exchange2016.

Execute setup.exe with the following switches to prepare the Active Directory schema:
setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

                         Figure 20: Preparing the schema for Exchange 2016

Expect the schema update to take between 5 and 15 minutes to execute.
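If you want to confirm that the schema extension has been applied, one common check is to read the Exchange schema version attribute with the AD PowerShell module; the exact rangeUpper value depends on the Exchange build, so treat this as a quick sanity check rather than an authoritative test:

# Read the Exchange schema version from the schema partition
$schemaPath = (Get-ADRootDSE).schemaNamingContext
Get-ADObject "CN=ms-Exch-Schema-Version-Pt,$schemaPath" -Properties rangeUpper | Select-Object -ExpandProperty rangeUpper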
Next prepare Active Directory. This will prepare the Configuration Container of our Active Directory forest, upgrading the AD objects that support the Exchange Organization. We'll perform this preparation using the following command:

setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms


                         Figure 21: Preparing Active Directory for Exchange 2016

Our final step to prepare Active Directory is to run the domain preparation.
Our smaller organization is comprised of a single domain, and therefore we can run the following command:
setup.exe /PrepareDomain /IAcceptExchangeServerLicenseTerms

                         Figure 22: Preparing the domain for Exchange 2016

If you have more than one domain within the same Active Directory forest with mail-enabled users, then you will need to prepare each domain. The easiest way to prepare multiple domains is to replace the /PrepareDomain switch with /PrepareAllDomains.
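For example, the equivalent command is:

setup.exe /PrepareAllDomains /IAcceptExchangeServerLicenseTerms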

Performing Exchange 2016 Setup

To install Exchange 2016 via setup.exe we will use the /Mode switch to specify that we will be performing an Install. In addition to the /Mode switch we need to specify the role that we’ll install, the Mailbox role.

setup.exe /Mode:Install /Roles:Mailbox   /IAcceptExchangeServerLicenseTerms

                         Figure 23: Installing Exchange 2016 server

After a successful installation, reboot the server.

 

Post-Installation Configuration Changes

 

Checking Exchange After Installation

After installation completes we will ensure that the new Exchange Server is available.
Choose Start and launch the Exchange Administrative Center from the menu, or navigate using Internet Explorer to https://servername/ecp/?ClientVersion=15:

                                                     Figure 24: Launching the EAC

Because we're launching via a local URL and haven't yet installed the real SSL certificate, we will see a certificate warning, as shown below. Click Continue to this website to access the EAC login form:

                        Figure 25: First login to the EAC

You should see the Exchange Admin Center login form. Log in using organization admin credentials:

                        Figure 26: Login as an Admin to the EAC

After you successfully login, take a moment to navigate around each section of the EAC to familiarise yourself with the new interface.

             Figure 27: Exploring the Exchange 2016 EAC

You’ll notice that the EAC is very different in layout to Exchange Server 2010’s Exchange Management Console. In Exchange 2010 and 2007, the focus was based on the organization, servers and recipients with distinct sections for each. Exchange 2013 and 2016 move to a more task-oriented view. For example, Send and Receive connectors are both managed from the Mail Flow section rather than hidden within respective Organization and Server sections.

However even with those changes, very similar commands are used within the Exchange Management Shell and you will be able to re-purpose any Exchange 2010 PowerShell skills learnt.

Updating the Service Connection Point for Autodiscover

After successfully installing Exchange Server 2016, a change worth making is to update the Service Connection Point (SCP).

The SCP is registered in Active Directory and used, alongside the Exchange 2010 SCP, as a location Domain-Joined clients can utilize to find their mailbox on the Exchange Server.

By default, the SCP will be in the form https://ServerFQDN/Autodiscover/Autodiscover.xml; for example https://EX1601.goodmanindustries.com/Autodiscover/Autodiscover.xml.

The name above however won't be suitable for two reasons - firstly, no trusted SSL certificate is currently installed on the new Exchange 2016 server, and the SSL certificate we'll replace it with in the next section won't have the actual full name of the server.

This can cause certificate errors on domain-joined clients, most commonly with Outlook showing the end user a certificate warning shortly after you install a new Exchange Server.

Therefore, we will update the Service Connection Point to use the same name as Exchange 2010 uses for its Service Connection Point. This is also the same name we’ll move across to Exchange 2016 later on.
To accomplish this, launch the Exchange Management Shell from the Start Menu on the Exchange 2016 server:
                                                     Figure 28: Launch the EMS

To update the Service Connection Point, we'll use the Set-ClientAccessService cmdlet from the Exchange Server 2016 Management Shell, using the AutodiscoverServiceInternalURI parameter to update the actual SCP within Active Directory:

Set-ClientAccessService -Identity EX1601 -AutodiscoverServiceInternalURI https://autodiscover.goodmanindustries.com/Autodiscover/Autodiscover.xml

          Figure 29: Updating the SCP

After making this change, any clients attempting to use the Exchange 2016 Service Connection Point before we implement co-existence will be directed to use Exchange 2010.
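To confirm the new value has been written, the SCP can be read back from the same shell, for example:

Get-ClientAccessService -Identity EX1601 | Format-List Name,AutoDiscoverServiceInternalUri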

Exporting the certificate as PFX format from Exchange 2010

Because we will migrate the HTTPS name from Exchange 2010 to Exchange 2016 we can re-use the same SSL certificate by exporting it from the existing Exchange server.

To perform this step, log in to the Exchange 2010 server and launch the Exchange Management Console. Navigate to Server Configuration, select the valid SSL certificate with the correct name, then select Export Exchange Certificate from the Actions pane on the right hand side.

           Figure 30: Exporting the Exchange 2010 SSL cert

The Export Exchange Certificate wizard should open. Select a location to save the Personal Information Exchange (PFX) file and an appropriate strong password, then choose Export:

                         Figure 31: Specifying an export directory and password

Make a note of this location, as we’ll use it in the next step.
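The same export can also be scripted from the Exchange 2010 Management Shell. The sketch below assumes a single certificate matches the HTTPS name and that the C:\Certs folder already exists; adjust both for your environment:

# Export the certificate matching the HTTPS name as a password-protected PFX
$password = Read-Host "PFX password" -AsSecureString
$cert = Get-ExchangeCertificate -DomainName mail.goodmanindustries.com
$export = Export-ExchangeCertificate -Thumbprint $cert.Thumbprint -BinaryEncoded:$true -Password $password
Set-Content -Path "C:\Certs\mail.pfx" -Value $export.FileData -Encoding Byte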

Importing the Certificate PFX File

Back over on the Exchange 2016 server, open the Exchange Admin Center and navigate to Servers>Certificates. Within the more (…) menu choose Import Exchange Certificate:

           Figure 32: Importing the SSL certificate to Exchange 2016

In the Import Exchange Certificate wizard we’ll now need to enter a full UNC path to the location of the exported PFX file, along with the correct password used when exporting the certificate from Exchange 2010:

                         Figure 33: Specifying the path to the Exchange 2010 server

After entering the location and password, we’ll then choose Add (+) to select our Exchange 2016 server, EX1601, as the server to apply this certificate to. We’ll then choose Finish to import the certificate:

                        Figure 34: Selecting appropriate servers to import the certificate to
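If you prefer to perform the import from the Exchange Management Shell instead of the EAC, a sketch along the following lines works; the UNC path is a placeholder for wherever the PFX was saved:

# Import the PFX on the Exchange 2016 server
$password = Read-Host "PFX password" -AsSecureString
Import-ExchangeCertificate -Server EX1601 -Password $password -FileData ([Byte[]](Get-Content -Path "\\fileserver\share\mail.pfx" -Encoding Byte -ReadCount 0))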

Assigning the SSL certificate to services

Although we now have the SAN SSL certificate installed on the Exchange 2016 server it is not automatically used by services such as IIS, SMTP, POP/IMAP or Unified Messaging. We’ll need to specify which services we want to allow it to be used with.

To perform this step, within Certificates select the certificate and then choose Edit:

                            Figure 35: Assigning SSL certificates for use

Next, choose the Services tab in the Exchange Certificate window and select the same services chosen for Exchange 2010. In this example, we’re only enabling the SSL certificate for IIS (Internet Information Services):

                         Figure 36: Selecting services to assign the SSL cert to
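The same assignment can also be made from the Exchange Management Shell; a one-line sketch, again assuming a single certificate matches the HTTPS name:

Get-ExchangeCertificate -DomainName mail.goodmanindustries.com | Enable-ExchangeCertificate -Services IIS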

After the certificate is assigned, ensure it is applied to IIS by running the following command:

iisreset /noforce

 

Configuring Exchange URLs using the Exchange Management Shell

The Exchange Management Shell also provides the functionality to change the Exchange URLs for each virtual directory, however unless you know the syntax it can be a little intimidating - and even if you do know the relevant syntax, typing each URL can be a little time consuming too.

We can use a PowerShell script to make this process simpler.

The first two lines of the script are used to specify the name of the Exchange 2016 server, in the $Server variable, and the HTTPS name used across all services in the $HTTPS_FQDN variable.

The subsequent lines use this information to correctly set the Internal and External URLs for each virtual directory:

$Server = "ServerName" 
$HTTPS_FQDN = "mail.domain.com"
Get-OWAVirtualDirectory -Server $Server | Set-OWAVirtualDirectory -InternalURL   "https://$($HTTPS_FQDN)/owa" -ExternalURL   "https://$($HTTPS_FQDN)/owa"
Get-ECPVirtualDirectory -Server $Server | Set-ECPVirtualDirectory -InternalURL   "https://$($HTTPS_FQDN)/ecp" -ExternalURL   "https://$($HTTPS_FQDN)/ecp"
Get-OABVirtualDirectory -Server $Server | Set-OABVirtualDirectory -InternalURL   "https://$($HTTPS_FQDN)/oab" -ExternalURL   "https://$($HTTPS_FQDN)/oab"
Get-ActiveSyncVirtualDirectory -Server $Server | Set-ActiveSyncVirtualDirectory -InternalURL "https://$($HTTPS_FQDN)/Microsoft-Server-ActiveSync"  -ExternalURL "https://$($HTTPS_FQDN)/Microsoft-Server-ActiveSync"
Get-WebServicesVirtualDirectory -Server $Server | Set-WebServicesVirtualDirectory -InternalURL "https://$($HTTPS_FQDN)/EWS/Exchange.asmx"  -ExternalURL "https://$($HTTPS_FQDN)/EWS/Exchange.asmx"
Get-MapiVirtualDirectory -Server $Server | Set-MapiVirtualDirectory -InternalURL   "https://$($HTTPS_FQDN)/mapi" -ExternalURL   https://$($HTTPS_FQDN)/mapi

In the example below, we've specified both our server name EX1601 and HTTPS name mail.goodmanindustries.com and then updated each Virtual Directory accordingly:

          Figure 37: Updating URL values
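To spot-check the result, the values can simply be read back; for example, for the OWA and MAPI virtual directories:

Get-OWAVirtualDirectory -Server $Server | Format-List InternalUrl,ExternalUrl
Get-MapiVirtualDirectory -Server $Server | Format-List InternalUrl,ExternalUrl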

 

Configuring Outlook Anywhere

After updating the Virtual Directories for Exchange, we'll also update the HTTPS name and authentication method specified for Outlook Anywhere.

As Outlook Anywhere is the protocol Outlook clients will use by default to communicate with Exchange Server 2016, replacing MAPI/RPC within the LAN, it's important that these settings are correct - even if you are not publishing Outlook Anywhere externally.

During co-existence it's also important to ensure that the default Authentication Method, Negotiate, is updated to NTLM to ensure client compatibility when Exchange 2016 proxies Outlook Anywhere connections to the Exchange 2010 server.

To update these values, navigate to Servers and then choose Edit against the Exchange 2016 server:

               Figure 38: Locating Outlook Anywhere settings

In the Exchange Server properties window choose the Outlook Anywhere tab. Update the External Host Name, Internal Host Name and Authentication Method as shown below:

    Figure 39: Updating Outlook Anywhere configuration

Naturally you can also accomplish this with PowerShell, however it's just as quick to use the Exchange Admin Center for a single server.
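For reference, a PowerShell equivalent would look roughly like the following, using the server and HTTPS names from our example:

Get-OutlookAnywhere -Server EX1601 | Set-OutlookAnywhere -ExternalHostname mail.goodmanindustries.com -InternalHostname mail.goodmanindustries.com -ExternalClientsRequireSsl $true -InternalClientsRequireSsl $true -DefaultAuthenticationMethod NTLM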

With these settings configured, along with an iisreset /noforce to ensure the configuration is re-loaded into IIS, we could in theory move client access across from Exchange 2010 to Exchange 2016. Before we do that we will first make some additional configuration changes.

 

Summary

We’ve performed the first basic configuration required for our Exchange 2016 server post-installation. In the next part we will complete the post-installation configuration and begin preparation for migration.

To Be Continued!

 

Deploying Office Online Server (OOS)


Planning for the Office Online Server to be used with Exchange Server 2016 and initial deployment steps. In this article we will cover all steps required to prepare the Active Directory to support Exchange Server 2016, and we will install all prerequisites required on the future Exchange Server box.


Introduction

Office Online Server (OOS), currently in preview (not to be used in production! Keep an eye on Microsoft Exchange blog to get the information when this role is released for production environments), renders documents that can be viewed and edited using a variety of browsers and devices. This new Microsoft server role can be used with several other products, such as: SharePoint, OneDrive, Shared Folders and even web sites.

Office Online Server (OOS) has been around for a while in the Microsoft Unified Communications family; its former name was Office Web Apps (WAC), and that version could be used with Exchange Server 2013. However, starting with Exchange Server 2016, this server role got a special place because it is responsible for supporting the Modern Attachments feature.

That is cool, but why is it so important for Exchange Server 2016? Well, it all boils down to a new feature called Modern Attachments, where Outlook Web App and Outlook 2016 clients are able to reference files instead of adding them as attachments to messages, which at the end of the day saves a lot of space in the Mailbox Databases. A good example is a 10MB file that would previously sit as an attachment in the Mailbox Database; starting with Exchange Server 2016 that can be just a link, and the end-user will be able to view/edit the file from the source without having to download it.

In this article series we are going over the process to deploy the Office Online Server and some tweaks to improve the product and how to configure Exchange Server 2016 to integrate with this server role.


Planning for Office Online Server (OOS) Server

In order to understand where we can install the Office Online Server, the easier way is to list all places where the server should not be installed. Basically, the server cannot be collocated with other server roles, such as: Domain Controllers, IIS, SharePoint, Exchange Server, Skype for Business Server and SQL Server. Also, we must not have Office client installed on the Office Online Server.

Long story short, to keep things simple and consistent, reserve a server just for the Office Online Server, and to make sure that you have a highly available environment, the recommendation is to have a minimum of two (2) servers behind a load balancing solution.

The Certificate is always a discussion topic on any design for Exchange and Skype for Business, and the same applies for Office Online Server. A good thing is that we can find synergies on all those products and they can share the same certificate if we plan well. Here are some recommendations that will help you to design your Office Online Server environment, as follows:
  • Use a Public Certificate (you can use SAN or wildcard certificates) although the preference is to use SAN (Subject Alternative Names)
  • Most likely you will use a Subject Alternative Name (SAN) certificate which will support several names, and for Exchange you can start as simple as 2 names to support a single site (you may need additional names based on your Disaster Recovery, or in case of having multiple sites)
  • Depending on your environment you can use the same Public Certificate for several services, such as SharePoint, Exchange Server, Skype for Business, Active Directory Federation Services, Office Online Server and so forth. Just keep adding names; it will be cheaper and it will reduce the hassle of maintaining several individual certs for each service/application.
  • Make sure Active Directory is able to resolve your public domain internally. If you have an invalid FQDN (e.g. company.local) you may want to use split-brain DNS, where your public domain zone is created internally and name resolution of that public zone is handled by internal servers.

Exchange Server 2016 and transition process

If you have been using WAC (Office Web Apps) with Exchange Server 2013, then we need to go over some details before introducing Office Online Server in your environment.

The main rule is about supportability. For starters, Exchange Server 2013 supports Office Web Apps (WAC) however it does not support Office Online Server (OOS), on the other hand Exchange Server 2016 supports Office Online Server (OOS) but it does not support Office Web Apps (WAC).

The takeaway of the planning is to make sure that you build a highly available solution for Office Online Server (OOS); that will avoid issues in case of a failure of the Office Online Server, such as a situation where Exchange Server 2016 tries to use an Office Web Apps (WAC) server, which is not a supported scenario.


DNS Configuration…

Active Directory may have a valid FQDN (Fully Qualified Domain Name) based on a TLD (top-level domain) name, such as company.ca or company.com, or in some cases a non-valid FQDN, such as company.local or company.corp. Since in this article we are working with Office Online Server and Exchange Server 2016, for both of which you should already have a deployed and stable Active Directory environment, it is safe to say that the train has left the station when it comes to defining the FQDN for your domain, so work with what you have.

Based on Microsoft’s recommendations it is easier to work with a valid FQDN when using products from the Unified Communications family, such as Skype for Business, Exchange Server and Office Online Server. There are a couple of reasons; one of the most compelling is that public certificates are issued only for valid FQDNs, and in that case it is easier to point all Exchange web services to a valid FQDN and use DNS (internal and public) to direct the clients to the right server. Your current environment will be in one of the situations below, so use these scenarios to decide where to configure your DNS to support Office Online Server.
  • If you have an invalid FQDN (e.g. company.local, company.corp) you can create a valid FQDN zone at the DNS level (create it as a Primary Zone and store it in Active Directory to guarantee replication to all Domain Controllers); this set-up is also known as split-brain DNS. After creating the new zone that matches your valid FQDN, just add the names that will be used by Exchange services, OOS, ADFS, and so forth as hosts in that new zone. 
  • If you already have a valid FQDN, then it is just a matter of adding new entries for the new services that you defined on your certificate
Either way, you just need to create a new host (A or AAAA) using the defined Office Online Server name and in this article series we will be using oos.montreallab.info as shown in Figure 01.

         Figure 01
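If the zone is hosted on a Windows DNS server, the record can also be created from PowerShell with the DnsServer module; the IP address below is a placeholder for your own OOS server:

# Create the A record oos.montreallab.info; run on (or point -ComputerName at) the DNS server
Add-DnsServerResourceRecordA -ZoneName "montreallab.info" -Name "oos" -IPv4Address 192.168.10.50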

 

Deploying Office Online Server (OOS)

The process to install the Office Online Server requires a few software components; the following list summarizes the required software in the proper installation order. We will go over each component in this section.
The first step is to install the Windows Server features, which can be done using the cmdlet shown below (Figure 02). After installing those new features, a restart of the server is recommended before moving forward with the next step.

Install-WindowsFeature Web-Server, Web-Mgmt-Tools, Web-Mgmt-Console, Web-WebServer, Web-Common-Http, Web-Default-Doc, Web-Static-Content, Web-Performance, Web-Stat-Compression, Web-Dyn-Compression, Web-Security, Web-Filtering, Web-Windows-Auth, Web-App-Dev, Web-Net-Ext45, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Includes, InkandHandwritingServices 

    Figure 02

The second step is to execute the Microsoft Visual C++ 2015 Redistributable (x64), and the installation process is straightforward. On the initial page (Figure 03), just select I agree to the license terms and conditions and click on Install.

                    Figure 03

The next step is the Office Online Server installation itself. After downloading the ISO file, double click on it and the content will be mounted on a drive letter of the server. Click on the new drive, and then on setup.exe, which can be found in the root of that new drive (Figure 04)

                          Figure 04

In the initial page of the wizard, if you are in agreement with the license contract, select I accept the terms of this agreement and click on Continue.

In the second page, define the installation location which by default is C:\Program Files\Microsoft Office Web Apps and click on Install now to start the installation process, as shown in Figure 05.

   Figure 05
 
The installation process is that simple; the final page should be similar to Figure 06, where the setup informs the administrator that the server has been installed. Click on Close.

    Figure 06

At this point we have the server installed but it is not fully functional; in the next article of our series we will configure the Public Certificate and assign the DNS name that we defined for the Office Online Server.

Adding an Exchange Server 2016 to an existent Organization (setup wizard)

The process to add additional Exchange Server 2016 servers is always the same, and it does not matter if it is an Exchange Organization running only 2016, or running legacy versions (Exchange 2010 or 2013).
Using Windows Explorer, go to the Exchange 2016 installation folder, right-click on setup.exe and then click on Run as administrator (Figure 01).

                                                      Figure 01

On the Check for Updates? page, the Exchange 2016 setup can check for and download any available updates to be used in the current installation process. Select Connect to the Internet and check for updates and click Next.

On the Downloading Updates page, a list of the updates found, or a message saying that no updates were found, will be displayed; either way, click Next.
On the Introduction page (Figure 02), an Exchange Server 2016 welcome page will be displayed; there is nothing to configure here, just click Next.

               Figure 02
 
On the License Agreement page, after reading and accepting the license agreement, select I accept the terms in the license agreement and click Next.

On the Recommended Settings page, the administrator can define whether usage feedback (information about utilization sent to the product team for future improvements) and online checking for errors are enabled or disabled. In this article we are using the default, which is Use recommended settings; click Next.
On the Server Role Selection page (Figure 03) is the biggest change compared with Exchange Server 2013: in the new architecture of the product there are only two roles available, Mailbox and Edge Transport. Select Mailbox role and click Next.

Note: We are selecting Automatically install Windows Server roles and features that are required to install Exchange Server for safety reasons; however, we installed all prerequisites following the instructions in the first article of this series.

               Figure 03

On the Installation Space and Location page, the disk space requirements and availability will be displayed. We will use the default location to store the Exchange 2016 installation, which is C:\Program Files\Microsoft\Exchange Server\V15; click Next.

On the Malware Protection Settings page, malware protection is enabled by default; we will leave the default values and then click Next.

On the Readiness Checks page (Figure 04), all warning and error items will be listed; if there are no error messages, the setup process can continue. We got a warning about MAPI over HTTP, which is not currently enabled. Click on Install to start the installation process.

Note:
If you are curious about the checks performed by the Exchange Server 2016 Setup, the following link provides detailed information for all checks: https://technet.microsoft.com/EN-US/library/jj150508(v=exchg.160).aspx.

                  Figure 04
 
The final page of the Exchange 2016 setup will be similar to Figure 05, where the setup confirms the completion of the installation process; click on Finish.

               Figure 05

 

Adding an Exchange Server 2016 into an existent Organization (command-line)

We will explore the second method to install Exchange Server 2016, which is using the command-line. It is important to know that all options available in the Exchange Server 2016 setup wizard (and more, to be honest) are also available on the command-line. In order to identify all options for any specific action, we can use setup.exe /? to obtain more information about the available switches.

If you just want to install an Exchange Server 2016 with the Mailbox role using default values, the following command line will be enough (Figure 06).

Setup.exe /Mode:Install /Role:Mailbox /IAcceptExchangeServerLicenseTerms

    Figure 06

 

Certificate issues after adding a new Exchange Server 2016…

When the topic is certificates, the recommended best practice is to use a Public Certificate and split-brain DNS. By doing that, a single set of namespaces can be used for both internal and external web services on Exchange Server (the rule applies to Exchange 2016/2013/2010 and 2007).

In the current scenario of this article series, we have an Exchange Server 2013 configured to use webmail.patricio.ca for all web services, and the DNS has an entry for that same host pointing to the Exchange Server 2013 server. However, after completing the installation of the new Exchange Server 2016, the error message (Figure 07) will start to pop up on some of the clients, and if we look closely we will see the name of the Exchange Server 2016 that we have just introduced into our Exchange Organization.

                               Figure 07

The reason is that any new Exchange Server 2016 (the same rule applies to older versions of the product) will configure the AutoDiscoverServiceInternalURI attribute with the FQDN (Fully Qualified Domain Name) of the server, and since the certificate does not match that name, the result is that certificate error. The issue will occur on the internal network in the Active Directory site where the new Exchange Server 2016 was installed.

In Exchange Server 2016 we have a new cmdlet to retrieve the internal Autodiscover URI, which is Get-ClientAccessService (the former Get-ClientAccessServer is still valid but it will be removed in a future version).

In order to identify what is causing the issue in the configuration, we will run the following cmdlet and the results are shown in Figure 08.

Get-ClientAccessService | ft Name,AutoDiscoverServiceInternalURI -AutoSize

    Figure 08
 
The fastest way to solve the issue is to change the AutoDiscoverServiceInternalURI on the new server to point to the existing, valid URL; this way the clients will no longer receive certificate pop-up messages, and all traffic will go to that host.

After installing the certificate on the new Exchange Server 2016 and certifying that the new server is ready for prime time, the administrator can change the entry webmail.patricio.ca to point to the new Exchange Server 2016 box, and at that time the clients will not notice the change.
In order to change the URL, the following cmdlet can be used (replace EX2016 with the name of your new server):

Set-ClientAccessService -Identity EX2016 -AutoDiscoverServiceInternalURI https://webmail.patricio.ca/Autodiscover/Autodiscover.xml

 

Creating the Exchange Organization through setup wizard

An Exchange Organization is created during the initial installation of an Exchange Server, and the Organization is at the Forest Level and it will stay in your Active Directory forever (you can remove the configuration manually but that is not the point).

If this is the first time ever that Exchange Server is being installed, an additional step is required, which is the definition of the Organization Name. That setting will show up as a new page during the wizard (Figure 09), located between the Installation Space and Location page and the Malware Protection Settings page.

The installation process when creating a new organization is the same as when adding an additional server (the only exception is the additional page).

                   Figure 09

 

Checking the installation and basic troubleshooting…

Independent of the process used to install Exchange Server 2016, there are a couple of basic steps that can be used to test the brand new server. The first thing that the administrator will notice after the installation of the product is a new set of icons in the application list (Figure 10), and we will be using the Exchange Management Shell to perform a couple of tests.

                                          Figure 10

A basic cmdlet for troubleshooting and to get an overall status of the service is Test-ServiceHealth and we can see it in action in Figure 11. This cmdlet will list all services required for each component and it will help the administrator to identify services that are not running.

    Figure 11
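For example, to get a compact view of any required services that are not running, the output of Test-ServiceHealth can simply be piped to a table:

Test-ServiceHealth | Format-Table Role,RequiredServicesRunning,ServicesNotRunning -AutoSize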
 
In some cases, the Exchange Server 2016 setup may fail, and in order to identify the issue the first step is to check the log files created during the installation process, which can be found in the C:\ExchangeSetupLogs folder (Figure 12). Those log files contain a lot of information about the process, and if you take your time going through them you will probably be able to pinpoint the issue.

                 Figure 12

 

Conclusion

We went over the initial requirements to install Office Online Server (OOS), installed the product and performed additional steps to configure the server. After working on the Active Directory preparation and prerequisites, we completed our series by going over the process to install Exchange Server 2016 using both methods (command-line and setup wizard), and we finished the article by showing how the administrator can check the services after installation, avoid certificate pop-ups and check the logs created during the installation.





How to Perform a vSphere to Hyper-V Conversion

In this article, we will show you how to use another free tool that allows the migration of Virtual Machines from vSphere to Hyper-V using the free 5Nine V2V Easy Converter.


Three Reasons to V2V

Before I show you HOW to perform a V2V (vSphere to Hyper-V), let’s first answer the WHY question. Why would you want to move from vSphere to Hyper-V?
  1. Testing Hyper-V – let’s say that you want to prove that an application runs just as well (or even as poorly) on Hyper-V as it does on vSphere. You could do a V2V and try it out. You could also just test out Hyper-V, if you haven’t used it before, with an existing VMware vSphere VM.
  2. Conversion – when a company is bought, sold, or consolidated, there are always scenarios where you would want to move from one platform to another.
  3. Production – with VMware being so popular, many VMs are run in vSphere or saved in VMware’s formats. It is very likely that you will need to move production VMs from vSphere to Hyper-V.
Bonus reason: “because you want to”. Everyone should have the freedom and ease of moving a VM from one platform to another. After all, that is one of the things that virtualization first enabled – hardware independence, portability, freedom, and choice in your hardware – you should also have that in your virtualization option.

 

Downloading and Installing V2V Easy Converter

Recently, I came across a new V2V tool for going from vSphere to Hyper-V, currently in a beta preview mode. V2V Easy Converter from 5Nine is free (at least the beta preview is), simple to use, tiny to download, fast to install, and it works. Here’s what I did.

I downloaded V2V Easy Converter (which didn’t take long as it is under 2MB). It was the typical Windows install, only a little shorter because it is so small.

 

vSphere to Hyper-V VM Conversion, Step by Step

The general process of performing a V2V, no matter what the two platforms are, is:
  1. Capture the VM’s configuration and create a VM with the same configuration on the other hypervisor
  2. Copy the VM’s virtual disk and convert it to the other hypervisor’s virtual disk format (such as VMDK to VHD)
  3. Replace the virtualized device drivers in the guest OS (inside that virtual disk) with the virtual device drivers for the new hypervisor
No matter what V2V application you use, this is the general process that happens in the background.
To get started with this V2V conversion using our tool, I ran the icon in the Windows start menu called 5Nine EasyConverter. When the converter comes up, it shows you a series of blanks for you to fill out, across a few screens, in easy wizard-format.

It starts with the selection of the source. In other words, what ESXi server or vCenter server will we contact to find the virtual machine that you want to convert?

          Figure 1
 
It scanned the vCenter server I specified and then gave me a list of the VMs I could convert.

          Figure 2

Note that, today, these VMs must be Windows 2008 or Windows 7 (or newer) VMs and that they must be powered off. Later, it will be supported to convert older operating systems and, perhaps, powered on VMs.
I selected the VM I wanted to convert, gave the new VM a name, and specified the hardware resources it will consume over on the Hyper-V server I was moving it to.

          Figure 3
 
Next, I entered the name of my Hyper-V server and used the “assess” option to see if it was a suitable host for the VM that I want to convert there.

          Figure 4
 
From there, I took the defaults for the VM paths on the Hyper-V server and selected what virtual networks this new VM would have access to.

          Figure 5

Next, I reviewed a final summary of what was about to happen and opted to power on the VM once it was converted.

          Figure 6
 
After clicking Finish, the VM started its conversion. As this VM has a 200GB thinly provisioned virtual disk (that only had 11GB in use), I was curious to see how long this would take and exactly what would happen. It took a while (on my slow lab network) but I saw that the virtual machine had been converted (as in Figure 7, below).

             Figure 7
 
Over in Hyper-V Manager, I could see that a new VM had been created, and I could even see that the VM’s virtual disk had 12GB in use of the maximum 200GB (almost identical to the original).

          Figure 8
           Figure 9

I can even access the console of the converted VM to prove that it worked.

           Figure 10

At this point, our conversion from vSphere to Hyper-V is done!

 

What Did We Learn?

What did we learn through this V2V process?
  1. There are a number of reasons you would want to convert VMs from vSphere to Hyper-V and you should have an easy option to do so if you choose (freedom of choice, just like in a free market economy). Reasons include testing, learning, and consolidation of production VMs.
  2. That, besides System Center Virtual Machine Manager (a great product but a product with a price), there is another option that is tiny, free, and works if you are just looking for V2V conversion
  3. That converting a VM from one hypervisor to another doesn’t have to be difficult when you have tools to help

 

Summary

Freedom of choice in virtualization platforms is made easy when VMs can be easily moved from one platform to another, just as you would choose one restaurant over another if you were dissatisfied or to test the competition. Thanks to tools like EasyConverter, you have a free and easy option to move VMs from vSphere to Hyper-V if you choose to do so.

Note: I am a huge proponent of VMware vSphere, so I am not recommending that everyone move, simply that you have the option to do so if you want, just as you have the option to easily move from Hyper-V to vSphere using VMware Converter.




Installing and Configuring Citrix Provisioning Services 7.6 (Part 1)


In this article series we will look at installing and configuring Citrix Provisioning Services. We will install and configure the whole product, so at the end of the series we will have a fully functional OS streaming infrastructure.


If you would like to read the other parts in this article series please go to:

 

Introduction

Citrix Provisioning Services (PVS) entered the Citrix portfolio with the acquisition of Ardence back in 2006. Citrix Provisioning Services provides what can best be described as OS streaming. The Citrix Provisioning Services infrastructure has three basic components: one or more PVS servers, which take care of all the intelligence; a PVS Console to configure and manage the PVS infrastructure; and a so-called Target Device, which is a machine that will stream the OS from one of the PVS servers. In this first part we will start with the installation of all those components. We will start with the PVS server, followed by the PVS console and finally the Target Device software.

Installation PVS Server

The installation software is available as a separate download from the Citrix website. As PVS is not available anymore as a separate product, you need to have XenApp or XenDesktop licenses to fully use the program. When mounting the ISO the installation wizard will be started.

   Figure 1: Citrix PVS installation wizard

All installation options are shown (console, server installation and target device installation); for now we select server installation. The server installation can be executed on Windows 2008, Windows 2008 R2, Windows 2012 and Windows 2012 R2. It also requires that .Net Framework 4.0 and PowerShell 2.0 are installed on the machine. The server installation has some supporting component requirements, but those are installed automatically at the start of the installation wizard, as shown in Figure 2.

                 Figure 2: Citrix PVS required supporting software

Among these is the SQL 2012 client, which is optional; however, Citrix advises installing the SQL client when possible/allowed.

                     Figure 3: Would you like to install the SQL 2012 client

After the supporting software installation the PVS installation wizard continues with an informational window.

                 Figure 4: Welcome to the installation wizard for Citrix Provisioning Services

The following step is to read the license agreement and to accept these terms to continue with the installation wizard.
                 Figure 5: License Agreement

After the license agreement you need to provide the Customer Information and decide if the shortcuts are created in your own profile or in the “All Users” profile.

                 Figure 6: Customer Information

The next step is to select the Destination Folder where the actual PVS files are installed on the machine.

                 Figure 7: Selecting the Destination Folder

After selecting the destination folder, the actual installation can be started by using the Install button in the upcoming screen.

                 Figure 8: Ready to Install

At that moment the actual installation is executed. When this part is finished the last installation wizard will be shown stating that the installation wizard is completed.

                 Figure 9: Installation Wizard Completed

The installation will also show a message window that PVS Console is not detected. You can install the PVS console on any machine, so it’s not required to install the console on the PVS server. Logically it can be done and can be useful, but that’s up to you.

                            Figure 10: Console installation not detected

After this information message, the Provisioning Services Configuration Wizard is automatically started. This wizard can also be started later; it’s available as an application shortcut.

                 Figure 11: Provisioning Service Configuration wizard is started automatically

As you can see, Citrix separated the installation and configuration of the product. As Target Devices have a continuous connection to the PVS server, you need at least two PVS servers for fault tolerance purposes. For both servers the actual installation steps are identical. In this article series I have two PVS servers to show the fault tolerance and load balancing capabilities, so I executed the installation wizard on two systems.

Console Installation

For now I will continue with the installation steps by installing the PVS Console. As mentioned before, it can be installed on the PVS server, on a local machine (Windows 7, Windows 8 or Windows 8.1) or on an admin server (Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 or Windows Server 2012 R2). The PVS server should be reachable from the PVS console on port 54321 (more about this topic later in this article series). Also MMC 3.0, Microsoft .Net Framework 4.0 and PowerShell 2.0 are required for the console functionality.

As shown in Figure 1 the PVS console installation is available from the auto run.

                 Figure 12: PVS Console Installation Wizard

The installation wizard starts (again) with the license agreement, followed by the user information and selecting the destination folder for the actual files, just like the PVS server installation.

 


Figure 13: PVS Console installation wizard screens

After these standard wizard windows, you have the possibility to do a full installation or a custom installation.

                 Figure 14: A Complete or Custom installation

When choosing Complete, everything is installed; when choosing Custom, you can decide to install the Console only and skip the Boot Device Manager (BDM Creation Tool). I will go into more detail about BDM later in this series.

                 Figure 15: Custom Installation options

After deciding which items should be installed, the actual installation can be started.

                  Figure 16: Custom Installation options

When the installation is finished the installation wizard will show that the installation is done.

                 Figure 17: Custom Installation options

 

Target Device Installation

With Provisioning Services you need a base machine from which the vDisk is created. It’s actually a kind of imaging technique that’s being used. On this base machine the client software (Target Device software) needs to be installed. So you install this machine traditionally (hopefully using a software deployment system), including all the applications that are required, then start the Target Device installation.

The Target Device software can be installed on Windows 7 SP1 32 bit and 64 bit (Enterprise, Professional), Windows 8/8.1 32 bit and 64 bit (all editions), Windows XP Professional SP3 32 bit, Windows XP Professional SP2 64 bit, Windows Server 2008 R2 SP1 (all editions), Windows Server 2012 (all editions) and Windows Server 2012 R2 (all editions).

   Figure 18: Target Device Installation

The target device installation wizard is very similar to the console and PVS server installation wizards. The wizard starts with a welcome window, followed by the license agreement, the user information and the location where the software will be installed. Just as with the other installers, finally the Install button is shown, which starts the actual installation.




                                                   Figure 19: Target Device Installation Installation Wizard steps

When the installation is finished a new window will appear. Within this window by default the option Launch Image Wizard is selected, which will logically start the wizard for creating the vDisk image for the PVS OS streaming. This wizard will be discussed later in this article series, as it’s a requirement that the back-end is fully configured. This wizard can also be started later via a shortcut in the Start Menu.

                   Figure 20: Target Device Installation Wizard Complete

 

Summary

In this article series we will go through the installation and configuration of Citrix Provisioning Services 7.6, so at the end of the series you will have a functional PVS infrastructure including OS streaming to the Target Devices. In this first part we went through the installation steps of the PVS Server, the PVS console and the PVS Target Device. In the upcoming article we will continue with the configuration of the PVS Server.



Installing and Configuring Citrix Provisioning Services 7.6 (Part 2)


In this part of our article series we will continue with the configuration of the PVS Server. In the first part we went through the installation steps of the PVS Server, the PVS console and the PVS Target Device.


If you would like to read the other parts in this article series please go to:

 

Introduction

In this article series we will go through the installation and configuration of Citrix Provisioning Services 7.6, so at the end of the series you will have a functional PVS infrastructure including OS streaming to the Target Devices.

Configuration 1st PVS Server

At the end of the installation the Provisioning Service Configuration Wizard is automatically started, but can be started out of the Start Menu shortcut. The wizard starts with an informational screen.

                 Figure 1: Provisioning Services Configuration Wizard Introduction

PVS Target Devices normally use a DHCP IP address, as the OS streaming is based on an image. The first question in the configuration wizard is about the location of the DHCP services. In most cases the DHCP service will run on a separate computer (not on the PVS server), so that will be the most common answer. If you don’t have a DHCP server, the PVS software includes a very small, basic DHCP service.

                 Figure 2: Provisioning Services Configuration Wizard - DHCP Services

Secondly the wizard would like to know which PXE service you would like to use, if any (an alternate method called BDM can be used instead). If you are using PXE, I recommend using the PXE service of PVS to keep your set-up simple and understandable. However, you should be sure that no other PXE services are running in the same VLAN.

                 Figure 3: Provisioning Services Configuration Wizard - PXE Services

As this is our first PVS server we should create a new environment. An environment is called a farm within PVS, so we need to choose Create Farm.

                 Figure 4: Provisioning Services Configuration Wizard - Create Farm

PVS uses a SQL database to store its information. PVS 7.x supports SQL 2008 or higher. SQL Express is also supported, but logically that’s not recommended. In the next screen you specify the SQL server details. What’s really nice about PVS is the full support for database mirroring, including automatic failover if you specify the failover partner in the database configuration part.

                 Figure 5: Provisioning Services Configuration Wizard - Database

After specifying the SQL server information, the database name should be provided, including the names for the three levels within the PVS infrastructure (farm, site and collection). These names are only visible/used in the console and can be changed to something different without consequences after the initial configuration. Lastly we need to specify which AD group contains the accounts that will become the PVS administrators.

 Figure 6: Provisioning Services Configuration Wizard - New Farm

The next step is specifying the store name and store location. In the store location the vDisk(s) will be stored. You can use a UNC path, a shared LUN or local storage. Local storage is often used to avoid the data travelling over the network twice (from the store to the PVS server and from the PVS server to the Target Devices). However, using local storage implies that you need to arrange a sync between the PVS servers yourself, as PVS does not have a sync mechanism in place (often DFS-R is used for this specific task).

 Figure 7: Provisioning Services Configuration Wizard - New Store

Just like other Citrix products, PVS uses the Citrix License Server. In most circumstances this will be the same server used for XenApp or XenDesktop.

 Figure 8: Provisioning Services Configuration Wizard - License Server

Optionally a user account can be provided. This user account is required when using a shared LUN or UNC path to access the vDisk on that storage location. When using local storage the Network Service account can be used, and Citrix advises using the Network Service account when possible. Take into consideration that the account provided here will also be used to contact the database. If your database administrator does not allow computer accounts as SQL accounts, you should also specify a user account here.

 Figure 9: Provisioning Services Configuration Wizard - User Account

Just like user accounts, computer objects in AD have their own password, which is changed on a regular basis. PVS has functionality built in to keep this process working even though the same vDisk (image) is used by multiple devices. During the initial wizard you can specify the time frame in which this password needs to change. There is also a group policy setting that sets the password change interval; this policy and the configuration in PVS should match.

 Figure 10: Provisioning Services Configuration Wizard - Active Directory Computer Account Password
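To verify on a target device that the group policy and the PVS setting match, you can read the machine account password parameters that the "Domain member" policies write to the registry. A minimal sketch (these are the standard Netlogon parameter names; the values are only present when set by policy):

    # Read the machine account password settings on a (master) target device
    $params = Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters'
    $params.MaximumPasswordAge     # machine account password age in days
    $params.DisablePasswordChange  # 1 = machine-initiated password changes are disabled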

PVS is based on on-demand streaming. In the Network Communications window we need to specify which network card (or actually which IP address) is used for streaming the OS to the Target Devices and on which IP address management tasks are executed. The IP addresses are stored in the configuration, so changing an IP address later should be followed by reconfiguring PVS. The streaming IP address can be re-configured within the console, while the management IP address requires a re-run of the Configuration Wizard. If required, the communication ports can also be changed here.

 Figure 11: Provisioning Services Configuration Wizard - Network Communications

In many cases the TFTP service is used to load the bootstrap file. If you are using TFTP you need to check this option. The default file location provided is correct and normally requires no changes.

 Figure 12: Provisioning Services Configuration Wizard - TFTP and Bootstrap

Within the configuration you need to specify which PVS server(s) can be used for starting the Target Devices. You can specify between one and four servers. As every PVS server can fulfill this role, it's recommended to add as many PVS servers as possible (four at most). Besides the boot process, these IP addresses are also contacted when a booted Target Device loses the connection with its PVS server and needs to re-connect to the PVS infrastructure.

 Figure 13: Provisioning Services Configuration Wizard - Stream Servers Boot List

With this last configuration step the configuration wizard has all the information required. A summary of the configuration is shown. By pressing Finish the database will be created, the services will be configured and by default started automatically.

 Figure 14: Provisioning Services Configuration Wizard - Confirm configuration settings

When you are using the Windows Firewall you need to open the necessary ports manually; unfortunately the installation wizard does not take care of that. PVS uses several port ranges, so be sure you have all ports added to the Windows Firewall. During the wizard you will get a message reminding you about this when an active Windows Firewall is detected.

 Figure 15: Provisioning Services Configuration Wizard - Windows Firewall
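A minimal sketch of opening the ports with PowerShell, assuming the default PVS port ranges; verify the ranges against what you entered in the configuration wizard before applying this on your servers.

    # Default PVS ports (adjust to your own configuration)
    New-NetFirewallRule -DisplayName "PVS Streaming"    -Direction Inbound -Protocol UDP -LocalPort "6910-6930"   -Action Allow
    New-NetFirewallRule -DisplayName "PVS Inter-server" -Direction Inbound -Protocol UDP -LocalPort "6890-6909"   -Action Allow
    New-NetFirewallRule -DisplayName "PVS SOAP"         -Direction Inbound -Protocol TCP -LocalPort "54321-54322" -Action Allow
    New-NetFirewallRule -DisplayName "PVS TFTP"         -Direction Inbound -Protocol UDP -LocalPort 69            -Action Allow
    New-NetFirewallRule -DisplayName "PVS PXE"          -Direction Inbound -Protocol UDP -LocalPort 67,4011       -Action Allow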

If everything went fine, at the end of the wizard all activities executed by the configuration wizard show green checkmarks and the first PVS server is up and running.

 Figure 16: Provisioning Services Configuration Wizard - Finished
 

Configuration following PVS Server(s)

On the second (and following) PVS server you will start the same wizard. Normally you will answer the same questions as for the first PVS server: as in Figure 2, DHCP will run on another machine and, as in Figure 3, the PXE service will run on this machine. At the farm step, however, you will logically choose a different option than on the first server: the following servers will join the existing farm (created on the first PVS server).

 Figure 17: Provisioning Services Configuration Join existing farm

Next we need to provide the SQL server name as shown in Figure 5. As we are joining a farm, we need to select the farm. When more PVS databases are available on the same SQL server, you can choose which farm you would like to join.

 Figure 18: Provisioning Services Configuration Choose Existing Farm

The next decision is whether the PVS server will join an existing site or a new site is to be created. Each site needs at least one PVS server, while two are recommended for fault tolerance. I will use an existing site, so I can show the load balancing and fault tolerance options later in this article. You can also move a PVS server to a different site later using the management console.

 Figure 19: Provisioning Services Configuration Choose Site

The same applies to the Store. Here too you can use the existing store or create a new one. Which option you choose depends on your store set-up; mostly you will use the existing store.

 Figure 20: Provisioning Services Configuration Choose Store

Next we need to choose if we use the network service account or a specific account as shown in Figure 9. The same reasons apply for the selection as at the first PVS server. From this point the wizard is exactly the same as the first PVS server, so you follow the steps from Figure 9 also for the following PVS servers. After the wizard has fully run, the PVS server has joined the farm and we are ready to do some additional configuration, which I will describe later on in this article series.

 

Summary

In this article part I described the initial configuration wizard for the first and following PVS servers. Now that both servers have executed this initial wizard, we are ready to do the preparations for creating the vDisk, which will be described in the upcoming part.

If you would like to read the other parts in this article series please go to:




Installing and Configuring Citrix Provisioning Services 7.6 (Part 3)


In the second part I described the initial configuration wizard for the first and following PVS servers. Now that both servers have executed this initial wizard, we are ready to do the preparations for creating the vDisk, which we will do in this part, including using the vDisk with other Target Devices.


If you would like to read the other parts in this article series please go to:

 

Creating the vDisk

After the initial configuration in part two the PVS infrastructure is now ready for use. To show the complete workflow I will now continue with the steps on the Master Target Device to create a vDisk.

We already installed the Target Device software in part one. For creating the vDisk we need to start the Image Wizard (which can be started directly after the installation or out of the Start Menu). The Image Wizard starts with a welcome screen.

 Figure 1: Welcome to the Imaging Wizard

The first information we need to provide is the name or IP address of one of the PVS servers, so the wizard can contact the PVS infrastructure. If you are logged in with limited credentials you can also provide other credentials that have PVS administrator rights.

 Figure 2: Welcome to the Imaging Wizard

It’s possible to create the vDisk in advance using the PVS management console, but this can also be done via the Image Wizard. If you already have a vDisk available you can select it here, otherwise you choose Create new vDisk.

 Figure 3: Select New or Existing vDisk

The new vDisk requires a unique name. When you have more stores available you also need to select the store where the vDisk should be created. Lastly you need to select whether to use a fixed or dynamic disk. Nowadays I recommend using a dynamic disk, as test results show that there is no longer a performance difference between the two types and dynamic requires less disk space. When choosing Dynamic you also need to provide the block size by which the vDisk grows; I always choose 16 MB.

 Figure 4: Create a New Disk

The next question is about Microsoft Volume licensing. I recommend checking the excellent article Enabling KMS Licensing on a vDisk by Ingmar Verheij when you are using KMS. Choose which system you are using in your organization.

 Figure 5: Microsoft Volume Licensing

In the following screen you need to specify which disk(s) you want to have in the vDisk. It's possible to select multiple partitions if that's required (for example when you install all applications on a separate partition). Logically the drive with the operating system installed should always be selected. You can also tweak the size of the vDisk within this part of the wizard.

 Figure 6: Configure Image Volumes

Logically the machine should be part of the PVS infrastructure. Therefore the machine should be provided with a Target Device Name. It’s important to provide a different name than the actual computer name (in Active Directory). I normally just add MTD, but you can use any name. Also the network card should be selected and a Device Collection where the machine will be added to. I recommend creating a separate Device Collection for machines used as Master Target Device.

 Figure 7: Add Target Device

The wizard is almost finished now. A summary of the provided settings is shown. Within this window there is a button called Optimize for Provisioning Services. Behind this button several settings are available for tweaking the imaging. Be careful about accepting all values by default and check, for example, your organization's security requirements. A good example is Windows Search: if you are running Outlook, Windows Search is used for searching e-mails within Outlook, so it would not be smart to disable Windows Search in the image.

 Figure 8: Provisioning Services Device Optimization Tool

After the optimization phase we can end the wizard using the Finish button.

 Figure 9: Summary of Farm Changes

After pressing the Finish button the actions are executed like creating the vDisk and creating the device within the PVS infrastructure. When that part is finished it’s time to restart the machine to start the actual imaging process. Remember that at this phase the machine should connect to the PVS infrastructure using the PXE or BDM option.

 Figure 10: Restart the Master Target Device

Afterwards the machine is connected to the PVS infrastructure using PXE or BDM and is fully booted again. We can logon again and the image wizard will be started automatically, showing the progress bar.

 Figure 11: Imaging Progress Bar

When the imaging process is done, a screen is shown indicating that the conversion is completed.

 Figure 12: Imaging Process Finished

The machine will show its normal interface; you will also see an additional disk connected to the system, but that disk is no longer needed. The next step is to shut down the machine, so the disk image is not in use anymore.

The next step is executed in the console. It's time to change the vDisk from private mode (so it can be written to) to standard/shared mode (so the vDisk can be used by multiple Target Devices). This is done by selecting the properties of the just created vDisk in the vDisk pool within the site. Here you need to change the access mode from private to standard and select the location where you would like to store the Write Cache. It's a step too far to explain the different options with their characteristics here; please check my article series about designing PVS on my personal website. More settings can be changed, but for simplicity I only show the required changes for now.

 Figure 13: Changing vDisk properties

When the vDisk is reconfigured we can start machines using the vDisk for OS streaming. Therefore the machine needs to be known within the PVS infrastructure. This is done within a Device Collection by selecting New Device.

You need to provide a name (this will be the name of the computer when booted) and the MAC address. For now configure Type: Production and Boot From: vDisk. I will discuss the Type later in this article series. Also you need to assign the vDisk to the device within the vDisk tab.

 Figure 14: Creating Target Device
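The same Target Device creation can also be scripted. Below is a minimal sketch, assuming the Citrix.PVS.SnapIn PowerShell snap-in and the cmdlet and parameter names shown here (treat these names as assumptions and verify them with Get-Command for your PVS release); the device, collection, site, store and vDisk names are hypothetical.

    Add-PSSnapin Citrix.PVS.SnapIn

    # Create the device and assign the vDisk to it (names are examples only)
    $device = New-PvsDevice -Name "TD001" -CollectionName "Production" -SiteName "Site01" -DeviceMac "00-15-5D-01-02-03"
    $disk   = Get-PvsDiskLocator -DiskLocatorName "vDisk-W2012R2" -SiteName "Site01" -StoreName "Store01"
    Add-PvsDiskLocatorToDevice -DiskLocatorId $disk.DiskLocatorId -DeviceId $device.DeviceId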

When the Target Device is created, we need to do one more step and that is creating the machine account in AD (so the machine is able to use the domain). Select the Target Device, right click the mouse button and select the option Active Directory – Create Machine Account.

 Figure 15: Create Machine Account

In the next screen you can specify in which OU the account should be placed and the machine account will actually be created. This is executed under the account which is logged into the console, so be sure that the account has the required privileges to create machine accounts.

 Figure 16: Create Machine Account

Now we are ready to start the machine, which connects to the PVS infrastructure (via PXE or BDM), and the operating system is streamed to the device. The boot process looks like that of a normal machine and you can log in with a domain account. The machine properties will show the name provided within the PVS console, joined to the domain. Now we have a machine running on a vDisk provided by the PVS infrastructure. From here you can add more Target Devices and boot those from the same vDisk.

 Figure 17: Machine started using a vDisk

 

Summary

In this article we created a vDisk using a Master Target Device. After creating the vDisk we configured the disk in the standard mode. Next we created a Target Device and added that one to the Active Directory. As the last step we booted the Target Device and streamed the OS to this machine. In the upcoming and last article I will discuss some advanced configurations and the vDisk update possibilities.

If you would like to read the other parts in this article series please go to:


Installing and Configuring Citrix Provisioning Services 7.6 (Part 4)


In this last article of our series I will discuss some advanced configurations and the vDisk update possibilities.

If you would like to read the other parts in this article series please go to:

 

Introduction

In the previous articles I described the installation and configuration of the Citrix Provisioning Services back-end servers. After that we created a vDisk via the Master Target Device. After the creation we added a Target Device, which got its operating system via OS streaming by Citrix Provisioning Services. In this article I will go one step further and discuss some advanced configuration options, including the update possibilities of the vDisk.

Advanced Load Balancing

As described in one of the earlier articles, Citrix Provisioning Services (PVS) has built-in load balancing. By default PVS load balances the Target Devices between all available PVS servers within the PVS Site. So step one of advanced load balancing is to set-up several sites and add specific PVS servers per site. 

However, the same result can be achieved by configuring the standard Load Balancing with some additional settings. When selecting the Load Balancing option in the right mouse button menu of a vDisk, you can configure the load balancing algorithm.

 Figure 1: Load Balancing algorithm

Within the load balancing algorithm there is a subnet affinity option. By default this is configured with the option “None”. With this setting all PVS servers within the site can offer the streaming service to the Target Devices. Besides “None” there are two other settings available: Best Effort and Fixed. 

When Best Effort is chosen, the load balancing first checks the IP subnet of the Target Device and determines if one or more PVS servers are available within the same IP subnet. If that is the case the Target Devices are redirected to a server within the same IP range with the least load. If no server is available within the same subnet an arbitrarily available PVS server is chosen (with the least load). 

Logically you need a set-up where both the PVS Target Devices and the PVS servers (which should be the primary contact point for that VLAN) are located in the same IP range (VLAN). The second option is Fixed, which works almost the same as Best Effort in the first phase: the load balancing first checks the IP subnet of the Target Device and determines if one or more PVS servers are available within the same IP subnet. If that is the case the Target Device is redirected to the server within the same IP range with the least load. If no server is available, the Target Device will not connect to the PVS infrastructure. This is similar to using several PVS sites.
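To make the decision order explicit, here is a conceptual sketch of what Best Effort does when a Target Device boots. This is not Citrix code, just an illustration; $servers is a hypothetical list of objects with Subnet and Load properties.

    function Select-StreamServer($servers, $deviceSubnet) {
        # Prefer the least-loaded server in the device's own subnet
        $local = $servers | Where-Object { $_.Subnet -eq $deviceSubnet }
        if ($local) { return $local | Sort-Object Load | Select-Object -First 1 }
        # Best Effort: otherwise fall back to the least-loaded server anywhere
        # (with Fixed, this fallback does not exist and the device does not connect)
        return $servers | Sort-Object Load | Select-Object -First 1
    }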

 

Rebalance

At the same configuration part as the Load Balancing you can also configure (per vDisk) automatic rebalance. With rebalance enabled the PVS infrastructure checks every five minutes if the Target Devices are proportionally divided between the available PVS servers.

 Figure 2: Rebalance Enabled

If the spread is higher than the configured percentage, a rebalance will be triggered to distribute the load equally again. For example, with two servers serving 80 and 20 devices respectively (a fair share of 50 each), the deviation from the fair share is large and a rebalance would kick in as soon as it exceeds the configured percentage.

Using the automated rebalance has both advantages and disadvantages. The advantage is that the system administrator doesn't have to take care of the load balancing at all. However, there are situations where a server carries almost no load because it has issues, and in that case it's probably not a good idea for that server to automatically start servicing Target Devices again.

Delegation of Control

Within PVS, delegation of control can be configured. Separation in the available rights can be configured at farm, site and device collection level. At Device Collection there are two roles available: Device Operator and Device Administrator. Configuring the delegation of control is a bit different than in other products. First you need to add the persons and/or groups within the product via the Groups tab within the Farm properties.

 Figure 3: Adding groups to PVS for adding delegation of control.

The persons/group added there can be selected at the Security tabs at the different levels.

 Figure 4: Configure Delegation of Control

 

Auditing

Another strong point of PVS is the auditing feature, which records the changes made to the environment (configurable via the Farm properties). The real strength of this auditing option is that the changes are shown at each level within the console. For example, when choosing the auditing of one PVS server, only the changes of that specific server are shown. However PVS goes one step further: by choosing the Changes button, you can see exactly which settings were changed, including the previous and current value.

 Figure 5: Auditing options

 

Auto Add

One other nice feature within PVS is the Auto Add feature. With this auto add feature new devices are automatically added to the PVS infrastructure when the device connects to one of the PVS servers. Based on the Device Collection properties the target device is automatically created and configuration is copied from a template target device.

 Figure 6: Auto Add configuration at Device Collection level

Remember that the configuration for auto add is available on three levels. At Farm level you enable auto add and configure which site should be used, at site level you specify which device collection should be used, and lastly at device collection level you specify the template and naming convention.

Also be careful with enabling the auto-add feature especially when using PXE. Every device that connects via PXE with the PVS server is automatically added and I can imagine that is not desirable.

 

vDisk update

Last but definitely not least are the vDisk update options. In previous versions (before version 6.x) the update mechanism was pretty complicated: building a new vDisk and then configuring that vDisk exactly the same. Updating via a new vDisk is still one of the possibilities, however PVS now offers versioning functionality. In the right-mouse menu of the vDisk there is an option called Versions.

Within this option you can create a new version. This version is automatically set in maintenance mode. Within this mode you can make changes to the vDisk, for example adding a new installation, installing security updates, removing an application or installing application updates.

 Figure 7: vDisk versions

This is accomplished by starting one of the target devices connected to this maintenance vDisk version. Therefore the Target Device should be set as Type Maintenance. The target device with type maintenance will have a boot menu where the maintenance version can be chosen. When the Target Device is booted, the changes can be made and the Target Device can be shut down (remember that some of the preparations should be done again, like creating unique registry keys for supporting applications).

 Figure 8: Type Target Device

After the shutdown the vDisk version can be changed to test. In this state the vDisk is again read only (after a reboot changes made are gone). Next step is to configure a Target Device as type Test and start this Target Device from the test version. On the target device you can confirm/test that all changes made in the maintenance version are available and working correctly.

If no issues were found in the test phase, we can promote the version to the production state. You can specify that this change is effective immediately or at a configured date. Luckily the active devices are not rebooted directly; when they are restarted they will use the new version. In this way you can easily make changes to production. Another advantage is that in case something is wrong in this version, you can easily roll back. In that case we revert the version to a previous state (test or maintenance) and the old version is active/production again. However, active Target Devices using the version that is reverted will be shut down. This can cause downtime and should be considered thoroughly, including communication to the active users.

Those steps can be automated using vDisk Update Management. First your Target Devices need to be virtual machines running on Citrix XenServer, VMware ESX or Hyper-V; secondly, for configuring the updates you need WSUS, SCCM or self-written scripts. When this is set up, a task will run automatically using the versions. However, the default time window is pretty short (30 minutes) and you cannot test the result accurately in advance. Therefore I don't see this option used much in production environments. To be fully in control, use the manual version steps (which can be automated using the PVS PowerShell cmdlets as well; see the sketch below Figure 9).


 Figure 9: vDisk Update Management
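For reference, a minimal sketch of the manual version workflow driven from PowerShell. The snap-in and the cmdlet names shown (New-PvsDiskMaintenanceVersion, Invoke-PvsPromoteDiskVersion) are assumptions based on the PVS 7.x SDK, so verify them with Get-Command *PvsDisk* for your release; the vDisk, site and store names are hypothetical.

    Add-PSSnapin Citrix.PVS.SnapIn

    $disk = Get-PvsDiskLocator -DiskLocatorName "vDisk-W2012R2" -SiteName "Site01" -StoreName "Store01"

    # 1. Create a maintenance version, then boot/patch/shut down a maintenance device
    New-PvsDiskMaintenanceVersion -DiskLocatorId $disk.DiskLocatorId

    # 2. Promote to test, validate with a test device, then promote to production
    Invoke-PvsPromoteDiskVersion -DiskLocatorId $disk.DiskLocatorId -Test
    Invoke-PvsPromoteDiskVersion -DiskLocatorId $disk.DiskLocatorId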

 

Conclusion

In this fourth and last part of the article series about Citrix Provisioning Services I discussed some advanced configuration options like Load Balancing, delegation of control, auto add, rebalance and auditing. Last topic was describing the basic steps of updating the vDisk using the versions functionality.

If you would like to read the other parts in this article series please go to:



Installing and Configuring Citrix XenApp/XenDesktop 7.6 (Part 1)


In this first part of the article series I'll start by explaining that XenApp 7.5 and XenDesktop 7.5 are actually the same product and therefore also the same installation and configuration steps apply. After this explanation I'll describe the installation steps of the Delivery Controller.


If you would like to read the other parts in this article series please go to :
  • Installing and Configuring Citrix XenApp/XenDesktop 7.6 (Part 2)
  • Installing and Configuring Citrix XenApp XenDesktop 7.6 (Part 3)
  • Installing and Configuring Citrix XenApp/XenDesktop 7.6 (Part 4)
  • Installing and Configuring Citrix XenApp/XenDesktop 7.6 (Part 5)

 

Introduction

With the first release of version 7, Citrix announced that the XenApp product was at end of life and that its functionality was integrated into XenDesktop 7.0. Unfortunately for Citrix, customers did not understand this message and there was lots of confusion around it. Citrix responded by re-introducing XenApp in version 7.5.

Actually this brought back the original product name, now based on the new FMA architecture. There are only two license models available: one for XenApp and one for XenDesktop. The Citrix XenDesktop/XenApp feature matrix provides good insight into the differences between the licenses. With the release of XenApp/XenDesktop 7.6, many features that were available in XenApp 6.5 but not in the previous 7.x releases have been introduced again, so the 7.6 release is a product truly comparable to XenApp.

Explaining what XenApp and XenDesktop are becomes a bit more complicated, especially because, from a technical standpoint, it's the same product. In this article I'm going to describe the installation and basic configuration steps for XenApp/XenDesktop 7.6. I try to use the name XenDesktop 7.6 throughout the article, but hopefully you understand that you can read XenApp here as well (and if I use XenApp by accident, you can also read XenDesktop).

 

Installation of Delivery Controller

The installation starts with a screen where you need to choose whether you would like to install XenApp or XenDesktop. As described in the introduction you are actually installing the same product, only a different name is shown in the installation window.

 Figure 1: Choose between XenApp or XenDesktop

After choosing which “product” you would like to install, the available options are shown. Each XenDesktop infrastructure requires at least one Delivery Controller. For the XenApp 6.x people this is comparable with a XenApp Controller Host (aka Data Collector). The server/desktop that will host the desktop and/or applications is now called a Virtual Delivery Agent (VDA), but I will dive deeper into that later on in this article series.

As we are setting up a new XenApp 7.6 infrastructure, we start by installing a Delivery Controller via the Delivery Controller "button". You could also browse the DVD ISO and pick the installer yourself; however, keep in mind that you are then responsible for installing the required supporting components. Using the installation wizard, those prerequisites are installed automatically.

 Figure 2: The installation options offered by the XenDesktop installation wizard.

The installation wizard continues with the installation steps for the Delivery Controller. The first step is accepting the license agreement.

 Figure 3: Accepting the license agreement.

The next step is to choose which components of the XenDesktop Suite you would like to install on the Delivery Controller server. You can install all offered components on the same server; however, in (larger) production environments you would separate some of these components. Let's do a quick walk-through of the offered options and my recommendations.
  • Delivery Controller
This is the core functionality which assigns users to a server/desktop hosting the chosen Desktop or Published Application(s). This is a required component for a Delivery Controller.
  • Studio
Studio is the Management Console of XenDesktop 7.x. Within this console the whole configuration is executed. This can be installed on the Delivery Controller and/or on a separate Admin Server. I always install the Studio on the Delivery Controller also, but it is not required.
  • Director
Director is the second (Management) Console of XenDesktop 7.x. This console is available for monitoring and troubleshooting purposes and is built on top of Internet Information Services. In production environment I do not install this component on the Delivery Controller, but on a separate server (often combined with StoreFront).
  • License Server
Each Citrix product requires a License Server. The license server is often already available in the environment (an upgrade of the license software may be required). I normally install this on a separate server, often together with the RDS License Server.
  • StoreFront
StoreFront is the access point for end-users connecting to the XenDesktop infrastructure. StoreFront is the successor of Citrix Web Interface. StoreFront is also built on IIS and I normally install it on one or more separate servers.

 Figure 4: Selecting the core components to install on the Delivery Controller.

For this article series I will install all components on the Delivery Controller, except the License Server as I already have one running. You can see in Figure 4 that Citrix shows a nice warning when de-selecting an option, reminding you that the component needs to be installed at least once. When you decide to separate the components, you can find them as separate installers in the screen shown in Figure 2.

After selecting the core components you need to select the features you would like to install. Citrix XenDesktop requires an SQL database. I recommend using a dedicated SQL server, but for a Proof of Concept you could use SQL Express on the first Delivery Controller. If you install Director you should also install the Windows Remote Assistance feature, so you can shadow end-users from the Director console. I have an SQL server available, so I don't need the SQL Express edition.

 Figure 5: Selecting the features to install on the Delivery Controller.

The Delivery Controller uses a few ports for communication (80 or 443). The installation wizard offers to automatically configure the Windows Firewall to allow these ports. You can also decide to configure those manually (but why would you?).

 Figure 6: Let the installation configure the Windows firewall rules.

 Figure 7: Let the installation configure the Windows firewall rules.

As Citrix nowadays separates the installation and the initial set-up, the wizard is now complete and ready to start the installation. The summary window shows the selected components, features and firewall settings. The installation directory is also mentioned, as well as the supporting components (prerequisites) that will be installed automatically.

 Figure 8: Summary of the installation wizard including the prerequisites that will be installed.

During the actual installation a nice progress overview is shown, including the time remaining before the installation is finished.

 Figure 9: The installation progress.

After the actual installation phase the wizard will show that all components are installed and offers to start the Studio console to start the initial set-up.

 Figure 10: Installation phase finished.

The XenDesktop Delivery Controller can be installed on Windows Server 2012 or Windows Server 2012 R2 operating systems. As all communications are executed through the Delivery Controller, in a production environment you would install at least two Delivery Controllers. As the installation no longer contains any configuration, the installation steps are exactly the same for the second (and any further) Delivery Controller. In the next article I will describe the initial setup and show the differences in the set-up of the first Delivery Controller and the next Delivery Controllers.

 

Summary

In this first part of the article series "Installing and Configuring Citrix XenApp 7.6" I started trying to explain that XenApp 7.5 and XenDesktop 7.5 are actually the same product and therefore also the same installation and configuration steps apply. After this explanation I described the installation steps of the Delivery Controller. In the upcoming article I will continue with the initial set-up of the Delivery Controllers.



Installing and Configuring Citrix XenApp/XenDesktop 7.6 (Part 2)


In this part two of our article series I will continue with the initial set-up of the XenDesktop 7.6 infrastructure.


If you would like to read the other parts in this article series please go to :

 

Introduction

I started this article series trying to explain that XenApp 7.5 and XenDesktop 7.5 are actually the same product and therefore also the same installation and configuration steps apply. After this explanation I described the installation steps of the Delivery Controller.

 

Initial Set-up for first Delivery Controller

In part one of the article series I installed the Delivery Controller software. As shown in part one you can start Studio for this initial setup directly. When you unchecked that option, you can just use the Studio Console to start the initial set-up.

When Studio starts it offers three options: Site Setup, Remote PC Access and Scale your deployment. I won’t discuss Remote PC Access in this article series and as this is our first Delivery Controller we need to choose Site Setup by selecting the option Deliver Applications and Desktops to your users.

 Figure 1: Initial start-up of Citrix Studio, choosing Deliver application and desktop to your users to set-up the XenDesktop infrastructure

The Site Setup wizard starts by asking you to either create an empty, unconfigured Site or a fully configured Site. For this article I will use an empty, unconfigured Site as this makes it easier to explain the configuration from the console in this article series.

 Figure 2: Site Setup Introduction: empty site or a fully configured site

The next step is specifying the database server and database name. As this is the first Delivery Controller, a database will not be available yet. You can specify a name for the database (the wizard will suggest one based on the Site Name filled in on the previous screen). There are two options to create the database: you can continue the wizard or create a database script. The database script can be handed to the database administrator if you don't have (enough) rights on the database server to create the database via the installation wizard. I have sufficient rights, so for this article I will use the option to create the database via the installation wizard.


Figure 3: Providing database server and database name information
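If you hand the generated script to a database administrator, it is typically executed with sqlcmd (or SQL Server Management Studio). A minimal sketch with hypothetical server and file names:

    # Run the Studio-generated site database script against the SQL server
    sqlcmd -S SQL01 -E -i C:\Temp\CreateXenDesktopSite.sql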

Continue with the wizard. A message will be shown that the database is not available and that via the OK button the database can be created.

 Figure 4: No database was found, create the database automatically?

Providing the license information is the next step. If you already have a license server running, you will probably have to update your current license server software with a new version. As shown in part 1 you could have also installed the license server on the same machine. You can also use the 30-day trial (you don’t have to specify a license server at all using this option).



 Figure 5: Providing license information

After the licensing information a summary is shown before the actual configuration starts. In this screen Citrix also asks if you don’t mind sending statistics and usage information.


 Figure 6: Site Setup Summary

During the configuration phase a progress bar is shown.


 Figure 7: Studio Configuration progress bar

When the progress bar disappears the site is created. In the Studio Console all kind of options are available. We will come back to this part later in this article series. First we will add a second delivery controller.

 Figure 8: XenDesktop Site created on the first Delivery Controller

 

Initial setup following Delivery Controller(s)

As mentioned in part one, the Delivery Controller is the heart of the XenDesktop infrastructure, so logically you want this component highly available and fault tolerant. This can be done pretty easily; just add one or more Delivery Controllers to the infrastructure.

Also for these Delivery Controllers this is done by starting the Studio console. For adding a Delivery Controller to the just created site we need to choose the obvious button: Connect this Delivery Controller to an existing Site.

 Figure 9: Connect this Delivery Controller to an existing Site

The first step is to specify a Delivery Controller in the site this Delivery Controller should be joining.

 Figure 10: Select Site

After the Delivery Controller has made a connection with the specified Delivery Controller, Studio asks if you would like to update the database automatically. If you choose No, the wizard generates an SQL script which can be used to add the Delivery Controller information to the database. When you choose Yes, the database will be updated automatically.


 Figure 11: Update the database automatically or manually

When the database is updated the server is added as a controller within Citrix Studio and also the Studio on the added Delivery Controller will show the site information. Again for now we will leave the Studio console and execute the basic configuration of the StoreFront component, so users can connect to the environment at the end of the article series.


 Figure 12: Update the database automatically or manually

To configure the StoreFront component you need to start the corresponding console, called Citrix StoreFront. As this is the initial set-up, the easiest way is to use the Create a Store button in the main window.


 Figure 13: Citrix StoreFront console first start-up

With this button the Create Store wizard will be started. The first step is providing a name for the Store, this name will be shown to the end users and will be part of the URL.


Figure 14: Citrix StoreFront Store Name
The second step is providing the earlier installed and configured Delivery Controllers. With the StoreFront 2.6 release Citrix finally made it more clear which type you have to choose as StoreFront also supports older XenApp releases. For XenApp/XenDesktop 7.6 you logically need to choose XenApp 7.5 (or later), or XenDesktop. Also remember that the Delivery Controller without additional configuration communicates over HTTP.

 Figure 15: Providing the Delivery Controllers

Finally you need to specify the way StoreFront provides access to the environment. Three options are available below Remote Access: None, No VPN Tunnel and Full VPN Tunnel. You choose None if you would like to use StoreFront without a Citrix NetScaler Gateway, in other words when the end-user directly types in the URL of the StoreFront server, for example for internal access. When using the Citrix NetScaler Gateway you choose between No VPN Tunnel and Full VPN Tunnel, where No VPN Tunnel provides access to the XenDesktop infrastructure only and a Full VPN Tunnel, as the name implies, sets up a standard VPN connection tunnel.

Figure 16: StoreFront (Remote) Access

After pushing the Create button the Store and the corresponding website are created. The window mentioning the successful creation also shows the URL to be used for web access. It's worth mentioning that during the initial set-up of the Delivery Controller, a StoreFront configuration was also executed; I guess the wizard "noticed" that StoreFront was installed locally and configured it automatically. So if you install StoreFront on the same server, you don't have to execute the steps shown above.

Figure 17: StoreFront Store created

 

Summary

In the first part of the article series, the installation of the Desktop Delivery Controllers was discussed; in this part we executed the initial setup of the first and following Desktop Delivery Controllers. The last topic was the basic configuration of the Citrix StoreFront component, so that at the end of the article series users can connect to the XenDesktop infrastructure. In the upcoming part we will continue with the installation of the VDA agent, followed by the creation of a basic XenDesktop environment.

If you would like to read the other parts in this article series please go to :



Installing and Configuring Citrix XenApp/XenDesktop 7.6 (Part 3)


In this third part we will continue with the installation of the VDA agent, followed by the creation of a basic XenDesktop environment.


If you would like to read the other parts in this article series please go to :

 

Introduction

In the first part of the article series we installed the Desktop Delivery Controller role, while in the second part we executed the initial setup of the first and following Desktop Delivery Controllers and the basic Citrix StoreFront configuration. In this third part we will continue with the installation of the VDA agent, followed by the creation of a basic XenDesktop environment.

 

Installation of the Virtual Desktop Agent

There are two VDA installers available: one for Windows Server OS and one for Windows Desktop OS. The installation wizard checks the OS and only shows the applicable VDA option. The Windows Server OS VDA is supported on Windows 2008 R2 SP1, Windows 2012 and Windows 2012 R2. The VDA for Desktop OS can be installed on Windows 7 SP1, Windows 8 and Windows 8.1. In this article I will also show the installation steps for the Windows Desktop OS, but I will use the Windows Server OS VDA for the purpose of this article (almost all license editions support the Server OS features). The set-ups are pretty similar, so it does not make a big difference; I will mention which parts are Desktop OS only.

 Figure 1: XenDesktop Installation Wizard start-up screen

The first question in the installation wizard is about the way the VDA will be provisioned. When using Citrix deployment techniques like MCS (Machine Creation Services) or PVS (Provisioning Services) you install the VDA in the master image only, and the Citrix technology takes care of making each machine unique. When you don't have such a deployment, you choose the option that the machine connects directly to a server machine. I don't want to make this article series too complex, so I won't use MCS or PVS and we focus only on XenDesktop 7.6.

 Figure 2: Create a Master Image or Connect Directly to a server machine

In the Desktop OS installation wizard one additional step is shown: do you want to use the VDA for HDX 3D Pro? If you want HDX 3D Pro you logically need to select this option, but also ensure that the required (hardware and software) components are in place. As already mentioned, I am actually using the Server OS installation for this article, so I don't have to choose an option here.

 Figure 3: HDX 3D Pro in the Desktop OS VDA installation

The following step is again available in both installers, where you choose which core components need to be installed. The Virtual Delivery Agent is logically required; you can choose if you would like to install the Citrix Receiver also. Be careful with this one as it installs a Receiver 4.x version (which does not support the PNAgent functionality anymore).

 Figure 4: Choosing the Core Components to be installed

Next we need to provide the Delivery Controllers. The wizard offers four options:
  • Do it later
Do not provide a delivery controller at this moment. You need to use one of the other possibilities after the installation of the VDA agent.
  • Do it manually
Provide the Delivery Controllers during the wizard, right now. This is the easiest method, but less flexible: when you assign the Delivery Controller role to other servers, you need to change the setting locally on each VDA (see the registry sketch below Figure 5).
  • Choose locations from Active Directory
You can add a Service Connection Point (SCP) and a security group to Active Directory, so the VDA can find the Delivery Controllers more dynamically. I would suggest using this option in large production environments.
  • Let Machine Creation Services do it automatically
Machine Creation Services (MCS) can provide the Delivery Controller information; you can only use this option if you use MCS to deploy the VDA agents.

 Figure 5: Providing the Delivery Controllers information
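With the manual option, the controllers end up in the ListOfDDCs registry value on the VDA, which you can also set or correct afterwards (for example via GPO or a script) instead of re-running the installer. A minimal sketch with hypothetical controller names:

    # Point the VDA at the Delivery Controllers and re-register
    $regPath = 'HKLM:\SOFTWARE\Citrix\VirtualDesktopAgent'
    Set-ItemProperty -Path $regPath -Name 'ListOfDDCs' -Value 'SRV-DDC01.contoso.local SRV-DDC02.contoso.local'
    Restart-Service BrokerAgent   # Citrix Desktop Service, so the VDA re-registers with the controllers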

The next step is choosing which features should be installed/enabled. Really consider the option Optimize performance and read the corresponding knowledgebase article (http://support.citrix.com/article/CTX125874) in advance. It's possible that the optimization conflicts with requirements in your organization, as it's fully focused on the best performance rather than on the offered visuals and/or functionality. I would normally enable the other options by default.

 Figure 6: Deciding to enable which features

To communicate with the Delivery Controller to offer the Remote Assistance feature and to use Real Time Audio, several communication ports are required. Just as with the Delivery Controller installation you can set those Firewall Exceptions manually or automatically during the installation process.

 Figure 7: Configuring the Firewall

A summary is shown with the installation settings provided including the destination location of the actual installation.

 Figure 8: Summary of the VDA installation wizard

Just like in the Delivery Controller wizard, a nice progress window is shown during the actual installation phase.

 Figure 9: Finish installation VDA

When not using PVS or MCS technology you will install the VDA on more servers or desktops to create a pool of available machines. When the VDAs are installed we can start configuring the XenDesktop environment. As shown earlier you can do that via the initial set-up, which creates the basic configuration; I have chosen to create it manually to provide better insight into what needs to be configured.

 

Creating a Machine Catalog

The first step is to create a Machine Catalog. You should see a Machine Catalog as a group of VDAs, which will be pulled out of this catalog during further configuration. During this further configuration those machines will be picked automatically (you cannot select which VDA you would like to use), therefore it's a best practice to create a Machine Catalog that holds VDAs offering the same configuration. Creating a Machine Catalog is done through the Studio console, where both the Actions menu in the right pane and the right-mouse menu can be used to start this task.

 Figure 10: Starting the Create Machine Catalog wizard

The first screen of the Machine Catalog Setup describes pretty well what you should have done before starting this wizard. If you have done this before you probably don't want to see this information anymore; luckily there is an option to not show this screen again.

 Figure 11: Introduction Machine Catalog Setup

Secondly you need to choose whether the Machine Catalog consists of Server OS VDAs or Desktop OS VDAs. It's not possible to mix those OS types in one Machine Catalog. As I installed the VDA on a Server OS, I will check Windows Server OS.

 Figure 12: Selecting Operating System VDAs

After selecting the Operating System you need to provide information about machine management. As seen during the VDA installation, XenDesktop 7.6 can be combined with the Citrix technologies MCS and PVS, and XenDesktop should know which technique (if any) you are using. When you select one of these technologies you can also choose to use Power Management. As I don't use MCS or PVS in this article, I need to select another service or technology.

 Figure 13: Machine Management options

The next step is to add the VDAs to this Machine Catalog. This screen really depends on the machine management option you are using. For example, with PVS machine management you will see the Device Collections of your PVS infrastructure and you need to pick a device collection. Because I'm not using any machine management technique, I have to select the machines from Active Directory. You also need to provide the version of the VDAs. Preferably you will use the VDA 7.6 version, but you can also select machines that still have an older VDA running (in an upgrade scenario).

 Figure 14: Adding machines to the Machine Catalog
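The same catalog can be created with the Broker PowerShell snap-in that ships with Studio. Below is a minimal sketch mirroring the wizard choices above (an unmanaged Server OS catalog with machines added from Active Directory); the catalog and machine names are hypothetical and the parameter values should match your own choices.

    Add-PSSnapin Citrix.Broker.Admin.V2

    # Server OS catalog, manually provisioned machines (no MCS/PVS)
    $catalog = New-BrokerCatalog -Name "Server2012R2-Manual" -SessionSupport MultiSession -AllocationType Random -ProvisioningType Manual -PersistUserChanges OnLocal -MachinesArePhysical $true

    # Add the VDAs from Active Directory to the catalog
    New-BrokerMachine -MachineName "CONTOSO\SRV-VDA01" -CatalogUid $catalog.Uid
    New-BrokerMachine -MachineName "CONTOSO\SRV-VDA02" -CatalogUid $catalog.Uid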

Although the last screen is called Summary you still need to provide information. The first part indeed shows a summary of the earlier configured settings, however you also need to provide a Machine Catalog Name and an optional description for recognition of the Machine Catalog.

 Figure 15: Summary and providing the Machine Catalog name

The summary page also mentions that a next step is required to complete the deployment: assigning this Machine Catalog to a Delivery Group. That's exactly the step we are going to execute in the next part of this article series.

 

Summary

In this third part we started with installing the VDA component. The machines running this VDA component are the actual machines the user will execute their (daily) tasks on. After the VDA installation part, we started with the basic configuration of a XenDesktop site by configuring a Machine Catalog.