How to Install DHIS-2 on Any Windows Computer using the installer


Installing District Health Information System 2 (DHIS2) on any Windows computer is a simple process that takes no more than 10-15 minutes. All you need is a CD with the installer on it, or you can download the installer from here.

DHIS 2 Live

This package is extremely easy to install and convenient, as it contains everything you need in order to run DHIS 2, including GIS, reporting and charting. It is based on an embedded Jetty servlet container and an embedded H2 database. Simply unpack the archive, run the executable file and you are good to go. For production use we recommend PostgreSQL as the DBMS. The login for the empty database is admin / district.
Package        Size      Version
DHIS 2 Live    111 Mb    2.22

WAR file

The WAR file requires you to install a Java servlet container (like Tomcat or Jetty) and a relational database (PostgreSQL, MySQL and H2 are supported), and is recommended for server setups and environments with high volumes of data and traffic. The latest version is maintained with bug-fixes and minor improvements. You can always get the latest stable release at stable.dhis2.org. Check out the installation guide for Ubuntu Linux here. For the bleeding edge build check out the continuous integration server. WAR files are copied from our continuous integration server where you can find revision number and build time.

Package            Upgrade notes    Release notes    Size      Version
DHIS 2 WAR-file    Upgrade notes    Release notes    109 Mb    2.22
DHIS 2 WAR-file    Upgrade notes    Release notes    107 Mb    2.21
DHIS 2 WAR-file    Upgrade notes    Release notes    97 Mb     2.20
DHIS 2 WAR-file    Upgrade notes    Release notes    95 Mb     2.19
DHIS 2 WAR-file    Upgrade notes    Release notes    93 Mb     2.18

Android apps

The Android applications are mobile extensions of DHIS 2 and allow for capture and analysis of your data. The apps are generally linked directly to your DHIS 2 server, removing the need for manual steps to synchronize data between the clients and the server. Data can be saved while offline and uploaded to the server when connectivity is available.

Google Play        APK    Size
Data capture       APK    1.6 Mb
Event capture      APK    3.8 Mb
Tracker capture    APK    3.8 Mb
Dashboard          APK    3.2 Mb

 

Java mobile client

The DHIS 2 mobile clients run on Java-enabled mobile phones. The solution relies on an available data connection (GPRS, EDGE or 3G), over which it communicates with a DHIS 2 server instance that is publicly available on the internet.

There are two separate client applications available: the facility reporter and the program tracker. The facility reporting client is for regular data reporting from a facility, while the program tracker is designed for following up and reporting on individual program service deliveries to beneficiaries, as part of the name-based component of DHIS2.

Package                                 Descriptor          Size      Version
DHIS 2 Mobile Aggregate Reporter JAR    JAD (Descriptor)    240 Kb    2.16
DHIS 2 Mobile Aggregate Reporter JAR    JAD (Descriptor)    130 Kb    2.11
DHIS 2 Mobile Program Tracking JAR      JAD (Descriptor)    430 Kb    2.16
DHIS 2 Mobile Program Tracking JAR      JAD (Descriptor)    376 Kb    2.15

Sample data

When setting up your system it is useful to have access to a database with sample data. This database contains data from the DHIS 2 implementation in Sierra Leone. Note that the H2 database file needs write privileges on any OS. The PostgreSQL file must be unzipped and can be imported through the pgAdmin restore function or with psql -d dbname -U username -f demo.sql (a small scripted variant follows the table below).

Package                         Size      Version
Sample database (PostgreSQL)    118 Mb    2.22
Sample database (PostgreSQL)    118 Mb    2.21
Sample database (PostgreSQL)    103 Mb    2.20
Sample database (PostgreSQL)    103 Mb    2.19
Sample database (PostgreSQL)    87 Mb     2.18
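
For convenience, here is a minimal Python sketch that wraps the psql command above. It assumes psql is on your PATH, demo.sql has already been unzipped into the current directory, and that the target database and role (hypothetically named dhis2 and dhis here) already exist; adjust the names to your own setup.

    import subprocess

    # Run the same restore command shown above; psql prompts for the password
    # unless PGPASSWORD or a .pgpass file is configured.
    subprocess.run(
        ["psql", "-d", "dhis2", "-U", "dhis", "-f", "demo.sql"],
        check=True,  # raise CalledProcessError if the import fails
    )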

Miscellaneous

Download various tools used for DHIS 2 maintenance and a sample Drupal skin.

Package                                   Size     Version
Logo                                      45 Kb    1.1
Translation resource editor               69 Kb    1.0
PostgreSQL database backup cron setup     10 Kb
Tracker DHIS 2.14 migration SQL script    1 Kb
MyDatamart                                3 Mb     1.2.1 Win
SQLite ODBC driver                        3 Mb     Windows

Data management and analytics
DHIS 2 lets you manage aggregate data with a flexible data model which has been field-tested for more than 15 years. Everything can be configured through the user interface: you can set up data elements, data entry forms, validation rules, indicators and reports to create a fully fledged system for data management. DHIS 2 has advanced features for data visualization, such as GIS, charts, pivot tables and dashboards, which let you explore and bring meaning to your data.
Individual data records
DHIS 2 enables you to collect, manage and analyse transactional, case-based data records. It lets you store information about individuals and track these persons over time using a flexible set of identifiers. As an example, you can use DHIS 2 to collect and share essential clinical health data records across multiple health facilities. Individuals can be enrolled for longitudinal programs with several stages. You can configure SMS reminders, track missed appointments, generate visit schedules and much more.
Mobile
You can expand the reach of DHIS 2 through a wide range of mobile solutions. DHIS 2 lets you register cases, events and personal information, track individuals, conduct surveys and capture aggregate data, all through a mobile phone. DHIS 2 provides a range of mobile solutions based on SMS, plain HTML and Java for feature phones, and a high-end Web-based solution with offline support for smartphones.


Awesome visualization
DHIS 2 lets you explore and understand your data through great visualization features. Get the complete overview through the pivot table feature, spot trends in your data with charting and visualize your geographical data aspects using the GIS functionality.
DHIS 2 analytics is so easy to use that anyone can take advantage of it. The system is based on simple and intuitive principles and enables you to create analysis from live data in seconds. DHIS 2 is completely web-based, making it simple to share your analysis with colleagues.
All your data in one place
DHIS 2 gives you great dashboards which let you place all of your data in a single view. Search and drag charts, maps and pivot tables into your dashboard. Create any number of dashboards and easily switch between them. Include your personal messages and alerts directly. Click on any dashboard item to drill down and explore it further.




Social analytics
DHIS 2 enables people to communicate and share interpretations of data. It lets you define your personal profile, write and share interpretations and comments on charts, maps and reports and create groups of people. DHIS 2 allows you to communicate with other users through the integrated messaging feature, letting you receive feedback, quickly fix issues and communicate news and updates. You can even let people self-register their own account.
On premise or in the cloud
DHIS 2 is open source software and can be installed at your servers and used for free. The installation process is documented here. You can also go for a professionally managed DHIS 2 instance in the cloud. A managed DHIS 2 instance takes care of the backup, security, monitoring and high-speed connectivity aspects of the deployment and allows you to focus on the information system itself.
Open source
DHIS 2 is free and open source software released under the liberal BSD license. It is developed in Java and runs on any platform with a JRE 8 installed. DHIS 2 is web-based and follows HTML 5 standards. You can download the WAR file and drop it into a Web container like Tomcat. Or download the Live package and simply click the executable file.
Internationalized
The DHIS 2 user interface comes fully translated into 8 languages. In addition DHIS 2 lets you translate your database content into as many languages as you like. Each user can easily switch between languages on the fly. If you need to translate the user interface into a new language that's easy, too.
Highly scalable
With DHIS 2 you can have thousands of concurrent users and hundreds of millions of data records using only a single, standard web server. It lets you analyse and explore your data and get answers within tenths of a second. DHIS 2 is used as the national health information system in a large number of countries and has accumulated thousands of days in production, resulting in a high-performing and mature system.
Interoperability
DHIS 2 comes with great capabilities for system interoperability and features its own format for meta-data and data exchange called DXF 2 as well as the ADX standard. Most parts of the system can be accessed through the extensive REST-based Web API, making interoperability with third-party clients like Android apps, Web portals and other information systems easy. You can even set up scheduled integration jobs in order to periodically synchronize with or import data from other sources.
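
Since the Web API is the main integration point, here is a minimal Python sketch of calling it with the requests library. It assumes a DHIS 2 Live instance listening on http://localhost:8080 with the default admin / district login; adjust the base URL for your own deployment.

    import requests

    BASE_URL = "http://localhost:8080"   # assumed DHIS 2 Live default
    AUTH = ("admin", "district")         # default login mentioned above

    # Read basic system information from the REST-based Web API.
    info = requests.get(f"{BASE_URL}/api/system/info", auth=AUTH).json()
    print("DHIS 2 version:", info.get("version"))

    # List data elements; most metadata types are exposed the same way.
    data_elements = requests.get(
        f"{BASE_URL}/api/dataElements.json", params={"paging": "false"}, auth=AUTH
    ).json()
    print("Data elements:", len(data_elements.get("dataElements", [])))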

Insert the installer CD in the CD drive

When you do, the following window will pop up on your screen. Press any key (the spacebar works fine) and go to the next step.

    If you do not get the popup window, press and hold the Windows key (the key marked with the Windows logo on your keyboard) and press "E". This will open a Windows Explorer window with four different options. Select the CD drive icon labeled "dhis2" and double-click it.

    Opening the CD will display three different files. Double-click the one named "installer" (it may show as "installer.bat"). You should now see the same image as shown in step 2a; press any key (the spacebar works fine).


    Now wait until the DOS window says "installation completed successfully". Quite a few messages will run down your screen, especially in the beginning; ignore these. Once the DOS window looks like the picture below, the installer is done. Press any key to close the window.

    You should now have a new icon called "DHIS2" on your desktop as shown in the following picture. Double-click on the icon.

    First the following window will be displayed on your screen:



    Select "Don't import anything" and click Next. Then you will get the following screen (if it does not show up right away, please wait a couple of minutes):


    Click Yes. Then enter the username "admin" and password "district" on the screen, press the Enter key, and you will get the following screen:

    Click Never for This Site.

    Finally, you should get the following screen:

    DHIS2 is ready for use! The next time you start DHIS2 you will not have to repeat the above steps; it will simply start as described in the next step.

    Here is how it will look when you now launch the DHIS2 icon: log in with the username "admin" and password "district".
    As DHIS2 is now installed on your computer, the next time you want to work with DHIS you will not have to use the installation CD or go through the process described above, so you can remove the CD from the computer. The next time you restart the machine you will be asked which user to log on as – administrator or dhis2:

    VMware vRealize Automation 7 Configuration - Step by Step Guide


    In the previous two articles I covered the vRealize Automation 7 simple and enterprise installs. In the next step we will get into the portal and start doing some configuration. Go to https://vrealizehostname-or-IP and enter the administrator login that you specified during your install. Unlike prior versions of vRealize Automation, no vsphere.local domain suffix is required to log in.

    To start, let's add some local users to our vsphere.local tenant. Click on the vsphere.local tenant.


    Click on the “Local users” tab and then click the “New” button to add a local account. I’ve created a vraadmin account that will be a local account only used to manage the default tenant configurations.


    Click the Administrators Tab and add the account you just created to the Tenant Admins and IaaS Admins groups. Click Finish.

    Click on the Branding Tab. If you want to change the look and feel of your cloud management portal, uncheck the "Use default" check box, then upload headers and change colors to fit your needs.


    Click on the Email Servers tab.


    Click the “New” button to add your mail server. I’m adding an outbound server only at this time.


    Add the information for your mail server and click the “Test Connection” button to ensure it works.


    Log out of the portal and log back in as the new tenant administrator account.

    We've provided some very basic information to vRealize Automation at this point in the series. In the next step we will add a new tenant and set up some authentication mechanisms other than local users.

    Authentication

    In order to set up Active Directory Integrated Authentication, we must log in to our default tenant again, but this time as the "Tenant Administrator" we set up earlier instead of the system administrator account that is created during initial setup.

    Once you’re logged in, click the Administration tab –> Directories Management –> Directories and then click the “Add Directory” button. Give the directory a descriptive name like the name of the ad domain for example. Then select the type of directory. I’ve chosen the “Active Directory (Integrated Windows Authentication)” option. This will add the vRA appliance to the AD Domain and use the computer account for authentication.  

    Note: you must setup Active Directory in the default (vsphere.local) tenant before it can be used in the subtenants.

    Next choose the name of the vRA appliance for the “Sync Connector” and select “Yes” for the Authentication. I’ve chosen sAMAccountName for the Directory Search Attribute. After this, we need to enter an account with permissions to join the vRA appliance to the Active Directory Domain. Lastly, enter a Bind UPN that has permissions to search Active Directory for user accounts. Click “Save and Next”.


    Now, select the domain you just added. Click Next

    Now we can map vIDM properties to your active directory properties. The properties I used are shown in the screenshot below. I tweaked the default values a tad bit, but for the most part, all of the properties were already mapped correctly to start with.

    Now we enter a Distinguished Name to search for groups to sync with. I chose the root DN for my domain, and selected all of the groups. Click Next.

    I repeated the process with user accounts. Click Next.

    The next screen shows you details about the user and groups that will be synced. You can edit your settings or click “Sync Directory” to complete the setup.

    We've added an external identity source to sync logins with. This is much preferable to adding local user accounts and making your users remember multiple accounts. In the next step, we'll add these users to business groups, tenant administrators, fabric administrators and other custom groups.

    Create Tenants

    Now it's time to create a new tenant in our vRealize Automation portal. Let's log in to the portal as the system administrator account as we have before. Click the Tenants tab and then click the "New" button.


    Give the new tenant a name and a description. Then enter a URL name. This name will be appended to this string: https://[vraappliance.domain.name]/vcac/org/ and will be the URL that users will log in to. In my example the URL is https://vra7hostname/vcac/org/labtenant. Click "Submit and Next".
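
    As a side note, the URL name you choose here is also the tenant value used when authenticating against the vRA 7 REST API. Below is a minimal Python sketch, assuming the requests library, the hypothetical hostname and tenant from my example, and a hypothetical local account; treat it as an illustration rather than a definitive client.

    import requests

    VRA = "https://vra7hostname"
    body = {
        "username": "vraadmin",      # hypothetical local account in the tenant
        "password": "changeme",      # hypothetical password
        "tenant": "labtenant",       # the URL name created above
    }
    # Request a bearer token from the identity service; verify=False only because
    # lab appliances usually have self-signed certificates.
    resp = requests.post(f"{VRA}/identity/api/tokens", json=body, verify=False)
    token = resp.json()["id"]
    print("Bearer token:", token[:20], "...")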

    Enter a local user account. I used the vraadmin account much like I did in the previous post about setting up authentication. Click Next.

    In the administrators tab, I added the vraadmin account as both a tenant administrator and an IaaS Administrator. I will admit, I’m omitting some information here. After I added the vraadmin account, I logged into the tenant as this account. I setup directory services for this account the same way I did for the default tenant. Once this was done, I added the “Domain Admins” group as a Tenant Administrator and IaaS Administrator. This end result is seen in the screenshot below.

    That's it for setting up a new tenant. I did want to mention that in vRA7, when you log in to a tenant with a directory configured, you'll have the option to log in to either the directory you set up or the default tenant domain. Be sure to select the right domain before trying to log in.

    There isn't a ton of work to be done to set up a new tenant, but I've found that the most important lesson is to create a blank default tenant and then set up your new tenant under that, which is where all your goodies will live.

    We’ll be discussing some of these goodies in the next few posts.

    Endpoints

    Now that we've set up our new tenant, let's log in as an infrastructure admin and start assigning some resources that we can use. To do this we need to start by adding an endpoint. An endpoint is anything that vRA uses to complete its provisioning processes. This could be a public cloud resource such as Amazon Web Services, an external orchestrator appliance, or a private cloud hosted by Hyper-V or vSphere.

    In the example below, we'll add a vSphere endpoint. Go to the Infrastructure Tab –> Credentials and then click the "New" button to add a login. Give it a name and description that will help you remember what the credentials are used for. I like to name my credentials the same as the endpoint to which they're connecting. Enter a user name and a password, which will be encrypted. When done, click the green check mark to save the credentials. DON'T FORGET TO DO THIS OR THEY WON'T BE SAVED!


    Now that we've got some credentials to use, go to the Infrastructure Tab –> Endpoints and then click the "New" button again. Here I'm selecting Virtual –> vSphere (vCenter) because that's the type of endpoint I'm connecting to. Your mileage may vary.

    Fill out the name, which should match the agent that was created during the installation. If you kept all of the defaults during the install, the first vCenter agent is named "vCenter", spelled exactly like that with a capital "C". Give it a description and then enter the address.

    The address for a vCenter should be https://vcenterFQDN/sdk. Now click the ellipsis next to credentials and select the username/password combination that we created earlier.

    Optional: If you’re using a product like VMware NSX or the older vCNS product, click the “Specify manager for network and security platform” and then enter an address and new set of credentials for this login.
    When you’re done click save.

    In this post we connected vRealize Automation 7 to a vSphere environment and we added at least one set of credentials. This should allow us to start creating fabric groups and reservations in the next few steps, but first vRA will need to do a quick discovery on the endpoint.
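
    Before relying on the discovery, it can be handy to confirm that the address and credentials work outside of vRA. Here is a minimal Python sketch using the pyVmomi library (an assumption, not part of vRA) against the vcenterFQDN address and a hypothetical service account:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    # Lab-style unverified TLS context; use a proper CA bundle in production.
    ctx = ssl._create_unverified_context()

    # Hypothetical credentials; use the same ones saved in the vRA credential entry.
    si = SmartConnect(host="vcenterFQDN", user="svc-vra@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    print("Connected, vCenter time:", si.CurrentTime())
    Disconnect(si)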

    Fabric Groups

    In the above step we set up a vCenter endpoint that defines how our vRealize Automation solution will talk to our vSphere environment. Now we must create a fabric group. Fabric Groups are a way of segmenting our endpoints into different types of resources or separating them by intent. These groups are mandatory before you can build anything, so even if you don't need to segment your resources, you can't get away with not creating one.

    To add a Fabric Group, log in to your vRealize Automation tenant as the IaaS Administrator account we set up in a previous post. Now go to the Infrastructure Tab –> Endpoints –> Fabric Groups. Click the "New Fabric Group" button to create a new group. Once the "New Fabric Group" screen opens, you should first check to see if there are any resources in the "Compute resources:" section. If there are no resources, check to make sure that all of your endpoint connections are correct and the credentials are working. If you need to dig into this more deeply, you can check the vRealize Automation logs to make sure the endpoints are being discovered properly.

    I should note here that if you just set up your endpoints, go grab a cup of coffee before setting up the Fabric Group. The resources take a little while to discover, but trust me on this. Version 7's discovery works MUCH faster than in previous versions. My lab vCenter was discovered in under 5 minutes.

    Now, once your compute resources have been discovered, enter a name for the fabric group, a description and some fabric administrators who will be able to modify the resources and reservations that we’ll create in our next post. Lastly, and most importantly, select the compute resources (Clusters in a vCenter) that will be used to deploy vRealize Automation workloads.

    Click OK

    Fabric Groups are a necessary piece of a vRA7 installation and can be used to separate fabric administrators or simply to limit which compute resources in your endpoint can be used. In this step we added all of our vCenter resources, but we could just as easily have selected only the "WorkloadCluster" and prevented vRA from ever deploying to the Management Cluster.

    Business Groups

    The job of a business group is to associate a set of resources with a set of users. Think of it this way: your development team and your production managers likely need to deploy machines to different sets of servers. I should mention that a business group doesn't do this by itself; instead it is combined with a reservation, which we'll discuss in the next post. But before we can build those out, let's set up our business groups as well as machine prefixes.

    A machine prefix lets us take a string and prepend it to a set of numbers to give us a new machine name. We want to make sure that our machines don't get the same names, so we need a scheme that hands names out from a pool, much like we do with IP addresses. To set up a machine prefix go to Infrastructure –> Administration –> Machine Prefixes. Click the "New" button with the plus sign on it to add a new prefix. Enter the string that will always be added to a new machine name. Next add the number of digits to append to the end of that string, and lastly enter the number for the next machine to start with. In my example below, my next machine would be named "vra7-01" without the quotes (see the sketch after the note below).

    NOTE: Be sure to click the green check mark after adding this information. It’s easy to click OK at the bottom of the screen without saving the record.
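
    Purely as an illustration of how the prefix, digit count and next number combine into a machine name (this is the naming scheme described above, not vRA code):

    prefix = "vra7-"     # the string that is always added to a new machine name
    digits = 2           # number of digits appended to that string
    next_number = 1      # the number the next machine starts with

    # Zero-pad the counter to the configured width, matching the "vra7-01" example.
    print(f"{prefix}{next_number:0{digits}d}")   # -> vra7-01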


    Now that we've created the machine prefix, we can add our business group. Go to Administration –> Users and Groups –> Business Groups. Click the "New" button again to add a new group. When the first screen opens, give the group a name, a description and an email address to which business group activity will be sent. Click "Next".

    Next, we’re presented with a screen to add users to three different roles. The group manager role will entitle the users to blueprints and will manage approval policies. The support role will be users that can provision resources on behalf of the users, and the users role will be a list of users who can request catalog items. Click “Next”.

    On the Infrastructure screen, select a machine prefix from the drop-down. You don't have to have a prefix for the group, but it is a best practice so that each of your blueprints doesn't need its own prefix assigned. The default prefix can be overridden by the blueprint.

    Optionally you can enter an Active Directory container which will house the computer objects if you’re using WIM provisioning. I’ve left this blank since we’re using VMware templates to deploy VMs.

    Business groups are an important piece of deploying blueprints because if a user isn't in a group, they can't be entitled to a catalog item. These business groups will likely be your corporate teams that need to self-provision resources, along with their managers or team leads. In our next step, we'll assign some resources to the business group through the use of reservations.

    Reservations

    vRealize Automation 7 uses the concept of reservations to grant a percentage of fabric group resources to a business group. To add a reservation go to Infrastructure –> Reservations. Click the “New” button to add a reservation and then select the type of reservation to be added. Since I’m using a vSphere Cluster, I selected Virtual –> vCenter. Depending on what kind of reservations you’ve selected, the next few screens may be different, but I’m assuming many people will use vSphere so I’ve chosen this for my example.

    Enter a Name for the reservation and the tenant (which should already be selected). Next, in the drop-down select the business group that will have access to the reservation. Leave the reservation policy empty for now, but enter a priority. If a business group has access to more than one reservation, the priority is used to determine which one is used up first (see the sketch below). Lastly, select "Enable this reservation". Click "Next".
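
    To make the priority rule concrete, here is a purely illustrative Python sketch of that selection logic (not vRA's actual implementation), assuming the lower priority number is consumed first:

    # Two hypothetical reservations granted to the same business group.
    reservations = [
        {"name": "res-cluster-a", "priority": 1, "has_capacity": False},
        {"name": "res-cluster-b", "priority": 2, "has_capacity": True},
    ]

    # Walk the reservations in priority order and take the first one with room left.
    chosen = next(r for r in sorted(reservations, key=lambda r: r["priority"])
                  if r["has_capacity"])
    print(chosen["name"])   # -> res-cluster-b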


    On the resources tab, select the compute resource and then we need to add some quotas. Quotas limit how large the reservation will be, so we can limit it by a number of machines, the amount of memory or how much storage is being used. Be sure to enter a memory amount and at least one datastore to be used for deploying cloud resources. Click “Next”.

    On the network tab, select the networks that can be used to deploy resources and for now leave the "Network Profile" blank. The bottom section is used with NSX or vCNS, but we'll leave that for another post.

    On the properties tab, you can add custom properties that will be associated with all catalog items deployed through this reservation. For now we’ll leave this empty. Click “Next”.

    Lastly, on the alerts page we can set the thresholds for when to alert our administrators about resource usage.

    Reservations are how we limit our business groups to a certain amount of resources in our cloud. They are necessary to prevent our vSphere environment from being over provisioned with virtual machines and can empower business group managers to handle their own resources instead of the IT Administrators.

    Services

    Services might be a poor name for this feature of vRealize Automation 7. When I think of a service, I think of some sort of activity that is being provided but in the case of vRA a service is little more than a category or type. For example, I could have a service called “Private Cloud” and put all of my vSphere blueprints in it and another one called “Public Cloud” and put all of my AWS blueprints in it. In the screenshot below you can see the services in a catalog. If you highlight the “All Services” service, it will show you all blueprints regardless of their service category. Otherwise, selecting a specific service will show you only the blueprints in that category.

    NOTE: if you only create a single service, the tab that is highlighted on the left side does not appear. Creating a second service forces this pane to be displayed.


    All blueprints must be part of a service before they can be provisioned. To create a service go to the Administration tab –> Catalog Management –> Services. Click the "New" button to add a new service.

    Give the service a name and description. Then click the browse button to add an icon for your service. If you’re looking for some standard icons to use, I recommend the vCAC icon pack from vmtocloud.com.

    Change the status to Active and then give it an owner. You can also set which hours the service is available to your users and the default change window for the service if you’d like. Again, I’m in a small lab so I’m not messing with this much. Click “OK”.

    Services are pretty simple to set up and there isn't much to them once you understand that they are just a grouping for blueprints and not something more complex. We'll add our blueprints to a service so that we can group them better later on.

    Custom Groups

    If you've been reading the whole series of posts on vRealize Automation 7, then you'll know that we've already been setting up roles in our cloud portal, but we're not done yet. If you need any permissions besides just requesting a blueprint, you'll need to be added to a custom group first.

    To create a custom group, log in as a tenant administrator and go to the Administration Tab –> Users and Groups –> Custom Groups. From there click the "New" button to add a new custom group.

    Once the "New Group" screen appears, give it a name and description. On the right-hand side, select the built-in roles that you'd like to assign to this group. In my case, this is a lab, so I'm assigning all roles to the group and assuming that this group manages EVERYTHING in my vRA infrastructure. If you're doing this for a corporation, the roles should be locked down according to the tasks the group will be handling. To find out what each of the roles does, take a look in the bottom right-hand corner of the "New Group" screen. The permissions will be listed as you click on each role. When you're done, click "Next".


    On the following screen select your users. Again this is my home lab and all of my Domain Admins will manage this vRA7 portal.

    We’ve created a lot of permissions and roles already, but the custom groups are important for us to build blueprints and manage catalogs. If you’re moving on to the next post in the series, be sure you log out of vRA7 and back in before continuing since some of your permissions probably just changed!

    Blueprints

    Blueprints are arguably the thing you’ll spend most of your operational time dealing with in vRealize Automation. We’ve finally gotten most of the setup done so that we can publish our vSphere templates in vRA.

    To create a blueprint in vRealize Automation 7 go to the “Design” tab. Note: If you’re missing this tab, be sure you added yourself to the custom group with permissions like we did in a previous post, and that you’ve logged back into the portal after doing so.

    Click the “New” button to add a new blueprint.


    Give the new blueprint a name and a Unique ID. The ID can’t be changed later so be sure to make it a good one. Next, enter a description as well as the lifecycle information. Archive (days) determines how long an item will be kept after a lease expires. The lease is how long an item can be provisioned before it’s automatically removed. Click OK.

    Now we've given our blueprint some basic characteristics. The next step is to put all of our "stuff" into the blueprint. For my very basic example, I'm going to drag the "vSphere Machine" object onto our design canvas. This adds a vCenter template into our blueprint. As you can see, there are a lot of options that can be added to our blueprint, such as multiple machine types, networks, software and other services. A really neat change in version 7 over version 6, if you ask me.

    Once we’ve added our components into the blueprint, we need to give each of them some characteristics. To start, we’re going to give the component an ID and description.

    On the Build Information tab, I’m going to make sure the blueprint type is “Server” and I’m going to change the Action to “Clone”. Click the ellipsis and select one of your vSphere templates. And lastly on this tab enter a customization spec exactly how it is named in vSphere, including case sensitivity.

    The next tab is the “Machine Resources” tab. Here we need to enter in the size of this virtual machine, or the max sizes that a user could request. Fill out your values and go to the next tab.

    The storage tab will let us customize the sizes of our disks. I’ve left my disk sizes the same as my vSphere template, but you can change them if needed.

    I've also left the network tab blank. I'm letting the network in my vSphere template dictate which networks the machine will be deployed on. For a larger corporate installation, you'll want to specify some network info here.

    The security tab is used specifically with NSX or vCNS. So far we're not using this, so we'll leave it blank for now.

    Custom properties deserve their own blog post or series of blog posts. They will allow us to do lots of cool things during provisioning, but they are not required to deploy a machine from a blueprint. If you understand them, you can enter them here for the blueprint.

    When you’re all done fiddling with your settings click “Finish”. When you’re done, you’ll see your blueprint listed in the grid. Before it can be assigned to people though, it must be published. Click the blueprint in the grid and then select the “Publish” button.

    In this post we created our very first blueprint. Don’t worry if we messed up a step, I’m sure we’ll be creating lots of these little guys! In future posts we’ll be assigning this blueprint to our users and services so that we can request a server.

    Entitlements

    An entitlement is how we assign users a set of catalog items. Each of these entitlements can be managed by the business group manager or a tenant administrator can manage entitlements for all business groups in their tenant.

    To create a new entitlement go to Administration tab –> Catalog Management –> Entitlements. Click the “New” button to add a new entitlement.


    Under the General tab, enter a name for the entitlement and a description. Change the status to “Active” and select a Business Group. Note: If only a single business group has been created, this will not be selectable since it will default to the only available group. Then select the users who will be part of this entitlement.

    Next, under the "Items & Approvals" tab, we get to pick which things these users will have access to. We do not need to fill out all of these types, but we can if we choose to do so.

    We can entitle users to Services, Items, and/or Actions. I chose to entitle this user to my “Private Cloud” service that we created earlier. This will ensure that any catalog items I assign to that service will automatically be entitled.

    If I chose an item, I’d need to do it for each item but this may be preferable in your use case. Lastly, I selected every action because the user I’m entitling is an administrator. As you might guess, if this user should have restricted access then not all items should be checked. For example, if you want your users to be able to build their own servers, but not destroy them, then don’t entitle them to the “Destroy” action.

    Now we've set up our cloud management portal to assign our users catalog items and actions that they can execute on those items. Entitlements are a key piece of making sure that your users have access to the stuff they need, but not too much access or items that might be confusing to them.

    Manage Catalog Items

    You've created your blueprints and entitled users to use them. How do we get them to show up in our service catalog? How do we make them look pretty and organized? For that, we need to look at managing catalog items. Log in as a tenant administrator and go to the Administration Tab –> Catalog Management –> Catalog Items. From here, we'll need to look for the blueprint that we've previously published. Click on the blueprint.


    The configure catalog item screen will appear. Here, we can assign this catalog item an icon. If you’re looking for some great icons to use I would recommend starting with vmtocloud icon pack found here.

    Next, change the status to Active so that it will show up in the catalog, and lastly, select which service this catalog item should be listed under. Remember that a service is like a group of catalog items. Also, if you want, there is a check box to show the item as “New and Noteworthy.” This just highlights the catalog item in the service catalog.

    If we click the entitlements tab, we’ll see who has been entitled to the item. Click Finish.

    When we go to the service catalog, we should see some nicely laid out items, with icons and grouped together by services. If you don’t see the correct things, check to make sure the user logged in has the correct entitlements.

    A blueprint that is published has to get configured so that it shows up all nice and neat in the service catalog. Managing catalog items is the way to do this.

    Subscriptions

    In vRealize Automation 7 a new concept was introduced called a "Subscription." A subscription is a way to allow you to execute a vRealize Orchestrator workflow based on some sort of event that has taken place in vRA.

    The truth is that stubs are still available in vRealize Automation 7 but are clearly being phased out and we should stop using them soon because they are likely to not be around in future versions. The idea of an event subscription is a lot like a stub when in the context of machine provisioning, but there are a lot more events that can be triggered than the stubs that have been around in previous versions.

    To begin, we will go to the Administration Tab –> Events –> Subscriptions. Here we can add a new subscription by clicking the "New" button with the plus sign.

    The first screen that shows up is the “Event Topic” screen. An event topic describes the type of event that we’re going to watch for. You can see that we can trigger an action from a variety of different types of events. It’s important to note that the machine provisioning events are similar to stubs, but the other events would be new concepts and can be triggered by vRA reconfigurations like changing a blueprint.

    Maybe you trigger an email from vRO every time a business group is changed or something. For the purposes of this post, I’m using the “Machine Provisioning” event which would be very similar to a “Stub.” If you’d like to see what the other Event Topics are for, please check the official VMware vRealize Automation Documentation for Event Topics.

    Once you’ve selected an Event Topic, you’ll notice that the schema will be displayed on the right hand side of the screen. This explains the data that will be passed to vRealize Orchestrator during the event. Once you’ve selected an Event Topic, click Next.

    Now we get to the conditions tab. By default, the “Run for all events” option is selected. I encourage you to leave this alone and run it one time with some really basic “Hello World” type workflow just to see what it does but in the rest of this post we’re going to set some specific conditions.

    Change the radio option to “Run based on conditions” and then choose “All of the following.” This will allow us to enter a list of conditions and every one of them must be met before the action is triggered.

    Next click the drop down and select “Lifecycle state name”.


    In the next box choose “Equals”.

    And in the last box leave the radio button on "Constant" and then select "VMPSMasterWorkflow32.BuildingMachine." This BuildingMachine lifecycle state should be familiar because it's the same name as a stub in vRealize Automation 6, but if you're new to this, it just means that this is the stage of the provisioning lifecycle where the machine is actually being built.

    To recap what we've done in the past few steps, we said we only want our workflow to trigger when the lifecycle state = VMPSMasterWorkflow32.BuildingMachine.


    We're not finished yet; we're going to add one more condition here. Click the "Add expression" link to add another condition.

    This time select “state phase” and then “equals”.

    Then in the last dropdown leave the radio button on “Constant” and select “Event”.

    Ok, so what is a state phase? Well, in version 7 of vRealize Automation, we don't have just one "Building Machine" option, but rather three: a pre-building machine, a post-building machine and the actual building machine event. If you didn't specify the state phase and you built a new virtual machine from a blueprint, the "Building Machine" event subscription would trigger three times, once for each phase.
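
    Here is a purely illustrative Python sketch of the filter we just built on the Conditions tab; vRA evaluates this internally, and the exact phase strings come from the event payload (for example PRE, as seen in the log later in this post):

    def should_run(lifecycle_state: str, phase: str) -> bool:
        # Both conditions from the subscription must match: the state AND the phase.
        return (lifecycle_state == "VMPSMasterWorkflow32.BuildingMachine"
                and phase == "EVENT")

    # Without the phase condition, a single build would fire the workflow three times.
    for phase in ("PRE", "EVENT", "POST"):
        print(phase, should_run("VMPSMasterWorkflow32.BuildingMachine", phase))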

    If you need more information about the lifecycle states, please check out the official VMware vRealize Automation documentation on lifecycle states. If you're looking for a quick reference for the order the lifecycle states go in for a clone-from-template operation, then check out the table below. This should give you an idea of all the things that would run if there are no conditions specified in your subscription.

    vRealize Automation 7 Lifecycles


    Once we’ve added all of our conditions, click next to go to the “Workflow” tab. Select the vRealize Orchestrator workflow that you want to run when the event occurs. Then click Next.

    On the details tab, enter a name for the subscription, and a description.

    There is also an option for blocking, which means that other workflows have to wait for this workflow to finish before running. If you don't check the "blocking" checkbox, then other subscriptions may run simultaneously.

    To determine what order blocking tasks run in, you will then have to enter a priority. You’ll also be able to put in a timeout period to move on to the next workflow if your first one seems to have taken too long to execute.

    Once you’ve finished setting up your event, be sure to click on the subscription in the list and then click “Publish.” I always forget to do this piece.

    Subscription events should be a pretty quick concept to grasp if you’re familiar with stub workflows in previous versions. They can be very powerful and there are many more opportunities to jump out of vRA and execute a task via Orchestrator now and that is a very good thing. More options are better but we need to take a few minutes to learn the new lifecycle states and phases before we can use them effectively.

    Custom Properties

    Custom Properties are used to control aspects of the machines that users are able to provision. For example, memory and CPU are required information for users to deploy a VM from a blueprint. Custom properties can be assigned to a blueprint or reservation to control how memory and CPU should be configured.

    Custom properties are really powerful attributes that can vastly change how a machine behaves. I like to think of custom properties as the “Windows Registry” of vRealize Automation. Changing one property can have a huge effect on deployments.

    To add a custom property to a blueprint, open the blueprint in the “Design” tab. Select the blueprint we’re working with and then click the vSphere Machine that is on the “Design Canvas”.

    Now click on the properties tab of the machine object and click the “Custom Properties” tab. Here we can click the “New” button to add a new property.

    From here, we need to enter a name and a value. What you enter here will really vary but a list of the custom properties can be found at the official VMware vRealize Automation Documentation. I do want to call out a few custom properties that you may find very valuable, especially if you’re just getting started.

    Not all of the virtual machine properties are passed to vRealize Orchestrator in vRA7 unless you add the following custom property to the blueprint: Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.[LIFECYCLESTATE], where the lifecycle state is the name of the lifecycle state that the machine will be in.
    For instance, Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.BuildingMachine with a value of __*,* will pass all hidden properties as well as all of the normal properties.

    If you didn’t figure it out, the __* (Double underscore, asterisk) denotes a hidden property. You can see the actual property that I’ve added to my blueprint in the screenshot below. Notice that I’ve added two custom properties so that all the attributes are passed during both the BuildingMachine and Requested Lifecycle states.
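
    Since both properties follow the same naming pattern, they can be written out like this (illustrative Python only; the properties themselves are entered in the blueprint UI):

    # Lifecycle states we want full property payloads for, per the pattern above.
    states = ["Requested", "BuildingMachine"]

    custom_properties = {
        f"Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.{state}": "__*,*"
        for state in states
    }
    for name, value in custom_properties.items():
        print(name, "=", value)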

    When the custom properties above are in place, vRealize Orchestrator can list all the properties of the machine for use in custom workflows. The log output below shows the properties that are passed over by default.
    [2016-01-05 11:42:40.016] [I] BlueprintName: CentOS
    [2016-01-05 11:42:40.021] [I] ComponentId: CentOS
    [2016-01-05 11:42:40.022] [I] ComponentTypeId: Infrastructure.CatalogItem.Machine.Virtual.vSphere
    [2016-01-05 11:42:40.022] [I] EndpointId: 12250e26-da94-4c0a-b19d-5c5d7c73ebcb
    [2016-01-05 11:42:40.023] [I] RequestId: 16d179cc-a1ce-4261-831e-cd54ed009c3f
    [2016-01-05 11:42:40.024] [I] VirtualMachineEvent: null
    [2016-01-05 11:42:40.025] [I] WorkflowNextState: null
    [2016-01-05 11:42:40.028] [I] State: VMPSMasterWorkflow32.Requested
    [2016-01-05 11:42:40.029] [I] Phase: PRE
    [2016-01-05 11:42:40.030] [I] Event: null
    [2016-01-05 11:42:40.030] [I] ID: 4e87d827-50b4-407a-b9b7-955db9d644af
    [2016-01-05 11:42:40.033] [I] Name: HollowAdmin0003
    [2016-01-05 11:42:40.034] [I] ExternalReference: null
    [2016-01-05 11:42:40.034] [I] Owner: user@domain.local
    [2016-01-05 11:42:40.035] [I] Type: 0
    [2016-01-05 11:42:40.036] [I] Properties: HashMap:1409151584
    [2016-01-05 11:42:40.040] [I] vRA VM Properties :
    Cafe.Shim.VirtualMachine.TotalStorageSize : 16
    Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.BuildingMachine : __*,*
    Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.Requested : __*,*
    VirtualMachine.Admin.TotalDiskUsage : 16384
    VirtualMachine.CPU.Count : 1
    VirtualMachine.Cafe.Blueprint.Component.Cluster.Index : 0
    VirtualMachine.Cafe.Blueprint.Component.Id : CentOS
    VirtualMachine.Cafe.Blueprint.Component.TypeId : Infrastructure.CatalogItem.Machine.Virtual.vSphere
    VirtualMachine.Cafe.Blueprint.Id : CentOS
    VirtualMachine.Cafe.Blueprint.Name : CentOS
    VirtualMachine.Disk0.IsClone : true
    VirtualMachine.Disk0.Label : Hard disk 1
    VirtualMachine.Disk0.Size : 16
    VirtualMachine.Disk0.Storage : Synology02
    VirtualMachine.Memory.Size : 2048
    VirtualMachine.Network0.Name : VMs-VLAN
    VirtualMachine.Storage.Name : Synology02
    __Cafe.Request.BlueprintType : 1
    __Cafe.Request.VM.HostnamePrefix : Use group default
    __Cafe.Root.Request.Id : 276b4854-db3b-4cca-9a06-fc070c1081d1
    __Clone_Type : CloneWorkflow
    __InterfaceType : vSphere
    __Legacy.Workflow.ImpersonatingUser :
    __Legacy.Workflow.User : user@domain.local
    __VirtualMachine.Allocation.InitialMachineState : SubmittingRequest
    __VirtualMachine.ProvisioningWorkflowName : CloneWorkflow
    __api.request.callback.service.id : 6da9b261-33ce-495e-b91a-4f50c202635d
    __api.request.id : 16d179cc-a1ce-4261-831e-cd54ed009c3f
    __clonefrom : CentOS-Template
    __clonefromid : 18ff68d0-5190-4b42-99b3-641145378a3a
    __clonespec : CentOS
    __request_reason :
    __trace_id : FhP8UgRD
    _number_of_instances : 1
    You can see from that list that there are a bunch of properties assigned to a VM by default, and you can make up your own properties if you'd like. Maybe you want a variable passed named "DR Server" whose value is either Yes or No. vRealize Orchestrator could read that property, if you add it to your blueprint, and you can make a decision based on it. In another case, maybe you want to make a decision about which datastore the VM should be placed in, and you use vRO to update that property so the machine is deployed to a different datastore.

    There are three more things you need to set on a custom property besides "Name" and "Value." These are Encrypted, Overridable and Show in Request. Let's take a look at these.
    • Encrypted – Removes clear text from the vRealize Automation GUI. Hint: use this for passwords.
    • Overridable – Allows the user to change the property during provisioning.
    • Show in Request – Prompts the user to enter this property during provisioning.

    Custom Properties are a must have piece of your vRealize Automation instance if you plan to do any serious customization or decision making. These properties allow you to add variables and make the solution fit within your organizational structure. Learn them, love them and get used to dealing with them.

    Custom Actions

    We've deployed a virtual machine from a vRA blueprint, but we still have to manage that machine. One of the cool things we can do with vRealize Automation 7 is to add a custom action. This takes the virtual machine object and runs a vRealize Orchestrator workflow against that input. We call these actions "Day 2 Operations" since they happen post-provisioning.

    To create a new custom resource action go to the Design Tab –> Design –> Resource Actions. Click the “New” button to add a new action.

    Select the Orchestrator workflow from the list.


    The vRO workflow should have an input parameter that can be passed from a server blueprint. I’m using a VC:VirtualMachine parameter because I know it will identify the virtual machine and is passed automatically.

    On the Input Resource tab, select the IaaS VC Virtual Machine as the resource type, and the Input Parameter should be filled in already.

    On the details tab enter the name and a description. The Type in my case is blank because I'm not using it for provisioning or deprovisioning.

    Change the form to match your requirements. I like to keep the form as empty as possible so that users are able to request the action from a blueprint and vRO attributes fill in the rest.

    When you're done, be sure to "Publish" the action so that it can be used.

    Now we need to configure the action, much like we configured our catalog items in the step above.

    Give the action an icon and click Finish.

    Now, when we provision a virtual machine, we can see the Action that we created in our list. We can now run this action from the Items screen.

    Custom Actions are a great way to allow our users to manage their own resources after they've provisioned them. Since it's a vRealize Orchestrator workflow, we can use these actions to put guardrails in place to protect users from themselves. For instance, maybe we replace the "Snapshot" action with a custom action that also deletes the snapshots after 3 days. It can certainly reduce the helpdesk tickets that come in asking for a snapshot to be taken.

    Load Balancer Rules

    In a previous post we went over the enterprise install of vRealize Automation. That install required us to set up a load balancer with three VIPs, but also required that we only have one active member in each VIP.

    After the installation is done, some modifications need to be made on the load balancer. The instructions for this can be found in the official vRealize Automation Load Balancing Configuration Guide if you want to learn more. It has several examples of how to set up load balancing on an F5 load balancer and on NSX, for example. This post will focus on a KEMP load balancer, which is free for vExperts, and it will all be shown through GUI examples.

    IaaS Manager Service

    Let's start with the IaaS Manager Service. Modify your IaaSmgmt service VIP and make sure the load balancing method is "Round Robin". Then we want to add some health check parameters. The URL that we should check is /VMPSProvision and we're going to run a "GET" request on it. The response should be "ProvisionService" without the quotes. Lastly, look at the weight here and ensure that your primary Manager Service server is given more weight than the secondary server. The secondary server will be used only if there is a failure of the primary.


    Web Services

    Now we’ll move onto the web services VIP. Let’s modify our IaaSweb VIP and start with updating the persistence options. We want session persistence to be based on the Source Address and have a timeout of 30 minutes. The Load Balancing policy should again be Round Robin and now we can move on to the health check.

    Well, the Web Services health check has the wrong case. Notice that the Monitor for the IaaS Web component says “registered” but it should actually be “REGISTERED.”


    If we check the URL we can see the monitor clearly.
    Finish adding your health check with a URL of /wapi/api/status/web with a GET method that is looking for a pattern named “REGISTERED.”

    vRealize Appliance

    Now we can do the vRealize Appliance URL. Edit the VIP and add an additional port of 8444 to the existing 443. Port 8444 is used for remote console access which is a useful access method that you might want. Change the persistence options to a source based method, with a timeout of 30 minutes just like we did for the web services. The load balancing method is going to be “Round Robin” like it has been for our other services, and then it’s time to do our health check again.

    This time we want to look for a URL of /vcac/services/api/health with a GET method, and we're only looking for a 200 or 204 response code back, so nothing needs to be added in the "Reply Pattern" box.
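
    If you want to sanity-check all three monitors outside of the load balancer, here is a minimal Python sketch using the requests library and hypothetical hostnames (verify=False only because lab certificates are usually self-signed):

    import requests

    def check(url, expect_text=None, expect_codes=(200,)):
        r = requests.get(url, verify=False)
        ok = r.status_code in expect_codes and (expect_text is None or expect_text in r.text)
        print(url, "->", r.status_code, "OK" if ok else "FAILED")

    # IaaS Manager Service: GET /VMPSProvision should return "ProvisionService".
    check("https://iaasmgr01.lab.local/VMPSProvision", expect_text="ProvisionService")
    # IaaS Web: GET /wapi/api/status/web should contain "REGISTERED".
    check("https://iaasweb01.lab.local/wapi/api/status/web", expect_text="REGISTERED")
    # vRealize Appliance: GET /vcac/services/api/health should return 200 or 204.
    check("https://vra01.lab.local/vcac/services/api/health", expect_codes=(200, 204))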


    Now we’ve added all of our load balancing rules and we can see our VIPs are all up and happy with our members. The enterprise environment now has some failover capabilities that a simple installation is lacking.

    XaaS Blueprints

    XaaS isn't a made-up term (well, maybe it is), but it's supposed to stand for "Anything as a Service." vRealize Automation will allow you to publish vRO workflows in the service catalog. This means that you can publish just about anything you can think of, and not just server blueprints. If you have a workflow that can order your coffee and have it delivered to you, then you can publish it in your vRA service catalog. Side note: if you have that workflow, please share it with the rest of us.

     

    Create a XaaS Blueprint

    Before you begin, make sure that the user who will be adding these new service blueprints is an XaaS Architect.

    To create an XaaS Blueprint, go to the Design Tab –> XaaS –> XaaS Blueprints. Click the “New” button to add a new blueprint.

    Select the vRO workflow that should be added to the service catalog.

    Give the blueprint a name and description. Click Next.

    The inputs from the vRO workflow should be added to the main form. It is possible to customize how the form will look when published to end users. Rearrange details, add or remove fields and click Next when you're ready.

    On the provisioned resource tab, leave the field at “No provisioning”.

    When you’re done, you’ll see your XaaS blueprint in the list. Remember that before anything can be requested from that, it must be published.

    An XaaS Blueprint is a great way to add functionality to your cloud portal. The cloud doesn’t need to be used for just server provisioning. Helpdesk requests, or any other types of automated services can also be made available to your users.

    NSX Initial Setup

    It's time to think about deploying our networks through vRA. Deploying servers is cool, but deploying three-tiered applications in different networks is cooler. So let's add VMware NSX to our cloud portal and get cracking.

    The first step is to have NSX up and running in your vSphere environment. Once this simple task is complete, a Distributed Logical Router should be deployed with an uplink interface configured. The diagram below explains what needs to be set up in vSphere prior to doing any configuration in vRealize Automation. A Distributed Logical Router with a single uplink to an Edge Services Gateway should be configured first; then any new networks will be built through the vRealize Automation integration.

    While the manual section of the diagram will remain roughly the same throughout, the section handled by vRealize Automation will change often, based on the workloads that are deployed. Note: be sure to set up some routing between your Provider Edge and the DLR so that you can reach the new networks that vRA creates.

    Below, you can also see my NSX DLR prior to any vRealize Automation configurations being done.


    Now, make sure that your vRealize Orchestrator endpoint is set up and configured correctly. Before we do anything with NSX we need to make sure that the NSX plugin is installed on your vRO endpoint. vRA will utilize this plugin to set up new networks, switches, etc. Be sure to do this before continuing.

    Endpoint Setup

    The first configuration that needs to happen in vRealize Automation is to re-configure your vCenter endpoint to add your NSX connection. Find the vCenter endpoint and add a URL and set of credentials that connect to the NSX manager.

    Network Profiles

    Now we need to set up some network profiles. For the purposes of this demonstration, I've set up four network profiles: my Transit network profile, which is external, and three routed network profiles. The transit network profile will be used in the reservations to show which uplink is used to get to the physical network. In this case it goes through our DLR to our Edge Services Gateway.


    The transit network setup looks something like the example below, where my gateway is the next hop to our Edge Services Gateway.


    In the IP Ranges tab, I’ve added some IP addresses that are available on my transit network.

    Now if we look at the routed network profiles, we’ve added some networking information that probably doesn’t even exist yet in your networks. These networks will be created automatically by vRA by leveraging NSX. There are a couple of important things to review here. The first is the external network profile; this should be the external Transit profile that we created just a moment ago, and it tells vRA which uplink will be used as a gateway network to the rest of the environment. The next thing is to determine the subnet mask for the whole profile, and then a range subnet mask, which defines the size of each subnet carved out of that profile.

    Once you’ve set up the details, click on the IP Ranges tab, where you should be able to click “Generate Ranges.” This will create each of the subnets that can be used by vRA for your segmented applications.
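    If it helps to picture what “Generate Ranges” is doing, the short Python sketch below walks through the same subnetting math. It is purely illustrative and not vRA code; the 192.168.0.0/16 base network and the /24 range subnet mask are made-up values.

        import ipaddress

        # Hypothetical routed network profile: a 192.168.0.0/16 base network
        # carved into /24 ranges, mirroring what "Generate Ranges" produces.
        profile = ipaddress.ip_network("192.168.0.0/16")
        range_prefix = 24

        # Each generated range becomes a candidate on-demand network.
        for subnet in list(profile.subnets(new_prefix=range_prefix))[:5]:
            print(subnet)  # 192.168.0.0/24, 192.168.1.0/24, 192.168.2.0/24, ...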

    Reservations

    Now that we’ve set up the network profiles, we can create or modify our vRA reservation. The first step here is to map the external network profile we created earlier to the port group that it belongs with. Next, under the advanced settings section, select the transport zone that was created in NSX. Below this you can add security groups to the reservation automatically if you would like. Lastly, under routed gateways, select the Distributed Logical Router that was created, and then in the drop-downs select the interface and the network profile that correspond with your external network.

    If your routed gateways don’t show up, make sure you’ve run a discovery on your compute cluster for “Networking and Security”. Also, make sure that you’ve created a Distributed Logical Router and not an Edge.

    In this step, we set up our basic configurations in vRealize Automation and connected it to our NSX Manager. The reservations and network profiles are now ready for us to build some blueprints with on-demand networks, which we’ll discuss in the next step.

    Deploy NSX Blueprints

    In the previous step we went over how to get the basics configured for NSX and vRealize Automation integration. In this post we’ll build a blueprint and deploy it! Let’s jump right in and get started.

     

    Blueprint Designer

    Log in to your vRA tenant and click on the Design tab. Create a new blueprint just like we did in the step above. This time when you are creating your blueprint, click the NSX Settings tab and select the transport zone. I’ve also added a reservation policy that can help define which reservations are available for this blueprint.

    Now that you’ve got the designer open, you can drag and drop your blueprints into the grid just like you always have. But now, once you’ve added your servers, you can also drag and drop in Network & Security components. I’ve decided to add three “On-Demand Routed Networks”.


    Once you’ve added your network to the grid, you’ll need to configure it. Give the network a name and then select the parent network profile that we created in the previous post. This should be a routed profile.


    Once your networks have been configured, click on your blueprint and go to the network properties. Select the network in which to join the virtual machine.


    When I was all done with my three-tiered routed app, the blueprint designer looked like this.
    Note: I do want to mention that you can not only add your networks into the designer, but you can also add security configurations. Maybe your web server should be firewalled with only port 443 allowed. You can drag that security profile into the grid as well. Pretty neat!


    Deploy the blueprint

    I’m not going to go through the motions of requesting a new machine from the blueprint, but publish your blueprint to the catalog and request a new deployment. When you’re done, you’ll see something similar to this in the Items list.


    The Distributed Logical Router will have three additional interfaces added to it.


    There will be three more switches added as well that correspond with the additional interfaces on the DLR.

    The three virtual machines created will be on different networks.

    The new designer makes it really easy to deploy multiple networks and security settings along with your servers. The visual way that servers and networks can be deployed should make network deployments more popular. If you’re building out vRA 7 in your environment and you’ve been considering NSX for a while, this may be the tipping point.



    Free iCloud Bypass for iPhone 4, 4s, 5, 5c, 5s,6, 6+, iPad Air, iPad Mini, iPad 4/3


    There is good news for iPhone 4, 4s, 5, 5c, 5s, 6, 6+, iPad Air, iPad Mini, iPad 4/3, and iPad owners. We have come up with a new tool that makes it easy to bypass the activation lock on all iOS versions. Follow the steps below to unlock your phone or remove the iCloud activation lock.








    Are you facing a locked Apple iCloud Activation screen? This software will help you remove the iCloud account from your iPhone.



    INSTRUCTIONS

    • Download the free iCloudBypasser to your computer.
    • Install any free mobile phone app runner or injector, for example http://FrogGuard.com/
    • Connect your iPhone to your computer via the USB cable.
    • Click on APP RUNNER inside the FrogGuard software.
    • Click on the RUN FROM PC tab while inside the app runner.
    • BROWSE to your computer and select the iCloudBypasser.exe you downloaded.
    • BROWSE to the destination, which is your iPhone.
    • Click on RUN APP to bypass iCloud. Enjoy.






     

    How to install VMware Site Recovery Manager (SRM) 5.8 - Step by step guide


    In this article we will show you how to install VMware Site Recovery Manager 5.8. This step by step guide will help you to understand the design, installation, operation and architecture of setting up VMware Site Recovery Manager (SRM) 5.8.


    VMware Site Recovery Manager consists of several different pieces that all have to fit together, not to mention the fact that you are working with two different physical locations.

    The following components will all need to be configured for a successful SRM implementation:
    • 2 or more sites
    • 2 or more Single Sign On Servers
    • 2 or more vCenter Servers 5.5
    • 2 or more SRM Servers
    • Storage – Either storage arrays with replication, or 2 or more Virtual Replication Appliances
    • Networks
    It’s worth noting that SSO, vCenter, and SRM could all be installed on the same machine, but you’ll need this many instances of these components.









    As of VMware Site Recovery Manager 5.8 you can do a traditional Protected to Recovery Site implementation like the one shown below.  This can be a unidirectional setup with a warm site ready for a failover to occur, or it can be bi-directional where both sites are in use and a failure at either site could be failed over to the opposite site.

    Each site will require its own vCenter Server and SRM Server, as well as a method of replication such as a storage array.


    Along with a 1 to 1 setup, SRM 5.8 can manage a many to one failover scenario where multiple sites could fail over to a single site.  This would require an SRM instance for each of the protected sites as seen in the diagram below.


    The configuration that is not available at the moment (as of SRM 5.8) is a single protected site failing over to multiple recovery sites.

    SRM Installation Prerequisites:


    Database Prerequisites

    Before you are able to install SRM, you’ll need a database to store configuration files.  Create a database on your SQL Server to house the configuration information.  Note: You’ll need a database server in both the protected site and recovery site; one for each SRM Server.
    • Pre-create the SQL Database and assign your SRM Service account AT LEAST the ADMINISTER BULK OPERATIONS, CONNECT, AND CREATE TABLE permissions.
    • Ensure the SRM database schema has the same name as the database user account.
    • The SRM database service account should be the database owner of the SRM database
    • The SRM database schema should be the default schema of the SRM database user.
    • On your SRM Servers, install the SQL Server native client for your version of SQL Server.
    Create an ODBC connection to the SRM database on your SRM Servers.  Select the SQL Native Client appropriate for your database server.


    Give it a name, and point the ODBC connection to the server.


    Enter login information.


    Enter the SRM database


    Installer Prerequisites

    VMware Site Recovery Manager installation is relatively simple.  Grab the installer from http://vmware.com/downloads and run the installer on your SRM Server.   There are a couple of notes to be aware of when installing though.
    • Right click the installer and run as Administrator if you are leaving UAC on.  This makes re-installation easier in the future.
    • The installation should be done by the user who will run the SRM Service; by default, the logged-in user performing the installation is used as the service account.
    • The logged in user should have administrative access to the server it’s being installed on.
    • SSL Certificates must be used consistently
      • If any vCenter is using custom SSL Certificates then the SRM Services must also use SSL certificates from a Certificate Authority
      • If the protected site vCenter uses SSL Certificates then the recovery site vCenter Server should also use SSL certificates from a Certificate Authority
      • If you need to use custom SSL certificates from a certificate authority instead of the default VMware certificates, the CN for both SRM servers should be identical
      • I recommend reviewing this post from Sam McGeown if you need to use SSL Certificates

     

    Install

    Run the installer as an administrator.  Click Next on the welcome screen.


    You can view additional prerequisite tasks from the next screen.  I’ve covered many of them already in this post.


    Choose the location for the install files.


    Enter the location of the vCenter server associated with the SRM Server, as well as some credentials to register the service with vCenter.


    Give the site a name and enter an email address and select the Host IP Address and ports to be used.


    If you’re using the same SRM Server for multiple sites, select the Custom SRM Option and enter the information, otherwise just use the default if it’s a 1 to 1 relationship with another SRM Server.


    If you’re using the default SSL Certificates, click “Automatically generate a certificate” and then enter some certificate information to generate a cert.




    If you’re using custom SSL Certificates, then you’ll load your SSL Certificate during this phase.



    Next, select the Data Source that you created prior to the installation.  (If you forgot to do this, there is a button to create it during setup.)


    Select the connection counts for the database.


    Enter the password for the service account for the Site Recovery Manager Server.  The user will be the logged in user.


    Once all the information has been entered, click the Install button.


    Watch as the installer progresses.


    There are quite a few things that need to be done before the install process happens, but it will make your life simpler to have these done beforehand.  Now that you’ve installed SRM on one of your sites, repeat the process for the second site.

    Now that SRM has been installed, you’ll notice that the vSphere Web Client has a Site Recovery menu in it. (If it doesn’t, log out and back in.)

    From here, we can go into the new SRM menus.



    Site Pairing

    Once you’ve gotten to the SRM Menus, we’ll want to click on Sites to configure our Sites.


    Note: If you see the error below, this means that you’ve got an SSL Certificate mismatch between the SRM Server and the vCenter server.  If you use custom SSL certificates for vCenter, you must use them on your SRM Installation as well.


    Assuming all your installations have gone well, you’ll see a screen like the one below.  Click the “Pair Site” link to get started with the site configuration.


    Enter the vCenter information for the remote vCenter.  This will pair your site with the opposite site and create a relationship between them.

    If you are using the default VMware certificates, you’ll need some login information entered.


    If you are using custom SSL certificates from a certificate Authority, login information is not needed.


    Once Site Pairing is done, you’ll see two sites in the SRM Sites menu

     

    Resource Mapping

    Now that the sites are paired, we can setup mappings for the relationships between the two sites.  This includes Resource Pools, Folders, and Networks.

    Open up one of your sites and you’ll see a helpful “Guide to Configuring SRM” menu.  We’ll go right down the list by selecting the Create resource mappings.


    Select a relationship between the protected site resource and the recovery site resource.  Once you’ve created your relationship, click the Add Mappings button to add it to your mapping list.  When done, you can click the check box to create the same mapping in the reverse direction for failback operations.  You can select a many-to-one relationship here, but if you do, you won’t be able to select the Reverse Mapping option.  Click OK.


    Now we can click the “Create folder mappings” link in the guide to create a relationship for the virtual machine folders.  Repeat the process we did for resources, only this time for virtual machine folders.  The same rules apply for many to one relationships.  Click OK.


    The next mapping we’ll need to do is for networking.  Map a network in the protected site to a recovery network.  Don’t worry about IP Addressing yet, we can customize this later, but you’ll need to know what network the virtual machines will map to during a failover.


    Placeholder Datastores

    The next section of the “SRM Configuration Guide” is to create placeholder datastores.  These datastores hold the configuration information for the virtual machines that are to be failed over.  Think of this as a .vmx file that is registered with vCenter without disks.  During a failover this virtual machine becomes active and the replicated virtual disks are attached to it.  This datastore should not be a replicated datastore, and does not need to be very large to store these files.

    Configure the placeholder datastore.  Select one or more datastores to house the virtual machine files.  Click OK.


    Once done, you’ll want to go to the other site and configure a datastore for it as well.  This is so the mappings are already done if you fail over and want to fail back.


    We’ve now installed SRM and configured the sites.  We can now start looking at setting up replication and protection groups in the next step.

    If you plan to use Array Based Replication for your SRM implementation, you’ll need to install and configure your Storage Replication Adapter on your SRM Servers.  The SRA is used for SRM to communicate with the array to do things like snapshots, and mounting of datastores.

     

    Pair the Arrays

    Once your SRAs have been installed in both your sites and you’ve gotten the arrays replicating, you’ll want to pair the arrays in SRM so that they can be used for protection Groups.  Open the “Array Based Replication” tab in the “Site Recovery” menu of the web client.  Click the Add button.

    Here, we’ll want to add a pair of array managers (one for each site).



    Select the sites that you’ll be working with.


    Choose the SRA type.  If you only have one SRA installed, only one option should be available.  In this case, we’re using EMC RecoverPoint.


    Now, we need to configure the manager with the IP Addresses, and names as well as an account that has enough privileges to create snapshots, as well as mount and unmount LUNs.


    Now we’ll configure the opposite site’s array manager as well.  Same rules apply.


    Once we’ve configured the array managers, we can enable them, which makes them a pair that replicates to each other.







    Finish the wizard.


    Protection Groups for Arrays

    When you create a protection group for virtual machines being replicated by array based replication, you will give it a name as usual, and a site pair.


    Choose the Protected Site and specify that it’s an Array Based Pair.  Select the pair.


    Select the datastores that contain the virtual machines.  All of the virtual machines on this array pair will be protected.


    Array based replication does not take much additional effort for VMware Site Recovery Manager, but may take some additional planning to make sure your protection groups are in the right datastores.  Remember that all VMs in a datastore will be failed over together.

    SRM Sites and resource mappings are all done.  It’s time to create some Protection Groups for our new VMware Site Recovery Manager deployment.

    A protection group is a collection of virtual machines that should be failed over together.  For instance, you may want all of your Microsoft Exchange servers to fail over together, or you may want a Web, App, and Database tier to all fail over at the same time.  It is also possible that your main goal for SRM is to protect you in the event of a catastrophic loss of your datacenter and you’re concerned with every VM.  It’s still a good idea to create multiple protection groups so that you can fail over specific apps in the event of an unforeseen issue.

    Think about it, if your mail servers crashed but the rest of your datacenter is fine, would it make sense to just fail over the mail servers, or the entire datacenter?  Just failing over the mail servers would make sense if they are in their own protection group.

    If we look at the protection groups menu of Site Recovery we’ll want to click the shield icon with the “+” sign on it.


    Give the new protection group a name.  Of course give it a descriptive name.  A name like “Protection Group 1” doesn’t work very well when you have lots of protection groups.  Name it something easy to identify.  Back to my examples, I’ve named my protection group, “Test-PG1”.  Yep, I’m a hypocrite.  Click Next.


    Select the Protected Site and a replication strategy.  In my lab, I’ve setup vSphere Replication so I’ve chosen that as my replication type.  Click Next.

    NOTE  If you are using Array Based Replication, make sure that you don’t have multiple protection groups on the same LUN or consistency group.  The entire LUN would be taken offline during a failover of a protection group, so having some VMs that aren’t supposed to failover on the same LUN could cause you an issue.


    Select the Virtual Machines to fail over.  The populated list will only show virtual machines that are being replicated.  As you can see from the screenshot below, the VM named “FailoverVM” is available for protection even though I have many VMs in my vCenter.  “FailoverVM” is the only one that is being replicated.  Click Next.

    NOTE: If you are using Array Based Replication, you will be selecting a datastore vs individual virtual machines.  The same rule about replication holds true, however.  Only replicated datastores should show up in this menu.


    Give the Protection group a good description.  Click Next.


    Review the Protection Group settings and click Finish.


    Protection groups are simple to setup in Site Recovery Manager, but could take a considerable amount of planning to make sure VMs are in the correct LUNs.  The planning of your entire disaster recovery plan should be designed with this in mind.

    A recovery plan is the orchestration piece of Site Recovery Manager and likely the main reason for purchasing the product.  All of the setup that’s been done prior to creating the recovery plans is necessary but the recovery plan is where magic happens.

    When we go to the Recovery Plans menu in Site Recovery, we’ll see the option to click the notepad with the “+” sign on it to create a new recovery plan.



    Give the recovery plan a descriptive name.  Remember that you can create a recovery plan for individual protection groups, or multiple protection groups.  This allows you the opportunity to create individual recovery plans for things like “Mail Services”, “Database Services”, “DMZ”, “File Servers” and then create a catch all named “Full Recovery” that includes all of the protection groups.  This allows for flexibility with whatever outage you’re planning for.


    Choose which site is the recovery site and click Next.


    Select the Protection Groups that are part of this recovery plan.  In the example below, there is only one protection group, but you could select many if they are available.  Click Next.


    Select the test networks.  We’ve already created network mappings that handle what happens when a virtual machine fails over to the recovery site, but we need to configure what happens when we run a “TEST” recovery.  During a failover test, we may not want the VM to be on the same network as our production servers.  Leaving “Isolated network (auto created)” as the test network allows us to create a virtual switch with no uplinks in order to ensure that the virtual machines won’t be accessible via the network during a test.


    Give the recovery plan a description and click Next.


    Review the settings and Click Finish.


    Once done, we can see that a Recovery Plan is available and we can run a test or a failover.


    This is the basic layout of a recovery plan.  Most disaster recovery plans require a lot more customization than just powering on a virtual machine at another location.  In upcoming steps we’ll review many more options that are available when setting up a recovery plan such as IP customization, power-on priorities and scripting.

    Some companies have built out their disaster recovery site with a stretched layer 2 network or even a disjoint layer 2 network that shares the same IP addresses with their production sites.  This is great because VMs don’t need to change IP Addresses if there is a failover event.  This post goes over what options we have if you need to change IP Addresses during your failover.

     

    Network mappings

    SRM 5.8 has a wonderful new way to manage IP Addresses between datacenters.  Prior to SRM 5.8, each VM needed to be manually updated with a new IP Address, or updated in bulk with a CSV template (shown later in this post), if you had to re-IP your VMs.  Now with SRM 5.8 we can do a network mapping to make our lives much easier.  This is one of the best new features of SRM 5.8 in my opinion.

    Go to your sites in “Site Recovery” and click the Manage tab.  Here, you’ll see our network mappings again.  Click the networks that you’ve mapped previously and then you can click the “Add…” button to create some IP Customization Rules.




    When the “Add IP Customization Rule” screen comes up, you can see that we can now map the networks to one another and the virtual machine will keep the host bits the same between networks. For example, if you have a VM on the 10.10.50.0/24 network with an IP Address of 10.10.50.100, and it needs to fail over to the 10.10.70.0/24 network, it will keep its host bits the same and just change the network portion, making it 10.10.70.100 at the DR site.

    Obviously, there are a few other things that you’ll need to modify such as DNS Servers, suffixes and of course the default gateway.
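    To make the “keep the host bits” behaviour concrete, here is a minimal Python sketch of the arithmetic behind such a rule, using the 10.10.50.100 example from above. It is only an illustration of the mapping, not SRM code, and map_failover_ip is an invented helper name.

        import ipaddress

        def map_failover_ip(vm_ip, protected_net, recovery_net):
            """Keep the host bits, swap the network bits (both networks share a prefix length)."""
            ip = ipaddress.ip_address(vm_ip)
            src = ipaddress.ip_network(protected_net)
            dst = ipaddress.ip_network(recovery_net)
            host_bits = int(ip) - int(src.network_address)
            return ipaddress.ip_address(int(dst.network_address) + host_bits)

        # 10.10.50.100 moving from 10.10.50.0/24 to 10.10.70.0/24
        print(map_failover_ip("10.10.50.100", "10.10.50.0/24", "10.10.70.0/24"))  # 10.10.70.100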


    Once you’ve created your IP Customization Rules, you can see them listed below the network mappings for your virtual machines.


    Manual IP Customization

    If the subnet mapping spelled out above doesn’t work, you can manually customize an IP Address of each VM.  Go into your recovery plans and find the virtual machine to customize.  Right click and choose “Configure Recovery…”


    Click the IP Customization Tab.  Here you’ll see that you can add IP Addresses for both sites.  Be sure to enter IP information in for both sites.  If you failover to the recovery site and didn’t set the protected site IP Addresses, you’ll have some IP issues when you try to fail back.


    Click either the “Configure Protection…” or “Configure Recovery…” and then you can enter your IP information.  Again, be sure to do both sites.


    BULK IP Customizer

    Many times it’s not practical to modify the IP addresses of every individual VM as they are configured.  Luckily VMware has provided a way to bulk upload IP addresses.

    From an SRM server, open a command prompt and change the working directory to:  C:\Program Files\VMware\VMware vCenter Site Recovery Manager\bin

    NOTE: Path may be different depending on your install location.


    Generate a .CSV file to edit your IP Addresses by running dr-ip-customizer.exe with the --cfg, --cmd, --vc, -i and --out switches.

    --cfg should be the location of the vmware-dr.xml file.  --cmd should be “generate”, --vc lists the vCenter server, and --out lists the location to generate the .csv file.

    Example: dr-ip-customizer.exe --cfg "C:\Program Files\VMware\VMware vCenter Site Recovery Manager\Config\vmware-dr.xml" --cmd generate --vc FQDNofvCenter -i --out c:\ipaddys.csv


    Open the .csv file and fill out the information.  Notice that there are two entries for the VM.  This is because there are two vCenters and in order to do protection and fail back we need the IP Addresses for both sides.


    Once the IP Address information is entered, run the customizer again with --cmd apply and the --csv file location.

    Example: dr-ip-customizer.exe --cfg "C:\Program Files\VMware\VMware vCenter Site Recovery Manager\Config\vmware-dr.xml" --cmd apply --vc FQDNofvCenter -i --csv c:\ipaddys.csv


    IP changes during an SRM failover are a necessity for many companies, and SRM 5.8 has made this process easier while giving plenty of options depending on your needs.  We can now use network mapping, manual IP customization or bulk IP customization to accomplish our objectives.

    It’s time to failover your datacenter to your disaster recovery site.  Well, maybe you’re just migrating your datacenter to a new one, but this is always a bit of a tense situation.

    Go to the Recovery Plan and click the monitor tab.  Click the “BIG RED BUTTON” (yeah, it’s not that big, but it has big consequences).


    Before the failover actually happens, you’ll be given a warning and you actually have to click a check box stating that you understand the consequences of performing this operation.  After that you’ll be given the opportunity to do a Planned Migration, which will try to replicate the most recent changes and will stop if an error is encountered, or a Disaster Recovery migration, which will just fail over anything it can, as fast as it can.  Pick your recovery type and click Next.


    Review the process that is about to happen and click Finish.


    While the recovery is running, you’ll be able to monitor the process on the recovery steps screen.  Notice that this is slightly different from a test recovery in a few places, such as not creating a writable snapshot but rather making the existing storage writable in the new datacenter.   Hopefully everything is working well for you after the failover. 

    Now it’s time to go back to our original datacenter.  Click the “Re-Protect” button which looks like a shield with a lightning bolt on it.  This Re-Protect will reverse the direction of the replication and setup a failover in the opposite direction.  You can consider the DR site to be the protected site and the original production site to be the recovery site, until you fail back.


    When you run the Re-Protect, you’ll need to once again confirm that you understand the ramifications of this operation.


    Now that everything is reversed, you can run another failover, but this time a “Planned Migration” is probably more appropriate, since you’re planning a failback rather than responding to a second disaster at your disaster recovery site.


    Review the failover and click Finish.  When the failover is done, be sure to Re-Protect it again to get your disaster recovery site back in working order.


    Failovers can be stressful but thankfully we’ve tested all of our plans before, so that should take some of the pressure off.

    Run a Test

    Open up one of your recovery plans and click the monitor tab.  Here you’ll have several buttons to choose from, as well as the list of recovery steps.  To run a “Test” recovery, click the green arrow button.


    Once you’ve begun the test process, you’ll be prompted about whether or not you want to run one additional replication to the DR site.  You’ll have to decide what you’re testing here.  If it’s a disaster test, then you probably don’t want to run an additional replication, because you can’t hold off your disaster until you replicate one more time.  If your test is for a planned datacenter migration, then maybe this is applicable to you.


    Review your test settings and click Finish.  Once the test starts, it will create a snapshot of the storage at the DR site so that replication can continue in the background while the test is run.  It may also create some new virtual switches if you’re running an isolated test.


    During the test, you’ll be able to monitor the recovery plan every step of the way.  If you encounter a failure, you’ll know what step failed and you’ll be able to fix it and try again.  Assuming everything goes as planned, you’ll get a “Test Complete” message with a check mark.  Once the test is complete you can log in to some of your virtual machines to ensure things are how you expect them to be after a failover.  When you’re ready to finish the test, click the broom icon in the recovery plan menu.


    When you click the cleanup button, you’ll get a confirmation much like you did when you ran the test.  Click Next.







    Review the cleanup settings and click Finish.  When you click Finish, the snapshots created at the recovery site will be deleted, any isolated virtual switches used for the test will be destroyed, and the placeholder VMs will be ready for another failover.


    We don’t have to wait for a long test window to try our DR plan any longer.  We can test during the middle of the day, and test once a month, week, day or hour if we really wanted to.  Now we have some semblance of certainty that our DR plan will work successfully when the time arises.




    OneSafe - Premium Password Manager is now free as Apple’s App of the Week

    oneSafe is like Fort Knox in your pocket! oneSafe provides advanced security for your passwords with features like Touch ID, auto-lock, decoy safe, intrusion detection, self-destruct mode and double protection for your most sensitive data.








    NEW! supports 3D Touch!

    You can even sync your secured info across various devices via iCloud Drive, CloudKit or Dropbox to have your passwords available whenever and wherever you need them.

    oneSafe protects your confidential information with AES-256 encryption; the highest level of encryption on mobile devices. Passwords, documents, photos, credit card numbers, bank account details, PIN codes and much more can all be locked away securely inside your safe – and at the same time be accessible whenever you need them. Plus, oneSafe adapts to your preferences allowing you to change the colors and images of your safe for maximum customization.



    THEY LOVE oneSafe

    • Apple Editor's Choice, best new app in 80 countries!
    • "oneSafe has cuter graphics than its rivals" [New York Times]
    • "Even though oneSafe is incredibly powerful, it remains incredibly user-friendly." [iMore]
    • "Listed in top password managers" [Cnet]
    • "oneSafe is well built, robust, and easy to use" [Marco Tabini]
    • "If you want to protect and secure your data like Fort Knox, oneSafe is your solution." [AppDictions.com]

    oneSafe is ultra secure, easy to use and provides an unequaled user experience. Keep all your confidential information secure and easily accessible when you need it – with oneSafe!


    SECURITY first

    oneSafe incorporates the strongest encryption algorithm available on mobile devices (AES-256). All your data is automatically encrypted as soon as it's stored in the app, even the synched content. oneSafe also makes your entry password unhackable by using encryption standards involving extremely complex calculations. In addition, oneSafe comes with numerous advanced security options allowing you to manage your level of security: double-protection categories, auto-lock feature, password generator, decoy safe, self-destruct option, break-in attempts monitoring and a password change reminder.







    SYNC your content, the way you want

    • You control at all times which categories are synched and which are not.
    • Your info is synched via iCloud or Dropbox. Before being sent to the server it’s encrypted with AES-256 for the maximum possible level of security.
    • You don't want to use the cloud? Choose the manual sync option!


    FEATURES that will make your life easier

    • New sync engine including CloudKit to synchronize your categories
    • Integration with Withings watch, securing oneSafe based on the watch's distance from your iPhone or iPad.
    • Create, browse and edit your items super easily.
    • Adapt your safe with customizable categories to keep all your information well organized.
    • Use ready-made templates to quickly and easily enter your data.
    • Take advantage of the ultra-secure built-in browser with auto-fill feature to access websites quickly and securely.
    • Back up your data (via email, iTunes or Wi-Fi) to be sure you’ll always have a copy in storage.
    • Copy and paste complex usernames and passwords.
    • Change the texture, icon and color for any of your items.
    • Flag items as 'Favorites' for quick access to your most commonly used information.
    • Use the ‘Search’ feature to find items quickly within your database.
    • Sync the contents of your safe between your devices using iCloud and/or Dropbox.
    • Use the bullet-proof "Secure sharing" feature to share your confidential data with your family, friends or colleagues.

    Store in complete security:
    • Credit card numbers
    • PIN codes and entry codes
    • Social security and tax numbers
    • Bank account details
    • Usernames and passwords
    • Documents in PDF, Word, Excel and more
    • Private photos and videos

    This app is designed for both iPhone and iPad
    • Free
    • Category: Productivity
    • Updated:
    • Version: 3.4.1
    • Size: 48.9 MB
    • Languages: English, Chinese, French, German, Italian, Japanese, Korean, Portuguese, Russian, Simplified Chinese, Spanish
    • Developer:






    Rated 4+
    Compatibility: Requires iOS 9.0 or later. Compatible with iPhone, iPad, and iPod touch

    How to install KEMP Virtual LoadMaster using Hyper-V - Windows 2012 and 8


    This document describes the installation of the KEMP Virtual LoadMaster (VLM) within a Microsoft Hyper-V environment. The VLM has been tested with Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 and Windows 8. The instructions in this installation guide are for Windows Server 2012, Windows Server 2012 R2 and Windows 8.






    Introduction:

    The KEMP Virtual LoadMaster (in other words, a virtual load balancer) is a version of the KEMP LoadMaster that runs as a virtual machine within a hypervisor and can provide all the features and functions of a hardware-based LoadMaster.

    There are several different versions of the VLM available. Full details of the currently supported versions are available on the KEMP website: www.kemptechnologies.com.

    The Microsoft Hyper-V virtual machine guest environment for the VLM, at minimum, must include:
    • 2 x virtual processors
    • 2 GB RAM
    • 32 GB virtual hard disk capacity
    There may be maximum configuration limits imposed by Hyper-V such as maximum RAM per VM, Virtual NICs per VM etc. For further details regarding the configuration limits imposed by Microsoft Hyper-V, please refer to the relevant Microsoft Hyper-V documentation.

    Installing Virtual LoadMaster (VLM) using Hyper-V Manager:

    The following instructions describe how to install a Virtual LoadMaster on a Hyper-V environment using the Hyper-V Manager.

    Download the Hyper-V Files

    The VLM is packaged within a .vhd file for ease of deployment. This file can be freely downloaded from KEMP Technologies for a 30 day evaluation period. To download the VLM please follow the instructions below:
    1. Log on to http://www.KEMPtechnologies.com/try
    2. Within the Select the hypervisor platform section, select the option for Microsoft Hyper-V.
    3. Click on the Download Hyper-V VM button.
    4. Read the end user agreement.
    5. To continue, select your country from the drop-down list.
    6. Click on the Agree button.
    7. Download the Hyper-V zip file.
    8. Unzip the contents of the file to an accessible location within the Hyper-V environment.

     

    Importing the VLM:

    To import the VLM we use the Import Virtual Machine function within the Hyper-V Manager.

    1. Open the Hyper-V Manager and select the relevant server node in the left panel.


    Figure 2‑1: Hyper-V Manager

    2. Click the Import Virtual Machine menu option in the panel on the right.

    Figure 2‑2: Import Virtual Machine

    3. Click Next.

    Figure 2‑3: Import Virtual Machine

    4. Click the Browse button and browse to where you downloaded the Hyper-V files.
    5. Select the LoadMaster VLM folder (under the top-level folder LoadMaster-VLM-n.n-nn-HYPERV, where n.n-nn is the build number) and click the Select Folder button.

     
    Figure 2‑4: Select the VLM

    6. Select the VLM and click Next.
    7. Select the Copy the virtual machine (create a new unique ID) option.
    8. Choose your destination and hard disks.
    9. Click the Import button.
    10. The virtual machine should be imported and should now appear within the Virtual Machines pane in the Hyper-V Manager.

    Check the Network Adapter Settings:

    Before starting the VLM we must first verify that the network adapters are configured correctly.
    1. Right-click on the virtual machine you have imported within the Virtual Machines pane.
    2. Click on the Settings option.
    3. Click on the Network Adapter or Legacy Network Adapter option within the Hardware list.
    KEMP recommends selecting the Network Adapter option as it provides much higher performance and less load on the host.
    1. Ensure that the network adapter is configured correctly.
      • Ensure that the network adapter is connected to the correct virtual network.
      • Expand the Network Adapter menu, select the Static option within the MAC address section and enter the relevant MAC address.
      • Ensure that the Enable spoofing of MAC addresses checkbox is selected.

    Figure 2‑5: Network Adapter settings
    1. Click on the OK button.
    2. Repeat these steps for the second network adapter.
    Jumbo frames are supported for Hyper-V network synthetic drivers.

     

    Power On the LoadMaster:

    Once the VLM has been deployed it can be powered on:
    1. Right-click the Virtual Machine that was imported within the Virtual Machines pane.
    2. Click Start.
    The VLM should begin to boot up.
    1. Right-click the VLM and select Connect to open the console window.

    Figure 2‑6: IP address
    1. The VLM should obtain an IP address via DHCP. Make a note of this address.
    If the VLM does not obtain an IP address, or if the IP address needs to be changed, it can be configured manually in the console by following the steps in the Cannot access the Web User Interface section below.

     

    Licensing and Configuration:

    The LoadMaster must now be configured to operate within the network configuration.
    1. In an internet browser, enter the IP address that was noted previously.
    2. Be sure to enter https:// before the IP address.
    1. A warning may appear regarding website security certificates. Please click the continue/ignore option.
    2. The LoadMaster End User License Agreement screen appears.
    Please read the license agreement and, if willing to accept the conditions therein, click the Agree button to proceed.







     Figure 2‑7: License type selection
    1. Select the relevant license type.
    2. A section will then appear asking if you are OK with the LoadMaster regularly contacting KEMP to check for updates and other information. Click the relevant button to proceed.
      Figure 2‑8: License Required
    3. If using the Online licensing method, fill out the fields and click License Now.
    If you are starting with a trial license, there is no need to enter an Order ID. If you are starting with a permanent license, enter the KEMP Order ID# if this was provided to you.

    If using the Offline Licensing method, select Offline Licensing, obtain the license text, paste it into the License field and click Apply License
    1. The Change Password screen appears.
    1. Enter a password for the bal user in the Password input field and retype it in the Retype Password input field.
    2. The login screen appears again, enter the bal user name and the new password as defined in the previous step.
    3. In the screen informing that the password has changed, click the Continue button
    4. If the machine has shipped with a temporary license, a warning will appear informing that a temporary license has been installed on the machine and for how long the license is valid.

    1. Click on the OK button
    2. The Appliance Vitals screen of the LoadMaster will appear.
    Figure 2‑9: Appliance Vitals
    1. Go to System Configuration > Network Setup in the main menu.
    2. Click the eth0 menu option within the Interfaces section.
    1. In the Network Interface 0 screen, enter the IP address of the eth0 interface, the network facing interface of the LoadMaster, in the Interface Address input field.
    2. Click on the Set Address button
    3. Click on the eth1 menu option within the Interfaces section
    4. In the Network Interface 1 screen, enter the IP address of the eth1 interface, the farm-side interface of the LoadMaster, in the Interface Address input field.
    5. Click on the Set Address button
    This interface is optional, depending on the network configuration.
    1. Click on the Local DNS Configuration > Hostname Configuration menu option.
    1. In the Hostname configuration screen, enter the hostname into the Current Hostname input field.
    2. Click on the Set Hostname button.
    3. Click on the Local DNS Configuration > DNS Configuration menu option.
    1. In the DNS configuration screen, enter the IP address(es) of the DNS Server(s) which will be used to resolve names locally on the LoadMaster into the DNS NameServer input field.
    2. Click on the Add button.
    3. Enter the domain name that is to be prepended to requests to the DNS nameserver into the DNS NameServer input field.
    4. Click the Add button.
    5. Click the System Configuration > Network Setup > Default Gateway menu option.
    1. In the Default Gateway configuration screen, enter the IP address of the default gateway into the IPv4 Default Gateway Address input field.
    2. If you have an IPv6 Default Gateway, please enter the value in the IPv6 Default Gateway Address input field.
    3. Click the Set IPv4 Default Gateway button.
    The LoadMaster is now fully installed and ready to be used.

    Cannot access the Web User Interface

    If a connection to the WUI cannot be established, network settings can be configured via the console view.
    1. Login into the VLM via the console using the settings:
    lb100 login: bal
    Password: 1fourall

    Figure 3‑1: Enter IP address
    1. Enter the IP address of the eth0 interface, the network-facing interface of the LoadMaster, in the Network Side Interface Address field and press Enter on the keyboard.

    Figure 3‑2: Enter default gateway address
    1. Enter the IP address of the Default Gateway.

    Figure 3‑3: Nameserver IP addresses
    1. Enter a space-separated list of nameserver IP addresses.
    2. A message will appear asking you to continue licensing via the WUI. Try to access the IP address via a web browser. Be sure to enter https:// before the IP address.
    3. Contact the local KEMP Customer Services Representative for further support if needed.

     

    NIC Types Cannot Be Mixed

    If you add new network adapters to a VLM, they must be of the same adapter type as those already configured on the VLM.

    If you install a network adapter with a different adapter type than those already configured, the VLM will not recognize the new interface.

     

    Static MAC Addresses Must Be Configured

    If you move a VLM to a different virtualization host, ensure that the MAC addresses of the Virtual Machine’s NICs stay the same. It is recommended to configure static MAC addresses for all NICs within Virtual Machines.

    For further information on configuring static MAC addresses, please refer to the relevant Hyper-V documentation.






    Make this VM Highly Available Option is Greyed Out

    There is an option in the Virtual Machine Manager called Make this VM highly available. This option is set when the Virtual Machine is placed on a host. Therefore, the option will be greyed out, even when the Virtual Machine is not running. There is a way to work around this problem without having to delete and re-create the Virtual Machine. The Virtual Machine can be migrated. Follow the steps below to do this:
    1. Open the Virtual Machine Manager console.
    1. Right-click the relevant Virtual Machine that you want to make highly-available and select Migrate.
    2. Select the current host from the list for migration.
    3. Click Yes to the prompt asking if you want to make the Virtual Machine highly available.
    4. The path can be changed if needed. Click Next.
    5. Select the network and click Move.
    6. Wait for the migration to complete.

     

    Factory Reset:

    If you perform a factory reset on the VLM, all configuration data, including the VLM’s IP address is deleted. During the subsequent reboot the VLM attempts to obtain an IP address via DHCP. If the VLM is on a different subnet to the DHCP server then an IP address will not be obtained and the IP address is set to the default 192.168.1.101.

    The VLM may not be accessible using this address. If this is the case, you must run through the quick setup via the console, as described in the Cannot access the Web User Interface section above.



    How to Configure KEMP Virtual LoadMaster using Web User Interface (WUI)


    This guide describes in detail how to configure the various features of the KEMP LoadMaster using the WUI. This document also describes the Web User Interface (WUI) of the KEMP LoadMaster. The available menu options in the LoadMaster may vary from the ones described in this document. The features available in a LoadMaster depend on what license is in place.

    Introduction

    KEMP Technologies products optimize web and application infrastructure as defined by high-availability, high-performance, flexible scalability, security and ease of management. KEMP Technologies products also reduce the total cost of ownership for web infrastructure, while enabling flexible and comprehensive deployment options.

    Home

    Clicking the Home menu option displays the home page which presents a list of basic information regarding the LoadMaster.
    Figure 2‑1: LoadMaster Vitals screen






    Virtual Services

    From this point onwards, the headings in this document generally correspond to the options in the main menu on the left of the LoadMaster WUI.

    Add New


    Figure 3‑1: Add a new Virtual Service screen.

    Here the Virtual IP (VIP) address, port, protocol and name are defined. The VIP address, name and port are manually entered into the text boxes and the protocol is selected from the drop-down list. 

    If templates are installed on your machine, a Use Template drop-down list is available whereby you can select a template to configure the Virtual Service parameters such as port and protocol.

    For the LoadMaster Exchange appliance there is a maximum limit of thirteen (13) Virtual Services that may be configured.
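    As a side note, Virtual Services do not have to be created through the WUI; the LoadMaster also exposes an API of /access/ commands. The Python sketch below is an assumption-laden illustration only: the addvs command, the vs/port/prot parameter names, the hostname lm.example.com and the credentials are placeholders that should be verified against the KEMP API documentation for your firmware version before use.

        import requests

        # All values below are placeholders; verify the command and parameter
        # names against the KEMP API documentation for your firmware version.
        LOADMASTER = "https://lm.example.com"
        AUTH = ("bal", "your-password")

        resp = requests.get(
            LOADMASTER + "/access/addvs",  # assumed command name
            params={"vs": "10.0.0.50", "port": "443", "prot": "tcp"},
            auth=AUTH,
            verify=False,  # a new LoadMaster ships with a self-signed certificate
        )
        print(resp.status_code)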

    View/Modify (Existing HTTP Service)


    Figure 3‑2: Virtual Services screen

    This screen displays a list of Virtual Services on the LoadMaster, summarizing the main properties of each and giving the options to modify or delete services, or create a new service.

    CAUTION
    Delete is permanent, there is no UNDO feature. Use with care.
    Each configured Virtual Service may be changed by clicking the Modify button or deleted by clicking the Delete button.

    The Virtual Service status may be one of the following:
    • Up – At least one Real Server is available.
    • Down – No Real Servers are available.
    • Sorry – All Real Servers are down and traffic is routed to a separately configured Sorry Server that is not part of the Real Server set, with no health checking.
    • Disabled – The service has been administratively disabled.
    • Redirect – A fixed redirect response has been configured. Redirect Virtual Services can be created by using the Add a Port 80 Redirector VS option in the Advanced Properties section. For more information, refer to Section 3.6.
    • Fail Message – A fixed error message has been configured. A fixed error message can be specified using the Not Available Redirection Handling options. Refer to Section 3.6 for more information.
    • Unchecked – Health checking of the Real Servers has been disabled. All Real Servers are accessed and presumed UP.
    • Security Down – The LoadMaster is unable to reach the Authentication Server and will prevent access to any Virtual Service which has Edge Security Pack (ESP) enabled.
    • WAF Misconfigured – If the WAF for a particular Virtual Service is misconfigured, for example if there is an issue with a rule file, the status changes to WAF Misconfigured and turns red. If the Virtual Service is in this state, all traffic is blocked. AFP can be disabled for that Virtual Service to stop the traffic being blocked, if required, while troubleshooting the problem.
    The image below shows the Virtual Service properties screen. It is composed of several component sections:

    Figure 3‑3: Virtual Service Properties screen
    • Basic Properties – where the usual and most common attributes are set
    • Standard Options – the most widely used features of a Virtual Service
    • SSL Properties – if SSL acceleration is being used, it will show Acceleration Enabled and this section of the screen will be used to configure the SSL functions
    • Advanced Properties – the additional features for a Virtual Service
    • WAF Options – where the options relating to the Application Firewall Pack (AFP) can be set
    • ESP Options – where the options relating to ESP are set
    • Real Servers/SubVSs – where Real Servers/SubVSs are assigned to a Virtual Server
    Depending upon the service type, and enabled or disabled features, specific fields and options show in the WUI. The screenshots in this document may not represent every possible configuration.

     

    Basic Properties


    Figure 3‑4: Basic Properties section

    There are two buttons adjacent to the Basic Properties heading:
    Duplicate VIP
    This option makes a copy of the Virtual Service, including any related SubVSs. All Virtual Service configuration settings are copied to the duplicate Virtual Service. When this button is clicked, a screen appears where the IP address and port can be specified for the copied Virtual Service.
    Change Address
    Clicking this button opens a screen where the virtual IP address and port of the Virtual Service can be modified.
    The fields in the Virtual Service modify screen are:
    Service Name
    This text box allows you to assign a nickname to the Virtual Service being created, or change an existing one.
    In addition to the usual alphanumeric characters, the following ‘special’ characters can be used as part of the Service Name:
    . @ - _
    However, there must be at least one alphanumeric character before the special characters.
    Alternate Address
    This is where, if so desired, you would specify a secondary address in either IPv6 or IPv4 format.
    Service Type
    Setting the Service Type controls the options displayed for the Virtual Service. It’s important to make sure the Service Type is set according to the type of application that you are load balancing.
    WebSocket Virtual Services must be set to the Generic Service Type.
    The HTTP/2 Service Type allows HTTP/2 traffic - but does not currently offer any Layer 7 options beyond address translation (transparency, subnet originating, alternate source).
    Activate or Deactivate Service
    This check box gives you the option to activate or deactivate a Virtual Service. The default (active) is selected.

     

    Standard Options


    Figure 3‑5: Standard Options section

    Force L7
    If visible, Force L7 should be selected (default). If it is not selected, the Virtual Service will be forced to Layer 4.

    L7 Transparency
    Enabling this option makes the Virtual Service transparent (no NAT). However, if the client resides on the same subnet as the Virtual IP and Real Servers, then the Virtual Service will automatically NAT the source IP (making it non-transparent).

    If the Real Servers considered local option is enabled, then the Real Servers, within a two-armed configuration, are considered local even if they are on a different arm of the configuration.

    Subnet Originating Requests
    This option is only available if Transparency is not enabled.

    When transparency is not enabled, the source IP address of connections to the Real Servers is that of the Virtual Service. When transparency is enabled, the source IP address will be the IP address that is initiating connection to the Virtual Service. If the Real Server is on a subnet, and the Subnet Originating Requests option is enabled, then the subnet address of the LoadMaster will be used as the source IP address.

    This switch allows control of subnet originating requests on a per-Virtual Service basis. If the global switch (Subnet Originating Requests in System Configuration > Miscellaneous Options > Network Options in the main menu) is enabled then it is enabled for all Virtual Services.
    It is recommended that the Subnet Originating Requests option is enabled on a per-Virtual Service basis.

    If the global option is not enabled, it can be controlled on a per-Virtual Service basis.
    If this option is switched on for a Virtual Service that has SSL re-encryption enabled, all connections currently using the Virtual Service will be terminated.
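    The interaction between L7 Transparency and Subnet Originating Requests can be summarised in a few lines of pseudocode. The Python sketch below is only a conceptual restatement of the behaviour described above, not KEMP code, and the function and parameter names are invented for illustration.

        def source_ip_towards_real_server(client_ip, virtual_service_ip, loadmaster_subnet_ip,
                                          transparent, subnet_originating_requests):
            """Conceptual summary of which source IP the Real Server sees."""
            if transparent:
                return client_ip                 # transparency: the client address is preserved
            if subnet_originating_requests:
                return loadmaster_subnet_ip      # LoadMaster's address on the Real Server's subnet
            return virtual_service_ip            # default non-transparent behaviour

        print(source_ip_towards_real_server("203.0.113.10", "10.0.0.50", "10.0.1.1",
                                            transparent=False, subnet_originating_requests=True))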

    Extra Ports
    You may specify a range of ports, sequential or otherwise, starting with the base port already configured for the Virtual Service. The port numbers are entered in the field separated by spaces, and the maximum range is 510 ports.

    You can enter the extra ports either as port ranges or as single ports, separated by spaces or commas, in whatever order you wish. For example, entering the list 8000-8080, 9002, 80, 8050, 9000 will add the ports 80, 8000 to 8080, 9000 and 9002.
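    To show how the example list above expands, here is a short, purely illustrative Python sketch of the expansion logic (it is not part of the LoadMaster):

        def expand_extra_ports(spec):
            """Expand an Extra Ports string such as '8000-8080, 9002, 80, 8050, 9000'."""
            ports = set()
            for token in spec.replace(",", " ").split():
                if "-" in token:
                    start, end = (int(p) for p in token.split("-"))
                    ports.update(range(start, end + 1))
                else:
                    ports.add(int(token))
            return sorted(ports)

        print(expand_extra_ports("8000-8080, 9002, 80, 8050, 9000"))
        # [80, 8000, 8001, ..., 8080, 9000, 9002]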

    Server Initiating Protocols
    By default, the LoadMaster will not initiate a connection with a Real Server until it has received some data from a client. This prohibits certain protocols from working as they need to communicate with the Real Server before transmitting data.

    If the Virtual Service uses one of these protocols then select the protocol from the drop-down list to enable it to work correctly.

    The protocols that can be selected are:
    • SMTP
    • SSH
    • IMAP4
    • MySQL
    • POP3
    • Other Server Initiating Protocols
    The Server Initiating Protocols option is not visible when the port specified in the Virtual Service is 80, 8080 or 443.

    Persistence Options
    Persistence is set up on a per-Virtual Service basis. This section allows you to select whether persistence is enabled for this service, to set the type of persistence and the persistence timeout value.

    If persistence is enabled, a client connection to a particular Real Server via the LoadMaster is persistent; in other words, the same client will subsequently connect to the same Real Server. The timeout value determines how long this particular connection is remembered.

    The drop-down list gives you the option to select the type of persistence. These are:
    • Source IP Address:
    The source IP address (of the requesting client) is used as the key for persistency in this case.
    • Super HTTP:
    Super HTTP is the recommended method for achieving persistence for HTTP and HTTPS services with the LoadMaster. It functions by creating a unique fingerprint of the client browser and using that fingerprint to preserve connectivity to the correct Real Server. The fingerprint is based on the combined values of the User-Agent field and, if present, the Authorization header. Connections with the same header combination will be sent back to the same Real Server (a sketch of this idea follows this list).
    • Server Cookie:
    The LoadMaster checks the value of a specially set cookie in the HTTP header. Connections with the same cookie will go to the same Real Server.
    • Server Cookie or Source IP:
    If cookie persistence fails, it reverts to source-based persistence.
    • Active Cookie:
    The LoadMaster automatically sets the special cookie.
    • Active Cookie or Source IP:
    If active cookie persistence fails, it reverts to source-based persistence.
    • Hash All Cookies:
    The Hash All Cookies method creates a hash of the values of all cookies in the HTTP stream. Cookies with the same value will be sent to the same server for each request. If the values change, then the connection will be treated as a new connection and the client will be allocated to a server according to the load balancing algorithm.
    • Hash All Cookies or Source IP:
    Hash All Cookies or Source IP is identical to Hash All Cookies, with the additional feature that it will fall back to Source IP persistence in the event no cookies are in the HTTP string.
    • Super HTTP and Source IP Address:
    This is the same as Super HTTP but it also appends the source IP address to the string, thus improving the distribution of the resulting hash.
    • URL Hash:
    With URL Hash persistence, the LoadMaster will send requests with the same URL to the same server.
    • HTTP Host Header:
    With HTTP Host Header persistence, the LoadMaster will send all requests that contain the same value in the HTTP Host: header to the same server.
    • Hash of HTTP Query Item:
    This method operates in exactly the same manner as Server Persistence, except that the named item being inspected is a Query Item in the Query String of the URL. All queries with the same Query Item value will be sent to the same server.
    • Selected Header:
    With Selected Header persistence, the LoadMaster will send all requests that contain the same value in the specified header to the same server.
    • SSL Session ID:
    Each session over SSL has its own session ID which can be persisted on.
    For this option to appear as a persistence method, the Virtual Service needs to have a Service Type of Generic and SSL acceleration must be disabled.

    If a Virtual Service is an SSL service and not offloaded, the LoadMaster cannot meaningfully interact with any of the data in the stream at Layer 7. The reason is, the data is encrypted and the LoadMaster has no way of decrypting it.

    If, in the above scenario, a persistence mode that is not based on source IP is required, this is the only other option. When an SSL session is started, it generates a session ID for the connection. This session ID can be used to cause the client to persist to the correct server.

    There are some downsides, however: most modern browsers regenerate the session ID at very short intervals, effectively overwriting it, even if a longer interval is set on the persist timeout.
    • UDP Session Initiation Protocol (SIP):
    This persistence mode is only available in a UDP Virtual Service when Force L7 is enabled. SIP uses request and response transactions, similar to HTTP. An initial INVITE request is sent, which contains a number of header fields. These header fields can be used for persistence.
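
    As a rough illustration of the fingerprint-style persistence methods above (Super HTTP, and Super HTTP and Source IP Address), the following sketch hashes the User-Agent and Authorization headers, optionally combined with the source IP, and uses the result to pick a Real Server. It is an assumption-level sketch of the concept only; the server addresses are hypothetical and this is not the LoadMaster's actual implementation.

        # Illustrative sketch of fingerprint-based persistence (Super HTTP style).
        # Not LoadMaster code; it only shows how a stable header fingerprint can map
        # a client to the same Real Server on every request.
        import hashlib

        REAL_SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical Real Servers

        def pick_server(user_agent: str, authorization: str = "", source_ip: str = "") -> str:
            # An empty source_ip corresponds to plain Super HTTP; a non-empty one to
            # Super HTTP and Source IP Address.
            fingerprint = user_agent + authorization + source_ip
            digest = hashlib.sha256(fingerprint.encode()).digest()
            return REAL_SERVERS[int.from_bytes(digest[:4], "big") % len(REAL_SERVERS)]

        # The same header combination always lands on the same Real Server.
        print(pick_server("Mozilla/5.0", "Basic dXNlcjpwYXNz"))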

    Timeout
    When any persistence mode is selected, a Timeout drop-down list appears. This allows you to set the length of time after the last connection that the LoadMaster will remember the persistence information.

    Header field name
    When UDP Session Initiation Protocol (SIP) is selected as the persistence mode in the LoadMaster, a text box called Header field name appears. The header field that is to be used as the basis for the persistence information should be entered here.

    Scheduling Methods
    This section allows you to select the method by which the LoadMaster will select a Real Server, for this particular service. The scheduling methods are as follows:
    • Round Robin:
    Round Robin causes the LoadMaster to assign Real Servers to a session in order, i.e. the first session connects to Real Server 1, the second to Real Server 2 etc. There is no bias in the way the Real Servers are assigned.
    • Weighted Round Robin:
    This method uses the weight property of the Real Servers to determine which Real Servers get preference. The higher the weight a Real Server has, the higher the proportion of connections it will receive.
    • Least Connection:
    With this method, the current Real Server with the fewest open connections is assigned to the session.
    • Weighted Least Connection:
    As with Least Connection, but with a bias relative to the weight.
    • Resource Based (Adaptive):
    Adaptive scheduling means that the load on the Real Servers is periodically monitored and that packets are distributed such that load will be approximately equal for all machines. More details can be found in the section covering scheduling methods.
    • Resource Based (SDN Adaptive):
    A Virtual Service which is using an adaptive scheduling method (whether using SDN or not) can be viewed as a control system. The intent is to achieve an evenly distributed load over the Real Servers. The controller calculates an error value (which describes the deviation from the desired even distribution) and a set of control values (Real Server weights) that are fed back into the system in a way that decreases the error value.
    • Fixed Weighting:
    All traffic goes to the highest-weight Real Server that is available. Real Servers should be weighted at the time they are created, and no two Real Servers should have the same weight; otherwise unpredictable results may occur.
    • Weighted Response Time:
    Every 15 seconds the LoadMaster measures the time it takes for a response to arrive for a health check probe and uses this time to adjust the weights of the Real Servers accordingly, i.e. a faster response time relative to the other Real Servers leads to a higher weight which in turn leads to more traffic sent to that server.
    • Source IP Hash:
    Instead of using the weights or doing round robin, a hash of the source IP is generated and used to find the correct Real Server. This means that the same host is always directed to the same Real Server, so no source IP persistence is needed (see the sketch after the note below).

    Because this method relies solely on the client (source) IP address and ignores current server load, using this method can lead to a particular Real Server becoming overloaded, or a general traffic imbalance across all Real Servers.
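
    To make the weighting and hashing ideas above concrete, the sketch below shows a simple weighted round robin selector and a source IP hash selector over three hypothetical Real Servers. It illustrates the two concepts only; it is not the LoadMaster's scheduler.

        # Illustrative sketch of two scheduling methods. Not LoadMaster code.
        import hashlib
        import itertools

        # Hypothetical Real Servers with weights (higher weight = more connections).
        SERVERS = {"10.0.0.11": 5, "10.0.0.12": 3, "10.0.0.13": 1}

        # Weighted Round Robin: repeat each server according to its weight and cycle.
        _wrr_cycle = itertools.cycle(
            [ip for ip, weight in SERVERS.items() for _ in range(weight)]
        )

        def weighted_round_robin() -> str:
            return next(_wrr_cycle)

        # Source IP Hash: the same client IP always maps to the same server,
        # regardless of weights or current load.
        def source_ip_hash(client_ip: str) -> str:
            servers = sorted(SERVERS)
            digest = hashlib.sha256(client_ip.encode()).digest()
            return servers[int.from_bytes(digest[:4], "big") % len(servers)]

        print([weighted_round_robin() for _ in range(9)])   # 5x .11, 3x .12, 1x .13 per cycle
        print(source_ip_hash("198.51.100.7"))               # always the same server for this IP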

    Idle Connection Timeout (Default 660)
    The number of seconds before an idle connection is closed. There are some special values that can be set for this field:
    • Setting it to 0 will ensure that the default L7 connection timeout will be used. The default Connection Timeout value can be modified by going to System Configuration > Miscellaneous Options > Network Options.
    • Setting it to 1 will discard the connection after the packet is first forwarded – a response is not expected or handled
    • Setting it to 2 will use a DNS type of operation. The connection is dropped after the reply message.
    Setting the Idle Connection Timeout to the special values of 1 or 2 allows better performance and memory usage for UDP connections, and these values correspond better to how UDP is used.

    Quality of Service
    The Quality of Service drop-down sets a Differentiated Services Code Point (DSCP) in the IP header of packets that leave the Virtual Service. This means that the next device or service that deals with the packets will know how to treat and prioritise this traffic. Higher priority packets are sent from the LoadMaster before lower priority packets.

    The different options are described below:
    • Normal-Service: No special priority given to the traffic
    • Minimize-Cost: Used when data needs to be transferred over a link that has a lower “cost”
    • Maximize-Reliability: Used when data needs to travel to the destination over a reliable link and with little or no retransmission
    • Maximize-Throughput: Used when the volume of data transferred during an interval is important, even if the latency over the link is high
    • Minimize-Delay: Used when the time required (latency) for the packet to reach the destination must be low. This option has the quickest queue of each of the Quality of Service choices.
    The Quality of Service feature only works with Layer 7 traffic. It does not work with Layer 4 traffic.
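
    For reference, the sketch below shows what a DSCP marking looks like at the operating system level on an ordinary client socket: the DSCP value is written into the upper six bits of the IP TOS byte. This is a generic example for platforms that expose IP_TOS, not a LoadMaster setting, and the value 46 (Expedited Forwarding) is only an example.

        # Generic OS-level sketch (not LoadMaster configuration): marking outgoing
        # packets with a DSCP value by setting the IP TOS byte on a socket.
        import socket

        DSCP_EF = 46   # example value: Expedited Forwarding

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # The DSCP occupies the upper six bits of the TOS byte, hence the shift by 2.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
        sock.connect(("example.com", 80))
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(sock.recv(64))
        sock.close()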



    Use Address for Server NAT
    By default, when the LoadMaster is being used to SNAT Real Servers, the source IP address used on the internet is that of the LoadMaster. The Use Address for Server NAT option allows the Real Servers configured on the Virtual Service to use the Virtual Service as the source IP address instead.

    This option is most useful for services such as SMTP when the LoadMaster is in a public domain and when the service requires a reverse DNS check to see if the source address sent from the LoadMaster is the same as the Mail Exchanger (MX) record of the sender.

    If the Real Servers are configured on more than one Virtual Service which has this option set, only connections to destination port 80 will use this Virtual Service as the source IP address.
    The Use Address for Server NAT option only works on Virtual Services which are operating on the default gateway. This option is not supported on non-default gateway interfaces.

     

    SSL Properties


    Figure 3‑6: SSL Properties section

    SSL Acceleration
    This checkbox appears when the criteria for SSL Acceleration have been met, and serves to activate SSL Acceleration.

    Enabled: If the Enabled check box is selected, and there is no certificate for the Virtual Service, you will be prompted to install a certificate. A certificate can be added by clicking the Manage Certificates button and importing or adding a certificate.

    Reencrypt: Selecting the Reencrypt checkbox re-encrypts the SSL data stream before sending it to the Real Server.

    Reversed: Selecting this checkbox will mean that the data from the LoadMaster to the Real Server is re-encrypted. The input stream must not be encrypted. This is only useful in connection with a separate Virtual Service which decrypts SSL traffic then uses this Virtual Service as a Real Service and loops data back to it. In this way, the client to real server data path is always encrypted on the wire.

    Supported Protocols
    The checkboxes in the Supported Protocols section allow you to specify which protocols should be supported by the Virtual Service. By default, the three TLS protocols are enabled and SSLv3 is disabled.

    Require SNI hostname
    If Require SNI hostname is selected, the hostname (Server Name Indication, SNI) will always be required to be sent in the TLS client hello message.

    When Require SNI hostname is disabled, the first certificate will be used if a host header match is not found.

    When Require SNI hostname is enabled, a certificate with a matching common name must be found, otherwise an SSL error is yielded. Wildcard certificates are also supported with SNI.
    When using a Subject Alternative Name (SAN) certificate, alternate source names are not matched against the host header.

    Wildcard certificates are supported, but please note that the root domain name will not be matched, as per RFC 2459. Only names to the left of the dot are matched. Additional certificates must be added to match the root domain names. For example, www.kemptechnologies.com will be matched by a wildcard of *.kemptechnologies.com, but kemptechnologies.com will not be matched.

    To send SNI host information in HTTPS health checks, please enable Use HTTP/1.1 in the Real Servers section of the relevant Virtual Service(s) and specify a host header. If this is not set, the IP address of the Real Server will be used.
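
    For reference, the sketch below shows how a TLS client places the SNI hostname into its client hello using Python's standard ssl module; the hostname is the example used above. When Require SNI hostname is enabled, it is this value that must match a certificate installed on the Virtual Service.

        # Client-side sketch: the server_hostname argument is what places the SNI
        # hostname into the TLS client hello.
        import socket
        import ssl

        context = ssl.create_default_context()
        with socket.create_connection(("www.kemptechnologies.com", 443)) as raw:
            with context.wrap_socket(raw, server_hostname="www.kemptechnologies.com") as tls:
                print(tls.version())                 # negotiated protocol, for example TLSv1.3
                print(tls.getpeercert()["subject"])  # certificate selected using the SNI hostname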


    Certificates
    Available certificates will be listed in the Available Certificates select list on the left. To assign or unassign a certificate, select it and click the right or left arrow button. Then click Set Certificates. Multiple certificates can be selected by holding Ctrl on your keyboard and clicking each required certificate.

    Reencryption Client Certificate
    With SSL connections, the LoadMaster gets a certificate from the client and also gets a certificate from the server. The LoadMaster transcribes the client certificate into a header and sends the data to the server. The server still expects a certificate, which is why it is preferable to install a pre-authenticated certificate on the LoadMaster.

    Reencryption SNI Hostname
    Specify the Server Name Indication (SNI) hostname that should be used when connecting to the Real Servers.

    This field is only visible when SSL re-encryption is enabled.

    Cipher Set
    A cipher is an algorithm for performing encryption or decryption.
    Each Virtual Service (which has SSL Acceleration enabled) has a cipher set assigned to it. This can either be one of the system-defined cipher sets or a user-customized cipher set. The system-defined cipher sets can be selected to quickly and easily select and apply the relevant ciphers.

    The system-defined cipher sets are as follows:
    • Default: The current default set of ciphers in the LoadMaster.
    • Default_NoRc4: The Default_NoRc4 cipher set contains the same ciphers as the default cipher set, except without the RC4 ciphers (which are considered to be insecure).
    • BestPractices: This is the recommended cipher set to use. This cipher set is for services that do not need backward compatibility - the ciphers provide a higher level of security. The configuration is compatible with Firefox 27, Chrome 22, IE 11, Opera 14 and Safari 7.
    • Intermediate_compatibility: For services that do not need compatibility with legacy clients (mostly Windows XP), but still need to support a wide range of clients, this configuration is recommended. It is compatible with Firefox 1, Chrome 1, IE 7, Opera 5 and Safari 1.
    • Backward_compatibility: This is the old cipher suite that works with clients back to Windows XP/IE6. This should be used as a last resort only.
    • FIPS: Ciphers which conform to FIPS (Federal Information Processing Standards).
    • Legacy: This is the set of ciphers that were available on the old LoadMaster firmware (v7.0-10) before OpenSSL was updated.
    Refer to the SSL Accelerated Services, Feature Description for a full list of the ciphers supported by the LoadMaster, and a breakdown of what ciphers are in each of the system-defined cipher sets.

    KEMP Technologies can change the contents of these cipher sets as required based on the best available information.
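
    As a client-side illustration of how a cipher set is simply a named collection of ciphers, the sketch below uses Python's ssl module to expand an OpenSSL-style cipher string (here one that excludes RC4, in the spirit of Default_NoRc4) into the concrete ciphers it selects. It shows the concept only and is unrelated to the LoadMaster's internal cipher sets.

        # Illustration only: expand an OpenSSL-style cipher string into the concrete
        # ciphers it selects, conceptually similar to a "no RC4" cipher set.
        import ssl

        context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        context.set_ciphers("HIGH:!RC4:!aNULL")   # exclude RC4 and anonymous ciphers
        for cipher in context.get_ciphers():
            print(cipher["name"], cipher["protocol"])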

    The list of ciphers which are assigned to a Virtual Service can be edited by clicking the Modify Cipher Set button. If changes are made to a preconfigured cipher set, a new custom cipher set will be created. Custom cipher sets can be named and can be used across different Virtual Services.

    By default, the name for the custom cipher set will be Custom_. KEMP recommends changing the name of custom cipher sets because if another system-defined cipher set is modified, the name will again default to Custom_ and will overwrite any existing cipher sets with that name.

    It is not possible to modify the list of ciphers in a system-defined cipher set. Instead, a new custom cipher set will be created when changes are made to the ciphers list.

    It is not possible to delete a custom cipher set in the LoadMaster WUI. However, it is possible to delete a cipher set using the RESTful API.

    Ciphers
    When a cipher set is selected and applied, the Ciphers list is read only. To modify the ciphers that are assigned to a Virtual Service, either change the assigned Cipher Set or click Modify Cipher Set.

    When modifying a cipher set, available ciphers are listed on the left. Ciphers can be assigned or unassigned by selecting them and clicking the right or left arrow buttons. Then, specify a name for the custom cipher set and click Save Cipher Set. Multiple ciphers can be selected by holding the Ctrl key on your keyboard and selecting the required ciphers.


    Client Certificates
    • No Client Certificates required: enables the LoadMaster to accept HTTPS requests from any client. This is the recommended option.
    By default the LoadMaster will accept HTTPS requests from any client. Selecting any of the other values below will require all clients to present a valid client certificate. In addition, the LoadMaster can also pass information about the certificate to the application.

    This option should not be changed from the default of No Client Certificates required. Only change from the default option if you are sure that all clients that access this service have valid client certificates.
    • Client Certificates required: requires that all clients forwarding an HTTPS request must present a valid client certificate.
    • Client Certificates and add Headers: requires that all clients forwarding an HTTPS request must present a valid client certificate. The LoadMaster also passes information about the certificate to the application by adding headers.
    • The below options send the certificate in its original raw form. The different options let you specify the format that you want to send the certificate in:
    • Client Certificates and pass DER through as SSL-CLIENT-CERT
    • Client Certificates and pass DER through as X-CLIENT-CERT
    • Client Certificates and pass PEM through as SSL-CLIENT-CERT
    • Client Certificates and pass PEM through as X-CLIENT-CERT

    Verify Client using OCSP
    Verify (via Online Certificate Status Protocol (OCSP)) that the client certificate is valid.
    This option is only visible when ESP is enabled.

     

    Advanced Properties


    Figure 3‑7: Advanced Properties section

    Content Switching
    Clicking the Enable button enables rule-based Content Switching on this Virtual Service. Once enabled, rules must be assigned to the various Real Servers. Rules can be attached to a Real Server by clicking the None button located next to the Real Server. Once rules are attached to a Real Server, the button will display the count of rules attached.

    Rules Precedence
    Clicking the Rules Precedence button displays the order in which Content Switching rules are applied. This option only appears when Content Switching is enabled and rules are assigned to the Real Server(s).

    Figure 3‑8: Request Rules

    This screen shows the Content Switching rules that are assigned to the Real Servers of the Virtual Services and the order in which they apply. A rule may be promoted in the order of precedence by clicking its corresponding Promote button.

    HTTP Selection Rules
    Shows the selection rules that are associated with the Virtual Service.


    HTTP Header Modifications
    Clicking the Show Header Rules button displays the order in which Header Modification rules are implemented. The number of rules (of both request and response type) is displayed on the actual button.

    Figure 3‑9: Modification Rules

    From within the screen you can Add and Delete Header Modification rules. The order in which the rules are applied can be changed by clicking the Promote buttons.

    Enable Caching
    This option enables caching of static content. This saves valuable Real Server processing power and bandwidth. Caching can be enabled per HTTP and offloaded HTTPS Virtual Services.
    Types of file that can be cached may be defined in the AFE configuration under the System Configuration > Miscellaneous Options menu.

    Maximum Cache Usage
    This option limits the size of the cache memory per Virtual Service. For example, two Virtual Services, each running with a limit of 50%, will use 100% of the cache store. The default is No Limit. It is recommended to limit the cache size to prevent unequal use of the cache store. Ensure that the maximum cache usage is adjusted so that each Virtual Service has a percentage of the cache to use. If there is no remaining space to be allocated for a cache-enabled Virtual Service, that service will not cache content.

    Enable Compression
    Files sent from LoadMaster are compressed with Gzip.
    If compression is enabled without caching, LoadMaster performance may suffer.
    The types of file that can be compressed may be defined in the AFE configuration in the System Configuration > Miscellaneous Options section of the LoadMaster WUI.
    Compression is not recommended for files 100MB or greater in size.

    Detect Malicious Requests
    The Intrusion Prevention System (IPS) service will provide in-line protection of Real Server(s) by providing real-time mitigation of attacks and isolation of Real Server(s). Intrusion prevention is based on the industry standard SNORT database and provides real-time intrusion alerting.
    To get updated or customized rules, please refer to the SNORT website: https://www.snort.org/.
    Selecting the Detect Malicious Requests check box enables the IPS per HTTP and offloaded HTTPS Virtual Services. There are two options for handling requests that match a SNORT rule: Drop Connection, where a rule match will generate no HTTP response, or Send Reject, where a rule match will generate a response to the client of HTTP 400 “Invalid Request”. Both options prevent the request from reaching the Real Server(s).

    Enable Multiple Connect
    Enabling this option permits the LoadMaster to manage connection handling between the LoadMaster and the Real Servers. Requests from multiple clients will be sent over the same TCP connection.
    Multiplexing only works for simple HTTP GET operations. The Enable Multiple Connect check box will not be available in certain situations, for example if WAF, ESP or SSL Acceleration is enabled.

    Port Following
    Port following enables a switch from an HTTP connection to an HTTPS (SSL) connection to be persistent on the same Real Server. Port following is possible between UDP and TCP connections.
    To switch on port following, the following must be true:
    • The Virtual Service where port following is being switched on must be an HTTPS service
    • There must be an HTTP service
    • Both of these Virtual Services must have the same Layer 7 persistence mode selected, i.e. Super HTTP or Source IP Address persistence
    Port following is not available on SubVSs.

    Add Header to Request
    Input the key and the value for the extra header that is to be inserted into every request sent to the Real Servers.

    Click the Set Header button to implement the functionality.

    Add HTTP Headers
    The Add HTTP Headers drop-down list is only available when SSL offloading (SSL Acceleration) is enabled.

    This option allows you to select which headers are to be added to the HTTP stream. The options available include:
    • Legacy Operation(XXX)
    • None
    • X-Forwarded-For
    • X-Forwarded-For (No Via)
    • X-ClientSide
    • X-ClientSide (No Via)
    • Via Only
    In the Legacy operation, if the system is in HTTP kernel mode, a header is added; otherwise nothing is done. For the other operation methods, the system is forced into HTTP kernel mode and the specified operation is performed.
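
    When X-Forwarded-For is added, the Real Server application can recover the original client address from that header. The sketch below is a generic back-end example (not KEMP code) that takes the left-most entry as the client and treats later entries as intermediate proxies; the listening port is arbitrary.

        # Generic back-end sketch: recover the client IP from an X-Forwarded-For
        # header added by a load balancer. Not LoadMaster code.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                xff = self.headers.get("X-Forwarded-For", "")
                # The left-most entry is the original client; later entries are proxies.
                client_ip = xff.split(",")[0].strip() if xff else self.client_address[0]
                self.send_response(200)
                self.end_headers()
                self.wfile.write(f"client ip: {client_ip}\n".encode())

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()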

    Sorry Server
    Enter the IP Address and Port number in the applicable fields. If no Real Servers are available, the LoadMaster will redirect to a specified location, with no checking. The IP address of a Sorry Server must be on a network or subnet that is defined on the LoadMaster.

    When using a Layer 7 Virtual Service with transparency enabled, the Sorry Server should be on the same subnet as the Real Server.

    Not Available Redirection Handling
    When no Real Servers are available to handle the request you can define the error code and URL that the client should receive.
    • Error Code: If no Real Servers are available, the LoadMaster can terminate the connection with an HTTP error code. Select the appropriate error code.
    • Redirect URL: When there are no Real Servers available and an error response is to be sent back to the client, a redirect URL can also be specified. If the string entered in this text box does not include http:// or https://, the string is treated as being relative to the current location, so the hostname will be added to the string in the redirect. This field also supports the use of wildcards such as %h and %s, which represent the requested hostname and Uniform Resource Identifier (URI) respectively (a small sketch follows this list).
    • Error Message: When no Real Servers are available and an error response is to be sent back to the client, the specified error message will be added to the response.
    For security reasons, the returned HTML page only returns the text Document has moved. No request-supplied information is returned.
    • Error File: When no Real Servers are available and an error response is to be sent back to the client, the specified file will be added to the response. This enables simple error HTML pages to be sent in response to the specified error.
    The maximum size of this error page is 16KB.
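
    As an assumption-level illustration of the %h and %s wildcards mentioned in the Redirect URL option above, the sketch below substitutes the requested hostname and URI into a redirect template. The template and request values are examples only.

        # Illustration of the %h (requested hostname) and %s (requested URI) wildcards
        # in a Redirect URL template. Example values only, not LoadMaster code.
        def build_redirect(template: str, host: str, uri: str) -> str:
            return template.replace("%h", host).replace("%s", uri)

        print(build_redirect("https://%h/maintenance?from=%s", "www.example.com", "/owa/"))
        # -> https://www.example.com/maintenance?from=/owa/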
    Not Available Server/Port

    Figure 3‑10: Not Available Server

    In a UDP Virtual Service there is an option to specify a Not Available Server and Port. When there are no Real Servers available to handle the request this option defines the URL that the client will receive.
    The value of the Not Available Server can only be changed for UDP if the service is not currently using the Not Available Server.

    Add a Port 80 Redirector VS
    If no port 80 Virtual Service is configured, one can be created. It will then redirect the client to the URL specified in the Redirection URL: field.

    Click the Add HTTP Redirector button to implement the redirector.
    When the Add HTTP Redirector button is clicked, a redirect Virtual Service is created and this WUI option disappears from the relevant Virtual Service.

    Default Gateway
    Specify the Virtual Service-specific gateway to be used to send responses back to the clients. If this is not set, the global default gateway will be used.

    Click the Set Default Gateway button to implement the default gateway.
    If the global Use Default Route Only option is set in System Configuration > Miscellaneous Options > Network Options, traffic from Virtual Services that have the Default Gateway set will be only routed to the interface where the Virtual Service’s default route is located. This can allow the LoadMaster to be directly connected to client networks without returning traffic directly using the adjacent interface.

    Alternate Source Addresses
    If no list is specified, the LoadMaster will use the IP address of the Virtual Service as its local address. Specifying a list of addresses ensures the LoadMaster will use these addresses instead.

    Click the Set Alternate Source Addresses button to implement the Alternate Source Addresses.
    This option is only available if the Allow connection scaling over 64K Connections option is enabled in the L7 Configuration screen.

    Service Specific Access Control
    Allows you to change the Virtual Service-specific Access Control lists.
    If you implement the Access Control Lists option, the Extra Ports option will not work correctly.

     

    Web Application Firewall (WAF) Options


    Figure 3‑11: AFP Options

    The Web Application Firewall (WAF) feature must be enabled before you can configure these options.

    Figure 3‑12: Enable AFP

    To enable WAF, select the Enabled check box. A message will be displayed next to the Enabled check box displaying how many WAF-enabled Virtual Services exist and it will also display the maximum number of WAF-enabled Virtual Services that can exist. If the maximum number of WAF-enabled Virtual Services has been reached, the Enabled check box will be greyed out.

    Utilizing WAF can have a significant performance impact on your LoadMaster deployment. Please ensure that the appropriate resources are allocated.

    For virtual and bare metal LoadMaster instances, a minimum of 2GB of allocated RAM is required for operation of AFP. The default memory allocation for Virtual LoadMasters and LoadMaster Bare Metal instances prior to LoadMaster Operating System version 7.1-22 is 1GB of RAM. If this default allocation has not been changed please modify the memory settings before attempting to proceed with AFP configuration.

    Default Operation
    Select the default operation of the WAF:
    • Audit Only: This is an audit-only mode – logs will be created but requests and responses are not blocked.
    • Block Mode: Either requests or responses are blocked.
    Audit mode
    Select what logs to record:
    • No Audit: No data is logged.
    • Audit Relevant: Logs data which is of a warning level and higher. This is the default option for this setting.
    • Audit All: Logs all data through the Virtual Service.
    Selecting the Audit All option produces a large amount of log data. KEMP does not recommend selecting the Audit All option for normal operation. However, the Audit All option can be useful when troubleshooting a specific problem.

    Inspect HTML POST Request Content
    Enable this option to also process the data supplied in POST requests.

    Two additional options (Disable JSON Parser and Disable XML Parser) only become available if Inspect HTML Post Request Content is enabled.

    Disable JSON Parser
    Disable processing of JavaScript Object Notation (JSON) requests.

    Disable XML Parser
    Disable processing of XML requests.

    Process Responses
    Enable this option to verify responses sent from the Real Servers.
    This can be CPU and memory intensive.

    If a Real Server is gzip encoding, WAF will not check that traffic, even if Process Responses is enabled.
    Hourly Alert Notification Threshold

    This is the threshold of incidents per hour before sending an alert. Setting this to 0 disables alerting.

    Rules
    This is where you can assign/un-assign generic, custom, application-specific and application-generic rules to/from the Virtual Service.

    You cannot assign application-specific and application-generic rules to the same Virtual Service.

     

    Edge Security Pack (ESP) Options

    The ESP feature must be enabled before you can configure these options. To enable the ESP function, please select the Enable ESP check box.

    Figure 3‑13: ESP Options section

    The full ESP Options screen will appear.
    The ESP feature can only be enabled if the Virtual Service is an HTTP, HTTPS or SMTP Virtual Service.

    Figure 3‑14: ESP Options

    Enable ESP
    Enable or disable the ESP feature set by selecting or removing the checkmark from the Enable ESP checkbox.

    ESP Logging
    There are three types of logs stored in relation to the ESP feature. Each of these logs can be enabled or disabled by selecting or deselecting the relevant checkbox. The types of log include:
    • User Access: logs recording all user logins
    • Security: logs recording all security alerts
    • Connection: logs recording each connection
    Logs are persistent and can be accessed after a reboot of the LoadMaster.

    Client Authentication Mode
    Specifies how clients attempting to connect to the LoadMaster are authenticated. The following types of methods are available:
    • Delegate to Server: the authentication is delegated to the server
    • Basic Authentication: standard Basic Authentication is used
    • Form Based: clients must enter their user details within a form to be authenticated on the LoadMaster
    • Client Certificate: clients must present the certificate which is verified against the issuing authority
    • NTLM: NTLM credentials are based on data obtained during the interactive logon process and consist of a domain name and a user name
    The remaining fields in the ESP Options section will change based on the Client Authentication Mode selected.

    SSO Domain
    Select the Single Sign-On (SSO) Domain within which the Virtual Service will be included.

    An SSO Domain must be configured in order to correctly configure the ESP feature.
    Only SSO domains with the Configuration type of Inbound Configuration will be shown as options in this SSO Domain field.

    Alternative SSO Domains
    Many organizations use extranets to share information with customers and partners. It is likely that extranet portals will have users from two or more Active Directory domains. Rather than authenticating users from individual domains one at a time, assigning Alternative SSO Domains gives the ability to simultaneously authenticate users from two or more domains using one Virtual Service.

    This option appears only when more than one domain has been configured.

    Currently this option is available for domains which are configured with the following Authentication Protocols:
    • LDAP
    • RSA-SecurID
    • Certificates

    Figure 3‑15: Enabled and Reencrypt tick boxes selected

    Before configuring the ESP Options to use Alternative SSO Domains ensure that, in the SSL Properties section, the Enabled and Reencrypt tick boxes are selected.

    Figure 3‑16: Available Domains

    The domain name which appears in the SSO Domain drop-down list is the default domain. This is also the domain which will be used if only one is configured.

    Previously configured alternative domains appear in the Available Domain(s) list.

    Figure 3‑17: Alternative Domains (SECOND and THIRD) Assigned to the Virtual Service.

    To assign alternative SSO Domains:
    1. Highlight each of the domains you wish to assign and click the > button.
    An assigned domain is a domain which can be authenticated using a particular Virtual Service.
    All domains which appear as available may be assigned to a Virtual Service.
    2. Click the Set Alternative SSO Domains button to confirm the updated list of Assigned Domain(s).
    3. Choose Basic Authentication from the Server Authentication Mode drop-down list.
    When logging in to a domain using the ESP form, users should enter the name of the SSO Domain if an alternative domain needs to be accessed. If no domain name is entered in the username, users are, by default, logged in to the domain selected in the default SSO Domain drop-down list.

    To view the status of the Virtual Services, click Virtual Services and View/Modify Services in the main menu.

    A list of the Virtual Services displays showing the current status of each service.
    If alternative domains are assigned and there is an issue with a particular domain, the affected domain name is indicated in the Status column.

    Allowed Virtual Hosts
    The Virtual Service will only be allowed access to specified virtual hosts. Any virtual hosts that are not specified will be blocked.

    Enter the virtual host name(s) in the Allowed Virtual Hosts field and click the Set Allowed Virtual Hosts button to specify the allowed virtual hosts.

    Multiple domains may be specified within the field allowing many domains to be associated with the Single Sign On Domain.

    The use of regular expressions is allowed within this field.
    If this field is left blank, the Virtual Service will be blocked.

    Allowed Virtual Directories
    The Virtual Service will only be allowed access to the specified virtual directories, within the allowed virtual hosts. Any virtual directories that are not specified will be blocked.

    Enter the virtual directory name(s) in the Allowed Virtual Directories field and click the Set Allowed Virtual Directories button to specify the allowed virtual directories.
    The use of regular expressions is allowed within this field.

    Pre-Authorization Excluded Directories
    Any virtual directories specified within this field will not be pre-authorized on this Virtual Service and will be passed directly to the relevant Real Servers.

    Permitted Groups
    Specify the groups that are allowed to access this Virtual Service. When set, if a user logs in to a service published by this Virtual Service, the user must be a member of at least one of the groups specified. Up to 10 groups are supported per Virtual Service. Performance may be impacted if a large number of groups are entered. Groups entered in this field are validated via an LDAP query.
    Some guidelines about this field are as follows:
    • The group(s) specified must be valid groups on the Active Directory in the SSO domain associated with the Virtual Service. The SSO domain in the LoadMaster must be set to the directory for the groups. For example, if the SSO domain in the LoadMaster is set to webmail.example and webmail is not the directory for the groups, it will not work. Instead, the SSO domain may need to be set to .example.com.
    • The group(s) listed must be separated by a semi-colon
    A space-separated list does not work because most groups contain a space in the name, for example Domain Users.
    • The following characters are not allowed in permitted group names: / : + *
    • The authentication protocol of the SSO domain must be LDAP
    • The groups should be specified by name, not by full distinguished name
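
    To illustrate the kind of LDAP lookup implied by the validation described above, the sketch below (using the third-party ldap3 package, with placeholder server, credentials, base DN and group names) checks whether a user belongs to one of a set of permitted groups by name. It is a conceptual example only, not the query the LoadMaster performs.

        # Conceptual sketch of a permitted-groups check via LDAP (ldap3 package).
        # Server, credentials, base DN and group names below are placeholders.
        from ldap3 import Server, Connection, SUBTREE

        PERMITTED_GROUPS = {"Domain Users", "Mail Admins"}   # semi-colon separated in the WUI

        def user_is_permitted(username: str) -> bool:
            server = Server("dc01.example.com")
            with Connection(server, user="EXAMPLE\\svc_ldap", password="secret",
                            auto_bind=True) as conn:
                conn.search("dc=example,dc=com",
                            f"(sAMAccountName={username})",
                            search_scope=SUBTREE,
                            attributes=["memberOf"])
                if not conn.entries:
                    return False
                # memberOf holds full DNs; compare only the group name (CN) part.
                groups = {dn.split(",")[0].split("=", 1)[1]
                          for dn in conn.entries[0]["memberOf"]}
                return bool(groups & PERMITTED_GROUPS)

        print(user_is_permitted("jbloggs"))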

    Include Nested Groups
    This field relates to the Permitted Groups setting. Enable this option to include nested groups in the authentication attempt. If this option is disabled, only users in the top-level group will be granted access. If this option is enabled, users in both the top-level and first sub-level group will be granted access.

    SSO Image Set
    This option is only available if Form Based is selected as the Client Authentication Mode. You can choose which form to use to gather the Username and Password. There are three form options, Exchange, Blank and Dual Factor Authentication. There are also options to display the form and error messages in other languages.
    • Exchange Form

    Figure 3‑18: Exchange form

    The Exchange Form contains the KEMP Logo
    • Blank Form

    Figure 3‑19: Blank form

    The Blank Form does not contain the large KEMP logo.
    • Dual Factor Authentication

    Figure 3‑20: Dual Factor Authentication form

    The Dual Factor Authentication form contains four fields - two for the remote credentials and two for the internal credentials.

    Remote Credentials are credentials that are used to authenticate against remote authentication servers such as RADIUS, before allowing the user to authenticate against Domain Servers such as Active Directory servers.

    Internal Credentials are credentials that are used to authenticate against the internal domain servers such as Active Directory Servers.

    If the Authentication Protocol of the relevant SSO Domain is set to RADIUS and LDAP, the SSO Image Set must be set to Dual Factor Authentication.

    SSO Greeting Message
    This option is only available if Form Based is selected as the Client Authentication Mode. The login forms can be further customized by adding text. Enter the text that you would like to appear on the form in the SSO Greeting Message field and click the Set SSO Greeting Message button. The message can have up to 255 characters.

    The SSO Greeting Message field accepts HTML code, so you can insert an image if required.
    The grave accent character ( ` ) is not supported. If this character is entered in the SSO Greeting Message, the character will not display in the output, for example a`b`c becomes abc.

    Logoff String
    This option is only available if Form Based is selected as the Client Authentication Mode. Normally this field should be left blank. For OWA Virtual Services, the Logoff String should be set to /owa/logoff.owa; in customized environments, the modified logoff string needs to be specified in this text box.

    If the URL to be matched contains sub-directories before the specified string, the logoff string will not be matched. Therefore the LoadMaster will not log the user off.

    Display Public/Private Option

    Figure 3‑21: Public/private option

    Enabling this check box will display a public/private option on the ESP log in page. Based on the option the user selected on the login form, the Session timeout value will be set to the value specified for either public or private in the Manage SSO Domain screen. If the user selects the private option their username will be stored for that session.

    Use Session or Permanent Cookies
    Three options are available to select for this field:
    • Session Cookies Only: This is the default and most secure option
    • Permanent Cookies only on Private Computers: Sends session cookies on public computers
    • Permanent Cookies Always: Sends permanent cookies in all situations
    Specify if the LoadMaster should send session or permanent cookies to the users’ browser when logging in.
    Permanent cookies should only be used when using single sign on with services that have sessions spanning multiple applications, such as SharePoint.


    Server Authentication Mode
    This field is only updatable when the Client Authentication Mode is set to Form Based.
    Specifies how the LoadMaster is authenticated by the Real Servers. There are three types of methods available:
    • None: no client authentication is required
    • Basic Authentication: standard Basic Authentication is used
    • KCD: KCD authentication is used
    If Delegate to Server is selected as the Client Authentication Mode, then None is automatically selected as the Server Authentication mode. Similarly, if either Basic Authentication or Form Based is selected as the Client Authentication Mode, then Basic Authentication is automatically selected as the Server Authentication mode.

    Server Side configuration
    This option is only visible when the Server Authentication mode value is set to KCD.
    Select the SSO domain for the server side configuration. Only SSO domains which have the Configuration type set to Outbound Configuration are shown here.

    SMTP Virtual Services and ESP

    If you create an SMTP Virtual Service (with 25 as the port), the ESP feature is available when you select the Enable ESP checkbox but with a reduced set of options.

    Figure 3‑22: ESP Options

    Enable ESP
    Enable or disable the ESP feature set by selecting or deselecting the Enable ESP checkbox.


    Connection Logging
    Logging of connections can be enabled or disabled by selecting or deselecting the Connection Logging checkbox.
    Permitted Domains
    All the permitted domains that are allowed to be received by this Virtual Service must be specified here. For example, if you wish the Virtual Service to receive SMTP traffic from john@kemp.com, then the kemp.com domain must be specified in this field.

     

    Sub Virtual Services

    From within a Virtual Service you can create one or more ‘Sub Virtual Services’ (SubVS). A SubVS is linked to, and uses the IP address of, the ‘parent’ Virtual Service. The SubVSs may have different settings (such as health check methods, content rules etc.) to the parent Virtual Service and to each other.

    This allows the grouping of related Virtual Services, all using the same IP address. This could be useful for certain configurations such as Exchange or Lync which typically are comprised of a number of Virtual Services.

    Users with the Virtual Services permission can add a SubVS.
    Users with the Real Server permission cannot add a SubVS.

    Figure 3‑23: Real Servers section

    To create a SubVS, within a Virtual Service configuration screen, expand the Real Servers section and click the Add SubVS button.

    Figure 3‑24: SubVS created

    A message appears stating that the SubVS has been created.
    You cannot have Real Servers and SubVSs associated with the same Virtual Service. You can however, associate a Real Server with a SubVS.

    Figure 3‑25: SubVS section

    When the SubVS is created, the Real Servers section of the Virtual Services configuration screen is replaced with a SubVSs section.

    All the SubVSs for the Virtual Service are listed here. The Critical check box can be enabled to indicate that the SubVS is required in order for the Virtual Service to be considered available. If a non-critical SubVS is down, the Virtual Service is reported as up and a warning will be logged.

    If a critical SubVS is down, a critical log will be generated and the Virtual Service will be marked as down. If the email options are configured, an email will be sent to the relevant recipients.

    In all cases, if the Virtual Service is considered to be down and the Virtual Service has a sorry server or an error message configured, these will be used.

    To modify the SubVS, click the relevant Modify button. A configuration screen for the SubVS appears. This contains a subset of the configuration options available for a normal Virtual Service.

    Figure 3‑26: Section of the SubVS modify screen

    The SubVSs can also be modified by clicking the relevant Modify button from within the main Virtual Services view. A Virtual Service with SubVSs is colored differently within the Virtual IP address section and the SubVSs are listed in the Real Server section. The SubVS details can be viewed by clicking the ‘parent’ Virtual Service to expand the view to include the SubVSs.

    If you would like to remove a Virtual Service which contains SubVSs, you must remove the SubVSs first before you are able to delete the main service.

    SubVSs may have different ESP configurations than their parent Virtual Service, however care must be taken to ensure that the parent Virtual Service and SubVS ESP options do not conflict.

     

    View/Modify (Remote Terminal Service)

    This section is not relevant to the LoadMaster Exchange product.
    Properties of the Virtual Service include the Generic Type and also provide Remote Terminal specific options.

    Persistence
    If the terminal servers support a Session Directory, the LoadMaster will use the "routing token" supplied by the Session Directory to determine the correct host to connect to. The LoadMaster persistency timeout value is irrelevant here - it is a feature of the Session Directory.

    The switch "IP address redirection" in the Session Directory configuration must not be selected in order for this to work.

    Using Session Directory with LoadMaster is optional, in terms of persistence. If the client pre-populates the username and password fields in the initial request, then this value is stored on the LoadMaster. As long as these fields are still populated upon reconnect, the LoadMaster will look up the name and reconnect to the same server as the original connection. The persistence timeout is used to limit the time the information is kept on the LoadMaster.

    If using the Terminal Service or Source IP persistence mode, and neither of the above two methods succeeds, the source IP address will be used for persistence.

    Service Check for the Virtual Service
    Only three options are available: ICMP, TCP and RDP. Remote Terminal Protocol (RDP) opens a TCP connection to the Real Server on the Service port (port 3389). The LoadMaster sends an a1110 Code (Connection Request) to the server. If the server sends an a1101 Code (Connection Confirm), the LoadMaster closes the connection and marks the server as active. If the server fails to respond within the configured response time for the configured number of times, or if it responds with a different status code, it is assumed to be dead.

     

    Real Servers

    This section allows you to create a Real Server and lists the Real Servers that are assigned to the Virtual Service. The properties of the Real Servers are summarized and there is also the opportunity to add or delete a Real Server, or modify the properties of a Real Server. When Content Switching is enabled, there is also the opportunity to add rules to, or remove rules from, the Real Server (see Add Rule).

    Real Server Check Parameters
    This provides a list of health checks for well-known services, as well as lower level checks for TCP/UDP or ICMP. With the service health checks, the Real Servers are checked for the availability of the selected service. With TCP/UDP the check is simply a connect attempt.

    Figure 3‑27: Real Servers

    Real Server Check Protocol
    The tables below describe the options that may be used to verify Real Server health. You may also specify a health check port on the Real Server. If none is specified here, it will default to the Real Server port.
    When the HTTP/HTTPS, Generic and STARTTLS protocol Service Types are selected, the following health check options are available.

    Method: Action
    ICMP Ping: An ICMP ping is sent to the Real Server
    HTTP: HTTP checking is enabled
    HTTPS: HTTPS (SSL) checking is enabled
    TCP: A basic TCP connection is checked
    Mail: The SMTP (Simple Mail Transfer Protocol) is used
    NNTP: The NNTP (Network News Transfer Protocol) is used
    FTP: The FTP (File Transfer Protocol) is used
    Telnet: The Telnet protocol is used
    POP3: The POP3 (Post Office Protocol – mail client protocol) is used
    IMAP: The IMAP (Internet Message Access Protocol – mail client protocol) is used
    Name Service (DNS) Protocol: The Name Service Protocol is used
    Binary Data: Specify a hexadecimal string to send and a hexadecimal string to check for in the response
    None: No checking is performed

    When the Remote Terminal Service Type is selected the following health check options are available.
    Method: Action
    ICMP Ping: An ICMP ping is sent to the Real Server
    TCP: A basic TCP connection is checked
    Remote Terminal Protocol: An RDP Routing Token is passed to the Real Server. This health check supports Network-Level Authentication.
    None: No checking is performed

    For a UDP virtual service, only the ICMP Ping and Name Service (DNS) Protocol options are available for use

    Enhanced Options
    Enabling the Enhanced Options check box provides an additional health check option – Minimum number of RS required for VS to be considered up. If the Enhanced Options check box is disabled (the default), the Virtual Service will be considered available if at least one Real Server is available. If the Enhanced Options check box is enabled, you can specify the minimum number of Real Servers that must be available in order to consider the Virtual Service to be available.

    Minimum number of RS required for VS to be considered up
    This option will only appear if the Enhanced Options check box is enabled and if there is more than one Real Server.

    Select the minimum number of Real Servers required to be available for the Virtual Service to be considered up.

    If fewer than the minimum number of Real Servers are available, a critical log is generated. If some Real Servers are down but the number available has not dropped below the specified minimum, a warning is logged. If the email options are configured, an email will be sent to the relevant recipients.

    Note that the system marks a Virtual Service as down whenever a Real Server that is marked as Critical becomes unavailable – even if Enhanced Options are enabled and there are more than the specified minimum number of Real Servers still available.

    In all cases, if the Virtual Service is considered to be down and the Virtual Service has a sorry server or an error message configured, these will be used.

    If the minimum number is set to the total number of Real Servers and one of the Real Servers is deleted, the minimum will automatically reduce by one.

    When using content rules in a SubVS, the minimum number of Real Servers required has a slightly different meaning. A rule is said to be available and can be matched if and only if the number of available Real Servers with that rule assigned to them is greater than the limit. If the number of available Real Servers is below this limit, the rule can never be matched.

    If a Real Server on a SubVS is marked as critical – the SubVS will be marked as down if that Real Server is down. However, the parent Virtual Service will not be marked down unless that SubVS is marked as critical.

     

    HTTP or HTTPS Protocol Health Checking

    When either the HTTP Protocol or HTTPS Protocol options are selected a number of extra options are available as described below.

    Figure 3‑28: Real Servers section

    The post data option only appears if the POST HTTP Method is selected.
    The Reply 200 Pattern option only appears if either the POST or GET HTTP Method is selected.
    URL

    By default, the health checker tries to access the root URL (/) to determine if the machine is available. A different URL can be specified here.

    Use HTTP/1.1
    By default, the LoadMaster uses HTTP/1.0. However, you may opt to use HTTP/1.1, which operates more efficiently.

    HTTP/1.1 Host
    This field is only visible if Use HTTP/1.1 is selected.

    When using HTTP/1.1 checking, the Real Servers require a hostname to be supplied in each request. If no value is set, then this value is the IP address of the Virtual Service.

    To send SNI host information in HTTPS health checks, please enable Use HTTP/1.1 in the Real Servers section of the relevant Virtual Service(s) and specify a host header. If this is not set, the IP address of the Real Server will be used.

    HTTP Method
    When accessing the health check URL, the system can use either the HEAD, GET or POST method.

    Post Data
    This field will only be available if the HTTP Method is set to POST. When using the POST method, up to 2047 characters of POST data can be passed to the server.

    Reply 200 Pattern
    When using the GET or the POST method, the contents of the returned response message can be checked. If the response contains the string specified by this Regular Expression, then the machine is determined to be up. The response will have all HTML formatting information removed before the match is performed. Only the first 4K of response data can be matched.

    The LoadMaster will only check for this phrase if the reply from the server is a 200 code. If the reply is something else, the page will be marked as down without checking for the phrase. However, if the reply is a redirect (code 302), the page is not marked as down. This is because the LoadMaster assumes that the phrase will not be present and also it cannot take the service down, as the redirect would then become useless.

    If the pattern starts with a caret (^) symbol, the result of the pattern match is inverted.
    Both Regular Expressions and Perl Compatible Regular Expressions (PCRE) can be used to specify strings.
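
    The sketch below pulls the points above together as a rough, assumption-level model of an HTTP health check with a Reply 200 Pattern: fetch a URL, require a 200 response (a 302 is not treated as down), strip HTML formatting, and match the pattern in the first 4K of the body, with a leading ^ inverting the result. The host, path and pattern are placeholders and this is not LoadMaster code.

        # Rough model of an HTTP health check with a Reply 200 Pattern.
        # Host, path and pattern are placeholders; this is not LoadMaster code.
        import http.client
        import re

        def check(host: str, port: int, path: str, pattern: str) -> bool:
            conn = http.client.HTTPConnection(host, port, timeout=5)
            try:
                conn.request("GET", path)
                resp = conn.getresponse()
                if resp.status == 302:
                    return True                   # a redirect is not treated as down
                if resp.status != 200:
                    return False
                body = resp.read(4096).decode(errors="replace")   # only the first 4K is matched
            except OSError:
                return False
            finally:
                conn.close()
            text = re.sub(r"<[^>]+>", " ", body)  # strip HTML formatting before matching
            invert = pattern.startswith("^")
            found = re.search(pattern.lstrip("^"), text) is not None
            return found != invert                # a leading ^ inverts the result

        print(check("10.0.0.11", 80, "/healthcheck.html", "Server OK"))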

    Custom Headers
    Here you can specify up to 4 additional headers/fields which will be sent with each health check request. Clicking the Show Headers button shows the entry fields. The first field is where you define the key for the custom header that is to be part of the health check request. The second field is the value of the custom header that is to be sent as part of the health check request. Once the information is entered, click the Set Header button.

    Each of the headers can be up to a maximum of 20 characters long and the fields can be up to a maximum of 100 characters long. However, the maximum allowed number of characters in total for the 4 header/fields is 256.

    The following special characters are allowed in the Custom Headers fields:
    ; . ( ) / + = - _
    If a user has specified HTTP/1.1, the Host field is sent as before to the Real Server. This can be overridden by specifying a Host entry in the additional headers section. The User-Agent can also be overridden in the same manner. If a Real Server is using adaptive scheduling, the additional headers which are specified in the health check are also sent when getting the adaptive information.

    It is possible to perform a health check using an authenticated user: enable Use HTTP/1.1, select HEAD as the HTTP Method and enter the username in the first Custom Header text box, and the password in the second box.


    Rules
    If any of the Real Servers have Content Switching rules assigned to them, the Rules column appears in the Real Servers section. A button with the number of rules assigned to each Real Server (or with None if there are no rules assigned) is displayed in the Rules column.

    Clicking the button within the Rules column opens the Rules Management screen.

    Figure 3‑29: Rules

    From within this screen you can Add or Delete the rules assigned to a Real Server.

    Binary Data Health Checking

    When Binary Data is selected as the health check method, some other fields are available, as described below.

    Figure 3‑30: Binary Data health check

    Data to Send
    Specify a hexadecimal string to send to the Real Server.
    This hexadecimal string must contain an even number of characters.

    Reply Pattern
    Specify the hexadecimal string which will be searched for in the response sent back from the Real Server. If the LoadMaster finds this pattern in the response, the Real Server is considered up. If the string is not found, the Real Server will be marked as down.

    This hexadecimal string must contain an even number of characters.

    Find Match Within
    When a response is returned, the LoadMaster will search for the Reply Pattern in the response. The LoadMaster will search up to the number of bytes specified in this field for a match.

    Setting this to 0 means that the search is not limited. Data is read from the Real Server until a match is found. A maximum of 8 KB will be read from the Real Server.

    Setting the value to less than the length of the reply string means that the check will act as if the value has been set to 0, i.e. all packets (up to 8 KB) will be searched.
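
    The following Python sketch illustrates the general idea of a binary health check: send a hex-encoded probe and search the reply for a hex-encoded pattern, honouring the Find Match Within limit. The host, port and hex strings are example values, not LoadMaster defaults.

    import socket

    def binary_health_check(host, port, data_to_send, reply_pattern,
                            find_match_within=0, timeout=3):
        probe = bytes.fromhex(data_to_send)      # hex strings must have an even length
        pattern = bytes.fromhex(reply_pattern)
        limit = 8192                             # at most 8 KB is read from the Real Server
        if find_match_within and find_match_within >= len(pattern):
            limit = min(limit, find_match_within)
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(probe)
                received = b""
                while len(received) < limit:
                    chunk = sock.recv(limit - len(received))
                    if not chunk:
                        break
                    received += chunk
                    if pattern in received:
                        return True              # pattern found: Real Server is up
        except OSError:
            return False
        return pattern in received

    # Example: send a hex probe and expect the same bytes echoed back.
    # binary_health_check("10.0.0.10", 7, "48454c4c4f", "48454c4c4f")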

     

    Add a Real Server

    Clicking the Add New button brings you to the following screen where the properties of the Real Server are set.

    Figure 3‑31: Real Server Parameters

    Allow Remote Addresses: By default only Real Servers on local networks can be assigned to a Virtual Service. Enabling this option will allow a non-local Real Server to be assigned to the Virtual Service.
    To make the Allow Remote Addresses option visible, Enable Non-Local Real Servers must be selected (in System Configuration > Miscellaneous Options > Network Options). Also, Transparency must be disabled in the Virtual Service.

    When alternative gateways/non-local Real Servers are set up, health checks are routed through the default gateway.

    Real Server Address: The Real Server IP address. This is not editable when modifying a Real Server.

    Port: The forwarding port of the Real Server. This field is editable, so the port may be altered later if required.

    Forwarding Method: Either NAT (Network Address Translation) or Route (Direct) forwarding. The available options are dependent on the other modes selected for the service.

    Weight: The Real Server's weight, as used by the Weighted Round Robin, Weighted Least Connection and Adaptive scheduling methods. The default initial value for the weight is 1000, the maximum is 65535, and the minimum is 1. A good benchmark is to give a Real Server a weight relative to its processor speed, i.e. if server1 seems to bring four times the power of server2, assign a weight of 4000 to server1 and a weight of 1000 to server2.
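
    As an illustration of how such a weighting plays out, the short Python sketch below uses one common weighted round robin variant (the "smooth" algorithm). It is not necessarily the LoadMaster's exact scheduler, but it shows the resulting 4:1 traffic split for the weights mentioned above.

    from collections import Counter

    servers = {"server1": 4000, "server2": 1000}

    def smooth_wrr(servers, n):
        """Yield n picks using the smooth weighted round robin variant."""
        current = dict.fromkeys(servers, 0)
        total = sum(servers.values())
        for _ in range(n):
            for name, weight in servers.items():
                current[name] += weight
            best = max(current, key=current.get)
            current[best] -= total
            yield best

    print(Counter(smooth_wrr(servers, 10)))
    # Counter({'server1': 8, 'server2': 2})  -- a 4:1 split, matching the weights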

    Connection Limit: The maximum number of open connections that a Real Server will accept before it is taken out of the rotation. This is only available for Layer 7 traffic. The limit stops new connections from being created, but requests on existing persistent connections to the server are still allowed.

    A maximum number of 1024 Real Servers is allowed. This is the global limit and is divided among the existing Virtual Services. For example, if one Virtual Service had 1000 Real Servers, then the remaining Virtual Services can only have 24 further Real Servers in total.

    For the LoadMaster Exchange, there is a limit of six Real Servers that may be configured.
    Click the Add This Real Server button and it will be added to the pool.

    Critical
    This option will only appear if the Enhanced Options check box is enabled. 

    In the Real Servers section of the Virtual Service modify screen, there is a Critical check box for each of the Real Servers. Enabling this option indicates that the Real Server is required for the Virtual Service to be considered available. The Virtual Service will be marked as down if the Real Server has failed or is disabled.

    If a Real Server on a SubVS is marked as critical, the SubVS will be marked as down if that Real Server is down. However, the parent Virtual Service will not be marked down unless that SubVS is marked as critical.
    This option overrides the Minimum number of RS required for VS to be considered up field. For example, if the minimum is set to two and only one Real Server is down, but that Real Server is set to critical, the Virtual Service will be marked as down.

    In all cases, if the Virtual Service is considered to be down and the Virtual Service has a sorry server or an error message configured, these will be used.

     

    Modify a Real Server

    When you click the Modify button of a Real Server, the following options are available:

    Figure 3‑32: Real Server options

    Real Server Address
    This field shows the address of the Real Server. This is not an editable field.

    Port
    This is a field detailing the port on the Real Server that is to be used.

    Forwarding Method
    This is a field detailing the type of forwarding method to be used. The default is NAT; Direct Server Return can only be used with L4 services.


    Weight
    When using Weighted Round Robin Scheduling, the weight of a Real Server is used to indicate what relative proportion of traffic should be sent to the server. Servers with higher values will receive more traffic.

    Connection Limit
    This is the maximum number of open connections that can be sent to the Real Server before it is taken out of rotation. The maximum limit is 100,000.

     

    Manage Templates

    Templates make the setting up of Virtual Services easier by automatically creating and configuring the parameters for a Virtual Service. Before a template can be used to configure a Virtual Service, it must be imported and installed on the LoadMaster.

    Figure 3‑33: Manage Templates

    Click the Choose File button, select the template you wish to install and click the Add New Template button to install the selected template. This template is now available for use when you are adding a new Virtual Service.

    Click the Delete button to remove the template.

     

    Manage SSO Domains

    Before using the Edge Security Pack (ESP) the user must first set up a Single Sign-On (SSO) Domain on the LoadMaster. The SSO Domain is a logical grouping of Virtual Services which are authenticated by an LDAP server.

    The maximum number of SSO domains that are allowed is 128.

    Figure 3‑34: Manage Single Sign On Options

    Click the Manage SSO Domains menu option to open the Manage Single Sign On Options screen.

     

    Single Sign On Domains

    Two types of SSO domains can be created – client side and server side.
    Client Side configurations allow you to set the Authentication Protocol to LDAP, RADIUS, RSA-SecurID, Certificates or RADIUS and LDAP.

    Server Side configurations allow you to set the Authentication Protocol to Kerberos Constrained Delegation (KCD).

    To add a new SSO Domain, enter the name of the domain in the Name field and click the Add button. The name entered here does not need to relate to the allowed hosts within the Single Sign On Domain.
    When using the Permitted Groups field in ESP Options, you need to ensure that the SSO domain set here is the directory for the permitted groups. For example, if the SSO Domain is set to webmail.example and webmail is not the directory for the permitted groups within example.com, it will not work. Instead, the SSO Domain needs to be set to .example.com.

    If the Domain/Realm field is not set, the domain Name set when initially adding an SSO domain will be used as the Domain/Realm name.
     
    Client Side (Inbound) SSO Domains

    Figure 3‑35: Manage Domain screen

    Authentication Protocol
    This dropdown allows you to select the transport protocol used to communicate with the authentication server. The options are:
    • LDAP
    • RADIUS
    • RSA-SecurID
    • Certificates
    • RADIUS and LDAP
    The fields displayed on this screen will change depending on the Authentication protocol selected.

    LDAP Configuration Type
    Select the type of LDAP configuration. The options are:
    • Unencrypted
    • StartTLS
    • LDAPS
    This option is only available if the Authentication Protocol is set to LDAP.
    RADIUS and LDAP Configuration Type
    Select the type of RADIUS and LDAP configuration. The options are:
    • RADIUS and Unencrypted LDAP
    • RADIUS and StartTLS LDAP
    • RADIUS and LDAPS
    This option is only available if the Authentication Protocol is set to RADIUS and LDAP.

    LDAP/RADIUS/RSA-SecurID Server(s)
    Type the IP address(es) of the server or servers which will be used to authenticate the domain into the Server(s) field and click the Set Server(s) button.

    Multiple server addresses can be entered within this text box. Each entry must be separated by a space.

    RADIUS Shared Secret
    The shared secret to be used between the RADIUS server and the LoadMaster.
    This field will only be available if the Authentication Protocol is set to RADIUS or RADIUS and LDAP.

    LDAP Administrator and LDAP Administrator Password
    These text boxes are only visible when the Authentication Protocol is set to Certificates.
    These details are used to check the LDAP database to determine if a user from the certificate exists.

    Check Certificate to User Mapping
    This option is only available when the Authentication Protocol is set to Certificates. When this option is enabled, in addition to checking the validity of the client certificate, the client certificate is also checked against the altSecurityIdentities (ASI) attribute of the user in Active Directory.

    If this option is enabled and the check fails, the login attempt will fail. If this option is not enabled, only a valid client certificate (with the username in the SubjectAltName (SAN)) is required to log in, even if the altSecurityIdentities attribute for the user is not present or not matching.



    Domain/Realm
    The login domain to be used. This is also used with the logon format to construct the normalized username, for example:
    • Principalname: username@domain
    • Username: domain\username
    If the Domain/Realm field is not set, the Domain name set when initially adding an SSO domain will be used as the Domain/Realm name.

    RSA Authentication Manager Config File
    This file needs to be exported from the RSA Authentication Manager.

    RSA Node Secret File
    A node secret must be generated and exported in the RSA Authentication Manager.
    It is not possible to upload the RSA node secret file until the RSA Authentication Manager configuration file is uploaded. The node secret file is dependent on the configuration file.

    Logon Format
    This drop-down list allows you to specify the format of the login information that the client has to enter.
    The options available vary depending upon which Authentication Protocol is selected.

    Not Specified: The username will have no normalization applied to it - it will be taken as it is typed.

    Principalname: Selecting this as the Logon format means that the client does not need to enter the domain when logging in, for example name@domain.com. The SSO domain added in the corresponding text box will be used as the domain in this case.

    When using RADIUS as the Authentication protocol the value in this SSO domain field must exactly match for the login to work. It is case sensitive.

    Username: Selecting this as the Logon format means that the client needs to enter the domain and username, for example domain\name@domain.com.

    Username Only: Selecting this as the Logon Format means that the text entered will be normalized to the username only (the domain will be removed).

    The Username Only option is only available for the RADIUS and RSA-SecurID protocols.
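
    As a rough illustration of the normalization described above, the Python sketch below shows how a typed login could be rewritten for each Logon Format. The domain value and the exact rewriting rules are assumptions for the example, not the LoadMaster's implementation.

    def normalize_logon(typed, logon_format, domain="example.com"):
        # Strip any domain the user typed, keeping just the bare username.
        user = typed.split("\\")[-1].split("@")[0]
        if logon_format == "Not Specified":
            return typed                                    # taken exactly as typed
        if logon_format == "Principalname":
            return "%s@%s" % (user, domain)                 # name@domain
        if logon_format == "Username":
            return "%s\\%s" % (domain.split(".")[0], user)  # domain\name
        if logon_format == "Username Only":
            return user                                     # domain removed entirely
        raise ValueError("unknown logon format: %r" % logon_format)

    print(normalize_logon("jsmith@example.com", "Username Only"))   # jsmith
    print(normalize_logon("jsmith", "Principalname"))               # jsmith@example.com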

    Logon Format (Phase 2 Real Server)
    Specify the logon string format used to authenticate to the Real Server.

    The Logon Format (Phase 2 Real Server) field only appears if the Authentication Protocol is set to one of the following options:
    • RADIUS
    • RADIUS and LDAP
    • RSA-SecurID

    Logon Transcode
    Enable or disable the transcode of logon credentials, from ISO-8859-1 to UTF-8, when required.
    If this option is disabled, the logon credentials are used in whatever format the client supplies. If this option is enabled, the LoadMaster checks whether the client is using UTF-8; if it is not, the credentials are treated as ISO-8859-1 and transcoded to UTF-8.

    Failed Login Attempts
    The maximum number of consecutive failed login attempts before the user is locked out. Valid values range from 0 to 99. Setting this to 0 means that users will never be locked out.

    When a user is locked out, all existing logins for that user will be terminated, along with future logins.

    Reset Failed Login Attempt Counter after
    When this time (in seconds) has elapsed after a failed authentication attempt (without any new attempts) the failed login attempts counter is reset to 0. Valid values for this text box range from 60 to 86400. This value must be less than the Unblock timeout value.

    Unblock timeout
    The time (in seconds) before a blocked account is automatically unblocked, i.e. unblocked without administrator intervention. Valid values for this text box range from 60 to 86400. This value must be greater than the Reset Failed Login Attempt Counter after value.

    Session timeout
    The idle time and max duration values can be set here for trusted (private) and untrusted (public) environments. The value that will be used is dependent on whether the user selects public or private on their login form. Also, either max duration or idle time can be specified as the value to use.

    Idle time: The maximum idle time of the session in seconds, i.e. idle timeout.

    Max duration: The max duration of the session in seconds, i.e. session timeout.
    Valid values for these fields range from 60 to 86400.

    Use for Session Timeout: A switch to select the session timeout behaviour (max duration or idle time).

    Test User and Test User Password
    In these two fields, enter credentials of a user account for your SSO Domain. The LoadMaster will use this information in a health check of the Authentication Server. This health check is performed every 20 seconds.

    Currently Blocked Users
    Figure 3‑36: Currently Blocked Users

    This section displays a list of users who are currently blocked and it also shows the date and time that the block occurred. It is possible to remove the block by clicking the unlock button in the Operation drop-down list.

    Different formats of the same username are treated as the same username, for example administrator@kemptech.net, kemptech\administrator and kemptech.net\administrator are all treated as one username.
    Server Side (Outbound) SSO Domains

    Authentication Protocol
    This dropdown allows you to select the transport protocol used to communicate with the authentication server. The only option available for outbound (server side) configurations is Kerberos Constrained Delegation.

    Kerberos Realm
    The address of the Kerberos Realm.
    Colons, slashes and double quotes are not allowed in this field.
    This field only supports one address.

    Kerberos Key Distribution Center (KDC)
    The host name or IP address of the Kerberos Key Distribution Center. The KDC is a network service that supplies session tickets and temporary session keys to users and computers within an Active Directory domain.

    This field only accepts one host name or IP address. Double and single quotes are not allowed in this field.

    Kerberos Trusted User Name
    Before configuring the LoadMaster, a user must be created and trusted in the Windows domain (Active Directory). This user should also be set to use delegation. This trusted administrator user account is used to get tickets on behalf of users and services when a password is not provided. The user name of this trusted user should be entered in this text box.

    Double and single quotes are not allowed in this field.

    Kerberos Trusted User Password
    The password of the Kerberos trusted user.

     

    Single Sign On Image Sets


    Figure 3‑37: Single Sign On Image Sets

    To upload a new image set, click Choose File, browse to and select the file and click Add Custom Image Set. After adding the file, the supplied image set(s) will be listed on this page. It will also be available to select in the SSO Image Set drop-down list in the ESP Options section of the Virtual Service modify screen.

     

    WAF Settings

    You can get to this screen by selecting Virtual Services > WAF Settings in the main menu of the LoadMaster WUI.

    Figure 3‑38: Remote Logging

    Enable Remote Logging
    This check box allows you to enable or disable remote logging for WAF.

    Remote URL
    Specify the Uniform Resource Identifier (URI) for the remote logging server.

    Username
    Specify the username for the remote logging server.

    Password
    Specify the password for the remote logging server.

    Figure 3‑39: Automated WAF Rule Updates

    The automatic and manual download options will be greyed out if the AFP subscription has expired.

    Enable Automated Rule Updates
    Select this check box to enable the automatic download of the latest AFP rule files. This is done on a daily basis, if enabled.

    Last Updated
    This section displays the date when the last rules were downloaded. It gives you the option to attempt to download the rules now. It will also display a warning if rules have not been downloaded in the last 7 days.

    The Show Changes button will be displayed if the rules have been downloaded. This button can be clicked to retrieve a log of changes which have been made to the KEMP Technologies WAF rule set.

    Enable Automated Installs
    Select this check box to enable the automatic daily install of updated rules at the specified time.

    When to Install
    Select the hour at which to install the updates every day.

    Manually Install rules
    This button allows you to manually install rule updates, rather than automatically installing them. This section also displays when the rules were last installed.

    Figure 3‑40: Custom Rules and Custom Rule Data

    Custom Rules
    This section allows you to upload custom rules and associated data files. Individual rules can be loaded as files with a .conf extension, or you can load a package of rules in a Tarball (.tar.gz) file. A Tarball of rule files usually includes a number of individual .conf and .data files.

    Custom Rule Data
    This section allows you to upload data files which are associated to the custom rules.

     

    Global Balancing

    This menu option may not be available in your configuration. These features are part of the GSLB Feature Pack and are enabled based on the license that has been applied to the LoadMaster. If you would like to have these options available, contact KEMP to upgrade your license.

     

    Enable/Disable GSLB

    Click this menu option to either enable or disable GEO features. When GEO is enabled, the Packet Routing Filter is enabled by default and cannot be changed. When GEO is disabled, it is possible to either enable or disable the Packet Routing Filter in System Configuration > Access Control > Packet Filter.

     

    Manage FQDNs

    A Fully Qualified Domain Name (FQDN), sometimes also referred to as an absolute domain name, is a domain name that specifies its exact location in the tree hierarchy of the Domain Name System (DNS). It specifies all domain levels, including the top-level domain and the root zone. A fully qualified domain name is distinguished by its lack of ambiguity: it can only be interpreted in one way. The DNS root domain is unnamed, which is expressed by the empty label, resulting in an FQDN ending with the dot character.

    Figure 4‑1: Global Fully Qualified Names

    From this screen, you can Add or Modify an FQDN.

     

    Add a FQDN


    Figure 4‑2: Add a FQDN

    New Fully Qualified Domain Name
    The FQDN name, for example www.example.com. Wildcards are supported, for example *.example1.com matches anything that ends in .example1.com.

     

    Add/Modify an FQDN


    Figure 4‑3: Configure FQDN

    Selection Criteria
    The selection criterion used to distribute the resolution requests can be selected from this drop-down list. The Selection Criteria available are:
    • Round Robin – traffic is distributed sequentially across the server farm (cluster), that is, the available servers.
    • Weighted Round Robin – incoming requests are distributed across the cluster in a sequential manner, while taking account of a static “weighting” that can be pre-assigned per server.
    • Fixed Weighting – the highest weight Real Server is only used when other Real Server(s) are given lower weight values.
    • Real Server Load – the LoadMaster contains logic which checks the state of the servers at regular intervals, independently of the configured weighting.
    • Proximity – traffic is distributed to the site closest to the client. The positioning of the sites is set by entering the longitude and latitude coordinates of each site during setup. The position of the client is determined from its IP address.
    • Location Based – traffic is distributed to the site closest to the client. The positioning of the sites is set by entering the location of each site (country or continent) during setup. The position of the client is determined from its IP address. If there is more than one site with the same country code, requests are distributed in a round robin fashion to each of the sites.
    Fail Over
    The Fail Over option is only available when the Selection Criteria is set to Location Based. When the Fail Over option is enabled, if a request comes from a specific region and the target is down, the connection will fail over and be answered with the next level in the hierarchy. If this is not available, the connection will be answered by the nearest (by proximity) target. If this is not possible, the target with the lowest requests will be picked. The Fail Over setting affects all targets.

    Public Requests & Private Requests
    The Isolate Public/Private Sites setting has been enhanced in version 7.1-30. The checkbox has been migrated to two separate dropdown menus to allow more granular control of DNS responses. Existing behavior has been preserved and will be migrated from your current setting, ensuring that no change in DNS responses is experienced.

    These new settings allow administrators finer control of DNS responses to configured FQDNs. Administrators may selectively respond with public or private sites based on whether the client is from a public or private IP. For example, administrators may wish to allow only private clients to be sent to private sites.

    The following table outlines settings and their configurable values:
    Setting             Value            Client Type    Site Types Allowed
    Public Requests     Public Only      Public         Public
                        Prefer Public    Public         Public, Private if no public
                        Prefer Private   Public         Private, Public if no private
                        All Sites        Public         Private and Public
    Private Requests    Private Only     Private        Private
                        Prefer Private   Private        Private, Public if no private
                        Prefer Public    Private        Public, Private if no public
                        All Sites        Private        Private and Public
    Table 4‑1: Public/Private Request Settings

    Please note that exposing private IP address information to public queries in this way may result in exposed network details. Select this setting at your own risk.
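
    The Python sketch below models the decision in Table 4‑1: which site list a client is offered, depending on whether the client address is public or private. The site lists are example values, and using Python's private-address classification to distinguish clients is an assumption for the sketch rather than a statement of how GEO classifies them.

    import ipaddress

    PUBLIC_RULES = {
        "Public Only":    lambda pub, priv: pub,
        "Prefer Public":  lambda pub, priv: pub or priv,
        "Prefer Private": lambda pub, priv: priv or pub,
        "All Sites":      lambda pub, priv: priv + pub,
    }
    PRIVATE_RULES = {
        "Private Only":   lambda pub, priv: priv,
        "Prefer Private": lambda pub, priv: priv or pub,
        "Prefer Public":  lambda pub, priv: pub or priv,
        "All Sites":      lambda pub, priv: priv + pub,
    }

    def candidate_sites(client_ip, public_setting, private_setting,
                        public_sites, private_sites):
        """Return the sites allowed for this client, per the table above."""
        if ipaddress.ip_address(client_ip).is_private:
            rule = PRIVATE_RULES[private_setting]
        else:
            rule = PUBLIC_RULES[public_setting]
        return rule(public_sites, private_sites)

    # A private client with "Prefer Private" is answered with the private site:
    print(candidate_sites("192.168.1.20", "Public Only", "Prefer Private",
                          ["203.0.113.10"], ["10.0.0.10"]))   # ['10.0.0.10']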



    Site Failure Handling
    The default is for failover to occur automatically. However, in certain circumstances, for example in a multi-site Exchange 2010 configuration, this may not be optimal and different behaviour may be required. Failure Delay is set in minutes. If a Failure Delay is set, a new option called Site Recovery Mode becomes available.

    Site Recovery Mode
    This option is only available if a Failure Delay has been set. There are two options:
    • Automatic: The site is brought back into operation immediately upon site recovery
    • Manual: Once the site has failed, it is disabled. Manual intervention is required to restore normal operation.

    Cluster
    If needed, the cluster containing the IP address can be selected.

    Checker
    This defines the type of health checking that is performed. The options include:
    • None: This implies that no health check will be performed to check the health status of the machine (IP address) associated to the current FQDN
    • ICMP Ping: This tests the health status by pinging the IP address
    • TCP Connect: This will test the health by trying to connect to the IP address on a specified port
    • Cluster Checks: When this is selected, the health status check is performed using the method associated with the selected cluster
    When using Real Server Load as the Selection Criteria, and the cluster Type is set to Local LM or Remote LM, a drop-down list called Mapping Menu appears. The Mapping Menu drop-down list displays a list of Virtual Service IP addresses from that LoadMaster. It lists each Virtual Service IP address with no port, as well as all of the Virtual IP address and port combinations. Select the Virtual IP address that is associated with this mapping. If a Virtual Service with no port is selected, the health check checks all Virtual Services with the same IP address as the one selected; if one of them is in an “Up” status, the FQDN will show as “Up”, and the port is not taken into consideration. If a Virtual Service with a port is selected, the health check only checks the health of that Virtual Service when updating the health of the FQDN.

    Parameters
    The parameters for the Selection Criteria are described and can be changed within this section. The parameters differ depending on the Selection Criteria in use, as described below:
    • Round Robin – no parameters available
    • Weighted Round Robin – the weight of the IP address can be set by changing the value in the Weight text box and clicking the Set Weight button
    • Fixed Weighting – the weight of the IP address can be set in the Weight text box
    • Real Server Load – the weight of the IP address can be set in the Weight text box and the Virtual Service which will be measured can be chosen from the Mapping field
    • Proximity – the physical location of the IP address can be set by clicking the Show Coordinates button
    • Location Based – the locations associated with the IP address can be set by clicking the Show Locations button

    Delete IP address
    An IP address can be deleted by clicking the Delete button in the Operation column of the relevant IP address.

    Delete FQDN
    An FQDN can be deleted by clicking the Delete button at the bottom of the Modify (Configure) FQDN screen.

    Manage Clusters

    The GEO clusters feature is mainly used inside data centers. Health checks are performed on a machine (IP address) associated with a specific FQDN using the containing cluster server, rather than the machine itself.

    Figure 4‑4: Configured Clusters

    In the Manage Clusters screen there are options to Add, Modify and Delete clusters.

    Add a Cluster


    Figure 4‑5: Add a Cluster

    When adding a cluster, there are two text boxes to fill out:
    • IP address – the IP address of the cluster
    • Name – the name of the cluster. This name can be used to identify the cluster in other screens.

    Modify a Cluster


    Figure 4‑6: Modify Cluster

    Name
    The name of the cluster.

    Location
    If needed, the Show Locations button can be clicked in order to enter the latitude and longitude of the location of the IP address.

    Type
    The cluster type can be Default, Remote LM or Local LM:
    • Default: When the type of cluster is set to Default, the check is performed against the cluster using one of the following three available health checks:
    • None: No health check is performed, so the machine always appears to be up.
    • ICMP Ping: The health check is performed by pinging the cluster IP address.
    • TCP Connect: The health check is performed by connecting to the cluster IP address on the port specified.
    • Local LM: When Local LM is selected as the Type, the Checkers field is automatically set to Not Needed, because a health check is unnecessary when the cluster is the local machine.
    • Remote LM: The health check for this type of cluster is implicit (it is performed via SSH).
    The only difference between Remote LM and Local LM is that Local LM saves a TCP connection by retrieving the information locally rather than over TCP. Otherwise, the functionality is the same.

    Checkers
    The health check method used to check the status of the cluster.

    If the Type is set to Default the health check methods available are ICMP Ping and TCP Connect.
    If Remote LM or Local LM is selected as the Type, the Checkers dropdown list is unavailable.

    Disable
    If needed, a cluster can be disabled by clicking the Disable button in the Operation column.

    Delete a Cluster

    To delete a cluster, click the Delete button in the Operation column of the relevant cluster.
    Use the Delete function with caution. There is no way to undo this deletion.

    Upgrading GEO Clusters

    When upgrading GEO clusters, it is strongly recommended that all nodes are upgraded at the same time. Since GEO clusters operate in active-active mode, upgrading at the same time ensures that consistent behavior is experienced across all nodes.

    If you must operate a GEO cluster with mixed versions, be sure to make all changes from the most recent version. This prevents configuration loss due to incompatible configurations. Additionally, changing configuration options not present in older versions will result in disparate behavior.

    Miscellaneous Params

    Descriptions of the sections and fields in the Miscellaneous Params menu option are provided below.

    Source of Authority


    Figure 4‑7: Source of Authority

    Source of Authority
    This is defined in RFC 1035. The SOA defines global parameters for the zone (domain). There is only one SOA record allowed in a zone file.

    Name Server
    The Name Server is defined as the forward DNS entry configured in the Top Level DNS, written as a Fully-Qualified Domain Name (FQDN, ending with a dot), for example lm1.example.com.

    If there is more than one Name Server, for example in a HA configuration, then you would add the second Name Server in the field also, separated by a blank space, for example lm1.example.com lm2.example.com.

    SOA Email
    This text box is used to publish the mail address of a person or role account responsible for this zone, with the “@” converted to a “.” (for example, hostmaster@example.com is entered as hostmaster.example.com). The best practice is to define (and maintain) a dedicated mail alias, for example “hostmaster” (RFC 2142), for DNS operations.

    TTL
    The Time To Live (TTL) value dictates how long the reply from the GEO LoadMaster can be cached by other DNS servers or client devices. This value should be as low as is practical. The default value for this field is 10. The time interval is defined in seconds.

     

    Resource Check Parameters


    Figure 4‑8: Resource Check Parameters

    Check Interval
    Defined in seconds, this is the delay between health checks. This includes clusters and FQDNs. The valid range for this field is between 9 and 3600. The default value is 120.

    The interval value must be greater than the timeout value multiplied by the retry value (Interval > Timeout * Retry + 1). This is to ensure that the next health check does not start before the previous one completes.
    If the timeout or retry values are increased to a value that breaks this rule, the interval value will be automatically increased.
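
    A small Python sketch of that rule, taking the formula exactly as stated above (all values in seconds); the automatic adjustment shown here is illustrative, not the exact value the LoadMaster would choose.

    def effective_check_interval(interval, timeout, retry):
        minimum = timeout * retry + 1
        if interval <= minimum:
            # Mirror the documented behaviour: raise the interval automatically.
            return minimum + 1
        return interval

    print(effective_check_interval(interval=120, timeout=20, retry=10))   # 202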

    Connection Timeout
    Defined in seconds, this is the allowed maximum wait time for a reply to a health check. The valid range for this field is between 4 and 60.

    Retry Attempts
    This is the number of consecutive times a health check must fail before the target is marked down and removed from the load balancing pool.

    The maximum detection window for failed clusters or FQDNs is the (Check Interval + Connection Timeout) multiplied by the Retry Attempts.
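
    For example, with a Check Interval of 120 seconds, a Connection Timeout of 20 seconds and 3 Retry Attempts, the worst-case detection window would be (120 + 20) x 3 = 420 seconds (example values only).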

     

    Stickiness


    Figure 4‑9: Stickiness

    ‘Stickiness’, also known as Global Persistence, is the property that enables all name resolution requests from an individual client to be sent to the same resources until a specified period of time has elapsed.

    Location Data Update


    Figure 4‑10: Location Data Update

    The location patch contains the geographically-encoded IP-to-location data. Data files can be obtained directly from KEMP via normal support channels. These files are a repackaged distribution of the MaxMind GeoIP database.

     

    IP Range Selection Criteria


    Figure 4‑11: IP Range Selection Criteria

    This section allows the definition of up to 64 IP ranges per data center.

    IP Address
    Specify an IP address or network. Valid entries here are either a single IP, for example 192.168.0.1, or a network in Classless Inter-Domain Routing (CIDR) format, for example 192.168.0.0/24.
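
    As an illustration of the two accepted formats, the Python snippet below checks whether a client address falls inside a configured entry; the addresses are example values.

    import ipaddress

    def in_range(client_ip, entry):
        # "192.168.0.1" and "192.168.0.0/24" are both valid entry formats.
        network = ipaddress.ip_network(entry, strict=False)
        return ipaddress.ip_address(client_ip) in network

    print(in_range("192.168.0.42", "192.168.0.0/24"))   # True
    print(in_range("10.1.2.3", "192.168.0.1"))          # False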

    Coordinates
    Specify the latitude and longitude of the location.

    Location
    Specify the location to be assigned to the address.

    Add Custom Location
    Selecting this check box allows you to add a custom location.

     

    Statistics

     

    Real Time Statistics

    Shows the activity for the LoadMasters within the system (Global), the Real Servers and the Virtual Services.

     

    Global


    Figure 5‑1: Statistics

    Total CPU Activity
    This table displays the following CPU utilization information for a given LoadMaster:

    Statistic       Description
    User            The percentage of the CPU spent processing in user mode
    System          The percentage of the CPU spent processing in system mode
    Idle            The percentage of the CPU which is idle
    I/O Waiting     The percentage of the CPU spent waiting for I/O to complete

    The sum of these 4 percentages will equal 100%.

    Core Temperatures: The temperature for each CPU core is displayed for LoadMaster hardware appliances. Temperature will not show on a Virtual LoadMaster statistics screen.

    CPU Details: To get statistics for an individual CPU, click the relevant number button in CPU Details.

    Figure 5‑2: CPU Statistics

    The CPU details screen has two additional statistics displayed - HW Interrupts and SW Interrupts.
    Memory usage

    This bar graph shows the amount of memory in use and the amount of memory free.

    Network activity
    These bar graphs show the current network throughput on each interface.

     

    Real Servers


    Figure 5‑3: Section of the Real Servers Statistics screen

    These graphs display the connections, bytes, bits or packets, depending on choice. The buttons in the top right of the page toggle which values are displayed. The values displayed for a Real Server are the combined values for all the Virtual Services accessing that Real Server.

    If the Real Server has been assigned to more than one Virtual Service, you can view the statistics for each Real Server by Virtual Service by clicking the arrow to the right of the number in the first column. This expands the view to show the statistics for each Virtual Service on the Real Server.

    Due to the way that encrypted services are implemented, it is not possible to view the packet statistics on an encrypted Virtual Service.

    Name: The Name column is automatically populated based on a DNS lookup.
    RS-IP: This column displays the IP address of the Real Servers, and the Virtual Service (if expanded).

    Figure 5‑4: Real Server Statistics

    Clicking the links in the RS-IP column will display another screen containing a number of statistics specific to that Real Server.

    Status: This shows the status of the Real Server.

    Adaptive: This will only be displayed if an adaptive scheduling method has been selected for a Virtual Service. This column will display the adaptive value.

    Weight: This will only be displayed if the scheduling method is set to resource based (SDN adaptive) in a Virtual Service. The information which is gathered from the controller determines what the Adaptive value is set to. As the adaptive value goes up, the weight of the Real Server goes down. If all adaptive values are the same, all weights will be the same. When the adaptive values are different the weights will change.

    The weight of the Real Servers determines where traffic is sent. If a Real Server is configured in multiple Virtual Services, two numbers will be displayed for the weight - the first shows the average of the current weights over all Virtual Services that the Real Server is configured in. The second shows the number of Virtual Services that the Real Server is configured in. For example, a Weight of 972/2 means that the average weight of a Real Server which is configured in two Virtual Services is 972.

    Total Conns: The total number of connections made.
    Last 60 Sec: The total number of connections in the last 60 seconds.
    5 Mins: The total number of connections in the last 5 minutes.
    30 Mins: The total number of connections in the last 30 minutes.
    1 Hour: The total number of connections in the last hour.
    Active Conns: The total number of connections that are currently active.
    Current Rate Conns/sec: The current rate of connections per second.
    [%]: The percentage of connections per second.
    Conns/sec: A graphical representation of the connections per second.

     

    Virtual Services


    Figure 5‑5: Virtual Services

    These graphs display the connections, bytes, bits or packets, depending on choice. The buttons in the top right of the page toggle which values are displayed. The percentage of traffic distribution across the Virtual Service's Real Servers is displayed.

    Name: The name of the Virtual Service.
    Virtual IP Address: The IP address and port of the Virtual Service.

    Figure 5‑6: Virtual Service Statistics

    Clicking the links in the Virtual IP Address column will display another screen containing a number of statistics specific to that Virtual Service.

    Protocol: The protocol of the Virtual Service. This will either be tcp or udp.
    Status: The status of the Virtual Service.
    Total Conns: The total number of connections made.
    Last 60 Sec: The total number of connections in the last 60 seconds.
    5 Mins: The total number of connections in the last 5 minutes.
    30 Mins: The total number of connections in the last 30 minutes.
    1 Hour: The total number of connections in the last hour.
    Active Conns: The total number of connections that are currently active.
    Current Rate Conns/sec: The current rate of connections per second.

     

    Historical Graphs

    The Historical Graphs screen provides a graphical representation of the LoadMaster statistics. These configurable graphs provide a visual indication of the traffic that is being processed by the LoadMaster.
    There are graphs for the network activity on each interface. There is also an option to view graphs for the overall and individual Virtual Services and the overall and individual Real Servers.

    The time granularity can be specified by selecting one of the hour, day, month, quarter or year options.
    In the case of the network activity on the interface graphs, you can choose which type of measurement unit you wish to use by selecting one of the Packet, Bits or Bytes options.

    For the Virtual Services and Real Servers graphs you can choose which type of measurement unit you wish to use by selecting one of the Connections, Bits or Bytes options.
    You can configure which Virtual Service statistics are displayed by clicking the configuration icon in the Virtual Services panel. This opens the Virtual Services configuration window.

    Figure 5‑7: Virtual Service (VS) selection for history graphs

    From here, Virtual Services can be added or removed from the statistics display.
    You can disable these graphs by clearing the Enable Historical Graphs check box in the WUI Settings screen.

    A maximum of five Virtual Services can be displayed at the same time.

    To close the dialog and apply any changes, please ensure to click the button within the window itself.

    Figure 5‑8: Real Server (RS) selection for history graphs
    You can configure which Real Server statistics are displayed by clicking the configuration icon in the Real Servers panel. This opens the Real Servers configuration dialog in a separate window.
    From here, Real Servers can be added or removed from the statistics display.

    A maximum of five Real Servers can be displayed at the same time.
    To close the dialog and apply any changes, please ensure you click the button within the window itself.
    By default, only the statistics for the Virtual Services and Real Servers displayed on the Statistics page are gathered and stored. To view statistics for all Virtual Services and Real Servers, enable the Collect All Statistics option in System Configuration > Miscellaneous Options > WUI Settings.

    This option is disabled by default because collecting statistics for a large number of Virtual Services and Real Servers can cause CPU utilization to become very high.

     

    SDN Statistics

    To view the SDN statistics, go to Statistics > SDN Statistics in the main menu of the LoadMaster WUI.

    Figure 6‑1: SDN Statistics

    The Name, Version and Credentials will be displayed if the LoadMaster has successfully connected to the SDN Controller.

    Statistics section
    Statistics will not be displayed unless the SDN Controller has been added and is communicating with the LoadMaster. If the Name, Version and Credentials are not displaying it means that the LoadMaster is not connected to the SDN Controller. This could mean that the configuration is not correct, or the SDN Controller is down.

    Two types of statistics are displayed on this screen - network traffic and adaptive parameters:
    • Network traffic - this can display the number of bits and bytes transferred per second for each of the Real Servers. The maximum, average and minimum number of bits/bytes per second are shown.
    • Adaptive parameters - this displays the adaptive value (ctrl) and the weight. As the adaptive value goes up, the weight of the Real Server goes down.

     

    Device Information


    Figure 6‑2: Section of the Devices screen

    Information about switches on a controller which has OpenFlow enabled can be viewed by clicking the device info button.

    Figure 6‑3: Section of the Devices screen - further details

    Further information can be seen by clicking the plus (+) button to expand each of the devices.

     

    Path Information

    Figure 6‑4: Section of the Path Information screen

    Path information can be viewed by clicking the path info button.
    The LoadMaster and the SDN controller need to be directly connected in order for the path information to be displayed.

    To view a graphical representation of the path, click the => or <= icon in the Dir column for the relevant path.

    Figure 6‑5: Path Info - Graphical Representation

    This screen will display the LoadMaster, Real Server and any switches in between. The LoadMaster and Real Server are represented in brown. The LoadMaster is at the top and the Real Server is at the bottom.
    The switches are represented in blue. The switch name will appear in the blue boxes if the SDN Controller picks it up.

    The Data Path Identifier (DPID) of each switch on the network will be displayed on the right of the switches. The DPID is how the controller identifies the different switches.

    The Media Access Control (MAC) address of the LoadMaster and Real Server will be displayed to the right of those devices. The IP address of the LoadMaster and Real Server will also be displayed on the left.
    The colours of the paths are explained below:
    • Light green: Traffic is idle and the link is healthy.
    • Red: The path is congested with traffic.
    • Grey: The path between the LoadMaster and initial switch will be shown as grey.
    So, in the example screenshot above - the path between the Path2 and Switch2 switches is healthy but the paths between Switch2 and Switch1 and the Real Server are congested.

    The colour of the path may change as the path gets more or less congested. There is an array of red colours that can be displayed - the darker the red colour is, the more congestion is on the path.

     

    Real Servers


    Figure 7‑1: Real Servers screen
    This screen shows the current status of the Real Servers and gives the option to Disable or Enable each Real Server. Each Real Server has corresponding buttons, and pressing one button will take an online server offline, and vice-versa. The user can also Enable or Disable multiple Real Servers at the same time by selecting the Real Servers that they want to perform the operation on, and clicking the relevant button at the bottom. The status can be Enabled (Green), Disabled (Red) or Partial (Yellow) – meaning the Real Server is enabled in one Virtual Service.

    Caution
    Disabling a Real Server will disable it for all Virtual Services configured to use it. If it is the only Real Server available, i.e. the last one, the Virtual Service will effectively be down and not pass any traffic.

     

    Rules & Checking

     

    Content Rules

     

    Content Matching Rules


    Figure 8‑1: Rules

    This screen shows rules that have been configured and gives the option to Modify or Delete.
    To define a new rule, click the Create New button. You must give the rule a name.
    Rule names must be alphanumeric, unique and start with an alpha character. They are case sensitive, so two different rules can exist in the forms “Rule1” and “rule1”. Giving a rule an existing name overwrites the rule with that exact name.

    The options that are available depend on the Rule Type that you select. The available rules are as follows:
    Rule Types:
    • Content Matching: matches the content of the header or body
    • Add Header: adds a header according to the rule
    • Delete Header: deletes the header according to the rule
    • Replace Header: replaces the header according to the rule
    • Modify URL: changes the URL according to the rule

    Content Matching

    When the Rule Type selected is Content Matching the following describes the options available.

    Figure 8‑2: Content Matching

    Rule Name
    The name of the rule.

    Match Type:
    • Regular Expression: compares the header to the rule
    • Prefix: compares the prefix of the header according to the rule
    • Postfix: compares the postfix of the header according to the rule
    Header Field
    The name of the header field that is to be matched. If no header field name is set, the default is to match the string within the URL.

    Rules can be matched based on the Source IP of the client by entering src-ip within the Header Field text box. The header field will be populated by the source IP of the client.

    Similarly, rules can also be matched based on the HTTP Method used, for example GET, POST or HEAD. The methods that are to be matched should be written in uppercase.

    The body of a request can also be matched by typing body in the Header Field text box.
    Match String
    Input the pattern that is to be matched. Both Regular Expressions and PCRE are supported. The maximum number of characters allowed is 250.


    Negation
    Invert the sense of the match.
    Ignore Case
    Ignore case when comparing strings.
    Include Host in URL
    Prepend the hostname to the request URL before performing the match.
    Include Query in URL
    Append the query string to the URL before performing a match.
    Fail On Match
    If this rule is matched, then always fail to connect.
    Perform If Flag Set
    Only try to execute this rule if the specified flag is set.
    Set Flag If Matched
    If the rule is successfully matched, set the specified flag.

    Using the Perform If Flag Set and Set Flag If Matched options, it is possible to make rules dependent on each other, i.e. only execute a particular rule if another rule has been successfully matched.
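
    The Python sketch below is a simplified model of that flag mechanism, showing two rules where the second only runs if the first has matched. The rule structure and field names are assumptions for the example, not the LoadMaster's rule engine.

    import re

    def evaluate_rules(rules, request):
        """Return the names of the rules that matched, honouring flag dependencies."""
        flags, matched = set(), []
        for rule in rules:
            if rule.get("perform_if_flag") and rule["perform_if_flag"] not in flags:
                continue                      # dependency not satisfied, skip this rule
            value = request.get(rule.get("header_field", "url"), "")
            hit = re.search(rule["match_string"], value,
                            re.IGNORECASE if rule.get("ignore_case") else 0) is not None
            if rule.get("negation"):
                hit = not hit
            if hit:
                matched.append(rule["name"])
                if rule.get("set_flag"):
                    flags.add(rule["set_flag"])
        return matched

    rules = [
        {"name": "IsImages", "match_string": r"^/images/", "set_flag": "1"},
        {"name": "IsPNG", "match_string": r"\.png$", "perform_if_flag": "1"},
    ]
    print(evaluate_rules(rules, {"url": "/images/logo.png"}))   # ['IsImages', 'IsPNG']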

    Add Header

    When the Rule Type selected is Add Header the following describes the options available.

    Figure 8‑3: Add Header






    Rule Name
    A text box to enter the name of the rule.

    Header Field to be Added
    A text box to enter the name of the header field to be added.

    Value of Header Field to be Added
    A text box to enter the value of the header field to be added.

    Perform If Flag Set
    Only execute this rule if the specified flag is set.
    The flag is set by a different rule.

    Delete Header

    When the Rule Type selected is Delete Header the following describes the options available.

    Figure 8‑4: Delete Header

    Rule Name
    A text box to enter the name of the rule.

    Header Field to be Deleted
    A text box to enter the name of the header field to be deleted.

    Perform If Flag Set
    Only execute this rule if the specified flag is set.
    The flag will have been set by a different rule.

    Replace Header

    When the Rule Type selected is Replace Header the following describes the options available.

    Figure 8‑5: Replace Header

    Rule Name
    A text box to enter the name of the rule.
    Header Field
    A text box to enter the name of the header field where the substitution should take place.
    Match String
    The pattern that is to be matched.
    Value of Header Field to be replaced
    A text box to enter the value of the header field to be replaced.
    Perform If Flag Set
    Only execute this rule if the specified flag is set.

    The flag is set by a different rule.

    Modify URL

    When the Rule Type selected is Modify URL the following describes the options available.

    Figure 8‑6: Modify URL

    Rule Name
    A text box to enter the name of the rule.
    Match String
    A text box to enter the pattern that is to be matched.
    Modified URL
    A text box to enter the replacement URL.
    Perform If Flag Set
    Only execute this rule if the specified flag is set.
    The flag is set by a different rule.
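
    As a rough illustration, a Modify URL rule can be thought of as a pattern substitution, as in the Python sketch below; the pattern and replacement are example values, and the real rule is configured in the WUI fields above.

    import re

    def modify_url(url, match_string, replacement):
        return re.sub(match_string, replacement, url)

    # Rewrite /old/... requests to /new/...
    print(modify_url("/old/app/index.html", r"^/old/", "/new/"))   # /new/app/index.html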

     

    Header Modification

    Check Parameters

    To access the Check Parameters screen, go to Rules & Checking > Check Parameters in the main menu of the LoadMaster WUI. The Check Parameters screen has two sections - Service Check Parameters and either Adaptive Parameters or SDN Adaptive Parameters, depending on the Scheduling Method selected in the Virtual Services. If the Scheduling Method is set to resource based (adaptive), the Adaptive Parameters section is displayed. If the Scheduling Method is set to resource based (SDN adaptive), the SDN Adaptive Parameters section is displayed.
    Refer to the relevant section below to find out more information.

     

    Service (Health) Check Parameters

    The LoadMaster utilizes Layer 3, Layer 4 and Layer 7 health checks to monitor the availability of the Real Servers and the Virtual Services.

    Figure 8‑7: Service Check Parameters

    Check Interval(sec)
    With this field you can specify the number of seconds that will pass between consecutive checks. The recommended value is 9 seconds.
    Connect Timeout (sec)
    The HTTP request has two steps: contact the server, and then retrieve the file. A timeout can be specified for each step, i.e. how long to wait for a connection, how long to wait for a response. A good value for both is 3 seconds.

    Retry Count
    This specifies the number of retry attempts the check will make before it determines that the server is not functioning. A value of 1 or less disables retries.

    Adaptive Parameters


    Figure 8‑8: Adaptive Parameters

    Adaptive Interval (sec)
    This is the interval, in seconds, at which the LoadMaster checks the load on the servers. A low value means the LoadMaster is very sensitive to load, but this comes at a cost of extra load on the LoadMaster itself. 7 seconds is a good starting value. This value must not be less than the HTTP checking interval.

    Adaptive URL
    The Adaptive method retrieves load information from the servers via an HTTP inquiry. This URL specifies the file where the load information of the servers is stored. The standard location is /load. It is the servers' job to provide the current load data in this file in ASCII format. In doing so, the following must be considered:
    The file must be an ASCII file containing a value in the range of 0 to 100 in the first line, where 0 = idle and 100 = overloaded. As the number increases, i.e. the server becomes more heavily loaded, the LoadMaster will pass less traffic to that server. Hence, it ‘adapts’ to the server loading.

    The file is set to "/load" by default.
    The file must be accessible via HTTP.

    The URL must be the same for all servers that are to be supported by the adaptive method.
    This feature is not only of interest for HTTP-based Virtual Services, but for all Services. HTTP is merely used as the transport method for extracting the application-specific load information from the Real Server.
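
    As an example of the server-side half of this arrangement, the Python sketch below writes a one-line ASCII load value (0-100) to a file that the web server can serve at the Adaptive URL. The file path and the load calculation are assumptions for the example; any method that produces a value in that range will do.

    import os

    def write_load_file(path="/var/www/html/load"):   # path is an assumption
        one_minute_load, _, _ = os.getloadavg()        # Unix-only load average
        cpus = os.cpu_count() or 1
        load_pct = min(100, int(100 * one_minute_load / cpus))
        with open(path, "w") as f:
            f.write("%d\n" % load_pct)                 # 0 = idle, 100 = overloaded

    write_load_file()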

    Port
    This value specifies the port number of the HTTP daemon on the servers. The default value is 80.

    Min. Control Variable Value (%)
    This value specifies a threshold below which the balancer will switch to static weight-based scheduling, i.e. normal Weighted Round Robin. The value is a percentage of the maximum load (0-50). The default is 5.

     

    SDN Adaptive Parameters


    Figure 8‑9: SDN Adaptive Parameters

    Adaptive Interval (sec)
    When using SDN-adaptive scheduling, the SDN controller is polled to retrieve the loading values for the Real Server. This field value specifies how often this occurs.

    Average over Load values
    Use this value to dampen fluctuations in the system.

    Min. Control Variable Value (%)
    Anything below the value set here is considered idle traffic and it does not affect the adaptive value (which is displayed on the Real Servers Statistics screen), for example - in the screenshot above anything below 5% is considered idle.

    Use relative Bandwidth
    Use the maximum load observed on the link as link bandwidth. KEMP recommends enabling this option.

    Current max. Bandwidth values
    This section displays the current received and transmitted maximum bandwidth values.

    Reset values
    This checkbox can be used to reset the current max. bandwidth values.

     

    Certificates

     

    SSL Certificates

    The SSL certificates screen looks different depending on whether the Hardware Security Module (HSM) feature is enabled or not.

    Refer to the relevant section below, depending on your settings, to find out more information about the SSL certificates screen.

     

    HSM Not Enabled


    Figure 9‑1: SSL Certificates

    Shown above is the Manage Certificates screen, where:
    Import Certificate – imports a certificate with a chosen filename.
    Add Intermediate – adds an intermediate certificate.
    Identifier – the name given to the certificate at the time it was created.
    Common Name(s) – the FQDN (Fully Qualified Domain Name) for the site.
    Virtual Services – the Virtual Service with which the certificate is associated.
    Assignment – lists the available and assigned Virtual Services.
    Operations
    • New CSR – generates a new Certificate Signing Request (CSR) based on the current certificate.
    • Replace Certificate – updates or replaces the certificate stored in this file.
    • Delete Certificate – deletes the relevant certificate.
    • Reencryption Usage – displays the Virtual Services that are using this certificate as a client certificate when re-encrypting.
    Administrative Certificates – the certificate to use, if any, for the administrative interface.
    TPS performance will vary based on key length; larger keys reduce performance.

    HSM Enabled

    Private Key Identifier
    When HSM is enabled, the Generate CSR option moves from the main menu of the LoadMaster to the Manage Certificates screen.

    Enter a recognizable name for the private key on the LoadMaster and click Generate CSR. The fields on the generate CSR screen are the same as the ones described above, except that the Use 2048 bit key field is not included.

    Add Intermediate
    Private Key – this column displays the private key name.
    Common Name(s) – the FQDN (Fully Qualified Domain Name) for the site.
    Virtual Services – the Virtual Service with which the certificate is associated.
    Assignment – lists the available and assigned Virtual Services.
    Operations
    • Import Certificate – import the certificate associated with this key
    • Delete Key – delete this private key and/or certificate
    • Show Reencrypt Certs – display the re-encrypt certificates

    Intermediate Certificates


    Figure 9‑2: Intermediate Certificates
    This screen shows a list of the installed intermediate certificates and the name assigned to them.
    Figure 9‑3: Install Intermediate Certificates

    If you already have a certificate, or you have received one from a CSR, you can install the certificate by clicking the Choose File button. Navigate to and select the certificate and then enter the desired Certificate Name. The name can only contain alpha characters with a maximum of 32 characters.
    Uploading several consecutive intermediate certificates within a single piece of text, as practiced by some certificate vendors such as GoDaddy, is allowed. The uploaded file is split into the individual certificates.
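    For example (file names are illustrative), separate intermediate files supplied by a CA can be combined into a single file before uploading:

    cat intermediate1.pem intermediate2.pem > ca-chain.pem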

     

    Generate CSR (Certificate Signing Request)

    If you do not have a certificate, you may complete the Certificate Signing Request (CSR) form and click the Create CSR button. CSRs generated by the LoadMaster use SHA256.

    Figure 9‑4: Create CSR

    2 Letter Country Code (ex. US)
    The 2 letter country code that should be included in the certificate, for example US should be entered for the United States.

    State/Province (Entire Name – New York, not NY)
    The state which should be included in the certificate. Enter the full name here, for example New York, not NY.

    City
    The name of the city that should be included in the certificate.

    Company
    The name of the company which should be included in the certificate.

    Organization (e.g., Marketing, Finance, Sales)
    The department or organizational unit that should be included in the certificate.

    Common Name
    The Fully Qualified Domain Name (FQDN) for your web server.

    Email Address
    The email address of the responsible person or organization that should be contacted regarding this certificate.

    SAN/UCC Names
    A space-separated list of alternate names.

    Use 2048 bit key
    This field does not appear on this form if the HSM feature is enabled on the LoadMaster.
    Select whether or not to use a 2048 bit key.

    After clicking the Create CSR button, the following screen appears:

    Figure 9‑5: CSR unsigned certificate and private key

    The top part of the screen should be copied and pasted into a plain text file and sent to the Certificate Authority of your choice. They will validate the information and return a validated certificate.

    The lower part of the screen is your private key and should be kept in a safe place. This key should not be disseminated as you will need it to use the certificate. Copy and paste the private key into a plain text file (do not use an application such as Microsoft Word) and keep the file safe.
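    As an optional sanity check once the signed certificate comes back (file names below are placeholders), you can confirm that it matches the private key you saved by comparing the key moduli with openssl:

    openssl rsa  -noout -modulus -in example.key | openssl md5
    openssl x509 -noout -modulus -in example.crt | openssl md5
    # The two digests should be identical if the certificate belongs to this key.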

     

    Backup/Restore Certificates

    This screen will be different depending on whether HSM has been enabled or not. Refer to the relevant section below, depending on the LoadMaster configuration.

     

    HSM Not Enabled


    Figure 9‑6: Backup/Restore Certs - HSM not enabled

    Backup all VIP and Intermediate Certificates: When backing up certificates, you will be prompted to enter a mandatory passphrase (password) twice. The passphrase must be alphanumeric, is case sensitive and can be a maximum of 64 characters.

    Caution
    This passphrase is a mandatory requirement to restore a certificate. A certificate cannot be restored without the passphrase. If it is forgotten, there is no way to restore the certificate.

    Backup File: select the certificate backup file
    Which Certificates: select which certificates you wish to restore
    Passphrase: enter the passphrase associated with the certificate backup file

     

    HSM Enabled

    Backup Intermediate Certificates: When backing up certificates, enter a mandatory passphrase (password) twice. The passphrase must be alphanumeric, is case sensitive and can be a maximum of 64 characters.

    Caution
    This passphrase is a mandatory requirement to restore a certificate. A certificate cannot be restored without the passphrase. If it is forgotten, there is no way to restore the certificate.

    Intermediate Certificate Backup File: select the intermediate certificate backup file
    Passphrase: enter the passphrase associated with the certificate backup file

     

    OCSP Configuration


    Figure 9‑7: OCSP Server Settings

    OCSP Server
    The address of the OCSP server.
    OCSP Server Port
    The port of the OCSP server.
    OCSP URL
    The URL to access on the OCSP server.
    Use SSL
    Select this to use SSL to connect to the OCSP server.

    Allow Access on Server Failure
    Treat an OCSP server connection failure or timeout as if the OCSP server had returned a valid response, i.e. treat the client certificate as valid.
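    For troubleshooting, a client-certificate check similar to the one performed against the OCSP server can be reproduced manually with openssl; the file names and responder URL below are placeholders:

    openssl ocsp -issuer issuing-ca.pem -cert client-cert.pem \
        -url http://ocsp.example.com/ -resp_text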

     

    HSM Configuration


    Figure 9‑8: No HSM Support

    Please select a HSM subsystem
    This drop-down menu has two options:
    • No HSM Support
    • Safenet Luna HSM
    To use HSM, select Safenet Luna HSM and configure the settings.

    Figure 9‑9: Safenet HSM Configuration

    Address of the Safenet HSM
    Enter the IP address of the Safenet unit to be used.
    Upload the CA certificate
    Upload the certificate that has been downloaded from the HSM.

    Generate the HSM Client Certificate
    Generate the local client certificate that is to be uploaded to the HSM. The name specified here should be the LoadMaster FQDN name. This name should be used in the client register command on the HSM.

    Password for the HSM partition
    Specify the password for the partition on the HSM so that the LoadMaster can access the HSM.
    The partition password cannot be set here until the certificates have been generated.

    Enable Safenet HSM
    This check box can be used to enable or disable HSM.
    Starting the HSM may take some time.

    Disabling the HSM will cause the LoadMaster to be unable to create new SSL (HTTPS) connections and will immediately drop existing connections until another HSM is added or the certificate configuration is changed.

    It is strongly recommended to only change the HSM configuration when there are no active SSL connections.

     

    System Configuration

     

    Network Setup

     

    Interfaces

    This screen describes the external and internal network interfaces. The screen has the same information for the eth0 and eth1 Ethernet ports. The example below is for eth0 on a non-HA (High Availability) unit.

    Figure 10‑1: Network Interface options

    Interface Address
    Within the Interface Address (address[/prefix]) text box you can specify the Internet address of this interface.

    Cluster Shared IP address
    Specify the shared IP address which can be used to access the cluster. This is also used as the default source address when using Server NAT.
    The clustering options will only be available on LoadMasters which have a clustering license. To add the clustering feature to your license, please contact a KEMP representative. 

    Use for Cluster checks
    Use this option to enable cluster health checking between the nodes. At least one interface must be enabled.

    Use for Cluster Updates
    Use this interface for cluster synchronization operations.


    Speed
    By default, the Speed of the link is automatically detected. In certain configurations, this speed is incorrect and must be forced to a specific value.

    Use for Default Gateway
    The Use for Default Gateway checkbox is only available if the Enable Alternate GW support is selected in the Network Options screen. If the settings being viewed are for the default interface this option will be greyed out and selected. To enable this option on another interface, go to the other interface by clicking it in the main menu on the left. Then this option is available to select.

    Allow Administrative WUI Access
    This option is only available when the Allow Multi Interface Access check box is enabled in Miscellaneous Options > Remote Access.
    When both of these options are enabled, the WUI can be accessed from the IP address of the relevant interface and from any Additional addresses set up for that interface.

    There is only one interface attached to all of these addresses, so there may be issues with this unless the certificate used is a wildcard certificate.

    A maximum of 64 network interfaces can be tracked, and the system will listen on a maximum of 1024 addresses in total.

    Use for GEO Responses and Requests
    By default, only the default gateway interface is used to listen for and respond to DNS requests. This field gives you the option to listen on additional interfaces.

    This option cannot be disabled on the interface containing the default gateway. By default, this is eth0.
    When this option is enabled, GEO also listens on any Additional addresses that are configured for the interface.

    MTU
    Within the MTU field you can specify the maximum size of Ethernet frames that will be sent from this interface. The valid range is 512 - 9216.

    The valid range of 512 - 9216 may not apply to VLMs as the range will be dependent on the hardware the VLM is running on. It is advised to check your hardware restrictions.

    Additional addresses
    Using the Additional addresses field allows the LoadMaster to give multiple addresses to each interface, as aliases. This is sometimes referred to as a “router on a stick”. It allows both IPv4 and IPv6 addresses in standard IP+CIDR format, so this can also be used to do a mixed mode of IPv4 and IPv6 addresses on the same interface. Any of the subnets that are added here will be available for both virtual IPs and real server IPs.

    HA
    If the unit is part of a HA configuration, the following screen will be displayed when one of the interfaces is clicked.

    Figure 10‑2: Network Interface Management - HA

    This screen tells the user:
    • This is the Master machine of the pair (top-right of the screen)
    • This LoadMaster is up and the paired machine is down (green and red icons)
    • The IP address of this LoadMaster
    • The HA Shared IP address. This is the IP address used to configure the pair.
    • The IP address of the paired machine
    • This interface is enabled for HA healthchecking
    • This interface is used as the Default Gateway
    • The speed of the link is automatically detected
    • Any alternate addresses on this interface
    Creating a Bond/Team
    Before creating a bonded interface please note the following:
    • You can only bond interfaces higher than the parent, so if you choose to start with port 10 then you can only add ports 11 and greater
    • Bond the links first; if you need VLAN tagging, add the VLANs after the bond has been configured
    • In order to add a link to a bonded interface, any IP addressing must first be removed from the link to be added
    • Enabling the Active-Backup mode generally does not require switch intervention
    • Bonding eth0 with eth1 can lead to serious issues and is not allowed to occur
    Click the Interface Bonding button to request the bond.

    Confirm the bond creation by clicking the Create a bonded interface button.
    Acknowledge the warning dialogs.

    Using the Web User Interface (WUI) select the System Configuration > Interfaces > bndx menu option.
    If you do not see the bndX interface, refresh your browser, then select the bonded interface and click the Bonded Devices button.

    Select the desired bonding mode.
    Add the additional interfaces to this bond.
    Configure the IP and Subnet Mask on the bonded interface.

    Removing a Bond/Team
    Remove all VLANs on the bonded interface first; if you do not remove them they will automatically be assigned to the physical port at which the bond started.

    Select the System Configuration > Interfaces > bndx menu option. If you do not see the bndX interface refresh your browser, then select the bonded interface, then click the Bonded Devices button.
    Unbind each port by clicking the Unbind Port button; repeat until all ports have been removed from the bond.
    Once all child ports have been unbound, you can unbond the parent port by clicking the Unbond this interface button.

    Adding a VLAN
    Select the interface and then select the VLAN Configuration button.

    Figure 10‑3: VLAN Id

    Add the VLAN Id value and select the Add New VLAN menu option.
    Repeat as needed. To view the VLANs, select the System Configuration > Interfaces menu option.

    Removing a VLAN
    Before removing a VLAN, please ensure that the interface is not being used for other purposes, for example as a multicast interface, WUI interface, SSH interface or a GEO interface.

    To remove a VLAN select the System Configuration > Interfaces menu option and select the appropriate VLAN ID from the drop-down list.

    Once selected, delete the IP and then click Set Address. Once the IP has been removed you will have the option to delete the VLAN, by clicking the Delete this VLAN button.

    Repeat as needed. To view the VLANs select the System Configuration > Interfaces menu option and select the appropriate VLAN ID from the drop-down list.

    Adding a VXLAN
    Select the relevant interface and then click the VXLAN Configuration button.

    Figure 10‑4: Add New VXLAN

    Enter a new VXLAN Network Identifier (VNI) in the VNI text box. Enter the multicast group or remote address in the Group or Remote address text box. Click Add New VXLAN.
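    For reference only, the VNI and multicast group entered here correspond to what a plain Linux host would configure with iproute2 (the LoadMaster itself is configured through the WUI as described above; the values are examples):

    ip link add vxlan10 type vxlan id 10 group 239.1.1.1 dev eth0 dstport 4789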

    To modify the VXLAN, go to System Configuration > Interfaces and select the VXLAN from the drop-down list.

    Figure 10‑5: Modify VXLAN

    On this screen, the interface address of the VXLAN can be specified. The VXLAN can also be deleted from this screen.

    If HA is enabled, HA parameters can be set in the VXLAN:
    • The HA Shared IP address. This is the IP address used to configure the HA pair.
    • The IP address of the partner machine
    • Specify whether or not this interface is used for HA health checking

    Host & DNS Configuration


    Figure 10‑6: Hostname & DNS Configuration

    Set Hostname
    Set the hostname of the local machine by entering the hostname in the Hostname text box and clicking the Set Hostname button. Only alphanumeric characters are allowed.

    Add NameServer (IP Address)
    Enter the IP address of a DNS server that will be used to resolve names locally on the LoadMaster in this field and click the Add button. A maximum of three DNS servers are allowed.

    Add Search Domain
    Specify the domain name that is to be prepended to requests to the DNS NameServer in this field and click the Add button. A maximum of six Search Domains are allowed.

    Default Gateway

    The LoadMaster requires a default gateway through which it can communicate with the Internet.

    Figure 10‑7: Default Gateway

    If both IPv4 and IPv6 addresses are being used on the LoadMaster, then both an IPv4 and IPv6 Default Gateway Address are required.

    IPv4 and IPv6 default gateways must be on the same interface.

     

    Additional Routes


    Figure 10‑8: Additional Routes

    Further routes can be added. These routes are static and the gateways must be on the same network as the LoadMaster. To segment traffic you can also leverage the Virtual Service level default gateway.

     

    Packet Routing Filter


    Figure 10‑9: Packet Filter

    Packet Routing Filter
    If GEO is enabled, the Packet Routing Filter is enabled by default and cannot be disabled. If GEO is disabled, the Packet Routing Filter is configurable – it can be either enabled or disabled. To disable GEO, on a LoadMaster which has GEO functionality, in the main menu, select Global Balancing and Disable GSLB.

    If the filter is not activated, the LoadMaster also acts as a simple IP-forwarder.
    When the filter is activated, client-to-LoadMaster access to Virtual Services is unaffected. Real Server initiated traffic that is processed on the LoadMaster with SNAT is also unaffected.

    Reject/Drop blocked packets
    When an IP packet is received from a host, which is blocked using the Access Control Lists (ACLs), the request is normally ignored (dropped). The LoadMaster may be configured to return an ICMP reject packet, but for security reasons it is usually best to drop any blocked packets silently.

    Restrict traffic to Interfaces
    This setting enforces restrictions upon routing between attached subnets.

    Add Blocked Address(es)
    The LoadMaster supports a “blacklist” Access Control List (ACL) system. Any host or network entered into the ACL will be blocked from accessing any service provided by the LoadMaster.

    The ACL is only enabled when the Packet Filter is enabled. The whitelist allows a specific IP address or address range access. If the address or range is part of a larger range in the blacklist, the whitelist will take precedence for the specified addresses.

    If a user does not have any addresses listed in their blacklist and only has addresses listed in their whitelist, then only connections from addresses listed on the whitelist are allowed and connections from all other addresses are blocked.

    This option allows a user to add or delete a host or network IP address to the Access Control List. In addition to IPv4 addresses - IPv6 addresses are allowed in the lists if the system is configured with an IPv6 address family. Using a network specifier specifies a network.

    For example, specifying the address 192.168.200.0/24 in the blacklist will block all hosts on the 192.168.200 network.

    A static port Virtual Service, with an access list defined to block particular traffic, will not work correctly if you also have a wildcard Virtual Service on the same IP address. The wildcard Virtual Service will accept the traffic after the static port Virtual Service denies it.

    It is recommended to use a separate IP address in this case to avoid unexpected behavior resulting from this interaction.

    VPN Management

    The VPN Management link/screen will only be available if the LoadMaster is licensed for IPsec tunneling.


    Figure 10‑10: VPN Management

    Connection Name
    Specify a unique name to identify the connection.
    Create
    Create a uniquely identifiable connection with the specified name.
    View/Modify
    View or modify the configuration parameters for this connection.
    Delete
    Delete this connection.

    All associated configuration will be permanently deleted. A connection can be deleted at any time, even if it is running.
     
    View/Modify VPN Connection

    Figure 10‑11: Modify Connection

    When initially creating a connection, or when modifying a connection, the View/Modify VPN Connection screen appears.

    Local IP Address
    Set the IP address for the local side of the connection.
    In non-HA mode, the Local IP Address should be the LoadMaster IP address, i.e. the IP address of the default gateway interface.

    In HA-mode, the Local IP Address should be the shared IP address. This will be automatically populated if HA has already been configured. For more information on setting up tunneling in a HA configuration, refer to the next section.

    Local Subnet Address
    When the Local IP Address is set, the Local Subnet Address text box is automatically populated. If applicable, the local IP address can be the only participant by specifying a /32 CIDR. Review the Local Subnet Address and update it if needed. Be sure to click Set Local Subnet Address to apply the setting, whether or not the address has been changed. Multiple local subnets can be specified using a comma-separated list. Up to 10 IP addresses can be specified.


    Remote IP Address
    Set the IP address for the remote side of the connection. In the context of an Azure endpoint, this IP address is expected to be the public-facing IP address for the Virtual Private Network (VPN) Gateway device.

    Remote Subnet Address
    Set the subnet for the remote side of the connection. Multiple remote subnets can be specified using a comma-separated list. Up to 10 IP addresses can be specified.

    Perfect Forward Secrecy
    Activate or deactivate the Perfect Forward Secrecy option.
    The cloud platform being used will determine what the Perfect Forward Secrecy option should be set to. Perfect Forward Secrecy is needed for some platforms but is unsupported on others. 

    Local ID
    Identification for the local side of the connection. This may be the local IP address. This field is automatically populated with the same address as the Local IP Address if the LoadMaster is not in HA mode.
    If the LoadMaster is in HA mode, the Local ID field will be automatically set to %any. This value cannot be updated when the LoadMaster is in HA mode.

    Remote ID
    Identification for the remote side of the connection. This may be the remote IP address.
    Pre Shared Key (PSK)

    Enter the pre-shared key string.
    Save Secret Information

    Generate and save the connection identification and secret information.

    Cluster Control

    The Cluster Control option will only be available on LoadMasters which have a clustering license. To add the clustering feature to your license, please contact a KEMP representative.

    Figure 10‑12: Cluster Control

    Before setting up clustering, clicking the Cluster Control menu item will give the option to either create a new cluster or add this LoadMaster to a cluster.

    Create New Cluster: If setting up a new cluster, click this button.
    Add to Cluster: Add this LoadMaster to an already existing cluster.

    Figure 10‑13: Creating a New Cluster

    When the Create New Cluster button is clicked, the screen above will appear which prompts to set the shared IP address of the cluster. The shared IP address is the address which will be used to administer the cluster.

    Figure 10‑14: Rebooting

    When the Create a Cluster button is clicked, the LoadMaster reboots. A message will appear asking to reconnect to the shared IP address that was just set.

    Figure 10‑15: Cluster Control

    After creating a cluster, the Cluster Control screen in the WUI of the shared IP address will allow the addition of LoadMaster nodes into the cluster.

    A LoadMaster can only be added to a cluster when it is available and waiting to join the cluster.

    Figure 10‑16: Cluster Control

    The Cluster Control screen, in the shared IP address WUI, displays details for each of the nodes in the cluster.

    Show Options: Clicking the Show Options button will display the Cluster Parameters section which contains two additional fields which can be used to set the Cluster Virtual ID and Node Drain Time.

    ID: The cluster ID.

    Address: The IP address of the LoadMaster node. If a second IP address appears in brackets after the first one - the second IP address is the IP address of the interface port. The IP address and status text will be coloured depending on the status:
    • Blue: The node is the master node.
    • Yellow: The node is disabled.
    • Green: The node is up.
    • Red: The node is down.
    Status: The status of the node. The possible statuses are:
    • Admin: The node is the primary control node.
    • Up: The node is up.
    • Down: The node is down.
    • Drain stopping: The node has been disabled and the connections are being shut down in an orderly fashion. Drain stopping lasts for 10 seconds by default. This can be updated by changing the Node Drain Time value on the Cluster Control screen. 
    • Disabled: The node is disabled - connections will not be sent to that node.
    Operation: The different operations that can be performed in relation to the nodes:
    • Add new node: Add a new node with the specified IP address to the cluster.
    • Disable: Disable the node. Nodes that are disabled will first go through drain stopping. During the drain stopping time, the connections are shut down in an orderly fashion. After the drain, the node will be disabled and no traffic will be directed to that node.
    • Enable: Enable the node. When a node comes up, it will not immediately be brought into rotation. It will only come online after it has been up for 30 seconds.
    • Delete: Delete a node from the cluster. When a node is deleted it becomes a regular single LoadMaster instance. If the LoadMaster is later added back in to the cluster, any configuration changes that have been made in the shared IP address will propagate to the node LoadMaster.
    • Reboot: When performing a cluster-wide firmware update, a Reboot button will appear on this screen after uploading the firmware update patch. 

     

    Cluster Parameters


    Figure 10‑17: Cluster Parameters

    When the Show Options button is clicked, the Cluster Parameters section appears. This section contains two additional WUI options - Cluster Virtual ID and Node Drain Time.

    Cluster Virtual ID
    When using multiple clusters or LoadMaster HA systems on the same network, the virtual ID identifies each cluster so that there are no potential unwanted interactions. The cluster virtual ID is set to 1 by default, but it can be changed if required. Valid IDs range from 1 to 255. Changes made on the admin LoadMaster propagate across all nodes in the cluster.

    Node Drain Time
    When a node is disabled, the connections that are still being served by the node are allowed to continue for the amount of seconds specified in the Node Drain Time text box. No new connections will be handled by the node during this time. The Node Drain Time is set to 10 seconds by default, but it can be changed if required. Valid values range from 1 to 600 (seconds).

    During the drain time the status changes to Draining until the specified drain time elapses.
    When the drain time has elapsed the status changes to disabled.

     

    System Administration

    These options control the base-level operation of the LoadMaster. It is important to know that applying changes to these parameters in a HA pair must be done using the floating management IP. Many of these options will require a system reboot. When configuring these parameters, only the active system in a pair is affected.

     

    User Management

    Change the appliance password. This is a local change only and does not affect the password of the partner appliance in a HA deployment.

    Figure 10‑18: User Management

    The User Management screen allows you to:
    • Change an existing user’s password by clicking the Password button in the Action section
    • Add a new user and associated password
    • Change the permissions for an existing user by clicking the Modify button in the Action section
    Usernames can contain alphanumeric characters, periods and underscores (‘.’ and ‘_‘). Usernames can be a maximum of 64 characters long.

    The Use RADIUS Server option allows you to determine whether the user will use RADIUS server authentication or not when logging on to the LoadMaster. The RADIUS Server details must be setup before this option can be used.

    RADIUS server can be used to authenticate users who wish to log on to the LoadMaster. The LoadMaster passes the user’s details to the RADIUS server and the RADIUS server informs the LoadMaster whether the user is authenticated or not.

    When Session Management is enabled, the Use RADIUS Server option is not available within this screen.

    Figure 10‑19: Permissions

    In this screen you may set the level of user permissions. This determines what configuration changes the user is allowed to perform. The primary user, bal, always has full permissions. Secondary users may be restricted to certain functions.

    Named users, even those without User Administration privileges, can change their own passwords. When a named user clicks the System Administration > User Management menu option the Change Password screen appears.

    Figure 10‑20: Change Password

    From within this screen, users can change their own password. Passwords must be a minimum of 8 characters long. Once changed, a confirmation screen appears after which the users will be forced to log back in to the LoadMaster using their new password.

     

    Update License

    This screen displays the activation date and the expiration date of the current license. Before updating the license in the LoadMaster, you must either contact your KEMP representative, or use the Upgrade option. After you have contacted KEMP or used the upgrade option, there are two ways to update a license – via the Online method and via the Offline method. Refer to the sections below to find out details about the screens for each method.
    Online Method

    Figure 10‑21: Update License - online method.

    To upgrade the license via the online method, the LoadMaster must be connected to the internet. You will need to enter your KEMP ID and Password to license via the online method.
    Offline Method

    Figure 10‑22: Update License – offline method

    To upgrade the license via the offline method, you need to enter license text in the LoadMaster. You can either get this from KEMP or via the Get License link.

    A reboot may be required depending on which license you are applying. If upgrading to an ESP license, a reboot is required after the update.
    Debug Options
    Some debug options have been included on the Update License screen which will help to troubleshoot problems with licensing.

    Figure 10‑23: Available Debug Options

    Clicking the Debug Options button displays three debug options:
    • Ping Default Gateway
    • Ping DNS Servers
    • Ping Licensing Server

    Figure 10‑24: Ping Results

    Clicking a ping button displays the results of the ping in the right hand column.

    The Clean ping logs button clears the information from the right hand column.

    System Reboot


    Figure 10‑25: System Reboot

    Reboot
    Reboot the appliance.

    Shutdown
    Clicking this button attempts to power down the LoadMaster. If, for some reason, the power down fails, it will at a minimum halt the CPU.

    Reset Machine
    Reset the configuration of the appliance with the exception of the license and username and password information. This only applies to the active appliance in a HA pair.

    Update Software


    Figure 10‑26: Update Software

    Contact support to obtain the location of firmware patches and upgrades. Firmware downloads require Internet access.

    Update Machine
    Once you have downloaded the firmware, you can browse to the file and upload it directly into the LoadMaster. The firmware will be unpacked and validated on the LoadMaster. If the patch is validated successfully, you will be asked to confirm the release information. To complete the update you will need to reboot the appliance. This reboot can be deferred if needed.

    Update Cluster
    The Update Cluster option will only be available on LoadMasters which have a clustering license. To add the clustering feature to your license, please contact a KEMP representative.

    The firmware on all LoadMasters in a cluster can be updated via the shared IP address by clicking the Update Cluster button.

    Restore Software
    If you have completed an update of the LoadMaster’s firmware, you can use this option to revert to the previous build.

    Figure 10‑27: Installed Addon Packages

    Installed Addon Packages
    Add-on packages can be installed on the KEMP LoadMaster. Add-on packages provide features that are additional to those already included in the LoadMaster. KEMP Technologies plans to create further add-on packages in the future.

    Add-On packages can be downloaded from the KEMP Technologies website: www.kemptechnologies.com
    To install an add-on package, click Choose File, browse to and select the file and click Install Addon Package. A reboot is required in order for the add-on package to be fully installed. If an add-on package of the same name is uploaded, the existing one will be overwritten/updated.

    If an installed add-on package cannot be started, the text will display in red and the hover text will show that the package could not be started.

     

    Backup/Restore


    Figure 10‑28: Backup and Restore

    Create Backup File
    Generate a backup that contains the Virtual Service configuration, the local appliance information and statistics data. License information and SSL Certificate information are not contained in the backup.
    For ease of identification, the Backup file name includes the LoadMaster’s hostname.

    Restore Backup
    When performing a restore (from a remote machine), the user may select what information should be restored: the VS Configuration only, LoadMaster Base Configuration only, Geo Configuration or a combination of the three options.

    It is not possible to restore a single machine configuration onto a HA machine and vice versa.
    It is not possible to restore a configuration with ESP-enabled Virtual Services onto a machine which is not enabled for ESP.

    Automated Backups
    If the Enable Automated Backups check box is selected, the system may be configured to perform automated backups on a daily or weekly basis.

    For ease of identification, the Backup file name includes the LoadMaster’s hostname.
    If the automated backups are not being performed at the correct time, ensure the NTP settings are configured correctly.

    When to perform backup
    Specify the time (24 hour clock) of backup. Also select whether to backup daily or on a specific day of the week. When ready, click the Set Backup Time button.

    In some situations, spurious error messages may be displayed in the system logs, such as:
    Dec 8 12:27:01 KEMP_1 /usr/sbin/cron[2065]: (system) RELOAD (/etc/crontab)
    Dec 8 12:27:01 KEMP_1 /usr/sbin/cron[2065]: (CRON) bad minute (/etc/crontab)
    These can be safely ignored and the automated backup will likely still complete successfully.

    Remote user
    Set the username required to access remote host.

    Remote password
    Set the password required to access remote host. This field accepts alpha-numeric characters and most non-alphanumeric characters. Disallowed characters are as follows:
    • Control characters
    • ‘ (apostrophe)
    • ` (grave)
    • The delete character

    Remote host
    Set the remote host name.

    Remote Pathname
    Set the location on the remote host to store the file.

    Test Automated Backups
    Clicking the Test Backup button performs a test to check if the automated backup configuration is working correctly. The results of the test can be viewed within the System Message File.
    The Automated Backup transfer protocol is currently FTP only.

     

    Date/Time

    You can manually configure the date and time of LoadMaster or leverage an NTP server.

    Figure 10‑29: Set Date and Time

    NTP host(s)
    Specify the host which is to be used as the NTP server. NTP is a strongly preferred option for a HA cluster. For a single unit it is at the user’s discretion. Clicking the Set NTP host button will refresh the time based on the details configured.

    If you do not have a local NTP server, refer to www.pool.ntp.org for a list of public NTP server pools which can be used.

    The time zone must always be set manually.

    Show NTP Authentication Parameters
    The LoadMaster supports NTPv4 which uses cryptographic signing to query a secure NTP server. This uses a simple authorization scheme which uses a shared secret and key to validate that the response from the server is actually valid. Enable the Show NTP Authentication Parameters check box to display the parameters that are needed to support NTP authenticated requests.

    NTP Shared Secret
    The NTP shared secret string. The NTP secret can be a maximum of 20 ASCII characters long or 40 hexadecimal characters long.

    NTP Key ID
    Select the NTP key ID. The values range from 1 to 99. Different key IDs can be used for different servers.

    NTP Key Type
    Select the NTP key type.
    In order for the NTPv4 feature to work, a file must be created on the server (/etc/ntp.keys) with one key per line in the following format:
    <key ID> M <NTP shared secret>
    ...
    <key ID> M <NTP shared secret>
    To enable the use of a key, specify the key ID in the trustedkey line of /etc/ntp.conf, i.e. if the key ID is 5 then you have to specify “trustedkey 5”. The trustedkey line can take multiple values, for example trustedkey 1 2 3 4 5 9 10.
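    For illustration only (the key ID and secret are placeholders), a server keyed with ID 5 and an MD5-type (M) secret would use entries such as:

    # /etc/ntp.keys on the NTP server
    5 M ExampleSharedSecret
    # /etc/ntp.conf on the same server
    keys /etc/ntp.keys
    trustedkey 5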

     

    Logging Options

    Logging of LoadMaster events can be both pushed and pulled from the appliance. It is important to note that log files on the LoadMaster are not historical; if the appliance reboots, the logs are reset. It is important to keep a record of events generated on the LoadMaster on a remote facility.

     

    System Log Files


    Figure 10‑30: System Log Files

    Boot.msg File - contains information, including the current version, during the initial starting of LoadMaster.
    Warning Message File - contains warnings logged during the operation of LoadMaster.
    System Message File - contains system events logged during the operation of LoadMaster. This includes both operating system-level and LoadMaster internal events.
    Nameserver Log File - show the DNS name server log.
    Nameserver Statistics - show the latest name server statistics.
    IPsec IKE Log - show the IPsec IKE log.
    WAF Event Log - contains logs for most recently triggered WAF rules.
    Audit Log File - contains a log for each action performed by a user, either via the API or the WUI. This will only function if session management is enabled.
    Reset Logs - will reset ALL log files.
    Save all System Log Files - is used if you need to send logs to KEMP support as part of a support effort. Click this button, save the files to your PC and forward them to KEMP support.
     
    Debug Options
    The LoadMaster has a range of features that will help you and KEMP Support staff with diagnosing connectivity issues. Clicking the Debug Options button will bring up the screen shown below.

    Figure 10‑31: Debug Options

    Disable All Transparency
    Disables transparency on every Virtual Service and forces them to use Layer 7. Use with caution.
    Enable L7 Debug Traces
    Generates log traffic in the message files. Due to the large amount of data being logged, it slows down L7 processing.
    Perform an l7adm
    Displays raw statistics about the L7 subsystem.
    Enable WAF Debug Logging
    Enable AFP debug traces.
    This generates a lot of log traffic. It also slows down AFP processing. Only enable this option when requested to do so by KEMP Technical Support. KEMP does not recommend enabling this option in a production environment.
    The AFP debug logs are never closed and they are rotated if they get too large. AFP needs to be disabled and re-enabled in all AFP-enabled Virtual Service settings in order to re-enable the debug logs. Alternatively, perform a rule update, with rules that are relevant for the Virtual Service(s).
    Enable IRQ Balance
    Enable this option only after consulting with KEMP support staff.
    Enable TSO
    Enable TCP Segmentation Offload (TSO).
    Only modify this option after consultation with KEMP Technical Support. Changes to this option will only take effect after a reboot.
    Enable Bind Debug Traces
    Enable bind debug trace logs for GEO.
    Enable FIPS 140-2 Level 1 Mode
    FIPS mode cannot be enabled if Session Management is disabled. 

    Switch to FIPS 140-2 level 1 certified mode for this LoadMaster. The LoadMaster must be rebooted to activate.

    A number of warnings will appear before enabling FIPS. If FIPS is enabled on a LoadMaster, it cannot easily be disabled. If FIPS has been enabled and you want to disable it, please contact KEMP Support.

    Figure 10‑32: FIPS-1 mode


    When a LoadMaster is in FIPS level 1 mode - FIPS-1 will appear in the top-right of the LoadMaster WUI.
    FIPS level 1 has a different set of ciphers to a non-FIPS LoadMaster. There is a Default cipher set and there are no other system-defined cipher sets to choose from.

    Perform a PS
    Performs a ps on the system.
    Display Meminfo
    Displays raw memory statistics.
    Display Slabinfo
    Displays raw slab statistics.
    Perform an Ifconfig
    Displays raw Ifconfig output.
    Perform a Netstat
    Displays Netstat output.
    Reset Statistic Counters
    Reset all statistics counters to zero.
    Flush OCSPD Cache

    When using OCSP to verify client certificates, OCSPD caches the responses it gets from the OCSP server. This cache can be flushed by pressing this button. Flushing the OCSPD cache can be useful when testing, or when the Certificate Revocation List (CRL) has been updated.
    Stop IPsec IKE Daemon
    Stop the IPsec IKE daemon on the LoadMaster.
    If this button is clicked, the connection for all tunnels will go down.
    Perform an IPsec Status
    Display the raw IPsec status output.
    Enable IKE Debug Level Logs
    Control the IPsec IKE log level.

    Flush SSO Authentication Cache
    Clicking the Flush SSO Cache button flushes the Single Sign-On cache on the LoadMaster. This has the effect of logging off all clients using Single Sign-On to connect to the LoadMaster.
    Linear SSO Logfiles

    By default, older log files are deleted to make room for newer log files, so that the filesystem does not become full. Selecting the Linear SSO Logfiles check box prevents older files from being deleted.
    When using Linear SSO Logging, if the log files are not periodically removed and the file system becomes full, access to ESP-enabled Virtual Services will be blocked, preventing unlogged access to the virtual service. Access to non-ESP enabled Virtual Services are unaffected by the Linear SSO Logfile feature.

    Netconsole Host
    The syslog daemon on the specified host will receive all critical kernel messages. The syslog server must be on the local LAN and the messages sent are UDP messages.

    You can select which interface the Netconsole Host is set to via the Interface dropdown.
    Please ensure that the netconsole host specified is on the selected interface as errors may occur if it is not.

    Ping Host
    Performs a ping on the specified host. The interface which the ping should be sent from can be specified in the Interface drop-down list. The Automatic option selects the correct interface to ping an address on a particular network.

    Traceroute Host
    Perform a traceroute of a specific host.

    Kill LoadMaster
    Permanently disables all LoadMaster functions. The LoadMaster can be re-enabled by being relicensed.
    Please do not kill your LoadMaster without consulting KEMP Technical Support.

    The Kill LoadMaster option will not be available in LoadMasters which are tenants of the KEMP Condor.

    Figure 10‑33: TCP dump

    TCP dump
    A TCP dump can be captured either by one or all Ethernet ports. Address and port parameters, as well as optional parameters may be specified. The maximum number of characters permitted in the Options text box is 255.

    You can stop and start the dump. You can also download it to a particular location. The results of the TCP dump can then be analysed in a packet trace analyser tool such as Wireshark.
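    For example, a downloaded capture can be examined locally with tcpdump or Wireshark; the file name and filter below are illustrative:

    tcpdump -nn -r loadmaster.pcap 'host 10.0.0.5 and port 443'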

    Extended Log Files

    The Extended Log Files screen provides options for logs relating to the ESP and AFP features. These logs are persistent and will be available after a LoadMaster reboot. To view all of the options click on the icons.

    The AFP logs are not generated in real time – they can be up to two minutes behind what the AFP engine is actually processing.

    Figure 10‑34: ESP Options

    There are four types of log files relating to ESP and WAF stored on the LoadMaster:
    • ESP Connection Log: logs recording each connection
    • ESP Security Log: logs recording all security alerts
    • ESP User Log: logs recording all user logins
    • WAF Audit Logs: logs recording WAF events based on what has been selected for the Audit mode drop-down list in the WAF Options section of the Virtual Service modify screen. The number listed in each log entry corresponds to the ID of the Virtual Service. To get the Virtual Service ID, first ensure that the API interface is enabled (System Configuration > Miscellaneous Options > Remote Access > Enable API Interface). Then, in a web browser address bar, enter https://<LoadMaster IP address>/access/listvs (see the example request after this list). Check the index of the Virtual Service. This is the number that corresponds to the number on the audit log entry.
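    A minimal example of the same request from the command line, assuming the API interface is enabled (the address and credentials are placeholders):

    curl -k -u bal:password "https://192.0.2.10/access/listvs"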
    To view the logs please click the relevant View button.

    The logs viewed can be filtered by a number of methods. If you wish to view logs between a particular date range, select the relevant dates in the from and to fields and click the View button. One or more archived log files can be viewed by selecting the relevant file(s) from the list of file names and clicking the View button. You can also filter the log files by entering a word(s) or regular expression in the filter field and clicking the View button.

    Clear Extended Logs
    All extended logs can be deleted by clicking the Clear button.
    Specific log files can be deleted by filtering on a specific date range, selecting one or more individual log files in the log file list or selecting a specific log type (for example connection, security or user) in the log file list and clicking the Clear button. Click OK on any warning messages.

    Save Extended Logs
    All Extended logs can be saved to a file by clicking the Save button.
    Specific log files can be saved by filtering on a specific date range, selecting one or more individual log files in the log file list or selecting a specific log type (for example connection, security or user) in the log file list and clicking the Save button.

     

    Syslog Options

    The LoadMaster can produce various warning and error messages using the syslog protocol. These messages are normally stored locally.

    Figure 10‑35: Syslog Options

    It is also possible to configure the LoadMaster to transmit these error messages to a remote syslog server by entering the relevant IP address in the relevant field and clicking Change Syslog Parameters.

    Six different error message levels are defined and each message level may be sent to a different server. Notice messages are sent for information only; Emergency messages normally require immediate user action.
    Up to ten individual IP addresses can be specified for each of the Syslog fields. The IP addresses must be entered as a space-separated list.

    Examples of the type of message that may be seen after setting up a Syslog server are below:
    • Emergency: Kernel-critical error messages
    • Critical: Unit one has failed and unit two is taking over as master (in a HA setup)
    • Error: Authentication failure for root from 192.168.1.1
    • Warn: Interface is up/down
    • Notice: Time has been synced
    • Info: Local advertised Ethernet address
    One point to note about syslog messages is that they cascade in an upwards direction. Thus, if a host is set to receive WARN messages, the message file will include messages from all levels above WARN but none from levels below.

    We recommend you do not set all six levels for the same host because multiple messages for the same error will be sent to the same host.

    To enable a syslog process on a remote Linux server to receive syslog messages from the LoadMaster, the syslog must be started with the “-r” flag.
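    On distributions that ship rsyslog instead of the classic syslogd, the equivalent is to enable the UDP input; a minimal sketch, assuming rsyslog with its default configuration layout:

    # /etc/rsyslog.conf (or a file in /etc/rsyslog.d/)
    module(load="imudp")
    input(type="imudp" port="514")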

     

    SNMP Options

    With this menu, the SNMP configuration can be modified.

    Figure 10‑36: SNMP Options

    Enable SNMP
    This check box enables or disables SNMP metrics. For example, this option allows the LoadMaster to respond to SNMP requests.

    By default SNMP is disabled.
    When the feature is enabled, the following traps are generated:
    • ColdStart: generic (start/stop of SNMP sub-system)
    • VsStateChange: (Virtual Service state change)
    • RsStateChange: (Real Server state change)
    • HaStateChange: (HA configuration only: LoadMaster failover)
    When using SNMP monitoring of ESP-enabled Virtual Services that were created using a template, ensure you monitor each SubVS directly rather than relying on the master service. This is because the Authentication Proxy sub-service will always be marked as up and, as a consequence, so will the master service.

    The information regarding all LoadMaster-specific data objects is stored in three enterprise-specific MIBs (Management Information Base).
    IPVS-MIB.txt – Virtual Server statistics
    B-100-MIB.txt – L7 LoadMaster configuration and status information
    ONE4NET-MIB.txt – Enterprise ID

    These MIBs (located on the KEMP website) need to be installed on the SNMP manager machine in order to be able to request the performance-/config-data of the LoadMaster via SNMP.

    The description of the counters can be taken from the LoadMaster MIBs (the DESCRIPTION clause). Apart from just reading the MIB, this can be done on Linux (with ucd-snmp/net-snmp) with the command:
    snmptranslate -Td -OS <OID>
    where <OID> is the object identifier in question.
    Example: <OID> = .1.3.6.1.4.1.one4net.ipvs.ipvsRSTable.rsEntry.RSConns
    snmptranslate -Td -Ov .1.3.6.1.4.1.one4net.ipvs.ipvsRSTable.rsEntry.RSConns
    .1.3.6.1.4.1.12196.12.2.1.12
    RSConns OBJECT-TYPE
      -- FROM IPVS-MIB
      SYNTAX      Counter32
      MAX-ACCESS  read-only
      STATUS      current
      DESCRIPTION "the total number of connections for this RS"
    ::= { iso(1) org(3) dod(6) internet(1) private(4) enterprises(1) one4net(12196) ipvs(12) ipvsRSTable(2) rsEntry(1) 12 }

    The data objects defined in the LoadMaster MIBs are a superset of the counters displayed by the WUI.
    The data objects on the LoadMaster are not writable, so only GET requests (GET, GET-NEXT, GET-BULK, and so on) should be used.
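    For example, once the MIBs are installed, the LoadMaster's enterprise subtree (one4net, 12196) can be read with a GET-style walk; the community string and address below are placeholders:

    snmpwalk -v2c -c public 192.0.2.10 .1.3.6.1.4.1.12196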

    Enable SNMP V3
    This check box enables SNMPv3 metrics. SNMPv3 primarily added security and remote configuration enhancements to SNMP.

    When this option is enabled, two additional fields become available - Username and Password.
    The Username and Password must be set in order for SNMPv3 to work.
    The password must be at least 8 characters long.

    Authentication protocol
    Select the relevant Authentication protocol - MD5 or SHA. SHA is recommended.

    Privacy protocol
    Select the relevant Privacy protocol - AES or DES. AES is recommended.

    SNMP Clients
    With this option, the user can specify which SNMP management hosts the LoadMaster will respond to.
    If no client has been specified, the LoadMaster will respond to SNMP management requests from any host.


    SNMP Community String
    This option allows the SNMP community string to be changed. The default value is “public”.
    Allowed characters in the Community String are as follows: a-z, A-Z, 0-9, _.-@()?#%^+~!.

    Contact
    This option allows the SNMP Contact string to be changed. For example, this could be the email address of the administrator of the LoadMaster.

    SNMP Location
    This option allows the SNMP location string to be changed.
    This field accepts the following characters:
    a-z A-Z 0-9 _ . - ; , = : { } @ ( ) ? # % ^ + ~ !
    Do not enter a hashtag symbol (#) as the first character in the Location.

    SNMP traps
    When an important event happens to a LoadMaster, a Virtual Service or a Real Server, a trap is generated. These are sent to the SNMP trap sinks. If a change is made, the LoadMaster waits for all changes to finish and then waits a further five seconds before reading the new state. At that point, all changes will have stabilized and the SNMP traps can be sent. If there are any state changes within the five-second wait, the state changes are handled and the wait is restarted.

    Enable/Disable SNMP Traps
    This toggle option enables and disables the sending of SNMP traps.
    SNMP traps are disabled by default.

    Send SNMP traps from the shared address
    This check box is only visible when the LoadMaster is in HA mode.
    By default, SNMP traps are sent using the IP address of the master HA unit as the source IP address. Enabling this option will send SNMP traps from the master HA unit using the shared IP address.

    SNMP Trap Sink1
    This option allows the user to specify a list of hosts to which a SNMPv1 trap will be sent when a trap is generated.

    SNMP Trap Sink2
    This option allows the user to specify a list of hosts to which a SNMPv2 trap will be sent when a trap is generated.
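    A minimal trap receiver on a Linux management host can be sketched with net-snmp's snmptrapd (assuming the default "public" community; adjust to your environment):

    # /etc/snmp/snmptrapd.conf
    authCommunity log public
    # run in the foreground, logging to standard output
    snmptrapd -f -Lo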

     

    Email Options

    This screen permits the configuration of email alerting for LoadMaster events. Email notification can be delivered for six predefined informational levels. Each level can have a distinct email address and each level supports multiple email recipients. Email alerting depends on a mail server; support for both an open relay mail server and a secure mail server is provided.

    Figure 10‑37: Email Options

    SMTP Server
    Enter the FQDN or IP address of the mail server. If you are using an FQDN, please make sure to set the DNS Server.

    Port
    Specify the port of the SMTP server which will handle the email events.


    Server Authorization (Username)
    Enter the username if your mail server requires authorization for mail delivery. This is not required if your mail server does not require authorization.

    Authorization Password
    Enter the password if your mail server requires authorization for mail delivery. This is not required if your mail server does not require authorization.

    Local Domain
    Enter the top-level domain, if your mail server is part of a domain. This is not a required parameter.

    Connection Security
    Select the type of security for the connection (an optional way to verify STARTTLS support is shown after this list):
    • None
    • STARTTLS, if available
    • STARTTLS
    • SSL/TLS
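    If STARTTLS is selected, one way to confirm that the mail server offers it (host name and port are examples) is:

    openssl s_client -starttls smtp -connect mail.example.com:587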
    Set Email Recipient
    In the various Recipients text boxes, enter the email address that corresponds with the level of notification desired. Multiple email addresses are supported by a comma-separated list, such as:

    Info Recipients: info@kemptechnologies.com, sales@kemptechnologies.com
    Error Recipients: support@kemptechnologies.com

    Clicking the Send Test Email to All Recipients button sends a test email to all the listed email recipients.

     

    SDN Log Files


    Figure 10‑38: SDN Log Files

    The SDN Log Files screen provides options for logs relating to the SDN feature. To view all of the options click the icons.

    View SDNstats Logs
    To view the SDNstats logs please select the relevant log files and click the View button.

    The sdnstats.log file is the main, rolling log file. The .gz files are backups of logs for a particular day.
    One or more archived log files can be viewed by selecting the relevant file(s) from the list of file names and clicking the View button. The log files can be filtered by entering a word(s) or regular expression in the filter field and clicking the View button.


    View SDNstats Traces
    This option is only available if SDNstats debug logging is enabled (System Configuration > Logging Options > SDN Log Files > Debug Options > Enable Debug Log).

    To view the SDNstats logs please select the relevant log files and click the View button.
    One or more archived log files can be viewed by selecting the relevant file(s) from the list of file names and clicking the View button. The log files can be filtered by entering a word(s) or regular expression in the filter field and clicking the View button.

    Clear Logs
    All SDN logs can be deleted by clicking the Clear button.
    A specific range of log files can be filtered by specifying a date range using the from and to fields. Specifying a date range will simply select the relevant log files that apply in the right-hand box. Individual log files can still be selected/deselected as needed on the right.
    Important: If the sdnstats.log file is selected, all logs in that file will be cleared, regardless of what dates are selected in the date range fields.

    Save Extended Logs
    All SDN logs can be saved to a file by clicking the Save button.
    Specific log files can be saved by filtering on a specific date range and/or selecting one or more individual log files in the log file list, and clicking the Save button.
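
    Once a log file has been saved locally, the same word or regular expression filtering described above can be reproduced offline. The following is a small PowerShell sketch, assuming the saved file is named sdnstats.log and using an example pattern:

    # Show lines in the saved SDN log that match a word or regular expression.
    Select-String -Path .\sdnstats.log -Pattern 'error|timeout'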
     
    Debug Options
    To get to the SDN Debug Options screen, click the Debug Options button on the SDN Log Files screen.

    Figure 10‑39: Debug Options

    Enable Debug Log
    Enable SDNstats debug logging.
    To view the SDN Statistics logs, open System Configuration > Logging Options > SDN Log Files, select the log file you wish to view and click the View button.

    Debug logging should only be enabled when troubleshooting because it will impact performance of the LoadMaster.

    Restart SDNstats service
    When troubleshooting issues with SDN, the entire SDN service can be restarted. Restarting the connection will not affect any traffic connections - it just restarts the connection between the LoadMaster and the SDN controller.

    If successful, the Process ID will change to a new ID.

    The Process ID can be found by clicking the Debug button in System Configuration > Logging Options > System Log Files and clicking the ps button.

    This will restart the connection to all attached SDN controllers.

    SDNstats mode
    There are two modes that can be used to gather the SDN statistics.

    Figure 10‑40: SDNstats mode

    The mode can be set by going to System Configuration > Logging Options > SDN Log Files > Debug Options and setting the SDNstats mode.
    The modes are described below:
    • Mode 1: When set to mode 1, the statistics are taken from the switch port that is connected to the server and the statistics are relayed back to the LoadMaster.
    • Mode 2: When set to mode 2, the information is taken from all of the switch ports along the path.

     

    Miscellaneous Options

     

    WUI Settings

    Only the bal user or users with ‘All Permissions’ set can use this functionality. Users with different permissions can view the screen but all buttons and input fields are greyed out.

    Figure 10‑41: WUI Configuration screen

    Enable Hover Help
    Enables blue hover notes shown when the pointer is held over a field.

    Message of the Day (MOTD)
    Type text into the field and click the Set MotD button. This message will be displayed within the LoadMaster Home screen.

    If WUI Session Management is enabled, the MOTD is displayed on the login screen rather than the Home screen.

    The maximum allowed message length is 5,000 characters. HTML is supported, but not required. Single quotes (‘) and double quotes (“) are not allowed, though you can use the equivalent HTML character codes; for example, entering &#34;it&#39;s allowed&#34; results in a MOTD of “it’s allowed”.

    Set Statistics Display Size
    This sets the maximum number of rows that can be displayed on the Statistics page. The allowable range is 10 to 100 rows per page.

    End User License
    Click the Show EULA button to display the LoadMaster End User License Agreement.

    Supported TLS Protocols
    Checkboxes are provided here to specify whether it is possible to connect to the LoadMaster WUI using the SSLv3, TLS1.0, TLS1.1 or TLS1.2 protocols. TLS1.1 and TLS1.2 are enabled by default. Selecting only SSLv3 is not recommended because SSLv3 is supported only by some older browsers. When connecting to the WUI with a web browser, the highest security protocol that is mutually supported by the browser and the WUI is used.
    If FIPS mode is enabled, the only available options are TLS1.1 and TLS1.2.
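
    To confirm which protocol a given client actually negotiates with the WUI after changing these checkboxes, a quick check can be run from PowerShell. This is an illustrative sketch only; 10.0.0.10 is a placeholder WUI address and the validation callback accepts the (often self-signed) WUI certificate for the purposes of the test.

    # Open a TLS connection to the WUI and report the negotiated protocol.
    $tcp = New-Object System.Net.Sockets.TcpClient('10.0.0.10', 443)
    $ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false, { $true })
    $ssl.AuthenticateAsClient('10.0.0.10')
    $ssl.SslProtocol   # For example, Tls12 when TLS1.2 is the highest mutually supported protocol
    $ssl.Dispose(); $tcp.Close()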

    Enable Historical Graphs
    Enable the gathering of historical statistics for the Virtual Services and Real Servers.

    Collect All Statistics
    By default, this option is disabled. This means that only the statistics for the Virtual Services and Real Servers that are configured to be displayed on the home page are collected. Enabling this option will force the LoadMaster to collect statistics for all Virtual Services and Real Servers.

    If there are a large number of Virtual Services and Real Servers this option can cause CPU utilization to become very high.

     

    WUI Session Management


    Figure 10‑42: WUI Session Management (bal user)

    The level of user permissions determines which WUI Session Management fields can be seen and modified. Refer to the table below for a breakdown of permissions.

    Control | Bal user | User with ‘All Permissions’ | User with ‘User Administration’ permissions | All other users
    Session Management | Modify | View | View | None
    Require Basic Authentication | Modify | View | View | None
    Basic Authentication Password | Modify | View | View | None
    Failed Login Attempts | Modify | Modify | View | None
    Idle Session Timeout | Modify | Modify | View | None
    Limit Concurrent Logins | Modify | Modify | View | None
    Pre-Auth Click Through Banner | Modify | Modify | View | None
    Currently Active Users | Modify | Modify | View | None
    Currently Blocked Users | Modify | Modify | View | None
    Table 10‑1: WUI Session Management screen permissions

    When using WUI Session Management, it is possible to use one or two steps of authentication.
    If the Enable Session Management check box is ticked and Require Basic Authentication is disabled, the user only needs to log in using their local username and password. Users are not prompted to log in using the bal or user logins.

    If the Enable Session Management and Require Basic Authentication check boxes are both selected, there are two levels of authentication enforced in order to access the LoadMaster WUI. The initial level is Basic Authentication where users log in using the bal or user logins, which are default usernames defined by the system.

    Once logged in via Basic Authentication, the user then must log in using their local username and password to begin the session.

    Enable Session Management
    Selecting the Enable Session Management check box enables the WUI Session Management functionality. This forces all users to log in to the session using their normal credentials.

    When this check box is selected, the user is required to log in in order to continue to use the LoadMaster.
    LDAP users need to log in using the full domain name. For example, an LDAP username should be test@kemp.com and not just test.

    Figure 10‑43: User Credentials
    After a user has logged in, they may log out by clicking the Logout button in the top right-hand corner of the screen.

    Once the WUI Session Management functionality is enabled, all the WUI Session Management options appear.

    Require Basic Authentication
    If WUI Session Management and Basic Authentication are both enabled, there are two levels of authentication enforced in order to access the LoadMaster WUI. The initial level is Basic Authentication where users log in using the bal or user logins, which are default usernames defined by the system.

    Once logged in via Basic Authentication, the user then must log in using their local username and password to begin the session.

    Basic Authentication Password
    The Basic Authentication password for the user login can be set by typing the password into the Basic Authentication Password text box and clicking the Set Basic Password button.

    The password needs to be at least 8 characters long and should be a mix of alpha and numeric characters. If the password is considered to be too weak, a message appears asking you to enter a new password.
    Only the bal user is permitted to set the Basic Authentication password.

    Failed Login Attempts
    The number of times that a user can fail to login correctly before they are blocked can be specified within this text box. The valid values that may be entered are numbers between 1 and 999.

    If a user is blocked, only the bal user or other users with All Permissions set can unblock a blocked user.
    If the bal user is blocked, there is a ‘cool-down’ period of ten minutes before the bal user can login again.

    Idle Session Timeout
    The length of time (in seconds) a user can be idle (no activity recorded) before they are logged out of the session. The valid values that may be entered are numbers between 60 and 86400 (between one minute and 24 hours).

    Limit Concurrent Logins
    This option gives LoadMaster administrators the ability to limit the number of logins a single user can have to the LoadMaster WUI at any one time.

    The values which can be selected range from 0 – 9.
    A value of 0 allows an unlimited number of logins.
    The value entered represents the total number and is inclusive of any bal user logins.

    Pre-Auth Click Through Banner
    Set the pre-authentication click through banner which will be displayed before the LoadMaster WUI login page. This field can contain plain text or HTML code. The field cannot contain JavaScript. This field accepts up to 5,000 characters.

    Active and Blocked Users

    Only the bal user or users with ‘All Permissions’ set can use this functionality. Users with ‘User Administration’ permissions set can view the screen but all buttons and input fields are greyed out. All other users cannot view this portion of the screen.

    Figure 10‑44: Currently Active Users

    Currently Active Users
    The user name and login time of all users logged into the LoadMaster are listed within this section.
    To immediately log out a user and force them to log back into the system, click the Force logout button.

    To immediately log out a user and to block them from being able to log in to the system, click the Block user button. The user will not be able to log back in to the system until they are unblocked or until the LoadMaster reboots. Clicking the Block user button does not force the user to log off, to do this, click the Force logout button.

    If a user exits the browser without logging off, that session remains open in the Currently Active Users list until the idle timeout has been reached. If the same user logs in again before the timeout is reached, this creates a separate session.

    Currently Blocked Users
    The user name and login time of when the user was blocked are listed within this section.
    To unblock a user to allow them to login to the system, click the Unblock button.

     

    Remote Access

     
    Administrator Access

    Figure 10-37: Administrator Access

    Allow Remote SSH Access
    You can limit the network from which clients can connect to the SSH administrative interface on LoadMaster.

    Using
    Specify the addresses from which remote administrative SSH access to the LoadMaster is allowed.

    Port
    Specify the port used to access the LoadMaster via the SSH protocol.

    SSH Pre-Auth Banner
    Set the SSH pre-authentication banner, which is displayed before the login prompt when logging in via SSH. This field accepts up to 5,000 characters.

    Allow Web Administrative Access
    Selecting this check box allows administrative web access to the LoadMaster. Disabling this option will stop access upon the next reboot. Click Set Administrative Access to apply any changes to this field.
    Disabling web access is not recommended.

    Using
    Specify the addresses from which administrative web access is permitted. Click Set Administrative Access to apply any changes to this field.

    Port
    Specify the port used to access the administrative web interface. Click Set Administrative Access to apply any changes to this field.


    Admin Default Gateway
    When administering the LoadMaster from a non-default interface, this option allows you to specify a different default gateway for administrative traffic only. Click Set Administrative Access to apply any changes to this field.

    Allow Multi Interface Access
    Enabling this option allows the WUI to be accessed from multiple interfaces. When this option is enabled, a new option called Allow Administrative WUI Access appears in each of the interface screens (System Configuration > eth<n>). When both of these options are enabled, the WUI can be accessed from the IP address of the relevant interface(s) and any Additional addresses configured for that interface. Click Set Administrative Access to apply any changes to this field.

    The certificate used by default to secure WUI connections specifies the initial WUI IP address, and so will not work for WUI connections on other interfaces. If you enable the WUI on multiple interfaces, you will need to install a wildcard certificate for the WUI.

    Enabling the WUI on multiple interfaces can have a performance impact on the system. A maximum of 64 network interfaces can be tracked, and the system will listen on a maximum of 1024 addresses in total.

    RADIUS Server
    Here you can enter the address of the RADIUS server that is to be used to validate user access to the LoadMaster. To use a RADIUS server, you have to specify the Shared Secret.
    A Shared Secret is a text string that serves as a password between the LoadMaster and the RADIUS server.

    The Revalidation Interval specifies how often a user should be revalidated by the RADIUS server.
    RADIUS Server Configuration

    To configure RADIUS to work correctly with the LoadMaster, authentication must be configured on the RADIUS server and the RADIUS Reply-Message must be mapped to LoadMaster permissions.

    Reply-Message | LoadMaster Permission
    real | Real Servers
    vs | Virtual Services
    rules | Rules
    backup | System Backup
    certs | Certificate Creation
    cert3 | Intermediate Certificates
    certbackup | Certificate Backup
    users | User Administration
    geo | GEO Configuration
    Table 10‑2: Reply-Message/LoadMaster Permissions

    The values in the Reply-Message should map to the user permissions page in the WUI, as shown in the figure below, with the exception of “All Permissions”:

    Figure 10‑45: Section of the User Permissions

    To configure the Windows version of RADIUS, please refer to Radius Authentication and Authorization, Technical Note on the KEMP website.

    To configure the Linux FreeRADIUS server, please insert the text below into the /etc/freeradius/users file in the sections indicated within the file. The example below is to configure permissions for the user ‘LMUSER’.
    LMUSER Cleartext-Password := "1fourall"
    Reply-Message = "real,vs,rules,backup,certs,cert3,certbackup,users"
    The /etc/freeradius/clients.conf file must also be configured to include the LoadMaster IP address. This file lists the IP addresses that are allowed to contact RADIUS.
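
    As a hedged illustration of that clients.conf requirement, an entry of the following form is typically added; the IP address and secret below are placeholders for your own LoadMaster address and RADIUS shared secret:

    client loadmaster {
        ipaddr = 10.0.0.10
        secret = examplesharedsecret
    }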

    When Session Management is enabled, the RADIUS Server options are not available within this screen.

    Enable API Interface
    Enables/disables the RESTful Application Program Interface (API).

    Allow Update Checks
    Allow the LoadMaster to regularly check the KEMP website for new software versions.

    Enable Admin WUI CAC support
    Session Management must be enabled in order to see this option.

    Tick this check box to enable Common Access Card (CAC) authentication on the administrative WUI interface of the LoadMaster.

    A reboot is required to turn on this feature after this checkbox has been enabled.

    For CAC authentication to work, the certificate to be validated must be uploaded to the Intermediate Certs section in the LoadMaster WUI.
    GEO Settings

    Figure 10-38: GEO Settings

    Remote GEO LoadMaster Access
    Set the addresses of the GEO LoadMasters that can retrieve service status information from this LoadMaster. The addresses are space separated. When in HA mode, only the shared address needs to be entered.

    GEO LoadMaster Partners
    GEO functionality comes as part of the GSLB Feature Pack and is enabled based on the license that has been applied to the LoadMaster. If you would like to get the GSLB Feature pack, contact KEMP to upgrade your license.

    Set the addresses of the partner GEO LoadMasters. The addresses are space separated. These GEO LoadMasters will keep their DNS configurations in sync.

    Before partnering GEO LoadMasters, a backup should be taken of the relevant GEO LoadMaster which has the correct/preferred configuration. This backup should then be restored to the other LoadMasters that will be partnered with the original LoadMaster. 

    Up to 64 GEO HA partner addresses can be added.

    GEO LoadMaster Port
    The port that GEO LoadMasters use to communicate with this LoadMaster unit.

    GEO update interface
    Specify the GEO interface on which the SSH partner tunnel is created. This is the interface through which the GEO partners communicate.
     
    GEO Partners Status
    This section is only visible when GEO partners have been set.

    Figure 10-39: GEO Partner Status

    A GEO partner status of Green indicates the two partners can see each other.
    A GEO partner status of Red indicates the LoadMasters cannot communicate. The reasons for this include (among other possibilities): one of the partners is powered down, there is a power outage, or a cable is disconnected.

    If there is a failure to update the GEO partner, the logs display an error message saying the GEO update to the partner failed. The message displays the IP address of the partner.

     

    WUI Authentication and Authorization

    WUI Authorization Options
    Click the WUI Authorization Options button on the Remote Access screen to display the WUI Authentication and Authorization screen. This option is only available when Session Management is enabled.

    Figure 10‑46: WUI Authentication and Authorization

    The WUI Authentication and Authorization screen enables the administration of the available authentication (login) and authorization (allowed permissions) options.

    Authentication
    Users must be authenticated before logging on to the LoadMaster. The LoadMaster allows authentication of users to be performed using the RADIUS and LDAP authentication methods as well as Local User authentication.

    When all authentication methods are selected, the LoadMaster attempts to authenticate users using the authentication methods in the following order:
    1. RADIUS
    2. LDAP
    3. Local Users
    For example, if the RADIUS server is not available then the LDAP server is used. If the LDAP server is also not available then Local User authentication methods are used.

    If neither RADIUS nor LDAP authentication methods are selected, then the Local User authentication method is selected by default.

    Authorization
    LoadMaster allows the users to be authorized by either RADIUS or via Local User authorization. The user’s authorization decides what level of permissions the user has and what functions on the LoadMaster they are allowed to perform.

    You can only use the RADIUS authorization method if you are using the RADIUS authentication method.
    When both authorization methods are selected, the LoadMaster initially attempts to authorize the user using RADIUS. If this authorization method is not available, the LoadMaster attempts to authorize the user using the Local User authorization. Authorization using LDAP is not supported.

    If the RADIUS authorization method is not selected, then the Local User authorization method is selected by default.

    Below is an example of the configuration that needs to be on the RADIUS server for authorization to work.
    The example below is for Linux only.

    The Reply-Message values indicate which permissions are being granted. They should match up to the user permissions page in the WUI, with the exception of “All Permissions”:
    LMUSER Cleartext-Password := "1fourall"
    Reply-Message = "real,vs,rules,backup,certs,cert3,certbackup,users"

    The bal user is always authenticated and authorized using the Local User authentication and authorization methods.

    RADIUS Server Configuration
    RADIUS Server
    The IP address and Port of the RADIUS Server that is to be used to authenticate user WUI access to the LoadMaster.

    Shared Secret
    This input field is for the Shared Secret of the RADIUS Server.
    A Shared Secret is a text string that serves as a password between the LoadMaster and the RADIUS server.

    Backup RADIUS Server
    The IP address and Port of the backup RADIUS Server that is to be used to authenticate user WUI access to the LoadMaster. This server will be used in case of failure of the main RADIUS Server.

    Backup Shared Secret
    This text box is used to enter the Shared Secret of the backup RADIUS Server.

    Revalidation Interval
    Specifies how often a user should be revalidated by the RADIUS server.

    LDAP Server Configuration
    LDAP Server
    The IP address and Port of the LDAP Server that is to be used to authenticate user WUI access to the LoadMaster.

    Backup LDAP Server
    The IP address and Port of the backup LDAP Server that is to be used to authenticate user WUI access to the LoadMaster. This server will be used in case of failure of the main LDAP Server.

    LDAP Protocol
    Select the transport protocol used to communicate with the LDAP server.
    The available options are Not encrypted, StartTLS and LDAPS.

    Revalidation Interval
    Specifies how often a user should be revalidated by the LDAP server.

    Local Users Configuration
    Use ONLY if other AAA services fail
    When selected, the Local Users authentication and authorization methods are used only if the RADIUS and LDAP authentication and authorization methods fail.

    Test AAA for User
    To test a user’s credentials, enter their username and password in the Username and Password fields and click the Test User button.
    A message appears to inform you whether the user is validated or not. This is a useful utility to check a user’s credentials without having to log in or out.

     

    L7 Configuration


    Figure 10‑47: L7 Configuration

    Allow Connection Scaling over 64K Connections
    Under very high load situations, Port Exhaustion can occur. Enabling this option will allow the setting of Alternate Source Addresses which can be used to expand the number of local ports available.

    If more than 64K concurrent connections are required, enable the Allow Connection Scaling over 64K Connections option and set the Virtual Service IP as the alternate address in the Alternate Source Addresses input field. This allows each Virtual Service to have its own pool of source ports.

    Transparent Virtual Services are capped at 64K concurrent connections. This limit is on a per Virtual Service basis.

    If, after selecting this option, you set some Alternate Source Addresses, you will not be able to deselect the Allow connection scaling over 64K Connections option.



    Always Check Persist
    By default, the L7 module only checks persistence on the first request of an HTTP/1.1 connection. Selecting Yes for this option checks persistence on every request. Selecting Yes – Accept Changes means that all persistence changes will be saved, even in the middle of a connection.

    Add Port to Active Cookie
    When using active cookies, the LoadMaster creates the cookie from (among other things) the IP address of the client. However, if many clients are behind a proxy server, all of those clients appear to come from the same IP address. Turning this option on adds the client's source port to the string as well, making it more random.

    Conform to RFC
    This option addresses parsing the header of an HTTP request in conformance with RFC 1738.
    The request line consists of three parts: GET /pathname HTTP/1.1. When "conform" is on, the LoadMaster scans through the pathname until it finds a space, and then presumes that what follows is HTTP/1.x. If the pathname contains spaces and the browser conforms to the RFC, the spaces in the pathname are escaped to "%20", so the scan for a space functions correctly.

    However, some non-conformant browsers do not escape spaces, so the wrong pathname is processed. Because the system then cannot find the HTTP/1.x, the LoadMaster rejects the request.
    Turning off this feature forces the LoadMaster to assume that the pathname extends to the last space on the line and that what follows is HTTP/1.x. This makes pathnames with spaces in them usable, but it is not conformant with RFC 1738.
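
    To illustrate the escaping behavior described above, the snippet below (a PowerShell illustration, not a LoadMaster setting) shows how a conformant client escapes the spaces in a request path before sending it:

    # A path containing spaces, as typed by the user.
    $path = '/my docs/report 2016.html'
    # A conformant browser sends the spaces escaped as %20.
    [System.Uri]::EscapeUriString($path)   # /my%20docs/report%202016.html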

    Close on Error
    If the LoadMaster has to send back a failure report to the client (for example, if a file is newer in the cache), this option forces the LoadMaster to close the connection after sending the response. The connection could continue to be used after a failure report is sent, but some systems could become confused, so this option forces the close instead of continuing.

    Add Via Header In Cache Responses
    The relevant HTTP RFC states that proxies should add a Via header to indicate that something came from the cache. Unfortunately, older LoadMaster versions did not do this. This check box is used to enable backward compatibility with older versions (if needed).

    Real Servers are Local
    The LoadMaster automatically detects local/non-local clients for the purpose of transparency (selective transparency). This works well in most cases, but it does not work well if the client is actually a Real Server. Turning this option on helps the LoadMaster determine that a Real Server is actually local, therefore making selective transparency work.

    When this option is enabled in a two-armed environment (with clients and Real Servers on the second interface) the Real Servers are treated as if they are local to the clients, i.e. non-transparent. If the Real Servers are on a completely different network, then they cannot be local and will always be treated as not local. Local is defined as being on the same network.

    Enabling this option requires careful network topology planning and should not be attempted before contacting the KEMP Support team.

    Drop Connections on RS Failure
    This option is useful for Microsoft Outlook users; it closes the connection immediately when a Real Server failure is detected.

    Exchange users should always select this option. The L7_TIMEOUT option is also set to 86400 at the same time.

    Drop at Drain Time End
    If enabled, all open connections to disabled Real Servers will be dropped at the end of the Real Servers Drain Stop Time or immediately if there are no persist entries associated with the Real Server.

    L7 Authentication Timeout (secs)
    This option supports the integration with 3rd party, multi-factor, authentication solutions which may have secondary processes such as SMS or telephone verification. This setting determines how long (in seconds) the SSO form waits for authentication verification to complete before timing out.

    L7 Connection Drain Time (secs)
    L7 Connection Drain Time impacts only new connections. Existing connections will continue relaying application data to a disabled server until that connection is terminated, unless the Drop at Drain Time End checkbox is selected.

    Setting the L7 Connection Drain Time (secs) to 0 will force all the connections to be dropped immediately when a Real Server is disabled.
    If the service is operating at Layer 4, drain stop does not apply. In this case, the persistence record is discarded, the connection is scheduled to an enabled and healthy server and a new persistence record is created.

    New TCP connections are not sent to disabled Real Servers. A new connection is sent to an enabled and healthy Real Server if any of the following is true:
    • Persistence is not enabled
    • No unexpired persistence record exists for the connection
    • The persisted Real Server is down
    • The Drain Stop timer has expired
    Otherwise, the connection is sent to the server specified by the persistence record and the persistence record is refreshed.

    The drain stop timer does not impact existing connections.

    Additional L7 Header
    This enables Layer 7 header injection for HTTP/HTTPS Virtual Services. Header injection can be set to X-ClientSide (KEMP LoadMaster specific), X-Forwarded-For, or None.
    100-Continue Handling
    Determines how 100-Continue messages are handled. The available options are:
    • RFC-2616 Compliant: conforms with the behavior as outlined in RFC-2616
    • Require 100-Continue: forces the LoadMaster to wait for the 100-Continue message
    • RFC-7231 Compliant: ensures the LoadMaster does not wait for 100-Continue messages
    Modifying how 100 Continue messages are handled by the system requires an understanding of the relevant technologies as described in the RFCs listed above. It is recommended that you speak with a KEMP Technical Support engineer before making changes to these settings.

    Allow Empty POSTs
    By default, the LoadMaster blocks POSTs that do not contain a Content-Length or Transfer-Encoding header to indicate the length of the request's payload. When the Allow Empty POSTs option is enabled, such requests are assumed to have no payload data and are therefore not rejected.

    In version 7.1-24 and later releases, the supported Content-Length limit has been increased to 2TB (from 2GB).

    Least Connection Slow Start
    When using the Least Connection or Weighted Least Connection scheduling methods, a period can be specified during which the number of connections sent to a Real Server that has just come online is restricted and then gradually increased. This ensures that the Real Server is not overloaded with an initial flood of connections.
    The value of this Slow Start period can be between 0 and 600 seconds.

    Share SubVS Persistence
    By default, each SubVS of a Virtual Service has an independent persistence table. Enabling this option will allow the SubVS to share this information. In order for this to work, the persistence mode must be the same on all SubVSs within that Virtual Service. A reboot is required to activate this option.

    The only Persistence Mode that cannot be shared is SSL Session ID.
    When setting up shared SubVS persistence, there are some requirements to get this feature fully functional:
    • All Real Servers in the SubVS need to be the same
    • The Persistence Mode needs to be the same across all SubVSs
    • The timeouts need to be set with the same timeout value
    If the above requirements are not met, persistence may not work correctly, either within the SubVS or across the SubVSs.

     

    Network Options


    Figure 10‑48: Network Options

    Enable Server NAT
    This option enables Server NAT (network address translation for traffic originating from the Real Servers).

    Connection Timeout (secs)
    The length of time (in seconds) that a connection may remain idle before it is closed. This value is independent of the Persistence Timeout value.
    Setting a value of 0 will reset the value to the default setting of 660 seconds.

    Enable Non-Local Real Servers
    Allow non-local Real Servers to be assigned to Virtual Services.

    Enable Alternate GW support
    If there is more than one interface enabled, this option provides the ability to move the default gateway to a different interface.

    Enabling this option adds another option to the Interfaces screen – Use for Default Gateway.
    The Enable Alternate GW support option will appear in the Remote Access screen in GEO-only LoadMasters.

    Enable TCP Timestamps
    The LoadMaster can include a timestamp in the SYN when connecting to Real Servers.
    Enable this only upon request from KEMP support.

    Enable TCP Keepalives
    By default, TCP keepalives are enabled, which improves the reliability of long-lived TCP connections (such as SSH sessions). Keepalives are not usually required for normal HTTP/HTTPS services.
    The keepalive messages are sent from the LoadMaster to the Real Server and to the client. Therefore, if the client is on a mobile network, there may be an issue with additional data traffic.

    Enable Reset on Close
    When this option is enabled, the LoadMaster will close its connection with the Real Servers by using RESET instead of the normal close handshake. This only makes a difference under high loads of many connections.

    Subnet Originating Requests
    With this option enabled, the source IP address of non-transparent requests will come from the LoadMaster’s address on the relevant subnet, i.e. the subnet where the Real Server is located or the subnet of the gateway that can route to the Real Server (if the Real Server is non-local and behind a static route).
    This is the global option/setting.

    It is recommended that the Subnet Originating Requests option is enabled on a per-Virtual Service basis.
    When the global option is disabled, the per Virtual Service Subnet Originating Requests option takes precedence, i.e. it can be enabled or disabled per Virtual Service. This can be set in the Standard Options section of the Virtual Services properties screen (if Transparency is disabled).

    If this option is switched on for a Virtual Service that has SSL re-encryption enabled, all connections currently using the Virtual Service will be terminated because the process that handles the connection must be killed and restarted.


    Enable Strict IP Routing
    When this option is selected, only packets which arrive at the machine over the same interface as the outbound interface are accepted.

    Handle nonHTML Uploads
    Enabling this option ensures that non-HTML uploads function correctly.

    Enable Connection Timeout Diagnostics
    By default, connection timeout logs are not enabled because they may generate too many unnecessary log entries. If you wish to generate logs relating to connection timeouts, select the Enable Connection Timeout Diagnostics check box.

    Enable SSL Renegotiation
    Unchecking this option will cause SSL connections to terminate if a renegotiation is requested by the client.

    Size of SSL Diffie-Hellman Key Exchange
    Select the strength of the key used in the Diffie-Hellman key exchanges. If this value is changed, a reboot is required in order to use the new value. The default value is 2048 Bits.

    Use Default Route Only
    Forces traffic from Virtual Services that have default route entries set, to be only routed to the interface where the Virtual Service’s default route is located. This setting can allow the LoadMaster to be directly connected to client networks without returning traffic directly using the adjacent interface.
    Enabling this option affects all Virtual Services in the same network.

    HTTP(S) Proxy
    This option allows you to specify the HTTP(S) proxy server and port the LoadMaster uses to access the Internet.

     

    AFE Configuration


    Figure 10‑49: AFE Configuration

    Maximum Cache Size
    This defines how much memory can be utilized by the cache in megabytes.

    Cache Virtual Hosts
    When this option is disabled, the cache presumes there is only one virtual host supported on the Real Server. Enabling this option allows the cache to support multiple virtual hosts which have different content.

    File Extensions Not to Cache
    A list of file types that should not be cached.

    File Extensions Not to Compress
    A list of file types that should not be compressed.

    Detection Rules
    Select the relevant detection rules and click the Install New Rules button to install them.
    If you are implementing SNORT rules, please remember the following:
    • The destination port must be $HTTP_PORTS
    • A ‘msg’ may be optionally set
    • The flow must be set to ‘to_server,established’
    • The actual filter may be either ‘content’ or ‘pcre’
    • Additional ‘http_’ parameters may be set
    • The classtype must be set to a valid value
    To get updated or customized SNORT rules, please refer to the SNORT website: https://www.snort.org/.
    Detection Level

    Supports four levels of what to do when problems are encountered:
    • Low – only logging, with no rejection
    • Default – only critical problems rejected
    • High – serious and critical problems rejected
    • Paranoid – all detected problems rejected
    Client Limiting
    It is possible to set a limit on the number of connections per second from a given host (limits up to 100K are allowed). After setting the default limit to a value, the system allows you to set different limits for specific hosts/networks, so you can limit a network and/or a host.

    If you set a network and a host on that network, the host should be placed first since the list is processed in the order that it is displayed.

    To turn client limiting off, set the Client Connection Limiter value to 0.

     

    HA Parameters

    The role of the appliance can be changed by setting the HA Mode. If HA (First) Mode or HA (Second) Mode is selected as the HA Mode, a prompt appears reminding you to add a shared IP address. Changing the HA Mode requires a reboot, so after the details are set, click the Reboot button provided. Once the LoadMaster has rebooted, the HA Parameters menu option is available in the System Configuration > Miscellaneous Options section, provided the role is not “Non HA Mode”. HA will NOT work if both machines are configured with the same HA mode.

    When logged into the HA cluster, use the shared IP address to view and configure the full functionality of the pair. If you log into the direct IP address of either one of the devices, the menu options are quite different (see the menus below). Logging into one of the LoadMasters directly is usually reserved for maintenance.

    Figure 10‑50: Direct IP menu

    Figure 10‑51: Shared IP menu
    When a LoadMaster is in HA mode, the following screen appears when you select the HA Parameters menu option.

    Figure 10‑52: HA settings

    HA Status
    At the top of the screen, next to the time, icons are shown to denote the real-time status of the LoadMaster units in the cluster. There will be an icon for each unit in the cluster. You can open the WUI for the first or second HA unit by clicking the relevant status icon.

    The possible icons are:

    Green (with ‘A’): The unit is online and operational and the HA units are correctly paired.
    The ‘A’ in the middle of the square indicates that this is the master unit.
    Green (without ‘A’): The unit is online and operational and the HA units are correctly paired.
    The absence of an ‘A’ in the middle of the square indicates that this is not the master unit.
    Red/Yellow: The unit is not ready to take over. It may be offline or incorrectly paired.
    Blue: The unit is pacified, i.e. it has rebooted more than three times in five minutes. In this state you can only access the machine via the direct machine WUI (not the shared WUI), and it is not participating in any HA activity, i.e. no changes from the master will be received and it will not take over if the master fails.
    Grey: Both machines are active, i.e. both are set to master, and something has gone seriously wrong. Call KEMP Support.

    In HA mode each LoadMaster will have its own IP address used only for diagnostic purposes directly on the unit. The HA pair have a shared IP address over which the WUI is used to configure and manage the pair as a single entity.

    Both HA1 and HA2 must be on the same subnet with the same default gateway and be in the same physical site. They must not be separated by an intra-site link and must use the same gateway to return traffic.

    HA Mode
    If using a single LoadMaster, select Non-HA Mode. When setting up HA mode, one LoadMaster must be set to HA (First) and the other HA (Second). If they are both set to the same option, HA will not operate.

    KEMP supplies a license that is HA enabled for each HA unit and specifies the first or second unit. Therefore, it is not recommended that you change this option until you have discussed the issue with KEMP Support.


    HA Timeout
    The time that the Master machine must be unavailable before a switchover occurs. With this option, the time it takes an HA cluster to detect a failure can be adjusted from 3 seconds to 15 seconds in 3 second increments. The default value is 9 seconds. A lower value will detect failures sooner, whereas a higher value gives better protection against a DOS attack.

    HA Initial Wait Time
    How long after the initial boot of a LoadMaster before the machine decides that it should become active. If the partner machine is running, this value is ignored. This value can be changed to mitigate the time taken for some intelligent switches to detect that the LoadMaster has started and to bring up the link.

    HA Virtual ID
    When using multiple HA LoadMaster clusters on the same network, this value uniquely identifies each cluster so that there are no potential unwanted interactions.

    Switch to Preferred Server
    By default, neither partner in a HA cluster has priority; when a machine restarts after a switchover, it becomes the slave and stays in that state until forced to become master. Specifying a preferred host means that when this machine restarts, it will always try to become master and the partner will revert to slave mode.

    HA Update Interface
    The interface used to synchronize the HA information within the HA cluster.

    Force Partner Update
    Immediately forces the configuration from the active to standby unit without waiting for a normal update.

    Inter HA L4 TCP Connection Updates
    When using L4 services, enabling updates will allow L4 connections to be maintained across a HA switchover. This option is ignored for L7 services.

    Inter HA L7 Persistence Updates
    When using L7 services, enabling this option will allow persistence information to be shared between the HA partners. If an HA failover occurs, the persistence information will not be lost. Enabling this option can have a significant performance impact.

    HA Multicast Interface
    The network interface used for multicast traffic which is used to synchronize Layer 4 and Layer 7 traffic when Inter-HA Updates are enabled.


    Use Virtual MAC Addresses
    Enabling this option forces the MAC address to switch between a HA pair during a switchover which is useful when gratuitous ARPs (used in communicating changes in HA IP addresses to switches) are not allowed.

     

    Azure HA Parameters

    This menu option is only available in LoadMaster for Azure products.

    Figure 10‑53: Azure HA Parameters

    Azure HA Mode
    Select the required HA mode for this unit. There are three options:
    • Master HA Mode
    • Slave HA Mode
    • Non HA Mode
    If you are only using a single LoadMaster, select Non HA Mode.
    When using HA mode, one machine must be specified as the Master and the second machine must be specified as the Slave.

    HA will not work if both units have the same value selected for the Azure HA Mode.
    Synchronization of Virtual Service settings only occurs from the master to the slave. Changes made to the master will be replicated to the slave. However, changes made to the slave are never replicated to the master.

    If the master unit fails, connections are directed to the slave unit. The master unit will never become the slave, even if it fails; similarly, the slave unit will never become the master. When the master unit comes back up, connections are automatically directed to the master unit again.
    Figure 10‑54: Master unit

    You can tell, at a glance, which unit is the master, and which is the slave, by checking the mode in the top bar of the LoadMaster.

    Partner Name/IP
    Specify the host name or IP address of the HA partner unit.

    Health Check Port
    Set the port over which the health check will be run. The port must be the same on both the master and slave unit in order for HA to function correctly.
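
    As a hedged aside, reachability of the chosen health check port between the two units can be checked with PowerShell from a Windows host in the same virtual network. The address and port below are placeholders, not values confirmed by this guide:

    # Verify the partner unit answers on the configured health check port.
    Test-NetConnection -ComputerName 10.0.0.11 -Port 8444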

     

    AWS HA Parameters

    This menu option is only available in LoadMaster for Amazon Web Services (AWS) products.

    Figure 10‑55: AWS HA Parameters

    AWS HA Mode
    Select the required HA mode for this unit. There are three options:
    • Master HA Mode
    • Slave HA Mode
    • Non HA Mode
    If you are only using a single LoadMaster, select Non HA Mode.
    When using HA mode, one machine must be specified as the Master and the second machine must be specified as the Slave.

    HA will not work if both units have the same value selected for the AWS HA Mode.
    Synchronization of Virtual Service settings only occurs from the master to the slave. Changes made to the master will be replicated to the slave. However, changes made to the slave are never replicated to the master.

    If the master unit fails, connections are directed to the slave unit. The master unit will never become the slave, even if it fails; similarly, the slave unit will never become the master. When the master unit comes back up, connections are automatically directed to the master unit again.
    Figure 10‑56: Master unit

    You can tell, at a glance, which unit is the master, and which is the slave, by checking the mode in the top bar of the LoadMaster.

    Partner Name/IP
    Specify the host name or IP address of the HA partner unit.
    Health Check Port

    Set the port over which the health check will be run. The port must be the same on both the master and slave unit in order for HA to function correctly.

     

    SDN Configuration


    Figure 10‑57: Section of the SDN Configuration screen
    Add New
    Add a new SDN controller connection.
    Modify
    Modify an existing SDN controller connection.
    Delete
    Delete an existing SDN controller connection.
     
    SDN Controller Settings

    Figure 10‑58: SDN Controller Settings






    When adding a new SDN controller connection, initially a screen will appear asking for the Cluster, IPv4 address and Port. After an SDN controller connection has been added, the settings can be updated by clicking the Modify button on the SDN Statistics screen.

    Cluster
    The cluster that the SDN controller will be a member of.
    Keep the Cluster field set to the default value.

    IPv4
    The IPv4 address of the SDN controller.

    Port
    The port of the SDN controller WUI.
    The default Port for the HP VAN Controller is 8443.
    The default Port for the OpenDaylight SDN controller is 8181.

    HTTPS
    Specifies whether HTTP or HTTPS is used to access the SDN controller.

    User
    The username to be used to access the SDN controller.

    Password
    The password of the user to be used to access the SDN controller.

    How to Migrate from Exchange 2010 to Exchange 2016 (Part 5)

    In this part of the series we’ll migrate mail routing across from Exchange 2010 to Exchange 2016, and install Office Online Server (if required) to allow viewing and editing of attachments in Outlook Web App.
    If you would like to read the other parts in this article series please go to:

    Introduction

    During the last part in this series we completed the post installation configuration of Exchange Server 2016 and configured it as per our basic design, then performed some basic tests to ensure it functions correctly. We’re now ready to update the mail routing to the Exchange 2016 server in preparation for the migration. We’ll also integrate with an Office Online Server as an optional step before migration to allow documents to be previewed within Outlook Web App. 







    Migrating Mail Routing

     

    Updating Inbound Mail Routing

    In the previous article, we tested to ensure that our Exchange 2016 server can receive mail and deliver it to Exchange 2010 users. By default, Exchange Server 2016 is already configured to receive email from the Internet using Anonymous permissions on the default receive connector.

    If this hasn’t been changed from the defaults, update either your firewall rules or your spam filter appliance so that traffic on SMTP port 25 is directed to the Exchange 2016 server.

    After this change has been applied, ensure that inbound mail flow is not interrupted before moving on to migrating outbound mail flow.
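
    One way to confirm messages are now arriving on the new server is to review the message tracking log on Exchange 2016 from the Exchange Management Shell. The sketch below assumes a server named EX2016; adjust the name and time window for your environment:

    # List messages received by the Exchange 2016 server in the last hour.
    Get-MessageTrackingLog -Server EX2016 -EventId RECEIVE -Start (Get-Date).AddHours(-1) |
        Select-Object Timestamp, Sender, Recipients, MessageSubject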

     

    Updating Outbound Mail Routing

    With incoming mail now flowing through Exchange 2016, our next step is to make the changes required to allow Exchange Server 2016 to relay outbound mail, and then reconfigure it to be responsible for outbound mail flow instead of the Exchange 2010 server.

    In our example environment our mail flow is direct to the Internet, but in your organization you might use a spam filter appliance. If you use a smart host for relay, ensure the IP address of the new Exchange 2016 server is added as a server allowed to relay outbound messages.

    Likewise, if you send email directly to recipients, ensure the firewall rules allow the Exchange 2016 server IP address to initiate connections to Internet hosts on TCP port 25 without tampering (such as SMTP Fixup).
    After ensuring that the Exchange 2016 server is allowed to relay outbound mail, we are ready to update the Send Connector. To perform this step, open the Exchange Admin Center and navigate to Mail Flow and then choose Send Connectors. You should see the Send Connector used for outbound mail flow within the list:

    Figure 1: Locating the primary Send Connector for outbound email

    Open the properties for the Send Connector used for outbound mail flow.

    Navigate to the Scoping tab and locate the Source server section. The Exchange 2010 server should be listed. Choose Add (+) and select the Exchange 2016 server, then select the Exchange 2010 server and choose Remove (-), leaving only the Exchange 2016 server within the server list:

    Figure 2: Updating send connector settings


    After verifying both that the server IP address is listed as allowed to relay and that the correct Exchange 2016 server has been selected, choose Save to apply the configuration.
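
    The same change can also be made from the Exchange Management Shell if you prefer. This is a sketch with placeholder names: the connector name "Outbound to Internet" and the server name EX2016 should be replaced with your own:

    # Replace the source transport servers on the outbound connector with Exchange 2016.
    Set-SendConnector -Identity "Outbound to Internet" -SourceTransportServers "EX2016"
    # Confirm the change.
    Get-SendConnector -Identity "Outbound to Internet" | Format-List Name, SourceTransportServers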

    As with updating inbound mail routing, ensure you test outbound mail flow is unaffected before continuing.

     

    Installing Office Online Server

    Optionally, before migrating mailboxes we can install Office Online Server. At the time of writing, this is still in preview and is expected to reach general availability (GA) around the same time SharePoint Server 2016 is launched.

    The Office Online Server is required for viewing and editing attachments within a web browser, so for our small organization we will walk through the installation of a single server farm. For larger organizations, high availability may be required.

     

    Installation of Office Online Server

    The current version of Office Online Server, the preview version, can be downloaded from this Microsoft link.

    This must be installed on a standalone Windows 2012 R2 virtual machine or server. Before commencing the install we must also install a number of pre-requisites. First install the Visual Studio 2015 runtime from this link.

    Next, install the .Net Framework 4.5.2, then install all available Windows Updates.
    Finally, before commencing the installation, use the following PowerShell script to install all required pre-requisites, as shown below:






    Install-WindowsFeature Web-Server, Web-Mgmt-Tools, Web-Mgmt-Console, Web-WebServer, Web-Common-Http, Web-Default-Doc, Web-Static-Content, Web-Performance, Web-Stat-Compression, Web-Dyn-Compression, Web-Security, Web-Filtering, Web-Windows-Auth, Web-App-Dev, Web-Net-Ext45, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Includes, InkandHandwritingServices
    Figure 3: Installing OOS server pre-requisites

    After installation of pre-requisites, perform a re-boot.
    Next, mount the ISO image downloaded for installing Office Online Server and then start setup. For our installation, we’ll leave all defaults as-is. In the preview you will note that the version of Office Online Server is 2013. This is expected:
    Figure 4: Installing Office Online Server.

     

    Configuration of Office Online Server

    After installation completes, Office Online Server will not be available for use. To make it available for use with Exchange Server 2016 we must install a new Office Online Farm.

    Before we do that, we’ll need to install a valid SSL certificate and configure a DNS name.
    The standard naming for Office Online Server is oos.domain.com. We will choose to use oos.stevieg.org for both the internal and external URLs for our simple farm.

    We’ll use the Manage Computer Certificates management snap-in to import the SSL certificate and check the certificate Friendly Name. Search in the Windows 2012 R2 Start Menu, then launch the snap-in:
    Figure 5: Locating Computer Certificates

    You should see Certificate – Local Computer launch. Navigate to Personal and right-click Certificates. Choose All Tasks>Import… and import your certificate.
    Figure 6: Importing an existing SSL certificate

    In our example we’ve imported an example Wildcard certificate. We’ll navigate to Certificates and record its Friendly Name from the list.
    Figure 7: Checking the Friendly Name of the SSL certificate
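
    The same information can be read from PowerShell instead of the MMC snap-in; this simply lists the certificates in the local computer's Personal store along with their Friendly Names:

    # List certificates in the Local Computer Personal store with their Friendly Names.
    Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, FriendlyName, NotAfter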

    You’ll note from the command below that we specifically need to enable editing of documents when creating the farm. This is dependent on whether or not you have appropriate Office licensing. We are choosing to enable it in the example below:
    New-OfficeWebAppsFarm -InternalURL "https://oos.stevieg.org" -ExternalURL "https://oos.stevieg.org" -CertificateName "<certificate friendly name>" -EditingEnabled
    Figure 8: Creating a new Office Online Server Farm
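
    Before moving on, it is worth confirming the farm was created and that the discovery endpoint answers. A short, hedged verification using the URL chosen earlier (oos.stevieg.org) might look like this:

    # Review the farm settings created by New-OfficeWebAppsFarm.
    Get-OfficeWebAppsFarm | Select-Object InternalURL, ExternalURL, EditingEnabled

    # The discovery endpoint should return an XML document (HTTP 200).
    Invoke-WebRequest -Uri https://oos.stevieg.org/hosting/discovery -UseBasicParsing |
        Select-Object StatusCode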

     

    Configuration of Exchange 2016

    To ensure that Exchange Server 2016 can utilize our new Office Online Server farm we must configure the discovery endpoint using PowerShell. This can be configured at the Mailbox server level, or the organization level. It is useful to configure at the mailbox level if you need to maintain co-existence with Exchange 2013.
    As we are upgrading from Exchange 2010, we can simply configure the endpoint at the organizational level with the Set-OrganizationConfig cmdlet and then restart the OWA app pool on our single server, as shown below:

    Set-OrganizationConfig -WacDiscoveryEndpoint https://oos.stevieg.org/hosting/discovery
    Restart-WebAppPool MsExchangeOwaAppPool
    Figure 9: Configuring Exchange 2016 to use the Office Online Server
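
    To confirm the endpoint has been set at the organization level, it can be read back in the Exchange Management Shell:

    # Verify the WAC discovery endpoint configured above.
    Get-OrganizationConfig | Format-List WacDiscoveryEndpoint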

     

    Testing Office Online Server with Exchange 2016

    Finally, we’ll need to make sure that document viewing and editing works as expected. If all is configured correctly, then viewing an email with an attachment should present the View option against an attachment:
    Figure 10: Checking integration of OOS

    If you don’t see the option, it may simply be that Active Directory had not replicated the organization configuration change before the OWA app pool was restarted and access was attempted.

    When you see the view option, simply select it to switch to view mode. If you’ve also enabled Editing in the Office Online Farm, you’ll see the Edit and reply option as well.
    Image
    Figure 11: Viewing attachments using OOS

    By selecting Edit and reply, you will be able to compose a reply and edit the document before sending. If the configuration is correct, you should see the full Office Online editor displayed in editing mode:
    Image
    Figure 12: Editing attachments using OOS

     







    Summary

    In this penultimate part in the series we’ve migrated mail routing across from Exchange 2010 to Exchange 2016, and we’ve installed Office Online Server (if required) to allow viewing and editing of attachments in Outlook Web App. In the final part of this series we will migrate mailboxes, then decommission Exchange Server 2010.


    How to install KEMP Virtual LoadMaster using VMware ESX, ESXi and vSphere


    This document describes the installation of the Virtual LoadMaster (VLM) within a VMware ESX or ESXi hypervisor environment. In our previous articles we covered VLM installation and configuration using Hyper-V, which you may be interested to read through.


    The Virtual LoadMaster is VMware ready. The VLM has been tested with VMware ESX 4.0, 4.1, 5.0, and 5.1, vCenter Server 5.5 and 6.0, VMware ESXi 4.0, 4.1, 5.0, 5.1, 5.5 and 6.0, and vSphere 4.0, 5.0, 5.5 and 6.0.

    1 - Introduction:

    The KEMP Virtual LoadMaster is a version of the KEMP LoadMaster that runs as a virtual machine within a hypervisor and can provide all the features and functions of a hardware-based LoadMaster.

    There are several different versions of the VLM available. Full details of the currently supported versions are available on our website: www.kemptechnologies.com.






    The VMware virtual machine guest environment for the VLM, at minimum, must include:
    • 2 x virtual CPUs (reserve 2 GHz)
    • 2 GB RAM
    • 32 GB disk space
    There may be maximum configuration limits imposed by VMware such as maximum RAM per VM, Virtual NICs per VM etc. For further details regarding the configuration limits imposed by VMware, please refer to the relevant VMware documentation.

    2 - Best Practices

    Some best practices to be aware of before deploying a LoadMaster on VMware are below:
    • Configure an existing or new load balancing port group for the relevant VLAN to avoid port flooding
    • Use the VMXNET3 network adapter type when deploying the VLM

    Figure 2‑1: Security Tab
    • The Security Policy Forged Transmit on the Portgroup should be set to Accept. Ensure this is forced (hard coded) on the port group, as any changes to the vSwitch will affect all port groups by default. (A PowerCLI sketch for forcing this setting follows this list.)

    Figure 2‑2: NIC Teaming Tab
    • To prevent the transmission of RARP packets from being sent every time a Virtual Machine is powered on, set the Notify Switches option to No.
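
    If you manage your hosts with VMware PowerCLI, the Forged Transmits policy can also be forced on the port group from the command line rather than through the vSphere client; a minimal sketch, assuming PowerCLI is installed and connected to vCenter, and using a hypothetical port group name:
    # Force Forged Transmits to Accept on the load balancing port group (the name below is a placeholder)
    Get-VirtualPortGroup -Name "LoadMaster-PortGroup" | Get-SecurityPolicy | Set-SecurityPolicy -ForgedTransmits $true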

     

    3 - Installing Virtual LoadMaster (VLM) using vSphere

    The following instructions describe how to install a Virtual LoadMaster on a VMware ESXi environment using the VMware vSphere client.

    3.1 - Download the OVF File

    The VLM is packaged with an .ovf file for ease of deployment. This file can be freely downloaded from KEMP Technologies for a 30 day evaluation period. To download the VLM, follow the instructions below.
    1. Log on to http://www.KEMPtechnologies.com/try
    2. Within the Select the hypervisor platform section, select the option for VMware ESX, ESXi and vSphere.
    3. Click on the Download OVF button.
    4. Read the end user agreement.
    5. If you wish to continue, select your country from the drop-down list.
    6. Click on the Agree button.
    7. Download the ovf zip file.
    8. Unzip the contents of the file to an accessible location.

     

    3.2 - Deploy the OVF File

    To deploy the VLM we initiate a Deploy OVF Template wizard which gathers all the information required to correctly deploy the VLM.
    1. Open the VMware vSphere client.
    2. Select the correct Resource Pool within which you wish to install the VLM.
    3. Select the File > Deploy OVF Template menu option, this initiates the Deploy OVF Template wizard.
    4. Within the Source screen, click on the Browse button and select the downloaded file.
    Figure 3‑1: Source
    1. Click the Next button.
    2. Within the OVF Template Details screen, ensure that all the details regarding the VLM are correct.
    Figure 3‑2: OVF Template Details
    1. Click the Next button.
    2. Within the End User License Agreement screen, read the KEMP Technologies SOFTWARE LICENSE AGREEMENT.
    Figure 3‑3: End-User License Agreement
    1. If you wish to continue with the installation, you must accept the end-user license agreement by clicking the Accept button.
    2. When the Accept button has been clicked the Next button will become available.
    3. Click the Next button.
    4. In the Name and Location field, enter a name for the VLM into the Name field.
    The name can be up to 80 characters long. It should be unique within the virtual machine folder. Names are case sensitive.
    1. If required, select the folder location within the inventory where the VLM will reside
    Figure 3‑4: Name and Location
    1. Click the Next button.
    2. Within the Resource Pool screen, select the resource pool where the VLM will run.
    This page is only displayed if the cluster contains a resource pool and if you had not selected a resource pool within the inventory tree before initiating the deployment wizard.
    Figure 3‑5: Resource Pool
    1. Click the Next button
    2. Within the Datastore screen, select the datastore in which you wish to store the VLM.
    You may only select from preconfigured datastores within this screen.
    Figure 3‑6: Datastore
    1. Click the Next button.
    2. Within the Disk Format screen, select whether you wish to use the Thin provisioned format or the Thick provisioned format.
    KEMP recommends using the Thick provisioned format option. The estimated disk usage is approximately 32 GB.
    Figure 3‑7: Disk Format
    1. Click the Next button.
    2. Within the Network Mapping screen, select which networks in the inventory should be used for the VLM.
    Select the network mapping for Network and Farm by right-clicking the Destination Network column and choosing the relevant preconfigured network from the drop-down list.
    Figure 3‑8: Network Mapping
    1. Click the Next button.
    2. Within the Ready to Complete screen, ensure that all the details are correct.
    Figure 3‑9: Ready to Complete
    1. Click the Finish button.
    Once the Finish button is clicked, the VLM is deployed.

     

    3.3 - Check the Network Adapter Settings

    Before starting the VLM we must first verify that the network adapters are configured correctly.
    1. Select the deployed VLM within the inventory tree.
    2. Select the Inventory > Virtual Machine > Edit Settings menu option.
    3. Ensure that the network adapters are configured correctly as follows:
      1. Ensure that the Connect at power on: checkbox is selected
      2. Ensure the Adapter Type is correct.
    The default, recommended adapter type is VMXNET3.
    We also have support for the e1000 driver.
    Both E1000 and vmxnet3 support Jumbo frames.
    1. Ensure that the Automatic option is selected within the MAC Address section.
    2. Ensure that the correct Network Label is selected.
    Figure 3‑10: Edit Settings

    If the network adapters are not configured correctly or if they have not been setup you can add a new network adapter by clicking the Add button.






     

    3.4 - Power On the LoadMaster

    To power on the Virtual LoadMaster, follow the steps below:
    1. In the left panel of the client, select the LoadMaster Virtual Machine.

    Figure 3‑11: Power On
    1. Click the Power on the virtual machine link.
    2. Select the Console tab and wait for the Virtual Machine to finish booting.

    Figure 3‑12: IP address
    1. When the Virtual Machine finishes booting, a screen similar to the one above appears which will show the IP address. Take note of this as you will need it to access the LoadMaster Web User Interface (WUI).
    To change the IP address via the console view, follow the steps in Section 4.1.

     

    3.5 - Licensing and Configuration

    You must now configure the LoadMaster to operate within your network configuration.
    1. In your internet browser, enter the IP address previously noted; ensure you place 'https://' before the IP address.
    1. You may get a warning regarding website security certificates. Please click on the continue/ignore option.
    2. The LoadMaster End User License Agreement screen appears.
    Please read the license agreement and, if you are willing to accept the conditions therein, click on the Agree button to proceed.

    Figure 3‑13: License type selection
    1. Select the relevant license type.
    2. A section will then appear asking if you are OK with the LoadMaster regularly contacting KEMP to check for updates and other information. Click the relevant button to proceed.
    Figure 3‑14: License Required
    1. If using the Online licensing method, fill out the fields and click License Now.
    If you are starting with a trial license, there is no need to enter an Order ID. If you are starting with a permanent license, enter the KEMP Order ID# if this was provided to you.

    If using the Offline Licensing method, select Offline Licensing, obtain the license text, paste it into the License field and click Apply License.
    1. The Change Password screen appears
    1. Enter a password for the bal user in the Password input field and retype it in the Retype Password input field.
    2. The login screen appears again, enter the bal user name and the new password as defined in the previous step.
    3. In the screen informing you that the password has changed, press the Continue button
    4. If your machine has a temporary license you should get a warning informing you that a temporary license has been installed on your machine and for how long the license is valid.
    1. Click on the OK button
    2. You should now connect to the Appliance Vitals screen of the LoadMaster.

    Figure 3‑15: Appliance Vitals
    1. Go to System Configuration > Network Setup in the main menu.
    2. Click the eth0 menu option within the Interfaces section.
    1. In the Network Interface 0 screen, enter the IP address of the eth0 interface, the network facing interface of the LoadMaster, in the Interface Address input field.
    2. Click the Set Address button
    3. Click the eth1 menu option within the Interfaces section
    4. In the Network Interface 1 screen, enter the IP address of the eth1 interface, the farm-side interface of the LoadMaster, in the Interface Address input field.
    5. Click the Set Address button
    This interface is optional, depending on the network configuration.
    1. Click on the Local DNS Configuration > Hostname Configuration menu option.
    1. In the Hostname configuration screen, enter the hostname into the Current Hostname input field.
    2. Click on the Set Hostname button.
    3. Click on the Local DNS Configuration > DNS Configuration menu option.
    1. In the DNS configuration screen, enter the IP address(es) of the DNS Server(s) which will be used to resolve names locally on the LoadMaster into the DNS NameServer input field.
    2. Click on the Add button.
    3. Enter the domain name that is to be prepended to requests to the DNS nameserver into the DNS Search Domains input field.
    4. Click on the Add button.
    5. Click on the System Configuration > Network Setup > Default Gateway menu option.
    1. In the Default Gateway configuration screen, enter the IP address of the default gateway into the IPv4 Default Gateway Address input field.
    2. If you have an IPv6 Default Gateway, please enter the value in the IPv6 Default Gateway Address input field.
    3. Click on the Set IPv4 Default Gateway button.
    The LoadMaster is now fully installed and ready to be used.

     

    4 - Troubleshooting

     

    4.1 - Cannot access the Web User Interface

    If a connection to the WUI cannot be established, network settings can be configured via the console view.
    1. Login into the VLM via the console using the settings:
    lb100 login: bal
    Password: 1fourall

    Figure 4‑1: Enter IP address
    1. Enter the IP address of the eth0 interface, the network-facing interface of the LoadMaster, in the input field within the Network Side Interface Address field and press Enter on the keyboard.

    Figure 4‑2: Enter default gateway address
    1. Enter the IP address of the Default Gateway.

    Figure 4‑3: Nameserver IP addresses
    1. Enter a space-separated list of nameserver IP addresses.
    2. A message will appear asking you to continue licensing via the WUI. Try to access the IP address via a web browser, ensuring you enter https:// before the IP address.
    3. Contact the local KEMP Customer Services Representative for further support if needed.

     

    4.2 - Factory Reset

    If you perform a factory reset on your VLM, all configuration data, including the VLM’s IP address is deleted. During the subsequent reboot the VLM attempts to obtain an IP address via DHCP. If the VLM is on a different subnet to the DHCP server then an IP address will not be obtained and the IP address is set to the default 192.168.1.101.

    The VLM may not be accessible using this address. If this is the case then you must run through the quick setup via the console as described in Section 4.1.






     

    4.3 - VMware Tools

    The VLM supports integration with VMware Tools.

     

    4.4 - Working with VMware Virtual Switches

    When working with VMware Virtual Switches within your configuration, please ensure that the value of the Forged Transmit Blocking option is Accept. If this option’s value is Reject, the LoadMaster is prevented from sending traffic as it appears to come from nodes on the network other than the LoadMaster.


      Deploying KEMP360 Central with Hyper-V


      The purpose of this document is to provide step-by-step instructions on deploying KEMP360 Central with Microsoft Hyper-V.

       

      Introduction

      KEMP360 Central is a centralized management, orchestration, and monitoring application that enables the administration of deployed LoadMaster instances. Use KEMP360 Central to perform administrative tasks on each LoadMaster instance. This provides ease of administration because multiple LoadMasters can be administered in one place, rather than accessing each LoadMaster individually.

       

      Prerequisites

      The Microsoft Hyper-V virtual machine guest environment for KEMP360 Central, at minimum, must include:
      • 2 x virtual processors
      • 4 GB RAM
      • 40 GB virtual hard disk capacity
      There may be maximum configuration limits imposed by Hyper-V such as maximum RAM per VM, Virtual NICs per VM etc. For further details regarding the configuration limits imposed by Microsoft Hyper-V, please refer to the relevant Microsoft Hyper-V documentation.

       

      Installing KEMP360 Central using Hyper-V Manager

      The following instructions describe how to install KEMP360 Central on a Hyper-V environment using the Hyper-V Manager.







      Download the Hyper-V Files

      The KEMP360 Central .vhd file is currently unavailable on the KEMP Technologies website.
      KEMP360 Central is packaged within a .vhd file for ease of deployment. This file can be freely downloaded from KEMP Technologies for a 30 day evaluation period.
      1. Log on to http://www.KEMPtechnologies.com/try
      2. Within the Select the hypervisor platform section, select the option for Microsoft Hyper-V.
      3. Click the Download Hyper-V VM button.
      4. Read the end user agreement.
      5. To continue, select your country from the drop-down list.
      6. Click the Agree button.
      7. Download the Hyper-V zip file.
      8. Unzip the contents of the file to an accessible location within the Hyper-V environment.

       

      Importing KEMP360 Central

      To import KEMP360 Central we use the Import Virtual Machine function within the Hyper-V Manager.
      1. Open the Hyper-V Manager and select the relevant server node in the left panel.

      Figure 2‑1: Hyper-V Manager
      1. Click the Import Virtual Machine menu option in the panel on the right.

      Figure 2‑2: Import Virtual Machine
      1. Click Next.

      Figure 2‑3: Import Virtual Machine
      1. Click the Browse button and browse to where you downloaded the Hyper-V files.
      2. Select the KEMP360 folder containing the virtual machine you wish to import and click Select Folder.

      Figure 2‑4: Select the VLM
      1. To select the KEMP360 Central machine click Next.

      1. Select the Copy the virtual machine (create a new unique ID) option.
      2. Click Next.
      3. Choose your destination and hard disks.
      4. Click the Import button.
      5. The KEMP360 Central machine should be imported and should now appear within the Virtual Machines pane in the Hyper-V Manager.

       







      Check the Network Adapter Settings

      Before starting the KEMP360 Central machine, first verify the network adapters are configured correctly.
      1. Right-click the virtual machine you have imported within the Virtual Machines pane.
      2. Click the Settings option.
      3. Click the Network Adapter or Legacy Network Adapter option within the Hardware list.
      KEMP recommends selecting the Network Adapter option as it provides much higher performance and puts less load on the host.

      Figure 2‑5: Network Adapter settings
      1. Ensure that the network adapter is configured correctly:
        • Ensure that the network adapter is connected to the correct virtual network.
        • Expand the Network Adapter menu and select Advanced Features, select the Static option within the MAC address section and enter the relevant MAC address.
        • Ensure that the Enable spoofing of MAC addresses checkbox is selected.
      2. Click the OK button.
      3. Repeat these steps for the second network adapter.
      Jumbo frames are supported for Hyper-V network synthetic drivers.

       

      Power On the KEMP360 Central Machine

      Once the KEMP360 Central machine has been deployed it can be powered on:
      1. Right-click the KEMP360 Central machine which was imported within the Virtual Machines pane.
      2. Click Start.
      The KEMP360 Central machine should begin to boot up.
      1. Right-click the KEMP360 Central machine and select Connect to open the console window.

      Figure 2‑6: IP address
      1. The KEMP360 Central machine should obtain an IP address via DHCP. Make a note of this address.


      Initial Configuration of KEMP360 Central

      Now that it is deployed, KEMP360 Central needs to be configured. Follow the steps below to complete the initial configuration of KEMP360 Central:
      1. Access the KEMP360 Central UI by opening a web browser and entering the IP address of the KEMP360 Central machine in the address bar.

      Figure 3‑1: Welcome Page
      1. Click Continue.

      Figure 3‑2: Accept the License Agreement
      1. The End User License Agreement (EULA) will be displayed. Click Agree to accept the EULA and continue.

      Figure 3‑3: Enter credentials
      1. Enter your KEMP ID (the email address used when registering the KEMP account).
      Users need a KEMP ID to license KEMP360 Central. If you do not have a KEMP ID, click the link provided and register one. For step-by-step instructions on how to register for a KEMP ID, and for information on licensing in general, refer to the Licensing Feature Description document.
      1. Enter your Password.
      2. Click License Now.
      Figure 3‑4: Set Admin Password






      1. Enter a new admin password in the two text boxes provided and click Set Password.
      The bar in the middle represents the strength of the password; the fuller the bar, the stronger the password.

      The initial configuration of KEMP360 Central is now complete.

      Security Auditing and Scanning Tool for Linux Systems - Lynis 2.2.0

      In this article we are going to show you how to install Lynis 2.2.0 (Linux Auditing Tool) on Linux systems using the source tarball. A new major version, Lynis 2.2.0, has just been released after months of development, and it comes with some new features and tests and many small improvements. I encourage all Linux users to test and upgrade to this most recent version of Lynis.

       

      Introduction:

      Lynis is an open source and very powerful auditing tool for Unix/Linux-like operating systems. It scans the system for security information, general system information, installed and available software, configuration mistakes, security issues, user accounts without passwords, wrong file permissions, firewall auditing, and more.

      Lynis is one of the most trusted automated auditing tools for software patch management, malware scanning and vulnerability detection on Unix/Linux based systems. This tool is useful for auditors, network and system administrators, security specialists and penetration testers.

       







      Installation of Lynis

      Lynis doesn’t require any installation; it can be used directly from any directory. So, it’s a good idea to create a custom directory for Lynis under /usr/local/lynis.

      # mkdir /usr/local/lynis

      Download the stable version of the Lynis source files from the trusted website using the wget command and unpack it using the tar command as shown below.

      # cd /usr/local/lynis
      # wget https://cisofy.com/files/lynis-2.2.0.tar.gz


      Download Lynis Linux Audit Tool

      Unpack the tarball
      # tar -xvf lynis-2.2.0.tar.gz

      Unpack Lynis Tool

      Running and Using Lynis Basics

      You must be the root user to run Lynis, because it creates and writes its output to the /var/log/lynis.log file. To run Lynis, execute the following commands.

      # cd lynis
      # ./lynis
       
      Running ./lynis without any options will display a complete list of available parameters and return you to the shell prompt. See the figure below.

      Lynis Basic Options and Help

      To start the Lynis scan process, you must pass the --check-all parameter to begin scanning your entire Linux system. Use the following command to start the scan.

      # ./lynis --check-all

      Once you execute the above command, it will start scanning your system and ask you to “Press [Enter] to continue, or [CTRL]+C to stop” after each section it scans and completes. See the figure attached below.

      Lynis: Scanning Entire Linux System

      Lynis Security Scan Details






      To prevent such acknowledgment prompts (i.e. “press enter to continue”) while scanning, use the -c and -Q parameters together as shown below.

      # ./lynis -c -Q

      It will perform a complete scan without waiting for any user acknowledgment. See the following screencast.

      Lynis: Scanning Linux File System

       

      Creating Lynis Cronjobs

      If you would like to create a daily scan report of your system, you need to set up a cron job for it. Run the following command at the shell.

      # crontab -e

      Add the following cron job. With the --cronjob option, special characters are stripped from the output and the scan runs completely automated.

      30 22 * * * /path/to/lynis -c -Q --auditor "automated" --cronjob
      The above example cron job will run daily at 10:30pm and create a daily report in the /var/log/lynis.log file.
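
      If you also want to keep a dated copy of each report instead of relying on a single log file, you can extend the job; a sketch, assuming Lynis was unpacked to /usr/local/lynis/lynis (the directory created earlier) and that the default report file /var/log/lynis-report.dat is in use:

      30 22 * * * cd /usr/local/lynis/lynis && ./lynis -c -Q --auditor "automated" --cronjob > /dev/null 2>&1 && cp /var/log/lynis-report.dat /var/log/lynis-report-$(date +\%Y\%m\%d).dat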

       

      Lynis Scanning Results

      While scanning you will see output marked as [OK] or [WARNING], where [OK] is considered a good result and [WARNING] a bad one. However, an [OK] result does not guarantee that an item is configured correctly, and a [WARNING] is not necessarily critical. You should take corrective steps to fix those issues after reading the logs at /var/log/lynis.log.

      In most cases, the scan provides suggestions to fix problems at the end of the run. See the attached figure, which shows a list of suggestions for fixing problems.

      Lynis Suggestions Tips

       

      Updating Lynis

      If you want to update or upgrade the current Lynis version, simply type the following commands; they will check for and install the latest version of Lynis.

      # ./lynis update info         [Show update details]
      # ./lynis update release [Update Lynis release]
       
      See the attached output of the above commands in the figure. It says our Lynis version is up-to-date.

      Update Lynis Auditing Tool

       







      Lynis Parameters

      Some of the Lynis parameters for your reference.
      1. --check-all or -c : Start the scan.
      2. --check-update : Checks for Lynis update.
      3. --cronjob : Runs Lynis as cronjob (includes -c -Q).
      4. --help or -h : Shows valid parameters
      5. --quick or -Q : Don’t wait for user input, except on errors
      6. --version or -V : Shows Lynis version.
      That’s it. We hope this article will be helpful to you in figuring out security issues on your running systems.

      How to set up a wireless hotspot on a Windows 10 PC


      In this Windows 10 guide, we'll walk you through the steps to verify if your network adapter supports the feature, how to configure and enable a wireless Hosted Network, and how to stop and remove the settings from your computer when you no longer need the feature.







      Windows 10 includes a feature called "Hosted Network" that allows you to turn your computer into a wireless hotspot, and in this guide we'll show you how to do it.

      Whether you're connecting to the internet using a wireless or wired adapter, similar to previous versions, Windows 10 allows you to share an internet connection with other devices with a feature called "Hosted Network".

      Hosted Network is a feature that comes included with the Netsh (Network Shell) command-line utility. It was previously introduced in Windows 7, and it allows the operating system to create a virtual wireless adapter – something Microsoft refers to as "Virtual Wi-Fi" – and a SoftAP, which is a software-based wireless access point.

      Through the combination of these two elements, your PC can take its internet connection (be it an ethernet connection or hookup through a cellular adapter) and share it with other wireless devices — essentially acting as a wireless hotspot.

      To follow this guide, you'll need to open the Command Prompt with administrator rights. To do this, use the Windows key + X keyboard shortcut, and select Command Prompt (Admin).

       

      How to check if your wireless adapter supports Hosted Networks in Windows 10

      Not all adapters include support for Hosted Network, so you will first need to verify that your computer's physical wireless adapter supports this feature using the following command:

      NETSH WLAN show drivers


      If the generated output shows Hosted network supported: Yes, then you can continue with the guide. If your wireless adapter isn't supported, you could try using a USB wireless adapter that supports the feature.
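
      The driver listing is fairly long; if you only want the relevant line, you can filter the output. A small shortcut, assuming an elevated Command Prompt:

      NETSH WLAN show drivers | findstr /I /C:"Hosted network supported"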

       

      How to create a wireless Hosted Network in Windows 10

      Creating a wireless hotspot in Windows 10 is relatively straightforward — don't let the command line scare you. Simply follow the steps below to configure a wireless Hosted Network:
      • While in Command Prompt (Admin) enter the following command:
      NETSH WLAN set hostednetwork mode=allow ssid=Your_SSID key=Your_Passphrase

      Where the SSID is the name you want to use to identify your wireless network when connecting a new device, and the passphrase is the network security key you want users to enter to connect to your network. (Remember that the passphrase has to be at least 8 characters in length.) A concrete example is shown after these steps.


      • Once you have created the Hosted Network, enter the following command to activate it:
      NETSH WLAN start hostednetwork
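
      For example, to create and start a hotspot in one go (the SSID and passphrase below are hypothetical placeholders; substitute your own):

      NETSH WLAN set hostednetwork mode=allow ssid=MyHotspot key=Str0ngPassphrase
      NETSH WLAN start hostednetwork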


      How to share your internet connection with a Hosted Network in Windows 10

      Up to this point, you have created and started a Hosted Network on your Windows 10 PC. However, wireless-capable devices won't be able to access the internet just yet. The last thing you need to do is share an internet connection using the "Internet Connection Sharing" feature of a physical network adapter.
      • Use the Windows key + X keyboard shortcut to open the Power User menu, and select Network Connections.
      • Next, right-click the network adapter with an internet connection – this could be a traditional Ethernet or wireless network adapter – and select Properties.

      • Note: In Network Connections, you should now see a new Microsoft Hosted Virtual Adapter, labeled Local Area Connection* X and carrying the SSID name.
      • Click the Sharing tab.
      • Check the Allow other network users to connect through this computer's Internet connection option.
      • Next, from the Home networking connection drop-down menu select the Microsoft Hosted Virtual Adapter.







      • Click OK to finish.
      At this point, you should be able to connect any wireless-capable device to the newly created software access point, with access to the internet.

       

      How to stop sharing an internet connection with other devices in Windows 10

      If you want to temporarily stop allowing other devices to connect wirelessly through your computer, you can type the following command in the Command Prompt and hit Enter:

      NETSH WLAN stop hostednetwork


      At any time, you can just use the start variant of the command to allow other devices to connect to the internet using your computer as an access point without extra configuration:
       
      NETSH WLAN start hostednetwork


      Similarly, you can also use the following command to enable or disable a wireless Hosted Network:

      NETSH WLAN set hostednetwork mode=allow
      NETSH WLAN set hostednetwork mode=disallow

       

      How to change Hosted Network settings in Windows 10

      If you want to change some of the current settings, such as the SSID or the network security key, you can use the following commands:

      NETSH WLAN set hostednetwork ssid=Your_New_SSID
      NETSH WLAN set hostednetwork key=Your_New_Passphrase

       

      How to view the current Hosted Network settings

      There are two commands to view the Hosted Network settings on your computer:

      The following command shows the mode and SSID name in use, max number of clients that can connect, type of authentication, and cipher:

      NETSH WLAN show hostednetwork

      And the following command will also reveal the current network security key among other settings, similar to the previous command:

      NETSH WLAN show hostednetwork setting=security


      How to disable a wireless Hosted Network in Windows 10

      While the setup of a wireless Hosted Network in Windows 10 is not very complicated, Microsoft doesn't make it very straightforward to remove the configuration when you no longer need the feature.

      Although you can use the stop or disallow commands, these actions won't eliminate the settings from your computer. If you want to completely delete the Hosted Network settings in Windows 10, you'll need to modify the Registry.

      Important: Before you change any settings on your computer, it's worth noting that editing the Windows Registry can be a dangerous game that can cause irreversible damage to your system if you don't know what you are doing. As such, it's recommended that you make a full backup of your system, or at least a System Restore Point, before proceeding with this guide. You have been warned!
      • Open the Start menu, do a search for regedit, hit Enter, and click OK to open the Registry with admin rights.
      • Browse to the following path in the Registry:
      HKEY_LOCAL_MACHINE\system\currentcontrolset\services\wlansvc\parameters\hostednetworksettings
      • Right-click the HostedNetworkSettings DWORD key, select Delete, and click Yes to confirm deletion.
      • Restart your computer
      • Open the Command Prompt and use the following command:
      NETSH WLAN show hostednetwork

      You will know that you have successfully deleted the settings when the Settings field reads Not configured.


      Make sure you turn off "Internet Connection Sharing" in the physical network adapter that was sharing the internet with other devices. Use the Windows key + X keyboard shortcut to open the Power User menu, and select Network Connections.

      Right-click the network adapter, and select Properties.
      Click the Sharing tab.

      Uncheck the Allow other network users to connect through this computer's Internet connection option.


      • Click OK to complete the process.

      Things you need to know

      Although the wireless Hosted Network feature in Windows 10 allows you to implement an access point solution to share an internet connection with other devices, it's not meant to be a solution to replace a physical wireless access point.

      Also, there are a few things you want to consider. For example, wireless speeds will be dramatically reduced compared to the rates provided by a physical access point. Perhaps it would not be a big deal for internet browsing, but downloading or transferring big files could be an issue for some users.






      You also need to consider that your computer needs to stay turned on to act as a wireless access point. If the computer goes into sleep or hibernation, or restarts, your wireless hotspot will stop working, and you will need to manually start the feature again using the NETSH WLAN start hostednetwork command.

      You cannot run a SoftAP and an ad hoc network at the same time on Windows. If you need to create a temporary network connection between two computers, setting up ad hoc will turn off the SoftAP – you can run one or the other, not both at the same time.

       

      Wrapping things up

      Wireless Hosted Network is a nifty feature in Windows that can be a great tool for when you need to create a wireless access point on the go. It won't match the performance of a physical wireless access point, but it can be useful for many unexpected scenarios – like having one wired ethernet connection and several devices you want to get online. It's not a replacement for the real thing, but in a sticky situation, it can be just the fix you need.

      How to Use Windows Defender Offline to remove tough viruses from your Windows 10 PC


      In this Windows 10 guide, we'll walk you through the process of downloading and creating a bootable USB drive with the stand-alone version of Microsoft's free antivirus app, and we'll show you how to use it to clean up your computer from malicious and unwanted programs.







      When your Windows 10 PC gets a hard-to-remove virus, you can use Windows Defender Offline to get rid of it once and for all.

      It's happened to you, or to somebody you know. An annoying and dangerous virus or bit of malware has wormed its way onto your computer, and it is wreaking havoc. When these types of malicious code install themselves on your computer, they can rapidly take control of your PC and cause irreversible damage.

      Even if you have an up-to-date antivirus running on your computer, sometimes these threats are very hard to find and remove, often masquerading as part of the operating system. For this reason, Microsoft offers Windows Defender Offline, which is a version of its antivirus that you can run from a USB drive to help you remove malicious code that is infecting Windows 10.

       

      Making a bootable version of Windows Defender Offline

      Before proceeding with this guide, you will need external media, preferably a USB flash drive with at least 1GB of capacity, but you can also use a CD/DVD or create an ISO image. This process will wipe and reformat the drive, so make sure you back up any content on it that you want to keep to another location.

      Quick Tip: You will need to download the correct version of Windows Defender Offline for the computer you wish to scan. If you don't know whether your PC runs the 32-bit or 64-bit version of Windows 10, you can easily check by using the Windows key + I keyboard shortcut to open the Settings app, go to System > About, and look under "System type".
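
      If you prefer the command line, the same architecture information can be read from a Command Prompt; one common way to check:

      wmic os get osarchitecture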


      After accepting the license agreement, select the type of bootable media you want to create. For this guide, we'll choose the On a USB flash drive that is not password protected option, and click Next.


      If you have more than one USB storage device connected to your computer, you'll be given a drop-down menu to select the drive you want to use. Pick the drive you want to use and click Next.


      Click Next to confirm that the wizard will reformat the USB flash drive.


      Now the necessary files will download, and the wizard will complete creating the Windows Defender Offline bootable media. Then simply click Finish to close the wizard.


      Booting your Windows 10 PC using Windows Defender Offline

      Before you can use the USB flash drive to perform a scan, you have to make sure your computer is configured to boot from removable media. Typically, this requires hitting one of the keyboard's function keys (F1, F2, F3, F10, or F12), the ESC key, or the Delete key during boot to access the BIOS and change the boot order.

      If you have a computer with a UEFI BIOS, the steps are a little different. In this case, in Windows 10, you'll need to go to Settings > Update & security > Recovery and, under Advanced startup, click Restart now. Then in the boot menu, click Troubleshoot > Advanced Options > UEFI Firmware Settings > Restart. Your computer will then boot into its BIOS, where you can change the boot order.

      Note that instructions will vary depending on your computer manufacturer. Always check your PC manufacturer's support website for details.

      Finally, connect the USB flash drive to the infected computer and restart it. Windows Defender Offline will then start automatically and perform a full scan for any virus, rootkit, or piece of bad software that can be recognized using the latest definition update, just as it would when running inside Windows.

      Once the scan completes, close the program, remove the USB flash drive, and your computer will automatically reboot.







      While you can create Windows Defender Offline media at any time, it's recommended that you do this at the time you need to clean an infected computer; this way you will have the latest definition update.

      It's important to note that Windows Defender Offline not only works with Windows 10, but you can also use this version of the antivirus on previous versions of the operating system.

      What is “Systemless Root” on Android, and how it works?

       

      Gaining root access on Android devices isn’t a new concept, but the way it is done has changed with Android 6.0 Marshmallow. The new “systemless” root method can be a bit confusing at first, so we’re here to help make sense of it all, why you’d want it, and why this method is the best way to root an Android phone moving forward.







       

      What Exactly Is “Systemless” Root?

      Before we get into what systemless root is, it’s probably best that we first talk about how rooting “normally” works on Android, and what’s required for it to actually do what it’s supposed to do.

      Since Android 4.3, the “su” daemon—the process that handles requests for root access—has to run at startup, and it has to do so with enough permissions to effectively perform the tasks requested of it. This was traditionally accomplished by modifying files found on Android’s /system partition. But in the early days of Lollipop, there was no way to launch the su daemon at boot, so a modified boot image was used—this was effectively the introduction of the “systemless” root, named such because it doesn’t modify any files in the /system partition.

      A way to gain root access the traditional way on Lollipop was later found, which effectively halted progress on the systemless method at the time.

      With the introduction of Marshmallow, however, Google strengthened the security that was first put in place in Lollipop, essentially making it unfeasible to launch the su daemon with the required permissions just by modifying the /system partition. The systemless method was resurrected, and that’s now the default rooting method for phones running Marshmallow. It’s also worth mentioning that this is also true for Android N, as well as Samsung devices running 5.1 (or newer).

       

      What Are the Advantages (and Disadvantages) of Systemless Root?

      As with anything, there are advantages and disadvantages to gaining root access with the systemless method. The primary downside is that it doesn’t work on devices with locked bootloaders by default—there may be workarounds, but they’re very specific to each device. In other words, if there is no workaround for your device and it has a locked bootloader, there’s essentially no way of gaining root access.

      Other than that, however, the systemless method is generally better. For example, it’s much easier to accept over-the-air (OTA) updates when you’re rooted using this method, especially when using a tool like FlashFire. FlashFire can flash stock firmwares and re-root them while flashing, as well as handle OTA installation (again, re-rooting it while flashing). 

      Basically, if you’re running a rooted device, FlashFire is a good tool to have. Keep in mind that it’s currently still in beta, but development is making good progress.

      The systemless root method is also much cleaner, since it doesn’t add or modify files in the /system partition. That means it’s much easier to unroot your phone, too. It doesn’t even survive a factory reset, so it’s much simpler to make sure devices are unrooted and wiped clean before selling them.

      Of course, that last bit is a double-edged sword, as some users would prefer to stay rooted after factory resetting their device—the good news is that you need only re-flash the appropriate SuperSU file to re-gain root access, which is easy. And if you want to unroot without performing a factory reset, you can just flash a clean boot image for your device. One command prompt command and you’re done.
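
      As an illustration of that last point, on a device with an unlocked bootloader, unrooting typically comes down to re-flashing the stock boot image over fastboot; a sketch, assuming you have the Android platform tools installed and the stock boot image for your exact firmware saved as boot.img:

      adb reboot bootloader
      fastboot flash boot boot.img
      fastboot reboot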

      It’s also worth noting that there are certain services, like Android Pay, that simply won’t work on rooted devices. At one point, Pay did work on devices with systemless root, but this was completely accidental. There are currently no plans to try and bypass Pay’s protection on rooted devices.

       

      Which Method Should You Use?


      The good news is, you don’t really have to “decide” which root method to use. When you flash SuperSU, it will decide which rooting method is best for your phone, and act accordingly. If your phone is running Lollipop or older, it will most likely use the /system method. If it’s running Marshmallow or newer (or if it’s a 
      Samsung device running 5.1 or newer), it will modify your boot image instead, giving you a systemless root.

      It’s unlikely that the systemless method will ever become backwards compatible for older versions of Android, as that would require a significant amount of work for dozens of devices that will either be upgraded to a newer version of Android or retired. Thus, the focus for this new method is being put on Android Marshmallow and N.






      Android is a complex system, and obtaining root access can open the door to unlocking its full potential. That said, rooting your device isn’t something you should take lightly – unless it’s a developer device or another bootloader-unlockable unit with stock images available, you should definitely tread carefully. Developers in the rooting community go to great lengths to provide the best rooting experience possible, but that doesn’t always mean it’s going to work perfectly.

      How to Make Your Windows 10 PC Boot Fast

       
      If your computer is booting slow, then you may not have "Fast Startup" configured in Windows 10. Here are the steps to make your PC boot up quickly.







      It's not just the speed of the processor, the memory, and the use of a fast Solid-State Drive that makes Windows 10 a snappy operating system. Windows 10 also relies on features and different techniques to work fast and provide quick boot times.

      Fast Startup is a feature first made available with Windows 8.x that combines techniques from hibernation and the shutdown process to enable the operating system to reduce boot time significantly.

      The feature comes enabled by default on new installations. However, if you have been wondering why you're experiencing slow boot times on your Windows 10 PC, the reason could be that you upgraded from Windows 8.1 with Fast Startup previously turned off, or that the operating system simply didn't enable it during installation. To make your computer boot fast again, this Windows 10 guide shows you the steps to enable Fast Startup on your computer.

       

      Enabling Fast Startup in Windows 10

      Follow the steps below to enable (or disable) Fast Startup in your computer running Windows 10:
      • Right-click the Start button and select Power Options.

      From the left pane, click the Require a password on wakeup link to access the System Settings page of Power Options.

      Click the Change settings that are currently unavailable link to modify the Shutdown settings.

      Note: If you see the Turn on fast startup (recommended) option is unchecked under Shutdown settings, then the feature is not enabled in your computer.


      To enable Fast Startup, check the Turn on fast startup (recommended) option, and click the Save changes button to commit the changes.


      Making available the Fast Startup feature

      As you can see in the screenshot below, there is a chance that after following the above steps, you will find out that the Fast Startup option is not available to you, which also indicates that the feature is not enabled on your computer.


      The reason is likely to be that hibernation is disabled in your Windows 10 computer, but you can enable this feature using the steps below:

      Right-click the Start button and select Command Prompt (Admin).




      Type the following command: powercfg /hibernate on and press Enter. This command will enable the hibernate feature in your computer, which is an important component of fast startup.
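
      To confirm that hibernation is now available (and that Fast Startup can therefore be offered), you can list the sleep states the system supports; a quick check from the same elevated Command Prompt:

      powercfg /availablesleepstates

      On most systems, Hibernate and Fast Startup should now appear among the available states.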



      Now you should be able to see the Turn on fast startup (recommended) option in the Shutdown settings. That's all there is to it, folks.

      Although Fast Startup is a great feature that makes Windows 10 boot significantly faster than Windows 7, it's important to note that, depending on your system configuration, fast is not always a good thing, and sometimes you may want to keep it turned off.

      For example, Fast Startup could be the root of boot issues. It's also not recommended to enable the feature on systems with very limited hard drive space, as it may not work correctly. 







      And if you create a data partition on a dual-boot setup running Windows 10 (or Windows 8.1), data that you try to save from another version of Windows or from another platform won't be saved to the partition.

      This setup could cause data loss as Fast Startup will try to protect the file structure of your primary system from being changed.

      How to Roll Back Windows 10 Phone Redstone (from Fast or Slow Ring) to the More Stable Production Release


      Microsoft is deep into developing the Anniversary Update for Windows 10 and Windows 10 Mobile. The Anniversary Update is known internally as Redstone 1 (Redstone 2 is due early next year) and many Insiders on PC and Mobile are already using the preview OS on their daily devices.







      In this article, we'll walk you through on how to roll back your phone from Redstone (from Fast or Slow Ring) to the more stable production release. While not a hard process you will need some time, a USB cable and a PC.

       

      Should you use Redstone on your primary Windows Phone?

      At this time, it is hard to recommend Fast or Slow Ring Redstone builds (14295+) over the production build (10586.218 as of today) if you are looking for a smooth, bug-free experience. Redstone is in the early stages and is more akin to running Windows 10 Mobile last July than necessarily a mature operating system.

      Personally, I keep Redstone on a secondary device for testing while for my Lumia 950 I prefer Production. Windows 10 Mobile build 10586.168 and 10586.218 have been fantastic to use and if you just want the best mobile experience you should go with that.

      If, however, you want new features then go with Redstone. Just understand that Microsoft is refactoring the OS as the Mobile and PC versions begin to overlap heavily in features and functions. Current Insider builds will likely sharpen up closer to June at which point you may want to consider jumping back into the mix.

      Let's say you are on Windows 10 Mobile build 14295 (or later) and want to go back to Production (10586) – How do you do it?

       

      Can you just switch Insider Rings?


      If you launch the Windows Insider app on your phone and choose a different Ring, e.g. Production instead of Fast or Slow, you will stop getting Insider builds. However, this will not revert your OS to an earlier version.

      Currently, if you are on a higher build of the OS, the only way to go back is to use the Windows Device Recovery Tool (WDRT).

       

      This will wipe your phone! Back it up


      Before we begin let's be very clear: this process will erase your phone and install the old OS (and maybe some firmware if not yet distributed over-the-air).

      You will lose all data on the phone (SD cards are untouched).
      Remember to copy any files to your PC or storage card on the phone (if supported) and to use Windows Phone Backup (Settings > Update & security > Backup > More options > Back up now).

       

      What you need to start



      Before we begin let's see what you will need for this rollback process:
      1. Windows Device Recovery Tool (WDRT) PC software from Microsoft (Download file and more info can be found here)
      2. A Windows PC or laptop
      3. USB data cable for your phone
      4. Phone should have the battery charged 50% or higher for safety
      5. High-speed internet to download your phone's recovery files
      6. About 15 to 30 minutes to spare depending on PC and internet speeds
      Once you have downloaded and installed Windows Device Recovery Tool, you are all set to start the rollback process.

       







      How to roll back

      You should allocate around 30 minutes for the restore process.

      Launch & Connect
      Launch the recovery tool and connect your phone using a USB cable. If your handset isn't detected, click My phone was not detected to force the application to rescan for and detect the phone, or choose from the list of manufacturers e.g. BLU.



      Verify phone
      On the next screen, click your phone, wait a few seconds, and you'll see your phone information and the software available for download to roll back to a previous operating system. To continue, click Reinstall software.


      Reminder: Back up!
      Next, the recovery tool will warn you to backup all your data, settings, and apps before proceeding further, as the rolling back process will delete all the previous data on your phone. Click Continue to move forward.


      Survey
      Take the optional survey on why you are rolling back your OS


      Download & Install
      Now, the recovery tool will download the image from Microsoft's servers and replace Windows 10 Mobile that is currently on your phone. The process will take some time depending on your internet connection and the hardware in question.

      For a Lumia 950 XL, the download package is 2.76GB for reference

      Note: Phones sold with Windows Phone 8.1 that upgraded to Windows 10 Mobile will revert to Windows Phone 8.1. They will then need to upgrade to Windows 10 Mobile again. Phones sold with Windows 10 Mobile pre-installed, e.g. the Lumia 550, 650, 950, 950 XL and other phones like the Moly X1, will go back to the production release of Windows 10 Mobile.

      After the tool completes the process, you will receive a message saying "Operation successfully completed." At this point, the phone will reboot, and you will have to go through the Out-of-Box-Experience (OOBE), like on any version of Windows. Then, you'll need to sign-in with your Microsoft account, select to restore your phone from backup (if this is something you prefer), and after a few additional questions, you'll be back to an earlier version of Windows.


      Check for updates!

      In some cases, your phone will roll back to a build of Windows 10 Mobile that is too early. For example, my Lumia 950 XL went back to build 10586.107 even though 10586.218 is the latest release. No worries, just run Phone update on the device to grab the latest version: Settings > Update & security > Phone update > Check for updates.

      If you check for updates and there is nothing new, then you are on the latest release.

       

      Restore from backup?

      One interesting question we get a lot is should you restore from a backup during the Out-of-Box-Experience (OOBE)?

      Yes, it should be fine as that is what it is designed to do. Remember, however, that you are now adding an extra layer of complexity to the situation meaning another area where things may not go right. There is no risk of damaging your device, but some users have claimed that doing a clean install and just manually rebuilding your phone and software is a better choice.

      Personally, I'm agnostic on this issue, but I do not restore from backup as the majority of my things are on the SD card (e.g. music, photos, documents) or in the cloud (OneDrive). I also like starting fresh as I tend to reconsider app choices, layout, etc.








      However, if you have game saves etc. that you need you may want to think out your options.

      I should mention that restoring also takes much longer as your phone will then re-download all your apps and games and install them while merging saved data. If you have 100+ apps and games on your phone, this could take considerable time depending on your internet connection, so set aside a good 30-60 minutes (once again, it will vary with the number of apps and the phone's processor) along with an AC plug for your phone to keep it charged.

      Addressing RDP Failures for EC2 Virtual Machines (Part 1)

      This article discusses your options when you are unable to establish RDP connectivity to a virtual machine running on the Amazon cloud.


      Last week I encountered a frustrating issue with a virtual machine. The issue was an RDP failure. While this issue might not initially seem catastrophic, it soon turned into one of those chicken and the egg type situations that every IT professional dreads. My problem was that I was unable to establish a Remote Desktop connection to my virtual machine. Unfortunately however, I couldn’t fix the problem (at least not through traditional means) because I could not connect to the virtual machine.






      Needless to say, this was a frustrating situation. Even so, all was not lost. If you ever run into a situation in which Remote Desktop is unable to connect to your virtual machine, there are some things that you can do to potentially resolve the issue.

      Unfortunately, there is no universal, magical cure-all for an RDP connection failure. The solution will ultimately depend on the condition that caused the problem in the first place. That being the case, I will discuss a few different issues.

       

      Remote Access has been Disabled

      One somewhat common problem is that remote access to a virtual machine is accidentally disabled. Perhaps an inexperienced administrator notices that remote access is enabled for the virtual machine, and disables it thinking that they are addressing a security risk. I have also heard stories of administrators accidentally disabling remote access by clicking the wrong option on the screen.

      If you are unable to establish an RDP connection to a virtual machine because remote access has been disabled for that virtual machine, then the easiest option for getting back in is to configure group policy to allow remote access. The group policy setting that enables or disables RDP connectivity can be found within the Group Policy Object Editor at: Computer Configuration | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Connections. You must set the Allow Users to Connect Remotely by Using Remote Desktop Services setting to Enabled.
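
      If the virtual machine is not domain joined (so no Group Policy Object can reach it), the equivalent switch lives in the registry. As a rough sketch -- assuming you still have some way to run a command on the machine, or that you edit the offline registry using the technique described later in this article -- setting fDenyTSConnections to 0 re-enables Remote Desktop:

          rem Re-enable Remote Desktop on a live system (run from an elevated prompt)
          reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f

      When editing an offline hive instead, the same value sits under ControlSet001\Control\Terminal Server inside whatever key name you load the hive as.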

       

      The Windows Firewall is Blocking RDP Traffic

      Another common cause of RDP connection failures is that the Windows firewall is blocking Remote Desktop traffic. Again, this can happen if someone decides to close an RDP-related firewall port in an effort to improve security.

      Like the remote access setting, the Windows Firewall can be controlled through group policy. The Windows Firewall related group policy settings are located within the Group Policy Object Editor at Computer Configuration | Administrative Templates | Network | Network Connections | Windows Firewall | Domain Profile.

      Although it is sometimes possible to fix a Windows firewall problem by using group policy settings, some situations may require you to be a bit more creative. I would recommend trying the group policy approach first however, because it is the easiest.
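
      If you happen to have some other execution channel into the instance -- for example EC2 Run Command, a management agent, or a scheduled task -- one firewall command is often all it takes. This is only a sketch, and it assumes the built-in Remote Desktop rule group exists on your version of Windows:

          rem Re-enable the built-in Remote Desktop firewall rules (elevated prompt)
          netsh advfirewall firewall set rule group="remote desktop" new enable=Yes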

      If you find that you are unable to reconfigure the Windows firewall through the use of group policy settings then you will need to create a temporary virtual machine and then attach the malfunctioning virtual machine’s virtual hard disk to the temporary virtual machine. If done correctly, you can boot from the temporary virtual machine’s operating system and then modify the registry on the original virtual hard disk.

      Begin the process by stopping the malfunctioning virtual machine and then creating a new virtual machine. It is extremely important that your temporary virtual machine runs a different version of Windows than the virtual machine that you are trying to fix. Otherwise, you will create a disk signature collision that will render the already malfunctioning virtual machine unbootable.

      If you do accidentally use the same version of Windows on both virtual machines then it is possible to correct a disk signature collision, but the process is kind of tedious. Besides, why introduce another problem before you even fix the original problem? My advice is that if your malfunctioning virtual machine is running Windows Server 2012 or 2012 R2, then create a temporary virtual machine that runs Windows Server 2008. Similarly, if your malfunctioning virtual machine is running Windows Server 2008, then create a temporary virtual machine that runs Windows Server 2012 R2. 

      It doesn’t really matter what version of Windows is running on your temporary virtual machine so long as it is newer than Windows Server 2003 and does not match the version of Windows that is running on the virtual machine that you are trying to fix. Incidentally, if you do accidentally use the same version of Windows on both virtual machines, then you can find instructions for fixing the disk signature collision here.

      Once the new virtual machine has been created, you will need to attach the malfunctioning virtual machine’s root volume to the new virtual machine as a secondary disk. After doing so, log into the temporary virtual machine and open the Disk Management Console by entering the diskmgmt.msc command at the server’s Run prompt.
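
      If you prefer the command line to the AWS console, the volume shuffle can also be scripted with the AWS CLI. This is just a sketch of the sequence; the volume ID, instance ID, and device name below are placeholders, and it assumes the AWS CLI is installed and configured:

          rem Detach the root volume from the (stopped) malfunctioning instance
          aws ec2 detach-volume --volume-id vol-0123456789abcdef0

          rem Attach it to the temporary instance as a secondary disk
          aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0fedcba9876543210 --device xvdf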

      When the Disk Management Console opens, you should see the virtual hard disk from the malfunctioning virtual machine, but the disk may be listed as being offline. If this happens, then right click on the disk and choose the Online command from the shortcut menu.

      Once the virtual hard disk has been brought online, the next thing that you will need to do is to edit the copy of the registry that exists on the virtual hard disk from the malfunctioning virtual machine. This is easy to do, but you do have to exercise care.

      Enter the Regedit command at the server’s Run prompt. When the Registry Editor opens, select the HKEY_LOCAL_MACHINE hive. Now, select the Load Hive command from the Registry Editor’s File menu. You will need to load the required hive from the malfunctioning virtual machine’s disk. Select the appropriate drive and then go to Windows\System32\config\SYSTEM. You will be prompted to specify a key name. You can enter any name that you want, but remember it -- the offline registry will appear under that key.
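
      The same step can be performed from a command prompt if you prefer. A minimal sketch, assuming the offline disk received the drive letter D: and using BrokenSystem as the temporary key name (both are placeholders):

          rem Load the offline SYSTEM hive under a temporary key name
          reg load HKLM\BrokenSystem D:\Windows\System32\config\SYSTEM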







      Now, navigate through the Registry Editor to HKEY_LOCAL_MACHINE\<the key name you chose>\ControlSet001\services\SharedAccess\Parameters\FirewallPolicy -- note that you are working inside the hive you just loaded, not the temporary virtual machine’s own SYSTEM key. Look for subkeys with names resembling xxxxProfile. These include DomainProfile, StandardProfile, PublicProfile, and possibly others. Select these profiles one by one and change the value of EnableFirewall from 1 to 0. This will disable the firewall for the malfunctioning virtual machine. You can later log into that virtual machine and reconfigure the firewall to support RDP properly.
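
      The same edits can be scripted against the loaded hive with reg.exe. A minimal sketch, assuming the hive was loaded as HKLM\BrokenSystem in the previous step:

          rem Disable the firewall in each profile of the offline registry
          reg add "HKLM\BrokenSystem\ControlSet001\services\SharedAccess\Parameters\FirewallPolicy\DomainProfile" /v EnableFirewall /t REG_DWORD /d 0 /f
          reg add "HKLM\BrokenSystem\ControlSet001\services\SharedAccess\Parameters\FirewallPolicy\StandardProfile" /v EnableFirewall /t REG_DWORD /d 0 /f
          reg add "HKLM\BrokenSystem\ControlSet001\services\SharedAccess\Parameters\FirewallPolicy\PublicProfile" /v EnableFirewall /t REG_DWORD /d 0 /f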

      To complete the procedure, select the key that you loaded, choose Unload Hive from the Registry Editor’s File menu (so that your changes are written back to the offline disk and the file is released), and close the Registry Editor. Then re-open the Disk Management console, right-click the disk from the malfunctioning virtual machine, and choose the Offline command from the shortcut menu to take the virtual hard disk offline. Now, shut down the temporary virtual machine and detach the malfunctioning virtual machine’s disk from it. You can then restore the root volume to the virtual machine where it came from (it should be attached as /dev/sda1). Go ahead and boot the virtual machine and test your ability to establish an RDP connection. If you are able to establish RDP connectivity, then you can get rid of your temporary virtual machine.
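
      From the command line, the wrap-up looks roughly like this (again, the key name, volume ID, and instance ID are placeholders):

          rem Unload the offline hive so the changes are flushed and the file is released
          reg unload HKLM\BrokenSystem

          rem Move the volume back and attach it as the root device of the original instance
          aws ec2 detach-volume --volume-id vol-0123456789abcdef0
          aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0a1b2c3d4e5f67890 --device /dev/sda1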

       

      Conclusion

      As you can see, there are ways to correct RDP connectivity problems for virtual machines running within the Amazon cloud. In Part 2 I will conclude this series by showing you some more things that you can do to solve RDP connectivity problems.

      How to create a Windows 10 ISO file using an Install.ESD image

      Do you want to install the latest build of Windows 10 using an ISO? Then use this guide to turn the install.ESD file into installation media with the most current version of the operating system. In this Windows 10 guide, we'll show you the steps to convert an encrypted image into an ISO file.








      When you need to do a clean install of or an upgrade to Windows 10, Microsoft provides the installation files -- as it did for previous versions -- but now through an ESD (Electronic Software Delivery) image format, which is commonly delivered via Windows Update.

      We know this image as the install.ESD file, which is around 3GB in size and contains everything that is needed to install the operating system from scratch. Windows Update will typically download this ESD file plus other files to the $WINDOWS.~BT hidden folder on your computer.

      The benefit of the Install.ESD file is that it's an encrypted and compressed version of the Install.WIM image, which makes the download faster and more secure and also means less time spent during an upgrade.
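
      As an aside, when an ESD file is not encrypted (for example, one obtained through Microsoft's official download tooling), DISM can export it to a standard install.wim without any third-party utility. This is a different route from the one covered in this guide and only a sketch -- the paths and the image index are assumptions you would need to adjust:

          rem Inspect the images contained in the ESD file
          dism /Get-WimInfo /WimFile:C:\ESD\install.esd

          rem Export the desired image (index 1 here) to a standard WIM
          dism /Export-Image /SourceImageFile:C:\ESD\install.esd /SourceIndex:1 /DestinationImageFile:C:\ESD\install.wim /Compress:max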
      While Microsoft makes the files to install Windows 10 publicly available through its official download channels, the software giant typically doesn't offer ISO files for the latest Insider preview builds released in the Fast ring. However, you can use an ESD image to create an ISO file that you can use to do a clean install or upgrade of the latest version of Windows 10 on one or multiple computers.


      How to create an ISO file of Windows 10 with the Install.ESD image

      When a new Insider preview build becomes available, do the following.
      • Download the ESD Decrypter command-line utility using this link.
        Warning: While this command-line utility is known to work successfully, it's still a third-party tool, use it at your own risk.
      • Uncompress the utility to an empty folder on your desktop.
        Note: If you can't open the .7z file to uncompress the utility, you can use the popular 7-Zip tool, which can be downloaded here.
      • Use the Windows key + I keyboard shortcut to open the Settings app.
      • Click Update & security.
      • On Windows Update, click Check for updates and let the latest version download to your system.

        • When the new installation files are ready, and you're asked to restart to begin the process, use the Windows key + E keyboard shortcut to open File Explorer.
        • Click This PC from the left pane.
        • Double-click the Windows installation drive -- Usually the C: drive.
        • Click the View tab on File Explorer.
        • Check the Hidden items option to reveal the $WINDOWS.~BT folder, which contains the installation files.
        • Open $WINDOWS.~BT, go into the Sources folder, then right-click and copy the Install.ESD file.


        • Open the folder where you extracted the ESD Decrypter utility files, right-click, and paste the Install.ESD file into that location.


        • Right-click the decrypt.cmd file and click Run as Administrator.
        • In the ESD Decrypter Script user interface, type 2 to select the Create full ISO with Compressed install.esd option and press Enter to begin the process.


        Once the process completes, you'll end up with an ISO file inside the ESD Decrypter folder with a descriptive name and build number (e.g., en_windows_10_pro_14316_x64_dvd.iso).







        You can now use this file to install Windows 10 on a virtual machine, or you can use a tool like Rufus to create bootable installation media.

        It's important to point out that there are different versions of the ESD Decrypter tool, but version 4.7 continues to work with the latest Windows 10 Insider Preview. You will also find other similar tools around the internet, such as ESD-Decripter, that are based on the command-line tool we're mentioning in this guide.

        Note: The nature of dealing with an encrypted file makes the tool useful only as long as you're using the correct RSA key to do the decryption. While the RSA key comes integrated in the ESD Decrypter tool, Microsoft can begin to ship a new version of Windows with a different key at any time, which can make the tool unusable unless you provide a new key.

        How to Create and Sync Calendar Events in Windows 10

         
        The Calendar app included with Windows 10 is a modern, universal app that integrates wonderfully with Mail and other Windows 10 apps. If you’re looking for a place in Windows 10 to manage your days, weeks, and months, here’s how to set up a Calendar in Windows 10’s Calendar app.







         

        Add and Configure Your Accounts

        Calendar can sync with your online accounts, like Google Calendar, Outlook, or iCloud. In fact, the Calendar and Mail apps are linked, so if you’ve already set up an account in Mail, it will show up in Calendar as well. If not, you can add it manually in the Calendar app.

        Calendar supports a range of account types: Outlook.com (default), Exchange, Office 365 for business, Google Calendar, and Apple’s iCloud. To add an account to the Calendar app, point your mouse to the lower-left of the screen and click the “Settings” cog. From the right sidebar that appears,  click “Manage Accounts > + Add account.”


        The “Choose an Account” window appears. Just like the Mail app, it’s equipped with support for all kinds of popular calendar services. Choose the type of account you want and follow the on-screen instructions. If your settings are correct, you’ll be taken directly into that account’s calendar.


        If you do not wish to see calendars from a particular account, then you can switch them off. Point your mouse to the lower-left of the screen and click “Settings.” Select the account listed in the “Manage Accounts” pane that you wish to limit and click “Change mailbox sync settings.”


        Scroll down, and under “Sync options,” toggle the “Calendar” option to off. If you’re having any sync-related issues with the calendar server, click “Advanced mailbox settings” and configure the server details.


         

        Working With Different Calendar Views

        The Calendar app has a simple, minimalistic interface. The interface is divided into two sections:

        On the left, Calendar provides an overview of your calendars. Press the “hamburger menu” to collapse that pane. A compact calendar also appears at the left side. You can use it to jump to a date that’s far in the past or future.


        On the right, Calendar offers all the date-based views. Using the toolbar at the top, you can switch among the Day, Work Week, Week, and Month views. You can click on them or press “Ctrl + Alt + 1”, “Ctrl + Alt + 2”, and so on to switch between views. The left and right arrow keys go to the previous or next day, and the up and down arrow keys go to the previous or next hour.

        The “Day” view includes a drop-down menu; click it to specify how many day columns you want to fit on the screen. “Work Week” view displays a traditional Monday-through-Friday work week in a list view. “Week” view shows up to 24 hours in a day and seven days of the week. “Month” view shows the entire month and highlights today’s date.


         

        Adding a New Event or Appointment

        To enter a new event directly into your calendar, tap or click the correct time slot or date square. A small pane will appear for a new calendar entry on that date. Enter a name for your event and select its beginning and ending times. If the event has no time (like a birthday or anniversary), select the “All Day” checkbox. Enter the location and the calendar account associated with it. This is a quick way to add an entry, and most of the time it is all you need.


        You can also click “New Event” at the top-left corner of the screen to create an event with more details. A big screen will pop up, allowing you to:
        • Add the start and end dates and times
        • Use the “Show As” field to choose between Free, Tentative, Busy, or Out of Office
        • Update the “Reminder” field using increments from None to 1 week.
        • Set the event as Private by selecting the padlock
        • Add an Event Description and Location.
        To invite people, select the text field and start typing. Contacts will appear from your list of contacts in the People app. Choose the contact to add them to the event. You can invite someone by typing the email address too. Finally, click “Save and Close” or if you have invited someone, then click “Send.”


         

        View, Edit, or Delete an Event

        By default, Calendar shows the event’s name and time in the main view. If you mouse over an event, a little box will pop out showing more detail, including the name of the event, its location, and its starting date and time. Click “More Details” to switch the event to the “Details” view. Once you’ve opened the event, you can of course edit it as much as you want. Click “Save and Close” when you’re finished editing.







        To delete an event or appointment, open the event and click “Delete” situated at the top of the toolbar. If other people have been invited, then click “Cancel meeting” instead of Delete. Type a short message and click “Send.”


        Adding events and appointments to the Calendar app is quite an easy and intuitive procedure. The entries you add will sync to every Windows 10, Android, or iOS device you have, assuming you have linked your accounts on those devices as well.



        How to shutdown, restart, or sign-out of Windows 10 hands-free using Cortana


        Here's a handy trick that lets you use voice commands with Cortana to shut down, restart, hibernate, or sign out of your Windows 10 PC hands-free.







        Cortana is perhaps one of the best features of Windows 10. You're likely to have interacted with other digital assistants, such as Apple's Siri and Google Now, but with Windows 10 you get Cortana everywhere -- on your PC, tablet, and phone -- and Microsoft is even working to bring the feature to Xbox One.

        The coolest thing about the personal assistant is that you can use voice commands to get help on anything you need, such as find files on your device, track packages, set up appointments or new reminders, answer common questions, and much more.

        However, as much as you would like to simply "ask anything" and get an instant answer or action, Cortana is still limited in what it can do. For example, you can't ask for basic tasks like: "Hey Cortana: Shutdown my PC" or "Hey Cortana: Restart my PC."

        But there are other ways to trick Cortana into performing such tasks. In this Windows 10 guide, we'll show you a simple trick you can use to enable Cortana to shutdown, restart, sign-out, or put your computer into a hibernate state using voice commands.

         

        How to use Cortana to perform power-related tasks on Windows 10

        For this guide, we'll be using Cortana's ability to launch programs to trick it into executing specific commands using traditional shortcuts.

         

        How to shutdown your Windows 10 PC using Cortana

        To use Cortana to shut down your computer, use the following steps:
        • Use the Windows key + E to open File Explorer.
        • Click This PC from the left pane.
        • Click the View tab.
        • Check the Hidden items option.
        • Open the Windows installation drive (typically C:) and navigate to the following path: Users\YourUserName\AppData\Roaming\Microsoft\Windows\Start Menu\Programs (see the tip after these steps for a faster route).



        • In the Programs folder, right-click, select New, and click Shortcut.
        • Type the following command and click Next: 
        shutdown.exe -s -t 00


        Explanation: The command above starts Shutdown.exe and uses the -s switch to request a full shutdown, while the -t 00 switch sets how long to wait before turning off your computer -- in this case, zero seconds.
        • Name the shortcut with the voice command you want to use with Cortana. For example, "Shutdown" or "Turn off PC".
        • Click Finish to complete.
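
        Tip: Instead of unhiding folders and clicking through the drive, you can paste the path below into File Explorer's address bar (or the Run dialog) and press Enter; the %AppData% variable expands to your own Users\YourUserName\AppData\Roaming folder:

            %AppData%\Microsoft\Windows\Start Menu\Programs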


        Assuming that you have enabled the "Hey Cortana" feature, you can now say: "Hey Cortana: Open Shutdown" to turn off your computer hands-free.

         

        How to use Cortana to restart your Windows 10 PC

        If you want Cortana to be able to restart the computer, use the following steps:
        • While in the Programs folder, right-click, select New, and click Shortcut.
        • Type the following command and click Next: shutdown.exe -r -t 00


        Explanation: The command mentioned above starts Shutdown.exe and uses the -r switch to request a full shutdown and restart of your PC, and the -t 00 switch is used to set how long to wait before rebooting your computer.
        • Name the shortcut with the voice command you want to use with Cortana. For example, "Restart" or "Reboot PC".
        • Click Finish to complete.
        Now you can invoke the command: "Hey Cortana: Open Restart" or use the name you used in the shortcut, and your computer should begin the restart process.

         

        How to use Cortana to hibernate your Windows 10 PC

        You can also use Cortana to put your Windows 10 PC in the hibernation state with the following steps:
        • While in the Programs folder, right-click, select New, and click Shortcut.
        • Type the following command and click Next: shutdown.exe -h


        Explanation: The command above starts Shutdown.exe and uses the -h switch to request hibernation. Keep in mind, however, that hibernation must be enabled on your computer for this to work.
        • Name the shortcut with the voice command you want to use with Cortana. For example, "Hibernate".
        • Click Finish to complete.
        Once you finish creating the shortcut, you can invoke the new command saying: "Hey Cortana: Open Hibernate."
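
        If the Hibernate command doesn't appear to do anything, hibernation may simply be turned off on your machine. It can usually be enabled from an elevated command prompt:

            rem Enable hibernation (run as administrator)
            powercfg /hibernate on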

         

        How to use Cortana to sign-out of your Windows 10 account

        Additionally, you can use Cortana to sign out of your account with the following steps:
        • While in the Programs folder, right-click, select New, and click Shortcut.
        • Type the following command and click Next: shutdown.exe -l


        Explanation: The command mentioned above starts Shutdown.exe and uses the -l switch to request a logoff action.
        • Name the shortcut with the voice command you want to use with Cortana. For example, "Sign out".
        • Click Finish to complete.
        Similar to the previous commands, you can now say: "Hey Cortana: Open Sign out" to log off of your Windows 10 account.

         

        Extra

        After creating the new shortcuts in the Start menu Programs folder, you will see four new application entries in the All apps list. You can assign custom icons to make the new entries easier to identify:
        • While in the Programs folder, right-click the Shutdown shortcut, and click Properties.
        • In the Shortcut tab, click the Change Icon button.



        • If you get a warning, just click OK.
        • Select the appropriate icon and click OK.







        Note: If you don't see any icons listed, copy and paste this path: %SystemRoot%\system32\shell32.dll, press Enter, and you should then see the list of icons.
        • Click Apply.
        • Click OK.
        • Repeat these steps for each shortcut.


         

        Wrapping things up

        The only caveat with this method is that you must say the word "Open" in every command, but that's a small price to pay for the new actions Cortana can perform for you.

        Keep in mind that Microsoft is also working on the Windows 10 Anniversary Update, which will include a lot of new features and enhancements. 

        In particular, the software giant is planning to add various improvements to Cortana that could also include new power-related voice commands, as we have caught a glimpse of the feature during the Microsoft Build 2016 developer conference. In the meantime, you can use this handy trick to sign-out, restart, hibernate, or shutdown your computer hands-free.