Channel: Tech Support

Microsoft Shows Off Project Brainwave


Microsoft showed off Project Brainwave, an AI system that runs workloads in real-time using Intel's 14nm Stratix 10 FPGA chip.

Microsoft's latest system, dubbed Project Brainwave, uses field-programmable gate arrays (FPGAs) from Intel to process artificial intelligence (AI) workloads in real time, a capability that is coming soon to the Redmond, Wash., software giant's cloud.

"By attaching high-performance FPGAs directly to the datacenter network, it can serve DNNs [deep neural networks] as hardware microservices, where a DNN can be mapped to a pool of remote FPGAs and called by a server with no software in the loop," Microsoft explained.

This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them.

Project Brainwave also features a so-called "soft" DNN processing unit (DPU) that exploits the flexibility provided by commercially available FPGAs to match or surpass the performance provided by hard-coded DPUs.

Project Brainwave supports Microsoft's own deep learning framework, the Microsoft Cognitive Toolkit, along with Google's TensorFlow. Microsoft plans to extend support to many other frameworks.

Microsoft Azure customers will soon be able to run their AI workloads using the Project Brainwave system. Users of the company's other services, including Bing, will indirectly feel the performance-enhancing benefits the technology offers.

Alibaba, too, has high hopes for FPGAs in cloud data centers. In March, the Chinese web services provider announced that it had teamed with Intel to launch a cloud-based workload acceleration service that uses Intel Arria 10 FPGAs.

More information on Project Brainwave, including the record-setting results of tests on the system, is available in this blog post.

Credit: eWEEK

How To Add Physical Volume (PV) to a Volume Group (VG) in LVM


Often we need to add new physical volumes to an existing volume group when logical volumes must be increased. A physical volume can be either a partition or a whole disk. This guide takes you through the steps to add an entire disk to an existing volume group under CentOS or Red Hat Linux.


The following steps can be applied on CentOS and Red Hat Enterprise Linux.

Adding a Physical Volume (PV) to a Volume Group (VG)

You must be root to perform the following steps. First, we need to set the partition type to Linux LVM (0x8e) using the fdisk command.

fdisk /dev/sdc

Type t to change the partition type:
 
Command (m for help): t
Partition number (1-4): 1

Set the partition type to 8e, which is Linux LVM.
 
Hex code (type L to list codes): 8e

## output showing the partition with type 8e:

   Device Boot      Start       End      Blocks   Id  System
/dev/sdc1            1675      2054     3052087   8e  Linux

Now save and exit fdisk by typing w (write). The changes you have made become permanent once this command is executed.

Command (m for help): w

Lastly, reload the partition table by either rebooting the machine or executing the partprobe command.
 
partprobe
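For unattended setups, the interactive fdisk dialogue above can be scripted with parted instead. The sketch below is only a dry run under assumptions (/dev/sdc is an example device): it prints the commands rather than executing them, so nothing is repartitioned until you review and run them yourself as root.

```shell
#!/bin/sh
# Dry-run sketch: build and print the partitioning commands instead of
# running them. /dev/sdc is an example device; verify yours with lsblk.
DISK=/dev/sdc
PLAN=""
plan() { PLAN="$PLAN$*; "; echo "would run: $*"; }

plan parted -s "$DISK" mklabel msdos             # new MBR partition table
plan parted -s "$DISK" mkpart primary 1MiB 100%  # one partition spanning the disk
plan parted -s "$DISK" set 1 lvm on              # mark partition 1 as Linux LVM
plan partprobe "$DISK"                           # reload the kernel partition table
```

Remove the plan wrapper (so each command runs directly) only after double-checking the device name; parted -s runs non-interactively, which is exactly why a typo here is dangerous.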

The partition/disk then needs to be initialized as a physical volume using pvcreate.
 
pvcreate /dev/sdc1

The physical volume needs to be added to the volume group using vgextend.
 
vgextend vg_data /dev/sdc1
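Putting the whole flow together, including the logical-volume growth that usually motivates it, a minimal sketch might look like the following. vg_data matches the example above, but lv_data and the resize2fs step are assumptions (an ext3/ext4 logical volume); the commands are echoed rather than executed so you can review them before running as root.

```shell
#!/bin/sh
# Dry-run sketch of the full grow procedure. lv_data and resize2fs are
# assumptions; substitute your own LV name and filesystem resize tool.
DEVICE=/dev/sdc1
VG=vg_data
LV=lv_data
PLAN=""
run() { PLAN="$PLAN$*; "; echo "would run: $*"; }

run pvcreate "$DEVICE"                    # initialize the partition as a PV
run vgextend "$VG" "$DEVICE"              # add the PV to the volume group
run lvextend -l +100%FREE "/dev/$VG/$LV"  # grow the LV into the new free space
run resize2fs "/dev/$VG/$LV"              # grow the filesystem to match
```

Swap the run wrapper for direct execution once the plan looks right; vgdisplay and lvdisplay will confirm the new sizes afterwards.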

How To Configure and Manage IBM v7000 Storage


This guide covers configuring and managing the internal Flex System V7000 and the Brocade FC5022 SAN switch.


Integrated v7000 Setup

If your Flex system came with an integrated v7000 use the following steps to create the v7000 cluster.
  • After you have managed and inventoried your chassis you will see the integrated v7000.
  • From the Chassis Map view right-click on the v7000 node and launch the EZ setup
  • You will first be asked to provide the superuser password.
  • Then accept the license agreement
  • Then set the System Name and the Date/Time (it is recommended you set the NTP server just as you did with the CMM and FSM)
  • You will then setup the system licenses you purchased.
  • Next you can configure Email Notifications or skip this step till later.
  • You will then add the enclosure.
  • Next you will configure your RAID options for your internal storage.
  • Once the cluster has been created you will be launched into the v7000 GUI.
  • To setup the service and management IPs (if not already done within the CMM) go to Settings -> Network on the left pane.
Once the GUI is setup you should see a screen similar to the one below.



Managing External Storage Systems w/ FSM

The IBM Flex System Manager can manage the following external storage units.
    • Storwize v7000 (external)
    • San Volume Controller (SVC)
    • XIV
    • (It can also manage DS storage units but you will need an SMI-S agent configured on a separate server.)
To manage an external V7000 and SVC use the following steps.
  1. Log into the FSM and go to the Home tab and to Plugins
  2. Scroll down and click into Storage Management
  3. On the right hand side choose Discover Storage
  4. From the dropdown choose either v7000 or SVC
  5. Enter the IP address (For SVC also enter the version number of SVC you want to manage)
  6. You will also need an SSH public and private key pair (on Windows, the easiest tool for this is the free PuTTYgen app)
  7. Place the private key into the FSM where it says ‘Upload Private SSH Key’
  8. On the storage system you will need to change the superuser ssh key.
    1. Log into the SVC or V7000 GUI and go to Access -> Users
    2. Right-click on the superuser user and click properties.
    3. Click Change SSH key and paste the public key of the pair into the superuser user's profile.
  9. Once both keys are in place, click ‘Discover’
  10. Allow the job to run and make sure there are no errors
  11. Once complete you should see the storage system as a managed device under the storage group in Resource Explorer.
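If you prefer the command line to PuTTYgen, the key pair in steps 6-8 can also be generated with OpenSSH's ssh-keygen. This is a sketch under assumptions: the file name fsm_v7000_key is made up, and -N '' creates the key without a passphrase.

```shell
# Generate an RSA key pair for FSM storage discovery in a scratch directory.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -C fsm-storage-discovery \
  -f "$keydir/fsm_v7000_key" >/dev/null

# fsm_v7000_key is the private key to upload to the FSM; the contents of
# fsm_v7000_key.pub get pasted into the superuser profile on the V7000/SVC.
ls -l "$keydir/fsm_v7000_key" "$keydir/fsm_v7000_key.pub"
```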
Managing the XIV is even simpler than the V7000 and SVC. You simply need the management IP address and credentials.


You should see your storage systems managed and in an OK state as seen below.

Upgrading v7000 Controller Firmware via FSM

In this article I will show the steps to upgrade your Flex v7000 via the Flex Systems Manager as well as any caveats discovered.

If you have compliance running on your Flex v7000 then when a new upgrade is available you will see it as a compliance warning. (As shown in image below)


Click the warning link and it will bring you to the screen where you can choose your upgrade and click Install.


During the wizard it will ask you to let the test utility run; ALLOW THIS TO HAPPEN. The test utility is critical to ensure the update will be successful.


The upgrade job will need to restart the storage system. However, this can be misleading: since the v7000 is a dual-controller system, it upgrades one node at a time, so you will NOT need to shut down your hosts for the upgrade.

On the summary screen, make sure all of the information is correct and then click Finish to launch the job and either schedule this job or run it now.


Display the job properties to see the logs as the upgrade commences. (I also like to have the v7000 GUI open to view the upgrade task under Settings -> General -> Upgrade Software.)



Watch the upgrade and when it completes verify in the FSM under the v7000 node inventory or in the v7000 GUI that the upgrade completed successfully.





Note: If you receive the following error during the upgrade then you need to upgrade the drive firmware (see the article below addressing drive firmware upgrade procedure)

******************* Warning found *******************
This tool has found the internal disks of this system are
not running the recommended firmware versions.
Details follow:
+--------------+-----------+------------+-------------+
| Model        | Latest FW | Current FW | Drive count |
+--------------+-----------+------------+-------------+
| ST91000640SS | BD2F      | BD2A       | 4           |
+--------------+-----------+------------+-------------+
We recommend that you upgrade the drive microcode at an
appropriate time. If you believe you are running the latest
version of microcode, then check for a later version of this tool.
You do not need to upgrade the drive firmware before starting the
software upgrade.

Upgrading v7000 Drive Firmware

  1. First go to IBM Support and Downloads and search for Flex v7000.
  2. Download the drive microcode package.
  3. Next, use pscp or WinSCP to copy the package to the /home/admin/upgrade directory on the v7000.
  4. Then SSH to the v7000 and type lsdumps -prefix /home/admin/upgrade to confirm the upgrade package is there.
  5. Now upgrade each individual drive (I know, it's really annoying!): svctask applydrivesoftware -file IBM4939_DRIVE_20121210 -type firmware -drive 0
  6. To see whether a drive has been updated, type lsdrive 1
  7. When every drive is updated, try to run the upgrade again.
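Since applydrivesoftware must be run once per drive, a small loop saves typing. This is only a sketch under assumptions: the package name is the example from step 5, the drive IDs 0-3 match the four drives in the warning table above (list yours with lsdrive), and the commands are printed rather than executed so you can paste them into your SSH session after review.

```shell
#!/bin/sh
# Print one applydrivesoftware command per drive instead of running them.
PKG=IBM4939_DRIVE_20121210   # example package name from the step above
PLAN=""
for drive_id in 0 1 2 3; do  # example drive IDs; list yours with lsdrive
  cmd="svctask applydrivesoftware -file $PKG -type firmware -drive $drive_id"
  PLAN="$PLAN$cmd; "
  echo "$cmd"
done
```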

Mdisk Pools/Adding New Drives

For this article I will be adding new 900 GB drives to my Flex System V7000 (internal to the chassis). We will be using RAID5, and for performance the best practice is to group 8 disks into an mDisk. An mDisk is a managed disk, an array of physical drives, that can then be added to mDisk pools.

  • First go to your v7000 cluster GUI and go to Pools -> Internal Storage. 

  
  • Next physically add your drives to the v7000.
  • Once the drives are added you should see the new drives, separated by drive type, and listed as ‘unused’.
 
  • Now click the box on the upper left that says ‘Configure Storage’. Choose the RAID type you want and whether you would like to create hot spares. Allowing the IBM recommendations to be run is the best practice. Also, one hot spare per enclosure is enough to keep the recommended fault tolerance to the mdisk arrays. 



  • Once the job runs, the wizard will ask you to either create a new pool or add to an existing pool. (For my configuration I added the 900GB 10k SAS drives to an existing pool, and for the 1 TB 7.2k SAS drives I created a new pool with 2 mDisk arrays, per IBM's recommendations.) 


Once you have mDisk arrays and mDisk pools created you will be able to create vDisks or logical drives from them.

Brocade FC5022 Configuration

Upgrading Brocade FC5022 Firmware via FSM

NOTE: DO ONE SWITCH AT A TIME.
  • First find the compliance warning

  • Click the ‘Minor’ link
  • Now select the update and click ‘Install’
  • The switch will need to reboot during the upgrade, but as long as the other fabric (SAN switch) is online and healthy this will not be a problem.

Verify in the confirmation screen that the information looks correct and click ‘Finish’.


Now display the properties to watch the job run
Watch the logs for errors.
 When the job completes, verify the SAN switch inventory reflects the new firmware.




Next, either run the job now or schedule it for a time during your maintenance window.


Lenovo Flex System Manager (FSM) Re-Installing


Consider the situation: the FSM in your IBM Flex or IBM PureFlex™ System is broken; it won't power on any more, it fails to start, or it is inaccessible. In my case, after a software update it wouldn't boot up correctly and somehow the root user had been corrupted, which meant I couldn't get access to the system. Below I'll go through the steps I took to reinstall the software from the 'recovery' section of the internal disk on the FSM node.

FSM Recovery From Internal Disk

First you need to log into the CMM (Chassis Management Module) in order to reset the FSM node


Once you've done that, you need to boot the FSM and open a console, so select the FSM and then from the drop-down box select 'Launch Compute Node Console'


This should present a pop-up window which will enable you to launch the remote console to the node


This will present you with IMM window for the node


So put in your access credentials, such as USERID and the password you have set, and you should see this screen


Then from the window above select the 'REMOTE CONTROL' button to control the server


From here you can select the 'Start remote control in multi-user mode' to bring up the console, which will open you up a window to the server


From this window you want to interrupt the boot phase to change the boot disk, so have your 'Ctrl-Alt-F1' ready; from the menus you should see the window below


You can see that the message ' key pressed' is displayed if you are successful; then you'll be presented with the 'System Configuration and Boot Management' menu, so select 'Start Options' to get the screen below 


Select Hard Disk 1 to boot off the recovery disk and select 'Full system recovery' as below


Once you have kicked it off you should see a boot phase similar to this, so you know the boot-up is going well


After the boot has completed you should be presented with the system Flex recovery menus, of which the license agreement is first


Select Agree on the menu and move to the next part, the Welcome page and set-up wizard
So go through and set your date/time and password; you'll also need to set up your networking again, though most of the hard work has been done. In my case I didn't need the advanced routing set

So carry on through the menus, setting up the LAN adapter, gateway, DNS, and such, and once you finish you should see the start-up console. This can take some time to get through, so go get a tea or action some other work


And hopefully once you have completed it all OK you should be presented with a lovely login screen


After this point you will need to go in and rediscover the IBM Flex chassis and its various nodes, switches, and storage if you have it. Because the features of an IBM Flex system vary depending on what you have, I won't be covering that part here.

Connecting to the management node locally by using the console breakout cable

Use this information to connect to the IBM® Flex System Manager management node locally by using the console breakout cable and configure the IBM Flex System Enterprise Chassis.

Procedure

To connect to the IBM Flex System Manager management node locally by using the console breakout cable and configure the IBM Flex System Enterprise Chassis, complete the following steps:
  1. Locate the KVM connector on the management node. See the IBM Flex System Manager management node documentation for the location of this connector.
  2. Connect the console breakout cable to the KVM connector; then, tighten the captive screws to secure the cable to the KVM connector.
  3. Connect a monitor, keyboard, and mouse to the console breakout cable connectors.
  4. Power on the management node.
  5. Log in and accept the license agreement. The configuration wizard starts.
 

Reinstalling the management software from the recovery partition

Use this recovery method if the management software is inoperable because of misconfiguration or corruption, but the management node hardware is operational and the hard drives have not failed.

About this task

This procedure returns the management node and management software to factory defaults, and destroys data on the system (but not backups stored on the hard disk drive). After the recovery process is complete, you must configure the system with the Management Server Setup wizard or restore the system from a backup image.

Procedure

  1. Restart the management node.
  2. When the firmware splash screen is displayed, press F12. The setup menu is displayed. The screen displays confirmation that F12 has been pressed.
  3. Select Recovery partition.
  4. When the boot options screen is displayed, select Full system recovery. After approximately 30 minutes, the recovery process ends and the Management Server Setup wizard opens.
  5. Either complete the Management Server Setup wizard or restore a previous configuration from a backup.

What to do next

Important: If your recovery partition is an earlier version of the management software than the version before the recovery procedure, update the management software to the same version as before the recovery operation after you complete the Management Server Setup wizard. If you do not update the management software in this situation, the management software is older than other components in your environment, which might cause compatibility problems.

Restoring the CMM manufacturing default configuration

Use this information to restore the CMM to its manufacturing default configuration.
Attention: When you restore the CMM to its manufacturing default configuration, all configuration settings that you made are erased. Be sure to save your current configuration before you restore the CMM to its default configuration if you intend to use your previous settings.
You can restore the CMM to its manufacturing default configuration in three ways:
  • In the CMM web interface, select Reset to Defaults from the Mgt Module Management menu. All fields and options are fully described in the CMM web interface online help.
  • In the CMM CLI, use the clear command (see clear command for information about command use).
  • If you have physical access to the CMM, push the reset button and hold it for approximately 10 seconds (see CMM controls and indicators for the reset button location).


CMM controls and indicators

The IBM® Flex System Chassis Management Module (CMM) has LEDs and controls that you can use to obtain status information and restart the CMM.


The CMM has the following LEDs and controls:
Reset button
Use this button to restart the Chassis Management Module. Insert a straightened paper clip into the reset button pinhole; then, press and hold the button in for at least one second to restart the CMM. The restart process initiates upon release of the reset button but might not be immediately apparent in some cases. 
Attention: If you press the reset button, hold it for at least 10 seconds, then release it, the CMM will restart and reset back to the factory default configuration. Be sure to save your current configuration before you reset the CMM back to factory defaults. The combined reset and restart process initiates upon release of the reset button but might not be immediately apparent in some cases.
Note: Both the CMM restart and reset to factory default processes require a short period of time to complete.
 
Power-on LED
When this LED is lit (green), it indicates that the CMM has power.
 
Active LED
When this LED is lit (green), it indicates that the CMM is actively controlling the chassis.
Only one CMM actively controls the chassis. If two CMMs are installed in the chassis, this LED is lit on only one CMM.
 
Fault LED
When this LED is lit (yellow), an error has been detected in the CMM. When the error LED is lit, the chassis fault LED is also lit.
 
Ethernet port link (RJ-45) LED
When this LED is lit (green), it indicates that there is an active connection through the remote management and console (Ethernet) port to the management network.
 
Ethernet port activity (RJ-45) LED
When this LED is flashing (green), it indicates that there is activity through the remote management and console (Ethernet) port over the management network.

    Configuring Lenovo Flex Enterprise Chassis Network and FC Switches

    http://techsupportpk.blogspot.com/2013/06/how-to-assign-ips-to-ibm-pureflex.html

    To assign IPs to your PureFlex network and Fibre Channel (FC) switches, you will use the Chassis Management Module (CMM).

    First off, log into the CMM using your admin credentials (USERID is the default).

    Click on Chassis Management > Component IP Configuration



    Click on the module that you want to assign an IP address to; in this demonstration we will use the EN4093 10Gb Ethernet Switch


    Here, you can configure the management IP for the EN4093 10Gb Ethernet Switch in your IBM PureFlex chassis. Hit Apply when you're done, and you're all set!


    Installing and Configuring Lenovo Flex Enterprise Chassis from Scratch


    This step-by-step guide will help you configure the IBM Flex System Manager (FSM) from scratch. If anything in this tutorial becomes out of date, we will do our best to update it accordingly. If issues arise that this article does not address, please post your thoughts or questions in the comment section at the end of this article, and we will do our best to address your issue.


    STEP1- Flex System Manager Initial Setup 

    Please follow the steps below to set up your Flex System Manager for first-time use. IBM has made this very basic, but follow carefully as there are important pieces. 

    PREREQUISITES:

    • Ensure you have forward AND reverse lookup zones on your DNS server for the FSM hostname.
    • After you have set up the Chassis Management Module, go to the IMM address of your Flex System Manager node.
    • From here click ‘remote control’, then launch the console in multi-user mode (this is important; otherwise, if for some reason you get locked out of your session, you will not be able to launch another)
    • Power on the FSM
    • The first step is to choose your chassis network configuration (best practice here is to use separate networks for internal and external management of the chassis)

    The next step is to choose your DNS configuration. The FSM should have already found your DNS servers but if it has not then enter them.

    The next step is to configure the chassis internal network. You will choose ONLY IPv6 for the internal management network (Eth0). Copy and paste the IPv6 address into the field and enter 64 for the prefix length. Make sure there is no space after the paste, and then click Add.


    Next you will assign an IPv4 address to the external management port of ETH1.


    Next choose your hostname (make sure this is in your DNS server). You will not need an IPv6 address unless you are using one in your core network.


    Add your DNS servers if they have not already been discovered.


    DO NOT CHECK ‘PERFORM NETWORK VALIDATION’. Verify your settings and click Finish. Allow at least 1-2 minutes for the setup to complete. Then the FSM will start up for the first time, but it will need to reboot itself first. Allow this to run for up to 30 minutes.


    Setup will run for upwards of 30 minutes. Next you will need to check for updated FSM code before you manage the chassis (see below)

    STEP2 - Managing your Chassis

    To manage your chassis within the FSM perform the following

    Before we manage the chassis, be sure there is a pingable IPv6 path between the CMM and Eth0 of the FSM. The easiest way to do this is to go to the network configuration of the FSM and copy the first 4 octet groups (the 64-bit prefix) from Eth0, then assign a static IPv6 address to the CMM using that prefix plus the last 4 octet groups of the CMM's self-assigned IPv6 address. (You will find this under Chassis Management Module Management -> Network -> IPv6.) This puts the CMM and Eth0 of the FSM onto the same IPv6 subnet.

    To verify this was done correctly, SSH to the FSM and run ping6 against the CMM's new static address.
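    The prefix splice described above (first 4 octet groups from the FSM's Eth0 address, last 4 from the CMM's self-assigned address) can be sketched in shell. Both addresses below are made-up examples, not values from any real chassis:

```shell
#!/bin/sh
# Build a static CMM IPv6 address on the FSM's Eth0 /64 subnet.
fsm_eth0=fd8c:215d:178e:c0de:3640:b5ff:fe0c:5001  # example FSM Eth0 address
cmm_auto=fe80::5ef3:fcff:fe25:e401                # example CMM self-assigned address

prefix=$(printf '%s' "$fsm_eth0" | cut -d: -f1-4)  # 64-bit network prefix
suffix=${cmm_auto#fe80::}                          # interface identifier
static="$prefix:$suffix"
echo "assign to CMM: $static"
```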

    • From the home tab click select chassis to manage

    • From the next page click discover new chassis 

    Once you see success, click Close, then check the box next to the newly discovered chassis and click ‘Manage’. The status column should show the chassis as ‘unmanaged’. Also be sure to check both check boxes; the second box sets IPv6 addresses on the chassis components.


    You will then be asked for credentials; these are the credentials of the CMM. It will also ask you to set a recovery password for the CMM. It is CRITICAL that this password is saved, in case the FSM is lost and cannot authenticate to the CMM using LDAP. Once the chassis is managed, you will want to run inventory.

    To run inventory on your new chassis perform the following.
    • Right click on your newly discovered chassis and highlight ‘Inventory’ then click ‘Collect Inventory’.
    • Click ‘Run now’ when the job pops up.
    Note: I like to display the properties of the job so I can monitor the progress and see if there were errors.

    STEP3 - Updating the FSM

    There are two ways to update your FSM. First I will outline updating via the FSM (Flex System Manager) GUI.

    Make sure your CMM is at the latest firmware available BEFORE updating the FSM code. (In some cases, if the CMM is too far back-leveled, you could lose management of the chassis by upgrading the FSM too far ahead.) The simplest way to check the CMM firmware status is to make sure compliance is running on the chassis for ‘All System x and BladeCenter updates’ and then run inventory on the CMM. If an update is available it will show as a compliance warning, and it is easily updated from the FSM GUI.

    https://www-304.ibm.com/software/brandcatalog/puresystems/centre/update?uid=S_PUREFLEX

    The above site will also give you information about the newest FSM updates and the compatible firmware for all of the available nodes INCLUDING the CMM.

    Click here to download v1.30 Best Practices Doc

    • On the home tab click check and update Flex System Manager

    • Allow the search to find the FSM updates
    • If there are new updates check the box and click install.
    • Walk through the wizard and at the end launch the job.

    • Now log back in and run inventory on ALL SYSTEMS. This could take quite a long time depending on how many MEPs (managed end points) you have.
    • Lastly, check for FSM updates again as shown above. If it says you’re up to date, then you have successfully upgraded your FSM!
    Note: If the update fails the first time, reboot the FSM, and then try the update again from the GUI.

    Note: You can check the progress of the upgrade from the console of the FSM. To do this, go to the IMM of the FSM and launch the remote control in multi-user mode to watch the upgrade run.

    STEP4 - Setup FSM Backup and Restore

    There are 3 different ways to backup your FSM. I will quickly outline all 3.
    • Backing up locally - this method is not recommended because the file-space gets locked after the backup and cannot be moved to an off-site location. So if you lose the FSM appliance, you also lose your backup.
    • Backup to External USB- This method will allow you to backup to an external hard drive that is connected to the front of the FSM via the external USB ports. Also please note that the supported Filesystem formats are ext3, ext4, and vfat.
    • Backup over SFTP - This is the IBM best practice method for backing up your FSM.
    Note: The backup size can vary but can be upwards of 30 GB. Ensure whichever backup method you use has adequate space to hold the backup.
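    A quick pre-flight space check on the backup target can save a failed backup job. This sketch assumes a 30 GB requirement and uses the current directory as a stand-in for your USB mount point or staging path:

```shell
#!/bin/sh
# Check free space on the backup target before starting an FSM backup.
target=.                         # substitute your USB mount or staging path
need_kb=$((30 * 1024 * 1024))    # ~30 GB expressed in KiB

avail_kb=$(df -Pk "$target" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$need_kb" ]; then
  echo "WARNING: only ${avail_kb} KiB free on $target; backup may fail"
else
  echo "OK: ${avail_kb} KiB free on $target"
fi
```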


    1. Go to the Home tab -> Administration
    2. Scroll down and under Serviceability Tasks -> click ‘Backup and Restore’
    3. Click ‘Backup now’ or you can schedule your backups
    4. Pick your backup method and allow the job to run.

    Restoring the FSM
    • VIA USB
      1. Insert a USB device into the USB port on the management node. The USB device is mounted automatically.
      2. Open a CLI prompt.
      3. Use the restore -l usb command to restore the image from the USB drive.
    • VIA SFTP
      1. Open a CLI prompt.
      2. Use the restore command with your SFTP server address, credentials, and the path to the backup file (see the FSM documentation for the exact syntax) to restore the image from the SFTP server.
    • VIA LOCAL
      1. Open a CLI prompt.
      2. Use the restoreHDD file name command, where file name is the name of the backup file, to restore the image from the hard disk drive.
    Note: If you completely lose access to the management node, you may need to reinstall from the recovery partition. Here are the steps to do so.

    About this task

    This procedure returns the management node and management software to factory defaults, and destroys data on the system (but not backups stored on the hard disk drive). After the recovery process is complete, you must configure the system with the Management Server Setup wizard or restore the system from a backup image.

    Procedure

    1. Restart the management node.
    2. When the firmware splash screen is displayed, press F12. The setup menu is displayed. The screen displays confirmation that F12 has been pressed.
    3. Select Recovery partition.
    4. When the boot options screen is displayed, select Full system recovery. After approximately 30 minutes, the recovery process ends and the Management Server Setup wizard opens.
    5. Either complete the Management Server Setup wizard or restore a previous configuration from a backup.

    STEP5 - Updating Managed End Points using FSM

    There are 2 simple steps to updating managed end points (MEPs)
      1. Acquire updates
      2. Show and install updates
     Acquiring Updates - 
    • First in the left pane go to ‘Release Management’ then ‘Updates’

    • Scroll down and click ‘Acquire Updates'
    • From the screen above you can pick which specific updates you want to search for.
    • Select and add your updates
    • Run the job and watch the logs to see new updates being pulled in.
    The next step is Show and Install the updates that you found in the previous steps.
    • From the Updates menu scroll down and choose ‘Show and Install’ Updates
    • Browse to choose the system you wish to check for updates for as seen below
    • Check the update you wish to install and click ‘Install’
    • There will be a brief wizard confirming that the system you wish to install to is correct, and possibly asking whether you would like an automatic restart of the system if one is needed.
    • Allow the job to run and check the log to make sure it ran properly

    STEP6 - Compliance Policy Configuration

    In order for this to work properly, you will need to set up a job that runs inventory on the system you wish to set a compliance policy on before the compliance check runs (typically once a week will suffice). Use the following directions to set up compliance policies on critical systems.

    • First go to Release Management -> Updates
    • Next click Change Compliance Policies. From here, click Browse to pick your system.
    • Navigate and find your system (easiest way is to pick the All Systems group and search for you system) 
    • Next click Show Compliance Policies (this allows you to add a compliance policy and also see whether another policy is already in place)
    • Now click Add and choose the update that applies (if you are unsure choose ALL UPDATES)
    • Now click Save.

    If you have an inventory job set up and the compliance policy was set properly, you should see the following once compliance is configured.




    STEP7 - Configuring Monitors and Thresholds

    For this article I will be using a group that represents my ESXi Cluster.
    • From the FSM Explorer View go to Monitors -> Monitors. (if you are using FSM code 1.2.1 this will jump you to the old interface)


     

    • From the new window first choose ‘browse’ and pick the system you want to configure monitors and thresholds. 
    • Next choose your monitor group and click ‘Show Monitors’ 
    • Next click the Monitor you wish to activate and configure.
    • Next you will configure your thresholds.
    • Now you should see your new Monitor activated and reading data.

    Troubleshooting the Flex System Manager

    STEP1 - Unlocking User Accounts

    Sometimes the FSM administrator account can lock itself out after 20 failed login attempts. To unlock an account, do the following:
    1. SSH in the FSM using the pe (product engineering) account. (This password should be the same as the USERID password)
    2. Run smcli unlockuser -u USERID
    3. Now try to login again.
    Note: If the USERID account keeps getting locked out please open a ticket with IBM Support to correct this issue.

    Reset User Password

    NOTE: You CANNOT reuse a previous password; doing so will cause an error to be displayed.
    1. SSH in to the FSM
    2. Run the following command. smcli chuserpwd [-v] {-u user_name}{-o existing_password}{-p new_password}
    Note: if you receive the following error please open a ticket with IBM support.
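A worked example of the command, shown as a dry run with hypothetical placeholder values (USERID and the two passwords below are illustrations only; -o takes the existing password, -p the new one):

```shell
# Build the password-reset command with placeholder values; the echo makes
# this a dry run -- remove it to actually execute on the FSM.
user=USERID                      # account to reset (hypothetical value)
cmd="smcli chuserpwd -u $user -o OldPassw0rd -p NewPassw0rd"
echo "$cmd"
```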

    STEP2 - Restart FSM (Software)

    If at any time the FSM GUI becomes buggy, or you have lost the tab on the left side of the page that opens the contents tree, the simplest fix is a software restart of the FSM server. To do this, perform the following.
    1. Log into the FSM via SSH
    2. Enter the command 'smstop'
    3. Once this returns, enter 'smstart'
    4. To watch the status of the software restart, enter 'smstatus -r' (the -r will refresh any time the status changes).
    5. When it says 'Active', the FSM is back up; however, it may still take at least 5 minutes for all features to be operational.
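The restart sequence above can be wrapped in one helper; this is only a sketch built from the smstop/smstart/smstatus commands named in the steps, not an official FSM script:

```shell
# Restart the FSM software stack and then watch its status.
# smstop/smstart/smstatus are the FSM's own CLI tools from the steps above.
fsm_restart() {
  smstop   || return 1   # stop the FSM software
  smstart  || return 1   # start it again
  smstatus -r            # refresh until it reports 'Active'
}
```

Call `fsm_restart` from an SSH session on the FSM; remember that full functionality may still lag the 'Active' status by several minutes.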


    STEP3 - Restart FSM (Hardware)

    There may be times when support asks you to reboot the FSM, for example after making changes to files once root access has been granted. The easiest way, without fumbling through the GUI, is a hardware restart from the command line. Follow the steps below to accomplish this.
    1. First log into the FSM. 
    2. Run 'smshutdown -t now -r'    (Below are the flag definitions)

     

    STEP4 - Recovering the Flex Systems Manager Node from Base Media

    NOTE: this recovery should only be needed in SERIOUS situations, for example after replacement of the motherboard or some other significant hardware failure.

    NOTE 2: If you have any issues during this process getting the recovery to run, then try to reformat the RAID array by deleting and recreating the array in the LSI configuration utility.

    Things you need: 
    1. Latest recovery media on a DVD
    2. An external DVD drive (preferably an IBM supported as seen in the link below)

    • You may need to reactivate your RAID arrays. If so, here are the steps; if not, move on to the next step. 
    • Press Ctrl+C to enter the LSI configuration utility during the legacy initialization boot sequence.
    • Verify your RAID arrays are online and optimal by going to RAID Properties -> View Existing Volumes -> Manage Volumes -> Activate Volume. This should bring your RAID array back online.

    • Next go to SAS Topology and verify your ATA drive is the alternate boot device.
    • Next attach the first recovery ISO to the external DVD drive.
    • Reboot the FSM and, during the UEFI splash screen, press F12 to select the USB CD/DVD device.
    • Allow the IBM customized media to load, then press '2' to recover the management node, '2' to recover the complete system, and 'y'; then wait for the recovery process to begin!

    • This process could take up to 3 hours.
    • Once completed, reboot, and at the UEFI splash screen press 'F12' and select the recovery partition.
    • Select System Recovery and allow the recovery to run. Once complete refer to the FSM First Time Setup article above!
    Please go through the link below if you need to configure IBM v7000 Storage.

    IBM v7000 Storage Step-by-Step Configuration Guide

      RAC to RAC Data Guard Configuration: Oracle Database 12c


      This step-by-step guide will take you through installing and configuring Oracle Grid Infrastructure 12c and Database 12c, including RAC to RAC Data Guard and Data Guard Broker configuration in a Primary and Physical Standby environment for high availability.

      Prerequisites

      You need to download the following software if you don't have it already.

      1.    Oracle Enterprise Linux 6 (64-bit) or Red Hat Enterprise Linux 6 (64bit)
      2.    Oracle Grid Infrastructure 12c (64-bit)
      3.    Oracle Database 12c (64-bit)

         

        Environment

        You need four (Physical or Virtual) machines with 2 network adapters and at least 2GB memory installed on each machine.


         

        Installing Oracle Enterprise Linux 6

        To begin installation, power on your first machine booting from Oracle Linux media and install it as a basic server. More specifically, it should be a server installation with a minimum of 4GB swap, a separate partition for /u01 with at least 20GB of space, the firewall disabled, SELinux set to permissive, and the following package groups installed.




        Base System > Base
        Base System > Compatibility libraries
        Base System > Hardware monitoring utilities
        Base System > Large Systems Performance
        Base System > Network file system client
        Base System > Performance Tools
        Base System > Perl Support
        Servers > Server Platform
        Servers > System administration tools
        Desktops > Desktop
        Desktops > Desktop Platform
        Desktops > Fonts
        Desktops > General Purpose Desktop
        Desktops > Graphical Administration Tools
        Desktops > Input Methods
        Desktops > X Window System
        Applications > Internet Browser
        Development > Additional Development
        Development > Development Tools


        If you are on physical machines, you have to install all four machines one by one; on a virtual platform, you can instead clone your first machine, changing only the IP addresses and hostnames of the cloned machines. 

        Click Reboot to finish the installation.


         



        Preparing Oracle Enterprise Linux 6

        Since we have completed the Oracle Linux installation, we now need to prepare our Linux machines for Grid Infrastructure and Database installation. Make sure an internet connection is available to perform the following tasks.

        You need to set up the network (IP address, netmask, gateway, DNS and hostname) on all four machines according to your environment. In our case, we have the following settings for our lab environment.



        PRIMARY NODE: PDBSRV1

        [root@PDBSRV1 ~]# vi /etc/sysconfig/network

        NETWORKING=yes
        HOSTNAME=PDBSRV1.TSPK.COM
        GATEWAY=192.168.10.1

        Save and close

        [root@PDBSRV1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.10.100
        NETMASK=255.255.255.0
        GATEWAY=192.168.10.1
        DNS1=192.168.10.1
        DOMAIN=TSPK.COM
        DEFROUTE=yes

        Save and close

        [root@PDBSRV1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.1.100
        NETMASK=255.255.255.0

        Save and close



        Add the following entries in /etc/hosts file on PDBSRV1


        [root@PDBSRV1 ~]# vi /etc/hosts

        # Public
        192.168.10.100  pdbsrv1.tspk.com        pdbsrv1
        192.168.10.101  pdbsrv2.tspk.com        pdbsrv2

        # Private
        192.168.1.100   pdbsrv1-prv.tspk.com    pdbsrv1-prv
        192.168.1.101   pdbsrv2-prv.tspk.com    pdbsrv2-prv

        # Virtual
        192.168.10.103  pdbsrv1-vip.tspk.com    pdbsrv1-vip
        192.168.10.104  pdbsrv2-vip.tspk.com    pdbsrv2-vip

        # SCAN
        #192.168.10.105 pdbsrv-scan.tspk.com    pdbsrv-scan
        #192.168.10.106 pdbsrv-scan.tspk.com    pdbsrv-scan
        #192.168.10.107 pdbsrv-scan.tspk.com    pdbsrv-scan

        Save and close
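Before moving on, the hosts block above can be sanity-checked; the sketch below (using the lab entries from this guide, and assuming the three-field "IP FQDN alias" layout) flags any malformed line:

```shell
# Check a hosts-style block: every non-comment line should carry exactly
# three fields (IP, FQDN, short alias).
hosts_block='
192.168.10.100  pdbsrv1.tspk.com        pdbsrv1
192.168.10.101  pdbsrv2.tspk.com        pdbsrv2
192.168.1.100   pdbsrv1-prv.tspk.com    pdbsrv1-prv
192.168.1.101   pdbsrv2-prv.tspk.com    pdbsrv2-prv
192.168.10.103  pdbsrv1-vip.tspk.com    pdbsrv1-vip
192.168.10.104  pdbsrv2-vip.tspk.com    pdbsrv2-vip
'
bad=$(printf '%s\n' "$hosts_block" | awk 'NF && $1 !~ /^#/ && NF != 3')
total=$(printf '%s\n' "$hosts_block" | awk 'NF && $1 !~ /^#/ {n++} END {print n+0}')
echo "entries=$total malformed=${bad:-none}"
```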


        PRIMARY NODE: PDBSRV2

        [root@PDBSRV2 ~]# vi /etc/sysconfig/network

        NETWORKING=yes
        HOSTNAME=PDBSRV2.TSPK.COM
        GATEWAY=192.168.10.1

        Save and close

        [root@PDBSRV2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.10.101
        NETMASK=255.255.255.0
        GATEWAY=192.168.10.1
        DNS1=192.168.10.1
        DOMAIN=TSPK.COM
        DEFROUTE=yes

        Save and close

        [root@PDBSRV2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.1.101
        NETMASK=255.255.255.0

        Save and close

        Add the following entries in /etc/hosts file on PDBSRV2

        [root@PDBSRV2 ~]# vi /etc/hosts

        # Public
        192.168.10.100  pdbsrv1.tspk.com        pdbsrv1
        192.168.10.101  pdbsrv2.tspk.com        pdbsrv2

        # Private
        192.168.1.100   pdbsrv1-prv.tspk.com    pdbsrv1-prv
        192.168.1.101   pdbsrv2-prv.tspk.com    pdbsrv2-prv

        # Virtual
        192.168.10.103  pdbsrv1-vip.tspk.com    pdbsrv1-vip
        192.168.10.104  pdbsrv2-vip.tspk.com    pdbsrv2-vip

        # SCAN
        #192.168.10.105 pdbsrv-scan.tspk.com    pdbsrv-scan
        #192.168.10.106 pdbsrv-scan.tspk.com    pdbsrv-scan
        #192.168.10.107 pdbsrv-scan.tspk.com    pdbsrv-scan

        Save and close.

        Now execute the following commands on both primary nodes PDBSRV1 and PDBSRV2

        [root@PDBSRV1 ~]# hostname pdbsrv1.tspk.com
        [root@PDBSRV2 ~]# hostname pdbsrv2.tspk.com

        [root@PDBSRV1 ~]# service network reload
        [root@PDBSRV2 ~]# service network reload



        STANDBY NODE: SDBSRV1

        [root@SDBSRV1 ~]# vi /etc/sysconfig/network

        NETWORKING=yes
        HOSTNAME=SDBSRV1.TSPK.COM
        GATEWAY=192.168.10.1

        Save and close

        [root@SDBSRV1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.10.110
        NETMASK=255.255.255.0
        GATEWAY=192.168.10.1
        DNS1=192.168.10.1
        DOMAIN=TSPK.COM
        DEFROUTE=yes

        Save and close

        [root@SDBSRV1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.1.110
        NETMASK=255.255.255.0

        Save and close

        Add the following entries in /etc/hosts file on SDBSRV1

        [root@SDBSRV1 ~]# vi /etc/hosts

        # Public
        192.168.10.110  sdbsrv1.tspk.com        sdbsrv1
        192.168.10.111  sdbsrv2.tspk.com        sdbsrv2

        # Private
        192.168.1.110   sdbsrv1-prv.tspk.com    sdbsrv1-prv
        192.168.1.111   sdbsrv2-prv.tspk.com    sdbsrv2-prv

        # Virtual
        192.168.10.113  sdbsrv1-vip.tspk.com    sdbsrv1-vip
        192.168.10.114  sdbsrv2-vip.tspk.com    sdbsrv2-vip

        # SCAN
        #192.168.10.115 sdbsrv-scan.tspk.com    sdbsrv-scan
        #192.168.10.116 sdbsrv-scan.tspk.com    sdbsrv-scan
        #192.168.10.117 sdbsrv-scan.tspk.com    sdbsrv-scan

        Save and close

        STANDBY NODE: SDBSRV2

        [root@SDBSRV2 ~]# vi /etc/sysconfig/network

        NETWORKING=yes
        HOSTNAME=SDBSRV2.TSPK.COM
        GATEWAY=192.168.10.1

        Save and close

        [root@SDBSRV2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.10.111
        NETMASK=255.255.255.0
        GATEWAY=192.168.10.1
        DNS1=192.168.10.1
        DOMAIN=TSPK.COM
        DEFROUTE=yes

        Save and close

        [root@SDBSRV2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.1.111
        NETMASK=255.255.255.0

        Save and close


        Add the following entries in /etc/hosts file on SDBSRV2

        [root@SDBSRV2 ~]# vi /etc/hosts

        # Public
        192.168.10.110  sdbsrv1.tspk.com        sdbsrv1
        192.168.10.111  sdbsrv2.tspk.com        sdbsrv2

        # Private
        192.168.1.110   sdbsrv1-prv.tspk.com    sdbsrv1-prv
        192.168.1.111   sdbsrv2-prv.tspk.com    sdbsrv2-prv

        # Virtual
        192.168.10.113  sdbsrv1-vip.tspk.com    sdbsrv1-vip
        192.168.10.114  sdbsrv2-vip.tspk.com    sdbsrv2-vip

        # SCAN
        #192.168.10.115 sdbsrv-scan.tspk.com    sdbsrv-scan
        #192.168.10.116 sdbsrv-scan.tspk.com    sdbsrv-scan
        #192.168.10.117 sdbsrv-scan.tspk.com    sdbsrv-scan

        Save and close.

        Now execute the following commands on both standby nodes SDBSRV1 and SDBSRV2

        [root@SDBSRV1 ~]# hostname sdbsrv1.tspk.com
        [root@SDBSRV2 ~]# hostname sdbsrv2.tspk.com

        [root@SDBSRV1 ~]# service network reload
        [root@SDBSRV2 ~]# service network reload


        Note: You need to create "A" records for the following entries in your DNS server to resolve the SCAN names of both the Primary and Standby sites.

        PRIMARY DATABASE SCAN NAME
        192.168.10.105 pdbsrv-scan.tspk.com
        192.168.10.106 pdbsrv-scan.tspk.com
        192.168.10.107 pdbsrv-scan.tspk.com

        STANDBY DATABASE SCAN NAME
        192.168.10.115 sdbsrv-scan.tspk.com
        192.168.10.116 sdbsrv-scan.tspk.com
        192.168.10.117 sdbsrv-scan.tspk.com
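Each SCAN name needs exactly three A-records. As a quick cross-check of the table above, this sketch counts the addresses registered per SCAN name (names and IPs are this guide's lab values):

```shell
# Count A-records per SCAN name; each must resolve to three addresses.
scan_records='
192.168.10.105 pdbsrv-scan.tspk.com
192.168.10.106 pdbsrv-scan.tspk.com
192.168.10.107 pdbsrv-scan.tspk.com
192.168.10.115 sdbsrv-scan.tspk.com
192.168.10.116 sdbsrv-scan.tspk.com
192.168.10.117 sdbsrv-scan.tspk.com
'
pcount=$(printf '%s\n' "$scan_records" | awk '$2=="pdbsrv-scan.tspk.com"{n++} END{print n+0}')
scount=$(printf '%s\n' "$scan_records" | awk '$2=="sdbsrv-scan.tspk.com"{n++} END{print n+0}')
echo "pdbsrv-scan=$pcount sdbsrv-scan=$scount"
```

Once the records exist, `nslookup pdbsrv-scan.tspk.com` from any node should return the same three addresses.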


        Now, execute the following commands on all four nodes to install and update the packages required for grid and database installation.

        yum install compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel -y

        yum install kmod-oracleasm oracleasm-support -y

        rpm -Uvh http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el6.x86_64.rpm

        yum install oracle-rdbms-server-12cR1-preinstall -y

        When you are done with the above commands, perform the following steps on all four nodes.

        vi /etc/selinux/config
        SELINUX=permissive

        Save and close

        chkconfig iptables off
        service iptables stop

        chkconfig ntpd off
        service ntpd stop
        mv /etc/ntp.conf /etc/ntp.conf.orig
        rm /var/run/ntpd.pid

        mkdir -p /u01/app/12.1.0/grid
        mkdir -p /u01/app/oracle/product/12.1.0/db_1
        chown -R oracle:oinstall /u01
        chmod -R 775 /u01/
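The layout above can be rehearsed safely in a scratch directory first; this sketch mirrors the guide's paths under a temporary root (the chown to oracle:oinstall is omitted because it requires the oracle user to exist):

```shell
# Recreate the grid/database directory layout under a throwaway root.
root=$(mktemp -d)
mkdir -p "$root/u01/app/12.1.0/grid"
mkdir -p "$root/u01/app/oracle/product/12.1.0/db_1"
chmod -R 775 "$root/u01"
created_ok=0
[ -d "$root/u01/app/12.1.0/grid" ] && \
[ -d "$root/u01/app/oracle/product/12.1.0/db_1" ] && created_ok=1
echo "created_ok=$created_ok"
```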

        Set the same password for user oracle on all four nodes by executing the following command
         
        passwd oracle

        Set the environment variables on all four nodes; you need to change the node-specific values (such as ORACLE_HOSTNAME and ORACLE_SID) on each node accordingly.  

        vi /home/oracle/.bash_profile

        # Oracle Settings
        export TMP=/tmp
        export TMPDIR=$TMP
        export ORACLE_HOSTNAME=pdbsrv1.tspk.com
        export DB_NAME=PDBRAC
        export DB_UNIQUE_NAME=PDBRAC
        export ORACLE_BASE=/u01/app/oracle
        export GRID_HOME=/u01/app/12.1.0/grid
        export DB_HOME=$ORACLE_BASE/product/12.1.0/db_1
        export ORACLE_HOME=$DB_HOME
        export ORACLE_SID=PDBRAC1
        export ORACLE_TERM=xterm
        export BASE_PATH=/usr/sbin:$PATH
        export PATH=$ORACLE_HOME/bin:$BASE_PATH

        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
        export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

        alias grid_env='. /home/oracle/grid_env'
        alias db_env='. /home/oracle/db_env'

        Save and close

        vi /home/oracle/grid_env

        export ORACLE_SID=+ASM1
        export ORACLE_HOME=$GRID_HOME

        export PATH=$ORACLE_HOME/bin:$BASE_PATH
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
        export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

        Save and close

        vi /home/oracle/db_env

        export ORACLE_SID=PDBRAC1
        export ORACLE_HOME=$DB_HOME

        export PATH=$ORACLE_HOME/bin:$BASE_PATH
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
        export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

        Save and close
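The grid_env and db_env aliases simply swap ORACLE_SID and ORACLE_HOME between the two homes. A minimal, self-contained sketch of the same switching (paths and SIDs as in the guide):

```shell
# Environment switching between grid and database homes, as functions.
GRID_HOME=/u01/app/12.1.0/grid
DB_HOME=/u01/app/oracle/product/12.1.0/db_1
grid_env() { export ORACLE_SID=+ASM1;   export ORACLE_HOME=$GRID_HOME; }
db_env()   { export ORACLE_SID=PDBRAC1; export ORACLE_HOME=$DB_HOME; }
grid_env; echo "grid: $ORACLE_SID $ORACLE_HOME"
db_env;   echo "db:   $ORACLE_SID $ORACLE_HOME"
```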

        The environment variables from .bash_profile, grid_env and db_env on all four nodes will look similar to the image shown below.

         

        You need to increase the /dev/shm size if it is less than 4GB, using the following command; otherwise it will cause an error during the prerequisites check of the Grid installation.

        mount -o remount,size=4G /dev/shm

        To make it persistent across reboots, you need to modify /etc/fstab accordingly

        # vi /etc/fstab
        tmpfs                   /dev/shm                tmpfs   defaults,size=4G        0 0

        Save and close
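Before rebooting, it is worth confirming the entry really carries the size=4G option; a small check against the line above:

```shell
# Parse the fstab entry and confirm /dev/shm is sized to 4G.
fstab_line='tmpfs                   /dev/shm                tmpfs   defaults,size=4G        0 0'
opts=$(printf '%s\n' "$fstab_line" | awk '$2=="/dev/shm" {print $4}')
case "$opts" in *size=4G*) shm_ok=1 ;; *) shm_ok=0 ;; esac
echo "shm_ok=$shm_ok"
```

After the remount, `df -h /dev/shm` should report a 4.0G size.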


        We have already set up Openfiler as iSCSI shared storage for this lab environment. Now we need to create ASM disks on the shared storage using the following commands on primary node PDBSRV1; later we will initialize and scan the same disks on PDBSRV2.

        [root@PDBSRV1 ~]# oracleasm configure -i
        Configuring the Oracle ASM library driver.

        This will configure the on-boot properties of the Oracle ASM library
        driver.  The following questions will determine whether the driver is
        loaded on boot and what permissions it will have.  The current values
        will be shown in brackets ('[]').  Hitting <ENTER> without typing an
        answer will keep that current value.  Ctrl-C will abort.

        Default user to own the driver interface [oracle]:
        Default group to own the driver interface [dba]:
        Start Oracle ASM library driver on boot (y/n) [y]:
        Scan for Oracle ASM disks on boot (y/n) [y]:
        Writing Oracle ASM library driver configuration: done

        [root@PDBSRV1 ~]# oracleasm createdisk DISK1 /dev/sdc1
        [root@PDBSRV1 ~]# oracleasm createdisk DISK2 /dev/sdd1
        [root@PDBSRV1 ~]# oracleasm createdisk DISK3 /dev/sde1

        [root@PDBSRV1 ~]# oracleasm scandisks
        [root@PDBSRV1 ~]# oracleasm listdisks
        DISK1
        DISK2
        DISK3

        Now initialize and scan the same disks on PDBSRV2 using the following commands.

        [root@PDBSRV2 ~]# oracleasm configure -i
        Configuring the Oracle ASM library driver.

        This will configure the on-boot properties of the Oracle ASM library
        driver.  The following questions will determine whether the driver is
        loaded on boot and what permissions it will have.  The current values
        will be shown in brackets ('[]').  Hitting <ENTER> without typing an
        answer will keep that current value.  Ctrl-C will abort.

        Default user to own the driver interface [oracle]:
        Default group to own the driver interface [dba]:
        Start Oracle ASM library driver on boot (y/n) [y]:
        Scan for Oracle ASM disks on boot (y/n) [y]:
        Writing Oracle ASM library driver configuration: done

        [root@PDBSRV2 ~]# oracleasm scandisks
        [root@PDBSRV2 ~]# oracleasm listdisks
        DISK1
        DISK2
        DISK3


        Now we will create the disks on our standby node SDBSRV1; later we will initialize and scan the same disks on SDBSRV2.

        [root@SDBSRV1 ~]# oracleasm configure -i
        Configuring the Oracle ASM library driver.

        This will configure the on-boot properties of the Oracle ASM library
        driver.  The following questions will determine whether the driver is
        loaded on boot and what permissions it will have.  The current values
        will be shown in brackets ('[]').  Hitting <ENTER> without typing an
        answer will keep that current value.  Ctrl-C will abort.

        Default user to own the driver interface [oracle]:
        Default group to own the driver interface [dba]:
        Start Oracle ASM library driver on boot (y/n) [y]:
        Scan for Oracle ASM disks on boot (y/n) [y]:
        Writing Oracle ASM library driver configuration: done

        [root@SDBSRV1 ~]# oracleasm createdisk DISK1 /dev/sdc1
        [root@SDBSRV1 ~]# oracleasm createdisk DISK2 /dev/sdd1
        [root@SDBSRV1 ~]# oracleasm createdisk DISK3 /dev/sde1

        [root@SDBSRV1 ~]# oracleasm scandisks
        [root@SDBSRV1 ~]# oracleasm listdisks
        DISK1
        DISK2
        DISK3

        Now initialize and scan the same disks on SDBSRV2 using the following commands.

        [root@SDBSRV2 ~]# oracleasm configure -i
        Configuring the Oracle ASM library driver.

        This will configure the on-boot properties of the Oracle ASM library
        driver.  The following questions will determine whether the driver is
        loaded on boot and what permissions it will have.  The current values
        will be shown in brackets ('[]').  Hitting <ENTER> without typing an
        answer will keep that current value.  Ctrl-C will abort.

        Default user to own the driver interface [oracle]:
        Default group to own the driver interface [dba]:
        Start Oracle ASM library driver on boot (y/n) [y]:
        Scan for Oracle ASM disks on boot (y/n) [y]:
        Writing Oracle ASM library driver configuration: done

        [root@SDBSRV2 ~]# oracleasm scandisks
        [root@SDBSRV2 ~]# oracleasm listdisks
        DISK1
        DISK2
        DISK3


        We are done with the prerequisites on all four machines; now we move on to the grid installation.


        Installing Grid Infrastructure 12c - Primary Site

        We have completed the preparation of all four machines and are ready to start the Oracle Grid Infrastructure 12c installation. You should have either VNC or Xmanager installed on your client machine for graphical installation of the grid/database. In our case, we have a Windows 7 client machine and are using Xmanager.

        Now, copy the grid infrastructure and database software to your primary node PDBSRV1 and extract them under /opt or any other directory of your choice. In our case, we have CD-ROM media and will extract under /opt.

        Login using root user on your primary node PDBSRV1 and perform the following steps.

        # unzip -q /media/linuxamd64_12c_grid_1of2.zip -d /opt
        # unzip -q /media/linuxamd64_12c_grid_2of2.zip -d /opt

        # unzip -q /media/linuxamd64_12c_database_1of2.zip -d /opt
        # unzip -q /media/linuxamd64_12c_database_2of2.zip -d /opt

        Copy cvuqdisk-1.0.9-1.rpm to the other three nodes under /opt and install it on each node one by one

        # scp -p /opt/grid/rpm/cvuqdisk-1.0.9-1.rpm pdbsrv2:/opt
        # scp -p /opt/grid/rpm/cvuqdisk-1.0.9-1.rpm sdbsrv1:/opt
        # scp -p /opt/grid/rpm/cvuqdisk-1.0.9-1.rpm sdbsrv2:/opt

        # rpm -Uvh /opt/grid/rpm/cvuqdisk-1.0.9-1.rpm
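The three scp commands and the per-node rpm install can be driven by one loop. This is shown as a dry run (each command is echoed rather than executed); drop the echo once passwordless ssh to the nodes is working. Node names are the guide's own:

```shell
# Dry run: print the distribution/install commands for each remote node.
rpm_pkg=/opt/grid/rpm/cvuqdisk-1.0.9-1.rpm
plan=$(for node in pdbsrv2 sdbsrv1 sdbsrv2; do
  echo "scp -p $rpm_pkg ${node}:/opt"
  echo "ssh $node rpm -Uvh /opt/cvuqdisk-1.0.9-1.rpm"
done)
printf '%s\n' "$plan"
```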

        Now, log out from the root user and log in again as the oracle user to perform the grid installation on your primary node PDBSRV1

        Run grid_env to set the environment variables for the grid infrastructure installation.

        [oracle@PDBSRV1 ~]$ grid_env
        [oracle@PDBSRV1 ~]$ export DISPLAY=192.168.10.1:0.0

        Now, execute the following command from the directory you extracted the grid software into to begin the installation.

        [oracle@PDBSRV1 grid]$ /opt/grid/runInstaller

        Follow the screenshots to set up grid infrastructure according to your environment.

        Select "Skip Software Update" and click Next


        Select "Install and Configure Oracle Grid Infrastructure for a Cluster" Click Next



        Select "Configure a Standard Cluster" Click Next



        Choose "Typical Installation" Click Next



        Change the "SCAN Name", add the secondary host to the cluster, enter the oracle user password, then click Next.



        Verify the destination path, enter the password, and choose "dba" as the OSASM group. Click Next



        Select "External" for redundancy, select at least one disk, and click Next.



        Keep the default and Click Next



        Keep the default and Click Next



        It is safe to ignore this warning since I cannot add more than 4GB of memory. Click Next



        Verify the summary and, if you are happy with it, click Install.



        Now, stop when the following screen appears and do not click OK. Log in as root on PDBSRV1 and PDBSRV2 to execute the following scripts. You must execute both scripts on PDBSRV1 first.

        [root@PDBSRV1 ~]# /u01/app/oracle/product/12.1.0/db_1/root.sh
        [root@PDBSRV1 ~]# /u01/app/12.1.0/grid/root.sh 

        When you are done on PDBSRV1, execute both scripts on PDBSRV2

        [root@PDBSRV2 ~]# /u01/app/oracle/product/12.1.0/db_1/root.sh
        [root@PDBSRV2 ~]# /u01/app/12.1.0/grid/root.sh

        When done, Click OK.



        Setup will continue after successful execution of scripts on both nodes.



        Click close.



        At this point, the Grid Infrastructure 12c installation is complete. We can check the status of the installation using the following commands.

        [oracle@PDBSRV1 ~]$ grid_env
        [oracle@PDBSRV1 ~]$ crsctl stat res -t

           
        Note: If you find ora.oc4j offline, you can enable and start it manually by executing the following commands.

        [oracle@PDBSRV1 ~]$ crsctl enable ora.oc4j
        [oracle@PDBSRV1 ~]$ crsctl start ora.oc4j
        [oracle@PDBSRV1 ~]$ crsctl stat res -t 

         

        Installing Oracle Database 12c - Primary Site

        Since we have completed the grid installation, we now need to install Oracle Database 12c by executing the runInstaller command from the directory you extracted the database software into.

        [oracle@PDBSRV1 ~]$ db_env
        [oracle@PDBSRV1 ~]$ /opt/database/runInstaller

        Uncheck the security updates checkbox and click the "Next" button and "Yes" on the subsequent warning dialog. 

        Select the "Install database software only" option, then click the "Next" button.



        Accept the "Oracle Real Application Clusters database installation" option by clicking the "Next" button.



        Make sure both nodes are selected, then click the "Next" button.



        Select the required languages, then click the "Next" button.



        Select the "Enterprise Edition" option, then click the "Next" button.



        Enter "/u01/app/oracle" as the Oracle base and "/u01/app/oracle/product/12.1.0/db_1" as the software location, then click the "Next" button.



        Select the desired operating system groups, then click the "Next" button.



        Wait for the prerequisite check to complete. If there are any problems either click the "Fix & Check Again" button, or check the "Ignore All" checkbox and click the "Next" button.



        If you are happy with the summary information, click the "Install" button.



        Wait while the installation takes place.



        When prompted, run the configuration script on each node. When the scripts have been run on each node, click the "OK" button.



        Click the "Close" button to exit the installer.



        At this stage, the database installation is complete.


        Creating a Database - Primary Site

        Since we have completed the database installation, it's time to create a database by executing the following command.

        [oracle@PDBSRV1 ~]$ db_env
        [oracle@PDBSRV1 ~]$ dbca

        Select the "Create Database" option and click the "Next" button.



        Select the "Advanced Mode" option. Click the "Next" button.



        Select exactly what is shown in the image and click Next.



        Enter "PDBRAC" as the database name and keep the SID as is. 

        Click Next


        Make sure both nodes are selected and click Next



        Keep the default and Click Next



        Select "Use the Same Administrative password for All Accounts" enter the password and Click Next



        Keep the defaults and Click Next.
         

        Select "Sample Schemas" (we need them for testing purposes later) and click Next



        Increase "Memory Size" and navigate to "Sizing" tab



        Increase the "Processes" and navigate to "Character Sets" tab



        Select the following options and Click "All Initialization Parameters"



        Define "PDBRAC" in db_unique_name and click Close.

        Click Next



        Select the options below and click Next.



        If you are happy with the Summary report, click Finish.



        The database creation process has started; it will take some time to complete.



        Click Exit
        Click Close



        We have successfully created a database on the primary nodes (pdbsrv1, pdbsrv2). We can check the database status by executing the following commands.

        [oracle@PDBSRV1 ~]$ grid_env

        [oracle@PDBSRV1 ~]$ srvctl status database -d pdbrac
        Instance PDBRAC1 is running on node pdbsrv1
        Instance PDBRAC2 is running on node pdbsrv2


        [oracle@PDBSRV1 ~]$ srvctl config database -d pdbrac
        Database unique name: PDBRAC
        Database name: PDBRAC
        Oracle home: /u01/app/oracle/product/12.1.0/db_1
        Oracle user: oracle
        Spfile: +DATA/PDBRAC/spfilePDBRAC.ora
        Password file: +DATA/PDBRAC/orapwpdbrac
        Domain:
        Start options: open
        Stop options: immediate
        Database role: PRIMARY
        Management policy: AUTOMATIC
        Server pools: PDBRAC
        Database instances: PDBRAC1,PDBRAC2
        Disk Groups: DATA
        Mount point paths:
        Services:
        Type: RAC
        Start concurrency:
        Stop concurrency:
        Database is administrator managed


        [oracle@PDBSRV1 ~]$ db_env
        [oracle@PDBSRV1 ~]$ sqlplus / as sysdba

        SQL> SELECT inst_name FROM v$active_instances;

        INST_NAME
        --------------------------------------------------------------------------------
        PDBSRV1.TSPK.COM:PDBRAC1
        PDBSRV2.TSPK.COM:PDBRAC2

        SQL> exit

         

        Installing Grid Infrastructure 12c - Standby Site

        Since we have already installed all prerequisites on our standby site nodes (sdbsrv1, sdbsrv2) for the grid/database installation, we can start the grid installation straight away.

        Log in to sdbsrv1 as the oracle user and execute the following commands to begin the installation. Follow the same steps you performed during the installation on the primary site nodes, with the minor changes shown in the image below.


        [oracle@SDBSRV1 grid]$ grid_env
        [oracle@SDBSRV1 grid]$ export DISPLAY=192.168.10.1:0.0
        [oracle@SDBSRV1 grid]$ /opt/grid/runInstaller

        Enter the "SCAN Name", add the secondary node "sdbsrv2", enter the oracle user password in the "OS Password" box, and click Next


         

        Once the grid installation is complete, we can check its status using the following commands.

        [oracle@SDBSRV1 ~]$ grid_env
        [oracle@SDBSRV1 ~]$ crsctl stat res -t   

        Note: If you find ora.oc4j offline, you can enable and start it manually by executing the following commands.



        [oracle@SDBSRV1 ~]$ crsctl enable ora.oc4j
        [oracle@SDBSRV1 ~]$ crsctl start ora.oc4j
        [oracle@SDBSRV1 ~]$ crsctl stat res -t  

         

        Installing Database 12c - Standby Site

        We can start the Database 12c installation by following the same steps we performed during the installation on the primary nodes, with the minor changes shown in the images below.

        [oracle@SDBSRV1 ~]$ db_env
        [oracle@SDBSRV1 ~]$ /opt/database/runInstaller




        You do not need to run "dbca" to create a database on the standby nodes. Once the database installation is complete, we can start configuring Data Guard, on the primary nodes first.



        Data Guard Configuration - Primary Site

        Log in to PDBSRV1 as the oracle user and perform the following tasks to prepare the Data Guard configuration.

        [oracle@PDBSRV1 ~]$ db_env
        [oracle@PDBSRV1 ~]$ mkdir /u01/app/oracle/backup
        [oracle@PDBSRV1 ~]$ sqlplus / as sysdba

        alter database force logging;
        alter database open;
        alter system set log_archive_config='DG_CONFIG=(PDBRAC,SDBRAC)' scope=both sid='*';
        alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PDBRAC' scope=both sid='*';
        alter system set LOG_ARCHIVE_DEST_2='SERVICE=SDBRAC SYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=SDBRAC' scope=both sid='*';
        alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';
        alter system set LOG_ARCHIVE_MAX_PROCESSES=8 scope=both sid='*';
        alter system set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE scope=both sid='*';
        alter system set fal_server = 'SDBRAC';
        alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=spfile sid='*';
        alter database flashback ON;

        select group#,thread#,bytes from v$log;

        ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('+DATA') SIZE 50M;
        ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('+DATA') SIZE 50M;
        ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('+DATA') SIZE 50M;
        ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('+DATA') SIZE 50M;
        ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 ('+DATA') SIZE 50M;
        ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 ('+DATA') SIZE 50M;
        ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 ('+DATA') SIZE 50M;
        ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 ('+DATA') SIZE 50M;
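        The eight statements above are mechanical, so as a sketch (using the thread and group counts from this guide, not a general rule) they can be generated with a small shell loop and pasted into SQL*Plus:

```shell
# Sketch: generate the ADD STANDBY LOGFILE statements (2 threads x 4 groups, as above)
stmts=""
for thread in 1 2; do
  for group in 1 2 3 4; do
    stmts="${stmts}ALTER DATABASE ADD STANDBY LOGFILE THREAD ${thread} ('+DATA') SIZE 50M;
"
  done
done
printf '%s' "$stmts"   # review, then paste the output into SQL*Plus
```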

        select group#,thread#,bytes from v$standby_log;

        create pfile='/u01/app/oracle/backup/initSDBRAC.ora' from spfile;

        exit

        Now back up the password file from the primary database using the following commands. It will be required later during standby database configuration.

        [oracle@PDBSRV1 ~]$ grid_env
        [oracle@PDBSRV1 ~]$ asmcmd pwget --dbuniquename PDBRAC
        [oracle@PDBSRV1 ~]$ asmcmd pwcopy --dbuniquename PDBRAC '+DATA/PDBRAC/orapwpdbrac' '/u01/app/oracle/backup/orapwsdbrac'

        Now take the primary database backup using the following commands

        [oracle@PDBSRV1 ~]$ db_env
        [oracle@PDBSRV1 ~]$ rman target / nocatalog

        RMAN> run
        {
        sql "alter system switch logfile";
        allocate channel ch1 type disk format '/u01/app/oracle/backup/Primary_bkp_for_standby_%U';
        backup database;
        backup current controlfile for standby;
        sql "alter system archive log current";
        }
        RMAN> exit

        Now we need to modify the $ORACLE_HOME/network/admin/tnsnames.ora file on primary node 1, as shown in the example below.

        [oracle@PDBSRV1 ~]$ vi $ORACLE_HOME/network/admin/tnsnames.ora

        PDBRAC =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = pdbsrv-scan)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = PDBRAC)
            )
          )
        SDBRAC =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = sdbsrv-scan)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = SDBRAC)
            )
          )

        Save and close

        Copy tnsnames.ora from PDBSRV1 to the other three nodes under $ORACLE_HOME/network/admin so that all nodes share the same tnsnames.ora.

        [oracle@PDBSRV1 ~]$ scp -p $ORACLE_HOME/network/admin/tnsnames.ora pdbsrv2:$ORACLE_HOME/network/admin

        [oracle@PDBSRV1 ~]$ scp -p $ORACLE_HOME/network/admin/tnsnames.ora sdbsrv1:$ORACLE_HOME/network/admin

        [oracle@PDBSRV1 ~]$ scp -p $ORACLE_HOME/network/admin/tnsnames.ora sdbsrv2:$ORACLE_HOME/network/admin
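        The three scp commands differ only in the target node, so as a hypothetical sketch (node names and Oracle home path taken from this setup) they can be generated in one loop:

```shell
# Sketch: build the scp commands that distribute tnsnames.ora to the remaining nodes
ORACLE_HOME="${ORACLE_HOME:-/u01/app/oracle/product/12.1.0/db_1}"   # assumed Oracle home from this guide
tns_cmds=""
for node in pdbsrv2 sdbsrv1 sdbsrv2; do
  tns_cmds="${tns_cmds}scp -p ${ORACLE_HOME}/network/admin/tnsnames.ora ${node}:${ORACLE_HOME}/network/admin
"
done
printf '%s' "$tns_cmds"   # review, then run each line
```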

        Copy initSDBRAC.ora and orapwsdbrac from primary node PDBSRV1 to standby node SDBSRV1.


        [oracle@PDBSRV1 ~]$ scp /u01/app/oracle/backup/initSDBRAC.ora oracle@sdbsrv1:/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora

        [oracle@PDBSRV1 ~]$ scp /u01/app/oracle/backup/orapwsdbrac oracle@sdbsrv1:/u01/app/oracle/backup/orapwsdbrac

        Copy /u01/app/oracle/backup from primary node pdbsrv1 to standby node sdbsrv1 under the same location as primary

        [oracle@PDBSRV1 ~]$ scp -r /u01/app/oracle/backup sdbsrv1:/u01/app/oracle
         

        Data Guard Configuration - Standby Site

        Log in to SDBSRV1 and SDBSRV2 as the oracle user and perform the following tasks to prepare the standby site's Data Guard configuration.

        [oracle@SDBSRV1 ~]$ mkdir /u01/app/oracle/admin/SDBRAC/adump
        [oracle@SDBSRV2 ~]$ mkdir /u01/app/oracle/admin/SDBRAC/adump

        The following highlighted parameters need to be modified in our parameter file initSDBRAC.ora for standby database creation in a Data Guard environment.

        [oracle@SDBSRV1 ~]$ vi /u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora
         
        SDBRAC1.__data_transfer_cache_size=0
        SDBRAC2.__data_transfer_cache_size=0
        SDBRAC1.__db_cache_size=184549376
        SDBRAC2.__db_cache_size=452984832
        SDBRAC1.__java_pool_size=16777216
        SDBRAC2.__java_pool_size=16777216
        SDBRAC1.__large_pool_size=419430400
        SDBRAC2.__large_pool_size=33554432
        SDBRAC1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
        SDBRAC2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
        SDBRAC1.__pga_aggregate_target=520093696
        SDBRAC2.__pga_aggregate_target=570425344
        SDBRAC1.__sga_target=973078528
        SDBRAC2.__sga_target=922746880
        SDBRAC1.__shared_io_pool_size=0
        SDBRAC2.__shared_io_pool_size=33554432
        SDBRAC1.__shared_pool_size=335544320
        SDBRAC2.__shared_pool_size=369098752
        SDBRAC1.__streams_pool_size=0
        SDBRAC2.__streams_pool_size=0
        *.audit_file_dest='/u01/app/oracle/admin/SDBRAC/adump'
        *.audit_trail='db'
        *.cluster_database=true
        *.compatible='12.1.0.0.0'
        *.control_files='+DATA/SDBRAC/control01.ctl','+DATA/SDBRAC/control02.ctl'
        *.db_block_size=8192
        *.db_domain=''
        *.db_name='PDBRAC'
        *.db_recovery_file_dest='+DATA'
        *.db_recovery_file_dest_size=5025m
        *.db_unique_name='SDBRAC'
        *.diagnostic_dest='/u01/app/oracle'
        *.dispatchers='(PROTOCOL=TCP) (SERVICE=SDBRACXDB)'
        *.fal_server='PDBRAC'
        SDBRAC1.instance_number=1
        SDBRAC2.instance_number=2
        *.log_archive_config='DG_CONFIG=(SDBRAC,PDBRAC)'
        *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=SDBRAC'
        *.log_archive_dest_2='service=PDBRAC async valid_for=(online_logfile,primary_role) db_unique_name=PDBRAC'
        *.log_archive_format='%t_%s_%r.arc'
        *.log_archive_max_processes=8
        *.memory_target=1416m
        *.open_cursors=300
        *.processes=1024
        *.remote_login_passwordfile='EXCLUSIVE'
        *.sessions=1131
        *.standby_file_management='AUTO'
        SDBRAC2.thread=2
        SDBRAC1.thread=1
        SDBRAC2.undo_tablespace='UNDOTBS2'
        SDBRAC1.undo_tablespace='UNDOTBS1' 

        Save and close

        Now we need to create the ASM directories on standby node SDBSRV1 using the following commands.
        [oracle@SDBSRV1 ~]$ grid_env
        [oracle@SDBSRV1 ~]$ asmcmd mkdir DATA/SDBRAC
        [oracle@SDBSRV1 ~]$ asmcmd
         
        ASMCMD> cd DATA/SDBRAC
        ASMCMD> mkdir PARAMETERFILE DATAFILE CONTROLFILE TEMPFILE ONLINELOG ARCHIVELOG STANDBYLOG
         
        ASMCMD> exit
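        As a sketch, the same directory tree can be described in one loop that prints the equivalent asmcmd commands (directory names taken from this guide):

```shell
# Sketch: build the asmcmd mkdir commands for the standby ASM directory tree
base="DATA/SDBRAC"
asm_cmds=""
for d in PARAMETERFILE DATAFILE CONTROLFILE TEMPFILE ONLINELOG ARCHIVELOG STANDBYLOG; do
  asm_cmds="${asm_cmds}asmcmd mkdir ${base}/${d}
"
done
printf '%s' "$asm_cmds"   # review, then run with the grid environment set
```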

        Add a static listener configuration to the listener.ora file on the standby nodes: append an entry similar to the one below. This is needed because our standby database will be started in NOMOUNT stage, and in NOMOUNT the instance does not self-register with the listener, so you must tell the listener it is there.

        [oracle@SDBSRV1 ~]$ cp -p /u01/app/12.1.0/grid/network/admin/listener.ora /u01/app/12.1.0/grid/network/admin/listener.ora.bkp

        [oracle@SDBSRV1 ~]$ vi /u01/app/12.1.0/grid/network/admin/listener.ora

        SID_LIST_LISTENER =
        (SID_LIST =
           (SID_DESC =
               (SID_NAME = SDBRAC)
               (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1)
           )
        )

        ADR_BASE_LISTENER = /u01/app/oracle 

        Save and close

        [oracle@SDBSRV1 ~]$ scp -p /u01/app/12.1.0/grid/network/admin/listener.ora sdbsrv2:/u01/app/12.1.0/grid/network/admin/listener.ora

        Stop and start the LISTENER using the srvctl command, as shown in the example below.

        [oracle@SDBSRV1 ~]$ grid_env
        [oracle@SDBSRV1 ~]$ srvctl stop listener -listener LISTENER
        [oracle@SDBSRV1 ~]$ srvctl start listener -listener LISTENER

        [oracle@SDBSRV2 ~]$ grid_env
        [oracle@SDBSRV2 ~]$ srvctl stop listener -listener LISTENER
        [oracle@SDBSRV2 ~]$ srvctl start listener -listener LISTENER

         

        Creating physical standby database

        [oracle@SDBSRV1 ~]$ db_env
        [oracle@SDBSRV1 ~]$ sqlplus / as sysdba

        SQL> startup nomount pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora'
        SQL> exit

        Log in to primary server pdbsrv1 as the oracle user, connect to both the primary and standby databases as shown below, and run the RMAN active database duplication command.
        [oracle@PDBSRV1 ~]$ rman target sys@PDBRAC auxiliary sys@SDBRAC

        target database Password:
        connected to target database: PDBRAC (DBID=2357433135)
        auxiliary database Password:
        connected to auxiliary database: PDBRAC (not mounted)
         

        RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;
        RMAN> exit

        Once the duplication process has completed, check whether Redo Apply is working before proceeding to the next steps.

        [oracle@SDBSRV1 ~]$ db_env
        [oracle@SDBSRV1 ~]$ sqlplus / as sysdba

        ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

        The above command starts the recovery process using the standby logfiles to which the primary is writing redo, and the DISCONNECT FROM SESSION clause returns you to the SQL command line while recovery continues in the background. To verify that Redo Apply is working, run the query below to check the status of the various processes.

        select PROCESS, PID, STATUS, THREAD#, SEQUENCE# from v$managed_standby; 

        PROCESS   PID                      STATUS          THREAD#  SEQUENCE#
        --------- ------------------------ ------------ ---------- ----------
        ARCH      27871                    CONNECTED             0          0
        ARCH      27873                    CONNECTED             0          0
        ARCH      27875                    CONNECTED             0          0
        ARCH      27877                    CLOSING               2         52
        RFS       7084                     IDLE                  0          0
        RFS       7064                     IDLE                  2         53
        RFS       7080                     IDLE                  0          0
        RFS       7082                     IDLE                  0          0
        RFS       7122                     IDLE                  0          0
        RFS       7120                     IDLE                  1         76
        RFS       7136                     IDLE                  0          0
        RFS       7138                     IDLE                  0          0
        MRP0      14050                    APPLYING_LOG          2         53

         
        To check whether the primary and standby databases are in sync, execute the queries below.

        On Primary Database:

        select THREAD#, max(SEQUENCE#) from v$log_history group by thread#;

           THREAD# MAX(SEQUENCE#)
        ---------- --------------
                 1             78
                 2             53

        On Standby Database:

        SQL> select max(sequence#), thread# from v$archived_log where applied='YES' group by thread#;

        MAX(SEQUENCE#)    THREAD#
        -------------- ----------
                    78          1
                    52          2
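        As a hypothetical sketch, the two result sets above can be diffed per thread to get the apply lag in log sequences (the sample numbers below are the ones shown above; thread 2 is one sequence behind):

```shell
# Sketch: compute per-thread apply lag from the primary and standby query results
pf=$(mktemp); sf=$(mktemp)
printf '1 78\n2 53\n' > "$pf"   # primary: thread#, max(sequence#)
printf '1 78\n2 52\n' > "$sf"   # standby: thread#, max applied sequence#
# First pass stores primary sequences; second pass prints thread# and the gap
lag=$(awk 'NR==FNR {p[$1]=$2; next} {print $1, p[$1]-$2}' "$pf" "$sf")
printf '%s\n' "$lag"
rm -f "$pf" "$sf"
```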

        Create a new pfile from the spfile:

        [oracle@SDBSRV1 ~]$ db_env
        [oracle@SDBSRV1 ~]$ sqlplus / as sysdba
         

        create pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora' from spfile;
        shutdown immediate;
         

        SQL> exit

        Now remove the static listener entry that we added earlier from the listener.ora file on standby nodes sdbsrv1 and sdbsrv2 by restoring the backup, then restart the local listener.

        [oracle@SDBSRV1 ~]$ cp -p /u01/app/12.1.0/grid/network/admin/listener.ora.bkp /u01/app/12.1.0/grid/network/admin/listener.ora

        [oracle@SDBSRV1 ~]$ scp /u01/app/12.1.0/grid/network/admin/listener.ora sdbsrv2:/u01/app/12.1.0/grid/network/admin/listener.ora


        Now start the standby database using the newly created pfile. If everything is configured correctly, the instance should start.

        [oracle@SDBSRV1 ~]$ db_env
        [oracle@SDBSRV1 ~]$ sqlplus / as sysdba

        startup nomount pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora';

        ORACLE instance started.
        Total System Global Area 1358954496 bytes
        Fixed Size                  2924208 bytes
        Variable Size             469762384 bytes
        Database Buffers          872415232 bytes
        Redo Buffers               13852672 bytes
         

        alter database mount standby database;
         
        Now that the standby database has been started with the cluster parameters enabled, we need to create an spfile in a central location on the ASM diskgroup.

        create spfile='+DATA/SDBRAC/spfileSDBRAC.ora' from pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora';

        shutdown immediate;
        SQL> exit

        Now we need to check whether the standby database starts using the new spfile we created on the ASM diskgroup.

        Rename the old pfile and spfile in $ORACLE_HOME/dbs directory as shown below

        [oracle@SDBSRV1 ~]$ cd $ORACLE_HOME/dbs
        [oracle@SDBSRV1 ~]$ mv initSDBRAC.ora initSDBRAC.ora.orig
        [oracle@SDBSRV1 ~]$ mv spfileSDBRAC.ora spfileSDBRAC.ora.orig

        Now create the initSDBRAC1.ora file below on sdbsrv1 and an initSDBRAC2.ora file on sdbsrv2 under $ORACLE_HOME/dbs, each containing only the spfile entry, so that the instances start with the newly created spfile.

        [oracle@SDBSRV1 ~]$ cd $ORACLE_HOME/dbs
        [oracle@SDBSRV1 ~]$ vi initSDBRAC1.ora
        spfile='+DATA/SDBRAC/spfileSDBRAC.ora'

        Save and close

        Copy initSDBRAC1.ora to sdbsrv2 as $ORACLE_HOME/dbs/initSDBRAC2.ora

        [oracle@SDBSRV1 ~]$ scp -p $ORACLE_HOME/dbs/initSDBRAC1.ora sdbsrv2:$ORACLE_HOME/dbs/initSDBRAC2.ora
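        As a sketch of what those two pointer pfiles contain (writing to a temporary directory here rather than the real $ORACLE_HOME/dbs):

```shell
# Sketch: write the one-line pointer pfiles for each instance
dbs_dir=$(mktemp -d)   # stand-in for $ORACLE_HOME/dbs in this sketch
for n in 1 2; do
  echo "spfile='+DATA/SDBRAC/spfileSDBRAC.ora'" > "${dbs_dir}/initSDBRAC${n}.ora"
done
cat "${dbs_dir}/initSDBRAC1.ora"   # each file holds only the spfile pointer
```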
         
        Now start the database on standby node sdbsrv1, as shown in the example below.

        [oracle@SDBSRV1 ~]$ db_env
        [oracle@SDBSRV1 ~]$ sqlplus / as sysdba
         
        startup mount;

        ORACLE instance started.
        Total System Global Area 1358954496 bytes
        Fixed Size                  2924208 bytes
        Variable Size             469762384 bytes
        Database Buffers          872415232 bytes
        Redo Buffers               13852672 bytes
        Database mounted.

        select name, open_mode from v$database;

        NAME      OPEN_MODE
        --------- --------------------
        SDBRAC    MOUNTED

        show parameter spfile;

        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- -------------------------------
        spfile                               string      +DATA/SDBRAC/spfileSDBRAC.ora
         
        SQL> exit
         
        Now that the database has been started using the spfile in the shared location, we will add the database to the cluster. Execute the commands below to add the database and its instances to the cluster configuration.

        [oracle@SDBSRV1 ~]$ srvctl add database -db SDBRAC -oraclehome $ORACLE_HOME -dbtype RAC -spfile +DATA/SDBRAC/spfileSDBRAC.ora -role PHYSICAL_STANDBY -startoption MOUNT -stopoption IMMEDIATE -dbname PDBRAC -diskgroup DATA

        [oracle@SDBSRV1 ~]$ srvctl add instance -db SDBRAC -i SDBRAC1 -n sdbsrv1
        [oracle@SDBSRV1 ~]$ srvctl add instance -db SDBRAC -i SDBRAC2 -n sdbsrv2
        [oracle@SDBSRV1 ~]$ srvctl config database -d SDBRAC

        Database unique name: SDBRAC
        Database name: PDBRAC
        Oracle home: /u01/app/oracle/product/12.1.0/db_1
        Oracle user: oracle
        Spfile: +DATA/SDBRAC/spfileSDBRAC.ora
        Password file:
        Domain:
        Start options: open
        Stop options: immediate
        Database role: PHYSICAL_STANDBY
        Management policy: AUTOMATIC
        Server pools: SDBRAC
        Database instances: SDBRAC1,SDBRAC2
        Disk Groups: DATA
        Mount point paths:
        Services:
        Type: RAC
        Start concurrency:
        Stop concurrency:
        Database is administrator managed  
         
        From primary node pdbsrv1, copy the password file again to standby node sdbsrv1.

        [oracle@PDBSRV1 ~]$ scp -p /u01/app/oracle/backup/orapwsdbrac sdbsrv1:$ORACLE_HOME/dbs/orapwsdbrac

        Log in to standby node sdbsrv1 and copy the password file to the ASM diskgroup as shown below.

        [oracle@SDBSRV1 ~]$ grid_env
        [oracle@SDBSRV1 ~]$ asmcmd

        ASMCMD> pwcopy /u01/app/oracle/product/12.1.0/db_1/dbs/orapwsdbrac +DATA/SDBRAC/
        copying /u01/app/oracle/product/12.1.0/db_1/dbs/orapwsdbrac -> +DATA/SDBRAC/orapwsdbrac

        Now we need to tell the database where to find the password file, using the srvctl command as shown in the example below.

        [oracle@SDBSRV1 ~]$ db_env
        [oracle@SDBSRV1 ~]$ srvctl modify database -d SDBRAC -pwfile +DATA/SDBRAC/orapwsdbrac

        At this point we can start the standby RAC database, but before doing so, shut down the already-running instance as shown in the example below.

        [oracle@SDBSRV1 ~]$ db_env
        [oracle@SDBSRV1 ~]$ sqlplus / as sysdba

        shutdown immediate;
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
         
        SQL> exit

        Now we can start the database using the following command.

        [oracle@SDBSRV1 ~]$ srvctl start database -d SDBRAC
        [oracle@SDBSRV1 ~]$ srvctl status database -d SDBRAC
         
        Output
        Instance SDBRAC1 is running on node sdbsrv1
        Instance SDBRAC2 is running on node sdbsrv2

        Now that the standby single instance has been converted to a standby RAC database, the final step is to start the recovery (MRP) process using the following command on the standby node.

        [oracle@SDBSRV1 ~]$ sqlplus / as sysdba

        alter database recover managed standby database disconnect from session;

        SQL> exit
         
        Add an entry similar to the one below at the end of the listener.ora file on sdbsrv1 and sdbsrv2. It is required for the Data Guard broker configuration.

        [oracle@SDBSRV1 ~]$ vi /u01/app/12.1.0/grid/network/admin/listener.ora
         
        SID_LIST_LISTENER =
         (SID_LIST =
          (SID_DESC =
           (SID_NAME = SDBRAC1)
            (GLOBAL_DBNAME=SDBRAC_DGMGRL)
             (ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1)
         )
        )

         
        Save and close

        [oracle@SDBSRV1 ~]$ srvctl stop listener -listener LISTENER
        [oracle@SDBSRV1 ~]$ srvctl start listener -listener LISTENER
         
        [oracle@SDBSRV2 ~]$ vi /u01/app/12.1.0/grid/network/admin/listener.ora

        SID_LIST_LISTENER =
         (SID_LIST =
          (SID_DESC =
           (SID_NAME = SDBRAC2)
            (GLOBAL_DBNAME = SDBRAC_DGMGRL)
             (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1)
        )
        )

        Save and close

        [oracle@SDBSRV2 ~]$ srvctl stop listener -listener LISTENER
        [oracle@SDBSRV2 ~]$ srvctl start listener -listener LISTENER


        At this stage we have completed the RAC-to-RAC Data Guard configuration, but a few more steps are still needed.



        Data Guard Broker Configuration 12c

        Since our primary and standby databases are RAC, we will change the default location of the DG broker files to a centralized location, as shown in the example below.

        Log in as the oracle user on primary node pdbsrv1 and execute the commands below.

        [oracle@PDBSRV1 ~]$ grid_env
        [oracle@PDBSRV1 ~]$ asmcmd mkdir DATA/PDBRAC/DGBROKERCONFIGFILE
        [oracle@PDBSRV1 ~]$ db_env
        [oracle@PDBSRV1 ~]$ sqlplus / as sysdba

        show parameter dg_broker_config

        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- ------------------------------
        dg_broker_config_file1               string      /u01/app/oracle/product/12.1.0/db_1/dbs/dr1pdbrac.dat
        dg_broker_config_file2               string      /u01/app/oracle/product/12.1.0/db_1/dbs/dr2pdbrac.dat

        alter system set dg_broker_config_file1='+DATA/PDBRAC/DGBROKERCONFIGFILE/dr1pdbrac.dat';

        alter system set dg_broker_config_file2='+DATA/PDBRAC/DGBROKERCONFIGFILE/dr2pdbrac.dat';

        alter system set dg_broker_start=TRUE;
        alter system set LOG_ARCHIVE_DEST_2='' scope=both;


        SQL> exit

        Similarly, change the settings on Standby database server.

        [oracle@SDBSRV1 ~]$ grid_env
        [oracle@SDBSRV1 ~]$ asmcmd mkdir DATA/SDBRAC/DGBROKERCONFIGFILE
        [oracle@SDBSRV1 ~]$ db_env
        [oracle@SDBSRV1 ~]$ sqlplus / as sysdba

        alter system set dg_broker_config_file1='+DATA/SDBRAC/DGBROKERCONFIGFILE/dr1sdbrac.dat';


        alter system set dg_broker_config_file2='+DATA/SDBRAC/DGBROKERCONFIGFILE/dr2sdbrac.dat';
         

        alter system set dg_broker_start=TRUE;
        alter system set LOG_ARCHIVE_DEST_2='' scope=both;
         

        SQL> exit
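        Since the primary and standby broker settings are symmetric, a small sketch can generate all four ALTER SYSTEM statements (database names taken from this guide); compare its output against what you typed:

```shell
# Sketch: generate the dg_broker_config_file statements for both sites
broker_stmts=""
for db in PDBRAC SDBRAC; do
  lc=$(printf '%s' "$db" | tr 'A-Z' 'a-z')   # pdbrac / sdbrac for the file names
  for n in 1 2; do
    broker_stmts="${broker_stmts}alter system set dg_broker_config_file${n}='+DATA/${db}/DGBROKERCONFIGFILE/dr${n}${lc}.dat';
"
  done
done
printf '%s' "$broker_stmts"
```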
         
        Register the primary and standby databases in the broker configuration as shown in the example below.

        [oracle@PDBSRV1 ~]$ dgmgrl
         

        Welcome to DGMGRL, type "help" for information.
        DGMGRL> connect sys/password@PDBRAC
        Connected as SYSDBA.
         

        CREATE CONFIGURATION dg_config AS PRIMARY DATABASE IS PDBRAC CONNECT IDENTIFIER IS PDBRAC;
         

        Output
        Configuration "dg_config" created with primary database "PDBRAC"

        ADD DATABASE SDBRAC AS CONNECT IDENTIFIER IS SDBRAC MAINTAINED AS PHYSICAL;


        Output
        Database "SDBRAC" added


        Now we need to enable the broker configuration and check if the configuration is enabled successfully or not.

        DGMGRL> ENABLE CONFIGURATION;
        Enabled.

        DGMGRL> show configuration;
        Configuration - dg_config
          Protection Mode: MaxPerformance
          Members:
          pdbrac  - Primary database
            sdbrac - Physical standby database
        Fast-Start Failover: DISABLED
        Configuration Status:

        SUCCESS

         
        Note: If you encounter the error "ORA-16629: database reports a different protection level from the protection mode", perform the following steps.

        DGMGRL> edit configuration set protection mode as MAXPERFORMANCE;
        Succeeded.

        DGMGRL> show configuration;
        Configuration - dg_config
        Protection Mode: MaxPerformance
        Databases:
        pdbrac - Primary database
        sdbrac     - Physical standby database
        Fast-Start Failover: DISABLED
        Configuration Status:

        SUCCESS


        Once the broker configuration is enabled, the MRP process should start on the Standby database server.

        DGMGRL> show database sdbrac
        Database - sdbrac
        Role:               PHYSICAL STANDBY
        Intended State:     APPLY-ON
        Transport Lag:      0 seconds (computed 0 seconds ago)
        Apply Lag:          0 seconds (computed 0 seconds ago)
        Average Apply Rate: 39.00 KByte/s
        Real Time Query:    OFF
        Instance(s):

            sdbrac1 (apply instance)
            sdbrac2

        Database Status:
        SUCCESS


        The output of the above command shows that the MRP process has started on instance 1. You can log in to standby node sdbsrv1 and check whether MRP is running, as shown below.

        [oracle@SDBSRV1 ~]$ ps -ef | grep mrp
        oracle   26667     1  0 15:17 ?        00:00:00 ora_mrp0_sdbrac1
        oracle   27826 20926  0 15:21 pts/1    00:00:00 /bin/bash -c ps -ef | grep mrp


        Now that the MRP process is running, log in to both the primary and standby databases and check whether the logs are in sync.

        Below are some extra commands you can use to check the status of the databases.

        DGMGRL> VALIDATE DATABASE pdbrac;
        Database Role:    Primary database
        Ready for Switchover:  Yes
        Flashback Database Status:
          pdbrac:  ON

        DGMGRL> VALIDATE DATABASE sdbrac;
        Database Role:     Physical standby database
        Primary Database:  pdbrac
        Ready for Switchover:  Yes
        Ready for Failover:    Yes (Primary Running)
        Flashback Database Status:

            pdbrac:  ON
            sdbrac:  Off



        Perform the switchover from the primary database (PDBRAC) to the physical standby database (SDBRAC) at the DGMGRL prompt.

        DGMGRL> switchover to sdbrac;
        Performing switchover NOW, please wait...
        Operation requires a connection to instance "SDBRAC1" on database "sdbrac"
        Connecting to instance "SDBRAC1"...
        Connected as SYSDBA.
        New primary database "sdbrac" is opening...
        Operation requires startup of instance "PDBRAC2" on database "pdbrac"
        Starting instance "PDBRAC2"...
        ORACLE instance started.
        Database mounted.
        Database opened.
        Switchover succeeded, new primary is "sdbrac"

        DGMGRL> show configuration;

        Configuration - dg_config

          Protection Mode: MaxPerformance
          Databases:
          sdbrac - Primary database
            pdbrac - Physical standby database

        Fast-Start Failover: DISABLED
        Configuration Status:
        SUCCESS

        DGMGRL> exit

         

        Conclusion

        We have completed the Oracle 12c RAC-to-RAC installation and configuration, including the Data Guard configuration for high availability in a primary and physical standby environment.

        Harman Kardon Introduces Invoke voice-activated Speaker



        With more than sixty years of sound expertise, Harman Kardon proudly showcases a product that brings together incredibly rich audio and best-in-class design to help busy people get the most out of every moment. The Invoke voice-activated speaker brings the same passion for connectivity, sound and design you’ve become accustomed to from Harman Kardon, along with voice assistance from Cortana.

        Whether you’re in the mood to play your favorite music, manage calendars, set reminders or get updates on the latest news, Cortana helps you stay on top of it all with just the sound of your voice. Listening to your favorite Spotify playlist and need to make a call with Skype? Through HARMAN’s Sonique far-field voice recognition technology, Invoke will home in on the sound of your voice, pausing your music and allowing Cortana to assist you.

        As a technology-partner-agnostic company, HARMAN has also developed voice-activated speakers such as the JBL LINK with Google Assistant built in and the Harman Kardon Allure with Amazon’s Alexa. According to a study from NPR and Edison Research, 70% of smart speaker owners say they are listening to more audio at home since acquiring their device.

        The Harman Kardon Invoke with Cortana will be available at Best Buy, Microsoft stores and other retail locations, as well as online at HarmanKardon.com and Microsoft.com. Available for $199.95, Invoke can be purchased in Graphite and Pearl Silver.



        Save Battery Life using Power Throttling Feature on Windows 10


        Now you can improve battery life on laptops and tablets by managing power throttling of applications in Windows 10 with the latest Fall Creators Update. This feature is designed to boost battery life on portable PCs, so it’s not used on desktops or on laptops when they’re plugged in; it’s only used when a PC is running on battery power.


        To check which processes are power throttled on your Windows 10 machine, open up Task Manager. Click the “Details” tab to view a detailed list of the processes running on your machine. If you don’t see the tabs, click the “More details” option first.

        From the Details tab, right-click the headings and click “Select Columns”.


        Scroll down through the list and enable the “Power Throttling” column. Click “OK” to save your changes.


        Here you can see a Power Throttling column, which will give you information about each process’s power throttling state.

        If Power Throttling is disabled on your machine—for example, if you’re on a desktop PC or laptop that’s plugged in—you’ll just see “Disabled” in this column for every application.


        If you are on a portable PC running on battery, you’ll likely see some applications with power throttling “Enabled” and some applications with it “Disabled”.

        We observed this in action with Google Chrome. When we had Chrome minimized in the background, Windows set Power Throttling to “Enabled” for the chrome.exe processes. When we Alt+Tabbed back to Chrome and it was on our screen, Windows set Power Throttling to “Disabled” for it.


        If you want to disable power throttling on your system, just plug your portable PC into a power outlet. Power Throttling will always be disabled by default if the PC is plugged in.

        If you can’t plug in at the moment, you can click the battery icon in the notification area, also known as the system tray, and adjust the power slider to control Power Throttling and other power usage settings.

        At “Battery saver” or “Better battery”, Power Throttling will be enabled. At “Better performance”, Power Throttling will be enabled but will be less aggressive. At “Best performance”, Power Throttling will be disabled. Of course, the Best Performance setting will increase power usage and lower your battery life.


        You can also configure Windows 10 to disable Power Throttling for individual processes on your system. This is particularly useful if the auto-detection feature fails and you find Windows throttling important programs, or if a specific background process is important and you want it to get maximum CPU resources.

        To disable Power Throttling for an application, navigate to Settings > System > Battery. Click “Battery Usage by App”.

        If you don’t see a “Battery” screen here, your PC doesn’t have a battery—which means Power Throttling will never be used.


        Select the application you want to adjust here. If an application has “Decided by Windows” underneath it, that means Windows is automatically deciding whether it should be throttled or not.


        Uncheck the “Let Windows decide when this app can run in the background” and “Reduce the work app can do when it’s in the background” options here. Power Throttling will now be disabled for that application.

        Teradata Improves its Analytics Platform Adding New Capabilities


        Teradata is providing a package that addresses the problem enterprises face of bringing together all the required software to manage an analytical ecosystem from different sources and vendors.


        In a recent conference, Teradata claimed that its revamped analytics platform is designed to allow users throughout an enterprise to deploy their preferred tools and languages, at scale, across multiple data types. The Teradata Analytics Platform achieves this by embedding the analytics engine close to the data, which eliminates the need to move data and enables users to run their analytics against larger data sets with higher speed and frequency.

        Teradata has also introduced a new tool called IntelliSphere, describing it as a “single software portfolio enhancing the analytical ecosystem.” In simple terms, this means that Teradata finds out what the customer wants and then provides all the software and services needed, whether or not it’s on the Teradata rack.

        The Teradata Analytics Platform, which locates data wherever it resides in the customer’s IT system and moves the analytics to those locations, will become available later this quarter on an early-access trial basis.

        The Teradata Analytics Platform builds advanced analytics functions into the platform itself, such as path analysis, graph analysis, sessionization and machine learning algorithms. These capabilities are no longer reserved for data scientists or a select few.

        Teradata is bringing those capabilities into the warehouse and then extending the warehouse, creating an architecture underneath that can reach out to other analytic tools such as Spark, TensorFlow or Aster and make those functions available to all users of the warehouse.

        Enterprises currently struggle to bring together all the software required to manage an analytical ecosystem from multiple sources and vendors; the result can be very complicated and difficult to manage.

        The latest IntelliSphere tool offers advanced analytics at scale. Using IntelliSphere, organizations no longer need to purchase separate software applications to build and manage their ecosystem. Organizations can design their environment to realize the full potential of their data and analytics, with a guarantee that future updates can be leveraged immediately without spending on another license or paying subscription charges.

        Teradata IntelliSphere bundles ten software components, including:

            Teradata Listener
            Teradata Data Lab
            Teradata QueryGrid
            Teradata Unity
            Teradata Hybrid Cloud Manager (available later in 2017)
            Teradata Data Mover
            Multi-system Viewpoint
            Teradata Data Stream Extension
            Teradata Ecosystem Manager
            Teradata AppCenter

        As new software solutions are released in the future, they will become part of the IntelliSphere package, so customers gain access to new Teradata software products under their existing license.

        Google Announces New Online Payment Solution

        Pay with Google enables buyers to use the credit and debit cards linked with their Google account to make the payment process fast.


        Pay with Google officially went live on Oct. 23. Consumers can now pay for purchases made on the mobile Web or from within mobile applications using verified credit or debit cards saved within their Google Account.

        When buyers finish making a purchase and arrive at a participating merchant's online checkout page they will be presented with an option for paying with Google. If the buyer chooses that option, Google will send the merchant the buyer's stored payment information as well as the shipping address linked with the account. The merchant then handles the transaction like any other credit or debit card transaction, without the buyer having to do anything more.

        Pay with Google eliminates the need for buyers to fill out payment information and forms when purchasing items online. It is developed primarily to speed up the checkout process for consumers using their mobile phone or tablet to buy and pay for goods and services online.

        Google listed more than two dozen merchants that currently allow buyers to use Pay with Google for online purchases. Buyers will soon be able to use the payment option at several other merchants, as Google is in the process of updating its merchant list.

        Google had described the API as giving mobile app developers a path to provide a streamlined checkout experience for their customers. Developers only have to add a few lines of code to integrate Pay with Google into their online sales applications. Google has said it doesn't charge developers any transaction fees on payments made via Pay with Google.

        Lock user accounts when a number of failed login attempts is detected

        This step-by-step guide walks you through configuring Linux servers to lock user accounts after a predefined number of failed login attempts. This article applies to CentOS, Red Hat Enterprise Linux and Fedora distributions.

        This can be accomplished with the pam_faillock module, which temporarily locks user accounts after a predefined number of consecutive failed login attempts and keeps a record of each event. Failed login attempts are stored in per-user files under the /var/run/faillock/ directory by default.

         

        Lock User Accounts if Multiple Failed Login Detected

        These user account lock policies can be set up in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files, by adding the following entries into the auth section.

        auth    required       pam_faillock.so preauth silent audit deny=5 unlock_time=600
        auth [default=die] pam_faillock.so authfail audit deny=5 unlock_time=600


        Explanation:

            audit          –   enables user auditing.
            deny           –   the number of failed attempts (5 in this case) after which the
                               user account should be locked.
            unlock_time    –   the time (600 seconds = 10 minutes) for which the account
                               should remain locked.

        Note: The order of these lines is very important; a bad configuration can cause all user accounts to be locked out.

        The auth section in both files should have the following contents arranged in this order:

        auth  required      pam_env.so
        auth required pam_faillock.so preauth silent audit deny=5 unlock_time=600
        auth sufficient pam_unix.so nullok try_first_pass
        auth [default=die] pam_faillock.so authfail audit deny=5 unlock_time=600
        auth requisite pam_succeed_if.so uid >= 1000 quiet_success
        auth required pam_deny.so
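        Since a wrong ordering can lock everyone out, it is worth double-checking the result before closing your root session. The sketch below is an illustration only: it builds a sample file rather than touching the live /etc/pam.d files, and verifies that the preauth line comes before pam_unix.so and the authfail line after it.

```shell
# Build a sample auth section in a temp file (point PAMFILE at
# /etc/pam.d/system-auth to check a real system).
PAMFILE=$(mktemp)
cat > "$PAMFILE" <<'EOF'
auth required pam_env.so
auth required pam_faillock.so preauth silent audit deny=5 unlock_time=600
auth sufficient pam_unix.so nullok try_first_pass
auth [default=die] pam_faillock.so authfail audit deny=5 unlock_time=600
auth requisite pam_succeed_if.so uid >= 1000 quiet_success
auth required pam_deny.so
EOF

# Line numbers of the three lines whose relative order matters.
PRE=$(grep -n 'preauth' "$PAMFILE" | cut -d: -f1)
UNIXLN=$(grep -n 'pam_unix\.so' "$PAMFILE" | cut -d: -f1)
FAIL=$(grep -n 'authfail' "$PAMFILE" | cut -d: -f1)

# preauth must precede pam_unix.so, and authfail must follow it.
if [ "$PRE" -lt "$UNIXLN" ] && [ "$UNIXLN" -lt "$FAIL" ]; then
  ORDER_OK=yes
else
  ORDER_OK=no
fi
echo "pam_faillock ordering: $ORDER_OK"
rm -f "$PAMFILE"
```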


        Now you need to edit these two files.

        # vi /etc/pam.d/system-auth
        # vi /etc/pam.d/password-auth


        The default entries in the auth section of both files will look similar to the following.

        #%PAM-1.0
        # This file is auto-generated.
        # User changes will be destroyed the next time authconfig is run.
        auth required pam_env.so
        auth sufficient pam_fprintd.so
        auth sufficient pam_unix.so nullok try_first_pass
        auth requisite pam_succeed_if.so uid >= 1000 quiet
        auth required pam_deny.so


        After adding the above settings, it should appear as follows.

        #%PAM-1.0
        # This file is auto-generated.
        # User changes will be destroyed the next time authconfig is run.
        auth required pam_env.so
        auth required pam_faillock.so preauth silent audit deny=5 unlock_time=600
        auth sufficient pam_fprintd.so
        auth sufficient pam_unix.so nullok try_first_pass
        auth [default=die] pam_faillock.so authfail audit deny=5 unlock_time=600
        auth requisite pam_succeed_if.so uid >= 1000 quiet
        auth required pam_deny.so


        Then add the following pam_faillock.so entry (the last line below) to the account section in both of the above files.

        account     required      pam_unix.so
        account sufficient pam_localuser.so
        account sufficient pam_succeed_if.so uid < 500 quiet
        account required pam_permit.so
        account required pam_faillock.so

         

        Lock Root Account if Failed Login Attempts Detected

        If you want to lock the root account after multiple failed login attempts, then add the even_deny_root option to the lines in both files in the auth section as shown below.

        auth    required    pam_faillock.so preauth silent audit deny=3 even_deny_root unlock_time=600
        auth    [default=die]    pam_faillock.so  authfail  audit  deny=3 even_deny_root unlock_time=600


        When you are done with all of the above steps, restart remote access services such as SSH for the changes to take effect.

        # systemctl restart sshd  [On SystemD]
        # service sshd restart [On SysVInit]


         

        Test User Failed Login Attempts

        To test the failed-login settings, try to access your Linux machine via SSH and enter a wrong password five times, since we have configured the system to lock a user account after 5 failed login attempts. If you have defined all the settings correctly, the user will be locked out after 5 consecutive failed attempts.

         

        Monitor Failed Authentication Attempts

        You can monitor all failed authentication logs using the faillock command, which is used to display and modify the authentication failure log.

        Execute the following command from root to view particular user's failed login attempts.

        # faillock --user username

        To view all unsuccessful login attempts at once, run the faillock command without any arguments.

        # faillock 
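        faillock prints one header line per user followed by one row per recorded failure. The sketch below uses a simulated sample of that output (the timestamps and source address are made up) to show how the rows could be counted with awk:

```shell
# Simulated faillock output for one user (two recorded failures).
OUTPUT='user:
When                Type  Source                                           Valid
2017-10-20 10:15:05 RHOST 192.168.0.50                                     V
2017-10-20 10:15:09 RHOST 192.168.0.50                                     V'

# Skip the two header lines and count the non-empty failure rows.
FAILS=$(printf '%s\n' "$OUTPUT" | awk 'NR > 2 && NF { n++ } END { print n }')
echo "recorded failures: $FAILS"
```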

        To clear a particular user’s authentication failure logs, type the following command.

        # faillock --user username --reset

        To clear all failure logs at once, type the following command.

        # faillock --reset

        If you do not want certain user accounts to be locked after multiple failed login attempts, add the following entry (the pam_succeed_if.so line) just above the line where pam_faillock is first called in the auth section of both files (/etc/pam.d/system-auth and /etc/pam.d/password-auth), as shown below.

        Pass the usernames to the user in option as a colon-separated list.

        auth   required      pam_env.so
        auth [success=1 default=ignore] pam_succeed_if.so user in jhon:peter
        auth required pam_faillock.so preauth silent audit deny=5 unlock_time=600
        auth sufficient pam_unix.so nullok try_first_pass
        auth [default=die] pam_faillock.so authfail audit deny=5 unlock_time=600
        auth requisite pam_succeed_if.so uid >= 1000 quiet_success
        auth required pam_deny.so
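        The [success=1 default=ignore] action skips exactly one module (the preauth line) when the user matches the list. The membership test pam_succeed_if performs on the colon-separated list can be mimicked in shell like this (the usernames are the same example names as above):

```shell
# Mimic pam_succeed_if's "user in jhon:peter" test for a given user.
LIST="jhon:peter"
USERCHECK="peter"

# Wrap both the list and the candidate in colons so only whole
# usernames match, not substrings.
case ":$LIST:" in
  *":$USERCHECK:"*) SKIP_FAILLOCK=yes ;;
  *)                SKIP_FAILLOCK=no  ;;
esac
echo "skip faillock for $USERCHECK: $SKIP_FAILLOCK"
```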



        You are done.

        Microsoft Releases Windows 10 Insider Preview Build 17025 for PC

        The Insider preview build adds new Ease of Access settings to make your computer easier to use and fit your needs. It also groups related settings that help you see, hear or interact with your computer, so you can discover them more quickly.


        Additionally, the build improves the setting descriptions to help you more easily understand the available accessibility features. Navigate to the Ease of Access section in Settings to see what’s available to make your computer easier to use!


        Microsoft also updated the Advanced options under Settings > Apps & Features so that UWP apps that are configured to run at startup will now have a new option to see all available tasks specified by the app developer and their status.


        As Insiders from China likely know, Microsoft YaHei is the font used to display Windows UI text in the Chinese (Simplified) language. With this build, Microsoft is updating this font to improve its legibility, symmetry and appearance.

        Below is a sample of the updated font – the blue is the new version, the grey is an outline of the previous font:


        An Introduction to Univention Corporate Server



        Univention Corporate Server is a Debian GNU/Linux derivative that started in 2002. It includes the Open Source software necessary to provide Active Directory domain functionality, essentially Samba 4, Kerberos and OpenLDAP among others, and integrates them in a maintainable and sophisticated way. The vast majority of software packages are built by the Debian project. Some packages, though, are built by Univention, either because they are newer than the stable versions in the Debian project, for example the Linux kernel, Samba or OpenLDAP, or because the packages are customized with patches.

        UCS has three core features:
        • A central identity management system for users, their roles and rights.
        • An app store-like environment, which Univention calls ‘App Center’, for easy testing, provisioning, rollout and life-cycle management of applications.
        • IT infrastructure and device management

        Univention brings all those capabilities together into a single, easy-to-use Open Source product called Univention Corporate Server (UCS).

         

        Why UCS

        It is an Open Source alternative to Microsoft Windows Server, because it provides Active Directory services like Microsoft’s product and can be used for similar purposes. UCS can be part of an Active Directory domain or it can take over existing ones and migrate the data.

        UCS is also a kind of Android for servers: it can manage apps on your servers and integrate them, for example by providing a central identity management system. The apps can be operated on premises or in the cloud, which gives you more flexibility in your environment.

        There is a configuration file template system, called Univention Configuration Registry, that lets you define variables inside a variable tree which can then be used in configuration files or scripts, for example the LDAP base distinguished name. Many variables are used across different servers.

        It has a web-based management system for users, groups, roles, user policies and infrastructure services like IP address leases, name resolution and the server management itself including software update management, just to name a few. The goal of the management system is to simplify recurring tasks for the system administrator and to lower the learning curve by using a full fledged enterprise Linux system.

        Compared to other Linux distributions, UCS focuses on central IT infrastructure management and offers the necessary management interface. A UCS system can be used in one of several roles depending on its purpose. The role basically determines, for example, whether a copy of the directory service is locally available on the system.

         

        UCS App Center

        Installing enterprise applications on UCS with their default methods usually requires several manual steps from download, over installation to configuration and probably integration. This approach is appropriate for IT projects introducing a solution into organizations, because of its flexibility. But it involves too many steps for evaluation or for operation in small and mid-sized organizations where the focus is on using the solution and not keeping it functional with a dedicated technical team.

        Univention Corporate Server fills this gap, making it easy to evaluate and operate enterprise applications such as Kopano, Open-Xchange App Suite, ownCloud and many more. The download, installation and configuration steps are consolidated into a one-click installation of the app. Apps like Kopano are up and running within a few minutes. Users can be added via the web-based UCS user management and can immediately log in to the app.

        Many apps are also integrated in the UCS directory service which makes the platform the central identity provider for these applications in the environment.

        A UCS-based IT environment can be set up within one hour, offering groupware, file share and sync, backup and VPN.

         

        Docker in App Center

        The App Center uses Docker container technology in the background to encapsulate the solutions from the host system. This also simplifies deployment, since many solutions now support Docker, and it reduces the steps app providers need to take to offer their solutions on the UCS platform.

         

        UCS Free Core vs. Enterprise Edition

        UCS comes in two editions: Core and Enterprise. The Core Edition is available for free via download from the Univention website. It has the same features as the Enterprise Edition, but comes without support and a limited maintenance period. Help is provided via the forum at Univention Help.

        In contrast to the Core Edition, the Enterprise Edition comes with maintenance subscription and support. Univention offers a five to seven years lifecycle for the major versions of UCS. The price depends on the number of servers and the number of users in the environment. Depending on the Enterprise subscription level, the yearly prices range from $ 349 to $ 2,049.

         

        UCS Customers and Market

        At present there are more than 6,000 organizations all over the world using UCS daily and in production. These organizations range from small businesses with just a few users to one of Univention’s largest customers who manages more than 30 million users with Univention Corporate Server.

        One of the most important customer groups is the education market in Germany. UCS is used, for example, by several larger German cities to provide schools, students and teachers with reliable, centrally managed access to learning management systems, Wi-Fi, computers, file servers, email and to organize the integration of mobile devices. An extension from the App Center, named UCS@school, provides additional management functions needed by teachers, for example, class room management.

        A third, growing user group is private tech enthusiasts who operate UCS as a safe, open source home server, for example for mail, groupware and file exchange.


        Credit: SK

        Disaster Recovery Solution for Linux Servers - Step by Step


        Relax-and-Recover is an easy-to-set-up, maintenance-free migration and disaster recovery solution compatible with CentOS, RHEL, OEL and many other well-known Linux distributions. It can detect hardware changes and preserves the last state of the operating system, including its partitions, boot loader configuration and all system data, even when restoring backup images to dissimilar hardware.

        This step-by-step guide shows how to use Relax-and-Recover to create backups, including a bootable USB backup, for your critical Linux environment. For this tutorial we are using CentOS 7, but you are free to use the Linux distribution of your choice.

         

        Creating NFS Shares 

        First, you need to create an NFS share to store all of your Linux servers' backup images. Install the NFS utilities on the machine you will use as the backup server by running the following command with root privileges.

        yum install nfs-utils -y

        When you are done installing NFS with the above command, create the NFS share directory and make it accessible from all the Linux servers you are going to back up, using the following commands.

        mkdir /Backup 

        vi /etc/exports
        /Backup    *(fsid=0,rw,sync,no_root_squash,no_subtree_check,crossmnt)

        Save and close 
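        The options on that export line matter for rear: the backup runs as root, so the share must be writable (rw) and must not map root to an unprivileged user (no_root_squash). A small sanity check on a sample copy of the entry (a temp file, not your real /etc/exports):

```shell
# Write a sample copy of the export line to a temp file.
EXPORTS=$(mktemp)
echo '/Backup    *(fsid=0,rw,sync,no_root_squash,no_subtree_check,crossmnt)' > "$EXPORTS"

# Check that both options rear needs are present.
MISSING=''
for opt in rw no_root_squash; do
  grep -q "$opt" "$EXPORTS" || MISSING="$MISSING $opt"
done
if [ -z "$MISSING" ]; then EXPORT_OK=yes; else EXPORT_OK=no; fi
echo "export options: $EXPORT_OK"
rm -f "$EXPORTS"
```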

        Now restart the NFS service for the changes to take effect, using the following command.

        systemctl restart nfs

         

        Installing Relax-and-Recover

        To install relax-and-recover package and required dependencies, run the following command with root privileges.

        yum install rear syslinux genisoimage -y

        Once the installation is complete, open the Relax-and-Recover configuration file /etc/rear/local.conf and define the NFS backup location we created earlier.

        vi /etc/rear/local.conf

        OUTPUT=ISO
        BACKUP=NETFS
        BACKUP_URL=nfs://your-nfs-server-ip/Backup

        Save and close

         

        Creating Backup

        To take the backup image of your linux server, run the following command with root privileges.

        rear -d -v mkbackup

        This will create an ISO image of your Linux server, store it locally under the /var/lib/rear/output directory, and then automatically move all the files, along with the backup ISO image, to the NFS share.

         

        Testing Recovery with backup image

        Let's test the recovery method with the newly created backup image of our Linux server. For that, we need to burn the backup ISO image to a CD or DVD.



        Booting server with the same bootable CD present the following relax-and-recover menu screen.


        Select the first option from the boot menu, Recover localhost, and press Enter. At the login prompt that follows, log in as root; no password is required.


        Now we need to configure network settings on our system to access remote NFS Share. To add an IP address, execute the following command.

        ip addr add 192.168.0.11/24 dev enp0s3
        ip link set enp0s3 up

        To start the recovery, run following command

        rear -d -v recover


        Type 1 and press enter to select /dev/sda as your disk.

        When it asks for the disk layout, type 5 and press Enter to continue.


        It will start recovery procedure


        It will take several minutes to complete the recovery. Once it's done, reboot the system.


        You are done. Now eject the recovery CD/DVD from the system.

         

        Creating Bootable USB Backup 

        If you want to make a bootable USB with backup images of your Linux servers instead of using the NFS share, follow these steps.

        Prepare your USB media. Change /dev/sdb to the correct device in your situation. Relax-and-Recover will ‘own’ the device in this example.

        This will destroy all data on that device.
        /usr/sbin/rear format /dev/sdb

        It will ask you to confirm that you want to format the USB device: type Yes and press Enter.
        The 'format' workflow will label the USB device REAR-000.

        Now edit the /etc/rear/local.conf configuration file:

        cat > /etc/rear/local.conf <<EOF
        ### write the rescue initramfs to USB and update the USB bootloader
        OUTPUT=USB

        ### create a backup using the internal NETFS method, using 'tar'
        BACKUP=NETFS

        ### write both rescue image and backup to the device labeled REAR-000
        BACKUP_URL=usb:///dev/disk/by-label/REAR-000
        EOF


        Now you are ready to create a rescue image. We want verbose output.

        /usr/sbin/rear -v mkrescue

        Output:
        Relax-and-Recover 1.13.0 / $Date$
        Using log file: /home/jeroen/tmp/quickstart/rear/var/log/rear/rear-fireflash.log
        Creating disk layout
        Creating root filesystem layout
        WARNING: To login as root via ssh you need to setup an authorized_keys file in /root/.ssh
        Copying files and directories
        Copying binaries and libraries
        Copying kernel modules
        Creating initramfs
        Writing MBR to /dev/sdb
        Copying resulting files to usb location


        You might want to check the log file for possible errors or see what Relax-and-Recover is doing. 

        Now reboot your system and try to boot from the USB device.

        If that worked, you can dive into the advanced Relax-and-Recover options and start creating full backups. If your USB device has enough space, initiate a backup using:
        /usr/sbin/rear -v mkbackup

        That is it.

        How To Install Nginx, MariaDB and PHP (FEMP Stack) on FreeBSD 11


        This step by step guide will walk you through the steps to install and configure the FEMP Stack on FreeBSD 11.

         

        Prerequisites:

        • A physical or virtual machine with minimal installation of FreeBSD 11
        • A static IP Address configured for a network interface.
        • A regular account created with root privileges or direct access to the system using root account.
        • A DNS server maintaining A and CNAME records.

         

        Installing MariaDB Database

        To begin, we’ll first install the MariaDB database system, the FEMP component used for storing and managing the website's dynamic data. MariaDB can be installed on FreeBSD directly from the binaries provided by the Ports repositories. A simple ls search in the FreeBSD Ports databases section reveals multiple versions of MariaDB, as shown in the following command output; running the pkg package manager command displays the same results.

        ls -al /usr/ports/databases/ | grep mariadb
        pkg search mariadb


        Now we will install the latest version of the MariaDB database and client by using the pkg command as shown in the example below.

        pkg install mariadb102-server mariadb102-client

        Once the MariaDB installation has completed, run the following command to enable the MySQL server system-wide, then start the MariaDB daemon manually as shown below.

        sysrc mysql_enable="YES"
        service mysql-server start

        Now we need to secure the MariaDB database by running the mysql_secure_installation script, which sets a password for the MySQL root user, removes the anonymous user, disables remote login for the root user and deletes the test database. After choosing a strong password for the MySQL root user, answer yes to all questions, as shown in the example below. Do not confuse the MariaDB root user with the system root user: although both accounts are named root, they are not equivalent and serve different purposes, one for system administration and the other for database administration.

        /usr/local/bin/mysql_secure_installation

        NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
              SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

        In order to log into MariaDB to secure it, we'll need the current
        password for the root user.  If you've just installed MariaDB, and
        you haven't set the root password yet, the password will be blank,
        so you should just press enter here.

        Enter current password for root (enter for none):
        OK, successfully used password, moving on...

        Setting the root password ensures that nobody can log into the MariaDB
        root user without the proper authorisation.

        Set root password? [Y/n] y
        New password:
        Re-enter new password:
        Password updated successfully!
        Reloading privilege tables..
         ... Success!

        By default, a MariaDB installation has an anonymous user, allowing anyone
        to log into MariaDB without having to have a user account created for
        them.  This is intended only for testing, and to make the installation
        go a bit smoother.  You should remove them before moving into a
        production environment.

        Remove anonymous users? [Y/n] y
         ... Success!

        Normally, root should only be allowed to connect from 'localhost'.  This
        ensures that someone cannot guess at the root password from the network.

        Disallow root login remotely? [Y/n] y
         ... Success!

        By default, MariaDB comes with a database named 'test' that anyone can
        access.  This is also intended only for testing, and should be removed
        before moving into a production environment.

        Remove test database and access to it? [Y/n] y
         - Dropping test database...
         ... Success!
         - Removing privileges on test database...
         ... Success!

        Reloading the privilege tables will ensure that all changes made so far
        will take effect immediately.

        Reload privilege tables now? [Y/n] y
         ... Success!

        Cleaning up...

        All done!  If you've completed all of the above steps, your MariaDB
        installation should now be secure.

        Thanks for using MariaDB!

        When done securing the MariaDB database, test whether you can log in locally to the database as root using the following command. Once connected to the database prompt, type quit or exit to leave the database console and return to the system shell, as shown in the example below.

        mysql -u root -p
        MariaDB> quit


        Running the sockstat command in FreeBSD quickly reveals that MariaDB is open to external network connections and can be accessed remotely from any network via TCP port 3306.

        sockstat -4 -6

        To disable remote network connections to MariaDB completely, force the mysql network socket to bind only to the loopback interface by adding the following line to the /etc/rc.conf file with the command below.

        sysrc mysql_args="--bind-address=127.0.0.1"

        Afterwards, restart the MariaDB daemon to apply the changes, then run sockstat again to display the mysql network socket. This time, the MariaDB service should listen only on the localhost:3306 socket.

        service mysql-server restart
        sockstat -4 -6|grep mysql


        If you are developing a remote web application that needs access to the database on this machine, revert the MySQL socket changes by removing or commenting out the mysql_args="--bind-address=127.0.0.1" line in /etc/rc.conf and restarting the database. In that case, consider other ways to limit or disallow remote access to MySQL, such as running a local firewall that filters the IP addresses of clients needing remote login, or creating MySQL users whose grants allow login only from specific IP addresses.

         

        Installing Nginx Web Server

        The Nginx web server can be installed from the binaries provided by FreeBSD 11 Ports. A simple search through Ports repositories in the www section can show a list of what pre-compiled versions are available for Nginx software, as shown in the example below.

        ls /usr/ports/www/ | grep nginx

        Running the package manager command displays the same results, as shown in the image below.

        pkg search -o nginx


        To install the most common release of Nginx in FreeBSD, execute the following command. To avoid the confirmation prompt, add the -y flag:

        pkg install -y nginx


        After Nginx web server software has been installed on your system, you should enable and run the service by issuing the following commands.

        sysrc nginx_enable="YES"
        service nginx start


        Run the sockstat command to check whether the Nginx service has started and which network sockets it binds to. By default, it should bind to the *:80 TCP socket. You can use a grep filter to display only the sockets matching the nginx server.

        sockstat -4 -6 | grep nginx


        Open a browser on a computer in your network and navigate to your server's IP address via HTTP. If you've registered a domain name or use a local DNS server, you can instead enter your machine's fully qualified domain name in the browser's URL field. A page titled "Welcome to nginx!" with a few lines of HTML should be displayed, as shown in the following figure.


        Nginx on FreeBSD 11 stores web files in the /usr/local/www/nginx/ directory, which is a symbolic link to the nginx-dist directory. To deploy a website, copy your HTML or PHP script files into this directory. To change Nginx's default webroot, open the Nginx configuration file in the /usr/local/etc/nginx/ directory and update the root statement as shown in the example below.

        nano /usr/local/etc/nginx/nginx.conf

        This will be the new webroot path for Nginx:

        root    /usr/local/www/new_html_directory;
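        For context, here is a minimal sketch of how the root directive sits inside a server block in nginx.conf; new_html_directory is the example path from above, so adjust it to your own webroot:

```nginx
server {
    listen       80;
    server_name  localhost;

    location / {
        # Serve files from the new webroot instead of the nginx-dist symlink
        root   /usr/local/www/new_html_directory;
        index  index.html index.htm;
    }
}
```

        After editing, validate the configuration with nginx -t and restart the service to apply the change.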

         

        Install PHP Programming Language

        By default, the Nginx web server cannot directly parse PHP scripts; Nginx needs to pass the PHP code through the FastCGI gateway to the PHP-FPM daemon, which interprets and executes the PHP scripts. In order to install the PHP-FPM daemon in FreeBSD, search for available PHP pre-compiled binary packages by running the commands below.

        ls /usr/ports/lang/ | grep php
        pkg search -o php


        From the multitude of PHP versions available in FreeBSD Ports repositories, choose to install the latest version of PHP interpreter, currently PHP 7.1 release, by issuing the following command.

        pkg install php71

        In order to install some extra PHP extensions, which might be needed for deploying complex web applications, issue the below command. A list of officially supported PHP extensions can be found by visiting the following link: http://php.net/manual/en/extensions.alphabetical.php

        If you're planning to build a website based on a content management system, review the CMS documentation in order to find out the requirements for your system, especially what PHP modules or extensions are needed.

        pkg install php71-mcrypt mod_php71 php71-mbstring php71-curl php71-zlib php71-gd php71-json

        Because we are running a database server in our setup, we should also install the PHP database driver extension, which is used by PHP interpreter to connect to MariaDB database.

        pkg install php71-mysqli

        Next, update the PHP-FPM user and group to match the Nginx runtime user by editing the PHP-FPM configuration file. Change the user and group variables to www as shown in the excerpt below.

        cp /usr/local/etc/php-fpm.d/www.conf{,.backup}

        nano /usr/local/etc/php-fpm.d/www.conf

        Change the following lines to look as below.

        user = www
        group = www


        By default, the Nginx daemon runs with the privileges of the nobody system user. Change the Nginx runtime user to match the PHP-FPM runtime user by editing the /usr/local/etc/nginx/nginx.conf file and updating the following line:

        user www;


        The PHP-FPM daemon in FreeBSD listens on TCP port 9000 on localhost. To display this socket, you can use the sockstat command as shown in the example below.

        sockstat -4 -6 | grep php-fpm


        To exchange PHP scripts with PHP FastCGI gateway on 127.0.0.1:9000 network socket, open Nginx configuration file and update the PHP-FPM block as shown in the below sample.

        PHP FastCGI gateway example for Nginx:

                location ~ \.php$ {
                root               /usr/local/www/nginx;
                fastcgi_pass   127.0.0.1:9000;
                fastcgi_index  index.php;
                fastcgi_param SCRIPT_FILENAME $request_filename;   
                include        fastcgi_params;
                       }


        When you are done with all the above changes, create a configuration file for PHP based on the default production file by running the following command. You can change the PHP runtime settings by editing the variables present in php.ini file.

        ln -s /usr/local/etc/php.ini-production /usr/local/etc/php.ini
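        As an illustration of tweaking a php.ini variable, here is a minimal sketch using sed on a scratch copy (GNU sed syntax is shown; the setting names are just common examples, and on the real server you would edit /usr/local/etc/php.ini directly):

```shell
# Work on a throwaway copy rather than the live php.ini
tmp=$(mktemp)
printf 'display_errors = Off\nmemory_limit = 128M\n' > "$tmp"

# Flip display_errors on (GNU sed in-place edit; FreeBSD sed needs: sed -i '' ...)
sed -i 's/^display_errors = Off/display_errors = On/' "$tmp"

grep display_errors "$tmp"
```

        Remember that PHP-FPM must be restarted before any php.ini change takes effect.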

        Finally, in order to apply all changes made so far, enable the PHP-FPM daemon system-wide and restart PHP-FPM and Nginx services by issuing the below commands.

        sysrc php_fpm_enable=yes
        service php-fpm restart


        Test nginx configurations for syntax errors:

        nginx -t
        service nginx restart


        To get the current PHP information available for your FEMP stack in FreeBSD, create a phpinfo.php file in your server document root directory by issuing the following command.

        echo "" | tee /usr/local/www/nginx/phpinfo.php

        Now, open a browser and navigate to the phpinfo.php page by visiting your server's domain name or IP address followed by /phpinfo.php, as shown in the screenshot below.

         

        Conclusion

        We’ve successfully installed and configured a FEMP stack on FreeBSD 11. The server is now fully functional and ready for deploying dynamic web applications in your environment.

        How to Install Virtual I/O Server over the Network using Linux


        This step-by-step guide walks you through remotely installing Virtual I/O Server on IBM Power hardware using a Linux remote installation server. These instructions let you perform a network installation from a Linux server instead of using NIM on AIX, which is critical for Linux environments on IBM Power hardware where no AIX machines coexist.


        Set Up the Network Installation Server
        This section describes how to set up a remote installation server on Linux for VIOS/IVM network installation. The following description is based on Red Hat Enterprise Linux Server 6.4.


        Subnet:    172.22.10.0
        Netmask:   255.255.255.0
        Gateway:   172.22.10.1

        Host Name        IP Address      MAC Address
        reminstallsrv    172.22.10.1
        client1          172.22.10.10    00:11:25:c9:30:b7


        The following names and values are examples and should be changed according to your environment:


        NOTE: The server must be able to resolve the client’s host name, either via DNS or /etc/hosts, depending on your environment.

        The version numbers of the RPM packages are from Red Hat Enterprise Linux Server 6.4. The following packages need to be installed first using rpm -Uvh, if they are not already there:

        tftp-0.36-44.4.ppc.rpm
        dhcp-3.0.1rc13-28.18.ppc.rpm
        dhcp-server-3.0.1rc13-28.20.ppc.rpm

        Prepare directories, copy VIOS from DVD, unpack the SPOT, and prepare the boot image.


        1. Run the following command to create the necessary directory on the remote installation server:

        mkdir -p /export/vios

        2. Mount the VIOS DVD and copy the resources booti.chrp.mp.ent.Z, bosinst.data, ispot.tar.Z, and mksysb from the directory /nimol/ioserver_res to the local directory /export/vios using the following command:

        cp /mnt/nimol/ioserver_res/* /export/vios

        3. If you do not want an unattended installation, edit bosinst.data and make the following change. This will later open the installation dialog and offer you the possibility to select, for example, the target hard disk:

        PROMPT = no
        to
        PROMPT = yes

        4. Unpack the SPOT from the compressed image by running the following command. This creates the directory SPOT under the current directory, /export/vios:

        tar -xzf ispot.tar.Z

        5. Decompress the boot image from the DVD so that you can use it for the network installation. This boot image MUST exist in the directory where TFTP looks for files. Run the following commands:

        cd /var/lib/tftpboot
        gunzip < /export/vios/booti.chrp.mp.ent.Z > client1
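        The gunzip redirection above can be illustrated on a throwaway file (the real booti.chrp.mp.ent.Z image comes from the VIOS DVD; this sketch just demonstrates the decompress-to-a-new-name technique):

```shell
# Create and compress a dummy "boot image", then unpack it
# to a new name, exactly as done for client1 above.
tmp=$(mktemp -d) && cd "$tmp"
echo "dummy boot image" > booti.chrp.mp.ent
gzip -S .Z booti.chrp.mp.ent          # produces booti.chrp.mp.ent.Z
gunzip < booti.chrp.mp.ent.Z > client1
cat client1
```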

        6. Create a file named /var/lib/tftpboot/client1.info with the content shown below:

        vi /var/lib/tftpboot/client1.info


        export NIM_SERVER_TYPE=linux
        export NIM_SYSLOG_PORT=514
        export NIM_SYSLOG_FACILITY=local2
        export NIM_NAME=client1
        export NIM_HOSTNAME=client1
        export NIM_CONFIGURATION=standalone
        export NIM_MASTER_HOSTNAME=reminstallsrv
        export REMAIN_NIM_CLIENT=no
        export RC_CONFIG=rc.bos_inst
        export NIM_BOSINST_ENV="/../SPOT/usr/lpp/bos.sysmgt/nim/methods/c_bosinst_env"
        export NIM_BOSINST_RECOVER="/../SPOT/usr/lpp/bos.sysmgt/nim/methods/c_bosinst_env -a hostname=client1"
        export NIM_BOSINST_DATA=/NIM_BOSINST_DATA
        export SPOT=reminstallsrv:/export/vios/SPOT/usr
        export NIM_BOS_IMAGE=/NIM_BOS_IMAGE
        export NIM_BOS_FORMAT=mksysb
        export NIM_HOSTS="172.22.10.10:client1 172.22.10.1:reminstallsrv"
        export NIM_MOUNTS="reminstallsrv:/export/vios/bosinst.data:/NIM_BOSINST_DATA:file reminstallsrv:/export/vios/mksysb:/NIM_BOS_IMAGE:file"
        export ROUTES="default:0:172.22.10.1"

        Save and close


        7. Add the following entries to the /etc/exports file:

        /export/vios/mksysb *(ro,insecure,no_root_squash)
        /export/vios/SPOT/usr *(ro,insecure,no_root_squash)
        /export/vios/bosinst.data *(ro,insecure,no_root_squash)

        8. Start the NFS server:

        service nfs start

        9. Change disable to no in /etc/xinetd.d/tftp, as shown below:

        service tftp
        {
        socket_type = dgram
        protocol = udp
        wait = yes
        user = root
        server = /usr/sbin/in.tftpd
        server_args = -s /var/lib/tftpboot
        disable = no
        }


        and restart the xinetd daemon:

        service xinetd restart

        10. Edit the file /etc/dhcpd.conf and add the lines shown below:

        always-reply-rfc1048 true;
        allow bootp;
        deny unknown-clients;
        not authoritative;
        default-lease-time 600;
        max-lease-time 7200;
        ddns-update-style none;

        subnet 172.22.10.0 netmask 255.255.255.0 {
        host client {
        fixed-address 172.22.10.10;
        hardware ethernet 00:11:25:c9:30:b7;
        next-server 172.22.10.1;
        filename "client1";
            }
        }


        Note: The value for filename does not necessarily have to be identical to the host name, but it must match with the file name of the boot file we put in /var/lib/tftpboot.
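        A quick sanity check for this requirement can be sketched in shell (the paths here are illustrative stand-ins; on the real server you would check /var/lib/tftpboot directly):

```shell
# Simulate the TFTP directory and the boot file we placed earlier
tftp_dir=$(mktemp -d)               # stands in for /var/lib/tftpboot
touch "$tftp_dir/client1"           # the unpacked boot image

boot_file="client1"                 # the "filename" value from dhcpd.conf
if [ -f "$tftp_dir/$boot_file" ]; then
    echo "dhcpd filename matches a boot file"
else
    echo "WARNING: no boot file named $boot_file in $tftp_dir"
fi
```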

        11. Make the changes below in /etc/sysconfig/dhcpd:

        DHCPD_INTERFACE="eth0"
        DHCPD_RUN_CHROOTED="no"

        12. Restart dhcpd:

        service dhcpd restart

        Start Network Installation on the client

        Note: Depending on the setup on the Linux server, you may have to restart the services xinetd, dhcp and NFS server manually after a reboot.

        Start a Serial over LAN (SoL) session to the client and power on the p701. Enter the SMS menu and start the network installation.

        Important: Directed bootp does not work with the bootp function provided by DHCP. Therefore, the client’s and server’s IP address must be 0.0.0.0 in the IP parameters and the client’s MAC address must be specified in the server’s /etc/dhcpd.conf file to use broadcast bootp. Broadcast bootp, however, works only if the client and server are in the same subnet. Directed bootp may work with the bootpd implementation on Linux, but we did not test it here.


        PowerPC Firmware
        Version MB240_470_014
        SMS 1.6 (c) Copyright IBM Corp. 2000,2005 All rights reserved.
        -------------------------------------------------------------------------------
        IP Parameters
        Port 2-IBM 2 PORT 1000 Base-SX PCI-X Adapter: U788D.001.99DWL3F-P1-T8
        1. Client IP Address [0.0.0.0]
        2. Server IP Address [0.0.0.0]
        3. Gateway IP Address [000.000.000.000]
        4. Subnet Mask [255.255.255.000] 

        Microsoft Releases In-House Power BI Report Server Software


        Microsoft's newly released Power BI Report Server allows organizations to host their own reports in-house rather than on Microsoft's cloud.


        The in-house Power BI software from Microsoft allows businesses to store and distribute reports generated by the cloud-based Power BI business intelligence and analytics application on-premises rather than on Microsoft's cloud.

        The August 2017 preview version of the software built on the ability to use information from SQL Server Analysis Services data sources as the basis of those reports, extending support to all the data sources supported by Power BI.

        In the official release of Power BI Report Server, end users can now set the data refresh schedules on reports that require connections to other systems to deliver business insights.

        Users can even set multiple refresh schedules for each report, enabling even more control over how often users update their data. Microsoft has also increased the file size limit of file uploads used for the data refresh process to 2GB.

        For users looking to create reports using live source data, Microsoft has added DirectQuery support. This feature currently supports SQL Server, Azure SQL Database, Oracle, SAP HANA, SAP BW [Business Warehouse] and Teradata.

        Developers can now use the REST API for Power BI Report Server to extend the software's capabilities. The API, a successor to the ReportingService2010 SOAP API, has been extended to support the variety of report types supported in the new software.

        Microsoft is also working to build on Power BI's "viral" growth to help businesses instill a "data culture" within their organizations. More than 11.5 million data models are hosted on the service, with an estimated 30,000 being added daily, according to Microsoft's figures. Power BI also handles two million report and dashboard queries per hour.

        How To Manage Virtual I/O Server using Command Line


        VIOS (Virtual I/O Server) is a special purpose partition that can serve I/O resources to other partitions. The LPAR type is set at creation. The VIOS LPAR type allows for the creation of virtual server adapters, where a regular AIX/Linux LPAR does not.

        VIOS works by owning a physical resource and mapping that physical resource to virtual resources. Client LPARs can connect to the physical resource via these mappings.

        VIOS is not a hypervisor, nor is it required for sub-CPU virtualization. VIOS can be used to manage other partitions in some situations when an HMC is not used. This is called IVM (Integrated Virtualization Manager).

        The current VIOS runs on an AIX subsystem. (VIOS functionality is available for Linux; this document only deals with the AIX-based versions.) The padmin account logs in with a restricted shell. A root shell can be obtained via the oem_setup_env command.

        The root shell is designed for installation of OEM applications and drivers only. It may be required for a small subset of commands. (The purpose of this document is to provide a listing of most frequent tasks and the proper VIOS commands so that access to a root shell is not required.)

        The restricted shell has access to common UNIX utilities such as awk, grep, sed, and vi. The syntax and usage of these commands has not been changed in VIOS. (Use "ls /usr/ios/utils" to get a listing of available UNIX commands.)




        VIOS Setup and Management:

        Accept all VIOS license agreements
        $ license -accept

        Obtain root shell capability
        $ oem_setup_env

        Mirror the rootvg in VIOS to hdisk1
        # extendvg rootvg hdisk1
        # mirrorios hdisk1

        The VIOS will reboot when finished
        (Re)Start the (initial) configuration assistant
        # cfgassist

        Restart or Shutdown the server
        # shutdown -restart
        # shutdown


        List the version of the VIOS system software
        # ioslevel

        List the boot devices for this lpar
        # bootlist -mode normal -ls

        List LPAR name and ID
        # lslparinfo

        Display firmware level of all devices on this VIOS LPAR
        # lsfware -all

        Display the MOTD
        # motd

        Change the MOTD to an appropriate message
        # motd"*****    Unauthorized access is prohibited!    *****"

        List all (AIX) packages installed on the system
        # lssw

        Display a timestamped list of all commands run on the system
        # lsgcl

        To display the current date and time of the VIOS
        # chdate

        Change the current time and date to 1:02 AM March 4, 2009
        # chdate -hour 1 -minute 2 -month 3 -day 4 -year 2009

        Change just the timezone to AST
        # chdate -timezone AST (Visible on next login)

        Brief dump of the system error log
        # errlog

        Detailed dump of the system error log
        # errlog -ls | more

        Remove error log events older than 30 days
        # errlog -rm 30

         

        VIOS Networking Examples

        Enable jumbo frames on the ent0 device
        # chdev -dev ent0 -attr jumbo_frames=yes

        View settings on ent0 device
        # lsdev -dev ent0 -attr

        List TCP and UDP sockets listening and in use
        # lstcpip -sockets -family inet

        List all (virtual and physical) ethernet adapters in the VIOS
        # lstcpip -adapters

        Equivalent of no -L command
        # optimizenet -list

        Set up initial TCP/IP config
        # mktcpip -hostname vios1 -inetaddr 172.22.2.10 -interface ent3 -start -netmask 255.255.252.0 -gateway 172.22.2.1

        Find the default gateway and routing info on the VIOS
        # netstat -routinfo

        List open (TCP) ports on the VIOS IP stack
        # lstcpip -sockets | grep LISTEN

        Show interface traffic statistics on 2 second intervals
        # netstat -state 2

        Show verbose statistics for all interfaces
        # netstat -cdlistats

        Show the default gateway and route table
        # netstat -routtable

        Change the default route on en0 (fix a typo from mktcpip)
        # chtcpip -interface en0 -gateway -add 172.22.10.1 -remove 192.168.0.1

        Change the IP address on en0 to 172.22.10.10
        # chtcpip -interface en0 -inetaddr 172.22.10.10 -netmask 255.255.255.0

         

        User Management

        padmin is the only user for most configurations. It is possible to configure additional users, such as operational users for monitoring purposes.

        List attributes of the padmin user
        # lsuser padmin

        List all users on the system
        # lsuser

        Change the password for the current user
        # passwd

         

        Virtual Disk Setup and Management

        Disks are presented to VIOC by creating a mapping between a physical disk or storage pool volume and the vhost adapter that is associated with the VIOC.

        Best practices configuration suggests that the connecting VIOS vhost adapter and the VIOC vscsi adapter should use the same slot number. This makes the typically complex array of virtual SCSI connections in the system much easier to comprehend.

        The mkvdev command is used to create a mapping between a physical disk and the vhost adapter.
        Create a mapping of hdisk3 to the virtual host adapter vhost2.

        # mkvdev -vdev hdisk3 -vadapter vhost2 -dev wd_c3_hd3
        It is called wd_c3_hd3 for "WholeDisk_Client3_HDisk3". The intent of this naming convention is to relay the type of disk, where from, and who to.
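        That convention can be expressed as a tiny helper function (purely illustrative shell, not a VIOS command):

```shell
# Build a virtual target device name: wd_c<client#>_hd<disk#>
# "wd" = whole disk, per the convention described above.
vtd_name() {
    printf 'wd_c%s_hd%s\n' "$1" "$2"
}

vtd_name 3 3    # prints wd_c3_hd3
```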

        Delete the virtual target device wd_c3_hd3
        # rmvdev -vtd wd_c3_hd3

        Delete the above mapping by specifying the backing device hdisk3
        # rmvdev -vdev hdisk3





        Virtual Optical Media

        Create a 15 Gig virtual media repository on the clienthd storage pool
        # mkrep -sp clienthd -size 15G

        Extend the virtual repository by an additional 5 Gig to a total of 20 Gig
        # chrep -size 5G

        Find the size of the repository
        # lsrep

        Create an ISO image in the repository using an .iso file
        # mkvopt -name powerlinux6 -file /mnt/Powerlinux-DVD.iso -ro

        Create a virtual media file directly from a DVD in the physical optical drive
        # mkvopt -name AIX61TL3 -dev cd0 -ro

        Create a virtual DVD on vhost4 adapter
        # mkvdev -fbo -vadapter vhost4 -dev virtual_dvd

        The LPAR connected to vhost4 is called shiva. Naming the virtual DVD after the client LPAR (for example, shiva_dvd) is simply a convenient naming convention.

        Load the virtual optical media into the virtual DVD for LPAR shiva
        # loadopt -vtd virtual_dvd -disk powerlinux6

        Unload the previously loaded virtual DVD (-release is a "force" option if the client OS has a SCSI reserve on the device.)
        # unloadopt -vtd virtual_dvd -release

        List virtual media in the repository with usage information
        # lsrep

        Remove (delete) a virtual DVD image called AIX61TL3
        # rmvopt -name AIX61TL3

        Storage Pools

        List the default storage pool
        # lssp -default

        List all storage pools
        # lssp

        List all disks in the rootvg storage pool
        # lssp -detail -sp rootvg

        Create a storage pool called client_boot on hdisk22
        # mksp client_boot hdisk22

        Make the client_boot storage pool the default storage pool
        # chsp -default client_boot

        Add hdisk23 to the client_boot storage pool
        # chsp -add -sp client_boot hdisk23

        List all the physical disks in the client_boot storage pool
        # lssp -detail -sp client_boot

        List all the physical disks in the default storage pool
        # lssp -detail

        List all the backing devices (LVs) in the default storage pool
        # lssp -bd

        Create a client disk on adapter vhost1 from client_boot storage pool
        # mkbdsp -sp client_boot 20G -bd lv_c1_boot -vadapter vhost1

        Remove the mapping for the device just created, but save the backing device
        # rmbdsp -vtd vtscsi0 -savebd

        Assign the lv_c1_boot backing device to another vhost adapter
        # mkbdsp -bd lv_c1_boot -vadapter vhost2

        Completely remove the virtual target device lv_c1_boot
        # rmbdsp -vtd lv_c1_boot

        Remove last disk from the sp to delete the sp
        # chsp -rm -sp client_boot hdisk22

        Create a client disk on adapter vhost2 from the rootvg storage pool
        # mkbdsp -sp rootvg 1g -bd host2_hd1 -vadapter vhost2 -tn lv_host2_1

        How to Install Nagios 4 on Ubuntu 16.04

        This step-by-step guide walks you through installing and configuring Nagios 4 on Ubuntu 16.04 Server.


        Installing Nagios 4

        There are multiple ways to install Nagios, but we'll install Nagios and its components from source to ensure we get the latest features, security updates, and bug fixes.

        Log into your Ubuntu Server that runs Apache.
        ssh username@your_nagios_server_ip
        Create a nagios user and nagcmd group. You'll use these to run the Nagios process.
        sudo useradd nagios
        sudo groupadd nagcmd
        Then add the user to the group:
        sudo usermod -a -G nagcmd nagios
        Since we are building Nagios and its components from source, we must install a few development libraries to complete the build, including compilers, development headers, and OpenSSL.

        Update package lists to ensure we can download the latest versions of the prerequisites:
        sudo apt-get update
        Now install the required packages:
        sudo apt-get install build-essential libgd2-xpm-dev openssl libssl-dev unzip
        Download the source code for the latest stable release of Nagios Core. Go to the Nagios downloads page, and click the Skip to download link below the form. Copy the link address for the latest stable release so you can download it to your Nagios server.

        Download the release to your home directory with the curl command:
        cd ~
        curl -L -O https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.3.4.tar.gz
        Extract the Nagios archive:
        tar zxf nagios-*.tar.gz
        Then change to the extracted directory:
        cd nagios-*
        Before building Nagios, run the configure script to specify the user and group you want Nagios to use. Use the nagios user and nagcmd group you created:
        ./configure --with-nagios-group=nagios --with-command-group=nagcmd
        You'll see the following output from the configure command:
        Output
        *** Configuration summary for nagios 4.3.4 2017-11-09 ***:

        General Options:
        -------------------------
        Nagios executable: nagios
        Nagios user/group: nagios,nagios
        Command user/group: nagios,nagcmd
        Event Broker: yes
        Install ${prefix}: /usr/local/nagios
        Install ${includedir}: /usr/local/nagios/include/nagios
        Lock file: /run/nagios.lock
        Check result directory: ${prefix}/var/spool/checkresults
        Init directory: /etc/init.d
        Apache conf.d directory: /etc/apache2/sites-available
        Mail program: /bin/mail
        Host OS: linux-gnu
        IOBroker Method: epoll

        Web Interface Options:
        ------------------------
        HTML URL: http://localhost/nagios/
        CGI URL: http://localhost/nagios/cgi-bin/
        Traceroute (used by WAP):


        Review the options above for accuracy. If they look okay,
        type 'make all' to compile the main program and CGIs.
        Now compile Nagios with below command:
        make all
        Now run these make commands to install Nagios, its init scripts, and its default configuration files:
        sudo make install
        sudo make install-commandmode
        sudo make install-init
        sudo make install-config
        You'll use Apache to serve Nagios' web interface, so copy the sample Apache configuration file to the /etc/apache2/sites-available folder:
        sudo /usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/sites-available/nagios.conf
        In order to issue external commands via the web interface to Nagios, add the web server user, www-data, to the nagcmd group:
        sudo usermod -a -G nagcmd www-data
        Nagios is now installed.

        Now we'll install a plugin which will allow Nagios to collect data from various hosts.


        Installing the check_nrpe Plugin

        Nagios monitors remote hosts using the Nagios Remote Plugin Executor, or NRPE. It consists of two pieces:

        1.The check_nrpe plugin which is used by Nagios server.
        2.The NRPE daemon, which runs on the remote hosts and sends data to the Nagios server.

        Let's install the check_nrpe plugin on our Nagios server.

        Find the download URL for the latest stable release of NRPE at the Nagios Exchange site.

        Download it to your home directory with curl:
        cd ~
        curl -L -O https://github.com/NagiosEnterprises/nrpe/releases/download/nrpe-3.2.1/nrpe-3.2.1.tar.gz
        Extract the NRPE archive:
        tar zxf nrpe-*.tar.gz
        Then change to the extracted directory:
        cd nrpe-*
        Configure the check_nrpe plugin:
        ./configure
        Now build and install check_nrpe:
        make check_nrpe
        sudo make install-plugin
        Let's configure the Nagios server next.


        Configuring Nagios

        Now let's perform the initial Nagios configuration, which involves editing some configuration files and configuring Apache to serve the Nagios web interface. You only need to perform this step once on your Nagios server.

        Open the main Nagios configuration file in your text editor:
        sudo nano /usr/local/nagios/etc/nagios.cfg
        Find this line in the file:


        /usr/local/nagios/etc/nagios.cfg
        #cfg_dir=/usr/local/nagios/etc/servers
        Uncomment this line by deleting the # character from the front of the line, so it reads:

        cfg_dir=/usr/local/nagios/etc/servers

        Save the file and exit the editor.

        Now create the directory that will store the configuration file for each server that you will monitor:
        sudo mkdir /usr/local/nagios/etc/servers
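        Later, each monitored server gets its own file in this directory; as a hedged preview, a minimal host definition (the host name, alias, and address below are placeholders) might look like this:

```
define host {
        use         linux-server          ; inherit defaults from the stock template
        host_name   monitored-host        ; placeholder name
        alias       Example monitored host
        address     192.0.2.10            ; placeholder address
}
```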
        Open the Nagios contacts configuration in your text editor:
        sudo nano /usr/local/nagios/etc/objects/contacts.cfg
        Find the email directive and replace its value with your own email address:
        /usr/local/nagios/etc/objects/contacts.cfg

        define contact{
        contact_name nagiosadmin ; Short name of user
        use generic-contact ; Inherit default values from generic-contact template (defined above)
        alias Nagios Admin ; Full name of user
        email your_email@your_domain.com ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******
        }

        Save and exit the editor.
        Next, add a new command to your Nagios configuration that lets you use the check_nrpe command in Nagios service definitions. Open the file /usr/local/nagios/etc/objects/commands.cfg in your editor:
        sudo nano /usr/local/nagios/etc/objects/commands.cfg
        Add the following to the end of the file to define a new command called check_nrpe:
        /usr/local/nagios/etc/objects/commands.cfg
        ...
        define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }
        This defines the name and specifies the command-line options to execute the plugin. You'll use this command in Step 5.
        Save and exit the editor.
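        As a sketch of how this command gets used later, a service definition in a monitored host's config file might reference it like so (check_load is a hypothetical remote command that would be defined in that host's NRPE configuration):

```
define service {
        use                  generic-service
        host_name            monitored-host       ; placeholder host
        service_description  Current Load
        check_command        check_nrpe!check_load
}
```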
        Now configure Apache to serve the Nagios user interface. Enable the Apache rewrite and cgi modules with the a2enmod command:
        sudo a2enmod rewrite
        sudo a2enmod cgi

          Use the htpasswd command to create an admin user called nagiosadmin that can access the Nagios web interface:
          sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
          Enter a password at the prompt. Remember this password, as you will need it to access the Nagios web interface.

          Note:
           If you create a user with a name other than nagiosadmin, you will need to edit /usr/local/nagios/etc/cgi.cfg and change all the nagiosadmin references to the user you created.
          Now create a symbolic link for nagios.conf to the sites-enabled directory. This enables the Nagios virtual host.

          • sudo ln -s /etc/apache2/sites-available/nagios.conf /etc/apache2/sites-enabled/


          Next, open the Apache configuration file for Nagios.

          • sudo nano /etc/apache2/sites-available/nagios.conf


          If you've configured Apache to serve pages over HTTPS, locate both occurrences of this line:
          /etc/apache2/sites-available/nagios.conf
          #  SSLRequireSSL

          Uncomment both occurrences by removing the # symbol.
          If you want to restrict the IP addresses that can access the Nagios web interface so that only certain IP addresses can access the interface, find the following two lines:
          /etc/apache2/sites-available/nagios.conf
          Order allow,deny
          Allow from all

          Comment them out by adding # symbols in front of them:
          /etc/apache2/sites-available/nagios.conf
          # Order allow,deny
          # Allow from all

          Then find the following lines:
          /etc/apache2/sites-available/nagios.conf
          #  Order deny,allow
          # Deny from all
          # Allow from 127.0.0.1

          Uncomment them by deleting the # symbols, and add the IP addresses or ranges (space delimited) that you want to allow in the Allow from line:
          /etc/apache2/sites-available/nagios.conf
          Order deny,allow
          Deny from all
          Allow from 127.0.0.1 your_ip_address

          These lines appear twice in the configuration file, so ensure you change both occurrences. Then save and exit the editor.
          Restart Apache to load the new Apache configuration:

          • sudo systemctl restart apache2


          With the Apache configuration in place, you can set up the service for Nagios. Nagios does not provide a Systemd unit file to manage the service, so let's create one. Create the nagios.service file and open it in your editor:

          • sudo nano /etc/systemd/system/nagios.service


          Enter the following definition into the file. This definition specifies when Nagios should start and where Systemd can find the Nagios application. 
          /etc/systemd/system/nagios.service
          [Unit]
          Description=Nagios
          BindsTo=network.target

          [Install]
          WantedBy=multi-user.target

          [Service]
          Type=simple
          User=nagios
          Group=nagios
          ExecStart=/usr/local/nagios/bin/nagios /usr/local/nagios/etc/nagios.cfg

          Save the file and exit your editor.
          Then start Nagios and enable it to start when the server boots:
          sudo systemctl enable /etc/systemd/system/nagios.service
          sudo systemctl start nagios
          Nagios is now running, so let's log in to its web interface.


          Accessing the Nagios Web Interface

          Open your favorite web browser, and go to your Nagios server by visiting http://nagios_server_public_ip/nagios.

          Enter the login credentials for the web interface in the popup that appears. Use nagiosadmin for the username, and the password you created for that user.

          After authenticating, you will see the default Nagios home page. Click on the Hosts link in the left navigation bar to see which hosts Nagios is monitoring:


          As you can see, Nagios is monitoring only "localhost" so far. Let's add another server for Nagios to monitor.


          Installing NRPE on a Host

          Let's add a new host so Nagios can monitor it. We'll install the Nagios Remote Plugin Executor (NRPE) on the remote host, install some plugins, and then configure the Nagios server to monitor this host.
          Log in to the second server, which we'll call the monitored server.

          • ssh username@your_monitored_server_ip


          First, create a "nagios" user, which will run the NRPE agent.
          sudo useradd nagios
          We'll install NRPE from source, which means you'll need the same development libraries you installed on the Nagios server in Step 1. Update your package sources and install the NRPE prerequisites:
          sudo apt-get update
          sudo apt-get install build-essential libgd2-xpm-dev openssl libssl-dev unzip
          NRPE requires that the Nagios Plugins package is installed on the remote host. Let's install it from source.
          Find the latest release of Nagios Plugins on the Nagios Plugins Download page, and copy the link address for the latest version so you can download it to the monitored server.
          Download Nagios Plugins to your home directory with curl:
          cd ~
          curl -L -O http://nagios-plugins.org/download/nagios-plugins-2.2.1.tar.gz
          Extract the Nagios Plugins archive:
          tar zxf nagios-plugins-*.tar.gz
          Change to the extracted directory:
          cd nagios-plugins-*
          Before building Nagios Plugins, configure it to use the nagios user and group, and configure OpenSSL support:
          ./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl
          Now compile the plugins:
          make
          Then install them:
          sudo make install
          Next, install NRPE. Find the download URL for the latest stable release of NRPE at the Nagios Exchange site just like you did in Step 1. Download the latest stable release of NRPE to your monitored server's home directory with curl:
          cd ~
          curl -L -O https://github.com/NagiosEnterprises/nrpe/releases/download/nrpe-3.2.1/nrpe-3.2.1.tar.gz
          Extract the NRPE archive with this command:
          tar zxf nrpe-*.tar.gz
          Then change to the extracted directory:
          cd nrpe-*
          Configure NRPE by specifying the Nagios user and group, and tell it you want SSL support:
          ./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu
          Now build and install NRPE and its startup script with these commands:
          make all
          sudo make install
          sudo make install-config
          sudo make install-init
          Next, let's update the NRPE configuration file:
          sudo nano /usr/local/nagios/etc/nrpe.cfg
          Find the allowed_hosts directive, and add the private IP address of your Nagios server to the comma-delimited list:
          /usr/local/nagios/etc/nrpe.cfg
          allowed_hosts=127.0.0.1,::1,your_nagios_server_private_ip
          This configures NRPE to accept requests from your Nagios server via its private IP address.
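          If you prefer to make this change non-interactively, a sed one-liner can append the address to the list. The sketch below runs against a scratch copy of the file so you can see the effect; 10.0.0.5 is a placeholder for your Nagios server's private IP, and on a real host you would point sed at /usr/local/nagios/etc/nrpe.cfg instead:

```shell
# Demonstrated on a scratch copy; apply the same sed to /usr/local/nagios/etc/nrpe.cfg
cfg=$(mktemp)
printf 'allowed_hosts=127.0.0.1,::1\n' > "$cfg"
# '&' re-inserts the matched line, so the IP is appended to the existing list
sed -i 's/^allowed_hosts=.*/&,10.0.0.5/' "$cfg"
grep '^allowed_hosts=' "$cfg"
```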
          Save and exit your editor. Now you can start NRPE:
          sudo systemctl start nrpe.service
          Ensure that the service is running by checking its status:
          sudo systemctl status nrpe.service
          You'll see output similar to the following:

          Output

          ...
          Oct 16 07:10:00 nagios systemd[1]: Started Nagios Remote Plugin Executor.
          Oct 16 07:10:00 nagios nrpe[14653]: Starting up daemon
          Oct 16 07:10:00 nagios nrpe[14653]: Server listening on 0.0.0.0 port 5666.
          Oct 16 07:10:00 nagios nrpe[14653]: Server listening on :: port 5666.
          Oct 16 07:10:00 nagios nrpe[14653]: Listening for connections on port 5666
          Oct 16 07:10:00 nagios nrpe[14653]: Allowing connections from: 127.0.0.1,::1,207.154.249.232

          Next, allow access to port 5666 through the firewall. If you are using UFW, configure it to allow TCP connections to port 5666:
          sudo ufw allow 5666/tcp  
          Now you can check the communication with the remote NRPE server. Run the following command on the Nagios server:
          /usr/local/nagios/libexec/check_nrpe -H remote_host_ip
          You'll see the following output:

          Output

          NRPE v3.2.1

          Now let's configure some basic checks that Nagios can monitor.
          First, let's monitor the disk usage of this server. Use the df -h command to look for the root filesystem. You'll use this filesystem name in the NRPE configuration:
          df -h /
          You'll see output similar to this:

          Output

          Filesystem      Size  Used Avail Use% Mounted on
          udev            490M     0  490M   0% /dev
          tmpfs           100M  3.1M   97M   4% /run
          /dev/sda1        29G  1.4G   28G   5% /
          tmpfs           497M     0  497M   0% /dev/shm
          tmpfs           5.0M     0  5.0M   0% /run/lock
          tmpfs           497M     0  497M   0% /sys/fs/cgroup
          /dev/sda2       105M  3.4M  102M   4% /boot/efi
          tmpfs           100M     0  100M   0% /run/user/0

          Locate the filesystem associated with /. On Ubuntu Server, the filesystem you want is probably /dev/sda1.
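          If you'd rather not read the table by eye, GNU df can print just the device backing the root filesystem (a sketch assuming GNU coreutils, which Ubuntu ships):

```shell
# Print only the source device for /, e.g. /dev/sda1 or /dev/vda1
root_dev=$(df --output=source / | tail -n 1)
echo "$root_dev"
```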
          Now open the /usr/local/nagios/etc/nrpe.cfg file in your editor:
          sudo nano /usr/local/nagios/etc/nrpe.cfg

          The NRPE configuration file is very long and full of comments. There are a few lines that you will need to find and modify:
          • server_address: Set to the private IP address of the monitored server
          • command[check_hda1]: Change /dev/hda1 to your root filesystem's device (the example below uses /dev/vda1; on your server it may be /dev/sda1)
          Locate these settings and alter them appropriately:
          /usr/local/nagios/etc/nrpe.cfg
          ...
          server_address=monitored_server_private_ip
          ...
          command[check_vda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/vda1
          ...

          Save and exit the editor.
          Restart the NRPE service to put the change into effect:
          sudo systemctl restart nrpe.service
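          The -w 20% and -c 10% flags tell check_disk to report WARNING when free space drops below 20% and CRITICAL below 10%, signalled through the standard Nagios plugin exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL). A minimal sketch of that convention, computed from df rather than the real plugin:

```shell
# Illustrative only — the real check_disk plugin implements this; df supplies the numbers here.
used=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
free=$((100 - used))
if   [ "$free" -lt 10 ]; then status=2; echo "DISK CRITICAL - ${free}% free"
elif [ "$free" -lt 20 ]; then status=1; echo "DISK WARNING - ${free}% free"
else                          status=0; echo "DISK OK - ${free}% free"
fi
echo "a plugin would exit with status $status"
```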
          Repeat the steps in this section for each additional server you want to monitor.
          Once you are done installing and configuring NRPE on the hosts that you want to monitor, you will have to add these hosts to your Nagios server configuration before it will start monitoring them. Let's do that next.


          Monitoring Hosts with Nagios

          To monitor your hosts with Nagios, you'll add configuration files for each host specifying what you want to monitor. You can then view those hosts in the Nagios web interface.
          On your Nagios server, create a new configuration file for each of the remote hosts that you want to monitor in /usr/local/nagios/etc/servers/. Replace the highlighted word, monitored_server_host_name with the name of your host:
          sudo nano /usr/local/nagios/etc/servers/your_monitored_server_host_name.cfg 
          Add the following host definition, replacing the host_name value with your remote hostname, the alias value with a description of the host, and the address value with the private IP address of the remote host:
          /usr/local/nagios/etc/servers/your_monitored_server_host_name.cfg
          define host {
          use linux-server
          host_name your_monitored_server_host_name
          alias My client server
          address your_monitored_server_private_ip
          max_check_attempts 5
          check_period 24x7
          notification_interval 30
          notification_period 24x7
          }

          With this configuration, Nagios will only tell you if the host is up or down. Let's add some services to monitor.
          First, add this block to monitor CPU usage:
          /usr/local/nagios/etc/servers/your_monitored_server_host_name.cfg
          define service {
          use generic-service
          host_name your_monitored_server_host_name
          service_description CPU load
          check_command check_nrpe!check_load
          }

          The use generic-service directive tells Nagios to inherit the values of a service template called generic-service which is predefined by Nagios.
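          The check_nrpe!check_load syntax means "run the check_nrpe command, passing check_load as its first argument". The check_nrpe command itself must be defined on the Nagios server; earlier in this guide it is typically added to /usr/local/nagios/etc/objects/commands.cfg. If yours is missing, the standard definition looks like this:

```
define command {
    command_name check_nrpe
    command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
```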
          Next, add this block to monitor disk usage:
          /usr/local/nagios/etc/servers/your_monitored_server_host_name.cfg
          define service {
          use generic-service
          host_name your_monitored_server_host_name
          service_description /dev/vda1 free space
          check_command check_nrpe!check_vda1
          }

          Now save and quit. Restart the Nagios service to put any changes into effect:
          sudo systemctl restart nagios
          After several minutes, Nagios will check the new hosts and you'll see them in the Nagios web interface. Click on the Services link in the left navigation bar to see all of your monitored hosts and services.


          Conclusion

          You've installed Nagios on a server and configured it to monitor CPU and disk usage of at least one remote machine.