
Close All Apps at Once in iOS 10 on Your iPhone or iPad


The iOS App Switcher is the only built-in way to view all your open apps and close the ones you're not using. If you have many apps open, closing each of them one at a time can be a tedious task.





This is where KillBackground10 comes in to save your precious time.




This jailbreak tweak allows you to close all apps at once in iOS 10 with the touch of a button. This means you no longer have to waste your time closing each app individually.

After you install KillBackground10, double press the Home button to open the App Switcher. You’ll notice that there’s a tiny button in the bottom right corner. Tapping on that button will instantly close all the apps that are open.

KillBackground10 comes with a configuration pane in Settings where you can customize how the tweak functions. It offers the following settings:

  • Automatically close the App Switcher after the button is pressed.
  • Enlarge the size of the button.
  • Position the button on the left side of the App Switcher rather than the right.
  • Choose whether the Music app should be closed. This is useful when you have a song playing in the background and you don’t want the Music app to be closed.
  • Exclude the apps that shouldn’t be closed by this tweak.
  • And lots more



As you can see, the tweak offers a bunch of settings that allow you to configure its behavior, including excluding apps that shouldn't be closed by the tweak, for example the Music app when there's a song playing in the background.

The Essential Apps You Must Download on iPhone or iPad


If you are a new user, or you just switched from Android to iOS and got an iPhone or iPad and want to download and try all the best apps but don't know where to start, this article will guide you through the essential apps you should download and use on your iPhone and iPad.






Emails Apps


Gmail: If you want something from the Android world that you’re familiar with, download Gmail. It’s basically the same app with a similar UI.

Outlook: Gmail is a bit basic. Outlook integrates email and calendar in one app. Plus, their Focused Inbox is really good at showing just the important emails.

Spark: If you want a free, easy to use, yet feature rich email client that’s available on iPhone, iPad and Mac, Spark is it.

Maps



Google Maps: In the West, Apple Maps is quite good. But in Asian countries, that’s still not the case. So one of the first things you’ll want to do after getting a new iPhone is to download the Google Maps app.

File Syncing


Dropbox: There’s no exposed file system or file management app in iOS. If you’re just starting out, I’d suggest making Dropbox your file management system. Put all your important documents and pictures in Dropbox and they’ll automatically be available on all your connected devices.

Alternative services like OneDrive and Box are available in the App Store as well.

Social Apps



Get Facebook, Instagram and Snapchat from the App Store. If you’re into it, get the Twitter app as well.

Communication Apps


WhatsApp, Facebook Messenger, WeChat, Line and Telegram should take care of your personal communications.

Slack and Skype will help you get your work done.

Video Apps

VLC Media Player: Don’t use the default Videos app. It’s shockingly underpowered and syncing videos from iTunes is close to a nightmare. The VLC Media Player is free and plays virtually every format under the sun. Plus, you can easily transfer media to VLC wirelessly from any device.

When it comes to streaming, there’s no shortage of awesome media apps. The obvious apps like YouTube, Netflix and Hulu are all there. Plus you’ll find video apps for all major networks like ABC and Comedy Central.

Music Apps

I would suggest you use the built-in Music app for all your music needs. While the UI isn’t the best, it’s easy to get used to and it integrates really well with OS features like Siri and Spotlight search.

Spotify: The best alternative to Apple Music is currently Spotify.

Podcast Clients


The built-in Podcasts app is good enough for a beginner, but if you listen to a lot of podcasts, I’d advise you to get a dedicated podcast client.

Overcast: Overcast is the simplest third party client around. The playlist feature is quite powerful and it has features to boost the volume and cut silence automatically. Overcast is free to use, with ads.

Pocket Casts ($3.99): Pocket Casts is the alternative to Overcast and is just as good. The UI is not as simple as Overcast’s, and it costs $3.99 to get in.

News and Reading Apps


Reeder 3 ($4.99): If you’re into RSS, this is hands down the best RSS reader on iOS. It’s fast, minimal and a joy to use.

Quartz: Quartz is a great way to quickly catch up on the news at the end of the day. The app uses an innovative and actually useful conversational style approach.

Flipboard: The magazine style app is still one of the best ways to discover and read interesting stories.

Nuzzel: Nuzzel is a really good way to surface the articles that your friends on Twitter and Facebook are sharing. The app only shows links shared by multiple people, so you only see the best content that people in your circle are reading.

Kindle: If the iBooks Store isn’t supported in your country, the next best option is to buy books using the Kindle app.

Fitness Apps


MyFitnessPal: If you want to track what you eat and your weight, MyFitnessPal is currently the best way to do it.

Pedometer++: This is a really simple app that shows how much you walked today. The widget is really handy.

Strava: If you’re into cycling, Strava is the best way to track your rides. The community is also helpful.

Gyroscope: Gyroscope is a dashboard for your life. It integrates with the Health app and a myriad of other trackers to give you an overall view of how you’re doing – health and productivity wise.

Sleepcycle: If you want to improve your sleep and wake up at the best time possible (and not when you’re groggy), install Sleepcycle and put your phone on your bed next to you when you sleep.

Apps For Moving About Town

Uber: You can’t seem to get around town these days without calling an Uber.

Yelp: The Yelp app will help you decide where to eat next.

Camera Apps

Manual ($3.99): Your iPhone has a really awesome camera and after iOS 10, you can capture and edit in RAW format. The Manual app gives you manual control over things like ISO and shutter speed.

VSCO: VSCO is an alternative to the Camera app. You can capture photos and edit them right inside the app. VSCO’s UI has been a bit confusing of late, but the pictures that come out of the app are still stellar.

Photo Editing Tools


Prisma: The must-have for adding awesome artistic filters to photos.

MSQRD: The app has a big collection of hilarious face filters.

Enlight: Enlight is a pro level photo editing app that also has some really interesting filters.

Pixelmator ($4.99): If you want an easy to use professional tool for editing photos on your iPhone, Pixelmator is it.

Lightroom: Lightroom’s iPhone app is limited compared to the Mac version, but you can still do pro level editing using the app. The features have been adapted for a touchscreen and they’re easy to use. Plus, the app works well if you already have the Mac version.

Task Managers

Todoist: If you don’t want to use the built-in Reminders app, Todoist is your best option. The free version offers the basic features you need and the app is available basically everywhere.

Trello: Trello is a simple project management system you can use to organize anything.

Automation


Workflow ($3.99): While iOS is a closed system, there’s still quite a bit you can automate. And Workflow is basically the only awesome app to do so.

Drafts ($4.99): If your work revolves around text and text files, you’ll like Drafts. Drafts takes you straight to the writing environment. Once you’ve written something down, you can figure out what to do with it. And Drafts integrates with a lot of awesome apps and services, so sending the text over to an email app or notes app is never a big issue.

IFTTT: IFTTT is the best web automation tool around. You’ll get recipes to automate all sorts of stuff.

Widget Apps


Launcher: Launcher puts shortcuts for calling, messaging and app actions right inside the Today View using the widget. This means you can get a shortcut to message someone right on the Lock screen (after swiping right).

Pcalc Lite: Pcalc Lite puts a basic calculator right in the Today View.

Widget Calendar ($0.99): Widget Calendar puts a calendar and reminders widget in the Today View.





Miscellaneous Utilities


Scanbot: One of the best apps for quickly scanning photos and sending them to different places. The latest update even has workflows that you can set up to automate the filing process.

Spendee: A really easy to use and feature rich manual expense tracker.


Annotable: Our top pick for image annotation on iOS.

Stacks: A really simple and beautiful currency converter.

1Password: A powerful password manager (and alternative to LastPass).

Motion Stills: Turn those shaky Live Photos into smooth GIFs.

Day One ($4.99): A beautiful way to write your journal on the iPhone. The app lets you import photos and there’s pro level text formatting support.

Editorial ($4.99): If you wish to write in Markdown on your iOS device, Editorial is the best way to go about it.

How To Deploy Certification Authority on Windows Server 2016


This article will guide you through the steps to install and configure a certification authority on Windows Server 2016. We will be using test.com as our Active Directory domain throughout this guide.





Prerequisites

  • Windows Server 2016 installed (bare metal or virtual machine)
  • Active Directory Domain Services

Installing Web Server

To begin with the certification authority, first you need to install web services on your Windows Server 2016 machine. Open up PowerShell and execute the following command:

Install-WindowsFeature Web-Server -IncludeManagementTools


Creating DNS CNAME Record For Web Server

To create the CNAME record, open up the DNS console on your Active Directory domain server and provide the required information according to your environment as shown in the image below.
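If you prefer PowerShell over the DNS console, the same CNAME record can be created with the DnsServer module. A minimal sketch, assuming the web server's host name is web.test.com (substitute your own host name):

# Create cert.test.com as an alias for the web server
Add-DnsServerResourceRecordCName -ZoneName 'test.com' -Name 'cert' -HostNameAlias 'web.test.com'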


Creating Shared Folder 

You need to create a shared folder where the Certificate Revocation List (CRL) and certificates from the Certificate Authority (CA) will be stored.

Open up PowerShell and execute the following command:

New-Item c:\cert -type directory
New-SmbShare -Name 'cert' -Path 'C:\cert' -ChangeAccess 'test\cert publishers'


Now, download NTFS Security module from here and import it using the following command.

import-module .\NTFSSecurity.psd1

You need to grant NTFS Read permission to Anonymous Logon and Full Control to Everyone using the following commands.

Add-NTFSAccess -Path C:\cert -Account 'ANONYMOUS LOGON' -AccessRights Read
Add-NTFSAccess -Path C:\cert -Account 'Everyone' -AccessRights FullControl
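If you prefer not to depend on the third-party NTFSSecurity module, the built-in icacls utility can apply equivalent permissions; a sketch:

# (OI)(CI) inherits the permission to files and subfolders; R = read, F = full control
icacls C:\cert /grant "ANONYMOUS LOGON:(OI)(CI)R"
icacls C:\cert /grant "Everyone:(OI)(CI)F"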


Creating Virtual Directory

Open up IIS management console and right click on Default Web Site> Add Virtual Directory


Provide the following information according to your environment and click OK.


Now that we have added the virtual directory, select it in the left pane and double-click Request Filtering


Click Edit Feature Settings 


Check Allow double escaping and click OK





Configuring Certification Authority Server

Here, you need to create a certificate authority policy file. Go to the C:\Windows directory and create a new file named CAPolicy.inf


You need to provide following information in this file.

[Version] 
Signature="$Windows NT$" 
[PolicyStatementExtension] 
Policies=InternalPolicy 
[InternalPolicy] 
OID=1.2.3.4.1455.67.89.5 
Notice="Legal Policy Statement" 
URL=http://cert.test.com/cert/cps.txt 
[Certsrv_Server] 
RenewalKeyLength=2048 
RenewalValidityPeriod=Years 
RenewalValidityPeriodUnits=5 
CRLPeriod=weeks 
CRLPeriodUnits=1 
LoadDefaultTemplates=0 
AlternateSignatureAlgorithm=1 
[CRLDistributionPoint] 
[AuthorityInformationAccess]


Installing Certification Authority Role on Active Directory Domain 

Open up PowerShell on your Active Directory Domain Controller and type the following commands to install the CA role.

Add-WindowsFeature Adcs-Cert-Authority -IncludeManagementTools
Install-AdcsCertificationAuthority -CAType EnterpriseRootCA 
 


Open up the Certificate Authority console, open the CA properties and click the Extensions tab; in Select extension, select CRL Distribution Point (CDP).

Delete the last three entries (ldap, http, file) as shown in the image below.




After deleting these entries, click Add


and enter http://cert.test.com/cert/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl

Check Include in CRLs and Include in the CDP extension of issued certificates


Now, from Select extension, choose Authority Information Access (AIA)

Authority Information Access (AIA) is used to publish where a copy of the issuer’s certificate may be downloaded. Paths specified in this extension can be used by an application or service to retrieve the issuing CA certificate. These CA certificates are then used to validate the certificate signature and to build a path to a trusted certificate.

Again, delete the ldap, http and file entries


Then click Add and enter http://cert.test.com/cert/<ServerDNSName>_<CaName><CertificateName>.crt

check Include in the AIA extension of issued certificates


All the paths specified above point to the network share on the web server (\\web\cert) and to the web virtual directory (http://cert.test.com)
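If you prefer to script these changes rather than clicking through the console, the ADCSAdministration PowerShell module (installed with the CA role) provides equivalent cmdlets. A sketch, where the angle-bracket tokens are the standard CA substitution variables:

# Add the HTTP CRL distribution point and include it in issued certificates
Add-CACrlDistributionPoint -Uri 'http://cert.test.com/cert/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl' -AddToCertificateCdp -Force

# Add the HTTP AIA location for the CA certificate
Add-CAAuthorityInformationAccess -Uri 'http://cert.test.com/cert/<ServerDNSName>_<CaName><CertificateName>.crt' -AddToCertificateAia -Force

# Restart the CA service so the new URLs take effect
Restart-Service certsvc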

Publishing the CRL


It's time to publish the CRL to make it available to our users. Open up PowerShell and execute the following command.

certutil -crl

Copy the CA certificate and CRL to the network share folder

copy C:\Windows\system32\certsrv\certenroll\*.crt \\WEB\cert
copy C:\Windows\system32\certsrv\certenroll\*.crl \\WEB\cert



To check the CA “health”, open up PowerShell and type pkiview.msc






Auto-Enrollment of Certificates using GPO

On your Active Directory Domain Controller, open up the Group Policy Management Editor, then navigate to Computer Configuration > Windows Settings > Security Settings > Public Key Policies > Certificate Services Client - Auto-Enrollment and set the Configuration Model to Enabled



We have successfully completed the deployment of the certificate authority.

How To Set Up VPN Server on Windows Server 2016


This article will guide you through the steps to set up VPN Server on Windows Server 2016. 





The VPN server leverages IPsec Tunnel Mode with Internet Key Exchange version 2 (IKEv2), together with the functionality provided by the IKEv2 Mobility and Multihoming protocol (MOBIKE). This tunneling protocol offers inherent advantages in scenarios where the client moves from one IP network to another (for example, from WLAN to WWAN).

The scenario permits a user with an active IKEv2 VPN tunnel to disconnect a laptop from a wired connection, walk down the hall to a conference room, connect to a wireless network, and have the IKEv2 VPN tunnel automatically reconnected with no noticeable interruption to the user.

Installing Certificates on VPN Server and VPN Client  

First you need to create a certificate template. Open up the Certification Authority console, right-click Certificate Templates > Manage, then right-click IPSec > Duplicate Template


On the Request Handling tab, check Allow private key to be exported


Click the Extensions tab > Application Policies > Edit


Remove IP Security IKE intermediate > then click Add



and choose Server Authentication > OK

 

Click Key Usage > Edit
 

Make sure that Digital signature is selected. If it is, click Cancel. If it is not, select it, and then click OK.


In the Security tab click Object Types> Computers> Add Domain Computers


Make sure Read, Enroll and Autoenroll are selected


In General tab provide a name to template


Now, right click Certificate Templates > New > Certificate Template to Issue


Choose newly created template, click OK


Enrolling Certificate on VPN Server

Now, on your VPN Server, open up Run, type mmc, then go to File > Add/Remove Snap-in


Click Certificates> Add> Computer Account


Right click Personal> All tasks> Request New Certificate


Check Certificate templates > Properties


Click Subject tab > Subject Name> Common name (from drop-down menu) choose FQDN for VPN Server > Click Add

In the Alternative Name, choose DNS, set FQDN for VPN Server, Click Add


A new certificate should be created as shown in the image below.


This certificate should be exported and then imported to client machine.

To export certificate, Right-click certificate> All tasks> Export


Export the private key, set a password and specify the file in which the certificate should be saved. Copy the file to the client computer



To import the file on the client machine, the certificate should be imported into the Trusted Root Certification Authorities store on the client.

Open up Run, type mmc> Add > Certificate snap-in-local computer

Right-click Trusted Root Certification Authorities> All task> import

Browse to copied file and enter password to import it.
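The same export/import round trip can also be done entirely in PowerShell with the PKI module cmdlets. A sketch, where <thumbprint> is a placeholder for the thumbprint of the certificate you enrolled and the password is your own choice:

# On the VPN server: export the certificate together with its private key
$pfxPwd = ConvertTo-SecureString 'YourExportPassword' -AsPlainText -Force
Export-PfxCertificate -Cert Cert:\LocalMachine\My\<thumbprint> -FilePath C:\vpn-server.pfx -Password $pfxPwd

# On the client: import it into Trusted Root Certification Authorities
Import-PfxCertificate -FilePath C:\vpn-server.pfx -CertStoreLocation Cert:\LocalMachine\Root -Password $pfxPwd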




Installing Roles

You need to add the Network Policy Server and Remote Access roles on your VPN Server. Open up Server Manager > Add Roles and Features and select the following to install.
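If you prefer the command line, the same roles can be installed from PowerShell; a sketch, where DirectAccess-VPN is the feature name covering the VPN role service and NPAS is Network Policy and Access Services:

Install-WindowsFeature DirectAccess-VPN, NPAS -IncludeManagementTools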


Open up Routing and Remote Access console, right-click on Server > Configure and Enable Routing and Remote Access


Select Remote access (dial-up or VPN)


Check VPN


Select internet facing interface accordingly


Define VPN address pool according to your environment



We’ll use NPS rather than a separate RADIUS server


Right click Remote Access Logging> Launch NPS



Click Network Access Policies



Right click Connections to Microsoft Routing and Remote Access Server > Properties


Check Grant access


Click Constraints > select Microsoft: Secured password (EAP-MSCHAP v2)


If it’s not selected Add it




Enable user VPN access

In ADUC (Active Directory Users and Computers), open the user properties, then on the Dial-in tab select Allow access



Client Setting

Open the Windows\System32\drivers\etc\hosts file in Notepad and add an entry for the VPN server (the name must be equal to the one specified in the SSL certificate)
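For example, if the certificate was issued to vpn.test.com and the VPN server's address is 192.168.10.50 (both values here are purely illustrative), the entry would look like this:

192.168.10.50    vpn.test.com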


Creating VPN client connection



Use my internet connection (VPN)


I’ll set up an internet connection later


In Internet address type your VPN Server name


Specify username/password


In the Security tab, for Type of VPN select IKEv2; set Data encryption to Require encryption; for Authentication, select Microsoft: Secured password (EAP-MSCHAP v2)


We can see that IKEv2 is used and the client got an address from our VPN pool (10.10.10.3)






Here you can see one client is connected to VPN Server with user Administrator.


We have successfully completed VPN Server deployment on Windows Server 2016. I hope this article will be helpful to deploy VPN Server in your environment.

How To Configure NIC-Teaming in Hyper-V on Windows Server 2016


NIC Teaming, also known as load balancing and failover (LBFO), allows multiple network adapters on a computer to be placed into a team for bandwidth aggregation and network traffic failover in order to prevent connectivity loss in the event of a network component failure.






This article will guide you through the steps to configure NIC teaming in Hyper-V on Windows Server 2016. As an example, we’ll create a virtual switch, add it to a VM, and then enable NIC teaming on the VM’s network adapter.

To begin with the configuration, open up the Hyper-V console, then click Virtual Switch Manager as shown in the image below.


In Create Virtual Switch, you have three options to choose from:

External: Provides virtual machines access to a physical network to communicate with servers and clients on an external network, and also allows communication between Hyper-V VMs on the same Hyper-V server.

Internal: Provides communication between virtual machines on the same Hyper-V server, and between the virtual machines and the management host operating system.

Private: Only allows communication between virtual machines on the same Hyper-V server.

Choose appropriate Switch type and click Create Virtual Switch


Choose appropriate network adapter, click Apply and OK


Since we have added a network adapter to the Hyper-V switch, let’s add it to a VM

Right-click on VM > Settings


Click Add Hardware > Network > Adapter > Add


Here you need to specify Virtual Switch> Apply > Ok


Go to Adapter > Advanced Features, tick Enable this network adapter to be part of a team in the guest operating system


Now open up PowerShell to perform the same steps by executing the following commands:

Creating the new virtual switch:
New-VMSwitch -Name External -NetAdapterName 'ethernet'

Adding a new network adapter to the VM:
Get-VM -Name dc | Add-VMNetworkAdapter -SwitchName 'External'

Enabling NIC teaming for the VM:
Set-VMNetworkAdapter -VMName dc -AllowTeaming On
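With teaming allowed on the virtual adapters, the team itself is created inside the guest operating system. A minimal sketch, assuming the guest runs Windows Server and its two adapters are named 'Ethernet' and 'Ethernet 2' (the team name is arbitrary); guest teams must use the SwitchIndependent teaming mode:

# Run inside the guest OS
New-NetLbfoTeam -Name 'GuestTeam' -TeamMembers 'Ethernet','Ethernet 2' -TeamingMode SwitchIndependent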






Done. I hope this article will be helpful to perform similar tasks in your environment.

How To Upgrade SCCM 1606 to 1610


This article will guide you through the steps to upgrade System Center Configuration Manager from version 1606 to 1610.





Prerequisites

  • For the full feature list of 1610, take a look at the TechNet link
  • Before performing the upgrade, go through the upgrade checklist and perform a site backup
  • The upgrade has to be done using the console only (no download link is provided by Microsoft)

Downloading Update

Navigate to Administration > Cloud Services > Updates and Services > Check for updates as shown in image below.


You should see 1610 update in the console in “Downloading” state


Meanwhile check C:\Program Files\Microsoft Configuration Manager\Logs\dmpdownloader.log for status
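To follow the log in real time rather than reopening it, you can tail it from PowerShell:

Get-Content 'C:\Program Files\Microsoft Configuration Manager\Logs\dmpdownloader.log' -Tail 20 -Wait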


Update files are downloaded to \Microsoft Configuration Manager\EasySetupPayload folder


We can also monitor download status using Resource Manager



Prerequisite check

Once the download status changes to “Available”, right-click the update to run the prerequisite check


Status can be viewed from C:\ConfigMgrPrereq.log


Or Monitoring > Distribution Status > Updates and Servicing Status > Right-click on update > Show status




Starting Update

Once the prerequisite checks are completed, perform the actual installation by right-clicking on the update > Install Update Pack






To check installation status, again view log file C:\Program Files\Microsoft Configuration Manager\Logs\CMUpdate.log or  Monitoring > Distribution Status > Updates and Servicing Status > Right click on update > Show status



Installation has finished


Upgrading Console

After the console is reopened, you’ll be asked to upgrade the console


To check progress take a look at C:\ConfigMgrAdminUISetup.log and C:\ConfigMgrAdminUISetupVerbose.log

Checking version


Administration > Site Configuration > Sites > right-click site > General


Client package update check

Software Library > Application Management > Packages > check the Last Update date for the client packages; if a package is out-of-date, right-click on it > Distribute Content > select the DP and click Finish



Updating Boot images

Check update time


If it’s not close to current time, right click image > Distribute Content


Select Distribution Point







Upgrading Configuration Management Client

Go to Administration > Site Configuration > Sites > Select site and click Hierarchy settings


Click on the Client Upgrade tab, check the Upgrade all clients checkbox, and optionally set a time frame


 That's it for now.

How To Set Up Windows Container on Nano Server


Containers were first introduced in the Sun Solaris operating system, but they are now widely available on Linux and Windows as well. A container is an isolated area where an application can run without affecting the rest of the system, and without the system affecting the application. A container shares the operating system's kernel, so it can be configured as an “isolated” part of the guest OS.






Windows has two different types of containers:

  • Windows Server Containers – A Windows Server container shares a kernel with the container host and all containers running on the host.
  • Hyper-V Containers – These expand on the isolation provided by Windows Server Containers by running each container in a Hyper-V virtual machine. In this configuration, the kernel of the container host is not shared with other Hyper-V Containers.

This article will guide you through the steps to set up Windows Containers on Nano Server using PowerShell command line.

Connecting to Nano Server

Set-Item WSMan:\localhost\Client\TrustedHosts 192.168.1.200 -Force
Enter-PSSession -ComputerName 192.168.1.200 -Credential Administrator


Updating Nano Server

#Scan for updates

$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate -ClassName MSFT_WUOperationsSession
$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates -Arguments @{SearchCriteria="IsInstalled=0";OnlineScan=$true}
$result.Updates

# Install all updates

$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate -ClassName MSFT_WUOperationsSession
Invoke-CimMethod -InputObject $ci -MethodName ApplyApplicableUpdates

Restart-Computer

# List Installed Updates

$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate -ClassName MSFT_WUOperationsSession
$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates -Arguments @{SearchCriteria="IsInstalled=1";OnlineScan=$true}
$result.Updates


Installing Container

You can install the OneGet PowerShell module and the latest version of Docker by executing the following commands:
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider
Restart-Computer


Once rebooted, start the Docker service and pull the base image
Start-Service docker
docker pull microsoft/nanoserver

Enabling remote access to docker host (Nano Server)
netsh advfirewall firewall add rule name="Docker daemon " dir=in action=allow protocol=TCP localport=2375
Stop-Service docker
dockerd --unregister-service
dockerd -H npipe:// -H 0.0.0.0:2375 --register-service
Start-Service docker


Connecting to Windows container from remote computer

First you need to download the docker client using the following command
Invoke-WebRequest "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing

Now, extract it using the following command
Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles

Set the environment variable for the current session; this does not require the shell to be restarted.
$env:path += ";c:\program files\docker"

Now, add the docker directory to the system path
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

Connect to the docker host running on Nano Server (192.168.0.200). The following command will create a container named container1, with hostname container1, from the microsoft/nanoserver image:
docker -H tcp://192.168.0.200:2375 run -it --name container1 --hostname container1 microsoft/nanoserver cmd
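Once connected, the same remote client can manage the container; for example, to list all containers and start container1 again interactively after it exits:

docker -H tcp://192.168.0.200:2375 ps -a
docker -H tcp://192.168.0.200:2375 start -ai container1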


Done.


Set Up an IKEv2 VPN Server with StrongSwan on Ubuntu 16.04


IKEv2, or Internet Key Exchange v2, is a protocol that allows for direct IPSec tunneling between the server and client. In IKEv2 VPN implementations, IPSec provides encryption for the network traffic. IKEv2 is natively supported on new platforms (OS X 10.11+, iOS 9.1+, and Windows 10) with no additional applications necessary, and it handles client hiccups quite smoothly.


This article will guide you through the steps to set up an IKEv2 VPN server using StrongSwan on an Ubuntu 16.04 server and connect to it from Windows, iOS, and macOS clients.


Installing StrongSwan

First, we'll install StrongSwan, an open-source IPSec daemon which we'll configure as our VPN server. We'll also install the StrongSwan EAP plugin, which allows password authentication for clients, as opposed to certificate-based authentication. We'll need to create some special firewall rules as part of this configuration, so we'll also install a utility which allows us to make our new firewall rules persistent.

Execute the following command to install these components:

sudo apt-get install strongswan strongswan-plugin-eap-mschapv2 moreutils iptables-persistent

Note: While installing iptables-persistent, the installer will ask whether or not to save current IPv4 and IPv6 rules. As we want any previous firewall configurations to stay the same, we'll select yes on both prompts.

Now that everything's installed, let's move on to creating our certificates:


Creating a Certificate Authority

An IKEv2 server requires a certificate to identify itself to clients. To help us create the certificate required, StrongSwan comes with a utility to generate a certificate authority and server certificates. To begin, let's create a directory to store all the stuff we'll be working on.

mkdir vpn-certs
cd vpn-certs


Now that we have a directory to store everything, let's generate our root key. This will be a 4096-bit RSA key that will be used to sign our root certificate authority, so it's very important that we also secure this key by ensuring that only the root user can read it.

Execute these commands to generate and secure the key:

ipsec pki --gen --type rsa --size 4096 --outform pem > server-root-key.pem
chmod 600 server-root-key.pem


Now that we have a key, we can move on to creating our root certificate authority, using the key to sign the root certificate:

ipsec pki --self --ca --lifetime 3650 \
--in server-root-key.pem \
--type rsa --dn "C=US, O=VPN Server, CN=VPN Server Root CA" \
--outform pem > server-root-ca.pem


You can change the distinguished name (DN) values, such as country, organization, and common name, to something else if you want to. The common name here is just the indicator, so you could even make something up.

Later, we'll copy the root certificate (server-root-ca.pem) to our client devices so they can verify the authenticity of the server when they connect.

Now that we've got our root certificate authority up and running, we can create a certificate that the VPN server will use.


Generating a Certificate for the VPN Server

We'll now create a certificate and key for the VPN server. This certificate will allow the client to verify the server's authenticity.

First, create a private key for the VPN server with the following command:

ipsec pki --gen --type rsa --size 4096 --outform pem > vpn-server-key.pem

Then create and sign the VPN server certificate with the certificate authority's key you created in the previous step. Execute the following command, but change the Common Name (CN) and the Subject Alternate Name (SAN) field to your VPN server's DNS name or IP address:

ipsec pki --pub --in vpn-server-key.pem \
--type rsa | ipsec pki --issue --lifetime 1825 \
--cacert server-root-ca.pem \
--cakey server-root-key.pem \
--dn "C=US, O=VPN Server, CN=server_name_or_ip" \
--san server_name_or_ip \
--flag serverAuth --flag ikeIntermediate \
--outform pem > vpn-server-cert.pem


Copy the certificates to a path which would allow StrongSwan to read the certificates:

sudo cp ./vpn-server-cert.pem /etc/ipsec.d/certs/vpn-server-cert.pem
sudo cp ./vpn-server-key.pem /etc/ipsec.d/private/vpn-server-key.pem


Finally, secure the keys so they can only be read by the root user.

sudo chown root /etc/ipsec.d/private/vpn-server-key.pem
sudo chgrp root /etc/ipsec.d/private/vpn-server-key.pem
sudo chmod 600 /etc/ipsec.d/private/vpn-server-key.pem


In this step, we've created a certificate pair that would be used to secure communications between the client and the server. We've also signed the certificates with our root key, so the client will be able to verify the authenticity of the VPN server. Now that we've got all the certificates ready, we'll move on to configuring the software.


Configuring StrongSwan

We've already created all the certificates that we need, so it's time to configure StrongSwan itself.

StrongSwan has a default configuration file, but before we make any changes, let's back it up first so that we'll have a reference file just in case something goes wrong:

sudo cp /etc/ipsec.conf /etc/ipsec.conf.original

The example file is quite long, so to prevent misconfiguration, we'll clear the default configuration file and write our own configuration from scratch. First, clear out the original configuration:

echo '' | sudo tee /etc/ipsec.conf

Then open the file in your text editor:

sudo nano /etc/ipsec.conf

First, we'll tell StrongSwan to log daemon statuses for debugging and allow duplicate connections. Add these lines to the file:

/etc/ipsec.conf

config setup
  charondebug="ike 1, knl 1, cfg 0"
  uniqueids=no


Then, we'll create a configuration section for our VPN. We'll also tell StrongSwan to create IKEv2 VPN Tunnels and to automatically load this configuration section when it starts up. Append the following lines to the file:

/etc/ipsec.conf

conn ikev2-vpn
  auto=add
  compress=no
  type=tunnel
  keyexchange=ikev2
  fragmentation=yes
  forceencaps=yes


Next, we'll tell StrongSwan which encryption algorithms to use for the VPN. Append these lines:

/etc/ipsec.conf

ike=aes256-sha1-modp1024,3des-sha1-modp1024!
esp=aes256-sha1,3des-sha1!


We'll also configure dead-peer detection to clear any "dangling" connections in case the client unexpectedly disconnects. Add these lines:

/etc/ipsec.conf

dpdaction=clear
dpddelay=300s
rekey=no


Then we'll configure the server (left) side IPSec parameters. Add this to the file:
/etc/ipsec.conf

left=%any
leftid=@server_name_or_ip
leftcert=/etc/ipsec.d/certs/vpn-server-cert.pem
leftsendcert=always
leftsubnet=0.0.0.0/0


Note: When configuring the server ID (leftid), only include the @ character if your VPN server will be identified by a domain name:

leftid=@vpn.example.com

If the server will be identified by its IP address, just put the IP address in:

leftid=111.111.111.111

Then we configure the client (right) side IPSec parameters, like the private IP address ranges and DNS servers to use:
/etc/ipsec.conf

right=%any
rightid=%any
rightauth=eap-mschapv2
rightsourceip=10.10.10.0/24
rightdns=8.8.8.8,8.8.4.4
rightsendcert=never


Finally, we'll tell StrongSwan to ask the client for user credentials when they connect:
/etc/ipsec.conf

eap_identity=%identity

The configuration file should look like this:
/etc/ipsec.conf

config setup
  charondebug="ike 1, knl 1, cfg 0"
  uniqueids=no

conn ikev2-vpn
  auto=add
  compress=no
  type=tunnel
  keyexchange=ikev2
  fragmentation=yes
  forceencaps=yes
  ike=aes256-sha1-modp1024,3des-sha1-modp1024!
  esp=aes256-sha1,3des-sha1!
  dpdaction=clear
  dpddelay=300s
  rekey=no
  left=%any
  leftid=@server_name_or_ip
  leftcert=/etc/ipsec.d/certs/vpn-server-cert.pem
  leftsendcert=always
  leftsubnet=0.0.0.0/0
  right=%any
  rightid=%any
  rightauth=eap-mschapv2
  rightdns=8.8.8.8,8.8.4.4
  rightsourceip=10.10.10.0/24
  rightsendcert=never
  eap_identity=%identity


Save and close the file once you've verified that you've configured things as shown.

Now that we've configured the VPN parameters, let's move on to creating an account so our users can connect to the server.


Configuring VPN Authentication

Our VPN server is now configured to accept client connections, but we don't have any credentials configured yet, so we'll need to configure a couple things in a special configuration file called ipsec.secrets:

We need to tell StrongSwan where to find the private key for our server certificate, so the server will be able to encrypt and decrypt data.
We also need to set up a list of users that will be allowed to connect to the VPN.

Let's open the secrets file for editing:

sudo nano /etc/ipsec.secrets

First, we'll tell StrongSwan where to find our private key.
/etc/ipsec.secrets

server_name_or_ip : RSA "/etc/ipsec.d/private/vpn-server-key.pem"

Then we'll create the user credentials. You can make up any username or password combination that you like, but we have to tell StrongSwan to allow this user to connect from anywhere:
/etc/ipsec.secrets

your_username %any% : EAP "your_password"

Save and close the file. Now that we've finished working with the VPN parameters, we'll reload the VPN service so that our configuration is applied:

sudo ipsec reload
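You can confirm that the daemon loaded the new configuration by checking its status; the ikev2-vpn connection should appear in the output:

sudo ipsec statusall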

Now that the VPN server has been fully configured with both server options and user credentials, it's time to move on to configuring the most important part: the firewall.


Configuring the Firewall & Kernel IP Forwarding

Now that we've got the VPN server configured, we need to configure the firewall to forward and allow VPN traffic through. We'll use IPTables for this.

First, disable UFW if you've set it up, as it can conflict with the rules we need to configure:

sudo ufw disable

Then remove any remaining firewall rules created by UFW:

sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -F
sudo iptables -Z


To prevent us from being locked out of the SSH session, we'll accept connections that are already accepted. We'll also open port 22 (or whichever port you've configured) for future SSH connections to the server. Execute these commands:

sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

We'll also need to accept connections on the local loopback interface:

sudo iptables -A INPUT -i lo -j ACCEPT

Then we'll tell IPTables to accept IPSec connections:

sudo iptables -A INPUT -p udp --dport  500 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 4500 -j ACCEPT


Next, we'll tell IPTables to forward ESP (Encapsulating Security Payload) traffic so the VPN clients will be able to connect. ESP provides additional security for our VPN packets as they're traversing untrusted networks:

sudo iptables -A FORWARD --match policy --pol ipsec --dir in  --proto esp -s 10.10.10.0/24 -j ACCEPT
sudo iptables -A FORWARD --match policy --pol ipsec --dir out --proto esp -d 10.10.10.0/24 -j ACCEPT

Our VPN server will act as a gateway between the VPN clients and the internet. Since the VPN server will only have a single public IP address, we will need to configure masquerading to allow the server to request data from the internet on behalf of the clients; this will allow traffic to flow from the VPN clients to the internet, and vice-versa:

sudo iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -m policy --pol ipsec --dir out -j ACCEPT
sudo iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE


To prevent IP packet fragmentation on some clients, we'll tell IPTables to reduce the size of packets by adjusting the packets' maximum segment size. This prevents issues with some VPN clients.

sudo iptables -t mangle -A FORWARD --match policy --pol ipsec --dir in -s 10.10.10.0/24 -o eth0 -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360

For better security, we'll drop everything else that does not match the rules we've configured:

sudo iptables -A INPUT -j DROP
sudo iptables -A FORWARD -j DROP


Now we'll make the firewall configuration persistent, so that all our configuration work won't get wiped on reboot:

sudo netfilter-persistent save
sudo netfilter-persistent reload


Finally, we'll enable packet forwarding on the server. Packet forwarding is what makes it possible for our server to "route" data from one IP address to the other. Essentially, we're making our server act like a router.

Edit the file /etc/sysctl.conf:

sudo nano /etc/sysctl.conf

We'll need to configure a few things here:

First, we'll enable IPv4 packet forwarding.
We'll disable Path MTU discovery to prevent packet fragmentation problems.
We also won't accept ICMP redirects nor send ICMP redirects to prevent man-in-the-middle attacks.

You need to make the following changes to the file:
/etc/sysctl.conf


. . .

# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
. . .

# Do not accept ICMP redirects (prevent MITM attacks)
net.ipv4.conf.all.accept_redirects = 0
# Do not send ICMP redirects (we are not a router)
net.ipv4.conf.all.send_redirects = 0

. . .

net.ipv4.ip_no_pmtu_disc = 1

Make those changes, save the file, and exit the editor. Then restart the server:

sudo reboot

You'll get disconnected from the server as it reboots, but that's expected. After the server reboots, log back in to the server as the sudo, non-root user. You're ready to test the connection on a client.


Testing the VPN Connection on Windows, iOS, and macOS

Now that you have everything set up, it's time to try it out. First, you'll need to copy the root certificate you created and install it on your client device(s) that will connect to the VPN. The easiest way to do this is to log into your server and execute this command to display the contents of the certificate file:

cat ~/vpn-certs/server-root-ca.pem

You'll see output similar to this:

Output
-----BEGIN CERTIFICATE-----
MIIFQjCCAyqgAwIBAgIIFkQGvkH4ej0wDQYJKoZIhvcNAQEMBQAwPzELMAkGA1UE

. . .

EwbVLOXcNduWK2TPbk/+82GRMtjftran6hKbpKGghBVDPVFGFT6Z0OfubpkQ9RsQ
BayqOb/Q

-----END CERTIFICATE-----

Copy this output to your computer, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines, and save it to a file with a recognizable name, such as vpn_root_certificate.pem. Ensure the file you create has the .pem extension.

Alternatively, use SFTP to transfer the file to your computer.

Once you have the vpn_root_certificate.pem file downloaded to your computer, you can set up the connection to the VPN.

Connecting from Windows

First, import the root certificate by following these steps:
  • Press WINDOWS+R to bring up the Run dialog, and enter mmc.exe to launch the Windows Management Console.
  • From the File menu, navigate to Add or Remove Snap-in, select Certificates from the list of available snap-ins, and click Add.
  • We want the VPN to work with any user, so select Computer Account and click Next.
  • We're configuring things on the local computer, so select Local Computer, then click Finish.
  • Under the Console Root node, expand the Certificates (Local Computer) entry, expand Trusted Root Certification Authorities, and then select the Certificates entry:


  • From the Action menu, select All Tasks and click Import to display the Certificate Import Wizard.
  • Click Next to move past the introduction.
  • On the File to Import screen, press the Browse button and select the certificate file that you've saved. 
  • Then click Next. 
  • Ensure that the Certificate Store is set to Trusted Root Certification Authorities, and click Next.
  • Click Finish to import the certificate.

Then configure the VPN with these steps:
  • Launch Control Panel, then navigate to the Network and Sharing Center.
  • Click on Set up a new connection or network, then select Connect to a workplace.
  • Select Use my Internet connection (VPN).
  • Enter the VPN server details. Enter the server's domain name or IP address in the Internet address field, then fill in Destination name with something that describes your VPN connection. Then click Done.

Your new VPN connection will be visible under the list of networks. Select the VPN and click Connect. You'll be prompted for your username and password. Type them in, click OK, and you'll be connected.


Connecting from iOS

To configure the VPN connection on an iOS device, follow these steps:
  • Send yourself an email with the root certificate attached.
  • Open the email on your iOS device and tap on the attached certificate file, then tap Install and enter your passcode. Once it installs, tap Done.
  • Go to Settings, General, VPN and tap Add VPN Configuration. This will bring up the VPN connection configuration screen.
  • Tap on Type and select IKEv2.
  • In the Description field, enter a short name for the VPN connection. This could be anything you like.
  • In the Server and Remote ID field, enter the server's domain name or IP address. The Local ID field can be left blank.
  • Enter your username and password in the Authentication section, then tap Done.
  • Select the VPN connection that you just created, tap the switch on the top of the page, and you'll be connected.

Connecting from macOS

Follow these steps to import the certificate:
  • Double-click the certificate file. Keychain Access will pop up with a dialog that says "Keychain Access is trying to modify the system keychain. Enter your password to allow this."
  • Enter your password, then click on Modify Keychain
  • Double-click the newly imported VPN certificate. This brings up a small properties window where you can specify the trust levels. Set IP Security (IPSec) to Always Trust and you'll be prompted for your password again. This setting saves automatically after entering the password.

Now that the certificate is imported and trusted, configure the VPN connection with these steps:
  • Go to System Preferences and choose Network.
  • Click on the small "plus" button on the lower-left of the list of networks.
  • In the popup that appears, Set Interface to VPN, set the VPN Type to IKEv2, and give the connection a name.
  • In the Server and Remote ID field, enter the server's domain name or IP address. Leave the Local ID blank.
  • Click on Authentication Settings, select Username, and enter your username and password you configured for your VPN user. Then click OK.

Finally, click on Connect to connect to the VPN. You should now be connected to the VPN.
 

Troubleshooting Connections

If you are unable to import the certificate, ensure the file has the .pem extension, and not .pem.txt.

If you're unable to connect to the VPN, check the server name or IP address you used. The server's domain name or IP address must match what you've configured as the common name (CN) while creating the certificate. If they don't match, the VPN connection won't work. If you set up a certificate with the CN of vpn.example.com, you must use vpn.example.com when you enter the VPN server details. Double-check the command you used to generate the certificate, and the values you used when creating your VPN connection.
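You can inspect the CN and SAN that were actually written into the server certificate with openssl, run from the vpn-certs directory on the server:

openssl x509 -in vpn-server-cert.pem -noout -subject
openssl x509 -in vpn-server-cert.pem -noout -text | grep -A1 'Subject Alternative Name'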

Finally, double-check the VPN configuration to ensure the leftid value is configured with the @ symbol if you're using a domain name:

  leftid=@vpn.example.com

And if you're using an IP address, ensure that the @ symbol is omitted.


Conclusion

We've successfully built a VPN server that uses the IKEv2 protocol. Now you can be assured that your online activities will remain secure.

Oracle Database 12c RAC to RAC Data Guard Configuration


This article will guide you through the steps to install and configure Oracle Grid Infrastructure 12c and Database 12c, including RAC to RAC Data Guard and Data Guard Broker configuration in a primary and physical standby environment for high availability.

 

Prerequisites:

You need to download the following software if you don’t have already.

  • Oracle Enterprise Linux 6 (64-bit) or Red Hat Enterprise Linux 6 (64-bit)
  • Oracle Grid Infrastructure 12c (64-bit)
  • Oracle Database 12c (64-bit)

 

Environment:

You need four (physical or virtual) machines with two network adapters and at least 2GB of memory installed on each.


 

Installing Oracle Enterprise Linux 6

To begin the installation, power on your first machine, boot from the Oracle Linux media and install it as a basic server. More specifically, it should be a server installation with a minimum of 4GB swap, a separate partition for /u01 with a minimum of 20GB of space, the firewall disabled, SELinux set to permissive and the following package groups installed.



Base System > Base
Base System > Compatibility libraries
Base System > Hardware monitoring utilities
Base System > Large Systems Performance
Base System > Network file system client
Base System > Performance Tools
Base System > Perl Support
Servers > Server Platform
Servers > System administration tools
Desktops > Desktop
Desktops > Desktop Platform
Desktops > Fonts
Desktops > General Purpose Desktop
Desktops > Graphical Administration Tools
Desktops > Input Methods
Desktops > X Window System
Applications > Internet Browser
Development > Additional Development
Development > Development Tools


If you are on physical machines, you have to install all four machines one by one, but if you are on a virtual platform, you have the option to clone your first machine with minor changes to the IP addresses and hostnames of the cloned machines.

Click Reboot to finish the installation.


 



Preparing Oracle Enterprise Linux 6

Since we have completed Oracle Linux installation, now we need to prepare our Linux machines for Gird infrastructure and Database installation. Make sure internet connection is available to perform the following tasks.

You need to set up network (ip address, netmask, gateway, dns and hostname) on all four machines according to your environment. In our case, we have the following credentials for our lab environment.



Primary Site: Networking – PDBSRV1

[root@PDBSRV1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=PDBSRV1.TSPK.COM
GATEWAY=192.168.10.1

Save and close

[root@PDBSRV1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.100
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
DNS1=192.168.10.1
DOMAIN=TSPK.COM
DEFROUTE=yes

Save and close

[root@PDBSRV1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.100
NETMASK=255.255.255.0

Save and close



Add the following entries in /etc/hosts file on PDBSRV1


[root@PDBSRV1 ~]# vi /etc/hosts

# Public
192.168.10.100  pdbsrv1.tspk.com        pdbsrv1
192.168.10.101  pdbsrv2.tspk.com        pdbsrv2

# Private
192.168.1.100   pdbsrv1-prv.tspk.com    pdbsrv1-prv
192.168.1.101   pdbsrv2-prv.tspk.com    pdbsrv2-prv

# Virtual
192.168.10.103  pdbsrv1-vip.tspk.com    pdbsrv1-vip
192.168.10.104  pdbsrv2-vip.tspk.com    pdbsrv2-vip

# SCAN
#192.168.10.105 pdbsrv-scan.tspk.com    pdbsrv-scan
#192.168.10.106 pdbsrv-scan.tspk.com    pdbsrv-scan
#192.168.10.107 pdbsrv-scan.tspk.com    pdbsrv-scan

Save and close

Primary Site: Networking – PDBSRV2

[root@PDBSRV2 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=PDBSRV2.TSPK.COM
GATEWAY=192.168.10.1

Save and close

[root@PDBSRV2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.101
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
DNS1=192.168.10.1
DOMAIN=TSPK.COM
DEFROUTE=yes

Save and close

[root@PDBSRV2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.101
NETMASK=255.255.255.0

Save and close

Add the following entries in /etc/hosts file on PDBSRV2

[root@PDBSRV2 ~]# vi /etc/hosts

# Public
192.168.10.100  pdbsrv1.tspk.com        pdbsrv1
192.168.10.101  pdbsrv2.tspk.com        pdbsrv2

# Private
192.168.1.100   pdbsrv1-prv.tspk.com    pdbsrv1-prv
192.168.1.101   pdbsrv2-prv.tspk.com    pdbsrv2-prv

# Virtual
192.168.10.103  pdbsrv1-vip.tspk.com    pdbsrv1-vip
192.168.10.104  pdbsrv2-vip.tspk.com    pdbsrv2-vip

# SCAN
#192.168.10.105 pdbsrv-scan.tspk.com    pdbsrv-scan
#192.168.10.106 pdbsrv-scan.tspk.com    pdbsrv-scan
#192.168.10.107 pdbsrv-scan.tspk.com    pdbsrv-scan

Save and close.

Now execute the following commands on both primary nodes PDBSRV1 and PDBSRV2

[root@PDBSRV1 ~]# hostname pdbsrv1.tspk.com
[root@PDBSRV2 ~]# hostname pdbsrv2.tspk.com

[root@PDBSRV1 ~]# service network reload
[root@PDBSRV2 ~]# service network reload



Standby Site: Networking – SDBSRV1

[root@SDBSRV1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=SDBSRV1.TSPK.COM
GATEWAY=192.168.10.1

Save and close

[root@SDBSRV1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.110
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
DNS1=192.168.10.1
DOMAIN=TSPK.COM
DEFROUTE=yes

Save and close

[root@SDBSRV1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.110
NETMASK=255.255.255.0

Save and close

Add the following entries in /etc/hosts file on SDBSRV1

[root@SDBSRV1 ~]# vi /etc/hosts

# Public
192.168.10.110  sdbsrv1.tspk.com        sdbsrv1
192.168.10.111  sdbsrv2.tspk.com        sdbsrv2

# Private
192.168.1.110   sdbsrv1-prv.tspk.com    sdbsrv1-prv
192.168.1.111   sdbsrv2-prv.tspk.com    sdbsrv2-prv

# Virtual
192.168.10.113  sdbsrv1-vip.tspk.com    sdbsrv1-vip
192.168.10.114  sdbsrv2-vip.tspk.com    sdbsrv2-vip

# SCAN
#192.168.10.115 sdbsrv-scan.tspk.com    sdbsrv-scan
#192.168.10.116 sdbsrv-scan.tspk.com    sdbsrv-scan
#192.168.10.117 sdbsrv-scan.tspk.com    sdbsrv-scan

Save and close

Standby Site: Networking – SDBSRV2

[root@SDBSRV2 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=SDBSRV2.TSPK.COM
GATEWAY=192.168.10.1

Save and close

[root@SDBSRV2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.111
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
DNS1=192.168.10.1
DOMAIN=TSPK.COM
DEFROUTE=yes

Save and close

[root@SDBSRV2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.111
NETMASK=255.255.255.0

Save and close


Add the following entries in /etc/hosts file on SDBSRV2

[root@SDBSRV2 ~]# vi /etc/hosts

# Public
192.168.10.110  sdbsrv1.tspk.com        sdbsrv1
192.168.10.111  sdbsrv2.tspk.com        sdbsrv2

# Private
192.168.1.110   sdbsrv1-prv.tspk.com    sdbsrv1-prv
192.168.1.111   sdbsrv2-prv.tspk.com    sdbsrv2-prv

# Virtual
192.168.10.113  sdbsrv1-vip.tspk.com    sdbsrv1-vip
192.168.10.114  sdbsrv2-vip.tspk.com    sdbsrv2-vip

# SCAN
#192.168.10.115 sdbsrv-scan.tspk.com    sdbsrv-scan
#192.168.10.116 sdbsrv-scan.tspk.com    sdbsrv-scan
#192.168.10.117 sdbsrv-scan.tspk.com    sdbsrv-scan

Save and close.

Now execute the following commands on both standby nodes SDBSRV1 and SDBSRV2

[root@SDBSRV1 ~]# hostname sdbsrv1.tspk.com
[root@SDBSRV2 ~]# hostname sdbsrv2.tspk.com

[root@SDBSRV1 ~]# service network reload
[root@SDBSRV2 ~]# service network reload




Note: You need to create host (A) records for the following entries in your DNS server to resolve the SCAN names of both the Primary and Standby sites.



Primary Site SCAN

192.168.10.105 pdbsrv-scan.tspk.com

192.168.10.106 pdbsrv-scan.tspk.com

192.168.10.107 pdbsrv-scan.tspk.com



Standby Site SCAN

192.168.10.115 sdbsrv-scan.tspk.com

192.168.10.116 sdbsrv-scan.tspk.com

192.168.10.117 sdbsrv-scan.tspk.com
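Before proceeding, you can verify the round-robin resolution from any of the nodes; the SCAN name should return all three addresses:

nslookup pdbsrv-scan.tspk.com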


Now, execute the following commands one by one on all four nodes to install and update the packages required for grid and database installation.

yum install compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel -y

yum install kmod-oracleasm oracleasm-support -y

rpm -Uvh http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el6.x86_64.rpm

yum install oracle-rdbms-server-12cR1-preinstall -y

When you are done with the above commands, perform the following steps on all four nodes.

vi /etc/selinux/config

SELINUX=permissive

Save and close

chkconfig iptables off
service iptables stop

chkconfig ntpd off
service ntpd stop
mv /etc/ntp.conf /etc/ntp.conf.orig
rm /var/run/ntpd.pid

mkdir -p /u01/app/12.1.0/grid
mkdir -p /u01/app/oracle/product/12.1.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01/

Set the same password for user oracle on all four machines by executing the following command
 
passwd oracle

Set the environment variables on all four nodes; change the values such as ORACLE_HOSTNAME and ORACLE_SID on each node accordingly.

vi /home/oracle/.bash_profile

# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=pdbsrv1.tspk.com
export DB_NAME=PDBRAC
export DB_UNIQUE_NAME=PDBRAC
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.1.0/grid
export DB_HOME=$ORACLE_BASE/product/12.1.0/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=PDBRAC1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'

Save and close


vi /home/oracle/grid_env

export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME

export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

Save and close

vi /home/oracle/db_env

export ORACLE_SID=PDBRAC1
export ORACLE_HOME=$DB_HOME

export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib


Save and close

The environment variables from .bash_profile, grid_env and db_env on all four nodes will look similar to what is shown in the image below.

 

You need to increase the /dev/shm size if it is less than 4GB using the following command. If you don't increase it, it will cause an error during the prerequisites check of the Grid installation.

mount -o remount,size=4G /dev/shm

To make it persistent even after reboot, you need to modify /etc/fstab accordingly

vi /etc/fstab
tmpfs                   /dev/shm                tmpfs   defaults,size=4G        0 0

Save and close
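You can confirm the new size immediately with df (a standard utility); the Size column should now report 4.0G for /dev/shm:

df -h /dev/shm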


Since we have already added and configured the iSCSI storage on all four nodes, we now need to create the ASM disks on the shared storage using the following commands on Primary node PDBSRV1. Later we will configure ASMLib and scan the same disks on PDBSRV2.

[root@PDBSRV1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting ENTER without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@PDBSRV1 ~]# oracleasm createdisk DISK1 /dev/sdc1
[root@PDBSRV1 ~]# oracleasm createdisk DISK2 /dev/sdd1
[root@PDBSRV1 ~]# oracleasm createdisk DISK3 /dev/sde1

[root@PDBSRV1 ~]# oracleasm scandisks
[root@PDBSRV1 ~]# oracleasm listdisks
DISK1
DISK2
DISK3
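Optionally, you can sanity-check the disk labels before moving on; oracleasm querydisk and the /dev/oracleasm/disks directory are both provided by ASMLib:

[root@PDBSRV1 ~]# oracleasm querydisk DISK1
Disk "DISK1" is a valid ASM disk
[root@PDBSRV1 ~]# ls /dev/oracleasm/disks/
DISK1  DISK2  DISK3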

Now configure ASMLib and scan the same disks on PDBSRV2 using the following commands.

[root@PDBSRV2 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting ENTER without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@PDBSRV2 ~]# oracleasm scandisks
[root@PDBSRV2 ~]# oracleasm listdisks
DISK1
DISK2
DISK3


Now, we will create the ASM disks on our Standby node SDBSRV1; later we will configure ASMLib and scan the same disks on SDBSRV2.

[root@SDBSRV1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting ENTER without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@SDBSRV1 ~]# oracleasm createdisk DISK1 /dev/sdc1
[root@SDBSRV1 ~]# oracleasm createdisk DISK2 /dev/sdd1
[root@SDBSRV1 ~]# oracleasm createdisk DISK3 /dev/sde1

[root@SDBSRV1 ~]# oracleasm scandisks
[root@SDBSRV1 ~]# oracleasm listdisks
DISK1
DISK2
DISK3

Now configure ASMLib and scan the same disks on SDBSRV2 using the following commands.

[root@SDBSRV2 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@SDBSRV2 ~]# oracleasm scandisks
[root@SDBSRV2 ~]# oracleasm listdisks
DISK1
DISK2
DISK3




Installing Grid Infrastructure 12c - Primary Site

We have completed the preparation of all four machines and are ready to start the Oracle Grid Infrastructure 12c installation. You should have either VNC or Xmanager installed on your client machine for the graphical installation of the grid/database software. In our case, we have a Windows 7 client machine and we are using Xmanager.

Now, copy the grid infrastructure and database software to your primary node PDBSRV1 and extract them under /opt or any other directory of your choice. In our case, we have CD-ROM media and we will extract it under /opt.

Login using root user on your primary node PDBSRV1 and perform the following steps.

unzip -q /media/linuxamd64_12c_grid_1of2.zip -d /opt
unzip -q /media/linuxamd64_12c_grid_2of2.zip -d /opt

unzip -q /media/linuxamd64_12c_database_1of2.zip -d /opt
unzip -q /media/linuxamd64_12c_database_2of2.zip -d /opt

Copy cvuqdisk-1.0.9-1.rpm to the other three nodes under /opt and install it on each node one by one.

scp -p /opt/grid/rpm/cvuqdisk-1.0.9-1.rpm pdbsrv2:/opt
scp -p /opt/grid/rpm/cvuqdisk-1.0.9-1.rpm sdbsrv1:/opt
scp -p /opt/grid/rpm/cvuqdisk-1.0.9-1.rpm sdbsrv2:/opt

rpm -Uvh /opt/grid/rpm/cvuqdisk-1.0.9-1.rpm

Now, log out from the root user and log in again as the oracle user to perform the grid installation on your primary node PDBSRV1.

Run grid_env to set the environment variables for the grid infrastructure installation.

[oracle@PDBSRV1 ~]$ grid_env
[oracle@PDBSRV1 ~]$ export DISPLAY=192.168.10.1:0.0

Now, execute the following command from the directory where you extracted the grid software to begin the installation.

[oracle@PDBSRV1 grid]$ /opt/grid/runInstaller

Follow the screenshots to set up grid infrastructure according to your environment.

Select"Skip Software Update" Click Next


Select "Install and Configure Oracle Grid Infrastructure for a Cluster" Click Next



Select "Configure a Standard Cluster" Click Next



Choose "Typical Installation" Click Next



Change the "SCAN Name" and add secondary host in the cluster, enter oracle user password then Click Next.



Verify destination path, enter password and choose "dba" as OSASM group. Click Next



Click "External" for redundancy and select at least one disk or more and Click Next.



Keep the default and Click Next



Keep the default and Click Next



It is safe to ignore this warning since we cannot add more than 4GB of memory in this setup. Click Next



Verify and if you are happy with the summary, Click Install.



Stop when the following screen appears and do not click OK yet. Log in as the root user on PDBSRV1 and PDBSRV2 to execute the following scripts. You must execute both scripts on PDBSRV1 first.

[root@PDBSRV1 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@PDBSRV1 ~]# /u01/app/12.1.0/grid/root.sh

When you are done on PDBSRV1, execute both scripts on PDBSRV2

[root@PDBSRV2 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@PDBSRV2 ~]# /u01/app/12.1.0/grid/root.sh

When done, Click OK.



Setup will continue after successful execution of scripts on both nodes.



Click close.



At this point, the Grid Infrastructure 12c installation is complete. We can check the status of the installation using the following commands.

[oracle@PDBSRV1 ~]$ grid_env
[oracle@PDBSRV1 ~]$ crsctl stat res -t

   
Note: If you find ora.oc4j offline, you can enable and start it manually by executing the following commands.

[oracle@PDBSRV1 ~]$ srvctl enable oc4j
[oracle@PDBSRV1 ~]$ srvctl start oc4j
[oracle@PDBSRV1 ~]$ crsctl stat res -t

 

Installing Oracle Database 12c - Primary Site

Since we have completed the grid installation, we now need to install Oracle Database 12c by executing the runInstaller command from the directory where you extracted the database software.

[oracle@PDBSRV1 ~]$ db_env
[oracle@PDBSRV1 ~]$ /opt/database/runInstaller

Uncheck the security updates checkbox, click the "Next" button, and then click "Yes" on the subsequent warning dialog.

Select the "Install database software only" option, then click the "Next" button.



Accept the "Oracle Real Application Clusters database installation" option by clicking the "Next" button.



Make sure both nodes are selected, then click the "Next" button.



Select the required languages, then click the "Next" button.



Select the "Enterprise Edition" option, then click the "Next" button.



Enter "/u01/app/oracle" as the Oracle base and "/u01/app/oracle/product/12.1.0/db_1" as the software location, then click the "Next" button.



Select the desired operating system groups, then click the "Next" button.



Wait for the prerequisite check to complete. If there are any problems either click the "Fix & Check Again" button, or check the "Ignore All" checkbox and click the "Next" button.



If you are happy with the summary information, click the "Install" button.



Wait while the installation takes place.



When prompted, run the configuration script on each node. When the scripts have been run on each node, click the "OK" button.



Click the "Close" button to exit the installer.



At this stage, the database installation is complete.


Creating a Database - Primary Site

Since we have completed the database installation, it's time to create a database by executing the following commands.

[oracle@PDBSRV1 ~]$ db_env
[oracle@PDBSRV1 ~]$ dbca

Select the "Create Database" option and click the "Next" button.



Select the "Advanced Mode" option. Click the "Next" button.



Select exactly what is shown in the image and Click Next.



Enter the "PDBRAC" in database name and keep the SID as is. Click Next

Make sure both nodes are selected and Click Next



Keep the default and Click Next



Select "Use the Same Administrative password for All Accounts" enter the password and Click Next



Keep the defaults and Click Next.
 

Select "Sample Schema" we need it for testing purpose later and Click Next



Increase "Memory Size" and navigate to "Sizing" tab



Increase the "Processes" and navigate to "Character Sets" tab



Select the following options and Click "All Initialization Parameters"



Define "PDBRAC" in db_unique_name and click Close.

Click Next



Select the below options and click Next.



If you are happy with the Summary report, Click Finish.



The database creation process will start; it will take some time to complete.



Click Exit
Click Close



We have successfully created a database on the Primary nodes (pdbsrv1, pdbsrv2). We can check the database status by executing the following commands.

[oracle@PDBSRV1 ~]$ grid_env
[oracle@PDBSRV1 ~]$ srvctl status database -d pdbrac
Instance PDBRAC1 is running on node pdbsrv1
Instance PDBRAC2 is running on node pdbsrv2

[oracle@PDBSRV1 ~]$ srvctl config database -d pdbrac
Database unique name: PDBRAC
Database name: PDBRAC
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/PDBRAC/spfilePDBRAC.ora
Password file: +DATA/PDBRAC/orapwpdbrac
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: PDBRAC
Database instances: PDBRAC1,PDBRAC2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed

[oracle@PDBSRV1 ~]$ db_env
[oracle@PDBSRV1 ~]$ sqlplus / as sysdba

SQL> SELECT inst_name FROM v$active_instances;

INST_NAME
--------------------------------------------------------------------------------
PDBSRV1.TSPK.COM:PDBRAC1
PDBSRV2.TSPK.COM:PDBRAC2

SQL>exit





Installing Grid Infrastructure 12c - Standby Site

Since we have already installed all prerequisites for the grid/database installation on our Standby site nodes (sdbsrv1, sdbsrv2), we can start the grid installation straightaway.

Log in to sdbsrv1 as the oracle user and execute the following commands to begin the installation. Follow the same steps you performed during the installation on the primary site nodes, with minor changes as shown in the image below.



[oracle@SDBSRV1 grid]$ grid_env
[oracle@SDBSRV1 grid]$ export DISPLAY=192.168.10.1:0.0
[oracle@SDBSRV1 grid]$ /opt/grid/runInstaller

Enter "SCAN Name" and add secondadry node "sdbsrv1" enter oracle user password in "OS Password" box and Click Next


 

Once the grid installation is completed, we can check the status of the installation using the following commands.

[oracle@SDBSRV1 ~]$ grid_env
[oracle@SDBSRV1 ~]$ crsctl stat res -t 
 

Note: If you find ora.oc4j offline, you can enable and start it manually by executing the following commands.

[oracle@SDBSRV1 ~]$ srvctl enable oc4j
[oracle@SDBSRV1 ~]$ srvctl start oc4j
[oracle@SDBSRV1 ~]$ crsctl stat res -t
 




Installing Database 12c - Standby Site

We can start the Database 12c installation by following the same steps we performed during the installation on the primary nodes, with minor changes as shown in the images below.

[oracle@SDBSRV1 ~]$ db_env
[oracle@SDBSRV1 ~]$ /opt/database/runInstaller



You do not need to run "dbca" to create a database on the Standby nodes. Once the database installation is completed, we can start configuring Data Guard, beginning with the Primary nodes.



Data Guard Configuration - Primary Site

Log in to PDBSRV1 as the oracle user and perform the following tasks to prepare the Data Guard configuration.

[oracle@PDBSRV1 ~]$ db_env
[oracle@PDBSRV1 ~]$ mkdir /u01/app/oracle/backup
[oracle@PDBSRV1 ~]$ sqlplus / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> alter database force logging;

SQL> alter database open;

SQL> alter system set log_archive_config='DG_CONFIG=(PDBRAC,SDBRAC)' scope=both sid='*';

SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PDBRAC' scope=both sid='*';

SQL> alter system set LOG_ARCHIVE_DEST_2='SERVICE=SDBRAC SYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=SDBRAC' scope=both sid='*';

SQL> alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';
SQL> alter system set LOG_ARCHIVE_MAX_PROCESSES=8 scope=both sid='*';
SQL> alter system set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE scope=both sid='*';
SQL> alter system set fal_server = 'SDBRAC';
SQL> alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=spfile sid='*';
SQL> alter database flashback ON;
SQL> select group#,thread#,bytes from v$log;

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('+DATA') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('+DATA') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('+DATA') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('+DATA') SIZE 50M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 ('+DATA') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 ('+DATA') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 ('+DATA') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 ('+DATA') SIZE 50M;

SQL> select group#,thread#,bytes from v$standby_log;

SQL> create pfile='/u01/app/oracle/backup/initSDBRAC.ora' from spfile;

SQL> exit

[oracle@PDBSRV1 ~]$ grid_env
[oracle@PDBSRV1 ~]$ asmcmd pwget --dbuniquename PDBRAC
[oracle@PDBSRV1 ~]$ asmcmd pwcopy --dbuniquename PDBRAC '+DATA/PDBRAC/orapwpdbrac' '/u01/app/oracle/backup/orapwsdbrac'

[oracle@PDBSRV1 ~]$ db_env
[oracle@PDBSRV1 ~]$ rman target / nocatalog
RMAN> run
{
     sql "alter system switch logfile";
     allocate channel ch1 type disk format '/u01/app/oracle/backup/Primary_bkp_for_standby_%U';
     backup database;
     backup current controlfile for standby;
     sql "alter system archive log current";
}

RMAN> exit
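Before copying anything to the standby site, it doesn't hurt to confirm that the backup pieces, the standby controlfile, and the pfile were all written to the staging directory:

[oracle@PDBSRV1 ~]$ ls -lh /u01/app/oracle/backup/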

Update $ORACLE_HOME/network/admin/tnsnames.ora file on PDBSRV1

[oracle@PDBSRV1 ~]$ vi $ORACLE_HOME/network/admin/tnsnames.ora

PDBRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = pdbsrv-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PDBRAC)
    )
  )

PDBRAC1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = pdbsrv1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PDBRAC)
      (SID = PDBRAC1)
    )
  )

PDBRAC2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = pdbsrv2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PDBRAC)
      (SID = PDBRAC2)
    )
  )

SDBRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sdbsrv-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SDBRAC)
    )
  )

SDBRAC1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sdbsrv1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SDBRAC)
      (SID = SDBRAC1)
    )
  )

SDBRAC2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sdbsrv2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SDBRAC)
      (SID = SDBRAC2)
    )
  )

Save and close

Copy the tnsnames.ora from PDBSRV1 to the other three nodes under $ORACLE_HOME/network/admin in order to keep the same tnsnames.ora on all the nodes.


[oracle@PDBSRV1 ~]$ scp -p $ORACLE_HOME/network/admin/tnsnames.ora pdbsrv2:$ORACLE_HOME/network/admin

[oracle@PDBSRV1 ~]$ scp -p $ORACLE_HOME/network/admin/tnsnames.ora sdbsrv1:$ORACLE_HOME/network/admin

[oracle@PDBSRV1 ~]$ scp -p $ORACLE_HOME/network/admin/tnsnames.ora sdbsrv2:$ORACLE_HOME/network/admin
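After distributing the file, you can confirm that each net service name resolves and that a listener answers, using the Oracle-supplied tnsping utility from any of the four nodes:

[oracle@PDBSRV1 ~]$ tnsping PDBRAC
[oracle@PDBSRV1 ~]$ tnsping SDBRAC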

Copy initSDBRAC.ora and orapwsdbrac from primary node PDBSRV1 to standby node SDBSRV1

[oracle@PDBSRV1 ~]$ scp /u01/app/oracle/backup/initSDBRAC.ora oracle@sdbsrv1:/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora

[oracle@PDBSRV1 ~]$ scp /u01/app/oracle/backup/orapwsdbrac oracle@sdbsrv1:/u01/app/oracle/backup/orapwsdbrac

Copy /u01/app/oracle/backup from primary node pdbsrv1 to standby node sdbsrv1 under the same location as primary

[oracle@PDBSRV1 ~]$ scp -r /u01/app/oracle/backup sdbsrv1:/u01/app/oracle

 

Data Guard Configuration - Standby Site

Log in to SDBSRV1 and SDBSRV2 as the oracle user and perform the following tasks to prepare the Standby site Data Guard configuration.

[oracle@SDBSRV1 ~]$ mkdir /u01/app/oracle/admin/SDBRAC/adump
[oracle@SDBSRV2 ~]$ mkdir /u01/app/oracle/admin/SDBRAC/adump

Now modify the initSDBRAC.ora file so that it looks similar to the one below.


[oracle@SDBSRV1 ~]$ vi /u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora
SDBRAC1.__data_transfer_cache_size=0
SDBRAC2.__data_transfer_cache_size=0
SDBRAC1.__db_cache_size=184549376
SDBRAC2.__db_cache_size=452984832
SDBRAC1.__java_pool_size=16777216
SDBRAC2.__java_pool_size=16777216
SDBRAC1.__large_pool_size=419430400
SDBRAC2.__large_pool_size=33554432
SDBRAC1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
SDBRAC2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
SDBRAC1.__pga_aggregate_target=520093696
SDBRAC2.__pga_aggregate_target=570425344
SDBRAC1.__sga_target=973078528
SDBRAC2.__sga_target=922746880
SDBRAC1.__shared_io_pool_size=0
SDBRAC2.__shared_io_pool_size=33554432
SDBRAC1.__shared_pool_size=335544320
SDBRAC2.__shared_pool_size=369098752
SDBRAC1.__streams_pool_size=0
SDBRAC2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/SDBRAC/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.0.0'
*.control_files='+DATA/SDBRAC/control01.ctl','+DATA/SDBRAC/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='PDBRAC'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=5025m
*.db_unique_name='SDBRAC'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=SDBRACXDB)'
*.fal_server='PDBRAC'
SDBRAC1.instance_number=1
SDBRAC2.instance_number=2
*.log_archive_config='DG_CONFIG=(SDBRAC,PDBRAC)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=SDBRAC'
*.log_archive_dest_2='service=PDBRAC async valid_for=(online_logfile,primary_role) db_unique_name=PDBRAC'
*.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=8
*.memory_target=1416m
*.open_cursors=300
*.processes=1024
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=1131
*.standby_file_management='AUTO'
SDBRAC2.thread=2
SDBRAC1.thread=1
SDBRAC2.undo_tablespace='UNDOTBS2'
SDBRAC1.undo_tablespace='UNDOTBS1'

Save and close

Now create the ASM directories

[oracle@SDBSRV1 ~]$ grid_env
[oracle@SDBSRV1 ~]$ asmcmd mkdir DATA/SDBRAC
[oracle@SDBSRV1 ~]$ asmcmd

ASMCMD> cd DATA/SDBRAC
ASMCMD> mkdir PARAMETERFILE DATAFILE CONTROLFILE TEMPFILE ONLINELOG ARCHIVELOG STANDBYLOG
ASMCMD> exit

Configure a static listener registration in the listener.ora file on the standby nodes by adding an entry similar to the one below at the end of the file. The reason for this is that our standby database will be in NOMOUNT stage; in NOMOUNT stage the database instance does not self-register with the listener, so you must tell the listener it is there.

[oracle@SDBSRV1 ~]$ cp -p /u01/app/12.1.0/grid/network/admin/listener.ora /u01/app/12.1.0/grid/network/admin/listener.ora.bkp

[oracle@SDBSRV1 ~]$ vi /u01/app/12.1.0/grid/network/admin/listener.ora
SID_LIST_LISTENER =
(SID_LIST =
   (SID_DESC =
       (SID_NAME = SDBRAC)
       (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1)
   )
)

Save and close

[oracle@SDBSRV1 ~]$ scp -p /u01/app/12.1.0/grid/network/admin/listener.ora sdbsrv2:/u01/app/12.1.0/grid/network/admin/listener.ora

Now stop and start the LISTENER using srvctl command

[oracle@SDBSRV1 ~]$ grid_env
[oracle@SDBSRV1 ~]$ srvctl stop listener -listener LISTENER
[oracle@SDBSRV1 ~]$ srvctl start listener -listener LISTENER

[oracle@SDBSRV2 ~]$ grid_env
[oracle@SDBSRV2 ~]$ srvctl stop listener -listener LISTENER
[oracle@SDBSRV2 ~]$ srvctl start listener -listener LISTENER

Create physical standby database


[oracle@SDBSRV1 ~]$ db_env
[oracle@SDBSRV1 ~]$ sqlplus / as sysdba

SQL> startup nomount pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora' 
SQL> exit

Log in to the Primary server pdbsrv1 as the oracle user, connect to both the Primary and Standby databases as shown below, and run the RMAN active database duplication command.

[oracle@PDBSRV1 ~]$ rman target sys@PDBRAC auxiliary sys@SDBRAC

target database Password:
connected to target database: PDBRAC (DBID=2357433135)
auxiliary database Password:
connected to auxiliary database: PDBRAC (not mounted)

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;
RMAN> exit

Now that the Standby database is created, we first need to check whether Redo Apply is working before proceeding with the next steps. Start Redo Apply using the command below.

[oracle@SDBSRV1 ~]$ db_env
[oracle@SDBSRV1 ~]$ sqlplus / as sysdba

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

The above command starts the recovery process using the standby logfiles that the primary is writing redo to. The DISCONNECT FROM SESSION clause runs the recovery in the background and returns control to the SQL prompt once the command completes.

To verify that Redo Apply is working, run the query below to check the status of the different processes.

SQL> select PROCESS, PID, STATUS, THREAD#, SEQUENCE# from v$managed_standby;

PROCESS   PID                      STATUS          THREAD#  SEQUENCE#
--------- ------------------------ ------------ ---------- ----------
ARCH      27871                    CONNECTED             0          0
ARCH      27873                    CONNECTED             0          0
ARCH      27875                    CONNECTED             0          0
ARCH      27877                    CLOSING               2         52
RFS       7084                     IDLE                  0          0
RFS       7064                     IDLE                  2         53
RFS       7080                     IDLE                  0          0
RFS       7082                     IDLE                  0          0
RFS       7122                     IDLE                  0          0
RFS       7120                     IDLE                  1         76
RFS       7136                     IDLE                  0          0
RFS       7138                     IDLE                  0          0
MRP0      14050                    APPLYING_LOG          2         53


To check whether the Primary and Standby databases are in sync, execute the queries below.

On Primary:

SQL> select THREAD#, max(SEQUENCE#) from v$log_history group by thread#;

   THREAD# MAX(SEQUENCE#)

---------- --------------
         1             78
         2             53

On Standby:

SQL> select max(sequence#), thread# from v$archived_log where applied='YES' group by thread#;

MAX(SEQUENCE#)    THREAD#
-------------- ----------
            78          1
            52          2
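If the two results ever disagree by more than the log currently being applied, you can also check for an archive log gap directly on the standby; v$archive_gap is a standard Data Guard view, and an empty result means there is no gap:

SQL> select * from v$archive_gap;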


Create a new pfile from the spfile:

[oracle@SDBSRV1 ~]$ db_env
[oracle@SDBSRV1 ~]$ sqlplus / as sysdba

SQL> create pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora' from spfile;

SQL> shutdown immediate;
SQL> exit

Now remove the static listener entry that we added earlier to the listener.ora file on standby nodes sdbsrv1 and sdbsrv2. Save the changes and restart the local listener.

[oracle@SDBSRV1 ~]$  cp -p /u01/app/12.1.0/grid/network/admin/listener.ora.bkp /u01/app/12.1.0/grid/network/admin/listener.ora

[oracle@SDBSRV1 ~]$ scp  /u01/app/12.1.0/grid/network/admin/listener.ora sdbsrv2:/u01/app/12.1.0/grid/network/admin/listener.ora

Now start the standby database using the newly created pfile. If everything is correct, the instance should start.

[oracle@SDBSRV1 ~]$ db_env
[oracle@SDBSRV1 ~]$ sqlplus / as sysdba

SQL> startup nomount pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora';
ORACLE instance started.

Total System Global Area 1358954496 bytes
Fixed Size                  2924208 bytes
Variable Size             469762384 bytes
Database Buffers          872415232 bytes
Redo Buffers               13852672 bytes
SQL>

SQL> alter database mount standby database;

Database altered.

SQL>

Now that the standby database has been started with the cluster parameters enabled, we will create the spfile in a central location on the ASM diskgroup.

SQL> create spfile='+DATA/SDBRAC/spfileSDBRAC.ora' from pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initSDBRAC.ora';

SQL> shutdown immediate;
SQL> exit

Now we need to check whether the standby database starts using the new spfile we created on the ASM diskgroup. The database should already be shut down from the previous step; if not, shut it down before proceeding with the steps below.

Rename the old pfile and spfile in $ORACLE_HOME/dbs directory.

[oracle@SDBSRV1 ~]$ cd $ORACLE_HOME/dbs

[oracle@SDBSRV1 ~]$ mv initSDBRAC.ora initSDBRAC.ora.orig
[oracle@SDBSRV1 ~]$ mv spfileSDBRAC.ora spfileSDBRAC.ora.orig

Now create the below initSDBRAC1.ora file on sdbsrv1 and initSDBRAC2.ora file on sdbsrv2 under $ORACLE_HOME/dbs with the spfile entry so that the instance can start with the newly created spfile.

[oracle@SDBSRV1 ~]$ cd $ORACLE_HOME/dbs

[oracle@SDBSRV1 ~]$ vi initSDBRAC1.ora
spfile='+DATA/SDBRAC/spfileSDBRAC.ora'

Save and close

Copy initSDBRAC1.ora to sdbsrv2 as $ORACLE_HOME/dbs/initSDBRAC2.ora


[oracle@SDBSRV1 ~]$ scp -p $ORACLE_HOME/dbs/initSDBRAC1.ora sdbsrv2:$ORACLE_HOME/dbs/initSDBRAC2.ora

Now start the database on sdbsrv1

[oracle@SDBSRV1 ~]$ db_env
[oracle@SDBSRV1 ~]$ sqlplus / as sysdba

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1358954496 bytes
Fixed Size                  2924208 bytes
Variable Size             469762384 bytes
Database Buffers          872415232 bytes
Redo Buffers               13852672 bytes
Database mounted.

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
SDBRAC    MOUNTED

SQL> show parameter spfile;

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------------------
spfile                               string      +DATA/SDBRAC/spfileSDBRAC.ora
SQL> exit


Now that the database has been started using the spfile in the shared location, we will add the database to the cluster. Execute the commands below to add the database and its instances to the cluster configuration.

[oracle@SDBSRV1 ~]$ srvctl add database -db SDBRAC -oraclehome $ORACLE_HOME -dbtype RAC -spfile +DATA/SDBRAC/spfileSDBRAC.ora -role PHYSICAL_STANDBY -startoption MOUNT -stopoption IMMEDIATE -dbname PDBRAC -diskgroup DATA

[oracle@SDBSRV1 ~]$ srvctl add instance -db SDBRAC -i SDBRAC1 -n sdbsrv1

[oracle@SDBSRV1 ~]$ srvctl add instance -db SDBRAC -i SDBRAC2 -n sdbsrv2

[oracle@SDBSRV1 ~]$ srvctl config database -d SDBRAC

Database unique name: SDBRAC
Database name: PDBRAC
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/SDBRAC/spfileSDBRAC.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: SDBRAC
Database instances: SDBRAC1,SDBRAC2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed


From the Primary server pdbsrv1, copy the password file again to the Standby server sdbsrv1.

[oracle@PDBSRV1 ~]$ scp -p /u01/app/oracle/backup/orapwpdbrac sdbsrv1:$ORACLE_HOME/dbs/orapwsdbrac

From the Standby server sdbsrv1, copy the password file to the ASM diskgroup as shown below.

[oracle@SDBSRV1 ~]$ grid_env
[oracle@SDBSRV1 ~]$ asmcmd
ASMCMD>
ASMCMD> pwcopy /u01/app/oracle/product/12.1.0/db_1/dbs/orapwsdbrac +DATA/SDBRAC/
copying /u01/app/oracle/product/12.1.0/db_1/dbs/orapwsdbrac -> +DATA/SDBRAC/orapwsdbrac


Add the password file location in database configuration using srvctl command.

[oracle@SDBSRV1 ~]$ db_env
[oracle@SDBSRV1 ~]$ srvctl modify database -d SDBRAC -pwfile +DATA/SDBRAC/orapwsdbrac


Start the Standby RAC database. Before starting it, shut down the already running instance.

[oracle@SDBSRV1 ~]$ db_env
[oracle@SDBSRV1 ~]$ sqlplus / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> exit

Now start the database using the following command.

[oracle@SDBSRV1 ~]$ srvctl start database -d SDBRAC
[oracle@SDBSRV1 ~]$ srvctl status database -d SDBRAC

Instance SDBRAC1 is running on node sdbsrv1
Instance SDBRAC2 is running on node sdbsrv2


Now that the single-instance standby has been converted to a standby RAC database, the final step is to start the recovery (MRP) process. Start the recovery on the Standby using the command below.

[oracle@SDBSRV1 ~]$ sqlplus / as sysdba

SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> exit

Add an entry similar to the one below at the end of the listener.ora file on sdbsrv1 and sdbsrv2. It is required for the Data Guard Broker configuration.

[oracle@SDBSRV1 ~]$ vi /u01/app/12.1.0/grid/network/admin/listener.ora 

SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=SDBRAC1)(GLOBAL_DBNAME=SDBRAC_DGMGRL)(ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1)))
Save and close
[oracle@SDBSRV1 ~]$ srvctl stop listener -listener LISTENER
[oracle@SDBSRV1 ~]$ srvctl start listener -listener LISTENER

[oracle@SDBSRV2 ~]$ vi /u01/app/12.1.0/grid/network/admin/listener.ora 
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=SDBRAC2)(GLOBAL_DBNAME=SDBRAC_DGMGRL)(ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1)))

Save and close


[oracle@SDBSRV2 ~]$ srvctl stop listener -listener LISTENER
[oracle@SDBSRV2 ~]$ srvctl start listener -listener LISTENER


That's it, we have completed the RAC to RAC Data Guard configuration in 12c, but we are not finished yet.



Data Guard Broker Configuration 12c

To configure the Data Guard Broker, we need to use the dgmgrl command-line interface.
Perform the steps below to configure the Data Guard Broker.

Since our Primary and Standby databases are RAC, we will change the default location of DG Broker files to a centralized location so that all nodes can access them.

Log in as the oracle user on Primary node pdbsrv1 and execute the commands below.

[oracle@PDBSRV1 ~]$ grid_env
[oracle@PDBSRV1 ~]$ asmcmd mkdir DATA/PDBRAC/DGBROKERCONFIGFILE

[oracle@PDBSRV1 ~]$ db_env
[oracle@PDBSRV1 ~]$ sqlplus / as sysdba

SQL> show parameter dg_broker_config

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      /u01/app/oracle/products/12.1.
                                                 0/db/dbs/dr1pdbrac.dat
dg_broker_config_file2               string      /u01/app/oracle/products/12.1.
                                                 0/db/dbs/dr2pdbrac.dat

SQL> alter system set dg_broker_config_file1='+DATA/PDBRAC/DGBROKERCONFIGFILE/dr1pdbrac.dat';
SQL> alter system set dg_broker_config_file2='+DATA/PDBRAC/DGBROKERCONFIGFILE/dr2pdbrac.dat';

SQL> alter system set dg_broker_start=TRUE;

SQL> alter system set LOG_ARCHIVE_DEST_2='' scope=both;

SQL> exit


Similarly, change the settings on the Standby database server.

[oracle@SDBSRV1 ~]$ grid_env
[oracle@SDBSRV1 ~]$ asmcmd mkdir DATA/SDBRAC/DGBROKERCONFIGFILE

[oracle@SDBSRV1 ~]$ db_env
[oracle@SDBSRV1 ~]$ sqlplus / as sysdba

SQL> alter system set dg_broker_config_file1='+DATA/SDBRAC/DGBROKERCONFIGFILE/dr1sdbrac.dat';
SQL> alter system set dg_broker_config_file2='+DATA/SDBRAC/DGBROKERCONFIGFILE/dr2sdbrac.dat';

SQL> alter system set dg_broker_start=TRUE;

SQL> alter system set LOG_ARCHIVE_DEST_2='' scope=both;

SQL> exit


On the primary server, log in to the command-line interface using dgmgrl and register the primary database in the broker configuration.

[oracle@PDBSRV1 ~]$ dgmgrl

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/password@PDBRAC
Connected as SYSDBA.
DGMGRL>

DGMGRL> CREATE CONFIGURATION dg_config AS PRIMARY DATABASE IS PDBRAC CONNECT IDENTIFIER IS PDBRAC;
Configuration "dg_config" created with primary database "PDBRAC"

Now add the standby database in broker configuration.

DGMGRL> ADD DATABASE SDBRAC AS CONNECT IDENTIFIER IS SDBRAC MAINTAINED AS PHYSICAL;
Database "SDBRAC" added

Now we need to enable the broker configuration and check if the configuration is enabled successfully or not.

DGMGRL> ENABLE CONFIGURATION;
Enabled.

DGMGRL> show configuration;
Configuration - dg_config
  Protection Mode: MaxPerformance
  Members:
  pdbrac  - Primary database
    sdbrac - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Note: If you observe "ORA-16629: database reports a different protection level from the protection mode", perform the following steps.

DGMGRL> edit configuration set protection mode as MAXPERFORMANCE;
Succeeded.

DGMGRL> show configuration;
Configuration - dg_config
Protection Mode: MaxPerformance
Databases:
pdbrac - Primary database
sdbrac     - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS



Once the broker configuration is enabled, the MRP process should start on the Standby database server. You can check using the command below.

DGMGRL> show database sdbrac

Database - sdbrac

  Role:               PHYSICAL STANDBY

  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 39.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    sdbrac1 (apply instance)
    sdbrac2

Database Status:
SUCCESS

The output of the above command shows that the MRP process has started on instance sdbrac1. You can log in to standby node sdbsrv1 and check whether MRP is running as shown below.

[oracle@SDBSRV1 ~]$ ps -ef | grep mrp
oracle   26667     1  0 15:17 ?        00:00:00 ora_mrp0_sdbrac1
oracle   27826 20926  0 15:21 pts/1    00:00:00 /bin/bash -c ps -ef | grep mrp


Now that the MRP process is running, log in to both the Primary and Standby databases and check whether the logs are in sync.

Below are some extra commands you can use to check the status of the databases.

DGMGRL> VALIDATE DATABASE pdbrac;

  Database Role:    Primary database


  Ready for Switchover:  Yes


  Flashback Database Status:

    pdbrac:  ON

DGMGRL> VALIDATE DATABASE sdbrac;

  Database Role:     Physical standby database

  Primary Database:  pdbrac

  Ready for Switchover:  Yes

  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:

    pdbrac:  ON
    sdbrac:  Off
 

Perform the switchover from the primary database (PDBRAC) to the physical standby database (SDBRAC) using the DGMGRL prompt.

DGMGRL> switchover to sdbrac;
Performing switchover NOW, please wait...
Operation requires a connection to instance "SDBRAC1" on database "sdbrac"
Connecting to instance "SDBRAC1"...
Connected as SYSDBA.
New primary database "sdbrac" is opening...
Operation requires startup of instance "PDBRAC2" on database "pdbrac"
Starting instance "PDBRAC2"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "sdbrac"


 DGMGRL> show configuration;

Configuration - dg_config

  Protection Mode: MaxPerformance
  Databases:
  sdbrac - Primary database
    pdbrac - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> exit
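If you want to return the databases to their original roles, the switchover works the same way in the opposite direction; a sketch following the exact pattern above (connect to the current primary first):

DGMGRL> connect sys/password@SDBRAC
DGMGRL> switchover to pdbrac;
DGMGRL> show configuration;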
 


That's it, the DG Broker configuration is complete.




 

Conclusion

In this article, you have learned how to install and configure Oracle Grid Infrastructure 12c and Oracle Database 12c in an environment with Primary and Disaster Recovery (DR) sites, including how to configure Data Guard and the Data Guard Broker between the Primary site and the DR site for high availability.

How To Perform Security Audits using Lynis


Lynis is a host-based, open-source security auditing application that can evaluate the security profile and posture of Linux and other UNIX-like operating systems.



This article will guide you through the steps to install Lynis and use it to perform a security audit of your Ubuntu 16.04 server. Then you'll explore the results of a sample audit, and configure Lynis to skip tests that aren't relevant to your needs.

Prerequisites

To follow this guide, you'll need:
  • One Ubuntu 16.04 server, configured with a non-root user with sudo privileges and a firewall.

Installing Lynis on Your Server

There are several ways to install Lynis. You can compile it from source, download and copy the binary to an appropriate location on the system, or you can install it using the package manager. Using the package manager is the easiest way to install Lynis and keep it updated, so that's the method we'll use.

However, on Ubuntu 16.04, the version available from the repository isn't the most recent version. In order to have access to the very latest features, we'll install Lynis from the project's official repository.

Lynis's software repository uses the HTTPS protocol, so we'll need to make sure that HTTPS support for the package manager is installed. Use the following command to check:

dpkg -s apt-transport-https | grep -i status

If it is installed, the output of that command should be:


Output

Status: install ok installed

If the output says it is not installed, install it using sudo apt-get install apt-transport-https.

With the lone dependency now installed, we'll install Lynis. To begin that process, add the repository's key:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C80E383C3DE9F082E01391A0366C67DE91CA5D5F

You'll see the following output, indicating the key was added successfully:



Output

Executing: /tmp/tmp.AnVzwb6Mq8/gpg.1.sh --keyserver
keyserver.ubuntu.com
--recv-keys
C80E383C3DE9F082E01391A0366C67DE91CA5D5F
gpg: requesting key 91CA5D5F from hkp server keyserver.ubuntu.com
gpg: key 91CA5D5F: public key "CISOfy Software (signed software packages) " imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)

Then add the Lynis repository to the list of those available to the package manager:

sudo add-apt-repository "deb [arch=amd64] https://packages.cisofy.com/community/lynis/deb/ xenial main"

To make the packages in the newly-added repository available to the system, update the package database:

sudo apt-get update

Finally, install Lynis:

sudo apt-get install lynis

After the installation has completed, you should have access to the lynis command and its sub-commands. Let's look at how to use Lynis next.

Performing an Audit

With the installation completed, you can now use Lynis to perform security audits of your system. Let's start by viewing a list of actions you can perform with Lynis. Execute the following command:

lynis show commands

You'll see the following output:



Output

Commands:
lynis audit
lynis configure
lynis show
lynis update
lynis upload-only

Lynis audits are made possible using profiles, which are like configuration files with settings that control how Lynis conducts an audit. View the settings for the default profile:

lynis show settings

You'll see output like the following:


Output

# Colored screen output
colors=1

# Compressed uploads
compressed-uploads=0

# Use non-zero exit code if one or more warnings were found
error-on-warnings=0

...

# Upload server (ip or hostname)
upload-server=[not configured]

# Data upload after scanning
upload=no

# Verbose output
verbose=0

# Add --brief to hide descriptions, --configured-only to show configured items only, or --nocolors to remove colors

It's always a good idea to verify whether a new version is available before performing an audit. This way you'll get the most up-to-date suggestions and information. Issue the following command to check for updates:

lynis update info

The output should be similar to the following, which shows that the version of Lynis is the most recent:


Output

== Lynis ==

Version : 2.4.8
Status : Up-to-date
Release date : 2017-03-29
Update location : https://cisofy.com/lynis/


2007-2017, CISOfy - https://cisofy.com/lynis/

Alternatively, you could type lynis update check, which generates the following one-line output:



Output

status=up-to-date

If the version requires an update, use your package manager to perform the update.

To run an audit of your system, use the lynis audit system command. You can run Lynis in privileged and non-privileged (pentest) mode. In the latter mode, some tests that require root privileges are skipped. As a result, you should run your audit in privileged mode with sudo. Execute this command to perform your first audit:

sudo lynis audit system

After authenticating, Lynis will run its tests and stream the results to your screen. A Lynis audit typically takes a minute or less.

When Lynis performs an audit, it goes through a number of tests, divided into categories. After each audit, test results, debug information, and suggestions for hardening the system are written to standard output (the screen). More detailed information is logged to /var/log/lynis.log, while report data is saved to /var/log/lynis-report.dat. The report data contains general information about the server and the application itself, so the file you'll need to pay attention to is the log file. The log file is purged (overwritten) on each audit, so results from a previous audit are not saved.
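Because the report data is plain key=value text, you can pull just the warnings and suggestions out of it with grep; this sketch assumes the warning[] and suggestion[] keys used by recent Lynis report files:

sudo grep -E '^(warning|suggestion)\[\]' /var/log/lynis-report.dat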

Once the audit is complete, you'll review the results, warnings, and suggestions, and then implement any of the relevant suggestions.

Let's look at the results of a Lynis audit performed on the machine used to write this tutorial. The results you see on your audit may be different, but you should still be able to follow along.

The first significant part of a Lynis audit output is purely informational. It tells you the result of every test, grouped by category. The information takes the form of keywords, like NONE, WEAK, DONE, FOUND, NOT_FOUND, OK, and WARNING.


Output

[+] Boot and services
------------------------------------
- Service Manager [ systemd ]
- Checking UEFI boot [ DISABLED ]
- Checking presence GRUB [ OK ]
- Checking presence GRUB2 [ FOUND ]
- Checking for password protection [ WARNING ]

..

[+] File systems
------------------------------------
- Checking mount points
- Checking /home mount point [ SUGGESTION ]
- Checking /tmp mount point [ SUGGESTION ]
- Checking /var mount point [ OK ]
- Query swap partitions (fstab) [ NONE ]
- Testing swap partitions [ OK ]
- Testing /proc mount (hidepid) [ SUGGESTION ]
- Checking for old files in /tmp [ OK ]
- Checking /tmp sticky bit [ OK ]
- ACL support root file system [ ENABLED ]
- Mount options of / [ OK ]
- Checking Locate database [ FOUND ]
- Disable kernel support of some filesystems
- Discovered kernel modules: udf

...

[+] Hardening
------------------------------------
- Installed compiler(s) [ FOUND ]
- Installed malware scanner [ NOT FOUND ]
- Installed malware scanner [ NOT FOUND ]

...

[+] Printers and Spools
------------------------------------
- Checking cups daemon [ NOT FOUND ]
- Checking lp daemon [ NOT RUNNING ]

Though Lynis performs more than 200 tests out of the box, not all are necessary for your server. How can you tell which tests are necessary and which are not? That's where some knowledge about what should or should not be running on a server comes into play. For example, if you check the results section of a typical Lynis audit, you'll find two tests under the Printers and Spools category:


Output

[+] Printers and Spools
------------------------------------
- Checking cups daemon [ NOT FOUND ]
- Checking lp daemon [ NOT RUNNING ]

Are you actually running a print server on an Ubuntu 16.04 server? Unless you're running a cloud-based print server, you don't need Lynis to be running that test every time.

While that's a perfect example of a test you can skip, others are not so obvious. Take this partial results section, for example:


Output

[+] Insecure services
------------------------------------
- Checking inetd status [ NOT ACTIVE ]

This output says that inetd is not active, but that's expected on an Ubuntu 16.04 server, because Ubuntu replaced inetd with systemd. Knowing that, you may tag that test as one that Lynis should not be performing as part of an audit on your server.

As you review each of the test results, you'll come up with a pretty good list of superfluous tests. With that list in hand, you can then customize Lynis to ignore them in future audits. You'll learn how to do that in the Customizing Lynis Security Audits section below.

In the next sections, we'll go through the different parts of a Lynis audit output so you'll have a better understanding of how to properly audit your system with Lynis. Let's look at how to deal with warnings issued by Lynis first.

Fixing Lynis Audit Warnings

A Lynis audit output does not always carry a warnings section, but when it does, you'll know how to fix the issue(s) raised after reading this section.

Warnings are listed after the results section. Each warning starts with the warning text itself, with the test that generated the warning on the same line in brackets. The next line will contain a suggested solution, if one exists. The last line is a security control URL where you may find some guidance on the warning.

Unfortunately, the URL does not always offer an explanation, so you may need to do some further research.

The following output comes from the warnings section of a Lynis audit performed on the server used for this article. Let's walk through each warning and look at how to resolve or fix it:


Output

Warnings (3):
----------------------------
! Version of Lynis is very old and should be updated [LYNIS]
https://cisofy.com/controls/LYNIS/

! Reboot of system is most likely needed [KRNL-5830]
- Solution : reboot
https://cisofy.com/controls/KRNL-5830/

! Found one or more vulnerable packages. [PKGS-7392]
https://cisofy.com/controls/PKGS-7392/

The first warning says that Lynis needs to be updated. That also means this audit used an older version of Lynis, so the results might not be complete. This could have been avoided if we'd performed a basic version check before running the audit, as shown earlier. The fix for this one is easy: update Lynis.

The second warning indicates that the server needs to be rebooted. That's probably because a system update that involved a kernel upgrade was performed recently. The solution here is to reboot the system.

When in doubt about any warning, or just about any test result, you can get more information about the test by querying Lynis for the test id. The command to accomplish that takes this form:

sudo lynis show details test-id

So for the second warning, which has the test id KRNL-5830, we could run this command:

sudo lynis show details KRNL-5830

The output for that particular test follows. This gives you an idea of the process Lynis walks through for each test it performs. From this output, Lynis even gives specific information about the item that gave rise to the warning:


Output

2017-03-21 01:50:03 Performing test ID KRNL-5830 (Checking if system is running on the latest installed kernel)
2017-03-21 01:50:04 Test: Checking presence /var/run/reboot-required.pkgs
2017-03-21 01:50:04 Result: file /var/run/reboot-required.pkgs exists
2017-03-21 01:50:04 Result: reboot is needed, related to 5 packages
2017-03-21 01:50:04 Package: 5
2017-03-21 01:50:04 Result: /boot exists, performing more tests from here
2017-03-21 01:50:04 Result: /boot/vmlinuz not on disk, trying to find /boot/vmlinuz*
2017-03-21 01:50:04 Result: using 4.4.0.64 as my kernel version (stripped)
2017-03-21 01:50:04 Result: found /boot/vmlinuz-4.4.0-64-generic
2017-03-21 01:50:04 Result: found /boot/vmlinuz-4.4.0-65-generic
2017-03-21 01:50:04 Result: found /boot/vmlinuz-4.4.0-66-generic
2017-03-21 01:50:04 Action: checking relevant kernels
2017-03-21 01:50:04 Output: 4.4.0.64 4.4.0.65 4.4.0.66
2017-03-21 01:50:04 Result: Found 4.4.0.64 (= our kernel)
2017-03-21 01:50:04 Result: found a kernel (4.4.0.65) later than running one (4.4.0.64)
2017-03-21 01:50:04 Result: Found 4.4.0.65
2017-03-21 01:50:04 Result: found a kernel (4.4.0.66) later than running one (4.4.0.64)
2017-03-21 01:50:04 Result: Found 4.4.0.66
2017-03-21 01:50:04 Warning: Reboot of system is most likely needed [test:KRNL-5830] [details:] [solution:text:reboot]
2017-03-21 01:50:04 Hardening: assigned partial number of hardening points (0 of 5). Currently having 7 points (out of 14)
2017-03-21 01:50:04 Checking permissions of /usr/share/lynis/include/tests_memory_processes
2017-03-21 01:50:04 File permissions are OK
2017-03-21 01:50:04 ===---------------------------------------------------------------===

For the third warning, PKGS-7392, which is about vulnerable packages, we'd run this command:

sudo lynis show details PKGS-7392

The output gives us more information regarding the packages that need to be updated:


Output

2017-03-21 01:39:53 Performing test ID PKGS-7392 (Check for Debian/Ubuntu security updates)
2017-03-21 01:39:53 Action: updating repository with apt-get
2017-03-21 01:40:03 Result: apt-get finished
2017-03-21 01:40:03 Test: Checking if /usr/lib/update-notifier/apt-check exists
2017-03-21 01:40:03 Result: found /usr/lib/update-notifier/apt-check
2017-03-21 01:40:03 Test: checking if any of the updates contain security updates
2017-03-21 01:40:04 Result: found 7 security updates via apt-check
2017-03-21 01:40:04 Hardening: assigned partial number of hardening points (0 of 25). Currently having 96 points (out of 149)
2017-03-21 01:40:05 Result: found vulnerable package(s) via apt-get (-security channel)
2017-03-21 01:40:05 Found vulnerable package: libc-bin
2017-03-21 01:40:05 Found vulnerable package: libc-dev-bin
2017-03-21 01:40:05 Found vulnerable package: libc6
2017-03-21 01:40:05 Found vulnerable package: libc6-dev
2017-03-21 01:40:05 Found vulnerable package: libfreetype6
2017-03-21 01:40:05 Found vulnerable package: locales
2017-03-21 01:40:05 Found vulnerable package: multiarch-support
2017-03-21 01:40:05 Warning: Found one or more vulnerable packages. [test:PKGS-7392] [details:-] [solution:-]
2017-03-21 01:40:05 Suggestion: Update your system with apt-get update, apt-get upgrade, apt-get dist-upgrade and/or unattended-upgrades [test:PKGS-7392] [details:-] [solution:-]
2017-03-21 01:40:05 ===---------------------------------------------------------------===

The solution for this is to update the package database and update the system.

After fixing the item that led to a warning, you should run the audit again. Subsequent audits should be free of the same warning, although new warnings could show up. In that case, repeat the process shown in this step and fix the warnings.

Now that you know how to read and fix warnings generated by Lynis, let's look at how to implement the suggestions that Lynis offers.

Implementing Lynis Audit Suggestions

After the warnings section, you'll see a series of suggestions that, if implemented, can make your server less vulnerable to attacks and malware. In this step, you'll learn how to implement some suggestions generated by Lynis after an audit of a test Ubuntu 16.04 server. The process to do this is identical to the steps in the previous section.

A specific suggestion starts with the suggestion itself, followed by the test ID. Then, depending on the test, the next line will tell you exactly what changes to make in the affected service's configuration file. The last line is a security control URL where you can find more information about the subject.

Here, for example, is a partial suggestion section from a Lynis audit, showing suggestions pertaining to the SSH service:


Output

Suggestions (36):
----------------------------
* Consider hardening SSH configuration [SSH-7408]
- Details : ClientAliveCountMax (3 --> 2)
https://cisofy.com/controls/SSH-7408/

* Consider hardening SSH configuration [SSH-7408]
- Details : PermitRootLogin (YES --> NO)
https://cisofy.com/controls/SSH-7408/

* Consider hardening SSH configuration [SSH-7408]
- Details : Port (22 --> )
https://cisofy.com/controls/SSH-7408/

* Consider hardening SSH configuration [SSH-7408]
- Details : TCPKeepAlive (YES --> NO)
https://cisofy.com/controls/SSH-7408/

* Consider hardening SSH configuration [SSH-7408]
- Details : UsePrivilegeSeparation (YES --> SANDBOX)
https://cisofy.com/controls/SSH-7408/
...

Depending on your environment, all these suggestions are safe to implement. To make that determination, however, you have to know what each directive means. Because these pertain to the SSH server, all changes have to be made in the SSH daemon's configuration file, /etc/ssh/sshd_config. If you have any doubt about any suggestion regarding SSH given by Lynis, look up the directive with man sshd_config. That information is also available online.

One of the suggestions calls for changing the default SSH port from 22. If you make that change, and you have the firewall configured, be sure to insert a rule for SSH access through that new port.

As with the warnings section, you can get more detailed information about a suggestion by querying Lynis for the test id using sudo lynis show details test-id.
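As an illustration, implementing the first few suggestions could look like the excerpt below in /etc/ssh/sshd_config; the values simply mirror the Details lines above, and the restart command assumes Ubuntu 16.04's ssh systemd unit:

ClientAliveCountMax 2
PermitRootLogin no
TCPKeepAlive no
UsePrivilegeSeparation sandbox

Then validate the configuration syntax and restart the daemon:

sudo sshd -t
sudo systemctl restart ssh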

Other suggestions require that you install additional software on your server. Take this one, for example:



Output

* Harden the system by installing at least one malware scanner, to perform periodic file system scans [HRDN-7230]
- Solution : Install a tool like rkhunter, chkrootkit, OSSEC
https://cisofy.com/controls/HRDN-7230/

The suggestion is to install rkhunter, chkrootkit, or OSSEC to satisfy a hardening test (HRDN-7230). OSSEC is a host-based intrusion detection system that can generate and send alerts. It's a very good security application that will help with some of the tests performed by Lynis. However, installing OSSEC alone does not cause this particular test to pass; installing chkrootkit finally gets it passing. This is another case where you'll sometimes have to do additional research beyond what Lynis suggests.

Let's look at another example. Here's a suggestion displayed as a result of a file integrity test.


Output

* Install a file integrity tool to monitor changes to critical and sensitive files [FINT-4350]
https://cisofy.com/controls/FINT-4350/

The suggestion given in the security control URL does not mention the OSSEC program mentioned in the previous suggestion, but installing it was enough to pass the test on a subsequent audit. That's because OSSEC is a pretty good file integrity monitoring tool.

You can ignore some suggestions that don't apply to you. Here's an example:



Output

* To decrease the impact of a full /home file system, place /home on a separated partition [FILE-6310]
https://cisofy.com/controls/FILE-6310/

* To decrease the impact of a full /tmp file system, place /tmp on a separated partition [FILE-6310]
https://cisofy.com/controls/FILE-6310/
 
Historically, core Linux file systems like /home, /tmp, /var, and /usr were mounted on separate partitions to minimize the impact on the whole server when one of them ran out of disk space. This isn't something you'll see that often anymore, especially on cloud servers; these file systems are now usually mounted as directories on the same root partition. But if you perform a Lynis audit on such a system, you'll get a couple of suggestions like the ones shown in the preceding output. Unless you're in a position to implement the suggestions, you'll probably want to ignore them and configure Lynis so the tests that generated them are not performed on future audits.

Performing a security audit using Lynis involves more than just fixing warnings and implementing suggestions; it also involves identifying superfluous tests. In the next step, you'll learn how to customize the default profile to ignore such tests.

Customizing Lynis Security Audits

In this section, you'll learn how to customize Lynis so that it runs only those tests that are necessary for your server. Profiles, which govern how audits run, are defined in files with the .prf extension in the /etc/lynis directory. The default profile is aptly named default.prf. You don't edit that default profile directly.

Instead, you add any changes you want to a custom.prf file in the same directory as the profile definition.
Create a new file called /etc/lynis/custom.prf using your text editor:

sudo nano /etc/lynis/custom.prf

Let's use this file to tell Lynis to skip some tests. Here are the tests we want to skip:
  • FILE-6310: Used to check for separation of partitions.
  • HTTP-6622: Used to test for Nginx web server installation.
  • HTTP-6702: Used to check for Apache web server installation. This test and the Nginx test above are performed by default. So if you have Nginx installed and not Apache, you'll want to skip the Apache test.
  • PRNT-2307 and PRNT-2308: Used to check for a print server.
  • TOOL-5002: Used to check for automation tools like Puppet and Salt. If you have no need for such tools on your server, it's OK to skip this test.
  • SSH-7408:tcpkeepalive: Several Lynis tests can be grouped under a single test ID. If there's a test within that test ID that you wish to skip, this is how to specify it.
To ignore a test, pass its test ID to the skip-test directive, one test per line. Add the following to your file /etc/lynis/custom.prf:

# Lines starting with "#" are comments
# Skip a test (one per line)

# This will ignore separation of partitions test
skip-test=FILE-6310

# Is Nginx installed?
skip-test=HTTP-6622

# Is Apache installed?
skip-test=HTTP-6702

# Skip checking print-related services
skip-test=PRNT-2307
skip-test=PRNT-2308

# If a test ID includes more than one test, use this form to ignore a particular one
skip-test=SSH-7408:tcpkeepalive
 
Save and close the file.

The next time you perform an audit, Lynis will skip the tests that match the test IDs you configured in the custom profile. The tests will be omitted from the results section of the audit output, as well as the suggestions section.
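After saving the custom profile, run another audit to confirm that the skipped tests no longer appear:

sudo lynis audit system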

The /etc/lynis/custom.prf file also lets you modify any settings in a profile. To do that, copy the setting from /etc/lynis/default.prf into /etc/lynis/custom.prf and modify it there. You'll rarely need to modify these settings, so focus your effort on finding tests you can skip.

Next, let's take a look at what Lynis calls the hardening index.

Interpreting the Hardening Index

In the lower section of every Lynis audit output, just below the suggestions section, you'll find a section that looks like the following:


Output

Lynis security scan details:

Hardening index : 64 [############ ]
Tests performed : 206
Plugins enabled : 0

This output tells you how many tests were performed, along with a hardening index, a number that Lynis provides to give you a sense of how secure your server is. This number is unique to Lynis. The hardening index will change in relation to the warnings that you fix and the suggestions that you implement. This output, which shows that the system has a hardening index of 64, is from the first Lynis audit on a new Ubuntu 16.04 server.

After fixing the warnings and implementing most of the suggestions, a new audit gave the following output. You can see that the hardening index is considerably higher:



Output

Lynis security scan details:

Hardening index : 86 [################# ]
Tests performed : 205
Plugins enabled : 0

The hardening index is not an accurate assessment of how secure a server is; it's merely a measure of how well the server has been configured (or hardened) based on the tests performed by Lynis. As you've seen, the higher the index, the better. The objective of a Lynis security audit is not just to achieve a high hardening index, but to fix the warnings and suggestions it generates.

Conclusion

In this guide, you installed Lynis, used it to perform a security audit of an Ubuntu 16.04 server, explored how to fix the warnings and suggestions it generates, and how to customize the tests that Lynis performs. It takes a little extra time and effort, but it's worth the investment to make your machine more secure, and Lynis makes that process much easier.

How To Host a Website using Caddy on Ubuntu 16.04


Caddy is a new web server developed with ease of use in mind. It's simple enough to be used as a quick development server and robust enough to be used in production environments.



This article will guide you through the steps to install and configure Caddy. After following the steps in this guide, you will have a simple working website served over HTTP/2 and a secure TLS connection.

Prerequisites

To follow this guide, you will need:
  • One Ubuntu 16.04 server set up, including a sudo non-root user and a firewall.
  • A domain name configured to point to your server. This is necessary for Caddy to obtain an SSL certificate for the website; without using a proper domain name, the website will not be served securely with TLS encryption.

Installing Caddy

The Caddy project provides an installation script that will retrieve and install the Caddy server's binary files. Execute the following command to start the installation:

curl -s https://getcaddy.com | bash 

You can view the script by visiting https://getcaddy.com in your browser or downloading the file with wget or curl before you execute it.
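For example, using a hypothetical filename of getcaddy.sh, you could download, inspect, and then run the script in separate steps:

curl -s https://getcaddy.com -o getcaddy.sh
less getcaddy.sh
bash getcaddy.sh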

During the installation, the script will use sudo to gain administrative privileges in order to put Caddy files in system-wide directories, so it might prompt you for a password.

The command output will look like this:


Caddy installation script output

Downloading Caddy for linux/amd64...
https://caddyserver.com/download/build?os=linux&arch=amd64&arm=&features=
Extracting...
Putting caddy in /usr/local/bin (may require password)
[sudo] password for sammy:
Caddy 0.9.5
Successfully installed

After the script finishes, the Caddy binaries are installed on the server and ready to use. You can verify that Caddy binaries have been put in place by using which to check their location.

which caddy

The command output will say that the Caddy binary can be found in /usr/local/bin/caddy.
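You can also ask the binary to report its version, which should print a version string similar to the one in the installer output above:

caddy -version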

Caddy does not create any system-wide configuration during installation and does not install itself as a service, which means it won't start up automatically during boot. In the next two steps, we'll create the files Caddy needs to function and install its service file.

Setting Up Necessary Directories

Caddy's automatic TLS support and unit file (which we'll install in the next step) expect particular directories and files to exist with specific permissions. We'll create them all in this step.

First, create a directory that will house the main Caddyfile, a configuration file that tells Caddy what websites it should serve and how.

sudo mkdir /etc/caddy

Change the owner of this directory to the root user and its group to www-data so Caddy can read it.

sudo chown -R root:www-data /etc/caddy

In this directory, create an empty Caddyfile which we'll edit later.

sudo touch /etc/caddy/Caddyfile

Create another directory in /etc/ssl. Caddy needs this to store the SSL private keys and certificates that it automatically obtains from Let's Encrypt.

sudo mkdir /etc/ssl/caddy

Caddy needs to be able to write to this directory when it obtains the certificate, so make www-data the owner. You can leave the group as root, unchanged from the default:

sudo chown -R www-data:root /etc/ssl/caddy

Then make sure no one else can read those files by removing all the access rights for others.

sudo chmod 0770 /etc/ssl/caddy

The final directory we need to create is the one where the website itself will be published. We will use /var/www, which is customary and also the default path when using other web servers, like Apache or Nginx.

sudo mkdir /var/www

This directory should be completely owned by www-data.

sudo chown www-data:www-data /var/www
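To double-check the setup before moving on, you can list the directories and confirm that their ownership and permissions match what we configured:

ls -ld /etc/caddy /etc/ssl/caddy /var/www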

You have now prepared the necessary environment for Caddy to run. In the next step, we will configure Caddy as a system service to ensure it starts with system boot and can be managed with systemctl.

Installing Caddy as a System Service

While Caddy does not install itself as a service, the project provides an official systemd unit file. This file does assume the directory structure we set up in the previous step, so make sure your configuration matches.
Download the file from the official Caddy repository. The additional -o parameter to the curl command will save the file in the /etc/systemd/system/ directory and make it visible to systemd.

sudo curl -s https://raw.githubusercontent.com/mholt/caddy/master/dist/init/linux-systemd/caddy.service -o /etc/systemd/system/caddy.service 

Make systemd aware of the new service file.

sudo systemctl daemon-reload

Then, enable Caddy to run on boot.

sudo systemctl enable caddy.service

You can verify that the service has been properly loaded and enabled to start on boot by checking its status.

sudo systemctl status caddy.service

The output should look as follows:



Caddy service status output

caddy.service - Caddy HTTP/2 web server
Loaded: loaded (/etc/systemd/system/caddy.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: https://caddyserver.com/docs

Specifically, it says that the service is loaded and enabled, but it is not yet running. We will not start the server just yet because the configuration is still incomplete.

You have now configured Caddy as a system service which will start automatically on boot without the need to run it manually. Next, we'll allow web traffic through the firewall.

Allowing HTTP and HTTPS Connections

Because Caddy wasn't installed using APT (Ubuntu's package manager), UFW has no way to know how to manage rules for it. We'll add those rules manually here.

Caddy serves websites using HTTP and HTTPS protocols, so we need to allow access to the appropriate ports in order to make Caddy available from the internet.

sudo ufw allow http
sudo ufw allow https

Both commands, when run, will output the following success messages:



UFW output

Rule added
Rule added (v6)


This will allow Caddy to serve websites to visitors freely. In the next step, we will create a sample web page and update the Caddyfile to serve it in order to test the Caddy installation.

Creating a Test Web Page and a Caddyfile

Let's start by creating a very simple HTML page which will display a plain Hello World! message. This command will create an index.html file in the website directory we created earlier with just the one line of text, Hello World!, inside.

echo 'Hello World!' | sudo tee /var/www/index.html

Next, we'll fill out the Caddyfile. The Caddyfile, in its simplest form, consists of one or more server blocks which each define the configuration for a single website. A server block starts with an address definition and is followed by curly braces. Inside the curly braces, you can include configuration directives to apply to that website.

An address definition is specified in the form protocol://host:port. Caddy will assume some defaults by itself if you leave some fields blank. For example, if you specify the protocol but not the port, the latter will be automatically derived (i.e. port 80 is assumed for HTTP, and port 443 is assumed for HTTPS). The rules governing the address format are described in-depth in the official Caddyfile documentation.

Open the Caddyfile you created in Step 2 using nano or your favorite text editor.

sudo nano /etc/caddy/Caddyfile

Paste in the following contents:
 
http:// {
    root /var/www
    gzip
}

Then save the file and exit. Let's explain what this specific Caddyfile does.

Here, we're using http:// for the address definition. This tells Caddy it should bind to port 80 and serve all requests using plain HTTP protocol (without TLS encryption), regardless of the domain name used to connect to the server. This will allow you to access the websites Caddy is hosting using your server's IP address.

Inside the curly braces of our server block, there are two directives:
  • The root directive tells Caddy where the website files are located. In our example, it's /var/www, where we created the test page.
  • The gzip directive tells Caddy to use Gzip compression to make the website faster. It does not need additional configuration.
Once the configuration file is ready, start the Caddy service.

sudo systemctl start caddy

We can now test if the website works. For this you use your server's public IP address. If you do not know your server's IP address, you can get it with curl -4 icanhazip.com. Once you have it, visit http://your_server_ip in your favorite browser to see the Hello World! website.
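You can also test from the command line. Assuming your server's IP is substituted in, curl should return the contents of the index file we created:

curl http://your_server_ip

Output

Hello World!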

This means your Caddy installation is working correctly. In the next step, you will enable a secure connection to your website with Caddy's automatic TLS support.

Configuring Automatic TLS

One of the main features that distinguishes Caddy from other web servers is its ability to automatically request and renew TLS certificates from Let's Encrypt, a free certificate authority (CA). In addition, setting Caddy up to automatically serve websites over a secure connection requires only a one-line change in the Caddyfile.

Caddy takes care of enabling secure HTTPS connections for all configured server blocks and obtaining the necessary certificates automatically, assuming the server block configuration meets some requirements.

In order for TLS to work, the following requirements must be met:
  • Caddy must be able to bind itself to port 443 for HTTPS, and the same port must be accessible from the internet.
  • The protocol must not be set to HTTP, the port must not be set to 80, and TLS must not be explicitly turned off or overridden with other settings (e.g. with the tls directive in the server block).
  • The hostname must be a valid domain name; it must not be empty, set to localhost, or an IP address. This is necessary because Let's Encrypt only issues certificates for valid domain names.
  • Caddy must know the email address that can be used for key recovery with Let's Encrypt.
If you've been following this tutorial, the first requirement is already met. However, the current server block address is configured simply as http://, defining a plain HTTP scheme with no encryption and no domain name. We have also not provided Caddy with the email address that Let's Encrypt requires when requesting a certificate. If the address is not supplied in the configuration, Caddy asks for it during startup. However, because Caddy is installed as a system service, it cannot ask questions during startup, and as a result it will not start properly at all.

To fix this, open the Caddyfile for editing again.

sudo nano /etc/caddy/Caddyfile

First, replace the address definition of http:// with your domain. This removes the insecure connection forced by HTTP and provides a domain name for the TLS certificate. Second, provide Caddy with an email address using the tls directive inside the server block.

The modified Caddyfile should look as follows, with your domain and email address substituted in:
/etc/caddy/Caddyfile
 
example.com {
    root /var/www
    gzip
    tls sammy@example.com
}

Save the file and exit the editor. To apply the changes, restart Caddy.

sudo systemctl restart caddy

Now direct your browser to https://example.com to verify that the changes were applied correctly. If so, you should once again see the Hello World! page. This time you can confirm the website is served over HTTPS by looking for https:// or a lock symbol in the URL bar.
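You can also inspect the response from the command line. With a curl build that supports HTTP/2, the status line will show the protocol in use; substitute your own domain:

curl -I https://example.com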



Conclusion

You have now configured Caddy to properly serve your website over a secure TLS connection. It will automatically obtain and renew certificates from Let's Encrypt, serve your site over a secure connection using the newer HTTP/2 protocol, and reduce loading time by using gzip compression.


How To Install Icinga and Icinga Web on Ubuntu 16.04

 
Icinga is a flexible and powerful open-source monitoring system used to oversee the health of networked hosts and services. It can be used to monitor the load and uptime of a cluster of web workers, free disk space on a storage device, memory consumption on a caching service, and so on. Once properly set up, Icinga can give you an at-a-glance overview of the status of large numbers of hosts and services, as well as notifications, downtime scheduling, and long-term storage of performance data.


This article will guide you through the steps to install the Icinga core, its database backend, and the Icinga Web interface. Lastly, we'll set up email notification so you can receive alerts in your inbox when a service is misbehaving.

Prerequisites

To follow this article, you will need:

Installing Icinga

To get the latest version of Icinga, we first need to add a software repository maintained by the Icinga team. We will then install the software with apt-get and run through a few configuration screens to set up Icinga's database backend.

First, download the Icinga developers' package signing key and add it to the apt system:
curl -sSL https://packages.icinga.com/icinga.key | sudo apt-key add -

This key will be used to automatically verify the integrity of any software we download from the Icinga repository. Now we need to add the repository address to an apt configuration file. Open up the file with your favorite text editor. We'll use nano throughout this tutorial:

sudo nano /etc/apt/sources.list.d/icinga.list 


This will open a new blank text file. Paste in the following line:
 
deb https://packages.icinga.com/ubuntu icinga-xenial main

Save and close the file, then refresh your package cache:

sudo apt-get update

apt-get will now download information from the repository we just added, making the Icinga packages available to install:

sudo apt-get install icinga2 icinga2-ido-mysql

This will install the main Icinga software, along with a database adapter that enables Icinga to put historical data and other information into a MySQL database. You'll be presented with a few configuration screens for the database adapter:
  1. Enable Icinga 2's ido-mysql feature? YES
  2. Configure database for icinga2-ido-mysql with dbconfig-common? YES
  3. You'll then be prompted to set up an Icinga database password. Create a strong password and record it for later. We'll need it when setting up the web interface.
Now we need to actually enable the Icinga database backend. The icinga2 command can enable and disable features on the command line. While we're at it, we'll also enable the command feature, which will eventually let us run manual health checks from the web interface.

sudo icinga2 feature enable ido-mysql command
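You can confirm which features are now enabled with the feature list subcommand; ido-mysql and command should appear in the enabled list:

sudo icinga2 feature list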

Now restart icinga2 to use the new features:

sudo systemctl restart icinga2

And finally, let's check the status of icinga2 to make sure it's running properly:

sudo systemctl status icinga2 

Output
icinga2.service - Icinga host/service/network monitoring system
   Loaded: loaded (/lib/systemd/system/icinga2.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2017-04-20 00:54:55 UTC; 3s ago
  Process: 15354 ExecStartPre=/usr/lib/icinga2/prepare-dirs /usr/lib/icinga2/icinga2 (code=exited, status=0/SUCCESS)
 Main PID: 15416 (icinga2)
    Tasks: 11
   Memory: 7.7M
      CPU: 488ms
. . .





If you see Active: active (running), Icinga is up and running. Now that we've set up the Icinga core system and database backend, it's time to get the web interface up and running.

Installing the Icinga Web Interface

The Icinga core is completely configurable and usable without a web interface, but Icinga Web provides a nice browsable overview of the health of your hosts and services, and allows you to schedule downtime, acknowledge issues, manually trigger health checks, and send notifications, right from your browser.

Let's install Icinga Web with apt-get:

sudo apt-get install icingaweb2

The rest of Icinga Web's setup is done in a web browser, but before we switch over, there's one setting we need to update. Icinga Web needs a timezone to be set for the PHP environment, so let's edit the PHP config file:

sudo nano /etc/php/7.0/apache2/php.ini

We need to find a specific line to update. In nano we can press CTRL-W to bring up a search interface, type in date.timezone, then hit ENTER. The cursor will move to the line we need to update. First, uncomment the line by removing the initial ; semicolon, and then type in your correct timezone.

You can find the correct timezone format in the PHP manual's timezone section. It should look something like this when you're finished:
 
date.timezone = America/New_York

Save and close the file. Restart Apache to update:

sudo systemctl restart apache2

Now it's time to work through Icinga Web's browser-based setup.

Setting up the Icinga Web Interface

Before we switch over to our browser for the web-based setup process, we need to create a setup token. This is a key we generate on the command line that authorizes us to use the web setup tool. We create this key with the icingacli command:

sudo icingacli setup token create

A short token will be printed:



Output
1558c2c0ec4572ab


Copy the token to your clipboard, then switch to your browser and load the Icinga Web address. By default this is your server’s domain name or IP address followed by /icingaweb2:
 
https://icinga-master.example.com/icingaweb2

 


You'll be presented with a configuration screen. Paste in the token you copied to your clipboard, and press Next to begin the process. There are many pages of options to go through. We'll step through them one at a time.

Module Setup

On the second page, you'll have the option to enable some extra modules for the web interface. We can safely accept the default of only enabling the Monitoring module. Click Next to continue.

Environment Status

The third page shows the status of our PHP environment. You shouldn't see any red boxes, which would indicate an issue or misconfiguration. You may see some yellow boxes mentioning PostgreSQL modules being missing. We can safely ignore these, as we're using MySQL, not PostgreSQL. Click Next to continue.

Icinga Web Authentication

The fourth page lets us choose how we want to authenticate Icinga Web users. If you wanted to integrate with an LDAP service for authentication, this would be the place to choose that. We'll use the default, Database, to store users in our MySQL database. Click Next to continue.

User Database Setup

The fifth page asks us to set up a database to store the user data. This is separate from the database we previously set up during the command line install.

Most of the defaults are fine, but we also need to choose a database name and user/password combination:
  1. Resource Name: icingaweb_db
  2. Database Type: MySQL
  3. Host: localhost
  4. Port:
  5. Database Name: icingaweb_users
  6. Username: icingaweb
  7. Password: set and record a password
  8. Character Set:
  9. Persistent: leave unchecked
  10. Use SSL: leave unchecked
Hit Next to continue.

Create User Database

The next page will say that your database doesn't exist and you don't have the credentials to create it. Enter root for the username, type in the MySQL root password and click Next to create the Icinga Web database and user.

Name the Authentication Provider

Now we need to name the authentication backend we just created. The default icingaweb2 is fine. Click Next.

Create Admin Account

Now that we've set up our user database, we can create our first Icinga Web administrative account. Choose a username and password and click Next to continue.

Preferences and Log Storage

Next we're presented with options on how to store user preferences and logs. The defaults are fine and will store preferences in the database while logging to syslog. Hit Next to continue.

Configuration Review

We are presented with a page to review all of our configurations. Click Next to confirm the configuration details and move on to configuring the monitoring module.

Introduction to Monitoring Configuration

Now we start configuring the actual monitoring module for Icinga Web. Click Next to start.

Select Monitoring Backend

First up, we select our monitoring backend. The default name of icinga and type of IDO are fine. This indicates that Icinga Web will retrieve information from the ido-mysql database we configured earlier when installing things on the command line.

Set up Monitoring Database

We need to enter the connection details for the ido-mysql database. We created this password during installation.

This page has all the same options as the user-database setup screen:
  1. Resource Name: icinga_ido
  2. Database Type: MySQL
  3. Host: localhost
  4. Port:
  5. Database Name: icinga2
  6. Username: icinga2
  7. Password: password you created during installation
  8. Character Set:
  9. Persistent: unchecked
  10. Use SSL: unchecked
Click Next to continue.

Select Command Transport Method

Next is a Command Transport prompt. This lets us specify how Icinga Web will pass commands to Icinga when we manually run health checks in the web interface. The default of Local Command File is fine and will work with the command feature we enabled back in Step 1. Click Next to continue.

Set Up Monitoring Interface Security

This lets you specify data that should be masked in the web interface, to prevent any potential onlookers from seeing passwords and other sensitive information. The defaults are fine. Hit Next to continue.

Monitoring Module Configuration Summary

Once again, we're presented with a summary of our configuration. Hit Finish to finish the setup of Icinga Web. A Congratulations! message will load.

Click Login to Icinga Web 2 and log in with your administrator username and password.

 

The main interface of Icinga Web will load. Explore a little and familiarize yourself with the interface. If your server has no swap space set up, you may see a red Critical Error box. We can ignore this for now, or you can Acknowledge the issue by clicking the red box, selecting Acknowledge from the right-hand column, filling out a comment, and finally clicking the Acknowledge problem button.

Now that we've finished setting up Icinga and Icinga Web, let's set up email notifications.

Setting up Email

Monitoring isn't too helpful if you can't receive alerts when something goes wrong. Icinga's default config has some scripts to email an administrator, but we need to set up email on our server before they'll work. The simplest way to do that is to use a program called ssmtp to route all the server's mail through a standard SMTP server.

First, install ssmtp and some helper mail utilities:

sudo apt-get install ssmtp mailutils

And now we edit the ssmtp configuration file with our SMTP details. These should be provided by your ISP, email provider, or IT department. You'll need a username, password, and the address of your SMTP server:

sudo nano /etc/ssmtp/ssmtp.conf

There will be some existing configuration in the file. Delete it and replace it with this very basic setup that should work with most SMTP servers:

mailhub=mail.example.com:465
UseTLS=yes
FromLineOverride=yes
AuthUser=smtp_username
AuthPass=smtp_password

Save and close the file. To test the connection, use the mail command:

echo "hello world" | mail -s "test subject"sammy@example.com

You should see an email in your inbox shortly. Now we need to update a few settings for Icinga to send mail.

Setting up and Testing Notifications

To get email notifications working, update the email address Icinga is sending to:

sudo nano /etc/icinga2/conf.d/users.conf

Change the email line to the address you'd like to receive notifications at:

. . .
email = "sammy@example.com"
. . .

Restart Icinga one last time:

sudo systemctl restart icinga2

The icinga-master host is already configured to send notifications when problems arise. Let's cause a problem and see what happens. We'll use a command called stress to increase the system's load in order to trigger a warning.

Install stress:

sudo apt-get install stress

stress can manipulate load, disk IO, memory, and other system metrics. The Icinga default configuration will trigger a warning when the system's load is over five. Let's cause that now:

stress --cpu 6
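While stress runs, you can watch the load climb from a second terminal using standard tools:

watch -n 5 uptime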

Switch back to the Icinga Web interface and you'll see the load metric slowly rise. After a few checks it will enter a soft Warning state. Soft means that the check has to fail a few more times before it's considered a hard state, at which time notifications will be sent. This avoids sending notifications for transient issues that quickly fix themselves.

Wait for the warning to reach a hard state and send the notification. You should receive an email with the details of what's going wrong.

Press CTRL-C to exit the stress command. The system load will recover fairly quickly and revert to Ok in the Icinga Web interface. You'll also receive another email telling you that the issue has cleared up.




Conclusion

In this tutorial we successfully set up Icinga and Icinga Web, including Icinga's email notification feature. Currently we are only monitoring the Icinga host itself, though. Continue on to our next tutorial, How To Monitor Hosts and Services with Icinga, where we will set up remote monitoring.

How To Monitor Hosts and Services with Icinga

 
This article will guide you through the steps to use Icinga to set up two different kinds of monitoring configurations. The first is based on simple network checks of your host's external services, such as making a periodic HTTP request to your website. The other configuration uses a software agent running on the host to gather more detailed system information, such as load and the number of running processes.



Prerequisites

You should have completed the previous guide How To Install Icinga and Icinga Web on Ubuntu 16.04. This will leave you with the Icinga core and Icinga Web interface running on a single host, which we'll refer to as the icinga-master node throughout.

Setting up Simple Host Monitoring

One simple way to monitor a server with Icinga is to set up a regular check of its externally available services. So for a web host, we'd regularly ping the server's IP address and also try to access a web page.

This will tell us if the host is up, and if the web server is functioning correctly.

Let's set up monitoring for a web server. Pick one of the Apache servers mentioned as a prerequisite and make sure the default Apache placeholder page is being served properly. We will call this server web-1.example.com. We won't need to log into it at all; all the health checks will be configured and executed on the master node.
 
Note: Icinga always defaults to using the Fully Qualified Domain Name (FQDN) of any host it's dealing with. An FQDN is a hostname plus its domain name, such as web-1.example.com. If you don't have a proper domain set up for a host, the FQDN will be something like web-1.localdomain.

These are fine to use; just be consistent, and if you don't have a "real" FQDN, always use the server's IP address in any Icinga address field you configure.
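A quick way to check the FQDN a host reports for itself is the standard hostname utility:

hostname -f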


Log into the master node. To add a new host, we need to edit Icinga's hosts.conf file. Open it in a text editor:

sudo nano /etc/icinga2/conf.d/hosts.conf

This will open a file with some explanatory comments and a single host block defined. The existing object Host NodeName configuration block defines the icinga-master host, which is the host we installed Icinga and Icinga Web on. Position your cursor at the bottom of the file, and add a new host:
 
object Host "web-1.example.com" {
import "generic-host"
address = "web-1_server_ip"
vars.http_vhosts["http"] = {
http_uri = "/"
}
vars.notification["mail"] = {
groups = [ "icingaadmins" ]
}
}

This defines a host called web-1.example.com, imports some default host configs from a template called generic-host, points Icinga to the correct IP address, and then defines a few variables that tell Icinga to check for an HTTP response at the root (/) URL and notify the icingaadmins group via email when problems occur.

Save and close the file, then restart Icinga:

sudo systemctl restart icinga2
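If the restart fails, a syntax error in the new host definition is the most likely cause. Icinga includes a configuration validator you can run at any time to check all loaded configuration files:

sudo icinga2 daemon -C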

Switch back to the Icinga Web interface in your browser. The interface updates itself fairly rapidly, so you don't need to refresh the page. The new host information should populate in short order, and the health checks will change from Pending to Ok once Icinga gathers enough information.

This is a great way to monitor external services on a host, and there are other checks available for SSH servers, SMTP, and so on. But it'd also be nice to know more details about the internal health of the servers we're monitoring.

Next, we'll set up monitoring via an Icinga agent, so we can keep an eye on more detailed system information.

Setting up Agent-based Monitoring

Icinga provides a mechanism for securely communicating between a master and client node in order to run more extensive remote health checks. Instead of only knowing that our web server is successfully serving pages, we could also monitor CPU load, the number of processes, disk space, and so on.

We're going to set up a second server to monitor. We'll call it web-2.example.com. We need to install the Icinga software on the remote machine, run some setup wizards to make the connection, then update some configuration files on the Icinga master node.

Note: There are many ways to architect an Icinga installation, complete with multiple tiers of master/satellite/client nodes, high-availability failover, and multiple ways to share configuration details between nodes. We are setting up a simple two tier structure with one master node and multiple client nodes. Further, all configuration will be done on the master node, and our health check commands will be scheduled on the master node and pushed to the clients. The Icinga project calls this setup Top Down Command Endpoint mode.


Set up the Master Node

First, we need to set up the master node to make client connections. We do that by running the node setup wizard on our master node:

sudo icinga2 node wizard

This will start a script that asks us a few questions, and sets things up for us. In the following, we hit ENTER to accept most defaults. Non-default answers are highlighted. Some informational output has been removed for clarity:

Output
Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]: n
Starting the Master setup routine...
Please specify the common name (CN) [icinga-master.example.com]: ENTER to accept the default, or type your FQDN
Checking for existing certificates for common name 'icinga-master.example.com'...
Certificates not yet generated. Running 'api setup' now.
Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.
Please specify the API bind host/port (optional): ENTER
Bind Host []: ENTER
Bind Port []: ENTER
Done.

Now restart your Icinga 2 daemon to finish the installation!



Restart Icinga to finish updating the configuration:

sudo systemctl restart icinga2

Open up a firewall port to allow external connections to Icinga:

sudo ufw allow 5665

Now we'll switch to the client node, install Icinga and run the same wizard.

Set up the Client Node

Log into the server we're calling web-2.example.com. We need to install the Icinga repository again, and then install Icinga itself. This is the same procedure we used on the master node. First install the key:
curl -sSL https://packages.icinga.com/icinga.key | sudo apt-key add -

Open the icinga.list file:

sudo nano /etc/apt/sources.list.d/icinga.list

Paste in the repository details:

deb https://packages.icinga.com/ubuntu icinga-xenial main

Save and close the file, then update the package cache:

sudo apt-get update

Then, install icinga2. Note that we do not need the icinga2-ido-mysql package that we installed on the master node:

sudo apt-get install icinga2

Now we run through the node wizard on this server, but we do a satellite config instead of master:

sudo icinga2 node wizard 

Output
Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]: y
Starting the Node setup routine...
Please specify the common name (CN) [web-2.example.com]: ENTER
Please specify the master endpoint(s) this node should connect to:
Master Common Name (CN from your master setup): icinga-master.example.com
Do you want to establish a connection to the master from this node? [Y/n]: y
Please fill out the master connection information:
Master endpoint host (Your master's IP address or FQDN): icinga-master_server_ip
Master endpoint port [5665]: ENTER
Add more master endpoints? [y/N]: ENTER
Please specify the master connection for CSR auto-signing (defaults to master endpoint host):
Host [icinga-master_server_ip]: ENTER
Port [5665]: ENTER

The wizard will now fetch the public certificate from our master node and show us its details. Confirm the information, then continue:

Output
. . .
Is this information correct? [y/N]: y
information/cli: Received trusted master certificate.

Please specify the request ticket generated on your Icinga 2 master.
 (Hint: # icinga2 pki ticket --cn 'web-2.example.com'):




At this point, switch back to your master server and run the command the wizard prompted you to. Don't forget sudo in front of it:

sudo icinga2 pki ticket --cn 'web-2.example.com'

The command will output a key. Copy it to your clipboard, then switch back to the client node, paste it in and hit ENTER to continue with the wizard.

Output
. . .
information/cli: Requesting certificate with ticket '5f53864a504a1c42ad69faa930bffa3a12600546'.

Please specify the API bind host/port (optional):
Bind Host []: ENTER
Bind Port []: ENTER
Accept config from master? [y/N]: n
Accept commands from master? [y/N]: y
Done.

Now restart your Icinga 2 daemon to finish the installation!



Now open up the Icinga port on your firewall:

sudo ufw allow 5665

And restart Icinga to fully update the configuration:

sudo systemctl restart icinga2

You can verify there's a connection between the two servers with netstat:

netstat | grep :5665

It might take a moment for the connection to be made. Eventually netstat will output a line showing an ESTABLISHED connection on the correct port.

Output
tcp        0      0 web-2_server_ip:58188     icinga-master_server_ip:5665     ESTABLISHED 

This shows that our servers have connected and we’re ready to configure the client checks.
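On systems where netstat isn't available, the newer ss utility performs the same check:

ss -tn | grep 5665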

Configure the Agent Monitoring

Even though the master and client are now connected, there's still some setup to do on the master to enable monitoring. We need to set up a new host file. Switch back to the master node.

One important level of organization in an Icinga install is the concept of a zone. All client nodes must create their own zone, and report to a parent zone, in this case our master node. By default our master node's zone is named after its FQDN. We'll make a directory named after our master zone within Icinga's zones.d directory. This will hold the information for all the master zone's clients.

Create the zone directory:

sudo mkdir /etc/icinga2/zones.d/icinga-master.example.com

We're going to create a services configuration file. This will define some service checks that we'll perform on any remote client node. Open the file now:

sudo nano /etc/icinga2/zones.d/icinga-master.example.com/services.conf

Paste in the following, then save and close:

apply Service "load" {
import "generic-service"
check_command = "load"
command_endpoint = host.vars.client_endpoint
assign where host.vars.client_endpoint
}

apply Service "procs" {
import "generic-service"
check_command = "procs"
command_endpoint = host.vars.client_endpoint
assign where host.vars.client_endpoint
}
 
This defines two service checks. The first will report on the CPU load, and the second will check the number of processes on the server. The last two lines of each service definition are important. command_endpoint tells Icinga that this service check needs to be sent to a remote command endpoint. The assign where line automatically assigns the service check to any host that has a client_endpoint variable defined.

Let's create such a host now. Open a new file in the zone directory we previously created. Here we've named the file after the remote host:

sudo nano /etc/icinga2/zones.d/icinga-master.example.com/web-2.example.com.conf

Paste in the following configuration, then save and close the file:

object Zone "web-2.example.com" {
    endpoints = [ "web-2.example.com" ]
    parent = "icinga-master.example.com"
}

object Endpoint "web-2.example.com" {
    host = "web-2_server_ip"
}

object Host "web-2.example.com" {
    import "generic-host"
    address = "web-2_server_ip"
    vars.http_vhosts["http"] = {
        http_uri = "/"
    }
    vars.notification["mail"] = {
        groups = [ "icingaadmins" ]
    }
    vars.client_endpoint = name
}

This file defines a zone for our remote host, and ties it back to the parent zone. It also defines the host as an endpoint, and then defines the host itself, importing some default rules from the generic-host template. It also sets some vars to create an HTTP check and enable email notifications. Note that because this host has vars.client_endpoint = name defined, it will also be assigned the service checks we just defined in services.conf.

Restart Icinga to update the config:

sudo systemctl restart icinga2

Switch back to the Icinga Web interface, and the new host will show up with checks Pending. After a few moments, those checks should turn Ok. This means our client node is successfully running checks for the master node.




Conclusion

In this guide we set up two different types of monitoring with Icinga, external service checks and agent-based host checks.

How To Manage Logs using Graylog 2 on Ubuntu 16.04


Graylog is a powerful open-source log management platform. It aggregates and extracts important data from server logs, which are often sent using the Syslog protocol. It also allows you to search and visualize the logs in a web interface.



This article will guide you through the steps to install and configure Graylog on Ubuntu 16.04, and set up a simple input that receives system logs.

Prerequisites

Before you begin this guide, you'll need:

Configuring Elasticsearch

We need to modify the Elasticsearch configuration file so that the cluster name matches the one set in the Graylog configuration file. To keep things simple, we'll set the Elasticsearch cluster name to the default Graylog name of graylog. You may set it to whatever you wish, but make sure you update the Graylog configuration file to reflect that change.

Open the Elasticsearch configuration file in your editor:

sudo nano /etc/elasticsearch/elasticsearch.yml 

Find the following line:
 
cluster.name: 

Change the cluster.name value to graylog:
 
cluster.name: graylog

Save the file and exit your editor.

Since we modified the configuration file, we have to restart the service for the changes to take effect:

sudo systemctl restart elasticsearch  
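Before continuing, you can optionally confirm that Elasticsearch picked up the new cluster name by querying its HTTP API. This is a quick check, assuming Elasticsearch is listening on its default port, 9200:

curl -XGET 'http://localhost:9200'

The JSON response should include a "cluster_name" field set to graylog. Now that you have configured Elasticsearch, let's move on to installing Graylog.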

Installing Graylog

In this step, we'll install the Graylog server.

First, download the package file containing the Graylog repository configuration. Visit the Graylog download page to find the current version number. We'll use version 2.2 for this tutorial.

wget https://packages.graylog2.org/repo/packages/graylog-2.2-repository_latest.deb  


Next, install the repository configuration from the .deb package file, again replacing 2.2 with the version you downloaded.

sudo dpkg -i graylog-2.2-repository_latest.deb

Now that the repository configuration has been updated, we have to fetch the new list of packages. Execute this command:


sudo apt-get update

Next, install the graylog-server package:

sudo apt-get install graylog-server

Lastly, start Graylog automatically on system boot with this command:

sudo systemctl enable graylog-server.service

Graylog is now successfully installed, but it's not started yet. We have to configure it before it will start.

Configuring Graylog

Now that we have Elasticsearch configured and Graylog installed, we need to change a few settings in the default Graylog configuration file before we can use it. Graylog's configuration file is located at /etc/graylog/server/server.conf by default.

First, we need to set the password_secret value. Graylog uses this value to secure the stored user passwords. We will use a randomly-generated 128-character value.

We will use pwgen to generate the password, so install it if it isn't already installed:

sudo apt install pwgen  

Generate the password and place it in the Graylog configuration file. We'll use the sed program to inject the password_secret value into the Graylog configuration file. This way we don't have to copy and paste any values. Execute this command to create the secret and store it in the file:

sudo -E sed -i -e "s/password_secret =.*/password_secret = $(pwgen -s 128 1)/" /etc/graylog/server/server.conf
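You can verify that the substitution worked by grepping for the key; the line should now end with a long random string (note that this prints the secret to your terminal):

sudo grep password_secret /etc/graylog/server/server.conf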


Next, we need to set the root_password_sha2 value. This is an SHA-256 hash of your desired password.

Once again, we'll use the sed command to modify the Graylog configuration file so we don't have to manually generate the SHA-256 hash using shasum and paste it into the configuration file.

Execute this command, but replace password below with your desired default administrator password:
Note: There is a leading space in the command, which prevents your password from being stored as plain text in your Bash history.

 sudo sed -i -e "s/root_password_sha2 =.*/root_password_sha2 = $(echo -n 'password' | shasum -a 256 | cut -d' ' -f1)/" /etc/graylog/server/server.conf

Now, we need to make a couple more changes to the configuration file. Open the Graylog configuration file with your editor:

sudo nano /etc/graylog/server/server.conf  

Find and change the following lines, uncommenting them and replacing your_server_ip_or_domain with the public IP address or fully-qualified domain name of your server.

...
rest_listen_uri = http://your_server_ip_or_domain:9000/api/

...
web_listen_uri = http://your_server_ip_or_domain:9000/

...

Save the file and exit your editor.

Since we changed the configuration file, we have to restart (or start) the graylog-server service. The restart command will start the server even if it is currently stopped.

sudo systemctl restart graylog-server

Next, check the status of the server.

sudo systemctl status graylog-server

The output should look something like this:

graylog-server.service - Graylog server
   Loaded: loaded (/usr/lib/systemd/system/graylog-server.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-03-03 20:10:34 PST; 1 months 7 days ago
     Docs: http://docs.graylog.org/
 Main PID: 1300 (graylog-server)
    Tasks: 191 (limit: 9830)
   Memory: 1.2G
      CPU: 14h 57min 21.475s
   CGroup: /system.slice/graylog-server.service
           ├─1300 /bin/sh /usr/share/graylog-server/bin/graylog-server
           └─1388 /usr/bin/java -Xms1g -Xmx1g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSCon


You should see active for the status.

If the output reports that the system isn't running, check /var/log/syslog for any errors. Make sure you installed Java when you installed Elasticsearch, and that you changed all of the values in Step 3. Then attempt to restart the Graylog service again.

If you have configured a firewall with ufw, add a firewall exception for TCP port 9000 so you can access the web interface:

sudo ufw allow 9000/tcp

Once Graylog is running, you should be able to access http://your_server_ip:9000 with your web browser. You may have to wait up to five minutes after restarting graylog-server before the web interface starts. Additionally, ensure that MongoDB is running.

Now that Graylog is running properly, we can move on to processing logs.

Creating an Input

Let's add a new input to Graylog to receive logs. Inputs tell Graylog which port to listen on and which protocol to use when receiving logs. We'll add a Syslog UDP input, which is a commonly used logging protocol.

When you visit http://your_server_ip:9000 in your browser, you'll see a login page. Use admin for your username, and use the password you entered in Step 3 for your password.

Once logged in, you'll see a page titled "Getting Started" that looks like the following image:



To view the inputs page, click the System dropdown in the navigation bar and select Inputs.

You'll then see a dropdown box that contains the text Select Input. Select Syslog UDP from this dropdown, and then click on the Launch new input button.

A modal with a form should appear. Fill in the following details to create your input:
  1. For Node, select your server. It should be the only item in the list.
  2. For Title, enter a suitable title, such as Linux Server Logs.
  3. For Bind address, use your server's private IP. If you also want to be able to collect logs from external servers (not recommended, as Syslog does not support authentication), you can set it to 0.0.0.0 (all interfaces).
  4. For Port, enter 8514. Note that we are using port 8514 for this tutorial because ports 0 through 1024 can only be used by the root user. Any port number above 1024 is fine as long as it doesn't conflict with any other services.
Click Save. The local input listing will update and show your new input, as shown in the following figure:



Now that an input has been created, we can send some logs to Graylog. 

Configure Servers to Send Logs to Graylog

We have an input configured and listening on port 8514, but we are not sending any data to the input yet, so we won't see any results. rsyslog is a software utility used to forward logs and is pre-installed on Ubuntu, so we'll configure that to send logs to Graylog. In this tutorial, we'll configure the Ubuntu server running Graylog to send its system logs to the input we just created, but you can follow these steps on any other servers you may have.

If you want to send data to Graylog from other servers, you need to add a firewall exception for UDP port 8514.


sudo ufw allow 8514/udp


Create and open a new rsyslog configuration file in your editor.

sudo nano /etc/rsyslog.d/60-graylog.conf


Add the following line to the file, replacing your_server_private_ip with your Graylog server's private IP.

*.* @your_server_private_ip:8514;RSYSLOG_SyslogProtocol23Format

Save and exit your editor.

Restart the rsyslog service so the changes take effect.

sudo systemctl restart rsyslog


Repeat these steps for each server you want to send logs from.
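To confirm that messages are actually flowing, you can generate a test entry with the standard logger utility on any configured server. It writes to the local syslog, and rsyslog will forward it to Graylog:

logger "Test message for Graylog"

The message should appear in the Graylog search interface within a few seconds.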

You should now be able to view your logs in the web interface. Click the Sources tab in the navigation bar to view a graph of the sources. It should look something like this:



You can also click the Search tab in the navigation bar to view an overview of the most recent logs.




Conclusion

You now have a working Graylog server with an input source that can collect logs from other servers.

How To Install Ubuntu 16.10/16.04 in Dual-Boot with Windows 10 or 8


This article will guide you through the steps to install Ubuntu 16.10 or Ubuntu 16.04 in dual-boot on a machine with Windows 10.



This tutorial assumes that your machine already has Windows 10, or an older version of Microsoft Windows such as Windows 8.1 or 8, installed. If your hardware uses UEFI, you should modify the EFI settings and disable the Secure Boot feature.

If your machine has no operating system installed and you plan to use Windows alongside Ubuntu 16.04/16.10, you should first install any version of Microsoft Windows and then proceed with the Ubuntu 16.04 installation.

In this scenario, during the Windows installation, when formatting the hard disk, you should leave at least 20 GB of free space on the disk to use later as a partition for the Ubuntu installation.

Prerequisites


Prepare Windows Machine for Dual-Boot 

The first thing you need to take care of is creating free space on the computer hard disk, in case the system is installed on a single partition.

Log in to your Windows machine with an administrative account and right-click on the Start Menu -> Command Prompt (Admin) to open a Windows command line.


Now type diskmgmt.msc at the command prompt, and the Disk Management utility should open. From here, right-click on the C: partition and select Shrink Volume in order to resize it.

C:\Windows\system32\>diskmgmt.msc


In the Shrink C: dialog, enter the amount of space to shrink in MB (use at least 20000 MB, depending on the size of the C: partition) and hit Shrink to start the resize.

Once the space has been resized, you will see new unallocated space on the hard drive. Leave it as is and reboot the computer to proceed with the Ubuntu 16.04 installation.



Install Ubuntu 16.04 with Windows Dual-Boot

Now it's time to install Ubuntu 16.04. Go to the download link in the topic description and grab the Ubuntu Desktop 16.04 ISO image.

Burn the image to a DVD or create a bootable USB stick using a utility such as Universal USB Installer (BIOS compatible) or Rufus (UEFI compatible).

Place the USB stick or DVD in the appropriate drive, reboot the machine, and instruct the BIOS/UEFI to boot from the DVD/USB by pressing a special function key (usually F12, F10, or F2, depending on the vendor).

Once the media boots up, a GRUB screen should appear on your monitor. From the menu, select Install Ubuntu and hit Enter to continue.


After the boot media finishes loading into RAM, you will end up with a completely functional Ubuntu system running in live mode.

On the Launcher, click the second icon from the top, Install Ubuntu 16.04 LTS, and the installer utility will start. Choose the language you wish to use for the installation and click the Continue button to proceed.


Next, leave both options under Preparing to Install Ubuntu unchecked and click the Continue button again.


Now it's time to select an installation type. You can choose Install Ubuntu alongside Windows Boot Manager, an option that will automatically take care of all the partitioning steps.

Use this option if you don't require a personalized partition scheme. If you want a custom partition layout, check the Something else option and click the Continue button to proceed.

The option Erase disk and install Ubuntu should be avoided in dual-boot setups because it is potentially dangerous and will wipe your disk.


In this step we'll create our custom partition layout for Ubuntu 16.04. This guide recommends that you create two partitions, one for root and the other for home account data, and no swap partition (use a swap partition only if you have limited RAM resources or a fast SSD).

To create the first partition, the root partition, select the free space (the space freed from Windows earlier) and click the + icon below. In the partition settings, use the following configuration and hit OK to apply the changes:
  1. Size = at least 20000 MB
  2. Type for the new partition = Primary
  3. Location for the new partition = Beginning
  4. Use as = EXT4 journaling file system
  5. Mount point = /



Create the home partition using the same steps as above. Use all the remaining free space for the home partition size. The partition settings should look like this:
  1. Size = all remaining free space
  2. Type for the new partition = Primary
  3. Location for the new partition = Beginning
  4. Use as = EXT4 journaling file system
  5. Mount point = /home


When finished, hit the Install Now button in order to apply the changes to disk and start the installation process.
A pop-up window should appear to inform you about swap space. Ignore the alert by pressing the Continue button.

Next, a new pop-up window will ask you if you agree with committing the changes to disk. Hit Continue to write the changes to disk and the installation process will now start.



On the next screen, adjust your machine’s physical location by selecting a nearby city on the map. When done, hit Continue to move ahead.


Next, select your keyboard layout and click the Continue button.


Pick a username and password for your administrative sudo account, enter a descriptive name for your computer and hit Continue to finalize the installation.

These are all the settings required for customizing the Ubuntu 16.04 installation. From here on, the installation process will run automatically until it reaches the end.



After the installation process reaches its end, hit the Restart Now button in order to complete the installation.
The machine will reboot into the GRUB menu, where for ten seconds you will be prompted to choose which OS you wish to use: Ubuntu 16.04 or Microsoft Windows.

Ubuntu is designated as the default OS to boot. Just press the Enter key or wait for the 10-second timeout to expire.






After Ubuntu finishes loading, log in with the credentials created during the installation process and enjoy it. Ubuntu 16.04 provides NTFS file system support out of the box, so you can access the files from the Windows partitions just by clicking on the Windows volume.
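If a Windows volume doesn’t mount from the file manager for some reason, you can mount it manually using the ntfs-3g driver that ships with Ubuntu. This is a minimal sketch; /dev/sda2 is an assumed device name for the Windows partition, so identify yours first with lsblk:

lsblk -f                                      # find the partition with an ntfs filesystem
sudo mkdir -p /mnt/windows                    # create a mount point
sudo mount -t ntfs-3g /dev/sda2 /mnt/windows  # mount the Windows partition
ls /mnt/windows                               # browse the Windows files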



To switch between Windows and Ubuntu, just reboot the computer and select Windows or Ubuntu from the GRUB menu.
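If you’d rather have Windows boot by default, you can change GRUB’s default entry from Ubuntu. A sketch, assuming the Windows Boot Manager entry is the third item in your GRUB menu (entries are counted from 0, so its index would be 2; check your own menu before picking a number):

sudo nano /etc/default/grub
# in the file, set for example: GRUB_DEFAULT=2
sudo update-grub

Running update-grub regenerates the GRUB configuration so the new default takes effect on the next reboot.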

What’s New in Android O


Google unveiled a preview of the latest version of Android. Codenamed O, the next iteration of Google’s mobile operating system is coming this summer, but you can get your hands on the beta right now.




Just like with Android N, Google is offering a formal beta program for Android O. Device support for this build is much smaller than it was for the N preview, however, with only a handful of units being eligible:
  • Nexus 6P
  • Nexus 5X
  • Google Pixel
  • Google Pixel XL
  • Nexus Player
  • Pixel C
If you have any of those devices, you can jump in on the beta here. A word of caution, however: I do not recommend using this if it’s your only phone. This is very much a beta and not meant for daily use.

Here are the best features you’ll see when it arrives.

Fluid Experiences

Google is bringing a new set of features to Android O that it calls “Fluid Experiences”. It includes Picture in Picture, Notification Dots, Autofill, and Smart Text Selection. Here’s a brief look at each one.


Picture in Picture Puts One App Above Another

In Android Nougat (7.x), we got the ability to run two apps on the screen at once with Multi-window. While a super useful feature in its own right, it’s not always the best solution. So with O, Google is bringing Picture in Picture mode to the small screen. This will let users open an app in the foreground while keeping something like a YouTube video running in a smaller window on top. The early implementation looks really solid so far.


Notification Dots Let You Know What Apps Have Notifications

If you’ve ever used something like Nova Launcher that has built-in notification “badges,” then you already know what Notification Dots are all about. Basically, this is a quick way to see pending notifications (aside from using the notification bar, of course) on home screen icons. Unfortunately, they are exactly what the name suggests: dots. Not numbers or anything of the sort. It’s also unclear whether these will work in the app drawer as well.

One cool thing about Notification Dots is the long-press action. With the long-press features introduced with Pixel Launcher, you are able to do more with home screen icons, and Notification Dots takes this a step further by allowing you to actually see the notification by long-pressing the icon. It’s rad.


Autofill Passwords in Apps

Chrome has had autofill features for a long time—be it passwords or form data. Now that feature is coming to Android apps as well. For example, if Chrome has your Twitter or Facebook login credentials saved, the app will autofill them and log you in on your Android phone. This is a feature that’s way overdue, and I’m so glad to see it coming front and center in Android O.



Smart Text Selection Gives You Context-Aware Shortcuts

How many times has someone sent you a text with certain information—like an address, for example—and you had to copy and paste it into Google Maps? I’d like to think that happens to most people pretty regularly (or at least some form of the copy/paste/search analogy). Smart Text Selection is a new feature that will streamline that process by automatically selecting relevant text.

For example, if someone sends you an address, you can double tap the street name and it will automatically select the entire address. Or if it’s a business name, it will highlight the entire thing if you just select one word. It looks pretty brilliant.

To make this feature even more useful, Smart Text Selection will also offer quick actions in the suggestions bar: if you select a phone number, it will offer the dialer; an address will suggest Maps; and so on.



Vitals: Speed, Security, and Battery Life

With each major release over the last two or so years, Google has put a lot of focus on overall Android optimization. Making the OS more efficient in terms of both performance and battery life has been a front-and-center effort, and Android O is no different.

With this release, Google is bringing a new set of optimizations it collectively refers to as “Vitals.” While the details were slightly ambiguous at the keynote itself, we know that this will maximize security with Google Play Protect, optimize boot times and app performance, and intelligently limit background activity for apps to save battery life.


At this point, there isn’t a whole lot to say about the speed and battery life improvements, but Google released some more info about Google Play Protect, its new security initiative in the Google Play Store.


Google Play Protect

Part of the “Vitals” program mentioned earlier, Google Play Protect is Google’s newest initiative to ensure all apps found in the Play Store are safe, secure, and compliant with the company’s guidelines.

When an app enters Google Play in the first place, it has to undergo a security screening to make sure it’s in compliance with security standards. The thing is, once an app is in the Play Store, it doesn’t have to undergo this security screening again—if an app is updated, it can easily slip in something under the hood that may be of questionable integrity.

In order to combat this, Google is implementing Play Protect, which uses machine learning to scan billions of apps daily to make sure best security practices are in place. In other words, this will cut back on Android “malware” and other questionable applications that may squeak their way into the Play Store.
Android Device Manager, which has now been renamed to “Find My Device,” is also part of Play Protect.

You can use this service to secure and locate your lost or stolen Android device.




Visual Positioning Service: Augmented Reality That’s Useful


Google has been pushing VR (Virtual Reality) and AR (Augmented Reality) fairly heavily over the past couple of years, and its newly announced Visual Positioning Service is using AR to help you find your way around indoors—like GPS for the inside of places. It’s awesome.

In the demo Google provided during the I/O keynote, they used Lowe’s as an example—these stores are rather big, so finding one particular item can be a huge pain. VPS used the phone’s camera to pinpoint exact items and compare them with a database of Lowe’s store layouts to provide exact directions to the item they were looking for. It was kind of surreal.

For now, however, this feature only works on Tango-enabled phones—of which there are only two at the moment. A third is coming later this year in the ASUS ZenFone AR, but hopefully we’ll start to see more Tango-equipped handsets hitting the scene so this killer technology can actually get used by more than a handful of people.


Android Go: Optimized for Low-Cost Phones

A couple of years ago, Google announced a project called Android One to bring low-cost smartphones to developing countries across the globe. Today, it announced Android Go, which at first blush appears to be essentially the next phase of that program.

Android Go’s purpose is to optimize every version of the operating system for low-cost hardware, starting with Android O. Essentially, from this point forward, every version of Android will have a “Go” edition that is optimized to work on anywhere from 512MB to 1GB of RAM, as well as lower-end processors and limited storage.


The company is also releasing lite versions of the entire Google suite for Go devices, and it will specifically curate the Play Store on these devices to highlight apps that are optimized for use on low-power phones. It will also bring data usage front and center, since many low-income users are on pay-as-you-go data plans. Data Saver in Chrome will be enabled by default, and the Data Usage section of Settings will be accessible directly from the Quick Settings panel. Users will even be able to “top up” their data directly from this screen on compatible carriers. That’s neat.

In other words: Go is an initiative to make low-end Android devices perform much, much better than they currently do so low-income families can still have access to the technology they deserve. It warms my heart to see companies like Google making a push to better the lives of the little guys.


Google Lens: Like Google Goggles, But for the Future

This is honestly one of the coolest things Google announced at I/O, and while not technically part of Android O, it’s definitely worth talking about here. Basically, Lens is a new smart feature that uses your phone’s camera to understand what you’re looking at.

It can do things like read signs in other languages and provide translations, identify plants and flowers, read Wi-Fi names and passwords off of routers and automatically connect, or even add calendar events by snapping a picture of an event billboard. And that’s just the stuff Google showed it doing on the I/O main stage—I’m absolutely certain this is a feature that will do so much more once it gets into users’ hands.




Once it starts to roll out, Lens will be available in both Assistant and Photos, but we could possibly see it integrated into other apps as well. At this time, Google made no indication that Lens would be available as a standalone app. There was a lot of information to come out of Google I/O’s keynote, covering everything from Android to Google Assistant, Photos, and a lot more.

How to install the Android O Beta on Your Pixel or Nexus Device


Android, codenamed “O,” is the upcoming version of Google’s mobile operating system, but you don’t need to wait until the release date to get your hands on the latest features. If you have a compatible Nexus or Pixel device, you can install the developer preview of the Android O beta right now.




You can get Android O on these modern Nexus and Pixel devices:
  • Nexus 6P
  • Nexus 5X
  • Nexus Player
  • Pixel C
  • Pixel
  • Pixel XL

How to Enroll in the Android O Beta Program

If you have one of the devices mentioned above connected to your Google account, you can head here to get started with the beta program. After you sign in with your Google account, make sure to read all of the caveats before you just start clicking buttons. In particular: if you ever want to opt out of the beta program and roll back to the latest stable version of Android, your device will have to be wiped with a factory reset.


Once you’ve confirmed that you’re cool with the fact that this may be super buggy on your device, you can simply hit the “Enroll Device” button below the eligible device you want to push O to.


A popup will appear with a standard set of “this may wreck your device” warnings, so if that doesn’t scare you off, click the checkbox and press the “Join beta” button.


Within a few minutes, the enrolled device should receive an update notification. This will install like any other update would—it downloads directly from Google, then flashes automatically.

Upon reboot, you’ll be running Android O!
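If you’d like to confirm which build the device is now running, one quick check (assuming you have adb installed on your computer and USB debugging enabled on the phone) is to read the build properties:

adb shell getprop ro.build.version.release
adb shell getprop ro.build.id

The first command prints the Android version string and the second the build identifier, which you can compare against the preview build numbers Google publishes.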


How to Roll Back to Nougat

OK, so you gave Android O a shot, fell in love with all sorts of neat stuff about it, but ultimately just couldn’t handle the bugginess. There’s no shame in that, and it’s super easy to roll back to a stable version of Nougat.

If you enrolled in the program using the OTA method, you need only go back to the Android Beta Program landing page and “unenroll” your device by clicking the button. Keep in mind here that this will perform a factory reset on your device, so if you’re OK with losing everything and starting over, go for it. There is currently no recommended way of rolling back without wiping the device. Back up anything you need before continuing!

How To Convert an MBR Disk to GPT and Switch From BIOS to UEFI on Windows 10


Using the Unified Extensible Firmware Interface (UEFI) can make your PC more secure and a bit faster if you're still using the legacy basic input/output system (BIOS). Windows 10 makes it easier to switch from legacy BIOS to UEFI using the new MBR2GPT disk conversion tool included with the Creators Update. 





This guide will show you how to switch from BIOS to UEFI and convert an MBR disk to GPT without any data loss.

How to convert a disk from MBR to GPT on Windows 10

Previously, you needed to back up your data, repartition the disk using GPT, reinstall the OS, and then restore the data. In the Creators Update, Windows 10 introduces a new command-line utility called MBR2GPT that lets you convert a disk formatted using Master Boot Record (MBR) to the GUID Partition Table (GPT) partition style without modifying or deleting the data stored on the disk, which is a requirement for moving to UEFI mode.

  • First, you need to convert the disk from MBR to the GPT partition style, which is the main requirement for running Windows 10 in UEFI mode.
  • Second, you must change your motherboard firmware settings to make the switch from BIOS to UEFI mode.

Checking disk partition type

1. Use the Windows key + X keyboard shortcut to open the Power User menu and click on Disk Management.
2. Right-click the disk with the Windows 10 installation and select Properties.



3. Click on the Volumes tab and look at "Partition style". If it reads GUID Partition Table (GPT), the disk doesn't need to be converted, but if you see the Master Boot Record (MBR) label, then you can use the conversion tool.


In addition, make sure to check your PC manufacturer's support website to see if the device includes support for UEFI mode before using the conversion tool.
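You can also check the partition style from the command line with the diskpart utility. In the list disk output, disks that already use GPT are marked with an asterisk in the Gpt column:

C:\>diskpart
DISKPART> list disk
DISKPART> exit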

Converting disk to GPT partition style

In order to convert the disk from MBR to GPT, you need to start your computer in Windows PE (Preinstallation Environment), and then do the following.

Warning: Although this is a non-destructive process, you should always keep a full backup of your computer and data in case something goes wrong.

1. Open Settings.
2. Click on Update & security.
3. Click on Recovery.
4. Under "Advanced startup," click the Restart now button.


5. Click on Troubleshoot.
6. Click on Advanced options.
7. Click on Command Prompt to boot into Windows PE.


8. Click on your account and log in with your credentials (if applicable).
9. Type the following command to validate and make sure the disk meets the requirements and press Enter:

mbr2gpt /validate


If everything checks out successfully, you can continue with the next step, but it's also possible that you'll get an error if the disk doesn't meet the requirements, for example, if the drive is already using a GPT partition style.

Quick Tip: MBR2GPT.exe is located inside the System32 folder within the Windows folder, and if you want to see all the available options included with this tool, you can use the mbr2gpt /? command.


Type the following command to convert the disk from MBR to GPT and press Enter:

mbr2gpt /convert


As you execute the command, the tool will validate the disk. The partition will be reconfigured to include an EFI system partition (ESP) as needed. Then the UEFI boot files and GPT components will be installed in the new partition. The boot configuration data (BCD) will be updated, and finally, the drive letter is restored.

The tool was designed to run in the Windows PE using Command Prompt, but it's also possible to use it when Windows 10 desktop is fully loaded. This is not recommended, though, because you may encounter some problems with other applications running on the system.

If you want to use the tool while Windows 10 is fully loaded, you'll need to append the /allowFullOS switch to each command mentioned above. For example, you can use the mbr2gpt /validate /allowFullOS command to validate the disk. Otherwise, you won't be able to use the tool.
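Put together, a full-OS run might look like the sketch below. The /disk:0 parameter is an assumption that Windows lives on disk 0; confirm the disk number in Disk Management or diskpart before running it:

mbr2gpt /validate /disk:0 /allowFullOS
mbr2gpt /convert /disk:0 /allowFullOS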


MBR2GPT return codes

If the conversion was applied successfully, you should see a return code of 0. If the process fails, you'll receive a different return error code.

In all, there are eleven different error codes, 1 through 10 plus code 100, each one indicating a particular problem during the conversion process.

Here's what those codes mean:

Return Code - Description
1 - User canceled the conversion.
2 - Internal error.
3 - Initialization error.
4 - Invalid command-line parameters.
5 - Error on the geometry and layout of the selected disk.
6 - One or more volumes on the disk is encrypted.
7 - Geometry and layout of the disk don't meet requirements.
8 - Error while creating the EFI system partition.
9 - Error installing boot files.
10 - Error while applying GPT layout.
100 - Successful conversion, but some boot configuration data didn't restore.


How to change the firmware mode from BIOS to UEFI

Once you've completed the steps to switch to the GPT partition style, it's time to access the motherboard's firmware to change from BIOS to UEFI. Otherwise, Windows 10 won't boot.

In order to do this process, you can use tools provided by your PC manufacturer, or change the settings manually in the firmware interface.

This process typically requires hitting one of the function keys (F1, F2, F3, F10, or F12), the ESC, or Delete key as you boot your computer. However, these settings will vary by manufacturer, and even by model. So make sure to check your PC manufacturer support website for more specific details.

After accessing the firmware (BIOS) interface, look for the Boot menu, and make sure to change from legacy BIOS to UEFI.

Then reboot your computer, and use these steps to verify that you're indeed running Windows 10 on a GPT-style partition:
  1. Use the Windows key + X to open the Power User menu and click on Disk Management.
  2. Right-click the disk with the Windows 10 installation and select Properties.
  3. Click on the Volumes tab, and under "Partition style," it should read GUID Partition Table (GPT).
To make sure your device is using UEFI, do the following:
  1. Open Start.
  2. Search for msinfo32 or System Information and press Enter.
In the System Information summary, you should now see that "BIOS Mode" is set to "UEFI."

 



Conclusion

It's been possible for a long time to convert an MBR disk to GPT to switch from BIOS to UEFI, but now you can make the conversion in minutes without wasting time doing a clean install of Windows 10 and restoring your data from backup. This tool not only comes in handy for anyone who wants to make their PC a bit more secure and faster, but it'll also benefit organizations that want to significantly reduce the time and cost of moving to more advanced firmware.

How To Allow SFTP Without Shell Access on Ubuntu/CentOS/RHEL Linux


SFTP stands for SSH File Transfer Protocol and it's a secure way of transferring files to a server using an encrypted SSH connection. Despite the name, it's a completely different protocol than FTP (File Transfer Protocol), though it's widely supported by modern FTP clients.






SFTP is available by default with no additional configuration on all servers that have SSH access enabled. It's secure and easy to use, but comes with a disadvantage: in a standard configuration, the SSH server grants file transfer access and terminal shell access to all users with an account on the system.

In some cases, you might want only certain users to be allowed file transfers and no SSH access. In this guide, we'll set up the SSH daemon to limit SFTP access to one directory, with no SSH access allowed, on a per-user basis.

The following steps and commands were performed on Ubuntu 16.04 in our lab, but you can use any Linux distribution you like.

Creating a New User

First, create a new user who will be granted only file transfer access to the server. Here, we're using the username peter, but you can use any username you like.

sudo adduser peter            #If you are on Ubuntu



You'll be prompted to create a password for the account, followed by some information about the user. The user information is optional, so you can press ENTER to leave those fields blank.

useradd peter                 #If you are on CentOS/RHEL
passwd peter                  #Set Password for user peter

You have now created a new user that will be granted access to the restricted directory. In the next step we will create the directory for file transfers and set up the necessary permissions.

Creating a Directory for File Transfers

In order to restrict SFTP access to one directory, we first have to make sure the directory complies with the SSH server's permissions requirements, which are very particular.

Specifically, the directory itself and all directories above it in the filesystem tree must be owned by root and not writable by anyone else. Consequently, it's not possible to simply give restricted access to a user's home directory because home directories are owned by the user, not root.

Note: Some versions of OpenSSH do not have such strict requirements for the directory structure and ownership, but most modern Linux distributions (including Ubuntu 16.04) do.


There are a number of ways to work around this ownership issue. In this guide, we'll create and use /var/sftp/uploads as the target upload directory. /var/sftp will be owned by root and will be unwritable by other users; the subdirectory /var/sftp/uploads will be owned by peter, so that user will be able to upload files to it.

First, create the directories.

  • sudo mkdir -p /var/sftp/uploads


Set the owner of /var/sftp to root.

  • sudo chown root:root /var/sftp


Give root write permissions to the same directory, and give other users only read and execute rights.

  • sudo chmod 755 /var/sftp


Change the ownership on the uploads directory to peter.

  • sudo chown peter:peter /var/sftp/uploads


Now that the directory structure is in place, we can configure the SSH server itself.

Restricting Access to One Directory

In this step, we'll modify the SSH server configuration to disallow terminal access for peter but allow file transfer access.

Open the SSH server configuration file using nano or your favorite text editor.

  • sudo nano /etc/ssh/sshd_config


Scroll to the very bottom of the file and append the following configuration snippet:
/etc/ssh/sshd_config
. . .

Match User peter
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /var/sftp
PermitTunnel no
AllowAgentForwarding no
AllowTcpForwarding no
X11Forwarding no


Then save and close the file.

Here's what each of those directives do:
  • Match User tells the SSH server to apply the following commands only to the user specified. Here, we specify peter.
  • ForceCommand internal-sftp forces the SSH server to run the SFTP server upon login, disallowing shell access.
  • PasswordAuthentication yes allows password authentication for this user.
  • ChrootDirectory /var/sftp/ ensures that the user will not be allowed access to anything beyond the /var/sftp directory.
  • AllowAgentForwarding no, AllowTcpForwarding no, and X11Forwarding no disable port forwarding, tunneling and X11 forwarding for this user.
This set of directives, starting with Match User, can be copied and repeated for different users too. Make sure to modify the username in the Match User line accordingly. If you end up with several such users, see the group-based sketch below.
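With many SFTP-only accounts, a Match Group block saves repetition. This is a sketch assuming you create a group named sftponly (the group name is our own choice, not something SSH requires) and add each restricted user to it:

sudo groupadd sftponly
sudo usermod -aG sftponly peter

Then a single block in /etc/ssh/sshd_config covers them all:

Match Group sftponly
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /var/sftp
PermitTunnel no
AllowAgentForwarding no
AllowTcpForwarding no
X11Forwarding no

Remember to restart the SSH service after editing the configuration, as described below.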

Note: You can omit the PasswordAuthentication yes line and instead set up SSH key access for increased security.

To apply the configuration changes, restart the service.

  • sudo systemctl restart sshd


You have now configured the SSH server to restrict access to file transfer only for peter. The last step is testing the configuration to make sure it works as intended.

Verifying the Configuration

Let's ensure that our new peter user can only transfer files.

Logging in to the server as peter using normal shell access should no longer be possible. Let's try it:

  • ssh peter@localhost


You'll see the following message before being returned to your original prompt:



Error message

This service allows sftp connections only.
Connection to localhost closed.


This means that peter can no longer access the server shell using SSH.

Next, let's verify that the user can successfully access SFTP for file transfer.

  • sftp peter@localhost


Instead of an error message, this command will show a successful login message with an interactive prompt.



SFTP prompt

Connected to localhost.
sftp>

You can list the directory contents using ls in the prompt:

  • sftp> ls


This will show the uploads directory that was created in the previous step and return you to the sftp> prompt.



SFTP file list output

uploads

To verify that the user is indeed restricted to this directory and cannot access any directory above it, you can try changing the directory to the one above it.

  • sftp> cd ..


This command will not give an error, but listing the directory contents as before will show no change, proving that the user was not able to switch to the parent directory.

You have now verified that the restricted configuration works as intended. The newly created peter user can access the server only using the SFTP protocol for file transfer and has no ability to access the full shell.
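As one last check, you can confirm that peter can actually write inside the uploads directory. A quick sketch using a throwaway file (test.txt is just an example name):

echo "test" > test.txt
sftp peter@localhost
sftp> cd uploads
sftp> put test.txt
sftp> ls

If the upload succeeds, ls will show test.txt. Uploading to the chroot root /var/sftp itself should fail with a permission error, since that directory is owned by root, which is exactly the behavior we configured.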




Conclusion

You've restricted a user to SFTP-only access to a single directory on a server without full shell access. While this guide uses only one directory and one user for brevity, you can extend this example to multiple users and multiple directories.

The SSH server allows more complex configuration schemes, including limiting access to groups or multiple users at once, or limiting access to certain IP addresses. You can find examples of additional configuration options and explanations of possible directives in the OpenSSH Cookbook.

How To Set Up OpenLDAP and phpLDAPadmin on Ubuntu 16.04


Lightweight Directory Access Protocol (LDAP) is a standard protocol designed to manage and access hierarchical directory information over a network. It is most often used as a centralized authentication system or for corporate email and phone directories.





This article will take you through the steps to install and configure the OpenLDAP server on Ubuntu 16.04. We will then install phpLDAPadmin, a web interface for viewing and manipulating LDAP information. We will secure the web interface and the LDAP service with SSL certificates from Let's Encrypt, a provider of free and automated certificates.

Prerequisites

Before starting this tutorial, you should have an Ubuntu 16.04 server set up with Apache and PHP. You can follow our tutorial How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 16.04, skipping Step 2 as we will not need the MySQL database server.

Additionally, since we will be entering passwords into the web interface, we should secure Apache with SSL encryption. Read How To Secure Apache with Let's Encrypt on Ubuntu 16.04 to download and configure free SSL certificates. You will need a domain name to complete this step. We will use these same certificates to provide secure LDAP connections as well.

Note: the Let's Encrypt guide assumes that your server is accessible to the public internet. If that's not the case, you'll have to use a different certificate provider or perhaps your organization's own certificate authority. Either way, you should be able to complete the tutorial with minimal changes, mostly regarding the paths or filenames of the certificates.

Installing and Configuring the LDAP Server

Our first step is to install the LDAP server and some associated utilities. Luckily, the packages we need are all available in Ubuntu's default repositories.

Log into your server. Since this is our first time using apt-get in this session, we'll refresh our local package index, then install the packages we want:

  • sudo apt-get update

  • sudo apt-get install slapd ldap-utils


During the installation, you will be asked to select and confirm an administrator password for LDAP. You can enter anything here, because you'll have the opportunity to update it in just a moment.

Even though we just installed the package, we're going to go right ahead and reconfigure it. The slapd package has the ability to ask a lot of important configuration questions, but by default they are skipped over in the installation process. We gain access to all of the prompts by telling our system to reconfigure the package:

  • sudo dpkg-reconfigure slapd


There are quite a few new questions to answer in this process. We will be accepting most of the defaults. Let's go through the questions:
  • Omit OpenLDAP server configuration? No
  • DNS domain name?
    • This option will determine the base structure of your directory path. Read the message to understand exactly how this will be implemented. You can actually select whatever value you'd like, even if you don't own the actual domain. However, this tutorial assumes you have a proper domain name for the server, so you should use that. We'll use example.com throughout the tutorial.
  • Organization name?
    • For this guide, we will be using example as the name of our organization. You may choose anything you feel is appropriate.
  • Administrator password? enter a secure password twice
  • Database backend? MDB
  • Remove the database when slapd is purged? No
  • Move old database? Yes
  • Allow LDAPv2 protocol? No
At this point, your LDAP server is configured and running. Open up the LDAP port on your firewall so external clients can connect:

  • sudo ufw allow ldap


Let's test our LDAP connection with ldapwhoami, which should return the username we're connected as:

  • ldapwhoami -H ldap:// -x



Output

anonymous

anonymous is the result we're expecting, since we ran ldapwhoami without logging in to the LDAP server. This means the server is running and answering queries.
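You can also test an authenticated bind as the admin account. A sketch assuming you entered example.com as the DNS domain name during reconfiguration; the -W flag makes ldapwhoami prompt for the administrator password, and the command should echo back the admin DN:

  • ldapwhoami -H ldap:// -x -D "cn=admin,dc=example,dc=com" -W

Output

dn:cn=admin,dc=example,dc=com

Next we'll set up a web interface to manage LDAP data.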

Installing and Configuring the phpLDAPadmin Web Interface

Although it is very possible to administer LDAP through the command line, most users will find it easier to use a web interface. We're going to install phpLDAPadmin, a PHP application which provides this functionality.

The Ubuntu repositories contain a phpLDAPadmin package. You can install it with apt-get:

  • sudo apt-get install phpldapadmin


This will install the application, enable the necessary Apache configurations, and reload Apache.

The web server is now configured to serve the application, but we need to make some additional changes.

We need to configure phpLDAPadmin to use our domain, and to not autofill the LDAP login information.

Begin by opening the main configuration file with root privileges in your text editor:

  • sudo nano /etc/phpldapadmin/config.php


Look for the line that starts with $servers->setValue('server','name'. In nano you can search for a string by typing CTRL-W, then the string, then ENTER. Your cursor will be placed on the correct line.

This line is a display name for your LDAP server, which the web interface uses for headers and messages about the server. Choose anything appropriate here:

/etc/phpldapadmin/config.php
$servers->setValue('server','name','Example LDAP');

Next, move down to the $servers->setValue('server','base' line. This config tells phpLDAPadmin what the root of the LDAP hierarchy is. This is based on the value we typed in when reconfiguring the slapd package. In our example we selected example.com and we need to translate this into LDAP syntax by putting each domain component (everything not a dot) into a dc= notation:
/etc/phpldapadmin/config.php
$servers->setValue('server','base', array('dc=example,dc=com'));

Now find the login bind_id configuration line and comment it out with a # at the beginning of the line:
/etc/phpldapadmin/config.php
#$servers->setValue('login','bind_id','cn=admin,dc=example,dc=com');

This option pre-populates the admin login details in the web interface. This is information we shouldn't share if our phpLDAPadmin page is publicly accessible.

The last thing that we need to adjust is a setting that controls the visibility of some phpLDAPadmin warning messages. By default the application will show quite a few warning messages about template files. These have no impact on our current use of the software. We can hide them by searching for the hide_template_warning parameter, uncommenting the line that contains it, and setting it to true:
/etc/phpldapadmin/config.php
$config->custom->appearance['hide_template_warning'] = true;

Save and close the file to finish. We don't need to restart anything for the changes to take effect.

Next we'll log into phpLDAPadmin.

Logging into the phpLDAPadmin Web Interface

Having made the necessary configuration changes to phpLDAPadmin, we can now begin to use it. Navigate to the application in your web browser. Be sure to substitute your domain for the highlighted area below:

https://example.com/phpldapadmin

The phpLDAPadmin landing page will load. Click on the login link in the left-hand menu on the page. A login form will be presented:


The Login DN is the username that you will be using. It contains the account name as a cn= section, and the domain name you selected for the server broken into dc= sections as described in previous steps. The default admin account that we set up during install is called admin, so for our example we would type in the following:

cn=admin,dc=example,dc=com

After entering the appropriate string for your domain, type in the admin password you created during configuration, then click the Authenticate button.

You will be taken to the main interface:


At this point, you are logged into the phpLDAPadmin interface. You have the ability to add users, organizational units, groups, and relationships.

LDAP is flexible in how you can structure your data and directory hierarchies. You can create whatever kind of structure you'd like and also create rules for how they interact.
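If you prefer the command line for this as well, entries can be added with ldapadd and an LDIF file. A sketch using hypothetical names, an organizational unit called people and a user jane, assuming the example.com base DN from earlier:

people.ldif
dn: ou=people,dc=example,dc=com
objectClass: organizationalUnit
ou: people

dn: uid=jane,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
cn: Jane Doe
sn: Doe
uid: jane
mail: jane@example.com

Load the file while binding as the admin account:

  • ldapadd -x -D cn=admin,dc=example,dc=com -W -f people.ldif

The new entries will then show up in the phpLDAPadmin tree as well.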

Now that we've logged in and familiarized ourselves with the web interface, let's take a moment to provide more security to our LDAP server.

Configuring StartTLS LDAP Encryption

Although we've encrypted our web interface, external LDAP clients are still connecting to the server and passing information around in plain text. Let's use our Let's Encrypt SSL certificates to add encryption to our LDAP server.

Copying the Let's Encrypt Certificates

Because the slapd daemon runs as the user openldap, and Let's Encrypt certificates can only be read by the root user, we'll need to make a few adjustments to allow slapd access to the certificates. We'll create a short script that will copy the certificates to /etc/ssl/, the standard system directory for SSL certificates and keys. The reason we're making a script to do this, instead of just entering the commands manually, is that we'll need to repeat this process automatically whenever the Let's Encrypt certificates are renewed. We'll update the certbot cron job later to enable this.

First, open a new text file for the shell script:

  • sudo nano /usr/local/bin/renew.sh


This will open a blank text file. Paste in the following script. Be sure to update the SITE=example.com portion to reflect where your Let’s Encrypt certificates are stored. You can find the correct value by listing out the certificate directory with sudo ls /etc/letsencrypt/live.
/usr/local/bin/renew.sh
#!/bin/sh

SITE=example.com

# move to the correct let's encrypt directory
cd /etc/letsencrypt/live/$SITE

# copy the files
cp cert.pem /etc/ssl/certs/$SITE.cert.pem
cp fullchain.pem /etc/ssl/certs/$SITE.fullchain.pem
cp privkey.pem /etc/ssl/private/$SITE.privkey.pem

# adjust permissions of the private key
chown :ssl-cert /etc/ssl/private/$SITE.privkey.pem
chmod 640 /etc/ssl/private/$SITE.privkey.pem

# restart slapd to load new certificates
systemctl restart slapd

This script moves into the Let's Encrypt certificate directory, copies files over to /etc/ssl, then updates the private key's permissions to make it readable by the system's ssl-cert group. It also restarts slapd, which will ensure that new certificates are loaded when this script is run from our certbot renewal cron job.

Save and close the file, then make it executable:

  • sudo chmod u+x /usr/local/bin/renew.sh


Then run the script with sudo:

  • sudo /usr/local/bin/renew.sh


Verify that the script worked by listing out the new files in /etc/ssl:

  • sudo su -c 'ls -al /etc/ssl/{certs,private}/example.com*'


The sudo command above is a little different than normal. The su -c '. . .' portion wraps the whole ls command in a root shell before executing it. If we didn't do this, the * wildcard filename expansion would run with your non-sudo user's permissions, and it would fail because /etc/ssl/private is not readable by your user.

ls will print details about the three files. Verify that the ownership and permissions look correct:



Output

-rw-r--r-- 1 root root 1793 May 31 13:58 /etc/ssl/certs/example.com.cert.pem
-rw-r--r-- 1 root root 3440 May 31 13:58 /etc/ssl/certs/example.com.fullchain.pem
-rw-r----- 1 root ssl-cert 1704 May 31 13:58 /etc/ssl/private/example.com.privkey.pem

Next we'll automate this with certbot.

 

Updating the Certbot Renewal Cron Job

We need to update our certbot cron job to run this script whenever the certificates are updated:

  • sudo crontab -e


You should already have a certbot renew line. Add the highlighted portion below:
crontab
 
15 3 * * * /usr/bin/certbot renew --quiet --renew-hook /usr/local/bin/renew.sh

Save and close the crontab. Now, whenever certbot renews the certificates, our script will be run to copy the files, adjust permissions, and restart the slapd server.
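Before relying on the cron job, you can exercise certbot's renewal logic safely; the dry run talks to Let's Encrypt's staging servers and doesn't touch your real certificates. Note that renewal hooks may be skipped during a dry run, so test the copy script itself by running it manually, as we did above:

  • sudo certbot renew --dry-run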

 

Configuring slapd to Offer Secure Connections

We need to add the openldap user to the ssl-cert group so slapd can read the private key:

  • sudo usermod -aG ssl-cert openldap


Restart slapd so it picks up the new group:

  • sudo systemctl restart slapd


Finally, we need to configure slapd to actually use these certificates and keys. To do this we put all of our config changes in an LDIF file — which stands for LDAP data interchange format — and then load the changes into our LDAP server with the ldapmodify command.

Open up a new LDIF file:

  • cd ~

  • nano ssl.ldif


This will open a blank file. Paste the following into the file, updating the filenames to reflect your domain:
ssl.ldif
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/example.com.fullchain.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/example.com.cert.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/example.com.privkey.pem

Save and close the file, then apply the changes with ldapmodify:

  • sudo ldapmodify -H ldapi:// -Y EXTERNAL -f ssl.ldif



Output

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"

We don't need to reload slapd to load the new certificates; this happened automatically when we updated the config with ldapmodify. Run the ldapwhoami command one more time to verify. This time we need to use the proper hostname and add the -ZZ option to force a secure connection:

  • ldapwhoami -H ldap://example.com -x -ZZ


We need the full hostname when using a secure connection because the client will check to make sure that the hostname matches the hostname on the certificate. This prevents man-in-the-middle attacks where an attacker could intercept your connection and impersonate your server.

The ldapwhoami command should return anonymous, with no errors. We've successfully encrypted our LDAP connection.
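To make secure connections convenient for the command-line clients on this machine, you can point the system-wide LDAP client configuration at your server. A sketch of /etc/ldap/ldap.conf, assuming the example.com hostname; Let's Encrypt certificates validate against the CA bundle Ubuntu already ships:

/etc/ldap/ldap.conf
BASE    dc=example,dc=com
URI     ldap://example.com
TLS_CACERT      /etc/ssl/certs/ca-certificates.crt

With this in place, tools like ldapsearch and ldapwhoami will default to the right host and base DN without extra flags.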




Conclusion

In this guide we installed and configured the OpenLDAP slapd server, and the LDAP web interface phpLDAPadmin. We also set up encryption on both servers, and updated certbot to automatically handle slapd's Let's Encrypt certificate renewal process.