
Dell Introduces New Rugged Enterprise Notebooks


The laptops, which start at $1,399, include two revamped models and an all-new thinner and lighter version for mobile workers.

Dell has introduced three new rugged notebooks aimed at enterprise workers who require rugged machines that can withstand tough environmental and physical conditions, including water, dirt, drops and drastic temperature changes.

The Latitude 7424 Rugged Extreme, the Latitude 5424 Rugged and the Latitude 5420 Rugged are the company's latest entries in its 12-year history of building rugged machines for industries from oil and gas to manufacturing, emergency services, warehousing and more.

The Latitude 7424 Rugged Extreme, which starts at $3,499, supersedes an earlier Latitude Rugged 7414 model, while the Latitude 5424 Rugged, which starts at $1,499, supersedes an earlier Latitude Rugged 5414 model. The Latitude 5420 Rugged, which starts at $1,399, is a new addition to the company's lineup that's essentially a thinner and lighter version of the 5424 Rugged machines for users who need a more compact device.

All three notebooks feature a 14-inch Full HD WVA display (1920 x 1080 resolution and 16:9 aspect ratio) with optional glove-capable touch-screen capabilities, a choice of 8th generation Intel Core i5, i7 quad-core processors, vPro 7th generation Intel Core i3 dual-core processors or 6th generation Intel Core i5 dual-core processors, as well as a choice of 8GB, 16GB or 32GB of 2400MHz DDR4 memory.




Several Graphics Processing Options
Other options for all three rugged notebooks include several graphics processing options, including integrated Intel HD 520, 620 and UHD 620 or AMD Radeon 540 and RX540 chips. The notebooks can be configured with PCIe NVMe solid-state drives in 128GB, 256GB, 512GB, 1TB or 2TB capacities, or with 256GB, 512GB or 1TB PCIe NVMe self-encrypting drives.

Optional 8X DVD-ROM, 8X DVD+/-RW or Blu-ray RW drives can be configured for the Latitude 7424 or 5424 models. The Latitude 5420 can use optional DVD or Blu-ray drives only as external devices. The devices also can be ordered with an optional integrated FHD video web or IR camera with a privacy shutter.

All three models can be configured with Microsoft Windows 10 Pro 64-bit or, for 6th generation processors only, with Windows 7 Professional 64-bit.

The three models all include a three-cell 51Whr lithium-ion battery and offer an optional second 3-cell lithium-ion battery for hot-swapping, as well as 10/100/1000 gigabit Ethernet and Intel Dual Band Wireless AC 8265 (802.11ac) 2x2 with Bluetooth 4.1, Intel Dual Band Wireless AC 8265 (802.11ac) 2x2 without Bluetooth, or with a Qualcomm QCA61x4A 802.11ac Dual Band (2x2) wireless adapter plus Bluetooth 4.1. Optional broadband capabilities are also available, including Qualcomm Snapdragon X20 LTE for use with multiple carriers.

Several Ports to Choose From
The machines also include a range of ports, such as USB 3.0 Type A, USB 3.0 Type C, a native RS-232 serial port, an RJ-45 gigabit Ethernet network connector, an HDMI port and a universal audio jack. Optional connectors include an RJ-45 gigabit Ethernet network connector, a second serial port, VGA, DisplayPort or a Fischer USB port.

Slots for SD cards and SIM cards are included, and optional PCMCIA or ExpressCard 54mm slots are available.

Security options include a fingerprint reader, contact and contactless SmartCard readers, and TPM 2.0.

The Latitude 7424 notebook is 13.95 inches long, 10.04 inches wide, 2.02 inches thick and starts at 7.6 pounds depending on configuration, while the 5424 model is 13.66 inches long, 9.62 inches wide, 1.74 inches thick and starts at 5.5 pounds. The 5420 is 13.66 inches long, 9.62 inches wide, 1.29 inches thick and starts at 4.9 pounds.




The Latitude 7424 Rugged Extreme model meets MIL-STD-810G standards for durability and drops from six feet, as well as IP-65 standards against dust and water. The model 5424 and 5420 machines meet MIL-STD-810G standards for drops from 36 inches, as well as IP-52 standards for dust and water protection.

Mirantis Brings Cloud Computing to the Edge Without OpenStack


The Mirantis Cloud Platform Edge is a new Kubernetes-based effort to enable containers and virtual machines to run at the edge of the network.

The concept of edge computing has been steadily evolving in recent years as a way to bring cloud computing type approaches to the edge of network deployments.

On Oct. 25, Mirantis announced its entry into the edge computing market with Mirantis Cloud Platform Edge (MCP Edge). While Mirantis has strong ties as a founding member of the OpenStack Foundation, the MCP Edge technology is not based on OpenStack, instead using the open-source Kubernetes container orchestration system at its core.

"It is Kubernetes plus Virtlet. You can still run VMs [virtual machines] using Virtlet, with direct access to hardware acceleration like SR-IOV [Single Root I/O Virtualization], but Kubernetes is the only resource scheduler," said Mirantis co-founder Boris Renski.




Mirantis has raised approximately $220 million in venture funding, including a $100 million Series B round of funding in August 2015, and has been an active participant in the OpenStack community since 2010. Over the years, Mirantis announced a series of big wins with its OpenStack efforts, including automobile giant Volkswagen Group and NTT Communications.

StarlingX
Edge computing has been an area of interest for the OpenStack Foundation on multiple levels. On Oct. 24, the OpenStack Foundation announced that the open-source StarlingX edge computing project was becoming a stand-alone project at the foundation. Renski said there is no intersection between StarlingX and MCP Edge.

According to Renski, perception of StarlingX in the OpenStack community is a bit controversial.

"It is a fork of the WindRiver Titanium Cloud project that was spun out into open source after Intel sold WindRiver off to private equity," he said. "It is mainly WindRiver KVM, OpenStack and Ceph—no Kubernetes. It doesn't support containers and assumes that edge will be VM-based, which we don't believe to be the long-term scenario. "

MCP Edge
The MCP Edge platform is currently based on the Kubernetes 1.11 release, with Docker as the core container engine. Renski said that for the next release, Mirantis will be migrating to the containerd container runtime as the basis. Containerd is an open-source project hosted at the Cloud Native Computing Foundation (CNCF), and it also serves as the basis for Docker container engine releases.

MCP Edge also integrates the Cni-genie open-source technology that was originally developed by Huawei, which Renski said allows MCP Edge to use multiple interfaces in Kubernetes pods, including VM pods from Virtlet. Virtlet is a core element of MCP Edge, making it possible to provision and schedule VMs.

Because of the way Virtlet is implemented, those VMs can take advantage of hardware acceleration like SR-IOV, which is not something you can do with KubeVirt.

KubeVirt is an open-source project that also enables VM workloads to run on Kubernetes.

Edge Optimization
The core Mirantis Cloud Platform is different from MCP Edge in a number of respects. While MCP is optimized for traditional large-scale cloud deployments, MCP Edge is being optimized for small footprint deployments, according to Renski. He said that a typical MCP Edge deployment will be six nodes, all of which can run workloads.

"All services are HA [high availability], and any two nodes can go down without affecting control plane performance," Renski said. "If you wanted to do the same with OpenStack, you'd need to dedicate three nodes for controller HA and you'd have three dead nodes out of six."





Private Cloud
While Mirantis is now moving forward on edge computing technology, it is still focused on its core business of enabling private cloud deployments.

Red Hat Enterprise Linux 7.6 Released with Improved Security


The latest release of Red Hat's flagship Linux platform adds TPM 2.0 support for security authentication, as well as integrating the open source nftables firewall technology.

Red Hat announced the general availability of its flagship Red Hat Enterprise Linux (RHEL) 7.6 release on Oct. 30, providing organizations with improved security, management and container features.  Among the enhanced features is support for the Trusted Platform Module (TPM) 2.0 specification for security authentication.

TPM 2.0 support has been added incrementally over recent releases of Red Hat Enterprise Linux 7, as the technology has matured. The TPM 2.0 integration in 7.6 provides an additional level of security by tying the hands-off decryption to server hardware in addition to the network bound disk encryption (NBDE) capability, which operates across the hybrid cloud footprint from on-premise servers to public cloud deployments.

RHEL 7.6 is the second major milestone release of Red Hat's enterprise Linux platform in 2018, following RHEL 7.5 which came out on April 10. In 2017, Red Hat only had one major milestone update for its enterprise platform with the release of RHEL 7.4 in August 2017.




Firewall
In addition to TPM 2.0 support, RHEL 7.6 also provides enhanced support for the open-source nftables firewall technology. For the past two decades, the primary Linux firewall technology has been the iptables project, with nftables considered to be the replacement for it, according to Red Hat.

Iptables remains fully supported in Red Hat Enterprise Linux 7 to provide stability and consistency for existing installations. However, nftables will enable enterprises to benefit from increased scale with complex rule matching, improved latency with on-the-fly rules changes, atomic transactions with rollbacks and improved visibility and debuggability.
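
Red Hat's announcement doesn't include examples, but as a rough sketch of what working with nftables looks like (the table, chain and rule below are arbitrary illustrations, not Red Hat's configuration), a minimal ruleset can be built from the nft command line:

# Show any existing nftables rules
sudo nft list ruleset
# Create a table and an input chain hooked into the filter stage
sudo nft add table inet filter
sudo nft add chain inet filter input { type filter hook input priority 0 \; }
# Add a rule; changes like this are applied atomically, without flushing the ruleset
sudo nft add rule inet filter input tcp dport 22 accept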

While RHEL 7.6 is moving forward on firewall support, it isn't yet fully embracing the new TLS 1.3 cryptographic standard. TLS 1.3 became a formal standard in March and is the protocol used to help secure data in motion across the internet. The client side of TLS 1.3 is supported by the Firefox web browser and other select client applications. The server side requires dependencies that would violate Red Hat's commitment to application compatibility and ABI/KABI stability in Red Hat Enterprise Linux 7.6.

Ansible
Management and automation in RHEL 7.6 get a boost with support for Red Hat Enterprise Linux System Roles, which are a set of Ansible modules. Ansible is Red Hat's configuration management and automation platform.

Red Hat Enterprise Linux System Roles, powered by Red Hat Ansible Automation, are incorporated into the Satellite configuration management capabilities. Now fully supported, these System Roles provide consistency across Red Hat Enterprise Linux releases and integrate with Red Hat products such as Red Hat Satellite Server and Red Hat Ansible Tower.

Container Toolkit
Red Hat is also adding a new project to its container toolkit in RHEL 7.6 with the inclusion of the Podman project. With Podman, containers can be run outside of Kubernetes. The Podman project joins Buildah for building images, and Skopeo for signing images in the container toolkit.





With a more distributed container toolkit, customers have more choice in how they build, deploy, find and share cloud-native applications, all without having to run a container daemon or engine on a system that was never intended to do so.
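
As an illustrative sketch only (the package ships through the RHEL Extras channel, and the image name here is just an example), running a container with Podman requires no long-running daemon:

# Install Podman and run a container directly, with no container daemon in the picture
sudo yum install -y podman
podman run --rm docker.io/library/alpine:latest echo "no container daemon needed"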

Wireless Access Points From Multiple Vendors Are Potentially at Risk


Internet of Things (IoT) security vendor Armis has found another set of Bluetooth flaws; this time the issues are in Texas Instruments chips that are used in widely deployed enterprise WiFi access points.

Bleedingbit was publicly announced by IoT security firm Armis on Nov. 1; it impacts Bluetooth Low Energy (BLE) chips made by Texas Instruments (TI) that are used in Cisco, Meraki and Aruba wireless access points. According to Armis, the impacted vendors were all contacted in advance of the disclosure so that patches could be made available.

The Bleedingbit vulnerabilities include a memory corruption issue that impacts TI BLE chips (cc2640, cc2650 and cc2640r2). Another Bleedingbit issue is a backdoor within the over-the-air firmware download (OAD) capability on the TI BLE chip (cc2540). Armis warned that the issues could have potentially enabled an attacker to gain unauthorized access to an enterprise network.

Bluetooth is a local protocol that only works within a limited physical range. Ben Seri, head of research at Armis, commented that BLE has a nominal range of 100 meters and if the attacker adds a directional antenna, the range can be doubled or even tripled.

The attacker needs to be within this range only for the initial attack. After an AP has been compromised, the attacker can create an outbound connection over the internet to a C&C [command and control] server, and the attacker can walk away.

Bleedingbit vulnerabilities were found by a combination of static analysis and dynamic fuzzing. There are now quite a few tools for dynamic fuzzing of BLE: Basic gatttool fuzzing, Ubertooth hacking, BLE-Replay and other tools to conduct man-in-the-middle (MiTM) attacks.

OAD
The OAD feature is something that Armis views as being a backdoor that shouldn't have been present. Seri explained that the OAD feature, by default, does not have any framework for validating authenticity of a firmware upgrade.

OAD applies to the firmware of the BLE chip, not to the main system of the access point. Firmware upgrades of the main system are signed and protected. Nevertheless, once an attacker compromises the BLE chip of an Aruba AP, he can then get access to the console connection of that AP, and through it, he can target the main system as well. TI does provide guidelines for secure OAD, and those need to be implemented by manufacturers if they choose to use this feature in production.

Network Segmentation Corruption
One of the risks that Bleedingbit represents is that it can potentially enable an attacker to modify network segmentation rules that might be in place on an AP. Seri explained that an access point, by design, can serve WiFi for different networks including different SSID, VLAN and subnets.

So once an attacker compromises an access point—via Bleedingbit, for example—the attacker can leverage the inherent power an access point has over these various segments of the networks. He may move freely between these segments, and he can also create a bridge within the compromised AP that allows devices within these segments to communicate directly with one another.

Bleedingbit can eliminate the walls built by network segmentation to prevent unmanaged devices from reaching critical assets within corporate networks.

BlueBorne
The Bleedingbit vulnerabilities are not the first time Armis has reported Bluetooth-related security vulnerabilities. In September 2017, Armis reported the BlueBorne vulnerabilities, a set of eight flaws that impact the Bluetooth stacks used on both Windows and Linux systems.

Seri said that both BlueBorne and Bleedingbit are airborne attack vectors that can be exploited remotely via the air, unlike most attacks conducted through the internet. He added that airborne attacks allow an unauthenticated perpetrator to penetrate a secure network of which they are not a member.

Airborne attacks are beneficial to attackers for several reasons, according to Seri. First, they allow them to operate virtually undetected, as traditional security measures cannot detect them. Second, they are contagious by their nature, allowing the attack to spread to any device in the vicinity of the initial breach.

Both also show that new wireless protocols, or expanded use of existing ones, increase risk and widen the attack surface.

At this point, it's not entirely clear what, if any, impact the Bleedingbit flaw has had to date on vulnerable devices.



How To Set Up Laravel, Nginx, and MySQL with Docker Compose on Ubuntu 18.04


This tutorial will take you through the steps to install and build a web application using the Laravel framework, with Nginx as the web server and MySQL as the database, all running inside Docker containers. We will set up the entire stack configuration in a docker-compose file, along with configuration files for PHP, MySQL, and Nginx.

Prerequisites
To complete this tutorial, you will need one Ubuntu 18.04 server with a non-root user having sudo privileges.

Installing Docker
The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we'll install Docker from the official Docker repository. To do that, we'll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

First, update your existing list of packages:

sudo apt update

Next, install a few prerequisite packages which let apt use packages over HTTPS:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Then add the GPG key for the official Docker repository to your system:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

Next, update the package database with the Docker packages from the newly added repo:

sudo apt update

Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:

apt-cache policy docker-ce

You'll see output like this, although the version number for Docker may be different:

docker-ce:
  Installed: (none)
  Candidate: 18.03.1~ce~3-0~ubuntu
  Version table:
     18.03.1~ce~3-0~ubuntu 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 18.04 (bionic).

Finally, install Docker:

sudo apt install docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:

sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
     Docs: https://docs.docker.com
 Main PID: 10096 (dockerd)
    Tasks: 16
   CGroup: /system.slice/docker.service
           ├─10096 /usr/bin/dockerd -H fd://
           └─10113 docker-containerd --config /var/run/docker/containerd/containerd.toml

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.
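
If you'd like a quick smoke test right away (optional, and assuming the server has internet access to pull the image), you can run Docker's tiny test image:

sudo docker run hello-world

If everything is working, Docker pulls the hello-world image and prints a short confirmation message.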

Executing the Docker Command Without Sudo
By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get output like this:

Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

sudo usermod -aG docker ${USER}

To apply the new group membership, log out of the server and back in, or type the following:

su - ${USER}

You will be prompted to enter your user's password to continue.

Confirm that your user is now added to the docker group by typing:

id -nG

Output
username sudo docker

If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:

sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

Installing Docker Compose
Although we can install Docker Compose from the official Ubuntu repositories, it is several minor versions behind the latest release, so we'll install Docker Compose from Docker's GitHub repository. The command below is slightly different than the one you'll find on the Releases page. By using the -o flag to specify the output file first rather than redirecting the output, this syntax avoids running into a permission denied error caused when using sudo.

We'll check the current release and if necessary, update it in the command below:

sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

Next we'll set the permissions:

sudo chmod +x /usr/local/bin/docker-compose

Then we'll verify that the installation was successful by checking the version:

docker-compose --version

This will print out the version we installed:

Output
docker-compose version 1.21.2, build a133471


Downloading Laravel and Installing Dependencies
As a first step, we will get the latest version of Laravel and install the dependencies for the project, including Composer, the application-level package manager for PHP. We will install these dependencies with Docker to avoid having to install Composer globally.

First, check that you are in your home directory and clone the latest Laravel release to a directory called laravel-app:

cd ~
git clone https://github.com/laravel/laravel.git laravel-app

Move into the laravel-app directory:

cd ~/laravel-app

Next, use Docker's composer image to mount the directories that you will need for your Laravel project and avoid the overhead of installing Composer globally:

docker run --rm -v $(pwd):/app composer install

Using the -v and --rm flags with docker run creates an ephemeral container that will be bind-mounted to your current directory before being removed. This will copy the contents of your ~/laravel-app directory to the container and also ensure that the vendor folder Composer creates inside the container is copied to your current directory.

As a final step, set permissions on the project directory so that it is owned by your non-root user:

sudo chown -R $USER:$USER ~/laravel-app

This will be important when you write the Dockerfile for your application image in Step 4, as it will allow you to work with your application code and run processes in your container as a non-root user.

With your application code in place, you can move on to defining your services with Docker Compose.

Creating the Docker Compose File
Building your applications with Docker Compose simplifies the process of setting up and versioning your infrastructure. To set up our Laravel application, we will write a docker-compose file that defines our web server, database, and application services.

Open the file:

nano ~/laravel-app/docker-compose.yml

In the docker-compose file, you will define three services: app, webserver, and db. Add the following code to the file, being sure to replace the root password for MYSQL_ROOT_PASSWORD, defined as an environment variable under the db service, with a strong password of your choice:

version: '3'
services:

  #PHP Service
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: example.com/php
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    networks:
      - app-network

  #Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    networks:
      - app-network

  #MySQL Service
  db:
    image: mysql:5.7.22
    container_name: db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: laravel
      MYSQL_ROOT_PASSWORD: your_mysql_root_password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - app-network

#Docker Networks
networks:
  app-network:
    driver: bridge

The services defined here include:

app: This service definition contains the Laravel application and runs a custom Docker image, example.com/php, that you will define in Step 4. It also sets the working_dir in the container to /var/www.

webserver: This service definition pulls the nginx:alpine image from Docker and exposes ports 80 and 443.

db: This service definition pulls the mysql:5.7.22 image from Docker and defines a few environmental variables, including a database called laravel for your application and the root password for the database. You are free to name the database whatever you would like, and you should replace your_mysql_root_password with your own strong password. This service definition also maps port 3306 on the host to port 3306 on the container.

Each container_name property defines a name for the container, which corresponds to the name of the service. If you don't define this property, Docker will assign a name to each container by combining a random adjective with the surname of a notable scientist or engineer, separated by an underscore.

To facilitate communication between containers, the services are connected to a bridge network called app-network. A bridge network uses a software bridge that allows containers connected to the same bridge network to communicate with each other. The bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. This creates a greater level of security for applications, ensuring that only related services can communicate with one another. It also means that you can define multiple networks and services connecting to related functions: front-end application services can use a frontend network, for example, and back-end services can use a backend network.
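
Once the containers are up (you will start them in a later step), you can optionally verify the bridge network and see which containers are attached to it. Note that Compose prefixes the network name with the project name, which defaults to the directory name, so the exact name below is an assumption you can confirm with docker network ls:

docker network ls
docker network inspect laravel-app_app-network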

Let's look at how to add volumes and bind mounts to your service definitions to persist your application data.

Persisting Data
Docker has powerful and convenient features for persisting data. In our application, we will make use of volumes and bind mounts for persisting the database, and application and configuration files. Volumes offer flexibility for backups and persistence beyond a container's lifecycle, while bind mounts facilitate code changes during development, making changes to your host files or directories immediately available in your containers. Our setup will make use of both.

Warning: By using bind mounts, you make it possible to change the host filesystem through processes running in a container, including creating, modifying, or deleting important system files or directories. This is a powerful ability with security implications, and could impact non-Docker processes on the host system. Use bind mounts with care.

In the docker-compose file, define a volume called dbdata under the db service definition to persist the MySQL database:

#MySQL Service
db:
  ...
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - app-network

The named volume dbdata persists the contents of the /var/lib/mysql folder present inside the container. This allows you to stop and restart the db service without losing data.

At the bottom of the file, add the definition for the dbdata volume:

#Volumes
volumes:
  dbdata:
    driver: local

With this definition in place, you will be able to use this volume across services.
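
Later, once the containers have been created, you can optionally list and inspect the named volume. As with networks, Compose prefixes the volume name with the project name, so the name shown here is an assumption you should confirm with docker volume ls:

docker volume ls
docker volume inspect laravel-app_dbdata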

Next, add a bind mount to the db service for the MySQL configuration files you will create in Step 7:

#MySQL Service
db:
  ...
    volumes:
      - dbdata:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/my.cnf

This bind mount binds ~/laravel-app/mysql/my.cnf to /etc/mysql/my.cnf in the container.

Next, add bind mounts to the webserver service. There will be two: one for your application code and another for the Nginx configuration definition that you will create in Step 6:

#Nginx Service
webserver:
  ...
  volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
  networks:
      - app-network

The first bind mount binds the application code in the ~/laravel-app directory to the /var/www directory inside the container. The configuration file that you will add to ~/laravel-app/nginx/conf.d/ will also be mounted to /etc/nginx/conf.d/ in the container, allowing you to add or modify the configuration directory's contents as needed.

Finally, add the following bind mounts to the app service for the application code and configuration files:

#PHP Service
app:
  ...
  volumes:
       - ./:/var/www
       - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
  networks:
      - app-network

The app service is bind-mounting the ~/laravel-app folder, which contains the application code, to the /var/www folder in the container. This will speed up the development process, since any changes made to your local application directory will be instantly reflected inside the container. You are also binding your PHP configuration file, ~/laravel-app/php/local.ini, to /usr/local/etc/php/conf.d/local.ini inside the container. You will create the local PHP configuration file in Step 5.

Your docker-compose file will now look like this:

version: '3'
services:

  #PHP Service
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: example.com/php
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network

  #Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network

  #MySQL Service
  db:
    image: mysql:5.7.22
    container_name: db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: laravel
      MYSQL_ROOT_PASSWORD: your_mysql_root_password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql/
      - ./mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network

#Docker Networks
networks:
  app-network:
    driver: bridge
#Volumes
volumes:
  dbdata:
    driver: local

Save the file and exit your editor when you are finished making changes.

With your docker-compose file written, you can now build the custom image for your application.

Creating the Dockerfile
Docker allows you to specify the environment inside of individual containers with a Dockerfile. A Dockerfile enables you to create custom images that you can use to install the software required by your application and configure settings based on your requirements. You can push the custom images you create to Docker Hub or any private registry.

Our Dockerfile will be located in our ~/laravel-app directory. Create the file:

nano ~/laravel-app/Dockerfile

This Dockerfile will set the base image and specify the necessary commands and instructions to build the Laravel application image. Add the following code to the file:

FROM php:7.2-fpm

# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/

# Set working directory
WORKDIR /var/www

# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    mysql-client \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl

# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd

# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www

# Copy existing application directory contents
COPY . /var/www

# Copy existing application directory permissions
COPY --chown=www:www . /var/www

# Change current user to www
USER www

# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]

First, the Dockerfile creates an image on top of the php:7.2-fpm Docker image. This is a Debian-based image that has the PHP FastCGI implementation PHP-FPM installed. The file also installs the prerequisite PHP extensions for Laravel (pdo_mysql, mbstring, zip, exif, pcntl and gd) along with Composer.

The RUN directive specifies the commands to update, install, and configure settings inside the container, including creating a dedicated user and group called www. The WORKDIR instruction specifies the /var/www directory as the working directory for the application.

Creating a dedicated user and group with restricted permissions mitigates the inherent vulnerability when running Docker containers, which run by default as root. Instead of running this container as root, we've created the www user, who has read/write access to the /var/www folder thanks to the COPY instruction that we are using with the --chown flag to copy the application folder's permissions.

Finally, the EXPOSE command exposes a port in the container, 9000, for the php-fpm server. CMD specifies the command that should run once the container is created. Here, CMD specifies "php-fpm", which will start the server.

Save the file and exit your editor when you are finished making changes.
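
If you want to catch Dockerfile problems early, you can optionally pre-build just the app image now; this step isn't required, since docker-compose up will build the image automatically later:

docker-compose build app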

You can now move on to defining your PHP configuration.

Configuring PHP
Now that you have defined your infrastructure in the docker-compose file, you can configure the PHP service to act as a PHP processor for incoming requests from Nginx.

To configure PHP, you will create the local.ini file inside the php folder. This is the file that you bind-mounted to /usr/local/etc/php/conf.d/local.ini inside the container in Step 2. Creating this file will allow you to override the default php.ini file that PHP reads when it starts.

Create the php directory:

mkdir ~/laravel-app/php

Next, open the local.ini file:

nano ~/laravel-app/php/local.ini

To demonstrate how to configure PHP, we'll add the following code to set size limitations for uploaded files:

upload_max_filesize=40M
post_max_size=40M

The upload_max_filesize and post_max_size directives set the maximum allowed size for uploaded files, and demonstrate how you can set php.ini configurations from your local.ini file. You can put any PHP-specific configuration that you want to override in the local.ini file.

Save the file and exit your editor.
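
Once the app container is running (you will start the containers in a later step), you can optionally confirm that these overrides were picked up:

docker-compose exec app php -i | grep -E 'upload_max_filesize|post_max_size'

Both values should report 40M.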

With your PHP local.ini file in place, you can move on to configuring Nginx.

Configuring Nginx
With the PHP service configured, you can modify the Nginx service to use PHP-FPM as the FastCGI server to serve dynamic content. The FastCGI server is based on a binary protocol for interfacing interactive programs with a web server.

To configure Nginx, you will create an app.conf file with the service configuration in the ~/laravel-app/nginx/conf.d/ folder.

First, create the nginx/conf.d/ directory:

mkdir -p ~/laravel-app/nginx/conf.d

Next, create the app.conf configuration file:

nano ~/laravel-app/nginx/conf.d/app.conf

Add the following code to the file to specify your Nginx configuration:

server {
    listen 80;
    index index.php index.html;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}

The server block defines the configuration for the Nginx web server with the following directives:

listen: This directive defines the port on which the server will listen to incoming requests.
error_log and access_log: These directives define the files for writing logs.
root: This directive sets the root folder path, forming the complete path to any requested file on the local file system.

In the php location block, the fastcgi_pass directive specifies that the app service is listening on a TCP socket on port 9000. This makes the PHP-FPM server listen over the network rather than on a Unix socket. Though a Unix socket has a slight advantage in speed over a TCP socket, it has no network protocol and so cannot reach beyond the local machine. For cases where hosts are located on one machine, a Unix socket may make sense, but in cases where you have services running on different hosts, a TCP socket offers the advantage of allowing you to connect to distributed services. Because our app container is running on a different host from our webserver container, a TCP socket makes the most sense for our configuration.

Save the file and exit your editor when you are finished making changes.

Thanks to the bind mount you created in Step 2, any changes you make inside the nginx/conf.d/ folder will be directly reflected inside the webserver container.
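
If you edit app.conf again after the stack is running, you can optionally validate the syntax and reload Nginx without recreating the container:

docker-compose exec webserver nginx -t
docker-compose exec webserver nginx -s reload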

Next, let's look at our MySQL settings.

Configuring MySQL
With PHP and Nginx configured, you can enable MySQL to act as the database for your application.

To configure MySQL, you will create the my.cnf file in the mysql folder. This is the file that you bind-mounted to /etc/mysql/my.cnf inside the container in Step 2. This bind mount allows you to override the my.cnf settings as and when required.

To demonstrate how this works, we'll add settings to the my.cnf file that enable the general query log and specify the log file.

First, create the mysql directory:

mkdir ~/laravel-app/mysql

Next, make the my.cnf file:

nano ~/laravel-app/mysql/my.cnf

In the file, add the following code to enable the query log and set the log file location:

[mysqld]
general_log = 1
general_log_file = /var/lib/mysql/general.log

This my.cnf file enables logs, defining the general_log setting as 1 to allow general logs. The general_log_file setting specifies where the logs will be stored.

Save the file and exit your editor.
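
After the containers are started in the next step, you can optionally confirm that queries are being written to the general log inside the db container:

docker-compose exec db tail -n 20 /var/lib/mysql/general.log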

Our next step will be to start the containers.

Running the Containers and Modifying Environment Settings
Now that you have defined all of your services in your docker-compose file and created the configuration files for these services, you can start the containers. As a final step, though, we will make a copy of the .env.example file that Laravel includes by default and name the copy .env, which is the file Laravel expects to define its environment:

cp .env.example .env

We will configure the specific details of our setup in this file once we have started the containers.

With all of your services defined in your docker-compose file, you just need to issue a single command to start all of the containers, create the volumes, and set up and connect the networks:

docker-compose up -d

When you run docker-compose up for the first time, it will download all of the necessary Docker images, which might take a while. Once the images are downloaded and stored in your local machine, Compose will create your containers. The -d flag daemonizes the process, running your containers in the background.

Once the process is complete, use the following command to list all of the running containers:

docker ps

You will see the following output with details about your app, webserver, and db containers:

Output
CONTAINER ID        NAMES               IMAGE                             STATUS              PORTS
c31b7b3251e0        db                  mysql:5.7.22                      Up 2 seconds        0.0.0.0:3306->3306/tcp
ed5a69704580        app                 example.com/php                  Up 2 seconds        9000/tcp
5ce4ee31d7c0        webserver           nginx:alpine                      Up 2 seconds        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp

The CONTAINER ID in this output is a unique identifier for each container, while NAMES lists the service name associated with each. You can use both of these identifiers to access the containers. IMAGE defines the image name for each container, while STATUS provides information about the container's state: whether it's running, restarting, or stopped.

You can now modify the .env file on the app container to include specific details about your setup.

Open the file using docker-compose exec, which allows you to run specific commands in containers. In this case, you are opening the file for editing:

docker-compose exec app nano .env

Find the block that specifies DB_CONNECTION and update it to reflect the specifics of your setup. You will modify the following fields:


    DB_HOST will be your db database container.
    DB_DATABASE will be the laravel database.
    DB_USERNAME will be the username you will use for your database. In this case, we will use laraveluser.
    DB_PASSWORD will be the secure password you would like to use for this user account.

DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=laraveluser
DB_PASSWORD=your_laravel_db_password

Save your changes and exit your editor.

Next, set the application key for the Laravel application with the php artisan key:generate command. This command will generate a key and copy it to your .env file, ensuring that your user sessions and encrypted data remain secure:

docker-compose exec app php artisan key:generate

You now have the environment settings required to run your application. To cache these settings into a file, which will boost your application's load speed, run:

docker-compose exec app php artisan config:cache

Your configuration settings will be loaded into /var/www/bootstrap/cache/config.php on the container.
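
Keep in mind that cached settings take precedence over .env; if you change .env later, clear and rebuild the cache so the new values take effect:

docker-compose exec app php artisan config:clear
docker-compose exec app php artisan config:cache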

As a final step, visit http://your_server_ip in the browser. You will see the following home page for your Laravel application:



With your containers running and your configuration information in place, you can move on to configuring your user information for the laravel database on the db container.

Creating a User for MySQL
The default MySQL installation only creates the root administrative account, which has unlimited privileges on the database server. In general, it's better to avoid using the root administrative account when interacting with the database. Instead, let's create a dedicated database user for our application's Laravel database.

To create a new user, execute an interactive bash shell on the db container with docker-compose exec:

docker-compose exec db bash

Inside the container, log into the MySQL root administrative account:

mysql -u root -p

You will be prompted for the password you set for the MySQL root account during installation in your docker-compose file.

Start by checking for the database called laravel, which you defined in your docker-compose file. Run the show databases command to check for existing databases:

mysql> show databases;

You will see the laravel database listed in the output:

Output
+--------------------+
| Database           |
+--------------------+
| information_schema |
| laravel            |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

Next, create the user account that will be allowed to access this database. Our username will be laraveluser, though you can replace this with another name if you'd prefer. Just be sure that your username and password here match the details you set in your .env file in the previous step:

mysql> GRANT ALL ON laravel.* TO 'laraveluser'@'%' IDENTIFIED BY 'your_laravel_db_password';

Flush the privileges to notify the MySQL server of the changes:

mysql> FLUSH PRIVILEGES;

Exit MySQL:

mysql> EXIT;

Finally, exit the container:

# exit

You have configured the user account for your Laravel application database and are ready to migrate your data and work with the Tinker console.

Migrating Data and Working with the Tinker Console
With your application running, you can migrate your data and experiment with the tinker command, which will initiate a PsySH console with Laravel preloaded. PsySH is a runtime developer console and interactive debugger for PHP, and Tinker is a REPL specifically for Laravel. Using the tinker command will allow you to interact with your Laravel application from the command line in an interactive shell.

First, test the connection to MySQL by running the Laravel artisan migrate command, which creates a migrations table in the database from inside the container:

docker-compose exec app php artisan migrate

This command will migrate the default Laravel tables. The output confirming the migration will look like this:

Output
Migration table created successfully.
Migrating: 2014_10_12_000000_create_users_table
Migrated:  2014_10_12_000000_create_users_table
Migrating: 2014_10_12_100000_create_password_resets_table
Migrated:  2014_10_12_100000_create_password_resets_table

Once the migration is complete, you can run a query to check if you are properly connected to the database using the tinker command:

docker-compose exec app php artisan tinker

Test the MySQL connection by getting the data you just migrated:

>>> \DB::table('migrations')->get();

You will see output that looks like this:

Output
=> Illuminate\Support\Collection {#2856
     all: [
       {#2862
         +"id": 1,
         +"migration": "2014_10_12_000000_create_users_table",
         +"batch": 1,
       },
       {#2865
         +"id": 2,
         +"migration": "2014_10_12_100000_create_password_resets_table",
         +"batch": 1,
       },
     ],
   }

You can use tinker to interact with your databases and to experiment with services and models.

With your Laravel application in place, you are ready for further development and experimentation.

Conclusion
You now have a LEMP stack application running on your server, which you've tested by accessing the Laravel welcome page and creating MySQL database migrations.

How To Set Up Jupyter Notebook with Python 3 on Ubuntu 18.04 Server

An open-source web application, Jupyter Notebook lets you create and share interactive code, visualizations, and more. This tool can be used with several programming languages, including Python, Julia, R, Haskell, and Ruby. It is often used for working with data, statistical modeling, and machine learning.

In this tutorial, we will set up Jupyter Notebook to run on an Ubuntu 18.04 server and learn how to connect to and use the notebook.

Prerequisites
To follow this tutorial, you should have a fresh Ubuntu 18.04 server instance with a basic firewall and a non-root user with sudo privileges configured. You can learn how to set this up by going through our basic server setup guide.

Set Up Python
To begin the process, we’ll install the dependencies we need for our Python programming environment from the Ubuntu repositories. Ubuntu 18.04 comes preinstalled with Python 3.6. We will use the Python package manager pip to install additional components a bit later.

We first need to update the local apt package index and then download and install the packages:

sudo apt update

Next, install pip and the Python header files, which are used by some of Jupyter’s dependencies:

sudo apt install python3-pip python3-dev

We can now move on to setting up a Python virtual environment into which we’ll install Jupyter.

Create a Python Virtual Environment for Jupyter
Now that we have Python 3, its header files, and pip ready to go, we can create a Python virtual environment to manage our projects. We will install Jupyter into this virtual environment.

To do this, we first need access to the virtualenv command which we can install with pip.

Upgrade pip and install the package by typing:

sudo -H pip3 install --upgrade pip
sudo -H pip3 install virtualenv

The -H flag ensures that the security policy sets the home environment variable to the home directory of the target user.

With virtualenv installed, we can start forming our environment. Create and move into a directory where we can keep our project files. We’ll call this my_project_dir, but you should use a name that is meaningful for you and what you’re working on.

mkdir ~/my_project_dir
cd ~/my_project_dir

Within the project directory, we’ll create a Python virtual environment. For the purpose of this tutorial, we’ll call it my_project_env but you should call it something that is relevant to your project.

virtualenv my_project_env

This will create a directory called my_project_env within your my_project_dir directory. Inside, it will install a local version of Python and a local version of pip. We can use this to install and configure an isolated Python environment for Jupyter.

Before we install Jupyter, we need to activate the virtual environment. You can do that by typing:

source my_project_env/bin/activate

Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (my_project_env)user@host:~/my_project_dir$.

You’re now ready to install Jupyter into this virtual environment.

Install Jupyter
With your virtual environment active, install Jupyter with the local instance of pip.

Note: When the virtual environment is activated (when your prompt has (my_project_env) preceding it), use pip instead of pip3, even if you are using Python 3. The virtual environment's copy of the tool is always named pip, regardless of the Python version.

pip install jupyter

At this point, you’ve successfully installed all the software needed to run Jupyter. We can now start the Notebook server.

Run Jupyter Notebook
You now have everything you need to run Jupyter Notebook! To run it, execute the following command:

jupyter notebook

A log of the activities of the Jupyter Notebook will be printed to the terminal. When you run Jupyter Notebook, it runs on a specific port number. The first Notebook you run will usually use port 8888. To check the specific port number Jupyter Notebook is running on, refer to the output of the command used to start it:

Output
[I 21:23:21.198 NotebookApp] Writing notebook server cookie secret to /run/user/1001/jupyter/notebook_cookie_secret
[I 21:23:21.361 NotebookApp] Serving notebooks from local directory: /home/username/my_project_dir
[I 21:23:21.361 NotebookApp] The Jupyter Notebook is running at:
[I 21:23:21.361 NotebookApp] http://localhost:8888/?token=1fefa6ab49a498a3f37c959404f7baf16b9a2eda3eaa6d72
[I 21:23:21.361 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 21:23:21.361 NotebookApp] No web browser found: could not locate runnable browser.
[C 21:23:21.361 NotebookApp]

Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=1fefa6ab49a498a3f37c959404f7baf16b9a2eda3eaa6d72

If you are running Jupyter Notebook on a local computer (not on a server), you can navigate to the displayed URL to connect to Jupyter Notebook. If you are running Jupyter Notebook on a server, you will need to connect to the server using SSH tunneling as outlined in the next section.
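
On a headless server you can also suppress the browser warning shown in the log above by passing the --no-browser flag (and optionally an explicit port), for example:

jupyter notebook --no-browser --port=8888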

At this point, you can keep the SSH connection open and keep Jupyter Notebook running or you can exit the app and re-run it once you set up SSH tunneling. Let's choose to stop the Jupyter Notebook process. We will run it again once we have SSH tunneling set up. To stop the Jupyter Notebook process, press CTRL+C, type Y, and then ENTER to confirm. The following output will be displayed:

Output
[C 21:28:28.512 NotebookApp] Shutdown confirmed
[I 21:28:28.512 NotebookApp] Shutting down 0 kernels

We’ll now set up an SSH tunnel so that we can access the Notebook.

Connect to the Server Using SSH Tunneling
In this section we will learn how to connect to the Jupyter Notebook web interface using SSH tunneling. Since Jupyter Notebook will run on a specific port on the server (such as :8888, :8889 etc.), SSH tunneling enables you to connect to the server’s port securely.

The next two subsections describe how to create an SSH tunnel from 1) a Mac or Linux, and 2) Windows. Please refer to the subsection for your local computer.

SSH Tunneling with a Mac or Linux
If you are using a Mac or Linux, the steps for creating an SSH tunnel are similar to using SSH to log in to your remote server, except that there are additional parameters in the ssh command. This subsection will outline the additional parameters needed in the ssh command to tunnel successfully.

SSH tunneling can be done by running the following SSH command in a new local terminal window

ssh -L 8888:localhost:8888 your_server_username@your_server_ip

The ssh command opens an SSH connection, but -L specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side (server). This means that whatever is running on the second port number (e.g. 8888) on the server will appear on the first port number (e.g. 8888) on your local computer.

Optionally change port 8888 to one of your choosing to avoid using a port already in use by another process.

your_server_username is your username on the server (e.g. peter) and your_server_ip is the IP address of your server.

For example, for the username peter and the server address 203.0.113.0, the command would be:

ssh -L 8888:localhost:8888 peter@203.0.113.0

If no error shows up after running the ssh -L command, you can move into your programming environment and run Jupyter Notebook:

jupyter notebook

You’ll receive output with a URL. From a web browser on your local machine, open the Jupyter Notebook web interface with the URL that starts with http://localhost:8888. Ensure that the token number is included, or enter the token number string when prompted at http://localhost:8888.

SSH Tunneling with Windows and Putty
If you are using Windows, you can create an SSH tunnel using Putty.

First, enter the server URL or IP address as the hostname as shown:



Next, click SSH on the bottom of the left pane to expand the menu, and then click Tunnels. Enter the local port number you want to use to access Jupyter on your local machine. Choose 8000 or greater to avoid ports used by other services, and set the destination as localhost:8888 where :8888 is the number of the port that Jupyter Notebook is running on.

Now click the Add button, and the ports should appear in the Forwarded ports list:

Forwarded ports list



Finally, click the Open button to connect to the server via SSH and tunnel the desired ports. Navigate to http://localhost:8000 (or whatever port you chose) in a web browser to connect to Jupyter Notebook running on the server. Ensure that the token number is included, or enter the token number string when prompted at http://localhost:8000.

Using Jupyter Notebook
This section goes over the basics of using Jupyter Notebook. If you don’t currently have Jupyter Notebook running, start it with the jupyter notebook command.

You should now be connected to it using a web browser. Jupyter Notebook is a very powerful tool with many features. This section will outline a few of the basic features to get you started using the Notebook. Jupyter Notebook will show all of the files and folders in the directory it is run from, so when you’re working on a project make sure to start it from the project directory.

To create a new Notebook file, select New > Python 3 from the top right pull-down menu:



This will open a Notebook. We can now run Python code in the cell or change the cell to markdown. For example, change the first cell to accept Markdown by clicking Cell > Cell Type > Markdown from the top navigation bar. We can now write notes using Markdown and even include equations written in LaTeX by putting them between the $$ symbols. For example, type the following into the cell after changing it to markdown:

# First Equation

Let us now implement the following equation:
$$ y = x^2$$

where $x = 2$

To turn the Markdown into rich text, press CTRL+ENTER, and the following should be the result:



You can use the markdown cells to make notes and document your code. Let's implement that equation and print the result. Click on the top cell, then press ALT+ENTER to add a cell below it. Enter the following code in the new cell.

x = 2
y = x**2
print(y)

To run the code, press CTRL+ENTER. You’ll receive the following results:



You now have the ability to import modules and use the Notebook as you would with any other Python development environment!

Conclusion
You should now be able to write reproducible Python code and notes in Markdown using Jupyter Notebook. To get a quick tour of Jupyter Notebook from within the interface, select Help > User Interface Tour from the top navigation menu.

These 22 apps contain malware and should be deleted from your Android phone now


Experts at security firm Sophos are warning Android phone owners about 22 dodgy apps that drain battery life and could result in a big phone bill. The Sun reported that the "click-fraud" apps pretend to be normal apps on the Google Play Store, but secretly perform criminal actions out of sight.

The 22 apps have been collectively downloaded more than 22 million times.

"From the user's perspective, these apps drain their phone's battery and may cause data overages as the apps are constantly running and communicating with servers in the background," researchers told The Sun.

Warning signs include increased data usage and fast-draining battery life; however, pinning such issues to a particular app is hard. While Google removed the dodgy apps from the Play Store the week of November 25, they can still operate if already installed on an Android phone.

These are the apps you should uninstall right now:

• Sparkle FlashLight — com.sparkle.flashlight
• Snake Attack — com.mobilebt.snakefight
• Math Solver — com.mobilebt.mathsolver
• ShapeSorter — com.mobilebt.shapesorter
• Tak A Trip — com.takatrip.android
• Magnifeye — com.magnifeye.android
• Join Up — com.pesrepi.joinup
• Zombie Killer — com.pesrepi.zombiekiller
• Space Rocket — com.pesrepi.spacerocket
• Neon Pong — com.pesrepi.neonpong
• Just Flashlight — app.mobile.justflashlight
• Table Soccer — com.mobile.tablesoccer
• Cliff Diver — com.mobile.cliffdiver
• Box Stack — com.mobile.boxstack
• Jelly Slice — net.kanmobi.jellyslice
• AK Blackjack — com.maragona.akblackjack
• Colour Tiles — com.maragona.colortiles
• Animal Match — com.beacon.animalmatch
• Roulette Mania — com.beacon.roulettemania
• HexaFall — com.atry.hexafall
• HexaBlocks — com.atry.hexablocks
• PairZap — com.atry.pairzap

How To Install Memcached on Ubuntu 18.04 Server


Memory object caching systems like Memcached can optimize back-end database performance by temporarily storing information in memory, retaining frequently or recently requested records. In this way, they reduce the number of direct requests to your databases.

In this tutorial, we will show you how to set up Memcached on an Ubuntu 18.04 server and secure it by binding its installation to a local or private network interface and creating an authorized user for your Memcached instance.

Prerequisites
This guide assumes that you have an Ubuntu 18.04 server set up with a non-root sudo user.




Installing Memcached from the Official Repositories
If you don't already have Memcached installed on your server, you can install it from the official Ubuntu repositories. First, make sure that your local package index is updated:

sudo apt update

Next, install the official package as follows:

sudo apt install memcached

We can also install libmemcached-tools, a library that provides several tools to work with your Memcached server:

sudo apt install libmemcached-tools

Memcached should now be installed as a service on your server, along with tools that will allow you to test its connectivity. We can now move on to securing its configuration settings.

Securing Memcached Configuration Settings
To ensure that our Memcached instance is listening on the local interface 127.0.0.1, we will check the default setting in the configuration file located at /etc/memcached.conf. The current version of Memcached that ships with Ubuntu and Debian has the -l parameter set to the local interface, which prevents denial of service attacks from the network. We can inspect this setting to ensure that it is set correctly.

You can open /etc/memcached.conf with nano:

sudo nano /etc/memcached.conf

To inspect the interface setting, find the following line in the file:

-l 127.0.0.1

If you see the default setting of -l 127.0.0.1 then there is no need to modify this line. If you do modify this setting to be more open, then it is a good idea to also disable UDP, as it is more likely to be exploited in denial of service attacks. To disable UDP (while leaving TCP unaffected), add the following option to the bottom of this file:

-U 0

Save and close the file when you are done.

Restart your Memcached service to apply your changes:

sudo systemctl restart memcached

Verify that Memcached is currently bound to the local interface and listening only for TCP connections by typing:

sudo netstat -plunt

You should see the following output:

Output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      2279/memcached

This confirms that memcached is bound to the 127.0.0.1 address using only TCP.

Adding Authorized Users
To add authenticated users to your Memcached service, it is possible to use Simple Authentication and Security Layer (SASL), a framework that de-couples authentication procedures from application protocols. We will enable SASL within our Memcached configuration file and then move on to adding a user with authentication credentials.

Configuring SASL Support
We can first test the connectivity of our Memcached instance with the memcstat command. This will help us establish that SASL and user authentication are enabled after we make changes to our configuration files.

To check that Memcached is up and running, type the following:

memcstat --servers="127.0.0.1"

You should see output like the following:

Output
Server: 127.0.0.1 (11211)
         pid: 2279
         uptime: 65
         time: 1546620611
         version: 1.5.6

Now we can move on to enabling SASL. First, we will add the -S parameter to /etc/memcached.conf. Open the file again:

sudo nano /etc/memcached.conf

At the bottom of the file, add the following:

-S

Next, find and uncomment the -vv option, which will provide verbose output to /var/log/memcached. The uncommented line should look like this:

-vv

Save and close the file.

Restart the Memcached service:

sudo systemctl restart memcached

Next, we can take a look at the logs to be sure that SASL support has been enabled:

sudo journalctl -u memcached

You should see the following line, indicating that SASL support has been initialized:

Output
Jan 07 09:30:12 memcached systemd-memcached-wrapper[2310]: Initialized SASL.

We can check the connectivity again, but because SASL has been initialized, this command should fail without authentication:

memcstat --servers="127.0.0.1"

This command should not produce output. We can type the following to check its status:

echo $?

$? will always return the exit code of the last command that exited. Typically, anything besides 0 indicates process failure. In this case, we should see an exit status of 1, which tells us that the memcstat command failed.

Adding an Authenticated User
Now we can download sasl2-bin, a package that contains administrative programs for the SASL user database. This will allow us to create our authenticated user:

sudo apt install sasl2-bin

Next, we will create the directory and file that Memcached will check for its SASL configuration settings:

sudo mkdir /etc/sasl2
sudo nano /etc/sasl2/memcached.conf

Add the following to the SASL configuration file:

mech_list: plain
log_level: 5
sasldb_path: /etc/sasl2/memcached-sasldb2

In addition to specifying our logging level, we will set mech_list to plain, which tells Memcached that it should use its own password file and verify a plaintext password. We will also specify the path to the user database file that we will create next. Save and close the file when you are finished.

Now we will create a SASL database with our user credentials. We will use the saslpasswd2 command to make a new entry for our user in our database using the -c option. Our user will be peter here, but you can replace this name with your own user. Using the -f option, we will specify the path to our database, which will be the path we set in /etc/sasl2/memcached.conf:

sudo saslpasswd2 -a memcached -c -f /etc/sasl2/memcached-sasldb2 peter

You will be asked to type and re-verify a password of your choosing.

Finally, we will give the memcache user ownership over the SASL database:

sudo chown memcache:memcache /etc/sasl2/memcached-sasldb2

Restart the Memcached service:

sudo systemctl restart memcached

Running memcstat again will confirm whether or not our authentication process worked. This time we will run it with our authentication credentials:

memcstat --servers="127.0.0.1" --username=peter --password=your_password

You should see output like the following:

Output
Server: 127.0.0.1 (11211)
         pid: 2772
         uptime: 31
         time: 1546621072
         version: 1.5.6 Ubuntu

Our Memcached service is now successfully running with SASL support and user authentication.

Allowing Access Over the Private Network (Optional)
We have covered how to configure Memcached to listen on the local interface, which can prevent denial of service attacks by protecting the Memcached interface from exposure to outside parties. There may be instances where you will need to allow access from other servers, however. In this case, you can adjust your configuration settings to bind Memcached to the private network interface.

Limiting IP Access With Firewalls
Before you adjust your configuration settings, it is a good idea to set up firewall rules to limit the machines that can connect to your Memcached server. You will need to know the client server’s private IP address to configure your firewall rules.

If you are using the UFW firewall, you can limit access to your Memcached instance by typing the following:

sudo ufw allow from client_server_private_IP/32 to any port 11211

With these changes in place, you can adjust the Memcached service to bind to your server's private networking interface.

Binding Memcached to the Private Network Interface
Now that your firewall is in place, you can adjust the Memcached configuration to bind to your server's private networking interface instead of 127.0.0.1.

We can open the /etc/memcached.conf file again by typing:

sudo nano /etc/memcached.conf

Inside, find the -l 127.0.0.1 line that you checked or modified earlier, and change the address to match your server's private networking interface:

-l memcached_server_private_IP

Save and close the file when you are finished.

Next, restart the Memcached service:

sudo systemctl restart memcached

Check your new settings with netstat to confirm the change:

sudo netstat -plunt

Output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address                                Foreign Address         State       PID/Program name

tcp        0      0 memcached_server_private_IP:11211            0.0.0.0:*               LISTEN      2912/memcached

Test connectivity from your external client to ensure that you can still reach the service. It is a good idea to also check access from a non-authorized client to ensure that your firewall rules are effective.
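For example, from an authorized client machine that has libmemcached-tools installed, you could repeat the earlier memcstat check against the server's private address (substitute your own address and credentials):

memcstat --servers="memcached_server_private_IP" --username=peter --password=your_password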






Conclusion
In this guide, we have covered how to install and secure your Memcached server by configuring it to bind to your local or private network interface and by enabling SASL authentication.

What is Database Sharding

Any application or website that sees significant growth will eventually need to scale in order to accommodate increases in traffic. For data-driven applications and websites, it's critical that scaling is done in a way that ensures the security and integrity of their data. It can be difficult to predict how popular a website or application will become or how long it will maintain that popularity, which is why some organizations choose a database architecture that allows them to scale their databases dynamically.

This article will discuss one such database architecture: sharded databases. Sharding has been receiving lots of attention in recent years, but many don't have a clear understanding of what it is or the scenarios in which it might make sense to shard a database. We will go over what sharding is, some of its main benefits and drawbacks, and also a few common sharding methods.

What is Sharding?
Sharding is a database architecture pattern related to horizontal partitioning — the practice of separating one table's rows into multiple different tables, known as partitions. Each partition has the same schema and columns, but entirely different rows. Likewise, the data held in each is unique and independent of the data held in other partitions.

It can be helpful to think of horizontal partitioning in terms of how it relates to vertical partitioning. In a vertically-partitioned table, entire columns are separated out and put into new, distinct tables. The data held within one vertical partition is independent from the data in all the others, and each holds both distinct rows and columns. The following diagram illustrates how a table could be partitioned both horizontally and vertically:



Sharding involves breaking up one's data into two or more smaller chunks, called logical shards. The logical shards are then distributed across separate database nodes, referred to as physical shards, which can hold multiple logical shards. Despite this, the data held within all the shards collectively represent an entire logical dataset.

Database shards exemplify a shared-nothing architecture. This means that the shards are autonomous; they don't share any of the same data or computing resources. In some cases, though, it may make sense to replicate certain tables into each shard to serve as reference tables. For example, let's say there's a database for an application that depends on fixed conversion rates for weight measurements. Replicating a table containing the necessary conversion rate data into each shard helps to ensure that all of the data required for queries is held in every shard.

Oftentimes, sharding is implemented at the application level, meaning that the application includes code that defines which shard to transmit reads and writes to. However, some database management systems have sharding capabilities built in, allowing you to implement sharding directly at the database level.

Given this general overview of sharding, let's go over some of the positives and negatives associated with this database architecture.

Benefits of Sharding
The main appeal of sharding a database is that it can help to facilitate horizontal scaling, also known as scaling out. Horizontal scaling is the practice of adding more machines to an existing stack in order to spread out the load and allow for more traffic and faster processing. This is often contrasted with vertical scaling, otherwise known as scaling up, which involves upgrading the hardware of an existing server, usually by adding more RAM or CPU.

It's relatively simple to have a relational database running on a single machine and scale it up as necessary by upgrading its computing resources. Ultimately, though, any non-distributed database will be limited in terms of storage and compute power, so having the freedom to scale horizontally makes your setup far more flexible.

Another reason why some might choose a sharded database architecture is to speed up query response times. When you submit a query on a database that hasn't been sharded, it may have to search every row in the table you're querying before it can find the result set you're looking for. For an application with a large, monolithic database, queries can become prohibitively slow. By sharding one table into multiple, though, queries have to go over fewer rows and their result sets are returned much more quickly.

Sharding can also help to make an application more reliable by mitigating the impact of outages. If your application or website relies on an unsharded database, an outage has the potential to make the entire application unavailable. With a sharded database, though, an outage is likely to affect only a single shard. Even though this might make some parts of the application or website unavailable to some users, the overall impact would still be less than if the entire database crashed.

Drawbacks of Sharding
While sharding a database can make scaling easier and improve performance, it can also impose certain limitations. Here, we'll discuss some of these and why they might be reasons to avoid sharding altogether.

The first difficulty that people encounter with sharding is the sheer complexity of properly implementing a sharded database architecture. If done incorrectly, there's a significant risk that the sharding process can lead to lost data or corrupted tables. Even when done correctly, though, sharding is likely to have a major impact on your team's workflows. Rather than accessing and managing one's data from a single entry point, users must manage data across multiple shard locations, which could potentially be disruptive to some teams.

One problem that users sometimes encounter after having sharded a database is that the shards eventually become unbalanced. By way of example, let's say you have a database with two separate shards, one for customers whose last names begin with letters A through M and another for those whose names begin with the letters N through Z. However, your application serves an inordinate amount of people whose last names start with the letter G. Accordingly, the A-M shard gradually accrues more data than the N-Z one, causing the application to slow down and stall out for a significant portion of your users. The A-M shard has become what is known as a database hotspot. In this case, any benefits of sharding the database are canceled out by the slowdowns and crashes. The database would likely need to be repaired and resharded to allow for a more even data distribution.

Another major drawback is that once a database has been sharded, it can be very difficult to return it to its unsharded architecture. Any backups of the database made before it was sharded won't include data written since the partitioning. Consequently, rebuilding the original unsharded architecture would require merging the new partitioned data with the old backups or, alternatively, transforming the partitioned DB back into a single DB, both of which would be costly and time consuming endeavors.

A final disadvantage to consider is that sharding isn't natively supported by every database engine. For instance, PostgreSQL does not include automatic sharding as a feature, although it is possible to manually shard a PostgreSQL database. There are a number of Postgres forks that do include automatic sharding, but these often trail behind the latest PostgreSQL release and lack certain other features. Some specialized database technologies — like MySQL Cluster or certain database-as-a-service products like MongoDB Atlas — do include auto-sharding as a feature, but vanilla versions of these database management systems do not. Because of this, sharding often requires a "roll your own" approach. This means that documentation for sharding or tips for troubleshooting problems are often difficult to find.

These are, of course, only some general issues to consider before sharding. There may be many more potential drawbacks to sharding a database depending on its use case.

Now that we've covered a few of sharding's drawbacks and benefits, we will go over a few different architectures for sharded databases.

Sharding Architectures
Once you've decided to shard your database, the next thing you need to figure out is how you'll go about doing so. When running queries or distributing incoming data to sharded tables or databases, it's crucial that it goes to the correct shard. Otherwise, it could result in lost data or painfully slow queries. In this section, we'll go over a few common sharding architectures, each of which uses a slightly different process to distribute data across shards.

Key Based Sharding
Key based sharding, also known as hash based sharding, involves using a value taken from newly written data — such as a customer's ID number, a client application's IP address, a ZIP code, etc. — and plugging it into a hash function to determine which shard the data should go to. A hash function is a function that takes as input a piece of data (for example, a customer email) and outputs a discrete value, known as a hash value. In the case of sharding, the hash value is a shard ID used to determine which shard the incoming data will be stored on. Altogether, the process looks like this:



To ensure that entries are placed in the correct shards and in a consistent manner, the values entered into the hash function should all come from the same column. This column is known as a shard key. In simple terms, shard keys are similar to primary keys in that both are columns which are used to establish a unique identifier for individual rows. Broadly speaking, a shard key should be static, meaning it shouldn't contain values that might change over time. Otherwise, it would increase the amount of work that goes into update operations, and could slow down performance.
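As a rough illustration of this idea (a sketch only, not tied to any particular database; the shard count, key value, and use of an MD5 hash are assumptions made for the example), a key based router might look like this in Python:

import hashlib

SHARDS = ["shard_0", "shard_1", "shard_2", "shard_3"]  # assumed: four physical shards

def shard_for(shard_key):
    # Hash the shard key to get a stable integer, then map it onto one of the shards.
    # MD5 is used here only to get an evenly distributed value, not for security.
    digest = hashlib.md5(shard_key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("customer-1042"))  # routes this customer ID to one of the four shards

Note that the modulus is tied to the number of shards, which is exactly why adding or removing servers forces existing entries to be remapped, as described next.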

While key based sharding is a fairly common sharding architecture, it can make things tricky when trying to dynamically add or remove additional servers to a database. As you add servers, each one will need a corresponding hash value and many of your existing entries, if not all of them, will need to be remapped to their new, correct hash value and then migrated to the appropriate server. As you begin rebalancing the data, neither the new nor the old hashing functions will be valid. Consequently, your server won't be able to write any new data during the migration and your application could be subject to downtime.

The main appeal of this strategy is that it can be used to evenly distribute data so as to prevent hotspots. Also, because it distributes data algorithmically, there's no need to maintain a map of where all the data is located, as is necessary with other strategies like range or directory based sharding.

Range Based Sharding
Range based sharding involves sharding data based on ranges of a given value. To illustrate, let's say you have a database that stores information about all the products within a retailer's catalog. You could create a few different shards and divvy up each product's information based on which price range it falls into, like this:



The main benefit of range based sharding is that it's relatively simple to implement. Every shard holds a different set of data, but they all share an identical schema with one another and with the original database. The application code just reads which range the data falls into and writes it to the corresponding shard.
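For example (a minimal sketch; the price boundaries and shard names are assumptions, not values from the diagram), that routing logic could be as simple as:

def shard_for_price(price):
    # Assumed price boundaries; real ranges would come from your own catalog.
    if price < 50:
        return "shard_low"
    elif price < 200:
        return "shard_mid"
    return "shard_high"

print(shard_for_price(24.99))  # "shard_low"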

On the other hand, range based sharding doesn't protect data from being unevenly distributed, leading to the aforementioned database hotspots. Looking at the example diagram, even if each shard holds an equal amount of data the odds are that specific products will receive more attention than others. Their respective shards will, in turn, receive a disproportionate number of reads.

Directory Based Sharding
To implement directory based sharding, one must create and maintain a lookup table that uses a shard key to keep track of which shard holds which data. In a nutshell, a lookup table is a table that holds a static set of information about where specific data can be found. The following diagram shows a simplistic example of directory based sharding:




Here, the Delivery Zone column is defined as a shard key. Data from the shard key is written to the lookup table along with whatever shard each respective row should be written to. This is similar to range based sharding, but instead of determining which range the shard key's data falls into, each key is tied to its own specific shard. Directory based sharding is a good choice over range based sharding in cases where the shard key has a low cardinality and it doesn't make sense for a shard to store a range of keys. Note that it's also distinct from key based sharding in that it doesn't process the shard key through a hash function; it just checks the key against a lookup table to see where the data needs to be written.
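To make this concrete (a minimal sketch; the zone names and shard labels are invented for illustration), the lookup could be as simple as a dictionary that maps each shard key value to a shard:

# Assumed lookup table: each Delivery Zone value is tied to a specific shard.
LOOKUP_TABLE = {
    "North": "shard_a",
    "South": "shard_b",
    "East": "shard_a",
    "West": "shard_c",
}

def shard_for_zone(delivery_zone):
    # No hash function involved: the key is simply checked against the lookup table.
    return LOOKUP_TABLE[delivery_zone]

print(shard_for_zone("East"))  # "shard_a"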

The main appeal of directory based sharding is its flexibility. Range based sharding architectures limit you to specifying ranges of values, while key based ones limit you to using a fixed hash function which, as mentioned previously, can be exceedingly difficult to change later on. Directory based sharding, on the other hand, allows you to use whatever system or algorithm you want to assign data entries to shards, and it's relatively easy to dynamically add shards using this approach.

While directory based sharding is the most flexible of the sharding methods discussed here, the need to connect to the lookup table before every query or write can have a detrimental impact on an application's performance. Furthermore, the lookup table can become a single point of failure: if it becomes corrupted or otherwise fails, it can impact one's ability to write new data or access their existing data.

Should we Shard?
Whether or not one should implement a sharded database architecture is almost always a matter of debate. Some see sharding as an inevitable outcome for databases that reach a certain size, while others see it as a headache that should be avoided unless it's absolutely necessary, due to the operational complexity that sharding adds.

Because of this added complexity, sharding is usually only performed when dealing with very large amounts of data. Here are some common scenarios where it may be beneficial to shard a database:

The amount of application data grows to exceed the storage capacity of a single database node.
The volume of writes or reads to the database surpasses what a single node or its read replicas can handle, resulting in slowed response times or timeouts.
The network bandwidth required by the application outpaces the bandwidth available to a single database node and any read replicas, resulting in slowed response times or timeouts.

Before sharding, you should exhaust all other options for optimizing your database. Some optimizations you might want to consider include:

Setting up a remote database. If you're working with a monolithic application in which all of its components reside on the same server, you can improve your database's performance by moving it over to its own machine. This doesn't add as much complexity as sharding since the database's tables remain intact. However, it still allows you to vertically scale your database apart from the rest of your infrastructure.

Implementing caching. If your application's read performance is what's causing you trouble, caching is one strategy that can help to improve it. Caching involves temporarily storing data that has already been requested in memory, allowing you to access it much more quickly later on.

Creating one or more read replicas. Another strategy that can help to improve read performance, this involves copying the data from one database server (the primary server) over to one or more secondary servers. Following this, every new write goes to the primary before being copied over to the secondaries, while reads are made exclusively to the secondary servers. Distributing reads and writes like this keeps any one machine from taking on too much of the load, helping to prevent slowdowns and crashes. Note that creating read replicas involves more computing resources and thus costs more money, which could be a significant constraint for some.

Upgrading to a larger server. In most cases, scaling up one's database server to a machine with more resources requires less effort than sharding. As with creating read replicas, an upgraded server with more resources will likely cost more money. Accordingly, you should only go through with resizing if it truly ends up being your best option.

Bear in mind that if your application or website grows past a certain point, none of these strategies will be enough to improve performance on their own. In such cases, sharding may indeed be the best option for you.

Conclusion
Sharding can be a great solution for those looking to scale their database horizontally. However, it also adds a great deal of complexity and creates more potential failure points for your application. Sharding may be necessary for some, but the time and resources needed to create and maintain a sharded architecture could outweigh the benefits for others.

How To Make Nvidia Graphics Adapter Functional on a CentOS 7 Laptop


If you are dealing with a laptop or desktop that has to run the CentOS 7 operating system with dual graphics adapters, and the machine's BIOS has no option to make the NVIDIA display adapter primary, then this step-by-step guide will walk you through making the NVIDIA graphics adapter functional on CentOS 7 x86_64.
 
I installed the CentOS 7.6 x86_64 operating system on a laptop in order to meet a customer's requirements.

Note: You have to disable "Secure Boot" option from the BIOS before performing the following steps.


STEP1 – Install the latest ELRepo repository

sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
sudo rpm -ivh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

yum -y update
reboot



STEP2 – Uninstall generic (nouveau) driver

Switch to console mode by pressing the key combination Ctrl + Alt + F2 at the login prompt screen and perform the following steps:

rpm -qa | grep nouveau
yum -y autoremove xorg-x11-drv-nouveau
reboot



STEP3 – Install latest nvidia driver

Switch to console mode by pressing the key combination Ctrl + Alt + F2 at the login prompt screen and perform the following steps:

yum -y install nvidia-detect
yum -y install kmod-nvidia
reboot



STEP4 – Install bumblebee package

Switch to console mode by pressing the key combination Ctrl + Alt + F2 at the login prompt screen and perform the following steps:

yum -y install bumblebee
usermod -aG bumblebee yourusername



STEP5 – Update bumblebee configuration files

vi /etc/bumblebee/bumblebee.conf

[bumblebeed]
VirtualDisplay=:8
KeepUnusedXServer=false
ServerGroup=bumblebee
TurnCardOffAtExit=false
NoEcoModeOverride=false
Driver=nvidia
XorgConfDir=/etc/bumblebee/xorg.conf.d

[optirun]
Bridge=auto
VGLTransport=proxy
PrimusLibraryPath=/usr/lib64/primus:/usr/lib32/primus
AllowFallbackToIGC=false

[driver-nvidia]
KernelDriver=nvidia-drm
PMMethod=auto
LibraryPath=
XorgModulePath=
XorgConfFile=/etc/bumblebee/xorg.conf.nvidia

[driver-nouveau]
KernelDriver=nouveau
PMMethod=auto
XorgConfFile=/etc/bumblebee/xorg.conf.nouveau


Save and close.

Now edit the following file, remove the blacklist nvidia line, and leave the blacklist nouveau line as is.

vi /etc/modprobe.d/bumblebee.conf
 

blacklist nouveau

Save and close.

Now verify nvidia graphic card bus id with the following command:

lspci | grep -i nvidia
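The output should contain a line for the discrete NVIDIA adapter similar to the following (the card model shown here is only an example; yours will differ). A bus address of 01:00.0 in this output corresponds to BusID "PCI:01:00:0" in the Xorg configuration:

01:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)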

Then update the NVIDIA display adapter bus ID in the xorg.conf.nvidia file:

vi /etc/bumblebee/xorg.conf.nvidia

Section "ServerLayout"
    Identifier  "Layout0"
    Option      "AutoAddDevices" "false"
    Option      "AutoAddGPU" "false"
EndSection

Section "Device"
    Identifier  "DiscreteNvidia"
    Driver      "nvidia"
    VendorName  "NVIDIA Corporation"

    BusID "PCI:01:00:0"

    Option "ProbeAllGpus" "false"

    Option "NoLogo" "true"
    Option "UseEDID" "false"
    Option "UseDisplayDevice" "none"
EndSection


Save and close.

At this point, edit the /usr/share/applications/nvidia-settings.desktop file and change the Exec line as shown below:

vi /usr/share/applications/nvidia-settings.desktop
 

Exec=optirun nvidia-settings -c :8.0

Save and close.

reboot

Now you can log in to the graphical desktop at the login prompt screen, then, from a terminal, execute the following command to open the NVIDIA control panel:

optirun nvidia-settings -c :8.0


If the NVIDIA control panel opens on your laptop or desktop running CentOS 7.6 with dual graphics adapters, congratulations: you have successfully made the NVIDIA graphics driver functional.
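As an additional optional check (this assumes the glxinfo utility, typically provided by the glx-utils package, is installed; it is not part of the steps above), you can verify that rendering is actually offloaded to the NVIDIA GPU:

optirun glxinfo | grep -i "opengl renderer"

The renderer string should report your NVIDIA card rather than the integrated GPU.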

Migrate Active Directory Domain Services From Windows 2012 R2 to Windows Server 2019

Active Directory migration is now a fairly simple and straightforward task, but there are still a few important things you need to consider before you jump into the migration process for your Active Directory domain controllers.

• Evaluate the organizational requirements for Active Directory migration
• Make a plan for the implementation process
• Prepare physical / virtual resources for the domain controller
• Install Windows Server 2019 Standard or Datacenter edition
• Install the latest patches from Windows Update
• Assign a static IP address to the domain controller
• Install the Active Directory Domain Services role
• Migrate applications and server roles from the existing domain controllers
• Migrate FSMO roles to the new domain controllers
• Demote the old Active Directory domain controllers
• Raise the domain and forest functional level

As per the diagram below, the techsupportpk.com domain has a single domain controller, holding all 5 FSMO roles, running on Windows Server 2012 R2, and the domain and forest functional levels are currently operating at Windows Server 2012 R2. We will add a new domain controller running Windows Server 2019, which will become the new FSMO role holder for the domain. Once the FSMO role migration is completed, the domain controller running Windows Server 2012 R2 will be demoted. After that, the forest and domain functional levels will be raised to the latest available version.



In this demonstration, we are using Win2K12R2 as the domain controller hostname for windows server 2012 R2 and Win2K19 as the domain controller hostname for windows server 2019. These steps can also be applied if you are migrating from Windows server 2008 R2 to Windows Server 2019 or Windows server 2016 to Windows Server 2019.

Note: When you add a new domain controller to the existing infrastructure, it is recommended to add it at the forest root level first and then move down to the domain tree levels.


STEP1 - Add Windows server 2019 to the existing domain as member

Log in to the Windows Server 2019 machine as a member of the local Administrators group and join the server to the domain. After the restart, log in to the server as an Enterprise Administrator.



Assign a static IP address to the server, then launch the PowerShell console as an Administrator and install the AD DS role using the following command:

Install-WindowsFeature –Name AD-Domain-Services -IncludeManagementTools

When the above process has completed, configure the new server as an additional domain controller using the following command:

Install-ADDSDomainController -CreateDnsDelegation:$false -NoGlobalCatalog:$true -InstallDns:$true -DomainName "techsupportpk.com" -SiteName "Default-First-Site-Name" -ReplicationSourceDC "Win2K12R2.techsupportpk.com" -DatabasePath "C:\Windows\NTDS" -LogPath "C:\Windows\NTDS" -NoRebootOnCompletion:$true -SysvolPath "C:\Windows\SYSVOL" -Force:$true

Make sure to replace Win2K12R2.techsupportpk.com with your existing domain controller's FQDN.

Once you execute the command, it will ask for a Safe Mode Administrator password. Use a complex password to proceed.

After the configuration has completed, restart the system and log back in as administrator to check the AD DS status using the following command:

Get-Service adws,kdc,netlogon,dns
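If everything is healthy, all four services should be reported as Running, along the lines of the following illustrative output:

Status   Name               DisplayName
------   ----               -----------
Running  adws               Active Directory Web Services
Running  kdc                Kerberos Key Distribution Center
Running  netlogon           Netlogon
Running  dns                DNS Server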


Execute the following command to list the domain controllers along with the IP addresses and sites they belong to:

Get-ADDomainController -Filter * | Format-Table Name, IPv4Address, Site


STEP2 - Migrate FSMO Roles

It's time to migrate all five FSMO roles to the new domain controller using the following command:

Move-ADDirectoryServerOperationMasterRole -Identity Win2K19 -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster

Make sure to replace Win2K19 with your Windows Server 2019 computer name.

Press A to select Yes to All.

Once the migration process has completed, you can verify the new FSMO role holder using the following command:

Netdom query fsmo
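The output should list the new Windows Server 2019 domain controller as the holder of every role, along the lines of the following (hostnames will reflect your own environment):

Schema master               Win2K19.techsupportpk.com
Domain naming master        Win2K19.techsupportpk.com
PDC                         Win2K19.techsupportpk.com
RID pool manager            Win2K19.techsupportpk.com
Infrastructure master       Win2K19.techsupportpk.com
The command completed successfully.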

This will confirm that you are now running all 5 FSMO roles on the Windows Server 2019 domain controller.


STEP3 - Demote the Windows 2012 R2 Domain Controller

At this point we need to demote the old domain controller, which is running Windows Server 2012 R2. To do that, open PowerShell as an enterprise administrator on the Windows Server 2012 R2 domain controller and execute the following command:

Uninstall-ADDSDomainController -DemoteOperationMasterRole -RemoveApplicationPartition

Once you execute the above command, it will ask you to define a password for the local administrator account.


STEP4 - Raise Domain and Forest Functional Level

The last step is to raise the domain and forest functional level to the highest available level. To do that, execute the following commands on the Windows Server 2019 domain controller.

To upgrade the domain functional level:

Set-ADDomainMode –identity techsupportpk.com -DomainMode Windows2016Domain

Make sure to replace techsupportpk.com with your domain name.

To upgrade the forest functional level:

Set-ADForestMode -Identity techsupportpk.com -ForestMode Windows2016Forest

Note: With Windows Server 2019, there is no domain or forest functional level called Windows2019, so we have to keep it at 2016.

You have now successfully completed the migration of Active Directory Domain Services from Windows Server 2012 R2 to Windows Server 2019.

SOLVED: Printer USB Composite Device (Error Code 10) on Windows 10


You need to perform the following steps to update a Brother printer's firmware when you are facing an issue where a Windows 10 computer recognizes it as a USB Composite Device instead of a printer:

1. Connect the printer in question to a different computer – a computer that recognizes it as a printer and not a USB Composite Device, preferably a computer running on any version of Windows 7.

2. Wait a couple of minutes for the computer to successfully recognize and install the printer as a device.

3. Click here to go to the Downloads section of the official Brother website.

4. Search for your Brother printer using its model number:




5. On the Downloads page for your specific Brother Printer, select your OS family (for example Windows) and OS version (for example Windows 10 (64-bit)) and click on Search.


6. Under the Firmware section, click on the Firmware Update Tool.


7. In the window that pops up, click on your preferred language.


8. Click on Agree to the EULA and Download, and the Firmware Update Tool should start downloading automatically once you get to the next page.


9. Once the Firmware Update Tool has been downloaded successfully, launch it.


10. The wizard will automatically recognize your printer during the process. Follow the wizard through to the end. Once you have completed the wizard, exit it, disconnect your printer from the computer, and connect it to the computer that was previously recognizing it as a USB Composite Device. That computer should now recognize it as a printer rather than a USB Composite Device, and you should be able to print.

Note: If you do not have access to another computer that recognizes your Brother printer as a printer, you may be able to fix this issue by connecting your printer to the same computer via its network port and following the procedure described above.

How To Set Up LEMP Stack on Ubuntu 19.04

The LEMP software stack is a group of software that can be used to serve dynamic web pages and web applications. This is an acronym that describes a Linux operating system, with an Nginx (pronounced like “Engine-X”) web server. The backend data is stored in the MySQL database and the dynamic processing is handled by PHP.



This tutorial demonstrates how to install a LEMP stack on an Ubuntu 19.04 server. The Ubuntu operating system takes care of the first requirement. We will describe how to get the rest of the components up and running.


Prerequisites
Before you begin this guide, you should have a regular, non-root user account on your Ubuntu 19.04 server with sudo privileges.


Installing the Nginx Web Server
Since this is our first time using apt for this session, start off by updating your server’s package index. Following that, install the server:

sudo apt update
sudo apt install -y nginx

On Ubuntu 19.04, Nginx is configured to start running upon installation.

If you have the ufw firewall running, you will need to allow connections to Nginx. Nginx registers itself with ufw upon installation, so the procedure is rather straightforward.

It is recommended that you enable the most restrictive profile that will still allow the traffic you want. Since you haven't configured SSL for your server in this guide, you will only need to allow traffic on port 80.

Enable this by typing:

sudo ufw allow 'Nginx HTTP'
sudo ufw allow 'OpenSSH'

If ufw is not already active, enable it, then check the status to verify the new rules:

sudo ufw enable

sudo ufw status

Output
Status: active

To                         Action      From
--                         ------      ----     
Nginx HTTP                 ALLOW       Anywhere                 
OpenSSH                    ALLOW       Anywhere                           
Nginx HTTP (v6)            ALLOW       Anywhere (v6)            
OpenSSH (v6)               ALLOW       Anywhere (v6)

Make sure Nginx is enabled to start at boot, and restart it to apply the changes:

sudo systemctl enable nginx
sudo systemctl restart nginx


Installing MySQL
Now that you have a Nginx web server, you need to install MySQL (a database management system) to store and manage the data for your website or app.


sudo apt install -y mysql-server


The MySQL database software is now installed, but its configuration is not yet complete.

To secure the installation, MySQL comes with a script that will ask whether we want to modify some insecure defaults. Initiate the script by typing:

sudo mysql_secure_installation

Securing the MySQL server deployment.

Connecting to MySQL using a blank password.
The 'validate_password' plugin is installed on the server.
The subsequent steps will run with the existing configuration
of the plugin.
Please set the password for root here.

New password:

Re-enter new password:

Estimated strength of the password: 100
Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.


Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.

By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.


Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
 - Dropping test database...
Success.

 - Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.

All done!


If you prefer to use a password when connecting to MySQL as root, you will need to switch its authentication method from auth_socket to mysql_native_password. To do this, open up the MySQL prompt from your terminal:

sudo mysql

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.25-1 (Ubuntu)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


Next, check which authentication method each of your MySQL user accounts uses with the following command:

mysql> select user,authentication_string,plugin,host from mysql.user;

+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             |                                           | auth_socket           | localhost |
| mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| debian-sys-maint | *7BB9DB0875131D73FC5741A0910EF9379829BB56 | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+
4 rows in set (0.00 sec)

In this example, you can see that the root user does in fact authenticate using the auth_socket plugin. To configure the root account to authenticate with a password, run the following ALTER USER command. Be sure to change password to a strong password of your choosing:

mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'YourPassword';

Query OK, 0 rows affected (0.00 sec)

Then, run FLUSH PRIVILEGES which tells the server to reload the grant tables and put your new changes into effect:

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Check the authentication methods employed by each of your users again to confirm that root no longer authenticates using the auth_socket plugin:

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;

+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             | *8232A1298A49F710DBEE0B330C42EEC825D4190A | mysql_native_password | localhost |
| mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| debian-sys-maint | *7BB9DB0875131D73FC5741A0910EF9379829BB56 | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+
4 rows in set (0.00 sec)

You can see in this example output that the root MySQL user now authenticates using a password. Once you confirm this on your own server, you can exit the MySQL shell:

mysql> exit

Note: After configuring your root MySQL user to authenticate with a password, you'll no longer be able to access MySQL with the sudo mysql command used previously. Instead, you must run the following:

mysql -u root -p

Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.25-1 (Ubuntu)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit

At this point, your database system is now set up and you can move on to installing PHP.


Installing PHP
Since Nginx does not contain native PHP processing like some other web servers, you will need to install php-fpm, which stands for "FastCGI Process Manager". We will tell Nginx to pass PHP requests to this software for processing.

Install the php-fpm module along with an additional helper package, php-mysql, which will allow PHP to communicate with your database backend. The installation will pull in the necessary PHP core files. Do this by typing:

sudo apt install -y php-fpm php-mysql

You now have all of the required LEMP stack components installed, but you still need to make a few configuration changes in order to tell Nginx to use the PHP processor for dynamic content.

Now open a new server block configuration file within the /etc/nginx/sites-available/ directory. In this example, the new server block configuration file is named sample.com, although you can name yours whatever you’d like:

sudo nano /etc/nginx/sites-available/sample.com

Add the following content, which was taken and slightly modified from the default server block configuration file, to your new server block configuration file:

server {
        listen 80;
        root /var/www/html;
        index index.php index.html index.htm index.nginx-debian.html;
        server_name sample.com;

        location / {
                try_files $uri $uri/ =404;
        }

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        }

        location ~ /\.ht {
                deny all;
        }
}

After adding this content, save and close the file. Enable your new server block by creating a symbolic link from your new server block configuration file (in the /etc/nginx/sites-available/ directory) to the /etc/nginx/sites-enabled/ directory:

sudo ln -s /etc/nginx/sites-available/sample.com /etc/nginx/sites-enabled/

Then, unlink the default configuration file from the /sites-enabled/ directory:

sudo unlink /etc/nginx/sites-enabled/default

Test your new configuration file for syntax errors by typing:

sudo nginx -t

Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If any errors are reported, go back and recheck your file before continuing.

When you are ready, reload Nginx to make the necessary changes:

sudo systemctl reload nginx


Creating a PHP File to Test Configuration
To test that Nginx can correctly hand PHP files off to your PHP processor, use your text editor to create a test PHP file called info.php in your document root:

sudo nano /var/www/html/info.php

Enter the following lines into the new file. This is valid PHP code that will return information about your server:

<?php
phpinfo();
?>

When you are finished, save and close the file.

Now, you can visit this page in your web browser by visiting your server's domain name or IP address followed by /info.php:

http://your_server_domain_or_IP/info.php


You should see a web page that has been generated by PHP with information about your server:

If you see a page that looks like this, you've set up PHP processing with Nginx successfully.
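Because this page exposes details about your PHP configuration to anyone who can reach it, it is best to remove the test file once you have verified that everything works (an extra cleanup step beyond the walkthrough above):

sudo rm /var/www/html/info.php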


Wrapping up
A LEMP stack is a powerful platform that will allow you to set up and serve nearly any website or application from your server.

How To Set Up Nginx Web Server on Ubuntu 19.04

Nginx is one of the most popular web servers in the world and is responsible for hosting some of the largest and highest-traffic sites on the internet. It is more resource-friendly than Apache in most cases and can be used as a web server or reverse proxy.



In this tutorial, we'll show you how to install Nginx web server on an Ubuntu 19.04 machine.

Prerequisites
Before you begin this tutorial, you should have a non-root user with sudo privileges configured on your Ubuntu 19.04 server.


Installing Nginx
Nginx is available in Ubuntu's default repositories, so you can install it from these repositories using the apt packaging system.

sudo apt update

sudo apt install nginx


Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  fontconfig-config fonts-dejavu-core libfontconfig1 libgd3 libjbig0 libjpeg-turbo8 libjpeg8
  libnginx-mod-http-geoip libnginx-mod-http-image-filter libnginx-mod-http-xslt-filter libnginx-mod-mail
  libnginx-mod-stream libtiff5 libwebp6 libxpm4 nginx-common nginx-core
Suggested packages:
  libgd-tools fcgiwrap nginx-doc ssl-cert
The following NEW packages will be installed:
  fontconfig-config fonts-dejavu-core libfontconfig1 libgd3 libjbig0 libjpeg-turbo8 libjpeg8
  libnginx-mod-http-geoip libnginx-mod-http-image-filter libnginx-mod-http-xslt-filter libnginx-mod-mail
  libnginx-mod-stream libtiff5 libwebp6 libxpm4 nginx nginx-common nginx-core
0 upgraded, 18 newly installed, 0 to remove and 95 not upgraded.
Need to get 2,432 kB of archives.
After this operation, 7,895 kB of additional disk space will be used.
Do you want to continue? [Y/n] y



Configuring Firewall
Before testing Nginx, the firewall software needs to be adjusted to allow access to the service. Nginx registers itself as a service with ufw upon installation, making it straightforward to allow Nginx access.

List the application configurations that ufw knows how to work with by typing:

sudo ufw app list

You should get a listing of the application profiles:

Output
Available applications:
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
  OpenSSH

As you can see, there are three profiles available for Nginx:

  • Nginx Full: This profile opens both port 80 (normal, unencrypted web traffic) and port 443 (TLS/SSL encrypted traffic)
  • Nginx HTTP: This profile opens only port 80 (normal, unencrypted web traffic)
  • Nginx HTTPS: This profile opens only port 443 (TLS/SSL encrypted traffic)

It is recommended that you enable the most restrictive profile that will still allow the traffic you've configured. Since we haven't configured SSL for our server yet in this guide, we will only need to allow traffic on port 80.

You can enable this by typing:

sudo ufw allow 'Nginx HTTP'

You can verify the change by typing:

sudo ufw status

You should see HTTP traffic allowed in the displayed output:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere                 
Nginx HTTP                 ALLOW       Anywhere                 
OpenSSH (v6)               ALLOW       Anywhere (v6)            
Nginx HTTP (v6)            ALLOW       Anywhere (v6)



At the end of the installation process, Ubuntu 19.04 starts Nginx. The web server should already be up and running.

We can check with the systemd init system to make sure the service is running by typing:

systemctl status nginx

Output
nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-04-16 12:33:13 UTC; 11min ago
     Docs: man:nginx(8)
 Main PID: 2666 (nginx)
    Tasks: 2 (limit: 2277)
   Memory: 4.7M
   CGroup: /system.slice/nginx.service
           ├─2666 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
           └─2667 nginx: worker process

Apr 16 12:33:13 ubuntu1904 systemd[1]: Starting A high performance web server and a reverse proxy server...
Apr 16 12:33:13 ubuntu1904 systemd[1]: nginx.service: Failed to parse PID from file /run/nginx.pid: Invalid ar
Apr 16 12:33:13 ubuntu1904 systemd[1]: Started A high performance web server and a reverse proxy server.


As you can see above, the service appears to have started successfully. However, the best way to test this is to actually request a page from Nginx.

You can access the default Nginx landing page to confirm that the software is running properly by navigating to your server's IP address.

http://your_server_ip

You should see the default Nginx landing page:


This page is included with Nginx to show you that the server is running correctly.


Setting Up Server Blocks
When using the Nginx web server, server blocks (similar to virtual hosts in Apache) can be used to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called sample.com, but you should replace this with your own domain name.

Create the directory for sample.com as follows, using the -p flag to create any necessary parent directories:

sudo mkdir -p /var/www/sample.com/html

Next, assign ownership of the directory with the $USER environment variable:

sudo chown -R $USER:$USER /var/www/sample.com/html

The permissions of your web roots should be correct if you haven't modified your umask value, but you can make sure by typing:

sudo chmod -R 755 /var/www/sample.com

Next, create a sample index.html page using nano or your favorite editor:

sudo nano /var/www/sample.com/html/index.html

Inside, add the following sample HTML:

<html>
    <head>
        <title>Welcome to sample.com!</title>
    </head>
    <body>
        <h1>Success!  The sample.com server block is working!</h1>
    </body>
</html>

Save and close the file when you are finished.

In order for Nginx to serve this content, it's necessary to create a server block with the correct directives. Instead of modifying the default configuration file directly, let's make a new one at /etc/nginx/sites-available/sample.com:

sudo nano /etc/nginx/sites-available/sample.com

Paste in the following configuration block, which is similar to the default, but updated for our new directory and domain name:

server {
        listen 80;
        listen [::]:80;

        root /var/www/sample.com/html;
        index index.html index.htm index.nginx-debian.html;

        server_name sample.com www.sample.com;

        location / {
                try_files $uri $uri/ =404;
        }
}

Notice that we’ve updated the root configuration to our new directory, and the server_name to our domain name.

Next, let's enable the file by creating a link from it to the sites-enabled directory, which Nginx reads from during startup:

sudo ln -s /etc/nginx/sites-available/sample.com /etc/nginx/sites-enabled/

Two server blocks are now enabled and configured to respond to requests based on their listen and server_name directives:

  • sample.com: Will respond to requests for sample.com and www.sample.com.
  • default: Will respond to any request on port 80 that does not match the other block.

To avoid a possible hash bucket memory problem that can arise from adding additional server names, it is necessary to adjust a single value in the /etc/nginx/nginx.conf file. Open the file:

sudo nano /etc/nginx/nginx.conf

Find the server_names_hash_bucket_size directive and remove the # symbol to uncomment the line:

http {
    ...
    server_names_hash_bucket_size 64;
    ...
}

Save and close the file when you are finished.

Next, test to make sure that there are no syntax errors in any of your Nginx files:

sudo nginx -t

Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If there aren't any problems, restart Nginx to enable your changes:

sudo systemctl restart nginx

Nginx should now be serving your domain name. You can test this by navigating to http://sample.com, where you should see something like this:
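
If sample.com does not yet resolve to your server in DNS, you can still exercise the new server block from the command line by sending the expected Host header to the server's IP address (a quick sketch; substitute your own IP):

curl -H "Host: sample.com" http://your_server_ip/

The response should contain the 'The sample.com server block is working!' heading from the index.html file created above.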



Wrapping up
Now that you have your Nginx web server installed, you have many options for the type of content to serve and the technologies you want to use to create a richer experience.

How To Set Up LibreNMS on Ubuntu 18.04 Server

LibreNMS is an open-source, auto-discovering, PHP/MySQL/SNMP-based network monitoring system that supports a wide range of network devices, server hardware, and operating systems.



This article will walk you through the steps to install and configure LibreNMS on your Ubuntu 18.04 server.

Installing LibreNMS Dependencies
First, you need to install required dependencies on your Ubuntu 18.04 machine using the following commands:

sudo apt install -y composer fping git graphviz imagemagick mariadb-client mariadb-server

sudo apt install -y php7.2-cli php7.2-curl php7.2-fpm php7.2-gd php7.2-mysql php7.2-snmp php7.2-xml php7.2-zip

sudo apt install -y nginx-full mtr-tiny nmap acl

sudo apt install -y python-memcache python-mysqldb rrdtool snmp snmpd whois

sudo apt install -y policycoreutils-python-utils policycoreutils


Disabling Selinux
We need to set SELinux to disabled in order to allow an error-free installation of LibreNMS:

sudo nano /etc/selinux/config

SELINUX=disabled

Save and close


Creating LibreNMS User
Now we need to create a system user called librenms and add the www-data user to the librenms group:

sudo useradd librenms -d /opt/librenms -M -r

sudo usermod -a -G librenms www-data


Cloning LibreNMS
At this point we will clone the LibreNMS repository from GitHub onto our Ubuntu machine using the following command:

sudo git clone https://github.com/librenms/librenms.git /opt/librenms


Configuring MySQL
To fulfill the LibreNMS database requirement, we need to perform the following steps. Replace the example password with one of your own.

sudo systemctl restart mysql

sudo mysql -u root -p

CREATE DATABASE librenms CHARACTER SET utf8 COLLATE utf8_unicode_ci;

CREATE USER 'librenms'@'localhost' IDENTIFIED BY 'P@ssw0rd';

GRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'localhost';

FLUSH PRIVILEGES;

mysql> exit


sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf

Within the [mysqld] section, add the following:

innodb_file_per_table=1
sql-mode=""
lower_case_table_names=0

Save and close.

sudo systemctl restart mysql


Configuring PHP Timezone
We need to set the timezone on our Ubuntu machine and in the following two PHP configuration files as well:

sudo timedatectl set-timezone Asia/Karachi

sudo nano /etc/php/7.2/fpm/php.ini

Set the timezone according to your location:

date.timezone = Asia/Karachi

Save and close

sudo nano /etc/php/7.2/cli/php.ini

Set the timezone according to your location:

date.timezone = Asia/Karachi

Save and close

sudo systemctl restart php7.2-fpm

The Nginx configuration below references the php7.0-fpm socket path, so create a compatibility symlink to the php7.2-fpm socket:

sudo ln -s /var/run/php/php7.2-fpm.sock /var/run/php/php7.0-fpm.sock


Configuring Nginx
We need to create a librenms.conf file with the following parameters under the Nginx /etc/nginx/conf.d directory to enable the LibreNMS web GUI:

sudo nano /etc/nginx/conf.d/librenms.conf

Paste the following, replacing the server_name value to suit your environment:

server {
 listen      80;
 server_name librenms.example.com;
 root        /opt/librenms/html;
 index       index.php;

 charset utf-8;
 gzip on;
 gzip_types text/css application/javascript text/javascript application/x-javascript image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
 location / {
  try_files $uri $uri/ /index.php?$query_string;
 }
 location /api/v0 {
  try_files $uri $uri/ /api_v0.php?$query_string;
 }
 location ~ \.php {
  include fastcgi.conf;
  fastcgi_split_path_info ^(.+\.php)(/.+)$;
  fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 }
 location ~ /\.ht {
  deny all;
 }
}

When you are done, save and close the file.

Delete the default configuration file from Nginx and restart the Nginx service for the changes to take effect:

sudo rm /etc/nginx/sites-enabled/default

sudo systemctl restart nginx


Configuring SNMP
Copy the snmpd.conf.example file from the /opt/librenms directory to /etc/snmp/snmpd.conf as shown below:

sudo cp /opt/librenms/snmpd.conf.example /etc/snmp/snmpd.conf

sudo nano /etc/snmp/snmpd.conf

Replace the text that says RANDOMSTRINGGOESHERE with your own community string, for example:

com2sec readonly  default         public

Save and close

sudo curl -o /usr/bin/distro https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/distro

sudo chmod +x /usr/bin/distro

sudo systemctl restart snmpd
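
Optionally, since the snmp tools were installed with the other dependencies, you can confirm that the daemon answers locally. This is only a sanity check; replace public with whatever community string you set in snmpd.conf:

snmpwalk -v2c -c public localhost system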


Configuring Cron Job

sudo cp /opt/librenms/librenms.nonroot.cron /etc/cron.d/librenms


Configure Logs

sudo cp /opt/librenms/misc/librenms.logrotate /etc/logrotate.d/librenms


Change Permission
sudo chown -R librenms:librenms /opt/librenms

sudo setfacl -d -m g::rwx /opt/librenms/rrd /opt/librenms/logs

sudo setfacl -R -m g::rwx /opt/librenms/rrd /opt/librenms/logs

sudo setfacl -d -m g::rwx /opt/librenms/rrd /opt/librenms/logs /opt/librenms/bootstrap/cache/ /opt/librenms/storage/

sudo chmod -R ug=rwX /opt/librenms/rrd /opt/librenms/logs /opt/librenms/bootstrap/cache/ /opt/librenms/storage/


Run Composer Wrapper

sudo /opt/librenms/scripts/composer_wrapper.php install --no-dev


Configure UFW Firewall

sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw allow 161/udp
sudo ufw enable


Verify Nginx Configuration

sudo systemctl restart nginx

sudo nginx -t

Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful


Installing LibreNMS
To install LibreNMS through the web GUI, open a web browser, type the domain name you set up during the Nginx configuration (or the IP address of your Ubuntu machine) in the address bar, and press Enter.

http://librenms.example.com or http://ip_address

You will be redirected to the install.php page showing the result of PHP module support checks.

Make sure every status is green as shown below. If not, go back to your Ubuntu server and install or verify the missing dependencies. If all is well, click 'Next Stage' to continue.



Provide database credentials and click Next Stage to continue


Click Goto Add User



In the following step, you need to create a username and password to access the LibreNMS web GUI console.

Provide a username, password, and email address, then click Add User.



Click Generate Config



At this point you need to copy the generated PHP configuration shown by the installer, go back to your Ubuntu machine, and create a config.php file under the /opt/librenms directory using the following command:

sudo nano /opt/librenms/config.php

Paste the copied configuration, then save and close the file.

Change the ownership of config.php using the following command:

sudo chown -R librenms:librenms /opt/librenms/config.php

Run validation check using the following command:

sudo /opt/librenms/validate.php 

If you see validation check output similar to the below, you are good to go.


Go back to your web browser and access the LibreNMS web GUI at http://hostname.domain.name or http://ip_address.

You will be presented with the following login screen. Log in with your username and password and start adding your network devices to monitor.



Wrapping up
You have successfully installed and configured LibreNMS on your Ubuntu 18.04 server. If you have any questions or suggestions, please leave a comment below.

How To Set Up LibreNMS on Ubuntu 19.04 Server

LibreNMS is an open-source, auto-discovering, PHP/MySQL/SNMP-based network monitoring system that supports a wide range of network devices, server hardware, and operating systems.



This article will take you through the steps to install and configure LibreNMS on an Ubuntu 19.04 server.

Installing LibreNMS Dependencies
First, you need to install required dependencies on your Ubuntu 19.04 machine using the following commands:

sudo apt install -y composer fping git graphviz imagemagick mariadb-client mariadb-server

sudo apt install -y php7.2-cli php7.2-curl php7.2-fpm php7.2-gd php7.2-mysql php7.2-snmp php7.2-xml php7.2-zip

sudo apt install -y nginx-full mtr-tiny nmap acl

sudo apt install -y python-memcache python-mysqldb rrdtool snmp snmpd whois

sudo apt install -y policycoreutils-python-utils policycoreutils


Disabling Selinux
We need to set SELinux to disabled in order to allow an error-free installation of LibreNMS:

sudo nano /etc/selinux/config

SELINUX=disabled

Save and close


Creating LibreNMS User
Now we need to create a system user called librenms and add the www-data user to the librenms group:

sudo useradd librenms -d /opt/librenms -M -r

sudo usermod -a -G librenms www-data


Cloning LibreNMS
At this point we will clone the LibreNMS repository from GitHub onto our Ubuntu machine using the following command:

sudo git clone https://github.com/librenms/librenms.git /opt/librenms


Configuring MySQL
To fulfill the LibreNMS database requirement, we need to perform the following steps. Replace the example password with one of your own.

sudo systemctl restart mysql

sudo mysql -u root -p

CREATE DATABASE librenms CHARACTER SET utf8 COLLATE utf8_unicode_ci;

CREATE USER 'librenms'@'localhost' IDENTIFIED BY 'P@ssw0rd';

GRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'localhost';

FLUSH PRIVILEGES;

mysql> exit


sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf

Within the [mysqld] section, add the following:

innodb_file_per_table=1
sql-mode=""
lower_case_table_names=0

Save and close.

sudo systemctl restart mysql


Configuring PHP Timezone
We need to set the timezone on our Ubuntu machine and in the following two PHP configuration files as well:

sudo timedatectl set-timezone Asia/Karachi

sudo nano /etc/php/7.2/fpm/php.ini

Set the timezone according to your location:

date.timezone = Asia/Karachi

Save and close

sudo nano /etc/php/7.2/cli/php.ini

Set the timezone according to your location:

date.timezone = Asia/Karachi

Save and close

sudo systemctl restart php7.2-fpm

The Nginx configuration below references the php7.0-fpm socket path, so create a compatibility symlink to the php7.2-fpm socket:

sudo ln -s /var/run/php/php7.2-fpm.sock /var/run/php/php7.0-fpm.sock


Configuring Nginx
We need to create a librenms.conf file with the following parameters under the Nginx /etc/nginx/conf.d directory to enable the LibreNMS web GUI:

sudo nano /etc/nginx/conf.d/librenms.conf

Paste the following, replacing the server_name value to suit your environment:

server {
 listen      80;
 server_name librenms.example.com;
 root        /opt/librenms/html;
 index       index.php;

 charset utf-8;
 gzip on;
 gzip_types text/css application/javascript text/javascript application/x-javascript image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
 location / {
  try_files $uri $uri/ /index.php?$query_string;
 }
 location /api/v0 {
  try_files $uri $uri/ /api_v0.php?$query_string;
 }
 location ~ \.php {
  include fastcgi.conf;
  fastcgi_split_path_info ^(.+\.php)(/.+)$;
  fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 }
 location ~ /\.ht {
  deny all;
 }
}

When you are done, save and close the file.

Delete the default configuration file from Nginx and restart the Nginx service for the changes to take effect:

sudo rm /etc/nginx/sites-enabled/default

sudo systemctl restart nginx


Configuring SNMP
Copy the snmpd.conf.example file from the /opt/librenms directory to /etc/snmp/snmpd.conf as shown below:

sudo cp /opt/librenms/snmpd.conf.example /etc/snmp/snmpd.conf

sudo nano /etc/snmp/snmpd.conf

Replace the text that says RANDOMSTRINGGOESHERE with your own community string, for example:

com2sec readonly  default         public

Save and close

sudo curl -o /usr/bin/distro https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/distro

sudo chmod +x /usr/bin/distro

sudo systemctl restart snmpd
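
Optionally, since the snmp tools were installed with the other dependencies, you can confirm that the daemon answers locally. This is only a sanity check; replace public with whatever community string you set in snmpd.conf:

snmpwalk -v2c -c public localhost system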


Configuring Cron Job

sudo cp /opt/librenms/librenms.nonroot.cron /etc/cron.d/librenms


Configure Logs

sudo cp /opt/librenms/misc/librenms.logrotate /etc/logrotate.d/librenms


Change Permission
sudo chown -R librenms:librenms /opt/librenms

sudo setfacl -d -m g::rwx /opt/librenms/rrd /opt/librenms/logs

sudo setfacl -R -m g::rwx /opt/librenms/rrd /opt/librenms/logs

sudo setfacl -d -m g::rwx /opt/librenms/rrd /opt/librenms/logs /opt/librenms/bootstrap/cache/ /opt/librenms/storage/

sudo chmod -R ug=rwX /opt/librenms/rrd /opt/librenms/logs /opt/librenms/bootstrap/cache/ /opt/librenms/storage/


Run Composer Wrapper

sudo /opt/librenms/scripts/composer_wrapper.php install --no-dev


Configure UFW Firewall

sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw allow 161/udp
sudo ufw enable


Verify Nginx Configuration

sudo systemctl restart nginx

sudo nginx -t

Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful


Installing LibreNMS
To install LibreNMS through the web GUI, open a web browser, type the domain name you set up during the Nginx configuration (or the IP address of your Ubuntu machine) in the address bar, and press Enter.

http://librenms.example.com or http://ip_address

You will be redirected to the install.php page showing the result of PHP module support checks.

Make sure every status is green as shown below. If not, go back to your Ubuntu server and install or verify the missing dependencies. If all is well, click 'Next Stage' to continue.



Provide database credentials and click Next Stage to continue


Click Goto Add User



In the following step, you need to create a username and password to access the LibreNMS web GUI console.

Provide a username, password, and email address, then click Add User.



Click Generate Config



At this point you need to copy the generated PHP configuration shown by the installer, go back to your Ubuntu machine, and create a config.php file under the /opt/librenms directory using the following command:

sudo nano /opt/librenms/config.php

Paste the copied configuration, then save and close the file.

Change the ownership of config.php using the following command:

sudo chown -R librenms:librenms /opt/librenms/config.php

Run validation check using the following command:

sudo /opt/librenms/validate.php 

If you see validation check output similar to the below, you are good to go.


Go back to your web browser and access the LibreNMS web GUI at http://hostname.domain.name or http://ip_address.

You will be presented with the following login screen. Log in with your username and password and start adding your network devices to monitor.



Wrapping up
You have successfully installed and configured LibreNMS on your Ubuntu 19.04 server. If you have any questions or suggestions, please leave a comment below.

How To Administer and Manage Ubuntu 19.04 Server using Webmin


Webmin is a web-based control panel that allows you to administer and manage your Linux servers through a modern web interface. With Webmin, you can quickly install, configure and change settings for common packages, including web servers and databases, as well as manage users, groups, disk partitioning and RAID configuration.

This tutorial will walk you through the steps to install and configure Webmin on an Ubuntu 19.04 server.

Installing Webmin
First, we need to add the Webmin repository to our Ubuntu 19.04 server's apt source list so that we can easily install and update Webmin using the Ubuntu apt package manager.

sudo nano /etc/apt/sources.list

Add the following line to the bottom of the file to include the Webmin repository in the Ubuntu source list:

deb http://download.webmin.com/download/repository sarge contrib



Save and close.
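
Alternatively, if you prefer not to edit the main sources list, the same line can live in its own file; apt treats both locations the same:

echo "deb http://download.webmin.com/download/repository sarge contrib" | sudo tee /etc/apt/sources.list.d/webmin.list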

Next, download and add the Webmin PGP key so that apt trusts the repository, then update the package index and install Webmin:

wget http://www.webmin.com/jcameron-key.asc

sudo apt-key add jcameron-key.asc

sudo apt update

sudo apt install webmin

Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  apt-show-versions libapt-pkg-perl libauthen-pam-perl libio-pty-perl libnet-ssleay-perl libpython-stdlib
  libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib perl-openssl-defaults python python-minimal
  python2 python2-minimal python2.7 python2.7-minimal
Suggested packages:
  python-doc python-tk python2-doc python2.7-doc binutils binfmt-support
The following NEW packages will be installed:
  apt-show-versions libapt-pkg-perl libauthen-pam-perl libio-pty-perl libnet-ssleay-perl libpython-stdlib
  libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib perl-openssl-defaults python python-minimal
  python2 python2-minimal python2.7 python2.7-minimal webmin
0 upgraded, 17 newly installed, 0 to remove and 2 not upgraded.
Need to get 20.2 MB of archives.
After this operation, 191 MB of additional disk space will be used.
Do you want to continue? [Y/n] y



Webmin install complete. You can now login to https://your_server_name:10000/ as root with your root password, or as any user who can use sudo to run commands as root.
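
If ufw is active on this server, you may also need to open Webmin's port before the login page will load; skip this step if you are not running a host firewall:

sudo ufw allow 10000/tcp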

Open that URL in a web browser. Because Webmin uses a self-signed SSL certificate by default, the browser will display a security warning. Click Advanced.


Click Proceed to labserver (unsafe)


You will be presented with the following login screen.


Provide your Ubuntu 19.04 server username and password to log in.


Once you have logged in, the following screen will show up.


If your Webmin interface says that package updates are available, click the link to install them.


Click Update Selected Packages


Click Install Now


Once the update has completed, click Return to package list.


Schedule an update check and click Save.


As you can see, the dashboard on the left side of the web interface provides options to administer and manage your Ubuntu 19.04 server and its components.



Wrapping up
Webmin gives you access to administer and manage almost everything you'd normally need to access through the console, and it organizes them in an intuitive way. For example, if you have MySQL installed, you would find the configuration tab for it under Servers, and then MySQL.

How To Install Docker and Docker-Compose on Ubuntu 19.04

Docker Compose


Docker complements kernel namespacing with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers. Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.

Docker Compose is service-management software built on top of Docker. Define your services and their relationships in a simple YAML file, and let Compose handle the rest.

In this tutorial, we'll show you how to install the latest versions of Docker and Docker Compose to help you manage multi-container applications on an Ubuntu 19.04 server.

Prerequisites
To follow this guide you will need an Ubuntu 19.04 server and a non-root user with sudo privileges.

Installing Docker

sudo apt -y install docker.io

Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  bridge-utils cgroupfs-mount containerd dns-root-data dnsmasq-base pigz runc ubuntu-fan
Suggested packages:
  ifupdown aufs-tools debootstrap docker-doc rinse zfs-fuse | zfsutils
The following NEW packages will be installed:
  bridge-utils cgroupfs-mount containerd dns-root-data dnsmasq-base docker.io pigz runc ubuntu-fan
0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.
Need to get 52.5 MB of archives.
After this operation, 258 MB of additional disk space will be used.
Get:1 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 pigz amd64 2.4-1 [57.4 kB]
Get:2 http://pk.archive.ubuntu.com/ubuntu disco/main amd64 bridge-utils amd64 1.6-2ubuntu1 [30.5 kB]
Get:3 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 cgroupfs-mount all 1.4 [6,320 B]
Get:4 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 runc amd64 1.0.0~rc7+git20190403.029124da-0ubun                              tu1 [1,904 kB]
Get:5 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 containerd amd64 1.2.6-0ubuntu1 [19.4 MB]
Get:6 http://pk.archive.ubuntu.com/ubuntu disco/main amd64 dns-root-data all 2018091102 [5,472 B]
Get:7 http://pk.archive.ubuntu.com/ubuntu disco/main amd64 dnsmasq-base amd64 2.80-1ubuntu1 [314 kB]
Get:8 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 docker.io amd64 18.09.5-0ubuntu1 [30.7 MB]
Get:9 http://pk.archive.ubuntu.com/ubuntu disco/main amd64 ubuntu-fan all 0.12.12 [34.6 kB]
Fetched 52.5 MB in 52s (1,001 kB/s)
Preconfiguring packages ...
Selecting previously unselected package pigz.
(Reading database ... 97275 files and directories currently installed.)
Preparing to unpack .../0-pigz_2.4-1_amd64.deb ...
Unpacking pigz (2.4-1) ...
Selecting previously unselected package bridge-utils.
Preparing to unpack .../1-bridge-utils_1.6-2ubuntu1_amd64.deb ...
Unpacking bridge-utils (1.6-2ubuntu1) ...
Selecting previously unselected package cgroupfs-mount.
Preparing to unpack .../2-cgroupfs-mount_1.4_all.deb ...
Unpacking cgroupfs-mount (1.4) ...
Selecting previously unselected package runc.
Preparing to unpack .../3-runc_1.0.0~rc7+git20190403.029124da-0ubuntu1_amd64.deb ...
Unpacking runc (1.0.0~rc7+git20190403.029124da-0ubuntu1) ...
Selecting previously unselected package containerd.
Preparing to unpack .../4-containerd_1.2.6-0ubuntu1_amd64.deb ...
Unpacking containerd (1.2.6-0ubuntu1) ...
Selecting previously unselected package dns-root-data.
Preparing to unpack .../5-dns-root-data_2018091102_all.deb ...
Unpacking dns-root-data (2018091102) ...
Selecting previously unselected package dnsmasq-base.
Preparing to unpack .../6-dnsmasq-base_2.80-1ubuntu1_amd64.deb ...
Unpacking dnsmasq-base (2.80-1ubuntu1) ...
Selecting previously unselected package docker.io.
Preparing to unpack .../7-docker.io_18.09.5-0ubuntu1_amd64.deb ...
Unpacking docker.io (18.09.5-0ubuntu1) ...
Selecting previously unselected package ubuntu-fan.
Preparing to unpack .../8-ubuntu-fan_0.12.12_all.deb ...
Unpacking ubuntu-fan (0.12.12) ...
Setting up dnsmasq-base (2.80-1ubuntu1) ...
Setting up runc (1.0.0~rc7+git20190403.029124da-0ubuntu1) ...
Setting up dns-root-data (2018091102) ...
Setting up bridge-utils (1.6-2ubuntu1) ...
Setting up pigz (2.4-1) ...
Setting up cgroupfs-mount (1.4) ...
Setting up containerd (1.2.6-0ubuntu1) ...
Setting up ubuntu-fan (0.12.12) ...
Setting up docker.io (18.09.5-0ubuntu1) ...
Processing triggers for dbus (1.12.12-1ubuntu1) ...
Processing triggers for systemd (240-6ubuntu5) ...
Processing triggers for man-db (2.8.5-2) ...
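
The docker.io package normally starts and enables the Docker daemon for you. You can confirm the service is running, and optionally add your user to the docker group so that docker commands work without sudo (log out and back in for the group change to take effect):

sudo systemctl status docker
sudo usermod -aG docker $USER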

Installing Docker-Compose

sudo apt -y install docker-compose

Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  golang-docker-credential-helpers libsecret-1-0 libsecret-common python3-cached-property python3-distutils python3-docker
  python3-dockerpty python3-dockerpycreds python3-docopt python3-lib2to3 python3-texttable python3-websocket
The following NEW packages will be installed:
  docker-compose golang-docker-credential-helpers libsecret-1-0 libsecret-common python3-cached-property python3-distutils python3-docker
  python3-dockerpty python3-dockerpycreds python3-docopt python3-lib2to3 python3-texttable python3-websocket
0 upgraded, 13 newly installed, 0 to remove and 0 not upgraded.
Need to get 1,045 kB of archives.
After this operation, 5,620 kB of additional disk space will be used.
Get:1 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 python3-cached-property all 1.5.1-2 [10.8 kB]
Get:2 http://pk.archive.ubuntu.com/ubuntu disco/main amd64 libsecret-common all 0.18.8-1 [3,936 B]
Get:3 http://pk.archive.ubuntu.com/ubuntu disco/main amd64 libsecret-1-0 amd64 0.18.8-1 [94.7 kB]
Get:4 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 golang-docker-credential-helpers amd64 0.6.1-1 [478 kB]
Get:5 http://pk.archive.ubuntu.com/ubuntu disco/main amd64 python3-lib2to3 all 3.7.3-1ubuntu1 [75.4 kB]
Get:6 http://pk.archive.ubuntu.com/ubuntu disco/main amd64 python3-distutils all 3.7.3-1ubuntu1 [140 kB]
Get:7 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 python3-dockerpycreds all 0.3.0-1 [5,252 B]
Get:8 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 python3-websocket all 0.53.0-1 [32.2 kB]
Get:9 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 python3-docker all 3.4.1-4 [77.0 kB]
Get:10 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 python3-dockerpty all 0.4.1-1 [10.8 kB]
Get:11 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 python3-docopt all 0.6.2-2 [19.4 kB]
Get:12 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 python3-texttable all 1.6.0-1 [10.9 kB]
Get:13 http://pk.archive.ubuntu.com/ubuntu disco/universe amd64 docker-compose all 1.21.0-3 [85.6 kB]
Fetched 1,045 kB in 2s (447 kB/s)
Selecting previously unselected package python3-cached-property.
(Reading database ... 97600 files and directories currently installed.)
Preparing to unpack .../00-python3-cached-property_1.5.1-2_all.deb ...
Unpacking python3-cached-property (1.5.1-2) ...
Selecting previously unselected package libsecret-common.
Preparing to unpack .../01-libsecret-common_0.18.8-1_all.deb ...
Unpacking libsecret-common (0.18.8-1) ...
Selecting previously unselected package libsecret-1-0:amd64.
Preparing to unpack .../02-libsecret-1-0_0.18.8-1_amd64.deb ...
Unpacking libsecret-1-0:amd64 (0.18.8-1) ...
Selecting previously unselected package golang-docker-credential-helpers.
Preparing to unpack .../03-golang-docker-credential-helpers_0.6.1-1_amd64.deb ...
Unpacking golang-docker-credential-helpers (0.6.1-1) ...
Selecting previously unselected package python3-lib2to3.
Preparing to unpack .../04-python3-lib2to3_3.7.3-1ubuntu1_all.deb ...
Unpacking python3-lib2to3 (3.7.3-1ubuntu1) ...
Selecting previously unselected package python3-distutils.
Preparing to unpack .../05-python3-distutils_3.7.3-1ubuntu1_all.deb ...
Unpacking python3-distutils (3.7.3-1ubuntu1) ...
Selecting previously unselected package python3-dockerpycreds.
Preparing to unpack .../06-python3-dockerpycreds_0.3.0-1_all.deb ...
Unpacking python3-dockerpycreds (0.3.0-1) ...
Selecting previously unselected package python3-websocket.
Preparing to unpack .../07-python3-websocket_0.53.0-1_all.deb ...
Unpacking python3-websocket (0.53.0-1) ...
Selecting previously unselected package python3-docker.
Preparing to unpack .../08-python3-docker_3.4.1-4_all.deb ...
Unpacking python3-docker (3.4.1-4) ...
Selecting previously unselected package python3-dockerpty.
Preparing to unpack .../09-python3-dockerpty_0.4.1-1_all.deb ...
Unpacking python3-dockerpty (0.4.1-1) ...
Selecting previously unselected package python3-docopt.
Preparing to unpack .../10-python3-docopt_0.6.2-2_all.deb ...
Unpacking python3-docopt (0.6.2-2) ...
Selecting previously unselected package python3-texttable.
Preparing to unpack .../11-python3-texttable_1.6.0-1_all.deb ...
Unpacking python3-texttable (1.6.0-1) ...
Selecting previously unselected package docker-compose.
Preparing to unpack .../12-docker-compose_1.21.0-3_all.deb ...
Unpacking docker-compose (1.21.0-3) ...
Setting up python3-cached-property (1.5.1-2) ...
Setting up python3-texttable (1.6.0-1) ...
Setting up python3-docopt (0.6.2-2) ...
Setting up python3-lib2to3 (3.7.3-1ubuntu1) ...
Setting up python3-websocket (0.53.0-1) ...
update-alternatives: using /usr/bin/python3-wsdump to provide /usr/bin/wsdump (wsdump) in auto mode
Setting up libsecret-common (0.18.8-1) ...
Setting up python3-dockerpty (0.4.1-1) ...
Setting up python3-distutils (3.7.3-1ubuntu1) ...
Setting up libsecret-1-0:amd64 (0.18.8-1) ...
Setting up golang-docker-credential-helpers (0.6.1-1) ...
Setting up python3-dockerpycreds (0.3.0-1) ...
Setting up python3-docker (3.4.1-4) ...
Setting up docker-compose (1.21.0-3) ...
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for libc-bin (2.29-0ubuntu2) ...

Verifying Docker Compose Version

sudo docker-compose --version

Output
docker-compose version 1.21.0, build unknown

Check Available Docker Images

sudo docker images

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              fce289e99eb9        3 months ago        1.84kB

Next, create a docker-compose.yml file that runs the hello-world image:

sudo nano docker-compose.yml

Put the following contents into the file:

ubuntu1904-test:
 image: hello-world

Save the file and close the text editor.

Now, execute the following command:

sudo docker-compose up

docker-compose creates a container, attaches, and runs the hello program, which confirms that the installation appears to be working:

Output
Creating muhammad_ubuntu1904-test_1 ... done
Attaching to muhammad_ubuntu1904-test_1
ubuntu1904-test_1  |
ubuntu1904-test_1  | Hello from Docker!
ubuntu1904-test_1  | This message shows that your installation appears to be working correctly.
ubuntu1904-test_1  |
ubuntu1904-test_1  | To generate this message, Docker took the following steps:
ubuntu1904-test_1  |  1. The Docker client contacted the Docker daemon.
ubuntu1904-test_1  |  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
ubuntu1904-test_1  |     (amd64)
ubuntu1904-test_1  |  3. The Docker daemon created a new container from that image which runs the
ubuntu1904-test_1  |     executable that produces the output you are currently reading.
ubuntu1904-test_1  |  4. The Docker daemon streamed that output to the Docker client, which sent it
ubuntu1904-test_1  |     to your terminal.
ubuntu1904-test_1  |
ubuntu1904-test_1  | To try something more ambitious, you can run an Ubuntu container with:
ubuntu1904-test_1  |  $ docker run -it ubuntu bash
ubuntu1904-test_1  |
ubuntu1904-test_1  | Share images, automate workflows, and more with a free Docker ID:
ubuntu1904-test_1  |  https://hub.docker.com/
ubuntu1904-test_1  |
ubuntu1904-test_1  | For more examples and ideas, visit:
ubuntu1904-test_1  |  https://docs.docker.com/get-started/
ubuntu1904-test_1  |
muhammad_ubuntu1904-test_1 exited with code 0

Docker containers only run as long as the command is active, so once hello finished running, the container stopped. Consequently, when we look at active processes, the column headers will appear, but the hello-world container won't be listed because it's not running:

sudo docker ps

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

We can see the container information by using the -a flag, which shows all containers, not just active ones:

sudo docker ps -a

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
a6ba72d8a265        hello-world         "/hello"            10 seconds ago      Exited (0) 9 seconds ago                       muhammad_ubuntu1904-test_1
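
When you are finished experimenting, the stopped container can be removed again with Compose; run this from the same directory as docker-compose.yml:

sudo docker-compose down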


Wrapping up
We've now installed Docker and Docker Compose, and tested our installation by running a Hello World example.

How To Install OctoberCMS on an Ubuntu 18.04 Server

Open source content management system

OctoberCMS is a free, open-source, self-hosted CMS platform based on the Laravel PHP framework. There are two ways to install OctoberCMS: the Wizard installer or the command-line installation instructions. Before you proceed, you should check that your Ubuntu 18.04 server meets the prerequisites.

This guide will show you through the steps to install and configure OctoberCMS on an Ubuntu 18.04 server using the wizard installer.

Prerequisites
To follow this guide, you will need an Ubuntu 18.04 server with a regular user that has sudo privileges. Replace the example values (domain names, passwords and timezone) throughout this tutorial to suit your environment.

Installing Dependencies
First, you need to install a number of dependencies on your Ubuntu 18.04 server by executing the following commands:

sudo timedatectl set-timezone Asia/Karachi

sudo apt -y install vim wget curl git socat unzip bash-completion

sudo apt -y install php-cli php-fpm php-common php-mysql php-curl php-json php-zip php-gd php-xml php-mbstring

Starting PHP Service
Once you have finished installing the dependencies, execute the following commands to start the PHP-FPM service and enable it on boot.

sudo systemctl start php7.2-fpm

sudo systemctl enable php7.2-fpm

Output
Synchronizing state of php7.2-fpm.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable php7.2-fpm

Now edit the following two php files and set the timezone according to your location:

sudo vi /etc/php/7.2/fpm/php.ini

date.timezone = Asia/Karachi

Save and close.

sudo vi /etc/php/7.2/cli/php.ini

date.timezone = Asia/Karachi

Save and close.

Restart the PHP-FPM service:

sudo systemctl restart php7.2-fpm


Installing Database
To meet the OctoberCMS database requirement, we need to install the MariaDB server using the following command:

sudo apt -y install mariadb-server

Once you are done with mariadb installation, execute the following commands to start its service and enable it on boot.

sudo systemctl start mariadb

sudo systemctl enable mariadb

Securing Database
We need to execute the following command to secure our database:

sudo mysql_secure_installation

Output
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] n
 ... skipping.

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!

If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Creating Database
At this point, we need to create a database and user for OctoberCMS using the following commands:

sudo mysql -u root -p

create database octobercms;

grant all on octobercms.* to 'ocmsadmin'@'localhost' identified by 'yourpassword';

flush privileges;

quit;
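
As an optional check that the new credentials work, log back in as the ocmsadmin user and confirm that the octobercms database is visible (enter the password you chose above when prompted):

mysql -u ocmsadmin -p -e "SHOW DATABASES;"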


Creating SSL Certificate
To make OctoberCMS web access safe and secure, we need to generate a self-signed SSL certificate using OpenSSL:
 
sudo mkdir -p /etc/sslcerts

sudo openssl req -x509 -nodes -days 1825 -newkey rsa:2048 -keyout /etc/sslcerts/example.com.key -out /etc/sslcerts/example.com.crt

Output
Generating a RSA private key
..........................+++++
........................................+++++
writing new private key to '/etc/sslcerts/example.com.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:PK
State or Province Name (full name) [Some-State]:Sindh
Locality Name (eg, city) []:Karachi
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Example
Organizational Unit Name (eg, section) []:Example
Common Name (e.g. server FQDN or YOUR name) []:labserver.example.com
Email Address []:support@example.com

Installing Nginx
To host OctoberCMS and make its web services available to users, we need a web server; in this guide we will use Nginx:

sudo apt -y install nginx-full

When the installation completes, execute the following commands to start the Nginx service and enable it on boot.


sudo systemctl start nginx

sudo systemctl enable nginx


Output
Synchronizing state of nginx.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable nginx


Now create octobercms.conf file under /etc/nginx/conf.d directory and insert the following contents:

sudo nano /etc/nginx/conf.d/octobercms.conf

server {
    listen [::]:443 ssl;
    listen 443 ssl;
    listen [::]:80;
    listen 80;

    server_name example.com;

    index index.php index.html;
    root /var/www;

    ssl_certificate /etc/sslcerts/example.com.crt;
    ssl_certificate_key /etc/sslcerts/example.com.key;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 120s;
    }

    rewrite ^themes/.*/(layouts|pages|partials)/.*.htm /index.php break;
    rewrite ^bootstrap/.* /index.php break;
    rewrite ^config/.* /index.php break;
    rewrite ^vendor/.* /index.php break;
    rewrite ^storage/cms/.* /index.php break;
    rewrite ^storage/logs/.* /index.php break;
    rewrite ^storage/framework/.* /index.php break;
    rewrite ^storage/temp/protected/.* /index.php break;
    rewrite ^storage/app/uploads/protected/.* /index.php break;
}


Save and close file

Verify nginx configuration using the following command:

sudo nginx -t

Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful


If you see any errors in the above output, go back to the configuration file and fix them.


When done, type the following command to reload the Nginx service for the changes to take effect:

sudo systemctl reload nginx
 

Cloning October CMS
We need to clone the OctoberCMS wizard installer from GitHub onto our Ubuntu machine using the following commands:

sudo mkdir -p /var/www/octobercms

sudo chown -R $USER:$USER /var/www/octobercms

sudo chmod -R go+rwx /var/www/octobercms

cd /var/www/octobercms

sudo git clone https://github.com/octobercms/install.git

sudo mv install/* /var/www/octobercms
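
You can quickly confirm the wizard files are in place; the listing should include install.php and an install_files directory (names as shipped in the octobercms/install repository):

ls /var/www/octobercms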
 


Installing October CMS
Open a web browser on any client machine that can reach your Ubuntu server, navigate to https://example.com/octobercms/install.php, and follow the installation instructions.

I am on a Windows 7 client machine, so I have to map the example.com entry to the Ubuntu server's IP address in the C:\Windows\System32\drivers\etc\hosts file to access the OctoberCMS web services through it.


I am using the Chrome web browser on my Windows 7 client machine.

We are using a self-signed SSL certificate for our test environment, so we have to ignore this certificate warning and click Advanced to continue.

Again click Proceed to example.com (unsafe)

If the System Check is all green like below, click Agree & Continue. If you see anything missing, go back to your Ubuntu 18.04 machine and fix it first, then come back to this page and refresh it.


Remember the database, username and password we created earlier? Provide those database credentials here and click Administrator.


On this screen, you need to create a user account for your October CMS administrative access. When you are done, click Advanced


Keep the information default on this page and click Continue


On this page you have three options to choose from. I am going with Start from scratch.


Let the installation progress complete.


Almost done.


If you see an error like the one below, click Try again.


This screen will show you two different links:

1. Website Address - you can access your blog, website, etc. through this link.
2. Administration Area - you can administer and manage OctoberCMS through this link.



Wrapping up
You can now navigate to https://example.com/octobercms, where you should see your new installation of OctoberCMS. To log in to the administration area at https://example.com/octobercms/backend (by default), you can use the username and password you created earlier.

How To Install OctoberCMS on an Ubuntu 19.04 Server

Open source content management system

OctoberCMS is a free, open-source, self-hosted CMS platform based on the Laravel PHP framework. There are two ways to install OctoberCMS: the Wizard installer or the command-line installation instructions. Before you proceed, you should check that your Ubuntu 19.04 server meets the prerequisites.

This guide will take you through the steps to deploy OctoberCMS on an Ubuntu 19.04 server using the wizard installer.

Prerequisites
To follow this guide, you will need an Ubuntu 19.04 server with a regular user that has sudo privileges. Replace the example values (domain names, passwords and timezone) throughout this tutorial to suit your environment.

Installing Dependencies
First, you need to install a number of dependencies on your Ubuntu 19.04 server by executing the following commands:

sudo timedatectl set-timezone Asia/Karachi

sudo apt -y install vim wget curl git socat unzip bash-completion

sudo apt -y install php-cli php-fpm php-common php-mysql php-curl php-json php-zip php-gd php-xml php-mbstring

Starting PHP Service
Once you have finished installing the dependencies, execute the following commands to start the PHP-FPM service and enable it on boot.

sudo systemctl start php7.2-fpm

sudo systemctl enable php7.2-fpm

Output
Synchronizing state of php7.2-fpm.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable php7.2-fpm

Now edit the following two php files and set the timezone according to your location:

sudo vi /etc/php/7.2/fpm/php.ini

date.timezone = Asia/Karachi

Save and close.

sudo vi /etc/php/7.2/cli/php.ini

date.timezone = Asia/Karachi

Save and close.

Restart the PHP-FPM service:

sudo systemctl restart php7.2-fpm


Installing Database
To meet the OctoberCMS database requirement, we need to install the MariaDB server using the following command:

sudo apt -y install mariadb-server

Once you are done with mariadb installation, execute the following commands to start its service and enable it on boot.

sudo systemctl start mariadb

sudo systemctl enable mariadb

Securing Database
We need to execute the following command to secure our database:

sudo mysql_secure_installation

Output
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] n
 ... skipping.

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!

If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Creating Database
At this point, we need to create a database and user for OctoberCMS using the following commands:

sudo mysql -u root -p

create database octobercms;

grant all on octobercms.* to 'ocmsadmin'@'localhost' identified by 'yourpassword';

flush privileges;

quit;
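
As an optional check that the new credentials work, log back in as the ocmsadmin user and confirm that the octobercms database is visible (enter the password you chose above when prompted):

mysql -u ocmsadmin -p -e "SHOW DATABASES;"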


Creating SSL Certificate
To make OctoberCMS web access safe and secure, we need to generate a self-signed SSL certificate using OpenSSL:
 
sudo mkdir -p /etc/sslcerts

sudo openssl req -x509 -nodes -days 1825 -newkey rsa:2048 -keyout /etc/sslcerts/example.com.key -out /etc/sslcerts/example.com.crt

Output
Generating a RSA private key
..........................+++++
........................................+++++
writing new private key to '/etc/sslcerts/example.com.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:PK
State or Province Name (full name) [Some-State]:Sindh
Locality Name (eg, city) []:Karachi
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Example
Organizational Unit Name (eg, section) []:Example
Common Name (e.g. server FQDN or YOUR name) []:labserver.example.com
Email Address []:support@example.com

Installing Nginx
To host OctoberCMS and make its web services available to users, we need a web server; in this guide we will use Nginx:

sudo apt -y install nginx-full

When the installation completes, execute the following commands to start the Nginx service and enable it on boot.


sudo systemctl start nginx

sudo systemctl enable nginx


Output
Synchronizing state of nginx.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable nginx


Now create octobercms.conf file under /etc/nginx/conf.d directory and insert the following contents:

sudo nano /etc/nginx/conf.d/octobercms.conf

server {
    listen [::]:443 ssl;
    listen 443 ssl;
    listen [::]:80;
    listen 80;

    server_name example.com;

    index index.php index.html;
    root /var/www;

    ssl_certificate /etc/sslcerts/example.com.crt;
    ssl_certificate_key /etc/sslcerts/example.com.key;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 120s;
    }

    rewrite ^themes/.*/(layouts|pages|partials)/.*.htm /index.php break;
    rewrite ^bootstrap/.* /index.php break;
    rewrite ^config/.* /index.php break;
    rewrite ^vendor/.* /index.php break;
    rewrite ^storage/cms/.* /index.php break;
    rewrite ^storage/logs/.* /index.php break;
    rewrite ^storage/framework/.* /index.php break;
    rewrite ^storage/temp/protected/.* /index.php break;
    rewrite ^storage/app/uploads/protected/.* /index.php break;
}


Save and close file

Verify nginx configuration using the following command:

sudo nginx -t

Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful


If you see any errors in the above output, go back to the configuration file and fix them.


When done, type the following command to reload the Nginx service for the changes to take effect:

sudo systemctl reload nginx
 

Cloning October CMS
We need to clone the OctoberCMS wizard installer from GitHub onto our Ubuntu machine using the following commands:

sudo mkdir -p /var/www/octobercms

sudo chown -R $USER:$USER /var/www/octobercms

sudo chmod -R go+rwx /var/www/octobercms

cd /var/www/octobercms

sudo git clone https://github.com/octobercms/install.git

sudo mv install/* /var/www/octobercms
 


Installing October CMS
Open a web browser on any client machine that can reach your Ubuntu server, navigate to https://example.com/octobercms/install.php, and follow the installation instructions.

I am on a Windows 7 client machine, so I have to map the example.com entry to the Ubuntu server's IP address in the C:\Windows\System32\drivers\etc\hosts file to access the OctoberCMS web services through it.


I am using the Chrome web browser on my Windows 7 client machine.

We are using a self-signed SSL certificate for our test environment, so we have to ignore this certificate warning and click Advanced to continue.

Again click Proceed to example.com (unsafe)

If the System Check is all green like below, click Agree & Continue. If you see anything missing, go back to your Ubuntu 19.04 machine and fix it first, then come back to this page and refresh it.


Remember the database, username and password we created earlier? Provide those database credentials here and click Administrator.


On this screen, you need to create a user account for your October CMS administrative access. When you are done, click Advanced


Keep the information default on this page and click Continue


On this page you have three options to choose from. I am going with Start from scratch.


Let the installation progress complete.


Almost done.


If you see an error like the one below, click Try again.


This screen will show you two different links:

1. Website Address - you can access your blog, website, etc. through this link.
2. Administration Area - you can administer and manage OctoberCMS through this link.



Wrapping up
You can now navigate to https://example.com/octobercms, where you should see your new installation of OctoberCMS. To log in to the administration area at https://example.com/octobercms/backend (by default), you can use the username and password you created earlier.

