Channel: Tech Support

Adaptive SQL Plan Management (SPM) in Oracle Database 12c Release 1 (12.1)

SQL Plan Management was introduced in Oracle 11g to provide a "conservative plan selection strategy" for the optimizer. The basic concepts have not changed in Oracle 12c, but there have been some changes to the process of evolving SQL plan baselines. As with previous releases, auto-capture of SQL plan baselines is disabled by default, but evolution of existing baselines is now automated. In addition, manual evolution of SQL plan baselines has been altered to a task-based approach. This article focuses on the changes in 12c.


  • SYS_AUTO_SPM_EVOLVE_TASK
  • Manually Evolving SQL Plan Baselines

SYS_AUTO_SPM_EVOLVE_TASK

In Oracle Database 12c, the evolution of existing baselines is automated as an advisor task called SYS_AUTO_SPM_EVOLVE_TASK, which is triggered by the existing "sql tuning advisor" client under the automated database maintenance tasks.
CONN sys@pdb1 AS SYSDBA

COLUMN client_name FORMAT A35
COLUMN task_name FORMAT a30

SELECT client_name, task_name
FROM dba_autotask_task;

CLIENT_NAME                         TASK_NAME
----------------------------------- ------------------------------
auto optimizer stats collection     gather_stats_prog
auto space advisor                  auto_space_advisor_prog
sql tuning advisor                  AUTO_SQL_TUNING_PROG

SQL>
You shouldn't alter the "sql tuning advisor" client directly to control baseline evolution. Instead, amend the parameters of the SYS_AUTO_SPM_EVOLVE_TASK advisor task.

CONN sys@pdb1 AS SYSDBA

COLUMN parameter_name FORMAT A25
COLUMN parameter_value FORMAT a15

SELECT parameter_name, parameter_value
FROM dba_advisor_parameters
WHERE task_name = 'SYS_AUTO_SPM_EVOLVE_TASK'
AND parameter_value != 'UNUSED'
ORDER BY parameter_name;

PARAMETER_NAME            PARAMETER_VALUE
------------------------- ---------------
ACCEPT_PLANS              TRUE
DAYS_TO_EXPIRE            UNLIMITED
DEFAULT_EXECUTION_TYPE    SPM EVOLVE
EXECUTION_DAYS_TO_EXPIRE  30
JOURNALING                INFORMATION
MODE                      COMPREHENSIVE
TARGET_OBJECTS            1
TIME_LIMIT                3600
_SPM_VERIFY               TRUE

SQL>
If you don't wish existing baselines to be evolved automatically, set the ACCEPT_PLANS parameter to FALSE.
BEGIN
  DBMS_SPM.set_evolve_task_parameter(
    task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
    parameter => 'ACCEPT_PLANS',
    value     => 'FALSE');
END;
/
Typically, the ACCEPT_PLANS and TIME_LIMIT parameters will be the only ones you will interact with. The rest of this article assumes you have the default settings for these parameters. If you have modified them, switch them back to the default values using the following code.
BEGIN
  DBMS_SPM.set_evolve_task_parameter(
    task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
    parameter => 'ACCEPT_PLANS',
    value     => 'TRUE');
END;
/

BEGIN
  DBMS_SPM.set_evolve_task_parameter(
    task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
    parameter => 'TIME_LIMIT',
    value     => 3600);
END;
/
The DBMS_SPM package has a function called REPORT_AUTO_EVOLVE_TASK to display information about the actions taken by the automatic evolve task. With no parameters specified, it produces a text report for the latest run of the task.

SET LONG 1000000 PAGESIZE 1000 LONGCHUNKSIZE 100 LINESIZE 100

SELECT DBMS_SPM.report_auto_evolve_task
FROM dual;

REPORT_AUTO_EVOLVE_TASK
--------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
---------------------------------------------------------------------------------------------

Task Information:
---------------------------------------------
Task Name : SYS_AUTO_SPM_EVOLVE_TASK
Task Owner : SYS
Description : Automatic SPM Evolve Task
Execution Name : EXEC_1
Execution Type : SPM EVOLVE
Scope : COMPREHENSIVE
Status : COMPLETED
Started : 02/17/2015 06:00:04
Finished : 02/17/2015 06:00:04
Last Updated : 02/17/2015 06:00:04
Global Time Limit : 3600
Per-Plan Time Limit : UNUSED
Number of Errors : 0
---------------------------------------------------------------------------------------------

SUMMARY SECTION
---------------------------------------------------------------------------------------------
Number of plans processed : 0
Number of findings : 0
Number of recommendations : 0
Number of errors : 0
---------------------------------------------------------------------------------------------

SQL>

Manually Evolving SQL Plan Baselines

In previous releases, evolving SQL plan baselines was done using the EVOLVE_SQL_PLAN_BASELINE function. In 12c this has been replaced by a task-based approach, which typically involves the following steps.
  • CREATE_EVOLVE_TASK
  • EXECUTE_EVOLVE_TASK
  • REPORT_EVOLVE_TASK
  • IMPLEMENT_EVOLVE_TASK
In addition, the following routines can interact with an evolve task; a brief sketch of calling them follows the list.
  • CANCEL_EVOLVE_TASK
  • RESUME_EVOLVE_TASK
  • RESET_EVOLVE_TASK
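The walkthrough below does not exercise these three routines, so here is a minimal sketch of how they might be called. 'TASK_21' is only a placeholder for whatever name CREATE_EVOLVE_TASK returns in your session, and the task_name parameter is assumed by analogy with the other evolve-task routines shown in this article.

-- Stop an evolve task that is currently executing.
EXEC DBMS_SPM.cancel_evolve_task(task_name => 'TASK_21');

-- Continue an evolve task that was previously interrupted.
EXEC DBMS_SPM.resume_evolve_task(task_name => 'TASK_21');

-- Clear all intermediate results so the task can be executed again from scratch.
EXEC DBMS_SPM.reset_evolve_task(task_name => 'TASK_21');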
In order to show this in action we need to create a SQL plan baseline, so the rest of this section is an update of the 11g process to manually create a baseline and evolve it.
CONN test/test@pdb1

DROP TABLE spm_test_tab PURGE;

CREATE TABLE spm_test_tab (
  id          NUMBER,
  description VARCHAR2(50)
);

INSERT /*+ APPEND */ INTO spm_test_tab
SELECT level,
       'Description for ' || level
FROM   dual
CONNECT BY level <= 10000;
COMMIT;
Query the table using an unindexed column, which results in a full table scan.
SET AUTOTRACE TRACE

SELECT description
FROM spm_test_tab
WHERE id = 99;

Execution Plan
----------------------------------------------------------
Plan hash value: 1107868462

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 14 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| SPM_TEST_TAB | 1 | 25 | 14 (0)| 00:00:01 |
----------------------------------------------------------------------------------
Identify the SQL_ID of the SQL statement by querying the V$SQL view.
CONN sys@pdb1 AS SYSDBA

SELECT sql_id
FROM v$sql
WHERE plan_hash_value = 1107868462
AND sql_text NOT LIKE 'EXPLAIN%';

SQL_ID
-------------
gat6z1bc6nc2d

SQL>
Use this SQL_ID to manually load the SQL plan baseline.
SET SERVEROUTPUT ON
DECLARE
  l_plans_loaded PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
    sql_id => 'gat6z1bc6nc2d');

  DBMS_OUTPUT.put_line('Plans Loaded: ' || l_plans_loaded);
END;
/
Plans Loaded: 1

PL/SQL procedure successfully completed.

SQL>
The DBA_SQL_PLAN_BASELINES view provides information about the SQL plan baselines. We can see there is a single plan associated with our baseline, which is both enabled and accepted.
COLUMN sql_handle FORMAT A20
COLUMN plan_name FORMAT A30

SELECT sql_handle, plan_name, enabled, accepted
FROM dba_sql_plan_baselines
WHERE sql_text LIKE '%spm_test_tab%'
AND sql_text NOT LIKE '%dba_sql_plan_baselines%';

SQL_HANDLE           PLAN_NAME                      ENA ACC
-------------------- ------------------------------ --- ---
SQL_7b76323ad90440b9 SQL_PLAN_7qxjk7bch8h5tb65c37c8 YES YES

SQL>
Flush the shared pool to force another hard parse, create an index on the ID column, then repeat the query to see the effect on the execution plan.
CONN sys@pdb1 AS SYSDBA
ALTER SYSTEM FLUSH SHARED_POOL;

CONN test/test@pdb1

CREATE INDEX spm_test_tab_idx ON spm_test_tab(id);
EXEC DBMS_STATS.gather_table_stats(USER, 'SPM_TEST_TAB', cascade=>TRUE);

SET AUTOTRACE TRACE

SELECT description
FROM spm_test_tab
WHERE id = 99;

Execution Plan
----------------------------------------------------------
Plan hash value: 1107868462

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 14 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| SPM_TEST_TAB | 1 | 25 | 14 (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ID"=99)

Note
-----
- SQL plan baseline "SQL_PLAN_7qxjk7bch8h5tb65c37c8" used for this statement
Notice the query doesn't use the newly created index, even though we forced a hard parse. The note explains that the SQL plan baseline was used. Looking at the DBA_SQL_PLAN_BASELINES view, we can see why.
CONN sys@pdb1 AS SYSDBA

SELECT sql_handle, plan_name, enabled, accepted
FROM dba_sql_plan_baselines
WHERE sql_handle = 'SQL_7b76323ad90440b9';

SQL_HANDLE           PLAN_NAME                      ENA ACC
-------------------- ------------------------------ --- ---
SQL_7b76323ad90440b9 SQL_PLAN_7qxjk7bch8h5t3652c362 YES NO
SQL_7b76323ad90440b9 SQL_PLAN_7qxjk7bch8h5tb65c37c8 YES YES

SQL>
The SQL plan baseline now contains a second plan, but it has not yet been accepted.
Note: If you don't see the new row in the DBA_SQL_PLAN_BASELINES view, go back and rerun the query against "spm_test_tab" until you do. It sometimes takes the server a few attempts before it notices the need for additional plans.

For the new plan to be used we need to wait for the maintenance window or manually evolve the SQL plan baseline. Create a new evolve task for this baseline.
SET SERVEROUTPUT ON
DECLARE
  l_return VARCHAR2(32767);
BEGIN
  l_return := DBMS_SPM.create_evolve_task(sql_handle => 'SQL_7b76323ad90440b9');
  DBMS_OUTPUT.put_line('Task Name: ' || l_return);
END;
/
Task Name: TASK_21

PL/SQL procedure successfully completed.

SQL>
Execute the evolve task.
SET SERVEROUTPUT ON
DECLARE
  l_return VARCHAR2(32767);
BEGIN
  l_return := DBMS_SPM.execute_evolve_task(task_name => 'TASK_21');
  DBMS_OUTPUT.put_line('Execution Name: ' || l_return);
END;
/
Execution Name: EXEC_21

PL/SQL procedure successfully completed.

SQL>
Report on the result of the evolve task.
SET LONG 1000000 PAGESIZE 1000 LONGCHUNKSIZE 100 LINESIZE 100

SELECT DBMS_SPM.report_evolve_task(task_name => 'TASK_21', execution_name => 'EXEC_21') AS output
FROM dual;

OUTPUT
----------------------------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
---------------------------------------------------------------------------------------------

Task Information:
---------------------------------------------
Task Name : TASK_21
Task Owner : SYS
Execution Name : EXEC_21
Execution Type : SPM EVOLVE
Scope : COMPREHENSIVE
Status : COMPLETED
Started : 02/18/2015 08:37:41
Finished : 02/18/2015 08:37:41
Last Updated : 02/18/2015 08:37:41
Global Time Limit : 2147483646
Per-Plan Time Limit : UNUSED
Number of Errors : 0
---------------------------------------------------------------------------------------------

SUMMARY SECTION
---------------------------------------------------------------------------------------------
Number of plans processed : 1
Number of findings : 1
Number of recommendations : 1
Number of errors : 0
---------------------------------------------------------------------------------------------

DETAILS SECTION
---------------------------------------------------------------------------------------------
Object ID : 2
Test Plan Name : SQL_PLAN_7qxjk7bch8h5t3652c362
Base Plan Name : SQL_PLAN_7qxjk7bch8h5tb65c37c8
SQL Handle : SQL_7b76323ad90440b9
Parsing Schema : TEST
Test Plan Creator : TEST
SQL Text : SELECT description FROM spm_test_tab WHERE id = 99

Execution Statistics:
-----------------------------
Base Plan Test Plan
---------------------------- ----------------------------
Elapsed Time (s): .000019 .000005
CPU Time (s): .000022 0
Buffer Gets: 4 0
Optimizer Cost: 14 2
Disk Reads: 0 0
Direct Writes: 0 0
Rows Processed: 0 0
Executions: 10 10


FINDINGS SECTION
---------------------------------------------------------------------------------------------

Findings (1):
-----------------------------
1. The plan was verified in 0.02000 seconds. It passed the benefit criterion
because its verified performance was 15.00740 times better than that of the
baseline plan.

Recommendation:
-----------------------------
Consider accepting the plan. Execute
dbms_spm.accept_sql_plan_baseline(task_name => 'TASK_21', object_id => 2,
task_owner => 'SYS');


EXPLAIN PLANS SECTION
---------------------------------------------------------------------------------------------

Baseline Plan
-----------------------------
Plan Id : 101
Plan Hash Value : 3059496904

-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 14 | 00:00:01 |
| * 1 | TABLE ACCESS FULL | SPM_TEST_TAB | 1 | 25 | 14 | 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 1 - filter("ID"=99)


Test Plan
-----------------------------
Plan Id : 102
Plan Hash Value : 911393634

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 2 | 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED | SPM_TEST_TAB | 1 | 25 | 2 | 00:00:01 |
| * 2 | INDEX RANGE SCAN | SPM_TEST_TAB_IDX | 1 | | 1 | 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 2 - access("ID"=99)

---------------------------------------------------------------------------------------------

SQL>
If the evolve task has completed and has reported recommendations, implement them. The recommendation suggests using ACCEPT_SQL_PLAN_BASELINE, but you should really use IMPLEMENT_EVOLVE_TASK.
SET SERVEROUTPUT ON
DECLARE
  l_return NUMBER;
BEGIN
  l_return := DBMS_SPM.implement_evolve_task(task_name => 'TASK_21');
  DBMS_OUTPUT.put_line('Plans Accepted: ' || l_return);
END;
/
Plans Accepted: 1

PL/SQL procedure successfully completed.

SQL>
The DBA_SQL_PLAN_BASELINES view shows the second plan has now been accepted.
CONN sys@pdb1 AS SYSDBA

SELECT sql_handle, plan_name, enabled, accepted
FROM dba_sql_plan_baselines
WHERE sql_handle = 'SQL_7b76323ad90440b9';

SQL_HANDLE           PLAN_NAME                      ENA ACC
-------------------- ------------------------------ --- ---
SQL_7b76323ad90440b9 SQL_PLAN_7qxjk7bch8h5t3652c362 YES YES
SQL_7b76323ad90440b9 SQL_PLAN_7qxjk7bch8h5tb65c37c8 YES YES

SQL>
Repeating the earlier test shows the more efficient plan is now available for use.
CONN test/test@pdb1

SET AUTOTRACE TRACE LINESIZE 130

SELECT description
FROM spm_test_tab
WHERE id = 99;

Execution Plan
----------------------------------------------------------
Plan hash value: 2338891031

--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SPM_TEST_TAB | 1 | 25 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SPM_TEST_TAB_IDX | 1 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=99)

Note
-----
- SQL plan baseline "SQL_PLAN_7qxjk7bch8h5t3652c362" used for this statement
If you want to remove the plans, drop them using the DROP_SQL_PLAN_BASELINE function.

CONN sys@pdb1 AS SYSDBA

SET SERVEROUTPUT ON
DECLARE
  l_plans_dropped PLS_INTEGER;
BEGIN
  l_plans_dropped := DBMS_SPM.drop_sql_plan_baseline(sql_handle => 'SQL_7b76323ad90440b9');
  DBMS_OUTPUT.put_line('Plans Dropped: ' || l_plans_dropped);
END;
/
Plans Dropped: 2

PL/SQL procedure successfully completed.

SQL>

Do you Know What your Children are Doing on the Internet?

Excessive use of anything is termed addiction, and it wouldn't be wrong to say that children today are addicted to the internet. With the availability of smartphones, kids are hooked to the cyber world 24/7.

This has opened unregulated territory to young, impressionable minds. They are exposed to explicit sexual content, to strangers ready to prey on them for money or other gratification, and to identity theft.


It’s a reality that children today are overly obsessed with the internet.

But do parents know what their children are up to? If they don’t, then it’s important for them to wake up and keep an eye on their kids’ activities.

Dangers of the cyber world!

Addiction is the continued use of something despite the urge to put an end to it. For many children, the internet is a lifeline: being cut off from it leads to frustration, depression and impatience, while excessive use results in weak family bonds, poor grades and sleepless nights.

Access to Sexually Explicit Material

Secondly, the cyber world knows no age limits. Many websites do have age restrictions, but they are very easy to bypass, so sexually explicit material is readily available online. This leads to unbounded curiosity and restlessness among the young ones.

They find ways to explore this arena which may lead to befriending strangers.

Disclosure of Confidential Information

There are many unethical individuals lurking around seeking personal information to commit cybercrime. Children end up disclosing more information online than necessary, which puts their parents at high risk of identity theft.

Similarly, these unethical individuals are skilled at luring young minds into dangerous activities like credit card scams or stalking. Seemingly they pose no threat and come across as well-wishers, but children rarely know better.

Cyber-bullying

So how can parents keep watch? Sitting with children whenever they are online is mission impossible. Internet use is no longer limited to a single desktop in the lounge; laptops, tablets and smartphones have made it accessible anywhere, anytime.

How to Monitor their Internet Use?

It is imperative that children do not ever find out that their parents know what they are doing online. Parents need to be one step ahead of their children and should know a wee bit more than kids. 

If parental controls are to be set, they should be such that children don’t know how to bypass them or else it’s all utterly useless.

Telling children directly is also not the solution, as this will raise serious trust and privacy issues. As it is, children are losing out on strong family bonds because of the excessive time they spend on the internet.

So parents need to tread on this sensitive issue very carefully without breaking the weakening communication link.

Restricting kids’ Internet Use

The best solution is to turn to technology! With innovative and dynamic monitoring applications available, it is very easy to address all of the problems described above and still keep an eye on children.

Effective Parental Monitoring System

Effective parental control and internet filtering software allows you to monitor internet usage on smartphones. Any powerful package lets you filter, block and monitor your child’s internet activities, and you can simply set time limits during which the device cannot be used.

Secondly, specific websites and applications can be blocked without your children knowing about it. They will never even suspect you.

Internet filtering software is user friendly and efficient. There are many free packages available that allow you to provide a protected and secure cyber environment for your children. You can search within this blog, as we have posted plenty of methods already.

Conclusion

So stop fretting over losing control and take the reins in your hands! Parents need to get their act together and educate themselves.

All this allows you to tie up the loose ends and build a healthy relationship with your children.

Galaxy S6 camera stacks up against iPhone 6 and iPhone 6 Plus, Here's How?

The Galaxy S6 (and Galaxy S6 edge) have been receiving rave reviews from various publications. The latest Galaxies are easily the best smartphones Samsung has ever manufactured.

Samsung has also been highlighting how good the 16MP f/1.9 rear camera on the Galaxy S6 is. But how exactly does it stack up against the excellent 8MP shooter found on the iPhone 6 and iPhone 6 Plus?

We take a look at some of the comparisons done by other publications to find out.

According to The Verge, the Galaxy S6 camera is “fast, reliable and takes great photos,” and is “easily the best camera on any Android phone ever.” The shooter is also able to hold its own against the iPhone 6 Plus and is able to consistently shoot decent pictures irrespective of the situation.


On the whole, the S6 holds its own against the iPhone, and we wouldn’t hesitate for a second to use it as our primary smartphone camera.

The comparison images from the website show that, while the white balance of the two handsets differs significantly, both are able to produce usable photos in various conditions.

In their comparison, Business Insider found that the Galaxy S6 is able to take brighter photos than the iPhone 6 Plus in low-light, but the latter is still able to produce better images as they are sharper. They even pitted HTC’s latest flagship — the One M9 — against the iPhone 6 and Galaxy S6, but the 20MP module on the handset fell flat on its face due to its tendency to over-expose photos.

The overall winner? The iPhone 6. It took the best photos overall, especially indoors and in low light. The Galaxy S6 was also quite good, coming in very close to the iPhone in most settings. The HTC did spectacularly well in a couple of outdoor settings, but overall seemed to have problems with exposure.

Unlike Business Insider and The Verge, CNET pitted the iPhone 6 against the Galaxy S6 and the HTC One M9. The publication echoed the same thoughts as Business Insider: While the Galaxy S6 took brighter shots in low-light, the iPhone 6 managed to capture more details and produce sharper images.

 
As for the iPhone, its biggest strength is with low-light environments. Though it won’t have the brightest exposure in the end per se, its photos are sharper and look more natural. It also reduces the amount of lens flare beaming from different light sources. In addition, its white balance captures the purest and cleanest white hues.

Overall though, the publication found that the cameras on the iPhone 6 and Galaxy S6 are equally good, while that on the HTC One M9 is a disappointment.

All in all, the M9 proved a disappointment, while the Galaxy S6 and the iPhone 6 were pretty neck-and-neck. Personally, I’d give the Galaxy S6 the slight edge, since I’m partial to its saturated tones that come off bright without looking too unrealistic (a characteristic that plagued Galaxy cameras before).

It looks like Samsung has finally managed to catch up to Apple in terms of camera performance on its devices. Do keep in mind, though, that the iPhone 6 and iPhone 6 Plus are six months old at this point and the next iPhone is only about six months away, while the Galaxy S6 is going to be Samsung’s flagship handset for the next year.

Driver Toolkit 8.4 Full Version With License Key Download

DriverToolkit scans PC devices and detects the best drivers for your PC with our Superlink Driver-Match Technology. You may specify the driver package to download, or download all recommended driver packages with one click. When the download is finished, just click the ‘Install’ button to start driver installation.






It's quick and easy!

The Ultimate Solution for PC Drivers

  • Download & Update the latest drivers for your PC
  • Quick fix unknown, outdated or corrupted drivers
  • Features including driver backup, restore & uninstall
  • 8,000,000+ database of hardware & drivers
  • Designed for Windows 8, 7, Vista & XP (32 & 64-bit)

Why Choose DriverToolkit?

  • Quick Fix Driver Problems

    Hardware devices that don't work or perform erratically can often be traced to missing or outdated drivers. DriverToolkit automatically checks for driver updates, keeps your drivers up to date, and keeps your PC running at peak performance!
  • Excellent at Searching Drivers

    No more frustrating searches for drivers. Let DriverToolkit do the hard work for you. Our daily-updated driver database contains more than 8,000,000 driver entries, which empowers DriverToolkit to offer the latest official drivers for 99.9% of hardware devices from all PC vendors.

  • Simple and Easy to Use

    DriverToolkit has an easy-to-use interface. It is fast, obvious and instantly intuitive. Any driver issue can be fixed in a few clicks, and no prior knowledge is required. It's so simple you can't do anything wrong!

  • 100% Safe and Secure

    All drivers come from official manufacturers, and are double-checked by our computer professionals. Besides, DriverToolkit backs up your current drivers before any new driver installation by default, and you can restore old drivers whenever you want with one click.

 Download

Microsoft Age-Guessing Site Uses Face-Recognition Tech


Microsoft Research peels back the curtain on its viral hit, How-Old.net, which uses a machine-learning technology called Project Oxford. HoloLens, Microsoft's buzz-worthy augmented-reality technology, wasn't the only thing that resonated with the IT community following last week's Build developer conference.


How-Old.net, a Website where users upload photos and it guesses their age and gender, was a hit on social media and attracted widespread tech coverage over the weekend. Once photos are uploaded, the site draws a box around the subjects' faces, along with their ages and a male or female icon.

Just three hours after the team sent an internal email, users flooded the Internet with screenshots of their supposed age, which ranged from spot-on to humorously inaccurate.

"Within hours, over 210,000 images had been submitted and we had 35,000 users from all over the world (about 29K of them from Turkey, as it turned out—apparently there were a bunch of tweets from Turkey mentioning this page)," Corom Thompson and Santosh Balasubramanian, engineers in Information Management and Machine Learning at Microsoft, wrote in a company blog post.

Predictably, many users uploaded images of celebrities and other recognizable people. "But over half the pictures analyzed were of people uploading their own images," said Thompson and Balasubramanian. "This insight prompted us to improve the user experience and we did some additional testing around image uploads from mobile devices."

The site is based, in part, on the face-recognition component of Project Oxford, a collection of Azure machine-learning application programming interfaces (APIs) and services currently in beta. "This technology automatically recognizes faces in photos, groups faces that look alike and verifies whether two faces are the same," Allison Linn, a Microsoft Research writer, stated in a separate blog post.

Apart from guessing ages, Linn noted that the technology has other, potentially more business-friendly applications.  "It can be used for things like easily recognizing which users are in certain photos and allowing a user to log in using face authentication."

Thompson and Balasubramanian admitted that How-Old.net may miss the mark. "Now, while the API is reasonably good at locating the faces and identifying gender, it isn't particularly accurate with age, but it's often good for a laugh and users have fun with it."



Microsoft is increasingly relying on its machine-learning research to enhance its software and services portfolio. "We want to have rich application services, in particular, data services such as machine learning, and democratize the access to those capabilities so that every developer on every platform can build intelligent apps," said CEO Satya Nadella during his opening remarks at Build.

In February, Microsoft announced the general availability of its cloud-based predictive analytics offering, Microsoft Azure Machine Learning. T. K. Rengarajan, corporate vice president of the Data Platform unit, and Joseph Sirosh, corporate vice president of Machine Learning said in a statement at the time that "developers and data scientists can build and deploy apps to improve customer experiences, predict and prevent system failures, enhance operational efficiencies, uncover new technical insights, or a universe of other benefits" with the big data processing platform in mere hours.

Google Buys iOS Time-Management App Vendor Timeful

Google has acquired Timeful, a vendor of an iOS-only time-management application, for an undisclosed sum. Will the app soon be available on Android devices?

The purchase gives Google access to technology designed to let people organize and schedule daily activities more efficiently. The software's key feature is its ability to intelligently suggest times during the day or week for users to accomplish tasks on their to-do lists based on their habits and other scheduled events on their calendar.


"You can tell Timeful you want to exercise three times a week or that you need to call the bank by next Tuesday, and their system will make sure you get it done based on an understanding of both your schedule and your priorities," Google's Director of Product Development Alex Gawley said in announcing the purchase Monday.

Timeful's technology will work with Gmail, Google Inbox, Calendar and future time-management and scheduling apps from the company. "The Timeful team has built an impressive system that helps you organize your life by understanding your schedule, habits and needs," the company said.

In a notice announcing Google's purchase of the company, Timeful said that iPhone users would be able to continue to download and use the mobile app as always. Users who choose to can also export their data from the site at any time. Moving forward, Timeful will focus on developing new projects in conjunction with Google, the company said.

Timeful is a free application first introduced in Apple's App Store just last July. It works with iCal, Outlook and Google Calendar. The application brings together in one calendar the user's scheduled events, to-do items and habits, such as daily jogging or walking. It then runs what the company has described as sophisticated algorithms to figure out the optimal times during the day or week for the user to accomplish items on their list of things to do.

Mobile-application ranking website App Annie describes Timeful as an application that learns from the user's behavior, adapts to his or her schedule and gets better at personalizing recommendations with continued use over time. "Timeful brings everything that competes for your time together into one place–your meetings, events, to-dos, and even good habits you're looking to develop," App Annie noted in its description of the software. Some, like Business Insider, have rated Timeful as an extremely useful application for iOS users.

User response to the application itself, however, appears somewhat muted. Soon after its release last year, the application ranked briefly among the top productivity applications for iOS in App Annie. But in the past several months, it has ranked well below 100 in most of the markets in which it is available. At the time of Google's announcement Monday, Timeful ranked 456 among the list of most popular productivity applications for iOS in the United States.

A handful of user reviews on Apple's iTunes appeared to reflect some frustration among users with recent tweaks to the product.
"I used your app faithfully for months and had a good understanding of how it worked," a reviewer using the handle Michaelphd noted. "It was my favorite calendar/to-do app by far. But when you started forcing my to-do's into my calendar, I deleted the app."

Another reviewer going by the name bro12345621 lamented Timeful's decision to do away with its suggestions feature altogether. "Automatically setting times for tasks without my consent just leads to annoying notifications unless I take the time to reschedule every single one I don't want to do."

It's too soon to say what Google's plans for the product are, but some might assume that the company will make a version of Timeful available for Android users as well.

Rombertik Malware Corrupts Drives to Prevent Code Analysis

The malware, which attempts to steal information about Web sites and users, deletes the master boot record—or all user files—to avoid detection, according to a Cisco analysis.

Attackers are adopting increasingly malicious tactics to evade security researchers’ analysis efforts, with a recently discovered data-stealing program erasing the master boot record of a system’s hard drive if it detects signs of an analysis environment, according to a report published by Cisco on May 4.

The malware, dubbed Rombertik, compromises systems and attempts to steal information, such as login credentials and personal information, from the victim’s browser sessions, researchers with Cisco’s Talos security intelligence group stated in the report.

When the malware installs itself, the software runs several anti-analysis checks, attempting to determine if the system on which it is running is an analysis environment. If the last check fails, the malware deletes the master boot record, or MBR, which is required to correctly start up the computer system.

“The interesting bit with Rombertik is that we are seeing malware authors attempting to be incredibly evasive,” Alexander Chiu, a threat researcher with Cisco, said in an e-mail. “If Rombertik detects it’s being analyzed running in memory, it actively tries to trash the MBR of the computer it’s running on. This is not common behavior.”

Attackers are increasingly attempting to prevent defenders from analyzing the tools and programs they use to conduct criminal and espionage operations. In a recent analysis, researchers with security firm Seculert found a variant of the Dyre banking trojan that used a simple check—counting the number of processing cores—to detect if it was in a virtual environment.

Rombertik has been identified as propagating via spam and phishing messages sent to would-be victims. Like previous spam and phishing campaigns Talos has discussed, attackers use social engineering tactics to entice users to download, unzip, and open the attachments that ultimately result in the user’s compromise.







“At a high level, Rombertik is a complex piece of malware that is designed to hook into the user’s browser to read credentials and other sensitive information for exfiltration to an attacker controlled server, similar to Dyre,” Cisco’s researchers stated in the report. “However, unlike Dyre which was designed to target banking information, Rombertik collects information from all websites in an indiscriminate manner.”

Rombertik is distributed through various spam campaigns, often camouflaged as a PDF file. In reality, the attachment is a screensaver executable which, if the user opens the binary, attempts to run on the system. The prevalence of the malware is currently not known.

During an installation attempt, Rombertik attempts multiple times to determine if it might be in an analysis environment. The program has a lot of unused code, including uncalled functions and images which the malware authors included to try to camouflage the malware’s functionality, Cisco’s researchers stated.

The program also attempts to outlast automated analysis by writing a byte to memory nearly a billion times. Automated systems are often designed to run for a limited length of time, so as to efficiently process as many files as possible. The technique of writing data so many times could potentially crash some environments, Cisco stated.

“If an analysis tool attempted to log all of the 960 million write instructions, the log would grow to over 100 gigabytes,” the researchers said. “Even if the analysis environment was capable of handling a log that large, it would take over 25 minutes just to write that much data to a typical hard drive. This complicates analysis.”

When it reaches its final check, Rombertik deletes the MBR or, if it is unable to, deletes all files in the user’s account, according to Cisco.

Ace Translator 14.5 with Text-to-Speech Full Version for Windows


Ace Translator employs the power of Internet machine language translation engines, and enables you to easily translate web content, letters, chat, and emails between major international languages. The new version 14 supports 91 languages, with text-to-speech (TTS) support for 46 of them, which makes it an ideal language-learning app as well.


 Download

Ace Translator supports translations between the following 91 languages; 46 of them have the TTS feature enabled.

   English
   Latin
   French     Français
   German     Deutsch
   Italian     Italiano
   Dutch     Nederlands
   Portuguese     português
   Spanish     Español
   Catalan     català
   Greek     Ελληνικά
   Russian     русский
   Chinese (Simplified)     中文(简体)
   Chinese (Traditional)     中文(繁體)
   Japanese     日本語
   Korean     한국어
   Finnish     suomi
   Czech     čeština
   Danish     Dansk
   Romanian     Română
   Bulgarian     български
   Croatian     hrvatski
   Urdu     اردو
   Punjabi     ਪੰਜਾਬੀ
   Tamil     தமிழ்
   Hindi     हिन्दी
   Gujarati     ગુજરાતી
   Kannada     ಕನ್ನಡ
   Telugu     తెలుగు
   Marathi     मराठी
   Malayalam     മലയാളം
   Bengali     বাংলা
   Indonesian     Bahasa Indonesia
   Javanese     Basa Jawa
   Filipino
   Cebuano
   Latvian     latviešu
   Lithuanian     lietuvių
   Norwegian     norsk
   Serbian     српски
   Ukrainian     українська
   Slovak     slovenčina
   Slovenian     slovenščina
   Swedish     svenska
   Polish     polski
   Vietnamese     Tiếng Việt
   Arabic     العربية
   Hebrew     עברית
   Turkish     Türkçe
   Hungarian     magyar
   Thai     ภาษาไทย
   Albanian     Shqip
   Maltese     Malti
   Estonian     eesti
   Belarusian     беларуская
   Icelandic     íslenska
   Malay     Bahasa Melayu
   Irish     Gaeilge
   Macedonian     македонски
   Persian     فارسی
   Galician     galego
   Welsh     Cymraeg
   Yiddish     אידיש
   Zulu     isiZulu
   Afrikaans
   Swahili     Kiswahili
   Hausa     Harshen Hausa
   Haitian Creole     Kreyòl Ayisyen
   Armenian     հայերեն
   Azerbaijani     Azərbaycanca
   Georgian     ქართული
   Basque     euskara
   Esperanto
   Bosnian     bosanski
   Hmong
   Lao     ພາສາລາວ
   Khmer     ភាសាខ្មែរ
   Burmese     မြန်မာဘာသာ
   Igbo     Asụsụ Igbo
   Yoruba     Èdè Yorùbá
   Maori     Māori
   Nepali     नेपाली
   Somali     Soomaali
   Mongolian     Монгол
   Sinhala     සිංහල
   Tajik     Тоҷикӣ
   Uzbek     O‘zbek
   Kazakh     қазақ
   Sundanese     Basa Sunda
   Sesotho
   Malagasy

   Chichewa
  

System Requirements:
Microsoft Windows 10/8.1/7/Vista/XP/2012/2008/2003
An active Internet connection

Linux Systemd Essentials: Working with Services, Units, and the Journal

In recent years, Linux distributions have increasingly transitioned from other init systems to systemd. The systemd suite of tools provides a fast and flexible init model for managing an entire machine from boot onwards.

In this guide, we'll give you a quick run through of the most important commands you'll want to know for managing a systemd enabled server. These should work on any server that implements systemd (any OS version at or above Ubuntu 15.04, Debian 8, CentOS 7, Fedora 15). Let's get started.


Basic Unit Management

The basic object that systemd manages and acts upon is a "unit". Units can be of many types, but the most common type is a "service" (indicated by a unit file ending in .service). To manage services on a systemd-enabled server, our main tool is the systemctl command.

All of the normal init system commands have equivalent actions with the systemctl command. We will use the nginx.service unit to demonstrate (you'll have to install Nginx with your package manager to get this service file).

For instance, we can start the service by typing:
sudo systemctl start nginx.service

We can stop it again by typing:
sudo systemctl stop nginx.service

To restart the service, we can type:
sudo systemctl restart nginx.service

To attempt to reload the service without interrupting normal functionality, we can type:
sudo systemctl reload nginx.service

Enabling or Disabling Units

By default, most systemd unit files are not started automatically at boot. To configure this functionality, you need to "enable" the unit. This hooks it up to a certain boot "target", causing it to be triggered when that target is started.

To enable a service to start automatically at boot, type:
sudo systemctl enable nginx.service

If you wish to disable the service again, type:
sudo systemctl disable nginx.service

Getting an Overview of the System State

There is a great deal of information that we can pull from a systemd server to get an overview of the system state.

For instance, to get all of the unit files that systemd has listed as "active", type (you can actually leave off the list-units as this is the default systemctl behavior):

  • systemctl list-units



To list all of the units that systemd has loaded or attempted to load into memory, including those that are not currently active, add the --all switch:

  • systemctl list-units --all



To list all of the units installed on the system, including those that systemd has not tried to load into memory, type:

  • systemctl list-unit-files



Viewing Basic Log Information

A systemd component called journald collects and manages journal entries from all parts of the system. This is basically log information from applications and the kernel.

To see all log entries, starting at the oldest entry, type:

  • journalctl



By default, this will show you entries from the current and previous boots if journald is configured to save previous boot records. Some distributions enable this by default, while others do not (to enable this, either edit the /etc/systemd/journald.conf file and set the Storage= option to "persistent", or create the persistent directory by typing sudo mkdir -p /var/log/journal).
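As a sketch, and assuming the stock file location and service name, persistent storage can be switched on like this; the [Journal] section and Storage= option are the relevant parts of /etc/systemd/journald.conf:

# Relevant fragment of /etc/systemd/journald.conf
[Journal]
Storage=persistent

# Or create the directory journald treats as the persistent store,
# then restart journald so it starts writing there:
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald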

If you only wish to see the journal entries from the current boot, add the -b flag:

  • journalctl -b



To see only kernel messages, such as those that are typically represented by dmesg, you can use the -k flag:

  • journalctl -k



Again, you can limit this only to the current boot by appending the -b flag:
journalctl -k -b

Querying Unit States and Logs

While the above commands gave you access to the general system state, you can also get information about the state of individual units.

To see an overview of the current state of a unit, you can use the status option with the systemctl command. This will show you whether the unit is active, information about the process, and the latest journal entries:

  • systemctl status nginx.service



To see all of the journal entries for the unit in question, give the -u option with the unit name to the journalctl command:

  • journalctl -u nginx.service



As always, you can limit the entries to the current boot by adding the -b flag:
journalctl -b -u nginx.service

Inspecting Units and Unit Files

By now, you know how to modify a unit's state by starting or stopping it, and you know how to view state and journal information to get an idea of what is happening with the process. However, we haven't yet seen how to inspect other aspects of units and unit files.

A unit file contains the parameters that systemd uses to manage and run a unit. To see the full contents of a unit file, type:

  • systemctl cat nginx.service



To see the dependency tree of a unit (which units systemd will attempt to activate when starting the unit), type:

  • systemctl list-dependencies nginx.service



This will show the dependent units, with target units recursively expanded. To expand all dependent units recursively, pass the --all flag:

  • systemctl list-dependencies --all nginx.service



Finally, to see the low-level details of the unit's settings on the system, you can use the show option:

  • systemctl show nginx.service



This will give you the value of each parameter being managed by systemd.

Modifying Unit Files

If you need to make a modification to a unit file, systemd allows you to make changes from the systemctl command itself so that you don't have to go to the actual disk location.

To add a unit file snippet, which can be used to append or override settings in the default unit file, simply call the edit option on the unit:

  • sudo systemctl edit nginx.service
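The edit command opens an editor on a drop-in file (typically /etc/systemd/system/nginx.service.d/override.conf). As a rough sketch, an override only needs to contain the directives you want to change; LimitNOFILE here is just an illustrative choice:

# Possible contents of the override snippet created by "systemctl edit nginx.service".
# Only the settings being changed are listed; everything else still comes
# from the original unit file.
[Service]
LimitNOFILE=65536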



If you prefer to modify the entire content of the unit file instead of creating a snippet, pass the --full flag:

  • sudo systemctl edit --full nginx.service



After modifying a unit file, you should reload the systemd process itself to pick up your changes:

  • sudo systemctl daemon-reload




Using Targets (Runlevels)

Another function of an init system is to transition the server itself between different states. Traditional init systems typically refer to these as "runlevels", allowing the system to only be in one runlevel at any one time.

In systemd, "targets" are used instead. Targets are basically synchronization points that the server can used to bring the server into a specific state. Service and other unit files can be tied to a target and multiple targets can be active at the same time.

To see all of the targets available on your system, type:

  • systemctl list-unit-files --type=target



To view the default target that systemd tries to reach at boot (which in turn starts all of the unit files that make up the dependency tree of that target), type:

  • systemctl get-default



You can change the default target that will be used at boot by using the set-default option:

  • sudo systemctl set-default multi-user.target



To see what units are tied to a target, you can type:

  • systemctl list-dependencies multi-user.target



You can modify the system state to transition between targets with the isolate option. This will stop any units that are not tied to the specified target. Be sure that the target you are isolating does not stop any essential services:

  • sudo systemctl isolate multi-user.target





Stopping or Rebooting the Server

For some of the major states that a system can transition to, shortcuts are available. For instance, to power off your server, you can type:

  • sudo systemctl poweroff



If you wish to reboot the system instead, that can be accomplished by typing:

  • sudo systemctl reboot



You can boot into rescue mode by typing:

  • sudo systemctl rescue



Note that most operating systems include traditional aliases to these operations so that you can simply type sudo poweroff or sudo reboot without the systemctl. However, this is not guaranteed to be set up on all systems.

Next Steps

By now, you should know the basics of how to manage a server that uses systemd. However, there is much more to learn as your needs expand. Below are links to guides with more in-depth information about some of the components we discussed in this guide:

How To Use Systemctl to Manage Systemd Services and Units


Introduction

Systemd is an init system and system manager that is rapidly becoming the new standard for Linux machines. While opinions differ considerably about whether systemd is an improvement over the traditional SysV init systems it is replacing, the majority of distributions plan to adopt it or have already done so.

Due to its heavy adoption, familiarizing yourself with systemd is well worth the trouble, as it will make administrating these servers considerably easier. Learning about and utilizing the tools and daemons that comprise systemd will help you better appreciate the power, flexibility, and capabilities it provides, or at least help you to do your job with minimal hassle.

In this guide, we will be discussing the systemctl command, which is the central management tool for controlling the init system. We will cover how to manage services, check statuses, change system states, and work with the configuration files.

Service Management

The fundamental purpose of an init system is to initialize the components that must be started after the Linux kernel is booted (traditionally known as "userland" components). The init system is also used to manage services and daemons for the server at any point while the system is running. With that in mind, we will start with some simple service management operations.

In systemd, the target of most actions are "units", which are resources that systemd knows how to manage. Units are categorized by the type of resource they represent and they are defined with files known as unit files. The type of each unit can be inferred from the suffix on the end of the file.

For service management tasks, the target unit will be service units, which have unit files with a suffix of .service. However, for most service management commands, you can actually leave off the .service suffix, as systemd is smart enough to know that you probably want to operate on a service when using service management commands.


Starting and Stopping Services

To start a systemd service, executing instructions in the service's unit file, use the start command. If you are running as a non-root user, you will have to use sudo since this will affect the state of the operating system:
sudo systemctl start application.service

As we mentioned above, systemd knows to look for *.service files for service management commands, so the command could just as easily be typed like this:
sudo systemctl start application

Although you may use the above format for general administration, for clarity, we will use the .service suffix for the remainder of the commands, to be explicit about the target we are operating on.

To stop a currently running service, you can use the stop command instead:
sudo systemctl stop application.service


Restarting and Reloading

To restart a running service, you can use the restart command:
sudo systemctl restart application.service

If the application in question is able to reload its configuration files (without restarting), you can issue the reload command to initiate that process:
sudo systemctl reload application.service

If you are unsure whether the service has the functionality to reload its configuration, you can issue the reload-or-restart command. This will reload the configuration in-place if available.

Otherwise, it will restart the service so the new configuration is picked up:
sudo systemctl reload-or-restart application.service


Enabling and Disabling Services

The above commands are useful for starting or stopping services during the current session. To tell systemd to start services automatically at boot, you must enable them.

To start a service at boot, use the enable command:
sudo systemctl enable application.service

This will create a symbolic link from the system's copy of the service file (usually in /lib/systemd/system or /etc/systemd/system) into the location on disk where systemd looks for autostart files (usually /etc/systemd/system/some_target.target.wants; we will go over what a target is later in this guide).
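As an illustration, and assuming the hypothetical application.service unit used throughout this guide is wanted by multi-user.target, you could inspect the link that enable created (a sketch, not literal output):

# Show the autostart symlink created by "systemctl enable application.service".
ls -l /etc/systemd/system/multi-user.target.wants/application.service
# The output should be a symlink pointing back at the system's copy of the
# unit file, e.g. ... -> /lib/systemd/system/application.service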

To disable the service from starting automatically, you can type:
sudo systemctl disable application.service

This will remove the symbolic link that indicated that the service should be started automatically.

Keep in mind that enabling a service does not start it in the current session. If you wish to start the service and enable it at boot, you will have to issue both the start and enable commands.
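A minimal sketch of doing both for the hypothetical application.service unit:

# Start the service for the current session...
sudo systemctl start application.service
# ...and enable it so it also starts at boot.
sudo systemctl enable application.service
# Newer systemd releases also accept "systemctl enable --now" to combine the
# two steps, but the explicit pair above works everywhere.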


Checking the Status of Services

To check the status of a service on your system, you can use the status command:
systemctl status application.service

This will provide you with the service state, the cgroup hierarchy, and the first few log lines.

For instance, when checking the status of an Nginx server, you may see output like this:
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2015-01-27 19:41:23 EST; 22h ago
 Main PID: 495 (nginx)
   CGroup: /system.slice/nginx.service
           ├─495 nginx: master process /usr/bin/nginx -g pid /run/nginx.pid; error_log stderr;
           └─496 nginx: worker process

Jan 27 19:41:23 desktop systemd[1]: Starting A high performance web server and a reverse proxy server...
Jan 27 19:41:23 desktop systemd[1]: Started A high performance web server and a reverse proxy server.

This gives you a nice overview of the current status of the application, notifying you of any problems and any actions that may be required.

There are also methods for checking for specific states. For instance, to check to see if a unit is currently active (running), you can use the is-active command:
systemctl is-active application.service

This will return the current unit state, which is usually active or inactive. The exit code will be "0" if it is active, making the result simpler to parse programmatically.
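Because the exit code mirrors the state, the check drops neatly into shell scripts. A small sketch, again using the hypothetical application.service unit (the --quiet flag suppresses the textual output so only the exit code is used):

# Branch on the exit code of "is-active" instead of parsing its output.
if systemctl is-active --quiet application.service; then
    echo "application.service is running"
else
    echo "application.service is not running"
fi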

To see if the unit is enabled, you can use the is-enabled command:
systemctl is-enabled application.service

This will output whether the service is enabled or disabled and will again set the exit code to "0" or "1" depending on the answer to the command question.
A third check is whether the unit is in a failed state. This indicates that there was a problem starting the unit in question:
systemctl is-failed application.service

This will return active if it is running properly or failed if an error occurred. If the unit was intentionally stopped, it may return unknown or inactive. An exit status of "0" indicates that a failure occurred and an exit status of "1" indicates any other status.


System State Overview

The commands so far have been useful for managing single services, but they are not very helpful for exploring the current state of the system. There are a number of systemctl commands that provide this information.


Listing Current Units

To see a list of all of the active units that systemd knows about, we can use the list-units command:
systemctl list-units

This will show you a list of all of the units that systemd currently has active on the system. The output will look something like this:
UNIT                    LOAD   ACTIVE SUB     DESCRIPTION
atd.service             loaded active running ATD daemon
avahi-daemon.service    loaded active running Avahi mDNS/DNS-SD Stack
dbus.service            loaded active running D-Bus System Message Bus
dcron.service           loaded active running Periodic Command Scheduler
dkms.service            loaded active exited  Dynamic Kernel Modules System
getty@tty1.service      loaded active running Getty on tty1

. . .
The output has the following columns:
  • UNIT: The systemd unit name
  • LOAD: Whether the unit's configuration has been parsed by systemd. The configuration of loaded units is kept in memory.
  • ACTIVE: A summary state about whether the unit is active. This is usually a fairly basic way to tell if the unit has started successfully or not.
  • SUB: This is a lower-level state that indicates more detailed information about the unit. This often varies by unit type, state, and the actual method in which the unit runs.
  • DESCRIPTION: A short textual description of what the unit is/does.
Since the list-units command shows only active units by default, all of the entries above will show "loaded" in the LOAD column and "active" in the ACTIVE column. This display is actually the default behavior of systemctl when called without additional commands, so you will see the same thing if you call systemctl with no arguments:
systemctl

We can tell systemctl to output different information by adding additional flags. For instance, to see all of the units that systemd has loaded (or attempted to load), regardless of whether they are currently active, you can use the --all flag, like this:
systemctl list-units --all

This will show any unit that systemd loaded or attempted to load, regardless of its current state on the system. Some units become inactive after running, and some units that systemd attempted to load may have not been found on disk.

You can use other flags to filter these results. For example, we can use the --state= flag to indicate the LOAD, ACTIVE, or SUB states that we wish to see. You will have to keep the --all flag so that systemctl allows non-active units to be displayed:
systemctl list-units --all --state=inactive

Another common filter is the --type= filter. We can tell systemctl to only display units of the type we are interested in. For example, to see only active service units, we can use:
systemctl list-units --type=service
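
These filters can be combined. For example, to list service units that have run and exited (a pattern you might see with one-off tasks), you could try something like:
systemctl list-units --type=service --all --state=exited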


Listing All Unit Files

The list-units command only displays units that systemd has attempted to parse and load into memory. Since systemd will only read units that it thinks it needs, this will not necessarily include all of the available units on the system. To see every available unit file within the systemd paths, including those that systemd has not attempted to load, you can use the list-unit-files command instead:
systemctl list-unit-files

Units are representations of resources that systemd knows about. Since systemd has not necessarily read all of the unit definitions in this view, it only presents information about the files themselves. The output has two columns: the unit file and the state.
UNIT FILE                              STATE
proc-sys-fs-binfmt_misc.automount      static
dev-hugepages.mount                    static
dev-mqueue.mount                       static
proc-fs-nfsd.mount                     static
proc-sys-fs-binfmt_misc.mount          static
sys-fs-fuse-connections.mount          static
sys-kernel-config.mount                static
sys-kernel-debug.mount                 static
tmp.mount                              static
var-lib-nfs-rpc_pipefs.mount           static
org.cups.cupsd.path                    enabled

. . .
The state will usually be "enabled", "disabled", "static", or "masked". In this context, static means that the unit file does not contain an "install" section, which is used to enable a unit. As such, these units cannot be enabled. Usually, this means that the unit performs a one-off action or is used only as a dependency of another unit and should not be run by itself.

We will cover what "masked" means momentarily.


Unit Management

So far, we have been working with services and displaying information about the unit and unit files that systemd knows about. However, we can find out more specific information about units using some additional commands.


Displaying a Unit File

To display the unit file that systemd has loaded into its system, you can use the cat command (this was added in systemd version 209). For instance, to see the unit file of the atd scheduling daemon, we could type:
systemctl cat atd.service
[Unit]
Description=ATD daemon

[Service]
Type=forking
ExecStart=/usr/bin/atd

[Install]
WantedBy=multi-user.target

The output is the unit file as known to the currently running systemd process. This can be important if you have modified unit files recently or if you are overriding certain options in a unit file fragment (we will cover this later).


Displaying Dependencies

To see a unit's dependency tree, you can use the list-dependencies command:
systemctl list-dependencies sshd.service

This will display a hierarchy mapping the dependencies that must be dealt with in order to start the unit in question. Dependencies, in this context, include those units that are either required by or wanted by the units above it.
sshd.service
├─system.slice
└─basic.target
  ├─microcode.service
  ├─rhel-autorelabel-mark.service
  ├─rhel-autorelabel.service
  ├─rhel-configure.service
  ├─rhel-dmesg.service
  ├─rhel-loadmodules.service
  ├─paths.target
  ├─slices.target

. . .
The recursive dependencies are only displayed for .target units, which indicate system states. To recursively list all dependencies, include the --all flag.

To show reverse dependencies (units that depend on the specified unit), you can add the --reverse flag to the command. Other flags that are useful are the --before and --after flags, which can be used to show units that depend on the specified unit starting before and after themselves, respectively.
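
For example, to see which units pull in sshd.service (assuming that unit exists on your system), you could run:
systemctl list-dependencies --reverse sshd.service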


Checking Unit Properties

To see the low-level properties of a unit, you can use the show command. This will display a list of properties that are set for the specified unit using a key=value format:
systemctl show sshd.service
Id=sshd.service
Names=sshd.service
Requires=basic.target
Wants=system.slice
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=syslog.target network.target auditd.service systemd-journald.socket basic.target system.slice
Description=OpenSSH server daemon

. . .
If you want to display a single property, you can pass the -p flag with the property name. For instance, to see the conflicts that the sshd.service unit has, you can type:
systemctl show sshd.service -p Conflicts
Conflicts=shutdown.target
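
The -p flag can typically be repeated to request several properties at once. For instance, to see both the dependency and ordering information in a single call, you could try:
systemctl show sshd.service -p Wants -p After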


Masking and Unmasking Units

We saw in the service management section how to stop or disable a service, but systemd also has the ability to mark a unit as completely unstartable, automatically or manually, by linking it to /dev/null. This is called masking the unit, and is possible with the mask command:
sudo systemctl mask nginx.service

This will prevent the Nginx service from being started, automatically or manually, for as long as it is masked.

If you check the output of list-unit-files, you will see the service is now listed as masked:
systemctl list-unit-files
. . .

kmod-static-nodes.service static
ldconfig.service static
mandb.service static
messagebus.service static
nginx.service masked
quotaon.service static
rc-local.service static
rdisc.service disabled
rescue.service static

. . .
If you attempt to start the service, you will see a message like this:
sudo systemctl start nginx.service
Failed to start nginx.service: Unit nginx.service is masked.

To unmask a unit, making it available for use again, simply use the unmask command:
sudo systemctl unmask nginx.service

This will return the unit to its previous state, allowing it to be started or enabled.


Editing Unit Files

While the specific format for unit files is outside of the scope of this tutorial, systemctl provides builtin mechanisms for editing and modifying unit files if you need to make adjustments. This functionality was added in systemd version 218.

The edit command, by default, will open a unit file snippet for the unit in question:
sudo systemctl edit nginx.service

This will be a blank file that can be used to override or add directives to the unit definition. A directory will be created within the /etc/systemd/system directory which contains the name of the unit with .d appended. For instance, for the nginx.service, a directory called nginx.service.d will be created.

Within this directory, a snippet will be created called override.conf. When the unit is loaded, systemd will, in memory, merge the override snippet with the full unit file. The snippet's directives will take precedence over those found in the original unit file.
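
As a minimal illustration, an override snippet for the hypothetical nginx.service might replace the start command; the binary path and configuration file below are assumptions, not values taken from your system. Note that for service units the inherited ExecStart= usually has to be cleared before a new one is assigned:
/etc/systemd/system/nginx.service.d/override.conf
[Service]
# Clear the inherited ExecStart= before setting a replacement
ExecStart=
ExecStart=/usr/sbin/nginx -c /etc/nginx/custom.conf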

If you wish to edit the full unit file instead of creating a snippet, you can pass the --full flag:
sudo systemctl edit --full nginx.service

This will load the current unit file into the editor, where it can be modified. When the editor exits, the changed file will be written to /etc/systemd/system, which will take precedence over the system's unit definition (usually found somewhere in /lib/systemd/system).

To remove any additions you have made, either delete the unit's .d configuration directory or the modified service file from /etc/systemd/system. For instance, to remove a snippet, we could type:
sudo rm -r /etc/systemd/system/nginx.service.d

To remove a full modified unit file, we would type:
sudo rm /etc/systemd/system/nginx.service

After deleting the file or directory, you should reload the systemd process so that it no longer attempts to reference these files and reverts back to using the system copies. You can do this by typing:
sudo systemctl daemon-reload


Adjusting the System State (Runlevel) with Targets

Targets are special unit files that describe a system state or synchronization point. Like other units, the files that define targets can be identified by their suffix, which in this case is .target. Targets do not do much themselves, but are instead used to group other units together.

This can be used in order to bring the system to certain states, much like other init systems use runlevels. They are used as a reference for when certain functions are available, allowing you to specify the desired state instead of the individual units needed to produce that state.

For instance, there is a swap.target that is used to indicate that swap is ready for use. Units that are part of this process can sync with this target by indicating in their configuration that they are WantedBy= or RequiredBy= the swap.target. Units that require swap to be available can specify this condition using the Wants=, Requires=, and After= specifications to indicate the nature of their relationship.
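
To make the relationship concrete, a hypothetical unit that must not start until swap is available might declare something like this in its [Unit] section (the unit description is purely illustrative):
[Unit]
Description=Hypothetical service that needs swap
Requires=swap.target
After=swap.target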


Getting and Setting the Default Target

The systemd process has a default target that it uses when booting the system. Satisfying the cascade of dependencies from that single target will bring the system into the desired state. To find the default target for your system, type:
systemctl get-default
multi-user.target

If you wish to set a different default target, you can use the set-default command. For instance, if you have a graphical desktop installed and you wish for the system to boot into that by default, you can change your default target accordingly:
sudo systemctl set-default graphical.target

Listing Available Targets

You can get a list of the available targets on your system by typing:
systemctl list-unit-files --type=target

Unlike runlevels, multiple targets can be active at one time. An active target indicates that systemd has attempted to start all of the units tied to the target and has not tried to tear them down again. To see all of the active targets, type:
systemctl list-units --type=target


Isolating Targets

It is possible to start all of the units associated with a target and stop all units that are not part of the dependency tree. The command that we need to do this is called, appropriately, isolate. This is similar to changing the runlevel in other init systems.

For instance, if you are operating in a graphical environment with graphical.target active, you can shut down the graphical system and put the system into a multi-user command line state by isolating the multi-user.target. Since graphical.target depends on multi-user.target but not the other way around, all of the graphical units will be stopped.

You may wish to take a look at the dependencies of the target you are isolating before performing this procedure to ensure that you are not stopping vital services:
systemctl list-dependencies multi-user.target

When you are satisfied with the units that will be kept alive, you can isolate the target by typing:
sudo systemctl isolate multi-user.target


Using Shortcuts for Important Events

There are targets defined for important events like powering off or rebooting. However, systemctl also has some shortcuts that add a bit of additional functionality.

For instance, to put the system into rescue (single-user) mode, you can just use the rescue command instead of isolate rescue.target:
sudo systemctl rescue

This will provide the additional functionality of alerting all logged in users about the event.
To halt the system, you can use the halt command:
sudo systemctl halt

To initiate a full shutdown, you can use the poweroff command:
sudo systemctl poweroff

A restart can be started with the reboot command:
sudo systemctl reboot

These all alert logged in users that the event is occurring, something that simply running or isolating the target will not do. Note that most machines will link the shorter, more conventional commands for these operations so that they work properly with systemd.

For example, to reboot the system, you can usually type:
sudo reboot


Conclusion

By now, you should be familiar with some of the basic capabilities of the systemctl command that allow you to interact with and control your systemd instance. The systemctl utility will be your main point of interaction for service and system state management.

While systemctl operates mainly with the core systemd process, there are other components to the systemd ecosystem that are controlled by other utilities. Other capabilities, like log management and user sessions, are handled by separate daemons and management utilities (journald/journalctl and logind/loginctl respectively). Taking time to become familiar with these other tools and daemons will make management an easier task.


How To Use Journalctl to View and Manipulate Systemd Logs


Introduction

Some of the most compelling advantages of systemd are those involved with process and system logging. When using other tools, logs are usually dispersed throughout the system, handled by different daemons and processes, and can be fairly difficult to interpret when they span multiple applications. Systemd attempts to address these issues by providing a centralized management solution for logging all kernel and userland processes. The system that collects and manages these logs is known as the journal.

The journal is implemented with the journald daemon, which handles all of the messages produced by the kernel, initrd, services, etc. In this guide, we will discuss how to use the journalctl utility, which can be used to access and manipulate the data held within the journal.


General Idea

One of the impetuses behind the systemd journal is to centralize the management of logs regardless of where the messages are originating. Since much of the boot process and service management is handled by the systemd process, it makes sense to standardize the way that logs are collected and accessed. The journald daemon collects data from all available sources and stores them in a binary format for easy and dynamic manipulation.

This gives us a number of significant advantages. By interacting with the data using a single utility, administrators are able to dynamically display log data according to their needs. This can be as simple as viewing the boot data from three boots ago, or combining the log entries sequentially from two related services to debug a communication issue.

Storing the log data in a binary format also means that the data can be displayed in arbitrary output formats depending on what you need at the moment. For instance, for daily log management you may be used to viewing the logs in the standard syslog format, but if you decide to graph service interruptions later on, you can output each entry as a JSON object to make it consumable to your graphing service. Since the data is not written to disk in plain text, no conversion is needed when you need a different on-demand format.

The systemd journal can either be used with an existing syslog implementation, or it can replace the syslog functionality, depending on your needs. While the systemd journal will cover most administrators' logging needs, it can also complement existing logging mechanisms. For instance, you may have a centralized syslog server that you use to compile data from multiple servers, but you also may wish to interleave the logs from multiple services on a single system with the systemd journal. You can do both of these by combining these technologies.


Setting the System Time

One of the benefits of using a binary journal for logging is the ability to view log records in UTC or local time at will. By default, systemd will display results in local time.

Because of this, before we get started with the journal, we will make sure the timezone is set up correctly. The systemd suite actually comes with a tool called timedatectl that can help with this.

First, see what timezones are available with the list-timezones option:
timedatectl list-timezones

This will list the timezones available on your system. When you find the one that matches the location of your server, you can set it by using the set-timezone option:
sudo timedatectl set-timezone zone
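
For example, to use the America/New_York timezone shown in the sample output below, you would type:
sudo timedatectl set-timezone America/New_York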

To ensure that your machine is using the correct time now, use the timedatectl command alone, or with the status option. The display will be the same:
timedatectl status
Local time: Thu 2015-02-05 14:08:06 EST
Universal time: Thu 2015-02-05 19:08:06 UTC
RTC time: Thu 2015-02-05 19:08:06
Time zone: America/New_York (EST, -0500)
NTP enabled: no
NTP synchronized: no
RTC in local TZ: no
DST active: n/a

The first line should display the correct time.


Basic Log Viewing

To see the logs that the journald daemon has collected, use the journalctl command.
When used alone, every journal entry that is in the system will be displayed within a pager (usually less) for you to browse. The oldest entries will be up top:
journalctl
-- Logs begin at Tue 2015-02-03 21:48:52 UTC, end at Tue 2015-02-03 22:29:38 UTC. --
Feb 03 21:48:52 localhost.localdomain systemd-journal[243]: Runtime journal is using 6.2M (max allowed 49.
Feb 03 21:48:52 localhost.localdomain systemd-journal[243]: Runtime journal is using 6.2M (max allowed 49.
Feb 03 21:48:52 localhost.localdomain systemd-journald[139]: Received SIGTERM from PID 1 (systemd).
Feb 03 21:48:52 localhost.localdomain kernel: audit: type=1404 audit(1423000132.274:2): enforcing=1 old_en
Feb 03 21:48:52 localhost.localdomain kernel: SELinux: 2048 avtab hash slots, 104131 rules.
Feb 03 21:48:52 localhost.localdomain kernel: SELinux: 2048 avtab hash slots, 104131 rules.
Feb 03 21:48:52 localhost.localdomain kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/
Feb 03 21:48:52 localhost.localdomain kernel: SELinux: 8 users, 102 roles, 4976 types, 294 bools, 1 sens,
Feb 03 21:48:52 localhost.localdomain kernel: SELinux: 83 classes, 104131 rules

. . .
You will likely have pages and pages of data to scroll through, which can be tens or hundreds of thousands of lines long if systemd has been on your system for a long while. This demonstrates how much data is available in the journal database.

The format will be familiar to those who are used to standard syslog logging. However, this actually collects data from more sources than traditional syslog implementations are capable of. It includes logs from the early boot process, the kernel, the initrd, and application standard output and error. These are all available in the journal.

You may notice that all of the timestamps being displayed are local time. This is available for every log entry now that we have our local time set correctly on our system. All of the logs are displayed using this new information.

If you want to display the timestamps in UTC, you can use the --utc flag:
journalctl --utc


Journal Filtering by Time

While having access to such a large collection of data is definitely useful, such a large amount of information can be difficult or impossible to inspect and process mentally. Because of this, one of the most important features of journalctl is its filtering options.


Displaying Logs from the Current Boot

The most basic of these, which you might use daily, is the -b flag. This will show you all of the journal entries that have been collected since the most recent reboot.
journalctl -b

This will help you identify and manage information that is pertinent to your current environment.

In cases where you aren't using this feature and are displaying more than one day of boots, you will see that journalctl has inserted a line that looks like this whenever the system went down:
. . .

-- Reboot --

. . .
This can be used to help you logically separate the information into boot sessions.


Past Boots

While you will commonly want to display the information from the current boot, there are certainly times when past boots would be helpful as well. The journal can save information from many previous boots, so journalctl can be made to display information easily.

Some distributions enable saving previous boot information by default, while others disable this feature. To enable persistent boot information, you can either create the directory to store the journal by typing:
sudo mkdir -p /var/log/journal

Or you can edit the journal configuration file:
sudo nano /etc/systemd/journald.conf

Under the [Journal] section, set the Storage= option to "persistent" to enable persistent logging:
/etc/systemd/journald.conf
. . .
[Journal]
Storage=persistent
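
After changing this setting, you will typically need to restart the journald daemon (or reboot) before the new storage behavior takes effect:
sudo systemctl restart systemd-journald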

When saving previous boots is enabled on your server, journalctl provides some commands to help you work with boots as a unit of division. To see the boots that journald knows about, use the --list-boots option with journalctl:
journalctl --list-boots
-2 caf0524a1d394ce0bdbcff75b94444fe Tue 2015-02-03 21:48:52 UTC—Tue 2015-02-03 22:17:00 UTC
-1 13883d180dc0420db0abcb5fa26d6198 Tue 2015-02-03 22:17:03 UTC—Tue 2015-02-03 22:19:08 UTC
0 bed718b17a73415fade0e4e7f4bea609 Tue 2015-02-03 22:19:12 UTC—Tue 2015-02-03 23:01:01 UTC

This will display a line for each boot. The first column is the offset for the boot that can be used to easily reference the boot with journalctl. If you need an absolute reference, the boot ID is in the second column. You can tell the time that the boot session refers to with the two time specifications listed towards the end.

To display information from these boots, you can use information from either the first or second column.

For instance, to see the journal from the previous boot, use the -1 relative pointer with the -b flag:
journalctl -b -1

You can also use the boot ID to call back the data from a boot:
journalctl -b caf0524a1d394ce0bdbcff75b94444fe


Time Windows

While seeing log entries by boot is incredibly useful, often you may wish to request windows of time that do not align well with system boots. This may be especially true when dealing with long-running servers with significant uptime.

You can filter by arbitrary time limits using the --since and --until options, which restrict the entries displayed to those after or before the given time, respectively.

The time values can come in a variety of formats. For absolute time values, you should use the following format:
YYYY-MM-DD HH:MM:SS

For instance, we can see all of the entries since January 10th, 2015 at 5:15 PM by typing:
journalctl --since "2015-01-10 17:15:00"

If components of the above format are left off, some defaults will be applied. For instance, if the date is omitted, the current date will be assumed. If the time component is missing, "00:00:00" (midnight) will be substituted. The seconds field can be left off as well to default to "00":
journalctl --since "2015-01-10" --until "2015-01-11 03:00"

The journal also understands some relative values and named shortcuts. For instance, you can use the words "yesterday", "today", "tomorrow", or "now". You can specify relative times by prepending "-" or "+" to a numbered value or by using words like "ago" in a sentence construction.

To get the data from yesterday, you could type:
journalctl --since yesterday

If you received reports of a service interruption starting at 9:00 AM and continuing until an hour ago, you could type:
journalctl --since 09:00 --until "1 hour ago"

As you can see, it's relatively easy to define flexible windows of time to filter the entries you wish to see.


Filtering by Message Interest

We learned above some ways that you can filter the journal data using time constraints. In this section we'll discuss how to filter based on what service or component you are interested in. The systemd journal provides a variety of ways of doing this.


By Unit

Perhaps the most useful way of filtering is by the unit you are interested in. We can use the -u option to filter in this way.

For instance, to see all of the logs from an Nginx unit on our system, we can type:
journalctl -u nginx.service

Typically, you would probably want to filter by time as well in order to display the lines you are interested in. For instance, to check on how the service is running today, you can type:
journalctl -u nginx.service --since today

This type of focus becomes extremely helpful when you take advantage of the journal's ability to interleave records from various units. For instance, if your Nginx process is connected to a PHP-FPM unit to process dynamic content, you can merge the entries from both in chronological order by specifying both units:
journalctl -u nginx.service -u php-fpm.service --since today

This can make it much easier to spot the interactions between different programs and debug systems instead of individual processes.


By Process, User, or Group ID

Some services spawn a variety of child processes to do work. If you have scouted out the exact PID of the process you are interested in, you can filter by that as well.

To do this we can filter by specifying the _PID field. For instance if the PID we're interested in is 8088, we could type:
journalctl _PID=8088

At other times, you may wish to show all of the entries logged from a specific user or group. This can be done with the _UID or _GID filters. For instance, if your web server runs under the www-data user, you can find the user ID by typing:
id -u www-data
33

Afterwards, you can use the ID that was returned to filter the journal results:
journalctl _UID=33 --since today

The systemd journal has many fields that can be used for filtering. Some of those are passed from the process being logged and some are applied by journald using information it gathers from the system at the time of the log.

The leading underscore indicates that the _PID field is of the latter type. The journal automatically records and indexes the PID of the process that is logging for later filtering. You can find out about all of the available journal fields by typing:
man systemd.journal-fields

We will be discussing some of these in this guide. For now though, we will go over one more useful option having to do with filtering by these fields. The -F option can be used to show all of the available values for a given journal field.

For instance, to see which group IDs the systemd journal has entries for, you can type:
journalctl -F _GID
32
99
102
133
81
84
100
0
124
87

This will show you all of the values that the journal has stored for the group ID field. This can help you construct your filters.
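
The same technique works for any field. For instance, to see which units have written entries to the journal, you could list the values of the _SYSTEMD_UNIT field:
journalctl -F _SYSTEMD_UNIT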


By Component Path

We can also filter by providing a path location.

If the path leads to an executable, journalctl will display all of the entries that involve the executable in question. For instance, to find those entries that involve the bash executable, you can type:
journalctl /usr/bin/bash

Usually, if a unit is available for the executable, that method is cleaner and provides better info (entries from associated child processes, etc). Sometimes, however, this is not possible.


Displaying Kernel Messages

Kernel messages, those usually found in dmesg output, can be retrieved from the journal as well.

To display only these messages, we can add the -k or --dmesg flags to our command:
journalctl -k

By default, this will display the kernel messages from the current boot. You can specify an alternative boot using the normal boot selection flags discussed previously. For instance, to get the messages from five boots ago, you could type:
journalctl -k -b -5


By Priority

One filter that system administrators often are interested in is the message priority. While it is often useful to log information at a very verbose level, when actually digesting the available information, low priority logs can be distracting and confusing.

You can use journalctl to display only messages of a specified priority or above by using the -p option. This allows you to filter out lower priority messages.

For instance, to show only entries logged at the error level or above, you can type:
journalctl -p err -b

This will show you all messages marked as error, critical, alert, or emergency. The journal implements the standard syslog message levels. You can use either the priority name or its corresponding numeric value. In order of highest to lowest priority, these are:
  • 0: emerg
  • 1: alert
  • 2: crit
  • 3: err
  • 4: warning
  • 5: notice
  • 6: info
  • 7: debug
The above numbers or names can be used interchangeably with the -p option. Selecting a priority will display messages marked at the specified level and those above it.
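
Because the numeric values are interchangeable with the names, the following example shows warnings and above from the current boot, equivalent to using -p warning:
journalctl -p 4 -b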


Modifying the Journal Display

Above, we demonstrated entry selection through filtering. There are other ways we can modify the output though. We can adjust the journalctl display to fit various needs.


Truncate or Expand Output

We can adjust how journalctl displays data by telling it to shrink or expand the output.

By default, journalctl will show the entire entry in the pager, allowing the entries to trail off to the right of the screen. This info can be accessed by pressing the right arrow key.

If you'd rather have the output truncated, inserting an ellipsis where information has been removed, you can use the --no-full option:
journalctl --no-full
. . .

Feb 04 20:54:13 journalme sshd[937]: Failed password for root from 83.234.207.60...h2
Feb 04 20:54:13 journalme sshd[937]: Connection closed by 83.234.207.60 [preauth]
Feb 04 20:54:13 journalme sshd[937]: PAM 2 more authentication failures; logname...ot

You can also go in the opposite direction with this and tell journalctl to display all of its information, regardless of whether it includes unprintable characters. We can do this with the -a flag:
journalctl -a


Output to Standard Out

By default, journalctl displays output in a pager for easier consumption. If you are planning on processing the data with text manipulation tools, however, you probably want to be able to output to standard output.
You can do this with the --no-pager option:
journalctl --no-pager

This can be piped immediately into a processing utility or redirected into a file on disk, depending on your needs.
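
For example, to search today's entries for a hypothetical nginx.service unit without invoking the pager, you could combine the options covered so far with standard shell tools:
journalctl -u nginx.service --since today --no-pager | grep -i error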


Output Formats

If you are processing journal entries, as mentioned above, you most likely will have an easier time parsing the data if it is in a more consumable format. Luckily, the journal can be displayed in a variety of formats as needed. You can do this using the -o option with a format specifier.

For instance, you can output the journal entries in JSON by typing:
journalctl -b -u nginx -o json
{ "__CURSOR" : "s=13a21661cf4948289c63075db6c25c00;i=116f1;b=81b58db8fd9046ab9f847ddb82a2fa2d;m=19f0daa;t=50e33c33587ae;x=e307daadb4858635", "__REALTIME_TIMESTAMP" : "1422990364739502", "__MONOTONIC_TIMESTAMP" : "27200938", "_BOOT_ID" : "81b58db8fd9046ab9f847ddb82a2fa2d", "PRIORITY" : "6", "_UID" : "0", "_GID" : "0", "_CAP_EFFECTIVE" : "3fffffffff", "_MACHINE_ID" : "752737531a9d1a9c1e3cb52a4ab967ee", "_HOSTNAME" : "desktop", "SYSLOG_FACILITY" : "3", "CODE_FILE" : "src/core/unit.c", "CODE_LINE" : "1402", "CODE_FUNCTION" : "unit_status_log_starting_stopping_reloading", "SYSLOG_IDENTIFIER" : "systemd", "MESSAGE_ID" : "7d4958e842da4a758f6c1cdc7b36dcc5", "_TRANSPORT" : "journal", "_PID" : "1", "_COMM" : "systemd", "_EXE" : "/usr/lib/systemd/systemd", "_CMDLINE" : "/usr/lib/systemd/systemd", "_SYSTEMD_CGROUP" : "/", "UNIT" : "nginx.service", "MESSAGE" : "Starting A high performance web server and a reverse proxy server...", "_SOURCE_REALTIME_TIMESTAMP" : "1422990364737973" }

. . .
This is useful for parsing with utilities. You could use the json-pretty format to get a better handle on the data structure before passing it off to the JSON consumer:
journalctl -b -u nginx -o json-pretty
{
"__CURSOR" : "s=13a21661cf4948289c63075db6c25c00;i=116f1;b=81b58db8fd9046ab9f847ddb82a2fa2d;m=19f0daa;t=50e33c33587ae;x=e307daadb4858635",
"__REALTIME_TIMESTAMP" : "1422990364739502",
"__MONOTONIC_TIMESTAMP" : "27200938",
"_BOOT_ID" : "81b58db8fd9046ab9f847ddb82a2fa2d",
"PRIORITY" : "6",
"_UID" : "0",
"_GID" : "0",
"_CAP_EFFECTIVE" : "3fffffffff",
"_MACHINE_ID" : "752737531a9d1a9c1e3cb52a4ab967ee",
"_HOSTNAME" : "desktop",
"SYSLOG_FACILITY" : "3",
"CODE_FILE" : "src/core/unit.c",
"CODE_LINE" : "1402",
"CODE_FUNCTION" : "unit_status_log_starting_stopping_reloading",
"SYSLOG_IDENTIFIER" : "systemd",
"MESSAGE_ID" : "7d4958e842da4a758f6c1cdc7b36dcc5",
"_TRANSPORT" : "journal",
"_PID" : "1",
"_COMM" : "systemd",
"_EXE" : "/usr/lib/systemd/systemd",
"_CMDLINE" : "/usr/lib/systemd/systemd",
"_SYSTEMD_CGROUP" : "/",
"UNIT" : "nginx.service",
"MESSAGE" : "Starting A high performance web server and a reverse proxy server...",
"_SOURCE_REALTIME_TIMESTAMP" : "1422990364737973"
}

. . .
The following formats can be used for display:
  • cat: Displays only the message field itself.
  • export: A binary format suitable for transferring or backing up.
  • json: Standard JSON with one entry per line.
  • json-pretty: JSON formatted for better human-readability
  • json-sse: JSON formatted output wrapped to make it compatible with server-sent events.
  • short: The default syslog style output
  • short-iso: The default format augmented to show ISO 8601 wallclock timestamps.
  • short-monotonic: The default format with monotonic timestamps.
  • short-precise: The default format with microsecond precision
  • verbose: Shows every journal field available for the entry, including those usually hidden internally.
These options allow you to display the journal entries in whatever format best suits your current needs.
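
As a quick example, the cat format strips away the metadata and leaves only the message text, which can be convenient when feeding log lines to simple text tools:
journalctl -u nginx.service -o cat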


Active Process Monitoring

The journalctl command imitates how many administrators use tail for monitoring active or recent activity. This functionality is built into journalctl, allowing you to access these features without having to pipe to another tool.


Displaying Recent Logs

To display a set amount of records, you can use the -n option, which works exactly as tail -n.
By default, it will display the most recent 10 entries:
journalctl -n

You can specify the number of entries you'd like to see with a number after the -n:
journalctl -n 20


Following Logs

To actively follow the logs as they are being written, you can use the -f flag. Again, this works as you might expect if you have experience using tail -f:
journalctl -f
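
Following can be combined with the filters discussed earlier. For instance, to watch new entries from a single unit as they arrive, you could run:
journalctl -u nginx.service -f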


Journal Maintenance

You may be wondering what the cost is of storing all of the data we've seen so far. Furthermore, you may be interested in cleaning up some older logs and freeing up space.


Finding Current Disk Usage

You can find out the amount of space that the journal is currently occupying on disk by using the --disk-usage flag:
journalctl --disk-usage
Journals take up 8.0M on disk.


Deleting Old Logs

If you wish to shrink your journal, you can do that in two different ways (available with systemd version 218 and later).

If you use the --vacuum-size option, you can shrink your journal by indicating a size. This will remove old entries until the total journal space taken up on disk is at the requested size:
sudo journalctl --vacuum-size=1G

Another way that you can shrink the journal is providing a cutoff time with the --vacuum-time option. Any entries beyond that time are deleted. This allows you to keep the entries that have been created after a specific time.

For instance, to keep entries from the last year, you can type:
sudo journalctl --vacuum-time=1years


Limiting Journal Expansion

You can configure your server to place limits on how much space the journal can take up. This can be done by editing the /etc/systemd/journald.conf file.

The following items can be used to limit the journal growth:
  • SystemMaxUse=: Specifies the maximum disk space that can be used by the journal in persistent storage.
  • SystemKeepFree=: Specifies the amount of space that the journal should leave free when adding journal entries to persistent storage.
  • SystemMaxFileSize=: Controls how large individual journal files can grow to in persistent storage before being rotated.
  • RuntimeMaxUse=: Specifies the maximum disk space that can be used in volatile storage (within the /run filesystem).
  • RuntimeKeepFree=: Specifies the amount of space to be set aside for other uses when writing data to volatile storage (within the /run filesystem).
  • RuntimeMaxFileSize=: Specifies the amount of space that an individual journal file can take up in volatile storage (within the /run filesystem) before being rotated.
By setting these values, you can control how journald consumes and preserves space on your server.
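
For example, a journald.conf that caps persistent journal usage might look like the following sketch; the sizes shown are purely illustrative and should be tuned to your disk capacity:
/etc/systemd/journald.conf
. . .
[Journal]
Storage=persistent
SystemMaxUse=500M
SystemMaxFileSize=100M
. . .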


Conclusion

As you can see, the systemd journal is incredibly useful for collecting and managing your system and application data. Most of the flexibility comes from the extensive metadata automatically recorded and the centralized nature of the log. The journalctl command makes it easy to take advantage of the advanced features of the journal and to do extensive analysis and relational debugging of different application components.

Understanding Systemd Units and Unit Files


Introduction

Increasingly, Linux distributions are adopting or planning to adopt the systemd init system. This powerful suite of software can manage many aspects of your server, from services to mounted devices and system states.

In systemd, a unit refers to any resource that the system knows how to operate on and manage. This is the primary object that the systemd tools know how to deal with. These resources are defined using configuration files called unit files.

In this guide, we will introduce you to the different units that systemd can handle. We will also be covering some of the many directives that can be used in unit files in order to shape the way these resources are handled on your system.

What do Systemd Units Give You?

Units are the objects that systemd knows how to manage. These are basically a standardized representation of system resources that can be managed by the suite of daemons and manipulated by the provided utilities.

Units in some ways can be said to be similar to services or jobs in other init systems. However, a unit has a much broader definition, as these can be used to abstract services, network resources, devices, filesystem mounts, and isolated resource pools.

Ideas that in other init systems may be handled with one unified service definition can be broken out into component units according to their focus. This organizes by function and allows you to easily enable, disable, or extend functionality without modifying the core behavior of a unit.

Some features that units are able to implement easily are:
  • socket-based activation: Sockets associated with a service are best broken out of the daemon itself in order to be handled separately. This provides a number of advantages, such as delaying the start of a service until the associated socket is first accessed. This also allows the system to create all sockets early in the boot process, making it possible to boot the associated services in parallel.
  • bus-based activation: Units can also be activated on the bus interface provided by D-Bus. A unit can be started when an associated bus is published.
  • path-based activation: A unit can be started based on activity on or the availability of certain filesystem paths. This utilizes inotify.
  • device-based activation: Units can also be started at the first availability of associated hardware by leveraging udev events.
  • implicit dependency mapping: Most of the dependency tree for units can be built by systemd itself. You can still add dependency and ordering information, but most of the heavy lifting is taken care of for you.
  • instances and templates: Template unit files can be used to create multiple instances of the same general unit. This allows for slight variations or sibling units that all provide the same general function.
  • easy security hardening: Units can implement some fairly good security features by adding simple directives. For example, you can specify no or read-only access to part of the filesystem, limit kernel capabilities, and assign private /tmp and network access.
  • drop-ins and snippets: Units can easily be extended by providing snippets that will override parts of the system's unit file. This makes it easy to switch between vanilla and customized unit implementations.
There are many other advantages that systemd units have over other init systems' work items, but this should give you an idea of the power that can be leveraged using native configuration directives.


Where are Systemd Unit Files Found?

The files that define how systemd will handle a unit can be found in many different locations, each of which has different priorities and implications.

The system's copy of unit files is generally kept in the /lib/systemd/system directory. When software installs unit files on the system, this is the location where they are placed by default.

Unit files stored here are able to be started and stopped on-demand during a session. This will be the generic, vanilla unit file, often written by the upstream project's maintainers that should work on any system that deploys systemd in its standard implementation. You should not edit files in this directory. Instead you should override the file, if necessary, using another unit file location which will supersede the file in this location.

If you wish to modify the way that a unit functions, the best location to do so is within the /etc/systemd/system directory. Unit files found in this directory location take precedence over any of the other locations on the filesystem. If you need to modify the system's copy of a unit file, putting a replacement in this directory is the safest and most flexible way to do this.

If you wish to override only specific directives from the system's unit file, you can actually provide unit file snippets within a subdirectory. These will append or modify the directives of the system's copy, allowing you to specify only the options you want to change.

The correct way to do this is to create a directory named after the unit file with .d appended on the end. So for a unit called example.service, a subdirectory called example.service.d could be created. Within this directory a file ending with .conf can be used to override or extend the attributes of the system's unit file.

There is also a location for run-time unit definitions at /run/systemd/system. Unit files found in this directory have a priority landing between those in /etc/systemd/system and /lib/systemd/system. Files in this location are given less weight than the former location, but more weight than the latter.

The systemd process itself uses this location for dynamically created unit files created at runtime. This directory can be used to change the system's unit behavior for the duration of the session. All changes made in this directory will be lost when the server is rebooted.


Types of Units

Systemd categorizes units according to the type of resource they describe. The easiest way to determine the type of a unit is with its type suffix, which is appended to the end of the resource name. The following list describes the types of units available to systemd:

.service: A service unit describes how to manage a service or application on the server. This will include how to start or stop the service, under which circumstances it should be automatically started, and the dependency and ordering information for related software.

.socket: A socket unit file describes a network or IPC socket, or a FIFO buffer that systemd uses for socket-based activation. These always have an associated .service file that will be started when activity is seen on the socket that this unit defines.

.device: A unit that describes a device that has been designated as needing systemd management by udev or the sysfs filesystem. Not all devices will have .device files. Some scenarios where .device units may be necessary are for ordering, mounting, and accessing the devices.

.mount: This unit defines a mountpoint on the system to be managed by systemd. These are named after the mount path, with slashes changed to dashes. Entries within /etc/fstab can have units created automatically.

.automount: An .automount unit configures a mountpoint that will be automatically mounted. These must be named after the mount point they refer to and must have a matching .mount unit to define the specifics of the mount.

.swap: This unit describes swap space on the system. The name of these units must reflect the device or file path of the space.

.target: A target unit is used to provide synchronization points for other units when booting up or changing states. They also can be used to bring the system to a new state. Other units specify their relation to targets to become tied to the target's operations.

.path: This unit defines a path that can be used for path-based activation. By default, a .service unit of the same base name will be started when the path reaches the specified state. This uses inotify to monitor the path for changes.

.timer: A .timer unit defines a timer that will be managed by systemd, similar to a cron job for delayed or scheduled activation. A matching unit will be started when the timer is reached.

.snapshot: A .snapshot unit is created automatically by the systemctl snapshot command. It allows you to reconstruct the current state of the system after making changes. Snapshots do not survive across sessions and are used to roll back temporary states.

.slice: A .slice unit is associated with Linux Control Group nodes, allowing resources to be restricted or assigned to any processes associated with the slice. The name reflects its hierarchical position within the cgroup tree. Units are placed in certain slices by default depending on their type.

.scope: Scope units are created automatically by systemd from information received from its bus interfaces. These are used to manage sets of system processes that are created externally.

As you can see, there are many different units that systemd knows how to manage. Many of the unit types work together to add functionality. For instance, some units are used to trigger other units and provide activation functionality.

We will mainly be focusing on .service units due to their utility and the consistency with which administrators need to manage these units.


Anatomy of a Unit File

The internal structure of unit files is organized into sections. Sections are denoted by a pair of square brackets "[" and "]" with the section name enclosed within. Each section extends until the beginning of the subsequent section or until the end of the file.


General Characteristics of Unit Files

Section names are well defined and case-sensitive. So, the section [Unit] will not be interpreted correctly if it is spelled like [UNIT]. If you need to add non-standard sections to be parsed by applications other than systemd, you can add an X- prefix to the section name.

Within these sections, unit behavior and metadata is defined through the use of simple directives using a key-value format with assignment indicated by an equal sign, like this:
[Section]
Directive1=value
Directive2=value

. . .

In the event of an override file (such as those contained in a unit.type.d directory), directives can be reset by assigning them to an empty string. For example, the system's copy of a unit file may contain a directive set to a value like this:
Directive1=default_value

The default_value can be eliminated in an override file by referencing Directive1 without a value, like this:
Directive1=

In general, systemd allows for easy and flexible configuration. For example, multiple boolean expressions are accepted (1, yes, on, and true for affirmative and 0, no, off, and false for the opposite answer). Times can be intelligently parsed, with seconds assumed for unit-less values and combining multiple formats accomplished internally.
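
For example, the following hypothetical snippet relies on that flexible parsing: the bare number is interpreted as seconds, the combined value mixes time units, and the boolean accepts several spellings:
[Service]
# A unit-less value is read as seconds
TimeoutStartSec=90
# Multiple time units can be combined in one value
TimeoutStopSec=1min 30s
# Booleans accept 1/yes/on/true and 0/no/off/false
RemainAfterExit=yes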

[Unit] Section Directives

The first section found in most unit files is the [Unit] section. This is generally used for defining metadata for the unit and configuring the relationship of the unit to other units.

Although section order does not matter to systemd when parsing the file, this section is often placed at the top because it provides an overview of the unit. Some common directives that you will find in the [Unit] section are:
  • Description=: This directive can be used to describe the name and basic functionality of the unit. It is returned by various systemd tools, so it is good to set this to something short, specific, and informative.
  • Documentation=: This directive provides a location for a list of URIs for documentation. These can be either internally available man pages or web accessible URLs. The systemctl status command will expose this information, allowing for easy discoverability.
  • Requires=: This directive lists any units upon which this unit essentially depends. If the current unit is activated, the units listed here must successfully activate as well, else this unit will fail. These units are started in parallel with the current unit by default.
  • Wants=: This directive is similar to Requires=, but less strict. Systemd will attempt to start any units listed here when this unit is activated. If these units are not found or fail to start, the current unit will continue to function. This is the recommended way to configure most dependency relationships. Again, this implies a parallel activation unless modified by other directives.
  • BindsTo=: This directive is similar to Requires=, but also causes the current unit to stop when the associated unit terminates.
  • Before=: The units listed in this directive will not be started until the current unit is marked as started if they are activated at the same time. This does not imply a dependency relationship and must be used in conjunction with one of the above directives if this is desired.
  • After=: The units listed in this directive will be started before starting the current unit. This does not imply a dependency relationship and one must be established through the above directives if this is required.
  • Conflicts=: This can be used to list units that cannot be run at the same time as the current unit. Starting a unit with this relationship will cause the other units to be stopped.
  • Condition...=: There are a number of directives that start with Condition which allow the administrator to test certain conditions prior to starting the unit. This can be used to provide a generic unit file that will only be run on appropriate systems. If the condition is not met, the unit is gracefully skipped.
  • Assert...=: Similar to the directives that start with Condition, these directives check for different aspects of the running environment to decide whether the unit should activate. However, unlike the Condition directives, a negative result causes a failure with this directive.
Using these directives and a handful of others, general information about the unit and its relationship to other units and the operating system can be established.

[Install] Section Directives

At the opposite end of the unit file, the last section is often the [Install] section. This section is optional and is used to define the behavior of a unit if it is enabled or disabled. Enabling a unit marks it to be automatically started at boot. In essence, this is accomplished by latching the unit in question onto another unit that is somewhere in the line of units to be started at boot.

Because of this, only units that can be enabled will have this section. The directives within dictate what should happen when the unit is enabled:
  • WantedBy=: The WantedBy= directive is the most common way to specify how a unit should be enabled. This directive allows you to specify a dependency relationship in a similar way to how the Wants= directive does in the [Unit] section. The difference is that this directive is included in the ancillary unit, allowing the primary unit listed to remain relatively clean. When a unit with this directive is enabled, a directory will be created within /etc/systemd/system named after the specified unit with .wants appended to the end. Within this, a symbolic link to the current unit will be created, establishing the dependency. For instance, if the current unit has WantedBy=multi-user.target, a directory called multi-user.target.wants will be created within /etc/systemd/system (if not already available) and a symbolic link to the current unit will be placed within. Disabling this unit removes the link and removes the dependency relationship.
  • RequiredBy=: This directive is very similar to the WantedBy= directive, but instead specifies a required dependency that will cause the activation to fail if not met. When enabled, a unit with this directive will create a directory ending with .requires.
  • Alias=: This directive allows the unit to be enabled under another name as well. Among other uses, this allows multiple providers of a function to be available, so that related units can look for any provider of the common aliased name.
  • Also=: This directive allows units to be enabled or disabled as a set. Supporting units that should always be available when this unit is active can be listed here. They will be managed as a group for installation tasks.
  • DefaultInstance=: For template units (covered later) which can produce unit instances with unpredictable names, this can be used as a fallback value for the name if an appropriate name is not provided.
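
For instance, a typical [Install] section for a hypothetical service that should be started as part of the normal multi-user boot state might look like this (the companion socket unit is an assumption made for the sake of the example):
[Install]
WantedBy=multi-user.target
Also=example.socket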


Unit-Specific Section Directives

Sandwiched between the previous two sections, you will likely find unit type-specific sections. Most unit types offer directives that only apply to their specific type. These are available within sections named after their type. We will cover those briefly here.

The device, target, snapshot, and scope unit types have no unit-specific directives, and thus have no associated sections for their type.

The [Service] Section

The [Service] section is used to provide configuration that is only applicable for services.

One of the basic things that should be specified within the [Service] section is the Type= of the service. This categorizes services by their process and daemonizing behavior. This is important because it tells systemd how to correctly manage the service and find out its state.

The Type= directive can be one of the following:
  • simple: The main process of the service is specified in the start line. This is the default if the Type= and BusName= directives are not set, but ExecStart= is set. Any communication should be handled outside of the unit through a second unit of the appropriate type (like through a .socket unit if this unit must communicate using sockets).
  • forking: This service type is used when the service forks a child process, exiting the parent process almost immediately. This tells systemd that the process is still running even though the parent exited.
  • oneshot: This type indicates that the process will be short-lived and that systemd should wait for the process to exit before continuing on with other units. This is the default if Type= and ExecStart= are not set. It is used for one-off tasks.
  • dbus: This indicates that the unit will take a name on the D-Bus bus. When this happens, systemd will continue to process the next unit.
  • notify: This indicates that the service will issue a notification when it has finished starting up. The systemd process will wait for this to happen before proceeding to other units.
  • idle: This indicates that the service will not be run until all jobs are dispatched.
Some additional directives may be needed when using certain service types. For instance:
  • RemainAfterExit=: This directive is commonly used with the oneshot type. It indicates that the service should be considered active even after the process exits.
  • PIDFile=: If the service type is marked as "forking", this directive is used to set the path of the file that should contain the process ID number of the main child that should be monitored.
  • BusName=: This directive should be set to the D-Bus bus name that the service will attempt to acquire when using the "dbus" service type.
  • NotifyAccess=: This specifies access to the socket that should be used to listen for notifications when the "notify" service type is selected. This can be "none", "main", or "all". The default, "none", ignores all status messages. The "main" option will listen to messages from the main process and the "all" option will cause all members of the service's control group to be processed.
So far, we have discussed some pre-requisite information, but we haven't actually defined how to manage our services. The directives to do this are:
  • ExecStart=: This specifies the full path and the arguments of the command to be executed to start the process. This may only be specified once (except for "oneshot" services). If the path to the command is preceded by a dash "-" character, non-zero exit statuses will be accepted without marking the unit activation as failed.
  • ExecStartPre=: This can be used to provide additional commands that should be executed before the main process is started. This can be used multiple times. Again, commands must specify a full path and they can be preceded by "-" to indicate that the failure of the command will be tolerated.
  • ExecStartPost=: This has the same exact qualities as ExecStartPre= except that it specifies commands that will be run after the main process is started.
  • ExecReload=: This optional directive indicates the command necessary to reload the configuration of the service if available.
  • ExecStop=: This indicates the command needed to stop the service. If this is not given, the process will be killed immediately when the service is stopped.
  • ExecStopPost=: This can be used to specify commands to execute following the stop command.
  • RestartSec=: If automatically restarting the service is enabled, this specifies the amount of time to wait before attempting to restart the service.
  • Restart=: This indicates the circumstances under which systemd will attempt to automatically restart the service. This can be set to values like "always", "on-success", "on-failure", "on-abnormal", "on-abort", or "on-watchdog". These will trigger a restart according to the way that the service was stopped.
  • TimeoutSec=: This configures the amount of time that systemd will wait when starting or stopping the service before marking it as failed or forcefully killing it. You can set separate timeouts with TimeoutStartSec= and TimeoutStopSec= as well.
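Pulling several of these directives together, here is a minimal sketch of what a forking service's unit file might look like. The daemon name, paths, and PID file are hypothetical placeholders rather than a real service:

# example.service -- a hypothetical forking daemon
[Unit]
Description=Example forking daemon (hypothetical)
After=network.target

[Service]
Type=forking
PIDFile=/var/run/exampled.pid
ExecStart=/usr/sbin/exampled --daemonize
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target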

The [Socket] Section

Socket units are very common in systemd configurations because many services implement socket-based activation to provide better parallelization and flexibility. Each socket unit must have a matching service unit that will be activated when the socket receives activity.

By breaking socket control out of the service itself, sockets can be initialized early and the associated services can often be started in parallel. By default, the socket name will attempt to start the service of the same name upon receiving a connection. When the service is initialized, the socket will be passed to it, allowing it to begin processing any buffered requests.

To specify the actual socket, these directives are common:
  • ListenStream=: This defines an address for a stream socket which supports sequential, reliable communication. Services that use TCP should use this socket type.
  • ListenDatagram=: This defines an address for a datagram socket which supports fast, unreliable communication packets. Services that use UDP should set this socket type.
  • ListenSequentialPacket=: This defines an address for sequential, reliable communication with max length datagrams that preserves message boundaries. This is found most often for Unix sockets.
  • ListenFIFO=: Along with the other listening types, you can also specify a FIFO buffer instead of a socket.
There are more types of listening directives, but the ones above are the most common.
Other characteristics of the sockets can be controlled through additional directives:
  • Accept=: This determines whether an additional instance of the service will be started for each connection. If set to false (the default), one instance will handle all connections.
  • SocketUser=: With a Unix socket, specifies the owner of the socket. This will be the root user if left unset.
  • SocketGroup=: With a Unix socket, specifies the group owner of the socket. This will be the root group if neither this or the above are set. If only the SocketUser= is set, systemd will try to find a matching group.
  • SocketMode=: For Unix sockets or FIFO buffers, this sets the permissions on the created entity.
  • Service=: If the service name does not match the .socket name, the service can be specified with this directive.
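As a brief illustration, a stream socket unit for a hypothetical example.service might look like the following sketch (the port number is arbitrary):

# example.socket -- activates example.service when a connection arrives
[Unit]
Description=Example listening socket (hypothetical)

[Socket]
ListenStream=0.0.0.0:8080
Accept=false

[Install]
WantedBy=sockets.target

Because the socket and the service share a base name, no Service= directive is needed here.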

The [Mount] Section

Mount units allow for mount point management from within systemd. Mount points are named after the directory that they control, with a translation algorithm applied.

For example, the leading slash is removed, all other slashes are translated into dashes "-", and all dashes and unprintable characters are replaced with C-style escape codes. The result of this translation is used as the mount unit name. Mount units will have an implicit dependency on other mounts above them in the hierarchy.

Mount units are often translated directly from /etc/fstab files during the boot process. For the unit definitions automatically created and those that you wish to define in a unit file, the following directives are useful:
  • What=: The absolute path to the resource that needs to be mounted.
  • Where=: The absolute path of the mount point where the resource should be mounted. This should be the same as the unit file name, except using conventional filesystem notation.
  • Type=: The filesystem type of the mount.
  • Options=: Any mount options that need to be applied. This is a comma-separated list.
  • SloppyOptions=: A boolean that determines whether the mount will tolerate an unrecognized mount option instead of failing.
  • DirectoryMode=: If parent directories need to be created for the mount point, this determines the permission mode of these directories.
  • TimeoutSec=: Configures the amount of time the system will wait until the mount operation is marked as failed.
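For instance, a mount unit for a hypothetical /var/data mount point would live in a file named var-data.mount (following the translation described above) and might look like this sketch; the device and filesystem type are placeholders:

# var-data.mount -- the file name encodes the /var/data mount point
[Unit]
Description=Data volume (hypothetical)

[Mount]
What=/dev/sdb1
Where=/var/data
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target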

The [Automount] Section

This unit type allows an associated .mount unit to be mounted automatically on demand, when the automount point is first accessed. As with the .mount unit, these units must be named after the translated mount point's path.

The [Automount] section is pretty simple, with only the following two options allowed:
  • Where=: The absolute path of the automount point on the filesystem. This will match the filename except that it uses conventional path notation instead of the translation.
  • DirectoryMode=: If the automount point or any parent directories need to be created, this will determine the permissions settings of those path components.

The [Swap] Section

Swap units are used to configure swap space on the system. The units must be named after the swap file or the swap device, using the same filesystem translation that was discussed above.
Like mount units, swap units can be automatically created from /etc/fstab entries, or can be configured through a dedicated unit file.

The [Swap] section of a unit file can contain the following directives for configuration:
  • What=: The absolute path to the location of the swap space, whether this is a file or a device.
  • Priority=: This takes an integer that indicates the priority of the swap being configured.
  • Options=: Any options that are typically set in the /etc/fstab file can be set with this directive instead. A comma-separated list is used.
  • TimeoutSec=: The amount of time that systemd waits for the swap to be activated before marking the operation as a failure.

The [Path] Section

A path unit defines a filesystem path that systemd can monitor for changes. Another unit must exist that will be activated when certain activity is detected at the path location. Path activity is determined through inotify events.

The [Path] section of a unit file can contain the following directives:
  • PathExists=: This directive is used to check whether the path in question exists. If it does, the associated unit is activated.
  • PathExistsGlob=: This is the same as the above, but supports file glob expressions for determining path existence.
  • PathChanged=: This watches the path location for changes. The associated unit is activated if a change is detected when the watched file is closed.
  • PathModified=: This watches for changes like the above directive, but it activates on file writes as well as when the file is closed.
  • DirectoryNotEmpty=: This directive allows systemd to activate the associated unit when the directory is no longer empty.
  • Unit=: This specifies the unit to activate when the path conditions specified above are met. If this is omitted, systemd will look for a .service file that shares the same base unit name as this unit.
  • MakeDirectory=: This determines if systemd will create the directory structure of the path in question prior to watching.
  • DirectoryMode=: If the above is enabled, this will set the permission mode of any path components that must be created.
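As an example, a path unit that activates a hypothetical inbox-processor.service whenever files appear in a spool directory might look like this sketch (the directory and unit names are placeholders):

# inbox.path -- activates inbox-processor.service when the directory gains content
[Unit]
Description=Watch an inbox directory (hypothetical)

[Path]
DirectoryNotEmpty=/var/spool/inbox
Unit=inbox-processor.service
MakeDirectory=true
DirectoryMode=0755

[Install]
WantedBy=multi-user.target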

The [Timer] Section

Timer units are used to schedule tasks to operate at a specific time or after a certain delay. This unit type replaces or supplements some of the functionality of the cron and at daemons. An associated unit must be provided which will be activated when the timer is reached.

The [Timer] section of a unit file can contain some of the following directives:
  • OnActiveSec=: This directive allows the associated unit to be activated relative to the .timer unit's activation.
  • OnBootSec=: This directive is used to specify the amount of time after the system is booted when the associated unit should be activated.
  • OnStartupSec=: This directive is similar to the above timer, but in relation to when the systemd process itself was started.
  • OnUnitActiveSec=: This sets a timer according to when the associated unit was last activated.
  • OnUnitInactiveSec=: This sets the timer in relation to when the associated unit was last marked as inactive.
  • OnCalendar=: This allows you to activate the associated unit by specifying an absolute time instead of a time relative to an event.
  • AccuracySec=: This directive is used to set the level of accuracy with which the timer should be adhered to. By default, the associated unit will be activated within one minute of the timer being reached. The value of this directive will determine the upper bounds on the window in which systemd schedules the activation to occur.
  • Unit=: This directive is used to specify the unit that should be activated when the timer elapses. If unset, systemd will look for a .service unit with a name that matches this unit.
  • Persistent=: If this is set, systemd will trigger the associated unit when the timer becomes active if it would have been triggered during the period in which the timer was inactive.
  • WakeSystem=: Setting this directive allows you to wake a system from suspend if the timer is reached when in that state.
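For example, a timer that runs a hypothetical backup.service once a day, and catches up on any runs missed while the machine was off, might look like this sketch:

# backup.timer -- schedules backup.service
[Unit]
Description=Daily backup timer (hypothetical)

[Timer]
OnCalendar=daily
Persistent=true
AccuracySec=1min
Unit=backup.service

[Install]
WantedBy=timers.target

The Unit= line could be omitted here, since the timer and the service share the same base name.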

The [Slice] Section

The [Slice] section of a unit file actually does not have any .slice unit specific configuration. Instead, it can contain some resource management directives that are actually available to a number of the units listed above.

Some common directives in the [Slice] section, which may also be used in other units, can be found in the systemd.resource-control man page. These are valid in the following unit-specific sections (a brief sketch follows the list):
  • [Slice]
  • [Scope]
  • [Service]
  • [Socket]
  • [Mount]
  • [Swap]
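As a rough sketch, such resource-control directives are simply placed in the relevant section. The values below are arbitrary, and CPUShares= and MemoryLimit= are the directive names available in the systemd versions contemporary with this guide:

# In a hypothetical limits.slice unit:
[Slice]
CPUShares=512
MemoryLimit=1G

# A service can then be assigned to that slice from its own unit file:
[Service]
Slice=limits.slice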


Creating Instance Units from Template Unit Files

We mentioned earlier in this guide the idea of template unit files being used to create multiple instances of units. In this section, we will go over this concept in more detail.

Template unit files are, in most ways, no different than regular unit files. However, these provide flexibility in configuring units by allowing certain parts of the file to utilize dynamic information that will be available at runtime.


Template and Instance Unit Names

Template unit files can be identified because they contain an @ symbol after the base unit name and before the unit type suffix. A template unit file name may look like this:
example@.service

When an instance is created from a template, an instance identifier is placed between the @ symbol and the period signifying the start of the unit type. For example, the above template unit file could be used to create an instance unit that looks like this:
example@instance1.service

An instance file is usually created as a symbolic link to the template file, with the link name including the instance identifier. In this way, multiple links with unique identifiers can point back to a single template file. When managing an instance unit, systemd will look for a file with the exact instance name you specify on the command line to use. If it cannot find one, it will look for an associated template file.
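In practice, you rarely create these symbolic links by hand. Assuming the example@.service template above has an [Install] section, enabling an instance creates the link for you:

sudo systemctl enable example@instance1.service
sudo systemctl start example@instance1.service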


Template Specifiers

The power of template unit files is mainly seen through their ability to dynamically substitute appropriate information within the unit definition according to the operating environment. This is done by setting the directives in the template file as normal, but replacing certain values or parts of values with variable specifiers.

The following are some of the more common specifiers that will be replaced with the relevant information when an instance unit is interpreted:
  • %n: Anywhere where this appears in a template file, the full resulting unit name will be inserted.
  • %N: This is the same as the above, but any escaping, such as those present in file path patterns, will be reversed.
  • %p: This references the unit name prefix. This is the portion of the unit name that comes before the @ symbol.
  • %P: This is the same as above, but with any escaping reversed.
  • %i: This references the instance name, which is the identifier following the @ in the instance unit's name. This is one of the most commonly used specifiers because it is guaranteed to be dynamic. The use of this identifier encourages configuration-significant identifiers. For example, the port that the service will run on can be used as the instance identifier, and the template can use this specifier to set up the port specification.
  • %I: This specifier is the same as the above, but with any escaping reversed.
  • %f: This will be replaced with the unescaped instance name or the prefix name, prepended with a /.
  • %c: This will indicate the control group of the unit, with the standard parent hierarchy of /sys/fs/cgroup/systemd/ removed.
  • %u: The name of the user configured to run the unit.
  • %U: The same as above, but as a numeric UID instead of name.
  • %H: The host name of the system that is running the unit.
  • %%: This is used to insert a literal percentage sign.
By using the above identifiers in a template file, systemd will fill in the correct values when interpreting the template to create an instance unit.
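To make this concrete, a sketch of a template that uses %i to carry a port number might look like the following; the daemon path is a hypothetical placeholder:

# example@.service -- start one instance per port, e.g. example@8080.service
[Unit]
Description=Example server on port %i (hypothetical)

[Service]
ExecStart=/usr/sbin/exampled --port=%i
Restart=on-failure

[Install]
WantedBy=multi-user.target

Starting example@8080.service would then launch the daemon with --port=8080 substituted in.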


Conclusion

When working with systemd, understanding units and unit files can make administration simple. Unlike many other init systems, you do not have to know a scripting language to interpret the init files used to boot services or the system. The unit files use a fairly simple declarative syntax that allows you to see at a glance the purpose and effects of a unit upon activation.

Breaking functionality such as activation logic into separate units not only allows the internal systemd processes to optimize parallel initialization, it also keeps the configuration rather simple and allows you to modify and restart some units without tearing down and rebuilding their associated connections.

By learning how to leverage your init system's strengths, you can control the state of your machines and more easily manage your services and processes.

How To Set Up an Apache Active-Passive Cluster Using Pacemaker on CentOS 7


High availability is an important topic nowadays because service outages can be very costly. It's prudent to take measures to keep your website or web application running in case of an outage. With the Pacemaker stack, you can configure a high availability cluster.


Pacemaker is a cluster resource manager. It manages all cluster services (resources) and uses the messaging and membership capabilities of the underlying cluster engine. We will use Corosync as our cluster engine. Resources have a resource agent, which is an external program that abstracts the service.
In an active-passive cluster, all services run on a primary system. If the primary system fails, all services get moved to the backup system. An active-passive cluster makes it possible to do maintenance work without interruption.

In this tutorial, you will learn how to build a high availability Apache active-passive cluster. The web cluster will get addressed by its virtual IP address and will automatically fail over if a node fails.

Your users will access your web application by the virtual IP address, which is managed by Pacemaker. The Apache service and the virtual IP are always located on the same host. When this host fails, they get migrated to the second host and your users will not notice the outage.

Prerequisites

Before you get started with this tutorial, you will need the following:
  • Two CentOS 7 machines, which will be the cluster nodes. We'll refer to these as webnode01 (IP address: your_first_server_ip) and webnode02 (IP address: your_second_server_ip).
  • A user on both servers with root privileges. 
You'll have to run some commands on both servers, and some commands on only one.

Step 1 — Configuring Name Resolution

First, we need to make sure that both hosts can resolve the hostname of the two cluster nodes. To accomplish that, we'll add entries to the /etc/hosts file. Follow this step on both webnode01 and webnode02.
Open /etc/hosts with nano or your favorite text editor.

  • sudo nano /etc/hosts


Add the following entries to the end of the file.
/etc/hosts
your_first_server_ip webnode01.example.com webnode01
your_second_server_ip webnode02.example.com webnode02

Save and close the file.

 

Step 2 — Installing Apache

In this section, we will install the Apache web server. You have to complete this step on both hosts.
First, install Apache.

  • sudo yum install httpd


The Apache resource agent uses the Apache server status page for checking the health of the Apache service. You have to activate the status page by creating the file /etc/httpd/conf.d/status.conf.

  • sudo nano /etc/httpd/conf.d/status.conf


Paste the following directives into this file. They allow access to the status page from localhost, but not from any other host.

/etc/httpd/conf.d/status.conf

<Location /server-status>
   SetHandler server-status
   Order Deny,Allow
   Deny from all
   Allow from 127.0.0.1
</Location>


Save and close the file.

 

Step 3 — Installing Pacemaker

Now we will install the Pacemaker stack. You have to complete this step on both hosts.
Install the Pacemaker stack and the pcs cluster shell. We'll use the latter later to configure the cluster.

  • sudo yum install pacemaker pcs


Now we have to start the pcs daemon, which is used for synchronizing the Corosync configuration across the nodes.

  • sudo systemctl start pcsd.service


To make sure the daemon gets started after every reboot, we will also enable the service.

  • sudo systemctl enable pcsd.service


After you have installed these packages, there will be a new user on your system called hacluster. After the installation, remote login is disabled for this user. For tasks like synchronizing the configuration or starting services on other nodes, we have to set the same password for this user on both servers.

  • sudo passwd hacluster


 

Step 4 — Configuring Pacemaker

Next, we'll allow cluster traffic in FirewallD to allow our hosts to communicate.
First, check if FirewallD is running.

  • sudo firewall-cmd --state


If it's not running, start it.

  • sudo systemctl start firewalld.service


You'll need to do this on both hosts. Once it's running, add the high-availability service to FirewallD.

  • sudo firewall-cmd --permanent --add-service=high-availability


After this change, you need to reload FirewallD.

  • sudo firewall-cmd --reload


If you want to learn more about FirewallD, you can read this guide about how to configure FirewallD on CentOS 7.


Now that our two hosts can talk to each other, we can set up the authentication between the two nodes by running this command on one host (in our case, webnode01).

  • sudo pcs cluster auth webnode01 webnode02

  • Username: hacluster


You should see the following output:
Output
webnode01: Authorized
webnode02: Authorized

Next, we'll generate and synchronize the Corosync configuration on the same host. Here, we'll name the cluster webcluster, but you can call it whatever you like.

  • sudo pcs cluster setup --name webcluster webnode01 webnode02


You'll see the following output:
Output
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
webnode01: Succeeded
webnode02: Succeeded

The corosync configuration is now created and distributed across all nodes. The configuration is stored in the file /etc/corosync/corosync.conf.

 

Step 5 — Starting the Cluster

The cluster can be started by running the following command on webnode01.

  • sudo pcs cluster start --all


To ensure that Pacemaker and Corosync start at boot, we have to enable the services on both hosts.

  • sudo systemctl enable corosync.service

  • sudo systemctl enable pacemaker.service


We can now check the status of the cluster by running the following command on either host.

  • sudo pcs status


Check that both hosts are marked as online in the output.
Output
. . .

Online: [ webnode01 webnode02 ]

Full list of resources:


PCSD Status:
webnode01: Online
webnode02: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

Note: After the first setup, it can take some time before the nodes are marked as online.

 

Step 6 — Disabling STONITH and Ignoring Quorum

 

What is STONITH?

You will see a warning in the output of pcs status that no STONITH devices are configured and STONITH is not disabled:
Warning
. . .
WARNING: no stonith devices and stonith-enabled is not false
. . .
What does this mean and why should you care?
When the cluster resource manager cannot determine the state of a node or of a resource on a node, fencing is used to bring the cluster to a known state again.

Resource level fencing ensures mainly that there is no data corruption in case of an outage by configuring a resource. You can use resource level fencing, for instance, with DRBD (Distributed Replicated Block Device) to mark the disk on a node as outdated when the communication link goes down.

Node level fencing ensures that a node does not run any resources. This is done by resetting the node and the Pacemaker implementation of it is called STONITH (which stands for "shoot the other node in the head"). Pacemaker supports a great variety of fencing devices, e.g. an uninterruptible power supply or management interface cards for servers.

Because the node level fencing configuration depends heavily on your environment, we will disable it for this tutorial.

  • sudo pcs property set stonith-enabled=false


Note: If you plan to use Pacemaker in a production environment, you should plan a STONITH implementation depending on your environment and keep it enabled.

 

What is Quorum?

A cluster has quorum when more than half of the nodes are online. Pacemaker's default behavior is to stop all resources if the cluster does not have quorum. However, this does not make sense in a two-node cluster; the cluster will lose quorum if one node fails.

For this tutorial, we will tell Pacemaker to ignore quorum by setting the no-quorum-policy:

  • sudo pcs property set no-quorum-policy=ignore


 

Step 7 — Configuring the Virtual IP address

From now on, we will interact with the cluster via the pcs shell, so all commands need only be executed on one host; it doesn't matter which one.

The Pacemaker cluster is now up and running and we can add the first resource to it, which is the virtual IP address. To do this, we will configure the ocf:heartbeat:IPaddr2 resource agent, but first, let's cover some terminology.

Every resource agent name has either three or two fields that are separated by a colon:
  • The first field is the resource class, which is the standard the resource agent conforms to. It also tells Pacemaker where to find the script. The IPaddr2 resource agent conforms to the OCF (Open Cluster Framework) standard.
  • The second field depends on the standard. OCF resources use the second field for the OCF namespace.
  • The third field is the name of the resource agent.
Resources can have meta-attributes and instance attributes. Meta-attributes do not depend on the resource type; instance attributes are resource agent-specific. The only required instance attribute of this resource agent is ip (the virtual IP address), but for the sake of explicitness we will also set cidr_netmask (the subnet mask in CIDR notation).

Resource operations are actions the cluster can perform on a resource (e.g. start, stop, monitor). They are indicated by the keyword op. We will add the monitor operation with an interval of 20 seconds so that the cluster checks every 20 seconds if the resource is still healthy. What's considered healthy depends on the resource agent.

First, we will create the virtual IP address resource. Here, we'll use 127.0.0.2 as our virtual IP and Cluster_VIP for the name of the resource.

  • sudo pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip=127.0.0.2 cidr_netmask=24 op monitor interval=20s


Next, check the status of the resource.

  • sudo pcs status


Look for the following line in the output:
Output
...
Full list of resources:

Cluster_VIP (ocf::heartbeat:IPaddr2): Started webnode01
...
The virtual IP address is active on the host webnode01.

 

Step 8 — Adding the Apache Resource

Now we can add the second resource to the cluster, which will be the Apache service. The resource agent of the service is ocf:heartbeat:apache.

We will name the resource WebServer and set the instance attributes configfile (the location of the Apache configuration file) and statusurl (the URL of the Apache server status page). We will choose a monitor interval of 20 seconds again.

  • sudo pcs resource create WebServer ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://127.0.0.1/server-status" op monitor interval=20s


We can query the status of the resource like before.

  • sudo pcs status


You should see WebServer in the output running on webnode02.
Output
...
Full list of resources:

Cluster_VIP (ocf::heartbeat:IPaddr2): Started webnode01
WebServer (ocf::heartbeat:apache): Started webnode02
...
As you can see, the resources run on different hosts. We did not yet tell Pacemaker that these resources must run on the same host, so they are evenly distributed across the nodes.

Note: You can restart the Apache resource by running sudo pcs resource restart WebServer (e.g. if you change the Apache configuration). Make sure not to use systemctl to manage the Apache service.

 

Step 9 — Configuring Colocation Constraints

Almost every decision in a Pacemaker cluster, like choosing where a resource should run, is done by comparing scores. Scores are calculated per resource, and the cluster resource manager chooses the node with the highest score for a particular resource. (If a node has a negative score for a resource, the resource cannot run on that node.)

We can manipulate the decisions of the cluster with constraints. Constraints have a score. If a constraint has a score lower than INFINITY, it is only a recommendation. A score of INFINITY means it is a must.
We want to ensure that both resources are run on the same host, so we will define a colocation constraint with a score of INFINITY.

  • sudo pcs constraint colocation add WebServer Cluster_VIP INFINITY


The order of the resources in the constraint definition is important. Here, we specify that the Apache resource (WebServer) must run on the same hosts the virtual IP (Cluster_VIP) is active on. This also means that WebServer is not permitted to run anywhere if Cluster_VIP is not active.

It is also possible to define the order in which resources should start by creating ordering constraints, or to prefer certain hosts for some resources by creating location constraints, as sketched below.
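For illustration only, using the resource names from this tutorial, an ordering constraint that starts the virtual IP before Apache and a location constraint that merely prefers webnode01 (the score of 50 is arbitrary) would look something like this:

  • sudo pcs constraint order Cluster_VIP then WebServer

  • sudo pcs constraint location WebServer prefers webnode01=50

Neither constraint is required for this tutorial; you can review any configured constraints with sudo pcs constraint list.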
Verify that both resources run on the same host.

  • sudo pcs status


Output
...
Full list of resources:

Cluster_VIP (ocf::heartbeat:IPaddr2): Started webnode01
WebServer (ocf::heartbeat:apache): Started webnode01
...
Both resources are now on webnode01.

 

Conclusion

You have set up an Apache two-node active-passive cluster which is accessible via the virtual IP address. You can now configure Apache further, but make sure to synchronize the configuration across the hosts. You can write a custom script for this (e.g. with rsync) or you can use something like csync2.

If you want to distribute the files of your web application among the hosts, you can set up a DRBD volume and integrate it with Pacemaker.

How to root Nexus 5, Nexus 6, Nexus 7, and Nexus 9 running Android 6.0 Marshmallow


It has been more than a month since Google started rolling out the Android 6.0 Marshmallow OTA update for the Nexus 5, Nexus 6, Nexus 7, and the Nexus 9. Since it even released the factory images for these devices on the same day, many enthusiasts and advanced users were able to get Android 6.0 up and running on their Nexus device within a few hours.


If you are an advanced user, chances are you are missing root access on Marshmallow. Thankfully, Chainfire has figured out a way to get root access on the Nexus 5, Nexus 6, Nexus 7, and Nexus 9 running Android 6.0. The process, however, is slightly different from previous versions of Android and requires you to flash a slightly modified ramdisk on your device. There are a few advantages to this method though, including the fact that it does not modify the system partition of your Nexus device at all, which makes it cleaner and more efficient than previous root methods.

Note: This goes without saying, but following the steps below will wipe all your data from the device. So make sure to create backups of all your important data before proceeding with the steps below.

Step 1: Download the ADB/Fastboot files, TWRP custom recovery, ramdisk, modified SuperSU ZIP file and the drivers from below and put them all inside a folder called ‘root’ on your desktop.



Unlock the bootloader

Step 2: If you own a Nexus 6 or Nexus 9, head over to Settings -> About Phone and tap the ‘Build Number’ option seven times to enable Developer Options. Now, head over to Settings -> Developer Options and check the ‘Enable OEM unlock’ option.


Step 3: You will now have to reboot your Nexus device into the bootloader (a.k.a. Fastboot mode) so that you can execute the Fastboot command to unlock its bootloader. To do this, switch off your device and reboot it into bootloader mode by using the key combination mentioned below.

    Nexus 5: Volume Up + Volume Down for a few seconds and then press Power button.
    Nexus 6: Volume Down for a few seconds and then press the Power button.
    Nexus 7 (2013): Volume Down for a few seconds and then press the Power button.
    Nexus 9: Volume Down for a few seconds and then press the Power button.

Step 4: Connect the device to your PC and start a new Command Prompt or Terminal window. Use the 'cd' command to navigate to the 'root' folder on your desktop. The command below should work for the majority of users:

    cd ~/Desktop/root

Alternatively, you can simply drag ‘n’ drop the root folder to the Terminal or Command Prompt window as well.

Step 5: Run the following command to first make sure that the Nexus 5, Nexus 6, Nexus 7, or Nexus 9 is being detected by your PC.

    fastboot devices

Mac users will have to prefix a “./” before every Fastboot or ADB command you run. Therefore, the above command will look something like this for you:

    ./fastboot devices

If detected, you will receive a valid response along with the device ID of your Nexus device. If not, you will get a timeout error, in which case you need to repeat Steps 1-4 mentioned above.

Step 6: Finally, unlock the bootloader of your Nexus device by running the following command:

    fastboot oem unlock

You will need to confirm the selection on your handset by pressing the Volume Up button. Your device might reboot multiple times during the process.

Once the bootloader is unlocked, your Nexus device will automatically boot back into Android. If it does not do so automatically, make sure to do it on your own.

Flashing TWRP recovery

Step 7: You can skip the setup process for now. Transfer the ramdisk and the SuperSU ZIP files to the internal storage of the handset, and then repeat steps 2-5 from above.

Step 8: Now flash TWRP recovery on your Nexus handset by executing the below command:

    fastboot flash recovery twrp.img


Step 9: With the recovery flashed, it is now time to flash the modified ramdisk on the handset. For this, you need to boot your handset into recovery mode. This can be done by pressing the Volume Up/down button and then confirming your selection by pressing the Power button.

Step 10: Boot your Nexus device into TWRP recovery by running the following command:

    fastboot boot twrp.img

Step 11: Now proceed to flash the ramdisk followed by the modified SuperSU zip file. Once done, simply boot your Nexus device back into Android by selecting the ‘reboot system’ option.

The first boot can take its own fair share of time, so don’t panic if your device is stuck at the Android logo for 5 minutes. However, if your device does not boot even after 15 minutes, repeat steps 8-11 from above.

Oracle Database 12c Release 1 (12.1) RAC on Oracle Linux 7 Using NFS

This article describes the installation of Oracle Database 12c Release 1 (12.1 64-bit) RAC on Oracle Linux 7.1 64-bit using NFS to provide the shared storage.


  • Introduction
  • Download Software
  • Operating System Installation
  • Oracle Installation Prerequisites
    • Automatic Setup
    • Manual Setup
    • Additional Setup
  • Create Shared Disks
  • Install the Grid Infrastructure
  • Install the Database
  • Check the Status of the RAC
  • Direct NFS Client

 

Introduction

NFS is an abbreviation of Network File System, a platform independent technology created by Sun Microsystems that allows shared access to files stored on computers via an interface called the Virtual File System (VFS) that runs on top of TCP/IP. Computers that share files are considered NFS servers, while those that access shared files are considered NFS clients. An individual computer can be either an NFS server, a NFS client or both.

We can use NFS to provide shared storage for a RAC installation. In a production environment we would expect the NFS server to be a NAS, but for testing it can just as easily be another server, or even one of the RAC nodes itself.

In this case, I'm doing the installations on VirtualBox VMs and the NFS shares are on the host server. If you have access to a NAS or a third server you can easily use that for the shared storage. Whichever route you take, the fundamentals of the installation are the same.

The Single Client Access Name (SCAN) should really be defined in the DNS or GNS and round-robin between 3 addresses, which are on the same subnet as the public and virtual IPs. You can try to use a single IP address in the "/etc/hosts" file, which will cause the cluster verification to fail, but it allows you to complete the install without the presence of a DNS.

Assumptions. You need two machines available to act as your two RAC nodes. They can be physical or virtual. In this case I'm using two virtual machines called "ol7-121-rac1" and "ol7-121-rac2". If you want a different naming convention or different IP addresses that's fine, but make sure you stay consistent with how they are used.

Download Software

Download the following software.

Operating System Installation

This article uses Oracle Linux 7.1. More specifically, it should be a server installation with a minimum of 2G swap (preferably 3-4G), firewall disabled and SELinux set to permissive. Oracle recommend a default server installation, but if you perform a custom installation include the following package groups.
  • Server with GUI
  • Hardware Monitoring Utilities
  • Large Systems Performance
  • Network file system client
  • Performance Tools
  • Compatibility Libraries
  • Development Tools
To be consistent with the rest of the article, the following information should be set during the installation.
Node 1.
  • hostname: ol7-121-rac1.localdomain
  • enp0s3 (eth0): DHCP (Connect Automatically)
  • enp0s8 (eth1): IP=192.168.56.101, Subnet=255.255.255.0, Gateway=192.168.56.1, DNS=192.168.56.1, Search=localdomain (Connect Automatically)
  • enp0s9 (eth2): IP=192.168.1.101, Subnet=255.255.255.0, Gateway=, DNS=, Search= (Connect Automatically)
Node 2.
  • hostname: ol7-121-rac2.localdomain
  • enp0s3 (eth0): DHCP (Connect Automatically)
  • enp0s8 (eth1): IP=192.168.56.102, Subnet=255.255.255.0, Gateway=192.168.56.1, DNS=192.168.56.1, Search=localdomain (Connect Automatically)
  • enp0s9 (eth2): IP=192.168.1.102, Subnet=255.255.255.0, Gateway=, DNS=, Search= (Connect Automatically)
You are free to change the IP addresses to suit your network, but remember to stay consistent with those adjustments throughout the rest of the article.

In this article, I performed the installation using VirtualBox virtual machines, so I also configured a NAT adapter on each machine to allow access to the internet. If you are using physical machines, or virtual machines with direct access to the internet over the public network, like bridged connections, this extra adapter will not be necessary, so ignore the references to it.

Oracle Installation Prerequisites

Perform either the Automatic Setup or the Manual Setup to complete the basic prerequisites. The Additional Setup is required for all installations.

Automatic Setup

If you plan to use the "oracle-rdbms-server-12cR1-preinstall" package to perform all your prerequisite setup, issue the following command.

# yum install oracle-rdbms-server-12cR1-preinstall -y
# yum install ntp -y

It is probably worth doing a full update as well, but this is not strictly speaking necessary.

# yum update -y

Manual Setup

If you have not used the "oracle-rdbms-server-12cR1-preinstall" package to perform all prerequisites, you will need to manually perform the following setup tasks.

Add or amend the following lines to the "/etc/sysctl.conf" file.
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

Run the following command to change the current kernel parameters.
/sbin/sysctl -p

Add the following lines to the "/etc/security/limits.conf" file.
oracle   soft   nofile    1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768

In addition to the basic OS installation, the following packages must be installed whilst logged in as the root user. This includes the 64-bit and 32-bit versions of some packages.
# From Public Yum or ULN
yum install binutils -y
yum install compat-libstdc++-33 -y
yum install compat-libstdc++-33.i686 -y
yum install gcc -y
yum install gcc-c++ -y
yum install glibc -y
yum install glibc.i686 -y
yum install glibc-devel -y
yum install glibc-devel.i686 -y
yum install ksh -y
yum install libgcc -y
yum install libgcc.i686 -y
yum install libstdc++ -y
yum install libstdc++.i686 -y
yum install libstdc++-devel -y
yum install libstdc++-devel.i686 -y
yum install libaio -y
yum install libaio.i686 -y
yum install libaio-devel -y
yum install libaio-devel.i686 -y
yum install libXext -y
yum install libXext.i686 -y
yum install libXtst -y
yum install libXtst.i686 -y
yum install libX11 -y
yum install libX11.i686 -y
yum install libXau -y
yum install libXau.i686 -y
yum install libxcb -y
yum install libxcb.i686 -y
yum install libXi -y
yum install libXi.i686 -y
yum install make -y
yum install sysstat -y
yum install unixODBC -y
yum install unixODBC-devel -y
yum install zlib-devel -y
yum install zlib-devel.i686 -y

Create the new groups and users.
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
#groupadd -g 54324 backupdba
#groupadd -g 54325 dgdba
#groupadd -g 54326 kmdba
#groupadd -g 54327 asmdba
#groupadd -g 54328 asmoper
#groupadd -g 54329 asmadmin

useradd -u 54321 -g oinstall -G dba,oper oracle

Uncomment the extra groups you require.

Additional Setup

The following steps must be performed, whether you did the manual or automatic setup.
Perform the following steps whilst logged into the "ol7-121-rac1" virtual machine as the root user.
Set the password for the "oracle" user.

passwd oracle

Apart from the localhost address, the "/etc/hosts" file can be left blank, but I prefer to put the addresses in for reference.
 
127.0.0.1       localhost.localdomain   localhost 
 
# Public
192.168.56.101 ol7-121-rac1.localdomain ol7-121-rac1
192.168.56.102 ol7-121-rac2.localdomain ol7-121-rac2 
 
# Private
192.168.1.101 ol7-121-rac1-priv.localdomain ol7-121-rac1-priv
192.168.1.102 ol7-121-rac2-priv.localdomain ol7-121-rac2-priv 
 
# Virtual
192.168.56.103 ol7-121-rac1-vip.localdomain ol7-121-rac1-vip
192.168.56.104 ol7-121-rac2-vip.localdomain ol7-121-rac2-vip 
 
# SCAN
#192.168.56.105 ol7-121-scan.localdomain ol7-121-scan
#192.168.56.106 ol7-121-scan.localdomain ol7-121-scan
#192.168.56.107 ol7-121-scan.localdomain ol7-121-scan 
 
# NAS
192.168.56.1 nas1.localdomain nas1

The SCAN address is commented out of the hosts file because it must be resolved using a DNS, so it can round-robin between 3 addresses on the same subnet as the public IPs.

Make sure the "/etc/resolv.conf" file includes a nameserver entry that points to the correct nameserver. Also, if the "domain" and "search" entries are both present, comment out one of them. For this installation my "/etc/resolv.conf" looked like this.
 
#domain localdomain
search localdomain
nameserver 192.168.56.1

If you are doing this installation on a virtual machine and you've configured a NAT interface, you might find the changes to the "resolv.conf" file are overwritten by the network manager. For this reason, this interface should now be disabled on startup. You can enable it manually if you need to access the internet from the VMs. Edit the config file associated with the NAT network adapter, in this case the "/etc/sysconfig/network-scripts/ifcfg-enp0s3" (eth0) file, making the following change. This will take effect after the next restart.
 
ONBOOT=no

There is no need to do the restart now. You can just run the following command. Remember to amend the adapter name if yours are named differently.
 
# ifdown enp0s3
# #ifdown eth0

At this point, the networking for the first node should look something like the following. Notice that enp0s3 (eth0), my NAT adapter, has no associated IP address because it is disabled. If you are not using a VM and only configured two network adapters, you will not see this.
 
# ifconfig -a
enp0s3: flags=4163 mtu 1500
ether 08:00:27:eb:72:86 txqueuelen 1000 (Ethernet)
RX packets 10 bytes 1716 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 55 bytes 8308 (8.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp0s8: flags=4163 mtu 1500
inet 192.168.56.101 netmask 255.255.255.0 broadcast 192.168.56.255
inet6 fe80::a00:27ff:fe84:31f5 prefixlen 64 scopeid 0x20
ether 08:00:27:84:31:f5 txqueuelen 1000 (Ethernet)
RX packets 342 bytes 33597 (32.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 302 bytes 43228 (42.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp0s9: flags=4163 mtu 1500
inet 192.168.1.101 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::a00:27ff:fe0d:9dd9 prefixlen 64 scopeid 0x20
ether 08:00:27:0d:9d:d9 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 28 bytes 3941 (3.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 0 (Local Loopback)
RX packets 16 bytes 1708 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16 bytes 1708 (1.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

#

With this in place and the DNS configured the SCAN address is being resolved to all three IP addresses.
 
# nslookup ol7-121-scan
Server: 192.168.56.1
Address: 192.168.56.1#53

Name: ol7-121-scan.localdomain
Address: 192.168.56.105
Name: ol7-121-scan.localdomain
Address: 192.168.56.106
Name: ol7-121-scan.localdomain
Address: 192.168.56.107

#

Amend the "/etc/security/limits.d/20-nproc.conf" file as described below. See MOS Note [ID 1487773.1]
 
# Change this
* soft nproc 4096

# To this
* - nproc 16384

Change the setting of SELinux to permissive by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
 
SELINUX=permissive

If you have the Linux firewall enabled, you will need to disable it.

# systemctl stop firewalld
# systemctl disable firewalld

Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. If you want to deconfigure NTP do the following, which is what I did for this installation.

# systemctl stop ntpd
Shutting down ntpd: [ OK ]
# systemctl disable ntpd
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid

If your RAC is going to be permanently connected to your main network and you want to use NTP, you must add the "-x" option into the following line in the "/etc/sysconfig/ntpd" file.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

Then restart NTP.
# systemctl restart ntpd

Log in as the "oracle" user and add the following lines at the end of the "/home/oracle/.bash_profile" file.
Remember to set the hostnames and ORACLE_SID values correctly in the following scripts. Node 2 will use ol7-121-rac2 and cdbrac2.
 
# Oracle Settings 
 
export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_HOSTNAME=ol7-121-rac1.localdomain
export ORACLE_UNQNAME=CDBRAC
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.1.0.2/grid
export DB_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=cdbrac1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'

Create a file called "/home/oracle/grid_env" with the following contents.
 
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Create a file called "/home/oracle/db_env" with the following contents.
export ORACLE_SID=cdbrac1
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

Once the "/home/oracle/.bash_profile" has been run, you will be able to switch between environments as follows.
 
$ grid_env
$ echo $ORACLE_HOME
/u01/app/12.1.0.2/grid
$ db_env
$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.2/db_1
$

We've made a lot of changes, so it's worth doing a reboot of the machines at this point to make sure all the changes have taken effect.
 
# shutdown -r now
 

Create Shared Disks

First we need to set up some NFS shares. In this case we will do this on the host machine, but you can do it on a NAS or a third server if you have one available. Create the following directories.
 
mkdir /shared_config
mkdir /shared_grid
mkdir /shared_home
mkdir /shared_data

Add the following lines to the "/etc/exports" file.
/shared_config               *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_grid *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_home *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_data *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

Run the following command to export the NFS shares.
 
chkconfig nfs on
service nfs restart

On both ol7-121-rac1 and ol7-121-rac2 create the directories in which the Oracle software will be installed.
 
mkdir -p /u01/app/12.1.0.2/grid
mkdir -p /u01/app/oracle/product/12.1.0.2/db_1
mkdir -p /u01/oradata
mkdir -p /u01/shared_config
chown -R oracle:oinstall /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
chmod -R 775 /u01/app /u01/app/oracle /u01/oradata /u01/shared_config

Add the following lines to the "/etc/fstab" file.
 
nas1:/shared_config /u01/shared_config  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/shared_grid /u01/app/12.1.0.2/grid nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_home /u01/app/oracle/product/12.1.0.2/db_1 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_data /u01/oradata nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0

Mount the NFS shares on both servers.
 
mount /u01/shared_config
mount /u01/app/12.1.0.2/grid
mount /u01/app/oracle/product/12.1.0.2/db_1
mount /u01/oradata

Make sure the permissions on the shared directories are correct.
 
chown -R oracle:oinstall /u01/shared_config
chown -R oracle:oinstall /u01/app/12.1.0.2/grid
chown -R oracle:oinstall /u01/app/oracle/product/12.1.0.2/db_1
chown -R oracle:oinstall /u01/oradata
 

Install the Grid Infrastructure

Start both RAC nodes, login to ol7-121-rac1 as the oracle user and start the Oracle installer.
./runInstaller

Select the "Install and Configure Grid Infrastructure for a Cluster" option, then click the "Next" button.


Select the "Configure a Standard Cluster" option, then click the "Next" button.


Select the "Advanced Installation" option, then click the "Next" button.


Select the required language support, then click the "Next" button.


Enter cluster information and uncheck the "Configure GNS" option, then click the "Next" button.


On the "Specify Node Information" screen, click the "Add" button.


Enter the details of the second node in the cluster, then click the "OK" button.


Click the "SSH Connectivity..." button and enter the password for the "oracle" user. Click the "Setup" button to configure SSH connectivity, and the "Test" button to test it once it is complete. Click the "Next" button.


Check the public and private networks are specified correctly, then click the "Next" button.


Select the "Shared File System" option, then click the "Next" button.


Select the required level of redundancy and enter the OCR File Location(s), then click the "Next" button.


Select the required level of redundancy and enter the Voting Disk File Location(s), then click the "Next" button.


Accept the default failure isolation support by clicking the "Next" button.


Don't register with Cloud Control. Click the "Next" button.


Select the preferred OS groups for each option, then click the "Next" button. Click the "Yes" button on the subsequent message dialog.


Enter "/u01/app/oracle" as the Oracle Base and "/u01/app/12.1.0.2/grid" as the software location, then click the "Next" button.


Accept the default inventory directory by clicking the "Next" button.


Ignore the root configuration, we will run the scripts manually. Click the "Next" button.


Wait while the prerequisite checks complete. If you have any issues, either fix them or check the "Ignore All" checkbox and click the "Next" button. If there are no issues, you will move directly to the summary screen.


If you are happy with the summary information, click the "Install" button.


Wait while the setup takes place.


When prompted, run the configuration scripts on each node.


The output from the "orainstRoot.sh" file should look something like that listed below.
 
# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
#
 
Once the scripts have completed, return to the "Execute Configuration Scripts" screen on ol7-121-rac1 and click the "OK" button.

Grid - Execute Configuration Scripts

Wait for the configuration assistants to complete.


We expect the verification phase to fail with an error relating to the SCAN, assuming you are not using DNS.
 
INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "rac-scan.localdomain"...
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.2.201) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: Verification of SCAN VIP and Listener setup failed

Provided this is the only error, it is safe to ignore this and continue by clicking the "Next" button.
Click the "Close" button to exit the installer.


The grid infrastructure installation is now complete.

Install the Database

Start all the RAC nodes, login to ol7-121-rac1 as the oracle user and start the Oracle installer.
./runInstaller
 
Uncheck the security updates checkbox and click the "Next" button.


Accept the "Create and configure a database" option by clicking the "Next" button.


Accept the "Server Class" option by clicking the "Next" button.


Select the "Oracle Real Application Clusters database installation" option, then click the "Next" button.


Select the "Admin managed" option, then click the "Next" button.


Make sure both nodes are selected, then click the "Next" button.


Accept the "Typical install" option by clicking the "Next" button.


Enter "/u01/app/oracle/product/12.1.0.2/db_1" for the software location. The storage type should be set to "File System" with the file location set to "/u01/oradata". Enter the appropriate passwords and database name, in this case "cdbrac".


Wait for the prerequisite check to complete. If there are any problems either fix them, or check the "Ignore All" checkbox and click the "Next" button.


If you are happy with the summary information, click the "Install" button.


Wait while the installation takes place.


When prompted, run the configuration scripts on each node. When the scripts have been run on each node, click the "OK" button.


Once the software installation is complete the Database Configuration Assistant (DBCA) will start automatically.


Once the Database Configuration Assistant (DBCA) has finished, click the "OK" button.


Click the "Close" button to exit the installer.



The RAC database creation is now complete.

Check the Status of the RAC

There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.

$ srvctl config database -d cdbrac
Database unique name: cdbrac
Database name: cdbrac
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: /u01/oradata/cdbrac/spfilecdbrac.ora
Password file: /u01/oradata/cdbrac/orapwcdbrac
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: cdbrac1,cdbrac2
Configured nodes: ol7-121-rac1,ol7-121-rac2
Database is administrator managed
$

$ srvctl status database -d cdbrac
Instance cdbrac1 is running on node ol7-121-rac1
Instance cdbrac2 is running on node ol7-121-rac2
$

The V$ACTIVE_INSTANCES view can also display the current status of the instances.
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 28 17:32:56 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
and Real Application Testing options

SQL> SELECT inst_Name FROM v$active_instances;

INST_NAME
------------------------------------------------------------
ol7-121-rac1.localdomain:cdbrac1
ol7-121-rac2.localdomain:cdbrac2

SQL>

Direct NFS Client

The Direct NFS Client should not be used for CRS-related files, so it is important to have separate NFS mounts for the different types of files, rather than trying to compact them into a single NFS share.

For improved NFS performance, Oracle recommend using the Direct NFS Client shipped with Oracle 12c. The direct NFS client looks for NFS details in the following locations.
  1. $ORACLE_HOME/dbs/oranfstab
  2. /etc/oranfstab
  3. /etc/mtab
Since our NFS mount point details are already in the "/etc/fstab" file, and therefore also in the "/etc/mtab" file, there is no need to configure any extra connection details.

For the client to work we need to switch the "libodm12.so" library for the "libnfsodm12.so" library, which can be done manually or via the "make" command.
srvctl stop database -d cdbrac

# manual method
cd $ORACLE_HOME/lib
mv libodm12.so libodm12.so_stub
ln -s libnfsodm12.so libodm12.so

# make method
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

srvctl start database -d cdbrac

With the configuration complete, you can see the direct NFS client usage via the following views.
  • v$dnfs_servers
  • v$dnfs_files
  • v$dnfs_channels
  • v$dnfs_stats
For example.
SQL> SELECT svrname, dirname FROM v$dnfs_servers;

SVRNAME DIRNAME
------------- -----------------
nas1 /shared_data

SQL>

The Direct NFS Client supports direct I/O and asynchronous I/O by default.
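
If you want to confirm the Direct NFS Client is actually in use after the restart, one simple check is to look for the ODM banner in the database alert log. This is a minimal sketch, assuming the default diagnostic destination for this installation; the exact path and message text can vary by version.

$ grep -i "Direct NFS" /u01/app/oracle/diag/rdbms/cdbrac/cdbrac1/trace/alert_cdbrac1.log
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0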

Doulci Software 2.0 Available For Download - Bypass iCloud Activation


The wait is finally over: the latest DoulCi software is now available for download. You can download it from the link below, then follow the instructions below to run the DoulCi software and bypass iCloud activation on your iPhone or iPad.
Download Doulci 2.0 2015

How To Install Go 1.5.1 on Ubuntu 14.04

Go is a modern programming language developed by Google that uses high-level syntax similar to scripting languages. It is popular for many applications and at many companies, and has a robust set of tools and over 90,000 repos. This tutorial will walk you through downloading and installing Go 1.5.1, as well as building a simple Hello World application.


Prerequisites

  • One Ubuntu 14.04 Machine (Physical or Virtual)
  • One sudo non-root user

Step 1 — Installing Go

In this step, we’ll install Go on your server.
To begin, connect to your Ubuntu server via ssh:

  • ssh sammy@your_server_ip 


Once connected, update and upgrade the Ubuntu packages on your server. This ensures that you have the latest security patches and fixes, as well as updated repos for your new packages.

  • sudo apt-get update

  • sudo apt-get -y upgrade


With that complete, you can download the latest Go package by running the command below. It will pull down the Go package file and save it to your current working directory, which you can determine by running pwd.

  • curl -O https://storage.googleapis.com/golang/go1.5.1.linux-amd64.tar.gz


Next, use tar to unpack the archive. This extracts the downloaded file and creates a folder called go.

  • tar -xvf go1.5.1.linux-amd64.tar.gz


Some users prefer different locations for their Go installation, or may have mandated software locations. With the Go package now in place, you can either leave it in the home directory or move it to another location. The most common location for the Go folder is /usr/local, which keeps the installation in a standard system location; its bin directory will be added to your $PATH in the next step.

This is what we'll use for this tutorial, so move the directory to /usr/local.

  • sudo mv go /usr/local


The location you pick to house your Go folder will be referenced later in this tutorial, so remember where you placed it if the location is different than /usr/local.

 

Step 2 — Setting Go Paths

In this step, we’ll set some paths that Go needs. The paths in this step are all given relative to the location of your Go installation in /usr/local. If you chose a different directory, or left the files in the download location, modify the commands to match your location.

First, set Go's root value, which tells Go where to look for its files. To do this, open your .profile file for editing.

  • nano ~/.profile


If you installed Go in /usr/local, add this line at the end of the file:
 
export PATH=$PATH:/usr/local/go/bin

If you chose an alternate installation location for Go, add these lines instead to the same file. This example shows the commands if Go is installed in your home directory:
 
export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin

With the appropriate line(s) copied into your profile, save and close the file. Next, refresh your profile.

  • source ~/.profile
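
To confirm the new path is picked up, you can check the installed Go version. This is just a quick sanity check, and the output shown assumes the 1.5.1 package installed earlier.

  • go version


The command should report something like: go version go1.5.1 linux/amd64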


 

Step 3 — Testing Your Installation

Now that Go is installed and the paths are set for your server, you can test that Go is working as expected.
Create a new directory for your Go workspace, which is where Go will build its files.

  • mkdir ~/work


Now you can point Go to the new workspace you just created by exporting GOPATH.

  • export GOPATH=$HOME/work
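
This export only lasts for the current shell session. If you would like the workspace setting to persist after you log out, you could also append the same line to your .profile; this is optional and assumes the ~/work location chosen above.

  • echo "export GOPATH=$HOME/work" >> ~/.profile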


Then, create a directory hierarchy in this folder for you to create your test file.

If you plan to use Git to commit and store your Go code on GitHub, you can replace the value user with your GitHub username. This is recommended because this will allow you to import external Go packages. However, if you do not plan to use GitHub to store and manage your code, you can use any folder structure, like ~/my_project.

  • mkdir -p ~/work/src/github.com/user/hello


Next, you can create a simple “Hello World” Go file.

  • nano ~/work/src/github.com/user/hello/hello.go


Inside your editor, paste in the content below, which declares the main package, imports the fmt package for formatted I/O, and defines a main function that prints "hello, world" when run.
package main

import "fmt"

func main() {
    fmt.Printf("hello, world\n")
}

If this file runs successfully it will print "hello, world", which shows that Go is building files correctly. Save and close the file, then compile it by invoking go install.

  • go install github.com/user/hello


With the file compiled, you can run it by simply referring to the file at your Go path.

  • $GOPATH/bin/hello


If that command prints "hello, world", then Go is successfully installed and functional.
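
As a quick alternative while developing, you can also run the source file directly without installing a binary first; the path below assumes the hello.go file created earlier in this step.

  • go run ~/work/src/github.com/user/hello/hello.go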

 

Conclusion

By downloading and installing the latest Go package and setting its paths, you now have an Ubuntu machine ready for Go development.

iCloud Bypass (Unlock iPhone 4, 4s, 5, 5s, 5c, 6)


This video will help you bypass iCloud activation on Apple devices like the iPhone, iPad, and iPod touch.




Watch this video as well


How To Secure Apache with Let's Encrypt on Ubuntu 14.04

This tutorial will show you how to set up a TLS/SSL certificate from Let’s Encrypt on an Ubuntu 14.04 server running Apache as a web server. We will also cover how to automate the certificate renewal process using a cron job.


SSL certificates are used within web servers to encrypt the traffic between the server and client, providing extra security for users accessing your application. Let’s Encrypt provides an easy way to obtain and install trusted certificates for free.

Prerequisites

In order to complete this guide, you will need:
  • An Ubuntu 14.04 server with a non-root sudo user, which you can set up by following our Initial Server Setup guide
  • The Apache web server installed with one or more domain names properly configured
When you are ready to move on, log into your server using your sudo account.

Step 1 — Install the Server Dependencies

The first thing we need to do is to update the package manager cache with:

  • sudo apt-get update


We will need git in order to download the Let’s Encrypt client. To install git, run:

  • sudo apt-get install git


 

Step 2 — Download the Let’s Encrypt Client


Next, we will download the Let’s Encrypt client from its official repository, placing its files in a special location on the server. We will do this to facilitate the process of updating the repository files when a new release is available. Because the Let’s Encrypt client is still in beta, frequent updates might be necessary to correct bugs and implement new functionality.

We will clone the Let’s Encrypt repository under /opt, which is a standard directory for placing third-party software on Unix systems:

  • sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt


This will create a local copy of the official Let’s Encrypt repository under /opt/letsencrypt.

 

Step 3 — Set Up the SSL Certificate

Generating the SSL Certificate for Apache using the Let’s Encrypt client is quite straightforward. The client will automatically obtain and install a new SSL certificate that is valid for the domains provided as parameters.

Access the letsencrypt directory:

  • cd /opt/letsencrypt


To execute the interactive installation and obtain a certificate that covers only a single domain, run the letsencrypt-auto command with:

  • ./letsencrypt-auto --apache -d example.com


If you want to install a single certificate that is valid for multiple domains or subdomains, you can pass them as additional parameters to the command. The first domain name in the list of parameters will be the base domain used by Let’s Encrypt to create the certificate, and for that reason we recommend that you pass the bare top-level domain name as first in the list, followed by any additional subdomains or aliases:

  • ./letsencrypt-auto --apache -d example.com -d www.example.com


For this example, the base domain will be example.com. We will need this information for the next step, where we automate the certificate renewal process.

After the dependencies are installed, you will be presented with a step-by-step guide to customize your certificate options. You will be asked to provide an email address for lost key recovery and notices, and you will be able to choose between enabling both http and https access or forcing all requests to redirect to https.

When the installation is finished, you should be able to find the generated certificate files at /etc/letsencrypt/live. You can verify the status of your SSL certificate with the following link (don’t forget to replace example.com with your base domain):
https://www.ssllabs.com/ssltest/analyze.html?d=example.com&latest

You should now be able to access your website using an https prefix.
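
A quick way to confirm this from the command line is to request the page headers over HTTPS. This is just a sanity check, with example.com standing in for your own domain as before; a successful response will include a status line such as "HTTP/1.1 200 OK".

  • curl -I https://example.com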

 

Step 4 — Set Up Auto Renewal

Let’s Encrypt certificates are valid for 90 days, but it’s recommended that you renew the certificates every 60 days to allow a margin of error. At the time of this writing, automatic renewal is still not available as a feature of the client itself, but you can manually renew your certificates by running the Let’s Encrypt client again with the same parameters previously used.

To manually renew a Let’s Encrypt certificate for Apache with no interaction in the command line, you can run:

  • ./letsencrypt-auto certonly --apache --renew-by-default -d example.com -d www.example.com


If you provided multiple domain names when first installing the certificate, you’ll need to pass the same list of domains again for the renewal command, otherwise the Let’s Encrypt client will generate a new certificate instead of renewing the existing one.

A practical way to ensure your certificates won’t get outdated is to create a cron job that will automatically handle the renewal requests for you.

To facilitate this process, we will use a shell script that will verify the certificate expiration date for the provided domain and request a renewal when the expiration is less than 30 days away. The script will be scheduled to run once a week. This way, even if a cron job fails, there’s a 30-day window to try again every week.

First, download the script and make it executable. Feel free to review the contents of the script before downloading it.

  • sudo curl -L -o /usr/local/sbin/le-renew http://do.co/le-renew

  • sudo chmod +x /usr/local/sbin/le-renew


The le-renew script takes as an argument the base domain name associated with the certificate you want to renew. You can check which domain was used by Let’s Encrypt as your base domain name by looking at the contents inside /etc/letsencrypt/live, which is the directory that holds the certificates generated by the client.

You can run the script manually with:

  • sudo le-renew example.com


Since we just created the certificate and there is no need for renewal just yet, the script will simply output how many days are left until the certificate expiration:

 


Output

Checking expiration date for example.com...
The certificate is up to date, no need for renewal (89 days left).

Next, we will edit the crontab to create a new job that will run this command every week. To edit the crontab for the root user, run:

  • sudo crontab -e


Include the following content, all in one line:

 


crontab

30 2 * * 1 /usr/local/sbin/le-renew example.com >> /var/log/le-renew.log
Save and exit. This will create a new cron job that will execute the le-renew command every Monday at 2:30 am. The output produced by the command will be appended to a log file located at /var/log/le-renew.log.
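
You can confirm the job was saved by listing the root user's crontab; the new entry should appear at the end of the output.

  • sudo crontab -l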

 

Step 5 — Updating the Let’s Encrypt Client (optional)

Whenever new updates are available for the client, you can update your local copy by running a git pull from inside the Let’s Encrypt directory:

  • cd /opt/letsencrypt

  • sudo git pull


This will download all recent changes to the repository, updating your client.
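
If you want to check which client version you now have, the letsencrypt-auto wrapper passes its arguments through to the underlying client, so a version flag should work; the exact output format may vary between releases.

  • ./letsencrypt-auto --version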

 

Conclusion

In this guide, we saw how to install a free SSL certificate from Let’s Encrypt in order to secure a website hosted with Apache. Because the Let’s Encrypt client is still in beta, we recommend that you check the official Let’s Encrypt blog for important updates from time to time.

Apple has reportedly put together a team to kickstart virtual reality endeavor

Virtual reality is on everyone’s mind, at least in the tech industry, and it has companies going crazy to jump on the “next big thing” bandwagon. Apparently Apple’s making waves in its own ranks to not be completely left behind.

According to a new report from the FT, citing people familiar with the matter, Apple has been putting together a secret team to kickstart its VR endeavors, aiming to take on Facebook’s Oculus Rift and even Microsoft’s HoloLens headsets. The teams are working on both virtual reality and augmented reality, and Apple seems to be weighing which experience would be best for its customers moving forward, though it all appears to be a project that is still just getting off the ground.


The team itself is made up of many people who have been involved with altering reality in one way or another, whether from companies that Apple has directly acquired or from people hired away from competing companies, including Microsoft and the camera start-up Lytro. This news also follows Apple's hiring of Doug Bowman, a leading researcher in virtual reality.

As of now, Apple has reportedly put together a variety of prototype headsets over the last few months, and work is still underway on improving them and the technology within them. Tim Cook, Apple’s CEO, on the company’s most recent earnings call, even confirmed his appreciation for VR, and said that it’s “really cool and has some interesting applications.”

The report states that while Apple has had a passion for VR in the past, even dating back to 2000, the technology back then was simply too weak, and so the company abandoned whatever those plans were. However, with Facebook’s acquisition of Oculus in 2014, Apple’s drive to bring VR to its customers has apparently been given a big boost. Indeed, the company has made acquisitions in the past that could boost its efforts in VR, including the acquisition of PrimeSense, a company that focuses on real-time motion capture.

There’s no doubt that VR is a big thing right now, as just about any company that can is focusing on the technology in one way or another. That includes HTC, Samsung, Facebook/Oculus, Microsoft, Google, and many others. Many believe Apple’s entry into the market is inevitable, and this report would seem to echo those expectations.

Apple reportedly developing long range wireless charging technology for 2017 iPhone

A report from Bloomberg claims that Apple is working with its partners in the United States and Asia to develop a long range wireless charging technology that it could use on the iPhone in 2017. Sources of the website claim that Apple is working on a wireless charging technology that would make it possible to charge an iPhone or iPad even when it is placed further away from the wireless charging dock.

To achieve this feat, Apple will have to overcome quite a lot of technical barriers, including the loss of power over distance when it travels wirelessly. The company has still not made a final decision on whether to include this wireless charging tech in the iPhone due to be launched in 2017.


The Apple Watch is the only device in the company’s product portfolio at the moment that makes use of wireless charging. However, the Apple Watch uses inductive wireless charging, similar to what is found on many popular Android smartphones including the Galaxy S6 and Note 5 from Samsung.

Android OEMs have been including wireless charging on their smartphones for quite a few years now, but they all require that the handset be placed directly on top of the charging mat. Wireless charging has also historically been a slow process, though Samsung has managed to make notable improvements in this department this year.

A patent filing from Apple from last year explained a new wireless charging mechanism that the company had developed for the iPhone.

Samsung Galaxy S7 and S7 edge revealed in new leaked image


Samsung’s Galaxy S7 and Galaxy S7 edge are some of the most highly-anticipated devices coming down the pipe, and now we may know what they look like.

Based on an image released via Twitter by Evan Blass, the serial leaker known as evleaks, the first glimpse of what the Galaxy S7 and S7 edge are expected to look like has been revealed. And, as expected, both devices look exactly like their respective predecessors, with the familiar design aesthetic carried over from the Galaxy S6 and S6 edge.

As far as features go, though, Samsung is said to be cramming a ton of features into the Galaxy S7, some of which didn’t make it into last year’s release. That includes a microSD card slot, significantly larger batteries, and water resistance. All of those are just rumors, just like the picture, but they’ve been popping up consistently enough at this point that many believe they’ll be in the final product.

How To Protect SSH With Fail2Ban on CentOS 7

While connecting to your server through SSH can be very secure, the SSH daemon itself is a service that must be exposed to the Internet to function properly. This comes with some inherent risk and offers a vector of attack for would-be assailants.


Any service that is exposed to the network is a potential target in this way. If you pay attention to application logs for these services, you will often see repeated, systematic login attempts that represent brute-force attacks by users and bots alike.

A service called Fail2ban can mitigate this problem by creating rules that automatically alter your iptables firewall configuration based on a predefined number of unsuccessful login attempts. This will allow your server to respond to illegitimate access attempts without intervention from you.

In this guide, we'll cover how to install and use Fail2ban on a CentOS 7 server.


Install Fail2ban on CentOS 7

While Fail2ban is not available in the official CentOS package repository, it is packaged for the EPEL project. EPEL, standing for Extra Packages for Enterprise Linux, can be installed with a release package that is available from CentOS:

    sudo yum install epel-release

You will be prompted to continue. Press y, followed by Enter:

yum prompt
Transaction Summary
============================================================================
Install  1 Package

Total download size: 14 k
Installed size: 24 k
Is this ok [y/d/N]: y

Now we should be able to install the fail2ban package:

    sudo yum install fail2ban

Again, press y and Enter when prompted to continue.

Once the installation has finished, use systemctl to enable the fail2ban service:

    sudo systemctl enable fail2ban


Configure Local Settings

The Fail2ban service keeps its configuration files in the /etc/fail2ban directory. There, you can find a file with default values called jail.conf. Since this file may be overwritten by package upgrades, we shouldn't edit it in-place. Instead, we'll write a new file called jail.local. Any values defined in jail.local will override those in jail.conf.

jail.conf contains a [DEFAULT] section, followed by sections for individual services. jail.local may override any of these values. Additionally, files in /etc/fail2ban/jail.d/ can be used to override settings in both of these files. Files are applied in the following order:

    /etc/fail2ban/jail.conf
    /etc/fail2ban/jail.d/*.conf, alphabetically
    /etc/fail2ban/jail.local
    /etc/fail2ban/jail.d/*.local, alphabetically

Any file may contain a [DEFAULT] section, executed first, and may also contain sections for individual jails. The last value set for a given parameter takes precedence.

Let's begin by writing a very simple version of jail.local. Open a new file using nano (or your editor of choice):

    sudo nano /etc/fail2ban/jail.local

Paste the following:
/etc/fail2ban/jail.local

[DEFAULT]
# Ban hosts for one hour:
bantime = 3600

# Override /etc/fail2ban/jail.d/00-firewalld.conf:
banaction = iptables-multiport

[sshd]
enabled = true

This overrides three settings: It sets a new default bantime for all services, makes sure we're using iptables for firewall configuration, and enables the sshd jail.

Exit and save the new file (in nano, press Ctrl-X to exit, y to save, and Enter to confirm the filename). Now we can restart the fail2ban service using systemctl:

    sudo systemctl restart fail2ban

The systemctl command should finish without any output. In order to check that the service is running, we can use fail2ban-client:

    sudo fail2ban-client status

Output
Status
|- Number of jail:      1
`- Jail list:   sshd

You can also get more detailed information about a specific jail:

    sudo fail2ban-client status sshd
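
The jail-specific output will look something like the following; the counters and log path here are illustrative, with /var/log/secure being the default SSH log on CentOS 7.

Output
Status for the jail: sshd
|- Filter
|  |- Currently failed: 0
|  |- Total failed:     0
|  `- File list:        /var/log/secure
`- Actions
   |- Currently banned: 0
   |- Total banned:     0
   `- Banned IP list: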


Explore Available Settings

The version of jail.local we defined above is a good start, but you may want to adjust a number of other settings. Open jail.conf, and we'll examine some of the defaults. If you decide to change any of these values, remember that they should be copied to the appropriate section of jail.local and adjusted there, rather than modified in-place.

    sudo nano /etc/fail2ban/jail.conf


Default Settings for All Jails

First, scroll through the [DEFAULT] section.

ignoreip = 127.0.0.1/8

You can adjust the source addresses that Fail2ban ignores by adding a value to the ignoreip parameter. Currently, it is configured not to ban any traffic coming from the local machine. You can include additional addresses to ignore by appending them to the end of the parameter, separated by a space.

bantime = 600

The bantime parameter sets the length of time that a client will be banned when they have failed to authenticate correctly. This is measured in seconds. By default, this is set to 600 seconds, or 10 minutes.

findtime = 600
maxretry = 3

The next two parameters that you want to pay attention to are findtime and maxretry. These work together to establish the conditions under which a client should be banned.

The maxretry variable sets the number of tries a client has to authenticate within a window of time defined by findtime, before being banned. With the default settings, Fail2ban will ban a client that unsuccessfully attempts to log in 3 times within a 10 minute window.

destemail = root@localhost
sendername = Fail2Ban
mta = sendmail

If you wish to configure email alerts, you may need to override the destemail, sendername, and mta settings. The destemail parameter sets the email address that should receive ban messages. The sendername sets the value of the "From" field in the email. The mta parameter configures what mail service will be used to send mail.

action = %(action_)s

This parameter configures the action that Fail2ban takes when it wants to institute a ban. The value action_ is defined in the file shortly before this parameter. The default action is to simply configure the firewall to reject traffic from the offending host until the ban time elapses.

If you would like to configure email alerts, you can override this value from action_ to action_mw. If you want the email to include the relevant log lines, you can change it to action_mwl. You'll want to make sure you have the appropriate mail settings configured if you choose to use mail alerts.


Settings for Individual Jails

After [DEFAULT], we'll encounter sections configuring individual jails for different services. These will typically include a port to be banned and a logpath to monitor for malicious access attempts. For example, the SSH jail we already enabled in jail.local has the following settings:
/etc/fail2ban/jail.local

[sshd]

port    = ssh
logpath = %(sshd_log)s

In this case, ssh is a pre-defined variable for the standard SSH port, and %(sshd_log)s uses a value defined elsewhere in Fail2ban's standard configuration (this helps keep jail.conf portable between different operating systems).

Another setting you may encounter is the filter that will be used to decide whether a line in a log indicates a failed authentication.

The filter value is actually a reference to a file located in the /etc/fail2ban/filter.d directory, with its .conf extension removed. This file contains the regular expressions that determine whether a line in the log is bad. We won't be covering this file in-depth in this guide, because it is fairly complex and the predefined settings match appropriate lines well.

However, you can see what kind of filters are available by looking into that directory:

    ls /etc/fail2ban/filter.d

If you see a file that looks to be related to a service you are using, you should open it with a text editor. Most of the files are fairly well commented and you should be able to tell what type of condition the script was designed to guard against. Most of these filters have appropriate (disabled) sections in jail.conf that we can enable in jail.local if desired.

For instance, pretend that we are serving a website using Nginx and realize that a password-protected portion of our site is getting slammed with login attempts. We can tell Fail2ban to use the nginx-http-auth.conf file to check for this condition within the /var/log/nginx/error.log file.

This is actually already set up in a section called [nginx-http-auth] in our /etc/fail2ban/jail.conf file. We would just need to add an enabled parameter for the nginx-http-auth jail to jail.local:
/etc/fail2ban/jail.local

[DEFAULT]
# Ban hosts for one hour:
bantime = 3600

# Override /etc/fail2ban/jail.d/00-firewalld.conf:
banaction = iptables-multiport

[sshd]
enabled = true

[nginx-http-auth]
enabled = true

And restart the fail2ban service:

    sudo systemctl restart fail2ban


Monitor Fail2ban Logs and Firewall Configuration

It's important to know that a service like Fail2ban is working as intended. Start by using systemctl to check the status of the service:

    sudo systemctl status fail2ban

If something seems amiss here, you can troubleshoot by checking logs for the fail2ban unit since the last boot:

    sudo journalctl -b -u fail2ban

Next, use fail2ban-client to query the overall status of fail2ban-server, or any individual jail:

    sudo fail2ban-client status
    sudo fail2ban-client status jail_name

Follow Fail2ban's log for a record of recent actions (press Ctrl-C to exit):

    sudo tail -F /var/log/fail2ban.log
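
When a ban or unban occurs, the log lines will look something like the following; the timestamps and address here are invented examples.

2016-01-10 10:10:41,123 fail2ban.actions        [2712]: NOTICE  [sshd] Ban 203.0.113.10
2016-01-10 10:20:41,456 fail2ban.actions        [2712]: NOTICE  [sshd] Unban 203.0.113.10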

List the current rules configured for iptables:

    sudo iptables -L

Show iptables rules in a format that reflects the commands necessary to enable each rule:

    sudo iptables -S
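
Fail2ban maintains its own iptables chain for each jail, so you can also inspect just that chain, and manually lift a ban if you lock out a legitimate address. The chain name and IP address below are assumptions for this setup (recent versions name the chain f2b-<jail>, while older releases use a fail2ban- prefix).

    sudo iptables -L f2b-sshd -n
    sudo fail2ban-client set sshd unbanip 203.0.113.10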


Conclusion

You should now be able to configure some basic banning policies for your services. Fail2ban is very easy to set up, and is a great way to protect any kind of service that uses authentication.