Channel: Tech Support

How to Disable Lock Screen on Windows 10


With Windows 10’s Anniversary Update, Microsoft no longer lets you disable the lock screen using a group policy setting or registry hack. But there are still workarounds for now.



The group policy setting that disables the lock screen is still available, but it only works on Enterprise and Education editions of Windows. Even Windows 10 Professional users can’t use it.

 

How to Disable the Lock Screen (Except at Boot)

Follow the instructions below and you’ll only see the lock screen once: when you boot your computer. The lock screen won’t appear when you actually lock your computer or it wakes from sleep. If you put your computer to sleep or hibernate it, you’ll never see the lock screen at all.

We’ve seen a variety of ways to do this online, involving everything from the Local Security Policy editor to the Task Scheduler. But the easiest way to do this is by simply renaming the “Microsoft.LockApp” system app.

To do this, open File Explorer and head to C:\Windows\SystemApps.


Locate the “Microsoft.LockApp_cw5n1h2txyewy” folder in the list.

Right-click it, select “Rename”, and rename it to something like “Microsoft.LockApp_cw5n1h2txyewy.backup” (without the quotes).


If you ever want to restore your lock screen, just return to the C:\Windows\SystemApps folder, locate the “Microsoft.LockApp_cw5n1h2txyewy.backup” folder, and rename it back to “Microsoft.LockApp_cw5n1h2txyewy”.
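If you prefer the command line, the same rename can be done from an elevated PowerShell prompt (the folder is protected, so you must run as administrator and confirm any ownership prompts). This is just a sketch of the File Explorer steps above:

```powershell
# Disable the lock screen by renaming the LockApp folder (run elevated)
Rename-Item "C:\Windows\SystemApps\Microsoft.LockApp_cw5n1h2txyewy" `
            "Microsoft.LockApp_cw5n1h2txyewy.backup"

# To restore the lock screen later, rename it back
Rename-Item "C:\Windows\SystemApps\Microsoft.LockApp_cw5n1h2txyewy.backup" `
            "Microsoft.LockApp_cw5n1h2txyewy"
```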


With the LockApp folder renamed, Windows 10 won’t be able to load the lock screen anymore. Lock your computer and it will go straight to the login screen where you can type a password. Wake up from sleep and it will go straight to the login screen. Unfortunately, you’ll still see the lock screen when you boot your computer–that first lock screen seems to be a part of the Windows shell.

This works very well. There’s no error message or any other apparent problem. Windows 10 just goes straight to the login screen because it can’t load the lock screen first.

Microsoft will probably break this tweak in the future. When you upgrade to a new major build of Windows 10, the update will likely restore the “LockApp” folder to its original place. If the lock screen reappears after an update, just rename the folder again.

 

How to Skip the Lock Screen at Boot (and Sign in Automatically)





If you’d like to get past the lock screen even when booting your computer, consider having your computer sign in automatically when it boots. Windows will sign into your user account without asking for a password.

There’s a potential security risk to logging into your Windows PC automatically, though. Don’t do this unless you have a desktop PC located somewhere secure. If you carry your laptop around with you, you probably don’t want to have that laptop automatically sign into Windows.

The old netplwiz panel will let you enable automatic login on Windows 10. Press Windows+R on your keyboard, type “netplwiz”, and press Enter. Select the account you want to automatically sign in with, uncheck the “Users must enter a user name and password to use this computer” option, click “OK”, and enter the password for your account. Windows will store it in the registry and automatically sign into your computer for you when it boots.
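For the curious, the equivalent registry values can also be set by hand from an elevated Command Prompt. Note that this manual method stores your password in plain text (netplwiz stores it as an encrypted LSA secret instead), so treat it as an illustration rather than the recommended route; “YourUserName” and “YourPassword” are placeholders:

```
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d "YourUserName" /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d "YourPassword" /f
```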

How to Manage Lync External Access Policies Based on Active Directory Group Membership


This is a fairly simple script that uses a scheduled task that runs every 4 hours, looks at the members of a given AD security group, including nested groups, and applies a Lync policy to each member. The name of the AD security group and the type and name of the policy are all configurable.



The ActiveDirectory and Lync PowerShell modules are used to complete this. The actual moving parts are pretty simple – really just two lines of code. But some extra error catching, installation code, and safeguards make it a tad bigger.
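The script itself isn’t reproduced here, but the two moving parts it describes can be sketched roughly like this (a hedged approximation using the default ExternalAccess policy type; the downloadable script’s exact parameter handling may differ):

```powershell
# Get all members of the group (the -Recursive switch includes nested groups),
# then grant each one the configured Lync policy
Get-ADGroupMember -Identity $GroupDN -Recursive |
    ForEach-Object { Grant-CsExternalAccessPolicy -Identity $_.SamAccountName -PolicyName $PolicyName }
```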

Caveat – users get policies when they launch the Lync client. So even though a policy might be assigned to a user, they won’t see any change until the client is restarted.

Caveat #2 – if you configure this script with several scheduled tasks to handle different policies and different AD groups, make sure users don’t end up in multiple groups, or you could get unintended results. Also, removing a user from a group does NOT revert their policy. I didn’t add that because moving a user from one group to another could cause problems: the script might set them back to a default policy just as another group needed to change it to a different one.

 

Installation

Execution Policy: Third-party PowerShell scripts may require that the PowerShell Execution Policy be set to either AllSigned, RemoteSigned, or Unrestricted. The default is Restricted, which prevents scripts – even code signed scripts – from running. For more information about setting your Execution Policy, see Using the Set-ExecutionPolicy Cmdlet.
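For example, to allow locally created scripts to run while still requiring that downloaded scripts be signed, you would run the following from an elevated PowerShell prompt:

```powershell
Set-ExecutionPolicy RemoteSigned
```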

Download the script from the DOWNLOAD section below. Open it in your favorite text editor.

Find the line that reads

[string]$GroupDN = "",

and put the Distinguished Name of the group in between the quotes. For example
[string]$GroupDN = "CN=Lync Policy Group,DC=contoso,DC=com",

Next, define the policy that will be granted to members of the group. Find the line that reads
[string]$PolicyName = "",

and put the name of the Lync policy in between those quotes, such as
[string]$PolicyName = "Executives External Access Policy",

The last thing we need to do in the script file is define what KIND of policy we’re going to grant.

Find the line that reads
[string]$PolicyType = "ExternalAccess",

And adjust accordingly. The allowed values are Archiving, Client, ClientVersion, Conferencing, ExternalAccess, HostedVoicemail, Location, Mobility, Pin, Presence, and Voice, representing the various types of policies you can apply to a user. The default is ExternalAccess.
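Since each policy type has its own Grant-Cs*Policy cmdlet (Grant-CsExternalAccessPolicy, Grant-CsMobilityPolicy, and so on), the script presumably builds the cmdlet name from $PolicyType. A hedged sketch of that pattern, where $user stands in for one group member:

```powershell
# Build the cmdlet name from the policy type, e.g. "Grant-CsExternalAccessPolicy"
$cmdlet = "Grant-Cs{0}Policy" -f $PolicyType
& $cmdlet -Identity $user -PolicyName $PolicyName
```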

Next, ensure that the server where the script will run has both the ActiveDirectory and Lync PowerShell modules installed. Domain controllers typically have the ActiveDirectory module, and Lync servers have the Lync module. Install the appropriate ones using these steps.

To install the ActiveDirectory module, open PowerShell and type the following:
Import-Module ServerManager
Add-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

To install the Lync Server Management Tools, which includes the PowerShell module, install the core components. See Install Lync Server Administrative Tools for details.

This will ensure that both modules are available. The ActiveDirectory module is used to get the members of the AD security group, and the Lync module is used to actually grant the policy.

The script must run as a member of the CsUserAdministrator or CsAdministrator groups, as those have the rights to assign policies.

Next, open PowerShell and run the script with the -install switch. The script will prompt for the password of the currently logged on user, and then create the scheduled task to run the script every 4 hours.
Grant-CsPolicyByADGroup.ps1 -install
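If you’re curious what the -install switch sets up, creating a comparable task by hand with the built-in schtasks utility would look something like this (the task name and script path are made up for illustration):

```
schtasks /Create /TN "Grant-CsPolicyByADGroup" /SC HOURLY /MO 4 ^
    /TR "powershell.exe -File C:\Scripts\Grant-CsPolicyByADGroup.ps1" ^
    /RU %USERDOMAIN%\%USERNAME% /RP *
```

The /RP * option makes schtasks prompt for the password, much as the script does.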

The scheduled task will run every 4 hours, with a start time of when you ran the -install option. You can open the scheduled task in Task Scheduler and adjust it as needed.

You can run the script manually as well. Just run
Grant-CsPolicyByADGroup.ps1

Note that it may take a while before the policy is visible on the user account due to AD replication.

 

Download

v1.6 – 09-23-2014 – Grant-CsPolicyByADGroup.v1.6.zip
v1.5 – 02-08-2014 – Grant-CsPolicyByADGroup.v1.5.zip
v1.4 – 01-27-2014 – Grant-CsPolicyByADGroup.v1.4.zip
v1.2 – 10-16-2012 – Grant-CsPolicyByADGroup.v1.2.zip
v1.1 – 09-19-2012 – Grant-CsPolicyByADGroup.v1.1.zip
v1.0 – 09-10-2012 – Grant-CsPolicyByADGroup.v1.0.zip

How to Temporarily Lock Your Windows PC if Someone Tries to Guess Your Password


If you’re worried about someone trying to guess your Windows password, you can have Windows temporarily block sign in attempts after a specific number of failed attempts.




Assuming you haven’t set Windows up to sign you in automatically, Windows allows an unlimited number of password attempts for local user accounts at the sign in screen. While it’s handy if you can’t remember your password, it also offers other people who have physical access to your PC an unlimited number of tries to get in. 

While there are still ways people can bypass or reset a password, setting up your PC to temporarily suspend sign in attempts after several failed attempts can at least help prevent casual break-in attempts if you’re using a local user account. Here’s how to get it set up.

A couple of quick notes before you get started. Using this setting can let somebody prank you by incorrectly entering the password several times and thus locking you out of your PC for a time. It would be wise to have another administrator account that can unlock the regular account.

Also, these settings only apply to local user accounts, and will not work if you sign on to Windows 8 or 10 using a Microsoft account. If you want to use the lockout settings, you’d need to revert your Microsoft account to a local one first. If you prefer to keep using your Microsoft account, you can head to your security settings page and log in. 

There, you’ll be able to change things like adding two-step verification, setting up trusted devices, and more. Unfortunately, there is no lockout setting for Microsoft accounts that works like the one we’re covering here for local accounts. However, these settings will work just fine for local user accounts in Windows 7, 8, and 10.

Home Users: Set a Sign In Limit with the Command Prompt

If you’re using a Home edition of Windows, you’ll need to use the Command Prompt to set a limit on sign in attempts. You can also set the limit this way if you’re using a Pro or Enterprise edition of Windows, but if you are using one of those editions you can do it much more easily using the Local Group Policy Editor (which we cover a bit later in this article).

Please note that you’ll need to complete all of the following instructions or you could end up locking yourself out completely.

To start, you’ll need to open the Command Prompt with administrative privileges. Right-click the Start menu (or hit Windows+X on your keyboard) to open the Power Users menu, then click “Command Prompt (Admin).”



At the prompt, type the following command and then hit Enter:
 
net accounts

This command lists your current password policy, which by default should be “Lockout threshold: Never,” which means that your account will not lock you out no matter how many times a password is entered incorrectly.


You’ll start by setting the lockout threshold to the number of failed sign in attempts you want to allow before sign in is temporarily locked. You can set the number to anything you like, but we recommend setting it to at least three. This way, you have room to accidentally type the wrong password a time or two before locking yourself out. Just type the following command, substituting the number at the end with the number of failed password attempts you want to allow.
 
net accounts /lockoutthreshold:3


Now, you’re going to set a lockout duration. This number specifies how long, in minutes, an account will be locked out if the threshold for failed password attempts is reached. We recommend 30 minutes, but you can set whatever you like here.

net accounts /lockoutduration:30


And finally, you’re going to set a lockout window. This number specifies how long, in minutes, before the counter for failed password attempts is reset, assuming the lockout threshold is not reached. So, for example, say the lockout window is 30 minutes and the lockout threshold is three attempts. You could enter two bad passwords, wait 30 minutes after the last bad attempt, and then have three more tries. Set the lockout window using the following command, replacing the number at the end with the number of minutes you want to use. Again, we feel like 30 minutes is a good amount of time.

net accounts /lockoutwindow:30
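As a shortcut, the three switches should also work combined into a single command, equivalent to running them one at a time:

```
net accounts /lockoutthreshold:3 /lockoutduration:30 /lockoutwindow:30
```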


When you’re done, you can use the net accounts command again to review your settings. They should look something like the settings below, depending on what you chose.

 

Now you’re all set.  Your account will automatically prevent people from logging in if the password is entered incorrectly too many times.  If you ever want to change or remove the settings, just repeat the steps with the new options you want.

And here’s how it works in practice. The sign in screen gives no indication that a lockout threshold is in place or how many attempts you have. Everything will appear as it always does until you enter enough failed password attempts to meet the threshold. At that point, you’ll be given the following message. And again, there is no indication about how long the account is locked out.



If you want to turn the setting off, all you have to do is go back into an administrative command prompt and set the account threshold to 0 using the following command.

net accounts /lockoutthreshold:0
 
You don’t need to worry about the other two settings. When you set the lockout threshold to 0, the lockout duration and lockout window settings become inapplicable.

 

Pro and Enterprise Users: Set a Sign In Limit with Local Group Policy Editor

If you’re using a Pro or Enterprise edition, the easiest way to set a sign in limit is with the Local Group Policy Editor. An important note, though: if your PC is part of a company network, it’s very likely that group policy settings governing the sign in limit are already set at the domain level and will supersede anything you set in local group policy. And if you are part of a company network, you should always check with your admin before making changes like this, anyway.

Group policy is a powerful tool. If you haven’t used it before, we suggest learning a little more about what it can do before you get started. Also, if you want to apply a policy to only specific users on a PC, you’ll need to perform a few extra steps to get things set up.

To open Local Group Policy Editor, hit Start, type “gpedit.msc,” and then click the result. Alternatively, if you want to apply the policy to specific users or groups, open the MSC file you created for those users.



In Local Group Policy Editor, on the left-hand side, drill down to Computer Configuration > Windows Settings > Security Settings > Account Policies > Account Lockout Policy. On the right-hand side, double-click the “Account lockout threshold” setting.

 

In the setting’s properties window, note that by default, it’s set to “0 invalid logon attempts,” which effectively means the setting is turned off. To change this, just select a number greater than zero. We recommend setting it to at least three to help ensure you don’t get locked out of your own system when you accidentally type the wrong password yourself. Click “OK” when you’re done.



Windows now automatically configures the two related settings to thirty minutes. “Account lockout duration” controls how long the PC is locked against further sign in attempts when the account lockout threshold you set is met. “Reset account lockout counter after” controls how much time must pass after the last failed password attempt before the threshold counter is reset. For example, say you enter an invalid password and then enter another invalid password right away, but you do not try a third time. Thirty minutes after that second attempt (at least, going by the settings we’ve used here), the counter would reset and you could have another three tries.

You can’t change these values here, so just go ahead and click the “OK” button.

 




Back in the main Local Group Policy Editor window, you’ll see that all three settings in the “Account Lockout Policy” folder have changed to reflect the new configuration. You can change any of the settings by double-clicking them to open their properties windows, but honestly thirty minutes is a pretty solid setting for both lockout duration and resetting the lockout counter.


 

Once you’ve settled on the settings you want to use, close Local Group Policy Editor. The settings take effect immediately, but since they affect sign in, you’ll have to sign out and back in to see the policy in action. And if you want to turn the whole thing off again, just go back in and change the “Account lockout threshold” setting back to 0.

Measuring And Validating Network Bandwidth to Support Office 365 and Unified Communications



Using some of the available tools to understand the network bandwidth requirements when deploying on-premises solutions or transitioning to Office 365.








When planning connectivity between clients and Exchange Server/Office 365, the administrator has several tools available to test and validate the process. Microsoft also provides a number of bandwidth calculators that can help in designing the proper solution.

In this article, we are going to explore some of the tools available to help with the network bandwidth requirements for either on-premises implementations or Office 365 transitions.

 

Exchange, Skype for Business and SharePoint network bandwidth calculators

When you are planning your Exchange/Skype for Business on-premise deployment or migrations to Office 365, the network bandwidth is one of the most important components of the design. Here are some of the calculators available:
  • Exchange Client Connectivity Bandwidth Calculator, you can download it from here
  • Lync 2010/2013/Skype for Business Bandwidth Calculator, download it from here
  • SharePoint Online guidelines to measure bandwidth requirements, the document can be found here
Using Exchange Client Connectivity Bandwidth Calculator, the administrator is able to define profile type (Light, Medium, Heavy and Very Heavy), the environment type and Time Zone information.


Exchange Client Connectivity Bandwidth Calculator, defining the user Profiles and Exchange environment.

The result will be a complete table with the network bandwidth (from exchange to client and vice-versa) requirements, recommended maximum network latency and number of TCP Connections expected for the workload entered. On top of that a graph predicting the bandwidth utilization throughout the day will be displayed.

Network bandwidth utilization showing the expected network utilization based on the number of clients

Office 365 Network Analysis Tool

This is one of my favourites, as it performs several tests within a single tool. Its drawback is that it requires Java, and the first step is to download it from here. Although this tool may be replaced in the near future, it is still a great tool for administrators and summarizes all requirements in a few tabs.

After installing Java on your machine, click on one of the sites based on your location. Keep in mind that your Office 365 users will be located in the same region where you defined your initial subscription.
Note: The following site provides datacenter locations based on your region.

On the first page, type in the Office 365 tenant name; you can use the first domain created during the initial subscription, which ends in .onmicrosoft.com. Then, click OK and wait a few moments for the tool to test all network-related items between the computer that is running the tool and the datacenter.


Starting the Fast Track Network Analysis

The Fast Track Network Analysis Tool provides a lot of essential information to help the administrator identify possible issues and actions to be taken before moving to Office 365. Here is a brief summary of some of the main tabs provided by the tool:
  • Route: includes all network routing information, provides a list of all settings checked, and shows a map of the location.
  • Speed: compares the current download and upload speed with several technologies (from a 14.4 modem to a T4) and provides round trip time, consistency of the connection, and so forth.
  • VoIP: creates a graph of the measured jitter and packet loss; the same graph shows quality levels, so the administrator can get an idea of the expected quality (radio, standard, broken sound, or even whether VoIP is not supported at all).
  • Capacity: summarizes the download and upload capacity in a graph with the results and packet loss.
The first tab is the Firewall, where at a glance we can check which protocols are allowed through the firewall, and their response times.

 
Firewall summary containing all ports required by Office 365, their status (open or blocked) and response time

At the end of the test, a Connection Summary will be available. It lists all items checked, each assigned a color based on its status: green is okay, yellow needs attention, and red requires immediate attention because something is not working properly.

 
All results classified in informational, warning and critical

 

Office 365 Client Performance Analyzer (OCPA)

Another option available is the Office 365 Client Performance Analyzer, which will be replaced in the near future; however, it is still a good tool for measuring performance. To download the tool, click here. The tool checks the network performance between the client it runs on and the Office 365 datacenter, checks ports, measures the available bandwidth, and so forth.

The installation process is straightforward and just requires accepting the license agreement. On the initial page, click either Run Exchange Analyzer or Run SharePoint Analyzer. In this tutorial we are going to run the Exchange Analyzer; a pop-up will ask for an e-mail address, so type in your Office 365 e-mail address, click OK, and wait a couple of minutes.


 

Initial screen of OCPA (Office 365 Client Performance Analyzer)

After a while, a new page with the results of the test will be displayed. The administrator can analyze the results and compare the actual values with the expected values and start doing some troubleshooting and/or design decisions.


 

Results of Microsoft Office 365 Client Performance Analyzer

 

Microsoft Office 365 Support and Recovery Assistant (SaRA)

One of the newest tools available is SaRA, and it can be found here. However, this tool is more geared toward specific issues on the client side; as part of identifying the issue, a series of tests is performed, covering network connectivity, authentication, protocols, services, and so on.


 







Test results from SaRA tool.

In order to properly design any on-premises implementation or transition to Office 365, the administrator must account for the network bandwidth requirements. In this article, we went over some of the tools that can help.

How to Port Your Old Phone Number to Google Voice


If you want to keep your old phone number after getting a new one, or you just want a second phone number to play around with, you can port that number to the awesome Google Voice service. Here’s how to do it.







Why Would I Want to Do This?

If you recently switched carriers and got a new phone number, but you want to keep your old phone number lying around just in case, you can port it to Google Voice so that you don’t have to pay for a second plan. Calls to your old number will get forwarded to your new one, and you’ll never miss an important call because someone forgot to update their address book.

Sure, you can get a new phone number from Google Voice and use it for texting and call forwarding too. However, if you have an existing number that you want to use with Google Voice, you can port it to the service and use that instead.

 

What’s the Catch?




First, porting a phone number to Google Voice requires a one-time fee of $20.

Secondly, when you port a number to Google Voice, you can’t use the Google Voice app to send text messages–it requires a data connection over Wi-Fi or LTE/3G. You can, however, have Google Voice forward texts to your new number. When you reply to them using your regular messaging app, they will appear to come from your Google Voice number, which is pretty cool.

The same goes for making and receiving calls–as long as you have call forwarding turned on, you can make and receive calls from your Google Voice number, even without a data connection.

Lastly, to port a number to Google Voice, you need two phone numbers:
  • Your old phone number, which you are porting to Google Voice. This number must still be active when you start the porting process–do not cancel your account yet!
  • Your new phone number, to which you’ll forward your Google Voice calls and texts. This can be a number on a new carrier, or on the same carrier you currently use.
In my case, I was switching to a new carrier (Cricket), so I just started a new account with them, and ported my Verizon number over. When I did so, Google cancelled my Verizon account for me.

If you’re getting a new number on the same carrier, you’ll just have to add a number to your account, after which Google Voice will cancel the old number for you.

Make sure you aren’t in the middle of a contract, since porting your number could incur an early termination fee (ETF) from your carrier! If you aren’t sure, call customer service and make sure they make a note on your account not to charge you an ETF when you cancel.

 

How to Port Your Phone Number

The first step is to head to www.google.com/voice. If you’ve never used Google Voice before, you’ll go through the process of accepting the terms of service before you can start using it, and you can then skip the next few steps.

If you’re an existing user, click on the settings gear icon in the top-right corner and select “Settings”.


Select the “Phones” tab if it isn’t already selected.

 
Next to your current Google Voice number, click “Change/Port”. Keep in mind that porting a number into Google Voice will replace your current Google Voice number after 90 days, but you can pay an extra $20 to keep that number (so you’ll end up with two Voice numbers).


 
Next, click on “I want to use my mobile number”. If you’re a new Google Voice user, this will be the first screen you see after accepting the terms of service.


 
Type in the phone number that you want to port over, and then click “Check for available options”.


 
Click on “Port your number”.


 
Click on the checkboxes and read all the things you’ll need to understand before the porting process. Then click “Next: Phone Verification”.


 
The next step is confirming that you actually own and operate the phone number that you’re porting over, so Google Voice will call you at that number and then you’ll enter in the two-digit number shown on the screen on your phone’s keypad. Click on “Call me now” to begin that process.


 
Once that process is done, enter your carrier account information, like the account number, PIN, the last four digits of your Social Security number, and so on. In my case, this was my Verizon account info. Then click on “Next: Confirmation”.


 
Make sure all of the details are correct and then click on “Next: Google Payments”.


 
If you have a credit card on file with Google, you can go ahead and click “Buy” when the pop-up appears. If not, you’ll need to enter in your credit card details before continuing.

 
After the purchase, you’ll receive a “Purchase Confirmation” pop-up. Click “Done” to complete the process.


 
On the next page, you’ll be reminded about a few things, like how your existing Google Voice number will be replaced (unless you want to keep it for $20 more), as well as how you’ll need to connect a new phone number to your Google Voice account as a forwarding phone.

 
At this point, all that’s left to do is wait for the porting process to complete, which can take up to 24 hours, with text messaging capabilities taking up to three business days to fully complete.

In the meantime, a yellow status bar will appear at the top in Google Voice, saying that your phone number is in the process of being ported.


 

How to Forward Calls to Your Main Number

Once you have your old phone number ported over to Google Voice, you can use it to text anyone as long as you have a Wi-Fi or data connection, or reply to texts you’ve received if SMS forwarding is turned on. The only way to make and receive calls through your old number is to use your main phone number as a forwarding number. In other words, whenever someone calls your old phone number, that call will be forwarded to your main number.

To set up a forwarding number, go back to Google Voice Settings and select the “Phones” tab like you did earlier on. Only this time click on “Add another phone”.


 
Enter in a name for your forwarding number and type in the phone number below that. You can also select whether or not you want text messages to forward as well. If you want to configure even more settings, click on “Show advanced settings”.


 
Within these settings, you can get direct access to your old number’s voicemail and even set when you want calls forwarded to you at certain times, sort of like Do Not Disturb (although Google Voice has an actual, separate Do Not Disturb feature). After you’ve customized settings, click “Save”.

 
After that, Google Voice will call your forwarding number to verify that you own and operate it, and you’ll be prompted to enter in the two-digit number shown on the screen on your phone’s keypad. Click on “Connect” to begin that process.


 






Once your number has been verified, it will show up under the “Phones” tab in Google Voice, right below your ported number.


 
You’ll see a new setting here: the ability to receive text notifications on your forwarding number whenever someone leaves a voicemail on your old, ported number. Check the box next to this if you want to enable it.

At that point, though, your forwarding number is all set up and you’re good to go. If you ever want to make a call using your old phone number, you can do so from within the Google Voice app on your smartphone (if you have a data connection), or by calling your own Google Voice number to make a call.

How to Disable All The Activity Notifications on Your Apple Watch


By default, your Apple Watch will remind you to stand, notify you of your goal completions and achievements, and give you a weekly summary of your activity. Tired of seeing all these notifications? No worries. They’re easy to disable.







Now, the Activity app on the Apple Watch can be useful. I sometimes like knowing how many steps I’ve taken in a day, but I want to check on that myself and not get a notification on my progress, or lack thereof. I also prefer not to get reminded to stand in the middle of my work. So, I disabled the activity notifications on my watch and I thought I had disabled all of them, but I was still getting Progress Updates. 

I finally figured out which setting I had missed. If you don’t want to use your Apple Watch as a fitness tracker, or you just don’t want to be bothered by all the notifications, I’ll show you how to completely disable all the activity notifications on your Apple Watch.
To completely silence activity notifications, tap the Watch app icon on the Home screen on your iPhone.



Make sure the “My Watch” screen is active. The My Watch icon at the bottom of the screen should be orange.

Scroll down on the My Watch screen and tap “Activity”.


If you only want to turn off the reminders for the day, tap the “Mute Reminders for Today” slider button. However, to disable all activity notifications, let’s make our way down the screen. Tap the slider button for “Stand Reminders” to prevent reminders from displaying on your watch telling you to stand (slider buttons should be black and white when they’re off on this screen).

The setting I missed initially is “Progress Updates”. Notice it says “Every 4 hours”. Tap the setting to change it.

On the Progress Updates screen, tap “None” to turn off the updates. Then, tap “Activity” in the upper-left corner of the screen to return to the Activity screen.








To avoid receiving notifications when you reach your daily Move, Exercise, and Stand goals, tap the “Goal Completions” slider button to turn it off. Tap the Achievements slider button to avoid getting notifications when you reach a milestone or personal best. Finally, tap the “Weekly Summary” slider button to turn off that notification.

Now, you won’t be bothered by any activity notifications.

The Alternatives to uTorrent on Windows


Remember when uTorrent was great? The upstart BitTorrent client was super lightweight and trounced other popular BitTorrent clients. But that was long ago, before BitTorrent, Inc. bought uTorrent and crammed it full of crapware and scammy advertisements.


Screw that. Whether you need to download a Linux ISO or well, do whatever else you do with BitTorrent, you don’t have to put up with what uTorrent’s become. Use a better BitTorrent client instead.

 

Make Sure You Use a VPN for Privacy!

If you’re torrenting and you aren’t using a VPN, you are just asking for trouble. ISPs are blocking subscribers and sending them notices to stop, and the problem is getting worse.

The solution for using torrents while retaining your privacy is really very simple: just use a VPN like StrongVPN to keep your torrenting private so nobody can see what you’re downloading.

StrongVPN is a great choice — they’ve got unlimited bandwidth, clients for any device, blazing-fast connections, great security, and a low monthly price. Plus as an added benefit, you can use them to watch streaming media like Netflix that might be blocked in your country.

Download StrongVPN to Keep Your Torrenting Private Today

 

qBittorrent: an Open-Source, Junk-Free uTorrent


 
We recommend qBittorrent. It aims to be a “free software alternative to uTorrent”, so it’s the closest thing to a junkware-free version of uTorrent you’ll find.

qBittorrent strives to offer the features most users will want while using as little CPU and memory as possible. The developers are taking a middle path–not cramming every possible feature in, but also avoiding the minimal design of applications like Transmission.

The application includes an integrated torrent search engine, BitTorrent extensions like DHT and peer exchange, a web interface for remote control, priority and scheduling features, RSS downloading support, IP filtering, and many more features.

It’s available for Windows as well as Linux, macOS, FreeBSD–even Haiku and OS/2!

 

Deluge: a Plug-In Based Client You Can Customize


 

Deluge is another open-source, cross-platform BitTorrent client. Overall, Deluge and qBittorrent are fairly similar and have many of the same features. But, while qBittorrent generally follows uTorrent, Deluge has a few of its own ideas.

Instead of being a feature-filled client, like qBittorrent, Deluge relies on a plug-in system to get you the advanced features you want. It starts off as a more minimal client, and you have to add the features you want through the plug-ins–like RSS support, for example.

Deluge is built with a client-server architecture–the Deluge client can run as a daemon or service in the background, while the Deluge user interface can connect to that background service. This means you could run Deluge on a remote system–perhaps a headless server–and control it via Deluge on your desktop. But Deluge will function like a normal desktop application by default.

 

Transmission: a Minimal Client Overcome by Security Issues


 
Transmission isn’t as popular on Windows; it’s mostly known as a client for macOS and Linux. In fact, it’s installed by default on Ubuntu, Fedora, and other Linux distributions. The official version doesn’t support Windows, but the Transmission-Qt Win project is an “unofficial Windows build of Transmission-Qt” with various tweaks, additions, and modifications to work better on Windows.

Warning: Since the original writing of this article, Transmission has had some serious security problems. In March 2016, Transmission’s servers were compromised and the official Mac version of Transmission contained ransomware. The project cleaned things up. In August 2016, Transmission’s servers were again compromised and the official Mac version of Transmission contained a different type of malware. That’s two major compromises in five months, which is practically unheard of. It suggests there’s something seriously wrong with the Transmission project’s security. We recommend staying away from Transmission entirely until the project cleans up its act.

Transmission uses its own libTransmission backend. Like Deluge, Transmission can run as a daemon on another system. You could then use the Transmission interface on your desktop to manage the Transmission service running on another computer.

Transmission has a different interface that won’t be immediately familiar to uTorrent users. Instead, it’s designed to be as simple and minimal as possible. It dispenses with a lot of the knobs and toggles in the typical BitTorrent client interface for something more basic. It’s still more powerful than it first appears–you can double-click a torrent to view more information, choose the files you want to download, and adjust other options.

 

uTorrent 2.2.1: a Junk-Free Version of uTorrent That’s Old and Out of Date


 
Some people prefer sticking with an older, pre-junk version of uTorrent. uTorrent 2.2.1 seems to be the old version of choice. But we’re not crazy about this idea.

Sure, you get to keep using uTorrent and you won’t have to worry about updates trying to install garbage software onto your system, activating obnoxious ads, and pushing Bitcoin miners on your PC. But uTorrent 2.2.1 was released in 2011. This software is over five years old and may contain security exploits that will never be fixed. It will also never be updated to support new BitTorrent features that could speed up your downloads. So why waste your time when you could use the similar and much more up-to-date qBittorrent?

It may have made sense to stick with uTorrent 2.2.1 years ago, but modern alternatives have improved dramatically.

Sure, there are many more BitTorrent clients for Windows, but these are our favorite ones that won’t try to install junkware on your system. With the exception of the old versions of uTorrent, they’re all open-source applications. Thanks to community-driven development, they’ve resisted the temptation to overload their BitTorrent clients with junkware to make a quick buck.

How to Disable the Password Reveal Button on the Sign-in Screen in Windows 10

 

Make your sign-in experience more secure by disabling the Password Reveal button on the Windows 10 Sign-in screen. In this guide, we'll walk you through the steps to do exactly that.







On Windows 10, Microsoft has made a lot of significant changes to the operating system, including tweaks to the sign-in experience. For example, the Sign-in screen now shares a deeper integration with the Lock screen, and your email address is no longer visible for anyone to see, which had been a security concern among users.

However, one thing that hasn't changed on the sign-in screen is the button that reveals, in plain text, the password you've typed into the box before you sign in to your account. The option is called "Password Reveal," and while it's there on purpose, to reduce the chances of entering a password incorrectly, you can easily disable it if you have security concerns.

 

How to disable the Password Reveal button using Group Policy

If you're running Windows 10 Pro, Enterprise, or Education, you can use the Local Group Policy to disable the Password Reveal button.
  • Use the Windows key + R keyboard shortcut to open the Run command.
  • Type gpedit.msc and click OK to open the Local Group Policy Editor.
  • Browse the following path: Computer Configuration > Administrative Templates > Windows Components > Credential User Interface

  • On the right, double-click the Do not display the password reveal button policy.
  • Select Enabled to disable the feature.
  • Click OK to complete the task.

 

After completing the steps, lock your computer (Windows key + L) or reboot it, and you should no longer see the password reveal button when signing in to your account.
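If you'd rather not lock or reboot right away, you can also force an immediate Group Policy refresh from an elevated prompt. This is a standard Windows command, though the sign-in UI may still only pick up the change the next time you lock or sign in:

```powershell
# Force Group Policy to reapply now instead of waiting for
# the next background refresh cycle.
gpupdate /force
```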

 

How to disable the Password Reveal button using the Registry

If you're running Windows 10 Home, you won't have access to the Local Group Policy Editor, but you can still get the same results by modifying the Registry.

Important: This is a friendly reminder to let you know that editing the registry is risky, and it can cause irreversible damage to your installation if you don't do it correctly. It's recommended to make a full backup of your computer before proceeding.
  • Use the Windows key + R keyboard shortcut to open the Run command, type regedit, and click OK to open the registry.
  • Browse the following path:
    HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\CredUI
  • If the CredUI (folder) key isn't present, you have to create it. Right-click the Windows (folder) key, select New, and click Key.

 
  • Name the key CredUI and press Enter.
  • Right-click on the right side, select New, and click DWORD (32-bit) Value.
  • Name the value DisablePasswordReveal and press Enter.
  • Double-click the newly created value and change its value data from 0 to 1.

 






  • Restart your computer to complete the task.
Now at login, you should no longer see the password reveal option.
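The registry steps above can also be scripted. Here's a minimal PowerShell sketch, run from an elevated prompt, using the same policy path and value described in the steps:

```powershell
# Create the CredUI policy key if it doesn't exist, then set
# DisablePasswordReveal to 1 to hide the password reveal button.
$path = "HKLM:\Software\Policies\Microsoft\Windows\CredUI"
if (-not (Test-Path $path)) {
    New-Item -Path $path -Force | Out-Null
}
New-ItemProperty -Path $path -Name "DisablePasswordReveal" `
    -PropertyType DWord -Value 1 -Force | Out-Null
```

Restart afterwards, just as with the manual edit; setting the value back to 0 (or deleting it) restores the button.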

 
Sign-in screen with password reveal (left), without password reveal (right)

It's worth pointing out that configuring this policy will remove the password reveal button on the sign-in screen as well as throughout the operating system and apps.

High Availability and Load Balancing Exchange Server 2016


In this article we’ll look at how Load Balancing works in Exchange 2016 and provide a sample implementation using a real-world load balancer.







Load Balancing in Exchange 2016 is simpler than in previous versions. With the consolidation in the number of Exchange Server roles in Microsoft’s newest version of Exchange there are fewer decisions to make, and the way that traffic affinity is handled within Exchange makes it very simple to reliably load balance traffic.

If you are migrating from a previous version, then things have changed considerably. In Exchange 2010, the version you may be leaving behind, load balancing was often painful. Not only did web-based traffic require load balancing, but MAPI traffic required load balancing too, when using a Client Access Array. To make matters more complex, affinity between servers was required for most protocols.

Setting up, configuring and maintaining load balancing for Exchange 2010 required a reasonable amount of skill to configure properly, especially if configured for services such as SSL offload.

In Exchange 2013, load balancing was simplified considerably, and the number of roles was reduced to just the Client Access and Mailbox roles.

Exchange 2016 simplifies things further and has a single role – the Mailbox role. The single-role server provides all the functionality that Exchange 2013 multi-role servers provided, effectively allowing any inbound client connection to reach any Exchange mailbox server and be routed to the server currently hosting the active mailbox being accessed.

This means that in Exchange 2016 the load balancing decisions are very simple, and it’s reasonably hard to make a bad one. You can complicate it if you wish, but should you want to keep load balancing very simple, you can easily do so.

Changes to how clients access Exchange 2016

One of the biggest changes in Exchange 2016 is to the server roles. In Exchange 2010, the familiar Client Access, Hub Transport, Mailbox and Unified Messaging roles could be deployed together or apart. In Exchange 2013 the roles were consolidated to just the Client Access role and Mailbox role.

Exchange 2016 consolidates these roles further to just the Mailbox role. Inside a Mailbox server, the Client Access proxy components introduced in Exchange 2013 are still present, along with the underlying full Client Access, Transport, Mailbox and Unified Messaging components. However, a Mailbox server always contains these multiple roles and cannot be split. Effectively, Microsoft are mandating multi-role servers for new deployments.

In Exchange 2013 the Client Access role did something very special. It ensured that when a user attempted to access their mailbox, the request was proxied back to the Mailbox server actively serving the user’s mailbox. This ensured that services like Outlook on the Web were always rendered for the user on the mailbox server itself, removing the need for affinity.

The Exchange 2016 mailbox role now includes the same functionality, meaning that if two servers host different mailboxes they will proxy traffic for each other when required. The mailbox server hosting the active copy of the mailbox will serve the user accessing it, even if the user connects to another mailbox server.

Finally, all traffic from native Exchange clients like Outlook connects over HTTP/HTTPS. No direct client connectivity via MAPI is allowed.

 

Improvements to Load Balancing

These changes make load balancing for Exchange 2016 suddenly quite simple. HTTPS-only access from clients means that we've only got one protocol to consider, and HTTP is a great choice because its failure states are well known and clients typically respond in a uniform way.

The second improvement is to the way that affinity now works. Because Outlook on the web is rendered on the same server that's hosting the user's mailbox database, it doesn't matter which mailbox server the load balancer directs traffic to. There is no notable performance impact if the load balancer directs traffic to another server, because the OWA session is already running on the mailbox server itself.

The challenge for session affinity when using forms-based authentication has also been solved. Previously it was essential to use affinity to avoid re-prompting for login when the load balancer redirected traffic to another server. This was solved in Exchange 2013 by improving the way HTTP cookies are handled by Exchange.

The authentication cookie is provided to the user after logon, encrypted using the Client Access Server's SSL certificate. This enables a logged-in user to resume that session on a different Client Access server without re-authentication, assuming the servers share the same SSL certificate and are therefore able to decrypt the authentication cookie the client presents.


Figure 1: Example request for a client accessing Exchange 2016

The new simplified infrastructure provides us the opportunity to make load balancing as simple as it possibly can be – if desired, we can use DNS round robin, a technique that simply provides the client with the IP address of every mailbox server, and lets the HTTP client do the rest. If one Exchange server fails completely, the protocol is smart enough to attempt a connection to another server.
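You can see round robin in action from PowerShell: a name published this way returns one A record per mailbox server, and the client picks an address itself. A sketch, using a hypothetical namespace:

```powershell
# List every A record behind the shared client access name.
# With DNS round robin there is one record per mailbox server.
Resolve-DnsName -Name "mail.contoso.com" -Type A |
    Select-Object Name, IPAddress
```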

There are a number of downsides to DNS round robin. It is not actually load balancing the traffic, and there are no guarantees that each server will receive its fair share. It also has no service-level monitoring, meaning that if a single service has failed, for example Outlook on the web, clients will simply see the error page rather than be automatically directed to another server. Finally, it requires more external IP addresses when publishing externally, as each individual Exchange server will require its own external IP address.

These reasons typically mean that a load balancer is very desirable when publishing Exchange 2016 to clients.

We will want a load balancer to monitor each client-facing Exchange service and, if a failure does occur, take the failed service out of rotation and direct traffic to another server. We will also want the load balancer to perform some level of load distribution to ensure that one particular mailbox server is not proxying the majority of client access.

When load balancing services using a load balancer we can utilize either Layer 4 or Layer 7 load balancing. Layer 4 load balancers do not look at the contents of the traffic and simply send it on to the configured destination. Layer 7 load balancers can inspect the content of the traffic and direct it accordingly.

A Layer 4 load balancer requires fewer resources to perform well, but has a trade-off. When using a single IP address, it can only monitor a single service (such as Outlook on the web, ActiveSync, MAPI-HTTP, etc.), meaning that although the configuration is very simple, there is a risk that a single service may fail and, although the server itself is still available, clients may still be directed to the failed service.

This typically means that a resilient Layer 4 implementation requires multiple IP addresses configured along with separate HTTP names configured per service (such as owa.contoso.com, eas.contoso.com, mapi.contoso.com). This allows service level monitoring information to be used.

A layer 7 load balancer trades off the raw performance benefits of layer 4 load balancing for the simplicity of having a single HTTP name (such as mail.contoso.com) with the benefits of per-service monitoring.

Layer 7 load balancers understand the HTTP path being accessed (such as /owa, /Microsoft-Server-ActiveSync, /mapi) and can then direct traffic only to working servers based on monitoring data. This means only a single IP address is required.

 

Summary

In the steps above we’ve looked at how clients access Exchange 2016 and the various options available for configuring load balancing. In the steps below, we’ll configure simple load balancing using a Layer 4 load balancer.

Implementing Simple Load Balancing

We'll first look at the simplest load balancing configuration for Exchange 2016, using a KEMP load balancer as an example.

In our example, we'll be using a single HTTPS namespace for services like OWA, EWS, OAB and ActiveSync along with our Autodiscover namespace.

These two names will share a Virtual IP (VIP) using the same SAN certificate. We'll be forwarding using Layer 4, and performing a check against the OWA URL. On the back-end we've just got two Client Access servers to load balance:

 

To add our single service, we'll log into our blank load balancer, and choose to add a new service, specifying our single VIP (in this case 192.168.15.51) along with the HTTPS port, TCP port 443:


 
Figure 2: Creating the initial VIP

Next, we'll choose to inform the load balancer under the heading Standard Options that the service is Layer 4 by deselecting Force L7. We'll also make sure affinity is switched off by selecting None within Persistence Options, and leave Round Robin as the scheduling method to distribute load:


 
Figure 3: Selecting Standard Options for the Load Balancer

We'll leave the SSL properties and advanced options at their defaults, then move onto adding our Client Access servers under the heading Real Servers.

First, we'll define what to monitor by ensuring that within Real Server Check Parameters, the HTTPS Protocol is defined and the URL is configured. We'll use /owa/healthcheck.htm as the URL then ensure we save that setting by choosing Set URL:


 
Figure 4: Configuring the OWA check URL

Next choose Add New and then on the page that follows, enter the IP address of your first Client Access server in the field Real Server Address. Leave all other options as their defaults, and choose Add this real server to save the configuration. Repeat the process for each Client Access server.


 
Figure 5: Adding client access servers

After adding both of our client access servers, choose View/Modify Services to list the VIPs. We should see our new VIP listed, along with each Client Access server under the Real Servers column. If all is well, the status should show in green as Up:

 
Figure 6: Completed load balancer configuration for a single IP

After ensuring that DNS records for our HTTPS namespaces - mail.stevieg.org and autodiscover.stevieg.org are configured to point at our VIP of 192.168.15.51, we'll then configure our Mailbox servers to use these names, by visiting the Exchange Admin Center, and then navigating to Virtual Directories. Within Virtual Directories, click on the Configure External Access Domain button highlighted below:

 
Figure 7: Configuring the HTTPS namespaces

Next, we'll select both of our servers hosting the Client Access role and enter our primary HTTPS name, then choose Save to implement our configuration of OWA, ECP, OAB, EWS and ActiveSync virtual directories.


 
Figure 8: Configuring a single namespace and applying it to all servers

Finally, we'll configure Outlook Anywhere by navigating to the Servers page and choosing each server one by one and selecting the Edit icon highlighted below:


 
Figure 9: Editing the individual server Outlook Anywhere publishing

We'll then navigate to the Outlook Anywhere tab of Server Properties window and enter our HTTPS namespace, mail.stevieg.org for both the internal and external names:


 
Figure 10: Configuring the internal and external URL for Outlook Anywhere

After saving the configuration, along with performing an iisreset /noforce against these servers, we should have a complete configuration.
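The same Outlook Anywhere settings can also be applied from the Exchange Management Shell instead of clicking through each server in the EAC. A sketch, assuming hypothetical server names EX01 and EX02 and the mail.stevieg.org namespace used above:

```powershell
# Set the internal and external Outlook Anywhere hostnames
# to the shared HTTPS namespace on each mailbox server.
"EX01", "EX02" | ForEach-Object {
    Get-OutlookAnywhere -Server $_ |
        Set-OutlookAnywhere -InternalHostname "mail.stevieg.org" `
            -ExternalHostname "mail.stevieg.org" `
            -InternalClientsRequireSsl $true `
            -ExternalClientsRequireSsl $true
}
```

As with the EAC route, follow up with iisreset /noforce on each server for the change to take effect promptly.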

 

Implementing Per-Service Load Balancing

With per-service load balancing, we have a number of different options available. We can use Layer 4 load balancing with a separate name for each service and a different corresponding IP address.

Alternatively, as we’ll describe here, we can use the same basic configuration used for the simple single server Layer 4 load balancing, and then add on some more intelligent features.

The feature we’ll add on using the KEMP load balancer is known as sub-virtual servers and requires the use of Layer 7 level load balancing. In essence, Layer 7 load balancing is able to examine the contents of the request, such as the URL specified, and then make intelligent decisions. Based on the URL provided, the Layer 7 load balancer is able to pass the request to a sub-virtual server which will then load-balance an individual service.

This has a number of benefits over and above the Layer 4 simple load balancer. Firstly, it provides service-level awareness, something we do not have with the configuration implemented above – we only have awareness for the service status of the Outlook on the web (OWA) service.

Secondly, it only requires a single IP address. Layer 4 per-service load balancing requires multiple IP addresses, as it is simply passing the request through as-is and is unable to make any intelligent decisions.

The great thing about what we’ll do now is that it doesn’t require any additional, special Exchange configuration. The changes we made earlier in this article to align the namespaces for HTTPS traffic with a single virtual IP all apply as-is for this example.

To get started, let’s skip back to the KEMP load balancer and Add a new virtual service. We’ll create a new virtual service for Exchange 2016 HTTPS using our VIP for this example implementation, 192.168.15.51, with basic options as shown below:


 






Figure 11: Creating a new virtual service

We will then examine the configuration for the newly created virtual service. As you’ll see when examining the configuration, the service type is set to HTTP/HTTPS and the Scheduling Method is set to Round Robin:


 
Figure 12: Viewing the newly created VS

Scroll down to the SubVSs section within the configuration, and choose Add new:

 
Figure 13: Creating a new virtual service

Upon choosing Add new a new sub virtual server will be shown. Using a similar process to the single Layer 4 load balancer, we’ll set a check URL for monitoring, and then add both real servers.

In the example below, we’re creating a sub virtual service for the MAPI protocol. We’ll specify the virtual directory check URL /mapi/healthcheck.htm and specify to use HTTP 1.1 and the GET method. When you’re finished, choose Add new to add the next one:


 
Figure 14
 
We’ll need to repeat this process a number of times. The table below shows the multiple services we’ll add and the corresponding check URLs:

Service and corresponding check URL:
  • Autodiscover: /Autodiscover/healthcheck.htm
  • ActiveSync: /Microsoft-Server-ActiveSync/healthcheck.htm
  • Exchange Admin Center: /ecp/healthcheck.htm
  • Exchange Web Services: /ews/healthcheck.htm
  • MAPI over HTTP: /mapi/healthcheck.htm
  • Offline Address Book: /oab/healthcheck.htm
  • Outlook on the Web: /owa/healthcheck.htm
  • Exchange Management Shell: /powershell/healthcheck.htm
  • Outlook Anywhere: /rpc/healthcheck.htm
Table 1
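You can spot-check these endpoints by hand before wiring them into the load balancer; Exchange's Managed Availability serves each healthcheck.htm with an HTTP 200 when the protocol is healthy. A sketch, with a hypothetical server FQDN:

```powershell
# Probe each protocol's healthcheck.htm on one mailbox server.
# A 200 status means that protocol is considered healthy.
$server = "ex01.stevieg.org"   # hypothetical server FQDN
$paths = "/owa", "/ecp", "/ews", "/mapi", "/oab",
         "/Microsoft-Server-ActiveSync", "/rpc", "/Autodiscover"
foreach ($p in $paths) {
    $r = Invoke-WebRequest -Uri "https://$server$p/healthcheck.htm" -UseBasicParsing
    "{0,-35} {1}" -f $p, $r.StatusCode
}
```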

When you have finished creating each new sub virtual service, you should see each individual service listed, each capable of being monitored independently.

 
Figure 15: Viewing individual status monitors

For the purposes of this article we won’t go deeper into the KEMP-specific details, as this is solely to illustrate the generic process. On KEMP devices, this kind of per-service load balancing also requires you to import the SSL certificate used for the services, enable SSL re-encryption, and configure sub virtual server rules that direct traffic based on the requested URL.

 

Summary

In the above steps we’ve explored the simple load balancing configuration required to direct traffic to your Exchange 2016 organization using Layer 4 load balancing, which provides an efficient, simple form of load balancing. As a representative example we’ve also looked at how the KEMP load balancer can be used to perform Layer 7 based per-service monitoring of services, at the expense of complexity.

How to Configure Credential Guard Through Group Policy on Windows Server 2016

 

Credential Guard in Windows Server 2016 allows you to protect in-memory credentials. This article explains how Credential Guard works and how you can configure it via Group Policy.








 

Credential Guard requirements

At first blush, the Credential Guard hardware and software requirements seem pretty steep, at least if your shop doesn’t have fairly current hardware. Here’s the list:
  • Operating systems: 64-bit Windows 10 Enterprise or Windows Server 2016
  • Firmware: UEFI firmware v2.3.1 or higher. The key point here is that the system’s UEFI must support Secure Boot. Recall that Secure Boot is a UEFI feature embraced by Microsoft that prevents unsigned (unauthorized) OS loaders or device drivers from loading during startup.
  • CPU virtualization extensions: Intel VT-x or AMD-V with SLAT support
  • TPM v 1.2 or 2.0: Trusted Platform Module (TPM) is a motherboard chip that stores Credential Guard encryption keys
As of this writing, you can’t enable Credential Guard on a Windows 10-based VM.

 

Enabling Credential Guard via Group Policy

The easiest way to deploy Credential Guard is to do so in local or domain Group Policy. Pop open your Group Policy editor and navigate to the following location:

Computer Configuration\Administrative Templates\System\Device Guard

We’re looking for the policy named “Turn on Virtualization Based Security,” as shown in the following screen capture from my Windows Server 2016 Technical Preview (TP) 5 VM:


 

Disabling Credential Guard

After enabling this policy, you have two choices about how you want Credential Guard to behave:
  • Enabled with UEFI lock: Credential Guard can’t be remotely disabled. An administrator needs to log on locally to the machine in question to disable the feature (along with modifying Group Policy if necessary).
  • Enabled without lock: Credential Guard can be disabled remotely via Group Policy.
However, you’ll want to perform due diligence before enabling Credential Guard across your enterprise. The reason for this is that Credential Guard prevents the use of older NTLM credentials and unconstrained Kerberos delegation for security reasons. That second point may tweak a few of you in our readership because Kerberos delegation is a standard method to allow our line-of-business (LOB) applications to forward account credentials.

We can use good ol’ System Information (msinfo32.exe) to determine whether Credential Guard is enabled or disabled. My virtual machine wouldn’t work with UEFI, so you can see in the following screenshot that Credential Guard is enabled but not running:








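Besides msinfo32.exe, the same status can be queried programmatically through the Win32_DeviceGuard WMI class. A sketch, assuming Windows 10 or Server 2016:

```powershell
# SecurityServicesConfigured / SecurityServicesRunning contain 1
# when Credential Guard is configured / actually running.
$dg = Get-CimInstance -ClassName Win32_DeviceGuard `
    -Namespace root\Microsoft\Windows\DeviceGuard
$dg | Select-Object SecurityServicesConfigured, SecurityServicesRunning
```

On a machine like my UEFI-less VM, Credential Guard shows as configured but not running.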
 

Remaining vulnerabilities

We must always keep in mind that no single security feature or measure alone provides complete protection. Here’s a laundry list of scenarios that Credential Guard can’t mitigate:
  • Software that doesn’t store secrets within Windows feature protection (Microsoft recently announced that Credential Guard now protects credentials stored in Windows Credential Manager.)
  • Local user accounts
  • Microsoft Accounts
  • Key loggers or other physical attacks

How to Setup Storage Replica in Windows Server 2016

Storage Replica is a new feature in Windows Server 2016 that allows us to do storage-agnostic block-level replication of data.






The main features of Storage Replica:
  • It performs zero data-loss block-level replication of data.
  • It is storage-agnostic (but it requires that we have data volumes, which are NTFS formatted).
  • It is configurable as synchronous or asynchronous.
  • Replication is based on volume source and destination.
  • It uses SMB 3 as the transport protocol and is supported using TCP/IP or RDMA.
  • It can replicate open files, as it operates on block level.
It supports different use cases, including host-to-host replication, cluster-to-cluster replication, and same-host replication (if we want to synchronize data from one volume to another).

Nano Server also supports Storage Replica, but you need to add it as a separate component when building the server image.

The diagram below describes how Storage Replica works with a synchronous configuration. (1) When an application writes data down to the file system (for instance the D:\ drive), IO filtering will intercept the IO and (2) write it to the log volume on the same host. (3) The data will replicate across to the secondary site and write it to the log volume there. When the data is written to the log volumes it will (4) send an acknowledgment to the primary server, which will in turn send an (5) acknowledgment to the application. The data also will be flushed to the volume from the logs using write-through.

The log volume’s purpose is to record all block changes that occur, similar to a SQL database transaction log that stores all transactions and modifications. If there is a power outage on the remote site, it can retrieve all the changes that occurred since the outage and catch up.

It is important to be aware that in a synchronous configuration, the application needs to wait for acknowledgment from the remote site, so a constrained network will affect application performance considerably. Most TCP/IP networks add about 2–5 ms of latency, which can create a bad user experience; instead, consider using RDMA, which bypasses TCP/IP and therefore has considerably lower overhead and latency. For synchronous replication, the recommendation is a maximum of 5 ms of latency and high bandwidth available between the source and destination resources.

By design, the Data and Log volumes on the remote site will be unmounted and marked as non-accessible.

 
Now, if we were to configure asynchronous replication, the picture would be quite different. Instead, it would write data locally first to the log file, then send an acknowledgement to the application, giving the same application performance it would give if we didn’t have Storage Replica installed. Next, it would replicate the data from the source log volume to the log volume on the other site.

Because the application does not have to wait for the remote site, we do not need to have such strict requirements on the network layer; this allows asynchronous deployment in WAN scenarios. It is also important to be aware that if the network link between the two sites fails, the log volumes on the source would store all block changes until the link comes back up and then replicate the changes that happened since the link went down.

Requirements

There are some requirements we need to be aware of before configuring this feature.
  • We need two volumes available in each location, one for data and one for logs.
  • Volumes need to be configured as GPT and be the same size at the source and destination.
  • Log volumes need to be identical sizes on both source and destination.
  • Data volumes should be no larger than 10 TB.
  • We need to have Windows Server 2016 as the source and as the target resource.
It is important to note that Storage Replica is a Windows feature that will be available only in the Datacenter edition of Windows Server 2016.
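The volume requirements above can be checked quickly from PowerShell. This is a simple sketch; the drive letters E: and F: are just an assumed lab layout:

```powershell
# Confirm the disks hosting the replica volumes are initialized as GPT.
Get-Disk | Select-Object Number, FriendlyName, PartitionStyle

# Confirm the data and log volumes match in size on source and destination.
Get-Volume -DriveLetter E, F | Select-Object DriveLetter, FileSystemLabel, Size
```

Run the same two commands on both servers and compare the output.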

Installing Storage Replica

Launch a PowerShell console with administrator privileges and execute the following command:

Install-WindowsFeature Storage-Replica, FS-FileServer -Restart -IncludeManagementTools

This PowerShell command also installs the File and Storage Services server role (FS-FileServer). While this role is not required for Storage Replica to work, we will use it later in this post to run the Test-SRTopology command. After we have successfully run Test-SRTopology, we can safely remove the file server role from the servers we want to use with Storage Replica.
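For example, once Test-SRTopology has been run, the role can be removed again:

```powershell
# Remove the file server role that was added only for Test-SRTopology.
Uninstall-WindowsFeature FS-FileServer
```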

For this setup, I have two virtual machines, which have two additional volumes each; I will use them as data and log volumes. Since Storage Replica does not have any UI management, we must use PowerShell to do all configuration.

Before we set up any storage replication options, we need to verify support for our topology. We can do this using the PowerShell command Test-SRTopology; it will generate an HTML report that we can use to see if we have a supported topology. We can use the cmdlet in a requirements-only mode for a quick test as well as a long-running performance-evaluation mode. It is also important that we generate some IO against the source volume while we are running the test to get more-detailed information about the benchmark.
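One crude way to generate write IO during the test is a simple loop like the one below, assuming F: is the source data volume as in the test command that follows; a dedicated load tool such as Diskspd produces more realistic IO patterns:

```powershell
# Write ~1 GB of random data to the source data volume while Test-SRTopology runs,
# so the performance section of the report contains meaningful numbers.
$rng = New-Object System.Random
1..100 | ForEach-Object {
    $buffer = New-Object byte[] (10MB)
    $rng.NextBytes($buffer)
    [System.IO.File]::WriteAllBytes("F:\srtest_$_.bin", $buffer)
}
```

Delete the srtest_*.bin files afterward to reclaim the space.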

Open PowerShell and make sure that Storage Replica Module is present.

Import-Module StorageReplica

Then we need to test our Storage Replica topology.

Test-SRTopology -SourceComputerName NTX-SR01 -SourceVolumeName f: -SourceLogVolumeName g: -DestinationComputerName NTX-SR02 -DestinationVolumeName f: -DestinationLogVolumeName g: -DurationInMinutes 30 -ResultPath c:\temp


NOTE: If you are using non-US regional settings on the Windows Server 2016 TP5, the Test-SRTopology cmdlet might fail when generating the report, giving you the following error message:

WARNING: Plotting chart from file c:\temp\SRDestinationDataVolumeBytesPerSec.csv failed.

In that case, you need to switch the regional settings to US, reboot, and rerun the cmdlet.

After running the command, PowerShell will generate an HTML report, which will list whether the environment meets all the requirements.


 
After we successfully run the cmdlet, we can start setting up our replication configuration.

New-SRPartnership -SourceComputerName NTX-SR01 -SourceRGName SR01 -SourceVolumeName e: -SourceLogVolumeName f: -DestinationComputerName NTX-SR02 -DestinationRGName SR02 -DestinationVolumeName e: -DestinationLogVolumeName f: 

We can now run Get-SRGroup to see the configuration’s properties. By default, replication runs in synchronous mode, and the log file is set to 8 GB. You can switch to asynchronous replication with the command Set-SRPartnership -ReplicationMode Asynchronous.
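For example, the log size can be inspected and enlarged with the SR cmdlets; the 16 GB figure below is just an illustration, and the group name SR01 matches the partnership created above:

```powershell
# Inspect the replication group created above.
Get-SRGroup | Select-Object Name, ReplicationMode, LogSizeInBytes

# Grow the log, e.g. to 16 GB, if the default 8 GB proves too small.
Set-SRGroup -Name SR01 -LogSizeInBytes 16GB
```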


 
If we open File Explorer on the destination machine, we will also notice that the E:\ drive is inaccessible and that the log file is stored on the F:\ drive.


 
When we start to write data to the E:\ drive on the source computer, it will replicate block by block to the destination computer. The easiest way to follow the progress is Performance Monitor, since Storage Replica includes a set of built-in performance counters.
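The same counters can be sampled from PowerShell. The counter-set names can vary by build, so discover them first; the commented sample path below is an assumption, not a confirmed counter name:

```powershell
# Discover the Storage Replica counter sets available on this build.
Get-Counter -ListSet '*Storage Replica*' | Select-Object -ExpandProperty Counter

# Then sample whichever counters the previous command returned, e.g.:
# Get-Counter -Counter '\Storage Replica Statistics(*)\*' -SampleInterval 5 -MaxSamples 3
```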

 
In upcoming posts, we will take a closer look at more-advanced configuration of Storage Replica using delegated access, sizing, and network configuration; we will also look at how to configure Storage Replica in a Stretched Cluster environment.

How to Use Security Compliance Manager in Windows Server 2016

You may be familiar with Microsoft Security Essentials or the Microsoft Baseline Security Analyzer (MBSA), but have you ever seen the Security Compliance Manager (SCM) tool? Learn how to develop, compare, deploy, and troubleshoot security baselines in Windows Server 2016.


As you know, you define Windows Server and Windows Client security settings in Group Policy, specifically under Computer Configuration\Policies\Windows Settings\Security Settings, as shown in the following screenshot:
 

Group Policy is difficult enough to audit and troubleshoot on its own. But what if your IT department is subject to industry and/or governmental compliance regulations that require you to strictly oversee security policies?

As you know, different Windows Server workloads have different security requirements. Today, I’d like to teach you how to use the free Security Compliance Manager (SCM) tool. SCM is one of Microsoft’s many “solutions accelerators” that are intended to make our lives as Windows systems administrators easier.

In part one, we’ll cover installing the tool, setting it up, and creating baselines. In part two, we’ll deal with exporting baselines to various formats and applying them to domain- and non-domain-joined servers. Let’s begin.

Installing SCM 4.0

Sadly, SCM is poorly documented on the Microsoft TechNet sites. In fact, if you Google "security compliance manager download," you'll probably reach a download link for a previous version. To manage Windows Server 2016 and Windows 10 baselines, you'll need SCM v4.

Go ahead and download SCM v4.0 and install it on your administrative workstation. SCM is a database-backed application; if you don’t have access to a full SQL Server instance, the installer will give you SQL Server 2008 Express Edition.

NOTE: I’ve had SCM 4.0 installation fail on servers that had Windows Internal Database (WID) installed. The installer detects WID and won’t let you override that choice, leading to inevitable setup failures. This behavior is annoying, to be sure.

After setup, the tool will start automatically. As you can see in the following screen capture, SCM is nothing more than a Microsoft Management Console (MMC) application. I’ll describe each annotation for you.

 

  • A: Baseline library pane. The Custom Baselines section is where your own baselines (whether created with the tool or imported via GPO backup) are displayed. Clicking on any section heading shows the documentation links list as shown in the image.
  • B: Details pane. The documentation home page has some useful links; this is where you view and work with your security baselines.
  • C: Action pane. As is the case with MMC consoles, this context-sensitive section contains all your commands.
At first launch, you were likely asked if you wanted to update the baselines. If you did, fine, but I want to show you how to configure baseline updates manually. First of all, what the heck is a security baseline, anyway?

A security baseline is nothing more than a foundational “steady state” security configuration. It’s a reference against which you’ll evaluate the Group Policy security settings of all your servers and, potentially, your client devices.

Click File and Check for Updates from within the SCM tool to query the Microsoft servers for updated baselines. The good news is that Microsoft frequently tweaks its baselines. The bad news is that your baseline library can quickly grow too large to manage efficiently.

That’s why you can deselect any updates you don’t need, as shown in the following figure:

 
As of this writing, Microsoft has Windows 10 baselines available from within SCM. However, you’ll need to download Windows Server 2016 Technical Preview baselines separately from the Microsoft Security Guidance blog. Here’s how you import manually downloaded security baselines into SCM:
  1. Download the .zip archive and extract its contents.
  2. In the SCM Actions pane under Import, click GPO Backup (folder).
  3. In the Browse for Folder dialog, select the appropriate GPO backup. Because the folder names use Globally Unique Identifiers (GUIDs), some trial and error is required.
  4. In the GPO Name dialog, optionally change the name of the imported baseline and click OK. I show you this workflow in the following screen capture:
Manual baseline import into SCM.

 
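Because the backup folders are GUID-named, a small helper can reduce the trial and error by reading the bkupInfo.xml file that GPMC places inside each backup folder. The folder path C:\Baselines is hypothetical, and the XML node name is an assumption about the backup schema:

```powershell
# Map each GUID-named GPO backup folder under C:\Baselines to its display name.
Get-ChildItem 'C:\Baselines' -Directory | ForEach-Object {
    $info = [xml](Get-Content (Join-Path $_.FullName 'bkupInfo.xml'))
    [pscustomobject]@{
        Folder      = $_.Name
        DisplayName = $info.SelectSingleNode('//*[local-name()="GPODisplayName"]').InnerText
    }
}
```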

Creating your first baseline

The built-in security baselines are all read-only, so you’ll need to create a duplicate of any baseline you plan to modify.

To duplicate a baseline, select it in the baseline library pane and then click Duplicate in the Actions pane. Give the new baseline a name, and you’re ready to rumble.

That is… until you see how cumbersome and complicated the baseline user interface is. Here, let me show you:


 

You can use the arrow buttons to collapse or expand each GPO security policy section. I want to draw your attention to the three key columns in a baseline:
  • Default: This is the operating system default setting.
  • Microsoft: This is the Microsoft-recommended policy setting as it exists in the source, read-only baseline.
  • Customized: This is the setting you’ve manually added to the baseline.
Because your baselines all exist in a SQL Server database, there’s no save functionality; all your work is automatically committed to the database.

 

Comparing baselines

You’re not limited by the built-in baselines that Microsoft offers, or even those that you download yourself from the Internet. Suppose you want to develop new security baselines based on GPOs that are in production on your Active Directory Domain Services (AD DS) domain.

To do this, start by performing a GPO export from one of your domain controllers. If you have the Remote Server Administration Tools (RSAT) installed on your workstation, fire up the Group Policy Management Console (GPMC), right-click the GPO in question, and select Back Up from the shortcut menu as shown here:


 

Now you can import your newly backed-up GPO by using the same procedure we used earlier in this article.

To perform a comparison, select your newly imported GPO in the baseline library pane, and then click Compare/Merge from the Actions pane. In the Compare Baselines dialog that appears, you can select another baseline—either another custom baseline or one of the Microsoft-provided ones.

In the following screenshot, you can see the results of my comparison between two versions of my custom Server Defaults Policy baseline:


 
  • Summary: Quick “roll up” of comparison results.
  • Settings that differ, Settings that match: Detailed list of GPO settings and their policy paths in the GPO Editor.
  • Settings only in Baseline A, B: Here you can isolate settings from each compared baseline individually.
  • Merge Baselines: You can create a new, third baseline that contains settings merged from the two present ones.
  • Export to Excel: Save an Excel workbook that contains the comparison results. This is handy for archival/offline analysis purposes.

 

SCM export options

In the Export section of the SCM 4.0 Microsoft Management Console (MMC), you’ll see the following options:

  • Excel (.xlsm): Macro-enabled Excel workbook. Note that you have to have Microsoft Excel installed on your SCM computer to make this export method work. I show you what a representative baseline worksheet looks like in the next screen capture.
  • GPO Backup (folder): This is the most common export method because the format can be easily imported into domain Group Policy.
  • SCAP v1.0 (.cab): Security Content Automation Protocol. This is a vendor-neutral data reporting format.
  • SCCM DCM 2007 (.cab): System Center Configuration Manager Desired Configuration Management format. Use this export format if you use SCCM in your on-premises environment.
  • SCM (.cab): This is “native” Security Compliance Manager format. Use this export method when you want to import baselines easily into another SCM instance running on another computer.
 

Notice the additional documentation Microsoft gives you in an exported baseline workbook. The Vulnerability and Countermeasure columns are particularly enlightening.

 

Deploy a baseline to Active Directory

From the SCM v4 console, select your target security baseline from the baseline library pane, then click GPO Backup (folder) under Export in the Actions pane. The resulting globally unique identifier (GUID)-named folder is ready for import in your Active Directory Domain Services (AD DS) Group Policy infrastructure.


 

Next, fire up the Group Policy Management Console (GPMC), which you should already have installed on your administrative workstation via the RSAT tools pack.

Follow these steps to import your baseline into an existing GPO:
  1. Open the destination GPO and navigate to Computer Configuration > Policies > Windows Settings > Security Settings.
  2. Right-click the Security Settings node and select Import Policy from the shortcut menu.
  3. Navigate to the .inf file located deep inside your GPO backup folder.
You should see that the baseline security settings have been applied to your destination GPO.

 

Deploy a baseline to a workgroup server

Sigh. In part one, I told you that Microsoft’s Security Compliance Manager documentation is a bit scattered and incomplete. I know many administrators who reached great levels of frustration looking for a version of LocalGPO.wsf that works with Windows 10 or Windows Server 2016.

LocalGPO.wsf is a Windows script file that allows you to deploy security baselines to workgroup computers, among many other cool tasks. What you need to know is that Microsoft deprecated LocalGPO.wsf and instead offers LGPO.exe for local GPO management in Windows 10 and Windows Server 2016.

You’ll need to download the LGPO zip archive and unpack it on the target Windows Server or Windows Client machine, along with your exported SCM security baseline in GPO backup format.

Next, open an elevated Windows PowerShell console and run the following command; the following simple example imports the security baseline in the current working directory to the local computer’s local Group Policy:

.\LGPO.exe /g '.\{GPO-GUID}\'
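As a precaution, the current local policy can be backed up first with LGPO's /b switch, so you can roll back after the import; the backup folder name here is arbitrary:

```powershell
# Create a backup folder, snapshot the current local GPO into it, then import.
New-Item -ItemType Directory -Path .\LocalPolicyBackup -Force
.\LGPO.exe /b .\LocalPolicyBackup
.\LGPO.exe /g '.\{GPO-GUID}\'
```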

Differentiating SCM from related tools

Microsoft is known for deploying tool after tool with associated three-letter acronym (TLA) after TLA. And then it changes those tool names every year (half-kidding).

Anyway, I want to close this tutorial by briefly describing some other first-party security management tools that are often confused with Security Compliance Management.

First, there’s the trusty Security Configuration and Analysis (SCA) MMC snap-in, shown below alongside the Security Templates snap-in:

 


These two MMC snap-ins ship by default in Windows Server and Windows Client. SCA is nice inasmuch as you can view your local system’s current security settings and configure the local Group Policy with settings from an imported template. However, SCA is definitely not a centralized security settings management console like SCM is.

It’s beyond our scope today, but another difference between SCM and SCA is that only SCM can work with digitally signed security baselines. On the other hand, only SCA can change file system and registry key security policy settings.

Second, there’s the Microsoft Baseline Security Analyzer (MBSA). The tool hasn’t been updated in a year or so, but is still functional.



Wrapping up

I hope you’re now in a better position than you were with regard to understanding Security Compliance Manager. This tool should save you a lot of time and administrative headaches, especially if you’re tasked with documenting and more strictly controlling the GPO security policies in use in your environment.

Active Directory's New Features in Windows Server 2016

Active Directory received three major enhancements with the release of Windows Server 2016. This article will review Privileged Access Management, Azure AD Join, and Microsoft Passport.


Microsoft’s biggest focus for Windows Server 2016 is security. You can see this push across each server role. Hyper-V has shielded VMs, application servers have code integrity, and Active Directory Domain Services has Privileged Access Management.
 

However, the updates to Active Directory in Server 2016 are not completely related to security. Two big features stand out in particular. You should expect to hear a lot about Azure Active Directory Join over the next few months (especially if you support small/medium organizations). The second feature of note is Microsoft Passport. Though it is still a bit early to tell, Microsoft Passport has the potential to remove a lot of user frustrations (and IT concerns) with passwords. Enough with the exposition though. Let’s bite into some meat!

 

Privileged Access Management in Server 2016

Privileged Access Management (PAM) is the Active Directory equivalent of Privileged Access Workstation (PAW). Where PAW focuses on desktop and server resources, PAM targets forest access, security groups, and membership.

At its core, PAM utilizes Microsoft Identity Manager (MIM) and requires an AD forest functional level of Windows Server 2012 R2 or higher. Microsoft believes that an organization with a business use for PAM is an organization that should assume an already-breached AD environment. Because of this, MIM creates a new AD forest when PAM is configured. This AD forest is isolated for the use of privileged accounts. Because MIM creates it, it is free of any malicious activity.


 
With this secure forest, MIM can now provide the ability to manage and escalate permission requests. Similar to other permission flow applications, like AGPM, MIM provides workflows for administrative privileges through the use of approval requests. When a user is granted additional administrative privileges, he or she is made a member of shadow security groups in the new trusted forest.

Through the use of an expiring links feature, membership to the sensitive security groups is time-controlled. If a user is allotted an hour of additional permissions, the escalated membership is removed after an hour. This timed permission set is stored as a time-to-live value.
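With the PAM optional feature enabled in a Windows Server 2016 forest, the ActiveDirectory module exposes this time-to-live directly on group membership. The group and account names below are hypothetical:

```powershell
# Grant one hour of membership in a shadow security group; the KDC limits the
# Kerberos ticket lifetime to the remaining TTL of the membership.
Add-ADGroupMember -Identity 'Shadow-DomainAdmins' -Members 'priv.jdoe' `
    -MemberTimeToLive (New-TimeSpan -Hours 1)
```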

All of this is designed to be transparent to the user. By using a forest trust and secondary secure accounts in the new forest, users can receive these additional permissions without having to log off of their primary machines. The Kerberos Key Distribution Center (KDC) is aware of multiple time-bound group memberships. Users in multiple shadow security groups have their Kerberos ticket lifetime limited to the lowest time-to-live value.

What is Azure Active Directory Join?

Azure AD Join is to AD Domain Services as Intune is to SCCM. Azure AD Join is primarily aimed at smaller organizations that do not yet have an Active Directory infrastructure. Microsoft calls these organizations cloud-first/cloud-only organizations.

The core purpose of Azure AD Join is to provide the benefits of an on-premises AD environment without the accompanying complexity. Devices purchased with Windows 10 can be self-provisioned into Azure AD. 

This allows an organization without full-time IT staff to manage many of its company resources in-house.

 

Organizations already using Office 365 may benefit the most from Azure AD Join. With a Windows 10 device, a user can use the same account to log on, check email, sync Windows settings, etc. When needed, IT support can configure MDM policies and configure the Windows store for the organization.

One big potential market for Azure AD Join is education. Currently, Google’s Chromebook is a dominant platform. While there isn’t any doubt that a traditional domain-joined mobile device is more customizable than a Chromebook, price and speed aren’t strong points for Windows devices. A very cheap device capable of joining Azure AD with access to a configurable store and Office 365 apps could do a lot to stop the jump to rival platforms.

 

Microsoft Passport may take the pain out of passwords

Credential recycling is one of the top security issues targeting users. I think every administrator knows someone who uses the same password across many services. When an employee uses the same username, such as an email address, exploiting a credential chain becomes much easier. Once you have one credential set, you have them all.

Microsoft Passport aims to change that. By utilizing two-factor authentication, Passport can provide more security than a simple password without the complexity of traditional solutions like physical smart cards. It is designed to be paired with Windows Hello (the built-in biometric sign-in for Windows 10 Pro/Enterprise).



 

Passport’s two-factor authentication is made up of the user’s existing credentials plus a credential specific to the device the user is using (which is linked to the user). Each user on a device has a specific authenticator (called a hello) or a PIN. This provides confirmation that the person entering the credentials is actually the user.

This technology can be deployed in a traditional on-premises AD environment or in Azure AD. In some configurations, you will need a domain controller running Windows Server 2016. By using Microsoft Passport, IT administrators do not have to worry about password recycling as the second authentication method is always required. Excessive password policies (such as longer lengths or shorter expirations) may be modified due to the increased security that Passport provides. An easier logon process can make users quite a bit happier with IT.

Each of these Active Directory improvements targets the ever-widening audience for Windows Server. PAM provides a way to mitigate privilege credential theft in highly secure environments. Azure AD Join provides the benefits of AD to small organizations that lack the funds and infrastructure for an on-premises solution. Finally, Microsoft Passport aims to change the way authentication occurs. By complying with the FIDO alliance, Microsoft Passport can work across a variety of platforms and devices (and hopefully see wide adoption).

How to Disable SSL in Windows Server 2016 With PowerShell

Cracking SSL-encrypted communications has become easy, if not trivial, for a motivated attacker. As of July 2016, the de facto standard for encrypting traffic on the web is TLS 1.2. In this post, you will learn how to disable SSL in Windows Server 2016, Windows Server 2012 R2, and Windows Server 2008 R2.


It would probably surprise you to learn that TLS 1.2 was first defined in 2008, with TLS 1.0 taking over from SSL 3.0 in the late ’90s. SSL 3.0 is now vulnerable to the much publicized POODLE attack, and SSL 2.0 to the DROWN attack as well as the FREAK attack.

It may surprise you even further to learn that most Windows Server 2008 R2 servers will happily accept SSL 2.0 and SSL 3.0 in addition to TLS 1.0 out of the box, and that they WILL NOT support TLS 1.1 or 1.2 without the administrator specifically enabling them.

A recent test I performed on Windows Server 2016 TP5 shows that still today a default install will support SSL 3.0. However, all is certainly not lost, and our quest for better-secured servers can be helped drastically by setting just a few registry keys. I prefer to use PowerShell for this type of repetitive task.

To disable SSL 2.0 and SSL 3.0, simply paste the following into an elevated PowerShell window: 

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server' -Force
New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server' -name Enabled -value 0 -PropertyType DWORD

 

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server' -Force

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server' -name Enabled -value 0 -PropertyType DWORD
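To confirm the change took effect, you can read the values back; both protocols should now report Enabled set to 0:

```powershell
# Read back the Enabled value for both disabled protocols.
'SSL 2.0', 'SSL 3.0' | ForEach-Object {
    Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\$_\Server" -Name Enabled
}
```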

 

You should then enable TLS 1.1 (TLS 1.2 follows the same pattern):

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Force

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -Force


New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -name 'Enabled' -value 0xffffffff -PropertyType DWORD


New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -name 'DisabledByDefault' -value 0 -PropertyType DWORD


New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -name 'Enabled' -value 1 -PropertyType DWORD


New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -name 'DisabledByDefault' -value 0 -PropertyType DWORD
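The TLS 1.2 keys follow exactly the same pattern; a loop keeps it compact:

```powershell
# Create and enable the TLS 1.2 Server and Client keys.
foreach ($role in 'Server', 'Client') {
    $path = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\$role"
    New-Item $path -Force
    New-ItemProperty -Path $path -Name Enabled -Value 1 -PropertyType DWORD
    New-ItemProperty -Path $path -Name DisabledByDefault -Value 0 -PropertyType DWORD
}
```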

 

Then, simply reboot your server and bask in the glory of a job well done!

Additionally, you can disable the RC4 cipher, which is itself broken; RC4 was once recommended as a BEAST mitigation, but practical attacks against it now exist. You need to consider the effect of disabling TLS 1.0 before you go ahead and do that, though, as a lot of older software requires patching to support TLS 1.2, specifically SQL Server 2008 R2, which is used in SBS 2011. Exchange 2010 and 2013 require patching to support TLS 1.2, and some applications will simply not function at all without TLS 1.0.
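A sketch of disabling RC4 follows. Note the gotcha: the SCHANNEL cipher key names contain a forward slash (e.g. "RC4 128/128"), which the PowerShell registry provider treats as a path separator, so the .NET registry API is used instead:

```powershell
# Disable the RC4 cipher suites under SCHANNEL\Ciphers.
foreach ($cipher in 'RC4 128/128', 'RC4 56/128', 'RC4 40/128') {
    $key = [Microsoft.Win32.Registry]::LocalMachine.CreateSubKey(
        "SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\$cipher")
    $key.SetValue('Enabled', 0, [Microsoft.Win32.RegistryValueKind]::DWord)
    $key.Close()
}
```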

There are some very useful resources to assist with this type of configuration. IIS Crypto is one of the most well-known: it’s a GUI-based tool that takes care of these changes, including protection against attacks such as FREAK. I have also used a tool that performs similar tasks but works in PowerShell.

New Group Policy Settings in Windows 10 version 1607

In Windows 10 1607 (Anniversary Update), new Group Policy settings were introduced. This post lists all the new settings and discusses the most interesting ones.

Each time Microsoft releases a new Windows 10 version, new .ADMX templates become available for download. The latest Excel spreadsheet identifying settings can be downloaded here.

I found that some settings in the spreadsheet were not marked as new. Thus, I put together the list below. Please note that:
  • To make the list more readable, I removed all settings for the App-V and UE-V clients (except "Enable App-V" and "Enable UE-V").
  • I included additional new settings I am aware of.
  • I will discuss the highlighted settings in this post.

Policy Setting Name | Scope | Policy Path
Let Windows apps access account information | Machine | Windows Components\App Privacy
Let Windows apps access notifications | Machine | Windows Components\App Privacy
Enable App-V Client | Machine | System\App-V
Control Device Reactivation for Retail devices | Machine | Windows Components\Software Protection Platform
Allow Use of Camera | Machine | Windows Components\Camera
Configure Windows spotlight on lock screen | User | Windows Components\Cloud Content
Turn off all Windows spotlight features | User | Windows Components\Cloud Content
Do not suggest third-party content in Windows spotlight | User | Windows Components\Cloud Content
Configure the Commercial ID | Machine | Windows Components\Data Collection and Preview Builds
Absolute Max Cache Size (in GB) | Machine | Windows Components\Delivery Optimization
Maximum Download Bandwidth (in KB/s) | Machine | Windows Components\Delivery Optimization
Maximum Download Bandwidth (Percentage) | Machine | Windows Components\Delivery Optimization
Minimum Background QoS (in KB/s) | Machine | Windows Components\Delivery Optimization
Modify Cache Drive | Machine | Windows Components\Delivery Optimization
Monthly Upload Data Cap (in GB) | Machine | Windows Components\Delivery Optimization
Allow companion device for secondary authentication | Machine | Windows Components\Microsoft Secondary Authentication Factor
Turn on cloud candidate for CHS | User | Windows Components\IME
Allow edge swipe | Machine | Windows Components\Edge UI
Allow edge swipe | User | Windows Components\Edge UI
Enable Win32 long paths | Machine | System\Filesystem
Continue experiences on this device | Machine | System\Group Policy
Enable Font Providers | Machine | Network\Fonts
Process Mitigation Options | Machine | System\Mitigation Options
Process Mitigation Options | User | System\Mitigation Options
Allow Internet Explorer to use the SPDY/3 network protocol | Machine | Internet Control Panel\Advanced Page
Allow Internet Explorer to use the SPDY/3 network protocol | User | Internet Control Panel\Advanced Page
Send all sites not included in the Enterprise Mode Site List to Microsoft Edge | Machine | Windows Components\Internet Explorer
Send all sites not included in the Enterprise Mode Site List to Microsoft Edge | User | Windows Components\Internet Explorer
KDC support for PKInit Freshness Extension | Machine | System\KDC
Handle Caching on Continuous Availability Shares | Machine | Network\Lanman Workstation
Offline Files Availability on Continuous Availability Shares | Machine | Network\Lanman Workstation
Block user from showing account details on sign-in | Machine | System\Logon
Disable MDM Enrollment | Machine | Windows Components\MDM
Prevent access to the about:flags page in Microsoft Edge | Machine | Windows Components\Microsoft Edge
Prevent access to the about:flags page in Microsoft Edge | User | Windows Components\Microsoft Edge
Show message when opening sites in Internet Explorer | Machine | Windows Components\Microsoft Edge
Show message when opening sites in Internet Explorer | User | Windows Components\Microsoft Edge
Allow Extensions | Machine | Windows Components\Microsoft Edge
Allow Extensions | User | Windows Components\Microsoft Edge
Turn off Windows default printer management | User | Control Panel\Printers
Allow Cortana above lock screen | Machine | Windows Components\Search
Enable UEV | Machine | Windows Components\Microsoft User Experience Virtualization
Configure the ‘Block at First Sight’ feature | Machine | Windows Components\Windows Defender\MAPS
Define proxy auto-config (.pac) for connecting to the network | Machine | Windows Components\Windows Defender
Suppress all notifications | Machine | Windows Components\Windows Defender\Client Interface
Allow suggested apps in Windows Ink Workspace | Machine | Windows Components\Windows Ink Workspace
Allow Windows Ink Workspace | Machine | Windows Components\Windows Ink Workspace
Only display the private store within the Windows Store app | User | Windows Components\Store
Only display the private store within the Windows Store app | Machine | Windows Components\Store
Do not include drivers with Windows Updates | Machine | Windows Components\Windows Update
Select when Feature Updates are received | Machine | Windows Components\Windows Update\Defer Windows Updates
Select when Quality Updates are received | Machine | Windows Components\Windows Update\Defer Windows Updates
Turn off auto-restart for updates during active hours | Machine | Windows Components\Windows Update
Turn off unsolicited network traffic on the Offline Maps settings page | Machine | Windows Components\Maps
Don’t allow this PC to be projected to | Machine | Windows Components\Connect
Require pin for pairing | Machine | Windows Components\Connect
Turn off notification mirroring | User | Start Menu and Taskbar\Notifications

Enable App-V

The App-V client is now part of Windows 10 and can be enabled using Group Policy or PowerShell (Enable-Appv) on Windows 10 Enterprise and Education.
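From PowerShell, that looks like this; a restart may be required before the client is fully active:

```powershell
# Enable the in-box App-V client and verify its status.
Enable-Appv
Get-AppvStatus
```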

Enable-Appv using Group Policy

Let’s hope that App-V being a part of Windows 10 will help spread it, because it is really good technology.

 

Send all sites not included in the Enterprise Mode Site List to Microsoft Edge

This is an interesting new setting. Although Internet Explorer is still around to provide compatibility, more and more websites will have issues when opened in Internet Explorer. This new setting ensures that sites not included in our Enterprise Mode Site List are opened in Edge.

 

Prevent access to the about:flags page in Microsoft Edge

The about:flags page in Edge allows you to enable experimental browser features or features that are of interest to developers. It might make sense to disable access to this page to prevent unnecessary service desk calls.



 

Allow extensions

Extensions are one of Edge’s cool new features. Most extensions target consumers and are of little value in a corporate environment. Extensions also pose a security risk because it is often unclear what data they collect. With the help of this new Group Policy setting, we can disable extensions in Edge.


 

Turn off Windows default printer management

Since Windows 10 1511, the last printer used is set as the default printer.


 
In many organizations this behavior is unwanted. We had to use a Group Policy preference setting and a Registry key to turn it off. In Windows 10 1607, we now have a new Group Policy setting that can be used to turn off the default printer management.








 

Only display the private store within the Windows Store app

This policy restricts the Store app to the private store, giving you control over which applications users can install.

 

Turn off auto restart for updates during active hours

This policy allows you to configure the new active hours feature in Windows 10 1607, during which Windows Update won’t automatically reboot the computer. Please read this post for more information.

If you are aware of another new Group Policy in Windows 10 1607, please leave a comment below.

How to Set Windows 10 Ethernet Connection Metered with PowerShell

If you set your internet connection to metered, Windows will limit automatic downloads such as Windows Update. Whereas a Wi-Fi connection can be set to a metered connection easily with a few mouse clicks, things are a bit more complicated with an Ethernet connection. I wrote a little PowerShell script that allows quick switching between metered and not metered connections.






Unfortunately, the procedure to set an Ethernet connection as metered is quite long-winded because, by default, Administrators don’t have the right to change the corresponding Registry key. For the sake of completeness, I’ll show you how to do it with the Registry editor. But if you want to avoid all this click-click, you can simply run the PowerShell script at the end of this post.
  1. Run Registry editor (Windows key + R, type regedit, click OK)
  2. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\DefaultMediaCost
  3. Right-click DefaultMediaCost, select Permissions, and click Advanced.
 
Change Permissions on DefaultMediaCost key

Click Change to assign a different owner for the key.

 
Change owner of Registry key

Type Administrators in the form field and click OK.


 
Setting Administrators as key owner

Check Replace owner on subcontainers and objects and click OK.


 
Replace owner on subcontainers and objects

Select the Administrators group, give it Full Control, and click OK.


 
Assign Full Control permissions to Administrators

Double-click the Ethernet key and set its value to 2.


 
Set Ethernet connection as metered

You can set a Favorite in the Registry editor if you want to get back to the key quickly later. To reset the Ethernet connection as not metered, change the value back to 1.
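If you prefer, once the permissions above are in place, the same change can be made from an elevated PowerShell prompt; a minimal sketch (the full script later in this post automates the permission changes too):

```powershell
# Set the Ethernet connection as metered (2 = metered, 1 = not metered);
# assumes Administrators already have write access to the key
$path = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\DefaultMediaCost"
Set-ItemProperty -Path $path -Name Ethernet -Value 2 -Type DWord
```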

All right, this is really a lot of click-click. If you have to do this often on different machines, you can just run the PowerShell script below.

After I assign the Administrators group as the owner of the DefaultMediaCost key, I give the group full control permissions.

In the last part of the script, I check to see if the Ethernet connection is set as metered or not and then ask the user whether the current configuration should be changed.






Ethernet Connection Metered PowerShell Script


<#
.SYNOPSIS   :  PowerShell script to set Ethernet connection as metered or not metered

.AUTHOR     :  Rock Brave

.SITE       :  http://infosbird.com
#>

#We need a Win32 class to take ownership of the Registry key
$definition = @"
using System;
using System.Runtime.InteropServices;

namespace Win32Api
{

    public class NtDll
    {
        [DllImport("ntdll.dll", EntryPoint="RtlAdjustPrivilege")]
        public static extern int RtlAdjustPrivilege(ulong Privilege, bool Enable, bool CurrentThread, ref bool Enabled);
    }
}
"@

Add-Type -TypeDefinition $definition -PassThru | Out-Null
[Win32Api.NtDll]::RtlAdjustPrivilege(9, $true, $false, [ref]$false) | Out-Null

#Setting ownership to Administrators
$key = [Microsoft.Win32.Registry]::LocalMachine.OpenSubKey("SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\DefaultMediaCost",[Microsoft.Win32.RegistryKeyPermissionCheck]::ReadWriteSubTree,[System.Security.AccessControl.RegistryRights]::takeownership)
$acl = $key.GetAccessControl()
$acl.SetOwner([System.Security.Principal.NTAccount]"Administrators")
$key.SetAccessControl($acl)

#Giving Administrators full control to the key
$rule = New-Object System.Security.AccessControl.RegistryAccessRule ([System.Security.Principal.NTAccount]"Administrators","FullControl","Allow")
$acl.SetAccessRule($rule)
$key.SetAccessControl($acl)

#Setting Ethernet as metered or not metered
$path = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\DefaultMediaCost"
$name = "Ethernet"
$metered = Get-ItemProperty -Path $path | Select-Object -ExpandProperty $name

Clear-Host
if ($metered -eq 1) {
    $SetMetered = Read-Host "Ethernet is currently not metered. Set to metered? (y/n)"
    if ($SetMetered -eq "y") {
        New-ItemProperty -Path $path -Name $name -Value "2" -PropertyType DWORD -Force | Out-Null
        Write-Host "Ethernet is now set to metered."
    } else {
        Write-Host "Nothing was changed."
    }
} elseif ($metered -eq 2) {
    $SetMetered = Read-Host "Ethernet is currently metered. Set to not metered? (y/n)"
    if ($SetMetered -eq "y") {
        New-ItemProperty -Path $path -Name $name -Value "1" -PropertyType DWORD -Force | Out-Null
        Write-Host "Ethernet is now set as not metered."
    } else {
        Write-Host "Nothing was changed."
    }
} else {
    Write-Host "Unexpected value '$metered' for the Ethernet entry. Nothing was changed."
}

How to Restrict Users to Private Store with Group Policy in Windows 10


A new Group Policy setting (Only display the private store within the Windows Store app) in the Anniversary Update (Windows 10 1607) allows admins to disable the public store and restrict users to the private store in the Windows Store for Business.






Windows 10 1511 introduced the Windows Store for Business, allowing you to create a private store through which you can offer volume-purchased apps to users in addition to the free apps of the public store. You can also register a mobile device management (MDM) client or a client management tool (for instance, Configuration Manager or Intune) to synchronize the apps you have licensed.

If you log on to the store using an Azure AD account, you will see a new tab with the name of your organization.

 
Private store in Windows 10

In Windows 10 1511, it was possible to restrict the store to only show apps to the end user that had been published in the business store, thereby restricting access to all apps available in the public store. However, you could only do this through the MDM channel using an Open Mobile Alliance (OMA) Device Management (DM) policy.

In Windows 10 1607, we now have a new Group Policy setting: Only display the private store within the Windows Store app. You can find the new policy under Computer Configuration > Administrative Templates > Windows Components > Store.


 

The new Group Policy setting – Only display the private store within the Windows Store app

If we enable this setting and log in with an account other than an Azure AD account (for instance, a Microsoft account), the Store app will not show any apps.


 
Store without apps after login with Microsoft account

Only if you use an Azure AD account will you see the apps that are published for users in the business store.


 
Only the private store is available

This is a very useful feature for many organizations because you can restrict the apps available to users in the store. Users can install apps through the store, but admins maintain some level of control over the available apps.
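For a quick test on a single machine, the policy can presumably also be set directly in the registry; a sketch, assuming the setting maps to a RequirePrivateStoreOnly value under the WindowsStore policy key (verify the value name on your build before relying on it):

```powershell
# Restrict the Store app to the private store. The key path and the
# value name RequirePrivateStoreOnly are assumptions based on the
# Group Policy setting's name - verify on your build.
$key = "HKLM:\SOFTWARE\Policies\Microsoft\WindowsStore"
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
Set-ItemProperty -Path $key -Name RequirePrivateStoreOnly -Value 1 -Type DWord
```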

The first time you launch the Store app and log in using an Azure AD account, unregistered computers will be registered automatically. Devices can also be registered in Azure AD with other methods such as Group Policy, Azure AD Join, and Intune.

It is important to note that when working with Intune, devices are always registered in Azure AD (the setting “Users may register their devices with Azure AD” is turned on for all users and cannot be changed).


 
Devices managed with Intune are always registered in Azure AD







This can be helpful because users can log in with their Azure AD accounts on any computer running Windows 10 1511 or later without the need to prepare the device for Azure AD.

I have been asked a couple of times if this new Group Policy setting also allows us to restrict the Edge extensions users can install.

No, this setting does not affect Edge extensions. Users can still install all extensions that are available in the public store. In the screenshot below, you can see that only the private store tab is available, yet all Edge extensions can still be installed.


 
Microsoft Edge extensions in the store

How to Install OpenDNS Umbrella Virtual Appliances on Hyper-V 2012 R2


OpenDNS Umbrella is another layer of security beyond firewalls and antivirus software. Your organization can utilize it to protect networks from malware, breaches, botnets, phishing, and cryptoware at the DNS layer. In this article, we'll demonstrate how to set up and configure OpenDNS virtual appliances and make sure that DNS information is collected accurately for reporting.







By simply pointing your DNS forwarding to OpenDNS servers, there’s not much to install or configure out of the box other than your blocking policies and user accounts. However, while the product does go to work immediately, reporting will be lacking until you set up internal IP address resolution alongside Active Directory.

You’re probably already familiar with OpenDNS; the service has long been trusted with consumer-grade firewalls and Wi-Fi Access Points. OpenDNS is now owned by Cisco, and the service is relatively inexpensive at approximately $115 for a three-year, 250-license package on CDW. OpenDNS Umbrella extends that protection to your enterprise by categorizing your DNS traffic in the OpenDNS data centers, rather than relying on your own firewall’s DNS capabilities. This is especially useful if you are running pfSense firewalls, as the packaged domain blocking and reporting is minimal in several areas.

In addition to Umbrella, OpenDNS can protect your roaming devices by installing a remote client. For now, we’ll look at the default reporting and why it’s necessary to set up virtual appliances.

After pointing your internal DNS servers to OpenDNS, the service immediately begins to collect and categorize data. Running a report will produce something similar to the following screenshot, wherein only your external IP address is resolved for all traffic going in and out of your network.

 
Malware activity report

Here, we can see several records indicating that malware and high-risk software is running on our network. However, the (obfuscated) IP address shown is only our single external IP address, and we want to know which clients on our network could be compromised.
To get this information, we deploy two OpenDNS virtual appliances on our network to intercept DNS requests so that client information can be recorded and sent to OpenDNS. Below is a diagram from an OpenDNS setup guide located here that gives a high-level overview of the virtual appliances’ purpose. I find the diagram easiest to understand by looking at the endpoints first.


 
Virtual appliance diagram

As outlined, internal DNS requests will still be provided by our internal DNS server, but we’ll need to point our clients to the two virtual appliances.

 

Minimum requirements and initial setup

Here are the minimum requirements for our virtual appliances:
  • Generation 1 VM
  • 4 CPU cores and 1.5GB RAM
  • 7GB of Disk Space
  • 2 virtual appliances per site for high availability
  • The following open outbound ports: 53 TCP & UDP, 443 TCP & UDP, 80 TCP, 2222 TCP, and 123 UDP
The virtual appliance employs DNSCrypt between itself and OpenDNS; it is enabled by default, so DNS packets cannot be intercepted or tampered with in transit. According to dnscrypt.org,
“DNSCrypt is a protocol that authenticates communications between a DNS client and a DNS resolver. It prevents DNS spoofing. It uses cryptographic signatures to verify that responses originate from the chosen DNS resolver and haven’t been tampered with.”

Next, after logging into OpenDNS, we download the virtual appliance VHDs and configuration files from Configuration -> System Settings -> Sites & Active Directory -> Download Components.
Here, we’ve downloaded the zipped VAs, a Windows domain controller configuration file, and a Windows service connector.

With our data downloaded and unzipped, we have the following files ready for installation:


 
OpenDNS virtual appliance VHDs

Because OpenDNS requires two virtual appliances on our network, we make two copies of the VHD files and name them appropriately in the default Virtual Hard Disks folder on our Hyper-V 2012 host.


 
Renamed VHD files

Next, we create our first VM and choose to use an existing VHD named “forwarder-VA-1” for the first hard drive. Keep in mind the minimum requirements of 4 cores and 1.5GB RAM. In the settings of the VM, we add the “dynamic-VA-1.vhd” as our second hard drive under the same IDE controller as the “forwarder-VA-1” HDD.


 
IDE controller second VHD

We apply the settings to the VM, connect to the VM by opening the console, start the VM, and are brought to the Forwarder Configuration.

Configuration of virtual appliance forwarder

 
Forwarder configuration

I named the machine “DynamicForwarder-IP-Address,” or “DF1-10.X.X.X,” and provided the IP address, Netmask, Gateway, Local DNS1, and Local DNS2. I then hit Save.

After a moment, we see all of the connectivity settings turn green, with the exception of the AD Connector, which we have not yet set up.


 
Dynamic forwarder setup

On the OpenDNS configuration site, we should now find our Dynamic Forwarder listed, and indeed it is.


 
OpenDNS system settings

With our initial Dynamic Forwarder complete, we can begin to set up our second Dynamic Forwarder, ideally on a separate Hyper-V host.

After setting up our second virtual appliance, we’ll find that our status has changed from red to green because we now have redundant VAs.


 
Verified redundant VAs

 

Configure OpenDNS Umbrella sites and domains

Next, on the OpenDNS website, we make sure that our VAs have been added to the correct site. Under System Settings -> Internal Domains, we check that all of our internal domains are listed correctly.






We can now test by pointing a single client’s DNS server settings to our virtual appliances and verifying that internal IP addresses are resolved in the reports. We can see below that they now are. It’s also recommended to create A records for our new DNS virtual appliances so that we can run nslookup by both name and IP address.


Activity report with resolution

From here, it’s possible to go on toward setting up Active Directory Services, either on a dedicated server or an existing Domain Controller. This will give us further information, down to the computer name, which helps us easily find rogue machines on our network.

 

Conclusion

OpenDNS has an excellent solution for locking down DNS, which we know is a scary point of vulnerability, especially for malware that “phones home.” Although we can and often do point our DNS to 8.8.8.8, OpenDNS Umbrella gives us added protection and insight into internet traffic.

How to Integrate OpenDNS Umbrella with Active Directory

In my previous article, we set up redundant OpenDNS Umbrella virtual appliances to forward DNS data from our internal network to OpenDNS. We concluded with reports that correctly display IP addresses from our internal network. Now we want to go further and record Active Directory information such as computer login and group information.






To do this, we’ll need to create an OpenDNS service user account on our domain, set up our domain controllers with the OpenDNS Connector service, and run a configuration script against all of our domain controllers. The connector will watch our domain controllers that the configuration script has modified and will relay the information to our OpenDNS Umbrella account.

 

Download required files

To get started, first download the Sites and Active Directory components from OpenDNS Umbrella > Configuration > System Settings > Sites & Active Directory > Download Components:
 
Download AD components

Download the files for the Windows Configuration and the Windows Service.


 
AD service and configuration script download

When finished downloading, you’ll have three files: OpenDNS-WindowsConfigurationScript-20130627.wsf, Config.dat, and Setup.msi. The Config.dat file will contain your OpenDNS account identity information.


 
AD Connector files

Create optional new domain controller

You have the option to keep the connector service separate from your domain controllers by using a member server. The minimum specs are similar to those for the OpenDNS Virtual Appliances and are as follows:
  • Static IP address
  • 4 CPU cores and 1.5GB RAM
  • 7GB of disk space
  • The following open outbound ports: 53 TCP & UDP, 443 TCP & UDP, 80 TCP, 2222 TCP, and 123 UDP
  • White-list the processes OpenDNSAuditClient.exe and OpenDNSAuditService.exe if local anti-virus software is installed
Install the AD Domain Services Snap-Ins and Command-Line Tools feature via Remote Server Administration Tools > Role Administration Tools > AD DS & AD LDS Tools > AD DS Tools. Again, you can either run the connector service from a new stand-alone member server with the AD tools installed, or run it from an existing DC. I chose the latter and ran the service on my PDC.
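On a member server, the required tools can also be added from PowerShell; a sketch, assuming Windows Server 2012 R2 (the feature name RSAT-ADDS-Tools is worth double-checking against the Get-WindowsFeature output first):

```powershell
# List the available RSAT AD DS features, then install the AD DS tools.
# The feature name below is an assumption; confirm it with the output
# of Get-WindowsFeature before installing.
Get-WindowsFeature RSAT-ADDS* | Format-Table Name, InstallState
Install-WindowsFeature RSAT-ADDS-Tools
```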

If you create a new member server, be sure to open WMI ports in the firewall with the following command run as Administrator:

netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes

Create an OpenDNS service user

In addition to the connector and scripts, we need to create a new service user on our domain so our services run uninterrupted. Create a new domain user account and set the logon name to OpenDNS_Connector.

Provide a strong password and then check the Password Never Expires checkbox. Add the new user to the following groups:
  • Event Log Readers
  • Distributed COM users
  • Enterprise Read-only Domain Controllers (RODC)

 

Set up domain controllers with cscript

Next, we want to run the .wsf script against all of our domain controllers. Open an elevated command prompt (run as Administrator).

According to the instructions here (OpenDNS Umbrella account required), from the command prompt, enter cscript <filename>, where <filename> is the name of the configuration script you downloaded. The script will display your current configuration and then offer to auto-configure the domain controller (y/n). Type y to continue. If the auto-configuration steps are successful, the script will register the domain controller with the Umbrella dashboard.


 
Cscript on domain controller

Again, be sure to run the .wsf Visual Basic script on all of your domain controllers in each of your sites. In this instance, I was only required to run the script against one domain controller.

 

Install the connector

If you have a domain controller that is a Server 2012 R2 Core installation, simply copy the files to the C:\ drive, then run the Setup.msi file from the command prompt with .\Setup.msi.


 
OpenDNS Connector setup

When prompted, provide the password for the OpenDNS Connector service account that we created earlier.


 
OpenDNS service account credentials

After we’ve verified authentication, we can jump onto our OpenDNS dashboard and confirm that our AD Connector is listed, which we can see in the following screenshot.


 
AD Connector established






Testing AD integration

Once our AD Connector and AD servers are reporting to our OpenDNS Umbrella account, we can see our AD user and group information listed in the reports under the Identity column.

 
Activity displays identity information

 

Conclusion

With just a little extra work, we can extend the amount of information available to us on our OpenDNS Umbrella Reporting console to include Active Directory information. If your organization has multiple sites, you can set up Connectors to monitor each location concurrently at no additional cost. It’s nice that OpenDNS Umbrella provides the support and tools necessary to secure and categorize DNS, provide accurate reporting, and manage DNS traffic from a single web-based console.

How to Find Active Directory Users with empty password using PowerShell

If the PASSWD_NOTREQD flag is set in the userAccountControl attribute, the corresponding user account can have an empty password, even if the domain password policy disallows empty passwords. This presents a security risk. The PowerShell script in this post finds user accounts in your Active Directory domain where the PASSWD_NOTREQD flag is set.





Viewing the PASSWD_NOTREQD flag in ADUC

You can view the userAccountControl attribute in Active Directory Users and Computers (ADUC). Make sure that you have enabled Advanced Features in the View menu of ADUC.

 
Enabling Advanced Features in ADUC

You can then view the value of the userAccountControl attribute in the Attribute Editor tab of the user account’s properties.


 
userAccountAttribute in ADUC

The userAccountControl value for a user account with an expiring password is 0x200 (512 decimal).


 
userAccountControl with NORMAL_ACCOUNT flag and expiring password

Accounts with non-expiring passwords have the value 0x10200 (66048 decimal).


 
userAccountControl with NORMAL_ACCOUNT flag and non-expiring password

User accounts with the PASSWD_NOTREQD flag have the extra bit 0x20 set and show as 0x220 (544 decimal) for accounts with expiring passwords.


 
userAccountControl attribute with PASSWD_NOTREQD flag and expiring password

User accounts with non-expiring passwords have the value 0x10220 (66080 decimal).


 
userAccountControl attribute with PASSWD_NOTREQD flag and non-expiring password

As you can see in the above screenshots, the last two values include the PASSWD_NOTREQD flag. The flag could have been set by manipulating the userAccountControl attribute through the Attribute Editor or programmatically, for instance via PowerShell.

If the user accounts are disabled with a non-expiring password, the userAccountControl attribute is set to 0x10222 (66082 decimal).


 
userAccountControl attribute with ACCOUNTDISABLE and PASSWD_NOTREQD flags, non-expiring password

Disabled user accounts with an expiring password are set to 0x222 (546 decimal).


 
userAccountControl attribute with ACCOUNTDISABLE and PASSWD_NOTREQD flags, expiring password
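The values above are just combinations of individual flag bits; a quick PowerShell sketch of the arithmetic:

```powershell
# userAccountControl is a bit field; the decimal values in this post
# are simply ORed combinations of individual flags
$NORMAL_ACCOUNT     = 0x200    # 512
$PASSWD_NOTREQD     = 0x20     # 32
$DONT_EXPIRE_PASSWD = 0x10000  # 65536
$ACCOUNTDISABLE     = 0x2      # 2

$NORMAL_ACCOUNT -bor $PASSWD_NOTREQD                          # 544
$NORMAL_ACCOUNT -bor $PASSWD_NOTREQD -bor $DONT_EXPIRE_PASSWD # 66080
$NORMAL_ACCOUNT -bor $PASSWD_NOTREQD -bor $ACCOUNTDISABLE     # 546
```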


Using PowerShell to find users with PASSWD_NOTREQD flag

First, we create a report folder named c:\admin.

New-Item -Path c:\admin -ItemType directory -force

After that, we get the distinguished name of the domain and save it in the variable called $domainDN.

$domainDN = get-addomain | select -ExpandProperty DistinguishedName

Now we can use Get-ADUser with an LDAP filter to search for the affected user accounts. We use the domain DN as SearchBase and save it to a text file.

Get-ADUser -Properties Name,distinguishedname,useraccountcontrol,objectClass -LDAPFilter "(&(userAccountControl:1.2.840.113556.1.4.803:=32)(!(IsCriticalSystemObject=TRUE)))" -SearchBase "$domainDN" | select SamAccountName,Name,useraccountcontrol,distinguishedname >C:\admin\PwNotReq.txt

Instead of saving everything to a text file, we can view the output with Out-GridView:

Get-ADUser -Properties Name,distinguishedname,useraccountcontrol,objectClass -LDAPFilter "(&(userAccountControl:1.2.840.113556.1.4.803:=32)(!(IsCriticalSystemObject=TRUE)))" -SearchBase "$domainDN" | select SamAccountName,Name,useraccountcontrol,distinguishedname | Out-GridView

This is the complete PowerShell script:

# Create admin folder
New-Item -Path c:\admin -ItemType directory -force
# Get domain dn
$domainDN = get-addomain | select -ExpandProperty DistinguishedName
# Save pwnotreq users to txt
Get-ADUser -Properties Name,distinguishedname,useraccountcontrol,objectClass -LDAPFilter "(&(userAccountControl:1.2.840.113556.1.4.803:=32)(!(IsCriticalSystemObject=TRUE)))" -SearchBase "$domainDN" | select SamAccountName,Name,useraccountcontrol,distinguishedname >C:\admin\PwNotReq.txt
# Output pwnotreq users in grid view
Get-ADUser -Properties Name,distinguishedname,useraccountcontrol,objectClass -LDAPFilter "(&(userAccountControl:1.2.840.113556.1.4.803:=32)(!(IsCriticalSystemObject=TRUE)))" -SearchBase "$domainDN" | select SamAccountName,Name,useraccountcontrol,distinguishedname | Out-GridView

In this post, I only covered accounts that are not using smartcards. The table below gives an overview of possible userAccountControl values, including accounts that require smartcards.

512     Enabled account
514     Disabled account
544     Enabled, Password Not Required
546     Disabled, Password Not Required
66048   Enabled, Password Doesn't Expire
66050   Disabled, Password Doesn't Expire
66080   Enabled, Password Doesn't Expire & Not Required
66082   Disabled, Password Doesn't Expire & Not Required
262656  Enabled, Smartcard Required
262658  Disabled, Smartcard Required
262688  Enabled, Smartcard Required, Password Not Required
262690  Disabled, Smartcard Required, Password Not Required
328192  Enabled, Smartcard Required, Password Doesn't Expire
328194  Disabled, Smartcard Required, Password Doesn't Expire
328224  Enabled, Smartcard Required, Password Doesn't Expire & Not Required
328226  Disabled, Smartcard Required, Password Doesn't Expire & Not Required
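To make sense of an arbitrary value without memorizing the table, a small helper that tests each relevant bit can be sketched like this (the flag names follow the standard userAccountControl bit definitions; Convert-UacValue is a hypothetical function name):

```powershell
# Decode a userAccountControl value into the flag names used in this post
function Convert-UacValue {
    param([int]$Value)
    $flags = @{
        ACCOUNTDISABLE     = 0x2
        PASSWD_NOTREQD     = 0x20
        NORMAL_ACCOUNT     = 0x200
        DONT_EXPIRE_PASSWD = 0x10000
        SMARTCARD_REQUIRED = 0x40000
    }
    # Return the name of every flag whose bit is set in $Value
    $flags.GetEnumerator() |
        Where-Object { $Value -band $_.Value } |
        ForEach-Object { $_.Key }
}

# Decodes to ACCOUNTDISABLE, PASSWD_NOTREQD, NORMAL_ACCOUNT (order may vary)
Convert-UacValue 546
```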

If the script found a user account where the PASSWD_NOTREQD flag is set, you can edit the user object in ADUC. For instance, you can change the userAccountControl attribute value from 544 to 512 (NORMAL_ACCOUNT with expiring password).

 
userAccountControl attribute before the change


 
userAccountControl attribute after the change

If the script finds many user accounts with the PASSWD_NOTREQD flag, you can automate the fix with PowerShell.

$UsersNoPwdRequired = Get-ADUser -LDAPFilter "(&(userAccountControl:1.2.840.113556.1.4.803:=32)(!(IsCriticalSystemObject=TRUE)))"
foreach ($user in $UsersNoPwdRequired) {
    Set-ADAccountControl $user -PasswordNotRequired $false
}







If you want to log the corresponding user names, you can save them to a text file.

# Create the log file if it does not exist yet
$logfile = "C:\admin\ADUsersChangedPWNOTREQD.txt"
if (-not (Test-Path $logfile)) {
    New-Item $logfile -ItemType File | Out-Null
}
# Set the PasswordNotRequired flag to false and log each account
$UsersNoPwdRequired = Get-ADUser -LDAPFilter "(&(userAccountControl:1.2.840.113556.1.4.803:=32)(!(IsCriticalSystemObject=TRUE)))"
foreach ($user in $UsersNoPwdRequired) {
    Set-ADAccountControl $user -PasswordNotRequired $false
    Add-Content $logfile "$user"
}

 
Log file with user accounts