In a previous post we talked about one of my favorite tools, System Information. Well, guess what… it gets even better. You can run the tool on an entire collection!
This will give you a nice quick overview of the machines. This view allows customization and adding / removing columns.
I’d often run the WOL tool on the collection first and give it a couple minutes before running this tool to get a nice overview. As you can see in my image, a few issues stand out right away that I need to spend some time looking into, a couple of which I wasn’t actually expecting to see. So perhaps a future post on troubleshooting the CM Client is coming. 🙂
Ways I’d use this report:
Was the machine on?
Was someone logged on to machine?
Check Cache Size
Check Cache “levels”
Here’s a scenario: a new BIOS came out, I tested it on my one test machine (successfully), and wanted to test on another machine or two… well, the old adage was true: “I don’t always test, but when I do, I test in Production”. I used to have a collection for each model of machine, and a collection that would list all machines of that model with a down-level version of the BIOS… see how I did it in this old post HERE.
I’d right click on the collection, find a couple … “volunteers” (computers on, no user logged on) and use the Rerun Deployment Right Click Tool to trigger the update. I’d use the Right Click Tools “Ping System” to watch it reboot and come back up, then I’d use the System Information Right Click Tool on the device to see if it now had the new version of the BIOS installed.
Using a combination of Right Click Tools, I could do several tests on “Pilot / Volunteer” computers with no user impact, all remotely.
I’d also like to see the overall “Cache Health” of a random sampling: I’d confirm that machines were getting the right cache size via policy, and look for any “odd” things, like an empty cache for example.
The System Information Tool is a great way to get an overview and status of a large number of machines in a collection.
Over the past 7 years of being a ConfigMgr admin and having the Right Click Tools at my fingertips, one of them stands out as my most used tool: System Information. I’d consider this single Right Click Tool a Swiss Army Knife in your pocket.
You can run this tool on both individual machines and entire collections; as shown above, it was run on my HP laptop device. This tool has made improvements over time, and I’m quite sure it will continue to be tweaked in the future. As you see, you get a nice overview of the machine, both OS & hardware info, in the General tab. I’d often use this to confirm the BIOS version right after I’d push a deployment to the test machine. Since this data is pulled straight from the machine, you get instant results without having to wait for Hardware Inventory.
The Add/Remove Programs tab gives you a listing of the “Legacy” applications installed. I’d often use this to remove rogue / unapproved apps. The rogue app issue became much smaller once we set up AppLocker, but we still had some devs with local admin rights who liked to abuse things. This is also a nice spot check to make sure a machine updated Chrome / etc. with the latest version you’re deploying. It also provides the uninstall string if available; I’d use this often to grab uninstall strings for scripting uninstalls to push out.
Windows Update, you guessed it, shows a list of installed updates, then allows you to link to the KB. Once again, I’d use this to confirm my ADRs were pushing updates to my Test group and that it was getting installed without having to wait for reporting to catch up. Also if you get a report from your security team of a machine missing patches, you can confirm / deny pretty quickly.
The Services tab gives a list of all services, with options to Stop (if running), Start (if stopped) and set the startup type. This is handy when troubleshooting client issues and you need to stop a service while you do some remote troubleshooting. It was also helpful if I was looking at a known-issue machine I received a ticket on: I could look for rogue services, perhaps malware, or look for services that should be running and aren’t, and vice versa.
Drivers tab.. yep, as you can see the tab names are pretty self explanatory. I found this handy to confirm that a machine would get the drivers applied that I had in my driver packages for OSD / IPU.
User Profiles Tab shows a list of all profiles. I’ve seen machines with hundreds of profiles (Computer Lab Machines / Shared PCs). Now you can use GPO to have profiles auto clean up after X days, and I’d recommend doing that for several situations. I found this tool handy when I was trying to manually clean up a machine with low disk space. Your Service Desk will probably find this helpful too, they can remove their own profile from machines after they remoted to machines to assist users, or resolve issues.
Quickly check which users are in which groups, and remove someone if needed. Once again, handy on machines as a quick confirmation no funny business is going on with the admin group.
Lastly, the Battery page. Ever kick off a large deployment on a laptop and think, “sure hope it’s plugged in or has a lot of battery”? Take a quick glance here to make sure you’re not causing a bigger problem by kicking off that deployment (Rerun Deployment Right Click Tool).
Bonus Tips… COPY & PASTE.. EVERYTHING. Pastes really nicely into Excel as well.
Personal Pros: It gets information real-time by connecting to the machine and pulling back the info, so it’s not limited to waiting on hardware inventory.
Personal Cons: It’s Real-time, meaning if the machine is off, no data. Typically this wasn’t a huge deal, WOL (using another Right Click Tool) would wake the machines up and I could get the info.
Overall, I love this tool and use it a ton as a CM admin (often being 3rd tier support for Service Desk) and back when I was on the Service desk. This one tool pretty much gives you a complete picture of the machine in question with the ability to do some basic tasks all in one spot.
Another one of my Top 10 Right Click Tools is Advanced Collection Information. Right now you’re thinking: ha, that tool is obsolete, since 1906, ConfigMgr shows the collections a machine is in right in the lower pane. At first glance, you’d be right; however, once you launch the tool and take a closer look, you’ll see some additional useful information.
Collections: List of all Collections the machine is in, along with the folder location of said collection and the collection ID.
Collection Variables: List of Variables applied to the machine from the collection it is in. Useful to confirm / troubleshoot when these variables are used in a Task Sequence, like enabling Debug Mode in 1906.
Maintenance Windows: List of Maintenance Windows that are applied to this machine. Ever wonder why a deployment didn’t run when expected? It could be a Maintenance Window. Execmgr and status messages will help point to this, as they will say the deployment is ready but waiting for an available maintenance window.
Power Plans: I didn’t grab a screen capture, I’ve actually never used CM to create Power Plans, but if I did, I’d find this tab useful. 🙂
So hopefully you can see, while ConfigMgr continues to add features and enhance the UI, Right Click Tools add value to the ConfigMgr Console and make the ConfigMgr admin’s life easier.
This week, I wanted to highlight another nifty little tool, which is great for troubleshooting and reporting. Before I tell you which one it is, I’m going to talk about something instrumental in ConfigMgr, but not something we often think about, Status Messages.
What are Status Messages: In System Center Configuration Manager, status messages are the universal means for components to communicate information about their health to the System Center Configuration Manager administrator. Status messages are similar to Windows NT Events; they have a severity, ID, description, and so on. – Microsoft Docs
If you want to go into deeper detail about the messages themselves, Microsoft Docs has you pretty well covered. So what do I use them for? Personally, I live in a world of Task Sequences and Deployments, and while status messages can tell you so much more, I find them primarily useful for keeping me informed about what a deployment on a machine is doing.
Several canned CM reports rely on status message data to surface information about deployments. This is very helpful when trying to do near-real-time reporting. You can monitor deployments: see which step of a Task Sequence a machine is on, how many machines have started a deployment, or, worse, how many have failed one.
The downside of status messages: they require network connectivity. This is fine most of the time, but let’s say you have deployments running while a machine is offline (powered on, but not connected to the network). Guess what, you aren’t getting those status messages; that deployment is going, or not going, and you have no idea. You won’t really know until it comes back online and you get updates via Hardware Inventory.
So, yeah, the reports are great, and when monitoring large deployments of machines you’ll want to use them, but when you’re troubleshooting and focusing on one machine, you just want to stay in the console, where all your tools are. That’s when this nifty Recast Right Click Tool comes in: All Status Messages / Device Status Messages.
From a Deployment
This will show all Status Messages from ALL computers for that deployment, which can get busy, but also help in tracking down patterns.
From a Device
You can see here that it’s pulling back all the Status Messages for this specific machine. I also noticed that the upgrade task sequence has been failing on this machine, which I honestly wasn’t expecting. It’s nice that there is so much info in the Status Messages.
To be clear, the Config Manager Status Message Viewer is built into ConfigMgr; you can pull up this same data without the Recast Right Click Tools. What the Right Click Tools provide is a shortcut to this Viewer that pre-populates it with the deployment info or computer name, saving you a step and making it easily accessible. Much of what the Right Click Tools do is surface information and shorten the paths to data, enabling admins to be more effective in their roles. This is another great example of a tool that reduces the amount of clicks required to access this data, while also putting it front and center, as a reminder when I right click on a machine that this data is available.
Good hook of a title, right? Here’s why I say that: LAPS (Local Administrator Password Solution) has been around for several years, and I’d be willing to bet a lot of orgs haven’t implemented it yet (total guess here, no actual data other than my Twitter survey shown below). Good news: many have, as I had at a previous employer. It’s simple to set up, and greatly reduces a previously easily exploitable attack vector. LAPS … mitigates the risk of lateral escalation that results when customers use the same administrative local account and password combination on their computers. -Microsoft
So there you have it: you have it set up (if not, see the previous post), and now you move on with your operational duties and focus on whatever fire your manager throws at you today. Why is that? Who seriously did client health for LAPS once it was set up? Who confirmed it was working on all endpoints? I certainly didn’t; I had “bigger fish to fry”. But as with any implementation, you need to check up on it from time to time. This shouldn’t be a surprise: you do this with the ConfigMgr client, you monitor client health, perhaps you’ve even implemented auto-remediation scripts. You probably monitor your AV / anti-malware system, IDS, disk encryption, etc. So why not LAPS?
So the team at Recast Software created a nifty dashboard for you to monitor your LAPS “health”. This is included in the Enterprise version of the Right Click Tools. Here is an image from their documentation, which has a bit more data than my little lab:
Why do I like this? Because I’m already in the CM Console every day; having a dashboard for LAPS keeps it visible and at the front of our minds, which I find highly useful. It only takes a few seconds: pull it up, check my stats, and move on. If I find an anomaly, I can start looking into it.
What else do I like? Glad you asked, I like that I can look up the passwords here. No need to make a special package for my Service Desk Techs to be able to lookup passwords, they already have the CM Console, now I just grant them permissions to this feature and they now have a powerful tool for when they need to look up these passwords for their support needs.
Is that all? Nope. I also like that I can export this list to a CSV file and provide it to the Security Team / audit folks, who want to confirm compliance.
Currently in version 4.0, this dashboard is querying AD to see which computers have a LAPS password. In 4.1, additional features are planned to be incorporated into this dashboard, which will require additional permissions, but I’ll cover that once it’s released.
So how do you set this up? I’ve got the Right Click Tools Enterprise license and setup the Recast Management Server, what are the permissions I need to allow my Service Desk to view this dashboard? What permissions are required to allow my Service Desk the ability to view the passwords? I’m going to go over that, building off of my last post where I setup LAPS and created AD Groups with different permissions for LAPS. Assumptions before continuing: You have Specific AD Groups you want to grant permissions to.
First, in my lab, my Service Desk Tier 1–3 support positions have different access to the CM Console. I want all of them to have the ability to see the dashboards and pull up the passwords. In CM:
Before I add any permissions, this is what the Dashboard would look like without the proper permissions: (Using Service Desk Tier 1 User)
So now we know what it looks like when you don’t have rights, so let’s add some permissions. In Recast Management Server, I’ve created a ReLAPS role with just permissions for the ReLAPS console. Testing with 3 “Rights” from the options… getting close…
Ok, this looks better: (Still using a Service Desk Tier 1 User)
So what does this look like in the Recast Management Server Console?
Created a ReLAPS Role with minimum requirements for ReLAPS console.
Then added the LAPS Read Only Group, and assigned it to the ReLAPS Role:
Ok, now you can rest easy knowing your Service Desk has the ability to do the only tasks you want them to do, and no more. Sure, you probably already granted them far more rights to use other tools already, but hey, if you find you have a need to only allow users to view that Dashboard, you will now know how.
In a future post, once Right Click Tools 4.1 is released, I’ll be creating an updated post with the additional permissions required. I’m also thinking about going into scoping LAPS, ie allowing Server Admins to only see Server LAPS passwords and Service Desk to only see Workstation LAPS passwords. Let me know if this is something of interest.
“The “Local Administrator Password Solution” (LAPS) provides management of local account passwords of domain joined computers. Passwords are stored in Active Directory (AD) and protected by ACL, so only eligible users can read it or request its reset.” – Microsoft. Basically, it reduces the risk of having a default (backdoor, perhaps) local administrator & default password on your machines by having each machine use a different complex password for the account. Before LAPS, most organizations had a generic local admin (ex: ORG_LocalAdmin) with the same password on each machine (ex: ORG_P@ssword). The problem with that is, if a machine was compromised, the malware / hacker could move laterally among all your machines, gathering more and more data to deepen the security breach. With LAPS implemented, you remove that attack vector: if one machine is compromised, the ability to move laterally to another machine is greatly reduced.
There are quite a few guides out there, and the Microsoft Docs are pretty good too. I didn’t do extensive searching before creating this post, so note that this may be redundant.
In this Walk-Through, I’ll cover
Create Source Folders
Create End Point Installer Application
Deploy LAPS Application to End Points
Extend AD Schema (From Domain Controller)
Setup LAPS AD Groups and Permissions
Manually Install LAPS Admin Client
Verify Permissions and Read / Reset Access
Basic Enable of Group Policy
Tests to confirm Permissions are working
Things I’m not covering
The Why’s behind each step. Much of the details and reasons why you have to perform these steps are already documented well in the Microsoft “LAPS_OperationGuide” which is part of the download, and quite honestly, that’s what I’m using as I create this Walk Through, so I suggest you look over that before you even start.
Every deployment scenario. This is a generic and SIMPLE lab; while much of this is the same for any environment, each environment is different and each organization is set up differently. LAPS setup will probably require multiple teams’ involvement (AD / CM / Deployments / GPO).
Things to Consider beforehand
Active Directory Structure (OUs with Workstations)
I’ve downloaded all of the Files into a “LAPS” directory then created a new folder to move the MSI Files into.
In the CM Console, Create a new Application. Point it to the x64 version of the MSI
Once you choose the MSI and click Next, it will pull the information for the application from the MSI.
As you click Next, you’ll come to General Information, I added “Microsoft” as publisher, and changed /q to /qn
At this point, just click Next, leaving the defaults, until it completes and you click Close. You’ll now have the Local Administrator Password Solution application in your console. We just need to make a couple of tweaks. In the Properties of the application, click the Deployment Types tab, choose the deployment and click Edit, then go into Requirements and add the x64 versions of Windows in your environment.
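For reference, with the /q changed to /qn, the install program the wizard builds amounts to a standard silent MSI install, something like this (the exact MSI filename depends on the LAPS version you downloaded):

```powershell
# Silent install of the LAPS client (filename may differ by version)
msiexec /i "LAPS.x64.msi" /qn
```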
At this point we have the app, so let’s get it deployed to the workstations. Since you’ve added the logic into the app, you can safely deploy it to all your workstations. NOTE: this is where knowing your environment matters, so deploy to the appropriate collection. Perhaps you have a business reason not to deploy it to all workstations. Just use best practices for deployments (maintenance windows, etc.). The rest of this example is just generic.
I left Scheduling set to defaults, and User Experience and Alerts all at defaults.
Admin Client / LAPS Management Client
So now that the client is being deployed, let’s get the infrastructure set up. First we’ll switch over to a client test machine, or your typical admin workstation. Let’s get the LAPS client installed along with the management tools. Once you kick off the installer (double-click the MSI), click through the first couple screens to get to “Custom Setup”, and once here, enable all options.
Go ahead and let it install. We’ll need to grab some of the items it installed and we’ll copy them back out to our source server for easy access.
Go to C:\Windows\PolicyDefinitions, here you will grab the AdmPwd.admx file, and the AdmPwd.adml file from the en-US subfolder. I created a folder called GPO_ADMX in my source location to copy them to.
Also, Copy the AdmPwd.PS folder from the PowerShell Modules: C:\Windows\System32\WindowsPowerShell\v1.0\Modules
You’ll need those later.
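If you’d rather script that copy, a quick sketch like this grabs all three pieces (the source share path here is made up; substitute your own):

```powershell
# Hypothetical source share - replace with your own
$Source = "\\Server\Source\LAPS"

# ADMX / ADML templates for Group Policy
Copy-Item "C:\Windows\PolicyDefinitions\AdmPwd.admx" "$Source\GPO_ADMX\"
Copy-Item "C:\Windows\PolicyDefinitions\en-US\AdmPwd.adml" "$Source\GPO_ADMX\en-US\"

# PowerShell module containing the AD schema / permission cmdlets
Copy-Item "C:\Windows\System32\WindowsPowerShell\v1.0\Modules\AdmPwd.PS" "$Source\AdmPwd.PS" -Recurse
```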
Now, you can (and should) do these steps from your workstation, but to make sure I had connectivity and rights, I did it from my actual Domain Controller. You’d typically do this from an admin machine with proper credentials, as your DCs should be Core and not even have a desktop experience; you typically never want to actually log onto a DC. But this is a lab, and I’m just making a demo.
Modify the AD Schema
On the Domain Controller, copy the AdmPwd.PS folder you uploaded to your source into the local module repository on your DC, then launch Admin PowerShell Console. In this image, you can see I tried to Import-Module before I had copied the files onto the DC, after the copy, the command runs correctly:
Run the command: Update-AdmPwdADSchema:
In my lab, you can see it successfully added 2 attributes and modified one class.
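Put together, the whole schema step boils down to two commands once the module folder is in place:

```powershell
# Load the LAPS module copied into the local module path
Import-Module AdmPwd.PS

# Adds the two password attributes and modifies the computer class
Update-AdmPwdADSchema
```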
Hopefully you considered a few things before starting this Journey, like which OU the workstations are in that you want to apply this to, and who do you want to have permissions? For my lab, it’s pretty easy, I have 1 Master OU setup for WorkStations, and all other workstations fall into Sub OUs of that Master OU.
At this point, it’s nice to check and see who has rights to view that info in AD. In your PowerShell console, type “Find-AdmPwdExtendedRights -Identity <OU Name> | Format-Table”.
As you can see, rights are pretty clean, I’m ok with those folks having rights to LAPS.
Now, in AD, let’s set up a Read & Reset group to grant access to LAPS. I’ve created two groups: LAPS Read Only & LAPS Reset PWD:
Now we need to grant machines the ability to update their own password, so we grant access to the built-in SELF account for all machines in the Workstation OU: Set-AdmPwdComputerSelfPermission -OrgUnit <OU Name>
Next we need to grant users rights to look up that information, this is where those groups come in. We’re going to give “LAPS Read Only” rights to Read LAPS Passwords: Set-AdmPwdReadPasswordPermission -OrgUnit <OU Name> -AllowedPrincipals <FQDN Group Name>
We’re going to give “LAPS Reset PWD” rights to reset LAPS passwords: Set-AdmPwdResetPasswordPermission -OrgUnit <OU Name> -AllowedPrincipals <FQDN Group Name>
We’re also going to confirm it did something, using the Find-AdmPwdExtendedRights command:
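To recap the permission work in one place, here are the four commands together; the OU and group names are from my lab, so substitute your own:

```powershell
# Let each machine in the OU update its own password (SELF)
Set-AdmPwdComputerSelfPermission -OrgUnit "Workstations"

# Grant read access to the read group
Set-AdmPwdReadPasswordPermission -OrgUnit "Workstations" -AllowedPrincipals "LAPS Read Only"

# Grant reset access to the reset group
Set-AdmPwdResetPasswordPermission -OrgUnit "Workstations" -AllowedPrincipals "LAPS Reset PWD"

# Verify who ended up with extended rights on the OU
Find-AdmPwdExtendedRights -Identity "Workstations" | Format-Table
```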
Now, in AD, you can nest the groups you want in your LAPS Security Groups to have access:
For my Lab, I have Service Desk Tier 1 & 2 Read only, and Tier 3 can Reset.
Group Policy: You’ll need to copy the ADMX & ADML files you copied to your source folder into your Group Policy Central Store, which is located here: \\FQDN\SYSVOL\FQDN\policies\PolicyDefinitions
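A quick sketch of that copy, using the same made-up source share as earlier and a placeholder domain:

```powershell
$Source = "\\Server\Source\LAPS\GPO_ADMX"   # hypothetical source share
$Store  = "\\corp.contoso.com\SYSVOL\corp.contoso.com\policies\PolicyDefinitions"

Copy-Item "$Source\AdmPwd.admx" $Store
Copy-Item "$Source\en-US\AdmPwd.adml" "$Store\en-US\"
```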
Now you can Launch Group Policy and create your LAPS Policy. For this Demo, I’m going to create a new simple Policy, but you can always add it into one you already have. The new GPO is set to defaults, except I disable User Policies, as this will all be machine based, no point in having it look for user policies:
I’ve setup the basic settings to make this work with my lab. In my lab, I have a local admin account on the computer besides the disabled default, which is named “MyLocalAdmin”, which is the account I want LAPS to manage:
OK, that’s it, you have it all set up. Now it’s time to confirm you get the results you wanted.
Standard End Users (Should have No Rights)
Service Desk Tier 1 (Should have Read Access)
Service Desk Tier 3 (Should have Read / Reset Access)
Test 1: Standard User:
Test 2: Tier 1 Service Desk:
Test 3: Tier 3 Service Desk:
We learn from this test that Reset permissions do not include Read. So, unless you have a need for a group to be able to reset this password and not read it, I’d nest the LAPS Reset PWD group inside of the LAPS Read Only group.
Now we have the desired results, Tier 3 Support can both Read & Reset the LAPS password.
I hope you found this LAPS overview useful, and that it provided additional information not found in other ones. The main reason I’m writing this is for Part 2: configuring Recast Management Server users / groups to view the ReLAPS (LAPS) Dashboard in the CM Console. Stay tuned!
Ever wonder why a machine doesn’t show up in a collection? You added it, either manually, <Start Plug for great product> or via an awesome tool like the Recast Right Click Tools “Add Computer to Collection(s)” <End Plug> and when you show the members of the collection, it just never shows up? I have, and it nearly drove me crazy, or perhaps a deeper level of crazy than I already am. So, why does it do that anyway? That’s what I’m intending to explain in this post.
What this Post is NOT. Collection Evaluation Troubleshooting, there are already plenty of great posts out there that give guidance in that area.
There are several things that affect, or is it effect… whichever.. the resulting collection membership, or the evaluated results.
Direct Membership Rules
Collection Queries: The process of dynamically adding machines to a collection based on criteria or properties of computers. Common queries are based on:
Hardware Type (Make / Model)
Operating System Information
For some pre-created Community Queries, check out Ander’s Post Example of a Query for 1709 Computers
Direct Membership Rules: Simple 1 to 1 mapping of Computers. This is when you manually add Computers into a Collection, either individually or via Batch, like I blogged about recently.
In this example, you can see the Direct Members that have been added to this Collection. If your collection ONLY has direct members, there is no point in checking the boxes for updating the collection.
Include Collections: When you “nest” collections; Collection A includes Collections B & C. This is when things become a bit more complex, but still straightforward. Why? Say you already have several collections based on queries, like Marketing & Sales, or Windows 1607 and Windows 1709. Let’s say you want to deploy a Task Sequence to both Marketing & Sales. Why make two deployments? Why create a new collection with two queries? You already have those machines in collections, so create a new collection that includes both and deploy to that. The example below shows an Include collection containing two other collections: 1607 (4 devices) + 1709 (7 devices) = 11 total devices in the new collection. That’s 10 VMs and 1 Dell, which will be useful to know in the next example.
Exclude Collection: When you Nest a Collection of machines you want Excluded from your collection results. This is where it can be really tricky and make you scratch your head a bit. In this example, we’ll build off our last, we left the collection with 11 devices, but I want to exclude Dell’s from the upgrade, because they are not compatible with 1809 (just hypothetical). You can see in this example, I have 1 Dell machine, now that it’s been excluded, the total of machines goes down to 10. Simple right?
Now, it’s not just that simple, this example ONLY works because the Dell machine is actually running 1709, which means it was in the 1709 Collection that was included. So while it’s still in the 1709 Collection which is included, the exclude rule overrides the include rule and the Dell Machine is removed from the 1809 Deployment Collection.
Exclude Collections are great for keeping yourself safe… say you have a group of high risk machines: you create a “High Risk” collection and exclude it from all of your normal deployments, just making sure you’re still dealing with those high risk machines separately. While Exclude collections are great to keep you safe, what’s even better is using a Limiting Collection, which we’ll be talking about next.
Alright, so we’ve covered the ways you add machines and exclude them from collections, but one other way to limit the machines that can be in the collection is by a limiting collection.
Limiting Collection: The collection by which you limit another collection. Say you have a collection of all machines for a line of business, “Marketing”, and they are the only group that you know has approved upgrading to 1809 from 1607 and 1709. At this point, you’d have one confusing query: all machines that are 1607 or 1709, but not Dell, and only in Marketing… argh. However, with Includes, Excludes, and Limiting Collections, it becomes very simple. For the example, let’s take a look at our Marketing collection:
So now that we know the Pool we want to pull from, we can Limit our Deployment Collection to only Machines in Marketing.
So the final tally? 6 Machines will receive the deployment. Break down of the Deployment Collection:
Include 1607 Machines +4
Include 1709 Machines +7
Exclude Dell -1
Total = 10 machines across all workstations fit the above criteria
Limit by Machines in Marketing that fit the above criteria = 6
So while there are 10 Machines in the Marketing Collection, 4 of them do not fit within the criteria and have been excluded (the HP, and the Machines not on 1607 or 1709).
Now lets say, you get a request to add someone to get the 1809 deployment, so you think “Hey, I’ll just quick add a computer to the collection via direct membership so the user gets the deployment on their computer.” So you go ahead and add the 1709 computer to the collection, but they never get the deployment… you look at the Collection to see the members, and you don’t see the machine you added, but you check the direct membership and you see it there, what is going on?? aaaahhhhhh!
Then you finally remember, LIMITING COLLECTION! “PC08” isn’t in Marketing, the computer will not be evaluated to be in the collection due to limiting the collection membership to only machines in the Marketing Collection!
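If it helps to think of evaluation as set arithmetic: (direct members + query results + includes) minus excludes, then intersected with the limiting collection. A rough sketch in plain PowerShell, with made-up machine names:

```powershell
# Made-up machine names for illustration only
$direct   = @("PC08")                         # direct membership rule
$includes = @("PC01","PC02","PC03","PC04") +  # 1607 collection
            @("PC05","PC06","PC07")           # 1709 collection
$excludes = @("PC04")                         # the Dell
$limiting = @("PC01","PC02","PC05","PC06")    # Marketing (note: no PC08)

$candidates = ($direct + $includes) | Sort-Object -Unique
$evaluated  = $candidates |
    Where-Object { $excludes -notcontains $_ } |   # exclude wins over include
    Where-Object { $limiting -contains $_ }        # limiting collection gate

# PC08 was directly added, yet never appears in $evaluated,
# because it is not in the limiting (Marketing) collection.
```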
So now I hope you understand how collection memberships work, how Direct Membership on a Collection doesn’t always equal what the evaluated membership is. There are a lot of moving parts to Collections, of which any can drastically change the collection membership.
Hey Recast Right Click Tools users. This is a nifty tip that I often forget about, but is pretty powerful when adding or removing machines to and from collections. Bonus.. learn about Direct Membership vs Evaluated Membership.
Blog Summary: Wild Cards!
Let’s say I want to add all machines that have “town” in the name into a collection… wildcards make this simple.
In this demo, I highlighted 2 collections at the same time and launched the tool “Add Computers to Collection(s)”.
I’ve added my wildcard name, %town%; let’s see what happens:
You can see here, it has added the machines with the string ‘town’ in the name into the collections.
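The % here is the SQL-style wildcard; in PowerShell terms it behaves like * with the -like operator. A quick illustration of which names %town% would match (made-up machine names):

```powershell
$names = @("TOWN-PC01", "HP-TOWN02", "LAB-VM01")

# %town% is equivalent to *town* with -like (case-insensitive)
$names | Where-Object { $_ -like "*town*" }
# Matches TOWN-PC01 and HP-TOWN02, but not LAB-VM01
```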
Something also to note: in my demo, one collection is limited to “All Systems” (FYI, not a good practice) and one is limited to the 1607 machines collection. After I’ve added the devices, you can see the collection counts are different, as only 1 machine in the batch is in the 1607 collection (the limiting collection). If I show the evaluated results of that collection, there would be only 1 device, as opposed to a direct membership of 14, the same as the collection without a limiting collection.
Now, let’s say you want to remove machines that have the word “pc” or “hp” in the name, and leave the rest…
Pretty simple. After the removals, there are only 3 devices in each collection (Direct Membership), and the one with a limiting collection now evaluates to 0 machines, because none of the machines in that collection are part of the 1607 Collection.
I hope this demo shows the power of the Add / Remove Computers Right Click Tools, along with the difference between Direct Collection Membership and Evaluated Collection Membership.
This might not be a widely known fact, but Rollback in Windows 10 has been partially broken for a very long time (since 1803), and still is with current media of 1809 & 1903 as of today, 2019/08/20. In this post we deep dive into what the issue is and what you can do to fix it.
What is exactly broken? SetupRollback.cmd is not triggered in Windows if the machine fails the upgrade process.
Why should I care? If you have created your own SetupRollback.cmd file, or expect to leverage it in the case of an upgrade failure and rollback, you will not get the experience you are expecting. The same goes for OS Uninstall (reverting back to the previous OS); you would need to rely on outside processes to restore full functionality to the machine. You know that folder in the In-Place Upgrade Task Sequence template that says Rollback, with the condition _SMSTSSetupRollback = True? Guess what never gets set if SetupRollback.cmd never runs? Yep, the variable that triggers the Rollback section of your upgrade TS.
What should I do if I need this? This is a two-part fix: the Windows upgrade media needs an update (the Dynamic Update from August 2019 or newer for 1809), and ConfigMgr needs a variable set. As of now, I don’t know if there is a fix for Win 10 1903… still coming? I have been told it will be built into Win 10 1909, whenever that is released.
ConfigMgr: In the Task Sequence, you need to leverage the /postrollbackcontext command-line option and set it to system (/postrollbackcontext system); otherwise Windows will try to launch SetupRollback.cmd in the user context, which helps nobody. This behavior is supposed to change in ConfigMgr 1910, where system context will become the default, which we should be able to confirm at that point.
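In practice, that means making sure the setup.exe command line that the Upgrade Operating System step builds includes the switch. A hedged sketch of what that looks like (the other switches and the exact mechanism may vary with your ConfigMgr version):

```bat
REM Illustrative: the effective Windows Setup command line should include
REM /PostRollbackContext system (other switches shown here will vary).
setup.exe /Auto Upgrade /Quiet /PostRollbackContext system /DynamicUpdate Enable

REM In a ConfigMgr task sequence, extra setup.exe switches are typically
REM appended with a Set Task Sequence Variable step:
REM   Variable: OSDSetupAdditionalUpgradeOptions
REM   Value:    /postrollbackcontext system
```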
Windows Upgrade Media: There are a couple of ways to do this. Enabling Dynamic Update during your upgrade is by far the easiest, if your infrastructure can handle it. If it can’t be enabled, you’ll need to “inject” the updates into your offline media. There are several guides out there on how to accomplish this (including below), along with a community tool, OSDBuilder, which will help automate the process. Short version: download the KB, extract the CAB file, and copy the extracted files / folder structure into your upgrade media, overwriting the files that were previously there.
Updating Offline Media (ConfigMgr 1809 Source Content)
Download, then Extract (expand):
Go to folder: (Contents of the Extracted KB)
Copy to your 1809 Upgrade Media
Now update your DPs with your latest upgrade media, and you’re set. Please make sure you’re also updating it with the other monthly patches and dynamic updates.
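The extract-and-copy steps above can be sketched as a couple of commands; the KB file name and paths here are placeholders for your own download location and package source, not the actual KB:

```bat
REM Extract the Dynamic Update CAB (file names and paths are placeholders).
mkdir C:\Temp\DU_Extract
expand.exe -F:* "C:\Temp\Windows10.0-KBxxxxxxx-x64.cab" C:\Temp\DU_Extract

REM Copy the extracted files over your 1809 upgrade media source,
REM overwriting the files that were previously there.
xcopy /y /s C:\Temp\DU_Extract\* "\\Server\Source\OSUpgrade\Win10-1809\"
```

After that, update the OS upgrade package on your distribution points as described above.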
In the Task Sequence:
Set Variable Step:
Upgrade Step (If you can update Dynamic Updates):
Now, with the rollback mechanism working properly, the Task Sequence is supposed to kick back in after the machine fails to upgrade, allowing you to run additional cleanup / diagnostic tasks (like triggering SetupDiag, for example).
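SetupDiag is Microsoft’s standalone tool for parsing the Windows Setup logs after a failed upgrade. A hedged example of running it from a step in the Rollback section of the TS, where the output path is my own placeholder:

```bat
REM Run SetupDiag after a rollback to analyze why the upgrade failed.
REM The output path is illustrative; point it wherever your TS collects logs.
SetupDiag.exe /Output:C:\RollbackLogs\SetupDiagResults.log /ZipLogs:True
```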
The relationship between man and machine is a lot less dramatic (or in some cases, ridiculous) in real life than it’s often portrayed in fiction, but that doesn’t mean the potential for trouble is any less real. At Recast, we have the unique opportunity to talk with thousands of senior-level system administrators about their environments every year. We often begin conversations with a frank discussion about what’s working and what isn’t, and why. From our experience, and with few exceptions, the success or failure of a systems management effort at any organization hinges on the ability of its IT department to manage people problems.
What Do You Mean “People Problems?”
Simply put, people are messy. All the great things we can do when we work together are easily turned on their head when we don’t. In IT, this is often more pronounced than in other professions. More often than not, those with the technical knowledge to make well-reasoned decisions are not the ones granted decision-making power, which means IT is faced with a fundamental problem and responsibility: communication.
How Bad Can It Be?
Organizations that operate at the highest levels of success in IT are also the ones with the best communication strategies. Organizations that utterly lack the communication skills to overcome people problems, well, they end up in the news. The point is, communicating well, especially when faced with a technically important decision, is paramount.
Tips? Best Practices?
By and large, organizations that do this well have a few things in common. Here are a few places to get started:
Technical teams and organizational leadership regularly meet and discuss ways to meet the needs of both sides.
IT puts SLAs on response times for different tasks and sticks to them.
IT over-communicates updates, outages, and successes to the user base, and consistently requests and acts on feedback.
When a poor decision is made, iterate, re-litigate, reiterate. Decisions shouldn’t be set in stone; as technology advances, so should you.
Right Click Tools Can Help
With smarter data comes better decision-making. RCT Enterprise’s Security and Compliance Dashboards can help you communicate, decide, and act on common fall-down points for most organizations. You can get a one-on-one session with an expert anytime by scheduling a walkthrough here.