

  • Zero Trust for the Home Lab - An Introduction to Zero Trust and its Practical Limits for the Home Lab (Part 1)

Introduction

If you're a regular visitor to this site, you've probably noticed I enjoy 'messing' with security, especially when it comes to Windows. I've put a lot of effort into securing my home lab over the years, and it would be a pretty tough nut to crack. But as the saying goes, "pride comes before a fall". Of course, I'm a realist and know full well that nothing is 100% secure and that there are vulnerabilities I'm in denial about, but it helps to hide behind layers of firewalls, WDAC, and delegation.

There's this concept called Zero Trust Architecture; it's intriguing, and I'll explain what it means in a moment. But with my home lab in mind, I've been wondering: which aspects of it can realistically be implemented using consumer-grade equipment? How close can I get to that elusive state of Security Nirvana without breaking the bank, or the home lab?

This series of articles will first explore the theory behind each of the Zero Trust security enhancements, followed by its practical implementation, the fun part. Although the theory is wordy and a bit boring, it's important to understand the principles and how they apply to the implementation of the tech. The goal? To create the world's most secure home lab. This should be entirely doable; after all, who else is unhinged enough to even try?

Zero Trust Architecture

Zero Trust Architecture (ZTA) is a security framework based on the principle of "never trust, always verify." Unlike traditional security models that rely on network perimeters, Zero Trust focuses on securing individual resources by enforcing strict identity verification, least privilege access, and continuous monitoring.

The Problem with Traditional Security Models

The Perimeter-Based Security Model

In the past, organizations secured their networks using firewalls, VPNs, and other perimeter-based defenses. The assumption was that once inside the network, users and devices could be trusted.
However, this approach has several flaws:

- Insider Threats: employees or compromised accounts can misuse their privileges.
- Remote Work & Cloud Adoption: users no longer work within a controlled corporate network.
- Advanced Cyber Threats: attackers can breach a single point in the network and move laterally to access sensitive data.

Core Principles of Zero Trust Architecture

To successfully implement Zero Trust, organizations follow these key principles:

- Verify Explicitly: authenticate and authorise every access request based on multiple data points, such as user identity, device health, location, and behavior. Use multi-factor authentication (MFA) to ensure secure logins.
- Use Least Privilege Access: grant users and applications only the minimum access they need to perform their tasks. Implement Just-In-Time (JIT) access and role-based access control (RBAC).
- Assume Breach: design the network with the assumption that threats exist both inside and outside. Implement micro-segmentation to contain potential intrusions. Continuously monitor and analyze network traffic for anomalies.

Implementing Zero Trust: A Step-by-Step Guide

Micro-Segmentation and Network Security

- Break up the network into smaller, isolated segments to limit lateral movement.
- Use software-defined perimeters (SDP) to restrict access to applications based on user identity and context.
- Deploy next-generation firewalls and intrusion detection systems to monitor network activity.
- Implement IPSec to encrypt and authenticate traffic between devices, enforcing secure communication within and across network segments.
- Use 802.1X and RADIUS for network-level access control, tying access policies to user identity and device trustworthiness.
- Enforce policy-based routing and segmentation at both physical and virtual levels.

Device and Endpoint Security

- Implement endpoint detection and response (EDR) solutions to detect and mitigate threats.
- Enforce device compliance checks, ensuring only secure, managed devices can access resources.
- Use mobile device management (MDM) solutions to secure BYOD (Bring Your Own Device) environments.
- Continuously assess device posture, including OS patch levels, security configurations, and threat exposure.

Implement Strong Identity and Access Management (IAM)

- Enforce multi-factor authentication (MFA) for all users.
- Implement Single Sign-On (SSO) to streamline authentication.
- Adopt passwordless authentication methods such as biometrics or security keys.
- Continuously verify user identities using risk-based authentication (RBA), which adjusts security policies based on user behavior.
- Leverage RADIUS for centralized authentication and accounting, particularly for network access control and device-level authentication.
- Integrate 802.1X for port-based network access control, ensuring that only authenticated users and compliant devices gain network access.

Enforce Least Privilege and Access Controls

- Use role-based access control (RBAC) to restrict access based on job roles.
- Implement attribute-based access control (ABAC), which considers additional factors like device security posture and user location.
- Utilize Just-In-Time (JIT) access to grant temporary permissions when needed.
- Review access regularly to minimize privilege creep and enforce the principle of least privilege.

Continuous Monitoring and Threat Detection

- Deploy Security Information and Event Management (SIEM) solutions to collect and analyze security logs.
- Use User and Entity Behavior Analytics (UEBA) to detect anomalies in user behavior.
- Implement automated threat response to isolate compromised accounts or devices in real-time.

Home Lab State of Play

This lab isn't just a casual test environment; it's been running continuously for over a decade, operating 24/7 as a secure, managed domain for browsing and related services.
The family acts as the user base, providing constant, real-world UAT, often quite vocally when something breaks. The domain serves as a representative platform for the technologies I'm learning, testing, and developing.

The current state, and "state" is nearer the mark, is a mostly flat network behind an out-of-support Zyxel USG60W with multiple firewall rules dependent on each device's IP and MAC address. I'm running multiple Intel NUCs hosting Hyper-V, one well overdue for retirement and out of support, which in turn run a Windows Server 2019 domain environment. AppLocker and Group Policy are actively deployed, while WDAC is managed via SCCM. There's extensive delegation and a strict separation of privileges throughout the environment. Laptops protect their data with BitLocker, TPM and PIN. DNS queries are handled by two PiHoles with fairly strict filtering lists. As for monitoring, SCOM was decommissioned some time ago due to NUC resource limitations, so currently there's no centralized monitoring in place.

A serious lack of time, and the fact that it just works, has led to the system being largely neglected. This forms an ideal starting point and mirrors what's often seen in corporate environments: underfunded infrastructure and overworked admins stretched to breaking point.

The Zero Trust Plan of Attack

Building on the core Zero Trust principle of "never trust, always verify", and keeping budget limitations in mind, I'll explore each technology and explain how it tackles specific challenges. Each of these will be documented in the upcoming blogs.

Micro-Segmentation and Network Security

- Software-Defined Perimeters: this may be a step too far for the home lab.
- Networking: replace the Zyxel with a pfSense Netgate 4200 and implement VLANs.
- Firewall: transition from Zyxel policies to pfSense.
- IPSec: assume compromise and that the network is hostile.
- Implement a VPN: not required.
- PiHole, DNSSEC and DNS-over-TLS.
Device and Endpoint Security

- Device compliance: implement NAP. No longer possible with Windows Server.
- Endpoint Detection and Response (EDR).
- Mobile Device Management (MDM) is currently handled through SCCM. There are no plans to transition to Microsoft Azure, particularly Intune, as it lacks enterprise features and would expand the lab's attack surface. The approach is to keep sensitive data processing secure and on-premises while using the cloud for processing and storing less sensitive data.

Implement Strong Identity and Access Management (IAM)

- Single Sign-On: currently supported within the Microsoft domain, but not for all the Linux devices.
- Authenticate and verify devices: implement a RADIUS server and 802.1X.
- MFA: Yubikey smartcards and PINs will be implemented.
- Risk-Based Authentication.

Enforce Least Privilege and Access Controls

- Attribute-Based Access Control (ABAC).
- Role-Based Access Control (RBAC).
- Just-In-Time access requires, as a minimum, PowerShell commands to enable group membership TTLs (Time-to-Live); this could be extended further with a Bastion Forest, MIM, PAM and PIM.

Continuous Monitoring and Threat Detection

- Security Information and Event Management (SIEM): implement an event management solution that supports both Windows and Linux.
- User and Entity Behavior Analytics (UEBA): implement pfSense's IPS and IDS solutions.
- Real-time response.
- Threat intelligence feeds (pfBlockerNG).
- Intrusion detection and prevention systems (Snort/Suricata).

The Keys to the Zero Trust Kingdom

In a Windows environment, an Enterprise Certificate Authority (CA) is the trust anchor for machine identities, user certificates, network authentication, and service encryption. It's a critical component in any enterprise PKI and foundational to implementing a Zero Trust security model. But without a Hardware Security Module (HSM), your CA's private keys are exposed to unnecessary risk. I don't have an HSM; they're quite expensive.
This needs to be called out for the enterprise implementation of Zero Trust.

Where to Start.....

The CA holds the keys, but the network forms the foundation of Zero Trust, making it the logical place to start. Replacing the outdated Zyxel hardware is the first step, followed by implementing proper network segmentation and firewall policies. The only question... what have I started?

Related Posts:

Part 1 - Zero Trust Introduction
Part 2 - VLAN Tagging and Firewalls with pfSense
Part 3 - pfSense and 802.1x
Part 4 - IPSec for the Windows Domain
Part 5 - AD Delegation and Separation of Duties
Part 6 - Yubikey and Domain Smartcard Authentication Setup
Part 7 - IPSec between Windows Domain and Linux using Certs
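The Just-In-Time access item under 'Enforce Least Privilege and Access Controls' relies on time-limited group membership, which Active Directory supports natively once the forest-level Privileged Access Management feature is enabled. A minimal sketch, assuming a 2016+ forest and a hypothetical 'jit.admin' account; note that enabling the feature is a one-way, irreversible forest change:

```powershell
Import-Module ActiveDirectory

# One-time, irreversible forest change that enables membership TTLs
Enable-ADOptionalFeature 'Privileged Access Management Feature' `
    -Scope ForestOrConfigurationSet -Target (Get-ADForest)

# Grant 'jit.admin' Domain Admins membership for one hour only;
# the membership link expires automatically, no cleanup script required
Add-ADGroupMember -Identity 'Domain Admins' -Members 'jit.admin' `
    -MemberTimeToLive (New-TimeSpan -Hours 1)

# Inspect the remaining TTLs on the group's members
Get-ADGroup 'Domain Admins' -Properties member -ShowMemberTimeToLive |
    Select-Object -ExpandProperty member
```

The TTL-based approach avoids the classic failure mode of JIT scripts: a scheduled cleanup task that never runs, leaving the account privileged indefinitely.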

  • Create a WMI Filter on a PDC with PowerShell

While building automation for domain deployment, OU structure, and delegation, I ran into one of those "too hard to do right now" tasks that nearly slipped past me: scripting the creation of a WMI filter for the PDC Emulator role.

Time Source

The goal is to make sure the PDC, and only the PDC, manages authoritative time. This particular GPO includes a root NTP server setting, whether that's an IP address, a local atomic clock, or an external internet time source, and it's vital that only the PDC syncs with it. Every other domain controller should, in turn, sync from the PDC, maintaining a proper hierarchy and preventing clock chaos.

Not Keen on WMI Filters

I'll be honest, I don't generally like WMI filters in GPO. They introduce performance hits and slow down policy processing, especially in larger environments. But in this case, it's a pragmatic exception. The filter ensures that only the PDC receives and applies the external time configuration, keeping time consistent across the domain and preventing catastrophic drift when an upstream time source fails spectacularly.

Prevent death and destruction

I've experienced this first-hand: the time source collapsed in a heap, and the PDC leapt forward a full 24 hours in an instant. The aftermath was... memorable. It's hard to believe the chaos inflicted when time goes awry.

MaxPhaseCorrection - Thank MS's Default value

To guard against that kind of chaos, the GPO settings MaxPosPhaseCorrection and MaxNegPhaseCorrection limit how far the system clock is allowed to jump forward or backward during synchronization. The issue is that Microsoft's default value is 86400 seconds, or 24 hours; this overly generous setting has the potential to lead to carnage. The recommendation is to set both the POS and NEG settings to 3600 seconds, or 1 hour. These settings and the WMI filter ensure the domain's time stays sane, stable, and immune to upstream meltdowns.
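The phase-correction settings ultimately land in the W32Time service's registry keys, so a quick way to confirm what a DC is actually enforcing, before and after the GPO applies, is to read them back. A minimal check, assuming the default registry location:

```powershell
# MaxPos/MaxNegPhaseCorrection live under the W32Time Config key;
# values are in seconds, and 4294967295 (0xFFFFFFFF) means 'no limit'
$cfg = Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\Config'
$cfg | Select-Object MaxPosPhaseCorrection, MaxNegPhaseCorrection

# Flag the risky Microsoft default of 86400 (24 hours)
if ($cfg.MaxPosPhaseCorrection -ge 86400) {
    Write-Warning 'MaxPosPhaseCorrection is 24h or more - consider 3600.'
}
```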
GPO Settings - Prevent the Meltdown

These are the current settings provided from GitHub, with the annex at the end of the blog providing the technical details.

Computer Configuration/Policies/Administrative Templates/System/Windows Time Service:

- Global Configuration Settings
  - MaxNegPhaseCorrection = 3600
  - MaxPosPhaseCorrection = 3600

Computer Configuration/Policies/Administrative Templates/System/Windows Time Service/Time Providers:

- Configure Windows NTP Client = Enabled
  - NTPServer = 192.168.30.1,0x8
  - Type = NTP
  - CrossSiteSyncFlags = 2
  - ResolverPeerBackoffMinutes = 15
  - ResolverPeerBackoffMaxTimes = 7
  - SpecialPollInterval = 1024
  - EventLogFlags = 0
- Enable Windows NTP Client = Enabled
- Enable Windows NTP Server = Enabled

Script Prep

Download both the script and the zip file from GitHub. Copy the files to the PDC into the 'C:\ADBackups\PDCNTP\' directory. Don't use another domain controller: running GPO scripts from a secondary DC introduces extra latency when connecting back to the PDC, which can cause failures.

Extract the zip file, ensuring the GUID directory is nested within the 'PDCNTP' directory:

C:\ADBackups\PDCNTP\{A5214940-95CC-4E93-837D-5D64CA58935C}\

If you prefer to use your own GPO export, that's fine; as long as the path is correct, the script will automatically resolve the appropriate GUID and BackupID.

Execution of the Script

With Domain Admin privileges, open an elevated PowerShell window and execute the following commands:

CD C:\ADBackups\PDCNTP\ ; .\Create_WMI_NTP_PDC.ps1

Open Group Policy Management and confirm that the GPO has been created and linked to the Domain Controllers OU. Update the IP address for the NTP server; it's unlikely we share the same time server IP.

The only remaining task for me is to integrate the NTP GPO into the fully automated domain deployment script. Enjoy, and thanks for your time.
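Once the GPO has applied (run gpupdate /force or wait a refresh cycle), the standard Windows Time service queries will confirm the PDC is syncing from the intended source:

```shell
:: Force an immediate sync, then confirm where time is coming from
w32tm /resync
w32tm /query /source

:: Stratum, last successful sync time, and poll interval
w32tm /query /status

:: Effective settings, including MaxPos/MaxNegPhaseCorrection
w32tm /query /configuration
```

If /query /source still shows "Local CMOS Clock" or another DC, the policy hasn't taken effect on the PDC yet.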
ANNEX - Breakdown of GPO Settings

Computer Configuration > Policies > Administrative Templates > System > Windows Time Service > Global Configuration Settings

MaxNegPhaseCorrection

- Current value: 3600 (1 hour)
- Purpose: Defines the maximum number of seconds the clock can be moved backward when synchronizing time. If the correction exceeds this, Windows logs an event instead of applying the adjustment.
- Relevance: Prevents the PDC from winding time back too far due to an erratic NTP source, which can break Kerberos authentication and replication.
- Alternative values:
  - 0 — disables large backward corrections entirely.
  - 300 — 5 minutes (useful for high-availability or sensitive environments).
  - 86400 — 24 hours (default Microsoft value, overly generous for a PDC).
  - 4294967295 (0xFFFFFFFF) — disables the limit completely (not recommended).

MaxPosPhaseCorrection

- Current value: 3600 (1 hour)
- Purpose: Defines the maximum number of seconds the clock can be moved forward.
- Relevance: Protects against catastrophic jumps when an upstream NTP server malfunctions. This provides a 1-hour safety window in either direction, preventing massive jumps while still allowing normal synchronization drift to be corrected automatically.
- Alternative values:
  - 300 — a conservative 5-minute correction limit.
  - 900 — 15 minutes (a good balance for stable networks).
  - 86400 — 24 hours (default).
  - 4294967295 — disables the limit (unsafe on domain controllers).

Computer Configuration > Policies > Administrative Templates > System > Windows Time Service > Time Providers

Configure Windows NTP Client = Enabled

- Enables policy control of NTP client behaviour to enforce the following parameters.

NTPServer = 192.168.30.1,0x8

- Purpose: Defines the external NTP source the PDC syncs with. The ,0x8 flag tells Windows to use client mode.
- Alternative formats and examples:
  - pool.ntp.org,0x8 — public NTP pool.
  - time.google.com,0x8 — Google's NTP service.
  - ntp.nist.gov,0x8 — US NIST time source.
  - gps-clock.local,0x8 — local GPS or atomic reference.
- Relevance: This should be a reliable stratum-1 or stratum-2 source. The PDC is the only domain controller that should query an external NTP server.

Type = NTP

- Purpose: Forces synchronization using the NTP protocol with the specified NTPServer.
- Other valid values:
  - NT5DS — default for domain-joined machines (syncs from the domain hierarchy).
  - AllSync — uses all available sync mechanisms (rarely needed).
  - NoSync — disables synchronization entirely.
- Relevance: For the PDC, NTP ensures it pulls time from the defined external source, not from another DC.

CrossSiteSyncFlags = 2

- Purpose: Controls cross-site time synchronization.
- Value meanings:
  - 0 — allow synchronization across all sites.
  - 1 — only sync from DCs in the same site.
  - 2 — never sync from DCs in other sites (recommended for the PDC).
- Relevance: Keeps the PDC isolated as the domain's root time authority, avoiding cross-site time loops.

ResolverPeerBackoffMinutes = 15

- Purpose: Specifies how long the service waits before retrying after a failed NTP sync.
- Alternatives:
  - 5 — more aggressive retry.
  - 30 — more relaxed retry, suitable for unreliable WANs.

ResolverPeerBackoffMaxTimes = 7

- Purpose: Defines the maximum number of exponential backoff attempts before giving up.
- Alternatives:
  - 3 — faster failover (good for testing).
  - 10 — more patient retry window.

SpecialPollInterval = 1024

- Purpose: Sets how often (in seconds) the PDC polls the NTP source — roughly every 17 minutes.
- Alternatives:
  - 3600 — once per hour (lighter network load).
  - 900 — every 15 minutes (more aggressive for accuracy).
  - 86400 — once per day (not advised for volatile networks).
- Relevance: Frequent polling maintains accurate time and compensates for drift.

EventLogFlags = 0

- Purpose: Controls event logging verbosity.
- Values:
  - 0 — only critical errors.
  - 1 — informational and error events.
  - 2 — all events, including debugging.
- Relevance: On a PDC, 0 keeps logs clean while still alerting to serious time issues.
Enable Windows NTP Client = Enabled

- Purpose: Ensures the time service actively synchronizes with the defined NTP source.
- Relevance: Essential for keeping the PDC accurate and stable.

Enable Windows NTP Server = Enabled

- Purpose: Turns the PDC into an NTP server for the domain.
- Relevance: Other DCs and domain members sync from the PDC rather than directly from the external NTP source, maintaining a clean and authoritative time hierarchy.

  • Windows PE add-on for the Windows ADK for Windows 11, version 22H2 Error

Windows ADK PE for Windows 11 22H2 fails to install completely, generating the following errors.

Error 1

Clicking on the Windows PE tab crashes MMC, generating the following error:

Could not find a part of the path 'C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\x86\WinPE_OCs'.

Google suggests the ADK PE isn't installed.....

Error 2

It's not possible to update the boot images:

Unable to open the specified WIM file. ---> System.Exception: Unable to open the specified WIM file. ---> System.ComponentModel.Win32Exception: The system cannot find the path specified.

Something is definitely wrong and missing

Both errors suggest missing files, with Error 1 providing a path:

C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\

Comparing the latest ADK PE for Windows 11 22H2 to an older installation, something is very wrong: the x86 and ARM directories are missing.

[Screenshots: ADK PE for Windows 11 22H2 vs ADK PE for Windows 10 1809]

Is this a one-off...

The initial problem presented itself whilst upgrading MDT and ADK installed on a 2012 R2 server. A new instance of Server 2022 and Windows 11 were equally affected, and each instance of ADK PE was from a fresh download. Microsoft, I've got the contact details of some really good Test Managers.

The Fix

Not wishing to faff and spend too much time comparing downloads against different versions, it was taking me away from my planned day of Hack the Box. The recommended fix is downloading an earlier version of ADK PE; 1809 for Windows 10 should do the trick. Windows 11 is purely cosmetic; under the hood it identifies as Windows 10. Alternatively, copy the missing contents from a working installation; not recommended, and a bit quick and dirty, but it's functional, with some minor scripting errors in the MDT deployment wizard.
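A quick way to check whether a given ADK PE install is affected is to test for the per-architecture WinPE_OCs directories before MDT ever touches them. A minimal sketch, assuming the default install path:

```powershell
# Default ADK WinPE root; adjust if the ADK was installed elsewhere
$winpe = 'C:\Program Files (x86)\Windows Kits\10' +
         '\Assessment and Deployment Kit\Windows Preinstallation Environment'

# A healthy pre-22H2 install carries all of these architectures
foreach ($arch in 'amd64', 'x86', 'arm64') {
    $ocs = Join-Path $winpe "$arch\WinPE_OCs"
    '{0,-6} present: {1}' -f $arch, (Test-Path $ocs)
}
```

If x86 reports False, expect the MMC crash described above as soon as the Windows PE tab is opened.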

  • Shift+F10 PXE Attack....nearly 4 years on

During MDT or ConfigMgr deployment of Windows 10, press Shift+F10 whilst Windows detects devices. A command prompt with SYSTEM privileges will pop up, allowing all sorts of shenanigans without being logged by a SIEM; those agents won't be running yet. Also, during Windows 10 upgrades, BitLocker drive encryption is disabled, allowing the same attack. This is an old issue, raised some 3 to 4 years ago.... Well, today on my test rig during a 1909 deployment, I was just curious, it can't still be vulnerable.... oops.

The fix is pretty straightforward, although I can't take credit; that belongs to Johan Arwidmark and this post here.

# Declare mount folders for the DISM offline update
$mountFolder1 = 'D:\Mount1'
$mountFolder2 = 'D:\Mount2'
$WinImage = 'D:\MDTDeployment\Operating Systems\Windows 10 x64 1909\sources'

# Mount install.wim to the first mount folder
Mount-WindowsImage -ImagePath "$WinImage\install.wim" -Index 1 -Path $mountFolder1

# Mount winre.wim to the second mount folder
Mount-WindowsImage -ImagePath "$mountFolder1\Windows\System32\Recovery\winre.wim" -Index 1 -Path $mountFolder2

# Create the folder for the DisableCMDRequest.TAG file in winre.wim
New-Item "$mountFolder2\Windows\setup\scripts" -ItemType Directory

# Create the DisableCMDRequest.TAG file for winre.wim
New-Item "$mountFolder2\Windows\setup\scripts\DisableCMDRequest.TAG" -ItemType File

# Commit changes to winre.wim
Dismount-WindowsImage -Path $mountFolder2 -Save

# Create the folder for DisableCMDRequest.TAG in install.wim
New-Item "$mountFolder1\Windows\setup\scripts" -ItemType Directory

# Create the DisableCMDRequest.TAG file for install.wim
New-Item "$mountFolder1\Windows\setup\scripts\DisableCMDRequest.TAG" -ItemType File

# Commit changes to install.wim
Dismount-WindowsImage -Path $mountFolder1 -Save
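To confirm the change actually took before redeploying, the image can be remounted and the TAG file checked, then dismounted without saving. A minimal sketch reusing the same paths as the fix above:

```powershell
# Remount install.wim purely to inspect it, then discard (no changes saved)
$mountFolder1 = 'D:\Mount1'
$WinImage = 'D:\MDTDeployment\Operating Systems\Windows 10 x64 1909\sources'

Mount-WindowsImage -ImagePath "$WinImage\install.wim" -Index 1 -Path $mountFolder1 | Out-Null

# True means Shift+F10 will be refused during setup from this image
Test-Path "$mountFolder1\Windows\setup\scripts\DisableCMDRequest.TAG"

Dismount-WindowsImage -Path $mountFolder1 -Discard
```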

  • Deploying without MDT or SCCM\MECM....

The best methods for deploying Windows are SCCM and then MDT, hands down. But what if you don't have either deployment service? Seriously... despite all the step-by-step guides, and even scripts claiming you can deploy MDT in 45 minutes, some still opt to manually deploy or clone Windows; maybe they never moved past RIS.

The real question is: can Windows 10 and a suite of applications, including Office, be automated without fancy deployment tools? The short answer: yes, but it's not pretty. There are problems that MDT and SCCM simply make disappear, and I'm not thrilled about dealing with these issues. Manual prep takes way more time, is less functional, and only starts to make sense if you have no more than a handful of Windows clients to deploy. If you ever consider doing it this way, it's only for very limited scenarios. My recommendation: use the proper deployment services designed specifically for Windows. It's faster, cleaner, and far less frustrating.

Pre-requisites

- 16GB USB3 as a minimum, preferably 32GB
- Windows 10 media
- MS Office 2019
- Chrome, MS Edge, Visual C++, Notepad++
- Windows ADK

Windows Media

- Download the Windows 10 ISO and double-click to mount it as D:\
- Create a directory at C:\ named for the version of Windows, e.g. C:\Windows21H2\
- Don't copy the contents of D:\ directly to the USB: install.wim is larger than the maximum file size supported by FAT32 (4GB).
- Copy the files from D:\ (the Windows ISO) to C:\Windows21H2\
- Split install.wim into 2GB files to support FAT32:

Dism /Split-Image /ImageFile:C:\Windows21H2\sources\install.wim /SWMFile:C:\Windows21H2\sources\install.swm /FileSize:2000

- Delete C:\Windows21H2\sources\install.wim
- Insert the USB pen and format it as FAT32; in this case, it will be assigned E:\
- Copy the entire contents of C:\Windows21H2\ to E:\

Applications

Create the directory E:\Software; this is the root for all downloaded software to be saved to.
Create the following sub-directories under E:\Software, and download the software to the relevant sub-directory:

- 7Zip: cmd.exe /c 7z2107-x64.exe /S
- Chrome: cmd.exe /c msiexec.exe /i GoogleChromeStandaloneEnterprise64.msi /norestart /quiet
- Drivers: cmd /c pnputil.exe /add-driver Path\*.inf /subdirs /install
- MS-VS-CPlus: cmd.exe /c vcredist_x86_2013.exe /S
- MS-Win10-CU: cmd /c wusa.exe windows10.0-kb5011487-x64.msu /quiet /norestart
- MS-Win10-SSU: cmd /c wusa.exe ssu-19041.1161-x64.msu /quiet
- MS-Edge: cmd.exe /c msiexec.exe /i MicrosoftEdgeEnterpriseX64.msi /norestart /quiet
- MS-Office2019: cmd.exe /c MS-Office2019\Office\Setup64.exe
- NotepadPlus: cmd.exe /c npp.8.3.3.Installer.x64.exe /S
- TortoiseSVN: cmd.exe /c msiexec.exe /i TortoiseSVN-1.14.2.29370-x64-svn-1.14.1.msi /qn /norestart
- WinSCP: cmd.exe /c WinSCP-5.19.6-Setup.exe /VERYSILENT /NORESTART /ALLUSERS

I've provided the unattended commands with their file extensions; it's important the correct file type is downloaded for the script to work correctly. Place any driver files in the 'Drivers' directory, unpacked as *.inf files.

AutoUnattend

Download the ADK for Windows (here) and install only the 'Deployment Tools'. From the Start Menu, open 'Windows System Image Manager', create a 'New Answer File' and save it to the root of E:\ (the USB), naming the file 'AutoUnattend.xml'.

I cheated at this point; I didn't fancy creating the AutoUnattend.xml from scratch, so I "borrowed" a pre-configured unattend.xml from MDT. To save you the pain, download the 'AutoUnattend.xml' from GitHub (here) and save it to the root of E:\ (the USB).

Within the AutoUnattend.xml, the following line is referenced to execute 'InstallScript.ps1' at first logon:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -executionpolicy bypass -file D:\software\InstallScript.ps1

Note that the PartitionID is '3' and the InstallFrom is updated from 'install.wim' to 'install.swm'.
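For orientation, a first-logon command like the PowerShell line above is normally wired into the answer file's FirstLogonCommands section (oobeSystem pass, Microsoft-Windows-Shell-Setup component). A hedged sketch of the relevant fragment only; check the downloaded AutoUnattend.xml rather than trusting this verbatim, as the Order and Description values here are illustrative:

```xml
<!-- Illustrative fragment: runs once at the first interactive logon -->
<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <CommandLine>C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -executionpolicy bypass -file D:\software\InstallScript.ps1</CommandLine>
    <Description>Run InstallScript at first logon</Description>
  </SynchronousCommand>
</FirstLogonCommands>
```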
To select a different edition (the default is Education), run the following command with admin rights:

dism /Get-WimInfo /WimFile:"d:\sources\install.wim"

Index : 1  Name : Windows 10 Education
Index : 2  Name : Windows 10 Education N
Index : 3  Name : Windows 10 Enterprise
Index : 4  Name : Windows 10 Enterprise N
Index : 5  Name : Windows 10 Pro

Edit the AutoUnattend.xml and update the MetaData value under OSImage to reflect the desired index value.

The Script

Download 'InstallScript.ps1' from (here) and save it to E:\Software.

A Brief Script Overview

- The first action is to copy the Software directory to C:\ so it can be referenced between reboots.
- The script adds registry settings to autologon as 'FauxAdmin' with a password of 'Password1234'. I strongly suggest changing the hardcoded password to something more secure.
- Warning: during the installation of Windows, when prompted for a new account, ensure it reflects the hardcoded name and password in InstallScript.ps1: 'FauxAdmin', 'Password1234'.
- A scheduled task is added that will execute at logon as 'FauxAdmin'.
- The default hostname is Desktop-####; you'll be asked to enter a new hostname.
- Pre-create a computer object in AD with the planned hostname of the client being deployed.
- Domain credentials will be required with delegated permissions to add computer objects to the domain.
- Update InstallScript.ps1 with the correct FQDN and OU path:

$DomainN = "trg.loc"
$ouPath = "OU=wks,OU=org,DC=trg,DC=loc"

- The Windows 10 CU and apps will install, with various reboots.
- A bit of a tidy-up removes the autologon and scheduled task, followed by a final reboot.
- To prevent an attempted re-installation or a repeated action, a 'check.txt' file is updated at the end of each step. If a step validates as $true, it will be skipped.
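The autologon step described in the overview boils down to a handful of Winlogon registry values. A hypothetical sketch of what a script like InstallScript.ps1 typically does (this is not the actual script; the account name mirrors the hardcoded 'FauxAdmin'), including the cleanup that must happen at the end:

```powershell
$winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'

# Enable autologon for the hardcoded deployment account
Set-ItemProperty $winlogon -Name AutoAdminLogon  -Value '1'
Set-ItemProperty $winlogon -Name DefaultUserName -Value 'FauxAdmin'
Set-ItemProperty $winlogon -Name DefaultPassword -Value 'Password1234'  # change this!

# ...installation steps and reboots happen here...

# Tidy-up: the plaintext password must not be left behind
Remove-ItemProperty $winlogon -Name DefaultPassword
Set-ItemProperty $winlogon -Name AutoAdminLogon -Value '0'
```

Worth noting that DefaultPassword is stored in plaintext, which is exactly why the cleanup and the post-deployment password reset matter.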
Deployment

- Boot the PC and enter the BIOS/UEFI. Set UEFI to boot (or initial boot) to USB, then F10 to save and exit.
- Insert the USB and boot.
- Setup will start and prompt for disk partitioning; delete the volumes and create new default partitions.
- OK, Cortana.
- Create an account of 'fauxadmin' with 'Password1234'; these account details are hardcoded in the script.
- At initial logon, the PowerShell script will launch.
- The process is complete when the client has been added to the domain and rebooted.
- Warning: now reset the FauxAdmin account's password. Don't forget it's hardcoded in the script and could allow an attacker to gain access if the password isn't updated.

Notes:

- The unattended disk partitioning proved to be unreliable and required manual intervention some of the time, so this step is now manual.
- It is assumed that during deployment the USB will map to D:\; this is hardcoded for the scheduled task.
- Hiding Cortana resulted in removing the prompt for a new admin account; it's considered a security benefit to create a new admin account and disable the Administrator account with SID 500.

  • Managing Local Admin Passwords with LAPS

How are you managing your local administrator passwords? Are they stored in a spreadsheet on a network share, or worse, is the same password used everywhere? Microsoft LAPS (Local Administrator Password Solution) could be the answer.

LAPS is a lightweight tool that, with a few simple GPO settings, automatically randomizes local administrator passwords across your domain. It ensures each client and server has a unique, securely managed password, removing the need for spreadsheets or manual updates.

- Download LAPS from the Microsoft site.
- Copy the file to the Domain Controller and ensure the account you are logged on with has 'Schema Admin'. Install only the Management Tools. As it's a DC, it's optional whether to install the 'Fat Client UI'; Schema updates should always be performed on a DC directly.
- Open PowerShell and run the following command after seeking approval: Update-AdmPwdSchema
- SELF will need updating on the OUs for your workstations and servers. Add SELF as the Security Principal and select 'Write ms-Mcs-AdmPwd'.
- Now change the GPO settings on the OUs. The default password length is 14 characters, but I would go higher and set it above 20.
- Install LAPS on a client and select only the AdmPwd GPO Extension.
- On the Domain Controller, open the LAPS UI, search for a client and select 'Set'.
- Once the password has reset, open the properties of the client and check ms-Mcs-AdmPwd for the new password. Now, every 30 days, the local admin password will be automatically updated and unique.
- Deploy the client with ConfigMgr to the remaining estate.

By default, Domain Admins have access to read the password attribute, and this can be delegated to a Security Group. AND..... this is the warning..... any delegated privileges that allow delegated computer management and the 'Extended Attributes' right can also read 'ms-Mcs-AdmPwd'.
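The schema update, SELF permission, and password retrieval steps above all have PowerShell equivalents in the AdmPwd.PS module that ships with LAPS. A sketch, where the OU path, group, and computer names are placeholders for your own:

```powershell
Import-Module AdmPwd.PS

# One-off schema extension (requires Schema Admin)
Update-AdmPwdSchema

# Let machines in the OU write their own ms-Mcs-AdmPwd attribute (the SELF step)
Set-AdmPwdComputerSelfPermission -OrgUnit 'OU=Workstations,DC=example,DC=loc'

# Delegate read access to a support group instead of relying on Domain Admins
Set-AdmPwdReadPasswordPermission -OrgUnit 'OU=Workstations,DC=example,DC=loc' `
    -AllowedPrincipals 'EXAMPLE\HelpDesk'

# Retrieve the current password for a client
Get-AdmPwdPassword -ComputerName 'WKS01'
```

Scripting the permissions also makes it easy to audit exactly which principals were granted read access, which matters given the Extended Attributes warning above.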

  • Using SCOM to Monitor AD and Local Accounts and Groups

For those that have deployed SCOM without ACS or another monitoring service, but don't have a full-blown IDS/IPS: with a little effort, it's possible to at least monitor and alert when critical groups and accounts are changed. As a free alternative, consider ELK (Elasticsearch) or Security Onion.

The following example configures SCOM to alert when Domain Admins is updated:

- On the Authoring tab, under Management Pack Objects, Rules, select 'NT Event Log (Alert)'.
- Create a new Management Pack if required; don't ever use the default MP.
- The 'Rule Name' should have an aspect that is unique to this and all subsequent rules, to assist searching later on. Rules that monitor groups or accounts will be prefixed with 'GpMon'.
- The 'Rule Target' in this case is 'Windows Domain Controllers'; it's a domain group.
- Change the 'Log Name' to 'Security'.
- Add Event ID 4728 (A member was added to a security-enabled global group).
- Update the Event Source to 'Contains' with a value of 'Domain Admins'.
- Update the priorities to High and Critical.

Sit back, grab a coffee (or 2) and wait whilst the rule is distributed to the Domain Controllers; this can take a while. Test the rule by adding a group or account to Domain Admins; in the SCOM Monitoring tab, an alert will almost immediately appear with full details.

Now for the laborious bit: create further monitors for the following:

- Server Operators
- Account Operators
- Print Operators
- Schema and Enterprise Admins
- Any delegation or roll-up groups
- SCCM administrative groups
- CA administrative groups

That's the obvious groups covered. Now target all Windows Servers and Clients (if SCOM has been deployed to the clients):

- Local accounts: creation, addition to local groups, and password resets.
- AppLocker: alert on any unauthorised software being installed or accessed.

Finally, here's what Microsoft recommends. With a few hours of effort, you'll have better visibility of the system and any changes to those critical groups.
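While waiting for SCOM to distribute the rules (or on an estate without SCOM at all), the same Security events can be pulled ad hoc with PowerShell. A sketch, where 'DC01' and the group filter are placeholders:

```powershell
# 4728/4729: security-enabled global group add/remove;
# 4732/4756: local and universal group adds
Get-WinEvent -ComputerName 'DC01' -FilterHashtable @{
    LogName = 'Security'
    Id      = 4728, 4729, 4732, 4756
} -MaxEvents 100 |
    Where-Object { $_.Message -match 'Domain Admins' } |
    Select-Object TimeCreated, Id,
        @{ n = 'Summary'; e = { ($_.Message -split "`n")[0] } }
```

It's no substitute for a SIEM or SCOM alerting, but it's a useful spot check that the audit events are actually being generated on the DCs.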

  • Always Patch Before Applocker or Device Guard are Deployed.

    Labs don't tend to follow best practices or any security standards; they're quick, dirty installations for developing and messing around. Here's some food for thought the next time you want to test Applocker or Windows Defender Application Control (WDAC), aka Device Guard: you may wish to at least patch first. For the most part, deploying domain infrastructure, scripts and services works great, until Device Guard is deployed to an unpatched Windows 11 client. Firstly the steps on how to configure Device Guard, then the fun...

The DeviceGuardBasic.ps1 script can be downloaded from ( here ). Run the script as Admin and point the Local GPO to Initial.bin following the help. Device Guard is set to enforced, no audit mode for me, that's for wimps, been here hundreds of times... what's the worst that can happen... arrghhhhh.

The first indication Windows 11 had issues was 'Settings' crashing upon opening. This isn't my first rodeo, straight to the event logs. Ah, a bloodbath of red Code Integrity errors complaining that a file hasn't been signed correctly. How could this be... the files are Microsoft files. This doesn't look good: the digital signature can't be verified, meaning the signing certificate isn't in the computer's Root Certificate Store. This is not the first time I've seen the 'Microsoft Development PCA 2014' certificate. A few years back a sub-optimal Office 2016 update prevented Word, PowerPoint and Excel from launching; it was Applocker protecting me from the Microsoft Development certificate at that time. Well done Microsoft, I see your test and release cycle hasn't improved.

A Windows update and all is fine... right... as if. I'm unable to click on the 'Install updates' button, it's part of Settings and no longer accessible. Bring back Control Panel. No way I'm waiting for Windows to get around to installing the updates by itself. The choices: disable Device Guard by removing the GPO and deleting the SIPolicy.p7b file; create an additional policy based on hashes; or start again, 2 hours' effort, most of that waiting for updates to install.

Creating an additional policy based on hashes and then merging it into the 'initial' policy allows for testing Device Guard's behaviour. Does Device Guard prevent untrusted and poorly signed files from running when hashes are present? Observed behaviour is for a Device Guard policy to create hashes for unsigned files as a fallback. The new and improved Device Guard script, aptly named 'DeviceGuard-withMerge.ps1', can be downloaded from ( here ). The only additional lines of note are the New-CIPolicy to create hashes only for the 'C:\Windows\SystemApps' directory, and the Merge-CIPolicy to merge the 2 XML policy files:

New-CIPolicy -Level Hash -FilePath $HashCIPolicy -UserPEs 3> $HashCIPolicyTxt -ScanPath "C:\Windows\SystemApps\"
Merge-CIPolicy -PolicyPaths $InitialCIPolicy,$HashCIPolicy -OutputFilePath $MergedCIPolicy

The result: 'Settings' now works despite Microsoft's best effort to ruin my day. Creating Device Guard policies based on hashes for files incorrectly signed by Microsoft's internal development CA resolves the issue. Below is the proof, 'Settings' is functional even with those dodgy files.

Conclusion: this may come as a shock to some... Microsoft does make mistakes and releases files incorrectly signed... shocking. Device Guard will allow files to run providing the hashes are present, even when incorrectly signed. Did I learn something? Hell yeah! Always patch before deploying Device Guard or Applocker. The time spent faffing resolving the issue far exceeded the time it would have taken to patch in the first place.
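    The hash-scan-and-merge flow described above can be sketched end to end with the ConfigCI module. This is a sketch only; the file paths and variable names are assumptions, not the exact contents of DeviceGuard-withMerge.ps1:

```powershell
# Assumed paths - adjust to your environment.
$InitialCIPolicy = 'C:\CI\Initial.xml'        # existing base policy
$HashCIPolicy    = 'C:\CI\SystemAppsHash.xml'
$MergedCIPolicy  = 'C:\CI\Merged.xml'
$BinPolicy       = 'C:\CI\SIPolicy.p7b'       # binary consumed by Code Integrity

# Hash-only scan of SystemApps to cover the incorrectly signed files.
New-CIPolicy -Level Hash -ScanPath 'C:\Windows\SystemApps\' `
    -FilePath $HashCIPolicy -UserPEs

# Merge the hash rules into the base policy, then convert to binary form.
Merge-CIPolicy -PolicyPaths $InitialCIPolicy, $HashCIPolicy `
    -OutputFilePath $MergedCIPolicy
ConvertFrom-CIPolicy -XmlFilePath $MergedCIPolicy -BinaryFilePath $BinPolicy
```

Point the Local GPO's Code Integrity policy setting at the resulting .p7b and reboot for the merged policy to take effect.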

  • LAPS Leaks Local Admin Passwords

    On a previous blog ( here ), LAPS (Local Administrator Password Solution) was installed. LAPS manages and updates the local Administrator passwords on clients and member servers, controlled via GPO. By default, only Domain Admins have permission to view the local administrator password for clients and member servers. Access to view the passwords by non-Domain Admins is via delegation, and here lies the problem: access to the local administrator passwords may be delegated unintentionally. This could lead to a serious security breach, leaking the local admin account passwords for all computer objects to those that shouldn't have access. This article will demonstrate a typical delegation for adding a computer object to an OU, and how to tweak the delegation to prevent access to the ms-Mcs-AdmPwd attribute.

Prep Work: There is some prep work. LAPS is required to be installed and configured; follow the above link. At least 1 non-domain-joined client, preferably 2, eg Windows 10 or 11 Enterprise. A test account, mine's named TestAdmin, with no privileges or delegations, and an OU named 'Workstation Test'. Ideally I'd be using AD groups rather than adding TestAdmin directly to the OU, but it's easier for demonstration purposes.

Delegation of Account: Open Active Directory Users and Computers or type dsa.msc in the run command. With a Domain Admin account, right-click on the 'Workstation Test' OU, Properties, Security tab and then Advanced. Click Add and select TestAdmin as the principal. Select 'Applies to: This object and all descendant objects'. In the Permission window below select: Create Computer Objects, Delete Computer Objects. Apply the change. This is a 2-step process; repeat, this time selecting 'Applies to: Descendant Computer objects' and Full Control in the Permissions window.

Test Delegation: Log on to a domain workstation with RSAT installed and open Active Directory Users and Computers. Test by pre-creating a computer object: right-click on the OU, select New > Computer, and type the hostname of the client to be added. Log on to a non-domain-joined Windows client as the administrator and add it to the domain using the TestAdmin credentials, then reboot. Then wait for the LAPS policy to apply... I've set a policy to update daily.

View the LAPS Password: As TestAdmin, from within AD Users and Computers go to View and select Advanced. Right-click Properties on the client and select the Attribute Editor tab. Scroll down and locate 'ms-Mcs-AdmPwd'; that's the Administrator password for that client.

The Fix...: To prevent TestAdmin from reading the ms-Mcs-AdmPwd attribute value, a slight amendment to the delegation is required. As the Domain Admin, right-click on the 'Workstation Test' OU, Properties, Security tab and then Advanced. Select the TestAdmin entry; it should say 'Full Control'. Remove 'All Extended Rights', 'Change Password' and 'Reset Password', and apply the change. As TestAdmin, open AD Users and Computers and open the computer's attributes: ms-Mcs-AdmPwd is no longer visible.

Did I Just Break Something...: Test the change by adding a computer object to the OU and adding a client to the domain. Introducing computers to the domain is still functional... no harm, no foul.

Final Thoughts: Removing the Extended Rights and Password permissions prevents the delegated account from reading the local administrator password from the ms-Mcs-AdmPwd AD attribute, without causing any noticeable problems. Watch for any future delegations, ensuring the permissions aren't restored by accident. Enjoy, and hope this was insightful.
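    The same audit can be run from PowerShell with the AdmPwd.PS module that ships with the LAPS management tools: Find-AdmPwdExtendedRights lists every principal holding extended rights (and therefore ms-Mcs-AdmPwd read access) on an OU. A sketch; the OU name is taken from the example above:

```powershell
# Requires the LAPS management tools (AdmPwd.PS module).
Import-Module AdmPwd.PS

# List who holds extended rights - and so can read the local admin
# password - on the test OU.
Find-AdmPwdExtendedRights -Identity 'Workstation Test' |
    Select-Object ObjectDN, ExtendedRightHolders
```

Running this before and after the delegation change is a quick way to confirm TestAdmin has dropped off the ExtendedRightHolders list.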

  • Code Signing PowerShell Scripts

    In this article, I'll describe the process of code signing PowerShell scripts from a Microsoft CA. I'll not cover how code signing adds security; simply put, code signing doesn't provide, nor was it intended to provide, a robust security layer. However, code signing does provide both authenticity and integrity: authenticity, in that the script was written or reviewed by a trusted entity and then signed; integrity, ensuring that once signed the script hasn't been modified, useful when deploying scripts or executing scripts via a scheduled task with a service account. Bypassing code signing requirements is simple: open ISE, paste in the code and press F8, instant bypass. However, my development 'Enterprise' system is not standard; ISE won't work as Constrained Language Mode prevents all but core functionality from loading, meaning no APIs, .NET, COM and most modules. As a note, even with the script code signed, ISE is next to useless with Constrained Language Mode enforced. Scripts require both signing and authorising in Applocker\WDAC, and will only execute from native PowerShell.

Back to it... This is a typical message when executing a PowerShell script on a system requiring code signing. To successfully execute the script, it must be signed with a digital signature from either a CA or a self-signed certificate. I'm not going to self-sign, it's filth, and I've access to a Microsoft Certificate Authority (CA) as part of the Enterprise. Log in to the CA, launch 'Manage' and locate the 'Code Signing' template, then 'Duplicate Template'. Complete the new template with the following settings:

General: Name the new certificate template with something meaningful and up the validity to 3 years, or to the maximum the corporate policy allows.
Compatibility: Update the Compatibility Settings and Certificate Recipient to 'Windows Server 2016' and 'Windows 10/Windows Server 2016' respectively.
Request Handling: Check 'Allow private key to be exported'.
Cryptography: Set 'Minimum key size' to 1024, 2048, 4096, 8192 or 16384. Select 'Requests must use one of the following providers:' and check 'Microsoft Enhanced RSA and AES Cryptographic Provider' ( description ).
Security: Ideally, enrolment is controlled via an AD group with both Read and Enroll permissions. Do not under any circumstances allow Write or Full Control.

Save the new template and then issue it by right-clicking on 'Certificate Templates' > New > 'Certificate Template to Issue'. From a client, logged on with an account that is a member of the 'CRT_PowerShellCodeSigning' group, launch MMC and add the Certificates snap-in for the Current User. Browse to Personal > Certificates and right-click in the empty space to the right, then click on 'All Tasks' > 'Request New Certificate'. Select the 'Toyo Code Signing' template and then click on 'Properties' to add some additional information: a Friendly Name and Description. Enrol the template.

Now right-click on the new 'Code Signing' certificate > All Tasks > Export. Select 'Yes, export the private key'. Ensure the 2 PKCS options are selected. Check 'Group or username (recommended)' and on the Encryption drop-down select 'AES256-SHA256'. Complete the wizard by exporting the .pfx file.

The final step is to sign a script with the .pfx file using PowerShell. Note that Set-AuthenticodeSignature expects a certificate object rather than a file path, so load the .pfx with Get-PfxCertificate:

Set-AuthenticodeSignature -FilePath "C:\Downloads\SecureReport9.4.ps1" -Certificate (Get-PfxCertificate "C:\Downloads\CodeSigning.pfx")

Open the newly signed script and at the bottom is the digital signature block. Launch PowerShell.exe and run the script. For those with Applocker\WDAC, the script then requires adding to the allow list by file hash. Now I'll be able to execute my own pentest script on my allegedly secure system and locate any missing settings... As always, thanks for your support.
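    To confirm the signature took, Get-AuthenticodeSignature reports the signing certificate and a status; a quick check, with the file path assumed from the example above:

```powershell
# Inspect the signature applied to the script. Status should read 'Valid'
# once the signing chain is trusted by the machine; 'UnknownError' usually
# means the CA chain isn't in the local certificate stores.
$sig = Get-AuthenticodeSignature -FilePath "C:\Downloads\SecureReport9.4.ps1"
$sig | Select-Object Status, StatusMessage,
    @{ Name = 'Signer'; Expression = { $_.SignerCertificate.Subject } }
```

This is also a handy sanity check before adding the script's hash to an Applocker\WDAC allow list.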

  • How to Delegate Active Directory OU's with PowerShell

    Today is a quick explanation of OU delegation using PowerShell, with usable examples and how to locate the GUID that identifies the object type being delegated. All the required scripts can be found on my Github ( here ).

Delegated Test Account: For demonstration purposes, the following is executed directly on the Domain Controller as a Domain Admin. Create a test user named 'SrvOps' and add it to the 'Server Operators' group; this effectively provides Administrator privileges on the DCs without access to AD. Create the following Global Groups: CompDele, UserDele and GroupDele, and add the SrvOps user to each. Create the following OUs: Computer, User and Group. Shift and right-click 'Active Directory Users and Computers', select 'Run as a different user', and enter the SrvOps credentials. Right-click on the Computer OU and you will notice there's no option to select New and an object type.

ADSI Edit and Object GUID: Close the AD snap-in, switch back to Domain Admin and launch 'adsiedit.msc'. Select 'Schema' from 'Select a well known Naming Context:' and OK. Scroll down and select 'CN=Computer' properties. On the 'Attribute Editor' tab, scroll down and locate 'schemaIDGUID'; this is the GUID object identity used for delegating Computer objects. It's not possible to copy the value directly, and double-clicking provides Hex or Binary values which can be copied. The following converts the Hex to the required GUID value.
$trim = ("86 7A 96 BF E6 0D D0 11 A2 85 00 AA 00 30 49 E2").Replace(" ","")
$oct1  = $trim.Substring(0,2)
$oct2  = $trim.Substring(2,2)
$oct3  = $trim.Substring(4,2)
$oct4  = $trim.Substring(6,2)
$oct5  = $trim.Substring(8,2)
$oct6  = $trim.Substring(10,2)
$oct7  = $trim.Substring(12,2)
$oct8  = $trim.Substring(14,2)
$oct9  = $trim.Substring(16,4)
$oct10 = $trim.Substring(20,12)
# The first 3 GUID components are stored little-endian, so reverse the byte order.
$strOut = "$oct4$oct3$oct2$oct1-$oct6$oct5-$oct8$oct7-$oct9-$oct10"
Write-Host $strOut
# result = BF967A86-0DE6-11D0-A285-00AA003049E2

The Script: Download the scripts from Github ( here ) and open with PowerShell ISE. Update the DN, the OU path, to the Computer OU created earlier. Execute the script, and repeat for the Users and Groups scripts. Relaunch 'Active Directory Users and Computers' as a different user and enter the SrvOps account credentials. Right-click on each of the OUs and select 'New'; you will notice SrvOps can now create objects relative to the name of the OU.

Final Considerations: Retrieving the 'schemaIDGUID' from ADSI Edit allows the delegation of pretty much any object type within AD, and for the most part a couple of minor tweaks to the scripts provided and you're set. Enjoy, and if you find this useful please provide some feedback via the homepage's comment box.
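    As an aside, the same GUID can be pulled straight from the schema with the ActiveDirectory module, skipping ADSI Edit and the manual byte shuffling, since the Guid byte-array constructor applies the little-endian reordering for you. A sketch, assuming RSAT's AD module is available:

```powershell
Import-Module ActiveDirectory

# Read the raw schemaIDGUID bytes for the Computer class from the schema.
$schemaNC = (Get-ADRootDSE).schemaNamingContext
$bytes = (Get-ADObject -SearchBase $schemaNC -Filter { name -eq 'Computer' } `
    -Properties schemaIDGUID).schemaIDGUID

# Guid's byte[] constructor performs the same reordering done manually above.
[Guid]::new($bytes)   # bf967a86-0de6-11d0-a285-00aa003049e2
```

Swap 'Computer' for any other schema class name to retrieve the GUID for that object type.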

  • Failure Deploying Applications with SCCM\MECM with Error 0x87d01106 and 0x80070005

    I encountered an issue with SCCM\MECM failing to deploy the LAPS application to clients and servers. This was previously working fine, but was now failing with a 'Past Due' error in Software Center. The AppEnforce.log produced the only meaningful SCCM error events, 0x87d01106 and 0x80070005.

0x80070005:
CMsiHandler::EnforceApp failed (0x80070005).
AppProvider::EnforceApp - Failed to invoke EnforceApp on Application handler(0x80070005).
CommenceEnforcement failed with error 0x80070005.
Method CommenceEnforcement failed with error code 80070005
++++++ Failed to enforce app. Error 0x80070005. ++++++
CMTrace Error Lookup reported 'Access denied'.

0x87d01106:
Invalid executable file C:\Windows\msiexec.exe
CMsiHandler::EnforceApp failed (0x87d01106).
AppProvider::EnforceApp - Failed to invoke EnforceApp on Application handler(0x87d01106).
CommenceEnforcement failed with error 0x87d01106.
Method CommenceEnforcement failed with error code 87D01106
++++++ Failed to enforce app. Error 0x87d01106. ++++++
CMTrace Error Lookup reported 'Failed to verify the executable file is valid or to construct the associated command line.' Source: Microsoft Endpoint Configuration Manager

Interestingly, testing revealed that .msi applications, configuration items (aka compliance) and WDAC policy deployments were affected, with .exe deployments remaining unaffected. Executing the install string from the administrator account also worked. Somewhat concerning, as SCCM deployments execute as System, the highest privilege possible, yet all application installs failed across the entire domain. At this point Google is normally your friend... but the results suggested PowerShell and the wrong user context; as this is an msi issue, those suggestions were not helpful. Clearly I'm asking the wrong question... When in doubt or... stuck, trawl the event logs; the SCCM logs weren't going to give up anything further. Fortunately, in fairly short order the following errors were located in the Windows Defender log.
    Microsoft Defender Exploit Guard has blocked an operation that is not allowed by your IT administrator. For more information please contact your IT administrator.
ID: D1E49AAC-8F56-4280-B9BA-993A6D77406C
Detection time: 2023-02-23T21:03:46.265Z
User: NT AUTHORITY\SYSTEM
Path: C:\Windows\System32\msiexec.exe
Process Name: C:\Windows\System32\wbem\WmiPrvSE.exe
Target Commandline: "C:\Windows\system32\msiexec.exe" /i "LAPS.x64.msi" /q /qn
Parent Commandline: C:\Windows\system32\wbem\wmiprvse.exe -Embedding
Involved File:
Inheritance Flags: 0x00000000
Security intelligence Version: 1.383.518.0
Engine Version: 1.1.20000.2
Product Version: 4.18.2301.6

Now I know the correct question to ask Google: 'D1E49AAC-8F56-4280-B9BA-993A6D77406C', with Attack Surface Reduction (ASR) being the culprit. The following is an extract from the Microsoft page:

'Block process creations originating from PSExec and WMI commands (D1E49AAC-8F56-4280-B9BA-993A6D77406C). This rule blocks processes created through PsExec and WMI from running. Both PsExec and WMI can remotely execute code. There's a risk of malware abusing the functionality of PsExec and WMI for command and control purposes, or to spread infection throughout an organization's network. Warning: Only use this rule if you're managing your devices with Intune or another MDM solution. This rule is incompatible with management through Microsoft Endpoint Configuration Manager because this rule blocks WMI commands the Configuration Manager client uses to function correctly.'

There is no fix, only a workaround: updating the ASR rule from Block mode to Audit mode in Group Policy. Open GPO Management and locate the ASR rules under Windows Components/Microsoft Defender Antivirus/Microsoft Defender Exploit Guard/Attack Surface Reduction. Open 'Configure Attack Surface Reduction Rules' and update value name 'D1E49AAC-8F56-4280-B9BA-993A6D77406C' from 1 to 2. Run gpupdate /force to refresh the GPOs on the client, then check the event log for event ID 5007 recording the change from Block to Audit mode. Test an SCCM application deployment to confirm the fix. One final check of the event log confirms event ID 1122 (audit) for the deployed application.
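    Where the setting isn't enforced by GPO, the same switch to Audit mode can be made locally with the Defender module; a sketch, treating the change as temporary until the GPO change is in place (a GPO-delivered setting will override the local preference):

```powershell
# ASR rule: Block process creations originating from PSExec and WMI commands.
$ruleId = 'D1E49AAC-8F56-4280-B9BA-993A6D77406C'

# Switch the rule to Audit mode so matching operations are logged, not blocked.
Add-MpPreference -AttackSurfaceReductionRules_Ids $ruleId `
                 -AttackSurfaceReductionRules_Actions AuditMode

# Confirm the configured action for each ASR rule.
Get-MpPreference | Select-Object AttackSurfaceReductionRules_Ids,
                                 AttackSurfaceReductionRules_Actions
```

After the change, audited (rather than blocked) operations appear as event ID 1122 in the Defender operational log, matching the final check above.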
