
  • Create a WMI Filter on a PDC with PowerShell

While building automation for domain deployment, OU structure, and delegation, I hit one of those "too hard to do right now" tasks that nearly slipped past me: scripting the creation of a WMI filter for the PDC Emulator role.

Time Source
The goal is to make sure the PDC, and only the PDC, manages authoritative time. This particular GPO includes a root NTP server setting, whether that's an IP address, a local atomic clock, or an external internet time source, and it's vital that only the PDC syncs with it. Every other domain controller should, in turn, sync from the PDC, maintaining a proper hierarchy and preventing clock chaos.

Not Keen on WMI Filters
I'll be honest, I don't generally like WMI filters in GPO. They introduce performance hits and slow down policy processing, especially in larger environments. But in this case, it's a pragmatic exception. The filter ensures that only the PDC receives and applies the external time configuration, keeping time consistent across the domain and preventing catastrophic drift when an upstream time source fails spectacularly.

Prevent Death and Destruction
I've experienced this first-hand: the time source collapsed in a heap, and the PDC leapt forward a full 24 hours in an instant. The aftermath was... memorable. It's hard to believe the chaos inflicted when time goes awry.

MaxPhaseCorrection - Thank MS's Default Value
To guard against that kind of chaos, the GPO settings MaxPosPhaseCorrection and MaxNegPhaseCorrection limit how far the system clock is allowed to jump forward or backward during synchronization. The issue is that Microsoft's default value is 86400 seconds, or 24 hours, and that overly generous setting has the potential to lead to carnage. The recommendation is to set both the positive and negative limits to 3600 seconds, or 1 hour. These settings and the WMI filter ensure the domain's time stays sane, stable, and immune to upstream meltdowns.
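The filter itself can be sketched as a WQL query. This is a minimal version, assuming the script targets the standard Win32_ComputerSystem class, where a DomainRole of 5 identifies the PDC Emulator:

```
Namespace: root\CIMv2
Query:     SELECT * FROM Win32_ComputerSystem WHERE DomainRole = 5
```

You can confirm a DC's role from PowerShell with `(Get-CimInstance Win32_ComputerSystem).DomainRole`, which returns 5 only on the PDC Emulator.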
GPO Settings - Prevent the Meltdown
These are the current settings provided from GitHub, with the annex at the end of the blog providing the technical details.

Computer Configuration/Policies/Administrative Templates/System/Windows Time Service: Global Configuration Settings
MaxNegPhaseCorrection = 3600
MaxPosPhaseCorrection = 3600

Computer Configuration/Policies/Administrative Templates/System/Windows Time Service/Time Providers:
Configure Windows NTP Client = Enabled
NTPServer = 192.168.30.1,0x8
Type = NTP
CrossSiteSyncFlags = 2
ResolvePeerBackoffMinutes = 15
ResolvePeerBackoffMaxTimes = 7
SpecialPollInterval = 1024
EventLogFlags = 0
Enable Windows NTP Client = Enabled
Enable Windows NTP Server = Enabled

Script Prep
Download both the script and the zip file from GitHub. Copy the files to the PDC into the 'C:\ADBackups\PDCNTP\' directory. Don't use another domain controller: running GPO scripts from a secondary DC introduces extra latency when connecting back to the PDC, which can cause failures. Extract the zip file, ensuring the GUID directory is nested within the 'PDCNTP' directory:
C:\ADBackups\PDCNTP\{A5214940-95CC-4E93-837D-5D64CA58935C}\
If you prefer to use your own GPO export, that's fine; as long as the path is correct, the script will automatically resolve the appropriate GUID and BackupID.

Execution of the Script
With Domain Admin privileges, open an elevated PowerShell window and execute the following commands:
cd C:\ADBackups\PDCNTP\ ; .\Create_WMI_NTP_PDC.ps1
Open Group Policy Management and confirm that the GPO has been created and linked to the Domain Controllers OU. Update the IP address for the NTP server; it's unlikely we share the same time server IP. The only remaining task for me is to integrate the NTP GPO into the fully automated Domain deployment script. Enjoy and thanks for your time.
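Once the GPO has applied, the time configuration can be verified from an elevated prompt on the PDC. These are standard w32tm commands, not part of the script:

```
w32tm /query /configuration   (confirm NtpServer and the MaxPos/NegPhaseCorrection values)
w32tm /query /status          (current source, stratum, and last successful sync)
w32tm /resync                 (force an immediate synchronization)
```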
ANNEX - Breakdown of GPO Settings

Computer Configuration > Policies > Administrative Templates > System > Windows Time Service > Global Configuration Settings

MaxNegPhaseCorrection
Current value: 3600 (1 hour)
Purpose: Defines the maximum number of seconds the clock can be moved backward when synchronizing time. If the correction exceeds this, Windows logs an event instead of applying the adjustment.
Relevance: Prevents the PDC from winding time back too far due to an erratic NTP source, which can break Kerberos authentication and replication.
Alternative values:
0 — disables large backward corrections entirely.
300 — 5 minutes (useful for high-availability or sensitive environments).
86400 — 24 hours (default Microsoft value, overly generous for a PDC).
4294967295 (0xFFFFFFFF) — disables the limit completely (not recommended).

MaxPosPhaseCorrection
Current value: 3600 (1 hour)
Purpose: Defines the maximum number of seconds the clock can be moved forward.
Relevance: Protects against catastrophic jumps when an upstream NTP server malfunctions. This provides a 1-hour safety window in either direction, preventing massive jumps while still allowing normal synchronization drift to be corrected automatically.
Alternative values:
300 — a conservative 5-minute correction limit.
900 — 15 minutes (a good balance for stable networks).
86400 — 24 hours (default).
4294967295 — disables the limit (unsafe on domain controllers).

Computer Configuration > Policies > Administrative Templates > System > Windows Time Service > Time Providers

Configure Windows NTP Client = Enabled
Enables policy control of NTP client behaviour to enforce the following parameters.

NTPServer = 192.168.30.1,0x8
Purpose: Defines the external NTP source the PDC syncs with. The ,0x8 flag tells Windows to use client mode.
Alternative formats and examples:
pool.ntp.org,0x8 — public NTP pool.
time.google.com,0x8 — Google's NTP service.
ntp.nist.gov,0x8 — US NIST time source.
gps-clock.local,0x8 — local GPS or atomic reference.
Relevance: This should be a reliable stratum-1 or stratum-2 source. The PDC is the only domain controller that should query an external NTP server.

Type = NTP
Purpose: Forces synchronization using the NTP protocol with the specified NTPServer.
Other valid values:
NT5DS — Default for domain-joined machines (syncs from domain hierarchy).
AllSync — Uses all available sync mechanisms (rarely needed).
NoSync — Disables synchronization entirely.
Relevance: For the PDC, NTP ensures it pulls time from the defined external source, not from another DC.

CrossSiteSyncFlags = 2
Purpose: Controls cross-site time synchronization.
Value meanings:
0 — Allow synchronization across all sites.
1 — Only sync from DCs in the same site.
2 — Never sync from DCs in other sites (recommended for PDC).
Relevance: Keeps the PDC isolated as the domain's root time authority, avoiding cross-site time loops.

ResolvePeerBackoffMinutes = 15
Purpose: Specifies how long the service waits before retrying after a failed NTP sync.
Alternatives:
5 — More aggressive retry.
30 — More relaxed retry, suitable for unreliable WANs.

ResolvePeerBackoffMaxTimes = 7
Purpose: Defines the maximum number of exponential backoff attempts before giving up.
Alternatives:
3 — Faster failover (good for testing).
10 — More patient retry window.

SpecialPollInterval = 1024
Purpose: Sets how often (in seconds) the PDC polls the NTP source — roughly every 17 minutes.
Alternatives:
3600 — Once per hour (lighter network load).
900 — Every 15 minutes (more aggressive for accuracy).
86400 — Once per day (not advised for volatile networks).
Relevance: Frequent polling maintains accurate time and compensates for drift.

EventLogFlags = 0
Purpose: Controls event logging verbosity.
Values:
0 — Only critical errors.
1 — Informational and error events.
2 — All events, including debugging.
Relevance: On a PDC, 0 keeps logs clean while still alerting to serious time issues.
Enable Windows NTP Client = Enabled
Purpose: Ensures the time service actively synchronizes with the defined NTP source.
Relevance: Essential for keeping the PDC accurate and stable.

Enable Windows NTP Server = Enabled
Purpose: Turns the PDC into an NTP server for the domain.
Relevance: Other DCs and domain members sync from the PDC rather than directly from the external NTP source, maintaining a clean and authoritative time hierarchy.

  • Zero Trust for the Home Lab - IPSec between Windows Domain and Linux using Certs (Part 7)

The Road to the World's Most Secure Home Lab: Implementing IPSec Between Windows Domain and Rocky Linux
So far, in the pursuit of the world's most secure home lab, I've implemented several key strategies. Today, I'll dive into the specifics of implementing IPSec between my Windows Domain and Rocky Linux.

What's Covered in This Blog
This post covers the implementation of IPSec, focusing on the integration between my Windows Domain and Rocky Linux.

What Is Zero Trust - Recap
Zero Trust is a security framework that assumes no user, device, or network segment is inherently trustworthy, regardless of where it sits in the network. The core principles include:
Verify explicitly: Always authenticate and authorize access.
Use least privilege access: Limit access to only what's necessary.
Assume breach: Design as if attackers are already in the network.

IPSec and Its Back Story
If you haven't already, start with Part 4, where I implement IPSec in a Windows environment using certificates. And yes, you guessed it, there's more certificate configuration ahead. Wooohoooo, living the dream!

Rocky Linux
Rocky Linux version 10 is today's Linux OS of choice and will be installed onto a Hyper-V platform. Rocky will serve as a Wazuh monitoring platform as part of the Zero Trust implementation for the home lab. The installation of Wazuh isn't covered here; it'll be the focus of the next article. Microsoft's SCOM might seem like the obvious choice for me, but there's a longer-term goal to move away from Microsoft. As the company pivots to a Cloud and AI-first strategy, on-prem support and partner benefits are steadily being erased. This shift removes my ability and choice to deploy what and where I want.

pfSense/Managed Switch VLAN
To support the Rocky Linux servers and Wazuh, a new VLAN on the 192.168.90.0/24 subnet will be required. This aligns with the Zero Trust principle of service segregation.
Initially, pfSense is configured to allow unrestricted traffic between VLAN 20, VLAN 30, and VLAN 90 in both directions. Don't forget to update the managed switch to also allow the new VLAN tag of 90. A step-by-step guide for setting up VLANs, firewalls, etc., for pfSense is available in Part 2.

IPSec Additional GPO for SSH
An additional GPO exemption allowing SSH (port 22) access between the member server and the Rocky Linux hosts will ease deployment. This allows copy and paste between host and VM.

Current Domain IPSec Settings
Crucial! Windows domain traffic only supports IKEv1, not that Microsoft will make this obvious or configurable via GPO. Make a note of the current IPSec settings; any deviation will result in IPSec negotiation failure. The following IPSec settings are known to work reliably. While some configurations using AES-GCM 128 and 256 are supported, AES-GCM 192 is not supported on Rocky Linux. If you plan to deviate from this setup, be sure to confirm that your chosen ciphers are supported on both Windows and Linux. A step-by-step guide for setting up IPSec in a Windows Domain is available in Part 4. I strongly recommend following that guide before attempting to add Linux to the mix.

IPSec Settings in GPO - Just for Info
Open GPO Management, navigate to the IPSec policies, and edit:
Computer Configuration > Policies > Windows Settings > Security Settings
Right-click Windows Defender Firewall with Advanced Security and select Properties.
Select the IPSec Settings tab.
Open Main Mode's Customize...
Select and edit the SHA384 integrity policy.
Make a note of all the settings.
Audit Quick Mode.
Audit Authentication, which is using the Trusted Root certificate.

DNS
Create a host record for the intended Rocky host.

Linux Packages for IPSec
The latest release of Rocky Linux is installed as a Hyper-V VM, with 6GB RAM and 250GB disk. Finally, the virtual NIC is set for VLAN 90.
During installation, disable the root account and ensure that the user you create is added to the wheel group to grant administrative (sudo) privileges.

Connection from Server
Once Rocky is properly installed, pfSense should assign it an IP address via DHCP. In my case, it was 192.168.90.100. Here's the command to set a static IP, gateway, and DNS.

SSH from PowerShell
I'll be connecting from my Windows server with PowerShell, no more Putty for me!

Hostname
Update the hostname to match the DNS entry created earlier.

Updates
Start where you mean to finish. Apply any updates to ensure stability and security fixes are applied.

AD Packages
Install the packages that will allow Rocky to be a domain member.

Strongswan
Install the following two packages in order.

Time and Timezone
Rocky will source its time from the DCs, not only to support authentication protocols but also to ensure that log timestamps are accurate and consistent across the environment. Search for your locale; mine's London. Copy the result and then set the timezone. Enable and start the time sync service. Update `chrony.conf` with the following, so it points at the DCs:
`server 192.168.20.245 iburst`
`server 192.168.20.247 iburst`
`server 192.168.90.249 iburst`
Restart the time service. Run the following commands once IPSec is implemented to confirm time and time sync.

AD or Not to AD
This step is included in case Rocky needs to join the domain. However, for its intended role as a monitoring solution, it's best to minimize open ports and limit connectivity between it and the domain to reduce the attack surface.

To Join the Domain
In Active Directory Users and Computers, pre-create the computer object `wazuh90` in the required OU. If you don't do this step, Rocky will be added to the AD Computers container. Discover your domain (use your actual domain name in ALL CAPS). Join the domain using an account with permissions. Pull password information for an Active Directory user. Pull some domain info.
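The steps above can be sketched as shell commands. This is a hedged sketch for a stock Rocky install: the connection name `eth0`, the gateway address, and the exact package names are assumptions, so adjust them to your environment:

```
# Static IP, gateway, and DNS via NetworkManager (connection name and gateway assumed)
sudo nmcli con mod eth0 ipv4.method manual \
  ipv4.addresses 192.168.90.100/24 ipv4.gateway 192.168.90.1 \
  ipv4.dns "192.168.20.245 192.168.20.247"
sudo nmcli con up eth0

# Hostname to match the DNS record created earlier
sudo hostnamectl set-hostname wazuh90.toyo.loc

# Updates
sudo dnf update -y

# Typical AD-membership packages (assumed set)
sudo dnf install -y realmd sssd oddjob oddjob-mkhomedir adcli samba-common-tools

# strongSwan, with EPEL enabled first
sudo dnf install -y epel-release
sudo dnf install -y strongswan

# Timezone (London) and the chrony time sync service
timedatectl list-timezones | grep -i london
sudo timedatectl set-timezone Europe/London
sudo systemctl enable --now chronyd
```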
IPSec Certificate for Linux Preparation
Advanced certificate requests using version 3 templates are not supported through the traditional web enrollment interface (certsrv) unless you're using legacy systems like Windows XP or Server 2003. Clients running Windows Vista or newer cannot request v3 template certificates via this method due to compatibility limitations. Microsoft's recommended approach for handling version 3 templates is to use Certificate Enrollment Web Services (CEP/CES) or leverage Autoenrollment via Group Policy. Both support modern certificate features and provide a more secure and scalable enrollment process. I'm not deploying a CES server; that's for another day and another blog, and it's unnecessary for our needs. CES is mainly used by Windows clients for advanced certificate enrollment. Linux doesn't require it, since it still supports the legacy method.

New Linux Certificate Template
Let's prep a certificate. Open the CA management snap-in, and then right-click on Certificate Templates and Manage.

Duplicate a Certificate Template
Either duplicate the IPSec (Offline) certificate or the previously created 'Non-TPM' template for server or workstation.
General Tab: Set the validity period to 1 year.
Compatibility Tab: Set both Compatibility settings to Windows 2003. Failure to do this will mean the template won't be available in the certificate web console.
Request Handling Tab: Allow the private key to be exported.
Cryptography Tab: Set the Algorithm to Determined by CSP and key size to 2048.
Subject Name Tab: Set to Supply in the request.
Extensions Tab: Edit the Application Policies and add in:
- Client Authentication
- IP Security IKE Intermediate
- IP Security Tunnel Termination
- IP Security User
- Server Authentication
Security Tab: Add the user or group that will perform the certificate enrollment. Remove any group that auto-enrolls.

Publish the Certificate Template
Return to the main CA Management snap-in.
Right-click on Certificate Templates. Select New > Certificate Template to Issue > select Toyo Linux IPSec.

Certificate Enrollment
In this section, we'll walk through the process of requesting a certificate for a Linux system using the Windows CA web interface. SSH onto Rocky.

Private Key
A private key is generated locally to ensure it never leaves the system. A CSR is then created using that key to securely request a certificate from the CA without exposing the key itself. Create a working directory. Create a private key that remains on the host; I'll secure it shortly.

Create CSR
Create a CSR derived from the private key. Update the following with the FQDN of the Rocky host. Copy and paste into the SSH session. Cat the CSR, select all the text including the Begin and End Certificate Request lines, and press Enter to copy to the Windows clipboard.

Cert Request from CA Web Console
The CSR needs to be copied to the CA web console to complete the certificate enrollment. From the Windows Server, open a browser and enter the address of the CA Web server, e.g., https://certs.toyo.loc/certsrv.
Select Request a certificate.
Select Submit a certificate request by using a base-64-encoded CMC.
Paste the CSR into the Base-64-encoded window.
Select the Toyo Linux IPSec template.
Select Base 64 encoded.
Click on Download certificate.
Open the downloaded certificate with Notepad. Copy the entire contents to the clipboard.

Create the Certificate
Return to the SSH session. From this point onwards, every command will require sudo.
`sudo nano FQDN.crt` and paste the contents of the Windows clipboard.
Ctrl + O to output the contents to file. Ctrl + X to exit Nano.

Copy the Private Key and Certificate to Strongswan
The private key and the certificate need to be copied or moved to the strongswan directory and configured with the correct permissions. Copy the private key. Set the private key to be readable and writable only by the file's owner. Set root as the owner.
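The key and CSR steps above can be sketched with openssl. The file names use this build's FQDN, and the subject line is an assumption (the CA template supplies the extensions at issuance):

```shell
# Working directory for certificate material
mkdir -p ~/certs && cd ~/certs

# 2048-bit RSA private key, generated locally so it never leaves the host
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out wazuh90.toyo.loc.key

# Lock the key down to owner read/write only
chmod 600 wazuh90.toyo.loc.key

# CSR derived from the key; the CN must be the Rocky host's FQDN
openssl req -new -key wazuh90.toyo.loc.key -subj "/CN=wazuh90.toyo.loc" -out wazuh90.toyo.loc.csr

# Display the CSR so it can be copied into the CA web console
cat wazuh90.toyo.loc.csr
```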
Repeat the steps to secure the private key in the home directory. Copy the certificate to the strongswan x509 directory. Set the certificate permissions so the owner can write and everyone else can read.

Trusted Root CA
The root CA certificate is required on the local host to establish trust in certificates issued by that authority. Without it, the system cannot validate or trust incoming connections or services secured with those certificates. In the CA web console, click the Home link, then select Download a CA certificate, certificate chain, or CRL. Select Base 64 and then Download CA Certificate. Open with Notepad and copy the contents. Ensure you're in the 'certs' working directory. Open nano and paste the Base64-encoded root certificate from your clipboard into the file. Ctrl + O to output the contents to file. Ctrl + X to exit Nano.

Root Trust
Copy the root CA certificate to the trusted anchors directory so the system recognizes it as a valid certificate authority. Copy the root CA to anchors so the browser trusts sites on my domain. Refresh the system's trusted certificate store with the new certificate. Copy the root CA to the strongswan x509ca directory. Set the certificate permissions so the owner can write and everyone else can read.

Firewalls
The following commands permanently open the required ports and protocols for IPSec traffic. Note that port 4500 and the AH protocol are not needed for non-VPN traffic or this specific configuration.
sudo firewall-cmd --list-all
sudo firewall-cmd --permanent --add-port=500/udp
sudo firewall-cmd --permanent --add-protocol=esp
sudo firewall-cmd --permanent --add-port=4500/udp
sudo firewall-cmd --permanent --add-protocol=ah
sudo firewall-cmd --reload

Swanctl.conf and Not Strongswan
StrongSwan is an open-source implementation of the IPSec protocol suite, used to establish secure, encrypted connections between hosts or networks.
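The root-trust steps above, sketched as commands; the file name `rootca.cer` is an assumption for the Base64 CA certificate saved earlier:

```
# Trust the root CA system-wide (Rocky/RHEL trust store)
sudo cp rootca.cer /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract

# Make it available to strongSwan and set owner-write/world-read permissions
sudo cp rootca.cer /etc/strongswan/swanctl/x509ca/
sudo chmod 644 /etc/strongswan/swanctl/x509ca/rootca.cer
```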
It uses IKE (Internet Key Exchange), typically IKEv2, to negotiate and manage security associations. Naturally, Microsoft only supports IKEv2 for VPNs, so we're stuck with IKEv1. Configuration is handled through `swanctl.conf`, and the `swanctl` utility is used to load, manage, and monitor IPSec connections in real time. It supports certificates, EAP, and various authentication methods, making it ideal for inter-domain and subnet traffic, site-to-site, and remote access VPNs. `swanctl` is used because the legacy `ipsec` command is deprecated; `swanctl.conf` provides a more modular, flexible, and systemd-friendly way to manage StrongSwan.

Backup the original `swanctl.conf`. Create a new `swanctl.conf` with nano. Download my swanctl.conf from Github and paste it into nano.

Crucial! Update the highlighted values to exactly match your Windows domain. They're explicit, and any mismatch will prevent the IPSec tunnel from negotiating:
aes256-sha384-ecp384 = Key Exchange (Main Mode)
Integrity algorithm - SHA384
Encryption algorithm - AES-CBC 256
Key exchange algorithm - EC DH P-384
esp_proposals = aes128gcm128 = Data Protection (Quick Mode)
Encryption algorithm - AES-GCM 128
Integrity algorithm - AES-GMAC/GCM 128

In transport mode, only traffic that matches `local_ts` and `remote_ts` will be protected by IPSec. Any traffic not matching these rules will pass as normal, unencrypted traffic. Ctrl + O to output the contents to file. Ctrl + X to exit Nano. Instruct the Charon daemon to load plugins dynamically, making the setup more flexible and easier to manage across different use cases.

Start Strongswan Service
Up to this point, access to Rocky has primarily been from Windows via SSH. The upcoming steps may terminate your session, and any misconfiguration will terminate the connection. With that in mind, you may want to switch to direct console access before proceeding. Note: Ignore any errors or warnings for the sqlite plugin; it's harmless noise.
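My full swanctl.conf is on GitHub; as a hedged sketch only, the structure that matches the proposals above looks roughly like this (addresses, identities, and file names here are illustrative, not a drop-in config):

```
connections {
    windows-ipsec {
        version = 1                          # Windows domain IPSec is IKEv1 only
        local_addrs  = 192.168.90.100
        remote_addrs = 192.168.20.0/24
        proposals = aes256-sha384-ecp384     # Main Mode: AES-CBC 256 / SHA384 / ECP-384
        local {
            auth = pubkey
            certs = wazuh90.toyo.loc.crt     # certificate enrolled from the Windows CA
        }
        remote {
            auth = pubkey                    # peer validated against the trusted root CA
        }
        children {
            domain-traffic {
                mode = transport             # host-to-host protection, no tunnel
                esp_proposals = aes128gcm128 # Quick Mode: AES-GCM 128
                local_ts  = 192.168.90.100/32
                remote_ts = 192.168.20.0/24
                start_action = trap          # negotiate when matching traffic appears
            }
        }
    }
}
```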
Remove the SSH Exemption in GPO
Remove the IPSec GPO exemption for SSH between the Windows Server and Rocky that was added earlier.

Enable and Start Strongswan
Execute the following commands. Any misconfigurations, typos, or incorrect parameters in `swanctl.conf` will likely prevent the service from starting or successfully establishing an IPSec connection.
Enable the Strongswan service.
Start the Strongswan service.
Load all parameters stored in the `swanctl.conf` file.

Status of Strongswan
Let's go through a few configuration steps to verify that IPSec is running correctly and establishing a successful connection to the Windows endpoint. `swanctl.conf` does not allow exemptions and communicates exclusively over IPSec with Windows, so I found that using `nslookup` is a better way to test the initial connection with Windows domain controllers.
Check the status of the Strongswan service to ensure it is running and enabled. I prefer retrieving a backdated list of events, so I use the `-n` option. This is particularly useful when troubleshooting issues where viewing only the latest events might miss the critical error that triggered the problem.
`journalctl -u strongswan -f` displays StrongSwan events in real time as they occur.
`sudo swanctl --list-conns` displays all configured IPSec connections from the `swanctl.conf` file, including their settings and current status.
`sudo swanctl --list-certs` lists all loaded X.509 certificates, showing details like subject, issuer, validity period, and key usage.
`sudo tcpdump -n esp or udp port 500` captures and displays network packets that are either ESP (IPSec encrypted) traffic or use UDP port 500, which is commonly used for IKE (Internet Key Exchange) in IPSec.
Finally, let's examine the Windows side of the IPSec connection. Open `wf.msc` and navigate to either the Main Mode or Quick Mode section. There, you should see the IPSec connection established between Rocky Linux and the Windows Server.

It Never Works First Time....
Windows Firewall
From my experience, the following `journalctl` messages usually mean that IKE or ESP traffic is being blocked by a firewall, either on the Windows endpoint or by pfSense. If there are no matching log entries on the Windows server, I take it as a sign that the packets never made it through. In that case, I'll enable or check the firewall logs on pfSense to confirm if it's dropping the traffic.

journalctl -u strongswan -n 50
wazuh90.toyo.loc charon-systemd[8207]: sending packet: from 192.168.90.100[500] to 192.168.30.61[500] (180 bytes)
wazuh90.toyo.loc charon-systemd[8207]: creating delete job for CHILD_SA ESP/0x00000000/192.168.20.245
wazuh90.toyo.loc charon-systemd[8207]: CHILD_SA ESP/0x00000000/192.168.20.245 not found for delete
wazuh90.toyo.loc charon-systemd[8207]: giving up after 5 retransmits
wazuh90.toyo.loc charon-systemd[8207]: establishing IKE_SA failed, peer not responding
wazuh90.toyo.loc charon-systemd[8207]: creating acquire job for policy 192.168.90.100/32[udp/35655] === 192.168.20.245/32[udp/domain] with reqid {2}
wazuh90.toyo.loc charon-systemd[8207]: initiating Main Mode IKE_SA windows-ipsec[1317] to 192.168.20.245
wazuh90.toyo.loc charon-systemd[8207]: generating ID_PROT request 0 [ SA V V V V V ]
wazuh90.toyo.loc charon-systemd[8207]: sending packet: from 192.168.90.100[500] to 192.168.20.245[500] (180 bytes)
wazuh90.toyo.loc charon-systemd[8207]: creating delete job for CHILD_SA ESP/0x00000000/192.168.20.247
wazuh90.toyo.loc charon-systemd[8207]: CHILD_SA ESP/0x00000000/192.168.20.247 not found for delete
wazuh90.toyo.loc charon-systemd[8207]: giving up after 5 retransmits

Syntax with swanctl.conf
The first three log extracts show that the `swanctl.conf` file is misconfigured, either due to a typo or something in the syntax being incorrect. Starting the service immediately fails with an exit code, which usually points to a parsing error or a missing/invalid configuration directive.
sudo systemctl start strongswan.service
Job for strongswan.service failed because the control process exited with error code.
See "systemctl status strongswan.service" and "journalctl -xeu strongswan.service" for details.

Running `sudo swanctl --load-all` gives the same result, confirming that the daemon can't even load the connection definitions.

sudo swanctl --load-all
Job for strongswan.service failed because the control process exited with error code.
See "systemctl status strongswan.service" and "journalctl -xeu strongswan.service" for details.

Checking `journalctl -u strongswan -n 50` reveals that `charon-systemd` is shutting down with `status=22`, which typically means there's a configuration error (e.g., invalid parameters, wrong file paths for certificates, or unsupported options).

journalctl -u strongswan -n 50
wazuh90.toyo.loc systemd[1]: strongswan.service: Control process exited, code=exited, status=22/n/a
wazuh90.toyo.loc charon-systemd[2882]: SIGTERM received, shutting down
wazuh90.toyo.loc systemd[1]: strongswan.service: Failed with result 'exit-code'.
wazuh90.toyo.loc systemd[1]: Failed to start strongswan.service - strongSwan IPsec IKEv1/IKEv2 daemon using swanctl.

The final log extract, however, tells a slightly different story. Here, I can see the IKE negotiation starting, but it's failing with "header verification failed." This points to either an IKE proposal mismatch (e.g., incorrect algorithms or key sizes), a certificate identity issue, or even corrupted packets caused by a misbehaving firewall/NAT device.
journalctl -u strongswan -n 50
wazuh90.toyo.loc charon-systemd[22855]: 192.168.20.247 is initiating a Main Mode IKE_SA
wazuh90.toyo.loc charon-systemd[22855]: selected proposal: IKE:AES_CBC_256/HMAC_SHA2_384_192/PRF_HMAC_SHA2_384/ECP_384
wazuh90.toyo.loc charon-systemd[22855]: generating ID_PROT response 0 [ SA V V V V ]
wazuh90.toyo.loc charon-systemd[22855]: sending packet: from 192.168.90.100[500] to 192.168.20.247[500] (160 bytes)
wazuh90.toyo.loc charon-systemd[22855]: header verification failed
wazuh90.toyo.loc charon-systemd[22855]: received invalid IKE header from 192.168.20.247 - ignored

Thanks for Your Time and Support...
Another IPSec and certificate-based blog wrapped up, and just one more to go before my home lab's Zero Trust panacea of perfection is fully implemented. Honestly, I loved working on this one. My first Linux IPSec deployment in prepping for this blog was Linux-to-Linux, and it was smooth, stable, and just worked. Then I brought Windows into the mix… and suddenly I was questioning my life choices and the tech I've devoted my time to. Back in the Vista days, when I was running 100% OpenSuse, I really should have stayed the course. Next up: installing Wazuh on the Rocky 10 VM I've just prepped.

Related Posts:
Part 1 - Zero Trust Introduction
Part 2 - VLAN Tagging and Firewalls with pfSense
Part 3 - pfSense and 802.1x
Part 4 - IPSec for the Windows Domain
Part 5 - AD Delegation and Separation of Duties
Part 6 - Yubikey and Domain Smartcard Authentication Setup
Part 7 - IPSec between Windows Domain and Linux using Certs
Yet to complete:
Part 8 - Monitoring, IPS and IDS
Part 9 - DNS-over-HTTPS

  • Zero Trust for the Home Lab - Yubikey and Domain Smartcard Authentication Setup (Part 6)

The Road to the World's Most Secure Home Lab....
So far in the pursuit of the World's most secure home lab, the following have been implemented:
Related Posts:
Part 1 - Zero Trust Introduction
Part 2 - VLAN Tagging and Firewalls with pfSense
Part 3 - pfSense and 802.1x
Part 4 - IPSec
Part 5 - AD Delegation and Separation of Duties

What's Covered in this Blog
This post covers implementing YubiKey smart card authentication and how it's implemented with a Windows Enterprise CA.

What Is Zero Trust - Recap
Zero Trust is a security framework that assumes no user, device, or network segment is inherently trustworthy, regardless of where it sits in the network. The core principles include:
Verify explicitly - Always authenticate and authorize access.
Use least privilege access - Limit access to only what's needed.
Assume breach - Design as if attackers are already in the network.

How Smart Cards Address Zero Trust Security
Zero Trust is built on the principle of "never trust, always verify." It requires strict identity verification, least-privilege access, and continuous authentication.
Strong Identity Verification: Smartcards use embedded chips to store cryptographic keys securely. They require something you have (the card) and something you know (a PIN), making them ideal for strong, multi-factor authentication.
Credential Protection: Because authentication happens on the card itself, sensitive credentials are never exposed to the device, reducing the risk of phishing, keylogging, or malware-based theft.

How YubiKey Functions as a SmartCard
YubiKey devices support smart card functionality through their PIV (Personal Identity Verification) capability, which implements the NIST SP 800-73 standard. This allows organizations to use YubiKeys for authentication, signing, and encryption in enterprise environments. The PIV applet on the YubiKey securely stores cryptographic keys and certificates, enabling seamless authentication in Windows Active Directory domains.
Yubikey Core SmartCard Functionality
A YubiKey operates as a hardware security module that:
Stores private keys securely in tamper-resistant hardware
Performs cryptographic operations internally (signing, decryption)
Prevents private key material from ever leaving the device

Key Technical Components
Secure Element: A dedicated cryptographic processor with:
Protected memory for storing private keys and certificates
Hardware-based random number generation
Tamper-resistant design to prevent physical attacks

Certificate Storage Architecture
YubiKeys store certificates in a structured slot system:
24 Total Storage Slots: Slots 9a, 9c, 9d, and 9e are the primary slots used for certificates. Each slot can store one certificate/key pair.
Slot 9a: Authentication (typically used for workstation login)
Slot 9c: Digital Signature
Slot 9d: Key Management (encryption/decryption)
Slot 9e: Card Authentication

YubiKey Smart Card Implementation
To enable smartcard authentication in a Domain, we'll need to configure Group Policy and create a certificate smart card template.

GPO Settings
Enable the following 3 settings under Computer Configuration > Admin Templates > Windows Components > Smart Card:
Allow certificate with no extended key usage certificate attributes
Allow ECC certificates to be used for logon and authentication
Turn on Smart Card Plug and Play Service
Note: When the certificates employ Elliptic Curve, 'Allow ECC certificates to be used for logon and authentication' must be enabled.

YubiKey Smartcards Software
YubiKey functionality requires the following:
Drivers: Any system where the smartcard is used must have the appropriate drivers installed for the YubiKey to be recognized.
Management Software: The YubiKey Manager software is needed to configure the devices and set users' smartcard PINs.
Download: The software can be downloaded from https://www.yubico.com/support/download

YubiKey Manager
To configure a YubiKey, open the Manager application:
Insert the first YubiKey
Navigate to Interfaces
Deselect all USB types other than PIV
To set the user's PIN (the PIN the user enters during logon), navigate to Applications, select PIV > PIN Management > Configure PIN
Select 'Use Default' or enter 123456
Enter a new PIN

Smartcard Certificate Template
Log in to the Certificate Authority (CA) server using an account with Domain Admin rights or delegated CA Manager permissions. At the Run prompt, enter certsrv.msc, navigate to Certificate Templates, right-click, and select Manage. According to Yubico's guidance, you can use the Smartcard Logon template for deployment, and this was my intention. However, testing has shown that when the User certificate template is already issued and the Smartcard Logon template is later deployed for enrollment, the overlapping certificate purposes can lead to authentication failures and confusion over which certificate is presented during smartcard logon. Duplicate the Smartcard Logon template.
General tab:
Publish certificate in Active Directory
Do not automatically reenroll if a duplicate certificate exists in Active Directory
Compatibility tab:
Windows Server 2016
Windows 10 / Windows Server 2016
Request Handling tab:
Include symmetric algorithms allowed by this subject
For automatic renewal of smart card certificates, use the existing key if a new key cannot be created
Prompt the user during enrollment
Cryptography tab:
Provider Category: 'Key Storage Provider', Algorithm name: 'ECDH_P384' (ECDH_P521 isn't supported)
'Requests must use one of the following providers': Microsoft Smart Card Key Storage Provider
Request hash: SHA256
The table below compares the equivalent security levels of Rivest–Shamir–Adleman (RSA) and Elliptic Curve Cryptography (ECC).
It highlights how ECC achieves the same level of security with significantly shorter key lengths and lower computational overhead. As ECC key sizes grow modestly, the corresponding RSA key sizes increase disproportionately.

RSA (bits)   ECC (bits)   RSA:ECC ratio
512          112          4.6
1024         160          6.4
2048         224          9.1
3072         256          12.1
7680         384          20
15360        512          30

Security tab:
Add Authenticated Users with Autoenroll. A named AD group could be used instead for a more targeted enrollment.
Subject Name tab:
User principal name (UPN) is enabled
E-mail name is unchecked. Enrollment will fail if E-mail name is enabled and no email address is provided at enrollment; alternatively, populate the email address on the user's AD attributes instead of unchecking the option.

Smartcard Enrollment
Once the YubiKey drivers are installed on the client machine, the user can enroll for a Smartcard certificate.
Enrollment:
The user opens 'certmgr.msc'
Navigate to Personal > Certificates
Right-click on Certificates > All Tasks > Request New Certificate
Select Active Directory Enrollment Policy
Select the Smartcard template and enroll
During the enrollment process, when prompted, enter the PIN that was configured earlier during YubiKey setup. The YubiKey smartcard is now configured for user logon.

Smart Card Misconceptions and Important Next Steps
Windows password authentication is vulnerable to brute-force, dictionary, guessing, and phishing attacks. Smartcards significantly reduce these risks. Although commonly thought to eliminate passwords entirely, that's not entirely accurate. When "Smart card is required for interactive logon" is set on the user's AD account, the password is reset once to a random 120-character string. Failing to set that flag leaves the user's existing password unchanged, allowing them to continue logging in with their original, potentially insecure credentials.
That Password is still there for SSO: During the AS-REP stage of Kerberos authentication, the Key Distribution Center (KDC) includes the NTLM hash of this password in the PAC to support fallback to SSO when Kerberos is unavailable. The random password is highly resistant to offline cracking, even against well-resourced attacks using tools like John the Ripper. The user's own password is not so resistant.

Pass the Hash
Enabling smartcard authentication still leaves user accounts exposed to Pass-the-Hash attacks. As previously stated, when 'Smart card is required for interactive logon' is set, the user's password is reset to a long, random value, but it remains static indefinitely. Without regular password rotation, the account stays vulnerable to Pass-the-Hash. To mitigate this risk, use the script below to refresh user passwords daily; toggling the SmartcardLogonRequired attribute off and on forces AD to generate a fresh random password.

$scTrue = Get-ADUser -Filter * -Properties SmartcardLogonRequired |
    Where-Object { $_.SmartcardLogonRequired -eq $true } |
    Select-Object SamAccountName, SmartcardLogonRequired

foreach ($user in $scTrue) {
    Set-ADUser -Identity $user.SamAccountName -SmartcardLogonRequired:$false
    Set-ADUser -Identity $user.SamAccountName -SmartcardLogonRequired:$true
}

Important: Do not create a scheduled task on a Domain Controller running under a Domain Admin account to flip the smartcard attribute; this is reckless and a serious abuse of privileged credentials. The correct approach is to:
Create a dedicated service account with delegated 'Write' permissions to the smartcard logon attribute on the Users OU.
Assign the service account 'Log on as a batch job' rights on a hardened member server with the Active Directory tools installed.
Create a scheduled task that runs daily, encoding the PowerShell script in Base64 and embedding it directly in the Task Scheduler's action tab.

That's the Easy Part of Zero Trust Completed....
The home lab's in pretty good shape, certs everywhere, and a solid step toward Zero Trust.
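The Base64 scheduled-task step described above can be sketched as follows. This is a minimal sketch, not the author's exact setup: the script path, task name, and service account 'TRG\svc-scflip' are hypothetical placeholders, and the password should come from a secure store rather than being typed inline.

```powershell
# Encode the rotation script as Base64 (UTF-16LE, as -EncodedCommand expects)
$script  = Get-Content -Raw 'C:\Scripts\Refresh-ScPasswords.ps1'   # hypothetical path
$encoded = [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($script))

# Daily task running as the delegated service account, not a Domain Admin
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument "-NoProfile -ExecutionPolicy Bypass -EncodedCommand $encoded"
$trigger = New-ScheduledTaskTrigger -Daily -At 03:00

Register-ScheduledTask -TaskName 'Refresh-SmartcardPasswords' `
    -Action $action -Trigger $trigger `
    -User 'TRG\svc-scflip' -Password 'ChangeMeToSomethingSecure'   # hypothetical account
```

Registering with an explicit -User/-Password pair gives the task 'Run whether user is logged on or not' semantics, which is what a daily batch job needs.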
But let’s be honest, plastering certificates on every domain object just provides a false sense of security. There are still gaps; some can be secured, whilst others probably can't. Those Linux devices, the pfSense, the wifi AP and the printer all require my attention. The biggest challenge is monitoring: it's resource-hungry and needs an enormous amount of effort to configure correctly. Despite the effort, monitoring will form a major feature of the Zero Trust architecture, giving me insight and visibility into the Home Lab. There's no rest for the wicked and even less for those trying to keep the wicked at bay... thanks, and I hope the content so far is proving insightful. Next up is IPSec for Linux, after a break and some sleep. Related Posts: Part 1  - Zero Trust Introduction Part 2  - VLAN Tagging and Firewalls with pfSense Part 3 - pfSense and 802.1x Part 4 - IPSec for the Windows Domain Part 5  - AD Delegation and Separation of Duties Part 6  - Yubikey and Domain Smartcard Authentication Setup Part 7  - IPSec between Windows Domain and Linux using Certs Yet to complete Part 8 - Monitoring, IPS and IDS Part 9 - DNS-over-HTTPS

  • Windows PE add-on for the Windows ADK for Windows 11, version 22H2 Error

Windows ADK PE for Windows 11 22H2 fails to install completely, generating the following errors. Error 1: Clicking on the Windows PE tab crashes MMC with the error: Could not find a part of the path 'C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\x86\WinPE_OCs'. Google suggests the ADK PE isn't installed..... Error 2: It's not possible to update the boot images: Unable to open the specified WIM file. ---> System.Exception: Unable to open the specified WIM file. ---> System.ComponentModel.Win32Exception: The system cannot find the path specified. Something is definitely wrong and missing. Both errors suggest missing files, with Error 1 providing a path: C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\ Comparing the latest ADK PE for Windows 11 22H2 to an older installation, something is very wrong: the x86 and ARM directories are missing..... ADK PE for Windows 11 22H2 ADK PE for Windows 10 1809 Is this a one-off? The initial problem presented itself whilst upgrading MDT and ADK installed on a 2012 R2 server. A new instance of 2022 Server and Windows 11 were equally affected, and each instance of ADK PE was from a fresh download. Microsoft, I've got the contact details of some really good Test Managers. The Fix: Not wishing to faff and spend too much time comparing downloads against different versions (it was taking me away from my planned day of Hack the Box), the recommended fix is to download an earlier version of the ADK PE; 1809 for Windows 10 should do the trick. The Windows 11 branding is purely cosmetic; under the hood it identifies as Windows 10. Alternatively, copy the missing contents from a working installation; not recommended and a bit quick and dirty. It's functional with some minor scripting errors in the MDT deployment wizard.

  • Shift+F10 PXE Attack....nearly 4 years on

During MDT or ConfigMgr deployment of Windows 10, press Shift+F10 whilst Windows detects devices. A command prompt with SYSTEM privileges will pop up, allowing all sorts of shenanigans without being logged by a SIEM; those agents won't be running yet. Also, during Windows 10 upgrades, BitLocker drive encryption is disabled, allowing the same attack. This is an old issue, raised some 3 to 4 years ago.... Well, today on my test rig during a 1909 deployment, I was just curious, it can't still be vulnerable.... oops. The fix is pretty straightforward, although I can't take credit, that belongs to Johan Arwidmark and this post here

# Declare mount folders for DISM offline update
$mountFolder1 = 'D:\Mount1'
$mountFolder2 = 'D:\Mount2'
$WinImage = 'D:\MDTDeployment\Operating Systems\Windows 10 x64 1909\sources'

# Mount install.wim to the first mount folder
Mount-WindowsImage -ImagePath "$WinImage\install.wim" -Index 1 -Path $mountFolder1

# Mount winre.wim (inside install.wim) to the second mount folder
Mount-WindowsImage -ImagePath "$mountFolder1\Windows\System32\Recovery\winre.wim" -Index 1 -Path $mountFolder2

# Create folder for the DisableCMDRequest.TAG file in winre.wim
New-Item "$mountFolder2\Windows\setup\scripts" -ItemType Directory

# Create the DisableCMDRequest.TAG file in winre.wim
New-Item "$mountFolder2\Windows\setup\scripts\DisableCMDRequest.TAG" -ItemType File

# Commit changes to winre.wim
Dismount-WindowsImage -Path $mountFolder2 -Save

# Create folder for DisableCMDRequest.TAG in install.wim
New-Item "$mountFolder1\Windows\setup\scripts" -ItemType Directory

# Create the DisableCMDRequest.TAG file in install.wim
New-Item "$mountFolder1\Windows\setup\scripts\DisableCMDRequest.TAG" -ItemType File

# Commit changes to install.wim
Dismount-WindowsImage -Path $mountFolder1 -Save

  • Deploying without MDT or SCCM\MECM....

The best methods for deploying Windows are SCCM and then MDT, hands down. But what if you don’t have either deployment service? Seriously… despite all the step-by-step guides and even scripts claiming you can deploy MDT in 45 minutes, some still opt to manually deploy or clone Windows; maybe they never moved past RIS. The real question is: can Windows 10 and a suite of applications, including Office, be automated without fancy deployment tools? The short answer: yes, but it’s not pretty. There are problems that MDT and SCCM simply make disappear, and I’m not thrilled about dealing with those issues. Manual prep takes way more time, is less functional, and only starts to make sense if you have no more than a handful of Windows clients to deploy. If you ever consider doing it this way, it’s only for very limited scenarios. My recommendation: use the proper deployment services designed specifically for Windows. It’s faster, cleaner, and far less frustrating.
Pre-requisites
16GB USB 3.0 as a minimum, preferably 32GB
Windows 10 media
MS Office 2019
Chrome, MS Edge, Visual C++, Notepad++
Windows ADK
Windows Media
Download the Windows 10 ISO and double-click to mount it as D:\
Create a directory at C:\ named after the version of Windows, e.g. C:\Windows21H2\. Don't copy the contents of D:\ directly to the USB: install.wim is larger than the maximum file size FAT32 supports (4GB).
Copy the files from D:\ (Windows ISO) to C:\Windows21H2\
Split install.wim into 2GB files to support FAT32:
Dism /Split-Image /ImageFile:C:\Windows21H2\sources\install.wim /SWMFile:C:\Windows21H2\sources\install.swm /FileSize:2000
Delete C:\Windows21H2\sources\install.wim.
Insert the USB pen and format it as FAT32; in this case, it will be assigned as E:\
Copy the entire contents of C:\Windows21H2\ to E:\.
Applications
Create directory E:\Software, this is the root for all downloaded software to be saved to.
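The media-prep steps above can also be scripted. A minimal sketch using the DISM and Storage PowerShell modules, assuming the ISO is mounted as D:\ and the USB pen is E:\ (drive letters and the C:\Windows21H2 staging path are from the walkthrough above):

```powershell
# Stage the ISO contents locally (install.wim > 4GB won't fit on FAT32)
New-Item 'C:\Windows21H2' -ItemType Directory -Force
Copy-Item 'D:\*' 'C:\Windows21H2\' -Recurse

# Split install.wim into sub-2GB .swm files, then remove the original
Split-WindowsImage -ImagePath 'C:\Windows21H2\sources\install.wim' `
    -SplitImagePath 'C:\Windows21H2\sources\install.swm' -FileSize 2000
Remove-Item 'C:\Windows21H2\sources\install.wim'

# Format the USB as FAT32 and copy the staged files across
Format-Volume -DriveLetter E -FileSystem FAT32
Copy-Item 'C:\Windows21H2\*' 'E:\' -Recurse
```

Split-WindowsImage is the PowerShell equivalent of the Dism /Split-Image command shown above.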
Create the following sub-directories under E:\Software, and download the software to the relevant sub-directory. The unattended install command for each follows the directory name; it's important the correct file type is downloaded for the script to work correctly.
7Zip: cmd.exe /c 7z2107-x64.exe /S
Chrome: cmd.exe /c msiexec.exe /i GoogleChromeStandaloneEnterprise64.msi /norestart /quiet
Drivers: cmd /c pnputil.exe /add-driver Path/*.inf /subdirs /install
MS-VS-CPlus: cmd.exe /c vcredist_x86_2013.exe /S
MS-Win10-CU: cmd /c wusa.exe windows10.0-kb5011487-x64.msu /quiet /norestart
MS-Win10-SSU: cmd /c wusa.exe ssu-19041.1161-x64.msu /quiet
MS-Edge: cmd.exe /c msiexec.exe /i MicrosoftEdgeEnterpriseX64.msi /norestart /quiet
MS-Office2019: cmd.exe /c MS-Office2019\Office\Setup64.exe
NotepadPlus: cmd.exe /c npp.8.3.3.Installer.x64.exe /S
TortoiseSVN: cmd.exe /c msiexec.exe /i TortoiseSVN-1.14.2.29370-x64-svn-1.14.1.msi /qn /norestart
WinSCP: cmd.exe /c WinSCP-5.19.6-Setup.exe /VERYSILENT /NORESTART /ALLUSERS
Place any driver files in the 'Drivers' directory unpacked as *.inf files.
AutoUnattend
Download the ADK for Windows ( here ). Install only the 'Deployment Tools'. From the Start Menu open 'Windows System Image Manager', create a 'New Answer File' and save it to the root of E:\ (USB), naming the file 'AutoUnattend.xml'. I cheated at this point; I didn't fancy creating the AutoUnattend.xml from scratch, so I "borrowed" a pre-configured unattend.xml from MDT. To save you the pain, download the 'AutoUnattend.xml' from Github ( here ) and save it to the root of E:\ (USB). Within the autounattend.xml the following line is referenced to execute 'InstallScript.ps1' at first logon: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -executionpolicy bypass -file D:\software\InstallScript.ps1 Note that the PartitionID is '3' and the InstallFrom is updated from 'install.wim' to 'install.swm'.
The relevant answer-file values: WillShowUI 'OnError', DiskID '0', PartitionID '3', InstallFrom 'install.swm'. To select a different edition (the default is Education), run the following command with Admin rights:
dism /Get-WimInfo /WimFile:"d:\sources\install.wim"
Index : 1 Name : Windows 10 Education
Index : 2 Name : Windows 10 Education N
Index : 3 Name : Windows 10 Enterprise
Index : 4 Name : Windows 10 Enterprise N
Index : 5 Name : Windows 10 Pro
Edit the AutoUnattend.xml and update the MetaData value under OSImage to reflect the desired index value.
The Script
Download 'InstallScript.ps1' from ( here ) and save it to E:\Software.
A Brief Script Overview
The first action is to copy the Software directory to C:\ so it can be referenced between reboots. The script adds Registry settings to autologon as 'FauxAdmin' with a password of 'Password1234'. I strongly suggest changing the hardcoded password to something more secure. Warning: during the installation of Windows, when prompted for a new account, ensure it reflects the hardcoded name and password in InstallScript.ps1: 'FauxAdmin', 'Password1234'. A Scheduled Task is added that will execute at logon as 'FauxAdmin'. The default hostname is Desktop-####; you'll be asked to enter a new hostname. Pre-create a Computer object in AD with the planned hostname of the client being deployed. Domain credentials will be required with delegated permissions to add computer objects to the domain. Update InstallScript.ps1 with the correct FQDN and OU path:
$DomainN = "trg.loc"
$ouPath = "OU=wks,OU=org,DC=trg,DC=loc"
The Windows 10 CU and apps will install with various reboots, then a bit of a tidy removes the autologon and Scheduled Task, followed by a final reboot. To prevent an attempted re-installation or a repeated action, a 'check.txt' file is updated at the end of each step; if validated $true, the step is skipped.
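As an aside, the dism edition listing above has a straight PowerShell equivalent, assuming the same mounted ISO path:

```powershell
# List the editions (indexes) inside install.wim
Get-WindowsImage -ImagePath 'D:\sources\install.wim' |
    Select-Object ImageIndex, ImageName
```

The ImageIndex values map directly to the MetaData value set under OSImage in AutoUnattend.xml.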
Deployment
Boot the PC and enter the BIOS\UEFI. Set UEFI to boot first from USB, then F10 to save and exit. Insert the USB and boot. Setup will start and prompt for disk partitioning; delete the volumes and create new default partitions. OK, Cortana. Create an account of 'fauxadmin' + 'Password1234'; these account details are hardcoded in the script. At initial logon, the PowerShell will launch. The process is complete when the client has been added to the domain and rebooted. Warning: now reset the FauxAdmin account's password; don't forget it's hardcoded in the script and could allow an attacker to gain access if the password isn't updated. Notes: The unattended disk partitioning proved to be unreliable and required manual intervention some of the time; this step is now manual. It is assumed that the USB during deployment will map to D:\; this is hardcoded for the Scheduled Task. Hiding Cortana resulted in removing the prompt for a new admin account; it's considered a security benefit to create a new admin account and disable the built-in Administrator with SID 500.

  • Managing Local Admin Passwords with LAPS

How are you managing your local administrator passwords? Are they stored in a spreadsheet on a network share, or worse, is the same password used everywhere? Microsoft LAPS (Local Administrator Password Solution) could be the answer. LAPS is a lightweight tool that, with a few simple GPO settings, automatically randomizes local administrator passwords across your domain. It ensures each client and server has a unique, securely managed password, removing the need for spreadsheets or manual updates.
Download LAPS from the Microsoft site.
Copy the file to the Domain Controller and ensure the account you are logged on with has 'Schema Admin'. Install only the Management Tools; as it's a DC, it's optional whether to install the 'Fat Client UI'. Schema updates should always be performed on a DC directly.
Open PowerShell and run the following command after seeking approval: Update-AdmPwdSchema
SELF will need updating on the OUs for your workstations and servers. Add SELF as the Security Principal and select 'Write ms-Mcs-AdmPwd'.
Now change the GPO settings on the OUs. The default is 14 characters, but I would go higher and set it above 20.
Install LAPS on a client and select only the AdmPwd GPO Extension.
On the Domain Controller open the LAPS UI, search for a client and select Set.
Once the password has reset, open the properties of the client and check ms-Mcs-AdmPwd for the new password. Now every 30 days the local Admin password will be automatically updated and unique.
Deploy the client with ConfigMgr to the remaining estate.
By default Domain Admins have access to read the password attribute, and this can be delegated to a Security Group. AND.....this is the warning.....any delegated privileges that allow delegated Computer management and the 'Extended Attributes' can also read 'ms-MCS-AdmPwd'.
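The schema update and the SELF/read delegations described above can all be done with the AdmPwd.PS module that ships with the LAPS Management Tools. A sketch; the OU path and the 'TRG\LAPS-Readers' group are placeholders to adapt to your domain:

```powershell
Import-Module AdmPwd.PS

# Extend the schema with the ms-Mcs-AdmPwd attributes (Schema Admin required)
Update-AdmPwdSchema

# Grant SELF write access so machines can store their own passwords
Set-AdmPwdComputerSelfPermission -OrgUnit 'OU=Workstations,DC=trg,DC=loc'

# Delegate password reads to a dedicated group rather than relying on Domain Admins
Set-AdmPwdReadPasswordPermission -OrgUnit 'OU=Workstations,DC=trg,DC=loc' `
    -AllowedPrincipals 'TRG\LAPS-Readers'

# Retrieve a managed password without the Fat Client UI
Get-AdmPwdPassword -ComputerName 'WKS01'
```

Delegating reads to a named group keeps the audit trail cleaner than piggybacking on Domain Admin membership.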

  • Using SCOM to Monitor AD and Local Accounts and Groups

For those that have deployed SCOM without ACS or another monitoring service, but don't have a full-blown IDS\IPS: with a little effort, it's possible to at least monitor and alert when critical groups and accounts are modified. As a free alternative, consider ELK (Elasticsearch) or Security Onion. The following example configures SCOM to alert when Domain Admins is updated.
On the Authoring tab, Management Pack Objects, Rules, select 'NT Event Log (Alert)'.
Create a new Management Pack if required; don't ever use the default MP.
The 'Rule Name' should have an aspect that is unique and shared by all subsequent rules to assist searching later on. Rules that monitor Groups or Accounts will be prefixed with 'GpMon'.
The 'Rule Target' in this case is 'Windows Domain Controllers'; it's a domain group.
Change the 'Log Name' to 'Security'.
Add Event ID 4728 (A member was added to a security-enabled global group).
Update the Event Source to 'Contains' with a value of 'Domain Admins'.
Update the priorities to High and Critical.
Sit back, grab a coffee (or 2) and wait whilst the rule is distributed to the Domain Controllers; this can take a while. Test the rule by adding a group or account to Domain Admins; in the SCOM Monitoring tab, an alert will almost immediately appear with full details.
Now for the laborious bit, create further monitors for the following:
Server Operators
Account Operators
Print Operators
Schema and Enterprise Admins
Any delegation or role-up groups
SCCM Administrative groups
CA Administrative groups
That's the obvious groups covered. Now target all Windows Servers and Clients (if SCOM has been deployed to the clients):
Local accounts for creation, addition to local groups and password resets.
Applocker to alert on any unauthorised software being installed or accessed.
Finally, here's what Microsoft recommends. With a few hours of effort you'll have better visibility of the system and any changes to those critical groups.
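To confirm the rule has events to fire on, the same 4728 entries can be checked directly on a DC with PowerShell. A sketch; matching 'Domain Admins' against the message text is an assumption about your group naming:

```powershell
# Recent additions to security-enabled global groups, filtered to Domain Admins
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4728 } -MaxEvents 50 |
    Where-Object { $_.Message -match 'Domain Admins' } |
    Select-Object TimeCreated, Id, Message
```

Run this on a Domain Controller with an account allowed to read the Security log; an empty result after a test group change means the SCOM rule will have nothing to alert on either.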

  • Always Patch Before Applocker or Device Guard are Deployed.

Labs don't tend to follow best practices or any security standards; they're quick, dirty installations for developing and messing around. Here's some food for thought the next time you want to test Applocker or Windows Defender Application Control (WDAC), aka Device Guard: you may wish to at least patch. For the most part, deploying Domain Infrastructure, scripts and services works great, until Device Guard is deployed to an unpatched Windows 11 client. Firstly the steps on how to configure Device Guard, then the fun... The DeviceGuardBasic.ps1 script can be downloaded from ( here ). Run the script as Admin and point the Local GPO to Initial.bin following the help. Device Guard is set to enforced; no audit mode for me, that's for wimps, been here hundreds of times......what's the worst that can happen..... arrghhhhh. The first indication Windows 11 had issues was 'Settings' crashing upon opening. This isn't my first rodeo: straight to the event logs. Ah, a bloodbath of red Code Integrity errors complaining that a file hasn't been signed correctly. How could this be.... they are Microsoft files. This doesn't look good; the digital signature can't be verified, meaning the signing certificate isn't in the computer's Root Certificate Store. This is not the first time I've seen the 'Microsoft Development PCA 2014' certificate. A few years back a sub-optimal Office 2016 update prevented Word, PowerPoint and Excel from launching; it was Applocker protecting me from the Microsoft Development certificate at that time. Well done Microsoft, I see your test and release cycle hasn't improved. A Windows update and all is fine….right.....as if. I'm unable to click on the Install updates button; it's part of Settings and no longer accessible. Bring back Control Panel. No way I'm waiting for Windows to get around to installing the updates by itself. The choices: Disable Device Guard by removing the GPO and deleting the SIPolicy.p7b file.
Create an additional policy based on hashes. Start again: 2 hours effort, most of that waiting for updates to install. Creating an additional policy based on hashes and then merging it into the 'initial' policy allows for testing Device Guard's behaviour: does Device Guard prevent untrusted and poorly signed files from running when hashes are present? The observed behaviour is for the Device Guard policy to create hashes for unsigned files as a fallback. The new and improved Device Guard script, aptly named 'DeviceGuard-withMerge.ps1', can be downloaded from ( here ). The only additional lines of note are the New-CIPolicy to create hashes only for the "C:\Windows\SystemApps" directory and the Merge-CIPolicy to combine the 2 XML policy files.

New-CIPolicy -Level Hash -FilePath $HashCIPolicy -UserPEs 3> $HashCIPolicyTxt -ScanPath "C:\Windows\SystemApps\"
Merge-CIPolicy -PolicyPaths $InitialCIPolicy,$HashCIPolicy -OutputFilePath $MergedCIPolicy

The result: 'Settings' now works despite Microsoft's best efforts to ruin my day. Creating Device Guard policies based on hashes for files incorrectly signed by Microsoft's internal development CA resolves the issue. Below is the proof, 'Settings' is functional even with those dodgy files. Conclusion: this may come as a shock to some….. Microsoft does make mistakes and release files incorrectly signed… shocking. Device Guard will allow files to run providing the hashes are present, even when incorrectly signed. Did I learn something? Hell yeah! Always patch before deploying Device Guard or Applocker. The time spent faffing resolving the issue far exceeded the time it would have taken to patch in the first place.
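For completeness, the merged XML still has to be converted to the binary SIPolicy.p7b that the Local GPO points at. A sketch of that final step; the file paths are assumptions, not the ones from the downloadable script:

```powershell
# Paths are illustrative; match them to wherever your policy XML files live
$MergedCIPolicy = 'C:\CI\MergedPolicy.xml'

# Compile the XML policy into the binary format Code Integrity consumes
ConvertFrom-CIPolicy -XmlFilePath $MergedCIPolicy -BinaryFilePath 'C:\CI\SIPolicy.p7b'
```

Point the 'Deploy Windows Defender Application Control' GPO setting at the resulting .p7b and reboot for the merged policy to take effect.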

  • LAPS Leaks Local Admin Passwords

On a previous blog ( here ), LAPS (Local Administrator Password Solution) was installed. LAPS manages and updates the local Administrator passwords on clients and member servers, controlled via GPO. Only Domain Admins have default permission to view the local administrator password for clients and member servers. Access to view the passwords by non-Domain Admins is via delegation, and herein lies the problem: access to the local administrator passwords may be delegated unintentionally. This could lead to a serious security breach, leaking the local admin account passwords of all computer objects to those that shouldn't have access. This article will demonstrate a typical delegation for adding a computer object to an OU and how to tweak the delegation to prevent access to the ms-Mcs-AdmPwd attribute.
Prep Work
There is some prep work: LAPS is required to be installed and configured, follow the above link. At least 1 non-domain-joined client, preferably 2, e.g. Windows 10 or 11 Enterprise. A test account, mine's named TestAdmin, with no privileges or delegations, and an OU named 'Workstation Test'. Ideally AD groups would be used rather than delegating to TestAdmin directly; delegating the account is just easier for demonstration purposes.
Delegation of Account
Open Active Directory Users and Computers or type dsa.msc in the run command. With a Domain Admin account right-click on the 'Workstation Test' OU, Properties, Security tab and then Advanced. Click Add and select TestAdmin as the principal. Select, Applies to: This Object and all Descendant Objects. In the Permission window below select: Create Computer Objects, Delete Computer Objects. Apply the change. This is a 2-step process; repeat, this time selecting Applies to: Descendant Computer Objects and Full Control in the Permissions window.
Test Delegation
Log on to a domain workstation with RSAT installed and open Active Directory Users and Computers.
Test by pre-creating a Computer object: right-click on the OU, select New > Computer, and type the hostname of the client to be added. Log on to a non-domain-joined Windows client as the administrator and add it to the domain using the TestAdmin credentials, then reboot. Then wait for the LAPS policy to apply.......I've set a policy to update daily.
View the LAPS Password
As TestAdmin, from within AD Users and Computers go to View and select Advanced Features. Right-click the client, select Properties, then the Attribute Editor tab. Scroll down and locate 'ms-Mcs-AdmPwd'; that's the Administrator password for that client.
The Fix....
To prevent TestAdmin from reading the ms-Mcs-AdmPwd attribute value, a slight amendment to the delegation is required. As the Domain Admin, right-click on the 'Workstation Test' OU, Properties, Security tab and then Advanced. Select the TestAdmin entry; it should say 'Full Control'. Remove 'All Extended Rights', 'Change Password' and 'Reset Password' and apply the change. As TestAdmin, open AD Users and Computers and open the Computer's attributes: ms-Mcs-AdmPwd is no longer visible.
Did I Just Break Something......
Test the change by adding a computer object to the OU and adding a client to the domain. Introducing computers to the domain is still functional... no harm, no foul.
Final Thoughts
Removing the Extended Rights and Password permissions prevents the delegated account from reading the local administrator password from the ms-Mcs-AdmPwd AD attribute without causing any noticeable problems. Watch for any future delegations, ensuring the permissions aren't restored by accident. Enjoy and hope this was insightful.
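The closing advice about watching future delegations can be automated with the AdmPwd.PS module's audit cmdlet, which reports every principal holding extended rights, and therefore ms-Mcs-AdmPwd read access, on an OU. A sketch using the OU name from this walkthrough:

```powershell
Import-Module AdmPwd.PS

# Lists each trustee with ExtendedRight access under the OU - review for accidental grants
Find-AdmPwdExtendedRights -Identity 'Workstation Test' | Format-List
```

Running this periodically (or after any delegation change) catches a re-granted 'All Extended Rights' before it leaks passwords.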

  • Code Signing PowerShell Scripts

In this article, I'll describe the process of Code Signing PowerShell scripts from a Microsoft CA. I'll not cover how Code Signing adds security; simply put, Code Signing doesn't provide, and was never intended to provide, a robust security layer. However, Code Signing does provide both Authenticity and Integrity: Authenticity in that the script was written or reviewed by a trusted entity and then signed; Integrity in that once signed, the script hasn't been modified, useful when deploying scripts or executing scripts via a scheduled task with a service account. Bypassing Code Signing requirements is simple: open ISE, paste in the code and F8, instant bypass. However, my development 'Enterprise' system is not standard: ISE won't work as Constrained Language Mode prevents all but core functionality from loading, meaning no APIs, .NET, COM and most modules. As a note, even with the script code signed, ISE is next to useless with Constrained Language Mode enforced. Scripts require both signing and authorising in Applocker\WDAC and will only execute from native PowerShell. Back to it..... This is a typical message when executing a PowerShell script on a system requiring Code Signing. To successfully execute the script, it must be signed with a digital signature from either a CA or a Self-Signed certificate. I'm not going to Self-Sign, it's filth, and I've access to a Microsoft Certificate Authority (CA) as part of the Enterprise. Log in to the CA, launch 'Manage' and locate the 'Code Signing' template, then 'Duplicate Template'. Complete the new template with the following settings:
General: Name the new certificate template with something meaningful and up the validity to 3 years or the maximum the corporate policy allows.
Compatibility: Update the Compatibility Settings and Certificate Recipient to 'Windows Server 2016' and 'Windows 10/Windows Server 2016' respectively.
Request Handling: Check 'Allow private key to be exported'.
Cryptography: Set 'Minimum key size' to 1024, 2048, 4096, 8192 or 16384. Select 'Requests must use one of the following providers:' and check 'Microsoft Enhanced RSA and AES Cryptographic Provider' ( description ).
Security: Ideally, enrolment is controlled via an AD Group with both READ and Enrol permissions. Do not under any circumstances allow WRITE or FULL.
Save the new template and then issue it by right-clicking on 'Certificate Templates' > New > 'Certificate Template to Issue'. From a client, logged on with an account that is a member of the 'CRT_PowerShellCodeSigning' group, launch MMC and add the Certificates snap-in for the Current User. Browse to Personal > Certificates and right-click in the empty space to the right, then click 'All Tasks' > 'Request New Certificate'. Select the 'Toyo Code Signing' template and then click 'Properties' to add some additional information: a Friendly Name and Description. Enrol the template. Now right-click on the new 'Code Signing' certificate > All Tasks > Export. Select 'Yes, export the private key'. Ensure the 2 PKCS options are selected. Check 'Group or username (recommended)' and on the Encryption drop-down select 'AES256-SHA256'. Complete the wizard by exporting the .pfx file. The final step is to sign a script with the .pfx file using PowerShell. Note that Set-AuthenticodeSignature expects a certificate object rather than a file path, so load the .pfx first:

$cert = Get-PfxCertificate -FilePath "C:\Downloads\CodeSigning.pfx"
Set-AuthenticodeSignature -FilePath "C:\Downloads\SecureReport9.4.ps1" -Certificate $cert

Open the newly signed script; at the bottom of the script is the digital signature. Launch PowerShell.exe and run the script. For those with Applocker\WDAC, the script requires adding to the allow list by file hash. Now I'll be able to execute my own Pentest script on my allegedly secure system and locate any missing settings..... As always thanks for your support.
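A small addition worth making when signing: a timestamp keeps the signature valid after the signing certificate expires, and Get-AuthenticodeSignature confirms the result. A sketch; the timestamp URL is just one example of a public Authenticode timestamp service, and the file paths follow the example above:

```powershell
$cert = Get-PfxCertificate -FilePath 'C:\Downloads\CodeSigning.pfx'

# Sign with SHA-256 and countersign with a timestamp
Set-AuthenticodeSignature -FilePath 'C:\Downloads\SecureReport9.4.ps1' `
    -Certificate $cert -HashAlgorithm SHA256 `
    -TimestampServer 'http://timestamp.digicert.com'

# Verify: Status should report Valid
Get-AuthenticodeSignature -FilePath 'C:\Downloads\SecureReport9.4.ps1' |
    Select-Object Status, StatusMessage, SignerCertificate
```

Without the timestamp, scripts signed with this certificate stop validating the moment the certificate expires, which matters with only a 3-year validity.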

  • How to Delegate Active Directory OU's with PowerShell

    Today is a quick explanation of OU delegation using PowerShell, with usable examples and how to locate the GUID that identifies the object type being delegated. All the required scripts can be found on my GitHub ( here ).

    Delegated Test Account: For demonstration purposes, the following is executed directly on the Domain Controller as a Domain Admin. Create a test user named 'SrvOps' and add it to the 'Server Operators' group. This effectively provides Administrator privileges on the DCs without access to AD. Create the following Global Groups: CompDele, UserDele and GroupDele, and add the SrvOps user to each. Create the following OUs: Computer, User and Group. Shift and right-click 'Active Directory Users and Computers', select 'Run as a Different User', and enter the SrvOps credentials. Right-click the Computer OU and you will notice there are no options under 'New' to select an object type.

    ADSI Edit and Object GUID: Close the AD snap-in. Back as Domain Admin, launch 'adsiedit.msc'. Select 'Schema' from the 'Select a well known Naming Context:' drop-down and click OK. Scroll down and open 'CN=Computer' properties. On the 'Attribute Editor' tab, scroll down and locate 'schemaIDGUID'. This is the GUID object identifier used for delegating Computer objects. It's not possible to copy the value directly, and double-clicking provides Hex or Binary values which can be copied. The following converts the Hex to the required GUID value.
    $trim = ("86 7A 96 BF E6 0D D0 11 A2 85 00 AA 00 30 49 E2").replace(" ","")
    $oct = $trim
    $oct1 = $oct.substring(0,2)
    $oct2 = $oct.substring(2,2)
    $oct3 = $oct.substring(4,2)
    $oct4 = $oct.substring(6,2)
    $oct5 = $oct.substring(8,2)
    $oct6 = $oct.substring(10,2)
    $oct7 = $oct.substring(12,2)
    $oct8 = $oct.substring(14,2)
    $oct9 = $oct.substring(16,4)
    $oct10 = $oct.substring(20,12)
    $strOut = "$oct4$oct3$oct2$oct1-$oct6$oct5-$oct8$oct7-$oct9-$oct10"
    Write-Host $strOut
    # Result: BF967A86-0DE6-11D0-A285-00AA003049E2

    The Script: Download the scripts from GitHub ( here ) and open with PowerShell ISE. Update the DN, the OU path, to the Computer OU created earlier. Execute the script, then repeat for the Users and Groups scripts. Relaunch 'Active Directory Users and Computers' as a different user and enter the SrvOps account credentials. Right-click each of the OUs and select 'New'. You will notice SrvOps can now create objects relative to the name of the OU.

    Final Considerations: Retrieving the 'schemaIDGUID' from ADSI Edit allows the delegation of pretty much any object type within AD; for the most part, a couple of minor tweaks to the scripts provided and you're set. Enjoy, and if you find this useful please provide some feedback via the homepage's comment box.
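    As an aside, the manual byte-swapping above can be handed off to .NET: the Guid constructor that takes a byte array applies the same mixed-endian layout automatically (the first three fields reversed, the rest in order). A minimal sketch using the same schemaIDGUID bytes:

```powershell
# Convert raw schemaIDGUID bytes to a GUID; [guid]::new(byte[]) performs
# the per-field byte reversal that the script above does by hand.
$hex   = "86 7A 96 BF E6 0D D0 11 A2 85 00 AA 00 30 49 E2"
$bytes = [byte[]]($hex -split ' ' | ForEach-Object { [convert]::ToByte($_, 16) })
$guid  = [guid]::new($bytes)
$guid.ToString().ToUpper()
# Result: BF967A86-0DE6-11D0-A285-00AA003049E2
```

    The same cast works on the value returned by AD itself: Get-ADObject against the schema partition with -Properties schemaIDGUID returns the attribute as a byte array, so the hex copy-and-paste from ADSI Edit can be skipped entirely.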

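    For completeness, the kind of ACL change the delegation scripts apply can be sketched as follows. This is an assumption about the approach, not a copy of the GitHub scripts; the OU distinguished name and domain are placeholders, and it assumes the RSAT ActiveDirectory module and its AD: drive are available.

```powershell
# Sketch: grant the CompDele group Create/Delete rights for Computer
# objects on an OU, using the schemaIDGUID derived in the article.
Import-Module ActiveDirectory

$ouDN  = "OU=Computer,DC=contoso,DC=com"   # placeholder DN - update to suit
$group = Get-ADGroup "CompDele"
$sid   = [System.Security.Principal.SecurityIdentifier]$group.SID

# schemaIDGUID for the Computer class, as converted earlier.
$computerGuid = [guid]"BF967A86-0DE6-11D0-A285-00AA003049E2"

# Read the OU's ACL, append the new ACE, and write it back.
$acl = Get-Acl -Path ("AD:\" + $ouDN)
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule(
    $sid, "CreateChild,DeleteChild", "Allow", $computerGuid, "All")
$acl.AddAccessRule($ace)
Set-Acl -Path ("AD:\" + $ouDN) -AclObject $acl
```

    Swapping in the schemaIDGUID of another class (User, Group, and so on) and the matching delegation group is all that separates the three scripts.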