- Zero Trust for the Home Lab - Radius and 802.1x (Part 3)
The Road to the World's Most Secure Home Lab.... So far in the pursuit of the World's Most Secure Home Lab, the following have been implemented: Part 1 - Zero Trust Introduction Part 2 - VLAN Tagging and Firewalls with pfSense What's Covered in this Blog This post explains how to implement EAP-TLS 802.1X authentication using FreeRADIUS alongside a Windows Enterprise CA for domain-joined clients. What Is 802.1X and What Authentication Types Can Be Used? IEEE 802.1X is a network access control standard that enforces authentication before allowing devices to connect to a wired or wireless network, preventing unauthorized access to network resources. 802.1X involves three components: Supplicant: The device trying to connect (e.g., a laptop) Authenticator: The network device (e.g., switch or wireless access point) Authentication Server: Typically a RADIUS server that verifies credentials (pfSense) What Is Zero Trust - Recap Zero Trust is a security framework that assumes no user, device, or network segment is inherently trustworthy, regardless of where it sits in the network. The core principles include: Verify explicitly – Always authenticate and authorize access. Use least privilege access – Limit access to only what's needed. Assume breach – Design as if attackers are already in the network. How 802.1X Addresses Zero Trust Security Device and User Authentication at the Network Edge: 802.1X enforces authentication before a device can even communicate on the network. By validating the identity of both users and devices before granting access, it ensures that only trusted entities can join the network. Dynamic Access Based on Identity: After successful authentication, network access can be dynamically assigned based on user roles or device attributes, such as through VLAN tagging or access control lists. This supports Zero Trust’s principle of least-privilege access and helps isolate systems by trust level. 
Continuous Monitoring and Enforcement: When integrated with a Network Access Control (NAC) system, 802.1X allows ongoing assessment of device posture and compliance, with the ability to quarantine or restrict access if the device falls out of policy. Microsoft’s native NAC solution, Network Access Protection (NAP), has been deprecated and is no longer available. pfSense does not support full NAC functionality; third-party solutions such as PacketFence, Aruba ClearPass, or Cisco ISE are required to fulfill this role. Segmentation and Isolation: 802.1X pairs well with network segmentation strategies. Devices that fail authentication or are unknown can be automatically placed into restricted VLANs, such as guest or quarantine zones. This limits exposure and aligns with Zero Trust’s goal of minimizing lateral movement. Authentication Options MAC Address Authentication For devices that can't run 802.1X (like printers or IP phones), the switch can authenticate based on the device's MAC address. This is the least secure option and is typically used as a fallback method. PEAP with Active Directory (Username/Password-Based Authentication) PEAP (Protected Extensible Authentication Protocol) creates a secure TLS tunnel to protect the authentication exchange. Inside that tunnel, it commonly uses MSCHAPv2, which authenticates users via their Active Directory username and password. While this provides a basic layer of security, MSCHAPv2 has known vulnerabilities. If an attacker manages to capture the encrypted challenge, they can potentially crack it using brute-force or dictionary attacks. The effectiveness of this method ultimately depends on strong password hygiene and the security of your Active Directory environment, which can be a weak link in many setups. EAP-TLS with Certificates (Certificate-Based Mutual Authentication) EAP-TLS, on the other hand, uses digital certificates for both the client and the server to perform mutual authentication. 
Both sides must present valid certificates, creating a highly secure, trust-based environment. Since no passwords are exchanged, this method is immune to common credential-based attacks like brute-force, phishing, or credential stuffing. It also offers strong cryptographic protections, making it highly resistant to replay and man-in-the-middle (MiTM) attacks. Why I'm Choosing EAP-TLS Given the significantly stronger security model and elimination of password-based vulnerabilities, the obvious choice for my environment is EAP-TLS. It provides robust, certificate-based authentication. 802.1X and RADIUS Configuration Okay, let's set up 802.1X authentication on pfSense using FreeRADIUS and an Enterprise Certificate Authority (CA). This is a long one, better grab that coffee now... Configure the Microsoft Enterprise CA On the CA server, open the Certification Authority console (certsrv.msc). Create a RADIUS Server Certificate Template: Right-click Certificate Templates and select Manage. Right-click the RAS and IAS Server template (or Workstation Authentication as a base) and select Duplicate Template. Compatibility Tab: Set compatibility levels to Windows Server 2016 General Tab: Template display name: pfSense RADIUS Server. Validity period: 2 years Do not check 'Publish certificate in Active Directory.' Request Handling Tab: Purpose: Signature and encryption. Ensure 'Allow private key to be exported' is unchecked. Cryptography Tab: Provider Category: Key Storage Provider Algorithm Name: ECDH_P384 Minimum key size: 384 Request hash: SHA384 Subject Name Tab: Select Supply in the request. pfSense will generate the necessary information based on your inputs. Click OK on the warning popup. Extensions Tab: Select Application Policies and click Edit.... Remove Client Authentication, if present. Make sure Server Authentication is present, then click OK. Select Key Usage and click Edit.... Ensure Digital Signature is checked. 
Ensure 'Allow key exchange only with key encipherment (key encipherment)' is checked. Ensure 'Make this extension critical' is unchecked. Security Tab: Ensure Authenticated Users have Read permission. Create an AD group and assign it Full Control. The group name is 'RG_CA_pfSense_Radius_Req'. Add the account that will be used to create and enroll the RADIUS certificates. Click Apply and OK. Publish the New Template: In the Certification Authority console, right-click Certificate Templates, select New, then Certificate Template to Issue. Select your newly created pfSense RADIUS Server Template. Configure pfSense & Generate CSR Install FreeRADIUS Package: Log in to your pfSense web interface. Navigate to System > Package Manager > Available Packages. Search for 'radius'. Click Install and confirm. The installation will take a while. Import the CA Certificate: From the CA, export the Root CA certificate: Right-click your CA name > Properties. On the General tab, click View Certificate. Go to the Details tab, click Copy to File.... Click Next > select Base-64 encoded X.509 > Next > choose a filename, e.g. pfSenseRootCA.cer. On pfSense: Navigate to System > Certificates > Authorities Click Add. Descriptive name: Enter something meaningful Method: Import an existing Certificate Authority. Certificate data: Paste the entire content of pfSenseRootCA.cer Click Save. Generate Certificate Signing Request (CSR) on pfSense: Navigate to System > Certificates > Certificates. Click Add/Sign. Add/Sign a New Certificate: Method: Create a Certificate Signing Request. Descriptive name: Toyo pfSense Radius Server. External Signing Request Key type: ECDSA Key length: secp384r1 [HTTPS] - match the template Digest Algorithm: sha384 - match the template Common Name (CN): radius.toyo.loc Crucial! This MUST be the Fully Qualified Domain Name (FQDN) or IP address of the RADIUS server. 
From the DNS console on the DC: Create an 'A Record' to the LAN interface - pfsense.toyo.loc @ 192.168.0.1 Create an ALIAS (CNAME) named 'radius' that resolves to pfsense.toyo.loc Country Code: GB State or Province: Hook City or Locality: Hants Organization: Tenaka.net Organizational Unit: IT Department. Home Lab Certificate Attributes: Certificate Type: Server Certificate Alternative Names: The IP of the pfSense LAN address 192.168.0.1 Add SAN Row FQDN or Hostname: radius.toyo.loc Add SAN Row FQDN or Hostname: radius IP Address: 192.168.0.1 FQDN or Hostname: pfsense FQDN or Hostname: pfsense.toyo.loc Export the CSR Export the newly created CSR by clicking on the arrow with the door icon. Issue the Certificate using the CSR Ensure your account has admin rights and is a member of the 'RG_CA_pfSense_Radius_Req' group, then open CMD or PowerShell. Run command: certreq -submit -attrib "CertificateTemplate:pfSenseRADIUSServer" "C:\cert\pfSense+CSR.req" pfSense+CSR.req is the CSR previously downloaded. CertificateTemplate:pfSenseRADIUSServer is the Template Name and not the Display Name. A dialog will pop up asking you to select the CA; choose your issuing CA and click OK. Another dialog will ask where to save the issued certificate (.cer). Choose a location and a filename of 'pfSense Radius Cert.cer'. Click Save. Import Issued Certificate into pfSense Navigate to System > Cert. Manager > Certificates Find the CSR entry you created earlier (Toyo pfSense RADIUS Server CSR). Click the Edit icon (pencil). Open the issued certificate file 'pfSense Radius Cert.cer' with Notepad. Copy the entire content (including BEGIN/END lines). Paste this content into the Final Certificate data field. Add a description Click Update. The entry should now change from a CSR to a valid certificate, showing its issuer and validity dates. Certificate Revocation List (CRL) You have 7 days... Crucial! 
When certificate revocation checking is enabled, a valid CRL must be configured to verify whether certificates have been revoked. If revocation checking is enabled but the CRL is unavailable, clients will be unable to confirm their certificate status and authentication will fail. Crucial! In EAP-TLS, both the client and pfSense validate certificates. A critical part of this validation is checking whether a certificate has been revoked, which is done using the CRL. If the CRL has expired or is not available, pfSense will consequently fail authentication for the client. The default expiry is 7 days. With the current pfSense configuration, CRL updates are handled manually, and the CRL expires every 7 days. This is sub-optimal: if the CRL isn’t updated on time, client authentication will fail across the board, locking users out of the Wi-Fi. Automating this is on the backlog... honest. The CRL validity is being extended to 6 months, which isn’t ideal. The key caveat is that any time a certificate is revoked, the CRL on pfSense must also be updated to reflect the change. CRL Publishing Parameters: On the CA, right-click Revoked Certificates > Properties. Update the CRL publication interval to 6 months. Update Publish Delta CRLs to weekly. Publish the CRL with the new expiry date: right-click Revoked Certificates > All Tasks > Publish CRL. Export\Import The CRL will require exporting to a Base64 file and then pasting into pfSense. Copy the latest CRL from C:\Windows\System32\CertSrv\CertEnroll\ to C:\Certs and then convert it to Base64. certutil -encode "c:\certs\TOYO-TOYO01-CA(3)+.crl" c:\certs\crl_base64.txt Navigate to System > Cert. Manager > Certificates > Revocation On the drop-down, select the CA (Toyo CA) and click Add Select 'Import an existing Certificate Revocation List' Add a Descriptive name Copy and paste the crl_base64.txt content. Save There should now be a CRL entry. Configure FreeRADIUS on pfSense Navigate to Services > FreeRADIUS. Interfaces Tab: Click Add. 
Interface IP Address: Select the pfSense LAN IP address 192.168.0.1 Port: 1812 (standard RADIUS Authentication port). Interface Type: Authentication. IP Version: IPv4. Description: LAN Radius Authentication Click Save. Click Add again. Interface IP Address: Select the same pfSense IP address. Port: 1813 (standard RADIUS Accounting port). Interface Type: Accounting. IP Version: IPv4. Description: LAN Radius Accounting Click Save. NAS / Clients Tab: This is where you define the switches or wireless access points that will forward authentication requests to pfSense. Click Add. Client IP Address: the Zyxel Wifi AP is @ 192.168.0.253 Client IP Version: IPv4. Client Shared Secret: Some ridiculously long password. You must configure the same secret on your switch/AP (the Zyxel Wifi AP), and ensure the secret conforms to best practice. Client Shortname: Enter the hostname of the AP Add other switches, routers, and AP devices as needed. Click Save. EAP Tab: The EAP tab configures authentication methods like EAP-TLS or PEAP used for secure 802.1X network access. EAP: Default EAP Type: TLS. Disable Weak EAP Types: Checked (this disables the weak MD5 and GTC types). Minimum TLS version: 1.2 Certificates for TLS: On each of the drop-downs, select the relevant CA settings Note: If the SSL Revocation List option is set and misconfigured, clients will fail to validate their certificates and won't be able to connect to the 802.1X SSID. It's possible to select None for testing purposes, but this is not a suitable option for production. EAP-TLS: Include Length: Yes Fragment Size: 1024 Check Cert Issuer: Enable. CA Subject: Blank. Check Client Certificate and EAP-TLS Cache: Leave the defaults. PEAP and TTLS: Leave other EAP types like PEAP and TTLS at their default values. Click Save. Settings Tab: Select the Settings Tab. General Configuration: Leave the General Configuration settings at their default values. 
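For context, the NAS/Clients entries above are written by pfSense into FreeRADIUS's clients.conf. A minimal sketch of the resulting stanza (the shortname and secret here are placeholders, not the real values):

```
client "zyxel-ap" {
        ipaddr    = 192.168.0.253                        # Client IP Address of the AP
        secret    = "that-ridiculously-long-shared-secret"
        shortname = "zyxel-ap"                           # Client Shortname
}
```

There is no need to edit this by hand on pfSense; the GUI regenerates it, but it's useful to recognise when reading radius.log or debugging with the package removed.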
Logging Configuration: This is a personal preference; the Radius Logging Destination is updated to output to radius.log. All other settings remain default. Click Save. Configure Switch/AP (Zyxel Wifi) The original Zyxel NWA50AX is another casualty of implementing Zero Trust; it turns out it doesn’t support 802.1X or WPA2 Enterprise. So, after discovering that fun fact the hard way, I swapped it for an NWA130BE. £160 later, we finally have proper 802.1X support. SSID Profile Wizard: A new SSID was created, subtly named, of course, and initially configured with VLAN ID 1. While this configuration allowed clients to connect, it failed to automatically switch them to VLAN 40 and the Client interface. Since automatic VLAN switching wasn’t functioning as expected, the VLAN ID was explicitly set to 40 within the SSID configuration. I guess I was asking too much for £160. Security Profile Wizard: Select WPA2 and Enterprise. Primary Radius Server Activated: Enable Radius Server IP Address: 192.168.0.1 Radius Server Port: 1812 (authentication port, to match the port assigned on the pfSense) It's time to pull out that shared secret set in the NAS/Clients section of the pfSense earlier. Save Configure Windows Clients The pfSense firewall will require a tweak to allow clients on VLAN 40 to access the Switch/AP: Navigate to Firewall > Rules > VLAN40_Clients. Add an 'allow' rule between the Zyxel Wifi alias on 192.168.0.253 and the alias for clients. Basic CA and GPO Configuration: Ensure every Windows domain client trusts the Windows Root CA, deploying it via GPO if necessary. Enable auto-enrollment of certificates in Group Policy; certificate templates with the Autoenroll permission will then enroll automatically. Deploy Workstation Authentication Client Certificate - TPM Supported: It would almost feel wrong not to deploy more certificates. This time, the clients are getting the full treatment. 
Not all of my Windows clients and servers are blessed with a TPM; case in point, the Intel Skull Canyon and the Gen 6 Hyper-V host, which still cling to life with retirement being long overdue. Without a TPM there's no support for the Microsoft Platform Crypto Provider. The fallback option when a TPM isn't available is to use the Microsoft Software Key Storage Provider, which is managed through Active Directory groups that control certificate enrollment. Create the following AD groups: For Computer objects that do not support TPM: RG_CA_WksAuthCert_Deny_TPM_Supt For Computer objects that do support TPM: RG_CA_WksAuthCert_Allow_TPM_Supt Workstation Authentication Template: Right-click Certificate Templates and select Manage. Right-click the Workstation Authentication template and select Duplicate Template. This assumes that the Workstation Authentication or Computer templates are NOT deployed. General Tab: Template display name: Toyo Workstation Authentication Validity period: 1 year Renewal period: 6 weeks Check 'Publish certificate in Active Directory.' Compatibility Tab: Set compatibility levels to Windows Server 2016 Cryptography Tab: Provider Category: Key Storage Provider Algorithm Name: RSA RSA is supported for key generation and storage in the TPM. ECC (Elliptic Curve Cryptography) isn't generally supported for TPM storage of certificates. Minimum key size: 2048 Request hash: SHA256 Requests must use one of the following providers: Microsoft Platform Crypto Provider Microsoft Platform Crypto Provider is the Key Storage Provider (KSP) that allows certificates and their private keys to be stored in the Trusted Platform Module (TPM). If no TPM is accessible, the certificate will fail to enroll. Subject Name Tab: Under 'Build from this Active Directory information' Subject name format: Common Name is selected Include this information in alternate subject name: check DNS Extensions Tab: Select Application Policies and click Edit.... 
Ensure that Client Authentication is present. Security Tab: Ensure Domain Computers is removed Add RG_CA_WksAuthCert_Allow_TPM_Supt Allow Read, Enroll, and AutoEnroll. Click Apply and OK. Deploy Workstation Authentication Client Certificate - TPM is Not Supported: Toyo Workstation Authentication Template: Right-click Certificate Templates and select Manage. Right-click the Toyo Workstation Authentication template and select Duplicate Template. General Tab: Update the display name to show that TPM isn't supported Cryptography Tab: Provider Category: Key Storage Provider Algorithm Name: RSA Minimum key size: 2048 Request hash: SHA256 Requests can use any provider available on the subject's computer. Security Tab: Ensure 'Domain Computers' is removed Add RG_CA_WksAuthCert_Deny_TPM_Supt Allow Read, Enroll, and AutoEnroll. Click Apply and OK. Publish the New Templates: In the Certification Authority console, right-click Certificate Templates, select New, then Certificate Template to Issue. Select your newly created Toyo Workstation Authentication templates. Restart the clients to automatically enroll the certificate, or run gpupdate /force. The Final Step (Honest) - Connect the Client to the Wifi Confirm that the Toyo Workstation Authentication certificate has been enrolled on the clients: As an Administrator, run certlm.msc. Confirm the certificate is present. After all that effort, the final step feels a bit anticlimactic: just select the 'Toyo-802.1X' Wi-Fi network and connect. No passwords required. Review the connection settings: GPO Settings For the Home Lab environment, these GPO settings may be somewhat excessive given there are only four domain-joined laptops. However, the GPO will enforce connection to the specified Wi-Fi access point and hide all other SSIDs from view. Create a new GPO and link it to the Domain workstations OU. Edit Computer Configuration > Policies > Windows Settings > Security Settings > Wireless Network (IEEE 802.11) Policies. 
Right-click and Create A New Wireless Network Policy for Windows Vista and Later Releases. General Tab: Update the Policy Name: 802.1x Toyo Wifi Add a description. Click Add. Select Infrastructure. Connection Tab: Update the Profile Name: Toyo 802.1X Enter the Network Name(s) (SSID): Toyo-802.1X Ensure that only 'Connect automatically when this network is in range' is selected. Security Tab: Authentication: WPA2-Enterprise Encryption: AES-CCMP Select a network authentication method: Microsoft Smartcard or other certificate Authentication Method: Computer Authentication Click OK Network Permissions Tab: The following settings will hide all other SSIDs except those named in this GPO: Select Prevent connections to ad-hoc networks Select Prevent connections to infrastructure networks Uncheck Allow user to view denied networks Select Only Group Policy profiles for allowed networks Support Stuff Given the complexity that is now building within the network, it's not unexpected that there are going to be a few bumps along the way. The following section should help point you in the right direction. pfSense Logs Enable the FreeRadius logs by navigating to Services > FreeRadius > Settings. Personal choice: I prefer the radius.log output. Enable temporary SSH access by navigating to System > Advanced > Secure Shell and checking Enable Secure Shell Server. Add a firewall rule to allow TCP port 22 (Secure Shell) into the pfSense. ssh admin@pfsense.toyo.loc cat /var/log/radius.log When you're done, shut the door behind you: disable SSH and kill the firewall rule. Windows Client Logs When it comes to troubleshooting 802.1X on Windows, the built-in logging and diagnostic tools are your friend. As an Admin, run the following netsh command: netsh wlan show wlanreport The netsh wlan show wlanreport command generates an HTML report that provides a detailed overview of recent Wi-Fi connection history, including connection successes, failures, signal quality, and reasons for disconnects. 
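By default the report is written under C:\ProgramData\Microsoft\Windows\WlanReport\. A few companion netsh checks (run as Administrator) are useful alongside it; the profile name below matches the GPO profile in this post, so adjust it to suit your environment:

```
netsh wlan show wlanreport
:: Current association: SSID, authentication (WPA2-Enterprise) and cipher
netsh wlan show interfaces
:: Confirm the GPO-deployed profile actually arrived on the client
netsh wlan show profiles
netsh wlan show profile name="Toyo 802.1X"
```

The last command dumps the profile's security settings, which is a quick way to spot a client that is still trying PEAP instead of certificate-based authentication.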
The Windows event logs offer a more traditional, readable output for analyzing connection issues. A Step to Zero Trust We made it... eventually. What a long post, and a whole lot of certificates. I’ll keep it short from here. Thanks for sticking with it. Next up: IPsec. Don’t miss it... Everyone loves IPSec Related Posts: Part 1 - Zero Trust Introduction Part 2 - VLAN Tagging and Firewalls with pfSense Part 3 - pfSense and 802.1x Part 4 - IPSec for the Windows Domain Part 5 - AD Delegation and Separation of Duties Part 6 - Yubikey and Domain Smartcard Authentication Setup Part 7 - IPSec between Windows Domain and Linux using Certs
- Zero Trust for the Home Lab - VLAN Tagging and Firewalls with pfSense (Part 2)
What Is Zero Trust - Recap Zero Trust is a security framework that assumes no user, device, or network segment is inherently trustworthy, regardless of where it sits in the network. The core principles include: Verify explicitly – Always authenticate and authorize access. Use least privilege access – Limit access to only what's needed. Assume breach – Design as if attackers are already in the network. What's Covered in this Blog This post covers implementing a pfSense Netgate 4200, a cheap PoE managed switch, VLANs, and point-to-point firewalls. How Do VLANs and Firewalls Address Zero Trust VLAN Tagging: Enforcing Logical Segmentation Virtual LANs (VLANs) break a physical network into multiple, isolated broadcast domains. Each VLAN behaves like a separate network, even if all devices are plugged into the same switch. With 802.1Q tagging, VLANs add a tag to Ethernet frames to denote which virtual segment the traffic belongs to. This enables: Separation of devices by function or trust level (e.g., IoT, guest, management, servers). Containment of potential breaches: malware on a smart TV can't reach your file server. Firewalls: Traffic Policy Enforcement Once VLANs are defined and routed, the firewall rules take over. Each VLAN has its own interface, letting you apply granular, interface-specific policies such as: Blocking traffic between VLANs by default Only allowing explicit communication paths (e.g., IoT devices can talk to the internet but not to the LAN) Logging and monitoring attempts to cross VLAN boundaries Zero Trust for the Home Lab and pfSense This marks the first step in a series of technical changes to the home lab. As outlined in Part 1, the goal is to implement a Zero Trust model, or get as close to it as possible, while avoiding cloud dependencies and keeping costs to a minimum. The old Zyxel USG 60W device has finally been retired; it's out of support, with no more firmware updates. This is a basic Zero Trust requirement: keep it updated. 
Its replacement is a pfSense Netgate 4200. As discussed, one of the key changes is the introduction of 802.1Q VLAN tagging throughout the network. Rather than treating everything behind the firewall as implicitly trusted, I'm segmenting traffic by function: End User Devices, Infrastructure, DNS and IoT. Each VLAN gets its own virtual interface on pfSense, with point-to-point firewall rules and independent DHCP configurations. This rollout has downstream implications. For starters, the Windows Hyper-V hosts will be updated to support trunked VLANs on the switch, and the external Virtual Switch will be assigned the Server VLAN. Each virtual machine will map to a specific VLAN, ensuring that VMs are isolated according to their function and access needs. The pfSense management interface will be moved to a dedicated physical LAN port, isolated from all other LANs and VLANs by firewall rules. Additionally, the Pi-hole DNS servers and Domain Controllers, which previously sat on a flat LAN, will require some reconfiguration. DNS Forwarders on the DCs will require re-pointing to the new PiHole IP addresses. In addition, the DCs' Sites and Services will require updating. Setup and Initial Config With the initial setup complete, I've allocated the following: Port 1 is connected to the ISP's router Port 2 to the NETGEAR 16-Port PoE GS316EP Managed Switch. Port 3 is reserved for the Management interface. Interfaces and VLANs Overview The physical interfaces are renamed to their correct designations of WAN, LAN and MGMT. Create the following VLANs and their tags under Interfaces > VLANs: Tag 1, default tag for management and assigned to the LAN interface Tag 10 is for DNS\PiHoles Tag 20 is reserved for Domain Controllers Tag 30 is for Member Servers Tag 40 is for Domain Clients Tag 50 is for Domain Devices such as Printers Tag 60 is reserved for SIEM and Monitoring Tag 100 is for any IOT Assign the newly created VLANs to the LAN interface, which is igc2. 
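As an aside on what those tags physically are: 802.1Q inserts a 4-byte header into each Ethernet frame, the TPID 0x8100 followed by a 16-bit TCI (3-bit priority, 1-bit drop eligibility, 12-bit VLAN ID). A quick shell sketch (priority 0 assumed) prints the on-the-wire tag bytes for the VLAN IDs above:

```shell
#!/bin/sh
# 802.1Q tag = TPID 0x8100 + TCI, where TCI = (PCP << 13) | (DEI << 12) | VID.
# Print the tag bytes for each VLAN tag used in this lab, with PCP=0 and DEI=0.
for vid in 1 10 20 30 40 50 60 100; do
  tci=$(( (0 << 13) | (0 << 12) | vid ))
  printf 'VLAN %-3d -> tag bytes: 81 00 %02x %02x\n' "$vid" $(( tci >> 8 )) $(( tci & 0xff ))
done
```

VLAN 40, for example, comes out as 81 00 00 28. A trunk port carries these tagged frames for multiple VLANs, while an access port strips the tag before handing the frame to the device.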
KEA DHCP ISC DHCP is deprecated, despite it being the default option, so swap it out for Kea: Go to System > Advanced > Networking. Under "DHCP Options," select "Kea DHCP" as the "Server Backend". DHCP Settings DHCP is to be enabled for each interface. Go to Services > DHCP Server. Under each interface, check the "Enable DHCP" option (e.g., "Enable DHCP on LAN Interface"). The LAN interface will remain enabled to support devices that don't natively support VLAN tagging via their web management pages. The risk of losing access to systems like the solar inverter would cause an unwelcome distraction. Note: There is no DHCP scope for the Domain Controllers on VLAN 20. LAN DHCP Scope IP Range 192.168.0.100 to 192.168.0.200 DNS points to the Internet: 1.1.1.1, 4.4.4.4, 8.8.8.8 MGMT DHCP Scope IP Range 192.168.99.100 to 192.168.99.200 DNS points to the PiHole Servers - 192.168.10.70 and 192.168.10.71 DNS DHCP Scope (PiHole) IP Range 192.168.10.100 to 192.168.10.200 DNS points to its own PiHole Servers - 192.168.10.70 and 192.168.10.71 Member Servers DHCP Scope IP Range 192.168.30.100 to 192.168.30.200 DNS points to the Domain Controllers - 192.168.20.245, 192.168.20.247, 192.168.20.249 Domain Clients DHCP Scope IP Range 192.168.40.100 to 192.168.40.200 DNS points to the Domain Controllers - 192.168.20.245, 192.168.20.247, 192.168.20.249 Firewall Intro pfSense maintains separate firewall rules for each interface (LAN, WAN and VLAN), and they are processed top-down, meaning the first rule that matches a packet is the one that gets applied. If no rule matches, pfSense blocks the traffic by default. Now for the fun part: initially, every interface, except for the WAN, was configured with permissive any-to-any firewall rules. This meant all devices could communicate freely across all networks. Once system stability was confirmed and I verified that I hadn't broken anything during the transition to VLAN tagging, the rules were gradually tightened to enforce proper network segmentation. The end result is detailed below. 
In keeping with Zero Trust principles, permissive subnet rules were removed, effectively eliminating lateral movement, particularly between individual client systems. Aliases pfSense firewall aliases let you group IPs, networks, ports, or protocols under a single name. Instead of writing multiple rules for each item, you create one alias and reference it across your firewall rules, making rule management simpler, cleaner, and easier to update. For example, create an alias like Mgmt_Consoles with the IPs of all management addresses, then use that alias in your rules instead of listing each IP individually. I have the following aliases: Alias_ClientDevices, for printers Alias_Clients Alias_DomainControllers Alias_MemberServers Alias_PiHoles Each has been populated either by statically assigned IPs (in the case of the DCs) or by reserving the IP address in DHCP and then updating the corresponding alias. WAN The default behaviour for the WAN interface is to block RFC 1918 and bogon network ranges. RFC 1918 is a standard published by the Internet Engineering Task Force (IETF) that defines a set of private IP address ranges. These aren't routable on the public internet. These private IP ranges are: 10.0.0.0 to 10.255.255.255 (10.0.0.0/8) 172.16.0.0 to 172.31.255.255 (172.16.0.0/12) 192.168.0.0 to 192.168.255.255 (192.168.0.0/16) A bogon network refers to a block of IP addresses that should not be seen on the public internet. These are addresses that are either unallocated by IANA or reserved for special use, like private networks or multicast. Examples of bogon IP ranges: 0.0.0.0/8 – “This” network 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12 – RFC 1918 private addresses 127.0.0.0/8 – Loopback 169.254.0.0/16 – Link-local APIPA LAN The LAN contains the least trusted devices, such as mobile phones, smart TVs, the RoboRock vacuum, and the solar inverter. 
As a result, none of these devices is permitted to communicate with any other interface, including the Zyxel Wi-Fi router's management interface at 192.168.0.253. The two allow rules permit communication within the device's own LAN subnet and outbound access to the internet on ports 80 (HTTP) and 443 (HTTPS). The Mgmt_Consoles alias is used to block access to each interface’s default gateway on ports 80 and 443. This overrides pfSense’s default behavior, which permits access to the web management interface from all interfaces. MGMT The MGMT interface, by design, is allowed to communicate with all other interfaces. I've blocked most management access, with the exception of direct connection or via a Member Server. VLAN10_DNS As with the other interfaces, access to pfSense's Web Management, Servers, MGMT, and Clients VLANs is explicitly blocked. Aliases for the PiHoles are implemented, allowing only the named IPs, the two devices, to access each other from within the subnet. Outbound DNS traffic (port 53) is permitted to allow name resolution via the Internet. Ports 80 and 443 are allowed to enable the Pi-hole instances to retrieve updates and patches. General Approach to Domain Services The VLANs for DCs, Servers and Clients allow unrestricted traffic between each VLAN. Ideally, firewall rules should be tightened to permit only the necessary services and ports. However, since the plan is to implement IPSec, the effort required to apply stricter rules at this stage isn't justified. IPsec will provide the necessary security controls and require only a couple of ports to be open, rather than the plethora of rules required between DCs and Domain members etc. VLAN20_DCs The Domain Controllers provide authentication services and therefore require unrestricted communication with client and server aliases, as well as with each other. 
DNS traffic (TCP/UDP port 53) is allowed between the Domain Controllers and the Pi-hole instances, while port 123 is open to the Internet to support time synchronization services. VLAN30_Servers Servers are permitted to communicate with each other, as well as with DCs and Client devices; SCCM makes this a necessity. The ClientDevice rule also enables support for network printing. Additionally, all servers are allowed outbound access to the Internet on ports 80 and 443 to facilitate Windows Updates. Note: DCs, Hyper-V hosts, SCCM, SCOM and Certificate Services shouldn't share the same VLANs. VLAN40_Clients The clients are exposed to the Internet and susceptible to various attacks during routine browsing and application use. A key principle of Zero Trust is to prevent lateral movement that could be used to discover and escalate privileges. The following measures are in place to help mitigate this risk: Client VLAN Isolation: The client VLAN is configured to block client-to-client communication; no firewall rules permit traffic between hosts within the same subnet. URA Logon Restrictions: As outlined in the referenced link below, Server and Domain Admin accounts are explicitly denied logon rights to client machines via User Rights Assignments. https://www.tenaka.net/post/deny-domain-admins-logon-to-workstations Clients are only permitted unrestricted communication with the DC and the Servers VLANs. External outbound traffic is restricted to web access on ports 80 (HTTP) and 443 (HTTPS). Admin Access The default configuration is for the LAN interface to apply anti-lockout rules to pfSense's management interface. The rules have been removed in favour of the MGMT interface, so proceed with caution: Navigate to System > Advanced > Admin Access. Locate the anti-lockout checkbox and tick it to remove the rule. Netgear Managed Switch Of all the devices involved in the VLAN migration, the NETGEAR 16-Port PoE Switch (GS316EP) was the only one that caused real issues. 
Due to a fatal error on my part, the switch had to be factory reset. Losing the configuration wasn't a major problem, but diagnosing the loss of access to the switch that led to the reset was frustrating. The issue stemmed from allowing the switch to obtain its IP address via DHCP. Once the trunk uplink connecting both pfSense and the switch was assigned, the switch lost its ability to pick up a DHCP address, resulting in a loss of management access. The most likely cause is that VLAN 1 traffic was being dropped: it was explicitly tagged, while the switch or router expects untagged traffic on that VLAN. Assign a static IP address to the switch. Enable 'Basic 802.1Q VLAN'. Create the desired VLANs. Assign Trunk (uplink) to the following connections: Port 1 is the connection to pfSense Port 2 is the connection to the Wi-Fi Access Point Ports 3 and 4 are the connections to the Hyper-V Hosts Assign VLAN tags to the relevant devices: Ports 5 and 6 are for the Pi-holes on VLAN 10 Ports 7 through 12 are for Clients on VLAN 40 Wi-Fi Access Point To assign a VLAN to a Wi-Fi network, you need to configure the VLAN ID directly in the settings of the wireless access point (AP) or wireless controller. These are part of the settings when configuring the SSID and Wi-Fi password. Manually Set for Windows Client and Server (Non-Hyper-V) To manually tag a VLAN on a Windows machine: Open Control Panel > Network and Sharing Centre > Change Adapter Settings. Right-click the adapter > Properties > Configure > Advanced tab. Look for a setting like VLAN ID, Priority & VLAN, or Packet Priority and VLAN. Set the VLAN ID you want and click OK. Once configured, all traffic from that adapter will be tagged with the specified VLAN ID. Note: Not all NIC drivers support VLAN tagging through Windows natively. Intel, Broadcom, and some Realtek adapters typically do, but it often requires their vendor-specific drivers (not just the Microsoft default). 
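Where the driver does expose tagging, the same setting can usually be applied from PowerShell rather than the GUI. A minimal sketch, assuming an adapter named 'Ethernet' and a driver that surfaces the property as 'VLAN ID' (the exact DisplayName varies by vendor, so list the properties first):

```powershell
# List the advanced properties to find the exact DisplayName your driver uses
Get-NetAdapterAdvancedProperty -Name "Ethernet"

# Tag the adapter for VLAN 40 (Clients); adapter name and DisplayName are assumptions
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "VLAN ID" -DisplayValue "40"
```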
If the VLAN option is missing, you may need to install the manufacturer's advanced network driver suite. Hyper-V Servers The Intel (now ASUS) NUCs are equipped with only a single network interface, which limits the ability to physically separate VLANs across multiple NICs. This means Hyper-V management traffic and VM traffic, which defaults to the virtual switch, both traverse VLAN 30. To mitigate this (and I certainly don't want Kali on the same interface as my Member Servers), the VLAN tag is set within the network adapter settings of each individual VM, keeping the Zero Trust approach intact. Additionally, again deviating from strict Zero Trust principles, the physical hosts serve multiple roles, including File and Print services, as well as DFSR replication. Raspberry Pi and PiHole VLAN Before moving the Pi-holes to their VLAN, it's important to complete the following steps, as VLANs aren't supported out of the box. Install the latest updates and then the vlan package:
sudo apt update
sudo apt install vlan
Load the 8021q kernel module, which is essential for enabling VLAN tagging on network interfaces, and make it persistent across reboots:
sudo modprobe 8021q
echo "8021q" | sudo tee -a /etc/modules
Create the .network file, which configures the VLAN interface, in this case with DHCP addressing:
sudo nano /etc/systemd/network/25-vlan.network
[Match]
Name=eth0.VLAN_ID
[Network]
DHCP=yes
Create the matching .netdev file, which instructs systemd-networkd to create and manage the VLAN device itself:
sudo nano /etc/systemd/network/25-vlan.netdev
[NetDev]
Name=eth0.VLAN_ID
Kind=vlan
[VLAN]
Id=VLAN_ID
Update the VLAN tagging on the network switch port that the Pis are plugged into. Restart networking; as the configuration above is managed by systemd-networkd, restart that service:
sudo systemctl restart systemd-networkd
sudo systemctl status systemd-networkd
Confirm the IP has updated from 192.168.0.70 to 192.168.10.70:
ip addr show
Conclusion Firstly, thanks for following along. There are a lot of moving pieces, and implementing VLAN tagging can present a few challenges. However, this is the first critical step toward achieving a more complete Zero Trust architecture. 
Related Posts: Part 1 - Zero Trust Introduction Part 2 - VLAN Tagging and Firewalls with pfSense Part 3 - pfSense and 802.1x Part 4 - IPSec for the Windows Domain Part 5 - AD Delegation and Separation of Duties Part 6 - Yubikey and Domain Smartcard Authentication Setup Part 7 - IPSec between Windows Domain and Linux using Certs
- Zero Trust for the Home Lab - An Introduction to Zero Trust and its Practical Limits for the Home Lab (Part 1)
Introduction If you're a regular visitor to this site, you've probably noticed I enjoy 'messing' with security, especially when it comes to Windows. I've put a lot of effort into securing my home lab over the years, and it would be a pretty tough nut to crack. But, as the saying goes, "Pride before a fall". Of course, I'm a realist and know full well that nothing is 100% secure, and there are vulnerabilities I'm in denial about, but it helps to hide behind layers of firewalls, WDAC, and delegation. There's this concept called Zero Trust Architecture; it's intriguing, and I'll explain what it means in a moment. But with my Home Lab in mind, I've been wondering: what aspects of it can realistically be implemented using consumer-grade equipment? How close can I get to that elusive state of Security Nirvana without breaking the bank or the Home Lab? This series of articles will first explore the theory behind each of the Zero Trust security enhancements, followed by the practical implementation, the fun part. Although the theory is wordy and a bit.... boring, it's important to understand the principles and how they apply to the implementation of the tech. The goal? To create the world's most secure home lab. This should be entirely doable, after all, who else is unhinged enough to even try? Zero Trust Architecture Zero Trust Architecture (ZTA) is a security framework based on the principle of "never trust, always verify." Unlike traditional security models that rely on network perimeters, Zero Trust focuses on securing individual resources by enforcing strict identity verification, least privilege access, and continuous monitoring. The Problem with Traditional Security Models The Perimeter-Based Security Model In the past, organizations secured their networks using firewalls, VPNs, and other perimeter-based defenses. The assumption was that once inside the network, users and devices could be trusted. 
However, this approach has several flaws: Insider Threats: Employees or compromised accounts can misuse their privileges. Remote Work & Cloud Adoption: Users no longer work within a controlled corporate network. Advanced Cyber Threats: Attackers can breach a single point in the network and move laterally to access sensitive data. Core Principles of Zero Trust Architecture To successfully implement Zero Trust, organizations follow these key principles: Verify Explicitly: Authenticate and authorise every access request based on multiple data points, such as user identity, device health, location, and behavior. Use multi-factor authentication (MFA) to ensure secure logins. Use Least Privilege Access: Grant users and applications only the minimum access they need to perform their tasks. Implement Just-In-Time (JIT) access and role-based access control (RBAC). Assume Breach: Design the network with the assumption that threats exist both inside and outside. Implement micro-segmentation to contain potential intrusions. Continuously monitor and analyze network traffic for anomalies. Implementing Zero Trust: A Step-by-Step Guide Micro-Segmentation and Network Security Break up the network into smaller, isolated segments to limit lateral movement. Use software-defined perimeters (SDP) to restrict access to applications based on user identity and context. Deploy next-generation firewalls and intrusion detection systems to monitor network activity. Implement IPSec to encrypt and authenticate traffic between devices, enforcing secure communication within and across network segments. Use 802.1X and RADIUS for network-level access control, tying access policies to user identity and device trustworthiness. Enforce policy-based routing and segmentation at both physical and virtual levels. Device and Endpoint Security Implement endpoint detection and response (EDR) solutions to detect and mitigate threats. 
Enforce device compliance checks, ensuring only secure, managed devices can access resources. Use mobile device management (MDM) solutions to secure BYOD (Bring Your Own Device) environments. Continuously assess device posture, including OS patch levels, security configurations, and threat exposure. Implement Strong Identity and Access Management (IAM) Enforce multi-factor authentication (MFA) for all users. Implement Single Sign-On (SSO) to streamline authentication. Adopt passwordless authentication methods such as biometrics or security keys. Continuously verify user identities using risk-based authentication (RBA), which adjusts security policies based on user behavior. Leverage RADIUS for centralized authentication and accounting, particularly for network access control and device-level authentication. Integrate 802.1X for port-based network access control, ensuring that only authenticated users and compliant devices gain network access. Enforce Least Privilege and Access Controls Use role-based access control (RBAC) to restrict access based on job roles. Implement attribute-based access control (ABAC), which considers additional factors like device security posture and user location. Utilize Just-In-Time (JIT) access to grant temporary permissions when needed. Review access regularly to minimize privilege creep and enforce the principle of least privilege. Continuous Monitoring and Threat Detection Deploy Security Information and Event Management (SIEM) solutions to collect and analyze security logs. Use User and Entity Behavior Analytics (UEBA) to detect anomalies in user behavior. Implement automated threat response to isolate compromised accounts or devices in real-time. Home Lab State of Play This lab isn’t just a casual test environment, it’s been running continuously for over a decade, operating 24/7 as a secure, managed domain for browsing and related services. 
The family acts as the user base, providing constant, real-world UAT, often quite vocally when something breaks. The domain serves as a representative platform for the technologies I'm learning, testing, and developing. The current state, and 'state' is nearer the mark, is a mostly flat network behind an out-of-support Zyxel USG60W, with multiple firewall rules dependent on each device's IP and MAC address. I'm running multiple Intel NUCs hosting Hyper-V (one well overdue for retirement and out of support), which in turn run a Windows Server 2019 domain environment. AppLocker and Group Policy are actively deployed, while WDAC is managed via SCCM. There's extensive delegation and a strict separation of privileges throughout the environment. Laptops protect their data with BitLocker using TPM and PIN. DNS queries are handled by two Pi-holes with fairly strict filtering lists. As for monitoring, SCOM was decommissioned some time ago due to NUC resource limitations, so currently there's no centralized monitoring in place. A serious lack of time, and the fact that it just works, has led to the system being largely neglected. This forms an ideal starting point and mirrors what's often seen in corporate environments: underfunded infrastructure and overworked admins stretched to breaking point. The Zero Trust Plan of Attack Building on the core principle of Zero Trust, "never trust, always verify", and keeping budget limitations in mind, I'll explore each technology and explain how it tackles specific challenges. Each of these will be documented in the upcoming blogs. Micro-Segmentation and Network Security Software-Defined Perimeters. This may be a step too far for the home lab. Networking: Replace the Zyxel with a pfSense Netgate 4200 and implement VLANs. Firewall: Transition from Zyxel policies to pfSense. IPSec: Assume compromise and that the network is hostile. Implement a VPN - Not required. PiHole, DNSSEC and DNS-over-TLS. 
Device and Endpoint Security Device Compliance, implement NAP - no longer possible with Windows Server. Endpoint Detection and Response (EDR). Mobile Device Management (MDM) is currently handled through SCCM. There are no plans to transition to Microsoft Azure, particularly Intune, as it lacks enterprise features and would expand the lab's attack surface. The approach is to keep sensitive data processing on-premises while using the cloud for processing and storing less sensitive data. Implement Strong Identity and Access Management (IAM) Single Sign-On is currently supported within the Microsoft Domain, but not for all the Linux devices. Authenticate and verify devices: implement a RADIUS server and 802.1X. MFA: YubiKey smartcards and PINs will be implemented. Risk-Based Authentication. Enforce Least Privilege and Access Controls Attribute-Based Access Control (ABAC) Role-Based Access Control (RBAC) Just-In-Time access requires, as a minimum, PowerShell commands to enable group membership TTLs (Time-to-Live); this could be extended further with a Bastion Forest, MIM, PAM and PIM. Continuous Monitoring and Threat Detection Security Information and Event Management (SIEM): implement an event management solution that supports both Windows and Linux. User and Entity Behavior Analytics (UEBA): implement pfSense's IDS and IPS solutions. Real-time response Threat intelligence feeds (pfBlockerNG) Intrusion detection and prevention systems (Snort/Suricata) The Keys to the Zero Trust Kingdom In a Windows environment, an Enterprise Certificate Authority (CA) is the trust anchor for machine identities, user certificates, network authentication, and service encryption. It's a critical component in any enterprise PKI and foundational to implementing a Zero Trust security model. But without a Hardware Security Module (HSM), your CA's private keys are exposed to unnecessary risk. I don't have an HSM; they're quite expensive. 
This needs to be called out for the enterprise implementation of Zero Trust. Where to Start..... The CA holds the keys, but the network forms the foundation of Zero Trust, making it the logical place to start. Replacing the outdated Zyxel hardware is the first step, followed by implementing proper network segmentation and firewall policies. The only question... what have I started? Related Posts: Part 1 - Zero Trust Introduction Part 2 - VLAN Tagging and Firewalls with pfSense Part 3 - pfSense and 802.1x Part 4 - IPSec for the Windows Domain Part 5 - AD Delegation and Separation of Duties Part 6 - Yubikey and Domain Smartcard Authentication Setup Part 7 - IPSec between Windows Domain and Linux using Certs
- Create a WMI Filter on a PDC with PowerShell
While building automation for domain deployment, OU structure, and delegation, I hit one of those 'too hard to do right now' tasks that nearly slipped past me: scripting the creation of a WMI filter for the PDC Emulator role. Time Source The goal is to make sure the PDC, and only the PDC, manages authoritative time. This particular GPO includes a root NTP server setting, whether that's an IP address, a local atomic clock, or an external internet time source, and it's vital that only the PDC syncs with it. Every other domain controller should, in turn, sync from the PDC, maintaining a proper hierarchy and preventing clock chaos. Not Keen on WMI Filters I'll be honest, I don't generally like WMI filters in GPO. They introduce performance hits and slow down policy processing, especially in larger environments. But in this case, it's a pragmatic exception. The filter ensures that only the PDC receives and applies the external time configuration, keeping time consistent across the domain and preventing catastrophic drift when an upstream time source fails spectacularly. Prevent Death and Destruction I've experienced this first-hand: the time source collapsed in a heap, and the PDC leapt forward a full 24 hours in an instant. The aftermath was... memorable. It's hard to believe the chaos inflicted when time goes awry. MaxPhaseCorrection - Thank MS's Default Value To guard against that kind of chaos, the GPO settings MaxPosPhaseCorrection and MaxNegPhaseCorrection limit how far the system clock is allowed to jump forward or backward during synchronization. The issue is that Microsoft's default value is 86400 seconds, or 24 hours; this overly generous setting has the potential to lead to carnage. The recommendation is to set both the Pos and Neg settings to 3600, or 1 hour. These settings and the WMI filter ensure the domain's time stays sane, stable, and immune to upstream meltdowns. 
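For reference, the standard WMI filter used to scope a GPO to the PDC Emulator is a one-line WQL query against Win32_ComputerSystem, where a DomainRole of 5 identifies the primary domain controller (the PDCe role holder):

```
Select * from Win32_ComputerSystem where DomainRole = 5
```

DomainRole ranges from 0 (standalone workstation) to 5 (primary domain controller), so only the DC currently holding the PDCe role evaluates the filter to true. A useful side effect is that the NTP settings follow the role automatically if it's ever transferred to another DC.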
GPO Settings - Prevent the Meltdown These are the current settings provided on GitHub, with the annex at the end of the blog providing the technical details. Computer Configuration/Policies/Administrative Templates/System/Windows Time Service: Global Configuration Settings MaxNegPhaseCorrection = 3600 MaxPosPhaseCorrection = 3600 Computer Configuration/Policies/Administrative Templates/System/Windows Time Service/Time Providers: Configure Windows NTP Client = Enabled NTPServer = 192.168.30.1,0x8 Type = NTP CrossSiteSyncFlags = 2 ResolverPeerBackoffMinutes = 15 ResolverPeerBackoffMaxTimes = 7 SpecialPollInterval = 1024 EventLogFlags = 0 Enable Windows NTP Client = Enabled Enable Windows NTP Server = Enabled Script Prep Download both the script and the zip file from GitHub. Copy the files to the PDC into the 'C:\ADBackups\PDCNTP\' directory. Don't use another domain controller: running GPO scripts from a secondary DC introduces extra latency when connecting back to the PDC, which can cause failures. Extract the zip file, ensuring the GUID directory is nested within the 'PDCNTP' directory: C:\ADBackups\PDCNTP\{A5214940-95CC-4E93-837D-5D64CA58935C}\ If you prefer to use your own GPO export, that's fine; as long as the path is correct, the script will automatically resolve the appropriate GUID and BackupID. Execution of the Script With Domain Admin privileges, open an elevated PowerShell window and execute the following: cd C:\ADBackups\PDCNTP\ ; .\Create_WMI_NTP_PDC.ps1 Open Group Policy Management. Confirm that the GPO has been created and linked to the Domain Controllers OU. Update the IP address for the NTP server; it's unlikely we share the same time server IP. The only remaining task for me is to integrate the NTP GPO into the fully automated Domain deployment script. Enjoy and thanks for your time. 
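Once the GPO has applied (gpupdate /force or a reboot), the result can be verified on the PDC with the built-in w32tm tool. A quick sanity check, not part of the script itself:

```powershell
# Should report the configured NTP server (192.168.30.1 in this GPO)
w32tm /query /source

# Shows stratum, last successful sync and the current poll interval
w32tm /query /status

# Dumps the effective settings, including MaxPos/NegPhaseCorrection (3600)
w32tm /query /configuration
```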
ANNEX - Breakdown of GPO Settings Computer Configuration > Policies > Administrative Templates > System > Windows Time Service > Global Configuration Settings MaxNegPhaseCorrection Current value: 3600 (1 hour) Purpose: Defines the maximum number of seconds the clock can be moved backward when synchronizing time. If the correction exceeds this, Windows logs an event instead of applying the adjustment. Relevance: Prevents the PDC from winding time back too far due to an erratic NTP source, which can break Kerberos authentication and replication. Alternative values: 0 — disables large backward corrections entirely. 300 — 5 minutes (useful for high-availability or sensitive environments). 86400 — 24 hours (default Microsoft value, overly generous for a PDC). 4294967295 (0xFFFFFFFF) — disables the limit completely (not recommended). MaxPosPhaseCorrection Current value: 3600 (1 hour) Purpose: Defines the maximum number of seconds the clock can be moved forward. Relevance: Protects against catastrophic jumps when an upstream NTP server malfunctions. This provides a 1-hour safety window in either direction, preventing massive jumps while still allowing normal synchronization drift to be corrected automatically. Alternative values: 300 — a conservative 5-minute correction limit. 900 — 15 minutes (a good balance for stable networks). 86400 — 24 hours (default). 4294967295 — disables the limit (unsafe on domain controllers). Computer Configuration > Policies > Administrative Templates > System > Windows Time Service > Time Providers Configure Windows NTP Client = Enabled Enables policy control of NTP client behaviour to enforce the following parameters. NTPServer = 192.168.30.1,0x8 Purpose: Defines the external NTP source the PDC syncs with. The ,0x8 flag tells Windows to use client mode. Alternative formats and examples: pool.ntp.org,0x8 — public NTP pool. time.google.com,0x8 — Google's NTP service. ntp.nist.gov,0x8 — US NIST time source. 
gps-clock.local,0x8 — local GPS or atomic reference. Relevance: This should be a reliable stratum-1 or stratum-2 source. The PDC is the only domain controller that should query an external NTP server. Type = NTP Purpose: Forces synchronization using the NTP protocol with the specified NTPServer. Other valid values: NT5DS — Default for domain-joined machines (syncs from domain hierarchy). AllSync — Uses all available sync mechanisms (rarely needed). NoSync — Disables synchronization entirely. Relevance: For the PDC, NTP ensures it pulls time from the defined external source, not from another DC. CrossSiteSyncFlags = 2 Purpose: Controls cross-site time synchronization. Value meanings: 0 — Allow synchronization across all sites. 1 — Only sync from DCs in the same site. 2 — Never sync from DCs in other sites (recommended for PDC). Relevance: Keeps the PDC isolated as the domain's root time authority, avoiding cross-site time loops. ResolverPeerBackoffMinutes = 15 Purpose: Specifies how long the service waits before retrying after a failed NTP sync. Alternatives: 5 — More aggressive retry. 30 — More relaxed retry, suitable for unreliable WANs. ResolverPeerBackoffMaxTimes = 7 Purpose: Defines the maximum number of exponential backoff attempts before giving up. Alternatives: 3 — Faster failover (good for testing). 10 — More patient retry window. SpecialPollInterval = 1024 Purpose: Sets how often (in seconds) the PDC polls the NTP source — roughly every 17 minutes. Note: this value is only honoured when the NTPServer flag includes 0x1 (SpecialInterval), e.g. 192.168.30.1,0x9 for client mode plus special interval; with 0x8 alone, Windows uses its dynamic poll interval. Alternatives: 3600 — Once per hour (lighter network load). 900 — Every 15 minutes (more aggressive for accuracy). 86400 — Once per day (not advised for volatile networks). Relevance: Frequent polling maintains accurate time and compensates for drift. EventLogFlags = 0 Purpose: Controls event logging verbosity. Values: 0 — Only critical errors. 1 — Informational and error events. 2 — All events, including debugging. Relevance: On a PDC, 0 keeps logs clean while still alerting to serious time issues. 
Enable Windows NTP Client = Enabled Purpose: Ensures the time service actively synchronizes with the defined NTP source. Relevance: Essential for keeping the PDC accurate and stable. Enable Windows NTP Server = Enabled Purpose: Turns the PDC into an NTP server for the domain. Relevance: Other DCs and domain members sync from the PDC rather than directly from the external NTP source, maintaining a clean and authoritative time hierarchy.
- Windows PE add-on for the Windows ADK for Windows 11, version 22H2 Error
Windows ADK PE for Windows 11 22H2 fails to install completely, generating the following errors. Error 1 Clicking on the Windows PE tab crashes MMC, generating the following error: Could not find a part of the path 'C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\x86\WinPE_OCs'. Google suggests the ADK PE isn't installed..... Error 2 It's not possible to update the boot images. Unable to open the specified WIM file. ---> System.Exception: Unable to open the specified WIM file. ---> System.ComponentModel.Win32Exception: The system cannot find the path specified. Something is definitely wrong and missing. Both errors suggest missing files, with error 1 providing a path: C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\ Comparing the latest ADK PE for Windows 11 22H2 to an older installation, something is very wrong: the x86 and ARM directories are missing..... ADK PE for Windows 11 22H2 ADK PE for Windows 10 1809 Is this a one-off... The initial problem presented itself whilst upgrading MDT and ADK installed on a 2012 R2 Server. A new instance of 2022 Server and Windows 11 were equally affected, and each instance of ADK PE was from a fresh download. Microsoft, I've got the contact details of some really good Test Managers. The Fix Not wishing to faff and spend too much time comparing downloads against different versions (it was taking me away from my planned day of Hack the Box), there's the recommended fix: download an earlier version of the ADK PE; 1809 for Windows 10 should do the trick. Windows 11 is purely cosmetic; under the hood it identifies as Windows 10. Alternatively, copy the missing contents from a working installation; not recommended, as it's a bit quick and dirty. It's functional, with some minor scripting errors in the MDT deployment wizard.
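If you do take the quick-and-dirty route, robocopy can mirror the missing folders from a known-good installation. A sketch, assuming the default install path and a working copy reachable at a hypothetical share; repeat for the missing ARM directory if required:

```powershell
# Source: a working ADK PE install (hypothetical host/share); destination: the broken install
$good = '\\GOODHOST\ADK\Windows Preinstallation Environment\x86'
$bad  = 'C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\x86'
robocopy $good $bad /E   # /E copies subdirectories, including empty ones
```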
- Shift+F10 PXE Attack....nearly 4 years on
During MDT or ConfigMgr deployment of Windows 10, press Shift+F10 whilst Windows detects devices. A command prompt with System privileges will pop up, allowing all sorts of shenanigans without being logged by a SIEM; those agents won't be running yet. Also, during Windows 10 upgrades, BitLocker drive encryption is disabled, allowing the same attack. This is an old issue, raised some 3 to 4 years ago.... Well, today on my test rig during a 1909 deployment, I was just curious, it can't still be vulnerable.... oops. The fix is pretty straightforward, although I can't take credit; that belongs to Johan Arwidmark and this post here
# Declare Mount Folders for DISM Offline Update
$mountFolder1 = 'D:\Mount1'
$mountFolder2 = 'D:\Mount2'
$WinImage = 'D:\MDTDeployment\Operating Systems\Windows 10 x64 1909\sources'
#Mount install.wim to first mount folder
Mount-WindowsImage -ImagePath "$WinImage\install.wim" -Index 1 -Path $mountFolder1
#Mount winre.wim to second mount folder
Mount-WindowsImage -ImagePath "$mountFolder1\Windows\System32\Recovery\winre.wim" -Index 1 -Path $mountFolder2
#Create folder for DisableCMDRequest.TAG file in winre.wim
New-Item $mountFolder2\Windows\setup\scripts -ItemType Directory
#Create DisableCMDRequest.TAG file for winre.wim
New-Item $mountFolder2\Windows\setup\scripts\DisableCMDRequest.TAG -ItemType File
#Commit changes to winre.wim
Dismount-WindowsImage -Path $mountFolder2 -Save
#Create folder for DisableCMDRequest.TAG in install.wim
New-Item $mountFolder1\Windows\setup\scripts -ItemType Directory
#Create DisableCMDRequest.TAG file for install.wim
New-Item $mountFolder1\Windows\setup\scripts\DisableCMDRequest.TAG -ItemType File
#Commit changes to install.wim
Dismount-WindowsImage -Path $mountFolder1 -Save
- Deploying without MDT or SCCM\MECM....
The best methods for deploying Windows are SCCM and then MDT, hands down. But what if you don't have either deployment service? Seriously… despite all the step-by-step guides and even scripts claiming you can deploy MDT in 45 minutes, some still opt to manually deploy or clone Windows; maybe they never moved past RIS. The real question is: can Windows 10 and a suite of applications, including Office, be automated without fancy deployment tools? The short answer: yes, but it's not pretty. There are problems that MDT and SCCM simply make disappear, and I'm not thrilled about dealing with them. Manual prep takes far more time, is less functional, and only makes sense if you have no more than a handful of Windows clients to deploy. If you ever consider doing it this way, it's only for very limited scenarios. My recommendation: use the proper deployment services designed specifically for Windows. It's faster, cleaner, and far less frustrating. Pre-requisites A 16 GB USB 3 drive as a minimum, preferably 32 GB Windows 10 media MS Office 2019 Chrome, MS Edge, Visual C++, Notepad++ Windows ADK Windows Media Download the Windows 10 ISO and double-click to mount it as D:\ Create a directory at C:\ named for the version of Windows, e.g. C:\Windows21H2\. Don't copy the contents of D:\ directly to the USB: install.wim is larger than FAT32's maximum supported file size of 4 GB. Copy the files from D:\ (Windows ISO) to C:\Windows21H2\ Split install.wim into 2 GB files to support FAT32: Dism /Split-Image /ImageFile:C:\Windows21H2\sources\install.wim /SWMFile:C:\Windows21H2\sources\install.swm /FileSize:2000 Delete C:\Windows21H2\sources\install.wim. Insert the USB pen and format it as FAT32; in this case, it will be assigned E:\ Copy the entire contents of C:\Windows21H2\ to E:\. Applications Create directory E:\Software, this is the root for all downloaded software to be saved to. 
Create the following sub-directories under E:\Software, and download the software to the relevant sub-directory. 7Zip & cmd.exe /c 7z2107-x64.exe /S Chrome & cmd.exe /c msiexec.exe /i GoogleChromeStandaloneEnterprise64.msi /norestart /quiet Drivers & cmd /c pnputil.exe /add-driver Path/*.inf /subdirs /install MS-VS-CPlus & cmd.exe /c vcredist_x86_2013.exe /S MS-Win10-CU & cmd /c wusa.exe windows10.0-kb5011487-x64.msu /quiet /norestart MS-Win10-SSU & cmd /c wusa.exe ssu-19041.1161-x64.msu /quiet MS-Edge & cmd.exe /c msiexec.exe /i MicrosoftEdgeEnterpriseX64.msi /norestart /quiet MS-Office2019 & cmd.exe /c MS-Office2019\Office\Setup64.exe NotepadPlus & cmd.exe /c npp.8.3.3.Installer.x64.exe /S TortoiseSVN & cmd.exe /c msiexec.exe /i TortoiseSVN-1.14.2.29370-x64-svn-1.14.1.msi /qn /norestart WinSCP & cmd.exe /c WinSCP-5.19.6-Setup.exe /VERYSILENT /NORESTART /ALLUSERS I've provided the unattended commands with their file extensions; it's important the correct file type is downloaded for the script to work correctly. Place any driver files in the 'Drivers' directory, unpacked as *.inf files. AutoUnattend Download the ADK for Windows (here). Install only the 'Deployment Tools'. From the Start Menu open 'Windows System Image Manager', create a 'New Answer File' and save it to the root of E:\ (USB), naming the file 'AutoUnattend.xml'. I cheated at this point; I didn't fancy creating the AutoUnattend.xml from scratch, so I "borrowed" a pre-configured unattend.xml from MDT. To save you the pain, download the 'AutoUnattend.xml' from GitHub (here). Save it to the root of E:\ (USB). Within the autounattend.xml, the following line is referenced to execute 'InstallScript.ps1' at first logon: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -executionpolicy bypass -file D:\software\InstallScript.ps1 Note that the PartitionID is '3' and the InstallFrom is updated from 'install.wim' to 'install.swm'. 
The default edition is Education; to select a different one, run the following command with Admin rights to list the available indexes: dism /Get-WimInfo /WimFile:"d:\sources\install.wim" Index : 1 Name : Windows 10 Education Description : Windows 10 Education Index : 2 Name : Windows 10 Education N Description : Windows 10 Education N Index : 3 Name : Windows 10 Enterprise Description : Windows 10 Enterprise Index : 4 Name : Windows 10 Enterprise N Description : Windows 10 Enterprise N Index : 5 Name : Windows 10 Pro Description : Windows 10 Pro Edit the AutoUnattend.xml and update the MetaData value under OSImage to reflect the desired index value. The Script Download 'InstallScript.ps1' from (here) and save it to E:\Software. A Brief Script Overview The first action is to copy the Software directory to C:\ so it can be referenced between reboots. The script adds Registry settings to autologon as 'FauxAdmin' with a password of 'Password1234'. I strongly suggest changing the hardcoded password to something more secure. Warning: During the installation of Windows, when prompted for a new account, ensure it reflects the hardcoded name and password in InstallScript.ps1: 'FauxAdmin', 'Password1234'. A Scheduled Task is added that will execute at logon as 'FauxAdmin'. The default hostname is Desktop-####; you'll be asked to enter a new hostname. Pre-create a Computer object in AD with the planned hostname of the client being deployed. Domain credentials will be required with delegated permissions to add computer objects to the domain. Update InstallScript.ps1 with the correct FQDN and OU path: $DomainN = "trg.loc" $ouPath = "OU=wks,OU=org,DC=trg,DC=loc" The Windows 10 CU and apps will install with various reboots, followed by a bit of a tidy-up to remove the autologon and Scheduled Task, and then a final reboot. To prevent re-installation or repeating an action, a 'check.txt' file is updated at the end of each step; if a step validates as $true, it will be skipped. 
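The autologon referred to above is the standard Winlogon registry mechanism. A sketch of the keys involved (the script's actual implementation may differ, and the plaintext DefaultPassword value is exactly why the password must be reset afterwards):

```powershell
# Standard Winlogon autologon values; shown for illustration, not lifted from InstallScript.ps1
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d FauxAdmin /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d Password1234 /f
```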
Deployment
Boot the PC and enter the BIOS\UEFI. Set UEFI boot or the initial boot device to USB, F10 to save and exit. Insert the USB and boot. Setup will start and prompt for disk partitioning; delete the volumes and create new default partitions. OK, Cortana. Create an account of 'FauxAdmin' + 'Password1234' - these account details are hardcoded in the script. At initial logon, the PowerShell script will launch. The process is complete when the client has been added to the domain and rebooted. Warning: now reset the FauxAdmin account's password; don't forget it's hardcoded in the script and could allow an attacker to gain access if the password isn't updated. Notes: The unattended disk partitioning proved unreliable and required manual intervention some of the time, so this step is now manual. It is assumed that the USB will map to D:\ during deployment; this is hardcoded for the Scheduled Task. Hiding Cortana resulted in removing the prompt for a new admin account; it's considered a security benefit to create a new admin account and disable the built-in Administrator with SID 500.
- Managing Local Admin Passwords with LAPS
How are you managing your local administrator passwords? Are they stored in a spreadsheet on a network share, or worse, is the same password used everywhere? Microsoft LAPS (Local Administrator Password Solution) could be the answer. LAPS is a lightweight tool that, with a few simple GPO settings, automatically randomizes local administrator passwords across your domain. It ensures each client and server has a unique, securely managed password, removing the need for spreadsheets or manual updates. Download LAPS from the Microsoft site. Copy the file to the Domain Controller and ensure that the account you're logged on with is a member of 'Schema Admins'. Install only the Management Tools. As it's a DC, it's optional whether to install the 'Fat Client UI'; schema updates should always be performed on a DC directly. Open PowerShell and run the following command after seeking approval:

Update-AdmPwdADSchema

SELF will need updating on the OUs for your workstations and servers: add SELF as the Security Principal and select 'Write ms-Mcs-AdmPwd'. Now change the GPO settings on the OUs. The default password length is 14 characters, but I would go higher and set it above 20. Install LAPS on a client, selecting only the 'AdmPwd GPO Extension'. On the Domain Controller open the LAPS UI, then search for a client and 'Set' it. Once the password has reset, open the properties of the client and check ms-Mcs-AdmPwd for the new password. Now every 30 days the local Admin password will be automatically updated and unique. Deploy the client with ConfigMgr to the remaining estate. By default Domain Admins have access to read the password attribute, and this can be delegated to a Security Group. AND..... this is the warning..... any delegated privileges that allow delegated Computer management and the 'Extended Attributes' can also read 'ms-MCS-AdmPwd'.
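The GUI steps above can also be driven from the AdmPwd.PS module that ships with LAPS. A hedged sketch follows; the cmdlets are LAPS's own, but the OU paths and the 'TRG\LAPS-Readers' group are examples from my lab, so adjust to suit:

```powershell
Import-Module AdmPwd.PS

# Extend the AD schema with ms-Mcs-AdmPwd (Schema Admins required; run once per forest)
Update-AdmPwdADSchema

# Grant SELF write access on the workstation OU so clients can store their own passwords
Set-AdmPwdComputerSelfPermission -OrgUnit 'OU=wks,OU=org,DC=trg,DC=loc'

# Delegate password read/reset to a security group rather than individual accounts
Set-AdmPwdReadPasswordPermission  -OrgUnit 'OU=wks,OU=org,DC=trg,DC=loc' -AllowedPrincipals 'TRG\LAPS-Readers'
Set-AdmPwdResetPasswordPermission -OrgUnit 'OU=wks,OU=org,DC=trg,DC=loc' -AllowedPrincipals 'TRG\LAPS-Readers'

# Read a managed password back (hostname is an example)
Get-AdmPwdPassword -ComputerName 'CLIENT01'
```

Delegating to a group from the start avoids exactly the accidental-delegation trap covered later in this series.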
- Using SCOM to Monitor AD and Local Accounts and Groups
For those that have deployed SCOM without ACS or another monitoring service, but don't have a full-blown IDS\IPS: with a little effort, it's possible to at least monitor and alert when critical groups and accounts are changed. As free alternatives, consider ELK (Elasticsearch) or Security Onion. The following example configures SCOM to alert when Domain Admins is updated. On the Authoring tab, Management Pack Objects, Rules, select 'NT Event Log (Alert)'. Create a new Management Pack if required; don't ever use the default MP. The 'Rule Name' should have an aspect that is unique to this and all subsequent rules, to assist searching later on; rules that monitor Groups or Accounts will be prefixed with 'GpMon'. The 'Rule Target' in this case is 'Windows Domain Controllers', a group targeting the domain's DCs. Change the 'Log Name' to 'Security'. Add Event ID 4728 (A member was added to a security-enabled global group). Update the Event Source to 'Contains' with a value of 'Domain Admins'. Update the priorities to High and Critical. Sit back, grab a coffee (or 2) and wait whilst the rule is distributed to the Domain Controllers; this can take a while. Test the rule by adding a group or account to Domain Admins; in the SCOM Monitoring tab, an alert will almost immediately appear with full details. Now for the laborious bit, create further monitors for the following: Server Operators, Account Operators, Print Operators, Schema and Enterprise Admins, any delegation or roll-up groups, SCCM administrative groups, CA administrative groups. That's the obvious groups covered; now target all Windows Servers and Clients (if SCOM has been deployed to the clients): local accounts for creation, addition to local groups and password resets, and Applocker to alert on any unauthorised software being installed or accessed. Finally, here's what Microsoft recommends. With a few hours of effort you'll have better visibility of the system and any changes to those critical groups.
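Before building the SCOM rule, it's worth confirming that 4728 events actually land in the Security log. A quick local check on a DC (assuming auditing of group-membership changes is enabled; the event ID and log name are as above, the rest is illustrative):

```powershell
# List recent 'member added to a security-enabled global group' events mentioning Domain Admins
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4728 } -MaxEvents 100 |
    Where-Object { $_.Message -match 'Domain Admins' } |
    Select-Object TimeCreated, Id, @{ n = 'Summary'; e = { ($_.Message -split "`n")[0] } }
```

If nothing comes back after a test group change, fix the audit policy first; the SCOM rule can only alert on events that are being written.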
- Always Patch Before Applocker or Device Guard are Deployed.
Labs don't tend to follow best practices or any security standards; they're quick, dirty installations for developing and messing around. Here's some food for thought the next time you want to test Applocker or Windows Defender Application Control (WDAC), aka Device Guard: you may wish to at least patch first. For the most part, deploying Domain Infrastructure, scripts and services works great, until Device Guard is deployed to an unpatched Windows 11 client. Firstly the steps on how to configure Device Guard, then the fun... The DeviceGuardBasic.ps1 script can be downloaded from ( here ). Run the script as Admin and point the Local GPO to Initial.bin following the help. Device Guard is set to enforced, no audit mode for me, that's for wimps, been here hundreds of times......what's the worst that can happen..... arrghhhhh. The first indication Windows 11 had issues was 'Settings' crashing upon opening. This isn't my first rodeo; straight to the event logs. Ah, a bloodbath of red Code Integrity errors complaining that a file hasn't been signed correctly. How could this be.... the files are Microsoft files. This doesn't look good; the digital signature can't be verified, meaning the signing certificate isn't in the computer's Root Certificate Store. This is not the first time I've seen the 'Microsoft Development PCA 2014' certificate. A few years back a sub-optimal Office 2016 update prevented Word, PowerPoint and Excel from launching. It was Applocker protecting me from the Microsoft Development certificate at that time. Well done Microsoft, I see your test and release cycle hasn't improved. A Windows update and all is fine….right.....as if. I'm unable to click on the 'Install updates' button; it's part of Settings and no longer accessible. Bring back Control Panel. No way I'm waiting for Windows to get around to installing the updates by itself. The choices: Disable Device Guard by removing the GPO and deleting the SIPolicy.p7b file.
Create an additional policy based on hashes. Start again: 2 hours of effort, most of that waiting for updates to install. Creating an additional policy based on hashes and then merging it into the 'initial' policy allows for testing Device Guard's behaviour. Does Device Guard prevent untrusted and poorly signed files from running when hashes are present? The observed behaviour is for the Device Guard policy to create hashes for unsigned files as a fallback. The new and improved Device Guard script, aptly named 'DeviceGuard-withMerge.ps1', can be downloaded from ( here ). The only additional lines of note are the New-CIPolicy to create hashes only for the "C:\Windows\SystemApps" directory, and the Merge-CIPolicy to merge the 2 XML policy files:

New-CIPolicy -Level Hash -FilePath $HashCIPolicy -UserPEs 3> $HashCIPolicyTxt -ScanPath "C:\Windows\SystemApps\"
Merge-CIPolicy -PolicyPaths $InitialCIPolicy,$HashCIPolicy -OutputFilePath $MergedCIPolicy

The result: 'Settings' now works despite Microsoft's best effort to ruin my day. Creating Device Guard policies based on hashes works around the files incorrectly signed by Microsoft's internal development CA. Below is the proof, 'Settings' is functional even with those dodgy files. Conclusion: This may come as a shock to some….. Microsoft does make mistakes and release incorrectly signed files… shocking. Device Guard will allow files to run providing the hashes are present, even when incorrectly signed. Did I learn something? Hell yeah! Always patch before deploying Device Guard or Applocker. The time spent faffing around resolving the issue far exceeded the time it would have taken to patch in the first place.
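Pulling the merge steps together, the end-to-end flow looks roughly like this. Treat it as a sketch: the file paths are from my lab, and only the New-CIPolicy and Merge-CIPolicy lines come from the script above; the final conversion to the binary SIPolicy.p7b is the standard ConfigCI step:

```powershell
$InitialCIPolicy = 'C:\CI\Initial.xml'   # existing publisher-based policy
$HashCIPolicy    = 'C:\CI\Hash.xml'      # hash fallback for the broken SystemApps files
$MergedCIPolicy  = 'C:\CI\Merged.xml'

# Build a hash-level policy covering only the problem directory
New-CIPolicy -Level Hash -ScanPath 'C:\Windows\SystemApps\' -FilePath $HashCIPolicy -UserPEs

# Merge both policies, then convert the XML to the binary the Local GPO points at
Merge-CIPolicy -PolicyPaths $InitialCIPolicy, $HashCIPolicy -OutputFilePath $MergedCIPolicy
ConvertFrom-CIPolicy -XmlFilePath $MergedCIPolicy -BinaryFilePath 'C:\CI\SIPolicy.p7b'
```

Scoping the hash scan to just C:\Windows\SystemApps keeps the policy small; hash rules break on every file update, so the narrower the scan, the less churn.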
- LAPS Leaks Local Admin Passwords
On a previous blog ( here ), LAPS (Local Administrator Password Solution) was installed. LAPS manages and updates the local Administrator passwords on clients and member servers, controlled via GPO. Only Domain Admins have permission by default to view the local administrator password for clients and member servers. Access to view the passwords by non-Domain Admins is granted via delegation, and herein lies the problem: access to the local administrator passwords may be delegated unintentionally. This could lead to a serious security breach, leaking the local admin account passwords of all computer objects to those that shouldn't have access. This article will demonstrate a typical delegation for adding a computer object to an OU, and how to tweak the delegation to prevent access to the ms-Mcs-AdmPwd attribute.

Prep Work
There is some prep work: LAPS is required to be installed and configured, follow the above link. At least 1 non-domain-joined client, preferably 2, e.g. Windows 10 or 11 Enterprise. A test account, mine's named TestAdmin, with no privileges or delegations, and an OU named 'Workstation Test'. Ideally I'd be using AD groups rather than delegating to TestAdmin directly, but a direct assignment is easier for demonstration purposes.

Delegation of Account
Open Active Directory Users and Computers, or type dsa.msc in the Run command. With a Domain Admin account right-click on the 'Workstation Test' OU, Properties, Security tab and then Advanced. Click Add and select TestAdmin as the principal. Select 'Applies to: This Object and all Descendant Objects'. In the Permission window below select: Create Computer Objects, Delete Computer Objects. Apply the change. This is a 2-step process; repeat, this time selecting 'Applies to: Descendant Computer Objects' and 'Full Control' in the Permissions window.

Test Delegation
Log on to a domain workstation with RSAT installed and open Active Directory Users and Computers.
Test by pre-creating a Computer object: right-click on the OU, select New > Computer, and type the hostname of the client to be added. Log on to a non-domain-joined Windows client as the administrator and add it to the domain using the TestAdmin credentials, then reboot. Then wait for the LAPS policy to apply.......I've set a policy to update daily.

View the LAPS Password
As TestAdmin, from within AD Users and Computers go to View and select Advanced. Right-click Properties on the client and select the Attribute Editor tab. Scroll down and locate 'ms-Mcs-AdmPwd'; that's the Administrator password for that client.

The Fix....
To prevent TestAdmin from reading the ms-Mcs-AdmPwd attribute value, a slight amendment to the delegation is required. As the Domain Admin, right-click on the 'Workstation Test' OU, Properties, Security tab and then Advanced. Select the TestAdmin entry; it should say 'Full Control'. Remove 'All Extended Rights', 'Change Password' and 'Reset Password' and apply the change. As TestAdmin, open AD Users and Computers and view the computer's attributes: ms-Mcs-AdmPwd is no longer visible.

Did I Just Break Something......
Test the change by adding a computer object to the OU and adding a client to the domain. Introducing computers to the domain is still functional... no harm, no foul.

Final Thoughts
Removing the Extended Rights and Password permissions prevents the delegated account from reading the local administrator password from the ms-Mcs-AdmPwd AD attribute, without causing any noticeable problems. Watch for any future delegations, ensuring the permissions aren't restored by accident. Enjoy, and hope this was insightful.
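After adjusting the delegation, it's worth auditing which principals still hold extended rights on the OU; LAPS ships a cmdlet for exactly this. The cmdlet is LAPS's own, the OU path is my lab's:

```powershell
Import-Module AdmPwd.PS

# Reports every identity holding 'All Extended Rights' on the OU,
# i.e. anyone still able to read ms-Mcs-AdmPwd
Find-AdmPwdExtendedRights -Identity 'OU=Workstation Test,DC=trg,DC=loc' | Format-List
```

Running this periodically (or after any delegation change) catches the accidental restorations warned about above.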
- Code Signing PowerShell Scripts
In this article, I'll describe the process of Code Signing PowerShell scripts from a Microsoft CA. I'll not cover how Code Signing adds security; simply put, Code Signing doesn't provide, and was never intended to provide, a robust security layer. However, Code Signing does provide both Authenticity and Integrity: Authenticity, in that the script was written or reviewed by a trusted entity and then signed. Integrity ensures that once signed, the script hasn't been modified, which is useful when deploying scripts or executing scripts via a scheduled task with a service account. Bypassing Code Signing requirements is simple: open ISE, paste in the code and F8, instant bypass. However, my development 'Enterprise' system is not standard; ISE won't work, as Constrained Language Mode prevents all but core functionality from loading, meaning no APIs, .NET, COM and most modules. As a note, even with the script code signed, ISE is next to useless with Constrained Language Mode enforced. Scripts require both signing and authorising in Applocker\WDAC, and will only execute from native PowerShell. Back to it..... This is a typical message when executing a PowerShell script on a system requiring Code Signing. To successfully execute the script, it must be signed with a digital signature from either a CA or a self-signed certificate. I'm not going to self-sign, it's filth, and I've access to a Microsoft Certificate Authority (CA) as part of the Enterprise. Log in to the CA, launch 'Manage' and locate the 'Code Signing' template, then 'Duplicate Template'. Complete the new template with the following settings: General: Name the new certificate template with something meaningful and up the validity to 3 years, or to the maximum the corporate policy allows. Compatibility: Update the Compatibility Settings and Certificate Recipient to 'Windows Server 2016' and 'Windows 10/Windows Server 2016' respectively. Request Handling: Check 'Allow private key to be exported'.
Cryptography: Set 'Minimum key size' to one of 1024, 2048, 4096, 8192 or 16384 (2048 or higher is advisable). Select 'Requests must use one of the following providers:' and check 'Microsoft Enhanced RSA and AES Cryptographic Provider' ( description ). Security: Ideally, enrolment is controlled via an AD Group with both Read and Enroll permissions. Do not under any circumstances allow Write or Full Control. Save the new template and then issue it by right-clicking on 'Certificate Templates' > New > 'Certificate Template to Issue'. From a client, logged on with an account that is a member of the 'CRT_PowerShellCodeSigning' group, launch MMC and add the Certificates snap-in for the Current User. Browse to Personal > Certificates and right-click in the empty space to the right, then click on 'All Tasks' > 'Request New Certificate'. Select the 'Toyo Code Signing' template and then click on 'Properties' to add some additional information. Add a Friendly Name and Description. Enrol the template. Now right-click on the new 'Code Signing' certificate > All Tasks > Export. Select 'Yes, export the private key'. Ensure the 2 PKCS options are selected. Check 'Group or username (recommended)' and on the Encryption drop-down select 'AES256-SHA256'. Complete the wizard by exporting the .pfx file. The final step is to sign a script with the .pfx file using PowerShell. Note that -Certificate expects a certificate object rather than a path, so load the .pfx first:

$cert = Get-PfxCertificate -FilePath "C:\Downloads\CodeSigning.pfx"
Set-AuthenticodeSignature -FilePath "C:\Downloads\SecureReport9.4.ps1" -Certificate $cert

Open the newly signed script, and at the bottom of the script is the digital signature. Launch PowerShell.exe and run the script. For those with Applocker\WDAC, the script requires adding to the allow list by file hash. Now I'll be able to execute my own Pentest script on my allegedly secure system and locate any missing settings..... As always, thanks for your support.
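Once signed, the signature can be verified from PowerShell before rolling the script out; the path below is the example used above:

```powershell
# A Status of 'Valid' means the file is signed and the chain is trusted on this machine;
# 'UnknownError' or 'NotSigned' means something went wrong with the signing or the chain
$sig = Get-AuthenticodeSignature -FilePath 'C:\Downloads\SecureReport9.4.ps1'
$sig | Select-Object Status, StatusMessage, @{ n = 'Signer'; e = { $_.SignerCertificate.Subject } }
```

This is also a handy check after editing a signed script: any change, even whitespace, invalidates the signature and the script must be re-signed.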









