

  • Quick Guide for Intune's Autopilot

    Intune's Autopilot automates the configuration and setup of new devices, allowing users to start working with pre-configured settings, applications, and security policies as soon as they power on their device. In this blog, we'll explore how Microsoft Intune Autopilot works. Let's get started.

    Dynamic Group for the Deployment Profile
    To ensure that every newly registered device is associated with Autopilot automatically, you first need to create a dynamic Azure AD (Entra) security group. From within Intune, browse to Groups and then click on New Group. Edit the Dynamic Query, then paste the following string and Save.

      (device.devicePhysicalIDs -any (_ -startsWith "[ZTDid]"))

    Enrollment Configuration
    From within Intune, browse to Devices, Windows, then Enrollment.

    Device Platform Restrictions
    Intune's Device Platform Restrictions control which types of device can access organizational resources based on their platform (e.g., Windows, iOS, Android, macOS). This feature enhances security by limiting access to approved device types and blocking untrusted or unsupported platforms. This step isn't necessary for Autopilot to work, as the default is to allow all devices; however, we will block personally owned Windows devices. Click on the 'All Users' link, then change 'Personally owned devices' for Windows (MDM) to Block.

    Deployment Profiles
    Autopilot deployment profiles in Microsoft Intune are configuration templates that define how new devices are set up and managed during the out-of-box experience (OOBE). These profiles allow automated and customizable deployment, specifying settings such as the Azure AD join type and user-driven or self-deploying mode. Navigate to Deployment Profiles within the Enrollment tab, then select Create Profile. Provide a name and select Yes for 'Convert all targeted devices to Autopilot'; this enables all non-Autopilot devices, or current members of Entra, to become Autopilot-registered when they are assigned to the profile group. Select User-Driven and any other pertinent settings. Assign the Windows Autopilot group created earlier and then save the changes. That covers the basics of configuring auto-enrollment. I'll skip the Enrollment Status Page for now, as it's not essential for this introductory guide.

    Enrollment of a Device
    For the purposes of this blog, a Windows 11 23H2 OS has been installed on Hyper-V, and setup has been progressed to the Region selection page. Press Shift + F10 for an administrative shell, then type the following to download the Autopilot PowerShell script and register the device (see the sketch below for a commented version):

      powershell
      install-script get-windowsautopilotinfo
      set-executionpolicy -ex bypass
      get-windowsautopilotinfo -online

    Enter Azure credentials to register the device and accept the permissions request, then wait while the device completes the registration. Go back to Autopilot under the Devices section and verify that the device has been successfully registered. Restart the device, which will then connect to Intune and retrieve the assigned policies. Enter your Azure credentials. Once the device is ready, log in, and after a brief wait, any assigned applications will begin to install. That wraps up this quick configuration guide for Intune Autopilot.

    Links: https://learn.microsoft.com/en-us/autopilot/enrollment-autopilot
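    A commented sketch of the same OOBE registration flow; the -OutputFile parameter, for capturing the hardware hash to a CSV for bulk import instead of registering online, is part of the published Get-WindowsAutopilotInfo script, though the file path here is only an illustration:

      # Run from the Shift + F10 command prompt during OOBE
      powershell                                    # start PowerShell from cmd.exe
      Set-ExecutionPolicy -ExecutionPolicy Bypass   # allow the downloaded script to run
      Install-Script Get-WindowsAutopilotInfo -Force
      # Option 1: register directly against the tenant (prompts for Azure credentials)
      Get-WindowsAutopilotInfo -Online
      # Option 2: export the hardware hash for manual upload via the Intune portal
      Get-WindowsAutopilotInfo -OutputFile C:\AutopilotHash.csv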

  • Deploying Windows Domains as an EC2 Instance with PowerShell - Part 1

    Welcome back! In this blog, I'll demonstrate how you can leverage PowerShell to automate the entire setup of a Windows domain environment on AWS, from creating the VPC to configuring the EC2 encrypted volumes. Before we start, be aware that deploying this will incur AWS costs: the instance type is t3.medium and the volume is set to $ebsVolType = "io1" and $ebsIops = 1000.

    This is Part 1 of a two-parter, and it focuses on setting up the scripting environment and meeting the prerequisites. The ultimate goal is to deploy a public-facing Remote Desktop Server (RDS) and a private Domain Controller (DC) by PowerShell. The Remote Desktop Server will serve as a jump box, providing remote access to the network, while the Domain Controller will be securely tucked away in a private subnet, only accessible through the RDS.

    Prerequisites
    There are a few prerequisites before deploying EC2 instances from PowerShell:
    PowerShell 7 or Visual Studio Code is required.
    An AWS account and its corresponding Access ID and Secret Key.
    The AWS account requires the 'AdministratorAccess' role or delegated permissions.
    A basic understanding of both AWS and Windows Domains.
    The default password for the EC2 instances is 'ChangeMe1234'.

    Previous post on automating Domain and OU creation
    Before diving into this blog, I highly recommend checking out the previous blogs where I used PowerShell to deploy a domain and create an Organizational Unit (OU) structure. The script used for this AWS blog is a slightly customized version of the Domain script below and as such doesn't require downloading.
    The description: https://www.tenaka.net/post/deploy-domain-with-powershell-and-json-part-1
    The original Domain script: https://github.com/Tenaka/Active-Directory-Automated-Deployment-and-Delegation

    Install Visual Studio Code or PowerShell
    I recommend installing either PowerShell 7 (PS7) or Visual Studio Code (VSC), along with the latest .NET SDK.
    .NET SDKs for Visual Studio: https://dotnet.microsoft.com/en-us/download/visual-studio-sdks
    Download Visual Studio Code: https://code.visualstudio.com/download
    Installing PowerShell on Windows: https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell

    AWS Account, Permissions and Access ID
    From within the AWS console, navigate to IAM and create a service account specifically for executing scripts to create the required AWS services. Ensure this service account has the necessary permissions by adding the following policies and the two custom policies (a sketch of attaching them from PowerShell follows below):
    AmazonEC2FullAccess, AmazonS3FullAccess, AWSKeyManagementServicePowerUser, AmazonSSMReadOnlyAccess, IAMFullAccess, AmazonSSMManagedInstanceCore

    The following KMS policy grants the rights needed to enable EC2 encrypted volumes; this policy requires further tweaking as it's far too encompassing.

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
              "kms:Decrypt",
              "kms:GenerateRandom",
              "kms:ListRetirableGrants",
              "kms:CreateCustomKeyStore",
              "kms:DescribeCustomKeyStores",
              "kms:ListKeys",
              "kms:DeleteCustomKeyStore",
              "kms:UpdateCustomKeyStore",
              "kms:Encrypt",
              "kms:ListAliases",
              "kms:GenerateDataKey",
              "kms:DisconnectCustomKeyStore",
              "kms:CreateKey",
              "kms:DescribeKey",
              "kms:ConnectCustomKeyStore",
              "kms:CreateGrant"
            ],
            "Resource": "*"
          },
          {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "kms:*",
            "Resource": "*"
          }
        ]
      }
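    As an aside, the AWS managed policies above can also be attached from PowerShell once the AWS.Tools.IdentityManagement module is installed; a minimal sketch, assuming the service account is named 'svc-deploy' (the account name is illustrative, not from the script):

      # Attach the AWS managed policies to the deployment service account
      Import-Module AWS.Tools.IdentityManagement
      $policies = @(
          'arn:aws:iam::aws:policy/AmazonEC2FullAccess',
          'arn:aws:iam::aws:policy/AmazonS3FullAccess',
          'arn:aws:iam::aws:policy/AWSKeyManagementServicePowerUser',
          'arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess',
          'arn:aws:iam::aws:policy/IAMFullAccess'
      )
      foreach ($arn in $policies)
      {
          # AttachUserPolicy under the hood
          Register-IAMUserPolicy -UserName 'svc-deploy' -PolicyArn $arn
      }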
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ssm:SendCommand", "ssmmessages:CreateDataChannel", "ssmmessages:OpenDataChannel", "ssmmessages:OpenControlChannel", "ssmmessages:CreateControlChannel" ], "Resource": "*" } ] } If nothing else works, consider adding the 'AdministratorAccess' policy to the service account. Create Access Key Create an Access Key by navigating to the Security tab of the service account and creating a 'Command Line Interface' (CLI) use case. Record the Access Key and Secret Access Key. Download this script... After you've familiarized yourself with the above concepts covered in our previous blogs and created the AWS account with the correct rights, download the PowerShell DeployVPCwithDomain.ps1 script from the link below. https://github.com/Tenaka/AWS-PowerShell/blob/main/DeployVPCwithDomain.ps1 This script is designed to automate the setup of EC2 instances, including a public-facing Remote Desktop Server and a secure, private domain controller. Pick your Scripting Engine I'll be using an elevated Visual Studio Code (VSC) session, all testing has been completed with VSC. While PowerShell version 7 should work, it hasn’t been extensively tested. Variables that need your attention Open the DeployVPCwithDomain.ps1 script in Visual Studio Code (VSC), but hold off on executing it. There are sections you might want to modify first. Update the Region, the default is 'us-east-1' $region1 = "us-east-1" Set-defaultAWSRegion -Region $region1 Update the second and third octets of the CIDR block, as these will form the foundation for your VPC. 10.1.250.0/24 is for a future iteration where Transit Gateways are deployed for additional AD Sites. For now, 10.1.250.0/24 is free to use. $cidr = "10.1.1" # Dont use "10.1.250.0/24" $cidrFull = "$($cidr).0/24" During the execution of DeployVPCwithDomain.ps1, an additional Active Directory script is downloaded from GitHub. This script is used for the configuration of the Domain Controller. $domainZip = "https://github.com/Tenaka/AWS-PowerShell/raw/main/AD-AWS.zip" Invoke-WebRequest -Uri $domainZip -OutFile "$($pwdPath)\AD-AWS.zip" -errorAction Stop DeployVPCwithDomain.ps1, will pause at this point to allow updates to dcPromo.json contained within AD-AWS.zip , this is so the default password of ChangeMe1234 can be changed. If you decide to change the default password, be sure to update it in the UserData sections for both the private and public EC2 instances as well. Set-LocalUser -Name "administrator" -Password (ConvertTo-SecureString -AsPlainText ChangeMe1234 -Force) That's it for now... That's it for this blog, we're all prepped for executing the script! Make sure to come back for Part 2, where I dive into the specifics of what the script creates in AWS. We'll also explore how the script sets up a fully functional Active Directory environment, complete with a domain controller and remote access configurations. Stay tuned!

  • Deploying Windows Domains as an EC2 Instance with PowerShell - Part 2

    Welcome to Part 2! Let's take a deep dive into the specifics of what the DeployVPCwithDomain.ps1 script creates in AWS. Here's a quick recap: a public-facing Remote Desktop Server (RDS) and a private Domain Controller (DC) will be deployed into AWS, with all the required AWS infrastructure and services, using PowerShell. If you haven't read Part 1, I strongly suggest you do, and ensure all the prerequisites are fulfilled; otherwise, it's likely to get messy. To reiterate, deploying this will incur AWS costs: the instance type is t3.medium and the volume is set to $ebsVolType = "io1" and $ebsIops = 1000.

    Prerequisites
    PowerShell 7 or Visual Studio Code is required.
    An AWS account and its corresponding Access ID and Secret Key.
    The AWS account requires the 'AdministratorAccess' role or delegated permissions.
    A basic understanding of both AWS and Windows Domains.

    This blog will focus on the execution of the script and the provisioning of the AWS services, including the configuration of the VPC, subnets and security groups, and the deployment of EC2 instances. You'll also see how the script sets up a fully functional Active Directory environment, complete with a domain controller, OU, delegation and GPO configuration.

    Let's Get Started!
    Begin by loading DeployVPCwithDomain.ps1 in Visual Studio Code with elevated rights. I normally press 'Ctrl + A' and then F8 to execute the script; equally, F5 works. The script starts by installing the necessary AWS PowerShell modules from the PowerShell Gallery. Loading the modules can be problematic; if any of them fail, the script should catch the error. I suggest closing VSC, deleting the modules from "C:\Users\%username%\Documents\PowerShell\Modules\", and then restarting the script from VSC.

    Access Key and Secret Access Key
    Enter both the Access Key and Secret Key created for the service account.

    Regions
    The script sets the default AWS region using Set-DefaultAWSRegion -Region $region1; this region is also hardcoded in the UserData script for both S3 and the EC2 instances.

      $region1 = "us-east-1" # this is hardcoded in the EC2 UserData script
      Set-DefaultAWSRegion -Region $region1

    VPC
    The VPC is configured with the following CIDR block, which specifies the VPC's address range and provides 254 usable IP addresses.

      $cidr = "10.1.1"
      $cidrFull = "$($cidr).0/24"
      $newVPC = New-EC2Vpc -CidrBlock "$cidrFull"
      $vpcID = $newVPC.VpcId

    Subnets
    Two subnets, each with 30 usable addresses, are created from the VPC: one for public access and one for private use.

      $Ec2subnetPub = New-EC2Subnet -CidrBlock "$($cidr).0/27" -VpcId $vpcID
      $Ec2subnetPriv = New-EC2Subnet -CidrBlock "$($cidr).32/27" -VpcId $vpcID

    Internet Gateway
    An Internet Gateway enables communication between your VPC and the Internet by acting as a bridge, allowing instances within your VPC to send and receive traffic from the Internet. A quick sanity check of what's been built so far is sketched below.

      $Ec2InternetGateway = New-EC2InternetGateway
      $InterGatewayID = $Ec2InternetGateway.InternetGatewayId
      Add-EC2InternetGateway -InternetGatewayId $InterGatewayID -VpcId $vpcID
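    A minimal sketch for verifying the new VPC and subnets from the same session, using read-only cmdlets from the AWS.Tools.EC2 module the script already loads:

      # List the VPC and its subnets to confirm the CIDR carve-up
      Get-EC2Vpc -VpcId $vpcID | Select-Object VpcId, CidrBlock, State
      Get-EC2Subnet -Filter @{ Name = 'vpc-id'; Values = $vpcID } |
          Select-Object SubnetId, CidrBlock, AvailableIpAddressCount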
    Public and Private Route Tables
    To enable internet access for the VPC's public subnet, a route table is created and configured to direct traffic to the Internet Gateway.

      $Ec2RouteTablePub = New-EC2RouteTable -VpcId $vpcID
      New-EC2Route -RouteTableId $Ec2RouteTablePub.RouteTableId -DestinationCidrBlock "0.0.0.0/0" -GatewayId $InterGatewayID
      Register-EC2RouteTable -RouteTableId $Ec2RouteTablePubID -SubnetId $SubPubID

    Public IP
    Invoke-WebRequest fetches your public IP address by querying ifconfig.me/ip. If the request fails or returns an empty value, it defaults to "10.10.10.10".

      $whatsMyIP = (Invoke-WebRequest ifconfig.me/ip).Content.Trim()
      if ([string]::IsNullOrWhiteSpace($whatsMyIP) -eq $true){$whatsMyIP = "10.10.10.10"}

    If the jump box becomes inaccessible, and unless your public IP is static, your IP is likely to change, making it necessary to update the public security group.

    Security Groups
    The script creates two security groups within the VPC. The PublicSubnet security group manages traffic rules for public subnet instances.

      $SecurityGroupPub = New-EC2SecurityGroup -Description "Public Security Group" -GroupName "PublicSubnet" -VpcId $vpcID -Force -ErrorAction Stop

    The script defines inbound and outbound rules for the security group.

      #Inbound Rules
      $InTCPWhatmyIP3389 = @{IpProtocol="tcp"; FromPort="3389"; ToPort="3389"; IpRanges="$($whatsMyIP)/32"}
      #Outbound Rules
      $EgAllCidr = @{IpProtocol="-1"; FromPort="-1"; ToPort="-1"; IpRanges=$cidrFull}

    Grant-EC2SecurityGroupIngress applies the inbound rules to the defined security group.

      Grant-EC2SecurityGroupIngress -GroupId $SecurityGroupPub -IpPermission @($InTCPWhatmyIP3389)

    S3 Bucket
    An S3 bucket is created to host the AD script.

      $news3Bucket = New-S3Bucket -BucketName "auto-domain-create-$($dateTodayMinutes)"
      $s3BucketName = $news3Bucket.BucketName
      $S3BucketARN = "arn:aws:s3:::$($s3BucketName)"
      $s3Url = "https://$($s3BucketName).s3.amazonaws.com/Domain/"

    S3 Bucket Access
    To grant the EC2 instance access to the S3 bucket for running the AD script, a new IAM user is created.

      $s3User = "DomainCtrl-S3-READ"
      $newIAMS3Read = New-IAMUser -UserName $s3User

    A new access key for the IAM user is generated and written into the UserData, allowing the EC2 instance to securely authenticate and access the S3 bucket.

      $newIAMAccKey = New-IAMAccessKey -UserName $newIAMS3Read.UserName
      $iamS3AccessID = $newIAMAccKey.AccessKeyId
      $iamS3AccessKey = $newIAMAccKey.SecretAccessKey

    The following IAM group is created and the IAM user added to it.

      $s3Group = 'S3-AWS-DC'
      New-IAMGroup -GroupName 'S3-AWS-DC'
      Add-IAMUserToGroup -GroupName $s3Group -UserName $s3User

    The policy for read access to the S3 bucket is defined.

      $s3Policy = @'
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "s3:Get*",
              "s3:List*",
              "s3:Describe*"
            ],
            "Resource": "*"
          }
        ]
      }
      '@

    The IAM policy is created and attached to the above group.

      $iamNewS3ReadPolicy = New-IAMPolicy -PolicyName 'S3-DC-Read' -Description 'Read S3 from DC' -PolicyDocument $s3Policy
      Register-IAMGroupPolicy -GroupName $s3Group -PolicyArn $iamNewS3ReadPolicy.Arn

    VPC Endpoint
    A VPC endpoint, which allows resources within the VPC to privately connect to AWS services without needing an internet gateway, is created to allow the private EC2 instance to access the S3 bucket.

      $newEnpointS3 = New-EC2VpcEndpoint -ServiceName "com.amazonaws.us-east-1.s3" -VpcEndpointType Gateway -VpcId $vpcID -RouteTableId $Ec2RouteTablePubID, $Ec2RouteTablePrivID
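    The upload of the AD content into the bucket isn't shown above; a minimal sketch of how it can be done with the AWS.Tools.S3 module (the local zip path and key prefix are illustrative, not lifted from the script):

      # Push the Domain Controller build content up to the new bucket
      Write-S3Object -BucketName $s3BucketName -File ".\AD-AWS.zip" -Key "Domain/AD-AWS.zip"
      # Confirm the object landed
      Get-S3Object -BucketName $s3BucketName -KeyPrefix "Domain/" | Select-Object Key, Size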
    UserData Scripts
    EC2 UserData automatically provides commands to the instance at its initial launch and first boot. In this case, the PowerShell script changes the default AWS-assigned password to 'ChangeMe1234' and renames the public EC2 instance to JUMPBOX1.

      $RDPScript = '
      Set-LocalUser -Name "administrator" -Password (ConvertTo-SecureString -AsPlainText ChangeMe1234 -Force)
      Rename-Computer -NewName "JUMPBOX1"
      shutdown /r /t 10
      '

    The PowerShell script for the EC2 instance UserData is encoded in Base64 because AWS requires UserData in this format.

      $RDPUserData = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($RDPScript))

    EC2 Encrypted Volumes
    EC2 encrypted volumes use AWS Key Management Service (KMS) to automatically encrypt data at rest, in transit between the instance and the volume, and during snapshots. This ensures that all data on the volume is securely protected, with encryption keys managed by AWS. To enable EC2 encrypted volumes, KMS permissions must be granted in IAM, and the following values are specified.

      $ebsVolType = "io1"
      $ebsIops = 2000
      $ebsTrue = $true
      $ebsFalse = $false
      $ebskmsKeyArn = $newKMSKey.Arn
      $ebsVolSize = 50

      $blockDeviceMapping = New-Object Amazon.EC2.Model.BlockDeviceMapping
      $blockDeviceMapping.DeviceName = "/dev/sda1"
      $blockDeviceMapping.Ebs = New-Object Amazon.EC2.Model.EbsBlockDevice
      $blockDeviceMapping.Ebs.DeleteOnTermination = $ebsTrue
      $blockDeviceMapping.Ebs.Iops = $ebsIops
      $blockDeviceMapping.Ebs.KmsKeyId = $ebsKmsKeyArn
      $blockDeviceMapping.Ebs.Encrypted = $ebsTrue
      $blockDeviceMapping.Ebs.VolumeSize = $ebsVolSize
      $blockDeviceMapping.Ebs.VolumeType = $ebsVolType

    Additional help can be found @ https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2/image/block_device_mappings.html

    EC2 Instance Attributes
    The New-EC2Instance command and the following configuration parameters are declared to deploy and manage the EC2 instances in AWS.

      $new2022InstancePub = New-EC2Instance `
          -ImageId $gtSrv2022AMI.value `
          -MinCount 1 -MaxCount 1 `
          -KeyName $newKeyPair.KeyName `
          -SecurityGroupId $SecurityGroupPub `
          -InstanceType t3.medium `
          -SubnetId $SubPubID `
          -UserData $RDPUserData `
          -BlockDeviceMapping $blockDeviceMapping

    Accessing the Jump Box
    The public RDP jump box, accessible only from your public IP, will launch quickly. Retrieve the instance's public IP from the AWS EC2 page (or with PowerShell, as sketched below), type 'mstsc' at the Run command, and enter the IP. Be sure to wait for the instance to fully initialize before connecting. Enter 'Administrator' and the password 'ChangeMe1234'; once logged on, change the password to something more secure.

    Accessing the Domain Controller
    The Domain Controller will take some time to deploy, even after it shows as Running on the EC2 page. It undergoes a few reboots and runs scripts to install AD roles, create an OU structure, delegate access, and set up the GPOs. It's a good time to grab a coffee and take a 10-minute break. Once you've finished your coffee, retrieve the Domain Controller's private IP, based on the VPC private subnet, from within the AWS EC2 page. Then, from within the jump box, launch 'mstsc' and enter the Domain Controller's IP. The FQDN for the domain is 'testdom.loc'. Enter 'Administrator' and the password 'ChangeMe1234'. To update the password, open 'Active Directory Users and Computers', find the 'Administrator' account, and reset the password.
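    A minimal sketch for pulling both instances' IPs from PowerShell instead of the console, assuming the AWS.Tools.EC2 module and that $newVPC still holds the deployment's VPC object from earlier in the session:

      # Public and private IPs for every instance in the deployment VPC
      (Get-EC2Instance -Filter @{ Name = 'vpc-id'; Values = $newVPC.VpcId }).Instances |
          Select-Object InstanceId, PublicIpAddress, PrivateIpAddress, State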
    OU Structure
    A comprehensive OU structure with GPOs, URA, and Restricted and Nested Groups is deployed in a tiered model. It's too involved to cover here, but a full description can be found @ https://www.tenaka.net/post/deploy-domain-with-powershell-and-json-part-2-ou-delegation

    JSON
    The script deployed for AWS is a slightly modified version of the original. As with the original, it is tied to the hostname of the Domain Controller, which is hardcoded as 'AWSDC01' in both the UserData and the JSON file. The other modification involves the IP address: the IP section in the JSON file is ignored, with the Domain Controller being statically assigned the IP provided by AWS's DHCP server (a sketch of editing these values from PowerShell is included at the end of this post).

      {
        "FirstDC": {
          "PDCName":"AWSDC01",
          "PDCRole":"true",
          "IPAddress":"10.0.2.69",
          "Subnet":"255.255.255.0",
          "DefaultGateway":"10.0.2.1",
          "CreateDnsDelegation":"false",
          "DatabasePath":"c:\\Windows\\NTDS",
          "DomainMode":"WinThreshold",
          "DomainName":"testdom.loc",
          "DomainNetbiosName":"TESTDOM",
          "ForestMode":"WinThreshold",
          "InstallDns":"true",
          "LogPath":"c:\\Windows\\NTDS",
          "NoRebootOnCompletion":"false",
          "SysvolPath":"c:\\Windows\\SYSVOL",
          "Force":"true",
          "DRSM":"Recovery1234",
          "DomAcct":"Administrator",
          "DomPwd":"ChangeMe1234",
          "PromptPw":"false"
        },

    Finally.....
    These two posts only scratch the surface of deploying Active Directory on AWS with PowerShell. Additional AD Sites, VPNs, AWS Transit Gateways and AD integration into AWS are some of the topics I hope to cover in the future. For now, thank you for taking the time to read my blog; I truly appreciate it. I hope you found it useful.
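    As promised, a minimal sketch of reading and updating dcPromo.json with the built-in JSON cmdlets, for example to swap out the default password before the script resumes (the file path and new values are illustrative):

      # Load, edit and rewrite the dcPromo.json configuration
      $dcPromo = Get-Content -Path ".\dcPromo.json" -Raw | ConvertFrom-Json
      $dcPromo.FirstDC.DomPwd = 'N3w-Str0ng-Passw0rd!'
      $dcPromo.FirstDC.DRSM   = 'N3w-DSRM-Passw0rd!'
      $dcPromo | ConvertTo-Json -Depth 5 | Set-Content -Path ".\dcPromo.json"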

  • Deploy Domain Controllers with PowerShell and JSON (Part 2) - OU Structure and Delegation

    Welcome Back
    Welcome back to the continuation of our series on deploying Domain Controllers using PowerShell and JSON. If you've been following along with Part 1, you should now have a newly configured Domain Controller with a delegated Organizational Unit (OU) structure in place. If you missed Part 1 of the series, you can access the necessary files by following the provided link or reference (here). This blog will provide an in-depth explanation of the delegation model delivered by PowerShell. It will also delve into the intricacies of the Organizational Unit (OU) structure, the arrangement of nested groups and the various roles assigned.

    Aim of the Game
    The objective is to establish an Organizational Unit (OU) structure that aligns with a clear and consistent delegation model. This approach incorporates well-defined naming standards to enhance comprehensibility and facilitate ease of navigation and management within the structure.

    AD Group Best Practice
    Group management follows Microsoft's best practice of assigning Domain Local groups against the object, e.g. an OU or GPO. The Domain Global group is then added as a 'Member' of the Domain Local group, and the user is added to the Domain Global group as a 'Member'. The naming convention I've persisted with over the years, again from Microsoft, is naming delegation groups 'Action Tasks', a task being an individual permission set, and 'Roles', a role being a collection of tasks or individual permissions.

    AG is an Action Task Global group
    AL is an Action Task Domain Local group
    RG is a Role Global group
    RL is a Role Domain Local group

    Again, something I've persisted with over the years is that groups and OUs are named based on their Distinguished Name (DN). Let's break down an example of a group name: AG_RG_Member Servers_SCCM_Servers_ResGrpAdmin

    AG\AL\RG\RL - Action Task Global, AL for Action Task Domain Local, R for Role
    RG\OU\GPO - Restricted Group, OU or GPO - the type of object delegated
    Member Servers - the top-tier OU name
    SCCM - the application or service, e.g. SCCM or Certificates
    Servers - it's for Computer objects
    ResGrpAdmin - a Restricted Group providing Admin privileges

    ResGrpUser is a Restricted Group providing User privileges.
    CompMgmt: create\delete and modify Computer objects.
    UserMgmt: create\delete and modify User objects.
    GroupMgmt: create\delete and modify Group objects.
    GPOModify: edit GPO settings.
    SvcMgmt: create\delete and modify service account (user) objects.
    FullCtrl: full control over OUs and any child objects.

    A sketch of what this naming convention looks like when creating a group pair with PowerShell follows below.
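      # Minimal sketch (names and OU path are assumptions, not from the script):
      # a Domain Local/Global pair for group management, nested per the practice above
      Import-Module ActiveDirectory
      $ouPath = "OU=AD Tasks,OU=Admin Resources,DC=testdom,DC=loc"   # illustrative path
      New-ADGroup -Name "AL_OU_Member Servers_SCCM_Application Groups_GroupMgmt" `
          -GroupScope DomainLocal -GroupCategory Security -Path $ouPath
      New-ADGroup -Name "AG_OU_Member Servers_SCCM_Application Groups_GroupMgmt" `
          -GroupScope Global -GroupCategory Security -Path $ouPath
      # The Global group nests inside the Domain Local group; users then go into the Global group
      Add-ADGroupMember -Identity "AL_OU_Member Servers_SCCM_Application Groups_GroupMgmt" `
          -Members "AG_OU_Member Servers_SCCM_Application Groups_GroupMgmt"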
    JSON OU Configuration
    Traditionally there are only three tiers; the lower the tier, the less trustworthy:
    Zero = Domain Controllers and CAs
    One = Member Servers
    Two = Clients and Users

    Given that this script can potentially generate numerous levels or hierarchies, it seemed more suitable to avoid the term "tier" and instead label the top-level OUs as "Organisations" for a more meaningful representation. The JSON configuration provided creates an OU structure based on a default layout common to many businesses, where Organisation1 is for Member Servers and Organisation2 is for Clients and Users. In addition, Organisation0 provides an Admin Resources OU for the management of all delegation, role and admin account provision.

    Organisation0 - Admin Resources
    Organisation0 creates a top-level management OU named 'Admin Resources'. This OU serves as the central hub for all delegation and management groups across subsequent Organisations. Each Organisation benefits from having its own dedicated management OU within the Admin Resources OU, where Organisation-specific delegation groups, roles, and admin accounts are created. This approach allows for potential future delegation.

    Admin Accounts Member Servers
    Admin Tasks Member Servers
    Admin Roles Member Servers

      "OU": {
        "Organisation0": {
          "Name":"Admin Resources",
          "Path":"Root",
          "Type":"Admin",
          "Protect":"false",
          "AdministrativeOU":"Administrative Resources",
          "AdministrativeResources": [
            "AD Roles,Group",
            "AD Tasks,Group",
            "Admin Accounts,User"
          ]
        },

    Organisation1 - Member Servers
    Organisation1 represents the typical Member Server OU and is of Type 'Server'. The Server type designates a behavioural difference for assigning policy. AppResources designates the application service OUs that will be created, e.g. Exchange. ServiceResources is used for creating OUs based on a set of standard administrative functions, for example Servers, with the delegation and object type of Computers.

      "Organisation1": {
        "Name":"Member Servers",
        "Path":"Root",
        "Type":"Server",
        "Protect":"false",
        "AdministrativeOU":"Service Infrastructure",
        "AdministrativeResources": [
          "AD Roles,Group",
          "AD Tasks,Group",
          "Admin Accounts,User"
        ],
        "AppResources":"Certificates,MOSS,SCCM,SCOM,File Server,Exchange",
        "ServiceResources": [
          "Servers,Computer",
          "Application Groups,Group",
          "Service Accounts,SvcAccts",
          "URA,Group"
        ]
      },

    Organisation2 - Client Services
    Organisation2 represents the typical User Services OU and is of Type 'Clients'.

      "Organisation2": {
        "Name":"User Services",
        "Path":"Root",
        "Type":"Clients",
        "Protect":"false",
        "AdministrativeOU":"Service Infrastructure",
        "AdministrativeResources": [
          "AD Roles,Group",
          "AD Tasks,Group",
          "Admin Accounts,User"
        ],
        "AppResources":"Clients",
        "ServiceResources": [
          "Workstations,Computer",
          "Groups,Group",
          "Accounts,User",
          "URA,Group"
        ]
      }
    }

    Hundreds and thousands
    It's possible to add further top-level OUs by duplicating an Organisation, then updating the Organisation(*) and Name values, as they need to be unique. It's possible to add hundreds or even thousands of Organisations, and with this possibility in mind, the management and delegation structure reflects this within the design.

    Levels of OU Delegation
    As we delve deeper into the structure of each Organisation, we encounter a hierarchy consisting of three levels of delegation, using Member Servers as an example:
    Organisation = Member Servers (Level 1)
    Application Service = Certificates (Level 2)
    Resources = Computers, Groups, Users and Service Accounts (Level 3)
    OU delegation controls the level of access to manage objects, e.g. creating a Computer or Group object.

    Level 1
    Level 1 is the Organisation level, in this case the Member Servers OU. It's delegated with AL_OU_Member Servers_FullCtrl. The group provides full control over the OU, sub-OUs and all objects within. The arrow serves as an indicator, denoting the point at which the group's application takes effect within the structure.

    Level 2
    Level 2 is the Service Application level, in this case Certificate Services. AL_OU_Member Servers_Certificates_FullCtrl is applied a level below Member Servers and provides full control over itself and any subsequent objects.

    Level 3
    At Level 3, the delegation involves the management of Service Application resources, which includes items such as Server objects and service accounts.
    The four default OUs allow the delegation and management of their respective resource types; for example, the Application Groups OU permits the creation and deletion of Group objects via AL_OU_Member Servers_Certificates_Application Groups_GroupMgmt.

    Application Groups - application-specific groups
    Servers - Server or Computer objects
    Service Accounts - service accounts for running the application services
    URA - User Rights Assignments for services that require LogonAsAService etc.

    Restricted Groups and User Rights Assignment (URA) Levels
    In this delegated model, Restricted Groups facilitate access by allowing administrative access, whilst User Rights Assignments (URA) allow admins or users to log on over Remote Desktop Protocol (RDP). There are two primary levels of organization: the first level encompasses the entire Organisation, including all subsequent Organizational Units (OUs); the second level consists of a dedicated Servers OU for each specific Service Application.

    Level 1 of Restricted Groups
    The GPO GPO_Member Server_RestrictedGroups is linked to the Member Servers OU and has the following groups assigned:

    URA:
    Allow log on through Terminal Services:
    AL_RG_Member Servers_ResGrpAdmin
    AL_RG_Member Servers_ResGrpUser

    Restricted Group:
    Administrators:
    AL_RG_Member Servers_ResGrpAdmin
    Remote Desktop Users:
    AL_RG_Member Servers_ResGrpUser

    This is how it looks when applied in GPO. Within this delegation model, the ability to manage Group Policy Object (GPO) settings is also delegated, via the AL_GPO_Member Servers_GPOModify group, to ensure comprehensive control and management of the environment.

    Level 2 of Restricted Groups
    The GPO GPO_Member Server_Certificates_Servers_RestrictedGroups is linked to the sub-OU Servers under Certificates and has the following groups assigned, those of the Organisation and of the Service Application:

    URA:
    Allow log on through Terminal Services:
    AL_RG_Member Servers_ResGrpAdmin
    AL_RG_Member Servers_ResGrpUser
    AL_RG_Member Servers_Certificates_ResGrpAdmin
    AL_RG_Member Servers_Certificates_ResGrpUser

    Restricted Group:
    Administrators:
    AL_RG_Member Servers_ResGrpAdmin
    AL_RG_Member Servers_Certificates_ResGrpAdmin
    Remote Desktop Users:
    AL_RG_Member Servers_ResGrpUser
    AL_RG_Member Servers_Certificates_ResGrpUser

    This is how it looks when applied in GPO. As above, Group Policy Object (GPO) settings are also delegated, via AL_GPO_Member Servers_Certificates_Servers_GPOModify.

    Bringing it all together with Roles
    In this demonstration, an account named 'CertAdmin01' has been specifically created to oversee the management of resources within the Certificates OU. The account is added to the role group RG_OU_Member Servers_Certificates_AdminRole. Opening the RG_ group and then selecting the 'Member Of' tab displays the nested RL_ group. Drilling down into the RL_ group displays the individual delegated task groups.

    Delegated Admin
    To test the certificate admin (CertAdmin01), deploy an additional server, adding it to the domain and ensuring the computer object is in the Certificate Servers OU. Log in as CertAdmin01 to the new member server and install the GPO Management and AD tools. Browse to Member Servers and then the Certificates OU and complete the following tests (a scripted version of the same checks is sketched at the end of this post):

    Right-click on Application Groups > New > Group
    Right-click on Servers > New > Computer
    Right-click on Service Accounts > New > User
    Right-click on URA > New > Group
    Open Group Policy Management and edit GPO_Member Servers_Certificates_Servers_RestrictedGroups.
    Open compmgmt.msc and confirm that the local Administrators group contains the two _ResGrpAdmin groups and the local Administrator:
    AL_RG_Member Servers_Certificates_Servers_ResGrpAdmin
    AL_RG_Member Servers_ResGrpAdmin

    Confirm that CertAdmin01 is unable to create or manage any object outside the delegated OUs.

    Nearly there..... SCM Policies and ADMX Files
    As part of the delivery and configuration of the OU structure, Microsoft's Security Compliance Manager (SCM) GPOs and a collection of Administrative (ADMX) templates are included.

    SCM GPOs: Microsoft's SCM offers a set of pre-configured GPOs designed to enhance the security and compliance of Windows systems. These GPOs contain security settings, audit policies, and other configurations that align with industry best practices and Microsoft's security recommendations.

    ADMX Templates: ADMX files, also known as Administrative Template files, extend functionality within Group Policy Management, enabling settings for Microsoft and third-party applications. Within a domain, ADMX files are copied to the PolicyDefinitions directory within Sysvol.

    Zipped...
    Both the SCM and ADMX files are zipped and will automatically be uncompressed during the OU deployment. However, if you would like to add your own policies and ADMX files, you can.

    SCM Policy Placement
    The SCM policies are delivered in their default configuration, without any modifications or merging. The policies are placed directly into the designated target directory, imported and linked to their respective OUs. For example, the Member Server directory content will be linked to any OU of type 'Server'. The SCM-imported policies are prefixed with 'MSFT', indicating that they are Microsoft-provided policies; there are a substantial number of them, linked from the root of the domain down to client- and server-specific policies. As far as delegation goes, the SCM policies remain under the jurisdiction of the Domain Admins, with control to effect change delegated via the _RestrictedGroup policies.

    Thank you for taking the time to read this blog. I hope you found the information valuable and that it has been helpful. Your support is greatly appreciated!
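    As promised, a minimal sketch of the same delegation checks from PowerShell, run as CertAdmin01 on a server with RSAT installed; the OU paths follow the structure described above but are assumptions against your own domain:

      # Should succeed: create objects inside the delegated Certificates OU
      $base = "OU=Certificates,OU=Member Servers,DC=testdom,DC=loc"
      New-ADGroup -Name "TestAppGroup" -GroupScope Global -Path "OU=Application Groups,$base"
      New-ADComputer -Name "TESTSRV01" -Path "OU=Servers,$base"
      New-ADUser -Name "svc-test01" -Path "OU=Service Accounts,$base"

      # Should fail with 'Access is denied': creating outside the delegated OU
      try   { New-ADGroup -Name "ShouldFail" -GroupScope Global -Path "OU=User Services,DC=testdom,DC=loc" }
      catch { Write-Warning "Denied as expected: $($_.Exception.Message)" }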

  • Deploy Domain Controllers with PowerShell and JSON (Part 1) - Domain Controllers

    How do you deploy Domain Controllers with PowerShell and JSON? In my experience, while there are numerous Windows Server administration tasks suitable for automation, promoting Domain Controllers or deploying a new Forest is not typically among them. Automating Dcpromo can raise the risk of inadvertently exposing plain-text credentials in scripts, which is far from an ideal situation. Furthermore, such tasks are not frequently performed on a daily basis or repeated regularly as standard business-as-usual (BAU) tasks.

    And Now, for the Thousandth Time, Let's Lab a Domain
    Recently I've been engaged in a fair amount of lab work, involving dismantling and rebuilding domains. One such lab involved using CloudFormation and AWS, deploying a domain via Desired State Configuration, pre-packaged code provided by AWS. After going through the experience, I couldn't help but feel that I could deploy a Microsoft domain setup far more effectively than relying on AWS, and so we're here, and I've a new PowerShell project to keep me amused... enjoy.

    The First of Many
    This is the first instalment of a two-part blog series. In this post, we'll delve into the automated deployment of a domain using PowerShell in tandem with a JSON configuration file. This setup encompasses installing essential features such as DNS and AD, and automatic logins via scheduled tasks. In the second blog, the focus will shift towards the deployment of Organizational Units (OUs) and Group Policy Objects (GPOs) with Restricted Groups and User Rights Assignments, and implementing a comprehensive delegation model.

    The Requirements
    A standalone, non-domain-joined Windows Server 2022 with an active network is required; I'll be using a Hyper-V VM to host it. Testing has been carried out exclusively on Server 2022; the scripts should work with Server 2016 and 2019, but it's important to note that I'm unable to provide any guarantees. Download all the files from GitHub (here) to the server and save them to the Administrator's Desktop; the two zip files will unpack automatically via the script.

    The Important Stuff
    Update DCPromo.json; the hostname of the server must match the "PDCName" value.

      "FirstDC": {
        "PDCName":"DC01",
        "PDCRole":"true",
        "IPAddress":"10.0.0.1",
        "Subnet":"255.255.255.0",

    Either update the passwords in the JSON file or change "PromptPw":"false" to "true". Once set to true, the script will prompt for the password to be entered interactively. Regardless, the password is set in clear text in the Registry to allow autologin, and later removed during the OU configuration.

      "DRSM":"Password1234",
      "DomAcct":"Administrator",
      "DomPwd":"Password1234",
      "PromptPw":"false"

    Any subsequent Domain Controllers can be added; remember that the hostname is the key and the value referenced during deployment.

      {
        "DCName":"DC02",
        "PDCRole":"false",
        "IPAddress":"10.0.0.2",
        "Subnet":"255.255.255.0",
        "DefaultGateway":"10.0.0.254",
        "SiteName":"Default-First-Site-Name",
        "DRSM":"Password1234"
      },

    Elevate PowerShell or ISE to execute DCPromo.ps1.

    Installation of Roles and DCPROMO
    As long as the above criteria are met, Windows Server will install the AD-Domain-Services and DNS Windows features, set the IP, and DCPromo the server to become the first DC in the Forest and the PDC Emulator (a sketch of the underlying cmdlets is included at the end of this post).

    Auto-Restart
    The newly promoted DC will auto-restart twice; this is required to correctly pass domain credentials to execute CreateOU.ps1, the final script.
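    Under the hood, promotion to a new forest comes down to the ADDSDeployment module. A minimal sketch driven by the JSON values, assuming the same key names shown in the AWS version of dcPromo.json (DCPromo.ps1 itself also handles IP addressing, autologin and the restarts):

      # Read the configuration and promote the first DC in a new forest
      $json = Get-Content -Path ".\DCPromo.json" -Raw | ConvertFrom-Json
      $dc   = $json.FirstDC
      Install-WindowsFeature -Name AD-Domain-Services, DNS -IncludeManagementTools
      Install-ADDSForest `
          -DomainName $dc.DomainName `
          -DomainNetbiosName $dc.DomainNetbiosName `
          -DomainMode $dc.DomainMode `
          -ForestMode $dc.ForestMode `
          -InstallDns:([bool]::Parse($dc.InstallDns)) `
          -DatabasePath $dc.DatabasePath `
          -LogPath $dc.LogPath `
          -SysvolPath $dc.SysvolPath `
          -SafeModeAdministratorPassword (ConvertTo-SecureString $dc.DRSM -AsPlainText -Force) `
          -Force:([bool]::Parse($dc.Force))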

  • Ansible with Windows Domains and Kerberos

    Welcome Back
    Hey there! I'm glad to have you back for the third Ansible article. This time, we're diving into using Ansible to manage Windows domains, authenticating with Kerberos.

    Catch up
    If you missed the previous articles covering the setup of Ansible and encrypting the at-rest passwords, make sure to catch up on those first by following these links:
    Basic setup of Ansible managing a standalone Windows Server: https://www.tenaka.net/post/basic-ansible-setup-for-windows
    How to secure the at-rest passwords with Ansible Vault: https://www.tenaka.net/post/ansible-vault-for-windows

    Virtual Machines Required
    Ansible (Ubuntu) = 10.1.1.100
    Domain Controller = 10.1.1.50, FQDN = TENAKA.LOC
    DHCP Server = 10.1.1.1
    Scope Options:
    004 Time Server = 10.1.1.50
    006 DNS Server = 10.1.1.50

    Credentials
    Domain Account = Administrator
    Windows Password = ChangeMe1234
    Ansible Vault Password = Password1234

    Help Yourselves....
    A working set of files for configuring Ansible to manage a Windows domain can be found at the following link, do help yourselves.
    https://github.com/Tenaka/Ansible_Kerberos

    Ubuntu Kerberos Packages
    To ensure the smooth installation of new Ubuntu features, it's important to keep things up to date. From a terminal shell on Ubuntu, execute the following:

      sudo apt-get update -y && apt-get upgrade -y

    Additional packages are required to provide Kerberos user authentication against a Windows domain.

      sudo apt-get install python3-dev libkrb5-dev krb5-user

    Complete the prompts to match your domain. Writing the Fully Qualified Domain Name (FQDN) in capitals is essential. Then enter the host of the PDC followed by the FQDN, again in capitals, and repeat for the administrative server prompt. I've only one Domain Controller (DC); this can be updated later, so it isn't essential, for now add a single DC.

    For other Linux Variants
    If you're using something other than Ubuntu, the link below provides support. I've extracted the relevant commands:
    https://docs.ansible.com/ansible/latest/os_guide/windows_winrm.html
    Through Yum (RHEL/CentOS/Fedora, older versions): yum -y install gcc python-devel krb5-devel krb5-libs krb5-workstation
    Through DNF (RHEL/CentOS/Fedora, newer versions): dnf -y install gcc python3-devel krb5-devel krb5-libs krb5-workstation
    Through Apt (Ubuntu older than 20.04 LTS (Focal)): sudo apt-get install python-dev libkrb5-dev krb5-user
    Through Apt (Ubuntu newer than 20.04 LTS): sudo apt-get install python3-dev libkrb5-dev krb5-user
    Through Portage (Gentoo): emerge -av app-crypt/mit-krb5 && emerge -av dev-python/setuptools
    Through Pkg (FreeBSD): sudo pkg install security/krb5
    Through OpenCSW (Solaris): pkgadd -d http://get.opencsw.org/now && /opt/csw/bin/pkgutil -U && /opt/csw/bin/pkgutil -y -i libkrb5_3
    Through Pacman (Arch Linux): pacman -S krb5

    KrbFive Config
    Let's enhance the readability of the default krb5.conf file and tailor it to our requirements.

      sudo nano /etc/krb5.conf

    Pressing Ctrl + K deletes a line, allowing you to eliminate all lines except those containing domain-specific settings. This section is where you can add extra Domain Controllers (DCs) as kdc entries. To verify Kerberos authentication, we'll use kinit with the following command, ensuring that the FQDN is in capitals:

      kinit administrator@TENAKA.LOC

    Run klist to display the contents of the Kerberos Ticket Granting Ticket (TGT).

    WinRM and GPO
    WinRM (Windows Remote Management) is Microsoft's implementation of the WS-Management protocol, which allows for remote management of Windows-based systems over HTTP(S).
    It enables administrators to remotely execute commands on all permissible computers and servers. To provide WinRM access in a domain environment using GPOs, administrators can configure GPO settings to enable WinRM, define WinRM listeners, specify trusted hosts, configure authentication settings, and set other WinRM-related policies. These policies are then applied to the relevant Organizational Units (OUs), groups, or individual computers within the Active Directory domain.

    Tier Zero and Ansible
    Only the Domain Controller (DC) is being managed remotely, and only for demonstration purposes. Domain Controllers fall under tier zero, as do Certificate Authorities (CAs) and any other service that manages or secures the services previously mentioned. Ansible should not manage these tier-zero services unless other precautions are taken; for instance, consider isolating a dedicated Ansible server specifically tasked with managing tier-zero services.

    Group Policy
    Move to the Domain Controller and open Group Policy Management, creating a new GPO at the root of the domain. Navigate to 'System Services' and set the 'Windows Remote Management (WS-Management)' service to Automatic.

    Create a new 'Inbound' firewall rule with the following settings (a local-equivalent PowerShell sketch is at the end of this post):
    Protocol = TCP
    Port = 5985 and 5986
    Remote IP Address = 10.1.1.100 (Ansible)
    Profile = Domain Only

    Navigate to Administrative Templates, Windows Components, Windows Remote Management (WinRM), then WinRM Service, and set the following:
    Enable - Allow remote server management through WinRM, IPv4 Filter = *
    Disable - Allow Basic authentication
    Disable - Allow CredSSP authentication
    Enable - Allow unencrypted traffic
    Disable - Disallow Kerberos authentication

    Regarding the 'Allow unencrypted traffic' setting: Kerberos encrypts data between client-server communications. Ansible, leveraging Kerberos, doesn't need HTTPS because Kerberos handles encryption and authentication, ensuring secure communication.

    Ansible Config for Kerberos
    If you've been keeping up with the earlier articles on managing Windows with Ansible, create a new directory titled 'Domain' and duplicate hosts.ini, ping.yml, and win.yml into it. Alternatively, the files can be downloaded from:
    https://github.com/Tenaka/Ansible_Kerberos
    If not, launch nano to duplicate the files below, not forgetting to change the hostname to that of your own DC.

    hosts.ini maintains your hosts and variables, including the Ansible Jinja2 variable ansible_password="{{vault_ansible_password}}", which resolves to the value in win.yml.
    ping.yml provides a simple ping test to confirm authentication and network accessibility.

    To create win.yml and encrypt the Windows domain password, execute the following command:

      ansible-vault create win.yml

    Enter the encryption password of 'Password1234' at the prompts, then type 'vault_ansible_password: ChangeMe1234'. Here's what the output looks like with cat.

    To test the playbook against the Domain Controller, execute the following:

      ansible-playbook -i hosts.ini ping.yml --ask-vault-pass

    Enter the vault password of 'Password1234'. The test ping via Ansible using Kerberos authentication was successful, and the world of free management of Microsoft Windows infrastructure is at your feet.

    Final Thoughts
    Implementing Ansible for Windows domain management proved straightforward, requiring minimal adjustments to existing Ansible files and only a few GPO tweaks. In production, avoiding Domain Admin usage and employing delegated service accounts with segregated roles enhances security.
Relying on a single domain admin service account for all tasks would be less than ideal.
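    As promised, a rough local equivalent of the GPO inbound firewall rule described above, as a minimal PowerShell sketch run on the target; GPO remains the right delivery mechanism at scale, and the rule name is illustrative while the IP mirrors this lab:

      # Allow WinRM (HTTP 5985 / HTTPS 5986) from the Ansible server only, Domain profile
      New-NetFirewallRule -DisplayName "Allow WinRM from Ansible" `
          -Direction Inbound -Protocol TCP -LocalPort 5985, 5986 `
          -RemoteAddress 10.1.1.100 -Profile Domain -Action Allow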

  • Basic Ansible Setup for Windows

    Introduction to Ansible
    Welcome to this introduction to managing Windows from Ansible; unlike Microsoft's management solutions, it's free and agentless! Imagine a single tool that automates the setup, configuration, and maintenance of multiple Windows and Linux servers. With its simplicity, Ansible lets you easily orchestrate your server infrastructure. No more manual tasks, no more sleepless nights, just smooth sailing through the seas of automation. Well, it will allow those repetitive tasks to be automated, at least.

    Aims for Ansible
    This article aims to offer straightforward guidance on configuring Ansible for the management of a non-domain-joined Windows Server via the execution of remote tasks. Subsequent articles will expand upon this foundation by incorporating features such as Vault's password management, domain-joined servers, and Kerberos authentication.

    What you will need to download
    Latest Ubuntu Desktop ISO: https://ubuntu.com/download/desktop
    Visual Studio Code for Linux: https://code.visualstudio.com/docs/setup/linux
    Windows WinRM configurator script: https://github.com/AlbanAndrieu/ansible-windows/blob/master/files/ConfigureRemotingForAnsible.ps1
    Ansible documentation: https://docs.ansible.com/ansible/latest/index.html
    Ansible host and YAML files: https://github.com/Tenaka/Ansible/tree/main

    Pick your Linux of Choice (Ubuntu Desktop)
    I'll be opting for my less preferred Linux distribution, Ubuntu Desktop; however, I find it to be the most user-friendly choice for Microsoft-focused engineers. Rocky Linux is a viable alternative, though its configuration might involve additional steps. I won't go into a detailed step-by-step installation of Linux: simply download the ISO, mount it within your preferred VM solution and install, following the default setup.

    Some Sort of Virtualization or Cloud
    I'll be opting for Hyper-V as my preferred virtualization platform to host both Ubuntu and Windows Server 2022. Its seamless integration with both Windows Server and the Windows 11 client eliminates any compatibility or migration concerns I may face moving images between the two. There are two recommended Hyper-V configurations for Linux installation: opt for a Generation 2 VM to enable Secure Boot capability, and within the Security section of the VM, select 'Microsoft UEFI Certificate Authority'. Post-deployment, with the Linux VM powered down, run the following command from PowerShell, selecting the resolution that aligns best with your monitor.

      Set-VMVideo Ansible2 -HorizontalResolution 1900 -VerticalResolution 1200 -ResolutionType Single

    Update Ubuntu
    After successfully deploying Ubuntu, it is crucial to install any updates to ensure the smooth execution of future installations, by running the following command from a shell terminal.

      sudo apt-get update -y && apt-get upgrade -y

    Install Ansible
    Ansible is installed with the following command.

      sudo apt-get install ansible -y

    List the currently installed collections; as you will see, there's support for OS, cloud, network devices and much more.

      ansible-galaxy collection list

    To update the Windows community collection that's installed by default:

      ansible-galaxy collection install community.windows

    To install the latest stable collection maintained by Ansible, run the following:

      ansible-galaxy collection install ansible.windows

    Before continuing, type ip address in the terminal and record the IP for later use.
    Install Microsoft's Visual Studio Code for Linux
    To assist with writing YAML and to minimise the moving of files, Microsoft's Visual Studio Code for Linux will be installed on Ubuntu. If you can't outdo them, it seems the strategy is to join them. Well played, Microsoft. Instructions can be found @ https://code.visualstudio.com/docs/setup/linux for Ubuntu and other distros. For Ubuntu, follow the next set of instructions.

      sudo apt-get install wget gpg
      wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg
      sudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg
      sudo sh -c 'echo "deb [arch=amd64,arm64,armhf signed-by=/etc/apt/keyrings/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main" > /etc/apt/sources.list.d/vscode.list'
      rm -f packages.microsoft.gpg
      sudo apt install apt-transport-https
      sudo apt-get update
      sudo apt-get install code

    Launch Visual Studio Code once it's installed, then create a new directory named Ansible in the Documents directory. That concludes the installation and configuration of Ubuntu and Ansible. Now, let's proceed to the setup of Windows.

    WinRM and Windows Server
    Configuring Windows for remote management from Ansible is a little involved, with instructions available from the Ansible website:
    Windows setup: https://docs.ansible.com/ansible/latest/os_guide/windows_setup.html
    Nevertheless, there exists a pre-configured script accessible on GitHub:
    Windows Ansible configurator script: https://github.com/AlbanAndrieu/ansible-windows/blob/master/files/ConfigureRemotingForAnsible.ps1

    To get up and running with this basic implementation, download 'ConfigureRemotingForAnsible.ps1' and execute the script from PowerShell with administrative rights. A cautionary note: the implemented configuration is open, granting remote WinRM access to any client. To address this, modify lines 417 and 423 by adding the specific remote IP of the Ansible server; in my case, it's 10.1.1.100. This updates the firewall rules from allowing any address to only the one specified.

    10.1.1.1 = Windows Server
    10.1.1.100 = Ubuntu\Ansible

      ln 417  netsh advfirewall firewall add rule profile=any name="Allow WinRM HTTPS" dir=in localport=5986 protocol=TCP action=allow remoteIP=10.1.1.100
      ln 423  netsh advfirewall firewall set rule name="Allow WinRM HTTPS" new profile=any remoteIP=10.1.1.100

    To assess WinRM access from another Windows client, input the following commands in PowerShell, remembering to update the password and the target Windows server IP (10.1.1.1 here) with your system's information. If the Windows Firewall imposes the above RemoteIP restriction, include the test client's IP in the 'Allow WinRM HTTPS' rule's remote scope.

      $username = "administrator"
      $password = ConvertTo-SecureString -String "ChangeMe1234" -AsPlainText -Force
      $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
      $session_option = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
      Invoke-Command -ComputerName 10.1.1.1 -UseSSL -ScriptBlock { ipconfig } -Credential $cred -SessionOption $session_option

    Confirm that the WinRM service is running.

      Get-Service WinRM

    If the WinRM service isn't started, execute the following to set the service to automatic and start it.
      Set-Service -Name WinRM -StartupType Automatic -ErrorAction SilentlyContinue
      Get-Service -Name WinRM | Start-Service

    To get the WinRM configuration, execute the following:

      winrm enumerate winrm/config/listener

      Listener
        Address = *
        Transport = HTTP
        Port = 5985
        Hostname
        Enabled = true
        URLPrefix = wsman
        CertificateThumbprint
        ListeningOn = 10.1.1.1, 127.0.0.1, ::1, fe80::a81e:3b96:6d3b:3d6c%3

      Listener
        Address = *
        Transport = HTTPS
        Port = 5986
        Hostname = WIN-JE1B7QU8B8R
        Enabled = true
        URLPrefix = wsman
        CertificateThumbprint = FC24D87A798ECA4EA8BF4EE0C8CD7FD2CC51A67C
        ListeningOn = 10.1.1.1, 127.0.0.1, ::1, fe80::a81e:3b96:6d3b:3d6c%3

    Ansible Environment
    In Ansible, host files and YAML are crucial in defining and organizing the infrastructure you intend to manage.

    Host Files: a host file in Ansible is where you specify the details of the servers or systems you want to manage. It typically includes information like IP addresses, hostnames, and groupings of hosts based on certain criteria (e.g., development, production). Host files help Ansible understand the inventory of systems it can control, making it an essential component for playbook execution. Without Ansible Vault, passwords are hardcoded in clear text within the hosts file; Vault will be covered in a subsequent article.

      [Windows]
      10.1.1.1

      [Windows:vars]
      ansible_user=administrator
      ansible_password="ChangeMe1234"
      ansible_connection=winrm
      ansible_winrm_scheme=https
      ansible_port=5986
      ansible_winrm_server_cert_validation=ignore
      ansible_kerberos_delegation=false

    YAML (YAML Ain't Markup Language): YAML is a human-readable data serialization format often used for configuration files and data exchange between languages with different data structures. In Ansible, YAML is used to write playbooks, which are scripts that define the tasks to be executed on the managed hosts. It uses indentation to represent data hierarchy, making it easy to read, though writing it can present a bit of a challenge as its hierarchical nature requires the structure to be indented and spaced correctly. In this example, the contents of the Ansible directory are copied to the targeted Windows Administrator's Desktop.

      ---
      - name: Copy
        hosts: Windows
        become: false
        gather_facts: false
        vars:
          source: "/home/user/Documents/Ansible"
          destination: "Desktop/"
        tasks:
          - name: copy ping
            ansible.windows.win_copy:
              src: "{{ source }}"
              dest: "{{ destination }}"

    Host and YAML files play a crucial role in making Ansible configurations clear, structured, and easy to manage: host files define the inventory, while YAML defines the tasks and configurations to be applied to the hosts.

    Host File and Initial Test
    Ensure you're logged on to Ubuntu\Ansible and launch Visual Studio Code. Navigate to '/home/user/Documents/Ansible' and create a file named 'hosts.ini'. Taking the above host file as an example, incorporate the necessary details to match your Windows system and save the file, or download the examples provided:
    https://github.com/Tenaka/Ansible/tree/main

    Let's create the most basic ping test to confirm access to Windows. Create a file named 'ping.yml' and insert the following:

      ---
      - name: Ping Windows Test
        hosts: Windows
        gather_facts: false
        tasks:
          - name: Ping targets
            win_ping:

    Launch a shell, cd to '/home/user/Documents/Ansible', then type and execute the following command:

      ansible-playbook -i hosts.ini ping.yml

    Kudos on acing the Ansible setup for managing Windows!
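    Incidentally, if you'd rather stand up the WinRM HTTPS listener by hand than run the full configurator script, a minimal PowerShell sketch of the same outcome, using a self-signed certificate (the certificate subject is illustrative):

      # Create a self-signed cert and bind a WinRM HTTPS listener to it
      $cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME -CertStoreLocation Cert:\LocalMachine\My
      New-Item -Path WSMan:\localhost\Listener -Transport HTTPS -Address * `
          -CertificateThumbPrint $cert.Thumbprint -Force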
    File Copies To and Fro
    Before delving into the YAML files, it's essential to acquaint yourself with the following path rules. Windows paths should be written in the following formats.

    Good:
      tempdir=C:\\Windows\\Temp

    Works:
      tempdir='C:\\Windows\\Temp'
      tempdir="C:\\Windows\\Temp"

    Bad, but sometimes works:
      tempdir=C:\Windows\Temp
      tempdir='C:\Windows\Temp'
      tempdir="C:\Windows\Temp"
      tempdir=C:/Windows/Temp

    Fails:
      tempdir=C:\Windows\temp\
      tempdir='C:\Windows\temp\'
      tempdir="C:\Windows\temp\"

    This playbook copies the contents of the Ansible directory to the Desktop of the target Windows server.

      ---
      - name: Copy
        hosts: Windows
        become: false
        gather_facts: false
        vars:
          source: "/home/user/Documents/Ansible"
          destination: "Desktop/"
        tasks:
          - name: copy ping
            ansible.windows.win_copy:
              src: "{{ source }}"
              dest: "{{ destination }}"

    This playbook copies a named file from the Windows Desktop up to the Ansible directory using 'fetch'.

      ---
      - name: Copy
        hosts: Windows
        become: false
        become_user: false
        gather_facts: false
        vars:
          source: "Desktop/test1.txt"
          destination: "/home/user/Documents/Ansible/test1.txt"
        tasks:
          - name: copy ping
            ansible.builtin.fetch:
              src: "{{ source }}"
              dest: "{{ destination }}"

    Further guidelines can be found @ https://docs.ansible.com/ansible/latest/os_guide/windows_usage.html

    Basic Commands
    This concludes the introduction, by running command lines on the designated Windows server and saving the results to text files.

      ---
      - name: cmds
        hosts: Windows
        become: false
        gather_facts: false
        tasks:
          - name: some cmd
            win_command: cmd.exe /c whoami.exe > "Desktop\whoami.txt"
          - name: ipconfig
            win_command: cmd.exe /c ipconfig /all > "Desktop\ipconfig.txt"

    Finally Done!
    Thanks for your time reading this intro to managing Windows from Ansible. Creating each article demands time and effort, diverting me from other learning pursuits. Your comments and shares are highly valued and greatly appreciated. Finally, a big shout-out to Harv for opening my eyes to a life beyond SCCM.

  • Ansible Vault for Windows

Welcome Back

Hey there! Glad to have you back for the second Ansible article. This time around, we're diving into Ansible Vault and how to keep those Microsoft Windows passwords safe by encrypting them whilst they are at rest. If you missed the last article on setting up Ansible and handling some basic tasks on a non-domain-joined Windows Server, make sure to catch up on that first by following this link:

https://www.tenaka.net/post/basic-ansible-setup-for-windows

What is Ansible Vault

Ansible Vault is a feature that allows users to encrypt sensitive information, such as passwords and secret keys, within Ansible playbooks and files. This encryption ensures that the secrets are secure while they are at rest. To encrypt a secret, use the "ansible-vault encrypt" command followed by the name of the file, or "ansible-vault encrypt_string 'Secret'" followed by the name to be assigned to the secret. You'll then be prompted to enter and confirm a password or passphrase. Once encrypted, the secret is stored in a format that is unreadable without the decryption key, providing a secure way to protect sensitive information within Ansible projects. Ansible Vault uses AES symmetric encryption, with the same password or passphrase used for both encryption and decryption.

Basic Commands

Below are a few fundamental commands for utilizing Ansible Vault:

Create an encrypted file
ansible-vault create newFile.yml

Encrypt an existing file
ansible-vault encrypt existingFile.yml

View encrypted content of a file
ansible-vault view existingFile.yml

Edit the encrypted file
ansible-vault edit existingFile.yml

Decrypt an encrypted file
ansible-vault decrypt existingFile.yml

Change the password that encrypts\decrypts the secret (rekeying)
ansible-vault rekey existingFile.yml

Create an encrypted string
ansible-vault encrypt_string 'ChangeMe1234' --name ansible_password

Help Yourselves....

A working set of files deploying ansible-vault with encrypted secrets can be found at the following link, do help yourselves:

https://github.com/Tenaka/Ansible_Encrypted_Password

Set Nano as the Default Editor

To avoid ansible-vault opening new files with vi, let's designate Nano as the default editor. Type 'select-editor' and then choose option 1.

Let's prove it works before Encrypting

I won't immediately introduce encrypted passwords into the mix. Instead, we'll set up and test the files using plain text passwords. Later, I'll encrypt them; this will aid in troubleshooting.

Ansible Jinja2 is a templating engine used to create dynamic content within Ansible playbooks. It allows for the use of variables, conditionals, loops, and filters to customize configurations based on the environment or data. The ansible_password="{{vault_ansible_password}}" is one such example; it's used in the hosts.ini file and resolves to the value in win.yml.

If you have been following along, Visual Studio Code for Linux is installed; if not, nano will suffice. First, navigate to the Ansible directory previously created under the Documents directory and execute the following command:

mkdir win-encrypt

Change directory (cd win-encrypt) into the directory and create the following 3 files: hosts.ini, ping.yml and win.yml. This will provide a simple ping test to the Windows Server on 10.1.1.1 with the Administrator account and a password of 'ChangeMe1234'. Ensure that 'ping.yml' adheres to the YAML framework or a whole world of pain and 'why aren't you working' will ensue.
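Indentation mistakes are the usual culprit, so it can be worth a dry run with Ansible's built-in syntax checker before executing anything; a minimal check, run from the win-encrypt directory against the files created below:

ansible-playbook -i hosts.ini ping.yml --syntax-check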
The "no_log: true" parameter in Ansible is used to prevent sensitive data, such as passwords or API keys, from being displayed in the console output or logged to files. Including it now will make life difficult, so wait until everything is fully working.

hosts.ini

[win]
10.1.1.1

[win:vars]
ansible_user=administrator
ansible_connection=winrm
ansible_password="{{vault_ansible_password}}"
ansible_winrm_scheme=https
ansible_port=5986
ansible_winrm_server_cert_validation=ignore
ansible_kerberos_delegation=false

ping.yml

---
- name: Ping win Test
  hosts: win
  gather_facts: false
  vars_files:
    - win.yml
  tasks:
    - name: Ping targets
      win_ping:
      no_log: true

win.yml

vault_ansible_password: ChangeMe1234

Execute the following command to test the use of the clear text password:

ansible-playbook -i hosts.ini ping.yml

Let's get it Encrypted

Once we've confirmed the clear text password works, we can proceed to encrypt the win.yml file using the following command:

ansible-vault encrypt win.yml

Enter the password used for encrypting the file; I'm using the ultra-secure 'Password1234'. In production don't do this.....

Confirm the win.yml is encrypted with 'cat win.yml'. It should now show a $ANSIBLE_VAULT;1.1;AES256 header followed by blocks of hex rather than the clear text password.

Type the following command to test accessing Windows using the encrypted vault file:

ansible-playbook -i hosts.ini ping.yml --ask-vault-pass

Enter the password 'Password1234' at the prompt.

Alternative Method to Encrypt the Password

Another way to encrypt the password is by utilizing the encrypt_string option. Type the following command, directing the output to winString.yml:

ansible-vault encrypt_string 'ChangeMe1234' --name vault_ansible_password > winString.yml

I then renamed the existing win.yml, and then renamed winString.yml to win.yml using the mv command.

This is a Bad Idea.......

Once we've secured the Windows passwords and grown weary of the password prompts, or the playbooks are to be scheduled, we'll embed the ansible-vault password into a plaintext file, undoing our previous efforts. I've rooted enough Linux boxes to know this is a bad idea. However, today is all about encrypting the Windows passwords whilst at rest.

Vault Password File

Here we go, create a file named 'key' in the root of the Ansible directory and enter the vault password of 'Password1234':

nano ../key

Secure the key file to allow the owner Read and Write access:

chmod 600 ../key

Execute the playbook, swapping out --ask-vault-pass for --vault-password-file ../key:

ansible-playbook -i hosts.ini ping.yml --vault-password-file ../key

Alternatively, if you prefer not to use --vault-password-file, create an ansible.cfg file within the win-encrypt directory using Nano, and input the details below.
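A minimal sketch of what that ansible.cfg might contain, assuming the 'key' file sits one directory above win-encrypt as created earlier:

[defaults]
# Path to the vault password file, read automatically so no --vault-password-file switch is needed
vault_password_file = ../key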
Run the playbook again, this time without the vault password prompt or the --vault-password-file switch.

Final Thoughts

That wraps up this guide on employing ansible-vault to secure Windows passwords while they're at rest. While Ansible Vault effectively secures Windows passwords, its effectiveness is compromised by storing the vault password in plain text. Despite its encryption capabilities, this vulnerability underscores the importance of implementing additional security measures, or another product alongside ansible-vault, to manage secrets effectively. Maybe that should be the aim of the next article; it's that or Ansible managing domain computers with Kerberos. Drop a comment and let me know?

Thank you for taking the time to read this article; your feedback, comments, and shares are immensely valued and deeply appreciated.

  • MDT with SQL Database Access. Issues (ZTI Error opening SQL Connection)

Microsoft's Deployment Toolkit (MDT) supports integration with SQL Server, providing far better control over deployment options, e.g. Client A gets Task Sequence 1, whereas Client B gets Task Sequence 2, and both are assigned their respective static IPs. Previously I completed a comprehensive series on deploying MDT (here), including SQL Server Express integration and bulk import of client data into SQL.

In this article, I'll address common connection issues that may arise between MDT and SQL Server and how to fault-find those issues. If you followed the guides, the subsequent steps are likely unnecessary. Nevertheless, it is beneficial to offer guidance on diagnosing connection issues. The current MDT server is equipped with SQL, but in my haste, I had overlooked certain post-integration steps. As a result, there is a noticeable delay at the 'CSettings' stage during the initial WinPE for client deployment.

Certain prerequisites must be met, including the establishment of a functional MDT server and the installation and configuration of SQL Express with the necessary connection settings listed in CustomSettings.

PXE boot a client to the point where it's possible to select a Task Sequence. As WinPE offers limited diagnostic functionality and tools, it's back to basics with Notepad and logs.

Press F8 to access the command prompt, then cd to 'X:\MININT\SMSOSD\OSDLogs\' or execute the following command:

Notepad X:\MININT\SMSOSD\OSDLogs\ZTIGather.log

Near the bottom of the log, search for SQL connection errors:

ZTI error opening SQL Connection: Unable to establish database connection using [CSETTINGS] properties

If you are not aware, SQL uses the SQL Browser service on port UDP 1434 for application communications. Two potential issues warrant investigation. First, verify that the SQL Browser service is configured to start automatically by accessing services.msc. The second issue involves checking UDP port 1434 in the inbound firewall rules. If you prefer to confirm the port, proceed with the following steps.

Utilize either wf.msc or gpedit.msc to set up Windows Firewall Public profile logging for dropped packets only. Restart and PXE boot the client to the Task Sequence window.

On the MDT server, launch Notepad with administrative permissions and open:

C:\Windows\System32\Logfiles\Firewall\PFirewall.log

Search for the IP of the client and note the dropped packets on 1434.

While on the MDT server, launch either gpedit.msc or wf.msc and add an inbound UDP rule to allow port 1434.

Return to the client, restart, and then review the ZTIGather.log as previously demonstrated. The error is pretty self-explanatory: the MDT service account requires login and access rights to the MDT SQL database.

Switch to the MDT server and open SQL Server Management Studio. Browse to Security, then Logins. Right-click on Logins and select 'New Login'. If you followed the preceding installation guides, you likely created a service account to grant access to the MDT share; its credentials are listed in CustomSettings.ini and BootStrap.ini. Add this account as a Windows login to SQL. Adjust the User Mapping by granting db_datareader access to the MDT database for the service account.

Review the ZTIGather.log after restarting the client for a final time and confirm the successful access to SQL. The settings for clients included in the MDT database will now take precedence over CustomSettings.
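For reference, the inbound rule can also be created from an elevated PowerShell prompt on the MDT/SQL server instead of wf.msc; a minimal sketch, with the display name being an arbitrary choice:

# Allow the SQL Browser service to receive instance-resolution requests from WinPE clients
New-NetFirewallRule -DisplayName 'SQL Browser (UDP 1434 In)' -Direction Inbound -Protocol UDP -LocalPort 1434 -Action Allow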

  • PowerShell Code Signing with a Self-Signed Certificate

Hey PowerShell enthusiasts! Ever wondered how to beef up your script security when not every system gets the luxury of a Certificate Authority (CA)? Imagine your scheduled management scripts getting messed around by that one admin who loves to tinker, or worse, some bad actors. Today, let's tackle that risk head-on! We're diving into the world of self-signed certificates and code signing to keep your scripts safe and sound.

Creating self-signed certificates for PowerShell script validation involves generating digital certificates locally, without relying on a Certificate Authority (CA). Using PowerShell's New-SelfSignedCertificate cmdlet, parameters like Subject and KeyUsage are specified. This process provides script integrity through code signing. Once created, the certificate can be used to digitally sign scripts with the Set-AuthenticodeSignature cmdlet, providing a level of assurance about the script's legitimacy and origin. While self-signed certificates lack third-party validation, they boost script security by mitigating the risks of unauthorized changes. Still, be cautious; mishandling self-signed certificates could introduce vulnerabilities. Properly document and securely distribute certificates to maintain signed PowerShell script integrity in controlled environments.

This guide is geared towards Active Directory Domains lacking a CA and DevOps keen on signing their PowerShell scripts. Don't worry; we're all about good practices here! To get started, make sure you have an offline Windows Server for crafting your self-signed certificate (a Windows 11 client should also work, though not extensively tested), and a separate client for testing the signed scripts, with Admin access for tweaking Group Policy and importing certificates into the local machine store.

Less chat, more script.....

Certificate Server

Here are the key snippets from the script - the ones that matter. The script is downloadable from GitHub:

https://github.com/Tenaka/Self-Signed-Certificates

Declare the working directories; either create the directories or allow the script to, not forgetting to add scripts that need signing to "C:\_PSScripts\".

$certExport = "C:\_Certs\"
$ScriptRepo = "C:\_PSScripts\"

Set parameters.

$params = @{
    Subject = 'Self Signed PS Code Signing'
    DnsName = 'Self@Tenaka.net'
    FriendlyName = 'Self Signed PS Code Signing'
    NotAfter = (Get-Date).AddYears(5)
    Type = 'CodeSigning'
    CertStoreLocation = 'cert:\CurrentUser\My'
    KeyUsage = 'DigitalSignature'
    KeyAlgorithm = 'RSA'
    KeyLength = 2048
    HashAlgorithm = 'sha256'
}

Create a new self-signed certificate based on the above parameters and send the details to the 'newCodeSigningCert' variable for reference later.

New-SelfSignedCertificate @params -OutVariable newCodeSigningCert

Export the public key to the file system.

Export-Certificate -Cert "cert:\CurrentUser\My\$($newCodeSigningCert.Thumbprint)" -FilePath "$($certExport)\CodeSigning.cer"

Re-import the certificate into Trusted Root, otherwise it's not possible to validate any signed scripts.

Import-Certificate -FilePath "$($certExport)\CodeSigning.cer" -Cert Cert:\LocalMachine\root

Sign all scripts in C:\_PSScripts using a foreach loop.

$gtPSscripts = Get-ChildItem -Path $ScriptRepo -Filter *.ps1 -Recurse -Force
foreach ($PSscriptItem in $gtPSscripts)
    {Set-AuthenticodeSignature $PSscriptItem.FullName -Certificate (Get-ChildItem "cert:\CurrentUser\My\$($newCodeSigningCert.Thumbprint)" -CodeSigningCert)}

And there you have it! Snag those signed scripts and the exported certificate (.cer), then copy them over to the test client.
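It's worth confirming the signatures took before moving anything; a quick check on the certificate server, assuming the scripts are still under the $ScriptRepo path declared above:

# Report the signature status of every script; freshly signed files should show Valid
Get-ChildItem -Path $ScriptRepo -Filter *.ps1 -Recurse |
    ForEach-Object { Get-AuthenticodeSignature -FilePath $_.FullName } |
    Select-Object Path, Status, StatusMessage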
Easy peasy! Check out any of the signed scripts, and you'll spot a signature block appended to the script.

# SIG # Begin signature block
# MIIFrQYJKoZIhvcNAQcCoIIFnjCCBZoCAQExCzAJBgUrDgMCGgUAMGkGCisGAQQB
# gjcCAQSgWzBZMDQGCisGAQQBgjcCAR4wJgIDAQAABBAfzDtgWUsITrck0sYpfvNR
# vhJhRK4rqe9AhAcGnbPDQg37+EgaN93UzTn2YIOVmbFrQcOwQfDJEzzVOrkLKJdX
# yjdMD070/gJajAELBJDoxsY=
# SIG # End signature block

Test the Signed Scripts on a Client

Let's assume the freshly signed scripts and certificate file reside in the same directories. Now open PowerShell with admin rights and execute the following commands.

Declare the working directories.

$certExport = "C:\_Certs\"
$ScriptRepo = "C:\_PSScripts\"

Import the certificate into the Trusted Root LocalMachine certificate store.

Import-Certificate -FilePath "$($certExport)\CodeSigning.cer" -Cert Cert:\LocalMachine\root

Import the certificate into the Trusted Publishers LocalMachine certificate store as well, to prevent prompts like the following when executing the scripts:

Do you want to run software from this untrusted publisher?
File C:\_PSScripts\gwmi-signed.ps1 is published by CN=Self Signed PS Code Signing and is not trusted on your system. Only run scripts from trusted publishers.
[V] Never run [D] Do not run [R] Run once [A] Always run [?] Help (default is "D"): A

Import-Certificate -FilePath "$($certExport)\CodeSigning.cer" -Cert Cert:\LocalMachine\TrustedPublisher

Launch the Group Policy Editor (gpedit.msc). Browse to Computer Configuration, Administrative Templates, Windows Components, Windows PowerShell. Enable 'Turn on Script Execution', select 'Allow only signed scripts' in the drop-down and click OK. Run 'gpupdate /force' to apply the settings.

If your scripts have a digital signature using your own certificate, they'll run smoothly in PowerShell. But the ones that aren't signed won't work. Perfect script security... mostly.

Scripts that are signed and then updated without re-signing won't run either, and you'll receive the error below.

.\gwmi-signed.ps1
.\gwmi-signed.ps1 : File C:\_PSScripts\gwmi-signed.ps1 cannot be loaded. The file C:\_PSScripts\gwmi-signed.ps1 is not digitally signed. You cannot run this script on the current system. For more information about running scripts and setting execution policy, see about_Execution_Policies.

Bypassing the Execution Policy from PowerShell isn't possible.

Set-ExecutionPolicy -ExecutionPolicy Bypass

Execution Policy Change
The execution policy helps protect you from scripts that you do not trust. Changing the execution policy might expose you to the security risks
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): y
Set-ExecutionPolicy : Windows PowerShell updated your execution policy successfully, but the setting is overridden by a policy defined at a more specific scope. Due to the override, your shell will retain its current effective execution policy.

ReadMe: PowerShell_ISE doesn't impose any limitations or restrictions. Unlike other environments, it doesn't enforce the Execution Policy, allowing the execution of any script, whether signed or not.
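To see exactly which scope is enforcing the signing requirement (and why behaviour can differ between hosts), list the effective policy at every scope:

# MachinePolicy and UserPolicy entries come from Group Policy and override the rest
Get-ExecutionPolicy -List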
Keep it Secret, Keep it Safe

A PFX certificate, also called PKCS#12 or P12, is a file format used for keeping and moving cryptographic material like private keys and their matching public key certificates. It provides a secure way to store and share these sensitive elements. A PFX file typically includes:

Private Key
Public Key Certificate
Certificate Chain
Password Protection

Once you use the New-SelfSignedCertificate command, the resulting certificate comes with both the public and private keys and can be exported as a PFX file containing the private key - basically, the whole shebang. That's why it's crucial to keep the signing server offline and well-guarded. It's also a good idea to back up the certificate, just for safety or to migrate to another host. The following commands will do just that.

Create a secure string password.

$CertPassword = ConvertTo-SecureString -String "ChangeME1234" -Force -AsPlainText

Export the private key as a pfx and password protect it.

Export-PfxCertificate -Cert "cert:\CurrentUser\My\$($newCodeSigningCert.Thumbprint)" -FilePath "$($certExport)\selfsigncert.pfx" -Password $CertPassword
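Restoring that backup on a replacement signing host is the reverse operation; a minimal sketch, assuming the pfx and secure string password from above:

# Re-import the password-protected private key into the personal store of the new host
Import-PfxCertificate -FilePath "$($certExport)\selfsigncert.pfx" -CertStoreLocation Cert:\CurrentUser\My -Password $CertPassword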
Happy scripting! Remember, signing your PowerShell scripts with a self-signed certificate adds an extra layer of security to your code. Stay vigilant, keep those scripts locked and loaded with your personalized signature, and code on with confidence! Thanks for your time, really appreciate it! Take care and goodbye!

  • Intel NUC as a Home Lab Server

Sweating the Assets

It's time to bid farewell to the ageing NUC hardware. The current NUCs are from the 5th and 6th generations, dating back to 2016, and they've been in constant operation since their initial deployment. These systems are now struggling to keep up with the demands placed on them, especially NUC2, which regularly maxes out its CPU as it valiantly attempts to handle the workload of running SCCM and SCOM. There's a little nod to one of the best Syfy series ever, cruelly cut short; comment below if you know the name of the series.

What's a NUC

The Intel NUC (Next Unit of Computing) is an ideal choice for home labs due to its compact form factor, versatility, high performance and energy efficiency. Depending on the variant, this miniature PC can pack a powerful punch, ranging from a lowly i3 to an i9 processor and a dedicated GPU in the form of the Intel Raptor Extreme, making it perfect for various lab setups and experimentation.

Windows Server and Hyper-V

I'm pretty agnostic as long as it's Microsoft, only kidding. Deploying Windows Servers as Hyper-V hosts in a home lab environment offers several advantages and a few disadvantages. The key advantages are:

Multipurpose Functionality: Hyper-V hosts can serve as versatile servers, not limited to just virtualization. They can join the domain, be managed via System Center Configuration Manager (SCCM), and be monitored through System Center Operations Manager (SCOM).

DFS Replication: Hyper-V hosts can host Distributed File System Replication (DFSR) File Servers for replicating user and group shares, enhancing data redundancy and availability.

Deduplication: The virtual machines running on Hyper-V hosts can take advantage of deduplication, which helps save storage space by eliminating redundant data.

However, there are some disadvantages to consider:

Complexity: Managing enterprise-level services, such as SCCM and SCOM, can be complex and may require significant setup and maintenance effort, even in a home lab environment.

Cost: Subscribing to Microsoft's Action Pack, so servers don't time-bomb after 90 days, inflicts an annual cost of £450. Luckily for me, the company picks up the cost; this is not an option for everyone.

Intel NUC 13 Hardware

I acquired the new Intel NUC from www.scan.co.uk due to its competitive pricing, which proved to be a bit more budget-friendly in comparison to other websites. The hardware acquired includes a 2TB Samsung 990, which might be a bit overkill for running the Windows OS and possibly hosting a virtual Domain Controller. In contrast, the 4TB 870 is intended to accommodate the bulk of the virtual machines (VMs).

LN1359491 - Intel Arena Canyon i7 Tall NUC = £569.99
LN1192071 - 2x32GB Corsair Vengeance = £119.99
LN130047 - 2TB Samsung 990 PRO M.2 SSD = £161.99
LN1136891 - Samsung 4TB 870 EVO 2.5 = £189.98

Here's a quick how-to for installing the components:

Install the Vengeance RAM and the Samsung 990 Pro after carefully removing the base.
Remove the 4 rubber grommets from the base.
Slot in the 4TB 870 EVO 2.5, connecting it to the SATA interface.
Using the supplied screws, secure the 2.5 SSD.

Windows Server 2022 Installation Media

Creating Windows boot media involves preparing a USB drive that can be used to install a Windows operating system. The initial and critically important step is to download the latest firmware and drivers, which you can access by following the provided link below.
https://www.intel.com/content/www/us/en/products/sku/233114/intel-nuc-13-pro-board-nuc13anbi7/downloads.html

It seems that the drivers included for the Intel NUC 13 Pro aren't compatible with Windows Server 2022. However, the Intel LAN drivers tailored for the Intel 12th Gen NUC do work:

Intel LAN-Win11-1.1.3.34

As an optional step, you can download the latest Windows Server 2022 Cumulative Update and copy it to the USB pen. This ensures that the most recent Windows patches are applied once network connectivity is established.

USB Preparations

You can download Windows media in the form of an .iso file from Microsoft or the Partner site at a cost of £450 per year (includes many other benefits).

Double-click the iso to mount it on your computer.
Copy the entire contents of the mounted image to an empty USB drive.
Don't forget to include the necessary drivers and firmware files on the USB drive as well.

Windows Server Installation

Once you've connected the NUC's power supply, KVM, and the network, insert the USB pen with the bootable Windows installation files and drivers. Then, power on the NUC.

Windows will boot; follow the installation prompts. At the point of selecting the disk, ensure it's the Samsung 990. I'm going to split the 2TB and allocate 120GB to the Windows OS partition.

Set the Administrator password at the prompt and then log on.
Install the drivers, firmware and any additional patches, rebooting where necessary.
Run 'diskmgmt.msc' to create any required partitions and assign drive letters.
Run 'sysdm.cpl' and enable Remote Desktop access, allowing the NUC's KVM to be disconnected.

Drivers for the Onboard NIC

Now to resolve the connectivity issues and install the network drivers.

At the Run command, type 'devmgmt.msc', select the network device and update drivers.
Select 'Browse my computer for drivers'.
Select 'Let me pick from a list of available drivers on my computer'.
Select 'Have Disk...' and then browse to the Intel NIC drivers for the NUC Gen 12.
Select the 'Killer E3100 2.5 Gigabit Ethernet Controller'.
Select 'Yes' to the warning.

Either set an IP address or allow DHCP to automatically assign one. Check for Windows Defender and any other missing updates.

Hyper-V Setup

Open PowerShell as Administrator (elevated) and execute the following command to install Hyper-V:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

Once restarted, configure the following Hyper-V settings (a scripted sketch follows the migration note below):

Create a new 'External' virtual switch, allowing management operations.
Set the Virtual Hard Disks and Virtual Machines paths to point to the 4TB 870 partition, mine's on Z:\.
Enable both Enhanced Session Mode check boxes.

The NUC will be joined to the Domain with the LAPS, SCCM and SCOM agents installed automatically. The process of migrating VMs from the old NUCs is quite straightforward: begin by removing any snapshots and shutting down the VMs, then perform a direct network copy of the VMs' directory structure to Z:\VM, followed by importing the VMs.
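A minimal sketch of those Hyper-V settings and the final import; the adapter name 'Ethernet', the Z:\VM paths and the <GUID> placeholder are illustrative and will differ per host:

# Create the external switch, retaining a management vNIC on the host
New-VMSwitch -Name 'External' -NetAdapterName 'Ethernet' -AllowManagementOS $true

# Default new VM configs and VHDs to the 4TB 870 partition and enable Enhanced Session Mode
Set-VMHost -VirtualMachinePath 'Z:\VM' -VirtualHardDiskPath 'Z:\VM' -EnableEnhancedSessionMode $true

# Register a copied VM in place from its configuration file
Import-VM -Path 'Z:\VM\DC01\Virtual Machines\<GUID>.vmcx'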
Thanks For Your Time

Thank you for taking the time to read this blog about the new Intel NUC for my home lab. We hope this information has been valuable. Stay tuned for more tech updates, and feel free to reach out if you have any questions or need further assistance.

  • Delegation of DNS with PowerShell

DNS Delegation

DNSAdmins is a default security group in Active Directory that delegates administrative control over DNS zones and some DNS server settings to a specific user account or group. Members of this group have permission to manage DNS zones and records and to configure DNS server settings, including Forwarders etc. However, it may not be desirable to delegate the entire DNSAdmin permission set to a user via DNSAdmins, and a more targeted approach of delegating zone management or creation could be necessary.

The script (here) creates the required groups to delegate DNS server management, the ability to create and delete zones, and finally zone management. Group names will be prefixed with either DNSServer or DNSZone; where 'MicrosoftDNS' is used, the group defines a top-level permission. Also, the AD groups follow the suggested Microsoft naming convention of 'AT' or Action Task. Here are a few examples:

AT_DNSServer_MicrosoftDNS_Manage is defined as the ability to change settings for the DNS server, e.g. create Forwarders or scavenging.
AT_DNSZone_MicrosoftDNS_Manage is defined as the ability to create and delete zones, but not change any DNS server settings.
AT_DNSZone_Microsoft.com_Manage is defined as the ability to manage the Microsoft.com DNS zone.

Note: the DNSAdmins group on its own does not have enough permissions and requires Server Operators, Administrators for the Domain or Domain Admins - basically, local administrative rights over Domain Controllers.

Setup

The setup is pretty straightforward: a virtual Domain Controller and a Member Server, plus an OU for the delegated groups with a pre-existing group named AT_Server_User. This provides login via a user account to the Member Server through the Remote Desktop User Rights Assignment and the delegated DNS group(s).

Update the Member Server OU GPO with the following changes:

Create 'Restricted Groups' for Administrators and add AT_Server_Admin.
Create 'Restricted Groups' for Remote Desktop Users and add AT_Server_User.
Add both Remote Desktop Users and AT_Server_User to the 'Allow log on through Remote Desktop Services' User Rights Assignment.

Create a user account and add it to the AT_Server_User group.

Deploy the DNS delegation script (here) with Domain Admin rights on the Domain Controller. After executing the script, the delegation OU should be similar to the picture below, with groups for both forward and reverse zones and 2 default MicrosoftDNS groups.

DNS Server Delegation

Members of AT_DNSServer_MicrosoftDNS_Manage are able to connect to DNS and manage server settings, but not create, delete or manage any existing zone. Due to the issue of requiring administrative rights on Domain Controllers, not all settings can be managed. Settings for interface options, DNSSEC or trust points require further rights; most other DNS configuration options are available. All DNS delegation groups require a minimum of READ to connect via the DNS snap-in. DNS server permissions can be found under System, MicrosoftDNS in dsa.msc.

DNS Zone Creation and Deletion

To create and delete zones, open ADSI Edit and connect to 'dc=domaindnszones,dc=fqdn'. Full control for AT_DNSZone_MicrosoftDNS_Manage is set against CN=MicrosoftDNS without inheritance.

DNS Zone Management

Finally, each zone is delegated to a named DNS zone group. Use ADSI Edit and connect to the 'default naming context' to browse to each zone and interrogate its permissions.
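To confirm the delegation landed, the ACL on a zone object can also be read back with PowerShell; a minimal sketch, assuming the RSAT ActiveDirectory module and a hypothetical tenaka.net zone held in the DomainDnsZones partition:

Import-Module ActiveDirectory

# Read the zone object's ACL and show only the delegated AT_ groups
$zoneDn = 'AD:\DC=tenaka.net,CN=MicrosoftDNS,DC=DomainDnsZones,DC=tenaka,DC=net'
(Get-Acl -Path $zoneDn).Access |
    Where-Object { $_.IdentityReference -like '*AT_DNS*' } |
    Select-Object IdentityReference, ActiveDirectoryRights, AccessControlType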
