- Failure Deploying Applications with SCCM\MECM with Error 0x87d01106 and 0x80070005
I encountered an issue with SCCM\MECM failing to deploy the LAPS application to clients and servers. This had previously worked fine but was now failing with a Past Due error in Software Center. The AppEnforce.log produced the only meaningful SCCM error events, 0x87d01106 and 0x80070005.

0x80070005
CMsiHandler::EnforceApp failed (0x80070005).
AppProvider::EnforceApp - Failed to invoke EnforceApp on Application handler(0x80070005).
CommenceEnforcement failed with error 0x80070005.
Method CommenceEnforcement failed with error code 80070005
++++++ Failed to enforce app. Error 0x80070005. ++++++
CMTrace Error Lookup reported 'Access denied'.

0x87d01106
Invalid executable file C:\Windows\msiexec.exe
CMsiHandler::EnforceApp failed (0x87d01106).
AppProvider::EnforceApp - Failed to invoke EnforceApp on Application handler(0x87d01106).
CommenceEnforcement failed with error 0x87d01106.
Method CommenceEnforcement failed with error code 87D01106
++++++ Failed to enforce app. Error 0x87d01106. ++++++
CMTrace Error Lookup reported 'Failed to verify the executable file is valid or to construct the associated command line. Source: Microsoft Endpoint Configuration Manager'.

Interestingly, testing revealed that .msi applications, configuration items (aka compliance) and WDAC policy were affected, while .exe deployments remained unaffected. Executing the install string from the administrator account also worked. This was somewhat concerning: SCCM deployments execute as System, the highest privilege possible, yet all application installs were failing across the entire domain. At this point Google is normally your friend, but the results pointed at PowerShell and the wrong user context for an msi issue; these suggestions were not helpful. Clearly, I was asking the wrong question. When in doubt, or stuck, trawl the event logs; the SCCM logs weren't going to give up anything further. Fortunately, in fairly short order the following errors were located in the Windows Defender log.
Microsoft Defender Exploit Guard has blocked an operation that is not allowed by your IT administrator. For more information please contact your IT administrator.
ID: D1E49AAC-8F56-4280-B9BA-993A6D77406C
Detection time: 2023-02-23T21:03:46.265Z
User: NT AUTHORITY\SYSTEM
Path: C:\Windows\System32\msiexec.exe
Process Name: C:\Windows\System32\wbem\WmiPrvSE.exe
Target Commandline: "C:\Windows\system32\msiexec.exe" /i "LAPS.x64.msi" /q /qn
Parent Commandline: C:\Windows\system32\wbem\wmiprvse.exe -Embedding
Involved File:
Inheritance Flags: 0x00000000
Security intelligence Version: 1.383.518.0
Engine Version: 1.1.20000.2
Product Version: 4.18.2301.6

Now I knew the correct question to ask Google, 'D1E49AAC-8F56-4280-B9BA-993A6D77406C', with Attack Surface Reduction (ASR) being the culprit. The following is an extract from the Microsoft page:

'Block process creations originating from PSExec and WMI commands
D1E49AAC-8F56-4280-B9BA-993A6D77406C
This rule blocks processes created through PsExec and WMI from running. Both PsExec and WMI can remotely execute code. There's a risk of malware abusing the functionality of PsExec and WMI for command and control purposes, or to spread infection throughout an organization's network. Warning: Only use this rule if you're managing your devices with Intune or another MDM solution. This rule is incompatible with management through Microsoft Endpoint Configuration Manager because this rule blocks WMI commands the Configuration Manager client uses to function correctly.'

There is no fix, only a workaround: update the ASR rule from Block mode to Audit mode in Group Policy. Open GPO Management and locate the ASR rules under Windows Components/Microsoft Defender Antivirus/Microsoft Defender Exploit Guard/Attack Surface Reduction. Open 'Configure Attack Surface Reduction Rules' and update value name 'D1E49AAC-8F56-4280-B9BA-993A6D77406C' from 1 to 2.
Run 'gpupdate /force' to refresh the GPOs on the client, then check the event log for event ID 5007 recording the change from Block to Audit mode. Test an SCCM application deployment to confirm the fix. One final check of the event log confirms event ID 1122 for the deployed application.
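For a device that isn't governed by a conflicting GPO, the same rule can also be flipped to audit mode locally with Defender's PowerShell cmdlets. A minimal sketch; note that GPO-delivered ASR settings will override a local preference:

```powershell
# The 'Block process creations originating from PSExec and WMI commands' rule.
$ruleId = 'D1E49AAC-8F56-4280-B9BA-993A6D77406C'

# Switch the rule from Block (1) to Audit (2); event 5007 records the
# settings change, and 1122 then records audited (rather than blocked) events.
Set-MpPreference -AttackSurfaceReductionRules_Ids $ruleId `
                 -AttackSurfaceReductionRules_Actions AuditMode

# Confirm the change; matching index positions pair each rule ID to its action.
Get-MpPreference | Select-Object -ExpandProperty AttackSurfaceReductionRules_Ids
Get-MpPreference | Select-Object -ExpandProperty AttackSurfaceReductionRules_Actions
```

This requires the built-in Defender module and an elevated prompt.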
- Change MDT Mapped Z: Drive
When deploying a Windows operating system or installing MDT applications, a network drive is temporarily mapped as Z:\. The letter "Z" is chosen because it is typically not used for local drives in most deployments, so it's less likely to conflict with existing drive letters on the target computer. So what happens when an application requires the Z:\ drive during an MDT image deployment? It's often better to overlook your initial reaction: with Z: engaged during the operating system installation, applications that persist with preconfigured mapped network drives will clash. The illustration provided shows a regular operating system deployment, and it's evident that the drive letter Z: is assigned to the MDT deployment share. There appear to be two approaches to changing the fixed Z:\ mapping to a different letter, although there might be additional methods as well. During my search for a solution, Google yielded no results, which could potentially be attributed to me asking the wrong questions. Late to the party, and whilst writing this blog, ChatGPT suggested addressing the issue by adding 'DriveLetter=Y' to the 'CustomSettings.ini' file. Had it succeeded at the first attempt it would have been a more graceful resolution; unfortunately that wasn't the case, and I haven't delved into the reasons behind the failure. Let's proceed with a working solution: modifying the hardcoded drive letter in ZTIUtility.vbs. I'm using PowerShell ISE as it conveniently displays line numbers. Browse to C:\MDTDeploymentShare\Scripts\ZTIUtility.vbs, search for "z" and, on line 3003 or thereabouts depending on the version of MDT installed, update the hardcoded drive 'Z' to something else; avoid C: and X:, as these are also used by the OS and MDT. In this case, I've designated the letter 'T' as the new MDT mapped network drive.
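Rather than scrolling for the line, the candidate lines can be located (and the script backed up first) with a couple of lines of PowerShell. A sketch; the exact string varies between MDT versions, so verify the hits before editing:

```powershell
# Location of the MDT utility script; adjust to your deployment share path.
$script = 'C:\MDTDeploymentShare\Scripts\ZTIUtility.vbs'

# Always take a backup before touching an MDT core script.
Copy-Item -Path $script -Destination "$script.bak"

# List line numbers for any line that hard-codes the letter Z.
Select-String -Path $script -Pattern '"Z"' |
    Select-Object LineNumber, Line
```

Edit the reported line manually once you've confirmed it's the drive-mapping assignment.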
Regenerate the boot images by updating the Deployment Share: choose 'Completely regenerate the boot images', then grab a coffee. Launch WDS and replace the image: browse to the MDT share and select LiteTouchPE_x64.wim. Deploy a new Windows OS from MDT PXE and the MDT deployment share is now mapped as "T:\". If you found the content valuable, I encourage you to explore the MDT deployment guides and instructional resources available under the main website sections. Finally, I'm headed off to have strong words with the individual responsible for implementing an application that requires hardcoded drives for configuration components.
- Sorting Files into Years and Months
Thousands of files, no structure; let's get them organised into months and years with PowerShell, with duplicates moved to another directory for review. This script was written in response to trying to manage the tens of thousands of photos and videos being uploaded to a file share each year. Management is near impossible with Synology's DS Photo Android app automatically uploading new photos to the root of the share, plus any taken with cameras or other mobiles were also dumped into the same share. A bit of a mess. For the purposes of testing and this blog, a Data directory was created off the root of C:\. A few hundred photos and videos have been dumped… oops… copied into the folder. The files were copied to create duplicates. Download the 'hash and then sort by month' script from Tenaka/FileSystem (github.com). Open PowerShell ISE and browse to the downloaded script. Update the $path variable, Ctrl + A and then F8, sit back and wait for the files to be organised. On a serious note, please don't run this without testing. So what does it do? All files are compared based on their file hash to find all duplicates. Duplicate file names are amended to include an incremental number, preventing potential loss of data from files overwriting each other. Files that aren't duplicates are moved, based on their creation date, to a Year\Month directory.
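The core of the logic can be sketched in a few lines. This is a simplified version of what the linked script does, with hypothetical paths; the real script also renames duplicates with an incremental number, which is omitted here:

```powershell
# Simplified sketch: move unique files into Year\Month folders,
# divert hash-duplicates to a separate directory for review.
$path  = 'C:\Data'              # source share (hypothetical)
$dupes = 'C:\Data_Duplicates'   # review directory (hypothetical)
$seen  = @{}                    # hash -> first file seen with that content

New-Item -Path $dupes -ItemType Directory -Force | Out-Null

foreach ($file in Get-ChildItem -Path $path -File) {
    $hash = (Get-FileHash -Path $file.FullName -Algorithm SHA256).Hash
    if ($seen.ContainsKey($hash)) {
        # Same content as a file already processed; park it for review.
        Move-Item -Path $file.FullName -Destination $dupes
        continue
    }
    $seen[$hash] = $file.FullName

    # Build Year\Month from the creation date, eg C:\Data\2023\02.
    $target = Join-Path $path ('{0:yyyy}\{0:MM}' -f $file.CreationTime)
    New-Item -Path $target -ItemType Directory -Force | Out-Null
    Move-Item -Path $file.FullName -Destination $target
}
```

Test against a copy of the data first; Move-Item is destructive if two duplicates share a name.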
- Ivanti Endpoint Manager Initial Setup for Endpoint Protection
Ivanti's Endpoint Protection's Application Control: Ivanti Endpoint Protection is a security solution that provides organizations with a comprehensive set of tools designed to protect their endpoints, networks and data. It is designed to protect users from the latest threats, such as malware, ransomware and phishing attacks, and also provides advanced capabilities such as patch management, application control and user privilege management. This article focuses on the initial setup of Ivanti Endpoint Manager and Endpoint Security Application Control, agent deployment and policy. This will provide the basis for the next round of 'versus' articles, having thoroughly abused Windows AppLocker, WDAC and GPO. The following has been extracted from the Ivanti Endpoint Protection user guide, downloadable from ( here ): Ivanti® Endpoint Manager and Endpoint Security for Endpoint Manager consists of a wide variety of powerful and easy-to-use tools you can use to help manage and protect your Windows, Macintosh, mobile, Linux, and UNIX devices. Endpoint Manager and Security tools are proven to increase end user and IT administrator productivity and efficiency. LANDesk Application Control offers the following system-level security:

Kernel-level, rule-based file-system protection
Registry protection
Startup control
Detection of stealth rootkits
Network filtering
Process and file/application certification
File protection rules that restrict actions that executable programs can perform on specified files

The initial Ivanti setup focuses on Ivanti Endpoint Protection's (EP) Application Control, to compare and pit against Microsoft's AppLocker and WDAC. Ivanti's EP Firewall, Device Control and AV policies won't be configured, although EP is capable of providing a full management suite of protections from within a single console.
The focus is Ivanti EP versus Microsoft's application control: the paid third-party tool versus the free inbuilt tools. Ivanti Download: The good news is that Ivanti provides a 45-day, fully featured trial, allowing plenty of time for EP to be put through its paces. The bad news is that the trial software is not current; the download is for the 2020.1 version, not the latest 2022.2 or higher. A little sub-optimal considering it's for endpoint protection and security. Links to access Ivanti Endpoint Manager 2020.1: 45-day trial sign-up ( here ). Installation guide ( here ); a domain with a SQL server is required. Disclaimers: After following the installation guide, Ivanti will require a fair amount of fettling to deploy Application Control in enforcement mode. Remember, it's only for application execution, to provide a direct comparison to AppLocker and WDAC and a baseline reference for EP configuration. I'm not an Ivanti expert; I've spent a day installing and learning Ivanti. It's expected that the lack of experience with this product results in some ambiguity; I'm not interested in the journey but in the net result of trying to exploit Windows with Ivanti Endpoint Protection enabled. Initial Login: Let's get to it. From the Start Menu launch 'Ivanti Management Console' and enter the account details used during setup. Add LDAP Configuration: To integrate AD, providing search and deployment of policy, agent and software: Click on 'Configuration' in the lower left pane. Right-click on 'Directory' and 'Manage Directory...'. 'Add', then follow the wizard to include the domain structure using the Domain Admin account. Initial Agent Audit Policy: Initially the endpoint and its software are unknown, and an agent is required to be deployed. Click 'Configuration' in the bottom left window and then select 'Agent Configuration' at the top left. In the 'Agent Configuration' window, bottom right, right-click and select 'New Windows agent configuration'.
Update the 'Agent Configuration': Update 'Configuration Name' with something meaningful. Check the 'Endpoint Security' option. Browse to and select 'Endpoint Protection' under 'Distribute and Patch' and then 'Security and Compliance'. Click 'Configure'. Within 'Endpoint Security' check 'Application Control:' and then click on '...' to configure the Application Control policy. Select 'Advanced' under 'Application Protection' and click on 'Learning'. With the initial policy, while Ivanti is 'Learning' there is no reason to tempt fate by locking ourselves out of the client. Select 'Learning' for 'Whitelisting'. Save the changes and close both the 'Application Control' and 'Agent Configuration' wizards. Agent Deployment: The agent and EP policy have been created and require deploying to a client. Ivanti Management is fully featured and comes with LANDesk; for those that aren't familiar, it's on par with SCCM\MECM. Here's a guide to assist in deploying the Ivanti agent ( here ). For expedience, I've opted for manual agent deployment. Right-click on the new agent and select 'Advance Agent'. Copy the URL and log on to the Windows 10 or 11 client. Download the .exe and install. Both Windows Defender and SmartScreen GPOs required updating to allow the Ivanti agent to install. Once the agent is installed, launch 'Ivanti Endpoint Security' from the Start Menu for a quick review. Excellent: Application Control and Whitelist learning policies are in effect. In preparation for blocking mode, launch installed applications on the client and run through some user activity. This activity is audited and logged to the Ivanti server for approval. It's time for a long coffee break; the file activity can take a little while to report back to the Ivanti server console. The initial audit results will take a few hours; a full audit will take overnight. Audited Files: With the agent installed, the 'Win10-01' client becomes available to manage by right-clicking.
Top tip: from Diagnostics it's possible to see Ivanti client and core logs. To view the audited files select 'Security and Patch' then 'Application Information'. As this is a new installation of Ivanti Endpoint Protection, the audited files are classed as 'undecided'. It's not as simple as clicking and then approving the files; this can only be accomplished by updating the 'Agent Configuration' settings. Endpoint Security Policy - Blocking Mode: The agent has been deployed in learning mode, enabling file data collection to be available in the console. At this point, those files require authorising and blocking mode enabling. Having failed repeated attempts, the easiest method of updating the client from learning to blocking was to update the agent, not just the Endpoint Security policy. Right-click the 'Agent Deployment - Initial Config', Copy and then Paste, maintaining the original agent settings. Rename the agent configuration to reflect its purpose, 'Agent Deployment - Windows Client Blocking'. Right-click the new agent config, 'Properties'. Navigate to 'Endpoint Security' via 'Distribution and Patch' and then 'Security and Compliance'. Click 'Configure...' and in the 'Configure endpoint security setting' click 'New'. Add a meaningful name in the 'Endpoint Security' wizard. Click on 'Default Policy' and select '...' next to the 'Application control' dropdown. Click on 'New...'. On 'General Settings' update the name. Click on 'Application Protection' and check the following:

Enable application behaviour protections
Prevent master boot record (MBR) encryption
Auto detect and blacklist crypto-ransomware

Under 'File protection rules' select all the options; not all of these options may be suitable for an enterprise, and some trial and error may be required. Under 'Application Protection' click on 'Advanced' and 'Blocking', and remove any checks for 'Learning mode ...'. Under 'Whitelisting' check all options, then 'Configure' and select all the script options.
Scripts will require authorising to work. Again, on the 'Advanced' page select 'Blocking' and uncheck 'Learning mode ...', then save the changes. Highlight the new policy and then 'Use Selected'. Enable Microsoft * as a trusted signer under 'Digital Signatures'. As Ivanti is authorising files by hash, it seems prudent to trust, and thus allow, all Microsoft files. Ivanti operates at the kernel level; any file not authorised will be denied, including system files, so it's reasonable to expect blue screens (BSoD) in that case. Click 'Add...' on the 'Application File List'. Click 'New'. To authorise the files collected from the client, click on the yellow circle with a downward arrow. Click 'Import from other application file lists...'. Check 'Computer' and select the client. Ctrl + A to highlight all files, right-click, then 'Override reputation...' and enable 'Good'. To ensure that blocking mode is enabled, set CMD.exe's reputation to 'Bad'. Click 'Next', returning to the Application File List. Highlight CMD.exe and then click on the pencil, 'Edit Application Files'. Set the execution from Allow to Block. OK the changes and close the Application File List, returning to 'Configure Application File Lists'. Highlight the new blocking policy then click 'Use selected'. Update the 'Learning list:' dropdown to that of the Win10 approval file list and save the changes. Ensure the 'Machine Configuration' is configured with the new Windows 10 client policy and save the changes. Point of note: no DLLs were listed in the authorised file list; from previous testing, bypassing application protections can be achieved when DLL file types aren't protected. Read this ( here ), where AppLocker was successfully bypassed by malware with a DLL file extension. Deploy Agent in Blocking Mode: Click on 'Configuration' in the bottom left pane and then 'Agent Configuration'. In the bottom right pane select 'My Configurations'.
Right-click and select Properties on 'Agent Deployment - Windows Client Blocking'. As the target client already has the agent installed, either a 'scheduled agent deployment' or a 'scheduled update to agent settings' should work. I've opted for the agent deployment, removing the old agent and settings and installing the new agent with the new blocking configuration. Click on 'Targets', then 'Targeted Devices', and click on 'Add'. Select the Windows client with the agent installed and ensure the client box is checked. In 'Schedule task', select 'Start Now' and then 'Save'. The Client: Log in to the client; after about 15 minutes the Ivanti agent with the blocking configuration will have been deployed. The client is likely to show that the 'Status' is disabled for all components, with 'Application Control' also displaying 'Off'. Reboot the client. After the reboot the agent should show the following: Launching cmd.exe displays the following Ivanti message; cmd is indeed blocked, and the policy and settings are successfully applied. The process of creating and deploying Ivanti EP is understood and repeatable. The next step is to test how effective Ivanti EP is at protecting Windows from various remote code exploits, local code exploits and reverse shells, following the same patterns used when testing AppLocker and Device Guard (WDAC). To follow shortly...
- The Onion Router (TOR) in a Box
Invizbox: If you're looking to take your online privacy up a notch, combining Tor with an InvizBox router is a smart move. The InvizBox makes it easy to route your network traffic through the Tor network, giving you anonymity without having to tinker with complex configurations on each device. In this blog I'll walk through how to get Tor running on your InvizBox so you can browse the web more securely and privately. TOR: Tor protects your privacy and your IP address from your ISP, and anyone else interested in the traffic leaving the property, by applying multiple layers of encryption to your browser traffic and passing it through a series of random Tor relays. As the traffic progresses through the relays, a layer of encryption is decrypted at each one, revealing the next hop, until the exit node, where the final layer is decrypted and the original web request is sent on to its final destination. Simplified diagram of Tor: the green lines are encrypted. That's the basics of how Tor works, and I tend to run it from a Linux variant such as Kali or BackBox. A while back I purchased an InvizBox One, tested it and then chucked it in the back of a drawer, but with some extra time on my hands due to COVID-19 I thought I would revisit the InvizBox. To start with, the InvizBox didn't power on, a great start; it didn't like being plugged into the USB port of the router, so I moved it to a PC. Once connected to the admin page, the firmware had to be updated before Tor would start. On the Zyxel I assigned the DMZ to port 5, configured the firewall, DHCP and DNS, then plugged in the yellow cable. On the InvizBox admin page I set the Privacy Mode to 'Tor' and set the country options to Europe and UK (wasn't sure if the UK was still considered part of the EU or not...). That was pretty much it, nice and easy. Any client, Windows, Linux or even... Mac (yuck), can connect to the InvizBox wifi and browse from any country in Europe or the UK.
Yesterday apparently I was visiting Romania, and today it's Germany. To sum up, it's a nifty little device that makes Tor easy and accessible to more devices, including those you can't install software on. The InvizBox was purchased a few years back at a cost of £50; it's now £80 on Amazon, and direct from InvizBox there's now a subscription for the VPN. There are alternatives like Anonabox. Would I purchase one today at £80? Unlikely; if I had to use a device I would rather build an Onion Pi or Odroid, but more likely I would carry on using Kali with Tor, as it's free. Now the words of warning: There have been security flaws with Tor devices and with Tor as a browser, so regularly check for updates. To maintain anonymity, don't use the computer where you're also logging on to Facebook, Amazon etc. I would stay away from using Windows as it's a little heavy on the MS spyware, and there's the potential for AV and Windows updates to be tampered with at the exit nodes. Only use secure websites, to prevent the exit nodes from performing man-in-the-middle attacks. The relay nodes are run and maintained by volunteers, which means the nodes can't be trusted; some will be run by the NSA, FBI or criminals. https://tails.boum.org/ is recommended for maintaining privacy. Invizbox and alternatives: https://www.anonabox.com/buy-anonabox-original.html https://www.invizbox.com/products/invizbox/#pricing https://www.raspberrypi.org/blog/onion-pi-tor-proxy/
- Basics of Creating Webpages with PowerShell
Creating a simple web report with PowerShell doesn't need to be a chore; there are limitations, and it's definitely not a proper HTML editor, but that doesn't mean the output should look shoddy. Like many, I'm using PowerShell to analyse Windows and display the results. The screen grab below is a section of a report I'm currently working on, soon to be published. The script is a comprehensive vulnerability assessment written entirely in PowerShell, made to look pretty without trawling through copious amounts of log output. This blog will cover the basics of taking PowerShell objects from various sources and creating HTML output. It's not difficult, just fiddly; a couple of different techniques may be required to successfully convert PowerShell to HTML. Before everyone gets critical regarding the script formatting: some of it is due to how ConvertTo-Html expects the data, most is to help those who aren't familiar with scripting. There is a conscious decision not to use aliases or abbreviations and, where possible, to create variables.

#Set Output Location Variables
Nothing challenging here: creates a working directory and sets the variable for the report output. Tests the existence of the path and, if it doesn't exist, creates the directory structure.

$RootPath = "C:\Report"
$OutFunc = "SystemReport"
$tpSec10 = Test-Path "$RootPath\$OutFunc\"
if ($tpSec10 -eq $false)
    {
    New-Item -Path "$RootPath\$OutFunc\" -ItemType Directory -Force
    }
$working = "$RootPath\$OutFunc\"
$Report = "$RootPath\$OutFunc\" + "$OutFunc.html"

#HTML to Text
Keep it simple: create a variable and add some text. This is the one that ought to be straightforward and ended up being a bit of a pain; the conversion to HTML produced garbage. Google gave some interesting solutions, but the fix I discovered turned out to be super simple: the fragment needs to be set as a 'Table' and not a 'List'. Doh...
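A minimal sketch of that fix, using a throwaway text value wrapped in an object so ConvertTo-Html has a named property to render:

```powershell
# Wrap the text in an object with a single 'Notes' property, then convert.
# -As Table (rather than -As List) produces a clean single-row table
# instead of the garbled output described above.
$text = "Example report introduction text."
$frag = [pscustomobject]@{ Notes = $text } |
        ConvertTo-Html -Fragment -As Table

# $frag is now an HTML <table> fragment ready to be stitched into the report.
$frag
```

The same -Fragment pattern applies to every dataset in the report; the fragments are later combined into one page via ConvertTo-Html's -Head and -Body parameters.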
$Intro = "The results in this report are a guide and not a guarantee that the tested system is not without further defects or vulnerabilities."

#Simple WMI
This is a report about Windows, so we had better collect some WMI attributes. There are two methods: dump the attributes into a variable and process them later, or create a variable for each required attribute and hashtable the data; the latter is a lot of effort.

$hn = Get-CimInstance -ClassName win32_computersystem
$os = Get-CimInstance -ClassName win32_operatingsystem
$bios = Get-CimInstance -ClassName win32_bios
$cpu = Get-CimInstance -ClassName win32_processor

#Foreach and New-Object
Now life starts to get interesting. The date format needs updating from "23/11/2021 00:00:00" to "23/11/2021"; to maintain the formatting, a 'foreach' is required to strip out the additional characters per line, with each result then added to an array. Under normal circumstances, the first snippet below (shown in red on the original post) would suffice.

Foreach ($hfitem in $getHF)
    {
    $hfid = $hfitem.hotfixid
    $hfdate = ($hfitem.installedon).ToShortDateString()
    $hfurl = $hfitem.caption
    $newObjHF = $hfid, $hfdate, $hfurl
    $HotFix += $newObjHF
    }

When dealing with HTML, the correct method requires the 'New-Object' cmdlet.

$HotFix = @()
$getHF = Get-HotFix | Select-Object HotFixID, InstalledOn, Caption
Foreach ($hfitem in $getHF)
    {
    $hfid = $hfitem.hotfixid
    $hfdate = $hfitem.installedon
    $hfurl = $hfitem.caption
    $newObjHF = New-Object psObject
    Add-Member -InputObject $newObjHF -Type NoteProperty -Name HotFixID -Value $hfid
    Add-Member -InputObject $newObjHF -Type NoteProperty -Name InstalledOn -Value ($hfdate).Date.ToString("dd-MM-yyyy")
    Add-Member -InputObject $newObjHF -Type NoteProperty -Name Caption -Value $hfurl
    $HotFix += $newObjHF
    }

#Pulling Data from the Registry
Registry keys require 'Get-ChildItem' followed by 'Get-ItemProperty' to extract the individual settings from the registry hive. Each setting is then assigned to a variable.
$getUnin = Get-ChildItem "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\"
$UninChild = $getUnin.Name.Replace("HKEY_LOCAL_MACHINE","HKLM:")
$InstallApps = @()
Foreach ($uninItem in $UninChild)
    {
    $getUninItem = Get-ItemProperty $uninItem
    $UninDisN = $getUninItem.DisplayName -replace "$null",""
    $UninDisVer = $getUninItem.DisplayVersion -replace "$null",""
    $UninPub = $getUninItem.Publisher -replace "$null",""
    $UninDate = $getUninItem.InstallDate -replace "$null",""
    $newObjInstApps = New-Object -TypeName PSObject
    Add-Member -InputObject $newObjInstApps -Type NoteProperty -Name Publisher -Value $UninPub
    Add-Member -InputObject $newObjInstApps -Type NoteProperty -Name DisplayName -Value $UninDisN
    Add-Member -InputObject $newObjInstApps -Type NoteProperty -Name DisplayVersion -Value $UninDisVer
    Add-Member -InputObject $newObjInstApps -Type NoteProperty -Name InstallDate -Value $UninDate
    $InstallApps += $newObjInstApps
    }

#Cascading Style Sheets (CSS)
To apply a consistent style to each element we use CSS, containing text size, colour and font as well as spacing and background colours. Each style, for example 'h1', has a set of properties that applies to any number of elements with that tag, reducing the repeated lines of code required: update the CSS and all elements receive the change. CSS Tutorial (w3schools.com) is a good resource to learn and try out CSS. In the example below, h1, h2 and h3 set different sized fonts and colours.

$style = @"
- Import Geo IP Data in to Wireshark
Ever looked at a packet trace and wondered where all those network connections are coming from, or where they're headed, without having to query each IP one by one? Wireshark has you covered. Whether from a live capture or an imported file (say, from a Zyxel firewall), it can generate a clean, visual map of the traffic, like the example below. This is the standard log output from a Zyxel, nothing exciting, honest. Ignore 192.168.0.247 attempting to establish a UDP port 500 ISAKMP connection to somewhere not local to query time. Enable a packet capture from the Diagnostic section and capture at least the external-facing port, wan1. Once the capture has run for a while, stop it and then export the files to the local computer where Wireshark is installed. Sign up to MaxMind.com; it's free to download the GeoLite2 geo data. https://dev.maxmind.com/geoip/geolite2-free-geolocation-data?lang=en At the bottom of the 'Products' list select 'GeoLite2 Free Geolocation Data' or click the link below. https://www.maxmind.com/en/accounts/699472/geoip/downloads Download the 3 zip files: GeoLite2 ASN, GeoLite2 City and GeoLite2 Country. Unpack and move them to a common directory. Open Wireshark, File, Open, and select the Zyxel packet capture to import. To import the geolocation data, select 'Edit' then 'Preferences'. Select 'Name Resolution' and scroll to the bottom of the page. Select 'Edit' for MaxMind Database Directories and set the location of the unpacked files. To view the map, select 'Statistics' then 'Endpoints'. Select IPv4, or a tab with a number. At the bottom of the page, select 'Map' and then 'Open in Browser'. That's it... done.
- Delegation of DNS with PowerShell
Introduction: This post walks through how to use PowerShell to set up targeted delegation for DNS, creating the right AD groups with clear scopes and following Microsoft's recommended naming conventions. DNS Delegation: DNSAdmins is a default security group in Active Directory that delegates administrative control over DNS zones and some DNS server settings to a specific user account or group. Members of this group have permission to manage DNS zones and records and to configure DNS server settings, including forwarders. However, it may not be desirable to delegate the entire DNSAdmin permission set to a user via DNSAdmins, and a more targeted approach of delegating zone management or creation could be necessary. The script ( here ) creates the required groups to delegate DNS server management, the ability to create and delete zones, and finally zone management. Group names will either be named DNSServer or DNSZone; where 'MicrosoftDNS' is used, the group defines a top-level permission. Also, AD groups follow the suggested Microsoft naming convention of 'AT', or Action Task. Here are a few examples:

AT_DNSServer_MicrosoftDNS_Manage is defined as the ability to change settings for the DNS server, eg create forwarders or configure scavenging.
AT_DNSZone_MicrosoftDNS_Manage is defined as the ability to create and delete zones but not change any DNS server settings.
AT_DNSZone_Microsoft.com_Manage is defined as the ability to manage the Microsoft.com DNS zone.

Note: the DNSAdmins group on its own does not have enough permissions and requires Server Operators, Administrators for the domain or Domain Admins, basically local administrative rights over Domain Controllers. Setup: The setup is pretty straightforward: a virtual Domain Controller and a Member Server, plus an OU for the delegated groups with a pre-existing group named AT_Server_User. This provides login to the Member Server via a user account with the Remote Desktop User Rights Assignment and the delegated DNS group(s).
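The group-creation half of the linked script can be sketched with the ActiveDirectory module. A sketch only, with a hypothetical OU path; the real script also sets the ACLs described later, which are omitted here:

```powershell
# Sketch: create the DNS delegation groups using the AT_ naming convention.
Import-Module ActiveDirectory

$ou     = 'OU=Delegation,DC=fqdn,DC=com'   # hypothetical delegation OU
$groups = 'AT_DNSServer_MicrosoftDNS_Manage',
          'AT_DNSZone_MicrosoftDNS_Manage',
          'AT_DNSZone_Microsoft.com_Manage'

foreach ($name in $groups) {
    # Domain Local security groups, one per delegated permission set.
    New-ADGroup -Name $name `
                -GroupScope DomainLocal `
                -GroupCategory Security `
                -Path $ou `
                -Description "DNS delegation: $name"
}
```

Run with Domain Admin rights on a Domain Controller, as noted above.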
Update the Member Server OU GPO with the following changes:

Create 'Restricted Groups' for Administrators and add AT_Server_Admin.
Create 'Restricted Groups' for Remote Desktop Users and add AT_Server_User.
Add both Remote Desktop Users and AT_Server_User to the 'Allow log on through Remote Desktop Services' User Rights Assignment.
Create a user account and add it to the AT_Server_User group.

Deploy the DNS delegation script (here) with Domain Admin rights on the Domain Controller. After executing the script, the delegation OU should be similar to the picture below, with groups for both forward and reverse zones and the 2 default MicrosoftDNS groups.

DNS Server Delegation
Members of AT_DNSServer_MicrosoftDNS_Manage are able to connect to DNS and manage server settings, but not create, delete or manage any existing zone. Because some settings require administrative rights on Domain Controllers, not everything can be managed: interface options, DNSSEC and Trust Points require further rights, although most other DNS configuration options are available. All DNS delegation groups require a minimum of READ to connect via the DNS snap-in. DNS Server permissions can be found under System, MicrosoftDNS in dsa.msc.

DNS Zone Creation and Deletion
To delegate creating and deleting zones, open adsiedit and connect to 'dc=domaindnszones,dc=fqdn'. Full control for AT_DNSZone_MicrosoftDNS_Manage is set against CN=MicrosoftDNS without inheritance.

DNS Zone Management
Finally, each zone is delegated to a named DNS zone group. Use adsiedit and connect to the 'default naming context' to browse to each zone and interrogate its permissions.
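The same permission can be applied from PowerShell instead of adsiedit. The sketch below is illustrative and assumes an example contoso.com forest; adjust the distinguished names to suit. It grants the zone create/delete group full control on CN=MicrosoftDNS in the DomainDnsZones partition, this object only (no inheritance), as described above.

```powershell
# Sketch, assuming the contoso.com domain - adjust the DNs to your forest.
Import-Module ActiveDirectory

$dn  = "AD:CN=MicrosoftDNS,DC=DomainDnsZones,DC=contoso,DC=com"
$grp = Get-ADGroup "AT_DNSZone_MicrosoftDNS_Manage"
$sid = [System.Security.Principal.SecurityIdentifier]$grp.SID

# Full control, applied to this object only (no inheritance)
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule(
    $sid,
    [System.DirectoryServices.ActiveDirectoryRights]::GenericAll,
    [System.Security.AccessControl.AccessControlType]::Allow,
    [System.DirectoryServices.ActiveDirectorySecurityInheritance]::None)

$acl = Get-Acl -Path $dn
$acl.AddAccessRule($ace)
Set-Acl -Path $dn -AclObject $acl
```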
- Deploy Domain Controllers with PowerShell and JSON (Part 2) - OU Structure and Delegation
Welcome Back
Welcome back to the continuation of our series on deploying Domain Controllers using PowerShell and JSON. If you've been following along with Part 1, you should now have a newly configured Domain Controller with a delegated Organizational Unit (OU) structure in place. If you missed Part 1 of the series, you can access the necessary files by following the provided link (here).

This blog provides an in-depth explanation of the delegation model delivered by PowerShell. It also delves into the intricacies of the OU structure, the arrangement of nested groups and the various roles assigned.

Aim of the Game
The objective is to establish an OU structure that aligns with a clear and consistent delegation model. This approach incorporates well-defined naming standards to enhance comprehensibility and facilitate ease of navigation and management within the structure.

AD Group Best Practice
Group management follows Microsoft's best practice of assigning Domain Local groups against the object, e.g. an OU or GPO. The Domain Local group is then added as a 'Member of' a Domain Global group, and the user is added to the Domain Global group as a 'Member'. The naming convention I've persisted with over the years, again from Microsoft, is naming delegation groups 'Action Tasks', a task being an individual permission set, and 'Roles', a role being a collection of tasks or individual permissions.

AG is an Action Task Global group
AL is an Action Task Domain Local group
RG is a Role Global group
RL is a Role Domain Local group

Again, something I've persisted with over the years is that groups and OUs are named based on their Distinguished Name (DN).
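The nesting described above (often called AGDLP) can be sketched for a single permission set as follows. The OU path and the user name are assumptions for illustration only:

```powershell
# Sketch of the AGDLP nesting - the OU path and 'some.admin' user are assumptions.
Import-Module ActiveDirectory
$ou = "OU=AD Tasks,OU=Admin Resources,DC=contoso,DC=com"

New-ADGroup -Name "AL_OU_Member Servers_CompMgmt" -GroupScope DomainLocal -Path $ou
New-ADGroup -Name "AG_OU_Member Servers_CompMgmt" -GroupScope Global      -Path $ou

# The AL group is applied to the object (OU/GPO), the AG group is a member of the AL
# group, and the user is a member of the AG group.
Add-ADGroupMember -Identity "AL_OU_Member Servers_CompMgmt" -Members "AG_OU_Member Servers_CompMgmt"
Add-ADGroupMember -Identity "AG_OU_Member Servers_CompMgmt" -Members "some.admin"
```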
Let's break down an example of a group name: AG_RG_Member Servers_SCCM_Servers_ResGrpAdmin

AG\AL\RG\RL - AG for Action Task Global, AL for Action Task Domain Local, RG for Role Global, RL for Role Domain Local
RG\OU\GPO - the type of object delegated: Restricted Group, OU or GPO
Member Servers - the top-tier OU name
SCCM - the application or service, e.g. SCCM or Certificates
Servers - it's for Computer objects
ResGrpAdmin - the permission set, one of:

ResGrpAdmin - a Restricted Group providing Admin privileges.
ResGrpUser - a Restricted Group providing User privileges.
CompMgmt - create\delete and modify Computer objects.
UserMgmt - create\delete and modify User objects.
GroupMgmt - create\delete and modify Group objects.
GPOModify - edit GPO settings.
SvcMgmt - create\delete and modify service account (user) objects.
FullCtrl - full control over OUs and any child objects.

JSON OU Configuration
Traditionally, there are only 3 tiers, and the lower the tier, the less trustworthy:

Zero = Domain Controllers and CAs
One = Member Servers
Two = Clients and Users

Given that this script can potentially generate numerous levels or hierarchies, it seemed more suitable to avoid the term "tier" and instead label the top-level OUs as "Organisations" for a more meaningful representation. The JSON configuration provided creates an OU structure based on a default layout for many businesses, where Organisation1 is for Member Servers and Organisation2 is for Clients and Users. In addition, Organisation0 provides an Admin Resources OU for the management of all delegation, role and admin account provision.

Organisation0 - Admin Resources
Organisation0 creates a top-level management OU named 'Admin Resources'. This OU serves as the central hub for all delegation and management groups across subsequent Organisations. Each Organisation benefits from having its own dedicated management OU within the Admin Resources OU, where Organisation-specific delegation groups, roles and admin accounts are created. This approach allows for potential future delegation.
Within Admin Resources, each Organisation gets its own sub-OUs, for example:

Admin Accounts > Member Servers
Admin Tasks > Member Servers
Admin Roles > Member Servers

"OU": {
    "Organisation0": {
        "Name":"Admin Resources",
        "Path":"Root",
        "Type":"Admin",
        "Protect":"false",
        "AdministrativeOU":"Administrative Resources",
        "AdministrativeResources": [
            "AD Roles,Group",
            "AD Tasks,Group",
            "Admin Accounts,User"
        ]
    },

Organisation1 - Member Servers
Organisation1 represents the typical Member Server OU and is of the Type 'Server'. The Server type designates a behavioural difference for assigning policy. AppResources designates the application service OUs that will be created, e.g. Exchange. ServiceResources is used for creating OUs based on a set of standard administrative functions, for example Servers, with the delegation and object type of Computers.

    "Organisation1": {
        "Name":"Member Servers",
        "Path":"Root",
        "Type":"Server",
        "Protect":"false",
        "AdministrativeOU":"Service Infrastructure",
        "AdministrativeResources": [
            "AD Roles,Group",
            "AD Tasks,Group",
            "Admin Accounts,User"
        ],
        "AppResources":"Certificates,MOSS,SCCM,SCOM,File Server,Exchange",
        "ServiceResources": [
            "Servers,Computer",
            "Application Groups,Group",
            "Service Accounts,SvcAccts",
            "URA,Group"
        ]
    },

Organisation2 - Client Services
Organisation2 represents the typical User Services OU and is of the Type 'Clients'.

    "Organisation2": {
        "Name":"User Services",
        "Path":"Root",
        "Type":"Clients",
        "Protect":"false",
        "AdministrativeOU":"Service Infrastructure",
        "AdministrativeResources": [
            "AD Roles,Group",
            "AD Tasks,Group",
            "Admin Accounts,User"
        ],
        "AppResources":"Clients",
        "ServiceResources": [
            "Workstations,Computer",
            "Groups,Group",
            "Accounts,User",
            "URA,Group"
        ]
    }
}

Hundreds and Thousands
It's possible to add further top-level OUs by duplicating an Organisation, then updating the Organisation(*) and Name values, as they need to be unique.
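A hedged sketch of how the deployment script might walk this structure. The file name and the commented-out cmdlet call are assumptions for illustration; the property names match the JSON above:

```powershell
# Illustrative sketch - the JSON file name 'CreateOU.json' is an assumption,
# and this is not the shipped script.
$json = Get-Content .\CreateOU.json -Raw | ConvertFrom-Json

foreach ($org in $json.OU.PSObject.Properties) {
    $cfg = $org.Value
    Write-Host "Creating OU '$($cfg.Name)' of type $($cfg.Type) under $($cfg.Path)"
    # e.g. New-ADOrganizationalUnit -Name $cfg.Name `
    #        -ProtectedFromAccidentalDeletion:([bool]::Parse($cfg.Protect))
}
```

Because the Organisation* keys are iterated rather than hard-coded, duplicating a block with a unique key and Name is enough to add another top-level OU.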
It's possible to add hundreds or even thousands of Organisations, and with this possibility in mind, the management and delegation structure reflects this within the design.

Levels of OU Delegation
As we delve deeper into the structure of each Organisation, we encounter a hierarchy of three levels of delegation, using Member Servers as an example:

Organisation = Member Servers (Level 1)
Application Service = Certificates (Level 2)
Resources = Computers, Groups, Users and Service Accounts (Level 3)

OU delegation controls the level of access to manage objects, e.g. create a Computer or Group object.

Level 1
Level 1 is the Organisation level, in this case the Member Servers OU. It's delegated with AL_OU_Member Servers_FullCtrl. The group provides full control over the OU, sub-OUs and all objects within. The arrow indicates the point at which the group takes effect within the structure.

Level 2
Level 2 is the Service Application level, in this case Certificate Services. AL_OU_Member Servers_Certificates_FullCtrl is applied a level below Member Servers and provides full control over itself and any subsequent objects.

Level 3
At Level 3, the delegation involves the management of Service Application resources, such as Server objects and service accounts. The 4 default OUs allow the delegation and management of their respective resource types; for example, the Application Groups OU permits the creation and deletion of Group objects via AL_OU_Member Servers_Certificates_Application Groups_GroupMgmt.
Application Groups - application-specific groups
Servers - Server or Computer objects
Service Accounts - service accounts for running the application services
URA - User Rights Assignments for services that require LogonAsAService etc.

Restricted Groups and User Rights Assignment (URA) Levels
In this delegated model, Restricted Groups facilitate administrative access, whilst User Rights Assignments (URA) allow admins or users to log on over Remote Desktop Protocol (RDP). There are two primary levels: the first level encompasses the entire Organisation, including all subsequent OUs; the second level consists of a dedicated Servers OU for each specific Service Application.

Level 1 of Restricted Groups
The GPO GPO_Member Server_RestrictedGroups is linked to the Member Servers OU and has the following groups assigned:

URA: Allow log on through Terminal Services:
AL_RG_Member Servers_ResGrpAdmin
AL_RG_Member Servers_ResGrpUser

Restricted Group:
Administrators: AL_RG_Member Servers_ResGrpAdmin
Remote Desktop Users: AL_RG_Member Servers_ResGrpUser

This is how it looks when applied in GPO. Within this delegation model, the ability to manage GPO settings is also delegated, via the AL_GPO_Member Servers_GPOModify group, to ensure comprehensive control and management of the environment.
Level 2 of Restricted Groups
The GPO GPO_Member Server_Certificates_Servers_RestrictedGroups is linked to the sub-OU Servers under Certificates and has the following groups assigned, those of the Organisation and of the Service Application:

URA: Allow log on through Terminal Services:
AL_RG_Member Servers_ResGrpAdmin
AL_RG_Member Servers_ResGrpUser
AL_RG_Member Servers_Certificates_ResGrpAdmin
AL_RG_Member Servers_Certificates_ResGrpUser

Restricted Group:
Administrators:
AL_RG_Member Servers_ResGrpAdmin
AL_RG_Member Servers_Certificates_ResGrpAdmin
Remote Desktop Users:
AL_RG_Member Servers_ResGrpUser
AL_RG_Member Servers_Certificates_ResGrpUser

This is how it looks when applied in GPO. As above, GPO settings are also delegated, via AL_GPO_Member Servers_Certificates_Servers_GPOModify.

Bringing It All Together with Roles
In this demonstration, an account named 'CertAdmin01' has been created specifically to manage the resources within the Certificates OU. The account is added to the role group RG_OU_Member Servers_Certificates_AdminRole. Opening the RG_ group and selecting the 'Member Of' tab displays the nested RL_ group. Drilling down into the RL_ group displays the individual delegated task groups.

Delegated Admin
To test the certificate admin (CertAdmin01), deploy an additional server, adding it to the domain and ensuring the computer object is in the Certificates Servers OU. Log in to the new member server as CertAdmin01 and install the GPO Management and AD tools. Browse to Member Servers and then the Certificates OU and complete the following tests:

Right-click on Application Groups > New > Group
Right-click on Servers > New > Computer
Right-click on Service Accounts > New > User
Right-click on URA > New > Group
Open Group Policy Management and edit GPO_Member Servers_Certificates_Servers_RestrictedGroups.
Open compmgmt.msc and confirm that the Administrators group contains the 2 _ResGrpAdmin groups and the local Administrator.
AL_RG_Member Servers_Certificates_Servers_ResGrpAdmin
AL_RG_Member Servers_ResGrpAdmin

Confirm that CertAdmin01 is unable to create or manage any object outside the delegated OUs.

Nearly There..... SCM Policies and ADMX Files
As part of the delivery and configuration of the OU structure, Microsoft's Security Compliance Manager (SCM) GPOs and a collection of Administrative Template (ADMX) files are included.

SCM GPOs: Microsoft's SCM offers a set of pre-configured GPOs designed to enhance the security and compliance of Windows systems. These GPOs contain security settings, audit policies and other configurations that align with industry best practices and Microsoft's security recommendations.

ADMX Templates: ADMX files, also known as Administrative Template files, extend the functionality of Group Policy Management, enabling settings for Microsoft and 3rd-party applications. Within a domain, ADMX files are copied to the PolicyDefinitions directory within Sysvol.

Zipped...
Both the SCM and ADMX files are zipped and will automatically be uncompressed during the OU deployment. However, if you would like to add your own policies and ADMX files, you can.

SCM Policy Placement
The SCM policies are delivered in their default configuration, without any modifications or merging. The policies are placed directly into the designated target directory, imported and linked to their respective OUs. For example, the Member Server directory content will be linked to any OU of type 'Server'. The imported SCM policies are prefixed with 'MSFT', indicating that they are Microsoft-provided policies. There are a substantial number of these policies, linked from the root of the domain down to client- and server-specific policies. As far as delegation is concerned, the SCM policies remain under the jurisdiction of the Domain Admins, with control to effect change delegated via the _RestrictedGroups policies.

Thank you for taking the time to read this blog.
I hope you found the information valuable and that it has been helpful. Your support is greatly appreciated!
- Deploy Domain Controllers with PowerShell and JSON (Part 1) - Domain Controllers
Introduction
In this post, we'll delve into the automated deployment of a Domain using PowerShell in tandem with a JSON configuration file. In my experience, while there are numerous Windows Server administration tasks suitable for automation, promoting Domain Controllers or deploying a new Forest is not typically among them. Automating DCPromo can raise the risk of inadvertently exposing plain-text credentials in scripts, which is far from ideal. Furthermore, such tasks are not performed frequently or repeated regularly as standard BAU (business-as-usual) work.

And Now, for the Thousandth Time, Let's Lab a Domain
Recently, I've been engaged in a fair amount of lab work, involving dismantling and rebuilding domains. One such lab involved using CloudFormation on AWS and deploying a domain via Desired State Configuration, pre-packaged code provided by AWS. After going through the experience, I couldn't help but feel that I could deploy a Microsoft Domain setup far more effectively than relying on AWS, and so here we are, and I've a new PowerShell project to keep me amused... enjoy.

The First of Many
This is the first instalment of a two-part blog series; Part 2 covers the OU structure and delegation model. This post covers the automated deployment of a Domain using PowerShell and a JSON configuration file, encompassing the installation of essential features such as DNS and AD, plus automatic logins via scheduled tasks. In the second blog, the focus shifts to the deployment of Organizational Units (OUs) and Group Policy Objects (GPOs) with Restricted Groups and User Rights Assignments, and implementing a comprehensive delegation model.

The Requirements
A standalone, non-domain-joined Windows Server 2022 with an active network is required; I'll be using a Hyper-V VM. Testing has been carried out exclusively on Server 2022; the scripts should work with Server 2016 and 2019, but it's important to note that I'm unable to provide any guarantees.
Download all the files from GitHub (here) to the server and save them to the Administrator Desktop; the 2 zip files will unpack automatically via the script.

The Important Stuff
Update DCPromo.json; the hostname of the server must match the "PDCName" value.

"FirstDC": {
    "PDCName":"DC01",
    "PDCRole":"true",
    "IPAddress":"10.0.0.1",
    "Subnet":"255.255.255.0",

Either update the passwords in the JSON file or change "PromptPw":"false" to "true". Once set to true, the script will prompt for the password to be entered interactively. Regardless, the password is set in clear text in the Registry to allow autologin, and later removed during the OU configuration.

    "DRSM":"Password1234",
    "DomAcct":"Administrator",
    "DomPwd":"Password1234",
    "PromptPw":"false"

Any subsequent Domain Controllers can be added; remember that the hostname is the key and the value referenced during deployment.

{
    "DCName":"DC02",
    "PDCRole":"false",
    "IPAddress":"10.0.0.2",
    "Subnet":"255.255.255.0",
    "DefaultGateway":"10.0.0.254",
    "SiteName":"Default-First-Site-Name",
    "DRSM":"Password1234"
},

Elevate PowerShell or ISE to execute DCPromo.ps1.

Installation of Roles and DCPROMO
As long as the above criteria are met, Windows Server will install the AD-Domain-Services and DNS Windows Features, set the IP and DCPromo the server to become the first DC in the Forest and the PDC Emulator.

Auto-Restart
The newly promoted DC will auto-restart twice; this is required to correctly pass domain credentials to execute CreateOU.ps1, the final script.

Part 2 - GPOs, OUs and Delegation
https://www.tenaka.net/post/deploy-domain-with-powershell-and-json-part-2-ou-delegation
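As a hedged sketch, DCPromo.ps1 presumably matches the local hostname against the JSON and honours the PromptPw flag along these lines. This is illustrative only; the script on GitHub is the authoritative version:

```powershell
# Illustrative sketch of consuming DCPromo.json - not the shipped script.
$cfg = Get-Content .\DCPromo.json -Raw | ConvertFrom-Json
$dc  = $cfg.FirstDC

if ($env:COMPUTERNAME -ne $dc.PDCName) {
    throw "Hostname $($env:COMPUTERNAME) does not match PDCName $($dc.PDCName) - update DCPromo.json"
}

# Prompt interactively, or fall back to the clear-text value in the JSON
if ([bool]::Parse($dc.PromptPw)) {
    $drsm = Read-Host "Enter the DSRM password" -AsSecureString
} else {
    $drsm = ConvertTo-SecureString $dc.DRSM -AsPlainText -Force
}
```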
- PowerShell's Custom Runtime for AWS Lambda's - Importing Modules
Welcome to the second part of the installation and configuration process for the AWS Custom Runtime for PowerShell.

Recap
In the first part, we covered the installation of AWS's Custom Runtime for PowerShell, which involved deploying Windows Subsystem for Linux (WSL), initializing the Runtime and deploying the demo Lambda Function. Here's the link, with instructions on how to install WSL and deploy the Custom Runtime: https://www.tenaka.net/post/wsl2-ps-custom-runtime-deployment

What's in Part 2
The first part left off on a bit of a cliffhanger: functionally, the Custom Runtime for PowerShell worked, but without additional modules there's very little that could be accomplished. The subsequent steps entail the creation of Lambda layers that incorporate additional modules, which will be utilized in Lambda Functions to finalize the end-to-end deployment process.

Copy and Paste
Upon completing this process, the objective is to deploy a Lambda Function equipped with a layer containing both the AWS.Tools.Common and AWS.Tools.EC2 PowerShell modules. This will enable the ability to start and stop an EC2 instance within the AWS environment. Continuing where we previously left off, we are going to utilise the work already completed by AWS by amending an existing example. Before we start, note that only 5 layers can be added to a Lambda Function, but a layer can contain multiple modules.

Change directory into the AWSToolsforPowerShell directory:

cd /Downloads/aws-sam/powershell-modules/AWSToolsforPowerShell

Copy the existing S3EventBridge directory:

cp AWS.Tools.S3EventBridge AWS.Tools.EC2 -r
cd AWS.Tools.EC2

Amendments
The 3 files that require amending to successfully publish additional modules as layers are:

build-AWSToolsLayer.ps1
template.yml
/buildlayer/make

The process is straightforward: find and replace all references to the current module functionality with the new module functionality.
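The per-file edits boil down to one substitution applied across the copied example. The sketch below demonstrates the idea against a scratch copy in /tmp so it can be run anywhere; in the walkthrough you would run the grep/sed line inside the AWS.Tools.EC2 directory instead. The file path and contents here are stand-ins, not the real template.

```shell
# Demonstrate the find-and-replace against a scratch copy (stand-in for AWS.Tools.EC2)
mkdir -p /tmp/AWS.Tools.EC2
printf 'Ref: AWSToolsS3EventBridgeLayer\n' > /tmp/AWS.Tools.EC2/template.yml

# Replace every S3EventBridge reference with EC2 in all files under the directory
grep -rl 'S3EventBridge' /tmp/AWS.Tools.EC2 | xargs sed -i 's/S3EventBridge/EC2/g'

cat /tmp/AWS.Tools.EC2/template.yml   # -> Ref: AWSToolsEC2Layer
```

Editing each file in nano, as described next, achieves the same result and makes it easier to double-check the module ordering as you go.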
Although updating build-AWSToolsLayer.ps1 is not strictly essential, since we'll be relying on the Make command, taking a few seconds to do so ensures consistency among all the files involved.

nano build-AWSToolsLayer.ps1

Ctrl + o to save (write out the file)
Ctrl + x to exit nano

Add additional lines for each module to be extracted from aws.tools.zip. Note: it is crucial to ensure the correct ordering of modules, with AWS.Tools.Common listed before the module for EC2, as the EC2 module relies on the functionality provided by AWS.Tools.Common.

In the copied template.yml, replace every reference to S3EventBridge with EC2. Ensure !Ref values are updated from AWSToolsS3EventBridgeLayer to AWSToolsEC2Layer; this value is passed between files and needs to be consistent. Save and exit template.yml.

cd buildlayer
nano Make

The first line references the !Ref value and must be consistent with the value set in template.yml. Modify the unzip commands to accommodate any supplementary modules. Save and exit Make.

Build and Deploy
After each amendment to the configuration files, the content must be rebuilt to reflect the changes made:

sam build

To publish to AWS, run the following:

sam deploy -g

Layers and a Lambda
Log in to AWS Lambda and confirm the new layer has been created. Now let's bring the entire Custom Runtime endeavour to fruition by creating a new Lambda Function designed to start an EC2 instance. Click Create Function, name the function and select the Amazon Linux 2 runtime. Ensure the Architecture is set to x86_64 and 'Create a new role with basic Lambda permissions' is selected, then click Create Function.

Within the Function Overview, click on Layers, then Add Layers. Select Custom Layers and add, in order:

PwshRuntimeLayer
AWSToolsEC2Layer

PwshRuntimeLayer is listed first, followed by any modules. Click Configuration, then Edit, and update the memory to 512Mb and the timeout to 1 minute.
Before saving the configuration updates, open the IAM link in another browser tab to grant the function the additional permissions required for execution. Within IAM, add AmazonEC2FullAccess and AWSLambdaExecute to the role.

Navigate back to Lambda and select Code. Update the Runtime Settings Handler to the name of the PowerShell script followed by "::handler". In this example, the handler will be "Start-Ec2.ps1::handler".

Navigate back to Code and delete all the default files. Right-click on the folder, select New File and rename it to "Start-Ec2.ps1". Copy and paste the script below, making sure to modify the Reservation ID to that of your own EC2 instance.

Start-Ec2.ps1:

#$VerbosePreference = "continue"
#$VerbosePreference = "SilentlyContinue"

Write-Verbose "Run script init tasks before handler"
Write-Verbose "Importing Modules"

Import-Module "AWS.Tools.Common"
Import-Module "AWS.Tools.EC2"

function handler
{
    [CmdletBinding()]
    param(
        [parameter()]
        $lambdaInput,

        [parameter()]
        $lambdaContext
    )

    Get-EC2Instance | Where-Object {$_.ReservationId -eq "r-06856f1f55c199e49"} | Start-EC2Instance
}

Deploy the changes, then click Test. Complete the Configure Test Event dialog by providing an Event Name. Navigate to the Test tab and click Test to execute the Lambda Function.

I'm hoping this guide provides a starting point for further modules and functionality, especially for those coming from a native Microsoft background. I wish to thank everyone for their time; any feedback would be gratefully received.
- PowerShell's Custom Runtime for AWS Lambda's - Installation
Introduction
This walkthrough covers how to set up and deploy an AWS Lambda Custom Runtime for PowerShell from within Windows Subsystem for Linux 2 (WSL2). We'll go through the environment setup, packaging, and deployment process so you can build and run PowerShell-based Lambda functions without needing a full Linux host.

The PowerShell custom runtime for AWS Lambda is an addition to the AWS Lambda services, offering developers and Microsoft engineers the ability to leverage PowerShell within the serverless environment. The standard runtimes supported by AWS Lambda cover languages like Python, Node.js, and Java; with the PowerShell custom runtime, developers can now build and deploy Lambda functions using their existing PowerShell skills. It allows for the integration of PowerShell's vast library of cmdlets and modules, enabling developers to leverage a wide range of pre-built functions and automation tasks. PowerShell's object-oriented scripting approach also provides a means for manipulating and managing AWS resources, making it easier to interact with other AWS services like Amazon S3, Amazon DynamoDB, and AWS CloudFormation. Additionally, it's now possible to edit the PowerShell script directly within the published Lambda, which was not previously possible.

The Truth of the Matter
The issue: it's PowerShell. Most DevOps engineers will reach for anything but PowerShell, so there's limited support for it on AWS. However, if you're a Microsoft engineer who needs to manage Windows infrastructure on AWS, then PowerShell will be your go-to scripting language for Lambda functions. The PowerShell custom runtime setup provides 3 options for deployment: Linux or WSL, native PowerShell, and Docker. The native PowerShell deployment doesn't work, at least I couldn't get it working, and others have faced similar issues with no resolution provided.
The good news is that the Windows Subsystem for Linux (WSL) deployment does successfully deploy and execute, and this is what I'll be using.

Requirements
WSL 2 requires the Hyper-V hypervisor, which rules out any AWS EC2 instance, as nested Hyper-V isn't supported there. Windows Server 2022 or Windows 11 with the latest patches installed is required. I've Windows 11 installed on a Zenbook Space Edition laptop with the Hyper-V feature installed and virtualization enabled in the system's BIOS/UEFI. WSL 2 isn't installed directly on the laptop; it could be, but I prefer keeping my clients free of clutter and instead opted for a Windows Server 2022 Hyper-V VM. If there are any issues, the VM can be rolled back or redeployed. Now deploy a Gen2 Windows Server 2022 Hyper-V VM, named AWS-Mgmt01 here, and ensure the latest Windows updates are applied.

AWS Configuration
An account named 'svc_lambda' has been created in IAM with administrative access. The excessive rights are for ease of deployment; the permissions will be adjusted later to only those needed. The account's Access and Secret keys have been exported for use during the creation of the PowerShell Runtime Lambda.

Installation of Windows Subsystem for Linux version 2
WSL version 2 was not supported by Server 2022 or Windows 11 at release; install the latest Windows patches to enable WSL2 support. I may have mentioned this a few times now. Power off the VM and, from the host, open an elevated PowerShell session, then type the following command to enable the nested hypervisor. AWS-Mgmt01 is the VM's name in the Hyper-V console, not its hostname.

Set-VMProcessor -VMName AWS-Mgmt01 -ExposeVirtualizationExtensions $true

Power on AWS-Mgmt01, log in, elevate a PowerShell session and execute the following command. This will install all the components and features required. If the command fails to be recognised, then either the Windows updates aren't applied or, as I experienced, they failed to install correctly.
wsl --install

Restart AWS-Mgmt01 and log in; WSL should auto-launch, if not, run wsl --install again from PowerShell. Type in a username and password at the prompt. The installation confirmation will show that the latest version of Ubuntu and WSL 2 are configured. In the Linux shell, execute the following commands to update the system and install all required dependencies:

sudo apt update -y && sudo apt upgrade -y
sudo apt install glibc-source groff less unzip make -y

AWS Serverless Application Model Installation
AWS SAM (Serverless Application Model) is a framework provided by AWS that simplifies the development, deployment, and management of serverless applications. It extends the capabilities of AWS CloudFormation, allowing developers to define serverless application resources using a simplified YAML syntax, and it is next to install. Type pwd and it will return '/home/user'. Type mkdir Downloads to create a working directory and cd into the directory.

Download the SAM client for Linux, unzip and install:

wget https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-x86_64.zip
unzip aws-sam-cli-linux-x86_64.zip -d sam-installation
sudo ./sam-installation/install

Confirm the version and successful installation:

/usr/local/bin/sam --version

Download the AWS CLI for Linux, unzip and install:

wget "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
unzip awscli-exe-linux-x86_64.zip
sudo ./aws/install

Confirm the version and successful installation:

/usr/local/bin/aws --version

Download the AWS Lambda PowerShell Runtime:

git clone https://github.com/awslabs/aws-lambda-powershell-runtime
mv aws-lambda-powershell-runtime/ aws-sam
cd aws-sam/examples/demo-runtime-layer-function

Export the access and secret keys for the svc_lambda service account via IAM, then configure access for that user:
aws configure
AWS Access Key ID [None]: AKIA5IZEOZXQ4XXXXX
AWS Secret Access Key [None]: 2O8hYlEtAzyw/KFLc4fGRXXXXXXXXXX
Default region name [None]: us-east-2
Default output format [None]:

Build the custom runtime:

sam build --parallel

Deploy the custom runtime to AWS:

sam deploy -g
Stack Name [sam-app]: PowerShellLambdaRuntime
AWS Region [us-east-2]: us-east-2
Confirm changes before deploy [y/N]: n
Allow SAM CLI IAM role creation [Y/n]: y
Disable rollback [y/N]: n
Save arguments to configuration file [Y/n]: n

The deployment will take a few minutes as it creates a CloudFormation stack, an S3 bucket and finally the Lambda.

Testing the Runtime Lambda Function
From the AWS console, open Lambda and browse to Functions to confirm the successful deployment of the PowerShell Runtime demo. (It was at this point, when native PowerShell was used, that the whole runtime fell apart and failed to execute.) Click on Test after reviewing the PowerShell code; this is a first, as the code can not only be viewed but edited. Add an Event Name and Save, then click on Test and review the details.

The Runtime is installed, but not much else..... This is just the beginning, and a bit of a problem if you thought it was a simple matter of creating new Lambdas and applying PwshRuntimeLayer. I'm the bearer of bad news, let me explain. Two layers were created for the demo: DemoAWSToolsLayer and PwshRuntimeLayer. For PowerShell, the correct modules need importing, and these are supplied in the Lambda layers. In this case, it's the DemoAWSToolsLayer that loads the module required by the demo, and the demo only needs the AWS.Tools.Common module, for the function to call Get-AWSRegion. Consequently, additional layers containing the necessary modules are required for other functions. For instance, to create a Lambda function to stop an EC2 instance, both the AWS.Tools.Common and AWS.Tools.EC2 modules are needed. We will delve into this in the next blog (here).
Links: https://aws.amazon.com/blogs/compute/introducing-the-powershell-custom-runtime-for-aws-lambda/ https://aws.amazon.com/blogs/compute/extending-powershell-on-aws-lambda-with-other-services/ https://www.youtube.com/live/FAU0V_SM9eE?feature=share












