Now that I have some simple functions and some tests in my master branch on GitHub, I want to make sure the tests are executed (and pass!) every time I merge new code. At this point I do not care too much about my dev branch because I know I’ll commit often and things will not always be stable, but I’ll make sure all my tests pass before I merge from dev to master (and later I may want to publish from master to the Powershell Gallery, but only if all tests pass of course!). The first thing I need is an account on Azure DevOps (you can start with a free account); when ready, head to Pipelines, then Builds. Since my code is in GitHub, that’s what I’ll choose. The first time we set up this integration, Azure Pipelines must be authorized to access GitHub. Since I don’t have a yaml file already, I’ll select Starter pipeline. At this point in my tests things got a bit murky. The Azure DevOps Marketplace has (as of this post) two free extensions to run Pester tasks, so I decided to try them. I installed both extensions and added them…
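As an alternative to the marketplace extensions, a plain Powershell script step can run Pester directly. A minimal sketch, assuming Pester v4 parameter names and a hypothetical Tests folder in the repository:

# Refresh Pester on the build agent (hosted agents may ship an older version)
Install-Module -Name Pester -Force -SkipPublisherCheck -Scope CurrentUser
# Run the tests and emit NUnit XML so the pipeline can publish the results
$results = Invoke-Pester -Script .\Tests -OutputFile .\TEST-Pester.xml -OutputFormat NUnitXml -PassThru
# Fail the build if any test failed
if ($results.FailedCount -gt 0) { throw "$($results.FailedCount) test(s) failed." }

The generated XML can then be picked up by the built-in Publish Test Results task.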
-
Test Azure custom modules with Pester
Before I go too far along with building my LSECosmos module I must add proper tests. Just as a quick refresher (or to get some context if you’re not familiar with the concept), here are some pointers about Test Driven Development and Unit Testing:
- Test Driven Development (Wikipedia)
- Unit Testing (Wikipedia)
- Software Testing Fundamentals
While it is relatively straightforward to test simple scripts (we would likely manually run the script against 2-3 core scenarios to make sure nothing terrible happens), things can get complicated fairly quickly with longer scripts or modules, especially when they use a variety of cmdlets to take actions (think about Azure resources for example, or any other system-wide on-prem operation), need to pass data and objects back and forth between calls, and so on. If you have written enough lines of code (no matter the language/tool you use), I bet you can remember at least one occasion where you decided to make an apparently small and innocent change to a well-working piece of software and all hell broke loose. I recently came across this meme on Facebook; it sums it up nicely (thanks to CodeChef for sharing): At its core proper…
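For a taste of what such tests look like, here is a minimal Pester sketch (Add-Numbers is a hypothetical function under test, used purely for illustration):

# Add-Numbers.Tests.ps1 — Add-Numbers is a hypothetical function under test
Describe 'Add-Numbers' {
    It 'adds two integers' {
        Add-Numbers -First 2 -Second 3 | Should -Be 5
    }
    It 'throws on invalid input' {
        { Add-Numbers -First 'a' -Second 3 } | Should -Throw
    }
}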
-
CosmosDb module
CosmosDb is a non-relational, distributed database you can use to build highly scalable, global applications in Azure. It is:
– Always on
– Offers throughput scalability
– Low latency, guaranteed
– No schema or index management
– Globally distributed
– Enterprise ready
– Uses popular NoSQL APIs
https://docs.microsoft.com/en-us/azure/cosmos-db/introduction
It’s all great from a development perspective, but when it comes to management things are a bit different. CosmosDb can be managed to a certain extent through Azure CLI but as of this writing there is no official Powershell module available: I admit I have only superficially explored the existing modules and while it is great to see the Powershell Community sharing modules and scripts, it would probably be nice to have an official module by Microsoft, as many other Resource Providers offer. This is a good opportunity though to continue exploring the topic I introduced in my previous post about calling Resource Provider actions, and since these scripts will likely be reused (I can actually use them at work), why not build a reusable module?

Module basics

Writing a Windows Powershell Module is a good place to start to get an overview of this topic; I’ll write a script module and I’ll…
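To give an idea of the direction, here is a sketch of the kind of wrapper such a module could contain, using Invoke-AzResourceAction to call the listKeys action on a Cosmos DB account (the function name and parameters are illustrative, not the final module design):

# Hypothetical wrapper: read the account keys by invoking the listKeys action
# exposed by the Microsoft.DocumentDB Resource Provider
function Get-CosmosDbAccountKeys {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)] [string] $ResourceGroupName,
        [Parameter(Mandatory)] [string] $AccountName
    )

    Invoke-AzResourceAction -ResourceGroupName $ResourceGroupName `
        -ResourceType 'Microsoft.DocumentDB/databaseAccounts' `
        -ResourceName $AccountName `
        -Action 'listKeys' `
        -Force
}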
-
Invoke Azure Resource Provider actions with Powershell
Recently I started to convert all my scripts and modules to run on Powershell Core and I soon realized I had a problem. When it comes to Azure resources, I work with a combination of both ARM and RDFE and all is good in Powershell Desktop on Windows: just load (or let Powershell load for me) both the Azure module and the combination of Az.<RPName> components I need. I have now changed to Powershell Core as my default on Windows (on macOS/Linux I don’t really have a choice) but I encountered compatibility and runtime errors with the Azure and Azure.Storage modules, even if I import Windows Compatibility first. Typically Powershell Core complains about duplicate assemblies already loaded, but I also got weird runtime errors trying to run basic cmdlets. Since I want to move to Powershell Core anyway, I decided not to try to figure out how to solve the problem but rather to move to ARM completely, writing my own cmdlets and functions where not otherwise available. Luckily the Resource Providers I am interested in (Cloud Services for example) expose APIs and actions for Classic (RDFE) resources, so to get started I just need to find the right one.…
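A sketch of how that discovery can look, assuming the Az.Resources module is available (the search string is just an example matching the Cloud Services scenario above):

# List every operation (including actions) the Classic compute provider exposes
Get-AzProviderOperation -OperationSearchString 'Microsoft.ClassicCompute/*' |
    Select-Object Operation, OperationName |
    Sort-Object Operation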
-
RTM (it does not mean what you think)
Well… it actually stands for… Read The Manual (no cuss words please). I realized it while I was experimenting with some more Dynamic Parameters scenarios and playing with filters. It is a basic scenario: Get-ChildItem (or one of its forms) is a fairly common cmdlet I use every day without thinking too much about it, but it still surprised me. This is how I (and, I guess, most Powershell users) use Get-ChildItem: And if I’m looking for some specific files I can do: Easy enough. Anyway, Get-ChildItem offers more advanced filtering capabilities, so let’s say I want to get the list of txt files but I also want to exclude file1.txt from the output: No files returned? Ok, let’s try to qualify the path (even though Get-ChildItem by default takes the current directory as -Path): Again no output, no matter if I pass “.” (the current folder) or explicitly pass the full folder path to the -Path parameter. Well, let me try to explicitly include the files I want then: No matter the parameter combination I try, I cannot get the output I expect. Time to admit defeat and go back to the basics:…
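For reference, this is the behavior the manual describes: -Include is only applied when the path refers to the contents of a folder (a wildcard) or when -Recurse is used. A quick sketch of the failing and working forms, using the file names from the example above:

# Returns nothing: -Include is ignored when -Path points to a bare folder
Get-ChildItem -Path . -Include *.txt -Exclude file1.txt

# Works: the wildcard makes the path refer to the folder's contents,
# so -Include and -Exclude are applied (adding -Recurse has the same effect)
Get-ChildItem -Path .\* -Include *.txt -Exclude file1.txt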
-
Dynamic Parameters discoverability
In my previous post About Dynamic Parameters I forgot to mention an important point about discoverability. When I come across a new script or module, usually the first thing I do is check its syntax to get an idea of the kind of arguments it can accept, like this: This concise syntax tells me for example that all parameters in the first ParameterSet are optional (each parameter and its type is enclosed in square brackets), meaning I can simply run Get-AzResource on an Azure Subscription and get the list of all available Resources. The second ParameterSet on the other hand requires at least the ResourceId parameter since it is not enclosed in square brackets; the other parameters are optional though, so I may or may not use them. And so on. Get-Help too shows the script’s syntax, along with additional help details if available: Dynamic Parameters are special though: As you can see, FolderPath is displayed as an optional parameter (expected) but there is no sign of FileName, which we know will be created at runtime. That is the core of the matter: FileName does not appear in the param declaration, therefore Powershell does not see this as a…
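A related trick for provider-added dynamic parameters: Get-Command can surface them if you give it some context through -ArgumentList. A quick sketch, assuming the Certificate provider (so Windows Powershell) and a file-system current location:

# Without provider context the dynamic parameter is invisible...
(Get-Command Get-ChildItem).Parameters.Keys -contains 'CodeSigningCert'    # False

# ...but with -ArgumentList Powershell evaluates the provider and adds it
(Get-Command Get-ChildItem -ArgumentList 'Cert:').Parameters.Keys -contains 'CodeSigningCert'    # True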
-
About Dynamic Parameters
A fundamental best practice for any programming or scripting language is: do not trust your input parameters; always validate the data users (but also other pieces of automation) pass into your program. It is easy to imagine how things can go badly wrong when a user by mistake passes a string where you are expecting an integer, or an array in place of a boolean, not to mention the security implications (and potential disaster) of, for example, accepting and running things such as a SQL command or other shell commands malicious users may try to use to exploit your system. In Powershell we can use Parameter Validation attributes to check the format or the type of an input parameter, or check for null or empty strings, or that the passed value falls within a certain range, or force the user to pass only a value selected from a restricted list. This last type is called ValidateSet and allows the script author to decide the list of values the user can choose from and have Powershell throw an error if this constraint is not respected. I used it often in my scripts and modules; this is what a very simple script looks like: [CmdletBinding()]param (…
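A minimal sketch of what such a param block can look like (the parameter name and the allowed values are hypothetical):

[CmdletBinding()]
param (
    # Only the listed values are accepted; anything else makes Powershell
    # throw a validation error before the script body even runs
    [Parameter(Mandatory)]
    [ValidateSet('Dev', 'Test', 'Prod')]
    [string] $Environment
)

Write-Output "Deploying to $Environment"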
-
One-liners vs. reusable scripts and proper indentation
As I mentioned before, Powershell is a great tool for IT/SysAdmins, for Cloud Engineers (Service Engineers, SREs etc.) and in my opinion even developers, if used to the full extent of its capabilities. Everyone can find their own dimension using Powershell: you can fire up the shell and type away commands and one-liners, or write quick scripts ready to go next time around, or you can transform those scripts into Advanced Functions and combine them into Modules for reuse and distribution among your colleagues, and maybe share online. All these different uses allow (I think almost call for) different writing styles: if I’m at the interactive console (other languages would call it a REPL) I use all sorts of shortcuts and aliases to save time and typing. For example let’s take one of the commands I used in Length, Count and arrays: At the console I would instead use: Here’s the breakdown:
- “dir” is an alias for Get-ChildItem
- “-di” is a contraction for “-Directory” (I want to list only folders, not files)
- “?” is again an alias for Where-Object
- “-m” is a contraction for -Match
- “select” is an alias for Select-Object
- “-exp” is a contraction for -ExpandProperty
You can get a list…
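Putting the breakdown together, the two forms might look like this (the 'Classic' filter is just an illustrative value):

# Console shorthand: aliases plus parameter name contractions
dir -di | ? Name -m 'Classic' | select -exp Name

# The same command, expanded for a reusable script
Get-ChildItem -Directory |
    Where-Object Name -Match 'Classic' |
    Select-Object -ExpandProperty Name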
-
Length, Count and arrays
Powershell was born with ease of use in mind: it has a fairly flexible syntax and is good at guessing the user’s intention. For example the Addition operator can deal with math if the passed values are numbers:

PS >_ $a = 1
PS >_ $b = 2
PS >_ $a + $b
3

It can also properly concatenate strings if the passed values are of that type:

PS >_ $a = 'sample_'
PS >_ $b = 'string'
PS >_ $a + $b
sample_string

When used interactively at the console, Powershell tries to print a nice textual data representation with tables, lists and so on. For the sake of this discussion let’s assume we want to filter a list of folders:

PS >_ Get-ChildItem -Directory

    Directory: C:\varCount

Mode                 LastWriteTime     Length Name
----                 -------------     ------ ----
d-----            4/8/2019  1:23 PM           Microsoft.ADHybridHealthService
d-----            4/8/2019  1:23 PM           Microsoft.Advisor
d-----            4/8/2019  1:23 PM           Microsoft.AlertsManagement
d-----            4/8/2019  1:23 PM           Microsoft.Authorization
d-----            4/8/2019  1:23 PM           Microsoft.Automation
d-----            4/8/2019  1:23 PM           Microsoft.Billing
d-----            4/8/2019  1:23 PM           Microsoft.Cache
d-----            4/8/2019  1:23 PM           Microsoft.ClassicCompute
d-----            4/8/2019  1:23 PM           Microsoft.ClassicNetwork
d-----            4/8/2019  1:23 PM           Microsoft.ClassicStorage
d-----            4/8/2019  1:23 PM           Microsoft.ClassicSubscription
d-----            4/8/2019  1:23 PM           Microsoft.Commerce
d-----            4/8/2019  1:23 PM           Microsoft.Compute
d-----            4/8/2019  1:23 PM           Microsoft.Consumption
d-----…
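To sketch why the Length/Count distinction matters (folder names follow the example above; behavior as of Powershell 3.0 and later, where Count is an intrinsic member on any object):

# A filter can return many items, one item or nothing at all
$dirs = Get-ChildItem -Directory | Where-Object Name -Match 'Classic'
$dirs.Count      # works even when a single object is returned (intrinsic member)

$name = 'Microsoft.ClassicCompute'
$name.Count      # 1  - one object
$name.Length     # 24 - Length on a string is the character count!

# Wrapping the expression in @() guarantees an array, so Count is always reliable
@(Get-ChildItem -Directory | Where-Object Name -Match 'NoSuchProvider').Count    # 0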
-
Get any Function’s source code from the Function PSDrive
You may already be familiar with the concept of PSDrive or Powershell Providers: PowerShell providers are Microsoft .NET Framework-based programs that make the data in a specialized data store available in PowerShell so that you can view and manage it.The data that a provider exposes appears in a drive, and you access the data in a path like you would on a hard disk drive. You can use any of the built-in cmdlets that the provider supports to manage the data in the provider drive. And, you can use custom cmdlets that are designed especially for the data.The providers can also add dynamic parameters to the built-in cmdlets. These are parameters that are available only when you use the cmdlet with the provider data. You are likely using some Providers (especially the File System Provider) without even realizing it, while some come in handy if you need to perform actions on specific types of objects. For example you can list the certificates under your profile by running Get-ChildItem -Path Cert:\CurrentUser\My. Notice the use of the “Cert:” (Certificate) Provider (or PSDrive). The Function: drive allows you to list all functions available in the current Powershell session: As you can see some functions come with…
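A quick sketch of what this looks like in practice (prompt is a built-in function, so it should exist in any session):

# List every function defined in the current session
Get-ChildItem Function:

# Read a function's source straight from the drive
Get-Content Function:\prompt

# The same body is also exposed by Get-Command
(Get-Command prompt).Definition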