CosmosDb is a non-relational, distributed database you can use to build highly scalable, global applications in Azure. It is:
– Always on
– Offers throughput scalability
– Low latency, guaranteed
– No schema or index management
– Globally distributed
– Enterprise ready
– Usable through popular NoSQL APIs
https://docs.microsoft.com/en-us/azure/cosmos-db/introduction It’s all great from a development perspective, but when it comes to management things are a bit different. CosmosDb can be managed to a certain extent through Azure CLI, but as of this writing there is no Powershell module available. I admit I have only superficially explored the existing modules, and while it is great to see the Powershell community sharing modules and scripts, it would be nice to have an official module from Microsoft, as many other Resource Providers offer. This is a good opportunity, though, to continue exploring the topic I introduced in my previous post about calling Resource Provider actions, and since these scripts will likely be reused (I can actually use them at work), why not build a reusable module? Module basics: Writing a Windows Powershell Module is a good place to start to get an overview of this topic; I’ll write a script module and I’ll…
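As a rough illustration of where the post is heading, here is a minimal sketch of what a function in such a script module could look like. The function name and parameters are hypothetical, not part of any official module; it assumes the Az.Resources module and uses the real `listKeys` action exposed by the `Microsoft.DocumentDB` Resource Provider:

```powershell
# Hypothetical function for a script module (e.g. CosmosDb.psm1).
# Requires the Az.Resources module and an authenticated Az context.
function Get-CosmosDbAccountKey {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [string] $ResourceGroupName,

        [Parameter(Mandatory)]
        [string] $AccountName
    )

    # Wrap the Resource Provider action: 'listKeys' on
    # Microsoft.DocumentDB/databaseAccounts returns the account keys.
    Invoke-AzResourceAction -ResourceGroupName $ResourceGroupName `
        -ResourceType 'Microsoft.DocumentDB/databaseAccounts' `
        -ResourceName $AccountName `
        -Action 'listKeys' -Force
}

# Export only the public surface of the module.
Export-ModuleMember -Function Get-CosmosDbAccountKey
```

Grouping related wrappers like this into a `.psm1` file with an `Export-ModuleMember` call is the basic pattern for a reusable script module.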
-
Invoke Azure Resource Provider actions with Powershell
Recently I started to convert all my scripts and modules to run on Powershell Core and I soon realized I had a problem. When it comes to Azure resources, I work with a combination of both ARM and RDFE and all is good in Powershell Desktop on Windows: just load (or let Powershell load for me) both the Azure module and the combination of Az.<RPName> components I need. I have now switched to Powershell Core as my default on Windows (on macOS/Linux I don’t really have a choice) but I encountered compatibility and runtime errors with the Azure and Azure.Storage modules, even if I import Windows Compatibility first. Typically Powershell Core complains about duplicate assemblies already loaded, but I also got weird runtime errors trying to run basic cmdlets. Since I want to move to Powershell Core anyway, I decided not to try to figure out how to solve the problem but rather move to ARM completely, writing my own cmdlets and functions where none are otherwise available. Luckily the Resource Providers I am interested in (Cloud Services for example) expose APIs and actions for Classic (RDFE) resources, so to get started I just need to find the right one.…
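A minimal sketch of calling a Resource Provider through the ARM REST API from Powershell Core, assuming the Az.Accounts module and an authenticated context; the subscription id is a placeholder and the exact api-version for this provider is an assumption, not taken from the post:

```powershell
# Placeholder subscription id; replace with your own.
$subscriptionId = '00000000-0000-0000-0000-000000000000'

# ARM path for a Classic (RDFE-era) resource type surfaced through ARM.
# The api-version shown here is an assumption; check the provider's
# supported versions with Get-AzResourceProvider.
$path = "/subscriptions/$subscriptionId/providers/Microsoft.ClassicCompute" +
        "/domainNames?api-version=2016-11-01"

# Invoke-AzRestMethod (Az.Accounts) signs the request with the current
# Az context, so no manual token handling is needed.
$response = Invoke-AzRestMethod -Path $path -Method GET
$response.Content | ConvertFrom-Json | Select-Object -ExpandProperty value
```

The advantage of this approach is that it works the same on Powershell Core across Windows, macOS and Linux, with no dependency on the legacy Azure (RDFE) module.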
-
RTM (it does not mean what you think)
Well… it actually stands for… Read The Manual (no cuss words please). I realized it while I was experimenting with some more Dynamic Parameters scenarios and playing with filters. It is a basic scenario: Get-ChildItem (or one of its forms) is a fairly common cmdlet I use every day without thinking too much about it, but it still surprised me. This is how I (and, I guess, most Powershell users) use Get-ChildItem: And if I’m looking for some specific files I can do: Easy enough. Get-ChildItem offers more advanced filtering capabilities though, so let’s say I want to get the list of txt files but I also want to exclude file1.txt from the output: No files returned? Ok, let’s try to qualify the path (even though Get-ChildItem by default takes the current directory as -Path): Again no output, no matter if I pass “.” (current folder) or explicitly pass the full folder path to the -Path parameter. Well, let me try to explicitly include the files I want then: No matter the parameter combination I try, I cannot get the output I expect. Time to admit defeat and go back to the basics:…
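For context, the behavior the post runs into is documented: -Include and -Exclude qualify the -Path value itself, so with a bare directory path they are matched against the directory name rather than its children. They take effect when the path ends in a wildcard or when -Recurse is used. A sketch of the failing and working forms (file names are illustrative):

```powershell
# Matched against the leaf of -Path (the directory "."), so nothing
# in the directory is filtered the way you might expect:
Get-ChildItem -Path . -Include *.txt -Exclude file1.txt

# Ending the path with a wildcard makes the filters apply to the children:
Get-ChildItem -Path .\* -Include *.txt -Exclude file1.txt

# -Recurse also activates -Include/-Exclude, at the cost of descending
# into subdirectories:
Get-ChildItem -Path . -Recurse -Include *.txt -Exclude file1.txt
```

In other words, the cmdlet was never broken; the manual simply had the answer all along.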
-
Dynamic Parameters discoverability
In my previous post About Dynamic Parameters I forgot to mention an important point about discoverability. When I come across a new script or module, usually the first thing I do is check its syntax to get an idea of the kind of arguments it can accept, like this: This concise syntax tells me, for example, that all parameters in the first ParameterSet are optional (each parameter and its type is enclosed in square brackets), meaning I can simply run Get-AzResource on an Azure Subscription and get the list of all available Resources. The second ParameterSet, on the other hand, requires at least the ResourceId parameter since it is not enclosed in square brackets; the other parameters are optional though, so I may or may not use them. And so on. Get-Help also shows the script’s syntax, along with additional help details if available: Dynamic Parameters are special though: as you can see, FolderPath is displayed as an optional parameter (expected) but there is no sign of FileName, which we know will be created at runtime. That is the core of the matter: FileName does not appear in the param declaration, therefore Powershell does not see this as a…
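One way to surface provider dynamic parameters despite this limitation is to give Get-Command enough context through -ArgumentList. A small sketch using the built-in Certificate provider (whose CodeSigningCert dynamic parameter only exists in that context):

```powershell
# Get-Command only shows dynamic parameters when it has enough context;
# -ArgumentList supplies it. With a Certificate-provider path, the
# provider's dynamic parameters (e.g. CodeSigningCert) become visible.
(Get-Command Get-ChildItem -ArgumentList 'Cert:').Parameters.Values |
    Where-Object { $_.IsDynamic } |
    Select-Object -ExpandProperty Name
```

The same trick works for any provider drive: pass a path on that drive as the argument and the provider's runtime-only parameters show up in the metadata.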
-
About Dynamic Parameters
A fundamental best practice for any programming or scripting language is: do not trust your input parameters; always validate the data users (but also other pieces of automation) pass into your program. It is easy to imagine how things can go badly wrong when a user by mistake passes a string where you are expecting an integer, or an array in place of a boolean, not to mention the security implications (and potential disaster) of, for example, accepting and running things such as a SQL command or other shell commands malicious users may try to use to exploit your system. In Powershell we can use Parameter Validation attributes to check the format or the type of an input parameter, check for null or empty strings, check that the passed value falls within a certain range, or force the user to pass only a value selected from a restricted list. This last type is called ValidateSet and allows the script author to decide the list of values the user can choose from and have Powershell throw an error if this constraint is not respected. I use it often in my scripts and modules; this is how a very simple script looks: [CmdletBinding()]param (…
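The excerpt's script is truncated, but a minimal ValidateSet example looks like this (the parameter name and values are illustrative):

```powershell
# Minimal ValidateSet sketch: only the three listed values are accepted.
[CmdletBinding()]
param (
    [Parameter(Mandatory)]
    [ValidateSet('Development', 'Staging', 'Production')]
    [string] $Environment
)

# Passing anything outside the set (e.g. -Environment Test) makes
# Powershell throw a parameter validation error before the body runs.
Write-Output "Deploying to $Environment"
```

A nice side effect is that ValidateSet values also drive tab completion: pressing Tab after -Environment cycles through the allowed values.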
-
One-liners vs. reusable scripts and proper indentation
As I mentioned before, Powershell is a great tool for IT/SysAdmins, for Cloud Engineers (Service Engineers, SREs etc.) and, in my opinion, even for developers, if used to the full extent of its capabilities. Everyone can find their own dimension using Powershell: you can fire up the shell and type away commands and one-liners, write quick scripts ready to go next time around, or transform those scripts into Advanced Functions and combine them into Modules for reuse and distribution among your colleagues, maybe even sharing them online. All these different uses allow (I would almost say call for) different writing styles: if I’m at the interactive console (other languages would call it a REPL) I use all sorts of shortcuts and aliases to save time and typing. For example let’s take one of the commands I used in Length, Count and arrays: At the console I would instead use: Here’s the breakdown:
– “dir” is an alias for Get-ChildItem
– “-di” is a contraction of “-Directory” (I want to list only folders, not files)
– “?” is again an alias, for Where-Object
– “-m” is a contraction of -Match
– “select” is an alias for Select-Object
– “-exp” is a contraction of -ExpandProperty
You can get a list…
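The exact command from the post is truncated, but putting the aliases above together, an illustrative pair of equivalents (the match pattern is an assumption) could be:

```powershell
# Script-friendly form: full cmdlet and parameter names, one stage per line.
Get-ChildItem -Directory |
    Where-Object Name -Match 'Classic' |
    Select-Object -ExpandProperty Name

# Equivalent interactive one-liner using the aliases and contractions
# described in the breakdown above:
dir -di | ? Name -m 'Classic' | select -exp Name
```

Both produce the same output; the first belongs in a script or module, the second saves keystrokes at the console.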
-
Length, Count and arrays
Powershell was born with ease of use in mind; it has a fairly flexible syntax and is good at guessing the user’s intention. For example the addition operator can deal with math if the passed values are numbers:

PS >_ $a = 1
PS >_ $b = 2
PS >_ $a + $b
3

It can also properly concatenate strings if the passed values are of that type:

PS >_ $a = 'sample_'
PS >_ $b = 'string'
PS >_ $a + $b
sample_string

When used interactively at the console, Powershell tries to print a nice textual representation of the data with tables, lists and so on. For the sake of this discussion let’s assume we want to filter a list of folders:

PS >_ Get-ChildItem -Directory

    Directory: C:\var

Mode   LastWriteTime    Length Name
----   -------------    ------ ----
d----- 4/8/2019 1:23 PM        Microsoft.ADHybridHealthService
d----- 4/8/2019 1:23 PM        Microsoft.Advisor
d----- 4/8/2019 1:23 PM        Microsoft.AlertsManagement
d----- 4/8/2019 1:23 PM        Microsoft.Authorization
d----- 4/8/2019 1:23 PM        Microsoft.Automation
d----- 4/8/2019 1:23 PM        Microsoft.Billing
d----- 4/8/2019 1:23 PM        Microsoft.Cache
d----- 4/8/2019 1:23 PM        Microsoft.ClassicCompute
d----- 4/8/2019 1:23 PM        Microsoft.ClassicNetwork
d----- 4/8/2019 1:23 PM        Microsoft.ClassicStorage
d----- 4/8/2019 1:23 PM        Microsoft.ClassicSubscription
d----- 4/8/2019 1:23 PM        Microsoft.Commerce
d----- 4/8/2019 1:23 PM        Microsoft.Compute
d----- 4/8/2019 1:23 PM        Microsoft.Consumption
d-----…
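The classic pitfall this post builds toward is that a filter matching a single item returns that object rather than an array, which made properties like Count unreliable before Powershell 3.0 (which added a synthetic Count on scalars). A short sketch, with an illustrative pattern:

```powershell
# A filter matching exactly one folder returns a single DirectoryInfo,
# not an array. Wrapping the pipeline in @() forces array semantics,
# so .Count is reliable in every Powershell version, even for 0 or 1 matches.
$result = @(Get-ChildItem -Directory | Where-Object Name -Match 'Classic')
$result.Count
```

The @() array subexpression operator is the defensive idiom: it costs nothing when the result is already an array and removes the ambiguity when it is not.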
-
Get any Function’s source code from the Function PSDrive
You may already be familiar with the concept of PSDrives or Powershell Providers: PowerShell providers are Microsoft .NET Framework-based programs that make the data in a specialized data store available in PowerShell so that you can view and manage it. The data that a provider exposes appears in a drive, and you access the data in a path like you would on a hard disk drive. You can use any of the built-in cmdlets that the provider supports to manage the data in the provider drive. And, you can use custom cmdlets that are designed especially for the data. The providers can also add dynamic parameters to the built-in cmdlets. These are parameters that are available only when you use the cmdlet with the provider data. You are likely using some Providers (especially the File System Provider) without even realizing it, while some come in handy if you need to perform actions on specific types of objects. For example you can list the certificates under your profile by running Get-ChildItem -Path Cert:\CurrentUser\My. Notice the use of the “Cert:” (Certificate) Provider (or PSDrive). The Function: drive allows you to list all functions available in the current Powershell session: As you can see some functions come with…
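The technique the title refers to can be sketched in two lines; `prompt` is used here only because it exists in every interactive session:

```powershell
# Read a function's source code straight from the Function: drive.
Get-Content -Path Function:\prompt

# The same body is exposed through the command metadata:
(Get-Command prompt).ScriptBlock
```

Because Function: is just another PSDrive, the usual item cmdlets work too: Get-ChildItem Function: lists every function in the session, and Test-Path Function:\MyFunction checks whether one is defined.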
-
Livesite, resource names and maintain sanity under stress
Coming from CSS (Customer Service and Support) I was used to working under pressure. As you can imagine, when a customer opens a support ticket it means something is broken, or at the bare minimum not working as they would like, so being able to quickly figure out the root cause of the problem and suggest how to resolve it is a key component of the role. Equally important is being able to manage those situations when things are really badly broken and the stakes are high: imagine an e-commerce website where transactions keep failing for some reason. The customer is losing money and their customers are unhappy (frustrated? fuming?) and likely taking their business elsewhere. Not nice. As a Service Engineer in Azure, when one of our services is down it does not impact one customer, it impacts half a continent! Something I learned quickly in my new role is to think in terms of livesite. What happens if I need to do “x” during a livesite incident? How quickly can I find that information, or get to that tool? This applies to almost everything I do, from seemingly negligible decisions (I need a new Storage Account, how…
-
[TOC..TOC..] … is this thing on?!?
2 years, 2 months and 10 days, or 26.3 months, or 114.3 weeks, or about 800 days (you get the idea) since my last blog post. Professionally speaking that was another life entirely. I used to write on MSDN, someone may even have read and still remember some of my posts on https://blogs.msdn.com/b/carloc: at that time I was working in Microsoft CSS (Customer Service and Support) with web development technologies, and I used to write about debugging tough problems and describe troubleshooting techniques I came across working with customers all over Europe. It was interesting and fun, I learned a lot and I enjoyed sharing it. As all good things tend to come to an end, I decided it was time to move on; I wanted a new challenge, I wanted to learn something new, and that required a good amount of energy in itself: figuring out what to do next, where to go and what to look for. At that point in my career I had worked as a Windows and Web developer, then joined Microsoft helping other developers fix and improve their web applications, so I figured the next logical move after building and troubleshooting…