Powershell Snippets

A collection of code snippets, small pearls of wisdom and bits of knowledge that may come in handy at times.

Don’t expect well thought out descriptions or highly structured content here; these are snippets.
(But there are some links spread around for further reading.)

You can also find more info in blog posts with a “powershell” tag, and more practical functions in my Git repository on BitBucket.org.

Contents

Function/variable not available in session after running a script file

When you run “.\script.ps1”, you are executing the script but your function or variable will not be accessible from your session. You need to load the script via dot sourcing to make its functions, variables, etc. available to your running session (see also “Script Scope And Dot Sourcing” in ‘about_Scripts’):

> . .\script.ps1

Convert a SID to a name

Convert a SID to an account name or, conversely, an account name to a SID; both within the same domain.

$SID = 'S-1-5-21-314159-2658589793-314159265-3589'
$AccountName = ([System.Security.Principal.SecurityIdentifier]$SID).Translate([System.Security.Principal.NTAccount]).Value

$AccountName = 'contoso\john.doe'
$SID = ([System.Security.Principal.NTAccount]$AccountName).Translate([System.Security.Principal.SecurityIdentifier]).Value

In case the AD object’s SID is from a different (but trusted) domain:

(Get-ACL -Path C:\Folder\Subfolder1\Subfolder2).Access.IdentityReference |
    % { Get-ADObject -Server example.net -Filter "ObjectSID -eq '$_'" }

And here it is again, this time packaged as a helper function:

function convertSIDToNameAsString
{
    param
    (
        [Parameter(Mandatory=$true)] [string] $SID,
        [Parameter(Mandatory=$true)] [string] $Domain
    )
    Get-ADObject -Server $Domain -Filter "ObjectSID -eq '$SID'" | select Name -ExpandProperty Name
}

$dom = "example.com"
$sid = "S-1-2-34-5678901234-567890123-0006660000-00305"

convertSIDToNameAsString -SID $sid -Domain $dom

# ---- Another usage example: ----

$acl_access_obj = (Get-ACL C:\path).Access

$acl_access_obj | select -property *, @{n="Resolved Name"; e={convertSIDToNameAsString -Domain $dom -SID $_.IdentityReference}}

Another way to resolve the SID from a different (but trusted) domain (there’s probably a reason why it is more verbose)

Add-Type -AssemblyName System.DirectoryServices.AccountManagement

$principal_context = [System.DirectoryServices.AccountManagement.PrincipalContext]::new(
    [System.DirectoryServices.AccountManagement.ContextType]::Domain, $domain)

$object = [System.DirectoryServices.AccountManagement.GroupPrincipal]::FindByIdentity(
    $principal_context, [System.DirectoryServices.AccountManagement.IdentityType]::Sid, $SID)

$object

Note that in the example above we explicitly search for a group; there are other possible principal types. (Slightly weird inheritance chain, if one compares groups and computers/users, but that’s what Microsoft writes…)
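
For instance, a user or a computer can be looked up the same way (a minimal sketch, reusing $principal_context and $SID from above):

$user = [System.DirectoryServices.AccountManagement.UserPrincipal]::FindByIdentity(
    $principal_context, [System.DirectoryServices.AccountManagement.IdentityType]::Sid, $SID)

$computer = [System.DirectoryServices.AccountManagement.ComputerPrincipal]::FindByIdentity(
    $principal_context, [System.DirectoryServices.AccountManagement.IdentityType]::Sid, $SID)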


List all AD domains with which our own domains have a trust relationship

$Forest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()

ForEach ($forest_domain in $Forest.Domains)
{       
    Write-Output "Own domain: $($forest_domain.Name)"
}

$TrustedDomains = @()
$Trusts         = $Forest.GetAllTrustRelationships()

ForEach ($trust in $Trusts)
{
    Foreach ($Domain in $trust.TrustedDomainInformation)
    {
        $dom = $Domain.dnsname
        $dom = $dom.ToLower()
        
        If (-not ($dom.Contains("custom")) -and
            ($trust.TrustDirection -eq "Inbound" -or $trust.TrustDirection -eq "Bidirectional"))
        {
            $TrustedDomains += $dom
        }
    }
}

Write-Output "---- Trusted Domains ----"
return $TrustedDomains

For the above mentioned points (SID from a different [trusted] domain), there are also functions here that may help:


Find all AD groups that have term in its name

Because I had a mental block and needed to look up the -Filter syntax… :-O

Get-ADGroup -Filter {Name -like "*term*"} -server example.net | select name | sort name

Using variables for Get-ADUser/Get-ADGroup -Filter {…}

I recently had a problem with the -Filter parameter of these cmdlets, which left me bewildered for a while, until I found out that I’m neither the first nor the only one hitting this: As it turns out, it’s a bit picky when it comes to working with variables, since they don’t get resolved in the filter block!

So, instead of using it directly in-place (like -Filter {name -like "*$SearchTerm*"}, or similar variants), one needs to prepare it beforehand:

$WildcardSearchTerm = '*' + $SearchTerm + '*'
Get-ADUser -Server $domain -Filter {(samaccountName -eq $SearchTerm) -or (name -like $WildcardSearchTerm)}

Or, as I later found out, one could also use the LDAP way of filtering, because there, variables do get resolved:

Get-ADUser -Server $domain -LDAPFilter "(|(samaccountName=$SearchTerm)(name=*$SearchTerm*))"

Since the LDAP grammar is a bit different, see the section Notes on LDAP on this very page for some useful links.


Getting information about Active Directory attributes

This is more an Active Directory topic than a Powershell one; but it might be helpful once in a while, when dealing with AD.
(And since I don’t have a dedicated AD page and also don’t know the Powershell equivalents of those commands, I think it fits in nicely on this page.)
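
For example, dsquery can dig an attribute’s definition out of the schema partition (just a sketch; the domain components and the attribute name are placeholders):

dsquery * "CN=Schema,CN=Configuration,DC=example,DC=net" -filter "(lDAPDisplayName=employeeID)" -attr lDAPDisplayName attributeSyntax isSingleValued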

More about dsquery:


Get-ADUser/Get-ADGroup: Check if an attribute exists

Asking for the value of an AD attribute is easy, when you know it exists (and you have selected it). But what if you need to filter in/out those objects where the attribute is available/missing? Checking for $null is not the right way, as I learned the hard way; use this instead:

# Get all users where 'AttributeName' is missing:
$x = Get-ADUser -filter { -not (AttributeName -like "*") } -server example.net

Normal checks in the control flow work as usual by relying on a true or false answer:

if (-not ($UserObject.AttributeName)) { ... }
else { ... }

For scripts with a (Windows Forms) GUI: Show or hide the console

Based on http://powershell.cz/2013/04/04/hide-and-show-console-window-from-gui/

Add-Type -Name Window -Namespace Console `
    -MemberDefinition '[DllImport("Kernel32.dll")] public static extern IntPtr GetConsoleWindow();
                       [DllImport("user32.dll")]   public static extern bool ShowWindow(IntPtr hWnd, Int32 nCmdShow);'
 
function Show-Console ([bool] $show)
{
    $consolePtr = [Console.Window]::GetConsoleWindow()
    if ($show) { [Console.Window]::ShowWindow($consolePtr, 5) } # Show
    else       { [Console.Window]::ShowWindow($consolePtr, 0) } # Hide
}
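
Usage then looks like this, e.g. right before showing the Windows Forms window and again when the GUI is closed:

Show-Console $false   # Hide the console window while the GUI is up
# [... show the form, run the GUI ...]
Show-Console $true    # Bring the console back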

Report the (free) disk space on a remote computer

$ComputerName = "MyComputer"

$disk = Get-WmiObject Win32_LogicalDisk -ComputerName $ComputerName -Filter "DeviceID='C:'" |
    Select-Object Size,FreeSpace

Write-Output ("Disk C: on remote computer $ComputerName has {0:#.0} GB free `
    of {1:#.0} GB total" -f ($disk.FreeSpace/1GB), ($disk.Size/1GB))

Tail a file

Keep an eye on a file and update the last 10 lines after the file has been modified (and saved!); might be useful for logfiles.

Get-Content -Path C:\folder\file.txt -Tail 10 -Wait

Progress indicator

There are multiple ways of how to show the progress of an action (that happens in a loop, for example):

As simple one-line text

The same line will be overwritten on each iteration; that way, the progress information stays more or less in the same place and remains compact.
But please take note of this:

$Data  = (1..10)

$total = @($Data).count
$ctr1 = 0

ForEach ($i in $Data)
{
    $ctr1++
    write-host "`r[$($ctr1)/$($total)]" -NoNewline
    sleep 1
}

"  " # ----------

$ctr2 = 0
$pad  = 2 # Pad how many empty characters?

ForEach ($i in $Data)
{
    $ctr2++
    write-host "`r[$($ctr2.ToString().PadLeft($pad, ' '))/$($total)]" -NoNewline # Variant 2.
    sleep 1
}

As a (nested) progress bar

The cmdlet Write-Progress displays a progress bar in a PowerShell command window to depict the status of a command.

$total = (1..100)
$subtask = (1..10)

ForEach ($a in $total)
{
    $percent = [math]::Round($a/@($total).Count*100)
        # $total (and below, $subtask) are enclosed in @(...) to force a conversion into an array:
        # Single objects don't provide a 'Count', so a call might otherwise trigger a "PropertyNotFoundException".

    Write-Progress `
    -Id 1               <# The ID Must be an integer value #> `
    -Activity "Working..." `
    -Status "Please wait." `
    -CurrentOperation "$percent% complete" `
    -PercentComplete $percent
    
    Write-Output "Main task: $percent% completed."
    
    ForEach ($b in $subtask)
    {
        $percent2 = [math]::Round($b/@($subtask).Count*100)
        
        Write-Progress `
        -Id 2 `
        -ParentId 1     <# To get a nested progress bar, specify the parent's ID here #> `
        -Activity "Working on sub-task..." `
        -Status "Please wait." `
        -CurrentOperation "$percent2% complete" `
        -PercentComplete $percent2
        
        Write-Output "`tSub task: $percent2% completed."
    }
}

GUI: Folder Browser Dialogue Box

Inspired by https://code.adonline.id.au/folder-file-browser-dialogues-powershell/

function Find-Folders
{
    [Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms") | Out-Null
    [System.Windows.Forms.Application]::EnableVisualStyles()

    $browse                     = New-Object System.Windows.Forms.FolderBrowserDialog
    $browse.SelectedPath        = "C:\"
    # or e.g.: $browse.RootFolder = [System.Environment+SpecialFolder]'MyComputer'
    $browse.ShowNewFolderButton = $false
    $browse.Description         = "Select a directory"

    $loop = $true

    while ($loop)
    {
        if ($browse.ShowDialog() -eq "OK")
        {
            $loop = $false
            # >>--> Insert your script here <--<<
        }
        else { return }
    }
    $browse.SelectedPath
    $browse.Dispose()
}

Find-Folders

GUI: File Browser Dialogue Box (with Multiselect)

Inspired by https://code.adonline.id.au/folder-file-browser-dialogues-powershell/

Add-Type -AssemblyName System.Windows.Forms
$FileBrowser = New-Object System.Windows.Forms.OpenFileDialog -Property @{
    Multiselect = $true                               # Multiple files can be chosen
    Filter      = 'Images (*.jpg, *.png)|*.jpg;*.png' # Specified file types
}
 
[void] $FileBrowser.ShowDialog()
$path = $FileBrowser.FileNames;

if ($FileBrowser.FileNames -like "*\*")
{
    # Do something before work on individual files commences
    $FileBrowser.FileNames #Lists selected files (optional)

    foreach($file in Get-ChildItem $path)
    {
        Get-ChildItem ($file) |	ForEach-Object { <# Do something to each file #> }
    }
    # Do something when work on individual files is complete
}
else { Write-Host "Cancelled by user" }

Match users to computers

Compares two lists: One CSV file that contains computer names, one TXT file that contains users names. The computer data in the CSV file, like “Inventory - Computer.OS Login”, comes from a special tool, so adjust as needed!

Loops over all entries and looks for matches, which are collected in a new list that is then saved to an output/result text file at the end.

Nice goodie: Shows two nested Write-Progress bars for the work, using the -Id and -ParentId features.

$computers = Import-Csv -Path 'H:\AllComputers.csv' -Delimiter ";"

function match-users-to-computers ($file)
{
    $list_of_matched_computers = New-Object System.Collections.Generic.List[System.Object]
    $users                     = Get-Content -Path $file
    $user_progress             = 0

    foreach ($user in $users)
    {
        $comp_progress = 0

        foreach ($comp in $computers)
        {
            $u = $comp | Select-Object -ExpandProperty "Inventory - Computer.OS Login"

            if ($u -eq $user)
            {
                $c = $comp | Select-Object -ExpandProperty "Computer.Computer Name"
                $list_of_matched_computers.Add($c)
            }

            Write-Progress -ParentId 1 -Activity "Work in Progress" `
                -Status "Computers: $comp_progress of $($computers.Count)" `
                -PercentComplete (($comp_progress / $computers.Count) * 100)
            $comp_progress++
        }

        Write-Progress -Id 1 -Activity "Work in Progress" `
            -Status "Users: $user_progress of $($users.Count)" `
            -PercentComplete (($user_progress / $users.Count) * 100)
        $user_progress++
    }

    $filename = Get-ChildItem $file | Select-Object -ExpandProperty Name
    $list_of_matched_computers | Out-File -FilePath .\$($filename)_output.txt
}

match-users-to-computers "C:\users.txt"

CSV

For a standard .csv file, where the first line specifies the column names:

$csv = Import-Csv -Path (Join-Path (Get-Location) "test.csv") -Delimiter ';' -Encoding UTF8

CSV file is missing a header row

If the CSV file that you want to import doesn’t have a header row, you can define it explicitly when getting the data:

$csv_no_header = Get-Content (Join-Path (Get-Location) "test.csv") | ConvertFrom-Csv -Delimiter ';' -Header Name, Path, ID

Count of items separated by ‘\’ in a column of a CSV file

Sample input:

"test1";"folder1\folder1.1\folder1.2\folder1.3";"0010"
"test2";"folder2\folder2.1\folder2.2";"0020"
"test3";"folder3\folder3.1\folder3.2\folder3.3\folder3.4";"0030"

$csv = Get-Content (Join-Path (Get-Location) "test.csv") |
    ConvertFrom-Csv -Delimiter ';' -Header Name, Path, ID

$level_count = @()
$path_values = @()
$i = 0

ForEach ($line in $csv)
{
    $level_count += $line.Path.split('\').count
    New-Variable -Name "path_values_$($i)" -Force
    Set-Variable -Name path_values_$($i) -Value ($line.Path.split('\'))
    $path_values += Get-Variable -Name path_values_$($i)
    $i++
}

$path_values

Unexpected output in a CSV file: Length numbers instead of property values

With Export-CSV, one can save the values of object properties to a file; but the only property of a String[] object (in a strict sense) is its length, so the generated file will only contain the lengths of the strings as a numeric value – which is usually not what the user expects or wants.

To fix this, you’ll need to convert the type of the input from String[] to Object[]; the easiest way to achieve that is to simply apply Select-Object for all properties in the pipeline:

$data | Select-Object * | Export-CSV -Path "C:\foo.csv" -Delimiter ';' -Encoding Default -NoTypeInformation

See also: https://stackoverflow.com/questions/19450616/export-csv-exports-length-but-not-name


Clipboard

Set-Clipboard -Value "This is a test string."
Set-Clipboard -Value "This is appended." -Append  # Appends also a linebreak!

Get-Clipboard -Raw
    # 'Raw' ignores newline characters and gets the entire contents of the clipboard.

Get 6 random unique numbers from a range of 1 to 49

Get-Random -Count 6 -InputObject (1..49)

Measure objects and commands

Calculates the numeric properties of objects, and the characters, words, and lines in string objects, such as files of text:

> ($x = 1, 5, 20, 30) | Measure-Object -Maximum -Sum

Count    : 4
Average  :
Sum      : 56
Maximum  : 30
Minimum  :
Property :

Measure the time it takes to run script blocks and cmdlets:

Measure-Command -Expression { <CommandA>; <CommandB>; ... }
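
For example (any throwaway command will do; the path used here is just an arbitrary choice):

Measure-Command -Expression { Get-ChildItem -Path $env:TEMP -Recurse | Out-Null } |
    Select-Object TotalMilliseconds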

Select-Object (select)

One can use it to:

  1. Predefine the name & expression beforehand and use it later (or multiple times); or put it directly in place.
  2. Rename properties
  3. Calculate values with properties
  4. Select not only specific properties, but also filter them (e.g. with wildcards like Han*).

General tip: @{Name="..."; Expression={...}} can be shortened to @{n="..."; e={...}}

$x = @{n="New Id Name";e={$_.Id}}                                                    # (1), (2)
$y = @{Name = "multiplied by 100"; Expression={"x 100 = "+($_.Id*100)}}              # (1), (3)
  
Get-Process -Name p* |
  Select-Object -Property Name, $x, $y, Han*, @{n="HC * 2";e={$_.HandleCount * 2}} | # (1), (3), (4)
      format-table

Leads to a result akin to this:

Name                New Id Name multiplied by 100 Handles Handle HandleCount HC * 2
----                ----------- ----------------- ------- ------ ----------- ------
pageant                   13788 x 100 = 1378800       209   5616         209    418
PhoneExperienceHost       12076 x 100 = 1207600      1756   4944        1756   3512
PowerMgr                   6024 x 100 = 602400        383   5948         383    766
powershell                42168 x 100 = 4216800      1147   5408        1147   2294
procexp64                  7652 x 100 = 765200        550   3716         550   1100
pwsh                      37872 x 100 = 3787200       963   3032         963   1926

Also helpful: Select-Object -ExcludeProperty ... and Select-Object -ExpandProperty ...


EventLog and Format-Table and Out-GridView

Get the ten most recent errors and warnings from the System event log:

Get-EventLog -LogName System -EntryType Error, Warning -Newest 10

That shows a table in which the most interesting part (the message) sadly gets truncated.
To get the full view, use this:

Get-EventLog -LogName System -EntryType Error, Warning -Newest 10 | Format-Table -Wrap

Out-Gridview

The cmdlet Out-Gridview (ogv) is a nice utility for displaying data sets, and also (in some cases) a good alternative to a full-blown GUI, if you only need a minimalistic dialog for a simple selection.

$selection = get-process | Out-GridView -Title "Window Title" -OutputMode Single

Caveats


Data structure: Array

$array = @(1, 2, 3, 4)         # or simply: $array = 1, 2, 3, 4
$array[0]                      # Accessing the first element
1

# Two-Dimensional array (array of arrays); see also below!
$mdarray = @(1, 2, 3), @(4, 5, 6), @(7, 8, 9)

$mdarray[1][1]                 # Accessing the second element of the second array.
5

$array = @(                    # One can declare an array also on multiple lines.
    'Zero'                     # The comma is then optional and generally left out.
    'One'
    'Two'
    'Three'
)

Array of Arrays

$SearchTerms = @(
        ,@("User", "Group")      # The leading comma is important; see note below!
        ,@("XML", "JSON", "CSV")
        ,@("Password")
    )

    ForEach ($SearchTerm in $SearchTerms)
    {
        $result = @()
        $SearchTerm | % {
                $st = $_
                $result += get-command | select Name, Source | ? { $_.Name -like "*$st*" }
        }

        write-host ("{0,-60}" -f ($SearchTerm -join " | ")) -BackgroundColor DarkRed
        $result | sort -property Name | format-table
    }
    
write-host ("`$SearchTerms[1][1] is '{0}'" -f $SearchTerms[1][1]) # Will output "JSON"

[…] the leading comma in that statement is interpreted as the unary array construction operator [to] get an array nested in another array […]
https://stackoverflow.com/a/42772846

Combination: Array of Hashtables

Inspiration:

$Records = @()

$Records += (@{ 
    name        = "Name #1";
    description = "Desc #1";
})

$Records += (@{ 
    name        = "Name #2";
    description = "Desc #2";
})

if ($Selection = $Records | Where-Object { $_.name -eq "Name #2" })
{
    $Selection.name
    $Selection.description
}

Data structure: Hashtable

$hashtable = @{ Cat = 'Frisky'; Dog = 'Spot'; Fish = 'Nimo'; Hamster = 'Whiskers' }

$hashtable["Fish"]             # Accessing the value for the key "Fish", which is 'Nimo'.

$hashtable = @{                # One can declare a hashtable also on multiple lines.
    0 = 'Zero'                 # The semicolon is then optional and generally left out.
    1 = 'One'  
    2 = 'Two'  
    3 = 'Three'
}

Performance

The dictionary-style ($h['k'] = "v") is on average the fastest way, while the property-style ($h.'k' = "v") is way slower:

$ht = @{}

# Adding 10000 items property-style: ~8822ms
(Measure-Command { 1..10000 | ForEach-Object { $ht.$_ = $_ } }).TotalMilliseconds
$ht.Clear()

# Adding 10000 items using the Add method: ~41ms
(Measure-Command { 1..10000 | ForEach-Object { $ht.Add($_, $_) } }).TotalMilliseconds
$ht.Clear()

# Adding 10000 items dictionary-style: ~40ms
(Measure-Command { 1..10000 | ForEach-Object { $ht[$_] = $_ } }).TotalMilliseconds

Source: 1

Case sensitivity of hashtable keys

If one creates a hashtable in one of the following ways, then the keys will be stored case-sensitively:

$hashtable1 = New-Object -TypeName hashtable
$hashtable1 = New-Object System.Collections.Hashtable
$hashtable2 = [hashtable]::new()

If the keys should be saved in a case-INsensitive manner, then create the hashtable using a literal hashtable expression:

$hashtable3 = @{}
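
A quick demonstration of the difference (just a small sketch):

$cs = [hashtable]::new()
$cs['Key'] = 1
$cs['KEY'] = 2   # Case-sensitive: two separate entries
$cs.Count        # -> 2

$ci = @{}
$ci['Key'] = 1
$ci['KEY'] = 2   # Case-insensitive: the second assignment overwrites the first
$ci.Count        # -> 1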

Sources: 1, 2, 3

Get all items of an hashtable as if it was a linear array

$ht = @{
    "One" = @(1, 2, 3)
    "Two" = @("I", "II", "III")
    "Three" = @('a', 'b', 'c')
}

You can unwrap the object’s individual elements with the GetEnumerator() method to look for the key or the value items:
(Note the difference between the table output and the element access: it is shown as Name, but it’s really Key!)

> $ht

Name                           Value
----                           -----
One                            {1, 2, 3}
Three                          {a, b, c}
Two                            {I, II, III}

> $ht.GetEnumerator() | % {$_.Key}
One
Three
Two

> $ht.GetEnumerator() | % {$_.Value}
1
2
3
a
b
c
I
II
III

Find by value (instead of key)

If you want to use the hashtable both ways (i.e. looking up a value and getting its key), that is also possible, but please consider that values can appear multiple times, so a value is not a unique item (or “key”) like the real key! But with some discipline, it might work:

$Mapping = @{
    'ShortA' = 'Long Name for A'
    'ShortB' = 'Long Name for B'
    'ShortC' = 'Long Name for C'
}

# Get the key that fits the value (ignore case in comparison)
($Mapping.GetEnumerator() | ? { ($_.Value).ToLower() -eq ($i.SomeVar).ToLower() }).Name

Iterate over the items of an hashtable/array collection

$data = @{
    "1xx" = @("101", "102")
    "2xx" = @("201", "202")
}

foreach ($key in $data.keys | sort)
{
    "Key: $key"

    foreach ($value in $data[$key])
    {
        "  Value: $value"
    }
}

Output:

Key: 1xx
  Value: 101
  Value: 102
Key: 2xx
  Value: 201
  Value: 202

Combination: Hashtable of Arrays

$hashtable2 = @{
    "Level 1" = @()
    "Level 2" = @()
    "Level 3" = @()
}

$hashtable2["Level 1"] += 1, 2, 3
$hashtable2["Level 2"] += 'A', 'B', 'C'

> $hashtable2

Name                           Value
----                           -----
Level 1                        {1, 2, 3}
Level 2                        {A, B, C}
Level 3                        {}

> $hashtable2["Level 2"]
A
B
C

> $hashtable2["Level 2"][1]
B

Data structures: Use a List, not an Array

While a normal array is flexible, easy to handle and works fine for many tasks…

$array1 = @()
$array1 += $value

$array2 = "mix", 1, "of", 2, "types"

… it is advisable to use System.Collections.Generic.List instead for bigger lists/collections of data, for performance reasons:
Once you get into the thousands of items, things tend to get slow with an array: each time you use the += operator, a new array is created, all the existing items plus the new one are copied into it, and the original array is then discarded; all of that takes time and memory.

So, for those cases, better use one of the list types from the .NET framework: ArrayList or (better) List:
System.Collections.ArrayList is the older of the two and not really recommended any longer; use System.Collections.Generic.List<T> instead (see Microsoft Docs 1 and Microsoft Docs 2).

$o = New-Object System.Collections.ArrayList         # Not recommended anymore!

$x = New-Object Collections.Generic.List[Int]
$y = New-Object Collections.Generic.List[String]
$z = New-Object Collections.Generic.List[PSObject]

$a = [System.Collections.Generic.List[String]] @()
$b = [System.Collections.Generic.List[PSObject]] @()
$c = [System.Collections.Generic.List[PSCustomObject]] @()
$d = [System.Collections.Generic.List[Object]] @()

$b.Add($SomeValue)
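
To see the difference for yourself, here is a quick (and unscientific) comparison in the style of the hashtable benchmark above:

$n = 50000

# Array with '+=': a new array is allocated and copied on every addition
(Measure-Command { $arr = @(); 1..$n | ForEach-Object { $arr += $_ } }).TotalMilliseconds

# Generic list with .Add(): the list grows in place
(Measure-Command { $list = [System.Collections.Generic.List[int]] @(); 1..$n | ForEach-Object { $list.Add($_) } }).TotalMilliseconds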

Also, if possible, try to specify the exact type (e.g. int or string), instead of a generic type like Object, PSObject or PSCustomObject: For very big lists, this may give you an additional performance boost (sometimes that isn’t feasible: I’ve often used my own PSCustomObject formats to make life easier…).

See also 👉 Powershell: PSCustomObject for some more information on Object vs. PSObject vs. PSCustomObject.


Regular Expressions (RegEx)

👉 Has its own page.


Operator -contains and method .contains()

Examples

Simple one-liner

"This is a string" -contains "a"                      # = False
"This is a string" -contains "This is a string"       # = True

"This is a string".contains("A")                      # = False
"This is a string".contains("a")                      # = True

@("First item", "Second item") -contains "First"      # = False
@("First item", "Second item") -contains "first item" # = True (not case sensitive)

@("First item", "Second item").contains("First")      # = False
@("First item", "Second item").contains("first item") # = False (case sensitive!)

@(1, 2, 3) -contains 4                                # = False
@(1, 2, 3) -contains 2                                # = True

@('A', 'B', 'C') -contains 'a'                        # = True (not case sensitive)
@('A', 'B', 'C') -contains 'A'                        # = True

@('A', 'B', 'C').contains('a')                        # = False (case sensitive!)
@('A', 'B', 'C').contains('A')                        # = True

Read user IDs from a CSV file and filter them out of a collection of AD users

  1. Get the AD users that you want to check.
  2. Load from a CSV file the IDs (e.g. the SamAccountName) of the users that should be filtered out.
    Assumes that the CSV doesn’t have a header line and that only the first column interests us (as the ID).
    Lines beginning with a # symbol will not be imported.
  3. Compare and split the users accordingly.

$users_to_keep   = [System.Collections.Generic.List[PSCustomObject]] @()
$users_to_ignore = [System.Collections.Generic.List[PSCustomObject]] @()

$users  = Get-ADUser -Server $server -Filter { <# ... #> } -Properties *
$ignore = Import-CSV -Delimiter ';' -Path 'C:\IgnoreList.csv' -Header 'ID', 'Name' -Encoding Default |
            Where-Object { $_.ID -notlike "#*"}

ForEach ($u in $users)
{
    if ($ignore.ID -Contains $u.SamAccountName) { $users_to_ignore.Add($u) }
    else                                        { $users_to_keep.Add($u) }
}

Unicode

Embed an Unicode glyph in a string by its Code Point

I prefer to save my scripts as UTF-8 encoded files (without BOM, if possible); but I once had the strange case where I was using Send-MailMessage (see note below) with an HTML mail body and encoding set to UTF8, but still had problems with the subject line.

While the text in the mail body was OK (mind you, it was read from an UTF-8 text file), the subject line, which was a string within the UTF-8 encoded *.ps1 file and contained a German umlaut, displayed garbled symbols or question marks instead of the desired ü when printed or when the mail was received.

So, I read a bit on the Internet and tried a few conversion tricks, but none of those helped. The only one that led to success was the tip to embed the code point of the character’s glyph in the string via its hexadecimal representation (i.e. U+00FC applied as 0x00FC for “ü”), and then cast it to a char:

$Subject = [System.Text.Encoding]::UTF8 # Not sure if this is strictly necessary, but it works.
$Subject = "Benutzername und Passwort f$([char]0x00FC)r den Anmeldeprozess"

Regarding Send-MailMessage: Yes, I know that Microsoft says it’s obsolete, but since they don’t offer an on-board alternative: what should one do instead for those handy ’notification-from-a-script’ mails…?


Copy files from A to Z and keep original CreationTime, LastWriteTime and LastAccessTime

Notes on this snippet:

$source_dir      = "\\domain1.example.net\shareA\"
$destination_dir = "\\domain2.example.net\shareZ\"
$all_source_files = Get-ChildItem -File -Path $source_dir

ForEach ($source_file in $all_source_files)
{
    $source_file_CreationTime = Get-ItemPropertyValue -Path $source_file.fullname -Name CreationTime
    $source_file_LastWriteTime = Get-ItemPropertyValue -Path $source_file.fullname -Name LastWriteTime
    $source_file_LastAccessTime = Get-ItemPropertyValue -Path $source_file.fullname -Name LastAccessTime
   
    Copy-Item -Force -Path $source_file.fullname -Destination $destination_dir
    
    $destination_file = Get-Item -Path (Join-Path -Path $destination_dir -Childpath $source_file.name)
   
    Set-ItemProperty -Path $destination_file -Name CreationTime -Value $source_file_CreationTime
    Set-ItemProperty -Path $destination_file -Name LastWriteTime -Value $source_file_LastWriteTime
    Set-ItemProperty -Path $destination_file -Name LastAccessTime -Value $source_file_LastAccessTime
}

Extract the domain from the DistinguishedName

From a text string

$DistinguishedName = "CN=Some_Group_Name,OU=Users,OU=Headquarters,DC=sub,DC=example,DC=net"
(($DistinguishedName -split ',' | ? {$_ -like "DC=*"}) -replace 'DC=', '') -join '.' 

Returns sub.example.net.

From an AD user object

(((get-aduser -Server example.net -Identity user123 | select -expand DistinguishedName) -split ',' | ? {$_ -like "DC=*"}) -replace 'DC=', '') -join '.'

Returns example.net.


Check the bound parameters for the -Verbose flag

If you want to simply output more text in a Verbose mode, you can rely on the built-in features of an advanced function:

[CmdletBinding()]
Param()
write-verbose "Only available in verbose mode"

… and then call the function/script with the -Verbose parameter.

But what if you need more than this?
For example, you may need to calculate something before you can print the verbose result, but you don’t want to cram it all into the write-verbose statement.
A simple check like if ($verbose) {...} is sadly not possible by default.

Workaround: Check the automatic variable that holds the bound parameters of the function or script:

$VerboseMode = $null # … or $false…

$PSBoundParameters.TryGetValue("Verbose", [ref] $VerboseMode) | out-null

if ($VerboseMode) { "Verbose mode is $VerboseMode" }
else              { "Verbose mode is $VerboseMode" }

By using TryGetValue(), an unlikely (but valid) variant like -Verbose:$false will also be detected:

> script.ps1
Verbose mode is          # Nothing, since the variable was set to $null by default in the example above.

> script.ps1 -Verbose
Verbose mode is True

> script.ps1 -Verbose:$false
Verbose mode is False

> script.ps1 -Verbose:$true
Verbose mode is True

Get the folder target of a DFS link folder

Assume that you have a network share named “Admin” in a domain, either mapped to a drive or reachable via UNC path.
On that share, you have access to multiple folders, e.g. “AdminScripts”, “Tools” and “Setup”.

But on which fileservers are those directories really located?

One could open its properties in the GUI (= File Explorer) and look in the DFS tab under Referral List: Path.
But that is cumbersome and not viable if you have many folders or want to process that data further.

Instead, one could use this:

(Get-DfsnRoot -Domain example.net).Path |
    % { (Get-DfsnFolder -Path (Join-Path -Path $_ -ChildPath "\*")).Path } |
        % { Get-DfsnFolderTarget -Path $_ | select Path, TargetPath } |
            sort Path |
                ft -autosize

Notes:

With that snippet, you get an output like this:

Path                             TargetPath
----                             ----------
\\example.net\Admin\AdminScripts \\FS101XY0123.example.net\SysAdmin$\AdminScripts
\\example.net\Admin\Setup        \\fs302az8901.example.net\SysAdmin$\Tools\Setup
\\example.net\Admin\Tools        \\fs202zz4567.example.net\SysAdmin$\Tools

Update (2023-09-21): Made a proper function out of it: 👉 Get-saoeDFSPath.


Finding shares in a domain

The sample code below (based on code from here) first collects all computers in a domain and then checks each of them for available network shares, while omitting some common ones like C$.

Depending on the number of machines in an environment, this may take a while; additionally, some Access Denied exceptions may appear.

A note on using Invoke-Command: It creates a user profile on the target machine for whoever runs the cmdlet! (see also)
This may or may not be an issue for you; it was in my case: One admin was ticked off by it, because one of the analyzed machines was designed as a template for other virtual machines and should be kept pristine (*oops*)
Allegedly one can prevent it with the following parameter, but I haven’t tested it yet:
Invoke-Command -SessionOption (New-PSSessionOption -NoMachineProfile)

$machines    = Get-ADComputer -Filter * -server 'example.net' | Select-Object -ExpandProperty DNSHostName
$skip_shares = @("C$", "D$", "ADMIN$", "IPC$")
$result      = @()
$count       = 0 # Optional, for the progress bar.
 
ForEach ($m in $machines)
{
    $count++ # Optional, for the progress bar.
    write-progress -activity "Analyzing..." -status "$count/$(@($machines).count)" # Optional progress bar.
    
    try
    {
        $shares = Invoke-Command -ComputerName $m -ErrorAction Stop -ScriptBlock { Get-WmiObject -class Win32_Share }
            # See note above on using "Invoke-Command"!
        
        ForEach ($share in $shares)
        {
            if ($skip_shares -contains $share.Name) { <# uninteresting #> }
            else                                    { $result += $share | select PSComputerName, Name, Path, Description }
        }
    }
    catch
    {
        write-warning "$($m): $($_.Exception.Message)"
    }
}

write-output $result | Format-Table -AutoSize -Wrap

Handle the components of an UNC path

If you need to deal with the components of an UNC path (e.g. \\server.example.net\share$\Directory\Foo\about.txt), then the cmdlet Split-Path will not be of much use.

For example, it can’t separate the root/server from the rest of the UNC path.
Of course, one could resort to manually splitting the string and similar techniques for that, but why bother when there’s a better alternative available?
Instead, simply turn the UNC path into a System.URI type:

# Variant 1:
$X = New-Object System.Uri -ArgumentList "\\server.example.net\share$\Directory\Foo\about.txt"

# Variant 2:
[system.uri] "\\server.example.net\share$\Directory\Foo\about.txt"

The resulting object will contain something like this, from which you can pick and choose what you need (e.g. the Host, or the array of Segments, etc.):

AbsolutePath   : /share$/Directory/Foo/about.txt
AbsoluteUri    : file://server.example.net/share$/Directory/Foo/about.txt
LocalPath      : \\server.example.net\share$\Directory\Foo\about.txt
Authority      : server.example.net
HostNameType   : Dns
IsDefaultPort  : True
IsFile         : True
IsLoopback     : False
PathAndQuery   : /share$/Directory/Foo/about.txt
Segments       : {/, share$/, Directory/, Foo/...}
IsUnc          : True
Host           : server.example.net
Port           : -1
Query          :
Fragment       :
Scheme         : file
OriginalString : \\server.example.net\share$\Directory\Foo\about.txt
DnsSafeHost    : server.example.net
IdnHost        : server.example.net
IsAbsoluteUri  : True
UserEscaped    : False
UserInfo       :

On NTFS permissions and ACLs for files and folders

General overview and help

Tip: Use method SetAccessControl() instead of cmdlet Set-ACL

I wanted to set the ACL (access control list) from within domain A on a UNC path in domain B, but got an error when using Set-ACL:

EN: “Some or all Identity references could not be translated”
DE: “Manche oder alle Identitätsverweise konnten nicht übersetzt werden”

So, after much testing and researching, I stumbled upon a post which mentioned a bug with Set-ACL, and lo and behold, all worked fine with SetAccessControl()!

From Changing NTFS security permissions using Powershell:

Saving the ACL Changes

To avoid a bug in Set-Acl we will be using the SetAccessControl() method to save our permissions.
If we need to be compatible with PowerShell 6 we must use Set-Acl, but you may experience odd behavior after using it and from personal experience I don’t recommend using Set-Acl.

What we are going to do instead is use the SetAccessControl() method of the DirectoryInfo class to apply our ACL.
To do this we can use Get-Item and then call SetAccessControl() on the returned object.

  (Get-Item .\Directory\ChildDirectory\).SetAccessControl($acl)

Putting all the steps together we can get or create an ACL object, add a rule, and apply that rule to a file system object.
The code block below shows all three steps together.

  PS C:\Users\larntz\Documents> $acl = Get-Acl .\Directory\
  PS C:\Users\larntz\Documents> $rule = New-Object System.Security.AccessControl.FileSystemAccessRule( `
                                        "DOMAIN\larntz","Write","ContainerInherit,ObjectInherit","None","Allow")
  PS C:\Users\larntz\Documents> $acl.SetAccessRule($rule)
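
Condensed into one piece (a sketch with a placeholder path and account, in the style of Windows PowerShell 5.1, where the objects returned by Get-Item expose SetAccessControl() directly):

$path = '.\Directory\ChildDirectory'
$acl  = Get-Acl -Path $path
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
            'DOMAIN\SomeUser', 'Write', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$acl.SetAccessRule($rule)
(Get-Item -Path $path).SetAccessControl($acl)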

On Inheritance

SetAccessRuleProtection(isProtected, preserveInheritance)

Sets or removes protection of the access rules associated with this ObjectSecurity object.
Protected access rules cannot be modified by parent objects through inheritance.

isProtected [is a boolean value]:
true: Protect the access rules associated with this ObjectSecurity object from inheritance.
false: Allow inheritance.
preserveInheritance [is a boolean value]:
true: Preserve inherited access rules.
false: Remove inherited access rules. This parameter is ignored if isProtected is false.

So, there are three possible permutations:

  1. SetAccessRuleProtection($true, $true): Break inheritance and convert all inherited permissions to explicit permissions.
  2. SetAccessRuleProtection($true, $false): Break inheritance and remove all inherited permissions.
  3. SetAccessRuleProtection($false, <# ignored #>): Keep inheritance.

Example:

# Break inheritance and convert all inherited permissions to explicit permissions: 
$acl = Get-Acl -Path $folder
$acl.SetAccessRuleProtection($true, $true)
Set-Acl -Path $folder -AclObject $acl

Get-ACLs vs. GetAccessControl()

Related to the above mentioned tip to use the method SetAccessControl() instead of the cmdlet Set-ACL, a similar hint exists for Get-ACL vs. GetAccessControl(), but in my case it was more related to the performance than the capabilities, limitations or reliability of the Get-/Set-ACL cmdlet:

In my (few) test cases, GetAccessControl() was much faster than Get-ACL; I only had one exception on a single run, maybe a loading/caching issue…

> measure-command { (Get-ACL \\sub.example.net\folder\foo).Access }

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 58
Ticks             : 584343
TotalDays         : 6,76322916666667E-07
TotalHours        : 1,623175E-05
TotalMinutes      : 0,000973905
TotalSeconds      : 0,0584343
TotalMilliseconds : 58,4343                 # <----

> measure-command { (get-item \\sub.example.net\folder\foo).GetAccessControl('access').Access }

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 20
Ticks             : 209298
TotalDays         : 2,42243055555556E-07
TotalHours        : 5,81383333333333E-06
TotalMinutes      : 0,00034883
TotalSeconds      : 0,0209298
TotalMilliseconds : 20,9298                 # <----

Handle long paths in a NTFS filesystem

A while ago I had to handle very long paths (get the ACLs, delete them…).

Back then, that happened in an older environment, where only Powershell 5.0 on a Windows Server 2012 R2 (best case) or a Windows Server 2008 with Powershell 2 (worst case) was available. And since this was all happening in a foreign domain of a client, I couldn’t test whether it would have worked had I accessed it remotely from my actual work environment. So, for that case, I used some old workarounds and tricks (shortening the paths manually and via subst…) — no idea if there is a more elegant way.
But luckily, in newer environments (with at least Powershell 5.1), the support for very long and deeply nested paths is much better.

The usual suspects, like New-Item, Get-Item, Remove-Item, Get-ACL, etc. all worked fine with the new path notation.
Important: Sometimes you must then use these cmdlets with -LiteralPath (instead of -Path) for the new, converted paths; otherwise it may not work!

There are two different prefixes, one for local and one for UNC paths; and you should know beforehand which prefix to use (unless you prefer trial-and-error). For that, there is a nice little IsUnc property available on System.Uri, which can be used for checking. It returns either $true or $false:

$PathInfo = [System.Uri] "C:\"
if ($PathInfo.IsUnc) { <# Use UNC-prefix #> } else { <# Use the other prefix #> } 
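
For reference: the prefix for local paths is \\?\ and the one for UNC paths is \\?\UNC\; building the converted path could look like this (a sketch):

$Path     = '\\server.example.net\share$\very\deep\nested\path'
$PathInfo = [System.Uri] $Path

if ($PathInfo.IsUnc) { $LongPath = '\\?\UNC\' + $Path.TrimStart('\') }
else                 { $LongPath = '\\?\'     + $Path }

Get-Item -LiteralPath $LongPath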

By the way: To also be able to get to these locations via the GUI/File Explorer, the server also needs to be prepared (it didn’t work on my Terminal Server: at one point I couldn’t get further, it just stopped without an error or notice). Since this is not my area, here’s just some half-truth and hearsay: a registry value named HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled seems to be set to 1 — maybe that’s enough… otherwise, ask someone who knows servers ;-)


Windows Registry

👉 Has its own page.


Find unique/multiple items in a collection or list all unique items

$c = @('a', 'b', 'c', 'a', 'd', 'e', 'a', 'f', 'b', 'g')

# Only items that appear multiple times (count is greater than 1):
$c | Group-Object | Where-Object {$_.Count -gt 1}

# Only items that are unique (count is exactly 1):
$c | Group-Object | Where-Object {$_.Count -eq 1}

# List all unique items
$c | Sort-Object -Unique      # Variant 1
$c | Sort-Object | Get-Unique # Variant 2

Source


Windows User Name Formats

Taken from an ActiveDirectory user; see also User Name Formats.

> Get-ADUser -Identity UserX -Server fully.qualified.domain.net -Properties SamAccountName, msDS-PrincipalName, UserPrincipalname

...
SamAccountName    : UserX
msDS-PrincipalName: DOMAIN\UserX
UserPrincipalName : GivenName.Surname@fully.qualified.domain.net
...

Pad/Align/Format output strings

A small tip on how to output some status/progress info that is supposed to look somewhat uniform when scrolling down; for example:

Entry   1/100 -> Value: '1'
Entry   1/100 -> The Value is   1
...
Entry  99/100 -> Value: '99'
Entry  99/100 -> The Value is  99
Entry 100/100 -> Value: '100'
Entry 100/100 -> The Value is 100

That was generated by the following code, which uses two different approaches:

  1. write-host and some helper techniques to convert the format.
  2. The format operator (-f).

$DataCollection = @( 1..100 | % { $_.ToString() } ) # Setting up some random example data.
$Padding = (@($DataCollection).count).ToString().Length
$counter = 1

ForEach ($i in $DataCollection)
{
    # -- 1. --
    write-host "Entry $($counter.ToString().PadLeft($Padding, ' '))/$(@($DataCollection).count) -> Value: '$i'"
    
    # -- 2. --
    "Entry {0,$Padding}/$(@($DataCollection).count) -> The Value is {1,3}" -f $counter, $i
    
    $counter++
}

Show data, from a given date, up until now (today), and all days in between

Generates an overview on certain data (here: “for how many AD user accounts was the password changed on that day?”), starting with a given date, up until now (today), and all days in between.

Example:

Date       | Accounts
-----------+---------
2023-01-30 |        0
2023-01-31 |        1
2023-02-01 |        2
           +=========
Total count:        3

$users = get-aduser -server 'example.net' -filter {SamAccountName -like "xyz*"} -properties PasswordLastSet
 
$time = get-date -Year 2023 -Month 1 -Day 30 -Hour 0 -Minute 0 -Second 0 # Starting point in time.
$now  = get-date                             -Hour 0 -Minute 0 -Second 0 # Today/Now.

[int] $total_count = 0

"Date       | Accounts"
"-----------+---------"

do
{
    $result = @()
   
    foreach ($user in $users)
    {
        $pls = $user | select -expand PasswordLastSet
       
        if ($pls)
        {
            [datetime] $dpls = $pls
           
            if (($dpls -ge $time) -and ($dpls -le $time.AddDays(1)))
            {
                $result += $user | select samaccountname, PasswordLastSet
            }
        }
    }
    
    $total_count += @($result).count
    
    "$(get-date -date $time -format 'yyyy-MM-dd') | $((@($result).count).ToString().PadLeft(8, ' '))"
    #$result | ft 
   
    $time = $time.AddDays(1)
}
while ($time -le $now)

"           +========="
"Total count: $($total_count.ToString().PadLeft(8, ' '))"

Write text to the console and save it also to a variable

The cmdlet Write-Output can do that by using the common parameter -OutVariable (or: -ov).

Please take note: The name of the variable needs to be provided to the parameter without a dollar sign ($) in front of it!
✔️ -OutVariable X
❌ -OutVariable $X

Example 1: Simply (over)write the text to the variable:

> write-output "Text 1" -OutVariable BufferVar
Text 1

> $BufferVar
Text 1

Example 2: Append the new text to the variable:

With a plus symbol (+) in front of the variable name, the value will be appended to the out-variable; otherwise, it will just overwrite the previous value.

> write-output "Text 2" -OutVariable +BufferVar
Text 2

> $BufferVar
Text 1
Text 2

By the way: One “gotcha” is described here:
When you expect the OutVariable to have a certain type, but reality disagrees, you should take a look at that article (I’ve only skimmed it to date, since I haven’t yet had that case in my code).


Modify an AD account with Set-ADUser

This cmdlet has plenty of predefined parameters, with which one can change most of the common attributes of an AD user account:

Set-ADUser -GivenName "Alexander" ...
Set-ADUser -Enabled $false ...
Set-ADUser -City "Hamburg" ...
etc.

When attributes need to be modified that are not associated with any of the predefined parameters, use the generic -Add, -Replace, -Remove and -Clear parameters:

The attributes and values must be provided in a hashtable format (@{name=value}) and the names must be provided in LDAP syntax (see the section Notes on LDAP on this very page for some useful links on this topic):

Set-ADUser -Add @{otherTelephone='+4940001', '+4940002'; otherMobile='+491510009'} -Remove @{otherTelephone='+4940003'}
Set-ADUser -Replace @{ Attribute1LDAPName=value[]; Attribute2LDAPName=value[] }

Hint: In some cases, an existing value cannot be overwritten directly, but must be cleared first; otherwise an error like “Multiple values were specified for an attribute that can have only one value” can appear.

Set-ADUser -server 'example.net' -Identity 'UserID' -Clear mailNickname
Set-ADUser -server 'example.net' -Identity 'UserID' -Add @{ mailNickname = "New Value" }

More info:


Timed Loop

When making my first steps into the cloudy nightmares of AzureAD, I quickly learned that actions there (understandably) took a bit longer than in an on-premise AD.

For example, the time for the creation of an account (or at least for the positive feedback of such an action) varies a lot: it often takes anywhere between 10 seconds and over a minute until the account is ready in our environment.

That also means that using a fixed wait time in a loop is not going to work: it will often either waste time (e.g. if the account is done in 15 seconds, but the timeout is set to 1 minute), or fail because the time limit is set too short (e.g. creation occasionally takes 40 seconds, but the timeout is set to 30 seconds).

Looking for a solution, I found a very helpful text and code:
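
The gist of it is a polling loop with an overall time limit instead of one fixed sleep. A rough sketch of the idea (not the linked code; Test-SomethingIsReady is a hypothetical placeholder for whatever check applies, e.g. querying AzureAD for the new account):

$timeout   = New-TimeSpan -Minutes 2
$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
$ready     = $false

do
{
    Start-Sleep -Seconds 5
    $ready = Test-SomethingIsReady   # <- hypothetical placeholder check
}
until ($ready -or ($stopwatch.Elapsed -gt $timeout))

if (-not $ready) { write-warning "Timed out after $($stopwatch.Elapsed)." }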

Update (2023-07-16): Made a function out of it, with some bells and whistles: 👉 Wait-saoeTimedLoop.


Password and Credential Handling

  1. Get the password from the user via Read-Host as a System.Security.SecureString
    … or use Get-Credential to produce a secure System.Management.Automation.PSCredential object:

    $secpw = Read-Host -AsSecureString -Prompt "Please enter the password"
    
    $cred = Get-Credential -Message "Please enter the..." -Username "TheUsername"
    
  2. Convert a plain-text string to an encrypted string and save that in an XML file…
    … or export the credential object to an XML file:
    Important: A secure string, once encrypted by user “A” on computer “C”, cannot then be decrypted by a different user or on a different machine anymore!

    ConvertTo-SecureString -Force -AsPlainText -String 'Passphrase' | Export-CliXml -Path '.\password.xml'
    
    $cred | Export-CliXml -Path '.\credential.xml'
    
  3. Read the password/credential from an XML file:

    $SecurePassword = Import-Clixml -Path '.\password.xml'
    
    $c = Import-Clixml -Path '.\credential.xml'
    $c.UserName   # <- a String
    $c.Password   # <- a SecureString
    
  4. Decrypt the password from a SecureString back to plain text again (e.g. for control purposes, for tests, or if the other party can’t work with a SecureString):

    # Since Powershell 7:
    $UnsecurePassword = ConvertFrom-SecureString -SecureString $SecurePassword -AsPlainText
    
    # In earlier Powershell versions, use this workaround instead:
    $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($SecurePassword)
    $UnsecurePassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
    [Runtime.InteropServices.Marshal]::ZeroFreeBSTR($BSTR) # Don't forget to clean up!
    

    Bonus: Store the encrypted password back in a PSCredential object:

    $Credential = New-Object System.Management.Automation.PSCredential('<username>', $SecurePassword)
    

Alternative approach

Although saving credentials (username and password) in a PSCredential object is pretty easy (as shown above), the downside is that it can’t be used by a different user or on another computer.

And relying/hoping that many team members will each generate and maintain an individual PSCredential/XML file isn’t viable, for multiple reasons:

A different way (relatively simple and relatively secure) could be this:
Generate a random 256-bit AES key (and store it securely!) to encrypt and decrypt the actual password from a file (which should also be saved securely, elsewhere!).

The gotcha/weak spot/risk in this is: To decrypt the password again from that file (so that the code can use/provide it), it is required that the team member has read-access to the generated 256-bit AES key.

And if someone can read this key file, then that someone could also decrypt the password file with it!
Therefore one must ensure that the key file is stored in a different and secure location, to which only certain accounts (e.g. your team colleagues) have access.

Inspiration:

Example

Requires two separate code snippets:

  1. Do this part only once and NOT in the actual script itself!
    Create a 256-bit AES encryption key, encrypt the password with it and save both in (possibly separate) secure location(s):

    $Username      = "TheUser"
    $Credential    = Get-Credential -Message "The Message" -Username $Username
    $EncryptionKey = New-Object Byte[] 32
    [Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($EncryptionKey)
    $EncryptionKey       | Out-File -Filepath "<SecLocA>\aes.key"
    $Credential.Password | ConvertFrom-SecureString -Key $EncryptionKey | Out-File -Filepath "<SecLocB>\pw.txt"
    
  2. Do this part every time in the actual script that need the password.
    Decrypt the password with the key and then use it:

    $Username      = "TheUser"
    $DecryptionKey = Get-Content -Path "<Path-A>\aes.key"
    $EncryptedPW   = Get-Content -Path "<Path-B>\pw.txt"
    $DecryptedPW   = $EncryptedPW | ConvertTo-SecureString -Key $DecryptionKey
    
    # Only if needed! This converts a secure string to a plain text string:
    $BSTR        = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($DecryptedPW)
    $PlainTextPW = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
    [Runtime.InteropServices.Marshal]::ZeroFreeBSTR($BSTR)
    
    # (... use the password to login or whatever...)
    

A test/check should produce something like this:

# ---- Test/Check: ----
write-host ("Username             : {0}" -f $Username)
write-host ("Password (encrypted) : {0}" -f $EncryptedPW)
write-host ("Password (decrypted) : {0}" -f $DecryptedPW)
write-host ("Password (plain text): {0}" -f $PlainTextPW)

Username             : TheUser
Password (encrypted) : 76(... and so on...)A=
Password (decrypted) : System.Security.SecureString
Password (plain text): The_Password_123!

Timestamps

More: https://ss64.com/ps/get-date.html

> (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffK") # UTC + Timezone
2023-07-17T18:34:33.949Z

> (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssK")     # UTC + Timezone
2023-07-17T18:34:54Z

> (Get-Date).ToLocalTime().ToString("yyyy-MM-ddTHH:mm:ssK")         # Local Time + Timezone/Offset
2023-07-17T20:36:33+02:00

> (Get-Date).ToLocalTime().ToString("yyyy-MM-dd HH:mm:ss")          # Local Time
2023-07-17 21:04:50

> get-date -Format "yyyy-MM-dd HH:mm:ss"                            # Local Time (.NET Format)
2023-07-17 21:05:59

> get-date -UFormat "%Y-%m-%d %H:%M:%S"                             # Local Time (UFormat)
2023-07-17 21:08:12

> (Get-Date).ToLocalTime().ToString("yyyy-MM-ddTHH:mm:ssz")         # Local Time + TZ offset
2023-07-17T20:38:36+2

> (Get-Date).ToLocalTime().ToString("yyyy-MM-ddTHH:mm:sszz")        # Local Time + TZ offset
2023-07-17T20:38:48+02

> (Get-Date).ToLocalTime().ToString("yyyy-MM-ddTHH:mm:sszzz")       # Local Time + TZ offset
2023-07-17T20:38:51+02:00

Import a module, but exclude certain functions

The premise is that there is a module that provides many functions, but I want to exclude several functions of it from being imported.

Why? For example, because I know that they won’t work under a certain condition, in a certain environment, and I don’t want to distract the users by offering them, so they should not be loaded at all.

And since this may be a dynamic and changing collection of functions, I don’t want to modify the module’s manifest (*.psd1).

The cmdlet “Import-Module” has the “-Function” parameter, where one can list the names (with wildcards) of the functions that should be imported exclusively – but not the inverse (nor does the cmdlet have a “-Filter” parameter). But one can bend that parameter anyways.

The following gets the job done, albeit I have to admit that it looks a bit awkward at first:
This statement will import the module “NameOfTheModule”, but will exclude any functions from being imported if their names start with Test* or Update*:

"NameOfTheModule" |
    % { Import-Module -Name $_ -function $(((Get-Module -ListAvailable -Name $_).ExportedCommands).Keys |
        ? {-not (($_ -like "Test*") -or ($_ -like "Update*"))}) }

You can check that it works with Get-Command -Module "NameOfTheModule" | sort at the end.


LDAP

Since the LDAP grammar and syntax is a bit strange for newcomers (that includes myself, too), here are some useful links:


Exceptions

How to get the full type of an exception

try   { get-content NonExistingItem -EA Stop }
catch { $_.Exception.GetType().FullName }
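
For the example above, the output should be something like:

System.Management.Automation.ItemNotFoundException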

Current User(name)

get-aduser -identity ([System.Security.Principal.WindowsIdentity]::GetCurrent().User.Value) -Properties Mail

Other methods to get the current user(name) may have disadvantages, but on the other hand work without the ActiveDirectory cmdlets:
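
Some of the usual candidates (just a quick sketch; each behaves slightly differently, e.g. with or without the domain part, or regarding remote sessions):

$env:USERNAME
whoami
[System.Security.Principal.WindowsIdentity]::GetCurrent().Name
(Get-WmiObject -Class Win32_ComputerSystem).UserName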

Regarding the last variant: the Win32_ComputerSystem class can return a lot of local system/Windows properties.

Name of a user that is logged on currently. This property must have a value. In a terminal services session, UserName returns the name of the user that is logged on to the console – not the user logged on during the terminal service session.


JSON

👉 Has its own page.


Variables

Dynamic variables (New-Variable etc.)

Example 1

This is a (slightly adjusted) snippet from a bigger script from my dayjob, in which directories are created and ACLs/ACEs (access control lists/access control entries) are set according to certain rules (e.g. the names of the permission groups for the ACL incorporate the nesting level of the affected directory) – but all that is irrelevant here and just background information:

# [...]
$level_count = 1
$level_limit = 4
   
ForEach ($PathComponent in $PathComponents)
{
    if ($level_count -lt $level_limit)
    {
        # Create a dynamic variable for this nesting level and store the current ACL of $path in it.
        New-Variable -Name "ACL_Level_$level_count" -Value $level_count | Out-Null
        Set-Variable -Name "ACL_Level_$level_count" -Value (Get-ACL $path)

        $path = (Join-Path $path $PathComponent)
        # The access rule per level was prepared earlier (not shown in this snippet).
        $Rule_ListDirectories = Get-Variable -Name ('Level_' + $level_count + '_ListDirectories_AccessRule')

        # Get-Variable returns a PSVariable object; the actual ACL is in its .Value property.
        $acl = Get-Variable -Name "ACL_Level_$level_count"
        ($acl.Value).AddAccessRule(($Rule_ListDirectories.Value))

        Set-Acl -Path $path -AclObject ($acl.Value)
        $level_count++
    }
    # [...]
}
# [...]

Example 2

Just some “artificial” lines of code to show the use of Get-/Set-/New-/Remove-/Clear-Variable:

$Name1 = "Some Random Text"
$Name2 = $Name1 -replace ' ', '_'

ForEach ($i in @(0..9))
{ 
    try
    {
        Get-Variable -Name "$($Name2)_Collection" -Scope Script -ErrorAction Stop | Out-Null
        
        if (($i % 2) -eq 0)
        {
            Clear-Variable  -Name "$($Name2)_Collection" -Scope Script
        }
        else
        {
            Remove-Variable -Name "$($Name2)_Collection" -Scope Script
            New-Variable -Name "$($Name2)_Collection" -Scope Script -Value @()
        }
    }
    catch [System.Management.Automation.ItemNotFoundException]
    {
        New-Variable -Name "$($Name2)_Collection" -Scope Script
        Set-Variable -Name "$($Name2)_Collection" -Scope Script -Value @()
    }
}

$TempVar1 = Get-Variable -Name "$($Name2)_Collection" -Scope Script
$TempVar1.Value += "Foo"
$TempVar1.Name
$TempVar1.Value 
'----------'
$TempVar2 = Get-Variable -Name "$($Name2)_Collection" -Scope Script -ValueOnly
$TempVar2 += "Foo"
$TempVar2

Example 3

If you need the name of the variable (not its value) and want to keep it a bit more flexible, try this:

$foo = 123
write-host "Name of the variable: $((get-variable -Name ("`$foo").replace("`$", '')).Name)"

This is the plain version, where the variable name still needs to be known. Unfortunately, it doesn’t seem possible to resolve, for example, a reference parameter (passed to a function) back to its original name…

Get the type of a variable

Sometimes one needs to know the exact type of a variable.
Use the GetType() method for that.
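
A quick interactive illustration:

> (Get-Date).GetType().FullName
System.DateTime

> "Some text".GetType().FullName
System.String

> (1..5).GetType().FullName
System.Object[]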

For example, if the input variable for the function below is not already an ADUser object, it needs to be verified/fetched first; otherwise the parameter can be used just as is:

function Test-User ($Identity)
{
    if ($Identity.GetType().FullName -eq 'Microsoft.ActiveDirectory.Management.ADUser')
    {
        $ADUser = $Identity
    }
    elseif ($Identity.GetType().FullName -eq 'System.String')
    {
        $ADUser = get-aduser -server $Server -identity $Identity
            # $Server is assumed to be defined elsewhere;
            # beware that any error handling is skipped in this example!
    }
    else
    {
        write-warning "Don't know how to handle type $($Identity.GetType().FullName)!"
        return
    }
    
    # [Additional work on/for the AD user...]
}

Variable with space or special characters

I needed to provide a mandatory variable with some additional help information, but couldn’t use a call to Read-Host; so instead of…

> $x = Read-Host -Prompt "Foo (Default is '2')"
Foo (Default is '2'): 100

… I used something like this as a name for the mandatory variable: ${Foo (Default is '2')}:

> function x ([Parameter(Mandatory=$true)] ${Foo (Default is '2')}) { return ${Foo (Default is '2')} }
> x
cmdlet x at command pipeline position 1
Supply values for the following parameters:
Foo (Default is '2'): 100
100

But one issue remains unsolved: How to use such a parameter directly in the shell?

> x -Foo (Default is '2') 100
> x -"Foo (Default is '2')" 100
> x -(Foo (Default is '2')) 100
> x -$(Foo (Default is '2')) 100
> x -{Foo (Default is '2')} 100
etc.

Because all these result in an error:

x: A positional parameter cannot be found that accepts argument '100'.
Get-Variable: A positional parameter cannot be found that accepts argument 'space'.
x: A positional parameter cannot be found that accepts argument 'Foo (Default is '2')'.
etc.
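
One idea that might serve as a workaround (a sketch only, not verified across PowerShell versions): pass the value via splatting, because then the parameter name is just a key in a hashtable and never has to be typed as a bare token:

$splat = @{ "Foo (Default is '2')" = 100 }
x @splat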

Checks against IsNullOrEmpty and IsNullOrWhitespace

"String", "", " ", "`t", $null, @(), @("ArrayItem"), @(" "), @("`t"), @($null) |
    % { "[IsNullOrEmpty] {0,-5} : {1,-5} [IsNullOrWhitespace]" `
        -f [string]::IsNullOrEmpty($_), [string]::IsNullOrWhiteSpace($_) }
Input            [string]::IsNullOrEmpty()   [string]::IsNullOrWhiteSpace()
"String"         $False                      $False
""               $True                       $True
" "              $False                      $True
"`t"             $False                      $True
$null            $True                       $True
@()              $True                       $True
@("ArrayItem")   $False                      $False
@(" ")           $False                      $True
@("`t")          $False                      $True
@($null)         $True                       $True

Performance tip for suppressing output

There are four common ways to suppress output with PowerShell:

  1. Pipe the output to the Cmdlet Out-Null: "Sample Text" | Out-Null
  2. Redirect the output to $Null: "Sample Text" > $Null
  3. Cast/Convert the output into void with a type accelerator: [void] "Sample Text"
  4. Assign the output to $Null: $Null = "Sample Text"

And they do differ in their performance:
Especially creating the pipeline and invoking the cmdlet Out-Null makes the first version very slow, while the assignment to $Null is usually the fastest:

'Pipe    : {0} ms' -f (Measure-Command { for ($i = 0; $i -lt 100000; $i++) { Get-Random | Out-Null } }).TotalMilliseconds
'Redirect: {0} ms' -f (Measure-Command { for ($i = 0; $i -lt 100000; $i++) { Get-Random > $Null    } }).TotalMilliseconds
'Void    : {0} ms' -f (Measure-Command { for ($i = 0; $i -lt 100000; $i++) { [void] (Get-Random)   } }).TotalMilliseconds
'Assign  : {0} ms' -f (Measure-Command { for ($i = 0; $i -lt 100000; $i++) { $Null = Get-Random    } }).TotalMilliseconds
          Measurement #1    Measurement #2    Measurement #3
Pipe    : 2331,9334 ms      2346,4813 ms      2328,5482 ms
Redirect: 1158,4730 ms      1149,0418 ms      1149,1416 ms
Void    : 1138,2298 ms      1138,6937 ms      1133,2445 ms
Assign  : 1139,6566 ms      1128,5159 ms      1124,0977 ms

Stumbled over it in the comments(!) of this post


Reference Variable

A reference variable (type [ref]) can be used to change the value of a variable that is passed to a function. That way a function can directly modify an existing variable, instead of just returning a value.

Some general remarks:

Notes on the code below:

  1. Use [ref] (or nothing) in the parameter definition of a function, not a type like [int] or [string].
  2. Refer to $variable.Value when using a [ref] parameter in a function.
  3. One must enclose the parameter in parentheses when passing a reference to a function: -Param ([ref] $Var).

[int] $Var1 = $null
[int] $Var2 = $null

$Var1
$Var2
"----"

function TestFunc ([ref] $Param1, [int] $Param2) # See point 1 above.
{
    $Param1.Value++ # See point 2 above.
    $Param2++
}

1..3 | % {
    TestFunc -Param1 ([ref] $Var1) -Param2 $Var2 # See point 3 above.
    $Var1
    $Var2
}

"----"
$Var1
$Var2
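
The output shows that only $Var1 (passed as [ref]) is changed by the function, while $Var2 keeps its value:

0
0
----
1
0
2
0
3
0
----
3
0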

Further reading:


Writing big text files faster

Using the pipeline and Out-File (... | Out-File -FilePath...) can be a bit slow when the data is pretty big.

I noticed it while troubleshooting an issue where the collected log records at the end resulted in (multiple) text files of sizes between 5 and 20+ Megabytes: Saving them took a few seconds.

In such cases, [System.IO.File]::WriteAllText() can improve the performance.

Note: WriteAllText() doesn’t resolve relative paths as one might expect; therefore it’s advisable to provide a fully qualified path:

$full_path = (Resolve-Path -Path ".\file.txt").Path

[System.IO.File]::WriteAllText($full_path, $data, [System.Text.Encoding]::Default)
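
To get a rough feeling for the difference, both variants can be compared with Measure-Command (just a sketch; the absolute numbers depend on the machine, and "file.txt" is only a placeholder):

$data      = (1..200000 | % { "Log record number $_ - some sample text" }) -join "`r`n"   # A few MB of text
$full_path = Join-Path -Path (Get-Location).Path -ChildPath "file.txt"

'Out-File     : {0} ms' -f (Measure-Command { $data | Out-File -FilePath $full_path }).TotalMilliseconds
'WriteAllText : {0} ms' -f (Measure-Command { [System.IO.File]::WriteAllText($full_path, $data, [System.Text.Encoding]::Default) }).TotalMilliseconds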