Tutorial PowerShell
Topics Covered:
Starting PowerShell
First Steps with the Console
o Incomplete and Multi-Line Entries
o Important Keyboard Shortcuts
o Deleting Incorrect Entries
o Overtype Mode
o Command History: Reusing Entered Commands
o Automatically Completing Input
o Scrolling Console Contents
o Selecting and Inserting Text
o QuickEdit Mode
o Standard Mode
Customizing the Console
o Opening Console Properties
o Defining Options
o Specifying Fonts and Font Sizes
o Setting Window and Buffer Size
o Selecting Colors
o Directly Assigning Modifications in PowerShell
o Saving Changes
Piping and Routing
o Piping: Outputting Information Page by Page
o Redirecting: Storing Information in Files
Summary
Starting PowerShell
On Windows 7 and Server 2008 R2, Windows PowerShell is installed by default. To use PowerShell on older
systems, you need to download and install it. The update is free. The simplest way to find the appropriate download
is to visit an Internet search engine and search for "KB968930 Windows XP" (replace the operating system with the
one you use). Make sure you pick the correct update. It needs to match your operating system language and
architecture (32-bit vs. 64-bit).
After you have installed PowerShell, you'll find it in the Accessories program group. Open this program group,
click on Windows PowerShell and then launch the PowerShell executable. On 64-bit systems, you will also find a
version marked as (x86) so you can run PowerShell both in the default 64-bit environment and in an extra 32-bit
environment for backwards compatibility.
You can also start PowerShell directly. Just press (Windows)+(R) to open the Run window and then enter
powershell (Enter). If you use PowerShell often, you should open the program folder for Windows PowerShell and
right-click on Windows PowerShell. That will give you several options:
Add to the start menu: On the context menu, click on Pin to Start Menu so that PowerShell will be
displayed directly on your start menu from now on and you won't need to open its program folder first.
Quick Launch toolbar: Click Add to Quick Launch toolbar if you use Windows Vista and would like to
see PowerShell right on the Quick Launch toolbar inside your taskbar. Windows XP lacks this command so
XP users will have to add PowerShell to the Quick Launch toolbar manually.
Jump List: On Windows 7, after launching PowerShell, you can right-click the PowerShell icon in your
taskbar and choose Pin to Taskbar. This will not only keep the PowerShell icon in your taskbar so you can
easily launch PowerShell later; it also gives you access to its new "Jump List": right-click the icon (or pull it
upwards with your mouse). The jump list contains a number of useful PowerShell functions: you can
launch PowerShell with full administrator privileges, run the PowerShell ISE, or open the PowerShell help
file. By the way: drag the pinned icon all to the left in your taskbar. Now, pressing WIN+1 will always
launch PowerShell. And here are two more tips: holding SHIFT while clicking the PowerShell icon in your
taskbar will open a new instance, so you can open more than one PowerShell console. Holding
SHIFT+CTRL while clicking the PowerShell icon opens the PowerShell console with full Administrator
privileges (provided User Account Control is enabled on your system).
Keyboard shortcuts: Administrators particularly prefer using a keyboard instead of a mouse. If you select
Properties on the context menu, you can specify a key combination in the hot-key field. Just click on this
field and press the key combination intended to start PowerShell, such as (Alt)+(P). In the properties
window, you also have the option of setting the default window size to start PowerShell in a normal,
minimized, or maximized window.
hello (Enter)
As soon as you press (Enter), your entry will be sent to PowerShell. Because PowerShell has never heard of the
command "hello" you will be confronted with an error message highlighted in red.
For example, if you'd like to see which files and folders are in your current directory, then type dir (Enter). You'll
get a text listing of all the files in the directory. PowerShell's communication with you is always text-based.
PowerShell can do much more than display simple directory lists. You can just as easily list all running processes or
all installed hotfixes: Just pick a different command as the next one provides a list of all running processes:
Get-Process (Enter)
Get-Hotfix (Enter)
PowerShell's advantage is its tremendous flexibility since it allows you to control and display nearly all the
information and operations on your computer. The command cls deletes the contents of the console window and the
exit command ends PowerShell.
The "incomplete input" prompt will also appear when you enter an incomplete arithmetic problem like this one:
2 + (Enter)
>> 6 (Enter)
>> (Enter)
8
The continuation prompt generally takes its cue from initial and terminal characters like open and closed brackets or
quotation marks at both ends of a string. As long as the symmetry of these characters is incorrect, you'll continue to
see the prompt. However, you can activate it even in other cases:
dir `(Enter)
>> -recurse(Enter)
>>(Enter)
So, if the last character of a line is what is called a "back-tick" character, the line will be continued. You can retrieve
that special character by pressing (`).
If you haven't entered anything, then the cursor won't move since it will only move within entered text. There's one
exception: if you've already entered a line and pressed (Enter) to execute the line, you can make this line appear
again character-by-character by pressing (Arrow right).
The hotkey (Ctrl)+(Home) works more selectively: it deletes all the characters from the current position up to the
beginning of the line. Characters to the right of the current position (if there are any) remain intact. (Ctrl)+(End)
does it the other way around and deletes everything from the current position up to the end of the line. Both
combinations are useful only after you've pressed (Arrow left) to move the cursor to the middle of a line, specifically
when text is both to the left and to the right of the cursor.
Overtype Mode
If you enter new characters and they overwrite existing characters, then you know you are in overtype mode. By
pressing (Insert) you can switch between insert and overtype modes. The default input mode depends on the
console settings you select. You'll learn more about console settings soon.
If you just wanted to polish or correct one of your most recent commands, press (Arrow up) to re-display the
command that you entered. Press (Arrow up) and (Arrow down) to scroll up and down your command history.
(F5) and (F8) do the same as the up and down arrow keys.
This command history feature is extremely useful. Later, you'll learn how to configure the number of commands the
console "remembers". The default setting is the last 50 commands. You can display all the commands in your
history by pressing (F7) and then scrolling up and down the list to select commands using (Arrow up) and (Arrow
down) and (Enter).
The numbers before the commands in the Command History list only denote the sequence number. You cannot enter
a number to select the associated command. What you can do is move up and down the list by hitting the arrow
keys.
Simply press (F9) to "activate" the numbers so that you can select a command by its number. This opens a menu that
accepts the numbers and returns the desired command.
The keyboard sequence (Alt)+(F7) will clear the command history and start you off with a new list.
(F8) provides more functionality than (Arrow up) as it doesn't just show the last command you entered, but keeps a
record of the characters you've already typed in. If, for example, you'd like to see all the commands you've entered
that begin with "d", type:
d(F8)
Press (F8) several times. Every time you press a key another command will be displayed from the command history
provided that you've already typed in commands with an initial "d."
cd (Tab)
The command cd changes the directory in which you are currently working. Put at least one space behind the
command and then press (Tab). PowerShell suggests a sub-directory. Press (Tab) again to see other suggestions. If
(Tab) doesn't come up with any suggestions, then there probably aren't any sub-directories available.
This feature is called Tab-completion, which works in many places. For example, you just learned how to use the
command Get-Process, which lists all running processes. If you want to know what other commands there are that
begin with "Get-", then type:
Get-(Tab)
Just make sure that there's no space before the cursor when you press (Tab). Keep hitting (Tab) to see all the
commands that begin with "Get-".
Tab-completion works really well with long path names that require a lot of typing. For example:
c:\p(Tab)
Every time you press (Tab), PowerShell will prompt you with a new directory or a new file that begins with "c:\p."
So, the more characters you type, the fewer options there will be. In practice, you should type in at least four or five
characters to reduce the number of suggestions.
When the list of suggestions is long, it can take a second or two until PowerShell has compiled all the possible
suggestions and displays the first one.
Wildcards are allowed in path names. For example, if you enter c:\pr*e (Tab) in a typical Windows system,
PowerShell will respond with "c:\Program Files".
PowerShell will automatically put the entire response inside double quotation marks if the response contains
whitespace characters.
QuickEdit Mode
QuickEdit is the default mode for selecting and copying text in PowerShell. Select the text using your mouse and
PowerShell will highlight it. After you've selected the text, press (Enter) or right-click on the marked area. This will
copy the selected text to the clipboard which you can now paste into other applications. To unselect press (Esc).
You can also insert the text in your console at the blinking command line by right-clicking your mouse.
Standard Mode
If QuickEdit is turned off and you are in Standard mode, the simplest way to mark and copy text is to right-click in
the console window. If QuickEdit is turned off, a context menu will open.
Select Mark to mark text and Paste if you want to insert the marked text (or other text contents that you've copied to
the clipboard) in the console.
It's usually more practical to activate QuickEdit mode so that you won't have to use the context menu.
That will open a context menu. You should select Properties and a dialog box will open.
To get help, click on the question mark button on the title bar of the window. A question mark is then pinned to your
mouse pointer. Next, click on the option you need help for. The help appears as a ScreenTip window.
Defining Options
Under the heading Options are four panels of options:
The console often uses the raster font as its default. This font is available in a specific range of sizes with available
sizes shown in the "Size" list. Scalable TrueType fonts are much more flexible. They're marked in the list by a "TT"
symbol. When you select a TrueType font, you can choose any size in the size list or enter them as text in the text
box. TrueType fonts can be dynamically scaled.
You should also try experimenting with TrueType fonts by using the "bold fonts" option. TrueType fonts are often
more readable if they're displayed in bold.
Your choice of fonts may at first seem a bit limited. To get more font choices, you can add them to the console font
list. The limited default font list is supposed to prevent you from choosing unsuitable fonts for your console.
One reason for this is that the console always uses the same width for each character (fixed width fonts). This
restricts the use of most Windows fonts because they're proportional typefaces: every character has its own width.
For example, an "i" is narrower than an "m". If you're sure that a certain font will work in the console, then here's
how to add the font to the console font list.
If there's already an entry that has this name, then call the new entry "000" or add as many zeroes as required to
avoid conflicts with existing entries. You should then double-click your new entry to open it and enter the name of
the font. The name must be exactly the same as the official font name, just the way it's stated under the key
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts.
The newly added font will now turn up in the console's option field. However, the new font will work only after you
either log off at least once or restart your computer. If you fail to do so, the console will ignore your new font when
you select it in the dialog box.
You should select a width of at least 120 characters in the window buffer size area, and the height should be at least
1,000 lines or larger. This gives you the opportunity to use the scroll bars to scroll the window contents back up so
that you can look at all the results of your previous commands.
You can also set the window size and position on this tab if you'd like your console to open at a certain size and
screen position on your display. Choose the option Let system position window and Windows will automatically
determine at what location the console window will open.
Selecting Colors
On the Colors tab, you can select your own colors for four areas:
You have a palette of 16 colors for these four areas. So, if you want to specify a new font color, you should first
select the option Screen Text and click on one of the 16 colors. If you don't like any of the 16 colors, then you can
mix your own special shade of color. Just click on a palette color and choose your desired color value at the upper
right from the primary colors red, green, and blue.
$host.ui.rawui (Enter)
$host.ui.rawui.ForegroundColor = "Yellow" (Enter)
$host.ui.rawui.WindowTitle = "My Console" (Enter)
These changes will only be temporary. Once you close and re-open PowerShell, the changes are gone. You would
have to include these lines into one of your "profile scripts" which run every time you launch PowerShell to make
them permanent. You can read more about this in Chapter 10.
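A sketch of how such a permanent change might look, assuming the automatic variable $profile points to your profile script (the exact profile path varies by system and user):

```powershell
# Create the profile script if it does not exist yet, then append the color setting.
# Run this once; the setting then applies to every new PowerShell session.
if (!(Test-Path $profile)) { New-Item -Path $profile -ItemType File -Force }
Add-Content $profile '$host.ui.rawui.ForegroundColor = "Yellow"'
```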
Saving Changes
Once you've successfully specified all your settings in the dialog box, you can close the dialog box. If you're using
Windows Vista or above, all changes will be saved immediately, and when you start PowerShell the next time, your
new settings will already be in effect. You may need Admin rights to save settings if you launched PowerShell with
a link in your start menu that applies for all users.
If you're using Windows XP, you'll see an additional window and a message asking you whether you want to save
changes temporarily (Apply properties to current window only) or permanently (Modify shortcut that started this
window).
Piping uses the vertical bar (|). The results of the command to the left of the pipe symbol are then fed into the
command on the right side of the pipe symbol. This kind of piping is also known in PowerShell as the "pipeline":
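For example, you can pipe a long directory listing to the command more to display it page by page (the folder is chosen for illustration):

```powershell
# The listing pauses after each full screen; press a key to continue
Dir C:\Windows | more
```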
You can press (Ctrl)+(C) to stop output. Piping also works with other commands, not just more. For example, if
you'd like to get a sorted directory listing, pipe the result to Sort-Object and specify the columns you would like to
sort:
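A sketch of such a sorted listing, assuming you want to sort by file size (the Length property) and then by name:

```powershell
# Sort the current directory listing by size, then alphabetically by name
Dir | Sort-Object Length, Name
```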
You'll find more background information on piping as well as many useful examples in Chapter 5.
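To store information in a file instead, replace the pipe with the redirection symbol ">". A sketch, with the cmdlet and the file name help.txt chosen to match the surrounding examples:

```powershell
# Write the help for Get-Process into a text file instead of the console
Get-Help Get-Process > help.txt
```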
The information won't appear in the console but will instead be redirected to the specified file. You can then open
the file.
However, opening a file in PowerShell is different from opening a file in the classic console:
help.txt (Enter)
The term "help.txt" is not recognized as a cmdlet, function,
operable program, or script file. Verify the term and try again.
At line:1 character:8
+ help.txt <<<<
If you only specify the file name, PowerShell will look for it in all folders listed in the PATH environment variable.
So to open a file, you will have to specify its absolute or relative path name. For example:
.\help.txt (Enter)
Or, to make it even simpler, you can use Tab-completion and hit (Tab) after the file name:
help.txt(Tab)
The file name will automatically be completed with the absolute path name, and then you can open it by pressing
(Enter):
You can also append data to an existing file. For example, if you'd like to supplement the help information in the file
with help on native commands, you can attach this information to the existing file with the redirection symbol ">>":
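A sketch using the classic console's help command (assuming the file help.txt created earlier):

```powershell
# ">>" appends to the file rather than overwriting it
Cmd /c help >> help.txt
```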
If you'd like to directly process the result of a command, you won't need traditional redirection at all because
PowerShell can also store the result of any command to a variable:
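For example (variable name chosen for illustration):

```powershell
# Store the directory listing in a variable instead of redirecting it to a file
$result = Dir
# The variable now holds the results and can be processed further
$result
```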
Variables are universal data storage and variable names always start with a "$". You'll find out more about variables
in Chapter 3.
Summary
PowerShell is part of the operating system starting with Windows 7 and Server 2008 R2. On older operating systems
such as Windows XP or Server 2003, it is an optional component. You will have to download and install PowerShell
before using it.
The current version is 2.0, and the easiest way to find out whether you are using the most current PowerShell
version is to launch the console and check the copyright statement. If it reads "2006", then you are still using the old
and outdated PowerShell 1.0. If it reads "2009", you are using the correct version. There is no reason why you
should continue to use PowerShell 1.0, so if you find it on your system, update to 2.0 as soon as possible. If you
wanted to find out your current PowerShell version programmatically, output the automatic variable $psversiontable
(simply by entering it). It not only tells you the current PowerShell version but also the versions of the core
dependencies. This variable was introduced in PowerShell version 2.0, so on version 1.0 it does not exist.
The PowerShell console resembles the interactive part of PowerShell where you can enter commands and
immediately get back results. The console relies heavily on text input. There are plenty of special keys listed in
Table 1.1.
(Alt)+(F7): Deletes the current command history
(PgUp), (PgDn): Displays the first (PgUp) or last (PgDn) command you used in the current session
(Enter): Sends the entered lines to PowerShell for execution
(End): Moves the editing cursor to the end of the command line
(Del): Deletes the character to the right of the insertion point
(Esc): Deletes the current command line
(F2): Moves in the current command line to the next occurrence of the specified character
(F4): Deletes all characters from the insertion point up to the specified character
(F7): Displays the last entered commands in a dialog box
(F8): Displays commands from the command history beginning with the characters you already entered in the command line
(F9): Opens a dialog box in which you can enter the number of a command from your command history to retrieve it; (F7) displays the numbers of commands in the command history
(Left arrow), (Right arrow): Move one character to the left or right respectively
(Arrow up), (Arrow down), (F5), (F8): Repeat previously entered commands
(Home): Moves the editing cursor to the beginning of the command line
(Backspace): Deletes the character to the left of the insertion point
(Ctrl)+(C): Cancels command execution
(Ctrl)+(End): Deletes all characters from the current position to the end of the command line
(Ctrl)+(Arrow left), (Ctrl)+(Arrow right): Move the insertion point one word to the left or right respectively
(Ctrl)+(Home): Deletes all characters from the current position up to the beginning of the command line
(Tab): Automatically completes the current entry, if possible
Table 1.1: Important keys and their meaning in the PowerShell console
You will find that the keys (Arrow up), which repeats the last command, and (Tab), which completes the current
entry, are particularly useful. By hitting (Enter), you complete an entry and send it to PowerShell. If PowerShell
can't understand a command, an error message appears highlighted in red stating the possible reasons for the error.
Two special commands are cls (deletes the contents of the console) and exit (ends PowerShell).
You can use your mouse to select information in the console and copy it to the Clipboard by pressing (Enter) or by
right-clicking when you have the QuickEdit mode turned on. With QuickEdit mode turned off, you will have to
right-click inside the console and then select Mark in a context menu.
The basic settings of the console—QuickEdit mode as well as colors, fonts, and font sizes—can be customized in
the properties window of the console. This can be accessed by right-clicking the icon to the far left in the title bar of
the console window. In the dialog box, select Properties.
Along with the commands, a number of characters in the console have special meanings and you have already
become acquainted with three of them:
Piping: The vertical bar "|" symbol pipes the results of a command to the next. When you pipe the results
to the command more, the screen output will be paused once the screen is full, and continued when you
press a key.
Redirection: The symbol ">" redirects the results of a command to a file. You can then open and view the
file contents. The symbol ">>" appends information to an existing file.
PowerShell 2.0 also comes with a simple script editing tool called "ISE" (Integrated Scripting Environment). You'll find
it in PowerShell's jump list (if you are using Windows 7), and you can also launch it directly from PowerShell by
entering ise (Enter). The ISE requires .NET Framework 3.5.1. On Windows Server 2008 R2, it is an optional feature
that needs to be enabled first in your system control panel. You can do that from PowerShell as well:
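On Windows Server 2008 R2, one way to enable the feature from an elevated PowerShell console uses the ServerManager module that ships with that operating system:

```powershell
# Load the Server Manager cmdlets, then enable the ISE feature
Import-Module ServerManager
Add-WindowsFeature PowerShell-ISE
```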
PowerShell has two faces: interactivity and script automation. In this chapter, you will first learn how to work with
PowerShell interactively. Then, we will take a look at PowerShell scripts.
Topics Covered:
PowerShell as a Calculator
o Calculating with Number Systems and Units
Executing External Commands
o Starting the "Classic" Console
o Discovering Useful Console Commands
o Security Restrictions
o Special Places
Cmdlets: PowerShell Commands
o Using Parameters
o Using Named Parameters
o Switch Parameter
o Positional Parameters
o Common Parameters
Aliases: Shortcuts for Commands
o Resolving Aliases
o Creating Your Own Aliases
o Removing or Permanently Keeping an Alias
o Overwriting and Deleting Aliases
Functions: PowerShell-"Macros"
o Calling Commands with Arguments
Invoking Files and Scripts
o Starting Scripts
Running Batch Files
Running VBScript Files
Running PowerShell Scripts
Summary
PowerShell as a Calculator
You can use the PowerShell console to execute arithmetic operations the same way you use a calculator. Just enter a
math expression and PowerShell will give you the result:
2+4 (Enter)
6
You can use all of the usual basic arithmetic operations. Even parentheses will work the same as when you use your
pocket calculator:
(12+5) * 3 / 4.5 (Enter)
11.3333333333333
Parentheses play a special role in PowerShell. They always work from the inside out: the results inside the
parentheses are produced before evaluating the expressions outside of the parentheses, i.e. (2*2)*2 = 4*2. For
example, operations performed within parentheses have priority and ensure that multiplication operations do not
take precedence over addition operations. As you'll discover in upcoming chapters, parentheses are also important
when using PowerShell commands. For example, you can list the contents of sub-directories with the dir command
and then determine the number of files in a folder by enclosing the dir command in parentheses.
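A minimal sketch (the Windows folder is used as an example target):

```powershell
# Run dir first (inside the parentheses), then read the Count property of the result
(Dir $env:windir).Count
```

Note that this approach is only reliable when the folder contains at least two items, as the next paragraph explains.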
@() will also execute the code inside the brackets but will always return the result as an array. A plain parenthesized
dir would not have returned the number of items if the folder contained only one file or none. This line will always
count folder content reliably:
Try and replace the folder with some empty folder and you still get back the correct number of items. $() is used
inside strings so you can use this line if you'd like to insert the result of some code into text:
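Sketches of both constructs (folder paths chosen for illustration):

```powershell
# @() always wraps the result in an array, so Count works even for 0 or 1 items
@(Dir $env:windir).Count

# $() evaluates an expression inside a double-quoted string and inserts the result
"Your Windows folder contains $(@(Dir $env:windir).Count) items."
```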
Note that PowerShell always uses the decimal point for numbers. Some cultures use other characters in numbers,
such as a comma. PowerShell does not care. It always uses the decimal point. Using a comma instead of a decimal
point will return something entirely different:
4,3 + 2 (Enter)
4
3
2
The comma always creates an array. So in this example, PowerShell created an array with the elements 4 and 3. It
then adds the number 2 to that array, resulting in an array of three numbers. The array content is then dumped by
PowerShell into the console. So the important thing to take with you is that the decimal point is always a point and
not a comma in PowerShell.
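The calculation discussed next can be sketched like this (illustrative sizes assumed: a 4.7 GB single-layer DVD and 650 MB CD-ROMs):

```powershell
# How many CD-ROMs fit on a DVD? Both units expand to byte counts.
4.7GB / 650MB
# roughly 7.4, so about seven CD-ROMs fit on a single-layer DVD
```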
The example above calculates how many CD-ROMs can be stored on a DVD. PowerShell supports the common
units kilobyte (KB), megabyte (MB), gigabyte (GB), terabyte (TB), and petabyte (PB). Just make sure you do not
use a space between a number and a unit.
1mb (Enter)
1048576
These units can be in upper or lower case – PowerShell does not care. However, whitespace characters do matter
because they are always token delimiters. The units must directly follow the number and must not be separated from
it by a space. Otherwise, PowerShell will interpret the unit as a new command and will get confused because there is
no such command.
PowerShell can easily understand hexadecimal values: simply prefix the number with "0x":
0xAFFE (Enter)
45054
You can also mix hexadecimal and decimal values in the same expression:
12 + 0xAF (Enter)
187
Operators: Arithmetic problems can be solved with the help of operators. Operators evaluate the two
values to the left and the right. For basic operations, a total of five operators are available, which are also
called "arithmetic operators" (Table 2.1).
Brackets: Brackets group statements and ensure that expressions in parentheses are evaluated first.
Decimal point: Fractions use a point as a decimal separator (never a comma).
Comma: Commas create arrays and are irrelevant for normal arithmetic operations.
Special conversions: Hexadecimal numbers are designated by the prefix "0x", which ensures that they are
automatically converted into decimal values. If you add one of the KB, MB, GB, TB, or PB units to a
number, the number will be multiplied by the unit. Whitespace characters aren't allowed between numbers
and values.
Results and formats: Numeric results are always returned as decimal values. You can use a format
operator like -f if you'd like to see the results presented in a different way. This will be discussed in detail
later in this book.
Ipconfig
Windows IP Configuration
Wireless LAN adapter Wireless Network Connection:
The following command enables you to verify whether a Web site is online and tells you the route the data packets are
sent between a Web server and your computer:
Tracert powershell.com
Trace route to powershell.com [74.208.54.218] over a maximum of 30 hops:
1 12 ms 7 ms 11 ms TobiasWeltner-PC [192.168.1.1]
2 15 ms 16 ms 16 ms dslb-088-070-064-001.pools.arcor-ip.net [88.70.64.1]
3 15 ms 16 ms 16 ms han-145-254-11-105.arcor-ip.net [145.254.11.105]
(...)
17 150 ms 151 ms 152 ms vl-987.gw-ps2.slr.lxa.oneandone.net [74.208.1.134]
18 145 ms 145 ms 149 ms ratdog.info [74.208.54.218]
You can execute any Windows programs. Just type notepad (Enter) or explorer (Enter).
However, there's a difference between text-based commands like ipconfig and Windows programs like Notepad.
Text-based commands are executed synchronously, and the console waits for the commands to complete. Windows-
based programs are executed asynchronously. Press (Ctrl)+(C) to cancel a text-based command.
Note that you can use the cmdlet Start-Process with all of its parameters when you want to launch an external
program with special options. With Start-Process, you can launch external programs using different credentials; you
can make PowerShell wait for Windows-based programs or control window size.
Cmd /c Help
For more information on a specific command, type HELP command-name
You can use all of the above commands in your PowerShell console. To try this, pick some commands from the list.
For example:
As an added safety net, you can run PowerShell without administrator privileges when experimenting with new
commands. That will protect you against mistakes as most dangerous commands can no longer be executed without
administrator rights:
defrag c:
You must have Administrator privileges to defragment a volume.
Use an administrator command line and then run the program again.
Remember to start PowerShell explicitly with administrator rights if you must use admin privileges and have
User Account Control enabled. To do this, right-click PowerShell.exe and, in the context menu, select Run as
Administrator.
Security Restrictions
While you can launch notepad, you cannot launch wordpad:
wordpad
The term "wordpad" is not recognized as a cmdlet, function,
operable program or script file. Verify the term and try again.
At line:1 char:7
+ wordpad <<<<
Here, PowerShell simply did not know where to find WordPad, so if the program is not located in one of the
standard system folders, you can specify the complete path name like this:
C:\programs\Windows NT\accessories\wordpad.exe
The term " C:\program" is not recognized as a cmdlet,
function, operable program or script file. Verify the
term and try again.
At line:1 char:21
+ C:\programs\Windows <<<< NT\accessories\wordpad.exe
Since the path name includes whitespace characters and because PowerShell interprets them as separators,
PowerShell is actually trying to start the program C:\program. So if path names include spaces, quote it. But that
can cause another problem:
"C:\programs\Windows NT\accessories\wordpad.exe"
C:\programs\Windows NT\accessories\wordpad.exe
PowerShell now treats quoted information as string and immediately outputs it back to you. You can prefix it with
an ampersand to ensure that PowerShell executes the quoted text:
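Using the path from the example above (note that on a real system WordPad typically lives under C:\Program Files\Windows NT\Accessories; the path here follows the text's example):

```powershell
# The "&" call operator executes the quoted string as a command
& "C:\programs\Windows NT\accessories\wordpad.exe"
```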
Wouldn't it be easier to switch from the current folder to the folder with the program we're looking for, and then launch
the program right there?
Cd "C:\programs\Windows NT\accessories"
wordpad.exe
The term "wordpad.exe" is not recognized as a cmdlet,
function, operable program or script file.
Verify the term and try again.
At line:1 char:11
+ wordpad.exe <<<<
This results in another red exception because PowerShell wants a relative or absolute path. So, if you don't want to
use absolute paths like in the example above, you need to specify the relative path where "." represents the current
folder:
.\wordpad.exe
Special Places
You won't need to provide the path name or append the file extension to the command name if the program is
located in a folder that is listed in the PATH environment variable. That's why common programs, such as regedit,
notepad, powershell, or ipconfig work as-is and do not require you to type in the complete path name or a relative
path.
You can put all your important programs in one of the folders listed in the environment variable Path. You can find
this list by entering:
$env:Path
C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\program
Files\Softex\OmniPass;C:\Windows\System32\WindowsPowerShell\v1.0\;c
:\program Files\Microsoft SQL Server\90\Tools\binn\;C:\program File
s\ATI Technologies\ATI.ACE\Core-Static;C:\program Files\MakeMsi\;C:
\program Files\QuickTime\QTSystem\
You'll find more on variables, as well as special environment variables, in the next chapter.
As a clever alternative, you can add other folders containing important programs to your Path environment
variables, such as:
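For example (the folder is taken from the WordPad example above):

```powershell
# Append another folder to the Path variable; note the ";" separator
$env:Path += ";C:\programs\Windows NT\accessories"
```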
After this change, you can launch WordPad just by entering its program name. Note that your change to the
environment variable Path is valid only in the current PowerShell session. If you'd like to permanently extend Path,
you will need to update the path environment variable in one of your profile scripts. Profile scripts start
automatically when PowerShell starts and customize your PowerShell environment. Read more about profile scripts
in Chapter 10.
Watch out for whitespace characters: If whitespace characters occur in path names, enclose the
entire path in quotes so that PowerShell doesn't interpret the whitespace as separators. Prefer single
quotes: PowerShell "resolves" text in double quotation marks, replacing variables with their values,
and unless that is exactly what you want, you can avoid it by using single quotes by default.
Specifying a path: You must tell the console where it is if the program is located somewhere else. To do
so, specify the absolute or relative path name of the program.
The "&" changes string into commands: PowerShell doesn't treat text in quotes as a command. Prefix a
string with "&" to actually execute it. The "&" symbol will allow you to execute any string just as if you
had entered the text directly on the command line.
If you have to enter a very long path name, remember (Tab), the key for automatic completion:
C:\(Tab)
Press (Tab) again and again until the suggested sub-directory is the one you are looking for. Add a "\" and press
(Tab) once again to specify the next sub-directory.
The moment a whitespace character turns up in a path, the tab-completion quotes the path and inserts an "&" before
it.
Get-Command retrieves a list of all available cmdlets, whose names always consist of an action (verb) and something that is acted
on (noun). This naming convention will help you find the right command. Let's take a look at how the system
works.
If you're looking for a command for a certain task, you can first select the verb that best describes the task. There are
relatively few verbs that the strict PowerShell naming conditions permit (Table 2.2). If you know that you want to
obtain something, the proper verb is "get." That already gives you the first part of the command name, and now all
you have to do is to take a look at a list of commands that are likely candidates:
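For example, Get-Command can list all commands that use the verb "Get" (shortened here with Select-Object because the full list is long):

```powershell
# list commands whose verb is "Get"; show only the first few to keep the output manageable:
Get-Command -Verb Get | Select-Object -First 5
```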
There is an approved list of verbs that are used with cmdlet names. You can list it with Get-Verb.
You can also look up help for any cmdlet using Get-Help:
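For example:

```powershell
# display the help topic for Get-ChildItem:
Get-Help Get-ChildItem
```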
Using Parameters
Parameters add information so a cmdlet knows what to do. Once again, Get-Help will show you which parameters
are supported by any given cmdlet. For example, the cmdlet Get-ChildItem lists folder contents. If you enter the
cmdlet without additional parameters, it lists the contents of the current folder:
Get-ChildItem
For example, if you'd prefer to get a list of the contents of another sub-directory, you can enter the sub-directory
name after the cmdlet:
Get-ChildItem c:\windows
You can use Get-Help to output full help on Get-ChildItem to find out which parameters are supported:
This will give you comprehensive information as well as several examples. Of particular interest is the "Parameters"
section that you can also retrieve specifically for one or all parameters:
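A sketch of both variants (parameter names as documented for Get-ChildItem):

```powershell
Get-Help Get-ChildItem -Full                 # complete help, including parameter details and examples
Get-Help Get-ChildItem -Parameter Exclude    # help for one specific parameter
```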
-Exclude <string[]>
Omits the specified items. The value of this parameter qualifies the Path parameter. Enter a path element or pattern,
such as "*.txt". Wildcards are permitted.
Required? false
Position? named
Default value
Accept pipeline input? false
Accept wildcard characters? false
-Filter <string[]>
Specifies a filter in the provider's format or language. The value of this parameter qualifies the Path parameter. The
syntax of the filter, including the use of wildcards, depends on the provider. Filters are more efficient than other
parameters, because the provider applies them when retrieving the objects, rather than having Windows PowerShell
filter the objects after they are retrieved.
Required? false
Position? 2
Default value
Accept pipeline input? false
Accept wildcard characters? false
-Force [<SwitchParameter>]
Allows the cmdlet to get items that cannot otherwise be accessed by the user, such as hidden or system files.
Implementation varies from provider to provider. For more information, see about_Providers. Even using the Force
parameter, the cmdlet cannot override security restrictions.
Required? false
Position? named
Default value
Accept pipeline input? false
Accept wildcard characters? false
-Include <string[]>
Retrieves only the specified items. The value of this parameter qualifies the Path parameter. Enter a path element or
pattern, such as "*.txt". Wildcards are permitted.
The Include parameter is effective only when the command includes the Recurse parameter or the path leads to the
contents of a directory, such as C:\Windows\*, where the wildcard character specifies the contents of the
C:\Windows directory.
Required? false
Position? named
Default value
Accept pipeline input? false
Accept wildcard characters? false
-LiteralPath <string[]>
Specifies a path to one or more locations. Unlike Path, the value of LiteralPath is used exactly as it is typed. No
characters are interpreted as wildcards. If the path includes escape characters, enclose it in single quotation marks.
Single quotation marks tell Windows PowerShell not to interpret any characters as escape sequences.
Required? true
Position? 1
Default value
Accept pipeline input? true (ByPropertyName)
Accept wildcard characters? false
-Name [<SwitchParameter>]
Retrieves only the names of the items in the locations. If you pipe the output of this command to another command,
only the item names are sent.
Required? false
Position? named
Default value
Accept pipeline input? false
Accept wildcard characters? false
(...)
There are clever tricks to make life easier. You don't have to specify the complete parameter name as long as you
type enough of the parameter name to make it unambiguous:
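For example, the abbreviation -rec is already enough to identify -Recurse (a sketch; any unambiguous prefix works):

```powershell
# -rec is enough for PowerShell to recognize the -Recurse parameter:
Get-ChildItem . -rec | Select-Object -First 3
```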
Just play with it: If you shorten parameter names too much, PowerShell will report ambiguities and list the
parameters that are conflicting:
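For example, Get-ChildItem has both -Filter and -Force, so a single -f cannot be resolved (sketch):

```powershell
Get-ChildItem . -fo | Out-Null   # "-fo" is unambiguous: it resolves to -Force
# Get-ChildItem . -f             # "-f" is ambiguous (could be -Filter or -Force) and raises an error
```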
You can also turn off parameter recognition. This is rarely necessary, but it helps when the argument reads like a
parameter name:
Write-Host -BackgroundColor
Write-Host : Missing an argument for parameter
'BackgroundColor'. Specify a parameter of type
"System.consoleColor" and try again.
At line:1 char:27
+ Write-Host -BackgroundColor <<<<
You can always quote the text. Or you can expressly turn off parameter recognition by typing "--". Everything
following these two symbols will no longer be recognized as a parameter:
Write-Host "-BackgroundColor"
-BackgroundColor
Write-Host -- -BackgroundColor
-BackgroundColor
Switch Parameter
Sometimes, parameters really are not key-value pairs but simple yes/no switches. If they're specified, they turn on a
certain functionality; if they're left out, the functionality stays off. For example, the parameter -Recurse ensures
that Get-ChildItem searches not only the folder specified by -Path, but also all of its sub-directories. And the switch
parameter -Name makes Get-ChildItem output only the names of files (as strings rather than rich file and folder
objects).
The help on Get-ChildItem will clearly identify switch parameters and place a "<SwitchParameter>" after the
parameter name:
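For example, asking for help on a single switch parameter shows this notation (output abbreviated):

```powershell
# the Recurse parameter is displayed as: -Recurse [<SwitchParameter>]
Get-Help Get-ChildItem -Parameter Recurse
```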
Positional Parameters
For some often-used parameters, PowerShell assigns a "position." This enables you to omit the parameter name
altogether and simply specify the arguments. With positional parameters, your arguments need to be submitted in
just the right order according to their position numbers.
That's why you could have expressed the command we just discussed in one of the following ways:
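The three variants were along these lines (reconstructed from the discussion that follows; -Path has position 1 and -Filter position 2):

```powershell
Get-ChildItem -Path c:\windows -Filter *.exe -Recurse -Name   # fully named
Get-ChildItem c:\windows *.exe -Recurse -Name                 # positional arguments
Get-ChildItem -Recurse -Name c:\windows *.exe                 # named switches first, positional rest
```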
In all three cases, PowerShell will identify and eliminate the named arguments -recurse and -name first because they
are clearly specified. The remaining arguments are "unnamed" and need to be assigned based on their position:
Get-ChildItem c:\windows *.exe
The parameter -path has the position 1, and no value has yet been assigned to it. So, PowerShell attaches the first
remaining argument to this parameter.
-path <string[]>
Specifies a path to one or more locations. Wildcards are
permitted. The default location is the current directory (.).
Required? false
Position? 1
Default value <NOTE: if not specified uses the current location>
Accept pipeline input? true (ByValue, ByPropertyName)
Accept wildcard characters? true
The parameter -filter has the position 2. Consequently, it is assigned the second remaining argument. The position
specification will make it easier to use a cmdlet because you don't have to specify any parameter names for the most
frequently and commonly used parameters.
Here is a tip: In daily interactive PowerShell work, you will want short and fast commands, so use aliases,
positional parameters, and abbreviated parameter names. Once you write PowerShell scripts, you should not use
these shortcuts. Instead, use the full cmdlet names and stick to fully named parameters. One reason is that
scripts should be portable and must not depend on specific aliases you may have defined. Second, scripts are more complex
and need to be as readable and understandable as possible. Named parameters help other people better understand
what you are doing.
Common Parameters
Cmdlets also support a set of generic "CommonParameters":
<CommonParameters>
This cmdlet supports the common parameters: -Verbose,
-Debug, -ErrorAction, -ErrorVariable, and -OutVariable.
For more information, type "get-help about_commonparameters".
These parameters are called "common" because they are permitted for (nearly) all cmdlets and behave the same way:

-Verbose (switch): Generates as much information as possible. Without this switch, the cmdlet restricts itself
to displaying only essential information.

-Debug (switch): Outputs additional warnings and error messages that help programmers find the causes of
errors. You can find more information in Chapter 11.

-ErrorAction (value): Determines how the cmdlet responds when an error occurs. Permitted values:
Continue: Reports the error and continues (default)
Stop: Reports the error and stops
SilentlyContinue: Displays no error message, continues
Inquire: Asks how to proceed
You can find more information in Chapter 11.

-ErrorVariable (value): Name of a variable in which, in the event of an error, information about the error is stored.
You can find more information in Chapter 11.

-OutVariable (value): Name of a variable in which the result of a cmdlet is to be stored. This parameter is usually
superfluous because you can directly assign the result to a variable, in which case it
will no longer be displayed in the console:
$result = Get-ChildItem
If you use -OutVariable instead, the result is output to the console and additionally stored in the variable:
Get-ChildItem -OutVariable result
Get-Command dir
CommandType Name Definition
----------- ---- ----------
Alias dir Get-ChildItem
Get-Alias -Definition Get-Childitem
CommandType Name Definition
----------- ---- ----------
Alias dir Get-ChildItem
Alias gci Get-ChildItem
Alias ls Get-ChildItem
Get-ChildItem c:\
Dir c:\
ls c:\
Historical: Find and use important cmdlets through familiar command names you know from older
shells
Speed: Fast access to cmdlets using short alias names instead of the longer formal cmdlet names
Resolving Aliases
Use these lines if you'd like to know what "genuine" command is hidden in an alias:
$alias:Dir
Get-ChildItem
$alias:ls
Get-ChildItem
Get-Command Dir
CommandType Name Definition
----------- ---- ----------
Alias dir Get-ChildItem
$alias:Dir lists the element Dir of the drive alias:. That may seem somewhat surprising because there is no drive
called alias: in the classic console. PowerShell supports many additional virtual drives, and alias: is only one of
them. If you want to know more, the cmdlet Get-PSDrive lists them all. You can also list alias: like any other drive
with Dir. The result would be a list of aliases in their entirety:
Dir alias:
CommandType Name Definition
----------- ---- ----------
alias ac Add-Content
alias asnp Add-PSSnapin
alias clc Clear-Content
(...)
Get-Command can also resolve aliases. Whenever you want to know more about a particular command, you can
submit it to Get-Command, and it will tell you the command type and where it is located.
You can also get the list of aliases using the cmdlet Get-Alias. You will receive a list of individual alias definitions:
You can use the parameter -Definition to list all aliases for a given cmdlet.
This will get you all aliases pointing to the cmdlet or command you submitted to -Definition.
As it turns out, there's even a third alias for Get-ChildItem called "gci". There are more approaches to the same
result. The next examples find alias definitions by doing a keyword search and by grouping:
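Two possible sketches — a keyword search through the alias definitions, and grouping aliases by their target command:

```powershell
# keyword search: all aliases whose definition mentions "ChildItem":
Get-Alias | Where-Object { $_.Definition -like '*ChildItem*' }
# grouping: how many aliases point to each command?
Get-Alias | Group-Object Definition | Sort-Object Count -Descending | Select-Object -First 5
```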
Edit
Set-Alias edit notepad.exe
Edit
Edit typically launches the console-based Editor program. You can press (Alt)+(F) and then (X) to exit without
completely closing the console window.
If you create a new alias called "Edit" and set it to "notepad.exe", the command Edit will be re-programmed. The
next time you enter it, PowerShell will no longer run the old Editor program, but the Notepad.
$alias:edit
Try these options if you'd like to keep your own aliases permanently:
Manually each time: Set your aliases after every start manually using Set-Alias. That is, of course, rather
theoretical.
Automated in a profile: Let your alias be set automatically when PowerShell starts: add your aliases to a
start profile. You'll learn how to do this in Chapter 10.
Import and export: You can use the built-in import and export function for aliases.
For example, if you'd like to export all currently defined aliases as a list to a file, enter:
Export-Alias
Because you haven't entered any file name after Export-Alias, the command will ask you for the name under
which you want to save the list. Type in:
alias1 (Enter)
The list will be saved. You can look at the list afterwards and manipulate it. For example, you might want the list to
include a few of your own alias definitions:
Notepad alias1
Import-Alias alias1
Import-Alias : Alias not allowed because an alias with the
name "ac" already exists.
At line:1 char:13
+ Import-Alias <<<< alias1
Import-Alias will notify you that it cannot create some aliases of the list because these aliases already exist. Specify
additionally the option -Force to ensure that Import-Alias overwrites existing aliases:
Import-Alias alias1 -Force
You can add the Import-Alias instruction to your start profile and specify a permanent path to the alias list. This will
make PowerShell automatically read this alias list when it starts. Later, you can add new aliases. Then, it will suffice
to update the alias list with Export-Alias and to write over the old file. This is one way for you to keep your aliases
permanently.
Del alias:edit
This instruction deletes the "Edit" alias. Here, the uniform provider approach becomes evident. The very same "Del"
command will allow you to delete files and sub-directories in the file system as well. Perhaps you're already familiar
with the command from the classic console:
Del C:\garbage.txt
Here is an example that finds all aliases that point to no valid target, which is a great way of finding outdated or
damaged aliases:
Get-Alias | ForEach-Object {
if (!(Get-Command $_.Definition -ea SilentlyContinue)) {$_}}
Functions: PowerShell-"Macros"
Aliases are simple shortcuts to call commands with another name (shortcut names), or to make the transition to
PowerShell easier (historic aliases). However, the arguments of a command can never be included in an alias. You
will need to use functions if you want that.
Aliases won't work here because they can't specify command arguments. Functions can:
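A minimal sketch (the function name and target program are examples): a function wraps a command and forwards any arguments through $args, which an alias cannot do:

```powershell
# "edit" becomes a new command that hands its arguments on to notepad.exe:
function edit { notepad.exe $args }
# usage (on Windows): edit C:\somefile.txt
```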
Unlike alias definitions, functions can run arbitrary code that is placed in brackets. Any additional information a user
submitted to the function can be found in $args if you don't specify explicit parameters. $args is an array and holds
every piece of extra information submitted by the caller as separate array elements. You'll read more about functions
later.
Starting Scripts
Scripts and batch files are pseudo-executables. The script itself is just a plain text file, but it can be run by its
associated script interpreter.
Notepad ping.bat
@echo off
echo An attacker can do dangerous things here
pause
Dir %windir%
pause
Dir %windir%\system
Save the text and close Notepad. Your batch file is ready for action. Try to launch the batch file by entering its
name:
Ping
The batch file won't run. Instead, the built-in ping command runs, and because you didn't specify any IP address or
host name, it prints its internal help message. If you want to launch your batch file, you're going to have to
specify either the relative or absolute path name.
.\ping
Your batch file will open and then immediately run the commands it contains.
PowerShell has just defended a common attack. If you were using the classic console, you would have been tricked
by the attacker. Switch over to the classic console to see for yourself:
Cmd
Ping 10.10.10.10
An attacker can do dangerous things here
Press any key . . .
If an attacker had smuggled a batch file named "ping.bat" into your current folder, then the ping command, harmless
though it might seem, could have had catastrophic consequences. A classic console doesn't distinguish between files
and commands. It will look first in the current folder, find the batch file, and execute it immediately. Such a mix-up
will never happen in the PowerShell console. So, return to your much-safer PowerShell environment:
Exit
Notepad test.vbs
Cscript.exe c:\samples\test.vbs (Enter)
The script opens a small dialog window and asks for some information. The information entered into the dialog is
then output to the console where PowerShell can receive it. This way, you can easily merge VBScript logic into your
PowerShell solutions. You can even store the results into a variable and process it inside PowerShell:
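For example (assuming the test.vbs script from above; cscript.exe is the Windows script host):

```powershell
# capture whatever the VBScript writes to the console:
$result = cscript.exe $env:temp\test.vbs
```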
If you do not get back the name you entered into the dialog, but instead the VBScript copyright information, then the
VBScript interpreter has output the copyright information first, which got in the way. The safest way is to turn off
the copyright message explicitly:
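The //NoLogo switch of cscript.exe does exactly that:

```powershell
# //NoLogo suppresses the banner, so only the script's own output reaches PowerShell:
$result = cscript.exe //NoLogo $env:temp\test.vbs
$result
```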
You can also generally turn off VBScript logos. Try calling wscript.exe to open the settings dialog, and turn off the
logo.
Notepad $env:temp\test.ps1
You can now enter any PowerShell code you want, and save the file. Once saved, you can also open your script with
more sophisticated and specialized script editors. PowerShell comes with an editor called PowerShell ISE, and here
is how you'd open the file you created with Notepad:
Ise $env:temp\test.ps1
.\test.ps1
File "C:\Users\UserA\test.ps1" cannot be loaded because the
execution of scripts is disabled on this system. Please see
"get-help about_signing" for more details.
At line:1 char:10
+ .\test.ps1 <<<<
You'll probably receive an error message similar to the one in the above example. All PowerShell scripts are initially
disabled. You need to allow PowerShell to execute scripts first. This only needs to be done once:
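The usual command is the following; changing the machine-wide setting may require administrator rights, in which case -Scope CurrentUser is an alternative:

```powershell
Set-ExecutionPolicy RemoteSigned
```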
This grants permission to run locally stored PowerShell scripts. Scripts from untrusted sources, such as the Internet,
will need to carry a valid digital signature or else they won't run. This is to protect you from malicious scripts, but if
you want to, you can turn this security feature off. Replace RemoteSigned with Bypass. The implications of
signatures and other security settings will be discussed in Chapter 10. For now, the line above is enough for you to
experiment with your own PowerShell scripts. To restore the original setting, set the setting to Undefined:
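Like this:

```powershell
Set-ExecutionPolicy Undefined
```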
To get a complete picture, also try using the -List parameter with Get-ExecutionPolicy:
Get-ExecutionPolicy -List
Scope ExecutionPolicy
----- ---------------
MachinePolicy Undefined
UserPolicy Undefined
Process Undefined
CurrentUser RemoteSigned
LocalMachine Restricted
You now see all execution policies. The first two are defined by Group Policy so a corporation can centrally control
execution policy. The scope "Process" refers to your current session only. So, you can use this scope if you want to
only temporarily change the execution policy. No other PowerShell session will be affected by your change.
"CurrentUser" will affect only you, but no other users. That's how you can change this scope without special
privileges. "LocalMachine," which is the only scope available in PowerShell v.1, will affect any user on your
machine. This is the perfect place for companies to set initial defaults that can be overridden. The default setting for
this scope is "Restricted."
The effective execution policy is the first policy from top to bottom in this list that is not set to "Undefined." If all
policies are set to "Undefined," then scripts are prohibited.
Note: To turn off signature checking altogether, you can set the execution policy to "Bypass." This can be useful if
you must run scripts regularly that are stored on file servers outside your domain. Otherwise, you may get security
warnings and confirmation dialogs. Always remember: execution policy exists to protect you from
potentially malicious scripts. If you are confident you can safely identify malicious scripts, then there is nothing wrong
with turning off signature checking. However, we recommend not using the "Bypass" setting if you are new to
PowerShell.
Summary
The PowerShell console can run all kinds of commands interactively. You simply enter a command and the console
will return the results.
Cmdlets are PowerShell's own internal commands. A cmdlet name is always composed of a verb (what it does) and
a noun (what it acts upon).
To find a particular command, you can either guess or use Get-Command. For example, this will get you a list of all
cmdlets dealing with event logs:
Search for the verb "Stop" to find all cmdlets that stop something:
You can also use wildcards. This will list all cmdlets with the keyword "computer":
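The three searches just described look like this with Get-Command:

```powershell
Get-Command -Noun *EventLog*   # all cmdlets whose noun deals with event logs
Get-Command -Verb Stop         # all cmdlets that stop something
Get-Command *computer*         # wildcard search for the keyword "computer"
```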
Once you know the name of a particular cmdlet, you can use Get-Help to get more information. This function will
help you view help information page by page:
Get-Help Stop-Computer
Help Stop-Computer -examples
Help Stop-Computer -parameter *
Cmdlets are just one of six command types you can use to get work done:
Alias: Shortcuts to other commands, such as dir or ls
Function: "Macros" that run code and resemble "self-made" new commands
Cmdlet: Built-in PowerShell commands
Application: External executables, such as ipconfig, ping or notepad
PowerShell scripts: Files with extension *.ps1 which can contain any valid PowerShell code
Other files: Batch files, VBScript script files, or any other file associated with an executable
If commands are ambiguous, PowerShell will stick to the order of that list. So, since the command type "Alias" is at
the top of that list, if you define an alias like "ping", it will be used instead of ping.exe and thus can override any
other command type.
It is time to combine commands whenever a single PowerShell command can't solve your problem. One way of
doing this is by using variables. PowerShell can store results of one command in a variable and then pass the
variable to another command. In this chapter, we'll explain what variables are and how you can use them to solve
more complex problems.
Topics Covered:
Personal Variables
o Selecting Variable Names
o Assigning and Returning Values
o Assigning Multiple Variable Values
o Exchanging the Contents of Variables
o Assigning Different Values to Several Variables
Listing Variables
o Finding Variables
o Verify Whether a Variable Exists
o Deleting Variables
Using Special Variable Cmdlets
o Write-Protecting Variables: Creating Constants
o Variables with Description
"Automatic" PowerShell Variables
Environment Variables
o Reading Environment Variables
o Searching for Environment Variables
o Modifying Environment Variables
o Permanent Modifications of Environment Variables
Scope of Variables
o Automatic Restriction
o Changing Variable Visibility
o Setting Scope
Variable Type and "Strongly Typing"
o Strongly Typing
o The Advantages of Specialized Types
Variable Management: Behind the Scenes
o Modification of Variable Options
o Write Protecting Variables
o Examining Strongly Typed Variables
o Validating Variable Contents
Summary
Personal Variables
Variables store pieces of information. This way, you can first gather all the information you may need and store
them in variables. The following example stores two pieces of information in variables and then calculates a new
result:
# First, store the data in variables:
$amount = 120
$VAT = 0.19
# Calculate:
$result = $amount * $VAT
# Output result
$result
22.8
# Replace variables in text with values:
$text = "Net amount $amount matches gross amount $($amount + $result)"
$text
Net amount 120 matches gross amount 142.8
Of course, you could have hard-coded the numbers you multiplied. However, variables are the prerequisite for
reusable code. By assigning your data to variables, you can easily change the information, either by manually
assigning different values to your variables or by assigning user-defined values to your variables. By simply
replacing the first two lines, your script can interactively ask for the variable content:
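A sketch using Read-Host (note the type casts, which the next paragraph explains):

```powershell
# ask the user for the two values instead of hard-coding them:
[double]$amount = Read-Host "Enter the net amount"
[double]$VAT = Read-Host "Enter the VAT rate (for example 0.19)"
```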
Note that I strongly typed the variables in this example. You will hear more about variable typing later in this
chapter, but whenever you use Read-Host or another method that accepts user input, you have to specify the
variable data type or else PowerShell will treat your input as a simple string. Simple text is something very different
from numbers, and you cannot calculate with pieces of text.
PowerShell creates new variables automatically so there is no need to specifically "declare" variables. Simply assign
data to a variable. The only thing you do need to know is that variable names are always prefixed with a "$" to
access the variable content.
You can then output the variable content by entering the variable name or you can merge the variable content into
strings. Just make sure to use double-quotes to do that. Single-quoted text will not expand variable values.
There are some special characters that have special meaning to PowerShell. If you used those in your variable
names, PowerShell can get confused. So the best thing is to avoid special characters in your variable names. But
if you must use them for any reason, be sure to enclose the variable name in curly braces:
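A quick sketch (the variable name is deliberately odd):

```powershell
# braces let you use characters that would otherwise confuse the parser:
${a strange variable name!} = 12
${a strange variable name!}
```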
With PowerShell, swapping variable content is much easier because you can assign multiple values to multiple
variables. Have a look:
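For example, two variables can swap their contents in a single statement:

```powershell
$value1 = 10
$value2 = 20
# multiple assignment swaps the contents without a helper variable:
$value1, $value2 = $value2, $value1
$value1   # now 20
$value2   # now 10
```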
Listing Variables
PowerShell keeps a record of all variables, which is accessible via a virtual drive called variable:. Here is how you
see all currently defined variables:
Dir variable:
Aside from your own personal variables, you'll see many more. PowerShell also defines variables and calls them
"automatic variables." You'll learn more about this soon.
Finding Variables
Using the variable: virtual drive can help you find variables. If you'd like to see all the variables containing the word
"Maximum," try this:
Dir variable:*maximum*
Name Value
---- -----
MaximumErrorCount 256
MaximumVariableCount 4096
MaximumFunctionCount 4096
MaximumAliasCount 4096
MaximumDriveCount 4096
MaximumHistoryCount 1000
The solution isn't quite so simple if you'd like to know which variables currently contain the value 20. It consists of
several commands piped together.
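The pipeline described below can be sketched as:

```powershell
# turn each variable's listing into its own string, then keep lines containing " 20 ":
dir variable: | Out-String -Stream | Select-String " 20 "
```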
Here, the output from Dir is passed on to Out-String, which converts the results of Dir into string. The parameter -
Stream ensures that every variable supplied by Dir is separately output as string. Select-String selects the lines that
include the desired value, filtering out the rest. White space is added before and after the number 20 to ensure that
only the desired value is found and not other values that contain the number 20 (like 200).
# Verify whether the variable $psversiontable exists which is present only in PS v2:
Test-Path variable:\psversiontable
True
# Use this information to check for PS v2
If (Test-Path variable:psversiontable) {
'You are running PowerShell v2'
} else {
'You are running PowerShell v1 and should update to v2'
}
You are running PowerShell v2
Deleting Variables
PowerShell automatically discards all variables when the session ends, so there is usually no need for you
to remove variables manually. If you'd like to delete a variable immediately, again, do exactly as you would in the
file system:
# delete variable:
del variable:\test
1. New-Variable enables you to specify options, such as a description or write protection. This makes a
variable into a constant. Set-Variable does the same for existing variables.
2. Get-Variable enables you to retrieve variables from the internal PowerShell variable store.
PowerShell doesn't distinguish between variables and constants. However, it does offer you the option of write-
protecting a variable. In the following example, the write-protected variable $test is created with a fixed value of
100. In addition, a description is attached to the variable.
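The example probably looked similar to this (New-Variable syntax as documented):

```powershell
# create a write-protected variable with a fixed value and a description:
New-Variable test -Value 100 -Description "test variable" -Option ReadOnly
```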
The variable is now write-protected and its value may no longer be changed. You'll receive an error message if you
try it anyway. Because the variable is write-protected, it behaves like a read-only file. You'll have to specify the
parameter -Force to delete it:
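Deleting it then works like deleting a read-only file (the first line only re-creates the variable so the sketch is self-contained):

```powershell
New-Variable test -Value 100 -Option ReadOnly -ErrorAction SilentlyContinue
# -Force overrides the write protection:
del variable:\test -Force
```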
As you just saw, a write-protected variable can still be modified by deleting it and creating a new copy of it. If you
need stronger protection, you can create a variable with the Constant option. Now, it can neither be modified nor
deleted. Only when you quit PowerShell are constants removed. Variables with the Constant option may only be
created with New-Variable. If a variable already exists, you cannot make it constant anymore because you'll get an
error message:
You can overwrite an existing variable by using the -Force parameter of New-Variable if the existing variable
wasn't created with the Constant option. Variables of the constant type are unchangeable once they have been
created and -Force does not change this:
Get-Childitem variable:
Name Value
---- -----
Error {}
DebugPreference SilentlyContinue
PROFILE C:\Users\Tobias Weltner\Documents\WindowsPowerShell\Micro...
HOME C:\Users\Tobias Weltner
(...)
You can show their description to understand the purpose of automatic variables:
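For example:

```powershell
# Name and Description of the variables in the current session (first few only):
Get-Variable | Select-Object Name, Description -First 5
```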
PowerShell write protects several of its automatic variables. While you can read them, you can't modify them. This
makes sense because information, like the process-ID of the PowerShell console or the root directory, must not be
modified.
$pid = 12
Cannot overwrite variable "PID" because it is read-only or constant.
At line:1 char:5
+ $pid <<<< = 12
A little later in this chapter, you'll find out more about how write-protection works. You'll then be able to turn write-
protection on and off for variables that already exist. However, don't do this for automatic variables because
PowerShell may crash. One reason is because PowerShell continually modifies some variables. If you set them to
read-only, PowerShell may stop and not respond to any inputs.
Environment Variables
There is another set of variables maintained by the operating system: environment variables.
Working with environment variables in PowerShell is just as easy as working with internal PowerShell variables.
All you need to do is add the prefix env: to the variable name.
Reading Environment Variables
You can read the location of the Windows folder of the current computer from a Windows environment variable:
$env:windir
C:\Windows
By adding env:, you've told PowerShell not to look for the variable windir in the default PowerShell variable store,
but in the Windows environment variables. In other words, the variable behaves just like any other PowerShell variable.
For example, you can embed it in some text:
You can just as easily use the variable with commands and switch over temporarily to the Windows folder like this:
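Both uses might look like this (minimal sketches):

```powershell
# Embed the environment variable in text:
"The Windows folder is here: $env:windir"

# Temporarily switch over to the Windows folder:
Set-Location $env:windir
```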
Get-Childitem env:
Name Value
---- -----
Path C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\
TEMP C:\Users\TOBIAS~1\AppData\Local\Temp
ProgramData C:\ProgramData
PATHEXT .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
ALLUSERSPROFILE C:\ProgramData
PUBLIC C:\Users\Public
OS Windows_NT
USERPROFILE C:\Users\Tobias Weltner
HOMEDRIVE C:
(...)
You'll be able to retrieve the information it contains once you've located the appropriate environment variable and
you know its name:
$env:userprofile
C:\Users\Tobias Weltner
The next example shows how you can create a new folder and add it to the PATH environment variable. Any script
you place into that folder will then be accessible simply by entering its name:
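A sketch of those steps (the folder name c:\myTools and the script name are just examples; running scripts also requires a suitable execution policy):

```powershell
# Create a new folder for your scripts:
md c:\myTools

# Create a small script in the new folder:
'"Hello!"' > c:\myTools\sayHello.ps1

# Append the folder to the Path environment variable of this session:
$env:Path += ";c:\myTools"
```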
# All scripts and commands in this folder can be launched by entering their name now:
sayHello
Hello!
You have two choices if you need to make permanent changes to your environment variables. You can either make
the changes in one of your profile scripts, which get executed each time you launch PowerShell (then your changes
are effective in any PowerShell session but not outside) or you can use sophisticated .NET methods directly to
change the underlying original environment variables (in which case the environment variable change is visible to
anyone, not just PowerShell sessions). This code adds a path to the Path environment variable and the change is
permanent.
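One way to sketch this, using the .NET Environment class (the folder name is again just an example):

```powershell
# Read the current user-level Path, append a folder, and write it back permanently:
$oldValue = [Environment]::GetEnvironmentVariable("Path", "User")
$newValue = $oldValue + ";c:\myTools"
[Environment]::SetEnvironmentVariable("Path", $newValue, "User")
```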
Access to commands of the .NET Framework, as shown in this example, is described in depth in Chapter 6.
When you close and restart PowerShell, the Path environment variable will now retain the changed value. You can
easily check this:
$env:Path
The permanent change you just made applies only to you, the logged-on user. If you'd like this change to be in
effect for all users of the computer, replace the "User" argument with "Machine". You will need full administrator
privileges to do that.
You should only change environment variables permanently when there is no other way. For most purposes, it is
completely sufficient to change the temporary process set from within PowerShell. Assign $null to an environment
variable to remove it.
Scope of Variables
PowerShell variables can have a "scope," which determines where a variable is available. PowerShell supports four
special variable scopes: global, local, private, and script. These scopes allow you to restrict variable visibility in
functions or scripts.
Automatic Restriction
Typically, a script will use its own variable scope and isolate all of its variables from the console. So when you run a
script to do some task, it will not leave behind any variables or functions defined by that script once the script is
done.
Dot-sourcing is used when you want to (a) debug a script and examine its variables and functions after the script has
run, or (b) run library scripts whose purpose is to define functions and variables for later use. The profile script,
which launches automatically when PowerShell starts, is an example of a script that always runs dot-sourced. Any
function you define in any of your profile scripts will be accessible in your entire PowerShell session, even though
the profile script is no longer running.
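The difference in a sketch (library.ps1 is a hypothetical script):

```powershell
# Normal call: the script runs in its own scope,
# so its variables and functions vanish afterwards:
.\library.ps1

# Dot-sourced call (note the leading dot and the space): the script runs
# in the caller's scope, so its functions and variables survive:
. .\library.ps1
```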
Setting Scope
While the user of a script can somewhat control scope by using dot-sourcing, a script developer has even more
control over scope by prefixing variable and function names. Let's use the scope modifiers private, local, script, and
global.
Scope allocation   Description
$private:test = 1  The variable exists only in the current scope. It cannot be accessed in any other scope.
$local:test = 1    Variables will be created only in the local scope. That's the default for variables that are
                   specified without a scope. Local variables can be read from scopes originating from the current
                   scope, but they cannot be modified there.
$script:test = 1   This scope represents the top-level scope in a script. All functions and parts of a script can
                   share variables by addressing this scope.
$global:test = 1   This scope represents the scope of the PowerShell console. So if a variable is defined in this
                   scope, it will still exist even when the script that defined it is no longer running.
Functions again create their own scope and functions defined inside of other functions create additional sub-scopes.
Here is a little walk-through. Inside the console, all scopes are the same, so prefixing a variable will not make much
difference:
$test = 1
$local:test
1
$script:test = 12
$global:test
12
$private:test
12
Differences become evident only once you create additional scopes, such as by defining a function:
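For example (a minimal sketch):

```powershell
$test = 1
function demo {
  "Read from the parent scope: $test"
  $test = 2                # creates a new local $test in the function's scope
  "Changed inside the function: $test"
}
demo
$test                      # the console variable is unchanged: still 1
```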
When you don't use any special scope prefix, a child scope can read the variables of the parent scope, but not change
them. If the child scope modifies a variable that was present in the parent scope, as in the example above, then the
child scope actually creates a completely new variable in its own scope, and the parent scope's variable remains
unchanged.
There are exceptions to this rule. If a parent scope declares a variable as "private," then it is accessible only in that
scope and child scopes will not see the variable.
Only when you create a completely new variable by using $private: is it in fact private. If the variable already
existed, PowerShell will not reset the scope. To change scope of an existing variable, you will need to first remove it
and then recreate it: Remove-Variable a would remove the variable $a. Or, you can manually change the variable
options: (Get-Variable a).Options = "Private". You can change a variable scope back to the initial default "local" by
assigning (Get-Variable a).Options = "None".
(12).GetType().Name
Int32
(1000000000000).GetType().Name
Int64
(12.5).GetType().Name
Double
(12d).GetType().Name
Decimal
("H").GetType().Name
String
(Get-Date).GetType().Name
DateTime
PowerShell will by default use primitive data types to store information. If a number is too large for a 32-bit integer,
it switches to 64-bit integer. If it's a decimal number, then the Double data type best represents the data. For text
information, PowerShell uses the String data type. Date and time values are stored in DateTime objects.
This process of automatic selection is called "weak typing," and while convenient, it is also often restrictive or risky.
Weakly typed variables will happily accept anything, even wrong pieces of information. By strongly typing a
variable, you can guarantee that it only ever holds the kind of information you expect; any other assignment will
throw an exception that alerts you.
Also, PowerShell will not always pick the best data type. Whenever you specify text, PowerShell sticks to the
generic String type. If the text you specified was really a date or an IP address, then there are specialized data types
that represent dates or IP addresses much better.
So, in practice, there are two important reasons for you to choose the data type yourself:
Type safety: If you have assigned a type to a variable yourself, then the type will be preserved no matter
what and will never be automatically changed to another data type. You can be absolutely sure that a value
of the correct type is stored in the variable. If someone later on wants to mistakenly assign a value to the
variable that doesn't match the originally chosen type, this will cause an exception.
Special variable types: When automatically assigning a variable type, PowerShell will choose from
generic variable types like Int32 or String. Often, it's much better to store values in a specialized and more
meaningful variable type like DateTime.
Strongly Typing
You can enclose the type name in square brackets before the variable name to assign a particular type to a variable.
For example, if you know that a particular variable will hold only numbers in the range 0 to 255, you can use the
Byte type:
[Byte]$flag = 12
$flag.GetType().Name
Byte
The variable will now store your contents in a single byte, which is not only very memory-efficient, but it will also
raise an error if a value outside the range is specified:
$flag = 300
The value "300" cannot be converted to the type "System.Byte".
Error: "The value for an unsigned byte was too large or too small."
At line:1 char:6
+ $flag <<<< = 300
If you store a date as String, then you'll have no access to special date functions. Only DateTime objects make them
available. So, if you're working with date and time indicators, it's better to store them explicitly as DateTime:
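For example (parsing the text assumes an English locale; the date matches the AddDays() example below):

```powershell
[datetime]$date = "November 12, 2004"
$date

Friday, November 12, 2004 00:00:00
```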
Now, since the variable converted the text information into a specific DateTime object, it tells you the day of the
week and also enables specific date and time methods. For example, a DateTime object can easily add and subtract
days from a given date. This will get you the date 60 days from the date you specified:
$date.AddDays(60)
Tuesday, January 11, 2005 00:00:00
PowerShell supports all .NET data types. XML documents, for example, are much better represented by the XML
data type than by the standard String data type:
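A minimal sketch (the XML content is made up):

```powershell
[xml]$xml = "<servers><server name='PC01' ip='10.10.10.10'/></servers>"
$xml.servers.server.name

PC01
```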
If you retrieve a variable in PowerShell, PowerShell will return only the variable value. If you'd like to see the
remaining information that was assigned to the variable, you'll need the underlying PSVariable object. Get-Variable
will get it for you:
$testvariable = "Hello"
$psvariable = Get-Variable testvariable
You can now display all the information about $testvariable by outputting $psvariable. Pipe the output to the cmdlet
Select-Object to see all object properties and not just the default properties:
$psvariable | Select-Object
Name : testvariable
Description :
Value : Hello
Options : None
Attributes : {}
# Modify description:
$psvariable.Description = "Subsequently added description"
Dir variable:\test | Format-Table name, description
Name Description
---- -----------
test Subsequently added description
# Get PSVariable object and directly modify the description:
(Get-Variable test).Description =
"An additional modification of the description."
Dir variable:\test | Format-Table name, description
Name Description
---- -----------
test An additional modification of the description.
# Modify a description of an existing variable with Set-Variable:
Set-Variable test -description "Another modification"
Dir variable:\test | Format-Table name, description
Name Description
---- -----------
test Another modification
As you can see in the example above, you do not need to store the PSVariable object in its own variable to access its
Description property. Instead, you can use a sub-expression, i.e. a statement in parentheses. PowerShell will then
evaluate the contents of the sub-expression separately. The expression directly returns the required PSVariable
object so you can then call the Description property directly from the result of the sub-expression. You could have
done the same thing by using Set-Variable. Reading the settings works only with the PSVariable object:
(Get-Variable test).Description
An additional modification of the description.
Write-Protecting Variables
You can add the ReadOnly option to a variable if you'd like to write-protect it:
$Example = 10
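One way to sketch this, using Set-Variable (the exact error text may vary between versions):

```powershell
Set-Variable Example -Option ReadOnly

# This now fails:
$Example = 20
# Cannot overwrite variable "Example" because it is read-only or constant.

# Unlike Constant, ReadOnly can be overridden with -Force:
Set-Variable Example 20 -Force
```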
The Constant option must be set when a variable is created because you may not convert an existing variable into a
constant.
If you clear the Attributes property, the variable becomes unspecific again; in essence, you remove the strong type.
Conversely, you can add validation attributes to a variable to restrict its contents:
$a = "Hello"
$aa = Get-Variable a
$aa.Attributes.Add($(New-Object `
System.Management.Automation.ValidateLengthAttribute `
-argumentList 2,8))
$a = "Permitted"
$a = "This is prohibited because its length is not from 2 to 8 characters"
The variable cannot be validated because the value "This is prohibited
because its length is not from 2 to 8 characters" is not a valid value
for the a variable.
At line:1 char:3
+ $a <<<< = "This is prohibited because its length is not from 2 to 8
In the above example, the Add() method added a new .NET object, created with New-Object, to the variable's
attributes. You'll learn more about New-Object in Chapter 6. Along with ValidateLengthAttribute, there are
additional restrictions that you can place on variables:
Restriction                                          Attribute
Variable may not be null                             ValidateNotNullAttribute
Variable may not be null or empty                    ValidateNotNullOrEmptyAttribute
Variable must match a regular expression             ValidatePatternAttribute
Variable must fall within a particular number range  ValidateRangeAttribute
Variable may contain only particular set values      ValidateSetAttribute
In the following example, the variable must contain a valid e-mail address or all values not matching an e-mail
address will generate an error. The e-mail address is defined by what is called a Regular Expression. You'll learn
more about Regular Expressions in Chapter 13.
$email = "user@example.com"
$v = Get-Variable email
$pattern = "\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b"
$v.Attributes.Add($(New-Object `
System.Management.Automation.ValidatePatternAttribute `
-argumentList $pattern))
$email = "user@example.com"
$email = "invalid@email"
The variable cannot be validated because the value invalid@email is not
a valid value for the email variable.
At line:1 char:7
+ $email <<<< = "invalid@email"
If you want to assign a set number range to a variable, use ValidateRangeAttribute. The variable $age accepts only
numbers from 5 to 100:
$age = 18
$v = Get-Variable age
$v.Attributes.Add($(New-Object `
System.Management.Automation.ValidateRangeAttribute `
-argumentList 5,100))
$age = 30
$age = 110
The variable cannot be validated because the value 110 is not a valid
value for the age variable.
At line:1 char:7
+ $age <<<< = 110
If you would like to limit a variable to special key values, ValidateSetAttribute is the right option. The variable
$option accepts only the contents yes, no, or perhaps:
$option = "yes"
$v = Get-Variable option
$v.Attributes.Add($(New-Object `
System.Management.Automation.ValidateSetAttribute `
-argumentList "yes", "no", "perhaps"))
$option = "no"
$option = "perhaps"
$option = "don't know"
The variable cannot be validated because the value don't know is not a
valid value for the option variable.
At line:1 char:8
+ $option <<<< = "don't know"
Summary
Variables store information. Variables are by default not bound to a specific data type, and once you assign a value
to a variable, PowerShell will automatically pick a suitable data type. By strongly-typing variables, you can restrict a
variable to a specific data type of your choice. You strongly-type a variable by specifying the data type before the
variable name:
You can prefix the variable name with "$" to access a variable. The variable name can consist of numbers, letters,
and special characters such as the underscore "_". Variable names are not case-sensitive. If you'd like to use
characters in variable names that have special meaning to PowerShell (like parentheses), the variable name must be
enclosed in curly braces. PowerShell doesn't require that variables be specifically created or declared before use.
There are pre-defined variables that PowerShell will create automatically. They are called "automatic variables."
These variables tell you information about the PowerShell configuration. For example, beginning with PowerShell
2.0, the variable $psversiontable will dump the current PowerShell version and versions of its dependencies:
PS > $PSVersionTable
Name Value
---- -----
CLRVersion 2.0.50727.4952
BuildVersion 6.1.7600.16385
PSVersion 2.0
WSManStackVersion 2.0
PSCompatibleVersions {1.0, 2.0}
SerializationVersion 1.1.0.1
PSRemotingProtocolVersion 2.1
You can change the way PowerShell behaves by changing automatic variables. For example, by default PowerShell
stores only the last 64 commands you ran (which you can list with Get-History or re-run with Invoke-History). To
make PowerShell remember more, just adjust the variable $MaximumHistoryCount:
PS > $MaximumHistoryCount
64
PS > $MaximumHistoryCount = 1000
PS > $MaximumHistoryCount
1000
PowerShell will store variables internally in a PSVariable object. It contains settings that write-protect a variable or
attach a description to it (Table 3.6). It's easiest for you to set these special variable options by using the
New-Variable or Set-Variable cmdlets (Table 3.1).
Every variable is created in a scope. When PowerShell starts, an initial variable scope is created, and every script
and every function will create their own scope. By default, PowerShell accesses the variable in the current scope, but
you can specify other scopes by adding a prefix to the variable name: local:, private:, script:, and global:.
Whenever a command returns more than one result, PowerShell will automatically wrap the results into an array. So
dealing with arrays is important in PowerShell. In this chapter, you will learn how arrays work. We will cover
simple arrays and also so-called "associative arrays," which are also called "hash tables."
Topics Covered:
$a = ipconfig
$a
Windows IP Configuration
Ethernet adapter LAN Connection
Media State . . . . . . . . . . . : Media disconnected
In reality, the result consists of a number of pieces of data, and PowerShell returns them as an array. This occurs
automatically whenever a command returns more than a single piece of data.
Discovering Arrays
You can check the data type to find out whether a command will return an array:
$a = "Hello"
$a -is [Array]
False
$a = ipconfig
$a -is [Array]
True
An array always supports the property Count, which will return the number of elements stored in that array:
$a.Count
53
Here, the ipconfig command returned 53 single results that were all stored in $a. If you'd like to examine a single
array element, you can specify its index number. If an array has 53 elements, then its valid index numbers are 0 to
52 (the index always starts at 0).
It is important to understand just when PowerShell will use arrays. If a command returns just one result, it will
happily return that exact result to you. Only when a command returns more than one result will it wrap them in an
array.
$result = Dir
$result -is [array]
True
$result = Dir C:\autoexec.bat
$result -is [array]
False
Of course, this will make writing scripts difficult because sometimes you cannot predict whether a command will
return one, none, or many results. That's why you can make PowerShell return any result as an array.
Use @() if you'd like to force a command to always return its result as an array. This way you can find out the
number of files in a folder even when there is only one file (or none), and you can combine the steps into a single
line:
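Both techniques might look like this (the folder and the filter text are just examples):

```powershell
# Force an array even if there is only one file (or none):
$count = @(Dir $env:windir\*.ini).Count
$count

# Filter the ipconfig output down to the lines you care about:
ipconfig | Where-Object { $_ -like "*Address*" }
```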
As such, the result of ipconfig was passed to Where-Object, which filtered out all text lines that did not contain the
keyword you were seeking. With minimal effort, you can now reduce the results of ipconfig to the information you
deem relevant.
Dir
Directory: Microsoft.PowerShell.Core\FileSystem::C:\Users\
Tobias Weltner
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 10/01/2007 16:09 Application Data
d---- 07/26/2007 11:03 Backup
d-r-- 04/13/2007 15:05 Contacts
d---- 06/28/2007 18:33 Debug
d-r-- 10/04/2007 14:21 Desktop
d-r-- 10/04/2007 21:23 Documents
d-r-- 10/09/2007 12:21 Downloads
(...)
$result = Dir
$result.Count
82
Every element in an array will represent a file or a directory. So if you output an element from the array to the
console, PowerShell will automatically convert the object into text:
$array = 1,2,3,4
$array
1
2
3
4
$array = 1..4
$array
1
2
3
4
Polymorphic Arrays
Just like variables, individual elements of an array can store any type of value you assign. This way, you can store
whatever you want in an array, even a mixture of different data types. Again, you can separate the elements by using
commas:
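For example:

```powershell
# One array holding a string, a number, a date, a null value, and a boolean:
$array = "Hello", 12, (Get-Date), $null, $true
$array.Count

5
```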
Why is the Get-Date cmdlet enclosed in parentheses? Just try it without parentheses. Arrays can only store data, and
Get-Date is a command, not data. Since you want PowerShell to evaluate the command first and then put its result
into the array, you need to use parentheses. Parentheses identify a sub-expression and tell PowerShell to evaluate
and process it first.
$array = ,1
$array.Length
1
You'll need to use the construct @(...) to create an array without any elements at all:
$array = @()
$array.Length
0
$array = @(12)
$array
12
$array = @(1,2,3,"Hello")
$array
1
2
3
Hello
Why would you want to create an empty array in the first place? Because you can add elements to it like this when
you start with an empty array:
$array = @()
$array += 1
$array += 3
$array
1
3
Remember, the first element in your array will always have the index number 0. The index -1 will always give you
the last element in an array. The example demonstrates that the total number of all elements will be returned in two
properties: Count and Length. Both of these properties will behave identically.
Here is a real-world example using arrays and accessing individual elements. First, assume you have a path and
want to access only the file name. Every string object has a built-in method called Split() that can split the text into
chunks. All you will need to do is submit the split character that is used to separate the chunks:
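For example:

```powershell
$path = "c:\subfolder\file.txt"
$array = $path.Split("\")
$array

c:
subfolder
file.txt
```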
As you see, by splitting a path at the backslash, you will get its components. The file name is always the last element
of that array. So to access the filename, you will access the last array element:
PS > $array[-1]
file.txt
Likewise, if you are interested in the file name extension, you can change the split character and use "." instead:
PS > $path.Split('.')[-1]
txt
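Square brackets also accept a comma-separated list of index numbers, which returns several elements at once:

```powershell
$array = 1..20
# Pick the second, fifth, eighth, and thirteenth elements:
$array[1,4,7,12]

2
5
8
13
```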
The second line will select the second, fifth, eighth, and thirteenth elements (remember that the index begins at 0).
You can use this approach to reverse the contents of an array:
Reversing the contents of an array using the approach described above is not particularly efficient because
PowerShell has to store the result in a new array. Instead, you can use the special array functions of the .NET
Framework (see Chapter 6). This will enable you to reverse the contents of an array very efficiently:
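A sketch using the static Reverse() method of the .NET Array class, which reverses the array in place:

```powershell
$array = 1..5
[array]::Reverse($array)
$array

5
4
3
2
1
```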
As you can imagine, creating new arrays to add or remove array elements is a slow and expensive approach and is
only useful for occasional array manipulations. A much more efficient way is to convert an array to an ArrayList
object, which is a specialized array. You can use it as a replacement for regular arrays and benefit from the added
functionality, which makes it easy to add, remove, insert or even sort array contents:
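A sketch of converting plain values into an ArrayList and using its extra methods ([void] just suppresses the index numbers that Add() and AddRange() would otherwise return):

```powershell
$list = New-Object System.Collections.ArrayList
[void]$list.AddRange((1,5,3))
[void]$list.Add(12)      # append an element
$list.Remove(5)          # remove an element by value
$list.Sort()             # sort in place
$list

1
3
12
```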
The example shows how you can retrieve the values in a hash table using the assigned key. There are two forms of
notation you can use to do this: square brackets, like $hashtable["key"], and dot notation, like $hashtable.key.
You can now use your hash table to add the calculated property to objects:
Note: Because of a PowerShell bug, this will only work when you create the hash table with initial values like in the
example above. It will not work when you first create an empty hash table and then add the key-value pairs in a
second step.
Hash tables can control even more aspects when using them in conjunction with the family of Format-* cmdlets. For
example, if you use Format-Table, you can then pass it a hash table with formatting details:
You can just define a hash table with the formatting information and pass it on to Format-Table:
# Setting formatting specifications for each column in a hash table:
$column1 = @{expression="Name"; width=30; label="filename"; alignment="left"}
$column2 = @{expression="LastWriteTime"; width=40; label="last modification";
alignment="right"}
# Output Dir command result with format table and selected formatting:
Dir | Format-Table $column1, $column2
filename                       last modification
--------                       -----------------
Application data 10/1/2007 16:09:57
Backup 07/26/2007 11:03:07
Contacts 04/13/2007 15:05:30
Debug 06/28/2007 18:33:29
Desktop 10/4/2007 14:21:20
Documents 10/4/2007 21:23:10
(...)
You'll learn more about format cmdlets like Format-Table in Chapter 5.
# Check result:
$list
Name Value
---- -----
Name PC01
Location Hanover
Date 08/21/2007 13:00:18
IP 10.10.10.10
User Tobias Weltner
You can create empty hash tables and then insert keys as needed because it's easy to insert new keys in an existing
hash table:
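For example (the values are taken from the listing above):

```powershell
# Start with an empty hash table, then insert key-value pairs as needed:
$list = @{}
$list["Name"] = "PC01"
$list["IP"] = "10.10.10.10"
$list.User = "Tobias Weltner"
```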
# Overwrite the value of an existing key with a new value (two possible notations):
$list["Date"] = (Get-Date).AddDays(-1)
$list.Location = "New York"
Name Value
---- -----
Name PC01
Location New York
Date 08/20/2007 13:10:12
IP 10.10.10.10
User Tobias Weltner
If you'd like to completely remove a key from the hash table, use Remove() and as an argument specify the key that
you want to remove:
$list.remove("Date")
If you use Format-Table, you can pass it a hash table with formatting specifications. This enables you to control
how the result of the command is formatted.
Every column is defined with its own hash table. In the hash table, values are assigned to the following four keys:
All you need to do is to pass your format definitions to Format-Table to ensure that your listing shows just the name
and date of the last modification in two columns:
You'll learn more about format cmdlets like Format-Table in Chapter 5.
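Arrays (and hash tables) are reference types, which the following sketch demonstrates:

```powershell
$array1 = 1,2,3
$array2 = $array1        # both variables now point to the same array
$array2[0] = 99
$array1[0]               # the change is visible through $array1, too

99
```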
Although the contents of $array2 were changed in this example, this affects $array1 as well because both variables
internally reference the same storage area. Therefore, if you want a truly independent copy of an array or hash table,
you have to create a clone:
$array1 = 1,2,3
$array2 = $array1.Clone()
$array2[0] = 99
$array1[0]
1
Whenever you add new elements to an array (or a hash table) or remove existing ones, a copy action takes place
automatically in the background and its results are stored in a new array or hash table. The following example
clearly shows the consequences:
# Assign a new element to $array2. A new array is created in the process and stored in $array2:
$array2 += 4
$array2[0]=99
# Create a strongly typed array that can store whole numbers only:
[int[]]$array = 1,2,3
In the example, $array was defined as an array of the Integer type. Now, the array is able to store only whole
numbers. If you try to store values in it that cannot be turned into whole numbers, an error will be reported.
Summary
Arrays and hash tables can store as many separate elements as you like. Arrays assign a sequential index number to
elements that always begin at 0. Hash tables in contrast use a key name. That's why every element in hash tables
consists of a key-value pair.
You create new arrays with @(Element1, Element2, ...). You can also leave out @() for arrays and only use the
comma operator. You create new hash tables with @{key1=value1; key2=value2; ...}. The @{} notation must always
be used for hash tables; semi-colons by themselves are not sufficient to create a new hash table.
You can address single elements of an array or hash table by using square brackets. Specify either the index number
(for arrays) or the key (for hash tables) of the desired element in the square brackets. Using this approach, you can
also select and retrieve several elements at the same time.
The PowerShell pipeline chains together a number of commands similar to a production assembly. So, one
command hands over its result to the next, and at the end, you receive the result.
Topics Covered:
Using the PowerShell Pipeline
Command chains are really nothing new. The old console was already able to forward (or "pipe") the results of one
command to the next using the pipe operator "|". One of the better-known uses was to pipe data to the tool more,
which would then present the data one screen page at a time:
Dir | more
In contrast to the traditional concept of text piping, the PowerShell pipeline will take an object-oriented approach
and implement it in real time. Have a look:
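A sketch of such a pipeline (the report file name is just an example):

```powershell
Dir $env:windir |
  Sort-Object Length |
  Select-Object Name, Length |
  ConvertTo-Html |
  Out-File report.htm
```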
It returns an HTML report on the Windows directory contents sorted by file size. All of this starts with a Dir
command, which passes its result to Sort-Object. The sorted result is then limited to only the properties you want in
the report. ConvertTo-Html converts the objects to HTML, which is finally written to a file.
Object-oriented Pipeline
What you see here is a true object-oriented pipeline so the results from a command remain rich objects. Only at the
end of the pipeline will the results be reduced to text or HTML or whatever you choose for output.
Take a look at Sort-Object. It will sort the directory listing by file size. If the pipeline had simply fed plain text into
Sort-Object, you would have had to tell Sort-Object just where the file size information was to be found in the raw
text. You would also have had to tell Sort-Object to sort this information numerically and not alphabetically. Not so
here. All you need to do is tell Sort-Object which object property you want to sort by. The object nature tells
Sort-Object all it needs to know: where the information you want to sort is found, and whether it is numeric or text.
You only have to name the property because PowerShell sends results as rich .NET objects through the pipeline;
Sort-Object does the rest automatically. Simply replace Length with another property, such as Name or
LastWriteTime, to sort according to those criteria. Unlike text, information in an object is clearly structured: this is a
crucial advantage of the PowerShell pipeline.
Even a simple Dir command is internally converted into a pipeline command: PowerShell appends Out-Default to it.
Of course, the real pipeline benefits show only when you start adding more commands. Chaining several commands
allows you to use them like Lego building blocks and assemble a complete solution from single commands. The
following command outputs a listing of a directory's text files in alphabetical order:
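For example:

```powershell
Dir *.txt | Sort-Object Name
```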
Just make sure that the commands you use in a pipeline actually do process information from the pipeline. While it
is technically OK, the following line is really useless because notepad.exe cannot process pipeline results:
If you'd like to open pipeline results in an editor, you can put the results in a file first and then open the file with the
editor:
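Both situations might look like this (the file name is just an example):

```powershell
# Useless: notepad.exe ignores pipeline input:
Dir | notepad.exe

# Better: write the results to a file first, then open that file:
Dir > listing.txt
notepad listing.txt
```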
# Attention: danger!
Dir C:\ -recurse | Sort-Object
If you execute this example, you won't see any signs of life from PowerShell for a long time. If you let the command
run too long, you may even run out of memory.
Here Dir returns all files and directories on your drive C:\. These results are passed through the pipeline to
Sort-Object, and because Sort-Object can only sort the results once all of them are available, it collects them as they
come in. Those results eventually tie up too much memory for your system to handle. The two problem areas of this
blocking mode are:
First problem: You won't see any activity as long as data is being collected. The more data that has to be acquired,
the longer the wait time will be for you. In the above example, it can take several minutes.
Second problem: Because enormous amounts of data have to be stored temporarily before Sort-Object can process
them, the memory requirement is very high. In this case, it is so high that the entire Windows system will respond
more and more sluggishly until finally you won't be able to control it any longer.
That's not all. In this specific case, confusing error messages may pile up. If you have Dir output a complete
recursive folder listing, it may encounter sub-directories where you have no access rights. While Sort-Object
continues to collect results (so no results appear), error messages are not collected by Sort-Object and appear
immediately. Error messages and results get out of sync and may be misinterpreted.
Whether a command supports streaming is up to the programmer. For Sort-Object, there are technical reasons why
this command must first wait for all results. Otherwise, it wouldn't be able to sort the results. If you use commands
that are not designed for PowerShell then their authors had no way to implement the special demands of PowerShell.
For example, it will work if you use the traditional command more.com to output information one page at a time, but
more.com is also a blocking command that could interrupt pipeline streaming:
Genuine PowerShell cmdlets, functions, or scripts can also block pipelines if the programmer doesn't use
streaming. Surprisingly, the PowerShell developers forgot to add streaming support to the integrated more function,
which is why more essentially behaves no differently than the ancient more.com command:
# If the preceding command can execute its task quickly, you may not notice that it can be a block:
Dir | more.com
# If the preceding command requires much time, its blocking action may cause issues:
Dir c:\ -recurse | more.com
Tip: Use Out-Host -Paging instead of more! Out-Host is a true PowerShell cmdlet and supports streaming:
Dir C:\ -recurse | Out-Host -Paging
When you don't pipe a command's output anywhere else, PowerShell automatically appends Out-Default, so these two lines do the same:
Dir
Dir | Out-Default
Out-Default will transform the pipeline result into visible text. To do so, it will first call Format-Table (or Format-
List when there are more than five properties to output) internally, followed by Out-Host. Out-Host will output the
text in the console. So, this is what happens internally:
Dir | Format-Table *
(In the output, the column headers PSPath, PSParentPath, PSChildName, PSDrive, PSProvider, PSIsContainer,
Mode, Name, Parent, Exists, Root, FullName, Extension, CreationTime, LastAccessTime, LastWriteTime, and
Attributes are squeezed into columns so narrow that they are chopped into unreadable fragments.)
You now get so much information that columns shrink to an unreadable format.
If you'd prefer not to truncate the display for lack of space, you can use the -Wrap parameter, like this:
Dir | Format-Table * -Wrap
Still, the horizontal table design is unsuitable for more than just a handful of properties. This is why PowerShell will
use Format-List, instead of Format-Table, whenever there are more than five properties to display. You should do
the same:
Dir | Format-List *
You will now see a list of several lines for each object's property. For a folder, it might look like this:
PSPath : Microsoft.PowerShell.Core\FileSystem::C:\Users\Tobias Weltner\Music
PSParentPath : Microsoft.PowerShell.Core\FileSystem::C:\Users\Tobias Weltner
PSChildName : Music
PSDrive : C
PSProvider : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : True
Mode : d-r--
Name : Music
Parent : Tobias Weltner
Exists : True
Root : C:\
FullName : C:\Users\Tobias Weltner\Music
Extension :
CreationTime : 13.04.2007 01:54:53
CreationTimeUtc : 12.04.2007 23:54:53
LastAccessTime : 10.05.2007 21:37:26
LastAccessTimeUtc : 10.05.2007 19:37:26
LastWriteTime : 10.05.2007 21:37:26
LastWriteTimeUtc : 10.05.2007 19:37:26
Attributes : ReadOnly, Directory
A file has slightly different properties:
PSPath : Microsoft.PowerShell.Core\FileSystem::C:\Users\Tobias Weltner\views.PS1
PSParentPath : Microsoft.PowerShell.Core\FileSystem::C:\Users\Tobias Weltner
PSChildName : views.PS1
PSDrive : C
PSProvider : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
Mode : -a---
Name : views.PS1
Length : 4045
DirectoryName : C:\Users\Tobias Weltner
Directory : C:\Users\Tobias Weltner
IsReadOnly : False
Exists : True
FullName : C:\Users\Tobias Weltner\views.PS1
Extension : .PS1
CreationTime : 18.09.2007 16:30:13
CreationTimeUtc : 18.09.2007 14:30:13
LastAccessTime : 18.09.2007 16:30:13
LastAccessTimeUtc : 18.09.2007 14:30:13
LastWriteTime : 18.09.2007 16:46:12
LastWriteTimeUtc : 18.09.2007 14:46:12
Attributes : Archive
The property names are located on the left and their content on the right. You now know how to find out which
properties an object contains.
These formatting cmdlets are not just useful for converting all of an object's properties into text, but you can also
select the properties you want to see.
The next instruction retrieves a directory listing with only Name and Length. Because sub-directories don't
have a property called Length, you will see that the Length column for sub-directories is empty:
Dir | Format-Table Name, Length
Or maybe you'd like your directory listing to show how many days have passed since a file or a folder was last
modified. By using the New-TimeSpan cmdlet, you can calculate how much time has elapsed up to the current date.
To see how this works, you can look at the line below as an example that calculates the time difference between
January 1, 2000, and the current date:
New-TimeSpan "01/01/2000"
Days : 4100
Hours : 21
Minutes : 13
Seconds : 15
Milliseconds : 545
Ticks : 3543163955453834
TotalDays : 4100,8842077012
TotalHours : 98421,2209848287
TotalMinutes : 5905273,25908972
TotalSeconds : 354316395,545383
TotalMilliseconds : 354316395545,383
Use this script block to output how many days have elapsed from the LastWriteTime property up to the current date
and to read it out in its own column:
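The original script block isn't reproduced in this copy; a sketch using a calculated property (the column label "Days" is my choice) might look like this:

```powershell
# Add a calculated "Days" column: the number of days between each
# item's LastWriteTime and now, computed with New-TimeSpan:
Dir | Format-Table Name, LastWriteTime,
    @{ Label = "Days"; Expression = { (New-TimeSpan $_.LastWriteTime).Days } }
```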
Dir would then return a sub-directory listing that shows how old the file is in days:
If you call Sort-Object without arguments, Sort-Object will pick the property it uses for sorting. It's better to choose
the sorting criterion yourself, as every object's property may be used as a sorting criterion. For example, you could
use the Length property to create a descending list of a sub-directory's largest files:
Dir | Sort-Object Length -Descending
To use Sort-Object and the other cmdlets that follow, you must know which properties are available. In the last
section, you learned how to do that: send the result of a cmdlet to Select-Object *, and you'll get a list of all
available properties that you can use for sorting:
Dir | Select-Object *
Sort-Object can sort by more than one property at the same time. For example, if you'd like to sort all the files in a
folder by type first (Extension property) and then by name (Name property), you can specify both properties:
Dir | Sort-Object Extension, Name
If you need a different sort order for each property, wrap each property in a hash table:
Dir | Sort-Object @{expression="Length";Descending=$true}, @{expression="Name";Ascending=$true}
The hash table will allow you to append additional information to a property so you can separately specify for each
property your preferred sorting sequence.
Grouping Information
Group-Object groups objects based on one or more properties and then counts the groups. You only need to
specify the property you want to use as the grouping criterion. The next line returns a status
overview of services:
Get-Service | Group-Object Status
The number of groups depends only on how many different values are found in the property specified in the
grouping operation. The result objects contain the properties Count, Name, and Group. The grouped services are
stored in the Group property. The following shows how to group services by whether they are
currently running:
Get-Service | Group-Object { $_.Status -eq 'Running' }
The script block is not limited to returning True or False. The next example will use a script block that returns a file
name's first letter. The result: Group-Object will group the sub-directory contents by first letters:
Dir | Group-Object { $_.Name.SubString(0,1).ToUpper() }
This way, you can even create listings that are divided into sections:
You can use the parameter -NoElement if you don't need the grouped objects and only want to know which groups
exist. This will save a lot of memory:
Get-Service | Group-Object Status -NoElement
Where-Object takes a script block and evaluates it for every pipeline object. The current object that is travelling the
pipeline is found in $_. So Where-Object really works like a condition (see Chapter 7): if the expression results in
$true, the object will be let through.
If you aren't logged on with administrator privileges, you may not retrieve the information from some processes.
However, you can avoid exceptions by adding -ErrorAction SilentlyContinue (shortcut: -ea 0):
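As a sketch of this pattern (filtering on the Company property is my example, not from the original listing):

```powershell
# Where-Object evaluates the script block for every process; $_ is the
# current pipeline object. -ea 0 suppresses access-denied errors:
Get-Process -ea 0 | Where-Object { $_.Company -like '*Microsoft*' }
```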
Statistical Calculations
Using the Measure-Object cmdlet, you can get statistical information. For example, if you want to check file sizes, let
Dir give you a directory listing and then examine the Length property:
Dir | Measure-Object Length -Minimum -Maximum -Average -Sum
Measure-Object also accepts text files and discovers the frequency of characters, words, and lines in them:
Out-File supports the parameter -Encoding, which you can use to set the output format. If you don't remember
which encoding formats are allowed, just specify an invalid value, and the error message will tell you which
values are allowed:
Out-* cmdlets turn results into plain text, so you are reducing the richness of your results (Out-GridView is the only
exception to the rule; it displays the results in an extra window as a mini-spreadsheet).
To preserve the richness of your results, export them instead using one of the Export-* cmdlets. For example, to open
results in Microsoft Excel, do this:
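A sketch (the file name and location are my choices; Invoke-Item opens the file with whatever application is registered for .csv, which is usually Excel):

```powershell
# Export the objects with all their properties to a CSV file,
# then open it with the associated application:
Dir | Export-Csv $env:temp\listing.csv -NoTypeInformation
Invoke-Item $env:temp\listing.csv
```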
# This command not only creates a new directory but also returns the new directory:
md testdirectory
Directory: Microsoft.PowerShell.Core\FileSystem::C:\Users\Tobias Weltner
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 19.09.2007 14:31 testdirectory
rm testdirectory
HTML Outputs
If you'd like, PowerShell can also pack its results into (rudimentary) HTML files. Converting objects into HTML
format is done by ConvertTo-Html:
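A minimal sketch (the file name is my choice):

```powershell
# Convert a directory listing into an HTML table, save it,
# and open it in the default browser:
Dir | ConvertTo-Html | Out-File $env:temp\report.htm
Invoke-Item $env:temp\report.htm
```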
In this chapter, you will learn what objects are and how to get your hands on PowerShell objects before they get
converted to simple text.
Topics Covered:
How would you describe this object to someone, over the telephone? You would probably carefully examine the
object and then describe what it is and what it can do:
Properties: A pocketknife has particular properties, such as its color, manufacturer, size, or number of
blades. The object is red, weighs 55 grams, has three blades, and is made by the firm Idera. So properties
describe what an object is.
Methods: In addition, you can do things with this object, such as cut, turn screws, or pull corks out of wine
bottles. The object can cut, screw, and remove corks. Everything that an object can do is called its methods.
In the computing world, an object is very similar: its nature is described by properties, and the actions it can perform
are called its methods. Properties and methods are called members.
First, create an empty object with New-Object:
$pocketknife = New-Object Object
This new object is actually pretty useless. If you call for it, PowerShell will return "nothing":
$pocketknife
Adding Properties
Next, let's start describing what our object is. To do that, you can add properties to the object.
You can use the Add-Member cmdlet to add properties:
$pocketknife | Add-Member -MemberType NoteProperty -Name Color -Value Red
Here, you added the property Color with the value Red to the object $pocketknife. If you call for the object now, it
suddenly has a property telling the world that its color is red:
$pocketknife
Color
-----
Red
You can then add more properties to describe the object even better. This time, we use positional parameters to
shorten the code necessary to add members to the object:
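The original listing isn't reproduced here; using Add-Member's positional parameters (member type, name, value), the three remaining properties shown in the output below could be added like this:

```powershell
# Positional parameters: -MemberType, -Name, -Value
$pocketknife | Add-Member NoteProperty Weight 55
$pocketknife | Add-Member NoteProperty Manufacturer Idera
$pocketknife | Add-Member NoteProperty Blades 3
```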
By now, you've described the object in $pocketknife with a total of four properties. If you output the object in
$pocketknife in the PowerShell console, PowerShell will automatically convert the object into readable text:
You will now get a quick overview of its properties when you output the object to the console. You can access the
value of a specific property either by using Select-Object with the parameter -ExpandProperty or by adding a dot and
then the property name:
$pocketknife | Select-Object -ExpandProperty Color
$pocketknife.Color
The actions your object can do are called its methods. So let's teach your object a few useful methods:
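The original method definitions are not included in this copy; a sketch consistent with the corkscrew() output shown later ("Pop! Cheers!") might be:

```powershell
# Each ScriptMethod runs the script block in braces when called:
$pocketknife | Add-Member ScriptMethod cut       { "I'm cutting now!" }
$pocketknife | Add-Member ScriptMethod screw     { "Screwing the screw in..." }
$pocketknife | Add-Member ScriptMethod corkscrew { "Pop! Cheers!" }
```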
Again, you used the Add-Member cmdlet, but this time you added a method instead of a property (in this case, a
ScriptMethod). The value is a script block enclosed in braces, which contains the PowerShell instructions you want
the method to perform. If you output your object, it will still look the same because PowerShell only visualizes
object properties, not methods:
$pocketknife
Color Weight Manufacturer Blades
----- ------ ------------ ------
Red   55     Idera        3
To use any of the three newly added methods, add a dot and then the method name, followed by two parentheses.
The parentheses are part of the method name, so be sure not to put a space between the method name and the opening
parenthesis. Parentheses formally distinguish properties from methods.
For example, if you'd like to remove a cork with your virtual pocketknife, you can use this code:
$pocketknife.corkscrew()
Pop! Cheers!
Your object really does carry out the exact script commands you assigned to the corkscrew() method. So, methods
perform actions, while properties merely provide information. Always remember to add parentheses to method
names. If you forget them, something interesting like this will happen:
The "virtual pocketknife" example reveals that objects are containers that contain data (properties) and actions
(methods).
Our virtual pocketknife was a somewhat artificial object with no real use. Next, let's take a look at a more interesting
object: PowerShell! There is a variable called $host which represents your PowerShell host.
$Host
Name : ConsoleHost
Version : 1.0.0.0
InstanceId : e32debaf-3d10-4c4c-9bc6-ea58f8f17a8f
UI :
System.Management.Automation.Internal.Host.InternalHostUserInterface
CurrentCulture : en-US
CurrentUICulture : en-US
PrivateData : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
The object stored in the variable $host apparently contains seven properties. The properties' names are listed in the
first column. So, if you want to find out which PowerShell version you're using, you could access and return the
Version property:
$Host.Version
Major Minor Build Revision
----- ----- ----- --------
1 0 0 0
It works—you get back the PowerShell host version. The version isn't displayed as a single number. Instead,
PowerShell displays four columns: Major, Minor, Build, and Revision. Whenever you see columns, you know these
are object properties that PowerShell has just converted into text. So, the version in itself is again a special object
designed to store version numbers. Let's check out the data type that the Version property uses:
$version = $Host.Version
$version.GetType().FullName
System.Version
The version is not stored as a String object but as a System.Version object. This object type is perfect for storing
versions, allowing you to easily read all details about any given version:
$Host.Version.Major
1
$Host.Version.Build
0
Knowing an object type is very useful because once you know there is a type called System.Version, you can use it
for your own purposes as well. Try to convert a simple string of your choice into a rich version object! To do that,
simply make sure the string consists of four numbers separated by dots (the typical format for versions), then make
PowerShell convert the string into a System.Version type. You can convert things by adding the target type in
square brackets in front of the string:
[System.Version]'12.55.3.28334'
Major Minor Build Revision
----- ----- ----- --------
12 55 3 28334
The CurrentCulture property is just another example of the same concept. Read this property to find out its type:
$Host.CurrentCulture
LCID Name DisplayName
---- ---- -----------
1033 en-US English (United States)
$Host.CurrentCulture.GetType().FullName
System.Globalization.CultureInfo
The culture is again stored in a highly specialized type that describes a culture with the properties LCID,
Name, and DisplayName. If you want to know which international version of PowerShell you are using, you can
read the DisplayName property:
$Host.CurrentCulture.DisplayName
English (United States)
$Host.CurrentCulture.DisplayName.GetType().FullName
System.String
Likewise, you can convert any suitable string into a CultureInfo object. Try this if you want to find out details
about the 'de-DE' locale:
[System.Globalization.CultureInfo]'de-DE'
LCID Name DisplayName
---- ---- -----------
1031 de-DE German (Germany)
You can also convert the LCID into a CultureInfo object by converting a suitable number:
[System.Globalization.CultureInfo]1033
LCID Name DisplayName
---- ---- -----------
1033 en-US English (United States)
Now take another look at the output of $host: the UI and PrivateData properties seem to contain no or only cryptic information:
$Host
Name : ConsoleHost
Version : 1.0.0.0
InstanceId : e32debaf-3d10-4c4c-9bc6-ea58f8f17a8f
UI :
System.Management.Automation.Internal.Host.InternalHostUserInterface
CurrentCulture : en-US
CurrentUICulture : en-US
PrivateData : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
This is because both these properties again contain an object. If you'd like to find out what is actually stored in the
UI property, you can read the property:
$Host.UI
RawUI
-----
System.Management.Automation.Internal.Host.InternalHostRawUserInterface
You see that the property UI contains only a single property called RawUI, in which yet another object is stored.
Let's see what sort of object is stored in the RawUI property:
$Host.ui.rawui
ForegroundColor : DarkYellow
BackgroundColor : DarkMagenta
CursorPosition : 0,136
WindowPosition : 0,87
CursorSize : 25
BufferSize : 120,3000
WindowSize : 120,50
MaxWindowSize : 120,62
MaxPhysicalWindowSize : 140,62
KeyAvailable : False
WindowTitle : PowerShell
"RawUI" stands for "Raw User Interface" and exposes the raw user interface settings your PowerShell console uses.
You can read all of these properties, but can you also change them?
Properties need to accurately describe an object. So, if you modify a property, the underlying object has to also be
modified to reflect that change. If this is not possible, the property cannot be changed and is called "read-only."
Console background and foreground colors are a great example of properties you can easily change. If you do, the
console will change colors accordingly. Your property changes are reflected by the object, and the changed
properties still accurately describe the object.
$Host.ui.rawui.BackgroundColor = "Green"
$Host.ui.rawui.ForegroundColor = "White"
Other properties cannot be changed. If you try anyway, you'll get an error message:
$Host.ui.rawui.keyavailable = $true
"KeyAvailable" is a ReadOnly-property.
At line:1 char:16
+ $Host.ui.rawui.k <<<< eyavailable = $true
Whether a key press is available depends on whether you actually pressed a key. You cannot
control that by changing a property, so this property refuses to be changed. You can only read it.
Property Description
ForegroundColor Text color. Valid values are Black, DarkBlue, DarkGreen, DarkCyan, DarkRed, DarkMagenta,
DarkYellow, Gray, DarkGray, Blue, Green, Cyan, Red, Magenta, Yellow, and White.
BackgroundColor Background color. Valid values are Black, DarkBlue, DarkGreen, DarkCyan, DarkRed,
DarkMagenta, DarkYellow, Gray, DarkGray, Blue, Green, Cyan, Red, Magenta, Yellow, and White.
CursorPosition Current position of the cursor
WindowPosition Current position of the window
CursorSize Size of the cursor
BufferSize Size of the screen buffer
WindowSize Size of the visible window
MaxWindowSize Maximally permissible window size
MaxPhysicalWindowSize Maximum possible window size
KeyAvailable Indicates whether a key press is waiting to be processed (read-only)
WindowTitle Text in the window title bar
Property Types
Some properties accept numeric values. For example, the size of a blinking cursor is specified as a number from 0 to
100 and corresponds to the fill percentage. The next line sets a cursor size of 75%. Values outside the 0-100 numeric
range will generate an error:
# A value from 0 to 100 is permitted:
$Host.ui.rawui.cursorsize = 75
Other properties expect color settings. However, you cannot specify just any color that comes to mind. Instead,
PowerShell expects a "valid" color, and if your color is unknown, you will receive an error message listing the colors
you can use:
$Host.ui.rawui.ForegroundColor = "Pink"
If you assign an invalid value to the property ForegroundColor, the error message will list the possible values. If
you assign an invalid value to the property CursorSize, you get no hint. Why?
Every property expects a certain object type. Some object types are more specific than others. You can use
Get-Member to find out which object type a given property expects:
$Host.ui.rawui | Get-Member -Name ForegroundColor
As you can see, ForegroundColor expects a System.ConsoleColor type. This type is a highly specialized type: a list
of possible values, a so-called enumeration:
[system.ConsoleColor].IsEnum
True
Whenever a type is an enumeration, you can use a special .NET method called GetNames() to list the possible values
defined in that enumeration:
[System.Enum]::GetNames([System.ConsoleColor])
Black
DarkBlue
DarkGreen
DarkCyan
DarkRed
DarkMagenta
DarkYellow
Gray
DarkGray
Blue
Green
Cyan
Red
Magenta
Yellow
White
If you specify a value that is not contained in the enumeration, the error message will simply return the
enumeration's contents.
CursorSize stores its data in a System.Int32 object, which is simply a 32-bit number. So, if you try to set the cursor
size to 1,000, you are actually not violating the object boundaries because the value of 1,000 can be stored in a
System.Int32 object. You get an error message anyway because of the validation code that the CursorSize property
executes internally. So, whether you get detailed error information will really depend on the property's definition. In
the case of CursorSize, you will receive only an indication that your value is invalid, but not why.
Sometimes, a property expects a value to be wrapped in a specific object. For example, if you'd like to change the
PowerShell window size, you can use the WindowSize property. As it turns out, the property expects a new window
size wrapped in an object of type System.Management.Automation.Host.Size. Where can you get an object like that?
$Host.ui.rawui.WindowSize = 100,100
Exception setting "WindowSize": "Cannot convert "System.Object[]"
to "System.Management.Automation.Host.Size"."
At line:1 char:16
+ $Host.ui.rawui.W <<<< indowSize = 100,100
There are a number of ways to provide specialized objects for properties. The easiest approach: read the existing
value of a property (which will get you the object type you need), change the result, and then write back the
changes. For example, here's how you would change the PowerShell window size to 80 x 30 characters:
$value = $Host.ui.rawui.WindowSize
$value
Width Height
----- ------
110   64
$value.Width = 80
$value.Height = 30
$Host.ui.rawui.WindowSize = $value
Or, you can freshly create the object you need by using New-Object:
$value = New-Object System.Management.Automation.Host.Size(80,30)
$host.ui.rawui.WindowSize = $value
Or in one line:
$host.ui.rawui.WindowSize = New-Object System.Management.Automation.Host.Size(80,30)
To list the properties of $host, pipe the object to Get-Member:
$Host | Get-Member -MemberType Property
In the column Name, you will now see all supported properties of $host. In the column Definition, the property
object type is listed first. For example, you can see that the Name property stores text as the System.String type. The
Version property uses the System.Version type.
At the end of each definition, curly brackets report whether the property is read-only ({get;}) or can also be
modified ({get;set;}). You can see at a glance that all properties of the $host object are read-only. Now, take a
look at the $host.ui.rawui object:
$Host.ui.rawui | Get-Member -MemberType Property
This result is more differentiated: it shows that some properties can be changed while others cannot.
There are different "sorts" of properties. Most properties are of the Property type, but PowerShell can add additional
properties like ScriptProperty. So if you really want to list all properties, you can use the -MemberType parameter
and assign it a value of *Property. The wildcard in front of "Property" will also select all specialized properties like
"ScriptProperty":
$Host | Get-Member -MemberType *Property
TypeName: System.Management.Automation.Internal.Host.InternalHost
Any method that starts with "get_" is really designed to retrieve a property value. So the method "get_someInfo()"
will retrieve the very same information you could also have gotten with the "someInfo" property.
# Query property:
$Host.version
The same is true for set_ methods: they change a property value and exist only for properties that are writable.
Note that in this example all properties of the $host object are read-only, so there are no set_ methods. There can be
more internal methods like this, such as add_ and remove_ methods. Generally speaking, when a method name
contains an underscore, it is most likely an internal method.
Standard Methods
In addition, nearly every object contains a number of "inherited" methods that are also not specific to the object but
perform general tasks for every object:
Method Description
Equals Verifies whether the object is identical to a comparison object
GetHashCode Retrieves an object's digital "fingerprint"
GetType Retrieves the underlying object type
ToString Converts the object into readable text
Calling a Method
Before you invoke a method, make sure you know what the method will do. Methods are commands that do
something, and that something could be dangerous. To call a method, add a dot to the object and then the method
name, followed by an opening and a closing parenthesis, like this:
$host.EnterNestedPrompt()
The PowerShell prompt changes to ">>" (unless you changed your default prompt function). You have used
EnterNestedPrompt() to open a nested prompt. Nested prompts are not especially useful in a normal console, so be
sure to exit it again using the exit command or call $host.ExitNestedPrompt().
Nested prompts can be useful in functions or scripts because they work like breakpoints. They can temporarily stop
a function or script so you can verify variable contents or make code changes, after which you continue the code by
entering exit. You'll learn more about this in Chapter 11.
Most methods require additional arguments from you, which are listed in the Definition column.
Pick out a method from the list, and then ask Get-Member to get you more info. Let's pick WriteDebugLine():
$info = $Host.ui | Get-Member WriteDebugLine
# Definition shows which arguments are required and which result will be returned:
$info.Definition
System.Void WriteDebugLine(String message)
The Definition property tells you how to call the method. Every definition will begin with the object type that a
method returns. In this example, it is System.Void, a special object type because it represents "nothing": the method
doesn't return anything at all. A method "returning" System.Void is really a procedure, not a function.
Next follows the method's name, which is then followed by the required arguments. WriteDebugLine needs exactly one
argument called message, which is of the String type. Here is how you call WriteDebugLine():
$Host.ui.WriteDebugLine("Hello!")
Hello!
The definition is hard to read at first. You can make it more readable by using Replace() to add line breaks.
Remember the "backtick" character ("`"). It introduces special characters; "`n" stands for a line break.
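For example (where to break is a judgment call; here a newline is inserted after every comma of a sample definition string):

```powershell
# "`n" inside double quotes is a newline; Replace() inserts it
# after each comma so long definitions wrap onto several lines:
$definition = 'System.Int32 PromptForChoice(String caption, String message, Collection choices, Int32 defaultChoice)'
$definition.Replace(', ', ",`n")
```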
This definition tells you: You do not necessarily need to supply arguments:
$host.ui.WriteLine()
To output text, you can specify one argument only, the text itself:
$Host.ui.WriteLine("Hello world!")
Hello world!
The third variant adds support for foreground and background colors:
Write-Host
Write-Host "Hello World!"
Write-Host -ForegroundColor Red -BackgroundColor White Alarm!
New functionality is exposed by the method PromptForChoice(). Let's first examine which arguments this method
expects:
$Host.ui | Get-Member PromptForChoice
You can get the same information if you call the method without parentheses:
$Host.ui.PromptForChoice
MemberType : Method
OverloadDefinitions : {System.Int32 PromptForChoice(String caption, String message,
                      Collection`1 choices, Int32 defaultChoice)}
TypeNameOfValue     : System.Management.Automation.PSMethod
Value               : System.Int32 PromptForChoice(String caption, String message,
                      Collection`1 choices, Int32 defaultChoice)
Name : PromptForChoice
IsInstance : True
The definition reveals that this method returns a numeric value (System.Int32). It requires a caption and a message,
each as text (String). The third argument is a bit strange: Collection`1 choices. The fourth argument is a
number (Int32), the default selection. You may have noticed by now the limitations of PowerShell's built-in
description.
$yes = ([System.Management.Automation.Host.ChoiceDescription]"&yes")
$no = ([System.Management.Automation.Host.ChoiceDescription]"&no")
$selection = [System.Management.Automation.Host.ChoiceDescription[]]($yes,$no)
$answer = $Host.ui.PromptForChoice('Reboot', 'May the system now be rebooted?', $selection, 1)
$selection[$answer]
if ($answer -eq 0) {
"Reboot"
} else {
"OK, then not"
}
Assume you have stored a directory listing in a variable:
$listing = Dir
When you dump the variable content to the console, the results stored inside of it will be converted to plain text,
much like if you had output the information to the console in the first place:
$listing
Directory: Microsoft.PowerShell.Core\FileSystem::C:\Users\Tobias Weltner
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 20.07.2007 11:37 Application data
d---- 26.07.2007 11:03 Backup
d-r-- 13.04.2007 15:05 Contacts
d---- 28.06.2007 18:33 Debug
(...)
To get to the real objects, you can directly access them inside the variable. Dir has stored its result in $listing. It is
wrapped in an array since the listing consists of more than one entry. Access an array element to get your hands on a
real object:
$object = $listing[0]
The object picked here happens to match the folder Application Data; so it represents a directory. You can do this if
you prefer to directly pick a particular directory or file:
# Address a folder:
$object = Get-Item $env:windir
You can use Get-Member again to produce a list of all available properties:
# $object is a fully functional object that describes the "Application Data"
directory
# First, list all object properties:
$object | Get-Member -membertype *property
Name MemberType Definition
---- ---------- ----------
Mode CodeProperty System.String Mode{get=Mode;}
PSChildName NoteProperty System.String PSChildName=Windows
PSDrive NoteProperty System.Management.Automation.PSDriveInfo PS...
PSIsContainer NoteProperty System.Boolean PSIsContainer=True
PSParentPath NoteProperty System.String PSParentPath=Microsoft.PowerS...
PSPath NoteProperty System.String PSPath=Microsoft.PowerShell.C...
PSProvider NoteProperty System.Management.Automation.ProviderInfo P...
Attributes Property System.IO.FileAttributes Attributes {get;set;}
CreationTime Property System.DateTime CreationTime {get;set;}
CreationTimeUtc Property System.DateTime CreationTimeUtc {get;set;}
Exists Property System.Boolean Exists {get;}
Extension Property System.String Extension {get;}
FullName Property System.String FullName {get;}
LastAccessTime Property System.DateTime LastAccessTime {get;set;}
LastAccessTimeUtc Property System.DateTime LastAccessTimeUtc {get;set;}
LastWriteTime Property System.DateTime LastWriteTime {get;set;}
LastWriteTimeUtc Property System.DateTime LastWriteTimeUtc {get;set;}
Name Property System.String Name {get;}
Parent Property System.IO.DirectoryInfo Parent {get;}
Root Property System.IO.DirectoryInfo Root {get;}
BaseName ScriptProperty System.Object BaseName {get=$this.Name;}
Properties marked with {get;set;} in the column Definition are readable and writeable. You can actually change their
value, too, by simply assigning a new value (provided you have sufficient privileges):
# Change Date:
$object.LastAccessTime = Get-Date
PowerShell-Specific Properties
PowerShell can add additional properties to an object. Whenever that occurs, Get-Member will label the property
accordingly in the MemberType column. Native properties are just called "Property." Properties that are added by
PowerShell use a prefix, such as "ScriptProperty" or "NoteProperty."
A NoteProperty like PSChildName contains static data. PowerShell will add it to tag additional information to an
object. A ScriptProperty like Mode executes PowerShell script code that calculates the property's value.
MemberType Description
AliasProperty Alternative name for a property that already exists
CodeProperty Static .NET method returns property contents
Property Genuine property
NoteProperty Subsequently added property with set data value
ScriptProperty Subsequently added property whose value is calculated by a script
ParameterizedProperty Property requiring additional arguments
You can apply methods just like you did in the previous examples. For example, you can use the
CreateSubDirectory method if you'd like to create a new sub-directory. First, you should find out which arguments
this method requires and what it returns:
You can see that the method has two signatures. Try using the first to create a sub-directory and the second to add
access permissions.
The next line creates a sub-directory called "My New Directory" without any special access privileges:
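That line might look like this (again assuming $object holds a DirectoryInfo object):

```powershell
$object.CreateSubdirectory("My New Directory")
```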
Because the method returns a DirectoryInfo object as a result and you haven't caught and stored this object in a
variable, the pipeline will convert it into text and output it. You could just as well have stored the result of the
method in a variable:
$subdirectory = $object.CreateSubdirectory("Another subdirectory")
$subdirectory.CreationTime = "September 1, 1980"
$subdirectory.CreationTime
Monday, September 1, 1980 00:00:00
MemberType Description
CodeMethod Method mapped to a static .NET method
Method Genuine method
ScriptMethod Method invokes PowerShell code
$date = Get-Date
$date.GetType().FullName
System.DateTime
Every type can also have its own set of "static" members, which belong to the type itself rather than to individual
objects. You can simply specify a type in square brackets, pipe it to Get-Member, and then use the -Static parameter
to see the static members of a type.
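For example, to see the static members of the DateTime type:

```powershell
[System.DateTime] | Get-Member -Static
```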
There are a lot of method names starting with "op_," with "op" standing for "operator." These are methods that are
called internally whenever you use this data type with an operator. op_GreaterThanOrEqual is the method that does
the internal work when you use the PowerShell comparison operator "-ge" with date values.
The System.DateTime class supplies you with a bunch of important date and time methods. For example, you can
use Parse() to convert a date string into a real DateTime object using the current locale:
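A minimal sketch (the date string is just an example):

```powershell
# Parse() interprets the string according to your current locale:
[System.DateTime]::Parse("2010-05-01")
```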
You could easily find out whether a certain year is a leap year:
[System.DateTime]::IsLeapYear(2010)
False
When two dates are subtracted from each other, here is what happens during this operation:
• The first time indication is actually text. For it to become a DateTime object, you must specify the desired
object type in square brackets. Important: converting a String to a DateTime this way always uses the U.S.
locale. To convert a String to a DateTime using your current locale, use the Parse() method as
shown a couple of moments ago.
• The second time comes from the Now static property, which returns the current time as a DateTime object.
This is the same as calling the Get-Date cmdlet (which you'd then need to put in parentheses because you
wouldn't want to subtract the Get-Date cmdlet itself, but rather its result).
• The two timestamps are subtracted from each other using the subtraction operator ("-"). This is possible
because the DateTime class defines the op_Subtraction() static method, which is needed for this operator.
Of course, you could have called the static method yourself and received the same result:
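A sketch of both variants (the date literal is an example):

```powershell
# Using the subtraction operator:
(Get-Date) - [System.DateTime]::Parse("2000-01-01")
# Calling the underlying static method directly:
[System.DateTime]::op_Subtraction((Get-Date), [System.DateTime]::Parse("2000-01-01"))
```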
Now it's your turn. In the System.Math class, you'll find a lot of useful mathematical methods. Try to put some of
these methods to work.
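A few examples of what System.Math provides:

```powershell
[System.Math]::Sqrt(144)          # 12
[System.Math]::Round(3.14159, 2)  # 3.14
[System.Math]::Abs(-5)            # 5
```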
For example, you can use System.Net.IPAddress to work with IP addresses. This is an example of a .NET type
conversion where a string is converted into a System.Net.IPAddress type:
[system.Net.IPAddress]'127.0.0.1'
IPAddressToString : 127.0.0.1
Address : 16777343
AddressFamily : InterNetwork
ScopeId :
IsIPv6Multicast : False
IsIPv6LinkLocal : False
IsIPv6SiteLocal : False
Or you can use System.Net.DNS to resolve host names. This is an example of accessing a static type method, such as
GetHostByAddress():
[system.Net.Dns]::GetHostByAddress("127.0.0.1")
HostName Aliases AddressList
-------- ------- -----------
PCNEU01 {} {127.0.0.1}
Or you can derive an instance of a type and use its dynamic members. For example, to download a file from the
Internet, try this:
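The listing is missing here; a sketch using System.Net.WebClient might look like this (the URL and target path are placeholders):

```powershell
$webclient = New-Object System.Net.WebClient
$webclient.DownloadFile('http://www.example.com/file.zip', "$env:TEMP\file.zip")
```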
The DateTime type has one constructor that takes no argument. If you create a new instance of a DateTime object,
you will get back a date set to the very first date a DateTime type can represent, which happens to be January 1,
0001:
New-Object System.DateTime
Monday, January 01, 0001 12:00:00 AM
You can use a different constructor to create a specific date. There is one that takes three numbers for year, month,
and day:
New-Object System.DateTime(2000,5,1)
Monday, May 01, 2000 12:00:00 AM
If you simply add a number, yet another constructor is used which interprets the number as ticks, the smallest time
unit a computer can process:
New-Object System.DateTime(568687676789080999)
Monday, February 07, 1803 7:54:38 AM
Using Constructors
When you create a new object using New-Object, you can submit additional arguments by adding argument values
as a comma-separated list enclosed in parentheses. New-Object is in fact calling the type's constructor (a special
method named .ctor). Like any other method, it can support different argument signatures.
Let's check out how you can discover the different constructors that a type supports. The next line creates a
new instance of a System.String and uses a constructor that accepts a character and a number:
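That line might look like this:

```powershell
# First argument: the character; second argument: how often it is repeated
New-Object System.String('.', 100)
```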
To list the available constructors for a type, you can use the GetConstructors() method available in each type. For
example, you can find out which constructors are offered by the System.String type to produce System.String
objects:
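For example:

```powershell
# GetConstructors() lists all constructor signatures of a type:
[System.String].GetConstructors() | ForEach-Object { $_.ToString() }
```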
In fact, there are eight different signatures to create a new object of the System.String type. You just used the last
variant: the first argument is the character, and the second is a number that specifies how often the character will be
repeated. When you specify text in quotation marks, PowerShell uses the next-to-last constructor and interprets the
text as an array of characters (Char[]).
So, if you enclose the desired .NET type in square brackets and put it in front of a variable name, PowerShell will
require you to use precisely the specified object type for this variable. If you assign a value to the variable,
PowerShell will automatically convert it to that type. That process is sometimes called "implicit type conversion."
Explicit type conversion works a little differently. Here, the desired type is put in square brackets again, but placed
on the right side of the assignment operator:
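For example (the date string is illustrative):

```powershell
# The cast on the right side converts the text before the assignment happens:
$value = [DateTime]"November 12, 2004"
```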
PowerShell would first convert the text into a date because of the type specification and then assign it to the variable
$value, which itself remains a regular variable without type specification. Because $value is not limited to DateTime
types, you can assign other data types to the variable later on.
$value = "McGuffin"
Using the type casting, you can also create entirely new objects without New-Object. First, create an object using
New-Object:
New-Object system.diagnostics.eventlog("System")
Max(K) Retain OverflowAction Entries Name
------ ------ -------------- ------- ----
20,480 0 OverwriteAsNeeded 64,230 System
[System.Diagnostics.EventLog]"System"
Max(K) Retain OverflowAction Entries Name
------ ------ -------------- ------- ----
20,480 0 OverwriteAsNeeded 64,230 System
In the second example, the string System is converted into the System.Diagnostics.Eventlog type: The result is an
EventLog object representing the System event log.
So, when should you use New-Object, and when type conversion? It is largely a matter of taste, but whenever a type has
more than one constructor and you want to select the constructor, you should use New-Object and specify the
arguments for the constructor of your choice. Type conversion will automatically choose one constructor, and you
have no control over which constructor is picked.
# Using New-Object, you can select the constructor of the type yourself:
New-Object System.String(".", 100)
....................................................................................................
Type conversion can also include type arrays (identified by "[]") and can be a multi-step process where you convert
from one type over another type to a final type. This is how you would convert string text into a character array:
[char[]]"Hello!"
H
e
l
l
o
!
You could then convert each character into integers to get the character codes:
[Int[]][Char[]]"Hello World!"
72
101
108
108
111
32
87
111
114
108
100
33
Conversely, you could turn a range of numbers into a character array and then turn that into a string:
[string][char[]](65..90)
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
$OFS = ","
[string][char[]](65..90)
A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z
Just remember: if arrays are converted into a string, PowerShell uses the separator in the $ofs automatic variable as a
separator between the array elements.
Once you load an additional .NET assembly, such as Microsoft.VisualBasic, you have access to a whole bunch of new types:
TypeName: Microsoft.VisualBasic.Interaction
Or, you can use a much-improved download method, which shows a progress bar while downloading files from the
Internet:
COM objects each have a unique name, known as ProgID or Programmatic Identifier, which is stored in the
registry. So, if you want to look up COM objects available on your computer, you can visit the registry:
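One possible sketch (ProgIDs are registry keys under HKEY_CLASSES_ROOT whose names contain a dot; the filter here is a rough approximation):

```powershell
Dir REGISTRY::HKEY_CLASSES_ROOT |
  Where-Object { $_.PSChildName -like '*.*' } |
  Select-Object -ExpandProperty PSChildName -First 20
```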
Once you know the ProgID of a COM component, you can use New-Object to put it to work in PowerShell. Just
specify the additional parameter -COMObject:
You'll get an object which behaves very similar to .NET objects. It will contain properties with data and methods
that you can execute. And, as always, Get-Member finds all object members for you. Let's look at its methods:
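For example, using the WScript.Shell component discussed in this section:

```powershell
$wshell = New-Object -ComObject WScript.Shell
$wshell | Get-Member -MemberType Method
```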
The information required to understand how to use a method may be inadequate. Only the expected object types are
given, but not why the arguments exist. The Internet can help you if you want to know more about a COM
command. Go to a search site of your choice and enter two keywords: the ProgID of the COM components (in this
case, it will be WScript.Shell) and the name of the method that you want to use.
Some of the commonly used COM objects are WScript.Shell, WScript.Network, Scripting.FileSystemObject,
InternetExplorer.Application, Word.Application, and Shell.Application. Let's create a shortcut to powershell.exe
using the WScript.Shell COM object and its method CreateShortcut():
# Create an object:
$wshell = New-Object -ComObject WScript.Shell
# Assign the path to the Desktop to the variable $path:
$path = [System.Environment]::GetFolderPath('Desktop')
# Create the shortcut object, set its target, and save it:
$link = $wshell.CreateShortcut("$path\PowerShell.lnk")
$link.TargetPath = "$env:WinDir\System32\WindowsPowerShell\v1.0\powershell.exe"
$link.Save()
TypeName: System.__ComObject#{f935dc23-1cf0-11d0-adb9-00c04fd58a0b}
Objects are the result of all PowerShell commands and are not converted to readable text until you output the objects
to the console. However, if you save a command's result in a variable, you get a handle on the original objects
and can evaluate their properties or invoke their methods. If you would like to see all of an object's properties,
you can pass the object to Format-List and type an asterisk after it. This outputs all properties as text, not only
the most important ones.
The Get-Member cmdlet retrieves even more data, enabling you to output detailed information on the properties and
methods of any object.
All the objects that you work with in PowerShell originate from the .NET Framework, on which PowerShell is
layered. Aside from the objects that PowerShell commands provide to you as results, you can also invoke objects
directly from the .NET Framework and gain access to a powerful arsenal of new commands. Along with the dynamic
methods furnished by objects, there are also static methods, which are provided directly by the class from which
objects are derived.
If you cannot perform a task with the cmdlets, regular console commands, or methods of the .NET framework, you
can resort to the unmanaged world outside the .NET framework. You can directly access the low-level API
functions, the foundation of the .NET framework, or use COM components.
Conditions are what you need to make scripts clever. Conditions can evaluate a situation and then take appropriate
action. There are a number of condition constructs in the PowerShell language, which we will look at in this
chapter.
In the second part, you'll employ conditions to execute PowerShell instructions only if a particular condition is
actually met.
Topics Covered:
Creating Conditions
o Table 7.1: Comparison operators
o Carrying Out a Comparison
o "Reversing" Comparisons
o Combining Comparisons
Table 7.2: Logical operators
o Comparisons with Arrays and Collections
Verifying Whether an Array Contains a Particular Element
Where-Object
o Filtering Results in the Pipeline
o Putting a Condition
If-ElseIf-Else
Switch
o Testing Range of Values
o No Applicable Condition
o Several Applicable Conditions
o Using String Comparisons
Case Sensitivity
Wildcard Characters
Regular Expressions
o Processing Several Values Simultaneously
Summary
Creating Conditions
A condition is really just a question that can be answered with yes (true) or no (false). The following PowerShell
comparison operators allow you to compare values:
PowerShell doesn't use traditional comparison operators that you may know from other programming languages. In
particular, the "=" operator is an assignment operator only in PowerShell, while ">" and "<" operators are used for
redirection.
There are three variants of all comparison operators. The basic variant is case-insensitive so it does not distinguish
between upper and lower case letters (if you compare text). To explicitly specify whether case should be taken into
account, you can use variants that begin with "c" (case-sensitive) or "i" (case-insensitive).
4 -eq 10
False
"secret" -ieq "SECRET"
True
As long as you compare only numbers or only strings, comparisons are straightforward. But as soon as you mix data types, the results can be surprising:
12 -eq "Hello"
False
12 -eq "000012"
True
"12" -eq 12
True
"12" -eq 012
True
"012" -eq 012
False
123 -lt 123.4
True
123 -lt "123.4"
False
123 -lt "123.5"
True
Are the results surprising? When you compare different data types, PowerShell will try to convert the data types into
one common data type. It will always look at the data type to the left of the comparison operator and then try and
convert the value to the right to this data type.
"Reversing" Comparisons
With the logical operator -not you can reverse comparison results. It will expect an expression on the right side that
is either true or false. Instead of -not, you can also use "!":
$a = 10
$a -gt 5
True
-not ($a -gt 5)
False
You should make good use of parentheses when working with logical operators like -not. Logical operators are
interested in the result of a comparison, not in the comparison itself. That's why the comparison should
always be in parentheses.
Combining Comparisons
You can combine several comparisons with logical operators because every comparison returns either True or False.
The following conditional statement would evaluate to true only if both comparisons evaluate to true:
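For example ($age and $name are example variables, not part of the original listing):

```powershell
$age = 20
$name = "Frank"
# Both comparisons must be true for the combined result to be True:
($age -ge 18) -and ($name -eq "Frank")   # True
```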
In this case, comparison operators work pretty much as a filter and return a new array that only contains the
elements that matched the comparison.
1,2,3,4,3,2,1 -eq 3
3
3
If you'd like to see only the elements of an array that don't match the comparison value, you can use -ne (not equal)
operator:
1,2,3,4,3,2,1 -ne 3
1
2
4
2
1
But how would you find out whether an array contains a particular element? As you have seen, -eq returns the
matching array elements only. The -contains and -notcontains operators verify whether a certain value exists in an array.
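For example:

```powershell
1,2,3,4,3,2,1 -contains 3     # True
1,2,3,4,3,2,1 -notcontains 9  # True
```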
Where-Object
In the pipeline, the results of a command are handed over to the next one and the Where-Object cmdlet will work
like a filter, allowing only those objects to pass the pipeline that meet a certain condition. To make this work, you
can specify your condition to Where-Object.
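A sketch matching the Notepad example discussed next:

```powershell
Get-Process | Where-Object { $_.Name -eq 'notepad' }
```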
Here are two things to note: if the call does not return anything at all, then there are probably no Notepad processes
running. Before you make the effort and use Where-Object to filter results, you should make sure the initial cmdlet
has no parameter to filter the information you want right away. For example, Get-Process already supports a
parameter called -name, which will return only the processes you specify:
The only difference with the latter approach: if no Notepad process is running, Get-Process throws an exception,
telling you that there is no such process. If you don't like that, you can always add the parameter -ErrorAction
SilentlyContinue, which will work for all cmdlets and hide all error messages.
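For example:

```powershell
Get-Process -Name notepad
# Suppress the error message if no such process is running:
Get-Process -Name notepad -ErrorAction SilentlyContinue
```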
When you revisit your Where-Object line, you'll see that your condition is specified in curly brackets after the
cmdlet. The $_ variable contains the current pipeline object. While sometimes the initial cmdlet is able to do the
filtering all by itself (like in the previous example using -name), Where-Object is much more flexible because it can
filter on any piece of information found in an object.
You can use the next one-liner to retrieve all processes whose company name begins with "Micro" and output name,
description, and company name:
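A sketch of such a one-liner:

```powershell
Get-Process |
  Where-Object { $_.Company -like 'Micro*' } |
  Select-Object Name, Description, Company
```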
# The two following instructions return the same result: all running services
Get-Service | Where-Object {$_.Status -eq 'Running' }
Get-Service | ? {$_.Status -eq 'Running' }
If-ElseIf-Else
Where-object works great in the pipeline, but it is inappropriate if you want to make longer code segments
dependent on meeting a condition. Here, the If..ElseIf..Else statement works much better. In the simplest case, the
statement will look like this:
The condition must be enclosed in parentheses and follow the keyword If. If the condition is met, the code in the
curly brackets after it will be executed, otherwise, it will not. Try it out:
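For example:

```powershell
$a = 5
if ($a -gt 10) { "$a is larger than 10" }
# (no output: the condition is not met)
```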
It's likely, though, that you won't (yet) see a result. The condition was not met, and so the code in the curly brackets
wasn't executed. To get an answer, you can make sure that the condition is met:
$a = 11
if ($a -gt 10) { "$a is larger than 10" }
11 is larger than 10
Now, the comparison is true, and the If statement ensures that the code in the curly brackets will return a result. As it
is, that clearly shows that the simplest If statement usually doesn't suffice in itself, because you would like to always
get a result, even when the condition isn't met. You can expand the If statement with Else to accomplish that:
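For example:

```powershell
$a = 5
if ($a -gt 10) { "$a is larger than 10" } else { "$a is less than or equal to 10" }
# -> 5 is less than or equal to 10
```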
Now, the code in the curly brackets after If is executed if the condition is met. However, if the preceding condition
isn't true, the code in the curly brackets after Else will be executed. If you have several conditions, you may insert as
many ElseIf blocks between If and Else as you like:
The If statement here will always execute the code in the curly brackets after the condition that is met. The code
after Else will be executed when none of the preceding conditions are true. What happens if several conditions are
true? Then the code after the first applicable condition will be executed and all other applicable conditions will be
ignored.
The fact is that the If statement doesn't care at all about the condition that you state. All the If statement
evaluates is $true or $false. If the condition evaluates to $true, the code in the curly brackets after it will be executed;
otherwise, it will not. Conditions are only a way to return one of the requested values $true or $false. But the value
could come from another function or from a variable:
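For example (Test-Condition is a made-up helper function):

```powershell
function Test-Condition { $true }
$result = Test-Condition
if ($result) { "The condition is true" }
# -> The condition is true
```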
This example shows that the condition after If must always be in parentheses, but it can also come from any source
as long as it is $true or $false. In addition, you can also write the If statement in a single line. If you'd like to execute
more than one command in the curly brackets without having to use new lines, then you should separate the
commands with a semi-colon ";".
Switch
If you'd like to test a value against many comparison values, the If statement can quickly become unreadable. The
Switch code is much cleaner:
This is how you can use the Switch statement: the value to switch on is in the parentheses after the Switch keyword.
That value is matched with each of the conditions on a case-by-case basis. If a match is found, the action associated
with that condition is then performed. You can use the default comparison operator, the -eq operator, to verify
equality.
$value = 8
switch ($value)
{
# Instead of a standard value, a code block is used that results in True for numbers smaller than or equal to 5:
{$_ -le 5} { "Number from 1 to 5" }
# A value is used here; Switch checks whether this value matches $value:
6 { "Number 6" }
# Another code block combines two conditions:
{(($_ -gt 6) -and ($_ -le 10))} { "Number from 7 to 10" }
}
Number from 7 to 10
The code block {$_ -le 5} includes all numbers less than or equal to 5.
The code block {(($_ -gt 6) -and ($_ -le 10))} combines two conditions and results in true if the number is
larger than 6 and less than or equal to 10. You can combine any PowerShell statements in
the code block and also use the logical operators listed in Table 7.2.
Here, you can use the initial value stored in $_ for your conditions, but because $_ is generally available anywhere
in the Switch block, you could just as well have put it to work in the result code:
$value = 8
switch ($value)
{
# The initial value (here it is in $value) is available in the variable $_:
{$_ -le 5} { "$_ is a number from 1 to 5" }
6 { "Number 6" }
{(($_ -gt 6) -and ($_ -le 10))} { "$_ is a number from 7 to 10" }
}
8 is a number from 7 to 10
No Applicable Condition
If no condition is met, the If clause provides the Else statement, which serves as a catch-all. Likewise, Switch has
a similar catch-all called default:
$value = 50
switch ($value)
{
{$_ -le 5} { "$_ is a number from 1 to 5" }
6 { "Number 6" }
{(($_ -gt 6) -and ($_ -le 10))} { "$_ is a number from 7 to 10" }
# The code after the next statement will be executed if no other condition has been met:
default {"$_ is a number outside the range from 1 to 10" }
}
50 is a number outside the range from 1 to 10
Several Applicable Conditions
In contrast to If, the Switch clause will execute the code for all conditions that are met. So, if two conditions
are both met, Switch will execute both, whereas If would have executed only the first matching condition's code. To
change the Switch default behavior and make it execute only the first matching code, use the statement
continue inside of a code block.
$value = 50
switch ($value)
{
50 { "the number 50" }
{$_ -gt 10} {"larger than 10"}
{$_ -is [int]} {"Integer number"}
}
the number 50
larger than 10
Integer number
Consequently, all applicable conditions will ensure that the following code is executed. So in some circumstances,
you may get more than one result.
Try out that example, but assign 50.0 to $value. In this case, you'll get just two results instead of three. Do you know
why? That's right: the third condition is no longer fulfilled because the number in $value is no longer an integer
number. However, the other two conditions continue to remain fulfilled.
If you'd like to receive only one result, you can add the continue or break statement to the code.
$value = 50
switch ($value)
{
50 { "the number 50"; break }
{$_ -gt 10} {"larger than 10"; break}
{$_ -is [int]} {"Integer number"; break}
}
The number 50
The keyword break tells PowerShell to leave the Switch construct. In conditions, break and continue are
interchangeable. In loops, they work differently: while break exits a loop immediately, continue only skips
the current iteration.
$action = "sAVe"
switch ($action)
{
"save" { "I save..." }
"open" { "I open..." }
"print" { "I print..." }
Default { "Unknown command" }
}
I save...
Case Sensitivity
Since the -eq comparison operator doesn't distinguish between lower and upper case, case sensitivity doesn't play a
role in comparisons. If you want to distinguish between them, you can use the -case option. Working behind the
scenes, it will replace the -eq comparison operator with -ceq, after which case sensitivity will suddenly become
crucial:
$action = "sAVe"
switch -case ($action)
{
"save" { "I save..." }
"open" { "I open..." }
"print" { "I print..." }
Default { "Unknown command" }
}
Unknown command
Wildcard Characters
In fact, you can also exchange the standard comparison operator for the -like or -match operator and then carry out
wildcard comparisons. Using the -wildcard option, you activate the -like operator, which supports wildcard
characters such as "*":
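For example:

```powershell
$file = "output.txt"
switch -wildcard ($file)
{
    "*.txt" { "A text file" }
    "*.ps1" { "A PowerShell script" }
    "*.log" { "A log file" }
}
# -> A text file
```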
Regular Expressions
Simple wildcard characters cannot always be used for recognizing patterns. Regular expressions are much more
powerful, but they assume much more basic knowledge, which is why you should take a peek ahead at Chapter 13,
which discusses regular expressions in greater detail.
With the -regex option, you can ensure that Switch uses the -match comparison operator instead of -eq, and thus
employs regular expressions. Using regular expressions, you can identify a pattern much more precisely than by
using simple wildcard characters. But that's not all! As with the -match operator, you will usually get
back the text that matches the pattern in the $matches variable. This way, you can even parse information out of the
text:
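A sketch (the text and pattern are illustrative):

```powershell
$text = "Windows PowerShell 1.0 was released on 2006-11-14."
switch -regex ($text)
{
    # $matches[0] contains the text matched by the pattern:
    "\d{4}-\d{2}-\d{2}" { "The text contains the date $($matches[0])" }
}
# -> The text contains the date 2006-11-14
```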
The result of the -match comparison with the regular expression is returned in $matches, a hash table with each
result, because regular expressions can, depending on their form, return several results. In this example, only the
first result, retrieved with $matches[0], should interest you. The entire expression is embedded in $(...) to ensure
that this result appears in the output text.
$array = 1..5
switch ($array)
{
{$_ % 2} { "$_ is uneven."}
Default { "$_ is even."}
}
1 is uneven.
2 is even.
3 is uneven.
4 is even.
5 is uneven.
There you have it: Switch will accept not only single values, but also entire arrays and collections. As such, Switch
would be an ideal candidate for evaluating results on the PowerShell pipeline because the pipeline character ("|") is
used to forward results as arrays or collections from one command to the next.
The next line queries Get-Process for all running processes and then pipes the result to a script block (& {...}). In the
script block, Switch will evaluate the result of the pipeline, which is available in $input. If the WS property of a
process is larger than one megabyte, this process is output. Switch will then filter all of the processes whose WS
property is less than or equal to one megabyte:
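That line might look like this:

```powershell
# The script block receives the pipeline results in $input:
Get-Process | & { switch ($input) { { $_.WS -gt 1MB } { $_ } } }
```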
However, this line is extremely hard to read and seems complicated. You can formulate the condition in a much
clearer way by using Where-Object:
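For example:

```powershell
Get-Process | Where-Object { $_.WS -gt 1MB }
```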
This variant also works more quickly, because Switch has to wait until the pipeline has collected the entire result of
the preceding command in $input, while Where-Object processes the results of the preceding command as soon as
they are ready. This difference is especially striking for elaborate commands:
Summary
Intelligent decisions are based on conditions, which in their simplest form can be reduced to plain Yes or No
answers. Using the comparison operators listed in Table 7.1, you can formulate such conditions and even combine
these with the logical operators listed in Table 7.2 to form complex queries.
The simple Yes/No answers of your conditions will determine whether particular PowerShell instructions are carried
out or not. In the simplest form, you can use the Where-Object cmdlet in the pipeline. It functions there like a
filter, allowing only those results through the pipeline that correspond to your condition.
If you would like more control, or would like to execute larger code segments independently of conditions, you can
use the If statement, which evaluates as many different conditions as you wish and, depending on the result, will
then execute the allocated code. This is the typical "If-Then" scenario: if certain conditions are met, then certain
code segments will be executed.
An alternative to the If statement is the Switch statement. Using it, you can compare a fixed initial value with various
possibilities. Switch is the right choice when you want to check a particular variable against many different possible
values.
Loops repeat PowerShell code and are the heart of automation. In this chapter, you will learn the PowerShell loop
constructs.
Topics Covered:
ForEach-Object
o Invoking Methods
Foreach
Do and While
o Continuation and Abort Conditions
o Using Variables as Continuation Criteria
o Endless Loops without Continuation Criteria
For
o For Loops: Just Special Types of the While Loop
o Unusual Uses for the For Loop
Switch
Exiting Loops Early
o Continue: Skipping Loop Cycles
o Nested Loops and Labels
Summary
ForEach-Object
Many PowerShell cmdlets return more than one result object. You can use the pipeline loop ForEach-Object to
process them all, one after another. In fact, you can also use this loop to repeat code multiple times. The next line
will launch 10 instances of the Notepad editor:
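For example:

```powershell
1..10 | ForEach-Object { notepad }
```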
Foreach-Object is simply a cmdlet, and the script block following it really is an argument assigned to Foreach-
Object:
Inside of the script block, you can execute any code. You can also execute multiple lines of code. You can use a
semicolon to separate statements from each other in one line:
The element processed by the script block is available in the special variable $_:
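For example:

```powershell
1..3 | ForEach-Object { "Current element: $_" }
# -> Current element: 1
#    Current element: 2
#    Current element: 3
```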
Most of the time, you will not feed numbers into Foreach-Object, but instead the results of another cmdlet. Have a
look:
Get-Process | Foreach-Object { 'Process {0} consumes {1} seconds CPU time' -f $_.Name, $_.CPU }
Invoking Methods
Because ForEach-Object will give you access to each object in a pipeline, you can invoke methods of these objects.
In Chapter 7, you learned how to take advantage of this to close all instances of the Notepad. This will give you
much more control. You could use Stop-Process to stop a process. But if you want to close programs gracefully, you
should provide the user with the opportunity to save unsaved work by also invoking the method
CloseMainWindow(). The next line closes all instances of Notepad windows. If there is unsaved data, a dialog
appears asking the user to save it first:
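The line might look like this (it assumes at least one Notepad instance is running):

```powershell
Get-Process notepad | ForEach-Object { $_.CloseMainWindow() }
```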
You can also solve more advanced problems. If you want to close only those instances of Notepad that were running
for more than 10 minutes, you can take advantage of the property StartTime. All you need to do is calculate the
running time using New-TimeSpan. Let's first get a listing that tells you how many minutes an instance of Notepad has
been running:
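A sketch of such a listing (the Minutes column is added on the fly, as explained next):

```powershell
Get-Process notepad | ForEach-Object {
    # Select-Object adds the (not yet existing) Minutes property to a copy of the object:
    $info = $_ | Select-Object Name, StartTime, Minutes
    $info.Minutes = (New-TimeSpan $_.StartTime).TotalMinutes
    $info
}
```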
Check out a little trick. In the above code, the script block creates a copy of the incoming object using Select-Object,
which selects the columns you want to view. We specified an additional property called Minutes to display the
running minutes, which are not part of the original object. Select-Object will happily add that new property to the
object. Next, we can fill in the information into the Minutes property. This is done using New-Timespan, which
calculates the time difference between now and the time found in StartTime. Don't forget to output the $info object
at the end or the script block will have no result.
To kill only those instances of Notepad that were running for more than 10 minutes, you will need a condition:
This code would only return Notepad processes running for more than 10 minutes and you could pipe the result into
Stop-Process to kill those.
What you see here is a Foreach-Object loop with an If condition. This is exactly what Where-Object does, so if you
need loops with conditions to filter out unwanted objects, you can simplify:
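For example:

```powershell
Get-Process notepad |
  Where-Object { (New-TimeSpan $_.StartTime).TotalMinutes -gt 10 }
```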
Foreach
There is another looping construct called Foreach. Don't confuse this with the Foreach alias, which represents
Foreach-Object. So, if you see a Foreach statement inside a pipeline, this really is a Foreach-Object cmdlet. The
true Foreach loop is never used inside the pipeline. Instead, it can only live inside a code block.
While Foreach-Object obtains its entries from the pipeline, the Foreach statement iterates over a collection of
objects:
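For example:

```powershell
$array = 1, 2, 3
foreach ($element in $array) { "Current element: $element" }
```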
The true Foreach statement does not use the pipeline architecture. This is the most important difference because it
has very practical consequences. The pipeline has a very low memory footprint because there is always only one
object travelling the pipeline. In addition, the pipeline processes objects in real time. That's why it is safe to process
even large sets of objects. The following line iterates through all files and folders on drive c:\. Note how results are
returned immediately:
Dir C:\ -recurse -erroraction SilentlyContinue | ForEach-Object { $_.FullName }
If you tried the same with Foreach, the first thing you would notice is that there is no output for a long time.
Foreach does not work in real time: it first collects all results before it starts to iterate. If you tried to
enumerate all files and folders on your drive C:\, chances are that your system would run out of memory before it had
a chance to process the results. You must be careful with the following statement:
# careful!
foreach ($element in Dir C:\ -recurse -erroraction SilentlyContinue) { $element.FullName }
On the other hand, foreach is much faster than foreach-object because the pipeline has a significant overhead. It is
up to you to decide whether you need memory efficient real-time processing or fast overall performance:
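One way to see the difference is Measure-Command. Absolute numbers vary from system to system, but the statement version typically finishes noticeably faster:

```powershell
(Measure-Command { 1..10000 | ForEach-Object { $_ } }).TotalMilliseconds
(Measure-Command { foreach ($i in 1..10000) { $i } }).TotalMilliseconds
```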
Do and While
Do and While generate endless loops. Endless loops are a good idea if you don't know exactly how many times the
loop should iterate. You must set additional abort conditions to prevent an endless loop from really running
endlessly. The loop will end when those conditions are met.
do {
$Input = Read-Host "Your homepage"
} while (!($Input -like "www.*.*"))
This loop asks the user for his home page Web address. While specifies the criterion that has to be met at the end of
the loop so that the loop can be iterated once again. In the example, -like is used to verify whether the input matches
the www.*.* pattern. While that's only an approximate verification, it usually suffices. You could also use regular
expressions to refine your verification. Both procedures will be explained in detail in Chapter 13.
This loop is supposed to re-iterate only if the input is false. That's why "!" is used to simply invert the result of the
condition. The loop will then be iterated until the input does not match a Web address.
In this type of endless loop, verification of the loop criteria doesn't take place until the end. The loop will go through
its iteration at least once because you have to query the user at least once before you can check the criteria.
There are also cases in which the criteria needs to be verified at the beginning and not at the end of the loop. An
example would be a text file that you want to read one line at a time. The file could be empty and the loop should
check before its first iteration whether there's anything at all to read. To accomplish this, just put the While statement
and its criteria at the beginning of the loop (and leave out Do, which is no longer of any use):
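Here is a sketch of such a While loop. It creates a small sample file first so that the example is self-contained; the file path is only an assumption:

```powershell
# create a sample text file to read
Set-Content -Path "$env:temp\sample.txt" -Value "line 1", "line 2", "line 3"

$file = [System.IO.File]::OpenText("$env:temp\sample.txt")
while (!($file.EndOfStream)) {
    $file.ReadLine()   # outputs each line as it is read
}
$file.Close()
```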
do {
$Input = Read-Host "Your Homepage"
if ($Input –like "www.*.*") {
# Input correct, no further query:
$furtherquery = $false
} else {
# Input incorrect, give explanation and query again:
Write-Host –Fore "Red" "Please give a valid web address."
$furtherquery = $true
}
} while ($furtherquery)
Your Homepage: hjkh
Please give a valid web address.
Your Homepage: www.powershell.com
Alternatively, you can use an intentionally endless While loop and exit it with Break as soon as the input is valid:
while ($true) {
$Input = Read-Host "Your homepage"
if ($Input –like "www.*.*") {
# Input correct, no further query:
break
} else {
# Input incorrect, give explanation and ask again:
Write-Host –Fore "Red" "Please give a valid web address."
}
}
Your homepage: hjkh
Please give a valid web address.
Your homepage: www.powershell.com
For
You can use the For loop if you know exactly how often you want to iterate a particular code segment. For loops are
counting loops. You can specify the number at which the loop begins and at which number it will end to define the
number of iterations, as well as which increments will be used for counting. The following loop will output a sound
at various 100ms frequencies (provided you have a soundcard and the speaker is turned on):
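A sketch matching that description (requires a real console window and sound hardware):

```powershell
for ($frequency = 1000; $frequency -le 4000; $frequency += 300) {
    [Console]::Beep($frequency, 100)   # each tone lasts 100 ms
}
```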
These three expressions can be used to initialize a control variable, to verify whether a final value is achieved, and to
change a control variable with a particular increment at every iteration of the loop. Of course, it is entirely up to you
whether you want to use the For loop solely for this purpose.
A For loop can behave like a While loop if you ignore the first and the third expression and only use the second
expression, the continuation criterion:
# Second expression: the For loop behaves like the While loop:
$i = 0
for (;$i -lt 5;) {
$i++
$i
}
1
2
3
4
5
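The explanation that follows refers to a For loop used as an input loop; a sketch of it (note that, as written, it prints the warning once before the very first prompt):

```powershell
for ($input = ""; !($input -like "www.*.*"); $input = Read-Host "Your homepage") {
    Write-Host -ForegroundColor Red "Please give a valid web address."
}
```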
In the first expression, the $input variable is set to an empty string. The second expression checks whether a valid
Web address is in $input. If it is, it will use "!" to invert the result so that it is $true if an invalid Web address is in
$input. In this case, the loop is iterated. In the third expression, the user is queried for a Web address. Nothing more
needs to be in the loop. In the example, an explanatory text is output.
In addition, the line-by-line reading of a text file can be implemented by a For loop with less code:
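A sketch of this pattern (the sample file path is an assumption; here the line is read inside the loop body, a safe variation):

```powershell
Set-Content -Path "$env:temp\sample.txt" -Value "line 1", "line 2"
for ($file = [System.IO.File]::OpenText("$env:temp\sample.txt"); !($file.EndOfStream); ) {
    $file.ReadLine()   # read and output the next line
}
$file.Close()
```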
In this example, the first expression of the loop opens the file for reading. The second expression checks whether
the end of the file has been reached; the "!" operator inverts the result, so it returns $true as long as the end of
the file hasn't been reached yet, and the loop iterates in this case. Each loop cycle then reads one line from the
file and outputs it. Expressions in the For statement's header are always executed invisibly, which is why the line
contents are output within the loop body.
Switch
Switch is not only a condition, but also functions like a loop. That makes Switch one of the most powerful statements
in PowerShell. Switch works almost exactly like the Foreach loop. Moreover, it can evaluate conditions. For a quick
demonstration, take a look at the following simple Foreach loop:
$array = 1..5
foreach ($element in $array)
{
"Current element: $element"
}
Current element: 1
Current element: 2
Current element: 3
Current element: 4
Current element: 5
The same output can be produced with a Switch statement that contains only a Default branch:
$array = 1..5
switch ($array)
{
Default { "Current element: $_" }
}
Current element: 1
Current element: 2
Current element: 3
Current element: 4
Current element: 5
The control variable that returns the current element of the array for every loop cycle cannot be named for Switch, as
it can for Foreach, but is always called $_. The external part of the loop functions in exactly the same way. Inside
the loop, there's an additional difference: while Foreach always executes the same code every time the loop cycles,
Switch can utilize conditions to execute optionally different code for every loop. In the simplest case, the Switch
loop contains only the default statement. The code that is to be executed follows it in curly brackets.
That means Foreach is the right choice if you want to execute exactly the same statements for every loop cycle. On
the other hand, if you'd like to process each element of an array according to its contents, it would be preferable to
use Switch:
$array = 1..5
switch ($array)
{
1 { "The number 1" }
{$_ -lt 3} { "$_ is less than 3" }
{$_ % 2} { "$_ is odd" }
Default { "$_ is even" }
}
The number 1
1 is less than 3
1 is odd
2 is less than 3
3 is odd
4 is even
5 is odd
If you're wondering why Switch returned this result, take a look at Chapter 7 where you'll find an explanation of how
Switch evaluates conditions. What's important here is the other, loop-like aspect of Switch.
Exiting Loops Early
You can exit all loops with the Break statement, which gives you the option of defining additional stop criteria
inside the loop. The following little example asks for a password and uses Break to exit the loop as soon as the
password "secret" is entered.
while ($true)
{
$password = Read-Host "Enter password"
if ($password -eq "secret") {break}
}
The next example nests two Foreach loops. The first (outer) loop cycles through an array with three WMI class
names. The second (inner) loop runs through all instances of the respective WMI class. This allows you to output all
instances of all three WMI classes. The inner loop checks whether the name of the current instance begins with "a";
if not, the inner loop invokes Continue to skip all instances not beginning with "a." The result is a list of all
services, user accounts, and running processes that begin with "a":
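A sketch of those nested loops (WMI class names as described; requires Windows):

```powershell
foreach ($class in "Win32_Service", "Win32_UserAccount", "Win32_Process") {
    foreach ($instance in Get-WmiObject -Class $class) {
        # skip all instances whose name does not begin with "a"
        if (!($instance.Name.ToLower().StartsWith("a"))) { continue }
        "$class : $($instance.Name)"
    }
}
```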
As expected, the Continue statement has had an effect on the inner loop where it was contained. But how would you
change the code if you'd like to see only the first service, user account, and process that begins with "a"?
Actually, you would do almost exactly the same thing, except that Continue now needs to affect the outer loop: once
an element is found that begins with "a," the outer loop continues with the next WMI class:
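A sketch using a loop label so that Continue targets the outer loop:

```powershell
:WMIClass foreach ($class in "Win32_Service", "Win32_UserAccount", "Win32_Process") {
    foreach ($instance in Get-WmiObject -Class $class) {
        if ($instance.Name.ToLower().StartsWith("a")) {
            "$class : $($instance.Name)"
            continue WMIClass   # proceed directly with the next WMI class
        }
    }
}
```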
Summary
The cmdlet ForEach-Object gives you the option of processing the individual objects in a PowerShell pipeline, for
example to output the data contained in object properties as text or to invoke methods of the object. Foreach is a
similar type of loop whose contents do not come from the pipeline, but from an array or a collection.
In addition, there are endless loops that iterate a code block until a particular condition is met. The simplest type is
While, which checks its continuation criteria at the beginning of the loop. If you want to do the checking at the end
of the loop, choose Do…While. The For loop is an extended While loop, because it can count loop cycles and
automatically terminate the loop after a designated number of iterations.
This means that For is best suited for loops which need to be counted or must complete a set number of iterations.
On the other hand, Do...While and While are designed for loops that have to be iterated as long as the respective
situation and running time conditions require it.
Finally, Switch is a combined Foreach loop with integrated conditions so that you can immediately implement
different actions depending on the element read. Moreover, Switch can step through the contents of text files line-
by-line and evaluate even log files of substantial size.
All loops can exit ahead of schedule with the help of Break and skip the current loop cycle with the help of
Continue. In the case of nested loops, you can assign an unambiguous name to the loops and then use this name to
apply Break or Continue to nested loops.
Functions work pretty much like macros. As such, you can attach a script block to a name to create your own new
commands.
Functions provide the interface between your code and the user. They can define parameters, parameter types, and
even provide help, much like cmdlets.
In this chapter, you will learn how to create your own functions.
Topics Covered:
function Get-InstalledSoftware { }
Once you enter this code in your script editor and run it dot-sourced, PowerShell learns a new command called
Get-InstalledSoftware. If you saved your code in a file called c:\somescript.ps1, you will need to run it like this:
. 'c:\somescript.ps1'
If you don't want to use a script, you can also enter a function definition directly into your interactive PowerShell
console like this:
function Get-InstalledSoftware { }
However, defining functions in a script is a better approach because you won't want to enter your functions
manually all the time. Running a script to define the functions is much more practical. You may want to enable
script execution if you are unable to run a script because of your current ExecutionPolicy settings:
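A common choice (Set-ExecutionPolicy is a standard cmdlet; the scopes are covered in the chapter on scripts):

```powershell
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
```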
Once you defined your function, you can even use code completion. If you enter "Get-Ins" and then press TAB,
PowerShell will complete your function name. Of course, the new command Get-InstalledSoftware won't do
anything yet. The script block you attached to your function name was empty. You can add whatever code you want
to run to make your function do something useful. Here is the beef to your function that makes it report installed
software:
function Get-InstalledSoftware {
    $path = 'Registry::HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall\*'
    Get-ItemProperty -Path $path |
        Where-Object { $_.DisplayName -ne $null } |
        Select-Object DisplayName, DisplayVersion, UninstallString |
        Sort-Object DisplayName
}
When you run it, it will return a sorted list of all the installed software packages, their version, and their uninstall
information:
PS > Get-InstalledSoftware
DisplayName                          DisplayVersion UninstallString
-----------                          -------------- ---------------
64 Bit HP CIO Components Installer   8.2.1          MsiExec.exe /I{5737101A-27C4-40...
Apple Mobile Device Support          3.3.0.69       MsiExec.exe /I{963BFE7E-C350-43...
Bonjour                              2.0.4.0        MsiExec.exe /X{E4F5E48E-7155-4C...
(...)
As always, information may be clipped. You can pipe the results to any of the formatting cmdlets to change the
layout, because the information returned by your function behaves just like information returned from any cmdlet.
Note the way functions return their results: anything you leave behind will be automatically assigned as return value.
If you leave behind more than one piece of information, it will be returned as an array:
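A quick illustration (the function name is made up for this example):

```powershell
function Get-Three {
    1
    2
    3
}
$result = Get-Three   # all three values are "left behind" by the function
$result.Count         # -> 3
```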
Adding parameters is very simple. You can either add them in parentheses right behind the function name, or move
the list of parameters inside your function into a param block. Both styles define the same parameters:
function Speak-Text {
    param ($text)
    (New-Object -ComObject SAPI.SpVoice).Speak($text) | Out-Null
}
Your new command Speak-Text converts (English) text to spoken language. It accesses the built-in Text-to-Speech
API, so you can now try this:
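Assuming a working sound device:

```powershell
Speak-Text 'Hello, this is PowerShell speaking'
```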
Since the function Speak-Text now supports a parameter, it is easy to submit additional information to the function
code. PowerShell will take care of parameter parsing, and the same rules apply that you already know from cmdlets.
You can submit arguments as named parameters, as abbreviated named parameters, and as positional parameters:
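For example, all of these calls are equivalent:

```powershell
Speak-Text -text 'Hello'   # named parameter
Speak-Text -t 'Hello'      # abbreviated parameter name
Speak-Text 'Hello'         # positional parameter
```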
To submit more than one parameter, you can add more parameters as a comma-separated list. Let's add some
parameters to Get-InstalledSoftware to make it more useful. Here, we add parameters to select the product name and
the maximum installation age in days:
function Get-InstalledSoftware {
    param(
        $name = '*',
        $days = 2000
    )
    $path = 'Registry::HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall\*'
    # convert the -days parameter into the text format used by InstallDate (yyyyMMdd)
    $cutoffstring = (Get-Date).AddDays(-$days).ToString('yyyyMMdd')
    $column_days = @{
        Name       = 'Days'
        Expression = {
            if ($_.InstallDate) {
                (New-TimeSpan ([DateTime]::ParseExact($_.InstallDate, 'yyyyMMdd', $null))).Days
            } else { 'n/a' }
        }
    }
    Get-ItemProperty -Path $path |
        Where-Object { $_.DisplayName -ne $null } |
        Where-Object { $_.DisplayName -like $name } |
        Where-Object { $_.InstallDate -gt $cutoffstring } |
        Select-Object DisplayName, $column_days, DisplayVersion |
        Sort-Object DisplayName
}
Now, Get-InstalledSoftware supports two optional parameters called -Name and -Days. You do not have to submit
them since they are optional. If you don't, they are set to their default values. So when you run Get-
InstalledSoftware, you will get all software installed within the past 2,000 days. If you want to only find software
with "Microsoft" in its name that was installed within the past 180 days, you can submit parameters:
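For example:

```powershell
Get-InstalledSoftware -name '*Microsoft*' -days 180
```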
Parameters can also have default values. The next function converts Dollar amounts to Euro using an adjustable
exchange rate:
function ConvertTo-Euro {
param(
$dollar,
$rate=1.37
)
$dollar * $rate
}
Since -rate is an optional parameter with a default value, there is no need for you to submit it unless you want to
override the default value:
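For example:

```powershell
ConvertTo-Euro 100              # -> 137 (default rate 1.37)
ConvertTo-Euro 100 -rate 2.30   # -> 230
```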
So, what happens when the user does not submit any parameter since -dollar is optional as well? Well, since you did
not submit anything, you get back nothing.
This function only makes sense if some information is passed to $dollar, which is why this parameter needs to be
mandatory. Here is how you declare it:
function ConvertTo-Euro {
param(
[Parameter(Mandatory=$true)]
$dollar,
$rate=1.37
)
$dollar * $rate
}
This works because PowerShell will ask for it when you do not submit the -dollar parameter:
However, the result looks strange because when you enter information via a prompt, PowerShell treats it as
string (text) information, and when you multiply texts, they are repeated. So whenever you declare a parameter as
mandatory, you take the chance that the user omits it and gets prompted for it. That is why you should always
declare the data type you are expecting:
function ConvertTo-Euro {
param(
[Parameter(Mandatory=$true)]
[Double]
$dollar,
$rate=1.37
)
$dollar * $rate
}
To add optional behavior, you can declare [switch] parameters. This variant adds a -pretty switch that outputs
formatted text instead of a raw number:
function ConvertTo-Euro {
    param(
        [Parameter(Mandatory=$true)]
        [Double]
        $dollar,
        $rate = 1.37,
        [switch]
        $pretty
    )
    $result = $dollar * $rate
    if ($pretty) {
        '${0:0.00} equals EUR {1:0.00} at a rate of {2:0.00}' -f $dollar, $result, $rate
    } else {
        $result
    }
}
<#
.SYNOPSIS
Converts Dollar to Euro
.DESCRIPTION
Takes dollars and calculates the value in Euro by applying an exchange
rate
.PARAMETER dollar
the dollar amount. This parameter is mandatory.
.PARAMETER rate
the exchange rate. The default value is set to 1.37.
.EXAMPLE
ConvertTo-Euro 100
converts 100 dollars using the default exchange rate and positional
parameters
.EXAMPLE
ConvertTo-Euro 100 -rate 2.3
converts 100 dollars using a custom exchange rate
#>
function ConvertTo-Euro {
    param(
        [Parameter(Mandatory=$true)]
        [Double]
        $dollar,
        $rate = 1.37,
        [switch]
        $pretty
    )
    $result = $dollar * $rate
    if ($pretty) {
        '${0:0.00} equals EUR {1:0.00} at a rate of {2:0.00}' -f $dollar, $result, $rate
    } else {
        $result
    }
}
Note that the comment-based help block may not be separated from the function by more than one blank line if you
place it above the function. If you did everything right, you will now get the same rich help as with cmdlets after
running the code:
PS > ConvertTo-Euro -?
NAME
ConvertTo-Euro
SYNOPSIS
Converts Dollar to Euro
SYNTAX
ConvertTo-Euro [-dollar] <Double> [[-rate] <Object>] [-pretty]
[<CommonParameters>]
DESCRIPTION
Takes dollars and calculates the value in Euro by applying an exchange
rate
RELATED LINKS
REMARKS
To see the examples, type: "get-help ConvertTo-Euro -examples".
for more information, type: "get-help ConvertTo-Euro -detailed".
for technical information, type: "get-help ConvertTo-Euro -full".
PS > Get-Help ConvertTo-Euro -full

NAME
ConvertTo-Euro
SYNOPSIS
Converts Dollar to Euro
C:\PS>ConvertTo-Euro 100
converts 100 dollars using the default exchange rate and positional
parameters
-dollar <Double>
the dollar amount. This parameter is mandatory.
Required? true
Position? 1
Default value
Accept pipeline input? false
Accept wildcard characters?
-rate <Object>
the exchange rate. The default value is set to 1.37.
Required? false
Position? 2
Default value
Accept pipeline input? false
Accept wildcard characters?
-pretty [<SwitchParameter>]
Required? false
Position? named
Default value
Accept pipeline input? false
Accept wildcard characters?
1..10 | ConvertTo-Euro
You might expect this call to convert each of the ten values. Instead, you will receive exceptions complaining
about PowerShell not being able to "bind" the input object. That's
because PowerShell cannot know which parameter is supposed to receive the incoming pipeline values. If you want
your function to be pipeline aware, you can fix it by choosing the parameter that is to receive the pipeline input.
Here is the enhanced param block:
function ConvertTo-Euro {
param(
[Parameter(Mandatory=$true, ValueFromPipeline=$true)]
[Double]
$dollar,
$rate=1.37,
[switch]
$pretty
)
...
By adding ValueFromPipeline=$true, you are telling PowerShell that the parameter -dollar is to receive incoming
pipeline input. When you rerun the script and then try the pipeline again, there are no more exceptions. Your
function will only process the last incoming result, though:
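With the default rate of 1.37, only the last incoming value (10) gets converted:

```powershell
PS > 1..10 | ConvertTo-Euro
13.7
```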
This is because functions by default execute all of their code at the end of a pipeline. If you want the code to
process each piece of incoming pipeline data, you must assign the code to a process script block, or turn your
function into a filter (by replacing the keyword function with filter). Filters by default execute all of their code
in a process block.
Here is how you move the code into a process block to make a function process all incoming pipeline values:
<#
.SYNOPSIS
Converts Dollar to Euro
.DESCRIPTION
Takes dollars and calculates the value in Euro by applying an exchange
rate
.PARAMETER dollar
the dollar amount. This parameter is mandatory.
.PARAMETER rate
the exchange rate. The default value is set to 1.37.
.EXAMPLE
ConvertTo-Euro 100
converts 100 dollars using the default exchange rate and positional
parameters
.EXAMPLE
ConvertTo-Euro 100 -rate 2.3
converts 100 dollars using a custom exchange rate
#>
function ConvertTo-Euro {
param(
[Parameter(Mandatory=$true, ValueFromPipeline=$true)]
[Double]
$dollar,
$rate = 1.37,
[switch]
$pretty
)
    begin { "starting..." }
    process {
        $result = $dollar * $rate
        if ($pretty) {
            '${0:0.00} equals EUR {1:0.00} at a rate of {2:0.00}' -f $dollar, $result, $rate
        } else {
            $result
        }
    }
    end { "Done!" }
}
As you can see, your function code is now assigned to one of three special script blocks: begin, process, and end.
Once you add one of these blocks, no code may exist outside of the three blocks anymore.
PowerShell ships with a number of pre-defined functions, which you can list via the function: drive:

Dir function:

Many of these pre-defined functions perform important tasks in PowerShell. The most important place for
customization is the function prompt, which is executed automatically once a command is done. It is responsible for
displaying the PowerShell prompt. You can change your PowerShell prompt by overriding the function prompt. This
will get you a colored prompt:
function prompt
{
    Write-Host ("PS " + $(Get-Location) + ">") -NoNewline -ForegroundColor Magenta
    " "
}
You can also insert information into the console screen buffer. This only works with true consoles so you cannot use
this type of prompt in non-console editors, such as PowerShell ISE.
function prompt
{
    Write-Host ("PS " + $(Get-Location) + ">") -NoNewline -ForegroundColor Green
    " "
    $winHeight = $Host.UI.RawUI.WindowSize.Height
    $curPos = $Host.UI.RawUI.CursorPosition
    $newPos = $curPos
    $newPos.X = 0
    $newPos.Y -= $winHeight
    $newPos.Y = [Math]::Max(0, $newPos.Y + 1)
    $Host.UI.RawUI.CursorPosition = $newPos
    Write-Host ("{0:D} {0:T}" -f (Get-Date)) -ForegroundColor Yellow
    $Host.UI.RawUI.CursorPosition = $curPos
}
Another good place for additional information is the console window title bar. Here is a prompt that displays the
current location in the title bar to save room inside the console and still display the current location:
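A sketch of such a prompt:

```powershell
function prompt
{
    # put the current location into the window title instead of the prompt
    $Host.UI.RawUI.WindowTitle = "PS $(Get-Location)"
    Write-Host "PS>" -NoNewline -ForegroundColor Green
    " "
}
```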
And this prompt function changes colors based on your notebook battery status (provided you have a battery):
function prompt
{
    $charge = Get-WmiObject Win32_Battery |
        Measure-Object -Property EstimatedChargeRemaining -Average |
        Select-Object -ExpandProperty Average
    # color threshold chosen for illustration
    if ($charge -lt 30) { $color = 'Red' } else { $color = 'White' }
    Write-Host ("PS " + $(Get-Location) + ">") -NoNewline -ForegroundColor $color
    " "
}
Summary
You can use functions to create your very own new cmdlets. In its most basic form, a function is a named script
block that executes whenever you enter the assigned name. That's what distinguishes functions from aliases: an alias
serves solely as a replacement for another command name, whereas a function can execute whatever code you want.
By adding parameters, you can provide the user with the option to submit additional information to your function
code. Parameters can do pretty much anything that cmdlet parameters can do. They can be mandatory, optional,
have a default value, or a special data type. You can even add Switch parameters to your function.
If you want your function to work as part of a PowerShell pipeline, you will need to declare the parameter that
should accept pipeline input from upstream cmdlets. You will also need to move the function code into a process
block so it gets executed for each incoming result.
You can play with many more parameter attributes and declarations. Try this to get a complete overview:
Help about_Functions_Advanced_Parameters
PowerShell can be used interactively and in batch mode. All the code that you entered and tested interactively can
also be stored in a script file. When you run the script file, the code inside is executed from top to bottom, pretty
much as if you had entered the code manually into PowerShell.
So script files are a great way of automating complex tasks that consist of more than just one line of code. Scripts
can also serve as a repository for functions you create, so whenever you run a script, it defines all the functions you
may need for your daily work.
You can even set up a so-called "profile" script which runs automatically each time you launch PowerShell. A
profile script is used to set up your personal PowerShell environment. It can set colors, define the prompt, and load
additional PowerShell modules and snapins.
Topics Covered:
Creating a Script
o Launching a Script
o Execution Policy - Allowing Scripts to Run
Table 10.1: Execution policy setting options
Invoking Scripts like Commands
Parameters: Passing Arguments to Scripts
o Scopes: Variable Visibility
o Profile Scripts: Automatic Scripts
o Signing Scripts with Digital Signatures
o Finding Certificates
o Creating/Loading a New Certificate
Creating Self-Signed Certificates
o Making a Certificate "Trustworthy"
o Signing PowerShell Scripts
o Checking Scripts
o Table 10.3: Status reports of signature validation and their causes
o Summary
Creating a Script
A PowerShell script is a plain text file with the extension ".ps1". You can create it with any text editor or
use specialized PowerShell editors like the built-in "Integrated Script Environment" called "ise", or
commercial products like "PowerShell Plus".
You can place any PowerShell code inside your script. When you save the script with a generic text editor,
make sure you add the file extension ".ps1".
If your script is rather short, you could even create it directly from within the console by redirecting the
script code to a file:
@'
# (script code goes here; the original listing generated a report in $filename)
Invoke-Item $filename
'@ > $env:temp\myscript.ps1
Launching a Script
To actually run your script, you need to either call the script from within an existing PowerShell window,
or prepend the path with "powershell.exe". So, to run the script from within PowerShell, use this:
& "$env:temp\myscript.ps1"
By prepending the call with "&", you tell PowerShell to run the script in isolation mode. The script runs in
its own scope, and all variables and functions defined by the script will be automatically discarded again
once the script is done. So this is the perfect way to launch a "job" script that is supposed to just "do
something" without polluting your PowerShell environment with left-overs.
By prepending the call with ".", which is called "dot-sourcing", you tell PowerShell to run the script in
global mode. The script now shares the scope with the caller's scope, and functions and variables defined
by the script will still be available once the script is done. Use dot-sourcing if you want to debug a script
(and, for example, examine its variables), or if the script is a function library and you want to use the functions
defined by the script later.
To run a PowerShell script from outside PowerShell, for example from a batch file, use this line:
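A typical call from a batch file (the -NoProfile, -ExecutionPolicy, and -File switches are standard powershell.exe options):

```shell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "%TEMP%\myscript.ps1"
```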
You can use this line within PowerShell as well. Since it always starts a fresh new PowerShell
environment, it is a safe way of running a script in a default environment, eliminating interferences with
settings and predefined or changed variables and functions.
To enable PowerShell scripts, you need to change the ExecutionPolicy. There are actually five different
execution policies which you can list with this command:
PS > Get-ExecutionPolicy -List

        Scope ExecutionPolicy
        ----- ---------------
MachinePolicy       Undefined
   UserPolicy       Undefined
      Process       Undefined
  CurrentUser          Bypass
 LocalMachine    Unrestricted
The first two represent group policy settings. They are set to "Undefined" unless you defined
ExecutionPolicy with centrally managed group policies in which case they cannot be changed manually.
Scope "Process" refers to the current PowerShell session only, so once you close PowerShell, this setting
gets lost. CurrentUser represents your own user account and applies only to you. LocalMachine applies to
all users on your machine, so to change this setting you need local administrator privileges.
The effective execution policy is the first one from top to bottom that is not set to "Undefined". You can
view the effective execution policy like this:
PS > Get-ExecutionPolicy
Bypass
If all execution policies are "Undefined", the effective execution policy is set to "Restricted".
Setting        Description
Restricted     Script execution is absolutely prohibited.
Default        Standard system setting, normally corresponding to "Restricted".
AllSigned      Only scripts having valid digital signatures may be executed. Signatures ensure that the script comes
               from a trusted source and has not been altered. You'll read more about signatures later on.
RemoteSigned   Scripts downloaded from the Internet or from some other "public" location must be signed. Locally
               stored scripts may be executed even if they aren't signed. Whether a script is "remote" or "local" is
               determined by a feature called Zone Identifier, depending on whether your mail client or Internet
               browser correctly marks the zone. Moreover, it will work only if downloaded scripts are stored on
               drives formatted with the NTFS file system.
Unrestricted   PowerShell will execute any script.
Many sources recommend changing the execution policy to "RemoteSigned" to allow scripts. This setting
will protect you from potentially harmful scripts downloaded from the internet while at the same time, local
scripts run fine.
The mechanism behind the execution policy is just an additional safety net for you. If you feel confident
that you won't launch malicious PowerShell code because you carefully check script content before you run
scripts, then it is ok to turn off this safety net altogether by setting the execution policy to "Bypass". This
setting may be required in some corporate scenarios where scripts are run off file servers that may not be
part of your own domain.
If you must ensure maximum security, you can also set execution policy to "AllSigned". Now, every single
script needs to carry a valid digital signature, and if a script was manipulated, PowerShell immediately
refuses to run it. Be aware that this setting does require you to be familiar with digital signatures and
imposes considerable overhead because it requires you to re-sign any script once you made changes.
To actually invoke scripts just as easily as normal commands—without having to specify relative or
absolute paths and the ".ps1" file extension—pick or create a folder to store your scripts in. Next, add this
folder to your "Path" environment variable. Done.
md $env:appdata\PSScripts
Copy-Item $env:temp\myscript.ps1 $env:appdata\PSScripts\myscript.ps1
$env:path += ";$env:appdata\PSScripts"
myscript
The changes you made to the "Path" environment variable are temporary and only valid in your current
PowerShell session. To permanently add a folder to that variable, make sure you append the "Path"
environment variable within your special profile script. Since this script runs automatically each time
PowerShell starts, each PowerShell session automatically adds your folder to the search path. You learn
more about profile scripts in a moment.
For example, to add parameters to your event log monitoring script, try this:
@'
Param(
$hours = 24,
[Switch]
$show
)
If ($Show) {
Invoke-Item $filename
} else {
Write-Warning "The report has been generated here: $filename"
}
'@ > $env:temp\myscript.ps1
Now you can run your script and control its behavior by using its parameters. If you copied the script to the
folder that you added to your "Path" environment variable, you can even call your script without a path
name, almost like a new command:
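For example:

```powershell
myscript                    # uses the parameter defaults
myscript -hours 300         # named parameter
myscript -hours 300 -show   # the switch parameter opens the report
```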
To learn more about parameters, how to make them mandatory or how to add help to your script, refer to
the previous chapter. Functions and scripts share the same mechanism.
So by default, any function or variable you define can be accessed from any other function defined at the
same scope or in a subscope:
function C { A; B }
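A complete sketch around the fragment above (function names A, B, C as in the text):

```powershell
function A { "inside A" }
function B { "inside B" }
function C { A; B }   # C may call A and B because they live in the same scope
C                     # -> inside A, inside B
```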
The caller of a script cannot access any of the functions or variables defined inside it, so the script will not
pollute the caller's context with left-over functions or variables - unless you call the script dot-sourced as
described earlier in this chapter.
By prefixing variable or function names with one of the following scope modifiers, you can change the
default behavior: global: (stores the item in the global scope), script: (the scope of the running script),
local: (the current scope, which is the default), and private: (the current scope only, invisible to child
scopes).
PS > $profile
C:\Users\w7-pc9\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
Since this profile script is specific to your current PowerShell host, the path may look different depending
on your host. When you run this command from inside the ISE editor, it looks like this:
PS > $profile
C:\Users\w7-pc9\Documents\WindowsPowerShell\Microsoft.PowerShellISE_profile.ps1
If this file exists, PowerShell runs it automatically. To test whether the script exists, use Test-Path. Here is
a little piece of code that creates the profile file if it does not yet exist and opens it in Notepad so you can
add code to it:
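A minimal sketch of such a snippet:

```powershell
if (!(Test-Path $profile)) {
  # create the profile file (and any missing parent folders)
  New-Item -Path $profile -ItemType File -Force | Out-Null
}
notepad $profile
```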
There are more profile scripts. $profile.CurrentUserAllHosts returns the path to the script file that
automatically runs with all PowerShell hosts, so this is the place for code that should execute regardless of
the host you use. It executes for both the PowerShell console and the ISE editor.
$profile.AllUsersCurrentHost is specific to your current host but runs for all users. To create or change this
file, you need local administrator privileges. $profile.AllUsersAllHosts runs for all users on all PowerShell
hosts. Again, you need local administrator privileges to create or change this file.
If you use more than one profile script, their execution order is from "general to specific", so the profile
script defined in $profile executes last (and if there are conflicting settings, overrides all others).
Finding Certificates
To find all codesigning certificates installed in your personal certificate store, use the virtual cert: drive:
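A sketch of such a query (the -CodeSigningCert switch is a dynamic parameter available on the cert: drive):

```powershell
Dir cert:\CurrentUser\My -CodeSigningCert
```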
The -CodeSigningCert parameter ensures that only those certificates are returned that are approved for the
intended "code signing" purpose and for which you hold the private key.
If you have a choice of several certificates, pick the certificate you want to use for signing by using Where-
Object:
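A sketch of such a filter ('MyCertificate' is a placeholder subject name):

```powershell
$certificate = Dir cert:\CurrentUser\My -CodeSigningCert |
  Where-Object { $_.Subject -like '*MyCertificate*' } |
  Select-Object -First 1
```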
You can also use low-level .NET methods to open a full-featured selection dialog to pick a certificate:
[System.Reflection.Assembly]::LoadWithPartialName("System.Security")
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store("My", "CurrentUser")
$store.Open("ReadOnly")
$certificate = [System.Security.Cryptography.X509Certificates.X509Certificate2UI]::SelectFromCollection($store.Certificates, "Your certificates", "Please select", 0)
$store.Close()
$certificate
Thumbprint Subject
---------- -------
372883FA3B386F72BCE5F475180CE938CE1B8674 CN=MyCertificate
If there is no certificate in your certificate store, you cannot sign scripts. You can then either
request/purchase a codesigning certificate and install it into your personal certificate store by double-
clicking it, or you can temporarily load a certificate file into memory using Get-PfxCertificate.
The key to making self-signed certificates is the Microsoft tool makecert.exe. Unfortunately, this tool
cannot be downloaded separately. You have to download it as part of a free "Software Development Kit"
(SDK). Makecert.exe is in the .NET Framework SDK, which you can find at
http://msdn2.microsoft.com/en-us/netframework/aa731542.aspx.
After the SDK is installed, you'll find makecert.exe on your computer and be able to issue a new code-
signing certificate with a name you specify by typing the following lines:
$name = "PowerShellTestCert"
pushd
Cd "$env:programfiles\Microsoft Visual Studio 8\SDK\v2.0\Bin"
.\makecert.exe -pe -r -n "CN=$name" -eku 1.3.6.1.5.5.7.3.3 -ss "my"
popd
It will be automatically saved to the \CurrentUser\My certificate store. From this location, you can now
call and use it like any other certificate:
$name = "PowerShellTestCert"
$certificate = Dir cert:\CurrentUser\My | Where-Object { $_.Subject -eq "CN=$name" }
Certificates you purchased from trusted certificate authorities or your own enterprise IT are considered
trustworthy by default. That's because their root is listed in the "Trusted Root Certification Authorities"
container. You can examine these settings like this:
Certmgr.msc
Self-signed certificates are not trustworthy by default because anyone can create them. To make them
trustworthy, you need to copy them into the list of trusted root certification authorities and Trusted
Publishers.
The following code grabs the first available codesigning certificate and then signs a script:
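A sketch of what that code might look like (the script path is a placeholder):

```powershell
# pick the first available codesigning certificate from the personal store
$certificate = @(Dir cert:\CurrentUser\My -CodeSigningCert)[0]
Set-AuthenticodeSignature -FilePath $env:temp\myscript.ps1 -Certificate $certificate
```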
When you look at the signed scripts, you'll see a new comment block at the end of a script.
Attention:
You cannot sign script files that are smaller than 4 bytes or that are saved with Big Endian Unicode
encoding. Unfortunately, the built-in script editor ISE uses just that encoding scheme to save scripts, so you
may not be able to sign scripts created with ISE unless you save them with a different encoding.
Checking Scripts
To check all of your scripts manually and find out whether someone has tampered with them, use Get-
AuthenticodeSignature:
Dir C:\ -Filter *.ps1 -Recurse -ErrorAction SilentlyContinue | Get-AuthenticodeSignature
If you want to find only scripts that are potentially malicious, whose contents have been tampered with
since they were signed (HashMismatch), or whose signature comes from an untrusted certificate
(UnknownError), use Where-Object to filter your results:
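A sketch of such a filter:

```powershell
Dir C:\ -Filter *.ps1 -Recurse -ErrorAction SilentlyContinue |
  Get-AuthenticodeSignature |
  Where-Object { @('HashMismatch', 'UnknownError') -contains $_.Status }
```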
Summary
PowerShell scripts are plain text files with a ".ps1" file extension. They work like batch files and may
include any PowerShell statements.
To run a script, you need to make sure the execution policy setting is allowing the script to execute. By
default, the execution policy disables all PowerShell scripts.
You can run a script from within PowerShell: specify the absolute or relative path name to the script,
unless the script file is stored in a folder that is part of the "Path" environment variable, in which case it is
sufficient to specify the script file name.
By running a script "dot-sourced" (prepending the path with a dot and a space), the script runs in the caller's
context. All variables and functions defined in the script remain intact even after the script finishes.
This can be useful for debugging scripts, and it is essential for running "library" scripts that define
functions you want to use elsewhere.
To run scripts from outside PowerShell, call powershell.exe and specify the script path. There are
additional parameters like -noprofile, which ensures that the script runs in a default PowerShell environment
that was not changed by profile scripts.
Digital signatures ensure that a script comes from a trusted source and has not been tampered with. You can
sign scripts and also verify a script signature with Set-AuthenticodeSignature and Get-
AuthenticodeSignature.
When you design a PowerShell script, there may be situations where you cannot eliminate all possible runtime
errors. If your script maps network drives, there could be a situation where no more drive letters are available, and
when your script performs a remote WMI query, the remote machine may not be available.
In this chapter, you learn how to discover and handle runtime errors gracefully.
Topics Covered:
Suppressing Errors
Handling Errors
o Try/Catch
o Using Traps
Handling Native Commands
o Understanding Exceptions
o Handling Particular Exceptions
o Throwing Your Own Exceptions
o Stepping And Tracing
Summary
Suppressing Errors
Every cmdlet has built-in error handling which is controlled by the -ErrorAction parameter. The default ErrorAction
is "Continue": the cmdlet outputs errors but continues to run.
This default is controlled by the variable $ErrorActionPreference. When you assign a different setting to this
variable, it becomes the new default ErrorAction. The default ErrorAction applies to all cmdlets that do not specify
an individual ErrorAction by using the parameter -ErrorAction.
To suppress error messages, set the ErrorAction to SilentlyContinue. For example, when you search the Windows
folder recursively for some files or folders, your code may eventually touch system folders where you have
insufficient access privileges. By default, PowerShell would then output an error but would continue to search
through the subfolders. If you just want the files you can get your hands on and suppress ugly error messages, try
this:
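A sketch of such a search:

```powershell
Dir $env:windir -Recurse -ErrorAction SilentlyContinue
```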
Likewise, if you do not have full local administrator privileges, you cannot access processes you did not start
yourself. Listing process files would produce a lot of error messages. Again, you can suppress these errors to get at
least those files that you are able to access:
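A sketch of such a call:

```powershell
Get-Process -FileVersionInfo -ErrorAction SilentlyContinue
```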
Suppress errors with care because errors have a purpose, and suppressing errors will not solve the underlying
problem. In many situations, it is invaluable to receive errors, get alarmed and act accordingly. So only suppress
errors you know are benign.
NOTE: Sometimes, errors will not get suppressed despite using SilentlyContinue. If a cmdlet encounters a serious
error (which is called "Terminating Error"), the error will still appear, and the cmdlet will stop and not continue
regardless of your ErrorAction setting.
Whether or not an error is considered "serious" or "terminating" is solely at the cmdlet author's discretion. For
example, Get-WMIObject will throw a (non-maskable) terminating error when you use -ComputerName to access a
remote computer and receive an "Access Denied" error. If Get-WMIObject encounters an "RPC system not
available" error because the machine you wanted to access is not online, that is considered not a terminating error,
so this type of error would be successfully suppressed.
Handling Errors
To handle an error, your code needs to become aware that there was an error. It then can take steps to respond to that
error. To handle errors, the most important step is setting the ErrorAction default to Stop:
$ErrorActionPreference = 'Stop'
As an alternative, you could add the parameter -ErrorAction Stop to individual cmdlet calls, but chances are you
would not want to do this for every single call - unless you want to handle only selected cmdlets' errors. Changing
the default ErrorAction is much easier in most situations.
The ErrorAction setting not only affects cmdlets (which have a parameter -ErrorAction) but also native commands
(which do not have such a parameter and thus can only be controlled via the default setting).
Once you changed the ErrorAction to Stop, your code needs to set up an error handler to become aware of errors.
There is a local error handler (try/catch) and also a global error handler (trap). You can mix both if you want.
Try/Catch
To handle errors in selected areas of your code, use the try/catch statements. They always come as a pair and need
to follow each other immediately. The try-block marks the area of your code where you want to handle errors. The
catch-block defines the code that is executed when an error in the try-block occurs.
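The description that follows refers to an example like this sketch (the computer names are placeholders):

```powershell
'localhost', '127.0.0.1', 'nonexistent-pc' | ForEach-Object {
  try {
    # -ErrorAction Stop turns any error into a terminating error the catch-block can handle
    Get-WmiObject -Class Win32_BIOS -ComputerName $_ -ErrorAction Stop |
      Select-Object __Server, Version
  }
  catch {
    Write-Warning "Error: $_"
  }
}
```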
It takes a list of computer names (or IP addresses) which could also come from a text file (use Get-Content to read a
text file instead of listing hard-coded computer names). It then uses Foreach-Object to feed the computer names into
Get-WMIObject which remotely tries to get BIOS information from these machines.
Get-WMIObject is encapsulated in a try-block and also uses the ErrorAction setting Stop, so any error this cmdlet
throws will execute the catch-block. Inside the catch-block, in this example, a warning is output. The reason for
the error is available in $_ inside the catch-block.
Try and play with this example. When you remove the -ErrorAction parameter from Get-WMIObject, you will
notice that errors will no longer be handled. Also note that whenever an error occurs in the try-block, PowerShell
jumps to the corresponding catch-block and will not return and resume the try-block. This is why only Get-
WMIObject is placed inside the try-block, not the Foreach-Object statement. So when an error does occur, the loop
continues to run and continues to process the remaining computers in your list.
The error message created by the catch-block is not yet detailed enough:
You may want to report the name of the script where the error occurred, and of course you'd want to output the
computer name that failed. Here is a slight variant which accomplishes these tasks. Note also that in this example,
the general ErrorActionPreference was set to Stop, so it is no longer necessary to submit the -ErrorAction parameter
to individual cmdlets:
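A sketch of this variant (computer names are placeholders):

```powershell
$ErrorActionPreference = 'Stop'
'localhost', 'nonexistent-pc' | ForEach-Object {
  # remember the computer name; inside the catch-block, $_ holds the error instead
  $currentcomputer = $_
  try {
    Get-WmiObject -Class Win32_BIOS -ComputerName $currentcomputer |
      Select-Object __Server, Version
  }
  catch {
    Write-Warning ('Failed to access "{0}": {1} in "{2}"' -f `
      $currentcomputer, $_.Exception.Message, $_.InvocationInfo.ScriptName)
  }
}
```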
Here, two procedures were needed: first of all, the current computer name processed by Foreach-Object needed to be
stored in a new variable because the standard $_ variable is reused inside the catch-block and refers to the current
error. So it can no longer be used to read the current computer name. That's why the example stored the content of
$_ in $currentcomputer before an error could occur. This way, the script code became more legible as well.
Second, inside the catch-block, $_ represents the current error. This variable contains a complex object which
holds all details about the error. Information about the cause can be found in the property Exception, whereas
information about the place the error occurred is found in InvocationInfo.
To examine the object stored in $_, you can save it in a global variable. This way, the object remains accessible (else
it would be discarded once the catch-block is processed). So when an error was handled, you can examine your test
variable using Get-Member. This is how you would adjust the catch-block:
catch {
  $global:test = $_
  Write-Warning ('Failed to access "{0}": {1} in "{2}"' -f `
    $currentcomputer, $_.Exception.Message, $_.InvocationInfo.ScriptName)
}
Then, once the script ran (and encountered an error), check the content of $test using Get-Member:

$test | Get-Member

   TypeName: System.Management.Automation.ErrorRecord

As you see, the error information has a number of subproperties like the ones used in the example. One of the more
useful properties is InvocationInfo, which you can examine like this:

$test.InvocationInfo | Get-Member

   TypeName: System.Management.Automation.InvocationInfo

It tells you all details about the place the error occurred.
Using Traps
If you do not want to focus your error handler on a specific part of your code, you can also use a global error handler
which is called "Trap". Actually, a trap really is almost like a catch-block without a try-block. Check out this
example:
trap {
Write-Warning ('Failed to access "{0}" : {1} in "{2}"' -f
$currentcomputer, `
$_.Exception.Message, $_.InvocationInfo.ScriptName)
continue
}
This time, the script uses a trap at its top which looks almost like the catch-block used before. It does contain one
more statement to make it act like a catch-block: Continue. Without Continue, the trap would handle the error
but then forward it on to other handlers, including PowerShell itself. So without Continue, you would get your own
error message and then also the official PowerShell error message.
When you run this script, you will notice differences, though. When the first error occurs, the trap handles the error
just fine, but then the script stops. It does not execute the remaining computers in your list. Why?
Whenever an error occurs and your handler gets executed, execution continues with the next statement following
the erroneous statement - in the scope of the handler. So when you look at the example code, you'll notice that the
error occurred inside the Foreach-Object loop. Whenever your code uses braces, the code inside the braces
forms a new "territory" or "scope". So the trap did process the first error correctly and then continued with the
next statement in its own scope. Since there was no code following your loop, nothing else was executed.
This example illustrates that it always is a good idea to plan what you want your error handler to do. You can choose
between try/catch and trap, and also you can change the position of your trap.
If you placed your trap inside the "territory" or "scope" where the error occurs, you could make sure all computers in
your list are processed:
# computer names are placeholders
'localhost', 'nonexistent-pc' | ForEach-Object {
  # the trap now lives inside the loop scope, so after handling an error,
  # execution continues with the next computer in the list
  trap {
    Write-Warning "Failed to access '$currentcomputer': $_"
    continue
  }
  $currentcomputer = $_
  Get-WmiObject -Class Win32_BIOS -ComputerName $currentcomputer -ErrorAction Stop |
    Select-Object __Server, Version
}
Console-based applications return their error messages through another mechanism: they emit error messages using
the console ErrOut channel. PowerShell can monitor this channel and treat outputs that come from this channel as
regular exceptions. To make this work, you need to do two things: first of all, you need to set
$ErrorActionPreference to Stop, and second, you need to redirect the ErrOut channel to the StdOut channel because
only this channel is processed by PowerShell. Here is an example:
When you run the following native command, you will receive an error, but the error is not red nor does it look like
the usual PowerShell error messages because it comes as plain text directly from the application you ran:
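For example (the user name is a placeholder that presumably does not exist):

```powershell
net user willibald
```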
When you redirect the error channel to the output channel, the error suddenly becomes red and is turned into a "real"
PowerShell error:
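The same command with the error channel redirected:

```powershell
net user willibald 2>&1
```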
Even so, you still cannot handle the error. When you place the code in a try/catch-block, the catch-block never executes:
try {
  net user willibald 2>&1
}
catch {
  Write-Warning "Oops: $_"
}
As you know from cmdlets, to handle errors you need to set the ErrorAction to Stop. With cmdlets, this was easy
because each cmdlet has a -ErrorAction preference. Native commands do not have such a parameter. This is why
you need to use $ErrorActionPreference to set the ErrorAction to Stop:
try {
  $ErrorActionPreference = 'Stop'
  net user willibald 2>&1
}
catch {
  Write-Warning "Oops: $_"
}
If you do not like the default colors PowerShell uses for error messages, simply change them:
$Host.PrivateData.ErrorForegroundColor = "Red"
$Host.PrivateData.ErrorBackgroundColor = "White"
You can also find additional properties in the same location which enable you to change the colors of warning and
debugging messages (like WarningForegroundColor and WarningBackgroundColor).
Understanding Exceptions
Exceptions work like bubbles in a fish tank. Whenever a fish gets sick, it burps, and the bubble bubbles up to the
surface. If it reaches the surface, PowerShell notices the bubble and throws the exception: it outputs a red error
message.
In this chapter, you learned how you can catch the bubble before it reaches the surface, so PowerShell would never
notice the bubble, and you got the chance to replace the default error message with your own or take appropriate
action to handle the error.
The level the fish swims at in the fish tank represents your code hierarchy. Each pair of braces forms its own
"territory" or "scope", and when a scope emits an exception (a "bubble"), all upstream scopes have a chance to catch
and handle the exception or even replace it with another exception. This way you can create complex escalation
scenarios.
function Test
{
trap [System.DivideByZeroException] { "Divided by null!"; continue }
trap [System.Management.Automation.ParameterBindingException] {
"Incorrect parameter!";
continue
}
1/$null
Dir -MacGuffin
}
Test
Divided by null!
Incorrect parameter!
function TextOutput([string]$text)
{
if ($text -eq "")
{
Throw "You must enter some text."
}
else
{
"OUTPUT: $text"
}
}
The caller can now handle the error your function emitted and choose how to respond to it:
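A sketch of such a caller:

```powershell
try {
  # an empty string triggers the Throw statement inside the function
  TextOutput ''
}
catch {
  Write-Warning "TextOutput failed: $_"
}
```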
However, PowerShell also has built-in methods to step through code or trace execution. To enable simple tracing, use this:

Set-PSDebug -Trace 1
Simple tracing shows you only the PowerShell statements executed in the current context. If you invoke a function
or a script, only the invocation is shown, but not the code of the function or script. If you would like to see the
code, turn on detailed tracing by using the -Trace 2 parameter.
Set-PSDebug -trace 2
To turn tracing off again, set the trace level back to 0. PowerShell can also single-step through code; enable
stepping mode like this:

Set-PSDebug -Trace 0
Set-PSDebug -Step
Now, when you execute PowerShell code, it will ask you for each statement whether you want to continue, suspend
or abort.
If you choose Suspend by pressing "S", you will end up in a nested prompt, which you will recognize by the "<<"
sign at the prompt. The code is then interrupted so you can analyze the system in the console or check
variable contents. As soon as you enter Exit, execution of the code continues. Select the "A" option for
"Yes to All" in order to turn off the stepping mode.
Tip: You can create simple breakpoints by using nested prompts: call $host.EnterNestedPrompt() inside a script or a
function.
Set-PSDebug has another important parameter called -strict. It ensures that unknown variables will throw an error.
Without the Strict option, PowerShell will simply set a null value for unknown variables. On machines where you
develop PowerShell code, you should enable strict mode like this:
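Note that Set-PSDebug -Strict catches only uninitialized variables; the broader checks described next come from Set-StrictMode. A sketch of enabling both:

```powershell
Set-PSDebug -Strict
Set-StrictMode -Version Latest
```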
This will throw exceptions for unknown variables (possible typos), nonexistent object properties and wrong cmdlet
call syntax.
Summary
To handle errors in your code, make sure you set the ErrorAction to Stop. Only then will cmdlets and native
commands place errors in your control.
To detect and respond to errors, use either a local try/catch-block (to catch errors in specific regions of your code) or
trap (to catch all errors in the current scope). With trap, make sure to also call Continue at the end of your error
handler to tell PowerShell that you handled the error. Else, it would still bubble up to PowerShell and cause the
default error messages.
To catch errors from console-based native commands, redirect their ErrOut channel to StdOut. PowerShell then
automatically converts the custom error emitted by the command into a PowerShell exception.
Anything you define in PowerShell - variables, functions, or settings - has a certain life span. Eventually, these
items expire and are automatically removed from memory. This chapter talks about "scope" and how you manage the life
span of objects or scripts.
Understanding and correctly managing scope can be very important. You want to make sure that a production script
is not negatively influenced by "left-overs" from a previous script. Or you want certain PowerShell settings to apply
only within a script. Maybe you are also wondering just why functions defined in a script you run won't show up in
your PowerShell console. These questions all touch "scope".
At the end of this chapter, we will also be looking at how PowerShell finds commands and how to manage and
control commands if there are ambiguous command names.
Topics Covered:
PowerShell Session: Your PowerShell session - the PowerShell console or a development environment like
ISE - always opens the first scope which is called "global". Anything you define in that scope persists until
you close PowerShell.
Script: When you run a PowerShell script, this script by default runs in its own scope. So any variables or
functions a script declares will automatically be cleared again when the script ends. This ensures that a
script will not leave behind left-overs that may influence the global scope or other scripts that you run later.
Note that the default behavior can be changed both by the user and the programmer, enabling the script to
store variables or functions in the callers' scope. You'll learn about that in a minute.
Function: Every function runs in yet another scope, so variables and functions declared in a function are by
default not visible to the outside. This guarantees that functions won't interfere with each other and write to
the same variables - unless that is what you want. To create "shared" variables that are accessible to all
functions, you would manually change scope. Again, that'll be discussed in a minute.
Script Block: Since functions really are named script blocks, what has been said about functions also
applies to script blocks. They run in their own scope or territory too.
"Inheritance" is the wrong term, though, because in PowerShell this works more like a "cross-scope traversal". Let's
check this out by looking at some real world examples.
Yes, it will. By default, anything you define in a scope is visible to all child scopes. Although it looks a bit like
"inheritance", it really works different, though.
Whenever PowerShell tries to access a variable or function, it first looks in the current scope. If it is not found there,
PowerShell traverses the parent scopes and continues its search until it finds the object or ultimately reaches the
global scope. So, what you get will always be the variable or function that was declared in closest possible proximity
to your current scope or territory.
By default, unless a variable is declared in the current scope, there is no guarantee that you access a specific variable
in a specific scope. Let's assume you created a variable $a in the PowerShell console. When you now call a script,
and the script accesses the variable $a, two things can happen: if your script has defined $a itself, you get the scripts'
version of $a. If the script has not defined $a, you get the variable from the global scope that you defined in the
console.
So here is the first golden rule that derives from this: in your scripts and functions, always declare variables and give
them an initial value. If you don't, you may get unexpected results. Here is a sample:
function Test {
if ($true -eq $hasrun) {
'This function was called before'
} else {
$hasrun = $true
'This function runs for the first time'
}
}
When you call the function Test for the first time, it will state that it was called for the first time. When you call it a
second time, it should notice that it was called before. In reality, the function does not. Each time you call it,
it reports that it is running for the first time. Now enter this line in the PowerShell console:
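(The value is arbitrary; as explained below, anything non-null will do.)

```powershell
$hasrun = 'somevalue'
```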
When you now run the function Test again, it suddenly reports that it ran before. So the function is not at all doing
what it was supposed to do. All of the unexpected behaviors can be explained with scopes.
Since each function creates its own scope, all variables defined within only exist while the function executes. Once
the function is done, the scope is discarded. That's why the variable $hasrun cannot be used to remember a previous
function call. Each time the function runs, a new $hasrun variable is created.
So why then does the function report that it has been called before once you define a variable $hasrun with arbitrary
content in the console?
When the function runs, the if statement checks to see whether $hasrun is equal to $true. Since at that point there is
no $hasrun variable in this scope, PowerShell starts to search for the variable in the parent scopes. Here, it finds the
variable. And since the if statement compares a boolean value with the variable content, automatic type casting takes
place: the content of the variable is automatically converted to a boolean value. Anything except $null will result in
$true. Check it out, and assign $null to the variable, then call the function again:
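```powershell
$hasrun = $null
Test
```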
To solve this problem and make the function work, you have to use global variables. A global variable basically is
what you created manually in the PowerShell console, and you can create and access global variables
programmatically, too. Here is the revised function:
function Test {
if ($global:hasrun -eq $true) {
'This function was called before'
} else {
$global:hasrun = $true
'This function runs for the first time'
}
}
PS> test
This function runs for the first time
PS> test
This function was called before
There are two changes in the code that made this happen:
Since all variables defined inside a function have a limited life span and are discarded once the function
ends, information that needs to remain available after the function ends must be stored in the global scope.
You do that by adding "global:" to your variable name.
To avoid implicit type casting, reverse the order of the comparison. PowerShell always looks at the type to
the left, so if that is a boolean value, the variable content will also be turned into a boolean value. As you
have seen, this may result in unexpected cross-effects. By using your variable first and comparing it to
$true, the variable type will not be changed.
Note that in place of global:, you can also use script:. That's another scope that may be useful. If you run the
example in the console, they both represent the same scope, but when you define your function in a script and then
run the script, script: refers to the script scope, so it creates "shared variables" that are accessible from anywhere
inside the script. You will see an example of this shortly.
The same is true for most PowerShell settings because they too are defined by variables. Let's take a look at the
ErrorActionPreference setting. It determines what a cmdlet should do when it encounters a problem. By default, it is
set to 'Continue', so PowerShell displays an error message but continues to run.
In a script, when you set $ErrorActionPreference to 'Stop', you can trap errors and handle them yourself. Here is a
simple example. Type in the following code and save it as a script, and then run the script:
$ErrorActionPreference = 'Stop'
trap {
"Something bad occured: $_"
continue
}
"Starting"
dir nonexisting:
Get-Process willsmith
"Done"
When you run this script, both errors are caught, and your script controls the error messages itself. Once the script is
done, check the content of $ErrorActionPreference:
PS> $ErrorActionPreference
continue
It is still set to 'Continue'. By default, the change made to $ErrorActionPreference was limited to your script and did
not change the setting in the parent scope. That's good because it prevents unwanted side-effects and left-overs from
previously running scripts.
Note: If the script did change the global setting, you may have called your script "dot-sourced". We'll discuss this
shortly. To follow the example, you need to call your script the default way: in the PowerShell console, enter the
complete path to your script file. If you have to place the path in quotes because of spaces, prepend it with "&".
Now, the second script becomes a child scope, and your initial script is the parent scope. Since your initial script has
changed $ErrorActionPreference, this change is propagated to the second script, and error handling changes there as
well.
Here is a little test scenario. Type in and save this code as script1.ps1:
$ErrorActionPreference = 'Stop'
trap {
"Something bad occurred: $_"
continue
}
# determine the folder this script is stored in
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition
'Starting Script'
dir nonexisting:
'Starting Subscript'
& "$folder\script2.ps1"
'Done'
Now create a second script and call it script2.ps1. Save it in the same folder:
"script2 starting"
dir nonexisting:
Get-Process noprocess
"script2 ending"
When you run script2.ps1, you get two error messages from PowerShell. As you can see, the entire script2.ps1 is
executed. You can see both the start message and the end message:
PS> & 'C:\scripts\script2.ps1'
script2 starting
Get-ChildItem : Cannot find drive. A drive with the name 'nonexisting' does
not exist.
At C:\scripts\script2.ps1:2 char:4
+ dir <<<< nonexisting:
+ CategoryInfo : ObjectNotFound: (nonexisting:String) [Get-
ChildItem], DriveNotFoundException
+ FullyQualifiedErrorId :
DriveNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand
Get-Process : Cannot find a process with the name "noprocess". Verify the
process name and call the cmdlet again.
At C:\scripts\script2.ps1:3 char:12
+ Get-Process <<<< noprocess
+ CategoryInfo : ObjectNotFound: (noprocess:String) [Get-
process], ProcessCommandException
+ FullyQualifiedErrorId :
NoProcessFoundForGivenName,Microsoft.PowerShell.Commands.GetProcessCommand
script2 ending
That is expected behavior. By default, the ErrorActionPreference is set to "Continue", so PowerShell outputs error
messages and continues with the next statement.
Now call script1.ps1 which basically calls script2.ps1 internally. The output suddenly is completely different:
No PowerShell error messages anymore. script1.ps1 has propagated the ErrorActionPreference setting to the child
script, so the child script now also uses the setting "Stop". Any error in script2.ps1 now bubbles up to the next
available error handler, which happens to be the trap in script1.ps1. That explains why the first error in script2.ps1
was output by the error handler in script1.ps1.
When you look closely at the result, you will notice, though, that script2.ps1 was aborted. It did not continue to run.
Instead, when the first error occurred, all remaining statements were skipped.
That again is default behavior: the error handler in script1.ps1 uses the statement "continue", so after an error was
reported, the error handler continues. It just does not continue in script2.ps1. That's because an error handler always
continues with the next statement that resides in the same scope in which the error handler is defined. script2.ps1 is
a child scope, though.
One way to fix this is to mark the preference variable as private. A private variable is not propagated to child
scopes, so script2.ps1 keeps its default error handling. Here is the revised script1.ps1:
$private:ErrorActionPreference = 'Stop'
trap {
"Something bad occurred: $_"
continue
}
'Starting Script'
dir nonexisting:
'Starting Subscript'
# determine the folder this script resides in:
$folder = Split-Path -Parent $MyInvocation.MyCommand.Path
& "$folder\script2.ps1"
'Done'
script2 ending
Done
Now, errors in script1.ps1 are handled by the built-in error handler, and errors in script2.ps1 are handled by
PowerShell.
And this is the revised script2.ps1 that uses its own error handler.
trap {
"Something bad occurred: $_"
continue
}
"script2 starting"
dir nonexisting:
Get-Process noprocess
"script2 ending"
Make sure you change script1.ps1 back to the original version by removing "private:" again before you run it.
This time, all code in script2.ps1 is executed, and each error is handled by the new error handler in
script2.ps1.
In Figure 12.1 you see that by default, the global scope (representing the PowerShell console or
development environment) and the script scope (representing a script you called from global scope) are two
different scopes. This guarantees that a script cannot change the caller's scope (unless the script developer
used the 'global:' prefix as described earlier).
If the caller calls the script "dot-sourced", though, the script scope is omitted, and what would have been
the script scope now is the global scope - or put differently, global scope and script scope become the same.
This is how you can make sure functions and variables defined in a script remain accessible even after the
script is done. Here is a sample. Type in the code and save it as script3.ps1:
function test-function {
'I am a test function!'
}
test-function
When you run this script the default way, the function test-function runs once because it is called from
within the script. Once the script is done, the function is gone. You can no longer call test-function.
Now, run the script dot-sourced! You do that by replacing the call operator "&" by a dot:
PS> . 'C:\scripts\script3.ps1'
I am a test function!
PS> test-function
I am a test function!
Since now the script scope and the global scope are identical, the script did define the function test-function
in the global scope. That's why the function is still there once the script ended.
The profile script that PowerShell runs automatically during startup ($profile) is an example of a script that
is running dot-sourced, although you cannot see the actual dot-sourcing call.
Note: To make sure functions defined in a script remain accessible, a developer could also prepend the
function name with "global:". However, that may not be such a clever idea. The prefix "global:" always
creates the function in the global context. Dot-sourcing is more flexible because it creates the function in
the caller's context. So if a script runs another script dot-sourced, all functions defined in the second script
are also available in the first, but the global context (the console) remains unaffected and unpolluted.
This default behavior is completely transparent if there is no ambiguity. If, however, you have different
command types with the same name, this may lead to surprising results:
# Function has priority over the external program and turns off the command:
function Ping { "Ping is not allowed." }
ping -n 1 10.10.10.10
Ping is not allowed.
As you can see, your function was able to "overwrite" ping.exe. Actually, it did not overwrite anything. The
scope functions live in has just a higher priority than the scope applications live in. Aliases live in yet
another scope which has the highest priority of them all:
Set-Alias ping echo
Now, Ping calls the Echo command, which is an alias for Write-Output, and simply outputs the parameters
that you may have specified after Ping in the console.
Get-Command Ping

CommandType Name     Definition
----------- ----     ----------
Function    Ping     "Ping is not allowed."
Alias       ping     echo
Application PING.EXE C:\Windows\system32\PING.EXE
Summary
PowerShell uses scopes to manage the life span and visibility of variables and functions. By default, the
content of scopes is visible to all child scopes and does not change any parent scope.
There is always at least one scope which is called "global scope". New scopes are created when you define
scripts or functions.
The developer can control the scope to use by prepending variable and function names with one of these
keywords: global:, script:, private: and local:. The prefix local: is the default and can be omitted.
The user can control scope by optionally dot-sourcing scripts, functions, or script blocks. With dot-sourcing,
no new scope is created for the element you are calling; instead, the caller's context is used.
A different flavor of scope is used to manage the different command types PowerShell supports. Here,
PowerShell searches for commands in a specific order. If the command name is ambiguous, PowerShell
uses the first command it finds. To find the command, it searches the command type scopes in this order:
alias, function, cmdlet, application, external script, and script. Use Get-Command to locate a command
yourself based on name and command type if you need more control.
Often, you need to deal with plain text information. You may want to read the content of some text file and
extract lines that contain a keyword, or you may want to isolate the file name from a file path. So while the object-
oriented approach of PowerShell is a great thing, at the end of the day most useful information breaks down to plain
text. In this chapter, you'll learn how to control text information in pretty much any way you want.
Topics Covered:
Defining Text
o Special Characters in Text
o Resolving Variables
o "Here-Strings": Multi-Line Text
o Communicating with the User
Composing Text with "-f"
o Setting Numeric Formats
o Outputting Values in Tabular Form: Fixed Width
o String Operators
o String Object Methods
o Analyzing Methods: Split() as Example
Simple Pattern Recognition
Regular Expressions
o Describing Patterns
o Quantifiers
o Anchors
o Recognizing Addresses
o Validating E-Mail Addresses
o Simultaneous Searches for Different Terms
o Case Sensitivity
o Finding Information in Text
o Searching for Several Keywords
o Forming Groups
o Greedy or Lazy? Shortest or Longest Possible Result
o Finding Segments
o Replacing a String
o Using Back References
o Putting Characters First at Line Beginnings
o Removing White Space
o Finding and Removing Doubled Words
Summary
Defining Text
To define text, place it in quotes. If you want PowerShell to treat the text exactly the way you type it, use single
quotes. Use double quotes with care because they can transform your text: any variable you place in your text will
get resolved, and PowerShell replaces the variable with its content. Have a look:
When you use single quotes, PowerShell returns the text exactly as you entered it. With double quotes, the result is
completely different:
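The examples elided here might look like this minimal sketch (the variable $city is made up for illustration):

```powershell
$city = 'Hannover'
'Welcome to $city'    # -> Welcome to $city   (single quotes: text stays literal)
"Welcome to $city"    # -> Welcome to Hannover (double quotes: variable is resolved)
```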
If you used single quotes to delimit the text, you can freely use double quotes inside the text, and vice versa:
If you must use the same type of quote both as delimiter and inside the text, you can "escape" quotes (remove their
special meaning) by either using two consecutive quotes, or by placing a "backtick" character in front of the quote:
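A short sketch of both escaping techniques:

```powershell
'It''s a test'        # two consecutive single quotes inside single quotes -> It's a test
"He said `"Stop`""    # backtick-escaped double quotes inside double quotes -> He said "Stop"
```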
The second most wanted special character you may want to include in text is a new line so you can extend text to
more than one line. Again, you have a couple of choices.
When you use double quotes to delimit text, you can insert special control characters like tabs or line breaks by
adding a backtick and then a special character where "t" stands for a tab and "n" represents a line break. This
technique does require that the text is defined by double quotes:
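For example, the following line mixes both control characters (no claim about how your console renders tabs):

```powershell
"Name`tSize`nFirst line`nSecond line"
# `t inserts a tab, `n a line break - this works only inside double quotes
```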
Resolving Variables
A rather unusual special character is "$". PowerShell uses it to define variables that can hold information. Text in
double quotes also honors this special character and recognizes variables by resolving them: PowerShell
automatically places the variable content into the text:
$name = 'Weltner'
"Hello Mr $name"
This only works for text enclosed in double quotes. If you use single quotes, PowerShell ignores variables and treats
"$" as a normal character:
'Hello Mr $name'
At the same time, single quotes protect you from unwanted variable resolving. Take a look at this example:
As it turns out, $$ is again a variable (it is an internal "automatic" variable maintained by PowerShell which happens
to contain the last command token PowerShell processed, which is why the result of the previous code line can vary
and depends on what you executed right before). So, as a rule of thumb, you should use single quotes by
default unless you really want to resolve variables in your text. Resolving text can be enormously handy:
Now, what would you do if you needed to use "$" both to resolve variables and to display literally in the same text?
Again, you can use the backtick to escape the "$" and remove its special resolving capability:
Tip: You can use the "$" resolving capabilities to insert live code results into text. Just place the code you want to
evaluate in brackets. To make PowerShell treat these brackets as it would outside of text, place a "$" before:
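A minimal sketch of such subexpressions:

```powershell
"Current date and time: $(Get-Date)"
"2 + 2 equals $(2 + 2)"    # -> 2 + 2 equals 4
```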
A much more readable way is using here-strings. They work like regular quotes except that the text is delimited by
@" and "@, which indicates that it may extend over multiple lines.
$text = @"
>> Here-Strings can easily stretch over several lines and may also include
>> "quotation marks". Nevertheless, here, too, variables are replaced with
>> their values: $env:windir, and subexpressions like $(2+2) are likewise replaced
>> with their result. The text will be concluded only if you terminate the
>> here-string with the termination symbol "@.
>> "@
>>
$text
Here-Strings can easily stretch over several lines and may also include
"quotation marks". Nevertheless, here, too, variables are replaced with
their values: C:\Windows, and subexpressions like 4 are likewise replaced
with their result. The text will be concluded only if you terminate the
here-string with the termination symbol "@.
Text accepted by Read-Host is treated literally, so it behaves like text enclosed in single quotes. Special characters
and variables are not resolved. If you want to resolve the text a user entered, you can, however, send it to the internal
ExpandString() method for post-processing. PowerShell uses this method internally when you define text in double
quotes:
$text = Read-Host 'Enter some text'
Enter some text: $env:windir
$ExecutionContext.InvokeCommand.ExpandString($text)
C:\Windows
You can also request secret information from a user. To mask input, use the switch parameter -asSecureString. This
time, however, Read-Host won't return plain text anymore but instead an encrypted SecureString. So, not only the
input was masked with asterisks, the result is just as unreadable. To convert an encrypted SecureString into plain
text, you can use some internal .NET methods:
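One common way to do this uses the .NET Marshal class (a sketch; the prompt text is made up):

```powershell
$secure = Read-Host -AsSecureString 'Password'
# copy the SecureString into an unmanaged BSTR, read it back as plain text:
$bstr = [Runtime.InteropServices.Marshal]::SecureStringToBSTR($secure)
$plain = [Runtime.InteropServices.Marshal]::PtrToStringAuto($bstr)
[Runtime.InteropServices.Marshal]::ZeroFreeBSTR($bstr)    # release the unmanaged copy
$plain
```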
The -f format operator formats a string. On its left side, it expects a string template containing placeholders; on its
right side, it expects the values that are to be inserted into the string in place of the placeholders:
The number of values on the right side must match the placeholders used in the string on the left side. If you want to
insert a calculated result, place the calculation in parentheses. As is generally true in PowerShell, the parentheses
ensure that the enclosed statement is evaluated first and separately and that subsequently the result is processed
instead of the parentheses. Without parentheses, -f would report an error:
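A sketch of the difference:

```powershell
# Without parentheses, -f binds first and the division then fails:
# "{0} diskettes fit on one CD" -f 720mb/1.44mb    # error
# With parentheses, the calculation is evaluated first:
"{0} diskettes fit on one CD" -f (720mb/1.44mb)    # -> 500 diskettes fit on one CD
```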
You may use as many placeholders as you wish. The number in the braces states which value is inserted into that
placeholder, so you can also change the order:
"{0} {3} at {2}MB fit into one CD at {1}MB" -f (720mb/1.44mb), 1.44, 720,
"diskettes"
500 diskettes at 720MB fit into one CD at 1.44MB
Index: This number indicates which value is to be used for this placeholder. For example, you could use
several placeholders with the same index if you want to output one and the same value several times, or in
various display formats. The index number is the only mandatory specification. The other two
specifications are optional.
Alignment: A positive or negative number that determines whether the value is right-justified (positive
number) or left-justified (negative number). The number states the desired width. If the value is wider than
the specified width, the width is ignored. However, if the value is narrower than the specified width, the
remaining width is filled with blank characters. This allows columns to be set flush.
Format: The value can be formatted in very different ways. Here you can use the relevant format name to
specify the format you wish. You'll find an overview of available formats below.
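The three parts combine as {index,alignment:format}, as in this sketch:

```powershell
"{0,10:N1}" -f 3.14159      # -> '       3.1' (right-aligned in 10 characters)
"{0,-10:N1}|" -f 3.14159    # -> '3.1       |' (left-aligned, padded to 10 characters)
```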
Unlike most things in PowerShell, formatting statements are case-sensitive. You can see how large the differences
can be when you format dates:
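For instance, a lowercase and an uppercase date format produce entirely different results (exact output depends on your locale):

```powershell
"{0:d}" -f (Get-Date)    # lowercase d: short date pattern, e.g. 10/15/2007
"{0:D}" -f (Get-Date)    # uppercase D: long date pattern, e.g. Monday, October 15, 2007
```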
Using the formats in Table 13.3, you can format numbers quickly and comfortably. No need for you to squint your
eyes any longer trying to decipher whether a number is a million or 10 million:
10000000000
"{0:N0}" -f 10000000000
10,000,000,000
There's also a very wide range of time and date formats. The relevant formats are listed in Table 13.4 and their
operation is shown in the following lines:
$date= Get-Date
foreach ($format in "d","D","f","F","g","G","m","r","s","t","T","u","U","y",`
"dddd, MMMM dd yyyy","M/yy","dd-MM-yy") {
"DATE with $format : {0}" -f $date.ToString($format)
}
DATE with d : 10/15/2007
DATE with D : Monday, 15 October, 2007
DATE with f : Monday, 15 October, 2007 02:17 PM
DATE with F : Monday, 15 October, 2007 02:17:02 PM
DATE with g : 10/15/2007 02:17
DATE with G : 10/15/2007 02:17:02
DATE with m : October 15
DATE with r : Mon, 15 Oct 2007 02:17:02 GMT
DATE with s : 2007-10-15T02:17:02
DATE with t : 02:17 PM
DATE with T : 02:17:02 PM
DATE with u : 2007-10-15 02:17:02Z
DATE with U : Monday, 15 October, 2007 00:17:02
DATE with y : October, 2007
DATE with dddd, MMMM dd yyyy : Monday, October 15 2007
DATE with M/yy : 10/07
DATE with dd-MM-yy : 15-10-07
If you want to find out which data types support formatting, you need only look for .NET types whose ToString()
method accepts a format string:
[AppDomain]::CurrentDomain.GetAssemblies() | ForEach-Object {
$_.GetExportedTypes() | Where-Object { ! $_.IsSubclassOf([System.Enum]) }
} | ForEach-Object {
$methods = $_.GetMethods() | Where-Object { $_.Name -eq "ToString" } | ForEach-Object { "$_" }
if ($methods -eq "System.String ToString(System.String)") {
$_.FullName
}
}
System.Enum
System.DateTime
System.Byte
System.Convert
System.Decimal
System.Double
System.Guid
System.Int16
System.Int32
System.Int64
System.IntPtr
System.SByte
System.Single
System.UInt16
System.UInt32
System.UInt64
Microsoft.PowerShell.Commands.MatchInfo
For example, among the supported data types is the "globally unique identifier" System.Guid. Because you'll
frequently require GUIDs, here's a brief example showing how to create and format a GUID:
$guid = [GUID]::NewGUID()
foreach ($format in "N","D","B","P") {
"GUID with $format : {0}" -f $guid.ToString($format)
}
GUID with N : 0c4d2c4c8af84d198b698e57c1aee780
GUID with D : 0c4d2c4c-8af8-4d19-8b69-8e57c1aee780
GUID with B : {0c4d2c4c-8af8-4d19-8b69-8e57c1aee780}
GUID with P : (0c4d2c4c-8af8-4d19-8b69-8e57c1aee780)
The following result with fixed column widths is far more legible. To set widths, add a comma to the sequential
number of the wildcard and after it specify the number of characters available to the wildcard. Positive numbers will
set values to right alignment, negative numbers to left alignment:
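A sketch of such a two-column layout (the file name and size are made-up sample values):

```powershell
"{0,-15}{1,10}" -f 'Name', 'Size'
"{0,-15}{1,10:N0}" -f 'winword.exe', 1048576
# first column left-aligned in 15 characters, second right-aligned in 10
```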
More options are offered by special text commands that PowerShell furnishes from three different areas:
String operators: PowerShell includes a number of string operators for general text tasks which you can
use to replace text and to compare text (Table 13.2).
Dynamic methods: the String data type, which saves text, includes its own set of text statements that you
can use to search through, dismantle, reassemble, and modify text in diverse ways (Table 13.6).
Static methods: finally, the String .NET class includes static methods bound to no particular text.
String Operators
All string operators work in basically the same way: they take data from the left and the right and then do something
with them. The -replace operator, for example, takes a text, a search pattern, and some replacement text, and then
replaces all matches of the pattern in the original text:
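For example:

```powershell
'Hello Carl' -replace 'Carl', 'Eddie'    # -> Hello Eddie
```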
The format operator -f works in exactly the same way. You heard about this operator at the beginning of this
chapter. It takes a static string template with placeholders and an array with values, and then fills the values into the
placeholders.
Two additional important string operators are -join and -split. They can be used to automatically join together an
array or to split a text into an array of substrings.
Let's say you want to output information that really is an array of information. When you query WMI for your
operating system to identify the installed MUI languages, the result can be an array (when more than one language is
installed). So, this line produces an incomplete output:
You would have to join the array to one string first using -join. Here is how:
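A sketch with a hypothetical language list (a real WMI query would supply the array):

```powershell
$languages = 'de-DE', 'en-US'
'MUI languages: ' + ($languages -join ', ')    # -> MUI languages: de-DE, en-US
```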
The -split operator does the exact opposite. It takes a text and a split pattern, and each time it discovers the split
pattern, it splits the original text in chunks and returns an array. This example illustrates how you can use -split to
parse a path:
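For example:

```powershell
'c:\test\Example.bat' -split '\\'    # the backslash must be escaped in the regex pattern
# -> c:
#    test
#    Example.bat
```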
Note that -split expects the pattern to be a regular expression, so if your pattern is composed of reserved
characters (like the backslash), you have to escape it. Note also that the Split-Path cmdlet can split paths more
easily.
To auto-escape a simple text pattern, use .NET methods. The Escape() method takes a simple text pattern and
returns the escaped version that you can use wherever a regular expression is needed:
PS> [RegEx]::Escape('some.\pattern')
some\.\\pattern
You can also use string methods, for example, to extract a file extension from a path. One approach finds the last
dot with LastIndexOf() and extracts everything after it with Substring():
$path = "c:\test\Example.bat"
$path.Substring( $path.LastIndexOf(".")+1 )
bat
Another approach uses the dot as separator and Split() to split up the path into an array. The result is that the last
element of the array (-1 index number) will include the file extension:
$path.Split(".")[-1]
bat
EndsWith()         Tests whether the string ends with a specified string                              ("Hello").EndsWith("lo")
Equals()           Tests whether one string is identical to another string                            ("Hello").Equals($a)
IndexOf()          Returns the index of the first occurrence of a comparison string                   ("Hello").IndexOf("l")
IndexOfAny()       Returns the index of the first occurrence of any character in a comparison string  ("Hello").IndexOfAny("loe")
Insert()           Inserts a new string at a specified index in an existing string                    ("Hello World").Insert(6, "brave ")
GetEnumerator()    Retrieves a new object that can enumerate all characters of a string               ("Hello").GetEnumerator()
LastIndexOf()      Finds the index of the last occurrence of a specified character                    ("Hello").LastIndexOf("l")
LastIndexOfAny()   Finds the index of the last occurrence of any character of a specified string      ("Hello").LastIndexOfAny("loe")
PadLeft()          Pads a string to a specified length, adding blanks to the left (right-aligned)     ("Hello").PadLeft(10)
PadRight()         Pads a string to a specified length, adding blanks to the right (left-aligned)     ("Hello").PadRight(10) + "World!"
Remove()           Removes a number of characters starting at a specified position                    ("Hello World").Remove(5,6)
Replace()          Replaces a character with another character                                        ("Hello World").Replace("l", "x")
Split()            Converts a string with specified splitting points into an array                    ("Hello World").Split("l")
StartsWith()       Tests whether a string begins with a specified character                           ("Hello World").StartsWith("He")
Substring()        Extracts characters from a string                                                  ("Hello World").Substring(4, 3)
ToCharArray()      Converts a string into a character array                                           ("Hello World").ToCharArray()
ToLower()          Converts a string to lowercase                                                     ("Hello World").ToLower()
ToLowerInvariant() Converts a string to lowercase using the casing rules of the invariant culture     ("Hello World").ToLowerInvariant()
ToUpper()          Converts a string to uppercase                                                     ("Hello World").ToUpper()
ToUpperInvariant() Converts a string to uppercase using the casing rules of the invariant culture     ("Hello World").ToUpperInvariant()
Trim()             Removes blank characters to the right and left                                     (" Hello ").Trim() + "World"
TrimEnd()          Removes blank characters to the right                                              (" Hello ").TrimEnd() + "World"
TrimStart()        Removes blank characters to the left                                               (" Hello ").TrimStart() + "World"
Chars()            Returns the character at the specified position                                    ("Hello").Chars(0)
Definition gets output, but it isn't very easy to read. Because Definition is also a string object, you can use methods
from Table 13.6, including Replace(), to insert a line break where appropriate. That makes the result much more
understandable:
There are six different ways to invoke Split(). In the simplest case, you call Split() with only one argument. Split()
then expects a character array and uses every single character as a possible splitting separator. That's
important because it means that you may use several separators at once:
"a,b;c,d;e;f".Split(",;")
a
b
c
d
e
f
If the splitting separator itself consists of several characters, then it has to be a string and not a single Char
character. There are only two signatures that meet this condition:
System.String[] Split(String[] separator, StringSplitOptions options)
System.String[] Split(String[] separator, Int32 count, StringSplitOptions options)
To use a particular signature, you must make sure that you pass exactly the data types that this signature requires. If
you want to use the first signature, the first argument must be of the String[] type and the second
argument of the StringSplitOptions type. The simplest way for you to meet this requirement is to assign the
arguments to strongly typed variables first. Create each variable with exactly the type that the signature requires:
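For example:

```powershell
[string[]]$separator = ',;'
[StringSplitOptions]$option = 'None'
# the two-character separator ",;" is now used as one unit:
'1,;2,;3'.Split($separator, $option)
# -> 1
#    2
#    3
```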
Split() in fact now uses a separator consisting of several characters. It splits the string only at the points where it
finds precisely the characters that were specified. That leaves the question of how you would know that you need to
assign the value "None" to the StringSplitOptions data type. The simple answer is: you don't need to know. If you
assign an invalid value to such a strongly typed variable, the resulting error message will automatically list all valid
values:
The error message should make the purpose of the valid values and their names clear. For example, what does
RemoveEmptyEntries accomplish? If Split() runs into several separators following each other, empty
array elements are the consequence. RemoveEmptyEntries deletes such empty entries. You could use it to
remove redundant blank characters from a text:
[StringSplitOptions]$option = "RemoveEmptyEntries"
"This text has too much whitespace".Split(" ", $option)
This
text
has
too
much
whitespace
Now all you need is just a method that can convert the elements of an array back into text. The method is called
Join(); it is not in a String object but in the String class.
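For example:

```powershell
$parts = 'This', 'text', 'was', 'split', 'into', 'words'
[string]::Join(' ', $parts)    # -> This text was split into words
```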
A simple form of wildcards was invented for the file system many years ago and it still works today. In fact, you've
probably used it before in one form or another:
# List all files in the current directory that have the txt file extension:
dir *.txt
# List all files in the Windows directory that begin with "n" or "w":
dir $env:windir\[nw]*.*
# List all files whose file extensions begin with "t" and which are exactly 3 characters long:
dir *.t??
# List all files that end in one of the letters from "e" to "z":
dir *[e-z].*
The placeholders in Table 13.7 work in the file system, but also with string comparisons like -like and -notlike. For
example, if you want to verify whether a user has given a valid IP address, you could do so in the following way:
If you want to verify whether a valid e-mail address was entered, you could check the pattern like this:
$email = "[email protected]"
$email -like "*.*@*.*"
# Wildcards are appropriate only for very simple pattern recognition and leave room for erroneous entries:
$ip = "300.werner.6666."
if ($ip -like "*.*.*.*") { "valid" } else { "invalid" }
valid
Regular Expressions
Use regular expressions for more accurate pattern recognition. Regular expressions offer highly specific wildcard
characters; that's why they can describe patterns in much greater detail. For the very same reason, however, regular
expressions are also much more complicated.
Describing Patterns
Using the regular expression elements listed in Table 13.11, you can describe patterns with much greater precision.
These elements are grouped into three categories:
Placeholder: The placeholder represents a specific type of data, for example a character or a digit.
Quantifier: Allows you to determine how often a placeholder occurs in a pattern. You could, for example,
define a 3-digit number or a 6-character-word.
Anchor: Allows you to determine whether a pattern is bound to a specific boundary. You could define a
pattern that needs to be a separate word or that needs to begin at the beginning of the text.
The pattern represented by a regular expression may consist of four different character types:
Literal characters like "abc" that exactly match the "abc" string.
Masked or "escaped" characters with special meanings in regular expressions; when preceded by
"\", they are understood as literal characters: "\[test\]" looks for the "[test]" string. The following
characters have special meanings and for this reason must be masked if used literally: ". ^ $ * + ? { [
] \ | ( )".
Pre-defined wildcard characters that represent a particular character category and work like placeholders.
For example, "\d" represents any number from 0 to 9.
Custom wildcard characters: They consist of square brackets, within which the characters are specified
that the wildcard represents. If you want to use any character except for the specified characters, use "^" as
the first character in the square brackets. For example, the placeholder "[^f-h]" stands for all characters
except for "f", "g", and "h".
Element Description
. Exactly one character of any kind except for a line break (equivalent to [^\n])
[^abc] All characters except for those specified in brackets
[^a-z] All characters except for those in the range specified in the brackets
[abc] One of the characters specified in brackets
[a-z] Any character in the range indicated in brackets
\a Bell alarm (ASCII 7)
\c Any character allowed in an XML name
\cA-\cZ Control+A to Control+Z, equivalent to ASCII 1 to ASCII 26
\d A number (equivalent to [0-9])
\D Any character except for numbers
\e Escape (ASCII 27)
\f Form feed (ASCII 12)
\n New line
\r Carriage return
\s Any whitespace character like a blank character, tab, or line break
\S Any character except for a blank character, tab, or line break
\t Tab character
\uFFFF Unicode character with the hexadecimal code FFFF. For example, the Euro symbol has the code 20AC
\v Vertical tab (ASCII 11)
\w Letter, digit, or underline
\W Any character except for letters, digits, or underline
\xnn Particular character, where nn specifies the hexadecimal ASCII code
.* Any number of any character (including no characters at all)
Quantifiers
Every pattern listed in Table 13.8 represents exactly one instance of that kind. Using quantifiers, you can tell how
many instances are parts of your pattern. For example, "\d{1,3}" represents a number occurring one to three times
for a one-to-three digit number.
Element Description
* Preceding expression is not matched or matched once or several times (matches as much as possible)
*? Preceding expression is not matched or matched once or several times (matches as little as possible)
.* Any number of any character (including no characters at all)
? Preceding expression is not matched or matched once (matches as much as possible)
?? Preceding expression is not matched or matched once (matches as little as possible)
{n,} n or more matches
{n,m} Inclusive matches between n and m
{n} Exactly n matches
+ Preceding expression is matched once or several times (matches as much as possible)
Anchors
Anchors determine whether a pattern has to match a certain boundary. For example, the regular expression
"\b\d{1,3}" finds numbers only up to three digits if these turn up separately in a string. The number "123" in the
string "Bart123" would not qualify.
Elements Description
$ Matches at end of a string (\Z is less ambiguous for multi-line texts)
\A Matches at beginning of a string, including multi-line texts
\b Matches on word boundary (first or last characters in words)
\B Must not match on word boundary
\Z Must match at end of string, including multi-line texts
^ Must match at beginning of a string (\A is less ambiguous for multi-line texts)
Recognizing IP Addresses
Patterns such as an IP address can be very precisely described by regular expressions. Usually, you would use a
combination of characters and quantifiers to specify which characters may occur in a string and how often:
$ip = "10.10.10.10"
$ip -match "\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"
True
$ip = "a.10.10.10"
$ip -match "\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"
False
$ip = "1000.10.10.10"
$ip -match "\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"
False
The pattern is described here as four numbers (char: \d) between one and three digits (using the quantifier {1,3}) and
anchored on word boundaries (using the anchor \b), meaning that it is surrounded by white space like blank
characters, tabs, or line breaks. Checking is far from perfect since it is not verified whether the numbers really do lie
in the permitted number range from 0 to 255.
$email = "[email protected]"
$email -match "\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b"
True
$email = ".@."
$email -match "\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b"
False
Whenever you look for an expression that occurs as a single "word" in text, delimit your regular expression by word
boundaries (anchor: \b). The regular expression will then know you're interested only in those passages that are
demarcated from the rest of the text by white space like blank characters, tabs, or line breaks.
The regular expression subsequently specifies which characters may be included in an e-mail address. Permissible
characters are in square brackets and consist of "ranges" (for example, "A-Z0-9") and single characters (such as
"._%+-"). The "+" behind the square brackets is a quantifier and means that at least one of the given characters must
be present. However, you can also stipulate as many more characters as you wish.
Following this is "@" and, if you like, after it a text again having the same characters as those in front of "@". A dot
(\.) in the e-mail address follows. This dot is introduced with a "\" character because the dot actually has a different
meaning in regular expressions if it isn't within square brackets. The backslash ensures that the regular expression
understands the dot behind it literally.
After the dot is the domain identifier, which may consist solely of letters ([A-Z]). A quantifier ({2,4}) again follows
the square brackets. It specifies that the domain identifier may consist of at least two and at most four of the given
characters.
However, this regular expression still has one flaw. While it does verify whether a valid e-mail address is in the text
somewhere, there could be another text before or after it:
Because of "\b", when your regular expression searches for a pattern somewhere in the text, it only takes into
account word boundaries. If you prefer to check whether the entire text corresponds to an authentic e-mail, use the
elements for sentence beginnings (anchor: "^") and endings (anchor: "$") instead of word boundaries.
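A sketch of such a full-string check (the address used is just a placeholder):

$email = "[email protected]"
$email -match "^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$"
True
$email = "some text [email protected]"
$email -match "^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$"
False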
The "?" character here doesn't represent any character at all, as you might expect after using simple wildcards. For
regular expressions, "?" is a quantifier and always specifies how often a character or expression in front of it may
occur. In the example, therefore, "u?" ensures that the letter "u" may, but not necessarily, be in the specified location
in the pattern. Other quantifiers are "*" (the preceding element may occur any number of times, including not at all) and "+" (the preceding element must occur at least once).
If you prefer to mark more than one character as optional, put the characters in a sub-expression, which is placed in parentheses. The following example recognizes both the month designator "Nov" and "November":
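A minimal sketch of such a pattern:

"November" -match "\bNov(ember)?\b"
True
"Nov" -match "\bNov(ember)?\b"
True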
If you'd rather use several alternative search terms, use the OR character "|":
And if you want to mix alternative search terms with fixed text, use sub-expressions again:
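Hedged examples of both constructs (the search terms are made up for illustration):

# Alternative search terms:
"Error" -match "Warning|Error"
True
# Fixed text mixed with alternatives in a sub-expression:
"Get-Process" -match "Get-(Process|Service)"
True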
If you want case sensitivity in only some pattern segments, use –match. Also, specify in your regular expression
which text segments are case sensitive and which are insensitive. Anything following the "(?i)" construct is case
insensitive. Conversely, anything following "(?-i)" is case sensitive. This explains why the word "test" in the below
example is recognized only if its last two characters are lowercase, while case sensitivity has no importance for the
first two characters:
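A sketch of such a mixed-case check, based on the description above:

"TEst" -match "(?i)te(?-i)st"
True
"TEST" -match "(?i)te(?-i)st"
False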
If you use a .NET Framework RegEx object instead of -match, it will work case-sensitively by default, much like -cmatch. If you prefer case insensitivity, either use the "(?i)" construct described above in your regular expression or submit extra options to the Matches() method (which is a lot more work):
Of course, a regular expression can perform any number of detailed checks, such as verifying whether numbers in an
IP address lie within the permissible range from 0 to 255. The problem is that this makes regular expressions long
and hard to understand. Fortunately, you generally won't need to invest much time in learning complex regular
expressions like the ones coming up. It's enough to know which regular expression to use for a particular pattern.
Regular expressions for nearly all standard patterns can be downloaded from the Internet. In the following example,
we'll look more closely at a complex regular expression that evidently is entirely made up of the conventional
elements listed in Table 13.11:
$ip = "300.400.500.999"
$ip -match "\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b"
False
The expression validates only text that runs between word boundaries (the anchor is \b). The following sub-expression defines every single number:
(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)
The construct ?: marks the group as non-capturing. It is optional and enhances speed because the match does not need to be stored. After it come three alternatively permitted number formats, separated by the alternation construct "|". 25[0-5] is a number from 250 through 255. 2[0-4][0-9] is a number from 200 through 249. Finally, [01]?[0-9][0-9]? is a number from 0 through 199, with or without a leading zero. The quantifier "?" makes the preceding element optional. The result is that the sub-expression describes numbers from 0 through 255. An IP address consists of four such numbers. A dot always follows the first three numbers. For this reason, the
following expression includes a definition of the number:
(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}
A dot, (\.), is appended to the number. This construct is supposed to be present three times ({3}). When the fourth
number is also appended, the regular expression is complete. You have learned to create sub-expressions (by using
parentheses) and how to iterate sub-expressions (by indicating the number of iterations in braces after the sub-
expression), so you should now be able to shorten the first used IP address regular expression:
$ip = "10.10.10.10"
$ip -match "\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"
True
$ip -match "\b(?:\d{1,3}\.){3}\d{1,3}\b"
True
Since the RegEx object is case-sensitive by default, put the "(?i)" option before the regular expression to make it
work like -match.
# A raw text contains several e-mail addresses. -match finds the first one only:
$rawtext = "[email protected] sent an e-mail that was forwarded to [email protected]."
$rawtext -match "\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b"
True
$matches
Name Value
---- -----
0 [email protected]
# A RegEx object can find any pattern but is case sensitive by default:
$regex = [regex]"(?i)\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b"
$regex.Matches($rawtext)
Groups : {[email protected]}
Success : True
Captures : {[email protected]}
Index : 4
Length : 13
Value : [email protected]
Groups : {[email protected]}
Success : True
Captures : {[email protected]}
Index : 42
Length : 13
Value : [email protected]
$matches tells you which keyword actually occurs in the string. But note the order of keywords in your regular
expression—it's crucial because the first matching keyword is the one selected. In this example, the result would be
incorrect:
Either change the order of keywords so that longer keywords are checked before shorter ones …:
… or make sure that your regular expression is precisely formulated, and remember that you're actually searching
for single words. Insert word boundaries into your regular expression so that sequential order no longer plays a role:
It's true here, too, that -match finds only the first match. If your raw text has several occurrences of the keyword, use
a RegEx object again:
$regex = [regex]"\b(Get|GetValue|Set|SetValue)\b"
$regex.Matches("Set a=1; GetValue a; SetValue b=12")
Groups : {Set, Set}
Success : True
Captures : {Set}
Index : 0
Length : 3
Value : Set
Forming Groups
A raw text line is often a heaping trove of useful data. You can use parentheses to collect this data in sub-
expressions so that it can be evaluated separately later. The basic principle is that all the data that you want to find in
a pattern should be wrapped in parentheses because $matches will return the results of these sub-expressions as
independent elements. For example, if a text line contains a date first, then text, and if both are separated by tabs,
you could describe the pattern like this:
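Such a pattern could be sketched like this (the `t inside the double-quoted pattern stands for a tab):

"12/01/2009`tDescription" -match "(.+)`t(.+)"
True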
# Show result:
$matches
Name Value
---- -----
2 Description
1 12/01/2009
0 12/01/2009 Description
$matches[1]
12/01/2009
$matches[2]
Description
When you use sub-expressions, $matches will contain the entire matched pattern in the first array element, named "0". Sub-expressions defined in parentheses follow in additional elements. To make them easier to read and understand, you can assign sub-expressions their own names and later use the names to call results. To assign a name to a sub-expression, type ?<Name> directly after its opening parenthesis:
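A sketch using the Date and Text names that appear in the result below:

"12/01/2009`tDescription" -match "(?<Date>.+)`t(?<Text>.+)"
True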
# Show result:
$matches
Name Value
---- -----
Text Description
Date 12/01/2009
0 12/01/2009 Description
$matches.Date
12/01/2009
$matches.Text
Description
Each result retrieved by $matches for each sub-expression naturally requires storage space. If you don't need the
results, discard them to increase the speed of your regular expression. To do so, type "?:" as the first statement in
your sub-expression:
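The two cases discussed next can be sketched like this: a greedy and a lazy version of the same month pattern, both applied to the text "February":

"February" -match "Feb(ruary)?"
True
$matches[0]
February
"February" -match "Feb(ruary)??"
True
$matches[0]
Feb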
In both cases, the regular expression recognizes the month, but returns different results in $matches. By default, the
regular expression is "greedy" and returns the longest possible match. If the text is "February," then the expression
will search for a match starting with "Feb" and then continue searching "greedily" to check whether even more
characters match the pattern. If they do, the entire (detailed) text is reported back: February.
If your main concern is just standardizing the names of months, you would probably prefer getting back the shortest possible text: Feb. To switch a regular expression to lazy matching (returning the shortest possible match), add "?" to the quantifier. "Feb(ruary)??" now stands for a pattern that starts with "Feb", followed by zero or one occurrence of "ruary" (quantifier "?"), returning only the shortest possible match (which is turned on by the second "?").
Replacing a String
You already know how to replace a string because you know the string –replace operator. Simply tell the operator
what term you want to replace in a string:
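A minimal sketch (the names are made up):

"Hello, Ralph" -replace "Ralph", "Martina"
Hello, Martina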
But simple replacement isn't always sufficient, so you can also use regular expressions for replacements. Some of
the following examples show how that could be useful.
Let's say you'd like to replace several different terms in a string with one other term. Without regular expressions,
you'd have to replace each term separately. With regular expressions, simply use the alternation operator, "|":
You can type any term in parentheses and use the "|" symbol to separate them. All the terms will be replaced with
the replacement string you specify.
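For example (made-up terms):

"red or green makes yellow" -replace "(red|green)", "blue"
blue or blue makes yellow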
Using Back References
This last example replaces specified keywords anywhere in a string. Often, that's sufficient, but sometimes you don't
want to replace a keyword everywhere it occurs but only when it occurs in a certain context. In such cases, the
context must be defined in some way in the pattern. How could you change the regular expression so that it replaces
only the names Miller and Meyer? Like this:
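A sketch of such a context-bound replacement (the replacement text is made up; the names match the description that follows):

"Mr. Miller, Mrs. Meyer and Mr. Werner" -replace "(Mr\.|Mrs\.) (Miller|Meyer)", "Weltner"
Weltner, Weltner and Mr. Werner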
The result looks a little peculiar, but the pattern you're looking for was correctly identified. The only replacements
were Mr. or Mrs. Miller and Mr. or Mrs. Meyer. The term "Mr. Werner" wasn't replaced. Unfortunately, the result
also shows that it doesn't make any sense here to replace the entire pattern. At least the name of the person should be
retained. Is that possible?
This is where the back referencing you've already seen comes into play. Whenever you use parentheses in your
regular expression, the result inside the parentheses is evaluated separately, and you can use these separate results in
your replacement string. The first sub-expression always reports whether a "Mr." or a "Mrs." was found in the string.
The second sub-expression returns the name of the person. The terms "$1" and "$2" provide you the results of these sub-expressions in the replacement string (the number is simply a sequential number; you could also use "$3" and so on for additional sub-expressions).
The back references don't seem to work. Can you see why? "$1" and "$2" look like PowerShell variables, but in
reality they are part of the regular expression. As a result, if you put the replacement string inside double quotes,
PowerShell replaces "$2" with the PowerShell variable $2, which is probably undefined. Use single quotation marks
instead, or add a backtick to the "$" special character so that PowerShell won't recognize it as its own variable and
replace it:
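With the replacement string in single quotes, the back references work as intended (same made-up names as before):

"Mr. Miller, Mrs. Meyer and Mr. Werner" -replace "(Mr\.|Mrs\.) (Miller|Meyer)", '$1 Weltner'
Mr. Weltner, Mrs. Weltner and Mr. Werner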
# In multiline mode, \A stands for the text beginning and ^ for the line beginning:
[regex]::Replace($text, "\A", "> ", [Text.RegularExpressions.RegexOptions]::Multiline)
> Here is a little text.
I want to attach this text to an e-mail as a quote.
That's why I would put a ">" before every line.
"\b(\w+)(\s+\1){1,}\b"
The pattern searched for is delimited by word boundaries (anchor: "\b"). It consists of one word (the character class "\w" and quantifier "+"), followed by white space (the character class "\s" and quantifier "+") and a repetition of that word (the back reference "\1"). This group, the white space plus the repeated word, must occur at least once (quantifier "{1,}"). The entire pattern is then replaced with the first back reference, that is, the first occurrence of the word.
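Put to work in a sketch that removes duplicated words:

"This this is a test" -replace "\b(\w+)(\s+\1){1,}\b", '$1'
This is a test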
Summary
Text is defined either by single or double quotation marks. If you use double quotation marks, PowerShell will
replace PowerShell variables and special characters in the text. Text enclosed in single quotation marks remains as-
is. If you want to prompt the user for input text, use the Read-Host cmdlet. Multi-line text can be defined with Here-
Strings, which start with @"(Enter) and end with "@(Enter).
By using the format operator –f, you can compose formatted text. This gives you the option to display text in
different ways or to set fixed widths to output text in aligned columns (Table 13.3 through Table 13.5). Along with
the formatting operator, PowerShell has a number of string operators you can use to validate patterns or to replace a
string (Table 13.2).
PowerShell stores text in string objects, which support methods to work on the stored text. You can use these
methods by typing a dot after the string object (or the variable in which the text is stored) and then activating auto
complete (Table 13.6). Along with the dynamic methods that always refer to text stored in a string object, there are
also static methods that are provided directly by the string data type by qualifying the string object with "[string]::".
The simplest way to describe patterns is to use the simple wildcards in Table 13.7. Simple wildcard patterns, while
easy to use, only support very basic pattern recognition. Also, simple wildcard patterns can only recognize the
patterns; they cannot extract data from them.
A far more sophisticated tool are regular expressions. They consist of very specific placeholders, quantifiers and
anchors listed in Table 13.11. Regular expressions precisely identify even complex patterns and can be used with the
operators -match or –replace. Use the .NET object [regex] if you want to match multiple pattern instances.
In today's world, data is no longer presented in plain-text files. Instead, XML (Extensible Markup Language) has
evolved to become a de facto standard because it allows data to be stored in a flexible yet standard way. PowerShell
takes this into account and makes working with XML data much easier than before.
Topics Covered:
<Name>Tobias Weltner</Name>
Nodes can be decorated with attributes. Attributes are stored in the start tag of the node like this:
If a node has no particular content, its start and end tags can be combined into a single tag, with the symbol "/" moved to the end of the tag. If the branch office in Hanover doesn't have any staff currently working in the field, the tag could look like this:
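Based on the staff node used later in this chapter, such a self-closing tag might look like this (a sketch):

<staff branch="Hanover" Type="sales" />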
The following XML structure describes two staff members of the Hanover branch office who are working in the
sales department.
The XML data begins with an XML header at the very top of the document:
<?xml version="1.0" ?>
This particular header contains a version attribute which declares that the XML structure conforms to the
specifications of XML version 1.0. There can be additional attributes in the XML header. Often you find a reference
to a "schema", which is a formal description of the structure of that XML file. The schema could, for example,
specify that there must always be a node called "staff" as part of staff information, which in turn could include as
many sub-nodes named "staff" as required. The schema would also specify that information relating to name and
function must also be defined for each staff member.
Because XML files consist of plain text, you can easily create them using any editor or directly from within
PowerShell. Let's save the previous staff list as an xml file:
@'
<?xml version="1.0" standalone="yes"?>
<staff branch="Hanover" Type="sales">
<employee>
<Name>Tobias Weltner</Name>
<function>management</function>
<age>39</age>
</employee>
<employee>
<Name>Cofi Heidecke</Name>
<function>security</function>
<age>4</age>
</employee>
</staff>
'@ | Out-File $env:temp\employee.xml
XML is case-sensitive!
A faster approach uses a blank XML object and its Load() method:
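A sketch of that approach, loading the file saved above:

$xmldata = New-Object xml
$xmldata.Load("$env:temp\employee.xml")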
Converting or loading XML from a file of course only works when the XML is valid and contains no syntactic errors. Otherwise, the conversion will throw an exception.
Once the XML data is stored in an XML object, it is easy to read its content because PowerShell automatically turns
XML nodes and attributes into object properties. So, to read the staff from the sample XML data, try this:
$xmldata.staff.employee
Name             function     age
----             --------     ---
Tobias Weltner   management   39
Cofi Heidecke    security     4
If you want to save changes you applied to XML data, call the Save() method:
$xmldata.Save("$env:temp\updateddata.xml")
$xmldata.SelectNodes('staff/employee')
Name             function     age
----             --------     ---
Tobias Weltner   management   39
Cofi Heidecke    security     4
The result is pretty much the same as before, but XPath is very flexible and supports wildcards and additional
control. The next statement retrieves just the first employee node:
$xmldata.SelectNodes('staff/employee[1]')
Name             function     age
----             --------     ---
Tobias Weltner   management   39
If you'd like, you can get a list of all employees who are under the age of 18:
$xmldata.SelectNodes('staff/employee[age<18]')
Name            function    age
----            --------    ---
Cofi Heidecke   security    4
# Get the last employee node:
$xmldata.SelectNodes('staff/employee[last()]')
# Get all employee nodes except the first one:
$xmldata.SelectNodes('staff/employee[position()>1]')
# Output all employees of the Hanover branch office except for Tobias Weltner:
# (Select() requires an XPathNavigator, created here from the XML document)
$navigator = $xmldata.CreateNavigator()
$query = "/staff[@branch='Hanover']/employee[Name!='Tobias Weltner']"
$navigator.Select($query) | Format-Table Value
Value
-----
Cofi Heidecke
Accessing Attributes
Attributes are pieces of information that describe an XML node. If you'd like to read the attributes of a node, use
Attributes:
$xmldata.staff.Attributes
#text
-----
Hanover
sales
$xmldata.staff.GetAttribute("branch")
Hanover
# Check result:
$xmldata.staff.employee
Name             function     age
----             --------     ---
Tobias Weltner   management   39
Cofi Heidecke    security     4
Bernd Seiler     expert
# Output plain text:
$xmldata.get_InnerXml()
<?xml version="1.0"?><staff branch="Hanover" Type="sales"><employee>
<Name>Tobias Weltner</Name><function>management</function><age>39</age>
</employee><employee><Name>Cofi Heidecke</Name><function>security</function>
<age>4</age></employee><employee><Name>Bernd Seiler</Name><function>
expert</function></employee></staff>
With the basic knowledge about XML that you gained so far, you can start exploring the ETS XML files and learn
more about the inner workings of PowerShell.
Dir $pshome\*.format.ps1xml
All these files define a multitude of Views, which you can examine using PowerShell XML support.
To find out which views exist, take a look into the format.ps1xml files that describe the object type.
Name             ObjectType
----             ----------
Dictionary       System.Collections.DictionaryEntry
DateTime         System.DateTime
Priority         System.Diagnostics.Process
StartTime        System.Diagnostics.Process
process          System.Diagnostics.Process
process          System.Diagnostics.Process
ProcessModule    System.Diagnostics.ProcessModule
DirectoryEntry   System.DirectoryServices.DirectoryEntry
PSSnapInInfo     System.Management.Automation.PSSnapI...
PSSnapInInfo     System.Management.Automation.PSSnapI...
service          System.ServiceProcess.ServiceController
Here you see all views defined in this XML file. The object types for which the views are defined are listed in the
second column. The Priority and StartTime views, which we just used, are on that list. However, the list just shows
views that use Table format. To get a complete list of all views, here is a more sophisticated example:
Remember there are many format.ps1xml-files containing formatting information. You'll only get a complete list of
all view definitions when you generate a list for all of these files.
Working with files and folders is traditionally one of the most popular areas for administrators. PowerShell eases
transition from classic shell commands with the help of a set of predefined "historic" aliases and functions. So, if
you are comfortable with commands like "dir" or "ls" to list folder content, you can still use them. Since they are
just aliases - references to PowerShell's own cmdlets - they do not necessarily work exactly the same anymore,
though.
In this chapter, you'll learn how to use PowerShell cmdlets to automate the most common file system tasks.
Topics Covered:
Getting to Know Your Tools
Accessing Files and Directories
o Listing Folder Contents
o Choosing the Right Parameters
o Getting File and Directory Items
o Passing Files to Cmdlets, Functions, or Scripts
o Selecting Files or Folders Only
Navigating the File System
o Relative and Absolute Paths
o Converting Relative Paths into Absolute Paths
o Pushing and Popping Directory Locations
o Special Directories and System Paths
o Constructing Paths
Working with Files and Directories
o Creating New Directories
o Creating New Files
o Reading the Contents of Text Files
o Processing Comma-Separated Lists
o Moving and Copying Files and Directories
o Renaming Files and Directories
o Bulk Renames
o Deleting Files and Directories
o Deleting Directory Contents
o Deleting Directories Plus Content
In addition, PowerShell provides a set of cmdlets that help dealing with path names. They all use the noun "Path",
and you can use these cmdlets to construct paths, split paths into parent and child, resolve paths or check whether
files or folders exist.
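A few hedged examples of these Path cmdlets:

PS> Join-Path C:\Windows notepad.exe
C:\Windows\notepad.exe
PS> Split-Path C:\Windows\notepad.exe
C:\Windows
PS> Split-Path C:\Windows\notepad.exe -Leaf
notepad.exe
PS> Test-Path C:\Windows
True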
Time to put Get-ChildItem to work: to get a list of all PowerShell script files stored in your profile folder, try this:
Most likely, this will not return anything because, typically, your own files are not stored in the root of your profile
folder. To find script files recursively (searching through all child folders), add the switch parameter -Recurse:
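A sketch of both calls:

PS> Get-ChildItem -Path $home -Filter *.ps1
PS> Get-ChildItem -Path $home -Filter *.ps1 -Recurse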
This may take much longer. If you still get no result, then maybe you did not create any PowerShell script file yet.
Try searching for other file types. This line will get all Microsoft Word documents in your profile:
PS> Get-ChildItem -Path $home -Filter *.doc* -Recurse
When searching folders recursively, you may run into situations where you do not have access to a particular subfolder. Get-ChildItem then raises an exception but continues its search. To hide such error messages, add the common parameter -ErrorAction SilentlyContinue, which is present in all cmdlets, or use its short form -ea 0:
The -Path parameter accepts multiple comma-separated values, so you could search multiple drives or folders in one
line. This would find all .log-files on drives C:\ and D:\ (and takes a long time because of the vast number of folders
it searches):
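For example (assuming a drive D:\ exists on your machine):

PS> Get-ChildItem -Path C:\, D:\ -Filter *.log -Recurse -ea 0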
If you just need the names of items in one directory, use the parameter -Name:
To list only the full path of files, use a pipeline and send the results to Select-Object to only select the content of the
FullName property:
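Sketches of both variants:

PS> Get-ChildItem -Path $env:windir -Name
PS> Get-ChildItem -Path $env:windir | Select-Object -ExpandProperty FullName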
You'll see some dramatic speed differences, though: -Filter works significantly faster than -Include.
You also see functional differences because -Include only works right when you also use the -Recurse parameter.
The reason for these differences is the way these parameters work. -Filter is implemented by the underlying drive
provider, so it is retrieving only those files and folders that match the criteria in the first place. That's why -Filter is
fast and efficient. To be able to use -Filter, though, the drive provider must support it.
-Include on the contrary is implemented by PowerShell and thus is independent of provider implementations. It
works on all drives, no matter which provider is implementing that drive. The provider returns all items, and only
then does -Include filter out the items you want. This is slower but universal. -Filter currently only works for file
system drives. If you wanted to select items on Registry drives like HKLM:\ or HKCU:\, you must use -Include.
-Include has some advantages, too. It understands advanced wildcards and supports multiple search criteria:
# -Filter looks for all files that begin with "[A-F]" and finds none:
PS> Get-ChildItem $home -Filter [a-f]*.ps1 -Recurse
# -Include understands advanced wildcards and looks for files
# that begin with a-f and end with .ps1:
PS> Get-ChildItem $home -Include [a-f]*.ps1 -Recurse
The counterpart to -Include is -Exclude. Use -Exclude if you would like to suppress certain files. Unlike -Filter, the -Include and -Exclude parameters accept arrays, which enable you to get a list of all image files in your profile or the windows folder:
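A sketch of such an array-based search (the extensions are chosen for illustration):

PS> Get-ChildItem $home, $env:windir -Recurse -Include *.bmp, *.jpg, *.png -ea 0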
If you want to filter results returned by Get-ChildItem based on criteria other than file name, use Where-Object
(Chapter 5).
For example, to find the largest files in your profile, use this code - it finds all files larger than 100MB:
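The 100MB filter just described could look like this:

PS> Get-ChildItem $home -Recurse -ea 0 | Where-Object { $_.Length -gt 100MB }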
You can also use Measure-Object to count the total folder size or the size of selected files. This line will count the
total size of all .log-files in your windows folder:
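A sketch of that calculation:

PS> Get-ChildItem $env:windir -Filter *.log | Measure-Object -Property Length -Sum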
PSPath            : Microsoft.PowerShell.Core\FileSystem::C:\Windows\explorer.exe
PSParentPath : Microsoft.PowerShell.Core\FileSystem::C:\Windows
PSChildName : explorer.exe
PSDrive : C
PSProvider : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
VersionInfo : File: C:\Windows\explorer.exe
InternalName: explorer
OriginalFilename: EXPLORER.EXE.MUI
FileVersion: 6.1.7600.16385 (win7_rtm.090713-1255)
FileDescription: Windows Explorer
Product: Microsoft® Windows® Operating System
ProductVersion: 6.1.7600.16385
Debug: False
Patched: False
PreRelease: False
PrivateBuild: False
SpecialBuild: False
Language: English (United States)
BaseName : explorer
Mode : -a---
Name : explorer.exe
Length : 2871808
DirectoryName : C:\Windows
Directory : C:\Windows
IsReadOnly : False
Exists : True
FullName : C:\Windows\explorer.exe
Extension : .exe
CreationTime : 27.04.2011 17:02:33
CreationTimeUtc : 27.04.2011 15:02:33
LastAccessTime : 27.04.2011 17:02:33
LastAccessTimeUtc : 27.04.2011 15:02:33
LastWriteTime : 25.02.2011 07:19:30
LastWriteTimeUtc : 25.02.2011 06:19:30
Attributes : Archive
You can even change item properties provided the file or folder is not in use, you have the proper permissions, and
the property allows write access. Take a look at this piece of code:
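A sketch of such a script (the file name is made up; adjust as needed):

# Create a test file in the temporary folder:
$path = "$env:temp\testfile.txt"
"Hello" | Out-File $path
# Read the creation time, then change it:
$file = Get-Item $path
$file.CreationTime
$file.CreationTime = '11/4/1812'
# Open the temporary folder in Explorer to inspect the file:
explorer.exe $env:temp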
This will create a test file in your temporary folder, read its creation time, and then change the creation time to November 4, 1812. Finally, Explorer opens the temporary folder so you can right-click the test file and open its properties to verify the new creation time. Amazing, isn't it?
Get-ChildItem first retrieved the files and then handed them over to Copy-Item which copied the files to a new
destination.
You can also combine the results of several separate Get-ChildItem commands. In the following example, two
separate Get-ChildItem commands generate two separate file listings, which PowerShell combines into a total list
and sends on for further processing in the pipeline. The example takes all the DLL files from the Windows system
directory and all program installation directories, and then returns a list with the name, version, and description of
DLL files:
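One way to sketch this, combining two separate Get-ChildItem calls in a subexpression (the paths and calculated property names are chosen for illustration):

$(Get-ChildItem $env:windir\system32\*.dll
  Get-ChildItem $env:programfiles -Filter *.dll -Recurse -ea 0) |
  Select-Object Name,
    @{ n='Version';     e={ $_.VersionInfo.ProductVersion } },
    @{ n='Description'; e={ $_.VersionInfo.FileDescription } }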
Where-Object can filter files according to other criteria as well. For example, use the following pipeline filter if
you'd like to locate only files that were created after May 12, 2011:
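A sketch of such a date filter:

PS> Get-ChildItem $home | Where-Object { $_.CreationTime -gt [datetime]'5/12/2011' }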
PS> Get-Location
Path
----
C:\Users\Tobias
If you want to navigate to another location in the file system, use Set-Location or the Cd alias:
Relative path specifications are useful, for example, when you want to use library scripts that are located in the same
directory as your work script. Your work script will then be able to locate library scripts under relative paths—no
matter what the directory is called. Absolute paths are always unique and are independent of your current directory.
Be careful though: Resolve-Path only works for files that actually exist. If there is no file in your current directory
that's called test.txt, Resolve-Path errors out.
Resolve-Path can also have more than one result if the path that you specify includes wildcard characters. The
following call will retrieve the names of all ps1xml files in the PowerShell home directory:
So, to perform a task that forces you to temporarily leave your current directory, first type Push-Location to store your current location. Then complete your task, and use Pop-Location to return to where you were before.
Cd $home will always take you back to your home directory. Also, both Push-Location and Pop-Location support the -Stack parameter. This enables you to create as many stacks as you want, such as one for each task. Push-Location -Stack job1 puts the current directory not on the standard stack, but on the stack called "job1"; you can use Pop-Location -Stack job1 to restore the initial directory from this stack.
That's why it is important to understand where you can find the exact location of these folders. Some are covered by
the Windows environment variables, and others can be retrieved via .NET methods.
Table 15.3: Important Windows directories that are stored in environment variables
Environment variables cover only the most basic system paths. If you'd like to put a file directly on a user's
Desktop, you'll need the path to the Desktop which is missing in the list of environment variables. The
GetFolderPath() method of the System.Environment class of the .NET framework (Chapter 6) can help. The
following code illustrates how you can put a link on the Desktop.
PS> [Environment]::GetFolderPath("Desktop")
C:\Users\Tobias Weltner\Desktop
To get a list of system folders known by GetFolderPath(), use this code snippet:
And this would get you a list of all system folders covered plus their actual paths:
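Both snippets could be sketched like this:

# List the names of all known system folders:
[System.Enum]::GetNames([System.Environment+SpecialFolder])
# List each folder name together with its actual path:
[System.Enum]::GetNames([System.Environment+SpecialFolder]) |
  ForEach-Object { '{0,-20} {1}' -f $_, [Environment]::GetFolderPath($_) }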
You can use this to create a pretty useful function that maps drives to all important file locations. Here it is:
function Map-Profiles {
    [System.Environment+SpecialFolder] |
        Get-Member -Static -MemberType Property |
        ForEach-Object {
            New-PSDrive -Name $_.Name -PSProvider FileSystem `
                -Root ([Environment]::GetFolderPath($_.Name)) -Scope Global
        }
}
Map-Profiles
When you run this function, it adds a bunch of new drives. You can now easily take a look at your browser cookies -
or even get rid of them:
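For example, via the new Cookies: drive (this assumes Map-Profiles has been run; -WhatIf previews the deletion without actually deleting anything):

Dir Cookies:
Dir Cookies: | Remove-Item -WhatIf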
And if you'd like to see all the drives accessible to you, run this command:
PS> Get-PSDrive
Note that all custom drives are added only for your current PowerShell session. If you want to use them daily, make
sure you add Map-Profiles and its call to your profile script:
Constructing Paths
Path names are plain-text, so you can set them up any way you like. To put a file onto your desktop, you could add
the path segments together using string operations:
A more robust way is using Join-Path because it keeps track of the backslashes:
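Both approaches side by side:

# String operations:
$path = [Environment]::GetFolderPath("Desktop") + "\test.txt"
# Join-Path adds the backslash for you:
$path = Join-Path ([Environment]::GetFolderPath("Desktop")) test.txt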
PS> $path = [System.IO.Path]::Combine([Environment]::GetFolderPath("Desktop"), "test.txt")
PS> $path
C:\Users\Tobias Weltner\Desktop\test.txt
The System.IO.Path class includes a number of additional useful methods that you can use to put together paths or extract information from paths. Just prepend [System.IO.Path]:: to the methods listed in Table 15.4, for example:
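Hedged examples of three of these methods:

PS> [System.IO.Path]::GetFileNameWithoutExtension("c:\test\file.txt")
file
PS> [System.IO.Path]::GetExtension("c:\test\file.txt")
.txt
PS> [System.IO.Path]::ChangeExtension("c:\test\file.txt", ".ps1")
c:\test\file.ps1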
You can also create several sub-directories in one step as PowerShell automatically creates all the directories that
don't exist yet in the specified path:
PS> md test\subdirectory\somethingelse
If you add the -Force parameter, creating new files with New-Item becomes even more interesting - and a bit dangerous, too. The -Force parameter will overwrite any existing file, but it will also make sure that the folder in which the file is to be created exists. So, New-Item can create several folders plus a file if you use -Force.
Another way to create files is to use old-fashioned redirection via the ">" and ">>" operators, or the cmdlets Set-Content and Out-File.
As it turns out, redirection and Out-File work very similarly: when PowerShell converts pipeline results, file contents look just like they would if you output the information in the console. Set-Content works differently: it does not use PowerShell's sophisticated ETS (Extended Type System) to convert objects into text. Instead, it converts objects into text by using their own private ToString() method - which provides much less information. That is because Set-Content is not designed to convert objects into text. Instead, this cmdlet is designed to write text to a file.
You can use all of these cmdlets to create text files. For example, ConvertTo-HTML produces HTML but does not
write it to a file. By sending that information to Out-File, you can create HTML- or HTA-files and display them.
If you want to control the "columns" (object properties) that are converted into HTML, simply use Select-Object
(Chapter 5):
Get-ChildItem | Select-Object name, length, LastWriteTime | ConvertTo-HTML |
Out-File report.htm
.\report.htm
If you would rather export the result as a comma-separated list, use the Export-Csv cmdlet instead of ConvertTo-
HTML | Out-File. Don't forget to use its -UseCulture parameter to automatically use the delimiter that is right for
your culture.
To add content to an existing file, again you can use various methods. Either use the appending redirection operator
">>", or use Add-Content. You can also pipe results to Out-File and use its -Append parameter to make sure it does
not overwrite existing content.
There is one thing you should keep in mind, though: do not mix these methods; stick to one. The reason is that they
all use different default encodings, and when you mix encodings, the result may look very strange.
All three cmdlets support the -Encoding parameter that you can use to manually pick an encoding. In contrast, the
old redirection operators have no way of specifying encoding which is why you should avoid using them.
There is a shortcut that uses variable notation if you know the absolute path of the file:
PS> ${c:\windows\windowsupdate.log}
However, this shortcut usually isn't very practical because it doesn't allow any variables inside curly brackets. You
would have to hardcode the exact path to the file into your scripts.
Get-Content reads the contents of a file line by line and passes every line of text on through the pipeline. You can
add Select-Object if you want to read only the first 10 lines of a very long file:
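A sketch of this technique (the log file path is just an example):

```powershell
# Read only the first 10 lines of a long log file
Get-Content $env:windir\WindowsUpdate.log | Select-Object -First 10
```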
You can also use -Wait with Get-Content to turn the cmdlet into monitoring mode: once it has read the entire file, it
keeps monitoring it, and when new content is appended to the file, that content is immediately processed and returned
by Get-Content. This is similar to "tailing" a file on Unix.
Finally, you can use Select-String to filter information based on keywords and regular expressions. The next line
gets only those lines from the windowsupdate.log file that contain the phrase " successfully installed ":
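One way to sketch this filter (the path is an assumption; adjust it to your system):

```powershell
# Keep only the lines that contain the phrase
Get-Content $env:windir\WindowsUpdate.log |
  Select-String "successfully installed"
```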
Note that Select-String will change the object type to a so-called MatchInfo object. That's why when you forward the
filtered information to a file, the result lines are cut into pieces:
To turn the results delivered by Select-String into real text, make sure you pick the property Line from the MatchInfo
object which holds the text line that matched your keyword:
To successfully import CSV files, make sure to use the parameter -UseCulture or -Delimiter if the list is not comma-
separated. Depending on your culture, Excel may have picked a different delimiter than the comma, and -
UseCulture automatically uses the delimiter that Excel used.
Use Get-ChildItem to copy recursively. Let it find the PowerShell scripts for you, and then pass the result on to
Copy-Item. Before you run this line, be aware that there may be hundreds of scripts; unless you want to completely
clutter your desktop, first create a folder on your desktop and then copy the files into that folder.
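A sketch of that approach (the folder name Scripts and the search root are assumptions):

```powershell
# Create a target folder first, then collect and copy all scripts into it
$target = Join-Path ([Environment]::GetFolderPath("Desktop")) "Scripts"
md $target | Out-Null
Get-ChildItem $home -Filter *.ps1 -Recurse -ErrorAction SilentlyContinue |
  Copy-Item -Destination $target
```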
Bulk Renames
Because Rename-Item can be used as a building block in the pipeline, it provides simple solutions to complex tasks.
For example, if you wanted to remove the term "-temporary" from a folder and all its sub-directories, as well as all
the included files, this instruction will suffice:
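One way to sketch this (it pipes everything below the current folder into a rename):

```powershell
# Rename every file and folder, stripping the term from its name
Get-ChildItem -Recurse |
  ForEach-Object { Rename-Item $_.FullName ($_.Name -replace '-temporary','') }
```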
This line would rename all files and folders, even if the term "-temporary" you're looking for isn't in the file name
at all. So, to speed things up and avoid errors, use Where-Object to focus only on files that carry the keyword in
their name:
Rename-Item even accepts a script block, so you could use this code as well:
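Such a script-block variant might look like this sketch, which also pre-filters with Where-Object:

```powershell
# Rename-Item evaluates the script block once per incoming item
Get-ChildItem -Recurse |
  Where-Object { $_.Name -like '*-temporary*' } |
  Rename-Item -NewName { $_.Name -replace '-temporary','' }
```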
When you look at the different code examples, note that ForEach-Object is needed only when a cmdlet cannot
handle the input from the upstream cmdlet directly. In these situations, use ForEach-Object to manually feed the
incoming information to the appropriate cmdlet parameter.
Most file system-related cmdlets are designed to work together. That's why Rename-Item knows how to interpret the
output from Get-ChildItem. It is "Pipeline-aware" and does not need to be wrapped in ForEach-Object.
Because deleting files and folders is irreversible, be careful. You can always simulate the operation by using -WhatIf
to see what happens - which is something you should do often when you work with wildcards because they may
affect many more files and folders than you initially thought.
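For example, a simulated cleanup of your Recent folder could look like this sketch; -WhatIf only reports what would
be deleted:

```powershell
# Simulate deleting everything inside the Recent folder
Get-ChildItem ([Environment]::GetFolderPath("Recent")) |
  Remove-Item -WhatIf
```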
This however would also delete subfolders contained in your Recent folder because Get-ChildItem lists both files
and folders.
If you are convinced that your command is correct, and that it will delete the correct files, repeat the statement
without -WhatIf. Or, you could use -Confirm instead to manually approve or deny each delete operation.
# Delete directory:
PS> del testdirectory
Confirm
The item at "C:\Users\Tobias Weltner\Sources\docs\testdirectory" has children
and the Recurse
parameter was not specified. If you continue, all children will be removed
with the item.
Are you sure you want to continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help
(default is "Y"):
Thanks to PowerShell's universal "provider" concept, you can navigate the Windows Registry just as you would the
file system. In this chapter, you will learn how to read and write Registry keys and Registry values.
Using Providers
o Available Providers
o Creating Drives
o Searching for Keys
o Reading One Registry Value
o Reading Multiple Registry Values
o Reading Multiple Keys and Values
o Creating Registry Keys
o Deleting Registry Keys
o Creating Values
o Securing Registry Keys
o Taking Ownership
o Setting New Access Permissions
o Removing an Access Rule
o Controlling Access to Sub-Keys
o Revealing Inheritance
o Controlling Your Own Inheritance
The Registry stores many crucial Windows settings. That's why it's so cool to read and sometimes change
information in the Windows Registry: you can manage a lot of configuration settings and sometimes tweak
Windows in ways that are not available via the user interface.
However, if you mess things up - change the wrong values or delete important settings - you may well
permanently damage your installation. So, be very careful, and don't change anything that you do not know well.
Using Providers
To access the Windows Registry, there are no special cmdlets. Instead, PowerShell ships with a so-called provider
named "Registry". A provider enables a special set of cmdlets to access data stores. You probably know these
cmdlets already: they are used to manage content on drives and all have the keyword "item" in their noun part:
Thanks to the "Registry" provider, all of these cmdlets (and their aliases) can also work with the Registry. So if you
wanted to list the keys of HKEY_LOCAL_MACHINE\Software, this is how you'd do it:
Dir HKLM:\Software
Available Providers
Get-PSProvider gets a list of all available providers. Your list can easily be longer than in the following example.
Many PowerShell extensions add additional providers. For example, the ActiveDirectory module that ships with
Windows Server 2008 R2 (and the RSAT tools for Windows 7) adds a provider for Active Directory. Microsoft
SQL Server (starting with SQL Server 2008) comes with an SQLServer provider.
Get-PSProvider
Name Capabilities Drives
---- ------------ ------
Alias ShouldProcess {Alias}
Environment ShouldProcess {Env}
FileSystem filter, ShouldProcess {C, E, S, D}
function ShouldProcess {function}
Registry ShouldProcess {HKLM, HKCU}
Variable ShouldProcess {Variable}
Certificate ShouldProcess {cert}
What's interesting here is the "Drives" column, which lists the drives that are managed by the respective provider. As
you see, the Registry provider manages the drives HKLM: (for the registry root HKEY_LOCAL_MACHINE) and
HKCU: (for the registry root HKEY_CURRENT_USER). These drives work just like traditional file system drives.
Check this out:
Cd HKCU:
Dir
Hive: Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER
SKC VC Name Property
--- -- ---- --------
2 0 AppEvents {}
7 1 Console {CurrentPage}
15 0 Control Panel {}
0 2 Environment {TEMP, TMP}
4 0 EUDC {}
1 6 Identities {Identity Ordinal, Migrated7, Last ...
3 0 Keyboard Layout {}
0 0 Network {}
4 0 Printers {}
38 1 Software {(default)}
2 0 System {}
0 1 SessionInformation {ProgramCount}
1 8 Volatile Environment {LOGONSERVER, USERDOMAIN, USERNAME,...
You can navigate like in the file system and dive deeper into subfolders (which here really are registry keys).
That's a bit strange because when you open the Registry Editor regedit.exe, you'll see that there are more than just
two root hives. If you wanted to access another hive, let's say HKEY_USERS, you'd have to add a new drive like
this:
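A sketch of adding such a drive with New-PSDrive:

```powershell
# Map the HKEY_USERS hive to a new drive named HKU:
New-PSDrive -Name HKU -PSProvider Registry -Root HKEY_USERS
```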
You may not have access to all keys due to security settings, but your new drive HKU: works fine. Using New-
PSDrive, you now can access all parts of the Windows Registry. To remove the drive, use Remove-PSDrive (which
only works if HKU: is not the current drive in your PowerShell console):
Remove-PSDrive HKU
You can of course create additional drives that point to specific registry keys that you may need to access often.
Note that PowerShell drives are only visible inside the session in which you defined them. Once you close PowerShell,
they will automatically get removed again. To keep additional drives permanently, add the New-PSDrive statements to
your profile script so they get created automatically once you launch PowerShell.
Dir HKLM:\Software
Dir Registry::HKEY_LOCAL_MACHINE\Software
Dir Registry::HKEY_USERS
Dir Registry::HKEY_CLASSES_ROOT\.ps1
With this technique, you can even list all the Registry hives:
Dir Registry::
The registry provider doesn't support filters, though, so you cannot use the parameter -Filter when you search the
registry. Instead, use -Include and -Exclude. For example, if you wanted to find all Registry keys that include the
word "PowerShell", you could search using:
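A sketch of such a search across both user and machine hives:

```powershell
# Recursively search HKCU: and HKLM: for key names containing "PowerShell"
Dir HKCU:\, HKLM:\ -Recurse -Include *PowerShell* -ErrorAction SilentlyContinue
```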
Note that this example searches both HKCU: and HKLM:. The error action is set to SilentlyContinue because in the
Registry, you will run into keys that are access-protected and would raise ugly "Access Denied" errors. All errors
are suppressed that way.
If you want to find all keys that have a value with the keyword in its data, try this:
Unfortunately, the Registry provider adds a number of additional properties so you don't get back the value alone.
Add another Select-Object to really get back only the content of the value you are after:
That again is just a minor adjustment to the previous code because Get-ItemProperty supports wildcards. Have a
look:
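A sketch of the wildcard query that produces a software inventory like the one below (the Uninstall key is the
standard location where installers register themselves):

```powershell
# Read the values of every subkey below Uninstall in one call
Get-ItemProperty 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\*' |
  Select-Object DisplayName, DisplayVersion, UninstallString
```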
                            0.8.2.232
Microsoft IntelliPoint 8.1  8.15.406.0      msiexec.exe /I {3ED4AD...
Microsoft Security Esse...  2.1.1116.0      C:\Program Files\Micro...
NVIDIA Drivers              1.9             C:\Windows\system32\nv...
WinImage                                    "C:\Program Files\WinI...
Microsoft Antimalware       3.0.8402.2      MsiExec.exe /X{05BFB06...
Windows XP Mode             1.3.7600.16422  MsiExec.exe /X{1374CC6...
Windows Home Server-Con...  6.0.3436.0      MsiExec.exe /I{21E4979...
Idera PowerShellPlus Pr...  4.0.2703.2      MsiExec.exe /I{7a71c8a...
Intel(R) PROSet/Wireles...  13.01.1000
(...)
Voilà, you get a list of installed software. Some of the lines are empty, though. This occurs when a key does not
have the value you are looking for.
Hive: Registry::HKEY_CURRENT_USER\Software
Name Property
---- --------
NewKey1
PS> md HKCU:\Software\NewKey2
Hive: Registry::HKEY_CURRENT_USER\Software
Name Property
---- --------
NewKey2
If a key name includes blank characters, enclose the path in quotation marks. The parent key has to exist.
To create a new key with a default value, use New-Item and specify the value and its data type:
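A sketch of this with New-Item; -Value sets the key's default value (which is stored as a REG_SZ string):

```powershell
# Create a key whose default value is pre-filled
New-Item HKCU:\Software\NewKey3 -Value "Default Value Text"
```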
Hive: Registry::HKEY_CURRENT_USER\Software
Name Property
---- --------
NewKey3 (default) : Default Value Text
This process needs to be manually confirmed if the key you are about to remove contains other keys:
Del HKCU:\Software\KeyWithSubKeys
Confirm
The item at "HKCU:\Software\KeyWithSubKeys" has children and the Recurse
parameter was not specified. If you continue, all children will be removed
with the item. Are you sure you want to continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help
(default is "Y"):
Use the -Recurse parameter to delete such keys without manual confirmation:
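For example:

```powershell
# Delete the key and all of its subkeys without a confirmation prompt
Del HKCU:\Software\KeyWithSubKeys -Recurse
```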
To add new values to a Registry key, either use New-ItemProperty or Set-ItemProperty. New-ItemProperty cannot
overwrite an existing value, and it returns the newly created value in its object form. Set-ItemProperty is more
forgiving: if the value does not yet exist, it is created; otherwise, it is changed. Set-ItemProperty does not return
any object.
Here are some lines of code that first create a Registry key and then add a number of values with different data
types:
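A sketch that would produce results like the listing below (the value names and sample data mirror that output; the
-Type argument is a dynamic parameter of the Registry provider):

```powershell
# Create a test key and fill it with values of various registry data types
md HKCU:\Software\TestKey4 | Out-Null
Set-ItemProperty HKCU:\Software\TestKey4 -Name Name  -Value "Smith"
Set-ItemProperty HKCU:\Software\TestKey4 -Name ID    -Value 12 -Type DWord
Set-ItemProperty HKCU:\Software\TestKey4 -Name Path  -Value "%WINDIR%" -Type ExpandString
Set-ItemProperty HKCU:\Software\TestKey4 -Name Notes -Value "First Note","Second Note" -Type MultiString
Set-ItemProperty HKCU:\Software\TestKey4 -Name DigitalInfo -Value ([byte[]](4,8,12,200,90)) -Type Binary
Get-ItemProperty HKCU:\Software\TestKey4
```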
Name : Smith
ID : 12
Path : C:\Windows
Notes : {First Note, Second Note}
DigitalInfo : {4, 8, 12, 200...}
PSPath : Registry::HKEY_CURRENT_USER\Software\TestKey4
PSParentPath : Registry::HKEY_CURRENT_USER\Software
PSChildName : TestKey4
PSDrive : HKCU
PSProvider : Registry
If you want to set a key's default value, use '(default)' as the value name.
Use Remove-ItemProperty to remove a value. This line deletes the Name value that you created in the previous
example:
Remove-ItemProperty HKCU:\Software\Testkey4 Name
Clear-ItemProperty clears the content of a value, but not the value itself.
Be sure to delete your test key once you are done playing:
del HKCU:\Software\Testkey4 -Recurse
Get-Acl HKCU:\Software\Testkey
Path Owner Access
---- ----- ------
Microsoft.PowerShell.Core\Registr... TobiasWeltne-PC\Tobias Weltner
TobiasWeltne-PC\Tobias Weltner A...
To apply new security settings to a key, you need to know the different access rights that can be assigned to a key.
Here is how you get a list of these rights:
PS> [System.Enum]::GetNames([System.Security.AccessControl.RegistryRights])
QueryValues
SetValue
CreateSubKey
EnumerateSubKeys
Notify
CreateLink
Delete
ReadPermissions
WriteKey
ExecuteKey
ReadKey
ChangePermissions
TakeOwnership
FullControl
Taking Ownership
Always make sure that you are the "owner" of the key before modifying Registry key access permissions. Only
owners can recover from lock-out situations, so if you set permissions wrong, you may not be able to undo the
changes unless you are the owner of the key.
This is how to take ownership of a Registry key (provided your current access permissions allow you to take
ownership. You may want to run these examples in a PowerShell console with full privileges):
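A sketch of both steps: taking ownership, and then preparing the access rule that the next lines add. The deny rule
for Everyone mirrors the restriction discussed below; the exact right (CreateSubKey) is an assumption:

```powershell
# Take ownership of the key by writing your own SID into the ACL
$acl = Get-Acl HKCU:\Software\Testkey
$acl.SetOwner([System.Security.Principal.WindowsIdentity]::GetCurrent().User)
Set-Acl HKCU:\Software\Testkey $acl

# Prepare a deny rule that blocks subkey creation for Everyone
$rule = New-Object System.Security.AccessControl.RegistryAccessRule("Everyone", "CreateSubKey", "Deny")
$acl = Get-Acl HKCU:\Software\Testkey
```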
$acl.AddAccessRule($rule)
Set-Acl HKCU:\Software\Testkey $acl
The modifications take effect immediately. Try creating new subkeys in the Registry editor or from within
PowerShell, and you'll get an error message:
md HKCU:\Software\Testkey\subkey
New-Item : Requested Registry access is not allowed.
At line:1 char:34
+ param([string[]]$paths); New-Item <<<< -type directory -path $paths
Why does the restriction apply to you as an administrator? Aren't you supposed to have full access? No:
restrictions always have priority over permissions, and because everyone is a member of the Everyone group, the
restriction applies to you as well. This illustrates that you should be extremely careful applying restrictions. A better
approach is to assign permissions only.
$acl.RemoveAccessRule($rule)
Set-Acl HKCU:\Software\Testkey $acl -Force
However, removing your access rule may not be as straightforward because you have effectively locked yourself
out. Since you no longer have modification rights to the key, you are no longer allowed to modify the key's security
settings either.
You can overrule this only if you take ownership of the key: open the Registry editor, navigate to the key, right-click
it, select Permissions, and in the security dialog box manually remove the entry for Everyone.
You've just seen how relatively easy it is to lock yourself out. Be careful with restriction rules.
md HKCU:\Software\Testkey2
$acl = Get-Acl HKCU:\Software\Testkey2
Note that in this case the new rules were not entered by using AddAccessRule() but by ResetAccessRule(). This
results in the removal of all existing permissions for the respective users. Still, the result isn't right because regular
users could still create subkeys and write values:
md hkcu:\software\Testkey2\Subkey
Hive:
Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\software\Testkey2
Revealing Inheritance
Look at the current permissions of the key to figure out why your permissions did not work the way you planned:
The key includes more permissions than what you assigned to it. It gets these additional permissions by inheritance
from parent keys. If you want to turn off inheritance, use SetAccessRuleProtection():
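A sketch of turning off inheritance; the first argument protects the key from inheritance, and the second decides
whether inherited rules are kept as explicit copies ($false discards them):

```powershell
$acl = Get-Acl HKCU:\Software\Testkey2
# Protect the key and discard all previously inherited rules
$acl.SetAccessRuleProtection($true, $false)
Set-Acl HKCU:\Software\Testkey2 $acl
(Get-Acl HKCU:\Software\Testkey2).Access
```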
When you look at the permissions again, the key now contains only the permissions you explicitly set. It no
longer inherits any permissions from parent keys:
RegistryRights AccessControlType IdentityReference      IsInherited InheritanceFlags PropagationFlags
-------------- ----------------- -----------------      ----------- ---------------- ----------------
ReadKey        Allow             Everyone               False       None             None
FullControl    Allow             BUILTIN\Administrators False       None             None
del HKCU:\Software\Testkey2
md HKCU:\Software\Testkey2
In your daily work as an administrator, you will probably often deal with applications (processes), services, and
event logs, so let's take some of the knowledge you gained from the previous chapters and play with it. The examples
and topics covered in this chapter are meant to give you an idea of what you can do; by no means are they a
complete list. They will provide you with a great starting point, though.
PS> Get-Process
This will list all running processes on the local machine, not just yours. So if other people are logged onto your box,
their processes may also show up in that list. However, unless you have local Administrator privileges, you can only
access limited properties of processes you did not launch yourself.
That's why Get-Process throws a number of exceptions when you try to list the executable files of all running
processes. Exceptions occur either when there is no executable for a given process (namely System and Idle), or
when you do not have permission to see them:
To hide error messages and focus only on the information you are able to retrieve, use the common parameter -
ErrorAction SilentlyContinue which is available in every cmdlet - or its short form -ea 0:
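For example, a sketch that reads the version information of each process executable while hiding the errors raised
by protected processes:

```powershell
# -FileVersionInfo fails for processes you may not access; -ea 0 hides those errors
Get-Process -FileVersionInfo -ErrorAction SilentlyContinue
```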
Process objects returned from Get-Process contain a lot more information than you can see. Pipe the result
to Select-Object and have it display all object properties:
You can then examine the object properties available, and put together your own reports by picking the properties
that you need:
You can use the standard pipeline cmdlets to take care of that. Use Where-Object to filter out processes that do not
meet your requirements. For example, this line will get you only processes that do have an application window:
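A sketch of that filter: only processes with a window have a non-empty MainWindowTitle:

```powershell
# Keep only processes that own a visible application window
Get-Process | Where-Object { $_.MainWindowTitle -ne "" }
```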
Note that you can also retrieve information about processes by using WMI:
WMI will get you even more details about running processes.
Both Get-Process and Get-WmiObject support the parameter -ComputerName, so you can use both to retrieve
processes remotely from other machines. However, only Get-WmiObject also supports the parameter -Credential so
you can authenticate. Get-Process always uses your current identity, and unless you are Domain Administrator or
otherwise have local Administrator privileges at the target machine, you will get an Access Denied error.
Note that even with Get-Process, you can authenticate. Establish an IPC network connection to the target machine,
and use this connection for authentication. Here is an example:
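A sketch of this trick; the machine and account names are placeholders:

```powershell
# Authenticate against the target machine via its IPC$ share, then query it
net use \\TargetMachine\IPC$ /USER:TargetMachine\Administrator
Get-Process -ComputerName TargetMachine
```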
Here are some more examples of using pipeline cmdlets to refine the results returned by Get-Process. Can you
decipher what these lines would do?
PS> notepad
PS> regedit
PS> ipconfig
This works great, but eventually you'll run into situations where you cannot seem to launch an application.
PowerShell might complain that it would not recognize the application name although you know for sure that it
exists.
When this happens, you need to specify the absolute or relative path name to the application file. That can become
tricky because in order to escape spaces in path names, you have to quote them, and in order to run quoted text (and
not echo it back), you need to prepend it with an ampersand. The ampersand tells PowerShell to treat the text as if it
was a command you entered.
So if you wanted to run Internet Explorer from its standard location, this is the line that would do the job:
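A sketch of that line; note the "&" in front of the quoted path:

```powershell
# Quote the path because it contains spaces, and prepend & to execute it
& "$env:programfiles\Internet Explorer\iexplore.exe"
```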
When you run applications from within PowerShell, these are the rules to know:
Environment variable $env:path: All folders listed in $env:path are special. Applications stored inside
these folders can be launched by name only. You do not need to specify the complete or relative path.
That's the reason why you can simply enter notepad and press ENTER to launch the Windows Editor, or
run commands like ping or ipconfig.
Escaping Spaces: If the path name contains spaces, the entire path name needs to be quoted. Once you
quote a path, though, it becomes a string (text), so when you press ENTER, PowerShell happily echoes the
text back to you but won't start the application. Whenever you quote paths, you need to prepend the string
with "&" so PowerShell knows that you want to launch something.
Synchronous and asynchronous execution: when you run a console-based application such as
ipconfig.exe or netstat.exe, it shares the console with PowerShell, so its output is displayed in the
PowerShell console. That's why PowerShell pauses until console-based applications finish. Window-
based applications such as notepad.exe or regedit.exe use their own windows for output. Here, PowerShell
continues immediately and won't wait for the application to complete.
Using Start-Process
Whenever you need to launch a new process and want more control, use Start-Process. This cmdlet has a number of
benefits over launching applications directly. First of all, it is a bit smarter and knows where a lot of applications are
stored. It can for example find iexplore.exe without the need for a path:
PS> Start-Process iexplore.exe
Second, Start-Process supports a number of parameters that allow you to control window size, synchronous or
asynchronous execution, or even the user context an application runs under. For example, if you wanted
PowerShell to wait for a window-based application so a script can execute applications in a strict order, use the
-Wait parameter:
You'll notice that PowerShell now waits for the Notepad to close again before it accepts new commands.
Start-Process has just one limitation: it cannot return the results of console-based applications back to you. Check
this out:
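Running the console application directly captures its output:

```powershell
# Capture the console output of ipconfig in a variable
$result = ipconfig
$result
```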
This will store the result of ipconfig in a variable. The same done with Start-Process yields nothing:
That's because Start-Process runs every command in its own new console window, which is visible for a
split second if you look carefully. But even if you ask Start-Process to run the command in the current console
(using -NoNewWindow), results are never returned:
Instead, they are always output to the console. So if you want to read information from console-based applications,
do not use Start-Process.
Stopping Processes
If you must kill a process immediately, use Stop-Process and specify either the process ID, or use the parameter -
Name to specify the process name. This would close all instances of the Notepad:
Stopping processes this way shouldn't be done on a regular basis: since the application is immediately terminated, it
has no time to save unsaved results (which might result in data loss), and it cannot properly clean up (which might
result in orphaned temporary files and inaccurate open DLL counters). Use it only if a process won't respond
otherwise. Use -WhatIf to simulate. Use -Confirm when you want to have each step confirmed.
To close a process nicely, you can close its main window (which is the automation way of closing the application
window by a mouse click). Here is a sample that closes all instances of notepad:
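A sketch of this graceful shutdown, using the Process object's CloseMainWindow() method:

```powershell
# Politely ask every Notepad instance to close its main window
Get-Process notepad -ErrorAction SilentlyContinue |
  ForEach-Object { $_.CloseMainWindow() }
```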
Managing Services
Services are basically processes, too. They are just executed automatically and in the background and do not
necessarily require a user logon. Services provide functionality usually not linked to any individual user.
Cmdlet Description
Get-Service Lists services
New-Service Registers a service
Restart-Service Stops a service and then restarts it, for example to allow modified settings to take effect
Resume-Service Resumes a suspended (paused) service
Set-Service Modifies settings of a service
Start-Service Starts a service
Stop-Service Stops a service
Suspend-Service Suspends a service
Examining Services
Use Get-Service to list all services and check their basic status.
PS> Get-Service
You can also check an individual service and find out whether it is running or not:
If a service has dependent services, it cannot be stopped unless you also specify -Force.
Note that you can use WMI to find out more information about services, and also manage services on remote
machines:
Since WMI includes more information than Get-Service, you could filter for all services set to start automatically
that are not running. By examining the service's ExitCode property, you'd find services that did initialization tasks and
finished OK (exit code 0) or that crashed (exit code other than 0):
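A sketch of such a query, using a server-side WQL filter:

```powershell
# Auto-start services that are currently not running, with their exit codes
Get-WmiObject Win32_Service -Filter 'StartMode="Auto" and Started=False' |
  Select-Object DisplayName, ExitCode
```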
DisplayName ExitCode
----------- --------
Microsoft .NET Framework NGEN v4.0.30319_X86 0
Microsoft .NET Framework NGEN v4.0.30319_X64 0
Google Update Service (gupdate) 0
Net Driver HPZ12 0
Pml Driver HPZ12 0
Software Protection 0
Windows Image Acquisition (WIA) 0
To list the content of one of the listed event logs, use -LogName instead. This lists all events from the System event
log:
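For example:

```powershell
# Dump every entry of the System event log
Get-EventLog -LogName System
```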
Dumping all events is not a good idea, though, because this is just too much information. In order to filter the
information and focus on what you want to know, take a look at the column headers. If you want to filter by the
content of a specific column, look for a parameter that matches the column name.
This line gets you the latest 20 errors from the System event log:
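A sketch of that query:

```powershell
# The 20 most recent error entries in the System event log
Get-EventLog -LogName System -EntryType Error -Newest 20
```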
And this line gets you all error and warning entries that have the keyword "Time" in their message:
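A sketch; -Message accepts wildcards:

```powershell
# Errors and warnings whose message text contains "Time"
Get-EventLog -LogName System -EntryType Error, Warning -Message *Time*
```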
Note that an event source must be unique and may not already exist in any other event log. To remove an event
source, use Remove-EventLog with the same parameters as above, but be extremely careful: this cmdlet can wipe
out entire event logs.
Once you have registered your event source, you are ready to log things to an event log. Logging (writing) event
entries no longer necessarily requires administrative privileges. Since we added the event source to the Application
log, anyone can now use it to log events. You could for example use this line inside of your logon scripts to log
status information:
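A sketch of such a logging line; the source name PowerShellScripts is a placeholder and must match the event
source you registered beforehand:

```powershell
# Write an informational entry to the Application log
Write-EventLog -LogName Application -Source PowerShellScripts `
  -EntryType Information -EventId 1 -Message "Logon script executed."
```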
Or you can open the system dialog to view your new event entry that way:
PS> Show-EventLog
And of course you can remove your event source if this was just a test and you want to get rid of it again (but you do
need administrator privileges again, just like when you created the event source):
Windows Management Instrumentation (WMI) is a technology available on all Windows systems starting with
Windows 2000. WMI can provide you with a wealth of information about the Windows configuration and setup. It
works both locally and remotely, and PowerShell makes accessing WMI a snap.
A class works like the term "animal": there are zillions of real dogs, cats, and horses, but "animal" itself is just the
abstract description. So, there may be one, ten, thousands, or no objects (or "instances") of a class. Take the class
"mammoth": there are no instances of this class these days.
WMI works the same. If you'd like to know something about a computer, you ask WMI about a class, and WMI
returns the objects. When you ask for the class "Win32_BIOS", you get back exactly one instance (or object)
because your computer has just one BIOS. When you ask for "Win32_Share", you get back a number of instances,
one for each share. And when you ask for "Win32_TapeDrive", you get back nothing because most likely, your
computer has no built-in tape drive. Tape drives thus work like mammoths in the real world: while the class (the
"kind") still exists, there are no longer any instances.
Retrieving Information
How do you ask WMI for objects? It's easy! Just use the cmdlet Get-WmiObject. It accepts a class name and returns
objects, just like the cmdlet name and its parameter suggest:
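For example:

```powershell
# Ask WMI for the BIOS class; one instance comes back
Get-WmiObject Win32_BIOS
```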
SMBIOSBIOSVersion : RKYWSF21
Manufacturer : Phoenix Technologies LTD
Name : Phoenix TrustedCore(tm) NB Release SP1 1.0
SerialNumber : 701KIXB007922
Version : PTLTD - 6040000
NameSpace: ROOT\cimv2
SMBIOSBIOSVersion : 02LV.MP00.20081121.hkk
Manufacturer : Phoenix Technologies Ltd.
Name : Phoenix SecureCore(tm) NB Version 02LV.MP00.20081121.hkk
SerialNumber : ZAMA93HS600210
Version : SECCSD - 6040000
To see the red-pill-world, pipe the results to Select-Object and ask it to show all available properties:
Status : OK
Name : Phoenix SecureCore(tm) NB Version
02LV.MP00.20081121.hkk
Caption : Phoenix SecureCore(tm) NB Version
02LV.MP00.20081121.hkk
SMBIOSPresent : True
__GENUS : 2
__CLASS : Win32_BIOS
__SUPERCLASS : CIM_BIOSElement
__DYNASTY : CIM_ManagedSystemElement
__RELPATH : Win32_BIOS.Name="Phoenix SecureCore(tm) NB Version
02LV.MP00.20081121.hkk",SoftwareElementID="Phoenix
SecureCore(tm) NB Version
02LV.MP00.20081121.hkk",Softw
areElementState=3,TargetOperatingSystem=0,Version="SECC
SD - 6040000"
__PROPERTY_COUNT : 27
__DERIVATION : {CIM_BIOSElement, CIM_SoftwareElement,
CIM_LogicalElement, CIM_ManagedSystemElement}
__SERVER : DEMO5
__NAMESPACE : root\cimv2
__PATH : \\DEMO5\root\cimv2:Win32_BIOS.Name="Phoenix
SecureCore(tm) NB Version
02LV.MP00.20081121.hkk",SoftwareElementID="Phoenix
SecureCore(tm) NB Version
02LV.MP00.20081121.hkk",Softw
areElementState=3,TargetOperatingSystem=0,Version="SECC
SD - 6040000"
BiosCharacteristics : {4, 7, 8, 9...}
BIOSVersion : {SECCSD - 6040000, Phoenix SecureCore(tm) NB Version
02LV.MP00.20081121.hkk, Ver 1.00PARTTBL}
BuildNumber :
CodeSet :
CurrentLanguage :
Description : Phoenix SecureCore(tm) NB Version
02LV.MP00.20081121.hkk
IdentificationCode :
InstallableLanguages :
InstallDate :
LanguageEdition :
ListOfLanguages :
Manufacturer : Phoenix Technologies Ltd.
OtherTargetOS :
PrimaryBIOS : True
ReleaseDate : 20081121000000.000000+000
SerialNumber : ZAMA93HS600210
SMBIOSBIOSVersion : 02LV.MP00.20081121.hkk
SMBIOSMajorVersion : 2
SMBIOSMinorVersion : 5
SoftwareElementID : Phoenix SecureCore(tm) NB Version
02LV.MP00.20081121.hkk
SoftwareElementState : 3
TargetOperatingSystem : 0
Version : SECCSD - 6040000
Scope : System.Management.ManagementScope
Path : \\DEMO5\root\cimv2:Win32_BIOS.Name="Phoenix
SecureCore(tm) NB Version
02LV.MP00.20081121.hkk",SoftwareElementID="Phoenix
SecureCore(tm) NB Version
02LV.MP00.20081121.hkk",Softw
areElementState=3,TargetOperatingSystem=0,Version="SECC
SD - 6040000"
Options : System.Management.ObjectGetOptions
ClassPath : \\DEMO5\root\cimv2:Win32_BIOS
Properties : {BiosCharacteristics, BIOSVersion, BuildNumber,
Caption...}
SystemProperties : {__GENUS, __CLASS, __SUPERCLASS, __DYNASTY...}
Qualifiers : {dynamic, Locale, provider, UUID}
Site :
Container :
Once you see the real world, you can pick the properties you find interesting and then put together a custom
selection. Note that WMI adds a couple of system properties to each object, all starting with "__". These properties
are available on all WMI objects. __SERVER is especially useful because it always reports the name of the computer
system the WMI object came from. Once you start retrieving WMI information remotely, you should always add
__SERVER to the list of selected properties.
PowerShell can filter WMI results client-side using Where-Object. So, to get only objects that have a MACAddress,
you could use this line:
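Such a client-side filter might look like this (a minimal sketch using the standard Win32_NetworkAdapterConfiguration class):

```powershell
# All objects travel to PowerShell first; Where-Object then
# discards the ones without a MAC address
Get-WmiObject Win32_NetworkAdapterConfiguration |
    Where-Object { $_.MACAddress -ne $null } |
    Select-Object Caption, MACAddress
```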
Client-side filtering is easy because it really just uses Where-Object to pick out those objects that fulfill a given
condition. However, it is slightly inefficient as well. All WMI objects need to travel to your computer first before
PowerShell can pick out the ones you want.
If you only expect a small number of objects and/or if you are retrieving objects from a local machine, there is no
need to create more efficient code. If however you are using WMI remotely via network and/or have to deal with
hundreds or even thousands of objects, you should instead use server-side filters.
These filters are transmitted to WMI along with your query, and WMI only returns the wanted objects in the first
place. Since these filters are managed by WMI and not PowerShell, they use WMI syntax and not PowerShell
syntax. Have a look:
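A sketch of such a server-side filter (note the WMI syntax inside the -Filter string):

```powershell
# The filter travels to WMI along with the query; only matching objects
# come back. WMI syntax: "!=" instead of -ne, NULL instead of $null
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter 'MACAddress != NULL'
```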
Simple filters like the one above are almost self-explanatory. WMI uses different operators ("!=" instead of "-ne" for
inequality) and keywords ("NULL" instead of $null), but the general logic is the same.
Sometimes, however, WMI filters can be tricky. For example, to find all network cards that have an IP address
assigned to them, in PowerShell (using client-side filtering) you would use:
PS> Get-WmiObject Win32_NetworkAdapterConfiguration |
>> Where-Object { $_.IPAddress -ne $null } |
>> Select-Object Caption, IPAddress, MACAddress
>>
A seemingly equivalent server-side filter would fail here, though. The reason is the nature of the IPAddress
property. When you look at the results of your client-side filtering, you'll notice that the IPAddress column shows
values in braces and can display more than one IP address. The property IPAddress is an array, and WMI filters
cannot check array contents.
So in this scenario, you would have to either stick to client-side filtering or search for another object property that is
not an array and could still separate network cards with IP address from those without. There happens to be a
property called IPEnabled that does just that:
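A sketch using IPEnabled, which is a plain Boolean and therefore well suited for a server-side filter:

```powershell
# TRUE/FALSE are WMI keywords, not PowerShell's $true/$false
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter 'IPEnabled = TRUE' |
    Select-Object Caption, IPAddress, MACAddress
```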
A special WMI filter operator is "LIKE". It works almost like PowerShell's comparison operator -like. Use "%"
instead of "*" for wildcards, though. So, to find all services with the keyword "net" in their name, try this:
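A sketch using a WQL query (so the example matches the -Query parameter mentioned next; remember the wildcard is "%"):

```powershell
# Matches services whose name contains "net" anywhere
Get-WmiObject -Query "SELECT * FROM Win32_Service WHERE Name LIKE '%net%'"
```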
PowerShell also supports the [WmiSearcher] type accelerator, which you can use to achieve basically the same thing
you just did with the -Query parameter:
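A sketch of the same query submitted through [WmiSearcher]:

```powershell
# [WmiSearcher] converts a WQL string into a ManagementObjectSearcher
$searcher = [WmiSearcher]"SELECT * FROM Win32_Service WHERE Name LIKE '%net%'"
$searcher.Get()
```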
Every WMI instance has a unique path, which consists basically of the class name as well as one or more key
properties. For services, the key property is Name, which is the English-language name of the service. If you want
to work directly with a particular service through WMI, specify its path and do a type conversion. Use either the
[wmi] type accelerator or the underlying [System.Management.ManagementObject] .NET type:
[wmi]"Win32_Service.Name='Fax'"
ExitCode : 1077
Name : Fax
ProcessId : 0
StartMode : Manual
State : Stopped
Status : OK
In fact, you don't necessarily need to specify the name of the key property as long as you at least specify its value.
This way, you'll find all the properties of a specific WMI instance right away.
$disk = [wmi]'Win32_LogicalDisk="C:"'
$disk.FreeSpace
10181373952
[int]($disk.FreeSpace / 1MB)
9710
$disk | Format-List [a-z]*
Status :
Availability :
DeviceID : C:
StatusInfo :
Access : 0
BlockSize :
Caption : C:
Compressed : False
ConfigManagerErrorCode :
ConfigManagerUserConfig :
CreationClassName : Win32_LogicalDisk
Description : Local hard drive
DriveType : 3
ErrorCleared :
ErrorDescription :
ErrorMethodology :
FileSystem : NTFS
FreeSpace : 10181373952
InstallDate :
LastErrorCode :
MaximumComponentLength : 255
MediaType : 12
Name : C:
NumberOfBlocks :
PNPDeviceID :
PowerManagementCapabilities :
PowerManagementSupported :
ProviderName :
Purpose :
QuotasDisabled :
QuotasIncomplete :
QuotasRebuilding :
Size : 100944637952
SupportsDiskQuotas : False
SupportsFileBasedCompression : True
SystemCreationClassName : Win32_ComputerSystem
SystemName : JSMITH-PC
VolumeDirty :
VolumeName :
VolumeSerialNumber : AC039C05
Note that WMI objects returned by PowerShell Remoting are always read-only. They cannot be used to change the
remote system. If you want to change a remote system using WMI objects, you must connect to it using the
-ComputerName parameter provided by Get-WmiObject.
Modifying Properties
Most of the properties that you find in WMI objects are read-only. There are few, though, that can be modified. For
example, if you want to change the description of a drive, add new text to the VolumeName property of the drive:
$drive = [wmi]"Win32_LogicalDisk='C:'"
$drive.VolumeName = "My Harddrive"
$drive.Put()
Path : \\.\root\cimv2:Win32_LogicalDisk.DeviceID="C:"
RelativePath : Win32_LogicalDisk.DeviceID="C:"
Server : .
NamespacePath : root\cimv2
ClassName : Win32_LogicalDisk
IsClass : False
IsInstance : True
IsSingleton : False
This line would kill all instances of the Windows Editor "notepad.exe" on your local machine:
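A sketch of such a call (Terminate() is the Win32_Process method that ends a process):

```powershell
# Retrieve all running notepad.exe instances and terminate each one
Get-WmiObject Win32_Process -Filter "Name = 'notepad.exe'" |
    ForEach-Object { $_.Terminate() }
```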
Add the parameter -ComputerName to Get-WmiObject, and you'd be able to kill notepads on one or more remote
machines - provided you have Administrator privileges on the remote machine.
For every instance that Terminate() closes, it returns an object with a number of properties. Only the ReturnValue
property is useful, though, because it tells you whether the call succeeded. That's why it is generally a good idea
to append ".ReturnValue" to all calls of a WMI method. A return value of 0 generally indicates success; any other
code indicates failure. To find out what the error codes mean, enter the WMI class name (like "Win32_Process") into
an Internet search engine. One of the first links will guide you to the Microsoft MSDN documentation page for that
class, which lists all codes and clear-text translations for all properties and method calls.
If you already know the process ID of a process, you can work on the process directly just as you did in the last
section because the process ID is the key property of processes. For example, you could terminate the process with
the ID 1234 like this:
([wmi]"Win32_Process='1234'").Terminate()
If you'd rather check your hard disk drive C:\ for errors, the proper invocation is:
([wmi]"Win32_LogicalDisk='C:'").Chkdsk(...
However, since this method requires additional arguments, the question here is what you should specify. Invoke the
method without parentheses in order to get initial brief instructions:
([wmi]"Win32_LogicalDisk='C:'").Chkdsk
MemberType : Method
OverloadDefinitions : {System.Management.ManagementBaseObject
Chkdsk(System.Boolean FixErrors, System.Boolean
VigorousIndexCheck, System.Boolean SkipFolderCycle,
System.Boolean ForceDismount, Syst
em.Boolean RecoverBadSectors, System.Boolean
OkToRunAtBootUp)}
TypeNameOfValue : System.Management.Automation.PSMethod
Value : System.Management.ManagementBaseObject
Chkdsk(System.Boolean FixErrors, System.Boolean
VigorousIndexCheck, System.Boolean SkipFolderCycle,
System.Boolean ForceDismount, Syste
m.Boolean RecoverBadSectors, System.Boolean
OkToRunAtBootUp)
Name : Chkdsk
IsInstance : True
Static Methods
WMI methods exist not only on the WMI objects you retrieve with Get-WmiObject. Some WMI classes themselves also
support methods. These methods are called "static".
If you want to renew the IP addresses of all network cards, use the Win32_NetworkAdapterConfiguration class and
its static method RenewDHCPLeaseAll():
([wmiclass]"Win32_NetworkAdapterConfiguration").RenewDHCPLeaseAll().ReturnValue
You get the WMI class by using type conversion. You can either use the [wmiclass] type accelerator or the
underlying [System.Management.ManagementClass].
The methods of a WMI class are also documented in detail inside WMI. For example, you get the description of the
Win32Shutdown() method of the Win32_OperatingSystem class like this:
$class = [wmiclass]'Win32_OperatingSystem'
$class.Options.UseAmendedQualifiers = $true
(($class.methods["Win32Shutdown"]).Qualifiers["Description"]).Value
The Win32Shutdown method provides the full set of shutdown options supported
by Win32
operating systems. The method returns an integer value that can be
interpretted as follows:
0 – Successful completion.
Other – for integer values other than those listed above, refer to Win32
error code documentation.
If you'd like to learn more about a WMI class or a method, navigate to an Internet search engine and specify the
WMI class name as well as the method as keywords. It's best to limit your search to the Microsoft MSDN pages:
Win32_NetworkAdapterConfiguration RenewDHCPLeaseAll site:msdn2.microsoft.com.
$class = [wmiclass]'Win32_LogicalDisk'
$class.psbase.Options.UseAmendedQualifiers = $true
($class.psbase.qualifiers["description"]).Value
The Win32_LogicalDisk class represents a data source that resolves to an
actual local storage
device on a Win32 system. The class returns both local as well as mapped
logical disks.
However, the recommended approach is to use this class for obtaining
information on local
disks and to use the Win32_MappedLogicalDisk for information on mapped
logical disk.
In a similar way, all the properties of the class are documented. The next example retrieves the documentation for
the VolumeDirty property and explains its purpose:
$class = [wmiclass]'Win32_LogicalDisk'
$class.psbase.Options.UseAmendedQualifiers = $true
($class.psbase.properties["VolumeDirty"]).Type
Boolean
(($class.psbase.properties["VolumeDirty"]).Qualifiers["Description"]).Value
The VolumeDirty property indicates whether the disk requires chkdsk to be run
at next boot up time.
The property is applicable to only those instances of logical disk that
represent a physical disk in
the machine. It is not applicable to mapped logical drives.
WMI Events
WMI returns not only information but can also wait for certain events. If the events occur, an action will be started.
In the process, WMI can alert you when one of the following things involving a WMI instance happens:
__InstanceCreationEvent: A new instance was added such as a new process was started or a new file
created.
__InstanceModificationEvent: The properties of an instance changed. For example, the FreeSpace
property of a drive was modified.
__InstanceDeletionEvent: An instance was deleted, such as a program was shut down or a file deleted.
__InstanceOperationEvent: This is triggered in all three cases.
You can use these to set up an alarm signal. For example, if you want to be informed as soon as Notepad is started,
type:
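One way to do this is a synchronous event watcher built on the __InstanceCreationEvent class (a sketch; the
one-second polling interval is set with WITHIN, explained below):

```powershell
# Block until a new notepad.exe process appears
$query = "SELECT * FROM __InstanceCreationEvent WITHIN 1 " +
         "WHERE TargetInstance ISA 'Win32_Process' " +
         "AND TargetInstance.Name = 'notepad.exe'"
$watcher = New-Object System.Management.ManagementEventWatcher($query)
$result  = $watcher.WaitForNextEvent()
$result.TargetInstance.Name
```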
WITHIN specifies the polling interval, and "WITHIN 1" means that you want to be informed no later than one second
after the event occurs. The shorter you set the interval, the more computing power WMI will require to perform your
task. As long as the interval is kept at one second or more, the effort will be scarcely perceptible. Here is an
example:
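In PowerShell 2.0 you can also register such a query as a background subscription (a sketch; the source identifier
NewProcess is an arbitrary name chosen for this example):

```powershell
# Report every newly created process, polling every five seconds
Register-WmiEvent -Query ("SELECT * FROM __InstanceCreationEvent WITHIN 5 " +
    "WHERE TargetInstance ISA 'Win32_Process'") `
    -SourceIdentifier NewProcess `
    -Action { Write-Host "New process:" $Event.SourceEventArgs.NewEvent.TargetInstance.Name }

# Remove the subscription when you are done:
Unregister-Event -SourceIdentifier NewProcess
```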
To be able to use WMI remoting, your network must support DCOM calls (thus, the firewall needs to be set up
accordingly). Also, you need to have Administrator privileges on the target machine.
You can also specify a comma-separated list of a number of computers and return information from all of them. The
parameter -ComputerName accepts an array of computer names. Anything that returns an array of computer names
or IP addresses can be valid input. This line, for example, would read computer names from a file:
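For example (a sketch; computers.txt is a hypothetical text file with one computer name per line):

```powershell
Get-WmiObject Win32_BIOS -ComputerName (Get-Content computers.txt)
```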
If you want to log on to the target system using another user account, use the –Credential parameter to specify
additional log on data as in this example:
$credential = Get-Credential
Get-WmiObject -ComputerName pc023 -Credential $credential Win32_Process
In addition to the built-in remoting capabilities, you can use Get-WmiObject via PowerShell Remoting (if you have
set up PowerShell Remoting correctly). Here, you send the WMI command off to the remote system:
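A sketch (pc023 stands for your remote system, and PowerShell Remoting must be enabled on it):

```powershell
# The WMI query runs on the remote machine; only results come back
Invoke-Command -ComputerName pc023 -ScriptBlock { Get-WmiObject Win32_BIOS }
```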
Note that all objects returned by PowerShell Remoting are read-only and do not contain methods anymore. If you
want to change WMI properties or call WMI methods, you need to do this inside the script block you send to the
remote system - so it needs to be done before PowerShell Remoting sends back objects to your own system.
Because the topmost directory in WMI is always named root, from its location you can inspect existing namespaces.
Get a display first of the namespaces on this level:
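Namespaces are themselves WMI instances of the __Namespace class, so a sketch of such a listing could look like this:

```powershell
# List the namespaces directly below root
Get-WmiObject -Namespace root -Class __Namespace | Select-Object Name
```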
As you see, the cimv2 directory is only one of them. What other directories are shown here depends on the software
and hardware that you use. For example, if you use Microsoft Office, you may find a directory called MSAPPS12.
Take a look at the classes in it:
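A sketch (the MSAPPS12 namespace is only present if the corresponding Office version is installed):

```powershell
# -List returns the class definitions inside the namespace
Get-WmiObject -Namespace root\MSAPPS12 -List
```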
The date and time are represented as a sequence of numbers: first the year, then the month, and finally the day.
Following this is the time in hours, minutes, seconds, and fractions of a second, and then the time zone offset.
This is the so-called DMTF standard, which is hard to read. However, you can use the ToDateTime() method of the
System.Management.ManagementDateTimeConverter .NET class to decipher this cryptic format:
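A sketch using the BIOS release date shown earlier:

```powershell
$bios = Get-WmiObject Win32_BIOS
$bios.ReleaseDate            # DMTF format, e.g. 20081121000000.000000+000
[System.Management.ManagementDateTimeConverter]::ToDateTime($bios.ReleaseDate)
```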
Now you can also use standard date and time cmdlets such as New-TimeSpan to calculate the current system uptime:
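A sketch using the LastBootUpTime property of Win32_OperatingSystem:

```powershell
$os   = Get-WmiObject Win32_OperatingSystem
$boot = [System.Management.ManagementDateTimeConverter]::ToDateTime($os.LastBootUpTime)
New-TimeSpan -Start $boot -End (Get-Date)   # current system uptime
```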
User administration in the Active Directory was a dark spot in PowerShell Version 1. Microsoft did not ship any
cmdlets to manage AD user accounts or other aspects in Active Directory. That's why the 3rd party vendor Quest
stepped in and published a free PowerShell Snap-In with many useful AD cmdlets. Over the years, this extension
has grown to become a de-facto standard, and many PowerShell scripts use Quest AD cmdlets. You can freely
download this extension from the Quest website.
Beginning with PowerShell Version 2.0, Microsoft finally shipped their own AD management cmdlets. They are
included with Server 2008 R2 and also available for download as the "RSAT tools" (Remote Server Administration
Tools). The AD cmdlets are part of a module called "ActiveDirectory". This module is installed by default when
you enable the Domain Controller role on a server. On a member server or client with the RSAT tools installed, you
have to go to Control Panel and enable that feature first.
This chapter is not talking about either one of these extensions. It introduces you to the built-in low-level support
for ADSI methods. They are the beef that makes those two extensions work, and they can be called directly as well.
Don't get me wrong: if you work a lot with AD, it is much easier to get one of the mentioned AD extensions and use
cmdlets for your tasks. But if you (or your scripts) just need to get a user, change some attributes, or determine
group membership details, it can be easier to use the direct .NET Framework methods shown in this chapter. They do
not introduce dependencies: your script runs without the need to install either the Quest toolkit or the RSAT tools.
Topics Covered:
Connecting to a Domain
o Logging On Under Other User Names
Accessing a Container
o Listing Container Contents
Accessing Individual Users or Groups
o Using Filters and the Pipeline
o Directly Accessing Elements
o Obtaining Elements from a Container
o Searching for Elements
Table 19.1: Examples of LDAP queries
o Accessing Elements Using GUID
Reading and Modifying Properties
o Just What Properties Are There?
o Practical Approach: Look
o Theoretical Approach: Much More Thorough
o Reading Properties
o Modifying Properties
o Deleting Properties
Table 19.2: PutEx() operations
o The Schema of Domains
o Setting Properties Having Several Values
Invoking Methods
o Changing Passwords
o Controlling Group Memberships
o In Which Groups Is a User a Member?
o Which Users Are Members of a Group?
o Adding Users to a Group
Creating New Objects
o Creating New Organizational Units
o Create New Groups
Table 19.3: Group Types
o Creating New Users
Connecting to a Domain
If your computer is a member of a domain, the first step in managing users is to connect to a log-on domain. You
can set up a connection like this:
$domain = [ADSI]""
$domain
distinguishedName
-----------------
{DC=scriptinternals,DC=technet}
If your computer isn't a member of a domain, the connection setup will fail and generate an error message.
If you want to manage local user accounts and groups, use the WinNT: moniker instead of LDAP:. But watch out:
the text is case-sensitive here. For example, you can access the local administrator account like this:
$user = [ADSI]"WinNT://./Administrator,user"
$user | Select-Object *
We won't go into local user accounts in any more detail in the following examples. If you must manage local users,
also look at net.exe, which provides easy-to-use options for managing local users and groups.
By the way, [ADSI] is just a type accelerator for the DirectoryServices.DirectoryEntry .NET type, so the initial
domain connection can also be written like this:
$domain = [DirectoryServices.DirectoryEntry]""
$domain
distinguishedName
-----------------
{DC=scriptinternals,DC=technet}
This is important to know when you want to log on under a different identity. The [ADSI] type accelerator always
logs you on using your current identity. Only the underlying DirectoryServices.DirectoryEntry .NET type gives you
the option of logging on with another identity. But why would anyone want to do something like that? Here are a
few reasons:
External consultant: You may be visiting a company as an external consultant and have brought along
your own notebook computer, which isn't a member of the company domain. This prevents you from
setting up a connection to the company domain. But if you have a valid user account along with its
password at your disposal, you can use your notebook and this identity to access the company domain.
Your notebook doesn't have to be a domain member to access the domain.
Several domains: Your company has several domains and you want to manage one of them, but it isn't
your log-on domain. More likely than not, you'll have to log on to the new domain with an identity known
to it.
Logging onto a domain that isn't your own with another identity works like this:
$domain = New-Object DirectoryServices.DirectoryEntry("LDAP://10.10.10.1", "domain\user", "secret")
$domain.name
scriptinternals
$domain.distinguishedName
DC=scriptinternals,DC=technet
Two things are important for ADSI paths: first, their names are case-sensitive. That's why the following two
approaches are wrong. Second, surprisingly enough, ADSI paths use a normal forward slash. A backslash like the one
commonly used in the file system would generate error messages:
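Two illustrative pitfalls (sketches; both lines fail):

```powershell
# Wrong: the moniker is case-sensitive; it must be "LDAP:", not "ldap:"
$domain = New-Object DirectoryServices.DirectoryEntry("ldap://10.10.10.1")

# Wrong: ADSI paths use forward slashes, not backslashes
$domain = New-Object DirectoryServices.DirectoryEntry("LDAP:\\10.10.10.1")
```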
If you don't want to put log-on data in plain text in your code, use Get-Credential. Since the password has to be
passed in plain text when logging on, and Get-Credential returns the password in encrypted form, an intermediate
step is required in which it is converted back into plain text:
$cred = Get-Credential
$pwd = [Runtime.InteropServices.Marshal]::PtrToStringAuto(
[Runtime.InteropServices.Marshal]::SecureStringToBSTR($cred.Password))
$domain = New-Object DirectoryServices.DirectoryEntry("LDAP://10.10.10.1", $cred.UserName, $pwd)
$domain.name
scriptinternals
Log-on errors are initially invisible. PowerShell reports errors only when you actually try to connect to the domain.
This procedure is known as "binding." Calling the $domain.Name property won't cause any errors, because when the
connection fails, there isn't even a property called Name in the object in $domain.
So, how can you find out whether a connection was successful or not? Just invoke the Bind() method, which performs
the binding. Bind() always throws an exception, and a Trap can capture this error.
The code that calls Bind() must be in its own scriptblock, which means it must be enclosed in braces. If an error
occurs in the block, PowerShell will abort the block and execute the Trap code, where the error is stored in a
variable. The variable is created in the script: scope so that the rest of the script can use it. Then an If
verifies whether a connection error occurred. A connection error exists whenever the exception thrown by Bind() has
an error code other than -2147352570. In this event, If outputs the text of the error message and stops further
instructions from running by using break.
$cred = Get-Credential
$pwd = [Runtime.InteropServices.Marshal]::PtrToStringAuto(
[Runtime.InteropServices.Marshal]::SecureStringToBSTR($cred.Password))
$domain = New-Object DirectoryServices.DirectoryEntry("LDAP://10.10.10.1", $cred.UserName, $pwd)
trap { $script:err = $_ ; continue } &{ $domain.Bind($true); $script:err = $null }
if ($err.Exception.ErrorCode -ne -2147352570)
{
Write-Host -Fore Red $err.Exception.Message
break
}
else
{
Write-Host -Fore Green "Connection established."
}
Logon failure: unknown user name or bad password.
By the way, the error code -2147352570 means that although the connection was established, Bind() didn't find an
object to which it could bind itself. That's OK, because you didn't specify any particular object in your LDAP path
when the connection was set up.
Accessing a Container
Domains have a hierarchical structure, like the file system directory structure. Containers inside the domain are
either pre-defined directories or subsequently created organizational units. If you want to access a container,
specify the LDAP path to the container. For example, you could access the pre-defined directory Users like this:
$ldap = "/CN=Users,DC=scriptinternals,DC=technet"
$cred = Get-Credential
$pwd = [Runtime.InteropServices.Marshal]::PtrToStringAuto(
[Runtime.InteropServices.Marshal]::SecureStringToBSTR($cred.Password))
$users = New-Object DirectoryServices.DirectoryEntry("LDAP://10.10.10.1$ldap", $cred.UserName, $pwd)
$users
$users
distinguishedName
-----------------
{CN=Users,DC=scriptinternals,DC=technet}
The fact that you are logged on as a domain member naturally simplifies the procedure considerably because now
you need neither the IP address of the domain controller nor log-on data. The LDAP name of the domain is also
returned to you by the domain itself in the distinguishedName property. All you have to do is specify the container
that you want to visit:
$ldap = "CN=Users"
$domain = [ADSI]""
$dn = $domain.distinguishedName
$users = [ADSI]"LDAP://$ldap,$dn"
$users
While pre-defined containers use LDAP names with CN=, organizational units are specified with OU=. So, to connect
as a logged-on domain user to the sales OU, which is located in the company OU, you would type:
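A sketch (the OU names company and sales are the examples from the text):

```powershell
$ldap = "OU=sales,OU=company"
$domain = [ADSI]""
$dn = $domain.distinguishedName
$sales = [ADSI]"LDAP://$ldap,$dn"
$sales
```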
$ldap = "CN=Users"
$domain = [ADSI]""
$dn = $domain.distinguishedName
$users = [ADSI]"LDAP://$ldap,$dn"
$users.PSBase.Children
distinguishedName
-----------------
{CN=admin,CN=Users,DC=scriptinternals,DC=technet}
{CN=Administrator,CN=Users,DC=scriptinternals,DC=technet}
{CN=All,CN=Users,DC=scriptinternals,DC=technet}
{CN=ASPNET,CN=Users,DC=scriptinternals,DC=technet}
{CN=Belle,CN=Users,DC=scriptinternals,DC=technet}
{CN=Consultation2,CN=Users,DC=scriptinternals,DC=technet}
{CN=Consultation3,CN=Users,DC=scriptinternals,DC=technet}
{CN=ceimler,CN=Users,DC=scriptinternals,DC=technet}
(...)
$ldap = "CN=Users"
$domain = [ADSI]""
$dn = $domain.distinguishedName
$users = [ADSI]"LDAP://$ldap,$dn"
$users.PSBase.Children | Where-Object { $_.sAMAccountType -eq 805306368 }
Another approach makes use of the class that you can always find in the objectClass property.
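A sketch (it assumes the $users container retrieved above; objectClass holds a list of classes, of which user is one):

```powershell
# Pick out users by their object class instead of sAMAccountType
$users.PSBase.Children | Where-Object { $_.objectClass -contains 'user' }
```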
For example, if you want to access the Guest account directly, specify its distinguishedName. If you're a domain
member, you don't have to specify the distinguishedName of the domain manually; the domain reports it itself:
$ldap = "CN=Guest,CN=Users"
$domain = [ADSI]""
$dn = $domain.distinguishedName
$guest = [ADSI]"LDAP://$ldap,$dn"
$guest | Format-List *
objectClass : {top, person, organizationalPerson, user}
cn : {Guest}
description : {Predefined account for guest access to the computer
or domain}
distinguishedName : {CN=Guest,CN=Users,DC=scriptinternals,DC=technet}
instanceType : {4}
whenCreated : {12.11.2005 12:31:31 PM}
whenChanged : {06.27.2006 09:59:59 AM}
uSNCreated : {System.__ComObject}
memberOf : {CN=Guests,CN=Builtin,DC=scriptinternals,DC=technet}
uSNChanged : {System.__ComObject}
name : {Guest}
objectGUID : {240 255 168 180 1 206 85 73 179 24 192 164 100 28
221 74}
userAccountControl : {66080}
badPwdCount : {0}
codePage : {0}
countryCode : {0}
badPasswordTime : {System.__ComObject}
lastLogoff : {System.__ComObject}
lastLogon : {System.__ComObject}
logonHours : {255 255 255 255 255 255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255
}
pwdLastSet : {System.__ComObject}
primaryGroupID : {514}
objectSid : {1 5 0 0 0 0 0 5 21 0 0 0 184 88 34 189 250 183 7
172 165 75 78 29 245 1 0 0}
accountExpires : {System.__ComObject}
logonCount : {0}
sAMAccountName : {Guest}
sAMAccountType : {805306368}
objectCategory :
{CN=Person,CN=Schema,CN=Configuration,DC=scriptinternals,DC=technet}
isCriticalSystemObject : {True}
nTSecurityDescriptor : {System.__ComObject}
Using the asterisk as wildcard character, Format-List makes all the properties of an ADSI object visible so that you
can easily see which information is contained in it and under which names.
$domain = [ADSI]""
$users = $domain.psbase.Children.Find("CN=Users")
$useraccount = $users.psbase.Children.Find("CN=Administrator")
$useraccount.Description
Predefined account for managing the computer or domain.
$UserName = "*mini*"
$searcher = new-object DirectoryServices.DirectorySearcher([ADSI]"")
$searcher.filter = "(&(objectClass=user)(sAMAccountName=$UserName))"
$searcher.findall()
If you haven‘t logged onto the domain that you want to search, get the domain object through the log-on:
$domain = New-Object DirectoryServices.DirectoryEntry("LDAP://10.10.10.1", "domain\user", "secret")
$UserName = "*mini*"
$searcher = new-object DirectoryServices.DirectorySearcher($domain)
$searcher.filter = "(&(objectClass=user)(sAMAccountName=$UserName))"
$searcher.findall() | Format-Table -wrap
The results of the search are all the objects that contain the string "mini" in their names, no matter where they're
located in the domain:
Path                                                                       Properties
----                                                                       ----------
LDAP://10.10.10.1/CN=Administrator,CN=Users,DC=scriptinternals,DC=technet  {samaccounttype, lastlogon,
                                                                           objectsid, whencreated...}
The crucial part takes place in the search filter, which looks a bit strange in this example:
(&(objectClass=user)(sAMAccountName=$UserName)). The filter merely compares certain properties of elements against
certain requirements. It checks whether the term user turns up in the objectClass property and whether the
sAMAccountName property matches the specified user name. Both criteria are combined by the "&" character, so they
both have to be met. This would enable you to assemble a convenient search function.
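Such a function could be sketched like this (parameter names and option values follow the description below; to
search a foreign domain, replace [ADSI]"" with a DirectoryEntry created from the server address and credentials):

```powershell
function Get-LDAPUser($UserName, $Start)
{
    # Connect to the current log-on domain
    $domain = [ADSI]""

    if ($Start)
    {
        # Limit the search to the specified container
        $startelement = $domain.psbase.Children.Find($Start)
    }
    else
    {
        # No starting point given: search the entire domain
        $startelement = $domain
    }

    $Searcher = New-Object DirectoryServices.DirectorySearcher($startelement)
    $Searcher.Filter = "(&(objectClass=user)(sAMAccountName=$UserName))"
    $Searcher.CacheResults = $true
    $Searcher.SearchScope = "Subtree"
    $Searcher.PageSize = 1000
    $Searcher.FindAll()
}
```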
The search function Get-LDAPUser searches the current log-on domain by default. If you want to log on to another
domain, note the appropriate lines in the function and specify your log-on data.
Get-LDAPUser can be used very flexibly and locates user accounts everywhere inside the domain. Just specify the
name you're looking for, or a part of it:
# Find only users with "e" in their names that are in the
# "main office" OU or below it
Get-LDAPUser *e* "OU=main office,OU=company"
Get-LDAPUser returns the found user objects right away. You can subsequently process them in the PowerShell
pipeline, just like the elements that you previously got directly from Children. How does Get-LDAPUser manage to
search only the part of the domain you want it to?
First, the function checks whether the second parameter, $start, was specified. If yes, Find() is used to access the
specified container in the domain container (the topmost level), and this container is defined as the starting point
for the search. If $start is missing, the starting point is the topmost level of the domain, meaning that the entire
domain is searched.
The function also sets some options for the searcher:
$Searcher.CacheResults = $true
$Searcher.SearchScope = "Subtree"
$Searcher.PageSize = 1000
SearchScope determines whether, beginning from the starting point, all child directories should also be searched
recursively, or whether the search should be limited to the start directory. PageSize specifies in which "chunks"
the results are retrieved from the domain. If you reduce the PageSize, your script may respond more quickly, but it
will also cause more network round trips. Even if you request more, each "chunk" will still include only 1,000 data
records.
You could now freely extend the example function by extending or modifying the search filter. Here are some useful
examples:
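A few illustrative LDAP filter strings (sketches in standard LDAP filter syntax; combine them with the searcher shown above):

```powershell
"(objectClass=group)"                        # all groups
"(&(objectClass=user)(sAMAccountName=a*))"   # users whose name starts with "a"
"(|(objectClass=user)(objectClass=group))"   # users or groups
"(!(objectClass=computer))"                  # everything except computers
```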
In the future, you can access precisely this account via its GUID. Then you won't have to care whether the location,
the name, or some other property of the user account changes; the GUID will always remain constant:
$account = [ADSI]"LDAP://<GUID=f0ffa8b401ce5549b318c0a4641cdd4a>"
$account
distinguishedName
-----------------
{CN=Guest,CN=Users,DC=scriptinternals,DC=technet}
Specify the GUID in the LDAP path if you also need to log on to the domain:
$guid = "<GUID=f0ffa8b401ce5549b318c0a4641cdd4a>"
$account = New-Object DirectoryServices.DirectoryEntry("LDAP://10.10.10.1/$guid", "domain\user", "secret")
$account
distinguishedName
-----------------
{CN=Guest,CN=Users,DC=scriptinternals,DC=technet}
The elements you get this way are full-fledged objects. You use the methods and properties of these elements to
control them. Basically, everything applies that you read about in Chapter 6. In the case of ADSI, there are some
additional special features:
Twin objects: Every ADSI object actually exists twice: first, as an object PowerShell synthesizes, and then
as a raw ADSI object. You can access the underlying raw object via the PSBase property of the processed
object. The processed object contains all Active Directory attributes, including possible schema extensions.
The underlying base object contains the .NET properties and methods you need for general management.
You already saw how to access these two objects when you used Children to list the contents of a
container.
Phantom objects: Search results of a cross-domain search look like original objects only at first sight. In
reality, these are reduced SearchResult objects. You can get the real ADSI object by using the
GetDirectoryEntry() method. You just saw how that happens in the section on GUIDs.
Properties: All the changes you make to ADSI properties won't come into effect until you invoke the
SetInfo() method.
In the following examples, we will use the Get-LDAPUser function described above to access user accounts, but you
can also get at user accounts with one of the other described approaches.
The result is meager but, as you know by now, search queries only return a reduced SearchResult object. You get the
real user object from it by calling GetDirectoryEntry(). Then you'll get more information:
$useraccount = $useraccount.GetDirectoryEntry()
$useraccount | Format-List *
objectClass : {top, person, organizationalPerson, user}
cn : {Guest}
description : {Predefined account for guest access to the computer
or domain}
distinguishedName : {CN=Guest,CN=Users,DC=scriptinternals,DC=technet}
instanceType : {4}
whenCreated : {12.12.2005 12:31:31 PM}
whenChanged : {06.27.2006 09:59:59 AM}
uSNCreated : {System.__ComObject}
memberOf : {CN=Guests,CN=Builtin,DC=scriptinternals,DC=technet}
uSNChanged : {System.__ComObject}
name : {Guest}
objectGUID : {240 255 168 180 1 206 85 73 179 24 192 164 100 28
221 74}
userAccountControl : {66080}
badPwdCount : {0}
codePage : {0}
countryCode : {0}
badPasswordTime : {System.__ComObject}
lastLogoff : {System.__ComObject}
lastLogon : {System.__ComObject}
logonHours : {255 255 255 255 255 255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255
}
pwdLastSet : {System.__ComObject}
primaryGroupID : {514}
objectSid : {1 5 0 0 0 0 0 5 21 0 0 0 184 88 34 189 250 183 7
172 165 75 78 29 245 1 0 0}
accountExpires : {System.__ComObject}
logonCount : {0}
sAMAccountName : {Guest}
sAMAccountType : {805306368}
objectCategory :
{CN=Person,CN=Schema,CN=Configuration,DC=scriptinternals,DC=technet}
isCriticalSystemObject : {True}
nTSecurityDescriptor : {System.__ComObject}
In addition, further properties are available in the underlying base object:
$useraccount.PSBase | Format-List *
AuthenticationType : Secure
Children : {}
Guid : b4a8fff0-ce01-4955-b318-c0a4641cdd4a
ObjectSecurity : System.DirectoryServices.ActiveDirectorySecurity
Name : CN=Guest
NativeGuid : f0ffa8b401ce5549b318c0a4641cdd4a
NativeObject : {}
Parent : System.DirectoryServices.DirectoryEntry
Password :
Path :
LDAP://10.10.10.1/CN=Guest,CN=Users,DC=scriptinternals,DC=technet
Properties : {objectClass, cn, description, distinguishedName...}
SchemaClassName : user
SchemaEntry : System.DirectoryServices.DirectoryEntry
UsePropertyCache : True
Username : scriptinternals\Administrator
Options : System.DirectoryServices.DirectoryEntryConfiguration
Site :
Container :
The difference between these two objects: the object returned first represents the respective user. The
underlying base object represents the ADSI object itself and reports, for example, where it is stored inside the
domain or what its unique GUID is. The UserName property, among others, does not state whom the user account
represents (which in this case is Guest), but who retrieved it (Administrator).
In this list, you will also see whether properties are read-only or can be modified. Modifiable
properties are designated by {get;set;}, read-only ones by {get;}. If you change a property, the modification won't
come into effect until you subsequently call SetInfo().
Reading Properties
The convention is that object properties are read using a dot, just like with all other objects (see Chapter 6). So, if you
want to find out what is in the Description property of the $useraccount object, write:
$useraccount.Description
Predefined account for guest access
But there are also two other options and they look like this:
$useraccount.Get("Description")
$useraccount.psbase.InvokeGet("Description")
At first glance, both seem to work identically. However, differences become evident when you query another
property: AccountDisabled.
$useraccount.AccountDisabled
$useraccount.Get("AccountDisabled")
Exception calling "Get" with 1 Argument(s): "The directory property cannot be
found in the cache."
At line:1 Char:14
+ $useraccount.Get( <<<< "AccountDisabled")
$useraccount.psbase.InvokeGet("AccountDisabled")
False
The first variant returns no information at all, the second an error message, and only the third the right result. What
happened here?
The object in $useraccount is an object processed by PowerShell. All attributes (directory properties) become
visible in this object as properties. However, ADSI objects can contain additional properties, and among these is
AccountDisabled. PowerShell doesn't take these additional properties into consideration. The dot notation
categorically suppresses all errors, so only Get() reports the actual problem: nothing was found for this element in the
LDAP directory under the name AccountDisabled.
In fact, AccountDisabled is located in another interface of the element; only the underlying PSBase object, with its
InvokeGet() method, handles it correctly and returns the contents of this property.
As long as you work with properties that are displayed when you use Format-List * to output the object to the
console, you won't have any difficulty using a dot or Get(). For all other properties, you'll have to use
PSBase.InvokeGet(). Use GetEx() if you want the contents of a property returned as an array.
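For instance, GetEx() delivers a multi-valued attribute as an array you can iterate over. A minimal sketch, assuming the $useraccount object from above:

```powershell
# GetEx() always returns the property contents as an array,
# even when the attribute holds only a single value:
$useraccount.GetEx("objectClass")
```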
Modifying Properties
In the simplest case, you can modify properties like those of any other object: use a dot to assign a new value to the
property. Don't forget to call SetInfo() afterwards so that the modification is saved. That's a special feature of ADSI.
For example, the following lines add a standard description to all users in the user directory that don't already have
one:
$ldap = "CN=Users"
$domain = [ADSI]""
$dn = $domain.distinguishedName
$users = [ADSI]"LDAP://$ldap,$dn"
$users.PSBase.Children | Where-Object { $_.sAMAccountType -eq 805306368 } |
Where-Object { $_.Description.toString() -eq "" } |
ForEach-Object { $_.Description = "Standard description"; $_.SetInfo();
$_.sAMAccountName + " was changed." }
In fact, there are also a total of three approaches to modifying a property. That will soon become very important as
the three ways behave differently in some respects:
# Method 1:
$useraccount.Description = "A new description"
$useraccount.SetInfo()
# Method 2:
$useraccount.Put("Description", "Another new description")
$useraccount.SetInfo()
# Method 3:
$useraccount.PSBase.InvokeSet("Description", "A third description")
$useraccount.SetInfo()
As long as you change the normal directory attributes of an object, all three methods will work in the same way.
Difficulties arise when you modify properties that have special functions. Among these, for example, is the
AccountDisabled property, which determines whether an account is disabled. The Guest account is normally
disabled:
$useraccount.AccountDisabled
The result is "nothing" because this property is not, as you already know from the last section, one of the
directory attributes that PowerShell manages in this object. That's bad, because something very peculiar will
occur in PowerShell if you now try to set this property to another value:
$useraccount.AccountDisabled = $false
$useraccount.SetInfo()
Exception calling "SetInfo" with 0 Argument(s): "The specified directory
service attribute
or value already exists. (Exception from HRESULT: 0x8007200A)"
At line:1 Char:18
+ $useraccount.SetInfo( <<<< )
$useraccount.AccountDisabled
False
PowerShell has simply added a new property called AccountDisabled to the object. If you try to write this object
back to the domain, the domain will resist: the AccountDisabled property added by PowerShell does not match the
AccountDisabled domain property. This problem always occurs when you set a property of an ADSI object
that hadn't previously been specified.
To eliminate the problem, you first have to return the object to its original state, essentially removing the property
that PowerShell added behind your back. You can do that by using GetInfo() to reload the object from the domain.
This shows that GetInfo() is the counterpart of SetInfo():
$useraccount.GetInfo()
Once PowerShell has added an "illegal" property to the object, all further attempts to store this object in the
domain by using SetInfo() will fail. You must call GetInfo() or create the object again.
Finally, use the third variant mentioned above to set the property: not via the normal object processed by
PowerShell, but via its underlying raw version:
$useraccount.psbase.InvokeSet("AccountDisabled", $false)
$useraccount.SetInfo()
Now the modification works. The lesson: the only method that can reliably and flawlessly modify properties is
InvokeSet() from the underlying PSBase object. The other two methods that modify the object processed by
PowerShell will only work properly with the properties that the object does display when you output it to the
console.
Deleting Properties
If you want to delete a property completely, you don't have to set its contents to 0 or an empty text: PutEx() can
remove properties entirely and also supports properties that store arrays.
PutEx() requires three arguments. The first specifies what PutEx() is supposed to do and corresponds to the values
listed in Table 19.2. The second argument is the name of the property that is supposed to be modified. Finally, the third
argument is the value that you assign to the property or want to remove from it.
To completely remove the Description property, use PutEx() with these parameters:
$useraccount.PutEx(1, "Description", 0)
$useraccount.SetInfo()
Afterwards, the Description property will be gone completely when you list all the properties of the object:
$useraccount | Format-List *
objectClass : {top, person, organizationalPerson, user}
cn : {Guest}
distinguishedName : {CN=Guest,CN=Users,DC=scriptinternals,DC=technet}
instanceType : {4}
whenCreated : {11.12.2005 12:31:31}
whenChanged : {17.10.2007 11:59:36}
uSNCreated : {System.__ComObject}
memberOf : {CN=Guests,CN=Builtin,DC=scriptinternals,DC=technet}
uSNChanged : {System.__ComObject}
name : {Guest}
objectGUID : {240 255 168 180 1 206 85 73 179 24 192 164 100 28
221 74}
userAccountControl : {66080}
badPwdCount : {0}
codePage : {0}
countryCode : {0}
badPasswordTime : {System.__ComObject}
lastLogoff : {System.__ComObject}
lastLogon : {System.__ComObject}
logonHours : {255 255 255 255 255 255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255
}
pwdLastSet : {System.__ComObject}
primaryGroupID : {514}
objectSid : {1 5 0 0 0 0 0 5 21 0 0 0 184 88 34 189 250 183 7
172 165 75 78 29 245 1 0 0}
accountExpires : {System.__ComObject}
logonCount : {0}
sAMAccountName : {Guest}
sAMAccountType : {805306368}
objectCategory :
{CN=Person,CN=Schema,CN=Configuration,DC=scriptinternals,DC=technet}
isCriticalSystemObject : {True}
nTSecurityDescriptor : {System.__ComObject}
Important: Even Get-Member will no longer give you any indication of the Description property. That's a real
deficiency, as you have no way to recognize which other properties the ADSI object may support as long as
you're using PowerShell's own resources. PowerShell always shows only properties that are defined.
However, this doesn't mean that the Description property is now gone forever. You can create a new one any time:
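A minimal sketch of how that might look (this assumes $useraccount still holds the Guest account from above; the description text is illustrative):

```powershell
# Simply assigning a value recreates the deleted attribute;
# SetInfo() writes it back to the directory:
$useraccount.Description = "Predefined account for guest access"
$useraccount.SetInfo()
```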
Interesting, isn't it? This means you could add entirely different properties that the object didn't have before:
$useraccount.wwwHomePage = "http://www.powershell.com"
$useraccount.favoritefood = "Meatballs"
Cannot set the Value property for PSMemberInfo object of type
"System.Management.Automation.PSMethod".
At line:1 Char:11
+ $useraccount.f <<<< avoritefood = "Meatballs"
$useraccount.SetInfo()
It turns out that the user account accepts the wwwHomePage property (which sets the user's Web page in the user
properties), while "favoritefood" was rejected. Only properties allowed by the schema can be set.
$useraccount.psbase.SchemaClassName
user
Take a look under this name in the schema of the domain. The result is the schema object for user objects, which
returns the names of all permitted properties in SystemMayContain.
$schema = $domain.PSBase.Children.find("CN=user,CN=Schema,CN=Configuration")
$schema.systemMayContain | Sort-Object
accountExpires
aCSPolicyName
adminCount
badPasswordTime
badPwdCount
businessCategory
codepage
controlAccessRights
dBCSPwd
defaultClassStore
desktopProfile
dynamicLDAPServer
groupMembershipSAM
groupPriority
groupsToIgnore
homeDirectory
homeDrive
homePhone
initials
lastLogoff
lastLogon
lastLogonTimestamp
lmPwdHistory
localeID
lockoutTime
logonCount
logonHours
logonWorkstation
mail
manager
maxStorage
mobile
msCOM-UserPartitionSetLink
msDRM-IdentityCertificate
msDS-Cached-Membership
msDS-Cached-Membership-Time-Stamp
mS-DS-CreatorSID
msDS-Site-Affinity
msDS-User-Account-Control-Computed
msIIS-FTPDir
msIIS-FTPRoot
mSMQDigests
mSMQDigestsMig
mSMQSignCertificates
mSMQSignCertificatesMig
msNPAllowDialin
msNPCallingStationID
msNPSavedCallingStationID
msRADIUSCallbackNumber
msRADIUSFramedIPAddress
msRADIUSFramedRoute
msRADIUSServiceType
msRASSavedCallbackNumber
msRASSavedFramedIPAddress
msRASSavedFramedRoute
networkAddress
ntPwdHistory
o
operatorCount
otherLoginWorkstations
pager
preferredOU
primaryGroupID
profilePath
pwdLastSet
scriptPath
servicePrincipalName
terminalServer
unicodePwd
userAccountControl
userCertificate
userParameters
userPrincipalName
userSharedFolder
userSharedFolderOther
userWorkstations
But note that this would delete any other previously entered telephone numbers. If you want to add a new telephone
number to an existing list, proceed as follows:
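The original example is not reproduced here; a sketch using the PutEx() append operation (the attribute otherTelephone and the number are illustrative):

```powershell
# 3 = ADS_PROPERTY_APPEND: add values to the existing list
# instead of overwriting it
$useraccount.PutEx(3, "otherTelephone", @("0123-456789"))
$useraccount.SetInfo()
```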
A very similar method allows you to delete selected telephone numbers on the list:
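A sketch of the deletion, again with illustrative values:

```powershell
# 4 = ADS_PROPERTY_DELETE: remove only the specified values
# from the list, leaving all others intact
$useraccount.PutEx(4, "otherTelephone", @("0123-456789"))
$useraccount.SetInfo()
```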
Invoking Methods
All the objects that you've been working with up to now contain not only properties, but also methods. In contrast to
properties, methods do not require you to call SetInfo(): a method that modifies an object takes effect immediately. To find
out which methods an object contains, use Get-Member to make them visible (see Chapter 6):
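For example:

```powershell
# List only the methods of the ADSI object PowerShell delivers:
$useraccount | Get-Member -MemberType Method
```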
Surprisingly, the result is something of a disappointment because the ADSI object PowerShell delivers contains no
methods. The true functionality is in the base object, which you get by using PSBase:
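The listing below was presumably produced by a call like this:

```powershell
# The base object exposes the real DirectoryEntry methods:
$useraccount.PSBase | Get-Member -MemberType Method
```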
TypeName: System.Management.Automation.PSMemberSet
Name MemberType Definition
---- ---------- ----------
add_Disposed Method System.Void add_Disposed(EventHandler
value)
Close Method System.Void Close()
CommitChanges Method System.Void CommitChanges()
CopyTo Method System.DirectoryServices.DirectoryEntry
CopyTo(DirectoryEntry newPare...
CreateObjRef Method System.Runtime.Remoting.ObjRef
CreateObjRef(Type requestedType)
DeleteTree Method System.Void DeleteTree()
Dispose Method System.Void Dispose()
Equals Method System.Boolean Equals(Object obj)
GetHashCode Method System.Int32 GetHashCode()
GetLifetimeService Method System.Object GetLifetimeService()
GetType Method System.Type GetType()
get_AuthenticationType Method
System.DirectoryServices.AuthenticationTypes get_AuthenticationType()
get_Children Method
System.DirectoryServices.DirectoryEntries get_Children()
get_Container Method System.ComponentModel.IContainer
get_Container()
get_Guid Method System.Guid get_Guid()
get_Name Method System.String get_Name()
get_NativeGuid Method System.String get_NativeGuid()
get_ObjectSecurity Method
System.DirectoryServices.ActiveDirectorySecurity get_ObjectSecurity()
get_Options Method
System.DirectoryServices.DirectoryEntryConfiguration get_Options()
get_Parent Method System.DirectoryServices.DirectoryEntry
get_Parent()
get_Path Method System.String get_Path()
get_Properties Method
System.DirectoryServices.PropertyCollection get_Properties()
get_SchemaClassName Method System.String get_SchemaClassName()
get_SchemaEntry Method System.DirectoryServices.DirectoryEntry
get_SchemaEntry()
get_Site Method System.ComponentModel.ISite get_Site()
get_UsePropertyCache Method System.Boolean get_UsePropertyCache()
get_Username Method System.String get_Username()
InitializeLifetimeService Method System.Object
InitializeLifetimeService()
Invoke Method System.Object Invoke(String methodName,
Params Object[] args)
InvokeGet Method System.Object InvokeGet(String
propertyName)
InvokeSet Method System.Void InvokeSet(String
propertyName, Params Object[] args)
MoveTo Method System.Void MoveTo(DirectoryEntry
newParent), System.Void MoveTo(Dire...
RefreshCache Method System.Void RefreshCache(), System.Void
RefreshCache(String[] propert...
remove_Disposed Method System.Void remove_Disposed(EventHandler
value)
Rename Method System.Void Rename(String newName)
set_AuthenticationType Method System.Void
set_AuthenticationType(AuthenticationTypes value)
set_ObjectSecurity Method System.Void
set_ObjectSecurity(ActiveDirectorySecurity value)
set_Password Method System.Void set_Password(String value)
set_Path Method System.Void set_Path(String value)
set_Site Method System.Void set_Site(ISite value)
set_UsePropertyCache Method System.Void set_UsePropertyCache(Boolean
value)
set_Username Method System.Void set_Username(String value)
ToString Method System.String ToString()
Changing Passwords
The password of a user account is an example of information that isn't stored in a property. That's why you can't
simply read it out of user accounts. Instead, methods ensure that a completely confidential hash value is generated
from the password right away and deposited in a secure location. You can use the SetPassword() and
ChangePassword() methods to change passwords:
$useraccount.SetPassword("New password")
$useraccount.ChangePassword("Old password", "New password")
Here, too, the deficiencies of Get-Member become evident when it is used with ADSI objects, because Get-Member
suppresses both methods instead of displaying them. You just have to "know" that they exist.
SetPassword() requires administrator privileges and simply resets the password. That can be risky because in the
process you lose access to all your certificates outside a domain, including the crucial certificate for the Encrypting
File System (EFS), but it's necessary when users have forgotten their passwords. ChangePassword() doesn't need any
higher level of permission because the caller has to confirm the change by supplying the old password.
When you change a password, be sure that it meets the demands of the domain. Otherwise, you'll be rewarded with
an error message like this one:
if ($start -ne "")
{
$startelement = $domain.psbase.Children.Find($start)
}
else
{
$startelement = $domain
}
$searcher = new-object DirectoryServices.DirectorySearcher($startelement)
$searcher.filter = "(&(objectClass=group)(sAMAccountName=$UserName))"
$Searcher.CacheResults = $true
$Searcher.SearchScope = "Subtree"
$Searcher.PageSize = 1000
$searcher.findall()
}
Groups, for their part, can also be members of other groups. So, every group object has not only the Member property
with its members, but also MemberOf with the groups in which this group is itself a member.
In the example, the user Cofi1 is added to the group of Domain Admins. It would have sufficed to pass the user's
correct ADSI path to the Add() method, but it's easier to get the user and pass the Path property of its PSBase
object.
Aside from Add(), there are other ways to add users to groups:
$administrators.Member += $user.distinguishedName
$administrators.SetInfo()
To remove users from the group again, use the Remove() method instead of Add().
$domain = [ADSI]""
Next, create a new organizational unit called ―company‖ and under it some additional organizational units:
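A minimal sketch of how this might look (the OU names are illustrative; $domain is the domain object retrieved above):

```powershell
# Create the organizational unit "company" at the domain root...
$company = $domain.Create("organizationalUnit", "OU=company")
$company.SetInfo()

# ...and an additional organizational unit beneath it:
$sales = $company.Create("organizationalUnit", "OU=sales")
$sales.SetInfo()
```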
Security groups have their own security ID so you can assign permissions to them. Distribution groups organize
only members, but have no security function. In the following example, a global security group and a global
distribution group are created:
#
$group_newsletter = $company.Create("group", "CN=Newsletter")
$group_newsletter.psbase.InvokeSet("groupType", 2)
$group_newsletter.SetInfo()
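The groupType value 2 used above marks a global distribution group. A global security group could be created the same way with the flag 0x80000002; a sketch, with an illustrative group name:

```powershell
# -2147483646 = 0x80000002: global security group
# (0x80000000 = security-enabled, 2 = global scope)
$group_admins = $company.Create("group", "CN=SalesAdmins")
$group_admins.psbase.InvokeSet("groupType", -2147483646)
$group_admins.SetInfo()
```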
Since PowerShell is layered on the .NET Framework, you already know from Chapter 6 how to use .NET code
in PowerShell to make up for missing functions. In this chapter, we'll take up this idea once again. You'll learn
about the options PowerShell offers for creating command extensions on the basis of the .NET Framework. By the
end of this chapter, you should even be able to create your own cmdlets.
Topics Covered:
In Chapter 6, you learned in detail how this works and what an "assembly" is. PowerShell used Add-Type to
load a system library and was then able to use the classes from it to call a static method like MsgBox().
That's extremely useful when there is already a system library that offers the method you're looking for, but for
some functionality even the .NET Framework doesn't provide the right commands. For example, you have to rely on
your own resources if you want to move text to the clipboard. The only way to get it done is to access the low-level
API functions outside the .NET Framework.
$code = @'
Imports Microsoft.VisualBasic
Imports System
Namespace ClipboardAddon
Public Class Utility
Private Declare Function OpenClipboard Lib "user32" (ByVal hwnd As Integer) As Integer
Private Declare Function EmptyClipboard Lib "user32" () As Integer
Private Declare Function CloseClipboard Lib "user32" () As Integer
Private Declare Function SetClipboardData Lib "user32" (ByVal wFormat As Integer, ByVal hMem As Integer) As Integer
Private Declare Function GlobalAlloc Lib "kernel32" (ByVal wFlags As Integer, ByVal dwBytes As Integer) As Integer
Private Declare Function GlobalLock Lib "kernel32" (ByVal hMem As Integer) As Integer
Private Declare Function GlobalUnlock Lib "kernel32" (ByVal hMem As Integer) As Integer
Private Declare Function lstrcpy Lib "kernel32" (ByVal lpString1 As Integer, ByVal lpString2 As String) As Integer
In-Memory Compiling
To compile the source code and make it a type that you can use, feed the source code to Add-Type and specify the
programming language the source code used:
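The call is not reproduced here; presumably it looks like this (the here-string $code contains the Visual Basic source shown above):

```powershell
# Compile the Visual Basic source in-memory;
# -Language tells the compiler which language to use:
Add-Type -TypeDefinition $code -Language VisualBasic
```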
Now, you can derive an object from your new type and call the method CopyToClipboard(). Done!
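A sketch of that call, assuming the class compiled as shown:

```powershell
# Create an instance of the new type and call its instance method:
$obj = New-Object ClipboardAddon.Utility
$obj.CopyToClipboard("Hello")
```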
You might be wondering why, with your custom type, you needed to use New-Object first to get an object, while
MsgBox() in the previous example could be called directly from the type.
CopyToClipboard() is declared in your source code as a dynamic (instance) method, which requires you to first create an
instance of the class, and that's exactly what New-Object does. Then the instance can call the method.
Alternatively, methods can also be static. For example, MsgBox() in the first example is a static method. To call
static methods, you need neither New-Object nor any instances. Static methods are called directly through the class
in which they are defined.
If you would rather use CopyToClipboard() as a static method, all you need to do is to make a slight change to your
source code. Replace this line:
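The affected line is not reproduced here; presumably the change looks like this (the exact signature is assumed; the Shared keyword is what makes a Visual Basic method static):

```vb
' Before (instance method):
Public Function CopyToClipboard(ByVal text As String) As Integer
' After (static method):
Public Shared Function CopyToClipboard(ByVal text As String) As Integer
```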
Once you have compiled your source code, then you can immediately call the method like this:
[ClipboardAddon.Utility]::CopyToClipboard("Hi Everyone!")
DLL Compilation
With Add-Type, you can even compile and generate files. In the previous example, your source code was compiled
in-memory on the fly. What if you wanted to protect your intellectual property somewhat and compile a DLL that
your solution would then load?
Here is how you create your own DLL (make sure the folder c:\powershell exists, or else create it or change the
output path in the command below):
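The commands are not reproduced here; presumably they use the -OutputAssembly parameter of Add-Type, which writes the compilation result to disk instead of keeping it in memory:

```powershell
# Compile the source in $code into a DLL on disk:
Add-Type -TypeDefinition $code -Language VisualBasic `
  -OutputAssembly c:\powershell\extension.dll
```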
After you run these commands, you should find a file called c:\powershell\extension.dll with the compiled content
of your code. If not, try this code in a new PowerShell console. Your experiments with the in-memory compilation
may have interfered.
To load and use your DLL from any PowerShell session, go ahead and use this code:
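A sketch of loading and calling the DLL (this assumes you compiled the static variant of CopyToClipboard()):

```powershell
# Load the compiled assembly into the current session:
Add-Type -Path c:\powershell\extension.dll
# Call the static method directly through the type:
[ClipboardAddon.Utility]::CopyToClipboard("Hello from the DLL")
```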
You can even compile and create console applications and Windows programs that way, although that is an edge
case. To create full applications, you are better off using a dedicated development environment like Visual Studio.